Dataset columns: id (string, 2–8 chars) · url (string, 31–117 chars) · title (string, 1–71 chars) · text (string, 153–118k chars) · topic (string, 4 classes) · section (string, 4–49 chars) · sublist (string, 9 classes)
15186222
https://en.wikipedia.org/wiki/Surface%20weather%20observation
Surface weather observation
Surface weather observations are the fundamental data used for safety as well as climatological reasons to forecast weather and issue warnings worldwide. They can be taken manually by a weather observer, by computer through the use of automated weather stations, or in a hybrid scheme in which weather observers augment otherwise automated weather stations. The ICAO defines the International Standard Atmosphere (ISA), the model of the standard variation of pressure, temperature, density, and viscosity with altitude in the Earth's atmosphere, which is used to reduce a station pressure to sea level pressure. Airport observations can be transmitted worldwide through the use of the METAR observing code. Personal weather stations taking automated observations can transmit their data to the United States mesonet through the Citizen Weather Observer Program (CWOP), to the UK Met Office through its Weather Observations Website (WOW), or internationally through the Weather Underground Internet site. A thirty-year average of a location's weather observations is traditionally used to determine the station's climate. In the US, a network of Cooperative Observers makes a daily record of summary weather and sometimes water level information.

History

Reverend John Campanius Holm is credited with taking the first systematic weather observations in Colonial America. He was a chaplain in the Swedes Fort colony near the mouth of the Delaware River, and he recorded daily observations without instruments during 1644 and 1645. Numerous other accounts of weather events on the East Coast were documented during the 17th century, and President George Washington kept a detailed weather diary during the late 1700s at Mount Vernon, Virginia. The number of routine weather observers increased significantly during the 1800s. In 1807, Dr. B. S. Barton of the University of Pennsylvania asked members of the Linnaean Society of Philadelphia throughout the Union to maintain instrumented weather observing sites in order to establish a climatological history. During the early 1900s, numerous observer stations moved from farms to residential districts of towns, where mail service was available for sending in the observation forms. By 1926, more than 5,000 observing locations existed throughout the U.S., the West Indies, and the Caribbean. In 1939, the Bureau of Aeronautics in the U.S. Navy began to actively develop automated weather stations.

Airports

Surface weather observations have traditionally been taken at airports because of safety concerns during takeoffs and landings. The ICAO defines the International Standard Atmosphere (also known as the ICAO Standard Atmosphere), the model of the standard variation of pressure, temperature, density, and viscosity with altitude in the Earth's atmosphere. This is useful in calibrating instruments and designing aircraft, and it is used to reduce a station's pressure to sea level pressure (SLP), which can then be used on weather maps. In the United States, the FAA mandates the taking of weather observations at larger airports for safety reasons. To help facilitate the purchase of an automated airport weather station, such as ASOS, the FAA allows federal dollars to be used for the installation of certified weather stations at airports. The airport observations are then transmitted worldwide using the METAR observing code. METAR reports typically come from airports or permanent weather observation stations.
Reports are generated once an hour; however, if conditions change significantly, they may be updated in special reports called SPECIs.

Data reported

Surface weather observations can include the following elements:

The station identifier, or location identifier, consists of four characters for METAR observations, with the first representing the region of the world the station lies within. For example, the first letter for areas in and around the Pacific Ocean is P, and for Europe it is E. The second character may represent the country or state the location lies within: for Hawaii the first two letters are "PH", while for Great Britain the first two letters of the station identifier are "EG". Canada and the contiguous United States are an exception, with the single first letters C and K representing those regions, respectively. The final two or three letters normally represent the name of the location or airport.

Visibility is measured in meters for most sites worldwide, except in the United States, where statute miles are reported.

Runway visibility is measured in meters in many locations worldwide, or in feet within the United States.

Temperature is a measure of the kinetic energy of a sample of matter. It is the unique physical property that determines the direction of heat flow between two objects placed in thermal contact: if no heat flow occurs, the two objects have the same temperature; otherwise heat flows from the hotter object to the colder object. Within meteorology, temperature is measured with thermometers exposed to the air but sheltered from direct solar exposure. In most of the world, the degree Celsius scale is used for most temperature measuring purposes; the United States is the last major country in which the degree Fahrenheit scale is used by most lay people, industry, popular meteorology, and government. Despite this, METAR reports from the United States also report the temperature (and dew point, see below) in degrees Celsius.

Dew point is the temperature to which a given parcel of air must be cooled, at constant atmospheric pressure, for water vapor to condense into water; the condensed water is called dew. The dew point is a saturation point. When the dew point temperature falls below freezing it is called the frost point, as the water vapor no longer creates dew but instead creates frost or hoarfrost by deposition. The dew point is associated with relative humidity: a high relative humidity indicates that the dew point is close to the current air temperature, and if the relative humidity is 100%, the dew point is equal to the current temperature. Given a constant dew point, an increase in temperature will lead to a decrease in relative humidity. At a given barometric pressure, independent of temperature, the dew point determines the specific humidity of the air. The dew point is an important statistic for general aviation pilots, as it is used to calculate the likelihood of carburetor icing and fog. When used with the air temperature, a formula can be used to estimate the height of the base of cumuliform, or convective, clouds (a worked sketch follows the METAR example below).

Wind is determined using anemometers and wind vanes, or aerovanes, located at a standard height of 10 meters (33 ft) above ground level (AGL). Average wind speed is measured using a two-minute average in the United States and a ten-minute average elsewhere. Wind direction is measured in degrees, with north represented by 360 degrees and values increasing clockwise from north.
Wind gusts are reported when the wind speed varies by more than 10 knots (5 m/s) between peaks and lulls during the sampling period.

Sea level pressure (SLP) is the pressure at sea level or, when measured at a given elevation on land, the station pressure reduced to sea level assuming an isothermal layer at the station temperature. This is the pressure normally given in weather reports on radio, television, and newspapers or on the Internet. When barometers in the home are set to match the local weather reports, they measure pressure reduced to sea level, not the actual local atmospheric pressure. The reduction to sea level means that the normal range of fluctuations in pressure is the same for everyone, and the pressures considered high or low do not depend on geographical location. This makes isobars on a weather map meaningful and useful tools.

Altimeter setting is a term and quantity used in aviation: the regional or local air pressure at mean sea level (QNH), the pressure setting that calibrates the altimeter so that it reads field elevation when the aircraft is on the ground at the airfield.

Present weather, meaning current restrictions to visibility or the presence of phenomena such as thunder or squalls, is reported in observations to indicate to aviation any possible threats during landings at and takeoffs from airports. Types included in surface weather observations are precipitation, obscurations, and other weather phenomena such as well-developed dust/sand whirls, squalls, tornadic activity, sandstorms, volcanic ash, and dust storms.

Intensity of precipitation is primarily measured for meteorological concerns. However, it can also be of concern to aviation, as heavy precipitation can limit visibility. The intensity of freezing rain likewise helps determine how hazardous it is for pilots to fly near certain locations, since freezing rain is an in-flight hazard that deposits ice on the wings of aircraft, which can be detrimental to flight.

Precipitation amount over the past 1, 3, 6, or 24 hours is of particular interest to meteorologists in verifying forecast amounts of precipitation and determining station climatologies.

Snowfall amount during the past 6 hours is taken for meteorological and climatological concerns. However, it may also be reported hourly using "SNOINCR" remarks to give airfield technicians information on how frequently snow must be plowed from runways and taxiways.

Snow depth is measured once a day for meteorological and climatological concerns. During periods of snowfall, however, it is measured every six hours to determine the amount of recent snowfall.

Example of a METAR surface weather observation:

METAR LBBG 041600Z 12003MPS 090V150 1400 R04/P1500N R22/P1500U +SN BKN022 OVC050 M04/M07 Q1020 NOSIG 9949//91=

Personal weather stations, maintained by citizens rather than government officials, do not use METAR code. Software allows their information to be transmitted to various sites, such as the Weather Underground globally or the CWOP within the United States, where it can be used by the appropriate meteorological organizations either to diagnose real-time conditions or within weather forecast models.
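For illustration, the METAR example above can be decoded with a few regular expressions, and the temperature/dew point pair it contains can be turned into the derived quantities discussed earlier: relative humidity via the Magnus approximation, and the rule-of-thumb convective cloud base of roughly 400 ft per °C of temperature–dew point spread. This is a deliberately minimal, hedged sketch, not a full METAR decoder; the Magnus coefficients shown are one common choice, and the group names are illustrative.

```python
import math
import re

REPORT = ("METAR LBBG 041600Z 12003MPS 090V150 1400 R04/P1500N R22/P1500U "
          "+SN BKN022 OVC050 M04/M07 Q1020 NOSIG")

def parse_metar(report: str) -> dict:
    """Toy decoder for a handful of METAR groups; real reports need a full decoder."""
    out = {}
    # Station identifier followed by the day/hour/minute time group, e.g. "LBBG 041600Z"
    if m := re.search(r"\b([A-Z]{4}) (\d{2})(\d{2})(\d{2})Z\b", report):
        out["station"] = m.group(1)
        out["day"], out["hour"], out["minute"] = map(int, m.group(2, 3, 4))
    # Wind group: 3-digit direction, 2-3 digit speed, unit (m/s or knots)
    if m := re.search(r"\b(\d{3})(\d{2,3})(MPS|KT)\b", report):
        out["wind_dir_deg"], out["wind_speed"] = int(m.group(1)), int(m.group(2))
        out["wind_unit"] = m.group(3)
    # Prevailing visibility in meters: a standalone 4-digit group
    if m := re.search(r" (\d{4}) ", report):
        out["visibility_m"] = int(m.group(1))
    # Temperature/dew point, with "M" marking negative values, e.g. "M04/M07"
    if m := re.search(r"\b(M?\d{2})/(M?\d{2})\b", report):
        to_c = lambda s: -int(s[1:]) if s.startswith("M") else int(s)
        out["temp_c"], out["dewpoint_c"] = to_c(m.group(1)), to_c(m.group(2))
    # Altimeter setting (QNH) in hectopascals, e.g. "Q1020"
    if m := re.search(r"\bQ(\d{4})\b", report):
        out["qnh_hpa"] = int(m.group(1))
    return out

def relative_humidity(t: float, td: float) -> float:
    """Approximate RH (%) from temperature and dew point via the Magnus formula."""
    a, b = 17.625, 243.04  # one common choice of Magnus coefficients
    return 100.0 * math.exp(a * td / (b + td)) / math.exp(a * t / (b + t))

obs = parse_metar(REPORT)
print(obs)  # station LBBG, day 4, 16:00Z, wind 120 deg at 3 MPS, -4/-7 C, Q1020
spread = obs["temp_c"] - obs["dewpoint_c"]  # 3 degC spread in this example
print(f"RH ~ {relative_humidity(obs['temp_c'], obs['dewpoint_c']):.0f}%")
print(f"estimated cumulus base ~ {400 * max(spread, 0)} ft AGL")  # ~400 ft per degC
```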
Use of weather maps

Data collected at land locations and coded in METAR are conveyed worldwide via phone lines or wireless technology. Within many nations' meteorological organizations, these data are then plotted onto a weather map using the station model. A station model is a symbolic illustration showing the weather occurring at a given reporting station. Meteorologists created the station model to plot a number of weather elements in a small space on weather maps. Maps filled with dense station-model plots can be difficult to read, but they allow meteorologists, pilots, and mariners to see important weather patterns. Weather maps are used to quickly display the analysis of various meteorological quantities at various levels of the atmosphere, in this case the surface layer. Maps containing station models aid in the drawing of isotherms, which makes temperature gradients easier to identify and can help in locating weather fronts. Two-dimensional streamlines based on wind speeds show areas of convergence and divergence in the wind field, which are helpful in determining the location of features within the wind pattern. A popular type of surface weather map is the surface weather analysis, which plots isobars to depict areas of high pressure and low pressure.

Ship and buoy reports

For over a century, reports from the world's oceans have been received in real time, both for safety reasons and to help with general weather forecasting. The reports are coded using the synoptic code and relayed via radio or satellite to weather organizations worldwide. Buoy reports are automated, and each buoy is maintained by the country that moored it in that location. Larger moored buoys are used near shore, while smaller drifting buoys are used farther out at sea. Because of the importance of reports from the surface of the ocean, the voluntary observing ship program, known as VOS, was set up to train crews in taking weather observations while at sea, and also to calibrate the weather sensors used aboard ships, such as barometers and thermometers, when they arrive in port. The Beaufort scale is still generally used by manual observers at sea to estimate wind speed. Ships with anemometers have difficulty determining higher wind speeds because increasingly high seas block the instruments.

Use in establishing the climate of a location

Climate (from Ancient Greek klima) is commonly defined as the weather averaged over a long period of time. The standard averaging period is 30 years for an individual location, but other periods may be used. Climate includes statistics other than the average, such as the magnitudes of day-to-day or year-to-year variations. The Intergovernmental Panel on Climate Change (IPCC) glossary definition is: "Climate in a narrow sense is usually defined as the 'average weather', or more rigorously, as the statistical description in terms of the mean and variability of relevant quantities over a period of time ranging from months to thousands or millions of years. The classical period is 30 years, as defined by the World Meteorological Organization (WMO). These quantities are most often surface variables such as temperature, precipitation, and wind. Climate in a wider sense is the state, including a statistical description, of the climate system." The main difference between climate and everyday weather is best summarized by the popular phrase "Climate is what you expect, weather is what you get." Over historic time spans, a number of static variables determine climate, including latitude, altitude, the proportion of land to water, and proximity to oceans and mountains. The degree of vegetation coverage affects solar heat absorption, water retention, and rainfall on a regional level.
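The thirty-year averaging convention described above maps directly onto a simple aggregation. The following is a hedged sketch using pandas; the file name station_daily.csv and its column names are hypothetical stand-ins for a station's daily observation records.

```python
import pandas as pd

# Hypothetical daily observation file with columns: date, tmax_c, tmin_c, precip_mm
obs = pd.read_csv("station_daily.csv", parse_dates=["date"])

# Restrict to a conventional 30-year normals period (here 1991-2020)
period = obs[(obs["date"].dt.year >= 1991) & (obs["date"].dt.year <= 2020)]

# Monthly climate normals: mean daily temperatures and mean monthly precipitation
normals = (
    period
    .assign(month=period["date"].dt.month, year=period["date"].dt.year)
    .groupby(["year", "month"])
    .agg(tmax_c=("tmax_c", "mean"), tmin_c=("tmin_c", "mean"),
         precip_mm=("precip_mm", "sum"))
    .groupby("month")
    .mean()  # average the 30 monthly values for each calendar month
)
print(normals.round(1))
```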
Physical sciences
Meteorology: General
Earth science
15190098
https://en.wikipedia.org/wiki/Livestock%20guardian%20dog
Livestock guardian dog
A livestock guardian dog (LGD) is a dog type bred for the purpose of protecting livestock from predators. Livestock guardian dogs stay with the group of animals they protect as full-time members of the flock or herd. Their ability to guard their herd is mainly instinctive, as the dog is bonded to the herd from an early age. Unlike herding dogs, which control the movement of livestock, LGDs blend in with them, watching for intruders within the flock. The mere presence of a guardian dog is usually enough to ward off some predators, and LGDs confront predators by vocal intimidation, barking, and displaying very aggressive behavior. The dog may attack or fight with a predator if it cannot drive it away.

History

Herding dogs originated in Western Asia, on the territory of modern Iran and Iraqi Kurdistan, in association with the beginnings of livestock breeding. Domestication of sheep and goats began there in the 8th–7th millennia BC. Shepherding was then a difficult job: the first shepherds did not have horses and managed livestock on foot, as mules, horses, and donkeys were not yet fully domesticated and obedient enough. Dogs that had previously helped humans hunt became assistants in farming. The main task of dogs in this early period was to protect herds from a variety of wild predators, which were very numerous at the time. This function predetermined the type of herding dog: they had to be strong, vicious, courageous, and decisive, able to stand alone against a large predator and, most importantly, ready to defend their herd. The ancestors of livestock guarding dogs can be traced back six thousand years, with archaeological finds of joint remains of sheep and dogs dated to 3685 BC. The land of their origin is considered to be the territories of modern Turkey, Iraqi Kurdistan, and Syria. Livestock dogs are mentioned in the Old Testament and in the works of Cato the Elder and Varro, and images of them are found on works of art created more than two thousand years ago, their use being recorded as early as 150 BC in Rome. Both Aristotle's History of Animals and Virgil's Georgics mention the use of livestock guardian dogs by the Molossians in the ancient region of Epirus.

Purpose and working features

Livestock guardian dogs specialise in the protection of small farm animals, mainly sheep. Unlike herds of cattle or horses, which are able to withstand even large predators on their own, herds of sheep and goats need the protection that LGDs provide. On large farms, sheep are managed mainly using the distant-pasture method: in winter the flocks are kept in low-lying pastures or in paddocks, and in summer they are moved to remote regions, often in the mountains, where there is enough grass during the summer drought. LGDs guard livestock on pastures throughout the year, and also protect sheep from predator attacks during seasonal migrations. The dogs are introduced to livestock as puppies so they "imprint" on the animals. Experts recommend that the pups begin living with the herd at 4 to 5 weeks of age. This imprinting is thought to be largely olfactory and occurs between 3 and 16 weeks of age. Training requires regular daily handling and management, preferably from birth. A guardian dog is not considered reliable until it is at least 2 years of age; until then, supervision, guidance, and correction are needed to teach the dog the skills and rules it needs to do its job. Having older dogs that assist in training younger dogs streamlines this process considerably.
Trials are underway to protect penguins with LGDs. In Namibia in southwestern Africa, Anatolians are used to guard goat herds from cheetahs, and are typically imprinted between 7 and 8 weeks of age. Before the use of dogs was introduced, impoverished Namibian farmers often came into conflict with predatory cheetahs; now, Anatolians usually are able to drive off cheetahs with their barking and displays of aggression. The experiments of Lorna and Raymond Coppinger and the studies of other specialists have shown the effectiveness of protecting flocks with the help of dogs. After the reintroduction of wolves, which had been eliminated in the United States by the 1930s, American farmers were losing about a million sheep annually to wolf attacks. Seventy-six farmers took part in the Coppingers' program, which introduced European livestock guardian dogs into US sheep breeding (the project used Anatolian Shepherd Dogs). On farms that, in the absence of dogs, had suffered up to two hundred wolf attacks per year, not a single sheep was lost under the protection of LGDs. At the same time, none of the legally protected predators was killed: the dogs simply did not allow them to approach the herd. For the protection of flocks, on average, five dogs are used per 350 head of sheep, but the need for LGDs depends on many conditions, such as the landscape and size of the territory, the vegetation available in the pasture area, the species, breed, and number of animals in the herd, the presence of a shepherd, the presence or absence of fences and other means of protection, the number and species composition of predators, as well as the breed, age, health status, and experience of the LGDs. For example, sheep breeders of the Rocky Mountains in the United States breed predominantly white-headed Rambouillet sheep with a strong herd instinct. During the day the sheep scatter over a pasture of about one square mile, and at night they gather into a denser flock. In an ordinary flock of a thousand ewes and their lambs, two to five guard dogs live constantly. The number of dogs in a herd can change with deaths or the birth of puppies, and when the herds gather together for the winter, some dogs can move to another herd and spend the next summer guarding other sheep. When large predators appear near pastures, the number of dogs in the flock usually increases. Protection is more reliable if the herd is guarded by dogs of different breeds, for example powerful Pyrenean mastiffs, which prefer to lie close to the livestock, in cooperation with more mobile Maremmas or Kangals, which control the perimeter of the pasture.

Traits

Temperament and work ethic

The three qualities most sought after in LGDs are trustworthiness, attentiveness, and protectiveness: trustworthy in that they do not roam off and are not aggressive with the livestock, attentive in that they are situationally aware of threats from predators, and protective in that they attempt to drive off predators. Dogs, being social creatures with differing personalities, take on different roles with the herd and among themselves; most stick close to the livestock, others tend to follow the shepherd or rancher when one is present, and some drift away from the livestock. These differing roles are often complementary in terms of protecting livestock, and experienced ranchers and shepherds sometimes encourage these differences by adjustments in socialization technique so as to increase the effectiveness of their group of dogs in meeting specific predator threats.
LGDs that follow the livestock most closely ensure that a guard dog is on hand if a predator attacks, while LGDs that patrol at the edges of a flock or herd are in a position to keep would-be attackers at a safe distance from the livestock. Dogs that are more attentive tend to alert those that are more passive, but perhaps also more trustworthy or less aggressive with the livestock. At least two dogs may be placed with a flock or herd, depending on its size, the type of predators, their number, and the intensity of predation. If predators are scarce, one dog may be adequate, though most operations usually require at least two; large operations (particularly range operations) and heavy predator loads require more dogs. Male and female LGDs have proved to be equally effective in protecting livestock. While LGDs have been known to fight to the death with predators, in most cases predator attacks are prevented by a display of aggressiveness. LGDs are known to drive off predators they would be physically no match for, such as bears and even lions. With the reintroduction of predators into natural habitats in Europe and North America, environmentalists have come to appreciate LGDs because they allow sheep and cattle farming to coexist with predators in the same or nearby habitats. Unlike trapping and poisoning, LGDs seldom kill predators; instead, their aggressive behaviors tend to condition predators to seek unguarded (thus, non-farm-animal) prey. For instance, in Italy's Gran Sasso National Park, where LGDs and wolves have coexisted for centuries, older, more experienced wolves seem to "know" the LGDs and leave their flocks alone.

Physical traits

LGDs are large, powerful dogs, although smaller dogs can drive wild animals away from the herd just as effectively. Large size provides guardian dogs with a number of advantages: they retain heat longer, carry more fat reserves and can go without food for longer, are less likely to suffer bone fractures, and tolerate illnesses better. Their stride is longer, so they are more efficient over long distances. However, dogs that are too large suffer more from the heat, so the largest dogs are used exclusively in northern regions and in mountain pastures, while livestock guardian dogs working with herds in hot areas are lighter in bone and shorter. All LGDs have similar physical traits: a dense water-repellent coat, a strong build, and an independent disposition. Differences in appearance reflect the peculiarities of the climate in which these dogs live and work. Differences in colour are determined by local traditions, as puppies of the locally typical colour were given preference for breeding in different regions. Rigg notes that the color of the dogs is often chosen according to the main color of the livestock: in flocks of white sheep the dogs are white, while with coloured sheep, goats, or yaks the dogs are usually grey or brown. It is assumed that animals are calmer in the presence of dogs of a similar color. In addition, a dog whose color matches that of the herd runs less risk of being accidentally shot when wolves are being hunted.

Livestock guardian dogs in the modern world

LGDs are generally large, independent, and protective, which can make them less than ideal for urban or even suburban living. Nonetheless, despite their size, they can be gentle, make good companion dogs, and are often protective towards children.
If introduced to a family as a pup, most LGDs are as protective of their family as a working guard dog is of its flock. In fact, in some communities where LGDs are a tradition, the runt of a litter was often kept or given as a household pet, or simply kept as a village dog without a single owner. For various reasons, including the decline in livestock numbers and the transition to other methods of livestock breeding and management, the number of LGDs has critically declined in many regions. Instead of serving their original purpose, livestock guardian dogs are more often used to guard property, bred as show dogs with a spectacular appearance, and sometimes used in the dog-fighting business. The breed standards used by canine organisations in purebred breeding, and their selection processes, are mainly focused on physical characteristics rather than on the dogs' ability to protect the herd. In the absence of a traditional guarding purpose and the selection associated with it, hereditary guarding skills and key working qualities of LGDs are lost. Some breeds of LGDs are kept mainly as pets (the Pyrenean Mountain Dog), some working breeds (the Karakachan dog in Bulgaria, the Portuguese LGD breeds) are on the verge of extinction, and others (the Kuchi dog in Afghanistan, the Mazandarani saga dog in Iran) are considered completely lost. Nonetheless, livestock breeding remains an important part of agriculture, and livestock guardian dogs are still the most efficient and sustainable way of protecting herds. LGDs invariably remain an integral part of the industry in places of traditional sheep breeding where large carnivores have survived, such as the Carpathian and Balkan regions, central Italy, the Iberian Peninsula, and the mountain regions of the Middle East and Central Asia. In Western and Northern Europe, where large predators were reintroduced at the end of the 20th century, shepherds are going back to using LGDs as the only way to protect farm animals that is not lethal to legally protected predators. Thanks to this advantage, LGDs are now used to protect herds in the US, Scandinavia, and a number of African countries, despite the absence of such a tradition in these regions. The use of livestock guardian dogs for the protection of herds reduces losses of animals by between 11% and 100%, without requiring significant investment, special technologies, or government assistance. Attempts to return LGDs to agriculture are supported by government programs and public organisations in a number of countries.

List of breeds

Many breeds of LGD are little known outside the regions where they are still worked. Nevertheless, some breeds are known to display traits advantageous to guarding livestock. Some specialist LGD breeds are listed below.

Extant breeds

List of extinct breeds
Biology and health sciences
Dogs
Animals
16921463
https://en.wikipedia.org/wiki/Leaching%20%28pedology%29
Leaching (pedology)
In pedology, leaching is the removal of soluble materials from one zone in the soil and their movement to another via water moving through the profile. It is a mechanism of soil formation distinct from the soil-forming process of eluviation, which is the loss of mineral and organic colloids. Leached and eluviated materials tend to be lost from topsoil and deposited in subsoil. A soil horizon accumulating leached and eluviated materials is referred to as a zone of illuviation. Laterite soil, which develops in regions with high temperature and heavy rainfall, is an example of this process in action.
Physical sciences
Soil science
Earth science
11500144
https://en.wikipedia.org/wiki/Grand%20design%20spiral%20galaxy
Grand design spiral galaxy
A grand design spiral galaxy is a type of spiral galaxy with prominent and well-defined spiral arms, as opposed to multi-arm and flocculent spirals, which have subtler structural features. The spiral arms of a grand design galaxy extend clearly around the galaxy through many radians and can be observed over a large fraction of the galaxy's radius. As of 2002, approximately 10 percent of all currently known spiral galaxies were classified as grand design spirals, including M51, M74, M81, M83, and M101.

Origin of structure

Density wave theory, first suggested by Chia-Chiao Lin and Frank Shu in 1964, is the preferred explanation for the well-defined structure of grand design spirals. The term "grand design" was not used in that work, but appeared in the 1966 continuation paper; Lin (along with Yuan and Shu) is usually credited with coining the term. According to density wave theory, the spiral arms are created inside density waves that rotate around the galaxy at a speed different from that of the stars in the galaxy's disk. Stars and gas clump in these dense regions because of gravitational attraction toward the dense material, though their location in the spiral arm may not be permanent. As they approach a spiral arm, they are pulled toward the dense material by the force of gravity, and as they travel through the arm, the same gravitational pull slows their exit. This causes the gas in particular to clump in the dense regions, which in turn causes gas clouds to collapse, resulting in star formation.
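One consequence of the density-wave picture is a corotation radius where stars and the spiral pattern move together: inside it, stars overtake the arms; outside it, the arms overtake the stars. The small sketch below illustrates this under assumed, Milky-Way-like numbers (a flat rotation curve of 220 km/s and a pattern speed of 25 km/s per kpc); these values are illustrative, not measurements of any particular galaxy.

```python
# Stars orbit with angular speed Omega(R) = V/R (flat rotation curve),
# while the spiral pattern rotates rigidly at Omega_p.
# They corotate where Omega(R_c) = Omega_p, i.e. R_c = V / Omega_p.
V = 220.0        # circular speed, km/s (assumed flat rotation curve)
OMEGA_P = 25.0   # pattern speed, km/s per kpc (illustrative value)

R_C = V / OMEGA_P
print(f"corotation radius: {R_C:.1f} kpc")  # 8.8 kpc

for r_kpc in (4.0, R_C, 14.0):
    omega_star = V / r_kpc
    if abs(omega_star - OMEGA_P) < 1e-9:
        relation = "move with the pattern"    # at corotation
    elif omega_star > OMEGA_P:
        relation = "overtake the pattern"     # inside corotation
    else:
        relation = "lag behind the pattern"   # outside corotation
    print(f"R = {r_kpc:4.1f} kpc: stars {relation}")
```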
Physical sciences
Galaxy classification
Astronomy
11502909
https://en.wikipedia.org/wiki/Ribbon%20diagram
Ribbon diagram
Ribbon diagrams, also known as Richardson diagrams, are 3D schematic representations of protein structure and are one of the most common methods of protein depiction used today. The ribbon depicts the general course and organisation of the protein backbone in 3D and serves as a visual framework on which to hang details of the full atomic structure, such as the balls for the oxygen atoms attached to myoglobin's active site in the adjacent figure. Ribbon diagrams are generated by interpolating a smooth curve through the polypeptide backbone. α-helices are shown as coiled ribbons or thick tubes, β-sheets as arrows, and non-repetitive coils or loops as lines or thin tubes. The direction of the polypeptide chain is shown locally by the arrows, and may be indicated overall by a colour ramp along the length of the ribbon. Ribbon diagrams are simple yet powerful, expressing the visual basics of a molecular structure (twist, fold and unfold). The method has successfully portrayed the overall organization of protein structures, reflecting their three-dimensional nature and allowing better understanding of these complex objects both by expert structural biologists and by other scientists, students, and the general public.

History

The first ribbon diagrams, hand-drawn by Jane S. Richardson in 1980 (influenced by earlier individual illustrations), were the first schematics of 3D protein structure to be produced systematically. They were created to illustrate a classification of protein structures for an article in Advances in Protein Chemistry (now available in annotated form on-line at Anatax). The drawings were outlined in pen on tracing paper over a printout of a Cα trace of the atomic coordinates and shaded with colored pencil or pastels; they preserved positions, smoothed the backbone path, and incorporated small local shifts to disambiguate the visual appearance. As well as the triose phosphate isomerase ribbon drawing at the right, other hand-drawn examples depicted prealbumin, flavodoxin, and Cu,Zn superoxide dismutase. In 1982, Arthur M. Lesk and co-workers first enabled the automatic generation of ribbon diagrams through a computational implementation that uses Protein Data Bank files as input. This conceptually simple algorithm fit cubic polynomial B-spline curves to the peptide planes. Most modern graphics systems provide either B-splines or Hermite splines as a basic drawing primitive. One type of spline implementation passes through each Cα guide point, producing an exact but choppy curve. Both hand-drawn and most computer ribbons (such as those shown here) are instead smoothed over about four successive guide points (usually the peptide midpoints) to produce a more visually pleasing and understandable representation. To give the right radius for helical spirals while preserving smooth β-strands, the splines can be modified by offsets proportional to local curvature, an approach first developed by Mike Carson for his Ribbons program and later adopted by other molecular graphics software, such as the open-source Mage program for kinemage graphics that produced the ribbon image at top right (other examples: 1XK8 trimer and DNA polymerase). Since their inception, and continuing in the present, ribbon diagrams have been the single most common representation of protein structure and a common choice of cover image for a journal or textbook.
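The guide-point smoothing and spline fitting described above can be sketched in a few lines of Python. This is a minimal illustration with NumPy/SciPy on made-up coordinates, not the algorithm of Molscript or any other particular program; the neighbor-averaging step stands in for the roughly four-point smoothing mentioned above.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Made-up C-alpha coordinates (N x 3); in practice these come from a PDB file
ca = np.array([[0.0, 0.0, 0.0], [1.5, 2.2, 0.3], [3.1, 2.0, 1.1],
               [4.6, 0.4, 1.8], [6.0, 0.9, 3.0], [7.4, 2.5, 3.4]])

# Smooth the guide points by averaging each with its neighbors,
# echoing the multi-point smoothing used by hand-drawn and computer ribbons
smoothed = ca.copy()
smoothed[1:-1] = (ca[:-2] + ca[1:-1] + ca[2:]) / 3.0

# Fit a parametric cubic B-spline through the smoothed guide points
tck, _ = splprep(smoothed.T, s=0, k=3)
u = np.linspace(0.0, 1.0, 200)
curve = np.array(splev(u, tck)).T   # 200 points along the ribbon axis
print(curve.shape)  # (200, 3)
```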
Current computer programs

One popular program used for drawing ribbon diagrams is Molscript. Molscript uses Hermite splines to create coordinates for coils, turns, strands, and helices; the curve passes through all of its control points (the Cα atoms), guided by direction vectors. The program was built on traditional molecular graphics by Arthur M. Lesk, Karl Hardman, and John Priestle. Jmol is an open-source Java-based viewer for browsing molecular structures on the web; it includes a simplified "cartoon" version of ribbons. Other graphics programs, such as DeepView (example: urease) and MolMol (example: SH2 domain), also produce ribbon images. KiNG is the Java-based successor to Mage (examples: α-hemolysin top view and side view). UCSF Chimera is a powerful molecular modeling program that also includes visualizations such as ribbons, notable especially for the ability to combine them with contoured shapes from cryo-electron microscopy data. PyMOL, by Warren DeLano, is a popular and flexible molecular graphics program (based on Python) that operates in interactive mode and also produces presentation-quality 2D images of ribbon diagrams and many other representations.

Features
Physical sciences
Substance
Chemistry
3320853
https://en.wikipedia.org/wiki/Chemical%20process
Chemical process
In a scientific sense, a chemical process is a method or means of somehow changing one or more chemicals or chemical compounds. Such a chemical process can occur by itself or be caused by an outside force, and involves a chemical reaction of some sort. In an engineering sense, a chemical process is a method intended to be used in manufacturing or on an industrial scale (see Industrial process) to change the composition of chemicals or materials, usually using technology similar or related to that used in chemical plants or the chemical industry. Neither of these definitions is exact, in the sense that one cannot always tell definitively what is a chemical process and what is not; they are practical definitions, and there is significant overlap between the two. Because of this inexactness, chemists and other scientists use the term "chemical process" only in a general sense or in the engineering sense; in the process-engineering sense, however, the term is used extensively. The rest of this article covers the engineering type of chemical process.

Although this type of chemical process may sometimes involve only one step, often multiple steps, referred to as unit operations, are involved. In a plant, each of the unit operations commonly occurs in an individual vessel or section of the plant called a unit. Often one or more chemical reactions are involved, but other ways of changing chemical or material composition may be used, such as mixing or separation processes. The process steps may be sequential in time or sequential in space along a stream of flowing or moving material; see Chemical plant. For a given amount of feed (input) material or product (output) material, an expected amount of material can be determined at key steps in the process from empirical data and material balance calculations. These amounts can be scaled up or down to suit the desired capacity or operation of a particular chemical plant built for such a process; more than one chemical plant may use the same chemical process, each plant perhaps at a differently scaled capacity. Chemical processes like distillation and crystallization go back to alchemy in Alexandria, Egypt. Such chemical processes can be illustrated generally as block flow diagrams or in more detail as process flow diagrams. Block flow diagrams show the units as blocks and the streams flowing between them as connecting lines with arrowheads to show the direction of flow. In addition to chemical plants for producing chemicals, chemical processes with similar technology and equipment are also used in oil refining and other refineries, natural gas processing, polymer and pharmaceutical manufacturing, food processing, and water and wastewater treatment.

Unit processing in chemical processes

Unit processing is the basic processing in chemical engineering; together with unit operations it forms the main principle of the varied chemical industries. Each type of unit process follows the same chemical laws, much as each type of unit operation follows the same physical laws.
Chemical engineering unit processing consists of the following important processes:

Fractionation
Decontamination
Distillation
Filtration
Oxidation
Reduction
Refining / Refining (metallurgy)
Hydrogenation
Dehydrogenation
Hydrolysis
Hydration
Dehydration
Halogenation
Nitrification
Sulfonation
Amination
Alkylation
Dealkylation
Esterification
Polymerization
Polycondensation
Purification
Catalysis

Academic research institutes in process chemistry

Institute of Process Research & Development, University of Leeds
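The material balance calculations mentioned above reduce, for steady-state units, to a small linear system. Below is a hedged sketch for a single binary split (for example, a distillation unit): an overall balance F = D + B and a light-component balance F·z = D·xD + B·xB, solved for the two product streams. The feed rate and composition numbers are made up for illustration, not data for any real plant.

```python
import numpy as np

# Steady-state binary split: F = D + B (overall balance) and
# F*z = D*xD + B*xB (light-component balance), solved for D and B.
F = 100.0   # feed rate, kg/h (illustrative)
z = 0.40    # light-component fraction in the feed
xD = 0.95   # light fraction in the distillate (assumed product spec)
xB = 0.05   # light fraction in the bottoms (assumed product spec)

A = np.array([[1.0, 1.0],
              [xD,  xB]])
b = np.array([F, F * z])
D, B = np.linalg.solve(A, b)
print(f"distillate: {D:.1f} kg/h, bottoms: {B:.1f} kg/h")
# distillate: 38.9 kg/h, bottoms: 61.1 kg/h

# Scale-up: the same split at 2.5x plant capacity, as described in the text
print(f"scaled: {2.5 * D:.1f} and {2.5 * B:.1f} kg/h")
```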
Physical sciences
Chemical engineering
Chemistry
3322182
https://en.wikipedia.org/wiki/Roystonea
Roystonea
Roystonea is a genus of eleven species of monoecious palms, native to the Neotropics in the Caribbean, the adjacent coasts of Florida in the United States, Mexico, Central America, and northern South America. Commonly known as the royal palms, the genus was named after Roy Stone, a U.S. Army engineer. It contains some of the most recognizable and commonly cultivated palms of tropical and subtropical regions.

Description

Roystonea is a genus of large, unarmed, single-stemmed palms with pinnate leaves. The large stature and striking appearance of a Roystonea palm make it a notable aspect of the landscape. The stems, which were compared to stone columns by Louis and Elizabeth Agassiz in 1868, are smooth and columnar, although the trunks of R. altissima and R. maisiana are more slender than those of typical royal palms. Stems are often swollen and bulging along portions of their length, which may reflect years in which growing conditions were better or worse than average. Leaf scars are often prominent along the stem, especially in young, rapidly growing individuals. Stem color ranges from gray-white to gray-brown, except in R. violacea, which has violet-brown or mauve stems. The royal palm, R. oleracea, reaches the greatest heights in the genus, while most species are somewhat shorter. The largest royal palm known, 42.4 m tall, stands in the Floresta Estadual Edmundo Navarro de Andrade in Rio Claro, São Paulo, Brazil, and was discovered by Vincent Ferh and Mauro Galetti.

Roystonea leaves consist of a sheathing leaf base, a petiole, and a rachis. The leaf base forms a distinctive green sheath around the uppermost portion of the trunk; known as the crownshaft, this sheath extends down the trunk. The petiole connects the leaf base with the rachis. The American botanist Scott Zona reported petiole lengths for only three of the 10 species. The rachis is pinnately divided. The leaf segments are shortest in R. altissima and longest in R. lenis, and they are arranged in two or three planes along the rachis. Many authors have reported that the leaves of R. oleracea are arranged in a single plane, but Zona reported that this is not the case. These palms have the ability to easily release their leaves in strong winds, a supposed adaptation serving to prevent toppling during hurricanes. Inflorescences occur beneath the crownshaft, emerging from a narrow, horn-shaped bract. The flowers, borne on branched panicles, are usually white and unisexual, with flowers of both sexes in the same inflorescence. The fruit is an oblong or globose drupe, deep purple when ripe. Some species so closely resemble one another that scientific differentiation relies on details of the inflorescence: flower size, colour, etc.

Taxonomy

Roystonea is placed in the subfamily Arecoideae and the tribe Roystoneae, which contains only Roystonea. The placement of Roystonea within the Arecoideae is uncertain; a phylogeny based on plastid DNA failed to resolve the position of the genus within the subfamily. As of 2008, there appeared to be no molecular phylogenetic studies of Roystonea. One species is known only from two fossilized flowers preserved in Dominican amber, described in 2002.

Species

Accepted species:

Distribution

Roystonea has a circum-Caribbean distribution, ranging from southern Florida in the north to southern Mexico, Honduras, and Nicaragua in the west, and Venezuela and Colombia in the south.
Species are found throughout the Caribbean, although only Jamaica and Hispaniola (with two native species) and Cuba (with five native species) have more than one native species. A few species are planted in Tunisia as well.

Uses

Royal palms are widely planted for decorative purposes throughout their native region and elsewhere in the tropics and subtropics. Royal palms are very fond of water and thrive on supplemental irrigation; they also do better in soil rich in humus. Though mainly a decorative plant, royal palms have some minor agricultural uses. The heart of the palm is used to make salad in some parts of the Caribbean, and the seeds can be used as a substitute for coffee beans. Royal palm seeds were widely used in Cuba to feed pigs at least up to the 1940s and 1950s. The meat of pigs raised on royal palm seeds was said to be the very best, and the lard obtained from pigs fattened on these seeds was said to exhibit a grainy texture and, by inference, to be the best lard to consume. The seeds were generally obtained by men who specialized in climbing the royal palms using a set of two ropes looped around the stem, with two loops around the climber's legs to support him. Once the climber reached the seed clumps, he would tie the clumps that were mature, cut them, and let them down by a rope supported from other seed clumps.
Biology and health sciences
Arecales (inc. Palms)
Plants
3323565
https://en.wikipedia.org/wiki/Cauchy%20stress%20tensor
Cauchy stress tensor
In continuum mechanics, the Cauchy stress tensor (symbol $\boldsymbol{\sigma}$, named after Augustin-Louis Cauchy), also called the true stress tensor or simply the stress tensor, completely defines the state of stress at a point inside a material in the deformed state, placement, or configuration. The second-order tensor consists of nine components $\sigma_{ij}$ and relates a unit-length direction vector $\mathbf{e}$ to the traction vector $\mathbf{T}^{(\mathbf{e})}$ across an imaginary surface perpendicular to $\mathbf{e}$:

$$\mathbf{T}^{(\mathbf{e})} = \mathbf{e} \cdot \boldsymbol{\sigma}, \qquad T^{(\mathbf{e})}_j = \sigma_{ij} e_i.$$

The SI base units of both the stress tensor and the traction vector are newtons per square metre (N/m²) or pascals (Pa), corresponding to the stress scalar; the unit vector is dimensionless. The Cauchy stress tensor obeys the tensor transformation law under a change in the system of coordinates. A graphical representation of this transformation law is Mohr's circle for stress.

The Cauchy stress tensor is used for the stress analysis of material bodies experiencing small deformations: it is a central concept in the linear theory of elasticity. For large deformations, also called finite deformations, other measures of stress are required, such as the Piola–Kirchhoff stress tensor, the Biot stress tensor, and the Kirchhoff stress tensor.

According to the principle of conservation of linear momentum, if the continuum body is in static equilibrium it can be demonstrated that the components of the Cauchy stress tensor at every material point in the body satisfy the equilibrium equations (Cauchy's equations of motion for zero acceleration). At the same time, according to the principle of conservation of angular momentum, equilibrium requires that the summation of moments with respect to an arbitrary point is zero, which leads to the conclusion that the stress tensor is symmetric, thus having only six independent stress components instead of the original nine. However, in the presence of couple-stresses, i.e. moments per unit volume, the stress tensor is non-symmetric. This is also the case when the Knudsen number is close to one ($K_n \to 1$), or when the continuum is a non-Newtonian fluid, which can lead to rotationally non-invariant fluids such as polymers.

There are certain invariants associated with the stress tensor whose values do not depend upon the coordinate system chosen or the area element upon which the stress tensor operates. These are the three eigenvalues of the stress tensor, which are called the principal stresses.

Euler–Cauchy stress principle – stress vector

The Euler–Cauchy stress principle states that upon any surface (real or imaginary) that divides the body, the action of one part of the body on the other is equivalent (equipollent) to the system of distributed forces and couples on the surface dividing the body, and it is represented by a field $\mathbf{T}^{(\mathbf{n})}$, called the traction vector, defined on the surface and assumed to depend continuously on the surface's unit normal vector $\mathbf{n}$.

To formulate the Euler–Cauchy stress principle, consider an imaginary surface $S$ passing through an internal material point $P$ dividing the continuous body into two segments, as seen in Figure 2.1a or 2.1b (one may use either the cutting-plane diagram or the diagram with the arbitrary volume inside the continuum enclosed by the surface $S$). Following the classical dynamics of Newton and Euler, the motion of a material body is produced by the action of externally applied forces, which are assumed to be of two kinds: surface forces $\mathbf{F}$ and body forces $\mathbf{b}$.
Thus, the total force $\mathcal{F}$ applied to a body or to a portion of the body can be expressed as:

$$\mathcal{F} = \int_S \mathbf{T}^{(\mathbf{n})}\, dS + \int_V \mathbf{b}\, dV.$$

Only surface forces will be discussed in this article, as they are relevant to the Cauchy stress tensor.

When the body is subjected to external surface forces or contact forces $\mathbf{F}$, following Euler's equations of motion, internal contact forces and moments are transmitted from point to point in the body and from one segment to the other through the dividing surface $S$, due to the mechanical contact of one portion of the continuum onto the other (Figures 2.1a and 2.1b). On an element of area $\Delta S$ containing $P$, with normal vector $\mathbf{n}$, the force distribution is equipollent to a contact force $\Delta \mathbf{F}$ exerted at point P and a surface moment $\Delta \mathbf{M}$. In particular, the contact force is given by

$$\Delta \mathbf{F} = \mathbf{T}^{(\mathbf{n})}\, \Delta S,$$

where $\mathbf{T}^{(\mathbf{n})}$ is the mean surface traction.

Cauchy's stress principle asserts that as $\Delta S$ becomes very small and tends to zero, the ratio $\Delta \mathbf{F}/\Delta S$ becomes $d\mathbf{F}/dS$ and the couple-stress vector $\Delta \mathbf{M}/\Delta S$ vanishes. In specific fields of continuum mechanics the couple stress is assumed not to vanish; however, classical branches of continuum mechanics address non-polar materials, which do not consider couple stresses and body moments.

The resultant vector $d\mathbf{F}/dS$ is defined as the surface traction, also called the stress vector, traction, or traction vector, given at the point $P$ associated with a plane with a normal vector $\mathbf{n}$ by:

$$T^{(\mathbf{n})}_i = \lim_{\Delta S \to 0} \frac{\Delta F_i}{\Delta S} = \frac{dF_i}{dS}.$$

This equation means that the stress vector depends on its location in the body and the orientation of the plane on which it is acting. This implies that the balancing action of internal contact forces generates a contact force density, or Cauchy traction field, that represents a distribution of internal contact forces throughout the volume of the body in a particular configuration of the body at a given time $t$. It is not a vector field because it depends not only on the position $\mathbf{x}$ of a particular material point, but also on the local orientation of the surface element as defined by its normal vector $\mathbf{n}$.

Depending on the orientation of the plane under consideration, the stress vector may not necessarily be perpendicular to that plane, i.e. parallel to $\mathbf{n}$, and can be resolved into two components (Figure 2.1c): one normal to the plane, called normal stress,

$$\sigma_\mathrm{n} = \lim_{\Delta S \to 0} \frac{\Delta F_\mathrm{n}}{\Delta S} = \frac{dF_\mathrm{n}}{dS},$$

where $dF_\mathrm{n}$ is the normal component of the force $d\mathbf{F}$ to the differential area $dS$; and the other parallel to this plane, called the shear stress,

$$\tau = \lim_{\Delta S \to 0} \frac{\Delta F_\mathrm{s}}{\Delta S} = \frac{dF_\mathrm{s}}{dS},$$

where $dF_\mathrm{s}$ is the tangential component of the force to the differential surface area $dS$. The shear stress can be further decomposed into two mutually perpendicular vectors.

Cauchy's postulate

According to the Cauchy postulate, the stress vector $\mathbf{T}^{(\mathbf{n})}$ remains unchanged for all surfaces passing through the point $P$ and having the same normal vector $\mathbf{n}$ at $P$, i.e., having a common tangent at $P$. This means that the stress vector is a function of the normal vector $\mathbf{n}$ only, and is not influenced by the curvature of the internal surfaces.

Cauchy's fundamental lemma

A consequence of Cauchy's postulate is Cauchy's fundamental lemma, also called the Cauchy reciprocal theorem, which states that the stress vectors acting on opposite sides of the same surface are equal in magnitude and opposite in direction. Cauchy's fundamental lemma is equivalent to Newton's third law of motion of action and reaction, and is expressed as

$$\mathbf{T}^{(-\mathbf{n})} = -\mathbf{T}^{(\mathbf{n})}.$$

Cauchy's stress theorem – stress tensor

The state of stress at a point in the body is then defined by all the stress vectors T(n) associated with all planes (infinite in number) that pass through that point.
However, according to Cauchy's fundamental theorem, also called Cauchy's stress theorem, merely by knowing the stress vectors on three mutually perpendicular planes, the stress vector on any other plane passing through that point can be found through coordinate transformation equations. Cauchy's stress theorem states that there exists a second-order tensor field σ(x, t), called the Cauchy stress tensor, independent of n, such that T is a linear function of n:

$$\mathbf{T}^{(\mathbf{n})} = \mathbf{n} \cdot \boldsymbol{\sigma}, \qquad T^{(\mathbf{n})}_j = \sigma_{ij} n_i.$$

This equation implies that the stress vector T(n) at any point P in a continuum associated with a plane with normal unit vector n can be expressed as a function of the stress vectors on the planes perpendicular to the coordinate axes, i.e. in terms of the components σij of the stress tensor σ.

To prove this expression, consider a tetrahedron with three faces oriented in the coordinate planes and with an infinitesimal area dA oriented in an arbitrary direction specified by a normal unit vector n (Figure 2.2). The tetrahedron is formed by slicing the infinitesimal element along an arbitrary plane with unit normal n. The stress vector on this plane is denoted by T(n). The stress vectors acting on the faces of the tetrahedron are denoted as T(e1), T(e2), and T(e3), and are by definition the components σij of the stress tensor σ. This tetrahedron is sometimes called the Cauchy tetrahedron. The equilibrium of forces, i.e. Euler's first law of motion (Newton's second law of motion), gives:

$$\mathbf{T}^{(\mathbf{n})}\, dA - \mathbf{T}^{(\mathbf{e}_1)}\, dA_1 - \mathbf{T}^{(\mathbf{e}_2)}\, dA_2 - \mathbf{T}^{(\mathbf{e}_3)}\, dA_3 = \rho \left( \frac{h}{3}\, dA \right) \mathbf{a},$$

where the right-hand side represents the product of the mass enclosed by the tetrahedron and its acceleration: ρ is the density, a is the acceleration, and h is the height of the tetrahedron, considering the plane n as the base. The area of the faces of the tetrahedron perpendicular to the axes can be found by projecting dA onto each face (using the dot product):

$$dA_i = (\mathbf{n} \cdot \mathbf{e}_i)\, dA = n_i\, dA,$$

and then substituting into the equation to cancel out dA:

$$\mathbf{T}^{(\mathbf{n})} = \mathbf{T}^{(\mathbf{e}_1)} n_1 + \mathbf{T}^{(\mathbf{e}_2)} n_2 + \mathbf{T}^{(\mathbf{e}_3)} n_3 - \rho \left( \frac{h}{3} \right) \mathbf{a}.$$

To consider the limiting case as the tetrahedron shrinks to a point, h must go to 0 (intuitively, the plane n is translated along n toward O). As a result, the right-hand side of the equation approaches 0, so

$$\mathbf{T}^{(\mathbf{n})} = \mathbf{T}^{(\mathbf{e}_i)} n_i.$$

Assuming a material element (see figure at the top of the page) with planes perpendicular to the coordinate axes of a Cartesian coordinate system, the stress vectors associated with each of the element planes, i.e. T(e1), T(e2), and T(e3), can be decomposed into a normal component and two shear components, i.e. components in the directions of the three coordinate axes. For the particular case of a surface with normal unit vector oriented in the direction of the x1-axis, denote the normal stress by σ11 and the two shear stresses as σ12 and σ13:

$$\mathbf{T}^{(\mathbf{e}_1)} = \sigma_{11} \mathbf{e}_1 + \sigma_{12} \mathbf{e}_2 + \sigma_{13} \mathbf{e}_3.$$

In index notation this is

$$\mathbf{T}^{(\mathbf{e}_i)} = \sigma_{ij} \mathbf{e}_j.$$

The nine components σij of the stress vectors are the components of a second-order Cartesian tensor called the Cauchy stress tensor, which can be used to completely define the state of stress at a point and is given by

$$\boldsymbol{\sigma} = \begin{bmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{bmatrix},$$

where σ11, σ22, and σ33 are normal stresses, and σ12, σ13, σ21, σ23, σ31, and σ32 are shear stresses. The first index i indicates that the stress acts on a plane normal to the Xi-axis, and the second index j denotes the direction in which the stress acts (for example, σ12 implies that the stress is acting on the plane that is normal to the 1st axis, i.e. X1, and acts along the 2nd axis, i.e. X2). A stress component is positive if it acts in the positive direction of the coordinate axes and if the plane where it acts has an outward normal vector pointing in the positive coordinate direction.
Thus, using the components of the stress tensor,

$$\mathbf{T}^{(\mathbf{n})} = \mathbf{T}^{(\mathbf{e}_1)} n_1 + \mathbf{T}^{(\mathbf{e}_2)} n_2 + \mathbf{T}^{(\mathbf{e}_3)} n_3 = \sigma_{ij} n_i \mathbf{e}_j,$$

or, equivalently,

$$T^{(\mathbf{n})}_j = \sigma_{ij} n_i.$$

Alternatively, in matrix form we have

$$\begin{bmatrix} T^{(\mathbf{n})}_1 & T^{(\mathbf{n})}_2 & T^{(\mathbf{n})}_3 \end{bmatrix} = \begin{bmatrix} n_1 & n_2 & n_3 \end{bmatrix} \begin{bmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{bmatrix}.$$

The Voigt notation representation of the Cauchy stress tensor takes advantage of the symmetry of the stress tensor to express the stress as a six-dimensional vector of the form

$$\boldsymbol{\sigma} = \begin{bmatrix} \sigma_{11} & \sigma_{22} & \sigma_{33} & \sigma_{23} & \sigma_{13} & \sigma_{12} \end{bmatrix}^\mathsf{T}.$$

The Voigt notation is used extensively in representing stress–strain relations in solid mechanics and for computational efficiency in numerical structural mechanics software.

Transformation rule of the stress tensor

It can be shown that the stress tensor is a contravariant second-order tensor, which is a statement of how it transforms under a change of the coordinate system. From an xi-system to an xi'-system, the components σij in the initial system are transformed into the components σij' in the new system according to the tensor transformation rule (Figure 2.4):

$$\sigma'_{ij} = a_{ik} a_{jl} \sigma_{kl}, \quad \text{or, in matrix form,} \quad \boldsymbol{\sigma}' = \mathbf{A}\, \boldsymbol{\sigma}\, \mathbf{A}^\mathsf{T},$$

where A is a rotation matrix with components aij. Expanding the matrix operation and simplifying terms using the symmetry of the stress tensor gives the explicit component equations. The Mohr circle for stress is a graphical representation of this transformation of stresses.

Normal and shear stresses

The magnitude of the normal stress component σn of any stress vector T(n) acting on an arbitrary plane with normal unit vector n at a given point, in terms of the components σij of the stress tensor σ, is the dot product of the stress vector and the normal unit vector:

$$\sigma_\mathrm{n} = \mathbf{T}^{(\mathbf{n})} \cdot \mathbf{n} = \sigma_{ij}\, n_i\, n_j.$$

The magnitude of the shear stress component τn, acting orthogonal to the vector n, can then be found using the Pythagorean theorem:

$$\tau_\mathrm{n} = \sqrt{\left( T^{(\mathbf{n})} \right)^2 - \sigma_\mathrm{n}^2},$$

where $\left( T^{(\mathbf{n})} \right)^2 = T^{(\mathbf{n})}_i T^{(\mathbf{n})}_i$.

Balance laws – Cauchy's equations of motion

Cauchy's first law of motion

According to the principle of conservation of linear momentum, if the continuum body is in static equilibrium it can be demonstrated that the components of the Cauchy stress tensor at every material point in the body satisfy the equilibrium equations:

$$\sigma_{ji,j} + F_i = 0.$$

For example, for a hydrostatic fluid in equilibrium conditions, the stress tensor takes on the form

$$\sigma_{ij} = -p\, \delta_{ij},$$

where $p$ is the hydrostatic pressure and $\delta_{ij}$ is the Kronecker delta.

Derivation of the equilibrium equations: Consider a continuum body (see Figure 4) occupying a volume $V$, having a surface area $S$, with defined traction or surface forces $\mathbf{T}^{(\mathbf{n})}$ per unit area acting on every point of the body surface, and body forces $\mathbf{F}$ per unit of volume on every point within the volume $V$. Thus, if the body is in equilibrium, the resultant force acting on the volume is zero:

$$\int_S \mathbf{T}^{(\mathbf{n})}\, dS + \int_V \mathbf{F}\, dV = 0.$$

By definition the stress vector is $T^{(\mathbf{n})}_i = \sigma_{ji} n_j$; then, using Gauss's divergence theorem to convert the surface integral to a volume integral, we obtain

$$\int_V \left( \sigma_{ji,j} + F_i \right) dV = 0.$$

For an arbitrary volume the integrand must vanish, and we have the equilibrium equations

$$\sigma_{ji,j} + F_i = 0.$$

Cauchy's second law of motion

According to the principle of conservation of angular momentum, equilibrium requires that the summation of moments with respect to an arbitrary point is zero, which leads to the conclusion that the stress tensor is symmetric, thus having only six independent stress components instead of the original nine:

$$\sigma_{ij} = \sigma_{ji}.$$

Derivation of the symmetry of the stress tensor: Summing moments about point O (Figure 4), the resultant moment is zero, as the body is in equilibrium. Thus,

$$\int_S (\mathbf{r} \times \mathbf{T}^{(\mathbf{n})})\, dS + \int_V (\mathbf{r} \times \mathbf{F})\, dV = 0,$$

where $\mathbf{r}$ is the position vector. Knowing that $T^{(\mathbf{n})}_k = \sigma_{mk} n_m$ and using Gauss's divergence theorem to change from a surface integral to a volume integral, the moment balance becomes a sum of two volume integrals. The second integral is zero, as it contains the equilibrium equations.
This leaves the first integral, where

$\int_V \varepsilon_{ijk}\,\sigma_{jk}\,dV = 0.$

Therefore, for an arbitrary volume V, we then have

$\varepsilon_{ijk}\,\sigma_{jk} = 0,$

which is satisfied at every point within the body. Expanding this equation we have $\sigma_{12} = \sigma_{21}$, $\sigma_{23} = \sigma_{32}$, and $\sigma_{13} = \sigma_{31}$, or in general

$\sigma_{ij} = \sigma_{ji}.$

This proves that the stress tensor is symmetric. However, in the presence of couple-stresses, i.e. moments per unit volume, the stress tensor is non-symmetric. This also is the case when the Knudsen number is close to one, $K_n \to 1$, or the continuum is a non-Newtonian fluid, which can lead to rotationally non-invariant fluids, such as polymers. Principal stresses and stress invariants At every point in a stressed body there are at least three planes, called principal planes, with normal vectors n, called principal directions, where the corresponding stress vector is perpendicular to the plane, i.e., parallel or in the same direction as the normal vector n, and where there are no shear stresses τn. The three stresses normal to these principal planes are called principal stresses. The components σij of the stress tensor depend on the orientation of the coordinate system at the point under consideration. However, the stress tensor itself is a physical quantity and as such, it is independent of the coordinate system chosen to represent it. There are certain invariants associated with every tensor which are also independent of the coordinate system. For example, a vector is a simple tensor of rank one. In three dimensions, it has three components. The value of these components will depend on the coordinate system chosen to represent the vector, but the magnitude of the vector is a physical quantity (a scalar) and is independent of the Cartesian coordinate system chosen to represent the vector (so long as it is orthonormal). Similarly, every second rank tensor (such as the stress and the strain tensors) has three independent invariant quantities associated with it. One set of such invariants are the principal stresses of the stress tensor, which are just the eigenvalues of the stress tensor. Their direction vectors are the principal directions or eigenvectors. A stress vector parallel to the normal unit vector n is given by:

$\mathbf{T}^{(\mathbf{n})} = \lambda \mathbf{n} = \sigma_\mathrm{n} \mathbf{n},$

where λ is a constant of proportionality, and in this particular case corresponds to the magnitudes σn of the normal stress vectors or principal stresses. Knowing that $T^{(\mathbf{n})}_i = \sigma_{ij} n_j$ and $n_i = \delta_{ij} n_j$, we have

$\left(\sigma_{ij} - \lambda\,\delta_{ij}\right) n_j = 0.$

This is a homogeneous system, i.e. equal to zero, of three linear equations where n1, n2, n3 are the unknowns. To obtain a nontrivial (non-zero) solution for n1, n2, n3, the determinant of the matrix of the coefficients must be equal to zero, i.e. the system is singular. Thus,

$\left|\sigma_{ij} - \lambda\,\delta_{ij}\right| = 0.$

Expanding the determinant leads to the characteristic equation

$-\lambda^3 + I_1\lambda^2 - I_2\lambda + I_3 = 0,$

where

$I_1 = \sigma_{kk}, \qquad I_2 = \tfrac{1}{2}\left(\sigma_{ii}\sigma_{jj} - \sigma_{ij}\sigma_{ji}\right), \qquad I_3 = \det\left(\sigma_{ij}\right).$

The characteristic equation has three real roots λi, i.e. not imaginary due to the symmetry of the stress tensor. The roots σ1, σ2, and σ3 are the principal stresses, functions of the eigenvalues λi. The eigenvalues are the roots of the characteristic polynomial. The principal stresses are unique for a given stress tensor. Therefore, from the characteristic equation, the coefficients I1, I2 and I3, called the first, second, and third stress invariants, respectively, always have the same value regardless of the coordinate system's orientation. For each eigenvalue, there is a non-trivial solution for nj in the equation $\left(\sigma_{ij} - \lambda\,\delta_{ij}\right) n_j = 0$. These solutions are the principal directions or eigenvectors defining the plane where the principal stresses act. The principal stresses and principal directions characterize the stress at a point and are independent of the orientation.
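The transformation rule and the invariants above can be verified numerically. In the hedged Python/NumPy sketch below (same illustrative stress state as before), the invariants are unchanged by a rotation, and the principal stresses are obtained as eigenvalues:

```python
import numpy as np

sigma = np.array([[ 50.0,  30.0,   0.0],
                  [ 30.0, -20.0,   0.0],
                  [  0.0,   0.0,  10.0]])

# Transformation rule sigma' = A sigma A^T for a rotation of 30 deg about x3;
# the rows of A are the new basis vectors expressed in the old basis
th = np.deg2rad(30.0)
A = np.array([[ np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [ 0.0,        0.0,        1.0]])
sigma_p = A @ sigma @ A.T

# Stress invariants, computed from the components
I1 = np.trace(sigma)
I2 = 0.5 * (np.trace(sigma)**2 - np.trace(sigma @ sigma))
I3 = np.linalg.det(sigma)

# They are unchanged by the rotation
assert np.isclose(I1, np.trace(sigma_p))
assert np.isclose(I3, np.linalg.det(sigma_p))

# Principal stresses are the eigenvalues (eigvalsh returns ascending order)
s3, s2, s1 = np.linalg.eigvalsh(sigma)
assert np.isclose(I1, s1 + s2 + s3)
assert np.isclose(I2, s1*s2 + s2*s3 + s3*s1)
assert np.isclose(I3, s1*s2*s3)
print(s1, s2, s3)
```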
A coordinate system with axes oriented to the principal directions implies that the normal stresses are the principal stresses and the stress tensor is represented by a diagonal matrix:

$\boldsymbol{\sigma} = \begin{bmatrix} \sigma_1 & 0 & 0 \\ 0 & \sigma_2 & 0 \\ 0 & 0 & \sigma_3 \end{bmatrix}.$

The principal stresses can be combined to form the stress invariants I1, I2, and I3. The first and third invariant are the trace and determinant, respectively, of the stress tensor. Thus,

$I_1 = \sigma_1 + \sigma_2 + \sigma_3, \qquad I_2 = \sigma_1\sigma_2 + \sigma_2\sigma_3 + \sigma_3\sigma_1, \qquad I_3 = \sigma_1\sigma_2\sigma_3.$

Because of its simplicity, the principal coordinate system is often useful when considering the state of the elastic medium at a particular point. Principal stresses are often expressed in the following equation for evaluating stresses in the x and y directions or axial and bending stresses on a part:

$\sigma_{1,2} = \frac{\sigma_x + \sigma_y}{2} \pm \sqrt{\left(\frac{\sigma_x - \sigma_y}{2}\right)^2 + \tau_{xy}^2}.$

The principal normal stresses can then be used to calculate the von Mises stress and ultimately the safety factor and margin of safety. The part of the equation under the square root alone is equal to the maximum and minimum shear stress, taken with the plus and minus signs respectively. This is shown as:

$\tau_{\max}, \tau_{\min} = \pm\sqrt{\left(\frac{\sigma_x - \sigma_y}{2}\right)^2 + \tau_{xy}^2}.$

Maximum and minimum shear stresses The maximum shear stress or maximum principal shear stress is equal to one-half the difference between the largest and smallest principal stresses, and acts on the plane that bisects the angle between the directions of the largest and smallest principal stresses, i.e. the plane of the maximum shear stress is oriented 45° from the principal stress planes. The maximum shear stress is expressed as

$\tau_{\max} = \tfrac{1}{2}\left|\sigma_{\max} - \sigma_{\min}\right|.$

Assuming $\sigma_1 \ge \sigma_2 \ge \sigma_3$, then

$\tau_{\max} = \tfrac{1}{2}\left(\sigma_1 - \sigma_3\right).$

When the stress tensor is non-zero, the normal stress component acting on the plane of the maximum shear stress is non-zero and it is equal to

$\sigma_\mathrm{n} = \tfrac{1}{2}\left(\sigma_1 + \sigma_3\right).$

Derivation of the maximum and minimum shear stresses The normal stress can be written in terms of principal stresses as

$\sigma_\mathrm{n} = \sigma_1 n_1^2 + \sigma_2 n_2^2 + \sigma_3 n_3^2.$

Knowing that $\left(T^{(\mathbf{n})}\right)^2 = \sigma_1^2 n_1^2 + \sigma_2^2 n_2^2 + \sigma_3^2 n_3^2$, the shear stress in terms of principal stresses components is expressed as

$\tau_\mathrm{n}^2 = \sigma_1^2 n_1^2 + \sigma_2^2 n_2^2 + \sigma_3^2 n_3^2 - \left(\sigma_1 n_1^2 + \sigma_2 n_2^2 + \sigma_3 n_3^2\right)^2.$

The maximum shear stress at a point in a continuum body is determined by maximizing τn² subject to the condition that

$n_1^2 + n_2^2 + n_3^2 = 1.$

This is a constrained maximization problem, which can be solved using the Lagrangian multiplier technique to convert the problem into an unconstrained optimization problem. Thus, the stationary values (maximum and minimum values) of τn² occur where the gradient of τn² is parallel to the gradient of the constraint function. The Lagrangian function for this problem can be written as

$F\left(n_1, n_2, n_3, \lambda\right) = \tau_\mathrm{n}^2 + \lambda\left(n_1^2 + n_2^2 + n_3^2 - 1\right),$

where λ is the Lagrangian multiplier (which is different from the λ used to denote eigenvalues). The extreme values of these functions are thence

$\frac{\partial F}{\partial n_1} = 0, \qquad \frac{\partial F}{\partial n_2} = 0, \qquad \frac{\partial F}{\partial n_3} = 0.$

These three equations together with the condition $n_1^2 + n_2^2 + n_3^2 = 1$ may be solved for λ and n1, n2, n3. By multiplying the first three equations by n1, n2, and n3, respectively, adding them together, and using the constraint, an expression for λ is obtained; this result can be substituted back into each of the first three equations. A first approach to solve these equations is to consider the trivial solution n1 = n2 = n3 = 0. However, this option does not fulfill the constraint $n_1^2 + n_2^2 + n_3^2 = 1$. Considering the solution where $n_1 = n_2 = 0$ and $n_3 \neq 0$, it is determined from the constraint that $n_3 = \pm 1$; then from the expression for τn² it is seen that $\tau_\mathrm{n} = 0$. The other two possible values can be obtained similarly by assuming $n_1 = n_3 = 0$, and $n_2 = n_3 = 0$. Thus, one set of solutions for these four equations is:

$n_1 = 0,\ n_2 = 0,\ n_3 = \pm 1; \qquad n_1 = 0,\ n_2 = \pm 1,\ n_3 = 0; \qquad n_1 = \pm 1,\ n_2 = 0,\ n_3 = 0.$

These correspond to minimum values for τn and verify that there are no shear stresses on planes normal to the principal directions of stress, as shown previously. A second set of solutions is obtained by assuming $n_1 = 0$, $n_2 \neq 0$ and $n_3 \neq 0$.
Thus, with $n_1 = 0$, the remaining stationarity conditions become a pair of equations in n2, n3, and λ. To find the values for n2 and n3 we first add these two equations; knowing that for $n_1 = 0$ the constraint gives $n_2^2 + n_3^2 = 1$, we can solve for λ, and then solving for n2 and n3 we have

$n_2 = \pm\tfrac{1}{\sqrt{2}} \quad \text{and} \quad n_3 = \pm\tfrac{1}{\sqrt{2}}.$

The other two possible cases can be obtained similarly by assuming $n_2 = 0$ and $n_3 = 0$. Therefore, the second set of solutions, representing a maximum for τn, is

$n_1 = 0,\ n_2 = \pm\tfrac{1}{\sqrt{2}},\ n_3 = \pm\tfrac{1}{\sqrt{2}}, \quad \tau_\mathrm{n} = \tfrac{1}{2}\left|\sigma_2 - \sigma_3\right|;$
$n_1 = \pm\tfrac{1}{\sqrt{2}},\ n_2 = 0,\ n_3 = \pm\tfrac{1}{\sqrt{2}}, \quad \tau_\mathrm{n} = \tfrac{1}{2}\left|\sigma_1 - \sigma_3\right|;$
$n_1 = \pm\tfrac{1}{\sqrt{2}},\ n_2 = \pm\tfrac{1}{\sqrt{2}},\ n_3 = 0, \quad \tau_\mathrm{n} = \tfrac{1}{2}\left|\sigma_1 - \sigma_2\right|.$

Therefore, assuming $\sigma_1 \ge \sigma_2 \ge \sigma_3$, the maximum shear stress is expressed by

$\tau_{\max} = \tfrac{1}{2}\left(\sigma_1 - \sigma_3\right),$

and it can be stated as being equal to one-half the difference between the largest and smallest principal stresses, acting on the plane that bisects the angle between the directions of the largest and smallest principal stresses. Stress deviator tensor The stress tensor σij can be expressed as the sum of two other stress tensors: a mean hydrostatic stress tensor or volumetric stress tensor or mean normal stress tensor, πδij, which tends to change the volume of the stressed body; and a deviatoric component called the stress deviator tensor, sij, which tends to distort it. So

$\sigma_{ij} = s_{ij} + \pi\,\delta_{ij},$

where π is the mean stress given by

$\pi = \frac{\sigma_{kk}}{3} = \frac{\sigma_{11} + \sigma_{22} + \sigma_{33}}{3} = \tfrac{1}{3} I_1.$

Pressure (p) is generally defined as negative one-third the trace of the stress tensor minus any stress the divergence of the velocity contributes with, i.e.

$p = \lambda\,\nabla\cdot\vec{u} - \pi = \lambda\,\frac{\partial u_k}{\partial x_k} - \pi,$

where λ is a proportionality constant (viz. the first of the Lamé parameters), $\nabla\cdot$ is the divergence operator, $x_k$ is the k-th Cartesian coordinate, $\vec{u}$ is the flow velocity and $u_k$ is the k-th Cartesian component of $\vec{u}$. The deviatoric stress tensor can be obtained by subtracting the hydrostatic stress tensor from the Cauchy stress tensor:

$s_{ij} = \sigma_{ij} - \frac{\sigma_{kk}}{3}\,\delta_{ij}.$

Invariants of the stress deviator tensor As it is a second order tensor, the stress deviator tensor also has a set of invariants, which can be obtained using the same procedure used to calculate the invariants of the stress tensor. It can be shown that the principal directions of the stress deviator tensor sij are the same as the principal directions of the stress tensor σij. Thus, the characteristic equation is

$\left|s_{ij} - \lambda\,\delta_{ij}\right| = \lambda^3 - J_1\lambda^2 - J_2\lambda - J_3 = 0,$

where J1, J2 and J3 are the first, second, and third deviatoric stress invariants, respectively. Their values are the same (invariant) regardless of the orientation of the coordinate system chosen. These deviatoric stress invariants can be expressed as a function of the components of sij or its principal values s1, s2, and s3, or alternatively, as a function of σij or its principal values σ1, σ2, and σ3. Thus,

$J_1 = s_{kk} = 0,$
$J_2 = \tfrac{1}{2}\, s_{ij} s_{ji} = \tfrac{1}{6}\left[\left(\sigma_{11} - \sigma_{22}\right)^2 + \left(\sigma_{22} - \sigma_{33}\right)^2 + \left(\sigma_{33} - \sigma_{11}\right)^2\right] + \sigma_{12}^2 + \sigma_{23}^2 + \sigma_{31}^2 = \tfrac{1}{3} I_1^2 - I_2,$
$J_3 = \det\left(s_{ij}\right) = s_1 s_2 s_3 = \tfrac{2}{27} I_1^3 - \tfrac{1}{3} I_1 I_2 + I_3.$

Because $s_{kk} = 0$, the stress deviator tensor is in a state of pure shear. A quantity called the equivalent stress or von Mises stress is commonly used in solid mechanics. The equivalent stress is defined as

$\sigma_\mathrm{vM} = \sqrt{3 J_2} = \sqrt{\tfrac{1}{2}\left[\left(\sigma_1 - \sigma_2\right)^2 + \left(\sigma_2 - \sigma_3\right)^2 + \left(\sigma_3 - \sigma_1\right)^2\right]}.$

Octahedral stresses Considering the principal directions as the coordinate axes, a plane whose normal vector makes equal angles with each of the principal axes (i.e. having direction cosines equal to $\pm 1/\sqrt{3}$) is called an octahedral plane. There are a total of eight octahedral planes (Figure 6). The normal and shear components of the stress tensor on these planes are called octahedral normal stress σoct and octahedral shear stress τoct, respectively. The octahedral plane passing through the origin is known as the π-plane (π not to be confused with the mean stress denoted by π in the above section); on the π-plane the hydrostatic part of the stress vanishes. Knowing that the stress tensor of point O (Figure 6) in the principal axes is

$\sigma_{ij} = \begin{bmatrix} \sigma_1 & 0 & 0 \\ 0 & \sigma_2 & 0 \\ 0 & 0 & \sigma_3 \end{bmatrix},$

the stress vector on an octahedral plane is then given by:

$\mathbf{T}^{(\mathbf{n})}_\mathrm{oct} = \tfrac{1}{\sqrt{3}}\left(\sigma_1\mathbf{e}_1 + \sigma_2\mathbf{e}_2 + \sigma_3\mathbf{e}_3\right).$

The normal component of the stress vector at point O associated with the octahedral plane is

$\sigma_\mathrm{oct} = \tfrac{1}{3}\left(\sigma_1 + \sigma_2 + \sigma_3\right) = \tfrac{1}{3} I_1,$

which is the mean normal stress or hydrostatic stress. This value is the same in all eight octahedral planes. The shear stress on the octahedral plane is then

$\tau_\mathrm{oct} = \tfrac{1}{3}\sqrt{\left(\sigma_1 - \sigma_2\right)^2 + \left(\sigma_2 - \sigma_3\right)^2 + \left(\sigma_3 - \sigma_1\right)^2} = \sqrt{\tfrac{2}{3} J_2}.$
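The quantities of the last few sections chain together naturally: the principal stresses give the maximum shear stress, the deviator gives J2, and J2 gives both the von Mises and the octahedral shear stresses. A minimal Python/NumPy sketch, again with illustrative numbers:

```python
import numpy as np

sigma = np.array([[ 50.0,  30.0,   0.0],
                  [ 30.0, -20.0,   0.0],
                  [  0.0,   0.0,  10.0]])

s3, s2, s1 = np.linalg.eigvalsh(sigma)   # principal stresses, sigma1 >= sigma2 >= sigma3
tau_max = 0.5 * (s1 - s3)                # maximum shear stress

pi_mean = np.trace(sigma) / 3.0          # mean (hydrostatic) stress, I1/3
s_dev = sigma - pi_mean * np.eye(3)      # stress deviator: s = sigma - pi I

J2 = 0.5 * np.tensordot(s_dev, s_dev)    # J2 = (1/2) s_ij s_ji (s is symmetric)
sigma_vm = np.sqrt(3.0 * J2)             # von Mises equivalent stress

sigma_oct = pi_mean                      # octahedral normal stress = I1/3
tau_oct = np.sqrt(2.0 * J2 / 3.0)        # octahedral shear stress
print(tau_max, sigma_vm, sigma_oct, tau_oct)
```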
Physical sciences
Solid mechanics
Physics
19137315
https://en.wikipedia.org/wiki/Solid%20hydrogen
Solid hydrogen
Solid hydrogen is the solid state of the element hydrogen. At standard pressure, this is achieved by decreasing the temperature below hydrogen's melting point of about 14 K (−259 °C). It was collected for the first time by James Dewar in 1899 and published with the title "Sur la solidification de l'hydrogène" (English: On the freezing of hydrogen) in the Annales de Chimie et de Physique, 7th series, vol. 18, Oct. 1899. Solid hydrogen has a density of 0.086 g/cm3, making it one of the lowest-density solids. Molecular solid hydrogen At low temperatures, and at pressures up to a few hundred gigapascals, hydrogen forms a series of solid phases formed from discrete H2 molecules. Phase I occurs at low temperatures and pressures, and consists of a hexagonal close-packed array of freely rotating H2 molecules. Upon increasing the pressure at low temperature, a transition to Phase II occurs at pressures up to about 110 GPa. Phase II is a broken-symmetry structure, in which the H2 molecules are no longer able to rotate freely. If the pressure is increased further at low temperature, Phase III is encountered at about 160 GPa. Upon increasing the temperature, a transition to Phase IV occurs at a temperature of a few hundred kelvin at a range of pressures above 220 GPa. Identifying the atomic structures of the different phases of molecular solid hydrogen is extremely challenging, because hydrogen atoms interact with X-rays very weakly and only small samples of solid hydrogen can be achieved in diamond anvil cells, so that X-ray diffraction provides very limited information about the structures. Nevertheless, phase transitions can be detected by looking for abrupt changes in the Raman spectra of samples. Furthermore, atomic structures can be inferred from a combination of experimental Raman spectra and first-principles modelling. Density functional theory calculations have been used to search for candidate atomic structures for each phase. These candidate structures have low free energies and Raman spectra in agreement with the experimental spectra. Quantum Monte Carlo methods, together with a first-principles treatment of anharmonic vibrational effects, have then been used to obtain the relative Gibbs free energies of these structures and hence to obtain a theoretical pressure-temperature phase diagram that is in reasonable quantitative agreement with experiment. On this basis, Phase II is believed to be a molecular structure of P21/c symmetry; Phase III is (or is similar to) a structure of C2/c symmetry consisting of flat layers of molecules in a distorted hexagonal arrangement; and Phase IV is (or is similar to) a structure of Pc symmetry, consisting of alternate layers of strongly bonded molecules and weakly bonded graphene-like sheets.
Physical sciences
s-Block
Chemistry
10073219
https://en.wikipedia.org/wiki/Tropical%20cyclone%20rainfall%20forecasting
Tropical cyclone rainfall forecasting
Tropical cyclone rainfall forecasting involves using scientific models and other tools to predict the precipitation expected in tropical cyclones such as hurricanes and typhoons. Knowledge of tropical cyclone rainfall climatology is helpful in the determination of a tropical cyclone rainfall forecast. More rainfall falls in advance of the center of the cyclone than in its wake. The heaviest rainfall falls within its central dense overcast and eyewall. Slow-moving tropical cyclones, like Hurricane Danny and Hurricane Wilma, can lead to the highest rainfall amounts due to prolonged heavy rains over a specific location. However, vertical wind shear leads to decreased rainfall amounts, as rainfall is favored downshear and slightly left of the center, and the upshear side is left devoid of rainfall. The presence of hills or mountains near the coast, as is the case across much of Mexico, Haiti, the Dominican Republic, much of Central America, Madagascar, Réunion, China, and Japan, acts to magnify amounts on their windward side due to forced ascent causing heavy rainfall in the mountains. A strong system moving through the mid-latitudes, such as a cold front, can lead to high amounts from tropical systems, occurring well in advance of its center. Movement of a tropical cyclone over cool water will also limit its rainfall potential. A combination of factors can lead to exceptionally high rainfall amounts, as was seen during Hurricane Mitch in Central America. Use of forecast models can help determine the magnitude and pattern of the rainfall expected. Climatology and persistence models, such as r-CLIPER, can create a baseline for tropical cyclone rainfall forecast skill. Simplified forecast models, such as the Kraft technique and the eight and sixteen-inch rules, can create quick and simple rainfall forecasts, but come with a variety of assumptions which may not be true, such as assuming average forward motion, average storm size, and a knowledge of the rainfall observing network the tropical cyclone is moving towards. The forecast method of TRaP assumes that the rainfall structure the tropical cyclone currently has changes little over the next 24 hours. The global forecast model which shows the most skill in forecasting tropical cyclone-related rainfall in the United States is the ECMWF IFS (Integrated Forecasting System). Rainfall distribution around a tropical cyclone A larger proportion of rainfall falls in advance of the center (or eye) than after the center's passage, with the highest percentage falling in the right-front quadrant. A tropical cyclone's highest rainfall rates can lie in the right rear quadrant within a training (non-moving) inflow band. Rainfall is found to be strongest in their inner core, within a degree of latitude of the center, with lesser amounts farther away from the center. Most of the rainfall in hurricanes is concentrated within its radius of gale-force winds. Larger tropical cyclones have larger rain shields, which can lead to higher rainfall amounts farther from the cyclone's center. Storms which have moved slowly, or loop, lead to the highest rainfall amounts. Riehl calculated the amount of rainfall per day that can be expected within one-half degree of latitude, or about 56 km (35 mi), of the center of a mature tropical cyclone. Many tropical cyclones progress at a forward motion of 10 knots, which would limit the duration of this excessive rainfall to around one-quarter of a day, and thus to about one-quarter of that daily amount. This would be true over water, near the coastline, and away from topographic features.
As a cyclone moves farther inland and is cut off from its supply of warmth and moisture (the ocean), rainfall amounts from tropical cyclones and their remains decrease quickly. Vertical wind shear Vertical wind shear forces the rainfall pattern around a tropical cyclone to become highly asymmetric, with most of the precipitation falling to the left and downwind of the shear vector, or downshear left. In other words, southwesterly shear forces the bulk of the rainfall north-northeast of the center. If the wind shear is strong enough, the bulk of the rainfall will move away from the center, leading to what is known as an exposed circulation center. When this occurs, the potential magnitude of rainfall with the tropical cyclone will be significantly reduced. Interaction with frontal boundaries and upper level troughs As a tropical cyclone interacts with an upper-level trough and the related surface front, a distinct northern area of precipitation is seen along the front ahead of the axis of the upper-level trough. Surface fronts with high precipitable water amounts and upper-level divergence overhead, east of an upper-level trough, can lead to significant rainfall. This type of interaction can lead to the appearance of the heaviest rainfall falling along and to the left of the tropical cyclone track, with the precipitation streaking hundreds of miles or kilometers downwind from the tropical cyclone. Mountains Moist air forced up the slopes of coastal hills and mountain chains can lead to much heavier rainfall than in the coastal plain. This heavy rainfall can lead to landslides, which still cause significant loss of life, as seen during Hurricane Mitch in Central America, where several thousand perished. Tools used in preparation of forecast Climatology and persistence The Hurricane Research Division of the Atlantic Oceanographic and Meteorological Laboratory created the r-CLIPER (rainfall climatology and persistence) model to act as a baseline for all verification regarding tropical cyclone rainfall. The theory is that if the global forecast models cannot beat predictions based on climatology, then there is no skill in their use. There is a definite advantage to using the forecast track with r-CLIPER because it can be run out to 120 hours (5 days) with the forecast track of any tropical cyclone globally within a short amount of time. The short-range variation which uses persistence is the Tropical Rainfall Potential (TRaP) technique, which uses satellite-derived rainfall amounts from microwave imaging satellites and extrapolates the current rainfall configuration forward for 24 hours along the current forecast track. This technique's main flaw is that it assumes a steady-state tropical cyclone which undergoes little structural change with time, which is why it is only run forward 24 hours into the future. Numerical weather prediction Computer models can be used to diagnose the magnitude of tropical cyclone rainfall. Since forecast models output their information on a grid, they only give a general idea as to the areal coverage of moderate to heavy rainfall. No current forecast models run at a small enough grid scale (1 km or smaller) to be able to detect the absolute maxima measured within tropical cyclones. Of the United States forecasting models, the best performing model for tropical cyclone rainfall forecasting is known as the GFS, or Global Forecast System.
The GFDL model has been shown to have a high bias concerning the magnitude of heavier core rains within tropical cyclones. Beginning in 2007, the NCEP Hurricane-WRF became available to help predict rainfall from tropical cyclones. Recent verification shows that both the European ECMWF forecast model and the North American Mesoscale Model (NAM) show a low bias with heavier rainfall amounts within tropical cyclones. Kraft rule During the late 1950s, this rule of thumb came into being, developed by R. H. Kraft. It was noted from rainfall amounts (in imperial units) reported by the first-order rainfall network in the United States that the storm total rainfall fit a simple equation: 100 divided by the speed of motion in knots. This rule works, even in other countries, as long as a tropical cyclone is moving and only the first-order or synoptic station network (with widely spaced observations) is used to derive storm totals. Canada uses a modified version of the Kraft rule which divides the results by a factor of two, which takes into account the lower sea surface temperatures seen around Atlantic Canada and the prevalence of systems undergoing vertical wind shear at their northerly latitudes. The main problem with this rule is that the rainfall observing network is denser than either the synoptic reporting network or the first-order station networks, which means the absolute maximum is likely to be underestimated. Another problem is that it does not take the size of the tropical cyclone or topography into account.
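Because the Kraft rule reduces to a one-line formula (storm-total rainfall in inches ≈ 100 divided by forward speed in knots), it is easy to express directly. A minimal Python sketch under the assumptions above; the function name and the flag for the Canadian variant are ours:

```python
def kraft_storm_total_inches(forward_speed_knots, canadian_variant=False):
    """Kraft rule of thumb: storm-total rainfall (inches) ~ 100 / forward speed (knots).

    The Canadian modification halves the result, reflecting cooler waters
    and sheared storms at higher latitudes.
    """
    if forward_speed_knots <= 0:
        raise ValueError("rule only applies to a moving tropical cyclone")
    total = 100.0 / forward_speed_knots
    return total / 2.0 if canadian_variant else total

# A storm moving at 10 knots: about 10 inches (roughly 250 mm) storm-total rainfall
print(kraft_storm_total_inches(10.0))        # 10.0
print(kraft_storm_total_inches(10.0, True))  # 5.0
```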
Physical sciences
Storms
Earth science
10079376
https://en.wikipedia.org/wiki/Fishing%20vessel
Fishing vessel
A fishing vessel is a boat or ship used to catch fish and other valuable nektonic aquatic animals (e.g. shrimps/prawns, krill, coleoids, etc.) in the sea, lake or river. Humans have used different kinds of surface vessels in commercial, artisanal and recreational fishing. Prior to the 1950s there was little standardisation of fishing boats. Designs could vary between localities and even between different boatyards. Traditional fishing boats were built of wood, which is not often used nowadays because of higher maintenance costs and lower durability. Fibreglass is used increasingly in smaller fishing vessels up to 25 metres (100-tonne displacement), while steel is usually used on vessels above 25 metres. It is difficult to estimate the number of recreational fishing boats. They range in size from small dinghies, sailboats and motorboats to large superyachts and chartered cruiseliners. Unlike commercial fishing vessels, recreational fishing vessels are often used as much for leisurely cruising as for fishing. History Traditional fishing boats Early fishing vessels included rafts, dugout canoes, and boats constructed from a frame covered with hide or tree bark, along the lines of a coracle. The oldest boats found by archaeological excavation are dugout canoes dating back to the Neolithic Period around 7,000-9,000 years ago. These canoes were often cut from coniferous tree logs, using simple stone tools. A 7,000-year-old seagoing boat made from reeds and tar has been found in Kuwait. These early vessels had limited capability; they could float and move on water, but were not suitable for use any great distance from the shoreline. They were used mainly for fishing and hunting. The development of fishing boats took place in parallel with the development of boats for trade and war. Early navigators began to use animal skins or woven fabrics for sails. Affixed to a pole set upright in the boat, these sails gave early boats more range, allowing voyages of exploration. Around 4000 BC, Egyptians were building long narrow boats powered by many oarsmen. Over the next 1,000 years, they made a series of remarkable advances in boat design. They developed cotton-made sails to help their boats go faster with less work. Then they built boats large enough to cross the oceans. These boats had sails and oarsmen, and were used for travel and trade. By 3000 BC, the Egyptians knew how to assemble planks of wood into a ship hull. They used woven straps to lash planks together, and reeds or grass stuffed between the planks to seal the seams. An example of their skill is the Khufu ship, a vessel about 44 metres in length entombed at the foot of the Great Pyramid of Giza around 2,500 BC and found intact in 1954. At about the same time, the Scandinavians were also building innovative boats. People living near Kongens Lyngby in Denmark came up with the idea of segregated hull compartments, which allowed the size of boats to gradually be increased. A crew of some two dozen paddled the wooden Hjortspring boat across the Baltic Sea long before the rise of the Roman Empire. Scandinavians continued to develop better ships, incorporating iron and other metal into the design and developing oars for propulsion. By 1000 AD the Norsemen were pre-eminent on the oceans. They were skilled seamen and boat builders, with clinker-built boat designs that varied according to the type of boat. Trading boats, such as the knarrs, were wide to allow large cargo storage. Raiding boats, such as the longship, were long and narrow and very fast.
The vessels they used for fishing were scaled down versions of their cargo boats. The Scandinavian innovations influenced fishing boat design long after the Viking period came to an end. For example, yoles from the Orkney Island of Stroma were built in the same way as the Norse boats. Early modern designs In the 15th century, the Dutch developed a type of seagoing herring drifter that became a blueprint for European fishing boats. This was the herring buss, used by Dutch herring fishermen until the early 19th century. The ship type buss has a long history. It was known around 1000 AD in Scandinavia as a búza, a robust variant of the Viking longship. The first herring buss was probably built in Hoorn around 1415. The ship was about 20 metres long and displaced between 60 and 100 tons. It was a massive round-bilged keel ship with a bluff bow and stern, the latter relatively high, and with a gallery. The busses used long drifting gill nets to catch the herring. The nets would be retrieved at night and the crews of eighteen to thirty men would set to gibbing, salting and barrelling the catch on the broad deck. During the 17th century, the British developed the dogger, an early type of sailing trawler or longliner, which commonly operated in the North Sea. Doggers were slow but sturdy, capable of fishing in the rough conditions of the North Sea. Like the herring buss, they were wide-beamed and bluff-bowed, but considerably smaller, about 15 metres long, with a maximum beam of 4.5 metres and a draught of 1.5 metres, displacing about 13 tonnes. They could carry a tonne of bait, three tonnes of salt, half a tonne each of food and firewood for the crew, and return with six tonnes of fish. Decked areas forward and aft probably provided accommodation, storage and a cooking area. An anchor would have allowed extended periods of fishing in the same spot, in waters up to 18 metres deep. The dogger would also have carried a small open boat for maintaining lines and rowing ashore. A precursor to the dory type was the early French bateau type, a flat-bottom boat with straight sides used as early as 1671 on the Saint Lawrence River. The common coastal boat of the time was the wherry, and the merging of the wherry design with the simplified flat bottom of the bateau resulted in the birth of the dory. England, France, Italy, and Belgium have small boats from medieval periods that could reasonably be construed as predecessors of the dory. Dories appeared in New England fishing towns sometime after the early 18th century. They were small, shallow-draft boats, usually about five to seven metres (15 to 22 feet) long. Lightweight and versatile, with high sides, a flat bottom and sharp bows, they were easy and cheap to build. The Banks dories appeared in the 1830s. They were designed to be carried on mother ships and used for fishing cod at the Grand Banks. Adapted almost directly from the low-freeboard French river bateaus, with their straight sides and removable thwarts, Banks dories could be nested inside each other and stored on the decks of fishing schooners, such as the Gazela Primeiro, for their trip to the Grand Banks fishing grounds. Modern fishing trawler The Portuguese muletta and the British dogger were early types of sailing trawler in use from before the 17th century onward, but the modern fishing trawler was developed in the 19th century.
By the early 19th century, the fishermen at Brixham needed to expand their fishing area further than ever before due to the ongoing depletion of stocks that was occurring in the overfished waters of South Devon. The Brixham trawler that evolved there was of a sleek build and had a tall gaff rig, which gave the vessel sufficient speed to make long-distance trips out to the fishing grounds in the ocean. They were also sufficiently robust to be able to tow large trawls in deep water. The great trawling fleet that built up at Brixham earned the village the title of 'Mother of Deep-Sea Fisheries'. This revolutionary design made large scale trawling in the ocean possible for the first time, resulting in a massive migration of fishermen from the ports in the South of England to villages further north, such as Scarborough, Hull, Grimsby, Harwich and Yarmouth, that were points of access to the large fishing grounds in the Atlantic Ocean. The small village of Grimsby grew to become the largest fishing port in the world by the mid-19th century. With the tremendous expansion in the fishing industry, the Grimsby Dock Company was formed in 1846. The dock was formally opened by Queen Victoria in 1854 as the first modern fishing port. The facilities incorporated many innovations of the time - the dock gates and cranes were operated by hydraulic power, and the Grimsby Dock Tower was built by William Armstrong to provide a head of water at sufficient pressure. The elegant Brixham trawler spread across the world, influencing fishing fleets everywhere. Their distinctive sails inspired the song Red Sails in the Sunset, written aboard a Brixham sailing trawler called the Torbay Lass. By the end of the 19th century, there were over 3,000 fishing trawlers in commission in Britain, with almost 1,000 at Grimsby. These trawlers were sold to fishermen around Europe, including in the Netherlands and Scandinavia. Twelve trawlers went on to form the nucleus of the German fishing fleet. Although fishing vessel designs increasingly began to converge around the world, local conditions still often led to the development of different types of fishing boats. The Lancashire nobby was used down the north west coast of England as a shrimp trawler from 1840 until World War II. The Manx nobby was used around the Isle of Man as a herring drifter. The fifie was also used as a herring drifter along the east coast of Scotland from the 1850s until well into the 20th century. Advent of steam power The earliest steam-powered fishing boats appeared in the 1870s and used the trawl system of fishing, as well as lines and drift nets. These were large boats, weighing 40-50 tons. The earliest purpose-built fishing vessels were designed and made by David Allan in Leith in March 1875, when he converted a drifter to steam power. In 1877, he built the first screw propelled steam trawler in the world. This vessel was Pioneer LH854. She was of wooden construction with two masts and carried a gaff-rigged main and mizzen using booms, and a single foresail. Pioneer is mentioned in The Shetland Times of 4 May 1877. In 1878 he completed Forward and Onward, steam-powered trawlers for sale. Allan built a total of ten boats at Leith between 1877 and 1881. Twenty-one boats were completed at Granton, his last vessel being Degrave in 1886. Most of these were sold to foreign owners in France, Belgium, Spain and the West Indies.
The first steam boats were made of wood, but steel hulls were soon introduced and were divided into watertight compartments. They were well designed for the crew, with a large structure that contained the wheelhouse and the deckhouse. The boats built in the 20th century only had a mizzen sail, which was used to help steady the boat when its nets were out. The main function of the mast was now as a crane for lifting the catch ashore. It also had a steam capstan on the foredeck near the mast for hauling nets. The boats had narrow, high funnels so that the steam and thick coal smoke was released high above the deck and away from the fishermen. These funnels were nicknamed woodbines because they looked like the popular brand of cigarette. These boats had a crew of twelve made up of a skipper, driver, fireman (to look after the boiler) and nine deck hands. Steam fishing boats had many advantages. They were usually larger than the sailing vessels, so they could carry more nets and catch more fish. This was important, as the market was growing quickly at the beginning of the 20th century. They could travel faster and further and with greater freedom from weather, wind and tide. Because less time was spent travelling to and from the fishing grounds, more time could be spent fishing. The steam boats also gained the highest prices for their fish, as they could return quickly to harbour with their fresh catch. The main disadvantage of the steam boats, though, was their high operating costs. Their engines were mechanically inefficient and took up much space, while fuel and fitting out costs were very high. Before the First World War, building costs were between £3,000 and £4,000, at least three times the cost of the sail boats. To cover these high costs, they needed to fish for longer seasons. The higher expenses meant that more steam drifters were company-owned or jointly owned. As the herring fishing industry declined, steam boats became too expensive. Steam trawlers were introduced at Grimsby and Hull in the 1880s. In 1890 it was estimated that there were 20,000 men on the North Sea. The steam drifter was not used in the herring fishery until 1897. The last sailing fishing trawler was built in 1925 in Grimsby. Further development Trawler designs adapted as the way they were powered changed from sail to coal-fired steam by World War I, and to diesel and turbines by the end of World War II. The first trawlers fished over the side, rather than over the stern. In 1947, the company Christian Salvesen, based in Leith, Scotland, refitted a surplus Algerine-class minesweeper (HMS Felicity) with refrigeration equipment and a factory ship stern ramp, producing the first combined freezer/stern trawler. The first purpose-built stern trawler was Fairtry, built in 1953 at Aberdeen. The ship was much larger than any other trawlers then in operation and inaugurated the era of the 'super trawler'. As the ship pulled its nets over the stern, it could lift out a much greater haul of up to 60 tons. Lord Nelson followed in 1961, fitted with vertical plate freezers that had been researched and built at the Torry Research Station. These ships served as a basis for the expansion of 'super trawlers' around the world in the following decades. In recent decades, commercial fishing vessels have been increasingly equipped with electronic aids, such as radio navigation aids and fish finders.
During the Cold War, some countries fitted fishing trawlers with additional electronic gear so they could be used as spy ships to monitor the activities of other countries. Global trends In 2022 the world fishing fleet was estimated at 4.9 million vessels, down from a peak of 5.3 million in 2019; two-thirds of these were motorized. About 1.3 million are decked vessels with enclosed areas. Nearly all of these decked vessels are mechanised, and 40,000 of them are over 100 tons. At the other extreme, two-thirds (1.8 million) of the undecked boats are traditional craft of various types, powered only by sail and oars. These boats are used by artisan fishers. The Cape Town Agreement is an International Maritime Organization legal instrument established in 2012 that sets out minimum safety requirements for fishing vessels of 24 metres in length and over, or the equivalent in gross tons. The largest part of the global fishing fleet is found in upper-middle-income (41%) and lower-middle-income (39%) countries, followed by high-income (11%) and low-income countries (8%). Asia hosts the world's largest fishing fleet (71% of the total), followed by Africa (19%), Latin America and the Caribbean (5%), Northern America and Europe (2%), and Oceania (less than 1%). Asia hosts the largest fleets of motorized (80%) and non-motorized (54%) vessels and Africa hosts the second-largest non-motorized fishing fleet. Many fishing nations (e.g. China, Japan and European Union Member States) have continued their strategy of reducing the number of fishing vessels. Commercial vessels The 200-mile fishing limit has changed fishing patterns and, in recent times, fishing boats are becoming more specialised and standardised. In the United States and Canada more use is made of large factory trawlers, while the huge blue water fleets operated by Japan and the Soviet-bloc countries have contracted. In western Europe, fishing vessel design is focused on compact boats with high catching power. Commercial fishing is a high-risk industry, and countries are introducing regulations governing the construction and operation of fishing vessels. The International Maritime Organization, convened in 1959 by the United Nations, is responsible for devising measures aimed at the prevention of accidents, including standards for ship design, construction, equipment, operation and manning. According to the FAO, in 2004 the world's fishing fleet consisted of 4 million vessels. Of these, 1.3 million were decked vessels with enclosed areas. The rest were open vessels, of which two-thirds were traditional craft propelled by sails and oars. By contrast, nearly all decked vessels were mechanized. Of the decked vessels, 86 percent are found in Asia, 7.8 percent in Europe, 3.8 percent in North and Central America, 1.3 percent in Africa, 0.6 percent in South America and 0.4 percent in Oceania. Most commercial fishing boats are small, though they range up to the size of a large purse seiner or factory ship. Commercial fishing vessels can be classified by architecture, the type of fish they catch, the fishing method used, or geographical origin. The following classification follows the FAO, which classifies commercial fishing vessels by the gear they use. Fishing gear Trawlers A trawler is a fishing vessel designed to use trawl nets in order to catch large volumes of fish. Outrigger trawlers – use outriggers to tow the trawl. These are commonly used to catch shrimp. One or two otter trawls can be towed from each side.
Beam trawlers, employed in the North Sea for catching flatfish, are another form of outrigger trawler. Beam trawlers – use sturdy outrigger booms for towing a beam trawl, one warp on each side. Double-rig beam trawlers can tow a separate trawl on each side of the trawler. Beam trawling is used in the flatfish and shrimp fisheries in the North Sea. They are medium-sized and high-powered vessels, towing gear at speeds up to 8 knots. To avoid the boat capsizing if the trawl snags on the sea floor, winch brakes can be installed, along with safety release systems in the boom stays. The engine power of bottom trawlers is also restricted to 2000 HP (1,472 kW) for further safety. Otter trawlers – deploy one or more parallel trawls kept apart horizontally using otter boards. These trawls can be towed in midwater or along the bottom. Pair trawlers – are trawlers which operate together towing a single trawl. They keep the trawl open horizontally by keeping their distance when towing. Otter boards are not used. Pair trawlers operate both midwater and bottom trawls. Side trawlers – have the trawl set over the side with the trawl warps passing through blocks which hang from two gallows, one forward and one aft. Until the late sixties, side trawlers were the most familiar vessel in the North Atlantic deep sea fisheries. They evolved over a longer period than other trawler types, but are now being replaced by stern trawlers. Stern trawlers – have trawls which are deployed and retrieved from the stern. Larger stern trawlers often have a ramp, though pelagic and small stern trawlers are often designed without a ramp. Stern trawlers are designed to operate in most weather conditions. They can work alone when midwater or bottom trawling, or two can work together as pair trawlers. Freezer trawlers – The majority of trawlers operating on high sea waters are freezer trawlers. They have facilities for preserving fish by freezing, allowing them to stay at sea for extended periods of time. They are medium to large size trawlers, with the same general arrangement as stern or side trawlers. Wet fish trawlers – are trawlers where the fish is kept in the hold in a fresh/wet condition. They must operate in areas not far from their landing place, and the fishing time of such vessels is limited. Seiners Seiners use surrounding and seine nets. This is a large group, ranging from small open boats to ocean-going vessels. There are also specialised gears that can target demersal species. Purse seiners are very effective at targeting aggregating pelagic species near the surface. The seiner circles the shoal with a deep curtain of netting, possibly using bow thrusters for better manoeuvrability. Then the bottom of the net is pursed (closed) underneath the fish shoal by hauling a wire running from the vessel through rings along the bottom of the net and then back to the vessel. The most important part of the fishing operation is searching for the fish shoals and assessing their size and direction of movement. Sophisticated electronics, such as echosounders, sonar, and track plotters, may be used to search for and track schools, assessing their size and movement and keeping in touch with a school while it is surrounded with the seine net. Crow's nests may be built on the masts for further visual support. Large vessels can have observation towers and helicopter landing decks.
Helicopters and spotter planes are used for detecting fish schools. The main types of purse seiners are the American seiners, the European seiners and the drum seiners. American seiners have their bridge and accommodation placed forward with the working deck aft. American seiners are most common on both coasts of North America and in parts of Oceania. The net is stowed at the stern and is set over the stern. The power block is usually attached to a boom from a mast located behind the superstructure. American seiners use triple rollers. A purse line winch is located amidships near the hauling station, near the side where the rings are taken on board. European seiners have their bridge and accommodation located more to the after part of the vessel, with the working deck amidships. European seiners are most common in waters fished by European nations. The net is stowed in a net bin at the stern, and is set over the stern from this position. The pursing winch is normally positioned at the forward part of the working deck. Drum seiners have the same layout as American seiners except that a drum is mounted on the stern and used instead of the power block. They are mainly used in Canada and the USA. Tuna purse seiners are large purse seiners, normally over 45 metres long, equipped to handle large and heavy purse seines for tuna. They have the same general arrangement as the American seiner, with the bridge and accommodation placed forward. A crow's nest or tuna tower is positioned at the top of the mast, outfitted with control and manoeuvring devices. A very heavy boom which carries the power block is fitted at the mast. They often carry a helicopter to search for tuna schools. On the deck are three drum purse seine winches and a power block, with other specific winches to handle the heavy boom and net. They are usually equipped with a skiff. Seine netters – the basic types of seine netters are the anchor seiners and Scottish seiners in northern Europe and the Asian seiners in Asia. Anchor seiners have the wheelhouse and accommodation aft and the working deck amidships, thus resembling side trawlers. The seine net is stored and shot from the stern, and they may carry a power block. Anchor seiners have the coiler and winch mounted transversally amidships. Scottish seiners are basically configured the same as anchor seiners. The only difference is that, whereas the anchor seiner has the coiler and winch mounted transversally amidships, the Scottish seiner has them mounted transversally in the forward part of the vessel. Asian seiners – In Asia, the seine netter usually has the wheelhouse forward and the working deck aft, in the manner of a stern trawler. However, in regions where the fishing effort is a labour-intensive, low-technology approach, they are often undecked and may be powered by outboard motors, or even by sail. Line vessels Longliners – use one or more long heavy fishing lines with a series of hundreds or even thousands of baited hooks hanging from the main line by means of branch lines called "snoods". Hand-operated longlining can be carried out from boats of any size. The number of hooks and lines handled depends on the size of vessel, the number of crew, and the level of mechanisation. Large purpose-built longliners can be designed for single-species fisheries such as tuna. On such larger vessels the bridge is usually placed aft, and the gear is hauled from the bow or from the side with mechanical or hydraulic line haulers. The lines are set over the stern.
Automatic or semi-automatic systems are used to bait hooks and to shoot and haul lines. These systems include rail rollers, line haulers, hook separators, dehookers and hook cleaners, and storage racks or drums. To avoid incidental catches of seabirds, an outboard setting funnel is used to guide the line from the setting position on the stern down to a depth of one or two metres. Small-scale longliners handle the gear by hand. The line is stored in baskets or tubs, perhaps using a hand-cranked line drum. Bottom longliners – set their gear on or near the seabed. Midwater longliners – are usually medium-sized vessels which operate worldwide, purpose-built to catch large pelagics. The line hauler is usually located forward on the starboard side, where the fish are hauled through a gate in the rail. The lines are set from the stern, where a baiting table and chute are located. These boats need adequate speed to reach distant fishing grounds, enough endurance for continued fishing, adequate freezer storage, suitable mechanisms for shooting and hauling longlines quickly, and proper storage for fishing gear and accessories. Freezer longliners – are outfitted with freezing equipment. The holds are insulated and refrigerated. Freezer longliners are medium to large vessels with the same general characteristics as other longliners. Most longliners operating on the high seas are freezer longliners. Factory longliners – are generally equipped with a processing plant, including mechanical gutting and filleting equipment accompanied by freezing facilities, as well as fish oil, fish meal and sometimes canning plants. These vessels have a large buffer capacity; thus, caught fish can be stored in refrigerated seawater tanks and peaks in the catch can also be used. Factory longliners are large ships, working the high seas, with the same general characteristics as other large longliners. Wet-fish longliners – keep the caught fish in the hold in the fresh/wet condition. The fish is stored in boxes and covered with ice, or stored with ice in the fish hold. The fishing time of such vessels is limited, so they operate close to the landing place. Pole and line vessels – are used mainly to catch tuna and skipjack. The fishers stand at the railing or on special platforms and fish with poles and lines. The lines have hooks which are baited, preferably with live bait. Caught tuna are swung on board by two to three fishermen if the tuna is big, or with an automated swinging mechanism. The tuna usually release themselves from the barbless hook when they hit the deck. Tanks with live bait are placed round the decks, and water spray systems are used to attract the fish. The vessels are 15 to 45 metres o/a. On smaller vessels fishers fish from the main deck right around the boat. With larger vessels, there are two different deck styles: the American style and the Japanese style. American style – fishers stand on platforms arranged over the side abaft amidships and around the stern. The vessel moves ahead during the fishing operation. Japanese style – fishers stand at the rail in the forepart of the vessel. The vessel drifts during fishing operations. Trollers – catch fish by towing astern one or more trolling lines. A trolling line is a fishing line with natural or artificial baited hooks trailed by a vessel near the surface or at a certain depth. Several lines can be towed at the same time using outriggers to keep the lines apart. The lines can be hauled in manually or by small winches. A length of rubber is often included in each line as a shock absorber.
The trolling line is towed at a speed depending on the target species, from 2.3 knots up to at least 7 knots. Trollers range from small open boats to large refrigerated vessels 30 metres long. In many tropical artisanal fisheries, trolling is done with sailing canoes with outriggers for stability. With properly designed vessels, trolling is an economical and efficient way of catching tuna, mackerel and other pelagic fish swimming close to the surface. Purpose-built trollers are usually equipped with two or four trolling booms, raised and lowered by topping lifts and held in position by adjustable stays. Electrically powered or hydraulic reels can be used to haul in the lines. Jiggers – there are two types of jiggers: specialised squid jiggers, which work mostly in the southern hemisphere, and smaller vessels using jigging techniques in the northern hemisphere, mainly for catching cod. Squid jiggers – have single or double drum jigger winches lined along the rails around the vessel. Strong lamps, up to 5000 W each, are used to attract the squid. These are arranged 50–60 centimetres apart, either as one row in the centre of the vessel, or two rows, one on each side. As the squid are caught they are transferred by chutes to the processing plant of the vessel. The jigging motion can be produced mechanically by the shape of the drum or electronically by adjustment of the winch motor. Squid jiggers are often used during the day as midwater trawlers and during the night as jiggers. Cod jiggers – use single jigger machines and do not use lights to attract the fish. The fish are attracted by the jigging motion and artificial bait. Other vessels Dredgers – use a dredge for collecting molluscs from the seafloor. There are three types of dredge: (a) The dredge can be dragged along the seabed, scooping the shellfish from the ground. These dredges are towed in a manner similar to beam trawlers, and large dredgers can work three or more dredges on each side. (b) Heavy mechanical dredging units are operated by special gallows from the bow of the vessel. (c) The dredger employs a hydraulic dredge which uses a powerful water pump to operate water jets which flush the molluscs from the bottom. Dredgers do not have a typical deck arrangement; the bridge and accommodation can be aft or forward. Derricks and winches may be installed for lowering and lifting the dredge. Echosounders are used for determining depths. Gillnetters – On inland waters and inshore, gillnets can be operated from open boats and canoes. In coastal waters, they are operated by small decked vessels which can have their wheelhouse either aft or forward. In coastal waters, gillnetting is often used as a second fishing method by trawlers or beam trawlers, depending on fishing seasons and targeted species. For offshore fishing, or fishing on the high seas, medium-sized vessels using drifting gillnets are called drifters, and the bridge is usually located aft. The nets are set and hauled by hand on small open boats. Larger boats use hydraulic or occasionally mechanical net haulers, or net drums. These vessels can be equipped with an echosounder, although locating fish is more a matter of the fishermen's personal knowledge of the fishing grounds than of dependence on special detection equipment. Set netters – also operate gillnets. However, during fishing operations the vessel is not attached to the nets. The size of the vessels varies from open boats to large specialised drifters operating on the high seas.
The wheelhouse is usually located aft, and the front deck is used for handling gear. Normally the nets are set at the stern by steaming ahead. Hauling is done over the side at the forepart of the deck, usually using hydraulically driven net haulers. Wet fish is packed in containers chilled with ice. Larger vessels might freeze the catch. Lift netters – are equipped to operate lift nets, which are held from the vessel's side and raised and lowered by means of outriggers. Lift netters range from open boats about 10 metres long to larger vessels with open-ocean capability. Decked vessels usually have the bridge amidships. Larger vessels are often equipped with winches and derricks for handling the lifting lines, as well as outriggers and light booms. They can be fitted with powerful lights to attract and aggregate the fish to the surface. Open boats are usually unmechanized or use hand-operated winches. Electronic equipment, such as fishfinders, sonar and echo sounders, is used extensively on larger boats. Trap setters – are used to set pots or traps for catching fish, crabs, lobsters, crayfish and other similar species. Trap setters range in size from open boats operating inshore to larger decked vessels, 20 to 50 metres long, operating out to the edge of the continental shelf. Small decked trap setters have the wheelhouse either forward or aft, with the fish hold amidships. They use hydraulic or mechanical pot haulers. Larger vessels have the wheelhouse forward, and are equipped with derricks, davits or cranes for hauling pots aboard. Locating fish is often more a matter of the fishermen's knowledge of the fishing grounds than of the use of special detection equipment. Decked vessels are usually equipped with an echosounder, and large vessels may also have a Loran or GPS. Handliners – are normally undecked vessels used for handlining (fishing with a line and hook). Handliners include canoes and other small or medium-sized vessels. Traditional handliners are less than 12 metres o/a and do not have special gear-handling equipment; there is no winch or gurdy. Locating fish is left to the fishermen's personal knowledge of fishing grounds rather than the use of special electronic equipment. Non-traditional handliners can set and haul using electrically or hydraulically powered reels. These mechanised reels are normally fastened to the gunwale or set on stanchions close to or overhanging the gunwale. They operate all over the world, some in shallow waters, some fishing up to 300 metres deep. No typical deck arrangement exists for handliners. Multipurpose vessels – are vessels which are designed so they can deploy more than one type of fishing gear without major modifications to the vessel. The fish detection equipment present on board also changes according to which fishing gear is being used. Trawler/Purse seiners – are designed so the deck arrangement and equipment, including a suitable combination winch, can be used for both methods. Rollers, blocks, trawl gallows and purse davits need to be arranged so they control the lead of warps and pursing lines in such a way as to reduce the time needed to convert from one method to the other. Typical fish detection equipment includes a sonar and an echosounder. These vessels are usually designed as trawlers, since the power requirement for trawling is higher.
Research vessels – a fisheries research vessel (FRV) requires a platform capable of towing different types of fishing nets, collecting plankton or water samples from a range of depths, and carrying acoustic fish-finding equipment. Fisheries research vessels are often designed and built along the same lines as a large fishing vessel, but with space given over to laboratories and equipment storage, as opposed to storage of the catch. An example of a fisheries research vessel is FRV Scotia. Artisan vessels Artisan fishing is small-scale commercial or subsistence fishing, particularly practices involving coastal or island ethnic groups using traditional fishing techniques and traditional boats. This may also include heritage groups involved in customary fishing practices. According to the FAO, at the end of 2004, the world fishing fleet consisted of about 4 million vessels, of which 2.7 million were undecked (open) boats. While nearly all decked vessels were mechanized, only one-third of the undecked fishing boats were powered, usually with outboard engines. The remaining 1.8 million boats were traditional craft of various types, operated by sail and oars. These figures for small fishing vessels are probably under-reported. The FAO compiles these figures largely from national registers. These records often omit smaller boats where registration is not required or where fishing licences are granted by provincial or municipal authorities. Artisan fishing boats are usually small traditional fishing boats, appropriately designed for use on their local inland waters or coasts. Many localities around the world have developed their own traditional types of fishing boats, adapted to use local materials suitable for boat building and to the specific requirements of the fisheries and sea conditions in their area. Artisan boats are often open (undecked). Many have sails, but they do not usually use much, or any, mechanised or electronic gear. Large numbers of artisan fishing boats are still in use, particularly in developing countries with long productive marine coastlines. For example, Indonesia has reported about 700,000 fishing boats, 25 percent of which are dugout canoes and half of which are without motors. The Philippines has reported a similar number of small fishing boats. Many of the boats in this area are double-outrigger craft, consisting of a narrow main hull with two attached outriggers, commonly known as jukung in Indonesia and banca in the Philippines. Recreational vessels Recreational fishing is done for leisure or sport, and not for profit or survival. Just about anything that will stay afloat can be called a recreational fishing boat, so long as a fisherman periodically climbs aboard with the intent to catch fish. Usually some form of fishing tackle is brought on board, such as hooks and lines, rods and reels, sinkers or nets, and occasionally high-tech devices such as fishfinders and diving drones. Fish are caught for recreational purposes from boats that range from dugout canoes, kayaks, rafts, pontoon boats and small dinghies to runabouts, cabin cruisers and yachts to large, high-tech and luxurious big-game boats sometimes fitted with outriggers. Larger boats, purpose-built with recreational fishing in mind, usually have large, open cockpits at the stern, designed for convenient fishing. Big game fishing started as a sport after the invention of the motorized boat.
Charles Frederick Holder, a marine biologist and early conservationist, is credited with founding the sport in 1898. Purpose-built game fishing boats appeared shortly after. An example is the Crete, in use at Catalina Island, California, in 1915, and shipped to Hawaii the following year. According to a newspaper report at that time, the Crete had "a deep cockpit, a chair fitted for landing big fish and leather pockets for placing the pole." It is difficult to estimate how many recreational fishing boats there are, although the number is high. The term is fluid, since most recreational boats are also used for fishing from time to time. Unlike most commercial fishing vessels, recreational fishing boats are often not dedicated just to fishing, but also to other water sports such as water skiing, parasailing and underwater diving. Fishing kayaks have gained popularity in recent years. The kayak has long been a means of accessing fishing grounds. Pontoon boats have also become popular in recent years. These boats allow one or two fishermen to get into small rivers or lakes that would have difficulty accommodating larger boats. Typically 8–12 ft in length, these inflatable craft can be assembled quickly and easily. Some feature rigid frames derived from the white water rafting industry. Bass boats are small aluminium or fibreglass motorboats used in freshwater lakes and rivers in the United States for fishing bass and other panfish. They have a flat front deck, swivel chairs for the anglers, storage bins for fishing tackle, and a live well with recirculating water to keep caught fish alive. They are usually fitted with an outboard motor and a slower trolling motor, as well as a fishfinder and GPS navigation. Charter boats are often privately operated, purpose-built fishing boats, and host guided fishing trips for paying clients. Their size can range widely depending on the type of trips run and the geographical location. Freshwater fishing boats account for approximately one third of all registered boats in the USA. Most other types of boats end up being used for fishing on occasion. Saltwater fishing boats vary widely in size and can be specialized for certain species of fish. Flounder boats, for example, have flat bottoms for a shallow draft and are used in protected, shallow waters. Sport fishing boats range from 25 to 80 feet or more, and can be powered by large outboard engines or inboard diesels. Boats used for fishing in cold climates may have space dedicated to a cuddy cabin or enclosed wheelhouse, while boats in warmer climates are more likely to be open.
Technology
Maritime transport
null
18117469
https://en.wikipedia.org/wiki/Strong%20gravitational%20lensing
Strong gravitational lensing
Strong gravitational lensing is a gravitational lensing effect that is strong enough to produce multiple images, arcs, or Einstein rings. Generally, for strong lensing to occur, the projected lens mass density must be greater than the critical surface density, that is $\Sigma > \Sigma_{\mathrm{cr}}$. For point-like background sources, there will be multiple images; for extended background emissions, there can be arcs or rings. Topologically, multiple image production is governed by the odd number theorem. Strong lensing was predicted by Albert Einstein's general theory of relativity and observationally discovered by Dennis Walsh, Bob Carswell, and Ray Weymann in 1979. They determined that the Twin Quasar Q0957+561 comprises two images of the same object. Observations Most strong gravitational lenses are detected by large-scale galaxy surveys. Galaxy lensing The foreground lens is a galaxy. When the background source is a quasar or an unresolved jet, the strong lensed images are usually point-like multiple images; when the background source is a galaxy or extended jet emission, the strong lensed images can be arcs or rings. As of 2017, several hundred galaxy-galaxy (g-g) strong lenses have been observed. The upcoming Vera C. Rubin Observatory and Euclid surveys are expected to discover more than 100,000 such objects. Cluster lensing The foreground lens is a galaxy cluster. In this case, the lens is usually powerful enough to produce both noticeable strong lensing effects (multiple images, arcs or rings) and weak lensing effects (ellipticity distortions). The lensing system nicknamed the "Molten Ring" is an example. Astrophysical applications Mass profiles Because the strong lensing of a background source depends only on the gravitational potential of the foreground mass, this phenomenon can be used to constrain the mass model of lenses. With the constraints from multiple images or arcs, a proposed mass model can be optimized to fit the observables. The subgalactic structures that currently interest lensing astronomers are the central mass distribution and dark matter halos. Time delays Since the light rays go through different paths to produce multiple images, they will be delayed by local potentials along the light paths. The time delay differences between different images are determined by the mass model and the cosmological model. Thus, with observed time delays and a constrained mass model, cosmological parameters such as the Hubble constant can be inferred.
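As a rough numerical illustration of the condition $\Sigma > \Sigma_{\mathrm{cr}}$ introduced at the top of this article, the following Python sketch computes the standard expression for the critical surface density, $\Sigma_{\mathrm{cr}} = \frac{c^2}{4\pi G}\frac{D_s}{D_d D_{ds}}$, using the astropy package; the cosmology and the lens and source redshifts below are assumed example inputs, not measurements from any particular system.

```python
# A minimal sketch (illustrative inputs): the lensing critical surface density
# Sigma_cr = c^2 * D_s / (4 * pi * G * D_d * D_ds), where D_d, D_s and D_ds are
# angular diameter distances to the lens, to the source, and between the two.
import math
from astropy.cosmology import FlatLambdaCDM
from astropy.constants import c, G
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)  # assumed cosmology

z_lens, z_source = 0.5, 2.0                       # assumed example redshifts
D_d = cosmo.angular_diameter_distance(z_lens)     # observer -> lens
D_s = cosmo.angular_diameter_distance(z_source)   # observer -> source
D_ds = cosmo.angular_diameter_distance_z1z2(z_lens, z_source)  # lens -> source

sigma_cr = c**2 * D_s / (4 * math.pi * G * D_d * D_ds)
print(sigma_cr.to(u.kg / u.m**2))  # strong lensing requires Sigma > Sigma_cr
```

For this assumed configuration the result is of order a few kilograms per square metre; only where the projected mass density exceeds such values can multiple images form.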
Physical sciences
Basics_2
Astronomy
16932084
https://en.wikipedia.org/wiki/Toxic%20injury
Toxic injury
A toxic injury is a type of injury resulting from exposure to a toxin. Toxic injuries can manifest as teratogenic effects, respiratory effects, gastrointestinal effects, cardiovascular effects, hepatic effects, renal effects, neurological effects, or a combination thereof. They can also produce delayed effects, including various forms of cancer and learning disability. Effects can occur after acute (short-term) or chronic (long-term) exposure, depending on the toxicity, dose, and method of exposure. Signs and symptoms Every toxic injury or exposure to a toxin has different effects and symptoms. Some toxic effects do not necessarily cause permanent damage and can be reversible. However, some toxins can cause irreversible permanent damage. Depending on the potency of the substance, a toxin may affect just one particular organ system or may produce generalized toxicity by affecting a number of systems. A variety of symptoms occur depending on how and where the toxic injuries affect the body. Generally, if the toxins affect the respiratory system, the symptoms are coughing, tight chest, difficulty in breathing and nose and throat irritation. Miscarriage or infertility can occur if the toxin affects the ovaries or testes. Depression, severe headaches and dizziness are common symptoms of toxins affecting the spinal cord and brain. Visible reactions such as skin rashes, swelling and eye redness are common. Exposure to asbestos can lead to mesothelioma, a cancer that can cause serious damage to the lining of the lungs. The symptoms include shortness of breath, cough, night sweats and fever. Causes There are thousands of causes of toxic injuries. A toxic injury is caused when one comes in contact with any toxin. However, some causes are still unknown or extremely uncommon. Generally, a toxic injury falls into one of two categories: an injury due to an environmental toxin, or an injury due to chemical exposure. An environmental toxin is one that is found naturally in our surroundings. (For example, molds. Mold spores may be found both indoors and outdoors.) Although it is possible for an environmental toxin to be produced with human intervention (such as pesticides), it is still considered natural. Injuries due to chemical exposure are often more severe due to the nature of these highly toxic substances. Common toxins that may cause a chemical toxic injury are found in consumer products, pharmaceuticals and industrial products. Environmental toxic injuries Many commonly found natural substances may be toxic. They can be found in our air, water or food supply. The top ten most common environmental toxins are: PCBs Pesticides Molds Phthalates VOCs Dioxins Asbestos Heavy metals Chloroform Chlorine Prevention Avoiding direct exposure to toxins will reduce the risk of toxic injury. For safety reasons, wear personal protective equipment when working near environmental or chemical toxins. Many countries have guides classifying dangerous goods and identifying the risks associated with them, such as Canada's Workplace Hazardous Materials Information System. These guides have been developed to ensure worker safety when handling dangerous chemical toxins. It is also important to maintain a clean and dry environment to prevent toxic molds in the home and workplace. In the event that a toxic injury occurs, victims may have the option to file a specific type of lawsuit called a toxic tort.
Biology and health sciences
Types
Health
16937772
https://en.wikipedia.org/wiki/Culcita%20%28echinoderm%29
Culcita (echinoderm)
Culcita is a genus of sea stars. They are found in tropical waters. Some are kept in home aquariums. Description and characteristics These are very distinctive sea stars, plump and pillow-shaped, more or less pentagonal. Their five arms have been reduced to mere obtuse angles (sometimes rounded off or truncated). They can measure up to 30 cm in diameter, and are typical of Indo-Pacific coral reefs, where they feed on benthic invertebrates and coral. Two species, Culcita novaeguineae and Culcita schmideliana, are extremely similar and almost impossible to differentiate by sight, except that C. schmideliana has larger tubercles, which are normally absent from papular areas (though both species can also be naked). They are thus distinguished mostly by their area of distribution: C. schmideliana lives in the Indian Ocean (from Africa to the Maldives), and C. novaeguineae in Oceania and the Pacific Ocean. The third species, C. coriacea, lives in the Red Sea and around Arabia, and is slightly different in appearance. This genus is not to be confused with similar cushion-shaped species such as Halityle regularis. The juveniles are flat and pentagonal, and can look like "biscuit sea stars" from the family Goniasteridae (such as Peltaster spp.). List of species The genus contains three species: Culcita coriacea Müller & Troschel, 1842 – Red Sea and Arabian region Culcita novaeguineae Müller & Troschel, 1842 – Indonesian region and Pacific Ocean Culcita schmideliana (Retzius, 1805) – Indian Ocean
Biology and health sciences
Echinoderms
Animals
14105333
https://en.wikipedia.org/wiki/Electric%20power%20system
Electric power system
An electric power system is a network of electrical components deployed to supply, transfer, and use electric power. An example of a power system is the electrical grid that provides power to homes and industries within an extended area. The electrical grid can be broadly divided into the generators that supply the power, the transmission system that carries the power from the generating centers to the load centers, and the distribution system that feeds the power to nearby homes and industries. Smaller power systems are also found in industry, hospitals, commercial buildings, and homes. A single line diagram helps to represent this whole system. The majority of these systems rely upon three-phase AC power—the standard for large-scale power transmission and distribution across the modern world. Specialized power systems that do not always rely upon three-phase AC power are found in aircraft, electric rail systems, ocean liners, submarines, and automobiles. History In 1881, two electricians built the world's first power system at Godalming in England. It was powered by two water wheels and produced an alternating current that in turn supplied seven Siemens arc lamps at 250 volts and 34 incandescent lamps at 40 volts. However, supply to the lamps was intermittent, and in 1882 Thomas Edison and his company, the Edison Electric Light Company, developed the first steam-powered electric power station on Pearl Street in New York City. The Pearl Street Station initially powered around 3,000 lamps for 59 customers. The power station generated direct current and operated at a single voltage. Direct current power could not be transformed easily or efficiently to the higher voltages necessary to minimize power loss during long-distance transmission, so the maximum economic distance between the generators and load was limited to around half a mile (800 m). That same year in London, Lucien Gaulard and John Dixon Gibbs demonstrated the "secondary generator"—the first transformer suitable for use in a real power system. The practical value of Gaulard and Gibbs' transformer was demonstrated in 1884 at Turin, where the transformer was used to light up a stretch of railway from a single alternating current generator. Despite the success of the system, the pair made some fundamental mistakes. Perhaps the most serious was connecting the primaries of the transformers in series, so that active lamps would affect the brightness of other lamps further down the line. In 1885, Ottó Titusz Bláthy, working with Károly Zipernowsky and Miksa Déri, perfected the secondary generator of Gaulard and Gibbs, providing it with a closed iron core and its present name: the "transformer". The three engineers went on to present a power system at the National General Exhibition of Budapest that implemented the parallel AC distribution system proposed by a British scientist, in which several power transformers have their primary windings fed in parallel from a high-voltage distribution line. The system lit more than 1000 carbon filament lamps and operated successfully from May until November of that year. Also in 1885, George Westinghouse, an American entrepreneur, obtained the patent rights to the Gaulard-Gibbs transformer and imported a number of them along with a Siemens generator, and set his engineers to experimenting with them in hopes of improving them for use in a commercial power system.
In 1886, one of Westinghouse's engineers, William Stanley, independently recognized the problem with connecting transformers in series as opposed to parallel, and also realized that making the iron core of a transformer a fully enclosed loop would improve the voltage regulation of the secondary winding. Using this knowledge he built a multi-voltage transformer-based alternating-current power system serving multiple homes and businesses at Great Barrington, Massachusetts in 1886. The system was unreliable and short-lived, though, due primarily to generation issues. However, based on that system, Westinghouse would begin installing AC transformer systems in competition with the Edison Company later that year. In 1888, Westinghouse licensed Nikola Tesla's patents for a polyphase AC induction motor and transformer designs. Tesla consulted for a year at the Westinghouse Electric & Manufacturing Company, but it took a further four years for Westinghouse engineers to develop a workable polyphase motor and transmission system. By 1889, the electric power industry was flourishing, and power companies had built thousands of power systems (both direct and alternating current) in the United States and Europe. These networks were effectively dedicated to providing electric lighting. During this time the rivalry between Thomas Edison's and George Westinghouse's companies had grown into a propaganda campaign over which form of transmission (direct or alternating current) was superior, a series of events known as the "war of the currents". In 1891, Westinghouse installed the first major power system that was designed to drive a synchronous electric motor, as well as provide electric lighting, at Telluride, Colorado. On the other side of the Atlantic, Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown built the first long-distance, high-voltage (15 kV, then a record) three-phase transmission line from Lauffen am Neckar to Frankfurt am Main for the Electrical Engineering Exhibition in Frankfurt, where the power was used to light lamps and run a water pump. In the United States the AC/DC competition came to an end when Edison General Electric was taken over by its chief AC rival, the Thomson-Houston Electric Company, forming General Electric. In 1895, after a protracted decision-making process, alternating current was chosen as the transmission standard, with Westinghouse building the Adams No. 1 generating station at Niagara Falls and General Electric building the three-phase alternating current power system to supply Buffalo at 11 kV. Developments in power systems continued beyond the nineteenth century. In 1936 the first experimental high voltage direct current (HVDC) line using mercury arc valves was built between Schenectady and Mechanicville, New York. HVDC had previously been achieved by series-connected direct current generators and motors (the Thury system), although this suffered from serious reliability issues. The first solid-state metal diode suitable for general power uses was developed by Ernst Presser at TeKaDe in 1928. It consisted of a layer of selenium applied on an aluminum plate. In 1957, a General Electric research group developed the first thyristor suitable for use in power applications, starting a revolution in power electronics. In that same year, Siemens demonstrated a solid-state rectifier, but it was not until the early 1970s that solid-state devices became the standard in HVDC, when GE emerged as one of the top suppliers of thyristor-based HVDC.
In 1979, a European consortium including Siemens, Brown Boveri & Cie and AEG realized a record HVDC link from Cabora Bassa to Johannesburg, carrying 1.9 GW at 533 kV over a then-record distance. In recent times, many important developments have come from extending innovations in the information and communications technology (ICT) field to the power engineering field. For example, the development of computers meant load flow studies could be run more efficiently, allowing for much better planning of power systems. Advances in information technology and telecommunication also allowed for effective remote control of a power system's switchgear and generators. Basics of electric power Electric power is the product of two quantities: current and voltage. These two quantities can vary with respect to time (AC power) or can be kept at constant levels (DC power). Most refrigerators, air conditioners, pumps and industrial machinery use AC power, whereas most computers and digital equipment use DC power (digital devices plugged into the mains typically have an internal or external power adapter to convert from AC to DC power). AC power has the advantage of being easy to transform between voltages and of being able to be generated and utilised by brushless machinery. DC power remains the only practical choice in digital systems and can be more economical to transmit over long distances at very high voltages (see HVDC). The ability to easily transform the voltage of AC power is important for two reasons: firstly, power can be transmitted over long distances with less loss at higher voltages. So in power systems where generation is distant from the load, it is desirable to step-up (increase) the voltage of power at the generation point and then step-down (decrease) the voltage near the load. Secondly, it is often more economical to install turbines that produce higher voltages than would be used by most appliances, so the ability to easily transform voltages means this mismatch between voltages can be easily managed. Solid-state devices, which are products of the semiconductor revolution, make it possible to transform DC power to different voltages, build brushless DC machines and convert between AC and DC power. Nevertheless, devices utilising solid-state technology are often more expensive than their traditional counterparts, so AC power remains in widespread use. Components of power systems Supplies All power systems have one or more sources of power. For some power systems, the source of power is external to the system, but for others, it is part of the system itself—it is these internal power sources that are discussed in the remainder of this section. Direct current power can be supplied by batteries, fuel cells or photovoltaic cells. Alternating current power is typically supplied by a rotor that spins in a magnetic field in a device known as a turbo generator. There have been a wide range of techniques used to spin a turbine's rotor, from steam heated using fossil fuel (including coal, gas and oil) or nuclear energy, to falling water (hydroelectric power) and wind (wind power). The speed at which the rotor spins in combination with the number of generator poles determines the frequency of the alternating current produced by the generator. All generators on a single synchronous system, for example the national grid, rotate at sub-multiples of the same speed and so generate electric current at the same frequency.
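The first of the two advantages of voltage transformation noted above can be made concrete with a minimal Python sketch, assuming an illustrative line resistance and delivered power: for a fixed delivered power $P = VI$, the current is $I = P/V$, so the resistive line loss $I^2R$ falls with the square of the transmission voltage.

```python
# A minimal sketch (assumed example values): why higher transmission voltage
# means lower loss. For fixed delivered power P = V * I, the current is
# I = P / V, so the resistive line loss I^2 * R falls as 1 / V^2.
LINE_RESISTANCE_OHMS = 5.0    # assumed total line resistance
POWER_WATTS = 10e6            # 10 MW to deliver (assumed)

for volts in (11e3, 132e3, 400e3):   # representative voltage levels
    current = POWER_WATTS / volts
    loss = current**2 * LINE_RESISTANCE_OHMS
    print(f"{volts / 1e3:5.0f} kV: I = {current:7.1f} A, loss = {loss / 1e3:9.1f} kW")
```

With these example numbers, moving from 11 kV to 132 kV cuts the loss by a factor of (132/11)² = 144, which is why generation-side step-up transformers pay for themselves on long lines.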
If the load on the system increases, the generators will require more torque to spin at that speed and, in a steam power station, more steam must be supplied to the turbines driving them. Thus the steam used and the fuel expended directly relate to the quantity of electrical energy supplied. An exception exists for generators incorporating power electronics, such as gearless wind turbines, or those linked to a grid through an asynchronous tie such as an HVDC link — these can operate at frequencies independent of the power system frequency. Depending on how the poles are fed, alternating current generators can produce a variable number of phases of power. A higher number of phases leads to more efficient power system operation but also increases the infrastructure requirements of the system. Electricity grid systems connect multiple generators operating at the same frequency: the most common being three-phase at 50 or 60 Hz. There are a range of design considerations for power supplies, ranging from the obvious to the more technical. Among the obvious: How much power should the generator be able to supply? What is an acceptable length of time for starting the generator (some generators can take hours to start)? Is the availability of the power source acceptable (some renewables are only available when the sun is shining or the wind is blowing)? Among the more technical: How should the generator start (some turbines act like a motor to bring themselves up to speed, in which case they need an appropriate starting circuit)? What is the mechanical speed of operation for the turbine, and consequently what is the number of poles required? What type of generator is suitable (synchronous or asynchronous), and what type of rotor (squirrel-cage rotor, wound rotor, salient pole rotor or cylindrical rotor)? Loads Power systems deliver energy to loads that perform a function. These loads range from household appliances to industrial machinery. Most loads expect a certain voltage and, for alternating current devices, a certain frequency and number of phases. The appliances found in residential settings, for example, will typically be single-phase, operating at 50 or 60 Hz with a voltage between 110 and 260 volts (depending on national standards). An exception exists for larger centralized air conditioning systems, as these are now often three-phase because this allows them to operate more efficiently. All electrical appliances also have a wattage rating, which specifies the amount of power the device consumes. At any one time, the net amount of power consumed by the loads on a power system must equal the net amount of power produced by the supplies, less the power lost in transmission. Making sure that the voltage, frequency and amount of power supplied to the loads is in line with expectations is one of the great challenges of power system engineering. However, it is not the only challenge; in addition to the power used by a load to do useful work (termed real power), many alternating current devices also use an additional amount of power because they cause the alternating voltage and alternating current to become slightly out-of-sync (termed reactive power). Reactive power, like real power, must balance (that is, the reactive power produced on a system must equal the reactive power consumed) and can be supplied from the generators; however, it is often more economical to supply such power from capacitors (see "Capacitors and reactors" below for more details).
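The real/reactive split described above can be illustrated with a short Python sketch; the supply voltage, load current and phase angle below are assumed example values for an inductive load.

```python
# A minimal sketch (assumed values) of the real/reactive power split for an AC
# load whose current lags the voltage by a phase angle phi:
#   S = V * I (apparent), P = S * cos(phi) (real), Q = S * sin(phi) (reactive).
import math

v_rms = 230.0            # assumed supply voltage, volts
i_rms = 10.0             # assumed load current, amps
phi = math.radians(30)   # assumed phase lag of an inductive load

apparent = v_rms * i_rms
real = apparent * math.cos(phi)
reactive = apparent * math.sin(phi)
print(f"S = {apparent:.0f} VA, P = {real:.0f} W, "
      f"Q = {reactive:.0f} var, power factor = {math.cos(phi):.2f}")
```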
A final consideration with loads has to do with power quality. In addition to sustained overvoltages and undervoltages (voltage regulation issues) as well as sustained deviations from the system frequency (frequency regulation issues), power system loads can be adversely affected by a range of temporal issues. These include voltage sags, dips and swells, transient overvoltages, flicker, high-frequency noise, phase imbalance and poor power factor. Power quality issues occur when the power supply to a load deviates from the ideal. Power quality issues can be especially important when it comes to specialist industrial machinery or hospital equipment. Conductors Conductors carry power from the generators to the load. In a grid, conductors may be classified as belonging to the transmission system, which carries large amounts of power at high voltages (typically more than 69 kV) from the generating centres to the load centres, or the distribution system, which feeds smaller amounts of power at lower voltages (typically less than 69 kV) from the load centres to nearby homes and industry. Choice of conductors is based on considerations such as cost, transmission losses and other desirable characteristics of the metal, like tensile strength. Copper, with lower resistivity than aluminum, was once the conductor of choice for most power systems. However, aluminum has a lower cost for the same current carrying capacity and is now often the conductor of choice. Overhead line conductors may be reinforced with steel or aluminium alloys. Conductors in exterior power systems may be placed overhead or underground. Overhead conductors are usually air insulated and supported on porcelain, glass or polymer insulators. Cables used for underground transmission or building wiring are insulated with cross-linked polyethylene or other flexible insulation. Conductors are often stranded to make them more flexible and therefore easier to install. Conductors are typically rated for the maximum current that they can carry at a given temperature rise over ambient conditions. As current flow increases through a conductor it heats up. For insulated conductors, the rating is determined by the insulation. For bare conductors, the rating is determined by the point at which the sag of the conductors would become unacceptable. Capacitors and reactors The majority of the load in a typical AC power system is inductive; the current lags behind the voltage. Since the voltage and current are out-of-phase, this leads to the emergence of an "imaginary" form of power known as reactive power. Reactive power does no measurable work but is transmitted back and forth between the reactive power source and load every cycle. This reactive power can be provided by the generators themselves, but it is often cheaper to provide it through capacitors; hence capacitors are often placed near inductive loads (if not on-site, then at the nearest substation) to reduce current demand on the power system (i.e. increase the power factor). Reactors consume reactive power and are used to regulate voltage on long transmission lines. In light load conditions, where the loading on transmission lines is well below the surge impedance loading, the efficiency of the power system may actually be improved by switching in reactors. Reactors installed in series in a power system also limit rushes of current flow; small reactors are therefore almost always installed in series with capacitors to limit the current rush associated with switching in a capacitor. Series reactors can also be used to limit fault currents.
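As a sketch of the economics just described, the following Python example sizes a shunt power-factor-correction capacitor for an assumed single-phase inductive load; the load power, power factors, voltage and frequency are all illustrative. The capacitor must supply $Q_c = P(\tan\varphi_1 - \tan\varphi_2)$, and the capacitance follows from $C = Q_c / (2\pi f V^2)$.

```python
# A minimal sketch (assumed single-phase values) sizing a power-factor-
# correction capacitor: Q_c = P * (tan(phi_1) - tan(phi_2)) is the reactive
# power the capacitor must supply; C = Q_c / (2 * pi * f * V^2) for a shunt
# capacitor across a V-volt, f-hertz supply.
import math

p_watts = 50_000.0            # assumed real power of the inductive load
pf_old, pf_new = 0.70, 0.95   # assumed power factor before/after correction
v_rms, freq = 230.0, 50.0     # assumed supply voltage and frequency

q_c = p_watts * (math.tan(math.acos(pf_old)) - math.tan(math.acos(pf_new)))
c_farads = q_c / (2 * math.pi * freq * v_rms**2)
print(f"capacitor supplies {q_c / 1e3:.1f} kvar -> C = {c_farads * 1e6:.0f} uF")
```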
Capacitors and reactors are switched by circuit breakers, which results in sizeable step changes of reactive power. A solution to this comes in the form of synchronous condensers, static VAR compensators and static synchronous compensators. Briefly, synchronous condensers are synchronous motors that spin freely to generate or absorb reactive power. Static VAR compensators work by switching in capacitors using thyristors as opposed to circuit breakers, allowing capacitors to be switched-in and switched-out within a single cycle. This provides a far more refined response than circuit-breaker-switched capacitors. Static synchronous compensators take this a step further by achieving reactive power adjustments using only power electronics. Power electronics Power electronics are semiconductor-based devices that are able to switch quantities of power ranging from a few hundred watts to several hundred megawatts. Despite their relatively simple function, their speed of operation (typically in the order of nanoseconds) means they are capable of a wide range of tasks that would be difficult or impossible with conventional technology. The classic function of power electronics is rectification, or the conversion of AC-to-DC power; power electronics are therefore found in almost every digital device that is supplied from an AC source, either as an adapter that plugs into the wall or as a component internal to the device. High-powered power electronics can also be used to convert AC power to DC power for long distance transmission in a system known as HVDC. HVDC is used because it proves to be more economical than similar high voltage AC systems for very long distances (hundreds to thousands of kilometres). HVDC is also desirable for interconnects because it allows frequency independence, thus improving system stability. Power electronics are also essential for any power source that is required to produce an AC output but that by its nature produces a DC output. They are therefore used by photovoltaic installations. Power electronics also feature in a wide range of more exotic uses. They are at the heart of all modern electric and hybrid vehicles—where they are used for both motor control and as part of the brushless DC motor. Power electronics are also found in practically all modern petrol-powered vehicles; this is because the power provided by the car's batteries alone is insufficient to provide ignition, air-conditioning, internal lighting, radio and dashboard displays for the life of the car. So the batteries must be recharged while driving—a feat that is typically accomplished using power electronics. Some electric railway systems also use DC power and thus make use of power electronics to feed grid power to the locomotives and often for speed control of the locomotive's motor. In the middle twentieth century, rectifier locomotives were popular; these used power electronics to convert AC power from the railway network for use by a DC motor. Today most electric locomotives are supplied with AC power and run using AC motors, but still use power electronics to provide suitable motor control. The use of power electronics to assist with motor control and with starter circuits, in addition to rectification, is responsible for power electronics appearing in a wide range of industrial machinery. Power electronics even appear in modern residential air conditioners and are at the heart of the variable-speed wind turbine.
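As an illustration of the rectification function described above, the following Python sketch numerically averages the output of an idealised full-wave rectifier fed from a sinusoidal supply (the peak voltage is an assumed example); the mean rectified voltage approaches the analytic value $2V_{\text{peak}}/\pi$.

```python
# A minimal sketch (idealised, lossless): the mean DC output of a full-wave
# rectifier is the average of |V_peak * sin(t)| over one cycle, i.e. 2*V_peak/pi.
import math

v_peak = 325.0   # assumed peak of a nominal 230 V RMS supply
n = 100_000      # number of samples across one cycle
mean_dc = sum(abs(v_peak * math.sin(2 * math.pi * k / n)) for k in range(n)) / n
print(f"numeric mean: {mean_dc:.1f} V, analytic 2*Vp/pi: {2 * v_peak / math.pi:.1f} V")
```

Real rectifiers add smoothing capacitors and suffer diode drops, so this is the upper-bound behaviour of the ideal circuit rather than a device model.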
Protective devices Power systems contain protective devices to prevent injury or damage during failures. The quintessential protective device is the fuse. When the current through a fuse exceeds a certain threshold, the fuse element melts, producing an arc across the resulting gap that is then extinguished, interrupting the circuit. Given that fuses can be built as the weak point of a system, fuses are ideal for protecting circuitry from damage. Fuses, however, have two problems. First, after they have functioned, fuses must be replaced, as they cannot be reset. This can prove inconvenient if the fuse is at a remote site or a spare fuse is not on hand. And second, fuses are typically inadequate as the sole safety device in most power systems, as they allow current flows well in excess of those that would prove lethal to a human or animal. The first problem is resolved by the use of circuit breakers—devices that can be reset after they have broken current flow. In modern systems that use less than about 10 kW, miniature circuit breakers are typically used. These devices combine the mechanism that initiates the trip (by sensing excess current) and the mechanism that breaks the current flow in a single unit. Some miniature circuit breakers operate solely on the basis of electromagnetism. In these miniature circuit breakers, the current is run through a solenoid, and, in the event of excess current flow, the magnetic pull of the solenoid is sufficient to force open the circuit breaker's contacts (often indirectly through a tripping mechanism). In higher powered applications, the protective relays that detect a fault and initiate a trip are separate from the circuit breaker. Early relays worked based upon electromagnetic principles similar to those mentioned in the previous paragraph; modern relays are application-specific computers that determine whether to trip based upon readings from the power system. Different relays will initiate trips depending upon different protection schemes. For example, an overcurrent relay might initiate a trip if the current on any phase exceeds a certain threshold, whereas a set of differential relays might initiate a trip if the sum of currents between them indicates there may be current leaking to earth. The circuit breakers in higher powered applications are different too. Air is typically no longer sufficient to quench the arc that forms when the contacts are forced open, so a variety of techniques are used. One of the most popular techniques is to keep the chamber enclosing the contacts flooded with sulfur hexafluoride (SF6)—a non-toxic gas with sound arc-quenching properties. Other techniques are discussed in the reference. The second problem, the inadequacy of fuses to act as the sole safety device in most power systems, is probably best resolved by the use of residual-current devices (RCDs). In any properly functioning electrical appliance, the current flowing into the appliance on the active line should equal the current flowing out of the appliance on the neutral line. A residual current device works by monitoring the active and neutral lines and tripping the active line if it notices a difference. Residual current devices require a separate neutral line for each phase and must be able to trip within a time frame before harm occurs.
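The residual-current principle lends itself to a small Python sketch; the 30 mA threshold below is a common rating for personal protection, though it is an assumption here, and real devices also enforce a maximum trip time rather than comparing currents alone.

```python
# A minimal sketch (illustrative threshold) of the residual-current principle:
# trip when the active and neutral currents differ, i.e. when current is
# leaking to earth instead of returning on the neutral line.
TRIP_THRESHOLD_AMPS = 0.030  # assumed 30 mA residual-current rating

def rcd_should_trip(active_amps: float, neutral_amps: float) -> bool:
    """Return True when the residual (leakage) current exceeds the threshold."""
    return abs(active_amps - neutral_amps) > TRIP_THRESHOLD_AMPS

print(rcd_should_trip(10.00, 10.00))  # False: all current returns on neutral
print(rcd_should_trip(10.00, 9.95))   # True: 50 mA is leaking to earth
```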
Meeting these requirements is typically not a problem in most residential applications, where standard wiring provides an active and neutral line for each appliance (that is why power plugs always have at least two prongs) and the voltages are relatively low; however, these issues limit the effectiveness of RCDs in other applications, such as industry. Even with the installation of an RCD, exposure to electricity can still prove fatal. SCADA systems In large electric power systems, supervisory control and data acquisition (SCADA) is used for tasks such as switching on generators, controlling generator output and switching in or out system elements for maintenance. The first supervisory control systems implemented consisted of a panel of lamps and switches at a central console near the controlled plant. The lamps provided feedback on the state of the plant (the data acquisition function) and the switches allowed adjustments to the plant to be made (the supervisory control function). Today, SCADA systems are much more sophisticated and, due to advances in communication systems, the consoles controlling the plant no longer need to be near the plant itself. Instead, it is now common for plants to be controlled with equipment similar (if not identical) to a desktop computer. The ability to control such plants through computers has increased the need for security—there have already been reports of cyber-attacks on such systems causing significant disruptions to power systems. Power systems in practice Despite their common components, power systems vary widely both with respect to their design and how they operate. This section introduces some common power system types and briefly explains their operation. Residential power systems Residential dwellings almost always take supply from the low voltage distribution lines or cables that run past the dwelling. These operate at voltages of between 110 and 260 volts (phase-to-earth) depending upon national standards. A few decades ago small dwellings would be fed a single phase using a dedicated two-core service cable (one core for the active phase and one core for the neutral return). The active line would then be run through a main isolating switch in the fuse box and then split into one or more circuits to feed lighting and appliances inside the house. By convention, the lighting and appliance circuits are kept separate so the failure of an appliance does not leave the dwelling's occupants in the dark. All circuits would be fused with an appropriate fuse based upon the wire size used for that circuit. Circuits would have both an active and neutral wire with both the lighting and power sockets being connected in parallel. Sockets would also be provided with a protective earth. This would be made available to appliances to connect to any metallic casing. If this casing were to become live, the theory is that the connection to earth would cause an RCD or fuse to trip, thus preventing the electrocution of an occupant handling the appliance. Earthing systems vary between regions, but in countries such as the United Kingdom and Australia both the protective earth and neutral line would be earthed together near the fuse box before the main isolating switch, and the neutral earthed once again back at the distribution transformer. There have been a number of minor changes over the years to the practice of residential wiring.
Some of the most significant ways modern residential power systems in developed countries tend to vary from older ones include: For convenience, miniature circuit breakers are now almost always used in the fuse box instead of fuses, as these can easily be reset by occupants and, if of the thermomagnetic type, can respond more quickly to some types of fault. For safety reasons, RCDs are now often installed on appliance circuits and, increasingly, even on lighting circuits. Whereas residential air conditioners of the past might have been fed from a dedicated circuit attached to a single phase, larger centralised air conditioners that require three-phase power are now becoming common in some countries. Protective earths are now run with lighting circuits to allow for metallic lamp holders to be earthed. Increasingly, residential power systems are incorporating microgenerators, most notably photovoltaic cells. Commercial power systems Commercial power systems, such as shopping centers or high-rise buildings, are larger in scale than residential systems. Electrical designs for larger commercial systems are usually studied for load flow, short-circuit fault levels and voltage drop. The objectives of the studies are to assure proper equipment and conductor sizing, and to coordinate protective devices so that minimal disruption is caused when a fault is cleared. Large commercial installations will have an orderly system of sub-panels, separate from the main distribution board, to allow for better system protection and more efficient electrical installation. Typically one of the largest appliances connected to a commercial power system in hot climates is the HVAC unit, and ensuring this unit is adequately supplied is an important consideration in commercial power systems. Regulations for commercial establishments place other requirements on commercial systems that are not placed on residential systems. For example, in Australia, commercial systems must comply with AS 2293, the standard for emergency lighting, which requires emergency lighting be maintained for at least 90 minutes in the event of loss of mains supply. In the United States, the National Electrical Code requires commercial systems to be built with at least one 20 A sign outlet in order to light outdoor signage. Building code regulations may place special requirements on the electrical system for emergency lighting, evacuation, emergency power, smoke control and fire protection. Power system management Power system management varies depending upon the power system. Residential power systems and even automotive electrical systems are often run-to-fail. In aviation, the power system uses redundancy to ensure availability. On the Boeing 747-400, any of the four engines can provide power, and circuit breakers are checked as part of power-up (a tripped circuit breaker indicating a fault). Larger power systems require active management. In industrial plants or mining sites, a single team might be responsible for fault management, augmentation and maintenance. For the electric grid, by contrast, management is divided amongst several specialised teams. Fault management Fault management involves monitoring the behaviour of the power system so as to identify and correct issues that affect the system's reliability. Fault management can be specific and reactive: for example, dispatching a team to restring conductor that has been brought down during a storm.
Or, alternatively, it can focus on systemic improvements, such as the installation of reclosers on sections of the system that are subject to frequent temporary disruptions (as might be caused by vegetation, lightning or wildlife). Maintenance and augmentation In addition to fault management, power systems may require maintenance or augmentation. As it is often neither economical nor practical for large parts of the system to be offline during this work, power systems are built with many switches. These switches allow the part of the system being worked on to be isolated while the rest of the system remains live. At high voltages, there are two switches of note: isolators and circuit breakers. Circuit breakers are load-breaking switches, whereas operating isolators under load would lead to unacceptable and dangerous arcing. In a typical planned outage, several circuit breakers are tripped to allow the isolators to be switched before the circuit breakers are again closed to reroute power around the isolated area. This allows work to be completed on the isolated area. Frequency and voltage management Beyond fault management and maintenance, one of the main difficulties in power systems is that the active power consumed plus losses must equal the active power produced. If load is reduced while generation inputs remain constant, the synchronous generators will spin faster and the system frequency will rise. The opposite occurs if load is increased. As such, the system frequency must be actively managed, primarily through switching dispatchable loads and generation on and off. Making sure the frequency is constant is usually the task of a system operator. Even with frequency maintained, the system operator can be kept occupied ensuring:
Technology
Electricity transmission and distribution
null
704287
https://en.wikipedia.org/wiki/Aerostat
Aerostat
An aerostat (via French aérostat) or lighter-than-air aircraft is an aircraft that relies on buoyancy to maintain flight. Aerostats include the unpowered balloons (free-flying or tethered) and the powered airships. The relative density of an aerostat as a whole is lower than that of the surrounding atmospheric air (hence the name "lighter-than-air"). Its main component is one or more gas capsules made of lightweight skins, containing a lifting gas (hot air, or any gas with lower density than air, typically hydrogen or helium) that displaces a large volume of air to generate enough buoyancy to overcome its own weight. Payload (passengers and cargo) can then be carried on attached components such as a basket, a gondola, a cabin or various hardpoints. With airships, which need to be able to fly against the wind, the lifting gas capsules are often protected by a more rigid outer envelope or an airframe, with other gasbags such as ballonets to help modulate buoyancy. Aerostats are so named because they use aerostatic buoyant force that does not require any forward movement through the surrounding air mass, resulting in the inherent ability to levitate and perform vertical takeoff and landing. This contrasts with the heavier-than-air aerodynes that primarily use aerodynamic lift, which must have consistent airflow over an aerofoil (wing) surface to stay airborne. The term has also been used in a narrower sense, to refer to the statically tethered balloon in contrast to the free-flying airship. This article uses the term in its broader sense. Terminology In its broader usage, the term aerostat refers to any aircraft that remains in the air primarily using aerostatic buoyancy. Historically, all aerostats were called balloons. Powered types capable of horizontal flight were referred to as dirigible balloons or simply dirigibles (from the French dirigeable, meaning "steerable"). These powered aerostats later came to be called airships, with the term "balloon" reserved for unpowered types, whether tethered (which means attached to the ground) or free-floating. More recently, the US Government Accountability Office has used the term "aerostat" in a different sense, to distinguish the statically tethered balloon from the free-flying airship. Types Balloons A balloon is an unpowered aerostat which has no means of propulsion and must be either tethered on a long cable or allowed to drift freely with the wind. Although a free balloon travels at the speed of the wind, it is travelling with the wind, so to a passenger the air feels calm and windless. To change its altitude above ground it must either adjust the amount of lift or discard ballast weight. Notable uses of free-flying balloons include meteorological balloons and sport balloons. A tethered balloon is held down by one or more mooring lines or tethers. It has sufficient lift to hold the line taut and its altitude is controlled by winching the line in or out. A tethered balloon does feel the wind. A round balloon is unstable and bobs about in strong winds, so the kite balloon was developed with an aerodynamic shape similar to a non-rigid airship. Both kite balloons and non-rigid airships are sometimes called "blimps". Notable uses of tethered balloons include observation balloons and barrage balloons, and notable uses of untethered balloons include espionage balloons and fire balloons. Airships An airship is a powered, free-flying aerostat that can be steered. Airships divide into rigid, semi-rigid and non-rigid types, with these last often known as blimps.
A rigid airship has an outer framework or skin surrounding the lifting gas bags inside it; the outer envelope keeps its shape even if the gasbags are deflated. The great zeppelin airships of the twentieth century were rigid types. A non-rigid airship or blimp deflates like a balloon as it loses gas. The Goodyear blimps are still a common sight in the USA. A semi-rigid airship has a deflatable gas bag like a non-rigid but with a supporting structure to help it hold its shape while aloft. The first practical airship, the Santos-Dumont No. 6, was a semi-rigid. Some airships obtain additional lift aerodynamically as they travel through the air, using the shape of their envelope or through the addition of fins or even small wings. Types designed to exploit this lifting effect in normal cruise are called hybrid airships. Hybrid aerostats A hybrid type uses both static buoyancy and dynamic airflow to provide lift. The dynamic movement may be created either using propulsive power, as in a hybrid airship, or by tethering in the wind like a kite, as in a Helikite or kytoon. The Allsopp Helikite is a combination of a helium balloon and a kite forming a single, aerodynamically sound tethered aircraft that exploits both wind and helium for its lift. Helikites are semi-rigid. Helikites are considered among the most stable, energy-efficient and cost-efficient aerostats available. This gives Helikites various advantages over traditional aerostats. Traditional aerostats need to utilize relatively low-lift helium gas to combat high winds, which means they need a lot of gas to cope and so are very large, unwieldy and expensive. Helikites exploit wind lift, so they only need to be a fraction of the size of traditional aerostats in order to operate in high winds. Helikites fly at many times the altitude of traditional aerostats of the same size. Being smaller, with fewer construction seams, means Helikites have minimal problems with gas leakage compared to traditional aerostats, so Helikites use far less helium. Helikites do not need ballonets and so are simpler in construction than traditional aerostats, and Helikites do not need constant electrical power to keep them airborne. Helikites are also extremely stable and so are good aerial platforms for cameras or scientific instruments. Tiny Helikites will fly in all weathers, so these sizes are popular as they are very reliable but still easy to handle and do not require large, expensive winches. Helikites can be small enough to fit fully inflated in a car, but they can also be made large if heavy payloads are required to be flown to high altitudes. Helikites are one of the most popular aerostat designs and are widely used by the scientific community, military, photographers, geographers, police and first responders. Helikites are used by telecoms companies to lift 4G and 5G base stations for areas without cellphone coverage. Helikites range in size from 1 metre (gas volume 0.13 m3) with a pure helium lift of 30 g, up to 14 metres (gas volume 250 m3) able to lift 117 kg. Small Helikites can fly up to altitudes of 1,000 ft, and medium-sized Helikites up to altitudes of 3,000 ft, while large Helikites can achieve 7,000 ft. Piasecki Helicopter developed the Piasecki PA-97 Helistat using the rotor systems from four obsolete helicopters and a surplus Navy blimp, in order to provide a capability to lift heavier loads than a single helicopter could provide. The aircraft suffered a fatal accident during a test flight.
In 2008, Boeing and SkyHook International resurrected the concept and announced a proposed design, the SkyHook JHL-40. Lifting gases In order to provide buoyancy, any lifting gas must be less dense than the surrounding air. A hot air balloon is open at the bottom to allow hot air to enter, while the gas balloon is closed to stop the (cold) lifting gas from escaping. Common lifting gases have included hydrogen, coal gas and helium. Hot air When heated, air expands. This lowers its density and creates lift. Small hot air balloons or lanterns have been flown in China since ancient times. The first modern man-lifting aerostat, made by the Montgolfier brothers, was a hot air balloon. Most early balloons, however, were gas balloons. Interest in the sport of hot air ballooning reawoke in the second half of the twentieth century, and even some hot-air airships have been flown. Hydrogen Hydrogen is the lightest of all gases, and a manned hydrogen balloon was flown soon after the Montgolfier brothers' first flights. There is no need to burn fuel, so a gas balloon can stay aloft far longer than a hot-air balloon. Hydrogen soon became the most common lifting gas for both balloons and, later, airships. But hydrogen itself is flammable and, following several major disasters in the 1930s, including the Hindenburg disaster, it fell out of use. Coal gas Coal gas comprises a mix of methane and other gases, and typically has about half the lifting power of hydrogen. In the late nineteenth and early twentieth centuries municipal gas works became common and provided a cheap source of lifting gas. Some works were able to produce a special mix for ballooning events, incorporating a higher proportion of hydrogen and less carbon monoxide, to improve its lifting power. Helium Helium is the only lifting gas which is both non-flammable and non-toxic, and it has almost as much (about 92%) lifting power as hydrogen. It was not discovered in quantity until early in the twentieth century, and for many years only the United States had enough to use in airships. Almost all gas balloons and airships now use helium. Low-pressure gases Although not currently practical, it may be possible to construct a rigid, lighter-than-air structure which, rather than being filled with a lifting gas, holds a vacuum relative to the surrounding air. This would allow the object to float above the ground without any heat or special lifting gas, but the structural challenges of building a rigid vacuum chamber lighter than air are quite significant. Even so, it may be possible to improve the performance of more conventional aerostats by trading gas weight for structural weight, combining the lifting properties of the gas with vacuum and possibly heat for enhanced lift. Buoyancy control The buoyancy control of an aerostat relies on the principles of buoyant force and the manipulation of the gas inside its envelope. Aerostats use lighter-than-air gases, such as helium or hydrogen, which provide lift because they are less dense than the surrounding air. The amount of buoyant force generated depends on the volume of the gas, its density, and the density of the outside atmosphere. By controlling these variables, an aerostat can be made to rise, descend, or maintain a stable altitude. The basic mechanism involves adjusting the volume and pressure of the gas within the aerostat's envelope, often through a system of valves and compartments.
To ascend, the aerostat releases ballast, which typically consists of sandbags or other weights, reducing its overall weight and making it lighter than the air it displaces. Alternatively, it may adjust the temperature of the gas (if using hot air) or expand the volume of gas within its envelope. As the gas volume increases, the aerostat becomes less dense and rises. This is controlled either through heating (in the case of hot air balloons) or by adjusting the valves that manage the flow of gas between different compartments or the outside atmosphere. Helium-based aerostats, such as blimps, rely on maintaining the integrity and volume of the helium within their envelope to achieve a stable lift. When descending, the aerostat must reduce its buoyancy, which can be done by venting some of the gas or by taking on additional ballast. Venting gas allows the envelope to lose volume, making the aerostat denser than the surrounding air and causing it to descend. However, venting must be done cautiously, especially with helium, as it is a limited resource and cannot be replenished easily during flight. Alternatively, an aerostat might use a reversible system where it can compress the gas into smaller compartments within the envelope, reducing lift without permanently losing the gas. By managing these compartments or adjusting the flow of gas, the aerostat’s buoyancy can be precisely controlled. To maintain altitude, an aerostat achieves a balance where the lift force generated by the gas equals the weight of the aerostat. This equilibrium is achieved through small adjustments in ballast or the gas volume. Sophisticated systems might use automatic valves and sensors to monitor atmospheric pressure, gas volume, and temperature, ensuring that the aerostat remains stable without manual intervention. This constant regulation allows aerostats to hover at a fixed altitude for extended periods, making them useful for applications such as surveillance, communication relays, or scientific observations, where maintaining a consistent position in the atmosphere is crucial.
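The buoyancy described throughout this section reduces to a simple calculation: net lift equals the weight of the displaced air minus the weight of the lifting gas. The Python sketch below uses standard textbook sea-level densities (assumed here, and temperature- and pressure-dependent in practice) and an illustrative 1,000 m³ envelope.

```python
# A minimal sketch (textbook sea-level densities, assumed envelope volume):
# gross lift = (rho_air - rho_gas) * volume, the mass an aerostat can support
# before accounting for the weight of its own structure and payload.
RHO_AIR = 1.225  # kg/m^3, standard sea-level air
RHO_GAS = {"hydrogen": 0.090, "helium": 0.179, "hot air (100 C)": 0.946}

volume_m3 = 1000.0  # assumed envelope volume
for gas, rho in RHO_GAS.items():
    lift_kg = (RHO_AIR - rho) * volume_m3
    print(f"{gas:>15}: {lift_kg:6.0f} kg of gross lift")
```

These figures also reproduce the comparison made earlier in the article: helium's gross lift per cubic metre (1.225 − 0.179) is about 92% of hydrogen's (1.225 − 0.090).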
Technology
Types of aircraft
null
704359
https://en.wikipedia.org/wiki/Contour%20integration
Contour integration
In the mathematical field of complex analysis, contour integration is a method of evaluating certain integrals along paths in the complex plane. Contour integration is closely related to the calculus of residues, a method of complex analysis. One use for contour integrals is the evaluation of integrals along the real line that are not readily found by using only real variable methods. Contour integration methods include: direct integration of a complex-valued function along a curve in the complex plane; application of the Cauchy integral formula; and application of the residue theorem. One method can be used, or a combination of these methods, or various limiting processes, for the purpose of finding these integrals or sums. Curves in the complex plane In complex analysis, a contour is a type of curve in the complex plane. In contour integration, contours provide a precise definition of the curves on which an integral may be suitably defined. A curve in the complex plane is defined as a continuous function from a closed interval of the real line to the complex plane: $z : [a, b] \to \mathbb{C}$. This definition of a curve coincides with the intuitive notion of a curve, but includes a parametrization by a continuous function from a closed interval. This more precise definition allows us to consider what properties a curve must have for it to be useful for integration. In the following subsections we narrow down the set of curves that we can integrate to include only those that can be built up out of a finite number of continuous curves that can be given a direction. Moreover, we will restrict the "pieces" from crossing over themselves, and we require that each piece have a finite (non-vanishing) continuous derivative. These requirements correspond to requiring that we consider only curves that can be traced, such as by a pen, in a sequence of even, steady strokes, which stop only to start a new piece of the curve, all without picking up the pen. Directed smooth curves Contours are often defined in terms of directed smooth curves. These provide a precise definition of a "piece" of a smooth curve, of which a contour is made. A smooth curve is a curve $z : [a, b] \to \mathbb{C}$ with a non-vanishing, continuous derivative such that each point is traversed only once ($z$ is one-to-one), with the possible exception of a curve such that the endpoints match ($z(a) = z(b)$). In the case where the endpoints match, the curve is called closed, and the function is required to be one-to-one everywhere else, and the derivative must be continuous at the identified point ($z'(a) = z'(b)$). A smooth curve that is not closed is often referred to as a smooth arc. The parametrization of a curve provides a natural ordering of points on the curve: $z(x)$ comes before $z(y)$ if $x < y$. This leads to the notion of a directed smooth curve. It is most useful to consider curves independent of the specific parametrization. This can be done by considering equivalence classes of smooth curves with the same direction. A directed smooth curve can then be defined as an ordered set of points in the complex plane that is the image of some smooth curve in their natural order (according to the parametrization). Note that not all orderings of the points are the natural ordering of a smooth curve. In fact, a given smooth curve has only two such orderings. Also, a single closed curve can have any point as its endpoint, while a smooth arc has only two choices for its endpoints. Contours Contours are the class of curves on which we define contour integration.
A contour is a directed curve which is made up of a finite sequence of directed smooth curves whose endpoints are matched to give a single direction. This requires that the sequence of curves γ₁, …, γₙ be such that the terminal point of γᵢ coincides with the initial point of γᵢ₊₁ for all i such that 1 ≤ i < n. This includes all directed smooth curves. Also, a single point in the complex plane is considered a contour. The symbol + is often used to denote the piecing of curves together to form a new curve. Thus we could write a contour Γ that is made up of n curves as Γ = γ₁ + γ₂ + ⋯ + γₙ. Contour integrals The contour integral of a complex function f : C → C is a generalization of the integral for real-valued functions. For continuous functions in the complex plane, the contour integral can be defined in analogy to the line integral by first defining the integral along a directed smooth curve in terms of an integral over a real valued parameter. A more general definition can be given in terms of partitions of the contour in analogy with the partition of an interval and the Riemann integral. In both cases the integral over a contour is defined as the sum of the integrals over the directed smooth curves that make up the contour. For continuous functions To define the contour integral in this way one must first consider the integral, over a real variable, of a complex-valued function. Let f be a complex-valued function of a real variable t. The real and imaginary parts of f are often denoted as u(t) and v(t), respectively, so that f(t) = u(t) + i v(t). Then the integral of the complex-valued function f over the interval [a, b] is given by ∫ₐᵇ f(t) dt = ∫ₐᵇ u(t) dt + i ∫ₐᵇ v(t) dt. Now, to define the contour integral, let f be a continuous function on the directed smooth curve γ. Let z : [a, b] → C be any parametrization of γ that is consistent with its order (direction). Then the integral along γ is denoted ∫_γ f(z) dz and is given by ∫_γ f(z) dz = ∫ₐᵇ f(z(t)) z′(t) dt. This definition is well defined. That is, the result is independent of the parametrization chosen. In the case where the real integral on the right side does not exist the integral along γ is said not to exist. As a generalization of the Riemann integral The generalization of the Riemann integral to functions of a complex variable is done in complete analogy to its definition for functions from the real numbers. The partition of a directed smooth curve γ is defined as a finite, ordered set of points on γ. The integral over the curve is the limit of finite sums of function values, taken at the points on the partition, in the limit that the maximum distance between any two successive points on the partition (in the two-dimensional complex plane), also known as the mesh, goes to zero. Direct methods Direct methods involve the calculation of the integral through methods similar to those in calculating line integrals in multivariate calculus. This means that we use the following method: parametrizing the contour The contour is parametrized by a differentiable complex-valued function of real variables, or the contour is broken up into pieces and parametrized separately. substitution of the parametrization into the integrand Substituting the parametrization into the integrand transforms the integral into an integral of one real variable. direct evaluation The integral is evaluated in a method akin to a real-variable integral. Example A fundamental result in complex analysis is that the contour integral of 1/z is 2πi, where the path of the contour is taken to be the unit circle traversed counterclockwise (or any positively oriented Jordan curve about 0).
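A quick numerical check of this fundamental result can be made directly from the definition just given, by discretizing the parameter interval; this is a minimal sketch, before the direct analytic evaluation that follows (the function name, the trapezoidal rule, and the sample count are choices of this illustration, not part of the theory):

import numpy as np

def contour_integral_unit_circle(f, n_samples=100_000):
    # Parametrize the unit circle as z(t) = exp(i t), t in [0, 2*pi],
    # and approximate the integral of f(z(t)) * z'(t) dt by the trapezoidal rule.
    t = np.linspace(0.0, 2.0 * np.pi, n_samples)
    z = np.exp(1j * t)
    dz_dt = 1j * np.exp(1j * t)
    return np.trapz(f(z) * dz_dt, t)

print(contour_integral_unit_circle(lambda z: 1.0 / z))  # approximately 2*pi*i = 6.2832j
print(contour_integral_unit_circle(lambda z: z ** 2))   # approximately 0

The same routine returns approximately zero for any integer power of z other than −1, in line with the direct evaluation below.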
In the case of the unit circle there is a direct method to evaluate the integral ∮_C dz/z. In evaluating this integral, use the unit circle as a contour, parametrized by z(t) = e^{it}, with t ∈ [0, 2π]; then dz/dt = ie^{it} and ∮_C dz/z = ∫₀^{2π} e^{−it} ie^{it} dt = i ∫₀^{2π} dt = 2πi, which is the value of the integral. This result applies only to the case in which z is raised to the power −1; if z is raised to any other integer power, the integral around the closed contour is zero. Applications of integral theorems Applications of integral theorems are also often used to evaluate the contour integral along a contour, which means that the real-valued integral is calculated simultaneously along with calculating the contour integral. Integral theorems such as the Cauchy integral formula or residue theorem are generally used in the following method: a specific contour is chosen: The contour is chosen so that the contour follows the part of the complex plane that describes the real-valued integral, and also encloses singularities of the integrand so application of the Cauchy integral formula or residue theorem is possible application of Cauchy's integral theorem The integral is reduced to only an integration around a small circle about each pole. application of the Cauchy integral formula or residue theorem Application of these integral formulae gives us a value for the integral around the whole of the contour. division of the contour into a contour along the real part and imaginary part The whole of the contour can be divided into the contour that follows the part of the complex plane that describes the real-valued integral as chosen before (call it R), and the integral that crosses the complex plane (call it I). The integral over the whole of the contour is the sum of the integral over each of these contours. demonstration that the integral that crosses the complex plane plays no part in the sum If the integral I can be shown to be zero, or if the real-valued integral that is sought is improper, then if we demonstrate that the integral I as described above tends to 0, the integral along R will tend to the integral around the whole contour. conclusion If we can show the above step, then we can directly calculate R, the real-valued integral. Example 1 Consider the integral ∫_{−∞}^{∞} dx/(x² + 1)². To evaluate this integral, we look at the complex-valued function f(z) = 1/(z² + 1)², which has singularities at i and −i. We choose a contour that will enclose the real-valued integral, here a semicircle with boundary diameter on the real line (going from, say, −a to a) will be convenient. Call this contour C. There are two ways of proceeding, using the Cauchy integral formula or by the method of residues: Using the Cauchy integral formula Note that z² + 1 = (z + i)(z − i), thus f(z) = 1/((z + i)²(z − i)²). Furthermore, observe that since the only singularity in the contour is the one at i, we can write f(z) = g(z)/(z − i)² with g(z) = 1/(z + i)², which puts the function in the form for direct application of the formula. Then, by using Cauchy's integral formula, ∮_C f(z) dz = 2πi g′(i) = 2πi (−i/4) = π/2. We take the first derivative, in the above steps, because the pole is a second-order pole. That is, (z − i) is taken to the second power, so we employ the first derivative of g(z). If (z − i) were taken to the third power, we would use the second derivative and divide by 2!, etc. The case of (z − i) to the first power corresponds to a zero-order derivative, just g(z) itself. We need to show that the integral over the arc of the semicircle tends to zero as a → ∞, using the estimation lemma |∫_Arc f(z) dz| ≤ ML, where M is an upper bound on |f(z)| along the arc and L the length of the arc. Now, M = 1/(a² − 1)² on the arc and L = πa. So |∫_Arc f(z) dz| ≤ πa/(a² − 1)², which tends to zero. Using the method of residues Consider the Laurent series of f(z) about i, the only singularity we need to consider.
We then have the Laurent series of f(z) about i. (See the sample Laurent calculation from Laurent series for the derivation of this series.) It is clear by inspection that the residue is −i/4, so, by the residue theorem, we have ∮_C f(z) dz = 2πi (−i/4) = π/2. Thus we get the same result as before. Contour note As an aside, a question can arise whether we do not take the semicircle to include the other singularity, enclosing −i. To have the integral along the real axis moving in the correct direction, the contour must travel clockwise, i.e., in a negative direction, reversing the sign of the integral overall. This does not affect the use of the method of residues by series. Example 2 – Cauchy distribution The integral ∫_{−∞}^{∞} e^{itx}/(x² + 1) dx (which arises in probability theory as a scalar multiple of the characteristic function of the Cauchy distribution) resists the techniques of elementary calculus. We will evaluate it by expressing it as a limit of contour integrals along the contour C that goes along the real line from −a to a and then counterclockwise along a semicircle centered at 0 from a to −a. Take a to be greater than 1, so that the imaginary unit i is enclosed within the curve. The contour integral is ∮_C e^{itz}/(z² + 1) dz. Since e^{itz} is an entire function (having no singularities at any point in the complex plane), this function has singularities only where the denominator z² + 1 is zero. Since z² + 1 = (z + i)(z − i), that happens only where z = i or z = −i. Only one of those points is in the region bounded by this contour. The residue of e^{itz}/(z² + 1) at z = i is e^{−t}/(2i). According to the residue theorem, then, we have ∮_C e^{itz}/(z² + 1) dz = 2πi · e^{−t}/(2i) = πe^{−t}. The contour may be split into a "straight" part and a curved arc, so that the two pieces sum to πe^{−t}, and thus ∫_{−a}^{a} e^{itx}/(x² + 1) dx = πe^{−t} − ∫_arc e^{itz}/(z² + 1) dz. According to Jordan's lemma, if t > 0 then the arc integral tends to zero as a → ∞. Therefore, if t > 0 then ∫_{−∞}^{∞} e^{itx}/(x² + 1) dx = πe^{−t}. A similar argument with an arc that winds around −i rather than i shows that if t < 0 then the integral equals πe^{t}, and finally we have this: ∫_{−∞}^{∞} e^{itx}/(x² + 1) dx = πe^{−|t|}. (If t = 0 then the integral yields immediately to real-valued calculus methods and its value is π.) Example 3 – trigonometric integrals Certain substitutions can be made to integrals involving trigonometric functions, so the integral is transformed into a rational function of a complex variable and then the above methods can be used in order to evaluate the integral. As an example, consider an integral whose integrand is rational in sin t and cos t. We seek to make the substitution z = e^{it}. Now, recall cos t = (z + z^{−1})/2 and sin t = (z − z^{−1})/(2i). Taking C to be the unit circle, we substitute to obtain a rational integrand in z. The singularities to be considered are the poles of the new integrand; drawing a small circle about each of those lying inside the unit circle, we arrive at the value of the integral by the residue theorem. Example 3a – trigonometric integrals, the general procedure The above method may be applied to all integrals of the type ∫ P(sin t, cos t)/Q(sin t, cos t) dt, where P and Q are polynomials, i.e. a rational function in trigonometric terms is being integrated. Note that the bounds of integration may as well be π and −π, as in the previous example, or any other pair of endpoints 2π apart. The trick is to use the substitution z = e^{it}, where dz = ie^{it} dt and hence dt = dz/(iz). This substitution maps the interval [0, 2π] to the unit circle. Furthermore, sin kt = (z^k − z^{−k})/(2i) and cos kt = (z^k + z^{−k})/2, so that a rational function f(z) in z results from the substitution, and the integral becomes ∮_{|z|=1} f(z) dz/(iz), which is in turn computed by summing the residues of f(z)/(iz) inside the unit circle. The image at right illustrates this for a particular integral, which we now compute. The first step is to recognize the rational trigonometric form of the integrand; the substitution then yields a rational function of z. The poles of this function are found from its denominator; some are outside the unit circle (shown in red, not to scale), whereas the others are inside the unit circle (shown in blue). The corresponding residues at the inside poles are equal, so the value of the integral follows from the residue theorem. Example 4 – branch cuts Consider a real integral on the positive half-line whose integrand involves a fractional power such as a square root. We can begin by formulating the corresponding complex integral, and we can use the Cauchy integral formula or residue theorem again to obtain the relevant residues.
However, the important thing to note is that z^{1/2} = e^{(1/2) Log z}, so z^{1/2} has a branch cut. This affects our choice of the contour C. Normally the logarithm branch cut is defined as the negative real axis, however, this makes the calculation of the integral slightly more complicated, so we define it to be the positive real axis. Then, we use the so-called keyhole contour, which consists of a small circle about the origin of radius ε, say, extending to a line segment parallel and close to the positive real axis but not touching it, to an almost full circle, returning to a line segment parallel, close, and below the positive real axis in the negative sense, returning to the small circle in the middle. Note that the two remaining poles, derivable by factoring the denominator of the integrand, are inside the big circle. The branch point at z = 0 was avoided by detouring around the origin. Let C_ε be the small circle of radius ε, and C_R the larger, with radius R; the integral over the keyhole contour then splits into the contributions from C_ε, C_R, and the two line segments. It can be shown that the integrals over C_ε and C_R both tend to zero as ε → 0 and R → ∞, by an estimation argument as above; that leaves two terms. Now since z^{1/2} = e^{(1/2) Log z}, on the contour outside the branch cut, we have gained 2π in argument along the contour. (By Euler's identity, e^{iπ} represents the unit vector, which therefore has iπ as its log. This is what is meant by the argument of z. The coefficient of 1/2 forces us to use 2π.) So the integral along the lower segment is a constant multiple of the integral along the upper segment, and therefore the real integral is expressed in terms of the contour integral. By using the residue theorem or the Cauchy integral formula (first employing the partial fractions method to derive a sum of two simple contour integrals) one obtains the value of the integral. Example 5 – the square of the logarithm This section treats a type of integral in which the square of the logarithm appears; one such integral is treated here. To calculate this integral, one uses a function built from the square of the logarithm and the branch of the logarithm corresponding to the keyhole contour. We will calculate the integral of this function along the keyhole contour shown at right. As it turns out this integral is a multiple of the initial integral that we wish to calculate, and by the Cauchy residue theorem we have an equation for it. Let R be the radius of the large circle, and r the radius of the small one. We will denote the upper line by M, and the lower line by N. As before we take the limit when R → ∞ and r → 0. The contributions from the two circles vanish. For example, one has an upper bound from the estimation lemma. In order to compute the contributions of M and N we parametrize the two segments by a real variable x > 0; substituting these parametrizations gives the two integrals along the cut. Example 6 – logarithms and the residue at infinity We seek to evaluate This requires a close study of We will construct so that it has a branch cut on , shown in red in the diagram. To do this, we choose two branches of the logarithm, setting and The cut of is therefore and the cut of is . It is easy to see that the cut of the product of the two, i.e. , is , because is actually continuous across . This is because when and we approach the cut from above, has the value When we approach from below, has the value But so that we have continuity across the cut. This is illustrated in the diagram, where the two black oriented circles are labelled with the corresponding value of the argument of the logarithm used in and . We will use the contour shown in green in the diagram. To do this we must compute the value of along the line segments just above and just below the cut. Let (in the limit, i.e. as the two green circles shrink to radius zero), where . Along the upper segment, we find that has the value and along the lower segment, It follows that the integral of along the upper segment is in the limit, and along the lower segment, .
If we can show that the integrals along the two green circles vanish in the limit, then we also have the value of the sought integral, by the Cauchy residue theorem. Let the radius of the green circles be ρ, with ρ → 0, and apply the estimation (ML) inequality. For the circle on the left, we find a bound that vanishes in the limit; similarly, for the circle on the right. Now using the Cauchy residue theorem, we obtain the value of the contour integral, where the minus sign is due to the clockwise direction around the residues. Using the branch of the logarithm from before, the residue at the pole can be computed; the pole is shown in blue in the diagram, and the value simplifies. We use the following formula for the residue at infinity: Res_{z=∞} f(z) = −Res_{z=0} [(1/z²) f(1/z)]. Substituting, we find the residues, where we have used the properties of the second branch of the logarithm. Next we apply the binomial expansion, obtaining the expansion coefficients. The conclusion follows, and finally the value of the original integral is read off. Evaluation with residue theorem Using the residue theorem, we can evaluate closed contour integrals. The following are examples of evaluating contour integrals with the residue theorem. Using the residue theorem, let us evaluate this contour integral. Recall that the residue theorem states ∮_Γ f(z) dz = 2πi Σₖ Res(f, aₖ), where Res(f, aₖ) is the residue of f at aₖ, and the aₖ are the singularities of f lying inside the contour Γ (with none of them lying directly on Γ). The integrand has only one pole inside the contour. From that, we determine its residue, and thus, using the residue theorem, we can determine the value of the integral. Multivariable contour integrals To solve multivariable contour integrals (i.e. surface integrals, complex volume integrals, and higher order integrals), we must use the divergence theorem. For now, let ∇·F be interchangeable with div(F). These will both serve as the divergence of the vector field F. This theorem states that the volume integral of ∇·F over a region equals the outward flux of F through the closed surface bounding it: ∭_V (∇·F) dV = ∬_S (F · n) dS. In addition, we also need to evaluate ∇·F. The divergence in any dimension can be described as ∇·F = ∂F₁/∂x₁ + ∂F₂/∂x₂ + ⋯ + ∂Fₙ/∂xₙ. Example 1 Let the vector field F be bounded by given limits. The corresponding double contour integral would be set up as such: we evaluate ∇·F and, meanwhile, set up the corresponding triple integral. Example 2 Let the vector field F depend on four parameters in this case, and let this vector field be bounded by given limits. To evaluate this, we must utilize the divergence theorem as stated before, and we must evaluate ∇·F. Thus, we can evaluate a contour integral with the divergence theorem, and we can use the same method to evaluate contour integrals for any such vector field in higher dimensions as well. Integral representation An integral representation of a function is an expression of the function involving a contour integral. Various integral representations are known for many special functions. Integral representations can be important for theoretical reasons, e.g. giving analytic continuation or functional equations, or sometimes for numerical evaluations. For example, the original definition of the Riemann zeta function via a Dirichlet series, ζ(s) = Σ_{n=1}^{∞} n^{−s}, is valid only for Re(s) > 1. But the integral representation, where the integration is done over the Hankel contour H, is valid for all complex s not equal to 1.
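The real-line integrals evaluated by contour methods in Examples 1 and 2 can be checked numerically against the residue-theorem values; a minimal sketch (the grid size and the cutoff of the infinite interval are arbitrary choices of this illustration):

import numpy as np

x = np.linspace(-400.0, 400.0, 4_000_001)

# Example 1: integral of 1/(x^2 + 1)^2 over the real line; residues give pi/2.
print(np.trapz(1.0 / (x**2 + 1.0)**2, x), np.pi / 2)

# Example 2 (Cauchy distribution): the real part of the integral of e^{itx}/(x^2 + 1),
# i.e. the integral of cos(t*x)/(x^2 + 1); contour integration gives pi * e^{-|t|}.
t = 2.0
print(np.trapz(np.cos(t * x) / (x**2 + 1.0), x), np.pi * np.exp(-abs(t)))

Both printed pairs agree to several decimal places, the residual discrepancy coming from the finite grid and the truncation of the infinite interval.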
Mathematics
Complex analysis
null
705635
https://en.wikipedia.org/wiki/Air%20mass%20%28astronomy%29
Air mass (astronomy)
In astronomy, air mass or airmass is a measure of the amount of air along the line of sight when observing a star or other celestial source from below Earth's atmosphere. It is formulated as the integral of air density along the light ray. As it penetrates the atmosphere, light is attenuated by scattering and absorption; the thicker the atmosphere through which it passes, the greater the attenuation. Consequently, celestial bodies when nearer the horizon appear less bright than when nearer the zenith. This attenuation, known as atmospheric extinction, is described quantitatively by the Beer–Lambert law. "Air mass" normally indicates relative air mass, the ratio of absolute air masses (as defined above) at oblique incidence relative to that at zenith. So, by definition, the relative air mass at the zenith is 1. Air mass increases as the angle between the source and the zenith increases, reaching a value of approximately 38 at the horizon. Air mass can be less than one at an elevation greater than sea level; however, most closed-form expressions for air mass do not include the effects of the observer's elevation, so adjustment must usually be accomplished by other means. Tables of air mass have been published by numerous authors. Definition The absolute air mass is defined as σ = ∫ ρ ds, where ρ is the volumetric density of air and the integral is taken along the light ray. Thus σ is a type of oblique column density. In the vertical direction, the absolute air mass at zenith is σ_zen = ∫ ρ dz. So σ_zen is a type of vertical column density. Finally, the relative air mass is X = σ/σ_zen. Assuming air density to be uniform allows removing it from the integrals. The absolute air mass then simplifies to a product, σ = ρ̄ s, where ρ̄ is the average density and s the arc length of the oblique light path (with σ_zen = ρ̄ s_zen for the zenith path). In the corresponding simplified relative air mass, the average density cancels out in the fraction, leading to the ratio of path lengths X = s/s_zen. Further simplifications are often made, assuming straight-line propagation (neglecting ray bending), as discussed below. Calculation Background The angle of a celestial body with the zenith is the zenith angle (in astronomy, commonly referred to as the zenith distance). A body's angular position can also be given in terms of altitude, the angle above the geometric horizon; the altitude h and the zenith angle z are thus related by h = 90° − z. Atmospheric refraction causes light entering the atmosphere to follow an approximately circular path that is slightly longer than the geometric path. Air mass must take into account the longer path. Additionally, refraction causes a celestial body to appear higher above the horizon than it actually is; at the horizon, the difference between the true zenith angle and the apparent zenith angle is approximately 34 minutes of arc. Most air mass formulas are based on the apparent zenith angle, but some are based on the true zenith angle, so it is important to ensure that the correct value is used, especially near the horizon. Plane-parallel atmosphere When the zenith angle is small to moderate, a good approximation is given by assuming a homogeneous plane-parallel atmosphere (i.e., one in which density is constant and Earth's curvature is ignored). The air mass then is simply the secant of the zenith angle z: X = sec z. At a zenith angle of 60°, the air mass is approximately 2. However, because the Earth is not flat, this formula is only usable for zenith angles up to about 60° to 75°, depending on accuracy requirements.
At greater zenith angles, the accuracy degrades rapidly, with X = sec z becoming infinite at the horizon; the horizon air mass in the more realistic spherical atmosphere is usually less than 40. Interpolative formulas Many formulas have been developed to fit tabular values of air mass; one early formula included a simple corrective term, X = sec z_t [1 − 0.0012 (sec² z_t − 1)], where z_t is the true zenith angle. This gives usable results up to approximately 80°, but the accuracy degrades rapidly at greater zenith angles. The calculated air mass reaches a maximum of 11.13 at 86.6°, becomes zero at 88°, and approaches negative infinity at the horizon. The plot of this formula on the accompanying graph includes a correction for atmospheric refraction so that the calculated air mass is for apparent rather than true zenith angle. Another formula is a polynomial in sec z − 1, which gives usable results for zenith angles of up to perhaps 85°. As with the previous formula, the calculated air mass reaches a maximum, and then approaches negative infinity at the horizon. A further suggestion, X = (cos z + 0.025 e^{−11 cos z})^{−1}, gives reasonable results for high zenith angles, with a horizon air mass of 40. A widely used formula, X = [cos z + 0.50572 (96.07995 − z)^{−1.6364}]^{−1}, gives reasonable results for zenith angles of up to 90°, with an air mass of approximately 38 at the horizon. Here z in the second term is in degrees. Another formula was developed in terms of the true zenith angle z_t, for which a maximum error (at the horizon) of 0.0037 air mass was claimed. Pickering developed X = [sin(h + 244/(165 + 47 h^{1.1}))]^{−1}, where h is apparent altitude in degrees. Pickering claimed his equation to have a tenth the error of earlier formulas near the horizon. Atmospheric models Interpolative formulas attempt to provide a good fit to tabular values of air mass using minimal computational overhead. The tabular values, however, must be determined from measurements or atmospheric models that derive from geometrical and physical considerations of Earth and its atmosphere. Nonrefracting spherical atmosphere If atmospheric refraction is ignored, it can be shown from simple geometrical considerations (Schoenberg 1929, 173) that the path s of a light ray at zenith angle z through a radially symmetrical atmosphere of height y_atm above the Earth is given by s = √(R_E² cos² z + 2 R_E y_atm + y_atm²) − R_E cos z, or alternatively, s = √((R_E + y_atm)² − R_E² sin² z) − R_E cos z, where R_E is the radius of the Earth. The relative air mass is then X = s/y_atm. Homogeneous atmosphere If the atmosphere is homogeneous (i.e., density is constant), the atmospheric height follows from hydrostatic considerations as y_atm = k T₀/(m g), where k is the Boltzmann constant, T₀ is the sea-level temperature, m is the molecular mass of air, and g is the acceleration due to gravity. Although this is the same as the pressure scale height of an isothermal atmosphere, the implication is slightly different. In an isothermal atmosphere, 37% (1/e) of the atmosphere is above the pressure scale height; in a homogeneous atmosphere, there is no atmosphere above the atmospheric height. Taking T₀ = 288.15 K, m = 28.9644 u, and g = 9.80665 m/s² gives y_atm ≈ 8435 m. Using Earth's mean radius of 6371 km, the sea-level air mass at the horizon is X_horiz = √(1 + 2 R_E/y_atm) ≈ 38.87. The homogeneous spherical model slightly underestimates the rate of increase in air mass near the horizon; a reasonable overall fit to values determined from more rigorous models can be had by setting the air mass to match a value at a zenith angle less than 90°. The air mass equation can be rearranged to give R_E/y_atm = (X² − 1)/(2 (1 − X cos z)); matching Bemporad's value of 19.787 at z = 88° gives R_E/y_atm ≈ 631.01 and X_horiz ≈ 35.54. With the same value for R_E as above, y_atm ≈ 10,096 m. While a homogeneous atmosphere is not a physically realistic model, the approximation is reasonable as long as the scale height of the atmosphere is small compared to the radius of the planet.
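The closed-form expressions above are easy to compare numerically. This sketch implements the plane-parallel secant formula, the interpolative formula quoted above with a horizon air mass of approximately 38 (given here under the common attribution to Kasten and Young, 1989, an assumption since the original citations were not preserved), and the homogeneous spherical model; the sample angles and function names are illustrative:

import numpy as np

def airmass_secant(z_deg):
    # Plane-parallel atmosphere; diverges at the horizon.
    return 1.0 / np.cos(np.radians(z_deg))

def airmass_kasten_young(z_deg):
    # Interpolative fit (Kasten & Young 1989); z is the apparent zenith angle in degrees.
    return 1.0 / (np.cos(np.radians(z_deg)) + 0.50572 * (96.07995 - z_deg) ** -1.6364)

def airmass_homogeneous(z_deg, R_E=6371000.0, y_atm=8435.0):
    # Homogeneous (uniform-density) spherical atmosphere of height y_atm over radius R_E.
    r = R_E / y_atm
    c = np.cos(np.radians(z_deg))
    return np.sqrt(r * r * c * c + 2.0 * r + 1.0) - r * c

for z in (0.0, 60.0, 85.0, 90.0):
    print(z, airmass_secant(z), airmass_kasten_young(z), airmass_homogeneous(z))

At z = 60° all three give close to 2; at the horizon the secant formula blows up while the other two give roughly 38 and 38.9, as stated above.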
The model is usable (i.e., it does not diverge or go to zero) at all zenith angles, including those greater than 90° (see Homogeneous spherical atmosphere with elevated observer, below). The model requires comparatively little computational overhead, and if high accuracy is not required, it gives reasonable results. However, for zenith angles less than 90°, a better fit to accepted values of air mass can be had with several of the interpolative formulas. Variable-density atmosphere In a real atmosphere, density is not constant (it decreases with elevation above mean sea level). The absolute air mass for the geometrical light path discussed above becomes, for a sea-level observer, an integral of this varying density along the path. Isothermal atmosphere Several basic models for density variation with elevation are commonly used. The simplest, an isothermal atmosphere, gives ρ = ρ₀ e^{−y/H}, where ρ₀ is the sea-level density and H is the density scale height. When the limits of integration are zero and infinity, the result is known as the Chapman function. An approximate result is obtained if some high-order terms are dropped. An approximate correction for refraction can be made by using a slightly larger effective radius in place of the physical radius of the Earth R_E. At the horizon, the approximate equation takes a simpler form. Using a scale height of 8435 m, Earth's mean radius of 6371 km, and including the correction for refraction, the horizon air mass can then be evaluated. Polytropic atmosphere The assumption of constant temperature is simplistic; a more realistic model is the polytropic atmosphere, for which T = T₀ − α y, where T₀ is the sea-level temperature and α is the temperature lapse rate. The density as a function of elevation is ρ = ρ₀ (1 − α y/T₀)^{1/(κ − 1)}, where κ is the polytropic exponent (or polytropic index). The air mass integral for the polytropic model does not lend itself to a closed-form solution except at the zenith, so the integration usually is performed numerically. Layered atmosphere Earth's atmosphere consists of multiple layers with different temperature and density characteristics; common atmospheric models include the International Standard Atmosphere and the US Standard Atmosphere. A good approximation for many purposes is a polytropic troposphere of 11 km height with a lapse rate of 6.5 K/km and an isothermal stratosphere of infinite height, which corresponds very closely to the first two layers of the International Standard Atmosphere. More layers can be used if greater accuracy is required. Refracting radially symmetrical atmosphere When atmospheric refraction is considered, ray tracing becomes necessary, and the absolute air mass integral becomes σ = ∫ ρ dr / √(1 − (n_obs/n)² (r_obs/r)² sin² z), taken from r_obs to r_atm, where n_obs is the index of refraction of air at the observer's elevation y_obs above sea level, n is the index of refraction at elevation y above sea level, r_obs = R_E + y_obs, r = R_E + y is the distance from the center of the Earth to a point at elevation y, and r_atm = R_E + y_atm is the distance to the upper limit of the atmosphere at elevation y_atm. The index of refraction in terms of density is usually given to sufficient accuracy (Garfinkel 1967) by the Gladstone–Dale relation (n − 1)/(n_obs − 1) = ρ/ρ_obs. Rearrangement and substitution into the absolute air mass integral gives the integral in terms of density alone. The quantity n_obs − 1 is quite small; expanding the first term in parentheses, rearranging several times, and ignoring terms in (n_obs − 1)² after each rearrangement, gives a simpler approximate form. Homogeneous spherical atmosphere with elevated observer In the figure at right, an observer at O is at an elevation y_obs above sea level in a uniform radially symmetrical atmosphere of height y_atm. The path length of a light ray at zenith angle z is s; R_E is the radius of the Earth.
Applying the law of cosines to triangle OAC, expanding the left- and right-hand sides, eliminating the common terms, and rearranging gives a quadratic in the path length s. Solving the quadratic for s, factoring, and rearranging, the negative sign of the radical gives a negative result, which is not physically meaningful. Using the positive sign, dividing by y_atm, and cancelling common terms and rearranging gives the relative air mass. With the substitutions r̂ = R_E/y_atm and ŷ = y_obs/y_atm, this can be given in a compact form. When the observer's elevation is zero, the air mass equation simplifies to X = √((R_E/y_atm)² cos² z + 2 R_E/y_atm + 1) − (R_E/y_atm) cos z. In the limit of grazing incidence, the absolute air mass equals the distance to the horizon. Furthermore, if the observer is elevated, the horizon zenith angle can be greater than 90°. Nonuniform distribution of attenuating species Atmospheric models that derive from hydrostatic considerations assume an atmosphere of constant composition and a single mechanism of extinction, which is not quite correct. There are three main sources of attenuation: Rayleigh scattering by air molecules, Mie scattering by aerosols, and molecular absorption (primarily by ozone). The relative contribution of each source varies with elevation above sea level, and the concentrations of aerosols and ozone cannot be derived simply from hydrostatic considerations. Rigorously, when the extinction coefficient depends on elevation, it must be determined as part of the air mass integral. A compromise approach often is possible, however. Methods for separately calculating the extinction from each species using closed-form expressions have been described, including one reference with source code for a BASIC program to perform the calculations. Reasonably accurate calculation of extinction can sometimes be done by using one of the simple air mass formulas and separately determining extinction coefficients for each of the attenuating species. Implications Air mass and astronomy In optical astronomy, the air mass provides an indication of the deterioration of the observed image, not only as regards direct effects of spectral absorption, scattering and reduced brightness, but also an aggregation of visual aberrations, e.g. resulting from atmospheric turbulence, collectively referred to as the quality of the "seeing". On bigger telescopes, such as the WHT and VLT, the atmospheric dispersion can be so severe that it affects the pointing of the telescope to the target. In such cases an atmospheric dispersion compensator is used, which usually consists of two prisms. The Greenwood frequency and Fried parameter, both relevant for adaptive optics, depend on the air mass above them (or more specifically, on the zenith angle). In radio astronomy the air mass (which influences the optical path length) is not relevant. The lower layers of the atmosphere, modeled by the air mass, do not significantly impede radio waves, which are of much lower frequency than optical waves. Instead, some radio waves are affected by the ionosphere in the upper atmosphere. Newer aperture synthesis radio telescopes are especially affected by this as they "see" a much larger portion of the sky and thus the ionosphere. In fact, LOFAR needs to explicitly calibrate for these distorting effects, and on the other hand can also study the ionosphere by instead measuring these distortions.
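Combining an air mass formula with the Beer–Lambert law gives the extinction in stellar magnitudes discussed above; a small sketch (the extinction coefficient k is an assumed, site-dependent value, typically a few tenths of a magnitude per air mass in the V band):

import numpy as np

def airmass_ky(z_deg):
    # Interpolative air mass (Kasten & Young 1989; see the earlier sketch).
    return 1.0 / (np.cos(np.radians(z_deg)) + 0.50572 * (96.07995 - z_deg) ** -1.6364)

k = 0.2  # assumed V-band extinction coefficient, magnitudes per air mass

for z in (0.0, 45.0, 70.0, 85.0):
    X = airmass_ky(z)
    dm = k * X                   # dimming in magnitudes: Beer-Lambert law in log form
    frac = 10.0 ** (-0.4 * dm)   # fraction of the above-atmosphere flux received
    print(z, X, dm, frac)

At 45° the loss is a few tenths of a magnitude, while at 85° a star appears roughly two magnitudes fainter under these assumptions.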
Air mass and solar energy In some fields, such as solar energy and photovoltaics, air mass is indicated by the acronym AM; additionally, the value of the air mass is often given by appending its value to AM, so that AM1 indicates an air mass of 1, AM2 indicates an air mass of 2, and so on. The region above Earth's atmosphere, where there is no atmospheric attenuation of solar radiation, is considered to have "air mass zero" (AM0). Atmospheric attenuation of solar radiation is not the same for all wavelengths; consequently, passage through the atmosphere not only reduces intensity but also alters the spectral irradiance. Photovoltaic modules are commonly rated using spectral irradiance for an air mass of 1.5 (AM1.5); tables of these standard spectra are given in ASTM G 173-03. The extraterrestrial spectral irradiance (i.e., that for AM0) is given in ASTM E 490-00a. For many solar energy applications when high accuracy near the horizon is not required, air mass is commonly determined using the simple secant formula described in the section on the plane-parallel atmosphere above.
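The correspondence between AM values and solar zenith angles under the secant convention is straightforward to compute; a minimal sketch (function names are illustrative):

import numpy as np

def am_value(zenith_deg):
    # Solar-energy convention: AM value from the simple secant formula.
    return 1.0 / np.cos(np.radians(zenith_deg))

def zenith_for_am(am):
    # Zenith angle at which a given AM value occurs under the secant formula.
    return np.degrees(np.arccos(1.0 / am))

print(zenith_for_am(1.5))  # about 48.2 degrees, the geometry behind the AM1.5 rating
print(am_value(60.0))      # AM2

This is why the AM1.5 reference spectrum is often associated with a solar zenith angle of about 48.2°.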
Physical sciences
Basics
Astronomy
706106
https://en.wikipedia.org/wiki/Electrostatic%20induction
Electrostatic induction
Electrostatic induction, also known as "electrostatic influence" or simply "influence" in Europe and Latin America, is a redistribution of electric charge in an object that is caused by the influence of nearby charges. In the presence of a charged body, an insulated conductor develops a positive charge on one end and a negative charge on the other end. Induction was discovered by British scientist John Canton in 1753 and Swedish professor Johan Carl Wilcke in 1762. Electrostatic generators, such as the Wimshurst machine, the Van de Graaff generator and the electrophorus, use this principle.
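A quantitative feel for induced charge can be had from a classic textbook configuration that is related to, though not identical with, the insulated conductor described above: a grounded conducting sphere near a point charge, solved by the method of images. This is an illustrative sketch in Gaussian units; the numbers and the grounded boundary condition are assumptions of the example:

import numpy as np

q, R, d = 1.0, 1.0, 3.0   # point charge, sphere radius, charge distance (Gaussian units)

q_image = -q * R / d      # image charge reproducing the grounded-sphere potential

# Induced surface charge density sigma(theta), theta measured from the line to the charge.
theta = np.linspace(0.0, np.pi, 200_001)
sigma = (-q * (d**2 - R**2)
         / (4.0 * np.pi * R * (R**2 + d**2 - 2.0 * R * d * np.cos(theta)) ** 1.5))

# Integrating sigma over the sphere recovers the image charge, i.e. the total induced charge.
total_induced = np.trapz(sigma * 2.0 * np.pi * R**2 * np.sin(theta), theta)
print(q_image, total_induced)   # both approximately -1/3

The induced density is strongest on the side facing the external charge, the same redistribution effect, with charge of opposite sign concentrated nearest the inducing body, that the article describes for an insulated conductor.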
Physical sciences
Electrostatics
Physics
706247
https://en.wikipedia.org/wiki/Electronic%20band%20structure
Electronic band structure
In solid-state physics, the electronic band structure (or simply band structure) of a solid describes the range of energy levels that electrons may have within it, as well as the ranges of energy that they may not have (called band gaps or forbidden bands). Band theory derives these bands and band gaps by examining the allowed quantum mechanical wave functions for an electron in a large, periodic lattice of atoms or molecules. Band theory has been successfully used to explain many physical properties of solids, such as electrical resistivity and optical absorption, and forms the foundation of the understanding of all solid-state devices (transistors, solar cells, etc.). Why bands and band gaps occur The formation of electronic bands and band gaps can be illustrated with two complementary models for electrons in solids. The first one is the nearly free electron model, in which the electrons are assumed to move almost freely within the material. In this model, the electronic states resemble free electron plane waves, and are only slightly perturbed by the crystal lattice. This model explains the origin of the electronic dispersion relation, but the explanation for band gaps is subtle in this model. The second model starts from the opposite limit, in which the electrons are tightly bound to individual atoms. The electrons of a single, isolated atom occupy atomic orbitals with discrete energy levels. If two atoms come close enough so that their atomic orbitals overlap, the electrons can tunnel between the atoms. This tunneling splits (hybridizes) the atomic orbitals into molecular orbitals with different energies. Similarly, if a large number N of identical atoms come together to form a solid, such as a crystal lattice, the atoms' atomic orbitals overlap with the nearby orbitals. Each discrete energy level splits into N levels, each with a different energy. Since the number of atoms in a macroscopic piece of solid is a very large number (N of the order of 10²²), the number of orbitals that hybridize with each other is very large. For this reason, the adjacent levels are very closely spaced in energy (of the order of 10⁻²² eV), and can be considered to form a continuum, an energy band. This formation of bands is mostly a feature of the outermost electrons (valence electrons) in the atom, which are the ones involved in chemical bonding and electrical conductivity. The inner electron orbitals do not overlap to a significant degree, so their bands are very narrow. Band gaps are essentially leftover ranges of energy not covered by any band, a result of the finite widths of the energy bands. The bands have different widths, with the widths depending upon the degree of overlap in the atomic orbitals from which they arise. Two adjacent bands may simply not be wide enough to fully cover the range of energy. For example, the bands associated with core orbitals (such as 1s electrons) are extremely narrow due to the small overlap between adjacent atoms. As a result, there tend to be large band gaps between the core bands. Higher bands involve comparatively larger orbitals with more overlap, becoming progressively wider at higher energies so that there are no band gaps at higher energies. Basic concepts Assumptions and limits of band structure theory Band theory is only an approximation to the quantum state of a solid, which applies to solids consisting of many identical atoms or molecules bonded together.
These are the assumptions necessary for band theory to be valid: Infinite-size system: For the bands to be continuous, the piece of material must consist of a large number of atoms. Since a macroscopic piece of material contains on the order of 10²² atoms, this is not a serious restriction; band theory even applies to microscopic-sized transistors in integrated circuits. With modifications, the concept of band structure can also be extended to systems which are only "large" along some dimensions, such as two-dimensional electron systems. Homogeneous system: Band structure is an intrinsic property of a material, which assumes that the material is homogeneous. Practically, this means that the chemical makeup of the material must be uniform throughout the piece. Non-interactivity: The band structure describes "single electron states". The existence of these states assumes that the electrons travel in a static potential without dynamically interacting with lattice vibrations, other electrons, photons, etc. The above assumptions are broken in a number of important practical situations, and the use of band structure requires one to keep a close check on the limitations of band theory: Inhomogeneities and interfaces: Near surfaces, junctions, and other inhomogeneities, the bulk band structure is disrupted. Not only are there local small-scale disruptions (e.g., surface states or dopant states inside the band gap), but also local charge imbalances. These charge imbalances have electrostatic effects that extend deeply into semiconductors, insulators, and the vacuum (see doping, band bending). Along the same lines, most electronic effects (capacitance, electrical conductance, electric-field screening) involve the physics of electrons passing through surfaces and/or near interfaces. The full description of these effects, in a band structure picture, requires at least a rudimentary model of electron-electron interactions (see space charge, band bending). Small systems: For systems which are small along every dimension (e.g., a small molecule or a quantum dot), there is no continuous band structure. The crossover between small and large dimensions is the realm of mesoscopic physics. Strongly correlated materials (for example, Mott insulators) simply cannot be understood in terms of single-electron states. The electronic band structures of these materials are poorly defined (or at least, not uniquely defined) and may not provide useful information about their physical state. Crystalline symmetry and wavevectors Band structure calculations take advantage of the periodic nature of a crystal lattice, exploiting its symmetry. The single-electron Schrödinger equation is solved for an electron in a lattice-periodic potential, giving Bloch electrons as solutions, ψ(r) = e^{ik·r} u(r), where k is called the wavevector. For each value of k, there are multiple solutions to the Schrödinger equation labelled by n, the band index, which simply numbers the energy bands. Each of these energy levels evolves smoothly with changes in k, forming a smooth band of states. For each band we can define a function E_n(k), which is the dispersion relation for electrons in that band. The wavevector takes on any value inside the Brillouin zone, which is a polyhedron in wavevector (reciprocal lattice) space that is related to the crystal's lattice. Wavevectors outside the Brillouin zone simply correspond to states that are physically identical to those states within the Brillouin zone.
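The Bloch picture just described can be made concrete with a small numerical model: a single electron in a one-dimensional cosine potential, expanded in plane waves e^{i(k + mG)x}. Everything here (potential strength, lattice constant, basis size, units with ħ²/2mₑ = 1) is an illustrative assumption, not data from the article:

import numpy as np

a = 1.0                 # lattice constant
V0 = 0.5                # Fourier coefficient of V(x) = 2*V0*cos(2*pi*x/a)
G = 2.0 * np.pi / a     # reciprocal lattice vector
n_basis = 11
m = np.arange(n_basis) - n_basis // 2   # plane waves e^{i(k + m*G)x}

def bands(k, n_bands=3):
    # Hamiltonian in the plane-wave basis: kinetic energy on the diagonal,
    # V0 on the first off-diagonals (the single cosine harmonic couples m and m +/- 1).
    H = np.diag((k + m * G) ** 2) + V0 * (np.eye(n_basis, k=1) + np.eye(n_basis, k=-1))
    return np.linalg.eigvalsh(H)[:n_bands]  # lowest band energies E_n(k), ascending

for k in np.linspace(-np.pi / a, np.pi / a, 5):  # sample the first Brillouin zone
    print(f"k = {k:+.3f}:", bands(k))

Each row lists E_1(k), E_2(k), E_3(k); at the zone boundary k = ±π/a the two lowest bands are separated by approximately 2·V0, a band gap opened by the periodic potential.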
Special high symmetry points/lines in the Brillouin zone are assigned labels like Γ, Δ, Λ, Σ (see Fig 1). It is difficult to visualize the shape of a band as a function of wavevector, as it would require a plot in four-dimensional space, E vs. k_x, k_y, k_z. In scientific literature it is common to see band structure plots which show the values of E_n(k) for values of k along straight lines connecting symmetry points, often labelled Δ, Λ, Σ, or [100], [111], and [110], respectively. Another method for visualizing band structure is to plot a constant-energy isosurface in wavevector space, showing all of the states with energy equal to a particular value. The isosurface of states with energy equal to the Fermi level is known as the Fermi surface. Energy band gaps can be classified using the wavevectors of the states surrounding the band gap: Direct band gap: the lowest-energy state above the band gap has the same k as the highest-energy state beneath the band gap. Indirect band gap: the closest states above and beneath the band gap do not have the same k value. Asymmetry: Band structures in non-crystalline solids Although electronic band structures are usually associated with crystalline materials, quasi-crystalline and amorphous solids may also exhibit band gaps. These are somewhat more difficult to study theoretically since they lack the simple symmetry of a crystal, and it is not usually possible to determine a precise dispersion relation. As a result, virtually all of the existing theoretical work on the electronic band structure of solids has focused on crystalline materials. Density of states The density of states function g(E) is defined as the number of electronic states per unit volume, per unit energy, for electron energies near E. The density of states function is important for calculations of effects based on band theory. In Fermi's Golden Rule, a calculation for the rate of optical absorption, it provides both the number of excitable electrons and the number of final states for an electron. It appears in calculations of electrical conductivity where it provides the number of mobile states, and in computing electron scattering rates where it provides the number of final states after scattering. For energies inside a band gap, g(E) = 0. Filling of bands At thermodynamic equilibrium, the likelihood of a state of energy E being filled with an electron is given by the Fermi–Dirac distribution, a thermodynamic distribution that takes into account the Pauli exclusion principle: f(E) = 1/(e^{(E − µ)/(k_B T)} + 1), where k_B T is the product of the Boltzmann constant and temperature, and µ is the total chemical potential of electrons, or Fermi level (in semiconductor physics, this quantity is more often denoted E_F). The Fermi level of a solid is directly related to the voltage on that solid, as measured with a voltmeter. Conventionally, in band structure plots the Fermi level is taken to be the zero of energy (an arbitrary choice). The density of electrons in the material is simply the integral of the Fermi–Dirac distribution times the density of states: n = ∫ g(E) f(E) dE. Although there are an infinite number of bands and thus an infinite number of states, there are only a finite number of electrons to place in these bands. The preferred value for the number of electrons is a consequence of electrostatics: even though the surface of a material can be charged, the internal bulk of a material prefers to be charge neutral. The condition of charge neutrality means that n must match the density of protons in the material.
For this to occur, the material electrostatically adjusts itself, shifting its band structure up or down in energy (thereby shifting µ), until it is at the correct equilibrium with respect to the Fermi level. Names of bands near the Fermi level (conduction band, valence band) A solid has an infinite number of allowed bands, just as an atom has infinitely many energy levels. However, most of the bands simply have too high energy, and are usually disregarded under ordinary circumstances. Conversely, there are very low energy bands associated with the core orbitals (such as 1s electrons). These low-energy core bands are also usually disregarded since they remain filled with electrons at all times, and are therefore inert. Likewise, materials have several band gaps throughout their band structure. The most important bands and band gaps—those relevant for electronics and optoelectronics—are those with energies near the Fermi level. The bands and band gaps near the Fermi level are given special names, depending on the material: In a semiconductor or band insulator, the Fermi level is surrounded by a band gap, referred to as the band gap (to distinguish it from the other band gaps in the band structure). The closest band above the band gap is called the conduction band, and the closest band beneath the band gap is called the valence band. The name "valence band" was coined by analogy to chemistry, since in semiconductors (and insulators) the valence band is built out of the valence orbitals. In a metal or semimetal, the Fermi level is inside of one or more allowed bands. In semimetals the bands are usually referred to as "conduction band" or "valence band" depending on whether the charge transport is more electron-like or hole-like, by analogy to semiconductors. In many metals, however, the bands are neither electron-like nor hole-like, and often just called "valence band" as they are made of valence orbitals. The band gaps in a metal's band structure are not important for low energy physics, since they are too far from the Fermi level. Theory in crystals The ansatz is the special case of electron waves in a periodic crystal lattice using Bloch's theorem as treated generally in the dynamical theory of diffraction. Every crystal is a periodic structure which can be characterized by a Bravais lattice, and for each Bravais lattice we can determine the reciprocal lattice, which encapsulates the periodicity in a set of three reciprocal lattice vectors b₁, b₂, b₃. Now, any periodic potential V(r) which shares the same periodicity as the direct lattice can be expanded out as a Fourier series whose only non-vanishing components are those associated with the reciprocal lattice vectors. So the expansion can be written as V(r) = Σ_K V_K e^{iK·r}, where K = m₁b₁ + m₂b₂ + m₃b₃ for any set of integers (m₁, m₂, m₃). From this theory, an attempt can be made to predict the band structure of a particular material, however most ab initio methods for electronic structure calculations fail to predict the observed band gap. Nearly free electron approximation In the nearly free electron approximation, interactions between electrons are completely ignored. This approximation allows use of Bloch's theorem, which states that electrons in a periodic potential have wavefunctions and energies which are periodic in wavevector up to a constant phase shift between neighboring reciprocal lattice vectors.
The consequences of periodicity are described mathematically by Bloch's theorem, which states that the eigenstate wavefunctions have the form ψ(r) = e^{ik·r} u(r), where the Bloch function u(r) is periodic over the crystal lattice, that is, u(r) = u(r − R_n). Here index n refers to the n-th energy band, wavevector k is related to the direction of motion of the electron, r is the position in the crystal, and R_n is the location of an atomic site. The NFE model works particularly well in materials like metals where distances between neighbouring atoms are small. In such materials the overlap of atomic orbitals and potentials on neighbouring atoms is relatively large. In that case the wave function of the electron can be approximated by a (modified) plane wave. The band structure of a metal like aluminium even gets close to the empty lattice approximation. Tight binding model The opposite extreme to the nearly free electron approximation assumes the electrons in the crystal behave much like an assembly of constituent atoms. This tight binding model assumes the solution to the time-independent single electron Schrödinger equation is well approximated by a linear combination of atomic orbitals, where the coefficients are selected to give the best approximate solution of this form. Index m refers to an atomic energy level and R_n refers to an atomic site. A more accurate approach using this idea employs Wannier functions, in which the periodic part of the Bloch function appears and the integral is over the Brillouin zone. Here index n refers to the n-th energy band in the crystal. The Wannier functions are localized near atomic sites, like atomic orbitals, but being defined in terms of Bloch functions they are accurately related to solutions based upon the crystal potential. Wannier functions on different atomic sites are orthogonal. The Wannier functions can be used to form the Schrödinger solution for the n-th energy band. The TB model works well in materials with limited overlap between atomic orbitals and potentials on neighbouring atoms. Band structures of materials like Si, GaAs, SiO2 and diamond for instance are well described by TB-Hamiltonians on the basis of atomic sp3 orbitals. In transition metals a mixed TB-NFE model is used to describe the broad NFE conduction band and the narrow embedded TB d-bands. The radial functions of the atomic orbital part of the Wannier functions are most easily calculated by the use of pseudopotential methods. NFE, TB or combined NFE-TB band structure calculations, sometimes extended with wave function approximations based on pseudopotential methods, are often used as an economic starting point for further calculations. KKR model The KKR method, also called "multiple scattering theory" or Green's function method, finds the stationary values of the inverse transition matrix T rather than the Hamiltonian. A variational implementation was suggested by Korringa, Kohn and Rostoker, and is often referred to as the Korringa–Kohn–Rostoker method. The most important features of the KKR or Green's function formulation are (1) it separates the two aspects of the problem: structure (positions of the atoms) from the scattering (chemical identity of the atoms); and (2) Green's functions provide a natural approach to a localized description of electronic properties that can be adapted to alloys and other disordered systems. The simplest form of this approximation centers non-overlapping spheres (referred to as muffin tins) on the atomic positions.
Within these regions, the potential experienced by an electron is approximated to be spherically symmetric about the given nucleus. In the remaining interstitial region, the screened potential is approximated as a constant. Continuity of the potential between the atom-centered spheres and interstitial region is enforced. Density-functional theory In recent physics literature, a large majority of the electronic structures and band plots are calculated using density-functional theory (DFT), which is not a model but rather a theory, i.e., a microscopic first-principles theory of condensed matter physics that tries to cope with the electron-electron many-body problem via the introduction of an exchange-correlation term in the functional of the electronic density. DFT-calculated bands are in many cases found to be in agreement with experimentally measured bands, for example by angle-resolved photoemission spectroscopy (ARPES). In particular, the band shape is typically well reproduced by DFT. But there are also systematic errors in DFT bands when compared to experiment results. In particular, DFT seems to systematically underestimate by about 30-40% the band gap in insulators and semiconductors. It is commonly believed that DFT is a theory to predict ground state properties of a system only (e.g. the total energy, the atomic structure, etc.), and that excited state properties cannot be determined by DFT. This is a misconception. In principle, DFT can determine any property (ground state or excited state) of a system given a functional that maps the ground state density to that property. This is the essence of the Hohenberg–Kohn theorem. In practice, however, no known functional exists that maps the ground state density to excitation energies of electrons within a material. Thus, what in the literature is quoted as a DFT band plot is a representation of the DFT Kohn–Sham energies, i.e., the energies of a fictive non-interacting system, the Kohn–Sham system, which has no physical interpretation at all. The Kohn–Sham electronic structure must not be confused with the real, quasiparticle electronic structure of a system, and there is no Koopmans' theorem holding for Kohn–Sham energies, as there is for Hartree–Fock energies, which can be truly considered as an approximation for quasiparticle energies. Hence, in principle, Kohn–Sham based DFT is not a band theory, i.e., not a theory suitable for calculating bands and band-plots. In principle time-dependent DFT can be used to calculate the true band structure although in practice this is often difficult. A popular approach is the use of hybrid functionals, which incorporate a portion of Hartree–Fock exact exchange; this produces a substantial improvement in predicted bandgaps of semiconductors, but is less reliable for metals and wide-bandgap materials. Green's function methods and the ab initio GW approximation To calculate the bands including electron-electron interaction many-body effects, one can resort to so-called Green's function methods. Indeed, knowledge of the Green's function of a system provides both ground (the total energy) and also excited state observables of the system. The poles of the Green's function are the quasiparticle energies, the bands of a solid. The Green's function can be calculated by solving the Dyson equation once the self-energy of the system is known. For real systems like solids, the self-energy is a very complex quantity and usually approximations are needed to solve the problem. 
One such approximation is the GW approximation, so called from the mathematical form the self-energy takes as the product Σ = GW of the Green's function G and the dynamically screened interaction W. This approach is more pertinent when addressing the calculation of band plots (and also quantities beyond, such as the spectral function) and can also be formulated in a completely ab initio way. The GW approximation seems to provide band gaps of insulators and semiconductors in agreement with experiment, and hence to correct the systematic DFT underestimation. Dynamical mean-field theory Although the nearly free electron approximation is able to describe many properties of electron band structures, one consequence of this theory is that it predicts the same number of electrons in each unit cell. If the number of electrons is odd, we would then expect that there is an unpaired electron in each unit cell, and thus that the valence band is not fully occupied, making the material a conductor. However, materials such as CoO that have an odd number of electrons per unit cell are insulators, in direct conflict with this result. This kind of material is known as a Mott insulator, and requires inclusion of detailed electron-electron interactions (treated only as an averaged effect on the crystal potential in band theory) to explain the discrepancy. The Hubbard model is an approximate theory that can include these interactions. It can be treated non-perturbatively within the so-called dynamical mean-field theory, which attempts to bridge the gap between the nearly free electron approximation and the atomic limit. Formally, however, the states are not non-interacting in this case and the concept of a band structure is not adequate to describe these cases. Others Calculating band structures is an important topic in theoretical solid state physics. In addition to the models mentioned above, other models include the following: Empty lattice approximation: the "band structure" of a region of free space that has been divided into a lattice. k·p perturbation theory is a technique that allows a band structure to be approximately described in terms of just a few parameters. The technique is commonly used for semiconductors, and the parameters in the model are often determined by experiment. The Kronig–Penney model, a one-dimensional rectangular well model useful for illustration of band formation. While simple, it predicts many important phenomena, but is not quantitative. Hubbard model The band structure has been generalised to wavevectors that are complex numbers, resulting in what is called a complex band structure, which is of interest at surfaces and interfaces. Each model describes some types of solids very well, and others poorly. The nearly free electron model works well for metals, but poorly for non-metals. The tight binding model is extremely accurate for ionic insulators, such as metal halide salts (e.g. NaCl). Band diagrams To understand how band structure changes relative to the Fermi level in real space, a band structure plot is often first simplified in the form of a band diagram. In a band diagram the vertical axis is energy while the horizontal axis represents real space. Horizontal lines represent energy levels, while blocks represent energy bands. When the horizontal lines in these diagrams are slanted then the energy of the level or band changes with distance. Diagrammatically, this depicts the presence of an electric field within the crystal system.
Band diagrams are useful in relating the general band structure properties of different materials to one another when placed in contact with each other.
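A minimal numerical companion to the tight binding and filling-of-bands discussions above: the single-band one-dimensional dispersion E(k) = −2t·cos(ka), its density of states estimated by histogramming, and the band filling from the Fermi–Dirac distribution. The hopping t, temperature, and grid sizes are illustrative assumptions:

import numpy as np

t, a = 1.0, 1.0
k = np.linspace(-np.pi / a, np.pi / a, 200_001)
E = -2.0 * t * np.cos(k * a)   # tight-binding band, width 4t

# Histogram the energies to estimate the density of states g(E), normalized to unit weight.
g, edges = np.histogram(E, bins=400, density=True)
Ec = 0.5 * (edges[1:] + edges[:-1])

def filling(mu, kT=0.05):
    # Fractional occupation of the band at chemical potential mu (Fermi-Dirac).
    f = 1.0 / (np.exp((Ec - mu) / kT) + 1.0)
    return np.trapz(g * f, Ec)

print(filling(0.0))   # approximately 0.5: mu = 0 half-fills this symmetric band

Because g(E) is symmetric about E = 0 here, charge neutrality at half filling pins the Fermi level at mid-band, a one-line illustration of the self-consistent placement of µ described in the filling-of-bands section.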
Physical sciences
Basics_2
Physics
706311
https://en.wikipedia.org/wiki/Canonical%20coordinates
Canonical coordinates
In mathematics and classical mechanics, canonical coordinates are sets of coordinates on phase space which can be used to describe a physical system at any given point in time. Canonical coordinates are used in the Hamiltonian formulation of classical mechanics. A closely related concept also appears in quantum mechanics; see the Stone–von Neumann theorem and canonical commutation relations for details. As Hamiltonian mechanics is generalized by symplectic geometry and canonical transformations are generalized by contact transformations, so the 19th century definition of canonical coordinates in classical mechanics may be generalized to a more abstract 20th century definition of coordinates on the cotangent bundle of a manifold (the mathematical notion of phase space). Definition in classical mechanics In classical mechanics, canonical coordinates are coordinates $q^i$ and $p_i$ in phase space that are used in the Hamiltonian formalism. The canonical coordinates satisfy the fundamental Poisson bracket relations: $\{q^i, q^j\} = 0$, $\{p_i, p_j\} = 0$, $\{q^i, p_j\} = \delta^i_j$. A typical example of canonical coordinates is for $q^i$ to be the usual Cartesian coordinates, and $p_i$ to be the components of momentum. Hence in general, the $p_i$ coordinates are referred to as "conjugate momenta". Canonical coordinates can be obtained from the generalized coordinates of the Lagrangian formalism by a Legendre transformation, or from another set of canonical coordinates by a canonical transformation. Definition on cotangent bundles Canonical coordinates are defined as a special set of coordinates on the cotangent bundle of a manifold. They are usually written as a set of $(q^i, p_i)$ or $(x^i, p_i)$, with the x's or q's denoting the coordinates on the underlying manifold and the p's denoting the conjugate momenta, which are 1-forms in the cotangent bundle at point q in the manifold. A common definition of canonical coordinates is any set of coordinates on the cotangent bundle that allow the canonical one-form to be written in the form $\sum_i p_i\,\mathrm{d}q^i$ up to a total differential. A change of coordinates that preserves this form is a canonical transformation; these are a special case of a symplectomorphism, which is essentially a change of coordinates on a symplectic manifold. In the following exposition, we assume that the manifolds are real manifolds, so that cotangent vectors acting on tangent vectors produce real numbers. Formal development Given a manifold $Q$, a vector field $X$ on $Q$ (a section of the tangent bundle $TQ$) can be thought of as a function acting on the cotangent bundle, by the duality between the tangent and cotangent spaces. That is, define a function $P_X : T^*Q \to \mathbb{R}$ such that $P_X(q, p) = p(X_q)$ holds for all cotangent vectors $p$ in $T_q^*Q$. Here, $X_q$ is a vector in $T_qQ$, the tangent space to the manifold $Q$ at point $q$. The function $P_X$ is called the momentum function corresponding to $X$. In local coordinates, the vector field $X$ at point $q$ may be written as $X = \sum_i X^i(q)\,\partial/\partial q^i$, where the $\partial/\partial q^i$ are the coordinate frame on $TQ$. The conjugate momentum then has the expression $P_X(q, p) = \sum_i X^i(q)\,p_i$, where the $p_i$ are defined as the momentum functions corresponding to the vectors $\partial/\partial q^i$: $p_i = P_{\partial/\partial q^i}$. The $q^i$ together with the $p_j$ form a coordinate system on the cotangent bundle $T^*Q$; these coordinates are called the canonical coordinates. Generalized coordinates In Lagrangian mechanics, a different set of coordinates is used, called the generalized coordinates. These are commonly denoted as $(q^i, \dot{q}^i)$, with $q^i$ called the generalized position and $\dot{q}^i$ the generalized velocity. When a Hamiltonian is defined on the cotangent bundle, then the generalized coordinates are related to the canonical coordinates by means of the Hamilton–Jacobi equations.
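As a concrete check of the fundamental relations above, the following sympy sketch (a minimal illustration, not tied to any particular mechanical system) computes the canonical Poisson bracket for Cartesian positions and their conjugate momenta:

```python
# Verify {q_i, q_j} = 0, {p_i, p_j} = 0 and {q_i, p_j} = delta_ij
# symbolically for three Cartesian coordinates and momenta.
import sympy as sp

n = 3
q = sp.symbols(f'q1:{n+1}')      # q1, q2, q3
p = sp.symbols(f'p1:{n+1}')      # p1, p2, p3

def poisson(f, g):
    """Canonical Poisson bracket {f, g} in the coordinates (q, p)."""
    return sum(sp.diff(f, q[i]) * sp.diff(g, p[i])
               - sp.diff(f, p[i]) * sp.diff(g, q[i]) for i in range(n))

for i in range(n):
    for j in range(n):
        assert poisson(q[i], q[j]) == 0
        assert poisson(p[i], p[j]) == 0
        assert poisson(q[i], p[j]) == (1 if i == j else 0)
print("fundamental Poisson bracket relations verified")
```

The same `poisson` helper can be applied to any pair of smooth phase-space functions, which is how the bracket enters Hamilton's equations of motion.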
Physical sciences
Classical mechanics
Physics
706884
https://en.wikipedia.org/wiki/Waterspout
Waterspout
A waterspout is a rotating column of air that occurs over a body of water, usually appearing as a funnel-shaped cloud in contact with the water and a cumuliform cloud. There are two types of waterspout, each formed by distinct mechanisms. The most common type is a weak vortex known as a "fair weather" or "non-tornadic" waterspout. The other less common type is simply a classic tornado occurring over water rather than land, known as a "tornadic", "supercellular", or "mesocyclonic" waterspout, and accurately a "tornado over water". A fair weather waterspout has a five-part life cycle: formation of a dark spot on the water surface; spiral pattern on the water surface; formation of a spray ring; development of a visible condensation funnel; and ultimately, decay. Most waterspouts do not suck up water. While waterspouts form mostly in tropical and subtropical areas, they are also reported in Europe, Western Asia (the Middle East), Australia, New Zealand, the Great Lakes, Antarctica, and on rare occasions, the Great Salt Lake. Some are also found on the East Coast of the United States, and the coast of California. Although rare, waterspouts have been observed in connection with lake-effect snow precipitation bands. Characteristics Climatology Though the majority of waterspouts occur in the tropics, they can seasonally appear in temperate areas throughout the world, and are common across the western coast of Europe as well as the British Isles and several areas of the Mediterranean and Baltic Sea. They are not restricted to saltwater; many have been reported on lakes and rivers including the Great Lakes and the St. Lawrence River. They are fairly common on the Great Lakes during late summer and early fall, with a record 66+ waterspouts reported over just a seven-day period in 2003. Waterspouts are more frequent within from the coast than farther out at sea. They are common along the southeast U.S. coast, especially off southern Florida and the Keys, and can happen over seas, bays, and lakes worldwide. Approximately 160 waterspouts are currently reported per year across Europe, with the Netherlands reporting the most at 60, followed by Spain and Italy at 25, and the United Kingdom at 15. They are most common in late summer. In the Northern Hemisphere, September has been pinpointed as the prime month of formation. Waterspouts are also frequently observed off the east coast of Australia, with several being described by Joseph Banks during the voyage of the Endeavour in 1770. Formation Waterspouts exist on a microscale, where their environment is less than two kilometers in width. The cloud from which they develop can be as innocuous as a moderate cumulus, or as great as a supercell. While some waterspouts are strong and tornadic in nature, most are much weaker and caused by different atmospheric dynamics. They normally develop in moisture-laden environments as their parent clouds are in the process of development, and it is theorized they spin as they move up the surface boundary from the horizontal shear near the surface, and then stretch upwards to the cloud once the low-level shear vortex aligns with a developing cumulus cloud or thunderstorm. Some weak tornadoes, known as landspouts, have been shown to develop in a similar manner. More than one waterspout can occur simultaneously in the same vicinity. In 2012, as many as nine simultaneous waterspouts were reported on Lake Michigan in the United States. 
In May 2021, at least five simultaneous waterspouts were filmed near Taree, off the northern coast of New South Wales, Australia. Types Non-tornadic Waterspouts that are not associated with a rotating updraft of a supercell thunderstorm are known as "non-tornadic" or "fair-weather" waterspouts. By far the most common type of waterspout, these occur in coastal waters and are associated with dark, flat-bottomed, developing convective cumulus towers. Fair-weather waterspouts develop and dissipate rapidly, having life cycles shorter than 20 minutes. They usually rate no higher than EF0 on the Enhanced Fujita scale, generally exhibiting winds of less than . They are most frequently seen in tropical and sub-tropical climates, with upwards of 400 per year observed in the Florida Keys. They typically move slowly, if at all, since the cloud to which they are attached is horizontally static, being formed by vertical convective action rather than the subduction/adduction interaction between colliding fronts. Fair-weather waterspouts are very similar in both appearance and mechanics to landspouts, and largely behave as such if they move ashore. There are five stages to a fair-weather waterspout life cycle. Initially, a prominent circular, light-colored disk appears on the surface of the water, surrounded by a larger dark area of indeterminate shape. After the formation of these colored disks on the water, a pattern of light- and dark-colored spiral bands develops from the dark spot on the water surface. Then, a dense annulus of sea spray, called a "cascade", appears around the dark spot with what appears to be an eye. Eventually, the waterspout becomes a visible funnel from the water surface to the overhead cloud. The spray vortex can rise to a height of several hundred feet or more, and often creates a visible wake and an associated wave train as it moves. Finally, the funnel and spray vortex begin to dissipate as the inflow of warm air into the vortex weakens, ending the waterspout's life cycle. Tornadic "Tornadic waterspouts", also accurately referred to as "tornadoes over water", are formed from mesocyclones in a manner essentially identical to land-based tornadoes in connection with severe thunderstorms, but simply occurring over water. A tornado which travels from land to a body of water would also be considered a tornadic waterspout. Since the vast majority of mesocyclonic thunderstorms in the United States occur in land-locked areas, true tornadic waterspouts are correspondingly rarer than their fair-weather counterparts in that country. However, in some areas, such as the Adriatic, Aegean and Ionian Seas, tornadic waterspouts can make up half of the total number. Snowspout A winter waterspout, also known as an icespout, an ice devil, or a snowspout, is a rare instance of a waterspout forming under the base of a snow squall. The term "winter waterspout" is used to differentiate between the common warm season waterspout and this rare winter season event. There are a couple of critical criteria for the formation of a winter waterspout. Very cold temperatures need to be present over a body of water, which is itself warm enough to produce fog resembling steam above the water's surface. Like the more efficient lake-effect snow events, winds focusing down the axis of long lakes enhance wind convergence and increase the likelihood of a winter waterspout developing. 
The terms "snow devil" and "snownado" describe a different phenomenon: a snow vortex close to the surface with no parent cloud, similar to a dust devil. Impacts Human Waterspouts have long been recognized as serious marine hazards. Stronger waterspouts pose a threat to watercraft, aircraft and people. It is recommended to keep a considerable distance from these phenomena, and to always be on alert through weather reports. The United States National Weather Service will often issue special marine warnings when waterspouts are likely or have been sighted over coastal waters, or tornado warnings when waterspouts are expected to move onshore. Incidents of waterspouts causing severe damage and casualties are rare; however, there have been several notable examples. The Malta tornado of 1551 was the earliest recorded occurrence of a deadly waterspout. It struck the Grand Harbour of Valletta, sinking four galleys and numerous boats, and killing hundreds of people. The 1851 Sicily tornadoes were twin waterspouts that made landfall in western Sicily, ravaging the coast and countryside before ultimately dissipating back again over the sea. In August 2024, a waterspout has been reported by some witnesses of the sinking of the large yacht Bayesian off the coast of Sicily and might have been the cause or an aggravating circumstance. Seven people died while 15 of 22 were rescued. Natural Depending on how fast the winds from a waterspout are whipping, anything that is within about of the surface of the water, including fish of different sizes, frogs, and even turtles, can be lifted into the air. A waterspout can sometimes suck small animals such as fish out of the water and all the way up into the cloud. Even if the waterspout stops spinning, the fish in the cloud can be carried over land, buffeted up and down and around with the cloud's winds until its currents no longer keep the fish airborne. Depending on how far they travel and how high they are taken into the atmosphere, the fish are sometimes dead by the time they rain down. People as far as inland have experienced raining fish. Fish can also be sucked up from rivers, but raining fish is not a common weather phenomenon. Research and forecasting The Szilagyi Waterspout Index (SWI), developed by Canadian meteorologist Wade Szilagyi, is used to predict conditions favorable for waterspout development. The SWI ranges from −10 to +10, where values greater than or equal to zero represent conditions favorable for waterspout development. The International Centre for Waterspout Research (ICWR) is a non-governmental organization of individuals from around the world who are interested in the field of waterspouts from a research, operational and safety perspective. Originally a forum for researchers and meteorologists, the ICWR has expanded interest and contribution from storm chasers, the media, the marine and aviation communities and from private individuals. Myths There was a commonly held belief among sailors in the 18th and 19th centuries that shooting a broadside cannon volley dispersed waterspouts. Among others, Captain Vladimir Bronevskiy claims that it was a successful technique, having been an eyewitness to the dissipation of a phenomenon in the Adriatic while a midshipman aboard the frigate Venus during the 1806 campaign under Admiral Senyavin. A waterspout has been proposed as a reason for the abandonment of the Mary Celeste.
Physical sciences
Storms
Earth science
707790
https://en.wikipedia.org/wiki/Smithsonite
Smithsonite
Smithsonite, also known as zinc spar, is the mineral form of zinc carbonate (ZnCO3). Historically, smithsonite was identified with hemimorphite before it was realized that they were two different minerals. The two minerals are very similar in appearance and the term calamine has been used for both, leading to some confusion. The distinct mineral smithsonite was named in 1832 by François Sulpice Beudant in honor of English scientist James Smithson (c. 1765–1829), who first identified the mineral in 1802. Smithsonite is a variably colored trigonal mineral that is only rarely found in well-formed crystals. The typical habit is as earthy botryoidal masses. It has a Mohs hardness of 4.5 and a specific gravity of 4.4 to 4.5. Smithsonite occurs as a secondary mineral in the weathering or oxidation zone of zinc-bearing ore deposits. It sometimes occurs as replacement bodies in carbonate rocks and as such may constitute zinc ore. It commonly occurs in association with hemimorphite, willemite, hydrozincite, cerussite, malachite, azurite, aurichalcite and anglesite. It forms two limited solid solution series, with substitution of manganese leading to rhodochrosite, and with iron leading to siderite. A bright yellow variety is sometimes called "turkey fat ore". The yellow colour is due to the presence of greenockite inclusions within the smithsonite crystals.
Physical sciences
Minerals
Earth science
216434
https://en.wikipedia.org/wiki/Tugboat
Tugboat
A tugboat or tug is a marine vessel that manoeuvres other vessels by pushing or pulling them, with direct contact or a tow line. These boats typically tug ships in circumstances where they cannot or should not move under their own power, such as in crowded harbors or narrow canals, or cannot move at all, such as barges, disabled ships, log rafts, or oil platforms. Some are ocean-going, and some are icebreakers or salvage tugs. Early models were powered by steam engines, which were later superseded by diesel engines. Many have deluge gun water jets, which help in firefighting, especially in harbours. Types Seagoing Seagoing tugs (deep-sea tugs or ocean tugboats) fall into four basic categories: The standard seagoing tug with model bow that tows almost exclusively by way of a wire cable. In some rare cases, such as some USN fleet tugs, a synthetic rope hawser may be used for the tow in the belief that the line can be pulled aboard a disabled ship by the crew owing to its lightness compared to wire cable. The "notch tug" can be secured by way of cables, or more commonly in recent times, synthetic lines that run from the stern of the tug to the stern of the barge. This configuration is generally used in inland waters where sea and swell are minimal because of the danger of parting the push wires. Often, this configuration is employed even without a "notch" on the barge, but in those cases it is preferable to have "push knees" on the tug to stabilize its position. Model bow tugs employing this method of pushing nearly always have a towing winch that can be used if sea conditions render pushing inadvisable. With this configuration, the barge being pushed might approach the size of a small ship, with the interaction of the water flow allowing a higher speed with a minimal increase in power required or fuel consumption. The "integral unit", or "integrated tug and barge" (ITB), comprises specially designed vessels that lock together in such a rigid and strong method as to be certified as such by authorities (classification societies) such as the American Bureau of Shipping, Lloyd's Register of Shipping, Indian Register of Shipping, Det Norske Veritas or several others. These units stay combined under virtually any sea conditions and the tugs usually have poor sea-keeping designs for navigation without their barges attached. Vessels in this category are legally considered to be ships rather than tugboats and barges, and must be staffed accordingly. These vessels must show navigation lights compliant with those required of ships rather than those required of tugboats and vessels under tow. "Articulated tug and barge" (ATB) units also utilize mechanical means to connect to their barges. The tug slips into a notch in the stern and is attached by a hinged connection, becoming an articulated vehicle. ATBs generally utilize Intercon and Bludworth connecting systems. ATBs are generally staffed as a large tugboat, with between seven and nine crew members. The typical American ATB displays navigational lights of a towing vessel pushing ahead, as described in the 1972 ColRegs. Harbour Compared with seagoing tugboats, harbour tugboats that are employed exclusively as ship assist vessels are generally smaller and their width-to-length ratio is often higher, due to the need for the tugs' wheelhouse to avoid contact with the hull of a ship, which may have a pronounced rake at the bow and stern. In some ports there is a requirement for certain numbers and sizes of tugboats for port operations with gas tankers.
Also, in many ports, tankers are required to have tug escorts when transiting in harbors to render assistance in the event of mechanical failure. The port generally mandates a minimum horsepower or bollard pull, determined by the size of the escorted vessel. Most ports will have a number of tugs that are used for purposes other than ship assist, such as dredging operations, bunkering ships, transferring liquid products between berths, and cargo operations. These tugs may also be used for ship assist as needed. Modern ship assist tugs are "omnidirectional tugs" that employ propellers that can rotate 360 degrees without a rudder, such as azimuthal stern drives (ASD), azimuthal tractor drives (ATD), Rotor tugs (RT) or cycloidal drives (VSP), as described below. River River tugs are also referred to as towboats or pushboats. Their hull designs would make open ocean operations dangerous. River tugs usually do not have any significant hawser or winch. Their hulls feature a flat front or bow to line up with the rectangular stern of the barge, often with large pushing knees. Propulsion The first tugboat, Charlotte Dundas, was built by William Symington in 1801. It had a steam engine and paddle wheels and was used on rivers in Scotland. Paddle tugs proliferated thereafter and were a common sight for a century. In the 1870s, schooner hulls were converted to screw tugs. Compound steam engines and Scotch boilers provided 300 indicated horsepower. Steam tugs were put to use in every harbour of the world for towing and ship berthing. Tugboat diesel engines typically produce 500 to 2,500 kW (~ 680 to 3,400 hp), but larger boats (used in deep waters) can have power ratings up to 20,000 kW (~ 27,200 hp). Tugboats usually have an extreme power-to-tonnage ratio; normal cargo and passenger ships have a P:T ratio (in kW:GRT) of 0.35 to 1.20, whereas large tugs are typically 2.20 to 4.50 and small harbour tugs 4.0 to 9.5. The engines are often the same as those used in railroad locomotives, but typically drive the propeller mechanically instead of converting the engine output to power electric motors, as is common for diesel-electric locomotives. For safety, tugboat engines often feature two of each critical part for redundancy. A tugboat is typically rated by its engine's power output and its overall bollard pull. The largest commercial harbour tugboats in the 2000s–2010s, used for towing container ships or similar, had around of bollard pull, which is described as above "normal" tugboats. Tugboats are highly manoeuvrable, and various propulsion systems have been developed to increase manoeuvrability and safety. The earliest tugs were fitted with paddle wheels, but these were soon replaced by propeller-driven tugs. Kort nozzles (see below) have been added to increase the thrust-to-power ratio. This was followed by the nozzle-rudder, which eliminated the need for a conventional rudder. The cycloidal propeller (see below) was developed prior to World War II and was occasionally used in tugs because of its manoeuvrability. After World War II it was also linked to safety due to the development of the Voith Water Tractor, a tugboat configuration that could not be pulled over by its tow. In the late 1950s, the Z-drive (or azimuth thruster) was developed. Although sometimes referred to as the Aquamaster or Schottel system, many brands exist: Steerprop, Wärtsilä, Berg Propulsion, etc. These propulsion systems are used on tugboats designed for tasks such as ship docking and marine construction.
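The power-to-tonnage figures quoted above are straightforward to work through. The vessels in the following sketch are entirely hypothetical, chosen only so that the resulting ratios fall inside the ranges given in the text:

```python
# Hypothetical worked example of the P:T ratio (kW per gross register ton).
def power_to_tonnage(power_kw: float, grt: float) -> float:
    """Return the P:T ratio in kW per gross register ton."""
    return power_kw / grt

for name, kw, grt in [("cargo ship", 14_000, 20_000),        # ~0.70
                      ("large tug", 5_000, 1_600),            # ~3.12
                      ("small harbour tug", 2_000, 300)]:     # ~6.67
    print(f"{name}: P:T = {power_to_tonnage(kw, grt):.2f} kW/GRT")
```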
Conventional propeller/rudder configurations are more efficient for port-to-port towing. Kort nozzle The Kort nozzle is a sturdy cylindrical structure around a special propeller having minimum clearance between the propeller blades and the inner wall of the Kort nozzle. The thrust-to-power ratio is enhanced because the water approaches the propeller in a linear configuration and exits the nozzle the same way. The Kort nozzle is named after its inventor, but many brands exist. Cyclorotor The cycloidal propeller is a circular plate mounted on the underside of the hull, rotating around a vertical axis, with a circular array of vertical blades (in the shape of hydrofoils) that protrude out of the bottom of the ship. Each blade can rotate itself around a vertical axis. The internal mechanism changes the angle of attack of the blades in sync with the rotation of the plate, so that each blade can provide thrust in any direction, similar to the collective pitch control and cyclic in a helicopter. Fenders Tugboat fenders are made of high-abrasion-resistance rubber with good resilience properties. They are very popular with small port craft owners and tug owners. These fenders are generally made from cut pieces of vehicle tires strung together. Often the fendering on the sides of the tug is composed of large heavy-equipment or aircraft tires attached to or hung on the side of the tug. Some fendering is compression moulded in high-pressure thermic-fluid-heated moulds and has excellent seawater resistance, but is not widely used owing to the cost. Tugboat bow fenders are also called beards or bow puds. In the past they were made of rope for padding to protect the bow, but rope fendering is almost never seen in recent times. Other types of tugboat fender include the tug cylindrical fender, W fender, M fender, D fender, and others. Carousel A recent Dutch innovation is the carousel tug, winner of the Maritime Innovation Award at the Dutch Maritime Innovation Awards Gala in 2006. It adds a pair of interlocking rings to the body of the tug, the inner ring fixed to the boat and the outer ring connected to the ship by winch or towing hook. Since the towing point rotates freely, the tug is very difficult to capsize. Races Vintage tugboat races have been held annually in Olympia, Washington, since 1974 during the Olympia Harbor Days Maritime Festival. Tugboat races are held annually on Elliott Bay in Seattle, on the Hudson River at the New York Tugboat Race, on the Detroit River, and at the Great Tugboat Race and Parade on the St. Mary's River. Ballet Since 1980, an annual tugboat ballet has been held in Hamburg harbour on the occasion of the festival commemorating the anniversary of the establishment of a port in Hamburg. On a weekend in May, eight tugboats perform choreographed movements for about an hour to the tunes of waltzes and other sorts of dance music. Roundups The Tugboat Roundup is a gathering of tugboats and other vessels in celebration of the maritime industry. The Waterford Tugboat Roundup is held in the late summer at the confluence of the Hudson and Mohawk Rivers in Waterford, New York. The tugs featured are river tugs and other tugs re-purposed to serve on the New York State Canal System. In popular culture Tugboat Annie was the subject of a series of Saturday Evening Post magazine stories featuring the female captain of the tugboat Narcissus in Puget Sound; the character was later featured in the films Tugboat Annie (1933), Tugboat Annie Sails Again (1940) and Captain Tugboat Annie (1945).
The Canadian television series The Adventures of Tugboat Annie was filmed in 1957. Film and television To date, there have been four children's shows revolving around anthropomorphic tugboats. In the late 1980s, 13 episodes were made of TUGS, a series depicting the life of tugboats in the 1920s. An American adaptation using edited footage from TUGS followed: Salty's Lighthouse. In the 1975 Soviet animated musical short film В порту (In the Port), a tugboat sings the song "Through a harbour area". One of the creators of TUGS went on to direct Theodore Tugboat. The animated preschool series Toot the Tiny Tugboat started broadcasting on Channel 5 Milkshake! in 2014 and on Cartoonito in 2015, with a Welsh-language version airing on S4C Cyw. "Tugger" is a tugboat in the animated series South Park. He appears in the episode "The New Terrance and Phillip Movie Trailer" as a sidekick for Russell Crowe in a fictitious television series entitled Fightin' Round The World with Russell Crowe. Tugger follows Crowe as he engages various people in physical conflicts, providing emotional support and comic relief. At one point Tugger even attempts to commit suicide, upon being forced to hear Russell Crowe's new musical composition. Literature (Alphabetical by author) The children's book Scuffy the Tugboat, written by Gertrude Crampton and illustrated by Tibor Gergely and first published in 1946 as part of the Little Golden Books series, follows the adventures of a young toy tugboat who seeks a life beyond the confines of a tub inside his owner's toy store. The Dutch writer Jan de Hartog wrote numerous nautical novels, first in Dutch, then in English. The novel Hollands Glorie, written prior to World War II, was made into a Dutch miniseries in 1978 and concerned the dangers faced by the crews of Dutch salvage tugs. The novella Stella, concerning the dangers faced by the captains of rescue tugs in the English Channel during World War II, was made into a film entitled The Key in 1958. The novel The Captain (1967), about the captain of a rescue tug during a Murmansk convoy, sold over a million copies. Its 1986 sequel, The Commodore, features the narrator captaining a fleet of tugs in peacetime. Little Toot (1939), written and illustrated by Hardie Gramatky, is a children's story of an anthropomorphic tugboat child who wants to help tow ships in a harbour near Hoboken. He is rejected by the tugboat community and dejectedly drifts out to sea, where he accidentally discovers a shipwrecked liner and a chance to prove his worth. This story was animated as part of the Disney movie Melody Time. Farley Mowat's book The Grey Seas Under tells the tale of a legendary North Atlantic salvage tug, the Foundation Franklin. He later wrote The Serpent's Coil, which also deals with salvage tugs in the North Atlantic.
Technology
Naval transport
null
216488
https://en.wikipedia.org/wiki/Catalan%20number
Catalan number
The Catalan numbers are a sequence of natural numbers that occur in various counting problems, often involving recursively defined objects. They are named after Eugène Catalan, though they were previously discovered in the 1730s by Minggatu. The $n$-th Catalan number can be expressed directly in terms of the central binomial coefficients by $C_n = \frac{1}{n+1}\binom{2n}{n} = \frac{(2n)!}{(n+1)!\,n!}$ for $n \ge 0$. The first Catalan numbers for $n = 0, 1, 2, 3, \ldots$ are 1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796, …. Properties An alternative expression for $C_n$ is $C_n = \binom{2n}{n} - \binom{2n}{n+1}$ for $n \ge 0$, which is equivalent to the expression given above because $\binom{2n}{n+1} = \frac{n}{n+1}\binom{2n}{n}$. This expression shows that $C_n$ is an integer, which is not immediately obvious from the first formula given. This expression forms the basis for a proof of the correctness of the formula. Another alternative expression is $C_n = \frac{1}{2n+1}\binom{2n+1}{n}$, which can be directly interpreted in terms of the cycle lemma; see below. The Catalan numbers satisfy the recurrence relations $C_0 = 1,\ C_{n+1} = \sum_{i=0}^{n} C_i\,C_{n-i}$ and $C_0 = 1,\ C_{n+1} = \frac{2(2n+1)}{n+2}\,C_n$. Asymptotically, the Catalan numbers grow as $C_n \sim \frac{4^n}{n^{3/2}\sqrt{\pi}}$, in the sense that the quotient of the $n$-th Catalan number and the expression on the right tends towards 1 as $n$ approaches infinity. This can be proved by using the asymptotic growth of the central binomial coefficients, by Stirling's approximation for $n!$, or via generating functions. The only Catalan numbers $C_n$ that are odd are those for which $n = 2^k - 1$; all others are even. The only prime Catalan numbers are $C_2 = 2$ and $C_3 = 5$. More generally, the multiplicity with which a prime $p$ divides $C_n$ can be determined by first expressing $n + 1$ in base $p$. For $p = 2$, the multiplicity is the number of 1 bits, minus 1. For $p$ an odd prime, count all digits greater than $(p+1)/2$; also count digits equal to $(p+1)/2$ unless final; and count digits equal to $(p-1)/2$ if not final and the next digit is counted. The only known odd Catalan numbers that do not have last digit 5 are $C_0 = 1$, $C_1 = 1$, $C_7 = 429$, $C_{31}$, $C_{127}$ and $C_{255}$. The odd Catalan numbers, $C_n$ for $n = 2^k - 1$, do not have last digit 5 if $n + 1$ has a base 5 representation containing 0, 1 and 2 only, except in the least significant place, which could also be a 3. The Catalan numbers have the integral representations $C_n = \frac{1}{2\pi}\int_0^4 x^n \sqrt{\frac{4-x}{x}}\,\mathrm{d}x = \frac{2}{\pi}\,4^n \int_{-1}^{1} t^{2n}\sqrt{1-t^2}\,\mathrm{d}t$, which immediately yields $\sum_{n=0}^{\infty} \frac{C_n}{4^n} = 2$. This has a simple probabilistic interpretation. Consider a random walk on the integer line, starting at 0. Let −1 be a "trap" state, such that if the walker arrives at −1, it will remain there. The walker can arrive at the trap state at times 1, 3, 5, 7, …, and the number of ways the walker can arrive at the trap state at time $2k+1$ is $C_k$. Since the 1D random walk is recurrent, the probability that the walker eventually arrives at −1 is $\sum_{k=0}^{\infty} \frac{C_k}{2^{2k+1}} = 1$.
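A quick Python sketch (illustrative, not taken from the article's sources) can confirm that the closed form, the cycle-lemma expression, and the convolution recurrence above all produce the same sequence:

```python
# Three ways to compute the Catalan numbers; all must agree.
from math import comb

def catalan_closed(n: int) -> int:
    return comb(2 * n, n) // (n + 1)            # C_n = binom(2n, n)/(n+1)

def catalan_alt(n: int) -> int:
    return comb(2 * n + 1, n) // (2 * n + 1)    # C_n = binom(2n+1, n)/(2n+1)

def catalan_recurrence(limit: int) -> list[int]:
    c = [1]                                     # C_0 = 1
    for n in range(limit - 1):                  # C_{n+1} = sum C_i * C_{n-i}
        c.append(sum(c[i] * c[n - i] for i in range(n + 1)))
    return c

cs = catalan_recurrence(10)
assert cs == [catalan_closed(n) for n in range(10)]
assert cs == [catalan_alt(n) for n in range(10)]
print(cs)   # [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862]
```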
Applications in combinatorics There are many counting problems in combinatorics whose solution is given by the Catalan numbers. The book Enumerative Combinatorics: Volume 2 by combinatorialist Richard P. Stanley contains a set of exercises which describe 66 different interpretations of the Catalan numbers. Following are some examples, with illustrations of the cases $C_3 = 5$ and $C_4 = 14$. $C_n$ is the number of Dyck words of length $2n$. A Dyck word is a string consisting of $n$ X's and $n$ Y's such that no initial segment of the string has more Y's than X's. For example, the following are the Dyck words up to length 6: XY; XXYY, XYXY; XXXYYY, XYXXYY, XYXYXY, XXYYXY, XXYXYY. Re-interpreting the symbol X as an open parenthesis and Y as a close parenthesis, $C_n$ counts the number of expressions containing $n$ pairs of parentheses which are correctly matched: ((())), (()()), (())(), ()(()), ()()(). $C_n$ is the number of different ways $n + 1$ factors can be completely parenthesized (or the number of ways of associating $n$ applications of a binary operator, as in the matrix chain multiplication problem). For $n = 3$, for example, we have the following five different parenthesizations of four factors: ((ab)c)d, (a(bc))d, (ab)(cd), a((bc)d), a(b(cd)). Successive applications of a binary operator can be represented in terms of a full binary tree, by labeling each leaf $a, b, c, d$. It follows that $C_n$ is the number of full binary trees with $n + 1$ leaves, or, equivalently, with a total of $n$ internal nodes. $C_n$ is the number of non-isomorphic ordered (or plane) trees with $n + 1$ vertices. See encoding general trees as binary trees. For example, $C_n$ is the number of possible parse trees for a sentence (assuming binary branching), in natural language processing. $C_n$ is the number of monotonic lattice paths along the edges of a grid with $n \times n$ square cells, which do not pass above the diagonal. A monotonic path is one which starts in the lower left corner, finishes in the upper right corner, and consists entirely of edges pointing rightwards or upwards. Counting such paths is equivalent to counting Dyck words: X stands for "move right" and Y stands for "move up". The case $n = 4$ can be represented by listing the Catalan elements by column height: [0,0,0,0] [0,0,0,1] [0,0,0,2] [0,0,1,1] [0,1,1,1] [0,0,1,2] [0,0,0,3] [0,1,1,2] [0,0,2,2] [0,0,1,3] [0,0,2,3] [0,1,1,3] [0,1,2,2] [0,1,2,3]. A convex polygon with $n + 2$ sides can be cut into triangles by connecting vertices with non-crossing line segments (a form of polygon triangulation). The number of triangles formed is $n$ and the number of different ways that this can be achieved is $C_n$. Hexagons illustrate the case $n = 4$. $C_n$ is the number of stack-sortable permutations of $\{1, \ldots, n\}$. A permutation $w$ is called stack-sortable if $S(w) = (1, \ldots, n)$, where $S(w)$ is defined recursively as follows: write $w = unv$ where $n$ is the largest element in $w$ and $u$ and $v$ are shorter sequences, and set $S(w) = S(u)\,S(v)\,n$, with $S$ being the identity for one-element sequences. $C_n$ is the number of permutations of $\{1, \ldots, n\}$ that avoid the permutation pattern 123 (or, alternatively, any of the other patterns of length 3); that is, the number of permutations with no three-term increasing subsequence. For $n = 3$, these permutations are 132, 213, 231, 312 and 321. For $n = 4$, they are 1432, 2143, 2413, 2431, 3142, 3214, 3241, 3412, 3421, 4132, 4213, 4231, 4312 and 4321. $C_n$ is the number of noncrossing partitions of the set $\{1, \ldots, n\}$. A fortiori, $C_n$ never exceeds the $n$-th Bell number. $C_n$ is also the number of noncrossing partitions of the set $\{1, \ldots, 2n\}$ in which every block is of size 2. $C_n$ is the number of ways to tile a stairstep shape of height $n$ with $n$ rectangles. Cutting across the anti-diagonal and looking at only the edges gives full binary trees. $C_n$ is the number of ways to form a "mountain range" with $n$ upstrokes and $n$ downstrokes that all stay above a horizontal line. The mountain range interpretation is that the mountains will never go below the horizon. $C_n$ is the number of standard Young tableaux whose diagram is a 2-by-$n$ rectangle. In other words, it is the number of ways the numbers $1, 2, \ldots, 2n$ can be arranged in a 2-by-$n$ rectangle so that each row and each column is increasing. As such, the formula can be derived as a special case of the hook-length formula. For $n = 3$, the five tableaux are 123/456, 124/356, 125/346, 134/256 and 135/246 (written as top row/bottom row). $C_n$ is the number of length-$n$ sequences that start with 1, and can increase by either 0 or 1, or decrease by any number (to at least 1). For $n = 4$ these are 1111, 1112, 1121, 1122, 1123, 1211, 1212, 1221, 1222, 1223, 1231, 1232, 1233 and 1234. From a Dyck path, start a counter at 0. An X increases the counter by 1 and a Y decreases it by 1. Record the values at only the X's. Compared to the similar representation of the Bell numbers, only 1213 is missing.
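The Dyck-word interpretation is easy to make concrete. The following minimal backtracking enumerator (an illustrative sketch; the function name is ours) generates all Dyck words of length 2n, and its counts reproduce C_3 = 5 and C_4 = 14:

```python
# Enumerate Dyck words of length 2n: never emit a Y without a surplus of X's.
def dyck_words(n: int) -> list[str]:
    out = []
    def build(word: str, opens: int, closes: int) -> None:
        if opens == n and closes == n:
            out.append(word)
            return
        if opens < n:                      # an X may always be added
            build(word + "X", opens + 1, closes)
        if closes < opens:                 # a Y only if a surplus of X's exists
            build(word + "Y", opens, closes + 1)
    build("", 0, 0)
    return out

print(dyck_words(3))       # the 5 Dyck words of length 6
print(len(dyck_words(4)))  # 14 = C_4
```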
Proof of the formula There are several ways of explaining why the formula $C_n = \frac{1}{n+1}\binom{2n}{n}$ solves the combinatorial problems listed above. The first proof below uses a generating function. The other proofs are examples of bijective proofs; they involve literally counting a collection of some kind of object to arrive at the correct formula. First proof We first observe that all of the combinatorial problems listed above satisfy Segner's recurrence relation $C_0 = 1,\ C_{n+1} = \sum_{i=0}^{n} C_i\,C_{n-i}$. For example, every Dyck word $w$ of length ≥ 2 can be written in a unique way in the form $w = X w_1 Y w_2$ with (possibly empty) Dyck words $w_1$ and $w_2$. The generating function for the Catalan numbers is defined by $c(x) = \sum_{n=0}^{\infty} C_n x^n$. The recurrence relation given above can then be summarized in generating function form by the relation $c(x) = 1 + x\,c(x)^2$; in other words, this equation follows from the recurrence relation by expanding both sides into power series. On the one hand, the recurrence relation uniquely determines the Catalan numbers; on the other hand, interpreting the relation as a quadratic equation in $c(x)$ and using the quadratic formula, it can be algebraically solved to yield the two solution possibilities $c(x) = \frac{1 + \sqrt{1-4x}}{2x}$ or $c(x) = \frac{1 - \sqrt{1-4x}}{2x}$. From the two possibilities, the second must be chosen because only the second gives $c(0) = C_0 = 1$. The square root term can be expanded as a power series using the binomial series: $\sqrt{1-4x} = 1 - 2\sum_{n=0}^{\infty} \frac{1}{n+1}\binom{2n}{n}\,x^{n+1}$. Thus, $c(x) = \sum_{n=0}^{\infty} \frac{1}{n+1}\binom{2n}{n}\,x^{n}$. Second proof We count the number of paths which start and end on the diagonal of an $n \times n$ grid. All such paths have $n$ right and $n$ up steps. Since we can choose which $n$ of the $2n$ steps are up or right, there are in total $\binom{2n}{n}$ monotonic paths of this type. A bad path crosses the main diagonal and touches the next higher diagonal (red in the illustration). The part of the path after the higher diagonal is then flipped about that diagonal, as illustrated with the red dotted line. This swaps all the right steps to up steps and vice versa. In the section of the path that is not reflected, there is one more up step than right steps, so therefore the remaining section of the bad path has one more right step than up steps. When this portion of the path is reflected, it will have one more up step than right steps. Since there are still $2n$ steps, there are now $n + 1$ up steps and $n - 1$ right steps. So, instead of reaching $(n, n)$, all bad paths after reflection end at $(n - 1, n + 1)$. Because every monotonic path in the $(n-1) \times (n+1)$ grid meets the higher diagonal, and because the reflection process is reversible, the reflection is therefore a bijection between bad paths in the original grid and monotonic paths in the new grid. The number of bad paths is therefore $\binom{2n}{n-1}$, and the number of Catalan paths (i.e. good paths) is obtained by removing the number of bad paths from the total number of monotonic paths of the original grid: $C_n = \binom{2n}{n} - \binom{2n}{n-1} = \frac{1}{n+1}\binom{2n}{n}$. In terms of Dyck words, we start with a (non-Dyck) sequence of $n$ X's and $n$ Y's and interchange all X's and Y's after the first Y that violates the Dyck condition. After this Y, note that there is exactly one more Y than there are X's.
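The generating-function solution in the first proof can be checked mechanically. The sympy sketch below (an illustration, not part of the original article) expands c(x) = (1 − √(1 − 4x))/(2x) as a power series and reads off the Catalan numbers:

```python
# Power-series check of c(x) = (1 - sqrt(1 - 4x)) / (2x).
import sympy as sp

x = sp.symbols('x')
# Expand sqrt(1 - 4x) to O(x^9); (1 - root) has no constant term,
# so dividing by 2x leaves an ordinary polynomial in x.
root = sp.sqrt(1 - 4 * x).series(x, 0, 9).removeO()
c = sp.expand((1 - root) / (2 * x))
print([c.coeff(x, n) for n in range(8)])   # [1, 1, 2, 5, 14, 42, 132, 429]
```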
Third proof This bijective proof provides a natural explanation for the term $n + 1$ appearing in the denominator of the formula for $C_n$. A generalized version of this proof can be found in a paper of Rukavicka Josef (2011). Given a monotonic path, the exceedance of the path is defined to be the number of vertical edges above the diagonal. For example, in Figure 2, the edges above the diagonal are marked in red, so the exceedance of this path is 5. Given a monotonic path whose exceedance is not zero, we apply the following algorithm to construct a new path whose exceedance is less than the one we started with. Starting from the bottom left, follow the path until it first travels above the diagonal. Continue to follow the path until it touches the diagonal again. Denote by $X$ the first such edge that is reached. Swap the portion of the path occurring before $X$ with the portion occurring after $X$. In Figure 3, the black dot indicates the point where the path first crosses the diagonal. The black edge is $X$; we place the last lattice point of the red portion in the top-right corner, and the first lattice point of the green portion in the bottom-left corner, and place $X$ accordingly, to make a new path, shown in the second diagram. The exceedance has dropped from 3 to 2. In fact, the algorithm causes the exceedance to decrease by 1 for any path that we feed it, because the first vertical step starting on the diagonal (at the point marked with a black dot) is the only vertical edge that changes from being above the diagonal to being below it when we apply the algorithm - all the other vertical edges stay on the same side of the diagonal. It can be seen that this process is reversible: given any path $P$ whose exceedance is less than $n$, there is exactly one path which yields $P$ when the algorithm is applied to it. Indeed, the (black) edge $X$, which originally was the first horizontal step ending on the diagonal, has become the last horizontal step starting on the diagonal. Alternatively, reverse the original algorithm to look for the first edge that passes below the diagonal. This implies that the number of paths of exceedance $n$ is equal to the number of paths of exceedance $n - 1$, which is equal to the number of paths of exceedance $n - 2$, and so on, down to zero. In other words, we have split up the set of all monotonic paths into $n + 1$ equally sized classes, corresponding to the possible exceedances between 0 and $n$. Since there are $\binom{2n}{n}$ monotonic paths, we obtain the desired formula $C_n = \frac{1}{n+1}\binom{2n}{n}$. Figure 4 illustrates the situation for $n = 3$. Each of the 20 possible monotonic paths appears somewhere in the table. The first column shows all paths of exceedance three, which lie entirely above the diagonal. The columns to the right show the result of successive applications of the algorithm, with the exceedance decreasing one unit at a time. There are five rows, that is $C_3 = 5$, and the last column displays all paths no higher than the diagonal. Using Dyck words, start with a sequence of $n$ X's and $n$ Y's. Let $d$ be the first symbol that brings an initial subsequence to equality (an equal number of X's and Y's), and write the sequence as $F d L$. The new sequence is $L d F$.
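The exceedance-lowering swap just described is mechanical enough to code directly. In the sketch below (an illustration; the 'R'/'U' path encoding is our assumption, not the article's), repeated application drives a path's exceedance down to zero:

```python
# Monotonic paths as strings of 'R' (right) and 'U' (up).
def exceedance(path: str) -> int:
    """Number of up-edges lying above the diagonal of the n x n grid."""
    x = y = e = 0
    for step in path:
        if step == 'U':
            if y >= x:          # this up-edge sits above the diagonal
                e += 1
            y += 1
        else:
            x += 1
    return e

def lower_exceedance(path: str) -> str:
    """One application of the swap (requires exceedance > 0)."""
    balance = 0                 # (#U - #R) so far; positive = above diagonal
    above = False
    for i, step in enumerate(path):
        balance += 1 if step == 'U' else -1
        if balance > 0:
            above = True        # the path has travelled above the diagonal
        if above and balance == 0:
            # path[i] is the edge X returning to the diagonal:
            # swap the portions before and after it.
            return path[i + 1:] + path[i] + path[:i]
    raise ValueError("path never rises above the diagonal")

p = "UURR"                      # exceedance 2
p1 = lower_exceedance(p)        # "RUUR", exceedance 1
p2 = lower_exceedance(p1)       # "RRUU", exceedance 0
print(p1, exceedance(p1), p2, exceedance(p2))
```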
Fourth proof This proof uses the triangulation definition of Catalan numbers to establish a relation between $C_n$ and $C_{n+1}$. Given a polygon $P$ with $n + 2$ sides and a triangulation, mark one of its sides as the base, and also orient one of its $2n + 1$ total edges. There are $(4n + 2)\,C_n$ such marked triangulations for a given base. Given a polygon $Q$ with $n + 3$ sides and a (different) triangulation, again mark one of its sides as the base. Mark one of the sides other than the base side (and not an inner triangle edge). There are $(n + 2)\,C_{n+1}$ such marked triangulations for a given base. There is a simple bijection between these two marked triangulations: we can either collapse the triangle in $Q$ whose side is marked (in two ways, and subtract the two that cannot collapse the base), or, in reverse, expand the oriented edge in $P$ to a triangle and mark its new side. Thus $(4n + 2)\,C_n = (n + 2)\,C_{n+1}$. Write $f(n) = \frac{(2n)!}{(n+1)!\,n!}$; because $(4n + 2)\,f(n) = (n + 2)\,f(n+1)$, the binomial formula satisfies the same recurrence. Applying the recursion with $C_0 = f(0) = 1$ gives the result $C_n = f(n)$. Fifth proof This proof is based on the Dyck words interpretation of the Catalan numbers, so $C_n$ is the number of ways to correctly match $n$ pairs of brackets. We denote a (possibly empty) correct string with $c$ and its inverse with $c'$. Since any $c$ can be uniquely decomposed into $c = (c_1)\,c_2$, summing over the possible lengths of $c_1$ immediately gives the recursive definition $C_{n+1} = \sum_{i=0}^{n} C_i\,C_{n-i}$. Let $b$ be a balanced string of length $2n$, i.e. $b$ contains an equal number of "(" and ")", so there are $B_n = \binom{2n}{n}$ such strings. A balanced string can also be uniquely decomposed into either $(c)\,b$ or $)c'(\,b$, so $B_{n+1} = 2\sum_{i=0}^{n} B_i\,C_{n-i}$. Any incorrect (non-Catalan) balanced string starts with $c\,)$, and the remaining string has one more "(" than ")", so $B_{n+1} - C_{n+1} = \sum_{i=0}^{n} C_i \binom{2(n-i)+1}{n-i}$. Also, from the definitions, we have $\binom{2m+1}{m} = \tfrac{1}{2}\binom{2m+2}{m+1} = \tfrac{1}{2}B_{m+1}$, and hence $B_{n+1} - C_{n+1} = \tfrac{1}{2}\sum_{i=0}^{n} C_i\,B_{n+1-i} = \tfrac{1}{2}\left(\tfrac{1}{2}B_{n+2} - C_{n+1}\right)$. Therefore, as this is true for all $n$, $C_n = 2B_n - \tfrac{1}{2}B_{n+1} = \frac{1}{n+1}\binom{2n}{n}$. Sixth proof This proof is based on the Dyck words interpretation of the Catalan numbers and uses the cycle lemma of Dvoretzky and Motzkin. We call a sequence of X's and Y's dominating if, reading from left to right, the number of X's is always strictly greater than the number of Y's. The cycle lemma states that any sequence of $m$ X's and $n$ Y's, where $m > n$, has precisely $m - n$ dominating circular shifts. To see this, arrange the given sequence of $m + n$ X's and Y's in a circle. Repeatedly removing XY pairs leaves exactly $m - n$ X's. Each of these X's was the start of a dominating circular shift before anything was removed. For example, consider the sequence XXYXY, with $m = 3$ and $n = 2$. This sequence is dominating, but none of its circular shifts XYXYX, YXYXX, XYXXY and YXXYX are. A string is a Dyck word of $n$ X's and $n$ Y's if and only if prepending an X to the Dyck word gives a dominating sequence with $n + 1$ X's and $n$ Y's, so we can count the former by instead counting the latter. In particular, when $m = n + 1$, there is exactly one dominating circular shift. There are $\binom{2n+1}{n}$ sequences with exactly $n + 1$ X's and $n$ Y's. For each of these, only one of the $2n + 1$ circular shifts is dominating. Therefore there are $\frac{1}{2n+1}\binom{2n+1}{n} = C_n$ distinct sequences of $n + 1$ X's and $n$ Y's that are dominating, each of which corresponds to exactly one Dyck word. Hankel matrix The $n \times n$ Hankel matrix whose $(i, j)$ entry is the Catalan number $C_{i+j-2}$ has determinant 1, regardless of the value of $n$. For example, for $n = 4$ we have $\det\begin{pmatrix} 1 & 1 & 2 & 5 \\ 1 & 2 & 5 & 14 \\ 2 & 5 & 14 & 42 \\ 5 & 14 & 42 & 132 \end{pmatrix} = 1.$ Moreover, if the indexing is "shifted" so that the $(i, j)$ entry is filled with the Catalan number $C_{i+j-1}$ then the determinant is still 1, regardless of the value of $n$. For example, for $n = 4$ we have $\det\begin{pmatrix} 1 & 2 & 5 & 14 \\ 2 & 5 & 14 & 42 \\ 5 & 14 & 42 & 132 \\ 14 & 42 & 132 & 429 \end{pmatrix} = 1.$ Taken together, these two conditions uniquely define the Catalan numbers. Another feature unique to the Catalan–Hankel matrix is that the $n \times n$ submatrix starting at $(2, 2)$ has determinant $n + 1$, et cetera.
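These determinant identities are easy to spot-check. The following sympy sketch (illustrative only) verifies both Hankel determinants for sizes up to 6:

```python
# Check det(C_{i+j}) = det(C_{i+j+1}) = 1 for Hankel matrices of size n.
import sympy as sp
from math import comb

def catalan(n: int) -> int:
    return comb(2 * n, n) // (n + 1)

for n in range(1, 7):
    h       = sp.Matrix(n, n, lambda i, j: catalan(i + j))
    shifted = sp.Matrix(n, n, lambda i, j: catalan(i + j + 1))
    assert h.det() == 1 and shifted.det() == 1
print("Hankel determinants are 1 for n = 1..6")
```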
History The Catalan sequence was described in the 18th century by Leonhard Euler, who was interested in the number of different ways of dividing a polygon into triangles. The sequence is named after Eugène Charles Catalan, who discovered the connection to parenthesized expressions during his exploration of the Towers of Hanoi puzzle. The reflection counting trick (second proof) for Dyck words was found by Désiré André in 1887. The name "Catalan numbers" originated from John Riordan. In 1988, it came to light that the Catalan number sequence had been used in China by the Mongolian mathematician Mingantu by 1730. That is when he started to write his book Ge Yuan Mi Lu Jie Fa [The Quick Method for Obtaining the Precise Ratio of Division of a Circle], which was completed by his student Chen Jixin in 1774 but published sixty years later. Peter J. Larcombe (1999) sketched some of the features of the work of Mingantu, including the stimulus of Pierre Jartoux, who brought three infinite series to China early in the 1700s. For instance, Ming used the Catalan sequence to express series expansions of $\sin(2\alpha)$ and $\sin(4\alpha)$ in terms of $\sin(\alpha)$. Generalizations The Catalan numbers can be interpreted as a special case of Bertrand's ballot theorem. Specifically, $C_n$ is the number of ways for a candidate A with $n + 1$ votes to lead candidate B with $n$ votes throughout the count. The two-parameter sequence of non-negative integers $\frac{(2m)!\,(2n)!}{(m+n)!\,m!\,n!}$ is a generalization of the Catalan numbers. These are named super-Catalan numbers, per Ira Gessel. These should not be confused with the Schröder–Hipparchus numbers, which sometimes are also called super-Catalan numbers. For $m = 1$, this is just two times the ordinary Catalan numbers, and for $m = n$, the numbers are the central binomial coefficients and have an easy combinatorial description. However, other combinatorial descriptions are only known for $m = 2$ and $m = 3$, and it is an open problem to find a general combinatorial interpretation. Sergey Fomin and Nathan Reading have given a generalized Catalan number associated to any finite crystallographic Coxeter group, namely the number of fully commutative elements of the group; in terms of the associated root system, it is the number of anti-chains (or order ideals) in the poset of positive roots. The classical Catalan number $C_n$ corresponds to the root system of type $A_n$. The classical recurrence relation generalizes: the Catalan number of a Coxeter diagram is equal to the sum of the Catalan numbers of all its maximal proper sub-diagrams. The Catalan numbers are a solution of a version of the Hausdorff moment problem. Catalan k-fold convolution The Catalan $k$-fold convolution, where $k \ge 1$, is $\sum_{i_1 + \cdots + i_k = n} C_{i_1} \cdots C_{i_k} = \frac{k}{2n+k}\binom{2n+k}{n}.$
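As a sanity check on the convolution identity above (whose right-hand side is reconstructed here from the generating-function relation for c(x)^k, so treat the exact form as an assumption being tested), the following sketch multiplies truncated power series of the Catalan generating function:

```python
# Verify sum_{i_1+...+i_k=n} C_{i_1}...C_{i_k} = k/(2n+k) * binom(2n+k, n)
# by computing the coefficients of c(x)**k as a truncated polynomial.
from math import comb

def catalan(n: int) -> int:
    return comb(2 * n, n) // (n + 1)

def poly_mul(a, b, trunc):
    out = [0] * trunc
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < trunc:
                out[i + j] += ai * bj
    return out

N = 10
c = [catalan(n) for n in range(N)]
for k in range(1, 5):
    conv = [1] + [0] * (N - 1)            # multiplicative identity
    for _ in range(k):
        conv = poly_mul(conv, c, N)       # coefficients of c(x)**k
    for n in range(N):
        assert conv[n] * (2 * n + k) == k * comb(2 * n + k, n)
print("k-fold convolution identity verified for k = 1..4, n < 10")
```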
Mathematics
Sequences
null
216601
https://en.wikipedia.org/wiki/Hubble%20Deep%20Field
Hubble Deep Field
The Hubble Deep Field (HDF) is an image of a small region in the constellation Ursa Major, constructed from a series of observations by the Hubble Space Telescope. It covers an area about 2.6 arcminutes on a side, about one 24-millionth of the whole sky, which is equivalent in angular size to a tennis ball at a distance of 100 metres. The image was assembled from 342 separate exposures taken with the Space Telescope's Wide Field and Planetary Camera 2 over ten consecutive days between December 18 and 28, 1995. The field is so small that only a few foreground stars in the Milky Way lie within it; thus, almost all of the 3,000 objects in the image are galaxies, some of which are among the youngest and most distant known. By revealing such large numbers of very young galaxies, the HDF has become a landmark image in the study of the early universe. Three years after the HDF observations were taken, a region in the south celestial hemisphere was imaged in a similar way and named the Hubble Deep Field South. The similarities between the two regions strengthened the belief that the universe is uniform over large scales and that the Earth occupies a typical region in the Universe (the cosmological principle). A wider but shallower survey was also made as part of the Great Observatories Origins Deep Survey. In 2004 a deeper image, known as the Hubble Ultra-Deep Field (HUDF), was constructed from a few months of light exposure. The HUDF image was at the time the most sensitive astronomical image ever made at visible wavelengths, and it remained so until the Hubble eXtreme Deep Field (XDF) was released in 2012. Conception One of the key aims of the astronomers who designed the Hubble Space Telescope was to use its high optical resolution to study distant galaxies to a level of detail that was not possible from the ground. Positioned above the atmosphere, Hubble avoids atmospheric airglow allowing it to take more sensitive visible and ultraviolet light images than can be obtained with seeing-limited ground-based telescopes (when good adaptive optics correction at visible wavelengths becomes possible, 10 m ground-based telescopes may become competitive). Although the telescope's mirror suffered from spherical aberration when the telescope was launched in 1990, it could still be used to take images of more distant galaxies than had previously been obtainable. Because light takes billions of years to reach Earth from very distant galaxies, we see them as they were billions of years ago; thus, extending the scope of such research to increasingly distant galaxies allows a better understanding of how they evolve. After the spherical aberration was corrected during Space Shuttle mission STS-61 in 1993, the improved imaging capabilities of the telescope were used to study increasingly distant and faint galaxies. The Medium Deep Survey (MDS) used the Wide Field and Planetary Camera 2 (WFPC2) to take deep images of random fields while other instruments were being used for scheduled observations. At the same time, other dedicated programs focused on galaxies that were already known through ground-based observation. All of these studies revealed substantial differences between the properties of galaxies today and those that existed several billion years ago. Up to 10% of the HST's observation time is designated as Director's Discretionary (DD) Time, and is typically awarded to astronomers who wish to study unexpected transient phenomena, such as supernovae. 
Once Hubble's corrective optics were shown to be performing well, Robert Williams, the then-director of the Space Telescope Science Institute, decided to devote a substantial fraction of his DD time during 1995 to the study of distant galaxies. A special Institute Advisory Committee recommended that the WFPC2 be used to image a "typical" patch of sky at a high galactic latitude, using several optical filters. A working group was set up to develop and implement the project. Target selection The field selected for the observations needed to fulfill several criteria. It had to be at a high galactic latitude because dust and obscuring matter in the plane of the Milky Way's disc prevents observations of distant galaxies at low galactic latitudes (see Zone of Avoidance). The target field had to avoid known bright sources of visible light (such as foreground stars), and infrared, ultraviolet, and X-ray emissions, to facilitate later studies at many wavelengths of the objects in the deep field, and also needed to be in a region with a low background infrared cirrus, the diffuse, wispy infrared emission believed to be caused by warm dust grains in cool clouds of hydrogen gas (H I regions). These criteria restricted the field of potential target areas. It was decided that the target should be in Hubble's continuous viewing zones: the areas of sky that are not occulted by the Earth or the moon during Hubble's orbit. The working group decided to concentrate on the northern continuous viewing zone, so that northern-hemisphere telescopes such as the Keck telescopes, the Kitt Peak National Observatory telescopes, and the Very Large Array (VLA) could conduct follow-up observations. Twenty fields satisfying these criteria were identified, from which three optimal candidate fields were selected, all within the constellation of Ursa Major. Radio snapshot observations with the VLA ruled out one of these fields because it contained a bright radio source, and the final decision between the other two was made on the basis of the availability of guide stars near the field: Hubble observations normally require a pair of nearby stars on which the telescope's Fine Guidance Sensors can lock during an exposure, but given the importance of the HDF observations, the working group required a second set of back-up guide stars. The field that was eventually selected is located at a right ascension of and a declination of ; it is about 2.6 arcminutes in width, or 1/12 the width of the Moon. The area is about 1/24,000,000 of the total area of the sky. Observations Once a field was selected, an observing strategy was developed. An important decision was to determine which filters the observations would use; WFPC2 is equipped with 48 filters, including narrowband filters isolating particular emission lines of astrophysical interest, and broadband filters useful for the study of the colors of stars and galaxies. The choice of filters to be used for the HDF depended on the throughput of each filter—the total proportion of light that it allows through—and the spectral coverage available. Filters with bandpasses overlapping as little as possible were desirable. In the end, four broadband filters were chosen, centred at wavelengths of 300 nm (near-ultraviolet), 450 nm (blue light), 606 nm (red light) and 814 nm (near-infrared). 
Because the quantum efficiency of Hubble's detectors at 300 nm wavelength is quite low, the noise in observations at this wavelength is primarily due to CCD noise rather than sky background; thus, these observations could be conducted at times when high background noise would have harmed the efficiency of observations in other passbands. Between December 18 and 28, 1995—during which time Hubble orbited the Earth about 150 times—342 images of the target area in the chosen filters were taken. The total exposure times at each wavelength were 42.7 hours (300 nm), 33.5 hours (450 nm), 30.3 hours (606 nm) and 34.3 hours (814 nm), divided into 342 individual exposures to prevent significant damage to individual images by cosmic rays, which cause bright streaks to appear when they strike CCD detectors. A further 10 Hubble orbits were used to make short exposures of flanking fields to aid follow-up observations by other instruments. Data processing The production of a final combined image at each wavelength was a complex process. Bright pixels caused by cosmic ray impacts during exposures were removed by comparing exposures of equal length taken one after the other, and identifying pixels that were affected by cosmic rays in one exposure but not the other. Trails of space debris and artificial satellites were present in the original images, and were carefully removed. Scattered light from the Earth was evident in about a quarter of the data frames, creating a visible "X" pattern on the images. This was removed by taking an image affected by scattered light, aligning it with an unaffected image, and subtracting the unaffected image from the affected one. The resulting image was smoothed, and could then be subtracted from the bright frame. This procedure removed almost all of the scattered light from the affected images. Once the 342 individual images were cleaned of cosmic-ray hits and corrected for scattered light, they had to be combined. Scientists involved in the HDF observations pioneered a technique called 'drizzling', in which the pointing of the telescope was varied minutely between sets of exposures. Each pixel on the WFPC2 CCD chips recorded an area of sky 0.09 arcseconds across, but by changing the direction in which the telescope was pointing by less than that between exposures, the resulting images were combined using sophisticated image-processing techniques to yield a final angular resolution better than this value. The HDF images produced at each wavelength had final pixel sizes of 0.03985 arcseconds. The data processing yielded four monochrome images (at 300 nm, 450 nm, 606 nm and 814 nm), one at each wavelength. One image was designated as red (814 nm), the second as green (606 nm) and the third as blue (450 nm), and the three images were combined to give a color image. Because the wavelengths at which the images were taken do not correspond to the wavelengths of red, green and blue light, the colors in the final image only give an approximate representation of the actual colors of the galaxies in the image; the choice of filters for the HDF (and the majority of Hubble images) was primarily designed to maximize the scientific utility of the observations rather than to create colors corresponding to what the human eye would actually perceive. Contents The final images were released at a meeting of the American Astronomical Society in January 1996, and revealed a plethora of distant, faint galaxies. 
About 3,000 distinct galaxies could be identified in the images, with both irregular and spiral galaxies clearly visible, although some galaxies in the field are only a few pixels across. In all, the HDF is thought to contain fewer than twenty galactic foreground stars; by far the majority of objects in the field are distant galaxies. There are about fifty blue point-like objects in the HDF. Many seem to be associated with nearby galaxies, which together form chains and arcs: these are likely to be regions of intense star formation. Others may be distant quasars. Astronomers initially ruled out the possibility that some of the point-like objects are white dwarfs, because they are too blue to be consistent with theories of white dwarf evolution prevalent at the time. However, more recent work has found that many white dwarfs become bluer as they age, lending support to the idea that the HDF might contain white dwarfs. Scientific results The HDF data provided extremely rich material for cosmologists to analyse, and by late 2014 the associated scientific paper for the image had received over 900 citations. One of the most fundamental findings was the discovery of large numbers of galaxies with high redshift values. As the Universe expands, more distant objects recede from the Earth faster, in what is called the Hubble Flow. The light from very distant galaxies is significantly affected by the cosmological redshift. While quasars with high redshifts were known, very few galaxies with redshifts greater than one were known before the HDF images were produced. The HDF, however, contained many galaxies with redshifts as high as six, corresponding to distances of about 12 billion light-years. Due to redshift, the most distant objects in the HDF (Lyman-break galaxies) are not actually visible in the Hubble images; they can only be detected in images of the HDF taken at longer wavelengths by ground-based telescopes. One of the first observations planned for the James Webb Space Telescope was a mid-infrared image of the Hubble Ultra-Deep Field. The HDF galaxies contained a considerably larger proportion of disturbed and irregular galaxies than the local universe; galaxy collisions and mergers were more common in the young universe, as it was much smaller than today. It is believed that giant elliptical galaxies form when spirals and irregular galaxies collide. The wealth of galaxies at different stages of their evolution also allowed astronomers to estimate the variation in the rate of star formation over the lifetime of the Universe. While estimates of the redshifts of HDF galaxies are somewhat crude, astronomers believe that star formation was occurring at its maximum rate 8–10 billion years ago, and has decreased by a factor of about 10 since then. Another important result from the HDF was the very small number of foreground stars present. For years astronomers had been puzzling over the nature of dark matter, matter which seems to be undetectable but which observations implied made up about 85% of the matter in the Universe by mass. One theory was that dark matter might consist of Massive Astrophysical Compact Halo Objects (MACHOs)—faint but massive objects such as red dwarfs and planets in the outer regions of galaxies. The HDF showed, however, that there were not significant numbers of red dwarfs in the outer parts of our galaxy. 
Multifrequency follow-up Very-high-redshift objects (Lyman-break galaxies) cannot be seen in visible light and generally are detected in infrared or submillimetre wavelength surveys of the HDF instead. Observations with the Infrared Space Observatory (ISO) indicated infrared emission from 13 galaxies visible in the optical images, attributed to large quantities of dust associated with intense star formation. Infrared observations have also been made with the Spitzer Space Telescope. Submillimeter observations of the field have been made with SCUBA on the James Clerk Maxwell Telescope, initially detecting 5 sources, although with very low resolution. Observations have also been made with the Subaru telescope in Hawaii. X-ray observations by the Chandra X-ray Observatory revealed six sources in the HDF, which were found to correspond to three elliptical galaxies, one spiral galaxy, one active galactic nucleus and one extremely red object, thought to be a distant galaxy containing a large amount of dust absorbing its blue light emissions. Ground-based radio images taken using the VLA revealed seven radio sources in the HDF, all of which correspond to galaxies visible in the optical images. The field has also been surveyed with the Westerbork Synthesis Radio Telescope and the MERLIN array of radio telescopes at 1.4 GHz; the combination of VLA and MERLIN maps made at wavelengths of 3.5 and 20 cm has located 16 radio sources in the HDF-N field, with many more in the flanking fields. Radio images of some individual sources in the field have been made with the European VLBI Network at 1.6 GHz, with a higher resolution than the Hubble maps. Subsequent HST observations An HDF counterpart in the southern celestial hemisphere was created in 1998: the HDF-South (HDF-S). Created using a similar observing strategy, the HDF-S was very similar in appearance to the original HDF. This supports the cosmological principle that at its largest scale the Universe is homogeneous. The HDF-S survey used the Space Telescope Imaging Spectrograph (STIS) and the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) instruments installed on the HST in 1997; the region of the original Hubble Deep Field (HDF-N) has since been re-observed several times using WFPC2, as well as by the NICMOS and STIS instruments. Several supernova events were detected by comparing the first- and second-epoch observations of the HDF-N. A wider but less sensitive survey was carried out as part of the Great Observatories Origins Deep Survey; a section of this was then observed for longer to create the Hubble Ultra-Deep Field, which was the most sensitive optical deep-field image for years, until the Hubble eXtreme Deep Field was completed in 2012. Images from the eXtreme Deep Field, or XDF, were released on September 26, 2012, to a number of media agencies. Images released in the XDF show galaxies which are now believed to have formed in the first 500 million years following the Big Bang.
Physical sciences
Notable patches of universe
Astronomy
216650
https://en.wikipedia.org/wiki/Ferrimagnetism
Ferrimagnetism
A ferrimagnetic material is a material that has populations of atoms with opposing magnetic moments, as in antiferromagnetism, but these moments are unequal in magnitude, so a spontaneous magnetization remains. This can for example occur when the populations consist of different atoms or ions (such as Fe2+ and Fe3+). Like ferromagnetic substances, ferrimagnetic substances are attracted by magnets and can be magnetized to make permanent magnets. The oldest known magnetic substance, magnetite (Fe3O4), is ferrimagnetic, but was classified as a ferromagnet before Louis Néel discovered ferrimagnetism in 1948. Since the discovery, numerous uses have been found for ferrimagnetic materials, such as hard-drive platters and biomedical applications. History Until the twentieth century, all naturally occurring magnetic substances were called ferromagnets. In 1936, Louis Néel published a paper proposing the existence of a new form of cooperative magnetism he called antiferromagnetism. While working with Mn2Sb, French physicist Charles Guillaud discovered that the current theories on magnetism were not adequate to explain the behavior of the material, and made a model to explain the behavior. In 1948, Néel published a paper about a third type of cooperative magnetism, based on the assumptions in Guillaud's model. He called it ferrimagnetism. In 1970, Néel was awarded the Nobel Prize in Physics for his work in magnetism. Physical origin Ferrimagnetism has the same physical origins as ferromagnetism and antiferromagnetism. In ferrimagnetic materials the magnetization is also caused by a combination of dipole–dipole interactions and exchange interactions resulting from the Pauli exclusion principle. The main difference is that in ferrimagnetic materials there are different types of atoms in the material's unit cell. In a typical arrangement, the atoms with a smaller magnetic moment point in the opposite direction of the larger moments. This arrangement is similar to that present in antiferromagnetic materials, but in ferrimagnetic materials the net moment is nonzero because the opposed moments differ in magnitude. Ferrimagnets have a critical temperature above which they become paramagnetic, just as ferromagnets do. At this temperature (called the Curie temperature) there is a second-order phase transition, and the system can no longer maintain a spontaneous magnetization. This is because at higher temperatures the thermal motion is strong enough that it exceeds the tendency of the dipoles to align. Derivation There are various ways to describe ferrimagnets, the simplest of which is with mean-field theory. In mean-field theory the field acting on the atoms can be written as $\mathbf{H} = \mathbf{H}_0 + \mathbf{H}_\mathrm{m}$, where $\mathbf{H}_0$ is the applied magnetic field and $\mathbf{H}_\mathrm{m}$ is the field caused by the interactions between the atoms. The following assumption then is $\mathbf{H}_\mathrm{m} = \gamma \mathbf{M}$. Here $\mathbf{M}$ is the average magnetization of the lattice and $\gamma$ is the molecular field coefficient. When we allow $\gamma$ and $\mathbf{M}$ to be position- and orientation-dependent, we can then write the field in the form $\mathbf{H}_i = \mathbf{H}_0 + \sum_k \gamma_{ik}\mathbf{M}_k$, where $\mathbf{H}_i$ is the field acting on the i-th substructure, and $\gamma_{ik}$ is the molecular field coefficient between the i-th and k-th substructures. For a diatomic lattice we can designate two types of sites, a and b. We can designate $N$ the number of magnetic ions per unit volume, $\lambda$ the fraction of the magnetic ions on the a sites, and $\mu = 1 - \lambda$ the fraction on the b sites. This then gives $\mathbf{M}_a = \lambda N \mathbf{m}_a$ and $\mathbf{M}_b = \mu N \mathbf{m}_b$, with $\mathbf{m}_a$ and $\mathbf{m}_b$ the average magnetic moments of the ions on the two kinds of sites. It can be shown that $\gamma_{ab} = \gamma_{ba}$ and that $\gamma_{aa} \neq \gamma_{bb}$ unless the structures of the two sublattices are identical. 
$\gamma_{ab} > 0$ favors a parallel alignment of $\mathbf{M}_a$ and $\mathbf{M}_b$, while $\gamma_{ab} < 0$ favors an anti-parallel alignment. For ferrimagnets, $\gamma_{ab} < 0$, so it will be convenient to take $\gamma_{ab}$ as a positive quantity and write the minus sign explicitly in front of it. For the total fields on a and b this then gives $\mathbf{H}_a = \mathbf{H}_0 + \gamma_{aa}\mathbf{M}_a - \gamma_{ab}\mathbf{M}_b$ and $\mathbf{H}_b = \mathbf{H}_0 + \gamma_{bb}\mathbf{M}_b - \gamma_{ab}\mathbf{M}_a$. Furthermore, we will introduce the parameters $\alpha = \gamma_{aa}/\gamma_{ab}$ and $\beta = \gamma_{bb}/\gamma_{ab}$, which give the ratio between the strengths of the interactions. At last we will introduce the reduced magnetizations $\sigma_a = M_a/(\lambda N g \mu_\mathrm{B} S_a)$ and $\sigma_b = M_b/(\mu N g \mu_\mathrm{B} S_b)$, with $S_i$ the spin of the i-th element. This then gives for the fields $\mathbf{H}_a = \mathbf{H}_0 + N g \mu_\mathrm{B}\gamma_{ab}(\alpha\lambda S_a\sigma_a - \mu S_b\sigma_b)$ and $\mathbf{H}_b = \mathbf{H}_0 + N g \mu_\mathrm{B}\gamma_{ab}(\beta\mu S_b\sigma_b - \lambda S_a\sigma_a)$. The solutions to these equations (omitted here) are then given by $\sigma_i = B_{S_i}(g\mu_\mathrm{B} S_i H_i/k_\mathrm{B}T)$, where $B_S$ is the Brillouin function. The simplest case to solve now is $S_a = S_b = \tfrac{1}{2}$. Since $B_{1/2}(x) = \tanh(x)$, this then gives, in zero applied field, the following pair of equations: $\sigma_a = \tanh[(T_0/T)(\alpha\lambda\sigma_a - \mu\sigma_b)]$ and $\sigma_b = \tanh[(T_0/T)(\beta\mu\sigma_b - \lambda\sigma_a)]$, with $T_0 = N g^2\mu_\mathrm{B}^2\gamma_{ab}/4k_\mathrm{B}$. These equations do not have a known analytical solution, so they must be solved numerically to find the temperature dependence of $\sigma$ (a short numerical sketch is given at the end of this article). Effects of temperature Unlike ferromagnetism, the magnetization curves of ferrimagnets can take many different shapes, depending on the strength of the interactions and the relative abundance of atoms. The most notable instances of this property are that the direction of magnetization can reverse while heating a ferrimagnetic material from absolute zero to its critical temperature, and that the strength of magnetization can increase while heating a ferrimagnetic material towards the critical temperature, both of which cannot occur for ferromagnetic materials. These temperature dependencies have also been experimentally observed in NiFe2/5Cr8/5O4 and Li1/2Fe5/4Cr5/4O4. A temperature lower than the Curie temperature, but at which the opposing magnetic moments are equal (resulting in a net magnetic moment of zero), is called a magnetization compensation point. This compensation point is observed easily in garnets and rare-earth–transition-metal alloys (RE-TM). Furthermore, ferrimagnets may also have an angular momentum compensation point, at which the net angular momentum vanishes. This compensation point is crucial for achieving fast magnetization reversal in magnetic-memory devices. Effect of external fields When ferrimagnets are exposed to an external magnetic field, they display what is called magnetic hysteresis, where the magnetic behavior depends on the history of the magnet. They also exhibit a saturation magnetization $M_\mathrm{sat}$; this magnetization is reached when the external field is strong enough to make all the moments align in the same direction. When this point is reached, the magnetization cannot increase, as there are no more moments to align. When the external field is removed, the magnetization of the ferrimagnet does not disappear, but a nonzero remanent magnetization remains. This effect is often used in applications of magnets. If an external field in the opposite direction is applied subsequently, the magnet will demagnetize further until it eventually reaches a magnetization of $-M_\mathrm{sat}$. This behavior results in what is called a hysteresis loop. Properties and uses Ferrimagnetic materials have high resistivity and have anisotropic properties. The anisotropy is actually induced by an external applied field. When this applied field aligns with the magnetic dipoles, it causes a net magnetic dipole moment and causes the magnetic dipoles to precess at a frequency controlled by the applied field, called the Larmor or precession frequency. As a particular example, a microwave signal circularly polarized in the same direction as this precession strongly interacts with the magnetic dipole moments; when it is polarized in the opposite direction, the interaction is very low. 
When the interaction is strong, the microwave signal can pass through the material. This directional property is used in the construction of microwave devices like isolators, circulators, and gyrators. Ferrimagnetic materials are also used to produce optical isolators and circulators. Ferrimagnetic minerals in various rock types are used to study ancient geomagnetic properties of Earth and other planets. That field of study is known as paleomagnetism. In addition, it has been shown that ferrimagnets such as magnetite can be used for thermal energy storage. Examples The oldest known magnetic material, magnetite, is a ferrimagnetic substance. The tetrahedral and octahedral sites of its crystal structure exhibit opposite spin. Other known ferrimagnetic materials include yttrium iron garnet (YIG); cubic ferrites composed of iron oxides with other elements such as aluminum, cobalt, nickel, manganese, and zinc; hexagonal or spinel-type ferrites, including rhenium ferrite (ReFe2O4), PbFe12O19 and BaFe12O19; and pyrrhotite, Fe1−xS. Ferrimagnetism can also occur in single-molecule magnets. A classic example is a dodecanuclear manganese molecule with an effective spin S = 10, derived from the antiferromagnetic interaction of Mn(IV) metal centers with Mn(III) and Mn(II) metal centers.
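As noted in the Derivation section, the zero-field S = 1/2 pair of tanh equations must be solved numerically. The following Python sketch is a minimal illustration under assumed parameter values: the choices of λ, α and β, the damped fixed-point scheme, and the function name are ours, not taken from the literature:

```python
import numpy as np

def sublattice_magnetizations(t, lam=0.5, alpha=0.8, beta=0.4,
                              tol=1e-12, max_iter=100000):
    """Damped fixed-point iteration for the zero-field S = 1/2 pair
        sigma_a = tanh[(T0/T)(alpha*lam*sigma_a - mu*sigma_b)]
        sigma_b = tanh[(T0/T)(beta*mu*sigma_b - lam*sigma_a)]
    where t = T/T0 and mu = 1 - lam."""
    mu = 1.0 - lam
    sa, sb = 1.0, -1.0          # start from a fully ordered ferrimagnetic state
    for _ in range(max_iter):
        sa_new = np.tanh((alpha * lam * sa - mu * sb) / t)
        sb_new = np.tanh((beta * mu * sb - lam * sa) / t)
        if abs(sa_new - sa) < tol and abs(sb_new - sb) < tol:
            break
        sa += 0.5 * (sa_new - sa)   # damping improves convergence
        sb += 0.5 * (sb_new - sb)
    return sa, sb

# Net reduced magnetization lam*sigma_a + mu*sigma_b (here lam = mu = 0.5):
for t in (0.1, 0.3, 0.5, 0.7):
    sa, sb = sublattice_magnetizations(t)
    print(f"T/T0 = {t:.1f}: sigma_a = {sa:+.3f}, sigma_b = {sb:+.3f}, "
          f"net = {0.5 * sa + 0.5 * sb:+.3f}")
```

Because α ≠ β in this example, the two sublattice magnetizations differ in magnitude, so the net moment is nonzero at every temperature below the ordering point, which is the defining feature of a ferrimagnet.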
Physical sciences
Magnetostatics
Physics
216881
https://en.wikipedia.org/wiki/Electronic%20design%20automation
Electronic design automation
Electronic design automation (EDA), also referred to as electronic computer-aided design (ECAD), is a category of software tools for designing electronic systems such as integrated circuits and printed circuit boards. The tools work together in a design flow that chip designers use to design and analyze entire semiconductor chips. Since a modern semiconductor chip can have billions of components, EDA tools are essential for their design; this article describes EDA specifically with respect to integrated circuits (ICs). History Early days The earliest electronic design automation is attributed to IBM with the documentation of its 700 series computers in the 1950s. Prior to the development of EDA, integrated circuits were designed by hand and manually laid out. Some advanced shops used geometric software to generate tapes for a Gerber photoplotter, responsible for generating a monochromatic exposure image, but even those copied digital recordings of mechanically drawn components. The process was fundamentally graphic, with the translation from electronics to graphics done manually; the best-known company from this era was Calma, whose GDSII format is still in use today. By the mid-1970s, developers started to automate circuit design in addition to drafting, and the first placement and routing tools were developed; as this occurred, the proceedings of the Design Automation Conference catalogued the large majority of the developments of the time. The next era began following the publication of "Introduction to VLSI Systems" by Carver Mead and Lynn Conway in 1980, which came to be considered the standard textbook for chip design. The result was an increase in the complexity of the chips that could be designed, with improved access to design verification tools that used logic simulation. The chips were easier to lay out and more likely to function correctly, since their designs could be simulated more thoroughly prior to construction. Although the languages and tools have evolved, this general approach of specifying the desired behavior in a textual programming language and letting the tools derive the detailed physical design remains the basis of digital IC design today. The earliest EDA tools were produced academically. One of the most famous was the "Berkeley VLSI Tools Tarball", a set of UNIX utilities used to design early VLSI systems. Widely used were the Espresso heuristic logic minimizer, responsible for circuit complexity reductions, and Magic, a computer-aided design platform. Another crucial development was the formation of MOSIS, a consortium of universities and fabricators that developed an inexpensive way to train student chip designers by producing real integrated circuits. The basic concept was to use reliable, low-cost, relatively low-technology IC processes and pack a large number of projects per wafer, with several copies of chips from each project. Cooperating fabricators either donated the processed wafers or sold them at cost, as they saw the program as helpful to their own long-term growth. Commercial birth 1981 marked the beginning of EDA as an industry. For many years, the larger electronic companies, such as Hewlett-Packard, Tektronix and Intel, had pursued EDA internally, with managers and developers beginning to spin out of these companies to concentrate on EDA as a business. Daisy Systems, Mentor Graphics and Valid Logic Systems were all founded around this time and collectively referred to as DMV. In 1981, the U.S. 
Department of Defense additionally began funding VHDL as a hardware description language. Within a few years, there were many companies specializing in EDA, each with a slightly different emphasis. The first trade show for EDA was held at the Design Automation Conference in 1984, and in 1986 Verilog, another popular high-level design language, was first introduced as a hardware description language by Gateway Design Automation. Simulators quickly followed these introductions, permitting direct simulation of chip designs and executable specifications. Within several years, back-ends were developed to perform logic synthesis. Modern day Current digital flows are extremely modular, with front ends producing standardized design descriptions that compile into invocations of units similar to cells without regard to their individual technology. Cells implement logic or other electronic functions using a particular integrated circuit technology. Fabricators generally provide libraries of components for their production processes, with simulation models that fit standard simulation tools. Most analog circuits are still designed in a manual fashion, requiring specialist knowledge that is unique to analog design (such as matching concepts). Hence, analog EDA tools are far less modular, since many more functions are required, they interact more strongly and the components are, in general, less ideal. EDA for electronics has rapidly increased in importance with the continuous scaling of semiconductor technology. Some users are foundry operators, who operate the semiconductor fabrication facilities ("fabs"); others are design-service companies, who use EDA software to evaluate an incoming design for manufacturing readiness. EDA tools are also used for programming design functionality into field-programmable gate arrays (FPGAs), customisable integrated circuit designs. Software focuses Design The design flow is characterised by several primary components; these include: High-level synthesis (additionally known as behavioral synthesis or algorithmic synthesis) – the high-level design description (e.g. in C/C++) is converted into RTL, the register-transfer level, which represents circuitry as interactions between registers. Logic synthesis – the translation of an RTL design description (e.g. written in Verilog or VHDL) into a discrete netlist, a representation of logic gates. Schematic capture – for standard-cell digital, analog and RF designs; examples include Capture CIS in OrCAD by Cadence and ISIS in Proteus. Layout – usually schematic-driven layout, like Layout in OrCAD by Cadence and ARES in Proteus. Simulation Transistor simulation – low-level transistor simulation of a schematic/layout's behavior, accurate at device level. Logic simulation – digital simulation of an RTL or gate-netlist's digital (Boolean 0/1) behavior, accurate at Boolean level. Behavioral simulation – high-level simulation of a design's architectural operation, accurate at cycle level or interface level. Hardware emulation – use of special-purpose hardware to emulate the logic of a proposed design. Can sometimes be plugged into a system in place of a yet-to-be-built chip; this is called in-circuit emulation. Technology CAD – simulates and analyzes the underlying process technology. 
Electrical properties of devices are derived directly from device physics. Analysis and verification Functional verification: ensures the logic design matches specifications and executes tasks correctly. Includes dynamic functional verification via simulation, emulation, and prototypes. RTL linting: checks adherence to coding rules such as syntax, semantics, and style. Clock domain crossing verification (CDC check): similar to linting, but these checks/tools specialize in detecting and reporting potential issues like data loss and meta-stability due to the use of multiple clock domains in the design. Formal verification, also model checking: attempts to prove, by mathematical methods, that the system has certain desired properties, and that some undesired effects (such as deadlock) cannot occur. Equivalence checking: algorithmic comparison between a chip's RTL description and the synthesized gate netlist, to ensure functional equivalence at the logical level (a toy example is sketched at the end of this article). Static timing analysis: analysis of the timing of a circuit in an input-independent manner, hence finding a worst case over all possible inputs. Layout extraction: starting with a proposed layout, compute the (approximate) electrical characteristics of every wire and device. Often used in conjunction with static timing analysis above to estimate the performance of the completed chip. Electromagnetic field solvers, or just field solvers, solve Maxwell's equations directly for cases of interest in IC and PCB design. They are known for being slower but more accurate than the layout extraction above. Physical verification (PV): checking if a design is physically manufacturable, and that the resulting chips will not have any function-preventing physical defects, and will meet original specifications. Manufacturing preparation Mask data preparation (MDP) – the generation of actual lithography photomasks, used to physically manufacture the chip. Chip finishing, which includes custom designations and structures to improve manufacturability of the layout. Examples of the latter are a seal ring and filler structures. Producing a reticle layout with test patterns and alignment marks. Layout-to-mask preparation that enhances layout data with graphics operations, such as resolution enhancement techniques (RET) – methods for increasing the quality of the final photomask. This also includes optical proximity correction (OPC) or inverse lithography technology (ILT) – the up-front compensation for diffraction and interference effects occurring later when the chip is manufactured using this mask. Mask generation – the generation of a flat mask image from the hierarchical design. Automatic test pattern generation (ATPG) – the systematic generation of pattern data to exercise as many logic gates and other components as possible. Built-in self-test (BIST) – the installation of self-contained test controllers to automatically test a logic or memory structure in the design. Functional safety Functional safety analysis: systematic computation of failure-in-time (FIT) rates and diagnostic coverage metrics for designs, in order to meet the compliance requirements for the desired safety integrity levels. Functional safety synthesis: adds reliability enhancements to structured elements (modules, RAMs, ROMs, register files, FIFOs) to improve fault detection and fault tolerance. 
This includes (but is not limited to) the addition of error-detection and/or correction codes (Hamming), redundant logic for fault detection and fault tolerance (duplication/triplication), and protocol checks (interface parity, address alignment, beat count). Functional safety verification: running of a fault campaign, including insertion of faults into the design and verification that the safety mechanism reacts in an appropriate manner for the faults that are deemed covered. Companies Current Market capitalization and company name as of March 2023: $57.87 billion – Synopsys $56.68 billion – Cadence Design Systems $24.98 billion – Ansys AU$4.88 billion – Altium ¥77.25 billion – Zuken Defunct Market capitalization and company name: $2.33 billion – Mentor Graphics; Siemens acquired Mentor in 2017 and renamed it Siemens EDA in 2021 $507 million – Magma Design Automation; Synopsys acquired Magma in February 2012 NT$6.44 billion – SpringSoft; Synopsys acquired SpringSoft in August 2012 Acquisitions Many EDA companies acquire small companies with software or other technology that can be adapted to their core business. Most of the market leaders are amalgamations of many smaller companies, and this trend is helped by the tendency of software companies to design tools as accessories that fit naturally into a larger vendor's suite of programs on digital circuitry; many new tools incorporate analog design and mixed systems. This is happening due to a trend to place entire electronic systems on a single chip. Technical conferences Design Automation Conference International Conference on Computer-Aided Design Design Automation and Test in Europe Asia and South Pacific Design Automation Conference Symposia on VLSI Technology and Circuits
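As a toy illustration of the equivalence checking listed under Analysis and verification above: for a circuit this small, exhaustive simulation over all input vectors suffices, whereas production tools rely on BDD- or SAT-based methods. Both models below are hypothetical stand-ins, written in Python rather than an HDL:

```python
from itertools import product

def rtl_full_adder(a, b, cin):
    """Behavioural ("RTL-level") reference: a 1-bit full adder."""
    total = a + b + cin
    return total & 1, total >> 1           # (sum, carry-out)

def netlist_full_adder(a, b, cin):
    """The same function expressed as a "synthesized" XOR/AND/OR netlist."""
    p = a ^ b                              # propagate
    s = p ^ cin                            # sum bit
    cout = (a & b) | (p & cin)             # carry-out
    return s, cout

# Equivalence check by exhaustive simulation: 3 inputs -> 8 vectors.
for a, b, cin in product((0, 1), repeat=3):
    assert rtl_full_adder(a, b, cin) == netlist_full_adder(a, b, cin), \
        f"mismatch at inputs {(a, b, cin)}"
print("netlist is functionally equivalent to the RTL reference")
```

A real equivalence checker compares designs with hundreds of millions of gates, where enumerating inputs is impossible; the exhaustive loop here only conveys what "functional equivalence at the logical level" means.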
Technology
Electronics: General
null
217119
https://en.wikipedia.org/wiki/Shifting%20cultivation
Shifting cultivation
Shifting cultivation is an agricultural system in which plots of land are cultivated temporarily, then abandoned while post-disturbance fallow vegetation is allowed to grow freely as the cultivator moves on to another plot. The period of cultivation is usually terminated when the soil shows signs of exhaustion or, more commonly, when the field is overrun by weeds. The period of time during which the field is cultivated is usually shorter than the period over which the land is allowed to regenerate by lying fallow. This technique is often used in LEDCs (Less Economically Developed Countries) or LICs (Low Income Countries). In some areas, cultivators use a practice of slash-and-burn as one element of their farming cycle. Others employ land clearing without any burning, and some cultivators are purely migratory and do not use any cyclical method on a given plot. Sometimes no slashing at all is needed where regrowth is purely of grasses, an outcome not uncommon when soils are near exhaustion and need to lie fallow. In shifting agriculture, after two or three years of producing vegetable and grain crops on cleared land, the migrants abandon it for another plot. Land is often cleared by slash-and-burn methods—trees, bushes and forests are cleared by slashing, and the remaining vegetation is burnt. The ashes add potash to the soil. Then the seeds are sown after the rains. Political ecology Shifting cultivation is a form of agriculture or a cultivation system in which, at any particular point in time, a minority of 'fields' are in cultivation and a majority are in various stages of natural re-growth. Over time, fields are cultivated for a relatively short time, and allowed to recover, or are fallowed, for a relatively long time. Eventually a previously cultivated field will be cleared of the natural vegetation and planted in crops again. Fields in established and stable shifting cultivation systems are cultivated and fallowed cyclically. This type of farming is called jhumming in India. Fallow fields are not unproductive. During the fallow period, shifting cultivators use the successive vegetation species widely for timber for fencing and construction, firewood, thatching, ropes, clothing, tools, carrying devices and medicines. It is common for fruit and nut trees to be planted in fallow fields, to the extent that parts of some fallows are in fact orchards. Soil-enhancing shrub or tree species may be planted or protected from slashing or burning in fallows. Many of these species have been shown to fix nitrogen. Fallows commonly contain plants that attract birds and animals and are important for hunting. But perhaps most importantly, tree fallows protect soil against physical erosion and draw nutrients to the surface from deep in the soil profile. The relationship between the time the land is cultivated and the time it is fallowed is critical to the stability of shifting cultivation systems, as the toy model below illustrates. These parameters determine whether or not the shifting cultivation system as a whole suffers a net loss of nutrients over time. A system in which there is a net loss of nutrients with each cycle will eventually lead to a degradation of resources unless actions are taken to arrest the losses. In some cases soil can be irreversibly exhausted (including erosion as well as nutrient loss) in less than a decade. 
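The stability condition just described (whether the nutrients regained during fallow offset those lost during cropping) can be made concrete with the toy model referred to above. The Python sketch below is purely illustrative; the parameter values are hypothetical, chosen only to show the qualitative behaviour, and are not drawn from field data:

```python
def nutrient_stock_after_cycles(crop_years, fallow_years, n_cycles=10,
                                crop_loss=0.15, fallow_recovery=0.05):
    """Track a soil-nutrient index (1.0 = undisturbed capacity) over
    repeated cultivation/fallow cycles.  Each cropped year removes a
    fixed fraction of the stock; each fallow year recovers a fixed
    fraction of the remaining deficit."""
    stock = 1.0
    for _ in range(n_cycles):
        for _ in range(crop_years):
            stock *= 1.0 - crop_loss
        for _ in range(fallow_years):
            stock += fallow_recovery * (1.0 - stock)
    return stock

print(nutrient_stock_after_cycles(2, 20))  # long fallow: settles near capacity (~0.87)
print(nutrient_stock_after_cycles(2, 5))   # short fallow: settles at a degraded level (~0.51)
```

Under these assumptions the system does not collapse outright but equilibrates; shortening the fallow shifts the equilibrium to a much poorer soil, which is the degradation mechanism the surrounding text describes.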
The longer a field is cropped, the greater the loss of soil organic matter, cation-exchange capacity, nitrogen and phosphorus, the greater the increase in acidity, the more likely soil porosity and infiltration capacity are to be reduced, and the greater the loss of seeds of naturally occurring plant species from soil seed banks. In a stable shifting cultivation system, the fallow is long enough for the natural vegetation to recover to the state that it was in before it was cleared, and for the soil to recover to the condition it was in before cropping began. During fallow periods soil temperatures are lower, wind and water erosion is much reduced, nutrient cycling becomes closed again, nutrients are extracted from the subsoil, soil fauna decreases, acidity is reduced, soil structure, texture and moisture characteristics improve and seed banks are replenished. The secondary forests created by shifting cultivation are commonly richer in plant and animal resources useful to humans than primary forests, even though they are much less bio-diverse. Shifting cultivators view the forest as an agricultural landscape of fields at various stages in a regular cycle. People unused to living in forests cannot see the fields for the trees. Rather, they perceive an apparently chaotic landscape in which trees are cut and burned randomly, and so they characterise shifting cultivation as ephemeral or 'pre-agricultural', as 'primitive' and as a stage to be progressed beyond. Shifting agriculture is none of these things. Stable shifting cultivation systems are highly variable, closely adapted to micro-environments and are carefully managed by farmers during both the cropping and fallow stages. Shifting cultivators may possess a highly developed knowledge and understanding of their local environments and of the crops and native plant species they exploit. Complex and highly adaptive land tenure systems sometimes exist under shifting cultivation. Introduced crops for food and as cash have been skillfully integrated into some shifting cultivation systems. Its disadvantages include the high initial cost, as manual labour is required. In Europe Shifting cultivation was still being practised as a viable and stable form of agriculture in many parts of Europe and east into Siberia at the end of the 19th century, and in some places well into the 20th century. In the Ruhr in the late 1860s a forest-field rotation system known as Reutbergwirtschaft used a 16-year cycle of clearing, cropping and fallowing with trees to produce bark for tanneries, wood for charcoal and rye for flour (Darby 1956, 200). Swidden farming was practised in Siberia at least until the 1930s, using specially selected varieties of "swidden-rye" (Steensberg 1993, 98). In Eastern Europe and Northern Russia the main swidden crops were turnips, barley, flax, rye, wheat, oats, radishes and millet. Cropping periods were usually one year, but were extended to two or three years on very favourable soils. Fallow periods were between 20 and 40 years (Linnard 1970, 195). In Finland in 1949, Steensberg (1993, 111) observed the clearing and burning of a swidden 440 km north of Helsinki. Birch and pine trees had been cleared over a period of a year and the logs sold for cash. A fallow of alder (Alnus) was encouraged to improve soil conditions. After the burn, turnip was sown for sale and for cattle feed. Shifting cultivation was disappearing in this part of Finland because of a loss of agricultural labour to the industries of the towns. 
Steensberg (1993, 110-152) provides eye-witness descriptions of shifting cultivation being practised in Sweden in the 20th century, and in Estonia, Poland, the Caucasus, Serbia, Bosnia, Hungary, Switzerland, Austria and Germany in the 1930s to the 1950s. That these agricultural practices survived from the Neolithic into the middle of the 20th century, amidst the sweeping changes that occurred in Europe over that period, suggests they were adaptive and, in themselves, not massively destructive of the environments in which they were practised. The earliest written accounts of deforestation in Southern Europe begin around 1000 BC in the histories of Homer, Thucydides and Plato and in Strabo's Geography. Forests were exploited for shipbuilding, urban development and the manufacture of casks, pitch and charcoal, as well as being cleared for agriculture. The intensification of trade and warfare increased the demand for ships, which were manufactured entirely from forest products. Although goat herding is singled out as an important cause of environmental degradation, a more important cause of forest destruction was the practice in some places of granting ownership rights to those who clear-felled forests and brought the land into permanent cultivation. Evidence that circumstances other than agriculture were the major causes of forest destruction is the recovery of tree cover in many parts of the Roman empire from 400 BC to around 500 AD, following the collapse of the Roman economy and industry. Darby observes that by 400 AD "land that had once been tilled became derelict and overgrown" and quotes Lactantius, who wrote that in many places "cultivated land became forest" (Darby 1956, 186). The other major cause of forest destruction in the Mediterranean environment, with its hot dry summers, was wild fires, which became more common following human interference in the forests. In Central and Northern Europe the use of stone tools and fire in agriculture is well established in the palynological and archaeological record from the Neolithic. Here, just as in Southern Europe, the demands of more intensive agriculture and the invention of the plough, trading, mining and smelting, tanning, building and construction in the growing towns and constant warfare, including the demands of naval shipbuilding, were more important forces behind the destruction of the forests than was shifting cultivation. By the Middle Ages in Europe, large areas of forest were being cleared and converted into arable land in association with the development of feudal tenurial practices. From the 16th to the 18th centuries, the demands of iron smelters for charcoal, increasing industrial developments and the discovery and expansion of colonial empires, as well as incessant warfare that increased the demand for shipping to levels never previously reached, all combined to deforest Europe. With the loss of the forest, shifting cultivation became restricted to the peripheral places of Europe, where permanent agriculture was uneconomic, transport costs constrained logging or terrain prevented the use of draught animals or tractors. It has disappeared from even these areas since 1945, as agriculture has become increasingly capital intensive, rural areas have become depopulated and the remnant European forests themselves have been revalued economically and socially. Classical authors mentioned large forests, with Homer writing about "wooded Samothrace", Zakynthos, Sicily, and other woodlands. 
These authors indicated that the Mediterranean area once had more forest; much had already been lost, and the remainder was primarily in the mountains. Although parts of Europe remained wooded, by the late Iron Age and early Viking Age, forests were drastically reduced and settlements regularly moved. The reasons for this pattern of mobility, for the transition to stable settlements from the late Viking period on, and for the transition from shifting cultivation to stationary farming are unknown. From this period, plows are found in graves. Early agricultural peoples preferred good forests on hillsides with good drainage, and traces of cattle enclosures are evident there. In Italy, shifting cultivation was no longer used by the common era. Tacitus describes it as a strange cultivation method, practiced by the Germans. In 98 CE, he wrote about the Germans that their fields were proportional to the participating cultivators, but their crops were shared according to status. Distribution was simple because land was widely available; they changed fields annually, with much to spare, because they were producing grain rather than other crops. A. W. Liljenstrand wrote in his 1857 doctoral dissertation, "About Changing of Soil" (pp. 5 ff.), that Tacitus is here describing the practice of shifting cultivation: "arva per annos mutant" ("they change fields year by year"). During the Migration Period in Europe, after the Roman Empire and before the Viking Age, the peoples of Central Europe moved to new forests after exhausting old parcels. Forests were quickly exhausted; the practice had ended in the Mediterranean, where forests were less resilient than the sturdier coniferous forests of Central Europe. Deforestation had been partially caused by burning to create pasture. Reduced timber delivery led to higher prices and more stone construction in the Roman Empire (Stewart 1956, p. 123). Although forests gradually decreased in northern Europe, they have survived in the Nordic countries. Many Italic peoples saw benefits in allying with Rome. When the Romans built the Via Amerina in 241 BCE, the Falisci settled in cities on the plains and aided the Romans in road construction; the Roman Senate gradually acquired representatives from Faliscan and Etruscan families, and the Italic tribes became settled farmers. Classical writers described peoples who practiced shifting cultivation, which characterized the Migration Period in Europe. The exploitation of forests demanded displacement as areas were deforested. Julius Caesar wrote about the Suebi in Commentarii de Bello Gallico 4.1: "They have no private and secluded fields ("privati ac separati agri apud eos nihil est") ... They cannot stay more than one year in a place for cultivation's sake" ("neque longius anno remanere uno in loco colendi causa licet"). The Suebi lived between the Rhine and the Elbe. About the Germani, Caesar wrote: "No one has a particular field or area for himself, for the magistrates and chiefs give year by year to the people and the clans, who have gathered together, as much land and in such places as seem good to them and then make them move on after a year" ("Neque quisquam agri modum certum aut fines habet proprios, sed magistratus ac principes in annos singulos gentibus cognationibusque hominum, qui tum una coierunt, quantum et quo loco visum est agri attribuunt atque anno post alio transire cogunt" [Book 6.22]). Strabo (63 BCE—c. 
20 CE) also writes about the Suebi in his Geography (VII, 1, 3): "Common to all the people in this area is that they can easily change residence because of their sordid way of life; they do not cultivate fields or collect property, but live in temporary huts. They get their nourishment from their livestock for the most part, and like nomads, pack all their goods in wagons and go on to wherever they want". Horace writes in 17 BCE (Carmen Saeculare, 3, 24, 9ff.) about the people of Macedonia: "The proud Getae also live happily, growing free food and cereal for themselves on land they do not want to cultivate for more than a year" ("Vivunt et rigidi Getae, / immetata quibus iugera liberas / fruges et Cererem ferunt, / nec cultura placet longior annua"). Simple societies and environmental change A growing body of palynological evidence finds that simple human societies brought about extensive changes to their environments before the establishment of any sort of state, feudal or capitalist, and before the development of large scale mining, smelting or shipbuilding industries. In these societies agriculture was the driving force in the economy and shifting cultivation was the most common type of agriculture practiced. By examining the relationships between social and economic change and agricultural change in these societies, insights can be gained on contemporary social and economic change, global environmental change, and the place of shifting cultivation in those relationships. As early as 1930 questions about relationships between the rise and fall of the Mayan civilization of the Yucatán Peninsula and shifting cultivation were raised, and they continue to be debated today. Archaeological evidence suggests the development of Mayan society and economy began around 250 AD. A mere 700 years later it reached its apogee, by which time the population may have reached 2,000,000 people. There followed a precipitous decline that left the great cities and ceremonial centres vacant and overgrown with jungle vegetation. The causes of this decline are uncertain, but warfare and the exhaustion of agricultural land are commonly cited (Meggers 1954; Dumond 1961; Turner 1974). More recent work suggests the Maya may have, in suitable places, developed irrigation systems and more intensive agricultural practices (Humphries 1993). Similar paths appear to have been followed by Polynesian settlers in New Zealand and the Pacific Islands, who within 500 years of their arrival around 1100 AD turned substantial areas from forest into scrub and fern and in the process caused the elimination of numerous species of birds and animals (Kirch and Hunt 1997). In the restricted environments of the Pacific islands, including Fiji and Hawaii, early extensive erosion and change of vegetation is presumed to have been caused by shifting cultivation on slopes. Soils washed from slopes were deposited in valley bottoms as a rich, swampy alluvium. These new environments were then exploited to develop intensive, irrigated fields. The change from shifting cultivation to intensive irrigated fields occurred in association with a rapid growth in population and the development of elaborate and highly stratified chiefdoms (Kirch 1984). In the larger, temperate-latitude islands of New Zealand the presumed course of events took a different path. 
There the stimulus for population growth was the hunting of large birds to extinction, during which time forests in drier areas were destroyed by burning. This was followed by the development of intensive agriculture in favorable environments, based mainly on sweet potato (Ipomoea batatas), and a reliance on the gathering of two main wild plant species in less favorable environments. These changes, as in the smaller islands, were accompanied by population growth, competition for the occupation of the best environments, complexity in social organization, and endemic warfare (Anderson 1997). The record of humanly induced changes in environments is longer in New Guinea than in most places. Agricultural activities probably began 5,000 to 9,000 years ago. However, the most spectacular changes, in both societies and environments, are believed to have occurred in the central highlands of the island within the last 1,000 years, in association with the introduction of a crop new to New Guinea, the sweet potato (Golson 1982a; 1982b). One of the most striking signals of the relatively recent intensification of agriculture is the sudden increase in sedimentation rates in small lakes. The root question posed by these, and the numerous other examples that could be cited, of simple societies that have intensified their agricultural systems in association with increases in population and social complexity, is not whether or how shifting cultivation was responsible for the extensive changes to landscapes and environments. Rather, it is why simple societies of shifting cultivators in the tropical forest of Yucatán, or the highlands of New Guinea, began to grow in numbers and to develop stratified and sometimes complex social hierarchies. At first sight, the greatest stimulus to the intensification of a shifting cultivation system is a growth in population. If no other changes occur within the system, for each extra person to be fed from the system, a small extra amount of land must be cultivated. The total amount of land available is the land being presently cropped and all of the land in fallow. If the area occupied by the system is not expanded into previously unused land, then either the cropping period must be extended or the fallow period shortened. At least two problems exist with the population growth hypothesis. First, population growth in most pre-industrial shifting cultivator societies has been shown to be very low over the long term. Second, no human societies are known where people work only to eat. People engage in social relations with each other and agricultural produce is used in the conduct of these relationships. These relationships are the focus of two attempts to understand the nexus between human societies and their environments, one an explanation of a particular situation and the other a general exploration of the problem. Feedback loops In a study of the Duna in the Southern Highlands of New Guinea, a group in the process of moving from shifting cultivation into permanent field agriculture after the introduction of the sweet potato, Modjeska (1982) argued for the development of two "self-amplifying feedback loops" of ecological and social causation. The trigger to the changes was very slow population growth and the slow expansion of agriculture to meet the demands of this growth. This set in motion the first feedback loop, the "use-value" loop. As more forest was cleared, there was a decline in wild food resources and protein produced from hunting, which was substituted for by an increase in domestic pig raising. 
An increase in domestic pigs required a further expansion in agriculture. The greater protein available from the larger number of pigs increased human fertility and survival rates and resulted in faster population growth. The outcome of the operation of the two loops, one bringing about ecological change and the other social and economic change, is an expanding and intensifying agricultural system, the conversion of forest to grassland, a population growing at an increasing rate and expanding geographically, and a society that is increasing in complexity and stratification. Resources are cultural appraisals The second attempt to explain the relationships between simple agricultural societies and their environments is that of Ellen (1982, 252–270). Ellen does not attempt to separate use-values from social production. He argues that almost all of the materials required by humans to live (with perhaps the exception of air) are obtained through social relations of production, and that these relations proliferate and are modified in numerous ways. The values that humans attribute to items produced from the environment arise out of cultural arrangements and not from the objects themselves, a restatement of Carl Sauer's dictum that "resources are cultural appraisals". Humans frequently translate actual objects into culturally conceived forms, an example being the translation by the Duna of the pig into an item of compensation and redemption. As a result, two fundamental processes underlie the ecology of human social systems: first, the obtaining of materials from the environment and their alteration and circulation through social relations, and second, the giving of a value to the material, which will affect how important it is to obtain it, circulate it or alter it. Environmental pressures are thus mediated through social relations. Transitions in ecological systems and in social systems do not proceed at the same rate. The rate of phylogenetic change is determined mainly by natural selection and partly by human interference and adaptation, such as, for example, the domestication of a wild species. Humans however have the ability to learn and to communicate their knowledge to each other and across generations. If most social systems have the tendency to increase in complexity, they will, sooner or later, come into conflict, or "contradiction" (Friedman 1979, 1982), with their environments. What happens around the point of "contradiction" will determine the extent of the environmental degradation that will occur. Of particular importance is the ability of the society to change, to invent or to innovate technologically and sociologically, in order to overcome the "contradiction" without incurring continuing environmental degradation or social disintegration. An economic study of what occurs at the points of conflict, with specific reference to shifting cultivation, is that of Esther Boserup (1965). Boserup argues that low intensity farming, extensive shifting cultivation for example, has lower labor costs than more intensive farming systems. This assertion remains controversial. She also argues that, given a choice, a human group will always choose the technique which has the lowest absolute labor cost rather than the highest yield. But at the point of conflict, yields will have become unsatisfactory. 
Boserup argues, contra Malthus, that rather than population always overwhelming resources, humans will invent a new agricultural technique, or adopt an existing innovation, that will boost yields and that is adapted to the new environmental conditions created by the degradation which has already occurred, even though they will pay for the increases in higher labor costs. Examples of such changes are the adoption of new higher-yielding crops, the exchanging of a digging stick for a hoe, or a hoe for a plough, or the development of irrigation systems. The controversy over Boserup's proposal is in part over whether intensive systems are more costly in labor terms, and whether humans will bring about change in their agricultural systems before environmental degradation forces them to. In the contemporary world and global environmental change The estimated rate of deforestation in Southeast Asia in 1990 was 34,000 km² per year (FAO 1990, quoted in Potter 1993). In Indonesia alone it was estimated that 13,100 km² per year were being lost, 3,680 km² per year from Sumatra and 3,770 km² from Kalimantan, of which 1,440 km² were due to the fires of 1982 to 1983. Since those estimates were made, huge fires have ravaged Indonesian forests during the 1997 to 1998 El Niño-associated drought. Shifting cultivation was assessed by the Food and Agriculture Organization (FAO) to be one of the causes of deforestation, while logging was not. The apparent discrimination against shifting cultivators caused a confrontation between the FAO and environmental groups, who saw the FAO as supporting commercial logging interests against the rights of indigenous people (Potter 1993, 108). Other independent studies of the problem note that, despite the lack of government control over forests and the dominance of a political elite in the logging industry, the causes of deforestation are more complex. The loggers have provided paid employment to former subsistence farmers. One of the outcomes of cash incomes has been rapid population growth among indigenous groups of former shifting cultivators, which has placed pressure on their traditional long-fallow farming systems. Many farmers have taken advantage of the improved road access to urban areas by planting cash crops, such as rubber or pepper. Increased cash incomes often are spent on chain saws, which have enabled larger areas to be cleared for cultivation. Fallow periods have been reduced and cropping periods extended. Serious poverty elsewhere in the country has brought thousands of land-hungry settlers into the cut-over forests along the logging roads. The settlers practice what appears to be shifting cultivation but which is in fact a one-cycle slash-and-burn followed by continuous cropping, with no intention of long fallowing. Clearing of trees and the permanent cultivation of fragile soils in a tropical environment, with little attempt to replace lost nutrients, may cause rapid degradation of the fragile soils. The loss of forest in Indonesia, Thailand, and the Philippines during the 1990s was preceded by major ecosystem disruptions in Vietnam, Laos and Cambodia in the 1970s and 1980s caused by warfare. Forests were sprayed with defoliants, and thousands of rural forest-dwelling people were uprooted from their homes and driven into previously isolated areas. The loss of the tropical forests of Southeast Asia is the particular outcome of the general possible outcomes described by Ellen (see above) when small local ecological and social systems become part of a larger system. 
When the previous relatively stable ecological relationships are destabilized, degradation can occur rapidly. Similar descriptions of the loss of forest and destruction of fragile ecosystems could be provided from the Amazon Basin, where large-scale state-sponsored colonization of forest land has occurred (Becker 1995, 61), or from Central Africa, where endemic armed conflict is destabilizing rural settlement and farming communities on a massive scale. Comparison with other ecological phenomena In the tropical developing world, shifting cultivation, in its many diverse forms, remains a pervasive practice. Shifting cultivation was one of the first forms of agriculture practiced by humans and its survival into the modern world suggests that it is a flexible and highly adaptive means of production. However, it is also a grossly misunderstood practice. Many casual observers cannot see past the clearing and burning of standing forest and do not perceive the often ecologically stable cycles of cropping and fallowing. Nevertheless, shifting cultivation systems are particularly susceptible to rapid increases in population and to economic and social change in the larger world around them. The blame for the destruction of forest resources is often laid on shifting cultivators. But the forces bringing about the rapid loss of tropical forests at the end of the 20th century are the same forces that led to the destruction of the forests of Europe: urbanization, industrialization, increased affluence, population growth and geographical expansion, and the application of the latest technology to extract ever more resources from the environment in pursuit of wealth and political power by competing groups. Shifting cultivators themselves, however, are usually at the bottom of the social hierarchy. Alternative practice in the pre-Columbian Amazon basin Slash-and-char, as opposed to slash-and-burn, may create self-perpetuating soil fertility that supports sedentary agriculture, but the society so sustained may still be overturned, as above (see article at Terra preta).
Technology
Agriculture_2
null
217296
https://en.wikipedia.org/wiki/Shadow
Shadow
A shadow is a dark area on a surface where light from a light source is blocked by an object. In contrast, shade occupies the three-dimensional volume behind an object with light in front of it. The cross section of a shadow is a two-dimensional silhouette, or a reverse projection of the object blocking the light. Point and non-point light sources A point source of light casts only a simple shadow, called an "umbra". For a non-point or "extended" source of light, the shadow is divided into the umbra, penumbra, and antumbra. The wider the light source, the more blurred the shadow becomes. If two penumbras overlap, the shadows appear to attract and merge. This is known as the shadow blister effect. The outlines of the shadow zones can be found by tracing the rays of light emitted by the outermost regions of the extended light source. The umbra region does not receive any direct light from any part of the light source and is the darkest. A viewer located in the umbra region cannot directly see any part of the light source. By contrast, the penumbra is illuminated by some parts of the light source, giving it an intermediate level of light intensity. A viewer located in the penumbra region will see the light source, but it is partially blocked by the object casting the shadow. If there is more than one light source, there will be several shadows, with the overlapping parts darker, and various combinations of brightnesses or even colors. The more diffuse the lighting is, the softer and more indistinct the shadow outlines become, until they disappear. The lighting of an overcast sky produces few visible shadows. The absence of diffusing atmospheric effects in the vacuum of outer space produces shadows that are stark and sharply delineated by high-contrast boundaries between light and dark. For a person or object touching the surface where the shadow is projected (e.g. a person standing on the ground, or a pole in the ground), the shadows converge at the point of contact. A shadow shows, apart from distortion, the same image as the silhouette when looking at the object from the sun-side, hence the mirror image of the silhouette seen from the other side. Astronomy The names umbra, penumbra and antumbra are often used for the shadows cast by astronomical objects, though they are sometimes used to describe levels of darkness, such as in sunspots. An astronomical object casts human-visible shadows when its apparent magnitude is equal to or lower than −4. The only astronomical objects able to project visible shadows onto Earth are the Sun, the Moon, and, in the right conditions, Venus or Jupiter. Night is caused by a planet's own body blocking direct sunlight from the hemisphere facing away from its star. A shadow cast by the Earth onto the Moon is a lunar eclipse. Conversely, a shadow cast by the Moon onto the Earth is a solar eclipse. Daytime variation The sun casts shadows that change dramatically through the day. The length of a shadow cast on the ground is proportional to the cotangent of the sun's elevation angle—its angle θ relative to the horizon. Near sunrise and sunset, when θ = 0° and cot(θ) = ∞, shadows can be extremely long. If the sun passes directly overhead (only possible in locations between the Tropics of Cancer and Capricorn), then θ = 90°, cot(θ) = 0, and shadows are cast directly underneath objects. Such variations have long aided travellers, especially in barren regions such as the Arabian Desert. 
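The cotangent relationship in the Daytime variation paragraph above can be checked with a one-line calculation. In this Python sketch the function name and the 2 m pole are our own illustrative choices; it simply evaluates L = h·cot(θ) = h/tan(θ) for a few elevations:

```python
import math

def shadow_length(height_m, sun_elevation_deg):
    """Shadow length on level ground: L = h * cot(theta) = h / tan(theta),
    where theta is the sun's elevation angle above the horizon."""
    return height_m / math.tan(math.radians(sun_elevation_deg))

for elevation in (5, 30, 60, 90):
    print(f"elevation {elevation:2d} deg -> 2 m pole casts "
          f"{shadow_length(2.0, elevation):7.2f} m")   # ~0 m at 90 deg, ~22.9 m at 5 deg
```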
Propagation speed The farther the distance from the object blocking the light to the surface of projection, the larger the silhouette (they are considered proportional). Also, if the object is moving, the shadow cast by the object will project an image with dimensions (length) expanding proportionally faster than the object's own rate of movement. The increase in size and apparent speed also occurs when the distance between the object of interference and the light source is smaller. Eventually, this speed may exceed the speed of light. However, this does not violate special relativity, as shadows do not carry any information or momentum. Although the edge of a shadow appears to "move" along a wall, in actuality the increase of a shadow's length is part of a new projection that propagates at the speed of light from the object of interference. (A numerical sketch of this scaling appears at the end of this section.) Since there is no actual communication between points in a shadow (except for reflection or interference of light, at the speed of light), a shadow that projects over a surface of large distances (light years) cannot convey information between those distances with the shadow's edge. Color Visual artists are usually very aware of colored light emitted or reflected from several sources, which can generate complex multicolored shadows. Chiaroscuro, sfumato, and silhouette are examples of artistic techniques which make deliberate use of shadow effects. During the daytime, a shadow cast by an opaque object illuminated by sunlight has a bluish tinge. This happens because of Rayleigh scattering, the same property that causes the sky to appear blue. The opaque object is able to block the light of the sun, but not the ambient light of the sky, which is blue because atmospheric molecules scatter blue light more effectively. As a result, the shadow appears bluish. Dimension A shadow occupies a three-dimensional volume of space, but this is usually not visible until it projects onto a reflective surface. A light fog, mist, or dust cloud can reveal the 3D presence of volumetric patterns in light and shadow. Fog shadows may look odd to viewers who are not used to seeing shadows in three dimensions. A thin fog is just dense enough to be illuminated by the light that passes through the gaps in a structure or in a tree. As a result, the path of an object's shadow through the fog becomes visible as a darkened volume. In a sense, these shadow lanes are the inverse of crepuscular rays, which are caused by beams of light; shadow lanes are caused by the shadows of solid objects. Theatrical fog and strong beams of light are sometimes used by lighting designers and visual artists who seek to highlight three-dimensional aspects of their work. Inversion Often the shadows of chain-link fences and other such objects become inverted (light and dark areas are swapped) as they get farther from the object. A chain-link fence shadow will start with light diamonds and shadow outlines when it is touching the fence, but it will gradually blur. Eventually, if the fence is tall enough, the light pattern will go to shadow diamonds and light outlines. Photography In photography, which is essentially recording patterns of light, shade, and color, "highlights" and "shadows" are the brightest and darkest parts, respectively, of a scene or image. Photographic exposure must be adjusted (unless special effects are wanted) to allow the film or sensor, which has limited dynamic range, to record detail in the highlights without them being washed out, and in the shadows without their becoming undifferentiated black areas.
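Returning to the propagation-speed discussion above: the argument is just similar-triangle geometry. A shadow edge on a surface at distance D from the light moves D/d times faster than an object at distance d. A small sketch (the helper name and the sample distances are invented for illustration):

```python
C = 299_792_458.0  # speed of light in m/s

def shadow_edge_speed(object_speed: float, source_to_object: float,
                      source_to_surface: float) -> float:
    """Lateral speed of the projected shadow edge, by similar triangles."""
    return object_speed * (source_to_surface / source_to_object)

# A hand moving at 1 m/s, 1 m from a lamp, projected onto a wall one light-year away:
v = shadow_edge_speed(1.0, 1.0, 9.46e15)
print(f"{v:.2e} m/s = {v / C:.1e} x c")  # far beyond c, yet nothing physical travels that fast
```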
On satellite imagery and aerial photographs, taken vertically, tall buildings can be recognized as such by their long shadows (if the photographs are not taken in the tropics around noon), while these also show more of the shape of these buildings. Analogous concepts Shadow as a term is often used for any occlusion or blockage, not just those with respect to light. For example, a rain shadow is a dry area which, with respect to the prevailing wind direction, is beyond a mountain range; the elevated terrain impedes rainclouds from entering the dry zone. An acoustic shadow occurs when a direct sound has been blocked or diverted around a given area. Cultural aspects Shadows often appear in mythical or cultural contexts, sometimes in a malevolent light and other times not. An unattended shade was thought by some cultures to be similar to that of a ghost. The name for the fear of shadows is "sciophobia" or "sciaphobia". Chhaya is the Hindu goddess of shadows. In heraldry, when a charge is supposedly shown "in the shadow" (the appearance is of the charge merely being outlined in a neutral tint rather than being of one or more tinctures different from the field on which it is placed), it is technically described as "umbrated". Supposedly, only a limited number of specific charges can be so depicted. Shadows are often linked with darkness and evil; in common folklore, shadows that come to life are often evil beings trying to control the people they reflect. The film Upside-Down Magic features an antagonistic shadow spirit who possesses people. Ancient Egyptians surmised that a shadow, which they called šwt (shut), contains something of the person it represents because it is always present. Through this association, statues of people and deities were sometimes referred to as shadows. In Islam, shadows are a sign of submission to God. The Quran emphasizes that everything in the heavens and the earth, including shadows, prostrates to the Almighty in awe and obedience: "Do they not see how everything that Allah has created casts its shadow, inclining to the right and to the left, prostrating to Allah while they are humble?" (Quran 16:48). Similarly, the Quran states, "And to Allah prostrates whoever is within the heavens and the earth, willingly or by compulsion, and their shadows [as well] in the mornings and the afternoons" (Quran 13:15). Shadows, in this context, are a testament to the divine order and unity of creation. In a commentary to The Egyptian Book of the Dead (BD), Egyptologist Ogden Goelet, Jr. discusses the forms of the shadow: "In many BD papyri and tombs the deceased is depicted emerging from the tomb by day in shadow form, a thin, black, featureless silhouette of a person. The person in this form is, as we would put it, a mere shadow of his former existence, yet nonetheless still existing. Another form the shadow assumes in the BD, especially in connection with gods, is an ostrich-feather sun-shade, an object which would create a shadow." Energy generating Scientists from the National University of Singapore presented a shadow-effect energy generator (SEG), which consists of cells of gold deposited on a silicon wafer attached on a plastic film. The generator has a power density of 0.14 μW cm⁻² under indoor conditions (0.001 sun).
Physical sciences
Optics
Physics
217316
https://en.wikipedia.org/wiki/Chough
Chough
There are two species of passerine birds commonly called chough that constitute the genus Pyrrhocorax of the Corvidae (crow) family of birds. These are the red-billed chough (Pyrrhocorax pyrrhocorax) and the Alpine chough (or yellow-billed chough) (Pyrrhocorax graculus). The white-winged chough of Australia, despite its name, is not a true chough but rather a member of the family Corcoracidae and only distantly related. The choughs have black plumage and brightly coloured legs, feet and bills and are resident in the mountains and rocky sea-cliffs of southern Eurasia and North Africa. They have long broad wings and perform spectacular aerobatics. Both species pair for life and display fidelity to their breeding sites, which are usually caves or crevices in a cliff face. They build a lined stick nest and lay three to five eggs. They feed, usually in flocks, on short grazed grassland, taking mainly invertebrate prey, supplemented by vegetable material or food from human habitation, especially in winter. Changes in agricultural practices, which have led to local population declines and range fragmentation, are the main threats to this genus, although neither species is threatened globally. Taxonomy There are just two species in the genus, the red-billed chough and the Alpine chough. The first to be described was the red-billed chough, named as Upupa pyrrhocorax by Linnaeus in his Systema Naturae in 1758. His genus Upupa contained species that had a long curved bill and a short blunt tongue. These included the northern bald ibis and the hoopoe, birds now known to be completely unrelated to the choughs. The Alpine chough was described as Corvus graculus by Linnaeus in the 1766 edition of the Systema Naturae. Although Corvus is the crow genus to which the choughs' relatives belong, the English ornithologist Marmaduke Tunstall considered the chough to be sufficiently distinct to be moved to a new genus, Pyrrhocorax, which he described in his 1771 Ornithologia Britannica. The genus name is derived from Ancient Greek purrhos ("flame-coloured") and korax ("raven, crow"). The fossil record from the Pleistocene of Europe includes a form similar to the Alpine chough, sometimes categorised as an extinct subspecies of that bird, and a prehistoric form of the red-billed chough, P. p. primigenius. There are eight generally recognised extant subspecies of red-billed chough, and two of Alpine, although all differ only slightly from the nominate forms. The greater subspecies diversity in the red-billed species arises from an early divergence of the Asian and geographically isolated Ethiopian races from the western forms. Traditionally, the closest relatives of the choughs have been thought to be the jackdaws Coloeus and the typical crows Corvus, but more recent genetic studies have suggested the choughs are basal to a group of Asian jay genera (Crypsirina, Dendrocitta, Platysmurus, Temnurus), or most recently, basal to the entire Corvidae. Pyrrhocorax species differ from Corvus in that they have brightly coloured bills and feet, smooth rather than scaled tarsi, and very short, dense nasal feathers. Choughs have uniformly black plumage, lacking any paler areas as seen in some of their relatives. The two Pyrrhocorax are the main hosts of two specialist chough fleas, Frontopsylla frontalis and F. laetus, not normally found on other corvids. Etymology "Chough" was originally an alternative onomatopoeic name for the jackdaw, Corvus monedula, based on its call.
The similar red-billed chough, formerly particularly common in Cornwall, became known initially as "Cornish chough" and then just "chough", the name transferring from one species to the other. The Australian white-winged chough, Corcorax melanorhamphos, despite its similar shape and habits, belongs to the separate family Corcoracidae, is only distantly related to the Corvidae and the true choughs, and is an example of convergent evolution. Distribution and habitat Choughs breed in mountains, from Morocco and Spain eastwards through southern Europe and the Alps, across Central Asia and the Himalayas to western China. The Alpine chough is also found in Corsica and Crete, and the red-billed chough has populations in Ireland, the UK, the Isle of Man, Brittany and two areas of the Ethiopian Highlands. Both species are non-migratory residents throughout their range, only occasionally wandering to neighbouring countries. These birds are mountain specialists, although red-billed choughs also use coastal sea cliffs in Ireland, Great Britain and Brittany, feeding on adjacent short grazed grassland or machair; the small population on La Palma, one of the Canary Islands, is also coastal. The red-billed chough more typically breeds at high elevation in the mountains of Europe, North Africa and the Himalayas. In the Himalayas it reaches still greater heights in the summer, and has been recorded at extreme altitude on Mount Everest. The Alpine chough breeds at higher elevations still in Europe, Morocco and the Himalayas. It has nested at 6,500 m (21,300 ft), higher than any other bird species, and it has been observed following mountaineers ascending Mount Everest. Where the two species occur in the same mountains, the Alpine species tends to breed at a higher elevation than its relative, since it is better adapted for a diet at high altitudes. Description The choughs are medium-sized corvids; the red-billed chough is 39–40 centimetres (15–16 in) in length with a 73–90 centimetre (29–35 in) wingspan, and the Alpine chough averages slightly smaller, at 37–39 centimetres (14.5–15.5 in) in length, with a somewhat shorter wingspan. These birds have black plumage similar to that of many Corvus crows, but they are readily distinguished from members of that genus by their brightly coloured bills and legs. The Alpine chough has a yellow bill and the red-billed chough has a long, curved, red bill; both species have red legs as adults. The sexes are similar, but the juvenile of each species has a duller bill and legs than the adult and its plumage lacks the glossiness seen in older birds. The two choughs are distinguishable from each other by their bill colour, and in flight the long broad wings and short tail of the red-billed give it a silhouette quite different from its slightly smaller yellow-billed relative. Both species fly with loose deep wing beats, and frequently use their manoeuvrability to perform acrobatic displays, soaring in the updraughts at cliff faces then diving and rolling with fanned tail and folded wings. The red-billed chough's loud, ringing chee-ow call is similar in character to that of other corvids, particularly the jackdaw, although it is clearer and louder than the call of that species. In contrast, the Alpine chough has rippling and whistled sweeeooo calls quite unlike the crows. Small subspecies of both choughs have higher frequency calls than larger races, as predicted by the inverse relationship between body size and frequency.
Behaviour and ecology Breeding Choughs are monogamous and show high partner and site fidelity. Both species build a bulky nest of roots, sticks and plant stems lined with grass, fine twiglets or hair. It is constructed on a ledge, in a cave or similar fissure in a cliff face, or in man-made locations like abandoned buildings, quarries or dams. Red-billed choughs will also sometimes use occupied buildings such as Mongolian monasteries. The choughs are not colonial, although in suitable habitat several pairs may nest in close proximity. Both species lay 3–5 normally whitish eggs blotched with brown or grey, which are incubated by the female alone. The chicks hatch after two to three weeks. Red-billed chough chicks are almost naked, but the chicks of the higher-altitude Alpine chough hatch with a dense covering of natal down. The chicks are fed by both parents and fledge 29–31 days after hatching for the Alpine chough, and 31–41 days for the red-billed. The Alpine chough lays its eggs about one month later than its relative, although breeding success and reproductive behaviour are similar. The similarities between the two species presumably arose because of the same strong environmental constraints on breeding behaviour. The first-year survival rate of the juvenile red-billed chough is 72.5%, and for the Alpine it is 77%. The annual adult survival rate is 83–92% for the Alpine, but is unknown for the red-billed. Feeding In the summer, both choughs feed mainly on invertebrates such as beetles, snails, grasshoppers, caterpillars, and fly larvae. Ants are a favoured food of the red-billed chough. Prey items are taken from short grazed pasture, or in the case of coastal populations of red-billed chough, areas where plant growth is hindered by exposure to coastal salt spray or poor soils. The chough's bill may be used to pick insects off the surface, or to dig for grubs and other invertebrates. The red-billed chough typically excavates only a short distance into the thin soils of its feeding areas, but it may dig deeper in suitable conditions. Plant matter is also eaten, and the red-billed chough will take fallen grain where the opportunity arises; it has been reported as damaging barley crops by breaking off the ripening heads to extract the corn. Alpine choughs rely more on fruit and berries at times of year when animal prey is limited, and will readily supplement their winter diet with food provided by tourist activities in mountain regions, including ski resorts, refuse dumps and picnic areas. Both Pyrrhocorax species feed in flocks on open areas, often some distance from the breeding cliffs, particularly in winter. Feeding trips may cover considerable distances and altitude ranges. In the Alps, the development of high-altitude skiing has enabled more Alpine choughs to remain at high levels in winter. Where their ranges overlap, the two chough species may feed together in the summer, although there is only limited competition for food. An Italian study showed that the vegetable part of the winter diet for the red-billed chough was almost exclusively Gagea bulbs, whilst the Alpine chough took berries and hips. In June, red-billed choughs fed mainly on caterpillars whereas Alpine choughs ate cranefly pupae. Later in the summer, the Alpine chough consumed large numbers of grasshoppers, while the red-billed chough added cranefly pupae, fly larvae and beetles to its diet.
In the eastern Himalayas in November, Alpine choughs occur mainly in juniper forests, where they feed on juniper berries, differing ecologically from the red-billed choughs in the same region at the same time of year, which dig for food in the soil of the villages' terraced pastures. Natural threats Predators of the choughs include the peregrine falcon, golden eagle, and the Eurasian eagle-owl. The common raven will take nestlings. In northern Spain, red-billed choughs prefer to nest near lesser kestrel colonies. This falcon, which eats only insects, provides a degree of protection against larger predators, and the chough benefits in terms of a higher breeding success. The red-billed chough is occasionally parasitised by the great spotted cuckoo, a brood parasite for which the Eurasian magpie is the primary host. The choughs host bird fleas, including two Frontopsylla species which are Pyrrhocorax specialists. Other parasites recorded on choughs include a cestode, Choanotaenia pirinica, and various species of chewing lice in the genera Brueelia, Menacanthus and Philopterus. Blood parasites such as Plasmodium have been found in red-billed choughs, but this is uncommon and apparently does little harm. Parasitism levels are much lower than in some other passerine groups. Status Both Pyrrhocorax species have extensive geographical ranges and large populations; neither is thought to approach the thresholds for the global population decline criteria of the IUCN Red List (i.e., declining more than 30% in ten years or three generations), and they are therefore evaluated as being of Least Concern. However, some populations, particularly on islands such as Corsica and La Palma, are small and isolated. Both choughs occupied more extensive ranges in the past, reaching more southerly and lower-altitude areas than at present, with the Alpine chough breeding in Europe as far south as southern Italy, and both the decline and range fragmentation continue. Red-billed choughs have lost ground in most of Europe, and Alpine choughs have lost many breeding sites in the east of the continent. In the Canary Islands, the red-billed chough is now extinct on two of the islands on which it formerly bred, and the Alpine chough was lost from the archipelago altogether. The causes of the decline include the fragmentation and loss of open grasslands to scrub or to human activities such as the construction of ski resorts, and a longer-term threat comes from global warming, which would cause the species' preferred Alpine climate zone to shift to higher, more restricted areas, or locally to disappear entirely. The red-billed chough, which breeds at lower levels, has been more affected by human activity, and the declines away from its main Alpine breeding areas have seen it categorised as "vulnerable" in Europe. Only in Spain is it still common, and it has recently expanded its range in that country by nesting in old buildings in areas close to its traditional mountain breeding sites. In culture Although these are mainly mountain species with limited interactions with humans, the red-billed chough has a coastal population in the far west of its range, and has cultural connections particularly with Cornwall, where it appears on the Cornish coat of arms. A legend from that county says that King Arthur did not die but was transformed into a red-billed chough, and hence killing this bird was unlucky.
The red-billed chough was formerly reputed to be a habitual thief of small objects from houses, including burning wood or lighted candles, which it would use to set fire to haystacks or thatched roofs. As a high altitude species with limited contact with humans until the development of mountain tourism activities, the Alpine chough has little cultural significance. It was, however, featured together with its wild mountain habitat in Olivier Messiaen's Catalogue d'oiseaux ("Bird catalogue"), a piano piece written in 1956–58. Le chocard des alpes ("The Alpine Chough") is the opening piece of Book 1 of the work. A group of choughs may be referred to fancifully or jocularly as a chattering or clattering.
Biology and health sciences
Corvoidea
Animals
217387
https://en.wikipedia.org/wiki/Mite
Mite
Mites are small arachnids (eight-legged arthropods). Mites span two large orders of arachnids, the Acariformes and the Parasitiformes, which were historically grouped together in the subclass Acari. However, most recent genetic analyses do not recover the two as each other's closest relative within Arachnida, rendering the group non-monophyletic. Most mites are tiny, less than 1 mm (0.04 in) in length, and have a simple, unsegmented body plan. The small size of most species makes them easily overlooked; some species live in water, many live in soil as decomposers, others live on plants, sometimes creating galls, while others are predators or parasites. This last type includes the commercially destructive Varroa parasite of honey bees, as well as scabies mites of humans. Most species are harmless to humans, but a few are associated with allergies or may transmit diseases. The scientific discipline devoted to the study of mites is called acarology. Evolution and taxonomy The mites are not a defined taxon; the name is used for two distinct groups of arachnids, the Acariformes and the Parasitiformes. The phylogeny of the Acari has been relatively little studied, but molecular information from ribosomal DNA is being extensively used to understand relationships between groups. The 18S rRNA gene provides information on relationships among phyla and superphyla, while the ITS2, and the 18S and 28S ribosomal RNA genes, provide clues at deeper levels. Taxonomy Superorder Parasitiformes – ticks and a variety of mites Opilioacarida – a small order of large mites that superficially resemble harvestmen (Opiliones), hence their name Holothyrida – a small group of predatory mites native to former Gondwana landmasses Ixodida – ticks Mesostigmata – a large order of predatory and parasitic mites Trigynaspida – a large, diverse order Monogynaspida – a diverse order of parasitic and predatory mites Sejida – a small order of mites containing five families Superorder Acariformes – the most diverse group of mites Endeostigmata (probably paraphyletic) Eriophyoidea – gall mites and relatives Trombidiformes – plant parasitic mites (spider mites, peacock mites, red-legged earth mites, etc.), snout mites, chiggers, hair follicle mites, velvet mites, water mites, etc. Sphaerolichida – a small order of mites containing two families Prostigmata – a large order of sucking mites Sarcoptiformes Oribatida – oribatid mites, beetle mites, armored mites (formerly known as Cryptostigmata) Astigmatina – stored product, fur, feather, dust, and human itch mites, etc. Fossil record The mite fossil record is sparse, due to their small size and low preservation potential. The oldest fossils of acariform mites are from the Rhynie Chert, Scotland, which dates to the early Devonian, around 410 million years ago, while the earliest fossils of Parasitiformes are known from amber specimens dating to the mid-Cretaceous, around 100 million years ago. Most fossil acarids are no older than the Tertiary (up to 65 mya). Phylogeny Members of the superorders Opilioacariformes and Acariformes (sometimes known as Actinotrichida) are mites, as well as some of the Parasitiformes (sometimes known as Anactinotrichida). Recent genetic research has suggested that Acari is polyphyletic (of multiple origins).
A study using molecular data from the mitochondria and nucleus recovered Acariformes as sister to the Solifugae (camel spiders) and Parasitiformes as sister to the Pseudoscorpionida, with other arachnid orders separating these two groupings on the phylogenetic tree. However, a few phylogenomic studies have found strong support for monophyly of Acari and a sister relationship between Acariformes and Parasitiformes, although this finding has been questioned, with other studies suggesting that it likely represents a long branch attraction artefact. Anatomy External Mites are tiny members of the class Arachnida; most are well under a millimetre long as adults, although some are larger and some are considerably smaller. The body plan has two regions, a cephalothorax (with no separate head) or prosoma, and an opisthosoma or abdomen. Segmentation has almost entirely been lost and the prosoma and opisthosoma are fused, only the positioning of the limbs indicating the location of the segments. At the front of the body is the gnathosoma or capitulum. This is not a head and does not contain the eyes or the brain, but is a retractable feeding apparatus consisting of the chelicerae, the pedipalps and the oral cavity. It is covered above by an extension of the body carapace and is connected to the body by a flexible section of cuticle. Two-segmented chelicerae are the ancestral condition in Acariformes, but in more derived groups they are single-segmented. Three-segmented chelicerae are the ancestral condition in Parasitiformes, but they have been reduced to just two segments in more derived groups. The pedipalps differ between taxa depending on diet; in some species the appendages resemble legs while in others they are modified into chelicerae-like structures. The oral cavity connects posteriorly to the mouth and pharynx. Most mites have four pairs of legs (two pairs in Eriophyoidea), each with six segments, which may be modified for swimming or other purposes. The dorsal surface of the body is clad in hardened tergites and the ventral surface by hardened sclerites; sometimes these form transverse ridges. The gonopore (genital opening) is located on the ventral surface between the fourth pair of legs. Some species have one to five median or lateral eyes but many species are blind, and slit and pit sense organs are common. Both body and limbs bear setae (bristles) which may be simple, flattened, club-shaped or sensory. Mites are usually some shade of brown, but some species are red, orange, black or green, or some combination of these colours. Many mites have stigmata (openings used in respiration). In some mites, the stigmata are associated with peritremes: paired, tubular, elaborated extensions of the tracheal system. The higher taxa of mites are defined by these structures: Oribatida, formerly known as Cryptostigmata (crypto- = hidden), and Endeostigmata (endeo- = internal) lack primary stigmata and peritremes but may have secondary respiratory systems. For example, oribatids in the suborder Brachypylina have stigmata on the ventral plate of the body that are difficult to see (thus the former name Cryptostigmata). Astigmata (a- = without) lack stigmata and respire through their cuticle. Prostigmata (pro- = before/in front) have stigmata at the front of the body, usually on the lateral margins or between the chelicerae. These are associated with peritremes that may be on the prodorsum near the cheliceral bases, or be horn-like and emergent, or form a line or network on the dorsum of the gnathosomal capsule.
Opilioacaridae have four pairs of dorsolateral stigmata that are added sequentially during development. The other three orders of Parasitiformes, Holothyrida, Ixodida, and Mesostigmata (meso- = middle), have just one pair of stigmata in the region of the fourth pair of legs. They also have peritremes: in Ixodida these consist of paired encircling plates around the stigmata, while the peritremes in Mesostigmata and Holothyrida are grooves extending from the stigmata anteriorly (sometimes also posteriorly). Internal Mite digestive systems have salivary glands that open into the preoral space rather than the foregut. Most species carry two to six pairs of salivary glands that empty at various points into the subcheliceral space. A few mite species lack an anus: they do not defecate during their short lives. The circulatory system consists of a network of sinuses; most mites lack a heart, movement of fluid being driven by the contraction of body muscles. Ticks, however, and some of the larger species of mites, have a dorsal, longitudinal heart. Gas exchange is carried out across the body surface, but many species additionally have between one and four pairs of tracheae. The excretory system includes a nephridium and one or two pairs of Malpighian tubules. Several families of mites, such as Tetranychidae, Eriophyidae, Camerobiidae, Cunaxidae, Trombidiidae, Trombiculidae, Erythraeidae and Bdellidae, have silk glands used to produce silk for various purposes. Additionally, water mites (Hydrachnidia) produce long thin threads that may be silk. Reproduction and life cycle The sexes are separate in mites; males have a pair of testes in the mid-region of the body, each connected to the gonopore by a vas deferens, and in some species there is a chitinous penis; females have a single ovary connected to the gonopore by an oviduct, as well as a seminal receptacle for the storage of sperm. In most mites, sperm is transferred to the female indirectly; the male either deposits a spermatophore on a surface from which it is picked up by the female, or he uses his chelicerae or third pair of legs to insert it into the female's gonopore. In some of the Acariformes, insemination is direct, using the male's penis. The spermatozoa of all mites are aflagellate. The eggs are laid in the substrate, or wherever the mite happens to live. They take up to six weeks to hatch, depending on the species, with the next stage being the six-legged larvae. After three moults, the larvae become nymphs, with eight legs, and after a further three moults, they become adults. Longevity varies between species, but the lifespan of mites is short compared to many other arachnids. Ecology Niches Mites occupy a wide range of ecological niches. For example, Oribatida mites are important decomposers in many habitats. They eat a wide variety of material including living and dead plant and fungal material, lichens and carrion; some are predatory, though no oribatid mites are parasitic. Mites are among the most diverse and successful of all invertebrate groups. They have exploited a wide array of habitats, and because of their small size go largely unnoticed. They are found in freshwater (e.g. the water mites or Hydrachnidia) and saltwater (most Halacaridae), in the soil, in forests, pastures, agricultural crops, ornamental plants, thermal springs and caves. They inhabit organic debris of all kinds and are extremely numerous in leaf litter. They feed on animals, plants and fungi and some are parasites of plants and animals.
Some 48,200 species of mites have been described, but there may be a million or more species as yet undescribed. The tropical species Archegozetes longisetosus is one of the strongest animals in the world relative to its mass (100 μg): it lifts up to 1,182 times its own weight, over five times more than would be expected of such a minute animal. A mite also holds a speed record: for its length, Paratarsotomus macropalpis is the fastest animal on Earth. The mites living in soil consist of a range of taxa. Oribatida and Prostigmata are more numerous in soil than Mesostigmata, and have more soil-dwelling species. When soil is affected by an ecological disturbance such as agriculture, most mites (Astigmata, Mesostigmata and Prostigmata) recolonise it within a few months, whereas Oribatida take multiple years. Parasitism Many mites are parasitic on plants and animals. One family of mites, Pyroglyphidae, or nest mites, live primarily in the nests of birds and other animals. These mites are largely parasitic and consume blood, skin and keratin. Dust mites, which feed mostly on dead skin and hair shed from humans instead of consuming them from the organism directly, evolved from these parasitic ancestors. Ticks are a prominent group of mites that are parasitic on vertebrates, mostly mammals and birds, feeding on blood with specialised mouthparts. Parasitic mites sometimes infest insects. Varroa destructor attaches to the body of honey bees, and Acarapis woodi (family Tarsonemidae) lives in their tracheae. Hundreds of species are associated with other bees, mostly poorly described. They attach to bees in a variety of ways. For example, Trigona corvina workers have been found with mites attached to the outer face of their hind tibiae. Some are thought to be parasites, while others are beneficial symbionts. Mites also parasitize some ant species, such as Eciton burchellii. Most larvae of Parasitengona are ectoparasites of arthropods, while later life stages in this group tend to shift to being predators. Plant pests include the so-called spider mites (family Tetranychidae), thread-footed mites (family Tarsonemidae), and the gall mites (family Eriophyidae). Among the species that attack animals are members of the sarcoptic mange mites (family Sarcoptidae), which burrow under the skin. Demodex mites (family Demodecidae) are parasites that live in or near the hair follicles of mammals, including humans. Dispersal Being unable to fly, mites need some other means of dispersal. On a small scale, walking is used to access other suitable locations in the immediate vicinity. Some species mount to a high point, adopt a dispersal posture and get carried away by the wind, while others waft a thread of silk aloft to balloon to a new position. Parasitic mites use their hosts to disperse, and spread from host to host by direct contact. Another strategy is phoresy; the mite, often equipped with suitable claspers or suckers, grips onto an insect or other animal, and gets transported to another place. A phoretic mite is just a hitch-hiker and does not feed during the time it is carried by its temporary host. These travelling mites are mostly species that reproduce rapidly and are quick to colonise new habitats. Relationship with humans Mites are tiny and, apart from those that are of economic concern to humans, little studied. The majority are beneficial, living in the soil or aqueous environments and assisting in the decomposition of decaying organic material, as part of the carbon cycle.
Two species live on humans, namely Demodex folliculorum and Demodex brevis; both are frequently referred to as eyelash mites. Medical significance The majority of mite species are harmless to humans and domestic animals, but a few species can colonize mammals directly, acting as vectors for disease transmission, and causing or contributing to allergenic diseases. Mites which colonize human skin are the cause of several types of itchy skin rashes, such as gamasoidosis, rodent mite dermatitis, grain itch, grocer's itch, and scabies; Sarcoptes scabiei is a parasitic mite responsible for scabies, which is one of the three most common skin disorders in children. Demodex mites, which are a common cause of mange in dogs and other domesticated animals, have also been implicated in the human skin disease rosacea, although the mechanism by which Demodex contributes to the disease is unclear. Ticks are well known for carrying diseases, such as Lyme disease and Rocky Mountain spotted fever. Chiggers are known primarily for their itchy bite, but they can also spread disease in some limited circumstances, such as scrub typhus. The house-mouse mite is the only known vector of the disease rickettsialpox. House dust mites, found in warm and humid places such as beds, cause several forms of allergic diseases, including hay fever, asthma and eczema, and are known to aggravate atopic dermatitis. Among domestic animals, sheep are affected by the mite Psoroptes ovis, which lives on the skin, causing hypersensitivity and inflammation. Hay mites are a suspected reservoir for scrapie, a prion disease of sheep. In beekeeping The mite Varroa destructor is a serious pest of honey bees, contributing to colony collapse disorder in commercial hives. This organism is an obligate external parasite, able to reproduce only in bee colonies. It directly weakens its host by feeding on the bee's fat body, and can spread RNA viruses including deformed wing virus. Heavy infestation causes the death of a colony, generally over the winter. Since 2006, more than 10 million beehives have been lost. Biological pest control Various mites prey on other invertebrates and can be used to control their populations. Phytoseiidae, especially members of Amblyseius, Metaseiulus, and Phytoseiulus, are used to control pests such as spider mites. Among the Laelapidae, Gaeolaelaps aculeifer and Stratiolaelaps scimitus are used to control fungus gnats, poultry red mites and various soil pests. In culture Mites were first observed under the microscope by the English polymath Robert Hooke. In his 1665 book Micrographia, he stated that far from being spontaneously generated from dirt, they were "very prettily shap'd Insects". In 1898, Arthur Conan Doyle wrote a satirical poem, "A Parable", with the conceit of some cheese mites disputing the origin of the round cheddar cheese in which they all lived. The world's first science documentary featured cheese mites, seen under the microscope; the short film was shown in London's Alhambra music hall in 1903, causing a boom in the sales of simple microscopes.
Biology and health sciences
Arachnids
null
217392
https://en.wikipedia.org/wiki/Executable
Executable
In computer science, executable code, an executable file, or an executable program, sometimes simply referred to as an executable or binary, causes a computer "to perform indicated tasks according to encoded instructions", as opposed to a data file that must be interpreted (parsed) by an interpreter to be functional. The exact interpretation depends upon the use. "Instructions" is traditionally taken to mean machine code instructions for a physical CPU. In some contexts, a file containing scripting instructions (such as bytecode) may also be considered executable. Generation of executable files Executable files can be hand-coded in machine language, although it is far more convenient to develop software as source code in a high-level language that can be easily understood by humans. In some cases, source code might be specified in assembly language instead, which remains human-readable while being closely associated with machine code instructions. The high-level language is compiled into either an executable machine code file or a non-executable machine-code object file of some sort; the equivalent process on assembly language source code is called assembly. Several object files are linked to create the executable. Object files – executable or not – are typically stored in a container format, such as Executable and Linkable Format (ELF) or Portable Executable (PE); these formats are operating-system-specific. This gives structure to the generated machine code, for example dividing it into sections such as .text (executable code), .data (initialized global and static variables), and .rodata (read-only data, such as constants and strings). Executable files typically also include a runtime system, which implements runtime language features (such as task scheduling, exception handling, calling static constructors and destructors, etc.) and interactions with the operating system, notably passing arguments, environment, and returning an exit status, together with other startup and shutdown features such as releasing resources like file handles. For C, this is done by linking in the crt0 object, which contains the actual entry point and does setup and shutdown by calling the runtime library. Executable files thus normally contain significant additional machine code beyond that directly generated from the specific source code. In some cases, it is desirable to omit this, for example for embedded systems development, or simply to understand how compilation, linking, and loading work. In C, this can be done by omitting the usual runtime, and instead explicitly specifying a linker script, which generates the entry point and handles startup and shutdown, such as calling main to start and returning exit status to the kernel at the end. Execution In order to be executed by the system (such as an operating system or boot loader), an executable file must conform to the system's application binary interface (ABI). In simple interfaces, a file is executed by loading it into memory and jumping to the start of the address space and executing from there. In more complicated interfaces, executable files have additional metadata specifying a separate entry point. For example, in ELF, the entry point is defined in the header's e_entry field, which specifies the (virtual) memory address at which to start execution. In the GNU Compiler Collection, this field is set by the linker based on the _start symbol.
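To make the e_entry mechanism concrete, the sketch below reads that field straight out of an ELF header. It relies only on the published ELF layout (e_entry begins at byte offset 24 in both 32- and 64-bit files); the function name and the sample path are illustrative assumptions:

```python
import struct

def elf_entry_point(path: str) -> int:
    """Return e_entry, the virtual address where execution begins."""
    with open(path, "rb") as f:
        ident = f.read(16)                      # e_ident: magic, class, data encoding, ...
        if ident[:4] != b"\x7fELF":
            raise ValueError("not an ELF file")
        is_64bit = ident[4] == 2                # EI_CLASS: 1 = 32-bit, 2 = 64-bit
        endian = "<" if ident[5] == 1 else ">"  # EI_DATA: 1 = little-endian, 2 = big-endian
        f.seek(24)                              # e_entry follows e_ident, e_type, e_machine, e_version
        fmt = endian + ("Q" if is_64bit else "I")
        (entry,) = struct.unpack(fmt, f.read(struct.calcsize(fmt)))
    return entry

print(hex(elf_entry_point("/bin/ls")))  # the address the loader jumps to (value varies by system)
```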
Technology
Software development: General
null
217472
https://en.wikipedia.org/wiki/Eternity
Eternity
Eternity, in common parlance, is an infinite amount of time that never ends, or the quality, condition or fact of being everlasting or eternal. Classical philosophy, however, defines eternity as what is timeless or exists outside time, whereas sempiternity corresponds to infinite duration. Philosophy Classical philosophy defines eternity as what exists outside time, as in describing timeless supernatural beings and forces, distinguished from sempiternity, which corresponds to infinite time, as described in requiem prayers for the dead. Some thinkers, such as Aristotle, suggest the eternity of the natural cosmos in regard to both past and future eternal duration. Boethius defined eternity as "simultaneously full and perfect possession of interminable life". Thomas Aquinas believed that God's eternity does not cease, as it is without either a beginning or an end; the concept of eternity is of divine simplicity, thus incapable of being defined or fully understood by humankind. Thomas Hobbes (1588–1679) and many others in the Age of Enlightenment drew on the classical distinction to put forward metaphysical hypotheses such as "eternity is a permanent now". Contemporary philosophy and physics Today cosmologists, philosophers, and others look towards analyses of the concept from across cultures and history. They debate, among other things, whether an absolute concept of eternity has real application for fundamental laws of physics; compare the issue of entropy as an arrow of time. Religion Eternity as infinite duration is an important concept in many religions. God or gods are often said to endure eternally, or exist for all time, forever, without beginning or end. Religious views of an afterlife may speak of it in terms of eternity or eternal life. Christian theologians may regard immutability, like the eternal Platonic forms, as essential to eternity. Symbolism Eternity is often symbolized by the ouroboros, the endless snake swallowing its own tail. The circle, band, or ring is also commonly used as a symbol for eternity, as is the mathematical symbol of infinity, ∞. Symbolically these are reminders that eternity has no beginning or end.
Physical sciences
Time
Basics and measurement
217542
https://en.wikipedia.org/wiki/Seriema
Seriema
The seriemas are the sole living members of the small bird family Cariamidae (the entire family is also referred to as "seriemas"), which is also the only surviving lineage of the order Cariamiformes. Once believed to be related to cranes, they have been placed near the falcons, parrots, and passerines, as well as the extinct Phorusrhacidae (terror birds). The seriemas are large, long-legged territorial birds that range from about 70 to 90 cm (28 to 35 in) in length. They live in grasslands, savanna, dry woodland and open forests of Brazil, Bolivia, Argentina, Paraguay and Uruguay. There are two species of seriemas, the red-legged seriema (Cariama cristata) and the black-legged seriema (Chunga burmeisteri). Names for these birds in the Tupian languages are variously spelled as siriema, sariama, and çariama, and mean "crested". Description Both species are around 70–90 cm (28–35 in) long; the red-legged seriema is slightly larger than the black-legged, at about 90 cm versus 70–85 cm. The seriemas forage on foot and run from danger rather than fly (though they can fly for short distances, and they roost in trees). They have long legs, necks, and tails, but only short wings, reflecting their way of life. They are among the largest ground-dwelling birds endemic to the Neotropics (behind only the rheas). They are brownish birds with short bills and erectile crests, found in fairly dry open country, the red-legged seriema preferring grasslands and the black-legged seriema preferring scrub and open woodland. They give loud, yelping calls and are often heard before they are seen. They also have sharp claws, with an extensible and very curved second toe claw. Classification These birds are thought to be the closest living relatives of a group of gigantic (up to 3 m (10 ft) tall) carnivorous "terror birds", the phorusrhacids, which are known from fossils from South and North America. Several other related groups, such as the idiornithids and bathornithids, were part of Palaeogene faunas in North America and Europe and possibly elsewhere too. However, the fossil record of the seriemas themselves is poor, with two prehistoric species, Chunga incerta and Miocariama patagonica (formerly Noriegavis santacrucensis), both from the Miocene of Argentina, having been described to date. Some of the fossils from the Eocene fauna of the Messel pit (i.e. Salimia and Idiornis) have also been suggested to be seriemas, as has the massive predatory Paracrax from the Oligocene of North America, though their status remains uncertain. Extant species There are two living species of seriema. Behaviour and ecology Ecologically, the seriema is the South American counterpart of the African secretary bird. They feed on insects, snakes, lizards, frogs, young birds, and rodents, with small amounts of plant food (including maize and beans). They often associate with grazing livestock, probably to take insects the animals disturb. When seriemas catch small reptiles, they beat the prey on the ground (Redford and Peters 1986) or throw it at a hard surface to break its resistance and its bones. If the prey is too large to swallow whole, it is ripped into smaller pieces with the sickle claw, the bird holding the prey in its beak and tearing it apart with the claw. Through these feeding behaviours, seriemas are ecologically important as predators of small animals, including detritivores, and thereby play a part in the cycling of nutrients from dead plant matter into the soil. Seriemas are wary of humans, and if they feel threatened they usually spread their wings and face the threat. They walk in pairs or small groups.
Although perfectly capable of flying, they prefer to spend most of their time on land. They take flight only when necessary, for example to escape a predator. Overnight they take shelter in the treetops, where they also build their nests. Breeding The breeding biology of the seriemas is poorly known, and much of what is known comes only from red-legged seriemas. Pairs appear to be territorial and avoid others of their species while breeding, and fights between rivals have been observed. These fights, which involve kicking rivals, can go on for long periods of time and involve much calling by the birds concerned. Seriemas build a large bulky stick nest, lined with leaves and dung, which is placed in a tree some distance off the ground. The nest is placed so that the adults can reach it on foot rather than by flying, through hops and the occasional flutter. Both sexes are involved in building the nest. They lay two or three white or buff eggs sparsely spotted with brown and purple. The female does most of the incubation, which lasts from 24 to 30 days. Hatchlings are downy but stay in the nest for about two weeks, after which they leave the nest and follow both parents. They reach adult size at the age of four to five months; it is unknown when they become sexually mature.
Biology and health sciences
Basics
Animals
217628
https://en.wikipedia.org/wiki/Complex%20plane
Complex plane
In mathematics, the complex plane is the plane formed by the complex numbers, with a Cartesian coordinate system such that the horizontal x-axis, called the real axis, is formed by the real numbers, and the vertical y-axis, called the imaginary axis, is formed by the imaginary numbers. The complex plane allows for a geometric interpretation of complex numbers. Under addition, they add like vectors. The multiplication of two complex numbers can be expressed more easily in polar coordinates: the magnitude or modulus of the product is the product of the two absolute values, or moduli, and the angle or argument of the product is the sum of the two angles, or arguments. In particular, multiplication by a complex number of modulus 1 acts as a rotation. The complex plane is sometimes called the Argand plane or Gauss plane. Notational conventions Complex numbers In complex analysis, the complex numbers are customarily represented by the symbol z, which can be separated into its real (x) and imaginary (y) parts: z = x + iy, where x and y are real numbers, and i is the imaginary unit. In this customary notation the complex number z corresponds to the point (x, y) in the Cartesian plane; the point (x, y) can also be represented in polar coordinates with x = r cos θ and y = r sin θ. In the Cartesian plane it may be assumed that the arctangent function takes values between −π/2 and π/2 (in radians), and some care must be taken to define the more complete arctangent function for points (x, y) when x ≤ 0. In the complex plane these polar coordinates take the form z = x + iy = |z|(cos θ + i sin θ) = |z|e^(iθ), where |z| = √(x² + y²). Here |z| is the absolute value or modulus of the complex number z; θ, the argument of z, is usually taken on the interval 0 ≤ θ < 2π; and the last equality (to |z|e^(iθ)) is taken from Euler's formula. Without the constraint on the range of θ, the argument of z is multi-valued, because the complex exponential function is periodic, with period 2πi. Thus, if θ is one value of arg(z), the other values are given by θ + 2nπ, where n is any non-zero integer. While seldom used explicitly, the geometric view of the complex numbers is implicitly based on their structure as a Euclidean vector space of dimension 2, where the inner product of complex numbers z and w is given by Re(z·w̄); then for a complex number z its absolute value |z| coincides with its Euclidean norm, and its argument arg(z) with the angle turning from 1 to z. The theory of contour integration comprises a major part of complex analysis. In this context, the direction of travel around a closed curve is important – reversing the direction in which the curve is traversed multiplies the value of the integral by −1. By convention the positive direction is counterclockwise. For example, the unit circle is traversed in the positive direction when we start at the point z = 1, then travel up and to the left through the point z = i, then down and to the left through z = −1, then down and to the right through z = −i, and finally up and to the right to z = 1, where we started. Almost all of complex analysis is concerned with complex functions – that is, with functions that map some subset of the complex plane into some other (possibly overlapping, or even identical) subset of the complex plane. Here it is customary to speak of the domain of f(z) as lying in the z-plane, while referring to the range of f(z) as a set of points in the w-plane. In symbols we write z = x + iy and f(z) = w = u + iv, and often think of the function f as a transformation from the z-plane (with coordinates (x, y)) into the w-plane (with coordinates (u, v)). Complex plane notation The complex plane is denoted as ℂ. Argand diagram Argand diagram refers to a geometric plot of complex numbers as points z = x + iy using the horizontal x-axis as the real axis and the vertical y-axis as the imaginary axis.
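The polar multiplication rule stated earlier (moduli multiply, arguments add) is easy to confirm with Python's standard cmath module; the sample values below are arbitrary:

```python
import cmath

z = 3 + 4j                         # |z| = 5
w = cmath.exp(1j * cmath.pi / 2)   # modulus 1, argument pi/2: a 90-degree rotation

product = z * w
print(abs(product), abs(z) * abs(w))                          # moduli multiply: both 5.0
print(cmath.phase(product), cmath.phase(z) + cmath.phase(w))  # arguments add
```

Multiplying by w rotates z a quarter turn about the origin without changing its length, matching the rotation interpretation above.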
Argand diagrams are named after Jean-Robert Argand (1768–1822), although they were first described by the Norwegian–Danish land surveyor and mathematician Caspar Wessel (1745–1818). Argand diagrams are frequently used to plot the positions of the zeros and poles of a function in the complex plane. Stereographic projections It can be useful to think of the complex plane as if it occupied the surface of a sphere. Given a sphere of unit radius, place its center at the origin of the complex plane, oriented so that the equator on the sphere coincides with the unit circle in the plane, and the north pole is "above" the plane. We can establish a one-to-one correspondence between the points on the surface of the sphere minus the north pole and the points in the complex plane as follows. Given a point z = x + iy in the plane, draw a straight line connecting it with the north pole on the sphere. That line will intersect the surface of the sphere in exactly one other point. The point z = 0 will be projected onto the south pole of the sphere. Since the interior of the unit circle lies inside the sphere, that entire region (|z| < 1) will be mapped onto the southern hemisphere. The unit circle itself (|z| = 1) will be mapped onto the equator, and the exterior of the unit circle (|z| > 1) will be mapped onto the northern hemisphere, minus the north pole. Clearly this procedure is reversible – given any point on the surface of the sphere that is not the north pole, we can draw a straight line connecting that point to the north pole and intersecting the flat plane in exactly one point. Under this stereographic projection the north pole itself is not associated with any point in the complex plane. We perfect the one-to-one correspondence by adding one more point to the complex plane – the so-called point at infinity – and identifying it with the north pole on the sphere. This topological space, the complex plane plus the point at infinity, is known as the extended complex plane. We speak of a single "point at infinity" when discussing complex analysis. There are two points at infinity (positive, and negative) on the real number line, but there is only one point at infinity (the north pole) in the extended complex plane. Imagine for a moment what will happen to the lines of latitude and longitude when they are projected from the sphere onto the flat plane. The lines of latitude are all parallel to the equator, so they will become perfect circles centered on the origin z = 0. And the lines of longitude will become straight lines passing through the origin (and also through the "point at infinity", since they pass through both the north and south poles on the sphere). This is not the only plausible stereographic projection of a sphere onto a plane. For instance, the north pole of the sphere might be placed on top of the origin in a plane that is tangent to the sphere at its south pole. The details don't really matter. Any stereographic projection of a sphere onto a plane will produce one "point at infinity", and it will map the lines of latitude and longitude on the sphere into circles and straight lines, respectively, in the plane. An explicit formula for this correspondence is sketched below. Cutting the plane When discussing functions of a complex variable it is often convenient to think of a cut in the complex plane. This idea arises naturally in several different contexts.
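Returning for a moment to the stereographic correspondence described above, it can be written down explicitly. Below is a minimal sketch of the inverse stereographic projection for a unit sphere centred at the origin; the function name is an assumption, and the formula is the standard one for this geometry:

```python
def to_riemann_sphere(z: complex) -> tuple:
    """Map a complex number to the unit sphere, projecting from the north pole."""
    d = abs(z) ** 2 + 1
    return (2 * z.real / d, 2 * z.imag / d, (abs(z) ** 2 - 1) / d)

print(to_riemann_sphere(0))      # (0.0, 0.0, -1.0): the origin maps to the south pole
print(to_riemann_sphere(1))      # (1.0, 0.0, 0.0): the unit circle maps to the equator
print(to_riemann_sphere(1e6j))   # approaches (0, 0, 1): large |z| nears the north pole
```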
Multi-valued relationships and branch points Consider the simple two-valued relationship w = f(z) = ±√z = z^(1/2). Before we can treat this relationship as a single-valued function, the range of the resulting value must be restricted somehow. When dealing with the square roots of non-negative real numbers this is easily done. For instance, we can just define √x to be the non-negative real number y such that y² = x. This idea doesn't work so well in the two-dimensional complex plane. To see why, let's think about the way the value of f(z) varies as the point z moves around the unit circle. We can write z = e^(iθ) and take w = z^(1/2) = e^(iθ/2) (0 ≤ θ ≤ 2π). Evidently, as z moves all the way around the circle, w only traces out one-half of the circle. So one continuous motion in the complex plane has transformed the positive square root e⁰ = 1 into the negative square root e^(iπ) = −1. This problem arises because the point z = 0 has just one square root, while every other complex number has exactly two square roots. On the real number line we could circumvent this problem by erecting a "barrier" at the single point x = 0. A bigger barrier is needed in the complex plane, to prevent any closed contour from completely encircling the branch point z = 0. This is commonly done by introducing a branch cut; in this case the "cut" might extend from the point z = 0 along the positive real axis to the point at infinity, so that the argument of the variable z in the cut plane is restricted to the range 0 ≤ arg(z) < 2π. We can now give a complete description of w = z^(1/2). To do so we need two copies of the z-plane, each of them cut along the real axis. On one copy we define the square root of 1 to be e⁰ = 1, and on the other we define the square root of 1 to be e^(iπ) = −1. We call these two copies of the complete cut plane sheets. By making a continuity argument we see that the (now single-valued) function w = z^(1/2) maps the first sheet into the upper half of the w-plane, where 0 ≤ arg(w) < π, while mapping the second sheet into the lower half of the w-plane (where π ≤ arg(w) < 2π). The branch cut in this example does not have to lie along the real axis; it does not even have to be a straight line. Any continuous curve connecting the origin with the point at infinity would work. In some cases the branch cut doesn't even have to pass through the point at infinity. For example, consider the relationship w = g(z) = (z² − 1)^(1/2). Here the polynomial z² − 1 vanishes when z = ±1, so g evidently has two branch points. We can "cut" the plane along the real axis, from −1 to 1, and obtain a sheet on which g(z) is a single-valued function. Alternatively, the cut can run from z = 1 along the positive real axis through the point at infinity, then continue "up" the negative real axis to the other branch point, z = −1. This situation is most easily visualized by using the stereographic projection described above. On the sphere one of these cuts runs longitudinally through the southern hemisphere, connecting a point on the equator (z = −1) with another point on the equator (z = 1), and passing through the south pole (the origin, z = 0) on the way. The second version of the cut runs longitudinally through the northern hemisphere and connects the same two equatorial points by passing through the north pole (that is, the point at infinity). Restricting the domain of meromorphic functions A meromorphic function is a complex function that is holomorphic and therefore analytic everywhere in its domain except at a finite, or countably infinite, number of points. The points at which such a function cannot be defined are called the poles of the meromorphic function. Sometimes all of these poles lie in a straight line. In that case mathematicians may say that the function is "holomorphic on the cut plane".
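Before turning to the example below, note that the jump across a branch cut like the one introduced above for z^(1/2) is easy to observe numerically. Python's cmath.sqrt places its cut along the negative real axis (rather than the positive axis used in the discussion above), so the principal value flips sign as the argument crosses that axis:

```python
import cmath

eps = 1e-12
print(cmath.sqrt(-4 + eps * 1j))  # approximately +2j, just above the cut
print(cmath.sqrt(-4 - eps * 1j))  # approximately -2j, just below the cut
# Both answers square to -4; which root is returned depends on the side of the cut.
```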
Restricting the domain of meromorphic functions A meromorphic function is a complex function that is holomorphic and therefore analytic everywhere in its domain except at a finite, or countably infinite, number of points. The points at which such a function cannot be defined are called the poles of the meromorphic function. Sometimes all of these poles lie in a straight line. In that case mathematicians may say that the function is "holomorphic on the cut plane". By example: the gamma function, defined by the infinite product Γ(z) = (e^(−γz)/z) ∏n=1..∞ (1 + z/n)^(−1) e^(z/n), where γ is the Euler–Mascheroni constant, has simple poles at z = 0, −1, −2, −3, ..., because exactly one denominator in the infinite product vanishes when z is zero, or a negative integer. Since all its poles lie on the negative real axis, from z = 0 to the point at infinity, this function might be described as "holomorphic on the cut plane, the cut extending along the negative real axis, from 0 (inclusive) to the point at infinity." Alternatively, Γ(z) might be described as "holomorphic in the cut plane with −π < arg(z) < π and excluding the point z = 0." This cut is slightly different from the branch cut we've already encountered, because it actually excludes the negative real axis from the cut plane. The branch cut left the real axis connected with the cut plane on one side (0 ≤ θ), but severed it from the cut plane along the other side (θ < 2π). Of course, it's not actually necessary to exclude the entire line segment from z = 0 to −∞ to construct a domain in which Γ(z) is holomorphic. All we really have to do is puncture the plane at a countably infinite set of points {0, −1, −2, −3, ...}. But a closed contour in the punctured plane might encircle one or more of the poles of Γ(z), giving a contour integral that is not necessarily zero, by the residue theorem. Cutting the complex plane ensures not only that Γ(z) is holomorphic in this restricted domain – but also that the contour integral of Γ over any closed curve lying in the cut plane is identically equal to zero. Specifying convergence regions Many complex functions are defined by infinite series, or by continued fractions. A fundamental consideration in the analysis of these infinitely long expressions is identifying the portion of the complex plane in which they converge to a finite value. A cut in the plane may facilitate this process, as the following examples show. Consider the function defined by the infinite series f(z) = ∑n=1..∞ 1/(z² + n)². Because (−z)² = z² for every complex number z, it's clear that f(z) is an even function of z, so the analysis can be restricted to one half of the complex plane. And since the series is undefined when z² + n = 0, that is, when z = ±i√n, it makes sense to cut the plane along the entire imaginary axis and establish the convergence of this series where the real part of z is not zero before undertaking the more arduous task of examining f(z) when z is a pure imaginary number. In this example the cut is a mere convenience, because the points at which the infinite sum is undefined are isolated, and the cut plane can be replaced with a suitably punctured plane. In some contexts the cut is necessary, and not just convenient. Consider the infinite periodic continued fraction f(z) = 1 + z/(1 + z/(1 + z/(1 + ⋯))). It can be shown that f(z) converges to a finite value if and only if z is not a negative real number such that z < −1/4. In other words, the convergence region for this continued fraction is the cut plane, where the cut runs along the negative real axis, from −1/4 to the point at infinity.
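The continued-fraction claim can be probed numerically. In the sketch below (an added illustration; the tolerance and iteration counts are arbitrary choices), points off the cut settle to the attracting root of f = 1 + z/f, while a point on the cut keeps wandering.

```python
import cmath

def cf(z: complex, terms: int = 60):
    """Evaluate `terms` levels of 1 + z/(1 + z/(1 + ...)) bottom-up."""
    f = 1 + 0j
    for _ in range(terms):
        if abs(f) < 1e-15:     # safeguard: the iterate passed through zero
            return None
        f = 1 + z / f
    return f

# Off the cut the iterates converge to the fixed point (1 + sqrt(1 + 4z))/2:
for z in (2 + 0j, 0.5 + 2j, -0.2 + 0j):
    print(z, cf(z), (1 + cmath.sqrt(1 + 4 * z)) / 2)

# On the cut (z real, z < -1/4) there is no limit; successive partial
# evaluations keep giving different answers:
print([cf(-0.3 + 0j, n) for n in (20, 40, 60)])
```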
Gluing the cut plane back together We have already seen how the relationship w = z^(1/2) can be made into a single-valued function by splitting the domain of w into two disconnected sheets. It is also possible to "glue" those two sheets back together to form a single Riemann surface on which w = z^(1/2) can be defined as a holomorphic function whose image is the entire w-plane (except for the point w = 0). Here's how that works. Imagine two copies of the cut complex plane, the cuts extending along the positive real axis from z = 0 to the point at infinity. On one sheet define 0 ≤ arg(z) < 2π, so that 1^(1/2) = e^0 = 1, by definition. On the second sheet define 2π ≤ arg(z) < 4π, so that 1^(1/2) = e^(iπ) = −1, again by definition. Now flip the second sheet upside down, so the imaginary axis points in the opposite direction of the imaginary axis on the first sheet, with both real axes pointing in the same direction, and "glue" the two sheets together (so that the edge on the first sheet labeled "θ = 0" is connected to the edge labeled "θ < 4π" on the second sheet, and the edge on the second sheet labeled "θ = 2π" is connected to the edge labeled "θ < 2π" on the first sheet). The result is the Riemann surface domain on which w = z^(1/2) is single-valued and holomorphic (except when z = 0). To understand why w is single-valued in this domain, imagine a circuit around the unit circle, starting with z = 1 on the first sheet. When 0 ≤ θ < 2π we are still on the first sheet. When 2π ≤ θ < 4π we have crossed over onto the second sheet, and are obliged to make a second complete circuit around the branch point z = 0 before returning to our starting point, where θ = 4π is equivalent to θ = 0, because of the way we glued the two sheets together. In other words, as the variable z makes two complete turns around the branch point, the image of z in the w-plane traces out just one complete circle. Formal differentiation shows that w = z^(1/2) implies dw/dz = (1/2)z^(−1/2), from which we can conclude that the derivative of w exists and is finite everywhere on the Riemann surface, except when z = 0 (that is, w is holomorphic, except when z = 0). How can the Riemann surface for the function w = (z² − 1)^(1/2), also discussed above, be constructed? Once again we begin with two copies of the z-plane, but this time each one is cut along the real line segment extending from z = −1 to z = 1 – these are the two branch points of w. We flip one of these upside down, so the two imaginary axes point in opposite directions, and glue the corresponding edges of the two cut sheets together. We can verify that w is a single-valued function on this surface by tracing a circuit around a circle of unit radius centered at z = 1. Commencing at the point z = 2 on the first sheet we turn halfway around the circle before encountering the cut at z = 0. The cut forces us onto the second sheet, so that when z has traced out one full turn around the branch point z = 1, w has taken just one-half of a full turn, the sign of w has been reversed (because e^(iπ) = −1), and our path has taken us to the point z = 2 on the second sheet of the surface. Continuing on through another half turn we encounter the other side of the cut, where z = 0, and finally reach our starting point (z = 2 on the first sheet) after making two full turns around the branch point. The natural way to label θ = arg(z) in this example is to set −π < θ ≤ π on the first sheet, with π < θ ≤ 3π on the second. The imaginary axes on the two sheets point in opposite directions so that the counterclockwise sense of positive rotation is preserved as a closed contour moves from one sheet to the other (remember, the second sheet is upside down). Imagine this surface embedded in a three-dimensional space, with both sheets parallel to the xy-plane. Then there appears to be a vertical hole in the surface, where the two cuts are joined. What if the cut is made from z = −1 down the real axis to the point at infinity, and from z = 1, up the real axis until the cut meets itself? Again a Riemann surface can be constructed, but this time the "hole" is horizontal. Topologically speaking, both versions of this Riemann surface are equivalent – they are orientable two-dimensional surfaces of genus one.
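The double circuit described above can be tabulated. This sketch (an added illustration) accumulates arg(z) continuously from 0 to 4π, labels the sheet accordingly, and shows w completing exactly one turn.

```python
import cmath, math

# Walk z twice around the unit circle, keeping a continuously accumulated
# argument theta in [0, 4*pi]; theta < 2*pi means sheet 1, otherwise sheet 2
# (with 4*pi identified with 0). On the glued surface the single-valued
# square root is w = e^{i*theta/2}.
steps = 4
for k in range(2 * steps + 1):
    theta = 2 * math.pi * k / steps            # runs from 0 up to 4*pi
    sheet = 1 if theta % (4 * math.pi) < 2 * math.pi else 2
    w = cmath.exp(1j * theta / 2)
    print(f"turns of z: {theta / (2 * math.pi):.2f}   sheet {sheet}   "
          f"w = {w:.2f}")
# After two full turns of z (theta = 4*pi) we are back on sheet 1 and
# w = e^{2*pi*i} = 1: the image of z has traced out just one full circle.
```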
Use in control theory In control theory, one use of the complex plane is known as the s-plane. It is used to visualise the roots of the equation describing a system's behaviour (the characteristic equation) graphically. The equation is normally expressed as a polynomial in the parameter s of the Laplace transform, hence the name s-plane. Points in the s-plane take the form s = σ + jω, where 'j' is used instead of the usual 'i' to represent the imaginary component (the variable 'i' is often used to denote electrical current in engineering contexts). Another related use of the complex plane is with the Nyquist stability criterion. This is a geometric principle which allows the stability of a closed-loop feedback system to be determined by inspecting a Nyquist plot of its open-loop magnitude and phase response as a function of frequency (or loop transfer function) in the complex plane. The z-plane is a discrete-time version of the s-plane, where z-transforms are used instead of the Laplace transformation. (A small numerical illustration of these two pole-location criteria appears at the end of this article.) Quadratic spaces The complex plane is associated with two distinct quadratic spaces. For a point z = x + iy in the complex plane, the squaring function z² and the norm-squared x² + y² are both quadratic forms. The former is frequently neglected in the wake of the latter's use in setting a metric on the complex plane. These distinct faces of the complex plane as a quadratic space arise in the construction of algebras over a field with the Cayley–Dickson process. That procedure can be applied to any field, and different results occur for the fields R and C: when R is the take-off field, then C is constructed with the quadratic form x² + y², but the process can also begin with C and z², and that case generates algebras that differ from those derived from R. In any case, the algebras generated are composition algebras; in this case the complex plane is the point set for two distinct composition algebras. Other meanings of "complex plane" The preceding sections of this article deal with the complex plane in terms of a geometric representation of the complex numbers. Although this usage of the term "complex plane" has a long and mathematically rich history, it is by no means the only mathematical concept that can be characterized as "the complex plane". There are at least three additional possibilities. Two-dimensional complex vector space, a "complex plane" in the sense that it is a two-dimensional vector space whose coordinates are complex numbers.
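As promised in the control-theory section above, here is a small numerical illustration (added editorially; the example polynomials are arbitrary) of the two pole-location criteria: the left half of the s-plane for continuous time, and the interior of the unit circle in the z-plane for discrete time.

```python
import numpy as np

def stable_s(coeffs) -> bool:
    """Continuous time: every characteristic root must satisfy Re(s) < 0."""
    return bool(np.all(np.roots(coeffs).real < 0))

def stable_z(coeffs) -> bool:
    """Discrete time: every characteristic root must satisfy |z| < 1."""
    return bool(np.all(np.abs(np.roots(coeffs)) < 1))

print(stable_s([1, 3, 2]))    # s^2 + 3s + 2 = (s+1)(s+2): stable
print(stable_s([1, 0, 4]))    # s^2 + 4: roots +/-2j on the axis: not stable
print(stable_z([1, -0.5]))    # z - 0.5: root inside the unit circle: stable
print(stable_z([1, -1.5]))    # z - 1.5: root outside: unstable
```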
Mathematics
Complex analysis
null
217717
https://en.wikipedia.org/wiki/Carbon-burning%20process
Carbon-burning process
The carbon-burning process or carbon fusion is a set of nuclear fusion reactions that take place in the cores of massive stars (at least 4 solar masses at birth) and combine carbon into other elements. It requires high temperatures (> 5×10⁸ K or 50 keV) and densities (> 3×10⁹ kg/m³). These figures for temperature and density are only a guide. More massive stars burn their nuclear fuel more quickly, since they have to offset greater gravitational forces to stay in (approximate) hydrostatic equilibrium. That generally means higher temperatures, although lower densities, than for less massive stars. To get the right figures for a particular mass, and a particular stage of evolution, it is necessary to use a numerical stellar model computed with computer algorithms. Such models are continually being refined based on nuclear physics experiments (which measure nuclear reaction rates) and astronomical observations (which include direct observation of mass loss, detection of nuclear products from spectrum observations after convection zones develop from the surface to fusion-burning regions – known as dredge-up events – and so bring nuclear products to the surface, and many other observations relevant to models). Fusion reactions The principal reactions are:
¹²C + ¹²C → ²⁰Ne + ⁴He + 4.617 MeV
¹²C + ¹²C → ²³Na + ¹H + 2.241 MeV
¹²C + ¹²C → ²³Mg + ¹n − 2.599 MeV
Alternatively:
¹²C + ¹²C → ²⁴Mg + γ + 13.933 MeV
¹²C + ¹²C → ¹⁶O + 2 ⁴He − 0.113 MeV
Reaction products This sequence of reactions can be understood by thinking of the two interacting carbon nuclei as coming together to form an excited state of the ²⁴Mg nucleus, which then decays in one of the five ways listed above. The first two reactions are strongly exothermic, as indicated by the large positive energies released, and are the most frequent results of the interaction. The third reaction is strongly endothermic, as indicated by the large negative energy, meaning that energy is absorbed rather than emitted. This makes it much less likely, yet still possible in the high-energy environment of carbon burning. But the production of a few neutrons by this reaction is important, since these neutrons can combine with heavy nuclei, present in tiny amounts in most stars, to form even heavier isotopes in the s-process. The fourth reaction might be expected to be the most common from its large energy release, but in fact it is extremely improbable because it proceeds via the electromagnetic interaction, as it produces a gamma ray photon, rather than utilising the strong force between nucleons as do the first two reactions. Nucleons look a lot bigger to each other than they do to photons of this energy. However, the ²⁴Mg produced in this reaction is the only magnesium left in the core when the carbon-burning process ends, as ²³Mg is radioactive. The last reaction is also very unlikely since it involves three reaction products, as well as being endothermic – thinking of the reaction proceeding in reverse, it would require the three products all to converge at the same time, which is less likely than two-body interactions.
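The tabulated energy releases follow directly from the atomic masses of the nuclei involved. The sketch below is an added check using standard rounded atomic-mass values (in unified mass units); with masses rounded to six decimals it reproduces the table to within about 0.001 MeV.

```python
U_TO_MEV = 931.494          # energy equivalent of 1 u, in MeV

mass = {                    # atomic masses in u (rounded standard values)
    "12C": 12.000000, "20Ne": 19.992440, "23Na": 22.989769,
    "1H": 1.007825, "23Mg": 22.994124, "n": 1.008665,
    "24Mg": 23.985042, "16O": 15.994915, "4He": 4.002603,
}

def q_value(products) -> float:
    """Q = (mass of the two 12C nuclei - mass of the products) * c^2."""
    return (2 * mass["12C"] - sum(mass[p] for p in products)) * U_TO_MEV

print(f'{q_value(["20Ne", "4He"]):+.3f} MeV')        # table: +4.617
print(f'{q_value(["23Na", "1H"]):+.3f} MeV')         # table: +2.241
print(f'{q_value(["23Mg", "n"]):+.3f} MeV')          # table: -2.599
print(f'{q_value(["24Mg"]):+.3f} MeV')               # table: +13.933 (the photon is massless)
print(f'{q_value(["16O", "4He", "4He"]):+.3f} MeV')  # table: -0.113
```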
The protons produced by the second reaction can take part in the proton–proton chain reaction, or the CNO cycle, but they can also be captured by ²³Na to form ²⁰Ne plus a ⁴He nucleus. In fact, a significant fraction of the ²³Na produced by the second reaction gets used up this way. In stars between 4 and 11 solar masses, the ¹⁶O already produced by helium fusion in the previous stage of stellar evolution manages to survive the carbon-burning process fairly well, despite some of it being used up by capturing ⁴He nuclei. So the result of carbon burning is a mixture mainly of oxygen, neon, sodium and magnesium. The fact that the mass-energy sum of the two carbon nuclei is similar to that of an excited state of the magnesium nucleus is known as 'resonance'. Without this resonance, carbon burning would only occur at temperatures one hundred times higher. The experimental and theoretical investigation of such resonances is still a subject of research. A similar resonance increases the probability of the triple-alpha process, which is responsible for the original production of carbon. Neutrino losses Neutrino losses start to become a major factor in the fusion processes in stars at the temperatures and densities of carbon burning. Though the main reactions don't involve neutrinos, the side reactions such as the proton–proton chain reaction do. But the main source of neutrinos at these high temperatures involves a process in quantum theory known as pair production. A high-energy gamma ray with a greater energy than the rest-mass energy of two electrons (mass-energy equivalence) can interact with the electromagnetic fields of the atomic nuclei in the star, and become a particle and anti-particle pair of an electron and positron. Normally, the positron quickly annihilates with another electron, producing two photons, and this process can be safely ignored at lower temperatures. But around 1 in 10¹⁹ pair productions ends with a weak interaction of the electron and positron, which replaces them with a neutrino and anti-neutrino pair. Since they move at virtually the speed of light and interact very weakly with matter, these neutrinos usually escape the star without interacting, carrying away their mass-energy. This energy loss is comparable to the energy output from carbon fusion. Neutrino losses, by this and similar processes, play an increasingly important part in the evolution of the most massive stars. They force the star to burn its fuel at a higher temperature to offset them. Fusion processes are very sensitive to temperature, so the star can produce more energy to retain hydrostatic equilibrium, at the cost of burning through successive nuclear fuels ever more rapidly. Fusion produces less energy per unit mass as the fuel nuclei get heavier, and the core of the star contracts and heats up when switching from one fuel to the next, so both these processes also significantly reduce the lifetime of each successive fusion-burning fuel. Up to the helium-burning stage the neutrino losses are negligible. But from the carbon-burning stage onwards, the reduction in stellar lifetime due to energy lost in the form of neutrinos roughly matches the increased energy production due to fuel change and core contraction. In successive fuel changes in the most massive stars, the reduction in lifetime is dominated by the neutrino losses. For example, a star of 25 solar masses burns hydrogen in the core for 10⁷ years, helium for 10⁶ years and carbon for only 10³ years.
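Two of the numbers in this section can be checked with one-line estimates (added here for illustration): the temperature/energy equivalence quoted for carbon ignition, and the photon energy needed for pair production.

```python
K_B = 8.617e-5        # Boltzmann constant, eV per kelvin
M_E_C2 = 0.511e6      # electron rest energy, eV

# kT at the quoted ignition temperature of 5e8 K is of order 50 keV:
print(f"kT at 5e8 K  ~ {K_B * 5e8 / 1e3:.0f} keV")           # ~43 keV

# Pair production needs a photon above twice the electron rest energy:
print(f"pair-production threshold = {2 * M_E_C2 / 1e6:.3f} MeV")  # 1.022 MeV
```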
Stellar evolution During helium fusion, stars build up an inert core rich in carbon and oxygen. The inert core eventually reaches sufficient mass to collapse due to gravitation, whilst the helium burning moves gradually outward. This contraction of the inert core raises its temperature to the carbon ignition point; it also raises the temperature of the surrounding layers, allowing helium to burn in a shell around the core. Outside this is another shell burning hydrogen. The resulting carbon burning provides energy from the core to restore the star's mechanical equilibrium. However, the balance is only short-lived; in a star of 25 solar masses, the process will use up most of the carbon in the core in only 600 years. The duration of this process varies significantly depending on the mass of the star. Stars of less than 4 solar masses never reach core temperatures high enough to burn carbon, instead ending their lives as carbon-oxygen white dwarfs after shell helium flashes gently expel the outer envelope in a planetary nebula. In stars with masses between 4 and 12 solar masses, the carbon-oxygen core is under degenerate conditions and carbon ignition takes place in a carbon flash, which lasts just milliseconds and disrupts the stellar core. In the late stages of this nuclear burning they develop a massive stellar wind, which quickly ejects the outer envelope in a planetary nebula, leaving behind an O-Ne-Na-Mg white dwarf core of about 1.1 solar masses. The core never reaches a temperature high enough for further fusion burning of elements heavier than carbon. Stars of more than 12 solar masses start carbon burning in a non-degenerate core, and after carbon exhaustion proceed with the neon-burning process once contraction of the inert (O, Ne, Na, Mg) core raises the temperature sufficiently.
Physical sciences
Stellar astronomy
Astronomy
217772
https://en.wikipedia.org/wiki/Rosette%20Nebula
Rosette Nebula
The Rosette Nebula (also known as Caldwell 49) is an H II region located near one end of a giant molecular cloud in the Monoceros region of the Milky Way Galaxy. The open cluster NGC 2244 (Caldwell 50) is closely associated with the nebulosity, the stars of the cluster having been formed from the nebula's matter. The nebula has been noted to have a shape reminiscent of a human skull, and is sometimes referred to as the "Skull Nebula". It is not to be confused with NGC 246, which is also nicknamed the "Skull Nebula". Description The complex has the following New General Catalogue (NGC) designations:
NGC 2237 – Part of the nebulous region (also used to denote the whole nebula)
NGC 2238 – Part of the nebulous region
NGC 2239 – Part of the nebulous region (discovered by John Herschel)
NGC 2244 – The open cluster within the nebula (discovered by John Flamsteed in 1690)
NGC 2246 – Part of the nebulous region
The cluster and nebula lie at a distance of 5,000 light-years from Earth and measure roughly 130 light-years in diameter. The radiation from the young stars excites the atoms in the nebula, causing them to emit radiation themselves and producing the emission nebula we see. The mass of the nebula is estimated to be around 10,000 solar masses. A survey of the nebula with the Chandra X-ray Observatory has revealed the presence of numerous new-born stars inside the optical Rosette Nebula and studded within a dense molecular cloud. Altogether, approximately 2,500 young stars lie in this star-forming complex, including the massive O-type stars HD 46223 and HD 46150, which are primarily responsible for blowing the ionized bubble. Most of the ongoing star-formation activity is occurring in the dense molecular cloud to the south east of the bubble. A diffuse X-ray glow is also seen between the stars in the bubble, which has been attributed to a super-hot plasma with temperatures ranging from 1 to 10 million K. This is significantly hotter than the 10,000 K plasmas seen in H II regions, and is likely attributable to shock-heated winds from the massive O-type stars. On April 16, 2019, the Oklahoma Legislature passed HB1292, making the Rosette Nebula the official state astronomical object. Oklahoma Governor Kevin Stitt signed it into law on April 22, 2019.
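From the distance and diameter quoted above, the nebula's apparent size on the sky follows from a small-angle estimate (an added illustration, using only the figures in this article):

```python
import math

diameter_ly, distance_ly = 130.0, 5000.0
angle = 2 * math.atan((diameter_ly / 2) / distance_ly)   # radians
print(f"apparent diameter ~ {math.degrees(angle):.1f} degrees")
# ~1.5 degrees: roughly three times the apparent width of the full Moon.
```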
Physical sciences
Notable nebulae
Astronomy
217773
https://en.wikipedia.org/wiki/Septic%20tank
Septic tank
A septic tank is an underground chamber made of concrete, fiberglass, or plastic through which domestic wastewater (sewage) flows for basic sewage treatment. Settling and anaerobic digestion processes reduce solids and organics, but the treatment efficiency is only moderate (referred to as "primary treatment"). Septic tank systems are a type of simple onsite sewage facility. They can be used in areas that are not connected to a sewerage system, such as rural areas. The treated liquid effluent is commonly disposed of in a septic drain field, which provides further treatment. Nonetheless, groundwater pollution may occur and is a problem. The term "septic" refers to the anaerobic bacterial environment that develops in the tank and decomposes or mineralizes the waste discharged into the tank. Septic tanks can be coupled with other onsite wastewater treatment units such as biofilters or aerobic systems involving artificially forced aeration. The rate of accumulation of sludge—also called septage or fecal sludge—is faster than the rate of decomposition. Therefore, the accumulated fecal sludge must be periodically removed, which is commonly done with a vacuum truck. Description A septic tank consists of one or more concrete or plastic tanks of between 4,500 and 7,500 litres (1,000 and 2,000 gallons); one end is connected to an inlet wastewater pipe and the other to a septic drain field. Generally these pipe connections are made with a T pipe, allowing liquid to enter and exit without disturbing any crust on the surface. Today, the design of the tank usually incorporates two chambers, each equipped with an access opening and cover, and separated by a dividing wall with openings located about midway between the floor and roof of the tank. Wastewater enters the first chamber of the tank, allowing solids to settle and scum to float. The settled solids are anaerobically digested, reducing the volume of solids. The liquid component flows through the dividing wall into the second chamber, where further settlement takes place. One option for the effluent is to drain into the septic drain field, also referred to as a leach field, drain field or seepage field, depending upon locality. A percolation test is required prior to installation to ensure the porosity of the soil is adequate to serve as a drain field. Septic tank effluent can also be conveyed to secondary treatment, typically constructed wetlands. Constructed wetlands benefit from the good performance of septic tanks at removing solids, which keeps the wetlands from clogging quickly. Septic tank effluent can also be conveyed to a centralized treatment facility. The remaining impurities are trapped and eliminated in the soil, with the excess water eliminated through percolation into the soil, through evaporation, and by uptake through the root system of plants and eventual transpiration, or by entering groundwater or surface water. A piping network, often laid in a stone-filled trench (see weeping tile), distributes the wastewater throughout the field with multiple drainage holes in the network. The size of the drain field is proportional to the volume of wastewater and inversely proportional to the porosity of the drainage field. The entire septic system can operate by gravity alone or, where topographic considerations require, with inclusion of a lift pump.
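The stated proportionality can be turned into a toy sizing calculation. The numbers below are illustrative assumptions only (not regulatory values): the per-person flow and the per-soil acceptance rates are invented for the sake of the example.

```python
def drain_field_area(daily_flow_l: float, soil_rate: float) -> float:
    """Required area in m^2, given daily flow (L/day) and a soil
    acceptance rate (L per m^2 per day). Area ~ flow / porosity."""
    return daily_flow_l / soil_rate

flow = 4 * 150.0    # assumed: 4 residents at 150 L per person per day
for soil, rate in [("sand", 50.0), ("loam", 25.0), ("clay loam", 10.0)]:
    print(f"{soil:9s}: {drain_field_area(flow, rate):6.1f} m^2")
# A lower acceptance rate (less porous soil) demands a larger field,
# matching the inverse proportionality stated above.
```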
Certain septic tank designs include siphons or other devices to increase the volume and velocity of outflow to the drainage field. These help to fill the drainage pipe more evenly and extend the drainage field life by preventing premature clogging or bioclogging. An Imhoff tank is a two-stage septic system where the sludge is digested in a separate tank. This avoids mixing digested sludge with incoming sewage. Also, some septic tank designs have a second stage where the effluent from the anaerobic first stage is aerated before it drains into the seepage field. A properly designed and normally operating septic system is odour-free. Besides periodic inspection and emptying, a septic tank should last for decades with minimal maintenance; concrete, fibreglass, or plastic tanks last about 50 years. Emptying (desludging) Waste that is not decomposed by the anaerobic digestion must eventually be removed from the septic tank. Otherwise the septic tank fills up and wastewater containing undecomposed material discharges directly to the drainage field. Not only is this detrimental to the environment but, if the sludge overflows the septic tank into the leach field, it may clog the leach field piping or decrease the soil porosity itself, requiring expensive repairs. When a septic tank is emptied, the accumulated sludge (septage, also known as fecal sludge) is pumped out of the tank by a vacuum truck. How often the septic tank must be emptied depends on the volume of the tank relative to the input of solids, the amount of indigestible solids, and the ambient temperature (because anaerobic digestion occurs more efficiently at higher temperatures), as well as usage, system characteristics and the requirements of the relevant authority. Some health authorities require tanks to be emptied at prescribed intervals, while others leave it up to the decision of an inspector. Some systems require pumping every few years or sooner, while others may be able to go 10–20 years between pumpings. An older system with an undersized tank that is being used by a large family will require much more frequent pumping than a new system used by only a few people. Anaerobic decomposition is rapidly restarted when the tank is refilled. An empty tank may be damaged by hydrostatic pressure causing the tank to partially "float" out of the ground, especially in flood situations or very wet ground conditions. Another option is "scheduled desludging" of septic tanks, which has been initiated in several Asian countries including the Philippines, Malaysia, Vietnam, Indonesia, and India. In this process, every property along a defined route is covered, and the property occupiers are informed in advance about the desludging that will take place.
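The stated dependence of pumping frequency on tank volume and solids input can be sketched as a rough rule-of-thumb calculation. All parameters below are assumptions for illustration only; real intervals depend on usage, temperature and local requirements, as the text notes.

```python
def years_until_pumpout(tank_l: float, people: int,
                        sludge_l_per_person_year: float = 70.0,
                        full_fraction: float = 0.33) -> float:
    """Years until accumulated sludge fills `full_fraction` of the tank
    (a hypothetical desludging trigger used only for this example)."""
    return tank_l * full_fraction / (people * sludge_l_per_person_year)

print(f"{years_until_pumpout(4500, 2):.1f} years")   # couple, 4,500 L tank
print(f"{years_until_pumpout(4500, 6):.1f} years")   # large family, same tank
# The same tank serves a couple for ~10 years but a family of six for only
# ~3.5, echoing the undersized-tank example in the text.
```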
Maintenance The maintenance of a septic system is often the responsibility of the resident or property owner. Some forms of abuse or neglect include the following: User's actions Excessive disposal of cooking oils and grease can cause the inlet drains to block. Oils and grease are often difficult to degrade and can cause odor problems and difficulties with the periodic emptying. Flushing non-biodegradable waste items down the toilet, such as cigarette butts, cotton buds/swabs, menstrual hygiene products and condoms, can cause a septic tank to clog and fill rapidly, so these materials should not be disposed of in that manner. The same applies when the toilet is connected to a sewer rather than a septic tank. Using the toilet for disposal of food waste can cause a rapid overload of the system with solids and contribute to failure. Certain chemicals may damage the components of a septic tank or kill the bacteria needed in the septic tank for the system to operate properly, such as pesticides, herbicides, materials with high concentrations of bleach or caustic soda (lye), or other inorganic materials such as paints or solvents. Using water softeners – the brine discharge from water softeners may harm the bacteria responsible for breaking down the wastewater. Usually, however, the brine is sufficiently diluted with other wastewater that it does not adversely affect the septic system. Other factors Roots from trees and shrubbery protruding above the tank or drain field may clog and/or rupture them. Trees that are directly within the vicinity of a concrete septic tank have the potential to penetrate the tank as the system ages and the concrete begins to develop cracks and small leaks. Tree roots can cause serious flow problems due to plugging and blockage of drain pipes, and the trees themselves tend to grow extremely vigorously due to the ready supply of nutrients from the septic system. Playgrounds and storage buildings may cause damage to a tank and the drainage field. In addition, covering the drainage field with an impermeable surface, such as a driveway or parking area, will seriously affect its efficiency and possibly damage the tank and absorption system. Excessive water entering the system may overload it and cause it to fail. Very high rainfall, rapid snowmelt, and flooding from rivers or the sea can all prevent a drain field from operating, and can cause flow to back up, interfering with the normal operation of the tank. High winter water tables can also result in groundwater flowing back into the septic tank. Over time, biofilms develop on the pipes of the drainage field, which can lead to blockage. Such a failure can be referred to as "biomat failure". Septic tank additives Septic tank additives have been promoted by some manufacturers with the aim of improving the effluent quality from septic tanks, reducing sludge build-up and reducing odors. These additives—which are commonly based on "effective microorganisms"—are usually costly in the longer term and fail to live up to expectations. It has been estimated that in the U.S. more than 1,200 septic system additives were available on the market in 2011. Very little peer-reviewed and replicated field research exists regarding the efficacy of these biological septic tank additives. Environmental concerns While a properly maintained and located septic tank poses no greater environmental problems than centralized municipal sewage treatment, certain problems can arise with a septic tank in an unsuitable location, and septic tank failures are typically more expensive to fix or replace than municipal sewers. Since septic systems require large drain fields, they are unsuitable for densely built areas. Odor, gas emissions and carbon footprint Some constituents of wastewater, especially sulfates, are reduced under the anaerobic conditions of septic tanks to hydrogen sulfide, a pungent and toxic gas. Nitrates and organic nitrogen compounds can be reduced to ammonia. Because of the anaerobic conditions, fermentation and methanogenesis processes take place, which may generate carbon dioxide and/or methane. Both carbon dioxide and methane are greenhouse gases, with methane having a global warming potential about 25 times larger than that of carbon dioxide. This makes septic tanks potential greenhouse gas emitters.
This methane can, however, be burnt to produce energy for local usage. Nutrients in the effluent Septic tanks by themselves are ineffective at removing nitrogen compounds that have the potential to cause algal blooms in waterways into which affected water from a septic system finds its way. This can be remedied by using a nitrogen-reducing technology, such as hybrid constructed wetlands, or by simply ensuring that the leach field is properly sited to prevent direct entry of effluent into bodies of water. The fermentation processes cause the contents of a septic tank to be anaerobic, with a low redox potential, which keeps phosphates in a soluble and, thus, mobilized form. Phosphates discharged from a septic tank into the environment can trigger prolific plant growth, including algal blooms, which can also include blooms of potentially toxic cyanobacteria. The soil's capacity to retain phosphorus is usually large enough to handle the load from a normal residential septic tank. An exception occurs when septic drain fields are located in sandy or coarser soils on property adjacent to a water body. Because of limited particle surface area, these soils can become saturated with phosphates; phosphates will then progress beyond the treatment area, posing a threat of eutrophication to surface waters. Pathogens Pathogens dangerous to humans, such as E. coli and other coliform bacteria, are often reported following failures of septic tanks. A properly functioning septic system, on the other hand, provides a significant reduction of pathogens compared to direct discharge, due to settling (in the tank) and soil absorption (in the drain field). Log reductions of 4–8 for coliform bacteria and 0–2 for viruses are achieved in the effluent, and parasitic worm eggs are also removed. Additional filters may be added to improve removal performance, although they will need to be replaced periodically.
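For readers unfamiliar with the unit, a "log reduction" is a factor-of-ten reduction, so the quoted 4–8 range for coliform bacteria spans four orders of magnitude. A quick worked example (added illustration; the influent count is an assumed figure):

```python
def remaining(initial: float, log_reduction: float) -> float:
    """Count surviving an n-log reduction: N = N0 * 10**(-n)."""
    return initial * 10 ** (-log_reduction)

n0 = 1e6                      # assumed influent coliform count per 100 mL
for n in (4, 8):              # the quoted range for coliform bacteria
    print(f"{n}-log reduction: {remaining(n0, n):g} per 100 mL")
# A 4-log reduction leaves 100 organisms per 100 mL; 8-log leaves 0.01.
```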
Groundwater pollution In areas with high population density, groundwater pollution beyond acceptable limits may occur. Because of this problem, and owing to the high cost of extended collection systems, some small towns face building very expensive centralized wastewater treatment systems. To reduce residential development that might increase the demand to construct an expensive centralized sewerage system, building moratoriums and limitations on the subdivision of property are often imposed. Ensuring that existing septic tanks are functioning properly can also be helpful for a limited time, but this becomes less effective as a primary remediation strategy as population density increases. Surface water pollution In areas adjacent to water bodies with fish or shellfish intended for human consumption, improperly maintained and failing septic systems contribute to pollution levels that can force harvest restrictions and/or commercial or recreational harvest closures. Use In the United States, the 2008 American Housing Survey indicated that about 20 percent of all households rely on septic tanks, and that the overwhelming majority of these systems are located in rural (50%) and suburban (47%) areas. Indianapolis is one example of a large city where many of the city's neighborhoods still rely on separate septic systems. In Europe, septic systems are generally limited to rural areas. Regulations European Union In the European Union the EN 12566 standard provides the general requirements for packaged and site-assembled treatment plants used for domestic wastewater treatment. Part 1 (EN 12566-1) is for septic tanks that are prefabricated or factory-manufactured and made of polyethylene, glass-reinforced polyester, polypropylene, PVC-U, steel or concrete. Part 4 (EN 12566-4) regulates septic tanks that are assembled on site from prefabricated kits, generally of concrete construction. Certified septic tanks of both types must pass a standardized hydraulic test to assess their ability to retain suspended solids within the system; they are additionally assessed for water-tightness, treatment efficiency, and structural adequacy in the relevant ground conditions. France In France, about 4 million households (or 20% of the population) use on-site wastewater disposal systems (l'assainissement non collectif), including septic tanks (fosse septique). The legal framework for regulating the construction and maintenance of septic systems was introduced in 1992 and updated in 2009 and 2012 with the intent of establishing the technical requirements applicable to individual sewerage systems. Septic tanks in France are subject to inspection by SPANC (Service Public d'Assainissement Non Collectif), a professional body appointed by the respective local authorities to enforce wastewater collection laws, at least once every four years. Following the introduction of EN 12566, the discharge of effluent directly into ditches or watercourses is prohibited unless the effluent meets prescribed standards. Ireland According to the Census of Ireland 2011, 27.5% of Irish households (i.e. about 440,000 households), the majority in rural areas, use an individual septic tank. Following a European Court of Justice judgment made against Ireland in 2009 that deemed the country non-compliant with the Waste Framework Directive in relation to domestic wastewaters disposed of in the countryside, the Water Services (Amendment) Act 2012 was passed in order to regulate wastewater discharges from domestic sources that are not connected to the public sewer network, and to provide arrangements for the registration and inspection of existing individual domestic wastewater treatment systems. Additionally, a code of practice has been developed by the Environmental Protection Agency to regulate the planning and construction of new septic tanks, secondary treatment systems, septic drain fields and filter systems. Direct discharge of septic tank effluent into groundwater is prohibited in Ireland, while the indirect discharge via unsaturated subsoil into groundwater, e.g. by means of a septic drain field, or the direct discharge into surface water is permissible in accordance with a Water Pollution Act license. Registered septic tanks must be desludged by an authorized contractor at least once a year; the removed fecal sludge is disposed of either to a managed municipal wastewater treatment facility or to agriculture, provided that nutrient management regulations are met. United Kingdom Since 2015, only certain property owners in England and Wales with septic tanks or small packaged sewage treatment systems need to register their systems, and either apply for a permit or qualify for an exemption with the Environment Agency. Permits need to be granted to systems that discharge more than a certain volume of effluent in a given time or that discharge effluent directly into sensitive areas (e.g., some groundwater protection zones). In general, permits are not granted for new septic tanks that discharge directly into surface waters.
A septic tank discharging into a watercourse must be replaced or upgraded by 1 January 2020 to a sewage treatment plant (also called an onsite sewage facility), or sooner if the property is sold before this date, or if the Environment Agency (EA) finds that it is causing pollution. In Northern Ireland, the Department of the Environment must give permission for all wastewater discharges where it is proposed that the discharge will go to a waterway or soil infiltration system. The discharge consent will outline conditions relating to the quality and quantity of the discharge, in order to ensure the receiving waterway or the underground aquifer can absorb the discharge. The Water Environment Regulations 2011 regulate the registration of septic tank systems in Scotland. Proof of registration is required when new properties are being developed or existing properties change ownership. Australia In Australia, septic tank design and installation requirements are regulated by state governments, through departments of health and environmental protection agencies. Regulation may include codes of practice and legislation. Regulatory requirements for the design and installation of septic tanks commonly reference Australian Standards (1547 and 1546). Capacity requirements for septic tanks may be outlined within codes of practice, and can vary between states. Mainly because of water leaching from the effluent drains of many closely spaced septic systems, many council districts (e.g. Sunshine Coast, Queensland) have banned septic systems, and require them to be replaced with much more expensive small-scale sewage treatment systems that actively pump air into the tank, producing an aerobic environment. Septic systems have to be replaced as part of any new building application, regardless of how well the old system performed. United States According to the US Environmental Protection Agency, in the United States it is the home owners' responsibility to maintain their septic systems. Anyone who ignores this requirement will eventually face costly repairs when solids escape the tank and clog the clarified liquid effluent disposal system. In Washington, for example, a "shellfish protection district" or "clean water district" is a geographic service area designated by a county to protect water quality and tideland resources. The district provides a mechanism to generate local funds for water quality services to control non-point sources of pollution, such as septic system maintenance. The district also serves as an educational resource, calling attention to the pollution sources that threaten shellfish growing waters. Slang usage The term "septic tank", or more usually "septic", is used in some parts of Britain as a slang term for Americans, from the Cockney rhyming slang "septic tank" equalling "Yank". This is sometimes further shortened to "seppo" by Australians.
Technology
Food, water and health
null
217794
https://en.wikipedia.org/wiki/2channel
2channel
2channel, also known as 2ch, Channel 2, and sometimes retrospectively as 2ch.net, was an anonymous Japanese textboard founded in 1999 by Hiroyuki Nishimura. Described in 2007 as "Japan's most popular online community", the site had a level of influence comparable to that of traditional mass media such as television, radio, and magazines. At the time, the site drew an annual revenue of about US$1 million, and was the largest of its kind in the world, with around ten million visitors and 2.5 million posts made per day. The site was hosted, and had its domain registration provided, by Jim Watkins, based in San Francisco, California. In 2009, ownership of the site was transferred to Singapore-based Packet Monster Inc., under which Nishimura remained in control. In February 2014, Watkins seized the 2ch.net domain, taking full control over the website and assuming the role of site administrator. This has resulted in two textboards claiming to be the legitimate 2channel: 2ch.sc, owned by Nishimura through Packet Monster Inc., and 5ch.net, established in 2017 by redirect from the original domain and owned by Watkins through Philippines-based Loki Technology Inc. 2channel and its successors are more controversial than other social media in Japan; they are extremely popular among Japan's extreme right wing, known as the netto-uyoku, who post xenophobic comments, often targeting Koreans and Chinese. Defamation is of particular concern; by August 2008, Nishimura had received more than one hundred lawsuits over defamatory comments left on the website. Announcements of crimes have also drawn scrutiny to 2channel and its successors. In 2012, 2channel was accused by the Tokyo Metropolitan Police of allowing its platform to be used by amphetamine dealers, although no charges were filed. In September 2007, 2channel claimed over 2.4 million posts per day. As of July 2020, 5channel claimed 1,031 boards receiving around 2.7 million posts per day on weekends, with no growth since March 2016. Meanwhile, 2ch.sc claimed 826 boards receiving around 5,700 posts daily. History Predecessors Textboards like 2channel were rooted in two earlier technologies: dial-in bulletin boards and Usenet. 2channel has two predecessors: Ayashii World, created in 1996 by Shiba Masayuki, and Amezou, created in 1997. Ayashii World was the first large anonymous web bulletin board in Japan, while Amezou originated the more familiar "textboard" concept wherein threads are displayed chronologically, with new comments bumping old threads to the top, rather than in a branching tree. Ayashii World closed in 1998, leading most of its former users to go to Amezou; Nishimura advertised 2channel in a post on Amezou in May 1999, calling it "Amezou's second channel". From June, Amezou became increasingly unable to handle the load on its servers, until its host shut it down after threats containing the dox of Amezou's anonymous owner were posted on it. Early history 2channel was founded on 30 May 1999 in a college apartment in Conway, Arkansas, on the campus of the University of Central Arkansas, by Hiroyuki Nishimura. Success came quickly; many of Amezou's users began using it as soon as it opened. When compared with other bulletin boards, 2channel's technology wasn't much different; what led to its success was instead its being an "outlet for unfettered expression": by being hosted in the United States, 2channel was able to bypass more restrictive Japanese censorship rules while still being accessible from Japan.
The site also enjoyed greater immunity from legal action within Japan due to the location of its servers. By 2002, Google reported that the most-searched word in Japan was "2channel". By 2004, 2channel was already the largest internet forum in Japan. The name "2channel" is a reference to VHF channel 2, the default setting for the RF modulators used in earlier-generation game consoles (such as Nintendo's Family Computer) when connecting to Japanese television sets. Where Amezou was originally meant to be "channel one", 2channel was meant to be "channel two". The site's iconic jar logo is a reference to deprecatory remarks some former users of Ayashii World made about 2channel early in the site's history, likening it to a jar. Nishimura took this nickname and adopted it as the site's logo by 2002. Jim Watkins, an ex-US Army non-commissioned officer (sergeant first class), domain name registrar, and dedicated hosting service provider, had hosted 2channel since at least 2004 through various corporate identities, including Big-server.com Inc., Pacific Internet Exchange LLC and N. T. Technology Inc. Before 2channel, Watkins' company primarily specialized in using servers and domains in the United States to serve uncensored pornographic content to users in Japan. Ownership transfer and government scrutiny On 2 January 2009, Nishimura claimed to have transferred ownership of 2channel to Packet Monster Inc., a company based in Chinatown, Singapore, and to no longer be involved in the site's management. Nishimura was nonetheless charged with violating Japanese narcotic control laws on 20 December 2012. As part of their case, the Tokyo Metropolitan Police Department (MPD) claimed Nishimura remained involved in 2channel's operations, alleging that Packet Monster Inc. is a shell company. The main thrust of the complaint was that Nishimura allegedly did not delete posts seeking to purchase illicit amphetamine from other 2channel users online; an agency of the MPD alleged that in 2011, 97% of its 5,223 deletion requests did not result in deletion. On 19 March 2013, the Public Prosecutors Office decided not to prosecute the case. In August 2013, the Tokyo Regional Taxation Bureau declared in a tax audit that Nishimura had failed to declare website revenue which should have been taxed between 2009 and 2012, years in which he financially benefited from Packet Monster Inc.; Nishimura settled the matter by paying the owed tax. Personal information leak In August 2013, a hacker leaked into the public domain the personal details (including names, addresses, and phone numbers) and credit card numbers of thousands of 2channel users who had used 2channel's paid services, exposing the anonymous profiles of various high-level personas such as politicians and writers, including an attorney involved in 2channel cases and a staff member of AKB48. More than 74,000 users had their personal information exposed by the leak. The paid service involved in the leak was known as the maru. Its main utility was that it allowed users to read old threads: once a thread on 2channel received 1,000 posts, it fell off the active board and became part of the kako rogu, the site's archive of past threads, after which it was no longer freely accessible. 2channel charged an annual fee for the service, which was typically paid by credit card; logs of these payments were the source of the data leak. At the time of the leak, Watkins apologized on behalf of N. T.
Technology, Inc., saying he was the victim of a "cyber attack" and that "some data [of my] customers was compromised." Domain seizure and split On 19 February 2014, Jim Watkins, as chairman of N.T. Technology, Inc., 2channel's domain registrar, seized 2channel's domain. He took full control over the website, relieved Nishimura of all power, and assumed the role of website administrator. Watkins made the kako rogu free to all users shortly after assuming control. Watkins claimed that Nishimura had failed to pay money owed and that the seizure was a way to cover Nishimura's debts, while Nishimura claimed that he had in fact paid everything owed and that the domain transfer was an illegal domain hijacking. In response, Nishimura created his own clone of 2channel at 2ch.sc, scraping the contents of the entire 2channel website and updating 2ch.sc as new posts appeared on 2ch.net. In a Q&A session on 4chan shortly after becoming that site's owner, Nishimura claimed that 2channel was stolen by Watkins. Nishimura has attempted to repossess the domain both through WIPO's Uniform Domain-Name Dispute-Resolution Policy and through the Japanese court system. Through the Japan Patent Office, Nishimura owns the trademark "2channel"; even so, the WIPO refused to intervene on his behalf, suggesting the parties go to court instead because, in its view, the matter was not a case of "cybersquatting" but rather a "business dispute". Ron Watkins, Jim's son, registered the trademark "5channel" in Japan in 2016. On 1 October 2017, 2ch.net began redirecting to 5ch.net, a domain owned by Loki Technology, Inc. The chairman of Loki Technology Inc. is also Jim Watkins; his wife, Liziel, is the treasurer and majority shareholder. According to a press release, the name was changed to 5channel to avoid potential legal issues arising from Nishimura's ownership of the "2channel" trademark. Culture Due to its large number of boards, the types of information exchanged on 2channel are very diverse. There are boards for topics as diverse as sports, sex, celebrity gossip, computer programming and ongoing earthquakes; even some academic research has gotten its start on 2channel. Anonymous posting One of the most distinctive features of 2channel is its use of anonymous posting. Nishimura explained his reasons for preferring anonymity online to USC Annenberg's Japan Media Review thus: If there is a user ID attached to a user, a discussion tends to become a criticizing game. On the other hand, under the anonymous system, even though your opinion/information is criticized, you don't know with whom to be upset. Also with a user ID, those who participate in the site for a long time tend to have authority, and it becomes difficult for a user to disagree with them. Under a perfectly anonymous system, you can say, "it's boring," if it is actually boring. All information is treated equally; only an accurate argument will work. However, a frequent criticism directed toward anonymous textboards like 2channel, most notably by Kazuhiko Nishi, is that their anonymous nature makes them mere "toilet graffiti". 2channel's anonymity is a departure from most English-language internet forums, which require some form of registration, usually coupled with email verification for further identification of an individual; its anonymity in part inspired the creation of 4chan. On 2channel, a name field is available, but it is seldom used.
However, as open proxies such as the Tor network are banned from posting on 2channel, the administrators have some degree of ability to help law enforcement unmask users if necessary. Revenue While 2channel and its successors are commercial, 2channel was moderated by volunteers. 2channel relied on advertisements from "obscure" companies. In 2007, it had an annual revenue of around US$1 million. Between 2009 and 2012, ad profits were transferred to Nishimura's Singaporean shell company. As early as 2004, companies such as Dentsu were data-mining the website for their clients, keeping them informed of how they were being portrayed by 2channel users; by 2006, 75% of the content Dentsu analyzed on behalf of its customers was posted to 2channel. 2channel also received revenue from subscription services like the aforementioned maru. For its part, 5channel has a paid subscription service that allows people outside Japan to post on it; this service also hides ads from its subscribers. Matome 2channel historically allowed anyone to use its data, providing it in an easily parsable format; this made it simple to create third-party "dedicated browsers" for posting on and using 2channel. The openness of the data allowed for the proliferation of matome ("summary") sites and blogs, which summarize 2channel threads and attempt to collect what they see as the "best of" 2channel. In 2007, due to growing discontent towards such sites, Nishimura added a board, /poverty/, which marked every post on it with the phrase tensai kinshi ("reproduction prohibited"). This caused many users to abandon other boards for that board. Watkins made it a priority to combat "piracy" of 5channel by third-party matome sites in March 2014, adding tensai kinshi to many popular boards. Such sites siphon users from 2channel itself, with some receiving in excess of 100 million monthly pageviews; in one case a matome site earned its owner a substantial sum per month. Watkins followed up the rule change by restricting access to 2channel's data in March 2015, requiring that dedicated-browser authors use a special API to access 2channel's, and later 5channel's, thread data. On 10 July 2023, Jane, a company that provided a 5channel API server, terminated its 5channel API service, thus ending several applications' support for the site. Some browsers, including Jane's, replaced their support for 5channel with another anonymous textboard site named Talk. Phenomena Densha Otoko Densha Otoko is a Japanese franchise consisting of a movie, television series, manga, and other media, all based on the purportedly true story of a 23-year-old man who intervened when a drunk man started to harass several women on a train. The man ultimately begins dating one of the women. The event and the man's subsequent dates with the woman, chronicled on 2channel, directly inspired the franchise. Whether or not the original 2channel story is actually true is debated. Shift_JIS art 2channel and its successors, being textboards, cannot have images posted to them. Users get around this, however, by posting a more expressive form of ASCII art: Shift_JIS art. Political activism 2channel and other websites with "chan" in their name have been known for activism done by their users for a variety of causes.
Controversies Slander and legal issues During Hiroyuki's administration he was often openly defiant of Japanese law, especially around libel, and of his duty to follow it, as he made clear to Yomiuri Shimbun in March 2007. By May 2008, Nishimura had lost more than fifty libel lawsuits in Japanese civil courts and had been assessed millions of dollars in penalties; by August, according to him, he had received more than one hundred lawsuits. While the official pages of the website stated that slander was prohibited, activists such as Debito Arudou claimed that the site did not actually respond to requests to delete posts; in his case, mail was returned unopened. After the transfer to Packet Monster Inc., Arudou, who had still not received any of the court-ordered penalty, wrote in an op-ed that Nishimura had only transferred his assets to increase his "unaccountability". While Nishimura at that point had never paid any of the compensation courts ordered in his cases, in 2010 one of his plaintiffs succeeded in getting compensated through the publisher of one of Nishimura's books. Crime announcements were a regular occurrence on 2channel, including announcements of mass suicides and murders. After the 2000 Neomugicha incident, in which a bus was hijacked by a man who posted on 2channel, police officers started regularly policing 2channel; such surveillance only increased after the perpetrator of the Akihabara massacre announced his 2008 attack on 2channel as well. Former superintendent of the Tokyo Metropolitan Police Tateshi Higuchi called the site a "den of iniquity". According to The Japan Times, however, 2channel has cooperated with police in the past, helping them catch criminals who used 2channel by giving police their IP addresses, from which their locations were determined. Such crime announcements have continued to be a problem on 5channel: it was speculated that the man who carried out the Kyoto Animation arson attack posted an advance warning of the crime on 5channel. Far-right nationalism and anti-Korean hate speech 2channel, with its massive size and anonymous posting, is abundant with slander, hate speech and defamation against public figures, institutions, and minority ethnic groups. Far-right users of 2channel are referred to as netto-uyoku, a term roughly analogous to "alt-right". Though the site has rules against posts illegal under Japanese law, the scale and anonymous nature of the site make prompt deletions difficult to realize in practice. Furthermore, on occasion, 2channel has been accused of being reluctant to remove defamatory posts. The discussion boards are also often used to coordinate real-life demonstrations; as an example, 2channel users organized an August 2011 rally against Fuji Television, their complaint being that the channel was broadcasting too many Korean television shows. Sankei Shimbun reported in 2018 that 5channel, which received most of 2channel's users, has the same reputation for attracting netto-uyoku. 2channel netto-uyoku frequently make racist comments against Koreans. In 2009, it was even discovered that an Asahi Shimbun employee had posted racist remarks towards Koreans on 2channel. After the 2011 Tōhoku earthquake and tsunami, fake news proliferated on 2channel, falsely accusing Chinese and Korean people of "plundering" evacuation centers. Technology 2channel operated on forum software that was considered innovative at the time of its founding, originally written by Hiroyuki himself but later replaced through the collective effort of his Unix-savvy users.
It was a major departure from Usenet; however, when compared to other Japanese textboards of the time, such as Amezou, 2channel's format was not much different. Boards in the textboard software have their threads sorted by the time of their last post, so making a post bumps the thread to the top of the board index. However, when posting in a thread, users may use a function known as sage to avoid bumping the thread in this way. Often, posters will sage a thread on purpose, to avoid unwanted attention. Major outages 2010–2011 Korean DDoS In response to racism towards Koreans by 2channel users, especially against Yuna Kim, a South Korean figure skater who defeated her Japanese rival at the 2010 Vancouver Olympics, the site suffered an extended outage in March 2010 due to a distributed denial of service (DDoS) attack conducted by a Korean hacking group. The attack against Jim Watkins' Pacific Internet Exchange LLC affected other sites on the shared network as well, including some belonging to US government agencies; it is estimated to have cost . Watkins requested that the American government investigate the event as an instance of "cyberterrorism"; according to him, sporadic DDoS attacks by Koreans continued into 2011. 2015 8chan DDoS Beginning on 8 January 2015, 8chan, also owned by Jim Watkins and hosted on the network of N. T. Technology, Inc., suffered an outage due to a DDoS attack. Due to the attack, 2ch.net, then owned by Watkins but not yet operated under the name 5channel, went down as well. The attacks against the messageboards lasted until at least 13 January, leading "many 2channel users to become angry with the management". Societal impact In September 2007, 2channel averaged over 2.4 million posts per day. As of July 2020, 5channel had 1,031 boards receiving around 2.7 million posts per day on weekends, with no growth since March 2016. Meanwhile, 2ch.sc then had 826 boards receiving around 5,700 posts daily. Due to its popularity, 2channel and its successors have had considerable influence on Japanese society. Children's use of 2channel Use of sites like 2channel by minors is a major concern in Japan. Some children's search sites, including one now defunct, filtered textboards like 2channel. In Tokyo, a local ordinance requires that internet service providers develop filters to prevent minors from accessing sites which could harm the "sound and wholesome fostering [of their youth]"; they must also confirm, before installing a connection, whether any minors live in the household. Despite this, however, web filter provider Net Star in February 2007 released the results of a survey showing that 12.2% of primary and secondary school students used 2channel. In response to threads on 2channel about certain schools which were leading to cyberbullying, the Ministry of Education in 2008 released a 65-page manual for teachers and parents on how to handle the issue. Concerned about the popularity of 2channel among children and teenagers, a team of childhood education professors at the University of the Ryukyus in 2009 published a paper making recommendations to lawmakers on how to curb such use. In February 2020, Nishimura himself wrote an op-ed in Nikkan Kogyo Shimbun warning parents about the dangers of allowing their children unfettered access to social media sites like YouTube and 2channel. Politicians and 2channel Naoto Kan, a future Prime Minister who was then a member of the National Diet, sent a legal notice on 10 May 2000 demanding that 2channel delete a post by someone falsely claiming to be him. 
After the Liberal Democratic Party presidential election in 2007, future Prime Minister Tarō Asō stated in a Fuji TV interview that he sometimes posted on 2channel. During each election season, supporting posts for perennial candidates Matayoshi Jesus and Mac Akasaka were frequently made on 2channel, turning them into something of a meme, similar to the repeated candidacies of Vermin Supreme in the United States. After more than ten failed candidacies for various political offices, including Governor of Tokyo, Akasaka was eventually elected to the Tokyo Metropolitan Assembly, representing Minato, in April 2019. Asahi Shimbun credited Akasaka's online fame with helping him win the surprise victory. 2channel in the media Japanese news organizations often relied on 2channel to determine the issues the public was thinking about, and for leads. However, the mass media often reported on it negatively, much as it had reported on otaku culture a decade earlier before that culture went mainstream, even though internet trends now routinely cross over into other media. The phrase , when used in reporting, may refer either to 2channel or to other forums. Movements spawned on 2channel often receive media attention, with coverage noting how the methods of 2channel activists break socially normative behavior and bring pressure to bear through sheer numbers. Beyond this, though, 2channel posts were often a basis for media reports in Japan. TV programs have even featured 2channel's moderators and users; comedian Hikari Ōta, for example, criticized Nishimura during a discussion on the Tokyo Broadcasting System's Sandējapon on the ideal limits of free expression as applied to social networks. Shokun! magazine, during its operation, ran a column sharing "patriotic" 2channel posts. Weekly Bunshun, published by Bungeishunjū, meanwhile, has been criticized for being seen as overly pro-2channel and for relying on its posts too much in its reporting.
Technology
Social network and blogging
null
217810
https://en.wikipedia.org/wiki/Messier%2013
Messier 13
Messier 13, or M13 (also designated NGC 6205 and sometimes called the Great Globular Cluster in Hercules, the Hercules Globular Cluster, or the Great Hercules Cluster), is a globular cluster of several hundred thousand stars in the constellation of Hercules. Discovery and visibility Messier 13 was discovered by Edmond Halley in 1714, and cataloged by Charles Messier on June 1, 1764, into his list of objects not to mistake for comets; Messier's list, including Messier 13, eventually became known as the Messier catalog. It is located at right ascension 16h 41.7m, declination +36° 28'. Messier 13 is often described by astronomers as the most magnificent globular cluster visible to northern observers. About one third of the way from Vega to Arcturus, four bright stars in Hercules form the Keystone asterism, the broad torso of the hero. M13 can be seen in this asterism two-thirds of the way north (by west) from Zeta to Eta Herculis. With an apparent magnitude of 5.8, Messier 13 may be visible to the naked eye with averted vision on dark nights. Messier 13 is prominent in traditional binoculars as a bright, round patch of light. Its diameter is about 23 arcminutes and it is readily viewable in small telescopes. A telescope with at least four inches of aperture resolves stars in Messier 13's outer extent as small pinpoints of light. However, only larger telescopes resolve stars further into the center of the cluster. The cluster is visible throughout the year from latitudes greater than 36 degrees north, with the longest visibility during Northern Hemisphere spring and summer. Near Messier 13 is NGC 6207, a 12th-magnitude edge-on galaxy that lies 28 arcminutes directly northeast. A small galaxy, IC 4617, lies halfway between NGC 6207 and M13, north-northeast of the large globular cluster's center. At low powers the cluster is bracketed by two seventh-magnitude stars. Characteristics About 145 light-years in diameter, M13 is composed of several hundred thousand stars, with estimates varying from around 300,000 to over half a million. The brightest star in the cluster is a red giant, the variable star V11, also known as V1554 Herculis, with an apparent visual magnitude of 11.95. M13 is 22,200 to 25,000 light-years away from Earth, and the globular cluster is one of over one hundred that orbit the center of the Milky Way. The stars in this cluster are firmly in the Population II category, markedly lower in metals than Population I stars like the Sun and most other stars in the Sun's close proximity. M13 as a whole has only about 4.6% as much iron as the Sun does. Single stars in this globular cluster were first resolved in 1779. Compared to the stars in the neighborhood of the Sun, the stars of the M13 population are more than a hundred times more densely packed. They are so close together that they sometimes collide and produce new stars. The newly formed, young stars, known as "blue stragglers", are particularly interesting to astronomers. The last three variables (V63, V64 and V65) were discovered from Spain in April 2021, March 2022 and January 2024, respectively. Arecibo message The 1974 Arecibo message, which contained encoded information about the human race, DNA, atomic numbers, Earth's position and other information, was beamed from the Arecibo Observatory radio telescope towards Messier 13 as an experiment in contacting potential extraterrestrial civilizations in the cluster. M13 was chosen because it was a large, relatively close star cluster that was available at the time and place of the ceremony. 
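Because radio waves travel at the speed of light, the distance figures quoted above fix the message's one-way transit time directly. A back-of-envelope sketch in Python, using nothing beyond the article's own numbers:

# Light-travel time of the 1974 Arecibo message to M13.
# A light-year is the distance light covers in one year, so a distance
# in light-years equals the one-way transit time in years.
for distance_ly in (22_200, 25_000):  # distance range quoted above
    print(f"{distance_ly:,} ly -> arrival around the year {1974 + distance_ly:,}")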
The cluster will move through space during the transit time; opinions differ as to whether or not the cluster will be in a position to receive the message by the time it arrives. Literary references The science fiction novella "Sucker Bait" by Isaac Asimov and the novel Question and Answer by Poul Anderson take place on Troas, a world within M13. In the German science fiction series Perry Rhodan, M13 is the location of Arkon, the homeworld of the race of Arkonides. In author Dan Simmons' Hyperion Cantos, the Hercules cluster is where a copy of Earth was secretly recreated after the original was destroyed. In his novel The Sirens of Titan, Kurt Vonnegut writes "Every passing hour brings the Solar System forty-three thousand miles closer to Globular Cluster M13 in Hercules—and still there are some misfits who insist that there is no such thing as progress." Messier 13 is the home of the antagonistic aliens in the 1977 space opera The War in Space, who hail from the third planet of a system orbiting a star called Yomi. Deliberately engineering a star in Messier 13 to go nova was part of the Cybermen's complicated plot in the 1968 Doctor Who story The Wheel in Space. Gallery
Physical sciences
Notable star clusters
Astronomy
217855
https://en.wikipedia.org/wiki/Diving%20duck
Diving duck
The diving ducks, commonly called pochards or scaups, are a category of duck which feed by diving beneath the surface of the water. They are part of Anatidae, the diverse and very large family that includes ducks, geese, and swans. The diving ducks are placed in a distinct tribe in the subfamily Anatinae, the Aythyini. While morphologically close to the dabbling ducks, there are nonetheless some pronounced differences, such as in the structure of the trachea. mtDNA cytochrome b and NADH dehydrogenase subunit 2 sequence data indicate that the dabbling and diving ducks are fairly distant from each other, the outward similarities being due to convergent evolution. Alternatively, the diving ducks are placed as a subfamily Aythyinae in the family Anatidae, which would encompass all duck-like birds except the whistling-ducks. The seaducks commonly found in coastal areas, such as the long-tailed duck (formerly known in the U.S. as oldsquaw), scoters, goldeneyes, mergansers, bufflehead and eiders, are also sometimes colloquially referred to in North America as diving ducks because they also feed by diving; however, their subfamily (Merginae) is a very distinct one. Although the group is cosmopolitan, most members are native to the Northern Hemisphere, and it includes several of the most familiar Northern Hemisphere ducks. This group of ducks is so named because its members feed mainly by diving, although in fact the Netta species are reluctant to dive, and feed more like dabbling ducks. These are gregarious ducks, mainly found on fresh water or on estuaries, though the greater scaup becomes marine during the northern winter. They are strong fliers; their broad, blunt-tipped wings require faster wing-beats than those of many ducks and they take off with some difficulty. Northern species tend to be migratory; southern species do not migrate, though the hardhead travels long distances on an irregular basis in response to rainfall. Diving ducks do not walk as well on land as the dabbling ducks; their legs tend to be placed further back on their bodies to help propel them when underwater. Systematics Three genera are included in the Aythyini. The marbled duck, which makes up the monotypic genus Marmaronetta, however, seems very distinct and might have diverged prior to the split of dabbling and diving ducks, as indicated by morphological and molecular characteristics. The probably extinct pink-headed duck, previously treated separately in Rhodonessa, has been suggested to belong in Netta, but this approach has been questioned. DNA sequence analyses have found it to be the earliest diverging member of the pochard group. The molecular analysis also suggests that the white-winged duck should be placed into a monotypic genus Asarcornis, which is fairly close to Aythya and might belong in this subfamily.
Family Anatidae
Subfamily Anatinae
Tribe Aythyini
Genus Rhodonessa
Pink-headed duck (Rhodonessa caryophyllacea); probably extinct (1945?)
Genus Marmaronetta
Marbled duck (Marmaronetta angustirostris)
Genus Netta (provisionally including Rhodonessa)
Red-crested pochard (Netta rufina)
Southern pochard (Netta erythrophthalma)
Rosy-billed pochard (Netta peposaca)
Genus Aythya
Canvasback (Aythya valisineria)
Common pochard (Aythya ferina)
Redhead (Aythya americana)
Ring-necked duck (Aythya collaris)
Hardhead (Aythya australis)
Baer's pochard (Aythya baeri)
Ferruginous duck (Aythya nyroca)
Madagascar pochard (Aythya innotata) – feared to be extinct, rediscovered (2006)
Réunion pochard (Aythya cf. innotata) – extinct (c. 1690s)
New Zealand scaup (Aythya novaeseelandiae)
Tufted duck (Aythya fuligula)
Greater scaup (Aythya marila)
Lesser scaup (Aythya affinis)
Biology and health sciences
Anseriformes
Animals
217858
https://en.wikipedia.org/wiki/Tribe%20%28biology%29
Tribe (biology)
In biology, a tribe is a taxonomic rank above genus, but below family and subfamily. It is sometimes subdivided into subtribes. By convention, all taxa ranked above species are capitalized, including both tribe and subtribe. In zoology, the standard ending for the name of a zoological tribe is "-ini". Examples include the tribes Caprini (goat-antelopes), Hominini (hominins), Bombini (bumblebees), and Thunnini (tunas). The tribe Hominini is divided into subtribes by some scientists; subtribe Hominina then comprises "humans". The standard ending for the name of a zoological subtribe is "-ina". In botany, the standard ending for the name of a botanical tribe is "-eae". Examples include the tribes Acalypheae and Hyacintheae. The tribe Hyacintheae is divided into subtribes, including the subtribe Massoniinae. The standard ending for the name of a botanical subtribe is "-inae". In bacteriology, the form of tribe names is as in botany, e.g., Pseudomonadeae, based on the genus name Pseudomonas. Rank recognition An unfamiliar taxonomic rank cannot necessarily be identified as a tribe merely by the presence of one of the standard suffixes:
zoological -ini uniquely suffixes the animal tribe
zoological -ina uniquely suffixes the animal subtribe
zoological -inae uniquely suffixes the animal subfamily
botanical -eae also suffixes class -phyceae, suborder -ineae, family -aceae, and subfamily -oideae (these additional -eae ranks are present in bacteria, plants, algae, and fungi, but not animals)
Accordingly, working within animals alone, subfamily -inae, tribe -ini, and subtribe -ina are unique suffixes to their specific taxonomic ranks. At the other extreme, working within algae alone, -eae suffixes class -phyceae, suborder -ineae, family -aceae, subfamily -oideae, and tribe -eae. The longer suffixes themselves suffixed with -eae must first be eliminated before recognizing an unfamiliar -eae designation as belonging to rank tribe.
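The elimination logic just described is mechanical enough to express in code. A minimal sketch in Python (the function names are illustrative, not from any nomenclature library; only the suffix rules stated above are encoded):

def zoological_rank(name):
    """Guess the rank of an animal (zoological) taxon from its suffix."""
    if name.endswith("inae"):
        return "subfamily"
    if name.endswith("ini"):
        return "tribe"
    if name.endswith("ina"):
        return "subtribe"
    return "unknown"

def botanical_eae_is_tribe(name):
    """A botanical or algal -eae name denotes a tribe only after the longer
    -eae endings (class, suborder, family, subfamily) are eliminated."""
    longer_endings = ("phyceae", "ineae", "aceae", "oideae")
    return name.endswith("eae") and not name.endswith(longer_endings)

for taxon in ("Hominini", "Hominina", "Homininae"):
    print(taxon, "->", zoological_rank(taxon))
print(botanical_eae_is_tribe("Hyacintheae"))  # True: a tribe
print(botanical_eae_is_tribe("Rosaceae"))     # False: a family, not a tribe

The second function is exactly the "eliminate the longer suffixes first" step the paragraph above prescribes for -eae names.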
Biology and health sciences
Taxonomic rank
Biology
217873
https://en.wikipedia.org/wiki/Chandra%20X-ray%20Observatory
Chandra X-ray Observatory
The Chandra X-ray Observatory (CXO), previously known as the Advanced X-ray Astrophysics Facility (AXAF), is a Flagship-class space telescope launched aboard the Space Shuttle Columbia during STS-93 by NASA on July 23, 1999. Chandra is sensitive to X-ray sources 100 times fainter than any previous X-ray telescope, enabled by the high angular resolution of its mirrors. Since the Earth's atmosphere absorbs the vast majority of X-rays, they are not detectable from Earth-based telescopes; therefore space-based telescopes are required to make these observations. Chandra is an Earth satellite in a 64-hour orbit, and its mission is ongoing. Chandra is one of the Great Observatories, along with the Hubble Space Telescope, Compton Gamma Ray Observatory (1991–2000), and the Spitzer Space Telescope (2003–2020). The telescope is named after the Nobel Prize-winning Indian-American astrophysicist Subrahmanyan Chandrasekhar. Its mission is similar to that of ESA's XMM-Newton spacecraft, also launched in 1999, but the two telescopes have different design foci: Chandra has much higher angular resolution, while XMM-Newton has higher spectroscopic throughput. In response to a decrease in NASA funding by the US Congress in 2024, Chandra is threatened with an early cancellation despite having more than a decade of operation left. The cancellation has been referred to as a potential "extinction-level" event for X-ray astronomy in the US. A group of astronomers have put together a public outreach project to rally American citizens to persuade the US Congress to provide enough funding to avoid early termination of the observatory. History In 1976, the Chandra X-ray Observatory (called AXAF at the time) was proposed to NASA by Riccardo Giacconi and Harvey Tananbaum. Preliminary work began the following year at Marshall Space Flight Center (MSFC) and the Smithsonian Astrophysical Observatory (SAO), where the telescope is now operated for NASA at the Chandra X-ray Center in the Center for Astrophysics Harvard & Smithsonian. In the meantime, in 1978, NASA launched the first imaging X-ray telescope, Einstein (HEAO-2), into orbit. Work continued on the AXAF project throughout the 1980s and 1990s. In 1992, to reduce costs, the spacecraft was redesigned. Four of the twelve planned mirrors were eliminated, as were two of the six scientific instruments. AXAF's planned orbit was changed to an elliptical one, reaching one third of the way to the Moon's distance at its farthest point. This eliminated the possibility of improvement or repair by the Space Shuttle but put the observatory above the Earth's radiation belts for most of its orbit. AXAF was assembled and tested by TRW (now Northrop Grumman Aerospace Systems) in Redondo Beach, California. AXAF was renamed Chandra as part of a contest held by NASA in 1998, which drew more than 6,000 submissions worldwide. The contest winners, Jatila van der Veen and Tyrel Johnson (then a high school teacher and high school student, respectively), suggested the name in honor of Nobel Prize–winning Indian-American astrophysicist Subrahmanyan Chandrasekhar. He is known for his work in determining the maximum mass of white dwarf stars, leading to greater understanding of high energy astronomical phenomena such as neutron stars and black holes. Fittingly, the name Chandra means "moon" in Sanskrit. Originally scheduled to be launched in December 1998, the spacecraft was delayed several months, eventually being launched on July 23, 1999, at 04:31 UTC by Columbia during STS-93. 
Chandra was deployed by Cady Coleman from Columbia at 11:47 UTC. The Inertial Upper Stage's first stage motor ignited at 12:48 UTC, and after burning for 125 seconds and separating, the second stage ignited at 12:51 UTC and burned for 117 seconds. It was the heaviest payload ever launched by the shuttle, a consequence of the two-stage Inertial Upper Stage booster rocket system needed to transport the spacecraft to its high orbit. Chandra has been returning data since the month after it launched. It is operated by the SAO at the Chandra X-ray Center in Cambridge, Massachusetts, with assistance from MIT and Northrop Grumman Space Technology. The ACIS CCDs suffered particle damage during early radiation belt passages. To prevent further damage, the instrument is now removed from the telescope's focal plane during passages. Although Chandra was initially given an expected lifetime of 5 years, on September 4, 2001, NASA extended its lifetime to 10 years "based on the observatory's outstanding results." Physically, Chandra could last much longer. A 2004 study performed at the Chandra X-ray Center indicated that the observatory could last at least 15 years. It is active as of 2024 and has an upcoming schedule of observations published by the Chandra X-ray Center. In July 2008, the International X-ray Observatory, a joint project between ESA, NASA and JAXA, was proposed as the next major X-ray observatory but was later canceled. ESA later resurrected a downsized version of the project as the Advanced Telescope for High Energy Astrophysics (ATHENA), with a proposed launch in 2028. On October 10, 2018, Chandra entered safe mode operations due to a gyroscope glitch. NASA reported that all science instruments were safe. Within days, the 3-second error in data from one gyro was understood, and plans were made to return Chandra to full service. The gyroscope that experienced the glitch was placed in reserve and is otherwise healthy. In March 2024, Congress decided to reduce funding for NASA and its missions, which may lead to the premature end of this mission. In June 2024, Senators urged NASA to reconsider the cuts to Chandra, a request that was accepted. Example discoveries The data gathered by Chandra has greatly advanced the field of X-ray astronomy. Here are some examples of discoveries supported by observations from Chandra:
The first light image, of supernova remnant Cassiopeia A, gave astronomers their first glimpse of the compact object at the center of the remnant, probably a neutron star or black hole.
In the Crab Nebula, another supernova remnant, Chandra showed a never-before-seen ring around the central pulsar and jets that had only been partially seen by earlier telescopes.
The first X-ray emission was seen from the supermassive black hole Sagittarius A* at the center of the Milky Way.
Chandra confirmed that X-rays in O-type stars are generated through plasma shocks embedded in their wind.
Chandra found much more cool gas than expected spiraling into the center of the Andromeda Galaxy.
Pressure fronts were observed in detail for the first time in Abell 2142, where clusters of galaxies are merging.
The earliest images in X-rays of the shock wave of a supernova were taken of SN 1987A.
Chandra showed for the first time the shadow of a small galaxy as it is being cannibalized by a larger one, in an image of Perseus A.
A new type of black hole was discovered in galaxy M82: mid-mass objects purported to be the missing link between stellar-sized black holes and supermassive black holes. 
X-ray emission lines were associated for the first time with a gamma-ray burst, Beethoven Burst GRB 991216.
High school students, using Chandra data, discovered a neutron star in supernova remnant IC 443.
Observations by Chandra and BeppoSAX suggest that gamma-ray bursts occur in star-forming regions.
Chandra data suggested that RX J1856.5-3754 and 3C58, previously thought to be pulsars, might be even denser objects: quark stars. These results are still debated.
Sound waves from violent activity around a supermassive black hole were observed in the Perseus Cluster (2003).
TWA 5B, a brown dwarf, was seen orbiting a binary system of Sun-like stars.
Nearly all stars on the main sequence were found to be X-ray emitters.
The X-ray shadow of Titan was seen when it transited the Crab Nebula.
X-ray emissions were detected from materials falling from a protoplanetary disc into a star.
The Hubble constant was measured to be 76.9 km/s/Mpc using the Sunyaev-Zel'dovich effect.
In 2006, Chandra found strong evidence that dark matter exists by observing a supercluster collision.
In 2006, X-ray emitting loops, rings and filaments discovered around a supermassive black hole within Messier 87 implied the presence of pressure waves, shock waves and sound waves; the evolution of Messier 87 may have been dramatically affected.
Observations of the Bullet Cluster put limits on the cross-section of the self-interaction of dark matter.
"The Hand of God" photograph of PSR B1509-58 was taken.
Jupiter's X-rays were shown to come from its poles, not its auroral ring.
A large halo of hot gas was found surrounding the Milky Way.
The extremely dense and luminous dwarf galaxy M60-UCD1 was observed.
On January 5, 2015, NASA reported that CXO had observed an X-ray flare 400 times brighter than usual, a record-breaker, from Sagittarius A*, the supermassive black hole in the center of the Milky Way galaxy. The unusual event may have been caused by the breaking apart of an asteroid falling into the black hole or by the entanglement of magnetic field lines within gas flowing into Sagittarius A*, according to astronomers.
In September 2016, it was announced that Chandra had detected X-ray emissions from Pluto, the first detection of X-rays from a Kuiper belt object. Chandra had made the observations in 2014 and 2015, supporting the New Horizons spacecraft for its July 2015 encounter.
In September 2020, Chandra reportedly may have made an observation of an exoplanet in the Whirlpool Galaxy, which would be the first planet discovered beyond the Milky Way.
In April 2021, NASA announced findings from the observatory in a tweet saying "Uranus gives off X-rays, astronomers find". The discovery would have "intriguing implications for understanding Uranus" if it is confirmed that the X-rays originate from the planet and are not emitted by the Sun.
Technical description Unlike optical telescopes, which possess simple aluminized parabolic surfaces (mirrors), X-ray telescopes generally use a Wolter telescope consisting of nested cylindrical paraboloid and hyperboloid surfaces coated with iridium or gold. X-ray photons would be absorbed by normal mirror surfaces, so mirrors with a low grazing angle are necessary to reflect them. Chandra uses four pairs of nested mirrors, together with their support structure, called the High Resolution Mirror Assembly (HRMA); the mirror substrate is 2 cm-thick glass, with the reflecting surface a 33 nm iridium coating, and the diameters are 65 cm, 87 cm, 99 cm and 123 cm. 
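How shallow these grazing angles are can be estimated with small-angle geometry. In a Wolter type I optic, a ray entering at radius r and brought to a focus over focal length f is deflected by roughly 4θ across its two reflections, so θ ≈ r/(4f). A minimal sketch in Python, assuming Chandra's approximately 10 m focal length (a figure not stated above) and the shell diameters just quoted:

import math

f = 10.0  # assumed HRMA focal length, metres
for d_cm in (65, 87, 99, 123):  # mirror shell diameters quoted above
    r = d_cm / 100 / 2          # shell radius, metres
    theta = r / (4 * f)         # grazing angle per reflection, radians
    print(f"{d_cm} cm shell: grazing angle ~ {math.degrees(theta) * 60:.0f} arcmin")

Every shell comes out at well under one degree, which is why grazing-incidence, iridium-coated shells are used rather than normal-incidence mirrors.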
The thick substrate and particularly careful polishing allowed a very precise optical surface, which is responsible for Chandra's unmatched resolution: between 80% and 95% of the incoming X-ray energy is focused into a one-arcsecond circle. However, the thickness of the substrate limits the proportion of the aperture which is filled, leading to the low collecting area compared to XMM-Newton. Chandra's highly elliptical orbit allows it to observe continuously for up to 55 hours of its 65-hour orbital period. At its furthest orbital point from Earth, Chandra is one of the most distant Earth-orbiting satellites. This orbit takes it beyond the geostationary satellites and beyond the outer Van Allen belt. With an angular resolution of 0.5 arcsecond (2.4 μrad), Chandra possesses a resolution over 1000 times better than that of the first orbiting X-ray telescope. CXO uses mechanical gyroscopes, which are sensors that help determine what direction the telescope is pointed. Other navigation and orientation systems on board CXO include an aspect camera, Earth and Sun sensors, and reaction wheels. It also has two sets of thrusters, one for movement and another for offloading momentum. Instruments The Science Instrument Module (SIM) holds the two focal plane instruments, the Advanced CCD Imaging Spectrometer (ACIS) and the High Resolution Camera (HRC), moving whichever is called for into position during an observation. ACIS consists of 10 CCD chips and provides images as well as spectral information of the object observed. It operates in the photon energy range of 0.2–10 keV. The HRC has two micro-channel plate components and images over the range of 0.1–10 keV. It also has a time resolution of 16 microseconds. Both of these instruments can be used on their own or in conjunction with one of the observatory's two transmission gratings. The transmission gratings, which swing into the optical path behind the mirrors, provide Chandra with high resolution spectroscopy. The High Energy Transmission Grating Spectrometer (HETGS) works over 0.4–10 keV and has a spectral resolution of 60–1000. The Low Energy Transmission Grating Spectrometer (LETGS) has a range of 0.09–3 keV and a resolution of 40–2000. Summary:
High Resolution Camera (HRC)
Advanced CCD Imaging Spectrometer (ACIS)
High Energy Transmission Grating Spectrometer (HETGS)
Low Energy Transmission Grating Spectrometer (LETGS)
Gallery
Technology
Space-based observatories
null
217882
https://en.wikipedia.org/wiki/Messier%2082
Messier 82
Messier 82 (also known as NGC 3034, Cigar Galaxy or M82) is a starburst galaxy approximately 12 million light-years away in the constellation Ursa Major. It is the second-largest member of the M81 Group, with the D25 isophotal diameter of . It is about five times more luminous than the Milky Way and its central region is about one hundred times more luminous. The starburst activity is thought to have been triggered by interaction with neighboring galaxy M81. As one of the closest starburst galaxies to Earth, M82 is the prototypical example of this galaxy type. SN 2014J, a type Ia supernova, was discovered in the galaxy on 21 January 2014. In 2014, in studying M82, scientists discovered the brightest pulsar yet known, designated M82 X-2. In November 2023, a gamma-ray burst was observed in M82, which was determined to have come from a magnetar, the first such event detected outside the Milky Way (and only the fourth such event ever detected). Discovery M82, with M81, was discovered by Johann Elert Bode in 1774; he described it as a "nebulous patch", this one about degree away from the other, "very pale and of elongated shape". In 1779, Pierre Méchain independently rediscovered both objects and reported them to Charles Messier, who added them to his catalog. Structure M82 was long believed to be an irregular galaxy. In 2005, however, two symmetric spiral arms were discovered in near-infrared (NIR) images of M82. The arms were detected by subtracting an axisymmetric exponential disk from the NIR images. Even though the arms were detected in NIR images, they are bluer than the disk. The arms had been missed due to M82's high disk surface brightness, the nearly edge-on view of this galaxy (~80°), and obscuration by a complex network of dusty filaments in its optical images. These arms emanate from the ends of the NIR bar and can be followed for the length of three disc scale lengths. Assuming that the northern part of M82 is nearer to us, as most of the literature does, the observed sense of rotation implies trailing arms. Starburst region In 2005, the Hubble Space Telescope revealed 197 young massive clusters in the starburst core. The average mass of these clusters is around 200,000 solar masses, hence the starburst core is a very energetic and high-density environment. Throughout the galaxy's center, young stars are being born 10 times faster than they are inside the entire Milky Way Galaxy. In the core of M82, the active starburst region spans a diameter of 500 pc. Four high surface brightness regions or clumps (designated A, C, D, and E) are detectable in this region at visible wavelengths. These clumps correspond to known sources at X-ray, infrared, and radio frequencies. Consequently, they are thought to be the least obscured starburst clusters from our vantage point. M82's unique bipolar outflow (or 'superwind') appears to be concentrated on clumps A and C, and is fueled by energy released by supernovae within the clumps, which occur at a rate of about one every ten years. The Chandra X-ray Observatory detected fluctuating X-ray emissions about 600 light-years from the center of M82. Astronomers have postulated that this comes from the first known intermediate-mass black hole, of roughly 200 to 5000 solar masses. M82, like most galaxies, hosts a supermassive black hole at its center. This one has a mass of approximately 3 × 10^7 solar masses, as measured from stellar dynamics. 
Unknown object In April 2010, radio astronomers working at the Jodrell Bank Observatory of the University of Manchester in the UK reported an object in M82 that had started sending out radio waves, and whose emission did not look like anything seen anywhere in the universe before. There have been several theories about the nature of this object, but currently no theory entirely fits the observed data. It has been suggested that the object could be an unusual "microquasar": having very high radio luminosity yet low X-ray luminosity, and being fairly stable, it could be an analogue of the low X-ray luminosity galactic microquasar SS 433. However, all known microquasars produce large quantities of X-rays, whereas the object's X-ray flux is below the measurement threshold. The object is located several arcseconds from the center of M82, which makes it unlikely to be associated with a supermassive black hole. It has an apparent superluminal motion of four times the speed of light relative to the galaxy center. Apparent superluminal motion is consistent with relativistic jets in massive black holes and does not indicate that the source itself is moving above lightspeed. Starbursts M82 is being physically affected by its larger neighbor, the spiral M81. Tidal forces caused by gravity have deformed M82, a process that started about 100 million years ago. This interaction has caused star formation to increase tenfold compared to "normal" galaxies. M82 has undergone at least one tidal encounter with M81, resulting in a large amount of gas being funneled into the galaxy's core over the last 200 Myr. The most recent such encounter is thought to have happened around 200–500 million years ago and resulted in a concentrated starburst together with a corresponding marked peak in the cluster age distribution. This starburst ran for up to ~50 Myr at a rate of ~10 M⊙ per year. Two subsequent starbursts followed, the last (~4–6 Myr ago) of which may have formed the core clusters, both super star clusters (SSCs) and their lighter counterparts. Stars in M82's disk seem to have formed in a burst 500 million years ago, leaving the disk littered with hundreds of clusters with properties similar to those of globular clusters (but younger); that disk star formation stopped 100 million years ago, with none taking place in this galaxy outside the central starburst since, and only at low levels in its halo since 1 billion years ago. A suggestion to explain those features is that M82 was previously a low surface brightness galaxy in which star formation was triggered by interactions with its giant neighbor. Ignoring any difference in their respective distances from the Earth, the centers of M81 and M82 are visually separated by about 130,000 light-years. The actual separation is . Supernovae As a starburst galaxy, Messier 82 is prone to frequent supernovae, caused by the collapse of young, massive stars. The first (although false) supernova candidate reported was SN 1986D, initially believed to be a supernova inside the galaxy until it was found to be a variable short-wavelength infrared source instead. The first confirmed supernova recorded in the galaxy was SN 2004am, discovered in March 2004 from images taken in November 2003 by the Lick Observatory Supernova Search. It was later determined to be a Type II supernova. In 2008, a radio transient was detected in the galaxy, designated SN 2008iz and thought to be a possible radio-only supernova, being too obscured in visible light by dust and gas clouds to be detectable. 
A similar radio-only transient was reported in 2009, although it never received a formal designation and was similarly unconfirmed. Prior to accurate and thorough supernova surveys, many other supernovae likely occurred in previous decades. The European VLBI Network studied a number of potential supernova remnants in the galaxy in the 1980s and 90s. One supernova remnant displayed clear expansion between 1986 and 1997, suggesting that it originally went supernova in the early 1960s; two other remnants show possible expansion that could indicate an age almost as young, but this could not be confirmed at the time. 2014 supernova On 21 January 2014 at 19:20 UT, a new distinct star was observed in M82, at apparent magnitude +11.7, by astrophysics lecturer Steve Fossey and four of his students at the University of London Observatory. It had brightened to magnitude +10.9 two days later. Examination of earlier observations of M82 showed the supernova on the intervening day as well as from 15 through 20 January, brightening from magnitude +14.4 to +11.3; it could not be found, down to a limiting magnitude of +17, in images taken on 14 January. It was initially suggested that it could become as bright as magnitude +8.5, well within the visual range of small telescopes and large binoculars, but it peaked at a fainter +10.5 on the last day of the month. Preliminary analysis classified it as "a young, reddened type Ia supernova". The International Astronomical Union (IAU) has designated it SN 2014J. SN 1993J, in M82's larger companion galaxy M81, was also at a relatively close distance; SN 1987A in the Large Magellanic Cloud was much closer. SN 2014J is the closest type Ia supernova since SN 1972E. Gallery
Physical sciences
Notable galaxies
Astronomy
217947
https://en.wikipedia.org/wiki/Premenstrual%20syndrome
Premenstrual syndrome
Premenstrual syndrome (PMS) is a disruptive set of emotional and physical symptoms that regularly occur in the one to two weeks before the start of each menstrual period and resolve around the time menstrual bleeding begins. The symptoms vary, commonly including one or more physical, emotional, or behavioral complaints that resolve with menses. The range of symptoms is wide; the most common are breast tenderness, bloating, headache, mood swings, depression, anxiety, anger, and irritability. To be diagnosed as PMS, rather than a normal discomfort of the menstrual cycle, these symptoms must interfere with daily living, during two menstrual cycles of prospective recording. PMS-related symptoms are often present for about six days. An individual's pattern of symptoms may change over time. PMS does not produce symptoms during pregnancy or following menopause. Diagnosis requires a consistent pattern of emotional and physical symptoms occurring after ovulation and before menstruation to a degree that interferes with normal life. Emotional symptoms must not be present during the initial part of the menstrual cycle. A daily list of symptoms over a few months may help in diagnosis. Other disorders that cause similar symptoms need to be excluded before a diagnosis is made. The cause of PMS is unknown, but the underlying mechanism is believed to involve changes in hormone levels during the course of the whole menstrual cycle. Reducing salt, alcohol, caffeine, and stress, along with increasing exercise, is typically all that is recommended for the management of mild symptoms. Calcium and vitamin D supplementation may be useful in some. Anti-inflammatory drugs such as ibuprofen or naproxen may help with physical symptoms. In those with more significant symptoms, birth control pills or the diuretic spironolactone may be useful. Over 90% of women report having some premenstrual symptoms, such as bloating, headaches, and moodiness. Premenstrual symptoms generally do not cause substantial disruption; they qualify as PMS in approximately 20% of pre-menopausal women. Antidepressants of the selective serotonin reuptake inhibitor (SSRI) class may be used to treat the emotional symptoms of PMS. Premenstrual dysphoric disorder (PMDD) is a more severe condition with greater psychological symptoms. PMDD affects about 3% of women of child-bearing age. Signs and symptoms Any disruptive, cyclical symptom could be a symptom of PMS, and some sources have suggested that the number of claimed symptoms could exceed even 200. However, some symptoms are relatively common in PMS. Common emotional and non-specific symptoms include stress, anxiety, difficulty with sleep, headache, feeling tired, mood swings, increased emotional sensitivity, and changes in interest in sex. Problems with concentration and memory may occur. There may also be depression or anxiety. Common physical symptoms include bloating, bilateral breast tenderness, and headache. The exact symptoms and their intensity vary significantly from person to person, and even somewhat from cycle to cycle and over time. Most people with premenstrual syndrome experience only a few of the possible symptoms, in a relatively predictable pattern. Additionally, which symptoms are accepted as evidence of PMS varies by culture. For example, women in China report feeling cold but do not report negative affect as part of PMS, while women in the US report negative affect but not feeling cold as part of PMS. 
The exclusion of certain symptoms associated with the menstrual cycle can pose a challenge for researchers. For example, period pain, which is common, is excluded because it does not usually appear until menstruation, though some experience period pain beforehand. However, any kind of pain can contribute to stress, difficulty with sleep, fatigue, irritability, and other symptoms that do count towards a PMS diagnosis. Causes While PMS is linked to the luteal phase, the causes of PMS are not clear, but several factors may be involved. Changes in hormones during the menstrual cycle seem to be an important factor, with changing hormone levels affecting some more than others. PMS occurs more often in those who are between their late 20s and early 40s, have at least one child, have a family history of depression, and have a past medical history of either postpartum depression or a mood disorder. Diagnosis No laboratory tests or unique physical findings exist to verify a PMS diagnosis. However, three key features are noted:
The chief complaint is one or more of the emotional symptoms associated with PMS. Irritability, tension, or unhappiness are typical emotional symptoms.
Symptoms appear predictably during the luteal (premenstrual) phase, reduce or disappear predictably shortly before or during menstruation, and remain absent during the follicular (pre-ovulatory) phase.
The symptoms must be severe enough to cause distress or interfere with everyday life. Mild or occasional symptoms, which are extremely common, do not necessarily qualify as PMS.
The National Institute of Mental Health research definition compares the intensity of symptoms from cycle days 5 to 10 to the six-day interval before the onset of the menstrual period. To qualify as PMS, symptom intensity must increase at least 30% in the six days before menstruation, and this pattern must be documented for at least two consecutive cycles (a sketch of the arithmetic follows below). In 2016, the Royal College of Obstetricians and Gynaecologists argued that the definition of PMS should be changed to no longer require the presence of a psychological symptom. To document a pattern, potentially affected individuals may keep a prospective record of their symptoms on a calendar for at least two menstrual cycles. This will help to establish whether the symptoms are limited to the premenstrual time, predictably recurring, and disruptive to normal functioning. A number of standardized instruments have been developed to describe PMS, including the Calendar of Premenstrual syndrome Experiences (COPE), the Prospective Record of the Impact and Severity of Menstruation (PRISM), and the Visual Analogue Scales (VAS). Additionally, other conditions that may better explain symptoms must be excluded, as a number of pre-existing medical conditions may be made worse at menstruation. This is known as menstrual exacerbation or premenstrual magnification. These conditions may lead individuals who do not have PMS to incorrectly believe they have PMS when they have another underlying disorder, such as anemia, hypothyroidism, eating disorders, or substance abuse. A key feature is that these conditions may also be present outside of the luteal phase. Conditions that can be magnified perimenstrually include depression or other affective disorders, migraine, seizure disorders, fatigue, irritable bowel syndrome, asthma, and allergies. 
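The NIMH comparison described above is simple arithmetic over a prospective symptom diary. A minimal sketch in Python (the function name and the plain numeric intensity scale are illustrative; instruments such as COPE and PRISM define their own scoring):

def meets_nimh_threshold(days_5_to_10, six_premenstrual_days):
    # Mean symptom intensity in the six days before menstruation must be
    # at least 30% higher than the cycle-day-5-to-10 baseline.
    baseline = sum(days_5_to_10) / len(days_5_to_10)
    premenstrual = sum(six_premenstrual_days) / len(six_premenstrual_days)
    return premenstrual >= 1.3 * baseline

# One hypothetical cycle of daily diary scores (higher = more intense):
print(meets_nimh_threshold([2, 2, 3, 2, 2, 2], [3, 4, 4, 5, 4, 4]))  # True

As stated above, the same pattern must also hold in at least two consecutive cycles before it counts towards a diagnosis.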
Further, problems with other aspects of the female reproductive system must be excluded, including dysmenorrhea (period pain during menstruation, rather than before it), endometriosis, perimenopause, and adverse effects produced by oral contraceptive pills. Severe symptoms may qualify as PMDD. Management Many treatments have been tried for PMS. Typical recommendations for those with mild symptoms include: reducing salt and caffeine intake, not drinking alcohol, reducing stress (e.g., by scheduling fewer activities during the week before menstruation), learning what to expect with PMS, increasing exercise, and improving sleep. When self-care is not adequate, medical management may be appropriate. Management of physical symptoms Anti-inflammatory drugs such as naproxen may help with some physical symptoms, such as pain. Spironolactone is effective as a diuretic when water retention cannot be addressed through self-care alone; however, thiazide diuretics are ineffective. Hormonal medications In those with more significant symptoms, birth control pills may be useful. Hormonal contraception is commonly used; common forms include the combined oral contraceptive pill and the contraceptive patch. This class of medication may cause PMS-related symptoms in some and may reduce physical symptoms in others. They do not relieve emotional symptoms. Gonadotropin-releasing hormone agonists can be useful in severe forms of PMS but have their own set of significant potential side effects, such as bone loss. Progesterone support was used for many years – in the 1950s, a deficiency of progesterone was believed to be the cause of PMS – but it does not provide any benefit. Management of emotional symptoms Antidepressants Antidepressants, particularly SSRIs and venlafaxine, are used as the first-line treatment of severe emotional symptoms of PMS, and also in treating PMDD. Those with PMS may be able to take medication only on the days when symptoms are expected to occur, because relief often appears within a few days, rather than the longer timespan expected for depression or other common psychiatric conditions. Additionally, the minimum dose is often lower than for treatment of depression. Although intermittent therapy might be effective and acceptable to some, it might be less effective than continuous regimens for others, especially if they are also experiencing symptoms unrelated to the menstrual cycle. Side effects such as nausea and weakness are, however, relatively common. Vitamins, minerals, and alternative medicine Calcium, magnesium, vitamin E, vitamin B6, chasteberry, and black cohosh may help some. St. John's wort is discouraged because it causes many drug–drug interactions. Although St John's wort may help some with PMS, it is ineffective for PMDD. Evening primrose oil does not help. Prognosis PMS is generally a stable diagnosis, with susceptible individuals experiencing the same symptoms at the same intensity near the end of each cycle for years. Treatment for specific symptoms is usually effective. Unsuccessful medical management of severe symptoms frequently indicates misdiagnosis. Perimenstrual breast pain is associated with fibrocystic breast changes. Even without treatment, symptoms tend to decrease in perimenopausal women, and induction of menopause through surgical removal of the ovaries is a treatment of last resort. However, those who experience PMS or PMDD are more likely to have significant symptoms associated with menopause, such as hot flashes. 
Epidemiology Over 90% of women report having some premenstrual symptoms, such as bloating, headaches, and moodiness. Mostly, the symptoms are mild. Globally, about 20% of women of reproductive age have PMS that disrupts their everyday lives. Additionally, about 30% of women have mild or moderate symptoms related to their menstrual cycles that do not disrupt their everyday lives. History PMS was originally seen as an imagined disease. Women who reported its symptoms were often told it was "all in their head". Women's reproductive organs were thought to control them, and women were warned not to divert needed energy away from the uterus and ovaries. This view of limited energy quickly ran up against the reality of 19th-century America, where young girls worked extremely long and hard hours in factories; newspapers of the period were peppered with remedies to help in the "tyrannous processes" of the menstrual cycle. In 1873, Edward Clarke published an influential book titled Sex in Education. Clarke came to the conclusion that female operatives suffered less than schoolgirls because they "work their brain less", which suggested to him that they had stronger bodies and a reproductive "apparatus more normally constructed". Feminists later opposed Clarke's argument that women should not leave the private sphere by showing that women could function in the world outside the home in spite of natural body functions. The first formal description of what is now called PMS as a medical problem, rather than a normal and natural variation, goes back to 1931, in a paper presented at the New York Academy of Medicine by Robert T. Frank titled "Hormonal Causes of Premenstrual Tension". He incorrectly attributed premenstrual symptoms to an excess of the newly discovered sex hormone estrogen. The specific name premenstrual syndrome first appeared in the medical literature in 1953. At that time, medical researchers incorrectly thought that PMS was caused by a deficiency in progesterone. Since at least the 1990s, when PMDD became accepted, the definitions of PMS have focused on psychological symptoms. Throughout the history of PMS, many of the symptoms associated with it have been stereotypical feminine behaviors, such as expressing emotions or "nagging". Since then, PMS has been a continuous presence in popular culture, occupying a place that is larger than the research attention accorded it as a medical diagnosis. Some have argued that women are partially responsible for the medicalization of PMS: they claim that women have helped legitimize this disorder and have thus contributed to the social construction of PMS as an illness. The public debate over PMS and PMDD may have been affected by organizations with a stake in the outcome, including feminists, the American Psychiatric Association, physicians, and scientists. Alternative views Some supporters of PMS as a social construct believe PMDD and PMS to be unrelated issues: according to them, PMDD is a product of brain chemistry, and PMS is a product of culture, i.e. a culture-bound syndrome. Women are socially conditioned to expect PMS, or to at least know of its existence, and they therefore report their symptoms accordingly. 
Becoming educated about PMS narrows their interpretation of their experiences: they learn that certain symptoms are accepted as part of PMS and that others are not, even though an accepted symptom might be unrelated to PMS for a given woman (who might have a different medical condition), and an excluded symptom might be part of her PMS but go unmentioned because she did not think it was relevant. Social psychologist Carol Tavris also says that PMS is blamed as an explanation for rage or sadness. The identification of PMS as a medical disorder has been criticized as inappropriate medicalization. These critics are concerned that society is pathologizing the menstrual cycle itself, even when the signs and symptoms are non-disruptive. The view of PMS as primarily a psychological condition, rather than as a biologically driven, medical condition dominated by physical symptoms, has also been criticized. This view makes it harder to address psychosocial factors, such as external stress and a lack of social support, that exacerbate premenstrual symptoms. Treating PMS as a psychological condition also makes it difficult to address menstrual exacerbation of other conditions, including catamenial epilepsy, menstrual migraine, and cyclical asthma. The limitation of PMS to premenstrual symptoms, rather than a diagnosis that covers all symptoms associated with the menstrual cycle, has also been criticized. Critics of this limitation think that excluding common physical symptoms that appear during the menstrual phase, such as period pain, fatigue, and back pain, is an arbitrary distinction that tends to reinforce the view of PMS as primarily an emotional problem rather than a biological one. They propose a focus on perimenstrual symptoms instead of strictly premenstrual ones. Research directions Open research questions related to treatment include how to predict who will respond to SSRIs, which non-drug treatments are effective, and how to manage people who have PMS in addition to other medical conditions. Researchers are also working towards a single, uniform set of diagnostic criteria and to identify any objective characteristics that could be useful for diagnosis, such as any possible genetic predisposition.
Biology and health sciences
Human reproduction
Biology
218247
https://en.wikipedia.org/wiki/Advanced%20Audio%20Coding
Advanced Audio Coding
Advanced Audio Coding (AAC) is an audio coding standard for lossy digital audio compression. It was designed to be the successor of the MP3 format and generally achieves higher sound quality than MP3 at the same bit rate. AAC has been standardized by ISO and IEC as part of the MPEG-2 and MPEG-4 specifications. HE-AAC ("AAC+"), an extension of AAC, is part of MPEG-4 Audio and is adopted into the digital radio standards DAB+ and Digital Radio Mondiale, and the mobile television standards DVB-H and ATSC-M/H. AAC supports the inclusion of 48 full-bandwidth (up to 96 kHz) audio channels in one stream plus 16 low frequency effects (LFE, limited to 120 Hz) channels, up to 16 "coupling" or dialog channels, and up to 16 data streams. The quality for stereo is satisfactory for modest requirements at 96 kbit/s in joint stereo mode; however, hi-fi transparency demands data rates of at least 128 kbit/s (VBR). Tests of MPEG-4 audio have shown that AAC meets the requirements referred to as "transparent" by the ITU at 128 kbit/s for stereo and 384 kbit/s for 5.1 audio. AAC uses only a modified discrete cosine transform (MDCT) algorithm, giving it higher compression efficiency than MP3, which uses a hybrid coding algorithm that is part MDCT and part FFT. AAC is the default or standard audio format for iPhone, iPod, iPad, Nintendo DSi, Nintendo 3DS, Apple Music, iTunes, DivX Plus Web Player, PlayStation 4 and various Nokia Series 40 phones. It is supported on a wide range of devices and software such as the PlayStation Vita, Wii, digital audio players like the Sony Walkman or SanDisk Clip, Android and BlackBerry devices, and various in-dash car audio systems, and it is also one of the audio formats used on the Spotify web player. History Background The discrete cosine transform (DCT), a type of transform coding for lossy compression, was proposed by Nasir Ahmed in 1972, and developed by Ahmed with T. Natarajan and K. R. Rao in 1973; they published their results in 1974. This led to the development of the modified discrete cosine transform (MDCT), proposed by J. P. Princen, A. W. Johnson and A. B. Bradley in 1987, following earlier work by Princen and Bradley in 1986. The MP3 audio coding standard, introduced in 1992, used a hybrid coding algorithm that is part MDCT and part FFT. AAC uses a purely MDCT algorithm, giving it higher compression efficiency than MP3. Development further advanced when Lars Liljeryd introduced spectral band replication (SBR), a method that radically shrank the amount of information needed to store the digitized form of a song or speech. AAC was developed with the cooperation and contributions of companies including Bell Labs, Fraunhofer IIS, Dolby Laboratories, LG Electronics, NEC, Panasonic, Sony Corporation, ETRI, JVC Kenwood, Philips, Microsoft, and NTT. It was officially declared an international standard by the Moving Picture Experts Group in April 1997. It is specified both as Part 7 of the MPEG-2 standard and as Subpart 4 in Part 3 of the MPEG-4 standard. Standardization In 1997, AAC was first introduced as MPEG-2 Part 7, formally known as ISO/IEC 13818-7:1997. This part of MPEG-2 was a new part, since MPEG-2 already included MPEG-2 Part 3, formally known as ISO/IEC 13818-3: MPEG-2 BC (Backwards Compatible). Therefore, MPEG-2 Part 7 is also known as MPEG-2 NBC (Non-Backward Compatible), because it is not compatible with the MPEG-1 audio formats (MP1, MP2 and MP3). MPEG-2 Part 7 defined three profiles: the Low-Complexity profile (AAC-LC / LC-AAC), the Main profile (AAC Main) and the Scalable Sampling Rate profile (AAC-SSR). 
The AAC-LC profile consists of a base format very much like AT&T's Perceptual Audio Coding (PAC) format, with the addition of temporal noise shaping (TNS), the Kaiser window (described below), a nonuniform quantizer, and a reworking of the bitstream format to handle up to 16 stereo channels, 16 mono channels, 16 low-frequency effect (LFE) channels and 16 commentary channels in one bitstream. The Main profile adds a set of recursive predictors that are calculated on each tap of the filterbank. The SSR profile uses a 4-band PQMF filterbank, with four shorter filterbanks following, in order to allow for scalable sampling rates. In 1999, MPEG-2 Part 7 was updated and included in the MPEG-4 family of standards and became known as MPEG-4 Part 3, MPEG-4 Audio or ISO/IEC 14496-3:1999. This update included several improvements. One of these improvements was the addition of Audio Object Types, which are used to allow interoperability with a diverse range of other audio formats such as TwinVQ, CELP, HVXC, speech synthesis and MPEG-4 Structured Audio. Another notable addition in this version of the AAC standard is Perceptual Noise Substitution (PNS). In that regard, the AAC profiles (AAC-LC, AAC Main and AAC-SSR profiles) are combined with perceptual noise substitution and are defined in the MPEG-4 audio standard as Audio Object Types. The MPEG-4 Audio Object Types are combined in four MPEG-4 Audio profiles: Main (which includes most of the MPEG-4 Audio Object Types), Scalable (AAC LC, AAC LTP, CELP, HVXC, TwinVQ, Wavetable Synthesis, TTSI), Speech (CELP, HVXC, TTSI) and Low Rate Synthesis (Wavetable Synthesis, TTSI). The reference software for MPEG-4 Part 3 is specified in MPEG-4 Part 5 and the conformance bit-streams are specified in MPEG-4 Part 4. MPEG-4 Audio remains backward-compatible with MPEG-2 Part 7. MPEG-4 Audio Version 2 (ISO/IEC 14496-3:1999/Amd 1:2000) defined new audio object types: the low delay AAC (AAC-LD) object type, the bit-sliced arithmetic coding (BSAC) object type, parametric audio coding using harmonic and individual lines plus noise, and error resilient (ER) versions of object types. It also defined four new audio profiles: High Quality Audio Profile, Low Delay Audio Profile, Natural Audio Profile and Mobile Audio Internetworking Profile. The HE-AAC Profile (AAC LC with SBR) and AAC Profile (AAC LC) were first standardized in ISO/IEC 14496-3:2001/Amd 1:2003. The HE-AAC v2 Profile (AAC LC with SBR and Parametric Stereo) was first specified in ISO/IEC 14496-3:2005/Amd 2:2006. The Parametric Stereo audio object type used in HE-AAC v2 was first defined in ISO/IEC 14496-3:2001/Amd 2:2004. The current version of the AAC standard is defined in ISO/IEC 14496-3:2009. AAC+ v2 is also standardized by ETSI (European Telecommunications Standards Institute) as TS 102005. The MPEG-4 Part 3 standard also contains other ways of compressing sound. These include lossless compression formats, synthetic audio and low bit-rate compression formats generally used for speech. AAC's improvements over MP3 Advanced Audio Coding is designed to be the successor of MPEG-1 Audio Layer 3, known as the MP3 format, which was specified by ISO/IEC in 11172-3 (MPEG-1 Audio) and 13818-3 (MPEG-2 Audio). Improvements include: more sample rates (from 8 to 96 kHz) than MP3 (16 to 48 kHz); up to 48 channels (MP3 supports up to two channels in MPEG-1 mode and up to 5.1 channels in MPEG-2 mode); arbitrary bit rates and variable frame length.
Standardized constant bit rate with bit reservoir; higher efficiency and a simpler filter bank. AAC uses a pure MDCT (modified discrete cosine transform), rather than MP3's hybrid coding (which was part MDCT and part FFT); higher coding efficiency for stationary signals (AAC uses a block size of 1024 or 960 samples, allowing more efficient coding than MP3's 576-sample blocks); higher coding accuracy for transient signals (AAC uses a block size of 128 or 120 samples, allowing more accurate coding than MP3's 192-sample blocks); possibility to use the Kaiser-Bessel derived window function to eliminate spectral leakage at the expense of widening the main lobe; much better handling of audio frequencies above 16 kHz; more flexible joint stereo (different methods can be used in different frequency ranges); additional modules (tools) added to increase compression efficiency: TNS, backwards prediction, perceptual noise substitution (PNS), etc. These modules can be combined to constitute different encoding profiles. Overall, the AAC format allows developers more flexibility to design codecs than MP3 does, and corrects many of the design choices made in the original MPEG-1 audio specification. This increased flexibility often leads to more concurrent encoding strategies and, as a result, to more efficient compression. This is especially true at very low bit rates, where the superior stereo coding, pure MDCT, and better transform window sizes leave MP3 unable to compete. While the MP3 format has near-universal hardware and software support, primarily because MP3 was the format of choice during the crucial first few years of widespread music file-sharing/distribution over the internet, AAC remains a strong contender due to consistent industry support. Functionality AAC is a wideband audio coding algorithm that exploits two primary coding strategies to dramatically reduce the amount of data needed to represent high-quality digital audio: Signal components that are perceptually irrelevant are discarded. Redundancies in the coded audio signal are eliminated. The actual encoding process consists of the following steps: The signal is converted from the time domain to the frequency domain using a forward modified discrete cosine transform (MDCT). This is done by using filter banks that take an appropriate number of time samples and convert them to frequency samples. The frequency-domain signal is quantized based on a psychoacoustic model and encoded. Internal error correction codes are added. The signal is stored or transmitted. In order to prevent corrupt samples, a modern implementation of the Luhn mod N algorithm is applied to each frame. The MPEG-4 audio standard does not define a single or small set of highly efficient compression schemes but rather a complex toolbox to perform a wide range of operations from low bit rate speech coding to high-quality audio coding and music synthesis. The MPEG-4 audio coding algorithm family spans the range from low bit rate speech encoding (down to 2 kbit/s) to high-quality audio coding (at 64 kbit/s per channel and higher). AAC offers sampling frequencies between 8 kHz and 96 kHz and any number of channels between 1 and 48. In contrast to MP3's hybrid filter bank, AAC uses the modified discrete cosine transform (MDCT) together with increased window lengths of 1024 or 960 points. AAC encoders can switch dynamically between a single MDCT block of length 1024 points or 8 blocks of 128 points (or between 960 points and 120 points, respectively).
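The two block lengths can be made concrete with a small sketch. The following Python/NumPy fragment is an illustration only, not codec-grade code: it evaluates the MDCT definition directly (an O(N²) computation standing in for the FFT-based fast transform real encoders use), omits the 50% inter-frame overlap of a real filter bank, and uses function names invented for this example. It shows a 2048-sample frame producing 1024 long-block coefficients, and the same samples cut into 256-sample pieces producing eight sets of 128 short-block coefficients.

```python
import numpy as np

def sine_window(n2):
    # Sine window over a 2N-sample frame; AAC may instead use the
    # Kaiser-Bessel derived window mentioned above.
    return np.sin(np.pi / n2 * (np.arange(n2) + 0.5))

def mdct(frame):
    # Forward MDCT: 2N windowed time samples -> N spectral coefficients.
    n2 = len(frame)
    n = n2 // 2
    ns = np.arange(n2)
    ks = np.arange(n)
    basis = np.cos(np.pi / n * (ns[None, :] + 0.5 + n / 2) * (ks[:, None] + 0.5))
    return basis @ (sine_window(n2) * frame)

rng = np.random.default_rng(0)
pcm = rng.standard_normal(2048)          # one frame of placeholder PCM audio

long_coeffs = mdct(pcm)                  # "long block": 1024 coefficients
short_coeffs = [mdct(pcm[i:i + 256])     # 8 "short blocks": 128 coefficients each
                for i in range(0, 2048, 256)]
print(len(long_coeffs), len(short_coeffs), len(short_coeffs[0]))  # 1024 8 128
```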
If a signal change or a transient occurs, 8 shorter windows of 128/120 points each are chosen for their better temporal resolution. By default, the longer 1024-point/960-point window is otherwise used because the increased frequency resolution allows for a more sophisticated psychoacoustic model, resulting in improved coding efficiency. Modular encoding AAC takes a modular approach to encoding. Depending on the complexity of the bitstream to be encoded, the desired performance and the acceptable output, implementers may create profiles to define which of a specific set of tools they want to use for a particular application. The MPEG-2 Part 7 standard (Advanced Audio Coding) was first published in 1997 and offers three default profiles: Low Complexity (LC) – the simplest and most widely used and supported Main Profile (Main) – like the LC profile, with the addition of backwards prediction Scalable Sample Rate (SSR) a.k.a. Sample-Rate Scalable (SRS) The MPEG-4 Part 3 standard (MPEG-4 Audio) defined various new compression tools (a.k.a. Audio Object Types) and their usage in brand new profiles. AAC is not used in some of the MPEG-4 Audio profiles. The MPEG-2 Part 7 AAC LC profile, AAC Main profile and AAC SSR profile are combined with Perceptual Noise Substitution and defined in the MPEG-4 Audio standard as Audio Object Types (under the name AAC LC, AAC Main and AAC SSR). These are combined with other Object Types in MPEG-4 Audio profiles. Here is a list of some audio profiles defined in the MPEG-4 standard: Main Audio Profile – defined in 1999, uses most of the MPEG-4 Audio Object Types (AAC Main, AAC-LC, AAC-SSR, AAC-LTP, AAC Scalable, TwinVQ, CELP, HVXC, TTSI, Main synthesis) Scalable Audio Profile – defined in 1999, uses AAC-LC, AAC-LTP, AAC Scalable, TwinVQ, CELP, HVXC, TTSI Speech Audio Profile – defined in 1999, uses CELP, HVXC, TTSI Synthetic Audio Profile – defined in 1999, TTSI, Main synthesis High Quality Audio Profile – defined in 2000, uses AAC-LC, AAC-LTP, AAC Scalable, CELP, ER-AAC-LC, ER-AAC-LTP, ER-AAC Scalable, ER-CELP Low Delay Audio Profile – defined in 2000, uses CELP, HVXC, TTSI, ER-AAC-LD, ER-CELP, ER-HVXC Low Delay AAC v2 - defined in 2012, uses AAC-LD, AAC-ELD and AAC-ELDv2 Mobile Audio Internetworking Profile – defined in 2000, uses ER-AAC-LC, ER-AAC-Scalable, ER-TwinVQ, ER-BSAC, ER-AAC-LD AAC Profile – defined in 2003, uses AAC-LC High Efficiency AAC Profile – defined in 2003, uses AAC-LC, SBR High Efficiency AAC v2 Profile – defined in 2006, uses AAC-LC, SBR, PS Extended High Efficiency AAC xHE-AAC – defined in 2012, uses USAC One of many improvements in MPEG-4 Audio is an Object Type called Long Term Prediction (LTP), which is an improvement of the Main profile using a forward predictor with lower computational complexity. AAC error protection toolkit Applying error protection enables error correction up to a certain extent. Error correcting codes are usually applied equally to the whole payload. However, since different parts of an AAC payload show different sensitivity to transmission errors, this would not be a very efficient approach. The AAC payload can be subdivided into parts with different error sensitivities. Independent error correcting codes can be applied to any of these parts using the Error Protection (EP) tool defined in MPEG-4 Audio standard. This toolkit provides the error correcting capability to the most sensitive parts of the payload in order to keep the additional overhead low. 
The toolkit is backwards compatible with simpler, pre-existing AAC decoders. Much of the toolkit's error correction functionality is based on spreading information about the audio signal more evenly in the datastream. Error Resilient (ER) AAC Error Resilience (ER) techniques can be used to make the coding scheme itself more robust against errors. For AAC, three custom-tailored methods were developed and defined in MPEG-4 Audio: Huffman Codeword Reordering (HCR) to avoid error propagation within spectral data; Virtual Codebooks (VCB11) to detect serious errors within spectral data; Reversible Variable Length Code (RVLC) to reduce error propagation within scale factor data. AAC Low Delay The audio coding standards MPEG-4 Low Delay (AAC-LD), Enhanced Low Delay (AAC-ELD), and Enhanced Low Delay v2 (AAC-ELDv2), as defined in ISO/IEC 14496-3:2009 and ISO/IEC 14496-3:2009/Amd 3, are designed to combine the advantages of perceptual audio coding with the low delay necessary for two-way communication. They are closely derived from the MPEG-2 Advanced Audio Coding (AAC) format. AAC-ELD is recommended by GSMA as the super-wideband voice codec in the IMS Profile for High Definition Video Conference (HDVC) Service. Licensing and patents No licenses or payments are required for a user to stream or distribute audio in AAC format. This alone may make AAC a more attractive format for distributing audio than its predecessor MP3, particularly for streaming audio such as Internet radio, depending on the use case. However, a patent license is required for all manufacturers or developers of AAC "end-user" codecs. The terms (as disclosed to the SEC) use per-unit pricing. In the case of software, each computer running the software is considered a separate "unit". It used to be common for free and open source software implementations such as FFmpeg and FAAC to distribute only in source code form so as to not "otherwise supply" an AAC codec. However, FFmpeg has since become more lenient on patent matters: the "gyan.dev" builds recommended by the official site now contain its AAC codec, with the FFmpeg legal page stating that patent law conformance is the user's responsibility. (See below under Products that support AAC, Software.) The Fedora Project, a community backed by Red Hat, imported the "Third-Party Modified Version of the Fraunhofer FDK AAC Codec Library for Android" to its repositories on September 25, 2018, and enabled FFmpeg's native AAC encoder and decoder for its ffmpeg-free package on January 31, 2023. The AAC patent holders include Bell Labs, Dolby, ETRI, Fraunhofer, JVC Kenwood, LG Electronics, Microsoft, NEC, NTT (and its subsidiary NTT Docomo), Panasonic, Philips, and Sony Corporation. Based on the list of patents from the SEC terms, the last baseline AAC patent expires in 2028, and the last patent for all AAC extensions mentioned expires in 2031. Extensions and improvements Some extensions have been added to the first AAC standard (defined in MPEG-2 Part 7 in 1997): Perceptual Noise Substitution (PNS), added in MPEG-4 in 1999; it allows the coding of noise as pseudorandom data. Long Term Predictor (LTP), added in MPEG-4 in 1999; it is a forward predictor with lower computational complexity. Error Resilience (ER), added in MPEG-4 Audio version 2 in 2000, used for transport over error-prone channels. AAC-LD (Low Delay), defined in 2000, used for real-time conversation applications. High Efficiency AAC (HE-AAC), a.k.a.
aacPlus v1 or AAC+, the combination of SBR (Spectral Band Replication) and AAC LC, used for low bitrates; defined in 2003. HE-AAC v2, a.k.a. aacPlus v2, eAAC+ or Enhanced aacPlus, the combination of Parametric Stereo (PS) and HE-AAC, used for even lower bitrates; defined in 2004 and 2006. xHE-AAC, which extends the operating range of the codec from 12 to 300 kbit/s. MPEG-4 Scalable To Lossless (SLS), not yet published, which can supplement an AAC stream to provide a lossless decoding option, such as in Fraunhofer IIS's "HD-AAC" product. Container formats In addition to MP4, 3GP and other container formats based on the ISO base media file format for file storage, AAC audio data was first packaged in a file for the MPEG-2 standard using the Audio Data Interchange Format (ADIF), consisting of a single header followed by the raw AAC audio data blocks. However, if the data is to be streamed within an MPEG-2 transport stream, a self-synchronizing format called an Audio Data Transport Stream (ADTS) is used, consisting of a series of frames, each frame having a header followed by the AAC audio data. These file and streaming formats are defined in MPEG-2 Part 7, but are only considered informative by MPEG-4, so an MPEG-4 decoder does not need to support either format. These containers, as well as a raw AAC stream, may bear the .aac file extension. MPEG-4 Part 3 also defines its own self-synchronizing format called a Low Overhead Audio Stream (LOAS) that encapsulates not only AAC, but any MPEG-4 audio compression scheme such as TwinVQ and ALS. This format is what was defined for use in DVB transport streams when encoders use either the SBR or parametric stereo AAC extensions. However, it is restricted to only a single non-multiplexed AAC stream. This format is also referred to as a Low Overhead Audio Transport Multiplex (LATM), which is an interleaved, multiple-stream version of a LOAS. Products that support AAC HDTV Standards Japanese ISDB-T In December 2003, Japan started broadcasting the terrestrial DTV ISDB-T standard, which implements MPEG-2 video and MPEG-2 AAC audio. In April 2006 Japan started broadcasting the ISDB-T mobile sub-program, called 1seg, which was the first implementation of H.264/AVC video with HE-AAC audio in a terrestrial HDTV broadcasting service anywhere in the world. International ISDB-Tb In December 2007, Brazil started broadcasting a terrestrial DTV standard called International ISDB-Tb that implements H.264/AVC video with AAC-LC audio on the main program (single or multi) and H.264/AVC video with HE-AACv2 audio in the 1seg mobile sub-program. DVB The ETSI, the standards governing body for the DVB suite, has supported AAC, HE-AAC and HE-AAC v2 audio coding in DVB applications since at least 2004. DVB broadcasts which use H.264 compression for video normally use HE-AAC for audio. Hardware iTunes and iPod In April 2003, Apple brought mainstream attention to AAC by announcing that its iTunes and iPod products would support songs in MPEG-4 AAC format (via a firmware update for older iPods). Customers could download music in a closed-source digital rights management (DRM)-restricted form of 128 kbit/s AAC (see FairPlay) via the iTunes Store or create files without DRM from their own CDs using iTunes. In later years, Apple began offering music videos and movies, which also use AAC for audio encoding. On May 29, 2007, Apple began selling songs and music videos from participating record labels at a higher bitrate (256 kbit/s cVBR) and free of DRM, a format dubbed "iTunes Plus".
These files mostly adhere to the AAC standard and are playable on many non-Apple products, but they include custom iTunes information such as album artwork and a purchase receipt, so as to identify the customer in case the file is leaked onto peer-to-peer networks. It is possible, however, to remove these custom tags to restore interoperability with players that conform strictly to the AAC specification. As of January 6, 2009, nearly all music on the USA-regioned iTunes Store became DRM-free, with the remainder becoming DRM-free by the end of March 2009. iTunes offers a "Variable Bit Rate" encoding option which encodes AAC tracks in the Constrained Variable Bitrate scheme (a less strict variant of ABR encoding); however, the underlying QuickTime API does offer a true VBR encoding profile. As of September 2009, Apple has added support for HE-AAC (which is fully part of the MP4 standard) only for radio streams, not file playback, and iTunes still lacks support for true VBR encoding. Other portable players Archos Cowon (unofficially supported on some models) Creative Zen Portable Fiio (all current models) Nintendo 3DS Nintendo DSi Philips GoGear Muse PlayStation Portable (PSP) with firmware 2.0 or greater Samsung YEPP SanDisk Sansa (some models) Walkman Zune Any portable player that fully supports the Rockbox third-party firmware Mobile phones For a number of years, many mobile phones from manufacturers such as Nokia, Motorola, Samsung, Sony Ericsson, BenQ-Siemens and Philips have supported AAC playback. The first such phone was the Nokia 5510, released in 2002, which also plays MP3s. However, this phone was a commercial failure, and such phones with integrated music players did not gain mainstream popularity until 2005, when the trend of having AAC as well as MP3 support continued. Most new smartphones and music-themed phones support playback of these formats. Sony Ericsson phones support various AAC formats in the MP4 container. AAC-LC is supported in all phones beginning with the K700; phones beginning with the W550 support HE-AAC. The latest devices, such as the P990, K610, W890i and later, support HE-AAC v2. Nokia XpressMusic and other new-generation Nokia multimedia phones like the N- and E-Series also support the AAC format in LC, HE, M4A and HEv2 profiles. These also support playing LTP-encoded AAC audio. BlackBerry phones running the BlackBerry 10 operating system support AAC playback natively. Select previous-generation BlackBerry OS devices also support AAC. Samsung's bada OS also supports AAC playback. Apple's iPhone supports AAC and FairPlay-protected AAC files, which were the default encoding format in the iTunes Store until the removal of DRM restrictions in March 2009. Android 2.3 and later supports AAC-LC, HE-AAC and HE-AAC v2 in MP4 or M4A containers along with several other audio formats. Android 3.1 and later supports raw ADTS files. Android 4.1 can encode AAC. WebOS by HP/Palm supports AAC, AAC+, eAAC+, and .m4a containers in its native music player as well as several third-party players. However, it does not support Apple's FairPlay DRM files downloaded from iTunes. Windows Phone's Silverlight runtime supports AAC-LC, HE-AAC and HE-AAC v2 decoding. Other devices Apple's iPad: supports AAC and FairPlay-protected AAC files used as the default encoding format in the iTunes Store Palm OS PDAs: many Palm OS based PDAs and smartphones can play AAC and HE-AAC with the third-party software Pocket Tunes. Version 4.0, released in December 2006, added support for native AAC and HE-AAC files.
The AAC codec for TCPMP, a popular video player, was withdrawn after version 0.66 due to patent issues, but can still be downloaded from sites other than corecodec.org. CorePlayer, the commercial follow-on to TCPMP, includes AAC support. Other Palm OS programs supporting AAC include Kinoma Player and AeroPlayer. Windows Mobile: supports AAC either by the native Windows Media Player or by third-party products (TCPMP, CorePlayer) Epson: supports AAC playback in the P-2000 and P-4000 Multimedia/Photo Storage Viewers Sony Reader: plays M4A files containing AAC, and displays metadata created by iTunes. Other Sony products, including the A and E series Network Walkmans, support AAC with firmware updates (released May 2006), while the S series supports it out of the box. Sonos Digital Media Player: supports playback of AAC files Barnes & Noble Nook Color: supports playback of AAC-encoded files Roku SoundBridge: a network audio player, supports playback of AAC-encoded files Squeezebox: network audio player (made by Slim Devices, a Logitech company) that supports playback of AAC files PlayStation 3: supports encoding and decoding of AAC files Xbox 360: supports streaming of AAC through the Zune software, and of supported iPods connected through the USB port Wii: supports AAC files through version 1.1 of the Photo Channel as of December 11, 2007. All AAC profiles and bitrates are supported as long as the file has the .m4a extension. The 1.1 update removed MP3 compatibility, but according to Nintendo, users who have installed it may freely downgrade to the old version if they wish. Livescribe Pulse and Echo Smartpens: record and store audio in AAC format. The audio files can be replayed using the pen's integrated speaker, attached headphones, or on a computer using the Livescribe Desktop software. The AAC files are stored in the user's "My Documents" folder of the Windows OS and can be distributed and played without specialized hardware or software from Livescribe. Google Chromecast: supports playback of LC-AAC and HE-AAC audio Software Almost all current computer media players include built-in decoders for AAC, or can utilize a library to decode it. On Microsoft Windows, DirectShow can be used this way with the corresponding filters to enable AAC playback in any DirectShow-based player. Mac OS X supports AAC via the QuickTime libraries. Adobe Flash Player, since version 9 update 3, can also play back AAC streams. Since Flash Player is also a browser plugin, it can play AAC files through a browser as well. The Rockbox open source firmware (available for multiple portable players) also offers support for AAC to varying degrees, depending on the model of player and the AAC profile. Optional iPod support (playback of unprotected AAC files) for the Xbox 360 is available as a free download from Xbox Live.
The following is a non-comprehensive list of other software player applications: 3ivx MPEG-4: a suite of DirectShow and QuickTime plugins which support AAC encoding or AAC/HE-AAC decoding in any DirectShow application CorePlayer: also supports LC and HE AAC ffdshow: a free open source DirectShow filter for Microsoft Windows that uses FAAD2 to support AAC decoding foobar2000: a freeware audio player for Windows that supports LC and HE AAC KMPlayer MediaMonkey AIMP Media Player Classic Home Cinema mp3tag MPlayer or xine: often used as AAC decoders on Linux or Macintosh MusicBee: an advanced music manager and player that also supports encoding and ripping through a plugin RealPlayer: includes RealNetworks' RealAudio 10 AAC encoder Songbird: supports AAC on Windows, Linux and Mac OS X, including the DRM rights management encoding used for purchased music from the iTunes Store, with a plug-in Sony SonicStage VLC media player: supports playback and encoding of MP4 and raw AAC files Winamp for Windows: includes an AAC encoder that supports LC and HE AAC Windows Media Player 12: released with Windows 7, supports playback of AAC files natively Rhapsody (another RealNetworks product): supports the RealAudio AAC codec, in addition to offering subscription tracks encoded with AAC XBMC: supports AAC (both LC and HE) XMMS: supports MP4 playback using a plugin provided by the faad2 library Some of these players (e.g., foobar2000, Winamp, and VLC) also support the decoding of ADTS (Audio Data Transport Stream) using the SHOUTcast protocol. Plug-ins for Winamp and foobar2000 enable the creation of such streams. Nero Digital Audio In May 2006, Nero AG released an AAC encoding tool free of charge, Nero Digital Audio (the AAC codec portion has become Nero AAC Codec), which is capable of encoding LC-AAC, HE-AAC and HE-AAC v2 streams. The tool is a command-line interface tool only. A separate utility is also included to decode to PCM WAV. Various tools, including the foobar2000 audio player and MediaCoder, can provide a GUI for this encoder. FAAC and FAAD2 FAAC and FAAD2 stand for Freeware Advanced Audio Coder and Decoder 2, respectively. FAAC supports the audio object types LC, Main and LTP. FAAD2 supports the audio object types LC, Main, LTP, SBR and PS. Although FAAD2 is free software, FAAC is not. Fraunhofer FDK AAC A Fraunhofer-authored open-source encoder/decoder included in Android has been ported to other platforms. FFmpeg's native AAC encoder does not support HE-AAC and HE-AACv2; in addition, FFmpeg's GPL 2.0+ license is not compatible with the FDK AAC license, so ffmpeg builds with libfdk-aac enabled are not redistributable. The QAAC encoder, which uses Apple's Core Audio, is still considered higher quality than FDK. FFmpeg and Libav The native AAC encoder created in FFmpeg's libavcodec, and forked with Libav, was long considered experimental and poor. A significant amount of work was done for the 3.0 release of FFmpeg (February 2016) to make its version usable and competitive with the rest of the AAC encoders. Libav has not merged this work and continues to use the older version of the AAC encoder. These encoders are LGPL-licensed open source and can be built for any platform that the FFmpeg or Libav frameworks can be built for. Both FFmpeg and Libav can use the Fraunhofer FDK AAC library via libfdk-aac, and while the FFmpeg native encoder has become stable and good enough for common use, FDK is still considered the highest-quality encoder available for use with FFmpeg. Libav also recommends using FDK AAC if it is available.
FFmpeg 4.4 and above can also use the Apple audiotoolbox encoder. Although the native AAC encoder only produces AAC-LC, ffmpeg's native decoder is able to deal with a wide range of input formats.
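For readers who want to try the encoders discussed here, a minimal usage sketch follows. It assumes an ffmpeg binary is available on the PATH; the file names are placeholders, the helper function is invented for this example, and libfdk_aac is only present in builds compiled with FDK support, per the licensing constraints above.

```python
import subprocess

def encode_aac(src, dst, bitrate="128k", encoder="aac"):
    """Transcode src to AAC in an M4A container with a local ffmpeg binary.

    encoder="aac"        -> FFmpeg's native encoder (produces AAC-LC, as noted above)
    encoder="libfdk_aac" -> Fraunhofer FDK, only if this ffmpeg build includes it
    """
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:a", encoder, "-b:a", bitrate, dst],
        check=True,
    )

encode_aac("input.wav", "output.m4a")   # placeholder file names
```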
Technology
File formats
null
218320
https://en.wikipedia.org/wiki/Ultraviolet%20catastrophe
Ultraviolet catastrophe
The ultraviolet catastrophe, also called the Rayleigh–Jeans catastrophe, was the prediction of late 19th century and early 20th century classical physics that an ideal black body at thermal equilibrium would emit an unbounded quantity of energy as wavelength decreased into the ultraviolet range. The term "ultraviolet catastrophe" was first used in 1911 by Paul Ehrenfest, but the concept originated with the 1900 statistical derivation of the Rayleigh–Jeans law. The phrase refers to the fact that the classically derived Rayleigh–Jeans law, which accurately predicted experimental results at large wavelengths, failed to do so for short wavelengths. As the theory diverged from empirical observations when these frequencies reached the ultraviolet region of the electromagnetic spectrum, there was a problem. This problem was later found to be due to a property of quanta as proposed by Max Planck: there could be no fraction of a discrete energy package already carrying minimal energy. Since the first use of this term, it has also been used for other predictions of a similar nature, as in quantum electrodynamics and such cases as ultraviolet divergence. Problem The Rayleigh–Jeans law is an approximation to the spectral radiance of electromagnetic radiation as a function of wavelength from a black body at a given temperature through classical arguments. For wavelength $\lambda$, it is $B_\lambda(T) = \frac{2ck_\mathrm{B}T}{\lambda^4}$, where $B_\lambda$ is the spectral radiance, the power emitted per unit emitting area, per steradian, per unit wavelength; $c$ is the speed of light; $k_\mathrm{B}$ is the Boltzmann constant; and $T$ is the temperature in kelvins. For frequency $\nu$, the expression is instead $B_\nu(T) = \frac{2\nu^2 k_\mathrm{B}T}{c^2}$. This formula is obtained from the equipartition theorem of classical statistical mechanics, which states that all harmonic oscillator modes (degrees of freedom) of a system at equilibrium have an average energy of $k_\mathrm{B}T$. The "ultraviolet catastrophe" is the expression of the fact that the formula misbehaves at higher frequencies; it predicts infinite energy emission because $B_\nu(T) \to \infty$ as $\nu \to \infty$. An example, from Mason's A History of the Sciences, illustrates multi-mode vibration via a piece of string. As a natural vibrator, the string will oscillate with specific modes (the standing waves of a string in harmonic resonance), dependent on the length of the string. In classical physics, a radiator of energy will act as a natural vibrator. Since each mode will have the same energy, most of the energy in a natural vibrator will be in the smaller wavelengths and higher frequencies, where most of the modes are. According to classical electromagnetism, the number of electromagnetic modes in a 3-dimensional cavity, per unit frequency, is proportional to the square of the frequency. This implies that the radiated power per unit frequency should be proportional to frequency squared. Thus, both the power at a given frequency and the total radiated power are unbounded as higher and higher frequencies are considered: this is unphysical, as the total radiated power of a cavity is not observed to be infinite, a point that was made independently by Einstein, Lord Rayleigh, and Sir James Jeans in 1905. Solution In 1900, Max Planck derived the correct form for the intensity spectral distribution function by making some assumptions that were strange for the time.
In particular, Planck assumed that electromagnetic radiation can be emitted or absorbed only in discrete packets, called quanta, of energy $E = h\nu = \frac{hc}{\lambda}$, where $h$ is the Planck constant, $\nu$ is the frequency of light, $c$ is the speed of light, and $\lambda$ is the wavelength of light. By applying this new energy to the partition function in statistical mechanics, Planck's assumptions led to the correct form of the spectral distribution functions: $B_\nu(\nu, T) = \frac{2h\nu^3}{c^2}\,\frac{1}{e^{h\nu/(k_\mathrm{B}T)} - 1}$ and $B_\lambda(\lambda, T) = \frac{2hc^2}{\lambda^5}\,\frac{1}{e^{hc/(\lambda k_\mathrm{B}T)} - 1}$, where $T$ is the absolute temperature of the body, $k_\mathrm{B}$ is the Boltzmann constant, and $e$ denotes the exponential function. In 1905, Albert Einstein solved the problem physically by postulating that Planck's quanta were real physical particles – what we now call photons – not just a mathematical fiction. He modified statistical mechanics in the style of Boltzmann to apply to an ensemble of photons. Einstein's photon had an energy proportional to its frequency and also explained an unpublished law of Stokes and the photoelectric effect. This published postulate was specifically cited by the Nobel Prize in Physics committee in their decision to award the prize for 1921 to Einstein.
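The failure of the classical formula is easy to see numerically. The short Python sketch below is an illustration only (the temperature and frequencies are arbitrary choices): it evaluates both spectral radiance expressions, showing that the two agree at low frequency, while the Rayleigh–Jeans value grows without bound toward the ultraviolet and the Planck value is exponentially suppressed.

```python
import numpy as np

h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
kB = 1.380649e-23     # Boltzmann constant, J/K

def rayleigh_jeans(nu, T):
    # Classical prediction: grows like nu**2 without bound.
    return 2 * nu**2 * kB * T / c**2

def planck(nu, T):
    # Planck's law: exponentially suppressed at high frequency.
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (kB * T))

T = 5000.0                              # an arbitrary temperature, K
for nu in (1e12, 1e14, 1e15, 1e16):     # Hz, heading into the ultraviolet
    print(f"nu = {nu:.0e} Hz   RJ = {rayleigh_jeans(nu, T):.3e}   "
          f"Planck = {planck(nu, T):.3e}")
```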
Physical sciences
Thermodynamics
Physics
218361
https://en.wikipedia.org/wiki/Northern%20pintail
Northern pintail
The pintail or northern pintail (Anas acuta) is a duck species with wide geographic distribution that breeds in the northern areas of Europe and across the Palearctic and North America. It is migratory and winters south of its breeding range to the equator. Unusually for a bird with such a large range, it has no geographical subspecies if the possibly conspecific duck Eaton's pintail is considered to be a separate species. This is a large duck, and the male's long central tail feathers give rise to the species' English and scientific names. Both sexes have blue-grey bills and grey legs and feet. The drake is more striking, having a thin white stripe running from the back of its chocolate-coloured head down its neck to its mostly white undercarriage. The drake also has attractive grey, brown, and black patterning on its back and sides. The hen's plumage is more subtle and subdued, with drab brown feathers similar to those of other female dabbling ducks. Hens make a coarse quack and the drakes a flute-like whistle. The northern pintail is a bird of open wetlands which nests on the ground, often some distance from water. It feeds by dabbling for plant food and adds small invertebrates to its diet during the nesting season. It is highly gregarious when not breeding, forming large mixed flocks with other species of duck. This duck's population is affected by predators, parasites and avian diseases. Human activities, such as agriculture, hunting and fishing, have also had a significant impact on numbers. Nevertheless, owing to the huge range and large population of this species, it is not threatened globally. Taxonomy This species was first described by Carl Linnaeus in his landmark 1758 10th edition of Systema Naturae as Anas acuta. The scientific name comes from two Latin words: anas, meaning "duck", and acuta, which comes from the verb acuere, "to sharpen"; the species term, like the English name, refers to the pointed tail of the male in breeding plumage. Within the large dabbling duck genus Anas, the northern pintail's closest relatives are other pintails, such as the yellow-billed pintail (A. georgica) and Eaton's pintail (A. eatoni). The pintails are sometimes separated in the genus Dafila (described by Stephens, 1824), an arrangement supported by morphological, molecular and behavioural data. The famous British ornithologist Sir Peter Scott gave this name to his daughter, the artist Dafila Scott. Eaton's pintail has two subspecies, A. e. eatoni (the Kerguelen pintail) of the Kerguelen Islands, and A. e. drygalskyi (the Crozet pintail) of the Crozet Islands, and was formerly considered conspecific with the Northern Hemisphere's northern pintail. Sexual dimorphism is much less marked in the southern pintails, with the male's breeding appearance being similar to the female plumage. Unusually for a species with such a large range, the northern pintail has no geographical subspecies if Eaton's pintail is treated as a separate species. A claimed extinct subspecies from Manra Island, Tristram's pintail, A. a. modesta, appears to be indistinguishable from the nominate form. The three syntype specimens of Dafila modesta Tristram (Proceedings of the Zoological Society of London, 1886, p. 79, pl. VII), the extinct subspecies, are held in the vertebrate zoology collections of National Museums Liverpool at World Museum, with accession numbers NML-VZ T11792 (male immature), NML-VZ T11795 (female adult) and NML-VZ T11797 (female adult). The specimens were collected by J. V.
Arundel in Sydney Island (Manra Island), Phoenix Islands in 1885 and came to the Liverpool national collection via Canon Henry Baker Tristram's collection which was purchased in 1896. Description The northern pintail is a fairly large duck with a wing chord of and wingspan of . The male is in length and weighs , and therefore is considerably larger than the female, which is long and weighs . The northern pintail broadly overlaps in size with the similarly widespread mallard, but is more slender, elongated and gracile, with a relatively longer neck and (in males) a longer tail. The unmistakable breeding plumaged male has a chocolate-brown head and white breast with a white stripe extending up the side of the neck. Its upperparts and sides are grey, but elongated grey feathers with black central stripes are draped across the back from the shoulder area. The vent area is yellow, contrasting with the black underside of the tail, which has the central feathers elongated to as much as . The bill is bluish and the legs are blue-grey. The adult female is mainly scalloped and mottled in light brown with a more uniformly grey-brown head, and its pointed tail is shorter than the male's; it is still easily identified by its shape, long neck, and long grey bill. In non-breeding (eclipse) plumage, the drake pintail looks similar to the female, but retains the male upperwing pattern and long grey shoulder feathers. Juvenile birds resemble the female, but are less neatly scalloped and have a duller brown speculum with a narrower trailing edge. The pintail walks well on land, and swims well. In water, the swimming posture is forward leaning, with the base of the neck almost flush with the water. It has a very fast flight, with its wings slightly swept-back, rather than straight out from the body like other ducks. In flight, the male shows a black speculum bordered white at the rear and pale rufous at the front, whereas the female's speculum is dark brown bordered with white, narrowly at the front edge but very prominently at the rear, being visible at a distance of . The male's call is a soft whistle, similar to that of the common teal, whereas the female has a mallard-like descending quack, and a low croak when flushed. Distribution and habitat This dabbling duck breeds across northern areas of the Palearctic south to about Poland and Mongolia, and in Canada, Alaska and the Midwestern United States. It mainly winters south of its breeding range, reaching almost to the equator in Panama, northern sub-Saharan Africa and tropical South Asia. Small numbers migrate to Pacific islands, particularly Hawaii, where a few hundred birds winter on the main islands in shallow wetlands and flooded agricultural habitats. Transoceanic journeys also occur: a bird that was caught and ringed in Labrador, Canada, was shot by a hunter in England nine days later, and Japanese-ringed birds have been recovered from six US states east to Utah and Mississippi. In parts of the range, such as Great Britain and the northwestern United States, the pintail may be present all year. The northern pintail's breeding habitat is open unwooded wetlands, such as wet grassland, lakesides or tundra. In winter, it will utilise a wider range of open habitats, such as sheltered estuaries, brackish marshes and coastal lagoons. It is highly gregarious outside the breeding season and forms very large mixed flocks with other ducks. Behaviour Breeding Both sexes reach sexual maturity at one year of age. 
The male mates with the female by swimming close to her with his head lowered and tail raised, continually whistling. If there is a group of males, they will chase the female in flight until only one drake is left. The female prepares for copulation, which takes place in the water, by lowering her body; the male then bobs his head up and down and mounts the female, taking the feathers on the back of her head in his mouth. After mating, he raises his head and back and whistles. Among the earliest species to breed in the spring, northern pintails typically form pairs during migration, or even while still on wintering grounds. Breeding takes place between April and June, with the nest being constructed on the ground and hidden amongst vegetation in a dry location, often some distance from water. It is a shallow scrape on the ground lined with plant material and down. The female lays seven to nine cream-coloured eggs at the rate of one per day; the eggs are in size and weigh , of which 7% is shell. If predators destroy the first clutch, the female can produce a replacement clutch as late as the end of July. The hen alone incubates the eggs for 22 to 24 days before they hatch. The precocial downy chicks are then led by the female to the nearest body of water, where they feed on dead insects on the water surface. The chicks fledge in 46 to 47 days after hatching, but stay with the female until she has completed moulting. Around three-quarters of chicks live long enough to fledge, but not more than half of those survive long enough to reproduce. The maximum recorded age is 27 years and 5 months for a Dutch bird. Feeding The pintail feeds by dabbling and upending in shallow water for plant food mainly in the evening or at night, and therefore spends much of the day resting. Its long neck enables it to take food items from the bottom of water bodies up to deep, which are beyond the reach of other dabbling ducks like the mallard. The winter diet is mainly plant material including seeds and rhizomes of aquatic plants, but the pintail sometimes feeds on roots, grain and other seeds in fields, though less frequently than other Anas ducks. During the nesting season, this bird eats mainly invertebrate animals, including aquatic insects, molluscs and crustaceans. Health Pintail nests and chicks are vulnerable to predation by mammals, such as foxes and badgers, and birds like gulls, crows and magpies. The adults can take flight to escape terrestrial predators, but nesting females in particular may be surprised by large carnivores such as bobcats. Large birds of prey, such as northern goshawks, will take ducks from the ground, and some falcons, including the gyrfalcon, have the speed and power to catch flying birds. It is susceptible to a range of parasites including Cryptosporidium, Giardia, tapeworms, blood parasites and external feather lice, and is also affected by other avian diseases. It is often the dominant species in major mortality events from avian botulism and avian cholera, and can also contract avian influenza, the H5N1 strain of which is highly pathogenic and occasionally infects humans. The northern pintail is a popular species for game shooting because of its speed, agility, and excellent eating qualities, and is hunted across its range. Although one of the world's most numerous ducks, the combination of hunting with other factors has led to population declines, and local restrictions on hunting have been introduced at times to help conserve numbers. 
This species' preferred habitat of shallow water is naturally susceptible to problems such as drought or the encroachment of vegetation, but this duck's habitat might be increasingly threatened by climate change. Populations are also affected by the conversion of wetlands and grassland to arable crops, depriving the duck of feeding and nesting areas. Spring planting means that many nests of this early-breeding duck are destroyed by farming activities, and a Canadian study showed that more than half of the surveyed nests were destroyed by agricultural work such as ploughing and harrowing. Hunting with lead shot, along with the use of lead sinkers in angling, has been identified as a major cause of lead poisoning in waterfowl, which often feed off the bottom of lakes and wetlands where the shot collects. A Spanish study showed that northern pintail and common pochard were the species with the highest levels of lead shot ingestion, higher than in northern countries of the western Palearctic flyway, where lead shot has been banned. In the United States, Canada, and many western European countries, all shot used for waterfowl must now be non-toxic, and therefore may not contain any lead. Status The northern pintail has a large range, estimated at , and a population estimated at 4.8–4.9 million individuals. The IUCN has categorised the northern pintail as not being threatened globally; however, it is endangered in Europe. In the Palaearctic, breeding populations are declining in much of the range, including its stronghold in Russia. In other regions, populations are stable or fluctuating. Pintails in North America have been badly affected by avian diseases, though it is unclear if this issue extends to other regions. Specifically, the breeding population fell from more than 10 million in 1957 to 3.5 million by 1964. Although the species has recovered from that low point, the breeding population in 1999 was 30% below the long-term average, despite years of major efforts focused on restoring the species. In 1997, an estimated 1.5 million water birds, the majority being northern pintails, died from avian botulism during two outbreaks in Canada and Utah. The northern pintail is one of the species to which the Agreement on the Conservation of African-Eurasian Migratory Waterbirds (AEWA) applies, but it has no special status under the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), which regulates international trade in specimens of wild animals and plants.
Biology and health sciences
Anseriformes
Animals
218628
https://en.wikipedia.org/wiki/Chemical%20potential
Chemical potential
In thermodynamics, the chemical potential of a species is the energy that can be absorbed or released due to a change of the particle number of the given species, e.g. in a chemical reaction or phase transition. The chemical potential of a species in a mixture is defined as the rate of change of free energy of a thermodynamic system with respect to the change in the number of atoms or molecules of the species that are added to the system. Thus, it is the partial derivative of the free energy with respect to the amount of the species, all other species' concentrations in the mixture remaining constant. When both temperature and pressure are held constant, and the number of particles is expressed in moles, the chemical potential is the partial molar Gibbs free energy. At chemical equilibrium or in phase equilibrium, the total sum of the product of chemical potentials and stoichiometric coefficients is zero, as the free energy is at a minimum. In a system in diffusion equilibrium, the chemical potential of any chemical species is uniformly the same everywhere throughout the system. In semiconductor physics, the chemical potential of a system of electrons at zero absolute temperature is known as the Fermi level. Overview Particles tend to move from higher chemical potential to lower chemical potential because this reduces the free energy. In this way, chemical potential is a generalization of "potentials" in physics such as gravitational potential. When a ball rolls down a hill, it is moving from a higher gravitational potential (higher internal energy, thus higher potential for work) to a lower gravitational potential (lower internal energy). In the same way, as molecules move, react, dissolve, melt, etc., they will always tend naturally to go from a higher chemical potential to a lower one, changing the particle number, which is the conjugate variable to chemical potential. A simple example is a system of dilute molecules diffusing in a homogeneous environment. In this system, the molecules tend to move from areas with high concentration to low concentration, until eventually, the concentration is the same everywhere. The microscopic explanation for this is based on kinetic theory and the random motion of molecules. However, it is simpler to describe the process in terms of chemical potentials: for a given temperature, a molecule has a higher chemical potential in a higher-concentration area and a lower chemical potential in a low-concentration area. Movement of molecules from higher chemical potential to lower chemical potential is accompanied by a release of free energy. Therefore, it is a spontaneous process. Another example, not based on concentration but on phase, is an ice cube on a plate above 0 °C. An H2O molecule that is in the solid phase (ice) has a higher chemical potential than a water molecule that is in the liquid phase (water) above 0 °C. When some of the ice melts, H2O molecules convert from solid to the warmer liquid where their chemical potential is lower, so the ice cube shrinks. At the temperature of the melting point, 0 °C, the chemical potentials in water and ice are the same; the ice cube neither grows nor shrinks, and the system is in equilibrium. A third example is illustrated by the chemical reaction of dissociation of a weak acid HA (such as acetic acid, A = CH3COO−): HA ⇌ H+ + A−. Vinegar contains acetic acid.
When acid molecules dissociate, the concentration of the undissociated acid molecules (HA) decreases and the concentrations of the product ions (H+ and A−) increase. Thus the chemical potential of HA decreases and the sum of the chemical potentials of H+ and A− increases. When the sums of the chemical potentials of reactants and products are equal, the system is at equilibrium and there is no tendency for the reaction to proceed in either the forward or backward direction. This explains why vinegar is acidic: acetic acid dissociates to some extent, releasing hydrogen ions into the solution. Chemical potentials are important in many aspects of multi-phase equilibrium chemistry, including melting, boiling, evaporation, solubility, osmosis, partition coefficient, liquid-liquid extraction and chromatography. In each case the chemical potential of a given species at equilibrium is the same in all phases of the system. In electrochemistry, ions do not always tend to go from higher to lower chemical potential, but they do always go from higher to lower electrochemical potential. The electrochemical potential completely characterizes all of the influences on an ion's motion, while the chemical potential includes everything except the electric force. (See below for more on this terminology.) Thermodynamic definition The chemical potential μi of species i (atomic, molecular or nuclear) is defined, as all intensive quantities are, by the phenomenological fundamental equation of thermodynamics. This holds for both reversible and irreversible infinitesimal processes: $dU = T\,dS - P\,dV + \sum_i \mu_i\,dN_i$, where dU is the infinitesimal change of internal energy U, dS the infinitesimal change of entropy S, dV the infinitesimal change of volume V for a thermodynamic system in thermal equilibrium, and dNi the infinitesimal change of particle number Ni of species i as particles are added or subtracted. T is absolute temperature, S is entropy, P is pressure, and V is volume. Other work terms, such as those involving electric, magnetic or gravitational fields, may be added. From the above equation, the chemical potential is given by $\mu_i = \left(\frac{\partial U}{\partial N_i}\right)_{S,V,N_{j\neq i}}$. This is because the internal energy U is a state function, so if its differential exists, then the differential is an exact differential: $dU = \sum_i \left(\frac{\partial U}{\partial x_i}\right) dx_i$ for independent variables x1, x2, ..., xN of U. This expression of the chemical potential as a partial derivative of U with respect to the corresponding species particle number is inconvenient for condensed-matter systems, such as chemical solutions, as it is hard to control the volume and entropy to be constant while particles are added. A more convenient expression may be obtained by making a Legendre transformation to another thermodynamic potential: the Gibbs free energy $G = U + PV - TS$. From the differential $dG = dU + P\,dV + V\,dP - T\,dS - S\,dT$ (the product rule is applied to $PV$ and $TS$) and using the above expression for $dU$, a differential relation for $dG$ is obtained: $dG = -S\,dT + V\,dP + \sum_i \mu_i\,dN_i$. As a consequence, another expression for $\mu_i$ results: $\mu_i = \left(\frac{\partial G}{\partial N_i}\right)_{T,P,N_{j\neq i}}$, and the change in Gibbs free energy of a system that is held at constant temperature and pressure is simply $dG = \sum_i \mu_i\,dN_i$. In thermodynamic equilibrium, when the system concerned is at constant temperature and pressure but can exchange particles with its external environment, the Gibbs free energy is at its minimum for the system, that is $dG = 0$. It follows that $\sum_i \mu_i\,dN_i = 0$. Use of this equality provides the means to establish the equilibrium constant for a chemical reaction.
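As a sketch of that standard step (stated here under the usual textbook conventions, not quoted from this article): for a reaction with stoichiometric coefficients $\nu_i$ and activities $a_i$, writing each chemical potential as $\mu_i = \mu_i^\circ + RT\ln a_i$ and imposing $\sum_i \nu_i \mu_i = 0$ gives

$$0 = \sum_i \nu_i \mu_i^\circ + RT \ln \prod_i a_i^{\nu_i} \quad\Longrightarrow\quad \Delta G^\circ = \sum_i \nu_i \mu_i^\circ = -RT\ln K, \qquad K = \prod_i a_i^{\nu_i},$$

which is the familiar relation between the standard Gibbs energy change and the equilibrium constant.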
By making further Legendre transformations from U to other thermodynamic potentials like the enthalpy $H = U + PV$ and Helmholtz free energy $F = U - TS$, expressions for the chemical potential may be obtained in terms of these: $\mu_i = \left(\frac{\partial H}{\partial N_i}\right)_{S,P,N_{j\neq i}} = \left(\frac{\partial F}{\partial N_i}\right)_{T,V,N_{j\neq i}}$. These different forms for the chemical potential are all equivalent, meaning that they have the same physical content, and may be useful in different physical situations. Applications The Gibbs–Duhem equation is useful because it relates individual chemical potentials. For example, in a binary mixture, at constant temperature and pressure, the chemical potentials of the two participants A and B are related by $d\mu_\mathrm{B} = -\frac{n_\mathrm{A}}{n_\mathrm{B}}\,d\mu_\mathrm{A}$, where $n_\mathrm{A}$ is the number of moles of A and $n_\mathrm{B}$ is the number of moles of B. Every instance of phase or chemical equilibrium is characterized by a constant. For instance, the melting of ice is characterized by a temperature, known as the melting point, at which solid and liquid phases are in equilibrium with each other. Chemical potentials can be used to explain the slopes of lines on a phase diagram by using the Clapeyron equation, which in turn can be derived from the Gibbs–Duhem equation. They are used to explain colligative properties such as melting-point depression by the application of pressure. Henry's law for the solute can be derived from Raoult's law for the solvent using chemical potentials. History Chemical potential was first described by the American engineer, chemist and mathematical physicist Josiah Willard Gibbs. He defined it as follows: Gibbs also later noted that, for the purposes of this definition, any chemical element or combination of elements in given proportions may be considered a substance, whether capable or not of existing by itself as a homogeneous body. This freedom to choose the boundary of the system allows the chemical potential to be applied to a huge range of systems. The term can be used in thermodynamics and physics for any system undergoing change. Chemical potential is also referred to as partial molar Gibbs energy (see also partial molar property). Chemical potential is measured in units of energy/particle or, equivalently, energy/mole. In his 1873 paper A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces, Gibbs introduced the preliminary outline of the principles of his new equation able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e. bodies being in composition part solid, part liquid, and part vapor, and by using a three-dimensional volume–entropy–internal energy graph, Gibbs was able to determine three states of equilibrium, i.e. "necessarily stable", "neutral", and "unstable", and whether or not changes will ensue. In 1876, Gibbs built on this framework by introducing the concept of chemical potential so as to take into account chemical reactions and states of bodies that are chemically different from each other. In his own words from the aforementioned paper, Gibbs states: In this description, as used by Gibbs, ε refers to the internal energy of the body, η refers to the entropy of the body, and ν is the volume of the body. Electrochemical, internal, external, and total chemical potential The abstract definition of chemical potential given above—total change in free energy per extra mole of substance—is more specifically called total chemical potential.
If two locations have different total chemical potentials for a species, some of the difference may be due to potentials associated with "external" force fields (electric potential energy, gravitational potential energy, etc.), while the rest would be due to "internal" factors (density, temperature, etc.). Therefore, the total chemical potential can be split into internal chemical potential and external chemical potential: $\mu_\text{tot} = \mu_\text{int} + \mu_\text{ext}$, where $\mu_\text{ext} = qV_\text{ele} + mgh + \cdots$, i.e., the external potential is the sum of electric potential, gravitational potential, etc. (where q and m are the charge and mass of the species, Vele and h are the electric potential and height of the container, respectively, and g is the acceleration due to gravity). The internal chemical potential includes everything else besides the external potentials, such as density, temperature, and enthalpy. This formalism can be understood by assuming that the total energy of a system, $U$, is the sum of two parts: an internal energy, $U_\text{int}$, and an external energy due to the interaction of each particle with an external field, $U_\text{ext} = N(qV_\text{ele} + mgh + \cdots)$. The definition of chemical potential applied to $U_\text{int} + U_\text{ext}$ yields the above expression for $\mu_\text{tot}$. The phrase "chemical potential" sometimes means "total chemical potential", but that is not universal. In some fields, in particular electrochemistry, semiconductor physics, and solid-state physics, the term "chemical potential" means internal chemical potential, while the term electrochemical potential is used to mean total chemical potential. Systems of particles Electrons in solids Electrons in solids have a chemical potential, defined the same way as the chemical potential of a chemical species: the change in free energy when electrons are added to or removed from the system. In the case of electrons, the chemical potential is usually expressed in energy per particle rather than energy per mole, and the energy per particle is conventionally given in units of electronvolt (eV). Chemical potential plays an especially important role in solid-state physics and is closely related to the concepts of work function, Fermi energy, and Fermi level. For example, n-type silicon has a higher internal chemical potential of electrons than p-type silicon. In a p–n junction diode at equilibrium the chemical potential (internal chemical potential) varies from the p-type to the n-type side, while the total chemical potential (electrochemical potential, or Fermi level) is constant throughout the diode. As described above, when describing chemical potential, one has to say "relative to what". In the case of electrons in semiconductors, internal chemical potential is often specified relative to some convenient point in the band structure, e.g., to the bottom of the conduction band. It may also be specified "relative to vacuum", to yield a quantity known as work function; however, work function varies from surface to surface even on a completely homogeneous material. Total chemical potential, on the other hand, is usually specified relative to electrical ground. In atomic physics, the chemical potential of the electrons in an atom is sometimes said to be the negative of the atom's electronegativity. Likewise, the process of chemical potential equalization is sometimes referred to as the process of electronegativity equalization. This connection comes from the Mulliken electronegativity scale.
Systems of particles

Electrons in solids

Electrons in solids have a chemical potential, defined the same way as the chemical potential of a chemical species: the change in free energy when electrons are added or removed from the system. In the case of electrons, the chemical potential is usually expressed in energy per particle rather than energy per mole, and the energy per particle is conventionally given in units of electronvolts (eV). Chemical potential plays an especially important role in solid-state physics and is closely related to the concepts of work function, Fermi energy, and Fermi level. For example, n-type silicon has a higher internal chemical potential of electrons than p-type silicon. In a p–n junction diode at equilibrium, the internal chemical potential varies from the p-type to the n-type side, while the total chemical potential (electrochemical potential, or Fermi level) is constant throughout the diode.

As described above, when describing chemical potential, one has to say "relative to what". In the case of electrons in semiconductors, internal chemical potential is often specified relative to some convenient point in the band structure, e.g., the bottom of the conduction band. It may also be specified "relative to vacuum", to yield a quantity known as the work function; however, the work function varies from surface to surface even on a completely homogeneous material. Total chemical potential, on the other hand, is usually specified relative to electrical ground.

In atomic physics, the chemical potential of the electrons in an atom is sometimes said to be the negative of the atom's electronegativity. Likewise, the process of chemical potential equalization is sometimes referred to as the process of electronegativity equalization. This connection comes from the Mulliken electronegativity scale. By inserting the energetic definitions of the ionization potential and electron affinity into the Mulliken electronegativity, it is seen that the Mulliken chemical potential is a finite difference approximation of the electronic energy with respect to the number of electrons, i.e.,

$$\mu_{\text{Mulliken}} = -\chi_{\text{Mulliken}} = -\frac{I_P + E_A}{2} = \left[\frac{\delta E[N]}{\delta N}\right]_{N=N_0}$$

Sub-nuclear particles

In recent years, thermal physics has applied the definition of chemical potential to systems in particle physics and its associated processes. For example, in a quark–gluon plasma or other QCD matter, at every point in space there is a chemical potential for photons, a chemical potential for electrons, a chemical potential for baryon number, electric charge, and so forth.

In the case of photons, photons are bosons and can very easily and rapidly appear or disappear. Therefore, at thermodynamic equilibrium, the chemical potential of photons is in most physical situations always and everywhere zero. The reason is that if the chemical potential somewhere were higher than zero, photons would spontaneously disappear from that area until the chemical potential went back to zero; likewise, if the chemical potential somewhere were less than zero, photons would spontaneously appear until the chemical potential went back to zero. Since this process occurs extremely rapidly (at least, it occurs rapidly in the presence of dense charged matter, or in the walls of the textbook example of a photon gas of blackbody radiation), it is safe to assume that the photon chemical potential here is never different from zero. A physical situation where the chemical potential for photons can differ from zero is that of material-filled optical microcavities, with spacings between cavity mirrors in the wavelength regime. In such two-dimensional cases, photon gases with tuneable chemical potential, reminiscent of gases of material particles, can be observed.

Electric charge is different because it is intrinsically conserved, i.e. it can be neither created nor destroyed. It can, however, diffuse. The "chemical potential of electric charge" controls this diffusion: electric charge, like anything else, will tend to diffuse from areas of higher chemical potential to areas of lower chemical potential. Other conserved quantities like baryon number behave the same way. In fact, each conserved quantity is associated with a chemical potential and a corresponding tendency to diffuse to equalize it out.

In the case of electrons, the behaviour depends on temperature and context. At low temperatures, with no positrons present, electrons cannot be created or destroyed. Therefore, there is an electron chemical potential that might vary in space, causing diffusion. At very high temperatures, however, electrons and positrons can spontaneously appear out of the vacuum (pair production), so the chemical potential of electrons by themselves becomes a less useful quantity than the chemical potential of the conserved quantities like (electrons minus positrons). The chemical potentials of bosons and fermions are related to the number of particles and the temperature by Bose–Einstein statistics and Fermi–Dirac statistics respectively.
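As a minimal numerical sketch of those two distributions (the constants are standard; the example energies and temperature are illustrative), the mean occupancy of a single-particle state of energy E at chemical potential μ can be computed directly:

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def fermi_dirac(e, mu, t):
    """Mean occupancy of a fermion state at energy e (J), chemical potential mu (J)."""
    return 1.0 / (math.exp((e - mu) / (K_B * t)) + 1.0)

def bose_einstein(e, mu, t):
    """Mean occupancy of a boson state; requires mu < e (mu = 0 for photons)."""
    return 1.0 / (math.exp((e - mu) / (K_B * t)) - 1.0)

# A photon mode at ~500 nm (~4e-19 J) in a 300 K blackbody: mu = 0 applies.
print(bose_einstein(3.97e-19, 0.0, 300.0))          # vanishingly small occupancy
# An electron state 0.1 eV above mu at room temperature:
print(fermi_dirac(0.1 * 1.602e-19, 0.0, 300.0))      # ~0.02, the Boltzmann tail

Note how the photon case enforces μ = 0 by construction, matching the argument above, while the fermion occupancy can never exceed 1 regardless of μ.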
Ideal vs. non-ideal solutions

Generally the chemical potential is given as a sum of an ideal contribution and an excess contribution:

$$\mu_i = \mu_i^{\text{ideal}} + \mu_i^{\text{excess}}$$

In an ideal solution, the chemical potential of species i (μi) depends on temperature and pressure. μi0(T, P) is defined as the chemical potential of pure species i. Given this definition, the chemical potential of species i in an ideal solution is

$$\mu_i^{\text{ideal}} = \mu_{i0}(T, P) + RT\ln x_i$$

where R is the gas constant and $x_i$ is the mole fraction of species i contained in the solution. The chemical potential becomes negative infinity when $x_i \to 0$, but this does not lead to nonphysical results, because $x_i = 0$ means that species i is not present in the system. This equation assumes that $\mu_i$ only depends on the mole fraction ($x_i$) contained in the solution. This neglects intermolecular interactions between species i and itself and between species i and other species [i–(j≠i)]. This can be corrected for by factoring in the activity coefficient of species i, defined as γi. This correction yields

$$\mu_i = \mu_{i0}(T, P) + RT\ln(\gamma_i x_i)$$
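A short numerical sketch of the ideal and corrected expressions follows. The one-parameter Margules model used for the activity coefficient, and its interaction parameter A, are assumptions chosen purely for illustration; real mixtures need fitted models.

import math

R = 8.314462618  # gas constant, J/(mol*K)

def mu_ideal(mu0, t, x):
    """Ideal-solution chemical potential: mu0 + RT ln(x)."""
    return mu0 + R * t * math.log(x)

def gamma_margules(x, a=1.2):
    """Illustrative one-parameter Margules coefficient: ln(gamma1) = A*(1 - x1)^2."""
    return math.exp(a * (1.0 - x) ** 2)

def mu_real(mu0, t, x, a=1.2):
    """Non-ideal chemical potential: mu0 + RT ln(gamma * x)."""
    return mu0 + R * t * math.log(gamma_margules(x, a) * x)

T = 298.15  # K
for x in (0.9, 0.5, 0.1):  # mole fractions, taking mu0 = 0 as the reference
    print(x, mu_ideal(0.0, T, x), mu_real(0.0, T, x))

The printed values show the expected behavior: the two expressions agree as x approaches 1 (where γ approaches 1) and diverge most strongly in the dilute limit, which is where non-ideality matters.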
Physical sciences
Thermodynamics
Physics
218972
https://en.wikipedia.org/wiki/Canada%20goose
Canada goose
The Canada goose (Branta canadensis) is a large species of goose with a black head and neck, white cheeks, white under its chin, and a brown body. It is native to the arctic and temperate regions of North America, and it is occasionally found during migration across the Atlantic in northern Europe. It has been introduced to France, the United Kingdom, Ireland, Finland, Sweden, Denmark, New Zealand, Japan, Chile, Argentina, and the Falkland Islands. Like most geese, the Canada goose is primarily herbivorous and normally migratory; often found on or close to fresh water, the Canada goose is also common in brackish marshes, estuaries, and lagoons.

Extremely adept at living in human-altered areas, Canada geese have established breeding colonies in urban and cultivated habitats, which provide food and few natural predators. The success of this common park species has led to it often being considered a pest species because of its excrement, its depredation of crops, its noise, its aggressive territorial behavior toward both humans and other animals, and its habit of stalking and begging for food, the latter a result of humans disobeying artificial feeding policies toward wild animals.

Nomenclature and taxonomy

The Canada goose was one of the many species described by Carl Linnaeus in his 18th-century work Systema Naturae. It belongs to the Branta genus of geese, which contains species with largely black plumage, distinguishing them from the gray species of the genus Anser. Branta was a Latinized form of the Old Norse brandgás, "burnt (black) goose", and the specific epithet canadensis is a Neo-Latin word meaning "from Canada". According to the Oxford English Dictionary, the first citation for the 'Canada goose' dates back to 1772. The Canada goose is also colloquially referred to as the "Canadian goose", a name that may annoy some birders.

The cackling goose was originally considered to be the same species as, or several subspecies of, the Canada goose, but in July 2004 the American Ornithologists' Union's Committee on Classification and Nomenclature split them into two species, making the cackling goose into a full species with the scientific name Branta hutchinsii. The British Ornithologists' Union followed suit in June 2005. The AOU has divided the many subspecies between the two species. The subspecies of the Canada goose were listed as:

Atlantic Canada goose (B. c. canadensis) (Linnaeus, 1758)
Interior Canada goose (B. c. interior) (Todd, 1938)
Giant Canada goose (B. c. maxima) (Delacour, 1951)
Moffitt's Canada goose (B. c. moffitti) (Aldrich, 1946)
Vancouver Canada goose (B. c. fulva) (Delacour, 1951)
Dusky Canada goose (B. c. occidentalis) (Baird, 1858)
Lesser Canada goose (B. c. parvipes) (Cassin, 1852)

The distinctions between the two geese have led to confusion and debate among ornithologists. This has been aggravated by the overlap between the small types of Canada goose and the larger types of cackling goose. The old "lesser Canada geese" were believed to be a partly hybrid population, with the birds named B. c. taverneri considered a mixture of B. c. minima, B. c. occidentalis, and B. c. parvipes. However, the holotype specimen of taverneri is a straightforward large pale cackling goose, and hence the taxon is still valid today and has been renamed "Taverner's cackling goose". In addition, the barnacle goose (B. leucopsis) was determined to be a derivative of the cackling goose lineage, whereas the Hawaiian goose (B. sandvicensis) originated from ancestral Canada geese.
Thus, the species' distinctness is well evidenced. Ornithologist Harold C. Hanson, who had rediscovered wild populations of the giant Canada goose, proposed splitting the Canada and cackling goose into six species and 200 subspecies. The radical nature of this proposal has been controversial; Richard Banks of the AOU urged caution before any of Hanson's proposals were accepted. The International Commission on Zoological Nomenclature has suppressed Hanson's proposals, based on the criticisms of Banks and other ornithologists.

Description

The black head and neck with a white "chinstrap" distinguish the Canada goose from all other goose species except the cackling goose and barnacle goose (the latter, however, has a black breast and gray, rather than brownish, body plumage). Some Canada geese occur with a pepper-spotted or brown neck and brown plumage, and these are assumed to be a leucistic variety. The seven subspecies of this bird vary widely in size and plumage details, but all are recognizable as Canada geese. Some of the smaller races can be hard to distinguish from the cackling goose, with which they slightly overlap in mass. However, most subspecies of the cackling goose (exclusive of Richardson's cackling goose, B. h. hutchinsii) are considerably smaller. The smallest cackling goose, B. h. minima, is scarcely larger than a mallard. In addition to the size difference, cackling geese also have a shorter neck and smaller bill, which can be useful when small Canada geese commingle with relatively large cackling geese. Of the "true geese" (i.e., the genera Anser and Branta), the Canada goose is on average the largest living species, although some other species that are geese in name, if not of close relation to these genera, are on average heavier, such as the spur-winged goose and Cape Barren goose.

Canada geese range from 75 to 110 cm (30 to 43 in) in length and have a 127–185 cm (50–73 in) wingspan. Among standard measurements, the wing chord can range from 39 to 55 cm (15 to 22 in), the tarsus from 6.9 to 10.6 cm (2.7 to 4.2 in), and the bill from 4.1 to 6.8 cm (1.6 to 2.7 in). The largest subspecies is B. c. maxima, the giant Canada goose, and the smallest (with the separation of the cackling goose group) is B. c. parvipes, the lesser Canada goose. An exceptionally large male of the race B. c. maxima, which rarely exceeds 8 kg (18 lb), weighed 10.9 kg (24 lb) and had a wingspan of 2.24 m (7.3 ft). This specimen is the largest wild goose ever recorded of any species. The male Canada goose usually weighs 2.6–6.5 kg (5.7–14.3 lb), averaging about 3.9 kg (8.6 lb) amongst all subspecies. The female looks virtually identical, but is slightly lighter at 2.4–5.5 kg (5.3–12.1 lb), averaging about 3.6 kg (7.9 lb) amongst all subspecies, and is generally 10% smaller in linear dimensions than her male counterpart.

The honk refers to the call of the male Canada goose, while the hrink call refers to the female goose. The calls are similar; however, the hrink is shorter and more high-pitched than the honk of males. When agitated or aggressively defending territory, Canada geese will typically initiate an encounter with a high-pitched hiss. Canada geese communicate with ten different vocalizations, each in response to a different situation confronting them.

Distribution and habitat

This species is native to North America. It breeds in Canada and the northern United States in a wide range of habitats. The Great Lakes region maintains a large population of Canada geese. Canada geese live year-round in the southern part of their breeding range, including the northern half of the United States' eastern seaboard and Pacific Coast, and areas in between.
Between California and South Carolina in the southern United States, and in northern Mexico, Canada geese are mainly present as migrants from further north during the winter.

Overhunting and loss of habitat in the late 19th and early 20th centuries had resulted in a serious decline in the numbers of this bird in its native range. The giant Canada goose subspecies was believed to be extinct in the 1950s until, in 1962, a small flock was discovered wintering in Rochester, Minnesota, by Harold Hanson of the Illinois Natural History Survey. In 1964, the Northern Prairie Wildlife Research Center was built near Jamestown, North Dakota. Its first director, Harvey K. Nelson, talked Forrest Lee into leaving Minnesota to head the center's Canada goose production and restoration program. Forrest soon had 64 pens with 64 breeding pairs of screened, high-quality birds. The project involved private, state, and federal resources and relied on the expertise and cooperation of many individuals. By the end of 1981, more than 6,000 giant Canada geese had been released at 83 sites in 26 counties in North Dakota.

In recent years, Canada goose populations in some areas have grown substantially, so much so that many consider them pests for their droppings, the bacteria in their droppings, their noise, and their confrontational behavior. This problem is partially due to the removal of natural predators and an abundance of safe, human-made bodies of water near food sources, such as those found on golf courses, in public parks and beaches, on sports fields, and in planned communities. Due in part to the interbreeding of various migratory subspecies with the introduced non-migratory giant subspecies, Canada geese are frequently a year-round feature of such urban environments.

Contrary to its normal migration routine, large flocks of Canada geese have established permanent residence along the Pacific coast of North America from British Columbia's Lower Mainland and Vancouver Island area south to the San Francisco Bay area of Northern California. There are also resident Atlantic coast populations, such as on Chesapeake Bay, in Virginia's James River regions, in the Triangle area of North Carolina (Raleigh, Durham, Chapel Hill), and nearby Hillsborough. Some Canada geese have taken up permanent residence as far south as Florida, in places such as retention ponds in apartment complexes.

In 2015, the Ohio population of Canada geese was reported as roughly 130,000, with the number likely to continue increasing. Many of the geese, previously migratory, reportedly had become resident, remaining in the state even in the summer. The increase was attributed to a lack of natural predators, an abundance of water, and plentiful grass in manicured lawns in urban areas. Canada geese were eliminated in Ohio following the American Civil War but were reintroduced in 1956 with 10 pairs. The population was estimated at 18,000 in 1979. The geese are considered protected, though a hunting season is allowed from September 1–15, with a daily bag limit of five. The Ohio Department of Natural Resources recommends several non-lethal scare and hazing tactics for nuisance geese, but if such methods have been used without success, it may issue a permit, valid from March 11 through August 31, to destroy nests, conduct a goose roundup, or exterminate geese.

Outside North America

Eurasia

Canada geese have reached Northern Europe naturally, as has been proved by ringing recoveries.
The birds include those of the subspecies B. c. parvipes, and possibly others. These geese are also found naturally on the Kamchatka Peninsula in eastern Siberia and in eastern China.

Canada geese were also introduced to Europe in the early 17th century, when explorer Samuel de Champlain sent several pairs of geese to France as a present for King Louis XIII. The geese were first introduced in Great Britain in the late 17th century as an addition to King James II's waterfowl collection in St. James's Park. By the middle of the 18th century, Canada geese had established populations not only in France and Great Britain, but also in Ireland. They were also introduced in the Netherlands, Belgium, Germany, Scandinavia, and Finland in the 20th century, starting in Sweden in 1929. Most European populations are not migratory, but those in more northerly parts of Sweden and Finland migrate to the North Sea and Baltic coasts. Semi-tame feral birds are common in parks and have become a pest in some areas. In Great Britain, they were spread by hunters, but remained uncommon until the mid-20th century. Their population grew from 2,200–4,000 birds in 1953 to an estimated 82,000 in 1999, as changing agricultural practices and urban growth provided new habitat, with an estimated 165,000 wintering individuals in 2017. An attempt to translocate breeding pairs away from agricultural zones in the 1950s and 1960s backfired, causing the establishment of new subpopulations and an overall population explosion. European birds are mostly descended from the nominate subspecies B. c. canadensis, likely with some contributions from the subspecies B. c. maxima.

New Zealand

Canada geese were introduced as a game bird into New Zealand in 1905. They have become a problem in some areas by fouling pastures and damaging crops. They were protected under the Wildlife Act 1953, and the population was managed by Fish and Game New Zealand, which culled excessive bird numbers. In 2011, the government removed the protection status, allowing anyone to kill the birds.

Behavior

Like most geese, the Canada goose is naturally migratory, with the wintering range being most of the United States. The calls overhead from large groups of Canada geese flying in a V-shaped formation signal the transitions into spring and autumn. In some areas, migration routes have changed due to changes in habitat and food sources. In mild climates from southwestern British Columbia to California to the Great Lakes, some of the population has become nonmigratory due to adequate winter food supply and a lack of former predators.

Males exhibit agonistic behavior both on and off breeding and nesting grounds. This behavior rarely involves interspecific killing. One documented case involved a male defending his nest from a brant that wandered into the area; the ensuing attack lasted for one hour, until the death of the brant. The cause of death was suffocation or drowning in mud as a direct result of the Canada goose's pecking the head of the brant into the mud. Researchers attributed the attack to high hormone levels and the brant's inability to leave the nesting area.

Diet

Canada geese are primarily herbivores, although they sometimes eat small insects and fish. Their diet includes green vegetation and grains. The Canada goose eats a variety of grasses when on land. It feeds by grasping a blade of grass with the bill, then tearing it with a jerk of the head. The Canada goose also eats beans and grains such as wheat, rice, and corn when they are available.
In the water, it feeds from aquatic plants by sliding its bill along the bottom of the body of water. It also feeds on aquatic plant-like organisms, such as seaweed. In urban areas, it is also known to pick food out of garbage bins. Canada geese are also sometimes hand-fed a variety of grains and other foods by humans in parks. They prefer lawn grass in urban areas, and usually graze in open areas with wide clearance to avoid potential predators.

Reproduction

During the second year of their lives, Canada geese find a mate. They are monogamous, and most couples stay together all of their lives. If one dies, the other may find a new mate. The female lays from two to nine eggs, with an average of five, and both parents protect the nest while the eggs incubate, but the female spends more time at the nest than the male. The nest is usually located in an elevated area near water such as streams, lakes, and ponds, and sometimes on a beaver lodge. The eggs are laid in a shallow depression lined with plant material and down. The incubation period, in which the female incubates while the male remains nearby, lasts for 24–32 days after laying. Canada geese can respond to external climatic factors by adjusting their laying date to spring maximum temperatures, which may benefit their nesting success. As the annual summer molt also takes place during the breeding season, the adults lose their flight feathers for 20–40 days, regaining flight at about the same time as their goslings start to fly.

As soon as the goslings hatch, they are immediately capable of walking, swimming, and finding their own food (a diet similar to that of adult geese). Parents are often seen leading their goslings in a line, usually with one adult at the front and the other at the back. While protecting their goslings, parents often violently chase away nearby creatures, from small blackbirds to lone humans who approach, first giving a warning hiss and then attacking with bites and slaps of the wings. Although parents are hostile to unfamiliar geese, they may form groups of a number of goslings and a few adults, called crèches.

The offspring enter the fledgling stage any time from six to nine weeks of age. They do not leave their parents until after the spring migration, when they return to their birthplace.

Migration

Canada geese are known for their seasonal migrations. Most Canada geese have staging or resting areas where they join up with others. Their autumn migration can be seen from September to the beginning of November. The early migrants have a tendency to spend less time at rest stops and go through the migration much faster. The later birds usually spend more time at rest stops. Some geese return to the same nesting ground year after year and lay eggs with their mate, raising them in the same way each year. This is recorded from the many tagged geese which frequent the East Coast.

Canada geese fly in a distinctive V-shaped flight formation, at an altitude of around 1 km (3,000 ft) for migration flights. The maximum flight ceiling of Canada geese is unknown, but they have been reported at 9 km (29,000 ft). Flying in the V formation has been the subject of study by researchers; the front position is rotated, since flying in front consumes the most energy. Canada geese leave their winter grounds more quickly than their summer grounds. Elevated thyroid hormones, such as T3 and T4, have been measured in geese just after a big migration.
This is believed to occur because, during the long days of flying on migration, the thyroid gland releases more T4, which helps the body cope with the long journey. The increased T4 levels are also associated with increased muscle mass (hypertrophy) of the breast muscle, also because of the longer time spent flying. It is believed that the body releases more T4 to help the goose cope with this long task by speeding up the metabolism and lowering the temperature at which the muscles work. Other studies show that levels of stress hormones such as corticosterone rise dramatically in these birds during and after a migration.

Survival

The lifespan in the wild of geese that survive to adulthood ranges from 10 to 24 years. The British longevity record is held by a specimen tagged as a nestling, which was observed alive at the University of York at the age of 31. To survive extreme winter temperatures, the geese prefer to stay in urban areas rather than in green spaces, since built-up areas retain heat.

Predators

Known predators of eggs and goslings include coyotes (Canis latrans), Arctic foxes (Vulpes lagopus), northern raccoons (Procyon lotor), red foxes (Vulpes vulpes), large gulls (Larus species), common ravens (Corvus corax), American crows (Corvus brachyrhynchos), carrion crows (in Europe, Corvus corone), and both brown (Ursus arctos) and American black bears (Ursus americanus).

Geese and their goslings are occasionally preyed upon by domestic dogs; these occurrences can be prevented by leashing a pet dog. Once they reach adulthood, due to their large size and often aggressive behavior, Canada geese are rarely preyed on, although prior injury may make them more vulnerable to natural predators. Beyond humans, adults can be taken by coyotes and grey wolves (Canis lupus). Avian predators that are known to kill adults, as well as young geese, include snowy owls (Bubo scandiacus), golden eagles (Aquila chrysaetos), and bald eagles (Haliaeetus leucocephalus) and, though rarely on large adult geese, great horned owls (Bubo virginianus), northern goshawks (Accipiter gentilis), peregrine falcons (Falco peregrinus), and gyrfalcons (Falco rusticolus). Adult geese are quite vigorous at displacing potential predators from the nest site, with predator prevention usually falling to the larger male of the pair. Males usually attempt to draw the attention of approaching predators and toll (mob terrestrial predators without physical contact), often in accompaniment with males of other goose species. Eagles frequently cause geese to fly off en masse from some distance, though in other instances geese may seem unconcerned at bald eagles perched nearby, seemingly only reacting if the eagle is displaying active hunting behavior.

Canada geese are quite wary of humans where they are regularly hunted and killed, but can otherwise become habituated to fearlessness toward humans, especially where they are fed by them. This often leads to the geese becoming overly aggressive toward humans, and large groups of the birds may be considered a nuisance if they are causing persistent problems to humans and other animals in the surrounding area.

Salinity

Salinity plays a role in the growth and development of goslings. Moderate to high salinity concentrations without fresh water result in slower development, slower growth, and saline-induced mortality. Goslings are susceptible to saline-induced mortality before their nasal salt glands become functional; the majority of such deaths occur before the sixth day of life.
Disease

Canada geese are susceptible to avian influenza viruses, such as H5N1. A study carried out using a highly pathogenic avian influenza (HPAI) H5N1 virus found that the geese were susceptible to the virus. This proved useful for monitoring the spread of the virus through the high mortality of infected birds. Prior exposure to other viruses may result in some resistance to H5N1.

Relationship with humans

The Canada goose is considered part of the Canadian national identity. In North America, nonmigratory Canada goose populations have been on the rise. The species is frequently found on golf courses, parking lots, and urban parks, which would previously have hosted only migratory geese on rare occasions. Owing to its adaptability to human-altered areas, it has become one of the most common waterfowl species in North America. In many areas, nonmigratory Canada geese are now regarded as pests by humans. They are suspected of being a cause of an increase in high fecal coliform counts at beaches. An extended hunting season, deploying noise makers, and hazing by dogs have been used in an attempt to disrupt suspect flocks. A goal of conservationists has been to focus hunting on the nonmigratory populations (which tend to be larger and more of a nuisance) as opposed to migratory flocks showing natural behavior, which may be rarer.

Since 1999, the United States Department of Agriculture Wildlife Services agency has been engaged in lethal culls of Canada geese, primarily in urban or densely populated areas. The agency responds to municipalities or private landowners, such as golf courses, which find the geese obtrusive or object to their waste. Addling goose eggs and destroying nests are promoted as humane population control methods. Flocks of Canada geese can also be captured during the molt, and this method of culling is used to control invasive populations.

Canada geese are protected from hunting and capture outside of designated hunting seasons in the United States by the Migratory Bird Treaty Act, and in Canada under the Migratory Birds Convention Act. In both countries, commercial transactions such as buying or trading are mostly prohibited, and possession, hunting, and interfering with the activity of the animals are subject to restrictions. In the UK, as with native bird species, the nests and eggs of Canada geese are fully protected by law, except when their removal has been specifically licensed, and shooting is generally permitted only during the defined open season.

Geese have a tendency to attack humans when they feel themselves or their goslings to be threatened. First, the geese stand erect, spread their wings, and produce a hissing sound. Next, the geese charge. They may then peck or attack with their wings.

Aircraft strikes

Canada geese have been implicated in a number of bird strikes by aircraft. Their large size and tendency to fly in flocks may exacerbate their impact. In the United States, the Canada goose is the second-most damaging species for bird strikes to airplanes, the most damaging being turkey vultures. Canada geese can cause fatal crashes when they strike an aircraft's engine. The FAA has reported 1,772 known civil aircraft strikes within the United States between 1990 and 2018. The total cost of these bird strikes to general and commercial aviation has been reported to exceed $130 million. In 1995, a U.S. Air Force E-3 Sentry aircraft at Elmendorf AFB, Alaska, struck a flock of Canada geese on takeoff, losing power in both port-side engines. It crashed shortly after takeoff, killing all 24 crew members.
The accident sparked efforts to avoid such events, including habitat modification, aversion tactics, herding and relocation, and the culling of flocks. In 2009, a collision with a flock of migratory Canada geese resulted in US Airways Flight 1549 suffering a total loss of power from both engines after takeoff, forcing the crew to ditch the plane in the Hudson River with no loss of human life.

Cuisine

As a large, common wild bird, the Canada goose is a frequent target of hunters, especially in its native range. Drake Larsen, a researcher in sustainable agriculture at Iowa State University, described them to The Atlantic magazine as "so yummy ... good, lean, rich meat. I find they are similar to a good cut of beef." The British Trust for Ornithology, however, has described them as "reputedly amongst the most inedible of birds." The U.S. goose harvest for 2013–14 reported over 1.3 million geese taken. Canada geese are rarely farmed, and the sale of wild Canada goose meat is rare due to regulation and slaughterhouses' lack of experience with wild birds. Geese in New York City parks culled by the New York City Department of Environmental Protection have been donated to food banks in Pennsylvania.

Population

In 2000, the North American population of the geese was estimated to be between 4 million and 5 million birds. A 20-year study from 1983 to 2003 in Wichita, Kansas, found that the size of the winter Canada goose population within the city limits increased from 1,600 to over 18,000 birds.
Biology and health sciences
Anseriformes
Animals
219023
https://en.wikipedia.org/wiki/Swift%20%28bird%29
Swift (bird)
The Apodidae, or swifts, form a family of highly aerial birds. They are superficially similar to swallows, but are not closely related to any passerine species. Swifts are placed in the order Apodiformes along with hummingbirds. The treeswifts are closely related to the true swifts, but form a separate family, the Hemiprocnidae. Resemblances between swifts and swallows are due to convergent evolution, reflecting similar lifestyles based on catching insects in flight.

The family name, Apodidae, is derived from the Greek ἄπους (ápous), meaning "footless", a reference to the small, weak legs of these most aerial of birds. The tradition of depicting swifts without feet continued into the Middle Ages, as seen in the heraldic martlet.

Taxonomy

Taxonomists have long classified swifts and treeswifts as relatives of the hummingbirds, a judgment corroborated by the discovery of the Jungornithidae (apparently swift-like hummingbird-relatives) and of primitive hummingbirds such as Eurotrochilus. Traditional taxonomies place the hummingbird family (Trochilidae) in the same order as the swifts and treeswifts (and no other birds); the Sibley-Ahlquist taxonomy treated this group as a superorder in which the swift order was called Trochiliformes.

The taxonomy of the swifts is complicated, with genus and species boundaries widely disputed, especially amongst the swiftlets. Analysis of behavior and vocalizations is complicated by common parallel evolution, while analyses of different morphological traits and of various DNA sequences have yielded equivocal and partly contradictory results. The Apodiformes diversified during the Eocene, at the end of which the extant families were present; fossil genera are known from all over temperate Europe, between today's Denmark and France, such as the primitive swift-like Scaniacypselus (Early–Middle Eocene) and the more modern Procypseloides (Late Eocene/Early Oligocene – Early Miocene). A prehistoric genus sometimes assigned to the swifts, Primapus (Early Eocene of England), might also be a more distant ancestor.

Species

There are around 100 species of swifts, normally grouped into two subfamilies and four tribes:

Cypseloidinae
  Tribe Cypseloidini
Apodinae
  Tribe Collocaliini – swiftlets
  Tribe Chaeturini – needletails
  Tribe Apodini – typical swifts

Description

Swifts are among the fastest of birds in level flight, and larger species like the white-throated needletail have been reported travelling at up to 169 km/h (105 mph). Even the common swift can cruise at a maximum speed of 31 metres per second (112 km/h; 69 mph). In a single year the common swift can cover at least 200,000 km, and in a lifetime, about two million kilometres.

The wingtip bones of swiftlets are of proportionately greater length than those of most other birds. Changing the angle between the bones of the wingtips and forelimbs allows swifts to alter the shape and area of their wings to increase their efficiency and maneuverability at various speeds. They share with their relatives the hummingbirds a special ability to rotate their wings from the base, allowing the wing to remain rigid and fully extended and derive power on both the upstroke and downstroke. The downstroke produces both lift and thrust, while the upstroke produces a negative thrust (drag) that is 60% of the thrust generated during the downstroke, but simultaneously contributes lift that is also 60% of what is produced during the downstroke. This flight arrangement might benefit the bird's control and maneuverability in the air.
The swiftlets or cave swiftlets have developed a form of echolocation for navigating through the dark cave systems where they roost. One species, the three-toed swiftlet, has recently been found to use this navigation at night outside its cave roost, too.

Distribution and habitat

Swifts occur on all the continents except Antarctica, but not in the far north, in large deserts, or on many oceanic islands. The swifts of temperate regions are strongly migratory and winter in the tropics. Some species can survive short periods of cold weather by entering torpor, a state similar to hibernation. Many have a characteristic shape, with a short forked tail and very long swept-back wings that resemble a crescent or a boomerang. The flight of some species is characterised by a distinctive "flicking" action quite different from that of swallows. Swifts range in size from the pygmy swiftlet (Collocalia troglodytes), which weighs 5.4 g and measures 9 cm (3.5 in) long, to the purple needletail (Hirundapus celebensis), which weighs 184 g (6.5 oz) and measures 25 cm (9.8 in) long.

Behaviour

Breeding

The nest of many species is glued to a vertical surface with saliva, and the genus Aerodramus uses only that substance, which is the basis for bird's nest soup. Other swifts select holes and small cavities in walls. The eggs hatch after 19 to 23 days, and the young leave the nest after a further six to eight weeks. Both parents assist in raising the young.

Swifts as a family have smaller egg clutches and much longer and more variable incubation and fledging times than passerines with similarly sized eggs, resembling tubenoses in these developmental factors. Young birds reach a maximum weight heavier than their parents; they can cope with not being fed for long periods of time, and delay their feather growth when undernourished. Swifts and seabirds have generally secure nest sites, but their food sources are unreliable, whereas passerines are vulnerable in the nest but food is usually plentiful.

Feeding

All swifts eat insects, such as dragonflies, flies, ants, aphids, wasps and bees, as well as aerial spiders. Prey is typically caught in flight using the beak. Some species, like the chimney swift, hunt in mixed-species flocks with other aerial insectivores such as members of the Hirundinidae (swallows).

Status

No swift species has become extinct since 1600, but BirdLife International has assessed the Guam swiftlet as endangered and lists the Atiu, dark-rumped, Seychelles, and Tahiti swiftlets as vulnerable; twelve other species are near threatened or lack sufficient data for classification.

Exploitation by humans

The hardened saliva nests of the edible-nest swiftlet and the black-nest swiftlet have been used in Chinese cooking for over 400 years, most often as bird's nest soup. Over-harvesting of this expensive delicacy has led to a decline in the numbers of these swiftlets, especially as the nests are also thought to have health benefits and aphrodisiac properties. Most nests are built during the breeding season by the male swiftlet over a period of 35 days. They take the shape of a shallow cup stuck to the cave wall. The nests are composed of interwoven strands of salivary cement and contain high levels of calcium, iron, potassium, and magnesium.
Biology and health sciences
Apodiformes
null
219042
https://en.wikipedia.org/wiki/Power%20supply
Power supply
A power supply is an electrical device that supplies electric power to an electrical load. The main purpose of a power supply is to convert electric current from a source to the correct voltage, current, and frequency to power the load. As a result, power supplies are sometimes referred to as electric power converters. Some power supplies are separate standalone pieces of equipment, while others are built into the load appliances that they power. Examples of the latter include power supplies found in desktop computers and consumer electronics devices. Other functions that power supplies may perform include limiting the current drawn by the load to safe levels, shutting off the current in the event of an electrical fault, power conditioning to prevent electronic noise or voltage surges on the input from reaching the load, power-factor correction, and storing energy so the supply can continue to power the load during a temporary interruption in the source power (uninterruptible power supply).

All power supplies have a power input connection, which receives energy in the form of electric current from a source, and one or more power output or power rail connections that deliver current to the load. The source power may come from the electric power grid (such as an electrical outlet), energy storage devices such as batteries or fuel cells, generators or alternators, solar power converters, or another power supply. The input and output are usually hardwired circuit connections, though some power supplies employ wireless energy transfer to power their loads without wired connections. Some power supplies have other types of inputs and outputs as well, for functions such as external monitoring and control.

General classification

Functional

Power supplies are categorized in various ways, including by functional features. For example, a regulated power supply is one that maintains constant output voltage or current despite variations in load current or input voltage. Conversely, the output of an unregulated power supply can change significantly when its input voltage or load current changes. Adjustable power supplies allow the output voltage or current to be programmed by mechanical controls (e.g., knobs on the power supply front panel), by means of a control input, or both. An adjustable regulated power supply is one that is both adjustable and regulated. An isolated power supply has a power output that is electrically independent of its power input; this is in contrast to other power supplies that share a common connection between power input and output.

Packaging

Power supplies are packaged in different ways and classified accordingly. A bench power supply is a stand-alone desktop unit used in applications such as circuit test and development. Open-frame power supplies have only a partial mechanical enclosure, sometimes consisting of only a mounting base; these are typically built into machinery or other equipment. Rack-mount power supplies are designed to be secured into standard electronic equipment racks. An integrated power supply is one that shares a common printed circuit board with its load. An external power supply, AC adapter, or power brick is a power supply located in the load's AC power cord that plugs into a wall outlet; a wall wart is an external supply integrated with the outlet plug itself. These are popular in consumer electronics because of their safety: the hazardous 120 or 240 volt mains voltage is transformed down to a safer voltage before it enters the appliance body.
Power conversion method

Power supplies can be broadly divided into linear and switching types. Linear power converters process the input power directly, with all active power conversion components operating in their linear operating regions. In switching power converters, the input power is converted to AC or to DC pulses before processing, by components that operate predominantly in non-linear modes (e.g., transistors that spend most of their time in cutoff or saturation). Power is "lost" (converted to heat) when components operate in their linear regions; consequently, switching converters are usually more efficient than linear converters because their components spend less time in linear operating regions.

Types

DC power supplies

An AC-to-DC power supply operates on an AC input voltage and generates a DC output voltage. Depending on application requirements, the output voltage may contain large or negligible amounts of AC frequency components known as ripple voltage, related to the AC input voltage frequency and the power supply's operation. A DC power supply operating on a DC input voltage is called a DC-to-DC converter. This section focuses mostly on the AC-to-DC variant.

Linear power supply

In a linear power supply, the AC input voltage passes through a power transformer and is then rectified and filtered to obtain a DC voltage. The filtering reduces the amplitude of the AC mains frequency present in the rectifier output, and can be as simple as a single capacitor or as complex as a pi filter. The electric load's tolerance of ripple dictates the minimum amount of filtering that must be provided by the power supply. In some applications, ripple can be entirely ignored. For example, in some battery charging applications, the power supply consists of just a transformer and a diode, with a simple resistor placed at the power supply output to limit the charging current.

Switched-mode power supply

In a switched-mode power supply (SMPS), the AC mains input is directly rectified and then filtered to obtain a DC voltage. The resulting DC voltage is then switched on and off at a high frequency by electronic switching circuitry, thus producing an AC current that will pass through a high-frequency transformer or inductor. Switching occurs at a very high frequency (typically 10 kHz – 1 MHz), thereby enabling the use of transformers and filter capacitors that are much smaller, lighter, and less expensive than those found in linear power supplies operating at mains frequency.
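To make that size advantage concrete, here is a rough first-order sketch using the hold-up approximation C ≈ I / (2·f·ΔV) for a full-wave-rectified rail. It ignores ESR, conduction angle, and topology details, and the load figures are illustrative only.

# Rough capacitor sizing to show why high switching frequency shrinks components.

def filter_cap_farads(load_current_a, ripple_v, frequency_hz):
    """Capacitance needed to hold up a full-wave-rectified rail between peaks."""
    return load_current_a / (2.0 * frequency_hz * ripple_v)

I, dV = 1.0, 0.5  # 1 A load, 0.5 V allowed ripple (illustrative)
for f in (60.0, 100_000.0):  # mains frequency vs. a typical SMPS switching frequency
    print(f"{f:>9.0f} Hz -> {filter_cap_farads(I, dV, f) * 1e6:10.1f} uF")
# ~16,667 uF at 60 Hz vs. ~10 uF at 100 kHz: more than a thousandfold reduction.

The same inverse-frequency scaling applies, with different constants, to the magnetics, which is why an SMPS transformer can be a small fraction of the size of a mains-frequency one.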
After the inductor or transformer secondary, the high-frequency AC is rectified and filtered to produce the DC output voltage. If the SMPS uses an adequately insulated high-frequency transformer, the output will be electrically isolated from the mains; this feature is often essential for safety. Switched-mode power supplies are usually regulated: to keep the output voltage constant, the power supply employs a feedback controller that monitors the current drawn by the load. The switching duty cycle increases as the power output requirement increases.

SMPSs often include safety features such as current limiting or a crowbar circuit to help protect the device and the user from harm. In the event that an abnormally high current draw is detected, the switched-mode supply can assume this is a direct short and will shut itself down before damage is done. PC power supplies often provide a power good signal to the motherboard; the absence of this signal prevents operation when abnormal supply voltages are present.

Some SMPSs have an absolute limit on their minimum current output. They are only able to output above a certain power level and cannot function below that point. In a no-load condition, the frequency of the power switching circuit increases greatly, which can cause the isolated transformer to act as a Tesla coil and do damage through the resulting very high voltage spikes. Switched-mode supplies with protection circuits may briefly turn on but then shut down when no load has been detected. A very small low-power dummy load, such as a ceramic power resistor or a 10-watt light bulb, can be attached to the supply to allow it to run with no primary load attached.

The switch-mode power supplies used in computers have historically had low power factors and have also been significant sources of line interference (due to induced power line harmonics and transients). In simple switch-mode power supplies, the input stage may distort the line voltage waveform, which can adversely affect other loads (and result in poor power quality for other utility customers) and cause unnecessary heating in wires and distribution equipment. Furthermore, customers incur higher electric bills when operating lower power factor loads. To circumvent these problems, some computer switch-mode power supplies perform power factor correction, and may employ input filters or additional switching stages to reduce line interference.

Capacitive (transformerless) power supply

A capacitive power supply (transformerless power supply) uses the reactance of a capacitor to reduce the mains voltage to a smaller AC voltage. Typically, the resulting reduced AC voltage is then rectified, filtered, and regulated to produce a constant DC output voltage. The output voltage is not isolated from the mains. Consequently, to avoid exposing people and equipment to hazardous high voltage, anything connected to the power supply must be reliably insulated. The voltage-reduction capacitor must withstand the full mains voltage, and it must also have enough capacitance to support the maximum load current at the rated output voltage. Taken together, these constraints limit practical uses of this type of supply to low-power applications.

Linear regulator

The function of a linear voltage regulator is to convert a varying DC voltage to a constant, often specific, lower DC voltage. In addition, linear regulators often provide a current-limiting function to protect the power supply and load from overcurrent (excessive, potentially destructive current). A constant output voltage is required in many power supply applications, but the voltage provided by many energy sources will vary with changes in load impedance. Furthermore, when an unregulated DC power supply is the energy source, its output voltage will also vary with changing input voltage. To circumvent this, some power supplies use a linear voltage regulator to maintain the output voltage at a steady value, independent of fluctuations in input voltage and load impedance. Linear regulators can also reduce the magnitude of ripple and noise on the output voltage.

AC power supplies

An AC power supply typically takes the voltage from a wall outlet (mains supply) and uses a transformer to step up or step down the voltage to the desired voltage. Some filtering may take place as well. In some cases, the source voltage is the same as the output voltage; this is called an isolation transformer.
Other AC power supply transformers do not provide mains isolation; these are called autotransformers. A variable-output autotransformer is known as a variac. Other kinds of AC power supplies are designed to provide a nearly constant current, and the output voltage may then vary depending on the impedance of the load. In cases where the power source is direct current (like an automobile storage battery), an inverter and step-up transformer may be used to convert it to AC power. Portable AC power may be provided by an alternator powered by a diesel or gasoline engine (for example, at a construction site, in an automobile or boat, or for backup power generation for emergency services), whose current is passed to a regulator circuit to provide a constant voltage at the output.

Some kinds of AC power conversion do not use a transformer. If the output voltage and input voltage are the same, and the primary purpose of the device is to filter AC power, it may be called a line conditioner. If the device is designed to provide backup power, it may be called an uninterruptible power supply. A circuit may be designed with a voltage multiplier topology to directly step up AC power; formerly, a common application of this was the vacuum tube AC/DC receiver.

In modern use, AC power supplies can be divided into single-phase and three-phase systems. AC power supplies can also be used to change the frequency as well as the voltage; they are often used by manufacturers to check the suitability of their products for use in other countries, for example at 230 V 50 Hz, 115 V 60 Hz, or even 400 Hz for avionics testing.

AC adapter

An AC adapter is a power supply built into an AC mains power plug. AC adapters are also known by various other names such as "plug pack" or "plug-in adapter", or by slang terms such as "wall wart". AC adapters typically have a single AC or DC output that is conveyed over a hardwired cable to a connector, but some adapters have multiple outputs that may be conveyed over one or more cables. "Universal" AC adapters have interchangeable input connectors to accommodate different AC mains voltages. Adapters with AC outputs may consist only of a passive transformer; in the case of DC output, adapters consist of either a transformer with a few diodes and capacitors, or they may employ switch-mode power supply circuitry. AC adapters consume power (and produce electric and magnetic fields) even when not connected to a load; for this reason they are sometimes known as "electricity vampires", and may be plugged into power strips to allow them to be conveniently turned on and off.

Programmable power supply

For the extended USB standard PPS, see: USB Power Delivery.

A programmable power supply (PPS) is one that allows remote control of its operation through an analog input or a digital interface such as RS-232 or GPIB. Controlled properties may include voltage, current, and, in the case of AC output power supplies, frequency. Programmable supplies are used in a wide variety of applications, including automated equipment testing, crystal growth monitoring, semiconductor fabrication, and x-ray generators. Programmable power supplies typically employ an integral microcomputer to control and monitor power supply operation. Power supplies equipped with a computer interface may use proprietary communication protocols or standard protocols and device control languages such as SCPI.
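As an illustration of the kind of control such interfaces allow, here is a sketch using the PyVISA library and generic SCPI-style commands. The resource address and the exact command names are assumptions; real instruments vary, so the unit's programming manual is authoritative.

# Sketch: remote control of a programmable supply over SCPI with PyVISA.
import pyvisa

rm = pyvisa.ResourceManager()
psu = rm.open_resource("ASRL/dev/ttyUSB0::INSTR")  # hypothetical serial address

print(psu.query("*IDN?"))        # standard SCPI identification query
psu.write("VOLT 5.0")            # set output voltage (command set assumed)
psu.write("CURR 1.0")            # set the current limit
psu.write("OUTP ON")             # enable the output
print(psu.query("MEAS:VOLT?"))   # read back the measured output voltage
psu.close()

A test fixture can loop over such commands to sweep voltages and log readbacks, which is essentially what automated-equipment-testing applications do at scale.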
Uninterruptible power supply

An uninterruptible power supply (UPS) takes its power from two or more sources simultaneously. It is usually powered directly from the AC mains while simultaneously charging a storage battery. Should there be a dropout or failure of the mains, the battery instantly takes over so that the load never experiences an interruption. "Instantly" here means a transfer with no perceptible break at all; the distinction matters because high-speed data and communications services require continuity of service. Some manufacturers use a quasi-standard transfer time of 4 milliseconds, but for high-speed data even 4 ms of transition from one source to another is not fast enough. The transition must instead be made in a make-before-break manner, so that the load is never without power; a UPS meeting that requirement is referred to as a true UPS or a hybrid UPS. How much backup time a UPS provides depends mostly on its batteries, possibly in conjunction with generators; that time can range from a practical minimum of 5 to 15 minutes up to hours or even days. In many computer installations, the batteries provide only enough time for the operators to shut down the system in an orderly way. Other UPS schemes may use an internal combustion engine or turbine to supply power during a utility power outage, with the amount of battery time then dependent on how long the generator takes to come on line and on the criticality of the equipment served. Such schemes are found in hospitals, data centers, call centers, cell sites, and telephone central offices.

High-voltage power supply

A high-voltage power supply is one that outputs hundreds or thousands of volts. A special output connector is used that prevents arcing, insulation breakdown, and accidental human contact. Federal Standard connectors are typically used for applications above 20 kV, though other types of connectors (e.g., the SHV connector) may be used at lower voltages. Some high-voltage power supplies provide an analog input or digital communication interface that can be used to control the output voltage. High-voltage power supplies are commonly used to accelerate and manipulate electron and ion beams in equipment such as x-ray generators, electron microscopes, and focused ion beam columns, and in a variety of other applications, including electrophoresis and electrostatics.

High-voltage power supplies typically apply the bulk of their input energy to a power inverter, which in turn drives a voltage multiplier or a high-turns-ratio, high-voltage transformer, or both (usually a transformer followed by a multiplier) to produce the high voltage. The high voltage is passed out of the power supply through the special connector and is also applied to a voltage divider that converts it to a low-voltage metering signal compatible with low-voltage circuitry. The metering signal is used by a closed-loop controller that regulates the high voltage by controlling inverter input power, and it may also be conveyed out of the power supply to allow external circuitry to monitor the high-voltage output.
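The metering divider described above is simple ratio arithmetic, sketched below with illustrative resistor values (not taken from any particular supply).

# Sketch: the divider arithmetic behind a high-voltage metering signal.

def metering_voltage(v_high, r_top, r_bottom):
    """Low-voltage metering signal produced by a resistive divider."""
    return v_high * r_bottom / (r_top + r_bottom)

def estimated_output(v_meter, r_top, r_bottom):
    """What the controller infers the high-voltage output to be."""
    return v_meter * (r_top + r_bottom) / r_bottom

R_TOP, R_BOTTOM = 999e6, 1e6          # 1000:1 divider, ~1 GOhm total (illustrative)
v_m = metering_voltage(20_000.0, R_TOP, R_BOTTOM)
print(v_m)                             # 20.0 V -- safe for low-voltage circuitry
print(estimated_output(v_m, R_TOP, R_BOTTOM))  # 20000.0 V reconstructed by the controller

The very large total resistance is a deliberate design choice: it keeps the divider's own power dissipation, and its loading of the high-voltage output, negligibly small.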
Bipolar power supply

A bipolar power supply operates in all four quadrants of the voltage/current Cartesian plane, meaning that it will generate positive and negative voltages and currents as required to maintain regulation. When its output is controlled by a low-level analog signal, it is effectively a low-bandwidth operational amplifier with high output power and seamless zero-crossings. This type of power supply is commonly used to power magnetic devices in scientific applications.

Specification

The suitability of a particular power supply for an application is determined by various attributes of the power supply, which are typically listed in the power supply's specification. Commonly specified attributes for a power supply include:

Input voltage type (AC or DC) and range
Efficiency of power conversion
The amount of voltage and current it can supply to its load
How stable its output voltage or current is under varying line and load conditions
How long it can supply energy without refueling or recharging (applies to power supplies that employ portable energy sources)
Operating and storage temperature ranges
Whether the output is constant-voltage type or constant-current type

Commonly used abbreviations in power supply specifications:

SCP – Short-circuit protection
OPP – Overpower (overload) protection
OCP – Overcurrent protection
OTP – Overtemperature protection
OVP – Overvoltage protection
UVP – Undervoltage protection
CV – Constant voltage
CC – Constant current
PFC – Power factor correction
THD – Total harmonic distortion

Thermal management

The power supply of an electrical system tends to generate heat; the higher the efficiency, the less heat is generated by the power supply. There are many ways to manage the heat of a power supply unit. The types of cooling generally fall into two categories: convection and conduction. Common convection methods for cooling electronic power supplies include natural air flow, forced air flow, or other fluid flow over the unit. Common conduction cooling methods include heat sinks, cold plates, and thermal compounds.

Overload protection

Power supplies often have protection from short circuits or overloads that could damage the supply or cause a fire. Fuses and circuit breakers are two commonly used mechanisms for overload protection. A fuse contains a short piece of wire which melts if too much current flows. This effectively disconnects the power supply from its load, and the equipment stops working until the problem that caused the overload is identified and the fuse is replaced. Some power supplies use a very thin wire link soldered in place as a fuse. Fuses in power supply units may be replaceable by the end user, but fuses in consumer equipment may require tools to access and change.

A circuit breaker contains an element that heats, bends, and triggers a spring which shuts the circuit down. Once the element cools and the problem is identified, the breaker can be reset and the power restored. Some power supplies use a thermal cutout buried in the transformer rather than a fuse. The advantage is that it briefly allows a greater current to be drawn than the maximum allowed continuous current. Some such cutouts are self-resetting; some are single use only.

Current limiting

Some supplies use current limiting instead of cutting off power if overloaded. The two types of current limiting used are electronic limiting and impedance limiting. The former is common on lab bench power supplies; the latter is common on supplies of less than 3 watts output. A foldback current limiter reduces the output current to much less than the maximum non-fault current.
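That foldback behavior can be captured with a toy model. The linear fold and all numeric values below are illustrative assumptions, not taken from any datasheet; real foldback circuits are set by resistor networks around the pass element.

# Sketch of foldback current limiting: above I_max the limit folds back toward
# a lower short-circuit current as the output voltage sags.

def foldback_limit(v_out, v_nominal, i_max, i_short):
    """Allowed output current as a function of the sagging output voltage."""
    if v_out >= v_nominal:
        return i_max
    frac = max(v_out, 0.0) / v_nominal   # 1.0 at nominal voltage, 0.0 at a dead short
    return i_short + (i_max - i_short) * frac

# A 12 V supply with a 3 A limit, folding back to 0.5 A into a dead short:
for v in (12.0, 9.0, 6.0, 3.0, 0.0):
    print(f"V_out = {v:4.1f} V -> limit = {foldback_limit(v, 12.0, 3.0, 0.5):.2f} A")

The point of folding back, rather than holding a constant 3 A, is that a sustained short then dissipates far less power in the pass transistor, protecting both the supply and the faulted load.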
Switch-mode supplies replaced linear supplies due to improvements in cost, weight, efficiency and size. The diverse collection of output voltages also has widely varying current draw requirements. Electric vehicles Electric vehicles are those which rely on energy created through electricity generation. A power supply unit is part of the design needed to convert the high-voltage vehicle battery power. Welding Arc welding uses electricity to join metals by melting them. The electricity is provided by a welding power supply, and can be either AC or DC. Arc welding typically requires high currents, between 100 and 350 amperes. Some types of welding can use as few as 10 amperes, while some applications of spot welding employ currents as high as 60,000 amperes for an extremely short time. Welding power supplies originally consisted of transformers or engines driving generators; modern welding equipment uses semiconductors and may include microprocessor control. Aircraft Both commercial and military avionic systems require either a DC-DC or AC/DC power supply to convert energy into usable voltage. These often operate at 400 Hz in the interest of weight savings. Automation This refers to conveyors, assembly lines, bar code readers, cameras, motors, pumps, semi-fabricated manufacturing and more. Medical These include ventilators, infusion pumps, surgical and dental instruments, imaging equipment and beds.
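To make the foldback current limiting mentioned under "Current limiting" above concrete, here is a minimal numerical sketch; the knee current, short-circuit current and voltage level are hypothetical illustration values, not figures from any real supply:

```python
# Minimal model of foldback current limiting (illustrative values only).
# Below the knee current the supply behaves as a constant-voltage source;
# as an overload drags the output voltage down, the current limit "folds
# back" toward a short-circuit value far below the maximum non-fault current.

V_NOMINAL = 12.0   # regulated output voltage (V)
I_KNEE = 2.0       # current at which limiting begins (A)
I_SHORT = 0.5      # current delivered into a dead short (A)

def allowed_current(v_out: float) -> float:
    """Maximum output current as a function of the actual output voltage."""
    frac = max(0.0, min(1.0, v_out / V_NOMINAL))
    return I_SHORT + (I_KNEE - I_SHORT) * frac

for v in (12.0, 6.0, 0.0):
    print(f"V_out = {v:4.1f} V -> current limit = {allowed_current(v):.2f} A")
```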
Technology
Components
null
219112
https://en.wikipedia.org/wiki/Seabiscuit
Seabiscuit
Seabiscuit (May 23, 1933 – May 17, 1947) was a champion thoroughbred racehorse in the United States who became the top money-winning racehorse up to the 1940s. He beat the 1937 Triple Crown winner, War Admiral, by four lengths in a two-horse special at Pimlico and was voted American Horse of the Year for 1938. A small horse, at 15.2 hands high, Seabiscuit had an inauspicious start to his racing career, winning only a quarter of his first 40 races, but became an unlikely champion and a symbol of hope to many Americans during the Great Depression. Seabiscuit has been the subject of numerous books and films, including Seabiscuit: the Lost Documentary (1939); the Shirley Temple film The Story of Seabiscuit (1949); a book, Seabiscuit: An American Legend (2001) by Laura Hillenbrand; and a film adaptation of Hillenbrand's book, Seabiscuit (2003), that was nominated for the Academy Award for Best Picture. There is also a street in Indian Trail, North Carolina named after him. Early days Seabiscuit was foaled in Lexington, Kentucky, on May 23, 1933, from the mare Swing On and sire Hard Tack, a son of Man o' War. Seabiscuit was named for his father; "sea biscuit" is another name for hardtack, a type of cracker eaten by sailors. The bay colt grew up on Claiborne Farm in Paris, Kentucky, where he was trained. He was undersized, knobby-kneed, and given to sleeping and eating for long periods. Initially, Seabiscuit was owned by the powerful Wheatley Stable and trained by "Sunny Jim" Fitzsimmons, who had taken Gallant Fox to the United States Triple Crown of Thoroughbred Racing. Fitzsimmons saw some potential in Seabiscuit but felt the horse was too lazy. Fitzsimmons devoted most of his time to training Omaha, who won the 1935 Triple Crown. Seabiscuit was relegated to a heavy schedule of smaller races. He failed to win any of his first 17 races, usually finishing back in the field. After that, Fitzsimmons did not spend much time on him, and the horse was sometimes the butt of stable jokes. However, Seabiscuit began to gain attention after winning two races at Narragansett Park and setting a new track record in the second, a claiming stakes race. As a two-year-old, Seabiscuit raced 35 times (a heavy racing schedule), coming in first five times and finishing second seven times. These included three claiming races, in which he could have been purchased for $2,500, but he had no takers. While Seabiscuit had not lived up to his racing potential, he was not the poor performer Fitzsimmons had taken him for. His last two wins as a two-year-old came in minor stakes races. The next season started with a similar pattern. The colt ran 12 times in less than four months, winning four times. One of those races was a cheap allowance race on the "sweltering afternoon of June 29," 1936, at Suffolk Downs. That was where trainer Tom Smith first laid eyes on Seabiscuit. His owners sold the horse to automobile entrepreneur Charles S. Howard for $8,000 at Saratoga in August 1936. 1936/1937: The beginning of success Howard assigned Seabiscuit to a new trainer, Tom Smith, who, with his unorthodox training methods, gradually brought Seabiscuit out of his lethargy. Smith paired the horse with Canadian jockey Red Pollard (1909–1981), who had experience racing in the West and in Mexico. On August 22, 1936, they raced Seabiscuit for the first time.
Improvements came quickly, and in their remaining eight races in the East, Seabiscuit and Pollard won several times, including the Detroit Governor's Handicap (worth $5,600) and the Scarsdale Handicap ($7,300) at Empire City Race Track in Yonkers, New York. In early November 1936, Howard and Smith shipped the horse to California by rail. His last two races of the year were at Bay Meadows racetrack in San Mateo, California. The first was the $2,700 Bay Bridge Handicap, run over . Despite starting badly and carrying the top weight of , Seabiscuit won by five lengths. At the World's Fair Handicap (Bay Meadows' most prestigious stakes race), Seabiscuit led throughout. In 1937, the Santa Anita Handicap, California's most prestigious race, was worth over $125,000 ($ million in 2010) to the winner; it was known colloquially as "The Hundred Grander." In his first warm-up race at Santa Anita Park, Seabiscuit won easily. In his second race of 1937, the San Antonio Handicap, he suffered a setback after he was bumped at the start and then pushed wide; Seabiscuit came in fifth, losing to Rosemont. The two met again in the Santa Anita Handicap a week later, where Rosemont won by a nose. The defeat was devastating to Smith and Howard and was widely attributed in the press to a jockey error. Pollard, who had not seen Rosemont over his shoulder until too late, was blind in one eye due to an accident during a training ride, a fact he had hidden throughout his career. A week after this defeat, Seabiscuit won the San Juan Capistrano Handicap by seven lengths in track record time of 1:48 for the mile event. Seabiscuit was rapidly becoming a favorite among California racing fans, and his fame spread as he won his next three races. With his successes, Howard decided to ship the horse east for its more prestigious racing circuit. Seabiscuit's run of victories continued. Between June 26 and August 7, he ran five times, each time in a stakes race, and each time he won under steadily increasing handicap weights (imposts) of up to . Seabiscuit also faced off against Rosemont for the third time, beating him by seven lengths. On September 11, Smith accepted an impost of for the Narragansett Special at Narragansett Park. On race day, the ground was slow and heavy, and unsuited to "the Biscuit", who was carrying the heaviest burden of his career. Smith wanted to scratch, but Howard overruled him. Never in the running, Seabiscuit finished third. His winning streak was snapped, but the season was not over; Seabiscuit won his next three races (one a dead heat) before ending the year with a second-place finish at Pimlico. In 1937, Seabiscuit won 11 of his 15 races and was the year's leading money winner in the United States. However, War Admiral, having won the Triple Crown that season, received the most prestigious honor, the American Horse of the Year Award. Early five-year-old season In 1938, as a five-year-old, Seabiscuit continued his success. On February 19, Pollard suffered a terrible fall while racing on Fair Knightess, another of Howard's horses. With half of Pollard's chest caved in by the weight of the fallen horse, Howard had to find a new jockey. After trying three, he settled on George Woolf, an already successful rider and an old friend of Pollard's. Woolf's first race aboard Seabiscuit was the Santa Anita Handicap, "The Hundred Grander" the horse had narrowly lost the previous year. Seabiscuit was drawn on the outside, and at the start was impeded by another horse, Count Atlas, angling out.
The two were locked together for the first straight, and by the time Woolf disentangled his horse, they were six lengths off the pace. Seabiscuit worked his way to the lead but lost in a photo finish to the fast-closing Santa Anita Derby winner, Stagehand (owned by Maxwell Howard, not related to Charles), who had been assigned less than Seabiscuit. Throughout 1937 and 1938, the media speculated about a match race between Seabiscuit and the seemingly invincible War Admiral (sired by Man o' War, Seabiscuit's grandsire). The two horses were scheduled to meet in three stakes races, but each time one or the other was scratched, usually due to Seabiscuit's dislike of heavy ground. After extensive negotiation, the owners organized a match race for May 1938 at Belmont, but Seabiscuit was scratched. By June, Pollard had recovered, and on June 23, he agreed to work a young colt named Modern Youth. Spooked by something on the track, the horse bolted through the stables and threw Pollard, shattering his leg and seemingly ending his career. Howard arranged a match race for Seabiscuit against Ligaroti, a highly regarded horse owned by the Hollywood entertainer Bing Crosby and Howard's son, Lindsay, through Binglin Stable, in an event organized to promote Crosby's resort and Del Mar Racetrack in Del Mar, California. With Woolf aboard, Seabiscuit won that race, despite persistent fouling from Ligaroti's jockey. After three more outings and with only one win, he was scheduled to go head-to-head with War Admiral in the Pimlico Special in November, in Baltimore, Maryland. Sent to race on the East Coast, on October 16, 1938, Seabiscuit ran second by two lengths in the Laurel Stakes to the filly Jacola, who set a new Laurel Park Racecourse record of 1:37.00 for one mile. On November 1, 1938, Seabiscuit met War Admiral and jockey Charles Kurtsinger in what was dubbed the "Match of the Century." The event was run over at Pimlico Race Course. From the grandstands to the infield, the track was jammed with fans. Trains were run from all over the country to bring fans to the race, and the estimated 40,000 at the track were joined by 40 million listening on the radio. War Admiral was the favorite (1–4 with most bookmakers) and a nearly unanimous selection of the writers and tipsters, excluding a California contingent. Head-to-head races favor fast starters, and War Admiral's speed from the gate was well known. Seabiscuit, on the other hand, was a pace stalker, skilled at holding with the pack before pulling ahead with late acceleration. From the scheduled walk-up start, few gave him a chance to lead War Admiral into the first turn. Smith knew these things and trained Seabiscuit to counter this style, using a starting bell and a whip to give the horse a Pavlovian burst of speed from the start. When the bell rang, Seabiscuit broke in front, led by over a length after 20 seconds, and soon crossed over to the rail position. Halfway down the backstretch, War Admiral started to cut into the lead, gradually pulling level with Seabiscuit, then slightly ahead. Following advice he had received from Pollard, Woolf had eased up on Seabiscuit, allowing his horse to see his rival, then asked for more effort. Two hundred yards from the wire, Seabiscuit pulled away again and continued to extend his lead over the closing stretch, finally winning by four lengths despite War Admiral's running his best time for the distance.
As a result of his races that year, Seabiscuit was named American Horse of the Year for 1938, beating War Admiral by 698 points to 489 in a poll conducted by the Turf and Sport Digest magazine. Seabiscuit was the number one newsmaker of 1938. The only major prize that eluded him was the Santa Anita Handicap. Injury and return Seabiscuit was injured during a race. Woolf, who was riding him, said that he felt the horse stumble. The injury was not life-threatening, although many predicted Seabiscuit would never race again. The diagnosis was a ruptured suspensory ligament in the front left leg. With Seabiscuit out of action, Smith and Howard concentrated on their horse Kayak II, an Argentine stallion. In the spring of 1939, Seabiscuit covered seven of Howard's mares, all of which had healthy foals in spring of 1940. One, Fair Knightess's colt, died as a yearling. Seabiscuit and a still-convalescing Pollard recovered together at Howard's ranch, with the help of Pollard's new wife Agnes, who had nursed him through his initial recovery. Slowly, both horse and rider learned to walk again (Pollard joked that they "had four good legs between" them). Poverty and his injury had brought Pollard to the edge of alcoholism. A local doctor broke and reset Pollard's leg to aid his recovery, and slowly Pollard regained the confidence to sit on a horse. Wearing a brace to stiffen his atrophied leg, he began to ride Seabiscuit again, first at a walk and later at a trot and canter. Howard was delighted at their improvement, as he longed for Seabiscuit to race again, but was extremely worried about Pollard, as his leg was still fragile. Over the fall and winter of 1939, Seabiscuit's fitness seemed to improve by the day. By the end of the year, Smith was ready to return the horse to race training, with a collection of stable jockeys in the saddle. By the time of his comeback race, Pollard had cajoled Howard into allowing him the ride. After the horse was scratched due to soft going, the pair finally lined up at the start of the La Jolla Handicap at Santa Anita, on February 9, 1940. Seabiscuit was third, beaten by two lengths. By their third comeback race, Seabiscuit was back to his winning ways, running away from the field in the San Antonio Handicap to beat his erstwhile training partner, Kayak II, by two and a half lengths. Under , Seabiscuit equalled the track record for a mile and 1/16. One race was left in the season. A week after the San Antonio, Seabiscuit and Kayak II both took the gate for the Santa Anita Handicap and its $121,000 prize. 78,000 paying spectators crammed the racetrack, most backing Seabiscuit. Pollard found his horse blocked almost from the start. Picking his way through the field, Seabiscuit briefly led. As they thundered down the back straight, Seabiscuit became trapped in third place, behind leader Whichcee and Wedding Call on the outside. Trusting in his horse's acceleration, Pollard steered between the leaders and burst into the lead, taking the firm ground just off the rail. As Seabiscuit showed his old surge, Wedding Call and Whichcee faltered, and Pollard drove his horse on, taking "The Hundred Grander" by a length and a half from the fast-closing Kayak II under jockey Leon Haas. Pandemonium engulfed the course. Neither horse and rider, nor trainer and owner, could get through the crowd of well-wishers to the winner's enclosure for some time. Retirement, later life, and offspring On April 10, 1940, Seabiscuit's retirement from racing was officially announced. 
When he was retired to the Ridgewood Ranch near Willits, California, he was horse racing's all-time leading money winner. Put out to stud, Seabiscuit sired 108 foals, including two moderately successful racehorses: Sea Sovereign and Sea Swallow. Over 50,000 visitors went to Ridgewood Ranch to see Seabiscuit in the seven years before his death in 1947. Death and interment Seabiscuit died of a probable heart attack on May 17, 1947, in Willits, California, six months before his grandsire Man o' War. He is buried at Ridgewood Ranch in Mendocino County, California. Legacy and honors Awards and honorable distinctions 1938 American Horse of the Year In 1958, Seabiscuit was voted into the National Museum of Racing and Hall of Fame. In the Blood-Horse magazine List of the Top 100 U.S. Racehorses of the 20th Century (1999), Seabiscuit was ranked 25th. War Admiral was 13th, and Seabiscuit's grandsire and War Admiral's sire, Man o' War, placed 1st. Portrayals in film and television Documentaries American Experience: "Seabiscuit" (April 21, 2003) is a documentary episode that aired as Season 15, Episode 11 of the PBS American Experience series. ESPN SportsCentury: "Seabiscuit" (November 17, 2003) featured Seabiscuit on ESPN's SportsCentury Greatest Athletes series. The True Story of Seabiscuit (July 27, 2003) is a 45-minute made-for-TV documentary directed by Craig Haffner, written by Martin Gillam, and containing interviews with William H. Macy and Tobey Maguire and footage of Seabiscuit, that aired on the USA Network. Seabiscuit: the Lost Documentary (1939) was made by Seabiscuit's owner Charles Howard. The film was directed by Manny Nathan, and written by Nathan and Hazel Merry Hawkins. It stars Martin Mason, Doc Bond, Charles Howard as himself, and his wife Marcella. It was colorized and released in 2003 by Legend Films to coincide with interest around the movie. Seabiscuit: America's Legendary Racehorse (2003) was directed and produced by Nick Krantz. Fiction films Stablemates (1938), starring Wallace Beery and Mickey Rooney. Film producer Harry Rapf arranged a deal whereby he could film the $50,000 Hollywood Gold Cup, and actual footage of Seabiscuit running in the race was used. The field is headed by Seabiscuit for the "straight" race in the film. Porky and Teabiscuit (1939) is Warner Bros.' Porky Pig cartoon take on Seabiscuit's underdog story. The Story of Seabiscuit (1949), starring Shirley Temple in her penultimate film, is a fictionalized account featuring Sea Sovereign in the title role. An otherwise undistinguished film, it did include actual footage of the 1938 match race against War Admiral and the 1940 Santa Anita Handicap. Seabiscuit (2003), Universal Studios' adaptation of Laura Hillenbrand's bestselling 2001 book, was nominated for seven Academy Awards, including Best Picture. The Making of Seabiscuit (December 16, 2003) is a documentary short directed by Laurent Bouzereau and starring Tobey Maguire, Jeff Bridges, Chris Cooper, and William Goldenberg, produced by DreamWorks SKG, Herzog Productions, Spyglass Entertainment, and Universal Studios, and distributed by Universal Studios Home Video. Non-fiction books Track writer B.K. Beckwith wrote Seabiscuit: The Saga of a Great Champion (1940), with a foreword by Grantland Rice, right after Seabiscuit's Santa Anita win and at the moment of the horse's retirement. Ralph Moody wrote Come On, Seabiscuit! (1963), illustrated by Robert Riger, which was later reprinted by the University of Nebraska Press.
Laura Hillenbrand's book Seabiscuit: An American Legend (2001) became a bestseller, and in 2003 it was adapted for film. Postage stamp In 2009, after an eight-year-long grassroots effort by Maggie Van Ostrand and Chuck Lustick, Seabiscuit was honored by the United States Postal Service with a stamp bearing his likeness. Thousands of signatures were obtained from all over the nation, and the final approval was given by Citizens' Stamp Advisory Committee member Joan Mondale, wife of former Vice President Walter Mondale. Statues A statue of Seabiscuit (not life-sized) sits outside the main entrance of The Shops at Tanforan, a shopping mall built upon the former site of the Tanforan Racetrack. Seabiscuit was stabled there briefly in 1939, while preparing for his comeback. In the 1940s, businessman and racehorse owner W. Arnold Hanger donated a statuette of Seabiscuit to the Keeneland library. In 1941, American sculptor Hughlette "Tex" Wheeler cast two life-sized bronze statues of Seabiscuit, hand-tooled by Frank Buchler, the German immigrant owner of the Washington Ornamental Iron Company of Los Angeles: one stands in "Seabiscuit Court", the walking ring at Santa Anita Park racetrack in Arcadia, CA; the other is outside the National Museum of Racing in Saratoga Springs, NY. On June 23, 2007, a statue of Seabiscuit was unveiled at Ridgewood Ranch, Seabiscuit's final resting place. On July 17, 2010, a life-size statue of George Woolf and Seabiscuit was unveiled at the Remington Carriage Museum in Woolf's hometown of Cardston, Alberta. This coincided with the 100th anniversary of Woolf's birth, though not the actual date. Notable races won Seabiscuit ran 89 times at 16 different distances over the course of his career. Brooklyn Handicap (1937) San Antonio Handicap (1940) Santa Anita Handicap (1940)
Biology and health sciences
Individual animals
Animals
219144
https://en.wikipedia.org/wiki/Compressible%20flow
Compressible flow
Compressible flow (or gas dynamics) is the branch of fluid mechanics that deals with flows having significant changes in fluid density. While all flows are compressible, flows are usually treated as being incompressible when the Mach number (the ratio of the speed of the flow to the speed of sound) is smaller than 0.3 (since the density change due to velocity is about 5% in that case). The study of compressible flow is relevant to high-speed aircraft, jet engines, rocket motors, high-speed entry into a planetary atmosphere, gas pipelines, commercial applications such as abrasive blasting, and many other fields. History The study of gas dynamics is often associated with the flight of modern high-speed aircraft and atmospheric reentry of space-exploration vehicles; however, its origins lie with simpler machines. At the beginning of the 19th century, investigation into the behaviour of fired bullets led to improvements in the accuracy and capabilities of guns and artillery. As the century progressed, inventors such as Gustaf de Laval advanced the field, while researchers such as Ernst Mach sought to understand the physical phenomena involved through experimentation. At the beginning of the 20th century, the focus of gas dynamics research shifted to what would eventually become the aerospace industry. Ludwig Prandtl and his students proposed important concepts ranging from the boundary layer to supersonic shock waves, supersonic wind tunnels, and supersonic nozzle design. Theodore von Kármán, a student of Prandtl, continued to improve the understanding of supersonic flow. Other notable figures (Meyer and Ascher Shapiro among them) also contributed significantly to the principles considered fundamental to the study of modern gas dynamics, and many others contributed to the field as well. Accompanying the improved conceptual understanding of gas dynamics in the early 20th century was a public misconception that there existed a barrier to the attainable speed of aircraft, commonly referred to as the "sound barrier." In truth, the barrier to supersonic flight was merely a technological one, although it was a stubborn barrier to overcome. Amongst other factors, conventional aerofoils saw a dramatic increase in drag coefficient when the flow approached the speed of sound. Overcoming the larger drag proved difficult with contemporary designs, hence the perception of a sound barrier. However, aircraft design progressed sufficiently to produce the Bell X-1. Piloted by Chuck Yeager, the X-1 officially achieved supersonic speed in October 1947. Historically, two parallel paths of research have been followed in order to further gas dynamics knowledge. Experimental gas dynamics undertakes wind tunnel model experiments and experiments in shock tubes and ballistic ranges, with the use of optical techniques to document the findings. Theoretical gas dynamics considers the equations of motion applied to a variable-density gas, and their solutions. Much of basic gas dynamics is analytical, but in the modern era computational fluid dynamics (CFD) applies computing power to solve the otherwise-intractable nonlinear partial differential equations of compressible flow for specific geometries and flow characteristics. Introductory concepts There are several important assumptions involved in the underlying theory of compressible flow. All fluids are composed of molecules, but tracking a huge number of individual molecules in a flow (for example at atmospheric pressure) is unnecessary.
Instead, the continuum assumption allows us to consider a flowing gas as a continuous substance except at low densities. This assumption provides a huge simplification which is accurate for most gas-dynamic problems. Only in the low-density realm of rarefied gas dynamics does the motion of individual molecules become important. A related assumption is the no-slip condition, where the flow velocity at a solid surface is presumed equal to the velocity of the surface itself, which is a direct consequence of assuming continuum flow. The no-slip condition implies that the flow is viscous, and as a result a boundary layer forms on bodies traveling through the air at high speeds, much as it does in low-speed flow. Most problems in incompressible flow involve only two unknowns: pressure and velocity, which are typically found by solving the two equations that describe conservation of mass and of linear momentum, with the fluid density presumed constant. In compressible flow, however, the gas density and temperature also become variables. This requires two more equations in order to solve compressible-flow problems: an equation of state for the gas and a conservation of energy equation. For the majority of gas-dynamic problems, the simple ideal gas law is the appropriate state equation. Otherwise, more complex equations of state must be considered, giving rise to the field of so-called non-ideal compressible fluid dynamics (NICFD). Fluid dynamics problems have two overall types of reference frames, called Lagrangian and Eulerian (see Joseph-Louis Lagrange and Leonhard Euler). The Lagrangian approach follows a fluid mass of fixed identity as it moves through a flowfield. The Eulerian reference frame, in contrast, does not move with the fluid. Rather it is a fixed frame or control volume that fluid flows through. The Eulerian frame is most useful in a majority of compressible flow problems, but requires that the equations of motion be written in a compatible format. Finally, although space is known to have 3 dimensions, an important simplification can be had in describing gas dynamics mathematically if only one spatial dimension is of primary importance, hence 1-dimensional flow is assumed. This works well in duct, nozzle, and diffuser flows where the flow properties change mainly in the flow direction rather than perpendicular to the flow. However, an important class of compressible flows, including the external flow over bodies traveling at high speed, requires at least a 2-dimensional treatment. When all 3 spatial dimensions and perhaps the time dimension as well are important, we often resort to computerized solutions of the governing equations. Mach number, wave motion, and sonic speed The Mach number (M) is defined as the ratio of the speed of an object (or of a flow) to the speed of sound. For instance, in air at room temperature, the speed of sound is about . M can range from 0 to ∞, but this broad range falls naturally into several flow regimes. These regimes are subsonic, transonic, supersonic, hypersonic, and hypervelocity flow. The figure below illustrates the Mach number "spectrum" of these flow regimes. These flow regimes are not chosen arbitrarily, but rather arise naturally from the strong mathematical background that underlies compressible flow (see the cited reference textbooks). At very slow flow speeds the speed of sound is so much faster that it is mathematically ignored, and the Mach number is irrelevant.
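As a compact restatement of the definition just given, with V the flow (or object) speed and a the local speed of sound, the Mach number and the conventional, approximate regime boundaries described in this section can be summarized as follows (the numerical limits are rough textbook conventions, not sharp physical thresholds):

```latex
M = \frac{V}{a}, \qquad
\text{subsonic: } M \lesssim 0.8, \quad
\text{transonic: } 0.8 \lesssim M \lesssim 1.2, \quad
\text{supersonic: } 1.2 \lesssim M \lesssim 5, \quad
\text{hypersonic: } M \gtrsim 5.
```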
Once the speed of the flow approaches the speed of sound, however, the Mach number becomes all-important, and shock waves begin to appear. Thus the transonic regime is described by a different (and much more complex) mathematical treatment. In the supersonic regime the flow is dominated by wave motion at oblique angles similar to the Mach angle. Above about Mach 5, these wave angles grow so small that a different mathematical approach is required, defining the hypersonic speed regime. Finally, at speeds comparable to that of planetary atmospheric entry from orbit, in the range of several km/s, the speed of sound is now comparatively so slow that it is once again mathematically ignored in the hypervelocity regime. As an object accelerates from subsonic toward supersonic speed in a gas, different types of wave phenomena occur. To illustrate these changes, the next figure shows a stationary point (M = 0) that emits symmetric sound waves. The speed of sound is the same in all directions in a uniform fluid, so these waves are simply concentric spheres. As the sound-generating point begins to accelerate, the sound waves "bunch up" in the direction of motion and "stretch out" in the opposite direction. When the point reaches sonic speed (M = 1), it travels at the same speed as the sound waves it creates. Therefore, an infinite number of these sound waves "pile up" ahead of the point, forming a shock wave. Upon achieving supersonic flow, the particle is moving so fast that it continuously leaves its sound waves behind. When this occurs, the locus of these waves trailing behind the point creates an angle known as the Mach wave angle or Mach angle, μ: sin μ = a/V, where a represents the speed of sound in the gas and V represents the velocity of the object. Although named for Austrian physicist Ernst Mach, these oblique waves were first discovered by Christian Doppler. One-dimensional flow One-dimensional (1-D) flow refers to flow of gas through a duct or channel in which the flow parameters are assumed to change significantly along only one spatial dimension, namely, the duct length. In analysing the 1-D channel flow, a number of assumptions are made: Ratio of duct length to width (L/D) is ≤ about 5 (in order to neglect friction and heat transfer), Flow is steady, Flow is isentropic (i.e. a reversible adiabatic process), Ideal gas law applies (i.e. P = ρRT) Converging-diverging Laval nozzles As the speed of a flow accelerates from the subsonic to the supersonic regime, the physics of nozzle and diffuser flows is altered. Using the conservation laws of fluid dynamics and thermodynamics, the following relationship for channel flow is developed (combined mass and momentum conservation): dP(1 − M²) = ρV²(dA/A), where dP is the differential change in pressure, M is the Mach number, ρ is the density of the gas, V is the velocity of the flow, A is the area of the duct, and dA is the change in area of the duct. This equation states that, for subsonic flow, a converging duct (dA < 0) increases the velocity of the flow and a diverging duct (dA > 0) decreases the velocity of the flow. For supersonic flow, the opposite occurs due to the change of sign of (1 − M²): a converging duct (dA < 0) now decreases the velocity of the flow and a diverging duct (dA > 0) increases the velocity of the flow. At Mach = 1, a special case occurs in which the duct area must be either a maximum or a minimum. For practical purposes, only a minimum area can accelerate flows to Mach 1 and beyond. See table of sub-supersonic diffusers and nozzles.
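The sign behaviour just described can be made explicit by combining the channel-flow relation above with Euler's momentum equation, dP = −ρV dV; a one-line rearrangement (a standard step, stated here with the variables defined above) yields the familiar area–velocity rule:

```latex
-\rho V\, dV \,(1 - M^2) = \rho V^2 \,\frac{dA}{A}
\quad\Longrightarrow\quad
\frac{dA}{A} = \left(M^2 - 1\right)\frac{dV}{V},
```

so that for M < 1 the area and velocity changes have opposite signs, while for M > 1 they have the same sign.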
Therefore, to accelerate a flow to Mach 1, a nozzle must be designed to converge to a minimum cross-sectional area and then expand. This type of nozzle – the converging-diverging nozzle – is called a de Laval nozzle after Gustaf de Laval, who invented it. As subsonic flow enters the converging duct and the area decreases, the flow accelerates. Upon reaching the minimum area of the duct, also known as the throat of the nozzle, the flow can reach Mach 1. If the speed of the flow is to continue to increase, its density must decrease in order to obey conservation of mass. To achieve this decrease in density, the flow must expand, and to do so, the flow must pass through a diverging duct. See image of de Laval Nozzle. Maximum achievable velocity of a gas Ultimately, because of the energy conservation law, a gas is limited to a certain maximum velocity based on its energy content. The maximum velocity, Vmax, that a gas can attain is Vmax = √(2cpTt), where cp is the specific heat of the gas at constant pressure and Tt is the stagnation temperature of the flow. Isentropic flow Mach number relationships Using conservation laws and thermodynamics, a number of relationships of the form Tt/T = 1 + ((γ − 1)/2)M² can be obtained, where M is the Mach number and γ is the ratio of specific heats (1.4 for air). See table of isentropic flow Mach number relationships. Achieving supersonic flow As previously mentioned, in order for a flow to become supersonic, it must pass through a duct with a minimum area, or sonic throat. Additionally, an overall pressure ratio, Pt/Pb, of approximately 2 is needed to attain Mach 1. Once it has reached Mach 1, the flow at the throat is said to be choked. Because changes downstream can propagate upstream only at the speed of sound, and so cannot pass the sonic throat, the mass flow through the nozzle cannot be affected by changes in downstream conditions after the flow is choked. Non-isentropic 1D channel flow of a gas - normal shock waves Normal shock waves are shock waves that are perpendicular to the local flow direction. These shock waves occur when pressure waves build up and coalesce into an extremely thin shockwave that converts kinetic energy into thermal energy: the waves overtake and reinforce one another, forming a finite shock wave from an infinite series of infinitesimal sound waves. Because the change of state across the shock is highly irreversible, entropy increases across the shock. When analysing a normal shock wave, one-dimensional, steady, and adiabatic flow of a perfect gas is assumed. Stagnation temperature and stagnation enthalpy are the same upstream and downstream of the shock. Normal shock waves can be easily analysed in either of two reference frames: the standing normal shock and the moving shock. The flow before a normal shock wave must be supersonic, and the flow after a normal shock must be subsonic. The Rankine-Hugoniot equations are used to solve for the flow conditions. Two-dimensional flow Although one-dimensional flow can be directly analysed, it is merely a specialized case of two-dimensional flow. It follows that one of the defining phenomena of one-dimensional flow, a normal shock, is likewise only a special case of a larger class of oblique shocks. Further, the name "normal" is with respect to geometry rather than frequency of occurrence. Oblique shocks are much more common in applications such as: aircraft inlet design, objects in supersonic flight, and (at a more fundamental level) supersonic nozzles and diffusers.
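As an illustration of the normal-shock analysis just outlined, the standard perfect-gas relation for the downstream Mach number (a textbook result consistent with the steady, adiabatic assumptions stated above) can be evaluated for a concrete case:

```latex
M_2^2 = \frac{1 + \frac{\gamma - 1}{2} M_1^2}{\gamma M_1^2 - \frac{\gamma - 1}{2}}
\qquad\text{e.g. } M_1 = 2,\ \gamma = 1.4:\quad
M_2^2 = \frac{1 + 0.2 \times 4}{1.4 \times 4 - 0.2} = \frac{1.8}{5.4} \approx 0.333,
\quad M_2 \approx 0.58,
```

confirming that flow which is supersonic ahead of a normal shock always emerges subsonic behind it.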
Depending on the flow conditions, an oblique shock can either be attached to the flow or detached from the flow in the form of a bow shock. Oblique shock waves Oblique shock waves are similar to normal shock waves, but they occur at angles less than 90° with the direction of flow. When a disturbance is introduced to the flow at a nonzero angle (δ), the flow must respond to the changing boundary conditions. Thus an oblique shock is formed, resulting in a change in the direction of the flow. Shock polar diagram Based on the level of flow deflection (δ), oblique shocks are characterized as either strong or weak. Strong shocks are characterized by a larger wave angle and more entropy loss across the shock, with weak shocks as the opposite. In order to gain cursory insight into the differences in these shocks, a shock polar diagram can be used. With the static temperature after the shock, T*, known, the speed of sound after the shock is defined as a* = √(γRT*), with R as the gas constant and γ as the specific heat ratio. The Mach number can be broken into Cartesian coordinates with Vx and Vy as the x and y-components of the fluid velocity V. With the Mach number before the shock given, a locus of conditions can be specified. At some deflection angle δ, the flow transitions from a strong to a weak oblique shock. With δ = 0°, a normal shock is produced at the limit of the strong oblique shock and the Mach wave is produced at the limit of the weak shock wave. Oblique shock reflection Due to the inclination of the shock, after an oblique shock is created, it can interact with a boundary in three different manners, two of which are explained below. Solid boundary Incoming flow is first turned by angle δ with respect to the flow. This shockwave is reflected off the solid boundary, and the flow is turned by – δ to again be parallel with the boundary. Each progressive shock wave is weaker and the wave angle is increased. Irregular reflection An irregular reflection is much like the case described above, with the caveat that δ is larger than the maximum allowable turning angle. Thus a detached shock is formed and a more complicated reflection known as Mach reflection occurs. Prandtl–Meyer fans Prandtl–Meyer fans can be expressed as both compression and expansion fans. Prandtl–Meyer fans can also cross a boundary (i.e. flowing or solid), which responds differently in each case. When a fan hits a solid surface, the resulting fan returns as one from the opposite family; when one hits a free boundary, the fan returns as a fan of the opposite type. Prandtl–Meyer expansion fans To this point, the only flow phenomena that have been discussed are shock waves, which slow the flow and increase its entropy. It is possible to accelerate supersonic flow in what has been termed a Prandtl–Meyer expansion fan, after Ludwig Prandtl and Theodore Meyer. The mechanism for the expansion is shown in the figure below. As opposed to the flow encountering an inclined obstruction and forming an oblique shock, the flow expands around a convex corner and forms an expansion fan through a series of isentropic Mach waves. The expansion "fan" is composed of Mach waves that span from the initial Mach angle to the final Mach angle. Flow can expand around either a sharp or rounded corner equally, as the increase in Mach number depends only on the convex turning angle of the passage (δ). The expansion corner that produces the Prandtl–Meyer fan can be sharp (as illustrated in the figure) or rounded.
If the total turning angle is the same, then the P-M flow solution is also the same. The Prandtl–Meyer expansion can be seen as the physical explanation of the operation of the Laval nozzle: the contour of the nozzle creates a smooth and continuous series of Prandtl–Meyer expansion waves. Prandtl–Meyer compression fans A Prandtl–Meyer compression is the opposite phenomenon to a Prandtl–Meyer expansion. If the flow is gradually turned through an angle of δ, a compression fan can be formed. This fan is a series of Mach waves that eventually coalesce into an oblique shock. Because the flow is defined by an isentropic region (flow that travels through the fan) and a non-isentropic region (flow that travels through the oblique shock), a slip line results between the two flow regions. Applications Supersonic wind tunnels Supersonic wind tunnels are used for testing and research in supersonic flows, approximately over the Mach number range of 1.2 to 5. The operating principle behind the wind tunnel is that a large pressure difference is maintained upstream to downstream, driving the flow. Wind tunnels can be divided into two categories: continuous-operating and intermittent-operating wind tunnels. Continuous-operating supersonic wind tunnels require an independent electrical power source whose required capacity increases drastically with the size of the test section. Intermittent supersonic wind tunnels are less expensive in that they store electrical energy over an extended period of time, then discharge the energy over a series of brief tests. The difference between these two is analogous to the comparison between a battery and a capacitor. Blowdown-type supersonic wind tunnels offer a high Reynolds number, a small storage tank, and readily available dry air. However, they present a high-pressure hazard, make it difficult to hold a constant stagnation pressure, and are noisy during operation. Indraft supersonic wind tunnels are not associated with a pressure hazard, allow a constant stagnation pressure, and are relatively quiet. Unfortunately, they have a limited range for the Reynolds number of the flow and require a large vacuum tank. There is no dispute that knowledge is gained through research and testing in supersonic wind tunnels; however, the facilities often require vast amounts of power to maintain the large pressure ratios needed for testing conditions. For example, the Arnold Engineering Development Complex has the largest supersonic wind tunnel in the world, and its operation requires the amount of power needed to light a small city. For this reason, large wind tunnels are becoming less common at universities. Supersonic aircraft inlets Perhaps the most common requirement for oblique shocks is in supersonic aircraft inlets for speeds greater than about Mach 2 (the F-16 has a maximum speed of Mach 2 but doesn't need an oblique shock intake). One purpose of the inlet is to minimize losses across the shocks as the incoming supersonic air slows down to subsonic before it enters the turbojet engine. This is accomplished with one or more oblique shocks followed by a very weak normal shock, with an upstream Mach number usually less than 1.4. The airflow through the intake has to be managed correctly over a wide speed range, from zero to its maximum supersonic speed. This is done by varying the position of the intake surfaces. Although variable geometry is required to achieve acceptable performance from take-off to speeds exceeding Mach 2, there is no one method to achieve it.
For example, for a maximum speed of about Mach 3, the XB-70 used rectangular inlets with adjustable ramps, while the SR-71 used circular inlets with an adjustable inlet cone.
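To quantify why inlet designers keep the terminal normal shock weak, the stagnation-pressure ratio across a normal shock (a standard perfect-gas result) can be evaluated at the Mach 1.4 limit mentioned above:

```latex
\frac{p_{t2}}{p_{t1}}
= \left[\frac{\frac{\gamma+1}{2} M_1^2}{1 + \frac{\gamma-1}{2} M_1^2}\right]^{\frac{\gamma}{\gamma-1}}
\left[\frac{2\gamma}{\gamma+1} M_1^2 - \frac{\gamma-1}{\gamma+1}\right]^{-\frac{1}{\gamma-1}}
\approx 0.96
\qquad (M_1 = 1.4,\ \gamma = 1.4),
```

i.e. only about 4% of the stagnation pressure is lost across the terminal shock, whereas a single normal shock taken at Mach 3 would lose roughly two-thirds of it.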
Physical sciences
Fluid mechanics
Physics
219145
https://en.wikipedia.org/wiki/Henry%20Draper%20Catalogue
Henry Draper Catalogue
The Henry Draper Catalogue (HD) is an astronomical star catalogue published between 1918 and 1924, giving spectroscopic classifications for 225,300 stars; it was later expanded by the Henry Draper Extension (HDE), published between 1925 and 1936, which gave classifications for 46,850 more stars, and by the Henry Draper Extension Charts (HDEC), published from 1937 to 1949 in the form of charts, which gave classifications for 86,933 more stars. In all, 359,083 stars were classified as of August 2017. The HD catalogue is named after Henry Draper, an amateur astronomer, and covers the entire sky almost completely down to an apparent photographic magnitude of about 9; the extensions added fainter stars in certain areas of the sky. The construction of the Henry Draper Catalogue was part of a pioneering effort to classify stellar spectra, and its catalogue numbers are commonly used as a way of identifying stars. History The origin of the Henry Draper Catalogue dates back to the earliest photographic studies of stellar spectra. Henry Draper made the first photograph of a star's spectrum showing distinct spectral lines when he photographed Vega in 1872. He took over a hundred more photographs of stellar spectra before his death in 1882. In 1885, Edward Pickering began to supervise photographic spectroscopy at Harvard College Observatory, using the objective prism method. In 1886, Draper's widow, Mary Anna Palmer Draper, became interested in Pickering's research and agreed to fund it under the name Henry Draper Memorial. Pickering and his coworkers then began to take an objective-prism survey of the sky and to classify the resulting spectra. A first result of this work was the Draper Catalogue of Stellar Spectra, published in 1890. This catalogue contained spectroscopic classifications for 10,351 stars, mostly north of declination −25°. Most of the classification was done by Williamina Fleming. The classification scheme used was to subdivide the previously used Secchi classes (I to IV) into more specific classes, given letters from A to N. Also, the letter O was used for stars whose spectra consisted mainly of bright lines, the letter P for planetary nebulae, and the letter Q for spectra not fitting into any of the classes A through P. No star of type N appeared in the catalogue, and the only star of type O was the Wolf–Rayet star HR 2583. Antonia Maury and Pickering published a more detailed study of the spectra of bright stars in the northern hemisphere in 1897. Maury used classifications numbered from I to XXII; groups I to XX corresponded to subdivisions of the Draper Catalogue types B, A, F, G, K, and M, while XXI and XXII corresponded to the Draper Catalogue types N and O. She was the first to place B stars in their current position, prior to A stars, in the spectral classification. In 1890, the Harvard College Observatory constructed an observation station in Arequipa, Peru in order to study the sky in the Southern Hemisphere, and a study of bright stars in the southern hemisphere was published by Annie Jump Cannon and Pickering in 1901. Cannon used the lettered types of the Draper Catalogue of Stellar Spectra, but dropped all letters except O, B, A, F, G, K, and M, used in that order, as well as P for planetary nebulae and Q for some peculiar spectra. She also used types such as B5A for stars halfway between types B and A, F2G for stars one-fifth of the way from F to G, and so forth. 
Between 1910 and 1915, new discoveries increased interest in stellar classification, and work on the Henry Draper Catalogue itself started in 1911. From 1912 to 1915, Cannon and her coworkers classified spectra at the rate of approximately 5,000 per month. The catalogue was published in 9 volumes of the Annals of Harvard College Observatory between 1918 and 1924. It contains rough positions, magnitudes, spectral classifications, and, where possible, cross-references to the Durchmusterung catalogs for 225,300 stars. The classification scheme used was similar to that used in Cannon's 1901 work, except that types such as B, A, B5A, F2G, and so on, had been changed to B0, A0, B5, F2, and so on. As well as the classes O through M, P was used for nebulae and R and N for carbon stars. Pickering died on February 3, 1919, leaving 6 volumes to be overseen by Cannon. Cannon found spectral classifications for 46,850 fainter stars in selected regions of the sky in the Henry Draper Extension, published in six parts between 1925 and 1936. She continued classifying stars until her death in 1941. Most of these classifications were published in 1949 in the Henry Draper Extension Charts (the first portion of these charts was published in 1937). These charts also contained some classifications by Margaret Walton Mayall, who supervised the work after Cannon's death. The catalogue and its extensions were the first large-scale attempt to catalogue spectral types of stars, and its construction led to the Harvard classification scheme of stellar spectra which is still used today. Availability and usage Stars contained in the main portion of the catalogue are of medium magnitude, down to about 9m (about 16 times fainter than the faintest stars visible with the naked eye). The extensions contain stars as faint as the 11th magnitude selected from certain regions of the sky. Stars in the original catalogue are numbered from 1 to 225300 (prefix HD) and are numbered in order of increasing right ascension for the epoch 1900.0. Stars in the first extension are numbered from 225301 to 272150 (prefix HDE), and stars from the extension charts are numbered from 272151 to 359083 (prefix HDEC). However, as the numbering is continuous throughout the catalogue and its extensions, the prefix HD may be used for any of these stars, as this produces no ambiguity. Many stars are customarily identified by their HD numbers. The Henry Draper Catalogue and the Extension were available from the NASA Astronomical Data Center as part of their third CD-ROM of astronomical catalogues. Currently, the Catalogue and Extension are available from the VizieR service of the Centre de Données astronomiques (French for "Astronomical Data Center") at Strasbourg as catalogue number III/135A. Because of their format, putting the Henry Draper Extension Charts into a machine-readable format was more difficult, but this task was eventually completed by 1995 by Nesterov, Röser and their coworkers, and the charts are now available at VizieR as catalogue number III/182.
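Because the numbering runs continuously across the catalogue and both extensions, the portion a given star belongs to can be read off its number alone. The following sketch simply encodes the ranges stated above; the function name is a hypothetical illustration:

```python
# Map a Henry Draper number to the portion of the catalogue it falls in,
# using the numbering ranges stated above. The function name is illustrative.

def hd_portion(number: int) -> str:
    if 1 <= number <= 225300:
        return "HD (original catalogue, 1918-1924)"
    if 225301 <= number <= 272150:
        return "HDE (Henry Draper Extension, 1925-1936)"
    if 272151 <= number <= 359083:
        return "HDEC (Henry Draper Extension Charts, 1937-1949)"
    raise ValueError("not a valid Henry Draper number")

print(hd_portion(48915))   # Sirius, HD 48915 -> falls in the original catalogue
```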
Physical sciences
Surveys and Catalogs
Astronomy
219151
https://en.wikipedia.org/wiki/Highland%20cattle
Highland cattle
The Highland is a Scottish breed of rustic cattle. It originated in the Scottish Highlands and the Western Islands of Scotland and has long horns and a long shaggy coat. It is a hardy breed, able to withstand the intemperate conditions of the region. The first herd-book dates from 1885; two types – a smaller island type, usually black, and a larger mainland type, usually dun – were registered as a single breed. It is reared primarily for beef, and has been exported to several other countries. History The Highland is a traditional breed of western Scotland. There were two distinct types. The Kyloe, reared mainly in the Hebrides or Western Islands, was small and was frequently black. The cattle were so called because of the practice of swimming them across the narrow straits or kyles separating the islands from the mainland. The cattle of the mainland were somewhat larger, and very variable in colour; they were often brown or red. These cattle were important to the Scottish economy of the eighteenth century. At markets such as those of Falkirk or Crieff, many were bought by drovers from England, who moved them south over the Pennines to be fattened for slaughter. In 1723 over Scottish cattle were sold into England. A breed society was established in 1884, and in 1885 published the first volume of the herd-book. In this the two types were recorded without distinction as 'Highland'. In 2002 the number of registered breeding cows in the United Kingdom was about ; by 2012 this had risen to some . In 2021 it was ; the conservation status of the breed in the United Kingdom is listed in DAD-IS as endangered/at risk. The number of unregistered cattle is not known. Although a group of cattle is generally called a herd, a group of Highland cattle is known as a "fold". This is because in winter, the cattle were kept at night in open shelters made of stone, called folds, to protect them from the weather. In 1954, Queen Elizabeth II ordered Highland cattle to be kept at Balmoral Castle, where they are still kept today. From the late nineteenth century, stock was exported to various countries of the world, among them Argentina, Australia, Canada, the Falkland Islands, the former Soviet Union and the United States. Later in the twentieth century there were exports to various European countries. In 2022 the breed was reported to DAD-IS by twenty-three countries, of which seventeen reported population data. The total population world-wide was reported at just over , with the largest numbers in France and Finland. Australia Highland cattle were first imported into Australia by the mid-nineteenth century by Scottish migrants such as Chieftain Aeneas Ronaldson MacDonell of Glengarry, Scotland. MacDonell and his clan arrived at Port Albert, Victoria, in 1841, and apparently drove their Highland cattle to a farm at Greenmount, on the Tarra River, preceded by a piper. Samuel Amess, also from Scotland, who made a fortune in the Victorian goldfields and became Mayor of Melbourne in 1869, kept a small fold of black Highland cattle on Churchill Island. Highland cattle were still to be seen around Port Victoria in the late 1800s, but other folds were believed to have died out in areas such as New South Wales. In 1988 the Australian Highland Cattle Society was formed. Since then, numbers have been growing, and semen is being exported to New Zealand to establish the breed there. Australian farmers choose the breed because it can adapt to a wide range of environments.
Despite some challenges, the Highland cattle breed does well and plays an important role in cattle breeding across Australia. Canada Highland cattle were first imported into Canada in the 1880s. The Hon. Donald A. Smith, Lord Strathcona, of Winnipeg, Manitoba, and Robert Campbell of Strathclair, Manitoba, imported one bull each. There were also Highland cattle in Nova Scotia in the 1880s. However, their numbers were small until the 1920s, when large-scale breeding and importing began. In the 1950s, cattle were imported into and exported from North America. The Canadian Highland Cattle Society was officially registered in 1964 and currently registers all purebred cattle in Canada. Towards the end of the 1990s, there was a large semen and embryo trade between the UK and Canada. However, that trade has stopped, largely due to the BSE (mad cow disease) outbreaks in the United Kingdom. Today, Highland cattle are mainly found in eastern Canada. In 2001 the population for Canada and the United States of America combined was estimated at . Denmark The Danish Highland Cattle Society was established in 1987 to promote the best practices for the breeding and care of Highland cattle and to promote the introduction of the breed into Denmark. Finland The Highland Cattle Club of Finland was founded in 1997. Their studbooks show importation of Highland cattle breeding stock to Finland dating back to 1884. The Finnish club states that in 2016, there were Highland cattle in Finland. United States The first record of Highland cattle being imported to the United States dates from the late 1890s. The American Highland Cattle Association was first organised in 1948 as the American Scotch Highland Breeders Association, and now claims approximately members. Characteristics They have long, wide horns and long, wavy, woolly coats. The usual coat colour is reddish brown, seen in approximately 60% of the population; some 22% are yellow, and the remainder pale silver, black or brindle/dun. The coat colours are caused by alleles at the MC1R gene (E locus) and the PMEL or SILV gene (D locus). They have an unusual double coat of hair: on the outside is the oily outer hair, the longest of any cattle breed, covering a downy undercoat. This makes them well suited to conditions in the Highlands, which have a high annual rainfall and sometimes very strong winds. Mature bulls can weigh up to and heifers can weigh up to . Cows typically have a height of , and bulls are typically in the range of . Mating occurs throughout the year with a gestation period of approximately days. Most commonly a single calf is born, but twins are not unknown. Sexual maturity is reached at about eighteen months. Highland cattle also have a longer expected lifespan than most other breeds of cattle, up to 20 years. Cold tolerance All European cattle cope relatively well with low temperatures, but Highland cattle have been described as "almost as cold-tolerant as the arctic-dwelling caribou and reindeer". Conversely, due to their thick coats, they are much less tolerant of heat than zebu cattle, which originated in South Asia and are adapted for hot climates. Highland cattle have been successfully established in countries where winters are substantially colder than Scotland's, such as Norway and Canada. Social behaviour A fold of semi-wild Highland cattle was studied over a period of 4 years. It was found that the cattle have a clear structure and hierarchy of dominance, which reduces aggression.
Social standing depends on age and sex, with older cattle being dominant to calves and younger ones, and males dominant to females. Young bulls will dominate adult cows when they reach around 2 years of age. Calves from the top-ranking cow were given higher social status, despite minimal intervention from their mother. Playfighting, licking and mounting were seen as friendly contact. Breeding occurred in May and June, with heifers first giving birth at 2–3 years old. Use The meat of Highland cattle tends to be leaner than most beef because Highlands are largely insulated by their thick, shaggy hair rather than by subcutaneous fat. Highland cattle can produce beef at a reasonable profit from land that would otherwise normally be unsuitable for agriculture. The most profitable way to produce Highland beef is on poor pasture in their native land, the Highlands of Scotland. Commercial success The beef from Highland cattle is very tender, but the market for high-quality meat has declined. To address this decline, it is common practice to breed Highland "suckler" cows to a bull of a more commercially favourable breed, such as a Shorthorn or Limousin. This allows the Highland cattle to produce a crossbred beef calf that has the tender beef of its mother on a carcass shape of more commercial value at slaughter. These crossbred beef suckler cows inherit the hardiness, thrift and mothering capabilities of their Highland dams and the improved carcass configuration of their sires. Such crossbred sucklers can be further crossbred with a modern beef bull, such as a Limousin or Charolais, to produce high-quality beef.
Copy protection
Copy protection, also known as content protection, copy prevention and copy restriction, is any measure to enforce copyright by preventing the reproduction of software, films, music, and other media. Copy protection is most commonly found on videotapes, DVDs, Blu-ray discs, HD-DVDs, computer software discs, video game discs and cartridges, audio CDs and some VCDs. It also may be incorporated into digitally distributed versions of media and software. Some methods of copy protection have also led to criticism because they caused inconvenience for paying consumers or secretly installed additional or unwanted software to detect copying activities on the consumer's computer. Making copy protection effective while protecting consumer rights remains a problem with media publication. Terminology Media corporations have always used the term copy protection, but critics argue that the term tends to sway the public into identifying with the publishers, who favor restriction technologies, rather than with the users. Copy prevention and copy control may be more neutral terms. "Copy protection" is a misnomer for some systems, because any number of copies can be made from an original and all of these copies will work, but only in one computer, or only with one dongle, or only with another device that cannot be easily copied. The term is also often related to, and confused with, the concept of digital restrictions management. Digital restrictions management is a more general term because it includes all sorts of management of works, including copy restrictions. Copy restriction may include measures that are not digital. A more appropriate term may be "technological protection measures" (TPMs), which is often defined as the use of technological tools in order to restrict the use of or access to a work. Business rationale Unauthorized copying and distribution accounted for $2.4 billion per year in lost revenue in the United States alone in 1990, and is assumed to be cutting into revenues in the music and video game industries, leading to proposals for stricter copyright laws such as PIPA. Copy protection is most commonly found on videotapes, DVDs, computer software discs, video game discs and cartridges, audio CDs and some VCDs. Many media formats are easy to copy using a machine, allowing consumers to distribute copies to their friends, a practice known as "casual copying". Companies publish works under copyright protection because they believe that the cost of implementing the copy protection will be less than the revenue produced by consumers who buy the product instead of acquiring it through casually copied media. Opponents of copy protection argue that people who obtain free copies only use what they can get for free and would not purchase their own copy if they were unable to obtain a free copy. Some even argue that free copies increase profit; people who receive a free copy of a music CD may then go and buy more of that band's music, which they would not have done otherwise. Some publishers have avoided copy-protecting their products on the theory that the resulting inconvenience to their users outweighs any benefit of frustrating "casual copying". From the perspective of the end user, copy protection is always a cost. DRM and license managers sometimes fail, are inconvenient to use, and may not afford the user all of the legal uses of the product they have purchased. 
The term copy protection refers to the technology used to attempt to frustrate copying, and not to the legal remedies available to publishers or authors whose copyrights are violated. Software usage models range from node locking to floating licenses (where a fixed number of licenses can be used concurrently across an enterprise; a minimal sketch of such a counter appears at the end of this section), grid computing (where multiple computers function as one unit and so use a common license) and electronic licensing (where features can be purchased and activated online). The term license management refers to broad platforms which enable the specification, enforcement and tracking of software licenses. To safeguard copy protection and license management technologies themselves against tampering and hacking, software anti-tamper methods are used. Floating licenses are also referred to as indirect licenses: at the time they are issued, no specific user is assigned to them, which has some influence over their technical characteristics. Direct licenses, by contrast, are issued after a specific user requests them; as an example, an activated Microsoft product contains a direct license which is locked to the PC where the product is installed. From a business standpoint, on the other hand, some services now try to monetize additional services other than the media content, so that users can have a better experience than they would from simply obtaining the copied product. Technical challenges From a technical standpoint, it seems impossible to completely prevent users from making copies of the media they purchase, as long as a "writer" is available that can write to blank media. All types of media require a "player"—a CD player, DVD player, videotape player, computer or video game console—which must be able to read the media in order to display it to a human. Logically, a player could be built that reads the media and then writes an exact copy of what was read to the same type of media. At a minimum, digital copy protection of non-interactive works is subject to the analog hole: regardless of any digital restrictions, if music can be heard by the human ear, it can also be recorded (at the very least, with a microphone and tape recorder); if a film can be viewed by the human eye, it can also be recorded (at the very least, with a video camera and recorder). In practice, almost-perfect copies can typically be made by tapping into the analog output of a player (e.g. the speaker output or headphone jacks) and, once redigitized into an unprotected form, duplicated indefinitely. Copying text-based content in this way is more tedious, but the same principle applies: if it can be printed or displayed, it can also be scanned and OCRed. With basic software and some patience, these techniques can be applied by a typical computer-literate user. Since these basic technical facts exist, it follows that a determined individual will definitely succeed in copying any media, given enough time and resources. Media publishers understand this; copy protection is not intended to stop professional operations involved in the unauthorized mass duplication of media, but rather to stop "casual copying". Copying of information goods which are downloaded (rather than being mass-duplicated as with physical media) can be inexpensively customized for each download, and thus restricted more effectively, in a process known as "traitor tracing". They can be encrypted in a fashion which is unique for each user's computer, and the decryption system can be made tamper-resistant. 
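As referenced above, the floating-license model can be sketched as a small counter that caps concurrent checkouts. A minimal sketch, not any real product's API: the class and method names are invented for illustration, and a real license manager would add network transport, authentication and anti-tamper protection.

    import threading

    # Hypothetical floating-license counter: a fixed pool of seats is shared
    # across an enterprise; a seat "floats" to whichever client checks it out.
    class FloatingLicenseServer:
        def __init__(self, seats):
            self.seats = seats             # maximum concurrent users
            self.in_use = set()            # client ids currently holding a seat
            self.lock = threading.Lock()

        def checkout(self, client_id):
            with self.lock:
                if client_id in self.in_use:
                    return True            # this client already holds a seat
                if len(self.in_use) >= self.seats:
                    return False           # pool exhausted: client must wait
                self.in_use.add(client_id)
                return True

        def checkin(self, client_id):
            with self.lock:
                self.in_use.discard(client_id)  # seat returns to the pool

    server = FloatingLicenseServer(seats=2)
    print(server.checkout("alice"), server.checkout("bob"), server.checkout("carol"))
    server.checkin("alice")
    print(server.checkout("carol"))  # succeeds once a seat is freed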
Copyright protection in content platforms also causes increased market concentration and a loss in aggregate welfare. According to research on the European Directive on copyright in the Digital Single Market on platform competition, only users of large platforms will be allowed to upload content if the content is sufficiently valuable and network effects are strong. Methods For information on individual protection schemes and technologies, see List of copy protection schemes or relevant category page. Computer software Copy protection for computer software, especially for games, has been a long cat-and-mouse struggle between publishers and crackers. These were (and are) programmers who defeated copy protection on software as a hobby, added their alias to the title screen, and then distributed the "cracked" product to the network of warez BBSes or Internet sites that specialized in distributing unauthorized copies of software. Early ages When computer software was still distributed on audio cassettes, audio copying was unreliable, while digital copying was time-consuming. Software prices were comparable with audio cassette prices. To make digital copying more difficult, many programs used non-standard loading methods (loaders incompatible with standard BASIC loaders, or loaders that used a different transfer speed). Unauthorized software copying began to be a problem when floppy disks became the common storage media. The ease of copying depended on the system; Jerry Pournelle wrote in BYTE in 1983 that "CP/M doesn't lend itself to copy protection" so its users "haven't been too worried" about it, while "Apple users, though, have always had the problem. So have those who used TRS-DOS, and I understand that MS-DOS has copy protection features". 1980s Pournelle disliked copy protection and, except for games, refused to review software that used it. He did not believe that it was useful, writing in 1983 that "For every copy protection scheme there's a hacker ready to defeat it. Most involve so-called nibble/nybble copiers, which try to analyze the original disk and then make a copy". In 1985, he wrote that "dBASE III is copy-protected with one of those 'unbreakable' systems, meaning that it took the crackers almost three weeks to break it". IBM's Don Estridge agreed: "I guarantee that whatever scheme you come up with will take less time to break than to think of it." While calling piracy "a threat to software development. It's going to dry up the software", he said "It's wrong to copy-protect programs ... There ought to be some way to stop [piracy] without creating products that are unusable". Software vendors disliked piracy, but did not attempt to sue the vendors of nibble copiers because copyright law allows users to produce backup copies of software. Unlike music piracy, no significant software piracy market existed. A more serious problem was a company or other large organization purchasing a single copy of an application and producing many copies for itself. Philippe Kahn of Borland justified copy-protecting Sidekick because, unlike his company's unprotected Turbo Pascal, Sidekick could be used without accompanying documentation and was aimed at a general audience. Kahn said, according to Pournelle, that "any good hacker can defeat the copy protection in about an hour"; its purpose was to prevent large companies from purchasing one copy and easily distributing it internally. While reiterating his dislike of copy protection, Pournelle wrote "I can see Kahn's point". 
In 1989, Gilman Louie, head of Spectrum Holobyte, stated that copy protection added about $0.50 per copy to the cost of production of a game. Other software relied on complexity; Antic in 1988 observed that WordPerfect for the Atari ST "is almost unusable without its manual of over 600 pages!". (The magazine was mistaken; the ST version was so widely pirated that the company threatened to discontinue it.) Copy protection sometimes causes software not to run on clones, such as the Apple II-compatible Laser 128, or even on a genuine Commodore 64 with certain peripherals. To limit reusing activation keys to install the software on multiple machines, it has been attempted to tie the installed software to a specific machine by involving some unique feature of the machine. A serial number in ROM could not be used because some machines do not have one. Popular surrogates for a machine serial number were the date and time (to the second) of initialization of the hard disk drive, or the MAC address of an Ethernet card (although this is programmable on modern cards). With the rise of virtualization, however, the practice of locking must go beyond these simple hardware parameters to still prevent copying. Early video games During the 1980s and 1990s, video games sold on audio cassette and floppy disks were sometimes protected with an external user-interactive method that demanded that the user have the original package or a part of it, usually the manual. Copy protection was activated not only at installation, but every time the game was executed. Several imaginative and creative methods were employed, in order to be both fun and hard to copy. These include: The most common method was requiring the player to enter a specific word (often chosen at random) from the manual (a sketch of such a check appears after this list of methods). A variant of this technique involved matching a picture provided by the game to one in the manual and providing an answer pertaining to the picture (Ski or Die, 4D Sports Boxing and James Bond 007: The Stealth Affair used this technique). Buzz Aldrin's Race Into Space (in the floppy version, but not the CD version) required the user to input an astronaut's total duration in space (available in the manual) before the launch of certain missions. If the answer was incorrect, the mission would suffer a catastrophic failure. Manuals containing information and hints vital to the completion of the game, like answers to riddles (Conquests of Camelot, King's Quest VI), recipes of spells (King's Quest III), keys to deciphering non-Latin writing systems (Ultima series, see also Ultima writing systems), maze guides (Manhunter), dialogue spoken by other characters in the game (Wasteland, Dragon Wars), excerpts of the storyline (most Advanced Dungeons and Dragons games and Wing Commander), or a radio frequency to use to communicate with a character to further a game (Metal Gear Solid). Some games used a code of symbols not found on the keyboard or in the ASCII code. This code was arranged in a grid, and had to be entered via a virtual keyboard at the request "What is the code at line 3 row 2?". These tables were printed on dark paper (Maniac Mansion, Uplink), or were visible only through a red transparent layer (Indiana Jones and the Last Crusade), making the paper very difficult to photocopy. Another variant of this method—most famously used on the ZX Spectrum version of Jet Set Willy—was a card with color sequences at each grid reference that had to be entered before starting the game. This also prevented monochrome photocopying. It had been thought that the codes in these tables were generated by a mathematical formula that could be recomputed from the row, line and page number (which would have avoided the disk space required to store the tables); later research proved that this was not the case. The Secret of Monkey Island offered a rotating wheel with halves of pirate's faces. The game showed a face composed of two different parts and asked when this pirate was hanged on a certain island. The player then had to match the faces on the wheel, and enter the year that appeared on the island-respective hole. Its sequel had the same concept, but with magic potion ingredients. Other games that employed the code wheel system include Star Control. Zork games such as Beyond Zork and Zork Zero came with "feelies" which contained information vital to the completion of the game. For example, the parchment found in Zork Zero contained clues vital to solving the final puzzle. However, whenever the player attempts to read the parchment, they are referred to the game package. The Lenslok system used a plastic prismatic device, shipped with the game, which was used to descramble a code displayed on screen. Early copies of The Playroom from Broderbund Software included a game called "What is Missing?" in which, every fifth time the program was booted up, the player would see a pattern and have to refer to the back of the manual to find which of 12 objects from the spinner counting game matched it in order to open the game. 
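The word-from-the-manual check referenced above can be sketched in a few lines. A minimal sketch: the word table and prompt are invented for illustration, and a real game would ship only the lookup logic while the words lived in the printed manual (storing them in the program, as done here for self-containment, would make the check trivial to defeat).

    import random

    # Hypothetical manual-lookup copy protection check.
    MANUAL = {  # (page, line, word index) -> the word printed in the manual
        (12, 4, 3): "falcon",
        (27, 1, 5): "harbour",
        (33, 7, 2): "lantern",
    }

    def passes_check():
        (page, line, word), answer = random.choice(list(MANUAL.items()))
        reply = input(f"Enter word {word} of line {line} on page {page}: ")
        return reply.strip().lower() == answer

    print("Loading game..." if passes_check() else "Copy protection check failed.")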
All of these methods proved to be troublesome and tiring for the players, and as such greatly declined in usage by the mid-1990s, at which point the emergence of CDs as the primary video game medium made copy protection largely redundant, since CD copying technology was not widely available at the time. Some game developers, such as Markus Persson, have encouraged consumers and other developers to embrace the reality of unlicensed copying and utilize it positively to generate increased sales and marketing interest. Videotape Starting in 1985 with the video release of The Cotton Club (Beta and VHS versions only), Macrovision licensed to publishers a technology that exploits the automatic gain control feature of VCRs by adding pulses to the vertical blanking sync signal. These pulses may negatively affect picture quality, but succeed in confusing the recording-level circuitry of many consumer VCRs. This technology, which is aided by U.S. legislation mandating the presence of automatic gain-control circuitry in VCRs, is said to "plug the analog hole" and make VCR-to-VCR copies impossible, although an inexpensive circuit is widely available that will defeat the protection by removing the pulses. Macrovision had patented methods of defeating copy prevention, giving it a more straightforward basis to shut down manufacture of any device that descrambles it than often exists in the DRM world. While used for pre-recorded tapes, the system was not adopted for television broadcasts; Michael J. Fuchs of HBO said in 1985 that Macrovision was "not good technology" because it reduced picture quality and consumers could easily bypass it, while Peter Chernin of Showtime said "we want to accommodate our subscribers and we know they like to tape our movies". Notable payloads Over time, software publishers (especially in the case of video games) became creative about crippling the software in case it was duplicated. These games would initially show that the copy was successful, but eventually render themselves unplayable via subtle methods. Many games use the "code checksumming" technique to prevent alteration of code to bypass other copy protection. Important constants for the game, such as the accuracy of the player's firing or the speed of their movement, are not included in the game but calculated from the numbers making up the machine code of other parts of the game. If the code is changed, the calculation yields a result which no longer matches the original design of the game and the game plays improperly. 
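The checksumming idea just described can be illustrated with a short sketch; the byte string and the accuracy formula below are invented for illustration, not taken from any actual game.

    # Illustrative "code checksumming": a gameplay constant is derived from
    # the program's own bytes, so any patch silently skews the game.
    GAME_CODE = bytes.fromhex("deadbeef00c0ffee")  # stands in for machine code

    def firing_accuracy(code):
        # The constant is never stored directly; it is recomputed from the code.
        return (sum(code) % 251) / 251.0

    print(firing_accuracy(GAME_CODE))                       # intended value
    patched = bytes([GAME_CODE[0] ^ 0xFF]) + GAME_CODE[1:]  # a one-byte crack
    print(firing_accuracy(patched))                         # silently wrong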
Superior Soccer had no outward signs of copy protection, but if it decided it was not a legitimate copy, it made the soccer ball in the game invisible, making it impossible to play the game. In Sid Meier's Pirates, if the player entered the wrong information, they could still play the game, but with substantially increased difficulty. As a more satirical nod to the issue, if the thriller-action game Alan Wake detects that the game is cracked or a pirated copy, it will replace tips in loading screens with messages telling the player to buy the game. If a new game is created on the copied game, an additional effect will take place. As a more humorous nod to "piracy", Alan Wake will gain a black eyepatch over his right eye, complete with a miniature Jolly Roger. While the copy protection in Zak McKracken and the Alien Mindbenders was not hidden as such, the repercussions of missing the codes were unusual: the player ended up in jail (permanently), and the police officer gave a lengthy and condescending speech about software copying. In copied versions of The Settlers III, the iron smelters only produced pigs (a play on pig iron); weaponsmiths require iron to produce weapons, so players could not amass arms. Bohemia Interactive developed a unique and very subtle protection system for its game Operation Flashpoint: Cold War Crisis. Dubbed FADE, if it detects an unauthorized copy, it does not inform the player immediately but instead progressively corrupts aspects of the game (such as reducing the weapon accuracy to zero) to the point that it eventually becomes unplayable. The message "Original discs don't FADE" will eventually appear if the game is detected as being an unauthorized copy. FADE is also used in ArmA II, and will similarly diminish the accuracy of the player's weapons, as well as induce a "drunken vision" effect, where the screen becomes wavy, should the player be playing an unauthorized copy. This system was also used in Take On Helicopters, where the screen blurred and distorted when playing a counterfeit copy, making it hard to safely pilot a helicopter. The IndyCar Series also utilizes FADE technology to safeguard against piracy by making races very difficult to win on a pirated version. The penultimate section of the game's manual states: Batman: Arkham Asylum contained code that disabled Batman's glider cape, making some areas of the game very difficult to complete and a certain achievement/trophy impossible to unlock (gliding continuously for over 100m). The PC version of Grand Theft Auto IV has a copy protection that swings the camera as though the player were drunk. If the player enters a car, motorcycle, or boat, it will automatically throttle, making it difficult to steer. It also damages the vehicle, making it vulnerable to collisions and bullets. 
An update to the game prevented unauthorised copies from accessing the in-game web browser, making it impossible to finish the game, as some missions involve browsing the web for objectives. EarthBound is well documented for its extensive use of checksums to ensure that the game is being played on legitimate hardware. If the game detects that it is being played on a European SNES, it refuses to boot, as the first of several checksums has failed. A second checksum will weed out most unauthorized copies of the game, but hacking the data to get past this checksum will trigger a third checksum that makes enemy encounters appear much more often than in an authorized copy, and if the player progresses through the game without giving up (or cracks this protection), a final checksum code will activate before the final boss battle, freezing the game and deleting all the save files. A similar copy protection system was used in Spyro: Year of the Dragon, although it only uses one copy protection check at the beginning of the game (see below). In an unauthorized version of the PC edition of Mass Effect, the game save mechanism did not work and the in-game galactic map caused the game to crash. As the galactic map is needed to travel to different sections of the game, the player became stuck in the first section of the game. If an unauthorized version of The Sims 2 was used, the Build Mode would not work properly. Walls could not be built on the player's property, which prevented the player from building any custom houses. Some furniture and clothing selections would not be available either. A March 2009 update to the BeeJive IM iPhone app included special functionality for users of the unauthorized version: the screen would read "PC LOAD LETTER" whenever the user tried to establish a connection to any IM service, then quickly switch to a YouTube clip from the movie Office Space. Command & Conquer: Red Alert 2 and The Lord of the Rings: The Battle for Middle-earth have a copy protection system that completely wipes out the player's forces shortly after a battle begins on an unlicensed copy. However, some who purchased the latter have encountered a bug that caused this copy protection scheme to trigger when it was not supposed to. If a player pirates the Nintendo DS version of Michael Jackson: The Experience, vuvuzela noises play over the notes during a song, and the notes then become invisible. The game will also freeze if the player tries to pause it. Older versions of Autodesk 3ds Max use a dongle for copy protection; if it is missing, the program will randomly corrupt the points of the user's model during usage, destroying their work. Older versions of CDRWIN used a serial number for initial copy protection. However, if this check was bypassed, a second hidden check would activate, causing a random factor to be introduced into the CD burning process and producing corrupted "coaster" disks. Terminate, a BBS terminal package, would appear to operate normally if cracked, but would insert a warning that a pirated copy was in use into the IEMSI login packet it transmitted, where the sysop of any BBS the user called could clearly read it. Ubik's Musik, a music creation tool for the Commodore 64, would transform into a Space Invaders game if it detected that a cartridge-based copying device had attempted to interrupt it. This copy protection system also doubles as an easter egg, as the message that appears when it occurs is not hostile ("Plug joystick in port 1, press fire, and no more resetting/experting!"). 
The Amiga version of Bomberman featured a multitap peripheral that also acted as a dongle. Data from the multitap was used to calculate the time limit of each level. If the multitap was missing, the time limit would be calculated as 0, causing the level to end immediately. Nevermind, a puzzle game for the Amiga, contained code that caused an unlicensed version of the game to behave as a demo. The game would play three levels sampled from throughout the game, and then give the message "You have completed three levels; however there are 100 levels to complete on the original disc." In Spyro: Year of the Dragon, a character named Zoe will tell the player, outside the room containing the balloon to Midday Garden Home and in several other areas, that the player is using an unlicensed copy. This conversation purposely corrupts data. The corruption not only removes stray gems and the ability to progress in certain areas, but also makes the final boss unbeatable, returning the player to the beginning of the game (and removing the save file at the same time) about 8 seconds into the battle. The Atari Jaguar console would freeze at startup and play the sound of an enraged jaguar snarling if the inserted cartridge failed the initial security check. The Lenslok copy protection system gave an obvious message if the lens-coded letters were entered incorrectly, but if the user soft-reset the machine, the areas of memory occupied by the game would be flooded with the message "THANK YOU FOR YOUR INTEREST IN OUR PRODUCT. NICE TRY. LOVE BJ/NJ" to prevent the user from examining leftover code to crack the protection. An update to the sandbox game Garry's Mod enabled a copy protection mechanism that outputs the error "Unable to shade polygon normals" if the game detects that it has been copied. The error also includes the user's Steam ID as an error ID, meaning that users can be identified by their Steam account when asking for help about the error over the Internet. The Atari version of Alternate Reality: The Dungeon would have the player's character attacked by two unbeatable "FBI Agents" if it detected a cracked version. The FBI agents would also appear when restoring a save which was created by such a version, even if the version restoring the save was legal. VGA Planets, a play-by-BBS strategy game, contained code in its server which would check all clients' submitted turns for suspect registration codes. Any player deemed to be using a cracked copy, or cheating in the game, would have random forces destroyed throughout the game by an unbeatable enemy called "The Tim Continuum" (after the game's author, Tim Wissemann). A similar commercial game, Stars!, would issue empty turn updates for players with invalid registration codes, meaning that none of their orders would ever be carried out. In a copied version of the original PC release of Postal, as soon as the game was started, the player character would immediately shoot himself in the head. In Serious Sam 3: BFE, if the game code detects what it believes to be an unauthorized copy, an invincible scorpion-like monster is spawned at the beginning of the game; it moves at high speed, attacks in melee and from range with twin chainguns, and makes the game extremely difficult, preventing the player from progressing further. Also, in the level "Under the Iron Cloud", the player's character will spin out of control while looking up into the air. 
An unauthorized copy of Pokémon Black and White and their sequels will run as if it were normal, but the Pokémon will not gain any experience points after a battle. This protection has since been circumvented by patching the game's files. If Ace Attorney Investigations 2: Prosecutor's Gambit detects an illegitimate or downloaded copy of the game, it will convert the entire game's text into the game's symbol-based foreign language, Borginian, which cannot be translated in any way. The unlicensed version of the indie game Game Dev Tycoon, in which the player runs a game development company, will dramatically increase the piracy rate of the games the player releases, to the point where no money can be made at all, and disable the player's ability to take any action against it. In the stand-alone expansion to Crytek's Crysis, Crysis Warhead, players who pirated the game will have their ammunition replaced with chickens that inflict no damage and have very little knockback, rendering ranged combat impossible. In Crytek's Crysis 3, a player using an unlicensed copy of the game is unable to defeat the last boss (the Alpha Ceph), making it impossible to beat the game. In Mirror's Edge, copy protection will prevent its player character, Faith, from sprinting, making it impossible for players to jump over long gaps and progress further on a pirated copy. In The Legend of Zelda: Spirit Tracks, if the game detects that it is a pirated copy, it removes the train control UI, which effectively stonewalls the player at the train's tutorial section very early on and thus makes the game unbeatable. The usage of copy protection payloads which lower the playability of a game without making it clear that this is a result of copy protection is now generally considered unwise, due to the potential for unaware players with unlicensed copies to spread word of mouth that a game is of low quality. The authors of FADE explicitly acknowledged this as a reason for including the explicit warning message. Anti-piracy Anti-piracy measures are efforts to fight against copyright infringement, counterfeiting, and other violations of intellectual property laws. They include, but are by no means limited to, the combined efforts of corporate associations (such as the RIAA and MPA), law enforcement agencies (such as the FBI and Interpol), and various international governments to combat copyright infringement relating to various types of creative works, such as software, music and films. These measures often come in the form of copy protection measures such as DRM, or measures implemented through a content protection network, such as Distil Networks or Incapsula. Richard Stallman and the GNU Project have criticized the use of the word "piracy" in these situations, saying that publishers use the word to refer to "copying they don't approve of" and that "they [publishers] imply that it is ethically equivalent to attacking ships on the high seas, kidnapping and murdering the people on them". Certain forms of anti-piracy (such as DRM) are considered by consumers to control the use of the product's content after sale. In the case MPAA v. Hotfile, Judge Kathleen M. Williams granted a motion to deny the prosecution the usage of words she views as "pejorative". This list included the word "piracy", the use of which, the motion by the defense stated, would serve no purpose but to misguide and inflame the jury. 
The plaintiff argued that the common use of the terms when referring to copyright infringement should invalidate the motion, but the judge did not concur. Anti-piracy in file sharing Today, copyright infringement is often facilitated by the use of file sharing; infringement accounted for 23.8% of all internet traffic in 2013. In an effort to cut down on this, film and music corporations both large and small have issued DMCA takedown notices, filed lawsuits, and pressed criminal prosecution of those who host these file-sharing services. Anti-counterfeiting and gun control The EURion constellation is used by many countries to prevent color photocopiers from producing counterfeit currency. The Counterfeit Deterrence System is used to prevent counterfeit bills from being produced by image editing software. Similar technology has been proposed to prevent 3D printing of firearms, for reasons of gun control rather than copyright.
Population genetics
Population genetics is a subfield of genetics that deals with genetic differences within and among populations, and is a part of evolutionary biology. Studies in this branch of biology examine such phenomena as adaptation, speciation, and population structure. Population genetics was a vital ingredient in the emergence of the modern evolutionary synthesis. Its primary founders were Sewall Wright, J. B. S. Haldane and Ronald Fisher, who also laid the foundations for the related discipline of quantitative genetics. Traditionally a highly mathematical discipline, modern population genetics encompasses theoretical, laboratory, and field work. Population genetic models are used both for statistical inference from DNA sequence data and for proof/disproof of concept. What sets population genetics apart from newer, more phenotypic approaches to modelling evolution, such as evolutionary game theory and adaptive dynamics, is its emphasis on such genetic phenomena as dominance, epistasis, the degree to which genetic recombination breaks linkage disequilibrium, and the random phenomena of mutation and genetic drift. This makes it appropriate for comparison to population genomics data. History Population genetics began as a reconciliation of Mendelian inheritance and biostatistics models. Natural selection will only cause evolution if there is enough genetic variation in a population. Before the discovery of Mendelian genetics, one common hypothesis was blending inheritance. But with blending inheritance, genetic variance would be rapidly lost, making evolution by natural or sexual selection implausible. The Hardy–Weinberg principle provides the solution to how variation is maintained in a population with Mendelian inheritance. According to this principle, the frequencies of alleles (variations in a gene) will remain constant in the absence of selection, mutation, migration and genetic drift. The next key step was the work of the British biologist and statistician Ronald Fisher. In a series of papers starting in 1918 and culminating in his 1930 book The Genetical Theory of Natural Selection, Fisher showed that the continuous variation measured by the biometricians could be produced by the combined action of many discrete genes, and that natural selection could change allele frequencies in a population, resulting in evolution. In a series of papers beginning in 1924, another British geneticist, J. B. S. Haldane, worked out the mathematics of allele frequency change at a single gene locus under a broad range of conditions. Haldane also applied statistical analysis to real-world examples of natural selection, such as peppered moth evolution and industrial melanism, and showed that selection coefficients could be larger than Fisher assumed, leading to more rapid adaptive evolution as a camouflage strategy following increased pollution. The American biologist Sewall Wright, who had a background in animal breeding experiments, focused on combinations of interacting genes, and the effects of inbreeding on small, relatively isolated populations that exhibited genetic drift. In 1932 Wright introduced the concept of an adaptive landscape and argued that genetic drift and inbreeding could drive a small, isolated sub-population away from an adaptive peak, allowing natural selection to drive it towards different adaptive peaks. The work of Fisher, Haldane and Wright founded the discipline of population genetics. 
This integrated natural selection with Mendelian genetics, which was the critical first step in developing a unified theory of how evolution worked. John Maynard Smith was Haldane's pupil, whilst W. D. Hamilton was influenced by the writings of Fisher. The American George R. Price worked with both Hamilton and Maynard Smith. American Richard Lewontin and Japanese Motoo Kimura were influenced by Wright and Haldane. Modern synthesis The mathematics of population genetics were originally developed as the beginning of the modern synthesis. Authors such as Beatty have asserted that population genetics defines the core of the modern synthesis. For the first few decades of the 20th century, most field naturalists continued to believe that Lamarckism and orthogenesis provided the best explanation for the complexity they observed in the living world. During the modern synthesis, these ideas were purged, and only evolutionary causes that could be expressed in the mathematical framework of population genetics were retained. Consensus was reached as to which evolutionary factors might influence evolution, but not as to the relative importance of the various factors. Theodosius Dobzhansky, a postdoctoral worker in T. H. Morgan's lab, had been influenced by the work on genetic diversity by Russian geneticists such as Sergei Chetverikov. He helped to bridge the divide between the foundations of microevolution developed by the population geneticists and the patterns of macroevolution observed by field biologists, with his 1937 book Genetics and the Origin of Species. Dobzhansky examined the genetic diversity of wild populations and showed that, contrary to the assumptions of the population geneticists, these populations had large amounts of genetic diversity, with marked differences between sub-populations. The book also took the highly mathematical work of the population geneticists and put it into a more accessible form. Many more biologists were influenced by population genetics via Dobzhansky than were able to read the highly mathematical works in the original. In Great Britain E. B. Ford, the pioneer of ecological genetics, continued throughout the 1930s and 1940s to empirically demonstrate the power of selection due to ecological factors including the ability to maintain genetic diversity through genetic polymorphisms such as human blood types. Ford's work, in collaboration with Fisher, contributed to a shift in emphasis during the modern synthesis towards natural selection as the dominant force. Neutral theory and origin-fixation dynamics The original, modern synthesis view of population genetics assumes that mutations provide ample raw material, and focuses only on the change in frequency of alleles within populations. The main processes influencing allele frequencies are natural selection, genetic drift, gene flow and recurrent mutation. Fisher and Wright had some fundamental disagreements about the relative roles of selection and drift. The availability of molecular data on all genetic differences led to the neutral theory of molecular evolution. In this view, many mutations are deleterious and so never observed, and most of the remainder are neutral, i.e. are not under selection. With the fate of each neutral mutation left to chance (genetic drift), the direction of evolutionary change is driven by which mutations occur, and so cannot be captured by models of change in the frequency of (existing) alleles alone. 
The origin-fixation view of population genetics generalizes this approach beyond strictly neutral mutations, and sees the rate at which a particular change happens as the product of the mutation rate and the fixation probability. Four processes Selection Natural selection, which includes sexual selection, is the fact that some traits make it more likely for an organism to survive and reproduce. Population genetics describes natural selection by defining fitness as a propensity or probability of survival and reproduction in a particular environment. Fitness is normally given by the symbol w = 1 - s, where s is the selection coefficient. Natural selection acts on phenotypes, so population genetic models assume relatively simple relationships to predict the phenotype and hence fitness from the allele at one or a small number of loci. In this way, natural selection converts differences in the fitness of individuals with different phenotypes into changes in allele frequency in a population over successive generations. Before the advent of population genetics, many biologists doubted that small differences in fitness were sufficient to make a large difference to evolution. Population geneticists addressed this concern in part by comparing selection to genetic drift. Selection can overcome genetic drift when s is greater than 1 divided by the effective population size. When this criterion is met, the probability that a new advantageous mutant becomes fixed is approximately equal to 2s. The time until fixation of such an allele is approximately . Dominance Dominance means that the phenotypic and/or fitness effect of one allele at a locus depends on which allele is present in the second copy for that locus. Consider three genotypes at one locus, with fitness values of 1 for the A1A1 homozygote, 1 - hs for the A1A2 heterozygote and 1 - s for the A2A2 homozygote, where s is the selection coefficient and h is the dominance coefficient. The value of h yields the following information: h = 0 means A1 is dominant and A2 is recessive; h = 1 means A2 is dominant and A1 is recessive; 0 < h < 1 gives incomplete dominance; h < 0 gives overdominance; and h > 1 gives underdominance. Epistasis Epistasis means that the phenotypic and/or fitness effect of an allele at one locus depends on which alleles are present at other loci. Selection does not act on a single locus, but on a phenotype that arises through development from a complete genotype. However, many population genetics models of sexual species are "single locus" models, where the fitness of an individual is calculated as the product of the contributions from each of its loci, effectively assuming no epistasis. In fact, the genotype-to-fitness landscape is more complex. Population genetics must either model this complexity in detail, or capture it by some simpler average rule. Empirically, beneficial mutations tend to have a smaller fitness benefit when added to a genetic background that already has high fitness: this is known as diminishing returns epistasis. When deleterious mutations also have a smaller fitness effect on high-fitness backgrounds, this is known as "synergistic epistasis". However, the effect of deleterious mutations tends on average to be very close to multiplicative, or can even show the opposite pattern, known as "antagonistic epistasis". Synergistic epistasis is central to some theories of the purging of mutation load and to the evolution of sexual reproduction. Mutation The genetic process of mutation takes place within an individual, resulting in heritable changes to the genetic material. This process is often characterized by a description of the starting and ending states, or the kind of change that has happened at the level of DNA (e.g., 
a T-to-C mutation, a 1-bp deletion), of genes or proteins (e.g., a null mutation, a loss-of-function mutation), or at a higher phenotypic level (e.g., a red-eye mutation). Single-nucleotide changes are frequently the most common type of mutation, but many other types of mutation are possible, and they occur at widely varying rates that may show systematic asymmetries or biases (mutation bias). Mutations can involve large sections of DNA becoming duplicated, usually through genetic recombination. This leads to copy-number variation within a population. Duplications are a major source of raw material for evolving new genes. Other types of mutation occasionally create new genes from previously noncoding DNA. In the distribution of fitness effects (DFE) for new mutations, only a minority of mutations are beneficial. Mutations with gross effects are typically deleterious. Studies in the fly Drosophila melanogaster suggest that if a mutation changes a protein produced by a gene, this will probably be harmful, with about 70 percent of these mutations having damaging effects, and the remainder being either neutral or weakly beneficial. This biological process of mutation is represented in population-genetic models in one of two ways: either as a deterministic pressure of recurrent mutation on allele frequencies, or as a source of variation. In deterministic theory, evolution begins with a predetermined set of alleles and proceeds by shifts in continuous frequencies, as if the population is infinite. The occurrence of mutations in individuals is represented by a population-level "force" or "pressure" of mutation, i.e., the force of innumerable events of mutation with a scaled magnitude u applied to shifting frequencies f(A1) to f(A2). For instance, in the classic mutation–selection balance model, the force of mutation pressure pushes the frequency of an allele upward, and selection against its deleterious effects pushes the frequency downward, so that a balance is reached at equilibrium, given (in the simplest case) by f = u/s. This concept of mutation pressure is mostly useful for considering the implications of deleterious mutation, such as the mutation load and its implications for the evolution of the mutation rate. Transformation of populations by mutation pressure is unlikely. Haldane argued that it would require high mutation rates unopposed by selection, and Kimura concluded even more pessimistically that even this was unlikely, as the process would take too long (see evolution by mutation pressure). However, evolution by mutation pressure is possible under some circumstances and has long been suggested as a possible cause for the loss of unused traits. For example, pigments are no longer useful when animals live in the darkness of caves, and tend to be lost. An experimental example involves the loss of sporulation in laboratory populations of B. subtilis. Sporulation is a complex trait encoded by many loci, such that the mutation rate for loss of the trait was estimated as an unusually high value. Loss of sporulation in this case can occur by recurrent mutation, without requiring selection for the loss of sporulation ability. When there is no selection for loss of function, the speed at which loss evolves depends more on the mutation rate than it does on the effective population size, indicating that it is driven more by mutation than by genetic drift. 
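The equilibrium f = u/s can be checked with a short deterministic iteration. A minimal sketch, assuming illustrative parameter values (the u and s below are not taken from any particular study):

    # Deterministic mutation-selection balance: a minimal sketch.
    u = 1e-5   # rate of mutation toward the deleterious allele, per generation
    s = 0.01   # selection coefficient against the deleterious allele
    q = 0.0    # frequency of the deleterious allele
    for _ in range(20000):
        q = q * (1 - s) / (1 - s * q)  # selection removes carriers
        q = q + u * (1 - q)            # recurrent mutation reintroduces the allele
    print(q, u / s)  # q settles near the predicted equilibrium f = u/s = 0.001

The two forces cancel where the per-generation loss to selection (about s times q) equals the gain from mutation (about u), which is exactly the balance described above.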
The role of mutation as a source of novelty is different from these classical models of mutation pressure. When population-genetic models include a rate-dependent process of mutational introduction or origination, i.e., a process that introduces new alleles, including neutral and beneficial ones, then the properties of mutation may have a more direct impact on the rate and direction of evolution, even if the rate of mutation is very low. That is, the spectrum of mutation may become very important, particularly mutation biases, predictable differences in the rates of occurrence for different types of mutations, because bias in the introduction of variation can impose biases on the course of evolution. Mutation plays a key role in other classical and recent theories, including Muller's ratchet, subfunctionalization, Eigen's concept of an error catastrophe and Lynch's mutational hazard hypothesis. Genetic drift Genetic drift is a change in allele frequencies caused by random sampling. That is, the alleles in the offspring are a random sample of those in the parents. Genetic drift may cause gene variants to disappear completely, and thereby reduce genetic variability. In contrast to natural selection, which makes gene variants more common or less common depending on their reproductive success, the changes due to genetic drift are not driven by environmental or adaptive pressures, and are equally likely to make an allele more common as less common. The effect of genetic drift is larger for alleles present in few copies than when an allele is present in many copies. The population genetics of genetic drift are described using either branching processes or a diffusion equation describing changes in allele frequency. These approaches are usually applied to the Wright-Fisher and Moran models of population genetics. Assuming genetic drift is the only evolutionary force acting on an allele, after t generations in many replicated populations, starting with allele frequencies of p and q, the variance in allele frequency across those populations is pq(1 - (1 - 1/(2N))^t), where N is the population size. Ronald Fisher held the view that genetic drift plays at most a minor role in evolution, and this remained the dominant view for several decades. No population genetics perspective has ever given genetic drift a central role by itself, but some have made genetic drift important in combination with another non-selective force. The shifting balance theory of Sewall Wright held that the combination of population structure and genetic drift was important. Motoo Kimura's neutral theory of molecular evolution claims that most genetic differences within and between populations are caused by the combination of neutral mutations and genetic drift. The role of genetic drift by means of sampling error in evolution has been criticized by John H. Gillespie and Will Provine, who argue that selection on linked sites is a more important stochastic force, doing the work traditionally ascribed to genetic drift by means of sampling error. The mathematical properties of genetic draft are different from those of genetic drift. The direction of the random change in allele frequency is autocorrelated across generations. 
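Both the drift variance above and the roughly 2s fixation probability from the selection section can be checked with a small Wright-Fisher simulation. A minimal sketch, assuming illustrative parameters (N, s and the replicate counts below are arbitrary choices):

    import numpy as np

    # Wright-Fisher sampling: each generation, 2N gene copies are drawn
    # binomially from the previous generation's selection-weighted frequency.
    rng = np.random.default_rng(1)

    def wright_fisher(N, p, s=0.0, generations=10**6):
        for _ in range(generations):
            if p == 0.0 or p == 1.0:
                break  # allele lost or fixed
            w = p * (1 + s) / (p * (1 + s) + (1 - p))  # selection shifts the mean
            p = rng.binomial(2 * N, w) / (2 * N)       # drift: random sampling
        return p

    N, p0, t = 100, 0.5, 50
    runs = np.array([wright_fisher(N, p0, generations=t) for _ in range(2000)])
    print(runs.var(), p0 * (1 - p0) * (1 - (1 - 1 / (2 * N)) ** t))  # close match

    s = 0.05
    trials = [wright_fisher(N, 1 / (2 * N), s) for _ in range(5000)]
    print(sum(p == 1.0 for p in trials) / 5000, 2 * s)  # new mutant fixes with probability ~2s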
Gene flow Because of physical barriers to migration, along with the limited tendency of individuals to move or spread (vagility) and their tendency to remain at or return to their natal place (philopatry), natural populations rarely all interbreed, as may be assumed in theoretical random models (panmixy). There is usually a geographic range within which individuals are more closely related to one another than to those randomly selected from the general population. This is described as the extent to which a population is genetically structured. Genetic structuring can be caused by migration due to historical climate change, species range expansion or current availability of habitat. Gene flow is hindered by mountain ranges, oceans and deserts, or even human-made structures such as the Great Wall of China, which has hindered the flow of plant genes. Gene flow is the exchange of genes between populations or species, breaking down the structure. Examples of gene flow within a species include the migration and then breeding of organisms, or the exchange of pollen. Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. Population genetic models can be used to identify which populations show significant genetic isolation from one another, and to reconstruct their history. Subjecting a population to isolation leads to inbreeding depression. Migration into a population can introduce new genetic variants, potentially contributing to evolutionary rescue. If a significant proportion of individuals or gametes migrate, it can also change allele frequencies, e.g. giving rise to migration load. In the presence of gene flow, other barriers to hybridization between two diverging populations of an outcrossing species are required for the populations to become new species. Horizontal gene transfer Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among prokaryotes. In medicine, this contributes to the spread of antibiotic resistance, as when one bacterium acquires resistance genes it can rapidly transfer them to other species. Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean beetle Callosobruchus chinensis may also have occurred. An example of larger-scale transfers are the eukaryotic bdelloid rotifers, which appear to have received a range of genes from bacteria, fungi, and plants. Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains. Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and prokaryotes, during the acquisition of chloroplasts and mitochondria. Linkage If all genes are in linkage equilibrium, the effect of an allele at one locus can be averaged across the gene pool at other loci. In reality, one allele is frequently found in linkage disequilibrium with genes at other loci, especially with genes located nearby on the same chromosome. Recombination breaks up this linkage disequilibrium too slowly to avoid genetic hitchhiking, where an allele at one locus rises to high frequency because it is linked to an allele under selection at a nearby locus. Linkage also slows down the rate of adaptation, even in sexual populations. The effect of linkage disequilibrium in slowing down the rate of adaptive evolution arises from a combination of the Hill–Robertson effect (delays in bringing beneficial mutations together) and background selection (delays in separating beneficial mutations from deleterious hitchhikers). Linkage is a problem for population genetic models that treat one gene locus at a time. It can, however, be exploited as a method for detecting the action of natural selection via selective sweeps. 
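Linkage disequilibrium between two loci can be quantified directly from haplotype frequencies. A small worked sketch, assuming made-up frequencies for two loci with alleles A/a and B/b:

    # Linkage disequilibrium from haplotype frequencies (illustrative numbers).
    p_AB, p_Ab, p_aB, p_ab = 0.50, 0.10, 0.10, 0.30  # must sum to 1
    p_A = p_AB + p_Ab                                # allele frequency of A
    p_B = p_AB + p_aB                                # allele frequency of B
    D = p_AB - p_A * p_B                             # disequilibrium coefficient
    r2 = D**2 / (p_A * (1 - p_A) * p_B * (1 - p_B))  # squared correlation of loci
    print(D, r2)  # D = 0.14, r2 = 0.34: far from linkage equilibrium

At linkage equilibrium D is zero; under recombination at rate c, D decays by a factor of (1 - c) each generation, which is why the decay described above can be slow for tightly linked loci.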
In the extreme case of an asexual population, linkage is complete, and population genetic equations can be derived and solved in terms of a travelling wave of genotype frequencies along a simple fitness landscape. Most microbes, such as bacteria, are asexual. The population genetics of their adaptation have two contrasting regimes. When the product of the beneficial mutation rate and population size is small, asexual populations follow a "successional regime" of origin-fixation dynamics, with the adaptation rate strongly dependent on this product. When the product is much larger, asexual populations follow a "concurrent mutations" regime with the adaptation rate less dependent on the product, characterized by clonal interference and the appearance of a new beneficial mutation before the last one has fixed. Applications Explaining levels of genetic variation Neutral theory predicts that the level of nucleotide diversity in a population will be proportional to the product of the population size and the neutral mutation rate. The fact that levels of genetic diversity vary much less than population sizes do is known as the "paradox of variation". While high levels of genetic diversity were one of the original arguments in favor of neutral theory, the paradox of variation has been one of the strongest arguments against neutral theory. It is clear that levels of genetic diversity vary greatly within a species as a function of local recombination rate, due to both genetic hitchhiking and background selection. Most current solutions to the paradox of variation invoke some level of selection at linked sites. For example, one analysis suggests that larger populations have more selective sweeps, which remove more neutral genetic diversity. A negative correlation between mutation rate and population size may also contribute. Life history affects genetic diversity more than population history does, e.g. r-strategists have more genetic diversity. Detecting selection Population genetics models are used to infer which genes are undergoing selection. One common approach is to look for regions of high linkage disequilibrium and low genetic variance along the chromosome, to detect recent selective sweeps. A second common approach is the McDonald–Kreitman test, which compares the amount of variation within a species (polymorphism) to the divergence between species (substitutions) at two types of sites, one assumed to be neutral. Typically, synonymous sites are assumed to be neutral. Genes undergoing positive selection have an excess of divergent sites relative to polymorphic sites. The test can also be used to obtain a genome-wide estimate of the proportion of substitutions that are fixed by positive selection, α. According to the neutral theory of molecular evolution, this number should be near zero. High numbers have therefore been interpreted as a genome-wide falsification of neutral theory. Demographic inference The simplest test for population structure in a sexually reproducing, diploid species is to see whether genotype frequencies follow Hardy-Weinberg proportions as a function of allele frequencies. For example, in the simplest case of a single locus with two alleles denoted A and a at frequencies p and q, random mating predicts freq(AA) = p² for the AA homozygotes, freq(aa) = q² for the aa homozygotes, and freq(Aa) = 2pq for the heterozygotes. In the absence of population structure, Hardy-Weinberg proportions are reached within 1–2 generations of random mating. More typically, there is an excess of homozygotes, indicative of population structure. The extent of this excess can be quantified as the inbreeding coefficient, F. 
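These quantities can be computed directly from genotype counts and subpopulation allele frequencies. A minimal sketch with made-up numbers, using the standard estimators F = 1 - (observed heterozygosity)/(expected heterozygosity) and Wright's FST = (HT - HS)/HT:

    # Hardy-Weinberg expectations, F, and FST: a sketch with made-up counts.
    n_AA, n_Aa, n_aa = 380, 190, 430
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)           # allele frequency of A
    q = 1 - p
    print(p**2 * n, 2 * p * q * n, q**2 * n)  # expected AA, Aa, aa counts
    F = 1 - (n_Aa / n) / (2 * p * q)          # heterozygote deficit, here ~0.62
    print(F)                                  # F > 0 suggests structure or inbreeding

    # FST from allele frequencies in K = 3 hypothetical subpopulations:
    freqs = [0.2, 0.5, 0.8]
    p_bar = sum(freqs) / len(freqs)                         # pooled frequency
    H_T = 2 * p_bar * (1 - p_bar)                           # total expected heterozygosity
    H_S = sum(2 * f * (1 - f) for f in freqs) / len(freqs)  # mean within subpopulations
    print((H_T - H_S) / H_T)  # FST = 0.24: variance explained by structure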
More typically, there is an excess of homozygotes, indicative of population structure. The extent of this excess can be quantified as the inbreeding coefficient, F. Individuals can be clustered into K subpopulations. The degree of population structure can then be calculated using FST, which is a measure of the proportion of genetic variance that can be explained by population structure. Genetic population structure can then be related to geographic structure, and genetic admixture can be detected. Coalescent theory relates genetic diversity in a sample to the demographic history of the population from which it was taken. It normally assumes neutrality, so sequences from more neutrally evolving portions of genomes are selected for such analyses. It can be used to infer the relationships between species (phylogenetics), as well as the population structure, demographic history (e.g. population bottlenecks, population growth), biological dispersal, source–sink dynamics and introgression within a species. Another approach to demographic inference relies on the allele frequency spectrum. Evolution of genetic systems By assuming that there are loci that control the genetic system itself, population geneticists have created models to describe the evolution of dominance and other forms of robustness, the evolution of sexual reproduction and recombination rates, the evolution of mutation rates, the evolution of evolutionary capacitors, the evolution of costly signalling traits, the evolution of ageing, and the evolution of co-operation. For example, most mutations are deleterious, so the optimal mutation rate for a species may be a trade-off between the damage from a high deleterious mutation rate and the metabolic costs of maintaining systems to reduce the mutation rate, such as DNA repair enzymes. One important aspect of such models is that selection is only strong enough to purge deleterious mutations and hence overpower mutational bias towards degradation if the selection coefficient s is greater than the inverse of the effective population size. This is known as the drift barrier and is related to the nearly neutral theory of molecular evolution. Drift barrier theory predicts that species with large effective population sizes will have highly streamlined, efficient genetic systems, while those with small population sizes will have bloated and complex genomes containing, for example, introns and transposable elements. However, somewhat paradoxically, species with large population sizes might be so tolerant to the consequences of certain types of errors that they evolve higher error rates, e.g. in transcription and translation, than small populations.
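The drift-barrier argument above reduces to a simple comparison between the selection coefficient and the inverse of the effective population size. The Python sketch below is illustrative only (the numbers are hypothetical); it classifies a mutation as effectively neutral or visible to selection under this rule.

```python
# Illustrative sketch of the drift barrier described above: selection can
# overpower drift only when |s| exceeds roughly 1/Ne.

def selection_regime(s, Ne):
    """Classify a mutation with selection coefficient s in a population
    of effective size Ne."""
    if abs(s) < 1.0 / Ne:
        return "effectively neutral (drift dominates)"
    return "visible to selection"

# The same weakly deleterious mutation (s = -1e-6) falls on opposite sides
# of the drift barrier in a small versus a large population:
print(selection_regime(-1e-6, Ne=10_000))       # effectively neutral
print(selection_regime(-1e-6, Ne=100_000_000))  # visible to selection
```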
Biology and health sciences
Basics_4
Biology
219367
https://en.wikipedia.org/wiki/Drowning
Drowning
Drowning is a type of suffocation induced by the submersion of the mouth and nose in a liquid. Submersion injury refers to both drowning and near-drowning incidents. Most instances of fatal drowning occur alone or in situations where others present are either unaware of the victim's situation or unable to offer assistance. After successful resuscitation, drowning victims may experience breathing problems, confusion, or unconsciousness. Occasionally, victims may not begin experiencing these symptoms until several hours after they are rescued. An incident of drowning can also cause further complications for victims due to low body temperature, aspiration, or acute respiratory distress syndrome (respiratory failure from lung inflammation). Drowning is more likely to happen when spending extended periods of time near large bodies of water. Risk factors for drowning include alcohol use, drug use, epilepsy, minimal swim training or a complete lack of training, and, in the case of children, a lack of supervision. Common drowning locations include natural and man-made bodies of water, bathtubs, and swimming pools. Drowning occurs when a person spends too much time with their nose and mouth submerged in a liquid to the point of being unable to breathe. If this is not followed by an exit to the surface, low oxygen levels and excess carbon dioxide in the blood trigger a neurological state of breathing emergency, which results in increased physical distress and occasional contractions of the vocal folds. Significant amounts of water usually only enter the lungs later in the process. While the word "drowning" is commonly associated with fatal results, drowning may be classified into three different types: drowning that results in death, drowning that results in long-lasting health problems, and drowning that results in no health complications. Sometimes the term "near-drowning" is used in the latter cases. Among children who survive, health problems occur in about 7.5% of cases. Steps to prevent drowning include teaching children and adults to swim and to recognise unsafe water conditions, never swimming alone, use of personal flotation devices on boats and when swimming in unfavourable conditions, limiting or removing access to water (such as with fencing of swimming pools), and exercising appropriate supervision. Treatment of victims who are not breathing should begin with opening the airway and providing five breaths of mouth-to-mouth resuscitation. Cardiopulmonary resuscitation (CPR) is recommended for a person whose heart has stopped beating and has been underwater for less than an hour. Causes A major contributor to drowning is the inability to swim. Other contributing factors include the state of the water itself, distance from a solid footing, physical impairment, or prior loss of consciousness. Anxiety brought on by fear of drowning or of water itself can lead to exhaustion, thus increasing the chances of drowning. Approximately 90% of drownings take place in freshwater (rivers, lakes, and a relatively small number of swimming pools); the remaining 10% take place in seawater. Drownings in other fluids are rare and often related to industrial accidents. In New Zealand's early colonial history, so many settlers died while trying to cross the rivers that drowning was called "the New Zealand death". People have drowned in very shallow water while lying face down. Death can occur due to complications following an initial drowning. Inhaled fluid can act as an irritant inside the lungs.
Even small quantities can cause the extrusion of liquid into the lungs (pulmonary edema) over the following hours; this reduces the ability to exchange air and can lead to a person "drowning in their own body fluid". Vomit and certain poisonous vapors or gases (as in chemical warfare) can have a similar effect. The reaction can take place up to 72 hours after the initial incident and may lead to serious injury or death. Risk factors Many behavioral and physical factors are related to drowning: Drowning is the most common cause of death for people with seizure disorders, largely in bathtubs. People with epilepsy are more likely to die due to accidents such as drowning, and this risk is especially elevated in low- and middle-income countries compared to high-income countries. The use of alcohol increases the risk of drowning across developed and developing nations. Alcohol is involved in approximately 50% of fatal drownings, and 35% of non-fatal drownings. Inability to swim can lead to drowning. Participation in formal swimming lessons can reduce this risk. The optimal age to start the lessons is childhood, between one and four years of age. Feeling overly tired reduces swimming performance. This exhaustion can be aggravated by anxious movements motivated by fear during or in anticipation of drowning. An overconfident appraisal of one's own physical capabilities can lead to "swimming out too far" and exhaustion before returning to solid footing. Free access to water can be hazardous, especially to young children. Barriers can prevent young children from gaining access to the water. Ineffective supervision, since drowning can occur anywhere there is water, even in the presence of lifeguards. Risk can vary with location depending on age. Children between one and four more commonly drown in home swimming pools than elsewhere. Drownings in natural water settings increase with age. More than half of drownings among those fifteen years and older occur in natural water environments. Familial or genetic history of sudden cardiac arrest (SCA) or sudden cardiac death (SCD) can predispose children to drown. Extensive genetic testing and/or consultation with a cardiologist should be done when there is a high suspicion of familial history and/or clinical evidence of sudden cardiac arrest or sudden cardiac death. Individuals with undetected primary cardiac arrhythmias are also at risk, as cold water immersion or aquatic exercise can trigger these arrhythmias. Population groups at risk in the US are the old and the young. Youth: drowning rates are highest for children under five years of age and people fifteen to twenty-four years of age. Minorities: the fatal unintentional drowning rate for African Americans above the age of 29 between 1999 and 2010 was statistically significantly higher than that of white people above the age of 29. The fatal drowning rate of African American children of ages from five to fourteen is almost three times that of white children in the same age range, and 5.5 times higher in swimming pools. These disparities might be associated with a lack of basic swimming education in some minority populations. Freediving Additional causes of drowning can arise during freediving activities: Ascent blackout, also called deep water blackout, is caused by hypoxia during ascent from depth.
The partial pressure of oxygen in the lungs under pressure at the bottom of a deep free dive is adequate to support consciousness but drops below the blackout threshold as the water pressure decreases on the ascent. It usually occurs when arriving near the surface as the pressure approaches normal atmospheric pressure. Shallow water blackout is caused by hyperventilation prior to swimming or diving. The primary urge to breathe is triggered by rising carbon dioxide (CO2) levels in the bloodstream. The body detects CO2 levels accurately and relies on this to control breathing. Hyperventilation reduces the carbon dioxide content of the blood but leaves the diver susceptible to a sudden loss of consciousness without warning from hypoxia. There is no bodily sensation that warns a diver of an impending blackout, and people (often capable swimmers swimming under the surface in shallow water) become unconscious and drown quietly without alerting anyone to the problem; they are typically found at the bottom. Pathophysiology Drowning is split into four stages: Breath-hold under voluntary control until the urge to breathe due to hypercapnia becomes overwhelming Fluid is swallowed and/or aspirated into the airways Cerebral anoxia stops breathing and aspiration Cerebral injury due to anoxia becomes irreversible People who do not know how to swim can struggle on the surface of the water for only 20 to 60 seconds before being submerged. In the early stages of drowning, a person holds their breath to prevent water from entering their lungs. When this is no longer possible, a small amount of water entering the trachea causes a muscular spasm that seals the airway and prevents further passage of water. If the process is not interrupted, loss of consciousness due to hypoxia is followed by cardiac arrest. Oxygen deprivation A conscious person will hold their breath and will try to access air, often resulting in panic, including rapid body movement. This uses up more oxygen in the bloodstream and reduces the time until unconsciousness. The person can voluntarily hold their breath for some time, but the breathing reflex will increase until the person tries to breathe, even when submerged. The breathing reflex in the human body is weakly related to the amount of oxygen in the blood but strongly related to the amount of carbon dioxide. During an apnea, the oxygen in the body is used by the cells and excreted as carbon dioxide. Thus, the level of oxygen in the blood decreases, and the level of carbon dioxide increases. Increasing carbon dioxide levels lead to a stronger and stronger breathing reflex, up to the breath-hold breakpoint, at which the person can no longer voluntarily hold their breath. This typically occurs at an arterial partial pressure of carbon dioxide of 55 mm Hg but may differ significantly between people. When submerged in cold water, breath-holding time is significantly shorter than in air due to the cold shock response. The breath-hold breakpoint can be suppressed or delayed, either intentionally or unintentionally. Hyperventilation before any dive, deep or shallow, flushes out carbon dioxide in the blood, resulting in a dive commencing with an abnormally low carbon dioxide level: a potentially dangerous condition known as hypocapnia. The level of carbon dioxide in the blood after hyperventilation may then be insufficient to trigger the breathing reflex later in the dive. Following this, a blackout may occur before the diver feels an urgent need to breathe.
This can occur at any depth and is common in distance breath-hold divers in swimming pools. Both deep and distance free divers often use hyperventilation to flush out carbon dioxide from the lungs to suppress the breathing reflex for longer. It is important not to mistake this for an attempt to increase the body's oxygen store: the body at rest is fully oxygenated by normal breathing and cannot take on any more. Breath-holding in water should always be supervised by a second person, since hyperventilation increases the risk of shallow water blackout because insufficient carbon dioxide levels in the blood fail to trigger the breathing reflex. A continued lack of oxygen in the brain, hypoxia, will quickly render a person unconscious, usually around a blood partial pressure of oxygen of 25–30 mmHg. An unconscious person rescued with an airway still sealed from laryngospasm stands a good chance of a full recovery. Artificial respiration is also much more effective without water in the lungs. At this point, the person stands a good chance of recovery if attended to within minutes. More than 10% of drownings may involve laryngospasm, but the evidence suggests that it is not usually effective at preventing water from entering the trachea. The lack of water found in the lungs during autopsy does not necessarily mean there was no water at the time of drowning, as small amounts of freshwater are absorbed into the bloodstream. Hypercapnia and hypoxia both contribute to laryngeal relaxation, after which the airway is open through the trachea. There is also bronchospasm and mucous production in the bronchi associated with laryngospasm, and these may prevent water entry at terminal relaxation. The hypoxemia and acidosis caused by asphyxia in drowning affect various organs. There can be central nervous system damage, cardiac arrhythmia, pulmonary injury, reperfusion injury, and multiple-organ secondary injury with prolonged tissue hypoxia. A lack of oxygen or chemical changes in the lungs may cause the heart to stop beating. This cardiac arrest stops the flow of blood and thus stops the transport of oxygen to the brain. Cardiac arrest used to be the traditional point of death, but at this point there is still a chance of recovery. The brain cannot survive long without oxygen, and the continued lack of oxygen in the blood, combined with the cardiac arrest, will lead to the deterioration of brain cells, causing first brain damage and, after about six minutes, brain death, from which recovery is generally considered impossible. Hypothermia of the central nervous system may prolong this: in cold water below 6 °C, the brain may be cooled sufficiently to allow for a survival time of more than an hour. The extent of central nervous system injury largely determines survival and the long-term consequences of drowning. In the case of children, most survivors are found within 2 minutes of immersion, and most fatalities are found after 10 minutes or more. Water aspiration If water enters the airways of a conscious person, the person will try to cough up the water or swallow it, often inhaling more water involuntarily. When water enters the larynx or trachea, both conscious and unconscious people experience laryngospasm, in which the vocal cords constrict, sealing the airway. This prevents water from entering the lungs. Because of this laryngospasm, in the initial phase of drowning, water enters the stomach, and very little water enters the lungs.
Though laryngospasm prevents water from entering the lungs, it also interferes with breathing. In most people, the laryngospasm relaxes sometime after unconsciousness due to hypoxia in the larynx, and water can then enter the lungs, causing a "wet drowning". However, about 7–10% of people maintain this seal until cardiac arrest. This has been called "dry drowning", as no water enters the lungs. In forensic pathology, water in the lungs indicates that the person was still alive at the point of submersion. An absence of water in the lungs may indicate either a dry drowning or a death before submersion. Aspirated water that reaches the alveoli destroys the pulmonary surfactant, which causes pulmonary edema and decreased lung compliance, compromising oxygenation in affected parts of the lungs. This is associated with metabolic acidosis and secondary fluid and electrolyte shifts. During alveolar fluid exchange, diatoms present in the water may pass through the alveolar wall into the capillaries to be carried to internal organs. The presence of these diatoms may be diagnostic of drowning. Of people who have survived drowning, almost one-third will experience complications such as acute lung injury (ALI) or acute respiratory distress syndrome (ARDS). ALI/ARDS can be triggered by pneumonia, sepsis, and water aspiration. These conditions are life-threatening disorders that can result in death if not treated promptly. During drowning, aspirated water enters the lung tissues, causes a reduction in pulmonary surfactant, obstructs ventilation, and triggers a release of inflammatory mediators, which results in hypoxia. Specifically, upon reaching the alveoli, the hypotonic liquid found in freshwater dilutes pulmonary surfactant, destroying the substance. Comparatively, aspiration of hypertonic seawater draws liquid from the plasma into the alveoli and similarly damages surfactant by disrupting the alveolar-capillary membrane. Still, there is no clinical difference between salt and freshwater drowning. Once someone has reached definitive care, supportive care strategies such as mechanical ventilation can help to reduce the complications of ALI/ARDS. Whether a person drowns in freshwater or salt water makes no difference in respiratory management or its outcome. People who drown in freshwater may experience worse hypoxemia early in their treatment; however, this initial difference is short-lived. Cold-water immersion Submerging the face in cold water triggers the diving reflex, common to air-breathing vertebrates, especially marine mammals such as whales and seals. This reflex protects the body by putting it into energy-saving mode to maximise the time it can stay underwater. The strength of this reflex is greater in colder water and has three principal effects: Bradycardia, a slowing of the heart rate to less than 60 beats per minute. Peripheral vasoconstriction, the restriction of the blood flow to the extremities to increase the blood and oxygen supply to the vital organs, especially the brain. Blood shift, the shifting of blood to the thoracic cavity, the region of the chest between the diaphragm and the neck, to avoid the collapse of the lungs under higher pressure during deeper dives. The reflex action is automatic and allows both a conscious and an unconscious person to survive longer without oxygen underwater than in a comparable situation on dry land.
The exact mechanism for this effect has been debated and may be a result of brain cooling similar to the protective effects seen in people who are treated with deep hypothermia. The actual cause of death in cold or very cold water is usually lethal bodily reactions to increased heat loss and to freezing water, rather than any loss of core body temperature. Of those who die after plunging into freezing seas, around 20% die within 2 minutes from cold shock (uncontrolled rapid breathing and gasping causing water inhalation, a massive increase in blood pressure and cardiac strain leading to cardiac arrest, and panic), and another 50% die within 15–30 minutes from cold incapacitation (loss of use and control of limbs and hands for swimming or gripping, as the body 'protectively' shuts down the peripheral muscles of the limbs to protect its core); exhaustion and unconsciousness then cause drowning, claiming the rest within a similar time. A notable example of this occurred during the sinking of the Titanic, in which most people who entered the water died within 15–30 minutes. Submersion into cold water can induce cardiac arrhythmias (abnormal heart rates) in healthy people, sometimes causing strong swimmers to drown. The physiological effects caused by the diving reflex conflict with the body's cold shock response, which includes a gasp and uncontrollable hyperventilation leading to aspiration of water. While breath-holding triggers a slower heart rate, cold shock activates tachycardia, an increase in heart rate. It is thought that the conflict between these nervous system responses may account for the arrhythmias of cold water submersion. Heat is lost to water far more quickly than to air, so body heat drains rapidly even in 'cool' swimming waters around 70 °F (~20 °C). Cold water can lead to death in as little as one hour, and water temperatures hovering at freezing can lead to death in as little as 15 minutes, because cold water has other lethal effects on the body beyond cooling. Hence, hypothermia is not usually a reason for drowning or the clinical cause of death for those who drown in cold water. Upon submersion into cold water, remaining calm and preventing loss of body heat is paramount. While awaiting rescue, swimming or treading water should be limited to conserve energy, and the person should attempt to remove as much of the body from the water as possible; attaching oneself to a buoyant object can improve the chance of survival should unconsciousness occur. Hypothermia (and cardiac arrest) presents a risk for survivors of immersion. This risk increases if the survivor, feeling well again, tries to get up and move, not realizing their core body temperature is still very low and will take a long time to recover. Most people who experience cold-water drowning do not develop hypothermia quickly enough to decrease cerebral metabolism before ischemia and irreversible hypoxia occur. The neuroprotective effects appear to require very cold water temperatures. Diagnosis The World Health Organization in 2005 defined drowning as "the process of experiencing respiratory impairment from submersion/immersion in liquid." This definition does not imply death, or even the necessity for medical treatment after removing the cause, nor that any fluid enters the lungs. The WHO further classifies drowning outcomes as death, morbidity, and no morbidity. There was also consensus that the terms wet, dry, active, passive, silent, and secondary drowning should no longer be used.
Experts differentiate between distress and drowning. Distress – people in trouble, but who can still float, signal for help, and take action. Drowning – people suffocating and in imminent danger of death within seconds. Forensics Forensic diagnosis of drowning is considered one of the most difficult in forensic medicine. External examination and autopsy findings are often non-specific, and the available laboratory tests are often inconclusive or controversial. The purpose of an investigation is to distinguish whether the death was due to immersion or whether the body was immersed postmortem. The mechanism in acute drowning is hypoxemia and irreversible cerebral anoxia due to submersion in liquid. Drowning would be considered a possible cause of death if the body was recovered from a body of water, near a fluid that could plausibly have caused drowning, or found with the head immersed in a fluid. A medical diagnosis of death by drowning is generally made after other possible causes of death have been excluded by a complete autopsy and toxicology tests. Indications of drowning are seldom unambiguous and may include bloody froth in the airway, water in the stomach, cerebral edema and petrous or mastoid hemorrhage. Some evidence of immersion may be unrelated to the cause of death, and lacerations and abrasions may have occurred before or after immersion or death. Diatoms should normally never be present in human tissue unless water was aspirated. Their presence in tissues such as bone marrow suggests drowning; however, they are present in soil and the atmosphere, and samples may be contaminated. An absence of diatoms does not rule out drowning, as they are not always present in water. A match of diatom shells to those found in the water may provide supporting evidence of the place of death. Drowning in saltwater can leave different concentrations of sodium and chloride ions in the left and right chambers of the heart, but these differences will dissipate if the person survived for some time after the aspiration, or if CPR was attempted, and they have been described in other causes of death. Most autopsy findings relate to asphyxia and are not specific to drowning. The signs of drowning are degraded by decomposition. Large amounts of froth will be present around the mouth and nostrils and in the upper and lower airways in freshly drowned bodies. The volume of froth is much greater in drowning than from other origins. Lung density may be higher than normal, but normal weights are possible after cardiac arrest or vasovagal reflex. The lungs may be overinflated and waterlogged, filling the thoracic cavity. The surface may have a marbled appearance, with darker areas associated with collapsed alveoli interspersed with paler aerated areas. Fluid trapped in the lower airways may block the passive collapse that is normal after death. Hemorrhagic bullae of emphysema may be found; these are related to the rupture of alveolar walls. These signs, while suggestive of drowning, are not conclusive. Prevention It is estimated that more than 85% of drownings could be prevented by supervision, training in water skills, technology, and public education. Measures that help to prevent drowning include the following: Learning to swim: Being able to swim is one of the best defences against drowning. It is recommended that children learn to swim in a safe and supervised environment when they are between 1 and 4 years old, but learning to swim is recommended at any age.
Surveillance: The surveillance of swimmers, especially children, is essential, because drownings may be silent and go unnoticed. A drowning person may be unable to wave, shout or even speak, and may remain below the surface or unconscious. The highest rates of drowning globally are among children under five years old. People who already know how to swim may still need supervision. Many pools and bathing areas have lifeguards or a drowning detection system, and local legislation may require surveillance methods. Non-professional bystanders are important in detecting drownings and notifying others, and lifeguards can often be called by mobile phone. Evidence shows that alarms in pools are unreliable. The World Health Organization recommends that the most crowded hours be addressed by increasing the number of lifeguards at those times. Education and awareness: The WHO recommends wide training of the public in first aid, including cardiopulmonary resuscitation (CPR), and in behaving safely in the water. Swimmers need to understand how to swim within their own abilities with regard to currents, depth, temperature or waves, and to be informed of the state of the sea. Even good swimmers may drown because of water conditions and other circumstances, so they need to learn how to select safe places that have surveillance, to understand the local conditions, and to follow the rules. Many people who drown fail to follow the local safety guidelines or pay attention to signs indicating swimming restrictions and lifeguard duties. Shallow water and obstructions: Local conditions may include shallow water and obstructions. It is not prudent to jump into water without knowing its depth, especially head-first. Between 1.2% and 22% of all spinal injuries are from accidents diving into shallow water or hitting hidden obstructions such as submerged trees. Up to 21% of shallow-water diving accidents cause spinal injury, risking permanent paralysis or death. Alcohol and drugs: Alcohol and drugs increase the risk of drowning, and this risk increases for bars near water and parties on boats. For example, Finland sees several alcohol-implicated drownings every year at the Midsummer weekend as Finns celebrate in and around lakes and beaches. Anxiety and panic in water: The anxious movements produced by fear during drowning can leave swimmers exhausted. Additionally, misjudging one's own stamina can lead to exhaustion before reaching solid ground. Slowing the pace of swimming allows rest and recovery. In case of a cramp or muscle spasm, it is recommended to keep calm, move towards the shore (or the pool's edge), and ask for help if necessary. Stings from marine life can also produce panic, but after most types of sting it is possible to get out of the water without serious problems, even if some pain appears. For many swimming problems, it can be useful to take a horizontal, face-up position, which allows floating with little effort. Awareness of medical conditions: Some medical conditions, such as epilepsy, syncope, cramps or seizures, demand caution when in or near water. They may require controlled conditions for swimming (and even washing) and a good understanding of the individual's limitations. State of the water: It is recommended to be aware of turbulence, dangerous waves, undertow, wind and weather conditions, dangerous animals, and water temperature.
Water currents (such as river currents and sea rip currents) can carry swimmers away with great force, so safety authorities often recommend that users of swimming areas avoid exhausting themselves by swimming against the current and instead swim with or across the current's direction while working their way out of it. Safety equipment: All boats and pools must be equipped with adequate safety equipment, such as lifejackets or lifebuoys; often this is a regulatory requirement. Any recreational activity on a boat or near water requires that a lifejacket be worn, especially by children who cannot swim and others at risk of drowning. Lifejackets must be well-fitting and properly fastened, and their wearers must know how to jump into the water while wearing one: fastening the strap properly and holding the front neck area with both hands. Emergency flotation equipment, such as a circular lifebuoy, can be thrown to the swimmer if available; if not, any other flotation device, including inner tubes, water wings or foam tubes, can be used. Navigation safety: Navigation accidents are a cause of drowning that can be prevented by staying informed about the state of the sea, carrying the proper safety equipment (especially lifejackets on board, as mentioned before) and taking any other advisable measure that can be applied. Rescue robots and drones: Remote-controlled devices may assist a water rescue. Floating rescue robots can navigate to the victim, give them something to hold on to, and even help to recover them. Aerial drones are fast, can help locate victims and can even drop life jackets. Swimming in pairs ("buddy system"): Pairing up swimmers so they keep watch over each other and are available to help in case of any problem (for safety, not for competitive reasons). Pool fencing: Every private and public swimming pool should be fully fenced, with child-proof latches on the gates. Many countries, including most Australian states since 1998 and France since 2003, require the fencing of pools. Objects (such as toys) can attract children to the water. Pool drains: Swimming pools may have filtration systems that circulate the water. Filtration drains without covers, or with suction that is too strong, can injure swimmers by trapping hair or other parts of the body, leading to immobilization and drowning. Many small drainage holes are usually preferred to a single large one. Periodic inspections can check that the system is safe. Paying heed to warning signs, flags and advice: these indicate whether swimming is safe and warn of dangers. Water safety The concept of water safety involves the procedures and policies that are directed to prevent people from drowning or from becoming injured in water. Time limits The time a person can safely stay underwater depends on many factors, including energy consumption, number of prior breaths, physical condition, and age. An average person can last between one and three minutes before falling unconscious and around ten minutes before dying. In an unusual case with the best conditions, a person was resuscitated after 65 minutes underwater. Management Rescue When a person is drowning or a swimmer becomes missing, a fast water rescue may become necessary to take that person out of the water as soon as possible. Drowning is not necessarily violent or loud, with splashing and cries; it can be silent.
Start and rescue methods from the ground Rescuers should avoid endangering themselves unnecessarily; whenever it is possible, they should assist from a safe ground position, such as a boat, a pier, or any patch of land near the victim. The fastest way to assist is to throw a buoyant object (such as a lifebuoy or a broad branch). It is very important to avoid aiming directly at the victim, since even the lightest lifebuoys weigh over 2 kilograms and can stun, injure or even render a person unconscious if they strike the head. Another way to assist is to reach the victim with an object to grasp and then pull both object and victim out of the water; examples include ropes, oars, broad branches, poles, or one's own arm or hand. This carries the risk of the rescuer being pulled into the water by the victim, so the rescuer must take a firm stance, such as lying down, or secure themselves to some stable point. Any rescue with a vehicle must avoid running over or otherwise injuring the victim. There are also modern flying drones that can drop life jackets. Bystanders should immediately call for help. A lifeguard should be called, if present. If not, an emergency telephone number should be contacted as soon as possible, to get the help of professionals and paramedics. In some cases of drowning, victims have been rescued by professionals from a boat or a helicopter. Less than 6% of people rescued by lifeguards need medical attention, and only 0.5% need CPR. The statistics worsen when rescues are made by bystanders. If lifeguards or paramedics cannot be called, bystanders must rescue the drowning person themselves. This can be done using craft that can reach the victim, such as row-boats or even modern robots that navigate across the water. Rescue by swimming A human rescue by swimming carries a risk for the rescuer, who could drown in the attempt. Death of the would-be rescuer can happen because of the water conditions, the instinctive drowning response of the victim, the physical effort, and other problems. First contact and gripping In a swimming intervention, it is recommended to carry a floating object that makes the rescue easier. That is especially important at the moment when the rescuer reaches the victim's area, because a drowning person in distress may cling to the rescuer in an attempt to stay above the water surface, which could sink both of them. In more manageable situations, the victim is exhausted or has suffered a cramp, and remains calm or has fainted. In the worst cases, the victim is panicking and still vigorous. The rescuer can then approach the panicking person while offering an object for flotation (such as a rescue buoy), any other object, or even a hand, so the victim has something to grasp. In other situations, an expert rescuer could take one of the victim's arms and press it against the victim's back to restrict unnecessary movement. Communication is also important for coordinating and allowing the rescue maneuvers. If the victim clings to the rescuer, there is no flotation object, and the rescuer cannot control the situation (by simple communication, by immobilizing the victim, or by breaking free of the victim), one possibility is to dive underwater (as drowning people tend to move in the opposite direction, seeking the water surface) and consider a different approach to help the drowning victim. Ascending an already sunk victim to the water surface Sometimes the victim has already sunk beneath the water surface.
If this has happened, the rescue requires caution, as the victim could be conscious and cling to the rescuer underwater desperately. Victims with suspected serious spinal injuries (which limit movement) need special care and specific grips to be brought up properly. In the best of cases, the sunken victim is unconscious, floating shallowly under the water surface, and can be lifted to the surface by grabbing either (or both) of the victim's arms and swimming, which pulls the victim forward and upward, making the task easier (and may prompt a conscious victim to start moving). In any case, after reaching the water surface, the victim should be placed in a face-up horizontal position, or at least in one with the nose and mouth above the water, to be towed to firm ground. When a victim is located deeper underwater, the rescuer should dive, take the victim from behind, and ascend vertically to the water surface holding the victim. Moving a victim out of the water by towing Finally, after a successful first physical contact with the victim (usually the most dangerous part, because victims can cling anxiously to their rescuers), the victim must be taken out of the water to firm ground. This is achieved by a towing maneuver, commonly a 'supporting tow': placing the victim's body in a face-up horizontal position, passing one hand under the victim's armpit to then grab the jaw with it, and towing by swimming backwards. The victim's mouth and nose must be kept above the water surface. If the person is cooperative, the towing may be done in a similar fashion with the hands going under the victim's armpits. Other styles of towing are possible, but all of them keep the victim's mouth and nose above the water. Unconscious people may be pulled in an easier way: by a wrist, or by the neck area of the shirt, while they are in a face-up horizontal position. Victims with suspected spinal injuries can require a more specific grip and special care in handling, and a backboard (spinal board) may be needed for their rescue. For unconscious people, in-water resuscitation could increase the chances of survival by a factor of about three, but this procedure requires both medical and swimming skills, and it becomes impractical to send anyone besides the rescuer to execute that task. Chest compressions require a suitable platform, so an in-water assessment of circulation is pointless. If the person does not respond after a few breaths, cardiac arrest may be assumed, and getting them out of the water becomes a priority. First aid The checks for responsiveness and breathing are carried out with the person lying in a horizontal, supine position (face up). Traditional medical treatment for the drowned began with expelling water from the lungs by tilting the victim face down. However, handling the weight of the body can cost time and effort in some cases, especially with victims whose spinal injuries (to the neck or the back) affect mobility, which requires special care. If the victim is unconscious but breathing, the recovery position is appropriate (lying on one side, usually the right; the left side is recommended for women from approximately seven and a half months of pregnancy). If the victim is not breathing, rescue ventilation is necessary. In cases when drowning produces a gasping pattern of apnea while the heart is still beating, ventilation alone could be sufficient.
But in cases where ventilation is not enough, complete cardiopulmonary resuscitation (CPR) should be used. Guidelines for helping victims of drowning indicate calling an emergency telephone number if this has not yet been done; a rescuer alone with the victim should do so after two minutes of cardiopulmonary resuscitation (CPR). CPR for drowning follows an 'airway-breathing-circulation' ('ABC') sequence, starting with rescue breaths rather than with compressions, as is typical in cardiac arrest, because the underlying problem is the lack of oxygen. For a non-breathing adult or child (someone bigger than a baby), the patient's head is tilted back to improve the rescue breaths. It is recommended to start CPR with 2 initial rescue breaths, because of the lack of oxygen and the possibility of water in the airway; the rescue breaths are made by pinching the victim's nose and blowing air mouth-to-mouth, not excessively. Next, a continual alternation of 30 chest compressions (pressing on the lower half of the sternum, the vertical bone in the middle of the chest) and 2 rescue breaths (given in the same manner as the initial ones) is applied. This alternation is repeated until vital signs are re-established, the rescuers are unable to continue, or emergency medical services arrive. Additionally, some victims of drowning may have suffered a type of cardiorespiratory arrest that requires a defibrillator (AED) to correct it (read further below). For non-breathing babies (very small infants), the procedure is the same as above but slightly modified: the baby's head is not tilted back but left straight, looking forward, which is necessary for the rescue breaths because of the size of a baby's neck. In each series of 2 rescue breaths (and the 2 initial breaths), the rescuer's mouth covers the baby's mouth and nose simultaneously (because a baby's face is too small). And, in the intercalated series of 30 chest compressions, the compressions are also applied by pressing on the lower half of the sternum, but with only two fingers (because a baby's body is more fragile). Additionally, some infants may have suffered a type of cardiorespiratory arrest that requires a defibrillator (AED) to correct it (read below). Defibrillators (AED) can be found in many public places. They deliver a defibrillation (electric shock) that can restore the pulse of a victim, although they only work in some specific types of arrest. Defibrillators are easy to use, as they give their instructions with voice messages. Before attempting defibrillation, the victim and the rescuer must be out of the water, and the victim's body must be dried. If the body of the victim is extremely cold, it has to be warmed to improve the chances of defibrillation. Methods to expel water from the airway, such as abdominal thrusts (Heimlich maneuver) or positioning the head downwards, should be avoided: there is no obstruction by solids, and they delay the start of ventilation and increase the risk of vomiting. The risk of death is increased, as the aspiration of stomach contents is a common complication of resuscitation efforts. Treatment for hypothermia may also be necessary. However, in those who are unconscious, it is recommended their temperature not be increased above 34 °C. Because of the diving reflex, people submerged in cold water and apparently drowned may revive after a long period of immersion.
Rescuers retrieving a child from water significantly below body temperature should attempt resuscitation even after protracted immersion. Medical care People with a near-drowning experience who have normal oxygen levels and no respiratory symptoms should be observed in a hospital environment for a period of time to ensure there are no delayed complications. The target of ventilation is to achieve 92% to 96% arterial saturation and adequate chest rise. Positive end-expiratory pressure will improve oxygenation. Drug administration via peripheral veins is preferred over endotracheal administration. Hypotension remaining after oxygenation may be treated by rapid crystalloid infusion. Cardiac arrest in drowning usually presents as asystole or pulseless electrical activity. Ventricular fibrillation is more likely to be associated with complications of pre-existing coronary artery disease, severe hypothermia, or the use of epinephrine or norepinephrine. While surfactant may be used, no high-quality evidence exists that looks at this practice. Extracorporeal membrane oxygenation may be used in those who cannot be oxygenated otherwise. Steroids are not recommended. Prognosis People who have drowned who arrive at a hospital with spontaneous circulation and breathing usually recover with good outcomes. Early provision of basic and advanced life support improves the probability of a positive outcome. A longer duration of submersion is associated with a lower probability of survival and a higher probability of permanent neurological damage. Contaminants in the water can cause bronchospasm and impaired gas exchange and can cause secondary infection with delayed severe respiratory compromise. Low water temperature can cause ventricular fibrillation, but hypothermia during immersion can also slow the metabolism, allowing longer hypoxia before severe damage occurs. Hypothermia that reduces brain temperature significantly can improve the outcome: a reduction of brain temperature by 10 °C decreases ATP consumption by approximately 50%, which can double the time the brain can survive. The younger the person, the better the chances of survival. In one case, a child submerged in cold water for 66 minutes was resuscitated without apparent neurological damage. However, over the long term significant deficits were noted, including a range of cognitive difficulties, particularly general memory impairment, although recent magnetic resonance imaging (MRI) and magnetoencephalography (MEG) findings were within normal range. Children Drowning is a major worldwide cause of death and injury in children. An estimated 20% of non-fatal drownings result in varying degrees of ischemic and/or hypoxic brain injury. Hypoxic injury refers to a lack or absence of oxygen in certain organs or tissues. Ischemic injury, on the other hand, refers to inadequate blood supply to certain organs or parts of the body. These injuries can lead to an increased risk of long-term morbidity. Prolonged hypothermia and hypoxemia from nonfatal submersion drowning can result in cardiac dysrhythmias such as ventricular fibrillation, sinus bradycardia, or atrial fibrillation. Long-term neurological outcomes of drowning cannot be predicted accurately during the early stages of treatment. Although survival after long submersion times, mostly by young children, has been reported, many survivors will remain severely and permanently neurologically compromised after much shorter submersion times.
Factors affecting the probability of long-term recovery with mild deficits or full function in young children include the duration of submersion, whether advanced life support was needed at the accident site, the duration of cardiopulmonary resuscitation, and whether spontaneous breathing and circulation are present on arrival at the emergency room. Prolonged submersion in water for more than 5–10 minutes usually leads to a poorer prognosis. Data on the long-term outcome are scarce and unreliable. Neurological examination at the time of discharge from the hospital does not accurately predict long-term outcomes. Some people with severe brain injury who were transferred to other institutions died months or years after the drowning and are recorded as survivors. Nonfatal drownings have been estimated as two to four times more frequent than fatal drownings. Long-term effects of drowning in children Long-term effects of nonfatal drowning include damage to major organs such as the brain, lungs, and kidneys. Prolonged submersion time is associated with hypoxic ischemic brain injury in susceptible areas of the brain such as the hippocampus, insular cortex, and/or basal ganglia. The severity of hypoxic ischemic damage to these brain structures corresponds to the severity of global damage to areas of the cerebral cortex. The cerebral cortex is a brain structure that is responsible for language, memory, learning, emotion, intelligence, and personality. Global damage to the cerebral cortex can affect one or more of its primary functions. Treatment of pulmonary complications from drowning depends on the amount of lung injury that occurred during the incident. These lung injuries can be caused by water aspiration and by irritants present in the water, such as microbial pathogens, leading to complications such as lung infections that can develop into acute respiratory distress syndrome later in life. Some literature suggests that occurrences of drowning can lead to acute kidney injury from lack of blood flow and oxygenation due to shock and global hypoxia. These kidney injuries can cause irreversible damage to the kidneys and may require long-term treatment such as renal replacement therapy. Infant risk Children are overrepresented in drowning statistics, with children aged 0–4 years old having the highest number of deaths due to unintentional drowning. In 2019 alone, 32,070 children between the ages of 1 and 4 years died as a result of unintentional drowning, equating to an age-adjusted fatality rate of 6.04 per 100,000 children. Infants are particularly vulnerable because while their mobility develops quickly, their perception concerning their ability for locomotion between surfaces develops more slowly. An infant can have full control of their movements but will not recognize that water does not provide the same support for crawling as hardwood floors would. An infant's capacity for movement needs to be met with an appropriate perception of surfaces of support (and avoidance of surfaces that do not support locomotion) to avoid drowning. By crawling and interacting with their environment, infants learn to distinguish surfaces offering support for locomotion from those that do not, and their perception of surface characteristics, as well as their perception of fall risk, will improve over several weeks. Epidemiology In 2019, roughly 236,000 people died from drowning, making it the third leading cause of unintentional death globally, trailing traffic injuries and falls.
In many countries, drowning is one of the main causes of preventable death for children under 12 years old. In the United States in 2006, 1100 people under 20 years of age died from drowning. The United Kingdom has 450 drownings per year, or 1 per 150,000 people, whereas in the United States there are about 6,500 drownings yearly, around 1 per 50,000. In Asia, suffocation and drowning were the leading causes of preventable death for children under five years of age; a 2008 report by UNICEF found that in Bangladesh, for instance, 46 children drown each day. Due to a generally increased likelihood of risk-taking, males are four times more likely to have submersion injuries. In the fishing industry, the largest group of drownings is associated with vessel disasters in bad weather, followed by man-overboard incidents and boarding accidents at night, either in foreign ports or under the influence of alcohol. Scuba diving deaths are estimated at 700 to 800 per year, associated with inadequate training and experience, exhaustion, panic, carelessness, and barotrauma. South Asia Deaths due to drowning are high in the South Asian region, with India, China, Pakistan and Bangladesh accounting for up to 52% of global drowning deaths. Death due to drowning is known to be high in the Sundarbans region in West Bengal and in Bihar. According to the Daily Times, boats are the preferred mode of transport where available in rural Pakistan. Due to the influence of female modesty culture in Pakistan, women are not encouraged to swim. In the Iranian Sistan province there have been numerous instances of children dying in hootak water holes. Africa In lower-income countries, cases of drowning and deaths caused by drowning are underreported and data collection is limited. Many low-income countries in Africa have the highest rates of drowning, with incidence rates calculated from population-based studies across 15 different countries (Burkina Faso, Côte d'Ivoire, Egypt, Ethiopia, the Gambia, Ghana, Guinea, Kenya, Malawi, Nigeria, Seychelles, South Africa, Uganda, Tanzania, and Zimbabwe) ranging from 0.33 per 100,000 population to 502 per 100,000 population. Potential risk factors include young age, being male, having to commute across or work on the water (e.g. fishermen), the quality and carrying capacity of the boat, and poor weather. United States In the United States, drowning is the second leading cause of death (after motor vehicle accidents) in children aged 12 and younger. People who drown are more likely to be male, young, or adolescent. There is a racial disparity in drowning incidents. According to CDC data collected from 1999 to 2019, drowning rates among Native Americans were 2 times higher than among non-Hispanic whites, while the rate among African-Americans was 1.5 times higher. Surveys indicate that 10% of children under 5 have experienced a situation with a high risk of drowning. Worldwide, about 175,000 children die through drowning every year. According to the US National Safety Council, 353 people ages 5 to 24 drowned in 2017. Society and culture Old terminology The word "drowning", like "electrocution", was previously used to describe fatal events only. Occasionally, that usage is still insisted upon, though the medical community's consensus supports the definition used in this article. Several terms related to drowning which have been used in the past are also no longer recommended.
These include: Active drowning: people, such as non-swimmers and the exhausted or hypothermic at the surface, who are unable to hold their mouth above water and are suffocating due to lack of air. Instinctively, people in such cases perform well-known behaviors in the last 20–60 seconds before being submerged, representing the body's last efforts to obtain air. Notably, such people are unable to call for help, talk, reach for rescue equipment, or alert swimmers even feet away, and they may drown quickly and silently close to other swimmers or safety. Dry drowning: drowning in which no water enters the lungs. Near drowning: drowning which is not fatal. Wet drowning: drowning in which water enters the lungs. Passive drowning: people who suddenly sink or have sunk due to a change in their circumstances. Examples include people who drown in an accident due to sudden loss of consciousness or a sudden medical condition. Secondary drowning: a physiological response to foreign matter in the lungs due to drowning, causing extrusion of liquid into the lungs (pulmonary edema), which adversely affects breathing. Silent drowning: drowning without a noticeable external display of distress. Dry drowning "Dry drowning" is an urban legend according to which some people, notably children, die of drowning hours or days after swimming or ingesting water. Misinformation about this supposed phenomenon is spread cyclically, mostly at the beginning of summer, over social media. As a medical condition, "dry drowning" has never had an accepted definition, and the term is discredited. Following the 2002 World Congress on Drowning in Amsterdam, a consensus definition of drowning was established: it is the "process of experiencing respiratory impairment from submersion/immersion in liquid." This definition resulted in only three legitimate drowning subsets: fatal drowning, non-fatal drowning with illness/injury, and non-fatal drowning without illness/injury. In response, major medical consensus organizations have adopted this definition worldwide and have discouraged any medical or publication use of the term "dry drowning". Such organizations include the International Liaison Committee on Resuscitation, the Wilderness Medical Society, the American Heart Association, the Utstein Style system, the International Lifesaving Federation, the International Conference on Drowning, Starfish Aquatics Institute, the American Red Cross, the Centers for Disease Control and Prevention (CDC), the World Health Organization and the American College of Emergency Physicians. Drowning experts have recognized that the resulting pathophysiology of hypoxemia, acidemia, and eventual death is the same whether water entered the lungs or not. As this distinction does not change management or prognosis but causes significant confusion due to alternate definitions and misunderstandings, it is established that pathophysiological discussions of "dry" versus "wet" drowning are not relevant to drowning care. "Dry drowning" is cited in the news with a wide variety of definitions and is often confused with "secondary drowning" or "delayed drowning". Various conditions including spontaneous pneumothorax, chemical pneumonitis, bacterial or viral pneumonia, head injury, asthma, heart attack, and chest trauma have been misattributed to the erroneous terms "delayed drowning", "secondary drowning", and "dry drowning".
To date, no case has been identified in the medical literature in which a person was observed to be without symptoms and died hours or days later as a direct result of drowning alone. However, famed forensic pathologist Dr. Cyril H. Wecht has published at least one opinion asserting that the cause of death of a 16-year-old student was "delayed drowning". Capital punishment In Europe, drowning was used as capital punishment. During the Middle Ages, a sentence of death was read using words meaning "with pit and gallows". Drowning survived as a method of execution in Europe until the 17th and 18th centuries. England had abolished the practice by 1623, Scotland by 1685, Switzerland in 1652, Austria in 1776, Iceland in 1777, and Russia by the beginning of the 1800s. France revived the practice during the French Revolution (1789–1799) and it was carried out by Jean-Baptiste Carrier at Nantes. Experience People who have experienced drowning have reported slowing of time, but this is suggested to be a function of recollection, not perception. If the person is conscious after the initial struggle and breath-holding, they may feel a burning or tearing sensation on aspirating water. This burning sensation does not depend on the type of water. Following this painful feeling, many report peaceful perceptions, hallucinations, diminished pain and even euphoria. Sensations of tranquility are not limited to drowning, and similar perceptions have also been reported in near-death experiences from other causes. The euphoria and calmness can be attributed to cerebral hypoxia and consequent changes in neurotransmitters. These experiences vary by person, because the rate of oxygen loss in the blood (and resulting hypoxia) depends on the circumstances.
Biology and health sciences
Types
Health
219378
https://en.wikipedia.org/wiki/Mockingbird
Mockingbird
Mockingbirds are a group of New World passerine birds from the family Mimidae. They are best known for the habit of some species mimicking the songs of other birds and the sounds of insects and amphibians, often loudly and in rapid succession, and for being extremely territorial when raising hatchlings. Studies have shown the ability of some species to identify individual humans and treat them differently based on learned threat assessments. The only mockingbird commonly found in North America is the northern mockingbird. Mockingbirds are known for singing late at night, even past midnight. They are opportunistic omnivores, feeding on insects, fruits, seeds, and occasional greens. The northern mockingbird is the state bird of five states in the United States, a trend that was started in 1920, when the Texas Federation of Women's Clubs proposed the idea. In January 1927, Governor Dan Moody approved this, and Texas became the first state ever to choose a state bird. Since then, Arkansas, Florida, Mississippi, and Tennessee have also adopted the northern mockingbird as their official state bird. Taxonomy There are about 17 species in two genera, although three species of mockingbird from the Galápagos Islands were formerly separated into a third genus, Nesomimus. The mockingbirds do not appear to form a monophyletic lineage, as Mimus and Melanotis are not each other's closest relatives; instead, Melanotis appears to be more closely related to the catbirds, while the closest living relatives of Mimus appear to be thrashers, such as the sage thrasher. Species in taxonomic order
Mimus:
Brown-backed mockingbird, Mimus dorsalis
Bahama mockingbird, Mimus gundlachii
Long-tailed mockingbird, Mimus longicaudatus
Patagonian mockingbird, Mimus patagonicus
Chilean mockingbird, Mimus thenca
White-banded mockingbird, Mimus triurus
Northern mockingbird, Mimus polyglottos
Socorro mockingbird, Mimus graysoni
Tropical mockingbird, Mimus gilvus
Chalk-browed mockingbird, Mimus saturninus
Formerly Nesomimus (endemic to the Galápagos):
Hood mockingbird, Mimus macdonaldi
Galápagos mockingbird, Mimus parvulus
Floreana mockingbird or Charles mockingbird, Mimus trifasciatus
San Cristóbal mockingbird, Mimus melanotis
Melanotis:
Blue mockingbird, Melanotis caerulescens
Blue-and-white mockingbird, Melanotis hypoleucus
Charles Darwin When the survey voyage of HMS Beagle visited the Galápagos Islands in September to October 1835, the naturalist Charles Darwin noticed that the mockingbirds Mimus thenca differed from island to island, and were closely allied in appearance to mockingbirds on the South American mainland. Nearly a year later, when writing up his notes on the return voyage, he speculated that this, together with what he had been told about Galápagos tortoises, could undermine the doctrine of stability of species. This was his first recorded expression of doubts about species being immutable, which led to his being convinced about the transmutation of species and hence evolution.
Biology and health sciences
Passerida
null
1745670
https://en.wikipedia.org/wiki/Social%20choice%20theory
Social choice theory
Social choice theory is a branch of welfare economics that extends the theory of rational choice to collective decision-making. Social choice studies the behavior of different mathematical procedures (social welfare functions) used to combine individual preferences into a coherent whole. It contrasts with political science in that it is a normative field that studies how a society can make good decisions, whereas political science is a descriptive field that observes how societies actually do make decisions. While social choice began as a branch of economics and decision theory, it has since received substantial contributions from mathematics, philosophy, political science, and game theory. Real-world examples of social choice rules include constitutions and parliamentary procedures for voting on laws, as well as electoral systems; as such, the field is occasionally called voting theory. It is closely related to mechanism design, which uses game theory to model social choice with imperfect information and self-interested citizens. Social choice differs from decision theory in that the latter is concerned with how individuals, rather than societies, can make rational decisions. History The earliest work on social choice theory comes from the writings of the Marquis de Condorcet, who formulated several key results including his jury theorem and his example showing the impossibility of majority rule. His work was prefigured by Ramon Llull's 1299 manuscript Ars Electionis (The Art of Elections), which discussed many of the same concepts, but was lost in the Late Middle Ages and only rediscovered in the early 21st century. Kenneth Arrow's book Social Choice and Individual Values is often recognized as inaugurating the modern era of social choice theory. Later work has also considered approaches to legal compensation, fair division, variable populations, partial strategy-proofing of social-choice mechanisms, natural resources, capabilities and functionings approaches, and measures of welfare. Key results Arrow's impossibility theorem Arrow's impossibility theorem is a key result showing that social choice functions based only on ordinal comparisons, rather than cardinal utility, will behave incoherently (unless they are dictatorial). Such systems violate independence of irrelevant alternatives, meaning they suffer from spoiler effects—the system can behave erratically in response to changes in the quality or popularity of one of the options. Condorcet cycles Condorcet's example demonstrates that democracy cannot be thought of as being the same as simple majority rule or majoritarianism; otherwise, it will be self-contradictory when three or more options are available. Majority rule can create cycles that violate the transitive property: Attempting to use majority rule as a social choice function creates situations where we have A better than B and B better than C, but C is also better than A. This contrasts with May's theorem, which shows that simple majority is the optimal voting mechanism when there are only two outcomes, and only ordinal preferences are allowed. Harsanyi's theorem Harsanyi's utilitarian theorem shows that if individuals have preferences that are well-behaved under uncertainty (i.e. coherent), the only coherent and Pareto efficient social choice function is the utilitarian rule. 
This lends some support to the viewpoint expressed by John Stuart Mill, who identified democracy with the ideal of maximizing the common good (or utility) of society as a whole, under an equal consideration of interests. Manipulation theorems Gibbard's theorem provides limitations on the ability of any voting rule to elicit honest preferences from voters, showing that no voting rule is strategyproof (i.e. does not depend on other voters' preferences) for elections with 3 or more outcomes. The Gibbard–Satterthwaite theorem proves a stronger result for ranked-choice voting systems, showing that no such voting rule can be sincere (i.e. free of reversed preferences). Median voter theorem Mechanism design The field of mechanism design, a subset of social choice theory, deals with the identification of rules that preserve desirable properties while incentivizing agents to honestly reveal their preferences. One particularly important result is the revelation principle, which is almost a reversal of Gibbard's theorem: for any given social choice function, there exists a mechanism that obtains the same results but incentivizes participants to be completely honest. Because mechanism design places stronger assumptions on the behavior of participants, it is sometimes possible to design mechanisms for social choice that accomplish apparently-"impossible" tasks. For example, by allowing agents to compensate each other for losses with transfers, the Vickrey–Clarke–Groves (VCG) mechanism can achieve the "impossible" according to Gibbard's theorem: the mechanism ensures honest behavior from participants, while still achieving a Pareto efficient outcome. As a result, the VCG mechanism can be considered a "better" way to make decisions than voting (though only so long as monetary transfers are possible). Others If the domain of preferences is restricted to those that include a majority-strength Condorcet winner, then selecting that winner is the unique resolvable, neutral, anonymous, and non-manipulable voting rule. Interpersonal utility comparison Social choice theory is the study of theoretical and practical methods to aggregate or combine individual preferences into a collective social welfare function. The field generally assumes that individuals have preferences, and it follows that they can be modeled using utility functions, by the VNM theorem. But much of the research in the field assumes that those utility functions are internal to humans, lack a meaningful unit of measure and cannot be compared across different individuals. Whether this type of interpersonal utility comparison is possible or not significantly alters the available mathematical structures for social welfare functions and social choice theory. In one perspective, following Jeremy Bentham, utilitarians have argued that preferences and utility functions of individuals are interpersonally comparable and may therefore be added together to arrive at a measure of aggregate utility. Utilitarian ethics call for maximizing this aggregate. In contrast, many twentieth-century economists, following Lionel Robbins, questioned whether utility could be measured at all, or even considered meaningful. Following arguments similar to those espoused by behaviorists in psychology, Robbins argued concepts of utility were unscientific and unfalsifiable. Consider, for instance, the law of diminishing marginal utility, according to which utility of an added quantity of a good decreases with the amount of the good that is already in possession of the individual.
It has been used to defend transfers of wealth from the "rich" to the "poor" on the premise that the former do not derive as much utility as the latter from an extra unit of income. Robbins argued that this notion is beyond positive science; that is, one cannot measure changes in the utility of someone else, nor is it required by positive theory. Apologists for the interpersonal comparison of utility have argued that Robbins claimed too much. John Harsanyi agreed that perfect comparisons of mental states are not practically possible, but people can still make some comparisons thanks to their similar backgrounds, cultural experiences, and psychologies. Amartya Sen argues that even if interpersonal comparisons of utility are imperfect, we can still say that (despite being positive for Nero) the Great Fire of Rome had a negative overall value. Harsanyi and Sen thus argue that at least partial comparability of utility is possible, and social choice theory should proceed under that assumption. Relationship to public choice theory Despite the similar names, "public choice" and "social choice" are two distinct fields that are only weakly related. Public choice deals with the modeling of political systems as they actually exist in the real world, and is primarily limited to positive economics (predicting how politicians and other stakeholders will act). It is therefore often thought of as the application of microeconomic models to political science, in order to predict the behavior of political actors. By contrast, social choice has a much more normative bent, and deals with the abstract study of decision procedures and their properties. The Journal of Economic Literature classification codes place Social Choice under Microeconomics at JEL D71 (with Clubs, Committees, and Associations) whereas Public Choice falls under JEL D72 (Economic Models of Political Processes: Rent-Seeking, Elections, Legislatures, and Voting Behavior). Empirical research Since Arrow, social choice theory has been characterized by being predominantly mathematical and theoretical, but some research has aimed at estimating the frequency of various voting paradoxes, such as the Condorcet paradox. A summary of 37 individual studies, covering a total of 265 real-world elections, large and small, found 25 instances of a Condorcet paradox for a total likelihood of 9.4%. While examples of the paradox seem to occur often in small settings like parliaments, very few examples have been found in larger groups (electorates), although some have been identified. However, the frequency of such paradoxes depends heavily on the number of options and other factors. Rules Let X be a set of possible 'states of the world' or 'alternatives'. Society wishes to choose a single state from X. For example, in a single-winner election, X may represent the set of candidates; in a resource allocation setting, X may represent all possible allocations. Let I be a finite set, representing a collection of individuals. For each i in I, let u_i be a utility function, describing the amount of happiness an individual i derives from each possible state. A social choice rule is a mechanism which uses the data (u_i) to select some element(s) from X which are 'best' for society. The question of what 'best' means is a common question in social choice theory. The following rules are most common: Utilitarian rule – sometimes called the max-sum rule or Benthamite welfare – aims to maximize the sum of utilities.
Egalitarian rule – sometimes called the max-min rule or Rawlsian welfare – aims to maximize the smallest utility. Social choice functions A social choice function, sometimes called a voting system in the context of politics, is a rule that takes individuals' complete and transitive preferences over a set of outcomes and returns a single chosen outcome (or a set of tied outcomes). We can think of this subset as the winners of an election, and compare different social choice functions based on which axioms or mathematical properties they fulfill. Arrow's impossibility theorem is what often comes to mind when one thinks about impossibility theorems in voting. There are several famous theorems concerning social choice functions. The Gibbard–Satterthwaite theorem implies that the only rule satisfying non-imposition (every alternative can be chosen) and strategyproofness when there are more than two candidates is the dictatorship mechanism. That is, under any other rule, a voter may be able to cast a ballot that misrepresents their preferences to obtain a result that is more favorable to them under their sincere preferences. May's theorem shows that when there are only two candidates and only rankings of options are available, the simple majority vote is the unique neutral, anonymous, and positively-responsive voting rule.
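A short, self-contained sketch can make the Condorcet cycle and the utilitarian/egalitarian rules above concrete. The three ballots below are Condorcet's classic cyclic profile; the cardinal utilities are hypothetical numbers invented purely for illustration, chosen so that the two rules pick different states:

```python
from itertools import combinations

# Condorcet's example: three voters ranking candidates most-to-least preferred.
ballots = [
    ("A", "B", "C"),
    ("B", "C", "A"),
    ("C", "A", "B"),
]

def pairwise_margin(ballots, x, y):
    """Voters preferring x to y, minus voters preferring y to x."""
    return sum(1 if b.index(x) < b.index(y) else -1 for b in ballots)

for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if pairwise_margin(ballots, x, y) > 0 else (y, x)
    print(f"a majority prefers {winner} to {loser}")
# Prints A > B, B > C, and C > A: majority preference is cyclic,
# so no transitive social ranking is consistent with it.

# Cardinal rules operate on utilities rather than rankings.
utilities = {                      # utilities[voter][state], hypothetical
    0: {"A": 10, "B": 4, "C": 3},
    1: {"A": 2, "B": 8, "C": 6},
    2: {"A": 3, "B": 5, "C": 9},
}
states = ["A", "B", "C"]
utilitarian = max(states, key=lambda s: sum(u[s] for u in utilities.values()))
egalitarian = max(states, key=lambda s: min(u[s] for u in utilities.values()))
print(utilitarian, egalitarian)    # C (max-sum) vs B (max-min)
```

Because every pairwise majority margin is positive going around the cycle, the ballots reproduce exactly the self-contradiction Condorcet's example describes, while the utility profile shows how the max-sum and max-min rules can legitimately disagree on the same data.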
Mathematics
Game theory
null
1746409
https://en.wikipedia.org/wiki/Mineral%20acid
Mineral acid
A mineral acid (or inorganic acid) is an acid derived from one or more inorganic compounds, as opposed to organic acids, which are acidic organic compounds. All mineral acids form hydrogen ions and the conjugate base when dissolved in water. Characteristics Commonly used mineral acids are sulfuric acid (H2SO4), hydrochloric acid (HCl) and nitric acid (HNO3); these are also known as bench acids. Mineral acids range from superacids (such as perchloric acid) to very weak ones (such as boric acid). Mineral acids tend to be very soluble in water and insoluble in organic solvents. Mineral acids are used in many sectors of the chemical industry as feedstocks for the synthesis of other chemicals, both organic and inorganic. Large quantities of these acids – especially sulfuric acid, nitric acid, and hydrochloric acid – are manufactured for commercial use in large plants. Mineral acids are also used directly for their corrosive properties. For example, a dilute solution of hydrochloric acid is used for removing the deposits from the inside of boilers, with precautions taken to prevent the corrosion of the boiler by the acid. This process is known as descaling. Examples
Solutions of a hydrogen halide:
Hydrofluoric acid HF
Hydrochloric acid HCl
Hydrobromic acid HBr
Hydroiodic acid HI
Other mineral acids:
Nitric acid HNO3
Phosphoric acid H3PO4
Sulfuric acid H2SO4
Boric acid H3BO3
Perchloric acid HClO4
Hydrogen cyanide HCN
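Because a strong mineral acid dissociates essentially completely in water, its pH follows directly from the molar concentration. The snippet below is a minimal illustration under that complete-dissociation assumption; the concentrations are arbitrary examples, and the calculation does not apply to weak mineral acids such as boric acid (or to sulfuric acid's second proton, which only partially dissociates):

```python
import math

def strong_acid_ph(molarity: float) -> float:
    """pH of a fully dissociated monoprotic strong acid (e.g. HCl, HNO3).

    Ignores activity corrections and water autoionization, which is a
    reasonable approximation for concentrations above about 1e-6 M.
    """
    return -math.log10(molarity)

print(strong_acid_ph(0.01))  # 0.01 M HCl  -> pH 2.0
print(strong_acid_ph(0.1))   # 0.1 M HNO3 -> pH 1.0
```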
Physical sciences
Specific acids
Chemistry
1748160
https://en.wikipedia.org/wiki/Cell%20junction
Cell junction
Cell junctions or junctional complexes are a class of cellular structures consisting of multiprotein complexes that provide contact or adhesion between neighboring cells or between a cell and the extracellular matrix in animals. They also maintain the paracellular barrier of epithelia and control paracellular transport. Cell junctions are especially abundant in epithelial tissues. Combined with cell adhesion molecules and the extracellular matrix, cell junctions help hold animal cells together. Cell junctions are also especially important in enabling communication between neighboring cells via specialized protein complexes called communicating (gap) junctions. Cell junctions are also important in reducing stress placed upon cells. In plants, similar communication channels are known as plasmodesmata, and in fungi they are called septal pores. Types In vertebrates, there are three major types of cell junction: Adherens junctions, desmosomes and hemidesmosomes (anchoring junctions) Gap junctions (communicating junctions) Tight junctions (occluding junctions) Invertebrates have several other types of specific junctions, for example septate junctions (a type of occluding junction) or the C. elegans apical junction. In multicellular plants, the structural functions of cell junctions are instead provided for by cell walls. The analogues of communicative cell junctions in plants are called plasmodesmata. Anchoring junctions Cells within tissues and organs must be anchored to one another and attached to components of the extracellular matrix. Cells have developed several types of junctional complexes to serve these functions, and in each case, anchoring proteins extend through the plasma membrane to link cytoskeletal proteins in one cell to cytoskeletal proteins in neighboring cells as well as to proteins in the extracellular matrix. Three types of anchoring junctions are observed, differing from one another in the cytoskeletal protein anchor and in the transmembrane linker protein that extends through the membrane. Anchoring-type junctions not only hold cells together but also provide tissues with structural cohesion. These junctions are most abundant in tissues that are subject to constant mechanical stress, such as skin and heart. Desmosomes Desmosomes, also termed maculae adherentes, can be visualized as rivets through the plasma membrane of adjacent cells. Intermediate filaments composed of keratin or desmin are attached to membrane-associated attachment proteins that form a dense plaque on the cytoplasmic face of the membrane. Cadherin molecules form the actual anchor by attaching to the cytoplasmic plaque, extending through the membrane and binding strongly to cadherins coming through the membrane of the adjacent cell. Hemidesmosomes Hemidesmosomes form rivet-like links between the cytoskeleton and extracellular matrix components such as the basal laminae that underlie epithelia. Like desmosomes, they tie to intermediate filaments in the cytoplasm, but in contrast to desmosomes, their transmembrane anchors are integrins rather than cadherins. Adherens junctions Adherens junctions share the characteristic of anchoring cells through their cytoplasmic actin filaments. Similarly to desmosomes and hemidesmosomes, their transmembrane anchors are composed of cadherins in those that anchor to other cells and integrins (focal adhesion) in those that anchor to extracellular matrix. There is considerable morphologic diversity among adherens junctions.
Those that tie cells to one another are seen as isolated streaks or spots, or as bands that completely encircle the cell. The band-type of adherens junctions is associated with bundles of actin filaments that also encircle the cell just below the plasma membrane. Spot-like adherens junctions called focal adhesions help cells adhere to the extracellular matrix. The cytoskeletal actin filaments that tie into adherens junctions are contractile proteins, and in addition to providing an anchoring function, adherens junctions are thought to participate in folding and bending of epithelial cell sheets. Thinking of the bands of actin filaments as being similar to 'drawstrings' allows one to envision how contraction of the bands within a group of cells would distort the sheet into interesting patterns. Gap junctions Gap junctions, or communicating junctions, allow for direct chemical communication between adjacent cellular cytoplasm through diffusion without contact with the extracellular fluid. This is possible due to six connexin proteins interacting to form a cylinder with a pore in the centre called a connexon. The connexon complexes stretch across the cell membrane, and when two adjacent cell connexons interact, they form a complete gap junction channel. Connexon pores vary in size and polarity and therefore can be specific depending on the connexin proteins that constitute each individual connexon. Whilst variation in gap junction channels does occur, their structure remains relatively standard, and this interaction ensures efficient communication without the escape of molecules or ions to the extracellular fluid. Gap junctions play vital roles in the human body, including their role in the uniform contraction of the heart muscle. They are also relevant in signal transfers in the brain, and their absence shows a decreased cell density in the brain. Retinal and skin cells are also dependent on gap junctions in cell differentiation and proliferation. Tight junctions Found in vertebrate epithelia, tight junctions act as barriers that regulate the movement of water and solutes between epithelial layers. Tight junctions are classified as a paracellular barrier, which is defined as not having directional discrimination; however, movement of the solute is largely dependent upon size and charge. There is evidence to suggest that the structures through which solutes pass are somewhat like pores. Physiological pH plays a part in the selectivity of solutes passing through tight junctions, with most tight junctions being slightly selective for cations. Tight junctions present in different types of epithelia are selective for solutes of differing size, charge, and polarity. Proteins Approximately 40 proteins have been identified as being involved in tight junctions. These proteins can be classified into four major categories: scaffolding proteins, signalling proteins, regulation proteins, and transmembrane proteins. Roles Scaffolding proteins – organise the transmembrane proteins, couple transmembrane proteins to other cytoplasmic proteins as well as to actin filaments. Signaling proteins – involved in junction assembly, barrier regulation, and gene transcription. Regulation proteins – regulate membrane vesicle targeting. Transmembrane proteins – including junctional adhesion molecule, occludin, and claudin. It is believed that claudin is the protein molecule responsible for the selective permeability between epithelial layers. A three-dimensional image has yet to be achieved, and as such, specific information about the function of tight junctions is yet to be determined.
Tricellular junctions seal epithelia at the corners of three cells. Due to the geometry of three-cell vertices, the sealing of the cells at these sites requires a specific junctional organization, different from that of bicellular junctions. In vertebrates, the components of tricellular junctions are tricellulin and lipolysis-stimulated lipoprotein receptors. In invertebrates, the components are gliotactin and anakonda. Tricellular junctions are also implicated in the regulation of cytoskeletal organization and cell divisions. In particular, they ensure that cells divide according to the Hertwig rule. In some Drosophila epithelia, during cell divisions, tricellular junctions establish physical contact with the spindle apparatus through astral microtubules. Tricellular junctions exert a pulling force on the spindle apparatus and serve as a geometrical cue to determine the orientation of cell divisions. Cell junction molecules The molecules responsible for creating cell junctions include various cell adhesion molecules. There are four main types: selectins, cadherins, integrins, and the immunoglobulin superfamily. Selectins are cell adhesion molecules that play an important role in the initiation of inflammatory processes. The functional capacity of selectins is limited to leukocyte interactions with the vascular endothelium. There are three types of selectins found in humans: L-selectin, P-selectin and E-selectin. L-selectin deals with lymphocytes, monocytes and neutrophils, P-selectin deals with platelets and endothelium, and E-selectin deals only with endothelium. They have extracellular regions made up of an amino-terminal lectin domain, attached to a carbohydrate ligand, a growth factor-like domain, and short repeat units that match the complementary binding protein domains. Cadherins are calcium-dependent adhesion molecules. Cadherins are extremely important in the process of morphogenesis – fetal development. Together with an alpha-beta catenin complex, the cadherin can bind to the microfilaments of the cytoskeleton of the cell. This allows for homophilic cell–cell adhesion. The β-catenin–α-catenin linked complex at the adherens junctions allows for the formation of a dynamic link to the actin cytoskeleton. Integrins act as adhesion receptors, transporting signals across the plasma membrane in multiple directions. These molecules are an invaluable part of cellular communication, as a single ligand can be used for many integrins. Research on these molecules, however, is still at an early stage. Immunoglobulin superfamily molecules are a group of calcium-independent proteins capable of homophilic and heterophilic adhesion. Homophilic adhesion involves the immunoglobulin-like domains on the cell surface binding to the immunoglobulin-like domains on an opposing cell's surface, while heterophilic adhesion refers to the binding of the immunoglobulin-like domains to integrins and carbohydrates instead. Cell adhesion is a vital component of the body. Loss of this adhesion affects cell structure, cellular functioning and communication with other cells and the extracellular matrix, and can lead to severe health issues and diseases.
Biology and health sciences
Cell parts
Biology
1748421
https://en.wikipedia.org/wiki/Simple%20eye%20in%20invertebrates
Simple eye in invertebrates
A simple eye or ocellus (sometimes called a pigment pit) is a form of eye or an optical arrangement which has a single lens without the sort of elaborate retina that occurs in most vertebrates. These eyes are called "simple" to distinguish them from "compound eyes", which have multiple lenses. They are not necessarily simple in the sense of being uncomplicated or basic. The structure of an animal's eye is determined by the environment in which it lives and the behavioural tasks it must fulfill to survive. Arthropods differ widely in the habitats in which they live, as well as their visual requirements for finding food or conspecifics and avoiding predators. Consequently, an enormous variety of eye types are found in arthropods to overcome visual problems or limitations. Use of the term simple eye is flexible and must be interpreted in proper context; for example, the eyes of most large animals are camera eyes and are sometimes considered "simple" because a single lens collects and focuses an entire image onto the retina (analogous to a camera). By other criteria, the presence of a complex retina distinguishes the vertebrate camera eye from the simple stemmata or the ommatidia which make up compound eyes. Additionally, not all invertebrate ocelli and ommatidia have simple photoreceptors. Many have various forms of retinula (a retina-like cluster of photoreceptor cells), including the ommatidia of most insects and the central eyes of camel spiders. Jumping spiders and some other predatory spiders with seemingly simple eyes also emulate retinal vision in various ways. Many insects have unambiguously compound eyes consisting of multiple lenses (up to tens of thousands), but achieve an effect similar to that of a camera eye, in that each ommatidium lens focuses light onto a number of neighbouring retinulae. Ocelli or eye spots Some jellyfish, sea stars, flatworms, and ribbonworms have the simplest "eyes" – pigment spot ocelli – which have randomly distributed pigment and no other structure (such as a cornea or lens). The apparent "eye color" in these animals is red or black. Certain groups, such as box jellyfish, have more complex eyes, including some with a distinct retina, lens, and cornea. Many snails and slugs also have ocelli, either at the tips or bases of their tentacles. Some other gastropods, such as the Strombidae, have much more sophisticated eyes. Giant clams have ocelli that allow light to penetrate their mantles. Simple eyes in arthropods Spider eyes Spiders do not have compound eyes, but instead have several pairs of simple eyes, with each pair adapted for a specific task or tasks. The principal and secondary eyes in spiders are arranged in four, or occasionally fewer, pairs. Only the principal eyes have moveable retinas. The secondary eyes have a reflector at the back of the eyes. The light-sensitive part of the receptor cells is next to this, so they get direct and reflected light. In hunting or jumping spiders, for example, a forward-facing pair possesses the best resolution (and even some telescopic ability) to help spot prey from a distance. Nocturnal spiders' eyes are very sensitive in low light levels and are large to capture more light, equivalent to f/0.58 in the rufous net-casting spider. Dorsal ocelli The term "ocellus" (plural ocelli) is derived from the Latin oculus (eye), and literally means "little eye".
In insects, two distinct ocellus types exist: dorsal (top-most) ocelli and lateral ocelli (often referred to as ocelli and stemmata, respectively); most insects have dorsal ocelli, while stemmata are found in the larvae of some insect orders. Despite the shared name, they are structurally and functionally very different. Simple eyes of other animals may also be referred to as ocelli, but again the structure and anatomy of these eyes is quite distinct from those of insect dorsal ocelli. Dorsal ocelli are light-sensitive organs found on the dorsal or frontal surface of the head of many insects, including Hymenoptera (bees, ants, wasps, sawflies), Diptera (flies), Odonata (dragonflies, damselflies), Orthoptera (grasshoppers, locusts) and Mantodea (mantises). These ocelli coexist with compound eyes; thus, most insects possess two anatomically separate and functionally different visual pathways. The number, forms, and functions of the dorsal ocelli vary markedly throughout insect orders. They tend to be larger and more strongly expressed in flying insects (particularly bees, wasps, dragonflies and locusts), where they are typically found as a triplet. Two ocelli are directed to either side of the head, while a central (median) ocellus is directed forwards. In some terrestrial insects (e.g. some ants and cockroaches), the median ocellus is absent. The sideways-facing ocelli can be called "lateral ocelli", referring to their direction and position in the triplet; however, this is not to be confused with the stemmata of some insect larvae, which are also known as lateral ocelli. A dorsal ocellus consists of a lens element (cornea) and a layer of photoreceptors (rod cells). The ocellar lens may be strongly curved or flat. The photoreceptor layer may also be separated from the lens by a clear vitreous humour. The number of photoreceptors also varies widely, but may number in the hundreds or thousands for well-developed ocelli. In bees, locusts, and dragonflies, the lens is strongly curved; in cockroaches it is flat. Locusts possess vitreous humour while blowflies and dragonflies do not. Two somewhat unusual features of ocelli are particularly notable and generally common between insect orders. The refractive power of the lens is not typically sufficient to form an image on the photoreceptor layer; it is essentially out of focus. Dorsal ocelli ubiquitously have massive convergence ratios from first-order (photoreceptor) to second-order neurons. These two factors have led to the conclusion that, with some exceptions in predatory insects, the ocelli are incapable of perceiving proper images and are thus solely suitable for light-metering functions. Given the large aperture and low f-number of the lens, as well as high convergence ratios and synaptic gains (amplification of photoreceptor signals), the ocelli are generally considered to be far more sensitive to light than the compound eyes. Additionally, given the relatively simple neural arrangement of the eye (small number of synapses between detector and effector), as well as the extremely large diameter of some ocellar interneurons (often the largest-diameter neurons in the animal's nervous system), the ocelli are typically considered to be "faster" than the compound eyes. One common theory of ocellar function in flying insects holds that they are used to assist in maintaining flight stability.
Given their underfocused nature, wide fields of view, and high light-collecting ability, the ocelli are superbly adapted for measuring changes in the perceived brightness of the external world as an insect rolls or pitches around its body axis during flight. Locusts and dragonflies in tethered flight have been observed trying to "correct" their flight posture based on changes in light. Other theories of ocellar function have ranged from roles as light adaptors or global excitatory organs to polarization sensors and circadian entrainers. Recent studies have shown the ocelli of some insects (most notably the dragonfly, but also some wasps) are capable of "form vision" similar to camera eyes, as the ocellar lens forms an image within, or close to, the photoreceptor layer. In dragonflies it has been demonstrated that the receptive fields of both the photoreceptors and the second-order neurons can be quite restricted. Further research has demonstrated that these eyes not only resolve spatial details of the world, but also perceive motion. Second-order neurons in the dragonfly median ocellus respond more strongly to upwards-moving bars and gratings than to downwards-moving bars and gratings, but this effect is only present when ultraviolet light is used in the stimulus; when ultraviolet light is absent, no directional response is observed. Dragonfly ocelli are especially highly developed and specialised visual organs, which may support the exceptional acrobatic abilities of these animals. Research on the ocelli is of high interest to designers of small unmanned aerial vehicles. Designers of these craft face many of the same challenges that insects face in maintaining stability in a three-dimensional world. Engineers are increasingly taking inspiration from insects to overcome these challenges. Stemmata Stemmata (singular stemma) are a class of simple eyes. Many kinds of holometabolous larvae bear no other form of eyes until they enter their final stage of growth. Adults of several orders of hexapods also have stemmata, and never develop compound eyes at all. Examples include fleas, springtails, and Thysanura. Some other Arthropoda, such as some Myriapoda, rarely have any eyes other than stemmata at any stage of their lives (exceptions include the large and well-developed compound eyes of the house centipedes, Scutigera). Behind each lens of a typical functional stemma lies a single cluster of photoreceptor cells, termed a retinula. The lens is biconvex, and the body of the stemma has a vitreous or crystalline core. Although stemmata are simple eyes, some kinds (such as those of the larvae of Lepidoptera, and especially those of Tenthredinidae, a family of sawflies) are only "simple" in that they represent immature or embryonic forms of the compound eyes of the adult. They can possess a considerable degree of acuity and sensitivity, and can detect polarized light. They may be optimized for light sensitivity, as opposed to detailed image formation. In the pupal stage, such stemmata develop into fully fledged compound eyes. One feature offering a clue to their ontogenetic role is their lateral position on the head; ocelli that in other ways resemble stemmata tend to be borne in sites median to the compound eyes, or nearly so. Among some researchers, this distinction has led to the use of the term "lateral ocelli" for stemmata.
The gene orthodenticle is allelic to ocelliless, a mutation that stops ocelli from being produced. In Drosophila, the rhodopsin Rh2 is only expressed in simple eyes. While (in Drosophila at least) the genes eyeless and dachshund are both expressed in the compound eye but not the simple eye, no reported 'developmental' genes are uniquely expressed in the simple eye. Epidermal growth factor receptor (Egfr) promotes the expression of orthodenticle and possibly eyes absent (Eya) and as such is essential for simple eye formation.
Biology and health sciences
Visual system
Biology
1748511
https://en.wikipedia.org/wiki/Scylla%20serrata
Scylla serrata
Scylla serrata (often called mud crab or mangrove crab, although both terms are highly ambiguous, and also black crab) is an ecologically important species of crab found in the estuaries and mangroves of Africa, Australia, and Asia. In their most common forms, their shell colours vary from a deep, mottled green to very dark brown. Distribution The natural range of S. serrata is in the Indo-Pacific. It is found from South Africa, around the coast of the Indian Ocean, where it is especially abundant in Sri Lanka, to the Southeast Asian Archipelago, as well as from southern Japan to south-eastern Australia, northern New Zealand, and as far east as Fiji and Samoa. The species has also been introduced to Hawaii and Florida. In Hawaii, mud crabs are colloquially known as Samoan crabs, as they were originally imported from American Samoa. As these crabs are known for their robust size and dense meat content, they have been greatly sought after over the years. As a result of overharvesting, local government efforts have restricted the harvest of crabs smaller than 6 inches (width across the back), and harvesting females of any size is illegal. Ecology A study on tidal flats in Deception Bay in Queensland found juvenile crabs ( carapace width) were resident in the mangrove zone, remaining there during low tide, while subadults () migrated into the intertidal zone to feed at high tide and retreated to subtidal waters at low tide. Adults ( and larger) were caught mainly below the low-tide mark, with small numbers captured in the intertidal zone at high tide. These crabs are highly cannibalistic in nature; when crabs undergo molting, other hard-shelled individuals sometimes attack and devour the molting crabs. The females can produce a million offspring, which can grow up to in size and have a shell width up to wide.
Biology and health sciences
Crabs and hermit crabs
Animals
1748525
https://en.wikipedia.org/wiki/Diapir
Diapir
A diapir is a type of intrusion in which a more mobile and ductilely deformable material is forced into brittle overlying rocks. Depending on the tectonic environment, diapirs can range from idealized mushroom-shaped Rayleigh–Taylor instability structures in regions with low tectonic stress, such as in the Gulf of Mexico, to narrow dikes of material that move along tectonically induced fractures in surrounding rock. The term was introduced by the Romanian geologist Ludovic Mrazec, who was the first to understand the principle of salt tectonics and plasticity. The term diapir may be applied to igneous intrusions, but it is more commonly applied to non-igneous, relatively cold materials, such as salt domes and mud diapirs. If a salt diapir reaches the surface, it can flow because salt becomes ductile with a small amount of moisture, forming a salt glacier. Occurrence Differential loading causes salt deposits covered by overburden (sediment) to rise upward toward the surface and pierce the overburden, forming diapirs (including salt domes), pillars, sheets, or other geological structures. In addition to Earth-based observations, diapirism is thought to occur on Neptune's moon Triton, Jupiter's moon Europa, Saturn's moon Enceladus, and Uranus's moon Miranda. Formation Diapirs commonly intrude buoyantly upward along fractures or zones of structural weakness through denser overlying rocks. This process is known as diapirism. The resulting structures are also referred to as piercement structures. In the process, segments of the existing strata can be disconnected and pushed upwards. While moving higher, they retain many of their original properties, e.g. pressure; their pressure can be significantly different from the pressure of the shallower strata they get pushed into. Such overpressured "floaters" pose a significant risk when trying to drill through them. There is an analogy to a Galilean thermometer. Rock types such as evaporitic salt deposits and gas-charged muds are potential sources of diapirs. Diapirs also form in the Earth's mantle when a sufficient mass of hot, less dense magma assembles. Diapirism in the mantle is thought to be associated with the development of large igneous provinces and some mantle plumes. Explosive eruptions of hot, volatile-rich magma are generally referred to as diatremes. Diatremes are not usually associated with diapirs, as they are small-volume magmas which ascend by volatile plumes, not by density contrast with the surrounding mantle. Economic importance Diapirs or piercement structures are structures resulting from the penetration of overlying material. By pushing upward and piercing overlying rock layers, diapirs can form anticlines (arch-like folds), salt domes (mushroom/dome-shaped diapirs), and other structures capable of trapping hydrocarbons such as petroleum and natural gas. Igneous intrusions themselves are typically too hot to allow the preservation of preexisting hydrocarbons. Occurrences There are many salt domes and salt glaciers in the Zagros mountains, formed by the collision of two tectonic plates, the Eurasian Plate and the Arabian Plate. There are underwater salt domes in the Gulf of Mexico.
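As a rough illustration of the buoyant ascent described above, a diapir can be idealized as a sphere rising through far more viscous country rock, for which Stokes' law gives the terminal velocity v = 2Δρgr²/(9μ). The sketch below is a back-of-the-envelope estimate only; the density contrast, radius, and viscosity are assumed order-of-magnitude values, not figures from this article:

```python
def stokes_rise_velocity(delta_rho, radius, viscosity, g=9.81):
    """Terminal ascent speed (m/s) of a buoyant sphere in a viscous medium,
    from Stokes' law: v = 2 * delta_rho * g * r**2 / (9 * mu)."""
    return 2.0 * delta_rho * g * radius**2 / (9.0 * viscosity)

# Assumed values: salt ~200 kg/m^3 lighter than the surrounding sediment,
# a 1 km radius body, and an effective country-rock viscosity of 1e18 Pa s.
v = stokes_rise_velocity(delta_rho=200.0, radius=1_000.0, viscosity=1e18)
print(f"{v:.2e} m/s, roughly {v * 3.156e7 * 1000:.0f} mm per year")
```

With these inputs the rise rate comes out on the order of millimetres to centimetres per year, which is why diapirism plays out over geologic timescales.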
Physical sciences
Geologic features
Earth science
1748563
https://en.wikipedia.org/wiki/Solar%20irradiance
Solar irradiance
Solar irradiance is the power per unit area (surface power density) received from the Sun in the form of electromagnetic radiation in the wavelength range of the measuring instrument. Solar irradiance is measured in watts per square metre (W/m2) in SI units. Solar irradiance is often integrated over a given time period in order to report the radiant energy emitted into the surrounding environment (joule per square metre, J/m2) during that time period. This integrated solar irradiance is called solar irradiation, solar radiation, solar exposure, solar insolation, or insolation. Irradiance may be measured in space or at the Earth's surface after atmospheric absorption and scattering. Irradiance in space is a function of distance from the Sun, the solar cycle, and cross-cycle changes. Irradiance on the Earth's surface additionally depends on the tilt of the measuring surface, the height of the Sun above the horizon, and atmospheric conditions. Solar irradiance affects plant metabolism and animal behavior. The study and measurement of solar irradiance have several important applications, including the prediction of energy generation from solar power plants, the heating and cooling loads of buildings, climate modeling and weather forecasting, passive daytime radiative cooling applications, and space travel. Types There are several measured types of solar irradiance. Total solar irradiance (TSI) is a measure of the solar power over all wavelengths per unit area incident on the Earth's upper atmosphere. It is measured facing (pointing at / parallel to) the incoming sunlight (i.e. the flux through a surface perpendicular to the incoming sunlight; other angles would not be TSI and would be reduced by the dot product). The solar constant is a conventional measure of mean TSI at a distance of one astronomical unit (AU). Direct normal irradiance (DNI), or beam radiation, is measured at the surface of the Earth at a given location with a surface element perpendicular to the Sun direction. It excludes diffuse solar radiation (radiation that is scattered or reflected by atmospheric components). Direct irradiance is equal to the extraterrestrial irradiance above the atmosphere minus the atmospheric losses due to absorption and scattering. Losses depend on time of day (length of light's path through the atmosphere depending on the solar elevation angle), cloud cover, moisture content and other contents. The irradiance above the atmosphere also varies with time of year (because the distance to the Sun varies), although this effect is generally less significant compared to the effect of losses on DNI. Diffuse horizontal irradiance (DHI), or diffuse sky radiation, is the radiation at the Earth's surface from light scattered by the atmosphere. It is measured on a horizontal surface with radiation coming from all points in the sky excluding circumsolar radiation (radiation coming from the sun disk). There would be almost no DHI in the absence of atmosphere. Global horizontal irradiance (GHI) is the total irradiance from the Sun on a horizontal surface on Earth. It is the sum of direct irradiance (after accounting for the solar zenith angle of the Sun z) and diffuse horizontal irradiance: GHI = DHI + DNI × cos(z). Global tilted irradiance (GTI) is the total radiation received on a surface with defined tilt and azimuth, fixed or Sun-tracking. GTI can be measured or modeled from GHI, DNI, and DHI. It is often used as a reference for photovoltaic power plants, whose photovoltaic modules are mounted on fixed or tracking structures.
Global normal irradiance (GNI) is the total irradiance from the Sun at the surface of Earth at a given location with a surface element perpendicular to the Sun. Spectral versions of the above irradiances (e.g. spectral TSI, spectral DNI, etc.) are any of the above with units divided either by metre or nanometre (for a spectral graph as a function of wavelength), or per Hz (for a spectral function with an x-axis of frequency). When one plots such a spectral distribution as a graph, the integral of the function (the area under the curve) will be the (non-spectral) irradiance. For example, say one had a solar cell on the surface of the Earth facing straight up, and had DNI in units of W/m^2 per nm, graphed as a function of wavelength (in nm). Then the unit of the integral (W/m^2) is the product of those two units. Units The SI unit of irradiance is watts per square metre (W/m2 = Wm−2). The unit of insolation often used in the solar power industry is kilowatt hours per square metre (kWh/m2). The Langley is an alternative unit of insolation. One Langley is one thermochemical calorie per square centimetre or 41,840 J/m2. At the top of Earth's atmosphere The average annual solar radiation arriving at the top of the Earth's atmosphere is about 1361 W/m2. This represents the power per unit area of solar irradiance across the spherical surface surrounding the Sun with a radius equal to the distance to the Earth (1 AU). This means that the approximately circular disc of the Earth, as viewed from the Sun, receives a roughly stable 1361 W/m2 at all times. The area of this circular disc is πRE^2, in which RE is the radius of the Earth. Because the Earth is approximately spherical, it has total area 4πRE^2, meaning that the solar radiation arriving at the top of the atmosphere, averaged over the entire surface of the Earth, is simply divided by four to get 340 W/m2. In other words, averaged over the year and the day, the Earth's atmosphere receives 340 W/m2 from the Sun. This figure is important in radiative forcing. Derivation The distribution of solar radiation at the top of the atmosphere is determined by Earth's sphericity and orbital parameters. This applies to any unidirectional beam incident to a rotating sphere. Insolation is essential for numerical weather prediction and understanding seasons and climatic change. Application to ice ages is known as Milankovitch cycles. Distribution is based on a fundamental identity from spherical trigonometry, the spherical law of cosines: cos(c) = cos(a) cos(b) + sin(a) sin(b) cos(C), where a, b and c are arc lengths, in radians, of the sides of a spherical triangle, and C is the angle in the vertex opposite the side which has arc length c. Applied to the calculation of the solar zenith angle Θ, the spherical law of cosines is used with C = h (the hour angle), c = Θ, a = ½π − φ (where φ is the latitude) and b = ½π − δ (where δ is the declination), giving cos(Θ) = sin(φ) sin(δ) + cos(φ) cos(δ) cos(h). This equation can also be derived from a more general formula, cos(Θ) = sin(δ) sin(φ) cos(β) − sin(δ) cos(φ) sin(β) cos(γ) + cos(δ) cos(φ) cos(β) cos(h) + cos(δ) sin(φ) sin(β) cos(γ) cos(h) + cos(δ) sin(β) sin(γ) sin(h), where β is an angle from the horizontal and γ is an azimuth angle (setting β = 0 recovers the horizontal-surface case above). The separation of Earth from the Sun can be denoted RE and the mean distance can be denoted R0, approximately 1 astronomical unit (AU). The solar constant is denoted S0. The solar flux density (insolation) onto a plane tangent to the sphere of the Earth, but above the bulk of the atmosphere (elevation 100 km or greater), is Q = S0 (R0/RE)^2 cos(Θ) when cos(Θ) > 0, and Q = 0 otherwise (night). The average of Q over a day, here denoted Qday, is the average of Q over one rotation, with the hour angle h progressing from −π to π: Qday = (1/2π) ∫ from −π to π of Q dh. Let h0 be the hour angle when Q becomes positive. This could occur at sunrise, when Θ = ½π, or for h0 as a solution of sin(φ) sin(δ) + cos(φ) cos(δ) cos(h0) = 0, i.e. cos(h0) = −tan(φ) tan(δ). If tan(φ) tan(δ) > 1, then the sun does not set and the sun is already risen at h = π, so h0 = π. If tan(φ) tan(δ) < −1, the sun does not rise and Qday = 0.
The factor (R0/RE)^2 is nearly constant over the course of a day, and can be taken outside the integral. Therefore: Qday = (S0/π) (R0/RE)^2 [h0 sin(φ) sin(δ) + cos(φ) cos(δ) sin(h0)]. Let θ be the conventional polar angle describing a planetary orbit. Let θ = 0 at the March equinox. The declination δ as a function of orbital position is δ = ε sin(θ), where ε is the obliquity. (Note: The correct formula, valid for any axial tilt, is sin(δ) = sin(ε) sin(θ).) The conventional longitude of perihelion ϖ is defined relative to the March equinox, so for the elliptical orbit RE = R0/(1 + e cos(θ − ϖ)) (correct to first order in the eccentricity e), and hence (R0/RE)^2 = (1 + e cos(θ − ϖ))^2. With knowledge of ϖ, ε and e from astrodynamical calculations and S0 from a consensus of observations or theory, Qday can be calculated for any latitude φ and θ. Because of the elliptical orbit, and as a consequence of Kepler's second law, θ does not progress uniformly with time. Nevertheless, θ = 0° is exactly the time of the March equinox, θ = 90° is exactly the time of the June solstice, θ = 180° is exactly the time of the September equinox and θ = 270° is exactly the time of the December solstice. A simplified equation for irradiance on a given day is Q ≈ S0 (1 + 0.034 cos(2πn/365.25)), where n is the number of the day of the year. Variation Total solar irradiance (TSI) changes slowly on decadal and longer timescales. The variation during solar cycle 21 was about 0.1% (peak-to-peak). In contrast to older reconstructions, most recent TSI reconstructions point to an increase of only about 0.05% to 0.1% between the 17th century Maunder Minimum and the present. However, current understanding based on various lines of evidence suggests that the lower values for the secular trend are more probable. In particular, a secular trend greater than 2 Wm−2 is considered highly unlikely. Ultraviolet irradiance (EUV) varies by approximately 1.5 percent from solar maxima to minima, for 200 to 300 nm wavelengths. However, a proxy study estimated that UV has increased by 3.0% since the Maunder Minimum. Some variations in insolation are not due to solar changes but rather due to the Earth moving between its perihelion and aphelion, or changes in the latitudinal distribution of radiation. These orbital changes or Milankovitch cycles have caused radiance variations of as much as 25% (locally; global average changes are much smaller) over long periods. The most recent significant event was an axial tilt of 24° during boreal summer near the Holocene climatic optimum. Obtaining a time series for Qday for a particular time of year, and particular latitude, is a useful application in the theory of Milankovitch cycles. For example, at the summer solstice, the declination δ is equal to the obliquity ε, and the distance from the Sun is given by R0/RE = 1 + e cos(θ − ϖ) = 1 + e sin(ϖ). For this summer solstice calculation, the role of the elliptical orbit is entirely contained within the important product e sin(ϖ), the precession index, whose variation dominates the variations in insolation at 65°N when eccentricity is large. For the next 100,000 years, with variations in eccentricity being relatively small, variations in obliquity dominate. Measurement The space-based TSI record comprises measurements from more than ten radiometers and spans three solar cycles. All modern TSI satellite instruments employ active cavity electrical substitution radiometry. This technique measures the electrical heating needed to maintain an absorptive blackened cavity in thermal equilibrium with the incident sunlight which passes through a precision aperture of calibrated area. The aperture is modulated via a shutter. Accuracy uncertainties of < 0.01% are required to detect long-term solar irradiance variations, because expected changes are in the range 0.05–0.15 W/m2 per century.
Intertemporal calibration In orbit, radiometric calibrations drift for reasons including solar degradation of the cavity, electronic degradation of the heater, surface degradation of the precision aperture and varying surface emissions and temperatures that alter thermal backgrounds. These calibrations require compensation to preserve consistent measurements. For various reasons, the sources do not always agree. The Solar Radiation and Climate Experiment/Total Irradiance Measurement (SORCE/TIM) TSI values are lower than prior measurements by the Earth Radiometer Budget Experiment (ERBE) on the Earth Radiation Budget Satellite (ERBS), VIRGO on the Solar Heliospheric Observatory (SoHO) and the ACRIM instruments on the Solar Maximum Mission (SMM), Upper Atmosphere Research Satellite (UARS) and ACRIMSAT. Pre-launch ground calibrations relied on component rather than system-level measurements, since irradiance standards at the time lacked sufficient absolute accuracies. Measurement stability involves exposing different radiometer cavities to different accumulations of solar radiation to quantify exposure-dependent degradation effects. These effects are then compensated for in the final data. Overlapping observations permit corrections for both absolute offsets and validation of instrumental drifts. Uncertainties of individual observations exceed irradiance variability (~0.1%). Thus, instrument stability and measurement continuity are relied upon to compute real variations. Long-term radiometer drifts can potentially be mistaken for irradiance variations which can be misinterpreted as affecting climate. Examples include the issue of the irradiance increase between cycle minima in 1986 and 1996, evident only in the ACRIM composite (and not the model), and the low irradiance levels in the PMOD composite during the 2008 minimum. Despite the fact that ACRIM I, ACRIM II, ACRIM III, VIRGO and TIM all track degradation with redundant cavities, notable and unexplained differences remain in irradiance and the modeled influences of sunspots and faculae. Persistent inconsistencies Disagreement among overlapping observations indicates unresolved drifts that suggest the TSI record is not sufficiently stable to discern solar changes on decadal time scales. Only the ACRIM composite shows irradiance increasing by ~1 W/m2 between 1986 and 1996; this change is also absent in the model. Recommendations to resolve the instrument discrepancies include validating optical measurement accuracy by comparing ground-based instruments to laboratory references, such as those at the National Institute of Standards and Technology (NIST); validating aperture area calibrations at NIST using spares from each instrument; and applying diffraction corrections from the view-limiting aperture. For ACRIM, NIST determined that diffraction from the view-limiting aperture contributes a 0.13% signal not accounted for in the three ACRIM instruments. This correction lowers the reported ACRIM values, bringing ACRIM closer to TIM. In ACRIM and all other instruments but TIM, the aperture is deep inside the instrument, with a larger view-limiting aperture at the front. Depending on edge imperfections, this can directly scatter light into the cavity. This design admits into the front part of the instrument two to three times the amount of light intended to be measured; if not completely absorbed or scattered, this additional light produces erroneously high signals.
In contrast, TIM's design places the precision aperture at the front so that only the desired light enters. Variations from other sources likely include annual systematic effects in the ACRIM III data that are nearly in phase with the Sun–Earth distance, and 90-day spikes in the VIRGO data coincident with SoHO spacecraft maneuvers that were most apparent during the 2008 solar minimum. TSI Radiometer Facility TIM's high absolute accuracy creates new opportunities for measuring climate variables. The TSI Radiometer Facility (TRF) is a cryogenic radiometer that operates in a vacuum with controlled light sources. L-1 Standards and Technology (LASP) designed and built the system, completed in 2008. It was calibrated for optical power against the NIST Primary Optical Watt Radiometer, a cryogenic radiometer that maintains the NIST radiant power scale to an uncertainty of 0.02% (1σ). As of 2011, TRF was the only facility that approached the desired <0.01% uncertainty for pre-launch validation of solar radiometers measuring irradiance (rather than merely optical power) at solar power levels and under vacuum conditions. TRF encloses both the reference radiometer and the instrument under test in a common vacuum system that contains a stationary, spatially uniform illuminating beam. A precision aperture with an area calibrated to 0.0031% (1σ) determines the beam's measured portion. The test instrument's precision aperture is positioned in the same location, without optically altering the beam, for direct comparison to the reference. Variable beam power provides linearity diagnostics, and variable beam diameter diagnoses scattering from different instrument components. The Glory/TIM and PICARD/PREMOS flight instrument absolute scales are now traceable to the TRF in both optical power and irradiance. The resulting high accuracy reduces the consequences of any future gap in the solar irradiance record. 2011 reassessment The most probable value of TSI representative of solar minimum is 1360.8 ± 0.5 W/m2, lower than the earlier accepted value of 1365.4 ± 1.3 W/m2 established in the 1990s. The new value came from SORCE/TIM and radiometric laboratory tests. Scattered light is a primary cause of the higher irradiance values measured by earlier satellites, in which the precision aperture is located behind a larger, view-limiting aperture. The TIM uses a view-limiting aperture that is smaller than the precision aperture, which precludes this spurious signal. The new estimate reflects better measurement rather than a change in solar output. A regression-model-based split of the relative proportions of sunspot and facular influences from SORCE/TIM data accounts for 92% of observed variance and tracks the observed trends to within TIM's stability band. This agreement provides further evidence that TSI variations are primarily due to solar surface magnetic activity. Instrument inaccuracies add a significant uncertainty in determining Earth's energy balance. The energy imbalance has been variously measured (during a deep solar minimum of 2005–2010) to be +0.58 ± 0.15 W/m2, +0.60 ± 0.17 W/m2 and +0.85 W/m2. Estimates from space-based measurements range from +3 to 7 W/m2. SORCE/TIM's lower TSI value reduces this discrepancy by 1 W/m2. This difference between the new lower TIM value and earlier TSI measurements corresponds to a climate forcing of −0.8 W/m2, which is comparable to the energy imbalance. On Earth's surface Average annual solar radiation arriving at the top of the Earth's atmosphere is roughly 1361 W/m2. 
The Sun's rays are attenuated as they pass through the atmosphere, leaving a maximum normal surface irradiance of approximately 1000 W/m2 at sea level on a clear day. When 1361 W/m2 arrives above the atmosphere (when the Sun is at the zenith in a cloudless sky), direct sun is about 1050 W/m2, and global radiation on a horizontal surface at ground level is about 1120 W/m2. The latter figure includes radiation scattered or re-emitted by the atmosphere and surroundings. The actual figure varies with the Sun's angle and atmospheric circumstances. Ignoring clouds, the daily average insolation for the Earth is approximately 6 kWh/m2 (21.6 MJ/m2). The output of, for example, a photovoltaic panel partly depends on the angle of the sun relative to the panel. One Sun is a unit of power flux, not a standard value for actual insolation. Sometimes this unit is referred to as a Sol, not to be confused with a sol, meaning one solar day. Absorption and reflection Part of the radiation reaching an object is absorbed and the remainder reflected. Usually, the absorbed radiation is converted to thermal energy, increasing the object's temperature. Human-made or natural systems, however, can convert part of the absorbed radiation into another form, such as electricity or chemical bonds, as in the case of photovoltaic cells or plants. The proportion of reflected radiation is the object's reflectivity or albedo. Projection effect Insolation onto a surface is largest when the surface directly faces (is normal to) the sun. As the angle between the surface and the Sun moves from normal, the insolation is reduced in proportion to the angle's cosine; see effect of Sun angle on climate. In the figure, the angle shown is between the ground and the sunbeam rather than between the vertical direction and the sunbeam; hence the sine rather than the cosine is appropriate. A sunbeam one mile wide arrives from directly overhead, and another at a 30° angle to the horizontal. The sine of a 30° angle is 1/2, whereas the sine of a 90° angle is 1. Therefore, the angled sunbeam spreads the light over twice the area, and consequently half as much light falls on each square mile. This projection effect is the main reason why Earth's polar regions are much colder than equatorial regions. On an annual average, the poles receive less insolation than does the equator, because the poles are always angled more away from the Sun than the tropics, and moreover receive no insolation at all for the six months of their respective winters. Absorption effect At a lower angle, the light must also travel through more atmosphere. This attenuates it (by absorption and scattering), further reducing insolation at the surface. Attenuation is governed by the Beer–Lambert law, namely that the transmittance or fraction of insolation reaching the surface decreases exponentially in the optical depth or absorbance (the two notions differing only by a constant factor of ln(10) ≈ 2.303) of the path of insolation through the atmosphere. For any given short length of the path, the optical depth is proportional to the number of absorbers and scatterers along that length, typically increasing with decreasing altitude. The optical depth of the whole path is then the integral (sum) of those optical depths along the path. When the density of absorbers is layered, that is, depends much more on vertical than horizontal position in the atmosphere, to a good approximation the optical depth is inversely proportional to the projection effect, that is, to the cosine of the zenith angle. 
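To make the combined projection and absorption effects concrete, the sketch below evaluates irradiance on a horizontal surface as S0 · cos(z) · exp(−τ0/cos(z)); the clear-sky vertical optical depth τ0 = 0.14 is an illustrative assumption, not a measured value.

```python
import math

S0 = 1361.0    # top-of-atmosphere normal irradiance, W/m^2
TAU0 = 0.14    # assumed clear-sky vertical optical depth (illustrative)

def surface_irradiance(zenith_deg):
    """Horizontal-surface irradiance combining the projection effect
    (cosine of the zenith angle) with Beer-Lambert attenuation along
    a slant path whose optical depth scales as 1/cos(zenith)."""
    mu = math.cos(math.radians(zenith_deg))
    if mu <= 0:
        return 0.0  # sun at or below the horizon
    return S0 * mu * math.exp(-TAU0 / mu)

for z in (0, 30, 60, 80, 85):
    print(f"zenith {z:2d} deg: {surface_irradiance(z):7.1f} W/m^2")
```

Near the horizon the exponential factor collapses much faster than the cosine factor, which is the crossover described next.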
Since transmittance decreases exponentially with increasing optical depth, as the sun approaches the horizon there comes a point when absorption dominates projection for the rest of the day. With a relatively high level of absorbers this can be a considerable portion of the late afternoon, and likewise of the early morning. Conversely, in the (hypothetical) total absence of absorption, the optical depth remains zero at all altitudes of the sun, that is, transmittance remains 1, and so only the projection effect applies. Solar potential maps Assessment and mapping of solar potential at the global, regional and country levels have been the subject of significant academic and commercial interest. One of the earliest attempts to carry out comprehensive mapping of solar potential for individual countries was the Solar and Wind Energy Resource Assessment (SWERA) project, funded by the United Nations Environment Programme and carried out by the US National Renewable Energy Laboratory (NREL). The National Aeronautics and Space Administration (NASA) provides data for global solar potential maps through the CERES experiment and the POWER project. Global maps from many other similar institutes are available on the Global Atlas for Renewable Energy provided by the International Renewable Energy Agency. A number of commercial firms now exist to provide solar resource data to solar power developers, including 3E, Clean Power Research, SoDa Solar Radiation Data, Solargis, Vaisala (previously 3Tier), and Vortex, and these firms have often provided solar potential maps for free. The Global Solar Atlas was launched by the World Bank in January 2017, using data provided by Solargis, to provide a single source for high-quality solar data, maps, and GIS layers covering all countries. Solar radiation maps are built using databases derived from satellite imagery, for example using visible images from the Meteosat Prime satellite. A method is applied to the images to determine solar radiation. One well-validated satellite-to-irradiance model is the SUNY model, whose accuracy has been evaluated extensively. In general, solar irradiance maps are accurate, especially for Global Horizontal Irradiance. Applications Solar power Solar irradiation figures are used to plan the deployment of solar power systems. In many countries, the figures can be obtained from an insolation map or from insolation tables that reflect data over the prior 30–50 years. Different solar power technologies are able to use different components of the total irradiation. While solar photovoltaic panels are able to convert both direct and diffuse irradiation to electricity, concentrated solar power can only operate efficiently with direct irradiation, making these systems suitable only in locations with relatively low cloud cover. Because solar collector panels are almost always mounted at an angle towards the Sun, insolation figures must be adjusted to find the amount of sunlight falling on the panel. This prevents estimates that are inaccurately low for winter and inaccurately high for summer. It also means that the amount of sunlight falling on a solar panel at high latitude is not as low compared to one at the equator as would appear from just considering insolation on a horizontal surface. Horizontal insolation values range from 800 to 950 kWh/(kWp·y) in Norway to up to 2,900 kWh/(kWp·y) in Australia. But a properly tilted panel at 50° latitude receives 1,860 kWh/m2/y, compared to 2,370 at the equator. 
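The tilt adjustment just described is essentially a change of incidence angle. The sketch below covers only the direct-beam component, using the standard incidence-angle relation for a tilted surface; the tilt, zenith and azimuth values are illustrative assumptions, and a real estimate would also include diffuse and ground-reflected components.

```python
import math

def incidence_cosine(zenith_deg, sun_azimuth_deg, tilt_deg, panel_azimuth_deg):
    """Cosine of the angle between the sun's rays and a tilted panel's
    normal (direct-beam geometry only; diffuse light is ignored)."""
    z = math.radians(zenith_deg)
    b = math.radians(tilt_deg)
    dg = math.radians(sun_azimuth_deg - panel_azimuth_deg)
    return max(0.0, math.cos(z) * math.cos(b) + math.sin(z) * math.sin(b) * math.cos(dg))

# A panel tilted 50 deg toward the equator at 50 deg latitude sees the
# noon winter-solstice sun (zenith ~73.5 deg) almost face-on, whereas a
# horizontal surface catches only cos(73.5 deg) ~ 0.28 of the beam.
print(incidence_cosine(73.5, 180, 50, 180))   # ~0.92 for the tilted panel
print(math.cos(math.radians(73.5)))           # ~0.28 for a horizontal one
```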
In fact, under clear skies a solar panel placed horizontally at the north or south pole at midsummer receives more sunlight over 24 hours (cosine of the angle of incidence equal to sin(23.5°), or about 0.40) than a horizontal panel at the equator at the equinox (average cosine equal to 1/π, or about 0.32). Photovoltaic panels are rated under standard conditions to determine the Wp (peak watts) rating, which can then be used with insolation, adjusted by factors such as tilt, tracking and shading, to determine the expected output. Buildings In construction, insolation is an important consideration when designing a building for a particular site. The projection effect can be used to design buildings that are cool in summer and warm in winter, by providing vertical windows on the equator-facing side of the building (the south face in the northern hemisphere, or the north face in the southern hemisphere): this maximizes insolation in the winter months when the Sun is low in the sky and minimizes it in the summer when the Sun is high. (The Sun's north–south path through the sky spans 47° through the year.) Civil engineering In civil engineering and hydrology, numerical models of snowmelt runoff use observations of insolation. This permits estimation of the rate at which water is released from a melting snowpack. Field measurement is accomplished using a pyranometer. Climate research Irradiance plays a part in climate modeling and weather forecasting. A non-zero average global net radiation at the top of the atmosphere is indicative of Earth's thermal disequilibrium as imposed by climate forcing. The impact of the lower 2014 TSI value on climate models is unknown. A few tenths of a percent change in the absolute TSI level is typically considered to be of minimal consequence for climate simulations, but the new measurements require climate model parameter adjustments. Experiments with GISS Model 3 investigated the sensitivity of model performance to the TSI absolute value during the present and pre-industrial epochs, and describe, for example, how the irradiance reduction is partitioned between the atmosphere and surface and the effects on outgoing radiation. Assessing the impact of long-term irradiance changes on climate requires greater instrument stability combined with reliable global surface temperature observations to quantify climate response processes to radiative forcing on decadal time scales. The observed 0.1% irradiance increase imparts 0.22 W/m2 of climate forcing, which suggests a transient climate response of 0.6 °C per W/m2. This response is larger by a factor of 2 or more than in the IPCC-assessed 2008 models, possibly appearing in the models' heat uptake by the ocean. Global cooling Measuring a surface's capacity to reflect solar irradiance is essential to passive daytime radiative cooling, which has been proposed as a method of reversing local and global temperature increases associated with global warming. To measure the cooling power of a passive radiative cooling surface, the absorbed powers of both atmospheric and solar radiation must be quantified. On a clear day, solar irradiance can reach 1000 W/m2, with a diffuse component between 50 and 100 W/m2. The cooling power of a passive daytime radiative cooling surface has been estimated at an average of ~100–150 W/m2. Space Insolation is the primary variable affecting equilibrium temperature in spacecraft design and planetology. Solar activity and irradiance measurement is a concern for space travel. 
For example, the American space agency, NASA, launched its Solar Radiation and Climate Experiment (SORCE) satellite with Solar Irradiance Monitors.
Physical sciences
Climate change
Earth science
1749134
https://en.wikipedia.org/wiki/Pharming%20%28genetics%29
Pharming (genetics)
Pharming, a portmanteau of farming and pharmaceutical, refers to the use of genetic engineering to insert genes that code for useful pharmaceuticals into host animals or plants that would otherwise not express those genes, thus creating a genetically modified organism (GMO). Pharming is also known as molecular farming, molecular pharming, or biopharming. The products of pharming are recombinant proteins or their metabolic products. Recombinant proteins are most commonly produced using bacteria or yeast in a bioreactor, but pharming offers the advantage to the producer that it does not require expensive infrastructure, and production capacity can be quickly scaled to meet demand, at greatly reduced cost. History The first recombinant plant-derived protein (PDP) was human serum albumin, initially produced in 1990 in transgenic tobacco and potato plants. Open field growing trials of these crops began in the United States in 1992 and have taken place every year since. While the United States Department of Agriculture has approved planting of pharma crops in every state, most testing has taken place in Hawaii, Nebraska, Iowa, and Wisconsin. In the early 2000s, the pharming industry was robust. Proof of concept has been established for the production of many therapeutic proteins, including antibodies, blood products, cytokines, growth factors, hormones, recombinant enzymes and human and veterinary vaccines. By 2003, several PDP products for the treatment of human diseases were under development by nearly 200 biotech companies, including recombinant gastric lipase for the treatment of cystic fibrosis, and antibodies for the prevention of dental caries and the treatment of non-Hodgkin's lymphoma. However, in late 2002, just as ProdiGene was ramping up production of trypsin for commercial launch, it was discovered that volunteer plants (left over from the prior harvest) of one of its GM corn products had been harvested with the conventional soybean crop later planted in that field. ProdiGene was fined $250,000 and ordered by the USDA to pay over $3 million in cleanup costs. This raised a furor and set the pharming field back dramatically. Many companies went bankrupt as they faced difficulties getting permits for field trials and investors fled. In reaction, APHIS introduced stricter regulations for pharming field trials in the US in 2003. In 2005, Anheuser-Busch threatened to boycott rice grown in Missouri because of plans by Ventria Bioscience to grow pharm rice in the state. A compromise was reached, but Ventria withdrew its permit to plant in Missouri due to unrelated circumstances. The industry has slowly recovered, by focusing on pharming in simple plants grown in bioreactors and on growing GM crops in greenhouses. Some companies and academic groups have continued with open-field trials of GM crops that produce drugs. In 2006, Dow AgroSciences received USDA approval to market a vaccine for poultry against Newcastle disease, produced in plant cell culture – the first plant-produced vaccine approved in the U.S. In mammals Historical development Milk is presently the most mature system for producing recombinant proteins from transgenic organisms. Blood, egg white, seminal plasma, and urine are other theoretically possible systems, but all have drawbacks. Blood, for instance, as of 2012 cannot store high levels of stable recombinant proteins, and biologically active proteins in blood may alter the health of the animals. 
Expression in the milk of a mammal, such as a cow, sheep, or goat, is a common application, as milk production is plentiful and purification from milk is relatively easy. Hamsters and rabbits have also been used in preliminary studies because of their faster breeding. One approach to this technology is the creation of a transgenic mammal that can produce the biopharmaceutical in its milk (or blood or urine). Once an animal is produced, typically using the pronuclear microinjection method, it becomes efficacious to use cloning technology to create additional offspring that carry the favorable modified genome. In February 2009, the US FDA granted marketing approval for the first drug to be produced in genetically modified livestock. The drug is called ATryn, an antithrombin protein purified from the milk of genetically modified goats. Marketing permission was granted by the European Medicines Agency in August 2006. Patentability issues As indicated above, some mammals typically used for food production (such as goats, sheep, pigs, and cows) have been modified to produce non-food products, a practice sometimes called pharming. Use of genetically modified goats has been approved by the FDA and EMA to produce ATryn, i.e. recombinant antithrombin, an anticoagulant protein drug. These products "produced by turning animals into drug-manufacturing 'machines' by genetically modifying them" are sometimes termed biopharmaceuticals. The patentability of such biopharmaceuticals and their process of manufacture is uncertain. The biopharmaceuticals themselves, so made, are probably unpatentable, assuming that they are chemically identical to the preexisting drugs that they imitate. Several 19th-century United States Supreme Court decisions hold that a previously known natural product manufactured by artificial means cannot be patented. An argument can be made for the patentability of the process for manufacturing a biopharmaceutical, however, because genetically modifying animals so that they will produce the drug is dissimilar to previous methods of manufacture; moreover, one Supreme Court decision seems to hold open that possibility. On the other hand, it has been suggested that the recent Supreme Court decision in Mayo v. Prometheus may create a problem in that, in accordance with the ruling in that case, "it may be said that such and such genes manufacture this protein in the same way they always did in a mammal, they produce the same product, and the genetic modification technology used is conventional, so that the steps of the process 'add nothing to the laws of nature that is not already present'." If that argument prevailed in court, the process would also be ineligible for patent protection. This issue has not yet been decided in the courts. In plants Plant-made pharmaceuticals (PMPs), also referred to as pharming, is a sub-sector of the biotechnology industry that involves the process of genetically engineering plants so that they can produce certain types of therapeutically important proteins and associated molecules, such as peptides and secondary metabolites. The proteins and molecules can then be harvested and used to produce pharmaceuticals. Arabidopsis is often used as a model organism to study gene expression in plants, while actual production may be carried out in maize, rice, potatoes, tobacco, flax or safflower. 
Tobacco has been a highly popular choice of organism for the expression of transgenes, as it is easily transformed, produces abundant tissue, and survives well in vitro and in greenhouses. The advantage of rice and flax is that they are self-pollinating, and thus gene flow issues (see below) are avoided. However, human error could still result in modified crops entering the food supply. Using a minor crop such as safflower or tobacco avoids the greater political pressures and risk to the food supply involved with using staple crops such as beans or rice. Expression of proteins in plant cell or hairy root cultures also minimizes the risk of gene transfer, but at a higher cost of production. Sterile hybrids may also be used for the bioconfinement of transgenic plants, although stable lines cannot be established. Grain crops are sometimes chosen for pharming because protein products targeted to the endosperm of cereals have been shown to have high heat stability. This characteristic makes them an appealing target for the production of edible vaccines, as viral coat proteins stored in grains do not require cold storage the way many vaccines currently do. Maintaining a temperature-controlled supply chain for vaccines is often difficult when delivering vaccines to developing countries. Most commonly, plant transformation is carried out using Agrobacterium tumefaciens. The protein of interest is often expressed under the control of the cauliflower mosaic virus 35S promoter (CaMV35S), a powerful constitutive promoter for driving expression in plants. Localization signals may be attached to the protein of interest to cause it to accumulate in a specific sub-cellular location, such as chloroplasts or vacuoles. This is done to improve yields, to simplify purification, or to ensure that the protein folds properly. Recently, the inclusion of antisense genes in expression cassettes has been shown to have potential for improving the plant pharming process. Researchers in Japan transformed rice with an antisense SPK gene, which disrupts starch accumulation in rice seeds, so that products would accumulate in a watery sap that is easier to purify. Recently, several non-crop plants, such as the duckweed Lemna minor and the moss Physcomitrella patens, have been shown to be useful for the production of biopharmaceuticals. These frugal organisms can be cultivated in bioreactors (as opposed to being grown in fields), secrete the transformed proteins into the growth medium and thus substantially reduce the burden of protein purification in preparing recombinant proteins for medical use. In addition, both species can be engineered to secrete proteins with human patterns of glycosylation, an improvement over conventional plant gene-expression systems. Biolex Therapeutics developed a duckweed-based expression platform; it sold the business to Synthon and declared bankruptcy in 2012. Additionally, an Israeli company, Protalix, has developed a method to produce therapeutics in cultured transgenic carrot or tobacco cells. Protalix and its partner, Pfizer, received FDA approval to market their drug, taliglucerase alfa (Elelyso), as a treatment for Gaucher's disease in 2012. Regulation The regulation of genetic engineering concerns the approaches taken by governments to assess and manage the risks associated with the development and release of genetically modified crops. 
There are differences in the regulation of GM crops – including those used for pharming – between countries, with some of the most marked differences occurring between the USA and Europe. Regulation varies in a given country depending on the intended use of the products of the genetic engineering. For example, a crop not intended for food use is generally not reviewed by authorities responsible for food safety. Controversy There are controversies around GMOs generally on several levels, including whether making them is ethical, issues concerning intellectual property and market dynamics, environmental effects of GM crops, and GM crops' role in industrial agriculture more generally. There are also specific controversies around pharming. Advantages Plants do not carry pathogens that might be dangerous to human health. Additionally, at the level of pharmacologically active proteins, plants contain no proteins similar to human proteins. On the other hand, plants are still sufficiently closely related to animals and humans that they are able to correctly process and configure both animal and human proteins. Their seeds and fruits also provide sterile packaging containers for the valuable therapeutics and guarantee a certain storage life. Global demand for pharmaceuticals is at unprecedented levels. Expanding the existing microbial systems, although feasible for some therapeutic products, is not a satisfactory option on several grounds. Many proteins of interest are too complex to be made by microbial systems or by protein synthesis. These proteins are currently produced in animal cell cultures, but the resulting product is often prohibitively expensive for many patients. For these reasons, science has been exploring other options for producing proteins of therapeutic value. These pharmaceutical crops could become extremely beneficial in developing countries. The World Health Organization estimates that nearly 3 million people die each year from vaccine-preventable diseases, mostly in Africa. Diseases such as measles and hepatitis lead to deaths in countries where people cannot afford the high costs of vaccines; pharm crops could help solve this problem. Disadvantages While molecular farming is one application of genetic engineering, there are concerns that are unique to it. In the case of genetically modified (GM) foods, concerns focus on the safety of the food for human consumption. In response, it has been argued that genes that enhance a crop in some way, such as drought resistance or pesticide resistance, do not affect the food itself. Other GM foods in development, such as fruits designed to ripen faster or grow larger, are believed not to affect humans any differently from non-GM varieties. In contrast, molecular farming is not intended for crops destined for the food chain. It produces plants that contain physiologically active compounds that accumulate in the plant's tissues. Considerable attention is therefore focused on the restraint and caution necessary to protect both consumer health and environmental biodiversity. The fact that the plants are used to produce drugs alarms activists. They worry that once production begins, the altered plants might find their way into the food supply or cross-pollinate with conventional, non-GM crops. These concerns have historical validation from the ProdiGene incident, and from the StarLink incident, in which GMO corn accidentally ended up in commercial food products. 
Activists are also concerned about the power of business. A report by the Canadian Food Inspection Agency says that U.S. demand alone for biotech pharmaceuticals is expanding at 13 percent annually and was expected to reach a market value of $28.6 billion in 2004. Pharming is expected to be worth $100 billion globally by 2020. List of originators (companies and universities), research projects and products Please note that this list is by no means exhaustive. Dow AgroSciences – poultry vaccine against Newcastle disease virus (first PMP to be approved for marketing by the USDA Center for Veterinary Biologics). Dow never intended to market the vaccine. "'Dow Agrosciences used the animal vaccine as an example to completely run through the process. A new platform needs to be approved, which can be difficult when authorities get in contact with it for the first time', explains the plant physiologist Stefan Schillberg, head of the Molecular Biology Division at the Fraunhofer Institute for Molecular Biology and Applied Ecology Aachen." Fraunhofer Institute for Molecular Biology and Applied Ecology, with sites in Germany, the US, and Chile, is the lead institute of the Pharma Planta consortium of 33 partner organizations from 12 European countries and South Africa, funded by the European Commission. Pharma Planta is developing systems for plant production of proteins in greenhouses in the European regulatory framework. It is collaborating on biosimilars with PlantForm and PharmaPraxis (see below). Genzyme – antithrombin III in goat milk GTC Biotherapeutics – ATryn (recombinant human antithrombin) in goat milk Icon Genetics produces therapeutics in transiently infected Nicotiana benthamiana (a relative of tobacco) plants in greenhouses in Halle, Germany, or in fields. Its first product is a vaccine for a cancer, non-Hodgkin's lymphoma. Iowa State University – immunogenic protein from E. coli bacteria in pollen-free corn as a potential vaccine against E. coli for animals and humans Kentucky Bioprocessing took over Large Scale Biology's facilities in Owensboro, Kentucky, and offers contract biomanufacturing services in tobacco plants, grown in greenhouses or in open fields. Medicago Inc. – pre-clinical trials of influenza vaccine made in transiently infected Nicotiana benthamiana plants in greenhouses. Medicago grew virus-like particles in the Australian weed Nicotiana benthamiana for development of a candidate vaccine against the COVID-19 virus, initiating a Phase I clinical trial in July 2020. PharmaPraxis – developing biosimilars in collaboration with PlantForm (see below) and Fraunhofer. Pharming – C1 inhibitor, human collagen 1, fibrinogen (with the American Red Cross), and lactoferrin in cow milk. The intellectual property behind the fibrinogen project was acquired from PPL Therapeutics when PPL went bankrupt in 2004. Phyton Biotech uses plant cell culture systems to manufacture active pharmaceutical ingredients based on taxanes, including paclitaxel and docetaxel Planet Biotechnology – antibodies against Streptococcus mutans, antibodies against doxorubicin, and ICAM 1 receptor in tobacco PlantForm Corporation – biosimilar trastuzumab in tobacco. It is developing biosimilars in collaboration with PharmaPraxis (see above) and Fraunhofer. ProdiGene – was developing several proteins, including aprotinin, trypsin and a veterinary TGE vaccine in corn. It was in the process of launching its trypsin product in 2002 when, later that year, its field-test crops contaminated conventional crops. 
Unable to pay the $3 million cost of the cleanup, it was purchased by International Oilseed Distributors in 2003. International Oilseed Distributors is controlled by Harry H. Stine, who owns one of the biggest soybean genetics companies in the US. ProdiGene's maize-produced trypsin, with the trademark TrypZean, is currently sold by Sigma-Aldrich as a research reagent. Syngenta – beta carotene in rice (this is "Golden rice 2"), which Syngenta has donated to the Golden Rice Project Arizona State University – hepatitis C vaccine in potatoes Ventria Bioscience – lactoferrin and lysozyme in rice Washington State University – lactoferrin and lysozyme in barley European COST Action on Molecular Farming – COST Action FA0804 on Molecular Farming provides a pan-European coordination centre, connecting academic and government institutions and companies from 23 countries. The aim of the Action is to advance the field by encouraging scientific interactions, providing expert opinion and encouraging commercial development of new products. The COST Action also provides grants allowing young scientists to visit participating laboratories across Europe for scientific training. Mapp Biopharmaceutical in San Diego, California, was reported in August 2014 to be developing ZMapp, an experimental treatment for the deadly Ebola virus disease. Two Americans who had been infected in Liberia were reported to be improving with the drug. ZMapp was made using antibodies produced by GM tobacco plants. Projects known to be abandoned Agragen, in collaboration with the University of Alberta – docosahexaenoic acid and human serum albumin in flax Chlorogen, Inc. – cholera, anthrax, and plague vaccines, albumin, interferon for liver diseases including hepatitis C, elastin, 4HB, and insulin-like growth factor in tobacco chloroplasts. Went out of business in 2007. Dow Chemical Company made a deal with Sunol Molecular in 2003 to develop antibodies against tissue factor in plants and in mammalian cell culture and to compare them. In 2005, Sunol sold all its tissue factor antagonists to Tanox, which in turn was bought by Genentech in 2007. Genentech licensed the tissue factor program to Altor in 2008. Altor is itself a spinout from Sunol. The product under development, ALT-836, formerly known as TNX-832 and Sunol-cH36, is not the plant-produced antibody, but rather a mammalian antibody, more specifically a chimeric antibody produced in a hybridoma. Epicyte – spermicidal antibodies in corn. Epicyte was purchased by Biolex in 2004, at which time Epicyte's portfolio was described as "focused on the discovery and development of human monoclonal antibody products as treatments for a wide range of infectious and inflammatory diseases." Large Scale Biology Corporation (LSBC) (bankrupt) – used Tobacco mosaic virus to develop reagents and patient-specific vaccines for non-Hodgkin's lymphoma, papillomavirus vaccine, parvovirus vaccine, alpha galactosidase for Fabry disease, lysosomal acid lipase, aprotinin, interferon alpha 2a and 2b, G-CSF, and hepatitis B vaccine antigens in tobacco. In 2004, LSBC announced an agreement with Sigma-Aldrich under which LSBC would produce recombinant aprotinin in plants of the tobacco family and Sigma-Aldrich would commercially distribute LSBC's recombinant product to its customers in the R&D, cell culture and manufacturing markets. As of October 2012, Sigma still had the protein in stock. 
Meristem Therapeutics – lipase, lactoferrin, plasma proteins, collagen, antibodies (IgA, IgM), allergens and protease inhibitors in tobacco. Liquidated in 2008. Novoplant GmbH – therapeutic proteins in tobacco and feed peas. Conducted field trials in the US of feed peas for pigs that produced anti-bacterial antibodies. Its former CSO is now with another company; it appears that Novoplant is out of business. Monsanto Company – abandoned development of pharmaceutical-producing corn PPL Therapeutics – alpha 1-antitrypsin for cystic fibrosis and emphysema in sheep milk. This is the company that created Dolly the Sheep, the first mammal cloned from an adult somatic cell. It went bankrupt in 2004. Assets were acquired by Pharming and an investment group including the University of Pittsburgh Medical Center. SemBioSys – insulin in safflower. In May 2012, SemBioSys terminated its operations.
Technology
Biotechnology
null
1750934
https://en.wikipedia.org/wiki/Carvone
Carvone
Carvone is a member of a family of chemicals called terpenoids. Carvone is found naturally in many essential oils, but is most abundant in the oils from the seeds of caraway (Carum carvi), spearmint (Mentha spicata), and dill. Uses Food applications Both carvones are used in the food and flavor industry. As the compound most responsible for the flavor of caraway, dill, and spearmint, carvone has been used for millennia in food. Food applications are mainly met by carvone made from limonene. R-(−)-Carvone is also used in air freshening products and, like many essential oils, oils containing carvones are used in aromatherapy and alternative medicine. Agriculture S-(+)-Carvone is also used to prevent premature sprouting of potatoes during storage, being marketed in the Netherlands for this purpose under the name Talent. Insect control R-(−)-Carvone has been approved by the U.S. Environmental Protection Agency for use as a mosquito repellent. Stereoisomerism and odor Carvone forms two mirror-image forms or enantiomers: R-(−)-carvone has a sweetish minty smell, like spearmint leaves. Its mirror image, S-(+)-carvone, has a spicy aroma with notes of rye, like caraway seeds. The fact that the two enantiomers are perceived as smelling different is evidence that olfactory receptors must respond more strongly to one enantiomer than to the other. Not all enantiomers have distinguishable odors. Squirrel monkeys have also been found to be able to discriminate between carvone enantiomers. The two forms are also referred to, in older texts, by their optical rotations: laevo (l) referring to R-(−)-carvone, and dextro (d) referring to S-(+)-carvone. Modern naming refers to levorotatory isomers with the sign (−) and dextrorotatory isomers with the sign (+) in the systematic name. Occurrence S-(+)-Carvone is the principal constituent (60–70%) of the oil from caraway seeds (Carum carvi), which is produced on a scale of about 10 tonnes per year. It also occurs to the extent of about 40–60% in dill seed oil (from Anethum graveolens), and also in mandarin orange peel oil. R-(−)-Carvone is also the most abundant compound in the essential oil from several species of mint, particularly spearmint oil (Mentha spicata), which is composed of 50–80% R-(−)-carvone. Spearmint is a major source of naturally produced R-(−)-carvone. However, the majority of R-(−)-carvone used in commercial applications is synthesized from R-(+)-limonene. The R-(−)-carvone isomer also occurs in kuromoji oil. Some oils, like gingergrass oil, contain a mixture of both enantiomers. Many other natural oils, for example peppermint oil, contain trace quantities of carvones. History Caraway was used for medicinal purposes by the ancient Romans, but carvone was probably not isolated as a pure compound until Franz Varrentrapp (1815–1877) obtained it in 1849. It was originally called carvol by Schweizer. Goldschmidt and Zürrer identified it as a ketone related to limonene, and the structure was finally elucidated by Georg Wagner (1849–1903) in 1894. Preparation Carvone can be obtained from natural sources, but the quantity available is insufficient to meet demand. Instead, most carvone is produced from limonene. The dextro-form, S-(+)-carvone, is obtained practically pure by the fractional distillation of caraway oil. The levo-form obtained from the oils containing it usually requires additional treatment to produce high-purity R-(−)-carvone. 
This can be achieved by the formation of an addition compound with hydrogen sulfide, from which carvone may be regenerated by treatment with potassium hydroxide followed by steam distillation. Carvone may be synthetically prepared from limonene by first treating limonene with nitrosyl chloride. Heating this nitroso compound gives carvoxime. Treating carvoxime with oxalic acid yields carvone. This procedure affords R-(−)-carvone from R-(+)-limonene. The major use of d-limonene is as a precursor to R-(−)-carvone. The large-scale availability of orange rinds, a byproduct of the production of orange juice, has made limonene cheaply available, and synthetic carvone correspondingly inexpensive to prepare. The biosynthesis of carvone is by oxidation of limonene. Chemical properties Reduction There are three double bonds in carvone capable of reduction; the product of reduction depends on the reagents and conditions used. Catalytic hydrogenation of carvone (1) can give either carvomenthol (2) or carvomenthone (3). Zinc and acetic acid reduce carvone to give dihydrocarvone (4). Meerwein–Ponndorf–Verley (MPV) reduction using propan-2-ol and aluminium isopropoxide effects reduction of the carbonyl group only, to provide carveol (5); a combination of sodium borohydride and CeCl3 (Luche reduction) is also effective. Hydrazine and potassium hydroxide give limonene (6) via a Wolff–Kishner reduction. Oxidation Oxidation of carvone can also lead to a variety of products. In the presence of an alkali such as Ba(OH)2, carvone is oxidised by air or oxygen to give the diketone 7. With hydrogen peroxide the epoxide 8 is formed. Carvone may be cleaved using ozone followed by steam, giving dilactone 9, while KMnO4 gives 10. Conjugate additions As an α,β-unsaturated ketone, carvone undergoes conjugate additions of nucleophiles. For example, carvone reacts with lithium dimethylcuprate to place a methyl group trans to the isopropenyl group with good stereoselectivity. The resulting enolate can then be allylated using allyl bromide to give ketone 11. Other Being available inexpensively in enantiomerically pure forms, carvone is an attractive starting material for the asymmetric total synthesis of natural products. For example, (S)-(+)-carvone was used to begin a 1998 synthesis of the terpenoid quassin. Metabolism In vivo studies indicate that, in the body, both enantiomers of carvone are mainly metabolized into dihydrocarvonic acid, carvonic acid and uroterpenolone. (−)-Carveol is also formed as a minor product via reduction by NADPH. (+)-Carvone is likewise converted to (+)-carveol. This mainly occurs in the liver and involves cytochrome P450 oxidase and (+)-trans-carveol dehydrogenase.
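The enantiomer relationships described under stereoisomerism above can also be inspected computationally. The following sketch is an assumed tooling choice (the open-source RDKit toolkit, with published SMILES strings for the two carvones, which should be double-checked against a structure database); it shows that both enantiomers share the formula C10H14O while their single stereocenter carries opposite R/S labels.

```python
# Inspecting carvone's two enantiomers with RDKit (pip install rdkit).
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

smiles = {
    "R-(-)-carvone (spearmint)": "CC1=CC[C@@H](CC1=O)C(=C)C",
    "S-(+)-carvone (caraway)":   "CC1=CC[C@H](CC1=O)C(=C)C",
}

for name, smi in smiles.items():
    mol = Chem.MolFromSmiles(smi)
    formula = rdMolDescriptors.CalcMolFormula(mol)   # C10H14O for both
    centers = Chem.FindMolChiralCenters(mol)         # [(atom_idx, 'R' or 'S')]
    print(name, formula, centers)
```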
Physical sciences
Terpenes and terpenoids
Chemistry
2427716
https://en.wikipedia.org/wiki/Container%20port
Container port
A container port, container terminal, or intermodal terminal is a facility where cargo containers are transshipped between different transport vehicles for onward transportation. The transshipment may be between container ships and land vehicles, for example trains or trucks, in which case the terminal is described as a maritime container port. Alternatively, the transshipment may be between land vehicles, typically between train and truck, in which case the terminal is described as an inland container port. In November 1932, the first inland container port in the world was opened by the Pennsylvania Railroad company in Enola, Pennsylvania. Port Newark-Elizabeth on Newark Bay in the Port of New York and New Jersey is considered the world's first maritime container port. On April 26, 1956, the Ideal X was rigged for an experiment to use standardized cargo containers that were stacked and then unloaded onto a compatible truck chassis at Port Newark. The concept had been developed by the McLean Trucking Company. On August 15, 1962, the Port Authority of New York and New Jersey opened the world's first container port, the Elizabeth Marine Terminal. Maritime container ports tend to be part of a larger port, and the biggest maritime container ports can be found situated around major harbours. Inland container ports tend to be located in or near major cities, with good rail connections to maritime container ports. It is common for cargo that arrives at a container port in a single ship to be distributed over several modes of transportation for delivery to inland customers. According to a manager from the Port of Rotterdam, it may be fairly typical for the cargo of a large 18,000 TEU container ship to be distributed over 19 container trains (74 TEU each), 32 barges (97 TEU each) and 1,560 trucks (1.6 TEU each, on average). A newer container terminal, APM Terminals Maasvlakte II, opened in April 2015; it adopts advanced technology such as remotely controlled ship-to-shore (STS) gantry cranes and is designed around sustainability, renewable energy, and zero carbon dioxide emissions. Both maritime and inland container ports usually provide storage facilities for both loaded and empty containers. Loaded containers are stored for relatively short periods whilst waiting for onward transportation, whilst unloaded containers may be stored for longer periods awaiting their next use. Containers are normally stacked for storage, and the resulting stores are known as container stacks. In recent years, methodological advances regarding container port operations, such as the container port design process, have been considerable. For a detailed description and a comprehensive list of references see, e.g., the operations research literature.
Technology
Coastal infrastructure
null
2428938
https://en.wikipedia.org/wiki/Cell%20migration
Cell migration
Cell migration is a central process in the development and maintenance of multicellular organisms. Tissue formation during embryonic development, wound healing and immune responses all require the orchestrated movement of cells in particular directions to specific locations. Cells often migrate in response to specific external signals, including chemical signals and mechanical signals. Errors during this process have serious consequences, including intellectual disability, vascular disease, tumor formation and metastasis. An understanding of the mechanism by which cells migrate may lead to the development of novel therapeutic strategies for controlling, for example, invasive tumour cells. Due to the highly viscous environment (low Reynolds number), cells need to continuously produce forces in order to move. Cells achieve active movement by very different mechanisms. Many less complex prokaryotic organisms (and sperm cells) use flagella or cilia to propel themselves. Eukaryotic cell migration is typically far more complex and can consist of combinations of different migration mechanisms. It generally involves drastic changes in cell shape which are driven by the cytoskeleton. Two very distinct migration scenarios are crawling motion (most commonly studied) and blebbing motility. A paradigmatic example of crawling motion is the case of fish epidermal keratocytes, which have been extensively used in research and teaching. Cell migration studies The migration of cultured cells attached to a surface or in 3D is commonly studied using microscopy. As cell movement is very slow, a few μm/minute, time-lapse microscopy videos of the migrating cells are recorded and replayed at higher speed. Such videos (Figure 1) reveal that the leading cell front is very active, with a characteristic behavior of successive contractions and expansions. It is generally accepted that the leading front is the main motor that pulls the cell forward. Common features The processes underlying mammalian cell migration are believed to be consistent with those of (non-spermatozooic) locomotion. Observations in common include cytoplasmic displacement at the leading edge (front) and laminar removal of dorsally-accumulated debris toward the trailing edge (back). The latter feature is most easily observed when aggregates of a surface molecule are cross-linked with a fluorescent antibody or when small beads become artificially bound to the front of the cell. Other eukaryotic cells are observed to migrate similarly. The amoeba Dictyostelium discoideum is useful to researchers because it consistently exhibits chemotaxis in response to cyclic AMP, it moves more quickly than cultured mammalian cells, and it has a haploid genome that simplifies the process of connecting a particular gene product with its effect on cellular behaviour. Molecular processes of migration There are two main theories for how the cell advances its front edge: the cytoskeletal model and the membrane flow model. It is possible that both underlying processes contribute to cell extension. Cytoskeletal model (A) Leading edge Experimentation has shown that there is rapid actin polymerisation at the cell's front edge. This observation has led to the hypothesis that formation of actin filaments "pushes" the leading edge forward and is the main motile force for advancing the cell's front edge. In addition, cytoskeletal elements are able to interact extensively and intimately with a cell's plasma membrane. 
Trailing edge Other cytoskeletal components (like microtubules) have important functions in cell migration. It has been found that microtubules act as "struts" that counteract the contractile forces that are needed for trailing-edge retraction during cell movement. When microtubules in the trailing edge of the cell are dynamic, they are able to remodel to allow retraction. When dynamics are suppressed, microtubules cannot remodel and, therefore, oppose the contractile forces. The morphology of cells with suppressed microtubule dynamics indicates that cells can extend the front edge (polarized in the direction of movement), but have difficulty retracting their trailing edge. On the other hand, high drug concentrations, or microtubule mutations that depolymerize the microtubules, can restore cell migration, but with a loss of directionality. It can be concluded that microtubules act both to restrain cell movement and to establish directionality. Membrane flow model (B) The leading edge at the front of a migrating cell is also the site at which membrane from internal membrane pools is returned to the cell surface at the end of the endocytic cycle. This suggests that extension of the leading edge occurs primarily by addition of membrane at the front of the cell. If so, the actin filaments that form there might stabilize the added membrane so that a structured extension, or lamella, is formed at the front, rather than a bubble-like structure (or bleb). For a cell to move, it is necessary to bring a fresh supply of "feet" (proteins called integrins, which attach a cell to the surface on which it is crawling) to the front. It is likely that these feet are endocytosed toward the rear of the cell and brought to the cell's front by exocytosis, to be reused to form new attachments to the substrate. In the case of Dictyostelium amoebae, three conditional temperature-sensitive mutants which affect membrane recycling block cell migration at the restrictive (higher) temperature; they provide additional support for the importance of the endocytic cycle in cell migration. Furthermore, these amoebae move quite quickly, about one cell length in ~5 minutes. If they are regarded as cylindrical (which is roughly true whilst chemotaxing), this would require them to recycle the equivalent of one cell surface area every 5 minutes, which is approximately what is measured. Mechanistic basis of amoeboid migration Adhesive crawling is not the only migration mode exhibited by eukaryotic cells. Importantly, several cell types (Dictyostelium amoebae, neutrophils, metastatic cancer cells and macrophages) have been found to be capable of adhesion-independent migration. Historically, the physicist E. M. Purcell theorized (in 1977) that under conditions of low Reynolds number fluid dynamics, which apply at the cellular scale, rearward surface flow could provide a mechanism for microscopic objects to swim forward. After some decades, experimental support for this model of cell movement was provided when it was discovered (in 2010) that amoeboid cells and neutrophils are both able to chemotax towards a chemoattractant source whilst suspended in an isodense medium. It was subsequently shown, using optogenetics, that cells migrating in an amoeboid fashion without adhesions exhibit plasma membrane flow towards the cell rear that may propel cells by exerting tangential forces on the surrounding fluid. Polarized trafficking of membrane-containing vesicles from the rear to the front of the cell helps maintain cell size. 
Rearward membrane flow was also observed in Dictyostelium discoideum cells. These observations provide strong support for models of cell movement which depend on a rearward cell surface membrane flow (Model B, above). The migration of supracellular clusters has also been found to be supported by a similar mechanism of rearward surface flow. Collective biomechanical and molecular mechanism of cell motion Based on some mathematical models, recent studies hypothesize a novel biological model for a collective biomechanical and molecular mechanism of cell motion. It is proposed that microdomains weave the texture of the cytoskeleton and that their interactions mark the locations for the formation of new adhesion sites. According to this model, microdomain signaling dynamics organizes the cytoskeleton and its interaction with the substratum. As microdomains trigger and maintain active polymerization of actin filaments, their propagation and zigzagging motion on the membrane generate a highly interlinked network of curved or linear filaments oriented at a wide spectrum of angles to the cell boundary. It is also proposed that microdomain interaction marks the formation of new focal adhesion sites at the cell periphery. Myosin interaction with the actin network then generates membrane retraction/ruffling, retrograde flow, and contractile forces for forward motion. Finally, continuous application of stress on the old focal adhesion sites could result in calcium-induced calpain activation and, consequently, the detachment of focal adhesions, which completes the cycle. Polarity in migrating cells Migrating cells have a polarity: a front and a back. Without it, they would move in all directions at once, i.e. spread. How this polarity is formulated at a molecular level inside a cell is unknown. In a cell that is meandering in a random way, the front can easily give way to become passive as some other region, or regions, of the cell form(s) a new front. In chemotaxing cells, the stability of the front appears enhanced as the cell advances toward a higher concentration of the stimulating chemical. From a biophysical perspective, polarity has been explained in terms of a gradient in inner membrane surface charge between the front region and the rear edge of the cell. This polarity is reflected at a molecular level by a restriction of certain molecules to particular regions of the inner cell surface. Thus, the phospholipid PIP3 and activated Ras, Rac, and CDC42 are found at the front of the cell, whereas Rho GTPase and PTEN are found toward the rear. It is believed that filamentous actin and microtubules are important for establishing and maintaining a cell's polarity. Drugs that destroy actin filaments have multiple and complex effects, reflecting the wide role that these filaments play in many cell processes. It may be that, as part of the locomotory process, membrane vesicles are transported along these filaments to the cell's front. In chemotaxing cells, the increased persistence of migration toward the target may result from an increased stability of the arrangement of the filamentous structures inside the cell, which determines its polarity. In turn, these filamentous structures may be arranged inside the cell according to how molecules like PIP3 and PTEN are arranged on the inner cell membrane. And where these are located appears in turn to be determined by the chemoattractant signals as these impinge on specific receptors on the cell's outer surface. 
Although microtubules have been known to influence cell migration for many years, the mechanism by which they do so has remained controversial. On a planar surface, microtubules are not needed for the movement, but they are required to provide directionality to cell movement and efficient protrusion of the leading edge. When present, microtubules retard cell movement when their dynamics are suppressed by drug treatment or by tubulin mutations. Inverse problems in the context of cell motility An area of research called inverse problems in cell motility has been established. This approach is based on the idea that behavioral or shape changes of a cell bear information about the underlying mechanisms that generate these changes. Reading cell motion, namely, understanding the underlying biophysical and mechanochemical processes, is of paramount importance. The mathematical models developed in these works determine some physical features and material properties of the cells locally through analysis of live-cell image sequences and use this information to make further inferences about the molecular structures, dynamics, and processes within the cells, such as the actin network, microdomains, chemotaxis, adhesion, and retrograde flow. 
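In practice, such inferences usually start from tracked cell positions in time-lapse sequences. The following sketch is a minimal illustration with an invented 2D trajectory sampled at fixed intervals: it computes the mean squared displacement (MSD), whose growth with time lag distinguishes persistent, directed migration (quadratic) from a random walk (linear).

```python
def mean_squared_displacement(track, max_lag):
    """MSD as a function of time lag for a 2D trajectory.

    track: list of (x, y) positions sampled at fixed time intervals.
    Diffusive (random-walk) motion gives MSD ~ lag; ballistic,
    persistent motion gives MSD ~ lag**2.
    """
    msd = []
    for lag in range(1, max_lag + 1):
        disps = [
            (track[i + lag][0] - track[i][0]) ** 2 +
            (track[i + lag][1] - track[i][1]) ** 2
            for i in range(len(track) - lag)
        ]
        msd.append(sum(disps) / len(disps))
    return msd

# Invented example: a cell drifting steadily (persistent motion).
track = [(0.5 * t, 0.2 * t) for t in range(20)]
print(mean_squared_displacement(track, 5))  # grows ~ lag**2
```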
Biology and health sciences
Cell processes
Biology
2430192
https://en.wikipedia.org/wiki/Voltammetry
Voltammetry
Voltammetry is a category of electroanalytical methods used in analytical chemistry and various industrial processes. In voltammetry, information about an analyte is obtained by measuring the current as the potential is varied. The analytical data for a voltammetric experiment come in the form of a voltammogram, which plots the current produced by the analyte versus the potential of the working electrode. Theory Voltammetry is the study of current as a function of applied potential. Voltammetric methods involve electrochemical cells and investigate the reactions occurring at electrode/electrolyte interfaces. The reactivity of analytes in these half-cells is used to determine their concentration. Voltammetry is considered a dynamic electrochemical method, as the applied potential is varied over time and the corresponding changes in current are measured. Most experiments control the potential (volts) of an electrode in contact with the analyte while measuring the resulting current (amperes). Electrochemical cells Electrochemical cells are used in voltammetric experiments to drive the redox reaction of the analyte. Like other electrochemical cells, two half-cells are required, one to facilitate reduction and the other oxidation. The cell consists of an analyte solution, an ionic electrolyte, and two or three electrodes, with oxidation and reduction reactions occurring at the electrode/electrolyte interfaces. As a species is oxidized (loses electrons), the electrons produced pass through an external electric circuit and generate a current, acting as an electron source for reduction. The generated currents are faradaic currents, which follow Faraday's law. Because Faraday's law states that the number of moles of a substance, m, produced or consumed during an electrode process is proportional to the electric charge passed through the electrode, the faradaic currents allow analyte concentrations to be determined. Whether the analyte is reduced or oxidized depends on the analyte and the potential applied, but its reaction always occurs at the working/indicator electrode, so the working electrode potential varies as a function of the analyte concentration. A second electrode, called the auxiliary or counter electrode, completes the electric circuit. A third, reference electrode provides a constant baseline potential against which the other two electrode potentials can be compared. In the case of microelectrodes with small dimensions, the counter electrode and the reference electrode can be combined, as the current generated and flowing through the combined electrode will be too small to affect the potential at the reference. Three electrode system Voltammetry experiments investigate the half-cell reactivity of an analyte. The resulting current–potential curves, I = f(E), are called voltammograms. The potential is varied arbitrarily, either step by step or continuously, and the resulting current value is measured as the dependent variable. The opposite approach, i.e., amperometry, is also possible but not common. The shape of the curves depends on the speed of potential variation (the nature of the driving force) and on whether the solution is stirred or quiescent (mass transfer). Most experiments control the potential (volts) of an electrode in contact with the analyte while measuring the resulting current (amperes). To conduct such an experiment, at least two electrodes are required.
The working electrode, which makes contact with the analyte, must apply the desired potential in a controlled way and facilitate the transfer of charge to and from the analyte. A second electrode acts as the other half of the cell. This second electrode must have a known potential against which the potential of the working electrode can be gauged; furthermore, it must balance the charge added or removed by the working electrode. While this is a viable setup, it has a number of shortcomings. Most significantly, it is extremely difficult for an electrode to maintain a constant potential while passing current to counter redox events at the working electrode. To solve this problem, the roles of supplying electrons and providing a reference potential are divided between two separate electrodes. The reference electrode is a half cell with a known reduction potential. Its only role is to act as a reference for measuring and controlling the working electrode's potential; it does not pass any current. The auxiliary electrode passes the current required to balance the current observed at the working electrode. To achieve this current, the auxiliary will often swing to extreme potentials at the edges of the solvent window, where it oxidizes or reduces the solvent or supporting electrolyte. These three electrodes, the working, reference, and auxiliary, make up the modern three-electrode system. There are many systems which have more electrodes, but their design principles are similar to those of the three-electrode system. For example, the rotating ring-disk electrode has two distinct and separate working electrodes, a disk and a ring, which can be used to scan or hold potentials independently of each other. Both of these electrodes are balanced by a single reference and auxiliary combination for an overall four-electrode design. More complicated experiments may add working, reference, or auxiliary electrodes as required. In practice it can be important to have a working electrode with known dimensions and surface characteristics; as a result, it is common to clean and polish working electrodes regularly. The auxiliary electrode can be almost anything as long as it doesn't react with the bulk of the analyte solution and conducts well. A common voltammetry method, polarography, uses mercury as the working electrode (e.g., the dropping mercury electrode, DME, and the hanging mercury drop electrode, HMDE) and as the auxiliary electrode. The reference is the most complex of the three electrodes; a variety of standards are used. For non-aqueous work, IUPAC recommends the use of the ferrocene/ferrocenium couple as an internal standard. In most voltammetry experiments, a bulk electrolyte (also known as a supporting electrolyte) is used to minimize solution resistance. It is possible to run an experiment without a bulk electrolyte, but the added resistance greatly reduces the accuracy of the results. With room-temperature ionic liquids, the solvent can act as the electrolyte. The supporting electrolyte also minimizes the contribution of migration to mass transport and ensures that the reaction is diffusion-controlled. Voltammograms A voltammogram (see linear sweep voltammetry) is a graph that records the current of an electrochemical cell as a function of the applied potential. This graph is used to determine the concentration and the standard potential of the analyte. To determine the concentration, values such as the limiting or peak current are read from the graph and applied to various mathematical models.
After determining the concentration, the applied standard potential can be identified using the Nernst equation. There are three main shapes for voltammograms. The first shape is dependent on the diffusion layer. If the analyte is continuously stirred, the diffusion layer keeps a constant width and produces a voltammogram that reaches a constant current: the graph takes this shape as the current increases from the background residual to reach the limiting current ($i_l$). If the mixture is not stirred, the width of the diffusion layer eventually increases; this produces a maximum peak current ($i_p$), identified as the highest point on the graph. The third common shape of voltammogram records the change in current rather than the current itself; a maximum is still observed, but it represents the maximum change in current ($i_p$). Mathematical models To determine analyte concentrations, mathematical models are required to link the applied potential and the current measured over time. The Nernst equation relates the electrochemical cell potential to the concentration ratio of the reduced and oxidized species in a logarithmic relationship. The Nernst equation is as follows: $E = E^{0} - \frac{RT}{zF}\ln Q$, where $E$ is the reduction potential, $E^{0}$ the standard potential, $R$ the universal gas constant, $T$ the temperature in kelvin, $z$ the ion charge (moles of electrons), $F$ the Faraday constant, and $Q$ the reaction quotient. This equation describes how changes in applied potential alter the concentration ratio. However, the Nernst equation is limited, as it is modeled without a time component, while voltammetric experiments vary the applied potential as a function of time. Other mathematical models, primarily the Butler–Volmer equation, the Tafel equation, and Fick's law, address the time dependence. The Butler–Volmer equation relates concentration, potential, and current as a function of time. It describes the non-linear relationship between the voltage difference across the electrode/electrolyte interface and the electrical current, and it helps make predictions about how the forward and backward redox reactions affect potential and influence the reactivity of the cell. This function includes a rate constant which accounts for the kinetics of the reaction. A compact version of the Butler–Volmer equation is as follows: $j = j_{0}\left[\exp\!\left(\frac{\alpha_{a} z F \eta}{RT}\right) - \exp\!\left(-\frac{\alpha_{c} z F \eta}{RT}\right)\right]$, where $j$ is the electrode current density, A/m² (defined as $j = I/S$), $j_{0}$ the exchange current density, A/m², $E$ the electrode potential, V, $E_{\text{eq}}$ the equilibrium potential, V, $T$ the absolute temperature, K, $z$ the number of electrons involved in the electrode reaction, $F$ the Faraday constant, $R$ the universal gas constant, $\alpha_{c}$ the so-called cathodic charge transfer coefficient (dimensionless), $\alpha_{a}$ the so-called anodic charge transfer coefficient (dimensionless), and $\eta$ the activation overpotential (defined as $\eta = E - E_{\text{eq}}$). At high overpotentials, the Butler–Volmer equation simplifies to the Tafel equation. The Tafel equation relates the electrochemical currents to the overpotential exponentially, and is used to calculate the reaction rate. The overpotential is calculated at each electrode separately, and related to the voltammogram data to determine reaction rates. The Tafel equation for a single electrode is: $i = i_{0}\exp\!\left(\pm\frac{\eta}{A}\right)$, where the plus sign under the exponent refers to an anodic reaction and the minus sign to a cathodic reaction, $\eta$ is the overpotential, V, $A$ the "Tafel slope", V, $i$ the current density, A/m², and $i_{0}$ the exchange current density, A/m². As the redox species are oxidized and reduced at the electrodes, material accumulates at the electrode/electrolyte interface. Material accumulation creates a concentration gradient between the interface and the bulk solution.
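Before turning to mass transport, the three potential–current relations above can be illustrated numerically. The following Python sketch is a minimal illustration under assumed values (the standard potential, reaction quotient, exchange current density, and transfer coefficients are arbitrary examples, not data for any particular system); it evaluates the Nernst potential, the Butler–Volmer current density at a chosen overpotential, and the Tafel approximation that the latter reduces to at high overpotential.

```python
import numpy as np

R = 8.314462618   # universal gas constant, J/(mol*K)
F = 96485.33212   # Faraday constant, C/mol
T = 298.15        # absolute temperature, K

# Nernst equation: E = E0 - (R*T)/(z*F) * ln(Q)
E0 = 0.771        # assumed standard potential, V (illustrative value)
z = 1             # one electron transferred
Q = 0.1           # assumed reaction quotient
E = E0 - (R * T) / (z * F) * np.log(Q)
print(f"Nernst potential: {E:.3f} V")           # ~0.830 V

# Butler-Volmer equation at an assumed overpotential eta = E - E_eq
j0 = 1e-3         # assumed exchange current density, A/m^2
alpha_a, alpha_c = 0.5, 0.5                     # assumed transfer coefficients
eta = 0.25        # overpotential, V
j = j0 * (np.exp(alpha_a * z * F * eta / (R * T))
          - np.exp(-alpha_c * z * F * eta / (R * T)))

# Tafel limit: at high positive eta the cathodic exponential is negligible
j_tafel = j0 * np.exp(alpha_a * z * F * eta / (R * T))
print(f"Butler-Volmer: {j:.3e} A/m^2, Tafel: {j_tafel:.3e} A/m^2")
```

At $\eta = 0.25$ V the two computed current densities agree to better than 0.01%, which is why the simpler Tafel form is preferred for extracting kinetic parameters at high overpotential.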
Fick's first law of diffusion is used to relate the diffusion of oxidized and reduced species to the faradaic current that describes the redox processes. Fick's law is most commonly written in terms of moles, and is as follows: $J = -D\frac{\partial \varphi}{\partial x}$, where $J$ is the diffusion flux (in amount of substance per unit area per unit time), $D$ the diffusion coefficient or diffusivity (in area per unit time), $\varphi$ the concentration (in amount of substance per unit volume), and $x$ the position (in length). Types of voltammetry History The beginning of voltammetry was facilitated by the discovery of polarography in 1922 by the Nobel Prize–winning Czech chemist Jaroslav Heyrovský. Early voltammetric techniques had many problems, limiting their viability for everyday use in analytical chemistry. In polarography, these problems included the fact that mercury is oxidized at potentials more positive than +0.2 V, making it harder to analyze results for analytes in the positive region of potential. Another problem was the residual current obtained from the charging of the large capacitance of the electrode surface. When Heyrovský first recorded the dependence of the current flowing through the dropping mercury electrode on the applied potential in 1922, he took point-by-point measurements and plotted a current–voltage curve. This is considered to be the first polarogram. To facilitate this process, he constructed with M. Shikata what is now known as a polarograph, which enabled him to record the same curve photographically in a matter of hours. He recognized the importance of potential and its control, and also recognized the opportunities of measuring the limiting currents. He was also an important part of the introduction of the dropping mercury electrode as a modern-day tool. In 1942, the English electrochemist Archie Hickling (University of Leicester) built the first three-electrode potentiostat, an advancement for the field of electrochemistry. He used this potentiostat to control the voltage of an electrode. In the meantime, in the late 1940s, the American biophysicist Kenneth Stewart Cole invented an electronic circuit which he called a voltage clamp. The voltage clamp was used to analyze the ionic conduction in nerves. The 1960s and 1970s saw many advances in theory, instrumentation, and the introduction of computer-aided and computer-controlled systems. Modern polarographic and voltammetric methods on mercury electrodes came about in three stages. The first stage includes the development of the mercury electrodes. The following electrodes were produced: dropping mercury electrode, mercury streaming electrode, hanging mercury drop electrode, static mercury drop electrode, mercury film electrode, mercury amalgam electrodes, mercury microelectrodes, chemically modified mercury electrodes, controlled-growth mercury electrodes, and contractible mercury drop electrodes. There was also an advancement of the measuring techniques used. These measuring techniques include: classical DC polarography, oscillopolarography, Kalousek's switcher, AC polarography, tast polarography, normal pulse polarography, differential pulse polarography, square-wave voltammetry, cyclic voltammetry, anodic stripping voltammetry, convolution techniques, and elimination methods. Lastly, there was also an advancement of preconcentration techniques that produced an increase in the sensitivity of the mercury electrodes.
This came about through the development of anodic stripping voltammetry, cathodic stripping voltammetry, and adsorptive stripping voltammetry. These advancements improved sensitivity and created new analytical methods, which prompted industry to respond with the production of cheaper potentiostats, electrodes, and cells that could be used effectively in routine analytical work. Applications Voltammetric sensors A number of voltammetric systems are produced commercially for the determination of species that are of interest in industry and research. These devices are sometimes called electrodes but are actually complete voltammetric cells, which are better referred to as sensors. These sensors can be employed for the analysis of organic and inorganic analytes in various matrices. The oxygen electrode The determination of dissolved oxygen in a variety of aqueous environments, such as sea water, blood, sewage, effluents from chemical plants, and soils, is of tremendous importance to industry, biomedical and environmental research, and clinical medicine. One of the most common and convenient methods for making such measurements is with the Clark oxygen sensor, which was patented by L.C. Clark, Jr. in 1956.
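As a rough numerical illustration of how Fick's first law from the theory section sets the signal of a diffusion-limited sensor such as the Clark electrode, the sketch below estimates the steady-state oxygen flux across a diffusion layer and the corresponding limiting current density. Every number here (diffusion coefficient, oxygen concentration, layer thickness, electron count) is a typical order-of-magnitude assumption, not a specification of any actual device.

```python
F = 96485.33212    # Faraday constant, C/mol

# Assumed, order-of-magnitude values for air-saturated water at ~25 C:
D = 2.0e-9         # diffusion coefficient of O2 in water, m^2/s
c_bulk = 0.25      # dissolved O2 concentration, mol/m^3 (~8 mg/L)
c_surface = 0.0    # O2 assumed fully consumed at the cathode surface
delta = 20e-6      # assumed diffusion-layer thickness, m

# Fick's first law, J = -D * d(phi)/dx, with the gradient approximated
# as (c_surface - c_bulk) / delta across the diffusion layer:
J = -D * (c_surface - c_bulk) / delta   # mol/(m^2*s), directed toward the cathode

# Limiting current density for the 4-electron reduction of O2:
z = 4
j_lim = z * F * J                       # A/m^2
print(f"flux {J:.2e} mol/(m^2*s), limiting current {j_lim:.2f} A/m^2")
# ~2.5e-5 mol/(m^2*s) and ~9.6 A/m^2; a real Clark sensor reads lower because
# its gas-permeable membrane adds a further diffusion barrier.
```

Because the measured limiting current is proportional to the bulk concentration, calibrating against solutions of known oxygen content converts this current directly into a concentration reading.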
Physical sciences
Electrical methods
Chemistry
2431323
https://en.wikipedia.org/wiki/Mudflow
Mudflow
A mudflow, also known as mudslide or mud flow, is a form of mass wasting involving fast-moving flow of debris and dirt that has become liquefied by the addition of water. Such flows can move at speeds ranging from 3 meters per minute to 5 meters per second. Mudflows contain a significant proportion of clay, which makes them more fluid than debris flows, allowing them to travel farther and across lower slope angles. Both types of flow are generally mixtures of particles with a wide range of sizes, which typically become sorted by size upon deposition. Mudflows are often called mudslides, a term applied indiscriminately by the mass media to a variety of mass wasting events. Mudflows often start as slides, becoming flows as water is entrained along the flow path; such events are often called mud failures. Other types of mudflow include lahars (involving fine-grained pyroclastic deposits on the flanks of volcanoes) and jökulhlaups (outbursts from under glaciers or icecaps). A statutory definition of "flood-related mudslide" appears in the United States' National Flood Insurance Act of 1968, as amended, codified at 42 USC Sections 4001 and following. Triggering of mudflows Heavy rainfall, snowmelt, or high levels of groundwater flowing through cracked bedrock may trigger a movement of soil or sediments in landslides that continue as mudflows. Floods and debris flows may also occur when strong rains on hill or mountain slopes cause extensive erosion and/or mobilize loose sediment in steep mountain channels. The 2006 Sidoarjo mud flow may have been caused by rogue drilling. The point at which a muddy material begins to flow depends on its grain size, its water content, and the slope of the topography. Fine-grained material such as mud or sand can be mobilized by shallower flows than coarse sediment or a debris flow, and higher water content (higher precipitation or overland flow) also increases the potential to initiate a mudflow. After a mudflow forms, coarser sediment may be picked up by the flow; this coarser sediment often forms the front of a mudflow surge and is pushed along by the finer sediment and water that pool up behind the coarse-grained moving front. Mudflows may contain multiple surges of material as the flow scours channels and destabilizes adjacent hillslopes (potentially nucleating new mudflows). Mudflows have mobilized boulders 1–10 m across in mountain settings. Some broad mudflows are rather viscous and therefore slow; others begin very quickly and continue like an avalanche. They are composed of at least 50% silt and clay-sized materials and up to 30% water. Because mudflows mobilize a significant amount of sediment, they have higher flow heights than a clear-water flood of the same water discharge; the sediment within the flow also increases granular friction, which further raises the flow depth for a given discharge. The difficulty of predicting the amount and type of sediment that will be included in a mudflow makes it much more challenging to forecast, and to engineer structures to protect against, mudflow hazards compared with clear-water flood hazards. Mudflows are common even in the hills around Los Angeles, California, where they have destroyed many homes built on hillsides without sufficient support after fires destroy the vegetation holding the land.
On 14 December 1999 in Vargas, Venezuela, mudflows in the disaster known as the Vargas tragedy significantly altered more than 60 kilometers (37 mi) of the coastline. Triggered by heavy rainfall, they caused estimated damages of US$1.79 billion to US$3.5 billion, killed between 10,000 and 30,000 people, forced 85,000 people to evacuate, and led to the complete collapse of the state's infrastructure. Mudflows and landslides Landslide is a more general term than mudflow. It refers to the gravity-driven failure and subsequent downslope movement of soil, rock, or other surface debris. The term incorporates earth slides, rock falls, flows, and mudslides, amongst other categories of hillslope mass movements, which do not have to be as fluid as a mudflow. Mudflows can be caused by unusually heavy rains or a sudden thaw. They consist mainly of mud and water plus fragments of rock and other debris, so they often behave like floods. They can move houses off their foundations or bury a place within minutes because of their incredibly strong currents. Mudflow geography A mudflow is divided into four named areas: the 'main scarp', in bigger mudflows the 'upper and lower shelves', and the 'toe'. The main scarp is the original area of incidence; the toe is the last affected area or areas. The upper and lower shelves are located wherever there is a large dip (due to a mountain or natural drop) in the mudflow's path; a mudflow can have many shelves. Largest recorded mudflow The world's largest historic subaerial (on land) landslide occurred during the 1980 eruption of Mount St. Helens, a volcano in the Cascade Mountain Range in the State of Washington, US. The volume of material displaced was . Directly in the path of the huge mudflow was Spirit Lake. Normally a chilly , the lahar instantly raised the temperature to near . Today the bottom of Spirit Lake is above the original surface, and the lake has two and a half times more surface area than it did before the eruption. The largest known of all prehistoric landslides was an enormous submarine landslide that disintegrated 60,000 years ago and produced the longest flow of sand and mud yet documented on Earth. The massive submarine flow travelled – the distance from London to Rome. By volume, the largest submarine landslide (the Agulhas slide off South Africa) occurred approximately 2.6 million years ago. The volume of the slide was . Areas at risk The areas most generally recognized as being at risk of a dangerous mudflow are: Areas where wildfires or human modification of the land have destroyed vegetation Areas where landslides have occurred before Steep slopes and areas at the bottom of slopes or canyons Slopes that have been altered for the construction of buildings and roads Channels along streams and rivers Areas where surface runoff is directed
Physical sciences
Geomorphology: General
Earth science
2432450
https://en.wikipedia.org/wiki/Taxus%20brevifolia
Taxus brevifolia
Taxus brevifolia, the Pacific yew or western yew, is a species of tree in the yew family Taxaceae native to the Pacific Northwest of North America. It is a small evergreen conifer, thriving in moisture and otherwise tending to take the form of a shrub. Description A small evergreen conifer (sometimes appearing as a shrub), the Pacific yew grows to tall and with a trunk up to in diameter, rarely more. In some instances, trees with heights in excess of occur in parks and other protected areas, quite often in gullies. The tree is extremely slow growing and has a habit of rotting from the inside, creating hollow forms; this makes it difficult and sometimes impossible to make accurate ring counts to determine a specimen's true age. Often damaged by forest succession, it usually ends up in a squat, multiple-leader form, able to grow new sprouts from decapitated stumps. In its shrub form, sometimes called "yew brush", it can reproduce vegetatively via layering. It has thin, scaly bark, red then purplish-brown, covering a thin layer of off-white sapwood with a darker heartwood that varies in color from brown to a purplish hue to deep red, or even bright orange when freshly cut. The leaves are lanceolate, flat, dark green, long and broad, arranged spirally on the stem, but with the leaf bases twisted to align the leaves in two flat rows either side of the stem, except on erect leading shoots where the spiral arrangement is more obvious. The seed cones are highly modified, each cone containing a single seed long partly surrounded by a modified scale which develops into a soft, bright red berry-like structure called an aril, long and wide and open at the end. The arils are mature 6–9 months after pollination. The seeds contained in the arils are eaten by thrushes and other birds, which disperse the hard seeds undamaged in their droppings; maturation of the arils is spread over 2–3 months, increasing the chances of successful seed dispersal. The male cones are globose, diameter, and shed their pollen in early spring. It is mostly dioecious, but occasional individuals can be variably monoecious, or change sex with time. Taxonomy Varieties Taxus brevifolia var. reptaneta T. brevifolia var. reptaneta (thicket yew) is a shrub variety that generally occurs in the mid to upper elevation range of the typical variety, at its southernmost occurrence in the Klamath Mountains region, and at lower elevations further north. It is distinguished from young trees of the typical variety (var. brevifolia) by its stems initially creeping along the ground for a short distance before ascending (curving) upwards, and by the branches growing off to one side of the stem, usually the upper side. The epithet reptaneta is from the Latin reptans, which means "creeping, prostrate, and rooting", which is exactly what this variety does; in rooting it forms yew thickets (-etum means "collective place of growth"), hence the common name, thicket yew. Unlike the typical variety, thicket yew grows in abundance on open sunny avalanche chutes or ravines as well as in the forest understory. It also occurs along forest margins. In northwestern Montana, a variant of the thicket yew does not ascend upwards; rather, it remains along the ground. This is probably the ancestral form; the upright form with branches along the upper side would be the expected growth pattern to evolve from one with stems that strictly creep along the ground, since branches can arise only from the upper surface. T.
brevifolia var. reptaneta has been described as synonymous with the typical yew (var. brevifolia). Though the two varieties may be genetically distinct, some botanists use this rank only to describe different geographical ranges. For example, T. mairei var. speciosa, which occurs with the typical variety in southern China in 10 of 13 provinces, was rejected for the lack of a "geographic reason" for recognition, though it appears genetically distinct. T. brevifolia var. reptaneta has also been proposed to be elevated to a subspecies, despite that rank being used to define geographically separated groups of T. baccata. Taxus brevifolia var. polychaeta Typical T. brevifolia, like most species in the genus, usually produces a single ovule on a complex scaly shoot, composed of a primary shoot and a secondary short shoot. To the casual observer they appear as one funnelform shoot with an ovule at the apex. T. brevifolia var. polychaeta differs from var. brevifolia in producing a relatively longer primary shoot with as many as five secondary shoots. The epithet, polychaeta, is in reference to the primary shoot resembling a polychaete worm; hence its common name, 'worm cone yew'. Variety polychaeta appears to be relatively rare. It may have been extirpated from the type locality, around Mud Bay near Olympia, Washington, as a result of urban expansion. It is also known from northern Idaho and Sonoma County, California. As in the case of thicket yew, worm cone yew has been indicated to be the same as the typical variety, but again there are no specific studies to support this conclusion. The authority who described thicket yew and worm cone yew had been involved in the study of Taxus for 25 years at the time the varieties were described. Similar species Yew foliage is very similar to that of Sequoia sempervirens, the coastal redwood. Distribution and habitat Pacific yew is native to the Pacific Northwest. It ranges from southernmost Alaska south to Northern California, mostly in the Pacific Coast Ranges, but with isolated disjunct populations in southeast British Columbia and in northern Idaho. It grows in varying types of environments; in drier environments, however, it is mostly limited to stream-side habitats, whereas in moist environments it will grow up onto slopes and ridgetops, at least as high in altitude as above sea level. Pacific yew is shade tolerant, but can also grow in sun. The tree's shade tolerance allows it to form an understory, which means that it can grow along streams, providing shade that helps maintain water temperature. Ecology Birds eat the fruit cups and spread the seeds. Moose feed on the tree in winter in forests of the Rocky Mountains. Toxicity Many parts of yews are poisonous and can be fatal if eaten, including the seed, which should not even be chewed. Uses Traditionally, the resilient and rot-resistant wood was used by Native Americans to make tools, bows (backed with sinew), arrows, and canoe paddles. Other purposes for yew included making harpoons, fishhooks, wedges, clubs, spoons, drums, snowshoes, and arrowheads. The foliage and bark were used for medicinal purposes. Members of the Pit River Tribe would sell this plant to the Ukiah. The Concow tribe calls the tree yōl’-kō (Konkow language). Modern-day longbow makers report that only a very small percentage of yew trees have a grain suitable for their craft. The Japanese have used the wood for decorative purposes, and the Taiwanese have valued it as well.
The juicy red cup around the seed seems to be edible (but not the toxic seed within), with a mild, cherry-jello-like flavour. The berry is said to have a sweet taste but a slimy texture, while the leaves, bark, and seed are extremely poisonous and should not be consumed. The chemotherapy drug paclitaxel (taxol), used in breast, ovarian, and lung cancer treatment, can be derived from T. brevifolia and other species of yew. As the tree was already becoming scarce when its chemotherapeutic potential was realized around the 1990s, the Pacific yew was never commercially harvested from its habitat at a large scale; the widespread use of paclitaxel was enabled circa 2003, when a semi-synthetic pathway was developed from extracts of cultivated yews of other species. Gallery
Biology and health sciences
Pinophyta (Conifers)
Plants
4512079
https://en.wikipedia.org/wiki/Cephalaspis
Cephalaspis
Cephalaspis (from , 'head' and , 'shield') is a possibly monotypic genus of extinct osteostracan agnathan vertebrate. It was a trout-sized detritivorous fish that lived in the early Devonian. Description Like its relatives, Cephalaspis was heavily armored, presumably to defend against predatory placoderms and eurypterids, as well as to serve as a source of calcium for metabolic functions in calcium-poor freshwater environments. It had sensory patches along the rim and center of its head shield, which were used to sense worms and other burrowing organisms in the mud. Diet Because its mouth was situated directly beneath its head, Cephalaspis is thought to have been a bottom-feeder, akin to a heavily armoured catfish or sturgeon. By moving its plow-like head from side to side, Cephalaspis could easily stir sand and dust into the water, revealing the hiding places of its prey, digging up worms or crustaceans hidden in the mud and algae, and sifting through detritus (a feeding mode inferred from its lack of jaws and inability to bite). Classification The genus Cephalaspis has long been used as a wastebasket taxon since Agassiz erected it in 1835 for four species, C. lyelli, C. rostratus, C. lewisi and C. lloydi. It was eventually determined that the last three species were portions of what would eventually be described as the heterostracan Pteraspis rostratus. C. lyelli, named after Sir Charles Lyell, was left as the type species of the genus. Other researchers continued adding other similar-looking osteostracans throughout the decades until, in 2009, Sansom reevaluated Osteostraci and determined that only C. lyelli could be reliably placed within Cephalaspis, and that probably all other species would eventually need to be reexamined and placed into other genera. In the same 2009 study, Sansom also determined that Cephalaspis sensu stricto was the sister-taxon of cornuate osteostracans, i.e., all osteostracans that either have, or have ancestors that had, defined corners on the head-shields. Included species The following is a list of species that have been included in Cephalaspis; most likely do not belong to the genus, but have not been formally moved.
†Cephalaspis lyelli (Agassiz, 1835) (type species) †"Cephalaspis" aarhusi (Wangsjö, 1952) †"Cephalaspis" agassizi (Lankester, 1868) †"Cephalaspis" brevirostris (Denison, 1952) †"Cephalaspis" broughi (Wangsjö, 1952) †Cephalaspis cradleyensis (Stensiö, 1932) †"Cephalaspis" dissimulata (Wangsjö, 1952) †"Cephalaspis" doryphorus (Wangsjö, 1952) †"Cephalaspis" fletti (Stensiö, 1932) †"Cephalaspis" fraticornis (Wangsjö, 1952) †"Cephalaspis" hyperboreus (Wangsjö, 1952) †"Cephalaspis" lankestri (Stensiö, 1932) †"Cephalaspis" lornensis (Traquair, 1899) †"Cephalaspis" microlepidota (Balabai, 1962) †"Cephalaspis" novaescotiae (Denison, 1955) †"Cephalaspis" platycephalus (Wangsjö, 1952) †"Cephalaspis" producta (Wangsjö, 1952) †"Cephalaspis" recticornis (Wangsjö, 1952) †"Cephalaspis" spinifer (Stensiö, 1923) †"Cephalaspis" tenuicornis (Wangsjö, 1952) †"Cephalaspis" verrulosa (Wangsjö, 1952) †"Cephalaspis" websteri (Stensiö, 1932) †"Cephalaspis" whitbachensis (Stensiö, 1932) †"Cephalaspis" wyomingensis (Denison, 1952) Species of Cephalaspis that have been reassigned †Cephalaspis corystis (Wangsjö, 1952) = Machairaspis corystis †Cephalaspis excellens (Wangsjö, 1952) = Waengsjoeaspis excellens †Cephalaspis elegans (Balabai, 1962) = Zychaspis siemiradzkii †Cephalaspis hastata (Wangsjö, 1952) = Machairaspis hastata †Cephalaspis hoeli (Stensiö, 1927) = Mimetaspis hoeli †Cephalaspis ibex (Wangsjö, 1952) = Machairaspis ibex †Cephalaspis jarviki (Wangsjö, 1952) = Diademaspis jarviki †Cephalaspis magnifica (Traquair, 1893) = Trewinia magnifica †Cephalaspis microtuberculata (Obruchev, 1961) = Escuminaspis laticeps †Cephalaspis pagei (Lankester, 1868) = Janaspis pagei †Cephalaspis patteni (Robertson, 1936) = Levesquaspis patteni †Cephalaspis powriei (Lankester, 1868) = Janaspis powriei †Cephalaspis rosamundae (Roberts, 1937) = Escuminaspis laticeps †Cephalaspis rostrata (Agassiz, 1835) = Pteraspis rostrata †Cephalaspis salweyi (Egerton, 1857) = Zenaspis salweyi †Cephalaspis utahensis (Branson & Mehl, 1931) = Camptaspis utahensis Other miscellaneous species once assigned to Cephalaspis †Cephalaspis abergavenniensis (White, 1963) †Cephalaspis acuticornis (Stensiö, 1927) †Cephalaspis billcrofti (White & Toombs, 1983) †Cephalaspis campbelltonensis (Whiteaves, 1881) †Cephalaspis cocculi (MacGillivray, 1921) †Cephalaspis cwmmillensis (White & Toombs, 1983) †Cephalaspis dawsoni (Lankester, 1870) †Cephalaspis djurinensis (Balabai, 1962) †Cephalaspis grabrielsei (Dineley & Loeffler, ?) †Cephalaspis isachseni (Stensiö, 1927) †Cephalaspis jexi (Traquair, 1893) †Cephalaspis peninsula (Pageau, 1969) †Cephalaspis schrenckii (Pander, ?) †Cephalaspis sp. "Forfar" (Trewin & Davidson, 1996) †Cephalaspis syndenhami (Pageau, 1969) †Cephalaspis traquairi (Stensio, ?) †Cephalaspis uternaria (?) †Cephalaspis vogti (Stensiö, 1927) †Cephalaspis watneliei (Stensiö, 1927) †Cephalaspis westolli (Russell, 1954)
Biology and health sciences
Prehistoric agnathae and early chordates
Animals
4513980
https://en.wikipedia.org/wiki/Bacteroides%20fragilis
Bacteroides fragilis
Bacteroides fragilis is an anaerobic, Gram-negative, pleomorphic to rod-shaped bacterium. It is part of the normal microbiota of the human colon and is generally commensal, but can cause infection if displaced into the bloodstream or surrounding tissue following surgery, disease, or trauma. Habitat Bacteroides fragilis resides in the human gastrointestinal tract and is essential to healthy gastrointestinal function such as mucosal immunity and host nutrition. As a mesophile, it grows optimally at 37 °C and a pH around 7. Morphology Cells of B. fragilis are rod-shaped to pleomorphic with a cell size range of 0.5–1.5 × 1.0–6.0 μm. B. fragilis is a Gram-negative bacterium and does not possess flagella or cilia, making it non-motile. However, it does utilize peritrichous fimbriae for adhesion to other molecular structures. B. fragilis also utilizes a complex series of surface proteins, lipopolysaccharide chains, and outer membrane vesicles to help survive the volatile intestinal micro-environment. Metabolism and mutualism in the gut microbiome B. fragilis is an aerotolerant, anaerobic chemoorganotroph capable of fermenting a wide variety of glycans available in the human gut microenvironment, including glucose, sucrose, and fructose. B. fragilis can also catabolize a variety of biopolymers, polysaccharides, and glycoproteins into smaller molecules which can then be used and further broken down by other microbes. Fatty acids produced by the fermentation of carbohydrates can serve as a source of energy for the host. Cytochrome bd oxidase is essential for oxygen consumption in B. fragilis and can allow other obligate anaerobes to survive in the now oxygen-reduced microenvironment. Animals lacking gut bacteria require 30% more caloric intake to maintain body mass. Environment-sensing systems A complex environmental-sensory system allows B. fragilis to survive and adapt in the ever-changing human gut microbiome. This system is composed of many components and can effectively handle a variety of threats to the bacterium. Bacteriocins B. fragilis intestinal isolates secrete high levels of bacteriocin proteins and are resistant to the bacteriocins secreted by other closely related isolates. This mechanism is believed to reduce the level of intra-specific competition. Bile salt resistance B. fragilis utilizes enzymes such as bile salt hydrolase to resist the degrading effects of bile salts. The detergent activity of bile salts can permeabilize bacterial membranes, which can eventually lead to membrane collapse and/or cell damage. Oxidative stress response Proteins such as catalase, superoxide dismutase, and alkyl hydroperoxide reductase protect the organism from harmful oxygen radicals. This permits growth in the presence of nanomolar concentrations of O2. Antibiotic resistance Members of the genus Bacteroides are characterized by having the highest numbers of antibiotic resistance mechanisms, accompanied by the highest resistance rates, amongst anaerobic bacteria. The high antibiotic resistance of B. fragilis is mainly attributed to genetic plasticity. Species of the Bacteroidaceae have displayed increasing resistance to antimicrobial agents such as cefoxitin, clindamycin, metronidazole, carbapenems, and fluoroquinolones. Resistance reservoirs Bacteroides species accumulate a variety of antibiotic/antimicrobial resistance genes as they reside in the gastrointestinal tract.
This allows the genetic transfer of these genes to other Bacteroides species, and possibly to other, more virulent bacteria, leading to an overall increase in multi-drug resistance. This is exacerbated by the tendency of resistance genes to remain relatively stable even in the absence of the antibiotic. Epidemiology and pathogenesis The B. fragilis group is the most commonly isolated Bacteroidaceae in anaerobic infections, especially those that originate from the gastrointestinal microbiota. B. fragilis is the most prevalent organism in the B. fragilis group, accounting for 41% to 78% of the isolates of the group. These organisms are resistant to penicillin by virtue of production of beta-lactamase, and by other unknown factors. This group was formerly classified as subspecies of B. fragilis (i.e. B. f. ssp. fragilis, B. f. ssp. distasonis, B. f. ssp. ovatus, B. f. ssp. thetaiotaomicron, and B. f. ssp. vulgatus). They have been reclassified into distinct species on the basis of DNA homology studies. B. fragilis (formerly known as B. f. ssp. fragilis) is often recovered from blood, pleural fluid, peritoneal fluid, wounds, and brain abscesses. Although the B. fragilis group is the most common Bacteroides found in clinical specimens, it is the least common Bacteroides present in fecal microbiota, comprising only 0.5% of the bacteria present in stool. Their pathogenicity partly results from their ability to produce capsular polysaccharide, which is protective against phagocytosis and stimulates abscess formation. Bacteroides fragilis is involved in 90% of anaerobic peritoneal infections. It also causes bacteremia associated with intra-abdominal infections, peritonitis and abscesses following rupture of a viscus, and subcutaneous abscesses or burns near the anus. Though it is Gram-negative, it has an altered LPS and does not cause endotoxic shock. Untreated B. fragilis infections have a 60% mortality rate. Anti-inflammatory effects B. fragilis polysaccharide A (PSA) has been shown to protect animals from experimental diseases such as colitis, asthma, and pulmonary inflammation. B. fragilis mutants lacking surface polysaccharides cannot easily colonize the intestine. PSA colonization of B. fragilis in the gut mucosa induces regulatory T cells and suppresses pro-inflammatory T helper 17 cells.
Biology and health sciences
Gram-negative bacteria
Plants
14119824
https://en.wikipedia.org/wiki/Phalaenopsis%20amabilis
Phalaenopsis amabilis
Phalaenopsis amabilis, commonly known as the moon orchid, moth orchid, or mariposa orchid, is a species of flowering plant in the orchid family Orchidaceae. It is widely cultivated as a decorative houseplant. It is an epiphytic or lithophytic herb with long, thick roots, between two and eight thick, fleshy leaves with their bases hiding the stem, and nearly flat, white, long-lasting flowers on a branching flowering stem with up to ten flowers on each branch. Phalaenopsis amabilis is native to Maritime Southeast Asia, New Guinea, and Australia. It has three subspecies: P. a. amabilis, native to the Philippines (Palawan), Malaysia (Borneo), and Indonesia (Borneo, Sumatra, and Java); P. a. moluccana, native to the Maluku Islands (Seram and Buru Islands) and Sulawesi of Indonesia; and P. a. rosenstromii, native to Papua New Guinea and Australia (northeastern Queensland). Phalaenopsis amabilis is one of the three national flowers of Indonesia, where it is known as (lit. "moon orchid"). Description Phalaenopsis amabilis is an epiphytic, rarely lithophytic herb with coarse, flattened, branching roots up to long and usually wide. Between two and eight fleshy, dark green, oblong to egg-shaped leaves long and wide are arranged in two rows along the stem. The stem is but hidden by the leaf bases. The flowers are arranged on a stiff, arching flowering stem long emerging from a leaf base, with a few branches near the tip. Each branch of the flowering stem bears between two and ten white, long-lasting flowers on a stalk (including the ovary) long. Each flower is long and wide with the sepals and petals free from and spreading widely apart from each other. The sepals are egg-shaped, long and about wide, and the petals broadly egg-shaped to almost square, long and wide. The labellum is white with yellow and reddish markings, about long, with three lobes. The side lobes curve upwards and partly surround the column. The middle lobe is cross-shaped with a rounded tip and two long, thread-like wavy arms. There is a large yellow callus near the base of the labellum. Flowering time depends on distribution but occurs from April to December in New Guinea. Taxonomy and naming In 1750, before the system of binomial nomenclature had been formalised by Carl Linnaeus, Georg Eberhard Rumphius had collected the species on Ambon Island and described it as Angraecum albus majus in his book Herbarium Amboinense. Linnaeus described it in Species Plantarum, giving it the binomial Epidendrum amabile, and in 1825 Carl Ludwig Blume changed the name to Phalaenopsis amabilis. The specific epithet (amabilis) is a Latin word meaning "lovely". Subspecies There are three subspecies of P. amabilis recognised by the World Checklist of Selected Plant Families: Phalaenopsis amabilis subsp. amabilis, which is the most widespread subspecies and is distinguished from the other subspecies by its cross-shaped labellum middle lobe, the base of which has yellow and red markings; Phalaenopsis amabilis subsp. moluccana (Schltr.) Christenson, which has a linear-oblong labellum middle lobe, with a slight dilation at its base where there are yellow and white markings; Phalaenopsis amabilis subsp. rosenstromii (F.M.Bailey) Christenson, which has a relatively short, triangular labellum middle lobe where the markings are yellow. In Australia, subspecies rosenstromii is recognised as Phalaenopsis rosenstromii by the Australian Plant Census.
It was discovered by Gus Rosenstrom "on trees, high from the ground, Daintree River" and was first formally described by Frederick Manson Bailey, who published the description in the Queensland Agricultural Journal. Distribution and habitat Phalaenopsis amabilis usually grows on trees, rarely on rocks, in rainforest where the humidity is high but there is free air movement. Subspecies amabilis has the widest distribution and occurs from Palawan in the southern Philippines to Borneo, Sumatra and Java. Subspecies moluccana is separated from subspecies amabilis by the Wallace Line and is found in Sulawesi as well as Seram and Buru in the Moluccas. Subspecies rosenstromii is native to New Guinea and Australia, where it occurs on the Cape York Peninsula between the Iron Range and the Paluma Range National Park. It is separated from subspecies moluccana by Lydekker's Line. Conservation Phalaenopsis rosenstromii was listed as "endangered" under the Australian Government Environment Protection and Biodiversity Conservation Act 1999, but the listing was updated to Phalaenopsis amabilis subsp. rosenstromii in May 2016. The main threat to the subspecies in Australia is illegal collecting. Use in horticulture Phalaenopsis amabilis is reported to be very easy to grow as a houseplant, as long as attention is paid to a correct feeding and watering regimen. It thrives in a domestic temperature range of , in bright indirect light such as that offered by an east- or west-facing window. Specialist orchid compost and feed is widely available. Species and cultivars in the genus Phalaenopsis are recommended for beginners. In cultivation in the United Kingdom, Phalaenopsis amabilis has gained the Royal Horticultural Society's Award of Garden Merit. Phalaenopsis amabilis is one of the parents of Phalaenopsis Harriettiae, reportedly the first man-made Phalaenopsis hybrid, created by John Veitch and recorded in 1887. Importance Phalaenopsis amabilis ( meaning "moon orchid") is one of the three national flowers in Indonesia, the other two being the sambac jasmine and padma raksasa. It was officially recognized as national "flower of charm" () in Presidential Decree No. 4 in 1993. The orchid is also the official flower of Kota Kinabalu, the capital city of Sabah, Malaysia.
Biology and health sciences
Asparagales
Plants
1146946
https://en.wikipedia.org/wiki/Syncline
Syncline
In structural geology, a syncline is a fold with younger layers closer to the center of the structure, whereas an anticline is the inverse of a syncline. A synclinorium (plural synclinoriums or synclinoria) is a large syncline with superimposed smaller folds. Synclines are typically downward folds (synforms), termed synformal synclines (i.e. troughs), but synclines that point upwards can be found when strata have been overturned and folded (antiformal synclines). Characteristics On a geologic map, synclines are recognized as a sequence of rock layers, with the youngest at the fold's center or hinge and with a reverse sequence of the same rock layers on the opposite side of the hinge. If the fold pattern is circular or elongate, the structure is a basin. Folds typically form during crustal deformation as the result of compression that accompanies orogenic mountain building. Notable examples Powder River Basin, Wyoming, US Sideling Hill roadcut along Interstate 68 in western Maryland, US, where the Rockwell Formation and overlying Purslane Sandstone are exposed Forêt de Saou syncline in Saou, France The Southland Syncline in the southeastern corner of the South Island of New Zealand, including The Catlins and the Hokonui Hills Strathmore Syncline, Scotland Wilpena Pound, Flinders Ranges, South Australia Fort Valley, Shenandoah County, Virginia Hondo Syncline in the Picuris Mountains of New Mexico, an example of an overturned syncline. Gallery
Physical sciences
Structural geology
Earth science
1147329
https://en.wikipedia.org/wiki/Goji
Goji
Goji, goji berry, or wolfberry () is the sweet fruit of either Lycium barbarum or Lycium chinense, two closely related species of boxthorn in the nightshade family, Solanaceae. L. barbarum and L. chinense fruits are similar but can be distinguished by differences in taste and sugar content. Goji berries are primarily cultivated in the Ningxia Hui Autonomous Region and Xinjiang in China, where the unique climate and soil conditions contribute to their vibrant color and nutrient-rich profile. Both of these species are native to East Asia and have long been used in traditional East Asian cuisine. In the United States, species of the genus Lycium are given common names such as desert-thorn; Lycium berlandieri, for example, is known as Berlandier's wolfberry. The fruit has also been an ingredient in East Asian traditional medicine, namely traditional Chinese, Japanese, and Korean medicine, since at least the 3rd century AD. In pharmacopeias, the fruit of the plant is called by the Latin name lycii fructus and the leaves are called herba lycii. Since about 2000, goji berry and derived products have become common in developed countries as health foods or alternative medicine remedies, a trend driven by exaggerated and unproven claims about their health benefits. Etymology and naming The genus name Lycium was assigned by Linnaeus in 1753. The Latin name lycium is derived from the Greek word λυκιον (lykion), used by Pliny the Elder (23–79) and Pedanius Dioscorides (ca. 40–90) for a plant known as dyer's buckthorn, which was probably a Rhamnus species. The Greek word refers to the ancient region of Lycia (Λυκία) in Anatolia, where that plant grew. The common English name, wolfberry, has an unknown origin. It may have arisen from the mistaken assumption that the Latin name Lycium was derived from Greek λύκος (lycos), meaning "wolf". In the English-speaking world, the name goji berry has been used since around 2000. The word goji is an approximation of the pronunciation of gǒuqǐ (pinyin for 枸杞), the name for the berry-producing plant L. chinense in several Chinese dialects. In Japanese, it is known as 枸杞 (kuko), usually written in kana as クコ. In technical botanical nomenclature, L. barbarum is called matrimony vine, while L. chinense is Chinese desert-thorn. In the United States, various common names are used for Lycium species and varieties, such as desert-thorn, boxthorn, matrimony vine, and wolfberry. Uses Traditional East Asian cuisine Young wolfberry shoots and leaves are harvested commercially as a leaf vegetable. The berries are used in dishes as either a garnish or a source of sweetness. Food Since the early 21st century, the dried fruit, occasionally compared to raisins, has been marketed as a health food, with unsupported health claims about its benefits. In the wake of those claims, dried and fresh goji berries were included in many snack foods and food supplements, such as granola bars. There are also products of whole and ground wolfberry seeds and seed oil. Marketing controversies Exaggerated claims about the health benefits of goji berry and derived products have triggered strong reactions from government regulatory agencies. In 2019–2020, the U.S. Food and Drug Administration (FDA) placed two goji product distributors on notice with warning letters about unproven therapeutic benefits.
The advertisers' statements were in violation of the United States Food, Drug and Cosmetic Act [21 USC/321 (g)(1)] because they "establish[ed] the product as a drug intended for use in the cure, mitigation, treatment, or prevention of disease" when goji ingredients have had no such scientific evaluation. The FDA additionally stated that the goji products are "not generally recognized as safe and effective for the referenced conditions" and therefore must be treated as a "new drug" under Section 21(p) of the Act. New drugs may not be legally marketed in the United States without prior approval of the FDA. In January 2007, marketing statements for a goji juice product were the subject of an investigative report by the consumer advocacy program Marketplace, produced by the Canadian public broadcaster CBC. In the interview, Earl Mindell (then working for the direct-marketing company FreeLife International, Inc.) falsely claimed that the Memorial Sloan-Kettering Cancer Center in New York had completed clinical studies showing that use of wolfberry juice would prevent 75% of human breast cancer cases. Among the extreme claims used to market goji berries or their juice, often referred to as a "superfruit", is the unsupported story that a Chinese man named Li Qing Yuen, who was said to have consumed wolfberries daily, lived to the age of 256 years (1677–1933). This claim apparently originated in a 2003 booklet by Earl Mindell, who also claimed that goji had anti-cancer properties. The booklet contained false and unverified claims. On 29 May 2009, a class action lawsuit was filed against FreeLife in the United States District Court of Arizona. The lawsuit alleged false claims, misrepresentations, false and deceptive advertising, and other issues regarding FreeLife's Himalayan Goji Juice, GoChi, and TaiSlim products, and sought remedies for consumers who had purchased the products over the preceding years. A settlement agreement was reached on 28 April 2010, in which FreeLife took steps to ensure that its goji products were not marketed as "unheated" or "raw", and made a contribution to an educational organization. As with many other novel "health" foods and supplements, the lack of clinical evidence and poor quality control in the manufacture of consumer products prevent goji from being clinically recommended or applied. Scientific research Because of the numerous effects claimed by traditional medicine, there has been considerable basic research to investigate the biological properties of the fruit's phytochemicals. The composition of the fruits, seeds, roots, and other constituents, such as polysaccharides, has been analyzed, and extracts are under study. However, no biological effects or clinical effectiveness of consuming the fruit itself, its juice, or extracts have been confirmed. Safety Interaction with drugs In vitro testing suggests that unidentified wolfberry phytochemicals in goji tea may inhibit the metabolism of medications, such as those processed by the cytochrome P450 liver enzymes. Such drugs include warfarin and drugs for diabetes, tachycardia, or hypertension. Pesticide and fungicide residues Organochlorine pesticides are conventionally used in commercial wolfberry cultivation to mitigate infestation by insects. China's Green Food Standard, administered by the Chinese Ministry of Agriculture's China Green Food Development Center, permits some pesticide and herbicide use.
Agriculture in the Tibetan plateau (where many "Himalayan" or "Tibetan"-branded berries supposedly originate) conventionally uses fertilizers and pesticides, making organic claims for berries originating there dubious. Since the early 21st century, high levels of insecticide residues (including fenvalerate, cypermethrin, and acetamiprid) and fungicide residues (such as triadimenol and isoprothiolane) have been detected by the United States Food and Drug Administration in some imported wolfberries and wolfberry products of Chinese origin, leading to the seizure of these products. Cultivation and commercialization Wolfberries are most often sold in dried form. When ripe, the oblong, red berries are tender and must be picked or shaken from the vine into trays to avoid spoiling. The fruits are preserved by drying them in full sun on open trays or by mechanical dehydration, employing a progressively increasing series of heat exposures over 48 hours. China China is the main supplier of wolfberry products in the world, with total exports generating US$120 million in 2004. This production derived from farmed nationwide, yielding 95,000 tons of wolfberries. The majority of commercially produced wolfberry (50,000 tons in 2013, accounting for 45% of China's total yield) comes from L. barbarum plantations in Ningxia and Xinjiang in Northwestern China. The cultivation is centered in Zhongning County, Ningxia, where wolfberry plantations typically range between 40 and 400 hectares (100–1000 acres or 500–6000 mu) in area. Ningxia goji has been cultivated along the fertile floodplains of the Yellow River for more than 700 years. The berries are sometimes described commercially as "red diamonds". The region has developed an industrial association of growers, processors, marketers, and scholars of wolfberry cultivation to promote the berry's commercial and export potential. Ningxia goji is the variety used by practitioners of traditional Chinese medicine. Wolfberries are celebrated each August in Ningxia with an annual festival coinciding with the berry harvest. Originally held in Ningxia's capital, Yinchuan, the festival has been based since 2000 in Zhongning County. Besides Ningxia, commercial volumes of wolfberries grow in the Chinese regions of Inner Mongolia, Qinghai, Gansu, Shaanxi, Shanxi, and Hebei. United Kingdom Lycium barbarum was introduced to the United Kingdom in the 1730s by the Duke of Argyll, but the plant was mostly used for hedges and decorative gardening. The UK Food Standards Agency (FSA) had initially placed goji berry on the Novel Foods list; that classification would have required authorisation from the European Council and Parliament for marketing. However, on 18 June 2007, the FSA concluded that there was a significant history of consumption of the fruit before 1997, indicating its safety, and thus removed it from the list. Canada and United States In the first decade of the 21st century, farmers in Canada and the United States began cultivating goji on a commercial scale to meet potential markets for fresh berries, juice, and processed products. Australia Australia imports the majority of its goji berries from China, largely because Australian labour is expensive compared with that in the countries holding the largest share of the market.
Biology and health sciences
Berries
Plants
1147422
https://en.wikipedia.org/wiki/Wobbegong
Wobbegong
Wobbegong is the common name given to the 12 species of carpet sharks in the family Orectolobidae. They are found in shallow temperate and tropical waters of the western Pacific Ocean and eastern Indian Ocean, chiefly around Australia and Indonesia, although one species (the Japanese wobbegong, Orectolobus japonicus) occurs as far north as Japan. The word wobbegong is believed to come from an Australian Aboriginal language, meaning "shaggy beard", referring to the growths around the mouth of the shark of the western Pacific. Description Wobbegongs are bottom-dwelling sharks, spending much of their time resting on the sea floor. Most species have a maximum length of , but the largest, the spotted wobbegong, Orectolobus maculatus, and banded wobbegong, O. halei, reach about in length. Wobbegongs are well camouflaged with a symmetrical pattern of bold markings which resembles a carpet. Because of this striking pattern, wobbegongs and their close relatives are often referred to as carpet sharks. The camouflage is improved by the presence of small weed-like whisker lobes surrounding the wobbegong's jaw, which help to conceal it and also act as sensory barbs. Wobbegongs make use of their camouflage to hide among rocks and catch smaller fish which swim too close, typical of ambush predators. Wobbegongs also have a powerful jaw with needle-like teeth that assist in catching reef fish and other sharks for food. The blood cells of several species of wobbegong have also been described. Interaction with humans Wobbegongs are generally not considered dangerous to humans, but have attacked swimmers, snorkelers, and scuba divers who inadvertently come close to them. The Australian Shark Attack File contains more than 50 records of unprovoked attacks by wobbegongs, and the International Shark Attack File 31 records, none of them fatal. Wobbegongs have also bitten surfers. Wobbegongs are very flexible and can easily bite a hand holding onto their tail. They have many small but sharp teeth and their bite can be severe, even through a wetsuit; having once bitten, they have been known to hang on and can be very difficult to remove. In Australia, wobbegong skin is used to make leather. Captivity Although most wobbegong species are unsuitable for home aquaria due to their large adult size, this has not stopped some of the larger species from being sold in the aquarium trade. Small wobbegong species, such as the tasselled wobbegong and Ward's wobbegong, are "ideal" sharks for home aquarists to keep because they are an appropriate size and are lethargic, enabling them to be accommodated within the limited space of a home tank, although they will consume tankmates, even quite large ones. Some aquarists, by contrast, see the lack of activity as a drawback to keeping wobbegongs, and prefer more active sharks. Wobbegongs are largely nocturnal and, due to their slow metabolism, do not have to be fed as often as other sharks. Most do well on two feedings weekly. Underfed wobbegongs can be recognised by visibly atrophied dorsal musculature. Genera and species The 12 living species of wobbegong, in three genera, are:
Genus Eucrossorhinus Regan, 1908
 Eucrossorhinus dasypogon (Bleeker, 1867) (tasselled wobbegong)
Genus Orectolobus Bonaparte, 1834
 Orectolobus floridus Last & Chidlow, 2008 (floral banded wobbegong)
 Orectolobus halei Whitley, 1940 (Gulf wobbegong or banded wobbegong)
 Orectolobus hutchinsi Last, Chidlow & Compagno, 2006 (western wobbegong)
 Orectolobus japonicus Regan, 1906 (Japanese wobbegong)
 Orectolobus leptolineatus Last, Pogonoski & W. T. White, 2010 (Indonesian wobbegong)
 Orectolobus maculatus (Bonnaterre, 1788) (spotted wobbegong)
 Orectolobus ornatus (De Vis, 1883) (ornate wobbegong)
 Orectolobus parvimaculatus Last & Chidlow, 2008 (dwarf spotted wobbegong)
 Orectolobus reticulatus Last, Pogonoski & W. T. White, 2008 (network wobbegong)
 Orectolobus wardi Whitley, 1939 (northern wobbegong)
Genus Sutorectus Whitley, 1939
 Sutorectus tentaculatus (W. K. H. Peters, 1864) (cobbler wobbegong)
Fossil genera include:
 Cretorectolobus Case, 1978
 Eometlaouia Noubhani & Cappetta, 2002
 Orectoloboides Cappetta, 1977
Conservation status
Biology and health sciences
Sharks
Animals
1148092
https://en.wikipedia.org/wiki/Primordial%20fluctuations
Primordial fluctuations
Primordial fluctuations are density variations in the early universe which are considered the seeds of all structure in the universe. Currently, the most widely accepted explanation for their origin is in the context of cosmic inflation. According to the inflationary paradigm, the exponential growth of the scale factor during inflation caused quantum fluctuations of the inflaton field to be stretched to macroscopic scales, and, upon leaving the horizon, to "freeze in". At the later stages of radiation- and matter-domination, these fluctuations re-entered the horizon, and thus set the initial conditions for structure formation. The statistical properties of the primordial fluctuations can be inferred from observations of anisotropies in the cosmic microwave background and from measurements of the distribution of matter, e.g., galaxy redshift surveys. Since the fluctuations are believed to arise from inflation, such measurements can also set constraints on parameters within inflationary theory. Formalism Primordial fluctuations are typically quantified by a power spectrum which gives the power of the variations as a function of spatial scale. Within this formalism, one usually considers the fractional energy density of the fluctuations, given by $\delta(\mathbf{x}) \equiv \frac{\rho(\mathbf{x}) - \bar{\rho}}{\bar{\rho}}$, where $\rho$ is the energy density, $\bar{\rho}$ its average and $k$ the wavenumber of the fluctuations. The power spectrum $\mathcal{P}(k)$ can then be defined via the ensemble average of the Fourier components: $\langle |\delta_k|^2 \rangle = \mathcal{P}(k)$. There are both scalar and tensor modes of fluctuations. Scalar modes Scalar modes have the power spectrum defined as the mean squared density fluctuation for a specific wavenumber $k$, i.e., the average fluctuation amplitude at a given scale: $\mathcal{P}_\delta(k) = \langle |\delta_k|^2 \rangle$. Many inflationary models predict that the scalar component of the fluctuations obeys a power law in which $\mathcal{P}_\delta(k) \propto k^{n_s}$. For scalar fluctuations, $n_s$ is referred to as the scalar spectral index, with $n_s = 1$ corresponding to scale invariant fluctuations (not scale invariant in $\delta$ but in the comoving curvature perturbation $\mathcal{R}$, for which the power is indeed invariant with $k$ when $n_s = 1$). The scalar spectral index describes how the density fluctuations vary with scale. As the size of these fluctuations depends upon the inflaton's motion when these quantum fluctuations are becoming super-horizon sized, different inflationary potentials predict different spectral indices. These depend upon the slow roll parameters, in particular the gradient and curvature of the potential. In models where the curvature is large and positive, $n_s > 1$. On the other hand, models such as monomial potentials predict a red spectral index, $n_s < 1$. Planck provides a value of $n_s \approx 0.96$. Tensor modes The presence of primordial tensor fluctuations is predicted by many inflationary models. As with scalar fluctuations, tensor fluctuations are expected to follow a power law and are parameterized by the tensor index $n_t$ (the tensor version of the scalar index). The ratio of the tensor to scalar power spectra is given by $r = 2\,\mathcal{P}_h(k)/\mathcal{P}_\mathcal{R}(k)$, where the 2 arises due to the two polarizations of the tensor modes. 2015 CMB data from the Planck satellite gives a constraint of $r < 0.11$. Adiabatic/isocurvature fluctuations Adiabatic fluctuations are density variations in all forms of matter and energy which have equal fractional over/under densities in the number density. So for example, an adiabatic photon overdensity of a factor of two in the number density would also correspond to an electron overdensity of two. For isocurvature fluctuations, the number density variations for one component do not necessarily correspond to number density variations in other components.
While it is usually assumed that the initial fluctuations are adiabatic, the possibility of isocurvature fluctuations can be considered given current cosmological data. Current cosmic microwave background data favor adiabatic fluctuations and constrain uncorrelated isocurvature cold dark matter modes to be small.
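To make the power-law parameterization above concrete, here is a minimal Python sketch; the amplitudes, spectral indices, and wavenumber range are illustrative assumptions rather than fitted Planck values.

import numpy as np

# Illustrative, assumed parameters -- not fitted Planck values.
A_S, N_S = 1.0, 0.96   # scalar amplitude and spectral index (n_s = 1 is scale invariant here)
A_T, N_T = 0.05, 0.0   # tensor amplitude per polarization and tensor index

def scalar_power(k):
    """Power-law scalar spectrum P_s(k) = A_s * k**n_s."""
    return A_S * k ** N_S

def tensor_power(k):
    """Power-law tensor spectrum for one polarization, P_h(k) = A_t * k**n_t."""
    return A_T * k ** N_T

def tensor_to_scalar(k):
    """r = 2 * P_h / P_s; the factor 2 counts the two tensor polarizations."""
    return 2.0 * tensor_power(k) / scalar_power(k)

for k in np.logspace(-4, 0, 5):  # wavenumbers in arbitrary units
    print(f"k = {k:.1e}   P_s = {scalar_power(k):.3e}   r = {tensor_to_scalar(k):.3f}")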
Physical sciences
Physical cosmology
Astronomy
1149201
https://en.wikipedia.org/wiki/Egyptian%20numerals
Egyptian numerals
The system of ancient Egyptian numerals was used in Ancient Egypt from around 3000 BC until the early first millennium AD. It was a system of numeration based on multiples of ten, often rounded off to the higher power, written in hieroglyphs. The Egyptians had no concept of a positional notation such as the decimal system. The hieratic form of numerals stressed an exact finite series notation, ciphered one-to-one onto the Egyptian alphabet. Digits and numbers The following hieroglyphs were used to denote powers of ten: Multiples of these values were expressed by repeating the symbol as many times as needed. For instance, a stone carving from Karnak shows the number 4,622 as: Egyptian hieroglyphs could be written in both directions (and even vertically). In this example the symbols decrease in value from top to bottom and from left to right. On the original stone carving, it is right-to-left, and the signs are thus reversed. Zero There was no symbol or concept of zero as a placeholder in Egyptian numeration and zero was not used in calculations. However, the symbol nefer (nfr𓄤, "good", "complete", "beautiful") was apparently also used for two numeric purposes: in a papyrus listing court expenses, it indicated a zero balance; in a drawing for Meidum Pyramid (and at other sites), nefer is used to indicate ground level: heights and depths are measured "above nefer" or "below nefer" respectively. According to Carl Boyer, a deed from Edfu contained a "zero concept" replacing the magnitude in geometry. Fractions Rational numbers could also be expressed, but only as sums of unit fractions, i.e., sums of reciprocals of positive integers, except for 2/3 and 3/4. The hieroglyph indicating a fraction looked like a mouth, which meant "part": Fractions were written with this fractional solidus, i.e., the numerator 1, and the positive denominator below. Thus, 1/3 was written as: Special symbols were used for 1/2 and for the non-unit fractions 2/3 and, less frequently, 3/4: If the denominator became too large, the "mouth" was just placed over the beginning of the "denominator": Written numbers As with most modern-day languages, the ancient Egyptian language could also write out numerals as words phonetically, just as one can write thirty instead of "30" in English. The word (thirty), for instance, was written as while the numeral (30) was This was, however, uncommon for most numbers other than one and two and the signs were used most of the time. Hieratic numerals As administrative and accounting texts were written on papyrus or ostraca, rather than being carved into hard stone (as were hieroglyphic texts), the vast majority of texts employing the Egyptian numeral system utilize the hieratic script. Instances of numerals written in hieratic can be found as far back as the Early Dynastic Period. The Old Kingdom Abusir Papyri are a particularly important corpus of texts that utilize hieratic numerals. Boyer proved 50 years ago that hieratic script used a different numeral system, using individual signs for the numbers 1 to 9, multiples of 10 from 10 to 90, the hundreds from 100 to 900, and the thousands from 1000 to 9000. A large number like 9999 could thus be written with only four signs—combining the signs for 9000, 900, 90, and 9—as opposed to 36 hieroglyphs. Boyer saw the new hieratic numerals as ciphered, mapping one number onto one Egyptian letter for the first time in human history. Greeks adopted the new system, mapping their counting numbers onto two of their alphabets, the Doric and Ionian.
In the oldest hieratic texts the individual numerals were clearly written in a ciphered relationship to the Egyptian alphabet. But during the Old Kingdom a series of standardized writings had developed for sign-groups containing more than one numeral, with signs repeated in the manner of Roman numerals. However, repetition of the same numeral for each place-value was not allowed in the hieratic script. As the hieratic writing system developed over time, these sign-groups were further simplified for quick writing; this process continued into Demotic as well. Two famous mathematical papyri using hieratic script are the Moscow Mathematical Papyrus and the Rhind Mathematical Papyrus. Egyptian words for numbers The following table shows the reconstructed Middle Egyptian forms of the numerals (which are indicated by a preceding asterisk), the transliteration of the hieroglyphs used to write them, and finally the Coptic numerals which descended from them and which give Egyptologists clues as to the vocalism of the original Egyptian numbers. A breve (˘) in some reconstructed forms indicates a short vowel whose quality remains uncertain; the letter 'e' represents a vowel that was originally u or i (exact quality uncertain) but became e by Late Egyptian.
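The additive, power-of-ten character of the hieroglyphic system is easy to express in code. The following Python sketch counts how many times each power-of-ten symbol must be repeated; the sign names are descriptive placeholders for the actual glyphs, and the example reproduces the Karnak figure of 4,622.

# Descriptive placeholder names stand in for the actual hieroglyphs.
SIGNS = [
    (1_000_000, "seated god with raised arms"),
    (100_000, "tadpole"),
    (10_000, "finger"),
    (1_000, "lotus plant"),
    (100, "coil of rope"),
    (10, "cattle hobble"),
    (1, "single stroke"),
]

def egyptian_numeral(n: int) -> str:
    """Decompose n additively, highest power of ten first."""
    parts = []
    for value, name in SIGNS:
        count, n = divmod(n, value)
        if count:
            parts.append(f"{count} x {name} ({value})")
    return "; ".join(parts)

print(egyptian_numeral(4622))
# 4 x lotus plant (1000); 6 x coil of rope (100); 2 x cattle hobble (10); 2 x single stroke (1)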
Mathematics
Basics
null
92290
https://en.wikipedia.org/wiki/Mechanical%20equilibrium
Mechanical equilibrium
In classical mechanics, a particle is in mechanical equilibrium if the net force on that particle is zero. By extension, a physical system made up of many parts is in mechanical equilibrium if the net force on each of its individual parts is zero. In addition to defining mechanical equilibrium in terms of force, there are many alternative definitions for mechanical equilibrium which are all mathematically equivalent. In terms of momentum, a system is in equilibrium if the momenta of its parts are all constant. In terms of velocity, the system is in equilibrium if velocity is constant. In a rotational mechanical equilibrium the angular momentum of the object is conserved and the net torque is zero. More generally in conservative systems, equilibrium is established at a point in configuration space where the gradient of the potential energy with respect to the generalized coordinates is zero. If a particle in equilibrium has zero velocity, that particle is in static equilibrium. Since all particles in equilibrium have constant velocity, it is always possible to find an inertial reference frame in which the particle is stationary with respect to the frame. Stability An important property of systems at mechanical equilibrium is their stability. Potential energy stability test The equilibria of a system can be determined from the function describing its potential energy, using calculus. A system is in mechanical equilibrium at the critical points of the function describing the system's potential energy. These points can be located using the fact that the derivative of the function is zero at these points. To determine whether or not the system is stable or unstable, the second derivative test is applied. With $V(x)$ denoting the potential energy of a system with a single degree of freedom $x$, so that equilibria lie where $\frac{dV}{dx} = 0$, the following calculations can be performed: Second derivative < 0: The potential energy is at a local maximum, which means that the system is in an unstable equilibrium state. If the system is displaced an arbitrarily small distance from the equilibrium state, the forces of the system cause it to move even farther away. Second derivative > 0: The potential energy is at a local minimum. This is a stable equilibrium. The response to a small perturbation is forces that tend to restore the equilibrium. If more than one stable equilibrium state is possible for a system, any equilibria whose potential energy is higher than the absolute minimum represent metastable states. Second derivative = 0: The state is neutral to the lowest order and nearly remains in equilibrium if displaced a small amount. To investigate the precise stability of the system, higher order derivatives can be examined. The state is unstable if the lowest nonzero derivative is of odd order or has a negative value, and stable if the lowest nonzero derivative is both of even order and has a positive value. If all derivatives are zero then it is impossible to derive any conclusions from the derivatives alone. For example, the function $e^{-1/x^2}$ (defined as 0 at $x = 0$) has all derivatives equal to zero. At the same time, this function has a local minimum at $x = 0$, so it is a stable equilibrium. If this function is multiplied by the sign function, all derivatives will still be zero but it will become an unstable equilibrium. Function is locally constant: In a truly neutral state the energy does not vary and the state of equilibrium has a finite width.
This is sometimes referred to as a state that is marginally stable, or in a state of indifference, or astable equilibrium. When considering more than one dimension, it is possible to get different results in different directions, for example stability with respect to displacements in the x-direction but instability in the y-direction, a case known as a saddle point. Generally an equilibrium is only referred to as stable if it is stable in all directions. Statically indeterminate system Sometimes the equilibrium equations (the force and moment equilibrium conditions) are insufficient to determine the forces and reactions. Such a situation is described as statically indeterminate. Statically indeterminate situations can often be solved by using information from outside the standard equilibrium equations. Examples A stationary object (or set of objects) is in "static equilibrium," which is a special case of mechanical equilibrium. A paperweight on a desk is an example of static equilibrium. Other examples include a rock balance sculpture, or a stack of blocks in the game of Jenga, so long as the sculpture or stack of blocks is not in the state of collapsing. Objects in motion can also be in equilibrium. A child sliding down a slide at constant speed would be in mechanical equilibrium, but not in static equilibrium (in the reference frame of the earth or slide). Another example of mechanical equilibrium is a person pressing a spring to a defined point. He or she can push it to an arbitrary point and hold it there, at which point the compressive load and the spring reaction are equal. In this state the system is in mechanical equilibrium. When the compressive force is removed the spring returns to its original state. The minimal number of static equilibria of homogeneous, convex bodies (when resting under gravity on a horizontal surface) is of special interest. In the planar case, the minimal number is 4, while in three dimensions one can build an object with just one stable and one unstable balance point. Such an object is called a gömböc.
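The derivative tests described above can be automated symbolically. This SymPy sketch (the example potentials are illustrative) locates the critical points of a one-dimensional potential and applies the second-derivative test, falling back on higher derivatives when the second derivative vanishes:

import sympy as sp

x = sp.symbols("x", real=True)

def classify_equilibria(V, max_order=8):
    """Find equilibria of V(x) (where dV/dx = 0) and classify each one:
    stable only if the lowest nonzero derivative is of even order and positive."""
    for x0 in sp.solveset(sp.diff(V, x), x, domain=sp.S.Reals):
        verdict = "undetermined by derivatives alone"
        for order in range(2, max_order + 1):
            d = sp.diff(V, x, order).subs(x, x0)
            if d != 0:
                if order % 2 == 0 and d > 0:
                    verdict = "stable (local minimum)"
                else:
                    verdict = "unstable"
                break
        print(f"x = {x0}: {verdict}")

classify_equilibria(x**4 - 2 * x**2)  # double well: unstable at 0, stable at +/-1
classify_equilibria(x**4)             # second derivative vanishes at 0, but the
                                      # fourth derivative is positive, so stable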
Physical sciences
Basics_4
Physics
92295
https://en.wikipedia.org/wiki/Sewing
Sewing
Sewing is the craft of fastening pieces of textiles together using a sewing needle and thread. Sewing is one of the oldest of the textile arts, arising in the Paleolithic era. Before the invention of spinning yarn or weaving fabric, archaeologists believe Stone Age people across Europe and Asia sewed fur and leather clothing using bone, antler or ivory sewing-needles and "thread" made of various animal body parts including sinew, catgut, and veins. For thousands of years, all sewing was done by hand. The invention of the sewing machine in the 19th century and the rise of computerization in the 20th century led to mass production and export of sewn objects, but hand sewing is still practiced around the world. Fine hand sewing is a characteristic of high-quality tailoring, haute couture fashion, and custom dressmaking, and is pursued by both textile artists and hobbyists as a means of creative expression. The first known use of the word "sewing" was in the 14th century. A person who sews may be called a seamstress, sewist, sewer, or stitcher. History Origins Sewing has an ancient history estimated to begin during the Paleolithic Era. Sewing was used to stitch together animal hides for clothing and for shelter. The Inuit, for example, used sinew from caribou for thread and needles made of bone; the indigenous peoples of the American Plains and Canadian Prairies used sophisticated sewing methods to assemble tipi shelters. Sewing was combined with the weaving of plant leaves in Africa to create baskets, such as those made by Zulu weavers, who used thin strips of palm leaf as "thread" to stitch wider strips of palm leaf that had been woven into a coil. The weaving of cloth from natural fibers originated in the Middle East around 4000 BC, and perhaps earlier during the Neolithic Age, and the sewing of cloth accompanied this development. During the Middle Ages, Europeans who could afford it employed seamstresses and tailors. The vital importance of sewing was indicated by the honorific position of "Lord Sewer" at many European coronations from the Middle Ages. An example was Robert Radcliffe, 1st Earl of Sussex who was appointed Lord Sewer at the coronation of Henry VIII of England in 1509. Sewing for the most part was a woman's occupation, and most sewing before the 19th century was practical. Clothing was an expensive investment for most people, and women had an important role in extending the longevity of items of clothing. Sewing was used for mending. Clothing that was faded would be turned inside-out so that it could continue to be worn, and sometimes had to be taken apart and reassembled to suit this purpose. Once clothing became worn or torn, it would be taken apart and the reusable cloth sewn together into new items of clothing, made into quilts, or otherwise put to practical use. The many steps involved in making clothing from scratch (weaving, pattern making, cutting, alterations, and so forth) meant that women often bartered their expertise in a particular skill with one another. Decorative needlework such as embroidery was a valued skill, and young women with the time and means would practice to build their skill in this area. From the Middle Ages to the 17th century, sewing tools such as needles, pins and pincushions were included in the trousseaus of many European brides. Sewing birds or sewing clamps were used as a third hand and were popular gifts for seamstresses in the 19th century. Decorative embroidery was valued in many cultures worldwide. 
Although most embroidery stitches in the Western repertoire are traditionally British, Irish or Western European in origin, stitches originating in different cultures are known throughout the world today. Some examples are the Cretan Open Filling stitch, Romanian Couching or Oriental Couching, and the Japanese stitch. The stitches associated with embroidery spread by way of the trade routes that were active during the Middle Ages. The Silk Road brought Chinese embroidery techniques to Western Asia and Eastern Europe, while techniques originating in the Middle East spread to Southern and Western Europe through Morocco and Spain. European imperial settlements also spread embroidery and sewing techniques worldwide. However, there are instances of sewing techniques indigenous to cultures in distant locations from one another, where cross-cultural communication would have been historically unlikely. For example, a method of reverse appliqué known to areas of South America is also known to Southeast Asia. Industrial Revolution The Industrial Revolution shifted the production of textiles from the household to the mills. In the early decades of the Industrial Revolution, the machinery produced whole cloth. The world's first sewing machine was patented in 1790 by Thomas Saint. By the early 1840s, other early sewing machines began to appear. Barthélemy Thimonnier introduced a simple sewing machine in 1841 to produce military uniforms for France's army; shortly afterward, a mob of tailors broke into Thimonnier's shop and threw the machines out of the windows, believing the machines would put them out of work. By the 1850s, Isaac Singer developed the first sewing machines that could operate quickly and accurately and surpass the productivity of a seamstress or tailor sewing by hand. While much clothing was still produced at home by female members of the family, more and more ready-made clothes for the middle classes were being produced with sewing machines. Textile sweatshops full of poorly paid sewing machine operators grew into entire business districts in large cities like London and New York City. To further support the industry, piece work was done for little money by women living in slums. Needlework was one of the few occupations considered acceptable for women, but it did not pay a living wage. Women working from home often worked 14-hour days to earn enough to support themselves, sometimes by renting sewing machines that they could not afford to buy. Tailors became associated with higher-end clothing during this period. In London, this status grew out of the dandy trend of the early 19th century, when new tailor shops were established around Savile Row. These shops acquired a reputation for sewing high-quality handmade clothing in the style of the latest British fashions, as well as more classic styles. The boutique culture of Carnaby Street was absorbed by Savile Row tailors during the late 20th century, ensuring the continued flourishing of Savile Row's businesses. Historian Judith Bennett explains that the nature of women's work maintained a consistent pattern from the medieval period through the Second Industrial Revolution, characterized by tasks that were low-profit, low-volume, and low-skilled, often performed alongside other responsibilities. Similarly, Judy Lown argues that although women's work transitioned from the household to the factory, its essence—remaining low-skilled and poorly paid—persisted without significant change. 
The transition to industrialization introduced a growing dependence on cash income in Northwestern Europe. For many working-class families, opportunities to earn wages were often located in distant cities, prompting many girls to leave their rural homes and migrate to urban areas. The changing nature of work in general raised questions about how women fit into rising industrialization and how both men and women should navigate gender roles. One of the concerns of the 19th century was the impact of industrialization on women's morality. According to Mariana Valverde, many male factory workers and union leaders alike argued that women working in industrial settings would be contrary to their nature and symbolized a "return to barbarism." This perception not only reflected prevailing gender biases but also influenced labor policies and union strategies, which often sought to exclude women from better-paying industrial jobs. Such debates reinforced the belief that women were best suited for domestic roles or low-skilled work, limiting their economic opportunities and perpetuating a cycle of inequality. 20th century onward Sewing underwent further developments during the 20th century. As sewing machines became more affordable to the working class, demand for sewing patterns grew. Women had become accustomed to seeing the latest fashions in periodicals during the late 19th and early 20th centuries, increasing demand for sewing patterns yet more. American tailor and manufacturer Ebenezer Butterick met the demand with paper patterns that could be traced and used by home sewers. The patterns, sold in small packets, became wildly popular. Several pattern companies soon established themselves. Women's magazines also carried sewing patterns, and continued to do so for much of the 20th century. This practice declined during the later decades of the 20th century, when ready-made clothing became a necessity as women joined the paid workforce in larger numbers, leaving them with less time to sew, if indeed they had an interest. Today, the low price of ready-made clothing in shops means that home sewing is confined largely to hobbyists in Western countries, with the exception of cottage industries in custom dressmaking and upholstery. Sewing as a pleasurable hobby has gained popularity, as attested by the BBC television show The Great British Sewing Bee, on air since 2013. The spread of sewing machine technology to industrialized economies around the world meant the spread of Western-style sewing methods and clothing styles as well. In Japan, traditional clothing was sewn together with running stitch that could be removed so that the clothing could be taken apart and the assorted pieces laundered separately. The tight-locked stitches made by home sewing machines, and the use of Western clothing patterns, led to a movement towards wearing Western-style clothing during the early 20th century. Western sewing and clothing styles were disseminated in sub-Saharan Africa by Christian missionaries from the 1830s onward. Indigenous cultures, such as the Zulu and Tswana, were indoctrinated in the Western way of dress as a sign of conversion to Christianity. First Western hand sewing techniques, and later machine sewing, spread throughout the regions where the European colonists settled. However, a recent examination of new online learning methods demonstrated that technology can be adapted to share knowledge of a culture's traditional sewing methods.
Using self-paced online tutorials, a Malay sewing class learned how to tailor and sew a traditional men's Baju Kurung garment in 3 days, whereas a traditional Malay sewing class would have taken 5 days to teach the same information. Advances in industrial technology, such as the development of synthetic fibres during the early 20th century, have brought profound changes to the textile industry as a whole. Textile industries in Western countries have declined sharply as textile companies compete for cheaper labour in other parts of the world. According to the U.S. Department of Labor, "employment of sewers and tailors is expected to experience little or no change, growing 1 percent from 2010 to 2020". It is estimated that every lost textile job in a Western country in recent years has resulted in 1.5 jobs being created in an outsourced country such as China. Textile workers who perform tasks with sewing machines, or do detailed work by hand, are still a vital component of the industry, however. Small-scale sewing is also an economic standby in many developing countries, where many people, both male and female, are self-employed sewers. Garment construction Patterns and fitting Garment construction is usually guided by a sewing pattern. A pattern can be quite simple; some patterns are nothing more than a mathematical formula that the sewer calculates based on the intended wearer's measurements. Once calculated, the sewer has the measurements needed to cut the cloth and sew the garment together. At the other end of the spectrum are haute couture fashion designs. When a couture garment is made of unusual material, or has extreme proportions, the design may challenge the sewer's engineering knowledge. Complex designs are drafted and refitted dozens of times, may take around 40 hours to develop a final pattern, and require 60 hours of cutting and sewing. It is important for a pattern to be created well, because the fit of the completed piece determines whether or not it will be worn. Most clothing today is mass-produced, and conforms to standard sizing, based on body measurements that are intended to fit the greatest proportion of the population. However, while "standard" sizing is generally a useful guideline, it is little more than that, because there is no industry standard that is "both widely accepted and strictly adhered to in all markets". Home sewers often work from sewing patterns purchased from companies such as Simplicity, Butterick, McCall's, Vogue, and many others. Such patterns are typically printed on large pieces of tissue paper; a sewer may simply cut out the required pattern pieces for use but may choose to transfer the pattern onto a thicker paper if repeated use is desired. A sewer may choose to alter a pattern to make it more accurately fit the intended wearer. Patterns may be changed to increase or decrease length; to add or remove fullness; to adjust the position of the waistline, shoulder line, or any other seam; or to make a variety of other adjustments. Volume can be added with elements such as pleats, or reduced with the use of darts. Before work is started on the final garment, test garments may be made, sometimes referred to as muslins. Tools Sewers working on a simple project need only a few sewing tools, such as flexible measuring tape, needle, thread, cloth, and sewing shears, but there are many helpful sewing aids and specialized tools available.
Rotary cutters may also be used for cutting fabric, usually used with a cutting mat to protect other surfaces from being damaged. Seam rippers are used to remove mistaken stitches or basting stitches. Special washable markers or chalk are used to mark the fabric as a guide to construction. Pressing and ironing are an essential part of any sewing project, and require additional tools. A steam iron is used to press seams and garments, and a variety of pressing aids such as a seam roll or tailor's ham are used to aid in shaping a garment. A pressing cloth may be used to protect the fabric from damage. A velvet board helps to iron velvet without crushing it. Sewing machines are now made for a broad range of specialised sewing purposes, such as quilting machines, heavy-duty machines for sewing thicker fabrics (such as leather), computerized machines for embroidery, and sergers for finishing raw edges of fabric. A wide variety of presser foot attachments are available for many sewing machines; feet exist to help with hemming, pintucks, attaching cording, assembling patchwork, quilting, and a variety of other functions. A thimble is a small hard tool used to protect fingertips while hand sewing. Elements Seamstresses are provided with the pattern, while tailors draft their own, both with the intent of using as little fabric as possible. Patterns will specify whether to cut on the grain or the bias to manipulate fabric stretch. Special placement may be required for directional, striped, or plaid fabrics. Supporting materials, such as interfacing, interlining, or lining, may be used in garment construction, to give the fabric a more rigid or durable shape. Before or after the pattern pieces are cut, it is often necessary to mark the pieces to provide a guide during the sewing process. Marking methods may include using pens, pencils, or chalk, tailor's tacks, snips, pins, or thread tracing, among others. In addition to the normal lockstitch, construction stitches include edgestitching, understitching, staystitching and topstitching. Seam types include the plain seam, zigzag seam, flat fell seam, French seam and many others. Software With the development of cloth simulation software such as CLO3D, Marvelous Designer and Optitex, seamsters can now draft patterns on the computer and visualize clothing designs by using the pattern creation tools and virtual sewing machines within these cloth simulation programs. In non-human animals Tailorbirds (genus Orthotomus), such as the common tailorbird, exhibit sewing behaviour, as do some birds of related genera. They are capable of stitching together the edges of leaves, using plant fibres or spider silk as thread, in order to create cavities in which to build their nests.
Technology
Techniques_2
null
92377
https://en.wikipedia.org/wiki/Electromagnet
Electromagnet
An electromagnet is a type of magnet in which the magnetic field is produced by an electric current. Electromagnets usually consist of wire wound into a coil. A current through the wire creates a magnetic field which is concentrated along the center of the coil. The magnetic field disappears when the current is turned off. The wire turns are often wound around a magnetic core made from a ferromagnetic or ferrimagnetic material such as iron; the magnetic core concentrates the magnetic flux and makes a more powerful magnet. The main advantage of an electromagnet over a permanent magnet is that the magnetic field can be quickly changed by controlling the amount of electric current in the winding. However, unlike a permanent magnet, which needs no power, an electromagnet requires a continuous supply of current to maintain the magnetic field. Electromagnets are widely used as components of other electrical devices, such as motors, generators, electromechanical solenoids, relays, loudspeakers, hard disks, MRI machines, scientific instruments, and magnetic separation equipment. Electromagnets are also employed in industry for picking up and moving heavy iron objects such as scrap iron and steel. History Danish scientist Hans Christian Ørsted discovered in 1820 that electric currents create magnetic fields. In the same year, the French scientist André-Marie Ampère showed that iron can be magnetized by inserting it into an electrically fed solenoid. British scientist William Sturgeon invented the electromagnet in 1824. His first electromagnet was a horseshoe-shaped piece of iron that was wrapped with about 18 turns of bare copper wire. (Insulated wire did not then exist.) The iron was varnished to insulate it from the windings. When a current was passed through the coil, the iron became magnetized and attracted other pieces of iron; when the current was stopped, it lost magnetization. Sturgeon displayed its power by showing that although it only weighed seven ounces (roughly 200 grams), it could lift nine pounds (roughly 4 kilos) when the current of a single-cell power supply was applied. However, Sturgeon's magnets were weak because the uninsulated wire he used could only be wrapped in a single spaced-out layer around the core, limiting the number of turns. Beginning in 1830, US scientist Joseph Henry systematically improved and popularised the electromagnet. By using wire insulated by silk thread and inspired by Schweigger's use of multiple turns of wire to make a galvanometer, he was able to wind multiple layers of wire onto cores, creating powerful magnets with thousands of turns of wire, including one that could support . The first major use for electromagnets was in telegraph sounders. The magnetic domain theory of how ferromagnetic cores work was first proposed in 1906 by French physicist Pierre-Ernest Weiss, and the detailed modern quantum mechanical theory of ferromagnetism was worked out in the 1920s by Werner Heisenberg, Lev Landau, Felix Bloch, and others. Applications of electromagnets A portative electromagnet is one designed to just hold material in place; an example is a lifting magnet. A tractive electromagnet applies a force and moves something. 
Electromagnets are very widely used in electric and electromechanical devices, including:
 Motors and generators
 Transformers
 Relays
 Electric bells and buzzers
 Loudspeakers and headphones
 Actuators such as valves
 Magnetic recording and data storage equipment: tape recorders, VCRs, hard disks
 MRI machines
 Scientific equipment such as mass spectrometers
 Particle accelerators
 Magnetic locks
 Magnetic separation equipment used for separating magnetic from nonmagnetic material; for example, separating ferrous metal in scrap
 Industrial lifting magnets
 Magnetic levitation, used in maglev trains
 Induction heating for cooking, manufacturing, and hyperthermia therapy
Simple solenoid A common tractive electromagnet is a uniformly wound solenoid and plunger. The solenoid is a coil of wire, and the plunger is made of a material such as soft iron. Applying a current to the solenoid applies a force to the plunger and may make it move. The plunger stops moving when the forces upon it are balanced. For example, the forces are balanced when the plunger is centered in the solenoid. The maximum uniform pull happens when one end of the plunger is at the middle of the solenoid. An approximation for the force is $F = \frac{C A N I}{l}$, where $C$ is a proportionality constant, $A$ is the cross-sectional area of the plunger, $N$ is the number of turns in the solenoid, $I$ is the current through the solenoid wire, and $l$ is the length of the solenoid. For long, slender solenoids (in units using inches, pounds force, and amperes), the value of $C$ is around 0.009 to 0.010 psi (maximum pull pounds per square inch of plunger cross-sectional area). For example, a 12-inch-long coil ($l = 12$ in) with a long plunger with a cross section of one inch square ($A = 1$ in²) and 11,200 ampere-turns ($NI = 11{,}200$) had a maximum pull of 8.75 pounds (corresponding to $C \approx 0.0094$ psi); a numerical check of this example appears below. The maximum pull is increased when a magnetic stop is inserted into the solenoid. The stop becomes a magnet that will attract the plunger; it adds little to the solenoid pull when the plunger is far away but dramatically increases the pull when the plunger is close. An approximation for the pull with a stop adds a second, gap-dependent term to the expression above; here $g$ is the distance between the end of the stop and the end of the plunger, and the additional constant for units of inches, pounds, and amperes with slender solenoids is about 2660. The first term inside the bracket represents the attraction between the stop and the plunger; the second term represents the same force as the solenoid without a stop. Some improvements can be made on this basic design. The ends of the stop and plunger are often conical. For example, the plunger may have a pointed end that fits into a matching recess in the stop. The shape makes the solenoid's pull more uniform as a function of separation. Another improvement is to add a magnetic return path around the outside of the solenoid (an "iron-clad solenoid"). The magnetic return path, just as the stop, has little impact until the air gap is small. Physics An electric current flowing in a wire creates a magnetic field around the wire, due to Ampere's law (see drawing of wire with magnetic field). To concentrate the magnetic field in an electromagnet, the wire is wound into a coil with many turns of wire lying side-by-side. The magnetic field of all the turns of wire passes through the center of the coil, creating a strong magnetic field there. A coil forming the shape of a straight tube (a helix) is called a solenoid. The direction of the magnetic field through a coil of wire can be determined by the right-hand rule.
If the fingers of the right hand are curled around the coil in the direction of current flow (conventional current, flow of positive charge) through the windings, the thumb points in the direction of the field inside the coil. The side of the magnet that the field lines emerge from is defined to be the north pole. Magnetic core For definitions of the variables below, see box at end of article. Much stronger magnetic fields can be produced if a magnetic core, made of a soft ferromagnetic (or ferrimagnetic) material such as iron, is placed inside the coil. A core can increase the magnetic field to thousands of times the strength of the field of the coil alone, due to the high magnetic permeability of the material. Not all electromagnets use cores, so one with a core is called a ferromagnetic-core or iron-core electromagnet. This phenomenon occurs because the magnetic core's material (often iron or steel) is composed of small regions called magnetic domains that act like tiny magnets (see ferromagnetism). Before the current in the electromagnet is turned on, these domains point in random directions, so their tiny magnetic fields cancel each other out, and the core has no large-scale magnetic field. When a current passes through the wire wrapped around the core, its magnetic field penetrates the core and turns the domains to align in parallel with the field. As they align, all their tiny magnetic fields add to the wire's field, which creates a large magnetic field that extends into the space around the magnet. The core concentrates the field, and the magnetic field passes through the core with lower reluctance than it would through air. The larger the current passed through the wire coil, the more the domains align, and the stronger the magnetic field is. Once all the domains are aligned, any additional current only causes a slight increase in the strength of the magnetic field. Eventually, the field strength levels off and becomes nearly constant, regardless of how much current is sent through the windings. This phenomenon is called saturation, and is the main nonlinear feature of ferromagnetic materials. For most high-permeability core steels, the maximum possible strength of the magnetic field is around 1.6 to 2 teslas (T). This is why the very strongest electromagnets, such as superconducting and very high current electromagnets, cannot use cores. When the current in the coil is turned off, most of the domains in the core material lose alignment and return to a random state, and the electromagnetic field disappears. However, some of the alignment persists because the domains resist turning their direction of magnetization, which leaves the core magnetized as a weak permanent magnet. This phenomenon is called hysteresis and the remaining magnetic field is called remanent magnetism. The residual magnetization of the core can be removed by degaussing. In alternating current electromagnets, such as those used in motors, the core's magnetization is constantly reversed, and the remanence contributes to the motor's losses. Ampere's law The magnetic field of electromagnets in the general case is given by Ampere's Law: $\oint \mathbf{H} \cdot d\boldsymbol{\ell} = I_{\text{enc}}$, which says that the integral of the magnetizing field $\mathbf{H}$ around any closed loop is equal to the sum of the current flowing through the loop. A related equation is the Biot–Savart law, which gives the magnetic field due to each small segment of current.
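As a numerical check of the simple-solenoid approximation given earlier, this short Python sketch reproduces the quoted worked example; the constant C = 0.0094 psi is taken from the 0.009-0.010 range stated above.

def solenoid_pull(C_psi, area_in2, ampere_turns, length_in):
    """Approximate maximum pull (pounds) of a long, slender solenoid:
    F = C * A * (N * I) / l, in inch/pound/ampere units."""
    return C_psi * area_in2 * ampere_turns / length_in

# Worked example from the text: 12-inch coil, 1 in^2 plunger, 11,200 ampere-turns.
print(solenoid_pull(0.0094, 1.0, 11_200, 12.0))  # about 8.8 pounds, matching ~8.75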
Force exerted by magnetic field Likewise, on the solenoid, the force exerted by the magnetic field on a section of core material is $F = \frac{B^2 A}{2\mu_0}$, where $A$ is the cross-sectional area of the core. This equation can be derived from the energy stored in a magnetic field. Energy is force times distance. Rearranging terms yields the equation above. The 1.6 T limit on the field previously mentioned sets a limit on the maximum force per unit core area, or magnetic pressure, an iron-core electromagnet can exert; roughly $\frac{B_{\text{sat}}^2}{2\mu_0} \approx 10^6$ N/m² for the core's saturation limit, $B_{\text{sat}} \approx 1.6$ T. In more intuitive units, it is useful to remember that at 1 T the magnetic pressure is approximately 4 atmospheres. Given a core geometry, the magnetic field needed for a given force can be calculated from $B = \sqrt{2\mu_0 F/A}$; if the result is much more than 1.6 T, a larger core must be used. However, computing the magnetic field and force exerted by ferromagnetic materials in general is difficult for two reasons. First, the strength of the field varies from point to point in a complicated way, particularly outside the core and in air gaps, where fringing fields and leakage flux must be considered. Second, the magnetic field and force are nonlinear functions of the current, depending on the nonlinear relation between $B$ and $H$ for the particular core material used. For precise calculations, computer programs that can produce a model of the magnetic field using the finite element method are employed. Magnetic circuit In many practical applications of electromagnets, such as motors, generators, transformers, lifting magnets, and loudspeakers, the iron core is in the form of a loop or magnetic circuit, possibly broken by a few narrow air gaps. Iron presents much less "resistance" (reluctance) to the magnetic field than air, so a stronger field can be obtained if most of the magnetic field's path is within the core. Since the magnetic field lines are closed loops, the core is usually made in the form of a loop. Since most of the magnetic field is confined within the outlines of the core loop, this allows a simplification of the mathematical analysis. A common simplifying assumption satisfied by many electromagnets, which will be used in this section, is that the magnetic field strength is constant around the magnetic circuit (within the core and air gaps) and zero outside it. Most of the magnetic field will be concentrated in the core material (C) (see Fig. 1). Within the core, the magnetic field (B) will be approximately uniform across any cross-section; if the core also has roughly constant area throughout its length, the field in the core will be constant. At any air gaps (G) between core sections, the magnetic field lines are no longer confined by the core. Here, they bulge out beyond the core geometry over the length of the gap, reducing the field strength in the gap. The "bulges" (BF) are called fringing fields. However, as long as the length of the gap is smaller than the cross-section dimensions of the core, the field in the gap will be approximately the same as in the core. In addition, some of the magnetic field lines (BL) will take "short cuts" and not pass through the entire core circuit, and thus will not contribute to the force exerted by the magnet. This also includes field lines that encircle the wire windings but do not enter the core. This is called leakage flux.
The equations in this section are valid for electromagnets for which: the magnetic circuit is a single loop of core material, possibly broken by a few air gaps; the core has roughly the same cross-sectional area throughout its length; any air gaps between sections of core material are not large compared with the cross-sectional dimensions of the core; there is negligible leakage flux. Magnetic field in magnetic circuit The magnetic field created by an electromagnet is proportional to both $N$ and $I$; their product, $NI$, is the magnetomotive force. For an electromagnet with a single magnetic circuit, Ampere's Law reduces to $NI = \frac{B}{\mu} L_{\text{core}} + \frac{B}{\mu_0} L_{\text{gap}}$, where $L_{\text{core}}$ is the length of the flux path in the core, $L_{\text{gap}}$ the total length of the air gaps, and $\mu$ the permeability of the core material. This is a nonlinear equation, because the permeability of the core, $\mu$, varies with the magnetic field $B$. For an exact solution, the value of $\mu$ at the operating $B$ must be obtained from the core material hysteresis curve. If $B$ is unknown, the equation must be solved by numerical methods. However, if the magnetomotive force is well above saturation (so the core material is in saturation), the magnetic field will be approximately the material's saturation value $B_{\text{sat}}$, and will not vary much with changes in $NI$. For a closed magnetic circuit (no air gap), most core materials saturate at a magnetomotive force of roughly 800 ampere-turns per meter of flux path. For most core materials, the relative permeability $\mu_r = \mu/\mu_0$ is on the order of thousands. So in the equation above, the second (air gap) term dominates. Therefore, in magnetic circuits with an air gap, $B$ depends strongly on the length of the air gap, and the length of the flux path in the core does not matter much. Given an air gap of 1 mm, a magnetomotive force of about 796 ampere-turns is required to produce a magnetic field of 1 T. Closed magnetic circuit For a closed magnetic circuit (no air gap), such as would be found in an electromagnet lifting a piece of iron bridged across its poles, the equation above becomes $B = \frac{NI\mu}{L}$. Substituting into the force equation, the force is $F = \frac{\mu^2 N^2 I^2 A}{2\mu_0 L^2}$. To maximize the force, a core with a short flux path and a wide cross-sectional area is preferred (this also applies to magnets with an air gap). To achieve this, in applications like lifting magnets and loudspeakers, a flat cylindrical design is often used. The winding is wrapped around a short wide cylindrical core that forms one pole, and a thick metal housing that wraps around the outside of the windings forms the other part of the magnetic circuit, bringing the magnetic field to the front to form the other pole. Force between electromagnets The previous methods are applicable to electromagnets with a magnetic circuit; however, they do not apply when a large part of the magnetic field path is outside the core. (A non-circuit example would be a magnet with a straight cylindrical core.) To determine the force between two electromagnets (or permanent magnets) in these cases, a special analogy called a magnetic-charge model can be used. In this model, it is assumed that the magnets have well-defined "poles" where the field lines emerge from the core, and that the magnetic field is produced by fictitious "magnetic charges" on the surface of the poles. This model assumes point-like poles (instead of surfaces), and thus it only yields a good approximation when the distance between the magnets is much larger than their diameter; thus, it is useful just for determining the force between them. The magnetic pole strength of an electromagnet is given by $m = \frac{NIA}{L}$, and thus the force between two poles is $F = \frac{\mu_0 m_1 m_2}{4\pi r^2}$. Each electromagnet has two poles, so the total force on magnet 1 from magnet 2 is equal to the vector sum of the forces of magnet 2's poles acting on each pole of magnet 1.
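A short sketch of the single-loop magnetic circuit equation above; the core length and relative permeability are assumed, illustrative values, and the air-gap term reproduces the 796 ampere-turn figure quoted for a 1 mm gap at 1 T.

import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space (T*m/A)

def ampere_turns(B, core_len_m, gap_len_m, mu_r):
    """Magnetomotive force NI for a single-loop circuit:
    NI = (B/mu) * L_core + (B/mu0) * L_gap, with mu = mu_r * mu0."""
    return B * core_len_m / (mu_r * MU0) + B * gap_len_m / MU0

def gap_force(B, area_m2):
    """Attractive force across the gap, F = B^2 * A / (2 * mu0)."""
    return B ** 2 * area_m2 / (2 * MU0)

# 1 T across a 1 mm gap: the gap term alone is ~796 ampere-turns; with an
# assumed 0.2 m core path and mu_r = 4000 the core adds only ~40 more.
print(ampere_turns(1.0, core_len_m=0.2, gap_len_m=1e-3, mu_r=4000))  # ~835.6
print(gap_force(1.0, area_m2=1e-4))  # ~39.8 N on a 1 cm^2 pole face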
Side effects There are several side effects which occur in electromagnets, which must be considered in their design. These effects generally become more significant in larger electromagnets. Ohmic heating The only power consumed in a direct current (DC) electromagnet under steady-state conditions is due to the resistance of the windings, and is dissipated as heat. Some large electromagnets require water cooling systems in the windings to carry off the waste heat. Since the magnetic field is proportional to the product $NI$, the number of turns in the windings $N$ and the current $I$ can be chosen to minimize heat losses, as long as their product is constant. Since the power dissipation, $P = I^2 R$, increases with the square of the current but only increases approximately linearly with the number of windings, the power lost in the windings can be minimized by reducing $I$ and proportionally increasing the number of turns $N$, or using thicker wire to reduce the resistance. For example, halving $I$ and doubling $N$ halves the power loss, as does doubling the area of the wire. In either case, increasing the amount of wire reduces the ohmic losses. For this reason, electromagnet windings often have a significant thickness. However, the limit to increasing $N$ or lowering the resistance is that the windings take up more space between the magnet's core pieces. If the area available for windings is filled up, adding more turns requires a smaller diameter of wire, which has higher resistance, and thus cancels the advantage of using more turns. So, in large magnets there is a minimum amount of heat loss that cannot be reduced. This increases with the square of the magnetic flux, $\Phi^2$. Inductive voltage spikes An electromagnet has significant inductance, and resists changes in the current through its windings. Any sudden changes in the winding current cause large voltage spikes across the windings. This is because when the current through the magnet is increased, such as when it is turned on, energy from the circuit must be stored in the magnetic field. When it is turned off, the energy in the field is returned to the circuit. If an ordinary switch is used to control the winding current, this can cause sparks at the terminals of the switch. This does not occur when the magnet is switched on, because the limited supply voltage causes the current through the magnet and the field energy to increase slowly. But when it is switched off, the energy in the magnetic field is suddenly returned to the circuit, causing a large voltage spike and an arc across the switch contacts, which can damage them. With small electromagnets, a capacitor is sometimes used across the contacts, which reduces arcing by temporarily storing the current. More often, a diode is used to prevent voltage spikes by providing a path for the current to recirculate through the winding until the energy is dissipated as heat. The diode is connected across the winding, oriented so it is reverse-biased during steady state operation and does not conduct. When the supply voltage is removed, the voltage spike forward-biases the diode and the reactive current continues to flow through the winding, through the diode, and back into the winding. A diode used in this way is called a freewheeling diode or flyback diode. Large electromagnets are usually powered by variable current electronic power supplies, controlled by a microprocessor, which prevent voltage spikes by accomplishing current changes slowly, in gentle ramps. It may take several minutes to energize or deenergize a large magnet.
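The trade-off between current and turns at constant magnetomotive force can be seen directly. In this sketch the per-turn resistance is an assumed, illustrative value, and halving the current while doubling the turns halves the loss, as stated above.

def copper_loss(current_a, turns, resistance_per_turn_ohm):
    """Ohmic loss P = I^2 * R, with winding resistance proportional to the
    number of turns (the same wire gauge is assumed for every turn)."""
    return current_a ** 2 * (turns * resistance_per_turn_ohm)

# Constant NI = 1000 ampere-turns, assumed 10 milliohms per turn.
for current, turns in [(10.0, 100), (5.0, 200), (2.5, 400)]:
    loss = copper_loss(current, turns, 0.01)
    print(f"NI = {current * turns:.0f}  I = {current}  N = {turns}  loss = {loss:.1f} W")
# 100.0 W, 50.0 W, 25.0 W: each halving of I (with NI fixed) halves the loss.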
Lorentz forces In powerful electromagnets, the magnetic field exerts a force on each turn of the windings, due to the Lorentz force acting on the moving charges within the wire. The Lorentz force is perpendicular to both the axis of the wire and the magnetic field. It can be visualized as a pressure between the magnetic field lines, pushing them apart. It has two effects on an electromagnet's windings: The field lines within the axis of the coil exert a radial force on each turn of the windings, tending to push them outward in all directions. This causes a tensile stress in the wire. The leakage field lines between each turn of the coil exert an attractive force between adjacent turns, tending to pull them together. The Lorentz forces increase with $B^2$. In large electromagnets the windings must be firmly clamped in place, to prevent motion on power-up and power-down from causing metal fatigue in the windings. In the Bitter electromagnet design (Fig. 2), used in very high-field research magnets, the windings are constructed as flat disks to resist the radial forces, and clamped in an axial direction to resist the axial ones. Core losses In alternating current (AC) electromagnets, used in transformers, inductors, and AC motors and generators, the magnetic field is constantly changing. This causes energy losses in their magnetic cores, which are dissipated as heat in the core. The losses stem from two processes: eddy currents and hysteresis losses. Eddy currents: From Faraday's law of induction, a changing magnetic field induces circulating electric currents (eddy currents) inside nearby conductors. The energy in these currents is dissipated as heat in the electrical resistance of the conductor, so they are a cause of energy loss. Since the magnet's iron core is conductive, and most of the magnetic field is concentrated there, eddy currents in the core are the major problem. Eddy currents are closed loops of current that flow in planes perpendicular to the magnetic field. The energy dissipated is proportional to the area enclosed by the loop. To prevent them, the cores of AC electromagnets are made of stacks of thin steel sheets, or laminations, oriented parallel to the magnetic field, with an insulating coating on the surface. The insulation layers prevent eddy current from flowing between the sheets. Any remaining eddy currents must flow within the cross-section of each individual lamination, which reduces losses greatly. Another alternative is to use a ferrite core, which is a nonconductor. Hysteresis losses: Reversing the direction of magnetization of the magnetic domains in the core material each cycle causes energy loss, because of the coercivity of the material. These are called hysteresis losses. The energy lost per cycle is proportional to the area of the hysteresis loop in the B-H graph. To minimize this loss, magnetic cores used in transformers and other AC electromagnets are made of "soft" low coercivity materials, such as silicon steel or soft ferrite. The energy loss per cycle of the alternating current is constant for each of these processes, so the power loss increases linearly with frequency. High-field electromagnets Superconducting electromagnets When a magnetic field higher than the ferromagnetic limit of 1.6 T is needed, superconducting electromagnets can be used. Instead of using ferromagnetic materials, these use superconducting windings cooled with liquid helium, which conduct current without electrical resistance.
High-field electromagnets Superconducting electromagnets When a magnetic field higher than the ferromagnetic limit of 1.6 T is needed, superconducting electromagnets can be used. Instead of using ferromagnetic materials, these use superconducting windings cooled with liquid helium, which conduct current without electrical resistance. These allow enormous currents to flow, which generate intense magnetic fields. Superconducting magnets are limited by the field strength at which the winding material ceases to be superconducting. Current designs are limited to 10–20 T, with the current (2017) record being 32 T. The necessary refrigeration equipment and cryostat make them much more expensive than ordinary electromagnets. However, in high-power applications this can be offset by lower operating costs, since after startup no power is required for the windings, as no energy is lost to ohmic heating. They are used in particle accelerators and MRI machines. Bitter electromagnets Both iron-core and superconducting electromagnets have limits to the field they can produce. Therefore, the most powerful man-made magnetic fields have been generated by air-core non-superconducting electromagnets of a design invented by Francis Bitter in 1933, called Bitter electromagnets. Instead of wire windings, a Bitter magnet consists of a solenoid made of a stack of conducting disks, arranged so that the current moves in a helical path through them, with a hole through the center where the maximum field is created. This design has the mechanical strength to withstand the extreme Lorentz forces of the field, which increase with B². The disks are pierced with holes through which cooling water passes to carry away the heat caused by the high current. The strongest continuous field achieved solely with a resistive magnet is 41.5 T, produced by a Bitter electromagnet at the National High Magnetic Field Laboratory in Tallahassee, Florida; the previous record was 37.5 T. The strongest continuous magnetic field overall, 45 T, was achieved in June 2000 with a hybrid device consisting of a Bitter magnet inside a superconducting magnet. The factor limiting the strength of electromagnets is the inability to dissipate the enormous waste heat, so more powerful fields, up to 100 T, have been obtained from resistive magnets by sending brief pulses of high current through them; the inactive period after each pulse allows the heat produced during the pulse to be removed before the next pulse. Explosively pumped flux compression The most powerful man-made magnetic fields have been created by using explosives to compress the magnetic field inside an electromagnet as it is pulsed; these are called explosively pumped flux compression generators. The implosion compresses the magnetic field to values of around 1,000 T for a few microseconds. While the blast destroys the magnet itself, shaped charges can redirect it outward to minimize harm to the experiment; these devices are known as destructive pulsed electromagnets. They are used in physics and materials science research to study the properties of materials at high magnetic fields.
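The field limits described in this section ultimately reflect energy density: a field B stores an energy density, and exerts an outward magnetic pressure on the windings, of B²/2μ₀. A quick computed check (the pressures are derived, not quoted from the article) shows why fields beyond a few tens of teslas strain any winding material:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def magnetic_pressure_pa(b_tesla):
    """Magnetic pressure (energy density) of a field B: P = B^2 / (2 mu_0)."""
    return b_tesla ** 2 / (2 * MU_0)

for b in (1.6, 45.0, 100.0, 1000.0):
    print(f"B = {b:7.1f} T -> {magnetic_pressure_pa(b):.2e} Pa")
# 45 T   -> ~8e8 Pa (about 800 MPa, approaching the strength limit of steel)
# 100 T  -> ~4e9 Pa, which is why such fields can only be pulsed briefly
```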
Physical sciences
Basics_9
null
92385
https://en.wikipedia.org/wiki/Eta%20Carinae
Eta Carinae
η Carinae (Eta Carinae, abbreviated to η Car), formerly known as η Argus, is a stellar system containing at least two stars with a combined luminosity greater than five million times that of the Sun, located around 7,500 light-years distant in the constellation Carina. Previously a 4th-magnitude star, it brightened in 1837 to become brighter than Rigel, marking the start of its so-called "Great Eruption". It became the second-brightest star in the sky between 11 and 14 March 1843 before fading well below naked-eye visibility after 1856. In a smaller eruption, it reached 6th magnitude in 1892 before fading again. It has brightened consistently since about 1940, becoming brighter than magnitude 4.5 by 2014. At declination −59° 41′ 04.26″, η Carinae is circumpolar from locations on Earth south of latitude 30°S (for reference, the latitude of Johannesburg is 26°12′S), and is not visible north of about latitude 30°N, just south of Cairo (which is at a latitude of 30°02′N); a sketch of this calculation is given below. The two main stars of the η Carinae system have an eccentric orbit with a period of 5.54 years. The primary is an extremely unusual star, similar to a luminous blue variable (LBV). It was initially very massive, probably around 150 times the mass of the Sun, has already shed a substantial fraction of that mass, and is expected to explode as a supernova in the astronomically near future. This is the only star known to produce ultraviolet laser emission. The secondary star is hot and also highly luminous, probably of spectral class O, around 30–80 times as massive as the Sun. The system is heavily obscured by the Homunculus Nebula, which consists of material ejected from the primary during the Great Eruption. It is a member of the Trumpler 16 open cluster, itself embedded in the much larger Carina Nebula. Although unrelated to the star and nebula, the weak Eta Carinids meteor shower has a radiant very close to η Carinae.
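The visibility latitudes quoted above follow from a simple rule: ignoring refraction, a star at declination δ is circumpolar for observers poleward of latitude 90° − |δ| in the same hemisphere, and never rises for observers beyond that latitude in the opposite hemisphere. A minimal sketch using η Carinae's declination:

```python
# Visibility limits from declination alone (ignoring atmospheric refraction).
dec = -59.684  # eta Carinae's declination in degrees, roughly -59 deg 41'

circumpolar_south_of = -(90.0 - abs(dec))  # latitude below which it never sets
invisible_north_of = 90.0 - abs(dec)       # latitude above which it never rises

print(f"Circumpolar south of latitude {circumpolar_south_of:.1f} deg")  # ~ -30.3
print(f"Never visible north of latitude {invisible_north_of:.1f} deg")  # ~ +30.3
```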
Observational history η Carinae was first recorded as a fourth-magnitude star in the 16th or 17th century. It became the second-brightest star in the sky in the mid-19th century, before fading below naked-eye visibility. During the second half of the 20th century, it slowly brightened to again become visible to the naked eye, and by 2014 was again a fourth-magnitude star. Discovery and naming There is no reliable evidence of η Carinae being observed or recorded before the 17th century, although Dutch navigator Pieter Keyser described a fourth-magnitude star at approximately the correct position around 1595–1596, which was copied onto the celestial globes of Petrus Plancius and Jodocus Hondius and the 1603 Uranometria of Johann Bayer. Frederick de Houtman's independent star catalogue from 1603 does not include η Carinae among the other 4th-magnitude stars in the region. The earliest firm record was made by Edmond Halley in 1677, when he recorded the star simply as Sequens (i.e. "following" relative to another star) within a new constellation Robur Carolinum. His Catalogus Stellarum Australium was published in 1679. The star was also known by the Bayer designations η Roboris Caroli, η Argus, and η Navis. In 1751 Nicolas-Louis de Lacaille gave the stars of Argo Navis and Robur Carolinum a single set of Greek-letter Bayer designations within his constellation Argo, and designated three areas within Argo for the purposes of using Latin-letter designations three times over. The letter η fell within the keel portion of the ship, which was later to become the constellation Carina. It was not generally known as η Carinae until 1879, when the stars of Argo Navis were finally given the epithets of the daughter constellations in the Uranometria Argentina of Gould. η Carinae is too far south to be part of the mansion-based traditional Chinese astronomy, but it was mapped when the Southern Asterisms were created at the start of the 17th century. Together with s Carinae, λ Centauri and λ Muscae, η Carinae forms the asterism 海山 (Sea and Mountain). η Carinae has the names Tseen She (from the Chinese 天社 [Mandarin: tiānshè] "Heaven's altar") and Foramen. It is also known as 海山二 (Hǎi Shān èr, the Second Star of Sea and Mountain). Halley gave an approximate apparent magnitude of 4 at the time of discovery, which has been calculated as magnitude 3.3 on the modern scale. The handful of possible earlier sightings suggest that η Carinae was not significantly brighter than this for much of the 17th century. Further sporadic observations over the next 70 years show that η Carinae was probably around 3rd magnitude or fainter, until Lacaille reliably recorded it at 2nd magnitude in 1751. It is unclear whether η Carinae varied significantly in brightness over the next 50 years; there are occasional observations such as William Burchell's at 4th magnitude in 1815, but it is uncertain whether these are just re-recordings of earlier observations. Great Eruption In 1827, Burchell specifically noted η Carinae's unusual brightness at 1st magnitude, and was the first to suspect that it varied in brightness. John Herschel, who was in South Africa at the time, made a detailed series of accurate measurements in the 1830s showing that η Carinae consistently shone around magnitude 1.4 until November 1837. On the evening of 16 December 1837, Herschel was astonished to see that it had brightened to slightly outshine Rigel. This event marked the beginning of a roughly 18-year period known as the Great Eruption. η Carinae was brighter still on 27 January 1838, equivalent to Alpha Centauri, before fading slightly over the following three months. Herschel did not observe the star after this, but received correspondence from the Reverend W.S. Mackay in Calcutta, who wrote in 1843, "To my great surprise I observed this March last (1843), that the star η Argus had become a star of the first magnitude fully as bright as Canopus, and in colour and size very like Arcturus." Observations at the Cape of Good Hope indicated it peaked in brightness, surpassing Canopus, from 11 to 14 March 1843, then began to fade, then brightened to between the brightness of Alpha Centauri and Canopus between 24 and 28 March before fading once again. For much of 1844 the brightness was midway between Alpha Centauri and Beta Centauri, around magnitude +0.2, before brightening again at the end of the year. At its brightest in 1843 it likely reached an apparent magnitude of −0.8, then −1.0 in 1845. The peaks in 1827, 1838 and 1843 are likely to have occurred at the periastron passage, the point at which the two stars are closest together, of the binary orbit. From 1845 to 1856, the brightness decreased by around 0.1 magnitudes per year, but with possible rapid and large fluctuations. In their oral traditions, the Boorong clan of the Wergaia people of Lake Tyrrell, north-western Victoria, Australia, told of a reddish star they knew as Collowgullouric War "Old Woman Crow", the wife of War "Crow" (Canopus). In 2010, astronomers Duane Hamacher and David Frew from Macquarie University in Sydney showed that this was η Carinae during its Great Eruption in the 1840s.
From 1857, the brightness decreased rapidly until it faded below naked-eye visibility by 1886. This has been calculated to be due to the condensation of dust in the ejected material surrounding the star, rather than to an intrinsic change in luminosity. Lesser Eruption A new brightening started in 1887, peaked at about magnitude 6.2 in 1892, then at the end of March 1895 faded rapidly to about magnitude 7.5. Although there are only visual records of the 1890 eruption, it has been calculated that η Carinae was suffering 4.3 magnitudes of visual extinction due to the gas and dust ejected in the Great Eruption (the sketch below shows what this implies for the received flux). The unobscured brightness would have been magnitude 1.5–1.9, significantly brighter than the historical magnitude. Despite this, the Lesser Eruption was intrinsically similar to the first one, almost matching its brightness, although far less material was expelled.
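Since each magnitude corresponds to a flux factor of 10^0.4 ≈ 2.512, the 4.3 magnitudes of extinction just mentioned imply a large attenuation. A one-line check:

```python
# Convert visual extinction in magnitudes to a flux attenuation factor:
# F_intrinsic / F_observed = 10 ** (0.4 * A_V)
a_v = 4.3  # magnitudes of extinction estimated for the 1890 eruption
factor = 10 ** (0.4 * a_v)
print(f"A_V = {a_v} mag dims the star by a factor of about {factor:.0f}")  # ~52
```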
Twentieth century Between 1900 and at least 1940, η Carinae appeared to have settled at a constant brightness of around magnitude 7.6, but in 1953 it was noted to have brightened again, to magnitude 6.5. The brightening continued steadily, but with fairly regular variations of a few tenths of a magnitude. In 1996, the variations were first identified as having a 5.52-year period, later measured more accurately at 5.54 years, leading to the idea of a binary system. The binary theory was confirmed by observations of radio, optical and near-infrared radial velocity and line profile changes, referred to collectively as a spectroscopic event, at the predicted time of periastron passage in late 1997 and early 1998. At the same time there was a complete collapse of the X-ray emission presumed to originate in a colliding wind zone. The confirmation of a luminous binary companion greatly modified the understanding of the physical properties of the η Carinae system and its variability. A sudden doubling of brightness was observed in 1998–99, bringing it back to naked-eye visibility. During the 2014 spectroscopic event, the apparent visual magnitude became brighter than 4.5. The brightness does not always vary consistently at different wavelengths, and does not always exactly follow the 5.5-year cycle. Radio, infrared and space-based observations have expanded coverage of η Carinae across all wavelengths and revealed ongoing changes in the spectral energy distribution. In July 2018, η Carinae was reported to have the strongest colliding wind shock in the solar neighbourhood. Observations with the NuSTAR satellite gave much higher-resolution data than the earlier Fermi Gamma-ray Space Telescope. Using direct focussing observations of the non-thermal source in the extremely hard X-ray band that is spatially coincident with the star, they showed that the source of non-thermal X-rays varies with the orbital phase of the binary star system and that the photon index of the emission is similar to that derived through analysis of the γ-ray (gamma) spectrum. Visibility As a fourth-magnitude star, η Carinae is comfortably visible to the naked eye in all but the most light-polluted skies of inner-city areas, according to the Bortle scale. Its brightness has varied over a wide range, from the second-brightest star in the sky for a few days in the 19th century to well below naked-eye visibility. Its location at around 60°S in the far southern celestial hemisphere means it cannot be seen by observers in Europe and much of North America. Located between Canopus and the Southern Cross, η Carinae is easily pinpointed as the brightest star within the large naked-eye Carina Nebula. In a telescope the "star" is framed within the dark "V" dust lane of the nebula and appears distinctly orange and clearly non-stellar. High magnification will show the two orange lobes of a surrounding reflection nebula known as the Homunculus Nebula on either side of a bright central core. Variable star observers can compare its brightness with several 4th- and 5th-magnitude stars closely surrounding the nebula. Discovered in 1961, the weak Eta Carinids meteor shower has a radiant very close to η Carinae. Occurring from 14 to 28 January, the shower peaks around 21 January. Meteor showers are not associated with bodies outside the Solar System, making the proximity to η Carinae merely a coincidence. Visual spectrum The strength and profile of the lines in the η Carinae spectrum are highly variable, but there are a number of consistent distinctive features. The spectrum is dominated by emission lines, usually broad, although the higher-excitation lines are overlaid by a narrow central component from dense ionised nebulosity, especially the Weigelt Blobs. Most lines show a P Cygni profile, but with the absorption wing much weaker than the emission. The broad P Cygni lines are typical of strong stellar winds, with very weak absorption in this case because the central star is so heavily obscured. Electron scattering wings are present but relatively weak, indicating a clumpy wind. Hydrogen lines are present and strong, showing that η Carinae still retains much of its hydrogen envelope. HeI lines are much weaker than the hydrogen lines, and the absence of HeII lines provides an upper limit to the possible temperature of the primary star. NII lines can be identified but are not strong, while carbon lines cannot be detected and oxygen lines are at best very weak, indicating core hydrogen burning via the CNO cycle with some mixing to the surface. Perhaps the most striking feature is the rich FeII emission in both permitted and forbidden lines, with the forbidden lines arising from excitation of low-density nebulosity around the star. The earliest analyses of the star's spectrum are descriptions of visual observations from 1869, of prominent emission lines "C, D, b, F and the principal green nitrogen line". Absorption lines are explicitly described as not being visible. The letters refer to Fraunhofer's spectral notation and correspond to Hα, HeI, FeII, and Hβ. It is assumed that the final line is from FeII very close to the green nebulium line now known to be from OIII. Photographic spectra from 1893 were described as similar to an F5 star, but with a few weak emission lines. Analysis to modern spectral standards suggests an early F spectral type. By 1895 the spectrum again consisted mostly of strong emission lines, with the absorption lines present but largely obscured by emission. This spectral transition from F supergiant to strong emission is characteristic of novae, where ejected material initially radiates like a pseudo-photosphere and then the emission spectrum develops as it expands and thins. The emission line spectrum associated with dense stellar winds has persisted ever since the late 19th century. Individual lines show widely varying widths, profiles and Doppler shifts, often with multiple velocity components within the same line.
The spectral lines also show variation over time, most strongly with a 5.5-year period but also with less dramatic changes over shorter and longer periods, as well as ongoing secular development of the entire spectrum. The spectrum of light reflected from the Weigelt Blobs, and assumed to originate mainly with the primary, is similar to that of an extreme P Cygni-type star with a spectral type of B0Ieq. Direct spectral observations did not begin until after the Great Eruption, but light echoes from the eruption reflected from other parts of the Carina Nebula were detected using the U.S. National Optical Astronomy Observatory's Blanco 4-meter telescope at the Cerro Tololo Inter-American Observatory. Analysis of the reflected spectra indicated that the light was emitted when η Carinae had the appearance of a G2-to-G5 supergiant, some 2,000 K cooler than expected from other supernova impostor events. Further light-echo observations show that, following the peak brightness of the Great Eruption, the spectrum developed prominent P Cygni profiles and CN molecular bands, although this is likely from the ejected material, which may have been colliding with circumstellar material in a similar way to a type IIn supernova. In the second half of the 20th century, much higher-resolution visual spectra became available. The spectrum continued to show complex and baffling features, with much of the energy from the central star being recycled into the infrared by surrounding dust, some reflection of light from the star off dense localised objects in the circumstellar material, but with obvious high-ionisation features indicative of very high temperatures. The line profiles are complex and variable, indicating a number of absorption and emission features at various velocities relative to the central star. The 5.5-year orbital cycle produces strong spectral changes at periastron that are known as spectroscopic events. Certain wavelengths of radiation suffer eclipses, either due to actual occultation by one of the stars or due to passage within opaque portions of the complex stellar winds. Although tied to the regular orbital cycle, these events vary significantly from cycle to cycle. These changes have become stronger since 2003, and it is generally believed that long-term secular changes in the stellar winds or previously ejected material may be the culmination of a return of the star to its state before the Great Eruption. Ultraviolet The ultraviolet spectrum of the η Carinae system shows many emission lines of ionised metals such as FeII and CrII, as well as Lymanα (Lyα) and a continuum from a hot central source. The ionisation levels and the continuum require the existence of a source with a temperature of at least 37,000 K. Certain FeII UV lines are unusually strong. These originate in the Weigelt Blobs and are caused by a low-gain lasing effect. Ionised hydrogen between a blob and the central star generates intense Lyα emission which penetrates the blob. The blob contains atomic hydrogen with a small admixture of other elements, including iron photo-ionised by radiation from the central stars. An accidental resonance (where emission coincidentally has a suitable energy to pump the excited state) allows the Lyα emission to pump the Fe+ ions to certain pseudo-metastable states, creating a population inversion that allows stimulated emission to take place.
This effect is similar to the maser emission from dense pockets surrounding many cool supergiant stars, but the latter effect is much weaker at optical and UV wavelengths, and η Carinae is the only clear instance detected of an ultraviolet astrophysical laser. A similar effect from the pumping of metastable OI states by Lyβ emission has also been confirmed as an astrophysical UV laser. Infrared Infrared observations of η Carinae have become increasingly important. The vast majority of the electromagnetic radiation from the central stars is absorbed by surrounding dust, then emitted as mid- and far-infrared radiation appropriate to the temperature of the dust. This allows almost the entire energy output of the system to be observed at wavelengths that are not strongly affected by interstellar extinction, leading to estimates of the luminosity that are more accurate than for other extremely luminous stars. η Carinae is the brightest source in the night sky at mid-infrared wavelengths. Far-infrared observations show a large mass of dust at 100–150 K, suggesting a total mass for the Homunculus of 20 solar masses or more. This is much larger than previous estimates, and is all thought to have been ejected in a few years during the Great Eruption. Near-infrared observations can penetrate the dust at high resolution to observe features that are completely obscured at visual wavelengths, although not the central stars themselves. The central region of the Homunculus contains a smaller Little Homunculus from the 1890 eruption, a butterfly of separate clumps and filaments from the two eruptions, and an elongated stellar wind region. High energy radiation Several X-ray and gamma-ray sources have been detected around η Carinae, for example 4U 1037–60 in the 4th Uhuru catalogue and 1044–59 in the HEAO-2 catalogue. The earliest detection of X-rays in the η Carinae region was from the Terrier-Sandhawk rocket, followed by Ariel 5, OSO 8, and Uhuru sightings. More detailed observations were made with the Einstein Observatory, the ROSAT X-ray telescope, the Advanced Satellite for Cosmology and Astrophysics (ASCA), and the Chandra X-ray Observatory. There are multiple sources at various wavelengths right across the high-energy electromagnetic spectrum: hard X-rays and gamma rays within 1 light-month of η Carinae; hard X-rays from a central region about 3 light-months wide; a distinct partial ring "horse-shoe" structure in low-energy X-rays, 0.67 parsec (2.2 light-years) across, corresponding to the main shock front from the Great Eruption; diffuse X-ray emission across the whole area of the Homunculus; and numerous condensations and arcs outside the main ring. All the high-energy emission associated with η Carinae varies during the orbital cycle. A spectroscopic minimum, or X-ray eclipse, occurred in July and August 2003, and similar events in 2009 and 2014 have been intensively observed. The highest-energy gamma rays, above 100 MeV, detected by AGILE show strong variability, while lower-energy gamma rays observed by Fermi show little variability. Radio emission Radio emissions have been observed from η Carinae across the microwave band. It has been detected in the 21 cm HI line, but has been particularly closely studied in the millimetre and centimetre bands. Masing hydrogen recombination lines (from the combining of an electron and a proton to form a hydrogen atom) have been detected in this range.
The emission is concentrated in a small non-point source less than 4 arcseconds across and appears to be mainly free-free emission (thermal bremsstrahlung) from ionised gas, consistent with a compact HII region at around 10,000 K. High-resolution imaging shows the radio frequencies originating from a disk a few arcseconds in diameter, 10,000 astronomical units (AU) wide at the distance of η Carinae (see the size-conversion sketch below). The radio emission from η Carinae shows continuous variation in strength and distribution over the 5.5-year cycle. The HII emission and recombination lines vary very strongly, with the continuum emission (electromagnetic radiation across a broad band of wavelengths) less affected. This shows a dramatic reduction in the ionisation level of the hydrogen for a short period in each cycle, coinciding with the spectroscopic events at other wavelengths. Surroundings η Carinae is found within the Carina Nebula, a giant star-forming region in the Carina–Sagittarius Arm of the Milky Way. The nebula is a prominent naked-eye object in the southern skies showing a complex mix of emission, reflection and dark nebulosity. η Carinae is known to be at the same distance as the Carina Nebula, and its spectrum can be seen reflected off various star clouds in the nebula. The appearance of the Carina Nebula, and particularly of the Keyhole region, has changed significantly since John Herschel described it in the 1830s. This is thought to be due to the reduction in ionising radiation from Eta Carinae since the Great Eruption. Prior to the Great Eruption the η Carinae system contributed up to 20% of the total ionising flux for the whole Carina Nebula, but that is now mostly blocked by the surrounding gas and dust. Trumpler 16 η Carinae lies within the scattered stars of the Trumpler 16 open cluster. All the other members are well below naked-eye visibility, although WR 25 is another extremely massive luminous star. Trumpler 16 and its neighbour Trumpler 14 are the two dominant star clusters of the Carina OB1 association, an extended grouping of young luminous stars with a common motion through space. Homunculus η Carinae is enclosed by, and lights up, the Homunculus Nebula, a small emission and reflection nebula composed mainly of gas ejected during the Great Eruption event in the mid-19th century, as well as dust that condensed from the debris. The nebula consists of two polar lobes aligned with the rotation axis of the star, plus an equatorial "skirt", the whole structure being less than a light-year long. Closer studies show many fine details: a Little Homunculus within the main nebula, probably formed by the 1890 eruption; a jet; fine streams and knots of material, especially noticeable in the skirt region; and three Weigelt Blobs, dense gas condensations very close to the star itself. The lobes of the Homunculus are considered to have been formed almost entirely by the initial eruption, rather than shaped by or including previously ejected or interstellar material, although the scarcity of material near the equatorial plane allows some later stellar wind and ejected material to mix in. Therefore, the mass of the lobes gives an accurate measure of the scale of the Great Eruption, with mass estimates reaching several tens of solar masses. The results show that the material from the Great Eruption is strongly concentrated towards the poles; 75% of the mass and 90% of the kinetic energy were released above latitude 45°.
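Angular-to-physical size conversions such as the radio disk figure above use the small-angle relation: a feature of angular size θ arcseconds at a distance of d parsecs spans θ × d astronomical units. A sketch (the angular diameter used is an assumed round value for illustration):

```python
# Small-angle relation: physical size in AU = angular size (arcsec) * distance (pc)
distance_pc = 2300.0        # approximate distance of eta Carinae
angular_size_arcsec = 4.3   # assumed angular diameter of the radio disk

size_au = angular_size_arcsec * distance_pc
print(f"{angular_size_arcsec} arcsec at {distance_pc:.0f} pc = {size_au:,.0f} AU")
# ~10,000 AU, matching the scale quoted in the text
```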
A unique feature of the Homunculus is the ability to measure the spectrum of the central object at different latitudes via the reflected spectrum from different portions of the lobes. These clearly show a polar wind, where the stellar wind is faster and stronger at high latitudes, thought to be due to rapid rotation causing gravity brightening towards the poles. In contrast, the spectrum shows a higher excitation temperature closer to the equatorial plane. By implication, the outer envelope of η Carinae A is not strongly convective, as that would prevent the gravity darkening. The current axis of rotation of the star does not appear to exactly match the alignment of the Homunculus. This may be due to interaction with η Carinae B, which also modifies the observed stellar winds. Distance The distance to η Carinae has been determined by several different methods, resulting in a widely accepted value of around 7,500 light-years (about 2,300 parsecs). The distance to η Carinae itself cannot be measured using parallax, due to its surrounding nebulosity, but other stars in the Trumpler 16 cluster are expected to be at a similar distance and are accessible to parallax (the conversion from parallax to distance is sketched below). Gaia Data Release 2 has provided parallaxes for many stars considered to be members of Trumpler 16, finding that the four hottest O-class stars in the region have very similar parallaxes, with a mean value that translates to a distance somewhat greater than the accepted figure. This implies that η Carinae may be more distant than previously thought, and also more luminous, although it is still possible that it is not at the same distance as the cluster or that the parallax measurements have large systematic errors. The distances to star clusters can be estimated by using a Hertzsprung–Russell diagram or colour–colour diagram to calibrate the absolute magnitudes of the stars, for example by fitting the main sequence or identifying features such as a horizontal branch, and hence their distance from Earth. It is also necessary to know the amount of interstellar extinction to the cluster, and this can be difficult in regions such as the Carina Nebula. Distances consistent with the accepted value have been determined from the calibration of O-type star luminosities in Trumpler 16 and, after determining an abnormal reddening correction to the extinction, from measurements of both Trumpler 14 and Trumpler 16. The known expansion rate of the Homunculus Nebula provides an unusual geometric method for measuring its distance. Assuming that the two lobes of the nebula are symmetrical, the projection of the nebula onto the sky depends on its distance. Values of around 2,250–2,300 parsecs have been derived for the Homunculus, and η Carinae is clearly at the same distance.
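The parallax-to-distance step underlying the Gaia measurements is a simple reciprocal: the distance in parsecs equals 1000 divided by the parallax in milliarcseconds. The parallax below is a hypothetical round number chosen to illustrate the scale, not the published Gaia value:

```python
def parallax_mas_to_distance(p_mas):
    """Convert a parallax in milliarcseconds to parsecs and light-years."""
    d_pc = 1000.0 / p_mas
    return d_pc, d_pc * 3.2616  # 1 parsec = 3.2616 light-years

# Hypothetical parallax of 0.43 mas, roughly what a ~7,500 ly distance implies:
d_pc, d_ly = parallax_mas_to_distance(0.43)
print(f"0.43 mas -> {d_pc:.0f} pc = {d_ly:.0f} light-years")  # ~2,326 pc, ~7,600 ly
```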
Properties The η Carinae star system is currently one of the most massive stars that can be studied in great detail. Until recently η Carinae was thought to be the most massive single star, but the system's binary nature was proposed by the Brazilian astronomer Augusto Damineli in 1996 and confirmed in 2005. Both component stars are largely obscured by circumstellar material ejected from η Carinae A, and basic properties such as their temperatures and luminosities can only be inferred. Rapid changes to the stellar wind in the 21st century suggest that the star itself may be revealed when dust from the Great Eruption finally clears. Orbit The binary nature of η Carinae is clearly established, although the components have not been directly observed and cannot even be clearly resolved spectroscopically, due to scattering and re-excitation in the surrounding nebulosity. Periodic photometric and spectroscopic variations prompted the search for a companion, and modelling of the colliding winds and partial "eclipses" of some spectroscopic features have constrained the possible orbits. The period of the orbit is accurately known, at 5.539 years, although it has changed over time due to mass loss and accretion. Between the Great Eruption and the smaller 1890 eruption, the orbital period was apparently 5.52 years, while before the Great Eruption it may have been lower still, possibly between 4.8 and 5.4 years. The orbital separation is only known approximately. The orbit is highly eccentric, with an eccentricity of around 0.9, meaning that the separation of the stars varies from roughly the distance of Mars from the Sun to around 30 AU, similar to the distance of Neptune. Perhaps the most valuable use of an accurate orbit for a binary star system is to directly calculate the masses of the stars (see the sketch below). This requires the dimensions and inclination of the orbit to be accurately known. The dimensions of η Carinae's orbit are only known approximately, as the stars cannot be directly and separately observed. The inclination has been modelled at 130–145 degrees, but the orbit is still not known accurately enough to provide the masses of the two components.
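Kepler's third law shows why the orbital dimensions matter for the masses: in units of astronomical units, years, and solar masses, M₁ + M₂ = a³/P². A sketch using the known period and an assumed semi-major axis of about 15.5 AU (a value consistent with the Mars-to-Neptune separation range given above, but not stated in this text):

```python
# Kepler's third law in solar units: (M1 + M2) [M_sun] = a^3 [AU] / P^2 [yr]
a_au = 15.5       # assumed semi-major axis in AU (illustrative value)
period_yr = 5.54  # orbital period from the spectroscopic events

total_mass = a_au ** 3 / period_yr ** 2
print(f"Total system mass ~ {total_mass:.0f} solar masses")  # ~120 M_sun
```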
Classification η Carinae A is classified as a luminous blue variable (LBV) due to its distinctive spectral and brightness variations. This type of variable star is characterised by irregular changes from a high-temperature quiescent state to a low-temperature outburst state at roughly constant luminosity. LBVs in the quiescent state lie on a narrow instability strip, with more luminous stars being hotter. In outburst all LBVs have about the same temperature, near 8,000 K. LBVs in a normal outburst are visually brighter than when quiescent, although the bolometric luminosity is unchanged. An event similar to η Carinae A's Great Eruption has been observed in only one other star in the Milky Way, P Cygni, and in a handful of other possible LBVs in other galaxies. None of them seem to have been quite as violent as η Carinae's. It is unclear whether this is something that only a very few of the most massive LBVs undergo, something caused by a close companion star, or a very brief but common phase for massive stars. Some similar events in external galaxies have been mistaken for supernovae and have been called supernova impostors, although this grouping may also include other types of non-terminal transients that approach the brightness of a supernova. η Carinae A is not a typical LBV. It is more luminous than any other LBV in the Milky Way, although possibly comparable to other supernova impostors detected in external galaxies. It does not currently lie on the S Doradus instability strip, although it is unclear what the temperature or spectral type of the underlying star actually is, and during its Great Eruption it was much cooler than a typical LBV outburst, with a middle-G spectral type. The 1890 eruption may have been fairly typical of LBV eruptions, with an early F spectral type, and it has been estimated that the star may currently have an opaque stellar wind forming a pseudo-photosphere with a temperature of around 9,000–10,000 K. η Carinae B is a massive luminous hot star, about which little else is known. From certain high-excitation spectral lines that ought not to be produced by the primary, η Carinae B is thought to be a young O-type star. Most authors suggest it is a somewhat evolved star such as a supergiant or giant, although a Wolf–Rayet star cannot be ruled out. Mass The masses of stars are difficult to measure except by determination of a binary orbit. η Carinae is a binary system, but certain key information about the orbit is not known accurately. The mass of the primary is strongly constrained to a very high value by its luminosity alone, and standard models of the system assume masses of around 90 and 30 solar masses for the primary and secondary, respectively. Higher masses have been suggested, to model the energy output and mass transfer of the Great Eruption, with a combined system mass of well over 200 solar masses before the Great Eruption, based on a mass-transfer model of the event. η Carinae A has clearly lost a great deal of mass since it formed, and it is thought to have been initially around 150 solar masses or more, although it may have formed through a binary merger. Mass loss Mass loss is one of the most intensively studied aspects of massive star research. Put simply, the calculated mass loss rates in the best models of stellar evolution do not reproduce the observed properties of evolved massive stars such as Wolf–Rayet stars, the number and types of core-collapse supernovae, or their progenitors. To match those observations, the models require much higher mass loss rates. η Carinae A has one of the highest known mass loss rates, currently around 10⁻³ solar masses per year, and is an obvious candidate for study. η Carinae A is losing a great deal of mass because of its extreme luminosity and relatively low surface gravity. Its stellar wind is entirely opaque and appears as a pseudo-photosphere; this optically dense surface hides any true physical surface of the star that may be present. (At extreme rates of radiative mass loss, the density gradient of the lofted material may become continuous enough that a meaningfully discrete physical surface does not exist.) During the Great Eruption the mass loss rate was a thousand times higher, around 1 solar mass per year, sustained for ten years or more. The total mass loss during the eruption was at least 20 solar masses, much of it now forming the Homunculus Nebula. The smaller 1890 eruption produced the Little Homunculus Nebula, much smaller and containing far less material. The bulk of the mass loss occurs in a wind with a terminal velocity of about 420 km/s, but some material is seen at higher velocities, up to 3,200 km/s, possibly material blown from the accretion disk by the secondary star. η Carinae B is presumably also losing mass via a thin, fast stellar wind, but this cannot be detected directly. Models of the radiation observed from interactions between the winds of the two stars show a mass loss rate of the order of 10⁻⁵ solar masses per year at speeds of 3,000 km/s, typical of a hot O-class star. For a portion of the highly eccentric orbit, it may actually gain material from the primary via an accretion disk. During the Great Eruption of the primary, the secondary could have accreted a substantial amount of material, producing the strong jets which formed the bipolar shape of the Homunculus Nebula. Luminosity The stars of the η Carinae system are completely obscured by dust and opaque stellar winds, with much of the ultraviolet and visual radiation shifted to the infrared. The total electromagnetic radiation across all wavelengths for both stars combined is several million solar luminosities.
The best estimate for the luminosity of the primary is several million times that of the Sun, making it one of the most luminous stars in the Milky Way. The luminosity of η Carinae B is particularly uncertain, probably several hundred thousand solar luminosities and almost certainly no more than a million. The most notable feature of η Carinae is its giant eruption, or supernova impostor event, which originated in the primary star and was observed around 1843. In a few years, it produced almost as much visible light as a faint supernova explosion, but the star survived. It is estimated that at peak brightness the luminosity was as high as tens of millions of solar luminosities. Other supernova impostors have been seen in other galaxies, for example the possible false supernova SN 1961V in NGC 1058 and SN 2006jc's pre-explosion outburst in UGC 4904. Following the Great Eruption, η Carinae became self-obscured by the ejected material, resulting in dramatic reddening. This has been estimated at four magnitudes at visual wavelengths, meaning the post-eruption apparent brightness was comparable to the brightness when the star was first identified. η Carinae is still much brighter at infrared wavelengths, despite the presumed hot stars behind the nebulosity. The recent visual brightening is considered to be largely caused by a decrease in the extinction, due to thinning dust or a reduction in mass loss, rather than by an underlying change in the luminosity. Temperature Until late in the 20th century, the temperature of η Carinae was assumed to be over 30,000 K because of the presence of high-excitation spectral lines, but other aspects of the spectrum suggested much lower temperatures, and complex models were created to account for this. It is now known that the η Carinae system consists of at least two stars, both with strong stellar winds and a shocked colliding wind (wind-wind collision or WWC) zone, embedded within a dusty nebula that reprocesses 90% of the electromagnetic radiation into the mid- and far-infrared. All of these features have different temperatures. The powerful stellar winds from the two stars collide in a roughly conical WWC zone and produce temperatures of many millions of kelvin at the apex between the two stars. This zone is the source of the hard X-rays and gamma rays close to the stars. Near periastron, as the secondary ploughs through ever denser regions of the primary wind, the colliding wind zone becomes distorted into a spiral trailing behind η Carinae B. The wind-wind collision cone separates the winds of the two stars. For 55–75° behind the secondary, there is a thin hot wind typical of O or Wolf–Rayet stars. This allows some radiation from η Carinae B to be detected, and its temperature can be estimated with some accuracy thanks to spectral lines that are unlikely to be produced by any other source. Although the secondary star has never been directly observed, there is widespread agreement on models in which it has a temperature between 37,000 K and 41,000 K. In all other directions, on the other side of the wind-wind collision zone, there is the wind from η Carinae A, cooler and around 100 times denser than η Carinae B's wind. It is also optically dense, completely obscuring anything resembling a true photosphere and rendering any definition of its temperature moot. The observable radiation originates from a pseudo-photosphere where the optical density of the wind drops to near zero, typically measured at a particular Rosseland opacity value such as 2/3. This pseudo-photosphere is observed to be elongated and hotter along the presumed axis of rotation.
η Carinae A is likely to have appeared as an early B hypergiant with a temperature of between 20,000 K and 25,000 K at the time of its discovery by Halley. An effective temperature determined for the surface of a spherical, optically thick wind would be 9,400–15,000 K, while the temperature of a theoretical hydrostatic "core" at optical depth 150 would be 35,200 K. The effective temperature of the visible outer edge of the opaque primary wind is generally treated as being 15,000–25,000 K, on the basis of visual and ultraviolet spectral features assumed to come directly from the wind or to be reflected via the Weigelt Blobs. During the Great Eruption, η Carinae A was much cooler, at around 5,000 K. The Homunculus contains dust at temperatures varying from 150 K to 400 K. This is the source of almost all the infrared radiation that makes η Carinae such a bright object at those wavelengths. Further out, expanding gases from the Great Eruption collide with interstellar material and are heated to millions of kelvin, producing the less energetic X-rays seen in a horseshoe or ring shape. Size The size of the two main stars in the η Carinae system is difficult to determine precisely, for neither star can be seen directly. η Carinae B is likely to have a well-defined photosphere, and its radius can be estimated from the assumed type of star: an O supergiant with a temperature of 37,200 K and a luminosity of several hundred thousand Suns would have an effective radius of roughly 15–25 solar radii (see the sketch below). The size of η Carinae A is not even well defined. It has an optically dense stellar wind, so the typical definition of a star's surface, as approximately the place where it becomes opaque, gives a very different result from a more traditional definition of a surface. One study calculated a radius of around 60 solar radii for a hot "core" of 35,000 K at optical depth 150, near the sonic point, or very approximately what might be called a physical surface. At optical depth 0.67 the radius would be several hundred solar radii, indicating an extended, optically thick stellar wind. At the peak of the Great Eruption the radius, so far as such a thing is meaningful during such a violent expulsion of material, would have been around 1,400 solar radii, comparable to the largest known red supergiants, including VY Canis Majoris. The stellar sizes should be compared with their orbital separation, which is only around the Mars–Sun distance at periastron. The accretion radius of the secondary is large enough that strong accretion is expected near periastron, leading to a collapse of the secondary wind.
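Radius estimates like these follow from the Stefan–Boltzmann law, L = 4πR²σT⁴, which in solar units reduces to R/R☉ = √(L/L☉) × (T☉/T)². A sketch for the secondary, using the temperature from the text and a placeholder luminosity of 4 × 10⁵ L☉ (the luminosity is an assumption for illustration):

```python
T_SUN = 5772.0  # solar effective temperature, K

def radius_rsun(luminosity_lsun, temp_k):
    """Stefan-Boltzmann radius in solar radii: R = sqrt(L) * (T_sun / T)^2."""
    return luminosity_lsun ** 0.5 * (T_SUN / temp_k) ** 2

# eta Carinae B modelled as an O supergiant (luminosity is an assumed value):
print(f"R ~ {radius_rsun(4e5, 37200):.0f} solar radii")  # ~15 R_sun
```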
It has been proposed that the initial brightening from 4th magnitude to 1st at relatively constant bolometric luminosity was a normal LBV outburst, albeit an extreme example of the class. Then the companion star, passing through the expanded photosphere of the primary at periastron, triggered the further brightening, the increase in luminosity, and the extreme mass loss of the Great Eruption. Rotation Rotation rates of massive stars have a critical influence on their evolution and eventual death. The rotation rates of the η Carinae stars cannot be measured directly because their surfaces cannot be seen. Single massive stars spin down quickly due to braking from their strong winds, but there are hints that both η Carinae A and B are fast rotators, at up to 90% of critical velocity. One or both could have been spun up by binary interaction, for example by accretion onto the secondary and orbital dragging on the primary. Eruptions Two eruptions have been observed from η Carinae: the Great Eruption of the mid-19th century and the Lesser Eruption of 1890. In addition, studies of outlying nebulosity suggest at least one earlier eruption, around 1250 AD. A further eruption may have occurred around 1550 AD, although it is possible that the material indicating this eruption is actually from the Great Eruption, slowed by collision with older nebulosity. The mechanism producing these eruptions is unknown. It is not even clear whether the eruptions involve explosive events or so-called super-Eddington winds, an extreme form of stellar wind involving very high mass loss induced by an increase in the luminosity of the star. The energy source for the explosions or the luminosity increase is also unknown. Theories about the various eruptions must account for: repeating events, with at least three eruptions of various sizes; the ejection of 20 solar masses or more without destroying the star; the highly unusual shape and expansion rates of the ejected material; and the light curve during the eruptions, involving brightness increases of several magnitudes over a period of decades. The best-studied event is the Great Eruption. As well as the photometry made during the 19th century, light echoes observed in the 21st century give further information about the progression of the eruption, showing a brightening with multiple peaks for approximately 20 years, followed by a plateau period in the 1850s. The light echoes show that the outflow of material during the plateau phase was much higher than before the peak of the eruption. Possible explanations for the eruptions include a binary merger in what was then a triple system, mass transfer from η Carinae B during periastron passages, or a pulsational pair-instability explosion. Evolution η Carinae is a unique object, with no very close analogues currently known in any galaxy. Therefore, its future evolution is highly uncertain, but it almost certainly involves further mass loss and an eventual supernova. η Carinae A would have begun life as an extremely hot star on the main sequence, already a highly luminous object of over a million solar luminosities. The exact properties would depend on the initial mass, which is expected to have been extremely high, possibly well over 100 solar masses. A typical spectrum when first formed would be O2If, and the star would be mostly or fully convective due to CNO-cycle fusion at the very high core temperatures. Sufficiently massive or differentially rotating stars undergo such strong mixing that they remain chemically homogeneous during core hydrogen burning. As core hydrogen burning progresses, a very massive star would slowly expand and become more luminous, becoming a blue hypergiant and eventually an LBV while still fusing hydrogen in the core. When hydrogen at the core is depleted, after 2–2.5 million years, hydrogen shell burning continues with further increases in size and luminosity, although hydrogen shell burning in chemically homogeneous stars may be very brief or absent, since the entire star would become depleted of hydrogen. In the late stages of hydrogen burning, mass loss is extremely high due to the high luminosity and enhanced surface abundances of helium and nitrogen. As hydrogen burning ends and core helium burning begins, massive stars transition very rapidly to the Wolf–Rayet stage with little or no hydrogen, increased temperatures and decreased luminosity. They are likely to have lost over half their initial mass at this point. It is unclear whether triple-alpha helium fusion has started at the core of η Carinae A.
The elemental abundances at the surface cannot be accurately measured, but ejecta within the Homunculus are around 60% hydrogen and 40% helium, with nitrogen enhanced to ten times solar levels. This is indicative of ongoing CNO-cycle hydrogen fusion. Models of the evolution and death of single very massive stars predict an increase in temperature during helium core burning, with the outer layers of the star being lost. The star becomes a Wolf–Rayet star on the nitrogen sequence, moving from WNL to WNE as more of the outer layers are lost, possibly reaching the WC or WO spectral class as carbon and oxygen from the triple-alpha process reach the surface. This process would continue with heavier elements being fused until an iron core develops, at which point the core collapses and the star is destroyed. Subtle differences in the initial conditions, in the models themselves, and most especially in the rates of mass loss produce different predictions for the final state of the most massive stars. They may survive to become a helium-stripped star, or they may collapse at an earlier stage while they retain more of their outer layers. The lack of sufficiently luminous WN stars and the discovery of apparent LBV supernova progenitors have also prompted the suggestion that certain types of LBVs explode as supernovae without evolving further. η Carinae is a close binary, and this complicates the evolution of both stars. Compact massive companions can strip mass from larger primary stars much more quickly than would occur in a single star, so the properties at core collapse can be very different. In some scenarios, the secondary can accrete significant mass, accelerating its evolution, and in turn be stripped by the now compact Wolf–Rayet primary. In the case of η Carinae, the secondary is clearly causing additional instability in the primary, making it difficult to predict future developments. Potential supernova The overwhelming probability is that the next supernova observed in the Milky Way will originate from an unknown white dwarf or an anonymous red supergiant, very likely not even visible to the naked eye. Nevertheless, the prospect of a supernova originating from an object as extreme, nearby, and well-studied as η Carinae arouses great interest. As a single star, a star originally around 150 times as massive as the Sun would typically reach core collapse as a Wolf–Rayet star within 3 million years. At low metallicity, many massive stars will collapse directly to a black hole with no visible explosion, or with a sub-luminous supernova, and a small fraction will produce a pair-instability supernova, but at solar metallicity and above there is expected to be sufficient mass loss before collapse to allow a visible supernova of type Ib or Ic. If there is still a large amount of expelled material close to the star, the shock formed by the supernova explosion impacting the circumstellar material can efficiently convert kinetic energy to radiation, resulting in a superluminous supernova (SLSN) or hypernova, several times more luminous than a typical core-collapse supernova and much longer-lasting. Highly massive progenitors may also eject sufficient nickel to cause an SLSN simply from the radioactive decay. The resulting remnant would be a black hole, for it is highly unlikely that such a massive star could ever lose sufficient mass for its core not to exceed the limit for a neutron star. The existence of a massive companion brings many other possibilities.
If η Carinae A were rapidly stripped of its outer layers, it might be a less massive WC- or WO-type star when core collapse was reached. This would result in a type Ib or type Ic supernova, due to the lack of hydrogen and possibly helium. This supernova type is thought to be the originator of certain classes of gamma-ray bursts, but models predict that they normally occur only in less massive stars. Several unusual supernovae and impostors have been compared to η Carinae as examples of its possible fate. One of the most compelling is SN 2009ip, a blue supergiant which underwent a supernova impostor event in 2009 with similarities to η Carinae's Great Eruption, then an even brighter outburst in 2012 which is likely to have been a true supernova. SN 2006jc, some 77 million light-years away in UGC 4904, in the constellation Lynx, also underwent a supernova impostor brightening in 2004, followed by a magnitude 13.8 type Ib supernova, first seen on 9 October 2006. η Carinae has also been compared to other possible supernova impostors such as SN 1961V and iPTF14hls, and to superluminous supernovae such as SN 2006gy. Possible effects on Earth A typical core-collapse supernova at the distance of η Carinae would peak at an apparent magnitude around −4, similar to Venus (a figure that can be checked with the distance modulus; see the sketch below). An SLSN could be five magnitudes brighter, potentially making it the brightest supernova in recorded history (a record currently held by SN 1006). At 7,500 light-years away, the supernova is unlikely to directly affect terrestrial lifeforms, as they would be protected from gamma rays by the atmosphere and from some other cosmic rays by the magnetosphere. The main damage would be restricted to the upper atmosphere, the ozone layer, and spacecraft, including satellites, as well as any astronauts in space. At least one paper has projected that complete loss of the Earth's ozone layer is a plausible consequence of a nearby supernova, which would result in a significant increase in the UV radiation reaching Earth's surface from the Sun; but this would require a typical supernova to be closer than 50 light-years from Earth, and even a potential hypernova would need to be closer than η Carinae. Another analysis of the possible impact discusses more subtle effects from the unusual illumination, such as possible melatonin suppression, with resulting insomnia and increased risk of cancer and depression. It concludes that a supernova of this magnitude would have to be much closer than η Carinae to have any type of major impact on Earth. η Carinae is not expected to produce a gamma-ray burst, and its axis is not currently aimed near Earth. The Earth's atmosphere is opaque to gamma rays, which is why they must be observed using space telescopes, so the atmosphere would protect its inhabitants from all of the direct radiation apart from UV light; the main effect would instead come from damage to the ozone layer. In any case, η Carinae is too far away to destroy the ozone layer, even if it did produce a gamma-ray burst.
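The peak apparent magnitude quoted above can be checked with the distance modulus, m = M + 5 log₁₀(d / 10 pc). The absolute magnitude below is a typical core-collapse value assumed for illustration:

```python
import math

def apparent_magnitude(abs_mag, distance_pc):
    """Distance modulus: m = M + 5 * log10(d / 10 pc)."""
    return abs_mag + 5 * math.log10(distance_pc / 10.0)

# Typical core-collapse supernova (M ~ -16, an assumed value) at ~2,300 pc:
m = apparent_magnitude(-16.0, 2300.0)
print(f"Peak apparent magnitude ~ {m:.1f}")  # ~ -4.2, similar to Venus
```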
Physical sciences
Notable stars
Astronomy
92389
https://en.wikipedia.org/wiki/Zeolite
Zeolite
Zeolite is a family of several microporous, crystalline aluminosilicate materials commonly used as commercial adsorbents and catalysts. They mainly consist of silicon, aluminium and oxygen, and have the general formula Mn+1/n(AlO2)−(SiO2)x·yH2O, where Mn+ is either a metal ion or H+. The term was originally coined in 1756 by Swedish mineralogist Axel Fredrik Cronstedt, who observed that rapidly heating a material, believed to have been stilbite, produced large amounts of steam from water that had been adsorbed by the material. Based on this, he called the material zeolite, from the Greek ζέω (zéō), meaning "to boil", and λίθος (líthos), meaning "stone". Zeolites occur naturally, but are also produced industrially on a large scale. To date, 253 unique zeolite frameworks have been identified, and over 40 naturally occurring zeolite frameworks are known. Every new zeolite structure that is obtained is examined by the International Zeolite Association Structure Commission (IZA-SC) and receives a three-letter designation. Characteristics Properties Zeolites are white solids with ordinary handling properties, like many routine aluminosilicate minerals, e.g. feldspar. They have the general formula M+(AlO2)−(SiO2)x, where M+ is usually H+ or Na+. The Si/Al ratio is variable, which provides a means to tune the properties. Zeolites with Si/Al ratios higher than about 3 are classified as high-silica zeolites, which tend to be more hydrophobic. The H+ and Na+ can be replaced by diverse cations, because zeolites have ion-exchange properties. The nature of the cations influences the porosity of zeolites. Zeolites have microporous structures with typical pore diameters of 0.3–0.8 nm. Like most aluminosilicates, the framework is formed by linking aluminium and silicon atoms through shared oxygen atoms. This linking leads to a three-dimensional network of Si-O-Al, Si-O-Si, and Al-O-Al linkages. The aluminium centres are negatively charged, which requires an accompanying cation. These cations are hydrated during the formation of the materials. The hydrated cations interrupt the otherwise dense network of Si-O-Al, Si-O-Si, and Al-O-Al linkages, leading to regular water-filled cavities. Because of the porosity of the zeolite, the water can exit the material through channels, and because of the rigidity of the zeolite framework, the loss of water does not result in collapse of the cavities and channels. This aspect, the ability to generate voids within the solid material, underpins the ability of zeolites to function as catalysts. They possess high physical and chemical stability due to the large covalent bonding contribution. High-silica zeolites have excellent hydrophobicity and are suited to the adsorption of bulky, hydrophobic molecules such as hydrocarbons. In addition, high-silica zeolites can be exchanged into the acidic (H+) form, unlike natural zeolites, and so can be used as solid acid catalysts. The acidity is strong enough to protonate hydrocarbons, and high-silica zeolites are used in acid-catalysed processes such as fluid catalytic cracking in the petrochemical industry. Framework structure The structures of hundreds of zeolites have been determined. Most do not occur naturally. For each structure, the International Zeolite Association (IZA) gives a three-letter code called the framework type code (FTC). For example, the major molecular sieves, 3A, 4A and 5A, are all LTA (Linde Type A). Most commercially available natural zeolites are of the MOR, HEU or ANA types. The ring structures of zeolites and other silicate materials can be drawn in several equivalent notations.
In a common structural-formula notation, each silicon atom sits at the center of an SiO4 tetrahedron. Connecting four such tetrahedra through shared oxygen atoms creates a four-membered ring of oxygen; such a ring substructure is called a four-membered ring or simply a four-ring. The topology of the framework is most commonly expressed by drawing the Si (or Al) atoms of a ring as connected directly to each other, with the bridging oxygens omitted. Comparing the typical framework structures of LTA and FAU illustrates how such rings assemble. Both zeolites share the truncated-octahedral building unit known as the sodalite cage. However, the cages are connected differently: in LTA, the four-membered rings of the cages are connected to each other to form the framework, while in FAU, the six-membered rings are connected to each other. As a result, the pore entrance of LTA is an 8-ring (0.41 nm), placing it among the small-pore zeolites, while the pore entrance of FAU is a 12-ring (0.74 nm), placing it among the large-pore zeolites. Materials with a 10-ring entrance are called medium-pore zeolites, a typical example being ZSM-5 (MFI). Although more than 200 types of zeolites are known, only about 100 aluminosilicate types are available. In addition, there are only a few types that can be synthesized in an industrially feasible way and have sufficient thermal stability to meet the requirements for industrial use. In particular, the FAU (faujasite, USY), *BEA (beta), MOR (high-silica mordenite), MFI (ZSM-5), and FER (high-silica ferrierite) types are called the big five of high-silica zeolites, and industrial production methods have been established for them. Porosity The term molecular sieve refers to a particular property of these materials, i.e., the ability to selectively sort molecules based primarily on a size exclusion process. This is due to a very regular pore structure of molecular dimensions. The maximum size of the molecular or ionic species that can enter the pores of a zeolite is controlled by the dimensions of the channels. These are conventionally defined by the ring size of the aperture, where, for example, the term "eight-ring" refers to a closed loop that is built from eight tetrahedrally coordinated silicon (or aluminium) atoms and eight oxygen atoms. These rings are not always perfectly symmetrical, due to a variety of causes, including strain induced by the bonding between units that are needed to produce the overall structure, or coordination of some of the oxygen atoms of the rings to cations within the structure. Therefore, the pores in many zeolites are not cylindrical. Isomorphous substitution Isomorphous substitution of Si in zeolites is possible for some heteroatoms such as titanium, zinc and germanium. Al atoms in zeolites can also be structurally replaced with boron and gallium. Frameworks based on other elements are also known, such as the silicoaluminophosphate (SAPO) molecular sieves, derived from the aluminophosphate (AlPO) frameworks by partial substitution with Si, and the gallogermanates. Natural occurrence Some of the more common mineral zeolites are analcime, chabazite, clinoptilolite, heulandite, natrolite, phillipsite, and stilbite. An example of the mineral formula of a zeolite is Na2Al2Si3O10·2H2O, the formula for natrolite. Natural zeolites form where volcanic rocks and ash layers react with alkaline groundwater. Zeolites also crystallize in post-depositional environments over periods ranging from thousands to millions of years in shallow marine basins.
Naturally occurring zeolites are rarely pure and are contaminated to varying degrees by other minerals, metals, quartz, or other zeolites. For this reason, naturally occurring zeolites are excluded from many important commercial applications where uniformity and purity are essential. Zeolites transform to other minerals under weathering, hydrothermal alteration or metamorphic conditions. Some examples: The alteration sequence in silica-rich volcanic rocks commonly progresses: clay → quartz → mordenite–heulandite → epistilbite → stilbite → thomsonite → mesolite → scolecite → chabazite → calcite. The alteration sequence in silica-poor volcanic rocks commonly progresses: cowlesite → levyne → offretite → analcime → thomsonite → mesolite → scolecite → chabazite → calcite. Gemstones Thomsonites, one of the rarer zeolite minerals, have been collected as gemstones from a series of lava flows along Lake Superior in Minnesota and, to a lesser degree, in Michigan. Thomsonite nodules from these areas have eroded from basalt lava flows and are collected on beaches and by scuba divers in Lake Superior. These thomsonite nodules have concentric rings in combinations of colors: black, white, orange, pink, purple, red, and many shades of green. Some nodules have copper inclusions and are rarely found with copper "eyes". When polished by a lapidary, the thomsonites sometimes display a "cat's eye" effect (chatoyancy). Production The first synthetic zeolite structure was reported by Richard Barrer. Industrially important zeolites are produced synthetically. Typical procedures entail heating aqueous solutions of alumina and silica with sodium hydroxide. Equivalent reagents include sodium aluminate and sodium silicate. Further variations include the use of structure-directing agents (SDAs) such as quaternary ammonium cations. Synthetic zeolites hold some key advantages over their natural analogs. The synthetic materials are manufactured in a uniform, phase-pure state. It is also possible to produce zeolite structures that do not appear in nature. Zeolite A is a well-known example. Since the principal raw materials used to manufacture zeolites are silica and alumina, which are among the most abundant mineral components on earth, the potential to supply zeolites is virtually unlimited. Ore mining The world's annual production of natural zeolite is approximately 3 million tonnes. Major producers in 2010 included China (2 million tonnes), South Korea (210,000 t), Japan (150,000 t), Jordan (140,000 t), Turkey (100,000 t), Slovakia (85,000 t) and the United States (59,000 t). The ready availability of zeolite-rich rock at low cost and the shortage of competing minerals and rocks are probably the most important factors for its large-scale use. According to the United States Geological Survey, it is likely that a significant percentage of the material sold as zeolites in some countries is ground or sawn volcanic tuff that contains only a small amount of zeolites. These materials are used for construction, e.g. dimension stone (as an altered volcanic tuff), lightweight aggregate, pozzolanic cement, and soil conditioners. Synthesis Over 200 synthetic zeolites have been reported. Most zeolites have aluminosilicate frameworks, but some incorporate germanium, iron, gallium, boron, zinc, tin, and titanium. Zeolite synthesis involves sol-gel-like processes. The product properties depend on the reaction mixture composition, the pH of the system, the operating temperature, the pre-reaction 'seeding' time, the reaction time, and the templates used.
In the sol-gel process, other elements (metals, metal oxides) can be easily incorporated. Applications Zeolites are widely used as catalysts and sorbents. In chemistry, zeolites are used as membranes to separate molecules (only molecules of certain sizes and shapes can pass through), and as traps for molecules so they can be analyzed. Research into and development of the many biochemical and biomedical applications of zeolites, particularly the naturally occurring species heulandite, clinoptilolite, and chabazite, have been ongoing. Ion-exchange, water purification and softening Zeolites are widely used as ion-exchange beds in domestic and commercial water purification, softening, and other applications. Evidence for the oldest known zeolite water purification filtration system occurs in the undisturbed sediments of the Corriental reservoir at the Maya city of Tikal, in northern Guatemala. Earlier, polyphosphates were used to soften hard water. The polyphosphates form complexes with metal ions such as Ca2+ and Mg2+, binding them up so that they cannot interfere with the cleaning process. However, when this phosphate-rich water enters waterways, it results in eutrophication of water bodies, and hence the use of polyphosphates was replaced with the use of synthetic zeolites. The largest single use for zeolite is the global laundry detergent market. Zeolites are used in laundry detergent as water softeners, removing Ca2+ and Mg2+ ions which would otherwise precipitate from the solution. The ions are retained by the zeolites, which release Na+ ions into the solution, allowing the laundry detergent to be effective in areas with hard water. Catalysis Synthetic zeolites, like mesoporous materials (e.g., MCM-41), are widely used as catalysts in the petrochemical industry, such as in fluid catalytic cracking and hydrocracking. Zeolites confine molecules into small spaces, which causes changes in their structure and reactivity. When prepared in their acidic forms, zeolites are often powerful solid acids, facilitating a host of acid-catalyzed reactions, such as isomerization, alkylation, and cracking. Catalytic cracking uses a reactor and a regenerator. Feed is injected onto a hot, fluidized catalyst where large gasoil molecules are broken into smaller gasoline molecules and olefins. The vapor-phase products are separated from the catalyst and distilled into various products. The catalyst is circulated to a regenerator, where air is used to burn coke off the surface of the catalyst that was formed as a byproduct in the cracking process. The hot, regenerated catalyst is then circulated back to the reactor to complete its cycle. Zeolites containing cobalt nanoparticles have applications in the recycling industry as a catalyst to break down polyethylene and polypropylene, two widely used plastics, into propane. Nuclear waste reprocessing Zeolites have been used in advanced nuclear reprocessing methods, where their microporous ability to capture some ions while allowing others to pass freely allows many fission products to be efficiently removed from the waste and permanently trapped. Equally important are the mineral properties of zeolites. Their alumino-silicate construction is extremely durable and resistant to radiation, even in porous form. Additionally, once they are loaded with trapped fission products, the zeolite-waste combination can be hot-pressed into an extremely durable ceramic form, closing the pores and trapping the waste in a solid stone block.
This waste form greatly reduces the hazard of the waste compared to conventional reprocessing systems. Zeolites are also used in the management of leaks of radioactive materials. For example, in the aftermath of the Fukushima Daiichi nuclear disaster, sandbags of zeolite were dropped into the seawater near the power plant to adsorb the radioactive cesium-137 that was present in high levels. Gas separation and storage Zeolites have the potential to provide precise and specific separation of gases, including the removal of H2O, CO2, and SO2 from low-grade natural gas streams. Other separations include noble gases, N2, O2, freon, and formaldehyde. On-board oxygen generating systems (OBOGS) and oxygen concentrators use zeolites in conjunction with pressure swing adsorption to remove nitrogen from compressed air to supply oxygen for aircrews at high altitudes, as well as home and portable oxygen supplies. Zeolite-based oxygen concentrator systems are widely used to produce medical-grade oxygen. The zeolite is used as a molecular sieve to create purified oxygen from air using its ability to trap impurities, in a process involving the adsorption of nitrogen, leaving highly purified oxygen and up to 5% argon. The German research organization Fraunhofer e.V. announced that it had developed a zeolite substance for use in the biogas industry for long-term storage of energy at a density four times greater than that of water. Ultimately, the goal is to store heat both in industrial installations and in small combined heat and power plants such as those used in larger residential buildings. Debbie Meyer Green Bags, a produce storage and preservation product, uses a form of zeolite as its active ingredient. The bags are lined with zeolite to adsorb ethylene, which is intended to slow the ripening process and extend the shelf life of produce stored in the bags. Clinoptilolite has also been added to chicken feed: the absorption of water and ammonia by the zeolite made the birds' droppings drier and less odoriferous, and hence easier to handle. Zeolites are also used as a molecular sieve in cryosorption-style vacuum pumps. Solar energy storage and use Zeolites can be used to thermochemically store solar heat harvested from solar thermal collectors, as first demonstrated by Guerra in 1978, and for adsorption refrigeration, as first demonstrated by Tchernev in 1974. In these applications, their high heat of adsorption and their ability to hydrate and dehydrate while maintaining structural stability are exploited. This hygroscopic property, coupled with an inherent exothermic (energy-releasing) reaction when transitioning from a dehydrated to a hydrated form, makes natural zeolites useful in harvesting waste heat and solar heat energy. Building materials Synthetic zeolites are used as an additive in the production process of warm mix asphalt concrete. The development of this application started in Germany in the 1990s. They help by decreasing the temperature level during manufacture and laying of asphalt concrete, resulting in lower consumption of fossil fuels, thus releasing less carbon dioxide, aerosols, and vapors. The use of synthetic zeolites in hot mixed asphalt leads to easier compaction and, to a certain degree, allows cold-weather paving and longer hauls. When added to Portland cement as a pozzolan, they can reduce chloride permeability and improve workability. They reduce weight and help moderate water content while allowing for slower drying, which improves break strength.
When added to lime mortars and lime-metakaolin mortars, synthetic zeolite pellets can act simultaneously as a pozzolanic material and a water reservoir. Cat litter Non-clumping cat litter is often made of zeolite (or diatomite), one form of which, invented at MIT, can sequester the greenhouse gas methane from the atmosphere. Hemostatic agent The original formulation of QuikClot brand hemostatic agent, which is used to stop severe bleeding, contained zeolite granules. When in contact with blood, the granules would rapidly absorb water from the blood plasma, creating an exothermic reaction which generated heat. The absorption of water would also concentrate clotting factors present within the blood, causing the clot formation process to occur much faster than under normal circumstances, as shown in vitro. The 2022 formulation of QuikClot uses a nonwoven material impregnated with kaolin, an inorganic mineral that activates factor XII, in turn accelerating natural clotting. Unlike the original zeolite formulation, kaolin does not exhibit any thermogenic properties. Soil treatment In agriculture, clinoptilolite (a naturally occurring zeolite) is used as a soil treatment. It provides a source of slowly released potassium. If previously loaded with ammonium, the zeolite can serve a similar function in the slow release of nitrogen. Zeolites can also act as water moderators, absorbing up to 55% of their weight in water and slowly releasing it as plants demand it. This property can prevent root rot and moderate drought cycles. Aquaria Pet stores market zeolites for use as filter additives in aquaria, where they can be used to adsorb ammonia and other nitrogenous compounds. Due to the high affinity of some zeolites for calcium, they may be less effective in hard water and may deplete calcium. Zeolite filtration is also used in some marine aquaria to keep nutrient concentrations low for the benefit of corals adapted to nutrient-depleted waters. Where and how the zeolite was formed is an important consideration for aquarium applications. Most Northern Hemisphere natural zeolites were formed when molten lava came into contact with sea water, thereby "loading" the zeolite with sacrificial Na (sodium) ions. The mechanism is well known to chemists as ion exchange. These sodium ions can be replaced by other ions in solution, hence the take-up of nitrogen in ammonia, with the release of the sodium. A deposit near Bear River in southern Idaho is a fresh-water variety (Na < 0.05%). Southern Hemisphere zeolites are typically formed in freshwater and have a high calcium content. Animal feed Zeolite, particularly clinoptilolite, has been reported to improve shell thickness, feed conversion rates, nutrient utilization, bone quality, and growth rates in poultry, pigs, calves, and sheep. Key mechanisms include: binding ammonia (zeolite effectively binds ammonia in the gut, reducing toxicity); inhibiting mycotoxins (it has been shown to inhibit various mycotoxins that can adversely affect animal health); reducing the uptake of toxic degradation products (zeolite enhances the absorption of nutrients while binding harmful substances); and slowing the passage of digestion products (which improves nutrient absorption).
Adding 5% clinoptilolite to swine and poultry feed has been associated with increases in animal weight, reductions in feed costs, enhanced degradation of toxins, improved intestinal microbial balance, and slower intestinal passage for better digestion. Mineral species The zeolite structural group (Nickel-Strunz classification) includes:
09.GA. – Zeolites with T5O10 units (T = combined Si and Al): the fibrous zeolites
Natrolite framework (NAT): gonnardite, natrolite, mesolite, paranatrolite, scolecite, tetranatrolite
Edingtonite framework (EDI): edingtonite, kalborsite
Thomsonite framework (THO): thomsonite-series
09.GB. – Chains of single connected 4-membered rings
Analcime framework (ANA): analcime, leucite, pollucite, wairakite
Laumontite (LAU), yugawaralite (YUG), goosecreekite (GOO), montesommaite (MON)
09.GC. – Chains of doubly connected 4-membered rings
Phillipsite framework (PHI): harmotome, phillipsite-series
Gismondine framework (GIS): amicite, gismondine, garronite, gobbinsite
Boggsite (BOG), merlinoite (MER), mazzite-series (MAZ), paulingite-series (PAU), perlialite (Linde type L framework, zeolite L, LTL)
09.GD. – Chains of 6-membered rings: tabular zeolites
Chabazite framework (CHA): chabazite-series, herschelite, willhendersonite and SSZ-13
Faujasite framework (FAU): faujasite-series, Linde type X (zeolite X, X zeolites), Linde type Y (zeolite Y, Y zeolites)
Mordenite framework (MOR): maricopaite, mordenite
Offretite–wenkite subgroup 09.GD.25 (Nickel–Strunz, 10 ed): offretite (OFF), wenkite (WEN)
Bellbergite (TMA-E, Aiello and Barrer; framework type EAB), bikitaite (BIK), erionite-series (ERI), ferrierite (FER), gmelinite (GME), levyne-series (LEV), dachiardite-series (DAC), epistilbite (EPI)
09.GE. – Chains of T10O20 tetrahedra (T = combined Si and Al)
Heulandite framework (HEU): clinoptilolite, heulandite-series
Stilbite framework (STI): barrerite, stellerite, stilbite-series
Brewsterite framework (BRE): brewsterite-series
Others: cowlesite, pentasil (also known as ZSM-5, framework type MFI), tschernichite (beta polymorph A, disordered framework, BEA), Linde type A framework (zeolite A, LTA)
Computational study Computer calculations have predicted that millions of hypothetical zeolite structures are possible. However, only 232 of these structures have been discovered and synthesized so far, so many zeolite scientists question why only this small fraction of possibilities is observed. This problem is often referred to as "the bottleneck problem". Several theories attempt to explain this discrepancy. Zeolite synthesis research has primarily concentrated on hydrothermal methods; however, new zeolites may be synthesized using alternative methods. Synthesis methods that have started to gain use include microwave-assisted synthesis, post-synthetic modification, and steam-assisted synthesis. Geometric computer simulations have shown that the discovered zeolite frameworks possess a behavior known as "the flexibility window". This shows that there is a range in which the zeolite structure is "flexible" and can be compressed while retaining the framework structure. It is suggested that if a framework does not possess this property then it cannot be feasibly synthesized. As zeolites are metastable, certain frameworks may be inaccessible, as nucleation cannot occur because more stable and energetically favorable zeolites will form.
Post-synthetic modification has been used to combat this issue with the ADOR method, whereby frameworks can be cut apart into layers and bonded back together by either removing silica bonds or including them. Based on dense crystal model systems, the theory of crystallization via solute pre-nucleation clusters was developed. Investigation of zeolite crystallization in hydrated silicate ionic liquids (HSIL) has shown that zeolites can nucleate via the condensation of ion-paired pre-nucleation clusters. This line of research identified several connections between the synthesis medium liquid chemistry and important properties of zeolite crystals, such as the role of inorganic structure-directing agents in zeolite framework selection, the role of ion-pairing on the zeolite molecular composition and topology, and the role of liquid cation mobility on the zeolite crystal size and morphology. Consequently, complex relations exist between the properties of zeolite synthesis media and the crystallizing zeolite, potentially explaining why only a small fraction of the hypothetical zeolite frameworks can be synthesized. While these relations are not yet fully understood, HSIL zeolite synthesis is an exceptional model system for zeolite science, providing opportunities to advance current understanding of the zeolite conundrum.
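The two classification conventions used throughout this article can be stated compactly: the pore class follows from the ring size of the largest pore aperture, and the high-silica designation follows from the Si/Al ratio. The following Python sketch is purely illustrative and not part of any zeolite software; the function names and example values are invented here and simply restate the thresholds given in the Characteristics and Framework structure sections.

```python
# Illustrative sketch only: encodes the rough classification conventions
# described in this article. Names and examples are invented for clarity.

# Pore-size class by the ring size of the largest aperture
# (tetrahedral T-atoms per ring), per the Framework structure section.
PORE_CLASS_BY_RING = {
    8: "small pore",    # e.g. LTA, aperture ~0.41 nm
    10: "medium pore",  # e.g. MFI (ZSM-5)
    12: "large pore",   # e.g. FAU, aperture ~0.74 nm
}

def classify_pore(ring_size: int) -> str:
    """Map an aperture ring size to the conventional pore class."""
    return PORE_CLASS_BY_RING.get(ring_size, "unclassified")

def is_high_silica(si_al_ratio: float) -> bool:
    """Si/Al ratios above about 3 mark high-silica (more hydrophobic) zeolites."""
    return si_al_ratio > 3

print(classify_pore(8))      # small pore (LTA-like)
print(classify_pore(12))     # large pore (FAU-like)
print(is_high_silica(9.0))   # True: a high-silica composition
print(is_high_silica(1.0))   # False: low-silica, e.g. zeolite A
```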
Physical sciences
Silicate minerals
Earth science
92394
https://en.wikipedia.org/wiki/Streptococcus%20pyogenes
Streptococcus pyogenes
Streptococcus pyogenes is a species of Gram-positive, aerotolerant bacteria in the genus Streptococcus. These bacteria are extracellular and are made up of non-motile, non-sporing cocci (round cells) that tend to link in chains. They are clinically important for humans, as they are an infrequent, but usually pathogenic, part of the skin microbiota that can cause group A streptococcal infection. S. pyogenes is the predominant species harboring the Lancefield group A antigen, and is often called group A Streptococcus (GAS). However, both Streptococcus dysgalactiae and the Streptococcus anginosus group can possess group A antigen as well. Group A streptococci, when grown on blood agar, typically produce small (2–3 mm) zones of beta-hemolysis, a complete destruction of red blood cells. The name group A (beta-hemolytic) Streptococcus is thus also used. The species name is derived from Greek words meaning 'a chain' (streptos) of berries (coccus, Latinized from kokkos) and pus (pyo)-forming (genes), since a number of infections caused by the bacterium produce pus. The main criterion for differentiation between Staphylococcus spp. and Streptococcus spp. is the catalase test. Staphylococci are catalase-positive whereas streptococci are catalase-negative. S. pyogenes can be cultured on fresh blood agar plates. The PYR test allows for the differentiation of Streptococcus pyogenes from other morphologically similar beta-hemolytic streptococci (including S. dysgalactiae subsp. equisimilis), as S. pyogenes will produce a positive test result. An estimated 700 million GAS infections occur worldwide each year. While the overall mortality rate for these infections is less than 0.1%, over 650,000 of the cases are severe and invasive, and these cases have a mortality rate of 25%. Early recognition and treatment are critical; diagnostic failure can result in sepsis and death. S. pyogenes is clinically and historically significant as the cause of scarlet fever, which results from exposure to the species' exotoxin. Epidemiology Unlike most bacterial pathogens, S. pyogenes only infects humans. Thus, zoonotic transmission from an animal (or animal products) to a human is rare. S. pyogenes typically colonizes the throat, genital mucosa, rectum, and skin. Of healthy adults, 1% to 5% have throat, vaginal, or rectal carriage, with children being more common carriers. Most frequently, transmission from one person to another occurs due to inhalation of respiratory droplets, produced by sneezing and coughing from an infected person. Skin contact, contact with objects harboring the bacterium, and consumption of contaminated food are possible but uncommon modes of transmission. Streptococcal pharyngitis occurs most frequently in late winter to early spring in most countries, as indoor spaces are used more often and are thus more crowded. Cases are lowest during autumn. Maternal S. pyogenes infection usually occurs in late pregnancy, from more than 30 weeks of gestation to four weeks postpartum. Maternal infections account for 2 to 4% of all clinically diagnosed S. pyogenes infections. The risk of sepsis is relatively high compared to other bacterial infections acquired during pregnancy, and S. pyogenes is a leading cause of septic shock and death in pregnant and postpartum women. Bacteriology Serotyping In 1928, Rebecca Lancefield published a method for serotyping S. pyogenes based on its M protein, a virulence factor displayed on its surface. Later, in 1946, Lancefield described the serologic classification of S.
pyogenes isolates based on components of their surface pili (known as the T-antigen), which are used by bacteria to attach to host cells. As of 2016, a total of 120 M proteins had been identified. These M proteins are encoded by 234 emm gene types, with greater than 1,200 alleles. Lysogeny All strains of S. pyogenes are polylysogenized, in that they carry one or more bacteriophages in their genomes. Some of the phages may be defective, but in some cases active phage may compensate for defects in others. In general, the genomes of S. pyogenes strains isolated during disease are >90% identical; they differ by the phages they carry. Virulence factors S. pyogenes has several virulence factors that enable it to attach to host tissues, evade the immune response, and spread by penetrating host tissue layers. A carbohydrate-based bacterial capsule composed of hyaluronic acid surrounds the bacterium, protecting it from phagocytosis by neutrophils. In addition, the capsule and several factors embedded in the cell wall, including M protein, lipoteichoic acid, and protein F (SfbI), facilitate attachment to various host cells. M protein also inhibits opsonization by the alternative complement pathway by binding to host complement regulators. The M protein found on some serotypes is also able to prevent opsonization by binding to fibrinogen. However, the M protein is also the weakest point in this pathogen's defense, as antibodies produced by the immune system against M protein target the bacteria for engulfment by phagocytes. M proteins are unique to each strain, and identification can be used clinically to confirm the strain causing an infection. Genome The genomes of different strains have been sequenced (genome sizes are 1.8–1.9 Mbp), encoding about 1,700–1,900 proteins (1,700 in strain NZ131, 1,865 in strain MGAS5005). Complete genome sequences of the type strain of S. pyogenes (NCTC 8198T = CCUG 4207T) are available in the DNA Data Bank of Japan, the European Nucleotide Archive, and GenBank under the accession numbers LN831034 and CP028841. Biofilm formation Biofilms are a way for S. pyogenes, as well as other bacterial cells, to communicate with each other. In the biofilm, gene expression for multiple purposes (such as defending against the host immune system) is controlled via quorum sensing. One of the biofilm-forming pathways in GAS is the Rgg2/3 pathway. It regulates SHPs (short hydrophobic peptides), which are quorum-sensing pheromones, also known as autoinducers. The SHPs are translated as an immature form of the pheromone and must undergo processing, first by a metalloprotease enzyme inside the cell and then in the extracellular space, to reach their mature active form. The mode of transportation out of the cell and the extracellular processing factor(s) are still unknown. The mature SHP pheromone can then be taken into nearby cells and the cell it originated from via a transmembrane protein, oligopeptide permease. In the cytosol the pheromones have two functions in the Rgg2/3 pathway. Firstly, they inhibit the activity of Rgg3, a transcriptional regulator that represses SHP production. Secondly, they bind another transcriptional regulator, Rgg2, which increases the production of SHPs, having an antagonistic effect to Rgg3. SHPs activating their own transcriptional activator creates a positive feedback loop, which is common in the production of quorum-sensing peptides. It enables the rapid production of the pheromones in large quantities. The production of SHPs increases biofilm biogenesis.
It has been suggested that GAS switches between biofilm formation and degradation by utilizing pathways with opposing effects. While the Rgg2/3 pathway increases biofilm formation, the RopB pathway disrupts it. RopB is another Rgg-like protein (Rgg1) that directly activates SpeB (streptococcal pyrogenic exotoxin B), a cysteine protease that acts as a virulence factor. In the absence of this pathway, biofilm formation is enhanced, possibly due to the lack of the protease that degrades the pheromones, or of other effects counteracting the Rgg2/3 pathway. Disease S. pyogenes is the cause of many human diseases, ranging from mild superficial skin infections to life-threatening systemic diseases. Infections typically begin in the throat or skin. One of the best-known manifestations is scarlet fever, whose most striking signs are a sandpaper-like rash and a strawberry tongue. Examples of mild S. pyogenes infections include pharyngitis (strep throat) and localized skin infection (impetigo). Erysipelas and cellulitis are characterized by multiplication and lateral spread of S. pyogenes in deep layers of the skin. S. pyogenes invasion and multiplication in the fascia beneath the skin can lead to necrotizing fasciitis, a life-threatening surgical emergency. The bacterium is also an important cause of infection in newborns, who are susceptible to some forms of the infection that are rarely seen in adults, including meningitis. Like many pathogenic bacteria, S. pyogenes may colonize a healthy person's respiratory system without causing disease, existing as a commensal member of the respiratory microbiota. It is commonly found in some populations as part of the mixed microbiome of the upper respiratory tract. Individuals who have the bacterium in their bodies but no signs of disease are known as asymptomatic carriers. The bacteria may start to cause disease when the host's immune system weakens, such as during a viral respiratory infection, which may lead to S. pyogenes superinfection. S. pyogenes infections are commonly associated with the release of one or more bacterial toxins. The release of exotoxins from throat infections has been linked to the development of scarlet fever. Other toxins produced by S. pyogenes may lead to streptococcal toxic shock syndrome, a life-threatening emergency. S. pyogenes can also cause disease in the form of post-infectious "non-pyogenic" (not associated with local bacterial multiplication and pus formation) syndromes. These autoimmune-mediated complications follow a small percentage of infections and include rheumatic fever and acute post-infectious glomerulonephritis. Both conditions appear several weeks following the initial streptococcal infection. Rheumatic fever is characterized by inflammation of the joints and/or heart following an episode of streptococcal pharyngitis. Acute glomerulonephritis, inflammation of the renal glomerulus, can follow streptococcal pharyngitis or skin infection. S. pyogenes is sensitive to penicillin and has not developed resistance to it, making penicillin a suitable antibiotic to treat infections caused by this bacterium. Failure of treatment with penicillin is generally attributed to other local commensal microorganisms producing β-lactamase, or failure to achieve adequate tissue levels in the pharynx. Certain strains have developed resistance to macrolides, tetracyclines, and clindamycin. Vaccine There is a polyvalent inactivated vaccine against several types of Streptococcus, including S.
pyogenes, called "vacuna antipiogena polivalente BIOL" (BIOL polyvalent antipyogenic vaccine). Administration is recommended as a series over 5 weeks, with two applications per week at intervals of 2 to 4 days. The vaccine is produced by the Instituto Biológico Argentino. There is another potential vaccine being developed; the vaccine candidate peptide is called StreptInCor. Applications Bionanotechnology Many S. pyogenes proteins have unique properties, which have been harnessed in recent years to produce a highly specific "superglue" and a route to enhance the effectiveness of antibody therapy. Genome editing The CRISPR system from this organism, which is used to recognize and destroy DNA from invading viruses and thus stop the infection, was appropriated in 2012 for use as a genome-editing tool that could potentially alter any piece of DNA, and later RNA.
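The Rgg2/3 circuit described under Biofilm formation is, at its core, a positive feedback loop: the SHP pheromone both relieves Rgg3 repression and activates Rgg2, thereby boosting its own production. The toy Python model below is a hypothetical illustration, not a published model; all parameters are invented, and a single Hill-type autoactivation term stands in for the combined Rgg2/Rgg3 effects. It shows the switch-like behavior such loops can produce: below a threshold the pheromone level settles near its basal value, while above it the feedback drives accumulation to a high state.

```python
# Toy model (illustrative only; parameters invented, not measured) of a
# quorum-sensing positive feedback loop like the Rgg2/3 circuit above.

def shp_rate(s, basal=0.02, vmax=1.0, K=0.5, n=2, decay=1.0):
    """Net rate of change of the SHP level s (arbitrary units)."""
    production = basal + vmax * s**n / (K**n + s**n)  # autoactivation (Hill term)
    return production - decay * s                     # first-order loss

def steady_state(s0, dt=0.01, steps=5000):
    """Forward-Euler integration until the level settles."""
    s = s0
    for _ in range(steps):
        s += dt * shp_rate(s)
    return s

# Starting below the threshold, SHP stays near its basal level; starting
# above it, the positive feedback locks in a high (biofilm-promoting) state.
print(f"low start : {steady_state(0.05):.2f}")   # settles near ~0.02
print(f"high start: {steady_state(1.00):.2f}")   # settles near ~0.62
```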
Biology and health sciences
Gram-positive bacteria
Plants
92396
https://en.wikipedia.org/wiki/Scarlet%20fever
Scarlet fever
Scarlet fever, also known as scarlatina, is an infectious disease caused by Streptococcus pyogenes, a Group A streptococcus (GAS). It most commonly affects children between five and 15 years of age. The signs and symptoms include a sore throat, fever, headache, swollen lymph nodes, and a characteristic rash. The face is flushed and the rash is red and blanching. It typically feels like sandpaper and the tongue may be red and bumpy. The rash occurs as a result of capillary damage by exotoxins produced by S. pyogenes. On darker-pigmented skin the rash may be hard to discern. Scarlet fever develops in a small number of people who have strep throat or streptococcal skin infections. The bacteria are usually spread by people coughing or sneezing. It can also be spread when a person touches an object that has the bacteria on it and then touches their mouth or nose. The diagnosis is typically confirmed by culturing swabs of the throat. There is no vaccine for scarlet fever. Prevention is by frequent handwashing, not sharing personal items, and staying away from other people when sick. The disease is treatable with antibiotics, which reduce symptoms and spread, and prevent most complications. Outcomes with scarlet fever are typically good if treated. Long-term complications as a result of scarlet fever include kidney disease, rheumatic fever, and arthritis. In the early 20th century, scarlet fever was a leading cause of death in children, but even before World War II and the introduction of antibiotics, its severity was already declining. This decline is suggested to be due to better living conditions, the introduction of better control measures, or a decline in the virulence of the bacteria. In recent years, there have been signs of antibiotic resistance; there was an outbreak in Hong Kong in 2011 and in the UK in 2014, and occurrence of the disease rose by 68% in the UK between 2014 and 2018. Research published in October 2020 showed that infection of the bacterium by three viruses has led to more virulent strains of the bacterium. Signs and symptoms Scarlet fever typically presents with a sudden onset of sore throat, fever, and malaise. Headache, nausea, vomiting and abdominal pain may also be present. Scarlet fever usually follows from a group A streptococcal infection that involves a strep throat, such as streptococcal tonsillitis or, more usually, streptococcal pharyngitis. Often these can present together, known as pharyngotonsillitis. The signs and symptoms are therefore those of a strep throat, followed by the characteristic widespread rash. The rash usually appears one to two days later, but may appear before, or up to seven days after, the onset of illness. It generally hurts to swallow. However, not all cases present with a fever, the degree of tiredness may vary, the sore throat and tongue changes might be slight or absent, and in some the rash can be patchy rather than diffuse. Cough, hoarseness, runny nose, diarrhea, and conjunctivitis are typically absent in scarlet fever; such symptoms indicate what is more likely a viral infection. Mouth and throat Strep throat is usually associated with fatigue and a fever of over 39 °C (102.2 °F). The tonsils may appear red and enlarged and are typically covered in exudate. The throat may be red with small red spots on the roof of the mouth. The uvula can look red and swollen. 30% to 60% of cases have associated enlarged and tender lymph nodes in the neck.
During the first two days of illness the tongue may have a whitish coating from which red swollen papillae protrude, giving the appearance of a "white strawberry tongue". After four to five days, when the white coating sheds, it becomes a "red strawberry tongue". The symptomatic appearance of the tongue is part of the rash that is characteristic of scarlet fever. Rash The characteristic rash has been denoted as "scarlatiniform", and it appears as a diffuse redness of the skin with small bumps resembling goose bumps. It typically appears as small flat spots on the neck or torso before developing into small bumps that spread to the arms and legs. It tends to feel rough like sandpaper. The cheeks might look flushed with a pale area around the mouth. The scarlet fever rash generally looks red on white and pale skin, and might be difficult to visualise on brown or black skin; in people with darker skin the bumps are typically larger, the skin less like sandpaper, and the perioral pallor less obvious. The palms and soles are spared. The reddened skin blanches when pressure is applied to it. The skin may feel itchy, but is not painful. A more intense redness on the inside of skin folds and creases might be noticed. These are lines of petechiae, appearing as pink/red areas located in the armpits and elbow creases. It takes around a week for the main rash to disappear. This may be followed by several weeks of peeling of the skin, typically of the fingers and toes. The desquamation process usually begins on the face and progresses downward on the body. Sometimes, this peeling is the only sign that scarlet fever occurred. If the case of scarlet fever is uncomplicated, recovery from the fever and clinical symptoms, other than the process of desquamation, occurs in 5–10 days. After the desquamation, the skin will be left with a sunburned appearance. Variable presentations Children younger than five years old may have atypical presentations and many of the common signs and symptoms may be missing or different. Children younger than 3 years old can present with nasal congestion and a lower-grade fever. Infants may present with symptoms of increased irritability and decreased appetite. Complications The complications, which can arise from scarlet fever when left untreated or inadequately treated, can be divided into two categories: suppurative and nonsuppurative. Suppurative complications: These are rare complications that arise either from direct spread to structures that are close to the primary site of infection, or spread through the lymphatic system or blood. In the first case, scarlet fever may spread to the pharynx. Possible problems from this method of spread include peritonsillar or retropharyngeal abscesses, cellulitis, mastoiditis, or sinusitis. In the second case, the streptococcal infection may spread through the lymphatic system or the blood to areas of the body further away from the pharynx. A few examples of the many complications that can arise from those methods of spread include endocarditis, pneumonia, or meningitis. Nonsuppurative complications: These complications arise from certain subtypes of group A streptococci that cause an autoimmune response in the body through what has been termed molecular mimicry. In these cases, the antibodies which the person's immune system developed to attack the group A streptococci are also able to attack the person's own tissues. The following complications result, depending on which tissues in the person's body are targeted by those antibodies.
Acute rheumatic fever: This is a complication that results 2–6 weeks after a group A streptococcal infection of the upper respiratory tract. It presents in developing countries, where antibiotic treatment of streptococcal infections is less common, as a febrile illness with several clinical manifestations, which are organized into what is called the Jones criteria. These criteria include arthritis, carditis, neurological issues, and skin findings. Diagnosis also depends on evidence of a prior group A streptococcal infection in the upper respiratory tract (as seen in streptococcal pharyngitis and scarlet fever). The carditis is the result of the immunologic response targeting the person's heart tissue, and it is the most serious sequela that develops from acute rheumatic fever. When this involvement of the heart tissue occurs, it is called rheumatic heart disease. In most cases of rheumatic heart disease, the mitral valve is affected, ultimately leading to mitral stenosis. The link to rheumatic fever and heart disease is a particular concern in Australia, because of the high prevalence of these diseases in Aboriginal and Torres Strait Islander communities. Poststreptococcal glomerulonephritis: This is inflammation of the kidney, which presents 1–2 weeks after a group A streptococcal pharyngitis. It can also develop after an episode of impetigo or any group A streptococcal infection in the skin (this differs from acute rheumatic fever, which only follows group A streptococcal pharyngitis). It is the result of the autoimmune response to the streptococcal infection affecting part of the kidney. Persons present with what is called acute nephritic syndrome, in which they have high blood pressure, swelling, and urinary abnormalities. Urinary abnormalities include blood and protein found in the urine, as well as less urine production overall. Poststreptococcal reactive arthritis: The presentation of arthritis after a recent episode of group A streptococcal pharyngitis raises suspicion for acute rheumatic fever, since it is one of the Jones criteria for that separate complication. But when the arthritis is an isolated symptom, it is referred to as poststreptococcal reactive arthritis. This arthritis can involve a variety of joints throughout the body, unlike the arthritis of acute rheumatic fever, which primarily affects larger joints such as the knee joints. It can present less than 10 days after the group A streptococcal pharyngitis. Cause Strep throat spreads by close contact among people, via respiratory droplets (for example, saliva or nasal discharge). A person in close contact with another person infected with group A streptococcal pharyngitis has a 35% chance of becoming infected. One in ten children who are infected with group A streptococcal pharyngitis will develop scarlet fever. Pathophysiology The rash of scarlet fever, which is what differentiates this disease from an isolated group A strep pharyngitis (or strep throat), is caused by specific strains of group A streptococcus that produce a streptococcal pyrogenic exotoxin, which is mainly responsible for the skin manifestation of the infection. These toxin-producing strains cause scarlet fever in people who do not already have antitoxin antibodies. Streptococcal pyrogenic exotoxins – SPEs A, B, C, and F – have been identified. The pyrogenic exotoxins, also called erythrogenic toxins, cause the erythematous rash of scarlet fever.
The strains of group A streptococcus that cause scarlet fever need specific bacteriophages for there to be pyrogenic exotoxin production. Specifically, bacteriophage T12 is responsible for the production of SpeA. Streptococcal pyrogenic exotoxin A (SpeA) is the one most commonly associated with cases of scarlet fever that are complicated by the immune-mediated sequelae of acute rheumatic fever and post-streptococcal glomerulonephritis. These toxins are also known as "superantigens" because they can cause an extensive immune response by broadly activating some of the cells mainly responsible for the person's immune defenses. Although the body responds to the toxins it encounters by making antibodies, those antibodies will only protect against that particular subset of toxins. They will not necessarily completely protect a person from future group A streptococcal infections, because there are 12 different pyrogenic exotoxins that may be produced by the bacteria, and future infections may produce a different subset of those toxins. Microbiology The disease is caused by secretion of pyrogenic exotoxins by the infecting Streptococcus bacteria. Streptococcal pyrogenic exotoxin A (SpeA) is probably the best studied of these toxins. It is carried by the bacteriophage T12, which integrates into the streptococcal genome, from where the toxin is transcribed. The phage itself integrates into a serine tRNA gene on the chromosome. The T12 virus itself has not been placed into a taxon by the International Committee on Taxonomy of Viruses. It has a double-stranded DNA genome and on morphological grounds appears to be a member of the Siphoviridae. The speA gene was cloned and sequenced in 1986. It is 753 base pairs in length and encodes a 29.244 kilodalton (kDa) protein. The protein contains a putative 30-amino-acid signal peptide; removal of the signal sequence gives a predicted molecular weight of 25.787 kDa for the secreted protein. Both a promoter and a ribosome binding site (Shine-Dalgarno sequence) are present upstream of the gene. A transcriptional terminator is located 69 bases downstream from the translational termination codon. The carboxy-terminal portion of the protein exhibits extensive homology with the carboxy terminus of Staphylococcus aureus enterotoxins B and C1. Streptococcal phages other than T12 may also carry the speA gene. Diagnosis Although scarlet fever can often be diagnosed clinically from its presentation, further testing may be required to distinguish it from other illnesses. Also, a history of recent exposure to someone with strep throat can be useful in diagnosis. There are two methods used to confirm suspicion of scarlet fever: the rapid antigen detection test and the throat culture. The rapid antigen detection test is a very specific test but not very sensitive. This means that if the result is positive (indicating that the group A strep antigen was detected, and therefore confirming that the person has group A strep pharyngitis), then it is appropriate to treat the person with antibiotics. But if the rapid antigen detection test is negative (suggesting that they do not have group A strep pharyngitis), then a throat culture is required to confirm, as the first test could have yielded a false-negative result. In the early 21st century, the throat culture remains the "gold standard" for diagnosis. Serologic testing seeks evidence of the antibodies that the body produces against the streptococcal infection, including antistreptolysin-O and antideoxyribonuclease B.
It takes the body 2–3 weeks to make these antibodies, so this type of testing is not useful for diagnosing a current infection. But it is useful when assessing a person who may have one of the complications from a previous streptococcal infection. Throat cultures done after antibiotic therapy can show if the infection has been cleared. These throat swabs, however, are not indicated, because up to 25% of properly treated individuals can continue to carry the streptococcal infection while being asymptomatic. Differential diagnosis Scarlet fever might appear similar to Kawasaki disease, which has a characteristic red but not white strawberry tongue, and staphylococcal scarlatina, which does not have the strawberry tongue at all. Other conditions that might appear similar include impetigo, erysipelas, measles, chickenpox, and hand-foot-and-mouth disease, and may be distinguished by the pattern of symptoms. Viral exanthem: Viral infections are often accompanied by a rash which can be described as morbilliform or maculopapular. This type of rash is accompanied by a prodromal period of cough and runny nose in addition to a fever, indicative of a viral process. Allergic or contact dermatitis: The erythematous appearance of the skin will be in a more localized distribution rather than the diffuse and generalized rash seen in scarlet fever. Drug eruption: These are potential side effects of taking certain drugs, such as penicillin. The reddened maculopapular rash which results can be itchy and accompanied by a fever. Kawasaki disease: Children with this disease also present with a strawberry tongue and undergo a desquamative process on their palms and soles. However, these children tend to be younger than five years old, their fever lasts longer (at least five days), and they have additional clinical criteria (including signs such as conjunctival redness and cracked lips), which can help distinguish this from scarlet fever. Toxic shock syndrome: Both streptococcal and staphylococcal bacteria can cause this syndrome. Clinical manifestations include diffuse rash and desquamation of the palms and soles. It can be distinguished from scarlet fever by low blood pressure, the lack of a sandpaper texture to the rash, and multi-organ system involvement. Staphylococcal scalded skin syndrome: This is a disease that occurs primarily in young children due to a toxin-producing strain of the bacterium Staphylococcus aureus. The abrupt start of the fever and the diffuse, sunburn-like appearance of the rash can resemble scarlet fever. However, this rash is associated with tenderness and large blister formation. These blisters pop easily, after which the skin peels. Staphylococcal scarlet fever: The rash is identical to that of streptococcal scarlet fever in distribution and texture, but the skin affected by the rash will be tender. Prevention One method is long-term use of antibiotics to prevent future group A streptococcal infections. This method is only indicated for people who have had complications like recurrent attacks of acute rheumatic fever or rheumatic heart disease. Antibiotics are limited in their ability to prevent these infections, since there are a variety of subtypes of group A streptococci that can cause the infection. Although there are currently no vaccines available, the vaccine approach has a greater likelihood of effectively preventing group A streptococcal infections in the future, because vaccine formulations can target multiple subtypes of the bacteria.
A vaccine developed by George and Gladys Dick in 1924 was discontinued due to poor efficacy and the introduction of antibiotics. Difficulties in vaccine development include the considerable strain variety of group A streptococci present in the environment and the amount of time and number of people needed for appropriate trials for safety and efficacy of any potential vaccine. There have been several attempts to create a vaccine in the past few decades. These vaccines, which are still in the development phase, expose the person to proteins present on the surface of the group A streptococci to activate an immune response that will prepare the person to fight and prevent future infections. A combined diphtheria and scarlet fever vaccine also used to exist; however, it was found not to be effective and was discontinued by the end of World War II. Treatment Antibiotics to combat the streptococcal infection are the mainstay of treatment for scarlet fever. Prompt administration of appropriate antibiotics decreases the length of illness. Peeling of the outer layer of skin, however, will happen despite treatment. One of the main goals of treatment is to prevent the child from developing one of the suppurative or nonsuppurative complications, especially acute rheumatic fever. As long as antibiotics are started within nine days, it is very unlikely for the child to develop acute rheumatic fever. Antibiotic therapy has not been shown to prevent the development of post-streptococcal glomerulonephritis. Another important reason for prompt treatment with antibiotics is the ability to prevent transmission of the infection between children. An infected individual is most likely to pass on the infection to another person during the first two weeks. A child is no longer contagious (able to pass the infection to another child) after 24 hours of antibiotics. The antibiotic of choice is penicillin V, which is taken by mouth. In countries without a liquid penicillin V product, children unable to take tablets can be given amoxicillin, which comes in a liquid form and is equally effective. Duration of treatment is 10 days. Benzathine penicillin G can be given as a one-time intramuscular injection as another alternative if swallowing pills is not possible. If the person is allergic to the family of antibiotics to which both penicillin and amoxicillin belong (beta-lactam antibiotics), a first-generation cephalosporin is used. Cephalosporin antibiotics, however, can still cause adverse reactions in people whose allergic reaction to penicillin is a type 1 hypersensitivity reaction. In those cases, it is appropriate to choose clindamycin or erythromycin instead. Tonsillectomy, although once a reasonable treatment for recurrent streptococcal pharyngitis, is not indicated, as a person can still be infected with group A streptococcus without their tonsils. Antibiotic resistance and resurgence A drug-resistant strain of the scarlet fever-causing bacterium, resistant to macrolide antibiotics such as erythromycin but retaining drug-sensitivity to beta-lactam antibiotics such as penicillin, emerged in Hong Kong in 2011, accounting for at least two deaths in that city—the first such in over a decade. About 60% of circulating strains of the group A streptococcus that cause scarlet fever in Hong Kong are resistant to macrolide antibiotics, according to Professor Yuen Kwok-yung, head of Hong Kong University's microbiology department.
Previously, observed resistance rates had been 10–30%; the increase is likely the result of overuse of macrolide antibiotics in recent years. There was also an outbreak in the UK in 2014, and the National Health Service reported a 68% increase in the number of S. pyogenes identified in laboratory reports between 2014 and 2018. New research published in October 2020 indicates that the bacterium, specifically the North-East Asian serotype M12 (emm12) of group A Streptococcus (GAS), appears to be getting more virulent after being infected with viruses. The researchers found three new genes, acquired from viruses, which cause the development of "superantigens" targeting white blood cells, resulting in a more virulent strain of the bacterium. A vaccine that will protect against the 180 to 200 types of bacteria causing the disease has been worked on for over 20 years, but a safe one has not yet been developed. Epidemiology Scarlet fever occurs equally in both males and females. Children are most commonly infected, typically between 5 and 15 years old. Although streptococcal infections can happen at any time of year, infection rates peak in the winter and spring months, typically in colder climates. The morbidity and mortality of scarlet fever have declined since the 18th and 19th centuries, when there were epidemics of this disease. Around 1900 the mortality rate in multiple places reached 25%. The improvement in prognosis can be attributed to the use of penicillin in the treatment of this disease. The frequency of scarlet fever cases has also been declining over the past century. There have been several reported outbreaks of the disease in various countries in the past decade. The reason for these increases remains unclear in the medical community. Between 2013 and 2016, population rates of scarlet fever in England increased from 8.2 to 33.2 per 100,000, and hospital admissions for scarlet fever increased by 97%. Further increases in the reporting of scarlet fever cases have been noted in England during the 2021–2022 season (September to September) and so far also in the 2022–2023 season. The World Health Organization has reported an increase in scarlet fever (and invasive GAS, iGAS, cases) in England and other European countries during this time. Increases have been reported in France and Ireland. In the US, cases of scarlet fever are not routinely reported, but as of December 2022, the CDC was looking at a possible increase in the numbers of invasive GAS infections reported in children. In late December 2022, the CDC's Health Alert Network issued an advisory on the reported increases in invasive GAS infections. History It is unclear when a description of this disease was first recorded. Hippocrates, writing around 400 BC, described the condition of a person with reddened skin and fever. The first unambiguous description of the disease in the medical literature appeared in the 1553 book De Tumoribus praeter Naturam by the Sicilian anatomist and physician Giovanni Filippo Ingrassia, where he referred to it as rossalia. He also made a point to distinguish that this presentation had different characteristics from measles. It was redescribed by Johann Weyer during an epidemic in lower Germany between 1564 and 1565; he referred to it as scarlatina anginosa. The first unequivocal description of scarlet fever itself appeared in a book by Joannes Coyttarus of Poitiers, De febre purpura epidemiali et contagiosa libri duo, which was published in 1578 in Paris.
Daniel Sennert of Wittenberg described the classical 'scarlatinal desquamation' in 1572 and was also the first to describe the early arthritis, scarlatinal dropsy, and ascites associated with the disease. In 1675, 'scarlatina', the term that has since been commonly used to refer to scarlet fever, was used in writing by Thomas Sydenham, an English physician. In 1827, Richard Bright was the first to recognize the involvement of the renal system in scarlet fever. The association between streptococci and disease was first described in 1874 by Theodor Billroth, discussing people with skin infections. Billroth also coined the genus name Streptococcus. In 1884, Friedrich Julius Rosenbach changed the name to its current one, Streptococcus pyogenes, after further examining the bacteria in skin lesions. The organism had first been cultured in 1883 by the German surgeon Friedrich Fehleisen, from erysipelas lesions. Also in 1884, the German physician Friedrich Loeffler was the first to show the presence of streptococci in the throats of people with scarlet fever. Because not all people with pharyngeal streptococci developed scarlet fever, these findings remained controversial for some time. The association between streptococci and scarlet fever was confirmed by Alphonse Dochez and George and Gladys Dick in the early 1900s. Also in 1884, the world's first convalescent home for people with scarlet fever was opened at Brockley Hill, Stanmore, founded by Mary Wardell. Nil Filatov (in 1895) and Clement Dukes (in 1894) described an exanthematous disease which they thought was a form of rubella, but in 1900 Dukes described it as a separate illness which came to be known as Dukes' disease, Filatov's disease, or fourth disease. However, in 1979, Keith Powell identified it as in fact the same illness as the form of scarlet fever which is caused by staphylococcal exotoxin and is known as staphylococcal scalded skin syndrome. Scarlet fever serum made from horses' blood was used in the treatment of children beginning in 1900 and reduced mortality rates significantly. In 1906, the Austrian pediatrician Clemens von Pirquet postulated that disease-causing immune complexes were responsible for the nephritis that followed scarlet fever. Bacteriophages were discovered in 1915 by Frederick Twort; his work was overlooked, and bacteriophages were later rediscovered by Felix d'Herelle in 1917. The specific association of scarlet fever with the group A streptococci had to await the development of Rebecca Lancefield's streptococcal grouping scheme in the 1920s. George and Gladys Dick showed that cell-free filtrates could induce the erythematous reaction characteristic of scarlet fever, proving that this reaction was due to a toxin. Karelitz and Stempien discovered that extracts from human serum globulin and placental globulin can be used as lightening agents for scarlet fever, and this was later used as the basis for the Dick test. The association of scarlet fever and bacteriophages was described in 1926 by Cantacuzène (Ioan Cantacuzino) and Bonciu. There was a widespread epidemic of scarlet fever in 1922; amongst its victims was Agathe Whitehead. An antitoxin for scarlet fever was developed in 1924. The discovery of penicillin and its subsequent widespread use significantly reduced the mortality of this once-feared disease. The gene for the first toxin that causes this disease was cloned and sequenced in 1986 by Weeks and Ferretti.
The incidence of scarlet fever was reported to be increasing in countries including England, Wales, South Korea, Vietnam, China, and Hong Kong in the 2010s; the cause had not been established as of 2018. Cases were also reported to be increasing after the easing of restrictions imposed during the COVID-19 pandemic that started in 2020. The Dick test The Dick test, developed in 1924 by George F. Dick and Gladys Dick, was used to identify those susceptible to scarlet fever. It involved injecting a diluted preparation of the toxin produced by the streptococci known to cause scarlet fever; a reaction in the skin at the injection site identified people susceptible to developing scarlet fever. The reaction could be seen four hours after the injection, but was more noticeable after 24 hours. If no reaction was seen in the skin, then the person was assumed not to be at risk from the disease, having developed immunity to it.
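To make the antibiotic-selection steps in the Treatment section above easier to follow, here is the schematic Python sketch referenced there. It merely restates the decision points from the text; the function name and inputs are hypothetical, and it is an illustration, not clinical guidance.

def scarlet_fever_antibiotic(beta_lactam_allergy: bool,
                             type1_hypersensitivity: bool,
                             can_take_tablets: bool,
                             liquid_amoxicillin_available: bool) -> str:
    # Schematic restatement of the treatment text above; not clinical guidance.
    if beta_lactam_allergy:
        if type1_hypersensitivity:
            # Cephalosporins can still cross-react in type 1 hypersensitivity.
            return "clindamycin or erythromycin (10 days)"
        return "first-generation cephalosporin (10 days)"
    if can_take_tablets:
        return "penicillin V by mouth (10 days)"
    if liquid_amoxicillin_available:
        return "liquid amoxicillin (10 days, equally effective)"
    return "benzathine penicillin G, one-time intramuscular injection"

For example, scarlet_fever_antibiotic(False, False, False, False) returns the benzathine penicillin G option, matching the text's fallback when swallowing pills is not possible and no liquid product is available.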
Biology and health sciences
Infectious disease
null
92398
https://en.wikipedia.org/wiki/Streptococcal%20pharyngitis
Streptococcal pharyngitis
Streptococcal pharyngitis, also known as streptococcal sore throat (strep throat), is pharyngitis (an infection of the pharynx, the back of the throat) caused by Streptococcus pyogenes, a gram-positive, group A streptococcus. Common symptoms include fever, sore throat, red tonsils, and enlarged lymph nodes in the front of the neck. A headache and nausea or vomiting may also occur. Some develop a sandpaper-like rash, which is known as scarlet fever. Symptoms typically begin one to three days after exposure and last seven to ten days. Strep throat is spread by respiratory droplets from an infected person, released by talking, coughing, or sneezing, or by touching something that has droplets on it and then touching the mouth, nose, or eyes. It may also be spread directly through touching infected sores or by contact with skin infected with group A strep. The diagnosis is made based on the results of a rapid antigen detection test or throat culture. Some people may carry the bacteria without symptoms. Prevention is by frequent hand washing and not sharing eating utensils. There is no vaccine for the disease. Treatment with antibiotics is only recommended in those with a confirmed diagnosis. Those infected should stay away from other people until fever is gone and for at least 12 hours after starting treatment. Pain can be treated with paracetamol (acetaminophen) and nonsteroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen. Strep throat is a common bacterial infection in children. It is the cause of 15–40% of sore throats among children and 5–15% among adults. Cases are more common in late winter and early spring. Potential complications include rheumatic fever and peritonsillar abscess. Signs and symptoms The typical signs and symptoms of streptococcal pharyngitis are a sore throat, fever of greater than 38 °C (100.4 °F), tonsillar exudates (pus on the tonsils), and large cervical lymph nodes. Other symptoms include: headache, nausea and vomiting, abdominal pain, muscle pain, or a scarlatiniform rash or palatal petechiae, the latter being an uncommon but highly specific finding. Symptoms typically begin one to three days after exposure and last seven to ten days. Strep throat is unlikely when any of the symptoms of red eyes, hoarseness, runny nose, or mouth ulcers are present. It is also unlikely when there is no fever. Cause Strep throat is caused by group A β-hemolytic Streptococcus (GAS or S. pyogenes). Humans are the primary natural reservoir for group A streptococcus. Other bacteria, such as non–group A β-hemolytic streptococci and Fusobacterium, may also cause pharyngitis. It is spread by direct, close contact with an infected person; thus crowding, as may be found in the military and schools, increases the rate of transmission. Dried bacteria in dust are not infectious, although moist bacteria on toothbrushes or similar items can persist for up to fifteen days. Contaminated food can result in outbreaks, but this is rare. Of children with no signs or symptoms, 12% carry GAS in their pharynx, and, after treatment, approximately 15% of those remain positive and are true "carriers". Diagnosis A number of scoring systems exist to help with diagnosis; however, their use is controversial due to insufficient accuracy. The modified Centor criteria are a set of five criteria; the total score indicates the probability of a streptococcal infection.
One point is given for each of the criteria (a worked scoring sketch appears at the end of this article):
- Absence of a cough
- Swollen and tender cervical lymph nodes
- Temperature > 38 °C (100.4 °F)
- Tonsillar exudate or swelling
- Age less than 15 (a point is subtracted if age > 44)
A score of one may indicate that no treatment or culture is needed, or it may indicate the need to perform further testing if other high-risk factors exist, such as a family member having the disease. The Infectious Diseases Society of America recommends against routine antibiotic treatment and considers antibiotics only appropriate when given after a positive test. Testing is not needed in children under three, as both group A strep and rheumatic fever are rare, unless a child has a sibling with the disease. Laboratory testing A throat culture is the gold standard for the diagnosis of streptococcal pharyngitis, with a sensitivity of 90–95%. A rapid strep test (also called rapid antigen detection testing or RADT) may also be used. While the rapid strep test is quicker, it has a lower sensitivity (70%) and a statistically comparable specificity (98%) relative to a throat culture. In areas of the world where rheumatic fever is uncommon, a negative rapid strep test is sufficient to rule out the disease. A positive throat culture or RADT in association with symptoms establishes a positive diagnosis in those in whom the diagnosis is in doubt. In adults, a negative RADT is sufficient to rule out the diagnosis; in children, however, a throat culture is recommended to confirm the result. Asymptomatic individuals should not be routinely tested with a throat culture or RADT, because a certain percentage of the population persistently "carries" the streptococcal bacteria in their throat without any harmful results. Differential diagnosis As the symptoms of streptococcal pharyngitis overlap with other conditions, it can be difficult to make the diagnosis clinically. Coughing, nasal discharge, diarrhea, and red, irritated eyes in addition to fever and sore throat are more indicative of a viral sore throat than of strep throat. The presence of marked lymph node enlargement along with sore throat, fever, and tonsillar enlargement may also occur in infectious mononucleosis. Other conditions that may present similarly include epiglottitis, Kawasaki disease, acute retroviral syndrome, Lemierre's syndrome, Ludwig's angina, peritonsillar abscess, and retropharyngeal abscess. Prevention Tonsillectomy may be a reasonable preventive measure in those with frequent throat infections (more than three a year). However, the benefits are small, and episodes typically lessen in time regardless of measures taken. Recurrent episodes of pharyngitis which test positive for GAS may also represent a person who is a chronic carrier of GAS who is getting recurrent viral infections. Treating people who have been exposed but who are without symptoms is not recommended. Treating people who are carriers of GAS is not recommended, as the risk of spread and complications is low. Treatment Untreated streptococcal pharyngitis usually resolves within a few days. Treatment with antibiotics shortens the duration of the acute illness by about 16 hours. The primary reason for treatment with antibiotics is to reduce the risk of complications such as rheumatic fever and retropharyngeal abscesses. Antibiotics prevent acute rheumatic fever if given within 9 days of the onset of symptoms. Pain medication Pain medication such as NSAIDs and paracetamol (acetaminophen) helps in the management of pain associated with strep throat. Viscous lidocaine may also be useful.
While steroids may help with the pain, they are not routinely recommended. Aspirin may be used in adults but is not recommended in children due to the risk of Reye syndrome. Antibiotics The antibiotic of choice in the United States for streptococcal pharyngitis is penicillin V, due to safety, cost, and effectiveness. Amoxicillin is preferred in Europe. In India, where the risk of rheumatic fever is higher, intramuscular benzathine penicillin G is the first choice for treatment. Appropriate antibiotics decrease the average 3–5 day duration of symptoms by about one day and also reduce contagiousness. They are primarily prescribed to reduce rare complications such as rheumatic fever and peritonsillar abscess. The arguments in favor of antibiotic treatment should be balanced by the consideration of possible side effects, and it is reasonable to suggest that no antimicrobial treatment be given to healthy adults who have adverse reactions to medication or those at low risk of complications. Antibiotics are prescribed for strep throat at a higher rate than would be expected from how common it is. Erythromycin and other macrolides or clindamycin are recommended for people with severe penicillin allergies. First-generation cephalosporins may be used in those with less severe allergies, and some low-certainty evidence suggests cephalosporins are superior to penicillin. These later-generation antibiotics, prescribed for 3–7 days, show an effect similar to that of the standard ten days of penicillin when used in areas with low rates of rheumatic heart disease. Streptococcal infections may also lead to acute glomerulonephritis; however, the incidence of this side effect is not reduced by the use of antibiotics. Prognosis The symptoms of strep throat usually improve within three to five days, irrespective of treatment. Treatment with antibiotics reduces the risk of complications and transmission; children may return to school 24 hours after antibiotics are administered. The risk of complications in adults is low. In children, acute rheumatic fever is rare in most of the developed world. It is, however, the leading cause of acquired heart disease in India, sub-Saharan Africa, and some parts of Australia. Complications Complications arising from streptococcal throat infections include:
- Acute rheumatic fever
- Scarlet fever
- Streptococcal toxic shock syndrome
- Glomerulonephritis
- PANDAS syndrome
- Peritonsillar abscess
- Cervical lymphadenitis
- Mastoiditis
The economic cost of the disease in the United States in children is approximately $350 million annually. Epidemiology Pharyngitis, the broader category into which streptococcal pharyngitis falls, is diagnosed in 11 million people annually in the United States. It is the cause of 15–40% of sore throats among children and 5–15% among adults. Cases usually occur in late winter and early spring.
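The following short Python sketch is the worked scoring illustration referenced in the Diagnosis section above. It totals the modified Centor points exactly as listed there; the function and argument names are hypothetical, and the 38 °C cutoff is the temperature criterion quoted in that section.

def modified_centor_score(no_cough: bool,
                          swollen_tender_nodes: bool,
                          temperature_c: float,
                          tonsillar_exudate_or_swelling: bool,
                          age_years: int) -> int:
    # One point per criterion, as described in the Diagnosis section.
    score = 0
    if no_cough:
        score += 1
    if swollen_tender_nodes:
        score += 1
    if temperature_c > 38.0:
        score += 1
    if tonsillar_exudate_or_swelling:
        score += 1
    if age_years < 15:
        score += 1
    elif age_years > 44:
        score -= 1  # a point is subtracted for age over 44
    return score

For example, modified_centor_score(True, True, 38.6, False, 10) returns 4; the text above discusses the interpretation of low scores, with higher totals indicating a higher probability of streptococcal infection.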
Biology and health sciences
Bacterial infections
Health
92399
https://en.wikipedia.org/wiki/Impetigo
Impetigo
Impetigo is a contagious bacterial infection that involves the superficial skin. The most common presentation is yellowish crusts on the face, arms, or legs. Less commonly there may be large blisters which affect the groin or armpits. The lesions may be painful or itchy. Fever is uncommon. It is typically due to either Staphylococcus aureus or Streptococcus pyogenes. Risk factors include attending day care, crowding, poor nutrition, diabetes mellitus, contact sports, and breaks in the skin such as from mosquito bites, eczema, scabies, or herpes. Through contact it can spread around the body or between people. Diagnosis is typically based on the symptoms and appearance. Prevention is by hand washing, avoiding people who are infected, and cleaning injuries. Treatment is typically with antibiotic creams such as mupirocin or fusidic acid. Antibiotics by mouth, such as cefalexin, may be used if large areas are affected. Antibiotic-resistant forms have been found. Healing generally occurs without scarring. Impetigo affected about 140 million people (2% of the world population) in 2010. It can occur at any age, but is most common in young children. In some places the condition is also known as "school sores". Without treatment people typically get better within three weeks. Recurring infections can occur due to colonization of the nose by the bacteria. Complications may include cellulitis or poststreptococcal glomerulonephritis. The name is from the Latin for 'attack'. Signs and symptoms Contagious impetigo This most common form of impetigo, also called nonbullous impetigo, most often begins as a red sore near the nose or mouth, which soon breaks, leaking pus or fluid, and forms a honey-colored scab, followed by a red mark which often heals without leaving a scar. Sores are not painful, but they may be itchy. Lymph nodes in the affected area may be swollen, but fever is rare. Touching or scratching the sores may easily spread the infection to other parts of the body. Skin ulcers with redness and scarring also may result from scratching or abrading the skin. Bullous impetigo Bullous impetigo, mainly seen in children younger than two years, involves painless, fluid-filled blisters, mostly on the arms, legs, and trunk, surrounded by red and itchy (but not sore) skin. The blisters may be large or small. After they break, they form yellow scabs. Ecthyma Ecthyma, a deeper, ulcerative form of impetigo, produces painful fluid- or pus-filled sores with redness of the skin, usually on the arms and legs, which become ulcers that penetrate deeper into the dermis. After they break open, they form hard, thick, gray-yellow scabs, which sometimes leave scars. Ecthyma may be accompanied by swollen lymph nodes in the affected area. Causes Impetigo is primarily caused by Staphylococcus aureus, and sometimes by Streptococcus pyogenes. Both the bullous and nonbullous forms are primarily caused by S. aureus, with Streptococcus also commonly being involved in the nonbullous form. Predisposing factors Impetigo is more likely to infect children aged 2–5, especially those that attend school or day care. 70% of cases are the nonbullous form and 30% are the bullous form. Impetigo occurs more frequently among people who live in warm climates. Transmission The infection is spread by direct contact with lesions or with nasal carriers. The incubation period is 1–3 days after exposure to Streptococcus and 4–10 days for Staphylococcus. Dried streptococci in the air are not infectious to intact skin. Scratching may spread the lesions.
Diagnosis Impetigo is usually diagnosed based on its appearance. It generally appears as honey-colored scabs formed from dried serum and is often found on the arms, legs, or face. If a visual diagnosis is unclear, a culture may be done to test for resistant bacteria. Differential diagnosis Other conditions that can result in symptoms similar to the common form include contact dermatitis, herpes simplex virus, discoid lupus, and scabies. Other conditions that can result in symptoms similar to the blistering form include other bullous skin diseases, burns, and necrotizing fasciitis. Prevention To prevent the spread of impetigo, the skin and any open wounds should be kept clean and covered. Care should be taken to keep fluids from an infected person away from the skin of a non-infected person. Washing hands, linens, and affected areas will lower the likelihood of contact with infected fluids. Scratching can spread the sores; keeping nails short will reduce the chances of spreading. Infected people should avoid contact with others and eliminate sharing of clothing or linens. Children with impetigo can return to school 24 hours after starting antibiotic therapy as long as their draining lesions are covered. Treatment Antibiotics, either as a cream or by mouth, are usually prescribed. Mild cases may be treated with mupirocin ointments. In 95% of cases, a single seven-day antibiotic course results in resolution in children. It has been advocated that topical antiseptics are inferior to topical antibiotics, and therefore should not be used as a replacement. However, the National Institute for Health and Care Excellence (NICE) as of February 2020 recommends a hydrogen peroxide 1% cream antiseptic rather than topical antibiotics for localised non-bullous impetigo in otherwise well individuals. This recommendation is part of an effort to reduce the overuse of antimicrobials that may contribute to the development of resistant organisms such as MRSA. More severe cases require oral antibiotics, such as dicloxacillin, flucloxacillin, or erythromycin. Alternatively, amoxicillin combined with clavulanate potassium, first-generation cephalosporins, and many others may also be used as antibiotic treatment. Alternatives for people who are seriously allergic to penicillin, or for infections with methicillin-resistant Staphylococcus aureus, include doxycycline, clindamycin, and trimethoprim-sulphamethoxazole, although doxycycline should not be used in children under the age of eight years due to the risk of drug-induced tooth discolouration. When streptococci alone are the cause, penicillin is the drug of choice. When the condition presents with ulcers, valacyclovir, an antiviral, may be given in case a viral infection is causing the ulcer. Prognosis Without treatment, individuals with impetigo typically get better within three weeks. Complications may include cellulitis or poststreptococcal glomerulonephritis. Rheumatic fever does not appear to be related. Epidemiology Globally, impetigo affects more than 162 million children in low- to middle-income countries. The rates are highest in countries with low available resources, and the disease is especially prevalent in the region of Oceania. The tropical climate and high population density in lower socioeconomic regions contribute to these high rates. In the United Kingdom, about 2.8% of children under the age of 4 contract impetigo each year; this decreases to 1.6% for children up to 15 years old.
As age increases, the rate of impetigo declines, but all ages are still susceptible. History Impetigo was originally described and differentiated by the English dermatologist William Tilbury Fox around 1864. The word impetigo is the generic Latin word for 'skin eruption', and it stems from the verb 'to attack' (as in impetus). Before the discovery of antibiotics, the disease was treated with an application of the antiseptic gentian violet, which was an effective treatment.
Biology and health sciences
Bacterial infections
Health
92410
https://en.wikipedia.org/wiki/Shigella
Shigella
Shigella is a genus of bacteria that is Gram-negative, facultatively anaerobic, non–spore-forming, nonmotile, and rod-shaped, and is genetically nested within Escherichia. The genus is named after Kiyoshi Shiga, who discovered it in 1897. Shigella causes disease in primates, but not in other mammals; it is the causative agent of human shigellosis. It is only naturally found in humans and gorillas. During infection, it typically causes dysentery. Shigella is a leading cause of bacterial diarrhea worldwide, with an estimated 80–165 million cases and 74,000 to 600,000 deaths annually. It is one of the top four pathogens that cause moderate-to-severe diarrhea in African and South Asian children. Classification Shigella species are classified by three serogroups and one serotype:
- Serogroup A: S. dysenteriae (15 serotypes)
- Serogroup B: S. flexneri (9 serotypes)
- Serogroup C: S. boydii (19 serotypes)
- Serogroup D: S. sonnei (one serotype)
Groups A–C are physiologically similar; S. sonnei (group D) can be differentiated based on biochemical metabolism assays. Three Shigella groups are the major disease-causing species: S. flexneri is the most frequently isolated species worldwide, and accounts for 60% of cases in the developing world; S. sonnei causes 77% of cases in the developed world, compared to only 15% of cases in the developing world; and S. dysenteriae is usually the cause of epidemics of dysentery, particularly in confined populations such as refugee camps. Each of the Shigella genomes includes a virulence plasmid that encodes conserved primary virulence determinants. The Shigella chromosomes share most of their genes with those of E. coli K12 strain MG1655. Phylogenetic studies indicate Shigella is more appropriately treated as a subgroup of Escherichia (see Escherichia coli#Diversity for details). Pathogenesis Shigella infection is typically acquired by ingestion. Depending on the host's health, fewer than 100 bacterial cells may be enough to cause an infection. Shigella species generally invade the epithelial lining of the colon, causing severe inflammation and death of the cells lining the colon. This inflammation produces the diarrhea, and even dysentery, that is the hallmark of Shigella infection. Toxins produced by some strains contribute to disease during infection. S. flexneri strains produce ShET1 and ShET2, which may contribute to diarrhea. S. dysenteriae strains produce the hemolytic Shiga toxin, which is similar to the verotoxin produced by enterohemorrhagic E. coli. Both Shiga toxin and verotoxin are associated with causing potentially fatal hemolytic-uremic syndrome. Because they do not interact with the apical surface of epithelial cells, preferring the basolateral side, Shigella species invade the host through the M-cells interspersed in the epithelia of the small intestine. Shigella uses a type-III secretion system, which acts as a biological syringe to translocate toxic effector proteins into the target human cell. The effector proteins can alter the metabolism of the target cell, leading, for example, to the lysis of vacuolar membranes or reorganization of actin polymerization to facilitate intracellular motility of Shigella bacteria inside the host cell. For instance, the IcsA effector protein (an autotransporter, not a type-III secretion-system effector) triggers actin reorganization by N-WASP recruitment of Arp2/3 complexes, promoting cell-to-cell spread.
After infection, Shigella cells multiply intracellularly and spread to neighboring epithelial cells, resulting in tissue destruction and the characteristic pathology of shigellosis. The most common symptoms are diarrhea, fever, nausea, vomiting, stomach cramps, and flatulence. Infection is also commonly known to cause large and painful bowel movements. The stool may contain blood, mucus, or pus. Hence, Shigella cells may cause dysentery. In rare cases, young children may have seizures. Symptoms can take as long as a week to appear, but most often begin two to four days after ingestion. Symptoms usually last for several days, but can last for weeks. Shigella is implicated as one of the pathogenic causes of reactive arthritis worldwide. Discovery The Shigella genus is named after Japanese physician Kiyoshi Shiga, who researched the cause of dysentery. Shiga entered the Tokyo Imperial University School of Medicine in 1892, during which he attended a lecture by Shibasaburo Kitasato. Shiga was impressed by Kitasato's intellect and confidence, so after graduating, he went to work for him as a research assistant at the Institute for Infectious Diseases. In 1897, Shiga focused his efforts on what the Japanese referred to as a sekiri (dysentery) outbreak. Such epidemics were detrimental to the Japanese people and occurred often in the late 19th century. The 1897 sekiri epidemic affected >91,000, with a mortality rate of >20%. Shiga studied 32 dysentery patients and used Koch's postulates to successfully isolate and identify the bacterium causing the disease. He continued to study and characterize the bacterium, identified its methods of (Shiga-) toxin production, and worked to create a vaccine for the disease.
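For quick reference, the serogroup classification above can be restated as a small lookup table; a minimal Python sketch follows (the structure and names are illustrative, not from the source):

SHIGELLA_SEROGROUPS = {
    "A": ("S. dysenteriae", 15),
    "B": ("S. flexneri", 9),
    "C": ("S. boydii", 19),
    "D": ("S. sonnei", 1),
}

def describe_serogroup(group: str) -> str:
    # Look up the species and serotype count for a serogroup letter.
    species, n = SHIGELLA_SEROGROUPS[group]
    return f"Serogroup {group}: {species} ({n} serotype{'s' if n > 1 else ''})"

For example, describe_serogroup("B") returns "Serogroup B: S. flexneri (9 serotypes)".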
Biology and health sciences
Gram-negative bacteria
Plants
92447
https://en.wikipedia.org/wiki/Superoxide
Superoxide
In chemistry, a superoxide is a compound that contains the superoxide ion, which has the chemical formula O2−. The systematic name of the anion is dioxide(1−). The reactive oxygen ion superoxide is particularly important as the product of the one-electron reduction of dioxygen (O2), which occurs widely in nature. Molecular oxygen (dioxygen) is a diradical containing two unpaired electrons, and superoxide results from the addition of an electron which fills one of the two degenerate molecular orbitals, leaving a charged ionic species with a single unpaired electron and a net negative charge of −1. Both dioxygen and the superoxide anion are free radicals that exhibit paramagnetism. Superoxide was historically also known as "hyperoxide". Salts Superoxide forms salts with alkali metals and alkaline earth metals. The salts sodium superoxide (NaO2), potassium superoxide (KO2), rubidium superoxide (RbO2) and caesium superoxide (CsO2) are prepared by the reaction of O2 with the respective alkali metal. The alkali salts of O2− are orange-yellow in color and quite stable, if they are kept dry. Upon dissolution of these salts in water, however, the dissolved O2− undergoes disproportionation (dismutation) extremely rapidly (in a pH-dependent manner): 4 O2− + 2 H2O → 3 O2 + 4 OH− This reaction (with moisture and carbon dioxide in exhaled air) is the basis of the use of potassium superoxide as an oxygen source in chemical oxygen generators, such as those used on the Space Shuttle and on submarines. Superoxides are also used in firefighters' oxygen tanks to provide a readily available source of oxygen. In this process, O2− acts as a Brønsted base, initially forming the hydroperoxyl radical (HO2). The superoxide anion, O2−, and its protonated form, hydroperoxyl, are in equilibrium in an aqueous solution: HO2 ⇌ O2− + H+ Given that the hydroperoxyl radical has a pKa of around 4.8, superoxide predominantly exists in the anionic form at neutral pH. Potassium superoxide is soluble in dimethyl sulfoxide (facilitated by crown ethers) and is stable as long as protons are not available. Superoxide can also be generated in aprotic solvents by cyclic voltammetry. Superoxide salts also decompose in the solid state, but this process requires heating: 2 KO2 → K2O2 + O2 Biology Superoxide is common in biology, reflecting the pervasiveness of O2 and its ease of reduction. Superoxide is implicated in a number of biological processes, some with negative connotations, and some with beneficial effects. Like hydroperoxyl, superoxide is classified as a reactive oxygen species. It is generated by the immune system to kill invading microorganisms. In phagocytes, superoxide is produced in large quantities by the enzyme NADPH oxidase for use in oxygen-dependent killing mechanisms of invading pathogens. Mutations in the gene coding for the NADPH oxidase cause an immunodeficiency syndrome called chronic granulomatous disease, characterized by extreme susceptibility to infection, especially by catalase-positive organisms. In turn, micro-organisms genetically engineered to lack the superoxide-scavenging enzyme superoxide dismutase (SOD) lose virulence. Superoxide is also deleterious when produced as a byproduct of mitochondrial respiration (most notably by Complex I and Complex III), as well as by several other enzymes, for example xanthine oxidase, which can catalyze the transfer of electrons directly to molecular oxygen under strongly reducing conditions. Because superoxide is toxic at high concentrations, nearly all aerobic organisms express SOD.
SOD efficiently catalyzes the disproportionation of superoxide: 2 O2− + 2 H+ → O2 + H2O2 Other proteins that can be both oxidized and reduced by superoxide (such as hemoglobin) have weak SOD-like activity. Genetic inactivation ("knockout") of SOD produces deleterious phenotypes in organisms ranging from bacteria to mice and has provided important clues as to the mechanisms of toxicity of superoxide in vivo. Yeast lacking both mitochondrial and cytosolic SOD grow very poorly in air, but quite well under anaerobic conditions. Absence of cytosolic SOD causes a dramatic increase in mutagenesis and genomic instability. Mice lacking mitochondrial SOD (MnSOD) die around 21 days after birth due to neurodegeneration, cardiomyopathy, and lactic acidosis. Mice lacking cytosolic SOD (CuZnSOD) are viable but suffer from multiple pathologies, including reduced lifespan, liver cancer, muscle atrophy, cataracts, thymic involution, haemolytic anemia, and a very rapid age-dependent decline in female fertility. Superoxide may contribute to the pathogenesis of many diseases (the evidence is particularly strong for radiation poisoning and hyperoxic injury), and perhaps also to aging via the oxidative damage that it inflicts on cells. While the evidence for the action of superoxide in the pathogenesis of some conditions is strong (for instance, mice and rats overexpressing CuZnSOD or MnSOD are more resistant to strokes and heart attacks), the role of superoxide in aging must be regarded as unproven, for now. In model organisms (yeast, the fruit fly Drosophila, and mice), genetically knocking out CuZnSOD shortens lifespan and accelerates certain features of aging: cataracts, muscle atrophy, macular degeneration, and thymic involution. But the converse, increasing the levels of CuZnSOD, does not seem to consistently increase lifespan (except perhaps in Drosophila). The most widely accepted view is that oxidative damage (resulting from multiple causes, including superoxide) is but one of several factors limiting lifespan. The binding of O2 by reduced (Fe(II)) heme proteins involves formation of an Fe(III) superoxide complex. Assay in biological systems The assay of superoxide in biological systems is complicated by its short half-life. One approach that has been used in quantitative assays converts superoxide to hydrogen peroxide, which is relatively stable. Hydrogen peroxide is then assayed by a fluorimetric method. As a free radical, superoxide has a strong EPR signal, and it is possible to detect superoxide directly using this method. For practical purposes, this can be achieved only in vitro under non-physiological conditions, such as high pH (which slows the spontaneous dismutation), with the enzyme xanthine oxidase. Researchers have developed a series of tool compounds termed "spin traps" that can react with superoxide, forming a meta-stable radical (half-life 1–15 minutes) which can be more readily detected by EPR. Superoxide spin-trapping was initially carried out with DMPO, but phosphorus derivatives with improved half-lives, such as DEPPMPO and DIPPMPO, have become more widely used. Bonding and structure Superoxides are compounds in which the oxidation number of oxygen is −1/2. Whereas molecular oxygen (dioxygen) is a diradical containing two unpaired electrons, the addition of an electron fills one of its two degenerate molecular orbitals, leaving a charged ionic species with a single unpaired electron and a net negative charge of −1. Both dioxygen and the superoxide anion are free radicals that exhibit paramagnetism.
The derivatives of dioxygen have characteristic O–O distances that correlate with the order of the O–O bond.
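As a quantitative illustration of the hydroperoxyl/superoxide equilibrium described in the Salts section above (pKa ≈ 4.8), the fraction of the conjugate pair present as the superoxide anion at a given pH follows from the Henderson–Hasselbalch relation. A minimal Python sketch, with the pKa value taken from the text:

def fraction_superoxide_anion(ph: float, pka: float = 4.8) -> float:
    # Fraction of the HO2/O2- conjugate pair present as O2- at this pH.
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

At physiological pH, fraction_superoxide_anion(7.4) returns about 0.997, consistent with the statement above that superoxide exists predominantly in the anionic form at neutral pH.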
Physical sciences
Oxide salts
Chemistry
92465
https://en.wikipedia.org/wiki/Lambert%20W%20function
Lambert W function
In mathematics, the Lambert W function, also called the omega function or product logarithm, is a multivalued function, namely the branches of the converse relation of the function f(w) = we^w, where w is any complex number and e^w is the exponential function. The function is named after Johann Lambert, who considered a related problem in 1758. Building on Lambert's work, Leonhard Euler described the function per se in 1783. For each integer k there is one branch, denoted by W_k(z), which is a complex-valued function of one complex argument. W_0 is known as the principal branch. These functions have the following property: if z and w are any complex numbers, then we^w = z holds if and only if w = W_k(z) for some integer k. When dealing with real numbers only, the two branches W_0 and W_{−1} suffice: for real numbers x and y the equation ye^y = x can be solved for y only if x ≥ −1/e; y = W_0(x) yields the solution if x ≥ 0, and the two values y = W_0(x) and y = W_{−1}(x) if −1/e ≤ x < 0. The Lambert W function's branches cannot be expressed in terms of elementary functions. It is useful in combinatorics, for instance, in the enumeration of trees. It can be used to solve various equations involving exponentials (e.g. the maxima of the Planck, Bose–Einstein, and Fermi–Dirac distributions) and also occurs in the solution of delay differential equations, such as y′(t) = a y(t − 1). In biochemistry, and in particular enzyme kinetics, a closed-form solution for the time-course kinetics analysis of Michaelis–Menten kinetics is described in terms of the Lambert W function. Terminology The notation convention chosen here (with W_0 and W_{−1}) follows the canonical reference on the Lambert W function by Corless, Gonnet, Hare, Jeffrey and Knuth. The name "product logarithm" can be understood as follows: since the inverse function of f(w) = e^w is termed the logarithm, it makes sense to call the inverse "function" of the product we^w the "product logarithm". (Technical note: like the complex logarithm, it is multivalued, and thus W is described as a converse relation rather than an inverse function.) It is related to the omega constant, which is equal to W_0(1) ≈ 0.56714. History Lambert first considered the related Lambert's Transcendental Equation in 1758, which led to an article by Leonhard Euler in 1783 that discussed the special case of we^w. The equation Lambert considered was x = x^m + q. Euler transformed this equation into the form x^a − x^b = (a − b)vx^(a+b). Both authors derived a series solution for their equations. Once Euler had solved this equation, he considered the case a = b. Taking limits, he derived the equation ln x = vx^a. He then put a = 1 and obtained a convergent series solution for the resulting equation, expressing x in terms of v. After taking derivatives with respect to x and some manipulation, the standard form of the Lambert function is obtained. In 1993, it was reported that the Lambert W function provides an exact solution to the quantum-mechanical double-well Dirac delta function model for equal charges, a fundamental problem in physics. Prompted by this, Rob Corless and developers of the Maple computer algebra system realized that "the Lambert W function has been widely used in many fields, but because of differing notation and the absence of a standard name, awareness of the function was not as high as it should have been." Another example where this function is found is in Michaelis–Menten kinetics. Although it was widely believed that the Lambert W function cannot be expressed in terms of elementary (Liouvillian) functions, the first published proof did not appear until 2008. Elementary properties, branches and range There are countably many branches of the W function, denoted by W_k(z), for integer k; W_0 being the main (or principal) branch.
is defined for all complex numbers z while with is defined for all non-zero z. With and for all . The branch point for the principal branch is at , with a branch cut that extends to along the negative real axis. This branch cut separates the principal branch from the two branches and . In all branches with , there is a branch point at and a branch cut along the entire negative real axis. The functions are all injective and their ranges are disjoint. The range of the entire multivalued function is the complex plane. The image of the real axis is the union of the real axis and the quadratrix of Hippias, the parametric curve . Inverse The range plot above also delineates the regions in the complex plane where the simple inverse relationship is true. implies that there exists an such that , where depends upon the value of . The value of the integer changes abruptly when is at the branch cut of , which means that , except for where it is . Defining , where and are real, and expressing in polar coordinates, it is seen that For , the branch cut for is the non-positive real axis, so that and For , the branch cut for is the real axis with , so that the inequality becomes Inside the regions bounded by the above, there are no discontinuous changes in , and those regions specify where the function is simply invertible, i.e. . Calculus Derivative By implicit differentiation, one can show that all branches of satisfy the differential equation ( is not differentiable for .) As a consequence, that gets the following formula for the derivative of W: Using the identity , gives the following equivalent formula: At the origin we have Integral The function , and many other expressions involving , can be integrated using the substitution , i.e. : (The last equation is more common in the literature but is undefined at ). One consequence of this (using the fact that ) is the identity Asymptotic expansions The Taylor series of around 0 can be found using the Lagrange inversion theorem and is given by The radius of convergence is , as may be seen by the ratio test. The function defined by this series can be extended to a holomorphic function defined on all complex numbers with a branch cut along the interval ; this holomorphic function defines the principal branch of the Lambert function. For large values of , is asymptotic to where , , and is a non-negative Stirling number of the first kind. Keeping only the first two terms of the expansion, The other real branch, , defined in the interval , has an approximation of the same form as approaches zero, with in this case and . Integer and complex powers Integer powers of also admit simple Taylor (or Laurent) series expansions at zero: More generally, for , the Lagrange inversion formula gives which is, in general, a Laurent series of order . Equivalently, the latter can be written in the form of a Taylor expansion of powers of : which holds for any and . Bounds and inequalities A number of non-asymptotic bounds are known for the Lambert function. Hoorfar and Hassani showed that the following bound holds for : They also showed the general bound for every and , with equality only for . The bound allows many other bounds to be made, such as taking which gives the bound In 2013 it was proven that the branch can be bounded as follows: Roberto Iacono and John P. Boyd enhanced the bounds as follows: Identities A few identities follow from the definition: Note that, since is not injective, it does not always hold that , much like with the inverse trigonometric functions. 
For fixed and , the equation has two real solutions in , one of which is of course . Then, for and , as well as for and , is the other solution. Some other identities: (which can be extended to other and if the correct branch is chosen). Substituting in the definition: With Euler's iterated exponential : Special values The following are special values of the principal branch: (the omega constant) Special values of the branch : Representations The principal branch of the Lambert function can be represented by a proper integral, due to Poisson: Another representation of the principal branch was found by Kalugin–Jeffrey–Corless: The following continued fraction representation also holds for the principal branch: Also, if : In turn, if , then Other formulas Definite integrals There are several useful definite integral formulas involving the principal branch of the function, including the following: The first identity can be found by writing the Gaussian integral in polar coordinates. The second identity can be derived by making the substitution , which gives Thus The third identity may be derived from the second by making the substitution and the first can also be derived from the third by the substitution . Except for along the branch cut (where the integral does not converge), the principal branch of the Lambert function can be computed by the following integral: where the two integral expressions are equivalent due to the symmetry of the integrand. Indefinite integrals Applications Solving equations The Lambert function is used to solve equations in which the unknown quantity occurs both in the base and in the exponent, or both inside and outside of a logarithm. The strategy is to convert such an equation into one of the form and then to solve for using the function. For example, the equation (where is an unknown real number) can be solved by rewriting it as This last equation has the desired form and the solutions for real x are: and thus: Generally, the solution to is: where a, b, and c are complex constants, with b and c not equal to zero, and the W function is of any integer order. Inviscid flows Applying the unusual accelerating traveling-wave Ansatz in the form of (where , , a, x and t are the density, the reduced variable, the acceleration, the spatial and the temporal variables) the fluid density of the corresponding Euler equation can be given with the help of the W function. Viscous flows Granular and debris flow fronts and deposits, and the fronts of viscous fluids in natural events and in laboratory experiments can be described by using the Lambert–Euler omega function as follows: where is the debris flow height, is the channel downstream position, is the unified model parameter consisting of several physical and geometrical parameters of the flow, flow height and the hydraulic pressure gradient. In pipe flow, the Lambert W function is part of the explicit formulation of the Colebrook equation for finding the Darcy friction factor. This factor is used to determine the pressure drop through a straight run of pipe when the flow is turbulent. Time-dependent flow in simple branch hydraulic systems The principal branch of the Lambert function is employed in the field of mechanical engineering, in the study of time dependent transfer of Newtonian fluids between two reservoirs with varying free surface levels, using centrifugal pumps. 
The Lambert function provided an exact solution to the flow rate of fluid in both the laminar and turbulent regimes: where is the initial flow rate and is time. Neuroimaging The Lambert function is employed in the field of neuroimaging for linking cerebral blood flow and oxygen consumption changes within a brain voxel, to the corresponding blood oxygenation level dependent (BOLD) signal. Chemical engineering The Lambert function is employed in the field of chemical engineering for modeling the porous electrode film thickness in a glassy carbon based supercapacitor for electrochemical energy storage. The Lambert function provides an exact solution for a gas phase thermal activation process where growth of carbon film and combustion of the same film compete with each other. Crystal growth In the crystal growth, the negative principal of the Lambert W-function can be used to calculate the distribution coefficient, , and solute concentration in the melt, , from the Scheil equation: Materials science The Lambert function is employed in the field of epitaxial film growth for the determination of the critical dislocation onset film thickness. This is the calculated thickness of an epitaxial film, where due to thermodynamic principles the film will develop crystallographic dislocations in order to minimise the elastic energy stored in the films. Prior to application of Lambert for this problem, the critical thickness had to be determined via solving an implicit equation. Lambert turns it in an explicit equation for analytical handling with ease. Semiconductor It was shown that a W-function describes the relation between voltage, current and resistance in a diode. Porous media The Lambert function has been employed in the field of fluid flow in porous media to model the tilt of an interface separating two gravitationally segregated fluids in a homogeneous tilted porous bed of constant dip and thickness where the heavier fluid, injected at the bottom end, displaces the lighter fluid that is produced at the same rate from the top end. The principal branch of the solution corresponds to stable displacements while the −1 branch applies if the displacement is unstable with the heavier fluid running underneath the lighter fluid. Bernoulli numbers and Todd genus The equation (linked with the generating functions of Bernoulli numbers and Todd genus): can be solved by means of the two real branches and : This application shows that the branch difference of the function can be employed in order to solve other transcendental equations. Statistics The centroid of a set of histograms defined with respect to the symmetrized Kullback–Leibler divergence (also called the Jeffreys divergence ) has a closed form using the Lambert function. Pooling of tests for infectious diseases Solving for the optimal group size to pool tests so that at least one individual is infected involves the Lambert function. Exact solutions of the Schrödinger equation The Lambert function appears in a quantum-mechanical potential, which affords the fifth – next to those of the harmonic oscillator plus centrifugal, the Coulomb plus inverse square, the Morse, and the inverse square root potential – exact solution to the stationary one-dimensional Schrödinger equation in terms of the confluent hypergeometric functions. 
The potential is given as A peculiarity of the solution is that each of the two fundamental solutions that compose the general solution of the Schrödinger equation is given by a combination of two confluent hypergeometric functions of an argument proportional to The Lambert function also appears in the exact solution for the bound state energy of the one dimensional Schrödinger equation with a Double Delta Potential. Exact solution of QCD coupling constant In Quantum chromodynamics, the quantum field theory of the Strong interaction, the coupling constant is computed perturbatively, the order n corresponding to Feynman diagrams including n quantum loops. The first order, , solution is exact (at that order) and analytical. At higher orders, , there is no exact and analytical solution and one typically uses an iterative method to furnish an approximate solution. However, for second order, , the Lambert function provides an exact (if non-analytical) solution. Exact solutions of the Einstein vacuum equations In the Schwarzschild metric solution of the Einstein vacuum equations, the function is needed to go from the Eddington–Finkelstein coordinates to the Schwarzschild coordinates. For this reason, it also appears in the construction of the Kruskal–Szekeres coordinates. Resonances of the delta-shell potential The s-wave resonances of the delta-shell potential can be written exactly in terms of the Lambert function. Thermodynamic equilibrium If a reaction involves reactants and products having heat capacities that are constant with temperature then the equilibrium constant obeys for some constants , , and . When (equal to ) is not zero the value or values of can be found where equals a given value as follows, where can be used for . If and have the same sign there will be either two solutions or none (or one if the argument of is exactly ). (The upper solution may not be relevant.) If they have opposite signs, there will be one solution. Phase separation of polymer mixtures In the calculation of the phase diagram of thermodynamically incompatible polymer mixtures according to the Edmond-Ogston model, the solutions for binodal and tie-lines are formulated in terms of Lambert functions. Wien's displacement law in a D-dimensional universe Wien's displacement law is expressed as . With and , where is the spectral energy energy density, one finds , where is the number of degrees of freedom for spatial translation. The solution shows that the spectral energy density is dependent on the dimensionality of the universe. AdS/CFT correspondence The classical finite-size corrections to the dispersion relations of giant magnons, single spikes and GKP strings can be expressed in terms of the Lambert function. Epidemiology In the limit of the SIR model, the proportion of susceptible and recovered individuals has a solution in terms of the Lambert function. Determination of the time of flight of a projectile The total time of the journey of a projectile which experiences air resistance proportional to its velocity can be determined in exact form by using the Lambert function. Electromagnetic surface wave propagation The transcendental equation that appears in the determination of the propagation wave number of an electromagnetic axially symmetric surface wave (a low-attenuation single TM01 mode) propagating in a cylindrical metallic wire gives rise to an equation like (where and clump together the geometrical and physical factors of the problem), which is solved by the Lambert function. 
The first solution to this problem, due to Sommerfeld circa 1898, already contained an iterative method to determine the value of the Lambert function. Orthogonal trajectories of real ellipses The family of ellipses centered at is parameterized by eccentricity . The orthogonal trajectories of this family are given by the differential equation whose general solution is the family . Generalizations The standard Lambert function expresses exact solutions to transcendental algebraic equations (in ) of the form: where , and are real constants. The solution is Generalizations of the Lambert function include: An application to general relativity and quantum mechanics (quantum gravity) in lower dimensions, in fact a link (unknown prior to 2007) between these two areas, where the right-hand side of () is replaced by a quadratic polynomial in : where and are real distinct constants, the roots of the quadratic polynomial. Here, the solution is a function which has a single argument but the terms like and are parameters of that function. In this respect, the generalization resembles the hypergeometric function and the Meijer function but it belongs to a different class of functions. When , both sides of () can be factored and reduced to () and thus the solution reduces to that of the standard function. Equation () expresses the equation governing the dilaton field, from which is derived the metric of the or lineal two-body gravity problem in 1 + 1 dimensions (one spatial dimension and one time dimension) for the case of unequal rest masses, as well as the eigenenergies of the quantum-mechanical double-well Dirac delta function model for unequal charges in one dimension. Analytical solutions of the eigenenergies of a special case of the quantum mechanical three-body problem, namely the (three-dimensional) hydrogen molecule-ion. Here the right-hand side of () is replaced by a ratio of infinite order polynomials in : where and are distinct real constants and is a function of the eigenenergy and the internuclear distance . Equation () with its specialized cases expressed in () and () is related to a large class of delay differential equations. G. H. Hardy's notion of a "false derivative" provides exact multiple roots to special cases of (). Applications of the Lambert function in fundamental physical problems are not exhausted even for the standard case expressed in () as seen recently in the area of atomic, molecular, and optical physics. Plots Numerical evaluation The function may be approximated using Newton's method, with successive approximations to (so ) being The function may also be approximated using Halley's method, given in Corless et al. to compute . For real , it may be approximated by the quadratic-rate recursive formula of R. Iacono and J.P. Boyd: Lajos Lóczi proves that by using this iteration with an appropriate starting value , For the principal branch if : if if For the branch if if one can determine the maximum number of iteration steps in advance for any precision: if (Theorem 2.4): if (Theorem 2.9): if for the principal branch (Theorem 2.17): for the branch (Theorem 2.23): Toshio Fukushima has presented a fast method for approximating the real valued parts of the principal and secondary branches of the function without using any iteration. In this method the function is evaluated as a conditional switch of rational functions on transformed variables: where , , and are transformations of : . 
The rational functions involved, and their coefficients for the different subdomains, are listed in the referenced paper together with the values that determine the subdomains. With higher-degree polynomials in these rational functions, the method can approximate the function more accurately; for example, it can reach 24 bits of accuracy on 64-bit floating-point values, and Fukushima also offers an approximation with 50 bits of accuracy on 64-bit floats that uses 8th- and 7th-degree polynomials. Software The Lambert W function is implemented in many programming languages.
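As a concrete illustration of the Newton iteration mentioned under Numerical evaluation above: since w = W(z) solves we^w = z, Newton's method applied to f(w) = we^w − z gives the update w ← w − (we^w − z)/(e^w(1 + w)). The following minimal Python sketch implements it for the real principal branch; it is an illustration of the textbook iteration, not any particular library's implementation.

import math

def lambert_w0(z: float, tol: float = 1e-12, max_iter: int = 100) -> float:
    # Principal branch W_0 for real z >= -1/e, by Newton's method on we^w - z.
    if z < -1.0 / math.e:
        raise ValueError("W_0 is real only for z >= -1/e")
    # Simple starting guess: log-based for positive z, z itself otherwise.
    w = math.log(1.0 + z) if z > 0 else z
    for _ in range(max_iter):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1.0))
        w -= step
        if abs(step) <= tol * (1.0 + abs(w)):
            break
    return w

For example, lambert_w0(3.0) gives about 1.049909, and indeed 1.049909·e^1.049909 ≈ 3; this also illustrates the 'Solving equations' usage described above, since the solution of xe^x = 3 is x = W_0(3). Convergence slows near the branch point z = −1/e, where robust implementations switch to series expansions or to Halley's method.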
Mathematics
Specific functions
null
92512
https://en.wikipedia.org/wiki/Lipoprotein
Lipoprotein
A lipoprotein is a biochemical assembly whose primary function is to transport hydrophobic lipid (also known as fat) molecules in water, as in blood plasma or other extracellular fluids. Lipoproteins consist of a center of triglycerides and cholesterol, surrounded by a phospholipid outer shell, with the hydrophilic portions oriented outward toward the surrounding water and the lipophilic portions oriented inward toward the lipid center. A special kind of protein, called an apolipoprotein, is embedded in the outer shell, both stabilising the complex and giving it a functional identity that determines its role. Plasma lipoprotein particles are commonly divided into five main classes, based on size, lipid composition, and apolipoprotein content: HDL, LDL, IDL, VLDL, and chylomicrons. Subgroups of these plasma particles are primary drivers or modulators of atherosclerosis. Many enzymes, transporters, structural proteins, antigens, adhesins, and toxins are sometimes also classified as lipoproteins, since they are formed of both lipids and proteins. Scope Transmembrane lipoproteins Some transmembrane proteolipids, especially those found in bacteria, are referred to as lipoproteins; they are not related to the lipoprotein particles that this article is about. Such transmembrane proteins are difficult to isolate, as they bind tightly to the lipid membrane, often require lipids to display the proper structure, and can be water-insoluble. Detergents are usually required to isolate transmembrane lipoproteins from their associated biological membranes. Plasma lipoprotein particles Because fats are insoluble in water, they cannot be transported on their own in extracellular water, including blood plasma. Instead, they are surrounded by a hydrophilic external shell that functions as a transport vehicle. The role of lipoprotein particles is to transport fat molecules, such as triglycerides, phospholipids, and cholesterol, within the extracellular water of the body to all the cells and tissues of the body. The proteins included in the external shell of these particles, called apolipoproteins, are synthesized and secreted into the extracellular water by both the small intestine and liver cells. The external shell also contains phospholipids and cholesterol. All cells use and rely on fats and cholesterol as building blocks to create the multiple membranes that cells use both to control internal water content and internal water-soluble elements and to organize their internal structure and protein enzymatic systems. The outer shell of lipoprotein particles has the hydrophilic groups of phospholipids, cholesterol, and apolipoproteins directed outward. Such characteristics make the particles soluble in the salt-water-based blood pool. Triglycerides and cholesteryl esters are carried internally, shielded from the water by the outer shell. The kind of apolipoproteins contained in the outer shell determines the functional identity of the lipoprotein particles. The interaction of these apolipoproteins with enzymes in the blood, with each other, or with specific proteins on the surfaces of cells determines whether triglycerides and cholesterol will be added to or removed from the lipoprotein transport particles. Characterization in human plasma Structure Lipoproteins are complex particles that have a central hydrophobic core of non-polar lipids, primarily cholesteryl esters and triglycerides. This hydrophobic core is surrounded by a hydrophilic membrane consisting of phospholipids, free cholesterol, and apolipoproteins.
Plasma lipoproteins, found in blood plasma, are typically divided into five main classes based on size, lipid composition, and apolipoprotein content: HDL, LDL, IDL, VLDL and chylomicrons. Functions Metabolism The handling of lipoprotein particles in the body is referred to as lipoprotein particle metabolism. It is divided into two pathways, exogenous and endogenous, depending in large part on whether the lipoprotein particles in question are composed chiefly of dietary (exogenous) lipids or whether they originated in the liver (endogenous), through de novo synthesis of triglycerides. The hepatocytes are the main platform for the handling of triglycerides and cholesterol; the liver can also store certain amounts of glycogen and triglycerides. While adipocytes are the main storage cells for triglycerides, they do not produce any lipoproteins. Exogenous pathway Bile emulsifies fats contained in the chyme, then pancreatic lipase cleaves triglyceride molecules into two fatty acids and one 2-monoacylglycerol. Enterocytes readily absorb these small molecules from the chyme. Inside the enterocytes, fatty acids and monoacylglycerides are transformed again into triglycerides. These lipids are then assembled with apolipoprotein B-48 into nascent chylomicrons. These particles are then secreted into the lacteals in a process that depends heavily on apolipoprotein B-48. As they circulate through the lymphatic vessels, nascent chylomicrons bypass the liver circulation and are drained via the thoracic duct into the bloodstream. In the blood stream, nascent chylomicron particles interact with HDL particles, resulting in HDL donation of apolipoprotein C-II and apolipoprotein E to the nascent chylomicron. The chylomicron at this stage is then considered mature. Via apolipoprotein C-II, mature chylomicrons activate lipoprotein lipase (LPL), an enzyme on endothelial cells lining the blood vessels. LPL catalyzes the hydrolysis of triglycerides that ultimately releases glycerol and fatty acids from the chylomicrons. Glycerol and fatty acids can then be absorbed in peripheral tissues, especially adipose and muscle, for energy and storage. The hydrolyzed chylomicrons are now called chylomicron remnants. The chylomicron remnants continue circulating in the bloodstream until they interact via apolipoprotein E with chylomicron remnant receptors, found chiefly in the liver. This interaction causes the endocytosis of the chylomicron remnants, which are subsequently hydrolyzed within lysosomes. Lysosomal hydrolysis releases glycerol and fatty acids into the cell, which can be used for energy or stored for later use. Endogenous pathway The liver is the central platform for the handling of lipids: it is able to store glycogen and fats in its cells, the hepatocytes. Hepatocytes are also able to create triglycerides via de novo synthesis. They also produce bile from cholesterol. The intestines are responsible for absorbing cholesterol and transferring it into the bloodstream. In the hepatocytes, triglycerides and cholesteryl esters are assembled with apolipoprotein B-100 to form nascent VLDL particles. Nascent VLDL particles are released into the bloodstream via a process that depends upon apolipoprotein B-100. In the blood stream, nascent VLDL particles encounter HDL particles; as a result, HDL particles donate apolipoprotein C-II and apolipoprotein E to the nascent VLDL particle. Once loaded with apolipoproteins C-II and E, the nascent VLDL particle is considered mature.
VLDL particles circulate and encounter LPL expressed on endothelial cells. Apolipoprotein C-II activates LPL, causing hydrolysis of the VLDL particle and the release of glycerol and fatty acids. These products can be absorbed from the blood by peripheral tissues, principally adipose and muscle. The hydrolyzed VLDL particles are now called VLDL remnants or intermediate-density lipoproteins (IDLs). VLDL remnants can circulate and, via an interaction between apolipoprotein E and the remnant receptor, be absorbed by the liver, or they can be further hydrolyzed by hepatic lipase. Hydrolysis by hepatic lipase releases glycerol and fatty acids, leaving behind IDL remnants, called low-density lipoproteins (LDL), which have a relatively high cholesterol content. LDL circulates and is absorbed by the liver and peripheral cells. Binding of LDL to its target tissue occurs through an interaction between the LDL receptor and apolipoprotein B-100 on the LDL particle. Absorption occurs through endocytosis, and the internalized LDL particles are hydrolyzed within lysosomes, releasing lipids, chiefly cholesterol. Possible role in oxygen transport Plasma lipoproteins may carry oxygen gas. This property is due to the crystalline hydrophobic structure of lipids, which provides a more suitable environment for O2 solubility than an aqueous medium. Role in inflammation Inflammation, a biological system response to stimuli such as the introduction of a pathogen, has an underlying role in numerous systemic biological functions and pathologies. It is a useful response by the immune system when the body is exposed to pathogens, such as bacteria, in locations where they would prove harmful, but it can also have detrimental effects if left unregulated. It has been demonstrated that lipoproteins, specifically HDL, have important roles in the inflammatory process. When the body is functioning under normal, stable physiological conditions, HDL has been shown to be beneficial in several ways. LDL contains apolipoprotein B (apoB), which allows LDL to bind to different tissues, such as the artery wall if the glycocalyx has been damaged by high blood sugar levels. If oxidised, the LDL can become trapped in the proteoglycans, preventing its removal by HDL cholesterol efflux. Normally functioning HDL is able to prevent the oxidation of LDL and the subsequent inflammatory processes seen after oxidation. Lipopolysaccharide, or LPS, is the major pathogenic factor of the cell wall of Gram-negative bacteria. Gram-positive bacteria have a similar component, named lipoteichoic acid (LTA). HDL has the ability to bind LPS and LTA, creating HDL-LPS complexes that neutralize their harmful effects in the body and clear the LPS from the body. HDL also has significant roles interacting with cells of the immune system to modulate the availability of cholesterol and modulate the immune response. Under certain abnormal physiological conditions such as systemic infection or sepsis, the major components of HDL become altered: the composition and quantity of lipids and apolipoproteins change relative to normal physiological conditions, with a decrease in HDL cholesterol (HDL-C), phospholipids, and apoA-I (a major apolipoprotein of HDL that has been shown to have beneficial anti-inflammatory properties), and an increase in serum amyloid A. This altered composition of HDL is commonly referred to as acute-phase HDL in an acute-phase inflammatory response, during which time HDL can lose its ability to inhibit the oxidation of LDL.
In fact, this altered composition of HDL is associated with increased mortality and worse clinical outcomes in patients with sepsis. Classification By density Lipoproteins may be classified into five major groups, listed from larger and lower density to smaller and higher density. Lipoproteins are larger and less dense when the fat to protein ratio is increased. They are classified on the basis of electrophoresis, ultracentrifugation and nuclear magnetic resonance spectroscopy via the Vantera Analyzer. Chylomicrons carry triglycerides (fat) from the intestines to the liver, to skeletal muscle, and to adipose tissue. Very-low-density lipoproteins (VLDL) carry (newly synthesised) triglycerides from the liver to adipose tissue. Intermediate-density lipoproteins (IDL) are intermediate between VLDL and LDL. They are not usually detectable in the blood when fasting. Low-density lipoproteins (LDL) carry 3,000 to 6,000 fat molecules (phospholipids, cholesterol, triglycerides, etc.) around the body. LDL particles are sometimes referred to as "bad" lipoprotein because concentrations of two kinds of LDL (sd-LDL and LPA) correlate with atherosclerosis progression. In healthy individuals, most LDL is large and buoyant (lb LDL). large buoyant LDL (lb LDL) particles small dense LDL (sd LDL) particles Lipoprotein(a) (LPA) is a lipoprotein particle of a certain phenotype High-density lipoproteins (HDL) collect fat molecules from the body's cells/tissues and take them back to the liver. HDLs are sometimes referred to as "good" lipoprotein because higher concentrations correlate with low rates of atherosclerosis progression and/or regression. For young healthy research subjects of about 70 kg (154 lb), published compositional values represent averages across the individuals studied, with percentages given as percent dry weight; however, these data are not necessarily reliable for any one individual or for the general clinical population. Alpha and beta It is also possible to classify lipoproteins as "alpha" and "beta", according to the classification of proteins in serum protein electrophoresis. This terminology is sometimes used in describing lipid disorders such as abetalipoproteinemia. Subdivisions Lipoproteins, such as LDL and HDL, can be further subdivided into subspecies isolated through a variety of methods. These are subdivided by density or by the proteins they carry. Research is ongoing, but it is becoming clear that the subspecies differ in their apolipoprotein, protein, and lipid contents and have different physiological roles. For example, within the HDL lipoprotein subspecies, a large number of proteins are involved in general lipid metabolism. However, HDL subspecies have also been found to contain proteins involved in hemostasis and the clotting cascade (including fibrinogen); inflammatory and immune responses, including the complement system, proteolysis inhibitors, acute-phase response proteins, and the LPS-binding protein; heme and iron metabolism; platelet regulation; vitamin binding; and general transport. Research High levels of lipoprotein(a) are a significant risk factor for atherosclerotic cardiovascular diseases via mechanisms associated with inflammation and thrombosis. The mechanistic links between different lipoprotein isoforms and the risk for cardiovascular diseases, as well as lipoprotein synthesis, regulation, and metabolism and the related risks for genetic diseases, remain under active research as of 2022.
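For readers who want to experiment with the density-based classification above, the five classes are often summarized with approximate density cut-offs. The following minimal Python sketch uses commonly cited approximate textbook cut-offs, which are an assumption here rather than values taken from this article; exact thresholds vary between sources:

```python
# Hypothetical illustration: map a measured particle density (g/mL) to the
# conventional lipoprotein class. Cut-offs are commonly cited approximate
# textbook values, not definitive clinical thresholds.
CLASS_BY_MAX_DENSITY = [
    (0.95,  "chylomicron"),
    (1.006, "VLDL"),
    (1.019, "IDL"),
    (1.063, "LDL"),
    (1.210, "HDL"),
]

def classify(density_g_per_ml: float) -> str:
    # Walk the classes from least to most dense and return the first match.
    for upper_bound, name in CLASS_BY_MAX_DENSITY:
        if density_g_per_ml <= upper_bound:
            return name
    return "out of range"

print(classify(1.03))  # -> "LDL": denser (more protein-rich) than VLDL or IDL
```

The ordering of the table mirrors the rule stated above: the higher the fat-to-protein ratio, the larger and less dense the particle.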
Biology and health sciences
Lipids
Biology
92514
https://en.wikipedia.org/wiki/Detergent
Detergent
A detergent is a surfactant or a mixture of surfactants with cleansing properties when in dilute solutions. There is a large variety of detergents. A common family is the alkylbenzene sulfonates, which are soap-like compounds that are more soluble than soap in hard water, because the polar sulfonate is less likely than the polar carboxylate of soap to bind to calcium and other ions found in hard water. Definitions The word detergent is derived from the Latin adjective detergens, from the verb detergere, meaning to wipe or polish off. Detergent can be defined as a surfactant or a mixture of surfactants with cleansing properties when in dilute solutions. However, conventionally, detergent is used to mean synthetic cleaning compounds as opposed to soap (a salt of the natural fatty acid), even though soap is also a detergent in the true sense. In domestic contexts, the term detergent refers to household cleaning products such as laundry detergent or dish detergent, which are in fact complex mixtures of different compounds, not all of which are by themselves detergents. Detergency is the ability to remove unwanted substances, termed 'soils', from a substrate (e.g., clothing). Structure and properties Detergents are a group of compounds with an amphiphilic structure, where each molecule has a hydrophilic (polar) head and a long hydrophobic (non-polar) tail. The hydrophobic portion of these molecules may be straight- or branched-chain hydrocarbons, or it may have a steroid structure. The hydrophilic portion is more varied: it may be ionic or non-ionic, and can range from a simple to a relatively elaborate structure. Detergents are surfactants, since they can decrease the surface tension of water. Their dual nature facilitates the mixture of hydrophobic compounds (like oil and grease) with water. Because air is not hydrophilic, detergents are also foaming agents to varying degrees. Detergent molecules aggregate to form micelles, which makes them soluble in water. The hydrophobic group of the detergent is the main driving force of micelle formation; its aggregation forms the hydrophobic core of the micelles. The micelle can remove grease, protein or soiling particles. The concentration at which micelles start to form is the critical micelle concentration (CMC), and the temperature at which the micelles further aggregate to separate the solution into two phases is the cloud point, at which the solution becomes cloudy and detergency is optimal. Detergents work better at alkaline pH. The properties of detergents are dependent on the molecular structure of the monomer. The ability to foam may be determined by the head group; for example, anionic surfactants are high-foaming, while nonionic surfactants may be non-foaming or low-foaming. Chemical classifications of detergents Detergents are classified into four broad groupings, depending on the electrical charge of the surfactants. Anionic detergents Typical anionic detergents are alkylbenzene sulfonates. The alkylbenzene portion of these anions is lipophilic and the sulfonate is hydrophilic. Two varieties have been popularized, those with branched alkyl groups and those with linear alkyl groups. The former were largely phased out in economically advanced societies because they are poorly biodegradable. Anionic detergents are the most common form of detergents, and an estimated 6 billion kilograms of anionic detergents are produced annually for the domestic markets.
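The critical micelle concentration mentioned above is often located experimentally as the breakpoint in a plot of surface tension against the logarithm of concentration: tension falls steadily up to the CMC and then plateaus. Purely as an illustration of that idea, here is a minimal, self-contained Python sketch using synthetic, made-up data; the assumed CMC, noise level, and two-segment line fit are illustrative assumptions, not a standard laboratory procedure:

```python
import numpy as np

# Synthetic data (not real measurements): surface tension (mN/m) falls
# roughly linearly with log10(concentration) up to an assumed CMC, then
# plateaus once added molecules go into micelles instead of the surface.
true_cmc = 8.0e-3                    # mol/L, arbitrary assumed value
conc = np.logspace(-4, -1, 40)       # mol/L
tension = np.where(conc < true_cmc,
                   35.0 - 12.0 * np.log10(conc / true_cmc),
                   35.0)
tension += np.random.default_rng(0).normal(0.0, 0.3, conc.size)

# Estimate the CMC as the breakpoint that lets two straight lines in
# (log10 c, tension) space fit the data with the least total squared error.
logc = np.log10(conc)

def total_error(split):
    err = 0.0
    for seg in (slice(None, split), slice(split, None)):
        A = np.column_stack([logc[seg], np.ones(logc[seg].size)])
        _, residual, *_ = np.linalg.lstsq(A, tension[seg], rcond=None)
        err += residual[0] if residual.size else 0.0
    return err

best = min(range(3, conc.size - 3), key=total_error)
print(f"estimated CMC ≈ {conc[best]:.1e} mol/L (assumed value {true_cmc:.1e})")
```

On this synthetic data the recovered breakpoint lands near the assumed CMC; real measurements would warrant replicates and a more careful fitting procedure.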
Bile acids, such as deoxycholic acid (DOC), are anionic detergents produced by the liver to aid in the digestion and absorption of fats and oils. Cationic detergents Cationic detergents are similar to anionic ones, but quaternary ammonium replaces the hydrophilic anionic sulfonate group. The quaternary ammonium center is positively charged. Cationic surfactants generally have poor detergency. Non-ionic detergents Non-ionic detergents are characterized by their uncharged, hydrophilic headgroups. Typical non-ionic detergents are based on polyoxyethylene or a glycoside. Common examples of the former include Tween, Triton, and the Brij series. These materials are also known as ethoxylates or PEGylates; some degrade to metabolites such as nonylphenol. Glycosides have a sugar as their uncharged hydrophilic headgroup. Examples include octyl thioglucoside and maltosides. HEGA and MEGA series detergents are similar, possessing a sugar alcohol as headgroup. Amphoteric detergents Amphoteric or zwitterionic detergents are zwitterionic within a particular pH range, possessing a net zero charge arising from the presence of equal numbers of +1 and −1 charged chemical groups. Examples include CHAPS. History Soap is known to have been used as a surfactant for washing clothes since Sumerian times, around 2500 BC. In ancient Egypt, soda was used as a wash additive. In the 19th century, synthetic surfactants began to be created, for example from olive oil. Sodium silicate (water glass) was used in soap-making in the United States in the 1860s, and in 1876, Henkel sold a sodium silicate-based product in Germany that could be used with soap and was marketed as a "universal detergent" (Universalwaschmittel). Soda was then mixed with sodium silicate to produce Germany's first brand-name detergent, Bleichsoda. In 1907, Henkel also added the bleaching agent sodium perborate to launch Persil, the first 'self-acting' laundry detergent, eliminating the laborious rubbing of laundry by hand. During the First World War, there was a shortage of the oils and fats needed to make soap. In order to find alternatives to soap, synthetic detergents were made in Germany by chemists using raw material derived from coal tar. These early products, however, did not provide sufficient detergency. In 1928, an effective detergent was made through the sulfation of fatty alcohols, but large-scale production was not feasible until low-cost fatty alcohols became available in the early 1930s. The synthetic detergent created was more effective and less likely to form scum than soap in hard water, and could also eliminate acid and alkaline reactions and decompose dirt. Commercial detergent products with fatty alcohol sulphates began to be sold, initially in 1932 in Germany by Henkel. In the United States, detergents were sold from 1933 by Procter & Gamble (Dreft), primarily in areas with hard water. However, sales in the US grew slowly until the introduction of 'built' detergents with the addition of an effective phosphate builder developed in the early 1940s. The builder improves the performance of the surfactants by softening the water through the chelation of calcium and magnesium ions, helping to maintain an alkaline pH, as well as dispersing and keeping the soiling particles in solution. The development of the petrochemical industry after the Second World War also yielded material for the production of a range of synthetic surfactants, and alkylbenzene sulfonates became the most important detergent surfactants used.
By the 1950s, laundry detergents had become widespread, and largely replaced soap for cleaning clothes in developed countries. Over the years, many types of detergents have been developed for a variety of purposes, for example, low-sudsing detergents for use in front-loading washing machines, heavy-duty detergents effective in removing grease and dirt, all-purpose detergents and specialty detergents. They became incorporated into various products outside of laundry use, for example in dishwasher detergents, shampoo, toothpaste, industrial cleaners, and in lubricants and fuels to reduce or prevent the formation of sludge or deposits. The formulation of detergent products may include bleach, fragrances, dyes and other additives. The use of phosphates in detergent, however, led to concerns over nutrient pollution and demand for changes to the formulation of the detergents. Concerns were also raised over the use of surfactants such as branched alkylbenzene sulfonate (tetrapropylenebenzene sulfonate), which lingers in the environment; this led to their replacement by surfactants that are more biodegradable, such as linear alkylbenzene sulfonate. Developments over the years have included the use of enzymes, substitutes for phosphates such as zeolite A and NTA, TAED as a bleach activator, sugar-based surfactants which are biodegradable and milder to skin, and other environmentally friendly products, as well as changes to the form of delivery such as tablets, gels and pods. Major applications of detergents Household cleaning One of the largest applications of detergents is household and shop cleaning, including dishwashing and washing laundry. These detergents are commonly available as powders or concentrated solutions, and the formulations of these detergents are often complex mixtures of a variety of chemicals aside from surfactants, reflecting the diverse demands of the application and the highly competitive consumer market. These detergents may contain the following components: surfactants, foam regulators, builders, bleach, bleach activators, enzymes, dyes, fragrances, and other additives. Fuel additives Both carburetors and fuel injector components of internal combustion engines benefit from detergents in the fuels to prevent fouling. Concentrations are about 300 ppm. Typical detergents are long-chain amines and amides such as polyisobuteneamine and polyisobuteneamide/succinimide. Biological reagent Reagent-grade detergents are employed for the isolation and purification of integral membrane proteins found in biological cells. Solubilization of cell membrane bilayers requires a detergent that can enter the inner membrane monolayer. Advancements in the purity and sophistication of detergents have facilitated the structural and biophysical characterization of important membrane proteins such as ion channels, transporters, signaling receptors, and photosystem II.
Technology
Food, water and health
null
92516
https://en.wikipedia.org/wiki/Shinkansen
Shinkansen
The Shinkansen, colloquially known in English as the bullet train, is a network of high-speed railway lines in Japan. It was initially built to connect distant Japanese regions with Tokyo, the capital, to aid economic growth and development. Beyond long-distance travel, some sections around the largest metropolitan areas are used as a commuter rail network. It is owned by the Japan Railway Construction, Transport and Technology Agency and operated by five Japan Railways Group companies. Starting with the Tokaido Shinkansen in 1964, the network has expanded to consist of of lines with maximum speeds of , of Mini-shinkansen lines with a maximum speed of , and of spur lines with Shinkansen services. The network links most major cities on the islands of Honshu and Kyushu, and connects to Hakodate on the northern island of Hokkaido. An extension to Sapporo is under construction and scheduled to open in 2038. The maximum operating speed is (on a section of the Tōhoku Shinkansen). Test runs have reached for conventional rail in 1996, and up to a world record for SCMaglev trains in April 2015. The original Tokaido Shinkansen, connecting Tokyo, Nagoya, and Osaka, three of Japan's largest cities, is one of the world's busiest high-speed rail lines. In the one-year period preceding March 2017, it carried 159 million passengers, and since its opening more than six decades ago, it has transported more than 6.4 billion total passengers. At peak times, the line carries up to 16 trains per hour in each direction with 16 cars each (1,323-seat capacity and occasionally additional standing passengers) with a minimum headway of three minutes between trains. The Shinkansen network of Japan had the highest annual passenger ridership (a maximum of 353 million in 2007) of any high-speed rail network until 2011, when the Chinese high-speed railway network surpassed it at 370 million passengers annually. Etymology Shinkansen in Japanese means 'new trunk line' or 'new main line'; the word is used to describe both the railway lines the trains run on and the trains themselves. In English, the trains are also known as the bullet train. The term originated in 1939 as the initial name given to the Shinkansen project in its earliest planning stages. Furthermore, the name , used exclusively until 1972 for trains on the Tōkaidō Shinkansen, is used today in English-language announcements and signage. History Japan was the first country to build dedicated railway lines for high-speed travel. Because of the mountainous terrain, the existing network consisted of narrow-gauge lines, which generally took indirect routes and could not be adapted to higher speeds due to technical limitations of narrow-gauge rail. For example, if a standard-gauge rail has a curve with a maximum speed of , the same curve on narrow-gauge rail will have a maximum allowable speed of . Consequently, Japan had a greater need for new high-speed lines than countries where the existing standard gauge or broad gauge rail system had more upgrade potential. Among the key people credited with the construction of the first Shinkansen are Hideo Shima, the Chief Engineer, and Shinji Sogō, the first President of Japanese National Railways (JNR), who managed to persuade politicians to back the plan. Other significant people responsible for its technical development were Tadanao Miki, Tadashi Matsudaira, and Hajime Kawanabe, based at the Railway Technical Research Institute (RTRI), part of JNR.
They were responsible for much of the technical development of the first line, the Tōkaidō Shinkansen. All three had worked on aircraft design during World War II. Early proposals The popular English name bullet train is a literal translation of the Japanese term , a nickname given to the project while it was initially discussed in the 1930s. The name stuck because of the original 0 Series Shinkansen's resemblance to a bullet and its high speed. The Shinkansen name was first formally used in 1940 for a proposed standard gauge passenger and freight line between Tokyo and Shimonoseki that would have used steam and electric locomotives with a top speed of . Over the next three years, the Ministry of Railways drew up more ambitious plans to extend the line to Beijing (through a tunnel to Korea) and even Singapore, and to build connections to the Trans-Siberian Railway and other trunk lines in Asia. These plans were abandoned in 1943 as Japan's position in World War II worsened. However, some construction did commence on the line; several tunnels on the present-day Shinkansen date to the war-era project. Construction Following the end of World War II, high-speed rail was forgotten for several years while traffic of passengers and freight steadily increased on the conventional Tōkaidō Main Line along with the reconstruction of Japanese industry and economy. By the mid-1950s the Tōkaidō Line was operating at full capacity, and the Ministry of Railways decided to revisit the Shinkansen project. In 1957, Odakyu Electric Railway introduced its 3000 series SE Romancecar train, which set a world speed record of for a narrow-gauge train when JNR leased a trainset to perform high-speed tests. This train gave designers the confidence that they could safely build an even faster standard gauge train. Thus the first Shinkansen, the 0 series, was built on the success of the Romancecar. In the 1950s, the Japanese national attitude was that, as was happening in the United States, railways would soon be outdated and replaced by air travel and highways. However, Shinji Sogō, President of Japanese National Railways, insisted strongly on the possibility of high-speed rail, and the Shinkansen project was implemented. Government approval came in December 1958, and construction of the first segment of the Tōkaidō Shinkansen between Tokyo and Osaka started in April 1959. The cost of constructing the Shinkansen was at first estimated at nearly 200 billion yen, which was raised in the form of a government loan, railway bonds, and a low-interest loan of US$80 million from the World Bank. Initial estimates, however, were understated, and the actual cost was about 380 billion yen. As the budget shortfall became clear in 1963, Sogō resigned to take responsibility. A test facility for rolling stock, called the Kamonomiya Model Section, opened in Odawara in 1962. Initial success The Tōkaidō Shinkansen began service on 1 October 1964, in time for the first Tokyo Olympics. The conventional Limited Express service took six hours and 40 minutes from Tokyo to Osaka, but the Shinkansen made the trip in just four hours, shortened to three hours and ten minutes by 1965. It enabled day trips between Tokyo and Osaka, the two largest metropolises in Japan, significantly changed the style of business and life of the Japanese people, and increased new traffic demand. The service was an immediate success, reaching the 100 million passenger mark in less than three years on 13 July 1967, and one billion passengers in 1976.
Sixteen-car trains were introduced for Expo '70 in Osaka. With an average of 23,000 passengers per hour in each direction in 1992, the Tōkaidō Shinkansen was the world's busiest high-speed rail line. As of 2014, the line's 50th anniversary year, daily passenger traffic had risen to 391,000, which, spread over its 18-hour schedule, represented an average of just under 22,000 passengers per hour. The first Shinkansen trains, the 0 series, ran at speeds of up to , later increased to . The last of these trains, with their classic bullet-nosed appearance, were retired on 30 November 2008. A driving car from one of the 0 series trains was donated by JR West to the National Railway Museum in York, United Kingdom, in 2001. Network expansion The Tōkaidō Shinkansen's rapid success prompted an extension westward to Okayama, Hiroshima and Fukuoka (the San'yō Shinkansen), which was completed in 1975. Prime Minister Kakuei Tanaka was an ardent supporter of the Shinkansen, and his government proposed an extensive network paralleling most existing trunk lines. Two new lines, the Tōhoku Shinkansen and Jōetsu Shinkansen, were built following this plan. Many other planned lines were delayed or scrapped entirely as JNR slid into debt throughout the late 1970s, largely because of the high cost of building the Shinkansen network. By the early 1980s, the company was practically insolvent, leading to its privatization in 1987. Development of the Shinkansen by the privatised regional JR companies has continued, with new train models developed, each generally with its own distinctive appearance (such as the 500 series introduced by JR West). Since 2014, Shinkansen trains have run regularly at speeds up to on the Tōhoku Shinkansen; only the Shanghai maglev train, China Railway High-speed networks, and the Indonesian Jakarta-Bandung High-speed railway have commercial services that operate faster. Since 1970, development has also been underway for the Chūō Shinkansen, a planned maglev line from Tokyo to Osaka. On 21 April 2015, a seven-car L0 series maglev trainset, planned to be used on the line, set a world speed record of . The line is expected to operate at , with an estimated travel time between Tokyo and Osaka of 67 minutes. Construction commenced in 2011, and the line was originally scheduled to open in 2027, though this has since been delayed to at least 2034. Technology To enable high-speed operation, the Shinkansen uses a range of advanced technology compared with conventional rail, achieving not only high speed but also a high standard of safety and comfort. Its success has influenced other railways in the world, demonstrating the importance and advantages of high-speed rail. Routing Shinkansen routes never intersect with slower, narrow-gauge conventional lines (except mini-shinkansen, which runs along these older lines). Consequently, the shinkansen is not affected by slower local or freight trains (except for the Hokkaido Shinkansen while traveling through the Seikan Tunnel), and has the capacity to operate many high-speed trains punctually. In addition, shinkansen routes (excluding mini-shinkansen) are completely grade separated from roads and highways, meaning railway crossings are almost eliminated. Tracks are strictly off-limits, with penalties for trespassing regulated by law. The routes use tunnels and viaducts to go through and over obstacles rather than around them, with a minimum curve radius of ( on the oldest Tōkaidō Shinkansen).
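The relationship between curve radius and permissible speed sketched above can be made concrete with the standard canted-curve formula. The figures below are illustrative assumptions (a plausible radius, cant, gauge, and passenger-comfort limit), not values taken from this article:

```latex
% Permissible speed on a canted curve (illustrative values assumed).
% With cant h on gauge G, the lateral acceleration felt in the track
% plane is approximately v^2/r - g\,h/G, capped at a comfort limit a_max:
\[
  v_{\max} = \sqrt{\, r \left( g\,\frac{h}{G} + a_{\max} \right) }
\]
% Assuming r = 4000 m, h = 180 mm, G = 1435 mm, a_max = 0.8 m/s^2:
\[
  v_{\max} = \sqrt{4000 \left( 9.81 \cdot \tfrac{0.18}{1.435} + 0.8 \right)}
  \approx 90\ \mathrm{m/s} \approx 324\ \mathrm{km/h}
\]
```

For instance, if narrow gauge and lower vehicle stability roughly halve the bracketed term, the permissible speed falls by about 30% for the same radius, which is the kind of penalty the curve-speed comparison above describes.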
Track The Shinkansen uses standard gauge, in contrast to the narrow gauge of most other lines in Japan. Continuous welded rail and swingnose crossing points are employed, eliminating gaps at turnouts and crossings. Long rails are used, joined by expansion joints to minimize gauge fluctuation due to thermal elongation and shrinkage. A combination of ballasted and slab track is used, with slab track exclusively employed on concrete-bed sections such as viaducts and tunnels. Slab track is significantly more cost-effective in tunnel sections, since the lower track height reduces the cross-sectional area of the tunnel, reducing construction costs by up to 30%. However, the smaller diameter of Shinkansen tunnels, compared to some other high-speed lines, has resulted in the issue of tunnel boom becoming a concern for residents living close to tunnel portals. The slab track consists of rails, fasteners and track slabs laid on a cement asphalt mortar. On the roadbed and in tunnels, circular upstands, measuring in diameter and high, are located at 5-metre intervals. The prefabricated upstands are made of either reinforced concrete or pre-stressed reinforced concrete; they prevent the track slab from moving laterally or longitudinally. One track slab weighs approximately 5 tons and is wide, long and thick. Signal system The Shinkansen employs an ATC (automatic train control) system, eliminating the need for trackside signals. It uses a comprehensive system of automatic train protection. Centralized traffic control manages all train operations, and all tasks relating to train movement, track, station and schedule are networked and computerized. Electrical systems The Shinkansen uses a 25 kV AC overhead power supply (20 kV AC on Mini-shinkansen lines) to overcome the limitations of the 1,500 V direct current used on the existing electrified narrow-gauge system. Power is distributed among the train's axles, avoiding the heavy axle loads that result from concentrating traction in single power cars. The AC frequency of the power supply for the Tokaido Shinkansen is 60 Hz. Trains Shinkansen trains are electric multiple units (EMUs), offering fast acceleration, deceleration and reduced damage to the track because of the use of lighter vehicles compared to locomotives or power cars. The coaches are air-sealed to ensure stable air pressure when entering tunnels at high speed. Shinkansen trains (excluding mini-Shinkansen) are also built to a larger loading gauge compared to conventional-speed rolling stock. This larger loading gauge permits wider coaches, allowing for 5-abreast seating (2+3) in Standard Class coaches, compared to the more common 4-abreast (2+2) seating usually found elsewhere. On occasion, this wider loading gauge was also used to allow 6-abreast seating (3+3) on certain trains, such as the E1 and E4 series sets. This, combined with a lack of power cars, allows for a higher passenger capacity within a shorter train length. However, since mini-Shinkansen lines are effectively track-regauged conventional lines, the conventional loading gauge for 1,067 mm lines still applies on mini-Shinkansen lines. Traction The Shinkansen has used EMUs from the outset, with the 0 Series Shinkansen having all axles powered.
Other railway manufacturers were traditionally reluctant or unable to use distributed-traction configurations. Talgo, the German ICE 2, and the French TGV (and subsequently the South Korean KTX-I and KTX-Sancheon) use the locomotive (also known as power car) configuration; Talgo used it for the Renfe Class 102 and continues with it for the Talgo AVRIL, because powered bogies cannot be used with Talgo's bogie design, a modified Jacobs bogie with a single axle instead of two that allows the wheels to rotate independently of each other. On the ICE 2, TGV and KTX, the power-car layout was chosen because it easily allows for a high ride quality and less electrical equipment. In Japan, significant engineering desirability exists for the electric multiple unit configuration. A greater proportion of motored axles permits higher acceleration, so the Shinkansen does not lose as much time if stopping frequently. Shinkansen lines have more stops in proportion to their lengths than high-speed lines elsewhere in the world. Lines The main Shinkansen lines are: In practice, the Tokaido, San'yō, and Kyushu lines form a contiguous west/southbound line from Tokyo, as train services run between the Tokaido and San'yō lines and between the San'yō and Kyushu lines, though the lines are operated by different companies. The Tokaido Shinkansen tracks are not physically connected to the lines of the Tohoku Shinkansen at Tokyo Station, as they use different electrification standards, signaling systems, and earthquake mitigation devices. There also exists a dispute between JR East and JR Central about the use of the two platforms which were added to the Tokaido line's half of Tokyo Station. Before JNR's privatization, they were conceived as being shared with the Tohoku line, and their construction used funds allocated to the Tohoku line's extension to Tokyo; however, the extension was finished after privatization, by which time the platforms were owned by JR Central. Therefore, there is no through service between those lines. All northbound services from Tokyo travel along the Tohoku Shinkansen until at least Ōmiya before splitting off towards Sendai or Takasaki. Two further lines, known as Mini-shinkansen, have also been constructed by re-gauging and upgrading existing sections of line: Yamagata Shinkansen (Fukushima – Shinjō) Akita Shinkansen (Morioka – Akita) There are two standard-gauge lines not technically classified as Shinkansen lines but which run Shinkansen trains, as they use tracks leading to Shinkansen storage/maintenance yards: Hakata Minami Line (Hakata – Hakataminami) Gala-Yuzawa Line – technically a branch of the Jōetsu Line – (Echigo-Yuzawa – Gala-Yuzawa) Lines under construction The following lines are under construction. These lines, except the Chūō Shinkansen, are Shinkansen projects designated in the basic plan decided by the government. The Hokkaido Shinkansen extension to Sapporo is under construction and scheduled to open by 2038. The Chūō Shinkansen (Tokyo–Nagoya–Osaka) is the first maglev Shinkansen line, which has been under construction since 2014. JR Central originally aimed to begin commercial service between Tokyo and Nagoya in 2027. However, in 2024, Central Japan Railway Co President Shunsuke Niwa said that, due to construction delays, a 2027 opening was now impossible and the line is not expected to open until at least 2034. Planned lines The extension of the Hokuriku Shinkansen to Osaka is proposed, with the route via Obama and Kyoto selected by the government on 20 December 2016.
Construction is proposed to commence in 2030 and take 15 years. The Nishi Kyushu Shinkansen has been built to full Shinkansen standards between Takeo-Onsen and Nagasaki, with the existing narrow-gauge line from Shin-Tosu to Takeo-Onsen to remain as narrow-gauge track, although there is a proposal to build the section between Shin-Tosu and Takeo-Onsen to full Shinkansen standards. In 2018, the Ministry of Land, Infrastructure, Transport and Tourism released cost-benefit analysis results to compare and contrast full Shinkansen, Mini-shinkansen, and Gauge Change Train options for this section. Cancelled lines The Narita Shinkansen project to connect Tokyo to Narita International Airport, initiated in the 1970s but halted in 1983 after landowner protests, has been officially cancelled and removed from the Basic Plan governing Shinkansen construction. Parts of its planned right-of-way were used by the Narita Sky Access Line, which opened in 2010, and the Keiyo Line reused space originally set aside for the Narita Shinkansen terminus at Tokyo Station. Although the Sky Access Line uses standard-gauge track, it was not built to Shinkansen specifications and there are no plans to convert it into a full Shinkansen line. Proposed lines Many Shinkansen lines were proposed during the boom of the early 1970s but have yet to be constructed and have subsequently been shelved indefinitely. Hokkaido Shinkansen northward extension: Sapporo–Asahikawa : Oshamanbe–Muroran–Sapporo : Toyama–Niigata–Aomori Toyama–Jōetsu-Myōkō exists as part of the Hokuriku Shinkansen, and Nagaoka–Niigata exists as part of the Jōetsu Shinkansen, with provisions for the Uetsu Shinkansen at Nagaoka. : Fukushima–Yamagata–Akita Fukushima–Shinjō and Ōmagari–Akita exist as the Yamagata Shinkansen and Akita Shinkansen, respectively, but as "Mini-Shinkansen" upgrades of existing track, they do not meet the requirements of the Basic Plan. : Nagoya–Tsuruga : Osaka–Tottori–Matsue–Shimonoseki : Okayama–Matsue : Osaka–Tokushima–Takamatsu–Matsuyama–Ōita : Okayama–Kōchi–Matsuyama There has been some activity regarding the Shikoku and Trans-Shikoku Shinkansen in recent years. In 2016, the Shikoku and Trans-Shikoku Shinkansen were identified as potential future projects in a review of long-term plans for the Shikoku area, and funds were allocated towards the planning of the route. A profitability study commissioned by the city of Oita in 2018 found the route to be potentially profitable. : Fukuoka–Ōita–Miyazaki–Kagoshima : Ōita–Kumamoto In addition, the Basic Plan specified that the Jōetsu Shinkansen should start from Shinjuku, not Tokyo Station, which would have required building an additional of track between Shinjuku and Ōmiya. While no construction work was ever started, land along the proposed track, including an underground section leading to Shinjuku Station, remains reserved. If capacity on the Tokyo–Ōmiya section proves insufficient at some point, construction of the Shinjuku–Ōmiya link may be reconsidered. In December 2009, then transport minister Seiji Maehara proposed a bullet train link to Haneda Airport, using an existing spur that connects the Tōkaidō Shinkansen to a train depot. JR Central called the plan "unrealistic" due to tight train schedules on the existing line, but reports said that Maehara wished to continue discussions on the idea. The succeeding minister has not indicated whether this proposal remains supported.
While the plan may become more feasible after the opening of the Chūō Shinkansen (sometimes referred to as a bypass of the Tokaido Shinkansen) frees up capacity, construction was already underway for other rail improvements between Haneda and Tokyo Station expected to be completed prior to the opening of the 2020 Tokyo Olympics, so any potential Shinkansen service would likely offer only marginal benefit. Despite these plans ultimately not being realized (owing in part to the effects of the COVID-19 pandemic), rail projects in the vicinity of Haneda Airport, including the Haneda Airport Access Line and the Tokyo Rinkai Subway Line, continue to undergo planning. Services Originally intended to carry passenger trains by day and freight trains by night, the Shinkansen lines carried exclusively passengers for the first five and a half decades of their operation. Light freight has been carried on some passenger services since 2019, and there are plans to expand this with freight-only trains in the future. The system shuts down between midnight and 06:00 every day for maintenance. Japan's few remaining overnight passenger trains run on the older, narrow-gauge network that the Shinkansen parallels. There are three principal service types on the Shinkansen: Express services – these stop at only the very largest stations and, as a result, are the fastest Shinkansen services measured by average speed. Semi-express services – these stop at certain smaller stops alongside stopping at all the largest stations. These allow for faster connections from smaller stops to larger stations than would otherwise be possible with a local service. Local services – these stop at every station along the Shinkansen line. Consequently, local services are the slowest Shinkansen services measured by average speed. Frequently, these services only operate on a part of the line, instead of covering the entirety. Tōkaidō, San'yō and Kyushu Shinkansen Nozomi (express, Tokaido and San'yō) Hikari (semi-express, Tokaido and San'yō) Hikari Rail Star (semi-express, San'yō) Kodama (local, Tokaido and San'yō) Sakura (semi-express, San'yō and Kyushu) Mizuho (express, San'yō and Kyushu) Tsubame (local, Kyushu) Tōhoku, Hokkaido, Yamagata and Akita Shinkansen Hayabusa (express, Tohoku & Hokkaido, using E5 series/H5 series trains) Hayate (local, Tohoku & Hokkaido; express service discontinued in 2019) Yamabiko (semi-express, Tohoku) Nasuno (local, Tohoku) Aoba (discontinued) Komachi (Akita) Tsubasa (Yamagata) Jōetsu Shinkansen Toki / Max Toki (semi-express, Jōetsu) Tanigawa / Max Tanigawa (local, Jōetsu) Asahi / Max Asahi (discontinued) Hokuriku Shinkansen Kagayaki (express, Hokuriku) Hakutaka (semi-express, Hokuriku) Tsurugi (local, Hokuriku) Asama (local, Hokuriku) Nishi Kyushu Shinkansen Kamome Train types Trains are up to sixteen cars long. With each car measuring in length, the longest trains are 400 m end to end. Stations are similarly long to accommodate these trains. Some of Japan's high-speed maglev trains are considered Shinkansen, while other slower maglev trains (such as Linimo, serving local communities in and near Nagoya, Aichi Prefecture) are intended as alternatives to conventional urban rapid transit systems. Passenger trains Tōkaidō and San'yō Shinkansen 0 series: The first Shinkansen trains, which entered service in 1964. Maximum operating speed was . More than 3,200 cars were built. Withdrawn in December 2008. 100 series: Entered service in 1985, and featured bilevel cars with a restaurant car and compartments.
Maximum operating speed was . Later used only on San'yō Shinkansen Kodama services. Withdrawn in March 2012. 300 series: Entered service in 1992, initially on Nozomi services, with a maximum operating speed of . Withdrawn in March 2012. 500 series: Introduced on Nozomi services in 1997, with an operating speed of . Since 2008, sets have been shortened from 16 to 8 cars for use on San'yō Shinkansen Kodama services. 700 series: Introduced in 1999, with a maximum operating speed of . The JR Central-owned units were withdrawn in March 2020, with the JR West-owned units continuing to operate on the San'yō Shinkansen line between Shin-Osaka and Hakata. N700 series: In service since 2007, with a maximum operating speed of . N700A series: An upgraded version of the N700 series with improved acceleration and deceleration and quieter traction motors. All N700 series sets have been converted to N700A. N700S series: An evolution of the N700 series. The first trainset was rolled out in 2019, with passenger services commencing on 1 July 2020. Kyushu and Nishi Kyushu Shinkansen 800 series: In service since 2004 on Tsubame services, with a maximum speed of . N700-7000/8000 series: In service since March 2011 on Mizuho and Sakura services, with a maximum speed of . N700S-8000 series: 6-car trains introduced in 2022 on Kamome services, with a maximum speed of 260 km/h. Tohoku, Hokkaido, Joetsu, and Hokuriku Shinkansen 200 series: The first type introduced on the Tohoku and Joetsu Shinkansen, in 1982; withdrawn in April 2013. Maximum speed was . The final configuration was as 10-car sets; 12-car and 16-car sets also operated at earlier times. E1 series: Bilevel 12-car trains introduced in 1994 and withdrawn in September 2012. Maximum speed was . E2 series: 8/10-car sets in service since 1997, with a maximum speed of . E4 series: Bilevel 8-car trains introduced in 1997 and withdrawn in October 2021. Maximum speed was . E5 series: 10-car sets in service since March 2011, with a maximum speed of . H5 series: The cold-weather derivative of the E5 series. 10-car sets entered service from March 2016 on the Hokkaido Shinkansen, with a maximum speed of . E7 series: 12-car trains operated on the Hokuriku Shinkansen since March 2014, with a maximum speed of . In 2019, the E7 series began operating on the Joetsu Shinkansen as well. W7 series: 12-car trains operated on the Hokuriku Shinkansen since March 2015, with a maximum speed of . Yamagata and Akita Shinkansen 400 series: The first Mini-shinkansen type, introduced in 1992 on Yamagata Shinkansen Tsubasa services with a maximum speed of 240 km/h. Withdrawn in April 2010. E3 series: Introduced in 1997 on Akita Shinkansen Komachi and Yamagata Shinkansen Tsubasa services with a maximum speed of 275 km/h; later operated solely on the Yamagata Shinkansen. E6 series: Introduced in March 2013 on Akita Shinkansen Komachi services, with a maximum speed of , raised to in March 2014. E8 series: Replacement for the E3 series on Tsubasa services, introduced from 2024. Experimental trains Class 1000 – 1961 Class 951 – 1969 Class 961 – 1973 Class 962 – 1979 500-900 series "WIN350" – 1992 Class 952/953 "STAR21" – 1992 Class 955 "300X" – 1994 Gauge Change Train – 1998 to present Class E954 "Fastech 360S" – 2004 Class E955 "Fastech 360Z" – 2005 Class E956 "ALFA-X" – 2019 Maglev trains These trains were and are used only for experimental runs, though the L0 series could become a passenger train.
LSM200 – 1972 ML100 – 1972 ML100A – 1975 ML-500 – 1977 ML-500R – 1979 MLU001 – 1981 MLU002 – 1987 MLU002N – 1993 MLX01 – 1996 L0 series – 2012 Maintenance vehicles 911 Type diesel locomotive 912 Type diesel locomotive DD18 Type diesel locomotive DD19 Type diesel locomotive 941 Type (rescue train) 921 Type (track inspection car) 922 Type (Doctor Yellow sets T1, T2, T3) 923 Type (Doctor Yellow sets T4, T5) 925 Type (Doctor Yellow sets S1, S2) E926 Type (East i) Speed records Traditional rail Maglev Reliability Punctuality The Shinkansen is very reliable thanks to several factors, including its near-total separation from slower traffic. There are separate laws governing interfering with or otherwise obstructing Shinkansen trains, tracks, or their operation. In 2016, JR Central reported that the Shinkansen's average delay from schedule per train was 24 seconds. This includes delays due to uncontrollable causes, such as natural disasters. Safety record Over the Shinkansen's 60-plus-year history, carrying over 10 billion passengers, there have been no passenger fatalities due to train accidents such as derailments or collisions, despite frequent earthquakes and typhoons. Injuries and a single fatality have been caused by doors closing on passengers or their belongings; attendants are employed at platforms to prevent such accidents. There have, however, been suicides by passengers jumping both from and in front of moving trains. On 30 June 2015, a passenger committed suicide on board a Shinkansen train by setting himself on fire, killing another passenger and seriously injuring seven other people. There have been two derailments of Shinkansen trains in passenger service. The first occurred during the Chūetsu earthquake on 23 October 2004. Eight of ten cars of the Toki No. 325 train on the Jōetsu Shinkansen derailed near Nagaoka Station in Nagaoka, Niigata. There were no casualties among the 154 passengers. Another derailment happened on 2 March 2013 on the Akita Shinkansen, when the Komachi No. 25 train derailed in blizzard conditions in Daisen, Akita. No passengers were injured. In the event of an earthquake, an earthquake detection system can bring the train to a stop very quickly; newer trainsets are lighter and have stronger braking systems, allowing for quicker stopping. New anti-derailment devices were installed on tracks after analysis of the Jōetsu derailment. Several months after the exposure of the falsification scandal at Kobe Steel, one of the suppliers of high-strength steel for Shinkansen trainsets, cracks were found upon inspection of a single bogie, and it was removed from service on 11 December 2017. On 23 January 2024, a massive power outage struck the Tohoku, Hokuriku and Joetsu Shinkansen lines, resulting in the cancellation of 283 trains and affecting about 120,000 passengers. JR East said that the outage was caused by a Kagayaki service train touching an overhead power cable which was left dangling after the metal rod supporting it fractured between Omiya Station in Saitama and Ueno Station in Tokyo. The incident damaged the train's pantographs and a window, while two railway employees were hospitalized following an explosion that occurred at the site during repairs. Most Shinkansen services were restored the following morning. Effects Economics The Shinkansen has had a significant beneficial effect on Japan's business, economy, society, environment and culture beyond mere construction and operational contributions.
The resultant time savings alone from switching from a conventional to a high-speed network have been estimated at 400 million hours, and the system has an economic contribution of per year. That does not include the savings from reduced reliance on imported fuel, which also has national security benefits. Shinkansen lines, particularly in the very crowded coastal Taiheiyō Belt megalopolis, met two primary goals: Shinkansen trains reduced the congestion burden on regional transportation by increasing throughput on a minimal land footprint, therefore being economically preferable compared to modes (such as airports or highways) common in less densely populated regions of the world. As rail was already the primary urban mode of passenger travel, from that perspective it was akin to a sunk cost; there was not a significant number of motorists to convince to switch modes. The initial megalopolitan Shinkansen lines were profitable and paid for themselves. Connectivity rejuvenated rural towns such as Kakegawa that would otherwise be too distant from major cities. However, upon the introduction of the 1973 Basic Plan, the initial prudence in developing Shinkansen lines gave way to political considerations to extend the mode to far less populated regions of the country, partly to spread these benefits beyond the key centres of Kanto and Kinki. Although in some cases regional extension was frustrated by protracted land acquisition (sometimes influenced by the cancellation of the Narita Shinkansen following fierce protests by locals), over time Shinkansen lines were built to relatively sparsely populated areas with the intent to disperse the population away from the capital. Such expansion had a significant cost. JNR, the national railway company, was already burdened with subsidizing unprofitable rural and regional railways. It then assumed Shinkansen construction debt until the government corporation eventually owed some , contributing to its being regionalised and privatized in 1987. The privatized JRs eventually paid to acquire JNR's Shinkansen network. Following privatization, the JR group of companies have continued Shinkansen network expansion to less populated areas, but with far more flexibility to spin off unprofitable railways or cut costs than in JNR days. An important factor is the post-bubble zero-interest-rate policy that allows JR to borrow huge sums of capital without significant concern regarding repayment timing. A UCLA study found that the presence of a Shinkansen line had improved housing affordability by making it more realistic for lower-income city workers to live in exurban areas much further away from the city, which tend to have cheaper housing options. That in turn helps the city to "decentralise" and reduce city property prices. Environment Traveling by the Tokaido Shinkansen from Tokyo to Osaka produces only around 16% of the carbon dioxide of the equivalent journey by car, a saving of 15,000 tons of CO2 per year. Challenges Noise pollution Noise pollution concerns have made increasing speed more difficult. In Japan, population density is high, and there have been strong protests against the Shinkansen's noise pollution. Its noise is thus limited to less than 70 dB in residential areas. Improvements to the pantograph, weight savings in the cars, and the construction of noise barriers and other measures have been implemented. Research is primarily aimed at reducing operational noise, particularly the tunnel boom phenomenon caused when trains transit tunnels at high speed.
Earthquake Because of the risk of earthquakes in Japan, the Urgent Earthquake Detection and Alarm System (UrEDAS), an earthquake warning system, was introduced in 1992. It enables automatic braking of Shinkansen trains in the event of large earthquakes. Heavy snow The Tōkaidō Shinkansen often experiences heavy snow in the area around Maibara Station between December and February, requiring trains to reduce speed and thus disrupting the timetable. Snow-dispersing sprinkler systems have been installed, but delays of 10–20 minutes still occur during snowy weather. Snow-related treefalls have also caused service interruptions. Along the Jōetsu Shinkansen route, snow can be very heavy, with depths of two to three metres; the line is equipped with stronger sprinklers and slab track to mitigate the snow's effects. Despite having multiple days with delays longer than 30 minutes, the Tōhoku Shinkansen still presents a slight advantage in reliability compared to air travel on days with significant snowfall. Ridership Annual * The sum of the ridership of individual lines does not equal the ridership of the system, because a single rider using multiple lines is counted once by each line but only once in the system total. For example, a passenger riding from Tokyo to Hakata appears in both the Tōkaidō and San'yō line counts, yet adds only one rider to the system figure. ** Only refers to 6 days of operation: 26 March 2016 (opening date) to 31 March 2016 (end of FY2015). Japan's high-speed rail system had the highest annual patronage of any system worldwide until 2011, when it was surpassed by China's HSR network, whose annual patronage has since reached 1.7 billion. Cumulative comparison
Technology
Trains
null
92693
https://en.wikipedia.org/wiki/Common%20cold
Common cold
The common cold or the cold is a viral infectious disease of the upper respiratory tract that primarily affects the respiratory mucosa of the nose, throat, sinuses, and larynx. Signs and symptoms may appear in as little as two days after exposure to the virus. These may include coughing, sore throat, runny nose, sneezing, headache, and fever. People usually recover in seven to ten days, but some symptoms may last up to three weeks. Occasionally, those with other health problems may develop pneumonia. Well over 200 virus strains are implicated in causing the common cold, with rhinoviruses, coronaviruses, adenoviruses and enteroviruses being the most common. They spread through the air or indirectly through contact with objects in the environment, followed by transfer to the mouth or nose. Risk factors include going to child care facilities, not sleeping well, and psychological stress. The symptoms are mostly due to the body's immune response to the infection rather than to tissue destruction by the viruses themselves. The symptoms of influenza are similar to those of a cold, although usually more severe and less likely to include a runny nose. There is no vaccine for the common cold. This is due to the rapid mutation and wide variation of viruses that cause the common cold. The primary methods of prevention are hand washing; not touching the eyes, nose or mouth with unwashed hands; and staying away from sick people. People are considered contagious as long as the symptoms are still present. Some evidence supports the use of face masks. There is also no cure, but the symptoms can be treated. Zinc may reduce the duration and severity of symptoms if started shortly after the onset of symptoms. Nonsteroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen may help with pain. Antibiotics, however, should not be used, as all colds are caused by viruses rather than bacteria. There is no good evidence that cough medicines are effective. The common cold is the most frequent infectious disease in humans. Under normal circumstances, the average adult gets two to three colds a year, while the average child may get six to eight colds a year. Infections occur more commonly during the winter. These infections have existed throughout human history. Signs and symptoms The typical symptoms of a cold include cough, runny nose, sneezing, nasal congestion, and a sore throat, sometimes accompanied by muscle ache, fatigue, headache, and loss of appetite. A sore throat is present in about 40% of cases, a cough in about 50%, and muscle aches in about 50%. In adults, a fever is generally not present but it is common in infants and young children. The cough is usually mild compared to that accompanying influenza. While a cough and a fever indicate a higher likelihood of influenza in adults, a great deal of similarity exists between these two conditions. A number of the viruses that cause the common cold may also result in asymptomatic infections. The color of the mucus or nasal secretion may vary from clear to yellow to green and does not indicate the class of agent causing the infection. Progression A cold usually begins with fatigue, a feeling of being chilled, sneezing, and a headache, followed in a couple of days by a runny nose and cough. Symptoms may begin within sixteen hours of exposure and typically peak two to four days after onset. They usually resolve in seven to ten days, but some can last for up to three weeks. 
The average duration of cough is eighteen days and in some cases people develop a post-viral cough which can linger after the infection is gone. In children, the cough lasts for more than ten days in 35–40% of cases and continues for more than 25 days in 10%. Causes Viruses The common cold is an infection of the upper respiratory tract which can be caused by many different viruses. The most commonly implicated is a rhinovirus (30–80%), a type of picornavirus with 99 known serotypes. Other commonly implicated viruses include coronaviruses, adenoviruses, enteroviruses, parainfluenza and RSV. Frequently more than one virus is present. In total, more than 200 viral types are associated with colds. The viral cause of some common colds (20–30%) is unknown. Transmission The common cold virus is typically transmitted via airborne droplets, direct contact with infected nasal secretions, or fomites (contaminated objects). Which of these routes is of primary importance has not been determined. As with all respiratory pathogens once presumed to transmit via respiratory droplets, it is highly likely to be carried by the aerosols generated during routine breathing, talking, and singing. The viruses may survive for prolonged periods in the environment (over 18 hours for rhinoviruses) and can be picked up by people's hands and subsequently carried to their eyes or noses where infection occurs. Transmission from animals is considered highly unlikely; an outbreak documented at a British scientific base on Adelaide Island after seventeen weeks of isolation was thought to have been caused by transmission from a contaminated object or an asymptomatic human carrier, rather than from the husky dogs which were also present at the base. Transmission is common in daycare and schools due to the proximity of many children with little immunity and poor hygiene. These infections are then brought home to other members of the family. There is no evidence that recirculated air during commercial flight is a method of transmission. People sitting close to each other appear to be at greater risk of infection. Other Herd immunity, generated from previous exposure to cold viruses, plays an important role in limiting viral spread, as seen with younger populations that have greater rates of respiratory infections. Poor immune function is a risk factor for disease. Insufficient sleep and malnutrition have been associated with a greater risk of developing infection following rhinovirus exposure; this is believed to be due to their effects on immune function. Breast feeding decreases the risk of acute otitis media and lower respiratory tract infections among other diseases, and it is recommended that breast feeding be continued when an infant has a cold. In the developed world breast feeding may not be protective against the common cold in and of itself. Pathophysiology The symptoms of the common cold are believed to be primarily related to the immune response to the virus. The mechanism of this immune response is virus-specific. For example, the rhinovirus is typically acquired by direct contact; it binds to humans via ICAM-1 receptors and the CDHR3 receptor through unknown mechanisms to trigger the release of inflammatory mediators. These inflammatory mediators then produce the symptoms. It does not generally cause damage to the nasal epithelium. The respiratory syncytial virus (RSV), on the other hand, is contracted by direct contact and airborne droplets. 
It then replicates in the nose and throat before spreading to the lower respiratory tract. RSV does cause epithelium damage. Human parainfluenza virus typically results in inflammation of the nose, throat, and bronchi. In young children, when it affects the trachea, it may produce the symptoms of croup, due to the small size of their airways. Diagnosis The distinction between viral upper respiratory tract infections is loosely based on the location of symptoms, with the common cold affecting primarily the nose (rhinitis), throat (pharyngitis), and lungs (bronchitis). There can be significant overlap, and more than one area can be affected. Self-diagnosis is frequent. Isolation of the viral agent involved is rarely performed, and it is generally not possible to identify the virus type through symptoms. Prevention The only useful ways to reduce the spread of cold viruses are physical and engineering measures such as using correct hand washing technique, respirators, and improvement of indoor air. In the healthcare environment, gowns and disposable gloves are also used. Droplet precautions cannot reliably protect against inhalation of common-cold-laden aerosols. Instead, airborne precautions such as respirators, ventilation, and HEPA/high-MERV filters are the only reliable protection against cold-laden aerosols. Isolation or quarantine is not used, as the disease is so widespread and the symptoms are non-specific. There is no vaccine to protect against the common cold. Vaccination has proven difficult, as so many viruses are involved and they mutate rapidly. Creation of a broadly effective vaccine is, therefore, highly improbable. Regular hand washing appears to be effective in reducing the transmission of cold viruses, especially among children. Whether the addition of antivirals or antibacterials to normal hand washing provides greater benefit is unknown. Wearing face masks when around people who are infected may be beneficial; however, there is insufficient evidence for maintaining a greater social distance. It is unclear whether zinc supplements affect the likelihood of contracting a cold. Management Treatments of the common cold primarily involve medications and other therapies for symptomatic relief. Getting plenty of rest, drinking fluids to maintain hydration, and gargling with warm salt water are reasonable conservative measures. Much of the benefit from symptomatic treatment is, however, attributed to the placebo effect. To date, no medications or herbal remedies have been conclusively demonstrated to shorten the duration of infection. Symptomatic Treatments that may help with symptoms include pain medication and medications for fevers such as ibuprofen and acetaminophen (paracetamol). However, it is not clear whether acetaminophen helps with symptoms. It is not known if over-the-counter cough medications are effective for treating an acute cough. Cough medicines are not recommended for use in children due to a lack of evidence supporting effectiveness and the potential for harm. In 2009, Canada restricted the use of over-the-counter cough and cold medication in children six years and under due to concerns regarding risks and unproven benefits. The misuse of dextromethorphan (an over-the-counter cough medicine) has led to its ban in a number of countries. Intranasal corticosteroids have not been found to be useful. In adults, short-term use of nasal decongestants may have a small benefit. 
Antihistamines may improve symptoms in the first day or two; however, there is no longer-term benefit and they have adverse effects such as drowsiness. Other decongestants such as pseudoephedrine appear effective in adults. Combined oral analgesics, antihistaminics, and decongestants are generally effective for older children and adults. Ipratropium nasal spray may reduce the symptoms of a runny nose but has little effect on stuffiness. Ipratropium may also help with coughs in adults. The safety and effectiveness of nasal decongestant use in children is unclear. Due to a lack of studies, it is not known whether increased fluid intake improves symptoms or shortens respiratory illness. As of 2017, heated and humidified air, such as via RhinoTherm, is of unclear benefit. One study has found chest vapor rub to provide some relief of nocturnal cough, congestion, and sleep difficulty. Some experts advise against physical exercise if there are symptoms such as fever, widespread muscle aches, or fatigue. It is regarded as safe to perform moderate exercise if the symptoms are confined to the head, including runny nose, nasal congestion, sneezing, or a minor sore throat. There is a popular belief that having a hot drink can help with cold symptoms, but evidence to support this is very limited. Antibiotics and antivirals Antibiotics have no effect against viral infections, including the common cold. Due to their side effects, antibiotics cause overall harm but are nevertheless still frequently prescribed. Some of the reasons that antibiotics are so commonly prescribed include people's expectations for them, physicians' desire to help, and the difficulty in excluding complications that may be amenable to antibiotics. There are no effective antiviral drugs for the common cold, even though some preliminary research has shown benefits. Zinc Zinc supplements may shorten the duration of colds by up to 33% and reduce the severity of symptoms if supplementation begins within 24 hours of onset. Some zinc remedies directly applied to the inside of the nose have led to the loss of the sense of smell. A 2017 review did not recommend the use of zinc for the common cold for various reasons, whereas 2017 and 2018 reviews both recommended its use while advocating further research on the topic. Alternative medicine While many alternative medicines and Chinese herbal medicines are purported to treat the common cold, there is insufficient scientific evidence to support their use. As of 2015, there is weak evidence to support nasal irrigation with saline. There is no firm evidence that Echinacea products or garlic provide any meaningful benefit in treating or preventing colds. Vitamins C and D Vitamin C supplementation does not affect the incidence of the common cold, but may reduce its duration if taken on a regular basis. There is no conclusive evidence that vitamin D supplementation is efficacious in the prevention or treatment of respiratory tract infections. Prognosis The common cold is generally mild and self-limiting, with most symptoms improving in a week. In children, half of cases resolve in 10 days and 90% in 15 days. Severe complications, if they occur, are usually in the very old, the very young, or those who are immunosuppressed. Secondary bacterial infections may occur, resulting in sinusitis, pharyngitis, or an ear infection. It is estimated that sinusitis occurs in 8% and ear infection in 30% of cases. 
Epidemiology The common cold is the most common human disease and affects people all over the globe. Adults typically have two to three infections annually, and children may have six to ten colds a year (and up to twelve colds a year for school children). Rates of symptomatic infections increase in the elderly due to declining immunity. Weather A common misconception is that one can "catch a cold" merely through prolonged exposure to cold weather. Although it is now known that colds are viral infections, the prevalence of many such viruses is indeed seasonal, occurring more frequently during cold weather. The reason for the seasonality has not been conclusively determined. Possible explanations include cold temperature-induced changes in the respiratory system, decreased immune response, and low humidity increasing viral transmission rates, perhaps because dry air allows small viral droplets to disperse farther and stay in the air longer. The apparent seasonality may also be due to social factors, such as people spending more time indoors near infected people, and especially children at school. Although normal exposure to cold does not increase one's risk of infection, severe exposure leading to significant reduction of body temperature (hypothermia) may put one at greater risk for the common cold: although controversial, the majority of evidence suggests that it may increase susceptibility to infection. History While the cause of the common cold was only identified in the 1950s, the disease appears to have been with humanity since its early history. Its symptoms and treatment are described in the Egyptian Ebers papyrus, the oldest existing medical text, written before the 16th century BCE. The name "cold" came into use in the 16th century, due to the similarity between its symptoms and those of exposure to cold weather. In the United Kingdom, the Common Cold Unit (CCU) was set up by the Medical Research Council in 1946, and it was there that the rhinovirus was discovered in 1956. In the 1970s, the CCU demonstrated that treatment with interferon during the incubation phase of rhinovirus infection protects somewhat against the disease, but no practical treatment could be developed. The unit was closed in 1989, two years after it completed research on zinc gluconate lozenges in the prevention and treatment of rhinovirus colds, the only successful treatment in the history of the unit. Research directions Antivirals have been tested for effectiveness in the common cold; as of 2009, none had been both found effective and licensed for use. There are trials of the antiviral drug pleconaril, which shows promise against picornaviruses, as well as trials of BTA-798. The oral form of pleconaril had safety issues, and an aerosol form is being studied. The genomes of all known human rhinovirus strains have been sequenced. Societal impact The economic impact of the common cold is not well understood in much of the world. In the United States, the common cold leads to 75–100 million physician visits annually at a conservative cost estimate of $7.7 billion per year. Americans spend $2.9 billion on over-the-counter drugs and another $400 million on prescription medicines for symptom relief. More than one-third of people who saw a doctor received an antibiotic prescription, which has implications for antibiotic resistance. An estimated 22–189 million school days are missed annually due to a cold. As a result, parents missed 126 million workdays to stay home to care for their children. 
When added to the 150 million workdays missed by employees who have a cold, the total economic impact of cold-related work loss exceeds $20 billion per year. This accounts for 40% of time lost from work in the United States.
Biology and health sciences
Illness and injury
null
92943
https://en.wikipedia.org/wiki/Digital-to-analog%20converter
Digital-to-analog converter
In electronics, a digital-to-analog converter (DAC, D/A, D2A, or D-to-A) is a system that converts a digital signal into an analog signal. An analog-to-digital converter (ADC) performs the reverse function. There are several DAC architectures; the suitability of a DAC for a particular application is determined by figures of merit including resolution, maximum sampling frequency, and others. Digital-to-analog conversion can degrade a signal, so a DAC should be specified that has insignificant errors for the application at hand. DACs are commonly used in music players to convert digital data streams into analog audio signals. They are also used in televisions and mobile phones to convert digital video data into analog video signals. These two applications use DACs at opposite ends of the frequency/resolution trade-off: the audio DAC is a low-frequency, high-resolution type, while the video DAC is a high-frequency, low- to medium-resolution type. Due to the complexity and the need for precisely matched components, all but the most specialized DACs are implemented as integrated circuits (ICs). These typically take the form of metal–oxide–semiconductor (MOS) mixed-signal integrated circuit chips that integrate both analog and digital circuits. Discrete DACs (circuits constructed from multiple discrete electronic components instead of a packaged IC) are typically extremely high-speed, low-resolution, power-hungry types, as used in military radar systems. Very high-speed test equipment, especially sampling oscilloscopes, may also use discrete DACs. Overview A DAC converts an abstract finite-precision number (usually a fixed-point binary number) into a physical quantity (e.g., a voltage or a pressure). In particular, DACs are often used to convert finite-precision time series data to a continually varying physical signal. Provided that a signal's bandwidth meets the requirements of the Nyquist–Shannon sampling theorem (i.e., a baseband signal with bandwidth less than the Nyquist frequency) and was sampled with infinite resolution, the original signal can theoretically be reconstructed from the sampled data. However, the filtering ahead of an ADC cannot entirely eliminate frequencies above the Nyquist frequency, and these residual frequencies alias into the baseband range. In addition, the ADC's digital sampling process introduces some quantization error (rounding error), which manifests as low-level noise. These errors can be kept within the requirements of the targeted application (e.g. under the limited dynamic range of human hearing for audio applications). Applications DACs and ADCs are part of an enabling technology that has contributed greatly to the digital revolution. To illustrate, consider a typical long-distance telephone call. The caller's voice is converted into an analog electrical signal by a microphone, then the analog signal is converted to a digital stream by an ADC. The digital stream is then divided into network packets where it may be sent along with other digital data, not necessarily audio. The packets are then received at the destination, but each packet may take a completely different route and may not even arrive at the destination in the correct time order. The digital voice data is then extracted from the packets and assembled into a digital data stream. A DAC converts this back into an analog electrical signal, which drives an audio amplifier, which in turn drives a speaker, which finally produces sound. 
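To make the quantization-error point in the overview concrete, the following Python sketch quantizes a sine wave to a given bit depth and measures the resulting signal-to-noise ratio; for an ideal converter this approaches the textbook approximation of roughly 6.02·N + 1.76 dB for N bits. The 8-bit depth, 440 Hz tone, and sample count are arbitrary illustrative choices, not values from any particular device.

```python
import numpy as np

# Illustrative sketch: quantization error for an ideal N-bit converter.
# Bit depth and test signal are arbitrary choices for demonstration.
N_BITS = 8
LEVELS = 2 ** N_BITS

t = np.linspace(0.0, 1.0, 48_000, endpoint=False)
signal = np.sin(2 * np.pi * 440 * t)          # full-scale sine in [-1, 1]

# Quantize: map [-1, 1] onto integer codes, then back to analog values.
codes = np.round((signal + 1.0) / 2.0 * (LEVELS - 1)).astype(int)
reconstructed = codes / (LEVELS - 1) * 2.0 - 1.0

noise = reconstructed - signal                # quantization (rounding) error
snr_db = 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))
print(f"{N_BITS}-bit SNR ≈ {snr_db:.1f} dB")  # close to 6.02*N + 1.76 dB
```

Repeating the calculation with 16 bits gives roughly 98 dB, which illustrates why quantization noise can be kept below the dynamic range of human hearing, as noted above.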
Audio Most modern audio signals are stored in digital form (for example MP3s and CDs), and in order to be heard through speakers, they must be converted into an analog signal. DACs are therefore found in CD players, digital music players, and PC sound cards. Specialist standalone DACs can also be found in high-end hi-fi systems. These normally take the digital output of a compatible CD player or dedicated transport (which is basically a CD player with no internal DAC) and convert the signal into an analog line-level output that can then be fed into an amplifier to drive speakers. Similar digital-to-analog converters can be found in digital speakers such as USB speakers and in sound cards. In voice over IP applications, the source must first be digitized for transmission, so it undergoes conversion via an ADC and is then reconstructed into analog using a DAC on the receiving party's end. Video Video sampling works on a completely different scale because of the highly nonlinear response of both cathode ray tubes (for which the vast majority of digital video foundation work was targeted) and the human eye. A "gamma curve" is used to provide an appearance of evenly distributed brightness steps across the display's full dynamic range. Computer video applications with deep color resolution therefore need RAMDACs, because engineering a hardcoded value into the DAC for each output level of each channel becomes impractical (e.g. an Atari ST or Sega Genesis would require 24 such values; a 24-bit video card would need 768...). Given this inherent distortion, it is not unusual for a television or video projector to truthfully claim a linear contrast ratio (difference between darkest and brightest output levels) of 1000:1 or greater, equivalent to about 10 bits of precision, even though it may only accept signals with 8-bit precision and use an LCD panel that only represents 6 or 7 bits per channel. Video signals from a digital source, such as a computer, must be converted to analog form if they are to be displayed on an analog monitor. As of 2007, analog inputs were more commonly used than digital, but this changed as flat-panel displays with DVI and/or HDMI connections became more widespread. A video DAC is, however, incorporated in any digital video player with analog outputs. The DAC is usually integrated with some memory (RAM) containing conversion tables for gamma correction, contrast, and brightness, making a device called a RAMDAC. Digital potentiometer A device that is distantly related to the DAC is the digitally controlled potentiometer, used to control an analog signal digitally. Mechanical A one-bit mechanical actuator assumes two positions: one when on, another when off. The motion of several one-bit actuators can be combined and weighted with a whiffletree mechanism to produce finer steps. The IBM Selectric typewriter uses such a system. Communications DACs are widely used in modern communication systems, enabling the generation of digitally defined transmission signals. High-speed DACs are used for mobile communications, and ultra-high-speed DACs are employed in optical communications systems. Types The most common types of electronic DACs are: The pulse-width modulator, where a stable current or voltage is switched into a low-pass analog filter for a duration determined by the digital input code. This technique is often used for electric motor speed control and dimming LED lamps; a minimal sketch of the principle appears below. 
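The pulse-width principle can be illustrated with a short Python sketch: a digital code sets how many clock ticks per period the output is held high, and the low-pass filter recovers the average. The bit depth, the normalised 1 V reference, and the use of a simple mean as a stand-in for the analog filter are simplifying assumptions for illustration only.

```python
import numpy as np

# Minimal sketch of a pulse-width-modulation DAC (names are illustrative).
# An N-bit code sets how many clock ticks per period the output is high;
# a low-pass filter (modelled here as a simple average) recovers the mean.
N_BITS = 8
PERIOD = 2 ** N_BITS          # clock ticks per PWM period

def pwm_period(code: int) -> np.ndarray:
    """One PWM period: 'code' ticks high, the rest low."""
    out = np.zeros(PERIOD)
    out[:code] = 1.0          # switched reference voltage (normalised to 1 V)
    return out

code = 100                    # digital input code
wave = pwm_period(code)
analog = wave.mean()          # ideal low-pass filter output = duty cycle
print(f"code {code} -> {analog:.4f} V (expected {code / PERIOD:.4f} V)")
```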
Oversampling DACs or interpolating DACs, such as those employing delta-sigma modulation, use a pulse-density conversion technique with oversampling. Audio delta-sigma DACs are sold with 384 kHz sampling rates and quoted 24-bit resolution, though the effective quality is lower due to inherent noise. Some consumer electronics use a type of oversampling DAC referred to as a 1-bit DAC. The binary-weighted DAC, which contains individual electrical components for each bit of the DAC connected to a summing point, typically an operational amplifier. Each input to the summing point has a power-of-two weighting, with the largest current or voltage assigned to the most significant bit. This is one of the fastest conversion methods but suffers from poor accuracy because of the high precision required for each individual voltage or current. Switched resistor DAC contains a parallel resistor network. Individual resistors are enabled or bypassed in the network based on the digital input. Switched current-source DAC, in which different current sources are selected based on the digital input. Switched capacitor DAC contains a parallel capacitor network. Individual capacitors are connected or disconnected with switches based on the input. The R-2R ladder DAC, which is a binary-weighted DAC that uses a repeating cascaded structure of resistor values R and 2R. This improves precision due to the relative ease of producing equal-valued matched resistors. The successive approximation or cyclic DAC, which successively constructs the output during each cycle. Individual bits of the digital input are processed each cycle until the entire input is accounted for. The thermometer-coded DAC, which contains an equal resistor or current-source segment for each possible value of DAC output. An 8-bit thermometer DAC would have 255 segments, and a 16-bit thermometer DAC would have 65,535 segments. This is the fastest and highest-precision DAC architecture, but at the expense of requiring many components; practical implementations require high-density IC fabrication processes. Hybrid DACs, which use a combination of the above techniques in a single converter. Most DAC integrated circuits are of this type due to the difficulty of getting low cost, high speed, and high precision in one device. The segmented DAC, which combines the thermometer-coded principle for the most significant bits and the binary-weighted principle for the least significant bits. In this way, a compromise is obtained between precision (by the use of the thermometer-coded principle) and number of resistors or current sources (by the use of the binary-weighted principle). The full binary-weighted design means 0% segmentation; the full thermometer-coded design means 100% segmentation. Most DACs in this list rely on a constant reference voltage or current to create their output value. Alternatively, a multiplying DAC takes a variable input voltage or current as a conversion reference. This puts additional design constraints on the bandwidth of the conversion circuit. Modern high-speed DACs have an interleaved architecture, in which multiple DAC cores are used in parallel. Their output signals are combined in the analog domain to enhance the performance of the combined DAC. The combination of the signals can be performed either in the time domain or in the frequency domain. 
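To illustrate the pulse-density idea behind the oversampling delta-sigma DACs listed above, here is a minimal first-order delta-sigma modulator sketch in Python. It is a didactic model under simplifying assumptions (a ±1 output bitstream, first-order noise shaping only, no dithering), not a description of any real converter.

```python
import numpy as np

# Hedged sketch of a first-order delta-sigma (pulse-density) modulator,
# the principle behind the oversampling "1-bit DACs" described above.
def delta_sigma_1st_order(x: np.ndarray) -> np.ndarray:
    """Convert samples in [-1, 1] to a ±1 bitstream whose local average
    tracks the input; quantization noise is pushed to high frequencies."""
    integrator = 0.0
    bits = np.empty_like(x)
    for i, sample in enumerate(x):
        # Integrate the difference between input and previous output bit.
        integrator += sample - (bits[i - 1] if i else 0.0)
        bits[i] = 1.0 if integrator >= 0.0 else -1.0
    return bits

x = np.full(1024, 0.25)                          # constant input of 0.25
stream = delta_sigma_1st_order(x)
print(f"bitstream mean = {stream.mean():.3f}")   # ≈ 0.25
```

Low-pass filtering (here, simply averaging) the dense ±1 bitstream recovers the input value; real designs run at high oversampling ratios so that the shaped quantization noise falls outside the signal band.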
Performance The most important characteristics of a DAC are: Resolution The number of possible output levels the DAC is designed to reproduce. This is usually stated as the number of bits it uses, which is the binary logarithm of the number of levels. For instance, a 1-bit DAC is designed to reproduce 2 (2¹) levels, while an 8-bit DAC is designed for 256 (2⁸) levels. Resolution is related to the effective number of bits, which is a measurement of the actual resolution attained by the DAC. Resolution determines color depth in video applications and audio bit depth in audio applications. Maximum sampling rate The maximum speed at which the DAC's circuitry can operate and still produce correct output. The Nyquist–Shannon sampling theorem defines a relationship between this and the bandwidth of the sampled signal. Monotonicity The ability of a DAC's analog output to move only in the direction that the digital input moves (i.e., if the input increases, the output does not dip before asserting the correct output). This characteristic is very important for DACs used as a low-frequency signal source or as a digitally programmable trim element. Total harmonic distortion and noise (THD+N) A measurement of the distortion and noise introduced to the signal by the DAC. It is expressed as a percentage of the total power of unwanted harmonic distortion and noise that accompanies the desired signal. Dynamic range A measurement of the difference between the largest and smallest signals the DAC can reproduce, expressed in decibels. This is usually related to resolution and noise floor. Other measurements, such as phase distortion and jitter, can also be very important for some applications, some of which (e.g. wireless data transmission, composite video) may even rely on accurate production of phase-adjusted signals. Non-linear PCM encodings (A-law/μ-law, ADPCM, NICAM) attempt to improve their effective dynamic range by using logarithmic step sizes between the output signal strengths represented by each data bit. This trades greater quantization distortion of loud signals for better performance of quiet signals. Figures of merit Static performance: Differential nonlinearity (DNL) shows how much two adjacent code analog values deviate from the ideal 1 LSB step. Integral nonlinearity (INL) shows how much the DAC transfer characteristic deviates from an ideal one. That is, the ideal characteristic is usually a straight line; INL shows how much the actual voltage at a given code value differs from that line, in LSBs (1 LSB steps). Gain error. Offset error. Noise, which is ultimately limited by the thermal noise generated by passive components such as resistors. For audio applications and at room temperature, such noise is usually a little less than 1 μV (microvolt) of white noise. This practically limits resolution to less than 20–21 bits, even in 24-bit DACs. Frequency domain performance: Spurious-free dynamic range (SFDR) indicates in dB the ratio between the powers of the converted main signal and the greatest undesired spur. Signal-to-noise and distortion (SINAD) indicates in dB the ratio between the powers of the converted main signal and the sum of the noise and the generated harmonic spurs. i-th harmonic distortion (HDi) indicates the power of the i-th harmonic of the converted main signal. Total harmonic distortion (THD) is the sum of the powers of all the harmonics of the input signal. If the maximum DNL error is less than 1 LSB, then the converter is guaranteed to be monotonic. However, many monotonic converters may have a maximum DNL greater than 1 LSB. Time domain performance: Glitch impulse area (glitch energy)
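The static figures of merit above can be computed directly from measured output levels. The sketch below does so for a hypothetical 3-bit converter; the "measured" voltages are invented example data, and the end-point method used to define the ideal transfer line is one common convention among several.

```python
import numpy as np

# Illustrative computation of DNL and INL from measured DAC output levels.
# The 'measured' array is made-up example data for a 3-bit converter.
measured = np.array([0.00, 0.13, 0.24, 0.38, 0.51, 0.60, 0.74, 0.87])  # volts

lsb = (measured[-1] - measured[0]) / (len(measured) - 1)   # ideal step size
ideal = measured[0] + lsb * np.arange(len(measured))       # end-point line

dnl = np.diff(measured) / lsb - 1.0     # step-size deviation, in LSB
inl = (measured - ideal) / lsb          # deviation from the ideal line, in LSB

print("DNL (LSB):", np.round(dnl, 2))
print("INL (LSB):", np.round(inl, 2))
# Echoing the text above: if every DNL value stays within ±1 LSB, every
# step remains positive, so the converter is monotonic.
print("monotonic:", bool(np.all(dnl > -1.0)))
```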
Technology
Signal processing
null