| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
40,345,124 | https://en.wikipedia.org/wiki/Geoglossum%20cookeanum | Geoglossum cookeanum is a mushroom in the family Geoglossaceae.
References
Geoglossaceae
Fungi of Europe
Fungus species | Geoglossum cookeanum | Biology | 29 |
64,680,945 | https://en.wikipedia.org/wiki/Nonadaptive%20radiation | Nonadaptive radiations are a subset of evolutionary radiations (or species flocks) that are characterized by diversification that is not driven by resource partitioning. The species that are a part of a nonadaptive radiation will tend to have very similar niches, and in many (though not all) cases will be morphologically similar. Nonadaptive radiations are driven by nonecological speciation. In many cases, this nonecological speciation is allopatric, and the organisms are dispersal-limited such that populations can be geographically isolated within a landscape with relatively similar ecological conditions. For example, Albinaria land snails on islands in the Mediterranean and Batrachoseps salamanders from California each include relatively dispersal-limited, closely related, and ecologically similar species whose ranges often overlap minimally, a pattern consistent with allopatric, nonecological speciation. In other cases, such as certain damselflies and crickets from Hawaii, there can be range overlap in closely related species, and it is likely that sexual selection (and species recognition) plays a role in maintaining (and perhaps giving rise to) species boundaries.
See also
Adaptive radiation
Species complex
Ecological speciation
References
Speciation
Ecology
Evolutionary biology
Paleobiology | Nonadaptive radiation | Biology | 253 |
31,369,965 | https://en.wikipedia.org/wiki/Frog%20galvanoscope | The frog galvanoscope was a sensitive electrical instrument used to detect voltage in the late 18th and 19th centuries. It consists of a skinned frog's leg with electrical connections to a nerve. The instrument was invented by Luigi Galvani and improved by Carlo Matteucci.
The frog galvanoscope, and other experiments with frogs, played a part in the dispute between Galvani and Alessandro Volta over the nature of electricity. The instrument is extremely sensitive and continued to be used well into the nineteenth century, even after electromechanical meters came into use.
Terminology
Synonyms for this device include galvanoscopic frog, frog's leg galvanoscope, frog galvanometer, rheoscopic frog, and frog electroscope. The device is properly called a galvanoscope rather than galvanometer since the latter implies accurate measurement whereas a galvanoscope only gives an indication. In modern usage a galvanometer is a sensitive laboratory instrument for measuring current, not voltage. Everyday current meters for use in the field are called ammeters. A similar distinction can be made between electroscopes, electrometers, and voltmeters for voltage measurements.
History
Frogs were a popular subject of experiment in the laboratories of early scientists. They were small, easily handled, and there was a ready supply. Marcello Malpighi, for instance, used frogs in his study of lungs in the seventeenth century. Frogs were particularly suitable for the study of muscle activity. Especially in the legs, the muscle contractions are readily observed and the nerves are easily dissected out. Another desirable feature for scientists was that these contractions continued after death for a considerable time. Also in the eighteenth century, Leopoldo Caldani and Felice Fontana subjected frogs to electric shocks to test Albrecht von Haller's irritability theory.
Luigi Galvani, a lecturer at the University of Bologna, was researching the nervous system of frogs from around 1780. This research included the muscular response to opiates and static electricity, for which experiments the spinal cord and rear legs of a frog were dissected out together and the skin removed. In 1781, an observation was made while a frog was being so dissected. An electric machine discharged just at the moment one of Galvani's assistants touched the crural nerve of a dissected frog with a scalpel. The frog's legs twitched as the discharge happened. Galvani found that he could make the prepared leg of a frog (see the Construction section) twitch by connecting a metal circuit from a nerve to a muscle, thus inventing the first frog galvanoscope. Galvani published these results in 1791 in De viribus electricitatis.
An alternative version of the story of the frog response at a distance has the frogs being prepared for a soup on the same table as a running electric machine. Galvani's wife notices the frog twitch when an assistant accidentally touches a nerve and reports the phenomenon to her husband. This story originates with Jean-Louis Alibert and, according to Piccolino and Bresadola, was probably invented by him.
Galvani, and his nephew Giovanni Aldini, used the frog galvanoscope in their electrical experiments. Carlo Matteucci improved the instrument and brought it to wider attention. Galvani used the frog galvanoscope to investigate and promote the theory of animal electricity, that is, that there was a vital life force in living things that manifested itself as a new kind of electricity. Alessandro Volta opposed this theory, believing that the electricity that Galvani and other proponents were witnessing was due to metal contact electrification in the circuit. Volta's motivation in inventing the voltaic pile (the forerunner of the common zinc–carbon battery) was largely to enable him to construct a circuit entirely with non-biological material to show that the vital force was not necessary to produce the electrical effects seen in animal experiments. Matteucci, in answer to Volta, and to show that metal contacts were not necessary, constructed a circuit entirely out of biological material, including a frog battery. Neither the animal electricity theory of Galvani nor the contact electrification theory of Volta forms part of modern electrical science. However, Alan Hodgkin in the 1930s showed that there is indeed an ionic current flowing in nerves.
Matteucci used the frog galvanoscope to study the relationship of electricity to muscles, including in freshly amputated human limbs. Matteucci concluded from his measurements that there was an electric current continually flowing from the interior, to the exterior of all muscles. Matteucci's idea was widely accepted by his contemporaries, but this is no longer believed and his results are now explained in terms of injury potential.
Construction
An entire frog's hind leg is removed from the frog's body with the sciatic nerve still attached, and possibly also a portion of the spinal cord. The leg is skinned, and two electrical connections are made. These may be made to the nerve and the foot of the frog's leg by wrapping them with metal wire or foil, but a more convenient instrument is Matteucci's arrangement. The leg is placed in a glass tube with just the nerve protruding. Connection is made to two different points on the nerve.
According to Matteucci, the instrument is most accurate if direct electrical contact with muscle is avoided. That is, connections are made only to the nerve. Matteucci also advises that the nerve should be well stripped and that contacts to it can be made with wet paper in order to avoid using sharp metal probes directly on the nerve.
Operation
When the frog's leg is connected to a circuit with an electric potential, the muscles will contract and the leg will twitch briefly. It will twitch again when the circuit is broken. The instrument is capable of detecting extremely small voltages, and could far surpass other instruments available in the first half of the nineteenth century, including the electromagnetic galvanometer and the gold-leaf electroscope. For this reason, it remained popular long after other instruments became available. The galvanometer was made possible in 1820 by the discovery by Hans Christian Ørsted that electric currents would deflect a compass needle, and the gold-leaf electroscope was even earlier (Abraham Bennet, 1786). Yet Golding Bird could still write in 1848 that "the irritable muscles of a frog's legs were no less than 56,000 times more delicate a test of electricity than the most sensitive condensing electrometer." The word condenser used by Bird here means a coil, so named by Johann Poggendorff by analogy with Volta's term for a capacitor.
The frog galvanoscope can be used to detect the direction of electric current. A frog's leg that has been somewhat desensitised is needed for this. The sensitivity of the instrument is greatest with a freshly prepared leg and then falls off with time, so an older leg is best for this. The response of the leg is greater to currents in one direction than the other and with a suitably desensitised leg it may only respond to currents in one direction. For a current going into the leg from the nerve, the leg will twitch on making the circuit. For a current passing out of the leg, it will twitch on breaking the circuit.
The major drawback of the frog galvanoscope is that the frog leg frequently needs replacing. The leg will continue to respond for up to 44 hours, but after that a fresh one must be prepared.
References
Bibliography
Clarke, Edwin; Jacyna, L. S., Nineteenth-Century Origins of Neuroscientific Concepts, University of California Press, 1992.
Clarke, Edwin; O'Malley, Charles Donald, The Human Brain and Spinal Cord: a historical study illustrated by writings from antiquity to the twentieth century, Norman Publishing, 1996.
Bird, Golding, Chapter XX, "Physiological electricity, or galvanism", Elements of Natural Philosophy, London: John Churchill, 1848.
Hackmann, Willem D., "Galvanometer", in Bud, Robert; Warner, Deborah Jean (eds), Instruments of Science: An Historical Encyclopedia, pp. 257–259, Taylor & Francis, 1998.
Hare, Robert, "Of galvanism, or voltaic electricity", A Brief Exposition of the Science of Mechanical Electricity, Philadelphia: J. G. Auner, 1840.
Hellman, Hal, Great Feuds in Medicine, John Wiley and Sons, 2001.
Keithley, Joseph F., The Story of Electrical and Magnetic Measurements: From 500 BC to the 1940s, IEEE Press, 1999.
Piccolino, Marco; Bresadola, Marco, Shocking Frogs: Galvani, Volta, and the Electric Origins of Neuroscience, Oxford University Press, 2013.
Matteucci, Carlo, "The muscular current", Philosophical Transactions, pp. 283–295, 1845.
Wilkinson, Charles Henry, Elements of Galvanism, London: John Murray, 1804.
Biophysics
History of technology
Electrical meters | Frog galvanoscope | Physics,Technology,Engineering,Biology | 1,849 |
6,389,879 | https://en.wikipedia.org/wiki/Immunotoxicology | Immunotoxicology (sometimes abbreviated as ITOX) is the study of the toxicity of foreign substances called xenobiotics and their effects on the immune system. Some toxic agents that are known to alter the immune system include: industrial chemicals, heavy metals, agrochemicals, pharmaceuticals, drugs, ultraviolet radiation, air pollutants and some biological materials. The effects of these immunotoxic substances have been shown to alter both the innate and adaptive parts of the immune system. Consequences of xenobiotics affect the organ initially in contact (often the lungs or skin). Some commonly seen problems that arise as a result of contact with immunotoxic substances are: immunosuppression, hypersensitivity, and autoimmunity. The toxin-induced immune dysfunction may also increase susceptibility to cancer.
The study of immunotoxicology began in the 1970s. However, the idea that some substances have a negative effect on the immune system was not a novel concept, as people have observed immune system alterations resulting from contact with toxins since ancient Egypt. Immunotoxicology has become increasingly important when considering the safety and effectiveness of commercially sold products. In recent years, guidelines and laws have been created in an effort to regulate and minimize the use of immunotoxic substances in the production of agricultural products, drugs, and consumer products. One example of these regulations is the set of FDA guidelines mandating that all drugs be tested for toxicity to avoid negative interactions with the immune system, with in-depth investigations required whenever a drug shows signs of affecting the immune system. Scientists use both in vivo and in vitro techniques when determining the immunotoxic effects of a substance.
Immunotoxic agents can damage the immune system by destroying immune cells and changing signaling pathways. This has wide-reaching consequences in both the innate and adaptive immune systems. Changes in the adaptive immune system can be observed by measuring levels of cytokine production, modification of surface markers, activation, and cell differentiation. There are also changes in macrophages and monocyte activity indicating changes in the innate immune system.
Immunosuppression
Some common agents that have been shown to cause immunosuppression are corticosteroids, radiation, heavy metals, halogenated aromatic hydrocarbons, drugs, air pollutants and immunosuppressive drugs. These chemicals can result in mutations found in regulatory genes of the immune system, which alter the amount of critical cytokines produced and can cause insufficient immune responses when antigens are encountered. These agents have also been known to kill or damage immune cells and cells in the bone marrow, resulting in difficulty recognizing antigens and creating novel immune responses. This can be measured by decreased IgM and IgG antibody levels which are an indicator of immune suppression. T regulatory cells, which are critical to maintaining the correct level of response in the immune system, also appear to be altered by some agents. In the presence of certain immunotoxic substances, granulocytes of the innate immune system have also been observed to be damaged causing the rare disease agranulocytosis. Vaccine effectiveness can also be decreased when the immune system is suppressed by immunotoxic substances. In vitro T-lymphocyte activation assays have been useful when determining which substances have immunosuppressive properties.
Hypersensitivity
Hypersensitive or allergic reactions, such as asthma, are commonly associated with immunotoxic agents and the number of people exhibiting these symptoms is increasing in industrial countries. This is partially due to the increasing number of immunotoxic agents. Nanomaterials are commonly absorbed through the skin or inhaled and are known for causing hypersensitive responses by recruiting immune cells. These nanomaterials are often encountered when a person is in contact with chemicals in an occupational, consumer, or environmental setting. Agents that are known for creating a hypersensitive response include poison ivy, fragrances, cosmetics, metals, preservatives, and pesticides. These molecules are so small that they act as haptens, binding to larger molecules to induce an immune response. An allergic response is induced when T lymphocytes recognize these haptens and recruit professional antigen-presenting cells. IgE antibodies are important when looking at hypersensitive reactions but cannot be used to definitively determine the effects of an immunotoxic agent. Because of this, in vivo testing is the most effective way to determine the potential toxicity of nanomaterials and other agents that are believed to cause hypersensitivity.
Autoimmunity
Immunotoxic agents can increase the occurrence of immune system attacks on self molecules. Although autoimmunity mostly occurs as a result of genetic factors, immunotoxic agents such as asbestos, sulfadiazine, silica, paraffin and silicone can also increase the chance of an autoimmune attack. These agents are known for causing disturbances to the carefully regulated immune system and increasing the development of autoimmunity. Changes in the circulating regulatory and responder T cells are good indicators of an autoimmune response induced by an immunotoxic agent. The effects of autoimmunity have been examined primarily through studies with animal models. Currently, there is no screen to determine how agents affect human autoimmunity; because of this, much of the current knowledge about autoimmunity in response to immunotoxic agents comes from observations of individuals who have been exposed to suspected immunotoxic agents.
See also
Journal of Immunotoxicology
Toxicology
References
Toxicology
Branches of immunology | Immunotoxicology | Biology,Environmental_science | 1,168 |
78,928,068 | https://en.wikipedia.org/wiki/Dibenzylamine | Dibenzylamine is an organic compound with the formula (C6H5CH2)2NH. It is classified as a secondary amine, being the middle member of the series that includes the primary amine benzylamine (C6H5CH2NH2) and the tertiary amine tribenzylamine ((C6H5CH2)3N). It is a colorless oily substance with a faint ammonia-like odor. It is produced as a side product in the hydrogenation of benzonitrile:
2 C6H5CN + 4 H2 → (C6H5CH2)2NH + NH3
Selected reactions
Amides derived from dibenzylamine are useful in organic synthesis. Dibenzylamine is a typical substrate for C-N coupling reactions related to the Buchwald-Hartwig reaction.
References
Amines
Benzyl compounds | Dibenzylamine | Chemistry | 136 |
60,758,873 | https://en.wikipedia.org/wiki/Chlorobactane | Chlorobactane is the diagenetic product of an aromatic carotenoid produced uniquely by green-pigmented green sulfur bacteria (GSB) in the order Chlorobiales. Observed in organic matter as far back as the Paleoproterozoic, its identity as a diagnostic biomarker has been used to interpret ancient environments.
Background
Chlorobactene is a monocyclic accessory pigment used by green sulfur bacteria to capture electrons from wavelengths in the visible light spectrum. Green sulfur bacteria (GSB) live in anaerobic and sulfidic (euxinic) zones in the presence of light, so they are found most often in meromictic lakes and ponds, sediments, and certain regions of the Black Sea. The enzyme CrtU converts γ-carotene into chlorobactene by shifting the C17 methyl group from the C1 site to the C2 site.
Preservation
Following transport and burial, diagenetic processes saturate the hydrocarbon chain, turning it into the fully saturated structure of chlorobactane.
Isorenieratene is an aromatic light-harvesting molecule interpreted as a biomarker for brown-pigmented GSB in the same order, Chlorobiales, and its fossil form (isorenieratane) is often found co-occurring with chlorobactane in ancient organic material. Purple sulfur bacteria (PSB) also live in euxinic regions. They produce a different accessory pigment, okenone, that is preserved as okenane and often observed co-occurring with chlorobactane.
Measurement techniques
Gas chromatography coupled to mass spectrometry (GC/MS)
Organic molecules are first extracted from rocks using solvents, capitalizing on chemical properties like the polarity of the molecules to dissolve them. Usually, less than one percent of the organic material from a rock is successfully pulled out in this process, leaving behind undissolved material called kerogen. The organic-rich extract is subsequently purified using packed silica gel column chromatography – eluting the extract through the column with targeted solvents pulls out contaminants and remnant undissolved organic material, which will bind to the polar silica moieties. When the sample is then run through a gas chromatography (GC) column, the compounds separate based on their boiling points and interaction with a stationary phase within the column. The temperature ramping of a gas chromatography column can be programmed to obtain optimal separation of the compounds. After the GC, the molecules are ionized and fragmented into smaller, charged molecules. A mass spectrometer then separates the individual compounds based on their mass-to-charge (m/z) ratio and measures their relative abundance, producing a characteristic mass spectrum. Peaks representing the relative abundance of the compounds are identified as molecules based on their relative retention times, matches to a library of mass spectra with known compound identities, and comparison to standards.
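A common way to automate the final identification step described above is to compare each measured mass spectrum against a library of reference spectra using a similarity score such as the cosine between the two binned intensity vectors. The Python sketch below illustrates this idea under stated assumptions: the compound names, m/z values, intensities, and acceptance threshold are invented placeholders, not data for chlorobactane or any real library.

```python
import numpy as np

def spectrum_to_vector(spectrum, max_mz=600):
    """Bin a mass spectrum given as {m/z: intensity} onto a unit-mass grid and normalize."""
    vec = np.zeros(max_mz + 1)
    for mz, intensity in spectrum.items():
        vec[int(round(mz))] += intensity
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def cosine_match(measured, reference, max_mz=600):
    """Cosine similarity between two binned spectra (1.0 means identical peak patterns)."""
    return float(np.dot(spectrum_to_vector(measured, max_mz),
                        spectrum_to_vector(reference, max_mz)))

# Hypothetical reference library: compound name -> {m/z: relative intensity}.
library = {
    "candidate biomarker A (illustrative)": {119: 100, 133: 60, 546: 25},
    "candidate biomarker B (illustrative)": {133: 100, 134: 55, 546: 30},
}

# Hypothetical spectrum of one chromatographic peak.
measured_peak = {119: 95, 133: 62, 546: 22}

scores = {name: cosine_match(measured_peak, ref) for name, ref in library.items()}
best = max(scores, key=scores.get)
if scores[best] > 0.9:  # arbitrary acceptance threshold
    print(f"Best library match: {best} (cosine score {scores[best]:.2f})")
else:
    print("No confident match; compare retention times or run an authentic standard.")
```

In practice such a score is combined with the retention-time and standard comparisons mentioned above rather than used on its own.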
Case Study: Ocean Euxinia
Because green-pigmented green sulfur bacteria require higher light intensities than their brown-pigmented counterparts, the presence of chlorobactane in the rock record has been used as key evidence in interpretations for a very shallow euxinic layer in the ocean. The euxinic zone may have changed depth in the ocean at various points in Earth's history, such as with the advent of an oxygenated atmosphere around 2.45 billion years ago and the shallowing of the oxic zone within the last six kyr.
See also
Green sulfur bacteria (GSB)
Carotenoid
Isorenieratene
Purple sulfur bacteria (PSB)
Okenane
References
Carotenoids
Chlorobiota | Chlorobactane | Biology | 770 |
518,740 | https://en.wikipedia.org/wiki/Soot | Soot ( ) is a mass of impure carbon particles resulting from the incomplete combustion of hydrocarbons. Soot is considered a hazardous substance with carcinogenic properties. Most broadly, the term includes all the particulate matter produced by this process, including black carbon and residual pyrolysed fuel particles such as coal, cenospheres, charred wood, and petroleum coke classified as cokes or char. It can include polycyclic aromatic hydrocarbons and heavy metals like mercury.
Soot causes various types of cancer and lung disease.
Terminology
Definition
Among scientists, exact definitions for soot vary, depending partly on their field. For example, atmospheric scientists may use a different definition compared to toxicologists. Soot's definition can also vary across time, and from paper to paper even among scientists in the same field. A common feature of the definitions is that soot is composed largely of carbon based particles resulting from the incomplete burning of hydrocarbons or organic fuel such as wood. Some note that soot may be formed by other high temperature processes, not just by burning. Soot typically takes an aerosol form when first created. It tends to eventually settle onto surfaces, though some parts of it may be decomposed while still airborne. In some definitions, soot is defined purely as carbonaceous particles, but in others it is defined to include the whole ensemble of particles resulting from partial combustion of organic matter or fossil fuels - as such it can include non carbon elements like sulphur and even traces of metal. In many definitions, soot is assumed to be black, but in some definitions it can be composed partly or even mainly of brown carbon, and so can also be medium or even light gray in colour.
Related terms
Terms like "soot", "carbon black", and "black carbon" are often used to mean the same thing, even in the scientific literature, but other scientists have stated this is incorrect and that they refer to chemically and physically distinct things.
Carbon black is a term for the industrial production of powdery carbonaceous matter which has been underway since the 19th century. Carbon black is composed almost entirely of elemental carbon. Carbon black is not found in regular soot - only in the special soot that is intentionally produced for its manufacture, mostly from specialised oil furnaces.
Black carbon is a term that arose in the late twentieth century among atmospheric scientists, to describe strongly light-absorbing carbonaceous particles which have a significant climate forcing effect - second only to carbon dioxide as a contributor to short-term global warming. The term is sometimes used synonymously with soot, but is now used preferentially in atmospheric science, though some prefer more precise terms like 'light-absorbing carbon'. Unlike carbon black, black carbon is produced unintentionally. The chemical composition of black carbon is much more varied, and typically has a much lower proportion of elemental carbon, compared with carbon black. In some definitions, black carbon also includes charcoal, a type of matter where the chunks tend to be too large to have an aerosol form as is the case with soot.
Sources
Soot as an airborne contaminant in the environment has many different sources, all of which are results of some form of pyrolysis. They include soot from coal burning, internal-combustion engines, power-plant boilers, hog-fuel boilers, ship boilers, central steam-heat boilers, waste incineration, local field burning, house fires, forest fires, fireplaces, and furnaces. These exterior sources also contribute to the indoor environment, alongside indoor sources such as smoking of plant matter, cooking, oil lamps, candles, quartz/halogen bulbs with settled dust, fireplaces, exhaust emissions from vehicles, and defective furnaces. Soot in very low concentrations is capable of darkening surfaces or making particle agglomerates, such as those from ventilation systems, appear black. Soot is the primary cause of "ghosting", the discoloration of walls and ceilings or walls and flooring where they meet. It is generally responsible for the discoloration of the walls above baseboard electric heating units.
The formation and properties of soot depend strongly on the fuel composition, but may also be influenced by flame temperature. Regarding fuel composition, the rank ordering of sooting tendency of fuel components is: naphthalenes → benzenes → aliphatics. However, the order of sooting tendencies of the aliphatics (alkanes, alkenes, and alkynes) varies dramatically depending on the flame type. The difference between the sooting tendencies of aliphatics and aromatics is thought to result mainly from the different routes of formation. Aliphatics appear to first form acetylene and polyacetylenes, which is a slow process; aromatics can form soot both by this route and also by a more direct pathway involving ring condensation or polymerization reactions building on the existing aromatic structure.
Description
The Intergovernmental Panel on Climate Change (IPCC) adopted the description of soot particles given in the glossary of Charlson and Heintzenberg (1995), "Particles formed during the quenching of gases at the outer edge of flames of organic vapours, consisting predominantly of carbon, with lesser amounts of oxygen and hydrogen present as carboxyl and phenolic groups and exhibiting an imperfect graphitic structure".
Formation of soot is a complex process, an evolution of matter in which a number of molecules undergo many chemical and physical reactions within a few milliseconds. Soot is a powder-like form of amorphous carbon. Gas-phase soot contains polycyclic aromatic hydrocarbons (PAHs). The PAHs in soot are known mutagens and are classified as a "known human carcinogen" by the International Agency for Research on Cancer (IARC). Soot forms during incomplete combustion from precursor molecules such as acetylene. It consists of agglomerated nanoparticles with diameters between 6 and 30 nm. The soot particles can be mixed with metal oxides and with minerals and can be coated with sulfuric acid.
Soot formation mechanism
Many details of soot formation chemistry remain unanswered and controversial, but there have been a few agreements:
Soot begins with some precursors or building blocks.
Nucleation of heavy molecules occurs to form particles.
Surface growth of a particle proceeds by adsorption of gas phase molecules.
Coagulation happens via reactive particle–particle collisions.
Oxidation of the molecules and soot particles reduces soot formation.
Hazards
Soot, particularly diesel exhaust pollution, accounts for over one-quarter of the total hazardous pollution in the air.
Among these diesel emission components, particulate matter has been a serious concern for human health due to its direct and broad impact on the respiratory organs. In earlier times, health professionals associated PM10 (diameter < 10 μm) with chronic lung disease, lung cancer, influenza, asthma, and increased mortality rate. However, recent scientific studies suggest that these correlations are more closely linked with fine particles (PM2.5) and ultra-fine particles (PM0.1).
Long-term exposure to urban air pollution containing soot increases the risk of coronary artery disease.
Diesel exhaust (DE) gas is a major contributor to combustion-derived particulate-matter air pollution. In human experimental studies using an exposure chamber setup, DE has been linked to acute vascular dysfunction and increased thrombus formation. This serves as a plausible mechanistic link between the previously described association between particulate matter air pollution and increased cardiovascular morbidity and mortality.
Soot also tends to form in chimneys in domestic houses possessing one or more fireplaces. If a large deposit collects in one, it can ignite and create a chimney fire. Regular cleaning by a chimney sweep should eliminate the problem.
Soot modeling
The soot formation mechanism is difficult to model mathematically because of the large number of primary components of diesel fuel, complex combustion mechanisms, and the heterogeneous interactions during soot formation. Soot models are broadly categorized into three subgroups: empirical (equations that are adjusted to match experimental soot profiles), semi-empirical (combinations of mathematical equations and empirical sub-models, used for particle number density and soot volume and mass fraction), and detailed theoretical mechanisms (covering detailed chemical kinetics and physical models in all phases).
First, empirical models use correlations of experimental data to predict trends in soot production. Empirical models are easy to implement and provide excellent correlations for a given set of operating conditions. However, empirical models cannot be used to investigate the underlying mechanisms of soot production. Therefore, these models are not flexible enough to handle changes in operating conditions. They are only useful for testing previously established designed experiments under specific conditions.
Second, semi-empirical models solve rate equations that are calibrated using experimental data. Semi-empirical models reduce computational costs primarily by simplifying the chemistry in soot formation and oxidation. Semi-empirical models reduce the size of chemical mechanisms and use simpler molecules, such as acetylene as precursors.
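As a concrete illustration of the semi-empirical category, many two-equation formulations balance an acetylene-driven formation rate against an oxygen-driven oxidation rate, each with an Arrhenius-type temperature dependence, and integrate the net rate over the residence time in the flame. The Python sketch below shows only that structure; the rate constants, activation temperatures, concentrations, and the fixed flame temperature are placeholder values, not parameters from any calibrated published model.

```python
import numpy as np

def soot_mass_rate(m_soot, c_precursor, c_o2, T,
                   A_form=1.0, E_form=2.1e4, A_ox=5.0, E_ox=1.9e4):
    """Net soot mass rate: precursor-driven formation minus O2-driven oxidation.
    Arrhenius-style exponentials; all constants are illustrative placeholders."""
    formation = A_form * c_precursor * np.exp(-E_form / T)
    oxidation = A_ox * c_o2 * m_soot * np.exp(-E_ox / T)
    return formation - oxidation

# Explicit Euler integration over a short in-flame residence time.
dt, t_end = 1e-5, 5e-3        # time step and total time in seconds
m_soot = 0.0                  # soot mass in arbitrary units
T = 1800.0                    # assumed constant flame temperature, K
c_c2h2, c_o2 = 0.02, 0.05     # assumed acetylene and oxygen concentrations

for _ in range(int(t_end / dt)):
    m_soot += dt * soot_mass_rate(m_soot, c_c2h2, c_o2, T)

print(f"Soot mass after {t_end * 1e3:.1f} ms: {m_soot:.3e} (arbitrary units)")
```

A real semi-empirical model would additionally track particle number density and couple these rates to local, time-varying temperature and species fields.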
Detailed theoretical models use extensive chemical mechanisms containing hundreds of chemical reactions in order to predict concentrations of soot. Detailed theoretical soot models contain all the components present in the soot formation with a high level of detailed chemical and physical processes.
Finally, comprehensive models (detailed models) are usually expensive and slow to compute, as they are much more complex than empirical or semi-empirical models. Thanks to recent technological progress in computation, it has become more feasible to use detailed theoretical models and obtain more realistic results; however, further advancement of comprehensive theoretical models is limited by the accuracy of modeling of formation mechanisms.
Additionally, phenomenological models have found wide use recently. Phenomenological soot models, which may be categorized as semi-empirical models, correlate empirically observed phenomena in a way that is consistent with the fundamental theory, but is not directly derived from the theory. These models use sub-models developed to describe the different processes (or phenomena) observed during the combustion process. Examples of such sub-models include the spray model, lift-off model, heat release model, and ignition delay model. These sub-models can be empirically developed from observation or by using basic physical and chemical relations. Phenomenological models are reasonably accurate given their relative simplicity. They are useful, especially when the accuracy of the model parameters is low. Unlike empirical models, phenomenological models are flexible enough to produce reasonable results when multiple operating conditions change.
Applications
Historically, soot was used in manufacturing artistic paints and shoe polish, as well as a blackener for Russia leather for boots. With the advent of the printing press, it was used in printing ink well into the 20th century.
See also
Activated carbon
Atmospheric particulate matter
Bistre
Black carbon
Carbon black
Coal
Colorant
Creosote
Diesel particulate matter
Dust
Fullerene
Health effects of coal ash
Health effects of wood smoke
Indian ink
Joss paper
Open burning of waste
Rolling coal
Soot blower
Spheroidal carbonaceous particles
Sulfur dioxide
References
External links
Allotropes of carbon
IARC Group 1 carcinogens
Pollution
Air pollution | Soot | Chemistry | 2,315 |
3,765,996 | https://en.wikipedia.org/wiki/Idrialite | Idrialite is a rare hydrocarbon mineral with approximate chemical formula C22H14.
Idrialite usually occurs as soft orthorhombic crystals and is usually greenish yellow to light brown in color with bluish fluorescence. It is named after Idrija, a town in Slovenia, where its occurrence was first described.
The mineral has also been called idrialine, and branderz in German. It has also been called inflammable cinnabar due to its combustibility and association with cinnabar ores in the source locality. A mineral found in the Skaggs Springs location of California was described in 1925 and named curtisite, but was eventually found to consist of the same compounds as idrialite, in somewhat different amounts. Thus curtisite is now considered to be merely a variety of idrialite.
Discovery and occurrence
Idrialite was first described in 1832 for an occurrence in the Idrija region west of Ljubljana, northwestern Slovenia, mixed with clay, pyrite, quartz and gypsum associated with cinnabar.
It also occurs at the Skaggs Springs location in Sonoma County, in western Lake County, and in the Knoxville Mine in Napa County, California. It has also been reported from localities in France, Slovakia and Ukraine.
In the Skaggs Springs occurrence, the mineral occurs in a hot spring area of the Franciscan formation, around a vent in the sandstone that gave off flammable gases. The mineral was described in 1925 and named "curtisite" after the local resident L. Curtis who called attention to it. The crystals are square or six sided flakes, 1 mm in diameter, yellow to pistachio green in transmitted light. It is associated with opaline silica, realgar (arsenic sulfide) and metacinnabarite (mercuric sulfide), which had been deposited in that order before it.
Composition and properties
The curtisite variety is only slightly soluble in hot acetone, amyl acetate, butanol, and petroleum ether. The solubility is 0.5% or less in hot carbon bisulfide, carbon tetrachloride, chloroform, diethyl ether, or boiling benzene; about 1.5% in toluene, about 2.5% in xylene, and over 10% in hot aniline. The material purified by repeated recrystallization melts at 360–370 °C while turning very black. It sublimes, giving very thin iridescent colors.
Raman spectroscopy studies indicate that it may be a mixture of complex hydrocarbons including benzonaphthothiophenes (chemical formula: C16H10S) and dinaphthothiophenes (chemical formula: C20H12S).
Curtisite and idrialite have been found to be unique complex mixtures of over 100 polyaromatic hydrocarbons (PAHs) consisting of six specific PAH structural series with each member of a series differing from the previous member by addition of another aromatic ring. The curtisite and idrialite samples contained many of the same components but in considerably different relative amounts.
The major PAH constituents of the curtisite sample were: picene (a PAH with 5 fused benzene rings), dibenzo[a,h]fluorene, 11H-indeno[2,1-a]phenanthrene, benzo[b]phenanthro[2,1-d]thiophene, indenofluorenes, chrysene, and their methyl- and dimethyl-substituted homologues; the major components in the idrialite sample were higher-molecular-weight PAH, i.e. benzonaphthofluorenes (molecular weight 316), benzoindenofluorenes (MW 304) and benzopicene (MW 328), in addition to the compounds found in the curtisite sample.
Curtisite is also associated with small amounts of a dark brown oil, that appears to be responsible for some of the yellow color and most of the fluorescence, and can be separated by recrystallization.
Based on the composition, it was conjectured that the compounds were produced by medium-temperature pyrolysis of organic matter, then further modified by extended equilibration at elevated temperatures in the subsurface and by recrystallization during migration.
When distilled, it produces the mineral wax idrialin.
References
Organic minerals
Orthorhombic minerals
Luminescent minerals
Minerals described in 1832
Idrija | Idrialite | Chemistry | 945 |
4,008,035 | https://en.wikipedia.org/wiki/Efficiency%20movement | The efficiency movement was a major movement in the United States, Britain and other industrial nations in the early 20th century that sought to identify and eliminate waste in all areas of the economy and society, and to develop and implement best practices. The concept covered mechanical, economic, social, and personal improvement. The quest for efficiency promised effective, dynamic management rewarded by growth.
As a result of the influence of its best-known early proponent, Frederick Winslow Taylor, it is more often known as Taylorism.
United States
The efficiency movement played a central role in the Progressive Era in the United States, where it flourished 1890–1932. Adherents argued that all aspects of the economy, society and government were riddled with waste and inefficiency. Everything would be better if experts identified the problems and fixed them. The result was strong support for building research universities and schools of business and engineering, municipal research agencies, as well as reform of hospitals and medical schools, and the practice of farming. Perhaps the best known leaders were engineers Frederick Winslow Taylor (1856–1915), who used a stopwatch to identify the smallest inefficiencies, and Frank Bunker Gilbreth Sr. (1868–1924) who proclaimed there was always "one best way" to fix a problem.
Leaders including Herbert Croly, Charles R. van Hise, and Richard Ely sought to improve governmental performance by training experts in public service comparable to those in Germany, notably at the Universities of Wisconsin and Pennsylvania. Schools of business administration set up management programs oriented toward efficiency.
Municipal and state efficiency
Many cities set up "efficiency bureaus" to identify waste and apply the best practices. For example, Chicago created an Efficiency Division (1910–16) within the city government's Civil Service Commission, and private citizens organized the Chicago Bureau of Public Efficiency (1910–32). The former pioneered the study of "personal efficiency," measuring employees' performance through new scientific merit systems.
State governments were active as well. For example, Massachusetts set up its "Commission on Economy and Efficiency" in 1912. It made hundreds of recommendations.
Philanthropy
Leading philanthropists such as Andrew Carnegie and John D. Rockefeller actively promoted the efficiency movement. In his many philanthropic pursuits, Rockefeller believed in supporting efficiency. He said
To help an inefficient, ill-located, unnecessary school is a waste ...it is highly probable that enough money has been squandered on unwise educational projects to have built up a national system of higher education adequate to our needs, if the money had been properly directed to that end.
Conservation
The conservation movement regarding national resources came to prominence during the Progressive Era. According to historian Samuel P. Hays, the conservation movement was based on the "gospel of the efficiency".
The Massachusetts Commission on Economy and Efficiency, established in 1912, reflected the new concern with conservation.
President Roosevelt was the nation's foremost conservationist, putting the issue high on the national agenda by emphasizing the need to eliminate wasteful uses of limited natural resources. He worked with all the major figures of the movement, especially his chief advisor on the matter, Gifford Pinchot. Roosevelt was deeply committed to conserving natural resources, and is considered to be the nation's first conservation President.
In 1908, Roosevelt sponsored the Conference of Governors held in the White House, with a focus on natural resources and their most efficient use. Roosevelt delivered the opening address: "Conservation as a National Duty".
In contrast, environmentalist John Muir promulgated a very different view of conservation, rejecting the efficiency motivation. Muir instead preached that nature was sacred and humans are intruders who should look but not develop. Working through the Sierra Club he founded, Muir tried to minimize commercial use of water resources and forests. While Muir wanted nature preserved for the sake of pure beauty, Roosevelt subscribed to Pinchot's formulation, "to make the forest produce the largest amount of whatever crop or service will be most useful, and keep on producing it for generation after generation of men and trees."
National politics
In U.S. national politics, the most prominent figure was Herbert Hoover, a trained engineer who played down politics and believed dispassionate, nonpolitical experts could solve the nation's great problems, such as ending poverty.
After 1929, Democrats blamed the Great Depression on Hoover and helped to somewhat discredit the movement.
Antitrust
Boston lawyer Louis Brandeis (1856–1941) argued that bigness conflicted with efficiency and added a new political dimension to the Efficiency Movement. For instance, while fighting against legalized price fixing, Brandeis launched an effort to influence congressional policymaking with the help of his friend Norman Hapgood, who was then the editor of Harper's Weekly. He coordinated the publication of a series of articles (Competition Kills, Efficiency and the One-Price Article, and How Europe deals with the one-price goods), which were also distributed by the lobbying group American Fair Trade League to legislators, Supreme Court justices, governors, and twenty national magazines. For this work, he was asked to speak before a congressional committee considering the price-fixing bill he drafted. Here, he stated that "big business is not more efficient than little business" and that "it is a mistake to suppose that the department stores can do business cheaper than the little dealer." Brandeis's ideas on which business is most efficient conflicted with Croly's positions, which favored efficiency driven by a kind of consolidation gained through large-scale economic operations.
As early as 1895 Brandeis had warned of the harm that giant corporations could do to competitors, customers, and their own workers. The growth of industrialization was creating mammoth companies which he felt threatened the well-being of millions of Americans. In The Curse of Bigness he argued, "Efficiency means greater production with less effort and at less cost, through the elimination of unnecessary waste, human and material. How else can we hope to attain our social ideals?" He also argued against an appeal to Congress by the state-regulated railroad industry in 1910 seeking an increase in rates. Brandeis explained that instead of passing along increased costs to the consumer, the railroads should pursue efficiency by reducing their overhead and streamlining their operations, initiatives that were unprecedented during the time.
Bedaux system
The Bedaux system, developed by Franco-American management consultant Charles Bedaux (1886–1944) built on the work of F. W. Taylor and Charles E. Knoeppel.
Its distinctive advancement beyond these earlier thinkers was the Bedaux Unit or B, a universal measure for all manual work.
The Bedaux System was influential in the United States in the 1920s and Europe in the 1930s and 1940s, especially in Britain.
From the 1920s to the 1950s there were about one thousand companies in 21 countries worldwide that were run on the Bedaux System, including giants such as Swift's, Eastman Kodak, B.F. Goodrich, DuPont, Fiat, ICI and General Electric.
Relation to other movements
Later movements had echoes of the Efficiency Movement and were more directly inspired by Taylor and Taylorism. Technocracy, for instance, and others flourished in the 1930s and 1940s.
Postmodern opponents of nuclear energy in the 1970s broadened their attack to try to discredit movements that saw salvation for human society in technical expertise alone, or which held that scientists or engineers had any special expertise to offer in the political realm.
Coming into usage in 1990, the Western term lean manufacturing (lean enterprise, lean production, or simply "lean") refers to a business idea that considered the expenditure of resources for anything other than the creation of value for the end customer to be wasteful, and thus a target for elimination. Today the Lean concept is broadening to include a greater range of strategic goals, not just cost-cutting and efficiency.
Britain
In engineering, the concept of efficiency was developed in Britain in the mid-18th century by John Smeaton (1724–1792). Called the "father of civil engineering", he studied water wheels and steam engines. In the late 19th century there was much talk about improving the efficiency of the administration and economic performance of the British Empire.
National Efficiency was an attempt to discredit the old-fashioned habits, customs and institutions that put the British at a handicap in competition with the world, especially with Germany, which was seen as the epitome of efficiency. In the early 20th century, "National Efficiency" became a powerful demand — a movement supported by prominent figures across the political spectrum who disparaged sentimental humanitarianism and identified waste as a mistake that could no longer be tolerated. The movement took place in two waves; the first wave from 1899 to 1905 was made urgent by the inefficiencies and failures in the Second Boer War (1899–1902). Spectator magazine reported in 1902 there was "a universal outcry for efficiency in all departments of society, in all aspects of life". The two most important themes were technocratic efficiency and managerial efficiency. As White (1899) argued vigorously, the empire needed to be put on a business footing and administered to get better results. The looming threat of Germany, which was widely seen as a much more efficient nation, added urgency after 1902. Politically National Efficiency brought together modernizing Conservatives and Unionists, Liberals who wanted to modernize their party, and Fabians such as George Bernard Shaw and H. G. Wells, along with Beatrice and Sidney Webb, who had outgrown socialism and saw the utopia of a scientifically up-to-date society supervised by experts such as themselves. Churchill in 1908 formed an alliance with the Webbs, announcing the goal of a "National Minimum", covering hours, working conditions, and wages – it was a safety net below which the individual would not be allowed to fall.
Representative legislation included the Education Act 1902, which emphasized the role of experts in the schools system. Higher education was an important initiative, typified by the growth of the London School of Economics, and the foundation of Imperial College.
There was a pause in the movement between 1904 and 1909, when interest resumed. The most prominent new leaders included Liberals Winston Churchill and David Lloyd George, whose influence brought a bundle of reform legislation that introduced the welfare state to Britain.
Much of the popular and elite support for National Efficiency grew out of concern for Britain's military position, especially with respect to Germany. The Royal Navy underwent a dramatic modernization, most famously in the introduction of the Dreadnought, which in 1906 revolutionized naval warfare overnight.
Germany
In Germany the efficiency movement was called "rationalization" and it was a powerful social and economic force before 1933. In part it looked explicitly at American models, especially Fordism. The Bedaux system was widely adopted in the rubber and tire industry, despite strong resistance in the socialist labor movement to the Bedaux system. Continental AG, the leading rubber company in Germany, adopted the system and profited heavily from it, thus surviving the Great Depression relatively undamaged and improving its competitive capabilities. However most German businessmen preferred the home-grown REFA system which focused on the standardization of working conditions, tools, and machinery.
"Rationalization" meant higher productivity and greater efficiency, promising science would bring prosperity. More generally it promised a new level of modernity and was applied to economic production and consumption as well as public administration. Various versions of rationalization were promoted by industrialists and Social Democrats, by engineers and architects, by educators and academics, by middle class feminists and social workers, by government officials and politicians of many parties. It was ridiculed by the extremists in the Communist movement. As ideology and practice, rationalization challenged and transformed not only machines, factories, and vast business enterprises but also the lives of middle-class and working-class Germans.
Soviet Union
Ideas of scientific management were very popular in the Soviet Union. One of the leading theorists and practitioners of scientific management in Soviet Russia was Alexei Gastev. The Central Institute of Labour (Tsentralnyi Institut Truda, or TsIT), founded by Gastev in 1921 with Vladimir Lenin's support, was a veritable citadel of socialist Taylorism.
Fascinated by Taylorism and Fordism, Gastev led a popular movement for the “scientific organization of labor” (Nauchnaya Organizatsiya Truda, or NOT).
Because of its emphasis on the cognitive components of labor, some scholars consider Gastev's NOT to represent a Marxian variant of cybernetics. As with Pavel Florensky's concept of 'Organoprojection' (1919), a powerful man-machine metaphor underlay Nikolai Bernstein's and Gastev's approach.
Japan
W. Edwards Deming (1900–1993) brought the efficiency movement to Japan after World War II, teaching top management how to improve design (and thus service), product quality, testing and sales (the last through global markets), especially using statistical methods. Deming then brought his methods back to the U.S. in the form of quality control called continuous improvement process.
Citations
General and cited references
Secondary sources
Alexander, Jennifer K. The Mantra of Efficiency: From Waterwheel to Social Control (2008), international perspective.
Bruce, Kyle, and Chris Nyland. "Scientific Management, Institutionalism, and Business Stabilization: 1903–1923," Journal of Economic Issues Vol. 35, No. 4 (Dec., 2001), pp. 955–978.
Chandler, Alfred D. Jr. The Visible Hand: The Managerial Revolution in American Business (1977)
Fry, Brian R. Mastering Public Administration: From Max Weber to Dwight Waldo (1989) online edition
Hays, Samuel P. Conservation and the Gospel of Efficiency: The Progressive Conservation Movement 1890–1920 (1959).
Haber, Samuel. Efficiency and Uplift: Scientific Management in the Progressive Era, 1890–1920 (1964)
Hawley, Ellis W. "Herbert Hoover, the Commerce Secretariat, and the vision of the 'Associative State'." Journal of American History, (1974) 61: 116–140.
Jensen, Richard. "Democracy, Republicanism and Efficiency: The Values of American Politics, 1885–1930," in Byron Shafer and Anthony Badger, eds, Contesting Democracy: Substance and Structure in American Political History, 1775–2000 (U of Kansas Press, 2001) pp 149–180; online version
Jordan, John M. Machine-Age Ideology: Social Engineering and American Liberalism, 1911–1939 (1994).
Kanigel, Robert. The One Best Way: Frederick Winslow Taylor and the Enigma of Efficiency. (Penguin, 1997).
Knoedler; Janet T. "Veblen and Technical Efficiency," Journal of Economic Issues, Vol. 31, 1997
Knoll, Michael: From Kidd to Dewey: The Origin and Meaning of Social Efficiency. Journal of Curriculum Studies 41 (June 2009), No. 3, pp. 361–391.
Lamoreaux, Naomi and Daniel M. G. Raft eds. Coordination and Information: Historical Perspectives on the Organization of Enterprise University of Chicago Press, 1995
Lee, Mordecai. Bureaus of Efficiency: Reforming Local Government in the Progressive Era (Marquette University Press, 2008)
Merkle, Judith A. Management and Ideology: The Legacy of the International Scientific Management Movement (1980)
Nelson, Daniel. Frederick W. Taylor and the Rise of Scientific Management (1980).
Nelson, Daniel. Managers and Workers: Origins of the Twentieth-Century Factory System in the United States, 1880–1920 2nd ed. (1995).
Noble, David F. America by Design (1979).
Nolan, Mary. Visions of Modernity: American Business and the Modernization of Germany (1995)
Nolan, Mary. "Housework Made Easy: The Taylorized Housewife in Weimar Germany's Rationalized Economy," Feminist Studies. (1975) Volume: 16. Issue: 3. pp 549+
Searle, G. R. The quest for national efficiency: a study in British politics and political thought, 1899–1914 (1971)
Stillman Richard J. II. Creating the American State: The Moral Reformers and the Modern Administrative World They Made (1998) online edition
Primary sources
Dewey, Melville. "Efficiency Society" Encyclopedia Americana (1918) online vol 9 p 720
Emerson, Harrington, "Efficiency Engineering" Encyclopedia Americana (1918) online vol 9 pp 714–20
Taylor, Frederick Winslow Principles of Scientific Management (1913) online edition
Taylor, Frederick Winslow. Scientific Management: Early Sociology of Management and Organizations (2003), reprints Shop Management (1903), The Principles of Scientific Management (1911) and Testimony Before the Special House Committee (1912).
White, Arnold. Efficiency and empire (1901) online edition, influential study regarding the British Empire
19th century in economic history
20th century in economic history
Economic history of the United States
History of science and technology in the United States
History of technology
Industrial history
Social movements | Efficiency movement | Technology | 3,456 |
43,952,768 | https://en.wikipedia.org/wiki/Y-320 | Y-320 is an orally active immunomodulator that inhibits IL-17 production by CD4 T cells stimulated with IL-15, with an IC50 of 20 to 60 nM.
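For context, an IC50 such as the 20 to 60 nM value above is the inhibitor concentration producing half-maximal inhibition, normally estimated by fitting a Hill-type dose-response curve to inhibition measured across a concentration series. The Python sketch below shows such a fit; the concentrations and inhibition fractions are invented for illustration and are not data from any Y-320 assay.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_inhibition(conc_nM, ic50_nM, hill_slope):
    """Fractional inhibition as a function of inhibitor concentration (Hill equation)."""
    return conc_nM**hill_slope / (ic50_nM**hill_slope + conc_nM**hill_slope)

# Hypothetical dose-response data (not from any Y-320 experiment).
conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0, 1000.0])     # nM
inhibition = np.array([0.03, 0.08, 0.22, 0.48, 0.71, 0.88, 0.95])

(ic50_fit, slope_fit), _ = curve_fit(hill_inhibition, conc, inhibition, p0=[50.0, 1.0])
print(f"Fitted IC50 ≈ {ic50_fit:.1f} nM, Hill slope ≈ {slope_fit:.2f}")
```

Reported ranges such as 20 to 60 nM typically reflect repeating this kind of fit across separate donors or experiments.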
Biological activity
In vitro
Y-320 also inhibits the production of IFN-γ and TNF-α by mouse CD4 T cells stimulated with IL-15, CXCL12, and anti-CD3 mAb.
In vivo
Y-320 (0.3-3 mg/kg p.o.) ameliorates collagen-induced arthritis (CIA) in mice with a reduction of IL-17 mRNA in arthritic joints, and also shows therapeutic effects on CIA in cynomolgus monkeys. Moreover, Y-320 shows a synergistic effect in combination with anti-TNF-α mAb on chronic-progressing CIA in mice.
References
Ketones
Nitriles
4-Chlorophenyl compounds
Pyrazoles | Y-320 | Chemistry | 200 |
24,677,325 | https://en.wikipedia.org/wiki/Emergency%20airworthiness%20directive | An emergency airworthiness directive (EAD) is an airworthiness directive issued when unsafe conditions require immediate action by an aircraft owner or operator. An EAD is published by a responsible authority such as the FOCA, EASA or FAA related to airworthiness and maintenance of aircraft and aircraft parts. It contains measures which must be accomplished and the related periods to preserve their airworthiness. Technical information is addressed to operators and maintenance organisations of affected aircraft only. EADs become effective upon receipt of notification.
Notable incidents that have led to emergency airworthiness directives
On May 25, 1979, American Airlines Flight 191, a McDonnell Douglas DC-10 crashed after takeoff from Chicago O'Hare Airport. An engine separated from the plane, damaging electrical and hydraulic systems, causing the left wing's slats to retract and stall that wing. This led to the type being grounded in June 1979.
On August 20, 2007, China Airlines Flight 120, a Boeing 737-800 inbound from Taipei, caught fire shortly after landing at Naha Airport in Okinawa Prefecture, Japan. There were no fatalities. Following this incident, the FAA issued an Emergency Airworthiness Directive on August 25 ordering inspection of all Boeing 737NG series aircraft for loose components in the wing leading edge slats within 24 days.
On October 7, 2008, Qantas Flight 72, a scheduled flight from Singapore Changi Airport to Perth Airport, made an emergency landing at Learmonth airport near the town of Exmouth, Western Australia following a pair of sudden uncommanded pitch-down manoeuvres that resulted in serious injuries to many of the occupants. The aircraft was equipped with a Northrop Grumman made ADIRS, which investigators sent to the manufacturer in the US for further testing. On 15 January 2009 the EASA issued an Emergency Airworthiness Directive to address the above A330 and A340 Northrop-Grumman ADIRU problem of incorrectly responding to a defective inertial reference.
On January 16, 2013, the FAA issued an emergency airworthiness directive ordering all U.S. airlines to ground the Boeing 787s in their fleets due to problems with the aircraft's lithium-ion battery. The directive came after the second incident of battery fire aboard the aircraft. This was the first time that the FAA had issued a general grounding of an aircraft model since 1979's DC-10 grounding.
On November 7, 2018, nine days after the crash of Lion Air flight JT610 which killed all 189 people on board, the Federal Aviation Administration issued an emergency airworthiness directive concerning a possible problem with the AoA (Angle of Attack) display of the Boeing 737 MAX 8 and MAX 9 aircraft types.
On January 6, 2024, the FAA issued an EAD grounding all Boeing 737 MAX 9 aircraft for inspections, following an incident one day earlier on Alaska Airlines Flight 1282 where a door plug blew out while the flight was underway, leading to uncontrolled decompression of the cabin and subsequent emergency landing of the aircraft involved.
References
External links
FAA
FOCA
EASA
Aviation licenses and certifications
Aviation safety
Aircraft maintenance | Emergency airworthiness directive | Engineering | 645 |
38,852,319 | https://en.wikipedia.org/wiki/Wire%20of%20Death | The Wire of Death was a lethal electric fence created by the German military to control the Dutch–Belgian frontier after the occupation of Belgium during the First World War.
Terminology
The name 'Wire of Death' is an English rendition of one of its popular Dutch names, which literally means "death wire". As the war continued and more and more victims fell to the electric fence, it became known simply as "the Wire". To the German authorities it was officially known by a German name meaning "High Voltage Border Obstacle". Parallels have been made between the 'Death Wire' and the later Iron Curtain.
Background
As Germany invaded neutral Belgium, Belgians began to cross the border to the Netherlands en masse. In 1914 one million Belgian refugees were already in the Netherlands, but throughout the war, refugees kept coming and tried to cross the border. Many wanted to escape German occupation, others wanted to join their relatives who had already fled, and some wanted to take part in the war and chose this detour to join the forces on the Western Front.
Many of these refugees were also men trying to get to France through the Netherlands and the United Kingdom to enlist in the Allied armies, encouraged by Albert I of Belgium and Cardinal Mercier to defend the last remaining unoccupied Belgian territory.
Therefore, the Germans decided to build the wire to prevent these volunteers as well as spies from frequently crossing the Belgian-Dutch border and reaching the British secret service in Rotterdam.
The Belgian government also kept a post office in the unoccupied Belgian enclave of Baarle-Hertog.
Construction
Construction began in the spring of 1915. The fence consisted of 2,000-volt electrified wire spanning the length of the Dutch-Belgian border from Aix-la-Chapelle to the River Scheldt. Anyone found close to the wire who was not able to officially explain their presence was summarily executed, although the German border guards took care not to fire into the Netherlands, which was officially neutral.
Dutch reaction
The Dutch government did not protest the construction of the Wire. For the Netherlands, it was a sign that its neutrality was being recognized, and the Dutch government did everything in its power to preserve that neutrality. Any actions that could be seen as cooperation were avoided as much as possible. The Wire did emphasize to the Dutch people that there was a dire situation of war right outside the Netherlands during the Great War, and so much so that many people were willing to flee from it at the risk of their very lives. From the start of the fence's construction, Dutch citizens were warned of the deadly consequences of touching the Wire. Warning signs had also been posted on the Dutch side of the border, saying: Hoogspanning - Levensgevaar (High voltage - Lethal Danger). Despite the fact that several Dutch citizens were also killed by the Wire, the Netherlands still never objected to it.
For the Dutch, only the field police were allowed to come close to the border and thus close to the Wire. Strict action was taken against those who tried to cross the Wire. Arrested deserters were interned in the many camps in the Netherlands. Smugglers, who were very active due to the large shortages in both countries, were prosecuted by the courts.
Result and legacy
The number of victims is estimated to range between 2,000 and 3,000 people. Local newspapers in the Southern Netherlands carried almost daily reports about people who were 'lightninged to death'. However, many also succeeded in overcoming the fence, often by employing dangerous or creative methods, ranging from the use of very large ladders and tunnels to pole vaulting and binding porcelain plates onto shoes in an attempt to insulate themselves.
The wire also separated families and friends, as along the Dutch–Belgian border the Dutch and the Flemings (Dutch-speaking Belgians), despite living in different states, often intermarried or otherwise socialized with each other. Funeral processions used to walk to the fence and halt there, to give relatives and friends on the other side the opportunity to pray and say farewell. The (neutral) Dutch government, which initially did not object, protested the wire later on several occasions after its existence caused public outrage in the Netherlands. The great number of fatalities not only resulted in a sharp increase in Dutch anti-German sentiment (in a country which had up until then been mostly hostile to Britain due to the Second Boer War) but also made smuggling goods in the border area much more dangerous and therefore more lucrative for local smugglers.
The fence did not completely follow the border and did not cross rivers. The Germans also allowed locals to pass through for church services, on market days and during harvest. In October 1918 the Germans opened the border to allow refugees from France and Belgium through rather than clog up German lines of communication in Belgium. At the end of the war, the Kaiser crossed the border from Belgium into the neutral Netherlands to take refuge there.
Immediately after the signing of the armistice in November 1918, the power plants around the wire were shut down and locals on both sides of the border soon destroyed the much-hated fence. Today all that remains of the original wire are some warning signs; however in some areas certain stretches have been reconstructed such as near Hamont-Achel, Zondereigen, Molenbeersel and between Achtmaal and Nieuwmoer in nature reserve "De Maatjes" by observation post "De Klot".
References
Bibliography
Further reading
External links
Dutch Public Television documentary
1910s in the Netherlands
Belgium–Netherlands border
Border barriers
Rape of Belgium | Wire of Death | Engineering | 1,133 |
36,132,862 | https://en.wikipedia.org/wiki/Kosmos%20862 | Kosmos 862 ( meaning Cosmos 862) was a Soviet US-K missile early warning satellite which was launched in 1976 as part of the Soviet military's Oko programme. The satellite was designed to identify missile launches using optical telescopes and infrared sensors.
Launch
Kosmos 862 was launched from Site 43/4 at Plesetsk Cosmodrome in the Russian SFSR. A Molniya-M carrier rocket with a 2BL upper stage was used to perform the launch, which took place at 09:12 UTC on 22 October 1976.
Orbit
The launch successfully placed the satellite into a molniya orbit. It subsequently received its Kosmos designation, and the international designator 1976-105A. The United States Space Command assigned it the Satellite Catalog Number 9495.
The satellite self-destructed on 15 March 1977, breaking into 13 pieces, several of which are still in orbit.
See also
1976 in spaceflight
List of Kosmos satellites (751–1000)
List of Oko satellites
List of R-7 launches (1975-1979)
References
Spacecraft that broke apart in space
Kosmos satellites
1976 in spaceflight
1976 in the Soviet Union
Oko
Spacecraft launched by Molniya-M rockets
Spacecraft launched in 1976 | Kosmos 862 | Technology | 256 |
61,514,499 | https://en.wikipedia.org/wiki/C23H32O4S | The molecular formula C23H32O4S (molar mass: 404.563 g/mol) may refer to:
6β-Hydroxy-7α-thiomethylspironolactone
7α-Thiomethylspironolactone sulfoxide
Molecular formulas | C23H32O4S | Physics,Chemistry | 82 |
43,625,508 | https://en.wikipedia.org/wiki/Orbit%20of%20Venus | Venus has an orbit with a semi-major axis of 0.723 AU (108.2 million km) and an eccentricity of 0.007. The low eccentricity and comparatively small size of its orbit give Venus the least range in distance between perihelion and aphelion of the planets: 1.46 million km. The planet orbits the Sun once every 225 days and travels about 4.54 AU (680 million km) in doing so, giving an average orbital speed of about 35 km/s.
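The perihelion–aphelion range quoted above follows directly from the semi-major axis and eccentricity, since r_min = a(1 − e) and r_max = a(1 + e). A minimal Python sketch of that arithmetic (the slightly more precise eccentricity of 0.0068 is an assumption for illustration, not a figure from this article):

```python
# Perihelion and aphelion of Venus from its orbital elements.
# a and e are approximate published values; treat them as illustrative.
AU_KM = 149_597_870.7      # kilometres per astronomical unit

a = 0.7233                 # semi-major axis, in au
e = 0.0068                 # orbital eccentricity (slightly more precise than 0.007)

perihelion = a * (1 - e) * AU_KM
aphelion = a * (1 + e) * AU_KM

print(f"perihelion: {perihelion / 1e6:.2f} million km")
print(f"aphelion:   {aphelion / 1e6:.2f} million km")
print(f"range:      {(aphelion - perihelion) / 1e6:.2f} million km")  # roughly 1.47 million km (the article quotes 1.46)
```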
Conjunctions and transits
When the geocentric ecliptic longitude of Venus coincides with that of the Sun, it is in conjunction with the Sun – inferior if Venus is nearer and superior if farther. The distance between Venus and Earth varies from about 42 million km (at inferior conjunction) to about 258 million km (at superior conjunction). The average period between successive conjunctions of one type is 584 days – one synodic period of Venus. Five synodic periods of Venus is almost exactly 13 sidereal Venus years and 8 Earth years, and consequently the longitudes and distances almost repeat.
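The 584-day synodic period and the 8-year/13-Venus-year/5-synodic-period near-resonance mentioned above can be checked from the two sidereal orbital periods alone; a short Python sketch (the period values are rounded published figures, used here only for illustration):

```python
# Synodic period of Venus from the two sidereal orbital periods.
EARTH_YEAR = 365.256   # days, sidereal
VENUS_YEAR = 224.701   # days, sidereal

# For an inferior planet: 1/synodic = 1/inner - 1/outer
synodic = 1 / (1 / VENUS_YEAR - 1 / EARTH_YEAR)
print(f"synodic period: {synodic:.1f} days")          # ~583.9 days

# Near-resonance: 5 synodic periods vs 8 Earth years vs 13 Venus years
print(f"5 synodic periods = {5 * synodic:.1f} days")   # ~2919.6
print(f"8 Earth years     = {8 * EARTH_YEAR:.1f} days") # ~2922.0
print(f"13 Venus years    = {13 * VENUS_YEAR:.1f} days")# ~2921.1
```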
The 3.4° inclination of Venus's orbit is great enough to usually prevent the inferior planet from passing directly between the Sun and Earth at inferior conjunction. Such solar transits of Venus rarely occur, but with great predictability and interest.
Close approaches to Earth and Mercury
In this current era, the nearest that Venus comes to Earth is just under 40 million km.
Because the range of heliocentric distances is greater for the Earth than for Venus, the closest approaches come near Earth's perihelion. The Earth's declining eccentricity is increasing the minimum distances. The last time Venus drew nearer than 39.5 million km was in 1623, but that will not happen again for many millennia, and in fact after 5683 Venus will not even come closer than 40 million km for about 60,000 years.
The orientation of the orbits of the two planets is not favorable for minimizing the close approach distance. The longitudes of perihelion were only 29 degrees apart at J2000, so the smallest distances, which come when inferior conjunction happens near Earth's perihelion, occur when Venus is near perihelion. An example was the transit of December 6, 1882: Venus reached perihelion Jan 9, 1883, and Earth did the same on December 31. Venus was 0.7205 au from the Sun on the day of transit, decidedly less than average.
Moving far backwards in time, more than 200,000 years ago Venus sometimes passed by at a distance from Earth of barely less than 38 million km, and will next do that after more than 400,000 years.
Venus and Earth make the closest approaches to each other of any pair of planets, but Venus and Mercury come close to one another more often. Likewise, while Venus approaches Earth the most closely, Mercury is more often the planet nearest to Earth. That said, Venus and Earth still have a lower gravitational potential difference between them than either has with any other planet, so the delta-v needed to transfer between them is lower than for transfers to any other planet.
The distance between Venus and Mercury will become smaller over time primarily because of Mercury's increasing eccentricity.
Historical importance
The discovery of phases of Venus by Galileo in 1610 was important. It contradicted the model of Ptolemy which considered all celestial objects to revolve around the Earth and was consistent with others, such as those of Tycho and Copernicus.
In Galileo’s day the prevailing model of the universe was based on the assertion by the Greek astronomer Ptolemy almost 15 centuries earlier that all celestial objects revolve around Earth (see Ptolemaic system). Observation of the phases of Venus was inconsistent with this view but was consistent with the Polish astronomer Nicolaus Copernicus’s idea that the solar system is centered on the Sun. Galileo’s observation of the phases of Venus provided the first direct observational evidence for Copernican theory.
Observations of transits of Venus across the Sun have played a major role in the history of astronomy in the determination of a more accurate value of the astronomical unit.
Accuracy and predictability
Venus has a very well observed and predictable orbit. From the perspective of all but the most demanding, its orbit is simple. An equation in Astronomical Algorithms that assumes an unperturbed elliptical orbit predicts the perihelion and aphelion times with an error of a few hours. Using orbital elements to calculate those distances agrees to actual averages to at least five significant figures. Formulas for computing position straight from orbital elements typically do not provide or need corrections for the effects of other planets.
However, observations are much better now, and space age technology has replaced the older techniques. E. Myles Standish wrote: "Classical ephemerides over the past centuries have been based entirely upon optical observations: almost exclusively, meridian circle transit timings. With the advent of planetary radar, spacecraft missions, VLBI, etc., the situation for the four inner planets has changed dramatically." For DE405, created in 1998, optical observations were dropped and, as he wrote, "initial conditions for the inner four planets were adjusted to ranging data primarily..." Now the orbit estimates are dominated by observations of the Venus Express spacecraft. The orbit is now known to sub-kilometer accuracy.
Table of orbital parameters
No more than five significant figures are presented here, and to this level of precision the numbers match very well the VSOP87 elements and calculations derived from them, Standish's (of JPL) 250-year best fit, Newcomb's, and calculations using the actual positions of Venus over time.
Dust ring
Venus's orbital space has been shown to have its own dust ring-cloud, with a suspected origin either from asteroids trailing Venus, from interplanetary dust migrating in waves, or from the remains of the Solar System's circumstellar disc, out of which its protoplanetary disc and then the Solar planetary system itself formed.
References
Dynamics of the Solar System
Venus | Orbit of Venus | Astronomy | 1,201 |
30,013,204 | https://en.wikipedia.org/wiki/Globally%20asynchronous%20locally%20synchronous | Globally asynchronous locally synchronous (GALS), in electronics, is an architecture for designing electronic circuits that addresses the problem of safe and reliable data transfer between independent clock domains. GALS is a model of computation that emerged in the 1980s. It allows designers to build computer systems consisting of several synchronous islands (using synchronous programming for each such island) that interact with other islands using asynchronous communication, e.g. with FIFOs.
Details
A GALS circuit consists of a set of locally synchronous modules communicating with each other via asynchronous wrappers. Each synchronous subsystem ("clock domain") can run on its own independent clock (frequency). Advantages include much lower electromagnetic interference (EMI). A CMOS circuit (logic gates) requires a relatively large supply current when changing state from 0 to 1. In a synchronous circuit these changes are aggregated, as most of them are initiated by an active clock edge, so large spikes in supply current occur at active clock edges. These spikes can cause large electromagnetic interference and may lead to circuit malfunction. One way to limit these spikes is to use a large number of decoupling capacitors. Another solution is the GALS design style: the design is locally synchronous (and thus easier to design than an asynchronous circuit) but globally asynchronous, i.e. there are several different clock regimes (e.g. phase-shifted clocks, or clocks with different active edges), so the supply current spikes do not all occur at the same time. Consequently, the GALS design style is often used in system on a chip (SoC) designs, and especially in network on a chip (NoC) architectures for SoCs.
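As a software analogy only (not a hardware implementation), the sketch below models the GALS idea in Python: two locally synchronous "islands" run on independent clock periods and exchange data through an asynchronous FIFO acting as the wrapper between clock domains. All names and timing values are illustrative.

```python
# Software analogy of a GALS system: two synchronous islands with
# independent "clocks" (loop periods) exchanging data via an async FIFO.
import threading
import queue
import time

fifo = queue.Queue(maxsize=8)        # asynchronous wrapper between the two domains

def producer_island(clock_period_s, n_items):
    """Locally synchronous producer: one unit of work per local 'clock tick'."""
    for i in range(n_items):
        time.sleep(clock_period_s)   # wait for the next local clock edge
        fifo.put(i)                  # hand the data off through the FIFO
    fifo.put(None)                   # sentinel: no more data

def consumer_island(clock_period_s):
    """Locally synchronous consumer running on a different, unrelated clock."""
    while True:
        item = fifo.get()            # blocks until data crosses the domain boundary
        if item is None:
            break
        time.sleep(clock_period_s)   # process the item at the local clock rate
        print(f"consumed {item}")

t1 = threading.Thread(target=producer_island, args=(0.01, 5))
t2 = threading.Thread(target=consumer_island, args=(0.03,))
t1.start(); t2.start()
t1.join(); t2.join()
```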
Some larger GALS circuits contain multiple CPUs.
Generally each CPU in such an asynchronous array of simple processors has its own independent oscillator.
That oscillator can be halted when there's no work for its CPU to do.
In some cases each CPU is further divided into smaller modules, each with their own independent clock,
or in a few cases no clock at all.
See also
Synchronous programming
Asynchronous programming
Concurrency (computer science)
Asynchronous system
Clock domain crossing
SIGNAL – a dataflow-oriented synchronous language enabling multi-clock and GALS specifications
References
General
Dataflow Architectures for GALS
Synchronization
Digital circuits | Globally asynchronous locally synchronous | Engineering | 526 |
12,151,020 | https://en.wikipedia.org/wiki/TAN-1057%20C | TAN-1057 C and TAN-1057 D are organic compounds found in the Flexibacter sp. PK-74 bacterium. TAN-1057 C and D are closely related structurally as diastereomers. Also related are TAN-1057 A and TAN-1057 B, isolated from the same bacteria. The four compounds have been shown to be effective antibiotics against methicillin-resistant strains of Staphylococcus aureus, acting through the inhibition of protein biosynthesis.
References
Antibiotics
Alkaloids
Guanidines
Lactams
Diazepines | TAN-1057 C | Chemistry,Biology | 122 |
75,359,163 | https://en.wikipedia.org/wiki/Nemtabrutinib | Nemtabrutinib (MK-1026, formerly ARQ 531) is a small molecule drug that works as a reversible Bruton's tyrosine kinase (BTK) inhibitor; unlike other BTK inhibitors it also works against some mutated forms of BTK. Merck paid $2.7 billion to acquire the company ArQule and the drug, which is being investigated as a cancer treatment.
References
Tyrosine kinase inhibitors
Small-molecule drugs
Phenol ethers
Chloroarenes
Benzaldehydes
Pyrrolopyridines | Nemtabrutinib | Chemistry | 124 |
37,839,995 | https://en.wikipedia.org/wiki/SR1%20RNA | In molecular biology, the SR1 RNA is a small RNA (sRNA) produced by species of Bacillus and closely related bacteria. It is a dual-function RNA which acts both as a protein-coding RNA and as a regulatory sRNA.
SR1 RNA is involved in the regulation of arginine catabolism. SR1 RNA binds to complementary stretches of ahrC mRNA (also known as argR) and inhibits its translation. AhrC encodes an arginine repressor protein which represses synthesis of arginine biosynthetic enzymes and activates arginine catabolic enzymes via regulation of the rocABC and rocDEF operons.
In addition to acting as a sRNA, SR1 also encodes a small peptide, SR1P. SR1P binds to glyceraldehyde-3-phosphate dehydrogenase (GapA) and stabilises the gapA operon mRNAs.
SR1 expression is regulated by CcpA and CcpN.
See also
Bacterial small RNA
References
RNA
Non-coding RNA | SR1 RNA | Chemistry | 223 |
8,688,642 | https://en.wikipedia.org/wiki/XtremPC | XtremPC was a computer magazine from Romania founded in 1998. XtremPC included previews and reviews on computer hardware, software, PC games and gadgets, as well as IT news. Although its major focus was on personal computers only, later editions started including sections dedicated to game consoles as well. XtremPC was the first Romanian magazine to include a DVD in 2004, followed two years later by LeveL. The last issue of XtremPC was the May 2010 issue (No. 120), which appeared on 3 June 2010. Publication of the magazine was temporarily suspended as a result of a drop in the number of readers.
Format
XtremPC included four main sections:
IT Express – news and articles regarding the latest innovations in the IT world;
Hardware – news, previews, reviews, tests and comparison charts of computer hardware;
Software & Communication – news, reviews and tests on computer software and communication and multimedia devices;
Jocuri (Games) – news and reviews on PC games; later included console games as well.
Editions
The later issues of the magazine were available in three editions based on the type of digital media that they included:
XtremPC (key-coloured in green) – included the magazine only, priced at 5.9 lei (approx. US$2)
XtremPC CD (key-coloured in orange) – included the magazine as well as a Compact Disc, priced at 7.9 lei (approx. US$2.6)
XtremPC DVD (key-coloured in blue) – included the magazine as well as a DVD, priced at 12.9 lei (approx. US$4.2)
Currently, all three editions of XtremPC are out of print. The website has been shut down, but the forum is still active. There is a fan site that holds the PDF versions of the magazine.
References
External links
Revista XtremPC se inchide – 2 Mai
site-ul revistei xtrempc se inchide – 1 Iulie
XtremPC se inchide, raman cu Itfiles
La revedere XtremPC!
Defunct computer magazines
Defunct magazines published in Romania
Magazines established in 1998
Magazines disestablished in 2010
Science and technology in Romania
1998 establishments in Romania
2010 disestablishments in Romania
Romanian-language magazines | XtremPC | Technology | 482 |
2,570,072 | https://en.wikipedia.org/wiki/Fleur%20de%20sel | Fleur de sel ("flower of salt" in French; ) or flor de sal (also "flower of salt" in Portuguese, Spanish and Catalan) is a salt that forms as a thin, delicate crust on the surface of seawater as it evaporates. Fleur de sel has been collected since ancient times (it was mentioned by Pliny the Elder in his book Natural History), and was traditionally used as a purgative and salve. It is now used as a finishing salt to flavor and garnish food. The origin of the name is uncertain, but is perfectly in line with both meanings of fleur: flower, and the surface of something. The salt crust forms flower-like patterns of crystals which may contribute to the name.
Fleur de sel is a highly sought after salt, used globally in high end kitchens due to its long-lasting flavor. Properly harvested fleur de sel costs hundreds of times more than table salt due to the difficult-to-master harvesting technique and high demand globally.
Harvesting
One method of gathering sea salt is to draw seawater into marsh basins or salt pans and allow the water to evaporate, leaving behind the salt that was dissolved in it. As the water evaporates, most of the salt precipitates out on the bottom of the marsh or pan (and is later collected as ordinary sea salt), but some salt crystals float on the surface of the water, forming a delicate crust of intricate pyramidal crystals. This is fleur de sel. The delicacy requires that it be harvested by hand, so this is done with traditional methods using traditional tools. In France, the workers who collect salt are called paludiers, and to collect fleur de sel they employ a wooden rake called a lousse à fleur to gently rake it from the water. In Portugal, a butterfly-shaped sieve called a borboleta is used instead. It is then put in special boxes so that it will dry in the sun, and to avoid disturbing the flakes as it is transported for packaging. Historically, the workers who harvested fleur de sel were women, because it was believed that as the salt crystals were so delicate, they needed to be collected by "the more delicate sex." Because it is scraped from the salt marsh like cream from milk, fleur de sel has been called "the cream of the salt pans." It is also called "the caviar of sea salts."
Fleur de sel can be collected only when it is very sunny, dry, and with slow, steady winds. Because of the nature of its formation, fleur de sel is produced in small quantities. At Guérande, France, each salt marsh produces only about one kilo (2.2 pounds) per day. Because of this and the labor-intensive way in which it is harvested, fleur de sel is the most expensive of salts.
This method of salt formation and collection results in salt crystals that are not uniform. The salt also has a much higher moisture content than common salt (up to 10%, compared to 0.5% for common salt), allowing the crystals to stick together in snowflake-like forms. Other minerals, like calcium and magnesium chloride, give it a more complex flavor. These chemicals make fleur de sel taste even saltier than ordinary salt, and give it what has been described as the flavor of the sea. Trace mineral content depends upon the location at which it is harvested, so the flavor varies with point of origin.
Fleur de sel is rarely the pure white of table salt. It is often pale gray or off-white from clay from the salt marsh beds. Sometimes it has a faintly pink tinge from the presence of Dunaliella salina, a type of pink microalga commonly found in salt marshes. However, fleur de sel from Ria Formosa in Portugal is white.
Uses
Only about 5% of salt is used for cooking, but fleur de sel is used only to flavor food. It is not used in place of salt during the cooking process; instead, it is added just before serving, like a garnish, as a "finishing salt," to boost the flavor of eggs, fish, meat, vegetables, chocolate, and caramel.
Sources
Sea salt has been gathered around the world for millennia, but over the last thousand years, fleur de sel was harvested only in France. Elsewhere it was collected and discarded. As the market for specialty salts has grown, companies have begun to harvest fleur de sel for export wherever the geographic and meteorological conditions are favorable.
Europe
Traditional French fleur de sel is collected off the coast of Brittany, most notably in the town of Guérande (called Fleur de Sel de Guérande), but also in Noirmoutier, and Île de Ré.
Greeks have harvested sea salt and fleur de sel (ανθος αλατιού) along the Mediterranean Sea coast, particularly the Mani Peninsula of Lakonia and Missolonghi, from ancient times.
Flor de sal is harvested in Portugal, mostly in the Aveiro District and in the Algarve, but also in the salt marshes of Castro Marim, at the mouth of the Guadiana River that forms the border to Spain. Roman ruins near Ria Formosa specifically suggest that there has been a long history of sea salt production here. Before the invention of salt mining, Portugal's sea salt production helped to solidify its place as a world power. However, when mechanical salt mining made salt inexpensive, demand for Portugal's sea salt dropped due to its expense. For centuries flor de sal was scraped away and either discarded or given to workers, as its presence disturbed the evaporation that was creating the sea salt underneath. The process of harvesting flor de sal for sale was reintroduced in 1997 by Necton, with a grant to develop ways to capitalize Portugal's natural resources. Necton's flor de sal is whiter than the fleur de sel from Guérande, and is said to have the more robust flavor of the Atlantic as opposed to Guérande's milder Biscay Bay flavor. Due to Portugal's laws regarding the grading of salt, Necton's flor de sal is exported to France and marketed by companies who also market fleur de sel.
Spain also produces high quality flor de sal in the Salinas de la Trinidad in the Ebro Delta on the mainland and in the Salinas d'Es Trenc on the island of Majorca. Majorca has a long history of salt production, dating to the Phoenicians and the Romans, but flor de sal was mostly kept for local use until Katja Wöhr arrived from Switzerland in 2002 and convinced local officials to allow her to harvest it in Es Trenc. She worked with British chef Marc Fosh to develop mixtures of flor de sal with herbs and spice blends added, such as orange, lemon, black olive, lavender, rosemary, dried rose petals, curry spices, and beetroot.
Spain's Canary Islands are also a source of flor de sal. Saltworks have operated on La Palma and Lanzarote for centuries, but the flor de sal that resulted was kept for the use of the workers until 2007, when the salt gained gourmet status. The culinary rediscovery of fleur de sel and other gourmet salts has saved small scale artisanal saltworks in the Canaries, which were in rapid decline.
Flower of salt is also produced in Croatia, where the harvesting of salt and flower of salt dates to ancient times, thanks to relatively high salinity (3.5 B°; 1 m3 of seawater contains approximately 30 kg of sea salt) of sea water and favourable climate on the eastern coast of the Adriatic sea. Today, the largest producer of sea salt in Croatia is Solana Pag on the island Pag.
Americas
Canada now produces high quality fleur de sel from the Pacific Ocean off Vancouver Island. The colder climate adds extra crunch and reduces the flakiness. Unlike traditional European fleur de sel, which crystallizes naturally in the sun, Canadian fleur de sel is made by heating seawater to force evaporation.
Mexico has produced both sea salt and flor de sal since Aztec times from the Lagoon of Cuyutlán on the Pacific Coast. There is also a museum in Cuyutlán, dedicated to the history and technique of flor de sal production.
Flor de sal is also harvested along the beaches of Celestun in Yucatan, Mexico, where Mayans cultivated salt 1,500 years ago for its distribution throughout Mesoamerican trade routes extending to Guatemala, Central America and the Caribbean.
Brazil started producing flor de sal in 2008 in the traditional salt-producing area of Macau, in the state of Rio Grande do Norte. The salt kept for use in Brazil is iodized though, as required by the Brazilian law for all salt intended for human consumption, but that intended for export is not. The main producer is ArtSal - Flor De Sal.
Mineral composition
Because it is harvested naturally from the sea and is usually not refined, fleur de sel has more mineral complexity than common table salt. The following is a chemical analysis of Flos Salis, a flor de sal by Portuguese company Marisol:
See also
List of edible salts
References
External links
The Wall Street Journal on the Camargue saltworks
de sal/como.html Diagrams and explanations of a salt pan works, from D'Aveiro, a Portuguese salt manufacturer.
Edible salt | Fleur de sel | Chemistry | 2,011 |
908,086 | https://en.wikipedia.org/wiki/Fractionation | Fractionation is a separation process in which a certain quantity of a mixture (of gases, solids, liquids, enzymes, or isotopes, or a suspension) is divided during a phase transition into a number of smaller quantities (fractions) in which the composition varies according to a gradient. Fractions are collected based on differences in a specific property of the individual components. A common trait in fractionations is the need to find an optimum between the number of fractions collected and the desired purity in each fraction. Fractionation makes it possible to isolate more than two components in a mixture in a single run. This property sets it apart from other separation techniques.
Fractionation is widely employed in many branches of science and technology. Mixtures of liquids and gases are separated by fractional distillation, i.e. by differences in boiling point. Fractionation of components also takes place in column chromatography through differences in affinity between the stationary phase and the mobile phase. In fractional crystallization and fractional freezing, chemical substances are fractionated based on differences in solubility at a given temperature. In cell fractionation, cell components are separated by differences in mass.
Of natural samples
Bioassay-guided fractionation
A typical protocol of bioassay-guided fractionation for isolating a pure chemical agent of natural origin is step-by-step separation of the extracted components based on differences in their physicochemical properties, with the biological activity assessed at each step, followed by the next round of separation and assaying. Typically, such work is initiated after a given crude extract is deemed "active" in a particular in vitro assay.
Blood fractionation
The process of blood fractionation involves separation of blood into its main components. Blood fractionation refers generally to the process of separation using a centrifuge (centrifugation), after which three major blood components can be visualized: plasma, buffy coat and erythrocytes (blood cells). These separated components can be analyzed and often further separated.
Of food
Fractionation is also used for culinary purposes, as coconut oil, palm oil, and palm kernel oil are fractionated to produce oils of different viscosities, that may be used for different purposes. These oils typically use fractional crystallization (separation by solubility at temperatures) for the separation process instead of distillation. Mango oil is an oil fraction obtained during the processing of mango butter.
Milk can also be fractionated to recover the milk protein concentrate or the milk basic proteins fraction.
Isotope fractionation
See also
Copurification
List of purification methods in chemistry
Transposition cipher#Fractionation
References
Further reading
Laboratory techniques | Fractionation | Chemistry | 532 |
3,633,309 | https://en.wikipedia.org/wiki/Structural%20load | A structural load or structural action is a mechanical load (more generally a force) applied to structural elements. A load causes stress, deformation, displacement or acceleration in a structure. Structural analysis, a discipline in engineering, analyzes the effects of loads on structures and structural elements. Excess load may cause structural failure, so this should be considered and controlled during the design of a structure. Particular mechanical structures—such as aircraft, satellites, rockets, space stations, ships, and submarines—are subject to their own particular structural loads and actions. Engineers often evaluate structural loads based upon published regulations, contracts, or specifications. Accepted technical standards are used for acceptance testing and inspection.
Types
In civil engineering, specified loads are the best estimate of the actual loads a structure is expected to carry. These loads come in many different forms, such as people, equipment, vehicles, wind, rain, snow, earthquakes, the building materials themselves, etc. Specified loads are also known as characteristic loads in many cases.
Buildings will be subject to loads from various sources. The principal ones can be classified as live loads (loads which are not always present in the structure), dead loads (loads which are permanent and immovable excepting redesign or renovation) and wind load, as described below. In some cases structures may be subject to other loads, such as those due to earthquakes or pressures from retained material. The expected maximum magnitude of each is referred to as the characteristic load.
Dead loads are static forces that are relatively constant for an extended time. They can be in tension or compression. The term can refer to a laboratory test method or to the normal usage of a material or structure.
Live loads are usually variable or moving loads. These can have a significant dynamic element and may involve considerations such as impact, momentum, vibration, slosh dynamics of fluids, etc.
An impact load is one whose time of application on a material is less than one-third of the natural period of vibration of that material.
Cyclic loads on a structure can lead to fatigue damage, cumulative damage, or failure. These loads can be repeated loadings on a structure or can be due to vibration.
Imposed loads are those associated with occupation and use of the building; their magnitude is less clearly defined and is generally related to the use of the building.
Loads on architectural and civil engineering structures
Structural loads are an important consideration in the design of buildings. Building codes require that structures be designed and built to safely resist all actions that they are likely to face during their service life, while remaining fit for use. Minimum loads or actions are specified in these building codes for types of structures, geographic locations, usage and building materials. Structural loads are split into categories by their originating cause. In terms of the actual load on a structure, there is no difference between dead or live loading, but the split occurs for use in safety calculations or ease of analysis on complex models.
To meet the requirement that design strength be higher than maximum loads, building codes prescribe that, for structural design, loads are increased by load factors. These load factors are, roughly, a ratio of the theoretical design strength to the maximum load expected in service. They are developed to help achieve the desired level of reliability of a structure based on probabilistic studies that take into account the load's originating cause, recurrence, distribution, and static or dynamic nature.
Dead load
The dead load includes loads that are relatively constant over time, including the weight of the structure itself, and immovable fixtures such as walls, plasterboard or carpet. The roof is also a dead load. Dead loads are also known as permanent or static loads. Building materials are not dead loads until constructed in permanent position. IS 875 (Part 1)-1987 gives the unit weights of building materials, parts, and components.
Live load
Live loads, or imposed loads, are temporary, of short duration, or a moving load. These dynamic loads may involve considerations such as impact, momentum, vibration, slosh dynamics of fluids and material fatigue.
Live loads, sometimes also referred to as probabilistic loads, include all the forces that are variable within the object's normal operation cycle not including construction or environmental loads.
Roof and floor live loads are produced during maintenance by workers, equipment and materials, and during the life of the structure by movable objects, such as planters and people.
Bridge live loads are produced by vehicles traveling over the deck of the bridge.
Environmental loads
Environmental loads are structural loads caused by natural forces such as wind, rain, snow, earthquake or extreme temperatures.
Wind loads
Snow, rain and ice loads
Seismic loads
Hydrostatic loads
Temperature changes leading to thermal expansion cause thermal loads
Ponding loads
Frost heaving
Lateral pressure of soil, groundwater or bulk materials
Loads from fluids or floods
Permafrost melting
Dust loads
Other loads
Engineers must also be aware of other actions that may affect a structure, such as:
Foundation settlement or displacement
Fire
Corrosion
Explosion
Creep or shrinkage
Impact from vehicles or machinery vibration
Construction loads
Load combinations
A load combination results when more than one load type acts on the structure. Building codes usually specify a variety of load combinations together with load factors (weightings) for each load type in order to ensure the safety of the structure under different maximum expected loading scenarios. For example, in designing a staircase, a dead load factor may be 1.2 times the weight of the structure, and a live load factor may be 1.6 times the maximum expected live load. These two "factored loads" are combined (added) to determine the "required strength" of the staircase.
The size of the load factor is based on the probability of exceeding any specified design load. Dead loads have small load factors, such as 1.2, because weight is mostly known and accounted for, such as structural members, architectural elements and finishes, large pieces of mechanical, electrical and plumbing (MEP) equipment, and for buildings, it's common to include a Super Imposed Dead Load (SIDL) of around 5 pounds per square foot (psf) accounting for miscellaneous weight such as bolts and other fasteners, cabling, and various fixtures or small architectural elements. Live loads, on the other hand, can be furniture, moveable equipment, or the people themselves, and may increase beyond normal or expected amounts in some situations, so a larger factor of 1.6 attempts to quantify this extra variability. Snow will also use a maximum factor of 1.6, while lateral loads (earthquakes and wind) are defined such that a 1.0 load factor is practical. Multiple loads may be added together in different ways, such as 1.2*Dead + 1.0*Live + 1.0*Earthquake + 0.2*Snow, or 1.2*Dead + 1.6(Snow, Live(roof), OR Rain) + (1.0*Live OR 0.5*Wind).
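The arithmetic behind choosing a governing combination can be sketched in a few lines of Python; the factors below loosely follow the example combinations in the paragraph above, while the service-load magnitudes are invented purely for illustration.

```python
# Governing factored load from several load combinations (illustrative values).
# Service loads in pounds per square foot (psf); these numbers are invented.
loads = {"D": 60.0, "L": 40.0, "S": 25.0, "W": 20.0, "E": 15.0}

combinations = [
    {"D": 1.4},                                   # dead load only
    {"D": 1.2, "L": 1.6, "S": 0.5},               # live load governs
    {"D": 1.2, "S": 1.6, "L": 1.0},               # snow governs
    {"D": 1.2, "W": 1.0, "L": 1.0, "S": 0.5},     # wind combination
    {"D": 1.2, "E": 1.0, "L": 1.0, "S": 0.2},     # earthquake combination
]

def factored(combo):
    """Sum of load factor times service load for one combination."""
    return sum(factor * loads[kind] for kind, factor in combo.items())

results = [(factored(c), c) for c in combinations]
governing = max(results, key=lambda r: r[0])      # design for the worst case
print(f"governing factored load: {governing[0]:.1f} psf from {governing[1]}")
```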
Aircraft structural loads
For aircraft, loading is divided into two major categories: limit loads and ultimate loads. Limit loads are the maximum loads a component or structure may carry safely. Ultimate loads are the limit loads times a factor of 1.5 or the point beyond which the component or structure will fail. Gust loads are determined statistically and are provided by an agency such as the Federal Aviation Administration. Crash loads are loosely bounded by the ability of structures to survive the deceleration of a major ground impact. Other loads that may be critical are pressure loads (for pressurized, high-altitude aircraft) and ground loads. Loads on the ground can be from adverse braking or maneuvering during taxiing. Aircraft are constantly subjected to cyclic loading. These cyclic loads can cause metal fatigue.
See also
Hotel New World disaster – caused by omitting the dead load of the building in load calculations
Influence line
Probabilistic design
Mechanical load
Structural testing
Southwell plot
References
External links
Luebkeman, Chris H., and Donald Petting "Lecture 17: Primary Loads". University of Oregon. 1996
Fisette, Paul, and the American Wood Council. "Understanding Loads and Using Span Tables". 1997.
www.govinfo.gov/content/pkg/GOVPUB-C13-03121e193fe7b5a13f0f635aaae922aa/pdf/GOVPUB-C13-03121e193fe7b5a13f0f635aaae922aa.pdf
Civil engineering
Structural engineering
Building engineering
Mechanical engineering
Structural analysis | Structural load | Physics,Engineering | 1,722 |
147,027 | https://en.wikipedia.org/wiki/Chemical%20property | A chemical property is any of a material's properties that becomes evident during, or after, a chemical reaction; that is, any attribute that can be established only by changing a substance's chemical identity. Simply speaking, chemical properties cannot be determined just by viewing or touching the substance; the substance's internal structure must be affected greatly for its chemical properties to be investigated. When a substance undergoes a chemical reaction, its properties will change drastically, resulting in chemical change. A catalytic property would also be considered a chemical property.
Chemical properties can be contrasted with physical properties, which can be discerned without changing the substance's structure. However, for many properties within the scope of physical chemistry, and other disciplines at the boundary between chemistry and physics, the distinction may be a matter of researcher's perspective. Material properties, both physical and chemical, can be viewed as supervenient; i.e., secondary to the underlying reality. Several layers of superveniency are possible.
Chemical properties can be used for building chemical classifications. They can also be useful to identify an unknown substance or to separate or purify it from other substances. Materials science will normally consider the chemical properties of a substance to guide its applications.
Examples
Heat of combustion
Enthalpy of formation
Toxicity
Chemical stability in a given environment
Flammability (the ability to burn)
Preferred oxidation state(s)
Ability to corrode
Combustibility
Acidity and basicity
Creation of a new substance(s)
The ability to melt
See also
Chemical structure
Material properties
Biological activity
Quantitative structure–activity relationship (QSAR)
Lipinski's Rule of Five, describing molecular properties of drugs
References | Chemical property | Chemistry | 341 |
48,322,624 | https://en.wikipedia.org/wiki/VDIworks | VDIworks is an American software company founded in 2008 that provides services like desktop virtualization, desktop as a service (DaaS), networking, PCoIP and cloud computing.
VDIworks built the first PCoIP connection broker, which is the industry's only PCoIP broker with quad broking support. The VDIworks VDI technology service comes with full physical management.
VDIworks supports virtual desktops with a high-speed connection protocol.
History
VDIworks, Inc. began offering services such as virtual desktop enablement and management software in 2008. It offers a fast remote desktop that brings the power of Windows to the iPad, allowing users to run Microsoft Office and to access spreadsheets, PowerPoint presentations, and other PC documents while travelling; and VideoOverIP, a remoting protocol for virtual desktops that delivers multimedia performance and multi-monitor capabilities by capitalizing on the management improvements, security enhancements, and lowered TCO that result from virtualization.
Expansion
The company provides Virtual Desktop Platform (VDP), a virtual desktop infrastructure management system, which combines connection brokering, VM management, health, alerting, inventory, physical management, and support for various remoting protocols; and VDIvision for System Center Operations Manager 2007 to combine the power of the VDIworks Virtual Desktop Platform with the ubiquity and datacenter management capabilities of System Center.
In addition, VDIworks, Inc. introduced VDIworks2Go, an extension to the VDIworks VDP Console that allows mobile users to check out a virtual machine and compute on the go even when they are not connected to a network; and Protocol Inspector to discover and report on remoting capabilities of VMs and hosts on a network.
Further, the company provides cloud computing, desktop virtualization, remote access, and systems management technologies. VDIworks, Inc. offers its software for education, healthcare, financial services, small- to medium-sized businesses, and enterprise markets. VDIworks, Inc. is a prior subsidiary of ClearCube Technology, Inc.
Awards and recognitions
VDIworks received the best emerging virtualization company award multiple times from CRN for its contribution and commitment towards virtualization innovation and awareness.
Products and technologies
Virtual Desktops Platform 3.2
VDIworks Virtual Desktop Platform is a VDI management suite for connection brokering, remoting protocol, centralized management and desktop security.
Virtual Desktops for Healthcare
Virtual Desktops for Healthcare aggregates user environments on a few servers; by replacing PCs with hardy and completely secure thin clients, virtual desktops represent the next evolution of the healthcare PC.
Virtual Desktops for Education
VDIworks has made remote access possible for educational institutions through the power of virtualization. This has removed the limitations of location and single-desktop dependency; users can access information whether in the classroom, in a restaurant, at home, or while travelling.
References
Centralized computing
Remote desktop
Thin clients
Cloud computing providers
Companies based in Austin, Texas
Software companies established in 2008
2008 establishments in Texas | VDIworks | Technology | 616 |
37,719,142 | https://en.wikipedia.org/wiki/Global%20surface%20temperature | Global surface temperature (GST) is the average temperature of Earth's surface. More precisely, it is the weighted average of the temperatures over the ocean and land. The former is also called sea surface temperature and the latter is called surface air temperature. Temperature data comes mainly from weather stations and satellites. To estimate data in the distant past, proxy data can be used for example from tree rings, corals, and ice cores. Observing the rising GST over time is one of the many lines of evidence supporting the scientific consensus on climate change, which is that human activities are causing climate change. Alternative terms for the same thing are global mean surface temperature (GMST) or global average surface temperature.
Series of reliable temperature measurements in some regions began in the 1850—1880 time frame (this is called the instrumental temperature record). The longest-running temperature record is the Central England temperature data series, which starts in 1659. The longest-running quasi-global records start in 1850. For temperature measurements in the upper atmosphere a variety of methods can be used. This includes radiosondes launched using weather balloons, a variety of satellites, and aircraft. Satellites can monitor temperatures in the upper atmosphere but are not commonly used to measure temperature change at the surface. Ocean temperatures at different depths are measured to add to global surface temperature datasets. This data is also used to calculate the ocean heat content.
Through 1940, the average annual temperature increased, but was relatively stable between 1940 and 1975. Since 1975, it has increased by roughly 0.15 °C to 0.20 °C per decade, to at least 1.1 °C (1.9 °F) above 1880 levels. The current annual GMST is about 15 °C (59 °F), though monthly temperatures can vary almost 2 °C (3.6 °F) above or below this figure.
The global average and combined land and ocean surface temperature show a warming of 1.09 °C (range: 0.95 to 1.20 °C) from 1850–1900 to 2011–2020, based on multiple independently produced datasets. The trend is faster since the 1970s than in any other 50-year period over at least the last 2000 years. Within that upward trend, some variability in temperatures happens because of natural internal variability (for example due to El Niño–Southern Oscillation).
The global temperature record shows the changes of the temperature of the atmosphere and the oceans through various spans of time. There are numerous estimates of temperatures since the end of the Pleistocene glaciation, particularly during the current Holocene epoch. Some temperature information is available through geologic evidence, going back millions of years. More recently, information from ice cores covers the period from 800,000 years ago until now. Tree rings and measurements from ice cores can give evidence about the global temperature from 1,000-2,000 years before the present until now.
Definition
The IPCC Sixth Assessment Report defines global mean surface temperature (GMST) as the "estimated global average of near-surface air temperatures over land and sea ice, and sea surface temperature (SST) over ice-free ocean regions, with changes normally expressed as departures from a value over a specified reference period".
Put simply, the global surface temperature (GST) is calculated by averaging the temperature at the surface layer of the ocean (sea surface temperature) and over land (surface air temperature).
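As a rough illustration of the weighted average in this definition, the sketch below blends a sea surface temperature and a land surface air temperature using the approximate ocean and land area fractions of the Earth; the fractions and the temperature values are assumptions chosen for illustration, not figures from the datasets discussed in this article.

```python
# Illustrative blend of ocean (SST) and land (SAT) temperatures into a GST value.
# Area fractions and temperatures are approximate, invented-for-illustration numbers.
OCEAN_FRACTION = 0.71          # approximate share of Earth's surface covered by ocean
LAND_FRACTION = 1.0 - OCEAN_FRACTION

sea_surface_temp = 16.1        # °C, example average sea surface temperature
land_surface_air_temp = 8.5    # °C, example average land surface air temperature

gst = OCEAN_FRACTION * sea_surface_temp + LAND_FRACTION * land_surface_air_temp
print(f"global surface temperature: {gst:.1f} °C")
```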
In comparison, the global mean surface air temperature (GSAT) is the "global average of near-surface air temperatures over land, oceans and sea ice. Changes in GSAT are often used as a measure of global temperature change in climate models."
Global temperature can have different definitions. There is a small difference between air and surface temperatures.
Temperature data from 1850 to the present time
Total warming and trends
Changes in global temperatures over the past century provide evidence for the effects of increasing greenhouse gases. When the climate system reacts to such changes, climate change follows. Measurement of the GST is one of the many lines of evidence supporting the scientific consensus on climate change, which is that humans are causing warming of Earth's climate system.
The global average and combined land and ocean surface temperature show a warming of 1.09 °C (range: 0.95 to 1.20 °C) from 1850–1900 to 2011–2020, based on multiple independently produced datasets. The trend is faster since the 1970s than in any other 50-year period over at least the last 2000 years.
Most of the observed warming occurred in two periods: around 1900 to around 1940 and around 1970 onwards; the cooling/plateau from 1940 to 1970 has been mostly attributed to sulfate aerosol. Some of the temperature variations over this time period may also be due to ocean circulation patterns.
Land air temperatures are rising faster than sea surface temperatures. Land temperatures have warmed by 1.59 °C (range: 1.34 to 1.83 °C) from 1850–1900 to 2011–2020, while sea surface temperatures have warmed by 0.88 °C (range: 0.68 to 1.01 °C) over the same period.
For 1980 to 2020, the linear warming trend for combined land and sea temperatures has been 0.18 °C to 0.20 °C per decade, depending on the data set used.
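A per-decade trend figure like those above is the slope of a least-squares line fitted to annual temperature anomalies; the sketch below shows the calculation on a synthetic (invented) anomaly series rather than on any of the real datasets mentioned here.

```python
# Linear warming trend, in °C per decade, from annual anomalies (synthetic data).
import numpy as np

years = np.arange(1980, 2021)
rng = np.random.default_rng(0)
# Invented series: ~0.18 °C per decade underlying trend plus noise.
anomalies = 0.018 * (years - 1980) + rng.normal(0.0, 0.1, size=years.size)

slope_per_year, _intercept = np.polyfit(years, anomalies, 1)
print(f"trend: {10 * slope_per_year:.2f} °C per decade")
```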
It is unlikely that any uncorrected effects from urbanisation, or changes in land use or land cover, have raised global land temperature changes by more than 10%. However, larger urbanisation signals have been found locally in some rapidly urbanising regions, such as eastern China.
Methods
The instrumental temperature record is a record of temperatures within Earth's climate based on direct measurement of air temperature and ocean temperature. Instrumental temperature records do not use indirect reconstructions using climate proxy data such as from tree rings and marine sediments.
Global record from 1850 onwards
The period for which reasonably reliable instrumental records of near-surface land temperature exist with quasi-global coverage is generally considered to begin around 1850. Earlier records exist, but with sparser coverage, largely confined to the Northern Hemisphere, and less standardized instrumentation. (The longest-running temperature record is the Central England temperature data series, which starts in 1659).
The temperature data for the record come from measurements from land stations and ships. On land, temperatures are measured either using electronic sensors, or mercury or alcohol thermometers which are read manually, with the instruments being sheltered from direct sunlight using a shelter such as a Stevenson screen. The sea record consists of ships taking sea temperature measurements, mostly from hull-mounted sensors, engine inlets or buckets, and more recently includes measurements from moored and drifting buoys. The land and marine records can be compared.
Data is collected from thousands of meteorological stations, buoys and ships around the globe. Areas that are densely populated tend to have a high density of measurement points. In contrast, temperature observations are more spread out in sparsely populated areas such as polar regions and deserts, as well as in many regions of Africa and South America. In the past, thermometers were read manually to record temperatures. Nowadays, measurements are usually connected with electronic sensors which transmit data automatically. Surface temperature data is usually presented as anomalies rather than as absolute values.
Land and sea measurement and instrument calibration is the responsibility of national meteorological services. Standardization of methods is organized through the World Meteorological Organization (and formerly through its predecessor, the International Meteorological Organization).
Most meteorological observations are taken for use in weather forecasts. Centers such as European Centre for Medium-Range Weather Forecasts show instantaneous map of their coverage; or the Hadley Centre show the coverage for the average of the year 2000. Coverage for earlier in the 20th and 19th centuries would be significantly less. While temperature changes vary both in size and direction from one location to another, the numbers from different locations are combined to produce an estimate of a global average change.
Satellite and balloon temperature records (1950s–present)
Weather balloon radiosonde measurements of atmospheric temperature at various altitudes begin to show an approximation of global coverage in the 1950s. Since December 1978, microwave sounding units on satellites have produced data which can be used to infer temperatures in the troposphere.
Several groups have analyzed the satellite data to calculate temperature trends in the troposphere. Both the University of Alabama in Huntsville (UAH) and the private, NASA funded, corporation Remote Sensing Systems (RSS) find an upward trend. For the lower troposphere, UAH found a global average trend between 1978 and 2019 of 0.130 degrees Celsius per decade. RSS found a trend of 0.148 degrees Celsius per decade, to January 2011.
In 2004, scientists found trends of +0.19 degrees Celsius per decade when their method was applied to the RSS dataset. Other researchers found 0.20 degrees Celsius per decade between 1978 and 2005, since which their dataset has not been updated.
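Per-decade trends of this kind are typically obtained by fitting a straight line to a time series of anomalies. The sketch below shows only that arithmetic, using synthetic data; it does not reproduce the UAH or RSS processing, and the numbers are invented for illustration.

```python
import numpy as np

# Hypothetical monthly lower-troposphere anomalies (deg C), 1979 onwards.
# In practice these would come from a satellite dataset; here they are synthetic.
rng = np.random.default_rng(0)
years = 1979 + np.arange(480) / 12.0                     # 40 years of monthly time stamps
anomalies = 0.013 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

# Ordinary least-squares fit: the slope is in deg C per year.
slope_per_year, intercept = np.polyfit(years, anomalies, 1)
trend_per_decade = 10.0 * slope_per_year

print(f"Linear trend: {trend_per_decade:.3f} deg C per decade")
```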
The most recent climate model simulations give a range of results for changes in global-average temperature. Some models show more warming in the troposphere than at the surface, while a slightly smaller number of simulations show the opposite behaviour. There is no fundamental inconsistency between these model results and observations at the global scale.
Satellite records previously showed much smaller warming trends for the troposphere, which were considered to disagree with model predictions; however, following revisions to the satellite records, the trends are now similar.
Global surface and ocean datasets
The methods used to derive the principal estimates of global surface temperature trends are largely independent from each other and include:
The National Oceanic and Atmospheric Administration (NOAA) maintains the Global Historical Climatology Network (GHCN-Monthly) database containing historical temperature, precipitation, and pressure data for thousands of land stations worldwide. NOAA's National Climatic Data Center (NCDC) also maintains a global database of surface temperature measurements extending back to 1880.
HadCRUT is a collaboration between the University of East Anglia's Climatic Research Unit and the Hadley Centre for Climate Prediction and Research.
NASA's Goddard Institute for Space Studies maintains GISTEMP.
More recently the Berkeley Earth Surface Temperature dataset was started. It is now one of the datasets used by IPCC and WMO in their assessments.
These datasets are updated frequently, and are generally in close agreement with each other.
Absolute temperatures v. anomalies
Records of global average surface temperature are usually presented as anomalies rather than as absolute temperatures. A temperature anomaly is measured against a reference value (also called the baseline or long-term average), usually the average over a 30-year period; a commonly used baseline period is 1951–1980. For example, if the average temperature for that baseline period was 15 °C, and the currently measured temperature is 17 °C, then the temperature anomaly is +2 °C.
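A minimal sketch of this anomaly arithmetic follows; the station values are invented for illustration and are not real data.

```python
import numpy as np

# Illustrative only: annual mean temperatures (deg C) for a single station.
# Real datasets use monthly station data and a 30-year baseline such as 1951-1980.
baseline_temps = np.array([14.8, 15.1, 14.9, 15.2, 15.0])   # stand-in for the baseline period
recent_temps = {2015: 16.7, 2016: 17.0, 2020: 16.9}

baseline_mean = baseline_temps.mean()        # long-term average (the reference value)
anomalies = {year: t - baseline_mean for year, t in recent_temps.items()}

print(anomalies)   # approximately {2015: 1.7, 2016: 2.0, 2020: 1.9}
```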
Temperature anomalies are useful for deriving average surface temperatures because they tend to be highly correlated over large distances (of the order of 1000 km). In other words, anomalies are representative of temperature changes over large areas and distances. By comparison, absolute temperatures vary markedly over even short distances. A dataset based on anomalies will also be less sensitive to changes in the observing network (such as a new station opening in a particularly hot or cold location) than one based on absolute values will be.
The Earth's average surface absolute temperature for the 1961–1990 period has been derived by spatial interpolation of average observed near-surface air temperatures from over the land, oceans and sea ice regions, with a best estimate of 14 °C (57.2 °F). The estimate is uncertain, but probably lies within 0.5 °C of the true value. Given the difference in uncertainties between this absolute value and any annual anomaly, it is not valid to add them together to imply a precise absolute value for a specific year.
Siting of temperature measurement stations
The U.S. National Weather Service Cooperative Observer Program has established minimum standards regarding the instrumentation, siting, and reporting of surface temperature stations. The observing systems available are able to detect year-to-year temperature variations such as those caused by El Niño or volcanic eruptions.
Another study concluded in 2006 that existing empirical techniques for validating the local and regional consistency of temperature data are adequate to identify and remove biases from station records, and that such corrections allow information about long-term trends to be preserved. A 2013 study also found that urban bias can be accounted for: when all available station data are divided into rural and urban subsets, the two temperature series are broadly consistent.
Warmest periods
Warmest years
The warmest years in the instrumental temperature record have occurred in the last decade (i.e. 2012-2021). The World Meteorological Organization reported in 2021 that 2016 and 2020 were the two warmest years in the period since 1850.
Each individual year from 2015 onwards has been warmer than any prior year going back to at least 1850. In other words: each of the seven years in 2015-2021 was clearly warmer than any pre-2014 year.
The year 2023 was 1.48 °C hotter than the 1850–1900 average, according to the Copernicus Climate Change Service. It was declared the warmest on record almost immediately after it ended, having broken many climate records.
There is a long-term warming trend, and there is variability about this trend because of natural sources of variability (e.g. ENSO events such as the 2014–2016 El Niño, or volcanic eruptions). Not every year will set a record, but record highs are occurring regularly.
While record-breaking years can attract considerable public interest, individual years are less significant than the overall trend. Some climatologists have criticized the attention that the popular press gives to warmest year statistics.
Based on the NOAA dataset (note that other datasets produce different rankings), the following table lists the global combined land and ocean annually averaged temperature rank and anomaly for each of the 10 warmest years on record. For comparison, the IPCC uses the mean of four different datasets and expresses the data relative to 1850–1900. Although global instrumental temperature records begin only in 1850, reconstructions of earlier temperatures based on climate proxies suggest these recent years may be the warmest for several centuries to millennia, or longer.
Warmest decades
Numerous drivers have been found to influence annual global mean temperatures. An examination of the average global temperature changes by decades reveals continuing climate change: each of the last four decades has been successively warmer at the Earth's surface than any preceding decade since 1850. The most recent decade (2011-2020) was warmer than any multi-centennial period in the past 11,700 years.
The following chart is from NASA data of combined land-surface air and sea-surface water temperature anomalies.
Factors influencing global temperature
Factors that influence global temperature include:
Greenhouse gases trap outgoing radiation, warming the atmosphere, which in turn warms the land (the greenhouse effect).
El Niño–Southern Oscillation (ENSO): El Niño generally tends to increase global temperatures. La Niña, on the other hand, usually causes years which are cooler than the short-term average. El Niño is the warm phase of the El Niño–Southern Oscillation (ENSO) and La Niña the cold phase. In the absence of other short-term influences such as volcanic eruptions, strong El Niño years are typically 0.1 °C to 0.2 °C warmer than the years immediately preceding and following them, and strong La Niña years 0.1 °C to 0.2 °C cooler. The signal is most prominent in the year in which the El Niño/La Niña ends.
Aerosols and volcanic eruptions: Aerosols scatter incoming radiation, generally cooling the planet. On a long-term basis, aerosols are primarily of anthropogenic origin, but major volcanic eruptions can produce quantities of aerosols which exceed those from anthropogenic sources over periods of up to a few years. Volcanic eruptions which are sufficiently large to inject significant quantities of sulfur dioxide into the stratosphere can have a significant global cooling effect for one to three years after the eruption. This effect is most prominent for tropical volcanoes, as the resultant aerosols can spread over both hemispheres. The largest eruptions of the last 100 years, such as the Mount Pinatubo eruption in 1991 and the Mount Agung eruption in 1963–1964, have been followed by years with global mean temperatures 0.1 °C to 0.2 °C below the long-term trends at the time.
Land use change such as deforestation can increase greenhouse gas concentrations through the burning of biomass, and can also change the surface albedo.
Incoming solar radiation varies very slightly, with the main variation controlled by the approximately 11-year solar magnetic activity cycle.
Robustness of evidence
There is a scientific consensus that climate is changing and that greenhouse gases emitted by human activities are the primary driver. The scientific consensus is reflected, for example, by the Intergovernmental Panel on Climate Change (IPCC), an international body which summarizes existing science, and the U.S. Global Change Research Program.
Other reports and assessments
The U.S. National Academy of Sciences, both in its 2002 report to President George W. Bush, and in later publications, has strongly endorsed evidence of an average global temperature increase in the 20th century.
The preliminary results of an assessment carried out by the Berkeley Earth Surface Temperature group, made public in October 2011, found that over the past 50 years the land surface warmed by 0.911 °C, and their results mirror those obtained from earlier studies carried out by NOAA, the Hadley Centre and NASA's GISS. The study addressed concerns raised by skeptics (more often, climate change deniers), including urban heat island effects, apparently poor station quality, and the "issue of data selection bias", and found that these effects did not bias the results obtained from the earlier studies.
Internal climate variability and global warming
One of the issues that has been raised in the media is the view that global warming "stopped in 1998". This view ignores the presence of internal climate variability. Internal climate variability is a result of complex interactions between components of the climate system, such as the coupling between the atmosphere and ocean. An example of internal climate variability is the El Niño–Southern Oscillation (ENSO). The El Niño in 1998 was particularly strong, possibly one of the strongest of the 20th century, and 1998 was at the time the world's warmest year on record by a substantial margin.
Cooling over the 2007 to 2012 period, for instance, was likely driven by internal modes of climate variability such as La Niña. The area of cooler-than-average sea surface temperatures that defines La Niña conditions can push global temperatures downward, if the phenomenon is strong enough. The slowdown in global warming rates over the 1998 to 2012 period is also less pronounced in current generations of observational datasets than in those available in 2012. The temporary slowing of warming rates ended after 2012, with every year from 2015 onwards warmer than any year prior to 2015, but it is expected that warming rates will continue to fluctuate on decadal timescales through the 21st century.
Related research
Trends and predictions
Each of the seven years in 2015–2021 was clearly warmer than any pre-2014 year, and this trend is expected to continue for some time to come (that is, the 2016 record will be broken before 2026, and so on). A decadal forecast issued by the World Meteorological Organization in 2021 stated a probability of 40% of at least one year exceeding 1.5 °C in the 2021–2025 period.
Global warming is very likely to reach 1.0 °C to 1.8 °C by the late 21st century under the very low GHG emissions scenario. In an intermediate scenario global warming would reach 2.1 °C to 3.5 °C, and 3.3 °C to 5.7 °C under the very high GHG emissions scenario. These projections are based on climate models in combination with observations.
Regional temperature changes
The changes in climate are not expected to be uniform across the Earth. In particular, land areas change more quickly than oceans, and northern high latitudes change more quickly than the tropics. There are three major ways in which global warming will make changes to regional climate: melting ice, changing the hydrological cycle (of evaporation and precipitation) and changing currents in the oceans.
Temperature estimates from prior to 1850
The global temperature record shows the fluctuations of the temperature of the atmosphere and the oceans through various spans of time. There are numerous estimates of temperatures since the end of the Pleistocene glaciation, particularly during the current Holocene epoch. Some temperature information is available through geologic evidence, going back millions of years. More recently, information from ice cores covers the period from 800,000 years ago until now. A study of the paleoclimate covers the time period from 12,000 years ago. Tree rings and measurements from ice cores can give evidence about the global temperature from 1,000-2,000 years ago. The most detailed information exists since 1850, when methodical thermometer-based records began. Modifications on the Stevenson-type screen were made for uniform instrument measurements around 1880.
Tree rings and ice cores (from 1,000–2,000 years before present)
Proxy measurements can be used to reconstruct the temperature record before the historical period. Quantities such as tree ring widths, coral growth, isotope variations in ice cores, ocean and lake sediments, cave deposits, fossils, borehole temperatures, and glacier length records are correlated with climatic fluctuations. From these, proxy temperature reconstructions of the last 2000 years have been performed for the northern hemisphere, and over shorter time scales for the southern hemisphere and tropics.
Geographic coverage by these proxies is necessarily sparse, and various proxies are more sensitive to faster fluctuations. For example, tree rings, ice cores, and corals generally show variation on an annual time scale, but borehole reconstructions rely on rates of thermal diffusion, and small scale fluctuations are washed out. Even the best proxy records contain far fewer observations than the worst periods of the observational record, and the spatial and temporal resolution of the resulting reconstructions is correspondingly coarse. Connecting the measured proxies to the variable of interest, such as temperature or rainfall, is highly non-trivial. Data sets from multiple complementary proxies covering overlapping time periods and areas are reconciled to produce the final reconstructions. Proxy reconstructions extending back 2,000 years have been performed, but reconstructions for the last 1,000 years are supported by more and higher quality independent data sets. These reconstructions indicate:
global mean surface temperatures over the last 25 years have been higher than any comparable period since AD 1600, and probably since AD 900
there was a Little Ice Age centered on AD 1700
there was a Medieval Warm Period centered on AD 1000, but this was not a global phenomenon.
Indirect historical proxies
As well as natural, numerical proxies (tree-ring widths, for example) there exist records from the human historical period that can be used to infer climate variations, including: reports of frost fairs on the Thames; records of good and bad harvests; dates of spring blossom or lambing; extraordinary falls of rain and snow; and unusual floods or droughts. Such records can be used to infer historical temperatures, but generally in a more qualitative manner than natural proxies.
Recent evidence suggests that a sudden and short-lived climatic shift between 2200 and 2100 BCE occurred in the region between Tibet and Iceland, with some evidence suggesting a global change. The result was a cooling and reduction in precipitation. This is believed to be a primary cause of the collapse of the Old Kingdom of Egypt.
Paleoclimate (from 12,000 years before present)
Many estimates of past temperatures have been made over Earth's history. The field of paleoclimatology includes ancient temperature records. As the present article is oriented toward recent temperatures, there is a focus here on events since the retreat of the Pleistocene glaciers. The Holocene epoch, spanning roughly the last 11,700 years since the end of the Northern Hemisphere's Younger Dryas millennium-long cooling, covers most of this period. The Holocene Climatic Optimum was generally warmer than the 20th century, but numerous regional variations have been noted since the start of the Younger Dryas.
Ice cores (from 800,000 years before present)
Even longer-term records exist for a few sites: the recent Antarctic EPICA core reaches 800 kyr; many others reach more than 100,000 years. The EPICA core covers eight glacial/interglacial cycles. The NGRIP core from Greenland stretches back more than 100 kyr, with 5 kyr in the Eemian interglacial. While the large-scale signals from the cores are clear, there are problems interpreting the detail, and connecting the isotopic variation to the temperature signal.
Ice core locations
The World Paleoclimatology Data Center (WDC) maintains the ice core data files of glaciers and ice caps in polar and low latitude mountains all over the world.
Ice core records from Greenland
As a paleothermometer, the ice core from central Greenland provides a consistent record of surface-temperature changes. According to these records, changes in global climate can be rapid and widespread. In the records, warming transitions occur in a few simple steps, whereas cooling proceeds more gradually and requires more prerequisites. Greenland also has the clearest ice-core record of abrupt climate changes, and no other record covers the same interval with equally high time resolution.
Analyses of the gas trapped in ice-core bubbles show that methane concentrations in Greenland ice cores are significantly higher than in Antarctic samples of similar age; records of changes in this concentration difference between Greenland and Antarctica reveal variations in the latitudinal distribution of methane sources. Increases in methane concentration in the Greenland records imply that the global wetland area has changed greatly over time. As a greenhouse gas, methane plays an important role in global warming, so the methane variations preserved in the Greenland records make a distinctive contribution to reconstructions of past global temperature.
Ice core records from Antarctica
The Antarctic ice sheet originated in the late Eocene. Drilling at Dome Concordia has recovered a record spanning 800,000 years, the longest ice core available from Antarctica. In recent years, new studies have provided older but discontinuous records. Because of the uniqueness of the Antarctic ice sheet, Antarctic ice cores record not only global temperature changes but also a wealth of information about global biogeochemical cycles, climate dynamics and abrupt changes in global climate.
Comparison with current climate records shows that the Antarctic ice-core records further confirm polar amplification. Although ice-core records cover Antarctica, their density is rather low relative to the area of the continent, and establishing more drilling sites is a primary goal for current research institutions.
Ice core records from low-latitude regions
Ice-core records from low-latitude regions are not as common as records from polar regions, but they still provide much useful information. Ice cores in low-latitude regions are usually drilled in high-altitude areas. The Guliya record, which spans over 700,000 years, is the longest from a low-latitude, high-altitude region. From these records, scientists have found evidence that the Last Glacial Maximum (LGM) was colder in the tropics and subtropics than previously believed. The records from low-latitude regions have also helped scientists confirm that the 20th century was the warmest period of the last 1,000 years.
Geologic evidence (millions of years)
On longer time scales, sediment cores show that the cycles of glacials and interglacials are part of a deepening phase within a prolonged ice age that began with the glaciation of Antarctica approximately 40 million years ago. This deepening phase, and the accompanying cycles, largely began approximately 3 million years ago with the growth of continental ice sheets in the Northern Hemisphere. Gradual changes in Earth's climate of this kind have been frequent during the existence of planet Earth. Some of them are attributed to changes in the configuration of continents and oceans due to continental drift.
See also
References
External links
Hadley Centre: Global temperature data
NASA's Goddard Institute for Space Studies (GISS) — Global Temperature Trends.
GISS Surface Temperature Analysis (GISTEMP)
Google Earth interface for CRUTEM4 land temperature data
Climate change
Earth sciences
Temperature
History of climate variability and change
Climate and weather statistics
Historical climatology
Articles containing video clips | Global surface temperature | Physics,Chemistry | 5,819 |
75,672,385 | https://en.wikipedia.org/wiki/Statue%20of%20Shakira | The bronze statue of Colombian singer Shakira stands at the Gran Malecon de Barranquilla, in Barranquilla, Colombia.
This bronze statue is the second-largest statue of a pop icon, behind only Forever Marilyn, a 26-foot tribute to Marilyn Monroe that sits outside a tourism agency in Palm Springs, California.
Sculpture
The statue, which resembles Shakira's signature pose and hip swivel from her 2006 "Hips Don't Lie" music video, was sculpted by 52-year-old sculptor Yino Marquez. Marquez, who has been sculpting since the age of 16, has crafted large statues for Colombian cities and serves as an academic coordinator in Barranquilla's public art academy. Mayor Jaime Pumarejo proposed the idea of a Shakira statue for the waterfront, and a month later, Marquez was chosen. The city aimed to honor Barranquilla figures to boost tourism and provide role models, resulting in two statues: one representing the city's coat of arms and another of Shakira. The project cost around 700 million Colombian pesos (about US$180,000). Over 30 people worked on the sculpture over the course of five months.
A plaque beneath the statue reads: "A heart that composes, hips that don't lie, unmatched talent, a voice that moves the masses, and bare feet that march for the good of children and humanity." The sculpture is located on a promenade along the Magdalena River.
Reception
Shakira said in a social media post about the statue dedication, "This is too much for my little heart." She shared her happiness at having her parents present on her mother's birthday during the ceremony and extended gratitude to the statue's sculptor and the students from a local arts school. As of her interview with Zane Lowe on Apple Music, she had not yet seen the statue in person.
Sculpture of Shakira in Barranquilla
This is the second monumental sculpture of the Colombian artist in Barranquilla. Since 2006, another representative sculpture has been displayed in the park of the Estadio Metropolitano Roberto Meléndez, the largest stadium in Colombia. That earlier sculpture, made of steel, depicts the singer and songwriter standing with a guitar.
References
2023 sculptures
Barranquilla
Bronze sculptures in Colombia
Outdoor sculptures in Colombia
Shakira
Sculptures of women in Colombia
Statues in Colombia
Colossal statues
2023 establishments in Colombia | Statue of Shakira | Physics,Mathematics | 492 |
2,764,113 | https://en.wikipedia.org/wiki/Fire%20in%20the%20hole | "Fire in the hole" is an expression indicating that an explosive detonation in a confined space is imminent. It originated from American miners, who needed to warn their fellows that a charge had been set. The phrase appears in this sense in American state mining regulations, in military and corporate procedures, and in various mining and military blasting-related print books and narratives, e.g. during bomb disposal or throwing grenades into a confined space.
In common parlance it has become a catchphrase for a warning of the type "Watch out!" or "Heads up!".
NASA has used the term to describe a means of staging a multistage rocket vehicle by igniting the upper stage simultaneously with the ejection of the lower stage, without the usual delay of several seconds. On the Apollo 5 uncrewed flight test of the first Apollo Lunar Module, a "fire in the hole" test used this procedure to simulate a lunar landing abort. Gene Kranz describes the test in his autobiography.
References
English phrases
Explosion protection
Military terminology | Fire in the hole | Chemistry,Engineering | 215 |
701,322 | https://en.wikipedia.org/wiki/Pointe%20du%20Hoc | La Pointe du Hoc () is a promontory with a cliff overlooking the English Channel on the northwestern coast of Normandy in the Calvados department, France.
In World War II, Pointe du Hoc was the location of a series of German bunkers and machine gun posts. Prior to the invasion of Normandy, the German army fortified the area with concrete casemates and gun pits. On D-Day, the United States Army Provisional Ranger Group attacked and captured Pointe du Hoc after scaling the cliffs. United States generals including Dwight D. Eisenhower had determined that the place housed artillery that could slow down nearby beach attacks.
Background
Pointe du Hoc lies west of the center of Omaha Beach. As part of the Atlantic Wall fortifications, the prominent cliff top location was fortified by the Germans.
The battery was initially built in 1943 to house six captured French First World War vintage GPF 155 mm K418(f) guns positioned in open concrete gun pits. The battery was garrisoned by the 2nd Battery of Army Coastal Artillery Battalion 1260 (Heeres-Küsten-Artillerie-Abteilung 1260 or 2/HKAA.1260). To defend the promontory from attack, elements of the 352nd Infantry Division were also stationed at the battery.
Prelude
To provide increased defensive capability, the Germans began to improve the defenses of the battery in the spring of 1944, with enclosed H671 concrete casemates being started and the older 155 mm guns displaced. The plan was to build six casemates but two were unfinished when the location was attacked. The casemates were built over and in front of the circular gun pits, which housed the 155 mm guns.
Also built was a H636 observation bunker and L409a mounts for 20 mm Flak 30 anti-aircraft guns. The 155 mm guns would have threatened the Allied landings on Omaha and Utah beaches when finished, risking heavy casualties to the landing forces.
In the months before D-Day the Germans were recorded by Allied Intelligence removing their guns one by one as they re-developed the site with the final aim of four casemates facing Utah Beach and the possibility of two 155 mm guns in open emplacements. During the preparation for Operation Overlord it was determined by Lt Col. James Earl Rudder that Pointe du Hoc should be attacked by ground forces, to prevent the Germans using the casemates.
Recently released documents in the US Archives show that Rudder knew prior to landing that the casemates were unfinished and only two were actually structurally close to being ready. They remain that way today. The U.S. 2nd and 5th Ranger Battalions were given the task of assaulting the strong point early on D-Day. Elements of the 2nd Battalion went in to attack Pointe du Hoc but delays meant the remainder of the 2nd Battalion and the complete 5th Battalion landed at Omaha Beach as their secondary landing position.
Though the Germans had removed the main armament from Pointe du Hoc, the beachheads were shelled by field artillery from the nearby Maisy battery, which was fired on by the heavy cruiser HMS Hawkins. The rediscovery of the battery at Maisy has shown that it was responsible for firing on the Allied beachheads until 9 June 1944.
Plan
Pointe du Hoc lay within General Leonard Gerow's V Corps field of operations. This then went to the 1st Infantry Division (the Big Red One) and then down to the right-hand assault formation, the 116th Infantry Regiment attached from 29th Division. In addition they were given two Ranger battalions to undertake the attack.
The Ranger battalions were commanded by Lieutenant Colonel James Earl Rudder. The plan called for the three companies of Rangers to be landed by sea at the foot of the cliffs, scale them using ropes, ladders, and grapples while under enemy fire, and engage the enemy at the top of the cliff. This was to be carried out before the main landings. The Rangers trained for the cliff assault on the Isle of Wight, under the direction of British Commandos.
Major Cleveland A. Lytle was to command Companies D, E and F of the 2nd Ranger Battalion (known as "Force A") in the assault at Pointe du Hoc. During a briefing aboard the Landing Ship Infantry TSS Ben My Chree, he heard that French Resistance sources reported the guns had been removed. Impelled to some degree by alcohol, Lytle became quite vocal that the assault would be unnecessary and suicidal and was relieved of his command at the last minute by Provisional Ranger Force commander Rudder. Rudder felt that Lytle could not convincingly lead a force with a mission that he did not believe in. Lytle was later transferred to the 90th Infantry Division where he was awarded the Distinguished Service Cross.
Battle
Landings
The assault force was carried in ten landing craft, with another two carrying supplies and four DUKW amphibious trucks carrying the ladders requisitioned from the London Fire Brigade. One landing craft carrying troops sank, drowning all but one of its occupants; another was swamped. One supply craft sank and the other put the stores overboard to stay afloat. German fire sank one of the DUKWs. Once within a mile of the shore, German mortars and machine guns fired on the craft.
These initial setbacks resulted in a 40-minute delay in landing at the base of the cliffs, but British landing craft carrying the Rangers finally reached the base of the cliffs at 7:10am with approximately half the force it started out with. The landing craft were fitted with rocket launchers to fire grapnels and ropes up the cliffs. As the Rangers scaled the cliffs, the Allied ships USS Texas (BB-35), USS Satterlee (DD-626), USS Ellyson (DD-454), and HMS Talybont (L18) provided them with fire support in an attempt to prevent the German defenders above from firing down on the assaulting troops. The cliffs proved to be higher than the ladders could reach.
Attack
The original plans had also called for an additional, larger Ranger force of eight companies (Companies A and B of the 2nd Ranger Battalion and the entire 5th Ranger Battalion) to follow the first attack, if successful. Flares from the cliff tops were to signal this second wave to join the attack, but because of the delayed landing, the signal came too late, and the other Rangers landed on Omaha instead of Pointe du Hoc. The added impetus these 500-plus Rangers provided on the stalled Omaha Beach landing has been conjectured to have averted a disastrous failure there, since they carried the assault beyond the beach, into the overlooking bluffs and outflanked the German defenses.
When the Rangers made it to the top at Pointe du Hoc, they had sustained 15 casualties. "Ranger casualties on the beach totaled about 15, most of them from the raking fire to their left". The force also found that their radios were ineffective. Upon reaching the fortifications, most of the Rangers learned for the first time that the main objective of the assault, the artillery battery, had been removed. The Rangers regrouped at the top of the cliffs, and a small patrol went off in search of the guns. Two different patrols found five of the six guns nearby (the sixth was being repaired) and destroyed their firing mechanisms with thermite grenades.
Leonard Lomell of the 2nd Ranger Battalion maintained that he and Ranger Jack Kuhn found the guns completely by accident after walking down a tree-lined lane, whilst on patrol.
Multiple copies of the Rangers' orders were released in 2012 by the US National Archives, indicating that Lt. Col. Rudder had been told of the guns' removal prior to landing. His D-Day orders went beyond the taking of Pointe du Hoc and remained consistent: Land at Pointe du Hoc & Omaha Beach; advance along the coast; take the town of Grandcamp, attack the Maisy Batteries and reach the "D-Day Phase Line" (close to Osmanville) two hours before dark. The Rangers could then repel counterattacks along the Grandcamp-Vierville road, via the Isigny-Bayeux road or diagonally across open fields. They could also prevent mobile 150mm artillery getting within a 12-mile range of the beachhead.
The Rangers trained specifically for the 12-mile inland march during the Slapton Sands exercises in England, and the First Infantry Division was also given the same "D-Day Phase Line" objective.
The Small Unit Actions Report, written by US Army Intelligence, states that there were times (some hours) when the Rangers did not see a single German after the initial fighting. Amateur historian Gary Sterne suggests this gave Lt. Col. Rudder the time to have continued with his objectives. No documentary evidence has been produced ordering Rudder to stay and "guard the road" behind Pointe du Hoc or wait for reinforcements. This plan would only have been possible had the remainder of the 2nd Rangers come in as reinforcements, but that would also have possibly cost Omaha Beach. Despite Sterne's suggestion, after two days of fighting, 77 of the 225 soldiers that had landed at the Pointe had been killed, with another 152 wounded, indicating that fierce fighting did indeed occur.
German counter-attacks
The costliest part of the battle for Pointe du Hoc for the Rangers came after the successful cliff assault. Determined to hold the vital high ground, yet isolated from other Allied forces, the Rangers fended off several counter-attacks from the German 914th Grenadier Regiment. The 5th Ranger Battalion and elements of the 116th Infantry Regiment headed towards Pointe du Hoc from Omaha Beach. However, only twenty-three Rangers from the 5th were able to link up with the 2nd Rangers during the evening of 6 June 1944. During the night the Germans forced the Rangers into a smaller enclave along the cliff, and some were taken prisoner.
It was not until the morning of 8 June that the Rangers at Pointe du Hoc were finally relieved by the 2nd and 5th Rangers, plus the 1st Battalion of the 116th Infantry, accompanied by tanks from the 743rd Tank Battalion.
When the Rangers began suffering heavy losses, brief consideration was given to sending in the 84-man Marine Detachment aboard the battleship USS Texas on the morning of 7 June. At the last minute, word was passed down through the Army chain of command that no Marines would be allowed to go ashore, not even providing armed escort on landing craft ferrying Army troops or supplies.
Aftermath
At the end of the two-day action, the initial Ranger landing force of 225+ was reduced to about 90 fighting men. In the aftermath of the battle, some Rangers became convinced that French civilians had taken part in the fighting on the German side. A number of French civilians accused of shooting at US forces or of serving as artillery observers for the Germans were executed.
Timeline
6 June 1944
06.39 – H-Hour – D, E and F companies of 2nd Ranger Battalion approach the Normandy coast in a flotilla of twelve craft.
07.05 – Strong tides and navigation errors mean the initial assault arrives late and the 5th Ranger Battalion as well A and B companies from 2nd Battalion move to Omaha Beach instead.
07.30 – Rangers fight their way up the cliff and reach the top and start engaging the Germans across the battery. Rangers discover the casemates are empty.
08.15 – Approximately 35 Rangers reach the road and create a roadblock.
09.00 – Five German guns are located and destroyed using thermite grenades.
For the rest of the day the Rangers repel several German counter-attacks.
During the evening, one patrol from the 5th Rangers that landed at Omaha beach make it through to join the Rangers at Pointe du Hoc.
7 June 1944
The Rangers continue to defend an even smaller area on Pointe du Hoc against German counter-attacks.
Afternoon – A platoon of Rangers arrives on an LST, and the wounded are evacuated.
8 June 1944
Morning – The Rangers are relieved by troops arriving from Omaha Beach.
Commemoration
Pointe du Hoc now features a memorial and museum dedicated to the battle. Many of the original fortifications have been left in place and the site remains speckled with a number of bomb craters. On 11 January 1979 this 13-hectare field was transferred to American control, and the American Battle Monuments Commission was made responsible for its maintenance.
As part of the commemorations of the 75th anniversary of D-Day in 2019, members of the current 75th Ranger Regiment reenacted the climb in both period and modern uniforms.
Notes
References
Further reading
External links
History and photos of the Pointe du Hoc D-Day – Overlord
Author Interview, November 15, 2012 Pritzker Military Library
American D-Day: Omaha Beach, Utah Beach & Pointe du Hoc
D-Day – Etat des Lieux: Pointe du Hoc
President Reagan's speech at the 40th anniversary commemoration
Ranger Monument on the American Battle Monuments Commission web site
The World War II US Army Rangers celebrate the 50th Anniversary of D-Day
Migraction.net: seawatching at Pointe du Hoc – for visitors interested in sea birds at this site
Royal Marines Supporting the Rangers at Pointe Du Hoc
Operation Overlord
Operation Neptune
Battles of World War II involving the United States
United States Army Rangers
Military operations of World War II involving Germany
Conflicts in 1944
Atlantic Wall
World War II sites of Nazi Germany
World War II sites in France
Coastal fortifications
Nazi architecture
World War II defensive lines | Pointe du Hoc | Engineering | 2,699 |
37,621,685 | https://en.wikipedia.org/wiki/Stenella%20adeniae | Stenella adeniae is a species of anamorphic fungi.
References
External links
adeniae
Fungi described in 1979
Fungus species | Stenella adeniae | Biology | 26 |
1,117,315 | https://en.wikipedia.org/wiki/Lyapunov%20time | In mathematics, the Lyapunov time is the characteristic timescale on which a dynamical system is chaotic. It is named after the Russian mathematician Aleksandr Lyapunov. It is defined as the inverse of a system's largest Lyapunov exponent.
Use
The Lyapunov time mirrors the limits of the predictability of the system. By convention, it is defined as the time for the distance between nearby trajectories of the system to increase by a factor of e. However, measures in terms of 2-foldings and 10-foldings are sometimes found, since they correspond to the loss of one bit of information or one digit of precision respectively.
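Expressed in terms of the largest Lyapunov exponent λ, these conventions differ only by constant factors:

```latex
% Lyapunov (e-folding) time, doubling time and 10-folding time for \lambda > 0
T_{\lambda} = \frac{1}{\lambda}, \qquad
T_{2} = \frac{\ln 2}{\lambda} \approx 0.693\,T_{\lambda}, \qquad
T_{10} = \frac{\ln 10}{\lambda} \approx 2.303\,T_{\lambda}
```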
While it is used in many applications of dynamical systems theory, it has been particularly used in celestial mechanics where it is important for the problem of the stability of the Solar System. However, empirical estimation of the Lyapunov time is often associated with computational or inherent uncertainties.
Examples
Typical values are:
See also
Belousov–Zhabotinsky reaction
Molecular chaos
Three-body problem
References
Dynamical systems | Lyapunov time | Physics,Mathematics | 225 |
58,945,140 | https://en.wikipedia.org/wiki/Joint%20Enterprise%20Defense%20Infrastructure | The Joint Enterprise Defense Infrastructure (JEDI) contract was a large United States Department of Defense cloud computing contract which has been reported as being worth $10 billion over ten years. JEDI was meant to be a commercial off-the-shelf (COTS) implementation of existing technology, while providing economies of scale to DoD.
Controversy
Companies interested in the contract included Amazon, Google, Microsoft and Oracle. After protests from Google employees, Google decided to drop out of contention for the contract because of conflict with its corporate values. The deal was considered "gift-wrapped for Amazon" until Oracle (co-chaired by Safra Catz) contested the contract, citing the National Defense Authorization Act over IDIQ contracts and the conflicts of interest from Deap Ubhi, who worked for Amazon both before and after his time in the Department of Defense. This led Eric G. Bruggink, senior judge of the United States Court of Federal Claims, to place the contract award on hold.
In August 2019, weeks before the winner was expected to be announced, President Donald Trump ordered the contract placed on hold again for Defense Secretary Mark Esper to investigate complaints of favoritism towards Amazon. In October 2019, it was announced that the contract was awarded to Microsoft. Media has noted Trump's dislike towards Amazon's founder, Jeff Bezos, owner of the Washington Post, a newspaper critical of Trump. According to Bezos, Trump "used his power to 'screw Amazon' out of the JEDI Contract". The JEDI contract was awarded to Microsoft on October 25, 2019, the DoD announced, but AWS filed documents with the Court of Federal Claims on November 22, 2019 challenging the award; its legal strategy included calling Trump to testify.
A federal judge, Patricia Campbell-Smith, halted Microsoft's work on the project on February 13, 2020, a day before the system was scheduled to go live, awaiting a resolution in Amazon's suit. She said that Amazon's claims are reasonable and "is likely to succeed on the merits of its argument that the DOD improperly evaluated" Microsoft's offer. As a result, the DOD was forced by a federal judge to reopen bidding for the contract. In the wake of that reopening, Amazon has filed additional protests related to modifications which have been made to selected sections of the contract. Recent DOD legal filings have stated that the final award of the contract cannot take place until at least August 17, and may yet be delayed beyond that date as well. On September 4, 2020, the Department of Defense reaffirmed that Microsoft won the JEDI Cloud contract after the reevaluation of the proposal, stating that Microsoft's proposal continues to represent the best value to the government. DISA/CCPO (Defense Information Systems Agency/Cloud Computing Program Office) had not yet begun work, as of May 29, 2021, while Microsoft continued to mark time before an implementation. In the meantime the several departments (Army, Navy, Air Force) are using their previous infrastructures to meet their several internal time lines, respectively.
Cancellation and JWCC
The JEDI contract with Microsoft was cancelled on July 6, 2021, with the expectation that a new program called "Joint Warfighter Cloud Capability" (JWCC) would replace it, involving services from multiple vendors. On November 19, 2021, the Department of Defense issued formal solicitations to four of the original JEDI companies: Amazon, Google, Microsoft and Oracle; notably not including the fifth provider consulted, IBM. On December 7, 2022, the JWCC contract was awarded to the four companies for a combined total of up to $9 billion under the program.
References
Cloud computing
United States Department of Defense
Microsoft
Trump administration controversies | Joint Enterprise Defense Infrastructure | Technology | 758 |
1,132,947 | https://en.wikipedia.org/wiki/Planetary%20phase | A planetary phase is a certain portion of a planet's area that reflects sunlight as viewed from a given vantage point, as well as the period of time during which it occurs. The phase is determined by the phase angle, which is the angle between the planet, the Sun and the Earth.
Inferior planets
The two inferior planets, Mercury and Venus, which have orbits that are smaller than the Earth's, exhibit the full range of phases as does the Moon, when seen through a telescope. Their phases are "full" when they are at superior conjunction, on the far side of the Sun as seen from the Earth. It is possible to see them at these times, since their orbits are not exactly in the plane of Earth's orbit, so they usually appear to pass slightly above or below the Sun in the sky. Seeing them from the Earth's surface is difficult, because of sunlight scattered in Earth's atmosphere, but observers in space can see them easily if direct sunlight is blocked from reaching the observer's eyes. The planets' phases are "new" when they are at inferior conjunction, passing more or less between the Sun and the Earth. Sometimes they appear to cross the solar disk, which is called a transit of the planet. At intermediate points on their orbits, these planets exhibit the full range of crescent and gibbous phases.
Superior planets
The superior planets, orbiting outside the Earth's orbit, do not exhibit a full range of phases since their maximum phase angles are smaller than 90°. Mars often appears significantly gibbous; it has a maximum phase angle of 45°. Jupiter has a maximum phase angle of 11.1° and Saturn of 6°, so their phases are almost always full.
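The degree of gibbousness can be quantified with the standard geometric relation between the phase angle α and the illuminated fraction k of the apparent disk; the numerical values below are rounded evaluations for the maximum phase angles quoted above.

```latex
k = \frac{1 + \cos\alpha}{2},
\qquad
k(45^{\circ}) \approx 0.85,\quad
k(11.1^{\circ}) \approx 0.99,\quad
k(6^{\circ}) \approx 0.997
```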
See also
Earth phase
Lunar phase
Phases of Venus
References
Further reading
Schaaf, Fred. The 50 Best Sights in Astronomy and How to See Them: Observing Eclipses, Bright Comets, Meteor Showers, and Other Celestial Wonders. Hoboken, New Jersey: John Wiley, 2007. Print.
Ganguly, J. Thermodynamics in Earth and Planetary Sciences. Berlin: Springer, 2008. Print.
Ford, Dominic. The Observer's Guide to Planetary Motion: Explaining the Cycles of the Night Sky. Dordrecht: Springer, 2014. Print.
Observational astronomy | Planetary phase | Astronomy | 472 |
21,051,128 | https://en.wikipedia.org/wiki/Stage%20monitor%20system | A stage monitor system is a set of performer-facing loudspeakers called monitor speakers, stage monitors, floor monitors, wedges, or foldbacks on stage during live music performances in which a sound reinforcement system is used to amplify a performance for the audience. The monitor system allows musicians to hear themselves and fellow band members clearly.
The sound at popular music and rock music concerts is amplified with power amplifiers through a sound reinforcement system. With the exception of the smallest venues, such as coffeehouses, most mid- to large-sized venues use two sound systems. The main or front-of-house (FOH) system amplifies the onstage sounds for the main audience. The monitor system is driven by a mix separate from the front-of-house system. This mix typically highlights the vocals and acoustic instruments so they can be heard over the electronic instruments and drums.
Monitor systems have a range of sizes and complexity. A small pub or nightclub may have a single monitor speaker on stage so that the lead vocalist can hear their singing and the signal for the monitor may be produced on the same mixing console and audio engineer as the front-of-house mix. A stadium rock concert may use a large number of monitor wedges and a separate mixing console and engineer on or beside the stage for the monitors. In the most sophisticated and expensive monitor set-ups, each onstage performer can ask the sound engineer for a separate monitor mix for separate monitors. For example, the lead singer can choose to hear mostly their voice in the monitor in front of them and the guitarist can choose to hear mostly the bassist and drummer in their monitor.
Role
For live sound reproduction during popular music concerts in mid- to large-size venues, there are typically two complete loudspeaker systems and PA systems (also called sound reinforcement systems): the main or front-of-house system and the monitor or foldback system. Each system consists of a mixing console, sound processing equipment, power amplifiers, and speakers.
Without a foldback system, the sound that on-stage performers would hear from front of house would be the reverberated reflections bouncing from the rear wall of the venue. The naturally reflected sound is delayed and distorted, which could, for example, cause the singer to sing out of time with the band. In situations with poor or absent foldback mixes, vocalists may end up singing off-tune or out of time with the band.
The monitor system reproduces the sounds of the performance and directs them towards the onstage performers (typically using wedge-shaped monitor speaker cabinets), to help them hear the instruments and vocals. A separately mixed signal is often routed to the foldback speaker to allow musicians to hear their performance as the audience hears it or in a way that helps improve their performance. More frequently, major professional bands and singers often use small in-ear monitors rather than onstage monitor speakers. The two systems usually share microphones and direct inputs using a splitter microphone snake.
The front-of-house system, which provides the amplified sound for the audience, will typically use a number of powerful amplifiers driving a range of large, heavy-duty loudspeaker cabinets, including low-frequency speaker cabinets called subwoofers, full-range speaker cabinets, and high-range horns. A coffeehouse or small bar where singers perform while accompanying themselves on acoustic guitar may have a relatively small, low-powered PA system, such as a pair of 200-watt powered speakers. A large club may use several power amplifiers to provide 1000 to 2000 watts of power to the main speakers. An outdoor rock concert may use large racks of power amplifiers to provide 10,000 or more watts.
The monitor system in a coffeehouse or singer-songwriter stage for a small bar may be a single 100 watt powered monitor wedge. In the smallest PA systems, the performer may set their own main and monitor sound levels with a simple powered mixing console. The simplest monitor systems consist of a single monitor speaker for the lead vocalist which amplifies their singing voice so that they can hear it clearly.
In a large club where rock bands play, the monitor system may use racks of power amplifiers and four to six monitor speakers to provide 500 to 1000 watts of power to the monitor speakers. At an outdoor rock concert, there may be several thousand watts of power going to a complex monitor system that includes wedge-shaped cabinets for vocalists and larger cabinets called sidefill cabinets to help the musicians to hear their playing and singing.
Larger clubs and concert venues typically use a more complex type of monitor system which has two or three different monitor speakers and mixes for the different performers, e.g., vocalists and instrumentalists. Each monitor mix contains a blend of different vocal and instruments, and an amplified speaker is placed in front of the performer. This way the lead vocalist can have a mix that forefronts their vocals, the backup singers can have a mix that emphasizes their backup vocals and the rhythm section members can have a mix which emphasizes the bass and drums. In most clubs and larger venues, sound engineers and technicians control the mixing consoles for the main and monitor systems, adjusting the tone, sound levels, and overall volume of the performance.
History
In the early 1960s, many pop and rock concerts were performed without monitor speakers. In the early 1960s, PA systems were typically low-powered units that could only be used for the vocals. The PA systems during this era were not used to amplify the electric instruments on stage; each performer was expected to bring a powerful amplifier and speaker system to make their electric guitar, electric bass, Hammond organ or electric piano loud enough to hear on stage and to fill the venue with sound.
With these systems, singers could only hear their vocals by listening to the reflected sound from the audience-facing front-of-house speakers. This was not an effective way to hear one's vocals because of the associated delay which made it hard to sing in rhythm with the band and in tune.
The use of performer-facing loudspeakers for foldback or monitoring may have been developed independently by sound engineers in different cities who were trying to resolve this problem. The earliest recorded instance that a loudspeaker was used for foldback (monitoring) was for Judy Garland at the San Francisco Civic Auditorium on September 13, 1961; provided by McCune Sound Service.
Early stage monitors were simply speakers on each side of the stage pointed at the performers driven by the same mix as the FOH; audio mixers used in PAs at the time rarely had auxiliary send mixes. Today these would be called sidefill monitors. F.B. "Duke" Mewborn of Atlanta's Baker Audio used left and right arrays of Altec loudspeakers to cover the audience and to serve sidefill duties for the Beatles at Atlanta Stadium on August 18, 1965. Bill Hanley working with Neil Young of Buffalo Springfield pioneered the concept of a speaker on the floor angled up at the performer with directional microphones to allow louder volumes with less feedback.
In the 1970s, Bob Cavin, chief engineer at McCune Sound, designed the first monitor mixer designed expressly for stage monitoring. He also designed the first stage monitor loudspeaker that had two different listening angles.
The introduction of monitor speakers made it much easier for performers to hear their singing and playing on stage, which helped to improve the quality of live performances. A singer who has a good monitor system does not have to strain their voice to try to be heard. Monitor systems also helped rhythm section instrumentalists hear each other and thus improve their playing together even on a huge stage (e.g., at a stadium rock concert) with the musicians far apart.
From the late 1960s to the 1980s, most monitor speaker cabinets used an external power amplifier. In the 1990s and 2000s, clubs increasingly used powered monitors, which contain an integrated power amplifier. Another trend of the 2000s was the blurring of the lines between monitor speaker cabinets and regular speaker cabinets; many companies began selling wedge-shaped full-range speakers intended to be used for either monitors or main public address purposes.
The stage monitoring system
The monitor system consists of the monitor mixer, equalization or other signal processing, amplifiers, and monitor speakers on stage pointing at the performers. Microphones and direct inputs are shared with the front-of-house system.
Front of house auxiliary speaker
The simplest monitor system is a speaker pointed at the performer fed from the FOH mix. This might be used by one or two performers in a coffee house, small club, or small house of worship. In this setting, a two-channel powered mixer might be used with one channel powering the main speakers and one channel powering the monitor speaker. The mixer would be on stage with the performers setting their own levels.
Monitors mixed from front of house
A common monitor setup for smaller venues is one that uses one or more separate auxiliary mixes or sub-mixes on the FOH mixing console. These mixes are pre-fader so that changes to the FOH levels do not significantly affect what the performers hear on stage. The monitor mixes drive dedicated monitor equalizers and signal processors which in turn drive dedicated monitor amplifiers that power the monitor speakers. The FOH mixer is operated by an audio engineer who must mix for the audience and also tend to the needs of the musicians on stage.
Separate monitor mixer
Larger venues will use a separate system for monitors with its own mixer and monitor sound engineer. In this case, a microphone splitter is used to split the signal from the microphones and direct inputs between the monitor mixer and the FOH mixer.
This splitter may be part of the microphone snake or it may be built into the monitor mixer. With a separate monitor system, there may be 8, 12, or more separate monitor mixes, typically one per performer. Each monitor mix contains a blend of different vocals and instruments. This way the lead vocalist can have a mix that forefronts their vocals, the backup singers can have a mix that emphasizes their backup vocals and the rhythm section members can have a mix that emphasizes the bass and drums. In addition, there may be side-fill monitors to provide sound for areas on stage not covered by the floor wedges.
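Conceptually, each of these monitor mixes is just a different set of per-channel gains applied to the same inputs. The sketch below illustrates that idea only; the channel names and gain values are invented for illustration and do not correspond to any particular console or show.

```python
# Conceptual sketch: each monitor mix is a weighted blend of the same input
# channels, with per-performer gains. All names and values are illustrative.
monitor_mixes = {
    # the lead vocalist hears mostly their own voice
    "lead_vocal_wedge":    {"lead_vocal": 1.0, "backing_vocals": 0.4, "guitar": 0.3, "bass": 0.2, "drums": 0.2},
    # the backing singers hear their own vocals forefronted
    "backing_vocal_wedge": {"lead_vocal": 0.5, "backing_vocals": 1.0, "guitar": 0.3, "bass": 0.2, "drums": 0.2},
    # the rhythm section hears mostly bass and drums
    "rhythm_wedge":        {"lead_vocal": 0.3, "backing_vocals": 0.2, "guitar": 0.4, "bass": 1.0, "drums": 1.0},
}

def mix_output(levels, gains):
    """Sum each input level scaled by that mix's gain (a simple linear blend)."""
    return sum(levels.get(channel, 0.0) * gain for channel, gain in gains.items())

# Instantaneous input levels (arbitrary linear units) at one moment in time.
levels = {"lead_vocal": 0.8, "backing_vocals": 0.6, "guitar": 0.7, "bass": 0.9, "drums": 1.0}

for name, gains in monitor_mixes.items():
    print(name, round(mix_output(levels, gains), 2))
```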
Distributed monitoring
An innovation first used in recording studios is the use of small mixers placed next to each performer so that they can adjust their own mix. The mixers are driven by sub-mixes from the FOH console, with each sub-mix carrying a subset of the inputs on stage: for example, mix 1 vocals, mix 2 guitars, mix 3 keyboards, and mix 4 drums and bass. The performers can then balance these four groups to their own preferences. If the balance within a group needs to change, such as between several vocals or between bass and drums, the sound engineer must change it at the main mixing console.
A variation on this is to add an extra input to each small mixer carrying that performer's own instrument or vocal microphone, so that each performer can add more of their own performance on top of the sub-mixes. This approach has been called "more me" in the monitors.
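A distributed monitor mix of this kind can be thought of as a small gain matrix: each performer's feed is a weighted sum of the group sub-mixes plus their own "more me" channel. The Python sketch below is purely illustrative, assuming four hypothetical group sub-mixes and made-up gain values; it is not any particular manufacturer's implementation.

```python
import numpy as np

# Hypothetical group sub-mixes arriving from the FOH console (one block of samples each).
samples = 1024
rng = np.random.default_rng(0)
groups = {
    "vocals":     rng.standard_normal(samples) * 0.1,
    "guitars":    rng.standard_normal(samples) * 0.1,
    "keyboards":  rng.standard_normal(samples) * 0.1,
    "drums_bass": rng.standard_normal(samples) * 0.1,
}
more_me = rng.standard_normal(samples) * 0.1   # the performer's own mic or instrument feed

def personal_mix(group_gains, more_me_gain):
    """Weighted sum of the group sub-mixes plus the performer's own 'more me' channel."""
    mix = sum(group_gains[name] * signal for name, signal in groups.items())
    return mix + more_me_gain * more_me

# Example: a bassist wants drums/bass loud, modest vocals, and plenty of their own signal.
bassist_monitor = personal_mix(
    {"vocals": 0.6, "guitars": 0.3, "keyboards": 0.2, "drums_bass": 1.0},
    more_me_gain=1.2,
)
print(bassist_monitor.shape)
```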
With advances in digital technology, it is now possible to transmit multiple audio channels over a single Ethernet cable. This allows the distribution of most or all of the input sources to each performer's mixer, giving them complete control over their mix.
Distributed monitor mixers are most successful with headphones or in-ear monitors. If monitor speakers are used, feedback problems are common when the performer turns their microphone up too loud.
Monitor equipment
Monitor speakers
Monitor speakers often include a single full-range loudspeaker and a horn in a cabinet. Monitor speakers have numerous features that facilitate their transportation and protection, including handles, metal corner protectors, sturdy felt covering or paint and a metal grille to protect the speaker. Monitor speakers are normally heavy-duty speakers that can accept high input power to create high volumes and withstand extreme electrical and physical abuse.
There are two types of monitors: passive monitors consist of a loudspeaker and horn in a cabinet and must be plugged into an external power amplifier; active monitors have a loudspeaker, horn and a power amplifier in a single cabinet, which means the signal from the mixing console can be plugged straight into the monitor speaker.
Building the amplifier and associated signal processing into the monitor enclosure allows an amplifier with an appropriate amount of power to be matched to the drivers. Active monitors are typically bi-amped and have an active crossover with custom equalization that tunes the monitor for a flat frequency response. One of the first examples of this type of monitor was the Meyer Sound Laboratories UM-1P.
Monitor speakers come in two forms: floor monitors and side-fill monitors.
Floor monitors are compact speakers that lie on the floor and have an angled back. This angled shape gives the floor monitor its other name, the wedge. The angle is typically 30 degrees, which points the speaker back and up towards the performer. Floor monitors may also be single small speakers, sometimes mounted on a microphone stand to bring them closer to the performer's ears. More often they are heavy-duty two-way systems with a woofer and a high-frequency horn. A small floor monitor might use a 12" woofer with an integrated high-frequency horn or driver combination, while a large floor monitor might use one or two 15" woofers and a high-frequency driver attached to a horn. The speaker might use a passive crossover, or might be bi-amped with an active crossover and separate amplifiers for the woofer and high-frequency driver.
Side-fill monitors are monitors that sit upright on the side of the stage and are used to provide sound to the areas of the stage not covered by the floor monitors. Side fill monitors are typically standard FOH speakers. A special case of a side fill monitor is a drum fill. Drum fills are typically large 2- or 3-way speakers with one or more large woofers capable of extremely high volumes to help drummers hear other band members over the acoustic sound of their drums.
Monitor amplifiers
If the amplifier is not built into the monitor speaker enclosure, one or more external amplifiers are required to power the monitor speakers. Robust, professional-grade amplifiers are used for this purpose. In a simple monitor system, a single amplifier may drive all of the monitor speakers. In more complex setups, where there are multiple monitor mixes, where additional power is required, or where speakers are bi-amped, multiple amplifiers or amplifier channels are used.
Equalization and signal processing
Monitor speakers need their own equalization primarily to reduce or eliminate acoustic feedback. Acoustic feedback occurs at frequencies where the round-trip delay from a microphone through the monitor speaker and back to the microphone is a whole multiple of the period, so that the speaker's output arrives back at the microphone in phase. If the gain around that loop is also greater than one at such a frequency, the signal is amplified again on every pass, creating a positive feedback loop that reinforces the frequency and causes the speaker to howl or squeal. Equalization is used to attenuate the specific frequency that is feeding back.
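The in-phase condition can be illustrated with a short calculation. The Python sketch below lists candidate feedback frequencies for an assumed microphone-to-wedge distance and system latency; the specific numbers are illustrative assumptions, and an actual howl also requires the loop gain to exceed unity at that frequency.

```python
# Illustrative sketch only: lists the frequencies that arrive back at the microphone
# in phase for an assumed round-trip delay, i.e. frequencies whose period divides the
# loop delay. Which of these actually howls depends on speaker, mic and room response.
SPEED_OF_SOUND = 343.0      # m/s in room-temperature air
mic_to_wedge = 1.0          # metres (assumed)
system_latency = 0.002      # seconds of electronic latency (assumed)

loop_delay = mic_to_wedge / SPEED_OF_SOUND + system_latency  # seconds per trip around the loop

# Frequencies f = n / loop_delay are re-injected in phase on every pass.
candidates = [n / loop_delay for n in range(1, 11)]
print([round(f, 1) for f in candidates])
```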
The process of eliminating feedback in the monitor is called ringing out the monitors. To eliminate feedback, the monitor's level is increased until it starts to feed back. The feedback frequency is identified either by ear or by a frequency analyzer. Equalization is used to reduce that frequency. The monitor level is again increased until the next frequency starts to feed back and that frequency is eliminated. The process is repeated until feedback occurs at a previously suppressed frequency or at multiple frequencies simultaneously. If multiple monitor mixes are being used, the process has to be repeated for each separate monitor mix.
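When a spectrum analyzer is used rather than a trained ear, identifying the ringing frequency amounts to finding the dominant peak in the spectrum of the howl. A minimal Python sketch of that step, assuming a mono recording of the ring held in a NumPy array:

```python
import numpy as np

def dominant_frequency(recording, sample_rate=48000):
    """Return the strongest frequency in a recording of the monitor ringing."""
    windowed = recording * np.hanning(len(recording))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(recording), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

# Synthetic test: a 1.2 kHz "ring" buried in noise should be reported as roughly 1200 Hz.
t = np.arange(48000) / 48000.0
ring = np.sin(2 * np.pi * 1200 * t) + 0.05 * np.random.randn(len(t))
print(round(dominant_frequency(ring)))
```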
Graphic equalizer
A common equalizer used in monitor systems is the graphic equalizer. Graphic equalizers get their name from the slide potentiometers used to adjust the level of each frequency band: the positions of the sliders, side by side, read out as a frequency response graph. Graphic equalizers are fixed-frequency equalizers; the center frequency of each band cannot be adjusted. The bandwidth of each band can be 1/3, 2/3, or one octave, giving a 31-band, 15-band, or 10-band equalizer respectively for one covering the audio frequency range. The narrower the band, the more precisely the feedback frequency can be isolated. Normally 31-band equalizers are used.
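The band count follows from the octave fraction: adjacent center frequencies are spaced by a factor of 2^(1/3), 2^(2/3), or 2. The Python sketch below generates such a ladder of center frequencies anchored at 1 kHz; commercial units label their bands with rounded nominal (ISO-style) frequencies, so the exact values and counts differ slightly from this idealized version.

```python
def band_centers(bands_per_octave, low=20.0, high=20000.0, reference=1000.0):
    """Center frequencies spaced by 2**(1/bands_per_octave), anchored at the reference."""
    step = 2 ** (1.0 / bands_per_octave)
    f = reference
    while f / step >= low:          # walk down from the reference frequency
        f /= step
    centers = []
    while f <= high:                # then walk up through the audio band
        centers.append(round(f, 1))
        f *= step
    return centers

third_octave = band_centers(3)      # close to the 31 nominal bands of a 1/3-octave unit
full_octave = band_centers(1)       # close to the 10 bands of a one-octave unit
print(third_octave[:5], len(third_octave), len(full_octave))
```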
A variation on the graphic equalizer is the cut-only graphic equalizer. Since monitor equalization mostly involves removing frequencies, a cut-only equalizer allows finer level adjustment: the entire travel of each slider is used for attenuation rather than half of it being reserved for boost.
One of the advantages of graphic equalizers is their simplicity of use. When ringing out the monitors, the engineer can briefly boost and then restore each frequency band in turn until the ringing starts, which identifies the band containing the feedback frequency. A drawback of graphic equalizers is their fixed frequency bands: feedback rarely occurs at the exact center of a band, so two adjacent bands may have to be reduced together to eliminate it.
Parametric equalizer
A second type of equalizer used in monitor systems is the parametric equalizer. A parametric equalizer does not use fixed frequency bands; instead, each band has three adjustable controls. The center frequency can be swept over a range of several octaves, the bandwidth can be varied from a wide setting (low Q) affecting several octaves to a narrow setting (high Q) affecting a small fraction of an octave, and the level of the band can be boosted or cut. Each band may have a different frequency sweep range, with the lower bands sweeping the lower octaves, the middle bands the middle octaves, and the higher bands the higher octaves, normally with considerable overlap between bands. Parametric equalizers typically have 3 to 5 filter bands per channel.
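In digital form, one parametric band is commonly realized as a second-order "peaking" (bell) biquad defined by exactly those three controls. The Python sketch below uses the widely published Audio EQ Cookbook coefficient formulas; it is an illustrative implementation, not the circuit of any particular console or processor, and the example frequency and gain are assumptions.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_band(freq_hz, q, gain_db, sample_rate=48000):
    """Biquad (b, a) coefficients for one parametric EQ band (positive gain boosts, negative cuts)."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * freq_hz / sample_rate
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

# Example: a 9 dB cut at 1.7 kHz with a fairly narrow band (Q = 8) to tame a feedback ring.
b, a = peaking_band(1700.0, q=8.0, gain_db=-9.0)
monitor_send = np.random.randn(48000)      # stand-in for one second of the monitor send
filtered = lfilter(b, a, monitor_send)
```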
The advantage of using parametric equalizers in a monitor system is that the filter can be exactly adjusted to the specific feedback frequency, and the bandwidth of the filter can be set to be very narrow so the adjustment affects as little of the frequency band as possible. This leads to more precise feedback elimination with less coloring of the sound. For this reason, many professionals recommend using parametric equalizers over graphic equalizers for monitors.
The process of using a parametric equalizer differs from that of a graphic equalizer. The first step is to choose the band to use: the first feedback frequency is normally in the lower mid-range, so the second band is a good choice, while feedback in the upper mid-range suits the third or fourth band. The Q of the filter is then set as narrow as possible and the band is boosted by 6 to 9 dB. The monitor level is raised until it just begins to feed back, then lowered by about 3 dB. The frequency of the filter is swept until the monitor rings; sweeping back and forth over the feedback point reveals the lower and upper frequencies at which the ring occurs, and the filter is centered midway between them. The gain on the band may need to be reduced if the feedback is too loud. The process is repeated for each subsequent feedback frequency. The frequencies do not necessarily appear in ascending order; for example, the sequence might be 250 Hz, 800 Hz, 500 Hz, 2.6 kHz, and 1.7 kHz.
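The end of that sweep reduces to a small calculation: the noted lower and upper ring frequencies give both the center of the cut and a starting bandwidth. A minimal Python sketch with made-up example frequencies; the Q formula is the conventional center-frequency-over-bandwidth definition, used here only as a starting point.

```python
def filter_settings(ring_low_hz, ring_high_hz):
    """Center the cut between the observed ring edges and derive a starting Q."""
    center = (ring_low_hz + ring_high_hz) / 2.0      # "midway between" the two ring frequencies
    bandwidth = ring_high_hz - ring_low_hz
    q = center / bandwidth                           # conventional Q = center frequency / bandwidth
    return center, q

# Example: ringing was provoked between roughly 780 Hz and 820 Hz while sweeping.
center, q = filter_settings(780.0, 820.0)
print(center, round(q, 1))   # 800.0 Hz, Q around 20: narrow enough to begin cutting
```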
Notch filter
A notch filter is a semi-parametric, cut-only equalizer whose bandwidth is set very narrow, typically one-sixth of an octave or less. Examples include the UREI 562 Feedback Suppressor and the Ashly SC-68 Parametric Notch Filter.
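In software, a comparable notch can be sketched as follows; the helper derives the Q corresponding to a 1/6-octave bandwidth (roughly 8.7) and builds the filter with SciPy's iirnotch function. The feedback frequency shown is an assumption for illustration, not a value from the text.

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

def octave_fraction_q(fraction=1.0 / 6.0):
    """Q corresponding to a bandwidth of the given octave fraction (about 8.65 for 1/6 octave)."""
    half = fraction / 2.0
    return 1.0 / (2 ** half - 2 ** -half)

sample_rate = 48000
feedback_hz = 2600.0                                  # assumed problem frequency
b, a = iirnotch(feedback_hz, octave_fraction_q(), fs=sample_rate)

monitor_send = np.random.randn(sample_rate)           # stand-in for one second of the monitor send
notched = lfilter(b, a, monitor_send)
```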
Monitor mixer
Monitor mixers provide musicians with a stage mix. The mix can be controlled by a sound engineer or by the musicians, depending on the monitor mixer's capabilities and the amount of control required. The stage mix consists of whatever vocal and instrument sources are connected to the sound reinforcement system.
Some musicians may prefer a bespoke monitor or in-ear mix that they control themselves, giving them exactly what they want to hear. This can be achieved with a separate mixing console (the monitor mixer), using either a split snake or Y-cable splitters so that the required instrument and vocal inputs feed both the FOH mixer and the monitor mixer.
These inputs can then be balanced on the monitor mixer, setting whatever level is required for each separate input (e.g., more guitar, less bass, more lead vocals, less backing vocals), providing a bespoke mix for whoever is connected to it. The number of inputs on the monitor mixer determines how many instruments and vocals can be sub-mixed, and the number of outputs determines how many musicians can be provided with their own monitor mix.
Related products
Headphones
Hardshell headphones are typically used by the audio engineer to listen to specific channels or to listen to the entire mix. While an amplified monitor speaker can also be used for this purpose, the high sound volumes in many club settings make hardshell headphones a better choice because the hard plastic shell and foam cushions help to block the room noise. Some performers may use headphones as monitors, such as drummers in pop music bands.
In-ear monitors
In the 2000s, some bands and singers, typically touring professionals, began using small in-ear-style headphone monitors. These in-ear monitors allow musicians to hear their voice and the other instruments with a clearer, more intelligible sound because the molded in-ear headphone design blocks out on-stage noise. While some in-ear monitors are universal fit designs, some companies also sell custom-made in-ear monitors, which require a fitting by an audiologist. Custom-made in-ear monitors provide an exact fit for a performer's ear.
In-ear monitors greatly reduce on-stage volume by eliminating the need for on-stage monitor wedges. This reduced on-stage volume makes it easier for the front-of-house audio engineer to get a good sound for the audience. In-ear monitors also make audio feedback howls much less likely since there are no monitor speakers. The lower on-stage volume may lead to less hearing damage for performers.
One drawback of in-ear monitors is that the singers and musicians cannot hear on-stage comments spoken away from a microphone (e.g., the bandleader turning away from the vocal mic and looking at the band and calling for an impromptu repetition of the chorus) or sounds from the audience. This issue can be rectified by placing microphones in front of the stage and mixing those into the monitor mix so that the band can hear the audience in their in-ear monitors.
Bass shakers
Drummers typically use a monitor speaker capable of loud bass reproduction so that they can monitor their bass drum. Since the drums are already very loud, adding a subwoofer producing a high sound pressure level can raise the overall stage volume to uncomfortable levels for the drummer. Because very low bass is felt as much as heard, some drummers instead use tactile transducers, called bass shakers, butt shakers, or throne shakers, to monitor the timing of their bass drum. The transducers are attached to the drummer's stool (throne), and their vibrations are transmitted through the body to the ear in a manner similar to bone conduction. They connect to an amplifier like a normal subwoofer. They can also be attached to a large flat surface, such as a floor or platform, to create a large low-frequency conduction area, although the transmission of low frequencies through the feet is not as efficient as through the seat.
Other meanings
The term foldback is sometimes applied to in-ear monitoring systems, also described as artist's cue-mixes, as they are generally set up for individual performers. Foldback may less frequently refer to current limiting protection in audio electronic amplifiers.
The term foldback has been used when referring to one or more video monitors facing a stage, in the same manner as an audio foldback monitor. The video monitor allows a person on stage to see what is behind them on screen, to see distant parties during a video conference, or to read notes or sing lyrics to a song. Other terms for this usage are confidence monitor and kicker monitor.
See also
Sidetone
References
Audio engineering
Loudspeakers
Sound reinforcement system | Stage monitor system | Engineering | 5,003 |
236,994 | https://en.wikipedia.org/wiki/Hubert%20H.%20Humphrey%20Metrodome | The Hubert H. Humphrey Metrodome (commonly called the Metrodome) was a domed sports stadium in downtown Minneapolis, Minnesota. It opened in 1982 as a replacement for Metropolitan Stadium, the former home of the National Football League's (NFL) Minnesota Vikings and Major League Baseball's (MLB) Minnesota Twins, and Memorial Stadium, the former home of the Minnesota Golden Gophers football team.
The Metrodome was the home of the Vikings from 1982 to 2013, the Twins from 1982 to 2009, the National Basketball Association's (NBA) Minnesota Timberwolves in their 1989–90 inaugural season, the Golden Gophers football team from 1982 to 2008, and the occasional home of the Golden Gophers baseball team from 1985 to 2010 and their full-time home in 2012. It was also the home of the Minnesota Strikers of the North American Soccer League in 1984. The Vikings played at the University of Minnesota's TCF Bank Stadium for the 2014 and 2015 NFL seasons, ahead of the planned opening of U.S. Bank Stadium in 2016.
The stadium had a fiberglass fabric roof that was self-supported by air pressure and was the third major sports facility to have this feature (the first two being the Pontiac Silverdome and the Carrier Dome). The Metrodome was similar in design to the former RCA Dome and to BC Place, though BC Place was reconfigured with a retractable roof in 2010. The Metrodome was the inspiration for the Tokyo Dome in Tokyo, Japan. The stadium was the only facility to have hosted a Super Bowl (1992), World Series (1987, 1991), MLB All-Star Game (1985), and NCAA Division I Basketball Final Four (1992, 2001).
The Metrodome had several nicknames such as "The Dome", "The Thunderdome", "The Homer Dome", and "The Technodome". Preparation for the demolition of the Metrodome began the day after the facility hosted its final home game for the Minnesota Vikings on December 29, 2013, and the roof was deflated and demolition began on January 18, 2014. The Metrodome was torn down in sections while construction of U.S. Bank Stadium began.
History
Background
By the early 1970s, the Minnesota Vikings were unhappy with Metropolitan Stadium's (the Met) relatively small capacity for football. Before the completion of the AFL–NFL merger, the NFL declared that stadiums with a capacity under 50,000 were not adequate. The Met never held more than 49,700 people for football, and could not be expanded. At the time, the biggest stadium in the area was the University of Minnesota's Memorial Stadium. However, the Vikings were unwilling to be tenants in a college football stadium even on a temporary basis, and demanded a new venue. Supporters of a dome also believed that the Minnesota Twins would benefit from a climate-controlled stadium to insulate the team from harsh Minnesota weather later in their season. The Met would have likely needed to be replaced anyway, as it was not well maintained. Broken railings and seats could be seen in the upper deck by the 1970s; by its final season, they had become a distinct safety hazard.
Construction success of other domed stadiums, particularly the Pontiac Silverdome near Detroit, paved the way for voters to approve funding for a new stadium. Downtown Minneapolis was beginning a revitalization program, and the return of professional sports from suburban Bloomington was seen as a major success story; a professional team had not been based in downtown Minneapolis since the Minneapolis Lakers left for Los Angeles in 1960.
Construction
Construction on the Metrodome began on December 20, 1979, and was funded by a limited hotel-motel and liquor tax, local business donations, and payments established within a special tax district near the stadium site. Uncovering the Dome by Amy Klobuchar (now a U.S. Senator) describes the 10-year effort to build the venue. The stadium was named in memory of former mayor of Minneapolis, U.S. Senator, and U.S. Vice President Hubert Humphrey, who died in 1978. The building's structure was designed by Bangladeshi-American structural engineer Fazlur Rahman Khan of Skidmore, Owings & Merrill.
The Metrodome itself cost $68 million to build—significantly under budget—totaling around $124 million with infrastructure and other costs associated with the project added. It was a somewhat utilitarian facility, though not quite as spartan as Metropolitan Stadium. One stadium official once said that all the Metrodome was designed to do was "get fans in, let 'em see a game, and let 'em go home."
1980s roof incidents
Five times in the stadium's history, heavy snows or other weather conditions have significantly damaged the roof and in four instances caused it to deflate. Four of the five incidents occurred within the stadium's first five years of operation: On November 19, 1981, a rapid accumulation of over a foot of snow caused the roof to collapse, requiring it to be re-inflated. It deflated the following winter on December 30, 1982, because of a tear caused by a crane used in snow removal. This was four days before the Vikings played the Dallas Cowboys in the last regular-season game of the 1982 NFL season. In the spring following that same winter, on April 14, 1983, the Metrodome roof deflated because of a tear caused by late-season heavy snow, and the scheduled Twins game with the California Angels was postponed. On April 26, 1986, the Metrodome roof suffered a slight tear because of high winds, causing a nine-minute delay in the bottom of the seventh inning versus the Angels; however, the roof did not deflate.
2010 roof incident and replacement
A severe snowstorm arrived in Minneapolis in the late evening of December 10, 2010, and lasted through the following night of December 11, with heavy snow accumulating across the city. Because of strong winds, malfunctioning hoses, and a hazardous slippery layer building up on the roof, workers were not allowed onto the roof to remove the snow. As the workers were pulled back, many noticed that the center of the roof was sagging under the weight of the snow.
At around 5:00 a.m. CST on December 12, three of the roof's panels tore open. Snow fell through, covering the turf field. The night before the incident a Fox Sports crew, who were setting up for the football game between the New York Giants and Vikings, noticed water was leaking through the roof. They decided to leave their cameras on; the cameras captured footage of the roof deflation and the snow dropping to the field. The footage was aired on Fox NFL Sunday and quickly went viral.
The game between the Vikings and Giants, scheduled for the afternoon of December 12, was postponed to the next day and relocated to Ford Field in Detroit. Consideration was given to moving the game to the University of Minnesota's nearby TCF Bank Stadium, but that stadium had limited seating capacity and was covered in snow that would have taken several days to clear. A couple of days later, a fourth roof panel ripped open, allowing more snow to enter the stadium. This forced another game, between the Vikings and Chicago Bears (originally scheduled at the Metrodome for December 20), to be relocated to TCF Bank Stadium. The Vikings' final two games of the season were on the road, and the team had already been eliminated from the playoffs, so no additional home games needed to be played.
The roof collapse also caused schedule complications for the Golden Gophers baseball team. All Big Ten Conference home games were moved to Target Field, the home stadium of Major League Baseball's (MLB) Minnesota Twins. A Metrodome tournament was replaced with a three-game series against Gonzaga. Another tournament named the Dairy Queen Classic was relocated to Tucson, Arizona. Other changes included many home game cancellations, and some games being pushed to next year's season.
On February 10, 2011, it was announced that the entire Metrodome roof needed to be replaced at an estimated cost of $18 million. In November 2010, the University of Minnesota men's baseball team had announced plans to play all of their 2011 games at the Metrodome; however, the roof collapse caused those plans to be abandoned. On February 18, 2011, the Gophers announced that all 12 scheduled Big Ten home games in April and May would be played at Target Field, with three non-conference games moved to on-campus Siebert Field.
On July 13, 2011, it was announced that the roof was repaired and had been inflated that morning. However, other construction and repairs were still in progress. The remaining construction and repairs were done by August 1, 2011.
Demolition
With the approval of the new Vikings stadium at the Metrodome site by the Minnesota legislature, the fate of the Metrodome was sealed. The Vikings played their final game at the Metrodome on December 29, 2013, beating the Detroit Lions 14–13. The following day, a local company began removal of seats for sale to the public and various charities and nonprofits. Individual chairs went for $40 each to charities, $60 each to the public and $80 each for specific seat requests.
The roof was deflated for the final time on January 18. On the morning of February 2, 2014, the steel support cables that stretched across the stadium and held the roof together were severed, as construction crews set off 42 simultaneous explosive charges that detached the cables from the concrete structure. The general public was not informed about this phase of the demolition process, prompting about a half-dozen phone calls to police from people who wondered what was going on. This was viewed as the final step before destruction of the Metrodome's concrete bowl would begin. On February 10, 2014, shortly after 9:15 a.m., after more than two months of preliminary work dating back to the groundbreaking of the new Vikings stadium, demolition of the stadium walls finally began.
Just after 1 p.m. on February 17, 2014, one week after demolition of the stadium bowl had begun, crews were taking down the concrete ring beam that encircled the top of the Metrodome when a portion of the beam collapsed out of sequence, bringing an immediate halt to the work. No one was hurt and no equipment was damaged. After five days of investigation by structural and demolition experts, it was decided that the remaining portion of the ring beam would be brought down using controlled explosive charges, essentially the same method used to sever the roof's steel support cables. This second controlled explosion was a further deviation from the original plan not to use explosives, as it was determined to be the safest way to bring down the remaining ring beam. On February 23, 2014, the remaining ring beam and corners of the Metrodome were brought down with 84 explosive charges of dynamite. This allowed crews to continue with the wrecking-ball demolition method originally chosen, though the order in which the sections were brought down was changed as a result of the ring beam implosion. Despite the setback, Mortenson Construction said that the demolition of the Metrodome and the construction of U.S. Bank Stadium both remained on schedule.
On March 15, 2014, the final upper deck bleachers and concrete bleacher-support girders (on the northwest side of the Metrodome) were brought down, taking away any standing remnants of the exterior stadium walls. On April 11, 2014, the final portion of the inner-stadium concrete walls were reduced to rubble, marking the official end of the Hubert H. Humphrey Metrodome. Demolition of the Metrodome was formally declared complete six days later—a month ahead of schedule—as the final truckload of rubble was loaded up and removed from the new stadium construction site. Officials from Mortenson Construction said the entire demolition job required 4,910 truckloads and 16,000 man hours to complete the job.
Usage
The Metrodome is the only venue to have hosted an MLB All-Star Game (1985), a Super Bowl (1992), an NCAA Final Four (1992 & 2001), and a World Series (1987 & 1991).
The NCAA Final Four was held at the Metrodome in 1992 and 2001. The Metrodome also served as one of the four regional venues for the NCAA Division I Basketball Championship in 1986, 1989, 1996, 2000, 2003, 2006, and 2009. The dome also held first- and second-round games in the NCAA basketball tournament in addition to regionals and the Final Four, most recently in 2009.
The Metrodome was recognized as one of the loudest venues in which to view a game, due in part to sound being reflected back into the stands by the fabric roof. Stadium loudness is a sports marketing issue, as the noise lends the home team an advantage over the visiting team. Until its demolition, the Metrodome was the loudest domed NFL stadium; most notably, peak levels during the 1987 and 1991 World Series were measured at 125 and 118 decibels respectively, comparable to a jet airliner and close to the threshold of pain.
The 1991 World Series is considered one of the best of all time. The blue colored seat back and bottom where Kirby Puckett's 1991 World Series Game 6 walk-off home run landed in Section 101, Row 5, Seat 27 (renumbered 34 after the home run in honor of Kirby's uniform number), is now in the Twins archives, along with the gold-colored back and bottom that replaced it for several years. The Twins reinstalled a blue seat back and bottom as well as Puckett's #34 on the seat where it remained until the final Vikings game of 2013 in the Metrodome when, as local media reported, a fan took the #34 plate off the seat. The original World Series armrests and hardware, as well as the replacement blue seat back and bottom, are now part of a private Kirby Puckett collection in Minnesota.
Features
From the time the stadium was built to when it was demolished, the economics of sports marketing changed. Teams began charging higher prices for tickets and demanding more amenities, such as bigger clubhouses and locker rooms, more luxury suites, and more concession revenue. Team owners, the media, and fans pressured the State of Minnesota to provide newer, better facilities to host its teams. The Metrodome served its primary purpose: to provide a climate-controlled facility to host the three sports tenants in Minnesota with the largest attendance.
For Major League baseball, the Metrodome was regarded as a hitter's park, with a low (7 ft) left-field fence (343 ft) that favored right-handed power hitters, and the higher (23 ft) but closer (327 ft) right-field Baggie that favored left-handed power hitters. It gave up even more home runs before air conditioning was installed in 1983. Before 1983, the Dome had been nicknamed "the Sweat Box". The Metrodome was climate controlled, and protected the baseball schedule during the entire time it was the venue for the Minnesota Twins. Major League Baseball schedulers had the luxury of being able to count on dates played at Metrodome. Doubleheader games only occurred when purposely scheduled. The last time that happened was when the Twins scheduled a day-night doubleheader against the Kansas City Royals on August 31, 2007. The doubleheader was necessitated after an August 2 game vs. Kansas City was postponed one day after the I-35W Bridge collapse in downtown Minneapolis.
Roof
The Metrodome's air-supported roof was designed by the inventor of air-supported structures, David H. Geiger, through his New York-based firm Geiger Berger Associates, and was manufactured and installed by Birdair Structures. Held up by positive air pressure, the roof required 250,000 cubic feet per minute (about 120 m³/s) of air to keep it inflated, supplied by 20 large fans. The roof was made of two layers: the outer layer was Teflon-coated fiberglass and the inner layer was a proprietary acoustical fabric. By design, the dead air space between the layers insulated the roof; in winter, warm air was blown into the space between the layers to help melt snow that had accumulated on top. At the time it was built, the roof was the largest expanse of fabric ever used in that manner, with each layer only a fraction of an inch thick. The roof rose about 16 stories above the field at its highest point.
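As a rough check, the two airflow figures quoted above are consistent, and dividing by the number of fans gives the average share per fan; the per-fan figure is only an arithmetic average, not a published specification. A small Python sketch of the conversion:

```python
CUBIC_FEET_PER_CUBIC_METRE = 35.3147

total_cfm = 250_000                                   # cubic feet per minute, from the text
total_m3_per_s = total_cfm / CUBIC_FEET_PER_CUBIC_METRE / 60
per_fan_cfm = total_cfm / 20                          # twenty fans, averaged

print(round(total_m3_per_s))   # about 118, consistent with the quoted 120 m^3/s figure
print(per_fan_cfm)             # 12500.0 cubic feet per minute per fan, on average
```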
To prevent roof tears like those that occurred in its first years of service, the Metropolitan Sports Facilities Commission adopted a twofold strategy: when snow accumulation was expected, hot air was pumped into the space between the roof's two layers, and workers climbed onto the roof and used steam and high-powered hot-water hoses to melt the snow. In addition, before the storm that caused the December 2010 collapse, the interior of the stadium was heated well above its usual temperature.
To maintain the differential air pressure, spectators usually entered and left the seating and concourse areas through revolving doors, since the use of regular doors without an airlock would have caused significant loss of air pressure. The double-walled construction allowed warmed air to circulate beneath the top of the dome, melting accumulated snow. A sophisticated environmental control center in the lower part of the stadium was staffed to monitor the weather and make adjustments in air distribution to maintain the roof.
Because it was unusually low to the playing field, the air-inflated dome occasionally figured into game action during baseball games, and Major League Baseball had specific ground rules for the Metrodome. Any ball that struck the roof, or objects hanging from it, remained in play; if it landed in foul territory it was a foul ball, and if it landed in fair territory it was a fair ball. Any ball that became caught in the roof over fair ground was a ground rule double. That happened only three times in the stadium's history: to Dave Kingman of the Oakland Athletics on May 4, 1984, to University of Minnesota Gophers player George Behr, and to Corey Koskie in 2004. The speakers, being closer to the playing surface, were hit more frequently, especially those in foul ground near the infield, which were typically struck several times a season and posed an extra challenge to infielders trying to catch pop flies. Beginning with the 2005 season, however, the ground rules for Twins games were changed so that any batted ball striking a speaker in foul territory was automatically a foul ball, regardless of whether it was caught.
The dome's off-white roof color made it difficult for fielders to track balls hit into the air, and fielders frequently lost balls against the roof. An example could be seen in field-level footage of a home run derby put on by a softball entertainment crew before a Twins game, in which the balls were routinely lost against the roof.
The field
During its early years of operation, the field at the Metrodome was surfaced with SuperTurf. The surface, also known as SporTurf, was very bouncy; Billy Martin once protested a game after a ball that would normally have been a routine single bounced over the fence for a ground-rule double. Baseball and football players alike complained that the surface was too hard.
This surface was upgraded to AstroTurf in 1987, and in 2004, the sports commission had a newer artificial surface, called FieldTurf, installed. FieldTurf is thought to be a closer approximation to natural grass than Astroturf in its softness, appearance, and feel. A new Sportexe Momentum Turf surface was installed during the summer of 2010.
When the field was converted between football and baseball, the pitcher's mound was raised and lowered by an electric motor. With the field repair, the sliding pits and the pitcher's mound used by the Twins and Gophers were removed, so baserunners in any future baseball games would slide on the artificial "grass". The home plate area was kept, as it was not in play in the football configuration. The original home plate installed at the dome was memorably dug up after the Twins' final game and installed at Target Field. A new playing surface was installed in the summer of 2011 because of the damage from the December 2010 roof collapse.
Plexiglas
From 1985 to 1994, a clear Plexiglas screen sat atop the left-field wall, increasing its total height. It was off this Plexiglas that Twins player Kirby Puckett leapt to rob Ron Gant of the Atlanta Braves of an extra-base hit during Game 6 of the 1991 World Series, a game Puckett went on to win with an 11th-inning walk-off home run. In later years, with the Plexiglas removed, Gant's drive might have been a home run.
Stadium neighborhood
The Metrodome was constructed in an area of downtown Minneapolis known as "Industry Square". Development in the surrounding Downtown East neighborhood took many years to materialize. For many years, there were few bars or restaurants nearby where fans could gather, and tailgating was expressly forbidden in most parking areas. The City of Minneapolis was instead directing the development of entertainment districts toward Seven Corners in Cedar-Riverside, Hennepin Avenue, and the Warehouse District. The Metrodome sat among surface parking lots built on old rail yards, along with defunct factories and warehouses. The Star Tribune owned several blocks nearby that remained parking lots, and these properties and the Minneapolis Armory stood undeveloped between the Metrodome and the rest of downtown Minneapolis. The Metrodome was never connected to the Minneapolis Skyway System, although a connection had been proposed in 1989 to be completed in time to host Super Bowl XXVI. Only in the stadium's later years did redevelopment begin moving southeast to reach the Metrodome, with more restaurants, hotels, and condominiums built nearby. The METRO Blue Line light rail connected the Minneapolis entertainment district with the Metrodome and the airport.
Sight lines
The Metrodome was not a true multi-purpose stadium. Rather, it was built as a football stadium that could convert into a baseball stadium. The seating configuration was almost rectangular in shape, with the baseball field tucked into one corner. The seats along the four straight sides directly faced their corresponding seats on the opposite side, while the seats in the corners were four quarter-circles.
While this was more than suitable for the Vikings and Gophers, with few exceptions this resulted in poor sightlines for baseball. For instance, the seats directly along the left-field line faced the center field and right field fences. Unlike other major league parks, there were no seats down to field level. Only 8,000 seats were located in the lower deck between home plate and the dugouts, where most game action occurs. Seats in these areas were popularly known as "the baseball section." However, even the closest front-row seats sat well above the field.
The way that many seats were situated forced some fans to crane their necks to see the area between the pitcher's mound and home plate. Some fans near the foul poles had to turn more than 80°, compared to less than 70° with the original Yankee Stadium or 75° at Camden Yards. For that reason, the seats down the left-field line were typically among the last ones sold; the (less expensive) outfield lower deck seating tended to fill up sooner. Nearly 1,400 seats were at least partially obstructed – some of them due to the right-field upper deck being directly above (and somewhat overhanging) the folded-up football seats behind right field; and some of them due to steel beams in the back rows of the upper deck which are part of the dome's support system.
On the plus side, there was relatively little foul territory, which was not typical of most domed stadiums (especially those primarily built for football). Also, with the infield tucked into one corner of the stadium, the seats in the so-called "baseball section" had some of the closest views in Major League Baseball. In 2007, the Twins began selling seats in extra rows behind the plate which were previously only used for football. The sight lines were also very good in the right field corner, which faced the infield and was closer to the action than the left field corner.
Unlike most domed stadiums, the Metrodome's baseball configuration had asymmetrical outfield dimensions.
The Twins stopped selling most of the seats in sections 203–212 of the upper level in 1996. This area was usually curtained off during the regular season. However, the stadium could easily be expanded to full capacity for the postseason, or when popular opponents came to town during the regular season.
Scheduling conflicts
As part of the deal with Metrodome, the Minnesota Twins had post-season priority over the Gophers in scheduling. If the Twins were in the playoffs with a home series, the baseball game took priority and the Gopher football game had to be moved to a time suitable to allow the grounds crew to convert the playing field and the stands to the football configuration.
The last month of Major League Baseball's regular season often included one or two Saturdays in which the Twins and Gophers used Metrodome on the same day. On those occasions, the Twins game would start at about 11 am local time (TV announcer Dick Bremer sometimes joked that the broadcast was competing with SpongeBob SquarePants). Afterward, the conversion took place and the Gophers football game started at about 6 pm. The University of Minnesota was the only school in the Big Ten that shared a football facility with professional sports teams for an extended period of years.
In 2007, there were two such schedule conflicts, on September 1 and 22. In 2008, there were no conflicts on the regular-season schedule.
Due to the minimum time needed to convert the field, a baseball game that ran long in clock time had to be suspended, and concluded the next day. The only time this happened was on October 2, 2004, when a game between the Twins and Indians reached the end of the 11th inning after 2:30 pm in a tie and resumed the next day.
The Vikings had rights to the Dome over the Twins except for World Series games. In 1987, the Vikings' home date with the Tampa Bay Buccaneers scheduled for the same day as Game 2 of the World Series was moved to Tampa, and the Vikings' game with the Denver Broncos scheduled for the same day as Game 7 was pushed back to the following Monday night.
The Twins' 2009 AL Central division tiebreaker with the Detroit Tigers was played on Tuesday, October 6, 2009. One-game playoffs are normally held the day after the regular season ends (in this case, the season ended on Sunday, October 4), but the Vikings were using Metrodome for Monday Night Football on October 5. The Twins were awarded the right to host the tiebreaker because they won the season series against Detroit.
Seating capacity
Stadium usage
Minnesota Vikings football
As the stadium was designed first and foremost for the Minnesota Vikings, they had the fewest problems. However, the economics of 21st century professional sports meant that the Vikings owners wanted more luxury suites and better concessions. Renovations were rejected twice, with the 2001 price tag at $269 million.
The Vikings played their first game at the Metrodome in a preseason matchup against the Seattle Seahawks on August 21, 1982. Minnesota won 7–3. The first touchdown in the dome was scored by Joe Senser on an 11-yard pass from Tommy Kramer. The first regular-season game at the Metrodome was the 1982 opener on September 12, when the Vikings defeated the Tampa Bay Buccaneers, 17–10. Rickey Young scored the first regular-season touchdown in the dome on a 3-yard run in the 2nd quarter. On January 9, 1983, the Vikings defeated the Atlanta Falcons, 30–24, in a 1st-round game that was the first playoff game at the Metrodome. On January 17, 1999, the Falcons defeated the Vikings in the first NFC championship game played at the Metrodome. On December 29, 2013, the Vikings played their final game at the Metrodome, a 14–13 victory over the Detroit Lions. The team's record at the dome was 162–88 in the regular season and 6–4 in playoff games. They finished with a perfect record at the dome against the Arizona Cardinals (8–0), Baltimore Ravens (1–0), Cincinnati Bengals (4–0), and Houston Texans (1–0), but with a winless record there against the New York Jets (0–3).
Super Bowl XXVI
NFL owners voted during their May 24, 1989, meeting to award Super Bowl XXVI to Minneapolis over Indianapolis, Pontiac and Seattle. The game on January 26, 1992, was the second Super Bowl to be played in a cold, winter climate city. The first one was Super Bowl XVI on January 24, 1982, in Pontiac, Michigan. Super Bowl XXVI resulted in the Washington Redskins defeating the Buffalo Bills, 37–24.
Minnesota Twins baseball
When it opened in 1982, the Metrodome was appreciated for the protection it gave from mosquitoes and, later, from the weather. Over the years, fans and sportswriters developed a love-hate relationship with the stadium.
The Minnesota Twins won two World Series championships at the Metrodome. The Twins won the 1987 World Series and 1991 World Series by winning all four games held at the Dome in both seasons. The loud noise, white roof, quick turf, and the right-field wall (or "Baggie") provided a substantial home-field advantage for the Twins. The 1991 World Series has been considered one of the best of all time.
For Twins baseball, the address of the Metrodome became 34 Kirby Puckett Place, an honor given to one of the most famous Minnesota Twins players. In 1996, a section of Chicago Avenue in front of the Metrodome was renamed Kirby Puckett Place by the city of Minneapolis. The Metrodome Plaza was added along Kirby Puckett Place before the 1996 season. Before that, the address for the Twins was 501 Chicago Avenue South. For baseball, the Metrodome informally has been called "The House That Puck Built".
By 2001, several newer purpose-built Major League Baseball stadiums had been constructed, and the Metrodome was considered to be among the worst venues in Major League Baseball.
Only two Twins games at the Metrodome were ever postponed. The first was on April 14, 1983, when a massive snowstorm prevented the California Angels from getting to Minneapolis. The game would have likely been postponed in any case, however; that night heavy snow caused part of the roof to collapse. The second was on August 2, 2007, the day after the I-35W Mississippi River bridge had collapsed a few blocks away from the Metrodome. The game scheduled for August 1 was played as scheduled (about one hour after the bridge had collapsed) because the team and police officials were concerned about too many fans departing Metrodome at one time, potentially causing conflict with rescue workers. The August 2 ceremonial groundbreaking at the eventual Target Field was also postponed, for the same reason. The Metrodome carried a memorial decal on the backstop wall for the remainder of the 2007 season.
The Twins played their final scheduled regular-season game at the Metrodome on October 4, 2009, beating the Kansas City Royals, 13–4. After the game, they held their scheduled farewell celebration. Because they ended the day tied with the Detroit Tigers for first place in the American League Central, a one-game playoff between the teams was played there on October 6, 2009, with the Twins beating the Tigers 6–5 in 12 innings. The division clincher would be the Twins' last win at the Metrodome. The announced crowd was 54,088, setting the regular-season attendance record.
The final Twins game at the Metrodome was on October 11, 2009, when they lost to the New York Yankees 4–1, resulting in a three-game sweep in the 2009 ALDS. The Twins' appearance in this series gave Metrodome the distinction of being the first American League stadium to end its Major League Baseball history with post-season play. The only other stadiums whose final games came in the postseason are Fulton County Stadium in Atlanta (1996), the Astrodome in Houston (1999) and Busch Memorial Stadium in St. Louis (2005), all of which were home venues for National League teams. With the departure of the Twins, this leaves the Tampa Bay Rays as the last remaining major league team to play their games in a non-retractable domed stadium.
Basketball
When configured as a basketball arena, the fans in the nearby bleachers got a suitable view of the court, but the action was difficult to see in the upper decks. Concessions were very far away from the temporary infrastructure. The Metrodome as a basketball arena was much larger than most NBA and major college basketball arenas, which run to about 20,000 seats; it functioned like Syracuse's large Carrier Dome. However, the NCAA made a significant amount of money selling the high number of seats for regional and championship games for the men's basketball tournament.
Ten NCAA tournaments took place at the stadium:
1986 1st and 2nd round
1989 Midwest Regional
1991 1st and 2nd round
1992 Final Four
1996 Midwest Regional
2000 1st and 2nd round
2001 Final Four
2003 Midwest Regional
2006 Minneapolis Regional
2009 1st and 2nd round
The Timberwolves used the stadium for their home games during their inaugural season (1989–90) in the NBA while the team waited for construction of Target Center to be completed. The team set NBA records for the highest single-season attendance ever: 1,072,572 fans in 41 home games. The largest crowd for a single game occurred on April 17, 1990: 49,551 fans watched the T-Wolves lose to the Denver Nuggets in the last game of the season. This was the third-largest crowd in the NBA's history.
College football
Beginning in the 1982 college football season, the University of Minnesota Golden Gophers began playing their home football games at the Metrodome. The first game was a 57–3 victory over the Ohio Bobcats on September 11, 1982. Over 27 seasons at the Metrodome (1982–2008), the Gophers played 169 games and compiled an overall record of 87–80–2 (.521), including 41–66–2 (.385) in 109 Big Ten Conference games.
With the Gophers' move to TCF Bank Stadium, only one Power Four program still plays in a domed stadium. Syracuse has its own such facility on campus. When the Gophers first moved to the Metrodome, the NFL-class facilities were seen as an improvement over the aging Memorial Stadium. Initially, attendance increased. However, fans waxed nostalgic over fall days playing outdoors on campus. Huntington Bank Stadium now provides an outdoor, on-campus venue for the team.
College baseball
In the 2010 season, the University of Minnesota Golden Gopher Baseball team played all of their home games at the Metrodome (except a game at the new Target Field on March 27, 2010). The University of Minnesota Golden Gophers baseball team had played games at the Metrodome during February and March since 1985 because of weather. Later games were played at Siebert Field, except for 2006 when all but two home games were played at the Metrodome. The team often played major tournaments at the Dome, which included the Dairy Queen Classic, where three other major Division I baseball teams play in an invitational. Before the NCAA's 2008 rule in Division I regarding the start of the college baseball season, the Golden Gophers would often play home games at the Metrodome earlier than other teams in the area to neutralize the advantage of warmer-weather schools starting their seasons earlier in the year. Some early Big Ten conference games were played at the Metrodome, and the Golden Gophers enjoyed home-field advantage during the early part of the season before the weather warmed, and the Gophers could play games on-campus. Other small colleges also played games in the stadium during the weeks before the Metrodome was open for Division I play. In 2010, 420 amateur baseball and softball games—including the majority of the Golden Gophers' home schedule—were played at the Metrodome.
The size of Siebert Field also affected the Golden Gophers starting in 2010. The Golden Gophers last hosted an NCAA baseball tournament regional in 2000, with temporary seating added. With the Metrodome being available for the tournament starting in 2010, the team could easily place a bid for, and have a better possibility of hosting, an NCAA baseball regional or super regional.
Other cold-weather teams have played at the Metrodome. Big 12 Conference member Kansas has played two series (2007 and 2010) at the Metrodome because of inclement weather against South Dakota State University and Eastern Michigan, respectively.
Soccer
The Minnesota Kicks were supposed to move into the Metrodome for the 1982 NASL season, but the franchise folded in November 1981. The Minnesota Strikers played the 1984 NASL season at the Dome; a crowd of 52,621 saw the Strikers defeat Tampa Bay 1–0 on May 28, 1984. MSHSL boys' and girls' soccer championships were also held at the stadium. The Minnesota Thunder played selected games at the Dome from 1990 to 2009. Minnesota Stars FC, later renamed Minnesota United FC, opened their 2012 season at the stadium and used it for the 2013 NASL spring season. The largest crowd to see a soccer game in Minnesota was at the Metrodome.
Large concerts
The concert capacity of the Metrodome was around 60,000 people, depending on seating and stage configurations, which made it a profitable location for stadium tours during the late 80s and 90s. By comparison, the Target Center in Minneapolis has a concert capacity of up to 20,500. Acoustics at the Metrodome for these concerts were "iffy at best".
Other events
2002 and 2008 Victory Bowls, the NCCAA National Football Championships.
Prep Bowl (Minnesota State High School League (MSHSL); state high school football championships) (1982–2013).
MSHSL football semifinal games (1990–2013)
MSHSL soccer championships and semifinals (1986–2013).
High school and small college baseball games through the spring.
Small college football games in November hosted by Augsburg College. Also other small college football events including the Northern Sun Intercollegiate Conference and the Upper Midwest Athletic Conference.
AMA Motocross Championship (1994–2004, 2008, 2013)
The Stadium Super Trucks off-road racing series scheduled an event in 2013.
Other motorsport events.
Large religious services and gatherings.
The American Wrestling Association promoted WrestleRock 86 on April 20, 1986, drawing 23,000. This was one of the AWA's last major shows before the promotion went out of business several years later.
Rollerdome inline skating around the stadium's concourses and Minnesota Distance Running Association running (exercise programs in the concourses).
Conventions, such as Twins Fest, golf shows, home and garden expos, and car shows.
Cultural celebrations, such as Hmong New Year gatherings and the Oromo Jilboo American Games.
Youth in Music Band Championships
The Promise Keepers, an all-men's evangelical Christian service.
The annual Hmong American New Year celebration was held in December over the course of two days.
Monster Jam.
The 1991 World Special Olympics Summer Games Opening Ceremonies
Naming rights
In 2009, Mall of America purchased naming rights for the field at Metrodome. The contract stated that the field would be called "Mall of America Field at Hubert H. Humphrey Metrodome" for a three-year period, beginning October 5, 2009, and ending February 28, 2012. The name was still used for the 2012 and 2013 seasons.
Despite possible inference from the signage, the Mall of America name applied only to the field, not the stadium as a whole; the building remained the Hubert H. Humphrey Metrodome. The connection between Mall of America and the Metrodome is also notable because the mall was built on the site of the former Metropolitan Stadium, several miles away in suburban Bloomington.
Replacement facilities
With the passage of time, the Metrodome was thought to be an increasingly poor fit for its three major tenants, all of whom claimed the stadium was nearing the end of its useful life.
One major complaint was about the concourses, which were considered somewhat narrow by modern standards, making for cramped conditions whenever attendance was anywhere near capacity. During a 2010 Vikings game, Fox Sports' Alex Marvez wrote that the Metrodome's passageways were so cramped that it would be difficult for fans to evacuate in the event of an emergency. Two of the former tenants, the Gophers (football) and Twins, moved out, while the Vikings played their final years there until demolition. The Vikings' 2014 and 2015 seasons were played at the University of Minnesota's TCF Bank Stadium, and U.S. Bank Stadium, built on the Metrodome site, opened in time for the team's 2016 season.
The Twins, the Vikings, and the Gophers all proposed replacements for the Metrodome, and all three were accepted. The first of the three major tenants to move was the Gophers, who opened their new TCF Bank Stadium (now Huntington Bank Stadium) in September 2009. The next to depart were the Twins, whose new Target Field was completed in time for Opening Day 2010. On May 10, 2012, the Vikings were granted a new stadium by the Minnesota legislators that was built on the Metrodome site, which opened for the 2016 NFL season. Governor Mark Dayton signed the bill on May 14.
Minnesota Twins
The Twins moved to their new ballpark, Target Field, in 2010, after attaining their new stadium with an effort that began in the mid-1990s. Although indoor baseball had critics when Metrodome opened, it was positively regarded by players and fans. By 2001, with Metrodome's peculiarities revealed, and several newer purpose-built Major League Baseball stadiums constructed, an ESPN Page 2 reader poll ranked it as one of the worst Major League Baseball stadiums. Twins management claimed Metrodome generated too little revenue for the Twins to be competitive; specifically, they received no revenue from luxury suite leasing (as those were owned by the Vikings) and only a small percentage of concessions sales. This came to a head in 2001, when the Twins were nearly contracted along with the Montreal Expos, who were also generating insufficient revenue and had a stadium in poor condition. Also, the percentage of season-ticket-quality seats was said to be very low compared to other stadiums. From 2003 through 2009, the Twins had year-to-year leases, and could have moved to another city at any time. However, with no large American markets or new major-league-quality stadiums existing without a current team, it was accepted that the Twins could not profit from a move. The Twins sought a taxpayer subsidy of more than $200 million to assist in construction of the stadium. On January 9, 2005, the Twins went to court to argue that their Metrodome lease should be considered "dead" after the 2005 season. In February, the district court ruled that the Twins' lease was year-to-year and the team could vacate Metrodome at the end of the 2005 season.
In late April 2007, Hennepin County officially took over the future ballpark site (through a form of eminent domain called "quick-take") which had been an ongoing struggle between the county and the land owners. On October 15, 2007, the two sides reached a negotiated settlement of just under $29 million, ending the dispute. As a result, the county noted it would have to cut back on some improvements to the surrounding streetscapes, though it also revealed that the Pohlad family had committed another $15 million for infrastructure.
University of Minnesota Golden Gophers football
The Minnesota Golden Gophers football program began playing in the Metrodome for the 1982 season. Attendance was expected to increase over the old Memorial Stadium attendance, especially for late fall games, due to the climate-controlled comfort. Initially, average attendance did increase over previous seasons at Memorial Stadium. But the venue took fans away from the traditional on-campus football atmosphere: students had to take a bus from the campus to the stadium to attend a Gophers game. The distance from the main campus, along with poor performance by the Gopher football team, caused interest to wane.
The Gophers officially moved back onto campus, to TCF Bank Stadium, for the 2009 football season. The university believed an on-campus stadium would motivate its student base for increased ticket sales, and also would benefit from athletic revenues, not only for the football program, but the non-revenue sports as well. The new stadium reportedly cost less than half of a current-era NFL-style football stadium, and was built on what were former surface parking lots just a few blocks east of the former Memorial Stadium, with the naming rights purchased by TCF Financial Corporation. The University of Minnesota expected to raise more than half the cost of the stadium via private donations. The Gopher Stadium bill was passed by both houses on May 20, 2006, the day before the Twins Stadium bill passed. On May 24, 2006, Governor Pawlenty signed the Gopher bill on the university campus.
Minnesota Vikings
The Vikings initially supported a Superfund site in Arden Hills, but costs of developing infrastructure made the site unworkable. A number of sites in Minneapolis were floated before the team and state settled on a location adjacent to and including the current Metrodome site.
On May 10, 2012, the Minnesota Legislature approved funding for a new Vikings stadium on that site. The project had a budget of $1.027 billion, of which the Vikings covered $529 million, the state covered $348 million, and the remaining $150 million was covered by a Minneapolis hospitality tax. The bill was signed by Governor Dayton on May 14. The Vikings played in the Metrodome until the end of the 2013 season. The Vikings' temporary home during construction was TCF Bank Stadium.
References
External links
Metrodome Dreamscapes - digital ephemera archive
Ballpark Digest review of Metrodome
Blog with pictures of 2011 roof
Defunct multi-purpose stadiums in the United States
2014 disestablishments in Minnesota
Demolished sports venues in Minnesota
Sports venues in Minneapolis
Covered stadiums in the United States
Defunct college baseball venues in the United States
Defunct college football venues
Defunct American football venues in the United States
Defunct Major League Baseball venues
Minnesota Golden Gophers football venues
Minnesota Timberwolves
Minnesota Twins stadiums
Minnesota Vikings stadiums
Fort Lauderdale Strikers stadiums
Defunct baseball venues in the United States
Defunct National Football League venues
Former NBA venues
Skidmore, Owings & Merrill buildings
Air-supported structures
American football venues in Minnesota
Baseball venues in Minnesota
Sports venues completed in 1982
1982 establishments in Minnesota
American inventions
Bangladeshi inventions
North American Soccer League (1968–1984) stadiums
Sports venues demolished in 2014
Defunct soccer venues in the United States
Soccer venues in Minnesota
Buildings and structures demolished by controlled implosion
Fazlur Khan buildings | Hubert H. Humphrey Metrodome | Engineering | 9,710 |
23,251,097 | https://en.wikipedia.org/wiki/Sand%20rammer | A sand rammer is a piece of equipment used in foundry sand testing to make test specimens of molding sand by compacting bulk material with three free drops of a fixed weight from a fixed height. It is also used to determine the compactability of sands by using special specimen tubes and a linear scale.
Mechanism
A sand rammer consists of a calibrated sliding weight actuated by a cam, a shallow cup below the ram head to accommodate the specimen tube, a specimen stripper to strip the compacted specimen out of the specimen tube, and a specimen tube used to prepare the standard specimen of 50 mm diameter by 50 mm height (or 2 inch diameter by 2 inch height for an AFS standard specimen).
Specimen preparation
The user actuates the cam by rotating the handle, causing the cam to lift the weight and let it fall freely onto the frame attached to the ram head. This applies a standard compacting action to a pre-measured amount of sand.
A variety of standard specimens for green sand and silicate-based (CO2) sand are prepared using a sand rammer along with its accessories.
The objective in producing the standard cylindrical specimen is for the specimen to be 2 inches high (plus or minus 1/32 inch) after three rams of the machine. After the specimen has been prepared inside the specimen tube, it can be used for various standard sand tests such as the permeability test, the green sand compression test, the shear test, or other standard foundry tests.
The sand rammer machine can be used to measure compactability of prepared sand by filling the specimen tube with prepared sand so that it is level with the top of the tube. The tube is then placed under the ram head in the shallow cup and rammed three times. Compactability in percentage is then calculated from the resultant height of the sand inside the specimen tube.
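The compactability calculation described above is simple enough to express as a short script. The sketch below assumes the usual definition, the percentage reduction in the height of the sand column after the three rams relative to the initially filled tube height; the function name and example numbers are illustrative, not taken from any standard.

```python
def compactability_percent(initial_height_mm: float, rammed_height_mm: float) -> float:
    """Compactability as the percent reduction in sand-column height after ramming.

    initial_height_mm: height of the loosely filled specimen tube (sand level with
                       the top of the tube before ramming).
    rammed_height_mm:  height of the sand column after three rams.
    """
    if initial_height_mm <= 0:
        raise ValueError("initial height must be positive")
    return 100.0 * (initial_height_mm - rammed_height_mm) / initial_height_mm


# Example: a 100 mm tube rammed down to 55 mm gives 45% compactability.
print(compactability_percent(100.0, 55.0))
```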
A rammer is mounted on a base block on a solid foundation, which provides vibration damping to ensure consistent ramming.
Used for sand types
Green sand
Oil sand
CO2 sand
Raw sand i.e. base sand i.e. un-bonded sand.
Prerequisites
The prerequisite equipment for a sand rammer varies with the testing scenario:
Case 1: If the prepared sand is ready
A tube-filler accessory to fill the sample tube with sand. Its advantage is that it lets the sand fall in from a fixed distance and riddles it before filling.
Case 2: Experiment by preparing new sand sample
If sand needs to be prepared before making the specimen, the following equipment may be needed:
Laboratory sand muller or laboratory sand mixer (for core sands)
Case 3: For low compressive strength sands and mixtures:
Split specimen tube
References
Casting (manufacturing)
Metallurgical processes
Metalworking tools | Sand rammer | Chemistry,Materials_science | 551 |
50,226,467 | https://en.wikipedia.org/wiki/Bair%20Hugger | The Bair Hugger system is a convective temperature management system used in a hospital or surgery center to maintain a patient's core body temperature. The Bair Hugger system consists of a reusable warming unit and single-use disposable warming blankets for use before, during and after surgery. This medical device launched in 1987 and is currently manufactured by the 3M Company.
Function
The Bair Hugger system uses convective warming, also known as forced-air warming, to prevent and treat perioperative hypothermia.
The system includes two primary components: a warming unit and a disposable blanket. The warming unit is connected by a flexible hose to the single-use blanket. Warm air from the warming unit passes through the flexible hose and into the blanket. Once the warmed air reaches the blanket it exits through a series of micro-perforations on the underside of the blanket, warming the patient's skin in an area not involved in the surgical procedure.
Performance
The Bair Hugger system warms effectively due to the properties of convection and radiation; heat transfer improves with the movement of warmed air across the surface of the patient's skin. Up to 64 percent of the patient's body surface may be recruited for heat transfer, depending on which Bair Hugger blanket is used.
History
The Bair Hugger system was originally designed by Scott Augustine, MD, of Minnesota. The Bair Hugger was produced by Arizant, previously known as Augustine Medical. Augustine resigned from Arizant in 2002, and Arizant was bought by 3M in 2009. Augustine later invented a different type of patient-warming device and formed a separate company to sell his competing device.
The Bair Hugger system received FDA clearance in 1987.
References
External sources
http://www.bairhugger.com
Medical equipment
3M brands | Bair Hugger | Biology | 387 |
52,022,163 | https://en.wikipedia.org/wiki/L%20band%20%28infrared%29 | In infrared astronomy, the L band is an atmospheric transmission window centred on 3.5 micrometres (in the mid-infrared).
References
Electromagnetic spectrum
Infrared imaging | L band (infrared) | Physics,Astronomy | 35 |
45,473 | https://en.wikipedia.org/wiki/Lynn%20Margulis | Lynn Margulis (born Lynn Petra Alexander; March 5, 1938 – November 22, 2011) was an American evolutionary biologist, and was the primary modern proponent for the significance of symbiosis in evolution. In particular, Margulis transformed and fundamentally framed current understanding of the evolution of cells with nuclei by proposing it to have been the result of symbiotic mergers of bacteria. Margulis was also the co-developer of the Gaia hypothesis with the British chemist James Lovelock, proposing that the Earth functions as a single self-regulating system, and was the principal defender and promulgator of the five kingdom classification of Robert Whittaker.
Throughout her career, Margulis' work could arouse intense objections, and her formative paper, "On the Origin of Mitosing Cells", appeared in 1967 after being rejected by about fifteen journals. Still a junior faculty member at Boston University at the time, her theory that cell organelles such as mitochondria and chloroplasts were once independent bacteria was largely ignored for another decade, becoming widely accepted only after it was powerfully substantiated through genetic evidence. Margulis was elected a member of the US National Academy of Sciences in 1983. President Bill Clinton presented her the National Medal of Science in 1999. The Linnean Society of London awarded her the Darwin-Wallace Medal in 2008.
Margulis was a strong critic of neo-Darwinism. Her position sparked lifelong debate with leading neo-Darwinian biologists, including Richard Dawkins, George C. Williams, and John Maynard Smith. Margulis' work on symbiosis and her endosymbiotic theory had important predecessors, going back to the mid-19th century – notably Andreas Franz Wilhelm Schimper, Konstantin Mereschkowski, Boris Kozo-Polyansky, and Ivan Wallin – and Margulis not only promoted greater recognition for their contributions, but personally oversaw the first English translation of Kozo-Polyansky's Symbiogenesis: A New Principle of Evolution, which appeared the year before her death. Many of her major works, particularly those intended for a general readership, were collaboratively written with her son Dorion Sagan.
In 2002, Discover magazine recognized Margulis as one of the 50 most important women in science.
Early life and education
Lynn Petra Alexander was born on March 5, 1938 in Chicago, to a Jewish family. Her parents were Morris Alexander and Leona Wise Alexander. She was the eldest of four daughters. Her father was an attorney who also ran a company that made road paints. Her mother operated a travel agency. She entered the Hyde Park Academy High School in 1952, describing herself as a bad student who frequently had to stand in the corner.
A precocious child, she was accepted at the University of Chicago Laboratory Schools at the age of fifteen. In 1957, at age 19, she earned a BA from the University of Chicago in Liberal Arts. She joined the University of Wisconsin to study biology under Hans Ris and Walter Plaut, her supervisor, and graduated in 1960 with an MS in genetics and zoology. (Her first publication, published with Plaut in 1958 in the Journal of Protozoology, was on the genetics of Euglena, flagellates which have features of both animals and plants.) She then pursued research at the University of California, Berkeley, under the zoologist Max Alfert. Before she could complete her dissertation, she was offered research associateship and then lectureship at Brandeis University in Massachusetts in 1964. It was while working there that she obtained her PhD from the University of California, Berkeley in 1965. Her thesis was An Unusual Pattern of Thymidine Incorporation in Euglena.
Career
In 1966 she moved to Boston University, where she taught biology for twenty-two years. She was initially an Adjunct Assistant Professor, then was appointed to Assistant Professor in 1967. She was promoted to Associate Professor in 1971, to full Professor in 1977, and to University Professor in 1986. In 1988 she was appointed Distinguished Professor of Botany at the University of Massachusetts at Amherst. She was Distinguished Professor of Biology in 1993. In 1997 she transferred to the Department of Geosciences at UMass Amherst to become Distinguished Professor of Geosciences "with great delight", the post which she held until her death.
Endosymbiosis theory
In 1966, as a young faculty member at Boston University, Margulis wrote a theoretical paper titled "On the Origin of Mitosing Cells". The paper, however, was "rejected by about fifteen scientific journals," she recalled. It was finally accepted by Journal of Theoretical Biology and is considered today a landmark in modern endosymbiotic theory. Weathering constant criticism of her ideas for decades, Margulis was famous for her tenacity in pushing her theory forward, despite the opposition she faced at the time. The descent of mitochondria from bacteria and of chloroplasts from cyanobacteria was experimentally demonstrated in 1978 by Robert Schwartz and Margaret Dayhoff. This formed the first experimental evidence for the symbiogenesis theory. The endosymbiosis theory of organogenesis became widely accepted in the early 1980s, after the genetic material of mitochondria and chloroplasts had been found to be significantly different from that of the symbiont's nuclear DNA.
In 1995, English evolutionary biologist Richard Dawkins had this to say about Lynn Margulis and her work:
I greatly admire Lynn Margulis's sheer courage and stamina in sticking by the endosymbiosis theory, and carrying it through from being an unorthodoxy to an orthodoxy. I'm referring to the theory that the eukaryotic cell is a symbiotic union of primitive prokaryotic cells. This is one of the great achievements of twentieth-century evolutionary biology, and I greatly admire her for it.
Symbiosis as evolutionary force
Margulis opposed competition-oriented views of evolution, stressing the importance of symbiotic or cooperative relationships between species.
She later formulated a theory that proposed symbiotic relationships between organisms of different phyla, or kingdoms, as the driving force of evolution, and explained genetic variation as occurring mainly through transfer of nuclear information between bacterial cells or viruses and eukaryotic cells. Her organelle genesis ideas are now widely accepted, but the proposal that symbiotic relationships explain most genetic variation is still something of a fringe idea.
Margulis also held a negative view of certain interpretations of Neo-Darwinism that she felt were excessively focused on competition between organisms, as she believed that history will ultimately judge them as comprising "a minor twentieth-century religious sect within the sprawling religious persuasion of Anglo-Saxon Biology."
She wrote that proponents of the standard theory "wallow in their zoological, capitalistic, competitive, cost-benefit interpretation of Darwin – having mistaken him ... Neo-Darwinism, which insists on [the slow accrual of mutations by gene-level natural selection], is in a complete funk."
Gaia hypothesis
Margulis initially sought out the advice of James Lovelock for her own research: she explained that, "In the early seventies, I was trying to align bacteria by their metabolic pathways. I noticed that all kinds of bacteria produced gases. Oxygen, hydrogen sulfide, carbon dioxide, nitrogen, ammonia—more than thirty different gases are given off by the bacteria whose evolutionary history I was keen to reconstruct. Why did every scientist I asked believe that atmospheric oxygen was a biological product but the other atmospheric gases—nitrogen, methane, sulfur, and so on—were not? 'Go talk to Lovelock,' at least four different scientists suggested. Lovelock believed that the gases in the atmosphere were biological."
Margulis met with Lovelock, who explained his Gaia hypothesis to her, and very soon they began an intense collaborative effort on the concept. One of the earliest significant publications on Gaia was a 1974 paper co-authored by Lovelock and Margulis, which succinctly defined the hypothesis as follows: "The notion of the biosphere as an active adaptive control system able to maintain the Earth in homeostasis we are calling the 'Gaia hypothesis.'"
Like other early presentations of Lovelock's idea, the Lovelock-Margulis 1974 paper seemed to give living organisms complete agency in creating planetary self-regulation, whereas later, as the idea matured, this planetary-scale self-regulation was recognized as an emergent property of the Earth system, life and its physical environment taken together. When climatologist Stephen Schneider convened the 1989 American Geophysical Union Chapman Conference around the issue of Gaia, the idea of "strong Gaia" and "weak Gaia" was introduced by James Kirchner, after which Margulis was sometimes associated with the idea of "weak Gaia", incorrectly (her essay "Gaia is a Tough Bitch" dates from 1995 – and it stated her own distinction from Lovelock as she saw it, which was primarily that she did not like the metaphor of Earth as a single organism, because, she said, "No organism eats its own waste"). In her 1998 book Symbiotic Planet, Margulis explored the relationship between Gaia and her work on symbiosis.
Five kingdoms of life
In 1969, life on earth was classified into five kingdoms, as introduced by Robert Whittaker. Margulis became the most important supporter, as well as critic – while supporting parts, she was the first to recognize the limitations of Whittaker's classification of microbes. But later discoveries of new organisms, such as archaea, and emergence of molecular taxonomy challenged the concept. By the mid-2000s, most scientists began to agree that there are more than five kingdoms. Margulis became the most important defender of the five kingdom classification. She rejected the three-domain system introduced by Carl Woese in 1990, which gained wide acceptance. She introduced a modified classification by which all life forms, including the newly discovered, could be integrated into the classical five kingdoms. According to Margulis, the main problem, archaea, falls under the kingdom Prokaryotae alongside bacteria (in contrast to the three-domain system, which treats archaea as a higher taxon than kingdom, or the six-kingdom system, which holds that it is a separate kingdom). Margulis' concept is given in detail in her book Five Kingdoms, written with Karlene V. Schwartz. It has been suggested that it is mainly because of Margulis that the five-kingdom system survives.
Metamorphosis theory
In 2009, via a then-standard publication-process known as "communicated submission" (which bypassed traditional peer review), she was instrumental in getting the Proceedings of the National Academy of Sciences (PNAS) to publish a paper by Donald I. Williamson rejecting "the Darwinian assumption that larvae and their adults evolved from a single common ancestor." Williamson's paper provoked immediate response from the scientific community, including a countering paper in PNAS. Conrad Labandeira of the Smithsonian National Museum of Natural History said, "If I was reviewing [Williamson's paper] I would probably opt to reject it," he says, "but I'm not saying it's a bad thing that this is published. What it may do is broaden the discussion on how metamorphosis works and [...] [on] the origin of these very radical life cycles." But Duke University insect developmental biologist Fred Nijhout said that the paper was better suited for the "National Enquirer than the National Academy." In September it was announced that PNAS would eliminate communicated submissions in July 2010. PNAS stated that the decision had nothing to do with the Williamson controversy.
AIDS/HIV theory
In 2009 Margulis and seven others authored a position paper concerning research on the viability of round body forms of some spirochetes, "Syphilis, Lyme disease, & AIDS: Resurgence of 'the great imitator'?" which states that, "Detailed research that correlates life histories of symbiotic spirochetes to changes in the immune system of associated vertebrates is sorely needed", and urging the "reinvestigation of the natural history of mammalian, tick-borne, and venereal transmission of spirochetes in relation to impairment of the human immune system". The paper went on to suggest "that the possible direct causal involvement of spirochetes and their round bodies to symptoms of immune deficiency be carefully and vigorously investigated".
In a Discover Magazine interview, Margulis explained her reason for interest in the topic of the 2009 "AIDS" paper: "I'm interested in spirochetes only because of our ancestry. I'm not interested in the diseases", and stated that she had called them "symbionts" because both the spirochete which causes syphilis (Treponema) and the spirochete which causes Lyme disease (Borrelia) only retain about 20% of the genes they would need to live freely, outside of their human hosts.
However, in the Discover Magazine interview Margulis said that "the set of symptoms, or syndrome, presented by syphilitics overlaps completely with another syndrome: AIDS", and also noted that Kary Mullis said that "he went looking for a reference substantiating that HIV causes AIDS and discovered, 'There is no such document' ".
This provoked a widespread supposition that Margulis had been an "AIDS denialist". Jerry Coyne reacted on his Why Evolution is True blog to what he interpreted as Margulis believing "that AIDS is really syphilis, not viral in origin at all." Seth Kalichman, a social psychologist who studies behavioral and social aspects of AIDS, cited her 2009 paper as an example of AIDS denialism "flourishing", and asserted that her "endorsement of HIV/AIDS denialism defies understanding".
Reception
Historian Jan Sapp has said that "Lynn Margulis's name is as synonymous with symbiosis as Charles Darwin's is with evolution." She has been called "science's unruly earth mother", a "vindicated heretic", or a scientific "rebel". It has been suggested that initial rejection of Margulis' work on the endosymbiotic theory, and the controversial nature of it as well as Gaia theory, made her identify throughout her career with scientific mavericks, outsiders, and unaccepted theories generally.
In the last decade of her life, while key components of her life's work began to be understood as fundamental to a modern scientific viewpoint – the widespread adoption of Earth System Science and the incorporation of key parts of endosymbiotic theory into biology curricula worldwide – Margulis if anything became more embroiled in controversy, not less. Journalist John Wilson explained this by saying that Lynn Margulis "defined herself by oppositional science," and in the commemorative collection of essays Lynn Margulis: The Life and Legacy of a Scientific Rebel, commentators again and again depict her as a modern embodiment of the "scientific rebel", akin to Freeman Dyson's 1995 essay The Scientist as Rebel, a tradition Dyson saw embodied in Benjamin Franklin, and which Dyson believed to be essential to good science.
Awards and recognitions
1975, Elected Fellow of the American Association for the Advancement of Science.
1978, Guggenheim Fellowship.
1983, Elected to the National Academy of Sciences.
1985, Guest Hagey Lecturer, University of Waterloo.
1986, Miescher-Ishida Prize.
1989, conferred the Commandeur de l'Ordre des Palmes Académiques de France.
1992, recipient of Chancellor's Medal for Distinguished Faculty of the University of Massachusetts at Amherst.
1995, elected Fellow of the World Academy of Art and Science.
1997, elected to the Russian Academy of Natural Sciences.
1998, papers permanently archived in the Library of Congress, Washington, D.C.
1998, recipient of the Distinguished Service Award of the American Institute of Biological Sciences.
1998, elected Fellow of the American Academy of Arts and Sciences.
1999, recipient of the William Procter Prize for Scientific Achievement.
1999, recipient of the National Medal of Science, awarded by President William J. Clinton.
2001, Golden Plate Award of the American Academy of Achievement
2002–05, Alexander von Humboldt Prize.
2005, elected President of Sigma Xi, The Scientific Research Society.
2006, Founded Sciencewriters Books with her son Dorion.
2008, one of thirteen recipients in 2008 of the Darwin-Wallace Medal, heretofore bestowed every 50 years, by the Linnean Society of London.
2010, inductee into the Leonardo da Vinci Society of Thinking at the University of Advancing Technology in Tempe, Arizona.
2010, NASA Public Service Award for Astrobiology.
2012, Lynn Margulis Symposium: Celebrating a Life in Science, University of Massachusetts, Amherst, March 23–25, 2012.
2017, the Journal of Theoretical Biology 434, 1–114 commemorated the 50th anniversary of "The origin of mitosing cells" with a special issue
Honorary doctorate from 15 universities.
Personal life
Margulis married astronomer Carl Sagan in 1957 soon after she got her bachelor's degree. Sagan was then a graduate student in physics at the University of Chicago. Their marriage ended in 1964, just before she completed her PhD. They had two sons, Dorion Sagan, who later became a popular science writer and her collaborator, and Jeremy Sagan, software developer and founder of Sagan Technology.
In 1967 she married Thomas N. Margulis, a crystallographer. They had a son named Zachary Margulis-Ohnuma, a New York City criminal defense lawyer, and a daughter Jennifer Margulis, teacher and author. They divorced in 1980.
She commented, "I quit my job as a wife twice," and, "it's not humanly possible to be a good wife, a good mother, and a first-class scientist. No one can do it — something has to go."
In the 2000s she had a relationship with fellow biologist Ricardo Guerrero.
Margulis argued that the September 11 attacks were a "false-flag operation, which has been used to justify the wars in Afghanistan and Iraq as well as unprecedented assaults on [...] civil liberties." She wrote that there was "overwhelming evidence that the three buildings [of the World Trade Center] collapsed by controlled demolition."
She was a religious agnostic, and a staunch evolutionist, but rejected the modern evolutionary synthesis, and said: "I remember waking up one day with an epiphanous revelation: I am not a neo-Darwinist! I recalled an earlier experience, when I realized that I wasn't a humanistic Jew. Although I greatly admire Darwin's contributions and agree with most of his theoretical analysis and I am a Darwinist, I am not a neo-Darwinist." She argued that "Natural selection eliminates and maybe maintains, but it doesn't create", and maintained that symbiosis was the major driver of evolutionary change.
Margulis died on November 22, 2011, at home in Amherst, Massachusetts, five days after suffering a hemorrhagic stroke. In accordance with her wishes, she was cremated and her ashes were scattered in her favorite research areas, near her home.
Works
Books
Margulis, Lynn (1970). Origin of Eukaryotic Cells, Yale University Press,
Margulis, Lynn (1982). Early Life, Science Books International,
Margulis, Lynn, and Dorion Sagan (1986). Origins of Sex: Three Billion Years of Genetic Recombination, Yale University Press,
Margulis, Lynn, and Dorion Sagan (1987). Microcosmos: Four Billion Years of Evolution from Our Microbial Ancestors, HarperCollins,
Margulis, Lynn, and Dorion Sagan (1991). Mystery Dance: On the Evolution of Human Sexuality, Summit Books,
Margulis, Lynn, ed. (1991). Symbiosis as a Source of Evolutionary Innovation: Speciation and Morphogenesis, The MIT Press,
Margulis, Lynn (1992). Symbiosis in Cell Evolution: Microbial Communities in the Archean and Proterozoic Eons, W.H. Freeman,
Sagan, Dorion, and Margulis, Lynn (1993). The Garden of Microbial Delights: A Practical Guide to the Subvisible World, Kendall/Hunt,
Margulis, Lynn, Dorion Sagan and Niles Eldredge (1995) What Is Life?, Simon and Schuster,
Margulis, Lynn, and Dorion Sagan (1997). Slanted Truths: Essays on Gaia, Symbiosis, and Evolution, Copernicus Books,
Margulis, Lynn, and Dorion Sagan (1997). What Is Sex?, Simon and Schuster,
Margulis, Lynn, and Karlene V. Schwartz (1997). Five Kingdoms: An Illustrated Guide to the Phyla of Life on Earth, W.H. Freeman & Company,
Margulis, Lynn (1998). Symbiotic Planet: A New Look at Evolution, Basic Books,
Margulis, Lynn, et al. (2002). The Ice Chronicles: The Quest to Understand Global Climate Change, University of New Hampshire,
Margulis, Lynn, and Dorion Sagan (2002). Acquiring Genomes: A Theory of the Origins of Species, Perseus Books Group,
Margulis, Lynn (2007). Luminous Fish: Tales of Science and Love, Sciencewriters Books,
Margulis, Lynn, and Eduardo Punset, eds. (2007). Mind, Life and Universe: Conversations with Great Scientists of Our Time, Sciencewriters Books,
Margulis, Lynn, and Dorion Sagan (2007). Dazzle Gradually: Reflections on the Nature of Nature, Sciencewriters Books,
Journals
Explanatory notes
References
External links
1938 births
2011 deaths
20th-century American biologists
20th-century American women scientists
20th-century American zoologists
21st-century American women scientists
21st-century American zoologists
21st-century American biologists
American agnostics
American Jews
American conspiracy theorists
Boston University faculty
Carl Sagan
American evolutionary biologists
Jewish women scientists
Lyme disease researchers
HIV/AIDS denialists
Members of the United States National Academy of Sciences
National Medal of Science laureates
Sagan family
Scientists from Massachusetts
Symbiogenesis researchers
Symbiosis
Theoretical biologists
University of California, Berkeley alumni
University of Chicago Laboratory Schools alumni
University of Massachusetts Amherst faculty
University of Wisconsin–Madison College of Letters and Science alumni
American women evolutionary biologists
American women zoologists
Jewish humanists
Jewish agnostics | Lynn Margulis | Biology | 4,751 |
22,417,827 | https://en.wikipedia.org/wiki/Niche%20apportionment%20models | Mechanistic models for niche apportionment are biological models used to explain relative species abundance distributions. These niche apportionment models describe how species break up resource pool in multi-dimensional space, determining the distribution of abundances of individuals among species. The relative abundances of species are usually expressed as a Whittaker plot, or rank abundance plot, where species are ranked by number of individuals on the x-axis, plotted against the log relative abundance of each species on the y-axis. The relative abundance can be measured as the relative number of individuals within species or the relative biomass of individuals within species.
History
Niche apportionment models were developed because ecologists sought biological explanations for relative species abundance distributions. MacArthur (1957, 1961), was one of the earliest to express dissatisfaction with purely statistical models, presenting instead 3 mechanistic niche apportionment models. MacArthur believed that ecological niches within a resource pool could be broken up like a stick, with each piece of the stick representing niches occupied in the community. With contributions from Sugihara (1980), Tokeshi (1990, 1993, 1996) expanded upon the broken stick model, when he generated roughly 7 mechanistic niche apportionment models. These mechanistic models provide a useful starting point for describing the species composition of communities.
Description
A niche apportionment model can be used in situations where one resource pool is either sequentially or simultaneously broken up into smaller niches by colonizing species or by speciation (clarification on resource use: species within a guild use the same resources, while species within a community may not).
These models describe how species that draw from the same resource pool (e.g. a guild) partition their niche. The resource pool is broken either sequentially or simultaneously, and the two components of the process of fragmentation of the niche are which fragment is chosen and the size of the resulting fragment (Figure 2).
Niche apportionment models have been used in the primary literature to explain and describe changes in the relative abundance distributions of a diverse array of taxa including freshwater insects, fish, bryophytes, beetles, hymenopteran parasites, plankton assemblages and salt marsh grass.
Assumptions
The mechanistic models that describe these plots work under the assumption that rank abundance plots are based on a rigorous estimate of the abundances of individuals within species and that these measures represent the actual species abundance distribution. Furthermore, whether using the number of individuals as the abundance measure or the biomass of individuals, these models assume that this quantity is directly proportional to the size of the niche occupied by an organism. One suggestion is that abundance measured as the number of individuals may exhibit lower variances than measures using biomass. Thus, some studies using abundance as a proxy for niche allocation may overestimate the evenness of a community. This happens because the relationship between body size, abundance, and resource use is not clear-cut. Often studies fail to incorporate size structure or biomass estimates into measures of actual abundance, and these measures can create a higher variance around the niche apportionment models than abundance measured strictly as the number of individuals.
Tokeshi's mechanistic models of niche apportionment
Seven mechanistic models that describe niche apportionment are described below. The models are presented in order of increasing evenness, from the least even (the Dominance Pre-emption model) to the most even (the Dominance Decay and MacArthur Fraction models).
Dominance preemption
This model describes a situation where after initial colonization (or speciation) each new species pre-empts more than 50% of the smallest remaining niche. In a Dominance preemption model of niche apportionment each species colonizes a random portion between 50 and 100% of the smallest remaining niche, making this model stochastic in nature. A closely related model, the Geometric Series, is a deterministic version of the Dominance pre-emption model, wherein the percentage of remaining niche space that the new species occupies (k) is always the same. In fact, the dominance pre-emption and geometric series models are conceptually similar and will produce the same relative abundance distribution when the proportion of the smaller niche filled is always 0.75. The dominance pre-emption model is the best fit to the relative abundance distributions of some stream fish communities in Texas, including some taxonomic groupings, and specific functional groupings.
Figure: the geometric series with k = 0.75.
Random assortment
In the random assortment model the resource pool is divided at random among simultaneously or sequentially colonizing species. This pattern could arise because the abundance measure does not scale with the amount of niche occupied by a species or because temporal-variation in species abundance or niche breadth causes discontinuity in niche apportionment over time and thus species appear to have no relationship between extent of occupancy and their niche. Tokeshi (1993) explained that this model, in many ways, is similar to Caswell's neutral theory of biodiversity, mainly because species appear to act independently of each other.
Random fraction
The random fraction model describes a process where niche size is chosen at random by sequentially colonizing species. The initial species chooses a random portion of the total niche and subsequent colonizing species also choose a random portion of the total niche and divide it randomly until all species have colonized. Tokeshi (1990) found this model to be compatible with some epiphytic chironomid communities, and more recently it has been used to explain the relative abundance distributions of phytoplankton communities, salt meadow vegetation, some communities of insects in the order Diptera, some ground beetle communities, functional and taxonomic groupings of stream fish in Texas bio-regions, and ichneumonid parasitoids. A similar model was developed by Sugihara in an attempt to provide a biological explanation for the log normal distribution of Preston (1948). Sugihara's (1980) Fixed Division Model was similar to the random fraction model, but the randomness of the model is drawn from a triangular distribution with a mean of 0.75 rather than a normal distribution with a mean of 0.5 used in the random fraction. Sugihara used a triangular distribution to draw the random variables because the randomness of some natural populations matches a triangular distribution with a mean of 0.75.
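As an illustration of how such a model is typically simulated, the sketch below implements the sequential breakage just described: each arriving species picks an existing niche uniformly at random and splits it at a uniformly random point. The uniform split and the function name are illustrative assumptions rather than code from any of the cited studies.

```python
import random


def random_fraction(num_species: int, seed: int = 0) -> list:
    """Simulate the random fraction model of niche apportionment.

    Start with a single niche of size 1.0; each new species chooses one of the
    existing niches with equal probability (irrespective of its size) and splits
    it at a uniformly random point. Returns niche sizes sorted largest-first,
    i.e. a rank-abundance vector.
    """
    rng = random.Random(seed)
    niches = [1.0]
    while len(niches) < num_species:
        i = rng.randrange(len(niches))   # niche chosen without regard to size
        split = rng.random()             # uniform breakage point
        piece = niches[i] * split
        niches[i] -= piece
        niches.append(piece)
    return sorted(niches, reverse=True)


print(random_fraction(10))
```

Weighting the choice of niche by (niche size)^k instead of choosing uniformly turns this into the power fraction model described below, with k = 0 recovering the random fraction and k = 1 behaving like the MacArthur fraction.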
Power fraction
This model can explain a relative abundance distribution when the probability of colonizing an existing niche in a resource pool is positively related to the size of that niche (measured as abundance, biomass, etc.). The probability with which a portion of the niche is colonized depends on the relative sizes of the established niches, and is scaled by an exponent k. k can take a value between 0 and 1, and if k>0 there is always a slightly higher probability that the larger niche will be colonized. This model is touted as being more biologically realistic because one can imagine many cases where the niche with the larger proportion of resources is more likely to be invaded, because that niche has more resource space and thus more opportunity for acquisition. The random fraction model of niche apportionment is an extreme of the power fraction model where k=0, and the other extreme of the power fraction, when k=1, resembles the MacArthur Fraction model where the probability of colonization is directly proportional to niche size.
MacArthur fraction
This model requires that the initial niche is broken at random and the successive niches are chosen with a probability proportional to their size. In this model the largest niche always has a greater probability of being broken relative to the smaller niches in the resource pool. This model can lead to a more even distribution, where larger niches are more likely to be broken, facilitating co-existence between species in equivalently sized niches. The basis for the MacArthur Fraction model is the Broken Stick model, developed by MacArthur (1957). These models produce similar results, but one of the main conceptual differences is that niches are filled simultaneously in the Broken Stick model rather than sequentially as in the MacArthur Fraction. Tokeshi (1993) argues that sequentially invading a resource pool is more biologically realistic than simultaneously breaking the niche space. When the abundances of fish from all bio-regions in Texas were combined, the distribution resembled the broken stick model of niche apportionment, suggesting a relatively even distribution of freshwater fish species in Texas.
Dominance decay
This model can be thought of as the inverse to the Dominance pre-emption model. First, the initial resource pool is colonized randomly and the remaining, subsequent colonizers always colonize the largest niche, whether or not it is already colonized. This model generates the most even community relative to the niche apportionment models described above because the largest niche is always broken into two smaller fragments that are more likely to be equivalent to the size of the smaller niche that was not broken. Communities of this “level” of evenness seem to be rare in natural systems. However, one such community includes the relative abundance distribution of filter feeders in one site within the River Danube in Austria.
Composite
A composite model exists when a combination of niche apportionment models are acting in different portions of the resource pool. Fesl (2002). shows how a composite model might appear in a study of freshwater Diptera, in that different niche apportionment models fit different functional groups of the data. Another example by Higgins and Strauss (2008), modeling fish assemblages, found that fish communities from different habitats and with different species compositions conform to different niche apportionment models, thus the entire species assemblage was a combination of models in different regions of the species range.
Fitting mechanistic models of niche apportionment to empirical data
Mechanistic models of niche apportionment are intended to describe communities. Researchers have used these models in many ways to investigate the temporal and geographic trends in species abundance.
For many years the fit of niche apportionment models was assessed by eye, with graphs of the models compared with empirical data. More recently, statistical tests of the fit of niche apportionment models to empirical data have been developed. The latter method (Etienne and Olff 2005) uses a Bayesian simulation of the models to test their fit to empirical data. The former method, which is still commonly used, simulates the expected relative abundances, from a normal distribution, of each model given the same number of species as the empirical data. Each model is simulated multiple times, and the mean and standard deviation can be calculated to assign confidence intervals around each relative abundance distribution. The confidence around each rank can be tested against empirical data for each model to determine model fit. The confidence intervals are calculated as follows. For more information on the simulation of niche apportionment models, see the website that explains the program Power Niche.
r=confidence limit of simulated data
σ=standard deviation of simulated data
n=number of replicates of empirical sample
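The formula itself does not survive in this copy of the text. A standard large-sample confidence limit consistent with the variable definitions above would take the following form, where the 1.96 multiplier (for an approximate 95% interval) is an assumption of this sketch rather than a value quoted from the source:

```latex
r = \frac{1.96\,\sigma}{\sqrt{n}}
```

The confidence interval around the mean simulated abundance at each rank is then the mean plus or minus r, and the empirical abundance at that rank is judged against this interval.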
References
Biodiversity
Community ecology
Ecological niche
Landscape ecology | Niche apportionment models | Biology | 2,253 |
75,998,433 | https://en.wikipedia.org/wiki/2-Hydroxysaclofen | 2-Hydroxysaclofen is a GABAB receptor antagonist and an analogue of saclofen.
Pharmacodynamics
2-Hydroxysaclofen binds to the GABAB receptor; this binding is selective for the (S)-enantiomer, while the (R)-enantiomer does not bind to the GABAB protein.
2-Hydroxysaclofen has been reported to be more potent than saclofen.
See also
Saclofen - another GABAB receptor antagonist
Baclofen - an agonist of the receptor
References
GABA receptor antagonists
4-Chlorophenyl compounds
Sulfonic acids
Tertiary alcohols
Amines
Ethanolamines | 2-Hydroxysaclofen | Chemistry | 151 |
72,646,521 | https://en.wikipedia.org/wiki/Levitated%20optomechanics | Levitated optomechanics is a field of mesoscopic physics which deals with the mechanical motion of mesoscopic particles which are optically or electrically or magnetically levitated. Through the use of levitation, it is possible to decouple the particle's mechanical motion exceptionally well from the environment. This in turn enables the study of high-mass quantum physics, out-of-equilibrium- and nano-thermodynamics and provides the basis for precise sensing applications.
Motivation
In order to use mechanical oscillators in the regime of quantum physics or for sensing applications, low damping of the oscillator's motion and thus high quality factors are desirable. In nano and micromechanics, the Q-factor of a system is often limited by its suspension, which usually demands filigree structures. Nevertheless, the maximally achievable Q-factor usually correlates with the system's size, requiring large systems for achieving high Q-factors.
Particle levitation in external fields can alleviate this constraint. This is one of the reasons why the field of levitated optomechanics has become attractive for research on the foundations of physics and for high-precision applications.
Physical basics
The interaction between a dielectric particle with polarizability $\alpha$ and an electric field $\vec{E}$ is given by the gradient force $\vec{F}_{\mathrm{grad}} = \tfrac{\alpha}{2}\nabla\langle \vec{E}^2\rangle$. When a particle is trapped and optically levitated in the focus of a Gaussian laser beam, the force can be approximated to first order by $F_q \approx -k_q\, q$ with $q \in \{x, y, z\}$, i.e. a harmonic oscillator with frequency $\Omega_q = \sqrt{k_q/m}$, where $m$ is the particle's mass. Including passive damping, active external feedback and coupling results in the Langevin equations of motion:

$\ddot{q}(t) + \gamma\,\dot{q}(t) + \Omega_q^2\, q(t) = \frac{1}{m}\left(F_{\mathrm{fluct}}(t) + F_{\mathrm{fb}}(t) + F_{\mathrm{coupl}}(t)\right)$

Here $\gamma$ is the total damping rate, which usually has two dominant contributions: collisions with atoms or molecules of the background gas and photon shot noise, which becomes dominant below pressures on the order of 10⁻⁶ mbar.
The coupling term allows one to model any coupling to an external heat bath.
The external feedback is usually used to cool and control the particle motion.
The approximation of a classical harmonic oscillator holds true until one reaches the regime of quantum mechanics, where the quantum harmonic oscillator is the superior approximation and the quantization of the energy levels becomes apparent. The QHO has a ground state of lowest energy where both position and velocity have a minimal variance, determined by the Heisenberg uncertainty principle.
Such quantum states are interesting starting conditions for preparing non-Gaussian quantum states, quantum enhanced sensing, matter-wave interferometry or the realization of entanglement in many-particle systems.
Methods of cooling
Parametric feedback cooling and cold damping
The idea of feedback cooling is to apply a position and/or velocity dependent force on the particle in a way which produces a negative feedback loop.
One way to achieve that is by adding a feedback term which is proportional to the particle's position. Since that mechanism provides damping, which cools down the mechanical motion, without the introduction of fluctuations, it is referred to as "cold damping". The first experiment employing this type of cooling was done in 1977 by Arthur Ashkin, who received the 2018 Nobel Prize in Physics for his pioneering work on trapping with optical tweezers.
Instead of applying a linear feedback signal, one can also combine position and velocity via the product $q\,\dot{q}$ to get a signal with twice the frequency of the particle's oscillation. This way the stiffness of the trap increases when the particle moves out of the trap and decreases when the particle is moving back.
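To make the effect of such feedback concrete, the following sketch numerically integrates the one-dimensional Langevin equation from above with a simple velocity-proportional (cold damping) feedback term and compares the resulting mean-square displacement with and without feedback. All parameter values and the semi-implicit Euler integrator are illustrative choices, not taken from any particular experiment.

```python
import math
import random


def mean_square_displacement(gamma_fb: float, steps: int = 200_000, seed: int = 0) -> float:
    """Integrate q'' + (gamma + gamma_fb) q' + Omega^2 q = F_fluct / m
    with a semi-implicit Euler scheme and return the time-averaged q^2."""
    omega = 2 * math.pi * 1.0e5      # trap frequency (illustrative)
    gamma = 2 * math.pi * 10.0       # background-gas damping rate (illustrative)
    noise = 1e-2                     # stochastic force per unit mass (illustrative)
    dt = 1e-8
    rng = random.Random(seed)
    q, v = 1e-3, 0.0
    total = 0.0
    for _ in range(steps):
        force = noise * rng.gauss(0.0, 1.0) / math.sqrt(dt)   # white-noise kick
        v += (-omega**2 * q - (gamma + gamma_fb) * v + force) * dt
        q += v * dt
        total += q * q
    return total / steps


print("no feedback:  ", mean_square_displacement(0.0))
print("cold damping: ", mean_square_displacement(2 * math.pi * 1.0e3))
```

The extra damping rate gamma_fb shrinks the oscillator's steady-state variance without adding a corresponding thermal force, which is the classical picture behind feedback cooling.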
Cavity-enhanced Sisyphus cooling
Coherent scattering cavity cooling
References
Mesoscopic physics
Quantum mechanics | Levitated optomechanics | Physics,Materials_science | 744 |
33,888,515 | https://en.wikipedia.org/wiki/Tennis%20racket%20theorem | The tennis racket theorem or intermediate axis theorem, is a kinetic phenomenon of classical mechanics which describes the movement of a rigid body with three distinct principal moments of inertia. It has also been dubbed the Dzhanibekov effect, after Soviet cosmonaut Vladimir Dzhanibekov, who noticed one of the theorem's logical consequences whilst in space in 1985. The effect was known for at least 150 years prior, having been described by Louis Poinsot in 1834 and included in standard physics textbooks such as Classical Mechanics by Herbert Goldstein throughout the 20th century.
The theorem describes the following effect: rotation of an object around its first and third principal axes is stable, whereas rotation around its second principal axis (or intermediate axis) is not.
This can be demonstrated by the following experiment: Hold a tennis racket at its handle, with its face being horizontal, and throw it in the air such that it performs a full rotation around its horizontal axis perpendicular to the handle (ê2 in the diagram), and then catch the handle. In almost all cases, during that rotation the face will also have completed a half rotation, so that the other face is now up. By contrast, it is easy to throw the racket so that it will rotate around the handle axis (ê1) without accompanying half-rotation around another axis; it is also possible to make it rotate around the vertical axis perpendicular to the handle (ê3) without any accompanying half-rotation.
The experiment can be performed with any object that has three different moments of inertia, for instance with a (rectangular) book, remote control, or smartphone. The effect occurs whenever the axis of rotation differs – even only slightly – from the object's second principal axis; air resistance or gravity are not necessary.
Theory
The tennis racket theorem can be qualitatively analysed with the help of Euler's equations.
Under torque-free conditions, they take the following form:

$I_1\dot\omega_1 = (I_2 - I_3)\,\omega_2\omega_3 \qquad (1)$

$I_2\dot\omega_2 = (I_3 - I_1)\,\omega_3\omega_1 \qquad (2)$

$I_3\dot\omega_3 = (I_1 - I_2)\,\omega_1\omega_2 \qquad (3)$

Here $I_1, I_2, I_3$ denote the object's principal moments of inertia, and we assume $I_1 > I_2 > I_3$. The angular velocities around the object's three principal axes are $\omega_1, \omega_2, \omega_3$ and their time derivatives are denoted by $\dot\omega_1, \dot\omega_2, \dot\omega_3$.
Stable rotation around the first and third principal axis
Consider the situation when the object is rotating around the axis with moment of inertia $I_1$. To determine the nature of equilibrium, assume small initial angular velocities along the other two axes. As a result, according to equation (1), $\dot\omega_1$ is very small. Therefore, the time dependence of $\omega_1$ may be neglected.

Now, differentiating equation (2) and substituting $\dot\omega_3$ from equation (3),

$I_2\ddot\omega_2 = (I_3 - I_1)\,\omega_1\dot\omega_3 = \frac{(I_3 - I_1)(I_1 - I_2)}{I_3}\,\omega_1^2\,\omega_2,$

which has the form $\ddot\omega_2 = -(\text{positive constant})\cdot\omega_2$, because $I_1 > I_2$ and $I_1 > I_3$.

Note that $\omega_2$ is being opposed and so rotation around this axis is stable for the object.

Similar reasoning gives that rotation around the axis with moment of inertia $I_3$ is also stable.
Unstable rotation around the second principal axis
Now apply the same analysis to the axis with moment of inertia $I_2$. This time $\dot\omega_2$ is very small. Therefore, the time dependence of $\omega_2$ may be neglected.

Now, differentiating equation (1) and substituting $\dot\omega_3$ from equation (3),

$I_1\ddot\omega_1 = (I_2 - I_3)\,\omega_2\dot\omega_3 = \frac{(I_2 - I_3)(I_1 - I_2)}{I_3}\,\omega_2^2\,\omega_1.$

Note that $\omega_1$ is not opposed (and therefore will grow) and so rotation around the second axis is unstable. Therefore, even a small disturbance, in the form of a very small initial value of $\omega_1$ or $\omega_3$, causes the object to 'flip'.
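A quick numerical integration of equations (1)–(3) makes this instability visible: a spin started almost exactly about the intermediate axis periodically flips sign, while the same small perturbation about the first or third axis stays bounded. The moments of inertia, step size, and initial conditions below are arbitrary illustrative values.

```python
import numpy as np

I = np.array([3.0, 2.0, 1.0])   # principal moments of inertia, I1 > I2 > I3 (illustrative)


def euler_rhs(w):
    """Right-hand side of the torque-free Euler equations (1)-(3)."""
    w1, w2, w3 = w
    return np.array([(I[1] - I[2]) * w2 * w3 / I[0],
                     (I[2] - I[0]) * w3 * w1 / I[1],
                     (I[0] - I[1]) * w1 * w2 / I[2]])


def integrate(w0, dt=1e-3, steps=60_000):
    """Fixed-step fourth-order Runge-Kutta integration; returns the trajectory."""
    w = np.array(w0, dtype=float)
    out = [w.copy()]
    for _ in range(steps):
        k1 = euler_rhs(w)
        k2 = euler_rhs(w + 0.5 * dt * k1)
        k3 = euler_rhs(w + 0.5 * dt * k2)
        k4 = euler_rhs(w + dt * k3)
        w = w + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        out.append(w.copy())
    return np.array(out)


# Spin about the intermediate axis with a tiny perturbation: omega_2 repeatedly flips sign.
traj = integrate([1e-4, 1.0, 1e-4])
print("min and max of omega_2:", traj[:, 1].min(), traj[:, 1].max())
```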
Matrix analysis
If the object is mostly rotating along its third axis, so $|\omega_3| \gg |\omega_1|, |\omega_2|$, we can assume $\omega_3$ does not vary much, and write the equations of motion as a matrix equation:

$\frac{d}{dt}\begin{pmatrix}\omega_1 \\ \omega_2\end{pmatrix} = \begin{pmatrix}0 & \frac{I_2 - I_3}{I_1}\,\omega_3 \\ \frac{I_3 - I_1}{I_2}\,\omega_3 & 0\end{pmatrix}\begin{pmatrix}\omega_1 \\ \omega_2\end{pmatrix}$

which has zero trace and positive determinant, implying the motion of $(\omega_1, \omega_2)$ is a stable rotation around the origin—a neutral equilibrium point. Similarly, the point $(\omega_1, 0, 0)$ is a neutral equilibrium point, but $(0, \omega_2, 0)$ is a saddle point.
Geometric analysis
During motion, both the energy and the angular momentum-squared are conserved, thus we have two conserved quantities:

$2E = I_1\omega_1^2 + I_2\omega_2^2 + I_3\omega_3^2, \qquad L^2 = I_1^2\omega_1^2 + I_2^2\omega_2^2 + I_3^2\omega_3^2,$

and so for any initial condition $\vec\omega(0)$, the trajectory of $\vec\omega(t)$ must stay on the intersection curve between the two ellipsoids defined by these equations. This is shown on the animation to the left.

By inspecting Euler's equations, we see that $\dot{\vec\omega} = 0$ implies that two components of $\vec\omega$ are zero—that is, the object is exactly spinning around one of the principal axes. In all other situations, $\vec\omega$ must remain in motion.

By Euler's equations, if $\vec\omega(t)$ is a solution, then so is $c\,\vec\omega(ct)$ for any constant $c > 0$. In particular, the motion of the body in free space (obtained by integrating $\vec\omega(t)$) is exactly the same, just completed faster by a ratio of $c$.

Consequently, we can analyze the geometry of motion with a fixed value of $L^2$, and vary $\vec\omega(0)$ on the fixed ellipsoid of constant squared angular momentum. As $\vec\omega(0)$ varies, the value of $E$ also varies—thus giving us a varying ellipsoid of constant energy. This is shown in the animation as a fixed orange ellipsoid and increasing blue ellipsoid.

For concreteness, compare the semi-axes along the $\omega_1, \omega_2, \omega_3$ directions: the angular momentum ellipsoid's major axes are in ratios of $1/I_1 : 1/I_2 : 1/I_3$, and the energy ellipsoid's major axes are in ratios of $1/\sqrt{I_1} : 1/\sqrt{I_2} : 1/\sqrt{I_3}$. Thus the angular momentum ellipsoid is both flatter and sharper, as visible in the animation. In general, the angular momentum ellipsoid is always more "exaggerated" than the energy ellipsoid.
Now inscribe on a fixed ellipsoid of constant $L^2$ its intersection curves with the ellipsoid of constant $E$, as $E$ increases from zero to infinity. We can see that the curves evolve as follows:
For small energy, there is no intersection, since we need a minimum of energy to stay on the angular momentum ellipsoid.
The energy ellipsoid first intersects the momentum ellipsoid when $2E = L^2/I_1$, at the points $(\pm L/I_1, 0, 0)$. This is when the body rotates around its axis with the largest moment of inertia.
They intersect at two cycles around the points $(\pm L/I_1, 0, 0)$. Since each cycle contains no point at which $\dot{\vec\omega} = 0$, the motion of $\vec\omega$ must be a periodic motion around each cycle.
They intersect at two "diagonal" curves that intersect at the points $(0, \pm L/I_2, 0)$, when $2E = L^2/I_2$. If $\vec\omega$ starts anywhere on the diagonal curves, it would approach one of the points, distance exponentially decreasing, but never actually reach the point. In other words, we have 4 heteroclinic orbits between the two saddle points.
They intersect at two cycles around the points $(0, 0, \pm L/I_3)$. Since each cycle contains no point at which $\dot{\vec\omega} = 0$, the motion of $\vec\omega$ must be a periodic motion around each cycle.
The energy ellipsoid last intersects the momentum ellipsoid when $2E = L^2/I_3$, at the points $(0, 0, \pm L/I_3)$. This is when the body rotates around its axis with the smallest moment of inertia.
The tennis racket effect occurs when $\vec\omega(0)$ is very close to a saddle point. The body would linger near the saddle point, then rapidly move to the other saddle point, near $(0, -\omega_2, 0)$, linger again for a long time, and so on. The motion repeats with some period $T$.
The above analysis is all done in the perspective of an observer which is rotating with the body. An observer watching the body's motion in free space would see its angular momentum vector conserved, while both its angular velocity vector and its moment of inertia undergo complicated motions in space. At the beginning, the observer would see both mostly aligned with the second major axis of the body. After a while, the body performs a complicated motion and ends up with $\vec\omega$ near the other saddle point, and again both are mostly aligned with the second major axis of the body.
Consequently, there are two possibilities: either the rigid body's second major axis is in the same direction, or it has reversed direction. If it is still in the same direction, then the angular velocity and angular momentum viewed in the rigid body's reference frame are also mostly in the same direction as before. However, we have just seen that $\vec\omega$ near the start and end of this interval lies near opposite saddle points. Contradiction.
Qualitatively, then, this is what an observer watching in free space would observe:
The body rotates around its second major axis for a while.
The body rapidly undergoes a complicated motion, until its second major axis has reversed direction.
The body rotates around its second major axis again for a while. Repeat.
This can be easily seen in the video demonstration in microgravity.
With dissipation
When the body is not exactly rigid, but can flex and bend or contain liquid that sloshes around, it can dissipate energy through its internal degrees of freedom. In this case, the body still has constant angular momentum, but its energy would decrease, until it reaches the minimal point. As analyzed geometrically above, this happens when the body's angular velocity is exactly aligned with its axis of maximal moment of inertia.
This happened to Explorer 1, the first satellite launched by the United States in 1958. The elongated body of the spacecraft had been designed to spin about its long (least-inertia) axis but refused to do so, and instead started precessing due to energy dissipation from flexible structural elements.
In general, celestial bodies large or small would converge to a constant rotation around its axis of maximal moment of inertia. Whenever a celestial body is found in a complex rotational state, it is either due to a recent impact or tidal interaction, or is a fragment of a recently disrupted progenitor.
See also
References
External links
on Mir International Space Station
Louis Poinsot, Théorie nouvelle de la rotation des corps, Paris, Bachelier, 1834, 170 p. : historically, the first mathematical description of this effect.
- intuitive video explanation by Matt Parker
The "Dzhanibekov effect" - an exercise in mechanics or fiction? Explain mathematically a video from a space station,
The Bizarre Behavior of Rotating Bodies, Veritasium
Classical mechanics
Physics theorems
Juggling | Tennis racket theorem | Physics | 1,944 |
5,033,373 | https://en.wikipedia.org/wiki/Action%20selection | Action selection is a way of characterizing the most basic problem of intelligent systems: what to do next. In artificial intelligence and computational cognitive science, "the action selection problem" is typically associated with intelligent agents and animats—artificial systems that exhibit complex behavior in an agent environment. The term is also sometimes used in ethology or animal behavior.
One problem for understanding action selection is determining the level of abstraction used for specifying an "act". At the most basic level of abstraction, an atomic act could be anything from contracting a muscle cell to provoking a war. Typically for any one action-selection mechanism, the set of possible actions is predefined and fixed.
Most researchers working in this field place high demands on their agents:
The acting agent typically must select its action in dynamic and unpredictable environments.
The agents typically act in real time; therefore they must make decisions in a timely fashion.
The agents are normally created to perform several different tasks. These tasks may conflict for resource allocation (e.g. can the agent put out a fire and deliver a cup of coffee at the same time?)
The environment the agents operate in may include humans, who may make things more difficult for the agent (either intentionally or by attempting to assist.)
The agents themselves are often intended to model animals or humans, and animal/human behaviour is quite complicated.
For these reasons, action selection is not trivial and attracts a good deal of research.
Characteristics of the action selection problem
The main problem for action selection is complexity. Since all computation takes both time and space (in memory), agents cannot possibly consider every option available to them at every instant in time. Consequently, they must be biased, and constrain their search in some way. For AI, the question of action selection is what is the best way to constrain this search? For biology and ethology, the question is how do various types of animals constrain their search? Do all animals use the same approaches? Why do they use the ones they do?
One fundamental question about action selection is whether it is really a problem at all for an agent, or whether it is just a description of an emergent property of an intelligent agent's behavior. However, if we consider how we are going to build an intelligent agent, then it becomes apparent there must be some mechanism for action selection. This mechanism may be highly distributed (as in the case of distributed organisms such as social insect colonies or slime mold) or it may be a special-purpose module.
The action selection mechanism (ASM) determines not only the agent's actions in terms of impact on the world, but also directs its perceptual attention, and updates its memory. These egocentric sorts of actions may in turn result in modifying the agent's basic behavioral capacities, particularly in that updating memory implies some form of machine learning is possible. Ideally, action selection itself should also be able to learn and adapt, but there are many problems of combinatorial complexity and computational tractability that may require restricting the search space for learning.
In AI, an ASM is also sometimes either referred to as an agent architecture or thought of as a substantial part of one.
AI mechanisms
Generally, artificial action selection mechanisms can be divided into several categories: symbol-based systems sometimes known as classical planning, distributed solutions, and reactive or dynamic planning. Some approaches do not fall neatly into any one of these categories. Others are really more about providing scientific models than practical AI control; these last are described further in the next section.
Symbolic approaches
Early in the history of artificial intelligence, it was assumed that the best way for an agent to choose what to do next would be to compute a probably optimal plan, and then execute that plan. This led to the physical symbol system hypothesis, that a physical agent that can manipulate symbols is necessary and sufficient for intelligence. Many software agents still use this approach for action selection. It normally requires describing all sensor readings, the world, all of one's actions and all of one's goals in some form of predicate logic. Critics of this approach complain that it is too slow for real-time planning and that, despite the proofs, it is still unlikely to produce optimal plans because reducing descriptions of reality to logic is a process prone to errors.
Satisficing is a decision-making strategy that attempts to meet criteria for adequacy, rather than identify an optimal solution. A satisficing strategy may often, in fact, be (near) optimal if the costs of the decision-making process itself, such as the cost of obtaining complete information, are considered in the outcome calculus.
Goal driven architectures – In these symbolic architectures, the agent's behavior is typically described by a set of goals. Each goal can be achieved by a process or an activity, which is described by a prescripted plan. The agent must just decide which process to carry on to accomplish a given goal. The plan can expand to subgoals, which makes the process slightly recursive. Technically, more or less, the plans exploit condition-rules. These architectures are reactive or hybrid. Classical examples of goal-driven architectures are implementable refinements of belief-desire-intention architecture like JAM or IVE.
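As a rough illustration of the satisficing strategy described above, the following Python sketch accepts the first option whose utility reaches an aspiration level instead of searching for the optimum. The options, scores and threshold are invented for the example.

```python
from typing import Callable, Iterable, Optional, TypeVar

T = TypeVar("T")

def satisfice(options: Iterable[T], utility: Callable[[T], float],
              aspiration: float) -> Optional[T]:
    """Return the first option whose utility meets the aspiration level,
    instead of scanning every option for the true optimum."""
    for option in options:
        if utility(option) >= aspiration:
            return option
    return None  # nothing was good enough

# Hypothetical lunch choice: stop at the first place rated at least 3.5.
ratings = {"food cart": 3.1, "diner": 3.7, "bistro": 4.8}
print(satisfice(ratings, ratings.get, aspiration=3.5))  # 'diner', even though 'bistro' scores higher
```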
Distributed approaches
In contrast to the symbolic approach, distributed systems of action selection actually have no one "box" in the agent that decides the next action. At least in their idealized form, distributed systems have many modules running in parallel and determining the best action based on local expertise. In these idealized systems, overall coherence is expected to emerge somehow, possibly through careful design of the interacting components. This approach is often inspired by artificial neural networks research. In practice, there is almost always some centralized system determining which module is "the most active" or has the most salience. There is evidence real biological brains also have such executive decision systems which evaluate which of the competing systems deserves the most attention, or more properly, has its desired actions disinhibited.
is an attention-based architecture developed by Mary-Anne Williams, Benjamin Johnston and their PhD student Rony Novianto. It orchestrates a diversity of modular distributed processes that can use their own representations and techniques to perceive the environment, process information, plan actions and propose actions to perform.
Various types of winner-take-all architectures, in which the single selected action takes full control of the motor system
Spreading activation including Maes Nets (ANA)
Extended Rosenblatt & Payton is a spreading activation architecture developed by Toby Tyrrell in 1993. The agent's behavior is stored in the form of a hierarchical connectionism network, which Tyrrell named free-flow hierarchy. Recently exploited for example by de Sevin & Thalmann (2005) or Kadleček (2001).
Behavior based AI, was a response to the slow speed of robots using symbolic action selection techniques. In this form, separate modules respond to different stimuli and generate their own responses. In the original form, the subsumption architecture, these consisted of different layers that could monitor and suppress each other's inputs and outputs.
Creatures are virtual pets from a computer game driven by three-layered neural network, which is adaptive. Their mechanism is reactive since the network at every time step determines the task that has to be performed by the pet. The network is described well in the paper of Grand et al. (1997) and in The Creatures Developer Resources. See also the Creatures Wiki.
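The winner-take-all arbitration mentioned above can be sketched in a few lines of Python. The behaviour modules, salience functions and sensor fields below are hypothetical and serve only to show how the single most salient module gains control of the motor system.

```python
from typing import Callable, Dict

# Hypothetical behaviour modules: each maps the current sensor state to a
# salience (urgency) score.  Names and formulas are invented for illustration.
def flee_salience(state: Dict) -> float:
    return 10.0 if state.get("predator_near") else 0.0

def feed_salience(state: Dict) -> float:
    return float(state.get("hunger", 0.0))      # assume a 0..10 hunger signal

def explore_salience(state: Dict) -> float:
    return 1.0                                   # weak default drive

BEHAVIOURS: Dict[str, Callable[[Dict], float]] = {
    "flee": flee_salience,
    "feed": feed_salience,
    "explore": explore_salience,
}

def select_action(state: Dict) -> str:
    """Winner-take-all arbitration: the single most salient module
    gains full control of the (simulated) motor system."""
    return max(BEHAVIOURS, key=lambda name: BEHAVIOURS[name](state))

print(select_action({"hunger": 6.0}))                          # -> feed
print(select_action({"hunger": 6.0, "predator_near": True}))   # -> flee
```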
Dynamic planning approaches
Because purely distributed systems are difficult to construct, many researchers have turned to using explicit hard-coded plans to determine the priorities of their system.
Dynamic or reactive planning methods compute just one next action in every instant based on the current context and pre-scripted plans. In contrast to classical planning methods, reactive or dynamic approaches do not suffer combinatorial explosion. On the other hand, they are sometimes seen as too rigid to be considered strong AI, since the plans are coded in advance. At the same time, natural intelligence can be rigid in some contexts although it is fluid and able to adapt in others.
Example dynamic planning mechanisms include:
Finite-state machines These are reactive architectures used mostly for computer game agents, in particular for first-person shooter bots, or for virtual movie actors. Typically, the state machines are hierarchical. For concrete game examples, see the Halo 2 bots paper by Damian Isla (2005) or the Master's Thesis about Quake III bots by Jan Paul van Waveren (2001). For a movie example, see Softimage. (A minimal code sketch appears below.)
Other structured reactive plans tend to look a little more like conventional plans, often with ways to represent hierarchical and sequential structure. Some, such as PRS's 'acts', have support for partial plans. Many agent architectures from the mid-1990s included such plans as a "middle layer" that provided organization for low-level behavior modules while being directed by a higher level real-time planner. Despite this supposed interoperability with automated planners, most structured reactive plans are hand coded (Bryson 2001, ch. 3). Examples of structured reactive plans include James Firby's RAP System and the Nils Nilsson's Teleo-reactive plans. PRS, RAPs & TRP are no longer developed or supported. One still-active (as of 2006) descendant of this approach is the Parallel-rooted Ordered Slip-stack Hierarchical (or POSH) action selection system, which is a part of Joanna Bryson's Behaviour Oriented Design.
Sometimes to attempt to address the perceived inflexibility of dynamic planning, hybrid techniques are used. In these, a more conventional AI planning system searches for new plans when the agent has spare time, and updates the dynamic plan library when it finds good solutions. The important aspect of any such system is that when the agent needs to select an action, some solution exists that can be used immediately (see further anytime algorithm).
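For illustration, a minimal (non-hierarchical) finite-state machine of the kind used for game bots might look as follows; the states, events and transition rules are invented and are not taken from any of the systems cited above.

```python
# A toy game-bot state machine: a table of (state, event) -> next state rules.
TRANSITIONS = {
    ("patrol", "enemy_spotted"): "attack",
    ("attack", "low_health"):    "retreat",
    ("attack", "enemy_lost"):    "patrol",
    ("retreat", "healed"):       "patrol",
}

def step(state: str, event: str) -> str:
    """Return the next state; stay in the current state if no rule matches."""
    return TRANSITIONS.get((state, event), state)

state = "patrol"
for event in ["enemy_spotted", "low_health", "healed"]:
    state = step(state, event)
    print(event, "->", state)   # attack, then retreat, then patrol
```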
Others
CogniTAO is a decision-making engine based on BDI (belief-desire-intention); it includes built-in teamwork capabilities.
Soar is a symbolic cognitive architecture. It is based on condition-action rules known as productions. Programmers can use the Soar development toolkit for building both reactive and planning agents or any compromise between these two extremes.
Excalibur was a research project led by Alexander Nareyek featuring any-time planning agents for computer games. The architecture is based on structural constraint satisfaction, which is an advanced artificial intelligence technique.
ACT-R is similar to Soar. It includes a Bayesian learning system to help prioritize the productions.
ABL/Hap
Fuzzy architectures The fuzzy approach in action selection produces more smooth behavior than can be produced by architectures exploiting Boolean condition-action rules (like Soar or POSH). These architectures are mostly reactive and symbolic.
Theories of action selection in nature
Many dynamic models of artificial action selection were originally inspired by research in ethology. In particular, Konrad Lorenz and Nikolaas Tinbergen provided the idea of an innate releasing mechanism to explain instinctive behaviors (fixed action patterns). Influenced by the ideas of William McDougall, Lorenz developed this into a "psychohydraulic" model of the motivation of behavior. In ethology, these ideas were influential in the 1960s, but they are now regarded as outdated because of their use of an energy flow metaphor; the nervous system and the control of behavior are now normally treated as involving information transmission rather than energy flow. Dynamic plans and neural networks are more similar to information transmission while spreading activation is more similar to the diffuse control of emotional or hormonal systems.
Stan Franklin has proposed that action selection is the right perspective to take in understanding the role and evolution of mind. See his page on the action selection paradigm.
AI models of neural action selection
Some researchers create elaborate models of neural action selection. See for example:
The Computational Cognitive Neuroscience Lab (CU Boulder).
The Adaptive Behaviour Research Group (Sheffield).
Catecholaminergic Neuron Electron Transport (CNET)
The locus coeruleus (LC) is one of the primary sources of noradrenaline in the brain and has been associated with selection of cognitive processing, such as attention and behavioral tasks. The substantia nigra pars compacta (SNc) is one of the primary sources of dopamine in the brain, and has been associated with action selection, primarily as part of the basal ganglia. CNET is a hypothesized neural signaling mechanism in the SNc and LC (which are catecholaminergic neurons), that could assist with action selection by routing energy between neurons in each group as part of action selection, to help one or more neurons in each group to reach action potential. It was first proposed in 2018, and is based on a number of physical parameters of those neurons, which can be broken down into three major components:
1) Ferritin and neuromelanin are present in high concentrations in those neurons, but it was unknown in 2018 whether they formed structures that would be capable of transmitting electrons over relatively long distances on the scale of microns between the largest of those neurons, which had not been previously proposed or observed. Those structures would also need to provide a routing or switching function, which had also not previously been proposed or observed. Evidence of the presence of ferritin and neuromelanin structures in those neurons and their ability to both conduct electrons by sequential tunneling and to route/switch the path of the neurons was subsequently obtained.
2) The axons of large SNc neurons were known to have extensive arbors, but it was unknown whether post-synaptic activity at the synapses of those axons would raise the membrane potential of those neurons sufficiently to cause the electrons to be routed to the neuron or neurons with the most post-synaptic activity for the purpose of action selection. At the time, the prevailing explanation of the purpose of those neurons was that they did not mediate action selection and were only modulatory and non-specific. Prof. Pascal Kaeser of Harvard Medical School subsequently obtained evidence that large SNc neurons can be temporally and spatially specific and mediate action selection. Other evidence indicates that the large LC axons have similar behavior.
3) Several sources of electrons or excitons to provide the energy for the mechanism were hypothesized in 2018 but had not been observed at that time. Dioxetane cleavage (which can occur during somatic dopamine metabolism by quinone degradation of melanin) was contemporaneously proposed to generate high energy triplet state electrons by Prof. Doug Brash at Yale, which could provide a source for electrons for the CNET mechanism.
While evidence of a number of physical predictions of the CNET hypothesis has thus been obtained, evidence of whether the hypothesis itself is correct has not been sought. One way to try to determine whether the CNET mechanism is present in these neurons would be to use quantum dot fluorophores and optical probes to determine whether electron tunneling associated with ferritin in the neurons is occurring in association with specific actions.
See also
References
Further reading
Bratman, M.: Intention, plans, and practical reason. Cambridge, Mass: Harvard University Press (1987)
Brom, C., Lukavský, J., Šerý, O., Poch, T., Šafrata, P.: Affordances and level-of-detail AI for virtual humans. In: Proceedings of Game Set and Match 2, Delft (2006)
Bryson, J.: Intelligence by Design: Principles of Modularity and Coordination for Engineering Complex Adaptive Agents. PhD thesis, Massachusetts Institute of Technology (2001)
Champandard, A. J.: AI Game Development: Synthetic Creatures with learning and Reactive Behaviors. New Riders, USA (2003)
Grand, S., Cliff, D., Malhotra, A.: Creatures: Artificial life autonomous software-agents for home entertainment. In: Johnson, W. L. (eds.): Proceedings of the First International Conference on Autonomous Agents. ACM press (1997) 22-29
Huber, M. J.: JAM: A BDI-theoretic mobile agent architecture. In: Proceedings of the Third International Conference on Autonomous Agents (Agents'99). Seattle (1999) 236-243
Isla, D.: Handling complexity in Halo 2. In: Gamasutra online, 03/11 (2005)
Maes, P.: The agent network architecture (ANA). In: SIGART Bulletin, 2 (4), pages 115–120 (1991)
Nareyek, A. Excalibur project
Reynolds, C. W. Flocks, Herds, and Schools: A Distributed Behavioral Model. In: Computer Graphics, 21(4) (SIGGRAPH '87 Conference Proceedings) (1987) 25–34.
de Sevin, E. Thalmann, D.:A motivational Model of Action Selection for Virtual Humans. In: Computer Graphics International (CGI), IEEE Computer SocietyPress, New York (2005)
Tyrrell, T.: Computational Mechanisms for Action Selection. Ph.D. Dissertation. Centre for Cognitive Science, University of Edinburgh (1993)
van Waveren, J. M. P.: The Quake III Arena Bot. Master thesis. Faculty ITS, University of Technology Delft (2001)
Wooldridge, M. An Introduction to MultiAgent Systems. John Wiley & Sons (2002)
External links
The University of Memphis: Agents by action selection
Michael Wooldridge: Introduction to agents and their action selection mechanisms
Cyril Brom: Slides on a course on action selection of artificial beings
Soar project. University of Michigan.
Modelling natural action selection, a special issue published by The Royal Society - Philosophical Transactions of the Royal Society
Problems in artificial intelligence
Functional analysis
Motor control
Motor cognition
Management cybernetics | Action selection | Mathematics,Biology | 3,670 |
14,074,754 | https://en.wikipedia.org/wiki/International%20Manufacturing%20Technology%20Show | The International Manufacturing Technology Show (IMTS), first held in Cleveland, Ohio in 1927
, is a trade show that features industrial machinery and technology. It is the largest manufacturing technology trade show in North America, and in 1990 was renamed from the original "International Machine Tool Show" to reflect the growing scope of the show to additional technologies such as welding, lubrication, and materials engineering. The show is managed by the Association for Manufacturing Technology.
An agreement between the AMT and the CECIMO (European Machine Tool Industry Association), which organizes the European-based EMO trade show for the metal working industry, coordinates the IMTS and the EMO such that every even-numbered year the IMTS is held in Chicago, and every odd-numbered year the EMO is held in Europe.
The next show is scheduled for September 9-14, 2024, at McCormick Place, Chicago, Illinois. https://www.imts.com/show/abouttheshow.cfm
Scale & Exhibitors
The six-day show is held in even-numbered years at Chicago's McCormick Place and draws attendees and exhibitors from the U.S. and some 119 other countries. IMTS was cancelled in 2020 due to the COVID-19 pandemic, but reconvened for 2022. For 2018, there were 129,415 registrants and 2,563 exhibitors across four buildings and of exhibit space.
Since 2004, IMTS has sponsored the Emerging Technology Center, where the latest academic and industrial advancements are showcased. IMTS 2012, for example, featured a Local Motors Rally Fighter car built live on the show floor, MTConnect, the open-source communication and interconnectivity standard, and MTInsight, a customized manufacturing business intelligence system.
References
External links
Official Youtube Channel
Industrial equipment
Trade shows in the United States
Events at McCormick Place | International Manufacturing Technology Show | Engineering | 390 |
70,533,744 | https://en.wikipedia.org/wiki/Cheluviation | Cheluviation is the process in which the metal ions in the upper layer of the soil are combined with organic ligands to form coordination complexes or chelates, moving downwards through eluviation and then depositing.
Metal ions that can participate in chelation include Fe, Al, Mn, Ca, Mg and trace elements in soil, while the organic ligands that combine with these metal ions come mainly from soil organic matter. Soil organic matter includes relatively stable complex organic compounds (such as lignin, proteins and humus), as well as simple organic acids and intermediate products of microbial decomposition of organic matter. These organic compounds all contain reactive groups to varying degrees. Chain-like organic ligands combine with metal ions to form complexes; when the resulting complex binds the metal ion through multiple coordinating atoms in a ring structure, it is called a chelate. The stability of a chelate is related to the number of atoms in the chelate ring, the stability constant of the chelation reaction, and the concentrations of the organic chelating agents and metal ions. The chelates produced by fulvic acid and metal ions in soil humus are strongly prone to leaching and subsequent deposition, and are therefore an important manifestation of soil cheluviation, which generally results in the formation of a gray-white leached (eluvial) layer and a dark brown/red deposited layer.
Dissolution and chelation of metal elements
Organic acids have the ability to dissolve soil minerals, and can destroy silicate minerals and iron and aluminum oxides, so that metal ions are precipitated and complexed with organic complexing agents through ion exchange, surface absorption, and chelation-reaction mechanisms. For example, at low pH, a large number of metal ions are complexed with organic acids. When the organic acid occupies the coordination position of the metal ion, it can prevent the precipitation and crystallization of the metal oxide and increase its solubility. Conversely, at high pH (e.g. 7–8), dissolved metal ions, such as Fe(III), will precipitate out of the solution as insoluble complexes.
Eluviation of chelate compounds
The eluviation of chelate compounds is the downward movement of soil chelates. The eluviation of chelate compounds can be affected by:
Acidity. Organic acids produced under acidic conditions can increase the solubility of metal elements such as iron and aluminum, thereby enhancing soil eluviation. Iron and aluminum are easily leached at low pH. As the pH increases, ferric hydroxide and aluminum hydroxide compounds precipitate.
Redox conditions. Under reducing conditions, more organic acids are produced and metal ions are reduced to soluble metal complexes that migrate into the soil. Under oxidative conditions, metal ions are easily precipitated, and the chelate is easily polymerized, thereby separating the chelate from the metal ions.
Soil texture. Clay has a certain adsorption capacity for chelates, which weakens the leaching of complexes. On the other hand, soils with a coarse texture and water-saturated soils will likely enhance the leaching effect of chelates.
References
Soil chemistry
Coordination chemistry | Cheluviation | Chemistry | 653 |
34,007,557 | https://en.wikipedia.org/wiki/The%20Lodge%20%28audio%20mastering%29 | The Lodge is an audio mastering facility located in Manhattan, New York City. It was founded by Emily Lazar in 1997. Over the years The Lodge has mastered recordings for many well known musicians, including David Bowie, The Subways, Foo Fighters, Lou Reed, Paul McCartney, Sinéad O'Connor, Natalie Merchant, Marianne Faithfull, and Madonna. The engineers have also mastered sound tracks for movies such as American Psycho and Thievery Corporation.
References
Audio engineering
Recording studios in Manhattan
1997 establishments in New York City
Recording studios owned by women | The Lodge (audio mastering) | Engineering | 111 |
228,540 | https://en.wikipedia.org/wiki/360-degree%20feedback | 360-degree feedback (also known as multi-rater feedback, multi-source feedback, or multi-source assessment) is a process through which feedback from an employee's colleagues and associates is gathered, in addition to a self-evaluation by the employee.
360-degree feedback can include input from external sources who interact with the employee (such as customers and suppliers), subordinates, peers, and supervisors. It differs from traditional performance appraisal, which typically uses downward feedback delivered to employees by supervisors, and upward feedback delivered to managers by subordinates.
Organizations most commonly use 360-degree feedback for developmental purposes. Nonetheless, organizations are increasingly using 360-degree feedback in performance evaluations and administrative decisions, such as in payroll and promotion. When 360-degree feedback is used for performance evaluation purposes, it is sometimes called a 360-degree review. The use of 360-degree feedback in evaluation is controversial, due to concerns about the subjectivity and fairness of feedback providers.
History
The origins of 360-degree feedback date back to around 1930, with the German Reichswehr, when the military psychologist Johann Baptist Rieffert developed a selection methodology for officer candidates. One of the earliest recorded uses of surveys to gather information about employees occurred in the 1950s at the Esso Research and Engineering Company. From there, the idea of 360-degree feedback gained momentum.
Online evaluation tools led to increased popularity of multi-rater feedback assessments, due to the ease of use compared to physical pen-and-paper tools. The outsourcing of human resources functions has also created a market for 360-degree feedback products from consultants. Today, studies suggest that over one-third of U.S. companies use some type of multi-source feedback, including 90% of all Fortune 500 firms. In recent years, multi-source feedback has become a best practice in human resources due to online tools such as multiple language options, comparative reporting, and aggregate reporting.
Guidelines
Certain guidelines emphasise establishing trust between raters and ratees to improve rater accountability and feedback accuracy. At the same time, anonymous participation has also been found to result in more accurate feedback, in which case confidentiality among human resources staff and managers should be preserved. The standardisation and optimisation of rating scales and data collection also affect assessment accuracy, as do circumstantial factors such as the time of day.
Issues
Using 360-degree feedback tools for appraisal purposes has been criticised over concerns of performance criteria validity, ability of peers to give accurate feedback, and manipulation of these systems by feedback providers. Employee manipulation of feedback ratings has been reported in some companies who have utilized 360-degree feedback for performance evaluation, including GE, IBM, and Amazon.
The amount and level of training in 360-degree feedback for both the rater and ratee can affect the level of accuracy of the feedback. If no guidance is given, individual bias may affect the rater's ratings and the ratee's interpretation of the feedback. However, even with training measures in place, unconscious bias may still occur due to factors such as the cultural influences or relationship quality between the rater and ratee. Additionally, if there are potential consequences from rater feedback, rater motivation may shift from providing accurate feedback to providing feedback based on self-motivated reasons such as promoting or harming a particular individual.
Some members of the U.S. military have criticized its use of 360-degree feedback programs in employment decisions because of problems with validity and reliability. Other branches of the U.S. government have questioned 360-degree feedback reviews as well. Still, these organizations continue to use and develop their assessments in developmental processes.
A study on the patterns of rater accuracy shows that the length of time that a rater has known the individual being evaluated generally correlates with positive review favorability and lower accuracy of a 360-degree review, apart from raters who have known the individual for less than a year.
It has been suggested that multi-rater assessments often generate conflicting opinions and that there may be no way to determine whose feedback is accurate. Studies have also indicated that self-ratings are generally significantly higher than the ratings given from others.
Results
Several studies indicate that the use of 360-degree feedback helps to improve employee performance because it helps the evaluated see different perspectives of their performance.
In a 5-year study, no improvement in overall rater scores was found from the 1st year to the 2nd, but scores rose with each passing year from 2nd to 4th. A 1996 study found that performance increased between the 1st and 2nd administrations, and sustained this improvement 2 years later. Additional studies show that 360-degree feedback may be predictive of future performance.
Some authors maintain, however, that there are too many confounding variables related to 360-degree evaluations to reliably generalize their effectiveness, arguing that process features are likely to have major effects on creating behavior change. A 1998 study has found that the category of rater affects the reliability of feedback, with direct reports generally the least reliable.
Multiple pieces of research have demonstrated that the scale of responses can have a major effect on the results, and that some response scales are better than others. The evaluated individual following up with raters to discuss their results, which cannot be done when feedback is anonymous, often has a profound impact on results. Other potentially powerful factors affecting behavior change include how raters are selected, manager approval, instrument quality, rater training and orientation, participant training, supervisor training, coaching, integration with HR systems, and accountability.
One group of studies proposed four paradoxes that explain why 360-degree evaluations do not elicit accurate data:
The Paradox of Roles, in which an evaluator is conflicted by being both peer and the judge
The Paradox of Group Performance, which admits that the vast majority of work done in a corporate setting is done in groups, not individually
The Measurement Paradox, which shows that qualitative, or in-person, techniques are much more effective than mere ratings in facilitating change
The Paradox of Rewards, which shows that individuals evaluating their peers care more about the rewards associated with finishing the task than the actual content of the evaluation itself.
Additional studies found no correlation between an employee's multi-rater assessment scores and performance appraisal scores provided by supervisors. They advise that although multi-rater feedback can be effectively used for appraisal, care needs to be taken in its implementation or results will be compromised. This research suggests that 360-degree feedback and performance appraisals get at different outcomes, leading some executives to argue that traditional performance appraisals and 360-degree feedback should be used together in evaluating overall performance.
References
Further reading
Job evaluation
Personal development
Industrial and organizational psychology
Workplace
Workplace programs | 360-degree feedback | Biology | 1,368 |
45,061,337 | https://en.wikipedia.org/wiki/Anupam%20%28supercomputer%29 | Anupam is a series of supercomputers designed and developed by Bhabha Atomic Research Centre (BARC) for their internal usages. It is mainly used for molecular dynamical simulations, reactor physics, theoretical physics, computational chemistry, computational fluid dynamics, and finite element analysis.
The latest in the series is Anupam-Aganya.
Introduction
Bhabha Atomic Research Centre (BARC) carries out inter-disciplinary and multi-disciplinary R&D activities covering a wide range of disciplines in physical sciences, chemical sciences, biological sciences and engineering. Expertise at BARC covers the entire spectrum of science and technology.
BARC started development of supercomputers under the ANUPAM project in 1991 and has, to date, developed more than 20 different computer systems. All ANUPAM systems have employed parallel processing as the underlying philosophy and MIMD (Multiple Instruction Multiple Data) as the core architecture. BARC, being a multidisciplinary research organization, has a large pool of scientists and engineers working in various aspects of nuclear science and technology, who are involved in a diverse range of computations.
To keep the gestation period short, the parallel computers were built with commercially available off-the-shelf components, with BARC's major contribution being in the areas of system integration, system engineering, system software development, application software development, fine tuning of the system and support to a diverse set of users.
The series started with a small four-processor system in 1991 with a sustained performance of 34 MFlops. Keeping in mind the ever increasing demands from the users, new systems have been built regularly with increasing computational power. The latest in the series of supercomputers is the 4608 core ANUPAM-Adhya system developed in 2010-11, with a sustained performance of 47 TeraFLOPS on the standard High Performance Linpack (HPL) benchmark. The system is in production mode and released to users.
In 2001, BARC achieved a new milestone in developing a supercomputer 20-25 times faster than the fastest computer built by other institutes in the country when it commissioned ANUPAM-PENTIUM.
Anupam Systems
See also
PARAM
SAGA-220, a 220 TeraFLOP supercomputer built by ISRO
EKA
Wipro Supernova
Supercomputing in India
References
Supercomputers
Information technology in India
Supercomputing in India | Anupam (supercomputer) | Technology | 496 |
6,804,782 | https://en.wikipedia.org/wiki/Preferential%20concentration | Preferential concentration is the tendency of dense particles in a turbulent fluid to cluster in regions of high strain (low vorticity) due to their inertia. The extent by which particles cluster is determined by the Stokes number, defined as , where and are the timescales for the particle and fluid respectively; note that and are the mass densities of the fluid and the particle, respectively, is the kinematic viscosity of the fluid, and is the kinetic energy dissipation rate of the turbulence. Maximum preferential concentration occurs at . Particles with follow fluid streamlines and particles with do not respond significantly to the fluid within the times the fluid motions are coherent.
Systems that can be strongly influenced by the dynamics of preferential concentration are aerosol production of fine powders, spray, emulsifier, and crystallization reactors, pneumatic devices, cloud droplet formation, aerosol transport in the upper atmosphere, and even planet formation from protoplanetary nebula.
References
External links
Turbulence | Preferential concentration | Chemistry | 205 |
23,613,099 | https://en.wikipedia.org/wiki/Energy%20homeostasis | In biology, energy homeostasis, or the homeostatic control of energy balance, is a biological process that involves the coordinated homeostatic regulation of food intake (energy inflow) and energy expenditure (energy outflow). The human brain, particularly the hypothalamus, plays a central role in regulating energy homeostasis and generating the sense of hunger by integrating a number of biochemical signals that transmit information about energy balance. Fifty percent of the energy from glucose metabolism is immediately converted to heat.
Energy homeostasis is an important aspect of bioenergetics.
Definition
In the US, biological energy is expressed using the energy unit Calorie with a capital C (i.e. a kilocalorie), which equals the energy needed to increase the temperature of 1 kilogram of water by 1 °C (about 4.18 kJ).
Energy balance, through biosynthetic reactions, can be measured with the following equation:
Energy intake (from food and fluids) = Energy expended (through work and heat generated) + Change in stored energy (body fat and glycogen storage)
The first law of thermodynamics states that energy can be neither created nor destroyed. But energy can be converted from one form of energy to another. So, when a calorie of food energy is consumed, one of three particular effects occur within the body: a portion of that calorie may be stored as body fat, triglycerides, or glycogen, transferred to cells and converted to chemical energy in the form of adenosine triphosphate (ATP – a coenzyme) or related compounds, or dissipated as heat.
Energy
Intake
Energy intake is measured by the amount of calories consumed from food and fluids. Energy intake is modulated by hunger, which is primarily regulated by the hypothalamus, and choice, which is determined by the sets of brain structures that are responsible for stimulus control (i.e., operant conditioning and classical conditioning) and cognitive control of eating behavior. Hunger is regulated in part by the action of certain peptide hormones and neuropeptides (e.g., insulin, leptin, ghrelin, and neuropeptide Y, among others) in the hypothalamus.
Expenditure
Energy expenditure is mainly a sum of internal heat produced and external work. The internal heat produced is, in turn, mainly a sum of basal metabolic rate (BMR) and the thermic effect of food. External work may be estimated by measuring the physical activity level (PAL).
Imbalance
The Set-Point Theory, first introduced in 1953, postulated that each body has a preprogrammed fixed weight, with regulatory mechanisms to compensate. This theory was quickly adopted and used to explain failures in developing effective and sustained weight loss procedures. A 2019 systematic review of multiple weight change interventions on humans, including dieting, exercise and overeating, found systematic "energetic errors", the non-compensated loss or gain of calories, for all these procedures. This shows that the body cannot precisely compensate for errors in energy/calorie intake, contrary to what the Set-Point Theory hypothesizes, potentially explaining both weight loss and weight gain such as obesity. Because this review covered only short-term studies, such a mechanism cannot be excluded in the long term, as evidence is currently lacking on this timeframe.
Positive balance
A positive balance is a result of energy intake being higher than what is consumed in external work and other bodily means of energy expenditure.
The main preventable causes are:
Overeating, resulting in increased energy intake
Sedentary lifestyle, resulting in decreased energy expenditure through external work
A positive balance results in energy being stored as fat and/or muscle, causing weight gain. In time, overweight and obesity may develop, with resultant complications.
Negative balance
A negative balance or caloric deficit is a result of energy intake being less than what is consumed in external work and other bodily means of energy expenditure.
The main cause is undereating due to a medical condition such as decreased appetite, anorexia nervosa, digestive disease, or due to some circumstance such as fasting or lack of access to food. Hyperthyroidism can also be a cause.
Requirement
Normal energy requirement, and therefore normal energy intake, depends mainly on age, sex and physical activity level (PAL). The Food and Agriculture Organization (FAO) of the United Nations has compiled a detailed report on human energy requirements. An older but commonly used and fairly accurate method is the Harris-Benedict equation.
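As an illustration, the sketch below evaluates the Harris-Benedict equation with (rounded) original coefficients and scales the result by an assumed physical activity level; it is a rough estimate rather than a clinical tool, and the example weight, height, age and PAL factor are arbitrary.

```python
def harris_benedict_bmr(sex: str, weight_kg: float, height_cm: float, age_yr: float) -> float:
    """Basal metabolic rate in kcal/day, using (rounded) original
    Harris-Benedict coefficients; later revisions use slightly different constants."""
    if sex == "male":
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

# A rough total daily requirement is the BMR scaled by a physical activity level (PAL).
bmr = harris_benedict_bmr("male", weight_kg=70, height_cm=175, age_yr=30)
print(round(bmr), "kcal/day at rest;", round(bmr * 1.55), "kcal/day at an assumed PAL of 1.55")
```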
Yet, there are currently ongoing studies to show if calorie restriction to below normal values have beneficial effects, and even though they are showing positive indications in nonhuman primates it is still not certain if calorie restriction has a positive effect on longevity for humans and other primates. Calorie restriction may be viewed as attaining energy balance at a lower intake and expenditure, and is, in this sense, not generally an energy imbalance, except for an initial imbalance where decreased expenditure hasn't yet matched the decreased intake.
Society and culture
There has been controversy over energy-balance messages that downplay energy intake being promoted by food industry groups.
See also
Dynamic energy budget
Earth's energy balance
References
External links
Diagram of regulation of fat stores and hunger
Daily energy requirement calculator
Nutrition
Metabolism
Biochemistry | Energy homeostasis | Chemistry,Biology | 1,115 |
27,926,338 | https://en.wikipedia.org/wiki/Frame%20rate%20control | Frame rate control (FRC) or temporal dithering is a method for achieving greater color depth particularly in liquid-crystal displays.
Older, cheaper, or faster LCDs, especially those using TN, often represent colors using only 6 bits per RGB color, or 18 bit in total, and are unable to display the 16.78 million color shades (24-bit truecolor) that contemporary signal sources like graphics cards, video game consoles, set-top boxes, and video cameras can output. Instead, they use a temporal dithering method that combines successive colors in the same pixel to simulate the desired shade. This is distinct from, though can be combined with, spatial dithering, which uses nearby pixels at the same time.
FRC cycles between different color shades within each new frame to simulate an intermediate shade. This can create a potentially noticeable 30 Hz (half frame rate) flicker. Temporal dithering tends to be most noticeable in darker tones, while spatial dithering appears to make the individual pixels of the LCD visible. TFT panels available in 2020 often use FRC to display 30-bit deep color or HDR10 with 24-bit color panels. Temporal dithering can also be implemented in software for displays that do not perform it themselves; for instance, GPU drivers from both AMD and Nvidia provide the option, enabled by default on some platforms.
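A toy simulation of the idea, with a simplified level mapping and invented numbers, shows how alternating between the two nearest 6-bit levels makes the time-averaged output approximate an 8-bit target shade:

```python
# Toy model: assume a 6-bit panel can only show levels that are multiples of 4
# on the 8-bit scale (a simplification of the real level mapping).  To fake the
# 8-bit shade `target`, alternate between the two nearest displayable levels so
# that the time average over many frames comes out right.
target = 130                     # desired 8-bit shade
lo = (target // 4) * 4           # nearest displayable level at or below
hi = min(lo + 4, 252)            # next displayable level above
frac_hi = (target - lo) / 4.0    # fraction of frames that should show `hi`

frames, acc, shown = 60, 0.0, []
for _ in range(frames):
    acc += frac_hi
    if acc >= 1.0:               # show the brighter level often enough on average
        shown.append(hi)
        acc -= 1.0
    else:
        shown.append(lo)

print(sum(shown) / frames)       # ~130.0: the eye averages the flicker to the target
```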
This method is similar in principle to field-sequential color system by CBS and other sequential methods, such as used for grays in DLP, and also colors in single-chip DLP.
In the demonstration video, green and cyan-green are mixed both statically (for reference) and by rapidly alternating. A display with a refresh rate of at least 60 Hz is recommended for this video. Pausing the video shows that the perceived color of the bottom-right square during playback is different from the color seen in any individual frame. In an LCD display that uses FRC, the colors alternated between would be more similar than those in the demonstration video, further reducing the flicker effect.
See also
Computer monitor
LCD television
GigaScreen / DithVIDE / BZither
References
Display technology | Frame rate control | Engineering | 446 |
20,333,238 | https://en.wikipedia.org/wiki/Beta%20Monocerotis | Beta Monocerotis (Beta Mon, β Monocerotis, β Mon) is a triple star system in the constellation of Monoceros. To the naked eye, it appears as a single star with an apparent visual magnitude of approximately 3.74, making it the brightest visible star in the constellation. A telescope shows a curved line of three pale blue stars (or pale yellow stars, depending on the scope's focus). William Herschel who discovered it in 1781 commented that it is "one of the most beautiful sights in the heavens". The star system consists of three Be stars, β Monocerotis A, β Monocerotis B, and β Monocerotis C. There is also an additional visual companion star that is probably not physically close to the other three stars.
System
The three stars of β Monocerotis lie approximately in a straight line. Component B is 7" from component A, and component C a further 3" away. The stars have a common proper motion across the sky and very similar radial velocities. They share a single Hipparcos satellite identifier and are assumed to be at the same distance, around 700 light years based on their parallax.
β Monocerotis is classified as a variable star, although it is unclear which of the three components causes the brightness changes. The magnitude range is given as 3.77 to 3.84 in the Hipparcos photometric band.
Beta Monocerotis A
Beta Monocerotis A (Beta Mon A, β Monocerotis A, β Mon A) is a Be shell star with a mass of approximately 7 solar masses and a luminosity of 3,200 times the Sun's.
Beta Monocerotis B
Beta Monocerotis B (Beta Mon B / β Monocerotis B / β Mon B) is a Be star with a mass of approximately 6.2 solar masses and a luminosity of 1,600 times the Sun's.
Beta Monocerotis C
Beta Monocerotis C (Beta Mon C / β Monocerotis C / β Mon C) is a Be star with a mass of approximately 6 solar masses and a luminosity of 1,300 times the Sun's. This star was observed to be double in speckle interferometric observations in 1988, but this has not been confirmed by later infrared observations.
Visual companion
The triple star system has a visual companion, CCDM J06288-0702D, which has an apparent visual magnitude of approximately 12 and is visible approximately 25 arcseconds away from β Monocerotis A. It is probably not physically close to the other three stars, merely appearing next to them in the sky.
Notes
References
Monoceros
Be stars
4
045725 6 7
Monocerotis, Beta
030867
Monocerotis, 11
BD-06 1574 5
Triple star systems
Shell stars
2356 7 8 | Beta Monocerotis | Astronomy | 613 |
5,270,907 | https://en.wikipedia.org/wiki/Amphibian%20Species%20of%20the%20World | Amphibian Species of the World 6.2: An Online Reference (ASW) is a herpetology database. It lists the names of frogs, salamanders and other amphibians, which scientists first described each species and what year, and the animal's known range.
The American Museum of Natural History hosts Amphibian Species of the World, which is updated by herpetologist Darrel Frost. As of 2024, it contained more than 8700 species.
History
The Association of Systematics Collections (ASC) started this project in 1978 because the Convention on Trade in Endangered Species of Fauna and Flora (CITES) needed a database for animals. (The ASC later changed its name to Natural Science Collections Alliance.) The ASC's Stephen R. Edwards wrote Mammal Species of the World first and started Amphibian Species of the World second. Edwards decided to write about living amphibians because Richard G. Zweifel had just composed a large list of amphibian names and because experts from the University of Kansas were available to assist him. Darrel Frost joined the project to help Edwards. Frost planned to write Turtle and Crocodilian Species of the World next, but he left to complete his Ph.D. instead.
The first version of the catalogue was published as a book in 1985, and well received by specialists in the field.
In 1989, the ASC gave the copyright for Amphibian Species of the World to the Herpetologists' League, and they added more amphibians to the database. The League and American Museum of Natural History put Darrel Frost in charge of the project. At the time, Frost was a curator at the American Museum of Natural History. Frost added more information for professional herpetologists to use and made many corrections. He added more species that had been discovered since 1985. The project's own page notes that there are ten times as many amphibian species known to science today than were known in the mid-1980s.
In July 1999, the catalogue was first published on the internet, in its 2.0 version. New versions were added in 2004, 2006 and 2007. The 6.0 version, published in 2014, allows for real-time modifications.
The 6.2 version was published in January 2023. As of August 2023, the website contains 8,674 species and over 17,848 references.
Critical response
According to Amphibians.org, "For three decades ASW has been the primary reference for amphibian taxonomy." In 2013, Frost won the Sabin Award for his work on Amphibian Species of the World.
Related pages
Mammal Species of the World
AmphibiaWeb
References
Other websites
Site hosted by American Museum of Natural History
Science websites
Biodiversity databases
Online taxonomy databases | Amphibian Species of the World | Biology,Environmental_science | 569 |
6,571,387 | https://en.wikipedia.org/wiki/Cylinder%20set%20measure | In mathematics, cylinder set measure (or promeasure, or premeasure, or quasi-measure, or CSM) is a kind of prototype for a measure on an infinite-dimensional vector space. An example is the Gaussian cylinder set measure on Hilbert space.
Cylinder set measures are in general not measures (and in particular need not be countably additive but only finitely additive), but can be used to define measures, such as the classical Wiener measure on the set of continuous paths starting at the origin in Euclidean space. This is done in the construction of the abstract Wiener space where one defines a cylinder set Gaussian measure on a separable Hilbert space and chooses a Banach space in such a way that the cylindrical measure becomes σ-additive on the cylindrical algebra.
The terminology is not always consistent in the literature. Some authors call cylinder set measures just cylinder measure or cylindrical measures (see e.g.), while some reserve this word only for σ-additive measures.
Definition
There are two equivalent ways to define a cylinder set measure.
One way is to define it directly as a set function on the cylindrical algebra such that certain restrictions to smaller σ-algebras are σ-additive measures. This can also be expressed in terms of finite-dimensional linear operators.
Denote by $\mathcal{Z}(E, F)$ the cylindrical algebra defined for two spaces $E$ and $F$ with dual pairing $\langle \cdot, \cdot \rangle : E \times F \to \mathbb{R}$, i.e. the set of all cylindrical sets
$Z(f_1, \dots, f_n; B) := \{ x \in E : (\langle x, f_1 \rangle, \dots, \langle x, f_n \rangle) \in B \}$
for some $n \in \mathbb{N}$, $f_1, \dots, f_n \in F$ and $B \in \mathcal{B}(\mathbb{R}^n)$. This is an algebra, which can also be written as the union of the smaller σ-algebras $\mathcal{Z}(E, M)$ over all finite subsets $M \subseteq F$.
Definition on the cylindrical algebra
Let $E$ be a topological vector space over $\mathbb{R}$, denote its algebraic dual by $E^*$ and let $F \subseteq E^*$ be a subspace. Then a set function
$\mu : \mathcal{Z}(E, F) \to [0, \infty]$
is a cylinder set measure (or cylindrical measure) if for any finite set $M = \{ f_1, \dots, f_n \} \subseteq F$ the restriction to
$\mathcal{Z}(E, M)$
is a σ-additive measure. Notice that $\mathcal{Z}(E, M)$ is a σ-algebra while $\mathcal{Z}(E, F)$ is not.
As usual, if $\mu(E) = 1$ we call it a cylindrical probability measure.
Operatic definition
Let $E$ be a real topological vector space. Let $\mathcal{A}(E)$ denote the collection of all surjective continuous linear maps $T : E \to F_T$ defined on $E$ whose image $F_T$ is some finite-dimensional real vector space:
$\mathcal{A}(E) := \{ T : E \to F_T \mid T \text{ surjective}, \ \dim_{\mathbb{R}} F_T < \infty \}.$
A cylinder set measure on $E$ is a collection of measures
$\{ \mu_T : T \in \mathcal{A}(E) \},$
where $\mu_T$ is a measure on $F_T$. These measures are required to satisfy the following consistency condition: if $\pi_{ST} : F_S \to F_T$ is a surjective projection, then the push forward of the measure is as follows:
$\mu_T = (\pi_{ST})_* (\mu_S).$
If $\mu_T(F_T) = 1$ for every $T \in \mathcal{A}(E)$, then it is a cylindrical probability measure. Some authors define cylindrical measures explicitly as probability measures, however they don't need to be.
Connection to the abstract Wiener spaces
Let $(i, H, B)$ be an abstract Wiener space in its classical definition by Leonard Gross: this is a separable Hilbert space $H$, a separable Banach space $B$ that is the completion of $H$ under a measurable norm (or Gross-measurable norm), and a continuous linear embedding $i : H \to B$ with dense range. Gross then showed that this construction allows one to extend a cylindrical Gaussian measure to a σ-additive measure on the Banach space. More precisely, let $B^*$ be the topological dual space of $B$; he showed that a cylindrical Gaussian measure on $H$, defined on the cylindrical algebra $\mathcal{Z}(H, H^*)$, will be σ-additive on the cylindrical algebra $\mathcal{Z}(B, B^*)$ of the Banach space. Hence the measure is also σ-additive on the cylindrical σ-algebra $\sigma(\mathcal{Z}(B, B^*))$, which follows from Carathéodory's extension theorem, and is therefore also a measure in the classical sense.
Remarks
The consistency condition
$\mu_T = (\pi_{ST})_* (\mu_S)$
is modelled on the way that true measures push forward (see the section cylinder set measures versus true measures). However, it is important to understand that in the case of cylinder set measures, this is a requirement that is part of the definition, not a result.
A cylinder set measure can be intuitively understood as defining a finitely additive function on the cylinder sets of the topological vector space $E$. The cylinder sets are the pre-images in $E$ of measurable sets in $F_T$: if $\mathcal{B}_T$ denotes the σ-algebra on $F_T$ on which $\mu_T$ is defined, then
$\mathrm{Cyl}(E) := \{ T^{-1}(B) : B \in \mathcal{B}_T, T \in \mathcal{A}(E) \}.$
In practice, one often takes $\mathcal{B}_T$ to be the Borel σ-algebra on $F_T$. In this case, one can show that when $E$ is a separable Banach space, the σ-algebra generated by the cylinder sets is precisely the Borel σ-algebra of $E$:
$\sigma(\mathrm{Cyl}(E)) = \mathcal{B}(E).$
Cylinder set measures versus true measures
A cylinder set measure on $E$ is not actually a true measure on $E$: it is a collection of measures defined on all finite-dimensional images of $E$. If $E$ has a probability measure $\mu$ already defined on it, then $\mu$ gives rise to a cylinder set measure on $E$ using the push forward: set $\mu_T := T_* (\mu)$ on $F_T$.
When there is a measure $\mu$ on $E$ such that the cylinder set measure arises from $\mu$ in this way, it is customary to abuse notation slightly and say that the cylinder set measure "is" the measure $\mu$.
Cylinder set measures on Hilbert spaces
When the Banach space $E$ is also a Hilbert space $H$, there is a canonical Gaussian cylinder set measure $\gamma^H$ arising from the inner product structure on $H$. Specifically, if $\langle \cdot, \cdot \rangle$ denotes the inner product on $H$, let $\langle \cdot, \cdot \rangle_T$ denote the quotient inner product on $F_T$. The measure $\gamma^H_T$ on $F_T$ is then defined to be the canonical Gaussian measure on $F_T$:
$\gamma^H_T := i_* \left( \gamma^{\dim F_T} \right),$
where $i : \mathbb{R}^{\dim F_T} \to F_T$ is an isometry of Hilbert spaces taking the Euclidean inner product on $\mathbb{R}^{\dim F_T}$ to the inner product $\langle \cdot, \cdot \rangle_T$ on $F_T$, and $\gamma^n$ is the standard Gaussian measure on $\mathbb{R}^n$.
The canonical Gaussian cylinder set measure $\gamma^H$ on an infinite-dimensional separable Hilbert space $H$ does not correspond to a true measure on $H$. The proof is quite simple: the ball of radius $r$ (and center 0) has measure at most equal to that of the ball of radius $r$ in an $n$-dimensional Hilbert space, and this tends to 0 as $n$ tends to infinity. So the ball of radius $r$ has measure 0; as the Hilbert space is a countable union of such balls it also has measure 0, which is a contradiction. (See infinite dimensional Lebesgue measure.)
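This first argument can be checked numerically: for the standard Gaussian measure on $\mathbb{R}^n$, the measure of a ball of fixed radius equals the chi-squared cumulative distribution function evaluated at the squared radius. The sketch below assumes SciPy is available and uses an arbitrary radius.

```python
from scipy.stats import chi2

# For a standard Gaussian vector in R^n, the squared norm is chi-square
# distributed with n degrees of freedom, so the Gaussian measure of the ball
# of radius r centred at 0 is chi2.cdf(r**2, df=n).
r = 3.0
for n in (1, 10, 50, 100, 500):
    print(n, chi2.cdf(r**2, df=n))
# The printed measures shrink towards 0 as n grows, illustrating why the
# canonical Gaussian cylinder set measure cannot be a true measure on H.
```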
An alternative proof that the Gaussian cylinder set measure is not a measure uses the Cameron–Martin theorem and a result on the quasi-invariance of measures. If $\gamma^H$ really were a measure, then the identity function on $H$ would radonify that measure, thus making the identity map $\mathrm{id} : H \to H$ into an abstract Wiener space. By the Cameron–Martin theorem, $\gamma^H$ would then be quasi-invariant under translation by any element of $H$, which implies that either $H$ is finite-dimensional or that $\gamma^H$ is the zero measure. In either case, we have a contradiction.
Sazonov's theorem gives conditions under which the push forward of a canonical Gaussian cylinder set measure can be turned into a true measure.
Nuclear spaces and cylinder set measures
A cylinder set measure on the dual of a nuclear Fréchet space automatically extends to a measure if its Fourier transform is continuous.
Example: Let $\mathcal{S}$ be the space of Schwartz functions on a finite dimensional vector space; it is nuclear. It is contained in the Hilbert space $L^2$ of square-integrable functions, which is in turn contained in the space of tempered distributions $\mathcal{S}'$, the dual of the nuclear Fréchet space $\mathcal{S}$:
$\mathcal{S} \subseteq L^2 \subseteq \mathcal{S}'.$
The Gaussian cylinder set measure on $L^2$ gives a cylinder set measure on the space of tempered distributions, which extends to a measure on the space of tempered distributions, $\mathcal{S}'$.
The Hilbert space $L^2$ has measure 0 in $\mathcal{S}'$, by the first argument used above to show that the canonical Gaussian cylinder set measure on $L^2$ does not extend to a measure on $L^2$.
See also
References
I.M. Gel'fand, N.Ya. Vilenkin, Generalized functions. Applications of harmonic analysis, Vol 4, Acad. Press (1968)
L. Schwartz, Radon measures.
Measures (measure theory)
Topological vector spaces | Cylinder set measure | Physics,Mathematics | 1,440 |
61,101,068 | https://en.wikipedia.org/wiki/Ulotaront | Ulotaront (; developmental codes SEP-363856, SEP-856) is an investigational antipsychotic that is undergoing clinical trials for the treatment of schizophrenia and Parkinson's disease psychosis. The medication was discovered in collaboration between PsychoGenics Inc. and Sunovion Pharmaceuticals (which was subsequently merged into Sumitomo Pharma) using PsychoGenics' behavior and AI-based phenotypic drug discovery platform, SmartCube.
Ulotaront is in phase III clinical trials for schizophrenia and phase II/III trials for generalised anxiety disorder and major depressive disorder; development for narcolepsy and psychotic disorders has been discontinued.
Research has shown that ulotaront results in a greater reduction from baseline in the PANSS total score than placebo. Treatment with ulotaront, as compared with placebo, was also associated with an improvement in sleep quality. Ulotaront was awarded a Breakthrough Therapy designation due to its increased efficacy and greatly reduced side effects compared to current treatments.
Adverse effects
The adverse effect profile of ulotaront differs from that of other antipsychotics because its mechanism of action does not involve antagonism of dopamine receptors in the brain, which is responsible for the drug-induced movement disorders (like akathisia) that may occur with those agents. Some adverse events reported in preliminary clinical trials are somnolence, agitation, nausea, diarrhea, and dyspepsia.
Pharmacology
Pharmacodynamics
The mechanism of action of ulotaront in the treatment of schizophrenia is unclear. However, it is thought to be an agonist at the trace amine-associated receptor 1 (TAAR1) and serotonin 5-HT1A receptors. This mechanism of action is unique among available antipsychotics, which generally antagonize dopamine receptors (especially dopamine D2 receptor).
Ulotaront is a full agonist of the human TAAR1, with an EC50 of 140 nM and an Emax of 101.3%. It is also a partial agonist of the serotonin 5-HT1A receptor (EC50 = 2,300 nM; Emax = 74.7%) and of the serotonin 5-HT1D receptor (EC50 = 262 nM; Emax = 57.1%). Conversely, its activities at various other targets, such as various other serotonin receptors as well as adrenergic and dopamine receptors, are much less potent.
Ulotaront decreases basal locomotor activity in rodents and this effect was absent in TAAR1 knockout mice. It prevented the hyperlocomotion induced by the NMDA receptor antagonist phencyclidine (PCP). Conversely, ulotaront did not affect dextroamphetamine-induced hyperlocomotion. Similarly, it did not reverse apomorphine-induced climbing behavior.
Pharmacokinetics
The precise pharmacokinetic profile of ulotaront has not been reported, though the developer has suggested that the pharmacokinetic data supports once daily dosing.
Research
As of 2018, Sunovion, the maker of another antipsychotic called lurasidone (Latuda), is conducting clinical trials on ulotaront in partnership with the preclinical research company PsychoGenics. The U.S. Food and Drug Administration has granted ulotaront the breakthrough therapy designation. In addition to schizophrenia, ulotaront is also being studied for the treatment of psychosis associated with Parkinson's disease.
The Brief Negative Symptom Scale (BNSS) has been used to assess the effect of ulotaront on the negative symptoms of schizophrenia.
In July 2023, the pharmaceutical company behind the drug announced that the drug had failed to outperform placebo in the treatment of acutely psychotic patients with schizophrenia, as measured by the PANSS.
See also
List of investigational antipsychotics § Monoamine receptor modulators
Ralmitaront
References
5-HT1A agonists
5-HT1D agonists
Amines
Antipsychotics
Experimental drugs developed for schizophrenia
TAAR1 agonists
Thiophenes | Ulotaront | Chemistry | 879 |
20,797,876 | https://en.wikipedia.org/wiki/Edge%20cycle%20cover | In graph theory, a branch of mathematics, an edge cycle cover (sometimes called simply cycle cover) of a graph G is a family of cycles which are subgraphs of G and contain all edges of G.
If the cycles of the cover have no vertices in common, the cover is called vertex-disjoint or sometimes simply disjoint cycle cover. In this case, the set of the cycles constitutes a spanning subgraph of G.
If the cycles of the cover have no edges in common, the cover is called edge-disjoint or simply disjoint cycle cover.
Properties and applications
Minimum-Weight Cycle Cover
For a weighted graph, the Minimum-Weight Cycle Cover Problem (MWCCP) is the problem of finding a cycle cover with minimal sum of the weights of the edges in all cycles of the cover.
For bridgeless planar graphs, the MWCCP can be solved in polynomial time.
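As an illustration of the objects involved (not an algorithm for solving the MWCCP, which is a harder optimization problem), the following sketch checks that a proposed family of cycles covers every edge of a weighted undirected graph and computes the total weight of the cover; the graph representation and function name are made up for this example.

```python
def cover_weight(edge_weights, cycles):
    """Verify that `cycles` is an edge cycle cover and return its total weight.

    edge_weights: dict mapping frozenset({u, v}) -> weight of edge {u, v}
    cycles: list of cycles, each a list of vertices [v0, v1, ..., vk]
            with the closing edge (vk, v0) implied
    """
    covered = set()
    total = 0.0
    for cycle in cycles:
        n = len(cycle)
        for i in range(n):
            edge = frozenset({cycle[i], cycle[(i + 1) % n]})
            if edge not in edge_weights:
                raise ValueError(f"cycle uses a non-edge: {set(edge)}")
            covered.add(edge)
            total += edge_weights[edge]  # each traversal contributes its weight
    if covered != set(edge_weights):
        missing = set(edge_weights) - covered
        raise ValueError(f"not a cycle cover, uncovered edges: {missing}")
    return total


# Example: a 4-cycle plus one chord, covered by two triangles sharing the chord.
weights = {
    frozenset({"a", "b"}): 1.0,
    frozenset({"b", "c"}): 2.0,
    frozenset({"c", "d"}): 1.5,
    frozenset({"d", "a"}): 1.0,
    frozenset({"a", "c"}): 0.5,  # chord, traversed by both cycles
}
print(cover_weight(weights, [["a", "b", "c"], ["a", "c", "d"]]))  # 6.5
```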
Cycle k-cover
A cycle k-cover of a graph is a family of cycles which cover every edge of G exactly k times. It has been proven that every bridgeless graph has a cycle k-cover for any even integer k ≥ 4. For k = 2, the question is the well-known cycle double cover conjecture, an open problem in graph theory. The cycle double cover conjecture states that in every bridgeless graph, there exists a set of cycles that together cover every edge of the graph twice.
See also
Alspach's conjecture
Vertex cycle cover
References
Graph theory objects
Combinatorial optimization | Edge cycle cover | Mathematics | 308 |
23,488,217 | https://en.wikipedia.org/wiki/Peer%20Bork | Peer Bork (born 4 May 1963) is a German bioinformatician. He is Director of the European Molecular Biology Laboratory (EMBL) site in Heidelberg, in south-west Germany.
Bork received his PhD in biochemistry in 1990 from Leipzig University and his habilitation in theoretical biophysics in 1995 from the Humboldt University of Berlin. He was appointed a Group Leader at EMBL in 1995. He has worked on the microbiomes of humans and other animals.
He is on the board of editorial reviewers of Science, and is a senior editor of the journal Molecular Systems Biology.
In 2000, Bork was elected as a Member of the European Molecular Biology Organization, and in 2008 he received the Nature "mid-career achievement" award for science mentoring in Germany. He was appointed a member of the German National Academy of Sciences Leopoldina in 2014. He received an honorary doctorate from the University of Würzburg in 2014 and Utrecht University in 2017.
In 2021, Bork was awarded the Novozymes Prize "for developing groundbreaking, publicly available and integrative bioinformatic tools" by the Novo Nordisk Foundation. He was also awarded the 2021 International Society for Computational Biology 'Accomplishments by a Senior Scientist Award' for "tremendous contributions to bioinformatics on a plethora of fronts within the field".
References
Members of the European Molecular Biology Organization
Living people
Human Genome Project scientists
21st-century German biologists
Members of the German National Academy of Sciences Leopoldina
1963 births
Leipzig University alumni | Peer Bork | Engineering | 307 |
1,487,249 | https://en.wikipedia.org/wiki/Human-centered%20computing | Human-centered computing (HCC) studies the design, development, and deployment of mixed-initiative human-computer systems. It emerged from the convergence of multiple disciplines that are concerned both with understanding human beings and with the design of computational artifacts. Human-centered computing is closely related to human-computer interaction and information science. Human-centered computing is usually concerned with systems and practices of technology use, while human-computer interaction is more focused on ergonomics and the usability of computing artifacts, and information science is focused on practices surrounding the collection, manipulation, and use of information.
Human-centered computing researchers and practitioners usually come from one or more disciplines such as computer science, human factors, sociology, psychology, cognitive science, anthropology, communication studies, graphic design, and industrial design. Some researchers focus on understanding humans, both as individuals and in social groups, by focusing on the ways that human beings adopt and organize their lives around computational technologies. Others focus on designing and developing new computational artifacts.
Overview
Scope
HCC aims at bridging the existing gaps between the various disciplines involved with the design and implementation of computing systems that support human activities. At the same time, it is a set of methodologies that apply to any field in which people directly interact with devices or systems that use computer technologies.
HCC facilitates the design of effective computer systems that take into account personal, social, and cultural aspects and addresses issues such as information design, human information interaction, human-computer interaction, human-human interaction, and the relationships between computing technology and art, social, and cultural issues.
HCC topics
The National Science Foundation (NSF) describes HCC research as occupying "a three dimensional space comprising human, computer, and environment." According to the NSF, the human dimension ranges from research that supports individual needs, through teams as goal-oriented groups, to society as an unstructured collection of connected people. The computer dimension ranges from fixed computing devices, through mobile devices, to computational systems of visual/audio devices that are embedded in the surrounding physical environment. The environment dimension ranges from discrete physical computational devices, through mixed reality systems, to immersive virtual environments. Some examples of topics in the field are listed below.
List of topics in the HCC field
Problem-solving in distributed environments, ranging across Internet-based information systems, grids, sensor-based information networks, and mobile and wearable information appliances.
Multimedia and multi-modal interfaces in which combinations of speech, text, graphics, gesture, movement, touch, sound, etc. are used by people and machines to communicate with one another.
Intelligent interfaces and user modeling, information visualization, and adaptation of content to accommodate different display capabilities, modalities, bandwidth, and latency.
Multi-agent systems that control and coordinate actions and solve complex problems in distributed environments in a wide variety of domains, such as disaster response teams, e-commerce, education, and successful aging.
Models for effective computer-mediated human-human interaction under a variety of constraints, (e.g., video conferencing, collaboration across high vs. low bandwidth networks, etc.).
Definition of semantic structures for multimedia information to support cross-modal input and output.
Specific solutions to address the special needs of particular communities.
Collaborative systems that enable knowledge-intensive and dynamic interactions for innovation and knowledge generation across organizational boundaries, national borders, and professional fields.
Novel methods to support and enhance social interaction, including innovative ideas like social orthotics, affective computing, and experience capture.
Studies of how social organizations, such as government agencies or corporations, respond to and shape the introduction of new information technologies, especially with the goal of improving scientific understanding and technical design.
Knowledge-driven human-computer interaction that uses ontologies to address the semantic ambiguities between human and computer's understandings towards mutual behaviors
Human-centered semantic relatedness measure that employs human power to measure the semantic relatedness between two concepts
Human-centered systems
Human-centered systems (HCS) are systems designed for human-centered computing. This approach was developed by Mike Cooley in his book Architect or Bee? drawing on his experience working with the Lucas Plan. HCS focuses on the design of interactive systems as they relate to human activities. According to Kling et al., the Committee on Computing, Information, and Communication of the National Science and Technology Council, identified human-centered systems, or HCS, as one of five components for a High Performance Computing Program. Human-centered systems can be referred to in terms of human-centered automation. According to Kling et al., HCS refers to "systems that are:
based on the analysis of the human tasks the system is aiding
monitored for performance in terms of human benefits
built to take account of human skills and
adaptable easily to changing human needs."
In addition, Kling et al. defines four dimensions of human-centeredness that should be taken into account when classifying a system: systems that are human centered must analyze the complexity of the targeted social organization, and the varied social units that structure work and information; human centeredness is not an attribute of systems, but a process in which the stakeholder group of a particular system assists in evaluating the benefit of the system; the basic architecture of the system should reflect a realistic relationship between humans and machines; the purpose and audience the system is designed for should be an explicit part of the design, evaluation, and use of the system.
Human-computer interaction
Within the field of human-computer interaction (HCI), the term "user-centered" is commonly used. The main focus of this approach is to thoroughly understand and address user needs to drive the design process. However, human-centered computing (HCC) goes beyond conventional areas like usability engineering, human-computer interaction, and human factors which primarily deal with user interfaces and interactions. Experts define HCC as a discipline that integrates disciplines such as learning sciences, social sciences, cognitive sciences, and intelligent systems more extensively compared to traditional HCI practices.
The concept of human-centered computing (HCC) is regarded as an essential aspect within the realm of computer-related research, extending beyond being just a subset discipline of computer science. The HCC perspective acknowledges that "computing" encompasses tangible technologies that enable diverse tasks while also serving as a significant social and economic influence.
In addition, Dertouzos elaborates on how HCC goes beyond the notion of interfaces that are easy for users to navigate by strategically incorporating five technologies: natural interaction, automation, personalized information retrieval, collaborative capabilities, and customization.
While the scope of HCC is extensive, three fundamental factors are proposed to constitute the core of HCC system and algorithm design processes:
Social and culturally aware considerations.
Direct augmentation and/or consideration of human abilities.
Adaptability is a key feature.
Adherence to these factors in system and algorithm design for HCC applications is anticipated to yield qualities such as:
Responsive actions aligned with the social and cultural context of deployment.
Integration of input from various sensors, with communication through diverse media as output.
Accessibility for a diverse range of individuals.
Human-centered activities in multimedia
The human-centered activities in multimedia, or HCM, can be grouped as follows: media production, annotation, organization, archival, retrieval, sharing, analysis, and communication. These can be clustered into three areas: production, analysis, and interaction.
Multimedia production
Multimedia production is the human task of creating media, for instance photographing, recording audio, remixing, etc. All aspects of media production concerned must directly involve humans in HCM. There are two main characteristics of multimedia production. The first is culture and social factors: HCM production systems should consider cultural differences and be designed according to the culture in which they will be deployed. The second is consideration of human abilities: participants involved in HCM production should be able to complete the activities required during the production process, and recognizing and using human capabilities effectively is a key factor in successful HCM production.
Multimedia analysis
Multimedia analysis can be considered a type of HCM application: the automatic analysis of human activities and social behavior in general. There is a broad area of potentially relevant uses, from facilitating and enhancing human communication to allowing improved information access and retrieval in the professional, entertainment, and personal domains. The possibilities go beyond simple categorization toward a nuanced understanding of human behavior, which can enhance system functionality and improve the user experience.
Multimedia interaction
Multimedia interaction can be considered the interaction activity area of HCM. It is paramount to understand both how humans interact with each other and why, so that we can build systems to facilitate such communication and so that people can interact with computers in natural ways. To achieve natural interaction, cultural differences and social context are primary factors to consider, given the potentially different cultural backgrounds of the participants. Examples include face-to-face communication, where the interaction is physically co-located and real-time; live computer-mediated communication, where the interaction is physically remote but remains real-time; and non-real-time computer-mediated communication such as SMS messages, email, etc.
Human-Centered Design Process
The Human-Centered Design Process is an approach to problem-solving used in design. The process involves, first, empathizing with the user to learn about the target audience of the product and understand their needs. Empathizing then leads to research, and to asking the target audience specific questions to further understand their goals for the product at hand. This research stage may also involve competitor analysis to find more design opportunities in the product's market. Once the designer has compiled data on the user and the market for their product design, they move on to the ideation stage, in which they brainstorm design solutions through sketches and wireframes. Wireframing is a digital or physical illustration of a user interface, focusing on information architecture, space allocation, and content functionality. Consequently, a wireframe typically does not have any colors or graphics and only focuses on the intended functionality of the interface.
To conclude the Human-Centered Design Process, there are two final steps. After wireframing or sketching, the designer will usually turn their paper sketches or low-fidelity wireframes into high-fidelity prototypes. Prototyping allows the designer to explore their design ideas further and focus on the overall design concept. High-fidelity means that the prototype is interactive or "clickable" and simulates a real application. After creating this high-fidelity prototype of their design, the designer can then conduct usability testing. This involves recruiting participants who represent the target audience of the product and having them walk through the prototype as if they were using the real product. The goal of usability testing is to identify any issues with the design that need to be improved and to analyze how real users will interact with the product. To run an effective usability test, it is important to take notes on the user's behavior and decisions and to have the user think out loud while they use the prototype.
Career
Academic programs
As human-centered computing has become increasingly popular, many universities have created special programs for HCC research and study for both graduate and undergraduate students.
User interface designer
A user interface designer is an individual, usually with a relevant degree or a high level of knowledge, not only of technology, cognitive science, human-computer interaction, and the learning sciences, but also of psychology and sociology. A user interface designer develops and applies user-centered design methodologies and agile development processes that include consideration of the overall usability of interactive software applications, emphasizing interaction design and front-end development.
Information architect (IA)
Information architects mainly work to understand user and business needs in order to organize information to best satisfy these needs. Specifically, information architects often act as a key bridge between technical and creative development in a project team. Areas of interest in IA include search schemas, metadata, and taxonomy.
Projects
NASA/Ames Computational Sciences Division
The Human-Centered Computing (HCC) group at NASA/Ames Computational Sciences Division is conducting research at Haughton as members of the Haughton-Mars Project (HMP) to determine, via an analog study, how we will live and work on Mars.
HMP/Carnegie Mellon University (CMU) Field Robotics Experiments—HCC is collaborating with researchers on the HMP/CMU field robotics research program at Haughton to specify opportunities for robots assisting scientists. Researchers in this project have carried out a parallel investigation that documents work during traverses. A simulation module has been built, using a tool that represents people, their tools, and their work environment, that will serve as a partial controller for a robot assisting scientists doing field work on Mars. Where human, computer, and environment must all be taken into consideration, theory and techniques from the HCC field provide the guideline.
Ethnography of Human Exploration of Space—the HCC lab is carrying out an ethnographic study of scientific field work, covering all aspects of a scientist's life in the field. This study involves observing as participants at Haughton and writing about the lab's experiences. The lab then looks for patterns in how people organize their time, space, and objects and how they relate to each other to accomplish their goals. In this study, the lab is focusing on learning and conceptual change.
Center for Cognitive Ubiquitous Computing (CUbiC) at Arizona State University
Based on the principles of human-centered computing, the Center for Cognitive Ubiquitous Computing (CUbiC) at Arizona State University develops assistive, rehabilitative and healthcare applications. Founded by Sethuraman Panchanathan in 2001, CUbiC research spans three main areas of multimedia computing: sensing and processing, recognition and learning, and interaction and delivery. CUbiC places an emphasis on transdisciplinary research and positions individuals at the center of technology design and development. Examples of such technologies include the Note-Taker, a device designed to aid students with low vision to follow classroom instruction and take notes, and VibroGlove, which conveys facial expressions via haptic feedback to people with visual impairments.
In 2016, researchers at CUbiC introduced "Person-Centered Multimedia Computing", a new paradigm adjacent to HCC, which aims to understand a user's needs, preferences, and mannerisms including cognitive abilities and skills to design ego-centric technologies. Person-centered multimedia computing stresses the multimedia analysis and interaction facets of HCC to create technologies that can adapt to new users despite being designed for an individual.
See also
Cognitive science
Computer-mediated communication
Context awareness
Crowdsourcing
Health information technology
Human-based computation
Human-computer interaction
Information science
Social computing
Socially relevant computing
Ubiquitous computing
User-centered design
References
Further reading
"HMP-99 Science Field Report" NASA Ames Research Center
Human–computer interaction
Information science
Applied psychology | Human-centered computing | Engineering | 3,182 |
3,168,401 | https://en.wikipedia.org/wiki/Magnetic%20flux%20leakage | Magnetic flux leakage (TFI or Transverse Field Inspection technology) is a magnetic method of nondestructive testing to detect corrosion and pitting in steel structures, for instance: pipelines and storage tanks. The basic principle is that the magnetic field "leaks" from the steel at areas where there is corrosion or missing metal. To magnetize the steel, a powerful magnet is used. In an MFL (or Magnetic Flux Leakage) tool, a magnetic detector is placed between the poles of the magnet to detect the leakage field. Analysts interpret the chart recording of the leakage field to identify damaged areas and to estimate the depth of metal loss.
Introduction to pipeline examination
There are many methods of assessing the integrity of a pipeline. In-line-Inspection (ILI) tools are built to travel inside a pipeline and collect data as they go. Magnetic Flux Leakage inline inspection tool (MFL-ILI) has been in use the longest for pipeline inspection. MFL-ILIs detect and assess areas where the pipe wall may be damaged by corrosion. The more advanced versions are referred to as "high-resolution" due to their increased number of sensors. The high-resolution MFL-ILIs allow more reliable and accurate identification of anomalies in a pipeline, thus, minimizing the need for expensive verification excavations. Accurate assessment of pipeline anomalies can improve the decision making process within an Integrity Management Program and excavation programs can then focus on required repairs instead of calibration or exploratory digs. Utilizing the information from an MFL ILI inspection is not only cost effective but can also prove to be an extremely valuable building block of a Pipeline Integrity Management Program.
The reliable supply and transportation of product in a safe and cost-effective manner is a primary goal of most pipeline operating companies; managing the integrity of the pipeline is paramount in maintaining this objective. In-line-inspection programs are one of the most effective means of obtaining data that can be used as a fundamental base for an Integrity Management Program. There are many types of ILI tools that detect various pipeline defects. However, high-resolution MFL tools are becoming increasingly prevalent as its applications are surpassing those to which it was originally designed. Originally designed for detecting areas of metal loss, the modern High Resolution MFL tool is proving to be able to accurately assess the severity of corrosion features, define dents, wrinkles, buckles, and cracks.
MFL pipeline inspection tools
Background and origin of the term "pig":
In the field, a device that travels inside a pipeline to clean or inspect it is typically known as a pig. PIG is a bacronym for "Pipeline Inspection Gauge". The acronym PIG came later as the nickname for "pig" originated from cleaning pigs (first designed pigs) that sounded like squealing or screeching pigs when they passed through the lines to clean them using methods such as scraping, scrubbing and "squeegeeing" the internal surface. The name serves as common industry jargon for all pigs, both intelligent tools and cleaning tools. Pigs, in order to fit inside the pipeline, are cylindrical and are necessarily short in order to be able to negotiate bends in the pipeline. Many other short, cylindrical objects, such as propane storage tanks, are also known as pigs and it is likely that the name came from the shape of the devices.
In some countries, a pig is known as a "Diablo", literally translated to mean "the Devil" relating to the shuddering sound the tool would make as it passed beneath people's feet. The pigs are built to match the diameter of a pipeline and use the very product being carried to end users to transport them. Pigs have been used in pipelines for many years and have many uses. Some separate one product from another, some clean and some inspect. An MFL tool is known as an "intelligent" or "smart" inspection pig because it contains electronics and collects data real-time while traveling through the pipeline. Sophisticated electronics on board allow this tool to accurately detect anomalies as small as 1 mm2, which can include dimensions of a pipeline wall as well as its depth and thickness. This is crucial in identifying potential wall loss.
Typically, an MFL tool consists of two or more bodies. One body is the magnetizer with the magnets and sensors and the other bodies contain the electronics and batteries. The magnetizer body houses the sensors that are located between powerful "rare-earth" magnets. The magnets are mounted between the brushes and tool body to create a magnetic circuit along the pipe wall. As the tool travels along the pipe, the sensors detect interruptions in the magnetic circuit. Interruptions are typically caused by metal loss, which is typically caused by corrosion and is denoted as a "feature". Other features may be manufacturing defects or physical gouges. The feature indication or "reading" includes its length by width by depth as well as the o'clock position of the anomaly/feature. The metal loss in a magnetic circuit is analogous to a rock in a stream. Magnetism needs metal to flow and in the absence of it, the flow of magnetism will go around, over or under to maintain its relative path from one magnet to another, similar to the flow of water around a rock in a stream. The sensors detect the changes in the magnetic field in the three directions (axial, radial, or circumferential) to characterize the anomaly. The sensors are typically oriented axially which limits data to axial conditions along the length of the pipeline. Other designs of smart pigs can address other directional data readings or have completely different functions than that of a standard MFL tool. Oftentimes an operator will run a series of inspection tools to help verify or confirm MFL readings and vice versa. An MFL tool can take sensor readings based on either the distance the tool travels or on increments of time. The choice depends on many factors such as the length of the run, the speed that the tool intends to travel, and the number of stops or outages that the tool may experience.
The second body is called an Electronics Can. This section can be split into a number of bodies depending on the size of the tool, and contains the electronics required for the PIG to function. It contains the batteries and, in some cases, an IMU (Inertial Measurement Unit) to tie location information to GPS coordinates. On the very rear of the tool are odometer wheels that travel along the inside of the pipeline to measure the distance and speed of the tool.
MFL principle
As an MFL tool navigates the pipeline, a magnetic circuit is created between the pipe wall and the tool. Brushes typically act as transmitters of magnetic flux from the tool into the pipe wall, and as the magnets are oriented in opposing directions, a flow of flux is created in an elliptical pattern. High-field MFL tools saturate the pipe wall with magnetic flux until the pipe wall can no longer hold any more flux. The remaining flux leaks out of the pipe wall, and strategically placed tri-axial Hall effect sensor heads can accurately measure the three-dimensional vector of the leakage field.
Given that magnetic flux leakage is a vector quantity and that a Hall sensor can only measure in one direction, three sensors must be oriented within a sensor head to accurately measure the axial, radial, and circumferential components of an MFL signal. The axial component of the vector signal is measured by a sensor mounted orthogonal to the axis of the pipe, and the radial sensor is mounted to measure the strength of the flux that leaks out of the pipe. The circumferential component of the vector signal can be measured by mounting a sensor perpendicular to this field. Earlier MFL tools recorded only the axial component, but high-resolution tools typically measure all three components. To determine whether metal loss is occurring on the internal or external surface of a pipe, a separate eddy current sensor is used to indicate the wall surface location of the anomaly. The unit of measure when sensing an MFL signal is the gauss or the tesla and, generally speaking, the larger the change in the detected magnetic field, the larger the anomaly.
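As a small numerical illustration (the numbers and function name are invented for this sketch and are not taken from any particular tool), the three orthogonal Hall-sensor readings can be combined into the magnitude of the leakage field:

```python
import math

def leakage_magnitude(axial, radial, circumferential):
    """Magnitude (in the same units as the inputs, e.g. gauss) of the
    three-dimensional flux-leakage vector measured by a tri-axial sensor head."""
    return math.sqrt(axial**2 + radial**2 + circumferential**2)

# e.g. readings in gauss from one sensor head at one axial sample
print(leakage_magnitude(35.0, 20.0, 8.0))  # ≈ 41.1
```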
Signal analysis
The primary purpose of a MFL tool is to detect corrosion in a pipeline. To more accurately predict the dimensions (length, width and depth) of a corrosion feature, extensive testing is performed before the tool enters an operational pipeline. Using a known collection of measured defects, tools can be trained and tested to accurately interpret MFL signals. Defects can be simulated using a variety of methods.
Creating and therefore knowing the actual dimensions of a feature makes it relatively easy to make simple correlations of signals to actual anomalies found in a pipeline. When signals in an actual pipeline inspection have similar characteristics to the signals found during testing it is logical to assume that the features would be similar. The algorithms and neural nets designed for calculating the dimensions of a corrosion feature are complicated and often they are closely guarded trade secrets. An anomaly is often reported in a simplified fashion as a cubic feature with an estimated length, width and depth. In this way, the effective area of metal loss can be calculated and used in acknowledged formulas to predict the estimated burst pressure of the pipe due to the detected anomaly.
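A rough sketch of the kind of calculation referred to, in the spirit of the original ASME B31G-style effective-area methods; the coefficients, names, and numbers here are illustrative assumptions only, and real assessments use the full standards and vendor-specific sizing models.

```python
import math

def estimated_burst_pressure(D, t, d, L, smys):
    """Very simplified B31G-style estimate of failure pressure (same units as smys).

    D: pipe outside diameter, t: nominal wall thickness,
    d: maximum depth of the metal-loss feature, L: axial length of the feature,
    smys: specified minimum yield strength of the pipe steel.
    """
    M = math.sqrt(1.0 + 0.8 * L**2 / (D * t))   # Folias (bulging) factor
    loss = (2.0 / 3.0) * (d / t)                # parabolic metal-loss area term
    flow_stress = 1.1 * smys                    # simple flow-stress assumption
    return flow_stress * (2.0 * t / D) * (1.0 - loss) / (1.0 - loss / M)

# Illustrative numbers only: 30 in OD, 0.375 in wall, 0.15 in deep, 3 in long, X52 steel
print(round(estimated_burst_pressure(30.0, 0.375, 0.15, 3.0, 52_000)))  # ≈ 1324 psi
```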
Another important factor in the ongoing improvement of sizing algorithms is customer feedback to the ILI vendors. Every anomaly in a pipeline is unique and it is impossible to replicate in the shop what exists in all cases in the field. Open lines of communication usually exist between the inspection companies and the pipeline operators as to what was reported and what was actually visually observed in an excavation.
After an inspection, the collected data is downloaded and compiled so that an analyst is able to accurately interpret the collected signals. Most pipeline inspection companies have proprietary software designed to view their own tool's collected data. The three components of the MFL vector field are viewed independently and collectively to identify and classify corrosion features. Metal loss features have unique signals that analysts are trained to identify.
Estimation of corrosion growth rate
High-resolution MFL tools collect data approximately every 2 mm along the axis of a pipe and this superior resolution allows for a comprehensive analysis of collected signals. Pipeline Integrity Management programs have specific intervals for inspecting pipeline segments and by employing high-resolution MFL tools an exceptional corrosion growth analysis can be conducted. This type of analysis proves extremely useful in forecasting the inspection intervals.
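For example, a simple way to use two successive inspections of the same feature (all names and numbers here are hypothetical) is to compute a linear growth rate and project when the feature would reach a chosen depth limit:

```python
def years_until_limit(depth_run1, depth_run2, years_between, depth_limit):
    """Linear corrosion-growth projection from two ILI runs.

    Depths are fractions of wall thickness (e.g. 0.20 = 20% wall loss).
    Returns the estimated number of years after the second run until the
    feature reaches `depth_limit`, or None if no growth was measured.
    """
    growth_per_year = (depth_run2 - depth_run1) / years_between
    if growth_per_year <= 0:
        return None  # feature stable (or within sizing scatter); no projection
    return (depth_limit - depth_run2) / growth_per_year

# 20% wall loss in 2012, 32% wall loss in 2019, project time to an 80% limit
print(years_until_limit(0.20, 0.32, 7.0, 0.80))  # 28.0 years
```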
Other features that an MFL tool can identify
Although primarily used to detect corrosion, MFL tools can also be used to detect features that they were not originally designed to identify. When an MFL tool encounters a geometric deformity such as a dent, wrinkle or buckle, a very distinct signal is created due to the plastic deformation of the pipe wall.
Crack detection
There are cases where large, non-axially oriented cracks have been found in pipelines that were inspected by a magnetic flux leakage tool. To an experienced MFL data analyst, a dent is easily recognizable by its trademark "horseshoe" signal in the radial component of the vector field. What is not easily identifiable to an MFL tool is the signature that a crack leaves.
References
DUMALSKI, Scott, FENYVESI, Louis – Determining Corrosion Growth Accurately and Reliably
MORRISON, Tom, MANGAT, Naurang, DESJARDINS, Guy, BHATIA, Arti – Validation of an In-Line Inspection Metal Loss Tool, presented at International Pipeline Conference, Calgary, Alberta, Canada, 2000
NESTLEROTH, J.B, BUBENIK, T.A, - Magnetic Flux Leakage ( MFL ) Technology – for The Gas Research Institute – United States National Technical Information Center 1999
REMPEL, Raymond - Anomaly detection using Magnetic Flux Leakage ( MFL ) Technology - Presented at the Rio Pipeline Conference and Exposition, Rio de Janeiro, Brasil 2005
WESTWOOD, Stephen, CHOLOWSKY, Sharon. - Tri-Axial Sensors and 3-Dimensional Magnetic Modelling of Combine to Improve Defect Sizing From Magnetic Flux Leakage Signals. presented at NACE International, Northern Area Western Conference, Victoria, British Columbia, Canada 2004
WESTWOOD, Stephen, CHOLOWSKY, Sharon. – Independent Experimental Verification of the Sizing Accuracy of Magnetic Flux Leakage Tools, presented at 7th International Pipeline Conference, Puebla Mexico 2003
AMOS, D. M. - "Magnetic flux leakage as applied to aboveground storage tank flat bottom tank floor inspection", Materials Evaluation, 54(1996), p. 26
External links
Magnetic Flux Leakage (MFL) Technology For Natural Gas Pipeline Inspection
A Comparison of the Magnetic Flux Leakage and Ultrasonic Methods in the detection and measurement of corrosion pitting in ferrous plate and pipe, John Drury
Detection of Mechanical Damage Using the Magnetic Flux Leakage Technique, L. Clapham, Queen's University, Canada.
MFL for tanks
Magnetic Flux Inspection of Thick Walled Components
MFL and Pulsed Eddy Current (PEC) Tools for Plant Inspection
Nondestructive testing
Pipeline transport
Pigging | Magnetic flux leakage | Materials_science,Engineering | 2,639 |
1,413,965 | https://en.wikipedia.org/wiki/Energy%20transformation | Energy transformation, also known as energy conversion, is the process of changing energy from one form to another. In physics, energy is a quantity that provides the capacity to perform work (e.g. lifting an object) or to provide heat. In addition to being converted, according to the law of conservation of energy, energy is transferable to a different location or object, but it cannot be created or destroyed.
The energy in many of its forms may be used in natural processes, or to provide some service to society such as heating, refrigeration, lighting or performing mechanical work to operate machines. For example, to heat a home, the furnace burns fuel, whose chemical potential energy is converted into thermal energy, which is then transferred to the home's air to raise its temperature.
Limitations in the conversion of thermal energy
Conversions to thermal energy from other forms of energy may occur with 100% efficiency. Conversion among non-thermal forms of energy may occur with fairly high efficiency, though there is always some energy dissipated thermally due to friction and similar processes. Sometimes the efficiency is close to 100%, such as when potential energy is converted to kinetic energy as an object falls in a vacuum. This also applies to the opposite case; for example, an object in an elliptical orbit around another body converts its kinetic energy (speed) into gravitational potential energy (distance from the other object) as it moves away from its parent body. When it reaches the furthest point, it will reverse the process, accelerating and converting potential energy into kinetic. Since space is a near-vacuum, this process has close to 100% efficiency.
Thermal energy is unique because in most cases it cannot be fully converted to other forms of energy. Only a difference in the density of thermal/heat energy (temperature) can be used to perform work, and the efficiency of this conversion will be (much) less than 100%. This is because thermal energy represents a particularly disordered form of energy; it is spread out randomly among many available states of a collection of microscopic particles constituting the system (these combinations of position and momentum for each of the particles are said to form a phase space). The measure of this disorder or randomness is entropy, and its defining feature is that the entropy of an isolated system never decreases. One cannot take a high-entropy system (like a hot substance, with a certain amount of thermal energy) and convert it into a low entropy state (like a low-temperature substance, with correspondingly lower energy), without that entropy going somewhere else (like the surrounding air). In other words, there is no way to concentrate energy without spreading out energy somewhere else.
Thermal energy in equilibrium at a given temperature already represents the maximal evening-out of energy between all possible states because it is not entirely convertible to a "useful" form, i.e. one that can do more than just affect temperature. The second law of thermodynamics states that the entropy of a closed system can never decrease. For this reason, thermal energy in a system may be converted to other kinds of energy with efficiencies approaching 100% only if the entropy of the universe is increased by other means, to compensate for the decrease in entropy associated with the disappearance of the thermal energy and its entropy content. Otherwise, only a part of that thermal energy may be converted to other kinds of energy (and thus useful work). This is because the remainder of the heat must be reserved to be transferred to a thermal reservoir at a lower temperature. The increase in entropy for this process is greater than the decrease in entropy associated with the transformation of the rest of the heat into other types of energy.
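As a concrete illustration of this limit (the temperatures below are chosen only as an example), the best possible fraction of heat convertible to work between a hot source and a cold reservoir is the Carnot efficiency:

```python
def carnot_efficiency(t_hot_kelvin, t_cold_kelvin):
    """Maximum fraction of heat that any engine can convert to work
    when operating between the two absolute temperatures."""
    return 1.0 - t_cold_kelvin / t_hot_kelvin

# e.g. steam at 800 K rejecting heat to surroundings at 300 K
print(carnot_efficiency(800.0, 300.0))  # 0.625, i.e. at most 62.5%
```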
In order to make energy transformation more efficient, it is desirable to avoid thermal conversion. For example, the efficiency of nuclear reactors, where the kinetic energy of the nuclei is first converted to thermal energy and then to electrical energy, lies at around 35%. By direct conversion of kinetic energy to electric energy, effected by eliminating the intermediate thermal energy transformation, the efficiency of the energy transformation process can be dramatically improved.
History of energy transformation
Energy transformations in the universe over time are usually characterized by various kinds of energy, which have been available since the Big Bang, later being "released" (that is, transformed to more active types of energy such as kinetic or radiant energy) by a triggering mechanism.
Release of energy from gravitational potential
A direct transformation of energy occurs when hydrogen produced in the Big Bang collects into structures such as planets, in a process during which part of the gravitational potential energy is converted directly into heat. In Jupiter, Saturn, and Neptune, for example, such heat from the continued collapse of the planets' large gas atmospheres continues to drive most of the planets' weather systems. These systems, consisting of atmospheric bands, winds, and powerful storms, are only partly powered by sunlight. However, on Uranus, little of this process occurs.
On Earth, a significant portion of the heat output from the interior of the planet, estimated at a third to half of the total, is caused by the slow collapse of planetary materials to a smaller size, generating heat.
Release of energy from radioactive potential
Familiar examples of other such processes transforming energy from the Big Bang include nuclear decay, which releases energy that was originally "stored" in heavy isotopes, such as uranium and thorium. This energy was stored at the time of the nucleosynthesis of these elements. This process uses the gravitational potential energy released from the collapse of Type II supernovae to create these heavy elements before they are incorporated into star systems such as the Solar System and the Earth. The energy locked into uranium is released spontaneously during most types of radioactive decay, and can be suddenly released in nuclear fission bombs. In both cases, a portion of the energy binding the atomic nuclei together is released as heat.
Release of energy from hydrogen fusion potential
In a similar chain of transformations beginning at the dawn of the universe, nuclear fusion of hydrogen in the Sun releases another store of potential energy which was created at the time of the Big Bang. At that time, according to one theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This resulted in hydrogen representing a store of potential energy which can be released by nuclear fusion. Such a fusion process is triggered by heat and pressure generated from the gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into starlight. Considering the solar system, starlight, overwhelmingly from the Sun, may again be stored as gravitational potential energy after it strikes the Earth. This occurs in the case of avalanches, or when water evaporates from oceans and is deposited as precipitation high above sea level (where, after being released at a hydroelectric dam, it can be used to drive turbine/generators to produce electricity).
Sunlight also drives many weather phenomena on Earth. One example is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, give up some of their thermal energy suddenly to power a few days of violent air movement. Sunlight is also captured by plants as a chemical potential energy via photosynthesis, when carbon dioxide and water are converted into a combustible combination of carbohydrates, lipids, and oxygen. The release of this energy as heat and light may be triggered suddenly by a spark, in a forest fire; or it may be available more slowly for animal or human metabolism when these molecules are ingested, and catabolism is triggered by enzyme action.
Through all of these transformation chains, the potential energy stored at the time of the Big Bang is later released by intermediate events, sometimes being stored in several different ways for long periods between releases, as more active energy. All of these events involve the conversion of one kind of energy into others, including heat.
Examples
Examples of sets of energy conversions in machines
A coal-fired power plant involves these energy transformations:
Chemical energy in the coal is converted into thermal energy in the exhaust gases of combustion
Thermal energy of the exhaust gases converted into thermal energy of steam through heat exchange
Kinetic energy of steam converted to mechanical energy in the turbine
Mechanical energy of the turbine is converted to electrical energy by the generator, which is the ultimate output
In such a system, the first and fourth steps are highly efficient, but the second and third steps are less efficient. The most efficient gas-fired electrical power stations can achieve 50% conversion efficiency. Oil- and coal-fired stations are less efficient.
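A back-of-the-envelope way to see why the overall figure ends up well below 100% (the individual stage efficiencies below are illustrative guesses, not measured values) is to multiply the efficiencies of the successive transformations:

```python
def overall_efficiency(stage_efficiencies):
    """Overall conversion efficiency of a chain of energy transformations."""
    total = 1.0
    for eta in stage_efficiencies:
        total *= eta
    return total

# combustion, heat exchange to steam, turbine, generator (illustrative values)
stages = [0.95, 0.85, 0.45, 0.98]
print(round(overall_efficiency(stages), 3))  # ≈ 0.356, i.e. about 36% overall
```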
In a conventional automobile, the following energy transformations occur:
Chemical energy in the fuel is converted into kinetic energy of expanding gas via combustion
Kinetic energy of expanding gas converted to the linear piston movement
Linear piston movement converted to rotary crankshaft movement
Rotary crankshaft movement passed into transmission assembly
Rotary movement passed out of transmission assembly
Rotary movement passed through a differential
Rotary movement passed out of differential to drive wheels
Rotary movement of drive wheels converted to linear motion of the vehicle
Other energy conversions
There are many different machines and transducers that convert one energy form into another. A short list of examples follows:
ATP hydrolysis (chemical energy in adenosine triphosphate → mechanical energy)
Battery (electricity) (chemical energy → electrical energy)
Electric generator (kinetic energy or mechanical work → electrical energy)
Electric heater (electric energy → heat)
Fire (chemical energy → heat and light)
Friction (kinetic energy → heat)
Fuel cell (chemical energy → electrical energy)
Geothermal power (heat→ electrical energy)
Heat engines, such as the internal combustion engine used in cars, or the steam engine (heat → mechanical energy)
Hydroelectric dam (gravitational potential energy → electrical energy)
Electric lamp (electrical energy → heat and light)
Microphone (sound → electrical energy)
Ocean thermal power (heat → electrical energy)
Photosynthesis (electromagnetic radiation → chemical energy)
Piezoelectrics (strain → electrical energy)
Thermoelectric (heat → electrical energy)
Wave power (mechanical energy → electrical energy)
Windmill (wind energy → electrical energy or mechanical energy)
See also
Chaos theory
Conservation law
Conservation of energy
Conservation of mass
Energy accounting
Energy quality
Groundwater energy balance
Laws of thermodynamics
Noether's theorem
Ocean thermal energy conversion
Thermodynamic equilibrium
Thermoeconomics
Uncertainty principle
References
Further reading
Energy Transfer and Transformation | Core knowledge science
Energy (physics) | Energy transformation | Physics,Mathematics | 2,099 |
12,242 | https://en.wikipedia.org/wiki/Germanium | Germanium is a chemical element; it has symbol Ge and atomic number 32. It is lustrous, hard-brittle, grayish-white and similar in appearance to silicon. It is a metalloid (more rarely considered a metal) in the carbon group that is chemically similar to its group neighbors silicon and tin. Like silicon, germanium naturally reacts and forms complexes with oxygen in nature.
Because it seldom appears in high concentration, germanium was found comparatively late in the discovery of the elements. Germanium ranks 50th in abundance of the elements in the Earth's crust. In 1869, Dmitri Mendeleev predicted its existence and some of its properties from its position on his periodic table, and called the element ekasilicon. On February 6, 1886, Clemens Winkler at Freiberg University found the new element, along with silver and sulfur, in the mineral argyrodite. Winkler named the element after Germany, his country of birth. Germanium is mined primarily from sphalerite (the primary ore of zinc), though germanium is also recovered commercially from silver, lead, and copper ores.
Elemental germanium is used as a semiconductor in transistors and various other electronic devices. Historically, the first decade of semiconductor electronics was based entirely on germanium. Presently, the major end uses are fibre-optic systems, infrared optics, solar cell applications, and light-emitting diodes (LEDs). Germanium compounds are also used for polymerization catalysts and have most recently found use in the production of nanowires. This element forms a large number of organogermanium compounds, such as tetraethylgermanium, useful in organometallic chemistry. Germanium is considered a technology-critical element.
Germanium is not thought to be an essential element for any living organism. Similar to silicon and aluminium, naturally-occurring germanium compounds tend to be insoluble in water and thus have little oral toxicity. However, synthetic soluble germanium salts are nephrotoxic, and synthetic chemically reactive germanium compounds with halogens and hydrogen are irritants and toxins.
History
In his report on The Periodic Law of the Chemical Elements in 1869, the Russian chemist Dmitri Mendeleev predicted the existence of several unknown chemical elements, including one that would fill a gap in the carbon family, located between silicon and tin. Because of its position in his periodic table, Mendeleev called it ekasilicon (Es), and he estimated its atomic weight to be 70 (later 72).
In mid-1885, at a mine near Freiberg, Saxony, a new mineral was discovered and named argyrodite because of its high silver content. The chemist Clemens Winkler analyzed this new mineral, which proved to be a combination of silver, sulfur, and a new element. Winkler was able to isolate the new element in 1886 and found it similar to antimony. He initially considered the new element to be eka-antimony, but was soon convinced that it was instead eka-silicon. Before Winkler published his results on the new element, he decided that he would name his element neptunium, since the recent discovery of planet Neptune in 1846 had similarly been preceded by mathematical predictions of its existence. However, the name "neptunium" had already been given to another proposed chemical element (though not the element that today bears the name neptunium, which was discovered in 1940). So instead, Winkler named the new element germanium (from the Latin word, Germania, for Germany) in honor of his homeland. Argyrodite proved empirically to be Ag8GeS6.
Because this new element showed some similarities with the elements arsenic and antimony, its proper place in the periodic table was under consideration, but its similarities with Dmitri Mendeleev's predicted element "ekasilicon" confirmed that place on the periodic table. With further material from 500 kg of ore from the mines in Saxony, Winkler confirmed the chemical properties of the new element in 1887. He also determined an atomic weight of 72.32 by analyzing pure germanium tetrachloride (), while Lecoq de Boisbaudran deduced 72.3 by a comparison of the lines in the spark spectrum of the element.
Winkler was able to prepare several new compounds of germanium, including fluorides, chlorides, sulfides, dioxide, and tetraethylgermane (Ge(C2H5)4), the first organogermane. The physical data from those compounds—which corresponded well with Mendeleev's predictions—made the discovery an important confirmation of Mendeleev's idea of element periodicity.
Until the late 1930s, germanium was thought to be a poorly conducting metal. Germanium did not become economically significant until after 1945, when its properties as an electronic semiconductor were recognized. During World War II, small amounts of germanium were used in some special electronic devices, mostly diodes. The first major use was the point-contact Schottky diodes for radar pulse detection during the War. The first silicon–germanium alloys were obtained in 1955. Before 1945, only a few hundred kilograms of germanium were produced in smelters each year, but annual worldwide production had grown substantially by the end of the 1950s.
The development of the germanium transistor in 1948 opened the door to countless applications of solid state electronics. From 1950 through the early 1970s, this area provided an increasing market for germanium, but then high-purity silicon began replacing germanium in transistors, diodes, and rectifiers. For example, the company that became Fairchild Semiconductor was founded in 1957 with the express purpose of producing silicon transistors. Silicon has superior electrical properties, but it requires much greater purity, a purity that could not be achieved commercially in the early years of semiconductor electronics.
Meanwhile, the demand for germanium for fiber optic communication networks, infrared night vision systems, and polymerization catalysts increased dramatically. These end uses represented 85% of worldwide germanium consumption in 2000. The US government even designated germanium as a strategic and critical material, calling for a 146 ton (132 tonne) supply in the national defense stockpile in 1987.
Germanium differs from silicon in that the supply is limited by the availability of exploitable sources, while the supply of silicon is limited only by production capacity since silicon comes from ordinary sand and quartz. While silicon could be bought in 1998 for less than $10 per kg, the price of germanium was almost $800 per kg.
Characteristics
Under standard conditions, germanium is a brittle, silvery-white semiconductor. This form constitutes an allotrope known as α-germanium, which has a metallic luster and a diamond cubic crystal structure, the same structure as silicon and diamond. At pressures above 120 kbar, germanium becomes the metallic allotrope β-germanium, with the same structure as β-tin. Like silicon, gallium, bismuth, antimony, and water, germanium is one of the few substances that expands as it solidifies (i.e. freezes) from the molten state.
Germanium is a semiconductor having an indirect bandgap, as is crystalline silicon. Zone refining techniques have led to the production of crystalline germanium for semiconductors that has an impurity of only one part in 10¹⁰,
making it one of the purest materials ever obtained.
The first semi-metallic material discovered (in 2005) to become a superconductor in the presence of an extremely strong electromagnetic field was an alloy of germanium, uranium, and rhodium.
Pure germanium is known to spontaneously extrude very long screw dislocations, referred to as germanium whiskers. The growth of these whiskers is one of the primary reasons for the failure of older diodes and transistors made from germanium, as, depending on what they eventually touch, they may lead to an electrical short.
Chemistry
Elemental germanium starts to oxidize slowly in air at around 250 °C, forming GeO2. Germanium is insoluble in dilute acids and alkalis but dissolves slowly in hot concentrated sulfuric and nitric acids and reacts violently with molten alkalis to produce germanates. Germanium occurs mostly in the oxidation state +4 although many +2 compounds are known. Other oxidation states are rare: +3 is found in compounds such as Ge2Cl6, and +3 and +1 are found on the surface of oxides, or negative oxidation states in germanides, such as −4 in Mg2Ge. Germanium cluster anions (Zintl ions) such as [Ge4]2−, [Ge9]4−, [Ge9]2−, and [(Ge9)2]6− have been prepared by extraction from alloys containing alkali metals and germanium in liquid ammonia in the presence of ethylenediamine or a cryptand. The oxidation states of the element in these ions are not integers—similar to the ozonides O3−.
Two oxides of germanium are known: germanium dioxide (, germania) and germanium monoxide, (). The dioxide, GeO2, can be obtained by roasting germanium disulfide (), and is a white powder that is only slightly soluble in water but reacts with alkalis to form germanates. The monoxide, germanous oxide, can be obtained by the high temperature reaction of GeO2 with elemental Ge. The dioxide (and the related oxides and germanates) exhibits the unusual property of having a high refractive index for visible light, but transparency to infrared light. Bismuth germanate, Bi4Ge3O12 (BGO), is used as a scintillator.
Binary compounds with other chalcogens are also known, such as the disulfide () and diselenide (), and the monosulfide (GeS), monoselenide (GeSe), and monotelluride (GeTe). GeS2 forms as a white precipitate when hydrogen sulfide is passed through strongly acid solutions containing Ge(IV). The disulfide is appreciably soluble in water and in solutions of caustic alkalis or alkaline sulfides. Nevertheless, it is not soluble in acidic water, which allowed Winkler to discover the element. By heating the disulfide in a current of hydrogen, the monosulfide (GeS) is formed, which sublimes in thin plates of a dark color and metallic luster, and is soluble in solutions of the caustic alkalis. Upon melting with alkaline carbonates and sulfur, germanium compounds form salts known as thiogermanates.
Four tetrahalides are known. Under normal conditions germanium tetraiodide (GeI4) is a solid, germanium tetrafluoride (GeF4) a gas and the others volatile liquids. For example, germanium tetrachloride, GeCl4, is obtained as a colorless fuming liquid boiling at 83.1 °C by heating the metal with chlorine. All the tetrahalides are readily hydrolyzed to hydrated germanium dioxide. GeCl4 is used in the production of organogermanium compounds. All four dihalides are known and in contrast to the tetrahalides are polymeric solids. Additionally Ge2Cl6 and some higher compounds of formula GenCl2n+2 are known. The unusual compound Ge6Cl16 has been prepared that contains the Ge5Cl12 unit with a neopentane structure.
Germane (GeH4) is a compound similar in structure to methane. Polygermanes—compounds that are similar to alkanes—with formula GenH2n+2 containing up to five germanium atoms are known. The germanes are less volatile and less reactive than their corresponding silicon analogues. GeH4 reacts with alkali metals in liquid ammonia to form white crystalline MGeH3 which contain the GeH3− anion. The germanium hydrohalides with one, two and three halogen atoms are colorless reactive liquids.
The first organogermanium compound was synthesized by Winkler in 1887; the reaction of germanium tetrachloride with diethylzinc yielded tetraethylgermane (). Organogermanes of the type R4Ge (where R is an alkyl) such as tetramethylgermane () and tetraethylgermane are accessed through the cheapest available germanium precursor germanium tetrachloride and alkyl nucleophiles. Organic germanium hydrides such as isobutylgermane () were found to be less hazardous and may be used as a liquid substitute for toxic germane gas in semiconductor applications. Many germanium reactive intermediates are known: germyl free radicals, germylenes (similar to carbenes), and germynes (similar to carbynes). The organogermanium compound 2-carboxyethylgermasesquioxane was first reported in the 1970s, and for a while was used as a dietary supplement and thought to possibly have anti-tumor qualities.
Using a ligand called Eind (1,1,3,3,5,5,7,7-octaethyl-s-hydrindacen-4-yl) germanium is able to form a double bond with oxygen (germanone). Germanium hydride and germanium tetrahydride are very flammable and even explosive when mixed with air.
Isotopes
Germanium occurs in five natural isotopes: 70Ge, 72Ge, 73Ge, 74Ge, and 76Ge. Of these, 76Ge is very slightly radioactive, decaying by double beta decay with a half-life of 1.78 × 10^21 years. 74Ge is the most common isotope, having a natural abundance of approximately 36%. 76Ge is the least common with a natural abundance of approximately 7%. When bombarded with alpha particles, the isotope will generate stable , releasing high energy electrons in the process. Because of this, it is used in combination with radon for nuclear batteries.
At least 27 radioisotopes have also been synthesized, ranging in atomic mass from 58 to 89. The most stable of these is 68Ge, decaying by electron capture with a half-life of about 271 days. The least stable is , with a half-life of . While most of germanium's radioisotopes decay by beta decay, and decay by delayed proton emission. through isotopes also exhibit minor delayed neutron emission decay paths.
Occurrence
Germanium is created by stellar nucleosynthesis, mostly by the s-process in asymptotic giant branch stars. The s-process is a slow neutron capture of lighter elements inside pulsating red giant stars. Germanium has been detected in some of the most distant stars and in the atmosphere of Jupiter.
Germanium's abundance in the Earth's crust is approximately 1.6 ppm. Only a few minerals like argyrodite, briartite, germanite, renierite and sphalerite contain appreciable amounts of germanium. Only a few of them (especially germanite) are, very rarely, found in mineable amounts. Some zinc–copper–lead ore bodies contain enough germanium to justify extraction from the final ore concentrate. An unusual natural enrichment process causes a high content of germanium in some coal seams, discovered by Victor Moritz Goldschmidt during a broad survey for germanium deposits. The highest concentration ever found was in Hartley coal ash with as much as 1.6% germanium. The coal deposits near Xilinhaote, Inner Mongolia, contain an estimated 1600 tonnes of germanium.
Production
About 118 tonnes of germanium were produced in 2011 worldwide, mostly in China (80 t), Russia (5 t) and the United States (3 t). Germanium is recovered as a by-product from sphalerite zinc ores where it is concentrated in amounts as great as 0.3%, especially from low-temperature sediment-hosted, massive Zn–Pb–Cu(–Ba) deposits and carbonate-hosted Zn–Pb deposits. A recent study found that at least 10,000 t of extractable germanium is contained in known zinc reserves, particularly those hosted by Mississippi-Valley type deposits, while at least 112,000 t is estimated to be present in coal reserves. In 2007, 35% of demand was met by recycled germanium.
While it is produced mainly from sphalerite, it is also found in silver, lead, and copper ores. Another source of germanium is fly ash of power plants fueled from coal deposits that contain germanium. Russia and China used this as a source for germanium. Russia's deposits are located in the far east of Sakhalin Island, and northeast of Vladivostok. The deposits in China are located mainly in the lignite mines near Lincang, Yunnan; coal is also mined near Xilinhaote, Inner Mongolia.
The ore concentrates are mostly sulfidic; they are converted to the oxides by heating under air in a process known as roasting:
GeS2 + 3 O2 → GeO2 + 2 SO2
Some of the germanium is left in the dust produced, while the rest is converted to germanates, which are then leached (together with zinc) from the cinder by sulfuric acid. After neutralization, only the zinc stays in solution while germanium and other metals precipitate. After removing some of the zinc in the precipitate by the Waelz process, the remaining Waelz oxide is leached a second time. The dioxide is obtained as precipitate and converted with chlorine gas or hydrochloric acid to germanium tetrachloride, which has a low boiling point and can be isolated by distillation:
GeO2 + 4 HCl → GeCl4 + 2 H2O
GeO2 + 2 Cl2 → GeCl4 + O2
Germanium tetrachloride is either hydrolyzed to the oxide (GeO2) or purified by fractional distillation and then hydrolyzed. The highly pure GeO2 is now suitable for the production of germanium glass. It is reduced to the element by reacting it with hydrogen, producing germanium suitable for infrared optics and semiconductor production:
GeO2 + 2 H2 → Ge + 2 H2O
The germanium for steel production and other industrial processes is normally reduced using carbon:
GeO2 + C → Ge + CO2
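As a rough illustration of the mass balance in this final reduction step, the sketch below applies standard molar masses to a hypothetical charge of purified dioxide; the 100 kg figure is invented for the example and is not taken from any process data.

```python
# Stoichiometry of the reduction GeO2 + 2 H2 -> Ge + 2 H2O (1 mol Ge per mol GeO2).
M_Ge, M_O = 72.63, 16.00          # molar masses in g/mol
M_GeO2 = M_Ge + 2 * M_O           # ~104.6 g/mol

charge_GeO2_kg = 100.0            # hypothetical charge of purified dioxide
moles_GeO2 = charge_GeO2_kg * 1000 / M_GeO2
ge_yield_kg = moles_GeO2 * M_Ge / 1000

print(f"{charge_GeO2_kg:.0f} kg GeO2 yields at most {ge_yield_kg:.1f} kg Ge")  # ~69.4 kg
```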
Applications
The major end uses for germanium in 2007, worldwide, were estimated to be: 35% for fiber-optics, 30% infrared optics, 15% polymerization catalysts, and 15% electronics and solar electric applications. The remaining 5% went into such uses as phosphors, metallurgy, and chemotherapy.
Optics
The notable properties of germania (GeO2) are its high index of refraction and its low optical dispersion. These make it especially useful for wide-angle camera lenses, microscopy, and the core part of optical fibers. It has replaced titania as the dopant for silica fiber, eliminating the subsequent heat treatment that made the fibers brittle. At the end of 2002, the fiber optics industry consumed 60% of the annual germanium use in the United States, but this is less than 10% of worldwide consumption. GeSbTe is a phase change material used for its optic properties, such as that used in rewritable DVDs.
Because germanium is transparent in the infrared wavelengths, it is an important infrared optical material that can be readily cut and polished into lenses and windows. It is especially used as the front optic in thermal imaging cameras working in the 8 to 14 micron range for passive thermal imaging and for hot-spot detection in military, mobile night vision, and fire fighting applications. It is used in infrared spectroscopes and other optical equipment that require extremely sensitive infrared detectors. It has a very high refractive index (4.0) and must be coated with anti-reflection agents. Particularly, a very hard special antireflection coating of diamond-like carbon (DLC), refractive index 2.0, is a good match and produces a diamond-hard surface that can withstand much environmental abuse.
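To see why an antireflection coating is needed, the sketch below evaluates the standard Fresnel formula for reflectance at normal incidence for an uncoated germanium surface in air; the index values are the rounded figures quoted above and the calculation is purely illustrative.

```python
# Fresnel reflectance at normal incidence between media of refractive index n1 and n2.
def reflectance(n1, n2):
    return ((n1 - n2) / (n1 + n2)) ** 2

n_air, n_ge = 1.0, 4.0
print(f"Uncoated Ge face reflects about {reflectance(n_air, n_ge):.0%} of incident light")  # ~36%
```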
Electronics
Germanium can be alloyed with silicon, and silicon–germanium alloys are rapidly becoming an important semiconductor material for high-speed integrated circuits. Circuits using the properties of Si-SiGe heterojunctions can be much faster than those using silicon alone. The SiGe chips, with high-speed properties, can be made with low-cost, well-established production techniques of the silicon chip industry.
High efficiency solar panels are a major use of germanium. Because germanium and gallium arsenide have nearly identical lattice constants, germanium substrates can be used to make gallium-arsenide solar cells. Germanium is the substrate of the wafers for high-efficiency multijunction photovoltaic cells for space applications, such as the Mars Exploration Rovers, which use triple-junction gallium arsenide on germanium cells. High-brightness LEDs, used for automobile headlights and to backlight LCD screens, are also an important application.
Germanium-on-insulator (GeOI) substrates are seen as a potential replacement for silicon on miniaturized chips. CMOS circuits based on GeOI substrates have recently been reported. Other uses in electronics include phosphors in fluorescent lamps and solid-state light-emitting diodes (LEDs). Germanium transistors are still used in some effects pedals by musicians who wish to reproduce the distinctive tonal character of the "fuzz"-tone from the early rock and roll era, most notably the Dallas Arbiter Fuzz Face.
Germanium has been studied as a potential material for implantable bioelectronic sensors that are resorbed in the body without generating harmful hydrogen gas, replacing zinc oxide- and indium gallium zinc oxide-based implementations.
Other uses
Germanium dioxide is also used in catalysts for polymerization in the production of polyethylene terephthalate (PET). The high brilliance of this polyester is especially favored for PET bottles marketed in Japan. In the United States, germanium is not used for polymerization catalysts.
Due to the similarity between silica (SiO2) and germanium dioxide (GeO2), the silica stationary phase in some gas chromatography columns can be replaced by GeO2.
In recent years germanium has seen increasing use in precious metal alloys. In sterling silver alloys, for instance, it reduces firescale, increases tarnish resistance, and improves precipitation hardening. A tarnish-proof silver alloy trademarked Argentium contains 1.2% germanium.
Semiconductor detectors made of single crystal high-purity germanium can precisely identify radiation sources—for example in airport security. Germanium is useful for monochromators for beamlines used in single crystal neutron scattering and synchrotron X-ray diffraction. The reflectivity has advantages over silicon in neutron and high energy X-ray applications. Crystals of high purity germanium are used in detectors for gamma spectroscopy and the search for dark matter. Germanium crystals are also used in X-ray spectrometers for the determination of phosphorus, chlorine and sulfur.
Germanium is emerging as an important material for spintronics and spin-based quantum computing applications. In 2010, researchers demonstrated room-temperature spin transport and, more recently, donor electron spins in germanium have been shown to have very long coherence times.
Strategic importance
Due to its use in advanced electronics and optics, germanium is considered a technology-critical element (by, e.g., the European Union), essential to the green and digital transitions. As China controls 60% of global germanium production, it holds a dominant position in the world's supply chains.
On 3 July 2023 China suddenly imposed restrictions on the exports of germanium (and gallium), ratcheting up trade tensions with Western allies. Invoking "national security interests," the Chinese Ministry of Commerce informed that companies that intend to sell products containing germanium would need an export licence. The products/compounds targeted are: germanium dioxide, germanium epitaxial growth substrate, germanium ingot, germanium metal, germanium tetrachloride and zinc germanium phosphide. It sees such products as "dual-use" items that may have military purposes and therefore warrant an extra layer of oversight.
The dispute opened a new chapter in the increasingly fierce technology race that has pitted the United States, and to a lesser extent Europe, against China. The US wants its allies to heavily curb, or outright prohibit, exports of advanced electronic components to the Chinese market to prevent Beijing from securing global technology supremacy. China denied any tit-for-tat intention behind the germanium export restrictions.
Following China's export restrictions, Russian state-owned company Rostec announced an increase in germanium production to meet domestic demand.
Germanium and health
Germanium is not considered essential to the health of plants or animals. Germanium in the environment has little or no health impact. This is primarily because it usually occurs only as a trace element in ores and carbonaceous materials, and the various industrial and electronic applications involve very small quantities that are not likely to be ingested. For similar reasons, end-use germanium has little impact on the environment as a biohazard. Some reactive intermediate compounds of germanium are poisonous (see precautions, below).
Germanium supplements, made from both organic and inorganic germanium, have been marketed as an alternative medicine capable of treating leukemia and lung cancer. There is, however, no medical evidence of benefit; some evidence suggests that such supplements are actively harmful. U.S. Food and Drug Administration (FDA) research has concluded that inorganic germanium, when used as a nutritional supplement, "presents potential human health hazard".
Some germanium compounds have been administered by alternative medical practitioners as non-FDA-allowed injectable solutions. Soluble inorganic forms of germanium used at first, notably the citrate-lactate salt, resulted in some cases of renal dysfunction, hepatic steatosis, and peripheral neuropathy in individuals using them over a long term. Plasma and urine germanium concentrations in these individuals, several of whom died, were several orders of magnitude greater than endogenous levels. A more recent organic form, beta-carboxyethylgermanium sesquioxide (propagermanium), has not exhibited the same spectrum of toxic effects.
Certain compounds of germanium have low toxicity to mammals, but have toxic effects against certain bacteria.
Precautions for chemically reactive germanium compounds
While use of germanium itself does not require precautions, some of germanium's artificially produced compounds are quite reactive and present an immediate hazard to human health on exposure. For example, germanium tetrachloride and germane (GeH4) are a liquid and a gas, respectively, that can be very irritating to the eyes, skin, lungs, and throat.
See also
Germanene
Vitrain
History of the transistor
Notes
References
External links
Germanium at The Periodic Table of Videos (University of Nottingham)
Chemical elements
Metalloids
Infrared sensor materials
Optical materials
Group IV semiconductors
Chemical elements predicted by Dmitri Mendeleev
Materials that expand upon freezing
Chemical elements with diamond cubic structure | Germanium | Physics,Chemistry | 5,731 |
23,544,441 | https://en.wikipedia.org/wiki/Ruys%27s%20bird-of-paradise | Ruys's bird-of-paradise is a bird in the family Paradisaeidae that was long presumed to be an intergeneric hybrid between a magnificent bird-of-paradise and a lesser bird-of-paradise, an identity since confirmed by DNA analysis.
History
Only one adult male specimen is known of this hybrid; it is held in the Netherlands Natural History Museum in Leiden, and comes from near Warsembo, on the west coast of Geelvink Bay, in north-western New Guinea.
Notes
References
Hybrid birds of paradise
Birds of West Papua
Intergeneric hybrids | Ruys's bird-of-paradise | Biology | 119 |
1,144,937 | https://en.wikipedia.org/wiki/Mucin-16 | Mucin-16 (MUC-16) also known as Ovarian cancer-related tumor marker CA125 is a protein that in humans is encoded by the MUC16 gene. MUC-16 is a member of the mucin family glycoproteins. MUC-16 has found application as a tumor marker or biomarker that may be elevated in the blood of some patients with specific types of cancers, most notably ovarian cancer, or other conditions that are benign.
Structure
Mucin 16 is a membrane associated mucin that possesses a single transmembrane domain. A unique property of MUC16 is its large size. MUC16 is more than twice as long as MUC1 and MUC4 and contains about 22,000 amino acids, making it the largest membrane-associated mucin.
MUC16 is composed of three different domains:
An N-terminal domain
A tandem repeat domain
A C-terminal domain
The N-terminal and tandem repeat domains are both entirely extracellular and highly O-glycosylated. All mucins contain a tandem repeat domain that has repeating amino acid sequences high in serine, threonine and proline. The C-terminal domain contains multiple extracellular SEA (sea urchin sperm protein, enterokinase, and agrin) modules, a transmembrane domain, and a cytoplasmic tail. The extracellular region of MUC16 can be released from the cell surface by undergoing proteolytic cleavage. MUC16 is thought to be cleaved at a site in the SEA modules.
Function
MUC16 is a component of the ocular surface (including the cornea and conjunctiva), the respiratory tract and the female reproductive tract epithelia. Since MUC16 is highly glycosylated it creates a hydrophilic environment that acts as a lubricating barrier against foreign particles and infectious agents on the apical membrane of epithelial cells. Also, the cytoplasmic tail of MUC16 has been shown to interact with cytoskeleton by binding members of the ERM protein family. The expression of mucin 16 has been shown to be altered in dry eye, cystic fibrosis, and several types of cancers.
Role in cancer
MUC16 (CA-125) has been shown to play a role in advancing tumorigenesis and tumor proliferation by several different mechanisms.
As a biomarker
Testing of CA-125 blood levels has been proposed as useful in treating ovarian cancer. While the test can give useful information for women already known to have ovarian cancer, CA-125 testing has not been found useful as a screening method because of the uncertain correlation between CA-125 levels and cancer. In addition to ovarian cancer, CA-125 can be elevated in patients who have conditions such as endometrial cancer, fallopian tube cancer, lung cancer, breast cancer, and gastrointestinal cancer. It can also be increased in pregnant women. Because of the wide variety of conditions that can increase serum levels, CA-125 is not used to detect cancer, but it is often used to monitor responses to chemotherapy, relapse, and disease progression in ovarian cancer patients.
Metastatic invasion
MUC16 is also thought to participate in cell-to-cell interactions that enable the metastasis of tumor cells. This is supported by evidence showing that MUC16 binds selectively to mesothelin, a glycoprotein normally expressed by the mesothelial cells of the peritoneum (the lining of the abdominal cavity). MUC16 and mesothelin interactions are thought to provide the first step in tumor cell invasion of the peritoneum. The region (residues 296–359) consisting of 64 amino acids at the N-terminus of cell surface mesothelin has been experimentally established as the functional binding domain (named IAB) for MUC16/CA125. An immunoadhesin (HN125) that consists of the IAB domain of mesothelin and the human Fc portion has the ability to disrupt the heterotypic cancer cell adhesion mediated by the MUC16-mesothelin interaction.
Mesothelin has also been found to be expressed in several types of cancers including mesothelioma, ovarian cancer and squamous cell carcinoma. Since mesothelin is also expressed by tumor cells, MUC16 and mesothelial interactions may aid in the gathering of other tumor cells to the location of a metastasis, thus increasing the size of the metastasis.
Induced motility
Evidence suggests that expression of the cytoplasmic tail of MUC16 enables tumor cells to grow, promotes cell motility and may facilitate invasion. This appears to be due to the ability of the C-terminal domain of MUC16 to facilitate signaling that leads to a decrease in the expression of E-cadherin and increase the expression of N-cadherin and vimentin, which are expression patterns consistent with epithelial-mesenchymal transition.
Chemotherapy resistance
MUC16 may also play a role in reducing the sensitivity of cancer cells to drug therapy. For example, overexpression of MUC16 has been shown to protect cells from the effects of genotoxic drugs, such as cisplatin.
Discovery
CA-125 was initially detected using the murine monoclonal antibody designated OC125. Robert Bast, Robert Knapp and their research team first isolated this monoclonal antibody in 1981. The protein was named "cancer antigen 125" because OC125 was the 125th antibody produced against the ovarian cancer cell line that was being studied.
References
External links
CA-125 blood test urban legend at snopes.com
CA-125 at Lab Tests Online
CA-125 analyte monograph from The Association for Clinical Biochemistry and Laboratory Medicine.
Tumor markers
Proteins | Mucin-16 | Chemistry,Biology | 1,240 |
8,819,112 | https://en.wikipedia.org/wiki/Rockin%27%20Tug | Rockin' Tug is a flat tugboat ride manufactured by Zamperla. The ride is manufactured in both traveling and park versions. It is the first of a line of new "halfpiperides". Zamperla's Disk'O is another popular ride from that "family". The difference is that the Rockin' Tug has a friction wheel, while the Disk'O is power-driven.
Design and operation
Twenty-four riders are loaded into a tugboat-shaped gondola, in six rows of four. The rows face into the center of the ride. The gondola is driven back and forth along a track shaped in a concave arc, rocking as it goes. While this is happening, the entire tugboat rotates around its center.
The traveling version of the ride racks onto a single 28 foot trailer.
Variations
Several theme variations exist. The most common one is a tug boat, but other versions include a longboat, a pirate ship, and a skateboard.
Appearances
Australia - Two, a travelling model owned by Better Amusement Hire and Big Red Boat Ride at Dreamworld
Austria - At least two travelling models (Grubelnik)
Belgium - At least one travelling model
Canada - One: Canada's Wonderland where it is known as "Lucy's Tugboat". A former Rockin' Tug was found at Galaxyland in West Edmonton Mall, named the "Rockin' Rocket", which closed in 2006.
Germany - At least five: one travelling model (Schäfer), and at least four permanent versions in Germany (Kernwasserwunderland, Legoland)
Ireland - in 2016 a Rockin' Tug (named that way too) was added to Tayto Park in the Eagle's Nest nearby Shot Tower
Japan - At least one permanent version at Toshimaen Amusement Park, Tokyo. Acquired in 2005.
The Netherlands - At least six: Rolling Stones at Drouwenerzand, Dolle Dobber at DippieDoe, Alpenrutsche at Toverland, Moby Dick at Deltapark Neeltje Jans and Koning Juliana Toren, Fogg's Trouble at Attractiepark Slagharen.
New Zealand - One, a traveling model owned by Mahons Amusements
Sweden - At least one: Lilla Lots at Liseberg.
Switzerland - At least one travelling model (Lang)
United Arab Emirates - Magic Planet, Mall of the Emirates, Dubai. 24 Seater Rocking Tug. Stationary model.
United Kingdom - At least eleven: Rockin` Tug at Butlins Skegness, Butlins Minehead and Butlins Bognor Regis, Rocking Tug at Flamingo Land Resort., Rocking Bulstrode at Drayton Manor's Thomas Land (themed after the Thomas and Friends character Bulstrode the barge), Heave Ho at Alton Towers, Longboat Invader at Legoland Windsor (a Lego/Viking themed longship), Rockin' Tug at The Flambards Experience, Trawler Trouble at Chessington World of Adventures, Rockin' Tug at Woodlands Family Theme Park Devon, Timber Tug Boat at Thorpe Park, Sk8boarda at Adventure Island (debated - it's unknown if it's built by Zamperla or by the park itself), and Kontiki at Paultons Park.
United States of America - Thirteen: traveling models owned by Murphy Brothers Exposition, Shamrock Shows, American Traveling Shows, and by D & K Amusements; and park models owned by Alabama Splash Adventure, Six Flags over Georgia, Valleyfair (as Lucy's Tugboat), Edaville (as Rockin' Bulstrode), Knott's Berry Farm (as Rapid River Fun), Knoebels, Elitch Gardens Theme Park, Kennywood (as SS Kenny), Trimper's Rides, Waldameer & Water World (as SS Wally), SeaWorld San Antonio, Oaks Amusement Park and Santa's Village (as SS Peppermint Twist).
References
External links
Zamperla page on Rockin' Tug
Schwarzkopf.Coaster.net
Amusement rides
Zamperla
Articles containing video clips | Rockin' Tug | Physics,Technology | 855 |
33,216,484 | https://en.wikipedia.org/wiki/Ericoid | The word "ericoid" is used in modern biological terminology for its literal meanings and for extensions. Ericoid could have more than one meaning, but in practice the most common use is in reference to a plant's habit, to describe small, tough (sclerophyllous) leaves like those of heather. Etymologically the word is derived from two Greek roots via Latin adaptations. Firstly, the Ancient Greek name for plants now known in English as "heather" was "ἐρείκη", believed to be Latinised by Pliny as "Erica". Carl Linnaeus, who predominantly wrote in Latin, used Erica as the name of the genus which still is known as such.
However, when Linnaeus named an organism, using a specific epithet that described it as being like some particular thing, he commonly did so by appending the suffix "—οειδης". That was a contraction of "—ο + ειδος", denoting a likeness of form. In its Latinised form it became: "—oides". An example is the entry 9413 Stilbe ericoides according to Wappler's Index Plantarum to Linnaeus' "Species Plantarum". Further derivations emerged at need or convenience, such as "—oidea".
Accordingly, ericoid could have more than one meaning and it has been misapplied from time to time in the literature. For example, sometimes a writer uses it where the correct word would be "ericaceous", meaning a member of, or related to, the family Ericaceae. More precisely ericoid means "resembling an Erica" in some relevant way. Applied to a plant, ericoid generally means that apart from its sclerophyllous leaves, it has short internodes so that the leaves more or less cover the usually slender branchlets.
References
Botany | Ericoid | Biology | 387 |
71,484,847 | https://en.wikipedia.org/wiki/Natural%20resource%20valuation | Natural resource valuation is the process of estimating the monetary value of the benefits, costs, and damage of or to natural and environmental resources. It has a fundamental role in the practice of cost-benefit analysis of health, safety, and environmental issues.
Natural resource valuation is more apparent in the conduct of natural resource damage assessments (NRDA) and cost-benefit analysis of environmental restoration (ER) and waste management. It is a key exercise in economic analysis and its results provide important information about values of environmental goods and services.
Natural resource valuation studies are often aimed at assessing economic values that represent the public good characteristics of natural systems. Willingness-to-pay measures are typically used to estimate ecosystem goods and services that benefit not only a select few but wider society. There are two types of valuation: market valuation and non-market valuation. Market valuation estimates the total willingness to pay based on price (demand), while non-market valuation estimates willingness to pay either by examining the behavior of respondents or the demand for related goods. Most environmental resources are valued using the constructive approach. There are a number of methods involved in natural resource valuation, including the revealed preference and stated preference methods.
It is important to value natural resources because they contribute towards fiscal revenue, income, and poverty reduction. Sectors related to natural resources use provide jobs and are often the basis of livelihoods in poorer communities. Owing to this fundamental importance of natural resources, they must be managed sustainably.
Importance
Resource valuation is used as an input to generate better policy recommendations to support protected areas management and its linkages to spatial planning process. Resource valuation studies should be conducted in the conflict areas (trade-off area) in order to get information on costs and benefits.
History
Contingent valuation (CV) has been used by economists to value public goods for about twenty-five years. The approach posits a hypothetical market for an unpriced good and asks individuals to state the value they place on a proposed change in its quantity, quality, or access. Development of the CV concept has been described in reviews by Cummings, Brookshire, and Schulze (1986) and Mitchell and Carson (1989). The approach is now widely used to value many different goods whose quantity or quality might be affected by the decisions of a public agency or private developer. Three market-based techniques that have recorded a significant history of natural and environmental resource valuations are described; the market price approach, the appraisal method, and resource replacement costing. Contingent Valuation is also called Direct valuation of environmental damages and refers to the direct questioning of affected parties to assess the value of the natural resource.
See also
Ecosystem valuation
Payment for Ecosystem Services
Ecological goods and services
Ecological economics
References
External links
Advanced methods of natural resource valuation
Environmental resources
Resource valuation
Valuation of natural resources, efficiency and equity
Resources
Environmental planning
Environmental science
Natural resources | Natural resource valuation | Environmental_science | 560 |
9,386,904 | https://en.wikipedia.org/wiki/Bore%20%28engine%29 | In a piston engine, the bore (or cylinder bore) is the diameter of each cylinder.
Engine displacement is calculated based on bore, stroke length and the number of cylinders:
displacement = π/4 × bore² × stroke × number of cylinders
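As a quick numerical illustration of this formula, the sketch below computes the swept volume of a hypothetical four-cylinder engine; the bore and stroke figures are made up for the example.

```python
import math

def displacement_cc(bore_mm, stroke_mm, cylinders):
    """Swept volume in cm^3: (pi/4) * bore^2 * stroke * number of cylinders."""
    bore_cm, stroke_cm = bore_mm / 10.0, stroke_mm / 10.0
    return math.pi / 4.0 * bore_cm ** 2 * stroke_cm * cylinders

# Hypothetical four-cylinder engine with an 81 mm bore and a 95.5 mm stroke:
print(round(displacement_cc(81.0, 95.5, 4)))  # ~1968 cc, i.e. a "2.0 litre" engine
```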
The stroke ratio, determined by dividing the bore by the stroke, traditionally indicated whether an engine was designed for power at high engine speeds (rpm) or torque at lower engine speeds. The term "bore" can also be applied to the bore of a locomotive cylinder or steam engine pistons.
In steam locomotives
The term bore also applies to the cylinder of a steam locomotive or steam engine.
Bore pitch
Bore pitch is the distance between the centerline of a cylinder bore to the centerline of the next cylinder bore adjacent to it in an internal combustion engine. It's also referred to as the "mean cylinder width", "bore spacing", "bore center distance" and "cylinder spacing".
The bore pitch is always larger than the inside diameter of the cylinder (the bore and piston diameter) since it includes the thickness of both cylinder walls and any water passage separating them. This is one of the first dimensions required when developing a new engine, since it limits maximum cylinder size (and therefore, indirectly, maximum displacement), and determines the length of the engine (L4, 6, 8) or of that bank of cylinders (V6, V8 etc.).
In addition, the positions of the main bearings must be between individual cylinders (L4 with 5 main bearings, or L6 with 7 main bearings - only one rod journal between main bearings), or between adjacent pairs of cylinders (L4 with 3 main bearings, L6 or V6 with 4 main bearings, or V8 with 5 main bearings - two rod journals between main bearings).
In some older engines (such as the Chevrolet Gen-2 "Stovebolt" inline-six, the GMC straight-6 engine, the Buick Straight-eight, and the Chrysler "Slant 6") the bore pitch is additionally extended to allow more material between the main bearing webs in the block. For example, in an L6 the first pair (#1 & 2), center pair (#3 & 4), and rear pair (#5 & 6) of cylinders that share a pair of main bearings have a smaller pitch than between #2 & 3 and #4 & 5 that "bridge" a main bearing.
Since the start-up expense of casting an engine block is very high, this is a strong incentive to retain this dimension for as long as possible to amortize the tooling cost over a large number of engines. If and when the engine is further refined, modified or enlarged, the bore pitch may be the only dimension retained from its predecessor. The bore diameter is frequently increased to the limit of minimal wall thickness, the water passage is eliminated between each pair of adjacent cylinders, the deck height is increased to accommodate a longer stroke, etc. but in general if the bore pitch is the same, the engines are related.
As an example of development, the Chrysler 277" polyspheric V8, first introduced in 1956, was gradually increased in size by bore and stroke to 326" by 1959, then received a drastic make-over in 1964 to conventional "wedge" combustion chambers, then modified again for stud-mounted rocker arms, and finally underwent an even greater re-design to become the modern 5.7 liter hemi. All of these engines retain the original 4.460" bore pitch distance set down in 1956.
Hybrid heads
"Hybrid" is the term commonly used to identify an engine modified for high performance by adapting a cylinder head from another (sometimes completely different) brand, size, model or type engine. Note: using a later head of the same engine "family" isn't a true hybrid, but mere modernization.
In some cases, two heads from the donor (source) engine are joined end-to-end to match the number of cylinders on the subject engine (such as using three cylinders each of two V8 heads on a Chevrolet inline-six).
Identical or extremely similar bore pitch is what makes this possible, or (almost) impossible.
See also
Bore pitch
Compression ratio
Engine displacement
References
Engine technology | Bore (engine) | Technology | 849 |
67,046,831 | https://en.wikipedia.org/wiki/%28%CE%B1/Fe%29%20versus%20%28Fe/H%29%20diagram | The [α/Fe] versus [Fe/H] diagram is a type of graph commonly used in stellar and galactic astrophysics. It shows the logarithmic ratio number densities of diagnostic elements in stellar atmospheres compared to the solar value. The x-axis represents the abundance of iron (Fe) vs. hydrogen (H), that is, [Fe/H]. The y-axis represents the combination of one or several of the alpha process elements (O, Ne, Mg, Si, S, Ar, Ca, and Ti) compared to iron (Fe), denoted as [α/Fe].
These diagrams enable the assessment of nucleosynthesis channels and galactic evolution in samples of stars as a first-order approximation. They are among the most commonly used tools for Galactic population analysis of the Milky Way. The diagrams use abundance ratios normalised to the Sun, (placing the Sun at (0,0) in the diagram). This normalisation allows for the easy identification of stars in the Galactic stellar high-alpha disk (historically known as the Galactic stellar thick disk), typically enhanced in [α/Fe], and stars in the Galactic stellar low-alpha disk (historically known as the Galactic stellar thin disk), with [α/Fe] values as low as the Sun. Furthermore, the diagrams facilitate the identification of stars that are likely born in times or environments significantly different from the stellar disk. This includes metal-poor stars (with low [Fe/H] < -1), which likely belong to the stellar halo or accreted features.
History
George Wallerstein and Beatrice Tinsley were early users of the [α/Fe] vs. [Fe/H] diagrams. In 1962, George Wallerstein noted, based on the analysis of a sample of 34 Galactic field stars, that "the [α/Fe] distribution seems to consist of a normal distribution about zero, plus seven stars with [α/Fe] > 0.20. These may be called [α/Fe]-rich stars."
In 1979, Beatrice Tinsley used the interpretation of these observations with the theory throughout her work on Stellar lifetimes and abundance ratios in chemical evolution. While discussing oxygen as one of the alpha-process elements, she wrote, 'As anticipated, the observed [O/Fe] excess in metal-poor stars can be explained qualitatively if much of the iron comes from SN I. [...] The essential ingredient in accounting for the [O/Fe] excess is that a significant fraction of oxygen must come from stars with shorter lives than those that make much of the iron.' In 1980, in Evolution of the Stars and Gas in Galaxies, she said, 'Relative abundances of elements heavier than helium provide information on both nucleosynthesis and galactic evolution [...].'
These relative abundances and the diagrams depicting different relative abundances are now among the most commonly used diagnostic tools of Galactic Archaeology. Bensby et al. (2014) used them to explore the Milky Way disk in the solar neighbourhood. Hayden et al. (2015) used them for their work on the chemical cartography of our Milky Way disk. It has been suggested that the diagram be named for Tinsley and Wallerstein.
Notation
The diagram depicts two astrophysical quantities of stars, their iron abundance relative to hydrogen [Fe/H] - a tracer of stellar metallicity - and the enrichment of alpha process elements relative to iron, [α/Fe].
The iron abundance is noted as the logarithm of the ratio of a star's iron abundance compared to that of the Sun:
[Fe/H] = log10(N_Fe / N_H)_star − log10(N_Fe / N_H)_Sun,
where N_Fe and N_H are the number of iron and hydrogen atoms per unit of volume respectively.
It is a tracer of the contributions of galactic chemical evolution to the nucleosynthesis of iron. These differ for the birth environments of stars, based on their star formation history and starburst strengths. Major synthesis channels of iron are type Ia and type II supernovae.
The ratio of alpha process elements to iron, also known as the alpha-enhancement, is written as the logarithm of the alpha process elements O, Ne, Mg, Si, S, Ar, Ca, and Ti to Fe compared to that of the Sun:
[α/Fe] = log10(N_α / N_Fe)_star − log10(N_α / N_Fe)_Sun,
where N_α and N_Fe are the number of alpha process element and iron atoms per unit of volume respectively.
In practice, not all of these elements can be measured in stellar spectra and the alpha-enhancement is therefore commonly reported as a simple or error-weighted average of the individual alpha process element abundances.
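A minimal numerical sketch of these bracket abundances follows; the number densities and solar reference values are invented placeholders, not recommended solar abundances.

```python
import math

def bracket(n_x_star, n_y_star, n_x_sun, n_y_sun):
    """[X/Y] = log10(N_X / N_Y)_star - log10(N_X / N_Y)_Sun."""
    return math.log10(n_x_star / n_y_star) - math.log10(n_x_sun / n_y_sun)

# Hypothetical number densities (arbitrary units):
fe_h  = bracket(2.0e-5, 1.0,    3.2e-5, 1.0)      # [Fe/H]
mg_fe = bracket(1.3e-5, 2.0e-5, 1.0e-5, 3.2e-5)   # [Mg/Fe] as an alpha-element proxy

print(f"[Fe/H] = {fe_h:+.2f}, [Mg/Fe] = {mg_fe:+.2f}")
# A metal-poor, alpha-enhanced (high-alpha disk-like) star shows [Fe/H] < 0 and [alpha/Fe] > 0.
```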
References
Diagrams
Comparisons
Stellar astronomy | (α/Fe) versus (Fe/H) diagram | Astronomy | 941 |
7,852,887 | https://en.wikipedia.org/wiki/Cylindric%20algebra | In mathematics, the notion of cylindric algebra, developed by Alfred Tarski, arises naturally in the algebraization of first-order logic with equality. This is comparable to the role Boolean algebras play for propositional logic. Cylindric algebras are Boolean algebras equipped with additional cylindrification operations that model quantification and equality. They differ from polyadic algebras in that the latter do not model equality.
The cylindric algebra should not be confused with the measure theoretic concept cylindrical algebra that arises in the study of cylinder set measures and the cylindrical σ-algebra.
Definition of a cylindric algebra
A cylindric algebra of dimension α (where α is any ordinal number) is an algebraic structure (A, +, ·, −, 0, 1, c_κ, d_κλ)_{κ, λ < α} such that (A, +, ·, −, 0, 1) is a Boolean algebra, c_κ a unary operator on A for every κ < α (called a cylindrification), and d_κλ a distinguished element of A for every κ and λ < α (called a diagonal), such that the following hold:
(C1) c_κ 0 = 0
(C2) x ≤ c_κ x
(C3) c_κ (x · c_κ y) = c_κ x · c_κ y
(C4) c_κ c_λ x = c_λ c_κ x
(C5) d_κκ = 1
(C6) If λ ∉ {κ, μ}, then d_κμ = c_λ (d_κλ · d_λμ)
(C7) If κ ≠ λ, then c_κ (d_κλ · x) · c_κ (d_κλ · −x) = 0
Assuming a presentation of first-order logic without function symbols,
the operator c_κ x models existential quantification over variable v_κ in formula x, while the operator d_κλ models the equality of variables v_κ and v_λ. Hence, reformulated using standard logical notations, the axioms read as
(C1) ∃v_κ. false ⇔ false
(C2) φ ⇒ ∃v_κ. φ
(C3) ∃v_κ. (φ ∧ ∃v_κ. ψ) ⇔ (∃v_κ. φ) ∧ (∃v_κ. ψ)
(C4) ∃v_κ. ∃v_λ. φ ⇔ ∃v_λ. ∃v_κ. φ
(C5) (v_κ = v_κ) ⇔ true
(C6) If v_λ is a variable different from both v_κ and v_μ, then (v_κ = v_μ) ⇔ ∃v_λ. (v_κ = v_λ ∧ v_λ = v_μ)
(C7) If v_κ and v_λ are different variables, then ∃v_κ. (v_κ = v_λ ∧ φ) ∧ ∃v_κ. (v_κ = v_λ ∧ ¬φ) ⇔ false
Cylindric set algebras
A cylindric set algebra of dimension α is an algebraic structure (A, ∪, ∩, −, ∅, X^α, c_κ, d_κλ) such that A is a field of sets over X^α, c_κ S is given by the cylinder {y ∈ X^α : there is z ∈ S with y(i) = z(i) for all i ≠ κ}, and d_κλ is given by the diagonal set {y ∈ X^α : y(κ) = y(λ)}. It necessarily validates the axioms C1–C7 of a cylindric algebra, with ∪ instead of +, ∩ instead of ·, set complement for complement, the empty set as 0, X^α as the unit, and ⊆ instead of ≤. The set X is called the base.
A representation of a cylindric algebra is an isomorphism from that algebra to a cylindric set algebra. Not every cylindric algebra has a representation as a cylindric set algebra. It is easier to connect the semantics of first-order predicate logic with cylindric set algebra. (For more details, see .)
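The following sketch of a small finite-dimensional cylindric set algebra may make the definitions concrete; the function names and the tiny base set are illustrative choices, not taken from the references.

```python
from itertools import product

def cylindrify(S, kappa, X):
    """c_kappa S: all tuples that agree with some member of S on every coordinate except possibly kappa."""
    return {s[:kappa] + (x,) + s[kappa + 1:] for s in S for x in X}

def diagonal(kappa, lam, X, alpha):
    """d_{kappa lam}: all tuples whose kappa-th and lam-th coordinates coincide."""
    return {t for t in product(X, repeat=alpha) if t[kappa] == t[lam]}

X, alpha = {0, 1, 2}, 2
unit = set(product(X, repeat=alpha))          # X^alpha, the unit of the algebra
S = {(0, 1), (2, 2)}

assert S <= cylindrify(S, 0, X)               # axiom (C2): x <= c_kappa x
assert diagonal(0, 0, X, alpha) == unit       # axiom (C5): d_{kappa kappa} = 1
```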
Generalizations
Cylindric algebras have been generalized to the case of many-sorted logic (Caleiro and Gonçalves 2006), which allows for a better modeling of the duality between first-order formulas and terms.
Relation to monadic Boolean algebra
When κ and λ are restricted to being only 0, then c_κ becomes ∃, the diagonals can be dropped out, and the following theorem of cylindric algebra (Pinter 1973):
c_κ (x + y) = c_κ x + c_κ y
turns into the axiom
∃(x + y) = ∃x + ∃y
of monadic Boolean algebra. The axiom (C4) drops out (becomes a tautology). Thus monadic Boolean algebra can be seen as a restriction of cylindric algebra to the one variable case.
See also
Abstract algebraic logic
Lambda calculus and Combinatory logic—other approaches to modelling quantification and eliminating variables
Hyperdoctrines are a categorical formulation of cylindric algebras
Relation algebras (RA)
Polyadic algebra
Cylindrical algebraic decomposition
Notes
References
Leon Henkin, J. Donald Monk, and Alfred Tarski (1971) Cylindric Algebras, Part I. North-Holland. .
Leon Henkin, J. Donald Monk, and Alfred Tarski (1985) Cylindric Algebras, Part II. North-Holland.
Robin Hirsch and Ian Hodkinson (2002) Relation algebras by games Studies in logic and the foundations of mathematics, North-Holland
Further reading
External links
example of cylindrical algebra by CWoo on planetmath.org
Algebraic logic | Cylindric algebra | Mathematics | 774 |
32,729,903 | https://en.wikipedia.org/wiki/Application%20Programming%20Interface%20for%20Windows | The Application Programming Interface for Windows (APIW) Standard is a specification of the Microsoft Windows 3.1 API drafted by Willows Software. It is the successor to previously proposed Public Windows Interface standard. It was created in an attempt to establish a vendor-neutral, platform-independent, open standard of the 16-bit Windows API not controlled by Microsoft.
Creation
By the end of 1990, Windows 3.0 was the top-selling software. The various graphical Windows applications had already started to reduce training time and enhance productivity on personal computers. At the same time, various Unix and Unix-based operating systems dominated technical workstations and departmental servers. The idea of a consistent application environment across heterogeneous environments was compelling to both enterprise customers and software developers.
On May 5, 1993, Sun Microsystems announced Windows Application Binary Interface (WABI), a product to run Windows software on Unix, and the Public Windows Interface (PWI) initiative, an effort to standardize a subset of the popular 16-bit Windows APIs. The PWI consortium's aims were stated as turning the proprietary Windows API into an "open, publicly available specification" and for the evolution of this specification to be the responsibility of "a neutral body". The consortium, counting Sun, IBM, Hewlett Packard and Novell among its members, proposed PWI to various companies and organizations including X/Open, IEEE and Unix International. The previous day, Microsoft had announced SoftPC, a Windows to Unix product created by Insignia Solutions as part of a program where Microsoft licensed their Windows source code to select third parties, which in the following year became known as Windows Interface Source Environment (WISE). Later that month, Microsoft also announced Windows NT, a version of Windows designed to run on workstations and servers.
ECMA involvement
In February 1994, the PWI Specification Committee sent a draft specification to X/Open—who rejected it in March, after being threatened by Microsoft's assertion of intellectual property rights (IPR) over the Windows APIs—and the European Computer Manufacturers' Association (ECMA). In September, now part of an ECMA delegation, they made an informational presentation about the project at the ISO SC22 plenary meeting in The Hague, Netherlands. Their goal was to make it an ISO standard in order to force Microsoft to comply with it (in Windows) or risk not being able to sell to European or Asian governments who can only buy ISO standards-compliant products.
In April 1995, Willows Software, Inc. (formerly Multiport, Inc.) a Saratoga, California-based Canopy-funded company, that had been working on Windows to Unix technologies (inherited from then defunct Hunter Systems, Inc.) since early 1993, joined the ad hoc ECMA group. This group became Technical Committee 37 in August (about the time Windows 95 was released). Willows vowed to complete a full draft specification by the end of the year. In October, the draft specification was completed under the name Application Programming Interface for Windows (APIW). This was accepted as ECMA-234 in December and was put on the fast-track program to become an ISO standard.
ISO delay
Again, Microsoft claimed intellectual property over Windows APIs and ISO put the standard on hold pending proof of their claims. The delay lasted until November 1997, when, hearing no response from Microsoft, ISO announced they were pushing through with the standard. However, there is no record of it ever being approved as an ISO standard.
See also
References
Ecma standards
ISO standards
Windows components
Application programming interfaces | Application Programming Interface for Windows | Technology | 724 |
54,284,846 | https://en.wikipedia.org/wiki/Aquilanti%E2%80%93Mundim%20deformed%20Arrhenius%20model | In chemical kinetics, the Aquilanti–Mundim deformed Arrhenius model is a generalization of the standard Arrhenius law.
Overview
Arrhenius plots, which are used to represent the effects of temperature on the rates of chemical and biophysical processes and on various transport phenomena in materials science, may exhibit deviations from linearity. Account of curvature is provided here by a formula, which involves a deformation of the exponential function, of the kind recently encountered in treatments of non-extensivity in statistical mechanics.
Theoretical model
The Arrhenius equation (Svante Arrhenius, 1889) is often used to characterize the effect of temperature on the rates of chemical reactions. The Arrhenius formula gave a simple and powerful law, which in a vast generality of cases describes the dependence on absolute temperature of the rate constant k as follows,
k(T) = A exp(−Ea / (R T))    (1)
where T is the absolute temperature, R is the gas constant, Ea is the activation energy, and the pre-exponential factor A varies only slightly with temperature. The meaning attached to the energy of activation Ea is as the minimum energy which molecules need to have to overcome the threshold to reaction. Therefore, the year 1889 can be considered as the birth date of reactive dynamics as the study of the motion of atoms and molecules in a reactive event. Eq. (1) was motivated by the 1884 discovery by van't Hoff of the exponential dependence on the temperature of the equilibrium constants for most reactions: Eq. (1), when used for both a reaction and its inverse, agrees with van't Hoff's equation interpreting chemical equilibrium as dynamical at the microscopic level. In case of a single rate-limited thermally activated process, an Arrhenius plot gives a straight line, from which the activation energy and the pre-exponential factor can both be determined.
However, advances in experimental and theoretical methods have revealed the existence of deviation from Arrhenius behavior (Fig.1).
To overcome this problem, Aquilanti and Mundim proposed (2010) a generalized Arrhenius law based on an algebraic deformation of the usual exponential function. Starting from the Euler exponential definition given by,
exp(x) = lim_{n→∞} (1 + x/n)^n    (2)
defining the deformed exponential function as,
exp_d(x) ≡ (1 + d x)^(1/d)    (3)
identifying the deformation parameter d as a continuous generalization of 1/n. At the limit d → 0 the d-exponential function, exp_d(x), coincides with the usual exponential according to the well-known limit due to Euler, that is,
lim_{d→0} (1 + d x)^(1/d) = exp(x)    (4)
This definition was first used in thermodynamics and statistical mechanics by Landau. In the most recent scientific literature, there is a variety of deformed algebras with applications in different areas of science. Considering the d-exponential function, we introduce the deformed reaction rate coefficient, k_d(T), in the following way,
k_d(T) = A (1 − d ε / (R T))^(1/d)    (5)
and at the limit d → 0 the usual Arrhenius reaction law is recovered (Figs. 1 and 1a), with ε playing the role of the activation energy; A is the pre-exponential factor. Taking the logarithm of k_d(T), Eq. (5), we obtain the following expression for the non-Arrhenius plot,
ln k_d(T) = ln A + (1/d) ln(1 − d ε / (R T))    (6)
The logarithm of the reaction rate coefficient against reciprocal temperature shows a curvature, rather than the straight-line behavior described by the usual Arrhenius law (Figs.1 and 1a).
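The sketch below evaluates the deformed rate coefficient of Eq. (5) and checks that it approaches the ordinary Arrhenius value as d → 0; the kinetic parameters are arbitrary illustrative numbers, not data from the cited studies.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def k_arrhenius(T, A, eps):
    """Usual Arrhenius law, Eq. (1)."""
    return A * math.exp(-eps / (R * T))

def k_deformed(T, A, eps, d):
    """Deformed rate coefficient, Eq. (5); valid while 1 - d*eps/(R*T) > 0."""
    return A * (1.0 - d * eps / (R * T)) ** (1.0 / d)

A, eps, T = 1.0e13, 50_000.0, 300.0   # illustrative pre-exponential factor and barrier (J/mol)
for d in (-0.01, -0.001, 0.001, 0.01):
    print(f"d = {d:+.3f}:  k_d = {k_deformed(T, A, eps, d):.3e}")
print(f"d -> 0 (Arrhenius) limit: {k_arrhenius(T, A, eps):.3e}")
```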
In Tolman's definition the barrier or activation energy is a phenomenological quantity defined in terms of the slope of an Arrhenius law; it is usually assumed to be independent of absolute temperature (T), requires only local equilibrium and in general is given by
Ea(T) ≡ −R d(ln k) / d(1/T)    (7)
where k is the rate constant and R is the ideal gas constant.
To generalize Tolman's definition, in the case of chemical reactions, we assume that the barrier or activation energy is a function of the temperature given by the following differential equation,
d Ea(T) / d(1/T) = (d/R) Ea(T)²  or  →  Ea(T) = ε / (1 − d ε / (R T))    (8)
where Ea(T) → ε (constant) at the limit d → 0 and the usual activation energy law is recovered as a constant. Noticeably, contrary to the usual Arrhenius case, the barrier or activation energy is temperature dependent and has different concavities depending on the value of the d parameter (see Figs. 1 and 1a). Thus, a positive convexity means that the activation energy decreases with increasing temperature. This general result is explained by a new Tolman-like interpretation of the activation energy through Eq. (8).
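As a numerical cross-check of this Tolman-like picture, the sketch below differentiates ln k_d with respect to 1/T by a central finite difference and compares the result with the closed form of Eq. (8) as reconstructed above; all parameter values are arbitrary.

```python
import math

R, A, eps, d = 8.314, 1.0e13, 50_000.0, -0.01   # illustrative values (d < 0: sub-Arrhenius)

def ln_k(T):
    """Logarithm of the deformed rate coefficient, Eq. (6)."""
    return math.log(A) + (1.0 / d) * math.log(1.0 - d * eps / (R * T))

def Ea_numeric(T, h=1.0e-7):
    """Tolman definition, Eq. (7): Ea = -R d(ln k)/d(1/T), by central difference in u = 1/T."""
    u = 1.0 / T
    return -R * (ln_k(1.0 / (u + h)) - ln_k(1.0 / (u - h))) / (2.0 * h)

def Ea_closed(T):
    """Closed form from Eq. (8): Ea(T) = eps / (1 - d*eps/(R*T))."""
    return eps / (1.0 - d * eps / (R * T))

for T in (200.0, 300.0, 600.0):
    print(T, round(Ea_numeric(T)), round(Ea_closed(T)))  # the two columns agree
```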
In the recent literature, it is possible to find different applications that verify the applicability of this new chemical reaction formalism.
Apparent Reciprocal Activation Energy or Transitivity
The activation energy can thus be considered as temperature dependent. The reciprocal-activation versus reciprocal-temperature relationship was postulated as the basic expansion, for which Tolman's theorem can provide a formal mathematical justification. When the activation energy is written as the logarithmic derivative of the rate constant with respect to 1/T, Eq. (7), the concept represents an energetic obstacle to the progress of the reaction: therefore its reciprocal can be interpreted as a measure of the propensity for the reaction to proceed and is defined as the specific transitivity γ(T) of the process:
γ(T) = 1 / Ea(T) = 1/ε − d / (R T)    (9)
This notation emphasizes the fact that in general the transitivity can take a gamut of values, but not including abrupt changes, e.g. in the mechanism or in the phases of the reactants. If a Laurent expansion is admitted in a neighbourhood around a reference value, it is possible to recover Eqs. (6) and (8).
What is called the sub-Arrhenius behaviour would traditionally be accounted for by introducing a tunnelling parameter in the conventional Transition-State Theory (TST). In the d-TST formulation, the exponential factor in the TST rate constant is replaced by the deformed exponential function, Eq. (3), yielding:
k_d-TST(T) = (k_B T / h) (Q‡ / Q_R) (1 − d ε‡ / (k_B T))^(1/d)    (10)
where h is the Planck constant, k_B is the Boltzmann constant, Q_R is the product of the (translational, vibrational and rotational) partition functions of the reactants, and Q‡ is the partition function of the activated complex. In Ref., the significance of the parameter d and an explicit procedure for its calculation were proposed: it is inversely proportional to the square of the barrier height (ε‡) and directly proportional to the square of the frequency for crossing the barrier (ν‡) at a saddle point in the potential energy surface:
d = −(1/3) (h ν‡ / (2 ε‡))²    (11)
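A sketch of this d-TST rate expression is given below; the deformation parameter follows Eq. (11) as reconstructed here, and the barrier height, crossing frequency and partition-function ratio are invented placeholders, so the output is purely illustrative.

```python
import math

kB = 1.380649e-23    # Boltzmann constant, J/K
h  = 6.62607015e-34  # Planck constant, J s

def d_parameter(barrier_J, freq_Hz):
    """Deformation parameter, Eq. (11): d = -(1/3) * (h*nu / (2*E0))**2 (negative: tunnelling regime)."""
    return -(1.0 / 3.0) * (h * freq_Hz / (2.0 * barrier_J)) ** 2

def k_dTST(T, barrier_J, freq_Hz, Q_ratio=1.0):
    """d-TST rate constant, Eq. (10), with the deformed exponential in place of exp."""
    d = d_parameter(barrier_J, freq_Hz)
    return (kB * T / h) * Q_ratio * (1.0 - d * barrier_J / (kB * T)) ** (1.0 / d)

barrier = 40.0e3 / 6.022e23   # ~40 kJ/mol expressed per molecule (placeholder)
freq = 3.0e13                 # ~1000 cm^-1 barrier-crossing frequency (placeholder)
for T in (200.0, 300.0, 500.0):
    print(T, f"{k_dTST(T, barrier, freq):.3e}")
```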
Fields of Applications and Related Subjects
This theory was initially developed for applications in chemical kinetics problems as above discussed, but has since been applied to a wide range of phenomena:
the characterization of reaction rates in Chemistry,
Transition state theory (TST),
Astrochemical process,
quantum tunneling,
stereodynamics stereochemistry of kinetics processes, solid-state diffusive reactions,
physical processes in supercooled liquids,
carbon nanotubes composite,
transport phenomena,
anomalous diffusion,
Brownian particles moving,
transport dynamics in ionic conductors,
a continuum approach for modeling gravitational effects on grain settling and shape distortion,
collision theory,
rate theory connecting kinetics to thermodynamics,
nonextensive statistical mechanics,
different fields of plasma chemical-physics,
modelling of high-temperature dark current in multi-quantum well structures from MWIR to VLWI,
molecular semiconductor problems,
Metallurgy: perspectives on lubricant additive corrosion,
Langevin stochastic dynamics,
predicting solubility of solids in supercritical solvents,
survey on operational perishables (food) quality control and logistics,
activation energies of biodiesel reactions,
flux over population analysis,
molecular quantum mechanics,
biological activity,
drug design,
protein folding.
Motor proteins,
Microbial growth laws,
Water dynamics
Classroom on Motivation and Sociability
Virial coefficients in chemical reaction
Diffusion in a binary colloidal mixture
Claisen–Schmidt condensation
Thermotherapy
Landscape topography
3D-printed powder components
Glass alloy
Li-ion Batteries
References
Chemical kinetics | Aquilanti–Mundim deformed Arrhenius model | Chemistry | 1,590 |
27,037,046 | https://en.wikipedia.org/wiki/Mobile%20harassment | Mobile harassment refers to the act of sending any type of text message, sex photo message, video message, or voicemail from a mobile phone that causes the receiver to feel harassed, threatened, tormented, humiliated, embarrassed or otherwise victimized. It is recognized as a form of cyberbullying.
Prevalence
Mobile harassment has emerged as a worldwide trend due to the prevalence of mobile devices. Recent studies indicate that harassment through mobile texting is particularly pervasive in countries like the United Kingdom and Australia, while the United States experiences a higher prevalence of harassment through the Internet.
In 2009, a survey in the United Kingdom revealed that approximately 14 percent of participants reported they had been victims of mobile harassment ranging from name calling, threatening text messages, or photos or videos intended to frighten or intimidate. Another study from Queensland, Australia, found that 93.7 percent of teenagers experienced mobile harassment of some kind. This study concluded that girls tend to experience and perpetrate more mobile bullying than boys. A 2021 study indicated that there is a 1.8 percent higher prevalence of girls claiming to be victims of cyberbullying.
Students who identify as transgender experience cyberbullying at a rate 11.7% higher than their peers. While teenagers who identify as transgender are less likely to commit mobile harassment, non-heterosexual teenagers are more likely to be both victims and offenders.
Cyberbullying offending peaks at around 13 years old, but the age of victims peaks at about 14 to 15 years old. Researchers have also revealed that approximately one-third of adolescents have been subjected to harassment or cyberbullying. However, the actual number of victims could be higher, as some may not recognize they have experienced mobile harassment, while others may choose not to acknowledge it due to feelings of humiliation.
Cases of mobile harassment often transpire outside of school. However, situations where the perpetrators and victims are classmates can cause the harassment to spill over to the students' school environment.
Solutions
In the U.S., there is no federal legislation that specifically addresses mobile harassment and cyberbullying. However, numerous schools have policies and regulations to prevent mobile harassment. For instance, administrators in some schools prohibit students from taking pictures and sharing visual materials within the school premises.
Certain schools have taken a step further by proposing complete bans on the use of mobile devices on school grounds. Waldorf Schools for instance, adhere to a strict anti-technology philosophy aimed at eradicating cyberbullying on campus. This approach has gained traction among families in Silicon Valley and is now used in more than 1,000 institutions across 91 countries, including 136 schools in the U.S.
Private organizations are also increasingly adopting regulatory policies to prevent mobile harassment. For instance, Facebook adopted internal harassment and bullying policies. The social media company, which is cited as one of the most commonly used networks to harass people, also adopted measures that enable them to "remove content that appears to purposefully target private individuals with the intention of degrading or shaming them.”
Raising Awareness
In November 2009, LG Mobile Phones launched an advertising campaign in the United States that used humor aimed to encourage teens to think before they text. The campaign, produced by DiGennaro Communications, featured James Lipton and carried the tagline "Before you text, give it a ponder."
In popular culture
The television show Gossip Girl features numerous episodes centered around misinterpreted or questionable text messages.
References
External links
Resources that help teens handle mobile harassment - BullyingUK
How to Defeat Online Trolls Retrieved 2017-02-23
Cyberbullying
Harassment and bullying | Mobile harassment | Biology | 733 |
41,202,355 | https://en.wikipedia.org/wiki/Runcicantellated%2024-cell%20honeycomb | In four-dimensional Euclidean geometry, the runcicantellated 24-cell honeycomb is a uniform space-filling honeycomb.
Alternate names
Runcicantellated icositetrachoric tetracomb/honeycomb
Prismatorhombated icositetrachoric tetracomb (pricot)
Great diprismatodisicositetrachoric tetracomb
Related honeycombs
See also
Regular and uniform honeycombs in 4-space:
Tesseractic honeycomb
16-cell honeycomb
24-cell honeycomb
Rectified 24-cell honeycomb
Snub 24-cell honeycomb
5-cell honeycomb
Truncated 5-cell honeycomb
Omnitruncated 5-cell honeycomb
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, p. 296, Table II: Regular honeycombs
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) Model 118
o3x3x4o3x - apricot - O118
5-polytopes
Honeycombs (geometry) | Runcicantellated 24-cell honeycomb | Physics,Chemistry,Materials_science | 346 |
4,989,808 | https://en.wikipedia.org/wiki/Pakistan%20Institute%20of%20Nuclear%20Science%20%26%20Technology | The Pakistan Institute of Nuclear Science & Technology (PINSTECH) is a federally funded research and development laboratory in Nilore, Islamabad, Pakistan.
The site was designed by the American architect Edward Durell Stone and its construction was completed in 1965. It has been described as "[maybe] the most architecturally stunning physics complex in the world".
In response to the war with India in 1971, the lab was repurposed from its original civilian mission into a primary weapons laboratory. Since the 1990s, the lab has increasingly refocused on its civilian mission, and it maintains a broad portfolio of research opportunities in supercomputing, renewable energy, physical sciences, philosophy, materials science, medicine, environmental science, and mathematics.
Overview
The Pakistan Institute of Nuclear Science & Technology (PINSTECH) is one of the nation's leading research and development institutions affiliated with national security. It is a principal national laboratory responsible for ensuring the safety, security, and reliability of the nation's nuclear weapons program by advancing applications in science and technology.
PINSTECH is located in Nilore, about southeast of Islamabad, and was designed by the American firm AMF Atomics and Edward Durell Stone, who once remarked: "This....has been my greatest work. I am proud that it looks like it belongs in this country."
Owned by the Government of Pakistan, it is managed by the Pakistan Atomic Energy Commission. Scientific research programs are supported at the laboratory through the Pakistan Institute of Engineering and Applied Sciences, also in Nilore. The laboratory covers around area.
History
The establishment of the Pakistan Institute of Nuclear Science & Technology (Pinstech) was an embodiment of the 1953 Atoms for Peace initiative and a long-sought goal of Abdus Salam, who had been lobbying for a professional physics laboratory since 1951. Budget constraints and a lack of interest from the government administration left a deep impression on Salam, who was determined to create an institution to which scientists from developing countries would come as a right to interact with their peers from industrially advanced countries without permanently leaving their own countries. Construction of Pinstech began after Salam was able to secure funding from the United States in 1961.
Eventually, Salam and I. H. Usmani approached Glenn T. Seaborg for further funding from the United States government, which pledged a sum of US$350,000 on the condition that the Pakistan Atomic Energy Commission set up a research reactor of its own. Contrary to the United States' financial pledges, the actual cost of building Pinstech reportedly neared US$6.6 million, funded and paid by Pakistani taxpayers by 1965.
From 1965–69, the Pinstech had an active and direct laboratory-to-laboratory interaction with the American national laboratories such as Oak Ridge, Argonne, Livermore, and Sandia.
The scientific library of the institute included a large section containing historical references and literature on the Manhattan Project, brought by Abdus Salam in 1971 prior to the start of the nuclear weapons program under Zulfikar Ali Bhutto's administration.
The Pakistan Atomic Energy Commission (PAEC) hired the laboratory's first director Rafi Muhammad, a professor of physics at the Government College University, Lahore (GCU), who affiliated the Pinstech with the Quaid-i-Azam University in 1967, bearing some special materials testing. Soon, the scientists from Institute of Theoretical Physics of the Quaid-i-Azam University had an opportunity to seek permanent research employment in physics at the laboratory.
Major Projects
Strategic deterrence
After the costly war with India in 1971, repurposing the Pinstech Laboratory was difficult since it was never intended to be a weapons laboratory. Initially, plutonium pit production at Pinstech was quite difficult, and its tiny research reactor could never be a source of weapons-grade plutonium. In spite of these shortcomings, investigations and classified studies on the equation of state of plutonium were started by physicists at the Pinstech laboratory in 1972. The Pinstech laboratory became a main research and development laboratory when it initiated its indigenous program for the production of plutonium oxide (plutonia) and uranium oxide (urania) in 1973.
The Pinstech laboratory was also a learning center for gaining expertise in the nuclear fuel cycle, and it provided training to other facilities after learning the basics from European industry prior to 1969. At the Pinstech laboratory, a pilot plant (New Labs) was built for reprocessing spent reactor fuel for plutonium pit production. Besides its fundamental programs in the physical sciences, the laboratory provided a ground for Pakistani scientists to design and engineer weapon designs, amid fears that India was rapidly developing a nuclear bomb.
As Nilore became a restricted site, research efforts were directed towards producing first reactor-grade plutonium and eventually military-grade plutonium from spent fuel rods through a chemical process, "reprocessing". Design work was carried out in 20 different laboratories at the site, and it was the New Labs facility that produced the first batch of weapon-grade plutonium, 239Pu, by 1983. This weapon-grade plutonium was the source material for a nuclear test conducted at the Ras Koh Range on 30 May 1998.
Nuclear fuel cycle
Scientists at the Pinstech laboratory initiated studies on an indigenous nuclear fuel cycle despite having only basic familiarity with it. In 1973, the lab conducted several studies on the properties of uranium oxide, eventually producing the first fuel bundle in 1976, which was shipped to the Karachi Nuclear Power Plant to keep its grid operations running. Pinstech also took initiatives in understanding the chemistry of uranium hexafluoride, and the technology was transferred to the Islamabad Uranium Conversion Facility in 1974. In addition, the understanding of UF6 eventually led to the production of Zircaloys, which were also first produced at the lab; the technology was later transferred to the Kundian Nuclear Fuel Complex in 1980.
As of today, PINSTECH has shifted to peacetime research in medicine, biology, materials and physics. Its molybdenum radioisotope facility has been used to produce medical radioisotopes for treating cancer. Scientists from the Nuclear Institute for Agriculture and Biology (NIAB) and the Nuclear Institute for Food and Agriculture (NIFA) have used the PINSTECH facilities to conduct advanced research in both medical and food sciences.
Plutonium research
Since its repurposing in 1972, the Pinstech laboratory has conducted research into the equation of state of plutonium, its phase diagrams, and its properties. In 1987, Pinstech developed a technology by fabricating a Chromium kF39 and developed an innovative technique, "in-situ leaching", which allowed the extraction of actinides from uranium ore without the need for conventional milling.
Computer scientists at the Pinstech Laboratory built a supercomputer based on a vintage IBM computer architecture that allowed physicists at Pinstech to model the behavior of plutonium without actual nuclear testing. Research work on plutonium is conducted at its special-purpose facility, the New Laboratories, where weapon-grade nuclear explosives are designed and manufactured. Much of the work on plutonium is, however, classified.
The Centralized Analysis Facility (CAF) is utilized for plutonium chemistry and other areas of actinide science, and experiments are conducted at the Central Diagnostic Laboratory (CDL); both labs are among the most capable facilities in Pakistan.
Besides its national security mission, the lab promotes applications of radiation and isotope technology in various scientific and technological disciplines to support the nation. It is also working on important non-nuclear fields, which are crucial for the development of science and technology in the country. In 2020, expansion work was started at Pinstech lab to help its "ability to produce isotopes for medical use, especially for preparation of radiopharmaceuticals for cancer patients while also helping the country in its aspirations in other applications of peaceful use of nuclear technology."
Nuclear reactors
PINSTECH has particle accelerators and also operates two small nuclear research reactors, a reprocessing plant and another experimental neutron source based on:
PARR-I Reactor-Utilize Low-Enriched Uranium (LEU)
PARR-II Reactor-Utilize High-Enriched Uranium (HEU)
New Labs-Plutonium reprocessing (PR) facility.
Charged Particle Accelerator- a nuclear particle accelerator.
Fast Neutron Generator- An experimental neutron generator.
Research divisions
PINSTECH has four research directorates, each headed by an appointed director-general. The PINSTECH divisions are listed below:
Directorate of Science
Physics Research Division (PRD)
The directorate of science consists of four divisions, each headed by a deputy director-general. In 2004, the PINSTECH administration brought together all of the groups and merged them into one single division, known as the Physics Research Division (PRD). PINSTECH also merged the Nuclear Physics Division (NPD) and the Radiation Physics Division (RPD), as well as the Nuclear and Applied Chemistry Divisions. Below is the list of research groups working in the PRD.
Atomic and Nuclear Radiation Group
Fast Neutron Diffraction Group (FNDG)
Electronic and Magnetic Materials Group (EMMG)
Nuclear Track Studies Group
Nuclear Geology Group
Radiation Damage Group
Diagnostics Group
Mathematical Physics Group (MPG)
Theoretical Physics Group (TPG)
Chemistry Research Division (CRD)
Nuclear Chemistry Division (NCD) - The Nuclear Chemistry Division was founded in 1966 by Dr. Iqbal Hussain Qureshi. As of today, it is the largest division of PINSTECH, comprising five major groups. The Nuclear Chemistry Division has gained experience in the characterization of reactor-grade and high-purity materials using advanced analytical techniques, and it deals with environmental and health-related problems.
Applied Chemistry Division
Laser Development Division
Directorate of System and Services
The Directorate of System and Services (DSS), headed by Dr. Matiullah, consists of 5 research divisions, listed below:
Health Physics Division (HPD) - The Health Physics Division (HPD) was established in 1965 by a small team of health physicists. Founded as a group, it was made a division of PINSTECH in 1966. The division's research focuses on medical physics and the use of nuclear technology in the medical and agricultural sciences.
Nuclear Engineering Division (NED) - The Nuclear Engineering Division (NED), headed by Dr. Masood Iqbal, is one of the most prestigious and well-known divisions of the Pakistan Institute of Nuclear Science and Technology (PINSTECH). The division was established in 1965 with the objective of developing technical expertise mainly in the area of nuclear reactor technology. The NED has provided technical assistance and training in the field of reactor technology.
Electronics Maintenance Division (EMD) - The Electronics Division (ED), headed by Mr. Hameed, was formally established in 1967, recognizing its important role in scientific research and development at PINSTECH. The division has rendered valuable service to the scientific effort by carrying out maintenance of scientific equipment and development of electronic instruments for use in research and development projects. In 1989, the ED was involved in the upgrade program of the PARR-I Reactor led by PAEC chairman Munir Ahmad Khan. The ED supplied and developed electronic material and systems for the PARR-I Reactor, and successfully converted PARR-I from HEU fuel to LEU fuel. An outstanding achievement of the ED was the design and engineering of the nuclear instrumentation of the research reactor (PARR-1), which required a very high degree of sophistication and reliability.
General Services Division (GSD) - The General Services Division (GSD) is responsible for the routine operational research, maintenance repairs of the laboratories, upkeep and development of engineering services such as civil, electrical, mechanical workshops, air conditioning as well as water supply to PINSTECH and annexed labs.
Computer Division (CD) - The Computer Division (CD) was established in January 1980 with the aim of providing service and support to the researchers and scientists of PINSTECH in the area of computer hardware and software. Although the Computer Division still provides computer hardware and software services, it has gradually shifted its activities from being only a service provider to an important design and development division.
Directorate of Technology
The Directorate of Technology (D-TECH) consists of 3 divisions: the Materials Division (MD), the Isotope Application Division (IAD), and the Isotope Production Division (IPD). It is currently overseen by Dr. Gulzar Hussain Zahid, Chief Engineer.
Materials Division (MD) - The Materials Division (MD) was established in 1973, with the aim of providing technical assistance to other PAEC projects on the development, production and characterization of materials.
Isotope Application Division (IAD) - The Isotope Application Division (IAD) was established in PINSTECH by Dr. Naeem Ahmad Khan in early 1971. Known as the problem solver of the institute, the IAD is responsible for solving problems in isotope hydrology, environmental pollution, non-destructive testing, industrial applications, life sciences, and isotope geology. The IAD also extends expert services to solve relevant problems faced by the industrial sector and different organizations.
Isotope Production Division (IPD) - The Isotope Production Division (IPD) contains the Molly Group, the Generator Production Group, and the Kit Production Group. The IPD is also involved in the modification of the existing isotope production facility.
Directorate of Coordination
The Directorate of Coordination, headed by Engr. Iqbal Hussain Khan, is an administrative directorate consisting of 3 technical support divisions: the Computation, Information and Communication Technologies (CICT)/Management Information System (MIS) Division, the Scientific Information Division (SID), and the Programme Coordination Division (PCD).
Computation Information & Communication Technologies Division/Management Information Systems (MISD) - The CICT/MIS division, headed by Dr. Syed Zubair Ahmad, was established in 1980 for developing computation and information technology infrastructure at PINSTECH. Initially, mainframe computer systems like the VAX-11/780 were deployed to provide computational support to the scientific community at PINSTECH. Later, with the advent of distributed computing technologies, numerous distributed systems were deployed to achieve higher processing and storage capacity than mainframe computers. These include data and compute clusters, grids, clouds and applications. Data acquisition systems, enterprise resource planning (ERP) systems and advanced network architectures are also developed and deployed.
Scientific Information Division (SID) - The Scientific Information Division (SID), headed by Dr. Ishtiaq Hussain Bokhari, was established in PINSTECH in 1966. It was upgraded into a full-fledged division in 1984. The SID is the central source of scientific and technical information not only for the Pakistan Atomic Energy Commission but also for other scientific organizations and universities in the country, and is responsible for the efficient acquisition, storage, retrieval and dissemination of scientific and technical information in support of the PAEC program.
User facilities
Analytical Laboratories
Charged Particle Accelerator
Computer Oriented Services
Corrosion Testing
Environmental Studies Building
Health Physics, Radiation Safety & Radioactive Waste Management
Irradiation Laboratories
Lasers Laboratory and Testing Facility
Materials Development & Characterization
Nuclear Geological Services
Processing of Polymers
Production of Radioisotopes & Radio-pharmaceuticals
Radiation & Radioisotope Applications
Repair & Maintenance of Electronic Equipment
Scientific & Industrial Instruments
Scientific Glass Blowing
Scientific Information
Technical Services & Collaboration
Vacuum Technology Laboratory
Vibration Analysis
Director generals (DGs) of PINSTECH
Notes
References
Nuclear technology in Pakistan
Nuclear research institutes
Particle physics facilities
International research institutes
Research institutes in Pakistan
Pakistan federal departments and agencies
Constituent institutions of Pakistan Atomic Energy Commission
Edward Durell Stone buildings
Laboratories in Pakistan
Mathematical institutes
Military research of Pakistan
Chemical research institutes
Pakistan Institute of Engineering and Applied Sciences
Biological research institutes
Supercomputer sites
1965 establishments in Pakistan
Abdus Salam
Theoretical physics institutes
Nuclear weapons programme of Pakistan | Pakistan Institute of Nuclear Science & Technology | Physics,Chemistry,Engineering | 3,242 |
6,800,575 | https://en.wikipedia.org/wiki/Security%20domain | A security domain is an application or a set of applications that collectively rely on a shared security token for processes such as authentication, authorization, and session management. In essence, a security token is granted to a user following their active authentication using a user ID and password within the security domain. The token establishes a foundation of trust, enabling secure interactions across the applications within the defined security domain.
A security domain is the determining factor in the classification of an enclave of servers/computers. A network with a different security domain is kept separate from other networks. For example, NIPRNet, SIPRNet, JWICS, and NSANet are all kept separate.
Examples of a security domain include:
All the web applications that trust a session cookie issued by a Web Access Management product
All the Windows applications and services that trust a Kerberos ticket issued by Active Directory
In an identity federation that spans two different organizations that share a business partner, customer or business process outsourcing relation – a partner domain would be another security domain with which users and applications (from the local security domain) interact.
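As a minimal illustration of the shared-token idea described above, the sketch below (Python, illustrative only; the token format, the issue_token/validate helpers and the application names are hypothetical and not taken from any particular product) shows two applications in the same security domain trusting one token issued after a single authentication:

```python
import hmac, hashlib, json, base64, time

SECRET = b"domain-shared-secret"   # shared by all apps in the security domain (hypothetical)

def issue_token(user_id: str, ttl: int = 3600) -> str:
    """Issued once, after the user actively authenticates with ID and password."""
    payload = json.dumps({"sub": user_id, "exp": time.time() + ttl}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def validate(token: str) -> dict | None:
    """Any application in the domain can verify the token without re-authenticating the user."""
    try:
        body, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(body.encode())
    except ValueError:
        return None
    if not hmac.compare_digest(sig, hmac.new(SECRET, payload, hashlib.sha256).hexdigest()):
        return None                # token was not issued inside this security domain
    claims = json.loads(payload)
    return claims if claims["exp"] > time.time() else None

token = issue_token("alice")                 # authentication happens once
for app in ("payroll-app", "helpdesk-app"):  # both apps trust the same token
    print(app, "accepts session for", (validate(token) or {}).get("sub"))
```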
Computer networking | Security domain | Technology,Engineering | 221 |
3,187,814 | https://en.wikipedia.org/wiki/Spectator%20ion | A spectator ion is an ion that exists both as a reactant and a product in a chemical equation of an aqueous solution.
For example, in the reaction of aqueous solutions of sodium carbonate and copper(II) sulfate:
2 Na+(aq) + CO3^2−(aq) + Cu^2+(aq) + SO4^2−(aq) → 2 Na+(aq) + SO4^2−(aq) + CuCO3(s)
The Na+ and SO4^2− ions are spectator ions since they remain unchanged on both sides of the equation. They simply "watch" the other ions react and do not participate in any reaction, hence the name. They are present in total ionic equations to balance the charges of the ions. The Cu^2+ and CO3^2− ions, on the other hand, combine to form a precipitate of solid CuCO3. In reaction stoichiometry, spectator ions are removed from a complete ionic equation to form a net ionic equation. For the above example this yields:
So: 2 Na+(aq) + CO3^2−(aq) + Cu^2+(aq) + SO4^2−(aq) → 2 Na+(aq) + SO4^2−(aq) + CuCO3(s) (where Na+ and SO4^2− are the spectator ions)
⇒ Cu^2+(aq) + CO3^2−(aq) → CuCO3(s)
Spectator ion concentration affects only the Debye length. In contrast, the concentrations of potential-determining ions affect the surface potential (through surface chemical reactions) as well as the Debye length.
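For reference, a commonly used general expression for the Debye length mentioned above is sketched below (standard electrolyte-theory notation, not taken from this article):

```latex
% Debye (screening) length of an electrolyte; standard general form, with
% \varepsilon the permittivity of the medium, k_B T the thermal energy,
% n_i the number density of ion species i and q_i its charge.
\lambda_D = \sqrt{\frac{\varepsilon\, k_B T}{\sum_i n_i q_i^{2}}}
% Every dissolved ion, including spectator ions, contributes to the sum,
% which is why spectator-ion concentration changes the Debye length.
```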
Net ionic equation
A net ionic equation ignores the spectator ions that were part of the original equation. So, the net ionic equation only shows the ions which reacted to produce a precipitate. Therefore, the total ionic reaction is different from the net reaction.
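As a further worked example of removing spectator ions (a standard textbook case, not from this article: mixing aqueous silver nitrate and sodium chloride, where insoluble silver chloride precipitates):

```latex
% Complete ionic equation (all dissolved species written as ions):
\mathrm{Ag^{+}(aq) + NO_3^{-}(aq) + Na^{+}(aq) + Cl^{-}(aq)
        \rightarrow AgCl(s) + Na^{+}(aq) + NO_3^{-}(aq)}
% Na+ and NO3- appear unchanged on both sides, so they are spectator ions.
% Removing them leaves the net ionic equation:
\mathrm{Ag^{+}(aq) + Cl^{-}(aq) \rightarrow AgCl(s)}
```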
See also
Catalysis
References
Acid–base chemistry | Spectator ion | Chemistry | 341 |
19,371,115 | https://en.wikipedia.org/wiki/Multilink%20striping | Multilink striping is a type of data striping used in telecommunications to achieve higher throughput or increase the resilience of a network connection by data aggregation over multiple network links simultaneously.
Multipath routing and multilink striping are often used synonymously. However, there are some differences. When applied to end-hosts, multilink striping requires multiple physical interfaces and access to multiple networks at once. On the other hand, multiple routing paths can be obtained with a single end-host interface, either within the network, or, in case of a wireless interface and multiple neighboring nodes, at the end-host itself.
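A minimal sketch of the striping idea follows (Python, purely illustrative: the two "links" are stand-in queue objects rather than real network interfaces, and round-robin is just one possible scheduling policy):

```python
from itertools import cycle

class Link:
    """Stand-in for one physical interface/path; real code would transmit packets."""
    def __init__(self, name: str):
        self.name = name
        self.sent = []

    def send(self, packet: bytes) -> None:
        self.sent.append(packet)

def stripe(data: bytes, links: list[Link], chunk: int = 4) -> None:
    """Split the payload into chunks and spread them round-robin over all links,
    so aggregate throughput can approach the sum of the individual links."""
    scheduler = cycle(links)
    for i in range(0, len(data), chunk):
        next(scheduler).send(data[i:i + chunk])

links = [Link("wifi0"), Link("lte0")]      # hypothetical interface names
stripe(b"example payload to aggregate", links)
for link in links:
    print(link.name, link.sent)
```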
See also
RFC 1990, The PPP Multilink Protocol (MP)
Link aggregation
Computer networking | Multilink striping | Technology,Engineering | 146 |
26,210,425 | https://en.wikipedia.org/wiki/Self-framing%20metal%20buildings | Self-framing metal buildings are a form of pre-engineered building which utilizes roll formed roof and wall panel diaphragms as significant parts of the structural supporting system. Additional structural elements may include mill or cold-formed elements to stiffen the diaphragm perimeters, transfer forces between diaphragms and provide appropriate. As with most pre-engineered buildings, each building will be supplied with all necessary component parts to form a complete building system.
Design criteria
Regardless of project site location, buildings must be designed in accordance with appropriate engineering due diligence. Buildings should be designed for all applicable loads including the following:
Dead (self-weight) loads including mechanical and electrical components.
Vertical live load of the building will not be less than (per local code) pounds per square foot applied on the horizontal projection of the roof.
Wind load of the building will not be less than (per local code) miles per hour and will be distributed and applied in accordance with Chapter 16 of the International Building Code.
All combining and distribution of auxiliary equipment loads imposed on the building system will be done in accordance with Chapter 16 of the "International Building Code".
Seismic force magnitudes are not normally the controlling forces, but North American building codes require seismic analysis and assembly details to meet specific requirements regardless of force levels.
In the United States
Engineered structural design must comply with the applicable sections of the latest edition of the "Specification for Structural Steel Buildings" of the American Institute of Steel Construction (AISC) and the "Specification for the Design of Cold Formed Steel Structural Members" of the American Iron and Steel Institute (AISI).
Many areas of the United States require the use of state or local building codes which may differ from the "International Building Code". Building codes such as the "International Building Code" and Uniform Building Code (UBC) are markedly different from each other and are often revised at the local level.
In Canada
Self-framing buildings are within the scope of the National Building Code of Canada (NBCC) as adopted and modified by each Province and Territory. For steel structures, NBCC references CAN/CSA S16 Design of Steel Structures and CAN/CSA S136 North American Specification for the Design of Cold-Formed Steel Structural Members.
Manufacturers are required to be certified in accordance with CSA A660 Certification of Manufacturers of Steel Building Systems. Among other requirements, the manufacturer must supply drawings and documents sealed by a professional engineer licensed in the province or territory of the project site. A Certificate of Design and Manufacturing Conformance, duly completed by an engineer knowledgeable with the design and manufacturing, must be provided to the owner and submitted to the Authority Having Jurisdiction (AHJ) with the permit application.
The building code requires documents to be adequate to allow a review of the structural competence of the building (e.g. NBCC Part 4). The Authority Having Jurisdiction (AHJ) will usually require drawings expressing architectural aspects (e.g. Parts 3, 5 and 11). Due to the limited complexity and size of self-framed buildings, the manufacturer's drawings are frequently accepted for this purpose but the owner should be aware that this may not always be the case.
Roof and wall panels
Exterior roof panels are usually a single continuous length from eave to ridge line for gable style buildings or from low eave to high eave on single slope or shed style buildings. Many manufacturers provide minimum 24 gauge (nominal: 0.0239 inch; 0.61 mm) thick sheet steel in self-framing roof designs.
Exterior wall panels are usually a single continuous length from the base channel to the eave of the building except where interrupted by wall openings. Many manufacturers provide minimum 24 gauge (nominal: 0.0239 inch; 0.61 mm) thick sheet steel in self-framing wall designs.
Diaphragm or racking strength of the wall and roof systems is dependent on issues such as the manufacturer's panel lap seam assembly and should be qualified by full-scale testing. Openings reduce the local structural capacity of the wall or roof assembly and should be considered in the original structural design. The manufacturer may provide guidance for limited field modifications for additional openings.
Dimensions
Building width: 3 m (10' +/-) to 10 m (32' +/-) is common. Width is primarily limited by the capability of the roof panel to support the applied gravity loads (e.g. self-weight, snow) and wind uplift loads. In taller buildings, the wall panel may be a limiting factor to width due to buckling of the unsupported wall panel length.
Building length: 3 m (10' +/-) to 10 m (32' +/-) is common. Length is primarily limited by the ability of the load path to transfer loads to a vertical brace system (e.g. gable endwall). Building length can be extended with added discrete brace systems (e.g. roof level horizontal brace, portal frame, diagonal brace, interior partition shear wall).
Building height: 2.5 m (8' +/-) to 7.5 m (24' +/-) is common. Height is primarily limited by the capability of the wall panel to support the wind load. Height may be limited in narrow buildings due to shear capacity limit in the gable endwalls.
Many manufacturers publish tables relating loads to building dimensions and limitations (e.g. ratio of partial height panels to full height panels where wall openings are required).
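As a purely illustrative sketch of how the dimensional limits quoted above might be checked programmatically (Python; the ranges are simply the common values given in this section, and any real check must use the manufacturer's published load tables and the governing building code):

```python
# Common dimensional ranges quoted in this section (metres); illustrative only.
COMMON_RANGES = {
    "width":  (3.0, 10.0),
    "length": (3.0, 10.0),
    "height": (2.5, 7.5),
}

def outside_common_range(width_m: float, length_m: float, height_m: float) -> list[str]:
    """Return the dimensions falling outside the commonly quoted ranges.
    An empty list does NOT mean the design is adequate - load tables govern."""
    proposed = {"width": width_m, "length": length_m, "height": height_m}
    flags = []
    for name, value in proposed.items():
        low, high = COMMON_RANGES[name]
        if not (low <= value <= high):
            flags.append(f"{name} = {value} m is outside the common {low}-{high} m range")
    return flags

print(outside_common_range(width_m=8.0, length_m=12.0, height_m=3.0))
```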
Delivery
Typically, self-framed buildings will be shipped to site in knocked-down condition with all parts and hardware. Smaller self-framed buildings may be fully assembled at the manufacturer's facility and transported to site.
Project professionals and manufacturer-designed buildings
The project architect, sometimes called the Architect of Record, is typically responsible for aspects such as aesthetic, dimensional, occupant comfort and fire safety. When a pre-engineered building is selected for a project, the architect accepts conditions inherent in the manufacturer's product offerings for aspects such as materials, colours, structural form, dimensional modularity, etc. Despite the existence of the manufacturer's standard assembly details, the Architect remains responsible to ensure that the manufacturer's product and assembly is consistent with the building code requirements (e.g. continuity of air/vapour retarders, insulation, rain screen; size and location of exits; fire rated assemblies) and occupant or owner expectations.
Many jurisdictions recognize the distinction between the project engineer, sometimes called the Engineer of Record, and the manufacturer's employee or subcontract engineer, sometimes called a specialty engineer. The principal differences between these two entities on a project are the limits of commercial obligation, professional responsibility and liability.
The structural Engineer of Record is responsible to specify the design parameters for the project (e.g. materials, loads, design standards, service limits) and to ensure that the element and assembly designs by others are consistent in the global context of the finished building.
The specialty engineer is responsible to design only those elements which the manufacturer is commercially obligated to supply (e.g. by contract) and to communicate the assembly procedures, design assumptions and responses, to the extent that the design relies on or affects work by others, to the Engineer of Record – usually described in the manufacturer's erection drawings and assembly manuals. The manufacturer produces an engineered product but does not typically provide engineering services to the project.
In the context described, the Architect and Engineer of Record are the designers of the building and bear ultimate responsibility for the performance of the completed work. A buyer should be aware of the project professional distinctions when developing the project plan.
External Links
Metal buildings
See also
Nissen hut
References
Construction
Iron and steel buildings
Structural engineering
Structural system | Self-framing metal buildings | Technology,Engineering | 1,574 |
1,630,395 | https://en.wikipedia.org/wiki/Nokia%206680 | The Nokia 6680 is a high-end 3G mobile phone running Symbian operating system, with Series 60 2nd Edition user interface. It was announced on 14 February 2005, and was released the next month. The 6680 was Nokia's first device with a front camera, and was specifically marketed for video calling. It was also Nokia's first with a camera flash. It was the forerunner of the Nseries, which was released in April 2005; its successor being the N70.
Features
The device features Bluetooth, a 1.3-megapixel fixed-focus camera, front VGA (0.3-megapixel) video call camera, hot swappable Dual Voltage Reduced Size MMC (DV-RS-MMC) memory expansion card support, stereo audio playback and a 2.1", 176x208, 18-bit (262,144) color display with automatic brightness control based on the environment.
The 6680 is marketed as a high-end 3G device. It is a smartphone offering office and personal management facilities, including Microsoft Office compatible software. The phone initially offered an innovative active standby mode, but this was removed by some network operators (for example, Orange) under their own adapted firmware. The phone, however, has been marred by a more-than-normal number of bugs, which have included crashes and security issues. The phone was also criticised in some reviews for the relatively limited amount of RAM, with Steve Litchfield of All About Symbian unfavourably comparing the 6680 to the otherwise similarly-equipped N70, which had significantly more RAM available for applications and games.
In addition to the standard RS-MMC card, the 6680 can also use Dual Voltage Reduced Size MMC (DV-RS-MMC) cards which are also marketed as MMCmobile. While these cards have the same form factor as RS-MMC, the DV-RS-MMC have a 2nd row of connectors on the bottom.
The phone operates on GSM 900/1800/1900, and UMTS 2100 on 3G networks.
During its development, the 6680 was codenamed Milla.
This handset was similar to its predecessor, the Nokia 6630. Key changes were the new "active standby" feature, the facility for face-to-face video calls, a camera flash, better screen and improved styling.
The hardware application platform of this device is OMAP 1710.
Variants
The Nokia 6681 and Nokia 6682 are GSM handsets by Nokia, running the Series 60 user interface on the Symbian operating system. The phones are GSM-only versions of the Nokia 6680.
The only difference between the 6681 and the 6682 is the fact that the 6681 is targeted at the European market, being a GSM 900/1800/1900 tri-band handset, while the 6682 is sold for North American networks, supporting 850/1800/1900 frequencies.
In turn, both handsets' specifications are almost identical to those of the 6680, except for the lack of support for 3G networks, which means no UMTS support or video calling, and thus the absence of the front video call camera.
Related handsets
Nokia N70
References
External links
Product pages
Nokia 6680 Official product page
Nokia 6681
Nokia 6682
Rui Carmo's 6680 first impressions
OCW's 6680 review
6680 review and specifications roundup
Texas Instruments OMAP 1710
Forum Nokia specifications
Nokia 6681
Nokia 6682
Nokia 6680
Nokia smartphones
Mobile phones introduced in 2005
Discontinued flagship smartphones | Nokia 6680 | Technology | 762 |
5,956,435 | https://en.wikipedia.org/wiki/Flinders%20Island%20%28South%20Australia%29 | Flinders Island is an island in the Investigator Group off the coast of South Australia approximately west of mainland town Elliston. It was named by Matthew Flinders after his younger brother Samuel Flinders, the second lieutenant on in 1802.
It is part of the Investigator Islands Important Bird Area and has a colony of little penguins, but has suffered from the feral cats, black rats and mice, which threaten the bird life. The island is privately owned and was used mostly for farming since 1911, although that tailed off as transport costs rose. In 2020 the owners signed an agreement with the Government of South Australia which places a conservation agreement over , which is most of the island.
The island has been subject to diamond exploration following the discovery of a wide range of kimberlite indicator minerals there, which was continuing .
History
European discovery and use
Flinders named the island after his younger brother Samuel, who was the sloop's second lieutenant, on 13 February 1802. Flinders' expedition described some aspects of the island's flora and fauna. Lower land was covered with large bushes, unlike islands previously passed further north. There was very little of the white, velvety grass striplex or tufted wiry grass previously seen. A small macropod species was described as "numerous" and specimens were shot. There were a few small casuarinas growing on the island but firewood was scarce. The beaches were frequented by Australian sea lions, of which several family groups were closely inspected.
A sealing camp was in place on the south-east side of the island by the 1820s. There was also a whaling station. The sealers, their Aboriginal wives and children numbered up to twenty people at one stage. A pastoral survey of the site in 1890 identified ten separate structures associated with the sealing community, and archaeological examination of the structures took place in the 1980s. The Flinders Island Whaling and Sealing Site is listed on the South Australian Heritage Register.
Some time prior to 1911, sheep, horses, cattle, milk thistles and oats were introduced to Flinders Island, presumably by Willie Schlink and his family. At this time of the island had been cleared and was producing 1,400 to 2,000 bags of wheat annually. 4,000 sheep were kept and black and white rabbits ran wild on the island. By the time of the island's sale in 1911, 30,000 wallabies had been killed there. The island continued to be used mostly for farming, although that tailed off as transport costs rose. In the late 1970s, the island was bought by the Woolford family, who ran it as a sheep station for merinos. By 2020, there were only a few sheep and the island was used mainly for tourism (via house rental) and recreation.
Conservation
Flinders Island is one of the islands included in the Investigator Islands Important Bird Area identified by BirdLife International, a non-statutory status, awarded in 2009 because of the island group's population of fairy tern (a vulnerable species), as well as significant populations of Cape Barren geese, Pacific gull and black-faced cormorant. Other birds for which the IBA is significant include large numbers of breeding short-tailed shearwaters and white-faced storm-petrels. The biome-restricted rock parrot has been recorded from most islands in the group.
An account of Flinders Island's wildlife published in 1934 stated that penguins could "be seen waddling soldier-like among the rocks and cave entrances that constitute their homes". In 2006 there was a colony of little penguins believed to be "probably declining", with an estimated population of fewer than twenty birds, nesting at the base of some cliffs where feral cats have limited access. A risk assessment for the penguins commissioned by the Department for Environment & Natural Resources in 2016 based its recommendations on the 2006 estimate. It reported that the feral cats were responsible for the probable decline, but noted that if they were eliminated, the rat population would grow, so both would need to be removed.
A strip of land along the north coast of the island extending west from the island's most northerly headland, Point Malcolm, has been the subject of a heritage agreement as a protected area since 29 August 1995. The parcel of land, identified as No. HA1003, is sized at . Since 2012, the waters adjoining Flinders Island have been part of a habitat protection zone in the Investigator Marine Park.
In 2020 the owners signed an agreement with the Government of South Australia which places a conservation agreement over , which is most of the island. The feral cats, black rats and mice, which threaten the bird life, need to be eradicated, and threatened animals will be introduced. The three-year Flinders Island Safe Haven Project received through the Federal Government's Environment Restoration Fund, and from the Government of South Australia to co-manage the establishment of the project with the Woolford family.
Mining exploration
The island has been subject to diamond exploration following the discovery of a wide range of kimberlite indicator minerals there, which was continuing .
Citations
References
Islands of South Australia
Great Australian Bight
Seal hunting
Whaling in Australia
Wildlife conservation
Private islands of Australia | Flinders Island (South Australia) | Biology | 1,039 |
17,133,359 | https://en.wikipedia.org/wiki/Walter%20R.%20Evans | Walter Richard Evans (January 15, 1920 – July 10, 1999) was a noted American control theorist and the inventor of the root locus method and the Spirule device in 1948. He was the recipient of the 1987 American Society of Mechanical Engineers Rufus Oldenburger Medal and the 1988 AACC's Richard E. Bellman Control Heritage Award.
Biography
He was born on January 15, 1920, and received his B.E. in Electrical Engineering from Washington University in St. Louis in 1941 and his M.E. in Electrical Engineering from the University of California, Los Angeles in 1951.
Evans worked as an engineer at several companies, including General Electric, Rockwell International, and Ford Aeronautic Company.
He published a book named "Control System Dynamics" with McGraw-Hill in 1954.
He had four children. One of his children, Gregory Walter Evans, wrote an article about his father in the December 2004 issue of the IEEE Control Magazine.
Evans was taught to play chess by his grandmother, Eveline Allen Burgess, the American Women's Chess Champion from 1907 to 1920.
References
Gregory Walter Evans, "Bringing root locus to the classroom: the story of Walter R. Evans and his textbook Control System Dynamics", IEEE Control Magazine, pp. 74–81, December 2004.
External links
Biography
Memoriam
Spirule
1920 births
1999 deaths
Control theorists
American electrical engineers
Richard E. Bellman Control Heritage Award recipients
McKelvey School of Engineering alumni
UCLA Henry Samueli School of Engineering and Applied Science alumni | Walter R. Evans | Engineering | 304 |
12,310,699 | https://en.wikipedia.org/wiki/4-Nitroaniline | 4-Nitroaniline, p-nitroaniline or 1-amino-4-nitrobenzene is an organic compound with the formula C6H6N2O2. A yellow solid, it is one of three isomers of nitroaniline. It is an intermediate in the production of dyes, antioxidants, pharmaceuticals, gasoline, gum inhibitors, poultry medicines, and as a corrosion inhibitor.
Synthesis
4-Nitroaniline is produced industrially via the amination of 4-nitrochlorobenzene:
ClC6H4NO2 + 2 NH3 → H2NC6H4NO2 + NH4Cl
Below is a laboratory synthesis of 4-nitroaniline from aniline. The key step in this reaction sequence is an electrophilic aromatic substitution to install the nitro group para to the amino group. The free amino group is easily protonated, which would make it a meta director; protection of the amine with an acetyl group (as acetanilide) is therefore required. After nitration, the acetyl protecting group is removed by hydrolysis, and a separation must be performed to remove 2-nitroaniline, which is also formed in a small amount during the reaction.
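As a small illustration of the stoichiometry of the industrial amination shown above, the sketch below computes a theoretical yield (Python; the 50 g input and the 90% assumed yield are arbitrary illustrative numbers, and the molar masses are rounded):

```python
# Theoretical yield for ClC6H4NO2 + 2 NH3 -> H2NC6H4NO2 + NH4Cl
M_CHLORONITROBENZENE = 157.55   # g/mol, 4-nitrochlorobenzene (rounded)
M_NITROANILINE       = 138.13   # g/mol, 4-nitroaniline (rounded)

def nitroaniline_yield(mass_start_g: float, fractional_yield: float = 1.0) -> float:
    """Mass of 4-nitroaniline obtainable from a given mass of 4-nitrochlorobenzene.
    The reaction is 1:1 in the aromatic species, so moles carry over directly."""
    moles = mass_start_g / M_CHLORONITROBENZENE
    return moles * M_NITROANILINE * fractional_yield

print(f"{nitroaniline_yield(50.0):.1f} g theoretical")        # ~43.8 g
print(f"{nitroaniline_yield(50.0, 0.90):.1f} g at 90% yield")  # ~39.5 g
```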
Applications
4-Nitroaniline is mainly consumed industrially as a precursor to p-phenylenediamine, an important dye component. The reduction is effected using iron metal and by catalytic hydrogenation.
It is a starting material for the synthesis of Para Red, the first azo dye:
It is also a precursor to 2,6-dichloro-4-nitroaniline, also used in dyes.
Laboratory use
Nitroaniline undergoes diazotization, which allows access to 1,4-dinitrobenzene and nitrophenylarsonic acid. With phosgene, it converts to 4-nitrophenylisocyanate.
Carbon snake demonstration
When heated with sulfuric acid, it dehydrates and polymerizes explosively into a rigid foam. The exact composition of the foam is unclear, but the process is believed to involve acidic protonation as well as displacement of the amine group by a sulfonic acid moiety.
In the carbon snake demonstration, para-nitroaniline can be used instead of sugar, provided the experiment is carried out under a fume hood. With this method the reaction phase prior to the black snake's appearance is longer, but once complete, the black snake itself rises from the container very rapidly. This reaction may cause an explosion if too much sulfuric acid is used.
Toxicity
The compound is toxic by way of inhalation, ingestion, and absorption, and should be handled with care. Its median lethal dose (LD50) in rats is 750.0 mg/kg when administered orally. 4-Nitroaniline is particularly harmful to all aquatic organisms, and can cause long-term damage to the environment if released as a pollutant.
See also
2-Nitroaniline
3-Nitroaniline
References
External links
Safety (MSDS)data for p-nitroaniline
MSDS Sheet for p-nitroaniline
Sigma-Aldrich Catalog data
CDC - NIOSH Pocket Guide to Chemical Hazards
Anilines
Dyes
Hazardous air pollutants
IARC Group 3 carcinogens
Nitrobenzene derivatives
Corrosion inhibitors | 4-Nitroaniline | Chemistry | 681 |
22,142,183 | https://en.wikipedia.org/wiki/Access%20mat | An access mat is a portable platform used to support equipment used in construction and other resource-based activities, including drilling rigs, camps, tanks, and helipads. It may also be used as a structural roadway to provide passage over unstable ground, pipelines and more.
Depending on its application, an access mat may be called a rig mat, swamp mat, industrial mat, ground protection mat, road mat, construction mat, mud mat, mobile mat, safety mat, or portable roadway. Because there is no body governing or standardizing terminology (nor design and construction), and terminology inconsistencies are compounded by regional and industry-specific vernacular, the types described below should be considered a general guide to access matting.
Types
There are three general categories that describe access matting:
Construction Mats
Construction mats are used to provide relatively clean, smooth, all-weather working, walking and driving surfaces in industrial or commercial construction settings where access would not otherwise be guaranteed. This category of mat reduces crew downtime, increasing the likelihood of timely task completion.
Construction mats have multiple applications, including:
Oil and Gas sector: temporary roads, platforming on leases, pipeline terminals, and facilities
Power lines: tower erection
Electrical sub-stations
Construction of buildings and warehouses
Residential housing projects (especially where local bylaws penalize depositing mud and dirt in situations where it may wash into storm drains)
Pipeline mats may be considered a subset of construction mats, though they often begin as a different type of access mat. Pipeline mats are typically mats at the end of their productive lives, used in a rough, one-time access setting.
Access Mats
Access mats are a category of matting that are generally used to provide temporary roads and worksites. Access mats are often used to access work sites in remote or environmentally sensitive areas, such as bogs, wetlands or fens. For that reason, they are often referred to as swamp, bog or wetland mats.
Swamp mats are based on a design developed by Joe Penland in the late 20th century and consist of three layers of 2’ x 8’ lumber laminated together with steel bolts. Most commonly, the top and bottom layer are made up of 11 pieces and the middle layer, placed cross wise, is made from 21 pieces. In the USA, the majority of swamp mats are made from mixed hardwoods, although they are often referred to as oak. It is also usual to find hardwood mats in Canada, however the availability of durable coniferous species such as various firs, pines, and spruces make their use a more economical prospect. Common dimensions are 8' x 14' and 8' x 16'. Thicknesses vary between suppliers from 4.5” to a full 6”. Swamp mats are produced by many small and medium-sized manufacturers, and quality varies dramatically within the industry.
Rig mats, another variety of access mats, may also known as wood and steel mats or steel frame mats. These mats are commonly made of spruce, pine, fir or a combination thereof encased in a steel frame, though some suppliers also offer bamboo and fibreglass options. The frame is normally I Beam or HST steel. The steel is used to strengthen the mats, enabling the manufacturers of the mats to build them in larger sizes and to support more weight compared to all other types of mats. Common sizes are 8' x 20', 8' x 30', and 8' x 40'. One great advantage is the ease of repairing the wooden inserts which gives new life to an already long lasting and durable mat. This method of repair can be completed on both I Beam and HST style mats.
Mud mats are a combination of a reinforced member (such as metal bars or bamboo) confining geosynthetic fabric in a portable mat, that can be rolled up for ease of transport and deployment. A lightweight, light-duty flexible mat suitable for distributing loads over firm ground to avoid rutting. They are not commonly used for access matting in soft ground conditions.
Heavy-Equipment Mats
The final category of matting is known as Heavy-Equipment matting. These mats are constructed from the most durable, load-bearing materials, designed to be transited by heavy equipment.
One variety of heavy-equipment mat is the Crane mat. Designed for exceptionally heavy use, Crane mats (also known as digging mats, logging mats, or bridge mats) can be used in a wide variety of applications, including:
Powerline tower assembly
Mining and heavy access roads
Module yards, pipe yards, tank farms and staging areas
Wind farms
Bridge repair and construction
Piers and wharves
Crane mats are constructed of solid 8”, 10” or 12” timbers and are affixed by steel bolts, providing ground stabilization under extreme weight. The timber species used in these mats is generally Douglas fir and Hemlock as this species of wood has superior strength, durability and resiliency characteristics compared to other western softwood. The construction process of these mats allows for versatility as different size, length and quantity of timbers can be used to make different dimension mats. Most crane mat manufacturers are in the Western states and provinces.
Repurposing a Crane mat with cable loops allows them to be relocated on site by client-owned equipment, allowing a client to minimize the number of mats they require. This repurposing transforms a Crane mat into a Digging mat.
Logging mats are Crane mats with a reinforced slot which allows knuckle boom loaders or skid-steer loaders to move the mats with ease.
Oilfield Mat Combos are heavy-duty steel frame mats designed to provide spill containment and platform matting.
Construction materials
Depending on purpose, mats may be constructed of any of the following materials:
Wood
Wood is the most commonly used matting material used. Wooden mats range widely in cost, depending on the type of wood used, and may be constructed of:
Hardwood: hardwood is an imprecise term used in the matting industry, though it generally refers to any wood with a Janka Hardness Test rating at or above red oak.
Oak
Fir
Hybrid or SPF: in essence a fir mat with outside edge boards made of oak. The oak boards are there to mitigate outer-edge damage from the grapple used to place the mats, as oak is much harder than fir. These mats are also low-cost, primarily used for leveling under drilling rigs, camps and tank farms.
Spruce: spruce is often used as a “throw away” (one-time use) mat. It’s an inexpensive material with low load bearing capacity, durability and lifespan compared to fir and hardwood access mats.
Bamboo: bamboo mats are a composite of glue-laminated bamboo strips, offering higher tensile strength than steel.
Composite
Composite mats are constructed of multiple materials to improve the strength or durability of the mat. They can be more expensive upfront, but as they have a much longer lifespan than wooden mats they can be more cost-efficient. Higher-quality versions of the composite mat include anti-static and/or UV-protection additives to prevent the formation of sparks from static electricity and to prevent cracking, physical breakdown and fading of the mat. Composite mats feature a variety of connection mechanisms, from complicated systems that use small parts and specialized tools to large aluminum cam lock systems.
Composite mats range in size from 4’ x 4’ to 8’ x 14’. It is commonly thought that bigger is better with access mats, but it is important that the mats can be shipped by standard means. 7.5’ x 14’ composite mats, for example, will fit into an ISO container.
Examples of composite materials used in access matting include:
Fibreglass
Fibreglass offers high strength, long-term durability and is light weight.
Rubber
Rubber mats evolved by industries seeking sustainable products made from recycled materials. The rubber crumb recycled from scrap tires is a primary petroleum based material.
Mats can be manufactured in various thicknesses depending on the site soil conditions. The base materials of the rubber mat are crumb rubber, urethane, and fibre from recycled motor-vehicle tires; the production of one typical mat uses up to 350 tires, which contributes to the product's environmental appeal. The mats are moulded into conventional 8’ x 14’ sections, and the body has an embedded patented rigid spine that makes each mat highly durable. The surface is textured to provide traction for all types of traffic, and the mats have proven a viable and economical solution for long-term use under demanding site conditions across North America. They are effective when used as access roads, heli-pads, laydown areas, wash pads or sidewalks.
Rubber mats are also known as blast mats in the mining industry.
Plastic (HDPE)
Engineered, hollow rig matting systems may be made of high-density polyethylene (HDPE). Compared to traditional wooden matting, these composite mats are lighter in weight yet can still handle heavy loads.
Plastic (UHMWPE)
This solid plastic, UHMWPE (ultra-high molecular weight polyethylene), offers very high impact strength while being highly resistant to corrosive chemicals.
Solid, one-piece compression-moulded mats are made from recycled or virgin HDPE as well as recycled or virgin UHMWPE.
Solid mats are lighter in weight than hollow mats but due to the simpler connection system provide the same or similar usable working surface area per mat once connected (10’ x 8’, 13.5’ x 6.5’). Typically this means that a significantly greater number of solid mats can be loaded onto a truck resulting in a number of key benefits including larger working surface area per truckload, reduced number of transport trips per project, reduced fuel consumption and greenhouse gas emissions per project, and reduced transport costs per project.
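As a rough illustration of the truckload arithmetic above, the following sketch compares working surface area delivered per trip; all mat counts and dimensions here are hypothetical placeholders, not manufacturer figures.

```python
# Illustrative arithmetic only: how mat count per truckload translates into
# delivered working surface area. All numbers are hypothetical placeholders.
def area_per_truckload(mat_length_ft: float, mat_width_ft: float, mats_per_truck: int) -> float:
    return mat_length_ft * mat_width_ft * mats_per_truck

solid = area_per_truckload(10, 8, 24)    # e.g. 24 lighter solid mats per load
hollow = area_per_truckload(10, 8, 14)   # e.g. 14 heavier hollow mats per load

print(f"solid:  {solid:.0f} sq ft per truckload")
print(f"hollow: {hollow:.0f} sq ft per truckload")
# More area delivered per trip means fewer trips, less fuel and lower cost.
```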
A simple connection system, using standard connectors and tools, is used which means that mats can easily be installed and connected on undulating as well as flat surfaces, avoiding the need to prepare the ground surface in advance of installation. This also results in project time and cost-savings.
Unlike hollow mats, solid mats cannot be punctured and therefore do not take on water (which can increase mat weight) or other fluids (such as fuel or chemicals) that could have an adverse environmental impact on sensitive sites.
Uses
Access matting has a variety of industrial and commercial uses, ranging from temporary, one-time use (for example, in the construction of pipeline access, where the mats are essentially destroyed in the process), to reuse over multiple projects and seasons, to semi-permanent installation.
Access mats may also be used in other, non-traditional settings, such as providing access for cattle to water troughs where muddy conditions may prove detrimental to the livestock; for home owners who need access to buildings under construction before driveways are poured; to create temporary parking; or to provide nature enthusiasts with a low-impact, environmental trailway.
Installation and Removal
In some jurisdictions, access mats must be removed when they are no longer needed due to climatic conditions. Most access mat providers contract to remove used mats, which may then be re-rented, stored, or destroyed, depending on condition. Destruction of mats includes chipping/mulching, chipping and burying in approved locations, or chipping and incinerating.
References
External links
Access mats used to support Alberta, Canada’s New Wetland Policy (2016)
Construction equipment | Access mat | Engineering | 2,358 |
23,981,519 | https://en.wikipedia.org/wiki/C31H46O2 |
The molecular formula C31H46O2 (molar mass: 450.69 g/mol, exact mass: 450.3498 u) may refer to:
Phytomenadione, also known as phylloquinone or Vitamin K1
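As a rough check, the quoted masses can be reproduced from atomic weights; the following minimal Python sketch uses standard IUPAC values (small last-digit differences come from rounding conventions):

```python
# Back-of-the-envelope check of the molar mass and monoisotopic (exact) mass
# quoted for C31H46O2. Atomic weights are standard IUPAC values.
composition = {"C": 31, "H": 46, "O": 2}

average_mass = {"C": 12.011, "H": 1.008, "O": 15.999}           # g/mol
monoisotopic = {"C": 12.0, "H": 1.0078250319, "O": 15.9949146}  # u

molar = sum(n * average_mass[el] for el, n in composition.items())
exact = sum(n * monoisotopic[el] for el, n in composition.items())

print(f"molar mass ~ {molar:.2f} g/mol")  # ~450.71 (entry quotes 450.69)
print(f"exact mass ~ {exact:.4f} u")      # ~450.3498, matching the entry
```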
Chemical formulas | C31H46O2 | Chemistry | 73 |
33,391,377 | https://en.wikipedia.org/wiki/Influenza%20Research%20Database | The Influenza Research Database (IRD) is an integrative and comprehensive publicly available database and analysis resource to search, analyze, visualize, save and share data for influenza virus research. IRD is one of the five Bioinformatics Resource Centers (BRC) funded by the National Institute of Allergy and Infectious Diseases (NIAID), a component of the National Institutes of Health (NIH), which is an agency of the United States Department of Health and Human Services.
Data types in IRD
Segment, protein, and strain data
Animal surveillance data
Human clinical data
Experimentally determined and predicted immune epitopes
Sequence Features
Predicted protein domains and motifs
Gene Ontology annotations
Computed sequence conservation score
Clade classification for highly-pathogenic avian influenza H5N1 HA sequences
3D protein structures
PCR primer data curated from literature
Experiment data from laboratory experiments and clinical trials
Phenotypic characteristic data curated from literature
Serology data
Host factor data
Analysis and visualization tools in IRD
BLAST: provides custom IRD databases to identify the most related sequence(s)
Short Peptide Search: allows users to find peptide sequences in target proteins
Identify Point Mutations: identifies influenza proteins having particular amino acids at user-specified positions
Multiple sequence alignment: allows users to align segment/protein sequences using MUSCLE
Sequence Alignment Visualization: uses JalView for sequence alignment visualization
Phylogenetic tree construction: calculates a tree using various algorithms and evolutionary models
Phylogenetic Tree Visualization: allows the color-coded display of strain metadata on a tree generated with one of several available algorithms and/or evolutionary models and viewed with Archaeopteryx
3D Protein Structure Visualization: integrates PDB protein structure files with sequence conservation score and IRD Sequence Features and provides an interactive 3D protein structure viewer using Jmol
Sequence Feature Variant Type (SFVT) analysis: provides a centralized repository of functional regions and automatically calculates all observed sequence variation within each defined region
Metadata-driven Comparative Analysis Tool for Sequences (Meta-CATS): an automated comparative statistical analysis to identify positions that significantly differ between user-defined sequence groups
Sequence Variation Analysis (SNP): pre-computed sequence variation analysis of all IRD sequences; also allows users to calculate the extent of sequence variation in user-specified sequences (a minimal sketch of this kind of per-position analysis appears after this list)
PCR Primers/Probes: provides a repository of commonly used primers for influenza virus identification, and calculates the polymorphisms of all related IRD sequences at the primer positions
PCR Primer Design: allows PCR primer design for IRD and user-provided sequences
Sequence annotation: determines the user-provided nucleotide sequence's influenza type, segment number and subtype (for segments 4 and 6), and translates the nucleotide sequence
HPAI H5N1 Clade Classification: predicts the clade of highly pathogenic H5 HA sequences
ReadSeq: converts between various sequence formats
Data Submission Tool: allows users to submit influenza sequences, Sequence Features, and experimental data online
External Analysis Tools: displays a list and description of third-party tools for more specialized analyses
Personal Workbench to save and share data and analysis
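As an illustration of the per-position computations behind tools such as the conservation score and sequence variation analysis, the following minimal sketch computes a simple consensus-fraction conservation over a toy alignment; the sequences are invented and the scoring is one simple choice among many, not IRD's actual algorithm:

```python
from collections import Counter

# Toy pre-aligned protein sequences (invented, not real IRD data).
alignment = [
    "MKTIIALSYI",
    "MKTIIALSYV",
    "MKTLIALSYI",
    "MKTIIALSHI",
]

length = len(alignment[0])
assert all(len(seq) == length for seq in alignment), "sequences must be aligned"

for pos in range(length):
    column = [seq[pos] for seq in alignment]
    residue, freq = Counter(column).most_common(1)[0]
    conservation = freq / len(column)  # fraction agreeing with the consensus
    print(f"position {pos + 1}: consensus {residue}, conservation {conservation:.2f}")
```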
References
External links
Bioinformatics Resource Centers The NIAID page describing the goals and activities of the BRCs
Influenza
Medical databases
Pathogen genomics | Influenza Research Database | Biology | 659 |
15,152,001 | https://en.wikipedia.org/wiki/Anomalous%20X-ray%20scattering | Anomalous X-ray scattering (AXRS or XRAS) is a non-destructive determination technique within X-ray diffraction that makes use of the anomalous dispersion that occurs when a wavelength is selected that is in the vicinity of an absorption edge of one of the constituent elements of the sample. It is used in materials research to study nanometer sized differences in structure.
Atomic scattering factors
In X-ray diffraction the scattering factor f for an atom is roughly proportional to the number of electrons that it possesses. However, for wavelengths that approximate those for which the atom strongly absorbs radiation the scattering factor undergoes a change due to anomalous dispersion. The dispersion not only affects the magnitude of the factor but also imparts a phase shift in the elastic collision of the photon. The scattering factor can therefore best be described as a complex number
f = f_0 + \Delta f' + i\,\Delta f''
Contrast variation
The anomalous aspects of X-ray scattering have become the focus of considerable interest in the scientific community because of the availability of synchrotron radiation. In contrast to desktop X-ray sources that work at a limited set of fixed wavelengths, synchrotron radiation is generated by accelerating electrons and using an undulator (a device of periodically placed dipole magnets) to "wiggle" the electrons along their path and generate the desired wavelength of X-rays. This allows scientists to vary the wavelength, which in turn makes it possible to vary the scattering factor for one particular element in the sample under investigation. Thus a particular element can be highlighted. This is known as contrast variation. In addition to this effect, the anomalous scatter is more sensitive to any deviation from sphericity of the electron cloud around the atom. This can lead to resonant effects involving transitions in the outer shell of the atom: resonant anomalous X-ray scattering.
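As a rough numerical illustration of contrast variation, the sketch below evaluates the complex scattering factor at two hypothetical wavelength settings; the f_0, Δf′ and Δf″ values are invented placeholders, not tabulated data:

```python
import cmath

# Complex atomic scattering factor f = f_0 + Δf' + i Δf''.
def scattering_factor(f0: float, df_prime: float, df_double_prime: float) -> complex:
    return f0 + df_prime + 1j * df_double_prime

# Same element far from vs. tuned near its absorption edge (placeholder values).
far = scattering_factor(f0=28.0, df_prime=-1.0, df_double_prime=0.5)
near = scattering_factor(f0=28.0, df_prime=-7.5, df_double_prime=3.9)

for label, f in [("far from edge", far), ("near edge", near)]:
    # Both the magnitude and the phase of f change near the edge, which is
    # what allows a chosen element to be "highlighted" against the others.
    print(f"{label}: |f| = {abs(f):.2f}, phase = {cmath.phase(f):.3f} rad")
```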
Protein crystallography
In protein crystallography, anomalous scattering refers to a change in a diffracting X-ray's phase that is distinct from that of the rest of the atoms in a crystal, due to strong X-ray absorbance. The amount of energy that individual atoms absorb depends on their atomic number. The relatively light atoms found in proteins such as carbon, nitrogen, and oxygen do not contribute to anomalous scattering at normal X-ray wavelengths used for X-ray crystallography. Thus, in order to observe anomalous scattering, a heavy atom must be native to the protein or a heavy atom derivative should be made. In addition, the X-ray's wavelength should be close to the heavy atom's absorption edge.
List of methods
Multi-wavelength anomalous diffraction (MAD)
Single-wavelength anomalous diffraction (SAD)
Diffraction anomalous fine structure (DAFS) combines the use of anomalous diffraction with X-ray absorption fine structure (XAFS).
References
External links
X-ray Anomalous Scattering at skuld.bmsc.washington.edu. A resource mainly aimed at crystallographers.
PHENIX glossary, describes the techniques supported by the commonly-used PHENIX refining program, including MAD & SAD.
Scientific techniques
X-ray crystallography | Anomalous X-ray scattering | Chemistry,Materials_science | 678 |
64,808,167 | https://en.wikipedia.org/wiki/Citrix%20Virtual%20Desktops | Citrix Virtual Desktops (formerly XenDesktop) is a desktop virtualization product.
History
The virtualization technology that led to XenDesktop was first developed in 2000 through Xen, an open-source x86 hypervisor research project led by Ian Pratt at the University of Cambridge. Pratt founded a company called XenSource in 2004, which made a commercial version of the Xen hypervisor. In 2007, Citrix acquired XenSource, releasing XenDesktop version 2.0 in 2008. The company continued to release updated versions, with XenDesktop 7.6 featuring HDX technology enhancements for audio, video and graphics user experience, as well as a reduction in storage costs associated with virtual desktop deployments as a result of improvements to Citrix provisioning services.
In 2018, the software was renamed Citrix Virtual Desktops.
Product overview
The product's aim is to give employees the ability to work from anywhere while cutting information technology management costs because desktops and applications are centralized. XenDesktop also aims to provide security, because data is not stored on the devices of end users, instead being saved in a centralized datacenter or cloud infrastructure. Citrix developed the software for use by medium to large enterprise customers.
Citrix Workspace is able to manage and deliver applications and desktops using a connection broker called Desktop Delivery Controller. It supports multiple hypervisors, including Citrix Hypervisor, VMware vSphere, Microsoft Hyper-V and Nutanix Acropolis to create virtual machines to run the applications and desktops. The software allows for several types of delivery methods and is compatible with multiple architectures, including desktops and servers, datacenters, and private, public or hybrid clouds. Virtualized applications can be delivered to virtual desktops using Virtual Apps.
Release history
Version 7.5 - March 26, 2014
Version 7.6 - September 30, 2014
Version 7.6 Feature Pack 1 - March 31, 2015
Version 7.6 Feature Pack 2 - June 30, 2015
Version 7.6 Feature Pack 3 - September 30, 2015
Version 7.6 LTSR - January 11, 2016
Version 7.7 - December 29, 2015
Version 7.8 - February 24, 2016
Version 7.9 - June 1, 2016
Version 7.11 - September 4, 2016
Version 7.12 - December 7, 2016
Version 7.13 - February 23, 2017
Version 7.14 - May 23, 2017
Version 7.15 LTSR - August 15, 2017
Version 7.16 - November 2017
Version 7.17 - February 2018
Version 7.18 - June 2018
See also
EmbeddedXEN, Type-1 (bare metal) hypervisor
References
Citrix Systems
Cloud computing providers
Centralized computing
Remote desktop | Citrix Virtual Desktops | Technology | 575 |
9,591,722 | https://en.wikipedia.org/wiki/Buddam%20%28unit%29 | A buddam (also known as a chow) was an obsolete unit of mass used in the pearl trade in Mumbai (formerly Bombay) during the 19th century. One buddam was equivalent to 1/1600 of a chow, or 1/16 of a docra.
See also
List of customary units of measurement in South Asia
References
Units of mass
Customary units in India
Obsolete units of measurement | Buddam (unit) | Physics,Mathematics | 79 |
13,519,351 | https://en.wikipedia.org/wiki/Magnesium%20levulinate | Magnesium levulinate, the magnesium salt of levulinic acid, is a mineral supplement.
External links
Magnesium compounds
Salts of carboxylic acids | Magnesium levulinate | Chemistry | 32 |
31,278,688 | https://en.wikipedia.org/wiki/Lorvotuzumab%20mertansine | Lorvotuzumab mertansine (IMGN901) is an antibody-drug conjugate. It comprises the CD56-binding antibody, lorvotuzumab (huN901), with a maytansinoid cell-killing agent, DM1, attached using a disulfide linker, SPP. (When DM1 is attached to an antibody with the SPP linker, it is mertansine; when it is attached with the thioether linker, SMCC, it is emtansine.)
Lorvotuzumab mertansine is an experimental agent created for the treatment of CD56 positive cancers (e.g. small-cell lung cancer, ovarian cancer).
It has been granted Orphan drug status for Merkel cell carcinoma.
It has shown encouraging Phase II results for small-cell lung cancer.
References
Antibody-drug conjugates
Experimental cancer drugs
Monoclonal antibodies for tumors | Lorvotuzumab mertansine | Biology | 208 |
67,471,118 | https://en.wikipedia.org/wiki/Sony%20Xperia%205%20III | The Sony Xperia 5 III is an Android smartphone manufactured by Sony. Part of Sony's Xperia series, the phone was announced on April 14, 2021, along with the larger Xperia 1 III and the mid-range Xperia 10 III.
Design
The Xperia 5 III retains Sony's signature square design that is seen on previous Xperia phones. It is built similarly to the Xperia 1 III, using anodized aluminum for the frame and Corning Gorilla Glass 6 for the screen and back panel, as well as IP65 and IP68 certifications for water resistance. The build has a pair of symmetrical bezels on the top and the bottom, where the front-facing dual stereo speakers are placed. The left side of the phone contains a slot for a SIM card and a microSDXC card, while the right side contains a fingerprint reader embedded into the power button, a volume rocker and a shutter button. A dedicated Google Assistant button is located between the power and shutter buttons. The earpiece, front-facing camera, notification LED and various sensors are housed in the top bezel. The bottom edge has the primary microphone and USB-C port; the rear cameras are arranged in a vertical strip. The phone ships in three colors: Black, Green and Pink.
Specifications
Hardware
The Xperia 5 III is powered by the Qualcomm Snapdragon 888 SoC and an Adreno 660 GPU, accompanied by 8 GB of LPDDR5 RAM. It has 128 or 256 GB of UFS internal storage, which can be expanded up to 1 TB via the microSD card slot with a hybrid dual-SIM setup. The display is a 6.1-inch 1080p (2520 × 1080) HDR OLED with a 21:9 aspect ratio, resulting in a pixel density of 449 ppi. It features a 120 Hz refresh rate, and is capable of displaying one billion colors. The battery capacity is 4500 mAh; USB Power Delivery 3.0 is supported at 30W over USB-C, although it lacks wireless charging capabilities. The device includes a 3.5mm audio jack as well as an active external amplifier.
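The quoted pixel density follows directly from the panel resolution and diagonal; a minimal check using the standard ppi formula:

```python
# Pixel density check: ppi = diagonal pixel count / diagonal in inches.
from math import hypot

width_px, height_px = 2520, 1080  # 21:9 panel resolution
diagonal_in = 6.1                 # screen diagonal in inches

ppi = hypot(width_px, height_px) / diagonal_in
print(f"{ppi:.0f} ppi")  # ~449, matching the quoted figure
```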
Camera
The Xperia 5 III has three 12 MP rear-facing cameras and an 8 MP front-facing camera. The rear cameras consist of a wide-angle lens (24 mm f/1.7), an ultra wide angle lens (16 mm f/2.2), and a variable telephoto lens that can switch between 70 mm and 105 mm with 3× or 4.4× optical zoom; each uses ZEISS' T✻ (T-Star) anti-reflective coating. Software improvements include real-time Eye AF and Optical SteadyShot.
Software
The Xperia 5 III runs on Android 11. Sony has also paired the phone's camera tech with a "Pro" mode developed by Sony's camera division CineAlta, whose features take after Sony's Alpha camera lineup.
References
Notes
Android (operating system) devices
Discontinued flagship smartphones
Sony smartphones
Mobile phones introduced in 2021
Mobile phones with multiple rear cameras
Mobile phones with 4K video recording | Sony Xperia 5 III | Technology | 648 |
50,656,176 | https://en.wikipedia.org/wiki/%C3%89tienne%20Rabaud | Étienne Antoine Prosper Jules Rabaud (12 September 1868 in Saint-Affrique – 3 September 1956 in Villemade) was a French zoologist, known for his studies of animal behavior.
From 1894 he served as an assistant in the laboratory of teratology at the École des Hautes-Études, and in 1898 he obtained doctorates in both medicine and sciences. In 1907 he became a maître de conférences at the faculty of sciences in Paris, where he was later named an assistant professor (1920), professor without chair (1921) and a professor of experimental biology (1923). From 1910 to 1919 he served as director of the laboratory at Wimereux.
In 1904 he was named secretary of the Société d'Anthropologie de Paris. Later on in his career, he was appointed president of the Société entomologique de France (1915) and the Société zoologique de France (1921). In 1937 he became an officer of the Légion d'honneur.
Rabaud was a supporter of Lamarckian evolution. In 1937, in his book La Matière vivante et l'hérédité (Living Matter and Heredity), he wrote mockingly of the "American candor" and "singularly disturbing mentality" of Morgan, whose entire scientific output he rejected out of hand. According to Jean Rostand, this is an example of the distressing attitude held by certain French biologists at the time, one that cost France several decades of lag in genetics.
Selected works
Le transformisme et l'expérience, 1911 – Transformism and experience.
La tératogenèse; étude des variations de l'organisme, 1914 – Teratogenesis, study on variations of organisms.
Eléments de Biologie générale, 1920 – Elements of general biology.
L'hérédité, 1921 – On heredity.
L'adaptation et l'évolution, 1922 – Adaptation and evolution.
J.H. Fabre et la science, 1924 – Jean-Henri Fabre and science.
"How animals find their way about; a study of distant orientation and place-recognition" (English translation by I H Myers, 1928).
Phénomène social et sociétés animales, 1937 – Social phenomena and animal societies.
L'instinct et le comportement animal, 1939 – Instinct and animal behavior.
Transformisme et adaptation, 1942 – Transformism and adaptation.
References
1868 births
1956 deaths
People from Saint-Affrique
French zoologists
Lamarckism
Teratologists
Presidents of the Société entomologique de France | Étienne Rabaud | Biology | 527 |
59,730,366 | https://en.wikipedia.org/wiki/Video%20games%20and%20charity | Due to the perceived negative connotations of video games, both industry members and consumers of video games have frequently collaborated to counter this perception by engaging in video gaming for charitable purposes. Some of these have been charitable groups, or regular and annual events, and the scope of these efforts have continued to grow, with more than having been raised by video game-related charity efforts in the first half of 2018 alone.
Organizations and events
Child's Play
The creators of the Penny Arcade webcomic, Jerry Holkins and Mike Krahulik, established the Child's Play charity in 2003 following a string of mass media stories that attempted to portray video games in a negative light. The charity was designed to provide toys, video game systems and games to various children's hospitals in the United States, through both monetary and physical donations. By 2017, Child's Play had raised over through both cash donations and donated items.
Extra Life
Extra Life is an annual charity fund-raising event to support the Children's Miracle Network Hospitals. The event was started in 2008 to honor Victoria Enmon, a teenager who died of acute lymphoblastic leukemia. While gamers collect funds for Extra Life throughout the year, the event encourages streamers to play video games for twenty-four hours straight via Twitch or other streaming services over a specific weekend in November and to collect additional donations from their viewers. In 2017, over 50,000 streamers helped to raise over for the hospitals.
Desert Bus For Hope
The comedy group LoadingReadyRun ran a Child's Play event in 2007 by marathon-playing the game Desert Bus, a game created by Penn & Teller, in which the player must drive a bus on a desolate stretch of highway from Tucson to Las Vegas, roughly eight hours of continuous gameplay, with little challenge outside of player fatigue. The event was successful, in part due to recognition from Penn & Teller, and eventually spun out into its own annual "Desert Bus for Hope" event. During the stream, broadcast over Twitch and other streaming sites, viewers can donate to get virtual time behind the bus's wheel, as well as participate in various auctions. The 2018 event raised over for Child's Play, with total accumulated donations exceeding .
Games Done Quick
Games Done Quick was launched in 2010, inspired by Desert Bus for Hope, with the idea that invited participants speedrun numerous games over the course of the event, typically five to six days, usually with commentary. During each semi-annual event, the speedruns are performed live in front of an audience and broadcast to Twitch and other services, with viewers able to donate throughout. There have also been shorter one-off special Games Done Quick events for specific occasions, such as one to support the victims of the 2011 Tōhoku earthquake and tsunami. As of January 21, 2019, Games Done Quick has raised over to various charities, including the Prevent Cancer Foundation in its winter events and Doctors Without Borders in its summer events.
Humble Bundle
Humble Bundle was initially launched by Wolfire Games in 2010 as a series of game bundles, most frequently indie games, offered at a pay-what-you-want price, with all but a few dollars of each sale going to a designated charity. As the bundles became more successful, Humble Bundle's approach expanded to include other bundles, such as mobile games, console games, and digital books; it was also established as its own company, created publishing support for indie games, and opened a dedicated storefront where a portion of each purchase goes to a selected charity. By 2017, the various activities of Humble Bundle had raised over across 50 different charities, including Action Against Hunger, Child's Play, the Electronic Frontier Foundation, charity: water, the American Red Cross, WaterAid and the Wikimedia Foundation.
Jingle Jam
Since 2011, The Yogscast have organised a series of live streams every year in December to benefit charity. The idea began when fans wanted to send presents to founders Lewis Brindley and Simon Lane during the Christmas season; the pair instead insisted that the money be donated to charity.
By the end of December 2023, Jingle Jam had raised over £25 million for over 40 different charities.
In August 2022, Jingle Jam was registered as a charity in England and Wales with the Charity Commission, independent of The Yogscast.
Notable one-off efforts
Pink Overwatch Mercy skin
In May 2018, to support the Breast Cancer Research Foundation, Blizzard Entertainment offered a limited-time character customization skin for the Overwatch character Mercy that reflected the pink colors of breast cancer awareness, with all revenue from the sale of the skin going to the Foundation. By the end of the sale, Blizzard had raised over , which at the time was the largest single donation that the Foundation had seen.
2018 E3 Fortnite Pro-Am
Epic Games ran a celebrity Fortnite Battle Royale pro-am during the Electronic Entertainment Expo 2018 in June 2018, pairing 50 popular streamers with 50 celebrities, with an overall prize pool given to the winners in the names of their chosen charities.
H.Bomberguy's stream for Mermaids
In January 2019, streamer Harry Brewis, known as "hbomberguy", ran a marathon stream to beat Donkey Kong 64 and raise money for Mermaids, a British charity for transgender youth. The stream began following comments made by Graham Linehan that Brewis considered transphobic, and while potential funding of Mermaids by the British National Lottery was under review. Word of mouth about the stream quickly spread, and several notable pro-trans supporters briefly joined, including John Romero, Chelsea Manning, and U.S. congresswoman Alexandria Ocasio-Cortez. Brewis finished the marathon after 57 hours, having raised over £265,000 for Mermaids, with 659,000 viewers having watched the stream.
References
Charity fundraising
History of video games
Online charity
Video game culture | Video games and charity | Technology | 1,210 |
13,694 | https://en.wikipedia.org/wiki/Microsoft%20Windows%20version%20history | Microsoft Windows was announced by Bill Gates on November 10, 1983, 2 years before it was first released. Microsoft introduced Windows as a graphical user interface for MS-DOS, which had been introduced two years earlier, on August 12, 1981. The product line evolved in the 1990s from an operating environment into a fully complete, modern operating system over two lines of development, each with their own separate codebase.
The first versions of Windows (1.0 through 3.11) were graphical shells that ran from MS-DOS. Windows 95, though still based on MS-DOS, was its own operating system. Windows 95 also had a significant amount of 16-bit code ported from Windows 3.1. Windows 95 introduced many features that have been part of the product ever since, including the Start menu, the taskbar, and Windows Explorer (renamed File Explorer in Windows 8). In 1997, Microsoft released Internet Explorer 4, which included the (at the time controversial) Windows Desktop Update. It aimed to integrate Internet Explorer and the web into the user interface, and also brought many new features into Windows, such as the ability to display JPEG images as the desktop wallpaper and single-window navigation in Windows Explorer. In 1998, Microsoft released Windows 98, which also included the Windows Desktop Update and Internet Explorer 4 by default. The inclusion of Internet Explorer 4 and the Desktop Update led to an antitrust case in the United States. Windows 98 included USB support out of the box, and also plug and play, which allows devices to work when plugged in without requiring a system reboot or manual configuration. Windows Me, the last DOS-based version of Windows, was aimed at consumers and released in 2000. It introduced System Restore, Help and Support Center, updated versions of the Disk Defragmenter and other system tools.
In 1993, Microsoft released Windows NT 3.1, the first version of the newly developed Windows NT operating system, followed by Windows NT 3.5 in 1994, and Windows NT 3.51 in 1995. "NT" is an initialism for "New Technology". Unlike the Windows 9x series of operating systems, it was a fully 32-bit operating system. NT 3.1 introduced NTFS, a file system designed to replace the older File Allocation Table (FAT) which was used by DOS and the DOS-based Windows operating systems. In 1996, Windows NT 4.0 was released, which included a fully 32-bit version of Windows Explorer written specifically for it, making the operating system work like Windows 95. Windows NT was originally designed to be used on high-end systems and servers, but with the release of Windows 2000, many consumer-oriented features from Windows 95 and Windows 98 were included, such as the Windows Desktop Update, Internet Explorer 5, USB support and Windows Media Player. These consumer-oriented features were further extended in Windows XP in 2001, which included a new visual style called Luna, a more user-friendly interface, updated versions of Windows Media Player and Internet Explorer 6 by default, and extended features from Windows Me, such as the Help and Support Center and System Restore. Windows Vista, which was released in 2007, focused on securing the Windows operating system against computer viruses and other malicious software by introducing features such as User Account Control. New features include Windows Aero, updated versions of the standard games (e.g. Solitaire), Windows Movie Maker, and Windows Mail to replace Outlook Express. Despite this, Windows Vista was critically panned for its poor performance on older hardware and its at-the-time high system requirements. Windows 7 followed in 2009 nearly three years after its launch, and despite it technically having higher system requirements, reviewers noted that it ran better than Windows Vista. Windows 7 removed many applications, such as Windows Movie Maker, Windows Photo Gallery and Windows Mail, instead requiring users to download separate Windows Live Essentials to gain some of those features and other online services. Windows 8, which was released in 2012, introduced many controversial changes, such as the replacement of the Start menu with the Start Screen, the removal of the Aero interface in favor of a flat, colored interface as well as the introduction of "Metro" apps (later renamed to Universal Windows Platform apps), and the Charms Bar user interface element, all of which received considerable criticism from reviewers. Windows 8.1, a free upgrade to Windows 8, was released in 2013.
The following version of Windows, Windows 10, which was released in 2015, reintroduced the Start menu and added the ability to run Universal Windows Platform apps in a window instead of always in full screen. Windows 10 was generally well-received, with many reviewers stating that Windows 10 is what Windows 8 should have been.
The latest version of Windows, Windows 11, was released to the general public on October 5, 2021. Windows 11 incorporates a redesigned user interface, including a new Start menu, a visual style featuring rounded corners, and a new layout for the Microsoft Store, and also included Microsoft Edge by default.
Windows 1.0
Windows 1.0, the first independent version of Microsoft Windows, released on November 20, 1985, achieved little popularity. The project was briefly codenamed "Interface Manager" before the windowing system was implemented (contrary to popular belief, this was not the original name for Windows), and Rowland Hanson, the head of marketing at Microsoft, convinced the company that the name Windows would be more appealing to customers.
Windows 1.0 was not a complete operating system, but rather an "operating environment" that extended MS-DOS, and shared the latter's inherent flaws.
The first version of Microsoft Windows included a simple graphics painting program called Windows Paint; Windows Write, a simple word processor; an appointment calendar; a card-filer; a notepad; a clock; a control panel; a computer terminal; Clipboard; and RAM driver. It also included the MS-DOS Executive and a game called Reversi.
Microsoft had worked with Apple Computer to develop applications for Apple's new Macintosh computer, which featured a graphical user interface. As part of the related business negotiations, Microsoft had licensed certain aspects of the Macintosh user interface from Apple; in later litigation, a district court summarized these aspects as "screen displays".
In the development of Windows 1.0, Microsoft intentionally limited its borrowing of certain GUI elements from the Macintosh user interface, to comply with its license. For example, windows were only displayed "tiled" on the screen; that is, they could not overlap or overlie one another.
On December 31, 2001, Microsoft declared Windows 1.0 obsolete and stopped providing support and updates for the system.
OS/2 and Windows 2.x
During the mid to late 1980s, Microsoft and IBM had cooperatively been developing OS/2 as a successor to DOS. OS/2 would take full advantage of the aforementioned protected mode of the Intel 80286 processor and up to 16 MB of memory. OS/2 1.0, released in 1987, supported swapping and multitasking and allowed running of DOS executables.
IBM licensed Windows' GUI for OS/2 as Presentation Manager, and the two companies stated that it and Windows 2.0 would be almost identical. Presentation Manager was not available with OS/2 until version 1.1, released in 1988. Its API was incompatible with Windows. Version 1.2, released in 1989, introduced a new file system, HPFS, to replace the FAT file system.
By the early 1990s, conflicts developed in the Microsoft/IBM relationship. They cooperated with each other in developing their PC operating systems and had access to each other's code. Microsoft wanted to further develop Windows, while IBM desired for future work to be based on OS/2. In an attempt to resolve this tension, IBM and Microsoft agreed that IBM would develop OS/2 2.0, to replace OS/2 1.3 and Windows 3.0, while Microsoft would develop the next version, OS/2 3.0.
This agreement soon fell apart however, and the Microsoft/IBM relationship was terminated. IBM continued to develop OS/2, while Microsoft changed the name of its (as yet unreleased) OS/2 3.0 to Windows NT. Both retained the rights to use OS/2 and Windows technology developed up to the termination of the agreement; Windows NT, however, was to be written anew, mostly independently (see below).
After an interim 1.3 version to fix up many remaining problems with the 1.x series, IBM released OS/2 version 2.0 in 1992. This was a major improvement: it featured a new, object-oriented GUI, the Workplace Shell (WPS), that included a desktop and was considered by many to be OS/2's best feature. Microsoft would later imitate much of it in Windows 95. Version 2.0 also provided a full 32-bit API, offered smooth multitasking and could take advantage of the 4 gigabytes of address space provided by the Intel 80386. Still, much of the system had 16-bit code internally which required, among other things, device drivers to be 16-bit code as well. This was one of the reasons for the chronic shortage of OS/2 drivers for the latest devices. Version 2.0 could also run DOS and Windows 3.0 programs, since IBM had retained the right to use the DOS and Windows code as a result of the breakup.
Microsoft Windows version 2.0 (2.01 and 2.03 internally) came out on December 9, 1987, and proved slightly more popular than its predecessor. Much of the popularity for Windows 2.0 came by way of its inclusion as a "run-time version" with Microsoft's new graphical applications, Excel and Word for Windows. They could be run from MS-DOS, executing Windows for the duration of their activity, and closing down Windows upon exit.
Microsoft Windows received a major boost around this time when Aldus PageMaker appeared in a Windows version, having previously run only on Macintosh. Some computer historians date this, the first appearance of a significant and non-Microsoft application for Windows, as the start of the success of Windows.
Like prior versions of Windows, version 2.0 could use the real-mode memory model, which confined it to a maximum of 1 megabyte of memory. In such a configuration, it could run under another multitasker like DESQview, which used the 286 protected mode. It was also the first version to support the High Memory Area when running on an Intel 80286 compatible processor. This edition was renamed Windows/286 with the release of Windows 2.1.
A separate Windows/386 edition had a protected mode kernel, which required an 80386 compatible processor, with LIM-standard EMS emulation and VxD drivers in the kernel. All Windows and DOS-based applications at the time were real mode, and Windows/386 could run them over the protected mode kernel by using the virtual 8086 mode, which was new with the 80386 processor.
Version 2.1 came out on May 27, 1988, followed by version 2.11 on March 13, 1989; they included a few minor changes.
In Apple Computer, Inc. v. Microsoft Corp., version 2.03, and later 3.0, faced challenges from Apple over its overlapping windows and other features Apple charged mimicked the ostensibly copyrighted "look and feel" of its operating system and "embodie[d] and generated a copy of the Macintosh" in its OS. Judge William Schwarzer dropped all but 10 of Apple's 189 claims of copyright infringement, and ruled that most of the remaining 10 were over uncopyrightable ideas.
On December 31, 2001, Microsoft declared Windows 2.x obsolete and stopped providing support and updates for the system.
Windows 3.0
Windows 3.0, released in May 1990, improved capabilities given to native applications. It also allowed users to better multitask older MS-DOS based software compared to Windows/386, thanks to the introduction of virtual memory.
Windows 3.0's user interface finally resembled a serious competitor to the user interface of the Macintosh computer. PCs had improved graphics by this time, due to VGA video cards, and the protected/enhanced mode allowed Windows applications to use more memory in a more painless manner than their DOS counterparts could. Windows 3.0 could run in real, standard, or 386 enhanced modes, and was compatible with any Intel processor from the 8086/8088 up to the 80286 and 80386. This was the first version to run Windows programs in protected mode, although the 386 enhanced mode kernel was an enhanced version of the protected mode kernel in Windows/386.
Windows 3.0 received two updates. A few months after introduction, Windows 3.0a was released as a maintenance release, resolving bugs and improving stability. A "multimedia" version, Windows 3.0 with Multimedia Extensions 1.0, was released in October 1991. This was bundled with "multimedia upgrade kits", comprising a CD-ROM drive and a sound card, such as the Creative Labs Sound Blaster Pro. This version was the precursor to the multimedia features available in Windows 3.1 (first released in April 1992) and later, and was part of Microsoft's specification for the Multimedia PC.
The features listed above and growing market support from application software developers made Windows 3.0 wildly successful, selling around 10 million copies in the two years before the release of version 3.1. Windows 3.0 became a major source of income for Microsoft, and led the company to revise some of its earlier plans. Support was discontinued on December 31, 2001.
Windows 3.1
In response to the impending release of OS/2 2.0, Microsoft developed Windows 3.1 (first released in April 1992), which included several improvements to Windows 3.0, such as display of TrueType scalable fonts (developed jointly with Apple), improved disk performance in 386 Enhanced Mode, multimedia support, and bugfixes. It also removed Real Mode, and only ran on an 80286 or better processor. Later Microsoft also released Windows 3.11, a touch-up to Windows 3.1 which included all of the patches and updates that followed the release of Windows 3.1 in 1992.
In 1992 and 1993, Microsoft released Windows for Workgroups (WfW), which was available both as an add-on for existing Windows 3.1 installations and in a version that included the base Windows environment and the networking extensions all in one package. Windows for Workgroups included improved network drivers and protocol stacks, and support for peer-to-peer networking. There were two versions of Windows for Workgroups – 3.1 and 3.11. Unlike prior versions, Windows for Workgroups 3.11 ran in 386 Enhanced Mode only, and needed at least an 80386SX processor. One optional download for WfW was the "Wolverine" TCP/IP protocol stack, which allowed for easy access to the Internet through corporate networks.
All these versions continued version 3.0's impressive sales pace. Even though the 3.1x series still lacked most of the important features of OS/2, such as long file names, a desktop, or protection of the system against misbehaving applications, Microsoft quickly took over the OS and GUI markets for the IBM PC. The Windows API became the de facto standard for consumer software.
On December 31, 2001, Microsoft declared Windows 3.1 obsolete and stopped providing support and updates for the system. However, OEM licensing for Windows for Workgroups 3.11 on embedded systems continued to be available until November 1, 2008.
Windows NT 3.x
Meanwhile, Microsoft continued to develop Windows NT. The main architect of the system was Dave Cutler, one of the chief architects of VAX/VMS at Digital Equipment Corporation. Microsoft hired him in October 1988 to create a successor to OS/2, but Cutler created a completely new system instead. Cutler had been developing a follow-on to VMS at DEC called MICA, and when DEC dropped the project he brought the expertise and around 20 engineers with him to Microsoft.
Windows NT Workstation (Microsoft marketing wanted Windows NT to appear to be a continuation of Windows 3.1) arrived in beta form to developers at the July 1992 Professional Developers Conference in San Francisco. Microsoft announced at the conference its intentions to develop a successor to both Windows NT and Windows 3.1's replacement (Windows 95, codenamed Chicago), which would unify the two into one operating system. This successor was codenamed Cairo. In hindsight, Cairo was a much more difficult project than Microsoft had anticipated and, as a result, NT and Chicago would not be unified until Windows XP; although Windows 2000, oriented to business, had already unified most of the system's internals, it was XP that was sold to home consumers like Windows 95 and came to be viewed as the final unified OS. Parts of Cairo have still not made it into Windows: most notably, the WinFS file system, which was the much-touted Object File System of Cairo. Microsoft announced in 2006 that they would not make a separate release of WinFS for Windows XP and Windows Vista and would gradually incorporate the technologies developed for WinFS into other products and technologies, notably Microsoft SQL Server.
Driver support was lacking due to the increased programming difficulty in dealing with NT's superior hardware abstraction model. This problem plagued the NT line all the way through Windows 2000. Programmers complained that it was too hard to write drivers for NT, and hardware developers were not going to go through the trouble of developing drivers for a small segment of the market. Additionally, although allowing for good performance and fuller exploitation of system resources, it was also resource-intensive on limited hardware, and thus was only suitable for larger, more expensive machines.
However, these same features made Windows NT perfect for the LAN server market (which in 1993 was experiencing a rapid boom, as office networking was becoming common). NT also had advanced network connectivity options and NTFS, an efficient file system. Windows NT version 3.51 was Microsoft's entry into this field, and took away market share from Novell (the dominant player) in the following years.
One of Microsoft's biggest advances initially developed for Windows NT was a new 32-bit API, to replace the legacy 16-bit Windows API. This API was called Win32, and from then on Microsoft referred to the older 16-bit API as Win16. The Win32 API had three levels of implementation: the complete one for Windows NT, a subset for Chicago (originally called Win32c) missing features primarily of interest to enterprise customers (at the time) such as security and Unicode support, and a more limited subset called Win32s which could be used on Windows 3.1 systems. Thus Microsoft sought to ensure some degree of compatibility between the Chicago design and Windows NT, even though the two systems had radically different internal architectures.
Windows NT was the first Windows operating system based on a hybrid kernel. The hybrid kernel was designed as a modified microkernel, influenced by the Mach microkernel developed by Richard Rashid at Carnegie Mellon University, but without meeting all of the criteria of a pure microkernel.
As released, Windows NT 3.x went through three versions (3.1, 3.5, and 3.51); changes were primarily internal and reflected back-end improvements. The 3.5 release added support for new types of hardware and improved performance and data reliability; the 3.51 release was primarily to update the Win32 APIs to be compatible with software being written for the Win32c APIs in what became Windows 95. Support for Windows NT 3.51 ended in 2001 and 2002 for the Workstation and Server editions, respectively.
Windows 95
After Windows 3.11, Microsoft began to develop a new consumer-oriented version of the operating system codenamed Chicago. Chicago was designed to have support for 32-bit preemptive multitasking like OS/2 and Windows NT, although a 16-bit kernel would remain for the sake of backward compatibility. The Win32 API first introduced with Windows NT was adopted as the standard 32-bit programming interface, with Win16 compatibility being preserved through a technique known as "thunking". A new object-oriented GUI was not originally planned as part of the release, although elements of the Cairo user interface were borrowed and added as other aspects of the release (notably Plug and Play) slipped.
Microsoft did not change all of the Windows code to 32-bit; parts of it remained 16-bit (albeit not directly using real mode) for reasons of compatibility, performance, and development time. Additionally it was necessary to carry over design decisions from earlier versions of Windows for reasons of backwards compatibility, even if these design decisions no longer matched a more modern computing environment. These factors eventually began to impact the operating system's efficiency and stability.
Microsoft marketing adopted Windows 95 as the product name for Chicago when it was released on August 24, 1995. Microsoft had a double gain from its release: first, it made it impossible for consumers to run Windows 95 on a cheaper, non-Microsoft DOS; second, although traces of DOS were never completely removed from the system and MS-DOS 7 would be loaded briefly as a part of the booting process, Windows 95 applications ran solely in 386 enhanced mode, with a flat 32-bit address space and virtual memory. These features made it possible for Win32 applications to address up to 2 gigabytes of virtual RAM (with another 2 GB reserved for the operating system), and in theory prevented them from inadvertently corrupting the memory space of other Win32 applications. In this respect the functionality of Windows 95 moved closer to Windows NT, although Windows 95/98/Me did not support more than 512 megabytes of physical RAM without obscure system tweaks. Three years after its introduction, Windows 95 was succeeded by Windows 98.
IBM continued to market OS/2, producing later versions in OS/2 3.0 and 4.0 (also called Warp). Responding to complaints about OS/2 2.0's high demands on computer hardware, version 3.0 was significantly optimized both for speed and size. Before Windows 95 was released, OS/2 Warp 3.0 was even shipped pre-installed with several large German hardware vendor chains. However, with the release of Windows 95, OS/2 began to lose market share.
It is probably impossible to choose one specific reason why OS/2 failed to gain much market share. While OS/2 continued to run Windows 3.1 applications, it lacked support for anything but the Win32s subset of Win32 API (see above). Unlike with Windows 3.1, IBM did not have access to the source code for Windows 95 and was unwilling to commit the time and resources to emulate the moving target of the Win32 API. IBM later introduced OS/2 into the United States v. Microsoft case, blaming unfair marketing tactics on Microsoft's part.
Microsoft went on to release five different versions of Windows 95:
Windows 95 – original release
Windows 95 A – included Windows 95 OSR1 slipstreamed into the installation
Windows 95 B (OSR2) – included several major enhancements, Internet Explorer (IE) 3.0 and full FAT32 file system support
Windows 95 B USB (OSR2.1) – included basic USB support
Windows 95 C (OSR2.5) – included all the above features, plus IE 4.0; this was the last 95 version produced
OSR2, OSR2.1, and OSR2.5 were not released to the general public, rather, they were available only to OEMs that would preload the OS onto computers. Some companies sold new hard drives with OSR2 preinstalled (officially justifying this as needed due to the hard drive's capacity).
The first Microsoft Plus! add-on pack was sold for Windows 95. Microsoft ended extended support for Windows 95 on December 31, 2001.
Windows NT 4.0
Microsoft released the successor to NT 3.51, Windows NT 4.0, on August 24, 1996, one year after the release of Windows 95. It was Microsoft's primary business-oriented operating system until the introduction of Windows 2000. Major new features included the new Explorer shell from Windows 95, scalability and feature improvements to the core architecture, kernel, USER32, COM and MSRPC.
Windows NT 4.0 came in five versions:
Windows NT 4.0 Workstation
Windows NT 4.0 Server
Windows NT 4.0 Server, Enterprise Edition (includes support for 8-way SMP and clustering)
Windows NT 4.0 Terminal Server
Windows NT 4.0 Embedded
Microsoft ended mainstream support for Windows NT 4.0 Workstation on June 30, 2002, and ended extended support on June 30, 2004, while Windows NT 4.0 Server mainstream support ended on December 31, 2002, and extended support ended on December 31, 2004. Both editions were succeeded by Windows 2000 Professional and the Windows 2000 Server Family, respectively.
Microsoft ended mainstream support for Windows NT 4.0 Embedded on June 30, 2003, and ended extended support on July 11, 2006. This edition was succeeded by Windows XP Embedded.
Windows 98
On June 25, 1998, Microsoft released Windows 98 (code-named Memphis), three years after the release of Windows 95, two years after the release of Windows NT 4.0, and 21 months before the release of Windows 2000. It included new hardware drivers and the FAT32 file system, which supports disk partitions larger than 2 GB (first introduced in Windows 95 OSR2). USB support in Windows 98 was marketed as a vast improvement over Windows 95. The release continued the controversial inclusion of the Internet Explorer browser with the operating system that started with Windows 95 OEM Service Release 1. The action eventually led to the filing of the United States v. Microsoft case, dealing with the question of whether Microsoft was introducing unfair practices into the market in an effort to eliminate competition from other companies such as Netscape.
In 1999, Microsoft released Windows 98 Second Edition, an interim release. One of the more notable new features was the addition of Internet Connection Sharing, a form of network address translation, allowing several machines on a LAN (Local Area Network) to share a single Internet connection. Hardware support through device drivers was increased and this version shipped with Internet Explorer 5. Many minor problems that existed in the first edition were fixed making it, according to many, the most stable release of the Windows 9x family.
Mainstream support for Windows 98 and 98 SE ended on June 30, 2002. Extended support ended on July 11, 2006.
Windows 2000
Microsoft released Windows 2000 on February 17, 2000, as the successor to Windows NT 4.0, 17 months after the release of Windows 98. It has the version number Windows NT 5.0, and it was Microsoft's business-oriented operating system starting with the official release on February 17, 2000, until 2001 when it was succeeded by Windows XP. Windows 2000 has had four official service packs. It was successfully deployed both on the server and the workstation markets. Amongst Windows 2000's most significant new features was Active Directory, a near-complete replacement of the NT 4.0 Windows Server domain model, which built on industry-standard technologies like DNS, LDAP, and Kerberos to connect machines to one another. Terminal Services, previously only available as a separate edition of NT 4, was expanded to all server versions. A number of features from Windows 98 were incorporated also, such as an improved Device Manager, Windows Media Player, and a revised DirectX that made it possible for the first time for many modern games to work on the NT kernel. Windows 2000 is also the last NT-kernel Windows operating system to lack product activation.
While Windows 2000 upgrades were available for Windows 95 and Windows 98, it was not intended for home users.
Windows 2000 was available in four editions:
Windows 2000 Professional
Windows 2000 Server
Windows 2000 Advanced Server
Windows 2000 Datacenter Server
Microsoft ended support for both Windows 2000 and Windows XP Service Pack 2 on July 13, 2010.
Windows Me
On September 14, 2000, Microsoft released a successor to Windows 98 called Windows Me, short for "Millennium Edition". It was the last DOS-based operating system from Microsoft. Windows Me introduced a new multimedia-editing application called Windows Movie Maker, came standard with Internet Explorer 5.5 and Windows Media Player 7, and debuted the first version of System Restore – a recovery utility that enables the operating system to revert system files back to a prior date and time. System Restore was a notable feature that would continue to thrive in all later versions of Windows.
Windows Me was conceived as a quick one-year project that served as a stopgap release between Windows 98 and Windows XP. Many of the new features were available from the Windows Update site as updates for older Windows versions (System Restore and Windows Movie Maker were exceptions). Windows Me was criticized for stability issues, as well as for lacking real mode DOS support, to the point of being referred to as the "Mistake Edition". Windows Me was the last operating system to be based on the Windows 9x (monolithic) kernel and MS-DOS, with its successor Windows XP being based on Microsoft's Windows NT kernel instead.
Windows XP, Server 2003 series and Fundamentals for Legacy PCs
On October 25, 2001, Microsoft released Windows XP (codenamed "Whistler"). The merging of the Windows NT/2000 and Windows 95/98/Me lines was finally achieved with Windows XP. Windows XP uses the Windows NT 5.1 kernel, marking the entrance of the Windows NT core to the consumer market, to replace the aging Windows 9x branch. The initial release was met with considerable criticism, particularly in the area of security, leading to the release of three major Service Packs. Windows XP SP1 was released in September 2002, SP2 was released in August 2004 and SP3 was released in April 2008. Service Pack 2 provided significant improvements and encouraged widespread adoption of XP among both home and business users. Windows XP was one of Microsoft's longest-running flagship operating systems, remaining the flagship for more than five years from its public release on October 25, 2001, until January 30, 2007, when it was succeeded by Windows Vista.
Windows XP is available in a number of versions:
Windows XP Home Edition, for home users
Windows XP Professional, for business and power users, contained a number of features not available in Home Edition.
Windows XP N, like above editions, but without a default installation of Windows Media Player, as mandated by a European Union ruling
Windows XP Media Center Edition (MCE), released in October 2002 for desktops and notebooks with an emphasis on home entertainment. Contained all features offered in Windows XP Professional and the Windows Media Center. Subsequent versions are the same but have an updated Windows Media Center.
Windows XP Media Center Edition 2004, released on September 30, 2003
Windows XP Media Center Edition 2005, released on October 12, 2004. Included the Royale theme, support for Media Center Extenders, themes and screensavers from Microsoft Plus! for Windows XP. The ability to join an Active Directory domain is disabled.
Windows XP Tablet PC Edition, for tablet PCs
Windows XP Tablet PC Edition 2005
Windows XP Embedded, for embedded systems
Windows XP Starter Edition, for new computer users in developing countries
Windows XP Professional x64 Edition, released on April 25, 2005, for home and workstation systems utilizing 64-bit processors based on the x86-64 instruction set originally developed by AMD as AMD64; Intel calls their version Intel 64. Internally, XP x64 was a somewhat updated version of Windows based on the Server 2003 codebase.
Windows XP 64-bit Edition, a version for Intel's Itanium line of processors; it maintains 32-bit compatibility solely through a software emulator. It is roughly analogous to Windows XP Professional in features. It was discontinued in September 2005 when the last vendor of Itanium workstations stopped shipping Itanium systems marketed as "Workstations".
Windows Server 2003
On April 25, 2003, Microsoft launched Windows Server 2003, a notable update to Windows 2000 Server encompassing many new security features, a new "Manage Your Server" wizard that simplifies configuring a machine for specific roles, and improved performance. It is based on the Windows NT 5.2 kernel. A few services not essential for server environments are disabled by default for stability reasons; the most noticeable are the "Windows Audio" and "Themes" services, which users have to enable manually to get sound or the "Luna" look as in Windows XP. Hardware acceleration for display is also turned off by default; users have to raise the acceleration level themselves if they trust the display card driver.
In December 2005, Microsoft released Windows Server 2003 R2, which is essentially Windows Server 2003 with Service Pack 1 (SP1) together with an add-on package.
Among the new features are a number of management features for branch offices, file serving, printing and company-wide identity integration.
Windows Server 2003 is available in six editions:
Web Edition (32-bit)
Standard Edition (32 and 64-bit)
Enterprise Edition (32 and 64-bit)
Datacenter Edition (32 and 64-bit)
Small Business Server (32-bit)
Storage Server (OEM channel only)
Windows Server 2003 R2, an update of Windows Server 2003, was released to manufacturing on December 6, 2005. It is distributed on two CDs, with one CD being the Windows Server 2003 SP1 CD. The other CD adds many optionally installable features for Windows Server 2003. The R2 update was released for all x86 and x64 versions, but was not released for Itanium-based systems.
Windows XP x64 and Server 2003 x64 Editions
On April 25, 2005, Microsoft released Windows XP Professional x64 Edition and the Windows Server 2003 x64 Editions in Standard, Enterprise and Datacenter SKUs. Windows XP Professional x64 Edition is an edition of Windows XP for x86-64 personal computers. It is designed to use the expanded 64-bit memory address space provided by the x86-64 architecture.
Windows XP Professional x64 Edition is based on the Windows Server 2003 codebase, with the server features removed and client features added. Both Windows Server 2003 x64 and Windows XP Professional x64 Edition use identical kernels.
Windows XP Professional x64 Edition is not to be confused with Windows XP 64-bit Edition, as the latter was designed for Intel Itanium processors. During the initial development phases, Windows XP Professional x64 Edition was named Windows XP 64-Bit Edition for 64-Bit Extended Systems.
Windows Fundamentals for Legacy PCs
In July 2006, Microsoft released a thin-client version of Windows XP Service Pack 2, called Windows Fundamentals for Legacy PCs (WinFLP). It is only available to Software Assurance customers. The aim of WinFLP is to give companies a viable upgrade option for older PCs running Windows 95, 98, and Me, one that will be supported with patches and updates for the next several years. Most user applications will typically be run on a remote machine using Terminal Services or Citrix.
While visually the same as Windows XP, it has some differences. For example, if the screen is set to 16-bit color, the Windows 2000 Recycle Bin icon and some 16-bit XP icons are shown. Paint and some games, such as Solitaire, are not present either.
Windows Home Server 2007
Windows Home Server (code-named Q, Quattro) is a server product based on Windows Server 2003, designed for consumer use. The system was announced on January 7, 2007, by Bill Gates. Windows Home Server can be configured and monitored using a console program that can be installed on a client PC. Media sharing, local and remote drive backup, and file duplication are among its listed features. The release of Windows Home Server Power Pack 3 added support for Windows 7 to Windows Home Server.
Windows Vista and Server 2008
Windows Vista was released on November 30, 2006, to business customers; consumer versions followed on January 30, 2007. Windows Vista was intended to have enhanced security, introducing a new restricted user mode called User Account Control that replaced the "administrator-by-default" philosophy of Windows XP. Vista was the target of much criticism and negative press and in general was not well regarded; this was seen as contributing to the relatively swift release of Windows 7.
One major difference from earlier versions of Windows (Windows 95 and later) was that the original Start button was replaced with the Windows icon in a circle (called the Start Orb). Vista also featured new graphics features, the Windows Aero GUI, new applications (such as Windows Calendar, Windows DVD Maker and some new games including Chess, Mahjong, and Purble Place), Internet Explorer 7, Windows Media Player 11, and a large number of underlying architectural changes. Windows Vista had the version number NT 6.0. During its lifetime, Windows Vista had two service packs.
Windows Vista shipped in six editions:
Starter (only available in developing countries)
Home Basic
Home Premium
Business
Enterprise (only available to large business and enterprise)
Ultimate (combines both Home Premium and Enterprise)
All editions (except Starter edition) were available in both 32-bit and 64-bit versions. The biggest advantage of the 64-bit version was breaking the 4-gigabyte memory barrier; 32-bit systems cannot fully address more than 4 gigabytes of memory.
Windows Server 2008
Windows Server 2008, released on February 27, 2008, was originally known as Windows Server Codename "Longhorn". Windows Server 2008 built on the technological and security advances first introduced with Windows Vista, and was significantly more modular than its predecessor, Windows Server 2003.
Windows Server 2008 shipped in ten editions:
Windows Server 2008 Foundation (for OEMs only)
Windows Server 2008 Standard (32-bit and 64-bit)
Windows Server 2008 Enterprise (32-bit and 64-bit)
Windows Server 2008 Datacenter (32-bit and 64-bit)
Windows Server 2008 for Itanium-based Systems (IA-64)
Windows HPC Server 2008
Windows Web Server 2008 (32-bit and 64-bit)
Windows Storage Server 2008 (32-bit and 64-bit)
Windows Small Business Server 2008 (64-bit only)
Windows Essential Business Server 2008 (32-bit and 64-bit)
Windows 7 and Server 2008 R2
Windows 7 was released to manufacturing on July 22, 2009, and reached general retail availability on October 22, 2009. Since its release, Windows 7 had one service pack.
Windows 7's features included faster booting, Device Stage, Windows PowerShell, a less obtrusive User Account Control, multi-touch, and improved window management. The interface was renewed with a bigger taskbar and improvements to the search system and the Start menu. Features included with Windows Vista but not in Windows 7 include the sidebar (although gadgets remain) and several programs that were removed in favor of downloading their Windows Live counterparts. Windows 7 was met with positive reviews, which said the OS was faster and easier to use than Windows Vista.
Windows 7 shipped in six editions:
Starter (available worldwide)
Home Basic
Home Premium
Professional
Enterprise (available to volume-license business customers only)
Ultimate
In some countries in the European Union, there were other editions that lacked some features such as Windows Media Player, Windows Media Center and Internet Explorer—these editions were called names such as "Windows 7 N."
Microsoft focused on selling Windows 7 Home Premium and Professional. All editions, except the Starter edition, were available in both 32-bit and 64-bit versions.
Unlike the corresponding Vista editions, the Professional and Enterprise editions were supersets of the Home Premium edition.
At the Professional Developers Conference (PDC) 2008, Microsoft also announced Windows Server 2008 R2, as the server variant of Windows 7. Windows Server 2008 R2 shipped in 64-bit versions (x64 and Itanium) only.
Windows Thin PC
In 2010, Microsoft released Windows Thin PC (WinTPC), a feature- and size-reduced, locked-down version of Windows 7 expressly designed to turn older PCs into thin clients. WinTPC was available to Software Assurance customers and relied on cloud computing in a business network. Wireless operation is supported, since WinTPC has full wireless stack integration, though it may not perform as well as a wired connection.
Windows Home Server 2011
Windows Home Server 2011, code-named 'Vail', was released on April 6, 2011. Windows Home Server 2011 is built on the Windows Server 2008 R2 code base and removed the Drive Extender drive pooling technology found in the original Windows Home Server release. Windows Home Server 2011 is considered a "major release". Its predecessor was built on Windows Server 2003. WHS 2011 only supports x86-64 hardware.
Microsoft decided to discontinue Windows Home Server 2011 on July 5, 2012, while including its features into Windows Server 2012 Essentials. Windows Home Server 2011 was supported until April 12, 2016.
Windows 8 and Server 2012
On June 1, 2011, Microsoft previewed Windows 8 at both Computex Taipei and the D9: All Things Digital conference in California. The first public preview of Windows Server 2012 was shown by Microsoft at the 2011 Microsoft Worldwide Partner Conference. Windows 8 Release Preview and Windows Server 2012 Release Candidate were both released on May 31, 2012. Product development on Windows 8 was completed on August 1, 2012, and it was released to manufacturing the same day. Windows Server 2012 went on sale to the public on September 4, 2012. Windows 8 went on sale to the public on October 26, 2012. One edition, Windows RT, runs on some system-on-a-chip devices with mobile 32-bit ARM (ARMv7) processors. Windows 8 features a redesigned user interface, designed to make it easier for touchscreen users to use Windows. The interface introduced an updated Start menu known as the Start screen, and a new full-screen application platform. The desktop interface is also present for running windowed applications, although Windows RT will not run any desktop applications not included in the system. On the Building Windows 8 blog, it was announced that a computer running Windows 8 can boot up much faster than Windows 7. New features also include USB 3.0 support, the Windows Store, the ability to run from USB drives with Windows To Go, and others.
Windows 8.1 and Windows Server 2012 R2 were released on October 17, 2013. Windows 8.1 is available as an update in the Windows Store for Windows 8 users only and also available to download for clean installation. The update adds new options for resizing the live tiles on the Start screen. Windows 8 was given the kernel number NT 6.2, with its successor 8.1 receiving the kernel number 6.3. Neither had any service packs, although many consider Windows 8.1 to be a service pack for Windows 8. However, Windows 8.1 received two main updates in 2014. Both versions received some criticism over the removal of the Start menu and the difficulty of performing some tasks and commands.
Windows 8 is available in the following editions:
Windows 8
Windows 8 Pro
Windows 8 Enterprise
Windows RT
Microsoft ended support for Windows 8 on January 12, 2016, and for Windows 8.1 on January 10, 2023.
Windows 10 and corresponding Server versions
Windows 10 was unveiled on September 30, 2014, as the successor to Windows 8, and was released on July 29, 2015. It was distributed without charge to Windows 7 and 8.1 users for one year after release. A number of new features debuted in Windows 10, including Cortana, the Microsoft Edge web browser, the ability to view Windows Store apps in a window instead of fullscreen, the return of the Start menu, virtual desktops, revamped core apps, Continuum, and a unified Settings app. The operating system was announced as a service OS that would receive constant performance and stability updates. Unlike Windows 8, Windows 10 received mostly positive reviews, with praise for its improvements in stability and practicality over its predecessor; however, it received some criticism over mandatory update installation, privacy concerns and advertising-supported software tactics.
Although Microsoft claimed Windows 10 would be the last Windows version, a new major release, Windows 11, was eventually announced in 2021. Windows 10 nonetheless served as Microsoft's flagship operating system longer than any other version of Windows, from its public release on July 29, 2015, until October 5, 2021, when Windows 11 was released, a span of more than six years. Windows 10 received thirteen main updates.
Stable releases
Version 1507 (codenamed Threshold 1) was the original version of Windows 10 and was released in July 2015. One of the big features was the introduction of Windows Hello, which at launch enabled users to log into Windows with facial recognition if the PC was equipped with a compatible active-illumination near-infrared (NIR) camera.
Version 1511, announced as the November Update and codenamed Threshold 2. It was released in November 2015. This update added many visual tweaks, such as more consistent context menus and the ability to change the color of window titlebars. Windows 10 can now be activated with a product key for Windows 7 and later, thus simplifying the activation process and essentially making Windows 10 free for anyone who has Windows 7 or later, even after the free upgrade period ended. A "Find My Device" feature was added, allowing users to track their devices if they lose them, similar to the Find My iPhone service that Apple offers. Controversially, the Start menu now displays "featured apps". A few tweaks were added to Microsoft Edge, including tab previews and the ability to sync the browser with other devices running Windows 10. Kernel version number: 10.0.10586.
Version 1607, announced as the Anniversary Update and codenamed Redstone 1. It was the first of several planned updates with the "Redstone" codename. Its version number, 1607, means that it was supposed to launch in July 2016; however, it was delayed until August 2016. Many new features were included in the version, including more integration with Cortana, a dark theme, browser extension support for Microsoft Edge, click-to-play Flash by default, tab pinning, web notifications, swipe navigation in Edge, and the ability for Windows Hello to use a fingerprint sensor to sign into apps and websites, similar to Touch ID on the iPhone. Also added were Windows Ink, which improves digital inking in many apps, and the Windows Ink Workspace, which lists pen-compatible apps as well as quick shortcuts to a sticky notes app and a sketchpad. Microsoft, through its partnership with Canonical, integrated a full Ubuntu bash shell via the Windows Subsystem for Linux. Notable tweaks in this version of Windows 10 include the removal of the controversial password-sharing feature of Microsoft's Wi-Fi Sense service, a slightly redesigned Start menu, Tablet Mode working more like Windows 8, overhauled emoji, improvements to the lock screen, calendar integration in the taskbar, and the Blue Screen of Death now showing a QR code which users can scan to quickly find out what caused the error. This version of Windows 10's kernel version is 10.0.14393.
Version 1703, announced as the Creators Update and codenamed Redstone 2. Features for this update include a new Paint 3D application, which allows users to create and modify 3D models, integration with Microsoft's HoloLens and other "mixed-reality" headsets produced by other manufacturers, Windows My People, which allows users to manage contacts, Xbox game broadcasting, support for newly developed APIs such as WDDM 2.2, Dolby Atmos support, improvements to the Settings app, and more Edge and Cortana improvements. This version also included tweaks to system apps, such as an address bar in the Registry Editor, Windows PowerShell being the default command line interface instead of the Command Prompt and the Windows Subsystem for Linux being upgraded to support Ubuntu 16.04. This version of Windows 10 was released on April 11, 2017, as a free update.
Version 1709, announced as the Fall Creators Update and codenamed Redstone 3. It introduced a new design language—the Fluent Design System and incorporates it in UWP apps such as Calculator. It also added new features to the Photos application, which were once available only in Windows Movie Maker.
Version 1803, announced as the April 2018 Update and codenamed Redstone 4 introduced Timeline, an upgrade to the task view screen such that it has the ability to show past activities and let users resume them. The respective icon on the taskbar was also changed to reflect this upgrade. Strides were taken to incorporate Fluent Design into Windows, which included adding Acrylic transparency to the Taskbar and Taskbar Flyouts. The Settings App was also redesigned to have an Acrylic left pane. Variable Fonts were introduced.
Version 1809, announced as the Windows 10 October 2018 Update and codenamed Redstone 5, introduced, among other new features, a Dark Mode for File Explorer, the Your Phone app for linking an Android phone with Windows 10, a new screenshot tool called Snip & Sketch, Make Text Bigger for easier accessibility, and Clipboard History with cloud sync.
Version 1903, announced as the Windows 10 May 2019 Update, codenamed 19H1, was released on May 21, 2019. It added many new features, including a light theme for the Windows shell and a new feature known as Windows Sandbox, which allowed users to run programs in a throwaway virtual window. Notably, this was the first version to allow an application to opt in to UTF-8 as its process code page, and the first in which programs such as Notepad default to UTF-8.
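As a small illustration of the process-code-page change described above, the following Python sketch (a hedged example for a Windows machine, not part of any Microsoft documentation) queries the active ANSI code page through the Win32 GetACP function and reports whether the process is running with UTF-8 (code page 65001) active:

import ctypes

# GetACP() returns the active ANSI code page of the calling process.
# A process that has opted in to UTF-8 (for example through its application
# manifest on version 1903 or later) reports code page 65001.
acp = ctypes.windll.kernel32.GetACP()
if acp == 65001:
    print("UTF-8 is the active process code page")
else:
    print("Legacy ANSI code page in use:", acp)

The manifest-based opt-in mentioned in the comment reflects how applications typically enable this; the snippet itself is only a quick diagnostic sketch and requires Windows to run.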
Version 1909, announced as the Windows 10 November 2019 Update, codenamed 19H2, was released on November 12, 2019. It unlocked many features that were already present, but hidden or disabled, on 1903, such as an auto-expanding menu on Start while hovering the mouse on it, OneDrive integration on Windows Search and creating events from the taskbar's clock. Some PCs with version 1903 had already enabled these features without installing 1909.
Version 2004, announced as the Windows 10 May 2020 Update, codenamed 20H1, was released on May 27, 2020. It introduces several new features, such as the ability to rename virtual desktops, GPU temperature and disk type readouts in Task Manager, a chat-based interface and windowed appearance for Cortana, cloud reinstallation of Windows, and quick searches (depending on region) on the Search home.
Version 20H2, announced as the Windows 10 October 2020 Update, codenamed 20H2, was released on October 20, 2020. It introduces resizable Start menu panels, a graphing mode for Calculator, a process architecture view in Task Manager's Details pane, optional driver delivery from Windows Update, and an updated in-use location icon on the taskbar.
Version 21H1, announced as the Windows 10 May 2021 Update, codenamed 21H1, was released on May 18, 2021.
Version 21H2, announced as the Windows 10 November 2021 Update, codenamed 21H2, was released on November 16, 2021.
Version 22H2, announced as the Windows 10 2022 Update, codenamed 22H2, was released on October 18, 2022. It was the last version of Windows 10.
Windows Server 2016
Windows Server 2016 is a release of the Microsoft Windows Server operating system that was unveiled on September 30, 2014. Windows Server 2016 was officially released at Microsoft's Ignite Conference, September 26–30, 2016. It is based on the Windows 10 Anniversary Update codebase.
Windows Server 2019
Windows Server 2019 is a release of the Microsoft Windows Server operating system that was announced on March 20, 2018. The first Windows Insider preview version was released on the same day. It was released for general availability on October 2, 2018. Windows Server 2019 is based on the Windows 10 October 2018 Update codebase.
On October 6, 2018, distribution of Windows version 1809 (build 17763) was paused while Microsoft investigated an issue with user data being deleted during an in-place upgrade. It affected systems where a user profile folder (e.g. Documents, Music or Pictures) had been moved to another location, but data was left in the original location. As Windows Server 2019 is based on the Windows version 1809 codebase, it too was removed from distribution at the time, but was re-released on November 13, 2018. The software product life cycle for Server 2019 was reset in accordance with the new release date.
Windows Server 2022
Windows Server 2022 was released on August 18, 2021. It is the first NT server version that does not share its build number with any client counterpart, although its codename, 21H2, matches that of the Windows 10 November 2021 Update.
Windows 11 and corresponding Server versions
Windows 11 is the latest release of Windows NT and the successor to Windows 10. It was unveiled on June 24, 2021, and was released on October 5, 2021, serving as a free upgrade to compatible Windows 10 devices. The system incorporates a renewed interface called "Mica", which includes translucent backgrounds, rounded edges and color combinations. The taskbar's icons are center-aligned by default, while the Start menu replaces the "Live Tiles" with pinned apps and recommended apps and files. The MSN widget panel, the Microsoft Store, and the file browser, among other applications, have also been redesigned. However, some features and programs such as Cortana, Internet Explorer (replaced by Microsoft Edge as the default web browser) and Paint 3D were removed. Apps like 3D Viewer, Paint 3D, Skype and OneNote for Windows 10 can be downloaded from the Microsoft Store. Beginning in 2021, Windows 11 included compatibility with Android applications through the Windows Subsystem for Android, with the Amazon Appstore included; however, Microsoft has announced that support for Android apps will end in March 2025. Windows 11 received a positive reception from critics. While it was praised for its redesigned interface and increased security and productivity, it was criticized for its high system requirements (which include an installed TPM 2.0 chip, the Secure Boot protocol being enabled, and UEFI firmware) and for various UI changes and regressions compared to Windows 10 (such as requiring a Microsoft account for first-time setup, preventing users from changing the default browser, and an inconsistent dark theme).
Stable releases
Version 21H2, codenamed "Sun Valley", was the initial version of Windows 11 released on October 5, 2021.
Version 22H2, announced as the Windows 11 2022 Update, codenamed "Sun Valley 2", was released on September 20, 2022. Features in this Windows 11 version include an updated, UWP version of the Task Manager and the Smart App Control feature within the Windows Security app. This version has had three major updates, with features including tabbed browsing in the File Explorer, iOS support for the Phone Link app, Bluetooth Low Energy audio support, and a preview of Microsoft Copilot within Windows.
Version 23H2, announced as the Windows 11 2023 Update, codenamed "Sun Valley 3", was released on October 31, 2023.
Version 24H2, announced as the Windows 11 2024 Update, codenamed "Hudson Valley", was released on October 1, 2024.
Windows Server 2025
Windows Server 2025 is the successor to Windows Server 2022 and was released on November 1, 2024. It is graphically based on Windows 11 and adds features such as hotpatching.
See also
Comparison of operating systems
History of operating systems
List of Microsoft codenames
References
Further reading
History of Microsoft
Microsoft Windows
Windows
OS/2
Software version histories | Microsoft Windows version history | Technology | 11,494 |
29,078,639 | https://en.wikipedia.org/wiki/Positive%20material%20identification | Positive material identification (PMI) is the analysis of a material to establish its composition by reading the percentages of its constituent elements. Any material can be analysed, but PMI is generally used for metallic alloys. Typical methods for PMI include X-ray fluorescence (XRF) and optical emission spectrometry (OES).
PMI is a portable method of analysis and can be used in the field on components.
X-ray fluorescence (XRF) PMI cannot detect light elements such as carbon. This means that when analysing stainless steels such as grades 304 and 316, the low-carbon 'L' variant cannot be distinguished. It can, however, be identified with optical emission spectrometry (OES).
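As an illustration of how a PMI reading might be compared with alloy specifications, the following minimal Python sketch checks a measured composition (in weight per cent) against two grade windows. The element ranges and the example reading are illustrative assumptions, not authoritative specification values, and carbon is deliberately absent because XRF cannot report it:

# Illustrative grade windows (weight %); real specifications differ.
GRADE_WINDOWS = {
    "304": {"Cr": (18.0, 20.0), "Ni": (8.0, 10.5), "Mo": (0.0, 0.5)},
    "316": {"Cr": (16.0, 18.0), "Ni": (10.0, 14.0), "Mo": (2.0, 3.0)},
}

def match_grades(measured):
    """Return the grades whose element windows all contain the measured values."""
    matches = []
    for grade, windows in GRADE_WINDOWS.items():
        if all(low <= measured.get(element, 0.0) <= high
               for element, (low, high) in windows.items()):
            matches.append(grade)
    return matches

# Hypothetical XRF reading (weight %) from a stainless steel component.
reading = {"Cr": 17.1, "Ni": 11.2, "Mo": 2.4}
print(match_grades(reading))  # ['316'] - the molybdenum content rules out 304

Because carbon does not appear in the reading, the same logic cannot separate 316 from its low-carbon 316L variant, which is exactly the XRF limitation noted above.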
References
Elemental analysis
Chemical tests
Quality control | Positive material identification | Chemistry | 163 |
1,879,769 | https://en.wikipedia.org/wiki/Lagrangian%20foliation | In mathematics, a Lagrangian foliation or polarization is a foliation of a symplectic manifold whose leaves are Lagrangian submanifolds. It is one of the steps involved in the geometric quantization of square-integrable functions on a symplectic manifold.
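As an illustration, the standard definition can be stated compactly in LaTeX; the notation below ($(M, \omega)$ for the symplectic manifold, $\mathcal{F}$ for the foliation) is introduced here only for the sketch:

% Sketch of the standard definition, not a verbatim quotation of any source.
A foliation $\mathcal{F}$ of a symplectic manifold $(M, \omega)$ with $\dim M = 2n$
is called \emph{Lagrangian} if every leaf $L$ is a Lagrangian submanifold, that is,
\[
  \dim L = n
  \qquad \text{and} \qquad
  \omega|_{TL} = 0 .
\]
% Standard example: the fibres of a cotangent bundle $T^*Q \to Q$, with its
% canonical symplectic form, form a Lagrangian foliation (the vertical polarization).

The cotangent-bundle example mentioned in the comment is the polarization most commonly used in geometric quantization.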
References
Kenji Fukaya, Floer homology of Lagrangian Foliation and Noncommutative Mirror Symmetry (2000)
Symplectic geometry
Foliations
Mathematical quantization | Lagrangian foliation | Physics,Mathematics | 108 |
2,555,800 | https://en.wikipedia.org/wiki/Aura%20%28paranormal%29 | According to spiritual beliefs, an aura or energy field is a colored emanation said to enclose a human body or any animal or object. In some esoteric positions, the aura is described as a subtle body. Psychics and holistic medicine practitioners often claim to have the ability to see the size, color and type of vibration of an aura.
In spiritual alternative medicine, the human aura is seen as part of a hidden anatomy that reflects the state of being and health of a client, often understood to comprise centers of vital force called chakras. Such claims are not supported by scientific evidence and are thus pseudoscience. When tested under controlled scientific experiments, the ability to see auras has not been proven to exist.
Etymology
In Latin and Ancient Greek, aura means wind, breeze or breath. It was used in Middle English to mean "gentle breeze". By the end of the 19th century, the word was used in some spiritualist circles to describe a speculated subtle emanation around the body.
History
The concept of auras was first popularized by Charles Webster Leadbeater, a former priest of the Church of England and a member of the mystic Theosophical Society. He had studied theosophy in India, and believed he had the capacity to use his clairvoyant powers to make scientific investigations. He claimed that he had discovered that most men came from Mars but the more advanced men came from the Moon, and that hydrogen atoms were made of six bodies contained in an egg-like form. In his book Man Visible and Invisible, published in 1903, Leadbeater illustrated the aura of man at various stages of his moral evolution, from the "savage" to the saint. In 1910, he introduced the modern conception of auras by incorporating the Tantric notion of chakras in his book The Inner Life. Leadbeater did not simply present the Tantric beliefs to the West: he reconstructed and reinterpreted them by mixing them with his own ideas. Some of Leadbeater's innovations are describing chakras as energy vortices, and associating each of them with a gland, an organ and other body parts.
In the following years, Leadbeater's ideas on the aura and chakras were adopted and reinterpreted by other theosophists such as Rudolf Steiner and Edgar Cayce, but his occult anatomy remained of minor interest within the esoteric counterculture until the 1980s, when it was picked up by the New Age movement.
In 1977, American esotericist Christopher Hills published the book Nuclear Evolution: The Rainbow Body, which presented a modified version of Leadbeater's occult anatomy. Whereas Leadbeater had drawn each chakra with intricately detailed shapes and multiple colors, Hills presented them as a sequence of centers, each one being associated with a color of the rainbow. Most of the subsequent New Age writers based their representations of the aura on Hills's interpretation of Leadbeater's ideas. Chakras became a part of mainstream esoteric speculations in the 1980s and 1990s. Many New Age techniques that aim to clear blockages of the chakras were developed during those years, such as crystal healing and aura-soma. By the late 1990s, chakras were less connected with their theosophical and Hinduist roots, and more infused with New Age ideas. A variety of New Age books proposed different links between each chakra and colors, personality traits, illnesses, Christian sacraments, etc. Various types of holistic healing within the New Age movement claim to use aura reading techniques, such as bioenergetic analysis, spiritual energy and energy medicine.
Auric energy
In yoga, participants attempt to focus on, or enhance, their "auric energy shield". The concept of auric energy is spiritual and is concerned with metaphysics. Some people think that the aura carries a person's soul after death.
Aura photography
There have been numerous attempts to capture an energy field around the human body, going as far back as photographs by French physician Hippolyte Baraduc in the 1890s. Supernatural interpretations of these images have often been the result of a lack of understanding of the simple natural phenomena behind them, such as heat emanating from a human body producing aura-like images under infrared photography.
In 1939, Semyon Davidovich Kirlian discovered that by placing an object or body part directly on photographic paper, and then passing a high voltage across the object, he would obtain the image of a glowing contour surrounding the object. This process came to be known as Kirlian photography. Some parapsychologists, such as Thelma Moss of UCLA, have proposed that these images show levels of psychic powers and bioenergies. However, studies have found that the Kirlian effect is caused by the presence of moisture on the object being photographed. Electricity produces an area of gas ionization around the object if it is moist, which is the case for living things. This causes an alteration of the electric charge pattern on the film. After rigorous experimentation, no mysterious process has been discovered in relation to Kirlian photography.
More recent attempts at capturing auras include the Aura Imaging cameras and software introduced by Guy Coggins in 1992. Coggins claims that his software uses biofeedback data to color the picture of the subject. The technique has failed to yield reproducible results.
Tests
Tests of psychic abilities to observe alleged aura emanations have repeatedly been met with failure.
One test involved placing people in a dark room and asking the psychic to state how many auras she could observe. Only chance results were obtained.
Recognition of auras has occasionally been tested on television. One test involved an aura reader standing on one side of a room with an opaque partition separating her from a number of slots which might contain either actual people or mannequins. The aura reader failed to identify the slots containing people, incorrectly stating that all contained people.
In another televised test another aura reader was placed before a partition where five people were standing. He claimed that he could see their auras from behind the partition. As each person moved out, the reader was asked to identify where that person was standing behind the slot. He identified two out of five correctly.
Attempts to prove the existence of auras scientifically have repeatedly met with failure; for example people are unable to see auras in complete darkness, and auras have never been successfully used to identify people when their identifying features are otherwise obscured in controlled tests. A 1999 study concluded that conventional sensory cues such as radiated body heat might be mistaken for evidence of a metaphysical phenomenon.
Scientific explanation
Psychologist Andrew Neher has written that "there is no good evidence to support the notion that auras are, in any way, psychic in origin." Studies in laboratory conditions have demonstrated that auras are instead best explained as visual illusions known as afterimages. Neurologists contend that people may perceive auras because of effects within the brain: epilepsy, migraines, or the influence of psychedelic drugs such as LSD.
It has been suggested that auras may result from synaesthesia. However, a 2012 study discovered no link between auras and synaesthesia, concluding "the discrepancies found suggest that both phenomena are phenomenological and behaviourally dissimilar." Clinical neurologist Steven Novella has written: "Given the weight of the evidence it seems that the connection between auras and synaesthesia is speculative and based on superficial similarities that are likely coincidental."
Other causes may include disorders within the visual system provoking optical effects.
Bridgette Perez, in a review for the Skeptical Inquirer, wrote: "perceptual distortions, illusions, and hallucinations might promote belief in auras... Psychological factors, including absorption, fantasy proneness, vividness of visual imagery, and after-images, might also be responsible for the phenomena of the aura."
Scientists have repeatedly concluded that the ability to see auras does not actually exist.
In popular culture
The book The Third Eye, written by Cyril Henry Hoskin under the pseudonym Lobsang Rampa, claims that Tibetan monks opened the spiritual third eye using trepanation in order to accelerate the development of clairvoyance and allow them to see the aura. It also includes body-gazing techniques purported to help achieve aura visualization. The book is considered by some to be a hoax.
See also
Aureola
Clairvoyance
Confirmation bias
Energy field disturbance
Halo (religious iconography)
Human Design
Lesya
List of topics characterized as pseudoscience
Metaphysics
Scientific skepticism
Spirit photography
References
Works cited
External links
Auras in the "Skeptic's dictionary"
How Aura Photography Invaded Instagram
Energy (esotericism)
Hindu philosophical concepts
New Age
Paranormal terminology
Pseudoscience
Theosophical philosophical concepts
Vitalism
Parapsychology | Aura (paranormal) | Biology | 1,821 |
14,763,146 | https://en.wikipedia.org/wiki/BACH1 | Transcription regulator protein BACH1 is a protein that in humans is encoded by the BACH1 gene.
Function
This gene encodes a transcription factor that belongs to the cap'n'collar type of basic region leucine zipper factor family (CNC-bZip). The encoded protein contains broad complex, tramtrack, bric-a-brac/poxvirus and zinc finger (BTB/POZ) domains, which is atypical of CNC-bZip family members. These BTB/POZ domains facilitate protein-protein interactions and formation of homo- and/or hetero-oligomers. The C-terminus of the protein is a leucine zipper of the bzip_maf family. When this protein forms a heterodimer with MafK, it binds the Maf recognition element (MARE) and represses transcription. Multiple alternatively spliced transcript variants have been identified for this gene. Some exons of this gene overlap with some exons from the GRIK1-AS2 gene, which is transcribed in an opposite orientation to this gene but does not encode a protein.
See also
Small Maf (sMaf)
Bach1-sMaf heterodimer
References
Further reading
External links
Transcription factors | BACH1 | Chemistry,Biology | 268 |
2,856,792 | https://en.wikipedia.org/wiki/List%20of%20cybercriminals | Convicted computer criminals are people who are caught and convicted of computer crimes such as breaking into computers or computer networks. Computer crime can be broadly defined as criminal activity involving information technology infrastructure, including illegal access (unauthorized access), illegal interception (by technical means of non-public transmissions of computer data to, from or within a computer system), data interference (unauthorized damaging, deletion, deterioration, alteration or suppression of computer data), systems interference (interfering with the functioning of a computer system by inputting, transmitting, damaging, deleting, deteriorating, altering or suppressing computer data), misuse of devices, forgery (or identity theft) and electronic fraud.
In the infancy of the hacker subculture and the computer underground, criminal convictions were rare because there was an informal code of ethics that was followed by white hat hackers. Proponents of hacking claim to be motivated by artistic and political ends, but are often unconcerned about the use of criminal means to achieve them. White hat hackers break past computer security for non-malicious reasons and do no damage, akin to breaking into a house and looking around. They enjoy learning and working with computer systems, and by this experience gain a deeper understanding of electronic security. As the computer industry matured, individuals with malicious intentions (black hats) would emerge to exploit computer systems for their own personal profit.
Convictions of computer crimes, or hacking, began as early as 1984 with the case of The 414s from the 414 area code in Milwaukee. In that case, six teenagers broke into a number of high-profile computer systems, including Los Alamos National Laboratory, Sloan-Kettering Cancer Center and Security Pacific Bank. On May 1, 1984, one of the 414s, Gerald Wondra, was sentenced to two years of probation. In May 1986, the first computer trespass conviction to result in a jail sentence was handed down to Michael Princeton Wilkerson, who received two weeks in jail for his infiltration of Microsoft, Sundstrand Corp., Kenworth Truck Co. and Resources Conservation Co.
In 2006, a prison term of nearly five years was handed down to Jeanson James Ancheta, who created hundreds of zombie computers to do his bidding via giant bot networks or botnets. He then sold the botnets to the highest bidder, who in turn used them for denial-of-service (DoS) attacks.
The longest sentence handed down for computer crimes is that of Albert Gonzalez, at 20 years. The next longest sentences are 13 years for Max Butler, 108 months for Brian Salcedo, handed down in 2004 and upheld in 2006 by the U.S. 4th Circuit Court of Appeals, and 68 months for Kevin Mitnick in 1999.
Computer criminals
See also
Timeline of computer security hacker history
References
External links
Convicted computer criminals
Hacking (computer security)
Lists of criminals | List of cybercriminals | Technology | 580 |