id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
11,798,327 | https://en.wikipedia.org/wiki/Lachancea%20kluyveri | Lachancea kluyveri is an ascomycetous yeast associated with fruit flies, slime fluxes, soils and foods.
Habitat
The habitat of L. kluyveri is not well known because only about 30 isolates have been recorded. It is, however, thought to be environmentally widespread. First described as Saccharomyces kluyveri in 1956 from fruit flies in California, this species has been isolated from slime fluxes on trees, soils in North America and Europe, and cheeses. It has also been reported as an agent of disseminated mycosis in a patient with HIV/AIDS.
Biology
Lachancea kluyveri is a budding yeast related to Saccharomyces cerevisiae, or baker's yeast, the model organism intensively used in biochemistry, genetics and cell biology. In 2003 it was transferred from the genus Saccharomyces to the genus Lachancea named for Canadian mycologist and yeast biologist Marc-André Lachance. Saccharomyces cerevisiae and L. kluyveri have several fundamental differences that warrant genomic comparisons. First, like most cell types, L. kluyveri resorts to fermentation (degrading sugars in the absence of oxygen) only when oxygen is limiting. S. cerevisiae, on the other hand, prefers to carry out fermentation even in the presence of oxygen. This means that L. kluyveri makes a more efficient use of glucose for energy production. Therefore, L. kluyveri provides a contrasting model to one of the most unusual features of S. cerevisiae. Second, L. kluyveri has a simpler genome organization than S. cerevisiae: it appears to have become a species before the whole genome duplication that occurred in the Saccharomyces lineage. As a result, its genome is smaller (about 9.5 million base pairs) than that of S. cerevisiae with fewer duplicated genes. Additionally, L. kluyveri is becoming more widely used as a model organism and for industrial applications, such as the production of proteins, since its biomass yield is greater than that of S. cerevisiae due to more efficient use of glucose.
Sequencing information
The L. kluyveri genome was originally sequenced in 2002 to approximately 3.5× whole genome shotgun (WGS) coverage.
References
Fungal strawberry diseases
Fungi described in 1956
Saccharomycetaceae
Fungus species | Lachancea kluyveri | [
"Biology"
] | 535 | [
"Fungi",
"Fungus species"
] |
11,798,335 | https://en.wikipedia.org/wiki/Zygosaccharomyces%20florentinus | Zygosaccharomyces florentinus is a plant pathogen.
See also
List of strawberry diseases
References
Fungal plant pathogens and diseases
Fungal strawberry diseases
Saccharomycetaceae
Fungi described in 1938
Fungus species | Zygosaccharomyces florentinus | [
"Biology"
] | 47 | [
"Fungi",
"Fungus species"
] |
11,798,344 | https://en.wikipedia.org/wiki/Saccharomyces%20florentinus | Saccharomyces florentinus is a yeast which was previously known as Saccharomyces pyriformis.
It is a component of the ginger beer plant used in the making of traditional ginger beer.
References
Yeasts
Fungal strawberry diseases
Fungi described in 1952
florentinus
Ginger beer
Fungus species | Saccharomyces florentinus | [
"Biology"
] | 65 | [
"Yeasts",
"Fungi",
"Fungus species"
] |
11,798,358 | https://en.wikipedia.org/wiki/Septogloeum%20potentillae | Septogloeum potentillae is an ascomycete fungus that is a plant pathogen infecting strawberries. The species' validity is considered unconfirmed by GBIF, as it has very few recorded occurrences and has not been described in published literature for over a century.
References
External links
USDA ARS Fungal Database
Fungal strawberry diseases
Fungi described in 1896
Enigmatic Ascomycota taxa
Fungus species | Septogloeum potentillae | [
"Biology"
] | 85 | [
"Fungi",
"Fungus species"
] |
11,798,369 | https://en.wikipedia.org/wiki/Septoria%20fragariaecola | Septoria fragariaecola is a fungal plant pathogen infecting strawberries.
References
Fungi described in 1928
Fungal strawberry diseases
fragariaecola
Fungus species | Septoria fragariaecola | [
"Biology"
] | 33 | [
"Fungi",
"Fungus species"
] |
11,798,468 | https://en.wikipedia.org/wiki/Gnomonia%20iliau | Gnomonia iliau is a plant pathogen.
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Gnomoniaceae
Fungi described in 1912
Fungus species | Gnomonia iliau | [
"Biology"
] | 38 | [
"Fungi",
"Fungus species"
] |
11,798,496 | https://en.wikipedia.org/wiki/Marasmius%20stenophyllus | Marasmius stenophyllus is a fungal plant pathogen.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
stenophyllus
Taxa named by Camille Montagne
Fungus species | Marasmius stenophyllus | [
"Biology"
] | 45 | [
"Fungi",
"Fungus species"
] |
11,798,531 | https://en.wikipedia.org/wiki/Puccinia%20erianthi | Puccinia erianthi is a species of fungus and a plant pathogen.
It was originally found on the leaves of Erianthus fulvus (now called Eulalia aurea) in Punjab, India. It is a common cause of sugarcane rust.
See also
List of Puccinia species
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
erianthi
Fungi described in 1944
Fungus species | Puccinia erianthi | [
"Biology"
] | 90 | [
"Fungi",
"Fungus species"
] |
11,798,625 | https://en.wikipedia.org/wiki/Inonotus%20hispidus | Inonotus hispidus, commonly known as shaggy bracket, is a North American fungus and plant pathogen.
Description
The fruit bodies are generally semicircular and lumpy, measuring across. They are orangish with a lighter margin when fresh, blackening in age. The flesh is orangish and the spore print is brown.
Similar species
Inonotus quercustris is more frequent to the south, with Ischnoderma resinosum and Laetiporus persicinus also being similar.
Habitat and distribution
It is found on oak and other hardwoods through eastern North America.
Uses
This fungus has been used in eastern Asia as a popular remedy for many illnesses such as cancer, diabetes, and stomach ailments. In modern pharmacology, it has been shown to lower blood glucose levels, produce anti-tumor responses and improve overall health in mice.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
hispidus
Fungi of Europe
Fungi described in 1880
Fungus species | Inonotus hispidus | [
"Biology"
] | 212 | [
"Fungi",
"Fungus species"
] |
11,798,661 | https://en.wikipedia.org/wiki/Endothia%20radicalis | Endothia radicalis is a plant pathogen. It was discovered in 1916 by Stephen Bruner. He found it growing on eucalyptus, mango and avocado.
References
External links
Index Fungorum
USDA ARS Fungal Database
Diaporthales
Fungal plant pathogens and diseases
Fungi described in 1863
Fungus species | Endothia radicalis | [
"Biology"
] | 64 | [
"Fungi",
"Fungus species"
] |
11,798,667 | https://en.wikipedia.org/wiki/Endothia%20gyrosa | Endothia gyrosa, the orange hobnail canker, is a species of sac fungus in the family Cryphonectriaceae. It is the type species of the genus Endothia. While previously classified in the genus Melogramma, phylogenetic analyses have confirmed the independent status of this species. It is found on a variety of host genera in North America including Quercus, Fagus, Liquidambar, Acer, Ilex, Vitis and Prunus.
References
Diaporthales
Fungus species
Taxa named by Lewis David de Schweinitz
Fungi described in 1822 | Endothia gyrosa | [
"Biology"
] | 123 | [
"Fungi",
"Fungus species"
] |
4,116,107 | https://en.wikipedia.org/wiki/Anvil%20press | A multi-anvil press, or anvil press, is a type of device related to a machine press that is used to create extraordinarily high pressures within a small volume.
Anvil presses are used in materials science and geology for the synthesis and study of the different phases of materials under extreme pressure, as well as for the industrial production of valuable minerals, especially synthetic diamonds, as they mimic the pressures and temperatures that exist deep in the Earth. These instruments allow the simultaneous compression and heating of millimeter-sized solid-phase samples such as rocks, minerals, ceramics, glasses, composite materials, or metals and are capable of reaching pressures above 25 GPa (around 250,000 atmospheres) and temperatures exceeding 2,500 °C. This allows mineral physicists and petrologists studying the Earth's interior to experimentally reproduce the conditions found throughout the lithosphere and upper mantle, a region that spans the near surface to a depth of 700 km. In addition to pressing on the sample, the experiment passes an electric current through a furnace within the assembly to generate temperatures up to 2,200 °C. Although diamond anvil cells and light-gas guns can access even higher pressures, the multi-anvil apparatus can accommodate much larger samples, which simplifies sample preparation and improves the precision of measurements and the stability of the experimental parameters.
The multi-anvil press is a relatively rare research tool. Lawrence Livermore National Laboratory's two presses have been used for a variety of material property studies, including diffusion and deformation of ceramics and metals, deep-focus earthquakes, and the high-pressure stability of mineral phases.
History
The 6-8 multi-anvil apparatus was introduced by Kawai and Endo using a split steel sphere suspended in pressurized oil, later modified to use a hydraulic ram. In 1990, Walker et al. simplified the first compression stage by introducing the removable hatbox design, allowing ordinary machine presses to be converted into multi-anvil systems. A variety of assembly designs have been introduced and standardized, including the Walker castable and the COMPRES assemblies. Recent advances have focused on in-situ measurements and on standardizing materials and calibrations.
Basic design
A typical Kawai cell 6-8 multi-anvil apparatus uses air pumps to pressurize oil, which drives a vertical hydraulic ram to compress a cylindrical cavity known as a hatbox. This cavity is filled with six steel anvils, three facing up and three facing down, that converge on a set of eight tungsten carbide cubes. The interior corners of these cubes are truncated to fit an octahedral assembly. These octahedra range from 8 mm to 25 mm on edge and are typically composed of magnesium oxide or another material that deforms ductilely over the range of experimental conditions, to ensure the experiment is under hydrostatic stress. As this assembly is compressed, it extrudes out between the cubes, forming a gasket. A cylinder is drilled out between two opposite faces to accommodate the experiment. Experiments that require heating are surrounded by a cylindrical graphite or lanthanum chromite furnace, which can produce considerable heat by electrical resistance. However, the graphite furnace can be troublesome at higher pressures due to its tendency to transform into diamond. The DIA multi-anvil is the main alternative to the Kawai cell: it uses six anvils to compress a cubic sample.
Theory
In principle, the multi-anvil press is similar in design to a machine press except that it uses force magnification to amplify pressure by reducing the area over which force is applied: pressure equals force divided by area (P = F/A).
This is analogous to the mechanical advantage utilized by a lever, except the force is applied linearly, instead of angularly. For example, a typical multi-anvil could apply 9,806,650 N (equivalent to a load of 1000 t) onto a 10 mm octahedral assembly, which has a surface area of 346.41 mm2, to produce a pressure of 28.31 GPa inside the sample, while the pressure in the hydraulic ram is a mere 0.3 GPa. Therefore, using smaller assemblies can increase the pressure in the sample. The load that can be applied is limited by the compressive yield strength of the tungsten carbide cubes, especially for heated experiments. Even higher pressures, up to 90 GPa, have been achieved by using 14 mm sintered diamond cubes instead of tungsten carbide.
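A short numerical sketch of that relation, using the figures from the example above (a roughly 1000 t load on a 10 mm octahedral assembly). The helper names are illustrative, and the regular-octahedron surface-area formula 2√3·a² is the only assumption added here:

```python
import math

def octahedron_surface_area_mm2(edge_mm: float) -> float:
    """Surface area of a regular octahedron with the given edge length: 2*sqrt(3)*a^2."""
    return 2.0 * math.sqrt(3.0) * edge_mm ** 2

def sample_pressure_gpa(load_n: float, edge_mm: float) -> float:
    """Pressure on the octahedral assembly: force / area.
    1 N/mm^2 = 1 MPa, so dividing by 1000 gives GPa."""
    return load_n / octahedron_surface_area_mm2(edge_mm) / 1000.0

load_n = 9_806_650.0  # about 1000 t, as in the text
print(round(octahedron_surface_area_mm2(10.0), 2))   # 346.41 mm^2
print(round(sample_pressure_gpa(load_n, 10.0), 2))   # 28.31 GPa
```

Halving the assembly edge length quadruples the pressure for the same ram load, which is why smaller assemblies reach higher pressures.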
Measurements in the multi-anvil
Most sample analysis is conducted after the experiment is quenched and removed from the multi-anvil. However, it is also possible to perform measurements in-situ. Circuits, including thermocouples or pressure variable resistors, can be built into the assembly to accurately measure temperature and pressure. Acoustic interferometry can be used to measure seismic velocities through a material or to infer the density of materials. Resistivity can be measured by complex impedance spectroscopy. Magnetic properties can be measured using amplified nuclear magnetic resonance in specially configured multi-anvils. The DIA multi-anvil design often includes diamond or sapphire windows built into the tungsten anvils to allow x-rays or neutrons to penetrate into the sample. This type of device gives researchers at synchrotron and neutron spallation sources the capacity to perform diffraction experiments to measure the structure of samples under extreme conditions. This is essential for observing unquenchable phases of matter because they are kinetically and thermodynamically unstable at low temperatures and pressures. Viscosity and density of high-pressure melts can be measured in-situ using the sink float method and neutron tomography. In this method a sample is implanted with objects, such as platinum spheres, that have different density and neutron scattering properties compared to the material surrounding them, and the path of the object is tracked as it sinks, or floats, through the melt. Two objects with contrasting buoyancy can be used simultaneously to calculate the density.
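For the falling-sphere measurement just described, the settling velocity of a tracked marker is commonly converted into a viscosity estimate with Stokes' law. The sketch below is a simplified illustration under that assumption (laminar settling of a small rigid sphere, no wall or end corrections); the function name and the example numbers are hypothetical:

```python
def stokes_viscosity_pa_s(radius_m: float, rho_sphere: float, rho_melt: float,
                          settling_velocity_m_s: float, g: float = 9.81) -> float:
    """Falling-sphere (Stokes) viscosity estimate: eta = 2 r^2 (rho_s - rho_m) g / (9 v).
    Densities in kg/m^3, radius in m, velocity in m/s; returns Pa*s."""
    return 2.0 * radius_m ** 2 * (rho_sphere - rho_melt) * g / (9.0 * settling_velocity_m_s)

# Illustrative numbers only: a 50-micron platinum sphere sinking at 1 mm/s through a silicate melt.
print(stokes_viscosity_pa_s(50e-6, 21_450.0, 2_700.0, 1e-3))  # ~0.1 Pa*s
```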
Applications
Pressure, like temperature, is a basic thermodynamic parameter that influences the molecular structure, and thus the electrical, magnetic, thermal, optical and mechanical properties of materials. Devices like the multi-anvil apparatus allow us to observe the effect of high pressure on material structure and properties.
Multi-anvil presses are occasionally used in industry to produce minerals of exceptional purity, size and quality, especially high-pressure high-temperature (HPHT) synthetic diamonds and cubic boron nitride (c-BN). However, multi-anvils are high-cost devices and are very adaptable, so they are more often used as scientific instruments. Multi-anvils have three main scientific uses: 1) to synthesize novel high-pressure materials; 2) to change the phases of a material; 3) to examine the properties of materials at high pressures. In materials science this includes the synthesis of novel or useful materials with potential mechanical or electronic applications, such as high-pressure superconductors or ultra-hard substances. Geologists are primarily concerned with reproducing the conditions and materials found in the deep Earth, to study geological processes that cannot be directly observed. Minerals or rocks are synthesized to find what conditions are responsible for different mineral phases and textures. Geoscientists also use multi-anvils to measure the kinetics of reactions, density, viscosity, compressibility, diffusivity and thermal conductivity of rock under extreme conditions.
External links
The 1000-ton multi-anvil press at Caltech (archived version)
500 ton press at Oxford
References
Metal forming
Machine tools | Anvil press | [
"Engineering"
] | 1,562 | [
"Machine tools",
"Industrial machinery"
] |
4,116,113 | https://en.wikipedia.org/wiki/Jeffrey%20Harborne | Jeffrey Barry Harborne FRS (1 September 1928, in Bristol – 21 July 2002) was a British chemist who specialised in phytochemistry. He was Professor of Botany at the University of Reading, 1976–93, then Professor emeritus. He contributed to more than 40 books and 270 research papers and was a pioneer in ecological biochemistry, particularly in the complex chemical interactions between plants, microbes and insects.
Education
Harborne was educated at Wycliffe College, Stonehouse, Gloucestershire and the University of Bristol, where he graduated in chemistry in 1949. He earned a PhD in 1953 with a thesis on the naturally occurring oxygen heterocyclic compounds with Professor Wilson Baker (1900–2002).
Research
Between 1953 and 1955 he worked as a postdoc with Professor Theodore Albert Geissman at the University of California, Los Angeles, studying phenolic plant pigments, including anthocyanins. For the identification of these substances, he made use of ultraviolet-visible spectroscopy.
After his return to the UK, he joined the Potato Genetics group at the John Innes Research Institute, then located at Bayfordbury. Here he worked with K.S. Dodds on the phenolics of Solanum species, extending his knowledge of anthocyanins. This work grew to encompass a wide range of mostly garden plants. In addition to discovering novel anthocyanidins, he made in-depth studies of their glycosylation and began work on their acylation. During this time he forged links with E. C. Bate-Smith and Tony Swain at Cambridge, Swain arranging for him to edit his first book, The biochemistry of phenolic compounds. His time at the John Innes ended when the Potato Genetics group was wound up, and the institution itself moved to Norwich.
Between 1965 and 1968 Harborne worked as a research fellow at the University of Liverpool. After this, he was Reader in the Department of Botany at the University of Reading, England. In 1976 he became Professor in the Department of Botany at the University of Reading. Between 1987 and 1993 he was head of the Department of Botany at the University of Reading. In 1993 he retired. During his tenure at the University of Reading he also held positions as visiting professor at the Universidade Federal do Rio de Janeiro (1973), the University of Texas at Austin (1976), the University of California at Santa Barbara (1977) and the University of Illinois at Urbana-Champaign (1981).
Harborne investigated the role of flavonoids in interactions between plants and insects. He also investigated the relationship between anthocyanins and the ecology of pollination. He also studied the role of phytoalexins in members of the bean family (Fabaceae), the rose family (Rosaceae) and the carrot family (Apiaceae). He published on chemotaxonomy, as in his research articles on the genetic control of the expression of anthocyanins, flavones and aurones in the primrose family (Primulaceae), in snapdragons (Antirrhinum) and in a number of other plants. He also published on isoflavones and chemical ecology.
In his book Phytochemical Methods: A Guide to Modern Techniques of Plant Analysis, Harborne described a number of analytical methods in plant chemistry that he developed for studying the distribution of anthocyanins in major plant groups. In Comparative Biochemistry of the Flavonoids he described the biochemistry of flavonoids in various plant groups. In the scientific journal Natural Product Reports he wrote a series of review articles about the discovery of anthocyanins and other flavonoids. In his book Introduction to Ecological Biochemistry he described the ecological role of natural substances. The publication of this book is seen as the starting point of the study of environmental chemistry. He described developments in chemical ecology in a series of review articles in Natural Product Reports. He was (co-)author of about 270 research and review articles. He was also author or editor of some forty books.
From 1972 Harborne was the executive editor of the journal Phytochemistry. Between 1986 and 1999 he was chief editor of the journal. He was the founder of the journal Phytochemical Analysis and an editor of Methods in Plant Biochemistry.
Harborne received a number of awards during his lifetime. In 1985 the Linnean Society of London awarded him the Linnean Medal for his services to botany. He also received medals from the Phytochemical Society of Europe (PSE Medal) (1986) and the International Society of Chemical Ecology (1993). In 1993 he was awarded the Pergamon Phytochemistry Prize. In 1995 he was elected a Fellow of the Royal Society. In 2010 the University of Reading's Plant Science Laboratories, where he was Professor, were named the Harborne Building in his honour.
Publications
Biochemistry of Phenolic Compounds, 1964
Comparative Biochemistry of the Flavonoids, 1967
Phytochemical Phylogeny, 1970
Phytochemical Ecology, 1972
Phytochemical Methods, 1973, 3rd edn 1998
Introduction to Ecological Biochemistry, 1977, 4th edn 1993
Phytochemical Aspects of Plant and Animal Coevolution, 1978
Plant Chemosystematics, 1984
The Flavonoids: advances in research since 1986, 1994
The Handbook of Natural Flavonoids, vol 1 and 2, 1999
Phytochemical Dictionary, 1993, 2nd edn 1999
Dictionary of Plant Toxins, 1996
The Handbook of Flavonoid Pigments, 1999
The Handbook of Natural Flavonoids, 1999
Chemical Dictionary of Economic Plants, 2001
Career
Biochemist, the John Innes Institute, 1955–65
Research Fellow, University of Liverpool, 1965–68
Reader, the University of Reading, UK, 1968–76
Professor, Dept. of Botany, the University of Reading, UK, 1976–93
Visiting Professor, University of Texas at Austin, 1976
Visiting Professor, University of California, 1977
He was editor-in-chief of the journal Phytochemistry, 1972–98.
Honours
Fellow of the Royal Society of Chemistry, 1956
Fellow of the Biochemical Society, 1957
Plenary Lecturer, IUPAC Natural Products Symposium, 1976
Gold Medal in Botany, Linnean Society, 1985
Fellow of the Linnean Society, 1986
Silver Medal, Phytochemical Society of Europe, 1986
Silver Medal, International Society of Chemical Ecology, 1993
Fellow of the Institute of Biology, 1994
Fellow of the Royal Society, 1995
Personal life
His niece, Katharine Harborne, studied Horticultural Botany at the University of Reading from 1979 to 1981 and became a plant pathologist researching the epidemiology of Sugarcane Mosaic Virus for the South African Sugar Association at Mount Edgecombe.
References
1928 births
2002 deaths
Fellows of the Royal Society
People educated at Wycliffe College, Gloucestershire
Alumni of the University of Bristol
Chemical ecologists
Academics of the University of Reading
Fellows of the Linnean Society of London | Jeffrey Harborne | [
"Chemistry"
] | 1,447 | [
"Chemical ecologists",
"Chemical ecology"
] |
4,116,487 | https://en.wikipedia.org/wiki/PComb3H | pComb3H, a derivative of pComb3 optimized for expression of human fragments, is a phagemid used to express proteins such as zinc finger proteins and antibody fragments on phage pili for the purpose of phage display selection.
For the purpose of phage production, it contains the bacterial ampicillin resistance gene (for β-lactamase), allowing the growth of only transformed bacteria.
References
Molecular biology
Plasmids | PComb3H | [
"Chemistry",
"Biology"
] | 97 | [
"Biochemistry",
"Plasmids",
"Bacteria",
"Molecular biology"
] |
4,116,488 | https://en.wikipedia.org/wiki/Initial%20algebra | In mathematics, an initial algebra is an initial object in the category of F-algebras for a given endofunctor F. This initiality provides a general framework for induction and recursion.
Examples
Functor 1 + X
Consider the endofunctor F sending X to 1 + X, where 1 is a one-point (singleton) set, a terminal object in the category. An algebra for this endofunctor is a set X (called the carrier of the algebra) together with a function f : (1 + X) → X. Defining such a function amounts to defining a point x ∈ X and a function X → X.
Define
zero : 1 → N,  zero(∗) = 0
and
succ : N → N,  succ(n) = n + 1.
Then the set N of natural numbers together with the function [zero, succ] : (1 + N) → N is an initial F-algebra. The initiality (the universal property for this case) is not hard to establish; the unique homomorphism to an arbitrary F-algebra (A, [e, f]), for e an element of A and f a function on A, is the function sending the natural number n to fⁿ(e), that is, f(f(…f(e)…)), the n-fold application of f to e.
The set of natural numbers is the carrier of an initial algebra for this functor: the point is zero and the function is the successor function.
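Concretely, the unique homomorphism just described is the familiar fold (iterator) for the natural numbers. The following sketch is illustrative only; Python is used because the surrounding text contains no code of its own, and the name fold_nat is not standard:

```python
def fold_nat(e, f, n):
    """Unique homomorphism from the initial (1 + X)-algebra (the natural numbers
    with zero and successor) to the algebra given by element e and function f:
    the number n is sent to the n-fold application of f to e, i.e. f(f(...f(e)...))."""
    result = e
    for _ in range(n):
        result = f(result)
    return result

# Example: interpret the naturals in the algebra (0, x -> x + 2); 5 maps to f^5(e) = 10.
print(fold_nat(0, lambda x: x + 2, 5))  # 10
```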
Functor 1 + (N × X)
For a second example, consider the endofunctor sending X to 1 + (N × X) on the category of sets, where N is the set of natural numbers. An algebra for this endofunctor is a set X together with a function 1 + (N × X) → X. To define such a function, we need a point x ∈ X and a function N × X → X. The set of finite lists of natural numbers is an initial algebra for this functor. The point is the empty list, and the function is cons, taking a number and a finite list, and returning a new finite list with the number at the head.
In categories with binary coproducts, the definitions just given are equivalent to the usual definitions of a natural number object and a list object, respectively.
Final coalgebra
Dually, a final coalgebra is a terminal object in the category of F-coalgebras. The finality provides a general framework for coinduction and corecursion.
For example, using the same functor 1 + X as before, a coalgebra is a set X together with a function f : X → (1 + X). Defining such a function amounts to defining a partial function f′ : X ⇸ X whose domain is formed by those x for which f(x) does not belong to 1. Having such a structure, we can define a chain of sets: X₀ being the subset of X on which f′ is not defined, X₁ whose elements map into X₀ by f′, X₂ whose elements map into X₁ by f′, etc., and X_ω containing the remaining elements of X. With this in view, the set N ∪ {ω}, consisting of the set of natural numbers extended with a new element ω, is the carrier of the final coalgebra, where f′ is the predecessor function (the inverse of the successor function) on the positive naturals, but acts like the identity on the new element ω: f′(n + 1) = n, f′(ω) = ω. This set N ∪ {ω}, the carrier of the final coalgebra of the functor 1 + X, is known as the set of conatural numbers.
For a second example, consider the same endofunctor 1 + (N × X) as before. In this case the carrier of the final coalgebra consists of all lists of natural numbers, finite as well as infinite. The operations are a test function testing whether a list is empty, and a deconstruction function defined on non-empty lists returning a pair consisting of the head and the tail of the input list.
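As an illustration of corecursion for this functor, the sketch below (hypothetical names, written as a Python generator so that infinite output is unproblematic) builds a list coinductively from a step function that either stops or emits a head together with a new seed:

```python
from itertools import islice

def unfold(step, seed):
    """Anamorphism for the functor 1 + (N x X): `step` returns either None
    (the list is finished) or a pair (head, next_seed). Because this is a
    generator, infinite lists are produced lazily rather than diverging."""
    while True:
        result = step(seed)
        if result is None:
            return
        head, seed = result
        yield head

# A finite list: counting down from 3 gives 3, 2, 1.
print(list(unfold(lambda n: None if n == 0 else (n, n - 1), 3)))

# An infinite list: take the first five elements of the constant stream of 7s.
print(list(islice(unfold(lambda s: (7, s), None), 5)))
```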
Theorems
Initial algebras are minimal (i.e., have no proper subalgebra).
Final coalgebras are simple (i.e., have no proper quotients).
Use in computer science
Various finite data structures used in programming, such as lists and trees, can be obtained as initial algebras of specific endofunctors.
While there may be several initial algebras for a given endofunctor, they are unique up to isomorphism, which informally means that the "observable" properties of a data structure can be adequately captured by defining it as an initial algebra.
To obtain the type of lists whose elements are members of set A, consider that the list-forming operations are:
nil : 1 → List(A)
cons : A × List(A) → List(A)
Combined into one function, they give:
[nil, cons] : 1 + (A × List(A)) → List(A),
which makes this an F-algebra for the endofunctor F sending X to 1 + (A × X). It is, in fact, the initial F-algebra. Initiality is established by the function known as foldr in functional programming languages such as Haskell and ML.
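A minimal sketch of that initiality map in Python (the article names Haskell and ML; Python is used here only to keep a single example language, and the argument names nil and cons are illustrative):

```python
def foldr(nil, cons, xs):
    """Unique homomorphism out of the initial algebra of lists over A:
    replace the empty list by `nil` and every cons cell by `cons`."""
    result = nil
    for x in reversed(xs):
        result = cons(x, result)
    return result

# Sum of a list: interpret nil as 0 and cons as addition.
print(foldr(0, lambda head, acc: head + acc, [1, 2, 3, 4]))  # 10

# Length of a list: interpret nil as 0 and cons as "add one".
print(foldr(0, lambda _head, acc: acc + 1, [1, 2, 3, 4]))    # 4
```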
Likewise, binary trees with elements at the leaves can be obtained as the initial algebra of the endofunctor sending X to A + (X × X).
Types obtained this way are known as algebraic data types.
Types defined by using the least fixed point construct with functor F can be regarded as an initial F-algebra, provided that parametricity holds for the type.
In a dual way, a similar relationship exists between the notion of greatest fixed point and a terminal F-coalgebra, with applications to coinductive types. These can be used for allowing potentially infinite objects while maintaining the strong normalization property. In the strongly normalizing (each program terminates) Charity programming language, coinductive data types can be used for achieving surprising results, e.g. defining lookup constructs to implement such “strong” functions as the Ackermann function.
See also
Algebraic data type
Catamorphism
Anamorphism
Notes
External links
Categorical programming with inductive and coinductive types by Varmo Vene
Recursive types for free! by Philip Wadler, University of Glasgow, 1990-2014.
Initial Algebra and Final Coalgebra Semantics for Concurrency by J.J.M.M. Rutten and D. Turi
Initiality and finality from CLiki
Typed Tagless Final Interpreters by Oleg Kiselyov
Category theory
Functional programming
Type theory | Initial algebra | [
"Mathematics"
] | 1,128 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical logic",
"Mathematical objects",
"Type theory",
"Fields of abstract algebra",
"Category theory",
"Mathematical relations"
] |
4,116,762 | https://en.wikipedia.org/wiki/Adenosine%20A1%20receptor | The adenosine A1 receptor (A1AR) is one member of the adenosine receptor group of G protein-coupled receptors with adenosine as endogenous ligand.
Biochemistry
A1 receptors are implicated in sleep promotion by inhibiting wake-promoting cholinergic neurons in the basal forebrain. A1 receptors are also present in smooth muscle throughout the vascular system.
The adenosine A1 receptor has been found to be ubiquitous throughout the entire body.
Signaling
Activation of the adenosine A1 receptor by an agonist causes binding of Gi1/2/3 or Go protein. Binding of Gi1/2/3 causes an inhibition of adenylate cyclase and, therefore, a decrease in the cAMP concentration. An increase of the inositol triphosphate/diacylglycerol concentration is caused by an activation of phospholipase C, whereas the elevated levels of arachidonic acid are mediated by DAG lipase, which cleaves DAG to form arachidonic acid.
Several types of potassium channels are activated but N-, P-, and Q-type calcium channels are inhibited.
Effect
This receptor has an inhibitory function on most of the tissues in which it rests. In the brain, it slows metabolic activity by a combination of actions. At the neuron's synapse, it reduces synaptic vesicle release.
Ligands
Caffeine, as well as theophylline, has been found to antagonize both A1 and A2A receptors in the brain.
Agonists
2-Chloro-N(6)-cyclopentyladenosine (CCPA).
N6-Cyclopentyladenosine
N(6)-cyclohexyladenosine
Tecadenoson ((2R,3S,4R)-2-(hydroxymethyl)-5-(6-((R)-tetrahydrofuran-3-ylamino)-9H-purin-9-yl)-tetrahydrofuran-3,4-diol)
Selodenoson ((2S,3S,4R)-5-(6-(cyclopentylamino)-9H-purin-9-yl)-N-ethyl-3,4-dihydroxytetrahydrofuran-2-carboxamide)
Capadenoson (BAY68-4986)
Benzyloxy-cyclopentyladenosine (BnOCPA) is an A1R selective agonist.
PAMs
2‑Amino-3-(4′-chlorobenzoyl)-4-substituted-5-arylethynyl thiophene # 4e
Antagonists
Non-selective
Caffeine
Theophylline
CGS-15943
Selective
8-Cyclopentyl-1,3-dimethylxanthine (CPX / 8-cyclopentyltheophylline)
8-Cyclopentyl-1,3-dipropylxanthine (DPCPX)
8-Phenyl-1,3-dipropylxanthine
Bamifylline
BG-9719
BG-9928
FK-453
FK-838
Rolofylline (KW-3902)
N-0861
ISAM-CV202
In the heart
In the heart, A1 receptors play roles in electrical pacing (chronotropy and dromotropy), fluid balance, local sympathetic regulation, and metabolism.
When bound by adenosine, A1 receptors inhibit impulses generated in supraventricular tissue (SA node, AV node) and the Bundle of His/Purkinje system, leading to negative chronotropy (slowing of the heart rate). Specifically, A1 receptor activation leads to inactivation of the inwardly rectifying K+ current and inhibition of the inward Ca2+ current (ICa) and the 'funny' hyperpolarization-activated current (If). Adenosine agonism of A1ARs also inhibits release of norepinephrine from cardiac nerves. Norepinephrine is a positive chronotrope, inotrope, and dromotrope, through its agonism of β adrenergic receptors on pacemaker cells and ventricular myocytes.
Collectively, these mechanisms lead to a myocardial depressant effect by decreasing the conduction of electrical impulses and suppressing pacemaker cell function, resulting in a decrease in heart rate. This makes adenosine a useful medication for treating and diagnosing tachyarrhythmias, or excessively fast heart rates. This effect on the A1 receptor also explains why there is a brief moment of cardiac standstill when adenosine is administered as a rapid IV push during cardiac resuscitation. The rapid infusion causes a momentary myocardial stunning effect.
In normal physiological states, this serves as a protective mechanism. However, in altered cardiac function, such as hypoperfusion caused by hypotension, heart attack or cardiac arrest caused by nonperfusing bradycardias, adenosine has a negative effect on physiological functioning by preventing necessary compensatory increases in heart rate and blood pressure that attempt to maintain cerebral perfusion.
Metabolically, A1AR activation by endogenous adenosine across the body reduces plasma glucose, lactate, and insulin levels; A2aR activation, however, increases glucose and lactate levels to an extent greater than the A1AR effect on glucose and lactate. Thus, intravascular administration of adenosine increases the amount of glucose and lactate available in the blood for cardiac myocytes. A1AR activation also partially inhibits glycolysis, slowing its rate to align with oxidative metabolism, which limits post-ischemic damage through reduced H+ generation.
In the state of myocardial hypertrophy and remodeling, interstitial adenosine and the expression of the A1AR receptor are both increased. After transition to heart failure however, overexpression of A1AR is no longer present. Excess A1AR expression can induce cardiomyopathy, cardiac dilatation, and cardiac hypertrophy. Cardiac failure may involve increased A1AR expression and decreased adenosine in physical models of cardiac overload and in dysfunction induced by TNFα. Heart failure often involves secretion of atrial natriuretic peptide to compensate for reduced renal perfusion and thus, secretion of electrolytes. A1AR activation also increases secretion of atrial natriuretic peptide from atrial myocytes.
References
External links
Adenosine receptors | Adenosine A1 receptor | [
"Chemistry"
] | 1,425 | [
"Adenosine receptors",
"Signal transduction"
] |
4,116,838 | https://en.wikipedia.org/wiki/ETwinning | The eTwinning action is an initiative of the European Commission that aims to encourage European schools to collaborate using Information and Communication Technologies (ICT) by providing the necessary infrastructure (online tools, services, support). Teachers registered in the eTwinning action can form partnerships and develop collaborative, pedagogical school projects in any subject area, with the sole requirements being to employ ICT to develop the project and to collaborate with teachers from other European countries.
Formation
The project was founded in 2005 under the European Union's e-Learning program and it has been integrated in the Lifelong Learning program since 2007. eTwinning is part of Erasmus+, the EU program for education, training, and youth.
History
The eTwinning action was launched in January 2005. Its main objectives complied with the decision by the Barcelona European Council in March 2002 to promote school twinning as an opportunity for all students to learn and practice ICT skills and to promote awareness of the multicultural European model of society.
More than 13,000 schools were involved in eTwinning within its first year. By 2008, over 50,000 teachers and 4,000 projects had been registered, and a new eTwinning platform was launched. As of January 2018, over 70,000 projects were running in classrooms across Europe. By 2021, more than 226,000 schools had taken part in this work.
In early 2009, the eTwinning motto changed from "School partnerships in Europe" to "The community for schools in Europe".
In 2022, eTwinning moved to a new platform.
Participating countries
Member States of the European Union are part of eTwinning: Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden and The Netherlands. Overseas territories and countries are also eligible. In addition, Albania, Bosnia and Herzegovina, North Macedonia, Iceland, Liechtenstein, Norway, Serbia and Turkey can also take part.
Seven countries from the European neighbourhood (including Armenia, Azerbaijan, Georgia, Moldova and Ukraine) are also part of eTwinning via the eTwinning Plus scheme, as well as countries which are part of the Eastern Partnership, and Tunisia and Jordan (which are part of the Euro-Mediterranean Partnership, EUROMED).
Operation
The main concept behind eTwinning is that schools are paired with another school elsewhere in Europe and they collaboratively develop a project, also known as eTwinning project. The two schools then communicate online (for example, by e-mail or video conferencing) to collaborate, share and learn from each other. eTwinning encourages and develops ICT skills as the main activities inherently use information technology. Being 'twinned' with a foreign school also encourages cross-cultural exchanges of knowledge, fosters students' intercultural awareness, and improves their communication skills.
eTwinning projects can last from one week to several months, and can go on to create permanent relationships between schools. Primary and secondary schools within the European Union member states can participate, in addition to schools from Turkey, Norway and Iceland.
In contrast with other European programs, such as the Comenius programme, all communication is via the internet; therefore there is no need for grants. Along the same lines, face-to-face meetings between partner schools are not required, although they are not prohibited.
European Schoolnet has been granted the role of Central Support Service (CSS) at European level. eTwinning is also supported by a network of National Support Services.
References
Gilleran, A. (2007) eTwinning - A New Path for European Schools, eLearning Papers
European Schoolnet (2007) Learning with eTwinning: A Handbook for Teachers 2007
European Schoolnet (2006) Learning with eTwinning
European Schoolnet (2008) eTwinning: Adventures in language and culture
Konstantinidis, A. (2012). Implementing Learning-Oriented Assessment in an eTwinning Online Course for Greek Teachers. MERLOT Journal of Online Learning and Teaching, 8(1), 45-62
External links
The official portal for eTwinning (available in 28 languages)
European Schoolnet
German eTwinning website
British Council eTwinning
Greek eTwinning website
eTwinning - Italy
Spanish eTwinning website
French eTwinning website
Página Portuguesa do eTwinning
Press Release for 2008 etwinning prizes
Video clips
eTwinning YouTube channel
Education in the European Union
Educational organizations based in Europe
Educational projects
Educational technology non-profits
Information technology organizations based in Europe
Information technology projects | ETwinning | [
"Technology",
"Engineering"
] | 942 | [
"Information technology",
"Information technology projects"
] |
4,116,856 | https://en.wikipedia.org/wiki/Adherence%20%28medicine%29 | In medicine, patient compliance (also adherence, capacitance) describes the degree to which a patient correctly follows medical advice. Most commonly, it refers to medication or drug compliance, but it can also apply to other situations such as medical device use, self care, self-directed exercises, or therapy sessions. Both patient and health-care provider affect compliance, and a positive physician-patient relationship is the most important factor in improving compliance. Access to care plays a role in patient adherence, with greater wait times to access care contributing to greater absenteeism. The cost of prescription medication also plays a major role.
Compliance can be confused with concordance, which is the process by which a patient and clinician make decisions together about treatment.
Worldwide, non-compliance is a major obstacle to the effective delivery of health care. 2003 estimates from the World Health Organization indicated that only about 50% of patients with chronic diseases living in developed countries follow treatment recommendations with particularly low rates of adherence to therapies for asthma, diabetes, and hypertension. Major barriers to compliance are thought to include the complexity of modern medication regimens, poor health literacy and not understanding treatment benefits, the occurrence of undiscussed side effects, poor treatment satisfaction, cost of prescription medicine, and poor communication or lack of trust between a patient and his or her health-care provider. Efforts to improve compliance have been aimed at simplifying medication packaging, providing effective medication reminders, improving patient education, and limiting the number of medications prescribed simultaneously. Studies show a great variation in terms of characteristics and effects of interventions to improve medicine adherence. It is still unclear how adherence can consistently be improved in order to promote clinically important effects.
Terminology
In medicine, compliance (synonymous with adherence, capacitance) describes the degree to which a patient correctly follows medical advice. Most commonly, it refers to medication or drug compliance, but it can also apply to medical device use, self care, self-directed exercises, or therapy sessions. Both patient and health-care provider affect compliance, and a positive physician-patient relationship is the most important factor in improving compliance.
As of 2003, US health care professionals more commonly used the term "adherence" to a regimen rather than "compliance", because it has been thought to reflect better the diverse reasons for patients not following treatment directions in part or in full. Additionally, the term adherence includes the ability of the patient to take medications as prescribed by their physician with regards to the correct drug, dose, route, timing, and frequency. It has been noted that compliance may only refer to passively following orders. The term adherence is often used to imply a collaborative approach to decision-making and treatment between a patient and clinician.
The term concordance has been used in the United Kingdom to involve a patient in the treatment process to improve compliance, and refers to a 2003 NHS initiative. In this context, the patient is informed about their condition and treatment options, involved in the decision as to which course of action to take, and partially responsible for monitoring and reporting back to the team. Informed intentional non-adherence is when the patient, after understanding the risks and benefits, chooses not to take the treatment.
As of 2005, the preferred terminology remained a matter of debate. As of 2007, concordance has been used to refer specifically to patient adherence to a treatment regimen which the physician sets up collaboratively with the patient, to differentiate it from adherence to a physician-only prescribed treatment regimen. Despite the ongoing debate, adherence has been the preferred term for the World Health Organization, The American Pharmacists Association, and the U.S. National Institutes of Health Adherence Research Network. The Medical Subject Headings of the United States National Library of Medicine defines various terms with the words adherence and compliance. Patient Compliance and Medication Adherence are distinguished under the MeSH tree of Treatment Adherence and Compliance.
Adherence factors
An estimated half of those for whom treatment regimens are prescribed do not follow them as directed.
Side effects
Negative side effects of a medicine can influence adherence.
Health literacy
Cost and poor understanding of the directions for the treatment, referred to as 'health literacy' have been known to be major barriers to treatment adherence. There is robust evidence that education and physical health are correlated. Poor educational attainment is a key factor in the cycle of health inequalities.
Educational qualifications help to determine an individual's position in the labour market, their level of income and therefore their access to resources.
Literacy
In 1999 one fifth of UK adults, nearly seven million people, had problems with basic skills, especially functional literacy and functional numeracy, described as: "The ability to read, write and speak in English, and to use mathematics at a level necessary to function at work and in society in general." This made it impossible for them to effectively take medication, read labels, follow drug regimes, and find out more.
In 2003, 20% of adults in the UK had a long-standing illness or disability, and a national study for the UK Department of Health found that more than one-third of people with poor or very poor health had literacy skills of Entry Level 3 or below.
Low levels of literacy and numeracy were found to be associated with socio-economic deprivation. Adults in more deprived areas, such as the North East of England, performed at a lower level than those in less deprived areas such as the South East. Local authority tenants and those in poor health were particularly likely to lack basic skills.
A 2002 analysis of over 100 UK local education authority areas found educational attainment at 15–16 years of age to be strongly associated with coronary heart disease and subsequent infant mortality.
A study of the relationship of literacy to asthma knowledge revealed that 31% of asthma patients with a reading level of a ten-year-old knew they needed to see the doctors, even when they were not having an asthma attack, compared to 90% with a high school graduate reading level.
Treatment cost
In 2013 the US National Community Pharmacists Association sampled, over one month, 1,020 Americans above age 40 with an ongoing prescription to take medication for a chronic condition and gave a grade of C+ on adherence. In 2009, non-adherence contributed to an estimated cost of $290 billion annually. In 2012, an increase in patients' share of medication costs was found to be associated with low adherence to medication.
The United States is among the countries with the highest prices of prescription drugs mainly attributed to the government's lack of negotiating lower prices with monopolies in the pharmaceutical industry especially with brand name drugs. In order to manage medication costs, many US patients on long term therapies fail to fill their prescription, skip or reduce doses. According to a Kaiser Family Foundation survey in 2015, about three quarters (73%) of the public think drug prices are unreasonable and blame pharmaceutical companies for setting prices so high. In the same report, half of the public reported that they are taking prescription drugs and a "quarter (25%) of those currently taking prescription medicine report they or a family member have not filled a prescription in the past 12 months due to cost, and 18 percent report cutting pills in half or skipping doses". In a 2009 comparison to Canada, only 8% of adults reported to have skipped their doses or not filling their prescriptions due to the cost of their prescribed medications.
Age
The elderly often have multiple health conditions, and around half of all NHS medicines are prescribed for people over retirement age, despite representing only about 20% of the UK population. The recent National Service Framework on the care of older people highlighted the importance of taking and effectively managing medicines in this population. However, elderly individuals may face challenges, including multiple medications with frequent dosing, and potentially decreased dexterity or cognitive functioning. Patient knowledge is a concern that has been observed.
In 1999 Cline et al. identified several gaps in knowledge about medication in elderly patients discharged from hospital. Despite receiving written and verbal information, 27% of older people discharged after heart failure were classed as non-adherent within 30 days. Half the patients surveyed could not recall the dose of the medication that they were prescribed and nearly two-thirds did not know what time of day to take them. A 2001 study by Barat et al. evaluated the medical knowledge and factors of adherence in a population of 75-year-olds living at home. They found that 40% of elderly patients do not know the purpose of their regimen and only 20% knew the consequences of non-adherence. Comprehension, polypharmacy, living arrangement, multiple doctors, and use of compliance aids were correlated with adherence.
In children with asthma, self-management compliance is critical and co-morbidities have been noted to affect outcomes; in 2013 it was suggested that electronic monitoring may help adherence.
Ethnicity
People of different ethnic backgrounds have unique adherence issues through literacy, physiology, culture or poverty. There are few published studies on adherence in medicine taking in ethnic minority communities. Ethnicity and culture influence some health-determining behaviour, such as participation in screening programmes and attendance at follow-up appointments.
Prieto et al. emphasised the influence of ethnic and cultural factors on adherence. They pointed out that groups differ in their attitudes, values and beliefs about health and illness. These views could affect adherence, particularly with preventive treatments and medication for asymptomatic conditions. Additionally, some cultures fatalistically attribute their good or poor health to their god(s), and attach less importance to self-care than others.
Measures of adherence may need to be modified for different ethnic or cultural groups. In some cases, it may be advisable to assess patients from a cultural perspective before making decisions about their individual treatment.
Recent studies have shown that black patients and those with non-private insurance are more likely to be labeled as non-adherent. The increased risk is observed even in patients with a controlled A1c, and after controlling for other socioeconomic factors.
Prescription fill rates
Not all patients will fill the prescription at a pharmacy. In a 2010 U.S. study, 20–30% of prescriptions were never filled at the pharmacy. Reasons people do not fill prescriptions include the cost of the medication: a US nationwide survey of 1,010 adults in 2001 found that 22% chose not to fill prescriptions because of the price, which is similar to the 20–30% overall rate of unfilled prescriptions. Other factors are doubting the need for medication, or preference for self-care measures other than medication. Convenience, side effects and lack of demonstrated benefit are also factors.
Medication Possession Ratio
Prescription medical claims records can be used to estimate medication adherence based on fill rate. Patients can be routinely defined as 'adherent patients' if the amount of medication furnished is at least 80%, based on days' supply of medication divided by the number of days the patient should be consuming the medication. This percentage is called the medication possession ratio (MPR). Work from 2013 has suggested that a medication possession ratio of 90% or above may be a better threshold for deeming consumption 'adherent'.
Two forms of MPR can be calculated, fixed and variable. Calculating either is relatively straightforward, for Variable MPR (VMPR) it is calculated as the number of days' supply divided by the number of elapsed days including the last prescription.
For the Fixed MPR (FMPR) the calculation is similar but the denominator is the number of days in a year whilst the numerator is constrained to be the number of days' supply within the year that the patient has been prescribed.
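A minimal sketch of the two ratios as described above; the 80% threshold follows the text, while the function names, the simple date handling, and the reading that the elapsed period ends when the last fill's supply runs out are illustrative assumptions:

```python
from datetime import date

def variable_mpr(fills: list[tuple[date, int]]) -> float:
    """Variable MPR: total days' supply divided by the days elapsed from the
    first fill to the end of the last fill's supply (one reading of
    "including the last prescription")."""
    fills = sorted(fills)
    total_supply = sum(days for _, days in fills)
    elapsed = (fills[-1][0] - fills[0][0]).days + fills[-1][1]
    return total_supply / elapsed

def fixed_mpr(days_supplied_in_year: int, days_in_year: int = 365) -> float:
    """Fixed MPR: days' supply dispensed within the year over the days in the year."""
    return days_supplied_in_year / days_in_year

def is_adherent(mpr: float, threshold: float = 0.80) -> bool:
    """Conventional cut-off: an MPR of at least 80% counts as adherent."""
    return mpr >= threshold

fills = [(date(2023, 1, 1), 30), (date(2023, 2, 5), 30), (date(2023, 3, 20), 30)]
mpr = variable_mpr(fills)
print(round(mpr, 2), is_adherent(mpr))  # 0.83 True
```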
For medication in tablet form it is relatively straightforward to calculate the number of days' supply based on a prescription. Some medications are less straightforward though because a prescription of a given number of doses may have a variable number of days' supply because the number of doses to be taken per day varies, for example with preventative corticosteroid inhalers prescribed for asthma where the number of inhalations to be taken daily may vary between individuals based on the severity of the disease.
Contextual factors
Contextual factors, along with intrapersonal circumstances such as mental states, affect decisions. They can accurately predict decisions where most contextual information is identified. General compliance with recommendations to follow isolation is influenced by beliefs such as taking health precautions to be protected against infection, perceived vulnerability to getting COVID-19, and trust in the government. Mobility reduction and compliance with quarantine regulations are greater in European regions where the level of trust in policymakers is high. In addition, perceived infectiousness of COVID-19 is a strong predictor of rule compliance, such that the more contagious people think COVID-19 is, the less willingly social distancing measures are taken, while the sense of duty and fear of the virus contribute to staying at home. People might not leave their homes because they trust regulations to be effective or because they place their trust in a higher power; individuals who trust others demonstrate more compliance than those who do not. Compliant individuals see protective measures as effective, while non-compliant people see them as problematic.
Course completion
Once started, patients seldom follow treatment regimens as directed, and seldom complete the course of treatment. In respect of hypertension, 50% of patients completely drop out of care within a year of diagnosis. Persistence with first-line single antihypertensive drugs is extremely low during the first year of treatment. As far as lipid-lowering treatment is concerned, only one third of patients are compliant with at least 90% of their treatment. Intensification of patient care interventions (e.g. electronic reminders, pharmacist-led interventions, healthcare professional education of patients) improves patient adherence rates to lipid-lowering medicines, as well as total cholesterol and LDL-cholesterol levels.
The World Health Organization (WHO) estimated in 2003 that only 50% of people complete long-term therapy for chronic illnesses as they were prescribed, which puts patient health at risk. For example, in 2002 statin compliance dropped to between 25 and 40% after two years of treatment, with patients taking statins for what they perceive to be preventative reasons being unusually poor compliers.
A wide variety of packaging approaches have been proposed to help patients complete prescribed treatments. These approaches include formats that increase the ease of remembering the dosage regimen as well as different labels for increasing patient understanding of directions. For example, medications are sometimes packed with reminder systems for the day and/or time of the week to take the medicine. Some evidence shows that reminder packaging may improve clinical outcomes such as blood pressure.
A not-for-profit organisation called the Healthcare Compliance Packaging Council of Europe (HCPC-Europe) was set up by the pharmaceutical industry and the packaging industry together with representatives of European patients' organisations. The mission of HCPC-Europe is to assist and to educate the healthcare sector in the improvement of patient compliance through the use of packaging solutions. A variety of packaging solutions have been developed by this collaboration.
World Health Organization Barriers to Adherence
The World Health Organization (WHO) groups barriers to medication adherence into five categories; health care team and system-related factors, social and economic factors, condition-related factors, therapy-related factors, and patient-related factors. Common barriers include:
Improving adherence rates
Role of health care providers
Health care providers play a great role in improving adherence issues. Providers can improve patient interactions through motivational interviewing and active listening. Health care providers should work with patients to devise a plan that is meaningful for the patient's needs. A relationship that offers trust, cooperation, and mutual responsibility can greatly improve the connection between provider and patient for a positive impact. The wording that health care professionals use when sharing health advice may have an impact on adherence and health behaviours; however, further research is needed to understand whether positive framing (e.g., the chance of surviving is improved if you go for screening) or negative framing (e.g., the chance of dying is higher if you do not go for screening) is more effective for specific conditions.
Technology
In 2012 it was predicted that as telemedicine technology improves, physicians will have better capabilities to remotely monitor patients in real-time and to communicate recommendations and medication adjustments using personal mobile devices, such as smartphones, rather than waiting until the next office visit.
Medication Event Monitoring Systems (MEMS), as in the form of smart medicine bottle tops, smart pharmacy vials or smart blister packages as used in clinical trials and other applications where exact compliance data are required, work without any patient input, and record the time and date the bottle or vial was accessed, or the medication removed from a blister package. The data can be read via proprietary readers, or NFC enabled devices, such as smartphones or tablets. A 2009 study stated that such devices can help improve adherence. More recently, a 2016 scoping review suggested that, in comparison to MEMS, median medication adherence was grossly overestimated by 17% using self-report, by 8% using pill count and by 6% using rating as alternative methods of measuring medication adherence.
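To make the difference between these measures concrete, here is a minimal sketch of how adherence might be computed from a simple pill count versus from MEMS-style bottle-opening timestamps. It is illustrative only: the function names, the 6-hour on-time window and the toy data are assumptions, not taken from the studies cited above.

```python
from datetime import datetime, timedelta

def pill_count_adherence(dispensed, returned, days, doses_per_day):
    """Adherence from a simple pill count: doses presumed taken / doses prescribed."""
    presumed_taken = dispensed - returned
    prescribed = days * doses_per_day
    return 100.0 * presumed_taken / prescribed

def mems_adherence(open_events, start, days, doses_per_day, window_hours=6):
    """Stricter estimate from bottle-opening timestamps: a scheduled dose counts
    only if the bottle was opened within window_hours of its due time."""
    interval = timedelta(hours=24 / doses_per_day)
    due_times = [start + i * interval for i in range(days * doses_per_day)]
    window = timedelta(hours=window_hours)
    taken_on_time = sum(
        1 for due in due_times
        if any(abs(event - due) <= window for event in open_events)
    )
    return 100.0 * taken_on_time / len(due_times)

# Toy 30-day, once-daily prescription: the patient opened the bottle only every
# other day but returned just 5 tablets (perhaps discarding the rest).
start = datetime(2024, 1, 1, 8, 0)
events = [start + timedelta(days=i) for i in range(0, 30, 2)]
print(pill_count_adherence(dispensed=30, returned=5, days=30, doses_per_day=1))  # 83.3
print(mems_adherence(events, start, days=30, doses_per_day=1))                   # 50.0
```

The point of the toy data is that a returned-pill count cannot tell whether doses were taken on schedule, or taken at all, which is one reason pill counts and self-report tend to overestimate adherence relative to event monitoring.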
The effectiveness of two-way email communication between health care professionals and their patients has not been adequately assessed.
Mobile phones
5.15 billion people, equating to 67% of the global population, have a mobile device, and this number is growing. Mobile phones have been used in healthcare, a practice that has fostered its own term: mHealth. They have also played a role in improving adherence to medication. For example, text messaging has been used to remind patients with chronic conditions such as asthma and hypertension to take their medication. Other examples include the use of smartphones for synchronous and asynchronous Video Observed Therapy (VOT) as a replacement for the currently resource-intensive standard of Directly Observed Therapy (DOT) (recommended by the WHO) for tuberculosis management. Other mHealth interventions for improving adherence to medication include smartphone applications, voice recognition in interactive phone calls and telepharmacy. Some results show that the use of mHealth improves adherence to medication and is cost-effective, though some reviews report mixed results. Studies show that using mHealth to improve adherence to medication is feasible and accepted by patients. Specific mobile applications might also support adherence. mHealth interventions have also been used alongside other telehealth interventions such as wearable wireless pill sensors, smart pillboxes and smart inhalers.
Forms of medication
Depot injections need to be taken less often than other forms of medication, and because a medical professional is involved in administering the drug, they can increase compliance. Depot formulations are used for contraception and for antipsychotic medication used to treat schizophrenia and bipolar disorder.
Coercion
Sometimes drugs are given involuntarily to ensure compliance. This can occur if an individual has been involuntarily committed or is subject to an outpatient commitment order, where failure to take medication will result in detention and involuntary administration of treatment. It can also occur if a patient is not deemed to have the mental capacity to consent to treatment in an informed way.
Health and disease management
A WHO study estimates that only 50% of patients with chronic diseases in developed countries follow treatment recommendations.
Asthma non-compliance (28–70% worldwide) increases the risk of severe asthma attacks requiring preventable ER visits and hospitalisations; compliance issues with asthma can be caused by a variety of reasons including: difficult inhaler use, side effects of medications, and cost of the treatment.
Cancer
200,000 new cases of cancer are diagnosed each year in the UK. One in three adults in the UK will develop cancer that can be life-threatening, and 120,000 people will be killed by their cancer each year. This accounts for 25% of all deaths in the UK. However while 90% of cancer pain can be effectively treated, only 40% of patients adhere to their medicines due to poor understanding.
A 2016 systematic review found that a large proportion of patients struggle to take their oral antineoplastic medications as prescribed. This presents opportunities and challenges for patient education, reviewing and documenting treatment plans, and patient monitoring, especially with the increase in patients receiving cancer treatment at home.
The reasons for non-adherence have been given by patients as follows:
The poor quality of information available to them about their treatment
A lack of knowledge as to how to raise concerns whilst on medication
Concerns about unwanted effects
Issues about remembering to take medication
Partridge et al (2002) identified evidence to show that adherence rates in cancer treatment are variable, and sometimes surprisingly poor; their summary table is not reproduced here. (Medication event monitoring system: a medication dispenser containing a microchip that records when the container is opened; from Partridge et al (2002).)
In 1998, trials evaluating Tamoxifen as a preventative agent showed dropout rates of around one third:
36% in the Royal Marsden Tamoxifen Chemoprevention Study of 1998
29% in the National Surgical Adjuvant Breast and Bowel Project of 1998
In March 1999, the "Adherence in the International Breast Cancer Intervention Study" evaluating the effect of a daily dose of Tamoxifen for five years in at-risk women aged 35–70 years was
90% after one year
83% after two years
74% after four years
Diabetes
Patients with diabetes are at high risk of developing coronary heart disease and usually have related conditions that make their treatment regimens even more complex, such as hypertension, obesity and depression which are also characterised by poor rates of adherence.
Diabetes non-compliance is 98% in the US and is the principal cause of complications related to diabetes, including nerve damage and kidney failure.
Among patients with Type 2 Diabetes, adherence was found in less than one third of those prescribed sulphonylureas and/or metformin. Patients taking both drugs achieve only 13% adherence.
Other aspects that drive medication adherence rates are perceived self-efficacy and risk assessment in managing diabetes symptoms, and decision making surrounding rigorous medication regimens. Perceived control and self-efficacy not only correlate significantly with each other, but also with psychological symptoms of diabetes distress, and have been directly related to better medication adherence outcomes. Various external factors also impact diabetic patients' self-management behaviors, including health-related knowledge and beliefs, problem-solving skills, and self-regulatory skills, all of which influence perceived control over diabetic symptoms.
Additionally, it is crucial to understand the decision-making processes that drive diabetics in their choices surrounding risks of not adhering to their medication. While patient decision aids (PtDAs), sets of tools used to help individuals engage with their clinicians in making decisions about their healthcare options, have been useful in decreasing decisional conflict, improving transfer of diabetes treatment knowledge, and achieving greater risk perception for disease complications, their efficacy in medication adherence has been less substantial. Therefore, the risk perception and decision-making processes surrounding diabetes medication adherence are multi-faceted and complex with socioeconomic implications as well. For example, immigrant health disparities in diabetic outcomes have been associated with a lower risk perception amongst foreign-born adults in the United States compared to their native-born counterparts, which leads to fewer protective lifestyle and treatment changes crucial for combatting diabetes. Additionally, variations in patients' perceptions of time (i.e. taking rigorous, costly medication in the present for abstract beneficial future outcomes can conflict with patients' preferences for immediate versus delayed gratification) may also present severe consequences for adherence as diabetes medication often requires systematic, routine administration.
Hypertension
Hypertension non-compliance (93% in US, 70% in UK) is the main cause of uncontrolled hypertension-associated heart attack and stroke.
In 1975, only about 50% of patients took at least 80% of their prescribed anti-hypertensive medications.
As a result of poor compliance, 75% of patients with a diagnosis of hypertension do not achieve optimum blood-pressure control.
Mental illness
A 2003 review found that 41–59% of patients prescribed antipsychotics took the medication prescribed to them infrequently or not at all. Sometimes non-adherence is due to lack of insight, but psychotic disorders can be episodic, and antipsychotics are then used prophylactically to reduce the likelihood of relapse rather than to treat symptoms; in some cases individuals will have no further episodes despite not using antipsychotics. A 2006 review investigated the effects of compliance therapy for schizophrenia and found no clear evidence to suggest that compliance therapy was beneficial for people with schizophrenia and related syndromes.
Rheumatoid arthritis
A longitudinal study has shown that adherence with treatment is about 60%. The predictors of adherence were found to be of a psychological, communication-related and logistical nature rather than sociodemographic or clinical. The following factors were identified as independent predictors of adherence:
the type of treatment prescribed
agreement on treatment
having received information on treatment adaptation
clinician perception of patient trust
See also
Drug withdrawal
Patient participation
References
External links
Adherence to long-term therapies, a report from the World Health Organization
Technology report on NFC enabled smart medication packages
Medical terminology
Clinical pharmacology
Pharmacy
Health care quality | Adherence (medicine) | [
"Chemistry"
] | 5,134 | [
"Pharmacology",
"Pharmacy",
"Clinical pharmacology"
] |
4,116,974 | https://en.wikipedia.org/wiki/NGC%2040 | NGC 40 (also known as the Bow-Tie Nebula and Caldwell 2) is a planetary nebula discovered by William Herschel on November 25, 1788, and is composed of hot gas around a dying star. The star has ejected its outer layer which has left behind a small, hot star. Radiation from the star causes the shed outer layer to heat to about 10,000 degrees Celsius and become visible as a planetary nebula. The nebula is about one light-year across. About 30,000 years from now, scientists theorize that NGC 40 will fade away, leaving only a white dwarf star approximately the size of Earth.
Morphologically, the shape of NGC 40 resembles a barrel with the long axis pointing towards the north-northeast. There are two additional pairs of lobes around the poles, which correspond to additional ejections from the star.
The central star of NGC 40 has a Henry Draper Catalogue designation of HD 826. It has a spectral type of [WC8], indicating a spectrum similar to that of a carbon-rich Wolf–Rayet star. The central star has a bolometric luminosity of about and radius of . The star appears to have an effective temperature of about , but the temperature of the source ionizing the nebula is only about . One proposed explanation to this contradiction is that the star was previously cooler, but has experienced a late thermal pulse which re-ignited fusion and caused its temperature to increase.
Gallery
References
External links
NGC 0040
NGC 0040
0040
002b
17881125
Discoveries by William Herschel
Wolf–Rayet stars | NGC 40 | [
"Astronomy"
] | 321 | [
"Constellations",
"Cepheus (constellation)"
] |
4,117,029 | https://en.wikipedia.org/wiki/NGC%20246 | NGC 246 (also known as the Skull Nebula or Caldwell 56) is a planetary nebula in the constellation Cetus. It was discovered in 1785 by William Herschel. The nebula and the stars associated with it are listed in several catalogs, as summarized by the SIMBAD database.
The nebula is roughly away. NGC 246's central star is the 12th-magnitude white dwarf HIP 3678 A, which has a comoving companion star, HIP 3678 B. In 2014, astronomers discovered a second companion, a red dwarf known as HIP 3678 C, using the European Southern Observatory's Very Large Telescope. This makes NGC 246 the first planetary nebula known to have a hierarchical triple star system at its center.
NGC 246 is not to be confused with the Rosette Nebula (NGC 2237), which is also referred to as the "Skull." Among some amateur astronomers, NGC 246 is known as the "Pac-Man Nebula" because of the arrangement of its central stars and the surrounding star field.
Image gallery
References
External links
Planetary nebulae
Cetus
0246
056b
17841127
Discoveries by William Herschel
3678
00445-1207 | NGC 246 | [
"Astronomy"
] | 252 | [
"Cetus",
"Constellations"
] |
4,117,066 | https://en.wikipedia.org/wiki/Murder%20for%20body%20parts | Murder for body parts also known as medicine murder (not to be confused with "medical murder") refers to the killing of a human being in order to excise body parts to use as medicine or purposes in witchcraft. Medicine murder is viewed as the obtaining of an item or items from a corpse to be used in traditional medicine. The practice occurs primarily in sub-equatorial Africa.
The illegal organ trade has led to murder for body parts, because of a worldwide demand of organs for transplant and organ donors. For example, criminal organizations have engaged in kidnapping and killing people for the purpose of harvesting their organs for illegal organ trade. The extent is unknown, and non-fatal organ theft and removal is more widely reported than murder.
Historically, anatomy murders took place during the earlier parts of modern Western medicine. In the 19th century, the human body was still poorly understood, and fresh cadavers for dissection and anatomical study were sometimes difficult to obtain. Mortuaries remained the most common source, but in some cases, such as the notorious Irish murderers Burke and Hare, victims were killed then sold for study and research purposes.
Medicine murder
Purpose and frequency
The objective of medicine murder is to create traditional medicine based partly on human flesh. Medicine murder is often termed ritual murder or muthi / muti murder, although there is evidence to suggest that the degree of ritual involved in the making of medicine is only a small element of the practice overall. Social anthropological ethnographies have documented anecdotes of medicine murder in southern Africa since the 1800s, and research has shown that incidences of medicine murder increase in times of political and economic stress.
The practice is commonly associated with witchcraft, although ethnographic evidence suggests that this has not always been the case, and that it may have been accorded local-level political sanction. Medicine murder is difficult to describe concisely, as it has changed over time, involving an ever-greater variety of perpetrator, victim, method and motive. Most detailed information about the minutiae of medicine murder is derived from state witnesses in trials, court records and third-party anecdote.
The phenomenon is widely acknowledged to occur in southern Africa, although no country has issued an accurate and up to date record of the frequency with which it takes place. This is not only because of the secrecy of the practice, given its controversial status, but also because of difficulties in classifying subcategories of murder. Medicine murder has been a topic of urban legends in South Africa, but this does not diminish its status as a practice that has resulted in legal trials and convictions of perpetrators.
Medicine murder in southern Africa has been documented in some small detail in South Africa, Lesotho and Swaziland, although it is a difficult subject to investigate because of its controversial standing in customary practices and laws. Very few research and discussion documents have been devoted to this subject. Three concerning Lesotho were published in 1951, 2000 and 2005 regarding the same events in the 1940s and 1950s; one concerning Swaziland was published in 1993 covering the 1970s and 1980s; and a commission of enquiry held in South Africa in 1995 covering medicine murder and witchcraft in the 1980s and 1990s.
Methodology
The perpetrators are usually men, although women have been convicted as well, most notably in Swaziland when Phillippa Mdluli was hanged in 1983 for commissioning a medicine murder. Perpetrators vary widely in age and social status.
An individual or group of individuals commissions an inyanga (a herbalist skilled in traditional medicine) to assist them by concocting a medicine called muti. The medicine supposedly strengthens the 'personality' or personal force of the person who commissions the medicine. This increased personal force enables the person to excel in business, politics, or other sphere of influence. A human victim is identified for murder in order to create the medicine.
Victims vary widely in age and social standing. They are often young children or elderly people, and are both male and female. In some instances, the victim is identified and 'purchased' via a transaction involving an often nominal amount of money. The victim is then abducted, often at night, and taken to an isolated place, often in the open countryside if the murder is being committed in a rural area. It is usually intended that the victim be mutilated while conscious, so that the medicine can be made more potent through the noises of the victim in agony. Mutilation does not take place in order to kill the victim, but it is expected that the victim will die of the wounds.
Body parts excised mostly include soft tissue and internal organs – eyelids, lips, scrota, labia and uteri – although there have been instances where entire limbs have been severed. These body parts are removed to be mixed with medicinal plants to create a medicine through a cooking process. The resulting medicine is sometimes consumed, but is often made into a paste that is carried on the person or rubbed onto scarifications.
Variances
Since the 1970s, the manner in which medicine murder is practiced has become altered to the methods described above, although the continued practice of medicine murder demonstrates that belief in human flesh as a powerful medicinal component remains strong in some communities. It would appear that medicine murder in the 18th and 19th centuries may have been considered the legitimate domain of traditional chiefs and leaders, in order to improve agriculture and protect against war (see Human sacrifice).
Following industrialisation and growth of commerce, the range of purposes for which medicine was used to increase influence expanded significantly. In the early 1990s when South Africa was experiencing internal political strife between several political groupings, it became clear that some mutilations for medicine were opportunistic and incidental to the assassination of political opponents. There have also been occurrences of mutilation of corpses in medical facilities. In not all cases does the employment of a traditional healer seem to have been thought necessary to the process.
Notable cases
1994 Segametsi Mogomotsi case
In 1994, a 14-year-old named Segametsi Mogomotsi was murdered in Mochudi, Botswana and body parts removed. The killing was widely believed to have been for muti, and the police even recovered some excised organs. However, these were destroyed before being tested to establish them as human, leading to accusations of police complicity with the murder. The killing led to riots as students in Mochudi protested about police inaction, and eventually Scotland Yard from Britain were asked to investigate, as neutral outsiders. Their report was given to the Botswana government, which did not release it to the public. These events inspired some of the events in the book The No. 1 Ladies' Detective Agency by Alexander McCall Smith.
2001 Thames torso case
A little boy whose headless and limbless body was found floating in the Thames in 2001 was identified by an arrestee in March 2011. The five-year-old's identity has remained a mystery after he was smuggled into Britain and murdered in a voodoo-style ritual killing. He was drugged with a ‘black-magic’ potion and sacrificed before being thrown into the Thames, where his torso washed up next to the Globe Theatre in September 2001. Detectives used pioneering scientific techniques to trace radioactive isotopes in his bones to his native Nigeria. They even enlisted Nelson Mandela to appeal for information about the murder.
They struggled to formally identify the boy, whom they called Adam, despite travelling to the West African state to try to trace his family. Nigerian Joyce Osiagede, the only person to be arrested in Britain as part of the inquiry, has claimed that the victim's real name is Ikpomwosa. In an interview with ITV's London Tonight, Mrs Osiagede said she looked after the boy in Germany for a year before travelling to Britain without him in 2001. She claimed she handed the boy over to a man known as Bawa who later told her that he was dead and threatened to kill her unless she kept silent.
Police have passed numerous files on the case to the Crown Prosecution Service but it has never gone to court. A second suspect, a Nigerian man, was arrested in Dublin in 2003 but was never charged. Mrs Osiagede was first questioned by police after they found clothing similar to that worn by ‘Adam’ in her Glasgow tower-block flat in 2002. The only clothing on his body was a pair of orange shorts, exclusively sold in Woolworths in Germany and Austria.
Dressed in a traditional gold and green dress, Mrs Osiagede denied any involvement with the death of the young boy.
Asked who killed him, she said a ‘group of people’. She added: "They used him for a ritual in the water." Claiming the boy was six years old, she said: ‘He was a lively boy. A very nice boy, he was also intelligent.’ Detailed analysis of a substance in the boy's stomach was identified as a ‘black magic’ potion. It included tiny clay pellets containing small particles of pure gold, an indication that Adam was the victim of a Muti ritual killing in which it is believed that the body parts of children are sacred. Bodies are often disposed of in flowing water.
2009 Masego Kgomo case
Masego Kgomo was a 10-year-old South African girl whose body parts were removed and sold to a sangoma in Soshanguve, South Africa. The little girl's body was found in bushes near the Mabopane railway station, north-west of Pretoria. Thirty-year-old Brian Mangwale was found guilty of the murder and sentenced to life imprisonment.
Illegal organ trade murders (the 'Red trade')
According to the World Health Organization (WHO), illegal organ trade occurs when organs are removed from the body for the purpose of commercial transactions. The illegal organ trade is growing, and a recent report by Global Financial Integrity estimates that globally it generates profits between $0.6 billion and $1.2 billion per year.
In some cases, criminal organizations have engaged in kidnapping of people, especially children and teens, who are murdered and their organs harvested for profit. In 2014 an alleged member of the Mexican Knights Templar cartel was arrested for the kidnapping and deaths of minors, after children were found wrapped in blankets and stuffed in a refrigerated container inside a van.
According to the most recent Bulletin of the World Health Organization on the state of the international organ trade, 66,000 kidney transplants, 21,000 liver transplants, and 6,000 heart transplants were performed globally in 2005, while another article reports that in 2008 the median waiting time on the U.S. transplant list was greater than 3 years and expected to rise, while the United Kingdom reported a lack of organs for 8,000 patients, with the rate increasing at 8%. It was estimated that about 10% of all transplants occur illegally, with the Internet acting as a facilitator. Transplant tourism raises concerns because it involves the transfer of healthy organs in one direction, depleting the regions where organs are bought. This transfer typically occurs from South to North, from developing to developed nations, from females to males, and from people of color to whites, a trend that some experts say "has exacerbated old...divisions". While some organs such as the kidney can be transplanted routinely and the single remaining kidney is adequate for normal human needs, other organs are less easy to source. Liver transplants in particular are prominent, but incur an excruciating recovery that deters donations.
Most countries have laws which criminalize the buying and selling of organs, or the carrying out of medical procedures for the illegal organ trade.
Capital punishment and organ harvesting in China
In March 2006, three individuals alleged that thousands of Falun Gong practitioners had been killed at Sujiatun Thrombosis Hospital to supply China's organ transplant industry. The third person, a doctor, said the so-called hospital in Sujiatun was but one of 36 similar concentration camps all over China.
The allegations were the subject of investigative reports by Edward McMillan-Scott, Vice-President of the European Parliament, and by former Canadian Secretary of State David Kilgour and human rights lawyer David Matas.
The Kilgour-Matas report stated "the source of 41,500 transplants for the six year period 2000 to 2005 is unexplained" and concluded that "there has been and continues today to be large scale organ seizures from unwilling Falun Gong practitioners".
The report called attention to the extremely short wait times for organs in China—one to two weeks for a liver compared with 32.5 months in Canada—noting that this was indicative of organs being procured on demand. A significant increase in the number of annual organ transplants in China beginning in 1999, corresponded with the onset of the persecution of Falun Gong. Despite very low levels of voluntary organ donation, China performs the second-highest number of transplants per year. The report includes incriminating material from Chinese transplant center web sites advertising the immediate availability of organs from living donors, and transcripts of interviews in which hospitals told prospective transplant recipients that they could obtain Falun Gong organs. An updated version of their report was published as a book in 2009.
In 2014, investigative journalist Ethan Gutmann, published his own investigation. He conducted extensive interviews with former detainees of Chinese labor camps and prisons, as well as former security officers and medical professionals with knowledge of China's transplant practices. He reported that organ harvesting from political prisoners likely began in Xinjiang province in the 1990s, and then spread nationwide. Gutmann estimates 65,000 Falun Gong prisoners were killed for their organs from 2000 to 2008.
Data on availability and speed of transplants within China (under 2 – 3 weeks in some cases compared to years elsewhere) led several renowned doctors to state that the statistics and transplant rates seen would be impossible without access to a very large pool of pre-existing donors already available on very short notice for hearts and other organs; several governments also established restrictions intended to target such a practice.
The extent of the evidence led to many responses expressing "deep concerns" at the findings, and several countries took action as a result. The Queensland Ministry of Health in Australia abolished training programs for Chinese doctors in organ transplant procedures and banned joint research with China on organ transplantation, and a petition signed by 140 Canadian physicians urged the Government to warn Canadian nationals that organ transplants in China were "sourced almost entirely from non-consenting people".
In 2012, State Organs: Transplant Abuse in China, edited by David Matas and Dr. Torsten Trey, was published with contributions from 12 specialists. Several of the essays in the book conclude that a primary source of organs has been prisoners of conscience, specifically practitioners of Falun Gong.
In May 2008, two United Nations Special Rapporteurs reiterated their requests for the Chinese government to fully explain the allegation of taking vital organs from Falun Gong practitioners and the source of organs for the sudden increase in organ transplants in China since 2000.
In August 2009, Manfred Nowak the United Nations Special Rapporteur on Torture said, "The Chinese government has yet to come clean and be transparent ... It remains to be seen how it could be possible that organ transplant surgeries in Chinese hospitals have risen massively since 1999, while there are never that many voluntary donors available."
Murder for dissection and study
An anatomy murder (sometimes called burking in British English) is a murder committed in order for all or part of the cadaver to be used for medical research or teaching. It is not a medicine murder because the body parts are not believed to have any medicinal use in themselves. The motive for the murder is created by the demand for cadavers for dissection, and the opportunity to learn anatomy and physiology as a result of the dissection. Rumors concerning the prevalence of anatomy murders are associated with the rise in demand for cadavers in research and teaching produced by the Scientific Revolution. During the nineteenth century, the sensational serial murders associated with Burke and Hare and the London Burkers led to legislation which provided scientists and medical schools with legal ways of obtaining cadavers. The practice has intermittently been reported since that time; in 1992 Colombian activist Juan Pablo Ordoñez claimed that 14 poor residents of the town of Barranquilla had been killed for local medical study, with a purported account by an alleged escapee being publicized by the international press. Rumors persist that anatomy murders are carried out wherever there is a high demand for cadavers. These rumors are hard to substantiate, and may reflect continued, deep-held fears of the use of cadavers as commodities.
See also
Persecution of people with albinism
Witchcraft accusations against children in Africa
Child sacrifice in Uganda
References
Sources
External links
"'I was forced to kill my baby'" – article about 2001 Thames torso case by BBC News
Murder
African shamanism
African witchcraft
Religious practices
Health in China
Organ trade
Blood libel
Killings by type | Murder for body parts | [
"Biology"
] | 3,471 | [
"Behavior",
"Religious practices",
"Human behavior"
] |
4,117,074 | https://en.wikipedia.org/wiki/Acoustic%20cleaning | Acoustic cleaning is a maintenance method used in material-handling and storage systems that handle bulk granular or particulate materials, such as grain elevators, to remove the buildup of material on surfaces. An acoustic cleaning apparatus, usually built into the material-handling equipment, works by generating powerful sound waves which shake particulates loose from surfaces, reducing the need for manual cleaning.
History and design
An acoustic cleaner consists of a sound source similar to an air horn found on trucks and trains, attached to the material-handling equipment, which directs a loud sound into the interior. It is powered by compressed air rather than electricity so there is no danger of sparking, which could set off an explosion. It consists of two parts:
The acoustic driver. In the driver, compressed air escaping past a diaphragm causes it to vibrate, generating the sound. It is usually made from solid machined stainless steel. The diaphragm, the only moving part, is usually manufactured from special aerospace grade titanium to ensure performance and longevity.
The bell, a flaring horn, usually made from spun 316 grade stainless steel. The bell serves as a sound resonator, and its flaring shape couples the sound efficiently to the air, increasing the volume of sound radiated.
The overall length of acoustic cleaner horns range from 430 mm to over 3 metres long. The device can operate from a pressure range of 4.8 to 6.2 bars or 70 to 90 psi. The resultant sound pressure level will be around 200 dB.
There are generally 4 ways to control the operation of an acoustic cleaner:
The most common is by a simple timer
Supervisory control and data acquisition (SCADA)
Programmable logic controller (PLC)
Manually by ball valve
An acoustic cleaner will typically sound for 10 seconds and then wait for a further 500 seconds before sounding again. This ratio for on/off is approximately proportional to the working life of the diaphragm. Provided the operating environment is between −40 °C and 100 °C, a diaphragm should last between 3 and 5 years. The wave generator and the bell have a much longer life span and will often outlast the environment in which they operate.
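As a rough illustration of what that duty cycle implies, the sketch below converts the quoted 10-second sounding and 500-second pause into firings per day and cumulative sounding time per year; the figures are arithmetic only and say nothing about any particular product.

```python
def sounding_stats(on_s=10.0, off_s=500.0):
    """Cumulative sounding time implied by a simple on/off timer cycle."""
    cycle = on_s + off_s                     # one full cycle, seconds
    firings_per_day = 86_400 / cycle         # cycles per 24 h of continuous operation
    sounding_h_per_year = firings_per_day * on_s * 365 / 3600
    return firings_per_day, sounding_h_per_year

firings, hours = sounding_stats()            # the typical 10 s on / 500 s off quoted above
print(f"{firings:.0f} firings per day, ~{hours:.0f} h of sounding per year")
# Roughly 169 firings per day and about 172 hours of sounding per year; halving
# the pause time roughly doubles both figures, which is the sense in which the
# on/off ratio tracks diaphragm wear.
```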
The older bells which were made from cast iron were susceptible to rusting in certain environments. The new bells made from 316 spun steel have no problem with rust and are ideal for sterile environments such as found in the food industry or in pharmaceutical plants.
Acoustic cleaning began in the early 1970s with experiments using ship horns or air raid sirens. The first acoustic cleaners were made from cast iron. From 1990 onwards the technology became commercially viable and began to be used in dry processing, storage, transport, power generation and manufacturing industries. The latest technology uses 316 spun stainless steel to ensure optimum performance.
Operation and performance
The majority of acoustic cleaners operate in the audio frequency range from 60 hertz up to 420 Hz. However a few operate in the infrasonic range, below 40 Hz, which is mostly below the human hearing range, to satisfy strict noise control requirements.
There are three scientific fields which converge in the understanding of acoustic cleaning technology.
Sound propagation. This relates to an understanding of the nature of the sound waves, how they vary and how they will interact with the environment.
Mathematics of the environment. Materials science, surface friction, distance and areas familiar to a mechanical engineer.
Chemical engineering. The chemical properties of the powder or substance to be debonded. Especially the auto adhesive properties of the powder.
An acoustic cleaner will create a series of very rapid and powerful sound induced pressure fluctuations which are then transmitted into the solid particles of ash, dust, granules or powder. This causes them to move at differing speeds and debond from adjoining particles and the surface that they are adhering to. Once they have been separated then the material will fall off due to gravity or it will be carried away by the process gas or air stream.
The key features which determine whether or not an acoustic cleaner will be effective for any given problem are the particle size range, the moisture content and the density of the particles as well as how these characteristics will change with temperature and time.
Typically particles between 20 micrometres and 5 mm with moisture content below 8.5% are ideal. Upper temperature limits are dependent upon the melting point of the particles and acoustic cleaners have been employed at temperatures above 1000 °C to remove ash build-up in boiler plants.
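The rules of thumb in the last two paragraphs can be expressed as a simple screening check. The thresholds below are only the indicative figures quoted here (20 micrometres to 5 mm, moisture below 8.5%, gas temperature below the particle melting point); the function and the fly-ash example values are illustrative assumptions, not a published sizing method.

```python
def acoustic_cleaning_suitable(particle_um, moisture_pct, gas_temp_c, particle_melt_c):
    """Screen a deposit problem against the indicative criteria quoted above."""
    reasons = []
    if not (20 <= particle_um <= 5000):        # 20 micrometres to 5 mm
        reasons.append("particle size outside 20 um - 5 mm range")
    if moisture_pct > 8.5:                     # wet, sticky deposits debond poorly
        reasons.append("moisture content above 8.5%")
    if gas_temp_c >= particle_melt_c:          # softened or molten particles re-adhere
        reasons.append("gas temperature at or above particle melting point")
    return (len(reasons) == 0, reasons)

# Illustrative boiler fly-ash case: ~50 um particles, 2% moisture, 900 C gas,
# ash softening around 1200 C.
print(acoustic_cleaning_suitable(50, 2.0, 900, 1200))   # (True, [])
print(acoustic_cleaning_suitable(50, 12.0, 900, 1200))  # (False, ['moisture content above 8.5%'])
```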
It is important to match the operating frequencies to the requirements. Higher frequencies can be directed more accurately, whilst lower frequencies will carry further and are generally used for more demanding requirements. A typical selection of frequencies would be as follows (a rough wavelength comparison is sketched after this list):
420 Hz for a small acoustic cleaner which might be used to clear bridging at the base of a silo.
350 Hz will be more powerful and this frequency can be used to unblock material build-up in ID (induced draft) fans, filters, cyclones, mixers, dryers and coolers.
230 Hz. At this frequency, the power involved is sufficient to use in most electricity generation applications.
75 Hz and 60 Hz. These are generally the most powerful acoustic cleaners and are often used in large vessels and silos.
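One way to see why the lower frequencies carry further while the higher ones can be aimed more precisely is to compare wavelengths, λ = c / f. The sketch below uses the frequencies listed above with an assumed sound speed of about 343 m/s (air at roughly 20 °C; it is higher in hot process gas), a figure not taken from the text.

```python
SPEED_OF_SOUND_MS = 343.0  # m/s in air at ~20 C; assumption, higher in hot flue gas

for f_hz in (420, 350, 230, 75, 60):
    wavelength_m = SPEED_OF_SOUND_MS / f_hz
    print(f"{f_hz:>3} Hz -> wavelength ~{wavelength_m:.1f} m")
# 420 Hz -> ~0.8 m, 350 Hz -> ~1.0 m, 230 Hz -> ~1.5 m, 75 Hz -> ~4.6 m, 60 Hz -> ~5.7 m.
# The long low-frequency waves diffract around internals and fill large vessels,
# while the shorter high-frequency waves are easier to direct at a specific spot.
```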
Health and safety
The introduction of acoustic cleaners has been a significant improvement in many areas of health and safety. For instance, in silo cleaning the previous solutions tended to be intrusive or destructive: air cannons, soot blowers, external vibrators, hammering or costly man entry are all superseded by non-invasive sonic horns.
An acoustic cleaner requires no down time and will operate during normal usage of the site.
Taking the example of silo cleaning a little further, there are two typical problems.
Bridging
This is when the silo blocks at the outlet. Previously the problem was addressed by manual cleaning from underneath the silo which in its turn introduced significant risk from falling material when the blockage was cleared. An acoustic cleaner is able to operate from the top of a silo through in situ material to clear the blockage at the base.
Rat holing
Compaction on the side of a silo. This not only reduces the operating volume in a silo but it also compromises quality control by disrupting the first in first out cycle. Older material compacted on the side of a silo can also start to degrade and produce dangerous gases. An acoustic cleaner will produce sound waves which will make the compacted material resonate at a different rate to the surrounding environment resulting in debonding and clearance.
Advantages of acoustic cleaners
Repetitive use during operations means that there are fewer unscheduled shut downs.
Improved material flow by the elimination of hang-ups, blocking and bridging.
Minimisation of cross contamination by ensuring complete emptying of the environment.
Improved cleaning and reduction of health and safety risks.
Increased energy efficiency. Reducing the buildup on heat exchange surfaces results in lower energy usage.
Extended plant life. Aggressive cleaning regimes are avoided.
Ease of operation. It is easy to automate the horns either at regular intervals or to tie the sounding in to changes in their environment such as pressure or flow rates.
Importantly they prevent the material buildup problem from occurring in the first place.
These advantages mean that the financial payback is often very quick.
It is also possible to compare acoustic cleaners directly to alternative solutions.
Air cannons. These are well established but are expensive with limited coverage thus requiring multi unit purchase. They are also noise intrusive and have a high compressed air consumption.
Vibrators. These are easy to fit to an empty silo but can cause structural damage as well as contributing to powder compaction.
Low friction linings. These are very quiet but are expensive to install. Also they are prone to erosion and can then contaminate the environment or product.
Inflatable pads and liners. Again these are easy to install in an empty silo. They help side wall buildup but have no impact on bridging. They are also hard to maintain and can cause compaction.
Fluidisation through a 1 way membrane. This can help already compacted material. However they are expensive and difficult to install and maintain. They can also contribute to mechanical interlocking and bridging.
Specific applications
Boilers. Cleaning of the heat transfer surfaces.
Electrostatic precipitators. Acoustic cleaners are being used for cleaning hoppers, turning vanes, distribution plates, collecting plates and electrode wires.
Super heaters, economisers and air heaters.
Duct work.
Filters. Acoustic cleaners are used on reverse air, pulse jet and shaker units. They are effective in reducing pressure drop across the collection surface, which will increase bag life and prevent hopper pluggage. Generally they can totally replace both reverse air fans and shaker units and significantly reduce the compressed air requirement on pulse jet filters.
ID fans. Acoustic cleaning helps to provide a uniform cleaning pattern even for inaccessible parts of the fan. This maintains the balance of the fan.
Kiln inlet. Acoustic cleaners help to prevent particulate buildup at the kiln inlet and this will minimise nose ring formation.
Mechanical pre Collectors. Acoustic cleaners help prevent buildup around the impellers and between the tubes.
Mills. Acoustic cleaners help maintain material flow and also prevent blockages in the pre grind silos. They also help prevent material buildup in the downstream separators and fans.
Planetary Coolers. Acoustic cleaners help prevent bridging and ensure complete evacuation.
Precipitator. Acoustic cleaners help clean the turning vanes, distribution plates, collecting plates and electrode wires. They can either assist or replace the mechanical rapping systems. They also prevent particulate buildup in the under hoppers which would otherwise result in opacity spiking.
Pre heaters. Used in towers, gas risers, cyclones and fans.
Ship cargo holds. Used both to clean and de aerate current loads.
Silos and hoppers. To prevent bridging and rat holing.
Static cyclones. Acoustic cleaners will work both within the cyclone and with the associated duct work.
See also
Ultrasonic cleaner - Cleaning using higher frequencies than those found in acoustic cleaners.
Sonic soot blowers
Ultrasonic homogenizer
References
External links
Acoustics
Audio engineering
Cleaning tools
Cleaning methods | Acoustic cleaning | [
"Physics",
"Engineering"
] | 2,089 | [
"Electrical engineering",
"Audio engineering",
"Classical mechanics",
"Acoustics"
] |
4,117,087 | https://en.wikipedia.org/wiki/NGC%201514 | NGC 1514, also known as the Crystal Ball Nebula, is a planetary nebula in the zodiac constellation of Taurus, positioned to the north of the star Psi Tauri along the constellation border with Perseus. Distance to the nebula is 466 pc, according to GAIA DR2 data.
It was discovered by William Herschel on November 13, 1790, describing it as "a most singular phenomenon" and forcing him to rethink his ideas on the construction of the heavens. Up until this point Herschel was convinced that all nebulae consisted of masses of stars too remote to resolve, but now here was a single star "surrounded with a faintly luminous atmosphere". He concluded: "Our judgement I may venture to say, will be, that the nebulosity about the star is not of a starry nature."
This is a double-shell nebula that is described as "a bright roundish amorphous PN" with a radius of around and a faint halo that has a radius of . It consists of an outer shell, an inner shell, and bright blobs. The inner shell appears to be distorted, but was likely originally spherical. An alternative description is of a "lumpy nebula composed of numerous small bubbles" with a somewhat filamentary structure in the outer shell. Infrared observations show a huge region of dust surrounding the planetary nebula, spanning . There is also a pair of rings forming what appears to be a diabolo-like structure, similar to those found in MyCn 18, but these are extremely faint and only visible in the mid-infrared. The combined mass of the gas and dust is estimated at . The ionized gas is moderately excited, and the electron temperature is estimated to be 15,000 K.
The nebula originated from a binary star system with the designation HD 281679 from the Henry Draper Catalogue. The bright, visible component is a giant star on the horizontal branch with a stellar classification of A0III, while the nebula-generating companion is now a hot, sub-luminous O-type star. The two were originally thought to have an orbital period on the order of 10 days, but observations of the system over years showed that their orbit is actually one of the longest known for any planetary nebula, with a period of about 9 years. Their orbital eccentricity is about 0.5.
References
External links
Basic data on NGC 1514
Discussion on the dynamics of the NGC 1514 system
Planetary nebulae
1514
Taurus (constellation)
Discoveries by William Herschel | NGC 1514 | [
"Astronomy"
] | 514 | [
"Taurus (constellation)",
"Constellations"
] |
4,117,384 | https://en.wikipedia.org/wiki/NGC%206302 | NGC 6302 (also known as the Bug Nebula, Butterfly Nebula, or Caldwell 69) is a bipolar planetary nebula in the constellation Scorpius. The structure in the nebula is among the most complex ever seen in planetary nebulae. The spectrum of Butterfly Nebula shows that its central star is one of the hottest stars known, with a surface temperature in excess of 250,000 degrees Celsius, implying that the star from which it formed must have been very large.
The central star, a white dwarf, was identified in 2009, using the upgraded Wide Field Camera 3 on board the Hubble Space Telescope. The star has a current mass of around 0.64 solar masses. It is surrounded by a dense equatorial disc composed of gas and dust. This dense disc is postulated to have caused the star's outflows to form a bipolar structure similar to an hourglass. This bipolar structure shows features such as ionization walls, knots and sharp edges to the lobes.
Observation history
As it is included in the New General Catalogue, this object has been known since at least 1888. The earliest-known study of NGC 6302 is by Edward Emerson Barnard, who drew and described it in 1907.
The nebula featured in some of the first images released after the final servicing mission of the Hubble Space Telescope in September 2009.
Characteristics
NGC 6302 has a complex structure, which may be approximated as bipolar with two primary lobes, though there is evidence for a second pair of lobes that may have belonged to a previous phase of mass loss. A dark lane runs through the waist of the nebula obscuring the central star at all wavelengths.
The nebula contains a prominent northwest lobe which extends up to 3.0′ away from the central star and is estimated to have formed from an eruptive event around 1,900 years ago. It has a circular part whose walls are expanding such that each part has a speed proportional to its distance from the central star. At an angular distance of 1.71′ from the central star, the flow velocity of this lobe is measured to be 263 km/s. At the extreme periphery of the lobe, the outward velocity exceeds 600 km/s. The western edge of the lobe displays characteristics suggestive of a collision with pre-existing globules of gas which modified the outflow in that region.
Central star
The central star, among the hottest stars known, had escaped detection because of a combination of its high temperature (meaning that it radiates mainly in the ultraviolet), the dusty torus (which absorbs a large fraction of the light from the central regions, especially in the ultraviolet) and the bright background from the star. It was not seen in the first Hubble Space Telescope images; the improved resolution and sensitivity of the new Wide Field Camera 3 of the same telescope later revealed the faint star at the centre. A temperature of 200,000 Kelvin is indicated, and a mass of 0.64 solar masses. The original mass of the star was much higher, but most was ejected in the event which created the planetary nebula. The luminosity and temperature of the star indicate it has ceased nuclear burning and is on its way to becoming a white dwarf, fading at a predicted rate of 1% per year.
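To put the quoted fading rate in observational terms, the short sketch below converts a steady 1% per year decline, read here as a drop in luminosity, into the equivalent change in magnitudes; the conversion is just the standard magnitude definition and the time spans are arbitrary.

```python
import math

FADE_PER_YEAR = 0.01  # the predicted 1% per year dimming quoted above

for years in (10, 50, 100):
    flux_fraction = (1 - FADE_PER_YEAR) ** years          # remaining fraction of light
    delta_mag = -2.5 * math.log10(flux_fraction)          # magnitudes of fading
    print(f"after {years:>3} yr: flux x{flux_fraction:.2f}, ~{delta_mag:.2f} mag fainter")
# ~0.11 mag after 10 yr, ~0.55 mag after 50 yr and ~1.09 mag after 100 yr:
# a slow fade, but one that is measurable on decade timescales.
```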
Dust chemistry
The prominent dark lane that runs through the centre of the nebula has been shown to have an unusual composition, showing evidence for multiple crystalline silicates, crystalline water ice and quartz, with other features which have been interpreted as the first extra-solar detection of carbonates. This detection has been disputed, due to the difficulties in forming carbonates in a non-aqueous environment. The dispute remains unresolved.
One of the characteristics of the dust detected in NGC 6302 is the existence of both oxygen-bearing silicate molecules and carbon-bearing polycyclic aromatic hydrocarbons (PAHs). Stars are usually either oxygen-rich or carbon-rich, the change from the former to the latter occurring late in the evolution of the star due to nuclear and chemical changes in the star's atmosphere. NGC 6302 belongs to a group of objects where hydrocarbon molecules formed in an oxygen-rich environment.
See also
List of largest nebulae
Lists of nebulae
Notes
References
External links
NASA News Release
Discovery of the star
ESA/Hubble News Release
SIMBAD Query Result
Butterfly Nebula at Constellation Guide
069b
6302
Planetary nebulae
Scorpius
Sharpless objects | NGC 6302 | [
"Astronomy"
] | 901 | [
"Scorpius",
"Constellations"
] |
4,117,395 | https://en.wikipedia.org/wiki/Hugo%20von%20Seeliger | Hugo von Seeliger (23 September 1849 – 2 December 1924), also known as Hugo Hans Ritter von Seeliger, was a German astronomer, often considered the most important astronomer of his day.
Biography
He was born in Biala, completed high school in Teschen in 1867, and studied at the Universities of Heidelberg and Leipzig. He earned a doctorate in astronomy in 1872 from the latter, studying under Carl Christian Bruhns. He was on the staff of the University of Bonn Observatory until 1877, as an assistant to Friedrich Wilhelm Argelander. In 1874, he directed the German expedition to the Auckland Islands to observe the transit of Venus. In 1881, he became the Director of the Gotha Observatory, and in 1882 became a professor of Astronomy and Director of the Observatory at the University of Munich, which post he held until his death. His students included Hans Kienle, Ernst Anding, Julius Bauschinger, Paul ten Bruggencate, Gustav Herglotz, Richard Schorr, and especially Karl Schwarzschild, who earned a doctorate under him in 1898, and acknowledged Seeliger's influence in speeches throughout his career.
Seeliger was elected an Associate of the Royal Astronomical Society in 1892, and President of the Astronomische Gesellschaft from 1897 to 1921. He received numerous honours and medals, including knighthood (Ritter), between 1896 and 1917.
His contributions to astronomy include an explanation of the anomalous motion of the perihelion of Mercury (later one of the main tests of general relativity), a theory of novae arising from the collision of a star with a cloud of gas, and his confirmation of James Clerk Maxwell's theories of the composition of the rings of Saturn by studying variations in their albedo. He is also the discoverer of an apparent paradox in Newton's gravitational law, known as Seeliger's Paradox. However, his main interest was in the stellar statistics of the Bonner Durchmusterung and Bonn section of the Astronomische Gesellschaft star catalogues, and in the conclusions these led to about the structure of the universe. Seeliger's views on the dimensions of our galaxy were consistent with Jacobus Kapteyn's later studies.
Seeliger was an opponent of Albert Einstein's theory of relativity.
He continued his work until his death, on 2 December 1924, aged 75.
The asteroid 892 Seeligeria and the lunar crater Seeliger were named in his honour. The brightening of Saturn's rings at opposition is known as the Seeliger Effect, to acknowledge his pioneering research in this field. Minor planet 251 Sophia is named after his wife, Sophia.
Students
His PhD students were (after Mathematics Genealogy Project, Hugo Hans von Seeliger) :
Julius Bauschinger, Ludwig-Maximilians-Universität München, 1884
Ernst Anding, Ludwig-Maximilians-Universität München, 1888
Richard Schorr, Ludwig-Maximilians-Universität München, 1889
Karl Oertel, Ludwig-Maximilians-Universität München, 1890
Oscar Hecker, Ludwig-Maximilians-Universität München, 1891
Adalbert Bock, Ludwig-Maximilians-Universität München, 1892
George Myers, Ludwig-Maximilians-Universität München, 1896
Karl Schwarzschild, Ludwig-Maximilians-Universität, München 1897
Lucian Grabowski, Ludwig-Maximilians-Universität München, 1900
Gustav Herglotz, Ludwig-Maximilians-Universität München, 1900
Emil Silbernagel, Ludwig-Maximilians-Universität München, 1905
Ernst Zapp, Ludwig-Maximilians-Universität München, 1907
Kasimir Jantzen, Ludwig-Maximilians-Universität München, 1912
Wilhelm Keil, Ludwig-Maximilians-Universität München, 1918
Friedrich Burmeister, Ludwig-Maximilians-Universität München, 1919
Gustav Schnauder, Ludwig-Maximilians-Universität München, 1921
Walter Sametinger, Ludwig-Maximilians-Universität München, 1924
References
Freddy Litten:Hugo von Seeliger – Kurzbiographie Short biography (in German).
Obituary: Professor Hugo von Seeliger Scan from "The Observatory", Vol. 48, p. 77 (1925), presented by Smithsonian/NASA ADS Astronomy Abstract Service
1849 births
1924 deaths
People from Biała
People from Austrian Silesia
20th-century German astronomers
19th-century German astronomers
Bavarian nobility
Academic staff of the Ludwig Maximilian University of Munich
Recipients of the Pour le Mérite (civil class)
Relativity critics
Foreign associates of the National Academy of Sciences
Members of the Royal Society of Sciences in Uppsala | Hugo von Seeliger | [
"Physics"
] | 985 | [
"Relativity critics",
"Theory of relativity"
] |
4,117,439 | https://en.wikipedia.org/wiki/NGC%206751 | NGC 6751, also known as the Glowing Eye Nebula, is a planetary nebula in the constellation Aquila. It is estimated to be about 6,500 light-years (2.0 kiloparsecs) away.
NGC 6751 was discovered by the astronomer Albert Marth on 20 July 1863. John Louis Emil Dreyer, the compiler of the New General Catalogue, described the object as "pretty bright, small". The object was assigned a duplicate designation, NGC 6748.
The nebula was the subject of the winning picture in the 2009 Gemini School Astronomy Contest, in which Australian high school students competed to select an astronomical target to be imaged by Gemini.
NGC 6751 is an easy telescopic target for deep-sky observers because its location is immediately southeast of the extremely red-colored cool carbon star V Aquilae.
Properties
NGC 6751, like all planetary nebulae, was formed when a dying star threw off its outer layers of gas several thousand years ago. It is estimated to be around 0.8 light-years in diameter.
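As a quick consistency check on the two figures just quoted, the small-angle relation converts a 0.8 light-year diameter at roughly 6,500 light-years into an expected apparent size; the 206,265 arcseconds-per-radian factor is standard and the calculation is only illustrative.

```python
DISTANCE_LY = 6500.0   # quoted distance to NGC 6751
DIAMETER_LY = 0.8      # quoted physical diameter

# Small-angle approximation: theta [radians] ~ size / distance; 1 rad = 206,265 arcsec.
theta_arcsec = (DIAMETER_LY / DISTANCE_LY) * 206_265
print(f"expected apparent diameter: ~{theta_arcsec:.0f} arcsec")  # ~25 arcsec
# A few tens of arcseconds is indeed the angular scale on which the bright
# inner bubble described below is seen.
```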
NGC 6751 has a complex bipolar structure. There is a bright inner bubble (shown in the photo), as well as two fainter halos. (The outer halo, with a radius of 50, is extremely faint and broken, while the inner halo, with a radius of 27, is roughly spherical.) On both the west and east sides of the inner shell, knots can be seen that are surrounded by faint "lobes". These lobes are actually a ring, and the eastern side is nearer than the western side. As a whole, the system is approaching the Solar System with a heliocentric radial velocity of −31.7 km/s.
The central star of the nebula has a similar spectrum to a Wolf–Rayet star (spectral type [WC4]), and has an effective temperature of about 140,000 K and a radius of about . It is losing mass at a rate of per year, and its surface composition is mostly helium and carbon.
The winning image of the 2009 Gemini Astronomy Contest shows a nebula at the top left of NGC 6751. This 80 × 40 arcsec nebula was discovered in 1990 by Hua & Louise at the Newton focus of the 120 cm Foucault telescope at the Observatoire de Haute-Provence (O.H.P.), Saint-Michel-l'Observatoire.
See also
List of planetary nebulae
References
External links
NGC 6751 seds.org
Planetary nebulae
6751
Aquila (constellation)
177656 | NGC 6751 | [
"Astronomy"
] | 520 | [
"Aquila (constellation)",
"Constellations"
] |
4,117,491 | https://en.wikipedia.org/wiki/NGC%206781 | NGC 6781, also known as the Snowglobe Nebula, is a planetary nebula located in the equatorial constellation of Aquila, about 2.5° east-northeast of the 5th magnitude star 19 Aquilae. It was discovered July 30, 1788 by the Anglo-German astronomer William Herschel. The nebula lies at a distance of from the Sun. It has a visual magnitude of 11.4 and spans an angular size of .
The bipolar dust shell of this nebula is believed to be barrel-shaped and is being viewed from nearly pole-on. It has an outer angular radius of ; equivalent to a physical radius of . The total mass of gas ejected as the central star passed through its last asymptotic giant branch (AGB) thermal pulse event is , while the estimated dust mass is .
The magnitude 16.88 central star of the planetary nebula is a white dwarf with a spectral type of DAO. It has an M-type co-moving companion at a projected separation of under . The white dwarf progenitor star had an estimated initial mass of . It left the AGB and entered the cooling stage around 9,400 years ago.
References
External links
Planetary nebulae
6781
Aquila (constellation) | NGC 6781 | [
"Astronomy"
] | 253 | [
"Aquila (constellation)",
"Constellations"
] |
4,117,653 | https://en.wikipedia.org/wiki/Neovascularization | Neovascularization is the natural formation of new blood vessels (neo- + vascular + -ization), usually in the form of functional microvascular networks, capable of perfusion by red blood cells, that form to serve as collateral circulation in response to local poor perfusion or ischemia.
Growth factors that inhibit neovascularization include those that affect endothelial cell division and differentiation. These growth factors often act in a paracrine or autocrine fashion; they include fibroblast growth factor, placental growth factor, insulin-like growth factor, hepatocyte growth factor, and platelet-derived endothelial growth factor.
There are three different pathways that comprise neovascularization: (1) vasculogenesis, (2) angiogenesis, and (3) arteriogenesis.
Three pathways of neovascularization
Vasculogenesis
Vasculogenesis is the de novo formation of blood vessels. This primarily occurs in the developing embryo with the development of the first primitive vascular plexus, but also occurs to a limited extent with post-natal vascularization. Embryonic vasculogenesis occurs when endothelial cells precursors (hemangioblasts) begin to proliferate and migrate into avascular areas. There, they aggregate to form the primitive network of vessels characteristic of embryos. This primitive vascular system is necessary to provide adequate blood flow to cells, supplying oxygen and nutrients, and removing metabolic wastes.
Angiogenesis
Angiogenesis is the most common type of neovascularization seen in development and growth, and is important to both physiological and pathological processes. Angiogenesis occurs through the formation of new vessels from pre-existing vessels. This occurs through the sprouting of new capillaries from post-capillary venules, requiring precise coordination of multiple steps and the participation and communication of multiple cell types. The complex process is initiated in response to local tissue ischemia or hypoxia, leading to the release of angiogenic factors such as VEGF and HIF-1. This leads to vasodilatation and an increase in vascular permeability, leading to sprouting angiogenesis or intussusceptive angiogenesis.
Arteriogenesis
Arteriogenesis is the process of flow-related remodelling of existing vasculature to create collateral arteries. This can occur in response to ischemic vascular diseases or increase demand (e.g. exercise training). Arteriogenesis is triggered through nonspecific factors, such as shear stress and blood flow.
Ocular pathologies
Corneal neovascularization
Corneal neovascularization is a condition where new blood vessels invade the cornea from the limbus. It is triggered when the balance between angiogenic and antiangiogenic factors that otherwise maintains corneal transparency is disrupted. The immature new blood vessels can lead to persistent inflammation and scarring, lipid exudation into the corneal tissues, and a reduction in corneal transparency, which can affect visual acuity.
Retinopathy of prematurity
Retinopathy of prematurity is a condition that occurs in premature babies. In premature babies, the retina has not completely vascularized. Rather than continuing in the normal in utero fashion, the vascularization of the retina is disrupted, leading to an abnormal proliferation of blood vessels between the areas of vascularized and avascular retina. These blood vessels grow in abnormal ways and can invade into the vitreous humor, where they can hemorrhage or cause retinal detachment in neonates.
Diabetic retinopathy
Diabetic retinopathy, which can develop into proliferative diabetic retinopathy, is a condition where capillaries in the retina become occluded, creating areas of ischemic retina and triggering the release of angiogenic growth factors. This retinal ischemia stimulates the proliferation of new blood vessels from pre-existing retinal venules. It is the leading cause of blindness in working-age adults.
Age-related macular degeneration
In persons who are over 65 years old, age-related macular degeneration is the leading cause of severe vision loss. A subtype of age-related macular degeneration, wet macular degeneration, is characterized by the formation of new blood vessels that originate in the choroidal vasculature and extend into the subretinal space.
Choroidal neovascularization
In ophthalmology, choroidal neovascularization is the formation of a microvasculature within the innermost layer of the choroid of the eye. Neovascularization in the eye can cause a type of glaucoma (neovascular glaucoma) if the new blood vessels' bulk blocks the constant outflow of aqueous humour from inside the eye.
Neovascularization and therapy
Ischemic heart disease
Cardiovascular disease is the leading cause of death in the world. Ischemic heart disease develops when stenosis and occlusion of coronary arteries develops, leading to reduced perfusion of the cardiac tissues. There is ongoing research exploring techniques that might be able to induce healthy neovascularization of ischemic cardiac tissues.
See also
Choroidal neovascularization
Corneal neovascularization
Revascularization
Rubeosis iridis
Inosculation
References
Angiogenesis
Medical terminology | Neovascularization | [
"Biology"
] | 1,136 | [
"Angiogenesis"
] |
4,118,276 | https://en.wikipedia.org/wiki/Conditional%20random%20field | Conditional random fields (CRFs) are a class of statistical modeling methods often applied in pattern recognition and machine learning and used for structured prediction. Whereas a classifier predicts a label for a single sample without considering "neighbouring" samples, a CRF can take context into account. To do so, the predictions are modelled as a graphical model, which represents the presence of dependencies between the predictions. The kind of graph used depends on the application. For example, in natural language processing, "linear chain" CRFs are popular, for which each prediction is dependent only on its immediate neighbours. In image processing, the graph typically connects locations to nearby and/or similar locations to enforce that they receive similar predictions.
Other examples where CRFs are used are: labeling or parsing of sequential data for natural language processing or biological sequences, part-of-speech tagging, shallow parsing, named entity recognition, gene finding, peptide critical functional region finding, and object recognition and image segmentation in computer vision.
Description
CRFs are a type of discriminative undirected probabilistic graphical model.
Lafferty, McCallum and Pereira define a CRF on observations X and random variables Y as follows:
Let G = (V, E) be a graph such that Y = (Y_v)_{v ∈ V}, so that Y is indexed by the vertices of G.
Then (X, Y) is a conditional random field when each random variable Y_v, conditioned on X, obeys the Markov property with respect to the graph; that is, its probability is dependent only on its neighbours in G:
P(Y_v | X, Y_w, w ≠ v) = P(Y_v | X, Y_w, w ∼ v), where w ∼ v means
that w and v are neighbors in G.
What this means is that a CRF is an undirected graphical model whose nodes can be divided into exactly two disjoint sets X and Y, the observed and output variables, respectively; the conditional distribution p(Y | X) is then modeled.
Inference
For general graphs, the problem of exact inference in CRFs is intractable. The inference problem for a CRF is basically the same as for an MRF and the same arguments hold.
However, there exist special cases for which exact inference is feasible:
If the graph is a chain or a tree, message passing algorithms yield exact solutions. The algorithms used in these cases are analogous to the forward-backward and Viterbi algorithm for the case of HMMs.
If the CRF only contains pair-wise potentials and the energy is submodular, combinatorial min cut/max flow algorithms yield exact solutions.
If exact inference is impossible, several algorithms can be used to obtain approximate solutions. These include:
Loopy belief propagation
Alpha expansion
Mean field inference
Linear programming relaxations
Parameter Learning
Learning the parameters θ is usually done by maximum likelihood learning for p(Y_i | X_i; θ). If all nodes have exponential family distributions and all nodes are observed during training, this optimization is convex. It can be solved for example using gradient descent algorithms, or Quasi-Newton methods such as the L-BFGS algorithm. On the other hand, if some variables are unobserved, the inference problem has to be solved for these variables. Exact inference is intractable in general graphs, so approximations have to be used.
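As an illustration only, the following Python sketch fits a toy linear-chain CRF by maximum likelihood. Everything in it is an assumed stand-in (the label set, feature function, training pairs, learning rate); the partition function is computed by brute-force enumeration of label sequences rather than the forward–backward recursions used in practice, and the gradient is taken by finite differences instead of the analytic expected-feature-count form.

import itertools
import math

LABELS = [0, 1]

def features(x, y_prev, y_curr, i):
    # Toy feature vector at position i: an "emission" match and a "transition" repeat indicator.
    return [1.0 if x[i] == y_curr else 0.0,
            1.0 if y_prev == y_curr else 0.0]

def score(w, x, y):
    # Unnormalized log-score: sum over positions of the weighted feature functions.
    s = 0.0
    for i in range(len(x)):
        y_prev = y[i - 1] if i > 0 else -1
        s += sum(wj * fj for wj, fj in zip(w, features(x, y_prev, y[i], i)))
    return s

def log_likelihood(w, data):
    ll = 0.0
    for x, y in data:
        log_z = math.log(sum(math.exp(score(w, x, list(yy)))
                             for yy in itertools.product(LABELS, repeat=len(x))))
        ll += score(w, x, list(y)) - log_z
    return ll

def train(data, steps=200, lr=0.5, eps=1e-5):
    w = [0.0, 0.0]
    for _ in range(steps):
        grad = []
        for j in range(len(w)):
            w_hi, w_lo = list(w), list(w)
            w_hi[j] += eps
            w_lo[j] -= eps
            grad.append((log_likelihood(w_hi, data) - log_likelihood(w_lo, data)) / (2 * eps))
        w = [wj + lr * gj for wj, gj in zip(w, grad)]  # gradient ascent on the (convex) log-likelihood
    return w

data = [([0, 0, 1, 1], [0, 0, 1, 1]), ([1, 1, 0, 0], [1, 1, 0, 0])]
print(train(data))  # learned weights favour matching observations and repeating labels

Because every node is observed and the model is log-linear, the objective here is convex, which is exactly the situation described above.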
Examples
In sequence modeling, the graph of interest is usually a chain graph. An input sequence of observed variables X represents a sequence of observations and Y represents a hidden (or unknown) state variable that needs to be inferred given the observations. The Y_i are structured to form a chain, with an edge between each Y_{i−1} and Y_i. As well as having a simple interpretation of the Y_i as "labels" for each element in the input sequence, this layout admits efficient algorithms for:
model training, learning the conditional distributions between the Y_i and feature functions from some corpus of training data.
decoding, determining the probability of a given label sequence Y given X.
inference, determining the most likely label sequence Y given X.
The conditional dependency of each Y_i on X is defined through a fixed set of feature functions of the form f(i, Y_{i−1}, Y_i, X), which can be thought of as measurements on the input sequence that partially determine the likelihood of each possible value for Y_i. The model assigns each feature a numerical weight and combines them to determine the probability of a certain value for Y_i.
Linear-chain CRFs have many of the same applications as conceptually simpler hidden Markov models (HMMs), but relax certain assumptions about the input and output sequence distributions. An HMM can loosely be understood as a CRF with very specific feature functions that use constant probabilities to model state transitions and emissions. Conversely, a CRF can loosely be understood as a generalization of an HMM that makes the constant transition probabilities into arbitrary functions that vary across the positions in the sequence of hidden states, depending on the input sequence.
Notably, in contrast to HMMs, CRFs can contain any number of feature functions, the feature functions can inspect the entire input sequence at any point during inference, and the range of the feature functions need not have a probabilistic interpretation.
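A compact sketch of the "most likely label sequence" (decoding) task described above, using the standard Viterbi recursion over per-position scores; the node and edge scoring functions below are arbitrary stand-ins for the weighted feature sums, not any published model.

def viterbi(n_positions, labels, node_score, edge_score):
    # node_score(i, y): score of label y at position i
    # edge_score(i, y_prev, y): score of the transition y_prev -> y into position i
    best = {y: node_score(0, y) for y in labels}
    backpointers = []
    for i in range(1, n_positions):
        new_best, ptr = {}, {}
        for y in labels:
            s, y_prev = max((best[yp] + edge_score(i, yp, y), yp) for yp in labels)
            new_best[y] = s + node_score(i, y)
            ptr[y] = y_prev
        best = new_best
        backpointers.append(ptr)
    y_last = max(best, key=best.get)
    path = [y_last]
    for ptr in reversed(backpointers):
        path.append(ptr[path[-1]])
    return list(reversed(path))

labels = ["A", "B"]
x = [0.2, 0.9, 0.8, 0.1]                              # toy observations
node = lambda i, y: x[i] if y == "B" else 1.0 - x[i]  # higher observation favours label B
edge = lambda i, yp, y: 0.3 if yp == y else 0.0       # mild preference for staying in the same label
print(viterbi(len(x), labels, node, edge))            # ['A', 'B', 'B', 'A']

For a sequence of length n and k labels this runs in O(n k^2) time, which is the efficiency advantage of the chain layout mentioned above.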
Variants
Higher-order CRFs and semi-Markov CRFs
CRFs can be extended into higher order models by making each Y_i dependent on a fixed number k of previous variables Y_{i−k}, ..., Y_{i−1}. In conventional formulations of higher order CRFs, training and inference are only practical for small values of k (such as k ≤ 5), since their computational cost increases exponentially with k.
However, another recent advance has managed to ameliorate these issues by leveraging concepts and tools from the field of Bayesian nonparametrics. Specifically, the CRF-infinity approach constitutes a CRF-type model that is capable of learning infinitely-long temporal dynamics in a scalable fashion. This is effected by introducing a novel potential function for CRFs that is based on the Sequence Memoizer (SM), a nonparametric Bayesian model for learning infinitely-long dynamics in sequential observations. To render such a model computationally tractable, CRF-infinity employs a mean-field approximation of the postulated novel potential functions (which are driven by an SM). This allows for devising efficient approximate training and inference algorithms for the model, without undermining its capability to capture and model temporal dependencies of arbitrary length.
There exists another generalization of CRFs, the semi-Markov conditional random field (semi-CRF), which models variable-length segmentations of the label sequence Y. This provides much of the power of higher-order CRFs to model long-range dependencies of the Y_i, at a reasonable computational cost.
Finally, large-margin models for structured prediction, such as the structured Support Vector Machine can be seen as an alternative training procedure to CRFs.
Latent-dynamic conditional random field
Latent-dynamic conditional random fields (LDCRF) or discriminative probabilistic latent variable models (DPLVM) are a type of CRF for sequence tagging tasks. They are latent variable models that are trained discriminatively.
In an LDCRF, as in any sequence tagging task, given a sequence of observations x = x_1, …, x_n, the main problem the model must solve is how to assign a sequence of labels y = y_1, …, y_n from one finite set of labels Y. Instead of directly modeling P(y | x) as an ordinary linear-chain CRF would do, a set of latent variables h is "inserted" between x and y using the chain rule of probability:
P(y | x) = Σ_h P(y | h, x) P(h | x)
This allows capturing latent structure between the observations and labels. While LDCRFs can be trained using quasi-Newton methods, a specialized version of the perceptron algorithm called the latent-variable perceptron has been developed for them as well, based on Collins' structured perceptron algorithm. These models find applications in computer vision, specifically gesture recognition from video streams, and in shallow parsing.
See also
Hammersley–Clifford theorem
Maximum entropy Markov model (MEMM)
References
Further reading
McCallum, A.: Efficiently inducing features of conditional random fields. In: Proc. 19th Conference on Uncertainty in Artificial Intelligence. (2003)
Wallach, H.M.: Conditional random fields: An introduction. Technical report MS-CIS-04-21, University of Pennsylvania (2004)
Sutton, C., McCallum, A.: An Introduction to Conditional Random Fields for Relational Learning. In "Introduction to Statistical Relational Learning". Edited by Lise Getoor and Ben Taskar. MIT Press. (2006) Online PDF
Klinger, R., Tomanek, K.: Classical Probabilistic Models and Conditional Random Fields. Algorithm Engineering Report TR07-2-013, Department of Computer Science, Dortmund University of Technology, December 2007. ISSN 1864-4503. Online PDF
Graphical models
Machine learning | Conditional random field | [
"Engineering"
] | 1,729 | [
"Artificial intelligence engineering",
"Machine learning"
] |
4,118,330 | https://en.wikipedia.org/wiki/138P/Shoemaker%E2%80%93Levy | 138P/Shoemaker–Levy, also known as Shoemaker–Levy 7, is a faint periodic comet in the Solar System. The comet last came to perihelion on 11 June 2012, but only brightened to about apparent magnitude 20.5.
There were 4 recovery images of 138P on 8 August 2018 by Pan-STARRS when the comet had a magnitude of about 21.5. The comet comes to perihelion on 2 May 2019.
This comet should not be confused with Comet Shoemaker–Levy 9 (D/1993 F2), which crashed into Jupiter in 1994.
References
External links
138P/Shoemaker-Levy 7 – Seiichi Yoshida @ aerith.net
Elements and Ephemeris for 138P/Shoemaker-Levy – Minor Planet Center
138P at Kronk's Cometography
Periodic comets
0138
138P
138P
138P
138P
138P
19911113 | 138P/Shoemaker–Levy | [
"Astronomy"
] | 191 | [
"Astronomy stubs",
"Comet stubs"
] |
4,118,424 | https://en.wikipedia.org/wiki/Schottky%20defect | A Schottky defect is an excitation of the site occupations in a crystal lattice leading to point defects named after Walter H. Schottky. In ionic crystals, this defect forms when oppositely charged ions leave their lattice sites and become incorporated for instance at the surface, creating oppositely charged vacancies. These vacancies are formed in stoichiometric units, to maintain an overall neutral charge in the ionic solid.
Definition
Schottky defects consist of unoccupied anion and cation sites in a stoichiometric ratio. For a simple ionic crystal of type A−B+, a Schottky defect consists of a single anion vacancy (A) and a single cation vacancy (B), or v_A^• + v_B^′ in Kröger–Vink notation. For a more general crystal with formula AxBy, a Schottky cluster is formed of x vacancies of A and y vacancies of B, thus the overall stoichiometry and charge neutrality are conserved. Conceptually, a Schottky defect is generated if the crystal is expanded by one unit cell, whose a priori empty sites are filled by atoms that diffused out of the interior, thus creating vacancies in the crystal.
Schottky defects are observed most frequently when there is a small difference in size between the cations and anions that make up a material.
Illustration
Chemical equations in Kröger–Vink notation for the formation of Schottky defects in TiO2 and BaTiO3.
∅ → v_Ti'''' + 2 v_O^••
∅ → v_Ba'' + v_Ti'''' + 3 v_O^••
This can be illustrated schematically with a two-dimensional diagram of a sodium chloride crystal lattice:
Bound and dilute defects
The vacancies that make up the Schottky defects have opposite charges, thus they experience a mutually attractive Coulomb force. At low temperature, they may form bound clusters. The degree to which the Schottky defect affects the lattice depends on temperature: at higher temperatures, multiple anion vacancies can also be observed around a cation vacancy. Anion vacancies located near a cation vacancy hinder the displacement of the cation.
The bound clusters are typically less mobile than the dilute counterparts, as multiple species need to move in a concerted motion for the whole cluster to migrate. This has important implications for numerous functional ceramics used in a wide range of applications, including ion conductors, Solid oxide fuel cells and nuclear fuel.
Examples
This type of defect is typically observed in highly ionic compounds, highly coordinated compounds, and where there is only a small difference in sizes of cations and anions of which the compound lattice is composed. Typical salts where Schottky disorder is observed are NaCl, KCl, KBr, CsCl and AgBr. For engineering applications, Schottky defects are important in oxides with Fluorite structure, such as CeO2, cubic ZrO2, UO2, ThO2 and PuO2.
Effect on density
Typically, the formation volume of a vacancy is positive: the lattice contraction due to the strains around the defect does not make up for the expansion of the crystal due to the additional number of sites. Thus, the density of the solid crystal is less than the theoretical density of the material.
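A back-of-envelope sketch of this density argument, with purely illustrative numbers and the lattice relaxation around the defects neglected: if a fraction x of lattice sites is vacant, the density is lowered to roughly (1 − x) of the theoretical value.

def density_with_schottky(theoretical_density, vacant_site_fraction):
    # Mass leaves with the vacant sites while the crystal gains the same number of
    # (empty) sites at the surface, so to first order the density scales as (1 - x).
    return theoretical_density * (1.0 - vacant_site_fraction)

rho_ideal = 2.165  # g/cm^3, approximate X-ray (theoretical) density of NaCl, for illustration
for x in (1e-5, 1e-4, 1e-3):
    print(f"vacant site fraction {x:.0e}: density ~ {density_with_schottky(rho_ideal, x):.6f} g/cm^3")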
See also
Frenkel defect
Wigner effect
Crystallographic defects
References
Kovalenko, M.A, and A. Ya Kupryazhkin. “States of the Schottky Defect in Uranium Dioxide and Other Fluorite Type Crystals: Molecular Dynamics Study.” Journal of Alloys and Compounds, vol. 645, no. 0925-8388, 1 Oct. 2015, pp. 405–413, https://doi.org/10.1016/j.jallcom.2015.05.111. Accessed 30 Apr. 2024.
Notes
Crystallographic defects | Schottky defect | [
"Chemistry",
"Materials_science",
"Engineering"
] | 811 | [
"Crystallographic defects",
"Crystallography",
"Materials degradation",
"Materials science"
] |
4,118,466 | https://en.wikipedia.org/wiki/139P/V%C3%A4is%C3%A4l%C3%A4%E2%80%93Oterma | 139P/Väisälä–Oterma is a periodic comet in the Solar System. When it was discovered in 1939 it was not recognized as a comet and designated as asteroid 1939 TN.
References
External links
Orbital simulation from JPL (Java) / Horizons Ephemeris
139P/Vaisala-Oterma – Seiichi Yoshida @ aerith.net
139P at Kronk's Cometography
Periodic comets
0139
Discoveries by Liisi Oterma
+
139P
19391007 | 139P/Väisälä–Oterma | [
"Astronomy"
] | 106 | [
"Astronomy stubs",
"Comet stubs"
] |
4,119,009 | https://en.wikipedia.org/wiki/Anopheles%20gambiae | The Anopheles gambiae complex consists of at least seven morphologically indistinguishable species of mosquitoes in the genus Anopheles. The complex was recognised in the 1960s and includes the most important vectors of malaria in sub-Saharan Africa, particularly of the most dangerous malaria parasite, Plasmodium falciparum. It is one of the most efficient malaria vectors known. The An. gambiae mosquito additionally transmits Wuchereria bancrofti which causes lymphatic filariasis, a symptom of which is elephantiasis.
Discovery and elements
The Anopheles gambiae complex or Anopheles gambiae sensu lato was recognized as a species complex only in the 1960s. The A. gambiae complex consists of:
Anopheles arabiensis
Anopheles bwambae
Anopheles melas
Anopheles merus
Anopheles quadriannulatus
Anopheles gambiae sensu stricto (s.s.)
Anopheles amharicus
The individual species of the complex are morphologically difficult to distinguish from each other, although it is possible for larvae and adult females. The species exhibit different behavioural traits. For example, Anopheles quadriannulatus is both a saltwater and mineralwater species. A. melas and A. merus are saltwater species, while the remainder are freshwater species.
Anopheles quadriannulatus generally takes its blood meal from animals (zoophilic), whereas Anopheles gambiae sensu stricto generally feeds on humans, i.e. is considered anthropophilic.
Identification to the individual species level using the molecular methods of Scott et al. (1993) can have important implications in subsequent control measures.
Anopheles gambiae in the strict sense
An. gambiae sensu stricto (s.s.) has been discovered to be currently in a state of diverging into two different species—the Mopti (M) and Savannah (S) strains—though as of 2007, the two strains are still considered to be a single species.
A mechanism of species recognition, using the sound emitted by the wings and detected by Johnston's organ, was proposed in 2010; however, this mechanism has never been confirmed since, and the overall theory of recognition through "harmonic convergence" has been challenged.
Genome
An. gambiae s.s. genomes have been sequenced three times, once for the M strain, once for the S strain, and once for a hybrid strain. Currently, ~90 miRNA have been predicted in the literature (38 miRNA officially listed in miRBase) for An. gambiae s.s. based upon conserved sequences to miRNA found in Drosophila. Holt et al., 2002 and Neafsey et al., 2016 find transposable elements to be ~13% of the genome, similar to Drosophila melanogaster (also in Diptera). However they find the proportion of TE types to be very different from D. melanogaster with approximately the same composition of long terminal repeat retrotransposons, non-long terminal repeat retrotransposons and DNA transposons. These proportions are believed to be representative of the genus.
The genetics and genomics of sex chromosomes have been discovered and studied by Windbichler et al., 2007 and Galizi et al., 2014 (a Physarum polycephalum homing endonuclease which destroys X chromosomes), Windbichler et al., 2008 and Hammond et al., 2016 (methods to reduce the female population), Windbichler et al., 2011 (trans from yeast), Bernardini et al., 2014 (a method to increase the male population), Kyrou et al., 2018 (a female necessary exon and a homing endonuclease to drive it), Taxiarchi et al., 2019 (sex chromosome dynamics in general) and Simoni et al., 2020 (an X chromosome destroying site specific nuclease). See below for their applications.
An. gambiae has a high degree of polymorphism. This is especially true in the cytochrome P450s, with Wilding et al., 2009 finding 1 single nucleotide polymorphism (SNP) per 26 base pairs. This species has the highest amount of polymorphism in the CYPs of any insect known, much of it found in "scaffolds" that occur only in particular subpopulations. These are termed "dual haplotype regions" by Holt et al., 2002, who sequenced the strain.
In common with many chromosomes, An. gambiae codes for spindle and kinetochore-associated proteins. Hanisch et al., 2006 locate AgSka1, the spindle and kinetochore-associated protein 1 gene, at EAL39257.
Whether the entire Culicidae family conserves epigenetic mechanisms remains unresolved. Toward answering this question, Marhold et al., 2004 compare their own previous work in Drosophila melanogaster against new sequences of D. pseudoobscura and An. gambiae. They find all three do share the DNA methylation enzyme DNMT2 (DmDNMT2, DpDNMT2, and AgDNMT2). This suggests all Diptera may conserve an epigenetic system employing Dnmt2.
Hosts
Hosts include Bos taurus, Capra hircus, Ovis aries and Sus scrofa.
Parasites
Parasites include Plasmodium berghei (for which it also serves as a vector), and the bioinsecticides/entomopathogenic fungi Metarhizium robertsii and Beauveria bassiana. All three of these parasites combine with insecticides to reduce fitness (see below). CRISPR/Cas9 and U6-gRNA are increasingly being used together for knockout experiments in mosquitoes. Dong et al., 2018 develops and presents a new U6-gRNA+Cas9 technique in An. gambiae, and utilizes it to knock out fibrinogen related protein 1 (FREP1), thereby severely reducing infection of the mosquito by P. berghei and P. falciparum. However, this also demonstrates the centrality of FREP1 to the insect's success, as the knockout impairs all measured activities across all life stages. Yang et al., 2020 uses the Dong method to do the same with mosGILT, also severely reducing Plasmodium infection of the mosquito but also finding that a vital life process is impaired, in mosGILT's case ovary development.
Control
Insecticides
Parasites/bioinsecticides and chemical insecticides synergistically reduce fitness. Saddler et al., 2015 finds even An. gambiae with knockdown resistance (kdr) are more susceptible to DDT if they are first infected with Plasmodium berghei and Farenhorst et al., 2009 the same for Metarhizium robertsii or Beauveria bassiana. This is probably due to an effect found by Félix et al., 2010 and Stevenson et al., 2011: An. gambiae alters various activities especially CYP6M2 in response to P. berghei invasion. CYP6M2 is known to somehow produce pyrethroid resistance, and pyrethroids and DDT share a mechanism of action.
Gene drive
Research relevant to the development of gene drive controls of An. gambiae have been performed by Windbichler et al., 2007, Windbichler et al., 2008, Windbichler et al., 2011, Bernardini et al., 2014, Galizi et al., 2014, Hammond et al., 2016, Kyrou et al., 2018, Taxiarchi et al., 2019 and Simoni et al., 2020. For specific genes involved see above. These can all be used in pest control because they induce infertility.
Fecundity
Fecundity of An. gambiae depends on the detoxification of reactive oxygen species (ROS) by catalase. Reduction in catalase activity significantly reduces reproductive output of female mosquitoes, indicating that catalase plays a central role in protecting oocytes and early embryos from ROS damage.
Historical note
An. gambiae invaded northeastern Brazil in 1930, which led to a malaria epidemic in 1938/1939. The Brazilian government, assisted by the Rockefeller Foundation in a programme spearheaded by Fred Soper, eradicated these mosquitoes from this area. This effort was modeled on the earlier success in eradicating Aedes aegypti as part of the yellow fever control program. The exact species involved in this epidemic has since been identified as An. arabiensis.
Peptide hormones
Kaufmann and Brown 2008 find that the An. gambiae adipokinetic hormone (AKH) mobilizes carbohydrates but not lipids. Meanwhile, AKH/Corazonin Peptide (ACP) neither mobilizes nor inhibits the mobilization of either. Mugumbate et al., 2013 provides in-solution and membrane-bound structures from a nuclear magnetic resonance investigation.
References
External links
DiArk
gambiae
Insect vectors of human pathogens
Animal models
Insects described in 1902 | Anopheles gambiae | [
"Biology"
] | 1,935 | [
"Model organisms",
"Animal models"
] |
4,119,243 | https://en.wikipedia.org/wiki/Starseed%20launcher | Starseed is a proposed method of launching interstellar nanoprobes at one-third light speed.
The launcher uses a 1,000 km-long, small-diameter hollow wire lined with electrodes, forming an electrostatic accelerator tube similar to K. Eric Drexler's ideas. The launcher is designed to accelerate its probes to 1/3 the speed of light, about 100,000 kilometers per second, at something on the order of 100 million gravities of acceleration.
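A back-of-envelope, non-relativistic check of the quoted numbers (constant acceleration over the full launcher length is assumed; at one-third of light speed relativistic corrections are modest):

g0 = 9.81          # m/s^2, standard gravity
v = 1.0e8          # m/s, roughly one-third the speed of light
length = 1.0e6     # m, the 1,000 km launcher
a = v**2 / (2 * length)   # constant-acceleration kinematics, v^2 = 2 a L
print(f"required acceleration ~ {a:.1e} m/s^2, i.e. ~ {a / g0:.1e} g")

This comes out at a few times 10^8 g, comparable in order of magnitude to the figure quoted above.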
Keeping the launch tube straight enough to avoid the probe hitting the tube walls is a major challenge. The launcher would have to be set up in deep space, well away from any planets, to avoid gravitational tide effects bending the tube too much.
The proposed starseed probes would be extremely small (roughly one microgram) nanomachines and nanocomputers. The required launch energy per probe would be low due to the low mass, and many nanoprobes would be launched in sequence and rendezvous in flight.
References
Reference to Starseed concept in paper from 2010 International Planetary Probe Workshop
Hypothetical spacecraft
Interstellar travel | Starseed launcher | [
"Astronomy",
"Technology"
] | 227 | [
"Exploratory engineering",
"Astronomical hypotheses",
"Spacecraft stubs",
"Hypothetical spacecraft",
"Astronomy stubs",
"Interstellar travel"
] |
4,119,246 | https://en.wikipedia.org/wiki/Psychology%20of%20programming | The psychology of programming (PoP) is the field of research that deals with the psychological aspects of writing programs (often computer programs). The field has also been called the empirical studies of programming (ESP). It covers research into computer programmers' cognition, tools and methods for programming-related activities, and programming education.
Psychologically, computer programming is a human activity which involves cognitions such as reading and writing computer language, learning, problem solving, and reasoning.
History
The history of the psychology of programming dates back to the late 1970s and early 1980s, when researchers realized that computational power should not be the only thing evaluated in programming tools and technologies, but also their usability for users. In the first Workshop on Empirical Studies of Programmers, Ben Shneiderman listed several important destinations for researchers. These destinations include refining the use of current languages, improving present and future languages, developing special purpose languages, and improving tools and methods. Two important workshop series have been devoted to the psychology of programming in the last two decades: the Workshop on Empirical Studies of Programmers (ESP), based primarily in the US, and the Psychology of Programming Interest Group Workshop (PPIG), which has a European character. ESP has a broader scope than pure psychology in programming, whereas PPIG is more focused on the field of PoP. However, PPIG workshops and the organization PPIG itself are informal in nature; it is a group of people interested in PoP who come together and publish their discussions.
Goals and purposes
It is desirable to achieve a programming performance such that creating a program meets its specifications, is on schedule, is adaptable for the future and runs efficiently. Being able to satisfy all these goals at a low cost is a difficult and common problem in software engineering and project management. By understanding the psychological aspects of computer programming, we can better understand how to achieve a higher programming performance, and to assist programmers to produce better software with less error.
Research methods
Some methods which one can use to study the psychological aspects of computer programming include introspection, observation, experiment, and qualitative research.
Cognitive biases
Cognitive biases are systematic differences from an optimal way of reasoning about something. Research has suggested there are a number of biases involved in programming. Anchoring bias has been identified in estimation and the reuse of ideas. There is an optimism bias that applies to work being carried out. Availability bias can cause programmers to use incorrect keywords when searching documentation and thus fail to find relevant information, and it can prevent programmers from applying lessons learned from previous projects in an organization.
Confirmation bias can apply to testing leading developers to write test cases that will work for the code rather than those that are likely to fail. It can also apply to searching documentation only for a programmers current hypothesis. Training in logical reasoning and hypothesis testing reduced this confirmation bias.
See also
Cognitive psychology
Human computer interaction
Learning
Problem solving
References
External links
Psychology of programming web site
Cognition
Computer programming
Cyberpsychology | Psychology of programming | [
"Technology",
"Engineering"
] | 598 | [
"Software engineering",
"Computer programming",
"Computers"
] |
4,119,350 | https://en.wikipedia.org/wiki/Nickel%28III%29%20oxide | Nickel (III) oxide is the inorganic compound with the formula Ni2O3. It is not well characterized, and is sometimes referred to as black nickel oxide. Traces of Ni2O3 on nickel surfaces have been mentioned.
Nickel (III) oxide has been studied theoretically since the early 1930s, supporting its unstable nature at standard temperatures. A nanostructured pure phase of the material was synthesized and stabilized for the first time in 2015 from the reaction of nickel(II) nitrate with sodium hypochlorite and characterized using powder X-ray diffraction and electron microscopy.
References
Inorganic compounds
Catalysts
Electrochemistry
Transition metal oxides
Nickel compounds
Non-stoichiometric compounds
Sesquioxides | Nickel(III) oxide | [
"Chemistry"
] | 145 | [
"Catalysis",
"Catalysts",
"Physical chemistry stubs",
"Inorganic compounds",
"Non-stoichiometric compounds",
"Electrochemistry",
"Chemical kinetics",
"Electrochemistry stubs"
] |
4,119,397 | https://en.wikipedia.org/wiki/Rydberg%20ionization%20spectroscopy | Rydberg ionization spectroscopy is a spectroscopy technique in which multiple photons are absorbed by an atom causing the removal of an electron to form an ion.
Resonance ionization spectroscopy
The ionization threshold energy of atoms and small molecules are typically larger than the photon energies that are most easily available experimentally. However, it can be possible to span this ionization threshold energy if the photon energy is resonant with an intermediate electronically excited state. While it is often possible to observe the lower Rydberg levels in conventional spectroscopy of atoms and small molecules, Rydberg states are even more important in laser ionization experiments. Laser spectroscopic experiments often involve ionization through a photon energy resonance at an intermediate level, with an unbound final electron state and an ionic core. On resonance for phototransitions permitted by selection rules, the intensity of the laser in combination with the excited state lifetime makes ionization an expected outcome. This RIS approach and variations permit sensitive detection of specific species.
Low Rydberg levels and resonance enhanced multiphoton ionization
High photon intensity experiments can involve multiphoton processes with the absorption of integer multiples of the photon energy. In experiments that involve a multiphoton resonance, the intermediate is often a Rydberg state, and the final state is often an ion. The initial state of the system, photon energy, angular momentum and other selection rules can help in determining the nature of the intermediate state. This approach is exploited in resonance enhanced multiphoton ionization spectroscopy (REMPI). An advantage of this spectroscopic technique is that the ions can be detected with almost complete efficiency and even resolved for their mass. It is also possible to gain additional information by performing experiments to look at the energy of the liberated photoelectron in these experiments. (Compton and Johnson pioneered the development of REMPI)
Near-threshold Rydberg levels
The same approach that produces an ionization event can be used to access the dense manifold of near-threshold Rydberg states with laser experiments. These experiments often involve a laser operating at one wavelength to access the intermediate Rydberg state and a second wavelength laser to access the near-threshold Rydberg state region. Because of the photoabsorption selection rules, these Rydberg electrons are expected to be in highly elliptical angular momentum states. It is the Rydberg electrons excited to nearly circular angular momentum states that are expected to have the longest lifetimes. The conversion between a highly elliptical and a nearly circular near-threshold Rydberg state might happen in several ways, including encountering small stray electric fields.
Zero electron kinetic energy spectroscopy
Zero electron kinetic energy (ZEKE) spectroscopy was developed with the idea of collecting only the resonance ionization photoelectrons that have extremely low kinetic energy. The technique involves waiting for a period of time after a resonance ionization experiment and then pulsing an electric field to collect the lowest energy photoelectrons in a detector. Typically, ZEKE experiments utilize two different tunable lasers. One laser photon energy is tuned to be resonant with the energy of an intermediate state. (This may be resonant with an excited state at a multiphoton transition.) Another photon energy is tuned to be close to the ionization threshold energy. The technique worked extremely well and demonstrated energy resolution that was significantly better than the laser bandwidth. It turns out that it was not the photoelectrons that were detected in ZEKE. The delay between the laser and the electric field pulse selected the longest lived and most circular Rydberg states closest to the energy of the ion core. The population distribution of surviving long-lived near threshold Rydberg states is close to the laser energy bandwidth. The electric field pulse Stark shifts the near-threshold Rydberg states and vibrational autoionization occurs. ZEKE has provided a significant advance in the study of the vibrational spectroscopy of molecular ions. Schlag, Peatman and Müller-Dethlefs originated ZEKE spectroscopy.
Mass analyzed threshold ionization
Mass analyzed threshold ionization (MATI) was developed with idea of collecting the mass of the ions in a ZEKE experiment.
MATI offered a mass resolution advantage to ZEKE. Because MATI also exploits vibrational autoionization of near-threshold Rydberg states, it also can offer a comparable resolution with the laser bandwidth. This information can be indispensable in understanding a variety of systems.
Photo-induced Rydberg ionization
Photo-induced Rydberg ionization (PIRI) was developed following REMPI experiments on electronic autoionization of low-lying Rydberg states of carbon dioxide. In REMPI photoelectron experiments, it was determined that a two-photon ionic core photoabsorption process (followed by prompt electronic autoionization) could dominate the direct single photon absorption in the ionization of some Rydberg states of carbon dioxide. These sorts of two excited electron systems had already been under study in the atomic physics, but there the experiments involved high order Rydberg states. PIRI works because electronic autoionization can dominate direct photoionization (photoionization). The circularized near-threshold Rydberg state is more likely to undergo a core photoabsorption than to absorb a photon and directly ionize the Rydberg state. PIRI extends the near-threshold spectroscopic techniques to allow access to the electronic states (including dissociative molecular states and other hard to study systems) as well as the vibrational states of molecular ions.
References
Spectroscopy | Rydberg ionization spectroscopy | [
"Physics",
"Chemistry"
] | 1,101 | [
"Instrumental analysis",
"Molecular physics",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
4,119,746 | https://en.wikipedia.org/wiki/Zero-crossing%20rate | The zero-crossing rate (ZCR) is the rate at which a signal changes from positive to zero to negative or from negative to zero to positive. Its value has been widely used in both speech recognition and music information retrieval, being a key feature to classify percussive sounds.
ZCR is defined formally as
zcr = (1/(T − 1)) Σ_{t=1}^{T−1} 1_{ℝ<0}(s_t s_{t−1})
where s is a signal of length T and 1_{ℝ<0} is an indicator function that equals 1 when its argument is negative and 0 otherwise.
In some cases only the "positive-going" or "negative-going" crossings are counted, rather than all the crossings, since between a pair of adjacent positive zero-crossings there must be a single negative zero-crossing.
For monophonic tonal signals, the zero-crossing rate can be used as a primitive pitch detection algorithm. Zero crossing rates are also used for Voice activity detection (VAD), which determines whether human speech is present in an audio segment or not.
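A minimal Python sketch of the definition above, counting sign changes between adjacent samples; the test signal is an arbitrary sine tone chosen for illustration.

import math

def zero_crossing_rate(signal):
    length = len(signal)
    crossings = sum(1 for t in range(1, length) if signal[t] * signal[t - 1] < 0)
    return crossings / (length - 1)

frames = 1000
tone = [math.sin(2 * math.pi * 5 * t / frames + 0.1) for t in range(frames)]  # 5 full cycles
print(zero_crossing_rate(tone))  # 10 sign changes / 999 comparisons ~ 0.01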
See also
Zero crossing
Digital signal processing
References
Signal processing
Rates | Zero-crossing rate | [
"Technology",
"Engineering"
] | 185 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
4,119,862 | https://en.wikipedia.org/wiki/Q-difference%20polynomial | In combinatorial mathematics, the q-difference polynomials or q-harmonic polynomials are a polynomial sequence defined in terms of the q-derivative. They are a generalized type of Brenke polynomial, and generalize the Appell polynomials. See also Sheffer sequence.
Definition
The q-difference polynomials satisfy the relation
(d/dz)_q p_n(z) = (p_n(qz) − p_n(z)) / (qz − z) = [n]_q p_{n−1}(z)
where the derivative symbol on the left is the q-derivative. In the limit of q → 1, this becomes the definition of the Appell polynomials:
(d/dz) p_n(z) = n p_{n−1}(z).
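A small numerical sketch of this defining relation for the monomials p_n(z) = z^n, which satisfy it with the q-bracket [n]_q; the particular values of q, z, and n below are arbitrary.

def q_derivative(f, z, q):
    # Classical q-derivative: (f(qz) - f(z)) / (qz - z)
    return (f(q * z) - f(z)) / ((q - 1) * z)

def q_bracket(n, q):
    # The q-number [n]_q = (1 - q^n) / (1 - q)
    return (1 - q**n) / (1 - q)

q, z, n = 0.7, 1.3, 4
lhs = q_derivative(lambda t: t**n, z, q)   # q-derivative of z^n
rhs = q_bracket(n, q) * z**(n - 1)         # [n]_q * z^(n-1)
print(lhs, rhs)                            # both print the same value (~ 5.56)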
Generating function
The generalized generating function for these polynomials is of the type of generating function for Brenke polynomials, namely
A(w) e_q(zw) = Σ_{n=0}^∞ (p_n(z) / [n]_q!) w^n
where e_q(t) is the q-exponential:
e_q(t) = Σ_{n=0}^∞ t^n / [n]_q! = Σ_{n=0}^∞ t^n (1 − q)^n / (q; q)_n.
Here, [n]_q! is the q-factorial and
(q; q)_n = (1 − q)(1 − q²) ⋯ (1 − q^n)
is the q-Pochhammer symbol. The function A(w) is arbitrary but assumed to have an expansion
A(w) = Σ_{n=0}^∞ a_n w^n.
Any such A(w) gives a sequence of q-difference polynomials.
References
A. Sharma and A. M. Chak, "The basic analogue of a class of polynomials", Riv. Mat. Univ. Parma, 5 (1954) 325–337.
Ralph P. Boas, Jr. and R. Creighton Buck, Polynomial Expansions of Analytic Functions (Second Printing Corrected), (1964) Academic Press Inc., Publishers New York, Springer-Verlag, Berlin. Library of Congress Card Number 63-23263. (Provides a very brief discussion of convergence.)
Q-analogs
Polynomials | Q-difference polynomial | [
"Mathematics"
] | 275 | [
"Polynomials",
"Q-analogs",
"Algebra",
"Combinatorics"
] |
4,120,135 | https://en.wikipedia.org/wiki/Weinstein%20conjecture | In mathematics, the Weinstein conjecture refers to a general existence problem for periodic orbits of Hamiltonian or Reeb vector flows. More specifically, the conjecture claims that on a compact contact manifold, its Reeb vector field should carry at least one periodic orbit.
By definition, a level set of contact type admits a contact form obtained by contracting the Hamiltonian vector field into the symplectic form. In this case, the Hamiltonian flow is a Reeb vector field on that level set. It is a fact that any contact manifold (M,α) can be embedded into a canonical symplectic manifold, called the symplectization of M, such that M is a contact type level set (of a canonically defined Hamiltonian) and the Reeb vector field is a Hamiltonian flow. That is, any contact manifold can be made to satisfy the requirements of the Weinstein conjecture. Since, as is trivial to show, any orbit of a Hamiltonian flow is contained in a level set, the Weinstein conjecture is a statement about contact manifolds.
It has been known that any contact form is isotopic to a form that admits a closed Reeb orbit; for example, for any contact manifold there is a compatible open book decomposition, whose binding is a closed Reeb orbit. This is not enough to prove the Weinstein conjecture, though, because the Weinstein conjecture states that every contact form admits a closed Reeb orbit, while an open book determines a closed Reeb orbit for a form which is only isotopic to the given form.
The conjecture was formulated in 1978 by Alan Weinstein. In several cases, the existence of a periodic orbit was known. For instance, Rabinowitz showed that on star-shaped level sets of a Hamiltonian function on a symplectic manifold, there were always periodic orbits (Weinstein independently proved the special case of convex level sets). Weinstein observed that the hypotheses of several such existence theorems could be subsumed in the condition that the level set be of contact type. (Weinstein's original conjecture included the condition that the first de Rham cohomology group of the level set is trivial; this hypothesis turned out to be unnecessary).
The Weinstein conjecture was first proved for contact hypersurfaces in R^{2n} in 1986 by Viterbo, then extended to cotangent bundles by Hofer–Viterbo and to wider classes of aspherical manifolds by Floer–Hofer–Viterbo. The presence of holomorphic spheres was used by Hofer–Viterbo. All these cases dealt with the situation where the contact manifold is a contact submanifold of a symplectic manifold. A new approach without this assumption was discovered in dimension 3 by Hofer and is at the origin of contact homology.
The Weinstein conjecture has now been proven for all closed 3-dimensional manifolds by Clifford Taubes. The proof uses a variant of Seiberg–Witten Floer homology and pursues a strategy analogous to Taubes' proof that the Seiberg-Witten and Gromov invariants are equivalent on a symplectic four-manifold. In particular, the proof provides a shortcut to the closely related program of proving the Weinstein conjecture by showing that the embedded contact homology of any contact three-manifold is nontrivial.
See also
Seifert conjecture
References
Further reading
Symplectic geometry
Hamiltonian mechanics
Conjectures
Unsolved problems in geometry
Contact geometry | Weinstein conjecture | [
"Physics",
"Mathematics"
] | 713 | [
"Geometry problems",
"Unsolved problems in mathematics",
"Theoretical physics",
"Unsolved problems in geometry",
"Classical mechanics",
"Hamiltonian mechanics",
"Conjectures",
"Mathematical problems",
"Dynamical systems"
] |
4,120,162 | https://en.wikipedia.org/wiki/Pieter%20van%20Musschenbroek | Pieter van Musschenbroek (14 March 1692 – 19 September 1761) was a Dutch scientist. He was a professor in Duisburg, Utrecht, and Leiden, where he held positions in mathematics, philosophy, medicine, and astronomy. He is credited with the invention of the first capacitor in 1746: the Leyden jar. He performed pioneering work on the buckling of compressed struts. Musschenbroek was also one of the first scientists (1729) to provide detailed descriptions of testing machines for tension, compression, and flexure testing. An early example of a problem in dynamic plasticity was described in the 1739 paper (in the form of the penetration of butter by a wooden stick subjected to impact by a wooden sphere).
Early life and studies
Pieter van Musschenbroek was born on 14 March 1692 in Leiden, Holland, Dutch Republic. His father was Johannes van Musschenbroek and his mother was Margaretha van Straaten. The van Musschenbroeks, originally from Flanders, had lived in the city of Leiden since circa 1600. His father was an instrument maker, who made scientific instruments such as air pumps, microscopes, and telescopes.
Van Musschenbroek attended Latin school until 1708, where he studied Greek, Latin, French, English, High German, Italian, and Spanish. He studied medicine at Leiden University and received his doctorate in 1715. He also attended lectures by John Theophilus Desaguliers and Isaac Newton in London. He finished his study in philosophy in 1719.
Musschenbroek belonged to the tradition of Dutch thinkers who popularised the ontological argument of God's design. He is author of Oratio de sapientia divina (Prayer of Divine Wisdom. 1744).
Academic career
Duisburg
In 1719, he became professor of mathematics and philosophy at the University of Duisburg. In 1721, he also became professor of medicine.
Utrecht
In 1723, he left his posts in Duisburg and became professor at the University of Utrecht. In 1726 he also became professor in astronomy. Musschenbroek's Elementa Physica (1726) played an important part in the transmission of Isaac Newton's ideas in physics to Europe. In November 1734 he was elected a Fellow of the Royal Society.
Leiden
In 1739, he returned to Leiden, where he succeeded Jacobus Wittichius as professor.
Already during his studies at Leiden University, van Musschenbroek became interested in electrostatics. At that time, transient electrical energy could be generated by friction machines but there was no way to store it. Musschenbroek and his student Andreas Cunaeus discovered that the energy could be stored, in work that also involved Jean-Nicolas-Sébastien Allamand as collaborator. The apparatus was a glass jar filled with water into which a brass rod had been placed; and the stored energy could be released only by completing an external circuit between the brass rod and another conductor, originally a hand, placed in contact with the outside of the jar. Van Musschenbroek communicated this discovery to René Réaumur in January 1746, and it was Abbé Nollet, the translator of Musschenbroek's letter from Latin, who named the invention the 'Leyden jar'.
Soon afterwards, it transpired that a German scientist, Ewald Georg von Kleist, had independently constructed a similar device in late 1745, shortly before Musschenbroek.
He made a significant contribution to the field of tribology.
In 1754, he became an honorary professor at the Imperial Academy of Science in Saint Petersburg. He was also elected a foreign member of the Royal Swedish Academy of Sciences in 1747.
Van Musschenbroek died on 19 September 1761 in Leiden.
Works
Elementa Physica (1726)
Dissertationes physicae experimentalis et geometricae de magnete (1729)
Tentamina experimentorum naturalium in Accademia del Cimento (1731)
Institutiones physicae (1734)
Beginsels der Natuurkunde, Beschreeven ten dienste der Landgenooten, door Petrus van Musschenbroek, Waar by gevoegd is eene beschryving Der nieuwe en onlangs uitgevonden Luchtpompen, met haar gebruik tot veel proefnemingen (1736 / 1739)
Aeris praestantia in humoribus corporis humani (1739)
Oratio de sapientia divina (1744)
Institutiones physicae conscriptae in usus academicos (in Latin). Lugduni Batavorum : Apud S. Luchtmans et filium, 1748.
Institutiones logicae (1764)
References
External links
Biography by Eugenii Katz
Biography at Adventures in Cybersound
Leiden jar, Leiden University
List of Ph.D. students of Pieter van Musschenbroek
1692 births
1761 deaths
18th-century Dutch scientists
18th-century Dutch astronomers
Leiden University alumni
Academic staff of Utrecht University
Academic staff of Leiden University
Members of the Royal Netherlands Academy of Arts and Sciences
Members of the Royal Swedish Academy of Sciences
Members of the French Academy of Sciences
Fellows of the Royal Society
Honorary members of the Saint Petersburg Academy of Sciences
18th-century Dutch inventors
Tribologists | Pieter van Musschenbroek | [
"Materials_science"
] | 1,103 | [
"Tribology",
"Tribologists"
] |
4,120,345 | https://en.wikipedia.org/wiki/Silver%20center%20cent | The Silver center cent is an American pattern coin produced by the United States Mint in 1792. As a precursor to the large cent it was one of the first coins of the United States and an early example of a bimetallic coin. Only 12 original examples are known to exist, of which one is located in the National Numismatic Collection at the Smithsonian Institution. Two more specimens (Morris and California) exist but contain fabricated plugs added after minting.
Due to their rarity and historical significance Silver center cents are highly prized by collectors with one graded PCGS MS61 being sold in an online auction in April 2012 for .
Origins
During the early years of the American republic, there was a general consensus that the intrinsic bullion value of the new nation's coinage should be approximately equal to its face value. Some merchants would refuse to accept coins that did not meet this standard. For most denominations, bullion parity was achieved by producing the coins in a gold or silver alloy. However, the Coinage Act of 1792 specified that the cent was to consist of 11 pennyweight (264 grains or 17.1 g) of pure copper. Such a weight, needed to maintain intrinsic value, would have been too heavy for practical everyday use.
U.S. Secretary of State Thomas Jefferson suggested an alternative: a coin made of an alloy that was primarily copper, but that included enough silver to give a reasonably-sized coin an intrinsic value of one cent. This billon alloy was considered by the U.S. Mint, but U.S. Treasury Secretary Alexander Hamilton feared that it would be too susceptible to counterfeiting, since its appearance differed little from that of pure copper. In 1792, the Mint's chief coiner, Henry Voigt, hit upon a solution: a copper planchet, slightly smaller than that of a modern quarter, with a small silver "plug" inserted in a center hole during the striking process. The silver plug would have been worth approximately ¢ at contemporary bullion prices, while the copper planchet added an additional ¢ of intrinsic value. Several such coins were produced as test pieces. Ultimately, the additional labor required for these bimetallic coins proved unsuitable for mass production, and the large cent that was produced for circulation starting in 1793 consisted of 208 grains of 100% copper.
Design
The obverse of the silver center cent features a right-hand facing Liberty head with flowing unbound hair. The date appears below the portrait, and the words "LIBERTY PARENT OF SCIENCE & INDUST." are inscribed in a circular pattern around the central devices. The reverse design consists of a wreath with the words "ONE CENT" in the center, and the fraction "1/100" below. Surrounding the wreath, "UNITED STATES OF AMERICA" is inscribed.
Specimens
References
Bi-metallic coins
One-cent coins of the United States
Goddess of Liberty on coins | Silver center cent | [
"Chemistry"
] | 585 | [
"Bi-metallic coins",
"Bimetal"
] |
4,120,578 | https://en.wikipedia.org/wiki/Bisei%20Spaceguard%20Center | The is a spaceguard facility adjacent to the (BAO), an astronomical observatory located at Bisei-chō, Ibara, Okayama Prefecture, Japan. The facility was constructed during 1999–2000, where it since conducts the Bisei Asteroid Tracking Telescope for Rapid Survey or , an astronomical survey that solely tracks asteroids and space debris. BATTeRS has discovered numerous minor planets and the periodic, Halley-type comet and near-Earth object C/2001 W2 (BATTERS).
Space debris, along with defunct spaceships, satellites, and other small objects, can present a hazard to operating spacecraft. The center was built by the Japan Space Forum (JSF) with contributions from the Japanese Ministry of Education, Culture, Sports, Science and Technology, and all expenses of the center are covered by the Japan Aerospace Exploration Agency (JAXA). The telescopes which keep track of any space debris are staffed and operated by members of the Japan Spaceguard Association.
The 1-meter Cassegrain telescope has a field of view of three degrees and there are plans to use a mosaic of ten CCD detectors each one of which will have dimensions of 2096 x 4096 pixels. A 0.5-meter telescope with a field of view of 2 x 2 degrees began operations in February 2000. Once the 1-meter NEO search telescope begins operations, the 0.5-meter telescope will be used to provide follow-up astrometric observations.
The main-belt asteroid 17286 Bisei, discovered by BATTeRS in July 2000, was named after the town where the Bisei Spaceguard Center and the Bisei Astronomical Observatory are located.
List of discovered minor planets
BATTeRS has discovered more than 400 minor planets during its course. As an anomaly, the survey is also credited with the discovery of at Kiso Observatory in 1996, or 4 years before the Bisei Spaceguard Center was constructed. Members of the program include Atsuo Asami, David J. Asher and Syuichi Nakano. Takeshi Urata was also a former member of BATTerS.
See also
Japan Spaceguard Association
References
External links
Official website
BATTeRS (プロジェクト)
Japan Spaceguard Association
Astronomical surveys
Asteroid surveys
Minor-planet discovering observatories
JAXA | Bisei Spaceguard Center | [
"Astronomy"
] | 453 | [
"Astronomical surveys",
"Works about astronomy",
"Astronomical objects"
] |
4,120,631 | https://en.wikipedia.org/wiki/CeNTech | The Center for Nanotechnology is one of the first centers for nanotechnology. It was founded in 2001 and is located in Münster, North Rhine-Westphalia, Germany. It offers many possibilities for research, education, start-ups and companies in nanotechnology. Hence it works together with the University of Münster (WWU), the Max Planck Institute for Molecular Biomedicine and many more research institutions.
External links
CeNTech Homepage
2001 establishments in Germany
Nanotechnology institutions
Research institutes established in 2001
Research institutes in North Rhine-Westphalia
University of Münster | CeNTech | [
"Materials_science"
] | 115 | [
"Nanotechnology",
"Nanotechnology institutions"
] |
4,120,642 | https://en.wikipedia.org/wiki/Q-exponential | In combinatorial mathematics, a q-exponential is a q-analog of the exponential function,
namely the eigenfunction of a q-derivative. There are many q-derivatives, for example, the classical q-derivative, the Askey–Wilson operator, etc. Therefore, unlike the classical exponential, q-exponentials are not unique. For example, the q-exponential $e_q(z)$ defined below corresponds to the classical q-derivative, while other q-exponentials are eigenfunctions of the Askey–Wilson operators.
The q-exponential is also known as the quantum dilogarithm.
Definition
The q-exponential is defined as
$$e_q(z) = \sum_{n=0}^{\infty} \frac{z^n}{[n]_q!} = \sum_{n=0}^{\infty} \frac{z^n (1-q)^n}{(q;q)_n}$$
where $[n]_q!$ is the q-factorial and
$$(q;q)_n = (1-q^n)(1-q^{n-1})\cdots(1-q)$$
is the q-Pochhammer symbol. That this is the q-analog of the exponential follows from the property
$$\left(\frac{d}{dz}\right)_q e_q(z) = e_q(z),$$
where the derivative on the left is the q-derivative. The above is easily verified by considering the q-derivative of the monomial
$$\left(\frac{d}{dz}\right)_q z^n = z^{n-1}\,\frac{1-q^n}{1-q} = [n]_q\, z^{n-1}.$$
Here, $[n]_q$ is the q-bracket.
For other definitions of the q-exponential function, see the references.
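As a numerical illustration (not part of the original article; the helper names q_bracket, q_factorial and q_exp and the truncation parameter terms are illustrative choices), the series can be evaluated by truncating it, using the convention $[n]_q = (1-q^n)/(1-q)$:
def q_bracket(n, q):
    # q-bracket [n]_q = (1 - q**n) / (1 - q), valid for q != 1
    return (1.0 - q**n) / (1.0 - q)

def q_factorial(n, q):
    # q-factorial [n]_q! = [1]_q [2]_q ... [n]_q, with [0]_q! = 1
    result = 1.0
    for k in range(1, n + 1):
        result *= q_bracket(k, q)
    return result

def q_exp(z, q, terms=50):
    # Partial sum of the q-exponential series: sum of z**n / [n]_q! for n < terms
    return sum(z**n / q_factorial(n, q) for n in range(terms))

# As q approaches 1 the q-factorial tends to the ordinary factorial,
# so the value approaches the classical exponential e = 2.718...
print(q_exp(1.0, 0.999))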
Properties
For real $q > 1$, the function $e_q(z)$ is an entire function of $z$. For $q < 1$, $e_q(z)$ is regular in the disk $|z| < 1/(1-q)$.
Note the inverse, .
Addition Formula
The analogue of the classical identity $e^{x+y} = e^x e^y$ does not hold for real numbers $x$ and $y$. However, if these are operators satisfying an appropriate q-commutation relation, then $e_q(x+y) = e_q(x)\,e_q(y)$ holds true.
Relations
For , a function that is closely related is It is a special case of the basic hypergeometric series,
Clearly,
Relation with Dilogarithm
has the following infinite product representation:
On the other hand, holds.
When ,
By taking the limit ,
where is the dilogarithm.
References
Q-analogs
Exponentials | Q-exponential | [
"Mathematics"
] | 332 | [
"E (mathematical constant)",
"Exponentials",
"Q-analogs",
"Combinatorics"
] |
4,120,782 | https://en.wikipedia.org/wiki/Relativistic%20dynamics | For classical dynamics at relativistic speeds, see relativistic mechanics.
Relativistic dynamics refers to a combination of relativistic and quantum concepts to describe the relationships between the motion and properties of a relativistic system and the forces acting on the system. What distinguishes relativistic dynamics from other physical theories is the use of an invariant scalar evolution parameter to monitor the historical evolution of space-time events. In a scale-invariant theory, the strength of particle interactions does not depend on the energy of the particles involved.
Twentieth century experiments showed that the physical description of microscopic and submicroscopic objects moving at or near the speed of light raised questions about such fundamental concepts as space, time, mass, and energy. The theoretical description of the physical phenomena required the integration of concepts from relativity and quantum theory.
Vladimir Fock was the first to propose an evolution parameter theory for describing relativistic quantum phenomena, but the evolution parameter theory introduced by Ernst Stueckelberg is more closely aligned with recent work. Evolution parameter theories were used by Feynman, Schwinger and others to formulate quantum field theory in the late 1940s and early 1950s. Silvan S. Schweber wrote a historical exposition of Feynman's investigation of such a theory. A resurgence of interest in evolution parameter theories began in the 1970s with the work of Horwitz and Piron, and Fanchi and Collins.
Invariant Evolution Parameter Concept
Some researchers view the evolution parameter as a mathematical artifact while others view the parameter as a physically measurable quantity. To understand the role of an evolution parameter and the fundamental difference between the standard theory and evolution parameter theories, it is necessary to review the concept of time.
Time t played the role of a monotonically increasing evolution parameter in classical Newtonian mechanics, as in the force law F = dP/dt for a non-relativistic, classical object with momentum P. To Newton, time was an “arrow” that parameterized the direction of evolution of a system.
Albert Einstein rejected the Newtonian concept and identified t as the fourth coordinate of a space-time four-vector. Einstein's view of time requires a physical equivalence between coordinate time and coordinate space. In this view, time should be a reversible coordinate in the same manner as space. In Feynman diagrams, antiparticles are often depicted as particles moving backward in time, but this is usually regarded as a notational convenience rather than a claim that they actually do so; some, however, take the depiction literally and regard it as evidence for time reversibility.
The development of non-relativistic quantum mechanics in the early twentieth century preserved the Newtonian concept of time in the Schrödinger equation. The ability of non-relativistic quantum mechanics and special relativity to successfully describe observations motivated efforts to extend quantum concepts to the relativistic domain. Physicists had to decide what role time should play in relativistic quantum theory. The role of time was a key difference between Einsteinian and Newtonian views of classical theory. Two hypotheses that were consistent with special relativity were possible:
Hypothesis I
Assume t = Einsteinian time and reject Newtonian time.
Hypothesis II
Introduce two temporal variables:
A coordinate time in the sense of Einstein
An invariant evolution parameter in the sense of Newton
Hypothesis I led to a relativistic probability conservation equation that is essentially a re-statement of the non-relativistic continuity equation. Time in the relativistic probability conservation equation is Einstein's time and is a consequence of implicitly adopting Hypothesis I. By adopting Hypothesis I, the standard paradigm has at its foundation a temporal paradox: motion relative to a single temporal variable must be reversible even though the second law of thermodynamics establishes an “arrow of time” for evolving systems, including relativistic systems. Thus, even though Einstein's time is reversible in the standard theory, the evolution of a system is not time reversal invariant. From the perspective of Hypothesis I, time must be both an irreversible arrow tied to entropy and a reversible coordinate in the Einsteinian sense. The development of relativistic dynamics is motivated in part by the concern that Hypothesis I was too restrictive.
The problems associated with the standard formulation of relativistic quantum mechanics provide a clue to the validity of Hypothesis I. These problems included negative probabilities, hole theory, the Klein paradox, non-covariant expectation values, and so forth. Most of these problems were never solved; they were avoided when quantum field theory (QFT) was adopted as the standard paradigm. The QFT perspective, particularly its formulation by Schwinger, is a subset of the more general Relativistic Dynamics.
Relativistic Dynamics is based on Hypothesis II and employs two temporal variables: a coordinate time, and an evolution parameter. The evolution parameter, or parameterized time, may be viewed as a physically measurable quantity, and a procedure has been presented for designing evolution parameter clocks. By recognizing the existence of a distinct parameterized time and a distinct coordinate time, the conflict between a universal direction of time and a time that may proceed as readily from future to past as from past to future is resolved. The distinction between parameterized time and coordinate time removes ambiguities in the properties associated with the two temporal concepts in Relativistic Dynamics.
See also
Ernst Stueckelberg
References
External links
Relativistic dynamics of stars near a supermassive black hole (2014)
International Association for Relativistic Dynamics (IARD)
Quantum mechanics
Theory of relativity
Theories | Relativistic dynamics | [
"Physics"
] | 1,172 | [
"Theoretical physics",
"Quantum mechanics",
"Theory of relativity"
] |
4,120,803 | https://en.wikipedia.org/wiki/CDC%206000%20series | The CDC 6000 series is a discontinued family of mainframe computers manufactured by Control Data Corporation in the 1960s. It consisted of the CDC 6200, CDC 6300, CDC 6400, CDC 6500, CDC 6600 and CDC 6700 computers, which were all extremely rapid and efficient for their time. Each is a large, solid-state, general-purpose, digital computer that performs scientific and business data processing as well as multiprogramming, multiprocessing, Remote Job Entry, time-sharing, and data management tasks under the control of the operating system called SCOPE (Supervisory Control Of Program Execution). By 1970 there also was a time-sharing oriented operating system named KRONOS. They were part of the first generation of supercomputers. The 6600 was the flagship of Control Data's 6000 series.
Overview
The CDC 6000 series computers are composed of four main functional devices:
the central memory
one or two high-speed central processors
ten peripheral processors (Peripheral Processing Unit, or PPU) and
a display console.
The 6000 series has a distributed architecture.
The family's members differ primarily by the number and kind of central processor(s):
The CDC 6600 is a single CPU with 10 functional units that can operate in parallel, each working on an instruction at the same time.
The CDC 6400 is a single CPU with an identical instruction set, but with a single unified arithmetic function unit that can only do one instruction at a time.
The CDC 6500 is a dual-CPU system with two 6400 central processors
The CDC 6700 is also a dual-CPU system, with a 6600 and a 6400 central processor.
Certain features and nomenclature had also been used in the earlier CDC 3000 series:
Arithmetic was ones complement.
The name COMPASS was used by CDC for the assembly languages on both families.
The name SCOPE was used for its operating system implementations on the 3000 and 6000 series.
The only currently (as of 2018) running CDC 6000 series machine, a 6500, has been restored by Living Computers: Museum + Labs. It was built in 1967 and used by Purdue University until 1989, when it was decommissioned and then given to the Chippewa Falls Museum of Industry and Technology before being purchased by Paul Allen for LCM+L.
History
The first member of the CDC 6000 series was the supercomputer CDC 6600, designed by Seymour Cray and James E. Thornton in Chippewa Falls, Wisconsin. It was introduced in September 1964 and performs up to three million instructions per second, three times faster than the IBM Stretch, the speed champion for the previous couple of years. It remained the fastest machine for five years until the CDC 7600 was launched. The machine is cooled by Freon refrigerant.
Control Data manufactured about 100 machines of this type, selling for $6 to $10 million each.
The next system to be introduced was the CDC 6400, delivered in April 1966. The 6400 central processor is a slower, less expensive implementation with serial processing, rather than the 6600's parallel functional units. All other aspects of the 6400 are identical to the 6600. Then followed a machine with dual 6400-style central processors, the CDC 6500, designed principally by James E. Thornton, in October 1967. And finally, the CDC 6700, with both a 6600-style CPU and a 6400-style CPU, was released in October 1969.
Subsequent special edition options were custom-developed for the series, including:
Attaching a second system configured without a Central Processor (numbered 6416 and identified as "Augmented I/O Buffer and Control) to the first; the combined total effectively was 20 peripheral and control processors with 24 channels, and the purpose was to support additional peripherals and "significantly increase the multiprogramming and batch job processing of the 6000 series." (A 30-PPU, 36 channel 6600 machine was operated by Control Data's Software Research Lab during 1971–1973 as the Minneapolis Cybernet host, but this version was never sold commercially.)
Control Data also marketed a CDC 6400 with a smaller number of peripheral processors:
CDC 6415–7 with seven peripheral processors
CDC 6415–8 with eight peripheral processors
CDC 6415–9 with nine peripheral processors
Hardware
Central memory (CM)
In all the CDC 6000 series computers, the central processor communicates with around seven simultaneously active programs (jobs), which reside in central memory. Instructions from these programs are read into the central processor registers and are executed by the central processor at scheduled intervals. The results are then returned to central memory.
Information is stored in central memory in the form of words. The length of each word is 60 binary digits (bits). The highly efficient address and data control mechanisms involved permit a word to be moved into or out of central memory in as little as 100 nanoseconds.
Extended Core Storage (ECS)
An extended core storage unit (ECS) provides additional memory storage and enhances the powerful computing capabilities of the CDC 6000 series computers. The unit contains interleaved core banks, each one ECS word (488 bits) wide, with a 488-bit buffer for each bank.
While nominally slower than CM, ECS included a buffer (cache) that in some applications gave ECS better performance than CM. However, with a more common reference pattern the CM was still faster.
Central processor
The central processor is the high-speed arithmetic unit that functions as the workhorse of the computer. It performs the addition, subtraction, and logical operations and all of the multiplication, division, incrementing, indexing, and branching instructions for user programs. Note that in the CDC 6000 architecture, the central processing unit performs no input/output (I/O) operations. Input/Output is totally asynchronous, and performed by peripheral processors.
A 6000 series CPU contains 24 operating registers, designated X0–X7, A0–A7, and B0–B7. The eight X registers are each 60 bits long, and used for most data manipulation—both integer and floating point. The eight B registers are 18 bits long, and generally used for indexing and address storage. Register B0 is hard-wired to always return 0. By software convention, register B1 is generally set to 1. (This often allows the use of 15-bit instructions instead of 30-bit instructions.) The eight 18-bit A registers are 'coupled' to their corresponding X registers: setting an address into any of registers A1 through A5 causes a memory load of the contents of that address into the corresponding X registers. Likewise, setting an address into registers A6 and A7 causes a memory store into that location in memory from X6 or X7. Registers A0 and X0 are not coupled in this way, so can be used as scratch registers. However A0 and X0 are used when addressing CDCs Extended Core Storage (ECS).
Instructions are either 15 or 30 bits long, so there can be up to four instructions per 60-bit word. A 60-bit word can contain any combination of 15-bit and 30-bit instructions that fit within the word, but a 30-bit instruction can not wrap to the next word. The op codes are six bits long. The remainder of the instruction is either three three-bit register fields (two operands and one result), or two registers with an 18-bit immediate constant. All instructions are 'register to register'. For example, the following COMPASS (assembly language) code loads two values from memory, performs a 60-bit integer add, then stores the result:
SA1 X SET REGISTER A1 TO ADDRESS OF X; LOADS X1 FROM THAT ADDRESS
SA2 Y SET REGISTER A2 TO ADDRESS OF Y; LOADS X2 FROM THAT ADDRESS
IX6 X1+X2 LONG INTEGER ADD REGISTERS X1 AND X2, RESULT INTO X6
SA6 A1 SET REGISTER A6 TO (A1); STORES X6 TO X; THUS, X += Y
The central processor used in the CDC 6400 series contains a unified arithmetic element which performs one machine instruction at a time. Depending on instruction type, an instruction can take anywhere from five clock cycles for 18-bit integer arithmetic to as many as 68 clock cycles (60-bit population count). The CDC 6500 is identical to the 6400, but includes two identical 6400 CPUs. Thus the CDC 6500 can nearly double the computational throughput of the machine, although the I/O throughput is still limited by the speed of external I/O devices served by the same 10 PPs/12 Channels. Many CDC customers worked on compute-bound problems.
The CDC 6600 computer, like the CDC 6400, has just one central processor. However, its central processor offers much greater efficiency. The processor is divided into 10 individual functional units, each of which was designed for a specific type of operation. All 10 functional units can operate simultaneously, each working on its own operation. The functional units provided are: branch, Boolean, shift, long integer add, floating-point add, floating-point divide, two floating-point multipliers, and two increment (18-bit integer add) units. Functional unit latencies are between three clock cycles for increment add and 29 clock cycles for floating-point divide.
The 6600 processor can issue a new instruction every clock cycle, assuming that various processor (functional unit, register) resources were available. These resources are tracked by a scoreboard mechanism. Also contributing to keeping the issue rate high is an instruction stack, which caches the contents of eight instruction words (32 short instructions or 16 long instructions, or a combination). Small loops can reside entirely within the stack, eliminating memory latency from instruction fetches.
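The flavour of that issue-time resource check can be sketched as follows; this is a deliberately simplified illustration (the names busy_units, pending_results and the two-condition test are assumptions, and the real 6600 scoreboard also tracks operand readiness and result forwarding):
# Simplified sketch of an issue-stage check: an instruction may issue only if
# its functional unit is free and no in-flight instruction already targets the
# same result register (first-order structural and write-after-write test).
busy_units = set()
pending_results = set()

def can_issue(unit, dest_register):
    return unit not in busy_units and dest_register not in pending_results

def issue(unit, dest_register):
    if not can_issue(unit, dest_register):
        return False  # stall this instruction until the resources clear
    busy_units.add(unit)
    pending_results.add(dest_register)
    return True

def complete(unit, dest_register):
    busy_units.discard(unit)
    pending_results.discard(dest_register)

# A second multiply can issue while the first multiplier is busy, mirroring
# the 6600's duplicated multiply units.
issue("multiply-1", "X6")
print(issue("multiply-2", "X7"))  # True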
Both the 6400 and 6600 CPUs have a cycle time of 100 ns (10MHz). Due to the serial nature of the 6400 CPU, its exact speed is heavily dependent on instruction mix, but generally around 1 MIPS. Floating-point additions are fairly fast at 11 clock cycles, however floating-point multiplication is very slow at 57 clock cycles. Thus its floating-point speed will depend heavily on the mix of operations and can be under 200 kFLOPS. The 6600 is faster. With good compiler instruction scheduling, the machine can approach its theoretical peak of 10 MIPS. Floating-point additions take four clock cycles, and floating-point multiplications take 10 clocks (but there are two multiply functional units, so two operations can be processing at the same time.) The 6600 can therefore have a peak floating-point speed of 2-3 MFLOPS.
The CDC 6700 computer combines features of the other three computers. Like the CDC 6500, it has two central processors. One is a CDC 6400/CDC 6500 central processor with the unified arithmetic section; the other is the more efficient CDC 6600 central processor. The combination makes the CDC 6700 the fastest and the most powerful of the CDC 6000 series.
Peripheral processors
The central processor shares access to central memory with up to ten peripheral processors (PPs). Each peripheral processor is an individual computer with its own 1 μs memory of 4K 12-bit words. (They are somewhat similar to CDC 160A minicomputers, sharing the 12-bit word length and portions of the instruction set.)
While the PPs were designed as an interface to the 12 I/O channels, portions of the Chippewa Operating System (COS), and systems derived from it, e.g., SCOPE, MACE, KRONOS, NOS, and NOS/BE, run on the PPs. Only the PPs have access to the channels and can perform input/output: the transfer of information between central memory and peripheral devices such as disks and magnetic tape units. They relieve the central processor of all input/output tasks, so that it can perform calculations while the peripheral processors are engaged in input/output and operating system functions. This feature promotes rapid overall processing of user programs. Much of the operating system ran on the PPs, thus leaving the full power of the Central Processor available for user programs.
Each peripheral processor can add, subtract, and perform logical operations. Special instructions perform data transfer between processor memory and, via the channels, peripheral devices at up to 1 μs per word. The peripheral processors are collectively implemented as a barrel processor. Each executes routines independently of the others. They are a loose predecessor of bus mastering or direct memory access.
Instructions use a six-bit op code, thus leaving six bits for an operand. It is also possible to combine the next word's 12 bits, to form an 18-bit address (the size needed to access the full 131,072 words of Central Memory).
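For illustration only (the helper name pp_long_address and the field names d and m are placeholders, not terminology taken from this article), the 18-bit address can be assembled from the 6-bit field of one word and the 12 bits of the next word like this:
# Combine a 6-bit field (d) with the following 12-bit word (m) into an
# 18-bit value, with d forming the high-order bits.
def pp_long_address(d, m):
    assert 0 <= d < 64 and 0 <= m < 4096
    return (d << 12) | m  # 6 + 12 = 18 bits, range 0 to 262143

print(oct(pp_long_address(0o17, 0o1234)))  # 0o171234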
Data channels
For input or output, each peripheral processor accesses a peripheral device over a communication link called a data channel. One peripheral device can be connected to each data channel; however, a channel can be modified with hardware to service more than one device.
The data channels have no access to either central or peripheral memory, and rely on programs running in a peripheral processor to access memory or to chain operations.
Each peripheral processor can communicate with any peripheral device if another peripheral processor is not using the data channel connected to that device. In other words, only one peripheral processor at a time can use a particular data channel to communicate to a peripheral device. However, a peripheral processor may write data to a channel that a different peripheral processor is reading.
Display console
In addition to communication between peripheral devices and peripheral processors, communication takes place between the computer operator and the operating system. This is made possible by the computer console, which had two CRT screens.
This display console was a significant departure from conventional computer consoles of the time, which contained hundreds of blinking lights and switches for every state bit in the machine. (See front panel for an example.) By comparison, the 6000 series console is an elegant design: simple, fast and reliable.
The console screens are calligraphic, not raster based. Analog circuitry steers the electron beams to draw the individual characters on the screen. One of the peripheral processors runs a dedicated program called "DSD" (Dynamic System Display), which drives the console. Coding in DSD needs to be fast as it needs to continually redraw the screen quickly enough to avoid visible flicker.
DSD displays information about the system and the jobs in process. The console also includes a keyboard through which the operator can enter requests to modify stored programs and display information about jobs in or awaiting execution.
A full-screen editor, called O26 (after the IBM model 026 key punch, with the first character made alphabetic due to operating system restrictions), can be run on the operator console. This text editor appeared in 1967, which made it one of the first full-screen editors. (It took CDC another 15 years to offer FSE, a full-screen editor for normal time-sharing users on CDC's Network Operating System.)
There are also a variety of games that were written using the operator console. These included BAT (a baseball game), KAL (a kaleidoscope), DOG (Snoopy flying his doghouse across the screens), ADC (Andy Capp strutting across the screens), EYE (changes the screens into giant eyeballs, then winks them), PAC (a Pac-Man-like game), a lunar lander simulator, and more.
Minimum configuration
The minimum hardware configuration of a CDC 6000 series computer system consists of the computer itself, including 32,768 words of central memory storage; any combination of disks, disk packs, or drums providing 24 million characters of mass storage; a punched card reader; a card punch; a printer with controllers; and two seven-track magnetic tape units.
Larger systems could be obtained by including optional equipment such as additional central memory, extended core storage (ECS), additional disk or drum units, card readers, punches, printers, and tape units. Graphic plotters and microfilm recorders were also available.
Peripherals
CDC 405 Card Reader - Unit reads 80-column cards at 1200 cards a minute and 51-column cards at 1600 cards per minute. Each tray holds 4000 cards to reduce the rate of required loading.
CDC 6602/6612 Console Display
CDC 6603 Disk System
CDC 606 Magnetic Tape Transports (7-track, IBM compatible)
CDC 626 Magnetic Tape Transports (14-track)
CDC 6671 Communications Multiplexer - supported up to 16 synchronous data connections up to 4800 bit/s each for Remote Job Entry
CDC 6676 Communications Multiplexer - supported up to 64 asynchronous data connections up to 300 bit/s each for timesharing access.
CDC 6682/6683 Satellite Coupler
CDC 6681 Data Channel Converter
Versions
The CDC 6600 was the flagship. The CDC 6400 was a slower, lower-performance CPU that cost significantly less.
The CDC 6500 was a dual CPU 6400, with two CPUs but only one set of I/O PPs, designed for computation-bound problems. The CDC 6700 was also a dual CPU machine, which had one 6600 CPU and one 6400 CPU. The CDC 6415 was an even cheaper and slower machine; it had a 6400 CPU but was available with only seven, eight, or nine PPUs instead of the normal ten. The CDC 6416 was an upgrade that could be added to a 6000 series machine; it added an extra PPU bank, giving a total of 20 PPUs and 24 channels, designed for significantly improved I/O performance.
The 6600
The CDC 6600 is the flagship mainframe supercomputer of the 6000 series of computer systems manufactured by Control Data Corporation.
Generally considered to be the first successful supercomputer, it outperformed its fastest predecessor, the IBM 7030 Stretch, by a factor of three. With performance of up to three megaFLOPS, the CDC 6600, of which about 100 were sold, was the world's fastest computer from 1964 to 1969, when it relinquished that status to its successor, the CDC 7600.
The CDC 6600 anticipated the RISC design philosophy and, unusually, employed a ones'-complement representation of integers. Its successors would continue the architectural tradition for more than 30 years until the late 1980s, and were the last chips designed with ones'-complement integers.
The CDC 6600 was also the first widespread computer to include a load–store architecture, with the writing to its address registers triggering memory load or store of data from its data registers.
The first CDC 6600s were delivered in 1965 to the Livermore and Los Alamos National Labs (managed by the University of California). Serial #4 went to the Courant Institute of Mathematical Sciences at NYU in Greenwich Village, New York City. The first delivery outside the US went to the CERN laboratory near Geneva, Switzerland, where it was used to analyse the two to three million photographs of bubble-chamber tracks that CERN experiments were producing every year. In 1966 another CDC 6600 was delivered to the Lawrence Radiation Laboratory, part of the University of California at Berkeley, where it was used for the analysis of nuclear events photographed inside the Alvarez bubble chamber. The University of Texas at Austin had one delivered for its Computer Science and Mathematics Departments, installed underground on its main campus, tucked into a hillside with one side exposed for cooling efficiency.
A CDC 6600 is on display at the Computer History Museum in Mountain View, California.
The 6400
The CDC 6400, a member of the CDC 6000 series, is a mainframe computer made by Control Data Corporation in the 1960s. The central processing unit was architecturally compatible with the CDC 6600. In contrast to the 6600, which had 10 parallel functional units which could work on multiple instructions at the same time, the 6400 had a unified arithmetic unit, which could only work on a single instruction at a time. This resulted in a slower, lower-performance CPU, but one that cost significantly less. Memory, peripheral processor-based input/output (I/O), and peripherals were otherwise identical to the 6600.
At UC Berkeley, a CDC 6400 system was put into operation as an academic computing system in December 1966 and remained in service until August 1982.
In 1966, the Computing Center of RWTH Aachen University acquired a CDC 6400, the first Control Data supercomputer in Germany and the second in Europe after the European Organization for Nuclear Research (CERN). It served the entire university, including through 64 remote teletype (TTY) lines, until it was replaced by a CDC Cyber 175 computer in 1976.
Dual CPU systems
The 6500
The CDC 6500, which features a dual CPU 6400, is the third supercomputer in the 6000 series manufactured by the Control Data Corporation and designed by supercomputer pioneer Seymour Cray. The first 6500 was announced in 1964 and was delivered in 1967.
It includes twelve independent computers. Ten are peripheral and control processors, each of which has a separate memory and can run programs separately from each other and from the two 6400 central processors. Instead of being air-cooled, it has a liquid refrigeration system, and each of the three bays of the computer has its own cooling unit.
CDC 6500 systems were installed at:
Purdue University - installed in 1967 at the oldest Computer Science department in the country, established in 1962.
Michigan State University - bought in 1968, meant to replace its CDC 3600, and it was the only academic mainframe on campus.
CERN - upgraded from a 6400 to a 6500 in April 1969.
the technical lab at the Patrick Air Force Base in 1978.
the Laboratory of Computing Techniques and Automation in the Joint Institute for Nuclear Research (USSR) - originally bought CDC 6200 in 1972, later upgraded to 6500, retired in 1995
University of Colorado Boulder
The 6700
Composed of a 6600 and a 6400, the CDC 6700 was the most powerful of the 6000 series.
See also
CDC Cyber - contained the successors to the 6000 series computers
Notes
References
CONTROL DATA 6400/6500/6600 Computer Systems Reference Manual, Publication No. 60100000 D, 1967
CONTROL DATA 6400/6500/6600/6700 Computer Systems, SCOPE 3.3 User's Guide, Publication No. 60252700 A, 1970
CONTROL DATA 6400/6500/6600/6700 Computer Systems, SCOPE Reference Manual, Publication No. 60305200, 1971
Computer history on CDC 6600
Gordon Bell on CDC computers
External links
Neil R. Lincoln with 18 Control Data Corporation (CDC) engineers on computer architecture and design, Charles Babbage Institute, University of Minnesota. Engineers include Robert Moe, Wayne Specker, Dennis Grinna, Tom Rowan, Maurice Hutson, Curt Alexander, Don Pagelkopf, Maris Bergmanis, Dolan Toth, Chuck Hawley, Larry Krueger, Mike Pavlov, Dave Resnick, Howard Krohn, Bill Bhend, Kent Steiner, Raymon Kort, and Neil R. Lincoln. Discussion topics include CDC 1604, CDC 6600, CDC 7600, and Seymour Cray.
CONTROL DATA 6400/6500/6600 COMPUTER SYSTEMS Reference Manual
2016 GeekWire article Resurrected! Paul Allen's tech team brings 50-year-old supercomputer back from the dead
2013 GeekWire article on the restoration of a CDC 6500 at the LCM.
Request a login to the working CDC 6500 at Living Computers: Museum + Labs, one of the computers online at Paul Allen's collection of timesharing and interactive computers.
6000 series
Supercomputers
60-bit computers
12-bit computers
Control Data Corporation mainframe computers
Transistorized computers
Control Data Corporation
Computer-related introductions in 1964 | CDC 6000 series | [
"Technology"
] | 4,918 | [
"Supercomputers",
"Supercomputing"
] |
4,121,234 | https://en.wikipedia.org/wiki/Science%20in%20Action%20%28book%29 | Science in Action: How to Follow Scientists and Engineers Through Society () is a seminal book by French philosopher, anthropologist and sociologist Bruno Latour first published in 1987. It is written in a textbook style, proposes an approach to the empirical study of science and technology, and is considered a canonical application of actor-network theory. It also entertains ontological conceptions and theoretical discussions making it a research monograph and not a methodological handbook per se.
In the introduction, Latour develops the methodological dictum that science and technology must be studied "in action", or "in the making". Because scientific discoveries become esoteric and difficult to understand once settled, science has to be studied where discoveries are made in practice. For example, Latour turns back time in the case of the discovery of the "double helix". Going back in time, deconstructing statements, machines and articles, it is possible to arrive at a point where the scientific discovery could have taken many other directions (contingency). The concept of the "black box" is also introduced. A black box is a metaphor borrowed from cybernetics denoting a piece of machinery that "runs by itself". That is, when a series of instructions is too complicated to be repeated all the time, a black box is drawn around it, allowing it to function only by giving it "input" and "output" data. For example, a CPU inside a computer is a black box: its inner complexity does not have to be known; one only needs to use it in daily activities.
Henning Schmidgen describes Science in Action as an anthropology of science, a manual where the main purpose is “a trip through the unfamiliar territory of “technoscience””. Similarly Science in Action has been described as "A guide that explains how to account for processes of making knowledge, facts, or truths. A guide designed to be used on site, while observing the negotiations and struggles that precede ready-made science".
Criticism
Latour's work, including Science in Action, has been extremely influential in the field of science and technology studies, having been taught at preeminent institutions such as the Massachusetts Institute of Technology. However, there were some critics, such as Olga Amsterdamska, who stated in a book review: "Somehow, the ideal of a social science whose only goal is to tell inconsistent, false, and incoherent stories about nothing in particular does not strike me as very appealing or sufficiently ambitious." Despite this harsh rejoinder, her criticism had little impact on the field.
See also
Laboratory Life (with Steve Woolgar)
Politics of Nature
We Have Never Been Modern
References
1987 non-fiction books
Science books
Sociology of scientific knowledge
Harvard University Press books
Works by Bruno Latour
Science and technology studies works
Books in philosophy of technology | Science in Action (book) | [
"Technology"
] | 577 | [
"Science and technology studies works",
"Science and technology studies"
] |
4,121,300 | https://en.wikipedia.org/wiki/Refuse-derived%20fuel | Refuse-derived fuel (RDF) is a fuel produced from various types of waste such as municipal solid waste (MSW), industrial waste or commercial waste.
The World Business Council for Sustainable Development provides a definition:
"Selected waste and by-products with recoverable calorific value can be used as fuels in a cement kiln, replacing a portion of conventional fossil fuels, like coal, if they meet strict specifications. Sometimes they can only be used after pre-processing to provide ‘tailor-made’ fuels for the cement process".
RDF consists largely of the combustible components of such waste, such as non-recyclable plastics (not including PVC), paper, cardboard, labels, and other corrugated materials. These fractions are separated by different processing steps, such as screening, air classification, ballistic separation, separation of ferrous and non-ferrous metals, glass, stones and other foreign materials, and shredding into a uniform grain size, or pelletizing, in order to produce a homogeneous material which can be used as a substitute for fossil fuels in, for example, cement plants, lime plants and coal-fired power plants, or as a reduction agent in steel furnaces. If documented according to CEN/TC 343 it can be labelled as solid recovered fuel (SRF).
Other terms are also used to describe such fuels, such as:
Secondary fuels
Substitute fuels
“AF“ as an abbreviation for alternative fuels
Ultimately most of the designations are only general paraphrases for alternative fuels which are either waste-derived or biomass-derived.
There is no universally agreed classification or specification for such materials. Even legislative authorities have not yet established exact guidelines on the type and composition of alternative fuels. The first approaches towards classification or specification are to be found in Germany (Bundesgütegemeinschaft für Sekundärbrennstoffe) as well as at European level (European Recovered Fuel Organisation). These approaches, initiated primarily by the producers of alternative fuels, follow a sound principle: only through an exactly defined standardisation of the composition of such materials can both production and utilisation be uniform worldwide.
First approaches towards alternative fuel classification:
Solid recovered fuels are a subset of RDF in that they are produced to meet a standard such as CEN/343 ANAS. A comprehensive review is now available on SRF / RDF production, quality standards and thermal recovery, including statistics on European SRF quality.
History
In the 1950s tyres were used for the first time as refuse derived fuel in the cement industry. Continuous use of various waste-derived alternative fuels then followed in the mid-1980s with “Brennstoff aus Müll“ (BRAM) – fuel from waste – in the Westphalian cement industry in Germany.
At that time the main motivation was cost reduction through the replacement of fossil fuels, as considerable competitive pressure weighed on the industry. Since the eighties the German Cement Works Association (Verein Deutscher Zementwerke e.V. (VDZ, Düsseldorf)) has been documenting the use of alternative fuels in the federal German cement industry. In 1987 less than 5% of fossil fuels were replaced by refuse-derived fuels; by 2015 the share had increased to almost 62%.
Refuse-derived fuels are used in a wide range of specialized waste-to-energy facilities, which use processed refuse-derived fuels with lower (net) calorific values of 8–14 MJ/kg and grain sizes of up to 500 mm to produce electricity and thermal energy (heat/steam) for district heating systems or industrial uses.
Processing
Materials such as glass and metals are removed during treatment processing since they are non-combustible. The metal is removed using a magnet and the glass using mechanical screening. After that, an air knife is used to separate the light materials from the heavy ones. The light materials have a higher calorific value and make up the final RDF. The heavy materials usually continue to a landfill. The residual material can be sold in its processed form (depending on the process treatment) as a plain mixture, or it may be compressed into pellet fuel, bricks or logs and used for other purposes, either stand-alone or in a recursive recycling process. RDF or SRF is the combustible sub-fraction of municipal solid waste and other similar solid waste, produced using a mix of mechanical and/or biological treatment methods, such as biodrying in mechanical-biological treatment (MBT) plants. During the production of RDF / SRF in MBT plants there are solid losses of otherwise combustible material, which generates debate over whether the production and use of RDF / SRF is more resource efficient than traditional one-step combustion of residual MSW in incineration (energy-from-waste) plants.
In the process of making RDF pellets from shredded SRF, drying is often required. Typically, the moisture content needs to be reduced to below 20% to produce high-calorific, high-density RDF pellets. Drying RDF often requires a substantial amount of energy, so choosing an inexpensive heat source is preferable.
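As a rough, purely illustrative mass balance (the input figures and the latent-heat-only energy estimate below are assumptions, not data from this article), the water that must be evaporated to bring shredded material down to 20% moisture can be estimated as follows:
# Water to remove per batch of wet material to reach a target moisture
# content, plus the minimum evaporation energy (latent heat only, ignoring
# sensible heat and dryer losses).
LATENT_HEAT_MJ_PER_KG = 2.26  # approximate latent heat of vaporisation of water

def water_to_remove_kg(wet_mass_kg, moisture_in, moisture_out):
    dry_solids = wet_mass_kg * (1.0 - moisture_in)
    final_mass = dry_solids / (1.0 - moisture_out)
    return wet_mass_kg - final_mass

water = water_to_remove_kg(1000.0, 0.35, 0.20)  # 1 tonne at 35% moisture -> 20%
energy = water * LATENT_HEAT_MJ_PER_KG
print(round(water), "kg of water,", round(energy), "MJ minimum")  # ~188 kg, ~424 MJ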
The production of RDF may involve the following steps:
Bag splitting/Shredding
Manual sorting (typically to remove inerts, PVC and/or other unwanted objects)
Size screening
Magnetic separation
Eddy current separation (non-magnetic metals)
Air classifier (density separation)
Coarse shredding
Refining separation by infrared separation
Drying
Pelletizing
Mixing/homogenization
End markets
RDF can be used in a variety of ways to produce electricity or as a replacement for fossil fuels. It can be used alongside traditional sources of fuel in coal power plants. In Europe RDF can be used in the cement kiln industry, where the strict air pollution control standards of the Waste Incineration Directive apply. The main limiting factor for RDF / SRF use in cement kilns is its total chlorine (Cl) content, with the mean Cl content of average commercially manufactured SRF being 0.76% w/w on a dry basis (± 0.14% w/wd, 95% confidence). RDF can also be fed into plasma arc gasification modules and pyrolysis plants. Where the RDF is capable of being combusted cleanly or in compliance with the Kyoto Protocol, it can provide a funding source where unused carbon credits are sold on the open market via a carbon exchange. However, the use of municipal waste contracts and the bankability of these solutions is still a relatively new concept, so RDF's financial advantage may be debatable. The European market for the production of RDF has grown fast due to the European Landfill Directive and the imposition of landfill taxes. Refuse-derived fuel (RDF) exports from the UK to Europe and beyond are expected to have reached 3.3 million tonnes in 2015, representing a near-500,000-tonne increase on the previous year.
Measurement of RDF and SRF properties: biogenic content
The biomass fraction of RDF and SRF has a monetary value under multiple greenhouse gas protocols, such as the European Union Emissions Trading Scheme and the Renewable Obligation Certificate program in the United Kingdom. Biomass is considered to be carbon-neutral since the carbon dioxide liberated from the combustion of biomass is recycled in plants. The combusted biomass fraction of RDF/SRF is used by stationary combustion operators to reduce their overall reported emissions.
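A minimal sketch of that accounting step is shown below (an illustration only; the function name and the example figures are assumptions, not a prescribed reporting method):
# Split total measured stack CO2 into a biogenic (carbon-neutral) share and a
# fossil share that remains reportable under a greenhouse gas protocol.
def reportable_fossil_co2(total_co2_tonnes, biogenic_fraction):
    return total_co2_tonnes * (1.0 - biogenic_fraction)

# Example: 100,000 t of CO2 with 55% of the carbon measured as biogenic
print(reportable_fossil_co2(100_000, 0.55))  # 45000.0 t reported as fossil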
Several methods have been developed by the European CEN 343 working group to determine the biomass fraction of RDF/SRF. The initial two methods developed (CEN/TS 15440) were the manual sorting method and the selective dissolution method; a comparative assessment of these two methods is available. An alternative, but more expensive method was developed using the principles of radiocarbon dating. A technical review (CEN/TR 15591:2007) outlining the carbon-14 method was published in 2007, and a technical standard of the carbon dating method (CEN/TS 15747:2008) was published in 2008. In the United States, there is already an equivalent carbon-14 method under the standard method ASTM D6866.
Although carbon-14 dating can determine the biomass fraction of RDF/SRF, it cannot directly determine the biomass calorific value. Determining the calorific value is important for green certificate programs such as the Renewable Obligation Certificate program. These programs award certificates based on the energy produced from biomass. Several research papers, including one commissioned by the Renewable Energy Association in the UK, have been published that demonstrate how the carbon-14 result can be used to calculate the biomass calorific value.
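One way such a calculation might be structured is sketched below; this is an illustration under stated assumptions (the carbon-14 result gives the biogenic share of the fuel's carbon, and representative net calorific values and carbon contents are assigned to the biogenic and fossil fractions), and the function name and default values are hypothetical rather than taken from the cited papers:
# Illustrative estimate of the share of released energy attributable to
# biomass, given the biogenic carbon fraction from carbon-14 analysis and
# assumed net calorific values (MJ/kg) and carbon contents of both fractions.
def biomass_energy_fraction(biogenic_carbon_fraction,
                            ncv_bio=15.0, carbon_bio=0.45,
                            ncv_fossil=35.0, carbon_fossil=0.75):
    # Convert the carbon split into a relative mass split, then weight by
    # the calorific value of each fraction.
    mass_bio = biogenic_carbon_fraction / carbon_bio
    mass_fossil = (1.0 - biogenic_carbon_fraction) / carbon_fossil
    energy_bio = mass_bio * ncv_bio
    energy_fossil = mass_fossil * ncv_fossil
    return energy_bio / (energy_bio + energy_fossil)

# Example: 60% of the fuel carbon measured as biogenic
print(round(biomass_energy_fraction(0.60), 2))  # about 0.52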
Quality assurance of RDF and SRF properties: representative laboratory sub-sampling
There are major challenges related to quality assurance, and especially to the accurate determination of RDF / SRF thermal recovery (combustion) properties, due to their inherently variable (heterogeneous) composition. Recent advances enable optimal sub-sampling schemes for getting from an RDF / SRF sample of, say, 1 kg down to the gram or milligram quantities tested in analytical devices such as bomb calorimetry or TGA. With such solutions representative sub-sampling can be secured, though less so for the chlorine content. The new evidence suggests that the theory of sampling (ToS) may be overestimating the processing effort needed to obtain a representative sub-sample.
Regional use
Campania
In 2009, in response to the Naples waste management issue in Campania, Italy, the Acerra incineration facility was completed at a cost of over €350 million. The incinerator burns 600,000 tons of waste per year. The energy produced from the facility is enough to power 200,000 households per year.
Iowa
The first full-scale waste-to-energy facility in the US was the Arnold O. Chantland Resource Recovery Plant, built in 1975 located in Ames, Iowa. This plant also produces RDF that is sent to a local power plant for supplemental fuel.
Manchester
The city of Manchester, in the north west of England, is in the process of awarding a contract for the use of RDF which will be produced by proposed mechanical biological treatment facilities as part of a huge PFI contract. The Greater Manchester Waste Disposal Authority has recently announced there is significant market interest in initial bids for the use of RDF which is projected to be produced in tonnages up to 900,000 tonnes per annum.
Bollnäs
In spring 2008, Bollnäs Ovanåkers Renhållnings AB (BORAB) in Sweden started its new waste-to-energy plant. Municipal solid waste as well as industrial waste is turned into refuse-derived fuel. The 70,000–80,000 tonnes of RDF produced per annum are used to fuel the nearby BFB plant, which provides the citizens of Bollnäs with electricity and district heating.
Israel
In late March 2017, Israel launched its own RDF plant at the Hiriya Recycling Park, which will take in about 1,500 tonnes of household waste daily, amounting to around half a million tonnes of waste each year, with an estimated production of 500 tonnes of RDF per day. The plant is part of Israel's "diligent effort to improve and advance waste management in Israel."
United Arab Emirates
In October 2018, the UAE's Ministry of Climate Change and Environment signed a concession agreement with Emirates RDF (BESIX, Tech Group Eco Single Owner, Griffin Refineries) to develop and operate a RDF facility in the Emirate of Umm Al Quwain. The facility will receive 1,000 tons per day of household waste and convert the waste of 550,000 residents from the emirates of Ajman and Umm Al Quwain into RDF. RDF will be used in cement factories to partially replace the traditional use of gas or coal.
See also
Biodrying
Cement kiln
Heavy metals
Isle of Wight gasification facility
Mechanical biological treatment
Mechanical heat treatment
Open burning of waste
Waste-to-energy
References
Incineration
Mechanical biological treatment
Waste treatment technology | Refuse-derived fuel | [
"Chemistry",
"Engineering"
] | 2,465 | [
"Water treatment",
"Combustion engineering",
"Incineration",
"Environmental engineering",
"Waste treatment technology"
] |
4,121,452 | https://en.wikipedia.org/wiki/Politics%20of%20Nature | Politics of Nature: How to Bring the Sciences Into Democracy (2004, ) is a book by the French theorist and philosopher of science Bruno Latour. The book is an English translation by Catherine Porter of the French book, Politiques de la nature. It is published by Harvard University Press.
Overview
In the book, Latour argues for a new and better take on political ecology (not the discipline but the ecological political movements, e.g. greens) that embraces his feeling that, "political ecology has nothing to do with nature". In fact, Latour argues that the idea of nature is unfair because it unfairly allows those engaged in political discourse to "short-circuit" discussions. Latour uses Plato's metaphor of "the cave" to describe the current role of nature and science in separating facts from values which is the role of politics and non-scientists. Building on the arguments levelled in his previous works, Latour argues that this distinction between facts and values is rarely useful and in many situations dangerous. He claims that it leads to a system that ignores nature's socially constructed status and creates a political order without "due process of individual will".
Instead, he calls for a "new Constitution" where different individuals can assemble democratically without the definitions of facts and values influenced by current attitudes towards nature and scientific knowledge. Latour describes an alternate set of rules by which this assembly, or collective as he calls it, might come together and be constituted. He also describes the way that entities will be allowed in or out in the future. In describing this collective, Latour draws attention to the role of the spokesperson, who must be doubted but who must speak for otherwise mute things in order to ensure that the collective involves both "humans and non-humans". This is also an important aspect of Actor-network theory (ANT) that can be found in his main sociological works.
The book includes a short summary at the end and a glossary of terms.
Reviews of the book
Sal Restivo emphasises that the book is reproducing the insights from Science Studies, which Bruno Latour himself has greatly contributed to. However, Sal Restivo questions whether Latour understood social constructivism and what sociologists actually do.
See also
Laboratory life (with Steve Woolgar)
Science in Action (book)
Aramis, or the Love of Technology
We Have Never Been Modern
References
External links
Introduction on Latour's website
2004 non-fiction books
Political books
Sociology of scientific knowledge
Science and technology studies works
Works by Bruno Latour
Books in philosophy of technology | Politics of Nature | [
"Technology"
] | 521 | [
"Science and technology studies works",
"Science and technology studies"
] |
4,121,573 | https://en.wikipedia.org/wiki/Adinazolam | Adinazolam (marketed under the brand name Deracyn) is a tranquilizer of the triazolobenzodiazepine (TBZD) class, which are benzodiazepines (BZDs) fused with a triazole ring. It possesses anxiolytic, anticonvulsant, sedative, and antidepressant properties. Adinazolam was developed by Jackson B. Hester, who was seeking to enhance the antidepressant properties of alprazolam, which he also developed. Adinazolam was never FDA approved and never made available to the public market; however, it has been sold as a designer drug.
Chemical Information
Reactivity
Adinazolam contains multiple reactive parts in its structure. The first is the dimethylamine, which is mildly basic with a pKa of 6.30, so that over 5% of the compound is protonated at physiological pH. The tertiary amine could also be important in protein binding through its ability to form hydrogen bonds, and it is also a likely target for metabolism via demethylation. The dimethylamine is also labile to oxidative decomposition, resulting in the loss of one methyl group and forming N-desmethyladinazolam. Loss of the entire dimethyl methanamine group is also possible via oxidative decomposition, forming estazolam. The second reactive group is the nitrogen in the 4-position. With a pKa of 5.09, it is only protonated at pH levels lower than physiological. After protonation the group is labile to hydration, which results in the opening of the diazepine ring. Afterwards, ethylamine is cleaved or the ring is closed again, resulting in the other structures.
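As a worked illustration (assuming a physiological pH of 7.4 and treating the stated pKa of 6.30 as that of the conjugate acid; the function name is a placeholder), the Henderson-Hasselbalch relation reproduces the protonated fraction mentioned above:
# Fraction of the basic dimethylamine present in its protonated (conjugate
# acid) form at a given pH, from the Henderson-Hasselbalch relation.
def fraction_protonated(pka, ph):
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

print(round(fraction_protonated(6.30, 7.4), 3))  # about 0.074, i.e. over 7%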
Synthesis
One logical way to synthesize adinazolam is by reacting benzodiazepine precursors. One route followed by Hester et al. starts from 7-chloro-2-hydrazineyl-5-phenyl-3H-benzo[e][1,4]diazepine. First, N-phthalimidoyl-β-alanine is formed in situ from β-alanine with phthalic anhydride. The solution is cooled and treated with carbonyldiimidazole. Then 7-chloro-2-hydrazineyl-5-phenyl-3H-benzo[e][1,4]diazepine is added to the solution, which is left to react at room temperature for 18 h. After the procedure, the ethyl acetate solvate is formed, resulting in 2-(2-(8-chloro-6-phenyl-4H-benzo[f][1,2,4]triazolo[4,3-a][1,4]diazepin-1-yl)ethyl)isoindoline-1,3-dione. Subsequent treatment of this compound with a solution of 88% formic acid and 37% aq. formaldehyde (3:2 mol/mol) at 100 °C for 1 h under nitrogen results in the formation of 2-(2-(4-(2-benzoyl-4-chlorophenyl)-5-((dimethylamino)methyl)-4H-1,2,4-triazol-3-yl)ethyl)isoindoline-1,3-dione. The diazepine ring is cleaved during this step and the phthalate is transferred. In the last step, the diazepine ring is re-formed using hydrazine hydrate at 70 °C for 1 hour and 30 minutes under nitrogen, forming adinazolam.
Another route to synthesise adinazolam is via estazolam, as performed by Gall et al. During the synthesis bis-(dimethylamino) methane is dissolved in DMF and cooled to 0 °C. The solution is then treated with acetyl chloride in DMF forming a dimethyl(methylene) ammonium chloride salt. K2CO3 is then added to the solution, followed by a solution of estazolam in DMF. The mixture is then heated to 60 °C for 3 hours resulting in adinazolam after a workup.
Use and Purpose
Adinazolam is primarily used for its anxiolytic properties. As a benzodiazepine derivative (see the pharmacodynamics section below), adinazolam acts on GABA receptors in the central nervous system, promoting the inhibitory effects of GABA. This results in a calming effect, making it suitable for the treatment of anxiety disorders and panic disorders, and as an antidepressant.
In a study by Amsterdam et al., 43 outpatients meeting Research Diagnostic Criteria for major depression were randomized to receive either adinazolam or imipramine. Medication dosages were adjusted based on tolerability and needs, with weekly clinical ratings and evaluations using various scales.
The study found that adinazolam was as effective as imipramine in treating major depression, with similar efficacy in melancholic depression. Adinazolam did have some side effects, such as drowsiness/sedation, dry mouth, constipation, blurred vision, nausea/vomiting/diarrhea, nervousness, and headaches; however, all of these were less frequent than with imipramine, except for drowsiness/sedation. The study suggests that adinazolam could be a promising alternative with potential therapeutic benefits, but further research is needed to clarify its clinical profile and safety.
Availability
Adinazolam was developed and tested as an antidepressant in the 1980s and 1990s but experienced a decline in research and testing following the initial evaluation period. There is limited information available regarding any further exploration of its pharmaceutical properties. The FDA’s rejection of adinazolam in the 1990s led to its absence from mainstream medical use.
After this rejection, adinazolam reemerged around 2015, being used in the market as a designer drug. This change in status and function raises questions about its application in non-medical contexts. Designer drugs often present challenges for regulatory bodies due to their modified chemical compositions and susceptibility to misuse.
Side effects
When using adinazolam, individuals may experience various side effects, both in the short- and long term. Initially, common side effects in the short term may include drowsiness, sedation, and mild cognitive impairment.
Overdose symptoms may include muscle weakness, ataxia, dysarthria and, particularly in children, paradoxical excitement; diminished reflexes, confusion and coma may ensue in more severe cases.
A human study comparing the subjective effects and abuse potential of adinazolam (30 mg and 50 mg) with diazepam, lorazepam and a placebo showed that adinazolam causes the most "mental and physical sedation" and the greatest "mental unpleasantness".
In the long term, prolonged use of adinazolam could result in the development of tolerance, where higher doses are required to achieve the same therapeutic effects. Consequently, increasing doses may heighten the risk of adverse effects and potential complications. Extended use of adinazolam also carries the possibility of dependence, where individuals may become psychologically and physically reliant on the medication to manage anxiety symptoms. Dependence poses significant challenges, as rapid dose reduction can trigger withdrawal symptoms, ranging from rebound anxiety and insomnia to more severe manifestations like seizures. Furthermore, the prolonged use of adinazolam may contribute to cognitive impairment, impacting memory, concentration, and overall cognitive function.
Pharmacodynamics and pharmacokinetics
Adinazolam acts largely as a pro-drug for the metabolite N-demethyl-adinazolam (NDMAD), which is the main active metabolite in humans. However, adinazolam and its other metabolites, di-N-demethyl-adinazolam, ⍺-hydroxy-alprazolam, and estazolam, are active compounds by themselves. They act on the central nervous system (CNS) as positive allosteric modulators of (central) benzodiazepine receptors (BzR), which are a subset of the GABAA receptor. Adinazolam has a high affinity for the GABAA receptor; however, its metabolites are 20–40 times more potent in inhibiting the binding of [3H]flunitrazepam (the radiolabel used).
The GABAA receptor responds to the release of γ-aminobutyric acid (GABA), which is the main inhibitory neurotransmitter in the brain and plays an important role in modulating the activity of neurons. The GABAA receptor is a protein complex located in the synapses; this protein is a ligand-gated ion channel (an ionotropic receptor) that conducts chloride ions across neuronal cell membranes. The complex consists of five subunits: two ⍺, two β, and one γ. GABA binds to the interface between the ⍺ and β subunits (two binding sites), while benzodiazepines bind to the interface of the ⍺ and γ subunits; however, benzodiazepine binding is only possible in the presence of a histidine residue in the ⍺ varieties ⍺1, ⍺2, ⍺3 and ⍺5, which are called benzodiazepine receptors. Adinazolam, like other benzodiazepines, acts as an agonist by inducing a conformational change in the receptor which increases its affinity toward GABA, in turn reducing neuronal activity. This reduction in neuronal activity explains the observed clinical effects. The different pharmacological properties of benzodiazepines can be attributed to the variety of ⍺ subunits: the ⍺1 subunit is required for sedative, anterograde amnesic and anticonvulsant actions; the ⍺2 subunit mediates anxiolytic effects; and myorelaxant actions are mediated by GABAA receptors containing ⍺2, ⍺3 and ⍺5 subunits.
A neuron fires when its membrane potential, which is negative at rest, is increased (depolarized) until a threshold is reached; at that point voltage-gated sodium channels open, allowing sodium to rush into the cell and trigger an action potential. The binding of GABA to the GABAA receptor counteracts this by letting chloride ions into the cell, decreasing (hyperpolarizing) the membrane potential. The binding of adinazolam or other benzodiazepines increases this influx of chloride ions, thereby increasing the hyperpolarization of the membrane potential.
Metabolism
Adinazolam was reported to have active metabolites in the August 1984 issue of The Journal of Pharmacy and Pharmacology. The main metabolite is N-desmethyladinazolam. NDMAD has an approximately 25-fold higher affinity for benzodiazepine receptors than its precursor, accounting for the benzodiazepine-like effects after oral administration. Multiple N-dealkylations lead to the removal of the dimethylaminomethyl side chain, which accounts for the difference in potency. The other two metabolites are alpha-hydroxyalprazolam and estazolam. In the August 1986 issue of that same journal, Sethy, Francis and Day reported that proadifen inhibited the formation of N-desmethyladinazolam.
After ingestion, adinazolam is primarily metabolised by N-dealkylation via hepatic and enteric pathways. Because the drug is a CYP3A4 substrate, adinazolam may undergo both enteric and hepatic conversion into its active metabolite after oral intake, with enteric metabolism playing a significant role before hepatic metabolism. CYP3A4 is an enzyme found in the intestines and liver that has a crucial role in drug metabolism. The enteric metabolic pathway of adinazolam is therefore also important for understanding the overall pharmacology of this substance.
According to several studies, the main metabolites of adinazolam, also known as the phase 1 metabolites, which arise from oxidative reactions catalysed by cytochrome P450 enzymes such as CYP3A4, are mono-N-desmethyladinazolam (the active metabolite) and N,N-didesmethyladinazolam. Mono-N-desmethyladinazolam is formed when a methyl group attached to the nitrogen of adinazolam is removed by N-dealkylation; this active metabolite is further metabolised to the didesmethyl form, in which the second methyl group is removed and replaced with a hydrogen atom. Deamination of desmethyladinazolam leads to the formation of an intermediate metabolite, which undergoes alpha-hydroxylation to form alpha-hydroxy-alprazolam or cleavage of the side chain to form estazolam; both are minor metabolites.
See also
Benzodiazepine
Alprazolam
Fluadinazolam
GL-II-73
References
Triazolobenzodiazepines
Chloroarenes
Designer drugs
GABAA receptor positive allosteric modulators
Hypnotics | Adinazolam | [
"Biology"
] | 2,940 | [
"Hypnotics",
"Behavior",
"Sleep"
] |
4,121,649 | https://en.wikipedia.org/wiki/Halazepam | Halazepam is a benzodiazepine derivative that was marketed under the brand names Paxipam in the United States, Alapryl in Spain, and Pacinone in Portugal.
Medical uses
Halazepam was used for the treatment of anxiety.
Adverse effects
Adverse effects include drowsiness, confusion, dizziness, and sedation. Gastrointestinal side effects have also been reported including dry mouth and nausea.
Pharmacokinetics and pharmacodynamics
Pharmacokinetic and pharmacodynamic parameters for halazepam were listed in Current Psychotherapeutic Drugs, published on June 15, 1998.
Regulatory information
Halazepam is classified as a schedule 4 controlled substance with a corresponding code 2762 by the Drug Enforcement Administration (DEA).
Commercial production
Halazepam was invented by Schlesinger Walter in the U.S. and was marketed as an anti-anxiety agent in 1981. However, halazepam is no longer commercially available in the United States, as it was withdrawn by its manufacturer because of poor sales.
See also
Benzodiazepines
Nordazepam
Diazepam
Chlordiazepoxide
Quazepam, fletazepam, triflubazam — benzodiazepines with trifluoromethyl group attached
References
External links
Inchem - Halazepam
Withdrawn drugs
Benzodiazepines
Chloroarenes
Lactams
Trifluoromethyl compounds | Halazepam | [
"Chemistry"
] | 307 | [
"Drug safety",
"Withdrawn drugs"
] |
4,121,924 | https://en.wikipedia.org/wiki/Camazepam | Camazepam is a benzodiazepine psychoactive drug, marketed under the brand names Albego, Limpidon and Paxor. It is the dimethyl carbamate ester of temazepam, a metabolite of diazepam. While it possesses anxiolytic, anticonvulsant, skeletal muscle relaxant and hypnotic properties it differs from other benzodiazepines in that its anxiolytic properties are particularly prominent but has comparatively limited anticonvulsant, hypnotic and skeletal muscle relaxant properties.
Pharmacology
Camazepam, like other benzodiazepines, produces a variety of therapeutic and adverse effects by binding to the benzodiazepine receptor site on the GABAA receptor and modulating its function; the GABA receptor is the most prolific inhibitory receptor within the brain. The GABA neurotransmitter and receptor system mediates the inhibitory or calming effects of camazepam on the nervous system.
Compared to other benzodiazepines, it causes less impairment of cognition, reaction time and coordination, and these reduced side effects make it best suited for use as an anxiolytic.
Animal studies have shown camazepam and its active metabolites possess anticonvulsant properties.
Unlike other benzodiazepines it does not disrupt normal sleep patterns. Camazepam has been shown in animal experiments to have a very low affinity for benzodiazepine receptors compared to other benzodiazepines. Compared to temazepam, camazepam has shown roughly equal anxiolytic properties, and less anticonvulsant, sedative, and motor-impairing properties.
Pharmacokinetics
Following oral administration, camazepam is almost completely absorbed into the bloodstream, with 90 percent bioavailability achieved in humans.
In humans, camazepam is metabolised into the active metabolite temazepam. Studies in dogs have shown that the half-life of the terminal elimination phase ranged from 6.4 to 10.5 h.
Medical uses
Camazepam is indicated for the short-term treatment of insomnia and anxiety. As with other benzodiazepines, its use should be reserved for patients in which the sleep disorder is severe, disabling, or causes marked distress.
Adverse effects
With higher doses, such as 40 mg of camazepam, impairments similar to those caused by other benzodiazepines appear, manifesting as disrupted sleep patterns and impaired cognitive performance. However, skin disorders have been reported with use of camazepam. One study has shown that camazepam may increase attention.
Research has demonstrated that camazepam exhibits competitive binding to benzodiazepine receptors within the brain, albeit with a relatively modest affinity in animal models. This interaction with benzodiazepine receptors, facilitated by both camazepam and its active metabolites, accounts for the medication's anticonvulsant properties.
Contraindications
Use of camazepam is contraindicated in subjects with known hypersensitivity to the drug, allergy to other drugs in the benzodiazepine class, or to any excipients contained in the pharmaceutical form. Use of camazepam should be avoided or carefully monitored by medical professionals in individuals with the following conditions: myasthenia gravis, severe liver deficiencies (e.g., cirrhosis), severe sleep apnea, pre-existing respiratory depression or chronic pulmonary insufficiency.
See also
Benzodiazepine
References
External links
Inchem - Camazepam
Benzodiazepines
Hypnotics
Carbamates
Lactams
Chloroarenes | Camazepam | [
"Biology"
] | 789 | [
"Hypnotics",
"Behavior",
"Sleep"
] |
4,122,356 | https://en.wikipedia.org/wiki/Pichia%20kudriavzevii | Pichia kudriavzevii (formerly Candida krusei) is a budding yeast (a species of fungus) involved in chocolate production. P. kudriavzevii is an emerging fungal nosocomial pathogen primarily found in the immunocompromised and those with hematological malignancies. It has natural resistance to fluconazole, a standard antifungal agent. It is most often found in patients who have had prior fluconazole exposure, sparking debate and conflicting evidence as to whether fluconazole should be used prophylactically. Mortality due to P. kudriavzevii fungemia is much higher than the more common C. albicans. Other Candida species that also fit this profile are C. parapsilosis, C. glabrata, C. tropicalis, C. guillermondii and C. rugosa.
P. kudriavzevii can be successfully treated with voriconazole, amphotericin B, and echinocandins (micafungin, caspofungin, and anidulafungin).
Role in chocolate production
Cacao beans have to be fermented to remove the bitter taste and break them down. This takes place with two fungi: P. kudriavzevii and Geotrichum. Most of the time, the two fungi are already present on the seed pods and seeds of the cacao plant, but specific strains are used in modern chocolate making. Each chocolate company uses its own strains, which have been selected to provide optimum flavor and aroma to the chocolate.
The yeasts produce enzymes to break down the pulp on the outside of the beans and generate acetic acid, killing the cacao embryo inside the seed, developing a chocolatey aroma and eliminating the bitterness in the beans.
Growth and Metabolism
P. kudriavzevii grows at a maximum temperature of . Candida species are a major differential diagnosis and these generally require biotin for growth and some have additional vitamin requirements, but P. kudriavzevii can grow in vitamin-free media. Also, P. kudriavzevii grows on Sabouraud's dextrose agar as spreading colonies with a matte or a rough whitish-yellow surface, in contrast to the convex colonies of Candida spp. This characteristic, together with its "long grain rice" appearance on microscopy, helps the definitive identification of the species.
References
Further reading
External links
Food microbiology
Yeasts
Chocolate industry
Pathogenic microbes
Fungal pathogens of humans
Animal fungal diseases
Fungus species | Pichia kudriavzevii | [
"Biology"
] | 551 | [
"Yeasts",
"Fungi",
"Fungus species"
] |
4,122,426 | https://en.wikipedia.org/wiki/Modularity%20%28biology%29 | Modularity refers to the ability of a system to organize discrete, individual units that can overall increase the efficiency of network activity and, in a biological sense, facilitates selective forces upon the network. Modularity is observed in all model systems, and can be studied at nearly every scale of biological organization, from molecular interactions all the way up to the whole organism.
Evolution of Modularity
The exact evolutionary origins of biological modularity have been debated since the 1990s. In the mid 1990s, Günter Wagner argued that modularity could have arisen and been maintained through the interaction of four evolutionary modes of action:
[1] Selection for the rate of adaptation: If different complexes evolve at different rates, then those evolving more quickly reach fixation in a population faster than other complexes. Thus, common evolutionary rates could be forcing the genes for certain proteins to evolve together while preventing other genes from being co-opted unless there is a shift in evolutionary rate.
[2] Constructional selection: When a gene exists in many duplicated copies, it may be maintained because of the many connections it has (also termed pleiotropy). There is evidence that this is so following whole genome duplication, or duplication at a single locus. However, the direct relationship that duplication processes have with modularity has yet to be directly examined.
[3] Stabilizing selection: While seeming antithetical to forming novel modules, Wagner maintains that it is important to consider the effects of stabilizing selection as it may be "an important counter force against the evolution of modularity". Stabilizing selection, if ubiquitously spread across the network, could then be a "wall" that makes the formation of novel interactions more difficult and maintains previously established interactions. Against such strong positive selection, other evolutionary forces acting on the network must exist, with gaps of relaxed selection, to allow focused reorganization to occur.
[4] Compounded effect of stabilizing and directional selection: This is the explanation seemingly favored by Wagner and his contemporaries as it provides a model through which modularity is constricted, but still able to unidirectionally explore different evolutionary outcomes. The semi-antagonistic relationship is best illustrated using the corridor model, whereby stabilizing selection forms barriers in phenotype space that only allow the system to move towards the optimum along a single path. This allows directional selection to act and inch the system closer to optimum through this evolutionary corridor.
For over a decade, researchers examined the dynamics of selection on network modularity. However, in 2013 Clune and colleagues challenged the sole focus on selective forces, and instead provided evidence that there are inherent "connectivity costs" that limit the number of connections between nodes to maximize efficiency of transmission. This hypothesis originated from neurological studies that found that there is an inverse relationship between the number of neural connections and the overall efficiency (more connections seemed to limit the overall performance speed/precision of the network). This connectivity cost had yet to be applied to evolutionary analyses. Clune et al. created a series of models that compared the efficiency of various evolved network topologies in an environment where performance, their only metric for selection, was taken into account, and another treatment where performance as well as the connectivity cost were factored together. The results show not only that modularity formed ubiquitously in the models that factored in connection cost, but that these models also outperformed the performance-only based counterparts in every task. This suggests a potential model for module evolution whereby modules form from a system’s tendency to resist maximizing connections to create more efficient and compartmentalized network topologies.
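The core idea can be illustrated with a small sketch (not Clune et al.'s actual model): a candidate network is scored on task performance alone, or on performance minus a penalty proportional to its number of connections. The network representation, the cost weight and the example numbers below are assumptions chosen only for illustration.

```python
import numpy as np

# Illustrative sketch of "performance only" versus "performance plus connection
# cost" selection. All parameter values here are assumptions, not data from the
# Clune et al. study.

def fitness(adjacency, performance, cost_weight=0.0):
    """Score a network; higher is better.

    adjacency   : boolean matrix, adjacency[i, j] is True if a connection exists
    performance : task performance of the network, in [0, 1]
    cost_weight : penalty per connection (0 reproduces the performance-only treatment)
    """
    n_connections = int(np.count_nonzero(adjacency))
    return performance - cost_weight * n_connections

rng = np.random.default_rng(0)
dense = rng.random((8, 8)) < 0.9   # densely connected candidate
sparse = rng.random((8, 8)) < 0.2  # sparser, potentially more modular candidate

# Under performance alone the dense network wins even if only slightly better;
# once connections carry a cost, the sparser topology is favored.
print(fitness(dense, performance=0.92), fitness(sparse, performance=0.90))
print(fitness(dense, performance=0.92, cost_weight=0.005),
      fitness(sparse, performance=0.90, cost_weight=0.005))
```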
References
Sources
SF Gilbert, JM Opitz, and RA Raff. 1996. "Resynthesizing Evolutionary and Developmental Biology". Developmental Biology. 173:357-372
G von Dassow and E Munro. "Modularity in Animal Development and Evolution: Elements of a Conceptual Framework for EvoDevo". J. Exp. Zool. 285:307-325.
MI Arnone and EH Davidson. 1997. The hardwiring of development: organization and function of genomic regulatory systems.
EH Davidson. The Regulatory Genome: Gene Regulatory Networks in Development and Evolution. Academic Press, 2006.
S Barolo and JW Posakony. 2002. "Three habits of highly effective signaling pathways: principles of transcriptional control by developmental cell signaling". Genes and Development. 16:1167-1181
EN Trifonov and ZM Frenkel. 2009. "Evolution of protein modularity. Current Opinion in Structural Biology". 19:335-340.
CR Baker, LN Booth, TR Sorrells, AD Johnson. 2012. "Protein Modularity, Cooperative Binding, and Hybrid Regulatory States Underlie Transcriptional Network Diversification". Cell. 151:80-95.
Y Pritykin and M Singh. 2012. "Simple Topological Features Reflect Dynamics and Modularity in Protein Interaction Networks". PLoS Computational Biology. 9(10): e1003243
GP Wagner. 1989. "Origin of Morphological Characters and the Biological Basis of Homology". Evolution. 43(6):1157-1171
SB Carroll, J Grenier, and S Weatherbee. From DNA to Diversity: Molecular Genetics and the Evolution of Animal Design. Wiley-Blackwell, 2002.
Further reading
W Bateson. Materials for the Study of Variation. London:Macmillan, 1984.
R Raff. The Shape of Life. University of Chicago Press, 1996.
EH Davidson. The Regulatory Genome: Gene Regulatory Networks in Development and Evolution. Academic Press, 2006.
M Ptashne and A Gann. Genes and Signals. Cold Spring Harbor Press, 2002.
Biology terminology | Modularity (biology) | [
"Biology"
] | 1,161 | [
"nan"
] |
4,123,257 | https://en.wikipedia.org/wiki/NBC%20suit | An NBC (nuclear, biological, chemical) suit, also called a chem suit, or chemical suit is a type of military personal protective equipment. NBC suits are designed to provide protection against direct contact with and contamination by radioactive, biological, or chemical substances, and provide protection from contamination with radioactive materials and all types of radiation. They are generally designed to be worn for extended periods to allow the wearer to fight (or generally function) while under threat of or under actual nuclear, biological, or chemical attack. The civilian equivalent is the hazmat suit. The term NBC has been replaced by CBRN (chemical, biological, radiological, nuclear), with the addition of the new threat of radiological weapons.
Use
NBC stands for nuclear, biological, and chemical. It is a term used in the armed forces and in health and safety, mostly in the context of weapons of mass destruction (WMD) clean-up in overseas conflict or protection of emergency services during the response to terrorism, though there are civilian and common-use applications (such as recovery and clean up efforts after industrial accidents).
In military operations, NBC suits are intended to be quickly donned over a soldier’s uniform and can continuously protect the user for up to several days. Most are made of impermeable material such as rubber, but some incorporate a filter, allowing air, sweat and condensation to slowly pass through. An example of this is the Canadian military NBC suit.
The older Soviet suit was impermeable rubber-coated canvas. Now known as the CBRN suit, the British Armed Forces suit is reinforced nylon with charcoal impregnated felt. It is more comfortable because of the breathability but has a shorter useful life, and must be replaced often. The British Armed Forces suit is known as a "Noddy suit" because some of them had a pointed hood like the hat worn by the fictional character Noddy. The Soviet style suit will protect the wearer at higher concentrations than the British suit but is less comfortable due to the build-up of moisture within it. A Soviet suit was known as a "Womble" because of its long faced respirator with round visor glasses. In Canadian terminology, an NBC suit or any kind of similar protective over-suit is also known as a "Bunnysuit".
See also
(Chemical, Biological, Radiological, and Nuclear, known formerly as NBC)
List of NBC warfare forces
(Mission Oriented Protective Posture gear)
(PPPS) (for use in biocontainment)
(WMD, formerly NBC weapon)
Joint Service Lightweight Integrated Suit Technology - Used as part of MOPP.
References
External links
Chemical protective suits reflect advancements in PPE
Environmental suits
Military personal equipment
Chemical, biological, radiological and nuclear defense | NBC suit | [
"Chemistry",
"Biology"
] | 565 | [
"Chemical",
" biological",
" radiological and nuclear defense",
"Biological warfare"
] |
577,624 | https://en.wikipedia.org/wiki/Roofing%20filter | A roofing filter is a type of filter used in a HF radio receiver that limits the passband in the early stages of the receiver electronics. It blocks strong signals outside the receive channel which can overload following amplifier and mixer stages.
Purpose
The roofing filter is usually found after the first receiver mixer (which normally contains an amplifier) to limit the first intermediate frequency (IF) stage's passband. It prevents overloading later amplifier stages, which would cause nonlinearity ("distortion") or clipping ("buzz") even if the overload occurred on frequencies whose signal is not heard directly.
Roofing filters are usually crystal or ceramic filter types, with a passband for general purpose shortwave radio reception of about 6–20 kHz (for AM–NFM). The receiver's bandwidth is not determined by the roofing filter passband, but instead by a follow-on crystal filter, mechanical filter, or DSP filter, all of which allow a much tighter filtering curve than a typical roofing filter.
For more demanding uses, such as listening to weak CW or SSB signals, a roofing filter with a narrower passband appropriate to the mode of the received signal is required, typically 250 Hz or 500 Hz wide for CW, or 1.8 kHz for SSB. Such narrow passbands are impractical at the high first IF (above 40 MHz) used in general-coverage up-conversion receivers, so these filters require that the receiver use a first IF well below the VHF range, perhaps 9 or 11 MHz.
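As an illustration of the principle, the hedged sketch below uses a digital band-pass filter as a stand-in for a roofing filter: a strong signal 50 kHz away from the wanted channel is largely removed before it can overload later stages. Real roofing filters are analog crystal or ceramic devices, and the intermediate frequency, bandwidth and sample rate used here are illustrative assumptions only.

```python
import numpy as np
from scipy import signal

# Digital stand-in for a roofing filter. The 455 kHz IF, 6 kHz passband and the
# sample rate below are assumptions chosen only to make the example concrete.
fs = 2_000_000          # sample rate, Hz
f_if = 455_000          # first intermediate frequency, Hz
half_bw = 3_000         # half of a 6 kHz roofing passband, Hz

t = np.arange(0, 0.01, 1 / fs)
wanted = 0.1 * np.sin(2 * np.pi * f_if * t)               # weak in-channel signal
blocker = 1.0 * np.sin(2 * np.pi * (f_if + 50_000) * t)   # strong signal 50 kHz away
rx = wanted + blocker

# 6 kHz-wide band-pass centred on the IF, applied ahead of the later stages
sos = signal.butter(4, [f_if - half_bw, f_if + half_bw],
                    btype="bandpass", fs=fs, output="sos")
filtered = signal.sosfilt(sos, rx)

print("peak before roofing filter:", np.max(np.abs(rx)))        # dominated by the blocker
print("peak after roofing filter: ", np.max(np.abs(filtered)))  # blocker largely removed
```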
See also
Bandpass filter – category that includes roofing filters
Preselector – an external device that serves a similar function
References
Radio electronics
Radio technology
Receiver (radio)
Wireless tuning and filtering | Roofing filter | [
"Technology",
"Engineering"
] | 348 | [
"Information and communications technology",
"Radio electronics",
"Wireless tuning and filtering",
"Telecommunications engineering",
"Receiver (radio)",
"Radio technology"
] |
577,715 | https://en.wikipedia.org/wiki/Repeated%20sequence%20%28DNA%29 | Repeated sequences (also known as repetitive elements, repeating units or repeats) are short or long patterns that occur in multiple copies throughout the genome. In many organisms, a significant fraction of the genomic DNA is repetitive, with over two-thirds of the sequence consisting of repetitive elements in humans. Some of these repeated sequences are necessary for maintaining important genome structures such as telomeres or centromeres.
Repeated sequences are categorized into different classes depending on features such as structure, length, location, origin, and mode of multiplication. The disposition of repetitive elements throughout the genome can consist either in directly adjacent arrays called tandem repeats or in repeats dispersed throughout the genome called interspersed repeats. Tandem repeats and interspersed repeats are further categorized into subclasses based on the length of the repeated sequence and/or the mode of multiplication.
While some repeated DNA sequences are important for cellular functioning and genome maintenance, other repetitive sequences can be harmful. Many repetitive DNA sequences have been linked to human diseases such as Huntington's disease and Friedreich's ataxia. Some repetitive elements are neutral and occur when there is an absence of selection for specific sequences depending on how transposition or crossing over occurs. However, an abundance of neutral repeats can still influence genome evolution as they accumulate over time. Overall, repeated sequences are an important area of focus because they can provide insight into human diseases and genome evolution.
History
In the 1950s, Barbara McClintock first observed DNA transposition and illustrated the functions of the centromere and telomere at the Cold Spring Harbor Symposium. McClintock's work set the stage for the discovery of repeated sequences because transposition, centromere structure, and telomere structure are all possible through repetitive elements, yet this was not fully understood at the time. The term "repeated sequence" was first used by Roy John Britten and D. E. Kohne in 1968; they found out that more than half of the eukaryotic genomes were repetitive DNA through their experiments on reassociation of DNA. Although the repetitive DNA sequences were conserved and ubiquitous, their biological role was yet unknown. In the 1990s, more research was conducted to elucidate the evolutionary dynamics of minisatellite and microsatellite repeats because of their importance in DNA-based forensics and molecular ecology. DNA-dispersed repeats were increasingly recognized as a potential source of genetic variation and regulation. Discoveries of deleterious repetitive DNA-related diseases stimulated further interest in this area of study. In the 2000s, the data from full eukaryotic genome sequencing enabled the identification of different promoters, enhancers, and regulatory RNAs which are all coded by repetitive regions. Today, the structural and regulatory roles of repetitive DNA sequences remain an active area of research.
Types and functions
Many repeat sequences are likely to be non-functional, decaying remnants of transposable elements; these have been labelled "junk" or "selfish" DNA. Nevertheless, some repeats may occasionally be exapted for other functions.
Tandem repeats
Tandem repeats are repeated sequences which are directly adjacent to each other in the genome. Tandem repeats may vary in the number of nucleotides comprising the repeated sequence, as well as the number of times the sequence repeats. When the repeating sequence is only 2–10 nucleotides long, the repeat is referred to as a short tandem repeat (STR) or microsatellite. When the repeating sequence is 10–60 nucleotides long, the repeat is referred to as a minisatellite. For minisatellites and microsatellites, the number of times the sequence repeats at a single locus can range from twice to hundreds of times.
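As a minimal sketch of the definitions above, the following code classifies a tandem repeat by the length of its repeating unit; the function names and the handling of the 10-nucleotide boundary are illustrative choices rather than a standard convention.

```python
# Classify a tandem repeat by the length of its repeating unit, following the
# ranges given above (2-10 nt microsatellite/STR, roughly 10-60 nt minisatellite).

def classify_tandem_repeat(unit: str) -> str:
    n = len(unit)
    if 2 <= n <= 10:
        return "microsatellite (short tandem repeat)"
    if 10 < n <= 60:
        return "minisatellite"
    return "outside the micro-/minisatellite ranges"

def expand_repeat(unit: str, copies: int) -> str:
    """Build the tandem array itself, e.g. a CAG unit repeated 5 times."""
    return unit * copies

print(classify_tandem_repeat("CA"))                    # microsatellite
print(classify_tandem_repeat("ACGTACGTTAGCCATGGCAT"))  # 20 nt unit -> minisatellite
print(expand_repeat("CAG", 5))                         # CAGCAGCAGCAGCAG
```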
Tandem repeats have a wide variety of biological functions in the genome. For example, minisatellites are often hotspots of meiotic homologous recombination in eukaryotic organisms. Recombination is when two homologous chromosomes align, break, and rejoin to swap pieces. Recombination is important as a source of genetic diversity, as a mechanism for repairing damaged DNA, and a necessary step in the appropriate segregation of chromosomes in meiosis. The presence of repeated sequence DNA makes it easier for areas of homology to align, thereby controlling when and where recombination occurs.
In addition to playing an important role in recombination, tandem repeats also play important structural roles in the genome. For example, telomeres are composed mainly of tandem TTAGGG repeats. These repeats fold into highly organized G quadruplex structures which protect the ends of chromosomal DNA from degradation. Repetitive elements are enriched in the middle of chromosomes as well. Centromeres are the highly compact regions of chromosomes which join sister chromatids together and also allow the mitotic spindle to attach and separate sister chromatids during cell division. Centromeres are composed of a 177 base pair tandem repeat named the α-satellite repeat. Pericentromeric heterochromatin, the DNA which surrounds the centromere and is important for structural maintenance, is composed of a mixture of different satellite subfamilies including the α-, β- and γ-satellites as well as HSATII, HSATIII, and sn5 repeats.
Some repetitive sequences, such as those with structural roles discussed above, play roles necessary for proper biological functioning. Other tandem repeats have deleterious roles which drive diseases. Many other tandem repeats, however, have unknown or poorly understood functions.
Interspersed repeats
Interspersed repeats are identical or similar DNA sequences which are found in different locations throughout the genome. Interspersed repeats are distinguished from tandem repeats in that the repeated sequences are not directly adjacent to each other but instead may be scattered among different chromosomes or far apart on the same chromosome. Most interspersed repeats are transposable elements (TEs), mobile sequences which can be "cut and pasted" or "copied and pasted" into different places in the genome. TEs were originally called "jumping genes" for their ability to move, yet this term is somewhat misleading as not all TEs are discrete genes.
Transposable elements that are transcribed into RNA, reverse-transcribed into DNA, then reintegrated into the genome are called retrotransposons. Just as tandem repeats are further subcategorized based on the length of the repeating sequence, there are many different types of retrotransposons. Long interspersed nuclear elements (LINEs) are typically 3–7 kilobases in length. Short interspersed nuclear elements (SINEs) are typically 100-300 base pairs and no longer than 600 base pairs. Long terminal repeat (LTR) retrotransposons are a third major class of retrotransposons and are characterized by highly repetitive sequences at the ends of the element. When a transposable element does not proceed through RNA as an intermediate, it is called a DNA transposon. Other classification systems refer to retrotransposons as "Class I" and DNA transposons as "Class II" transposable elements.
Transposable elements are estimated to constitute 45% of the human genome. Since uncontrolled propagation of TEs could wreak havoc on the genome, many regulatory mechanisms have evolved to silence their spread, including DNA methylation, histone modifications, non-coding RNAs (ncRNAs) including small interfering RNA (siRNA), chromatin remodelers, histone variants, and other epigenetic factors. However, TEs play a wide variety of important biological functions. When TEs are introduced into a new host, such as from a virus, they increase genetic diversity. In some cases, host organisms find new functions for the proteins which arise from expressing TEs in an evolutionary process called TE exaptation. Recent research also suggests that TEs serve to maintain higher-order chromatin structure and 3D genome organization. Furthermore, TEs contribute to regulating the expression of other genes by serving as distal enhancers and transcription factor binding sites.
The prevalence of interspersed elements in the genome has garnered attention for more research on their origins and functions. Some specific interspersed elements have been characterized, such as the Alu repeat and LINE1.
Intrachromosomal recombination
Homologous recombination between chromosomal repeated sequences in somatic cells of Nicotiana tabacum was found to be increased by exposure to mitomycin C, a bifunctional alkylating agent that crosslinks DNA strands. This increase in recombination was attributed to increased intrachromosomal recombinational repair. By this process, mitomycin C damaged DNA in one sequence is repaired using intact information from the other repeated sequence.
Direct and inverted repeats
While tandem and interspersed repeats are distinguished based on their location in the genome, direct and inverted repeats are distinguished based on the ordering of the nucleotide bases. Direct repeats occur when a nucleotide sequence is repeated with the same directionality. Inverted repeats occur when a nucleotide sequence is repeated in the inverse direction. For example, a direct repeat of "CATCAT" would be another repetition of "CATCAT". In contrast, the inverted repeated would be "ATGATG". When there are no nucleotides separating the inverted repeat, such as "CATCATATGATG", the sequence is called a palindromic repeat. Inverted repeats can play structural roles in DNA and RNA by forming stem loops and cruciforms.
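The following sketch reproduces the example above in code: it builds the inverted (reverse-complement) form of a repeat and checks whether a combined sequence is palindromic, i.e. identical to its own reverse complement. The helper names are illustrative.

```python
# Build the inverted (reverse complement) form of a repeat and test whether a
# combined sequence is palindromic in the molecular-biology sense.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def is_palindromic(seq: str) -> bool:
    return seq == reverse_complement(seq)

direct = "CATCAT"
inverted = reverse_complement(direct)
print(inverted)                           # ATGATG, as in the example above
print(is_palindromic(direct + inverted))  # True: CATCATATGATG reads the same on both strands
```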
Repeated sequences in human disease
For humans, some repeated DNA sequences are associated with diseases. Specifically, tandem repeat sequences, underlie several human disease conditions, particularly trinucleotide repeat diseases such as Huntington's disease, fragile X syndrome, several spinocerebellar ataxias, myotonic dystrophy and Friedreich's ataxia. Trinucleotide repeat expansions in the germline over successive generations can lead to increasingly severe manifestations of the disease. These trinucleotide repeat expansions may occur through strand slippage during DNA replication or during DNA repair synthesis. It has been noted that genes containing pathogenic CAG repeats often encode proteins that themselves have a role in the DNA damage response and that repeat expansions may impair specific DNA repair pathways. Faulty repair of DNA damages in repeat sequences may cause further expansion of these sequences, thus setting up a vicious cycle of pathology.
Huntington's disease
Huntington's disease is a neurodegenerative disorder which is due to the expansion of repeated trinucleotide sequence CAG in exon 1 of the huntingtin gene (HTT). This gene is responsible for encoding the protein huntingtin which plays a role in preventing apoptosis, otherwise known as cell death, and repair of oxidative DNA damage. In Huntington's disease the expansion of the trinucleotide sequence CAG encodes for a mutant huntingtin protein with an expanded polyglutamine domain. This domain causes the protein to form aggregates in nerve cells preventing normal cellular function and resulting in neurodegeneration.
Fragile X syndrome
Fragile X syndrome is caused by the expansion of the DNA sequence CCG in the FMR1 gene on the X chromosome. This gene produces the RNA-binding protein FMRP. In the case of Fragile X syndrome the repeated sequence makes the gene unstable and therefore silences the gene FMR1. Because the gene resides on the X chromosome, females, who have two X chromosomes, are less affected than males, who have only one X chromosome and one Y chromosome, because the second X chromosome can compensate for the silencing of the gene on the other X chromosome.
Spinocerebellar ataxias
Several types of spinocerebellar ataxia (SCA1, SCA2, SCA3, SCA6, SCA7, SCA12 and SCA17) are underlain by CAG trinucleotide repeat expansions. Similar to Huntington's disease, the polyglutamine tail created by this trinucleotide expansion causes aggregation of proteins, preventing normal cellular function and causing neurodegeneration.
Friedreich's Ataxia
Friedreich's ataxia is a type of ataxia caused by an expanded GAA repeat sequence in the frataxin gene. The frataxin gene is responsible for producing the frataxin protein, a mitochondrial protein involved in energy production and cellular respiration. The expanded GAA sequence in the first intron results in silencing of the gene and loss of functional frataxin protein. The loss of a functional FXN gene leads to issues with mitochondrial functioning as a whole and can present phenotypically in patients as difficulty walking.
Myotonic dystrophy
Myotonic dystrophy is a disorder that presents as muscle weakness and consists of two main types: DM1 and DM2. Both types of myotonic dystrophy are due to expanded DNA sequences. In DM1 the expanded sequence is CTG, while in DM2 it is CCTG. The two sequences are found on different genes: the expanded sequence in DM2 lies in the ZNF9 gene and the expanded sequence in DM1 in the DMPK gene. Unlike the CAG repeat in Huntington's disease, these expansions lie in non-coding regions of their genes rather than in protein-coding sequence. It has been shown, however, that there is a link between RNA toxicity and the repeat sequences in DM1 and DM2.
Amyotrophic lateral sclerosis and Frontotemporal dementia
Not all diseases caused by repeated DNA sequences are trinucleotide repeat diseases. Amyotrophic lateral sclerosis and frontotemporal dementia can be caused by expansion of a hexanucleotide GGGGCC repeat in the C9orf72 gene, which causes RNA toxicity that leads to neurodegeneration.
Biotechnology
Repetitive DNA is hard to sequence using next-generation sequencing techniques because sequence assembly from short reads simply cannot determine the length of a repetitive part. This issue is particularly serious for microsatellites, which are made of tiny 1-6bp repeat units. Although they are difficult to sequence, these short repeats have great value in DNA fingerprinting and evolutionary studies. Many researchers have historically left out repetitive sequences when analyzing and publishing whole genome data due to technical limitations.
Bustos et al. proposed one method of sequencing long stretches of repetitive DNA. The method combines the use of a linear vector for stabilization with exonuclease III to generate progressive deletions of regions rich in simple sequence repeats (SSRs). First, SSR-rich fragments are cloned into a linear vector that can stably incorporate tandem repeats up to 30 kb; expression of the repeats is prevented by transcriptional terminators in the vector. The second step involves the use of exonuclease III, which removes nucleotides from the 3' end, producing a set of unidirectional deletions of the SSR fragments. Finally, the deletion products are amplified and analyzed with colony PCR, and the sequence is then built by ordered sequencing of a set of clones containing different deletions.
See also
References
External links
Function of Repetitive DNA | Repeated sequence (DNA) | [
"Biology"
] | 3,141 | [
"Molecular genetics",
"Repetitive DNA sequences"
] |
577,764 | https://en.wikipedia.org/wiki/Rolls-Royce%20WR-21 | The Rolls-Royce WR-21 is a gas turbine marine engine, designed with a view to powering the latest naval surface combatants of the partner nations.
History
Developed with government funding input from the United Kingdom, France and the United States, the WR-21 was designed and manufactured by an international consortium led by Northrop Grumman as prime contractor. The turbine itself was designed primarily by Rolls-Royce with significant marine engineering and test facility input from DCN, with Northrop Grumman responsible for the intercooler, the recuperator and system integration.
WR-21 development draws heavily on the technology of the successful Rolls-Royce RB211 and Trent families of gas turbines.
The original design and development of the WR-21 was carried out by Westinghouse Electric Corporation (later Northrop Grumman Marine Systems) under a US Navy contract placed in December 1991. Later the Royal Navy and the French Navy became interested in the WR-21, leading to Rolls-Royce and DCN involvement.
The WR-21 is the propulsion system of Royal Navy Type 45 destroyers.
Design
The WR-21 is the first aeroderivative gas turbine to incorporate gas compressor intercooler and exhaust heat recovery system technologies that deliver low specific fuel consumption across the engine's operating range. It offers a reduction in fuel burn of 30% across the typical ship operating profile.
The intercooler cools air entering the high-pressure compressor, reducing the amount of energy required to compress the air.
The recuperator preheats the combustion air by recovering waste heat from the exhaust, improving cycle efficiency and reducing fuel consumption.
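The benefit of exhaust heat recovery can be illustrated with an idealized Brayton-cycle calculation. The sketch below compares the ideal simple-cycle efficiency with the ideal efficiency of a cycle with perfect recuperation; the pressure ratios and temperatures are textbook-style assumptions, not WR-21 cycle data, and the intercooler's separate benefit of reduced compression work is not modelled.

```python
# Idealized Brayton-cycle comparison showing why recovering exhaust heat
# (recuperation) lowers fuel consumption. All numbers are assumptions.

gamma = 1.4            # ratio of specific heats for air
T1 = 288.0             # compressor inlet temperature, K (assumed)
T3 = 1500.0            # turbine inlet temperature, K (assumed)
exp = (gamma - 1.0) / gamma

for r in (6.0, 12.0, 16.0):                      # overall pressure ratio
    eta_simple = 1.0 - r ** (-exp)               # ideal simple cycle
    eta_recup = 1.0 - (T1 / T3) * r ** exp       # ideal cycle with perfect recuperation
    print(f"r = {r:4.1f}: simple {eta_simple:.1%}, recuperated {eta_recup:.1%}")

# Recuperation helps most when the compressor delivery air is relatively cool,
# which is what an intercooled-recuperated (ICR) arrangement provides.
```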
Specifications
Rated power: 25.2 MW
Specific fuel consumption: approximately
Main module wet weight:
Twin-spool gas generator and free-rotating power turbine
Low-pressure (LP) spool with six-stage compressor and single-stage turbine at 6,200 rpm
High-pressure (HP) spool with six-stage compressor and single-stage turbine at 8,100 rpm
Five-stage free power turbine 3,600 rpm (60 Hz)
Intercooler between LP and HP compressors
Nine radial combustors
Exhaust heat recuperator to combustor inlet air
Operational issues
In 2009 it was discovered that the Northrop Grumman intercooler as fitted in the WR-21 on Type 45 destroyers had a major design flaw, failing to operate in water temperatures above 30 °C. The intercooler of the first Type 45 destroyer, , failed in the mid-Atlantic in 2010 and had to be repaired in Canada, with further repairs for intercooler failure in 2012 in Bahrain. The Type 45's pioneering integrated electric propulsion (IEP) system uses two WR-21s and two Wartsila 2-MW diesel generators to power everything on board, including weapons systems in addition to propulsion and other functions, leaving the ships vulnerable to "total electric failure". The Ministry of Defence said: "The Type 45 was designed for world-wide operations, from sub-Arctic to extreme tropical environments, and continues to operate effectively in the Gulf and the South Atlantic all year round."
The WR-21 engines will have to be supplemented by one or two additional diesel generators, fitted by cutting open the hull in dry dock.
Former First Sea Lord Admiral Philip Jones clarified that "WR-21 gas turbines were designed in extreme hot weather conditions to what we call “gracefully degrade” in their performance, until you get to the point where it goes beyond the temperature at which they would operate... we found that the resilience of the diesel generators and the WR-21 in the ship at the moment was not degrading gracefully; it was degrading catastrophically, so that is what we have had to address." The Admiral still argued that despite the problems, the Royal Navy has been able to deploy Type 45 destroyers in nine-month cycles to the Gulf region where temperatures are high with little fault. The Royal Navy has also been able to maintain at least two Type 45s at operational readiness.
References
External links
Rolls-Royce WR-21 Marine gas turbine page (archived 2013-11-08)
Aero-derivative engines
Marine gas turbines | Rolls-Royce WR-21 | [
"Technology"
] | 860 | [
"Aero-derivative engines",
"Engines"
] |
577,817 | https://en.wikipedia.org/wiki/Lamination | Lamination is the technique/process of manufacturing a material in multiple layers, so that the composite material achieves improved strength, stability, sound insulation, appearance, or other properties from the use of the differing materials, such as plastic. A laminate is a layered object or material assembled using heat, pressure, welding, or adhesives. Various coating machines, machine presses and calendering equipment are used.
In particular, laminating paper in plastic makes it sturdy, waterproof, and erasable.
Materials
There are different lamination processes, depending primarily on the type or types of materials to be laminated. The materials used in laminates can be identical or different, depending on the object to be laminated, the process and the desired properties.
Textile
Laminated fabrics are widely used in different fields of human activity, including medical and military applications. Woven fabrics (organic and inorganic based) are usually laminated with different polymers to give them useful properties such as resistance to chemicals, dust and grease, photoluminescence (glowing and other light effects, e.g. in high-visibility clothing), tear strength, stiffness, thickness, and windproofing. Coated fabrics may be considered a subtype of laminated fabrics. Nonwoven fabrics (e.g. fiberglass) are also often laminated. According to a 2002 source, the nonwovens fabric industry was the biggest single consumer of different polymer binding resins.
Materials used in production of coated and laminated fabrics are generally subjected to heat treatment. Thermoplastics and thermosetting plastics (e.g. formaldehyde polymers) are equally used in laminating and coating textile industry.
In 2002 primary materials used included polyvinyl acetate, acrylics, polyvinyl chloride (PVC), polyurethanes, and natural and synthetic rubbers. Copolymers and terpolymers were also in use.
Thin films of plastics were in wide use as well. Materials varied from polyethylene and PVC to Kapton, depending on the application. In the automotive industry, for example, PVC/acrylonitrile-butadiene-styrene (ABS) mixtures were often applied to interiors by laminating onto a polyurethane foam to give soft-touch properties. Specialty films, e.g. polytetrafluoroethylene (PTFE) and polyurethane, were used in protective clothing.
Glass
Plastic film can be used to laminate either side of a sheet of glass. Vehicle windshields are commonly made as composites created by laminating a tough plastic film between two layers of glass. This is to prevent shards of glass detaching from the windshield in case it breaks.
Wood
Plywood is a common example of a laminate using the same material in each layer combined with an adhesive. Glued and laminated dimensional timber is used in the construction industry to make beams (glued laminated timber, or Glulam), in sizes larger and stronger than those that can be obtained from single pieces of wood. Another reason to laminate wooden strips into beams is quality control, as with this method each and every strip can be inspected before it becomes part of a highly stressed component.
Examples of laminate materials include melamine adhesive countertop surfacing and plywood. Decorative laminates and some modern millwork components are produced with decorative papers with a layer of overlay on top of the decorative paper, set before pressing them with thermoprocessing into high-pressure decorative laminates (HPDL). A new type of HPDL is produced using real wood veneer or multilaminar veneer as the top surface. High-pressure laminates consist of laminates "molded and cured at pressures not lower than 1,000 lb per sq in. (70 kg per cm2) and more commonly in the range of 1,200 to 2,000 lb per sq in. (84 to 140 kg per cm2)". Meanwhile, low-pressure laminate is defined as "a plastic laminate molded and cured at pressures in general of 400 pounds per square inch (approximately 27 atmospheres or 2.8 × 10^6 pascals)".
Paper
Corrugated fiberboard boxes are examples of laminated structures, where an inner core provides rigidity and strength, and the outer layers provide a smooth surface. A starch-based adhesive is usually used.
Laminating paper products, such as photographs, can prevent them from becoming creased, faded, water damaged, wrinkled, stained, smudged, abraded, or marked by grease or fingerprints. Photo identification cards and credit cards are almost always laminated with plastic film. Boxes and other containers may be laminated using heat seal layers, extrusion coatings, pressure sensitive adhesives, UV coating, etc.
Lamination is also used in sculpture using wood or resin. An example of an artist who used lamination in his work is the American Floyd Shaman.
Laminates can be used to add properties to a surface, usually printed paper, that would not have them otherwise, such as with the use of lamination paper. Sheets of vinyl impregnated with ferro-magnetic material can allow portable printed images to bond to magnets, such as for a custom bulletin board or a visual presentation. Specially surfaced plastic sheets can be laminated over a printed image to allow them to be safely written upon, such as with dry erase markers or chalk. Multiple translucent printed images may be laminated in layers to achieve certain visual effects or to hold holographic images. Printing businesses that do commercial lamination keep a variety of laminates on hand, as the process for bonding different types is generally similar when working with thin materials.
Paper is normally laminated on particle or fiberboards giving a good-looking and resistant surface for use as furniture, decoration panels and flooring.
Paper laminations are also used in packaging. For example, juiceboxes are fabricated from liquid packaging board which is usually six layers of paper, polyethylene, and aluminum foil. Paper is used in the lamination to shape the product and give the juicebox an extra source of strength.
The base is most often particle- or fiberboards, then some layers of absorbent kraft paper. The last layers are a decor paper covered with an overlay. The lamination papers are covered with an inert resin, often melamine, which is cured to form a hard composite with the structure of paper. The laminates may also have a lining on the back side of laminating kraft to compensate for the tension created by the top side lamination. Cheaper particle boards may have only a lining of laminating kraft to give surface washability and resistance to wear.
The decor paper can also be processed under heat and low/high pressure to create a melamine-laminated sheet that has several applications. The absorbent kraft paper is a normal kraft paper with controlled absorbency, which means a high degree of porosity. It is made of clean low kappa hardwood kraft with good uniformity. The grammage is 80 - 120 g/m2 and normally 2-4 plies are used. The decor paper is the most critical of the lamination papers as it gives the visual appearance of the laminate. The impregnation resin and cellulose have about the same refraction index, which means that the cellulose fibers of the paper appear as a shade and only the dyestuffs and pigments are visible. Due to this the decor paper demands extreme cleanness and is produced only on small paper machines with grammage 50 - 150 g/m2. The overlay paper has a grammage of 18 – 50 g/m2 and is made of pure cellulose, thus it must be made of well delignified pulp. It becomes transparent after impregnation, letting the appearance of the decor paper come through. The laminating kraft has a grammage of 70 - 150 g/m2 and is a smooth dense kraft paper.
Metal
Electrical equipment such as transformers and motors usually uses laminated electrical steel to form the cores of the coils used to produce magnetic fields. The thin laminations reduce the power loss due to eddy currents. Fiber metal laminate is an example of thin metal sheets laminated with glass-fiber-reinforced, epoxy-bonded layers.
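The effect of lamination thickness on eddy-current loss can be estimated with the classical thin-sheet formula P = π²B²f²t²/(6ρD), which shows the loss falling with the square of the sheet thickness. The material values and flux density in the sketch below are illustrative assumptions for a silicon-steel core.

```python
import math

# Classical eddy-current loss estimate for thin laminations. The silicon-steel
# resistivity, density and the peak flux density below are assumed values.

def eddy_loss_w_per_kg(thickness_m, peak_flux_density_t=1.5, frequency_hz=50.0,
                       resistivity_ohm_m=4.7e-7, density_kg_m3=7650.0):
    """Classical thin-sheet formula: P = pi^2 * B^2 * f^2 * t^2 / (6 * rho * D)."""
    return (math.pi ** 2 * peak_flux_density_t ** 2 * frequency_hz ** 2
            * thickness_m ** 2) / (6.0 * resistivity_ohm_m * density_kg_m3)

for t_mm in (0.35, 0.65, 2.0):
    print(f"{t_mm} mm lamination: ~{eddy_loss_w_per_kg(t_mm / 1000.0):.2f} W/kg")
```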
Microelectronics
Lamination is widely used in production of electronic components such as PV solar cells.
Film types
Laminate plastic film is generally categorized into these five categories:
Standard thermal laminating films
Low-temperature thermal laminating films
Heat set (or heat-assisted) laminating films
Pressure-sensitive films
Liquid laminate
Laminators
A laminator is a device which laminates pieces or rolls of paper or card stock, common in offices, schools, and homes.
Pouch
A pouch laminator uses a plastic pouch that is usually sealed on one edge. The inside of the lamination pouch is coated with a heat-activated film that adheres to the product being laminated as it runs through the laminator. The substrate side of the board contains a heat-activated adhesive that bonds the print to the substrate. This can be any of a number of board products or another sheet of laminate. The pouch containing the print, laminate, and substrate is passed through a set of heated rollers under pressure, ensuring that all adhesive layers bond to one another.
Pouch laminators are designed for moderate use in the office or home. For continuous, large-volume lamination projects, a roll laminator performs more efficiently.
Pouches can be bought in different thicknesses, measured in micrometres. Standard home or office machines normally use 80–250 micrometre pouches, depending on the quality of the machine. The thicker the pouch, the higher the cost. Pouches can also be measured in mil, which equals one thousandth of an inch. The most common pouch thicknesses are 3, 5, 7 and 10 mil (76, 127, 178 and 254 μm).
Certain pouches such as butterfly pouches can be used with a pouch laminator to form ID cards. Butterfly pouches are available with magnetic stripes embedded.
Many pouch laminators require the use of a carrier. A carrier holds the pouch as it is run through the laminator. This helps prevent the hot glue, some of which leaks from the sides of the pouches during the process, from gumming up the rollers. The carrier prevents the rollers from getting sticky, which helps to prevent the lamination pouch from wrapping around the rollers inside the laminator.
Many newer laminators claim that they can be used without a carrier. However the use of carriers will extend the laminator's life.
Heated roll
A heated roll laminator uses heated rollers to melt glue extruded onto lamination film. This film is in turn applied to a substrate such as paper or card using pressure rollers. The primary purpose of laminating with such a machine is to embellish or protect printed documents or images. Heated roll laminators can vary in size from handheld or desktop pouch laminators to industrial sized machines. Such industrial laminators are primarily used for high quantity/quality output by printers or print finishers.
Such laminators are used to apply varying thicknesses of lamination film onto substrates such as paper or fabrics. The main advantage of the use of heated roll laminators is speed. Heated laminators use heated rollers or heated shoes to melt the glue which is applied to lamination film. The process of heating the glue prior to applying the film to a substrate allows for a faster application of the film. The laminates and adhesives used are generally cheaper to manufacture than cold roll laminates, often as much as half the cost depending on the comparison made. As the materials are non-adhesive until exposed to heat, they are much easier to handle. The glue is solid at room temperature, so lamination of this type is less likely to shift or warp after its application than pressure activated laminates, which rely on a highly viscous, adhesive fluid.
Roll laminators typically use two rolls to complete the lamination process, with one roll being on top and the other roll on the bottom. These rolls slide onto metal bars, known as mandrels, which are then placed in the machine and feed through it. In the United States, the most common core size found on lamination film is one inch (25- to 27-inch-wide film). Larger format laminators use a larger core, often 2 to 3 inches in diameter. Film is usually available in 1.5, 3, 5, 7, and 10 mil thicknesses. The higher the number, the thicker the film. A mil is one thousandth of an inch (.001").
Printers or print finishers often use industrial heated roll laminators to laminate such things as paperback book covers, magazine covers, posters, cards and postcards, in-shop displays as well as other applications.
Cold roll
Cold roll laminators use a plastic film which is coated with an adhesive and glossy backing which does not adhere to the glue. When the glossy backing is removed, the adhesive is exposed, which then sticks directly onto the item which needs to be laminated. This method, apart from having the obvious benefit of not requiring expensive equipment, is also suitable for those items which would be damaged by heat. Cold laminators range from simple two roller, hand-crank machines up to large and complex motor-driven machines with high precision rollers, adjustable roller pressure, and other advanced features.
Cold lamination increased in popularity with the rise of wide-format inkjet printers, which often used inks and papers incompatible with hot lamination. A large percentage of cold laminate for use in the print industry is PVC, although a wide range of other materials are available. Cold laminating processes are also used outside of the print industry, for example, coating sheet glass or stainless steel with protective films.
Cold roll laminators are also used for laying down adhesive films in the sign-making industry, for example mounting a large print onto a board. A practiced operator can apply a large adhesive sheet in a fraction of the time it takes to do so by hand.
See also
Calender
References
External links
Choosing the Right Type of Laminating Pouch
Composite materials
Airship technology
Glass applications
Articles containing video clips
Office equipment
Paper products | Lamination | [
"Physics"
] | 2,952 | [
"Materials",
"Composite materials",
"Matter"
] |
577,830 | https://en.wikipedia.org/wiki/Titration%20curve | Titrations are often recorded on graphs called titration curves, which generally contain the volume of the titrant as the independent variable and the pH of the solution as the dependent variable (because it changes depending on the composition of the two solutions).
The equivalence point on the graph is where all of the starting solution (usually an acid) has been neutralized by the titrant (usually a base). It can be located precisely by finding the second derivative of the titration curve and computing the points of inflection (where the graph changes concavity); however, in most cases, simple visual inspection of the curve will suffice. In the curve given to the right, both equivalence points are visible, after roughly 15 and 30 mL of NaOH solution has been titrated into the oxalic acid solution. To calculate the logarithmic acid dissociation constant (pKa), one must find the pH at the half-equivalence point, that is, where half the amount of titrant required to reach the equivalence point has been added to form the next compound (here, sodium hydrogen oxalate, then disodium oxalate). Halfway between each equivalence point, at 7.5 mL and 22.5 mL, the pH observed was about 1.5 and 4, giving the two pKa values.
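The derivative-based approach described above can be sketched numerically: the equivalence point is where the slope dpH/dV is steepest, i.e. where the second derivative changes sign. The titration data below are synthetic and purely illustrative.

```python
import numpy as np

# Locate the equivalence point of a titration curve numerically, as described
# above. The S-shaped curve here is synthetic, for illustration only.

volume = np.linspace(0.0, 30.0, 301)                 # mL of titrant added
ph = 7.0 + 4.0 * np.tanh((volume - 15.0) / 0.8)      # idealized titration curve

d1 = np.gradient(ph, volume)       # first derivative, dpH/dV
d2 = np.gradient(d1, volume)       # second derivative

# The steepest point (maximum of d1) coincides with the zero crossing of d2.
idx = np.argmax(d1)
print(f"equivalence point near {volume[idx]:.1f} mL")
print(f"second derivative there: {d2[idx]:.4f}")   # approximately zero

# For a real weak-acid curve, the pH read at half this volume approximates the pKa.
```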
In weak monoprotic acids, the point halfway between the beginning of the curve (before any titrant has been added) and the equivalence point is significant: at that point, the concentrations of the two species (the acid and conjugate base) are equal. Therefore, the Henderson-Hasselbalch equation can be solved in this manner: pH = pKa + log([conjugate base]/[acid]) = pKa + log(1) = pKa.
Therefore, one can easily find the pKa of the weak monoprotic acid by finding the pH of the point halfway between the beginning of the curve and the equivalence point, and solving the simplified equation. In the case of the sample curve, the acid dissociation constant Ka = 10^−pKa would be approximately 1.78×10^−5 from visual inspection (the actual Ka2 is 1.7×10^−5).
For polyprotic acids, calculating the acid dissociation constants is only marginally more difficult: the first acid dissociation constant can be calculated the same way as it would be calculated in a monoprotic acid. The pKa of the second acid dissociation constant, however, is the pH at the point halfway between the first equivalence point and the second equivalence point (and so on for acids that release more than two protons, such as phosphoric acid).
References
Titration | Titration curve | [
"Chemistry"
] | 536 | [
"Instrumental analysis",
"Titration"
] |
577,846 | https://en.wikipedia.org/wiki/Redox%20titration | A redox titration is a type of titration based on a redox reaction between the analyte and titrant. It may involve the use of a redox indicator and/or a potentiometer. A common example of a redox titration is the treatment of a solution of iodine with a reducing agent to produce iodide using a starch indicator to help detect the endpoint. Iodine (I2) can be reduced to iodide (I−) by, say, thiosulfate (), and when all the iodine is consumed, the blue colour disappears. This is called an iodometric titration.
Most often, the reduction of iodine to iodide is the last step in a series of reactions where the initial reactions convert an unknown amount of the solute (the substance being analyzed) to an equivalent amount of iodine, which may then be titrated. Sometimes other halogens (or haloalkanes) besides iodine are used in the intermediate reactions because they are available as more readily measurable standard solutions and/or react more readily with the solute. The extra steps in iodometric titration may be worthwhile because the equivalence point, where the last of the blue colour disappears, is more distinct than in some other analytical or volumetric methods.
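As a rough illustration of the arithmetic behind an iodometric end point, the sketch below converts a titrant volume into an amount of iodine using the standard 2:1 thiosulfate-to-iodine stoichiometry; the 0.100 M concentration and 18.40 mL volume are hypothetical figures, not taken from the text above.

```python
def moles_delivered(molarity, volume_ml):
    """Moles of titrant delivered from a burette reading."""
    return molarity * volume_ml / 1000.0

# Hypothetical end point: 18.40 mL of 0.100 M sodium thiosulfate were needed
# to discharge the starch-iodine colour.  The reaction
#   I2 + 2 S2O3(2-)  ->  2 I(-) + S4O6(2-)
# consumes two thiosulfate ions per iodine molecule:
n_thiosulfate = moles_delivered(0.100, 18.40)
n_iodine = n_thiosulfate / 2.0
print(f"iodine titrated ≈ {n_iodine * 1000:.3f} mmol")  # 0.920 mmol
```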
The main redox titration types are:
{| class="wikitable"
|-
! Redox titration !! Titrant
|-
| Iodometry || Iodine (I2)
|-
| Bromatometry || Bromine (Br2)
|-
| Cerimetry || Cerium(IV) salts
|-
| Permanganometry || Potassium permanganate
|-
| Dichrometry || Potassium dichromate
|-
|}
Sources
See also
Oxidizing agent
Reducing agent
Titration | Redox titration | [
"Chemistry"
] | 400 | [
"Instrumental analysis",
"Titration"
] |
577,858 | https://en.wikipedia.org/wiki/Imitation | Imitation (from Latin imitatio, "a copying, imitation") is a behavior whereby an individual observes and replicates another's behavior. Imitation is also a form of learning that leads to the "development of traditions, and ultimately our culture. It allows for the transfer of information (behaviors, customs, etc.) between individuals and down generations without the need for genetic inheritance." The word imitation can be applied in many contexts, ranging from animal training to politics. The term generally refers to conscious behavior; subconscious imitation is termed mirroring.
Anthropology and social sciences
In anthropology, some theories hold that all cultures imitate ideas from one of a few original cultures or several cultures whose influence overlaps geographically. Evolutionary diffusion theory holds that cultures influence one another, but that similar ideas can be developed in isolation.
Scholars as well as popular authors have argued that the role of imitation in humans is unique among animals. However, this claim has been recently challenged by scientific research which observed social learning and imitative abilities in animals.
Psychologist Kenneth Kaye showed that the ability of infants to match the sounds or gestures of an adult depends on an interactive process of turn-taking over many successive trials, in which adults' instinctive behavior plays as great a role as that of the infant. These writers assume that evolution would have selected imitative abilities as fit because those who were good at it had a wider arsenal of learned behavior at their disposal, including tool-making and language.
However, research also suggests that imitative behaviors and other social learning processes are only selected for when outnumbered or accompanied by asocial learning processes: an over-saturation of imitation and imitating individuals leads humans to collectively copy inefficient strategies and evolutionarily maladaptive behaviors, thereby reducing flexibility to new environmental contexts that require adaptation. Research suggests imitative social learning hinders the acquisition of knowledge in novel environments and in situations where asocial learning is faster and more advantageous.
In the mid-20th century, social scientists began to study how and why people imitate ideas. Everett Rogers pioneered innovation diffusion studies, identifying factors in adoption and profiles of adopters of ideas. Imitation mechanisms play a central role in both analytical and empirical models of collective human behavior.
Neuroscience
Humans are capable of imitating movements, actions, skills, behaviors, gestures, pantomimes, mimics, vocalizations, sounds, speech, etc.; the idea that we have particular "imitation systems" in the brain is old neurological knowledge dating back to Hugo Karl Liepmann. Liepmann's 1908 model, "Das hierarchische Modell der Handlungsplanung" (the hierarchical model of action planning), is still valid. On studying the cerebral localization of function, Liepmann postulated that planned or commanded actions were prepared in the parietal lobe of the brain's dominant hemisphere, and also frontally. His most important pioneering work came when, extensively studying patients with lesions in these brain areas, he discovered that the patients lost (among other things) the ability to imitate. He coined the term "apraxia" and differentiated between ideational and ideomotor apraxia. It is in this basic and wider frame of classical neurological knowledge that the discovery of the mirror neuron has to be seen. Though mirror neurons were first discovered in macaques, their discovery also relates to humans.
Human brain studies using functional magnetic resonance imaging (fMRI) revealed a network of regions in the inferior frontal cortex and inferior parietal cortex which are typically activated during imitation tasks. It has been suggested that these regions contain mirror neurons similar to the mirror neurons recorded in the macaque monkey. However, it is not clear if macaques spontaneously imitate each other in the wild.
Neurologist V. S. Ramachandran argues that the evolution of mirror neurons were important in the human acquisition of complex skills such as language and believes the discovery of mirror neurons to be a most important advance in neuroscience. However, little evidence directly supports the theory that mirror neuron activity is involved in cognitive functions such as empathy or learning by imitation.
Evidence is accumulating that bottlenose dolphins employ imitation to learn hunting and other skills from other dolphins.
Japanese monkeys have been seen to spontaneously begin washing potatoes after seeing humans washing them.
Mirror neuron system
Research has been conducted to locate where in the brain specific parts and neurological systems are activated when humans imitate behaviors and actions of others, leading to the discovery of a mirror neuron system. This neuron system allows a person to observe and then recreate the actions of others. Mirror neurons are premotor and parietal cells in the macaque brain that fire both when the animal performs a goal-directed action and when it sees others performing the same action. Evidence suggests that the mirror neuron system also allows people to comprehend and understand the intentions and emotions of others. Problems of the mirror neuron system may be correlated with the social inadequacies of autism. Many studies have shown that children with autism, compared with typically-developing children, demonstrate reduced activity in the frontal mirror neuron system area when observing or imitating facial emotional expressions; the higher the severity of the condition, the lower the activity in the mirror neuron system.
Animal behavior
Scientists debate whether animals can consciously imitate the unconscious incitement from sentinel animals, whether imitation is uniquely human, or whether humans do a complex version of what other animals do. The current controversy is partly definitional. Thorndike uses "learning to do an act from seeing it done." It has two major shortcomings: first, by using "seeing" it restricts imitation to the visual domain and excludes, e.g., vocal imitation and, second, it would also include mechanisms such as priming, contagious behavior and social facilitation, which most scientists distinguish as separate forms of observational learning. Thorpe suggested defining imitation as "the copying of a novel or otherwise improbable act or utterance, or some act for which there is clearly no instinctive tendency." This definition is favored by many scholars, though questions have been raised about how strictly the term "novel" has to be interpreted and how exactly a performed act has to match the demonstration to count as a copy.
Hayes and Hayes (1952) used the "do-as-I-do" procedure to demonstrate the imitative abilities of their trained chimpanzee "Viki." Their study was repeatedly criticized for its subjective interpretations of their subjects' responses. Replications of this study found much lower matching degrees between subjects and models. However, imitation research focusing on the copying fidelity got new momentum from a study by Voelkl and Huber. They analyzed the motion trajectories of both model and observer monkeys and found a high matching degree in their movement patterns.
Paralleling these studies, comparative psychologists provided tools or apparatuses that could be handled in different ways. Heyes and co-workers reported evidence for imitation in rats that pushed a lever in the same direction as their models, though later on they withdrew their claims due to methodological problems in their original setup. By trying to design a testing paradigm that is less arbitrary than pushing a lever to the left or to the right, Custance and co-workers introduced the "artificial fruit" paradigm, where a small object could be opened in different ways to retrieve food placed inside—not unlike a hard-shelled fruit. Using this paradigm, scientists reported evidence for imitation in monkeys and apes. There remains a problem with such tool (or apparatus) use studies: what animals might learn in such studies need not be the actual behavior patterns (i.e., the actions) that were observed. Instead they might learn about some effects in the environment (i.e., how the tool moves, or how the apparatus works). This type of observational learning, which focuses on results, not actions, has been dubbed emulation (see Emulation (observational learning)).
In an article by Carl Zimmer describing a study by Derek Lyons on human evolution, a chimpanzee was first shown how to retrieve food from a box. The chimpanzee soon caught on and did exactly what the scientist had just done. To see whether the chimpanzee's brain functioned like a human brain, the researchers replicated the experiment with 16 children, following the same procedure; once the children saw how it was done, they followed the same exact steps.
Imitation in animals
Imitation in animals is studied within the field of social learning, which observes how animals learn and adapt through imitation. Ethologists classify imitation in animals as the learning of certain behaviors from conspecifics. These behaviors are usually unique to the species, can be complex in nature, and can benefit the individual's survival.
Some scientists believe true imitation is produced only by humans, arguing that simple learning through sight is not enough to qualify a being as one that can truly imitate. Thorpe defines true imitation as "the copying of a novel or otherwise improbable act or utterance, or some act for which there is clearly no instinctive tendency," a definition that is highly debated for its portrayal of imitation as a mindless repeating act. True imitation is produced when behavioral, visual and vocal imitation is achieved, not just the simple reproduction of exclusive behaviors. Imitation is not a simple reproduction of what one sees; rather, it incorporates intention and purpose. Animal imitation can range from imitation for survival, as a means of surviving or adapting, to imitation possibly driven by curiosity; the purposes vary between animals and produce different results depending on the measured intelligence of the animal.
There is considerable evidence to support true imitation in animals. Experiments performed on apes, birds and, more specifically, the Japanese quail have provided positive results for imitating behavior, demonstrating imitation of opaque behavior. However, the problem lies in the discrepancies over what is considered true imitation in behavior. Birds have demonstrated visual imitation, where the animal simply does as it sees. Studies on apes, however, have shown more advanced results in imitation, with the animals able to remember and learn from what they imitate. Songbirds have specialized brain circuits for song learning and can imitate the vocalizations of others. It is well established that birdsong is a type of animal culture transmitted across generations in certain groups. Studies have demonstrated far more positive results for behavioral imitation in primates and birds than in any other type of animal. Solid positive results for imitation in non-primate mammals and other animals have proven difficult to obtain, and why that is so remains a difficult question for scientists.
Theories
There are two types of theories of imitation: transformational and associative. Transformational theories suggest that the information required to display certain behavior is created internally through cognitive processes, and that observing these behaviors provides incentive to duplicate them; in other words, we already have the codes to recreate any behavior, and observing it results in its replication. Albert Bandura's "social cognitive theory" is one example of a transformational theory. Associative, or "contiguity", theories suggest that the information required to display certain behaviors does not come from within ourselves but solely from our surroundings and experiences. These theories have not yet provided testable predictions in the field of social learning in animals and have yet to yield strong results.
New developments
There have been three major developments in the field of animal imitation. First, behavioral ecologists and experimental psychologists found adaptive patterns of behavior in different vertebrate species in biologically important situations. Second, primatologists and comparative psychologists have found compelling evidence suggesting true learning through imitation in animals. Third, population biologists and behavioral ecologists created experiments that require animals to depend on social learning in certain manipulated environments.
Child development
Developmental psychologist Jean Piaget noted that children in a developmental phase he called the sensorimotor stage (a period which lasts up to the first two years of a child) begin to imitate observed actions. This is an important stage in the development of a child because the child is beginning to think symbolically, associating behaviors with actions, thus setting the child up for the development of further symbolic thinking. Imitative learning also plays a crucial role in the development of cognitive and social communication behaviors, such as language, play, and joint attention. Imitation serves as both a learning and a social function because new skills and knowledge are acquired, and communication skills are improved by interacting in social and emotional exchanges. It is shown, however, that "children with autism exhibit significant deficits in imitation that are associated with impairments in other social communication skills." To help children with autism, reciprocal imitation training (RIT) is used. It is a naturalistic imitation intervention that helps teach the social benefits of imitation during play by increasing child responsiveness and by increasing imitative language.
Reinforcement learning, both positive and negative, and punishment, are used by people that children imitate to either promote or discontinue behavior. If a child imitates a certain type of behavior or action and the consequences are rewarding, the child is very likely to continue performing the same behavior or action. The behavior "has been reinforced (i.e. strengthened)". However, if the imitation is not accepted and approved by others, then the behavior will be weakened.
Naturally, children are surrounded by many different types of people that influence their actions and behaviors, including parents, family members, teachers, peers, and even characters on television programs. These different types of individuals that are observed are called models. According to Saul McLeod, "these models provide examples of masculine and feminine behavior to observe and imitate." Children imitate the behavior they have observed from others, regardless of the gender of the person and whether or not the behavior is gender appropriate. However, it has been proven that children will reproduce the behavior that "its society deems appropriate for its sex."
Infants
Infants have the ability to reveal an understanding of certain outcomes before they occur; in this sense they can somewhat imitate what they have perceived. Andrew N. Meltzoff ran a series of tasks in which 14-month-old infants were to imitate actions they perceived adults performing. He concluded that the infants, before trying to reproduce the actions they wished to imitate, somehow revealed an understanding of the intended goal, even though they failed to replicate the result they wished to imitate. These tasks indicated that the infants knew the intended goal. Gergely, Bekkering, and Király (2002) found that infants understand not only the intended goal but also the intentions of the person they were trying to imitate, engaging in "rational imitation", as described by Tomasello, Carpenter and others.
It has long been claimed that newborn humans imitate bodily gestures and facial expressions as soon as their first few days of life. For example, in a study conducted at the Mailman Centre for Child Development at the University of Miami Medical School, 74 newborn babies (with a mean age of 36 hours) were tested to see if they were able to imitate a smile, a frown and a pout, and a wide-open mouth and eyes. An observer stood behind the experimenter (so he/she couldn't see what facial expressions were being made by the experimenter) and watched only the babies' facial expressions, recording their results. Just by looking only at the babies' faces, the observer was more often able to correctly guess what facial expression was being presented to the child by the experimenter. After the results were calculated, "the researchers concluded that...babies have an innate ability to compare an expression they see with their own sense of muscular feedback from making the movements to match that expression."
However, the idea that imitation is an inborn ability has been recently challenged. A research group from the University of Queensland in Australia carried out the largest-ever longitudinal study of neonatal imitation in humans. One hundred and nine newborns were shown a variety of gestures including tongue protrusion, mouth opening, happy and sad facial expressions, at four time points between one week and 9 weeks of age. The results failed to reveal compelling evidence that newborns imitate: Infants were just as likely to produce matching and non-matching gestures in response to what they saw.
At around eight months, infants will start to copy their child care providers' movements when playing pat-a-cake and peek-a-boo, as well as imitating familiar gestures, such as clapping hands together or patting a doll's back. At around 18 months, infants will then begin to imitate simple actions they observe adults doing, such as taking a toy phone out of a purse and saying "hello", pretending to sweep with a child-sized broom, as well as imitating using a toy hammer.
Toddlers
At around 30–36 months, toddlers will start to imitate their parents by pretending to get ready for work and school and saying the last word(s) of what an adult just said. For example, toddlers may say "bowl" or "a bowl" after they hear someone say, "That's a bowl." They may also imitate the way family members communicate by using the same gestures and words. For example, a toddler will say, "Mommy bye-bye" after the father says, "Mommy went bye-bye."
Toddlers love to imitate their parents and help when they can; imitation helps toddlers learn, and through their experiences lasting impressions are made. 12- to 36-month-olds learn by doing, not by watching, and so it is often recommended to be a good role model and caretaker by showing them simple tasks like putting on socks or holding a spoon.
Duke developmental psychologist Carol Eckerman did a study on toddlers imitating toddlers and found that at the age of 2 children involve themselves in imitation play to communicate with one another. This can be seen within a culture or across different cultures. Three common imitative patterns Eckerman found were reciprocal imitation, follow-the-leader, and lead-follow.
Kenneth Kaye's "apprenticeship" theory of imitation rejected assumptions that other authors had made about its development. His research showed that there is no one simple imitation skill with its own course of development. What changes is the type of behavior imitated.
An important agenda for infancy is the progressive imitation of higher levels of use of signs, until the ultimate achievement of symbols. The principal role played by parents in this process is their provision of salient models within the facilitating frames that channel the infant's attention and organize his imitative efforts.
Gender and age differences
Imitation and imitative behaviors do not manifest ubiquitously and evenly in all human individuals; some individuals rely more on imitated information than others. Although imitation is very useful when it comes to cognitive learning with toddlers, research has shown that there are some gender and age differences when it comes to imitation. Research done to judge imitation in toddlers 2–3 years old shows that when faced with certain conditions "2-year-olds displayed more motor imitation than 3-year-olds, and 3-year-olds displayed more verbal-reality imitation than 2-year-olds. Boys displayed more motor imitation than girls."
No research on gender differences in toddler imitation is more controversial than psychologist Bandura's Bobo doll experiments. The goal of the experiment was to see what happens when toddlers are exposed to aggressive and non-aggressive adults: would the toddlers imitate the behavior of the adults and, if so, which gender is more likely to imitate the aggressive adult? At the beginning of the experiment Bandura made several predictions that were borne out. Children exposed to violent adults imitated the actions of that adult when the adult was not present, and boys who had observed an adult of the opposite sex act aggressively were less likely to act violently than those who witnessed a male adult act violently. In fact, "boys who observed an adult male behaving violently were more influenced than those who had observed a female model behavior aggressively". One observation was that while boys are likely to imitate physical acts of violence, girls are likely to imitate verbal acts of violence.
Negative imitation
Imitation plays a major role in how a toddler interprets the world. Much of a child's understanding is derived from imitation because, lacking verbal skills, toddlers rely on imitation for communication. It is what connects them to the communicating world, and as they continue to grow they begin to learn more. This may mean that it is crucial for parents to be cautious about how they act and behave around their toddlers. Imitation is the toddler's way of confirming and disconfirming socially acceptable actions in society. Actions like washing dishes, cleaning up the house and doing chores are actions parents want their toddlers to imitate. Imitating negative things is never beyond young toddlers. If they are exposed to cursing and violence, it becomes what the child views as the norm of their world, since imitation is the "mental activity that helps to formulate the conceptions of the world for toddlers". So it is important for parents to be careful what they say or do in front of their children.
Autism
Children with autism exhibit significant impairment in imitation skills. Imitation deficits have been reported on a variety of tasks including symbolic and non-symbolic body movements, symbolic and functional object use, vocalizations, and facial expressions. In contrast, typically-developing children can copy a broad range of novel (as well as familiar) rules from a very early age. Problems with imitation discriminate children with autism from those with other developmental disorders as early as age 2 and continue into adulthood.
Children with autism exhibit significant deficits in imitation that are associated with impairments in other social communication skills. It is unclear whether imitation is mediating these relationships directly, or whether they are due to some other developmental variable that is also reflected in the measurement of imitation skills.
On the contrary, research from the early 21st century suggests that people affected with forms of high-functioning autism easily interact with one another by using a more analytically-centered communication approach rather than an imitative cue-based approach, suggesting that reduced imitative capabilities do not affect abilities for expressive social behavior but only the understanding of said social behavior. Social communication is not negatively affected when said communication involves less or no imitation. Children with autism may have significant problems understanding typical social communication not because of inherent social deficits, but because of differences in communication style which affect reciprocal understanding.
Autistic individuals are also shown to possess increased analytical, cognitive, and visual processing, suggesting that they have no true impairments in observing the actions of others but may decide not to imitate them because they do not analytically understand them. A 2016 study has shown that involuntary, spontaneous facial mimicry – which supposedly depends on the mirror neuron system – is intact in individuals with autism, contrasting with previous studies and suggesting that the mirror neuron system is not inherently broken in autistic individuals.
Automatic imitation
Automatic imitation occurs very quickly when a stimulus to replicate is presented. The imitation can either match the command with the visual stimulus (compatible) or fail to match the command with the visual stimulus (incompatible). An example is 'Simon Says', a game in which children are told to follow the commands given by an adult. In this game, the adult gives the commands and shows the actions; the command given can either match the action to be done or not match it. The children who imitate the adult's command with the correct action stay in the game. The children who imitate the command with the wrong action go out of the game, and this is where the child's automatic imitation comes into play. Psychologically, the visual stimulus the child is looking at is imitated faster than the spoken command. In addition, response times are faster in compatible scenarios than in incompatible scenarios.
Children are surrounded by many different people, day by day. Their parents make a big impact on them, and usually what children do is what they have seen their parent do. Research has found that a child who simply watches its mother sweep the floor will soon pick up on the behavior and start to imitate it by sweeping the floor. By imitating, children effectively teach themselves how to do things without instruction from the parent or guardian. Toddlers love to play the game of house. They pick up the game of house from television, school or home, and they play the game as they see it. Children imitate their parents or anybody in their family, and they easily pick up on the things they see on an everyday basis.
Over-imitation
Over-imitation is "the tendency of young children to copy all of an adult model's actions, even components that are irrelevant for the task at hand." According to this human and cross-cultural phenomenon, a child has a strong tendency to automatically encode the deliberate action of an adult as causally meaningful even when the child observes evidence that proves that its performance is unnecessary. It is suggested that over-imitation "may be critical to the transmission of human culture." Experiments done by Lyons et al. (2007) have shown that when there are obvious pedagogical cues, children tend to imitate step by step, including many unnecessary steps; without pedagogical cues, children will simply skip those useless steps.
However, another study suggests that children do not just "blindly follow the crowd" since they can also be just as discriminating as adults in choosing whether an unnecessary action should be copied or not. They may imitate additional but unnecessary steps to a novel process if the adult demonstrations are all the same. However, in cases where one out of four adults showed a better technique, only 40% actually copied the extra step, as described by Evans, Carpenter and others. Children's imitation is selective, also known as "selective imitation". Studies have shown that children tend to imitate older, competitive, and trustworthy individuals.
Deferred imitation
Piaget coined the term deferred imitation and suggested that it arises out of the child's increasing ability to "form mental representations of behavior performed by others." Deferred imitation is also "the ability to reproduce a previously witnessed action or sequence of actions in the absence of current perceptual support for the action." Instead of copying what is currently occurring, individuals repeat the action or behavior later on. It appears that infants show an improving ability for deferred imitation as they get older, especially by 24 months. By 24 months, infants are able to imitate action sequences after a delay of up to three months, meaning that "they're able to generalize knowledge they have gained from one test environment to another and from one test object to another."
A child's deferred imitation ability "to form mental representations of actions occurring in everyday life and their knowledge of communicative gestures" has also been linked to earlier productive language development. Between 9 (preverbal period) and 16 months (verbal period), deferred imitation performance on a standard actions-on-objects task was consistent in one longitudinal study testing participants' ability to complete a target action, with high achievers at 9 months remaining so at 16 months. Gestural development at 9 months was also linked to productive language at 16 months. Researchers now believe that early deferred imitation ability is indicative of early declarative memory, also considered a predictor of productive language development.
See also
Appropriation (sociology)
Articulation (sociology)
Associative Sequence Learning
Cognitive imitation
Copycat crime
Copycat suicide
Identification (psychology)
Mimicry
Royal Commission on Animal Magnetism
References
Further reading
External links
M. Metzmacher, 1995. La transmission du chant chez le Pinson des arbres (Fringilla c. coelebs) : phase sensible et rôle des tuteurs chez les oiseaux captifs. Alauda, 63 : 123 – 134.
M. Metzmacher, 2016. Imitations et transmission culturelle dans le chant du Pinson des arbres Fringilla coelebs ? Alauda, 84 : 203-220.
Social learning theory
Behaviorism
Copying | Imitation | [
"Biology"
] | 5,843 | [
"Behavior",
"Behaviorism",
"Social learning theory"
] |
577,876 | https://en.wikipedia.org/wiki/Redox%20indicator | A redox indicator (also called an oxidation-reduction indicator) is an indicator which undergoes a definite color change at a specific electrode potential.
The requirement for fast and reversible color change means that the oxidation-reduction equilibrium for an indicator redox system needs to be established very quickly. Therefore, only a few classes of organic redox systems can be used for indicator purposes.
There are two common classes of redox indicators:
metal complexes of phenanthroline and bipyridine. In these systems, the metal changes oxidation state.
organic redox systems such as methylene blue. In these systems, a proton participates in the redox reaction. Therefore, sometimes redox indicators are also divided into two general groups: independent or dependent on pH.
The most common redox indicators are organic compounds.
Redox indicator example:
The molecule 2,2'-bipyridine is a redox indicator. In solution, it changes from light blue to red at an electrode potential of 0.97 V.
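The sharpness of such a colour change can be pictured with the Nernst equation, which relates the ratio of oxidized to reduced indicator to the electrode potential. The sketch below is illustrative only: the one-electron couple (n = 1) is an assumption, and only the 0.97 V transition potential comes from the text above.

```python
import math

def oxidized_fraction(potential_v, e0_v, n=1, temperature_k=298.15):
    """Fraction of an indicator in its oxidized form at a given potential,
    from the Nernst equation E = E0 + (RT/nF)*ln([Ox]/[Red])."""
    R, F = 8.314, 96485.0
    ratio = math.exp(n * F * (potential_v - e0_v) / (R * temperature_k))
    return ratio / (1.0 + ratio)

# Around the 0.97 V transition quoted above (n = 1 is an assumed value):
for e in (0.90, 0.97, 1.04):
    print(f"E = {e:.2f} V  ->  oxidized fraction ≈ {oxidized_fraction(e, 0.97):.2f}")
```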
pH independent
pH dependent
See also
Chemical analysis
pH indicator
Complexometric indicator
References
External links
Redox Indicators. Characteristics And Applications
Redox indicators
Physical chemistry | Redox indicator | [
"Physics",
"Chemistry"
] | 236 | [
"Applied and interdisciplinary physics",
"Redox indicators",
"Electrochemistry",
"nan",
"Physical chemistry"
] |
577,881 | https://en.wikipedia.org/wiki/Chromate%20and%20dichromate | Chromate salts contain the chromate anion, . Dichromate salts contain the dichromate anion, . They are oxyanions of chromium in the +6 oxidation state and are moderately strong oxidizing agents. In an aqueous solution, chromate and dichromate ions can be interconvertible.
Chemical properties
Chromates react with hydrogen peroxide, giving products in which peroxide, O22−, replaces one or more oxygen atoms. In acid solution the unstable blue peroxo complex chromium(VI) oxide peroxide, CrO(O2)2, is formed; it is an uncharged covalent molecule, which may be extracted into ether. Addition of pyridine results in the formation of the more stable complex CrO(O2)2py.
Acid–base properties
In aqueous solution, chromate and dichromate anions exist in a chemical equilibrium: 2 CrO42− + 2 H+ ⇌ Cr2O72− + H2O.
The predominance diagram shows that the position of the equilibrium depends on both pH and the analytical concentration of chromium. The chromate ion is the predominant species in alkaline solutions, but dichromate can become the predominant ion in acidic solutions.
Further condensation reactions can occur in strongly acidic solution with the formation of trichromates, Cr3O102−, and tetrachromates, Cr4O132−. All polyoxyanions of chromium(VI) have structures made up of tetrahedral CrO4 units sharing corners.
The hydrogen chromate ion, HCrO4−, is a weak acid:
HCrO4− ⇌ CrO42− + H+; pKa ≈ 5.9
It is also in equilibrium with the dichromate ion:
2 HCrO4− ⇌ Cr2O72− + H2O
This equilibrium does not involve a change in hydrogen ion concentration, which would predict that the equilibrium is independent of pH. The red line on the predominance diagram is not quite horizontal due to the simultaneous equilibrium with the chromate ion. The hydrogen chromate ion may be protonated, with the formation of molecular chromic acid, H2CrO4, but the pKa for the equilibrium
H2CrO4 ⇌ HCrO4− + H+
is not well characterized. Reported values vary between about −0.8 and 1.6.
The dichromate ion is a somewhat weaker base than the chromate ion:
HCr2O7− ⇌ Cr2O72− + H+, pKa = 1.18
The pKa value for this reaction shows that it can be ignored at pH > 4.
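A rough numerical picture of how the chromate–dichromate balance shifts with pH can be obtained from the equilibria above. In the sketch below, the pKa of 5.9 comes from the text, while the dimerisation constant (log K ≈ 2.2 for 2 HCrO4− ⇌ Cr2O72− + H2O) is an assumed, illustrative value; activity corrections and molecular H2CrO4 are ignored.

```python
import math

def chromium_vi_speciation(total_cr_molar, ph, pka_hcro4=5.9, log_k_dimer=2.2):
    """Approximate speciation of Cr(VI) between HCrO4-, CrO4(2-) and Cr2O7(2-).

    pKa of HCrO4- is taken from the text; the dimerisation constant
    K = [Cr2O7(2-)]/[HCrO4-]^2 ≈ 10^2.2 L/mol is an assumed example value.
    """
    h = 10.0 ** (-ph)
    ka = 10.0 ** (-pka_hcro4)
    kd = 10.0 ** log_k_dimer
    # Mass balance: C = [HCrO4-](1 + Ka/[H+]) + 2*Kd*[HCrO4-]^2, a quadratic in [HCrO4-].
    a, b, c = 2.0 * kd, 1.0 + ka / h, -total_cr_molar
    hcro4 = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return {
        "HCrO4-": hcro4,
        "CrO4 2-": ka * hcro4 / h,
        "Cr2O7 2-": kd * hcro4 ** 2,
    }

# 0.1 M total Cr(VI): dichromate dominates in acid, chromate in alkali.
for ph_value in (2.0, 6.0, 10.0):
    print(ph_value, chromium_vi_speciation(0.1, ph_value))
```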
Oxidation–reduction properties
The chromate and dichromate ions are fairly strong oxidizing agents. Commonly three electrons are added to a chromium atom, reducing it to oxidation state +3. In acid solution the aquated Cr3+ ion is produced.
Cr2O72− + 14 H+ + 6 e− → 2 Cr3+ + 7 H2O ε0 = 1.33 V
In alkaline solution chromium(III) hydroxide is produced. The redox potential shows that chromates are weaker oxidizing agent in alkaline solution than in acid solution.
CrO42− + 4 H2O + 3 e− → Cr(OH)3 + 5 OH− ε0 = −0.13 V
Applications
Approximately of hexavalent chromium, mainly sodium dichromate, were produced in 1985. Chromates and dichromates are used in chrome plating to protect metals from corrosion and to improve paint adhesion. Chromate and dichromate salts of heavy metals, lanthanides and alkaline earth metals are only very slightly soluble in water and are thus used as pigments. The lead-containing pigment chrome yellow was used for a very long time before environmental regulations discouraged its use. When used as oxidizing agents or titrants in a redox chemical reaction, chromates and dichromates convert into trivalent chromium, Cr3+, salts of which typically have a distinctively different blue-green color.
Natural occurrence and production
The primary chromium ore is the mixed metal oxide chromite, FeCr2O4, found as brittle metallic black crystals or granules. Chromite ore is heated with a mixture of calcium carbonate and sodium carbonate in the presence of air. The chromium is oxidized to the hexavalent form, while the iron forms iron(III) oxide, Fe2O3:
4 FeCr2O4 + 8 Na2CO3 + 7 O2 → 8 Na2CrO4 + 2 Fe2O3 + 8 CO2
Subsequent leaching of this material at higher temperatures dissolves the chromates, leaving a residue of insoluble iron oxide. Normally the chromate solution is further processed to make chromium metal, but a chromate salt may be obtained directly from the liquor.
Chromate containing minerals are rare. Crocoite, PbCrO4, which can occur as spectacular long red crystals, is the most commonly found chromate mineral. Rare potassium chromate minerals and related compounds are found in the Atacama Desert. Among them is lópezite – the only known dichromate mineral.
Toxicity
Hexavalent chromium compounds can be toxic and carcinogenic (IARC Group 1). Inhaling particles of hexavalent chromium compounds can cause lung cancer. Also positive associations have been observed between exposure to chromium (VI) compounds and cancer of the nose and nasal sinuses. The use of chromate compounds in manufactured goods is restricted in the EU (and by market commonality the rest of the world) by EU Parliament directive on the Restriction of Hazardous Substances (RoHS) Directive (2002/95/EC).
See also
Chromate conversion coating
Notes
References
External links
National Pollutant Inventory - Chromium(VI) and compounds fact sheet
Demonstration of chromate-dichromate equilibrium
Oxidizing agents
Transition metal oxyanions
Oxometallates | Chromate and dichromate | [
"Chemistry"
] | 1,207 | [
"Chromates",
"Redox",
"Oxidizing agents",
"Salts"
] |
577,886 | https://en.wikipedia.org/wiki/Nurture | Nurture is usually defined as the process of caring for an organism, as it grows, usually a human. It is often used in debates as the opposite of "nature", whereby nurture means the process of replicating learned cultural information from one mind to another, and nature means the replication of genetic non-learned behavior.
Nurture is important in the nature versus nurture debate because some people see either nature or nurture as the primary origin of most of humanity's behaviours. There are many agents of socialization that are responsible, in some respects, for the outcome of a child's personality, behaviour, thoughts, social and emotional skills, feelings, and mental priorities.
Notes
References
Ecology
Virtue
Psychology
Nature | Nurture | [
"Biology"
] | 148 | [
"Behavioural sciences",
"Ecology",
"Behavior",
"Psychology"
] |
577,908 | https://en.wikipedia.org/wiki/Complexometric%20titration | Complexometric titration (sometimes chelatometry) is a form of volumetric analysis in which the formation of a colored complex is used to indicate the end point of a titration. Complexometric titrations are particularly useful for the determination of a mixture of different metal ions in solution. An indicator capable of producing an unambiguous color change is usually used to detect the end-point of the titration. Complexometric titrations are those reactions where a simple ion is transformed into a complex ion and the equivalence point is determined by using metal indicators or electrometrically.
Reactions used
In theory, any complexation reaction can be used as a volumetric technique provided that:
The reaction reaches equilibrium rapidly after each portion of titrant is added.
Interfering situations do not arise. For instance, the stepwise formation of several different complexes of the metal ion with the titrant, resulting in the presence of more than one complex in solution during the titration process.
A complexometric indicator capable of locating equivalence point with fair accuracy is available.
In practice, the use of EDTA as a titrant is well established.
Use of EDTA
EDTA, ethylenediaminetetraacetic acid, has four carboxyl groups and two amine groups that can act as electron pair donors, or Lewis bases. The ability of EDTA to potentially donate its six lone pairs of electrons for the formation of coordinate covalent bonds to metal cations makes EDTA a hexadentate ligand. However, in practice EDTA is usually only partially ionized, and thus forms fewer than six coordinate covalent bonds with metal cations.
Disodium EDTA is commonly used to standardize aqueous solutions of transition metal cations. Disodium EDTA (often written as Na2H2Y) only forms four coordinate covalent bonds to metal cations at pH values ≤ 12. In this pH range, the amine groups remain protonated and thus unable to donate electrons to the formation of coordinate covalent bonds. Note that the shorthand form Na4−xHxY can be used to represent any species of EDTA, with x designating the number of acidic protons bonded to the EDTA molecule.
EDTA forms an octahedral complex with most 2+ metal cations, M2+, in aqueous solution. The main reason that EDTA is used so extensively in the standardization of metal cation solutions is that the formation constant for most metal cation-EDTA complexes is very high, meaning that the equilibrium for the reaction:
M2+ + H4Y → MH2Y + 2H+
lies far to the right. Carrying out the reaction in a basic buffer solution removes H+ as it is formed, which also favors the formation of the EDTA-metal cation complex reaction product. For most purposes it can be considered that the formation of the metal cation-EDTA complex goes to completion, and this is chiefly why EDTA is used in titrations and standardizations of this type.
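Because the metal–EDTA reaction can be treated as going to completion, the arithmetic of such a titration is straightforward. The sketch below assumes the usual 1:1 metal-to-EDTA stoichiometry; the volumes and concentration are hypothetical figures used only for illustration.

```python
def metal_ion_concentration(edta_molarity, edta_volume_ml, sample_volume_ml):
    """Metal-ion concentration from an EDTA titration, assuming the usual
    1:1 metal:EDTA complex and a reaction that goes to completion."""
    moles_edta = edta_molarity * edta_volume_ml / 1000.0   # mol of titrant used
    return moles_edta / (sample_volume_ml / 1000.0)        # mol/L of metal ion

# Hypothetical hardness-style titration: 25.00 mL of sample needs 14.80 mL
# of 0.0100 M disodium EDTA to reach the indicator end point.
print(f"[M2+] ≈ {metal_ion_concentration(0.0100, 14.80, 25.00):.5f} M")  # 0.00592 M
```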
Data processing and calculation
The first step is to plot the absorbance (A) values of the standard solutions against their molar concentrations (c), and to draw the best straight line through the origin. The experimental points follow Beer's law:
A = E*c*l, where E is the molar extinction coefficient and l is the optical path length (usually 1 cm).
The second step is to measure the absorbance (A′) of the unknown solution and read it against the absorbance–concentration plot of the standards, thereby obtaining the molar concentration of the unknown solution. This is calculated using the formula concentration of unknown = A′/(E*l). It can also be calculated from the relation:
concentration of unknown / concentration of known = A′/A.
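The calibration-and-read-off procedure described above can be expressed compactly in code. The sketch below fits the best straight line through the origin to hypothetical standards and then applies c = A′/(E*l) to an unknown; all numbers are invented for illustration.

```python
import numpy as np

# Hypothetical standards: known concentrations (mol/L) and measured absorbances.
conc = np.array([0.5e-4, 1.0e-4, 2.0e-4, 4.0e-4])
absorbance = np.array([0.061, 0.125, 0.248, 0.500])

# Best straight line through the origin: A = (E*l)*c, so slope = E*l.
slope = np.sum(conc * absorbance) / np.sum(conc ** 2)

# Unknown solution measured in the same cell under the same conditions.
a_unknown = 0.310
c_unknown = a_unknown / slope          # same as A'/(E*l)
print(f"E*l ≈ {slope:.0f} L/mol, unknown concentration ≈ {c_unknown:.2e} mol/L")
```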
Indicators
To carry out metal cation titrations using EDTA, it is almost always necessary to use a complexometric indicator to determine when the end point has been reached. Common indicators are organic dyes such as Fast Sulphon Black, Eriochrome Black T, Eriochrome Red B, Patton Reeder, or Murexide. Color change shows that the indicator has been displaced (usually by EDTA) from the metal cations in solution when the end point has been reached. Thus, the free indicator (rather than the metal complex) serves as the endpoint indicator.
See also
Titration
Triethanolamine
References
Bibliography
“Spectroscopic Estimations “ G.N Mukherjee. Page 30
Titration
ja:キレート滴定 | Complexometric titration | [
"Chemistry"
] | 948 | [
"Instrumental analysis",
"Titration"
] |
577,961 | https://en.wikipedia.org/wiki/Department%20of%20Plant%20Sciences%2C%20University%20of%20Cambridge | The Department of Plant Sciences is a department of the University of Cambridge that conducts research and teaching in plant sciences. It was established in 1904, although the university has had a professor of botany since 1724.
Research
The department pursues three strategic targets of research:
Global food security
Synthetic biology and biotechnology
Climate science and ecosystem conservation
See also the Sainsbury Laboratory Cambridge University
Notable academic staff
Sir David Baulcombe, FRS, Regius Professor of Botany
Beverley Glover, Professor of Plant systematics and evolution, director of the Cambridge University Botanic Garden
Howard Griffiths, Professor of Plant Ecology
Julian Hibberd, Professor of Photosynthesis
Alison Smith, Professor of Plant Biochemistry and Head of Department
The department also has 66 members of faculty and postdoctoral researchers, 100 graduate students, 19 Biotechnology and Biological Sciences Research Council (BBSRC) Doctoral Training Program (DTP) PhD students, 20 Part II Tripos undergraduate students and 44 support staff.
History
The University of Cambridge has a long and distinguished history in botany, including work by John Ray and Stephen Hales in the 17th and 18th centuries, Charles Darwin's mentor John Stevens Henslow in the 19th century, and Frederick Blackman, Arthur Tansley and Harry Godwin in the 20th century.
Emeritus and alumni
More recently, the department has been home to:
John C. Gray, Emeritus Professor of Plant Molecular Biology since 2011
Thomas ap Rees, Professor of Botany
F. Ian Woodward, Lecturer and Fellow of Trinity Hall, Cambridge before being appointed Professor of Plant Ecology at the University of Sheffield
References
Plant Sciences, Department of
Biotechnology in the United Kingdom
Cambridge
Universities and colleges established in 1904
1904 establishments in England | Department of Plant Sciences, University of Cambridge | [
"Biology"
] | 328 | [
"Biotechnology in the United Kingdom",
"Biotechnology by country"
] |
577,962 | https://en.wikipedia.org/wiki/Half-cell | In electrochemistry, a half-cell is a structure that contains a conductive electrode and a surrounding conductive electrolyte separated by a naturally occurring Helmholtz double layer. Chemical reactions within this layer momentarily pump electric charges between the electrode and the electrolyte, resulting in a potential difference between the electrode and the electrolyte. The typical anode reaction involves a metal atom in the electrode being dissolved and transported as a positive ion across the double layer, causing the electrolyte to acquire a net positive charge while the electrode acquires a net negative charge. The growing potential difference creates an intense electric field within the double layer, and the potential rises in value until the field halts the net charge-pumping reactions. This self-limiting action occurs almost instantly in an isolated half-cell; in applications two dissimilar half-cells are appropriately connected to constitute a Galvanic cell.
A standard half-cell consists of a metal electrode in an aqueous solution where the concentration of the metal ions is 1 molar (1 mol/L) at 298 kelvins (25 °C). In the case of the standard hydrogen electrode (SHE), a platinum electrode is used and is immersed in an acidic solution where the concentration of hydrogen ions is 1 M, with hydrogen gas at 1 atm being bubbled through the solution. The electrochemical series, which consists of standard electrode potentials and is closely related to the reactivity series, was generated by measuring the potential difference between a metal half-cell and a standard hydrogen half-cell in a circuit, connected by a salt bridge.
The standard hydrogen half-cell:
2H+(aq) + 2e− → H2(g)
The half-cells of a Daniell cell:
Original equation
Zn + Cu2+ → Zn2+ + Cu
Half-cell (anode) of Zn
Zn → Zn2+ + 2e−
Half-cell (cathode) of Cu
Cu2+ + 2e− → Cu
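For orientation, the overall potential of a cell built from two half-cells is the difference of their electrode potentials. The sketch below uses commonly tabulated standard reduction potentials for the copper and zinc couples (+0.34 V and −0.76 V, values not quoted in this article) to recover the familiar ≈1.10 V of the Daniell cell.

```python
# Commonly tabulated standard reduction potentials (V vs. the SHE);
# these values are assumptions drawn from standard tables, not from the text.
STANDARD_POTENTIALS = {"Cu2+/Cu": +0.34, "Zn2+/Zn": -0.76}

def standard_cell_potential(cathode_couple, anode_couple):
    """E cell = E(cathode) - E(anode); a positive value means the
    overall reaction is spontaneous as written."""
    return STANDARD_POTENTIALS[cathode_couple] - STANDARD_POTENTIALS[anode_couple]

# Daniell cell: copper half-cell as cathode, zinc half-cell as anode.
print(f"E cell ≈ {standard_cell_potential('Cu2+/Cu', 'Zn2+/Zn'):.2f} V")  # ≈ 1.10 V
```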
See also
Standard electrode potential (data page)
References
Electrochemistry
Electrochemical cells | Half-cell | [
"Chemistry"
] | 432 | [
"Electrochemistry",
"Physical chemistry stubs",
"Electrochemical cells",
"Electrochemistry stubs"
] |
577,974 | https://en.wikipedia.org/wiki/John%20Innes%20Centre | The John Innes Centre (JIC), located in Norwich, Norfolk, England, is an independent centre for research and training in plant and microbial science founded in 1910. It is a registered charity (No 223852) grant-aided by the Biotechnology and Biological Sciences Research Council (BBSRC), the European Research Council (ERC) and the Bill and Melinda Gates Foundation and is a member of the Norwich Research Park. In 2017, the John Innes Centre was awarded a gold Athena SWAN Charter award.
History
The John Innes Horticultural Institution was founded in 1910 at Merton Park, Surrey (now London Borough of Merton), with funds bequeathed by John Innes, a merchant and philanthropist. The Institution occupied Innes's former estate at Merton Park, Surrey until 1945 when it moved to Bayfordbury, Hertfordshire. It moved to its present site in 1967.
In 1910, William Bateson became the first director of the John Innes Horticultural Institution and moved with his family to Merton Park. John Innes compost was developed by the institution in the 1930s, which donated the recipe to the "Dig for Victory" war effort. The John Innes Centre has never sold John Innes compost.
During the 1980s, the administration of the John Innes Institute was combined with that of the Plant Breeding Institute (formerly at Trumpington, Cambridgeshire) and the Nitrogen Fixation Laboratory. In 1994, following the relocation of the operations of other two organisations to the Norwich site, the three were merged as the John Innes Centre.
As of 2011 the institute was divided into six departments: Biological Chemistry, Cell & Developmental Biology, Computational & Systems Biology, Crop Genetics, Metabolic Biology and Molecular Microbiology.
The John Innes Centre has a tradition of training PhD students and post-docs. PhD degrees obtained via the John Innes Centre are awarded by the University of East Anglia. The John Innes Centre has a contingent of postdoctoral researchers, many of whom are recruited onto the institute's Post-doctoral Training Fellowship programme. The John Innes Centre also sponsors seminars and lectures, including the Bateson Lecture, Biffen Lecture, Chatt Lecture, Darlington Lecture and Haldane Lecture.
Research
The research at the John Innes Centre is divided into four Institute Strategic Programs (ISPs) funded by the Biotechnology and Biological Sciences Research Council (BBSRC). These ISPs, which combine the research of multiple groups to address a greater aim, were, from 2017 to 2023, as follows:
Genes in the Environment - aims to develop a wider and deeper understanding of how the environment influences plant growth and development.
Molecules from Nature - will investigate the vast diversity of chemicals produced by plants and microbes.
Plant Health - aims to understand the molecular dialogue between plants and microbes, establishing how they communicate with each other and how they have evolved in relation to one another.
Designing Future Wheat - a program with other BBSRC institutes, Rothamsted Research and the National Institute for Agricultural Botany (NIAB), and with the University of Nottingham and the University of Bristol.
Affiliations
The John Innes Centre is co-located with The Sainsbury Laboratory (Norwich), an institute focused on studying plant disease. The Sainsbury Laboratory is closely affiliated with the University of East Anglia. Along with the Institute of Food Research and the University of East Anglia (UEA), JIC hosted the BA Festival of Science (now the British Science Festival) in September 2006. The John Innes Centre, the University of East Anglia (UEA), The Sainsbury Laboratory, The Earlham Institute and Quadram Institute Bioscience have, since 2016, run Women of the Future, an event aimed at promoting careers in science to young women.
Directors
The John Innes Centre has been directed by:
William Bateson (1910–1926)
A. Daniel Hall (1926–1939)
C. D. Darlington (1939–1953)
Kenneth Dodds (1953–1967)
Roy Markham (1967–1980)
Harold Woolhouse (1980–1988)
Richard B. Flavell (1988–1999)
Chris Lamb (1999-2009)
Dale Sanders (2009–2022)
Graham Moore (2022–present)
Notable staff and alumni
Notable staff and alumni include:
John Innes Foundation
The John Innes Foundation (JIF) is an independent charitable foundation (registered Charity No. 1111527) and was formed in 1910 by John Innes. JIF set up the John Innes Horticultural Institution (JIHI) in London. Currently, the JIF owns the land and buildings at Newfound Farm in Bawburgh, Norfolk which are used by researchers from the John Innes Centre. The JIF trustees also play an active part in the management of John Innes Centre research and have the right to appoint three members of the Governing Council. The foundation sponsors several graduate studentships each year, support for educational programmes and the infrastructure of the site. They also fund student awards for scientific excellence and science communication. It also owns a very significant collection of archive material held in the Historical Collections library at the John Innes Centre.
The Special Collection and the History of Genetics Library
The John Innes Centre is home to a collection of rare botanical books, lab books, manuscripts and letters documenting the history of genetics and research carried out by its scientists. This includes a letter from William Bateson documenting the first use of the word "genetics". The History of Genetics library also contains the archives of the Genetical Society.
Germplasm Resources Unit
An important part of the John Innes Centre is the John Innes Centre Germplasm Resources Unit (GRU). This seedbank houses a number of germplasm collections, including the Watkins Landrace Wheat Collection, the John Innes Centre Pisum Collection, the BBSRC Small Grain Cereal Collection, a crop wild relative collection and several specialist genetic stocks collections. This material is extensively used by UK and non-UK researchers and breeders, and is available upon request to research, academic and commercial efforts, subject to availability. The complete list of the material can be found in the GRU database.
References
Biological research institutes in the United Kingdom
Botany
Genetics in the United Kingdom
Organisations based in Norwich
Plant breeding
Research institutes in Norfolk | John Innes Centre | [
"Chemistry",
"Biology"
] | 1,258 | [
"Plant breeding",
"Botany",
"Plants",
"Molecular biology"
] |
577,991 | https://en.wikipedia.org/wiki/Service%20life | A product's service life is its period of use in service. Several related terms describe more precisely a product's life, from the point of manufacture, storage, and distribution, and eventual use.
Service life has been defined as "a product's total life in use from the point of sale to the point of discard" and distinguished from replacement life, "the period after which the initial purchaser returns to the shop for a replacement". Determining a product's expected service life as part of business policy (product life cycle management) involves using tools and calculations from maintainability and reliability analysis. Service life represents a commitment made by the item's manufacturer and is usually specified as a median. It is the time that any manufactured item can be expected to be "serviceable" or supported by its manufacturer.
Service life is not to be confused with shelf life, which deals with storage time, or with technical life, which is the maximum period during which it can physically function. Service life also differs from predicted life, in terms of mean time before failure (MTBF) or maintenance-free operating period (MFOP). Predicted life is useful in that a manufacturer may estimate, by hypothetical modeling and calculation, a general rule for how long it will honor warranty claims, or plan for mission fulfillment. The difference between service life and predicted life is most clear when considering mission time and reliability in comparison to MTBF and service life. For example, a missile system can have a mission time of less than one minute, service life of 20 years, active MTBF of 20 minutes, dormant MTBF of 50 years, and reliability of 99.9999%.
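If a constant failure rate is assumed during the useful-life period, mission reliability and MTBF are linked by R = exp(−t/MTBF). The sketch below illustrates that relation with made-up numbers; it is a simplification and will not reproduce the specific figures quoted above, which come from more detailed models.

```python
import math

def mission_reliability(mission_time_hours, mtbf_hours):
    """Probability of completing a mission without failure, assuming a
    constant failure rate (exponential model): R = exp(-t / MTBF)."""
    return math.exp(-mission_time_hours / mtbf_hours)

# Hypothetical figures: a 2-hour mission with a 5,000-hour MTBF,
# and a 10-hour mission with a 400-hour MTBF.
print(f"{mission_reliability(2, 5000):.4%}")   # ~99.96%
print(f"{mission_reliability(10, 400):.2%}")   # ~97.53%
```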
Consumers will have different expectations about service life and longevity based upon factors such as use, cost, and quality.
Product strategy
Manufacturers will commit to a very conservative service life, usually 2 to 5 years for most commercial and consumer products (for example, computer peripherals and components). For large and expensive durable goods, however, the items are not consumable, and maintenance activity figures heavily in the achievable service life. Again, an airliner might have a mission time of 11 hours, a predicted active MTBF of 10,000 hours without maintenance (or 15,000 hours with maintenance), reliability of .99999, and a service life of 40 years.
The most common model for item lifetime is the bathtub curve, a plot of the varying failure rate as a function of time. During early life, the curve shows an elevated failure rate, usually witnessed during product development. The middle portion of the bathtub, or 'useful life', is a slightly inclined, nearly constant failure-rate period during which the consumer enjoys the benefit conferred by the product. As time increases further, the curve reaches a period of increasing failures, modeling the product's wear-out phase.
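The bathtub shape is often approximated by summing a decreasing (early-life), constant (useful-life), and increasing (wear-out) hazard term. A minimal Python sketch under that common assumption is shown below; the Weibull shape and scale parameters are invented purely to produce the characteristic falling, flat, and rising segments, not taken from any real product data.

```python
import numpy as np

def weibull_hazard(t, shape, scale):
    """Weibull hazard rate: h(t) = (shape/scale) * (t/scale)**(shape - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def bathtub_hazard(t):
    """Illustrative bathtub curve: early failures + constant random failures + wear-out."""
    infant = weibull_hazard(t, shape=0.5, scale=2.0)    # decreasing hazard (early life)
    random = weibull_hazard(t, shape=1.0, scale=50.0)   # constant hazard (useful life)
    wearout = weibull_hazard(t, shape=5.0, scale=20.0)  # increasing hazard (wear-out)
    return infant + random + wearout

years = np.linspace(0.1, 25.0, 500)
rates = bathtub_hazard(years)
flattest = years[np.argmin(rates)]
print(f"hazard at 0.5 years: {bathtub_hazard(0.5):.3f} failures/year (early life)")
print(f"lowest hazard around year {flattest:.1f} (useful life)")
print(f"hazard at 24 years: {bathtub_hazard(24.0):.3f} failures/year (wear-out)")
```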
For an individual product, the component parts may each have independent service lives, resulting in several bathtub curves. For instance, a tire will have a service life partitioning related to the tread and the casing.
Examples
For maintainable items, wear-out items that logistical analysis determines should be provisioned for sparing and replacement will assure a longer service life than manufactured items without such planning. A simple example is automotive tires: failure to plan for this wear-out item would limit automotive service life to the extent of a single set of tires.
An individual tire's life also follows the bathtub curve. After installation, there is a non-negligible probability of failure, which may be related to material or workmanship, or even to the process of mounting the tire, which may introduce some small damage. After the initial period, the tire will perform, given no defect-introducing events such as encountering a road hazard (a nail or a pothole), for a long duration relative to its expected service life, which is a function of several variables (design, material, process). After a period, the failure probability will rise; for some tires, this will occur after the tread is worn out. Then, a secondary market for tires puts a retread on the tire, thereby extending the service life. It is not uncommon for an 80,000-mile tire to perform well beyond that limit.
It may be difficult to obtain reliable longevity data about many consumer products because, in general, actuarial analysis is not undertaken to the same extent as it is to support insurance decisions. However, some attempts to provide this type of information have been made. An example is the collection of estimates for household components provided by the Old House Web, which gathers data from the Appliance Statistical Review and various institutes involved with the homebuilding trade.
Some engine manufacturers, such as Navistar and Volvo, use a so-called B-life rating, based on the manufacturer's durability data, with B10 and B50 indices used to measure the life expectancy of an engine.
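B10 and B50 denote the times by which 10% and 50% of a population of engines are expected to have failed. A minimal sketch of how such figures could be derived, assuming (purely for illustration) that engine life follows a two-parameter Weibull distribution; the shape and scale values below are invented, and real ratings come from manufacturers' durability data.

```python
import math

def b_life(percent_failed, shape, scale):
    """Time by which `percent_failed` percent of units fail, for a Weibull(shape, scale)
    life model. Inverts F(t) = 1 - exp(-(t/scale)**shape)."""
    p = percent_failed / 100.0
    return scale * (-math.log(1.0 - p)) ** (1.0 / shape)

# Illustrative parameters only: scale in engine-hours; shape > 1 implies wear-out dominated failures.
shape, scale = 3.0, 15000.0
print(f"B10 life: {b_life(10, shape, scale):,.0f} hours")   # 10% of engines have failed
print(f"B50 life: {b_life(50, shape, scale):,.0f} hours")   # median life
```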
When exposed to high temperatures, the lithium-ion batteries in smartphones are easily damaged and can fail faster than expected; repeatedly letting the device run out of battery has a similar effect. Debris and other contaminants that enter through small cracks in the phone can also shorten smartphone life expectancy. One of the most common factors that cause smartphones and other electronic devices to fail early is physical impact and breakage, which can severely damage the internal components.
Operational life
For certain products, such as those that cannot be serviced during their operational life for technical reasons, a manufacturer may calculate a product's expected performance at both the beginning of operational life (BOL) and end of operational life (EOL). Batteries and other components that degrade over time may affect the operation of a product. The performance of mission critical components is therefore calculated for EOL, with the components exceeding their specification at BOL. For example, with spaceflight hardware, which must survive in the harsh environment of space, the capacity to generate electricity from solar panels or radioisotope thermoelectric generator (RTG) is likely to reduce throughout a mission, but must still meet a specific requirement at EOL in order to complete the mission. A spacecraft may also have a BOL mass that is greater than its EOL mass as propellant is depleted during its operational life.
See also
Availability
Capacity loss
Decrepit car
Design life
Durability
Maintainability
Planned obsolescence
Repairability
Shelf life
Throwaway society
Whole-life cost
References
Product management
Waste minimisation | Service life | [
"Engineering"
] | 1,323 | [
"Systems engineering",
"Reliability engineering"
] |
578,008 | https://en.wikipedia.org/wiki/Rocky%20Flats%20Plant | The Rocky Flats Plant was a United States manufacturing complex that produced nuclear weapons parts near Denver, Colorado. The facility's primary mission was the fabrication of plutonium pits, the fissionable part of a bomb that produces a nuclear explosion. The pits were shipped to other facilities to be assembled into complete nuclear weapons. Operated from 1952 to 1992 by private contractors, Dow Chemical Company, Rockwell International Corporation and EG&G, the complex was under the control of the U.S. Atomic Energy Commission (AEC), succeeded by the Department of Energy (DOE) in 1977. The plant manufactured 1,000 to 2,000 pits per year.
Plutonium pit production was halted in 1989 after EPA and FBI agents raided the facility and the plant was formally shut down in 1992. Rockwell then accepted a plea agreement for criminal violations of environmental law. At the time, the fine was one of the largest penalties ever in an environmental law case.
Cleanup began in the early 1990s, and the site achieved regulatory closure in 2006. The cleanup effort decommissioned and demolished the entire plant, more than 800 structures; removed over 21 tons of weapons-grade material; removed over 1.3 million cubic meters of waste; and treated more than of water. Four groundwater treatment systems were also constructed. The site of the former facility consists of two distinct areas: the "Central Operable Unit", which remains off-limits to the public as a CERCLA Superfund site, owned and managed by the U.S. Department of Energy, and the Rocky Flats National Wildlife Refuge, owned and managed by the U.S. Fish and Wildlife Service. Every five years, the U.S. Department of Energy, U.S. Environmental Protection Agency, and Colorado Department of Public Health and Environment review environmental data and other information to assess whether the remedy is functioning as intended. The latest Five-Year Review for the site, released in August 2022, concluded the site remedy is protective of human health and the environment. However, a protectiveness deferred determination was made for PFAS.
History
1950s
Following World War II, the United States increased production of nuclear weapons. A site about northwest of Denver on a windy plateau called Rocky Flats was chosen for the facility. Contemporary news reports stated that the site would not be used to produce nuclear bombs, but might be used to produce uranium and plutonium components for use in nuclear weapons.
The construction of Rocky Flats began in July 1951 and was a significant boon to the Colorado economy. Colorado state highways 72 and 93 were constructed to access the plant. Direct government construction contracts to Colorado business were worth $26 million and employed 2,800 people.
By April 1953, the plant began operating under the Dow Chemical Company. What the plant made, when construction was finished, when it began making its products, and how much product it was making were all secret. Routine production processes began leaking plutonium into the atmosphere almost immediately. A later study on the risks to public health from Rocky Flats estimated that normal production processes leaked up to 130,000 microcuries of plutonium into the atmosphere annually during the 1950s. This routine leakage declined in the 1960s, falling to 60 picocuries by 1989. For residents living in the area most contaminated by Rocky Flats, this exposure was comparable to the plutonium received from the fallout of nuclear weapons testing. This does not include the much higher levels of exposure resulting from the later fires. The company strictly maintained that the workers would handle radioactive material but would not make nuclear weapons. This was technically true because the plant manufactured plutonium pits, which were used at the Pantex plant in Amarillo, Texas, to assemble fission weapons and the primary stages of thermonuclear weapons.
Over its history, Rocky Flats became the primary plutonium pit production site in the United States. Los Alamos National Laboratory would continue to be used as a pit R&D facility from 1949 to 2013. The Hanford Site also produced plutonium pits from 1949 to 1965.
The AEC called Rocky Flats a "Weapon Production Facility" in a 1956 report. At this time, the plant was expanded with an additional $18.4 million investment from the AEC. During its lifetime, the plant manufactured 1,000 to 2,000 pits per year.
In June 1957 two employees were taken to the onsite hospital after an explosion in the production line. They were treated for cuts from flying glass and exposure to plutonium.
On September 11, 1957, a plutonium fire occurred in one of the gloveboxes used to handle radioactive materials, igniting the combustible rubber gloves and plexiglas windows of the box. Metallic plutonium is a fire hazard and pyrophoric; under the right conditions it may ignite in air at room temperature. The fire escaped containment in under thirty minutes, and the firefighters were forced to use water, a risky decision for a plutonium fire, to put out the fire in the glovebox. After putting out the original source, there was an explosion in the ventilation system. The fire burned the filters that normally removed the plutonium from the building's air, resulting in the release of 21 curies of plutonium into the atmosphere. For comparison, the Fat Man bomb used 448 curies of plutonium for its core. The accident resulted in the contamination of Building 771 and caused $818,600 in damage. At the time, the AEC spokesman significantly downplayed the risk of plutonium exposure and estimated only $50,000 in damages.
An incinerator for plutonium-contaminated waste was installed in Building 771 in 1958. The offgas was treated with scrubbers and filters and eventually released to the atmosphere. The ash was occasionally able to be reprocessed for plutonium recovery.
1960s
In 1960, one of the workers that responded to the 1957 fire petitioned the state legislature to create a way for workers that receive unsafe doses of radiation on the job to be compensated for the health effects. Smaller incidents, such as a 1962 fire, continued to threaten the safety of the workers without posing a significant public health risk.
Throughout the 1960s, the plant continued to enlarge and add buildings. The AEC sponsored $4.5 million in new construction contracts at Rocky Flats for 1960 and $3 million in 1962 . Payroll reached $26 million annually by 1962.
The 1960s also brought more contamination to the site. By 1967, barrels of plutonium-contaminated lubricants and solvents had accumulated on Pad 903. A large number of them were found to be leaking, and low-level contaminated soil was becoming wind-borne from this area. At least some of the leakage had been detected as early as 1962. From 1967 to 1968, the barrels were moved to Idaho National Laboratory. After removing the barrels, it was discovered that the winds, which frequently exceeded 100 mph, had moved plutonium-contaminated soil off the 903 Area. The area was then paved over with asphalt in 1969 to prevent further spread of the contaminated soil. Later analysis completed in 1999 for the CDPHE estimated that between 6 and 58 curies of plutonium spilled onto Pad 903 soil due to barrel leakage.
On May 11, 1969, there was a major fire in a glovebox in Building 776/777. Later investigations disagree on what caused the fire. The 1999 report for the Colorado Department of Public Health and Environment said the fire was started by a pressed plutonium block which spontaneously ignited. Other fire investigators said the fire was started by plutonium-contaminated oil rags, and the AEC buried that information to protect individuals from liability. Given the fire crew's experience with the 1957 fire, the fire captain again determined to use water to put out the fire. After six hours the fire was extinguished. As in the 1957 fire, the air filters which normally removed plutonium from the building's exhaust were destroyed by the fire, and in this case between 10 and 60 mCi of plutonium was released.
This was likely the costliest industrial accident to occur in the United States up to that time. Approximately $20 million of plutonium was consumed in the fire and there were $50 million in other damages. Cleanup from the accident took two years. The U.S. Congress ordered an investigation into the accident, which found that government officials had helped cover up details of the fire by abusing classified information protocols. The investigation also found that the AEC had ignored safety recommendations made after the 1957 fire that might have prevented this accident. The investigation recommended extensive improvements to the building to increase safety. Fire sprinkler systems and firewalls were built during the reconstruction.
1970s
Joseph Sykes, a janitor at the plant, was denied unemployment compensation after he was fired for refusing to work in Buildings 776/777, where the 1969 fire had occurred. He felt that it was unsafe to work there due to cancer risk, but the Colorado Industrial Commission ruled that he must demonstrate an actual hazard.
In order to reduce the danger of public contamination and to create a security area around the plant following protests, the United States Congress authorized the purchase of a buffer zone around the plant in 1972. In 1973, nearby Walnut Creek (Colorado) and the Great Western Reservoir were found to have elevated tritium levels. The tritium was determined to have been released from contaminated materials shipped to Rocky Flats from the Lawrence Livermore Laboratory. Discovery of the contamination by the Colorado Department of Health led to investigations by the AEC and United States Environmental Protection Agency (EPA). As a result of the investigation, several mitigation efforts were put in place to prevent further contamination. Some of the elements included channeling of wastewater runoff to three dams for testing before release into the water system and construction of a reverse osmosis facility to clean up wastewater.
The next year, elevated plutonium levels were found in the topsoil near the now covered Pad 903. An additional of buffer zone were purchased.
1975 saw Rockwell International replacing Dow Chemical as the contractor for the site. This year also saw local landowners suing for property contamination caused by the plant.
In 1978, 60 protesters belonging to the Rocky Flats Truth Force, or Satyagraha Affinity Group, based in Boulder, Colorado, were arrested for trespassing at Rocky Flats, and were brought to trial before Judge Kim Goldberger. Dr. John Candler Cobb, Professor of Preventive Medicine at the University of Colorado Medical Center, testified that the most significant danger of radioactive contamination came from the 1967 incident in which oil barrels containing plutonium leaked of oil into sand under the barrels, which was then blown by strong winds as far away as Denver.
Dr. Carl Johnson, Jefferson County health director from 1973 to 1981, directed numerous studies on contamination levels and the health risks the plant posed to the public. Based on his conclusions, Johnson opposed housing development near Rocky Flats. He was fired for opposing home development in contaminated areas, and later won a whistleblower lawsuit against Jefferson County, Colorado. Kristen Iversen, author of Full Body Burden: Growing Up in the Nuclear Shadow of Rocky Flats, contends later studies confirmed many of his findings.
In 1985, after hearing from various experts, the U.S. District Court for the District of Colorado found the results of Dr. Carl Johnson's study were "unreliable because the reported relationship seems implausible given the latency period for the types of cancer reported and because the excess cancers are different from the types of cancers expected to result from internally deposited plutonium." In addition, the court agreed with the Colorado State epidemiologist that "no measurable increases in cancer incidence resulting from operations at Rocky Flats have been demonstrated by any appropriate scientific method." Subsequent and ongoing studies indicate likely ongoing contamination and health issues. To date, there has never been an epidemiological study of people who lived or live near the Rocky Flats site.
On April 28, 1979, a few weeks after the Three Mile Island accident, a crowd of close to 15,000 protesters assembled at a nearby site. Singers Jackson Browne and Bonnie Raitt took the stage along with various speakers. The following day, 286 protesters including Daniel Ellsberg were arrested for civil disobedience/trespassing on the Rocky Flats facility.
1980s
On December 11, 1980, Congress enacted the Comprehensive Environmental Response, Compensation, and Liability Act, which provided the authority to respond directly to releases or threatened releases at the nation's worst environmental sites.
Dark Circle is a 1982 American documentary film that focuses on the Rocky Flats Plant and its plutonium contamination of the area's environment. The film won the Grand Prize for documentary at the Sundance Film Festival and received a national Emmy Award for "Outstanding individual achievement in news and documentary".
Rocky Flats became a focus of protest by peace activists throughout the 1980s. In 1983, a demonstration was organized that brought together 17,000 people who joined hands in an encirclement around the perimeter of the plant.
A perimeter security zone was installed around the facility in 1983 and was upgraded with remote detection abilities in 1985. Also in 1983, the first radioactive waste was processed through the aqueous recovery system, creating a plutonium button.
A plant safety official, Jim Stone, warned Rockwell in 1984 that their employees were being exposed to unsafe levels of beryllium. He was fired in 1986 for whistleblowing. The plant used beryllium as part of the weapons manufacturing process. Stone later claimed that Rockwell would fire employees who had been exposed in order to limit its liability for their health. Twelve Rocky Flats workers were discovered to have berylliosis, a lung disease caused by beryllium, as part of DOE testing. These findings spurred the DOE to investigate berylliosis in all of its facilities.
A celebration of 250,000 continuous safe hours by the employees at Rocky Flats happened in 1985. The same year, Rockwell received Industrial Research Magazine's IR-100 award for a process to remove actinide contamination from wastewater at the plant. The next year, the site received a National Safety Council Award of Honor for outstanding safety performance.
By 1986 over 5,500 workers were employed at the site, and were represented by the Oil, Chemical and Atomic Workers International Union (OCAW).
In 1986, the State of Colorado's Public Health Department, EPA, and DOE entered into a compliance agreement with the goal of bringing the facility into compliance with RCRA and Colorado Hazardous Waste Act permitting, generator, and waste management requirements. The agreement also initiated a process for investigating and remediating environmental contamination. In addition, the agreement established a framework addressing DOE's mixed-waste.
On August 10, 1987, 320 demonstrators were arrested after they tried to force a one-day shutdown of the Rocky Flats nuclear weapons plant.
In 1988, a Department of Energy (DOE) safety evaluation resulted in a report that was critical of safety measures at the plant. The EPA fined the plant for polychlorinated biphenyl (PCB) leaks from a transformer. A solid waste form, called pondcrete, was found not to have cured properly and was leaking from containers. A boxcar of transuranic waste from the site was refused entry into Idaho and returned to the plant. Plans to potentially close the plant were released.
In 1989 an employee left a faucet running, resulting in chromic acid being released into the sanitary water system. The Colorado Department of Health and the EPA both posted full-time personnel at the plant to monitor safety. Plutonium production was suspended due to safety violations.
In August 1989, an estimated 3,500 people turned out for a demonstration at Rocky Flats.
FBI/EPA investigation, June 1989 raid
In 1987, plant insiders started to covertly inform the Environmental Protection Agency (EPA) and the Federal Bureau of Investigation (FBI) about the unsafe conditions. In December 1988, the FBI commenced clandestine flights of light aircraft over the area and confirmed via infrared video recordings that the "outdated and unpermitted" Building 771 incinerator was apparently being used late into the night. After several months of collecting evidence both from workers and via direct measurement in 1989, the FBI informed the DOE on June 6 that they wanted to meet to discuss a potential terrorist threat.
On June 6, 1989, the United States District Court for the District of Colorado issued a search warrant to the FBI, based in part on information collected by Colorado Department of Health (now CDPHE) inspectors during the 1980s. Dubbed "Operation Desert Glow", the raid, sponsored by the United States Department of Justice (DOJ), began at 9 a.m. on June 6. After arriving in the meeting room, the FBI agents revealed the true reason for the meeting to stunned DOE and Rockwell officials, including Dominic Sanchini, Rockwell International's manager of Rocky Flats, who died the next year in Boulder of cancer. The FBI discovered numerous violations of federal anti-pollution laws, including limited contamination of water and soil. In 1992, Rockwell International was charged with environmental crimes, including violations of the Resource Conservation and Recovery Act (RCRA) and the Clean Water Act. Rockwell pleaded guilty and paid an $18.5 million fine. This was the largest fine for an environmental crime to that date.
After the FBI raid, federal authorities used the subsequent grand jury investigation to gather evidence of wrongdoing and then sealed the record. In October 2006, DOE announced completion of the Rocky Flats cleanup without this information being available.
The FBI raid led to the formation of Colorado's first special grand jury in 1989, the juried testimony of 110 witnesses, reviews of 2,000 exhibits, and ultimately a 1992 plea agreement in which Rockwell admitted to 10 federal environmental crimes and agreed to pay $18.5 million in fines out of its own funds. This amount was less than the company had been paid in bonuses for running the plant, as determined by the General Accounting Office (GAO), yet was also by far the highest hazardous-waste fine ever, four times larger than the previous record. Due to indemnification of nuclear contractors, without some form of settlement being arrived at between the U.S. Justice Department and Rockwell, the cost of paying any civil penalties would ultimately have been borne by U.S. taxpayers. While any criminal penalties allotted to Rockwell would not have been covered, for its part Rockwell claimed that the Department of Energy had specifically exempted it from most environmental laws, including those governing hazardous waste.
Regardless, and as forewarned by the prosecuting U.S. Attorney, Ken Fimberg/Scott, the Department of Justice's stated findings and plea agreement with Rockwell were heavily contested by its own, 23-member special grand jury. Press leaks on both sides—members of the DOJ and the grand jury—occurred in violation of secrecy regarding grand jury information, a federal crime punishable by a prison sentence. The public contest led to U.S. Congressional oversight committee hearings chaired by Congressman Howard Wolpe, which issued subpoenas to DOJ principals despite several instances of DOJ's refusal to comply. The hearings, whose findings include that the Justice Department had "bargained away the truth", ultimately still did not fully reveal to the public the special grand jury's report, which remains sealed by the DOJ courts.
The special grand jury report was nonetheless leaked to Westword. According to its subsequent publications, the Rocky Flats special grand jury had compiled indictments charging three DOE officials and five Rockwell employees with environmental crimes. The grand jury also wrote a report, intended for the public's consumption per their charter, lambasting the conduct of DOE and Rocky Flats contractors for "engaging in a continuing campaign of distraction, deception and dishonesty" and noted that Rocky Flats, for many years, had discharged pollutants, hazardous materials and radioactive matter into nearby creeks and Broomfield's and Westminster's water supplies.
The DOE itself, in a study released in December of the year prior to the FBI raid, had called Rocky Flats' ground water the single greatest environmental hazard at any of its nuclear facilities.
Sealed grand jury records
Court records from the grand jury proceeding on Rocky Flats have been sealed for a number of years. The Federal Rules of Criminal Procedure, which govern federal grand jury proceedings, explicitly require grand jury proceedings to be kept secret unless otherwise provided by the Rules. Rocky Flats' secret grand jury proceedings were not unique.
However, some activists dispute the reasons for records confidentiality: Dr. LeRoy Moore, a Boulder theologian and peace activist; retired FBI Special Agent Jon Lipsky, who led the FBI's raid of the Rocky Flats plant to investigate illegal plutonium burning and other environmental crimes; and Wes McKinley, who was the foreman of the grand jury investigation into the operations at Rocky Flats (and served several terms as Colorado State Representative).
Former grand jury foreman McKinley chronicles his experiences in the 2004 book he co-authored with attorney Caron Balkany, The Ambushed Grand Jury, which begins with an open letter to the U.S. Congress from Special Agent Lipsky:
However, a former EPA employee and Jon Lipsky's partner disputes these claims: "Jon kind of went off the deep end," and "He started seeing conspiracy theories in everything."
1990s
Rockwell International was replaced by EG&G as primary contractor for the Rocky Flats plant. EG&G began an aggressive work safety and cleanup plan for the site that included construction of a system to remove contamination from the groundwater of the site. The Sierra Club vs. Rockwell case was decided in favor of the Sierra Club. The ruling directed Rocky Flats to manage plutonium residues as hazardous waste.
In 1991, an interagency agreement between DOE, the Colorado Department of Health, and the EPA outlined multiyear schedules for environmental restoration studies and remediation activities. DOE released a report that advocated downsizing the plant's production into a more streamlined facility. Due to the fall of the Soviet Union, production of most of the systems at Rocky Flats was no longer needed, leaving only the W88 warhead primary stages.
In 1992, due to an order by President G. H. W. Bush, production of submarine-based missiles using the W88 trigger was discontinued, leading to the layoff of 4,500 employees at the plant; 4,000 others were retained for long-term cleanup of the facility. The Rocky Flats Plant Transition Plan outlined the environmental restoration process. The DOE announced that of plutonium lined the exhaust ductwork in six buildings on the site.
Starting in 1993, weapons-grade plutonium began to be shipped to the Oak Ridge National Laboratory, Los Alamos National Laboratory, and the Savannah River Site.
In 1994 the site was renamed the Rocky Flats Environmental Technology Site, reflecting the changed nature of the site from weapon production to environmental cleanup and restoration. The cleanup effort was contracted to the Kaiser-Hill Company, which proposed the release of of the buffer zone for public access.
In 1998, the Colorado Department of Public Health and Environment's Cancer Registry conducted an independent study of cancer rates in areas around the Rocky Flats Site. Data showed no pattern of increased cancers tied to Rocky Flats.
Throughout the remainder of the 1990s and into the 2000s, cleanup of contaminated sites and dismantling of contaminated buildings continued with the waste materials being shipped to the Nevada Test Site, the Waste Isolation Pilot Plant in New Mexico, and the Envirocare company facility in Utah, which is now EnergySolutions.
2000s
In 2001, Congress passed the Rocky Flats National Wildlife Refuge Act. In July 2007, the U.S. Department of Energy transferred nearly of land on the Rocky Flats site to the U.S. Fish and Wildlife Service to establish the Rocky Flats National Wildlife Refuge. Surveys of the site reveal 630 species of vascular plants, 76% of which are native. Herds of elk are commonly seen on the site. However, the DOE retained the central area of the site, the Central Operable Unit.
The last contaminated building was removed and the last weapons-grade plutonium was shipped out in 2003, ending the cleanup based on a modified cleanup agreement. The modified agreement required a higher level of cleanup in the first of soil in exchange for not having to remove any contamination below that point unless it posed a chance of migrating to the surface or contaminating the groundwater. About half of the 800 buildings previously existing on the site had been dismantled by early December 2004. By 2006, all of the more than 800 buildings had been decommissioned and demolished; today, the plant and all of its buildings are gone.
The site is contaminated with residual plutonium due to several industrial fires that occurred on the site and other inadvertent releases caused by wind at a waste storage area. The other major contaminant is carbon tetrachloride (CCl4). Both of these substances affected areas adjacent to the site. In addition, there were small releases of beryllium and tritium, as well as dioxin from incineration.
Cleanup was declared complete on October 13, 2005. About of the original site, the former industrial area, remains under U.S. DOE Office of Legacy Management control for ongoing environmental monitoring and remediation. On March 14, 2007, DOE, EPA, and CDPHE entered into the Rocky Flats Legacy Management Agreement (RFLMA). The agreement establishes the regulatory framework for implementing the final remedy for the Rocky Flats site and ensuring the protection of human health and the environment.
In 2007, because the Peripheral Operable Unit was found to be suitable for unlimited use and unrestricted exposure, EPA posted public notice of its intent to delete this area (now largely the Rocky Flats National Wildlife Refuge) from the EPA's National Priorities List of CERCLA or "Superfund" sites. The Peripheral Operable Unit was subsequently removed from the National Priorities List.
2010s
In September 2010, after a 20-year legal battle, the Tenth Circuit Court of Appeals reversed a $926 million award in a class-action lawsuit against Dow Chemical and Rockwell International. The three-judge panel said that the jury reached its decision on faulty instructions that incorrectly stated the law. The appeals court tossed the jury verdict and sent the case back to the District Court. According to the Appellate Court, the owners of 12,000 properties in the class-action area had not proved their properties were damaged or that they suffered bodily injury from plutonium that blew onto their properties.
In response to historic and ongoing reports of health issues by people who live and lived near Rocky Flats, an online health survey was launched in May 2016 by Metropolitan State University, Rocky Flats Downwinders, and other local universities and health agencies to survey thousands of Coloradans who lived east of the Rocky Flats plant while it was operational.
On May 19, 2016, a $375 million settlement was reached over claims by more than 15,000 nearby homeowners that plutonium releases from the plant risked their health and devalued their property. This settlement ended a 26-year legal battle between residents and the two corporations that ran the Rocky Flats Plant, Dow Chemical and Rockwell International, for the Department of Energy.
June 2014 marked a quarter century since the historic FBI and EPA raid of the Rocky Flats plant. A 3-day weekend of events from Friday, June 6 through Sunday, June 8 took place at the Arvada Center for the Arts, "Rocky Flats Then and Now: 25 Years After the Raid". Panel discussions covered various aspects of the Rocky Flats raid and its aftermath. On display were historical photographs and artifacts, as well as Rocky Flats-inspired art.
In 2016, the Colorado Department of Public Health and Environment's Cancer Registry completed a cancer incidence study that looked at the incidence of reported cancers in areas around Rocky Flats from 1990 to 2014. This study followed up on and was modeled after CDPHE's original Rocky Flats cancer incidence study, which was completed in 1998. Ten cancers specifically linked to plutonium exposure and other cancers of concern to a Health Advisory Panel were assessed in 1998, and again in 2016. The study found "the incidence of all cancers-combined for both adults and children was no different in the communities surrounding Rocky Flats than would be expected based on cancer rates in the remainder of the Denver Metro area for 1990 to 2014."
In 2017, the CDPHE Cancer Registry completed a supplement to this study that specifically looked at the incidence of thyroid and rare cancers in neighborhoods around Rocky Flats. Cancer incidence data showed "no evidence of higher than expected frequencies of thyroid cancer" and "the incidence of 'rare' cancer was not higher than expected compared to the remainder of the Denver Metro area."
In 2018, Metropolitan State University of Denver declined to further participate in the Downwinders' health survey.
In January 2019, activist groups questioning the contamination risk assessment for the wildlife refuge filed a lawsuit to unseal documents from the grand jury investigation.
In response to concerned citizens reports about a breast cancer cluster in young women, CDPHE's Central Cancer Registry also examined the incidence of breast cancer in young women in communities around Rocky Flats. The Cancer Registry maintains a statewide database of all cancers diagnosed in Colorado residents (with some skin cancer exceptions). Hospitals, physicians, and laboratories are required by law to report medically confirmed cancer data to CDPHE. In October 2019, CDPHE shared the Cancer Registry's findings. The Cancer Registry concluded, based on an analysis of the data, that "no increased incidence of breast cancer was found in young women in communities around Rocky Flats."
Labor
The labor movement and unions played a significant role in the development and operation of Rocky Flats. From the beginning, construction was performed with a semi-unionized work force. A walkout halted construction in 1952. By 1958 there were at least 900 unionized workers at the plant, represented by 16 different unions. Union representation grew to 1,500 by 1962. Contract negotiations in 1962 ended after President Kennedy asked the Metal Trades Council to postpone a strike and the union agreed. The contract, involving scheduled 2.5% raises, had been proposed by federal negotiators, but Dow did not accept it. After the postponement, President Kennedy and Secretary of Labor Arthur Goldberg tried to forge an agreement between Dow and the union. However, Dow continued to push contracts allowing seven consecutive days of work, which the union rejected, electing to strike.
See also
Atomic Energy Act
Cold War
Manhattan Project
Price-Anderson Act
Radioactive contamination from the Rocky Flats Plant
Timeline of nuclear weapons development
Notes
External links
U.S. Department of Energy, Legacy Management, Rocky Flats
U.S. Environmental Protection Agency, Rocky Flats
U.S. Fish & Wildlife Service, Rocky Flats Wildlife Refuge
Colorado Department of Public Health & Environment (CDPHE), Rocky Flats
Kristen Iversen
Rocky Flats History
Bomb Production at Rocky Flats: Death Downwind
Photography: a year of disobedience
Rocky Flats Cold War Museum
RockyFlatsFacts
Rocky Flats Collection at University of Colorado Boulder
University of Colorado Boulder Libraries resources
Full Body Burden, forthcoming documentary
Further reading
Doom with a View: Historical and Cultural Contexts of the Rocky Flats Nuclear Weapons Plant by Kristen Iversen (Chicago Review Press, 2020)
Making a Real Killing: Rocky Flats and the Nuclear West by Len Ackland (University of New Mexico Press, 2002)
Industrial buildings completed in 1956
Nuclear technology in the United States
Nuclear weapons infrastructure of the United States
Radioactive waste
Historic American Engineering Record in Colorado
Industrial buildings and structures on the National Register of Historic Places in Colorado
Military facilities on the National Register of Historic Places in Colorado
Historic districts on the National Register of Historic Places in Colorado
Buildings and structures in Jefferson County, Colorado
Manufacturing plants in the United States
United States Department of Energy facilities
Superfund sites in Colorado
Military installations in Colorado
Radioactively contaminated areas
Military research of the United States
National Register of Historic Places in Jefferson County, Colorado
1952 establishments in Colorado
1992 disestablishments in Colorado
Radiation accidents and incidents | Rocky Flats Plant | [
"Chemistry",
"Technology"
] | 6,482 | [
"Radioactively contaminated areas",
"Radioactive contamination",
"Soil contamination",
"Environmental impact of nuclear power",
"Radioactivity",
"Hazardous waste",
"Radioactive waste"
] |
578,038 | https://en.wikipedia.org/wiki/Reversal%20potential | In a biological membrane, the reversal potential is the membrane potential at which the direction of ionic current reverses. At the reversal potential, there is no net flow of ions from one side of the membrane to the other. For channels that are permeable to only a single type of ion, the reversal potential is identical to the equilibrium potential of the ion.
Equilibrium potential
The equilibrium potential for an ion is the membrane potential at which there is no net movement of the ion. Because membranes are otherwise normally impermeable to ions, the flow of any inorganic ion, such as Na+ or K+, through an ion channel is driven by the electrochemical gradient for that ion. This gradient consists of two parts: the difference in the concentration of that ion across the membrane, and the voltage gradient. When these two influences balance each other, the electrochemical gradient for the ion is zero and there is no net flow of the ion through the channel; this also translates to no current across the membrane, so long as only one ionic species is involved. The voltage gradient at which this equilibrium is reached is the equilibrium potential for the ion, and it can be calculated from the Nernst equation.
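A minimal sketch of the Nernst calculation follows; the ionic concentrations are typical textbook values for a mammalian neuron and are assumptions made for illustration, not values taken from this article.

```python
import math

R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol

def nernst_potential(z, conc_out, conc_in, temp_c=37.0):
    """Equilibrium (Nernst) potential in volts for an ion of valence z."""
    T = temp_c + 273.15
    return (R * T) / (z * F) * math.log(conc_out / conc_in)

# Typical textbook concentrations (mM) for a mammalian neuron -- illustrative only.
print(f"E_K  = {1000 * nernst_potential(+1, conc_out=5.0,   conc_in=140.0):+.1f} mV")
print(f"E_Na = {1000 * nernst_potential(+1, conc_out=145.0, conc_in=15.0):+.1f} mV")
print(f"E_Cl = {1000 * nernst_potential(-1, conc_out=110.0, conc_in=10.0):+.1f} mV")
```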
Mathematical models and the driving force
We can consider as an example a positively charged ion, such as K+, and a negatively charged membrane, as is commonly the case in most organisms. The membrane voltage opposes the flow of potassium ions out of the cell, and the ions can leave the interior of the cell only if they have sufficient thermal energy to overcome the energy barrier produced by the negative membrane voltage. However, this biasing effect can be overcome by an opposing concentration gradient if the interior concentration is high enough, which favours the potassium ions leaving the cell.
An important concept related to the equilibrium potential is the driving force. The driving force is simply defined as the difference between the actual membrane potential and an ion's equilibrium potential, Vm − Eion, where Eion refers to the equilibrium potential for the specific ion. Relatedly, the membrane current per unit area due to the type i ion channel is given by the following equation:

Ii = gi(Vm − Ei)

where (Vm − Ei) is the driving force and gi is the specific conductance, or conductance per unit area. Note that the ionic current will be zero if the membrane is impermeable to the ion in question or if the membrane voltage is exactly equal to the equilibrium potential of that ion.
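Continuing the sketch above, the driving force and the per-area current Ii = gi(Vm − Ei) can be computed directly; the conductance value here is arbitrary and only serves to show that the current changes sign at the equilibrium potential.

```python
def membrane_current_density(g_i, v_m, e_i):
    """Current per unit area for ion channel type i: I_i = g_i * (V_m - E_i)."""
    return g_i * (v_m - e_i)

g_k = 10.0    # S/m^2 -- arbitrary illustrative specific conductance for K+ channels
e_k = -0.089  # V -- K+ equilibrium potential from the Nernst sketch above
for v_m in (-0.100, -0.089, -0.060):
    i = membrane_current_density(g_k, v_m, e_k)
    print(f"Vm = {1000 * v_m:+.0f} mV: driving force {1000 * (v_m - e_k):+.1f} mV, I = {i:+.3f} A/m^2")
```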
Use in research
When Vm is at the reversal potential for an event such as a synaptic potential (that is, the driving force Vm − Erev is equal to 0), the identity of the ions that flow during an EPC can be deduced by comparing the reversal potential of the EPC to the equilibrium potential for various ions. For instance, several excitatory ionotropic ligand-gated neurotransmitter receptors, including glutamate receptors (AMPA, NMDA, and kainate), nicotinic acetylcholine (nACh), and serotonin (5-HT3) receptors, are nonselective cation channels that pass Na+ and K+ in nearly equal proportions, giving a reversal potential close to zero. The inhibitory ionotropic ligand-gated neurotransmitter receptors that carry Cl−, such as GABAA and glycine receptors, have reversal potentials close to the resting potential (approximately –70 mV) in neurons.
This line of reasoning led to the development of experiments (by Akira Takeuchi and Noriko Takeuchi in 1960) that demonstrated that acetylcholine-activated ion channels are approximately equally permeable to Na+ and K+ ions. The experiment was performed by lowering the external Na+ concentration, which lowers (makes more negative) the Na+ equilibrium potential and produces a negative shift in reversal potential. Conversely, increasing the external K+ concentration raises (makes more positive) the K+ equilibrium potential and produces a positive shift in reversal potential. A general expression for reversal potential of synaptic events, including for decreases in conductance, has been derived.
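The near-zero reversal potential of a nonselective cation channel, and the negative shift produced by lowering external Na+, can be illustrated with the Goldman–Hodgkin–Katz voltage equation; equal Na+/K+ permeabilities and the concentrations below are assumptions made for this sketch, not values from the Takeuchi experiments.

```python
import math

def ghk_reversal(p_k, p_na, k_out, k_in, na_out, na_in, temp_c=25.0):
    """Goldman-Hodgkin-Katz voltage equation for a channel permeable to K+ and Na+ (volts)."""
    rt_f = 8.314 * (temp_c + 273.15) / 96485.0
    return rt_f * math.log((p_k * k_out + p_na * na_out) / (p_k * k_in + p_na * na_in))

# Equal K+/Na+ permeability; illustrative concentrations in mM.
base = ghk_reversal(1.0, 1.0, k_out=5.0, k_in=140.0, na_out=145.0, na_in=15.0)
low_na = ghk_reversal(1.0, 1.0, k_out=5.0, k_in=140.0, na_out=50.0, na_in=15.0)
print(f"reversal potential, normal external Na+:  {1000 * base:+.1f} mV")
print(f"reversal potential, lowered external Na+: {1000 * low_na:+.1f} mV (negative shift)")
```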
See also
Electrochemical potential
Cell potential
Goldman equation
References
External links
Nernst/Goldman Equation Simulator
Nernst Equation Calculator
Goldman-Hodgkin-Katz Equation Calculator
Electrochemical Driving Force Calculator
Membrane biology
Electrophysiology
Cardiac electrophysiology
Action potentials
Walther Nernst | Reversal potential | [
"Chemistry"
] | 875 | [
"Membrane biology",
"Molecular biology"
] |
578,057 | https://en.wikipedia.org/wiki/Tollmann%27s%20bolide%20hypothesis | Tollmann's bolide hypothesis is a hypothesis presented by Austrian palaeontologist Edith Kristan-Tollmann and geologist Alexander Tollmann in 1994. The hypothesis postulates that one or several bolides (asteroids or comets) struck the Earth around 7640 ± 200 years BCE, and a much smaller one approximately 3150 ± 200 BCE. The hypothesis tries to explain early Holocene extinctions and possibly legends of the Universal Deluge.
The claimed evidence for the event includes stratigraphic studies of tektites, dendrochronology, and ice cores (from Camp Century, Greenland) containing hydrochloric acid and sulfuric acid (indicating an energetic ocean strike) as well as nitric acid (caused by extreme heating of air).
Christopher Knight and Robert Lomas in their book, Uriel's Machine, argue that the 7640 BCE evidence is consistent with the dates of formation of a number of extant salt flats and lakes in dry areas of North America and Asia. They argue that these lakes are the remains of multiple-kilometer-high waves that penetrated deeply into continents as the result of the oceanic strikes they propose occurred. Research by Quaternary geologists, palynologists, and others has been unable to confirm the validity of the hypothesis, instead attributing the data used to support it to more frequently occurring geological processes. The dating of ice cores and Australasian tektites has shown large discrepancies between the proposed impact times and the ages of the supposed impact ejecta.
Scientific evaluation
Quaternary geologists, paleoclimatologists, and planetary geologists specialising in meteorite and comet impacts have rejected Tollmann's bolide hypothesis. They reject this hypothesis because:
The evidence offered to support the hypothesis can more readily be explained by more mundane and less dramatic geologic processes
Many of the events alleged to be associated with this impact occurred at the wrong time (i.e., many of the events occurred hundreds to thousands of years before or after the hypothesized impacts); and
There is a lack of any credible physical evidence for the cataclysmic environmental devastation and characteristic deposits that kilometre-high tsunamis would have created had they actually occurred.
Evidence used by proponents of Tollmann's bolide hypothesis to argue for catastrophic Holocene extinctions has alternative explanations involving more frequently occurring geological processes. The chemical composition of the acidity spikes in the Greenland ice cores, and the volcanic ash found with them, indicate that they result from volcanic rather than impact origins. Also, the largest acidity spikes found in Antarctic ice cores have been dated to 17,300 to 17,500 BP, which is significantly older than the hypothetical Holocene impacts. The formation of modern salt lakes and salt flats is explained by the concentration of salts and other evaporite minerals by the evaporation of water from stream-fed lakes lacking external outlets, called endorheic lakes, which commonly occur in arid climates in both hemispheres on Earth. The composition of the salts and other evaporite minerals found in these lakes is consistent with their precipitation from dissolved material continually carried into the lakes by rivers and streams and subsequent concentration by evaporation, rather than with evaporation of seawater. Whether a lake becomes salty or not depends on whether the lake lacks an outlet and on the relative balance between the inflow of water and its loss via evaporation. Ocean water reaching a continental lake as the result of a single catastrophic event, as Tollmann's hypothesis proposes, would contain an inadequate amount of dissolved minerals to produce, when evaporated, the vast quantities of salts and other evaporites found in the salt lakes, flats, and pans cited as evidence of a mega-tsunami by this hypothesis.
Geological criticism
Isostatic rebound
Many published papers demonstrate that isostatic depression of the Earth's crust happened in the early Holocene. This process led to the submergence of substantial portions of coastal areas adjacent to continental ice sheets and resulted in the accumulation of marine sediments and fossils within them. A well-documented example of flooding caused by isostatic depression is the case of Charlotte, the Vermont Whale, a fossil whale found in the deposits of the former Champlain Sea. Like many similar marine deposits, the sediments that accumulated within the Champlain Sea lack the physical characteristics (i.e., sedimentary structures, interlayers, and textures) that characterise sediments deposited by a mega-tsunami. These deposits and the associated fossils have been dated to significantly earlier periods than the times the bolide hypothesis proposes. In the case of the Champlain Sea, its sediments started to accumulate around 13,000 BP, almost 3,400 years before the oldest of the hypothesized Holocene bolide impacts.
Dating
A significant amount of the physical evidence used by Kristan-Tollmann and Tollmann to support their hypothesis is either too old or too young to have been created by this hypothesized impact. In many cases, it is hundreds to thousands, and in one case hundreds of thousands, of years too old to be credible evidence of a Holocene impact. The research that dates the tektites, which Tollmann's bolide hypothesis regards as indicative of the timing of the impact, is outdated. Later research has dated the Australasian tektites to the Middle Pleistocene, about 790,000 years BP. In addition, the formation of salt lakes and salt flats is neither synchronous nor consistent with the hypothesized impacts having occurred about either 9,640 BP or 5,150 BP. For example, in the case of Lake Bonneville, Lake Lahontan, Mono Lake, and other Pleistocene pluvial lakes in the western United States, the transition to salt lakes and salt flats occurred at different times between 12,000 and 16,000 BP. Thus, the change from freshwater to salty water and eventually salt flats started over 2,400 to 6,400 years before the oldest of the impacts hypothesized by the Tollmann bolide hypothesis occurred. As a result, it is impossible that the formation of these salt lakes could have been associated with the impact hypothesized by Kristan-Tollmann and Tollmann.
Megatsunami
There is a lack of credible physical evidence either of multiple-kilometer-high tsunami waves penetrating deeply into continents or of the ecological devastation these would have caused. Thousands of paleoenvironmental records constructed from the study of lakes, bogs, mires, and river valleys all over the world by palynologists have not shown the existence of such a megatsunami. In the case of North America, research published by various authors provides detailed records of paleoenvironmental changes that have occurred throughout the last 10,000 to 15,000 years, as reconstructed from pollen and other paleoenvironmental data from over a thousand sites throughout North America. These records show no indications of either a resulting catastrophic environmental devastation or layers of tsunami deposits, which the mega-tsunamis postulated by Tollmann's bolide hypothesis would have created. Paleovegetation maps illustrate a distinct lack of the dramatic changes in North American paleovegetation during the Holocene which would be expected from the cataclysmic ecological and physical destruction that continent-wide mega-tsunamis would certainly have caused.
Grimm et al., in a paper published in Science in 1993, documented a 50,000-year-long record of environmental change through the analysis of pollen from a core from Lake Tulane in Highlands County, Florida. Because of the low-lying nature of the peninsula in which this part of Florida lies, this lake and the area around it would have been flooded and covered by tsunami deposits, along with many of the other lakes and bogs described there and in other publications. The forests and associated ecosystems of these areas would have been flooded and completely destroyed by the mega-tsunamis proposed by Kristan-Tollmann and Tollmann. Despite its location, both the core and the pollen record recovered from Lake Tulane lack any indication of an abrupt, catastrophic environmental disruption, which the mega-tsunamis proposed by Tollmann's bolide hypothesis would have caused. Sedimentary cores obtained from Florida and other locations also lack sedimentary layers that have the characteristics of sediments deposited by either tsunamis or mega-tsunamis.
The cataclysmic scale of physical and ecological destruction that a megatsunami like the one proposed by Kristan-Tollmann and Tollmann would have caused has not been recognised within the majority of long-term environmental records. Over a thousand cores from North America for which Holocene paleoclimatic and paleoenvironmental records have been reconstructed do not show evidence for the drastic environmental changes that would result from a large Holocene impact. There is a similar lack of evidence for mega-tsunami-related Holocene catastrophic environmental disruptions and deposits in environmental records reconstructed from thousands of locations all over the world. Other megatsunamis have left traces in coastal sediments analysed by geologists and palynologists, and these point to tsunamis locally caused by earthquakes, volcanic eruptions, or submarine slides. These non-impact-related tsunamis show abundant records of their environmental effects through the study of pollen from cores and exposures.
Members of the Holocene Impact Working Group have published papers advocating the occurrence of mega-tsunamis created by extraterrestrial impacts at various times during the Holocene and Late Pleistocene. However, none of these proposed impacts match either the cataclysmic scale or timing proposed by Kristan-Tollmann and Tollmann for their hypothesized bolide.
See also
Timeline of environmental events
Younger Dryas impact hypothesis
References
External links
Pinter, N., and S.E. Ishman, 2008, Impacts, mega-tsunami, and other extraordinary claims PDF version, 304 KB. GSA Today. vol. 18, no. 1, pp. 37–38.
Historical geology
Hypothetical impact events
Extinction events | Tollmann's bolide hypothesis | [
"Astronomy",
"Biology"
] | 2,054 | [
"Astronomical hypotheses",
"Evolution of the biosphere",
"Extinction events",
"Hypothetical impact events",
"Biological hypotheses"
] |
578,099 | https://en.wikipedia.org/wiki/Hypochlorous%20acid | Hypochlorous acid is an inorganic compound with the chemical formula , also written as HClO, HOCl, or ClHO. Its structure is . It is an acid that forms when chlorine dissolves in water, and itself partially dissociates, forming a hypochlorite anion, . HClO and are oxidizers, and the primary disinfection agents of chlorine solutions. HClO cannot be isolated from these solutions due to rapid equilibration with its precursor, chlorine.
Because of its strong antimicrobial properties, the related compounds sodium hypochlorite (NaOCl) and calcium hypochlorite (Ca(ClO)2) are ingredients in many commercial bleaches, deodorants, and disinfectants. The white blood cells of mammals, such as humans, also contain hypochlorous acid as a tool against foreign bodies. In living organisms, HOCl is generated by the reaction of hydrogen peroxide with chloride ions under the catalysis of the heme enzyme myeloperoxidase (MPO).
Like many other disinfectants, hypochlorous acid solutions will destroy pathogens, such as the virus that causes COVID-19, that are present on surfaces. In low concentrations, such solutions can serve to disinfect open wounds.
History
Hypochlorous acid was discovered in 1834 by the French chemist Antoine Jérôme Balard (1802–1876) by adding, to a flask of chlorine gas, a dilute suspension of mercury(II) oxide in water. He also named the acid and its compounds.
Despite being relatively easy to make, it is difficult to maintain a stable hypochlorous acid solution. Only in recent years have scientists been able to cost-effectively produce and maintain stable hypochlorous acid water for commercial use.
Uses
In organic synthesis, HClO converts alkenes to chlorohydrins.
In biology, hypochlorous acid is generated in activated neutrophils by myeloperoxidase-mediated peroxidation of chloride ions, and contributes to the destruction of bacteria.
In medicine, hypochlorous acid water has been used as a disinfectant and sanitiser.
In wound care, as of early 2016, the U.S. Food and Drug Administration has approved products whose main active ingredient is hypochlorous acid for use in treating wounds and various infections in humans and pets. It is also FDA-approved as a preservative for saline solutions.
In disinfection, it has been used in the form of liquid spray, wet wipes and aerosolised application. Recent studies have shown hypochlorous acid water to be suitable for fog and aerosolised application for disinfection chambers and suitable for disinfecting indoor settings such as offices, hospitals and healthcare clinics.
In food service and water distribution, specialized equipment to generate weak solutions of HClO from water and salt is sometimes used to generate adequate quantities of safe (unstable) disinfectant to treat food preparation surfaces and water supplies. It is also commonly used in restaurants due to its non-flammable and nontoxic characteristics.
In water treatment, hypochlorous acid is the active sanitizer in hypochlorite-based products (e.g. used in swimming pools).
Similarly, in ships and yachts, marine sanitation devices use electricity to convert seawater into hypochlorous acid to disinfect macerated faecal waste before discharge into the sea.
In deodorization, hypochlorous acid has been tested to remove up to 99% of foul odours including garbage, rotten meat, toilet, stool, and urine odours.
Formation, stability and reactions
Addition of chlorine to water gives both hydrochloric acid (HCl) and hypochlorous acid (HClO):
Cl2 + H2O ⇌ HClO + HCl
When acids are added to aqueous salts of hypochlorous acid (such as sodium hypochlorite in commercial bleach solution), the resultant reaction is driven to the left, and chlorine gas is formed. Thus, the formation of stable hypochlorite bleaches is facilitated by dissolving chlorine gas into basic water solutions, such as sodium hydroxide.
The acid can also be prepared by dissolving dichlorine monoxide in water; under standard aqueous conditions, anhydrous hypochlorous acid is currently impossible to prepare due to the readily reversible equilibrium between it and its anhydride:
2 HClO ⇌ Cl2O + H2O,  K = 3.55 × 10−3 dm3/mol (at 0 °C)
The presence of light or transition metal oxides of copper, nickel, or cobalt accelerates the exothermic decomposition into hydrochloric acid and oxygen:
2 HClO → 2 HCl + O2
Fundamental reactions
In aqueous solution, hypochlorous acid partially dissociates into the hypochlorite anion, ClO−:
HClO ⇌ H+ + ClO−
Salts of hypochlorous acid are called hypochlorites. One of the best-known hypochlorites is NaClO, the active ingredient in bleach.
HClO is a stronger oxidant than chlorine under standard conditions.
2 HClO + 2 H+ + 2 e− ⇌ Cl2 + 2 H2O,  E = +1.63 V
HClO reacts with HCl to form chlorine:
HClO + HCl → Cl2 + H2O
HClO reacts with ammonia to form monochloramine:
NH3 + HClO → NH2Cl + H2O
HClO can also react with organic amines, forming N-chloroamines.
Hypochlorous acid exists in equilibrium with its anhydride, dichlorine monoxide.
2 HClO ⇌ Cl2O + H2O,  K = 3.55 × 10−3 dm3/mol (at 0 °C)
Reactivity of HClO with biomolecules
Hypochlorous acid reacts with a wide variety of biomolecules, including DNA, RNA, fatty acid groups, cholesterol and proteins.
Reaction with protein sulfhydryl groups
Knox et al. first noted that HClO is a sulfhydryl inhibitor that, in sufficient quantity, could completely inactivate proteins containing sulfhydryl groups. This is because HClO oxidises sulfhydryl groups, leading to the formation of disulfide bonds that can result in crosslinking of proteins. The HClO mechanism of sulfhydryl oxidation is similar to that of monochloramine, and may only be bacteriostatic, because once the residual chlorine is dissipated, some sulfhydryl function can be restored. One sulfhydryl-containing amino acid can scavenge up to four molecules of HClO. Consistent with this, it has been proposed that sulfhydryl groups of sulfur-containing amino acids can be oxidized a total of three times by three HClO molecules, with the fourth reacting with the α-amino group. The first reaction yields sulfenic acid (R-SOH), then sulfinic acid (R-SO2H), and finally R-SO3H. Sulfenic acids form disulfides with another protein sulfhydryl group, causing cross-linking and aggregation of proteins. Sulfinic acid and more highly oxidized derivatives are produced only at high molar excesses of HClO, and disulfides are formed primarily at bactericidal levels. Disulfide bonds can also be oxidized by HClO to sulfinic acid. Because the oxidation of sulfhydryls and disulfides evolves hydrochloric acid, this process results in the depletion of HClO.
Reaction with protein amino groups
Hypochlorous acid reacts readily with amino acids that have amino group side-chains, with the chlorine from HClO displacing a hydrogen, resulting in an organic chloramine. Chlorinated amino acids rapidly decompose, but protein chloramines are longer-lived and retain some oxidative capacity. Thomas et al. concluded from their results that most organic chloramines decayed by internal rearrangement and that fewer available NH2 groups promoted attack on the peptide bond, resulting in cleavage of the protein. McKenna and Davies found that 10 mM or greater HClO is necessary to fragment proteins in vivo. Consistent with these results, it was later proposed that the chloramine undergoes a molecular rearrangement, releasing HCl and ammonia to form an aldehyde. The aldehyde group can further react with another amino group to form a Schiff base, causing cross-linking and aggregation of proteins.
Reaction with DNA and nucleotides
Hypochlorous acid reacts slowly with DNA and RNA as well as all nucleotides in vitro. GMP is the most reactive because HClO reacts with both the heterocyclic NH group and the amino group. In a similar manner, TMP, with only a heterocyclic NH group that is reactive with HClO, is the second-most reactive. AMP and CMP, which have only a slowly reactive amino group, are less reactive with HClO. UMP has been reported to be reactive only at a very slow rate. The heterocyclic NH groups are more reactive than amino groups, and their secondary chloramines are able to donate the chlorine. These reactions likely interfere with DNA base pairing, and, consistent with this, Prütz has reported a decrease in viscosity of DNA exposed to HClO similar to that seen with heat denaturation. The sugar moieties are nonreactive and the DNA backbone is not broken. NADH can react with chlorinated TMP and UMP as well as HClO. This reaction can regenerate UMP and TMP and results in the 5-hydroxy derivative of NADH. The reaction with TMP or UMP is slowly reversible to regenerate HClO. A second slower reaction that results in cleavage of the pyridine ring occurs when excess HClO is present. NAD+ is inert to HClO.
Reaction with lipids
Hypochlorous acid reacts with unsaturated bonds in lipids, but not saturated bonds, and the ClO− ion does not participate in this reaction. This reaction occurs by hydrolysis with addition of chlorine to one of the carbons and a hydroxyl to the other. The resulting compound is a chlorohydrin. The polar chlorine disrupts lipid bilayers and could increase permeability. When chlorohydrin formation occurs in lipid bilayers of red blood cells, increased permeability occurs. Disruption could occur if enough chlorohydrin is formed. The addition of preformed chlorohydrin to red blood cells can affect permeability as well. Cholesterol chlorohydrins have also been observed, but do not greatly affect permeability, and it is believed that is responsible for this reaction. Hypochlorous acid also reacts with a subclass of glycerophospholipids called plasmalogens, yielding chlorinated fatty aldehydes which are capable of protein modification and may play a role in inflammatory processes such as platelet aggregation and the formation of neutrophil extracellular traps.
Mode of disinfectant action
E. coli exposed to hypochlorous acid lose viability in less than 0.1 seconds due to inactivation of many vital systems. Hypochlorous acid has a reported of 0.0104–0.156 ppm and 2.6 ppm caused 100% growth inhibition in 5 minutes. However, the concentration required for bactericidal activity is also highly dependent on bacterial concentration.
Inhibition of glucose oxidation
In 1948, Knox et al. proposed the idea that inhibition of glucose oxidation is a major factor in the bactericidal nature of chlorine solutions. They proposed that the active agent or agents diffuse across the cytoplasmic membrane to inactivate key sulfhydryl-containing enzymes in the glycolytic pathway. This group was also the first to note that chlorine solutions (HClO) inhibit sulfhydryl enzymes. Later studies have shown that, at bactericidal levels, the cytosol components do not react with HClO. In agreement with this, McFeters and Camper found that aldolase, an enzyme that Knox et al. proposed would be inactivated, was unaffected by HClO in vivo. It has been further shown that loss of sulfhydryls does not correlate with inactivation. That leaves the question concerning what causes inhibition of glucose oxidation. The discovery that HClO blocks induction of β-galactosidase by added lactose led to a possible answer to this question. The uptake of radiolabeled substrates by both ATP hydrolysis and proton co-transport may be blocked by exposure to HClO preceding loss of viability. From this observation, it was proposed that HClO blocks uptake of nutrients by inactivating transport proteins. The question of loss of glucose oxidation has been further explored in terms of loss of respiration. Venkobachar et al. found that succinic dehydrogenase was inhibited in vitro by HClO, which led to the investigation of the possibility that disruption of electron transport could be the cause of bacterial inactivation. Albrich et al. subsequently found that HClO destroys cytochromes and iron-sulfur clusters and observed that oxygen uptake is abolished by HClO and adenine nucleotides are lost. It was also observed that irreversible oxidation of cytochromes paralleled the loss of respiratory activity. One way of addressing the loss of oxygen uptake was by studying the effects of HClO on succinate-dependent electron transport. Rosen et al. found that levels of reducible cytochromes in HClO-treated cells were normal, but these cells were unable to reduce them. Succinate dehydrogenase was also inhibited by HClO, stopping the flow of electrons to oxygen. Later studies revealed that ubiquinol oxidase activity ceases first, and the still-active cytochromes reduce the remaining quinone. The cytochromes then pass the electrons to oxygen, which explains why the cytochromes cannot be reoxidized, as observed by Rosen et al. However, this line of inquiry was ended when Albrich et al. found that cellular inactivation precedes loss of respiration by using a flow mixing system that allowed evaluation of viability on much smaller time scales. This group found that cells capable of respiring could not divide after exposure to HClO.
Depletion of adenine nucleotides
Having eliminated loss of respiration, Albrich et al. proposed that the cause of death may be due to metabolic dysfunction caused by depletion of adenine nucleotides. Barrette et al. studied the loss of adenine nucleotides by measuring the energy charge of HClO-exposed cells and found that cells exposed to HClO were unable to step up their energy charge after addition of nutrients. The conclusion was that exposed cells have lost the ability to regulate their adenylate pool, based on the fact that metabolite uptake was only 45% deficient after exposure to HClO and the observation that HClO causes intracellular ATP hydrolysis. It was also confirmed that, at bactericidal levels of HClO, cytosolic components are unaffected. So it was proposed that modification of some membrane-bound protein results in extensive ATP hydrolysis, and this, coupled with the cells' inability to remove AMP from the cytosol, depresses metabolic function. One protein involved in loss of ability to regenerate ATP has been found to be ATP synthetase. Much of this research on respiration reconfirms the observation that relevant bactericidal reactions take place at the cell membrane.
Inhibition of DNA replication
Recently it has been proposed that bacterial inactivation by HClO is the result of inhibition of DNA replication. When bacteria are exposed to HClO, there is a precipitous decline in DNA synthesis that precedes inhibition of protein synthesis, and closely parallels loss of viability. During bacterial genome replication, the origin of replication (oriC in E. coli) binds to proteins that are associated with the cell membrane, and it was observed that HClO treatment decreases the affinity of extracted membranes for oriC, and this decreased affinity also parallels loss of viability. A study by Rosen et al. compared the rate of HClO inhibition of DNA replication of plasmids with different replication origins and found that certain plasmids exhibited a delay in the inhibition of replication when compared to plasmids containing oriC. Rosen's group proposed that inactivation of membrane proteins involved in DNA replication is the mechanism of action of HClO.
Protein unfolding and aggregation
HClO is known to cause post-translational modifications to proteins, the notable ones being cysteine and methionine oxidation. A recent examination of HClO's bactericidal role revealed it to be a potent inducer of protein aggregation. Hsp33, a chaperone known to be activated by oxidative heat stress, protects bacteria from the effects of HClO by acting as a holdase, effectively preventing protein aggregation. Strains of Escherichia coli and Vibrio cholerae lacking Hsp33 were rendered especially sensitive to HClO. Hsp33 protected many essential proteins from aggregation and inactivation due to HClO, which is a probable mediator of HClO's bactericidal effects.
Hypochlorites
Hypochlorites are the salts of hypochlorous acid; commercially important hypochlorites are calcium hypochlorite and sodium hypochlorite.
Production of hypochlorites using electrolysis
Solutions of hypochlorites can be produced in situ by electrolysis of an aqueous sodium chloride solution in both batch and flow processes. The composition of the resulting solution depends on the pH at the anode. In acid conditions the solution produced will have a high hypochlorous acid concentration, but will also contain dissolved gaseous chlorine, which can be corrosive; at a neutral pH the solution will be around 75% hypochlorous acid and 25% hypochlorite. Some of the chlorine gas produced will dissolve, forming hypochlorite ions. Hypochlorites are also produced by the disproportionation of chlorine gas in alkaline solutions.
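The hypochlorous acid/hypochlorite split quoted above can be estimated from the acid-dissociation equilibrium. The following is a minimal worked sketch, not taken from the article, assuming the commonly cited value pKa ≈ 7.5 for hypochlorous acid:

```latex
% Fraction of dissolved chlorine present as HClO at a given pH,
% assuming pKa(HClO) ~ 7.5 (an assumed, commonly cited value).
\[
  f_{\mathrm{HClO}}
  = \frac{[\mathrm{HClO}]}{[\mathrm{HClO}] + [\mathrm{ClO^-}]}
  = \frac{1}{1 + 10^{\,\mathrm{pH} - \mathrm{p}K_a}}
\]
% Worked check at pH 7:
\[
  f_{\mathrm{HClO}} = \frac{1}{1 + 10^{\,7 - 7.5}} \approx \frac{1}{1.32} \approx 0.76
\]
```

That is, roughly three-quarters hypochlorous acid and one-quarter hypochlorite near neutral pH, consistent with the approximate 75%/25% split mentioned above.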
Safety
HClO is classified as non-hazardous by the Environmental Protection Agency in the US. As an oxidising agent, it can be corrosive or irritant depending on its concentration and pH.
In a clinical test, hypochlorous acid water was tested for eye irritation, skin irritation, and toxicity. The test concluded that it was non-toxic and non-irritating to the eye and skin.
In a 2017 study, a saline hygiene solution preserved with pure hypochlorous acid was shown to reduce the bacterial load significantly without altering the diversity of bacterial species on the eyelids. After 20 minutes of treatment, there was more than 99% reduction of the Staphylococci bacteria.
Commercialisation
Commercial disinfection applications remained elusive for a long time after the discovery of hypochlorous acid because the stability of its solution in water is difficult to maintain. The active compounds quickly deteriorate back into salt water, and the solution loses its disinfecting capability, which makes it difficult to transport for wide use. Despite its stronger disinfecting capability, it is less commonly used as a disinfectant than bleach and alcohol because of its cost.
Technological developments have reduced manufacturing costs and allow for manufacturing and bottling of hypochlorous acid water for home and commercial use. However, most hypochlorous acid water has a short shelf life. Storing away from heat and direct sunlight can help slow the deterioration. The further development of continuous flow electrochemical cells has been implemented in new products, allowing the commercialisation of domestic and industrial continuous flow devices for the in-situ generation of hypochlorous acid for disinfection purposes.
See also
Dichlorine monoxide: the corresponding acidic oxide
Hypofluorous acid
Perchloric acid
References
External links
National Pollutant Inventory – Chlorine
Reuters – Mystery solved: How bleach kills germs
"From Ground to Tap": a summary of the municipal tapwater treatment process
Disinfectants
Halogen oxoacids
Triatomic molecules
Hypochlorites
Mineral acids
Oxidizing acids | Hypochlorous acid | [
"Physics",
"Chemistry"
] | 4,313 | [
"Acids",
"Inorganic compounds",
"Mineral acids",
"Molecules",
"Oxidizing agents",
"Triatomic molecules",
"Oxidizing acids",
"Matter"
] |
578,143 | https://en.wikipedia.org/wiki/Edward%20Hebern | Edward Hugh Hebern (April 23, 1869 – February 10, 1952) was an early inventor of rotor machines, devices for encryption.
Background
Edward Hugh Hebern was born in Streator, Illinois, on April 23, 1869. His parents were Charles and Rosanna (Rosy) Hebern. They met in Harris County, Texas, while Charles was serving as guard and escort from the Civil War. On February 4, 1866, they married in Harris, Texas; Rosanna was only fifteen years old. After mustering out of the service on May 29, 1866, Charles and his new wife returned to Springfield, Illinois, and on June 18, 1866, he received his final pay and discharge.
Edward had an older sister, Arizona (Zoa) born in 1867, two younger brothers, Daniel Boone Hebern, born on February 17, 1871, and William Hebern, born April 8, 1875, in Houston, Texas, as well as a younger sister, Nellie Hebern, born in 1874.
At the age of 6, on August 4, 1875, Edward Hugh and three of his siblings were admitted to the Illinois Soldiers’ and Sailors’ home in Normal, Illinois. According to the Soldiers’ home register their father was listed as having died in 1874 in an unknown location, but he was admitted to the same Soldiers’ Home 40 years later. By February 13, 1879, the youngest Hebern child, William, was admitted to the Soldiers’ Home. Six months later on August 12, 1879, Rosanna married Archibald Thompson in Bloomington, Illinois.
On June 14, 1881, two months before her 14th birthday, Zoa left the Soldiers’ Home. Edward was discharged from the Soldiers’ Home in May 1883, after turning 14, and went to Odin, Illinois where he worked on a farm. By 1885, Daniel, Nellie, and William were all in Odin, Illinois. Zoa married Edward F. Clark (27 yrs. old) on August 18, 1886, in Coffey County, Kansas. Then they headed to Utah.
The rest of the children eventually moved to Madera, California, beginning with the two eldest boys in 1896. Daniel Boone Hebern was in North Fork, California, working as a laborer; his brother, Edward Hugh Hebern, was in Madera, farming. Daniel purchased two plots of land in North Fork.
Patent
He received a patent in 1918, shortly before three others patented much the same thing in other countries: Arthur Scherbius in Germany, Hugo Koch in the Netherlands, and Arvid Damm in Sweden. Hebern started a company to market the Hebern rotor machine; one of his employees was Agnes Meyer, who left the Navy in Washington, D.C., to work for Hebern in California. Scherbius designed the Enigma, Koch sold his patent to Scherbius a few years later, and Damm's company (taken over by Boris Hagelin after his death) moved to Switzerland and is still in existence, as Crypto AG.
By September 1922 Hebern started construction of the Hebern Building at 829 Harrison Street in Oakland, California. The striking two-story structure was built to accommodate 1,500 workers and had a luxurious office for Hebern. The 1923 stockholders’ report said it was “one of the most beautiful structures in California and said to be the only building in the State of true Gothic architecture throughout.”
By the time it was completed the following year, it had cost somewhere between $380,000 and $400,000, and the company still had no income. In fact, its first sale, to the Italian government, was still twenty-three months away. Eventually Hebern would sell twelve of his early machines to the Navy, the Pacific Steamship Company of Seattle, and a few other buyers, but his ambitious building was repossessed. The Hebern code building still stands today at 829 Harrison Street in Oakland and is primarily used as Oakland’s Asian resource center.
Hebern's implementation of his idea was less secure than he believed, for William F. Friedman found at least one method of attack when it was offered to the US Government. Hebern's company did not prosper, his promotional efforts for it were questioned, and he was tried and convicted for fraud. Agnes Meyer returned to Washington to work for the Navy.
Friedman went on to design a much more secure and complex rotor machine for the US Army. It eventually became the SIGABA.
Patents
References
External links
Hebern Code Machines
American cryptographers
Cipher-machine cryptographers
Rotor machines
Encryption devices
1869 births
1952 deaths
People from Streator, Illinois
Mathematicians from Illinois | Edward Hebern | [
"Physics",
"Technology"
] | 966 | [
"Physical systems",
"Machines",
"Rotor machines"
] |
578,147 | https://en.wikipedia.org/wiki/Kyocera | is a Japanese multinational ceramics and electronics manufacturer headquartered in Kyoto, Japan. It was founded as in 1959 by Kazuo Inamori and renamed in 1982. It manufactures industrial ceramics, solar power generating systems, telecommunications equipment, office document imaging equipment, electronic components, semiconductor packages, cutting tools, and components for medical and dental implant systems.
History
Origins to 2000
Kyocera's original product was a ceramic insulator known as a "kelcima" for use in cathode-ray tubes. The company quickly adapted its technologies to produce an expanding range of ceramic components for electronic and structural applications. In the 1960s, as the NASA space program, the birth of Silicon Valley and the advancement of computer technology created demand for semiconductor integrated circuits (ICs), Kyocera developed ceramic semiconductor packages that remain among its core product lines.
In the mid-1970s, Kyocera began expanding its material technologies to produce a diverse range of applied ceramic products, including solar photovoltaic modules; biocompatible tooth- and joint-replacement systems; industrial cutting tools; consumer ceramics, such as ceramic-bladed kitchen knives and ceramic-tipped ballpoint pens; and lab-grown gemstones, including rubies, emeralds, sapphires, opals, alexandrites and padparadschahs.
The company acquired electronic equipment manufacturing and radio communication technologies in 1979 through an investment in Cybernet Electronics Corporation, which was merged into Kyocera in 1982. Shortly afterward, Kyocera introduced one of the first portable, battery-powered laptop computers, sold in the U.S. as the Tandy Model 100, which featured an LCD screen and telephone-modem data transfer capability.
Kyocera gained optical technology by acquiring Yashica in 1983, along with Yashica's prior licensing agreement with Carl Zeiss, and manufactured film and digital cameras under the Kyocera, Yashica and Contax trade names until 2005, when the company discontinued all film and digital camera production.
In the 1980s, Kyocera marketed audio components, such as CD players, receivers, turntables, and cassette decks. These featured unique elements, including Kyocera ceramic-based platforms. At one time, Kyocera owned the famous KLH brand founded by Henry Kloss, though Kloss and the original Cambridge design and engineering staff had left the company by the time of the Kyocera purchase. In 1989, Kyocera stopped production of audio components and sought a buyer for the KLH brand.
In 1989, Kyocera acquired Elco Corporation, a manufacturer of electronic connectors. In 1990, Kyocera's global operations expanded significantly with the addition of AVX Corporation, a global manufacturer of passive electronic components, such as ceramic chip capacitors, filters and voltage suppressors.
Expanding sales of photovoltaic solar energy products led the company to create Kyocera Solar Corporation in Japan in 1996, and Kyocera Solar, Inc. in the U.S. in 1999.
On August 4, 1999, Kyocera completed its merger with solar energy systems integrator Golden Genesis Company (Nasdaq:GGGO).
Since 2000
In January 2000, Kyocera acquired photocopier manufacturer Mita Industrial Company, following Mita's decline and bankruptcy in the late 1990s. This resulted in the creation of Kyocera Mita Corporation (now Kyocera Document Solutions Corporation), headquartered in Osaka, Japan, with subsidiaries in more than 25 nations.
Also in 2000, Kyocera acquired the mobile phone manufacturing operations of Qualcomm Incorporated to form Kyocera Wireless Corp. In 2003, Kyocera Wireless Corp. established Kyocera Wireless India (KWI), a mobile phone subsidiary in Bangalore. KWI has established alliances with several leading players providing CDMA services in India. Kyocera Wireless Corporation was the first to combine BREW capabilities and enhanced brilliant Color displays on entry-level CDMA Handsets, when it demonstrated BREW-enabled handsets at the BREW 2003 Developers Conference.
In 2008, Kyocera acquired Sanyo Mobile, the mobile phone division of Sanyo Electric Co., Ltd., and its associated operations in Japan, the United States and Canada.
In April 2009, Kyocera unveiled its EOS concept phone at CTIA, with an OLED and which is powered by kinetic energy from the user. The prototype phone also has a foldable design which is capable of morphing into a variety of shapes.
In 2009 Kyocera sold its Indian R&D Division (Wireless) to Mindtree Limited.
In March 2010, Kyocera launched its first Smartphone (Zio) since 2001, after focusing on lower cost phones.
In March 2010, Kyocera announced the merger of its two wholly owned subsidiaries: San Diego–based Kyocera Wireless Corp. and Kyocera Communications, Inc. The merged enterprise continued under the name Kyocera Communications, Inc. Later that month, Kyocera agreed to acquire part of the thin-film transistor (TFT) liquid crystal display (LCD) design and manufacturing business of Sony Corporation's subsidiary Sony Mobile Display Corporation.
In October 2010, Kyocera acquired 100% ownership of the shares of TA Triumph-Adler AG (Nuremberg, Germany) and converted the daughter company into TA Triumph-Adler GmbH. TA Triumph-Adler GmbH currently distributes Kyocera-made printing devices and software with TA Triumph-Adler and UTAX trademarks within the EMEA (Europe-Middle East-Africa) region. TA Triumph-Adler GmbH is located in Nuremberg, Germany and UTAX GmbH (subsidiary of TA Triumph-Adler) in Norderstedt, Germany.
In July 2011, Kyocera's wholly owned Germany-based subsidiary Kyocera Fineceramics GmbH acquired 100% ownership of the shares in Denmark-based industrial cutting tool manufacturing and sales company Unimerco Group A/S. Unimerco had been founded in Denmark in 1964. Today, the subsidiary is known as Kyocera Unimerco A/S, and comprises a tooling division and fastening division.
In February 2012, Kyocera became the total stock holder of Optrex Corporation, which was subsequently renamed Kyocera Display Corporation.
In March 2016, Kyocera acquired an international cutting tool company called SGS Tool Company for $89 million.
In August 2017, Kyocera acquired 100% ownership of Senco Industrial Tools.
In November 2020, Kyocera acquired a light source company called SLD Laser. The company developed a product that uses phosphor to convert blue laser light into a broad-spectrum, incoherent, high-luminance white light source.
Main products
Printers and multi-function devices
Kyocera Document Solutions Corporation manufactures a wide range of printers, MFPs, and toner cartridges which are sold throughout Europe, the Middle East, Africa, Australia and the Americas. Kyocera printing devices are also marketed under the Copystar name in the Americas and under the TA Triumph-Adler and UTAX names in the EMEA (Europe-Middle East-Africa) region. This division is overseen by Aaron Thomas (North American division president), Henry Goode, and Adam Stevens.
Satellite phones
In the past, Kyocera manufactured satellite phones for the Iridium network. Three handsets were released in 1999 including one with an unusual docking station which contained the Iridium transceiver and antenna, as well as a pager for the Iridium network.
Mobile phones
North America (Kyocera International, Inc.)
Kyocera manufactures mobile phones for wireless carriers in the United States and Canada. Marketing is done by its subsidiary Kyocera International, Inc.
Kyocera acquired the terminal business of US digital communications technology company Qualcomm in February 2000, and became a major supplier of mobile handsets. In 2008, Kyocera also took over the handset business of Sanyo, eventually forming 'Kyocera Communications, Inc.'. The Kyocera Communications terminal division is located in San Diego.
Japan
Kyocera Corporation manufactures and markets phones for the Japanese market which are sold under different brands. Kyocera makes phones for some Japanese wireless carriers including au, willcom, SoftBank and Y!mobile.
In May 2012, Kyocera released the world's first speaker-less smartphone, the Kyocera Urbano Progresso. This phone produces vibration to conduct sound through the ear canal instead of the customary speaker, making it easier to hear phone conversations in busy and noisy places. This also benefits those who are having difficulty hearing, but are not totally deaf. It could be used across the world on CDMA, GSM, GPRS and UMTS networks. This phone was only available in Japan.
Solar cells
Kyocera maintains production bases for photovoltaic cells and solar modules in Japan and China. In 2009, it was announced that Kyocera's solar modules were available as an option on the Toyota Prius.
The company also operates solar power plants, such as the Kagoshima Nanatsujima Mega Solar Power Plant.
Advanced ceramics
Kyocera sells ceramic knives via its web store and retail outlets under the name Kyocera Advanced Ceramics.
Corporate affairs
Kyocera's headquarters building in Kyoto is tall. A 1,900-panel photovoltaic power system is on the roof and south wall of the building, which can supply 12.5% of the facility's needed energy, generating 182 megawatt hours per year.
Sponsorships
Between 1978 and 1998, Kyocera and the International Affairs Board of the City of San Diego sponsored an all-expense paid tour of Japan for students from the United States called HORIZON (stylized in all capital letters and designated by year: e.g. HORIZON '98). The program's purpose was to acquaint these students with the Japanese people and their culture, and to facilitate friendship and understanding. The program was open to students ages 10–14; applicants were chosen randomly.
The brand Mita was the first main sponsor of the Argentinian team Atlético Independiente, from 1985 to 1992. Mita also sponsored English club Aston Villa F.C., appearing on shirt fronts from 1984 to 1993, and Italian club Como 1907 from 1983 to 1989. Between 2005 and 2008, Kyocera also sponsored Reading F.C. and Brazilian football team Atlético Paranaense, having the naming rights of their stadium.
Kyocera is currently the sponsor of the football club Kyoto Sanga F.C. of the J-League (its hometown team; here the word "Kyocera" is written in Japanese katakana, everywhere else in the Latinized logo). Kyocera holds the naming rights for the Kyocera Dome Osaka, colloquially known as Osaka Dome. The indoor dome is the home field of the baseball teams Orix Buffaloes and Hanshin Tigers.
Gallery of products
See also
Cybernet (brand)
Kyoto Prize
Taito
List of digital camera brands
References
External links
Kyocera Global site
Kyocera Communications, Inc. site (archived 24 April 2014)
Kyotronic 85 (archived 27 July 2010)
Kyocera Plans to Build 350-MW Solar Cell Manufacturing Plant (archived 7 March 2012)
Kyocera Constructing New Solar Manufacturing Plant In China
Top 5 Best Kyocera Photocopiers
Electronics companies of Japan
Solar energy companies of Japan
Mobile phone manufacturers
Photovoltaics manufacturers
Conglomerate companies of Japan
Defense companies of Japan
Midori-kai
Multinational companies headquartered in Japan
Computer companies of Japan
Computer hardware companies
Computer printer companies
Conglomerate companies established in 1959
Electronics companies established in 1959
Manufacturing companies established in 1959
Manufacturing companies based in Kyoto
Companies listed on the Osaka Exchange
Companies listed on the Tokyo Stock Exchange
Companies in the Nikkei 225
Japanese brands
Japanese companies established in 1959
Knife manufacturing companies
1970s initial public offerings | Kyocera | [
"Technology",
"Engineering"
] | 2,426 | [
"Computer hardware companies",
"Photovoltaics manufacturers",
"Computers",
"Engineering companies"
] |
578,150 | https://en.wikipedia.org/wiki/Standard%20hydrogen%20electrode | In electrochemistry, the standard hydrogen electrode (abbreviated SHE) is a redox electrode which forms the basis of the thermodynamic scale of oxidation-reduction potentials. Its absolute electrode potential is estimated to be 4.44 ± 0.02 V at 25 °C, but to form a basis for comparison with all other electrochemical reactions, hydrogen's standard electrode potential (E°) is declared to be zero volts at any temperature. Potentials of all other electrodes are compared with that of the standard hydrogen electrode at the same temperature.
Nernst equation for SHE
The hydrogen electrode is based on the redox half cell corresponding to the reduction of two hydrated protons, 2 H+(aq), into one gaseous hydrogen molecule, H2(g).
General equation for a reduction reaction:
Ox + z e− → Red
The reaction quotient (Qr) of the half-reaction is the ratio between the chemical activities (a) of the reduced form (the reductant, a(Red)) and the oxidized form (the oxidant, a(Ox)).
Considering the redox couple:
2 H+(aq) + 2 e− ⇌ H2(g)
at chemical equilibrium, the ratio of the reaction products to the reagents is equal to the equilibrium constant of the half-reaction:
K = a(H2) / a(H+)²
where
a(Red) and a(Ox) correspond to the chemical activities of the reduced and oxidized species involved in the redox reaction
a(H+) represents the activity of H+
a(H2) denotes the chemical activity of gaseous hydrogen (H2), which is approximated here by its fugacity
p(H2) denotes the partial pressure of gaseous hydrogen, expressed without unit (i.e., divided by the standard pressure), where:
x(H2) is the mole fraction of hydrogen in the gas phase
P is the total gas pressure in the system
p° is the standard pressure (1 bar = 100 kPa), introduced here simply to overcome the pressure unit and to obtain an equilibrium constant without unit.
More details on managing gas fugacity to get rid of the pressure unit in thermodynamic calculations can be found at thermodynamic activity#Gases. The approach followed is the same as for chemical activity and molar concentration of solutes in solution. In the SHE, pure hydrogen gas (H2) at the standard pressure of 1 bar is engaged in the system. Meanwhile, the general SHE equation can also be applied to other thermodynamic systems with different mole fraction or total pressure of hydrogen.
This redox reaction occurs at a platinized platinum electrode.
The electrode is immersed in the acidic solution and pure hydrogen gas is bubbled over its surface. The concentration of both the reduced and oxidised forms of hydrogen are maintained at unity. That implies that the pressure of hydrogen gas is 1 bar (100 kPa) and the activity coefficient of hydrogen ions in the solution is unity. The activity of hydrogen ions is their effective concentration, which is equal to the formal concentration times the activity coefficient. These unit-less activity coefficients are close to 1.00 for very dilute water solutions, but usually lower for more concentrated solutions.
As the general form of the Nernst equation at equilibrium is the following:
E = E° − (RT / zF) ln ( a(Red) / a(Ox) )
and as, by definition, in the case of the SHE,
E° = 0 V
The Nernst equation for the SHE becomes:
E = − (RT / 2F) ln ( (p(H2) / p°) / a(H+)² )
Simply neglecting the pressure unit present in p(H2) / p°, this last equation can often be directly written as:
E = − (RT / 2F) ln ( p(H2) / a(H+)² )
And by solving the numerical values for the term (RT / F) ln 10, which is about 0.0591 V at 25 °C,
the practical formula commonly used in the calculations of this Nernst equation is:
E = −0.0591 · ( pH + ½ log10 p(H2) )   (unit: volt)
As p(H2) = 1 bar under standard conditions, the equation simplifies to:
E = −0.0591 · pH   (unit: volt)
This last equation describes a straight line with a negative slope of −0.0591 volt per pH unit, delimiting the lower stability region of water in a Pourbaix diagram, where gaseous hydrogen is evolving because of water decomposition.
where:
a(H+) is the activity of the hydrogen ions (H+) in aqueous solution, with a(H+) = γ(H+) · C(H+) / C°, where:
γ(H+) is the activity coefficient of hydrogen ions (H+) in aqueous solution
C(H+) is the molar concentration of hydrogen ions (H+) in aqueous solution
C° is the standard concentration (1 M) used to overcome the concentration unit
p(H2) is the partial pressure of the hydrogen gas, in bar
R is the universal gas constant: 8.3145 J⋅K−1⋅mol−1 (rounded here to 4 decimal places)
T is the absolute temperature, in kelvin (at 25 °C: 298.15 K)
F is the Faraday constant (the charge per mole of electrons), equal to 96485 C⋅mol−1
p° is the standard pressure: 1 bar = 100 kPa
Note: as the system is at chemical equilibrium, hydrogen gas, H2(g), is also in equilibrium with dissolved hydrogen, H2(aq), and the Nernst equation implicitly takes into account Henry's law for gas dissolution. Therefore, there is no need to independently consider the gas dissolution process in the system, as it is already de facto included.
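As a numerical illustration of the practical formula above, the following is a minimal sketch (not part of the article); the function name and the printed examples are illustrative assumptions, and the code simply evaluates E = −(RT ln 10 / F)(pH + ½ log10 p(H2)).

```python
import math

R = 8.314462618    # universal gas constant, J K^-1 mol^-1
F = 96485.33212    # Faraday constant, C mol^-1

def she_potential(ph: float, p_h2_bar: float = 1.0, temp_k: float = 298.15) -> float:
    """Hydrogen-electrode potential (in volts, vs SHE) from the Nernst equation.

    E = -(R*T*ln(10)/F) * (pH + 0.5*log10(p_H2/p0)), with p0 = 1 bar.
    """
    slope = R * temp_k * math.log(10) / F      # ~0.0592 V per pH unit at 25 degC
    return -slope * (ph + 0.5 * math.log10(p_h2_bar))

# At pH 0 with 1 bar H2 (standard conditions) the potential is 0 V by definition;
# it then drops by roughly 59 mV per pH unit:
print(round(she_potential(7.0), 4))                   # ~ -0.4141
print(round(she_potential(14.0), 4))                  # ~ -0.8282
print(round(she_potential(7.0, p_h2_bar=10.0), 4))    # ~ -0.4437 (higher H2 pressure)
```

The pH-0 and pH-14 values reproduce the lower water-stability line of a Pourbaix diagram mentioned above.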
SHE vs NHE vs RHE
During the early development of electrochemistry, researchers used the normal hydrogen electrode as their standard for zero potential. This was convenient because it could actually be constructed by "[immersing] a platinum electrode into a solution of 1 N strong acid and [bubbling] hydrogen gas through the solution at about 1 atm pressure". However, this electrode/solution interface was later changed. What replaced it was a theoretical electrode/solution interface, where the concentration of H+ was 1 M, but the H+ ions were assumed to have no interaction with other ions (a condition not physically attainable at those concentrations). To differentiate this new standard from the previous one, it was given the name 'standard hydrogen electrode'.
Finally, there are also reversible hydrogen electrodes (RHEs), which are practical hydrogen electrodes whose potential depends on the pH of the solution.
In summary,
NHE (normal hydrogen electrode): potential of a platinum electrode in 1 M acid solution with 1 bar of hydrogen bubbled through
SHE (standard hydrogen electrode): potential of a platinum electrode in a theoretical ideal solution (the current standard for zero potential for all temperatures)
RHE (reversible hydrogen electrode): a practical hydrogen electrode whose potential depends on the pH of the solution
Choice of platinum
The choice of platinum for the hydrogen electrode is due to several factors:
inertness of platinum (it does not corrode)
the capability of platinum to catalyze the reaction of proton reduction
a high intrinsic exchange current density for proton reduction on platinum
excellent reproducibility of the potential (bias of less than 10 μV when two well-made hydrogen electrodes are compared with one another)
The surface of platinum is platinized (i.e., covered with a layer of fine powdered platinum also known as platinum black) to:
Increase total surface area. This improves reaction kinetics and maximum possible current
Use a surface material that adsorbs hydrogen well at its interface. This also improves reaction kinetics
Other metals can be used for fabricating electrodes with a similar function such as the palladium-hydrogen electrode.
Interference
Because of the high adsorption activity of the platinized platinum electrode, it is very important to protect the electrode surface and the solution from the presence of organic substances as well as from atmospheric oxygen. Inorganic ions that can be reduced to a lower valency state at the electrode also have to be avoided (e.g., , ). A number of organic substances are also reduced by hydrogen on a platinum surface, and these also have to be avoided.
Cations that can be reduced and deposited on the platinum can be source of interference: silver, mercury, copper, lead, cadmium and thallium.
Substances that can inactivate ("poison") the catalytic sites include arsenic, sulfides and other sulfur compounds, colloidal substances, alkaloids, and material found in biological systems.
Isotopic effect
The standard redox potential of the deuterium couple is slightly different from that of the proton couple (ca. −0.0044 V vs SHE). Various values in this range have been obtained: −0.0061 V, −0.00431 V, −0.0074 V.
2 D+(aq) + 2 e− → D2(g)
A difference also occurs when hydrogen deuteride (HD, or deuterated hydrogen, DH) is used instead of hydrogen in the electrode.
Experimental setup
The scheme of the standard hydrogen electrode:
platinized platinum electrode
hydrogen gas
solution of the acid with activity of H+ = 1 mol dm−3
hydroseal for preventing oxygen interference
reservoir through which the second half-element of the galvanic cell should be attached. The connection can be direct, through a narrow tube to reduce mixing, or through a salt bridge, depending on the other electrode and solution. This creates an ionically conductive path to the working electrode of interest.
See also
Table of standard electrode potentials
Reversible hydrogen electrode
Palladium-hydrogen electrode
Reference electrode
Dynamic hydrogen electrode
Quinhydrone electrode
Thermodynamic activity
Standard state
References
External links
Electrodes
Hydrogen technologies | Standard hydrogen electrode | [
"Chemistry"
] | 1,825 | [
"Electrochemistry",
"Electrodes"
] |
578,243 | https://en.wikipedia.org/wiki/Apocrine | Apocrine () is a term used to classify the mode of secretion of exocrine glands. In apocrine secretion, secretory cells accumulate material at their apical ends, often forming blebs or "snouts", and this material then buds off from the cells, forming extracellular vesicles. The secretory cells therefore lose part of their cytoplasm in the process of secretion.
An example of true apocrine glands is the mammary glands, responsible for secreting breast milk. Apocrine glands are also found in the anogenital region and axillae.
Apocrine secretion is less damaging to the gland than holocrine secretion (which destroys a cell) but more damaging than merocrine secretion (exocytosis).
Apocrine metaplasia
Apocrine metaplasia is a reversible transformation (metaplasia) of cells to an apocrine phenotype. It is common in the breast in the context of fibrocystic change. It is seen mostly in women over the age of 50 years. Metaplasia happens when there is an irritation to the breast (such as a breast cyst). Apocrine-like cells form in the lining of developing microcysts due to the pressure buildup within the lumen, which is caused by secretions. This type of metaplasia represents an exception to the common rule that metaplasia increases the risk of developing cancer, in that apocrine metaplasia does not increase the possibility of developing breast cancer. Metaplastic apocrine cells belong to the category of oncocytes, a group characterized by abundant acidophilic, granular cytoplasm (from the Greek root onco-, which means mass or bulk).
Apocrine ductal carcinoma in situ
Apocrine ductal carcinoma in situ (ACDIS) is a very rare breast carcinoma which is regarded as a variant of the ductal carcinoma in situ breast tumors. ACDIS tumors have microscopic histopathology features that are similar to pure apocrine carcinoma of the breast tumors but differ from them in that they are completely localized, i.e. have not invaded nearby tissues or metastasized to distant tissues.
Apocrine carcinoma
Apocrine carcinoma is a very rare form of female breast cancer, with a reported incidence of 0.5 to 4%. Cytologically, the cells of apocrine carcinoma are relatively large and granular, with prominent eosinophilic cytoplasm. When an apocrine carcinoma tests as "triple negative", it means that the patient's tumour cells do not express the estrogen receptor, progesterone receptor, or HER2 receptor.
References
External links
Diagram at uwa.edu.au
Exocrine system | Apocrine | [
"Biology"
] | 612 | [
"Exocrine system",
"Organ systems"
] |
578,271 | https://en.wikipedia.org/wiki/Audio%20description | Audio description (AD), also referred to as a video description, described video, or visual description, is a form of narration used to provide information surrounding key visual elements in a media work (such as a film or television program, or theatrical performance) for the benefit of blind and visually impaired consumers. These narrations are typically placed during natural pauses in the audio, and sometimes overlap dialogue if deemed necessary. Occasionally when a film briefly has subtitled dialogue in a different language, such as Greedo's confrontation with Han Solo in the 1977 film Star Wars: A New Hope, the narrator will read out the dialogue in character.
In museums or visual art exhibitions, audio described tours (or universally designed tours that include description or the augmentation of existing recorded programs on audio- or videotape), are used to provide access to visitors who are blind or have low vision. Docents or tour guides can be trained to employ audio description in their presentations.
In film and television, description is typically delivered via a secondary audio track. In North America, Second audio program (SAP) is typically used to deliver audio description by television broadcasters. To promote accessibility, some countries (such as Canada and the United States) have implemented requirements for broadcasters to air specific quotas of programming containing audio description.
History
The transition to "talkies" in the late 1920s resulted in a push to make the cinema accessible to the visually impaired. The New York Times documented the "first talking picture ever shown especially for the blind"—a 1929 screening of Bulldog Drummond attended by members of the New York Association for the Blind and New York League for the Hard of Hearing, which offered a live description for the visually-impaired portion of the audience. In the 1940s and 1950s, Radio Nacional de España aired live audio simulcasts of films from cinemas with descriptions, framing these as a form of radio drama before the advent of television.
In the 1980s, the Media Access Group of U.S. public television station WGBH-TV (which had already gained notability for their involvement in developing closed captioning) developed an implementation of audio description for television programming via second audio program (SAP), which it branded as "Descriptive Video Service" (DVS). It was developed in consultation with Dr. Margaret Pfanstiehl of Washington, D.C., who had performed descriptions at theatrical performances and had run a radio reading service known as the Washington Ear. After four years of development and on-air trials (which included a proof of concept that aired the descriptions on a radio station in simulcast with the television airing), WGBH officially launched audio description via 32 participating PBS member stations, beginning with the new season of American Playhouse on January 24, 1990.
In the 1990s at cinemas in California, RP International began to offer audio descriptions for theatrical films under the brand TheatreVision, relayed via earpieces to those who request it. A clip from Schindler's List was used to pitch the concept to the film's producers Gerald Molen and Branko Lustig, and one of the first films to be presented in this format was Forrest Gump (1994). TheatreVision sought notable personalities and celebrities to volunteer in providing these narrations, such as sportscaster Vin Scully, William Shatner, Monty Hall, and former U.S. president George H. W. Bush (for It's a Wonderful Life). Sometimes the narrator had ties to the film or was part of its cast; Irene Bedard described Pocahontas—a film where she had voiced the title character, and for the 1994 remake of Little Women, stars from previous versions of the film volunteered, including June Allyson, Margaret O'Brien, and Janet Leigh (whose grandmother was blind) from the 1949 version of the film, as well as Katharine Hepburn—star of the 1933 version. Other companies emerged in providing descriptions for programming in the U.S., including the National Captioning Institute, Narrative Television Network, and others.
In the UK, audio description services were made available on the BBC and ITV after a collaborative project with industry partners. In 2000, the BBC voluntarily committed to providing descriptions for at least 20% of its programming annually. In practice, the BBC has often exceeded these targets. In 2009, BBC iPlayer became the first streaming video-on-demand service in the world to support AD: every programme that was broadcast with AD also had AD on BBC iPlayer. On January 29, 2009, The Accessible Channel was launched in Canada, which broadcasts "open" audio descriptions on all programming via the primary audio track. Audio description has also been extended to live events, including sporting events, the ceremonies of the Olympic and Paralympic Games, and the royal wedding of Prince William and Catherine Middleton, among others.
In April 2015, the subscription streaming service Netflix announced that it had added support for audio description, beginning with Daredevil—a series based on a comic book character who himself is blind, and would add descriptions to current and past original series on the platform over time. The following year, as part of a settlement with the American Council of the Blind, Netflix agreed to provide descriptions for its original series within 30 days of their premiere, and add screen reader support and the ability to browse content by availability of descriptions.
On June 17, 2016, Pornhub announced that it would launch a collection of pornographic videos with audio descriptions. The initiative is sponsored by the website's philanthropic arm Pornhub Cares.
In the late-2010s, Procter & Gamble began to add descriptions to some of its television commercials, first in the United Kingdom, and later Spain and the United States.
Legal mandates in television broadcasting
Canada
Under Canadian Radio-television and Telecommunications Commission (CRTC) rules, broadcast television stations and former Category A services that dedicated more than half of their programming to comedy, drama, or long-form documentary programs, were required to broadcast at least four hours of programming with audio descriptions (known in Canadian English as described video) per-week, with two hours of this "original" to the channel per-week. These programs must have been drawn from children's, comedy, drama, long-form documentaries, general entertainment and human interest, reality, or variety genres. Broadcasters must also promote the availability of DV programming, including airing a standard audiovisual bumper and logo at the beginning of all programs offering description (the CRTC officially recommends that this announcement be repeated after the conclusion of each commercial break, but this is not typically practiced). All television providers are also required to carry AMI-tv (formerly The Accessible Channel), a specialty channel that broadcasts all programming with descriptions on the primary audio track.
On March 26, 2015, the CRTC announced that beginning September 1, 2019, most broadcast and specialty networks owned by vertically integrated conglomerates, as well as any channel previously subject to license conditions specifying minimums for DV, are required to supply described video for any prime-time programming (7:00 p.m. to 11:00 p.m.) that falls within the aforementioned genres. The requirement that a quota of DV programming be "original" to the network was also dropped. Citing the possibility that not enough imported U.S. programming may be supplied with descriptions for their first airing, and the burden this may place on their ability to carry these programs, the CRTC granted an exception to Bell Media, Corus Entertainment, and Rogers Media, along with minor companies DHX Media, CBC, Blue Ant Media, V, and TVA Group, for foreign programming that is received within 24 hours of its scheduled airing—provided that any future airings of the same program in prime-time contain descriptions. In addition, other licensed discretionary services would be expected to air at least four hours of DV programming per-week by the fourth year of their next license term.
United Kingdom
The Ofcom code on television access services requires broadcasters that have been on the air for at least five years to broadcast at least 10% of their programming with descriptions. Scrutiny has applied even to ESPN UK—a sports channel—which was fined £120,000 by Ofcom for not meeting an AD quota in 2012. The regulator rejected an argument by ESPN that AD was redundant to commentary, as it is "not provided with the needs of the visually impaired in mind".
United States
Initially, audio description was provided as a public service. However, in 2000, the Federal Communications Commission would enact a policy effective April 1, 2002, requiring the affiliates of the four major television networks in the top 25 markets, and television providers with more than 50,000 subscribers via the top 5 cable networks as determined by Nielsen ratings, to offer 50 hours of programming with descriptions during primetime or children's programming per-quarter. However, the order faced a court challenge led by the MPAA, who questioned the FCC's jurisdiction on the matter. In November 2002, the Court of Appeals for the District of Columbia Circuit ruled that the FCC had no statutory jurisdiction to enforce such a rule.
This was rectified in 2010 with the passing of the Twenty-First Century Communications and Video Accessibility Act, which gave the FCC jurisdiction to enforce video description requirements. The previously intended quotas were reinstated on July 1, 2012, and have been gradually increased to require more programming and wider participation since their implementation.
Operation
Broadcast audio description is typically delivered via an alternate audio track, either as a separate language track containing the narration only (which, if the playback device is capable of doing so, is mixed with the primary audio track automatically, and can have separate volume settings), or on a secondary audio track pre-mixed with the primary track, such as a secondary audio program (SAP).
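As a rough sketch of the receiver-mix model just described (a narration-only track mixed with the programme audio by the playback device, with its own volume setting), the snippet below combines two already-decoded sample arrays; the function name, gain value, and test signals are illustrative assumptions, not part of any broadcast specification.

```python
import numpy as np

def mix_description(programme: np.ndarray, description: np.ndarray,
                    description_gain: float = 1.0) -> np.ndarray:
    """Mix a narration-only description track into the programme audio.

    Both inputs are float sample arrays in [-1.0, 1.0] at the same sample rate;
    the description track is silent outside the narrated spans.
    """
    n = max(len(programme), len(description))
    mixed = np.zeros(n, dtype=np.float32)
    mixed[:len(programme)] += programme
    mixed[:len(description)] += description_gain * description
    # Hard-limit the sum to avoid clipping.
    return np.clip(mixed, -1.0, 1.0)

# Example: one second of programme audio at 48 kHz plus a louder narration burst.
rate = 48_000
t = np.arange(rate) / rate
programme = (0.2 * np.sin(2 * np.pi * 440 * t)).astype(np.float32)
description = np.zeros(rate, dtype=np.float32)
description[: rate // 2] = 0.3          # stand-in for narration in the first half second
print(mix_description(programme, description, description_gain=1.5).shape)  # (48000,)
```

Pre-mixed delivery, by contrast, performs this summation before broadcast, which is why such tracks cannot offer a separate description volume control.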
Many video on demand (VOD) and streaming platforms host separate assets for the audio-described media, with the soundtrack pre-mixed. Despite AD typically being presented as something that can be enabled (as with subtitles), users can encounter problems when trying to turn AD on or off because the underlying media version they require is unavailable.
In movie theaters, audio description can be heard using DVS Theatrical and similar systems (including DTS-CSS and Dolby Screentalk). Users listen to the description on a wireless headset. Audio description is stored in the Digital Cinema Package as "Visually Impaired-Native" (VI-N) audio on Sound Track channel 8.
In live theaters, patrons also receive the description via a wireless device, a discreet monaural receiver. However, the description is provided live by describers located in a booth that is acoustically insulated from the audience but gives them a good view of the performance; their narration is fed to a small radio transmitter.
Audio description in football stadiums
In 2006, on the occasion of the 2006 FIFA World Cup in Germany, a project was launched with the aim of making live commentary of matches available to blind and visually impaired football fans in the stadium. The project was very well received and had great success. In 2008, audio description in football was also adopted in Switzerland. The club radio of FC Basel 1893 was the first in Switzerland to take up this idea. First, FC Basel installed an antenna in St. Jakob-Park, which was used to broadcast the radio's live commentary; visually impaired and blind fans could then listen to the commentary via a VHF frequency. More and more clubs in the Swiss Super League adopted this concept, and today matches can be heard via audio description in every stadium in Switzerland; at St. Jakob-Park in Basel, the commentary is even available without delay via the Internet. In the meantime, the outdated technology of FM transmission has been abandoned, and today the games are broadcast via cell phone apps. In Germany, almost every stadium is also equipped with this technology.
Descriptive Video Service
The Descriptive Video Service (DVS) is a major United States producer of audio description. DVS often is used to describe the product itself.
In 1985, PBS member television station WGBH-TV in Boston, Massachusetts, began investigating uses for the new technology of stereophonic television broadcasting, particularly multichannel television sound (MTS), which allowed for a third audio channel, called the Secondary Audio Program (SAP). With a history of developing closed captioning of programs for hearing-impaired viewers, WGBH considered the viability of using the new audio channel for narrated descriptions of key visual elements, much like those being done for live theatre in Washington, D.C., by Margaret Pfanstiehl, who had been experimenting with television description as part of her Washington Ear radio reading service.
After reviewing and conducting various studies, which found that blind and visually impaired people were consuming more television than ever but finding the activity problematic (often relying on sighted family and friends to describe for them), WGBH consulted more closely with Pfanstiehl and her husband, Cody, and then conducted its first tests of DVS in Boston in 1986. These tests (broadcasting to local groups of people of various ages and visual impairments) and further study were successful enough to merit a grant from the Corporation for Public Broadcasting to complete plans to establish the DVS organization permanently in 1988. After national testing, more feedback, more development of description technique, and additional grants, DVS became a regular feature of selected PBS programming in 1990. Later, DVS became an available feature in some films and home videos, including DVDs.
Technique
DVS describers watch a program and write a script describing visual elements which are important in understanding what is occurring at the time and the plot as a whole. For example, in the opening credit sequence of the children's series Arthur on PBS, the description has been performed as follows:
The length of descriptions and their placement by a producer into the program are largely dictated by what can fit in natural pauses in dialogue (other producers of description may have other priorities, such as synchronization with the timing of a described element's appearance, which differ from DVS's priority for detail). Once recorded, placed and mixed with a copy of the original soundtrack, the DVS track is then "laid back" to the master tape on a separate audio track (for broadcast on the SAP) or to its own DVS master (for home video). For feature films, the descriptions are not mixed with the soundtrack, but kept separate as part of a DTS soundtrack.
FCC involvement
When the Federal Communications Commission (FCC) started establishing various requirements for broadcasters in larger markets to improve their accessibility to audiences with hearing and vision impairments, DVS branched out to non-PBS programming, and soon description could be heard on the SAP for shows such as CSI: Crime Scene Investigation and The Simpsons. However, a federal court ruled in 2002 that the Federal Communications Commission had exceeded its jurisdiction by requiring broadcasters in the top 25 markets to carry video description.
Since that time, the amount of new DVS television programming in the United States declined, as did access to information regarding upcoming described programming, while broadcasters like ABC and Fox instead decided to devote their SAP channels to Spanish language dubbing tracks of their shows rather than DVS due to the technical limitations of the analog NTSC standard. Description by DVS and other producers was still available in a limited form on television (the greatest percentage of DVS programming is still on PBS). WGBH's Media Access Group continues supporting description of feature films (known as DVS Theatrical) and DVS home videos/DVDs are available from WGBH as well as other vendors and libraries. Commercial caption providers the National Captioning Institute and CaptionMax have also begun to describe programs. Benefit Media, Inc., a subsidiary of DuArt Film and Video in New York City provides DVS services to USA Network. For the 2016 Summer Olympics, NBC is providing description of events during the network's primetime block.
The 21st Century Communications and Video Accessibility Act of 2010 reinstated the FCC's authority to issue rules for video description. Under the rules, affiliates in the top 25 markets and the top five-rated cable networks must provide at least 50 hours of video-described programming per quarter; the rules took effect on July 1, 2012. However, this provision does not currently apply to syndicated programming; notably, many programs that have audio description in their network runs, such as those produced by Twentieth Century Fox Television, remove the DVS track for syndication, substituting the Spanish dubbing track on SAP to reach more viewers. Many stations affiliated with "netlets" like The CW and MyNetworkTV are not covered by the video description provision and do not have SAP channels, so neither an audio description nor a Spanish dub track can be heard. In some markets where SAP is activated on affiliate stations, though, The CW provided a Spanish SAP dub for Jane the Virgin through the series' entire run, and audio description is available and passed through for its Saturday-morning One Magnificent Morning E/I block, as it is for all of the blocks produced for the major broadcast networks by Litton Entertainment. In 2019, the network launched its first primetime series with DVS, In the Dark (which has a blind protagonist); the series' description propagated to its Netflix run several weeks after it was placed on that service following the first-season finale. MyNetworkTV has no provisions for audio description or language dub tracks, despite many of its scripted series having DVS tracks.
Online streaming services such as Hulu, and the networks' own services such as CBS All Access, have in most cases yet to carry descriptive video service audio, as they are currently focused on adding closed captioning to their libraries (the ABC network app began to carry existing audio-described shows in the fall of 2017). Netflix committed in April 2015 to begin audio description of its original series, starting with Daredevil (which features a blind protagonist with otherwise heightened senses) and extending to the remainder of its original programming in the following months, a goal it met in that timeframe, along with providing the DVS tracks of existing series in its library; however, some platforms (mainly older versions for devices that are now unsupported) do not provide the alternate audio.
ABC, along with sister network Disney Channel, has since added audio description to some of their programming (with a commensurate decline in Spanish-dubbed programming, though the ATSC standard allows more audio channels), but does not contract any of their shows to be described by the Media Access Group, instead going with commercial providers CaptionMax and Audio Eyes. Some special programming such as Toy Story of Terror! and Toy Story That Time Forgot is described by the Media Access Group under existing contracts with Walt Disney Pictures. NBC and their associated cable networks, along with outside productions by Universal Television such as Brooklyn Nine-Nine and The Mindy Project, solely use CaptionMax for description services; Netflix also utilizes CaptionMax for its original series, while arrangements for acquired programming vary by studio. Most scripted programming on Fox, except for the shows of Gordon Ramsay (Hell's Kitchen, Hotel Hell and Kitchen Nightmares), is described by the Media Access Group; Ramsay's programs are contracted by his producing studio to have audio description done by Scottish-born voiceover artist Mhairi Morrison with Descriptive Video Works. Unusually among described shows, Fox's Empire uses actress Adrienne Barbeau for its description. CBS's described shows all use the Media Access Group.
Some shows have lost their DVS during their original network runs due to outside factors or complications. For instance, American Dad! had a two-season interregnum, covering part of season 12 and all of season 13, without any DVS service during its move from Fox to TBS in late 2014, before description returned in November 2016 for its fourteenth season. The Mindy Project lost DVS at the start of its fourth season upon the move to Hulu, which does not yet provide DVS service. Cartoon Network and their time-share partner Adult Swim began to pass through DVS for their syndicated content in the last quarter of 2018.
See also
Novelization
Radio drama
TheatreVision
Citations
General and cited references
Cronin, Barry J. Ph.D. and Robertson King, Sharon, MA. "The Development of the Descriptive Video Service", Report for the National Center to Improve Practice. Retrieved on July 30, 2007.
"The ABC's of DVS", WGBH - Media Access Group. Retrieved on July 30, 2007.
"Our Inclusive Approach ", AudioVision. Retrieved on July 30, 2007.
DVS FAQ, WGBH - Media Access Group. Retrieved on July 30, 2007.
"Media Access Guide Volume 3", WGBH - Media Access Group. Retrieved on July 30, 2007.
"ACB Statement on Video Description" American Council for the Blind Legislative Seminar 2006, February 1, 2006. Retrieved from Audio Description International on July 30, 2007.
List of PBS series with DVS, August 2007, WGBH - Media Access Group. Retrieved on July 30, 2007.
Homepage, MoPix. Retrieved on July 30, 2007.
"DVS Home Video" WGBH - Media Access Group. Retrieved on July 30, 2007.
Further reading
Hirvonen, Maija: Multimodal Representation and Intermodal Similarity: Cues of Space in the Audio Description of Film. (Ph.D. thesis.) University of Helsinki, 2014. . On-line version.
External links
General
"Description Key for Educational Media" by The Described and Captioned Media Program
ACB's Audio Description Project
Audio Description Associates
Audio Description for Blind and Visually Impaired
"Who's Watching? A Profile of the Blind and Visually Impaired Audience for Television and Video"
List of UK audio described programmes on TV
List of UK audio described DVDs
Joe Clark on audio description
E-Inclusion Research Network
Media Access Australia: Audio Description
VocalEyes, UK audio description charity, providing access to the arts for blind and partially sighted people
Audiodescription-france.org
Audio Description Association (Hong Kong)
In the US:
WGBH - Media Access Group - DVS Services
The Audio Description Project
Schedule of USA Audio Described TV Programs, Produced by the American Council of the Blind's Audio Description Project
Metropolitan Washington Ear
Audio tracks of DVS version of Masterpiece Theatre's "Wind In the Willows" (regional restrictions may apply)
Poems written from a transcribed DVS version of Basic Instinct via Triple Canopy (online magazine)
Examples of audio description
adp.acb.org/samples.html
www.audiodescribe.com/samples/
www.artbeyondsight.org/handbook/acs-verbalsamples.shtml
Description of Neighbours and The Motorman from the National Film Board of Canada (QuickTime)
Assistive technology
Television technology | Audio description | [
"Technology"
] | 4,656 | [
"Information and communications technology",
"Television technology"
] |
578,327 | https://en.wikipedia.org/wiki/Future%20value | Future value is the value of an asset at a specific date. It measures the nominal future sum of money that a given sum of money is "worth" at a specified time in the future assuming a certain interest rate, or more generally, rate of return; it is the present value multiplied by the accumulation function.
The value does not include corrections for inflation or other factors that affect the true value of money in the future. This is used in time value of money calculations.
Overview
Money value fluctuates over time: $100 today has a different value than $100 in five years. This is because one can invest $100 today in an interest-bearing bank account or any other investment, and that money will grow/shrink due to the rate of return. Also, if $100 today allows the purchase of an item, it is possible that $100 will not be enough to purchase the same item in five years, because of inflation (increase in purchase price).
An investor who has some money has two options: to spend it right now or to invest it. The financial compensation for saving it (and not spending it) is that the money value will accrue through the interest received from a borrower (the bank account in which the money is deposited).
Therefore, to evaluate the real worthiness of an amount of money today after a given period of time, economic agents compound the amount of money at a given interest rate. Most actuarial calculations use the risk-free interest rate, which corresponds, for example, to the minimum guaranteed rate provided by a bank's savings account. To compare changes in purchasing power, the real interest rate (nominal interest rate minus inflation rate) should be used.
The operation of evaluating a present value into a future value is called capitalization (how much will $100 today be worth in 5 years?). The reverse operation, evaluating the present value of a future amount of money, is called discounting (how much is $100 that will be received in 5 years, from a lottery for example, worth today?).
It follows that if one has to choose between receiving $100 today and $100 in one year, the rational decision is to cash the $100 today. If the money is to be received in one year and assuming the savings account interest rate is 5%, the person has to be offered at least $105 in one year so that two options are equivalent (either receiving $100 today or receiving $105 in one year). This is because if you have cash of $100 today and deposit in your savings account, you will have $105 in one year.
Simple interest
To determine future value (FV) using simple interest (i.e., without compounding):

FV = PV (1 + rt)

where PV is the present value or principal, t is the time in years (or a fraction of a year), and r stands for the per annum interest rate. Simple interest is rarely used, as compounding is considered more meaningful. Indeed, the future value in this case grows linearly (it is a linear function of the initial investment): it does not take into account the fact that the interest earned might itself be compounded and produce further interest (which corresponds to exponential growth of the initial investment; see below).
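As a minimal illustration of the simple-interest formula above (the Python function and the sample figures are hypothetical, not drawn from the article):

```python
def future_value_simple(pv, r, t):
    """Future value under simple (non-compounding) interest.

    pv -- present value (principal)
    r  -- per annum interest rate, e.g. 0.05 for 5%
    t  -- time in years (may be fractional)
    """
    return pv * (1 + r * t)

# $100 at 5% simple interest for 3 years grows to $115.00
print(round(future_value_simple(100, 0.05, 3), 2))  # 115.0
```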
Compound interest
To determine future value using compound interest:

FV = PV (1 + i)^t

where PV is the present value, t is the number of compounding periods (not necessarily an integer), and i is the interest rate for that period. Thus the future value increases exponentially with time when i is positive. The growth rate is given by t, the number of periods, together with i, the interest rate for that period. Alternatively, the growth rate can be expressed as the interest per unit time based on continuous compounding. For example, the following all represent the same growth rate (a short numerical check of these equivalences follows the lists below):
3 % per half year
6.09 % per year (effective annual rate, annual rate of return, the standard way of expressing the growth rate, for easy comparisons)
2.95588022 % per half year based on continuous compounding (because ln 1.03 = 0.0295588022)
5.91176045 % per year based on continuous compounding (simply twice the previous percentage)
Also the growth rate may be expressed in a percentage per period (nominal rate), with another period as compounding basis; for the same growth rate we have:
6% per year with half a year as compounding basis
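These equivalences can be checked numerically; the sketch below (illustrative rounding, not part of the article) reproduces the figures listed above:

```python
import math

half_year_rate = 0.03                    # 3% per half year (6% nominal annual)

# Effective annual rate: compound the half-year rate twice
effective_annual = (1 + half_year_rate) ** 2 - 1
print(f"{effective_annual:.4%}")         # 6.0900% per year

# Equivalent continuously compounded rates
cont_half_year = math.log(1 + half_year_rate)
print(f"{cont_half_year:.8%}")           # 2.95588022% per half year
print(f"{2 * cont_half_year:.8%}")       # 5.91176045% per year
```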
To convert an interest rate from one compounding basis to another compounding basis (between different periodic interest rates), the following formula applies:

i2 = (1 + i1)^(n1/n2) - 1

where
i1 is the periodic interest rate with compounding frequency n1 and
i2 is the periodic interest rate with compounding frequency n2.
If the compounding frequency is annual, n2 will be 1, and to get the annual interest rate (which may be referred to as the effective interest rate, or the annual percentage rate), the formula can be simplified to:

r = (1 + i)^n - 1

where r is the annual rate, i the periodic rate, and n the number of compounding periods per year.
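A short sketch of this conversion (the function name and the example rates are illustrative assumptions):

```python
def convert_periodic_rate(i1, n1, n2):
    """Convert periodic rate i1 (compounded n1 times per year) to the
    equivalent periodic rate compounded n2 times per year."""
    return (1 + i1) ** (n1 / n2) - 1

# 5% per half year (n1 = 2) expressed as an effective annual rate (n2 = 1)
print(f"{convert_periodic_rate(0.05, 2, 1):.4%}")    # 10.2500%

# The same effective annual rate re-expressed as a monthly rate (n2 = 12)
print(f"{convert_periodic_rate(0.1025, 1, 12):.4%}") # ~0.8165%
```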
Problems become more complex as you account for more variables. For example, when accounting for annuities (annual payments), there is no simple PV to plug into the equation. Either the PV must be calculated first, or a more complex annuity equation must be used. Another complication is when the interest rate is applied multiple times per period. For example, suppose the 10% interest rate in the earlier example is compounded twice a year (semi-annually). Compounding means that each successive application of the interest rate applies to all of the previously accumulated amount, so instead of getting 0.05 each 6 months, one must figure out the true annual interest rate, which in this case would be 1.1025 (one would divide the 10% by two to get 5%, then apply it twice: 1.05^2 = 1.1025.) This 1.1025 represents the original amount 1.00 plus 0.05 in 6 months to make a total of 1.05, and get the same rate of interest on that 1.05 for the remaining 6 months of the year. The second six-month period returns more than the first six months because the interest rate applies to the accumulated interest as well as the original amount.
This formula gives the future value (FV) of an ordinary annuity (assuming compound interest):

FV = payment * ((1 + r)^n - 1) / r

where r = interest rate; n = number of periods. The simplest way to understand the above formula is to cognitively split the right side of the equation into two parts: the payment amount, and the ratio of compounding over basic interest. The ratio of compounding is composed of the aforementioned effective interest rate over the basic (nominal) interest rate. This provides a ratio that increases the payment amount in terms of present value.
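A sketch of the ordinary-annuity formula above (the function name and figures are illustrative assumptions):

```python
def future_value_annuity(payment, r, n):
    """Future value of an ordinary annuity: a payment made at the end of
    each of n periods, each accruing compound interest at rate r per period."""
    return payment * ((1 + r) ** n - 1) / r

# $1,000 deposited at the end of each year for 10 years at 6% per year
print(round(future_value_annuity(1000, 0.06, 10), 2))  # 13180.79
```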
See also
Lifetime value
Present value
Time value of money
References
Theory of value (economics)
Mathematical finance | Future value | [
"Mathematics"
] | 1,416 | [
"Applied mathematics",
"Mathematical finance"
] |
578,348 | https://en.wikipedia.org/wiki/Frot | Frot or frotting (slang for frottage; ) is a sexual practice between men that usually involves direct penis-to-penis contact. The term was popularized by gay male activists who disparaged the practice of anal sex, but has since evolved to encompass a variety of preferences for the act, which may or may not imply particular attitudes towards other sexual activities. This can also be used as some type of foreplay.
Because it is not penetrative, frot has the safe sex benefit of reducing the risk of transmitting HIV/AIDS; however, it still carries the risk of skin-to-skin sexually transmitted infections, such as HPV and pubic lice (crabs), both of which can be transmitted even when lesions are not visible.
It is analogous to tribadism, which is vulva-to-vulva contact between women.
Concept and etymology
The modern definition of frot emerged in a context of a debate about the status of anal sex within the gay male community; some in the anti-anal, pro-frot camp insist that anal sex ought to be avoided altogether. One view argued that the popularity of anal sex would decline, presumably with a corresponding drop in HIV rates, if gay men could somehow be persuaded to stop thinking of anal sex as a "vanilla" practice, but rather as something "kinky" and not-quite-respectable—as was the case in the 1950s and 1960s, when gay men who preferred to do only mutual masturbation and fellatio sometimes used the disparaging slang term brownie queen for aficionados of anal sex.
Gay activist Bill Weintraub began to heavily promote and recommend the gender-specific meaning of "penis-to-penis rubbing" as frot on Internet forums sometime in the late 1990s, and said he coined the term. "I don't use the word 'frottage,' because it is an ersatz French word which can indicate any sort of erotic rubbing," he stated. "Frot, by contrast, is always phallus-to-phallus sex." Weintraub believes that is what actual sex is; genital-genital contact.
Alternative terms for frot include frictation, which can refer to the wider meaning of frottage but also penis-penis sex specifically, as well as sword-fighting, Oxford style, Princeton rub, and Ivy League rub.
Sexual practices
General
Frot can be enjoyable because it mutually and simultaneously stimulates the genitals of both partners as it tends to produce pleasurable friction against the frenulum nerve bundle on the underside of each partner's penile shaft, just below the urinary opening (meatus) of the penis head (glans penis).
Safer sex
Since frot is a non-penetrative sex act, the risk of passing a sexually transmitted infection (STI) that requires direct contact between the mucous membranes and pre-ejaculate or semen is reduced. HIV is among the infections that require such direct contact, and research indicates that there is no risk of HIV transmission via frot. However, frot can still transmit other sexually transmitted infections, such as HPV (which can cause genital warts) and pubic lice (crabs). Vaccines are available against HPV.
Comparison with anal sex and debates
Some gay men, or men who have sex with men (MSM) in general, prefer to engage in frot or other forms of mutual masturbation because they find it more pleasurable or more affectionate than anal sex, to preserve technical virginity, or as safe sex alternatives to anal penetration. This preference has led to some debate in the gay male and MSM community regarding what constitutes "real sex" or the most sensual expression of sexual intimacy. Some frot advocates consider "two genitals coming together by mingling, caressing, sliding" and rubbing to be sex more than other forms of male sexual activity. Other men who have sex with men associate male masculinity with the sexual positions of "tops" and "bottoms" during anal sex.
During anal sex, the insertive partner may be referred to as the top or active partner. The one being penetrated may be referred to as the bottom or passive partner. Those with no strong preference for either are referred to as versatile. Some frot advocates insist that such roles introduce inequality during sexual intimacy, and that frot is "equal" because of mutual genital-genital stimulation. The lack of mutual genital stimulation and role asymmetry has led other frot advocates to denounce anal sex as degrading to the receptive partner. This view of dominance and inequality associated with sex roles is disputed by researchers who state that it is not clear that specific sexual acts are necessarily indicative of general patterns of masculinity or dominance in a gay male relationship, and that, for both partners, anal intercourse can be associated with being masculine. Additionally, some frot advocates, such as Bill Weintraub, are concerned with diseases that may be acquired through anal sex. In a 2005 article in The Advocate, one anal sex opponent said that no longer showing anal sex as erotic would help avoid HIV/AIDS, and opined that some gay men perceived him to be antigay when he was only trying to keep gay and bisexual men alive and healthy.
Gay men, and MSM in general, who prefer anal sex may view it as "[their] version of intercourse" and as "the natural apex of sex, a wonderful expression of intimacy, and a great source of pleasure". Psychologist Walt Odets said, "I think that anal sex has for gay men the same emotional significance that vaginal sex has for heterosexuals." Anal sex is generally viewed as vanilla sex among MSM, and is often thought to be expected, even by MSM who do not prefer the act. "Some people like [anal] because it seems taboo or naughty," stated author and sex therapist Jack Morin. "Some people like the flavor of dominance and submission... some don't."
MSM who defend the essential validity of anal sex have rejected claims made by radical frot advocates. Others have at times disparaged frottage as a makeshift, second-rate form of male/male intimacy—something better left to inexperienced teenagers and "closeted" older men. Odets said, "No one would propose that we initiate a public health measure by de-eroticizing vaginal sex. It would sound like a ridiculous idea. It's no less ridiculous for gay men."
HuffPost contributor and sexologist Joe Kort proposed the term side for gay men who are not interested in anal sex and instead prefer "to kiss, hug and engage in oral sex, rimming, mutual masturbation and rubbing up and down on each other", viewing "sides" as simply another gay male sexual preference akin to being a top, bottom or versatile, and adding that "Whether a man enjoys anal sex or not is no reflection on his sexual orientation, and if he's gay, it doesn't define whether or not he's 'really' having sex."
Prevalence
A 2011 survey of gay and bisexual men by the Journal of Sexual Medicine found that out of over 1,300 different combinations of sexual acts practiced, the most common, at 16% of all encounters, was "holding their partner romantically, kissing partner on mouth, solo masturbation, masturbating partner, masturbation by partner, and genital–genital contact."
Among other animals
Genital–genital rubbing has been observed between males of other animals as well. Among bonobos, frottage frequently occurs when two males hang from a tree limb and engage in penis fencing; it also occurs while two males are in the missionary position.
Frot-like genital rubbing between non-primate males has been observed among bull manatees, in conjunction with "kissing". When engaging in genital–genital rubbing, male bottlenose dolphins often penetrate the genital slit or, less commonly, the anus. Penis-to-penis rubbing is also common among homosexually active mammals.
See also
Intercrural sex
Sex position
References
Further reading
Olivia Judson (2002). Dr. Tatiana's Sex Advice to All Creation.
1990s neologisms
Gay masculinity
LGBTQ slang
Gay male erotica
Human penis
Sexual acts
Male homosexuality
Non-penetrative sex | Frot | [
"Biology"
] | 1,741 | [
"Sexual acts",
"Behavior",
"Sexuality",
"Mating"
] |
578,365 | https://en.wikipedia.org/wiki/MIRACL | MIRACL, or Mid-Infrared Advanced Chemical Laser, is a directed energy weapon developed by the US Navy. It is a deuterium fluoride laser, a type of chemical laser.
The MIRACL laser first became operational in 1980. It can produce over a megawatt of output for up to 70 seconds, making it the most powerful continuous wave (CW) laser in the US. Its original goal was to be able to track and destroy anti-ship cruise missiles, but in later years it was used to test phenomenologies associated with national anti-ballistic and anti-satellite laser weapons. Originally tested at a contractor facility in California, as of the later 1990s and early 2000s, it was located at the former MAR-1 facility in the White Sands Missile Range in New Mexico.
The beam size in the resonator is about wide. The beam is then reshaped to a square.
Amid much controversy in October 1997, MIRACL was tested against MSTI-3, a US Air Force satellite at the end of its original mission in orbit at a distance of . MIRACL failed during the test and was damaged, and the Pentagon claimed mixed results for other portions of the test. A second, lower-powered chemical laser was able to temporarily blind the MSTI-3 sensors during the test.
References
Further reading
Chemical lasers
Military lasers
Directed-energy weapons of the United States
Military equipment introduced in the 1980s | MIRACL | [
"Chemistry"
] | 290 | [
"Chemical reaction engineering",
"Chemical lasers"
] |
578,412 | https://en.wikipedia.org/wiki/Acetone%20peroxide | Acetone peroxide ( also called APEX and mother of Satan) is an organic peroxide and a primary explosive. It is produced by the reaction of acetone and hydrogen peroxide to yield a mixture of linear monomer and cyclic dimer, trimer, and tetramer forms. The monomer is dimethyldioxirane. The dimer is known as diacetone diperoxide (DADP). The trimer is known as triacetone triperoxide (TATP) or tri-cyclic acetone peroxide (TCAP). Acetone peroxide takes the form of a white crystalline powder with a distinctive bleach-like odor when impure, or a fruit-like smell when pure, and can explode powerfully if subjected to heat, friction, static electricity, concentrated sulfuric acid, strong UV radiation, or shock. Until about 2015, explosives detectors were not set to detect non-nitrogenous explosives, as most explosives used preceding 2015 were nitrogen-based. TATP, being nitrogen-free, has been used as the explosive of choice in several terrorist bomb attacks since 2001.
History
Acetone peroxide (specifically, triacetone triperoxide) was discovered in 1895 by the German chemist Richard Wolffenstein. Wolffenstein combined acetone and hydrogen peroxide, and then he allowed the mixture to stand for a week at room temperature, during which time a small quantity of crystals precipitated, which had a melting point of .
In 1899, Adolf von Baeyer and Victor Villiger described the first synthesis of the dimer and described use of acids for the synthesis of both peroxides. Baeyer and Villiger prepared the dimer by combining potassium persulfate in diethyl ether with acetone, under cooling. After separating the ether layer, the product was purified and found to melt at . They found that the trimer could be prepared by adding hydrochloric acid to a chilled mixture of acetone and hydrogen peroxide. By using the depression of freezing points to determine the molecular weights of the compounds, they also determined that the form of acetone peroxide that they had prepared via potassium persulfate was a dimer, whereas the acetone peroxide that had been prepared via hydrochloric acid was a trimer, like Wolffenstein's compound.
Work on this methodology and on the various products obtained, was further investigated in the mid-20th century by Milas and Golubović.
Chemistry
The chemical name acetone peroxide is most commonly used to refer to the cyclic trimer, the product of a reaction between two precursors, hydrogen peroxide and acetone, in an acid-catalyzed nucleophilic addition, although monomeric and dimeric forms are also possible.
Specifically, two dimers, one cyclic (C6H12O4) and one open chain (C6H14O4), as well as an open dihydroperoxide monomer (C3H8O4), can also be formed; under a particular set of conditions of reagent and acid catalyst concentration, the cyclic trimer is the primary product. Under neutral conditions, the reaction is reported to produce the monomeric organic peroxide.
A tetrameric form has also been described, under different catalytic conditions, albeit not without disputes and controversy.
The most common route for nearly pure TATP is H2O2/acetone/HCl in 1:1:0.25 molar ratios, using 30% hydrogen peroxide. This product contains very little or none of DADP with some very small traces of chlorinated compounds. Product that contains large fraction of DADP can be obtained from 50% H2O2 using large amounts of concentrated sulfuric acid as catalyst or alternatively with 30% H2O2 and massive amounts of HCl as a catalyst.
The product made by using hydrochloric acid is regarded as more stable than the one made using sulfuric acid. It is known that traces of sulfuric acid trapped inside the formed acetone peroxide crystals lead to instability. In fact, the trapped sulfuric acid can induce detonation at temperatures as low as . This is the most likely mechanism behind accidental explosions of acetone peroxide that occur during drying on heated surfaces.
Organic peroxides in general are sensitive, dangerous explosives, and all forms of acetone peroxide are sensitive to initiation. TATP decomposes explosively; examination of the explosive decomposition of TATP at the very edge of the detonation front predicts "formation of acetone and ozone as the main decomposition products and not the intuitively expected oxidation products." Very little heat is created by the explosive decomposition of TATP at the very edge of the detonation front; the foregoing computational analysis suggests that TATP decomposition is an entropic explosion. However, this hypothesis has been challenged as not conforming to actual measurements. The claim of entropic explosion has been tied to the events just behind the detonation front. The authors of the 2004 Dubnikova et al. study confirm that a final redox reaction (combustion) of ozone, oxygen and reactive species into water, various oxides and hydrocarbons takes place within about 180 ps after the initial reaction, within about a micron of the detonation wave. Detonating crystals of TATP ultimately reach temperature of and pressure of 80 kbar. The final energy of detonation is about 2800 kJ/kg (measured in helium), enough to briefly raise the temperature of gaseous products to . Volume of gases at STP is 855 L/kg for TATP and 713 L/kg for DADP (measured in helium).
The tetrameric form of acetone peroxide, prepared under neutral conditions using a tin catalyst in the presence of a chelator or general inhibitor of radical chemistry, is reported to be more chemically stable, although still a very dangerous primary explosive. Its synthesis has been disputed.
Both TATP and DADP are prone to loss of mass via sublimation. DADP has lower molecular weight and higher vapor pressure. This means that DADP is more prone to sublimation than TATP. This can lead to dangerous crystal growth when the vapors deposit if the crystals have been stored in a container with a threaded lid. This process of repeated sublimation and deposition also results in a change in crystal size via Ostwald ripening.
Several methods can be used for trace analysis of TATP, including gas chromatography/mass spectrometry (GC/MS), high performance liquid chromatography/mass spectrometry (HPLC/MS), and HPLC with post-column derivatization.
Acetone peroxide is soluble in toluene, chloroform, acetone, dichloromethane and methanol. Recrystallization of primary explosives may yield large crystals that detonate spontaneously due to internal strain.
Industrial uses
Ketone peroxides, including acetone peroxide and methyl ethyl ketone peroxide, find application as initiators for polymerization reactions, e.g., silicone or polyester resins, in the making of fiberglass-reinforced composites. For these uses, the peroxides are typically in the form of a dilute solution in an organic solvent; methyl ethyl ketone peroxide is more common for this purpose, as it is stable in storage.
Acetone peroxide is used as a flour bleaching agent to bleach and "mature" flour.
Acetone peroxides are unwanted by-products of some oxidation reactions such as those used in phenol syntheses. Due to their explosive nature, their presence in chemical processes and chemical samples creates potential hazardous situations. For example, triacetone peroxide is the major contaminant found in diisopropyl ether as a result of photochemical oxidation in air. Accidental occurrence at illicit MDMA laboratories is possible.
Numerous methods are used to reduce their appearance, including shifting pH to more alkaline, adjusting reaction temperature, or adding inhibitors of their production.
Use in improvised explosive devices
TATP has been used in bomb and suicide attacks and in improvised explosive devices, including the London bombings on 7 July 2005, where four suicide bombers killed 52 people and injured more than 700. It was one of the explosives used by the "shoe bomber" Richard Reid in his 2001 failed shoe bomb attempt and was used by the suicide bombers in the November 2015 Paris attacks, 2016 Brussels bombings, Manchester Arena bombing, June 2017 Brussels attack, Parsons Green bombing, the Surabaya bombings, and the 2019 Sri Lanka Easter bombings. Hong Kong police claim to have found of TATP among weapons and protest materials in July 2019, when mass protests were taking place against a proposed law allowing extradition to mainland China.
TATP shockwave overpressure is 70% of that for TNT, and the positive phase impulse is 55% of the TNT equivalent. TATP at 0.4 g/cm3 has about one-third of the brisance of TNT (1.2 g/cm3) measured by the Hess test.
TATP is attractive to terrorists because it is easily prepared from readily available retail ingredients, such as hair bleach and nail polish remover. It was also able to evade detection because it is one of the few high explosives that do not contain nitrogen, and could therefore pass undetected through standard explosive detection scanners, which were hitherto designed to detect nitrogenous explosives. By 2016, explosives detectors had been modified to be able to detect TATP, and new types were developed.
Legislative measures to limit the sale of hydrogen peroxide concentrated to 12% or higher have been made in the European Union.
A key disadvantage is the high susceptibility of TATP to accidental detonation, causing injuries and deaths among illegal bomb-makers, which has led to TATP being referred to as the "Mother of Satan". TATP was found in the accidental explosion that preceded the 2017 terrorist attacks in Barcelona and surrounding areas.
Large-scale TATP synthesis is often betrayed by excessive bleach-like or fruity smells. This smell can even penetrate into clothes and hair in amounts that are quite noticeable; this was reported in the 2016 Brussels bombings.
References
External links
Explosive chemicals
Ketals
Organic peroxides
Organic peroxide explosives
Oxygen heterocycles
Radical initiators | Acetone peroxide | [
"Chemistry",
"Materials_science"
] | 2,149 | [
"Ketals",
"Radical initiators",
"Functional groups",
"Organic compounds",
"Polymer chemistry",
"Reagents for organic chemistry",
"Explosive chemicals",
"Organic peroxide explosives",
"Organic peroxides"
] |
578,436 | https://en.wikipedia.org/wiki/Self-medication | Self-medication, sometime called do-it-yourself (DIY) medicine, is a human behavior in which an individual uses a substance or any exogenous influence to self-administer treatment for physical or psychological conditions, for example headaches or fatigue.
The substances most widely used in self-medication are over-the-counter drugs and dietary supplements, which are used to treat common health issues at home. These do not require a doctor's prescription to obtain and, in some countries, are available in supermarkets and convenience stores.
The field of psychology surrounding the use of psychoactive drugs is often specifically in relation to the use of recreational drugs, alcohol, comfort food, and other forms of behavior to alleviate symptoms of mental distress, stress and anxiety, including mental illnesses or psychological trauma. Such treatment may cause serious detriment to physical and mental health if motivated by addictive mechanisms. In postsecondary (university and college) students, self-medication with "study drugs" such as Adderall, Ritalin, and Concerta has been widely reported and discussed in literature.
Products are marketed by manufacturers as useful for self-medication, sometimes on the basis of questionable evidence. Claims that nicotine has medicinal value have been used to market cigarettes as self-administered medicines. These claims have been criticized as inaccurate by independent researchers. Unverified and unregulated third-party health claims are used to market dietary supplements.
Self-medication is often seen as gaining personal independence from established medicine, and it can be seen as a human right, implicit in, or closely related to the right to refuse professional medical treatment. Self-medication can cause unintentional self-harm. Self-medication with antibiotics has been identified as one of the primary reasons for the evolution of antimicrobial resistance.
Sometimes self-medication or DIY medicine occurs because patients disagree with a doctor's interpretation of their condition, to access experimental therapies that are not available to the public, or because of legal bans on healthcare, as in the case of some transgender people or women seeking self-induced abortion. Other reasons for relying on DIY medical care include avoiding health care prices in the United States and anarchist beliefs.
Definition
Generally speaking, self-medication is defined as "the use of drugs to treat self-diagnosed disorders or symptoms, or the intermittent or continued use of a prescribed drug for chronic or recurrent disease or symptoms".
Self-medication can be defined as the use of drugs to treat an illness or symptom when the user is not a medically qualified professional. The term is also used to include the use of drugs outside their license or off-label.
Psychology and psychiatry
Self-medication hypothesis
As different drugs have different effects, they may be used for different reasons. According to the self-medication hypothesis (SMH), the individuals' choice of a particular drug is not accidental or coincidental, but instead, a result of the individuals' psychological condition, as the drug of choice provides relief to the user specific to his or her condition. Specifically, addiction is hypothesized to function as a compensatory means to modulate effects and treat distressful psychological states, whereby individuals choose the drug that will most appropriately manage their specific type of psychiatric distress and help them achieve emotional stability.
The self-medication hypothesis (SMH) originated in papers by Edward Khantzian, Mack and Schatzberg, David F. Duncan, and a response to Khantzian by Duncan. The SMH initially focused on heroin use, but a follow-up paper added cocaine. The SMH was later expanded to include alcohol, and finally all drugs of addiction.
According to Khantzian's view of addiction, drug users compensate for deficient ego function by using a drug as an "ego solvent", which acts on parts of the self that are cut off from consciousness by defense mechanisms. According to Khantzian, drug dependent individuals generally experience more psychiatric distress than non-drug dependent individuals, and the development of drug dependence involves the gradual incorporation of the drug effects and the need to sustain these effects into the defensive structure-building activity of the ego itself. The addict's choice of drug is a result of the interaction between the psychopharmacologic properties of the drug and the affective states from which the addict was seeking relief. The drug's effects substitute for defective or non-existent ego mechanisms of defense. The addict's drug of choice, therefore, is not random.
While Khantzian takes a psychodynamic approach to self-medication, Duncan's model focuses on behavioral factors. Duncan described the nature of positive reinforcement (e.g., the "high feeling", approval from peers), negative reinforcement (e.g. reduction of negative affect) and avoidance of withdrawal symptoms, all of which are seen in those who develop problematic drug use, but are not all found in all recreational drug users. While earlier behavioral formulations of drug dependence using operant conditioning maintained that positive and negative reinforcement were necessary for drug dependence, Duncan maintained that drug dependence was not maintained by positive reinforcement, but rather by negative reinforcement. Duncan applied a public health model to drug dependence, where the agent (the drug of choice) infects the host (the drug user) through a vector (e.g., peers), while the environment supports the disease process, through stressors and lack of support.
Khantzian revisited the SMH, suggesting there is more evidence that psychiatric symptoms, rather than personality styles, lie at the heart of drug use disorders. Khantzian specified that the two crucial aspects of the SMH were that (1) drugs of abuse produce a relief from psychological suffering and (2) the individual's preference for a particular drug is based on its psychopharmacological properties. The individual's drug of choice is determined through experimentation, whereby the interaction of the main effects of the drug, the individual's inner psychological turmoil, and underlying personality traits identify the drug that produces the desired effects.
Meanwhile, Duncan's work focuses on the difference between recreational and problematic drug use. Data obtained in the Epidemiologic Catchment Area Study demonstrated that only 20% of drug users ever experience an episode of drug abuse (Anthony & Helzer, 1991), while data obtained from the National Comorbidity Study demonstrated that only 15% of alcohol users and 15% of illicit drug users ever become dependent. A crucial determinant of whether a drug user develops drug abuse is the presence or absence of negative reinforcement, which is experienced by problematic users, but not by recreational users. According to Duncan, drug dependence is an avoidance behavior, where an individual finds a drug that produces a temporary escape from a problem, and taking the drug is reinforced as an operant behavior.
Specific mechanisms
Some people who have a mental illness attempt to correct their illnesses by using certain drugs. Depression is often self-medicated by the use of alcohol, tobacco, cannabis, or other mind-altering drugs. While this may provide immediate relief of some symptoms such as anxiety, it may evoke and/or exacerbate some symptoms of several kinds of mental illnesses that are already latently present, and may lead to addiction or physical dependency, among other side effects of long-term use of the drug. This does not differ significantly from the potential effects of drugs provided by physicians, which are equally capable of producing dependency and/or addiction and also have side effects arising from long-term use.
People with post-traumatic stress disorder have been known to self-medicate, as well as many individuals without this diagnosis who have experienced psychological trauma.
Due to the different effects of the different classes of drugs, the SMH postulates that the appeal of a specific class of drugs differs from person to person. In fact, some drugs may be aversive for individuals for whom the effects could worsen affective deficits.
CNS depressants
Alcohol and sedative/hypnotic drugs, such as barbiturates and benzodiazepines, are central nervous system (CNS) depressants that lower inhibitions via anxiolysis. Depressants produce feelings of relaxation and sedation, while relieving feelings of depression and anxiety. Though they are generally ineffective antidepressants, as most are short-acting, the rapid onset of alcohol and sedative/hypnotics softens rigid defenses and, in low to moderate doses, provides relief from depressive affect and anxiety. As alcohol also lowers inhibitions, alcohol is also hypothesized to be used by those who normally constrain emotions by attenuating intense emotions in high or obliterating doses, which allows them to express feelings of affection, aggression and closeness. Most patients that have been hospitalized for substance use or alcohol dependence reported using drugs in response to depressive symptoms. This type of misuse is more likely in men than in women. This makes diagnosing a psychiatric disorder very difficult in substance abusers, because of self medicating.
Alcohol
People with social anxiety disorder commonly use alcohol to overcome their highly set inhibitions.
Psychostimulants
Psychostimulants, such as cocaine, amphetamines, methylphenidate, caffeine, and nicotine, produce improvements in physical and mental functioning, including increased energy and alertness. Stimulants tend to be most widely used by people with attention deficit hyperactivity disorder (ADHD), which can either be diagnosed or undiagnosed. Because a significant portion of people with ADHD have not been diagnosed they are more prone to using stimulants like caffeine, nicotine or pseudoephedrine to mitigate their symptoms. Unawareness concerning the effects of illicit substances such as cocaine, methamphetamine or mephedrone can result in self-medication with these drugs by individuals affected with ADHD symptoms. This self medication can effectively prevent them from getting diagnosed with ADHD and receiving treatment with stimulants like methylphenidate and amphetamines.
Stimulants also can be beneficial for individuals who experience depression, to reduce anhedonia and increase self-esteem, however in some cases depression may occur as a comorbid condition originating from the prolonged presence of negative symptoms of undiagnosed ADHD, which can impair executive functions, resulting in lack of motivation, focus and contentment with one's life, so stimulants may be useful for treating treatment-resistant depression, especially in individuals thought to have ADHD. The SMH also hypothesizes that hyperactive and hypomanic individuals use stimulants to maintain their restlessness and heighten euphoria. Additionally, stimulants are useful to individuals with social anxiety by helping individuals break through their inhibitions. Some reviews suggest that students use psychostimulants to self medicate for underlying conditions, such as ADHD, depression or anxiety.
Opiates
Opiates, such as heroin and morphine, function as an analgesic by binding to opioid receptors in the brain and gastrointestinal tract. This binding reduces the perception of and reaction to pain, while also increasing pain tolerance. Opiates are hypothesized to be used as self-medication for aggression and rage. Opiates are effective anxiolytics, mood stabilizers, and anti-depressants, however, people tend to self-medicate anxiety and depression with depressants and stimulants respectively, though this is by no means an absolute analysis.
Modern research into novel antidepressants targeting opioid receptors suggests that endogenous opioid dysregulation may play a role in medical conditions including anxiety disorders, clinical depression, and borderline personality disorder. BPD is typically characterized by sensitivity to rejection, isolation, and perceived failure, all of which are forms of psychological pain. As research suggests that psychological pain and physiological pain both share the same underlying mechanism, it is likely that under the self-medication hypothesis some or most recreational opioid users are attempting to alleviate psychological pain with opioids in the same way opioids are used to treat physiological pain.
Cannabis
Cannabis is paradoxical in that it simultaneously produces stimulating, sedating and mildly psychedelic properties and both anxiolytic or anxiogenic properties, depending on the individual and circumstances of use. Depressant properties are more obvious in occasional users, and stimulating properties are more common in chronic users. Khantzian noted that research had not sufficiently addressed a theoretical mechanism for cannabis, and therefore did not include it in the SMH.
Effectiveness
Self-medicating excessively for prolonged periods of time with benzodiazepines or alcohol often makes the symptoms of anxiety or depression worse. This is believed to occur as a result of the changes in brain chemistry from long-term use. Of those who seek help from mental health services for conditions including anxiety disorders such as panic disorder or social phobia, approximately half have alcohol or benzodiazepine dependence issues.
Sometimes anxiety precedes alcohol or benzodiazepine dependence but the alcohol or benzodiazepine dependence acts to keep the anxiety disorders going, often progressively making them worse. However, some people addicted to alcohol or benzodiazepines, when it is explained to them that they have a choice between ongoing poor mental health or quitting and recovering from their symptoms, decide on quitting alcohol or benzodiazepines or both. It has been noted that every individual has an individual sensitivity level to alcohol or sedative hypnotic drugs, and what one person can tolerate without ill health, may cause another to experience very ill health, and even moderate drinking can cause rebound anxiety syndrome and sleep disorders. A person experiencing the toxic effects of alcohol will not benefit from other therapies or medications, as these do not address the root cause of the symptoms.
Nicotine addiction seems to worsen mental health problems. Nicotine withdrawal depresses mood, increases anxiety and stress, and disrupts sleep. Although nicotine products temporarily relieve their nicotine withdrawal symptoms, an addiction causes stress and mood to be worse on average, due to mild withdrawal symptoms between hits. Nicotine addicts need the nicotine to temporarily feel normal. Nicotine industry marketing has claimed that nicotine is both less harmful and therapeutic for people with mental illness, and is a form of self-medication. This claim has been criticised by independent researchers.
Self medicating is a very common precursor to full addictions and the habitual use of any addictive drug has been demonstrated to greatly increase the risk of addiction to additional substances due to long-term neuronal changes. Addiction to any/every drug of abuse tested so far has been correlated with an enduring reduction in the expression of GLT1 (EAAT2) in the nucleus accumbens and is implicated in the drug-seeking behavior expressed nearly universally across all documented addiction syndromes. This long-term dysregulation of glutamate transmission is associated with an increase in vulnerability to both relapse-events after re-exposure to drug-use triggers as well as an overall increase in the likelihood of developing addiction to other reinforcing drugs. Drugs which help to re-stabilize the glutamate system such as N-acetylcysteine have been proposed for the treatment of addiction to cocaine, nicotine, and alcohol.
Infectious diseases
In 89% of countries, antibiotics can be prescribed only by a doctor and supplied only by a pharmacy. Self-medication with antibiotics is defined as "the taking of medicines on one's own initiative or on another person's suggestion, who is not a certified medical professional". It has been identified as one of the primary reasons for the evolution of antimicrobial resistance.
Self-medication with antibiotics is an unsuitable way of using them but a common practice in developing countries. Many people resort to that out of necessity when access to a physician is unavailable because of lockdowns and GP surgery closures, or when the patients have a limited amount of time or money to see a prescribing doctor. While being cited as an important alternative to a formal healthcare system where it may be lacking, self-medication can pose a risk to both the patient and community as a whole. The reasons behind self-medication are unique to each region and can relate to health system, societal, economic, health factors, gender, and age. Risks include allergies, lack of cure, and even death.
Besides developing countries, self-medication with antibiotics is also a problem for higher-income countries. In the European Union the average prevalence was 7% in 2016 with the highest rates in southern countries. There are high rates of self-medication with antibiotics in Russia (83%), Central America (19%) and Latin America (14-26%) too.
Two significant issues with self-medication are the lack of knowledge of the public on, firstly, the dangerous effects of certain antimicrobials (for example, ciprofloxacin, which can cause tendonitis, tendon rupture and aortic dissection) and, secondly, broad microbial resistance and when to seek medical care if the infection is not clearing.
Also inappropriate use of over-the-counter ibuprofen or other nonsteroidal anti-inflammatory drugs during winter influenza outbreaks can lead to death, e.g. due to haemorrhagic duodenitis induced by ibuprofen, or the consequences of exceeding the recommended doses of paracetamol by combining doses of the generic product with proprietary flu-remedies and Tylex (paracetamol and codeine).
In a questionnaire designed to evaluate self-medication rates amongst the population of Khartoum, Sudan, 48.1% of respondents reported self-medicating with antibiotics within the past 30 days, whereas 43.4% reported self-medicating with antimalarials, and 17.5% reported self-medicating with both. Overall, the total prevalence of reported self-medication with one or both classes of anti-infective agents within the past month was 73.9%. Furthermore, according to the associated study, data indicated that self-medication "varies significantly with a number of socio-economic characteristics" and the "main reason that was indicated for the self-medication was financial constraints".
Similarly, in a survey of university students in southern China, 47.8% of respondents reported self-medicating with antibiotics.
Other uses
One area of DIY medicine is self-administered pharmaceutical drugs that are obtained without a prescription, as in the case of DIY transgender hormone therapy which is common among trans people. Prescription-only lifestyle drugs such as those to treat erectile dysfunction, male pattern baldness, and obesity are often purchased online by people who have no diagnosis or prescription. In 2017, the United Kingdom legalized the sale of sildenafil (Viagra) over the counter in part to cut down on the number of men buying it online from unlicensed pharmacies.
Self-managed abortion with medication is safe and effective, but is illegal in some jurisdictions. Before the current medication had been developed and in places where abortion is illegal, people may resort to unsafe methods of self-managed abortion.
Another area is the creation of medical devices, such as PPE for protection against COVID-19 and epinephrine injectors. Some people with insulin-dependent diabetes have created their own automated insulin delivery systems. One review found that "the quality of glucose control achieved with DIY AID systems is impressively good". With DIY brain stimulation, individuals with depression create their own devices to access an experimental treatment. Other people self-administer fecal transplant as a treatment for various diseases.
Physicians and medical students
In a survey of West Bengal, India undergraduate medical school students, 57% reported self-medicating. The type of drugs most frequently used for self-medication were antibiotics (31%), analgesics (23%), antipyretics (18%), antiulcerics (9%), cough suppressants (8%), multivitamins (6%), and anthelmintics (4%).
Another study indicated that 53% of physicians in Karnataka, India reported self-administration of antibiotics.
Children
A study of Luo children in western Kenya found that 19% reported engaging in self-treatment with either herbal or pharmaceutical medicine. Proportionally, boys were much more likely to self-medicate using conventional medicine than herbal medicine as compared with girls, a phenomenon which was theorized to be influenced by their relative earning potential.
Regulation
Self-medication is highly regulated in much of the world and many classes of drugs are available for administration only upon prescription by licensed medical personnel. Safety, social order, commercialization, and religion have historically been among the prevailing factors that lead to such prohibition.
People trying to buy pharmaceutical drugs online without a prescription may be the victim of fraud, phishing, or receive counterfeit medication. Selling prescription drugs to people without a valid prescription is illegal in many jurisdictions and can be considered an example of transnational organized crime. In a 2021 article, Jack E. Fincham argues that unlicensed sales of prescription drugs online are a significant public health threat. It is also possible to obtain controlled substances such as amphetamine, benzodiazepines, and Z-drugs online without a prescription.
See also
Biodiversity and drugs
Cognitive liberty
Coping
Dual diagnosis
Alcoholism
Emotional eating
Psychedelic microdosing
Zoopharmacognosy
Self-surgery
Self-diagnosis
References
Further reading
External links
Comprehensive Drug Self-administration and Discrimination Bibliographic Databases
Self-medication at Medical Subject Headings
Pharmacy
Addiction
Substance-related disorders
Alcohol and health
Substance dependence
Mood disorders
Anxiety
Psychological stress
DIY medicine
Patient advocacy | Self-medication | [
"Chemistry"
] | 4,500 | [
"Pharmacology",
"Pharmacy"
] |
578,460 | https://en.wikipedia.org/wiki/Loop%20invariant | In computer science, a loop invariant is a property of a program loop that is true before (and after) each iteration. It is a logical assertion, sometimes checked with a code assertion. Knowing its invariant(s) is essential in understanding the effect of a loop.
In formal program verification, particularly the Floyd-Hoare approach, loop invariants are expressed by formal predicate logic and used to prove properties of loops and by extension algorithms that employ loops (usually correctness properties).
The loop invariants will be true on entry into a loop and following each iteration, so that on exit from the loop both the loop invariants and the loop termination condition can be guaranteed.
From a programming methodology viewpoint, the loop invariant can be viewed as a more abstract specification of the loop, which characterizes the deeper purpose of the loop beyond the details of this implementation. A survey article covers fundamental algorithms from many areas of computer science (searching, sorting, optimization, arithmetic etc.), characterizing each of them from the viewpoint of its invariant.
Because of the similarity of loops and recursive programs, proving partial correctness of loops with invariants is very similar to proving the correctness of recursive programs via induction. In fact, the loop invariant is often the same as the inductive hypothesis to be proved for a recursive program equivalent to a given loop.
Informal example
The following C subroutine max() returns the maximum value in its argument array a[], provided its length n is at least 1.
Comments are provided at lines 3, 6, 9, 11, and 13. Each comment makes an assertion about the values of one or more variables at that stage of the function.
The highlighted assertions within the loop body, at the beginning and end of the loop (lines 6 and 11), are exactly the same. They thus describe an invariant property of the loop.
When line 13 is reached, this invariant still holds, and it is known that the loop condition i!=n from line 5 has become false. Both properties together imply that m equals the maximum value in a[0...n-1], that is, that the correct value is returned from line 14.
int max(int n, const int a[]) {
int m = a[0];
// m equals the maximum value in a[0...0]
int i = 1;
while (i != n) {
// m equals the maximum value in a[0...i-1]
if (m < a[i])
m = a[i];
// m equals the maximum value in a[0...i]
++i;
// m equals the maximum value in a[0...i-1]
}
// m equals the maximum value in a[0...i-1], and i==n
return m;
}
Following a defensive programming paradigm, the loop condition i!=n in line 5 should better be modified to i<n, in order to avoid endless looping for illegitimate negative values of n. While this change in code intuitively shouldn't make a difference, the reasoning leading to its correctness becomes somewhat more complicated, since then only i>=n is known in line 13. In order to obtain that also i<=n holds, that condition has to be included into the loop invariant. It is easy to see that i<=n, too, is an invariant of the loop, since i<n in line 6 can be obtained from the (modified) loop condition in line 5, and hence i<=n holds in line 11 after i has been incremented in line 10. However, when loop invariants have to be manually provided for formal program verification, such intuitively too obvious properties like i<=n are often overlooked.
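The defensively modified loop, with the strengthened invariant written out as comments, might look as follows (a sketch only; as before, the stated invariants assume the precondition n >= 1):

int max(int n, const int a[]) {
    int m = a[0];
    // m equals the maximum value in a[0...0], and (assuming n >= 1) 1 <= n
    int i = 1;
    while (i < n) {                       // defensive loop condition
        // m equals the maximum value in a[0...i-1], and i <= n
        if (m < a[i])
            m = a[i];
        // m equals the maximum value in a[0...i]
        ++i;
        // m equals the maximum value in a[0...i-1], and i <= n
    }
    // m equals the maximum value in a[0...i-1], and i >= n, hence i == n
    return m;
}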
Floyd–Hoare logic
In Floyd–Hoare logic, the partial correctness of a while loop is governed by the following rule of inference:

 {C ∧ I} body {I}
 ---------------------------------
 {I} while (C) body {¬C ∧ I}

This means:
If some property I is preserved by the code body, that is, if I holds after the execution of body whenever both C and I held beforehand (upper line), then
C and I are guaranteed to be false and true, respectively, after the execution of the whole loop while (C) body, provided I was true before the loop (lower line).
In other words: The rule above is a deductive step that has as its premise the Hoare triple {C ∧ I} body {I}. This triple is actually a relation on machine states. It holds whenever, starting from a state in which the boolean expression C ∧ I is true and successfully executing some code called body, the machine ends up in a state in which I is true. If this relation can be proven, the rule then allows us to conclude that successful execution of the program while (C) body will lead from a state in which I is true to a state in which ¬C ∧ I holds. The boolean formula I in this rule is called a loop invariant.
With some variations in the notation used, and with the premise that the loop halts, this rule is also known as the Invariant Relation Theorem. As one 1970s textbook presents it in a way meant to be accessible to student programmers:
Let the notation P { seq } Q mean that if P is true before the sequence of statements seq run, then Q is true after it. Then the invariant relation theorem holds that
P & c { seq } P
implies
P { DO WHILE (c); seq END; } P & ¬c
Example
The following example illustrates how this rule works. Consider the program
while (x < 10)
x := x+1;
One can then prove the following Hoare triple:

 {x ≤ 10} while (x < 10) x := x+1 {x = 10}

The condition C of the while loop is x < 10. A useful loop invariant I has to be guessed; it will turn out that x ≤ 10 is appropriate. Under these assumptions it is possible to prove the following Hoare triple:

 {x < 10 ∧ x ≤ 10} x := x+1 {x ≤ 10}

While this triple can be derived formally from the rules of Floyd-Hoare logic governing assignment, it is also intuitively justified: Computation starts in a state where x < 10 ∧ x ≤ 10 is true, which means simply that x < 10 is true. The computation adds 1 to x, which means that x ≤ 10 is still true (for integer x).
Under this premise, the rule for while loops permits the following conclusion:

 {x ≤ 10} while (x < 10) x := x+1 {¬(x < 10) ∧ x ≤ 10}

However, the post-condition ¬(x < 10) ∧ x ≤ 10 (x is less than or equal to 10, but it is not less than 10) is logically equivalent to x = 10, which is what we wanted to show.
The property 0 ≤ x is another invariant of the example loop, and the trivial property true is another one.
Applying the above inference rule to the former invariant yields {0 ≤ x} while (x < 10) x := x+1 {10 ≤ x ∧ 0 ≤ x}.
Applying it to the invariant 0 ≤ x ∧ x ≤ 10 yields {0 ≤ x ∧ x ≤ 10} while (x < 10) x := x+1 {x = 10}, which is slightly more expressive.
Programming language support
Eiffel
The Eiffel programming language provides native support for loop invariants. A loop invariant is expressed with the same syntax used for a class invariant. In the sample below, the loop invariant expression x <= 10 must be true following the loop initialization, and after each execution of the loop body; this is checked at runtime.
from
x := 0
invariant
x <= 10
until
x > 10
loop
x := x + 1
end
Whiley
The Whiley programming language also provides first-class support for loop invariants. Loop invariants are expressed using one or more where clauses, as the following illustrates:
function max(int[] items) -> (int r)
// Requires at least one element to compute max
requires |items| > 0
// (1) Result is not smaller than any element
ensures all { i in 0..|items| | items[i] <= r }
// (2) Result matches at least one element
ensures some { i in 0..|items| | items[i] == r }:
//
nat i = 1
int m = items[0]
//
while i < |items|
// (1) No item seen so far is larger than m
where all { k in 0..i | items[k] <= m }
// (2) One or more items seen so far matches m
where some { k in 0..i | items[k] == m }:
if items[i] > m:
m = items[i]
i = i + 1
//
return m
The max() function determines the largest element in an integer array. For this to be defined, the array must contain at least one element. The postconditions of max() require that the returned value is: (1) not smaller than any element; and, (2) that it matches at least one element. The loop invariant is defined inductively through two where clauses, each of which corresponds to a clause in the postcondition. The fundamental difference is that each clause of the loop invariant identifies the result as being correct up to the current element i, whilst the postconditions identify the result as being correct for all elements.
Use of loop invariants
A loop invariant can serve one of the following purposes:
purely documentary
to be checked within the code, e.g. by an assertion call
to be verified based on the Floyd-Hoare approach
For 1., a natural language comment (like // m equals the maximum value in a[0...i-1] in the above example) is sufficient.
For 2., programming language support is required, such as the C library assert.h, or the above-shown invariant clause in Eiffel. Often, run-time checking can be switched on (for debugging runs) and off (for production runs) by a compiler or a runtime option.
For 3., some tools exist to support mathematical proofs, usually based on the above-shown Floyd–Hoare rule, that a given loop code in fact satisfies a given (set of) loop invariant(s).
The technique of abstract interpretation can be used to detect loop invariants of given code automatically. However, this approach is limited to very simple invariants (such as 0<=i && i<=n && i%2==0).
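As an illustration of purpose 2 above, the invariant of the introductory max() example can be checked at run time with the standard C assert macro (compiling with -DNDEBUG switches the checks off). The helper function below is introduced here only for the check and is not part of the original example:

#include <assert.h>

// Helper used only inside assertions: does m equal the maximum of a[0...k-1]?
static int is_max_of_prefix(int m, const int a[], int k) {
    int best = a[0];
    for (int j = 1; j < k; ++j)
        if (best < a[j])
            best = a[j];
    return m == best;
}

int max_checked(int n, const int a[]) {
    int m = a[0];
    int i = 1;
    while (i != n) {
        assert(is_max_of_prefix(m, a, i));        // loop invariant, checked at run time
        if (m < a[i])
            m = a[i];
        ++i;
    }
    assert(i == n && is_max_of_prefix(m, a, n));  // invariant and exit condition
    return m;
}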
Distinction from loop-invariant code
Loop-invariant code consists of statements or expressions that can be moved outside a loop body without affecting the program semantics. Such transformations, called loop-invariant code motion, are performed by some compilers to optimize programs.
A loop-invariant code example (in the C programming language) is
for (int i=0; i<n; ++i) {
x = y+z;
a[i] = 6*i + x*x;
}
where the calculations x = y+z and x*x can be moved before the loop, resulting in an equivalent, but faster, program:
x = y+z;
t1 = x*x;
for (int i=0; i<n; ++i) {
a[i] = 6*i + t1;
}
In contrast, e.g. the property 0<=i && i<=n is a loop invariant for both the original and the optimized program, but is not part of the code, hence it doesn't make sense to speak of "moving it out of the loop".
Loop-invariant code may induce a corresponding loop-invariant property. For the above example, the easiest way to see it is to consider a program where the loop invariant code is computed both before and within the loop:
x1 = y+z;
t1 = x1*x1;
for (int i=0; i<n; ++i) {
x2 = y+z;
a[i] = 6*i + t1;
}
A loop-invariant property of this code is (x1==x2 && t1==x2*x2) || i==0, indicating that the values computed before the loop agree with those computed within (except before the first iteration).
See also
Invariant (computer science)
Loop-invariant code motion
Loop variant
Weakest-preconditions of While loop
References
Further reading
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. . Pages 17–19, section 2.1: Insertion sort.
David Gries. "A note on a standard strategy for developing loop invariants and loops." Science of Computer Programming, vol 2, pp. 207–214. 1984.
Michael D. Ernst, Jake Cockrell, William G. Griswold, David Notkin. "Dynamically Discovering Likely Program Invariants to Support Program Evolution." International Conference on Software Engineering, pp. 213–224. 1999.
Robert Paige. "Programming with Invariants." IEEE Software, 3(1):56–69. January 1986.
Yanhong A. Liu, Scott D. Stoller, and Tim Teitelbaum. Strengthening Invariants for Efficient Computation. Science of Computer Programming, 41(2):139–172. October 2001.
Michael Huth, Mark Ryan. "Logic in Computer Science.", Second Edition.
Formal methods
Control flow | Loop invariant | [
"Engineering"
] | 2,694 | [
"Software engineering",
"Formal methods"
] |
578,519 | https://en.wikipedia.org/wiki/Marcel%20Benoist%20Prize | The Marcel Benoist Prize, offered by the Marcel Benoist Foundation, is a monetary prize that has been offered annually since 1920 to a scientist of Swiss nationality or residency who has made the most useful scientific discovery. Emphasis is placed on those discoveries affecting human life. Since 1997, candidates in the humanities have also been eligible for the prize.
The Marcel Benoist Foundation was established by the will of the French lawyer Marcel Benoist, a wartime resident of Lausanne, who died in 1918. It is managed by a group of trustees comprising the Swiss interior minister and heads of the main Swiss universities. It has been dubbed the "Swiss Nobel Prize."
History
The first award was given to immunologist Maurice Arthus (1862–1945) at the University of Lausanne. Other winners have included computer scientist Niklaus Wirth, astronomer Michel Mayor, and cardiologist Max Holzmann. Eleven Marcel Benoist winners have later also won the Nobel Prize: Paul Karrer, Leopold Ruzicka, Walter R. Hess, Tadeus Reichstein, Vladimir Prelog, Niels Kaj Jerne, Johannes G. Bednorz, Karl Alexander Müller, Richard R. Ernst, Kurt Wüthrich, and Michel Mayor.
In 2009, Françoise Gisou van der Goot (École polytechnique fédérale de Lausanne) was the first woman to win the Marcel Benoist Prize.
Laureates
1920: Maurice Arthus
1921: Conrad Brunner
1922: Paul Karrer
1923: Albert Heim
1924: Heinrich Zangger
1925: Alfred Gysi
1926: Emile Argand
1927: Hermann Sahli
1928: Jules Gonin
1929: Paul Niggli
1930: Aloys Müller
1931: Walter R. Hess
1932: Maurice Lugeon
1933: Robert Doerr
1934: Max Askanazy
1935: Jakob Eugster
1936: Alfredo Vannotti
1937: Charles Dhéré
1938: Leopold Ruzicka
1939: Fritz Baltzer
1940: Friedrich T. Wahlen
1941: Hermann Mooser
1942: Arthur Stoll
1943: Paul Scherrer
1944:
1945: Ernst Albert Gäumann
1946: Alexander von Muralt
1947: Tadeus Reichstein
1948: Hans E. Walther
1949: Albert Frey-Wyssling
1950: Emile Guyénot
1951: Anton Fonio
1952: Otto Gsell
1953: Alfred Fleisch
1954: Ernst Hadorn
1955: Max Holzmann
1956: Siegfried Rosin
1957: Jakob Seiler
1958: Klaus Clusius
1959: Albert Wettstein
1960: Pierre Duchosal
1961: Werner Kuhn
1962: Alfred Hässig
1963: Gerold Schwarzenbach
1964: Vladimir Prelog
1965: Georges de Rham
1966: Edouard Kellenberger and Alfred Tissières
1967: Kurt Mühlethaler and Hans J. Moor
1968: Michel Dolivo
1969: Walter Heitler
1970: Charles Weissmann
1971: Manfred Bleuler
1972: Albert Eschenmoser
1973: Lucien Girardier, Eric Jéquier and Georges Spinnler
1974: Ewald Weibel
1975: M. Gazi Yasargil
1976: Theodor K. Brunner, Jean Charles Cerottini and Jean Lindenmann
1977: Hans Günthard and Edgar Heilbronner
1978: Niels Kaj Jerne
1979: Michel Cuénod
1980: Hans Kummer
1981: Karl Illmensee
1982: Franz Fankhauser
1983: Hans R. Brunner
1984: Harald Reuter
1985: Richard R. Ernst
1986: Johannes G. Bednorz and Karl Alexander Müller
1987: Maurice E. Müller, Martin Allgöwer and Hans R. Willenegger
1988: Ulrich Laemmli
1989: Niklaus Wirth
1990: Bruno Messerli, Hans Oeschger and Werner Stumm
1991: Duilio Arigoni and Kurt Wüthrich
1992: Gottfried Schatz
1993: no prize
1994: Martin Schwab
1995: Henri Isliker and Alfred Pletscher
1996: Bernard Rossier
1997: Jürg M. Fröhlich
1998: Michel Mayor
1999: Jörg Paul Müller and Luzius Wildhaber
2000: Dieter Seebach
2001:
2002: Rüdiger Wehner
2003: Denis Duboule
2004: Adriano Aguzzi
2005: Othmar Keel
2006: Timothy J. Richmond
2007: Ari Helenius
2008: Ernst Fehr
2009: Françoise Gisou van der Goot (first time that the prize is awarded to a woman)
2010: Daniel Loss
2011: Michele Parrinello
2012: Michael N. Hall
2013: Michael Grätzel
2014: Nicolas Gisin
2015: Laurent Keller
2016: Johan Auwerx
2017: Thomas Stocker
2018: Lars-Erik Cederman
2019: Nicola Spaldin
2020: Rudolf Aebersold
2021: Thomas Berger
2022: Ursula Keller
2023: Ted Turlings
2024: Pascal Gygax
See also
List of general science and technology awards
Science and technology in Switzerland
Prizes named after people
Latsis Foundation
Louis-Jeantet Prize for Medicine
References
External links
Benoist
Benoist
Benoist
Benoist Prize | Marcel Benoist Prize | [
"Technology"
] | 1,053 | [
"Science and technology awards",
"Science award stubs"
] |
578,631 | https://en.wikipedia.org/wiki/High-bandwidth%20Digital%20Content%20Protection | High-bandwidth Digital Content Protection (HDCP) is a form of digital copy protection developed by Intel Corporation to prevent copying of digital audio and video content as it travels across connections. Types of connections include DisplayPort (DP), Digital Visual Interface (DVI), and High-Definition Multimedia Interface (HDMI), as well as less popular or now deprecated protocols like Gigabit Video Interface (GVIF) and Unified Display Interface (UDI).
The system is meant to stop HDCP-encrypted content from being played on unauthorized devices or devices which have been modified to copy HDCP content. Before sending data, a transmitting device checks that the receiver is authorized to receive it. If so, the transmitter encrypts the data to prevent eavesdropping as it flows to the receiver.
In order to make a device that plays HDCP-enabled content, the manufacturer must obtain a license for the patent from Intel subsidiary Digital Content Protection LLC, pay an annual fee, and submit to various conditions. For example, the device cannot be designed to copy; it must "frustrate attempts to defeat the content protection requirements"; it must not transmit high definition protected video to non-HDCP receivers; and DVD-Audio works can be played only at CD-audio quality by non-HDCP digital audio outputs (analog audio outputs have no quality limits). If the device has a feature like Intel Management Engine disabled, HDCP will not work.
Cryptanalysis researchers demonstrated flaws in HDCP as early as 2001. In September 2010, an HDCP master key that allows for the generation of valid device keys was released to the public, rendering the key revocation feature of HDCP useless. Intel has confirmed that the crack is real, and believes the master key was reverse engineered rather than leaked. In practical terms, the impact of the crack has been described as "the digital equivalent of pointing a video camera at the TV", and of limited importance for consumers because the encryption of high-definition discs has been attacked directly, with the loss of interactive features like menus. Intel threatened to sue anyone producing an unlicensed device.
Specification
HDCP uses three systems:
Authentication prevents non-licensed devices from receiving content.
Encryption of the data sent over DisplayPort, DVI, HDMI, GVIF, or UDI interfaces prevents eavesdropping of information and man-in-the-middle attacks.
Key revocation prevents devices that have been compromised and cloned from receiving data.
Each HDCP-capable device has a unique set of 40 56-bit keys. Failure to keep them secret violates the license agreement. For each set of values, a special private key called a KSV (Key Selection Vector) is created. Each KSV consists of 40 bits (one bit for each HDCP key), with 20 bits set to 0 and 20 bits set to 1.
During authentication, the parties exchange their KSVs under a procedure called Blom's scheme. Each device adds its own secret keys together (using unsigned addition modulo 2^56) according to a KSV received from another device. Depending on the order of the bits set to 1 in the KSV, a corresponding secret key is used or ignored in the addition. The generation of keys and KSVs gives both devices the same 56-bit number, which is later used to encrypt data.
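The following fragment is only an illustrative sketch of the arithmetic just described, not of the actual HDCP implementation; the key layout and bit ordering are simplifying assumptions:

#include <stdint.h>

#define HDCP_KEYS 40
#define MASK56 ((1ULL << 56) - 1)     /* keys and sums are 56-bit values */

/* Sum this device's 40 secret keys selected by the set bits of the other
   device's 40-bit KSV, modulo 2^56. With keys derived from a common master
   matrix (Blom's scheme), both sides arrive at the same shared value. */
uint64_t hdcp_shared_key(const uint64_t secret_keys[HDCP_KEYS], uint64_t peer_ksv)
{
    uint64_t sum = 0;
    for (int i = 0; i < HDCP_KEYS; i++)
        if (peer_ksv & (1ULL << i))   /* key i is used only if KSV bit i is set */
            sum = (sum + secret_keys[i]) & MASK56;
    return sum;
}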
Encryption is done by a stream cipher. Each decoded pixel is encrypted by applying an XOR operation with a 24-bit number produced by a generator. The HDCP specifications ensure constant updating of keys after each encoded frame.
If a particular set of keys is compromised, their corresponding KSV is added to a revocation list burned onto new discs in the DVD and Blu-ray formats. (The lists are signed with a DSA digital signature, which is meant to keep malicious users from revoking legitimate devices.) During authentication, the transmitting device looks for the receiver's KSV on the list, and if it is there, will not send the decrypted work to the revoked device.
Uses
HDCP devices are generally divided into three categories:
Source The source sends the content to be displayed. Examples include set-top boxes, DVD, HD DVD and Blu-ray Disc players, and computer video cards. A source has only an HDCP/HDMI transmitter.
Sink The sink renders the content for display so it can be viewed. Examples include TVs and digital projectors. A sink has one or more HDCP/HDMI receivers.
Repeater A repeater accepts content, decrypts it, then re-encrypts and retransmits the data. It may perform some signal processing, such as upconverting video into a higher-resolution format, or splitting out the audio portion of the signal. Repeaters have HDMI inputs and outputs. Examples include home theater audio-visual receivers that separate and amplify the audio signal, while re-transmitting the video for display on a TV. A repeater could also simply send the input data stream to multiple outputs for simultaneous display on several screens.
Each device may contain one or more HDCP transmitters and/or receivers. (A single transmitter or receiver chip may combine HDCP and HDMI functionality.)
In the United States, the Federal Communications Commission (FCC) approved HDCP as a "Digital Output Protection Technology" on 4 August 2004. The FCC's Broadcast flag regulations, which were struck down by the United States Court of Appeals for the District of Columbia Circuit, would have required DRM technologies on all digital outputs from HDTV signal demodulators. Congress is still considering legislation that would implement something similar to the Broadcast Flag. The HDCP standard is more restrictive than the FCC's Digital Output Protection Technology requirement. HDCP bans compliant products from converting HDCP-restricted content to full-resolution analog form, presumably in an attempt to reduce the size of the analog hole.
On 19 January 2005, the European Information, Communications, and Consumer Electronics Technology Industry Associations (EICTA) announced that HDCP is a required component of the European "HD ready" label.
Microsoft Windows Vista and Windows 7 both use HDCP in computer graphics cards and monitors.
Circumvention
HDCP strippers decrypt the HDCP stream and transmit an unencrypted HDMI video signal so it will work in a non-HDCP display. It is currently unclear whether such devices would remain working if the HDCP licensing body issued key-revocation lists, which may be installed via new media (e.g. newer Blu-ray Discs) played-back by another device (e.g. a Blu-ray Disc player) connected to it.
Cryptanalysis
In 2001, Scott Crosby of Carnegie Mellon University wrote a paper with Ian Goldberg, Robert Johnson, Dawn Song, and David Wagner called "A Cryptanalysis of the High-bandwidth Digital Content Protection System", and presented it at ACM-CCS8 DRM Workshop on 5 November.
The authors concluded that HDCP's linear key exchange is a fundamental weakness, and discussed ways to:
Eavesdrop on any data.
Clone any device with only its public key.
Avoid any blacklist on devices.
Create new device key vectors.
In aggregate, usurp the authority completely.
They also said the Blom's scheme key swap could be broken by a so-called conspiracy attack: obtaining the keys of at least 40 devices and reconstructing the secret symmetrical master matrix that was used to compute them.
Around the same time, Niels Ferguson independently claimed to have broken the HDCP scheme, but he did not publish his research, citing legal concerns arising from the controversial Digital Millennium Copyright Act.
In November 2011 Professor Tim Güneysu of Ruhr-Universität Bochum revealed he had broken the HDCP 1.3 encryption standard.
Master key release
On 14 September 2010, Engadget reported the release of a possible genuine HDCP master key which can create device keys that can authenticate with other HDCP compliant devices without obtaining valid keys from The Digital Content Protection LLC. This master key would neutralize the key revocation feature of HDCP, because new keys can be created when old ones are revoked. Since the master key is known, it follows that an unlicensed HDCP decoding device could simply use the master key to dynamically generate new keys on the fly, making revocation impossible. It was not immediately clear who discovered the key or how they discovered it, though the discovery was announced via a Twitter update which linked to a Pastebin snippet containing the key and instructions on how to use it. Engadget said the attacker may have used the method proposed by Crosby in 2001 to retrieve the master key, although they cited a different researcher. On 16 September, Intel confirmed that the code had been cracked. Intel has threatened legal action against anyone producing hardware to circumvent the HDCP, possibly under the Digital Millennium Copyright Act.
HDCP v2.2, v2.1 and v2.0 breach
In August 2012 version 2.1 was proved to be broken. The attack used the fact that the pairing process sends the Km key obfuscated with an XOR. That makes the encryptor (receiver) unaware of whether it encrypts or decrypts the key. Further, the input parameters for the XOR and the AES above it are fixed from the receiver side, meaning the transmitter can enforce repeating the same operation. Such a setting allows an attacker to monitor the pairing protocol, repeat it with a small change and extract the Km key. The small change is to pick the "random" key to be the encrypted key from the previous flow. Now, the attacker runs the protocol and in its pairing message it gets E(E(Km)). Since E() is based on XOR it undoes itself, thus exposing the Km of the legitimate device.
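A toy demonstration of why XOR-based obfuscation fails here: if the pad applied to Km can be forced to repeat, applying the operation to its own output returns the secret. The pad value below is an arbitrary stand-in, not the real HDCP 2.1 key derivation:

#include <stdint.h>
#include <stdio.h>

static uint64_t xor_obfuscate(uint64_t value, uint64_t pad) {
    return value ^ pad;               /* "encryption" and "decryption" are the same */
}

int main(void) {
    uint64_t km  = 0x0123456789ABCDULL;        /* 56-bit secret (made up)            */
    uint64_t pad = 0x00DEADBEEFCAFEULL;        /* fixed pad the attacker can repeat  */
    uint64_t once  = xor_obfuscate(km, pad);   /* E(Km): observed on the wire        */
    uint64_t twice = xor_obfuscate(once, pad); /* E(E(Km)): obtained by replaying    */
    printf("Km recovered: %s\n", twice == km ? "yes" : "no");
    return 0;
}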
V2.2 was released to fix that weakness by adding randomness provided by the receiver side. However the transmitter in V2.2 must not support receivers of V2.1 or V2.0 in order to avoid this attack. Hence a new erratum was released to redefine the field called "Type" to prevent backward compatibility with versions below 2.2. The "Type" flag should be requested by the content's usage rules (i.e. via the DRM or CAS that opened the content).
In August 2015, version 2.2 was rumored to be broken. An episode of AMC's series Breaking Bad was leaked to the Internet in UHD format; its metadata indicated it was an HDMI cap, meaning it was captured through an HDMI interface that removed the HDCP 2.2 protection.
On 4 November 2015, Chinese company LegendSky Tech Co., already known for their other HDCP rippers/splitters under the HDFury brand, released the HDFury Integral, a device that can remove HDCP 2.2 from HDCP-enabled UHD works. On 31 December 2015, Warner Bros and Digital Content Protection, LLC (DCP, the owners of HDCP) filed a lawsuit against LegendSky. Nevertheless, the lawsuit was ultimately dropped after LegendSky argued that the device did not "strip" HDCP content protection but rather downgraded it to an older version, a measure which is explicitly permitted in DCP's licensing manual.
Problems
HDCP can cause problems for users who want to connect multiple screens to a device; for example, a bar with several televisions connected to one satellite receiver or when a user has a closed laptop and uses an external display as the only monitor. HDCP devices can create multiple keys, allowing each screen to operate, but the number varies from device to device; e.g., a Dish or Sky satellite receiver can generate 16 keys. The technology sometimes causes handshaking problems where devices cannot establish a connection, especially with older high-definition displays.
Edward Felten wrote "the main practical effect of HDCP has been to create one more way in which your electronics could fail to work properly with your TV," and concluded in the aftermath of the master key fiasco that HDCP has been "less a security system than a tool for shaping the consumer electronics market."
Additional issues arise when interactive media (i.e. video games) suffer from control latency, because it requires additional processing for encoding/decoding. Various everyday usage situations, such as live streaming or capture of game play, are also adversely affected.
There is also the problem that all Apple laptop products, presumably in order to reduce switching time, when confronted with an HDCP-compliant sink device, automatically enable HDCP encryption from the HDMI / Mini DisplayPort / USB-C connector port. This is a problem if the user wishes to use recording or videoconferencing facilities further down the chain, because these devices most often do not decrypt HDCP-enabled content (since HDCP is meant to avoid direct copying of content, and such devices could conceivably do exactly that). This applies even if the output is not HDCP-requiring content, like a PowerPoint presentation or merely the device's UI. Some sink devices have the ability to disable their HDCP reporting entirely, however, preventing this issue from blocking content to videoconferencing or recording. However, HDCP content will then refuse to play on many source devices if this is disabled while the sink device is connected.
When connecting a HDCP 2.2 source device through compatible distribution to a video wall made of multiple legacy displays the ability to display an image cannot be guaranteed.
Versions
HDCP v2.x
The 2.x version of HDCP is not a continuation of HDCPv1, and is rather a completely different link protection. Version 2.x employs industry-standard encryption algorithms, such as 128-bit AES with 3072 or 1024-bit RSA public key and 256-bit HMAC-SHA256 hash function. While all of the HDCP v1.x specifications support backward compatibility to previous versions of the specification, HDCPv2 devices may interface with HDCPv1 hardware only by natively supporting HDCPv1, or by using a dedicated converter device. This means that HDCPv2 is only applicable to new technologies. It has been selected for the WirelessHD and Miracast (formerly WiFi Display) standards.
HDCP 2.x features a new authentication protocol, and a locality check to ensure the receiver is relatively close (it must respond to the locality check within 7 ms on a normal DVI/HDMI link). Version 2.1 of the specification was cryptanalyzed and found to have several flaws, including the ability to recover the session key.
There are still a few commonalities between HDCP v2 and v1.
Both are under DCP LLC authority.
They share the same license agreement, compliance rules and robustness rules.
They share the same revocation system and same device ID formats.
See also
HDCP repeater bit
Digital Transmission Content Protection
Digital rights management
Encrypted Media Extensions
Defective by Design
Trusted Computing
References
External links
Audiovisual introductions in 2000
Computer-related introductions in 2000
Broken stream ciphers
Copy protection
High-definition television
Intel products
Digital rights management standards | High-bandwidth Digital Content Protection | [
"Technology"
] | 3,157 | [
"Computer standards",
"Digital rights management standards"
] |
578,650 | https://en.wikipedia.org/wiki/Scholz%20conjecture | In mathematics, the Scholz conjecture is a conjecture on the length of certain addition chains.
It is sometimes also called the Scholz–Brauer conjecture or the Brauer–Scholz conjecture, after Arnold Scholz who formulated it in 1937 and Alfred Brauer who studied it soon afterward and proved a weaker bound.
Neill Clift has announced an example showing that the bound of the conjecture is not always tight.
Statement
The conjecture states that

 l(2^n - 1) ≤ n - 1 + l(n),

where l(n) is the length of the shortest addition chain producing n.
Here, an addition chain is defined as a sequence of numbers, starting with 1, such that every number after the first can be expressed as a sum of two earlier numbers (which are allowed to both be equal). Its length is the number of sums needed to express all its numbers, which is one less than the length of the sequence of numbers (since there is no sum of previous numbers for the first number in the sequence, 1). Computing the length of the shortest addition chain that contains a given number can be done by dynamic programming for small numbers, but it is not known whether it can be done in polynomial time measured as a function of the length of the binary representation of . Scholz's conjecture, if true, would provide short addition chains for numbers of a special form, the Mersenne numbers.
Example
As an example, l(5) = 3: it has a shortest addition chain
1, 2, 4, 5
of length three, determined by the three sums
1 + 1 = 2,
2 + 2 = 4,
4 + 1 = 5.
Also, l(31) = 7: it has a shortest addition chain
1, 2, 3, 6, 12, 24, 30, 31
of length seven, determined by the seven sums
1 + 1 = 2,
2 + 1 = 3,
3 + 3 = 6,
6 + 6 = 12,
12 + 12 = 24,
24 + 6 = 30,
30 + 1 = 31.
Both l(2^5 - 1) = l(31) and 5 - 1 + l(5) = 4 + 3 equal 7.
Therefore, these values obey the inequality (which in this case is an equality) and the Scholz conjecture is true for the case n = 5.
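The example values above can be checked mechanically. The following brute-force search is only a sketch, far too slow for large n, and relies on the standard fact that a shortest chain can always be taken strictly increasing; it finds l(5) = 3 and l(31) = 7:

#include <stdio.h>

/* Depth-limited search for a strictly increasing addition chain ending at
   target; chain[0..len-1] is the chain built so far, with chain[0] == 1. */
static int search(int *chain, int len, int limit, int target) {
    if (chain[len - 1] == target) return 1;
    if (len > limit) return 0;            /* already used up "limit" sums */
    for (int i = len - 1; i >= 0; i--)
        for (int j = i; j >= 0; j--) {
            int next = chain[i] + chain[j];
            if (next <= chain[len - 1] || next > target) continue;
            chain[len] = next;
            if (search(chain, len + 1, limit, target)) return 1;
        }
    return 0;
}

/* l(n): length of a shortest addition chain for n, by iterative deepening. */
static int shortest_chain_length(int n) {
    int chain[64] = { 1 };
    for (int limit = 0; ; limit++)
        if (search(chain, 1, limit, n)) return limit;
}

int main(void) {
    printf("l(5)  = %d\n", shortest_chain_length(5));    /* prints 3 */
    printf("l(31) = %d\n", shortest_chain_length(31));   /* prints 7 */
    return 0;
}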
Partial results
By using a combination of computer search techniques and mathematical characterizations of optimal addition chains, Clift showed that the conjecture is true for all n ≤ 5784688. Additionally, he verified that for all n ≤ 64, the inequality of the conjecture is actually an equality.
The bound of the conjecture is not always an exact equality. For instance, Clift's announced example gives a value of n for which l(2^n - 1) is strictly smaller than n - 1 + l(n).
References
External links
Shortest addition chains
OEIS sequence A003313
Addition chains
Conjectures
Unsolved problems in number theory | Scholz conjecture | [
"Mathematics"
] | 512 | [
"Sequences and series",
"Unsolved problems in mathematics",
"Mathematical structures",
"Addition chains",
"Unsolved problems in number theory",
"Conjectures",
"Mathematical problems",
"Number theory"
] |
578,656 | https://en.wikipedia.org/wiki/Addition%20chain | In mathematics, an addition chain for computing a positive integer can be given by a sequence of natural numbers starting with 1 and ending with , such that each number in the sequence is the sum of two previous numbers. The length of an addition chain is the number of sums needed to express all its numbers, which is one less than the cardinality of the sequence of numbers.
Examples
As an example: (1,2,3,6,12,24,30,31) is an addition chain for 31 of length 7, since
2 = 1 + 1
3 = 2 + 1
6 = 3 + 3
12 = 6 + 6
24 = 12 + 12
30 = 24 + 6
31 = 30 + 1
Addition chains can be used for addition-chain exponentiation. This method allows exponentiation with integer exponents to be performed using a number of multiplications equal to the length of an addition chain for the exponent. For instance, the addition chain for 31 leads to a method for computing the 31st power of any number using only seven multiplications, instead of the 30 multiplications that one would get from repeated multiplication, and eight multiplications with exponentiation by squaring:
x^2 = x × x
x^3 = x^2 × x
x^6 = x^3 × x^3
x^12 = x^6 × x^6
x^24 = x^12 × x^12
x^30 = x^24 × x^6
x^31 = x^30 × x
Methods for computing addition chains
Calculating an addition chain of minimal length is not easy; a generalized version of the problem, in which one must find a chain that simultaneously forms each of a sequence of values, is NP-complete. There is no known algorithm which can calculate a minimal addition chain for a given number with any guarantees of reasonable timing or small memory usage. However, several techniques are known to calculate relatively short chains that are not always optimal.
One very well known technique to calculate relatively short addition chains is the binary method, similar to exponentiation by squaring. In this method, an addition chain for the number n is obtained recursively, from an addition chain for ⌊n/2⌋. If n is even, it can be obtained in a single additional sum, as n = ⌊n/2⌋ + ⌊n/2⌋. If n is odd, this method uses two sums to obtain it, by computing n - 1 = ⌊n/2⌋ + ⌊n/2⌋ and then adding one.
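A minimal sketch of the binary method (not a tool for finding optimal chains) is the following; on 31 it produces a chain of length 8, one longer than the optimal length-7 chain shown above:

#include <stdio.h>

/* Binary method: build an addition chain for n from a chain for n/2,
   using one extra sum if n is even and two if n is odd.
   Fills chain[] starting with 1 and returns the number of elements. */
static int binary_chain(unsigned n, unsigned chain[]) {
    if (n == 1) {
        chain[0] = 1;
        return 1;
    }
    int len = binary_chain(n / 2, chain);
    chain[len] = 2 * chain[len - 1];      /* n/2 + n/2 */
    len++;
    if (n % 2 == 1) {
        chain[len] = chain[len - 1] + 1;  /* ... + 1 */
        len++;
    }
    return len;
}

int main(void) {
    unsigned chain[128];
    int len = binary_chain(31, chain);
    for (int i = 0; i < len; i++)
        printf("%u%c", chain[i], i + 1 < len ? ' ' : '\n');
    /* Prints: 1 2 3 6 7 14 15 30 31 -- eight sums in total. */
    return 0;
}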
The factor method for finding addition chains is based on the prime factorization of the number n to be represented. If n has a number p as one of its prime factors, then an addition chain for n can be obtained by starting with a chain for n/p, and then concatenating onto it a chain for p, modified by multiplying each of its numbers by n/p. The ideas of the factor method and binary method can be combined into Brauer's m-ary method by choosing any number m (regardless of whether it divides n), recursively constructing a chain for ⌊n/m⌋, concatenating a chain for m (modified in the same way as above) to obtain m⌊n/m⌋, and then adding the remainder. Additional refinements of these ideas lead to a family of methods called sliding window methods.
Chain length
Let l(n) denote the smallest s so that there exists an addition chain
of length s which computes n.
It is known that

 log2(n) + log2(ν(n)) ≤ l(n),

where ν(n) is the Hamming weight (the number of ones) of the binary expansion of n.
One can obtain an addition chain for 2n from an addition chain for n by including one additional sum 2n = n + n, from which follows the inequality l(2n) ≤ l(n) + 1 on the lengths of the chains for n and 2n. However, this is not always an equality,
as in some cases 2n may have a shorter chain than the one obtained in this way. For instance, l(382) = l(191) = 11, observed by Knuth. It is even possible for 2n to have a shorter chain than n, so that l(2n) < l(n); the smallest n for which this happens is n = 375494703, and further such values follow it.
Brauer chain
A Brauer chain or star addition chain is an addition chain in which each of the sums used to calculate its numbers uses the immediately previous number. A Brauer number is a number for which a Brauer chain is optimal.
Brauer proved that

 l*(2^n - 1) ≤ n - 1 + l*(n),

where l*(n) is the length of the shortest star chain. For many values of n, and in particular for n < 12509, they are equal: l(n) = l*(n). But Hansen showed that there are some values of n for which l(n) < l*(n); the smallest such n is 12509.
Scholz conjecture
The Scholz conjecture (sometimes called the Scholz–Brauer or Brauer–Scholz conjecture, named after Arnold Scholz and Alfred T. Brauer) is a conjecture from 1937 stating that

 l(2^n - 1) ≤ n - 1 + l(n).

This inequality is known to hold for all Hansen numbers, a generalization of Brauer numbers; Neill Clift checked by computer that all n ≤ 5784688 are Hansen (while 5784689 is not). Clift further verified that in fact l(2^n - 1) = n - 1 + l(n) for all n ≤ 64.
See also
Addition-subtraction chain
Vectorial addition chain
Lucas chain
References
External links
OEIS sequence A003313 (length of the shortest addition chain for n). Note that the initial "1" is not counted (so element #1 in the sequence is 0).
F. Bergeron, J. Berstel. S. Brlek "Efficient computation of addition chains"
NP-complete problems | Addition chain | [
"Mathematics"
] | 1,011 | [
"Sequences and series",
"Mathematical structures",
"Addition chains",
"Computational problems",
"Mathematical problems",
"NP-complete problems"
] |
578,666 | https://en.wikipedia.org/wiki/Frequency%20counter | A frequency counter is an electronic instrument, or component of one, that is used for measuring frequency. Frequency counters usually measure the number of cycles of oscillation or pulses per second in a periodic electronic signal. Such an instrument is sometimes called a cymometer, particularly one of Chinese manufacture.
Operating principle
Most frequency counters work by using a counter, which accumulates the number of events occurring within a specific period of time. After a preset period known as the gate time (1 second, for example), the value in the counter is transferred to a display, and the counter is reset to zero. If the event being measured repeats itself with sufficient stability and the frequency is considerably lower than that of the clock oscillator being used, the resolution of the measurement can be greatly improved by measuring the time required for an entire number of cycles, rather than counting the number of entire cycles observed for a pre-set duration (often referred to as the reciprocal technique). The internal oscillator, which provides the time signals, is called the timebase, and must be calibrated very accurately.
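Both measurement styles reduce to simple arithmetic once the raw counts are available. The following sketch illustrates them with made-up register values and a hypothetical 10 MHz timebase; it is not code for any particular instrument:

#include <stdint.h>
#include <stdio.h>

/* Direct (gate-time) measurement: count input cycles during a fixed gate. */
static double freq_direct(uint64_t input_cycles, double gate_seconds) {
    return (double)input_cycles / gate_seconds;       /* resolution: 1/gate Hz */
}

/* Reciprocal measurement: time a whole number of input cycles against the
   timebase clock, then divide; better resolution for inputs much slower
   than the timebase. */
static double freq_reciprocal(uint64_t input_cycles, uint64_t timebase_ticks,
                              double timebase_hz) {
    double elapsed = (double)timebase_ticks / timebase_hz;
    return (double)input_cycles / elapsed;
}

int main(void) {
    printf("direct:     %.1f Hz\n", freq_direct(12345, 1.0));
    printf("reciprocal: %.3f Hz\n", freq_reciprocal(1000, 810044, 10e6));
    return 0;
}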
If the event to be counted is already in electronic form, simple interfacing with the instrument is all that is required. More complex signals may need some conditioning to make them suitable for counting. Most general-purpose frequency counters will include some form of amplifier, filtering, and shaping circuitry at the input. DSP technology, sensitivity control and hysteresis are other techniques to improve performance. Other types of periodic events that are not inherently electronic in nature will need to be converted using some form of transducer. For example, a mechanical event could be arranged to interrupt a light beam, and the counter made to count the resulting pulses.
Frequency counters designed for radio frequencies (RF) are also common and operate on the same principles as lower frequency counters. Often, they have more range before they overflow. For very high (microwave) frequencies, many designs use a high-speed prescaler to bring the signal frequency down to a point where normal digital circuitry can operate. The displays on such instruments consider this so they still display the correct value. Microwave frequency counters can currently measure frequencies up to almost 56 GHz. Above these frequencies, the signal to be measured is combined in a mixer with the signal from a local oscillator, producing a signal at the difference frequency, which is low enough to be measured directly.
Accuracy and resolution
The accuracy of a frequency counter is strongly dependent on the stability of its timebase. A timebase is delicate: movement, interference, and drift due to age can all alter its frequency, so it no longer "ticks" at exactly its nominal rate. This can make a frequency reading, when referenced to the timebase, appear higher or lower than the actual value. Highly accurate circuits are used to generate timebases for instrumentation purposes, usually using a quartz crystal oscillator within a sealed temperature-controlled chamber, known as an oven-controlled crystal oscillator or crystal oven.
For higher accuracy measurements, an external frequency reference tied to a very high stability oscillator, such as a GPS disciplined rubidium oscillator, may be used. Where the frequency does not need to be known to such a high degree of accuracy, simpler oscillators can be used. It is also possible to measure frequency using the same techniques in software in an embedded system. A central processing unit (CPU), for example, can be arranged to measure its own frequency of operation, provided it has some reference timebase to compare with.
Accuracy is often limited by the available resolution of the measurement. The resolution of a single count is generally proportional to the timebase oscillator frequency and the gate time. Improved resolution can be obtained by several techniques such as oversampling/averaging.
Additionally, accuracy can be significantly degraded by jitter on the signal being measured. It is possible to reduce this error by oversampling/averaging techniques.
I/O Interfaces
I/O interfaces allow the user to send information to the frequency counter and receive information from the frequency counter. Commonly used interfaces include RS-232, USB, GPIB and Ethernet. Besides sending measurement results, a counter can notify users when user-defined measurement limits are exceeded. Common to many counters are the SCPI commands used to control them. A new development is built-in LAN-based control via Ethernet complete with GUI's. This allows one computer to control one or several instruments and eliminates the need to write SCPI commands.
See also
Frequency meter
References
External links
Agilent's AN200: Fundamentals of electronic frequency counters 1 2
LCD Frequency Counter
How to build your own Frequency Counter
Digital electronics
Counting instruments
Electronic test equipment | Frequency counter | [
"Mathematics",
"Technology",
"Engineering"
] | 972 | [
"Digital electronics",
"Counting instruments",
"Electronic test equipment",
"Measuring instruments",
"Electronic engineering",
"Numeral systems"
] |
578,684 | https://en.wikipedia.org/wiki/Digital%20Display%20Working%20Group | The Digital Display Working Group (DDWG) was a group whose purpose was to define and maintain the Digital Visual Interface standard, which was formed in 1998. It was organized by Intel, Silicon Image, Compaq, Fujitsu, HP, IBM, and NEC. The best-known published specification is the DVI interface.
The group developed the Digital Visual Interface (DVI) standard in 1999.
In 2011, founding member HP reported that the group had not met in 5 years.
References
External links
Technology consortia
Organizations established in 1998 | Digital Display Working Group | [
"Technology"
] | 110 | [
"Computing stubs"
] |
578,688 | https://en.wikipedia.org/wiki/Programmable%20logic%20array | A programmable logic array (PLA) is a kind of programmable logic device used to implement combinational logic circuits. The PLA has a set of programmable AND gate planes, which link to a set of programmable OR gate planes, which can then be conditionally complemented to produce an output. It has 2N AND gates for N input variables, and for M outputs from the PLA, there should be M OR gates, each with programmable inputs from all of the AND gates. This layout allows for many logic functions to be synthesized in the sum of products canonical forms.
PLAs differ from programmable array logic devices (PALs and GALs) in that both the AND and OR gate planes are programmable; a PAL has programmable AND gates but fixed OR gates.
History
In 1970, Texas Instruments developed a mask-programmable IC based on the IBM read-only associative memory or ROAM. This device, the TMS2000, was programmed by altering the metal layer during the production of the IC. The TMS2000 had up to 17 inputs and 18 outputs with 8 JK flip-flops for memory. TI coined the term Programmable Logic Array for this device.
Implementation procedure
Preparation in SOP (sum of products) form.
Obtain the minimum SOP form to reduce the number of product terms to a minimum.
Decide the input connection of the AND matrix for generating the required product term.
Then decide the input connections of the OR matrix to generate the sum terms.
Decide the connections of the inversion matrix.
Program the PLA.
PLA block diagram:
Advantages over read-only memory
The desired outputs for each combination of inputs could be programmed into a read-only memory, with the inputs being driven by the address bus and the outputs being read out as data. However, that would require a separate memory location for every possible combination of inputs, including combinations that are never supposed to occur, and also duplicating data for "don't care" conditions (for example, logic like "if input A is 1, then, as far as output X is concerned, we don't care what input B is": in a ROM this would have to be written out twice, once for each possible value of B, and as more "don't care" inputs are added, the duplication grows exponentially); therefore, a programmable logic array can often implement a piece of logic using fewer transistors than the equivalent in read-only memory. This is particularly valuable when it is part of a processing chip where transistors are scarce (for example, the original 6502 chip contained a PLA to direct various operations of the processor).
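The structure can be mimicked in software. The toy model below is an illustration only, not tied to any real device: it programs three product terms in the AND plane and routes them through a programmable OR plane to two outputs, showing how a handful of product terms replaces a full truth table:

#include <stdint.h>
#include <stdio.h>

#define TERMS   3
#define OUTPUTS 2

/* A product term: which input bits take part, and the value each must have. */
struct product_term { uint8_t use_mask, value_mask; };

static const struct product_term and_plane[TERMS] = {
    { 0x03, 0x03 },   /* term 0: A AND B         */
    { 0x05, 0x04 },   /* term 1: (NOT A) AND C   */
    { 0x02, 0x00 },   /* term 2: NOT B           */
};

/* OR plane: bit i of or_plane[k] connects product term i to output k. */
static const uint8_t or_plane[OUTPUTS] = {
    0x03,             /* X = term0 OR term1      */
    0x06,             /* Y = term1 OR term2      */
};

static uint8_t pla_eval(uint8_t in) {      /* in: bit0 = A, bit1 = B, bit2 = C */
    uint8_t terms = 0, out = 0;
    for (int i = 0; i < TERMS; i++)
        if ((in & and_plane[i].use_mask) == and_plane[i].value_mask)
            terms |= (uint8_t)(1u << i);
    for (int k = 0; k < OUTPUTS; k++)
        if (terms & or_plane[k])
            out |= (uint8_t)(1u << k);
    return out;
}

int main(void) {
    for (uint8_t in = 0; in < 8; in++) {
        uint8_t out = pla_eval(in);
        printf("A=%u B=%u C=%u -> X=%u Y=%u\n",
               in & 1u, (in >> 1) & 1u, (in >> 2) & 1u,
               out & 1u, (out >> 1) & 1u);
    }
    return 0;
}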
Applications
One application of a PLA is to implement the control over a datapath. It defines various states in an instruction set, and produces the next state (by conditional branching). [e.g. if the machine is in state 2, and will go to state 4 if the instruction contains an immediate field; then the PLA should define the actions of the control in state 2, will set the next state to be 4 if the instruction contains an immediate field, and will define the actions of the control in state 4]. Programmable logic arrays should correspond to a state diagram for the system.
The earliest Commodore 64 home computers released in 1982 (into early 1983) initially used a programmed Signetics 82S100 PLA, but as the demand increased, MOS Technology / Commodore Semiconductor Group began producing a mask-programmed PLA, which bore part number 906114-01.
See also
Field-programmable gate array
Gate array
Programmable Array Logic
References
External links
Electronic design automation
Gate arrays | Programmable logic array | [
"Technology",
"Engineering"
] | 755 | [
"Computer engineering",
"Gate arrays"
] |
578,693 | https://en.wikipedia.org/wiki/Programmable%20Array%20Logic | Programmable Array Logic (PAL) is a family of programmable logic device semiconductors used to implement logic functions in digital circuits that was introduced by Monolithic Memories, Inc. (MMI) in March 1978. MMI obtained a registered trademark on the term PAL for use in "Programmable Semiconductor Logic Circuits". The trademark is currently held by Lattice Semiconductor.
PAL devices consisted of a small PROM (programmable read-only memory) core and additional output logic used to implement particular desired logic functions with few components.
Using specialized machines, PAL devices were "field-programmable". PALs were available in several variants:
"One-time programmable" (OTP) devices could not be updated and reused after initial programming. (MMI also offered a similar family called HAL, or "hard array logic", which were like PAL devices except that they were mask-programmed at the factory.)
UV erasable versions (e.g.: PALCxxxxx e.g.: PALC22V10) had a quartz window over the chip die and could be erased for re-use with an ultraviolet light source just like an EPROM.
Later versions (PALCExxx e.g.: PALCE22V10) were flash erasable devices.
In most applications, electrically erasable GALs are now deployed as pin-compatible direct replacements for one-time programmable PALs.
History
Before PALs were introduced, designers of digital logic circuits would use small-scale integration (SSI) components, such as those in the 7400 series TTL (transistor-transistor logic) family; the 7400 family included a variety of logic building blocks, such as gates (NOT, NAND, NOR, AND, OR), multiplexers (MUXes) and demultiplexers (DEMUXes), flip-flops (D-type, JK, etc.) and others. One PAL device would typically replace dozens of such "discrete" logic packages, so the SSI business declined as the PAL business took off. PALs were used advantageously in many products, such as minicomputers, as documented in Tracy Kidder's best-selling book The Soul of a New Machine.
PALs were not the first commercial programmable logic devices; Signetics had been selling its field programmable logic array (FPLA) since 1975. These devices were completely unfamiliar to most circuit designers and were perceived to be too difficult to use. The FPLA had a relatively slow maximum operating speed (due to having both programmable-AND and programmable-OR arrays), was expensive, and had a poor reputation for testability. Another factor limiting the acceptance of the FPLA was the large package, a 600-mil (0.6", or 15.24 mm) wide 28-pin dual in-line package (DIP).
The project to create the PAL device was managed by John Birkner and the actual PAL circuit was designed by H. T. Chua. In a previous job (at mini-computer manufacturer Computer Automation), Birkner had developed a 16-bit processor using 80 standard logic devices. His experience with standard logic led him to believe that user-programmable devices would be more attractive if the devices were designed to replace standard logic. This meant that the package sizes had to be more typical of the existing devices, and the speeds had to be improved. MMI intended PALs to be a relatively low cost (sub $3) part. However, the company initially had severe manufacturing yield problems and had to sell the devices for over $50. This threatened the viability of the PAL as a commercial product, and MMI was forced to license the product line to National Semiconductor. PALs were later "second sourced" by Texas Instruments and Advanced Micro Devices.
Process technologies
Early PALs were 20-pin DIP components fabricated in silicon using bipolar transistor technology with one-time programmable (OTP) titanium-tungsten programming fuses. Later devices were manufactured by Cypress, Lattice Semiconductor and Advanced Micro Devices using CMOS technology.
The original 20- and 24-pin PALs were denoted by MMI as medium-scale integration (MSI) devices.
PAL architecture
The PAL architecture consists of two main components: a logic plane and output logic macrocells.
Programmable logic plane
The programmable logic plane is a programmable read-only memory (PROM) array that allows the signals present on the device pins, or the logical complements of those signals, to be routed to output logic macrocells.
PAL devices have arrays of transistor cells arranged in a "fixed-OR, programmable-AND" plane used to implement "sum-of-products" binary logic equations for each of the outputs in terms of the inputs and either synchronous or asynchronous feedback from the outputs.
Output logic
The early 20-pin PALs had 10 inputs and 8 outputs. The outputs were active low and could be registered or combinational. Members of the PAL family were available with various output structures called "output logic macrocells" or OLMCs. Prior to the introduction of the "V" (for "variable") series, the types of OLMCs available in each PAL were fixed at the time of manufacture. (The PAL16L8 had 8 combinational outputs, and the PAL16R8 had 8 registered outputs. The PAL16R6 had 6 registered and 2 combinational outputs, while the PAL16R4 had 4 of each.) Each output could have up to 8 product terms (effectively AND gates); however, the combinational outputs used one of the terms to control a bidirectional output buffer. There were other combinations that had fewer outputs with more product terms per output and were available with active high outputs ("H" series). The "X" series of devices had an XOR gate before the register. There were also similar 24-pin versions of these PALs.
This fixed output structure often frustrated designers attempting to optimize the utility of PAL devices because output structures of different types were often required by their applications. (For example, one could not get 5 registered outputs with 3 active high combinational outputs.) So, in June 1983 AMD introduced the 22V10, a 24-pin device with 10 output logic macrocells. Each macrocell could be configured by the user to be combinational or registered, active high or active low. The number of product terms allocated to an output varied from 8 to 16. This one device could replace all of the 24-pin fixed function PAL devices. Members of the PAL "V" ("variable") series included the PAL16V8, PAL20V8 and PAL22V10.
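The fixed-OR, programmable-AND arrangement of a combinational PAL output can be sketched in the same spirit. Everything below (the masks, the chosen equation, the pin count) is illustrative only and does not correspond to a real fuse map:

#include <stdint.h>
#include <stdio.h>

#define N_INPUTS 10
#define N_TERMS  7      /* product terms hard-wired into this one output's OR */

/* Present the inputs and their complements as 2*N_INPUTS lines. */
static uint32_t lines(uint16_t in) {
    uint32_t t = in & ((1u << N_INPUTS) - 1);
    return t | ((~t & ((1u << N_INPUTS) - 1)) << N_INPUTS);
}

/* fuse[i]: bit j selects input j, bit j+N_INPUTS selects its complement.
   A term is true when every selected line is 1; selecting both polarities
   of some input makes the term permanently false (an unused term). */
static int pal_output(const uint32_t fuse[N_TERMS], uint16_t in) {
    uint32_t l = lines(in);
    int sum = 0;
    for (int i = 0; i < N_TERMS; i++)
        sum |= (l & fuse[i]) == fuse[i];
    return !sum;        /* active-low output, as on the "L" series parts */
}

int main(void) {
    uint32_t fuse[N_TERMS];
    for (int i = 0; i < N_TERMS; i++)
        fuse[i] = (1u << 0) | (1u << N_INPUTS);       /* I0 AND NOT I0: unused */
    fuse[0] = (1u << 0) | (1u << 1);                  /* I0 AND I1             */
    fuse[1] = (1u << 2) | (1u << (3 + N_INPUTS));     /* I2 AND NOT I3         */
    /* Output implements /O = NOT(I0 AND I1  OR  I2 AND NOT I3). */
    printf("/O = %d\n", pal_output(fuse, 0x0005));    /* I0=1, I2=1, I3=0 -> 0 */
    return 0;
}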
Programming PALs
PALs were programmed electrically using binary patterns (as JEDEC ASCII/hexadecimal files) and a special electronic programming system available from either the manufacturer or a third party, such as DATA I/O. In addition to single-unit device programmers, device feeders and gang programmers were often used when more than just a few PALs needed to be programmed. (For large volumes, electrical programming costs could be eliminated by having the manufacturer fabricate a custom metal mask used to program the customers' patterns at the time of manufacture; MMI used the term "hard array logic" (HAL) to refer to devices programmed in this way.)
Programming languages (by chronological order of appearance)
Though some engineers programmed PAL devices by manually editing files containing the binary fuse pattern data, most opted to design their logic using a hardware description language (HDL) such as Data I/O's ABEL, Logical Devices' CUPL, or MMI's PALASM. These were computer-aided design (CAD) programs, now referred to as "electronic design automation" (EDA) tools, which translated (or "compiled") the designers' logic equations into binary fuse map files used to program (and often test) each device.
PALASM
The PALASM (from "PAL assembler") language was developed by John Birkner in the early 1980s and the PALASM compiler was written by MMI in FORTRAN IV on an IBM 370/168. MMI made the source code available to users at no cost. By 1983, MMI customers ran versions on the DEC PDP-11, Data General NOVA, Hewlett-Packard HP 2100, MDS800 and others.
It was used to express Boolean equations for the output pins in a text file, which was then converted to the 'fuse map' file for the programming system using a vendor-supplied program; later the option of translation from schematics became common, and later still, 'fuse maps' could be 'synthesized' from an HDL (hardware description language) such as Verilog.
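As a hedged illustration of that translation step (in Python rather than in PALASM itself), the toy parser below turns a sum-of-products equation written with the conventions described above (`*` for AND, `+` for OR, `/` for complement) into a list of product terms, which is the information a real compiler would go on to encode as a device-specific fuse map. The `parse_equation` name and overall structure are illustrative assumptions, not part of any actual tool; real compilers also handled output polarity, registered outputs and device pinouts.

```python
def parse_equation(text):
    """Parse 'OUT = A*B + /C' into (output name, list of product terms)."""
    lhs, rhs = (side.strip() for side in text.split("="))
    terms = []
    for product in rhs.split("+"):
        term = {}
        for literal in product.split("*"):
            literal = literal.strip()
            if literal.startswith("/"):
                term[literal[1:]] = False   # complemented input
            else:
                term[literal] = True        # true (uncomplemented) input
        terms.append(term)
    return lhs, terms

print(parse_equation("O1 = A*B + /C"))
# ('O1', [{'A': True, 'B': True}, {'C': False}])
```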
CUPL
Assisted Technology released CUPL (Compiler for Universal Programmable Logic) in September 1983. The software was always referred to as CUPL and never by the expanded acronym. It was the first commercial design tool that supported multiple PLD families. The initial release was for the IBM PC and MS-DOS, but it was written in the C programming language so it could be ported to additional platforms. Assisted Technology was acquired by Personal CAD Systems (P-CAD) in July 1985. In 1986, P-CAD's schematic capture package could be used as a front end for CUPL. CUPL was later acquired by Logical Devices and is now owned by Altium. CUPL is currently available as an integrated development package for Microsoft Windows.
Atmel later released WinCUPL, its own design software for all Atmel SPLDs and CPLDs, free of charge. Atmel was acquired by Microchip in 2016.
ABEL
Data I/O Corporation released ABEL in April 1984. The development team was Michael Holley, Mike Mraz, Gerrit Barrere, Walter Bright, Bjorn Freeman-Benson, Kyu Lee, David Pellerin, Mary Bailey, Daniel Burrier and Charles Olivier.
Data I/O spun off the ABEL product line into an electronic design automation company called Synario Design Systems and then sold Synario to MINC Inc in 1997.
MINC was focused on developing FPGA design tools. The company closed its doors in 1998, and Xilinx acquired some of MINC's assets, including the ABEL language and tool set, which Xilinx still owns; ABEL became part of the Xilinx Webpack tool suite.
Device programmers
Popular device programmers included Data I/O Corporation's Model 60A Logic Programmer and Model 2900.
One of the first PAL programmers was the Structured Design SD20/24. It had the PALASM software built in and required only a CRT terminal to enter the equations and view the fuse plots. After fusing, the outputs of the PAL could be verified if test vectors were entered in the source file.
Successors
After MMI succeeded with the 20-pin PAL parts introduced circa 1978, AMD introduced the 24-pin 22V10 PAL with additional features. After buying out MMI (circa 1987), AMD spun off a consolidated operation as Vantis, and that business was acquired by Lattice Semiconductor in 1999.
Altera introduced the EP300 (first CMOS PAL) in 1983 and later moved into the FPGA business.
Lattice Semiconductor introduced the generic array logic (GAL) family in 1985, with functional equivalents of the "V" series PALs that used reprogrammable logic planes based on EEPROM (electrically erasable programmable read-only memory) technology. National Semiconductor was a second source for GAL parts.
AMD introduced a similar family called PALCE. In general one GAL part is able to function as any of the similar family PAL devices. For example, the 16V8 GAL is able to replace the 16L8, 16H8, 16H6, 16H4, 16H2 and 16R8 PALs (and many others besides).
ICT (International CMOS Technology) introduced the PEEL 18CV8 in 1986. The 20-pin CMOS EEPROM part could be used in place of any of the registered-output bipolar PALs and used much less power.
Larger-scale programmable logic devices were introduced by Atmel, Lattice Semiconductor, and others. These devices extended the PAL architecture by including multiple logic planes and/or burying logic macrocells within the logic plane(s). The term complex programmable logic device (CPLD) was introduced to differentiate these devices from their PAL and GAL predecessors, which were then sometimes referred to as simple programmable logic devices (SPLDs).
Another class of large programmable logic device is the field-programmable gate array (FPGA). These devices are currently made by Intel (which acquired Altera), Xilinx (which was acquired by AMD), and other semiconductor manufacturers.
See also
Combinational logic
Other types of programmable logic devices:
Field-programmable gate array (FPGA)
Programmable logic array (PLA)
Programmable logic device (PLD)
Complex programmable logic device (CPLD)
Erasable programmable logic device (EPLD)
Field programmable logic array (Signetics FPLA)
Current and former makers of programmable logic devices:
Actel
Advanced Micro Devices (PAL, PALCE)
Altera (Flex, Max)
Atmel
Cypress Semiconductor
Intel
Lattice Semiconductor (GAL)
Microchip Technology (FPGA, SPLD, CPLD)
National Semiconductor (GAL)
QuickLogic Corp.
Signetics (FPLA)
Texas Instruments
Xilinx
Current and former makers of PAL device programmers:
Data I/O Corporation
References
Further reading
Books
Programmable Logic Designer's Guide; Roger Alford; Sams Publishing; 1989; . (archive)
PAL Programmable Logic Handbook; 4ed; Monolithic Memories; 1985. (archive)
Databooks
Bipolar LSI 1984 Databook; 5ed; Monolithic Memories; 1984. (archive)
Specifications
Standard Data Transfer Format Between Data Preparation System and Programmable Logic Device Programmer; JEDEC Standard JESD3-C; JEDEC; June 1994.
Electronic design automation
Gate arrays | Programmable Array Logic | [
"Technology",
"Engineering"
] | 2,925 | [
"Computer engineering",
"Gate arrays"
] |
578,784 | https://en.wikipedia.org/wiki/Photobiology | Photobiology is the scientific study of the beneficial and harmful interactions of light (technically, non-ionizing radiation) in living organisms. The field includes the study of photophysics, photochemistry, photosynthesis, photomorphogenesis, visual processing, circadian rhythms, photomovement, bioluminescence, and ultraviolet radiation effects.
The division between ionizing radiation and non-ionizing radiation is typically considered to be a photon energy greater than 10 eV, which approximately corresponds to both the first ionization energy of oxygen and the ionization energy of hydrogen, at about 14 eV.
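For a sense of scale, the short sketch below converts these photon energies into wavelengths using the standard relation E = hc/λ. The constants are rounded textbook values and the helper function is purely illustrative.

```python
h = 6.626e-34      # Planck constant, J*s
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electronvolt

def wavelength_nm(energy_eV):
    """Photon wavelength in nanometres for a given photon energy in eV."""
    return h * c / (energy_eV * eV) * 1e9

print(round(wavelength_nm(10)))   # ~124 nm, in the vacuum ultraviolet
print(round(wavelength_nm(14)))   # ~89 nm, near the ionization energies mentioned above
```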
When photons interact with molecules, the molecules can absorb the photons' energy and become excited. The excited molecules can then react with molecules around them, driving "photochemical" and "photophysical" changes of molecular structure.
Photophysics
This area of photobiology focuses on the physical interactions of light and matter. When molecules absorb photons that match their energy requirements, a valence electron is promoted from the ground state to an excited state, and the molecule becomes much more reactive. This excitation is an extremely fast process, but it underlies many subsequent photobiological processes.
Photochemistry
This area of photobiology studies the reactivity of a molecule when it absorbs energy from light. It also studies what happens to this energy: it may be given off as heat or fluorescence, returning the molecule to the ground state.
There are three basic laws of photochemistry:
1) First Law of Photochemistry: light must be absorbed for photochemistry to take place.
2) Second Law of Photochemistry: each photon that is absorbed activates only one molecule.
3) Bunsen-Roscoe Law of Reciprocity: the energy in the final products of a photochemical reaction is directly proportional to the total energy initially absorbed by the system.
Plant Photobiology
Plant growth and development are highly dependent on light. Photosynthesis is one of the most important biochemical processes for life on Earth, and it is possible only because plants can use the energy of photons to produce molecules such as NADPH and ATP, which are then used to fix carbon dioxide into sugars that fuel plant growth and development. Photosynthesis is not the only plant process driven by light, however; other processes such as photomorphogenesis and photoperiod responses are extremely important for the regulation of vegetative and reproductive growth, as well as for the production of plant secondary metabolites.
Photosynthesis
Photosynthesis is defined as a series of biochemical reactions through which phototrophic cells transform light energy into chemical energy and store it in the carbon-carbon bonds of carbohydrates. This process takes place inside the chloroplasts of photosynthetic plant cells, where light-absorbing pigments are embedded in the membranes of structures called thylakoids. Two main pigments are present in the photosystems of higher plants: chlorophyll (a or b) and carotenes. These pigments are organized to maximize light reception and transfer, and they absorb distinct wavelengths, broadening the range of light that can be captured and used for photo-redox reactions.
Photosynthetically Active Radiation (PAR)
Because of the limited set of pigments in photosynthetic plant cells, only a limited range of wavelengths can be used for photosynthesis. This range is called photosynthetically active radiation (PAR). It is, interestingly, almost the same as the human visible spectrum, extending from approximately 400 to 700 nm. PAR is measured in μmol s⁻¹ m⁻², the rate of photosynthetically usable radiant light expressed as micromoles of photons per unit of surface area per unit of time.
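As an illustration of how this unit relates to more familiar energy units, the sketch below converts a photon flux in μmol s⁻¹ m⁻² into an irradiance in W/m² for a single assumed wavelength. The wavelength and flux values are illustrative assumptions, not measurements.

```python
N_A = 6.022e23     # Avogadro constant, photons per mole of photons
h = 6.626e-34      # Planck constant, J*s
c = 2.998e8        # speed of light, m/s

def irradiance_w_m2(ppfd_umol, wavelength_nm):
    """Irradiance in W/m^2 for a photon flux in umol s^-1 m^-2 at one wavelength."""
    photons_per_s_m2 = ppfd_umol * 1e-6 * N_A
    energy_per_photon = h * c / (wavelength_nm * 1e-9)
    return photons_per_s_m2 * energy_per_photon

# 1000 umol s^-1 m^-2 of 550 nm light (roughly full-sun PAR) is about 217 W/m^2
print(irradiance_w_m2(1000, 550))
```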
Photobiologically Active Radiation (PBAR)
Photobiologically Active Radiation (PBAR) is a range of light energy beyond and including PAR. Photobiological Photon Flux (PBF) is the metric used to measure PBAR.
Photomorphogenesis
This process refers to the light-mediated development of plant morphology, controlled by five distinct photoreceptors: UVR8, cryptochrome, phototropin, and the red- and far-red-absorbing forms of phytochrome (Pr and Pfr). Light can control morphogenic processes such as leaf size and shoot elongation.
Different wavelengths of light produce different changes in plants. Red to far-red light, for example, regulates stem growth and the straightening of seedling shoots emerging from the ground. Some studies also report that red and far-red light increase the rooting mass of tomatoes as well as the rooting percentage of grape plants. On the other hand, blue and UV light regulate germination and elongation of the plant as well as other physiological processes such as stomatal control and responses to environmental stress. Finally, green light was long thought not to be usable by plants due to the lack of pigments that would absorb it. However, in 2004 it was found that green light can influence stomatal activity, stem elongation of young plants and leaf expansion.
Secondary Plant Metabolites
These compounds are chemicals that plants produce as part of their biochemical processes; they help the plant perform certain functions and protect it from various environmental factors. In this context, metabolites such as anthocyanins, flavonoids, and carotenes can accumulate in plant tissues to protect them from UV radiation and very high light intensities.
Photobiologists
Thomas Patrick Coohill, former president of the American Society for Photobiology
Harold F. Blum, who explored sunlight-induced skin cancer
Paul Bert, 1878 photobiology pioneer
See also
References
External links
Branches of biology
Light | Photobiology | [
"Physics",
"Biology"
] | 1,216 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Waves",
"Light",
"nan"
] |
578,869 | https://en.wikipedia.org/wiki/Minor%20Planet%20Center | The Minor Planet Center (MPC) is the official body for observing and reporting on minor planets under the auspices of the International Astronomical Union (IAU). Founded in 1947, it operates at the Smithsonian Astrophysical Observatory.
Function
The Minor Planet Center is the official worldwide organization in charge of collecting observational data for minor planets (such as asteroids), calculating their orbits and publishing this information via the Minor Planet Circulars. Under the auspices of the International Astronomical Union (IAU), it operates at the Smithsonian Astrophysical Observatory, which is part of the Center for Astrophysics along with the Harvard College Observatory.
The MPC runs a number of free online services for observers to assist them in observing minor planets and comets. The complete catalogue of minor planet orbits (sometimes referred to as the "Minor Planet Catalogue") may also be freely downloaded. In addition to astrometric data, the MPC collects light curve photometry of minor planets. A key function of the MPC is helping observers coordinate follow-up observations of possible near-Earth objects (NEOs) via its NEO web form and blog, the Near-Earth Object Confirmation Page. The MPC is also responsible for identifying, and alerting to, new NEOs with a risk of impacting Earth in the few weeks following their discovery (see Potentially hazardous object).
History
The Minor Planet Center was set up at the University of Cincinnati in 1947, under the direction of Paul Herget. Upon Herget's retirement on June 30, 1978, the MPC was moved to the Smithsonian Astrophysical Observatory, under the direction of Brian G. Marsden. From 2006 to 2015, the director of the MPC was Timothy Spahr, who oversaw a staff of five. From 2015 to 2021, the Minor Planet Center was headed by interim director Matthew Holman. Under his leadership, the MPC experienced a significant period of reorganization and growth, doubling both its staff size and the volume of observations processed per year. Upon Holman's resignation on February 9, 2021 (announced on February 19, 2021), Matthew Payne became acting director of the MPC.
Directors
1947–1978: Paul Herget
1978–2006: Brian Marsden
2006–2015: Timothy Spahr
2015–2021: Matthew Holman
2021–present: Matthew Payne
Periodical publications
The MPC periodically releases astrometric observations of minor planets, as well as of comets and natural satellites. These publications are the Minor Planet Circulars (MPCs), the Minor Planet Electronic Circulars (MPECs), and the Minor Planet Supplements (MPSs and MPOs). An extensive archive of publications in a PDF format is available at the Minor Planet Center's website. The archive's oldest publication dates back to 1 November 1977 (MPC 4937–5016).
Minor Planet Circulars (M.P.C. or MPCs), established 1947, is a scientific journal that is generally published by the Minor Planet Center on the date of each full moon, when the number of reported observations are minimal due to the brighter night sky. The Circulars contain astrometric observations, orbits and ephemerides of minor planets, comets and certain natural satellites. The astrometric observations of comets are published in full, while the minor planet observations are summarised by observatory code (the full observations now being given in the Minor Planet Circulars Supplement). New numberings and namings of minor planets (also see Naming of Minor Planets), as well as numberings of periodic comets and natural satellites, are announced in the Circulars. New orbits for comets and natural satellites appear in the Circulars; new orbits for minor planets appear in the Minor Planets and Comets Orbit Supplement (see below).
The Minor Planet Electronic Circulars (MPECs) are published by the Minor Planet Center. They generally contain positional observations and orbits of unusual minor planets and all comets. Monthly lists of observable unusual objects, observable distant objects, observable comets and the critical list of numbered minor planets also appear on these circulars. Daily Orbit Update MPECs, issued every day, contain new identifications and orbits of minor planets, obtained over the previous 24 hours.
The Minor Planets and Comets Supplement (MPS) is published on behalf of IAU's Division F (Planetary Systems and Bioastronomy) by the Minor Planet Center.
The Minor Planets and Comets Orbit Supplement (MPO) is published on behalf of IAU's Division F by the Minor Planet Center.
Natural Satellites Ephemeris Service
The Natural Satellites Ephemeris Service is an online service of the Minor Planet Center. The service provides "ephemerides, orbital elements and residual blocks for the outer irregular satellites of the giant planets".
See also
Central Bureau for Astronomical Telegrams
IAU Circular
List of astronomical societies
List of minor-planet groups
List of minor planets
Meanings of minor-planet names
References
External links
MPC/MPO/MPS Archive, all published circulars since 1977 (downloadable as PDF)
The MPC Orbit (MPCORB) Database
The Minor Planet Center Status Report, Matthew J. Holman, 8 November 2015
Recent MPECs, list of most-recently published Minor Planet Electronic Circulars
Videos
Astronomy data and publications
Astronomy magazines
Astronomy organizations
Science and technology magazines published in the United States
Magazines established in 1947 | Minor Planet Center | [
"Astronomy"
] | 1,079 | [
"Astronomy magazines",
"Works about astronomy",
"Astronomy data and publications",
"Astronomy organizations"
] |
579,026 | https://en.wikipedia.org/wiki/Gravitational%20potential | In classical mechanics, the gravitational potential is a scalar potential associating with each point in space the work (energy transferred) per unit mass that would be needed to move an object to that point from a fixed reference point in the conservative gravitational field. It is analogous to the electric potential with mass playing the role of charge. The reference point, where the potential is zero, is by convention infinitely far away from any mass, resulting in a negative potential at any finite distance. Their similarity is correlated with both associated fields having conservative forces.
Mathematically, the gravitational potential is also known as the Newtonian potential and is fundamental in the study of potential theory. It may also be used for solving the electrostatic and magnetostatic fields generated by uniformly charged or polarized ellipsoidal bodies.
Potential energy
The gravitational potential (V) at a location is the gravitational potential energy (U) at that location per unit mass:
$$V = \frac{U}{m},$$
where m is the mass of the object. Potential energy is equal (in magnitude, but negative) to the work done by the gravitational field moving a body to its given position in space from infinity. If the body has a mass of 1 kilogram, then the potential energy to be assigned to that body is equal to the gravitational potential. So the potential can be interpreted as the negative of the work done by the gravitational field moving a unit mass in from infinity.
In some situations, the equations can be simplified by assuming a field that is nearly independent of position. For instance, in a region close to the surface of the Earth, the gravitational acceleration, g, can be considered constant. In that case, the difference in potential energy from one height to another is, to a good approximation, linearly related to the difference in height:
$$\Delta U \approx m g\,\Delta h.$$
Mathematical form
The gravitational potential V at a distance x from a point mass of mass M can be defined as the work W that needs to be done by an external agent to bring a unit mass in from infinity to that point:
$$V(x) = \frac{W}{m} = \frac{1}{m}\int_{\infty}^{x} F\,dx = \frac{1}{m}\int_{\infty}^{x} \frac{GmM}{x^{2}}\,dx = -\frac{GM}{x},$$
where G is the gravitational constant, and F is the gravitational force. The product GM is the standard gravitational parameter and is often known to higher precision than G or M separately. The potential has units of energy per mass, e.g., J/kg in the MKS system. By convention, it is always negative where it is defined, and as x tends to infinity, it approaches zero.
The gravitational field, and thus the acceleration of a small body in the space around the massive object, is the negative gradient of the gravitational potential. Thus the negative of a negative gradient yields positive acceleration toward a massive object. Because the potential has no angular components, the resulting acceleration is
$$\mathbf{a} = -\nabla V = -\frac{GM}{x^{3}}\,\mathbf{x} = -\frac{GM}{x^{2}}\,\hat{\mathbf{x}},$$
where x is a vector of length x pointing from the point mass toward the small body and $\hat{\mathbf{x}}$ is a unit vector pointing from the point mass toward the small body. The magnitude of the acceleration therefore follows an inverse square law:
$$\|\mathbf{a}\| = \frac{GM}{x^{2}}.$$
The potential associated with a mass distribution is the superposition of the potentials of point masses. If the mass distribution is a finite collection of point masses, and if the point masses are located at the points x1, ..., xn and have masses m1, ..., mn, then the potential of the distribution at the point x is
$$V(\mathbf{x}) = \sum_{i=1}^{n} -\frac{G m_{i}}{\|\mathbf{x} - \mathbf{x}_{i}\|}.$$
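A minimal numerical sketch of this superposition, assuming SI units and an illustrative pair of point masses (the function name and values are not from the article):

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def potential(x, positions, masses):
    """Gravitational potential (J/kg) at point x due to a set of point masses."""
    x = np.asarray(x, dtype=float)
    r = np.linalg.norm(np.asarray(positions, dtype=float) - x, axis=1)
    return -G * np.sum(np.asarray(masses) / r)

# Two equal point masses: the potential is just the sum of two point-mass terms.
print(potential([0.0, 0.0, 1.0], [[0, 0, 0], [2, 0, 0]], [1.0e6, 1.0e6]))
```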
If the mass distribution is given as a mass measure dm on three-dimensional Euclidean space R3, then the potential is the convolution of $-G/\|\mathbf{r}\|$ with dm. In good cases this equals the integral
$$V(\mathbf{x}) = -\int_{\mathbb{R}^{3}} \frac{G}{\|\mathbf{x}-\mathbf{r}\|}\,dm(\mathbf{r}),$$
where $\|\mathbf{x}-\mathbf{r}\|$ is the distance between the points x and r. If there is a function ρ(r) representing the density of the distribution at r, so that $dm(\mathbf{r}) = \rho(\mathbf{r})\,dv(\mathbf{r})$, where dv(r) is the Euclidean volume element, then the gravitational potential is the volume integral
$$V(\mathbf{x}) = -\int_{\mathbb{R}^{3}} \frac{G\,\rho(\mathbf{r})}{\|\mathbf{x}-\mathbf{r}\|}\,dv(\mathbf{r}).$$
If V is a potential function coming from a continuous mass distribution ρ(r), then ρ can be recovered using the Laplace operator, $\Delta$:
$$\rho(\mathbf{x}) = \frac{1}{4\pi G}\,\Delta V(\mathbf{x}).$$
This holds pointwise whenever ρ is continuous and is zero outside of a bounded set. In general, the mass measure dm can be recovered in the same way if the Laplace operator is taken in the sense of distributions. As a consequence, the gravitational potential satisfies Poisson's equation. See also Green's function for the three-variable Laplace equation and Newtonian potential.
The integral may be expressed in terms of known transcendental functions for all ellipsoidal shapes, including the symmetrical and degenerate ones. These include the sphere, where the three semi-axes are equal; the oblate (see reference ellipsoid) and prolate spheroids, where two semi-axes are equal; the degenerate ones where one semi-axis is infinite (the elliptical and circular cylinder) and the unbounded sheet where two semi-axes are infinite. All these shapes are widely used in the applications of the gravitational potential integral (apart from the constant G, with 𝜌 being a constant charge density) to electromagnetism.
Spherical symmetry
A spherically symmetric mass distribution behaves to an observer completely outside the distribution as though all of the mass was concentrated at the center, and thus effectively as a point mass, by the shell theorem. On the surface of the Earth, the acceleration is given by so-called standard gravity g, approximately 9.8 m/s², although this value varies slightly with latitude and altitude. The magnitude of the acceleration is a little larger at the poles than at the equator because Earth is an oblate spheroid.
Within a spherically symmetric mass distribution, it is possible to solve Poisson's equation in spherical coordinates. Within a uniform spherical body of radius R, density ρ, and mass m, the gravitational force g inside the sphere varies linearly with distance r from the center, giving the gravitational potential inside the sphere, which is
$$V(r) = \frac{2}{3}\pi G \rho \left(r^{2} - 3R^{2}\right) = -\frac{G m \left(3R^{2} - r^{2}\right)}{2R^{3}},$$
which differentiably connects to the potential function for the outside of the sphere (see the figure at the top).
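A short numerical check of that statement, using an illustrative sphere mass and radius (assumed values, not from the article): the inside and outside expressions give the same value at r = R, and the slopes on either side both approach GM/R².

```python
import numpy as np

G, M, R = 6.674e-11, 5.0e9, 10.0  # illustrative sphere mass (kg) and radius (m)

def V(r):
    inside = -G * M * (3 * R**2 - r**2) / (2 * R**3)   # potential inside the sphere
    outside = -G * M / r                               # point-mass potential outside
    return np.where(r < R, inside, outside)

r = np.array([R - 1e-6, R, R + 1e-6])
print(V(r))                      # all three values are essentially -G*M/R
print(np.gradient(V(r), r))      # slopes on both sides are essentially +G*M/R**2
```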
General relativity
In general relativity, the gravitational potential is replaced by the metric tensor. When the gravitational field is weak and the sources are moving very slowly compared to light-speed, general relativity reduces to Newtonian gravity, and the metric tensor can be expanded in terms of the gravitational potential.
Multipole expansion
The potential at a point x is given by
$$V(\mathbf{x}) = -\int \frac{G}{\|\mathbf{x}-\mathbf{r}\|}\,dm(\mathbf{r}).$$
The potential can be expanded in a series of Legendre polynomials. Represent the points x and r as position vectors relative to the center of mass. The denominator in the integral is expressed as the square root of the square to give
$$V(\mathbf{x}) = -\int \frac{G}{\sqrt{\|\mathbf{x}\|^{2} - 2\,\mathbf{x}\cdot\mathbf{r} + \|\mathbf{r}\|^{2}}}\,dm(\mathbf{r}) = -\frac{G}{\|\mathbf{x}\|}\int \left(1 - 2\,\frac{r}{\|\mathbf{x}\|}\cos\theta + \left(\frac{r}{\|\mathbf{x}\|}\right)^{2}\right)^{-\frac{1}{2}} dm(\mathbf{r}),$$
where, in the last integral, $r = \|\mathbf{r}\|$ and $\theta$ is the angle between x and r.
(See "mathematical form".) The integrand can be expanded as a Taylor series in , by explicit calculation of the coefficients. A less laborious way of achieving the same result is by using the generalized binomial theorem. The resulting series is the generating function for the Legendre polynomials:
valid for $|X| \le 1$ and $|Z| < 1$. The coefficients Pn are the Legendre polynomials of degree n. Therefore, the Taylor coefficients of the integrand are given by the Legendre polynomials in $X = \cos\theta$. So the potential can be expanded in a series that is convergent for positions x such that $r < \|\mathbf{x}\|$ for all mass elements of the system (i.e., outside a sphere, centered at the center of mass, that encloses the system):
$$V(\mathbf{x}) = -\frac{G}{\|\mathbf{x}\|}\int \sum_{n=0}^{\infty} \left(\frac{r}{\|\mathbf{x}\|}\right)^{n} P_{n}(\cos\theta)\,dm(\mathbf{r}).$$
The integral $\int r\cos\theta\,dm$ is the component of the center of mass in the $\hat{\mathbf{x}}$ direction; this vanishes because the vector x emanates from the center of mass. So, bringing the integral under the sign of the summation gives
$$V(\mathbf{x}) = -\frac{GM}{\|\mathbf{x}\|} - \frac{G}{\|\mathbf{x}\|}\int \left(\frac{r}{\|\mathbf{x}\|}\right)^{2}\,\frac{3\cos^{2}\theta - 1}{2}\,dm(\mathbf{r}) + \cdots$$
This shows that elongation of the body causes a lower potential in the direction of elongation, and a higher potential in perpendicular directions, compared to the potential due to a spherical mass, if we compare cases with the same distance to the center of mass. (If we compare cases with the same distance to the surface, the opposite is true.)
Numerical values
The absolute value of the gravitational potential at a number of locations, with respect to the gravitation of the Earth, the Sun, and the Milky Way, is given in the following table; i.e. an object at Earth's surface would need 60 MJ/kg to "leave" Earth's gravity field, another 900 MJ/kg to also leave the Sun's gravity field and more than 130 GJ/kg to leave the gravity field of the Milky Way. The potential is half the square of the escape velocity.
Compare the gravity at these locations.
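A small worked example of that last relation, using the approximate 60 MJ/kg surface value quoted above (the figure and rounding are illustrative):

```python
from math import sqrt

V_surface = 60e6                    # J/kg, the Earth-surface figure quoted above
v_escape = sqrt(2 * V_surface)      # |V| = v_esc^2 / 2  =>  v_esc = sqrt(2|V|)
print(f"{v_escape / 1000:.1f} km/s")   # about 11 km/s, the familiar escape velocity
```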
See also
Applications of Legendre polynomials in physics
Standard gravitational parameter (GM)
Geoid
Geopotential
Geopotential model
Notes
References
.
.
Energy (physics)
Gravity
Potentials
Scalar physical quantities | Gravitational potential | [
"Physics",
"Mathematics"
] | 1,734 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Energy (physics)",
"Wikipedia categories named after physical quantities"
] |
579,041 | https://en.wikipedia.org/wiki/Magnetomotive%20force | In physics, the magnetomotive force (abbreviated mmf or MMF, symbol ) is a quantity appearing in the equation for the magnetic flux in a magnetic circuit, Hopkinson's law. It is the property of certain substances or phenomena that give rise to magnetic fields:
where is the magnetic flux and is the reluctance of the circuit. It can be seen that the magnetomotive force plays a role in this equation analogous to the voltage in Ohm's law, , since it is the cause of magnetic flux in a magnetic circuit:
where is the number of turns in a coil and is the electric current through the coil.
where is the magnetic flux and is the magnetic reluctance
where is the magnetizing force (the strength of the magnetizing field) and is the mean length of a solenoid or the circumference of a toroid.
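A minimal sketch of Hopkinson's law applied to a simple magnetic circuit, with the reluctance of the core taken as R_m = length/(μ₀ μᵣ A); the coil, core and material values are illustrative assumptions, not from the article.

```python
from math import pi

mu0 = 4 * pi * 1e-7        # vacuum permeability, H/m
mu_r = 2000                # assumed relative permeability of the core material
N, I = 500, 0.2            # assumed number of turns and coil current (A)
length, area = 0.3, 1e-4   # assumed mean path length (m) and cross-section (m^2)

F = N * I                          # magnetomotive force, in ampere-turns
R_m = length / (mu0 * mu_r * area) # reluctance of the core, in A/Wb
Phi = F / R_m                      # Hopkinson's law: flux driven through the circuit, Wb
print(F, R_m, Phi)
```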
Units
The SI unit of mmf is the ampere, the same as the unit of current (analogously the units of emf and voltage are both the volt). Informally, and frequently, this unit is stated as the ampere-turn to avoid confusion with current. This was the unit name in the MKS system. Occasionally, the cgs system unit of the gilbert may also be encountered.
History
The term magnetomotive force was coined by Henry Augustus Rowland in 1880. Rowland intended this to indicate a direct analogy with electromotive force. The idea of a magnetic analogy to electromotive force can be found much earlier in the work of Michael Faraday (1791–1867) and it is hinted at by James Clerk Maxwell (1831–1879). However, Rowland coined the term and was the first to make explicit an Ohm's law for magnetic circuits in 1873.
Ohm's law for magnetic circuits is sometimes referred to as Hopkinson's law rather than Rowland's law as some authors attribute the law to John Hopkinson instead of Rowland. According to a review of magnetic circuit analysis methods this is an incorrect attribution originating from an 1885 paper by Hopkinson. Furthermore, Hopkinson actually cites Rowland's 1873 paper in this work.
References
Bibliography
Cited sources
Hon, Giora; Goldstein, Bernard R, "Symmetry and asymmetry in electrodynamics from Rowland to Einstein", Studies in History and Philosophy of Modern Physics, vol. 37, iss. 4, pp. 635–660, Elsevier December 2006.
Hopkinson, John, "Magnetisation of iron", Philosophical Transactions of the Royal Society, vol. 176, pp. 455–469, 1885.
Lambert, Mathieu; Mahseredjian, Jean; Martínez-Duró, Manuel; Sirois, Frédéric, "Magnetic circuits within electric circuits: critical review of existing methods and new mutator implementations", IEEE Transactions on Power Delivery, vol. 30, iss. 6, pp. 2427–2434, December 2015.
Rowland, Henry A, "On magnetic permeability and the maximum magnetism of iron, steel, and nickel", Philosophical Magazine, series 4, vol. 46, no. 304, pp. 140–159, August 1873.
Rowland, Henry A, "On the general equations of electro-magnetic action, with application to a new theory of magnetic attractions, and to the theory of the magnetic rotation of the plane of polarization of light" (part 2), American Journal of Mathematics, vol. 3, nos. 1–2, pp. 89–113, March 1880.
Schmidt, Robert Munnig; Schitter, Georg, "Electromechanical actuators", ch. 5 in Schmidt, Robert Munnig; Schitter, Georg; Rankers, Adrian; van Eijk, Jan, The Design of High Performance Mechatronics, IOS Press, 2014 .
Thompson, Silvanus Phillips, The Electromagnet and Electromagnetic Mechanism, Cambridge University Press, 2011 (first published 1891) .
Smith, R.J. (1966), Circuits, Devices and Systems, Chapter 15, Wiley International Edition, New York. Library of Congress Catalog Card No. 66-17612
Waygood, Adrian, An Introduction to Electrical Science, Routledge, 2013 .
General references
The Penguin Dictionary of Physics, 1977,
A Textbook of Electrical Technology, 2008,
Magnetism
Physical quantities
"Physics",
"Mathematics"
] | 892 | [
"Physical phenomena",
"Quantity",
"Physical quantities",
"Physical properties"
] |
579,060 | https://en.wikipedia.org/wiki/Lipotropic | Lipotropic compounds are those that help catalyse the breakdown of fat during metabolism in the body. A lipotropic nutrient promotes or encourages the export of fat from the liver. Lipotropics are necessary for maintenance of a healthy liver, and for burning the exported fat for additional energy. Without lipotropics, such as choline and inositol, fats and bile can become trapped in the liver, causing severe problems such as cirrhosis and blocking fat metabolism.
Choline is the major lipotrope in mammals and other known lipotropes are important only insofar as they contribute to the synthesis of choline. Choline is essential for fat metabolism. Choline functions as a methyl donor and it is required for proper liver function. Though choline can be synthesized from methionine or serine, mammals don't produce a sufficient amount on their own. Liver, eggs, wheat bran, meat, and broccoli are dietary sources of choline.
Inositol exerts lipotropic effects as well. Oranges and cantaloupe are high in inositol.
Methionine, an essential amino acid, is a major lipotropic compound in humans. When estrogen levels are high, the body requires more methionine. Estrogens reduce bile flow through the liver and increase bile cholesterol levels. Methionine helps deactivate estrogens. Egg whites are high in methionine.
Methionine levels also affect the amount of sulfur-containing compounds, such as glutathione, in the liver. Glutathione and other sulfur-containing peptides play a critical role in defending against toxic compounds. Supplementation with vitamin C, vitamin D, and NAC can increase glutathione levels.
Betaine hydrochloride is a lipotropic and increases gastric acid. Betaine itself (in a non-hydrochloric form, also known as TMG or Trimethylglycine) also has a lipotropic effect. Quinoa is high in betaine.
References
Further reading
Metabolism | Lipotropic | [
"Chemistry",
"Biology"
] | 451 | [
"Biotechnology stubs",
"Biochemistry stubs",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
579,154 | https://en.wikipedia.org/wiki/Radio%20Print%20Handicapped%20Network | RPH Australia is the national peak representative organisation for a unique Australian network of radio reading services designed to meet the daily information needs of people who, for any reason, are unable to access printed material. It is estimated that 22% of the Australian population has a print disability (over 5 million).
History
Historically, RPH stood for "Radio for the Print Handicapped", and these services began in Australia in 1975 on Melbourne's 3ZZ.
On 23 July 1978, the Minister for Post and Telecommunications announced, "The establishment of a special radio communications service for the blind and other people with reading difficulties."
The federal government began its direct funding of the service with a $250,000 grant in the 1981–82 budget.
Initially using marine band (today's extended AM broadcast band) frequencies, stations in Hobart, Melbourne, and Sydney began operating. 7RPH Hobart went to air in June 1982. 3RPH Melbourne was officially opened in December the same year.
By 1984–85, RPH services were also operating in Brisbane and Canberra. After another review, the specialised stations of the service transferred to normal broadcast band frequencies in 1990 and 1991.
Material from the network is heard on a small number of non-network community stations in Australia and on the Radio Reading Service of New Zealand.
In December 2013, all RPH Australia network stations joined the new VAST satellite platform.
RPH Australia Radio Reading Network Stations
Radio 1RPH
Canberra: 1125 kHz AM and DAB+
Wagga Wagga: 89.5 MHz FM
Junee: 99.5 FM (retransmission by Junee Shire Council)
Internet stream:
Australia: Viewer Access Satellite Television, radio channel 632
2RPH
Sydney: 1224 kHz AM and DAB+
Sydney Eastern Suburbs: 100.5 MHz FM and DAB+
Newcastle: 100.5 MHz FM
Wollongong: 93.3 MHz FM
Australia: Viewer Access Satellite Television, radio channel 632
4RPH
Brisbane: 1296 kHz AM and DAB+
Australia: Viewer Access Satellite Television, radio channel 632
Vision Australia Radio (3RPH)
Albury: 2APH 101.7 MHz FM
Bendigo: 3BPH 88.7 MHz FM
Darwin, Northern Territory: 3RPH DAB+
Geelong: 3GPH 99.5 MHz FM
Melbourne: 3RPH 1179 kHz AM and DAB+
Mildura: 3MPH 107.5 MHz FM
Shepparton: 3SPH 100.1 MHz FM
Warragul: 3RPH 93.5 MHz FM
Warrnambool: 3RPH 882 kHz AM
Australia & New Zealand: Optus Aurora, radio channel 12 (Melbourne feed)
Australia: Viewer Access Satellite Television, radio channel 632
5RPH
Adelaide: 1197 kHz AM and DAB+
Australia: Viewer Access Satellite Television, radio channel 632
990 Vision Australia Radio, Perth (6RPH)
Perth: 990 kHz AM and DAB+
Australia: Viewer Access Satellite Television, radio channel 632
Print Radio Tasmania (7RPH)
Hobart: 864 kHz AM and DAB+
Launceston: 106.9 MHz FM
Devonport: 96.1 MHz FM
Australia: Viewer Access Satellite Television, radio channel 632
NDIS
In February 2016, continued funding under the National Disability Insurance Scheme (NDIS) beyond July 2016 was called into question, with 1RPH being the first station to have funding from previous disability services sources withdrawn.
See also
Community Broadcasting Association of Australia
Community Broadcasting Foundation
External links
RPH Australia
Community Broadcasting Association of Australia (CBAA)
Accessibility
Australian radio networks
Radio reading services of Australia
Radio stations established in 1982 | Radio Print Handicapped Network | [
"Engineering"
] | 745 | [
"Accessibility",
"Design"
] |