| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
2,174,645 | https://en.wikipedia.org/wiki/EPPO%20Code | An EPPO code, formerly known as a Bayer code, is an encoded identifier that is used by the European and Mediterranean Plant Protection Organization (EPPO), in a system designed to uniquely identify organisms – namely plants, pests and pathogens – that are important to agriculture and crop protection. EPPO codes are a core component of a database of names, both scientific and vernacular. Although originally started by the Bayer Corporation, the official list of codes is now maintained by EPPO.
EPPO code database
All codes and their associated names are included in a database (EPPO Global Database). In total, there are over 93,500 species listed in the EPPO database, including:
55,000 species of plants (e.g. cultivated, wild plants and weeds)
27,000 species of animals (e.g. insects, mites, nematodes, rodents), biocontrol agents
11,500 microorganism species (e.g. bacteria, fungi, viruses, viroids and virus-like)
Plants are identified by a five-letter code, other organisms by a six-letter one. In many cases the codes are mnemonic abbreviations of the scientific name of the organism, derived from the first three or four letters of the genus and the first two letters of the species. For example, corn, or maize (Zea mays), was assigned the code "ZEAMA"; the code for potato late blight (Phytophthora infestans) is "PHYTIN". The unique and constant code for each organism provides a shorthand method of recording species. The EPPO code avoids many of the problems caused by revisions to scientific names and taxonomy which often result in different synonyms being in use for the same species. When the taxonomy changes, the EPPO code stays the same. The EPPO system is used by governmental organizations, conservation agencies, and researchers.
Example
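As a rough sketch of the mnemonic pattern described above, the following Python snippet derives the two codes mentioned in the text. This is only an illustration of the naming pattern: actual EPPO codes are assigned and curated by EPPO and do not always follow a simple truncation rule, and the function name here is hypothetical.

```python
def mnemonic_code(genus: str, species: str, genus_letters: int = 3) -> str:
    """Illustrate the mnemonic pattern: the first letters of the genus plus
    the first two letters of the species epithet, upper-cased."""
    return (genus[:genus_letters] + species[:2]).upper()

print(mnemonic_code("Zea", "mays"))                   # ZEAMA  (plant, 5 letters)
print(mnemonic_code("Phytophthora", "infestans", 4))  # PHYTIN (pathogen, 6 letters)
```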
External links
EPPO Global Database (lookup EPPO codes)
EPPO Data Services (download EPPO codes)
References
Taxonomy (biology)
Plant pathogens and diseases | EPPO Code | [
"Biology"
] | 431 | [
"Plant pathogens and diseases",
"Taxonomy (biology)",
"Plants"
] |
2,175,118 | https://en.wikipedia.org/wiki/Refugium%20%28population%20biology%29 | In biology, a refugium (plural: refugia) is a location which supports an isolated or relict population of a once more widespread species. This isolation (allopatry) can be due to climatic changes, geography, or human activities such as deforestation and overhunting.
Present-day examples of refugial animal species are the mountain gorilla, isolated to specific mountains in central Africa, and the Australian sea lion, isolated to specific breeding beaches along the south-west coast of Australia because humans took so many of their number as game. This resulting isolation can, in many cases, be seen as only a temporary state; however, some refugia may be longstanding and thereby harbour many endemic species, not found elsewhere, which survive as relict populations. The Indo-Pacific Warm Pool has been proposed to be a longstanding refugium, based on the discovery of the "living fossil" of a marine dinoflagellate called Dapsilidinium pastielsii, currently found only in the Indo-Pacific Warm Pool.
For plants, anthropogenic climate change propels scientific interest in identifying refugial species that were isolated into small or disjunct ranges during glacial episodes of the Pleistocene, yet whose ability to expand their ranges during the warmth of interglacial periods (such as the Holocene) was apparently limited or precluded by topographic, streamflow, or habitat barriers—or by the extinction of coevolved animal dispersers. The concern is that ongoing warming trends will expose them to extirpation or extinction in the decades ahead.
In anthropology, refugia often refers specifically to Last Glacial Maximum refugia, where some ancestral human populations may have been forced back to glacial refugia (similar small isolated pockets on the face of the continental ice sheets) during the last glacial period. Going from west to east, suggested examples include the Franco-Cantabrian region (in northern Iberia), the Italian and Balkan peninsulas, the Ukrainian LGM refuge, and the Bering Land Bridge. Archaeological and genetic data suggest that the source populations of Paleolithic humans survived the glacial maxima (including the Last Glacial Maximum) in sparsely wooded areas and dispersed through areas of high primary productivity while avoiding dense forest cover. Glacial refugia, where human populations found refuge during the last glacial period, may have played a crucial role in shaping the emergence and diversification of the language families that exist in the world today.
More recently, refugia has been used to refer to areas that could offer relative climate stability in the face of modern climate change.
Speciation
As an example of a refugium study, Jürgen Haffer first proposed the concept of refugia to explain the biological diversity of bird populations in the Amazonian river basin. Haffer suggested that climatic change in the late Pleistocene led to reduced reservoirs of habitable forests in which populations became allopatric. Over time, this led to speciation: populations of the same species that found themselves in different refugia evolved differently, creating parapatric sister species. As the Pleistocene ended, the arid conditions gave way to the present humid rainforest environment, reconnecting the refugia.
Scholars have since expanded the idea of this mode of speciation and used it to explain population patterns in other areas of the world, such as Africa, Eurasia, and North America. Theoretically, current biogeographical patterns can be used to infer past refugia: if several unrelated species follow concurrent range patterns, the area may have been a refugium. Moreover, the current distribution of species with narrow ecological requirements tends to be associated with the spatial position of glacial refugia.
Simple environment examples of temperature
A simple illustration of refugia involves temperature and exposure to sunlight. In the northern hemisphere, north-facing sites on hills or mountains, and places at higher elevations, count as cold sites; the reverse, sun- or heat-exposed, lower-elevation, south-facing sites, are hot sites. (The opposite directions apply in the southern hemisphere.) Each kind of site can become a refugium, one as a "cold-surviving refugium" and the other as a "hot-surviving refugium". Canyons with deep, shaded recesses (the opposite of exposed hillsides, mountains, and mesas) likewise give rise to these separate types of refugia.
A concept not often referenced is that of "sweepstakes colonization": a dramatic ecological event occurs, for example a meteor strike, with global, multiyear effects. The sweepstake-winning species happen to be living in a fortunate site whose environment is rendered even more advantageous, whereas the "losing" species immediately fail to reproduce.
Past climate change refugia
Ecological understanding and geographic identification of climate refugia that remained significant strongholds for plant and animal survival during the extremes of past cooling and warming episodes largely pertain to the Quaternary glaciation cycles of the past several million years, especially in the Northern Hemisphere. A number of defining characteristics of past refugia have been offered, including "an area where distinct genetic lineages have persisted through a series of Tertiary or Quaternary climate fluctuations owing to special, buffering environmental characteristics", "a geographical region that a species inhabits during the period of a glacial/interglacial cycle that represents the species' maximum contraction in geographical range," and "areas where local populations of a species can persist through periods of unfavorable regional climate."
Future climate change refugia
In systematic conservation planning, the term refugium has been used to define areas that could be used in protected area development to protect species from climate change. The term has been used alternatively to refer to areas with stable habitats or stable climates. More specifically, the term in situ refugium is used to refer to areas that will allow species that exist in an area to remain there even as conditions change, whereas ex situ refugium refers to an area into which species distributions can move to in response to climate change. Sites that offer in situ refugia are also called resilient sites in which species will continue to have what they need to survive even as climate changes.
One study found with downscaled climate models that areas near the coast are predicted to experience overall less warming than areas toward the interior of the US State of Washington. Other research has found that old-growth forests are particularly insulated from climatic changes due to evaporative cooling effects from evapotranspiration and their ability to retain moisture. The same study found that such effects in the Pacific Northwest would create important refugia for bird species. A review of refugia-focused conservation strategy in the Klamath-Siskiyou Ecoregion found that, in addition to old-growth forest, the northern aspects of hillslopes and deep gorges would provide relatively cool areas for wildlife and seeps or bogs surrounded by mature and old-growth forests would continue to supply moisture even as water availability decreases.
Beginning in 2010 the concept of geodiversity (a term used previously in efforts to preserve scientifically important geological features) entered into the literature of conservation biologists as a potential way to identify climate change refugia and as a surrogate (in other words, a proxy used when planning for protected areas) for biodiversity. While the language to describe this mode of conservation planning hadn't fully developed until recently, the use of geophysical diversity in conservation planning goes back at least as far as the work by Hunter and others in 1988, and Richard Cowling and his colleagues in South Africa also used "spatial features" as surrogates for ecological processes in establishing conservation areas in the late 1990s and early 2000s. The most recent efforts have used the idea of land facets (also referred to as geophysical settings, enduring features, or geophysical stages), which are unique combinations of topographical features (such as slope steepness, slope direction, and elevation) and soil composition, to quantify physical features. The density of these facets, in turn, is used as a measure of geodiversity. Because geodiversity has been shown to be correlated with biodiversity, even as species move in response to climate change, protected areas with high geodiversity may continue to protect biodiversity as niches get filled by the influx of species from neighboring areas. Highly geodiverse protected areas may also allow for the movement of species within the area from one land facet or elevation to another.
Conservation scientists, however, emphasize that the use of refugia to plan for climate change is not a substitute for fine-scale (more localized) and traditional approaches to conservation, as individual species and ecosystems will need to be protected where they exist in the present. They also emphasize that responding to climate change in conservation is not a substitute for actually limiting the causes of climate change.
See also
Notes
References
Biogeography
Biomes
Habitat
Population ecology | Refugium (population biology) | [
"Biology"
] | 1,854 | [
"Biogeography"
] |
14,463,498 | https://en.wikipedia.org/wiki/Eilenberg%E2%80%93Mazur%20swindle | In mathematics, the Eilenberg–Mazur swindle, named after Samuel Eilenberg and Barry Mazur, is a method of proof that involves paradoxical properties of infinite sums. In geometric topology it was introduced by and is often called the Mazur swindle. In algebra it was introduced by Samuel Eilenberg
and is known as the Eilenberg swindle or Eilenberg telescope (see telescoping sum).
The Eilenberg–Mazur swindle is similar to the following well-known joke "proof" that 1 = 0:
1 = 1 + (−1 + 1) + (−1 + 1) + ... = 1 − 1 + 1 − 1 + ... = (1 − 1) + (1 − 1) + ... = 0
This "proof" is not valid as a claim about real numbers because Grandi's series 1 − 1 + 1 − 1 + ... does not converge, but the analogous argument can be used in some contexts where there is some sort of "addition" defined on some objects for which infinite sums do make sense,
to show that if A + B = 0 then A = B = 0.
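Spelled out, and assuming that the addition is commutative and associative, that 0 is an identity element, and that the infinite sums are well defined and may be regrouped, the argument runs:
A = A + (B + A) + (B + A) + ... = (A + B) + (A + B) + ... = 0 + 0 + ... = 0,
and B = 0 follows by exchanging the roles of A and B.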
Mazur swindle
In geometric topology the addition used in the swindle is usually the connected sum of knots or manifolds.
Example: A typical application of the Mazur swindle in geometric topology is the proof that the sum of two non-trivial knots A and B is non-trivial. For knots it is possible to take infinite sums by making the knots smaller and smaller, so if A + B is trivial then
A = A + (B + A) + (B + A) + ⋯ = (A + B) + (A + B) + ⋯ = 0
so A is trivial (and B by a similar argument). The infinite sum of knots is usually a wild knot, not a tame knot.
See the references for more geometric examples.
Example: The oriented n-manifolds have an addition operation given by connected sum, with 0 the n-sphere. If A + B is the n-sphere, then A + B + A + B + ... is Euclidean space so the Mazur swindle shows that the connected sum of A and Euclidean space is Euclidean space, which shows that A is the 1-point compactification of Euclidean space and therefore A is homeomorphic to the n-sphere. (This does not show in the case of smooth manifolds that A is diffeomorphic to the n-sphere, and in some dimensions, such as 7, there are examples of exotic spheres A with inverses that are not diffeomorphic to the standard n-sphere.)
Eilenberg swindle
In algebra the addition used in the swindle is usually the direct sum of modules over a ring.
Example: A typical application of the Eilenberg swindle in algebra is the proof that if A is a projective module over a ring R then there is a free module F with A ⊕ F ≅ F. To see this, choose a module B such that A ⊕ B is free, which can be done as A is projective, and put
F = B ⊕ A ⊕ B ⊕ A ⊕ B ⊕ ⋯.
so that
A ⊕ F = A ⊕ (B ⊕ A) ⊕ (B ⊕ A) ⊕ ⋯ = (A ⊕ B) ⊕ (A ⊕ B) ⊕ ⋯ ≅ F.
Example: Finitely generated free modules over commutative rings R have a well-defined natural number as their dimension which is additive under direct sums, and are isomorphic if and only if they have the same dimension.
This is false for some noncommutative rings, and a counterexample can be constructed using the Eilenberg swindle as follows. Let X be an abelian group such that X ≅ X ⊕ X (for example the direct sum of an infinite number of copies of any nonzero abelian group), and let R be the ring of endomorphisms of X. Then the left R-module R is isomorphic to the left R-module R ⊕ R.
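A sketch of the isomorphism in this last example, using only the assumption X ≅ X ⊕ X and writing Hom for group homomorphisms:
R = Hom(X, X) ≅ Hom(X ⊕ X, X) ≅ Hom(X, X) ⊕ Hom(X, X) = R ⊕ R,
where both identifications commute with the left action of R given by composing with an endomorphism of X on the target side, so they are isomorphisms of left R-modules.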
Example: If A and B are any groups then the Eilenberg swindle can be used to construct a ring R such that the group rings R[A] and R[B] are isomorphic rings: take R to be the group ring of the restricted direct product of infinitely many copies of A ⨯ B.
Other examples
The proof of the Cantor–Bernstein–Schroeder theorem might be seen as an antecedent of the Eilenberg–Mazur swindle. In fact, the ideas are quite similar. If there are injections of sets from X to Y and from Y to X, this means that formally we have X = Y + A and Y = X + B for some sets A and B, where + means disjoint union and = means there is a bijection between two sets. Expanding the former with the latter,
X = X + A + B.
In this bijection, let Z consist of those elements of the left hand side that correspond to an element of X on the right hand side. This bijection then expands to the bijection
X = A + B + A + B + ⋯ + Z.
Substituting the right hand side for X in Y = B + X gives the bijection
Y = B + A + B + A + ⋯ + Z.
Switching every adjacent pair B + A yields
Y = A + B + A + B + ⋯ + Z.
Composing the bijection for X with the inverse of the bijection for Y then yields
X = Y.
This argument depended on the bijections X = Y + A and Y = X + B, as well as on the well-definedness of infinite disjoint union.
Notes
References
External links
Exposition by Terence Tao on Mazur's swindle in topology
Knot theory
Module theory | Eilenberg–Mazur swindle | [
"Mathematics"
] | 1,178 | [
"Fields of abstract algebra",
"Module theory"
] |
14,463,701 | https://en.wikipedia.org/wiki/Outline%20of%20abnormal%20psychology | The following outline is provided as an overview of and topical guide to abnormal psychology:
Abnormal psychology – the scientific study of abnormal behavior, undertaken in order to describe, predict, explain, and change abnormal patterns of functioning. Abnormal psychology in clinical psychology studies the nature of psychopathology, its causes, and its treatments. The definition of what constitutes 'abnormal' has varied across time and across cultures. Individuals also vary in what they regard as normal or abnormal behavior. Additionally, psychologists hold many current theories and approaches, including biological, psychological, behavioral, humanistic, existential, and sociocultural ones. In general, abnormal psychology can be described as an area of psychology that studies people who are consistently unable to adapt and function effectively in a variety of conditions. The main contributing factors to how well an individual is able to adapt include their genetic makeup, physical condition, learning and reasoning, and socialization.
Nature of abnormal psychology
What type of thing is abnormal psychology?
Abnormal psychology can be described as all of the following:
An academic discipline – focused study in one academic field or profession. A discipline incorporates expertise, people, projects, communities, challenges, studies, inquiry, and research areas that are strongly associated with a given discipline.
One of the social sciences – concerned with society and the relationships among individuals within a society.
A branch of psychology – study of mind and behavior.
An applied science – discipline of science that applies existing scientific knowledge to develop more practical applications, like treating the mentally ill.
Essence of abnormal psychology
Abnormality
Mental disorder
Psychology
Psychopathology
Approaches of abnormal psychology
Somatogenic – abnormality is seen as a result of biological disorders in the brain. This approach has led to the development of radical biological treatments, e.g. lobotomy.
Psychogenic – abnormality is caused by psychological problems. Psychoanalytic (Freud), Cathartic, Hypnotic and Humanistic Psychology (Carl Rogers, Abraham Maslow) treatments were all derived from this paradigm.
Mental disorders
Mental disorder – examples of mental disorders include:
Anxiety disorder
Bipolar disorder
Delusional disorder
Impulse control disorder
Kleptomania
Pyromania
Personality disorder
Obsessive–compulsive personality disorder
Schizoaffective disorder
Schizophrenia
Substance use disorder
Substance abuse
Substance dependence
Thought disorder
Treatment of mental disorders
Psychological evaluation
Psychotherapy
Psychiatric medication
Mental health professions
Mental health profession
Psychiatry
Clinical psychology
Psychiatric rehabilitation
School psychology
Clinical social work
Mental health professionals
Mental health professional
Psychiatrist
Clinical psychologist
School psychologist
Mental health counselor
History of abnormal psychology
History of mental disorders
History of mental disorders, by type
History of anxiety disorders
History of posttraumatic stress disorder
History of bipolar disorder
History of depression
History of major depressive disorder
History of neurodevelopmental disorders
History of autism
History of Asperger syndrome
History of obsessive–compulsive disorder
History of personality disorders
History of psychopathy
History of schizophrenia
History of the treatment of mental disorders
History of clinical psychology
History of electroconvulsive therapy
History of electroconvulsive therapy in the United Kingdom
History of psychiatry
History of psychiatric institutions
History of psychosurgery
History of psychosurgery in the United Kingdom
Lobotomy – consists of cutting or scraping away most of the connections to and from the prefrontal cortex, the anterior part of the frontal lobes of the brain. The purpose of the operation was to reduce the symptoms of mental disorder, and it was recognized that this was accomplished at the expense of the patient's personality and intellect. By the late 1970s, the practice of lobotomy had generally ceased.
History of psychotherapy
Abnormal psychology organizations
American Psychological Association (APA) – largest organization of psychologists in the United States.
National Institute of Mental Health (NIMH) – part of the U.S. Department of Health and Human Services, it specializes in mental illness research.
National Alliance on Mental Illness (NAMI) – provides support, education, and advocacy for people affected by mental illness.
Abnormal psychology publications
Journals
Behavior Genetics
British Journal of Clinical Psychology
Communication Disorders Quarterly
Journal of Abnormal Child Psychology
Journal of Abnormal Psychology
Journal of Clinical Psychology
Journal of Consulting and Clinical Psychology
Molecular Psychiatry
Psychological Medicine
Psychology of Addictive Behaviors
Psychology of Violence
Psychosis (journal)
Persons influential in abnormal psychology
Sigmund Freud
Jacques Lacan
B.F. Skinner
Deirdre Barrett
Kay Redfield Jamison
Theodore Millon
See also
Outline of psychology
References
External links
Definition of abnormal psychology, from Merriam-Webster MedlinePlus Medical Dictionary
Abnormal Psychology Students Practice Resources
Science Direct
A Course in Abnormal Psychology
NIMH.NIH.gov - National Institute of Mental Health
International Committee of Women Leaders on Mental Health
Mental Illness Watch
Metapsychology Online Reviews: Mental Health
The New York Times: Mental Health & Disorders
The Guardian: Mental Health
Mental Illness (Stanford Encyclopedia of Philosophy)
Abnormal psychology | Outline of abnormal psychology | [
"Biology"
] | 971 | [
"Behavioural sciences",
"Behavior",
"Abnormal psychology"
] |
14,463,750 | https://en.wikipedia.org/wiki/Stress%20field | A stress field is the distribution of internal forces in a body that balance a given set of external forces. Stress fields are widely used in fluid dynamics and materials science.
A stress field can be pictured as the stress created by adding an extra half-plane of atoms to a crystal, i.e. by an edge dislocation. The bonds are clearly stretched around the location of the dislocation, and this stretching causes the stress field to form. Atomic bonds farther and farther away from the dislocation centre are less and less stretched, which is why the stress field dissipates as the distance from the dislocation centre increases. Each dislocation within the material has a stress field associated with it. These stress fields form as the material accommodates and dissipates the mechanical energy exerted on it. By convention, dislocations are labelled as either positive or negative depending on whether the stress field of the dislocation is mostly compressive or tensile.
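As a concrete illustration of such a field, the sketch below evaluates the standard isotropic linear-elasticity expressions for the stress around a straight edge dislocation (Burgers vector b along x, line along z). These are textbook formulas rather than anything stated in the article, and the default material constants (roughly aluminium-like) are illustrative assumptions.

```python
import numpy as np

def edge_dislocation_stress(x, y, b=2.86e-10, G=26e9, nu=0.33):
    """Return (sigma_xx, sigma_yy, sigma_xy) in Pa at position (x, y) metres
    from the dislocation line, using D = G*b / (2*pi*(1 - nu))."""
    D = G * b / (2.0 * np.pi * (1.0 - nu))
    r2 = x**2 + y**2
    sigma_xx = -D * y * (3.0 * x**2 + y**2) / r2**2
    sigma_yy = D * y * (x**2 - y**2) / r2**2
    sigma_xy = D * x * (x**2 - y**2) / r2**2
    return sigma_xx, sigma_yy, sigma_xy

# The field falls off as 1/r and changes sign across the slip plane (y = 0):
# compressive above the extra half-plane, tensile below, matching the
# positive/negative labelling described in the text.
print(edge_dislocation_stress(5e-9, 5e-9))
```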
By modelling dislocations and their stress fields as either positive (compressive field) or negative (tensile field) charges, we can understand how dislocations interact with each other in the lattice. If two like fields come into contact with one another they repel one another; if two opposing fields come into contact with one another they attract one another. These two interactions both strengthen the material, in different ways. If two like-charged fields come into contact and are confined to a particular region, extra force is needed to overcome the repulsion and move the dislocations past one another. If two oppositely charged fields come into contact with one another they merge to form a jog. A jog can be modelled as a potential well that traps dislocations, so extra force is needed to pull the dislocations apart. Since dislocation motion is the primary mechanism behind plastic deformation, increasing the stress required to move dislocations directly increases the yield strength of the material.
The theory of stress fields can be applied to various strengthening mechanisms for materials. Stress fields can be created by adding atoms of a different size to the lattice (solute strengthening). If a smaller atom is added to the lattice, a tensile stress field is created: the atomic bonds are longer due to the smaller radius of the solute atom. Similarly, if a larger atom is added to the lattice, a compressive stress field is created: the atomic bonds are shorter due to the larger radius of the solute atom. The stress fields created by adding solute atoms form the basis of the material strengthening process that occurs in alloys.
Further reading
Arno Zang, Ove Stephansson, Stress Field of the Earth's Crust, Springer, 2010. Chapter 1, Introduction, page 1
Classical mechanics
Materials science | Stress field | [
"Physics",
"Materials_science",
"Engineering"
] | 582 | [
"Applied and interdisciplinary physics",
"Classical mechanics",
"Materials science",
"Mechanics",
"nan"
] |
14,464,469 | https://en.wikipedia.org/wiki/Outline%20of%20black%20holes | The following outline is provided as an overview of and topical guide to black holes:
Black hole – mathematically defined region of spacetime exhibiting such a strong gravitational pull that no particle or electromagnetic radiation can escape from inside it. The theory of general relativity predicts that a sufficiently compact mass can deform spacetime to form a black hole. The boundary of the region from which no escape is possible is called the event horizon. Although crossing the event horizon has enormous effect on the fate of the object crossing it, it appears to have no locally detectable features. In many ways a black hole acts like an ideal black body, as it reflects no light. Moreover, quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is on the order of billionths of a kelvin for black holes of stellar mass, making it essentially impossible to observe.
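As a quick numerical check of the scales mentioned above, the sketch below evaluates the standard formulas for the Schwarzschild radius, r_s = 2GM/c², and the Hawking temperature, T_H = ħc³/(8πGMk_B). The one-solar-mass example and rounded constants are illustrative choices, not values given in this outline.

```python
import math

G, c = 6.674e-11, 2.998e8        # SI units
hbar, k_B = 1.055e-34, 1.381e-23
M_sun = 1.989e30                 # kg

def schwarzschild_radius(M):
    """Radius of the event horizon of a non-rotating black hole, in metres."""
    return 2 * G * M / c**2

def hawking_temperature(M):
    """Black-body temperature of the Hawking radiation, in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

M = M_sun
print(schwarzschild_radius(M))   # ~2.95e3 m
print(hawking_temperature(M))    # ~6.2e-8 K, i.e. tens of billionths of a kelvin,
                                 # consistent with the statement above
```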
What type of thing is a black hole?
A black hole can be described as all of the following:
Astronomical object
Black body
Collapsed star
Types of black holes
Schwarzschild metric – In Einstein's theory of general relativity, the Schwarzschild solution, named after Karl Schwarzschild, describes the gravitational field outside a spherical, uncharged, non-rotating mass such as a star, planet, or black hole.
Rotating black hole – black hole that possesses spin angular momentum.
Charged black hole – black hole that possesses electric charge.
Virtual black hole – black hole that exists temporarily as a result of a quantum fluctuation of spacetime.
Types of black holes, by size
Micro black hole – predicted as tiny black holes, also called quantum mechanical black holes, or mini black holes, for which quantum mechanical effects play an important role. These could potentially have arisen as primordial black holes.
Extremal black hole – black hole with the minimal possible mass that can be compatible with a given charge and angular momentum.
Black hole electron – if there were a black hole with the same mass and charge as an electron, it would share many of the properties of the electron including the magnetic moment and Compton wavelength.
Stellar black hole – black hole formed by the gravitational collapse of a massive star. They have masses ranging from about three to several tens of solar masses.
Intermediate-mass black hole – black hole whose mass is significantly more than stellar black holes yet far less than supermassive black holes.
Supermassive black hole – largest type of black hole in a galaxy, on the order of hundreds of thousands to billions of solar masses.
Quasar – very energetic and distant active galactic nucleus.
Active galactic nucleus – compact region at the centre of a galaxy that has a much higher than normal luminosity over at least some portion, and possibly all, of the electromagnetic spectrum.
Blazar – very compact quasar associated with a presumed supermassive black hole at the center of an active, giant elliptical galaxy.
Specific black holes
List of black holes – incomplete list of black holes organized by size; some items in this list are galaxies or star clusters that are believed to be organized around a black hole.
Black hole exploration
Rossi X-ray Timing Explorer – satellite that observes the time structure of astronomical X-ray sources, named after Bruno Rossi.
Formation of black holes
Stellar evolution – process by which a star undergoes a sequence of radical changes during its lifetime.
Gravitational collapse – inward fall of a body due to the influence of its own gravity.
Neutron star – type of stellar remnant that can result from the gravitational collapse of a massive star during a Type II, Type Ib or Type Ic supernova event.
Compact star – white dwarfs, neutron stars, other exotic dense stars, and black holes.
Quark star – hypothetical type of exotic star composed of quark matter, or strange matter.
Exotic star – compact star composed of something other than electrons, protons, and neutrons balanced against gravitational collapse by degeneracy pressure or other quantum properties.
Tolman–Oppenheimer–Volkoff limit – upper bound to the mass of stars composed of neutron-degenerate matter.
White dwarf – also called a degenerate dwarf, is a small star composed mostly of electron-degenerate matter.
Supernova – stellar explosion that is more energetic than a nova.
Hypernova – also known as a Type Ic Supernova, refers to an immensely large star that collapses at the end of its lifespan.
Gamma-ray burst – flashes of gamma rays associated with extremely energetic explosions that have been observed in distant galaxies.
Properties of black holes
Accretion disk – structure (often a circumstellar disk) formed by diffused material in orbital motion around a massive central body, typically a star. Accretion disks of black holes radiate in the X-ray part of the spectrum.
Black hole thermodynamics – area of study that seeks to reconcile the laws of thermodynamics with the existence of black hole event horizons.
Schwarzschild radius – distance from the center of an object such that, if all the mass of the object were compressed within that sphere, the escape speed from the surface would equal the speed of light.
M–sigma relation – empirical correlation between the stellar velocity dispersion of a galaxy bulge and the mass M of the supermassive black hole at its center.
Event horizon – boundary in spacetime beyond which events cannot affect an outside observer.
Quasi-periodic oscillation – manner in which the X-ray light from an astronomical object flickers about certain frequencies.
Photon sphere – spherical region of space where gravity is strong enough that photons are forced to travel in orbits.
Ergosphere – region located outside a rotating black hole.
Hawking radiation – black-body radiation that is predicted to be emitted by black holes, due to quantum effects near the event horizon.
Penrose process – process theorised by Roger Penrose wherein energy can be extracted from a rotating black hole.
Bondi accretion – spherical accretion onto an object.
Spaghettification – vertical stretching and horizontal compression of objects into long thin shapes in a very strong gravitational field, and is caused by extreme tidal forces.
Gravitational lens – distribution of matter between a distant source and an observer, that is capable of bending the light from the source, as it travels towards the observer.
History of black holes
History of black holes
Timeline of black hole physics – Timeline of black hole physics
John Michell – geologist who first proposed the idea of "dark stars" in 1783
Dark star
Pierre-Simon Laplace – early mathematical theorist (1796) of the idea of black holes
Albert Einstein – in 1915, arrived at the theory of general relativity
Karl Schwarzschild – described the gravitational field of a point mass in 1915
Subrahmanyan Chandrasekhar – in 1931, using special relativity, postulated that a non-rotating body of electron-degenerate matter above a certain limiting mass (now called the Chandrasekhar limit at 1.4 solar masses) has no stable solutions.
Tolman–Oppenheimer–Volkoff limit, a higher limit than the Chandrasekhar limit above which neutron stars would certainly collapse further, was predicted in 1939, along with a description of the mechanism by which a black hole could be produced.
David Finkelstein – identified the Schwarzschild surface as an event horizon
Roy Kerr – In 1963, found the exact solution for a rotating black hole
Stephen Hawking and Roger Penrose show that global singularities can occur and black holes are not a mathematical artifact in the late 1960s.
Cygnus X-1, discovered in 1964, was the first astrophysical object commonly accepted to be a black hole after further observations in the early 1970s.
James Bardeen and Jacob Bekenstein formulate black hole thermodynamics alongside Hawking and others in the early 1970s.
Hawking predicts Hawking radiation in 1974 as a consequence of black hole thermodynamics.
The LIGO Scientific Collaboration announces the first detection of a black hole merger via gravitational wave observations on February 11, 2016.
The Event Horizon Telescope observes the supermassive black hole in Messier 87's galactic center in 2017, leading to the first direct image of a black hole being published on April 10, 2019.
Models of black holes
Gravitational singularity – or spacetime singularity is a location where the quantities that are used to measure the gravitational field become infinite in a way that does not depend on the coordinate system.
Penrose–Hawking singularity theorems – set of results in general relativity which attempt to answer the question of when gravitation produces singularities.
Primordial black hole – hypothetical type of black hole that is formed not by the gravitational collapse of a large star but by the extreme density of matter present during the universe's early expansion.
Gravastar – object hypothesized in astrophysics as an alternative to the black hole theory by Pawel Mazur and Emil Mottola.
Dark star (Newtonian mechanics) – theoretical object compatible with Newtonian mechanics that, due to its large mass, has a surface escape velocity that equals or exceeds the speed of light.
Dark-energy star
Black star (semiclassical gravity) – gravitational object composed of matter.
Magnetospheric eternally collapsing object – proposed alternatives to black holes advocated by Darryl Leiter and Stanley Robertson.
Fuzzball (string theory) – theorized by some superstring theory scientists to be the true quantum description of black holes.
White hole – hypothetical region of spacetime which cannot be entered from the outside, but from which matter and light have the ability to escape.
Naked singularity – gravitational singularity without an event horizon.
Ring singularity – describes the altering gravitational singularity of a rotating black hole, or a Kerr black hole, so that the gravitational singularity becomes shaped like a ring.
Immirzi parameter – numerical coefficient appearing in loop quantum gravity, a nonperturbative theory of quantum gravity.
Membrane paradigm – useful "toy model" method or "engineering approach" for visualising and calculating the effects predicted by quantum mechanics for the exterior physics of black holes, without using quantum-mechanical principles or calculations.
Kugelblitz (astrophysics) – concentration of light so intense that it forms an event horizon and becomes self-trapped: according to general relativity, if enough radiation is aimed into a region, the concentration of energy can warp spacetime enough for the region to become a black hole.
Wormhole – hypothetical topological feature of spacetime that would be, fundamentally, a "shortcut" through spacetime.
Quasi-star – hypothetical type of extremely massive star that may have existed very early in the history of the Universe.
Black hole neural network
Issues pertaining to black holes
No-hair theorem – postulates that all black hole solutions of the Einstein-Maxwell equations of gravitation and electromagnetism in general relativity can be completely characterized by only three externally observable classical parameters: mass, electric charge, and angular momentum.
Black hole information paradox – results from the combination of quantum mechanics and general relativity.
Cosmic censorship hypothesis – two mathematical conjectures about the structure of singularities arising in general relativity.
Nonsingular black hole models – mathematical theory of black holes that avoids certain theoretical problems with the standard black hole model, including information loss and the unobservable nature of the black hole event horizon.
Holographic principle – property of quantum gravity and string theories which states that the description of a volume of space can be thought of as encoded on a boundary to the region—preferably a light-like boundary like a gravitational horizon.
Black hole complementarity – conjectured solution to the black hole information paradox, proposed by Leonard Susskind and Gerard 't Hooft.
Black hole metrics
Schwarzschild metric – describes the gravitational field outside a spherical, uncharged, non-rotating mass such as a star, planet, or black hole.
Kerr metric – describes the geometry of empty spacetime around an uncharged, rotating black hole (axially symmetric with an event horizon which is topologically a sphere)
Reissner–Nordström metric – static solution to the Einstein-Maxwell field equations, which corresponds to the gravitational field of a charged, non-rotating, spherically symmetric body of mass M.
Kerr-Newman metric – solution of the Einstein–Maxwell equations in general relativity, describing the spacetime geometry in the region surrounding a charged, rotating mass.
Astronomical objects including a black hole
Hypercompact stellar system – dense cluster of stars around a supermassive black hole that has been ejected from the centre of its host galaxy.
Persons influential in black hole research
Stephen Hawking
Jacob Bekenstein - for the foundation of black hole thermodynamics and the elucidation of the relation between entropy and the area of a black hole's event horizon.
Karl Schwarzschild - found a solution to the equations of general relativity that characterizes a black hole.
J. Robert Oppenheimer - for the discovery of the Tolman–Oppenheimer–Volkoff limit and his work with Hartland Snyder showing how a black hole could develop.
Roger Penrose - showed, alongside Hawking, that global singularities can exist.
Albert Einstein - arrived at the theory of general relativity; published a paper in 1939 arguing black holes cannot actually exist.
See also
Outline of astronomy
Outline of space science
References
External links
Stanford Encyclopedia of Philosophy: "Singularities and Black Holes" by Erik Curiel and Peter Bokulich.
Black Holes: Gravity's Relentless Pull—Interactive multimedia Web site about the physics and astronomy of black holes from the Space Telescope Science Institute
Frequently Asked Questions (FAQs) on Black Holes
"Schwarzschild Geometry"
Videos
16-year-long study tracks stars orbiting Milky Way black hole
Movie of Black Hole Candidate from Max Planck Institute
Nature.com 2015-04-20 3D simulations of colliding black holes
Computer visualisation of the signal detected by LIGO
Two Black Holes Merge into One (based upon the signal GW150914
Black hole
Black holes, Outline | Outline of black holes | [
"Physics",
"Astronomy"
] | 2,872 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Galaxies",
"Unsolved problems in physics",
"Astrophysics",
"Density",
"Theory of relativity",
"Stellar phenomena",
"Astronomical objects"
] |
14,464,580 | https://en.wikipedia.org/wiki/Mobile%20TV%20format | Mobile TV Format is a colloquial and collective name for technology standards set out for broadcasting TV services to mobile devices, mostly mobile handsets for now. Currently, there are four prevalent formats known as DMB, DVB-H, OneSeg and MediaFLO. As of December 2007, ITU approved T-DMB, DVB-H, OneSeg and MediaFLO as the global standard for real-time mobile video and audio broadcasting. Thus far, none of the four formats has secured a dominant position in the global market, except in their respective home markets.
History
Samsung and LG were the first to tout new-generation mobile phones that would allow users to watch live multi-channel TV on the move, at the International Broadcasting Convention (IBC) in Amsterdam in September 2005. South Korea's top mobile operator SK Telecom launched a satellite pay-TV service to mobile phones in South Korea in May 2005. The Korean handset makers' push into the European mobile TV market was soon to be met by strong competition, particularly from Nokia, while the Korean handset makers were looking to the 2006 World Cup in Germany as a crucial launch pad.
Market Developments
South Korea's DMB made a head start in May 2005, but the European Union advocates a single standard and has officially endorsed Nokia's DVB-H. In the US, however, Qualcomm's MediaFLO has the upper hand for now. Japan is developing its own standard. Journalists and market analysts currently take widely split views about the future course of the mobile TV format wars.
"We see DVB-H winning out over all, but there will also be limited space for some of the other technologies," said Adrian Drozd, a London-based senior analyst with Datamonitor. "DMB has a head start, but from 2007 onward DVB-H should get momentum and become the dominant technology."
Which format will ultimately win is less important than when mobile TV will grow out of its infancy to become a prime pastime for mobile handset users globally. For instance, the five-channel Virgin Mobile TV (VMTV) was launched in the UK in October 2006, based on DMB technology and backed by a £2.5m advertising campaign. But it failed to take off with customers, and in July 2007 VMTV decided to dump its mobile TV service.
Speaking about Virgin's decision to dump its mobile TV service, Bruce Renny, marketing director at mobile TV group ROK Entertainment Group, said expectations of the commercial take-up of broadcast mobile TV had been "over-optimistic, and the demise of Virgin's mobile TV service reflects that".
"After all, why pay a subscription fee to receive the same TV content on your mobile that you already get at home? Particularly when people don't watch TV on mobiles for more than a few minutes at a time.
"Most mobile TV viewing is for just a few minutes. To be commercially successful, you have to provide a combination of live news, sports updates and video-on-demand made-for-mobile content which is instantly engaging. Simply broadcasting linear TV to mobiles is not the answer," he said.
In contrast to the above-mentioned failure, however, South Korea seems to be on the right track. As of February 2006, Satellite DMB (S-DMB) subscribers came to 440,000 since the service launch in May 2005, while the number of Terrestrial DMB (T-DMB) subscribers reached 110,000 since the service launch in December 2005. As of December 2007, South Korea is the only country where T-DMB is widely deployed. More than 7 million handsets, laptops, car navigators and other gadgets equipped with T-DMB receivers are in use. USB-type receivers are being sold at around 50,000 won ($50). It is being tested in 11 other nations including Germany, Italy, France, Britain and China. Nevertheless, neither T-DMB nor S-DMB has garnered decent profits so far.
References
Mobile content | Mobile TV format | [
"Technology"
] | 851 | [
"Mobile content"
] |
14,464,979 | https://en.wikipedia.org/wiki/Outline%20of%20cell%20biology | The following outline is provided as an overview of and topical guide to cell biology:
Cell biology – A branch of biology that includes study of cells regarding their physiological properties, structure, and function; the organelles they contain; interactions with their environment; and their life cycle, division, and death. This is done both on a microscopic and molecular level. Cell biology research extends to both the great diversities of single-celled organisms like bacteria and the complex specialized cells in multicellular organisms like humans. Formerly, the field was called cytology (from Greek κύτος, kytos, "a hollow;" and -λογία, -logia).
A branch of science
Cell biology can be described as all of the following:
Branch of science – A systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe.
Branch of natural science – The branch of science concerned with the description, prediction, and understanding of natural phenomena based on observational and empirical evidence. Validity, accuracy, and social mechanisms ensuring quality control, such as peer review and repeatability of findings, are among the criteria and methods used for this purpose.
Branch of biology – The study of life and living organisms, including their structure, function, growth, evolution, distribution, and taxonomy.
Academic discipline – Focused study in one academic field or profession. A discipline incorporates expertise, people, projects, communities, challenges, studies, inquiry, and research areas that are strongly associated with a given discipline.
Essence of cell biology
Cell – The structural and functional unit of all known living organisms. It is the smallest unit of an organism that is classified as living, and also known as the building block of life. Cell comes from the Latin cellula, meaning, a small room. Robert Hooke first coined the term in his book, Micrographia, where he compared the structure of cork cells viewed through his microscope to that of the small rooms (or monks' "cells") of a monastery.
Cell theory – The scientific theory which states that all organisms are composed of one or more cells. Vital functions of an organism occur within cells. All cells come from preexisting cells and contain the hereditary information necessary for regulating cell functions and for transmitting information to the next generation of cells.
Cell biology – (formerly cytology) The study of cells.
Cell division – The process of one parent cell separating into two or more daughter cells.
Endosymbiotic theory – The evolutionary theory that certain eukaryotic organelles originated as separate prokaryotic organisms which were taken inside the cell as endosymbionts.
Cellular respiration – The metabolic reactions and processes that take place in a cell or across the cell membrane to convert biochemical energy from fuel molecules into adenosine triphosphate (ATP) and then release the cell's waste products.
Lipid bilayer – A membrane composed of two layers of lipid molecules (usually phospholipids). The lipid bilayer is a critical component of the cell membrane.
Aspects of cells
Homeostasis – The property of either an open system or a closed system, especially a living organism, that regulates its internal environment so as to maintain a stable, constant condition.
Life – A condition of growth through metabolism, reproduction, and the power of adaptation to environment through changes originating internally.
Microscopic – The scale of objects, like cells, that are too small to be seen easily by the naked eye and which require a lens or microscope to see them clearly.
Unicellular – Organisms which are composed of only one cell.
Multicellular – Organisms consisting of more than one cell and having differentiated cells that perform specialized functions.
Tissues – A collection of interconnected cells that perform a similar function within an organism.
Cellular differentiation – A concept in developmental biology whereby less specialized cells become a more specialized cell type in multicellular organisms.
Types of cells
Cell type – Distinct morphological or functional form of cell. When a cell switches state from one cell type to another, it undergoes cellular differentiation. There are at least several hundred distinct cell types in the adult human body.
By organism
Eukaryote – Organisms whose cells are organized into complex structures enclosed within membranes, including plants, animals, fungi, and protists.
Animal cell – Eukaryotic cells belonging to kingdom Animalia, characteristically having no cell wall or chloroplasts.
Plant cell – Eukaryotic cells belonging to kingdom Plantae and having chloroplasts, cellulose cell walls, and large central vacuoles.
Fungal hypha – The basic cellular unit of organisms in kingdom fungi. Typically tubular, multinucleated, and with a chitinous cell wall.
Protist – A highly variable kingdom of eukaryotic organisms which are mostly unicellular and not plants, animals, or fungi.
Prokaryote – A group of organisms whose cells lack a membrane-bound cell nucleus, or any other membrane-bound organelles, including bacteria.
Bacterial cells – A prokaryotic cell belonging to the mostly unicellular Domain Bacteria.
Archaea cell – A cell belonging to the prokaryotic and single-celled microorganisms in Domain Archaea.
By function
Gamete – A haploid reproductive cell. Sperm and ova are gametes. Gametes fuse with another gamete during fertilization (conception) in organisms that reproduce sexually.
Sperm – Male reproductive cell (a gamete).
Ovum – Female reproductive cell (a gamete).
Zygote – A cell that is the result of fertilization (the fusing of two gametes).
Egg – The zygote of most birds and reptiles, resulting from fertilization of the ovum. The largest existing single cells currently known are (fertilized) eggs.
Meristemic cell – Undifferentiated plants cells analogous to animal stem cells.
Stem cell – Undifferentiated cells found in most multi-cellular organisms which are capable of retaining the ability to reinvigorate themselves through mitotic cell division and can differentiate into a diverse range of specialized cell types.
Germ cell – Gametes and gonocytes. Germ cells should not be confused with "germs" (pathogens).
Somatic cell – Any cells forming the body of an organism, as opposed to germline cells.
more...
General cellular anatomy
Cellular compartment – All closed parts within a cell whose lumen is usually surrounded by a single or double lipid layer membrane.
Organelles – A specialized subunit within a cell that has a specific function, and is separately enclosed within its own lipid membrane or traditionally any subcellular functional unit.
Organelles
Endomembrane system
Endoplasmic reticulum – An organelle composed of an interconnected network of tubules, vesicles and cisternae.
Membrane bound polyribosome – Polyribosomes that are attached to a cell's endoplasmic reticulum.
Smooth endoplasmic reticulum – A section of the endoplasmic reticulum to which ribosomes are not attached. It has functions in several metabolic processes, including synthesis of lipids, metabolism of carbohydrates, regulation of calcium concentration, drug detoxification, and attachment of receptors on cell membrane proteins.
Rough endoplasmic reticulum – A section of the endoplasmic reticulum to which ribosomes, the protein-manufacturing organelles, are attached, giving it a "rough" appearance (hence its name). Its primary function is the synthesis of enzymes and other proteins.
Vesicle – A relatively small intracellular, membrane-enclosed sac that stores or transports substances.
Golgi apparatus – A eukaryotic organelle that processes and packages macromolecules such as proteins and lipids that are synthesized by the cell.
Nuclear envelope – It is the double lipid bilayer membrane which surrounds the genetic material and nucleolus in eukaryotic cells. The nuclear membrane consists of two lipid bilayers:
Inner nuclear membrane
Outer nuclear membrane
Perinuclear space – Space between the nuclear membranes, a region contiguous with the lumen (inside) of the endoplasmic reticulum. The nuclear membrane has many small holes called nuclear pores that allow material to move in and out of the nucleus.
Lysosomes – A membrane-bound cell organelle found in most animal cells (they are absent in red blood cells). Structurally and chemically, they are spherical vesicles containing hydrolytic enzymes capable of breaking down virtually all kinds of biomolecules, including proteins, nucleic acids, carbohydrates, lipids, and cellular debris. Lysosomes act as the waste disposal system of the cell by digesting unwanted materials in the cytoplasm, both from outside of the cell and obsolete components inside the cell. For this function they are popularly referred to as the "suicide bags" or "suicide sacs" of the cell.
Endosomes – It is a membrane-bounded compartment inside eukaryotic cells. It is a compartment of the endocytic membrane transport pathway from the plasma membrane to the lysosome. Endosomes represent a major sorting compartment of the endomembrane system in cells.
Cell nucleus – A membrane-enclosed organelle found in most eukaryotic cells. It contains most of the cell's genetic material, organized as multiple long linear DNA molecules in complex with a large variety of proteins, such as histones, to form chromosomes.
Nucleoplasm – Viscous fluid, inside the nuclear envelope, similar to cytoplasm.
Nucleolus – Where ribosomes are assembled from proteins and RNA.
Chromatin – All DNA and its associated proteins in the nucleus.
Chromosome – A single DNA molecule with attached proteins.
Energy creators
Mitochondrion – A membrane-enclosed organelle found in most eukaryotic cells. Often called "cellular power plants", mitochondria generate most of cells' supply of adenosine triphosphate (ATP), the body's main source of energy.
Chloroplast – An organelles found in plant cells and eukaryotic algae that conduct photosynthesis.
Centrosome – The main microtubule organizing center of animal cells as well as a regulator of cell-cycle progression.
Lysosome – The organelles that contain digestive enzymes (acid hydrolases). They digest excess or worn-out organelles, food particles, and engulfed viruses or bacteria.
Peroxisome – A ubiquitous organelle in eukaryotes that participates in the metabolism of fatty acids and other metabolites. Peroxisomes have enzymes that rid the cell of toxic peroxides.
Ribosome – It is a large and complex molecular machine, found within all living cells, that serves as the site of biological protein synthesis (translation). Ribosomes build proteins from the genetic instructions held within messenger RNA.
Symbiosome – A temporary organelle that houses a nitrogen-fixing endosymbiont.
Vacuole – A membrane-bound compartments within some eukaryotic cells that can serve a variety of secretory, excretory, and storage functions.
Structures
Cell membrane – (also called the plasma membrane, plasmalemma or "phospholipid bilayer") A semipermeable lipid bilayer found in all cells; it contains a wide array of functional macromolecules.
Cell wall – A fairly rigid layer surrounding a cell, located external to the cell membrane, which provides the cell with structural support, protection, and acts as a filtering mechanism.
Centriole – A barrel shaped microtubule structure found in most eukaryotic cells other than those of plants and fungi.
Cluster of differentiation – Cell surface molecules present initially on white blood cells but found in almost any kind of cell of the body, providing targets for immunophenotyping of cells. Physiologically, CD molecules can act in numerous ways, often acting as receptors or ligands (the molecule that activates a receptor) important to the cell. A signal cascade is usually initiated, altering the behavior of the cell (see cell signaling).
Cytoskeleton – A cellular "scaffolding" or "skeleton" contained within the cytoplasm that is composed of three types of fibers: microfilaments, intermediate filaments, and microtubules.
Cytoplasm – A gelatinous, semi-transparent fluid that fills most cells, it includes all cytosol, organelles and cytoplasmic inclusions.
Cytosol – It is the internal fluid of the cell, and where a portion of cell metabolism occurs.
Inclusions – Chemical substances found suspended directly in the cytosol.
Photosystem – They are functional and structural units of protein complexes involved in photosynthesis that together carry out the primary photochemistry of photosynthesis: the absorption of light and the transfer of energy and electrons. They are found in the thylakoid membranes of plants, algae and cyanobacteria (in plants and algae these are located in the chloroplasts), or in the cytoplasmic membrane of photosynthetic bacteria.
Plasmid – An extrachromosomal DNA molecule separate from the chromosomal DNA and capable of replicating independently of it; it is typically ring-shaped and found in bacteria.
Spindle fiber – The structure that separates the chromosomes into the daughter cells during cell division.
Stroma – The colorless fluid surrounding the grana within the chloroplast. Within the stroma are the grana (stacks of thylakoids), the sub-organelles where photosynthesis is commenced before the chemical changes are completed in the stroma.
Thylakoid membrane – It is the site of the light-dependent reactions of photosynthesis with the photosynthetic pigments embedded directly in the membrane.
Molecules
DNA – Deoxyribonucleic acid (DNA) is a nucleic acid that contains the genetic instructions used in the development and functioning of all known living organisms and some viruses.
DNA helicase
DNA polymerase
DNA ligase
RNA – Ribonucleic acid is a nucleic acid made from a long chain of nucleotides; in a cell it is typically transcribed from DNA.
RNA polymerase
mRNA
rRNA
tRNA
Proteins – Biochemical compounds consisting of one or more polypeptides typically folded into a globular or fibrous form, facilitating a biological function.
List of proteins
Enzymes – Proteins that catalyze (i.e. accelerate) the rates of specific chemical reactions within cells.
Pigments
Chlorophyll – It is a term used for several closely related green pigments found in cyanobacteria and the chloroplasts of algae and plants. Chlorophyll is an extremely important biomolecule, critical in photosynthesis, which allows plants to absorb energy from light.
Carotenoid – They are organic pigments that are found in the chloroplasts and chromoplasts of plants and some other photosynthetic organisms, including some bacteria and some fungi. Carotenoids can be produced from fats and other basic organic metabolic building blocks by all these organisms. There are over 600 known carotenoids; they are split into two classes, xanthophylls (which contain oxygen) and carotenes (which are purely hydrocarbons, and contain no oxygen).
Biological activity of cells
Cellular metabolism
Cellular respiration – The set of metabolic reactions and processes by which cells convert nutrients into adenosine triphosphate (ATP).
Glycolysis – The foundational process of both aerobic and anaerobic respiration, glycolysis is the archetype of universal metabolic processes known and occurring (with variations) in many types of cells in nearly all organisms.
Pyruvate dehydrogenase – Enzyme in the eponymous complex linking glycolysis and the subsequent citric acid cycle.
Citric acid cycle – Also known as the Krebs cycle, an important aerobic metabolic pathway.
Electron transport chain – A biochemical process in which electron carriers (such as NADH and FADH2) donate electrons through a series of mediating biochemical reactions that produce adenosine triphosphate (ATP), a major energy intermediate in living organisms. It typically occurs across a cellular membrane.
Photosynthesis – The conversion of light energy into chemical energy by living organisms.
Light-dependent reactions – A series of biochemical reactions driven by light that take place across thylakoid membrane to provide for the Calvin cycle reactions.
Calvin cycle – A series of anabolic biochemical reactions that takes place in the stroma of chloroplasts in photosynthetic organisms. It is one of the light-independent reactions or dark reactions.
Electron transport chain – A biochemical process in which electron carriers (such as NADH and FADH2) donate electrons through a series of mediating biochemical reactions that produce adenosine triphosphate (ATP), a major energy intermediate in living organisms. It typically occurs across a cellular membrane.
Metabolic pathway – A series of chemical reactions occurring within a cell which ultimately leads to sequestering of energy.
Alcoholic fermentation – The anaerobic metabolic process by which sugars such as glucose, fructose, and sucrose are converted into cellular energy, producing ethanol and carbon dioxide as metabolic waste products.
Lactic acid fermentation – An anaerobic metabolic process by which sugars such as glucose, fructose, and sucrose, are converted into cellular energy and the metabolic waste product lactic acid.
Chemosynthesis – The biological conversion of one or more carbon molecules (usually carbon dioxide or methane) and nutrients into organic matter using the oxidation of inorganic molecules (e.g. hydrogen gas, hydrogen sulfide) or methane as a source of energy, rather than sunlight, as in photosynthesis.
Important molecules:
ADP – Adenosine diphosphate (ADP) (Adenosine pyrophosphate (APP)) is an important organic compound in metabolism and is essential to the flow of energy in living cells. A molecule of ADP consists of three important structural components: a sugar backbone attached to a molecule of adenine and two phosphate groups bonded to the 5' carbon atom of the ribose.
ATP – A multifunctional nucleotide that is most important as a "molecular currency" of intracellular energy transfer.
NADH – A coenzyme found in all living cells which serves as an important electron carrier in metabolic processes.
Pyruvate – It is the "energy-molecule" output of the metabolism of glucose known as glycolysis.
Glucose – An important simple sugar used by cells as a source of energy and as a metabolic intermediate. Glucose is one of the main products of photosynthesis and starts cellular respiration in both prokaryotes and eukaryotes.
Cellular reproduction
Cell cycle – The series of events that take place in a eukaryotic cell leading to its replication.
Interphase – The stages of the cell cycle that prepare the cell for division.
Mitosis – In eukaryotes, the process of division of the nucleus and genetic material.
Prophase – The stage of mitosis in which the chromatin condenses into a highly ordered structure called chromosomes and the nuclear membrane begins to break up.
Metaphase – The stage of mitosis in which condensed chromosomes, carrying genetic information, align in the middle of the cell before being separated into each of the two daughter cells.
Anaphase – The stage of mitosis when chromatids (identical copies of chromosomes) separate as they are pulled towards opposite poles within the cell.
Telophase – The stage of mitosis when the nucleus reforms and chromosomes unravel into longer chromatin structures for reentry into interphase.
Cytokinesis – The process cells use to divide their cytoplasm and organelles.
Meiosis – The process of cell division used to create gametes in sexually reproductive eukaryotes.
Chromosomal crossover – (or crossing over) It is the exchange of genetic material between homologous chromosomes that results in recombinant chromosomes during sexual reproduction. It is one of the final phases of genetic recombination, which occurs in the pachytene stage of prophase I of meiosis during a process called synapsis.
Binary fission – The process of cell division used by prokaryotes.
Transcription and Translation
Transcription – Fundamental process of gene expression through turning DNA segment into a functional unit of RNA.
Translation – It is the process in which cellular ribosomes create proteins.
mRNA
rRNA
tRNA
Introns
Exons
Miscellaneous cellular processes
Cell transport
Osmosis – The diffusion of water through a cell wall or membrane or any partially permeable barrier from a solution of low solute concentration to a solution with high solute concentration.
Passive transport – Movement of molecules into and out of cells without the input of cellular energy.
Active transport – Movement of molecules into and out of cells with the input of cellular energy.
Bulk transport
Endocytosis – It is a form of active transport in which a cell transports molecules (such as proteins) into the cell by engulfing them in an energy-using process.
Exocytosis – It is a form of active transport in which a cell transports molecules (such as proteins) out of the cell by expelling them.
Phagocytosis – The process a cell uses when engulfing solid particles into the cell membrane to form an internal phagosome, or "food vacuole."
Tonicity – This is a measure of the effective osmotic pressure gradient (as defined by the water potential of the two solutions) of two solutions separated by a semipermeable membrane.
Programmed cell death – The death of a cell in any form, mediated by an intracellular program (ex. apoptosis or autophagy).
Apoptosis – A series of biochemical events leading to a characteristic cell morphology and death, which is not caused by damage to the cell.
Autophagy – The process whereby cells "eat" their own internal components or microbial invaders.
Cell senescence – The phenomenon where normal diploid differentiated cells lose the ability to divide after about 50 cell divisions.
Cell signaling – Regulation of cell behavior by signals from outside.
Cell adhesion – Holding together cells and tissues.
Motility and Cell migration – The various means for a cell to move, guided by cues in its environment.
Cytoplasmic streaming – Flowing of cytoplasm in eukaryotic cells.
DNA repair – The process used by cells to fix damaged DNA sections.
Applied cell biology concepts
Cell therapy – The process of introducing new cells into a tissue in order to treat a disease.
Cloning – Processes used to create copies of DNA fragments (molecular cloning), cells (cell cloning), or organisms.
Cell disruption – A method or process for releasing biological molecules from inside a cell.
Laboratory procedures
Bacterial conjugation – Transfer of genetic material between bacterial cells by direct cell-to-cell contact or by a bridge-like connection between two cells. Conjugation is a convenient means for transferring genetic material to a variety of targets. In laboratories, successful transfers have been reported from bacteria to yeast, plants, mammalian cells and isolated mammalian mitochondria.
Cell culture – The process by which cells are grown under controlled conditions, generally outside of their natural environment. In practice, the term "cell culture" now refers to the culturing of cells derived from multi-cellular eukaryotes, especially animal cells.
Cell disruption, and cell unroofing – Methods for releasing molecules from cells.
Cell fractionation – Separation of homogeneous sets from a larger population of cells.
Cell incubator – The device used to grow and maintain microbiological cultures or cell cultures. The incubator maintains optimal temperature, humidity and other conditions such as the carbon dioxide () and oxygen content of the atmosphere inside.
Cyto-Stain – Commercially available mix of staining dyes for polychromatic staining in histology.
Fluorescent-activated cell sorting – Specialized type of flow cytometry. It provides a method for sorting a heterogeneous mixture of biological cells into two or more containers, one cell at a time, based upon the specific light scattering and fluorescent characteristics of each cell.
Spinning – Using a special bioreactor which features an impeller, stirrer or similar device to agitate the contents (usually a mixture of cells, medium and products like proteins that can be harvested).
History of cell biology
See also Cell biologists below
History of cell biology – is intertwined with the history of biochemistry and the history of molecular biology. Other articles pertaining to the history of cell biology include:
History of cell theory, embryology and germ theory
History of biochemistry, microbiology, and molecular biology
History of the optical microscope
Timeline of microscope technology
Cell biologists
Past
Karl August Möbius – In 1884 first observed the structures that would later be called "organelles".
Bengt Lidforss – Coined the word "organells" which later became "organelle".
Robert Hooke – Coined the word "cell" after looking at cork under a microscope.
Anton van Leeuwenhoek – First observed microscopic single celled organisms in apparently clean water.
Hans Adolf Krebs – Discovered the citric acid cycle in 1937.
Konstantin Mereschkowski – Russian botanist who in 1905 described the Theory of Endosymbiosis.
Edmund Beecher Wilson – Known as America's first cellular biologist, discovered the sex chromosome arrangement in humans.
Albert Claude – Shared the Nobel Prize in 1974 "for describing the structure and function of organelles in biological cells"
Theodor Boveri – In 1888 identified the centrosome and described it as the 'special organ of cell division.'
Peter D. Mitchell – British biochemist who was awarded the 1978 Nobel Prize for Chemistry for his discovery of the chemiosmotic mechanism of ATP synthesis.
Lynn Margulis – An American biologist best known for her theory on the origin of eukaryotic organelles, and her contributions and support of the endosymbiotic theory.
Current
Günter Blobel – An American biologist who won a Nobel Prize for protein targeting in cells.
Peter Agre – An American chemist who won a Nobel Prize for discovering cellular aquaporins.
Christian de Duve – Shared the Nobel Prize in 1974 "for describing the structure and function of organelles in biological cells"
George Emil Palade – Shared the Nobel Prize in 1974 "for describing the structure and function of organelles in biological cells"
Ira Mellman – An American cell biologist who discovered endosomes.
Paul Nurse – Shared a 2001 Nobel Prize for discoveries regarding cell cycle regulation by cyclin and cyclin dependent kinases.
Leland H. Hartwell – Shared a 2001 Nobel Prize for discoveries regarding cell cycle regulation by cyclin and cyclin dependent kinases.
R. Timothy Hunt – Shared a 2001 Nobel Prize for discoveries regarding cell cycle regulation by cyclin and cyclin dependent kinases.
Closely allied sciences
Cytopathology – A branch of pathology that studies and diagnoses diseases on the cellular level. The most common use of cytopathology is the Pap smear, used to detect cervical cancer at an early treatable stage.
Genetics – The science of heredity and variation in living organisms.
Biochemistry – The study of the chemical processes in living organisms. It deals with the structure and function of cellular components, such as proteins, carbohydrates, lipids, nucleic acids, and other biomolecules.
Cytochemistry – The biochemistry of cells, especially that of the macromolecules responsible for cell structure and function.
Molecular biology – The study of biology at a molecular level, including the various systems of a cell, including the interactions between DNA, RNA and protein biosynthesis and learning how these interactions are regulated.
Developmental biology – The study of the process by which organisms grow and develop, including the genetic control of cell growth, differentiation and "morphogenesis", which is the process that gives rise to tissues, organs and anatomy.
Microbiology – The study of microorganisms, which are unicellular or cell-cluster microscopic organisms as well as viruses.
Cellular microbiology – A discipline bridging microbiology and cell biology.
See also
Outline of biology
Further reading
Young, John K. (2010). Introduction to Cell Biology.
References
Cell biology
Biology-related lists | Outline of cell biology | [
"Biology"
] | 5,941 | [
"Cell biology"
] |
14,465,687 | https://en.wikipedia.org/wiki/Oleg%20Viro | Oleg Yanovich Viro () (b. 13 May 1948, Leningrad, USSR) is a Russian mathematician in the fields of topology and algebraic geometry, most notably real algebraic geometry, tropical geometry and knot theory.
Contributions
Viro developed a "patchworking" technique in algebraic geometry, which allows real algebraic varieties to be constructed by a "cut and paste" method. Using this technique, Viro completed the isotopy classification of non-singular plane projective curves of degree 7. The patchworking technique was one of the fundamental ideas which motivated the development of tropical geometry. In topology, Viro is most known for his joint work with Vladimir Turaev, in which the Turaev-Viro invariants (relatives of the Reshetikhin-Turaev invariants) and related topological quantum field theory notions were introduced.
Education and career
Viro studied at the Leningrad State University where he received his Ph.D. degree in 1974; his advisor was Vladimir Rokhlin. Viro taught from 1973 until 1991 at Leningrad State University. Since 1986 he has been a member of the Saint Petersburg Department of the Steklov Institute of Mathematics. From 1992 to 1997, Viro was an F. B. Jones chair professor in Topology at the University of California, Riverside.
From 1994 to 2003 he was a professor at Uppsala University, Sweden. On 8 February 2007, Viro and his colleague Burglind Juhl-Jöricke were forced to resign from the university. There had been a history of conflict at the Mathematics Institute, with allegations of disagreeable behavior by several parties. A number of Swedish, European and American mathematicians protested the manner in which the two Professors of Mathematics were forced to resign. These protests include the following:
an open letter by Lennart Carleson, former president of the International Mathematical Union,
a letter by Ari Laptev, current president of the European Mathematical Society, and
a letter from M. Salah Baouendi, Arthur Jaffe, Joel Lebowitz, Elliott H. Lieb and Nicolai Reshetikhin.
As of 2009, Viro is a senior researcher at the St. Petersburg Department of the Steklov Institute of Mathematics, and a professor at Stony Brook University.
Awards and honors
Viro was an invited speaker at the International Congress of Mathematicians in 1983 (Warsaw) and the European Congress of Mathematicians in 2000 (Barcelona). He was awarded the Göran Gustafsson Prize (1997) by the Swedish government.
In 2012 he became a fellow of the American Mathematical Society.
References
External links
Oleg Viro's website
1948 births
Living people
20th-century Russian mathematicians
21st-century Russian mathematicians
Soviet mathematicians
Topologists
Algebraic geometers
University of California, Riverside faculty
Fellows of the American Mathematical Society
Stony Brook University faculty | Oleg Viro | [
"Mathematics"
] | 566 | [
"Topologists",
"Topology"
] |
14,465,804 | https://en.wikipedia.org/wiki/Papoose%20board | In the medical field a papoose board is a temporary medical stabilization board used to limit a patient's freedom of movement to decrease risk of injury while allowing safe completion of treatment. The term papoose board refers to a brand name.
It is most commonly used during dental work, venipuncture, and other medical procedures. It is also sometimes used during medical emergencies to keep an individual from moving when total sedation is not possible. It is usually used on patients as a means of temporarily and safely limiting movement and is generally more effective than holding the person down. It is mostly used on young patients and patients with special needs.
A papoose board is a cushioned board with fabric Velcro straps that can be used to help limit a patient's movement and hold them steady during the medical procedure. Sometimes oral, IV or gas sedation such as nitrous oxide will be used to calm the patient prior to or during use. Using a papoose board to temporarily and safely limit movement is often preferable to medical sedation, which presents serious potential risks, including death. As a result, restraint is preferred by some parents as an alternative to sedation, behavior management/anxiety reduction techniques, better pain management or a low-risk anxiolytic such as nitrous oxide. Informed consent from a parent or guardian is usually required before a papoose board can be used. If assent from the child is required, then in most cases, the papoose board would be prohibited as it is unlikely that a child would agree to restraint and not struggle. In some countries, the papoose board is banned and considered a serious breach of ethics (for example, the U.K.).
Use of papoose boards in dentistry
The American Academy of Pediatric Dentistry approves of partial or complete stabilization of the patient in cases when it is necessary to protect the patient, practitioner, staff, or parent from injury while providing dental care. As of 2004, 85 percent of dental programs across the U.S. teach protective stabilization as an acceptable behavioral management practice. In 2004, The Colorado Springs Gazette reported that the dental chain Small Smiles Dental Centers had used papoose boards almost 7,000 times in one 18-month period, according to Colorado state records. Michael and Edward DeRose, two of the owners of Small Smiles, said that they used papoose boards so that they could do dental work on larger numbers of children in a more rapid manner. Small Smiles dentists from other states learned the papoose board method in Colorado and began practicing it in those states. As a result, a Colorado Board of Dental Examiners-appointed committee established a new Colorado state law forbidding the usage of papoose boards for children unless a dentist has exhausted other possibilities for controlling a child's behavior, and if the dentist uses a papoose board, he or she must document why the papoose board was used in the patient's record.
Controversies
In some countries, the papoose board is banned and considered a serious breach of ethical practice. Although it is often discussed as a behavior management technique, it is simply a restraint technique, and an ethically questionable one, since it prevents any behavior from occurring that could otherwise be managed with recognized behavioral and anxiety reduction techniques.
Origins
Papoose boards were originally a wood-and-leather device used by many Native American tribes to swaddle their infants and children. Papoose boards, also known as cradle boards, are still in use in many places.
References
Medical equipment | Papoose board | [
"Biology"
] | 722 | [
"Medical equipment",
"Medical technology"
] |
14,465,871 | https://en.wikipedia.org/wiki/PowerShell | PowerShell is a task automation and configuration management program from Microsoft, consisting of a command-line shell and the associated scripting language. Initially a Windows component only, known as Windows PowerShell, it was made open-source and cross-platform on August 18, 2016, with the introduction of PowerShell Core. The former is built on the .NET Framework, the latter on .NET (previously .NET Core).
PowerShell is bundled with all currently supported Windows versions, and can also be installed on macOS and Linux. Since Windows 10 build 14971, PowerShell replaced Command Prompt (cmd.exe) and became the default command shell for File Explorer.
In PowerShell, administrative tasks are generally performed via cmdlets (pronounced command-lets), which are specialized .NET classes implementing a particular operation. These work by accessing data in different data stores, like the file system or Windows Registry, which are made available to PowerShell via providers. Third-party developers can add cmdlets and providers to PowerShell. Cmdlets may be used by scripts, which may in turn be packaged into modules. Cmdlets work in tandem with the .NET API.
PowerShell's support for .NET Remoting, WS-Management, CIM, and SSH enables administrators to perform administrative tasks on both local and remote Windows systems. PowerShell also provides a hosting API with which the PowerShell runtime can be embedded inside other applications. These applications can then use PowerShell functionality to implement certain operations, including those exposed via the graphical interface. This capability has been used by Microsoft Exchange Server 2007 to expose its management functionality as PowerShell cmdlets and providers and implement the graphical management tools as PowerShell hosts which invoke the necessary cmdlets. Other Microsoft applications including Microsoft SQL Server 2008 also expose their management interface via PowerShell cmdlets.
PowerShell includes its own extensive, console-based help (similar to man pages in Unix shells) accessible via the Get-Help cmdlet. Updated local help contents can be retrieved from the Internet via the Update-Help cmdlet. Alternatively, help from the web can be acquired on a case-by-case basis via the -online switch to Get-Help.
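For illustration, a typical help session might look like the following minimal sketch (output omitted; the cmdlet being looked up is arbitrary):
Update-Help                        # download updated help content from the Internet
Get-Help Get-Process               # display local help for a cmdlet
Get-Help Get-Process -Examples     # show only the usage examples
Get-Help Get-Process -Online       # open the web version of the help topic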
Background
The command-line interpreter (CLI) has been an inseparable part of most Microsoft operating systems. MS-DOS and Xenix relied almost exclusively on the CLI (though MS-DOS also came with a complementary graphical DOS Shell). The Windows 9x family came bundled with COMMAND.COM, the command-line environment of MS-DOS. The Windows NT and Windows CE families, however, came with a new cmd.exe that bore strong similarities to COMMAND.COM. Both environments support a few basic internal commands and a primitive scripting language (batch files), which can be used to automate various tasks. However, they cannot automate all facets of the Windows graphical user interface (GUI) because command-line equivalents of operations are limited and the scripting language is elementary.
Microsoft attempted to address some of these shortcomings by introducing the Windows Script Host in 1998 with Windows 98, and its command-line based host, cscript.exe. It integrates with the Active Script engine and allows scripts to be written in compatible languages, such as JScript and VBScript, leveraging the APIs exposed by applications via the component object model (COM). It has its own shortcomings, however: its documentation is not very accessible, and it quickly gained a reputation as a system vulnerability vector after several high-profile computer viruses exploited weaknesses in its security provisions. Different versions of Windows provided various special-purpose command-line interpreters (such as netsh and WMIC) with their own command sets but they were not interoperable. Windows Server 2003 further attempted to improve the command-line experience but scripting support was still unsatisfactory.
Kermit
By the late 1990s, Intel had come to Microsoft asking for help in making Windows, which ran on Intel CPUs, a more appropriate platform to support the development of future Intel CPUs. At the time, Intel CPU development was accomplished on Sun Microsystems computers which ran Solaris (a Unix variant) on RISC-architecture CPUs. The ability to run Intel's many KornShell automation scripts on Windows was identified as a key capability. Internally, Microsoft began an effort to create a Windows port of Korn Shell, which was code-named Kermit. Intel ultimately pivoted to a Linux-based development platform that could run on Intel CPUs, rendering the Kermit project redundant. However, with a fully funded team, Microsoft program manager Jeffrey Snover realized there was an opportunity to create a more general-purpose solution to Microsoft's problem of administrative automation.
Monad
By 2002, Microsoft had started to develop a new approach to command-line management, including a CLI called Monad (also known as Microsoft Shell or MSH). The ideas behind it were published in August 2002 in a white paper called the "Monad Manifesto" by its chief architect, Jeffrey Snover. In a 2017 interview, Snover explains the genesis of PowerShell, saying that he had been trying to make Unix tools available on Windows, which didn't work due to "core architectural difference[s] between Windows and Linux". Specifically, he noted that Linux considers everything a text file, whereas Windows considers everything an "API that returns structured data". They were fundamentally incompatible, which led him to take a different approach.
Monad was to be a new extensible CLI with a fresh design capable of automating a range of core administrative tasks. Microsoft first demonstrated Monad publicly at the Professional Development Conference in Los Angeles in October 2003. A few months later, they opened up private beta, which eventually led to a public beta. Microsoft published the first Monad public beta release on June 17, 2005, and the Beta 2 on September 11, 2005, and Beta 3 on January 10, 2006.
PowerShell
On April 25, 2006, not long after the initial Monad announcement, Microsoft announced that Monad had been renamed Windows PowerShell, positioning it as a significant part of its management technology offerings. Release Candidate (RC) 1 of PowerShell was released at the same time. A significant aspect of both the name change and the RC was that this was now a component of Windows, rather than a mere add-on.
Release Candidate 2 of PowerShell version 1 was released on September 26, 2006, with final release to the web on November 14, 2006. PowerShell for earlier versions of Windows was released on January 30, 2007. PowerShell v2.0 development began before PowerShell v1.0 shipped. During the development, Microsoft shipped three community technology previews (CTP). Microsoft made these releases available to the public. The last CTP release of Windows PowerShell v2.0 was made available in December 2008.
PowerShell v2.0 was completed and released to manufacturing in August 2009, as an integral part of Windows 7 and Windows Server 2008 R2. Versions of PowerShell for Windows XP, Windows Server 2003, Windows Vista and Windows Server 2008 were released in October 2009 and are available for download for both 32-bit and 64-bit platforms. In an October 2009 issue of TechNet Magazine, Microsoft called proficiency with PowerShell "the single most important skill a Windows administrator will need in the coming years".
Windows 10 shipped with Pester, a script validation suite for PowerShell.
On August 18, 2016, Microsoft announced that they had made PowerShell open-source and cross-platform with support for Windows, macOS, CentOS and Ubuntu. The source code was published on GitHub. The move to open source created a second incarnation of PowerShell called "PowerShell Core", which runs on .NET Core. It is distinct from "Windows PowerShell", which runs on the full .NET Framework. Starting with version 5.1, PowerShell Core is bundled with Windows Server 2016 Nano Server.
Design
A key design tactic for PowerShell was to leverage the large number of APIs that already existed in Windows, Windows Management Instrumentation, .NET Framework, and other software. PowerShell cmdlets "wrap around" existing functionality. The intent with this tactic is to provide an administrator-friendly, more-consistent interface between administrators and a wide range of underlying functionality. With PowerShell, an administrator doesn't need to know .NET, WMI, or low-level API coding, and can instead focus on using the cmdlets exposed by PowerShell. In this regard, PowerShell creates little new functionality, instead focusing on making existing functionality more accessible to a particular audience.
Grammar
PowerShell's developers based the core grammar of the tool on that of the POSIX 1003.2 KornShell.
However, PowerShell's language was also influenced by PHP, Perl, and many other existing languages.
Named Commands
Windows PowerShell can execute four kinds of named commands:
cmdlets (.NET Framework programs designed to interact with PowerShell)
PowerShell scripts (files suffixed by .ps1)
PowerShell functions
Standalone executable programs
If a command is a standalone executable program, PowerShell launches it in a separate process; if it is a cmdlet, it executes in the PowerShell process. PowerShell provides an interactive command-line interface, where the commands can be entered and their output displayed. The user interface offers customizable tab completion. PowerShell enables the creation of aliases for cmdlets, which PowerShell textually translates into invocations of the original commands. PowerShell supports both named and positional parameters for commands. In executing a cmdlet, the job of binding the argument value to the parameter is done by PowerShell itself, but for external executables, arguments are parsed by the external executable independently of PowerShell interpretation.
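As a brief sketch (using only built-in cmdlets), an alias can be created and a cmdlet invoked with either named or positional parameters:
Set-Alias -Name list -Value Get-ChildItem       # 'list' now resolves to Get-ChildItem
Get-ChildItem -Path C:\Windows -Filter *.log    # named parameters
Get-ChildItem C:\Windows                        # positional argument bound to -Path
list C:\Windows                                 # the alias is translated to Get-ChildItem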
Extended Type System
The PowerShell Extended Type System (ETS) is based on the .NET type system, but with extended semantics (for example, propertySets and third-party extensibility). For example, it enables the creation of different views of objects by exposing only a subset of the data fields, properties, and methods, as well as specifying custom formatting and sorting behavior. These views are mapped to the original object using XML-based configuration files.
Cmdlets
Cmdlets are specialized commands in the PowerShell environment that implement specific functions. These are the native commands in the PowerShell stack. Cmdlets follow a Verb-Noun naming pattern, such as Get-ChildItem, which makes it self-documenting code. Cmdlets output their results as objects and can also receive objects as input, making them suitable for use as recipients in a pipeline. If a cmdlet outputs multiple objects, each object in the collection is passed down through the entire pipeline before the next object is processed.
Cmdlets are specialized .NET classes, which the PowerShell runtime instantiates and invokes at execution time. Cmdlets derive either from Cmdlet or from PSCmdlet, the latter being used when the cmdlet needs to interact with the PowerShell runtime. These base classes specify certain methods – BeginProcessing(), ProcessRecord() and EndProcessing() – which the cmdlet's implementation overrides to provide the functionality. Whenever a cmdlet runs, PowerShell invokes these methods in sequence, with ProcessRecord() being called if it receives pipeline input. If a collection of objects is piped, the method is invoked for each object in the collection. The class implementing the cmdlet must have one .NET attribute – CmdletAttribute – which specifies the verb and the noun that make up the name of the cmdlet. Common verbs are provided as an enum.
If a cmdlet receives either pipeline input or command-line parameter input, there must be a corresponding property in the class, with a mutator implementation. PowerShell invokes the mutator with the parameter value or pipeline input, which is saved by the mutator implementation in class variables. These values are then referred to by the methods which implement the functionality. Properties that map to command-line parameters are marked by ParameterAttribute and are set before the call to BeginProcessing(). Those which map to pipeline input are also marked with ParameterAttribute, but with the ValueFromPipeline attribute parameter set.
The implementation of these cmdlet classes can refer to any .NET API and may be in any .NET language. In addition, PowerShell makes certain APIs available, such as WriteObject(), which is used to access PowerShell-specific functionality, such as writing resultant objects to the pipeline. Cmdlets can use .NET data access APIs directly or use the PowerShell infrastructure of PowerShell Providers, which make data stores addressable using unique paths. Data stores are exposed using drive letters, and hierarchies within them, addressed as directories. Windows PowerShell ships with providers for the file system, registry, the certificate store, as well as the namespaces for command aliases, variables, and functions. Windows PowerShell also includes various cmdlets for managing various Windows systems, including the file system, or using Windows Management Instrumentation to control Windows components. Other applications can register cmdlets with PowerShell, thus allowing it to manage them, and, if they enclose any datastore (such as a database), they can add specific providers as well.
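A short sketch of provider-based navigation, using drives that ship with Windows PowerShell on Windows:
Get-PSDrive                                # list available drives, including Env:, HKLM: and Cert:
Get-ChildItem Env:                         # enumerate environment variables via the Environment provider
Get-ChildItem HKLM:\SOFTWARE\Microsoft     # browse the registry as if it were a file system
Get-ChildItem Cert:\CurrentUser\My         # the certificate store exposed as a path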
The number of cmdlets included in the base PowerShell install has generally increased with each version.
Cmdlets can be added into the shell through snap-ins (deprecated in v2) and modules; users are not limited to the cmdlets included in the base PowerShell installation.
Pipeline
PowerShell implements the concept of a pipeline, which enables piping the output of one cmdlet to another cmdlet as input. As with Unix pipelines, PowerShell pipelines can construct complex commands, using the | operator to connect stages. However, the PowerShell pipeline differs from Unix pipelines in that stages execute within the PowerShell runtime rather than as a set of processes coordinated by the operating system. Additionally, structured .NET objects, rather than byte streams, are passed from one stage to the next. Using objects and executing stages within the PowerShell runtime eliminates the need to serialize data structures, or to extract them by explicitly parsing text output. An object can also encapsulate certain functions that work on the contained data, which become available to the recipient command for use. For the last cmdlet in a pipeline, PowerShell automatically pipes its output object to the Out-Default cmdlet, which transforms the objects into a stream of format objects and then renders those to the screen.
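A minimal pipeline sketch; each stage receives live .NET Process objects rather than text:
Get-Process |
    Where-Object { $_.WorkingSet64 -gt 100MB } |
    Sort-Object WorkingSet64 -Descending |
    Select-Object -First 5 -Property Name, Id, WorkingSet64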
Because all PowerShell objects are .NET objects, they share a .ToString() method, which retrieves the text representation of the data in an object. In addition, PowerShell allows formatting definitions to be specified, so the text representation of objects can be customized by choosing which data elements to display, and in what manner. However, in order to maintain backward compatibility, if an external executable is used in a pipeline, it receives a text stream representing the object, instead of directly integrating with the PowerShell type system.
Scripting
Windows PowerShell includes a dynamically typed scripting language which can implement complex operations using cmdlets imperatively. The scripting language supports variables, functions, branching (if-then-else), loops (while, do, for, and foreach), structured error/exception handling and closures/lambda expressions, as well as integration with .NET. Variables in PowerShell scripts are prefixed with $. Variables can be assigned any value, including the output of cmdlets. Strings can be enclosed either in single quotes or in double quotes: when using double quotes, variables will be expanded even if they are inside the quotation marks. Enclosing the path to a file in braces preceded by a dollar sign (as in ${C:\foo.txt}) creates a reference to the contents of the file. If it is used as an L-value, anything assigned to it will be written to the file. When used as an R-value, the contents of the file will be read. If an object is assigned, it is serialized before being stored.
Object members can be accessed using . notation, as in C# syntax. PowerShell provides special variables, such as $args, which is an array of all the command-line arguments passed to a function from the command line, and $_, which refers to the current object in the pipeline. PowerShell also provides arrays and associative arrays. The PowerShell scripting language also evaluates arithmetic expressions entered on the command line immediately, and it parses common abbreviations, such as GB, MB, and KB.
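A brief sketch of these language elements (variable expansion, arrays, an associative array, a loop, and unit suffixes); the variable names are arbitrary:
$name = "world"
"Hello, $name"                       # double quotes expand variables
'Hello, $name'                       # single quotes do not
$nums = 1, 2, 3                      # an array
$ages = @{ Alice = 30; Bob = 25 }    # an associative array (hashtable)
$ages["Alice"]
foreach ($n in $nums) { $n * 2 }
2GB / 1MB                            # unit suffixes evaluate to byte counts (here 2048)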
Using the function keyword, PowerShell provides for the creation of functions. A simple function has the following general look:
function name ([Type]$Param1, [Type]$Param2) {
# Instructions
}
However, PowerShell allows for advanced functions that support named parameters, positional parameters, switch parameters and dynamic parameters.
function Verb-Noun {
param (
# Definition of static parameters
)
dynamicparam {
# Definition of dynamic parameters
}
begin {
# Set of instruction to run at the start of the pipeline
}
process {
# Main instruction sets, ran for each item in the pipeline
}
end {
# Set of instruction to run at the end of the pipeline
}
}
The defined function is invoked in either of the following forms:
name value1 value2
Verb-Noun -Param1 value1 -Param2 value2
PowerShell allows any static .NET methods to be called by providing their namespaces enclosed in brackets ([]), and then using a pair of colons (::) to indicate the static method. For example:
[Console]::WriteLine("PowerShell")
There are dozens of ways to create objects in PowerShell. Once created, one can access the properties and instance methods of an object using the . notation.
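A few common ways of creating objects and accessing their members, as a minimal sketch:
$sb = New-Object System.Text.StringBuilder                # via the New-Object cmdlet
$list = [System.Collections.Generic.List[int]]::new()     # via the static ::new() method (PowerShell 5 and later)
$point = [pscustomobject]@{ X = 3; Y = 4 }                # via a custom object literal
$point.X                                                  # property access with the . notation
$sb.Append("Power").Append("Shell").ToString()            # chained instance method calls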
PowerShell accepts strings, both raw and escaped. A string enclosed between single quotation marks is a raw string while a string enclosed between double quotation marks is an escaped string. PowerShell treats straight and curly quotes as equivalent.
PowerShell also supports a set of special characters introduced with the backtick (`) character; for example, `n produces a newline and `t a tab within double-quoted strings.
For error handling, PowerShell provides a .NET-based exception-handling mechanism. In case of errors, objects containing information about the error (Exception object) are thrown, which are caught using the try ... catch construct (although a trap construct is supported as well). PowerShell can be configured to silently resume execution, without actually throwing the exception; this can be done either on a single command, a single session or perpetually.
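A minimal error-handling sketch using try/catch/finally together with the -ErrorAction common parameter (the file path is illustrative):
try {
    Get-Item 'C:\does-not-exist.txt' -ErrorAction Stop
}
catch [System.Management.Automation.ItemNotFoundException] {
    Write-Warning "File not found: $($_.Exception.Message)"
}
catch {
    Write-Warning "Unexpected error: $_"
}
finally {
    Write-Verbose "Cleanup always runs."
}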
Scripts written using PowerShell can be made to persist across sessions in either a .ps1 file or a .psm1 file (the latter is used to implement a module). Later, either the entire script or individual functions in the script can be used. Scripts and functions operate analogously with cmdlets, in that they can be used as commands in pipelines, and parameters can be bound to them. Pipeline objects can be passed between functions, scripts, and cmdlets seamlessly. To prevent unintentional running of scripts, script execution is disabled by default and must be enabled explicitly. Enabling of scripts can be performed either at system, user or session level. PowerShell scripts can be signed to verify their integrity, and are subject to Code Access Security.
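A sketch of enabling and using scripts and modules; the script and module names here are purely illustrative:
Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy RemoteSigned   # allow locally authored scripts to run
.\Deploy.ps1 -Environment Test        # run a (hypothetical) script with a named parameter
Import-Module .\MyTools.psm1          # load functions from a (hypothetical) script module
Get-Command -Module MyTools           # list the commands the module exported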
The PowerShell scripting language supports binary prefix notation similar to the scientific notation supported by many programming languages in the C-family.
Hosting
One can also use PowerShell embedded in a management application, which uses the PowerShell runtime to implement the management functionality. For this, PowerShell provides a managed hosting API. Via the APIs, the application can instantiate a runspace (one instantiation of the PowerShell runtime), which runs in the application's process and is exposed as a Runspace object. The state of the runspace is encased in a SessionState object. When the runspace is created, the Windows PowerShell runtime initializes the instantiation, including initializing the providers and enumerating the cmdlets, and updates the SessionState object accordingly. The Runspace then must be opened for either synchronous processing or asynchronous processing. After that it can be used to execute commands.
To execute a command, a pipeline (represented by a Pipeline object) must be created and associated with the runspace. The pipeline object is then populated with the cmdlets that make up the pipeline. For sequential operations (as in a PowerShell script), a Pipeline object is created for each statement and nested inside another Pipeline object. When a pipeline is created, Windows PowerShell invokes the pipeline processor, which resolves the cmdlets into their respective assemblies (the command processor) and adds a reference to them to the pipeline, and associates them with InputPipe, OutputPipe and ErrorOutputPipe objects, to represent the connection with the pipeline. The types are verified and parameters bound using reflection. Once the pipeline is set up, the host calls the Invoke() method to run the commands, or its asynchronous equivalent, InvokeAsync(). If the pipeline has the Write-Host cmdlet at the end of the pipeline, it writes the result onto the console screen. If not, the results are handed over to the host, which might either apply further processing or display the output itself.
Microsoft Exchange Server 2007 uses the hosting APIs to provide its management GUI. Each operation exposed in the GUI is mapped to a sequence of PowerShell commands (or pipelines). The host creates the pipeline and executes them. In fact, the interactive PowerShell console itself is a PowerShell host, which interprets the scripts entered at command line and creates the necessary Pipeline objects and invokes them.
Desired State Configuration
DSC allows for declaratively specifying how a software environment should be configured.
Upon running a configuration, DSC will ensure that the system gets the state described in the configuration. DSC configurations are idempotent. The Local Configuration Manager (LCM) periodically polls the system using the control flow described by resources (imperative pieces of DSC) to make sure that the state of a configuration is maintained.
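A minimal DSC sketch for Windows PowerShell on a Windows Server machine (the configuration name and output path are illustrative); the configuration compiles to a MOF document that the LCM then enforces:
Configuration WebServer {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Node 'localhost' {
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}
WebServer -OutputPath C:\Dsc                       # generates C:\Dsc\localhost.mof
Start-DscConfiguration -Path C:\Dsc -Wait -Verbose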
Versions
Initially using the code name "Monad", PowerShell was first shown publicly at the Professional Developers Conference in October 2003 in Los Angeles. All major releases are still supported, and each major release has featured backwards compatibility with preceding versions.
Windows PowerShell 1.0
PowerShell 1.0 was released in November 2006 for Windows XP SP2, Windows Server 2003 SP1 and Windows Vista. It is an optional component of Windows Server 2008.
Windows PowerShell 2.0
PowerShell 2.0 is integrated with Windows 7 and Windows Server 2008 R2 and is released for Windows XP with Service Pack 3, Windows Server 2003 with Service Pack 2, and Windows Vista with Service Pack 1.
PowerShell v2 includes changes to the scripting language and hosting API, in addition to including more than 240 new cmdlets.
New features of PowerShell 2.0 include:
PowerShell remoting: Using WS-Management, PowerShell 2.0 allows scripts and cmdlets to be invoked on a remote machine or a large set of remote machines (a short remoting and background-job sketch follows this list).
Background jobs: Also called a PSJob, it allows a command sequence (script) or pipeline to be invoked asynchronously. Jobs can be run on the local machine or on multiple remote machines. An interactive cmdlet in a PSJob blocks the execution of the job until user input is provided.
Transactions: Enable cmdlets and developers to perform transactional operations. PowerShell 2.0 includes transaction cmdlets for starting, committing, and rolling back a PSTransaction as well as features to manage and direct the transaction to the participating cmdlet and provider operations. The PowerShell Registry provider supports transactions.
Advanced functions: These are cmdlets written using the PowerShell scripting language. Initially called "script cmdlets", this feature was later renamed "advanced functions".
SteppablePipelines: This allows the user to control when the BeginProcessing(), ProcessRecord() and EndProcessing() functions of a cmdlet are called.
Modules: This allows script developers and administrators to organize and partition PowerShell scripts in self-contained, reusable units. Code from a module executes in its own self-contained context and does not affect the state outside the module. Modules can define a restricted runspace environment by using a script. They have a persistent state as well as public and private members.
Data language: A domain-specific subset of the PowerShell scripting language that allows data definitions to be decoupled from the scripts and allows localized string resources to be imported into the script at runtime (Script Internationalization).
Script debugging: It allows breakpoints to be set in a PowerShell script or function. Breakpoints can be set on lines, line & columns, commands and read or write access of variables. It includes a set of cmdlets to control the breakpoints via script.
Eventing: This feature allows listening, forwarding, and acting on management and system events. Eventing allows PowerShell hosts to be notified about state changes to their managed entities. It also enables PowerShell scripts to subscribe to ObjectEvents, PSEvents, and WmiEvents and process them synchronously and asynchronously.
Windows PowerShell Integrated Scripting Environment (ISE): PowerShell 2.0 includes a GUI-based PowerShell host that provides integrated debugger, syntax highlighting, tab completion and up to 8 PowerShell Unicode-enabled consoles (Runspaces) in a tabbed UI, as well as the ability to run only the selected parts in a script.
Network file transfer: Native support for prioritized, throttled, and asynchronous transfer of files between machines using the Background Intelligent Transfer Service (BITS).
New cmdlets: Including Out-GridView, which displays tabular data in a WPF GridView window, on systems that allow it and where ISE is installed and enabled.
New operators: -Split, -Join, and Splatting (@) operators.
Exception handling with Try-Catch-Finally: Unlike other .NET languages, this allows multiple exception types for a single catch block.
Nestable Here-Strings: PowerShell Here-Strings have been improved and can now nest.
Block comments: PowerShell 2.0 supports block comments using <# and #> as delimiters.
New APIs: The new APIs range from handing more control over the PowerShell parser and runtime to the host, to creating and managing collection of Runspaces (RunspacePools) as well as the ability to create Restricted Runspaces which only allow a configured subset of PowerShell to be invoked. The new APIs also support participation in a transaction managed by PowerShell
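As a brief sketch of the remoting and background-job features above (the computer name is illustrative, and remoting is assumed to have been enabled with Enable-PSRemoting):
Invoke-Command -ComputerName Server01 -ScriptBlock { Get-Service WinRM }    # run a command on a remote machine
$job = Start-Job -ScriptBlock { Get-EventLog -LogName System -Newest 100 }  # run a command asynchronously as a PSJob
Wait-Job $job
Receive-Job $job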
Windows PowerShell 3.0
PowerShell 3.0 is integrated with Windows 8 and with Windows Server 2012. Microsoft has also made PowerShell 3.0 available for Windows 7 with Service Pack 1, for Windows Server 2008 with Service Pack 1, and for Windows Server 2008 R2 with Service Pack 1.
PowerShell 3.0 is part of a larger package, Windows Management Framework 3.0 (WMF3), which also contains the WinRM service to support remoting. Microsoft made several Community Technology Preview releases of WMF3. An early community technology preview 2 (CTP 2) version of Windows Management Framework 3.0 was released on December 2, 2011. Windows Management Framework 3.0 was released for general availability in December 2012 and is included with Windows 8 and Windows Server 2012 by default.
New features in PowerShell 3.0 include:
Scheduled jobs: Jobs can be scheduled to run on a preset time and date using the Windows Task Scheduler infrastructure.
Session connectivity: Sessions can be disconnected and reconnected. Remote sessions have become more tolerant of temporary network failures.
Improved code writing: Code completion (IntelliSense) and snippets are added. PowerShell ISE allows users to use dialog boxes to fill in parameters for PowerShell cmdlets.
Delegation support: Administrative tasks can be delegated to users who do not have permissions for that type of task, without granting them perpetual additional permissions.
Help update: Help documentations can be updated via Update-Help command.
Automatic module detection: Modules are loaded implicitly whenever a command from that module is invoked. Code completion works for unloaded modules as well.
New commands: Dozens of new modules were added, including functionality to manage disks, volumes, firewalls, network connections, and printers, tasks which had previously been performed via WMI (for example, with Get-WmiObject Win32_LogicalDisk).
Windows PowerShell 4.0
PowerShell 4.0 is integrated with Windows 8.1 and with Windows Server 2012 R2. Microsoft has also made PowerShell 4.0 available for Windows 7 SP1, Windows Server 2008 R2 SP1 and Windows Server 2012.
New features in PowerShell 4.0 include:
Desired State Configuration: Declarative language extensions and tools that enable the deployment and management of configuration data for systems using the DMTF management standards and WS-Management Protocol
New default execution policy: On Windows Servers, the default execution policy is now RemoteSigned.
Save-Help: Help can now be saved for modules that are installed on remote computers.
Enhanced debugging: The debugger now supports debugging workflows, remote script execution and preserving debugging sessions across PowerShell session reconnections.
-PipelineVariable switch: A new ubiquitous parameter to expose the current pipeline object as a variable for programming purposes
Network diagnostics to manage physical and Hyper-V's virtualized network switches
Where and ForEach method syntax provides an alternate method of filtering and iterating over objects, as shown in the sketch below.
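A minimal sketch of the method syntax alongside the equivalent cmdlet pipeline:
$procs = Get-Process
$procs.Where({ $_.CPU -gt 10 })            # method syntax for filtering
$procs.ForEach({ $_.Name.ToUpper() })      # method syntax for iteration
$procs | Where-Object { $_.CPU -gt 10 }    # equivalent pipeline form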
Windows PowerShell 5.0
Windows Management Framework (WMF) 5.0 RTM which includes PowerShell 5.0 was re-released to web on February 24, 2016, following an initial release with a severe bug.
Key features included:
The new class keyword that creates classes for object-oriented programming (a short sketch follows this list)
The new enum keyword that creates enums
OneGet cmdlets to support the Chocolatey package manager
Extending support for switch management to layer 2 network switches.
Debugging for PowerShell background jobs and instances of PowerShell hosted in other processes (each of which is called a "runspace")
Desired State Configuration (DSC) Local Configuration Manager (LCM) version 2.0
DSC partial configurations
DSC Local Configuration Manager meta-configurations
Authoring of DSC resources using PowerShell classes
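A short sketch of the class and enum keywords (the type names are illustrative):
enum Color {
    Red
    Green
    Blue
}

class Shape {
    [Color] $Color
    [int]   $Sides

    Shape([Color] $color, [int] $sides) {
        $this.Color = $color
        $this.Sides = $sides
    }

    [string] Describe() {
        return "A $($this.Color) shape with $($this.Sides) sides"
    }
}

$square = [Shape]::new([Color]::Blue, 4)
$square.Describe()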
Windows PowerShell 5.1
It was released along with the Windows 10 Anniversary Update on August 2, 2016, and in Windows Server 2016. PackageManagement now supports proxies, PSReadLine now has ViMode support, and two new cmdlets were added: Get-TimeZone and Set-TimeZone. The LocalAccounts module allows for adding/removing local user accounts. A preview for PowerShell 5.1 was released for Windows 7, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2 on July 16, 2016, and was released on January 19, 2017.
PowerShell 5.1 is the first version to come in two editions of "Desktop" and "Core". The "Desktop" edition is the continuation of the traditional Windows PowerShell that runs on the .NET Framework stack. The "Core" edition runs on .NET Core and is bundled with Windows Server 2016 Nano Server. In exchange for smaller footprint, the latter lacks some features such as the cmdlets to manage clipboard or join a computer to a domain, WMI version 1 cmdlets, Event Log cmdlets and profiles. This was the final version of PowerShell made exclusively for Windows. Windows PowerShell 5.1 remains pre-installed on Windows 10, Windows 11 and Windows Server 2022, while the .NET PowerShell needs to be installed separately and can run side-by-side with Windows PowerShell.
PowerShell Core 6
PowerShell Core 6.0 was first announced on August 18, 2016, when Microsoft unveiled PowerShell Core and its decision to make the product cross-platform, independent of Windows, free and open source. It achieved general availability on January 10, 2018, for Windows, macOS and Linux. It has its own support lifecycle and adheres to the Microsoft lifecycle policy that is introduced with Windows 10: Only the latest version of PowerShell Core is supported. Microsoft expects to release one minor version for PowerShell Core 6.0 every six months.
The most significant change in this version of PowerShell is the expansion to the other platforms. For Windows administrators, this version of PowerShell did not include any major new features. In an interview with the community on January 11, 2018, the PowerShell team was asked to list the top 10 most exciting things that would happen for a Windows IT professional who would migrate from Windows PowerShell 5.1 to PowerShell Core 6.0; in response, Angel Calvo of Microsoft could only name two: cross-platform and open-source. PowerShell 6 changed the default encoding to UTF-8, with some exceptions (PowerShell 7.4 later extends the UTF-8 defaults further).
6.1
According to Microsoft, one of the new features of PowerShell 6.1 is "Compatibility with 1900+ existing cmdlets in Windows 10 and Windows Server 2019." Still, no details of these cmdlets can be found in the full version of the change log. Microsoft later acknowledged that this was insufficient, as PowerShell Core failed to replace Windows PowerShell 5.1 and gain traction on Windows. It was, however, popular on Linux.
6.2
PowerShell Core 6.2 is focused primarily on performance improvements, bug fixes, and smaller cmdlet and language enhancements that improved developer productivity.
PowerShell 7
PowerShell 7 is the replacement for PowerShell Core 6.x products as well as Windows PowerShell 5.1, which is the last supported Windows PowerShell version. The focus in development was to make PowerShell 7 a viable replacement for Windows PowerShell 5.1, i.e. to have near parity with Windows PowerShell in terms of compatibility with modules that ship with Windows.
New features in PowerShell 7 include the following (a brief usage sketch follows this list):
The -Parallel switch for the ForEach-Object cmdlet to help handle parallel processing
Near parity with Windows PowerShell in terms of compatibility with built-in Windows modules
A new error view
The Get-Error cmdlet
Pipeline chaining operators (&& and ||) that conditionally run the next pipeline depending on whether the previous one succeeded or failed
The ?: operator for ternary operation
The ?? operator for null coalescing
The ??= operator for null coalescing assignment
Cross-platform Invoke-DscResource (experimental)
Return of the Out-GridView cmdlet
Return of the -ShowWindow switch for Get-Help
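The following is a brief, hedged sketch of how several of these additions look in practice; the paths and values are invented purely for illustration.

```powershell
# Ternary operator (?:)
$mode = (Test-Path 'C:\temp') ? 'exists' : 'missing'

# Null coalescing (??) and null-coalescing assignment (??=)
$name = $null
$display = $name ?? 'unknown'   # evaluates to 'unknown' because $name is null
$name ??= 'default'             # assigns 'default' only because $name was null

# Pipeline chaining: run the second command only if the first one succeeds
Get-Item 'C:\temp' && Write-Output 'found it'

# Parallel processing with ForEach-Object -Parallel
1..5 | ForEach-Object -Parallel { "processing item $_" } -ThrottleLimit 3
```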
PowerShell 7.2
PowerShell 7.2 is the next long-term support version of PowerShell, after version 7.0. It uses .NET 6.0 and features universal installer packages for Linux. On Windows, updates to PowerShell 7.2 and later come via the Microsoft Update service; this feature has been missing from PowerShell 6.0 through 7.1.
PowerShell 7.3
This version includes some general cmdlet updates and fixes, testing for framework-dependent packages in the release pipeline, as well as build and packaging improvements.
PowerShell 7.4
PowerShell 7.4 is based on .NET 8. With that release, the web cmdlets default to UTF-8 encoding, changing from Windows-1252 (an ASCII superset often conflated with ISO-8859-1, which does not cover the full Unicode range). Previously, UTF-8 was already the default in most, but not all, other places.
Comparison of cmdlets with similar commands
The following table contains a selection of the cmdlets that ship with PowerShell, noting similar commands in other well-known command-line interpreters. Many of these similar commands come out-of-the-box defined as aliases within PowerShell, making it easy for people familiar with other common shells to start working.
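As an illustration of that alias mechanism, the sketch below resolves a few of the built-in mappings; the exact set of aliases differs between editions (for example, several Unix-style aliases such as ls and cat are not defined on the Linux and macOS builds).

```powershell
# Resolve a few familiar command names to the cmdlets they alias
Get-Alias dir, cd, cat        # -> Get-ChildItem, Set-Location, Get-Content

# List every alias defined for a given cmdlet
Get-Alias -Definition Get-ChildItem
```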
Notes
Filename extensions
Application support
Alternative implementation
A project named Pash, a pun on the widely known "bash" Unix shell, was an open-source and cross-platform reimplementation of PowerShell built on the Mono framework. Pash was created by Igor Moochnick, written in C#, and released under the GNU General Public License. Pash development stalled in 2008, was restarted on GitHub in 2012, and finally ceased in 2016 when PowerShell was officially made open-source and cross-platform.
See also
Common Information Model (computing)
Comparison of command shells
Comparison of programming languages
Web-Based Enterprise Management
Windows Script Host
Windows Terminal
References
Further reading
External links
Windows PowerShell Survival Guide on TechNet Wiki
.NET programming languages
Unix shells
Windows command shells
Dynamically typed programming languages
Configuration management
Free and open-source software
Interpreters (computing)
Microsoft free software
Microsoft programming languages
Object-oriented programming languages
Procedural programming languages
Programming languages created in 2006
Scripting languages
Software using the MIT license
Text-oriented programming languages
Windows administration
Formerly proprietary software | PowerShell | [
"Technology",
"Engineering"
] | 7,857 | [
"Windows commands",
"Systems engineering",
"Configuration management",
"Computing commands"
] |
14,465,985 | https://en.wikipedia.org/wiki/Dipropylcyclopentylxanthine | 8-Cyclopentyl-1,3-dipropylxanthine (DPCPX, PD-116,948) is a drug which acts as a potent and selective antagonist for the adenosine A1 receptor. It has high selectivity for A1 over other adenosine receptor subtypes, but as with other xanthine derivatives DPCPX also acts as a phosphodiesterase inhibitor, and is almost as potent as rolipram at inhibiting PDE4. It has been used to study the function of the adenosine A1 receptor in animals, which has been found to be involved in several important functions such as regulation of breathing and activity in various regions of the brain, and DPCPX has also been shown to produce behavioural effects such as increasing the hallucinogen-appropriate responding produced by the 5-HT2A agonist DOI, and the dopamine release induced by MDMA, as well as having interactions with a range of anticonvulsant drugs.
See also
DMPX
CPX
Xanthine
References
Adenosine receptor antagonists
Phosphodiesterase inhibitors
Xanthines
Cyclopentanes
Propyl compounds | Dipropylcyclopentylxanthine | [
"Chemistry"
] | 253 | [
"Alkaloids by chemical classification",
"Xanthines"
] |
14,466,034 | https://en.wikipedia.org/wiki/Planetarium%20software | Planetarium software is application software that allows a user to simulate the celestial sphere at any time of day, especially at night, on a computer. Such applications can be as rudimentary as displaying a star chart or sky map for a specific time and location, or as complex as rendering photorealistic views of the sky.
While some planetarium software is meant to be used exclusively on a personal computer, some applications can be used to interface with and control telescopes or planetarium projectors. Optional features may include inserting the orbital elements of comets and other newly discovered bodies for display.
Comparison of planetarium software
See also
Space flight simulation game
List of space flight simulation games
List of observatory software
References
Educational software
Entertainment software
Astronomy software
Planetarium technology | Planetarium software | [
"Astronomy"
] | 150 | [
"Astronomy software",
"Works about astronomy",
"Astronomy stubs"
] |
14,466,203 | https://en.wikipedia.org/wiki/Service%20Interface%20for%20Real%20Time%20Information | The Service Interface for Real-time Information, or SIRI, is an XML protocol to allow distributed computers to exchange real-time information about public transport services and vehicles.
The protocol is a CEN norm, developed originally as a technical standard with initial participation by France, Germany (Verband Deutscher Verkehrsunternehmen), Scandinavia, and the UK (RTIG).
SIRI is based on the CEN Transmodel abstract model for public transport information, and comprises a general purpose model, and an XML schema for public transport information.
A SIRI White Paper is available for further information on the protocol.
Scope
CEN SIRI allows pairs of server computers to exchange structured real-time information about schedules, vehicles, and connections, together with informational messages related to the operation of the services. The information can be used for many different purposes, for example:
To provide real-time departure from stop information for display on stops, internet and mobile delivery systems;
To provide real-time progress information about individual vehicles;
To manage the movement of buses roaming between areas covered by different servers;
To manage the synchronisation of guaranteed connections between fetcher and feeder services;
To exchange planned and real-time timetable updates;
To distribute status messages about the operation of the services;
To provide performance information to operational history and other management systems.
CEN SIRI includes a number of optional capabilities.
Different countries may specify a country profile of the subset of SIRI capabilities that they wish to adopt.
Architecture
The CEN SIRI standard has two distinct components:
SIRI Common Protocol Framework. The Framework provides a uniform architecture for defining data messages either as request/response pairs or as publish/subscribe services. The payload content model is separated from the messaging aspects so that the same payload content can be used in both request and subscription services and the same common messaging components can be used for all the different functional services. Common functions for subscription management, service monitoring, content-level authentication, etc., are provided.
SIRI Functional Services. The SIRI specification specifies a number of distinct functional services, each designed for the exchange of a specific type of public transport data, all using the same protocol framework and basing their payload content on the Transmodel conceptual model. Additional functional services may be added that use the same framework but different payload content models to cover additional services.
CEN SIRI Functional Services
SIRI V1.0 defined eight functional services:
SIRI-PT: Planned Timetable service: Allows the exchange of the planned timetable for a public transport service along a route.
SIRI-ET: Estimated Timetable service: Allows the exchange of the real-time timetable for a public transport service along a route.
SIRI-ST: Stop Timetable service: Allows the exchange of the planned arrivals and departures at a stop of public transport services.
SIRI-SM: Stop Monitoring service: Allows the exchange of the real-time arrivals and departures at a stop of public transport services (a minimal request sketch is shown after this list of services).
SIRI-VM: Vehicle Monitoring service: Allows the exchange of the real-time positions of public transport vehicles.
SIRI-CT: Connection Timetable service: Allows the exchange of the planned connections of public transport services at a stop.
SIRI-CM: Connection Monitoring service: Allows the exchange of the real-time connections of public transport services at a stop, taking into account delays.
SIRI-GM: General Messaging service: Allows the exchange of the simple messages relating to public transport services.
Two further functional services have been added as part of the CEN SIRI specification;
SIRI-FM: Facility Monitoring service: Allows the exchange of the real-time status of facilities at a stop such as lifts, escalators, etc.
SIRI-SX: Situation Exchange service: Allows the exchange of the structured messages relating to public transport services and networks.
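A minimal, hypothetical sketch of a Stop Monitoring (SIRI-SM) exchange over plain HTTP follows; the endpoint URL, query parameter and stop reference are invented for illustration, and real deployments may instead require a POSTed SIRI ServiceRequest XML document. The element name MonitoredStopVisit follows the SIRI-SM schema.

```powershell
# Fetch a SIRI-SM response from a hypothetical SIRI-Lite style endpoint
$uri = 'https://example.org/siri/stop-monitoring?MonitoringRef=STOP123'
$response = Invoke-WebRequest -Uri $uri          # retrieve the SIRI XML payload
[xml]$siri = $response.Content                   # parse it as XML

# Count MonitoredStopVisit elements regardless of XML namespace prefix
$visits = $siri.SelectNodes("//*[local-name()='MonitoredStopVisit']")
"Received $($visits.Count) monitored stop visit(s)"
```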
Other CEN Standards that use the SIRI Common Protocol Framework
The CEN SIRI Common Protocol Framework can be used by other standards to define their own Functional Services. Two CEN standards that do this are:
The CEN NeTEx specification for Public Transport reference data uses the CEN SIRI Common Protocol Framework to define a SIRI based exchange service to exchange any type of NeTEx data element within a frame.
The CEN Open API for distributed journey planning uses the CEN SIRI Common Protocol Framework to define a protocol for journey planning.
Current version & Documentation
Version 2.0 of SIRI, representing the CEN documents as published, is currently available as a set of XSD files packaged as a zip file.
CEN TS 15531-1:2015 - Part 1: Context and framework.
CEN TS 15531-2:2015 - Part 2: Communications infrastructure.
CEN TS 15531-3:2015 - Part 3: Functional service interfaces (covering the SIRI-PT, SIRI-ET, SIRI-ST, SIRI-SM, SIRI-VM, SIRI-CT, SIRI-CM, and SIRI-GM functional services).
CEN/TS 15531-4:2011 - Part 4: Functional service interfaces - Facility Monitoring.
CEN/TS 15531-5:2016 - Part 5: Functional service interfaces - Situation Exchange.
SIRI is maintained under a maintenance regime, with version control managed by Working Group 3 of CEN TC/278. Later versions of the schema are available at the same site, together with change notes.
History
The CEN SIRI standard was developed from European national standards for real-time data exchange, in particular the German VDV 453 standard, between 2000 and 2005, and included eight functional services. V1.0 became a CEN Technical Standard in 2006 and a full CEN standard in 2009.
Two additional functional services were added later: Situation Exchange (SX) (Technical Standard 2009, Standard 2016) and Facility Monitoring (FM) (2011).
A number of small enhancements were subsequently added as informal changes creating interim releases v1.1, v1.2, etc.
Two other CEN standards were developed that made use of the 'SIRI Common Protocol Framework' to define their own functional services; NeTEx (v1.0 published in 2014) and Open API for distributed journey planning (v 1.0 published in 2017).
Version 2.0 of CEN-SIRI was adopted in 2015. It is backwards compatible with V1.0, and both formalises the adoption of the interim enhancements and adds a number of additional features.
An important new addition in SIRI v2.0 was the description of a uniform transform for rendering CEN-SIRI messages into a flat format that can be used in simple http requests without an XML rendering.
Example of sites using SIRI
Different SIRI implementations are used in a number of sites globally
Europe
Leicester Travel: Bus real-time from SIRI-SM
Transport for London Incidents from SIRI-GMS & Real-time data from LBS River http://www.tfl.gov.uk
Entur, Norway: National hub for SIRI and NeTEx data https://developer.entur.org/pages-real-time-intro
Västtrafik, PTA for western Sweden, uses SIRI ET and SX for real-time information in the travel planner: http://reseplanerare.vasttrafik.se/bin/query.exe/en
Traveline Scotland: SIRI-SX for disruption information http://www.travelinescotland.com
Helsingin Seudun Liikenne, Finland uses SIRI-VM: http://dev.hsl.fi/
North America
New York City MTA BusTime - SIRI-SM and SIRI-VM - http://bustime.mta.info/wiki/Developers/Index
Utah Transit Authority : http://developer.rideuta.com/StopMonitoringInstructions.aspx
METRO (Houston, TX) : https://web.archive.org/web/20150111120549/http://developer.ridemetro.apiphany.com/products
Asia
Ningbo City - Buses, Real-time traffic control systems with SIRI, stations and vehicles electronic devices 2011-2012 http://www.novasolution.com.hk
Israel - Real-time information on public buses and trains - https://www.gov.il/he/Departments/General/real_time_information_siri
Australia
Transport for New South Wales - SIRI-SX for disruption information: https://transportnsw.info
See also
Identification of Fixed Objects In Public Transport (IFOPT)
NeTEx
Intermodal Journey Planner
Transmodel
TransXChange
Transport standards organisations
GTFS Realtime
References
External links
SIRI homepage
SIRI XML schema and examples on github
Siri on VDV website
Siri on Transmodel website
Transmodel
NeTEx
RTIG Website
CEN Website
Public transport information systems
Real-time computing
Standards
Travel technology | Service Interface for Real Time Information | [
"Technology"
] | 1,866 | [
"Public transport information systems",
"Information systems",
"Real-time computing"
] |
14,466,384 | https://en.wikipedia.org/wiki/Product-determining%20step | The product-determining step is the step of a chemical reaction that determines the ratio of products formed via differing reaction mechanisms that start from the same reactants. The product determining step is not rate limiting if the rate limiting step of each mechanism is the same.
See also
Rate-determining step
References
Chemical reactions | Product-determining step | [
"Chemistry"
] | 62 | [
"Chemical reaction stubs",
"nan"
] |
14,466,835 | https://en.wikipedia.org/wiki/Photon%20Factory | The Photon Factory (PF) is a synchrotron located at KEK, in Tsukuba, Japan, about fifty kilometres from Tokyo.
History
The Photon Factory turned on its synchrotron for the first time in 1982, becoming the first light source accelerator in Japan to produce x-rays. In 1997, it joined the Institute of Materials Structure Science (IMSS), a Japanese-run international particle physics organization based at KEK.
The current head of the Photon Factory is N. Igarashi.
Research and design
There are two major facilities: the Photon Factory itself, which is a 2.5 GeV synchrotron with a beam current of around 450 mA, and the PF-AR 'Advanced Ring for Pulsed X-Rays', which is a 6.5 GeV machine running in a single-bunch mode with a beam current of around 60 mA. It operates with a pulse width of about 100 picoseconds.
The Photon Factory’s photon accelerator is one of the Institute of Materials Structure Science’s four quantum beams used for particle physics research. Its macromolecular crystallography beam is used substantially for Japan's structural genomics project. More recently, the Photon Factory has partnered with the Saha Institute and Jawaharlal Nehru Centre in India to create the Indian Beam, which is open to Indian particle and nuclear physicists to use for experiments in powder diffraction, scattering, and reflectivity.
References
External links
Photon Factory - KEK IMSS website
Synchrotron radiation facilities | Photon Factory | [
"Physics",
"Materials_science"
] | 316 | [
"Particle physics stubs",
"Materials testing",
"Particle physics",
"Synchrotron radiation facilities"
] |
14,467,117 | https://en.wikipedia.org/wiki/Michael%20Hinchey | Michael Gerard Hinchey (born 1969) is an Irish computer scientist and former Director of the Irish Software Engineering Research Centre (Lero), a multi-university research centre headquartered at the University of Limerick, Ireland. He now serves as Head of Department of the Department of Computer Science & Information Systems at University of Limerick.
Mike Hinchey studied at the University of Limerick as an undergraduate (where he was the leading student in his graduating year), at Oxford University (Wolfson College) for his MSc, and at Cambridge University (St John's College) for his PhD.
Hinchey has been a promulgator of formal methods throughout his career, especially CSP and the Z notation. He was Director of the NASA Software Engineering Laboratory at NASA Goddard Space Flight Center and is the founding editor-in-chief of the NASA journal Innovations in Systems and Software Engineering, launched in 2005.
He has held many academic positions, both visiting and permanent, at a number of universities including the University of Nebraska, Queen's University Belfast, the New Jersey Institute of Technology, Hiroshima University and the University of Skövde in Sweden, and was at Loyola College in Maryland (now Loyola University Maryland), United States, before his current post.
Hinchey is a Member of Academia Europaea, a Fellow of the IET, a Fellow of the IMA, and a Senior Member of the IEEE. He is a Chartered Engineer, Chartered Professional Engineer, Chartered Mathematician and Chartered IT Professional.
As of 2016, Hinchey has been serving as President of IFIP (International Federation for Information Processing).
Selected publications
Hinchey, M.G. and Bowen, J.P., editors, Applications of Formal Methods. Prentice Hall International Series in Computer Science, 1995. .
Dean, C.N. and Hinchey, M.G., editors, Teaching and Learning Formal Methods, Academic Press, London, 1996. .
Bowen, J.P. and Hinchey, M.G., editors, High-Integrity System Specification and Design. Springer-Verlag, London, FACIT series, 1999. .
Hinchey, M.G. and Bowen, J.P., editors, Industrial-Strength Formal Methods in Practice. Springer-Verlag, London, FACIT series, 1999. .
References
External links
Mike Hinchey web page
Interview – The Irish Times
1969 births
Living people
Alumni of the University of Limerick
Alumni of Wolfson College, Oxford
Alumni of St John's College, Cambridge
Irish computer scientists
Formal methods people
Fellows of the Institution of Engineering and Technology
NASA people
Loyola University Maryland faculty
Academics of the University of Limerick
Irish book editors
Irish non-fiction writers
Irish male non-fiction writers
Senior members of the IEEE
Computer science writers
Academic journal editors
Academic staff of the University of Skövde | Michael Hinchey | [
"Engineering"
] | 567 | [
"Institution of Engineering and Technology",
"Fellows of the Institution of Engineering and Technology"
] |
14,467,558 | https://en.wikipedia.org/wiki/Surface%20force | Surface force, denoted f_s, is the force that acts across an internal or external surface element in a material body.
Normal forces and shear forces between objects are types of surface force. All cohesive forces and contact forces between objects are considered as surface forces.
Surface force can be decomposed into two perpendicular components: normal forces and shear forces. A normal force acts normally over an area and a shear force acts tangentially over an area.
Equations for surface force
Surface force due to pressure
f = pA, where f = force, p = pressure, and A = area on which a uniform pressure acts
Examples
Pressure related surface force
Since pressure is force per unit area, a uniform pressure p acting on an area A produces a surface force f = pA; an illustrative calculation with example values follows.
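The following worked example uses values chosen purely for illustration; they are not taken from the original article.

```latex
% A uniform pressure p acting over a flat area A produces a surface force f = pA.
\[
  f = p\,A = 10\ \mathrm{Pa} \times 2\ \mathrm{m^{2}} = 20\ \mathrm{N}
\]
```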
See also
Body force
Contact force
References
Classical mechanics
Fluid dynamics
Force | Surface force | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 159 | [
"Fluid dynamics stubs",
"Force",
"Physical quantities",
"Chemical engineering",
"Quantity",
"Classical mechanics stubs",
"Mass",
"Classical mechanics",
"Mechanics",
"Piping",
"Wikipedia categories named after physical quantities",
"Matter",
"Fluid dynamics"
] |
14,467,753 | https://en.wikipedia.org/wiki/Univac%20Text%20Editor | ED or ED-1100 is an interactive text editor implemented on the UNIVAC 1100/2200 series.
"ED was developed at Univac in the mid-60s. It was loosely based on the Project MAC editor developed for the MULTICS system at MIT."-Tom McCarthy
"Project MAC editor was programmed by Jerry Saltzer as a way to produce documentation. In fact, that editor became the first interactive word-processor ever programmed."
"The command TYPSET is used to create and edit 12-bit BCD line-marked files"
ED was improved by Dr. Roger M. Firestone in the mid-1970s.
See also
List of UNIVAC products
History of computing hardware
References
External links
History of software
UNIVAC software
Text editors | Univac Text Editor | [
"Technology"
] | 156 | [
"History of software",
"History of computing"
] |
14,468,359 | https://en.wikipedia.org/wiki/Verification%20%28spaceflight%29 | Verification in the field of space systems engineering covers two verification processes: Qualification and Acceptance
Overview
In the field of spaceflight verification standards are developed by DoD, NASA and the ECSS, among others. Large aerospace corporations may also developed their own internal standards. These standards exist in order to specify requirements for the verification of a space system product, such as:
the fundamental concepts of the verification process
the criteria for defining the verification strategy and
the rules, organization, and process for the implementation of the verification program
Verification, or qualification, is one of the main reasons that costs for space systems are high. All data are to be documented and must remain accessible for potential later failure analyses. In the past that approach was executed down to the piece-part level (resistors, switches, etc.), whereas nowadays the aim is to reduce cost by using "CAM (Commercial, Avionics, Military) equipment" for non-safety-relevant units.
Qualification and Acceptance
Qualification is the formal proof that the design meets all requirements of the specification, and the parameters agreed in the Interface Control Documents (ICDs), with adequate margin, including tolerances due to manufacturing imperfections, wear-out within the specified lifetime, faults, etc. The end of the qualification process is the approval signature of the customer on the Certificate of Qualification (CoQ), or Qualification Description Document (QDD), agreeing that all the requirements are met by the product to be delivered under the terms of a contract.
Acceptance is the formal proof that the product identified is free of workmanship defects and meets preset performance requirements with adequate margin. Acceptance is based on the preceding qualification, by reference to the design and manufacturing documentation used. The end of the acceptance process is the approval signature of the customer on the Certificate of Acceptance (CoA), or QDD, agreeing that all the requirements are met by the product to be delivered under the terms of a contract.
There are five generally accepted Qualification methods:
Analysis
Test
Inspection
Demonstration
Similarity (although Similarity is a form of Analysis, in most space applications, it is recommended to highlight it as its own category)
Being qualified means demonstrating with margin that the design, and the implementation of the design, meets the intended preset requirements. There are many different Qualification strategies in order to reach the same goals. It consists of designing hardware (or software) to qualification requirements (including margin), testing dedicated hardware (or software) to qualification requirements to verify the design, followed by acceptance testing of flight hardware to screen workmanship defects. There are other strategies as well, the Proto-Qualification strategy for instance. Proto-Qualification consists of testing the first flight hardware to Proto-Qualification requirements to verify design, and testing subsequent flight hardware to acceptance levels to screen workmanship defects. This first Proto-Qualification unit is flight-worthy.
There are three generally accepted Acceptance methods:
Test
Inspection
Demonstration
If a deviation against the qualified item is detected (higher tolerances, scratches etc.) a Non-Conformance is to be processed; to justify that this item can be used despite this deviation an Analysis might be required.
See also
Spacecraft
System engineering
References
Further reading
ECSS-E-ST-10-02: Verification (European Space Standard)
DoD, MIL-STD-1540D: Product Verification Requirements for Launch, Upper Stage, and Space Vehicles
Spaceflight concepts
Systems engineering | Verification (spaceflight) | [
"Engineering"
] | 665 | [
"Systems engineering"
] |
14,468,598 | https://en.wikipedia.org/wiki/Content%20managed%20hosting | Content managed hosting is a service that couples website hosting with a content management system. Content management systems enable Web site owners or marketing departments to edit website content, share files, and hyperlink pages without needing to know markup or programming languages. It is an alternative to using an open-source content management system or purchasing an off-the-shelf system.
See also
Content Management Website
References
Website management | Content managed hosting | [
"Technology"
] | 81 | [
"Computing stubs",
"World Wide Web stubs"
] |
14,468,774 | https://en.wikipedia.org/wiki/Comparison%20of%20usability%20evaluation%20methods | Usability testing methods aim to evaluate the ease of use of a software product by its users. As existing methods are subjective and open to interpretation, scholars have been studying the efficacy of each method and their adequacy to different subjects, comparing which one may be the most appropriate in fields like e-learning, e-commerce, or mobile applications.
See also
Usability inspection
Partial concurrent thinking aloud
References
External links
Exploring two methods of usability testing: concurrent versus retrospective think-aloud protocols
Usability
Human–computer interaction
Computing comparisons | Comparison of usability evaluation methods | [
"Technology",
"Engineering"
] | 108 | [
"Human–computer interaction",
"Computing comparisons",
"Human–machine interaction"
] |
14,469,114 | https://en.wikipedia.org/wiki/Rhenium%E2%80%93osmium%20dating | Rhenium–osmium dating is a form of radiometric dating based on the beta decay of the isotope 187Re to 187Os. This normally occurs with a half-life of 41.6 × 10^9 y, but studies using fully ionised 187Re atoms have found that this can decrease to only 33 y. Both rhenium and osmium are strongly siderophilic (iron loving), while Re is also chalcophilic (sulfur loving) making it useful in dating sulfide ores such as gold and Cu–Ni deposits.
This dating method is based on an isochron calculated based on isotopic ratios measured using N-TIMS (Negative – Thermal Ionization Mass Spectrometry).
Rhenium–osmium isochron
Rhenium–osmium dating is carried out by the isochron dating method. Isochrons are created by analysing several samples believed to have formed at the same time from a common source. The Re–Os isochron plots the ratio of radiogenic 187Os to non-radiogenic 188Os against the ratio of the parent isotope 187Re to the non-radiogenic isotope 188Os. The stable and relatively abundant osmium isotope 188Os is used to normalize the radiogenic isotope in the isochron.
The Re–Os isochron is defined by the following equation:

\[
  \frac{{}^{187}\mathrm{Os}}{{}^{188}\mathrm{Os}}
  = \left(\frac{{}^{187}\mathrm{Os}}{{}^{188}\mathrm{Os}}\right)_{i}
  + \frac{{}^{187}\mathrm{Re}}{{}^{188}\mathrm{Os}}\left(e^{\lambda t}-1\right)
\]

where:
(187Os/188Os)_i is the initial osmium isotope ratio at the time the samples formed,
t is the age of the sample,
λ is the decay constant of 187Re,
(e^{λt} − 1) is the slope of the isochron, which defines the age of the system; a worked rearrangement for t is sketched below.
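As a sketch of how an age follows from a measured isochron slope m = e^{λt} − 1, the slope value below is illustrative only, while the decay constant follows from the half-life quoted above.

```latex
% Solving the slope expression for the age t:
\[
  t = \frac{\ln(1 + m)}{\lambda},
  \qquad
  \lambda = \frac{\ln 2}{41.6 \times 10^{9}\ \mathrm{yr}} \approx 1.67 \times 10^{-11}\ \mathrm{yr^{-1}}
\]
% Example: a measured slope of m = 0.05 gives an age of roughly 2.9 billion years:
\[
  t = \frac{\ln(1.05)}{1.67 \times 10^{-11}\ \mathrm{yr^{-1}}} \approx 2.9 \times 10^{9}\ \mathrm{yr}
\]
```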
A good example of an application of the Re–Os isochron method is a study on the dating of a gold deposit in the Witwatersrand mining camp, South Africa.
Rhenium–osmium isotopic evolution
Rhenium and osmium were strongly refractory and siderophile during the initial accretion of the Earth, which caused both elements to preferentially enter the Earth's core. Thus the two elements should be depleted in the silicate Earth, yet the 187Os / 188Os ratio of the mantle is chondritic. The reason for this apparent contradiction lies in the difference in behaviour of Re and Os during partial-melt events. Re tends to enter the melt phase (incompatible) while Os remains in the solid residue (compatible). This causes high Re/Os ratios in oceanic crust (which is derived from partial melting of the mantle) and low Re/Os ratios in the lower mantle. In this regard, the Re–Os system is extremely helpful for studying the geochemical evolution of mantle rocks and for defining the chronology of mantle differentiation.
Peridotite xenoliths, which are thought to sample the upper mantle, sometimes contain supra-chondritic Os-isotopic ratios. This is thought to be evidence of ancient, subducted, high-Re/Os basaltic crust being recycled back into the mantle. This combination of radiogenic melts (containing 187Os created by decay of 187Re) and nonradiogenic melts helps to support the theory of at least two Os-isotopic reservoirs in the mantle. The volume of both these reservoirs is thought to be around 5–10% of the whole mantle. The first reservoir is characterized by depletion in Re and in proxies for melt fertility (such as concentrations of elements like Ca and Al). The second reservoir is chondritic in composition.
Direct measurement of the age of continental crust through Re–Os dating is difficult. Infiltration of xenoliths by their host magma, which is commonly Re-rich, alters the true elemental Re/Os ratios. Instead, model ages can be determined in two ways: "Re depletion" model ages or "melting age" model ages. The former finds the minimum age of the extraction event by assuming the elemental Re/Os ratio equals 0 (komatiite residues have Re/Os of 0, so this assumes a xenolith was extracted from a near-komatiite melt). The latter gives the age of the melting event inferred from the point when a melt proxy like Al2O3 is equal to 0 (ancient subcontinental lithosphere has weight percentages of CaO and Al2O3 ranging from 0 to 2%).
Pt–Re–Os systematics
The radioactive decay of 190Pt to 186Os has a half-life of 4.83(3)×10^11 years (which is longer than the age of the universe, so it is essentially stable). However, in-situ 187Os / 188Os and 186Os / 188Os of modern plume related magmas show simultaneous enrichment which implies a source that is supra-chondritic in Pt/Os and Re/Os. Since both parental isotopes have extremely long half-lives, the Os-isotope rich reservoir must be very old to allow enough time for the daughter isotopes to form. These observations are interpreted to support the theory that the Archean subducted crust contributed Os-isotope rich melts back into the mantle.
References
Radiometric dating
Rhenium
Osmium | Rhenium–osmium dating | [
"Chemistry"
] | 1,050 | [
"Radiometric dating",
"Radioactivity"
] |
14,469,299 | https://en.wikipedia.org/wiki/Autonomous%20convergence%20theorem | In mathematics, an autonomous convergence theorem is one of a family of related theorems which specify conditions guaranteeing global asymptotic stability of a continuous autonomous dynamical system.
History
The Markus–Yamabe conjecture was formulated as an attempt to give conditions for global stability of continuous dynamical systems in two dimensions. However, the Markus–Yamabe conjecture does not hold for dimensions higher than two, a problem which autonomous convergence theorems attempt to address. The first autonomous convergence theorem was constructed by Russell Smith. This theorem was later refined by Michael Li and James Muldowney.
An example autonomous convergence theorem
A comparatively simple autonomous convergence theorem is as follows:
Let x be a vector in some space X ⊆ R^n, evolving according to an autonomous differential equation dx/dt = f(x). Suppose that X is convex and forward invariant under f, and that there exists a fixed point x* such that f(x*) = 0. If there exists a logarithmic norm μ such that the Jacobian J(x) = Df(x) satisfies μ(J(x)) < 0 for all values of x, then x* is the only fixed point, and it is globally asymptotically stable.
This autonomous convergence theorem is very closely related to the Banach fixed-point theorem.
How autonomous convergence works
Note: this is an intuitive description of how autonomous convergence theorems guarantee stability, not a strictly mathematical description.
The key point in the example theorem given above is the existence of a negative logarithmic norm, which is derived from a vector norm. The vector norm effectively measures the distance between points in the vector space on which the differential equation is defined, and the negative logarithmic norm means that distances between points, as measured by the corresponding vector norm, are decreasing with time under the action of f. So long as the trajectories of all points in the phase space are bounded, all trajectories must therefore eventually converge to the same point.
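For reference, the logarithmic norm (or matrix measure) associated with a vector norm is defined as follows, and for the infinity norm it has a simple explicit form that is convenient when checking the condition μ(J(x)) < 0; this is a standard definition rather than anything specific to the theorems discussed here.

```latex
\[
  \mu(A) = \lim_{h \to 0^{+}} \frac{\lVert I + hA \rVert - 1}{h},
  \qquad
  \mu_{\infty}(A) = \max_{i}\Bigl( a_{ii} + \sum_{j \neq i} \lvert a_{ij} \rvert \Bigr)
\]
```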
The autonomous convergence theorems by Russell Smith, Michael Li and James Muldowney work in a similar manner, but they rely on showing that the area of two-dimensional shapes in phase space decrease with time. This means that no periodic orbits can exist, as all closed loops must shrink to a point. If the system is bounded, then according to Pugh's closing lemma there can be no chaotic behaviour either, so all trajectories must eventually reach an equilibrium.
Michael Li has also developed an extended autonomous convergence theorem which is applicable to dynamical systems containing an invariant manifold.
Notes
Stability theory
Fixed points (mathematics)
Theorems in dynamical systems | Autonomous convergence theorem | [
"Mathematics"
] | 500 | [
"Theorems in dynamical systems",
"Mathematical theorems",
"Mathematical analysis",
"Fixed points (mathematics)",
"Stability theory",
"Topology",
"Mathematical problems",
"Dynamical systems"
] |
14,469,908 | https://en.wikipedia.org/wiki/Rural%20internet | Rural Internet describes the characteristics of Internet service in rural areas (also referred to as "the country" or "countryside"), which are settled places outside towns and cities. Inhabitants live in villages, hamlets, on farms and in other isolated houses. Mountains and other terrain can impede rural Internet access.
Internet service in many rural areas is provided over voiceband by 56k modem. Poor-quality telephone lines, many of which were installed or last upgraded between the 1930s and the 1960s, often limit the speed of the network to bit rates of 26 kbit/s or less. Since many of these lines serve relatively few customers, phone company maintenance and speed of repair of these lines has degraded and their upgrade for modern quality requirements is unlikely. This results in a digital divide.
High-speed, wireless Internet service is becoming increasingly common in rural areas. Here, service providers deliver Internet service over radio-frequency via special radio-equipped antennas.
Methods for broadband Internet access in rural areas include:
Mobile Internet (broadband if HSPA or higher)
Hybrid Access Networks
Power-line Internet
Terrestrial Wireless Internet
Satellite Internet
ADSL loop extender
Internet of Things
White Space Internet
Digital divide
Scholarship on the topic of the digital divide has shifted from an understanding of people who do and do not have access to the internet to an analysis of the quality of internet access. Because opting out of internet activity is no longer a choice with internet-only customer service, online banking, and online schooling, internet access has become an increasing need in rural communities with inadequate infrastructure.
Although government programs such as E-rate provisions provide internet connection to schools and libraries under the U.S. federal government, more general internet access to a broader community has not been directly addressed in policy. The provision of "national" internet services tends to favor urban metropolitan regions. For a long time, even, many within the U.S. considered the internet to be a luxury. In 2001, then FCC Chair Michael Powell said, “I think there’s a Mercedes divide. I’d like to have one. I can’t afford one” when asked about solutions to shrinking the digital divide. At the time, the internet was still largely new, as less than half of the U.S. did not have access to any home internet. In 2021, 77% of Americans have home broadband according to the most recent Pew Research Center survey. The attitude in the U.S. has largely shifted since Powell's remarks, however, as under the current administration and President Joe Biden there is a common belief that "broadband is infrastructure" and that is must be treated as such.
The digital divide is even more prominent in developing countries, where physical access to internet services are at a much lower rate. While developed countries such as the U.S. face the challenge of providing universal service (ensuring that everyone has access to internet service in the home), developing countries face the challenge of providing universal access (ensuring that everyone has the opportunity to make use of the internet). For example, in Egypt there are only about six phone lines per 100 people, with less than two lines per 100 people in rural areas, which makes it even more difficult for people to access the internet.
In the United States
The United States Department of Agriculture’s Economic Research Service has provided numerous studies and data on the Internet in rural America. One such article from the Agricultural Outlook magazine, Communications & the Internet in Rural America, summarizes internet uses in rural areas of the United States in 2002. It indicates, "Internet use by rural and urban households has also increased significantly during the 1990s, so significantly that it has one of the fastest rates of adoption for any household service."
Another area for inclusion of the Internet is American farming. One study reviewed data from 2003 and found that "56 percent of farm operators used the Internet while 31 percent of rural workers used it at their place of work." In later years challenges to economical rural telecommunications remain. People in inner city areas are closer together, so the access network to connect them is shorter and cheaper to build and maintain, while rural areas require more equipment per customer. However, even with this challenge the demand for services continues to grow.
In 2011 the Federal Communications Commission (FCC) proposed to use the Universal Service Fund to subsidize rural broadband Internet services. In 2019, the FCC estimated that only 73.6% of the rural population had access to broadband services at 25 Mbps in 2017, compared to 98.3% of the population in urban areas. However, many studies have contested FCC findings, claiming a greater number of Americans are without access to internet services at sufficient speeds. For instance, in 2019 Pew Research Center found that only about two-thirds of rural Americans claimed to have a broadband internet connection at home, and although the gap in mobile technology ownership between rural and urban adults has narrowed, rural adults remain less likely to own these devices.
One study in particular examined the ways in which inaccessibility for rural and "quasi-rural" residents affects their daily life, conceptualizing issues of accessibility as a form of socioeconomic inequity. By using Illinois as a case study - a state with both urban and rural environments—the authors demonstrate how the rural-urban digital divide negatively impacts those that live in areas that fall between the two distinct categories of rural and urban. Interviews with residents from Illinois describe "missed pockets," or areas in which service installation is not available or far too expensive. This inaccessibility leads many to experience sentiments of social isolation as residents feel disconnected from current events, cultural trends, and even close friends and family members.
Internet access inequalities are further deepened by public policy and commercial investment. In 2003, The Information Society published an article explaining how exchange areas and local access transport areas (LATAs) arrange citizens into markets for telecommunication companies, which centralizes access rather than encouraging businesses to cater to more remote communities. These areas were created through regulatory measures intended to ensure greater access and are perpetuated by investment patterns as more disparate communities hold less potential for profits, thus creating "missed pockets."
In Canada
In Canada, when pressed by Member of Parliament David de Burgh Graham, the Federation of Canadian Municipalities did not see access to the internet as a right. Telecommunications co-operatives like Antoine-Labelle provide an alternative to big Internet Service Providers.
In Spain
In Spain, the Guifi.net project has been, for some people, the only way to get access to the Internet. Usually, neighbors collect the money needed to buy the network equipment that will provide a wireless link to another zone that already has internet access. There have also been cases in which the city council itself has invested in the infrastructure.
In the United Kingdom
In the UK, the government aimed to provide superfast broadband (speeds of 24 Mbit/s or more) to 95% of the country by 2017. In 2014, a study by the Oxford Internet Institute found that in areas less than from large cities, internet speed dropped below 2 Mbit/s, the speed designated as "adequate" by the government.
Frustrated by the slow progress being made by private telecoms companies, some rural communities have built their own broadband networks, such as the B4RN initiative.
In India
India has the second-biggest online market globally, yet a large portion of its populace – almost 700 million individuals – remains unconnected. Reliable broadband connections are vital for the many children who have been homeschooled during the COVID-19 pandemic. That may change, as Indian internet service provider AirJaldi has collaborated with the global technology company Microsoft to provide affordable online access to rural areas.
Internet of Things
Due to poor telecommunication access in most rural areas, low-energy solutions such as those offered by Internet of Things networks are seen as a cost-effective solution well-adapted to agricultural environments. Tasks such as controlling livestock conditions and numbers, the state of crops, and pests are progressively being taken over by m2m communications. Companies such as Sigfox, Cisco Systems and Fujitsu are delving into the agricultural market, offering innovative solutions to common problems in countries such as the U.S., Japan, Ireland and Uruguay.
Innovation and solutions
There is increasing conversation around the growing social necessity of being connected in today's world and moreover, growing social expectation that one is connected either with at home broadband, reliable cell-service, and at least email access. Currently, rural areas often depend on small, unreliable ISP providers and scrape by "siphoning from surplus data and bandwidth capacity, creating their own systems of redundancy, or (in some cases) launching community-based, local ISP when large incumbent providers fail to show an interest in the area."
Many of the difficulties faced by rural communities are "geo-policy barriers," defined as "chokepoints [or] mechanisms of control created through the interaction of geography, market forces, and public policies" that constrict not just access, but "also construct both communication and communities." In the US, regulatory mandates have helped extend basic telecommunications to rural areas while mitigating market failure. However, despite efforts from the government, the telecommunications industry has stayed relatively monopolized therefore little competition has resulted in basic telecommunications without adequate connectivity for the developing needs of rural citizens. One state-based effort that has proved successful in adequately connecting Americans are EAS, or "expanded area service", programs, which "generally reduce intra-LATAS [local access transport areas] long-distance costs between specific exchanges or throughout a contiguous geographic area." In regards to Internet access, one of the most important EAS programs creates "flat-rate calling zones that allow remote customers to reach an Internet service provider in a more populous area."
Issues of rural connectivity have been exacerbated by the COVID-19 pandemic and reveal how "poor management of the Universal Service Fund, which subsidizes phone and internet access in rural areas, has meant some companies get the money without delivering on the promised numbers of households served or service quality." Therefore, one immediate fix to rural connectivity would be accountability within U.S.F programs and arguably, more funding. While governments begin pondering questions such as, "is Internet access a right?", ideas on how to approach this issue fall along political party lines. Mainly, Democrats believe more government funding would help connect rural Americans while Republicans are backing new 5G mobile Internet technology to replace home Internet lines and solve access gaps. These arguments are very similar to political arguments about "electricity and phone service in the early 1900s."
The Federal Communications Commission (FCC) recently released an overview of initiatives based on "bridging the digital divide for all Americans," some of these include:
Launching the Rural Digital Opportunity Fund, which would direct up to $20.4 billion to expand broadband in unserved rural areas.
Establishing the Digital Opportunity Data Collection, a new process for collecting fixed broadband data to improve mapping and better identify gaps in broadband coverage across the nation.
Approving $950 million in funding to improve, expand, and harden communications networks in Puerto Rico and the U.S. Virgin Islands.
Updating rules that govern access to utility poles and conduits, which can be a costly and time-consuming barrier to broadband deployment.
Revising rules that needlessly delay or even stop companies from replacing copper with fiber and that delay discontinuance of technologies from the 1970s in favor of services using Internet Protocol (IP) technologies.
See also
Dial-up Internet access
Broadband Internet access
Hybrid Access Networks
Coverage
Flat fee
Internet in the United States
Open Access Network
Rural electrification
Rural free delivery
ASTRA2Connect example of a rural satellite internet system
Starlink
satellite internet
Project Kuiper
satellite internet constellation
Notes
External links
“Rural Telecommunications Briefing Room.” (February 9, 2006). Economic Research Service. Retrieved December 30, 2008.
“Telecommunications Resources.” (August 22, 2008). National Agricultural Library. Rural Information Center. Retrieved December 30, 2008.
“Rural High-Speed Internet Ontario.” (June 21, 2019). Rural Internet Provider in Southwestern Ontario
Digital divide
Internet access
Rural geography | Rural internet | [
"Technology"
] | 2,501 | [
"Internet access",
"IT infrastructure"
] |
14,469,976 | https://en.wikipedia.org/wiki/Microbial%20cyst | A microbial cyst is a resting or dormant stage of a microorganism that can be thought of as a state of suspended animation in which the metabolic processes of the cell are slowed and the cell ceases all activities like feeding and locomotion. Many groups of single-celled, microscopic organisms, or microbes, possess the ability to enter this dormant state.
Encystment, the process of cyst formation, can function as a method for dispersal and as a way for an organism to survive in unfavorable environmental conditions. These two functions can be combined when a microbe needs to be able to survive harsh conditions between habitable environments (such as between hosts) in order to disperse. Cysts can also be sites for nuclear reorganization and cell division, and in parasitic species they are often the infectious stage between hosts. When the encysted microbe reaches an environment favorable to its growth and survival, the cyst wall breaks down by a process known as excystation.
Environmental conditions that may trigger encystment include, but are not limited to: lack of nutrients or oxygen, extreme temperatures, desiccation, adverse pH, and presence of toxic chemicals which are not conducive for the growth of the microbe.
History and terminology
The idea that microbes could temporarily assume an alternate state of being to withstand changes in environmental conditions began with Antonie van Leeuwenhoek’s 1702 study on Animalcules, currently known as rotifers:
“'I have often placed the Animalcules I have before described out of the water, not leaving the quantity of a grain of sand adjoining to them, in order to see whether when all the water about them was evaporated and they were exposed to the air their bodies would burst, as I had often seen in other Animalcules. But now I found that when almost all the water was evaporated, so that the creature could no longer be covered with water, nor move itself as usual, it then contracted itself into an oval figure, and in that state it remained, nor could I perceive that the moisture evaporated from its body, for it preserved its oval and round shape, unhurt."
Leeuwenhoek later continued his work with rotifers to discover that when he returned the dried bodies to their preferred aquatic conditions, they resumed their original shape and began swimming again. These observations did not gain traction with the general microbiological community of the time, and the phenomena as Leeuwenhoek observed it was never given a name.
In 1743, John Turberville Needham observed the revival of the encysted larval stage of the wheat parasite, Anguillulina tritici and later published these findings in New Microscopal Discoveries (1745). Several others repeated and expanded upon this work, informally referring to their studies on the “phenomenon of reviviscence.”
In the late 1850s, reviviscence became embroiled in the debate surrounding the theory of spontaneous generation of life, leading two highly involved scientists on either side of the issue to call upon the Biological Society of France for an independent review of their opposing conclusions on the matter. Doyere, who believed rotifers could be desiccated and revitalized, and Pouchet, who believed they could not, allowed independent observers of various scientific backgrounds to observe and attempt to replicate their findings. The resulting report leaned toward the arguments made by Pouchet, with notable dissension from the main author who blamed his framing of the issue in the report on fear of religious retribution. Despite the attempt by Doyere and Pouchet to conclude debate on the topic of resurrection, investigations continued.
In 1872, Wilhelm Preyer introduced the term ‘anabiosis’ (return to life) to describe the revitalization of viable, lifeless organisms to an active state. This was quickly followed by Schmidt’s 1948 proposal of the term ‘abiosis,’ leading to some confusion between terms describing the beginning of life from non-living elements, viable lifelessness, and nonliving components that are necessary for life.
As part of his 1959 review of Leeuwenhoek’s original findings and the evolution of the science surrounding microbial cysts and other forms of metabolic suspension, D. Keilin proposed the term ‘cryptobiosis’ (latent life) to describe:
“...the state of an organism when it shows no visible signs of life and when its metabolic activity becomes hardly measurable, or comes reversibly to a standstill.”
As microbial research began to gain popularity exponentially, details about ciliated protist physiology and cyst formation led to increased curiosity about the role of encystment in the life cycle of ciliates and other microbes. The realization that no one category of microscopic organism ‘owns’ the ability to form metabolically dormant cysts necessitates the term ‘microbial cyst’ to describe the physical object as it exists in all its forms. Also important in the generation of the term, is the delineation of endospores and microbial cysts as different forms of cryptobiosis or dormancy. Endospores exhibit more extreme isolation from their environment in terms of cell wall thickness, impermeability to substrates, and presence of dipicolinic acid, a compound known to confer resistance to extreme heat. Microbial cysts have been likened to modified vegetative cells with the addition of a specialized capsule. Importantly, encystment is a process observed to precede cell division, while the formation of an endospore involves non-reproductive cellular division. The study of the encystment process was mostly confined to the 1970s and '80s, resulting in the lack of understanding of genetic mechanisms and additional defining characteristics, though they are generally thought to follow a different formation sequence than endospores.
Formation and composition of the cyst wall
Indicators of cyst formation in ciliated protists include varying degrees of ciliature resorption, with some ciliates losing both cilia and the membranous structures supporting them while others maintain kinetosomes and/or microtubular structures. De novo synthesis of cyst wall precursors in the endoplasmic reticulum also frequently indicate a ciliate is undergoing encystment.
The composition of the cyst wall is variable in different organisms.
The cyst walls of bacteria are formed by the thickening of the normal cell wall with added peptidoglycan layers.
The walls of protozoan cysts are made of chitin, a type of glycopolymer.
The cyst wall of some ciliated protists is composed of four layers, ectocyst, mesocyst, endocyst, and the granular layer. The ectocyst is the outer layer and contains a plug-like structure through which the vegetative cell reemerges during excystation. Interior to the ectocyst, the thick mesocyst is compact yet stratified in density. Chitinase treatments indicate the presence of chitin in the mesocyst of some ciliate species, but this compositional characteristic appears to be highly heterogeneous. The thin endocyst, interior to the mesocyst, is less dense than the ectocyst and is believed to be composed of proteins. The innermost granular layer lies directly outside the pellicle and is composed of de novo synthesized precursors of granular material.
Cyst formation across species
In bacteria
In bacteria (for instance, Azotobacter sp.), encystment occurs through changes in the cell wall; the cytoplasm contracts and the cell wall thickens. Various members of the Azotobacteraceae family have been shown to survive in an encysted form for up to 24 years. The extremophile Rhodospirillum centenum, an anoxygenic, photosynthetic, nitrogen-fixing bacterium that grows in hot springs, has been found to form cysts in response to desiccation as well. Bacteria do not always form a single cyst; a variety of cyst-formation arrangements are known. Rhodospirillum centenum can change the number of cells per cyst, usually ranging from four to ten cells per cyst depending on the environment.
Some species of filamentous cyanobacteria have been known to form heterocysts to escape levels of oxygen concentration detrimental to their nitrogen fixing processes. This process is distinct from other types of microbial cysts in that the heterocysts are often produced in a repeating pattern within a filament composed of several vegetative cells, and once formed, heterocysts cannot return to a vegetative state.
In protists
Protists, especially protozoan parasites, are often exposed to very harsh conditions at various stages in their life cycle. For example, Entamoeba histolytica, a common intestinal parasite that causes dysentery, has to endure the highly acidic environment of the stomach before it reaches the intestine and various unpredictable conditions like desiccation and lack of nutrients while it is outside the host. An encysted form is well suited to survive such extreme conditions, although protozoan cysts are less resistant to adverse conditions compared to bacterial cysts. Cytoplasmic dehydration, high autophagic activity, nuclear condensation, and decrease of cell volume are all indicators of encystment initiation in ciliated protists. In addition to survival, the chemical composition of certain protozoan cyst walls may play a role in their dispersal. The sialyl groups present in the cyst wall of Entamoeba histolytica confer a net negative charge to the cyst which prevents its attachment to the intestinal wall thus causing its elimination in the feces. Other protozoan intestinal parasites like Giardia lamblia and Cryptosporidium also produce cysts as part of their life cycle (see oocyst). Due to the hard outer shell of the cyst, Cryptosporidium and Giardia are resistant to common disinfectants used by water treatment facilities such as chlorine. In some protozoans, the unicellular organism multiplies during or after encystment and releases multiple trophozoites upon excystation.
Many additional species of protists have been shown to exhibit encystment when confronted with unfavorable environmental conditions.
In rotifers
Rotifers also produce diapause cysts, which are different from quiescent (environmentally triggered) cysts in that the process of their formation begins before environmental conditions have deteriorated to unfavorable levels and the dormant state may extend past the restoration of ideal conditions for microbial life. Food limited females of some Synchaeta pectinata strains produce unfertilized diapausing eggs with a thicker shell. Fertilized diapausing eggs can be produced in both food limited and non-food limited conditions, indicative of a bet-hedging mechanism for food availability or perhaps an adaptation to variation in food levels throughout a growing season.
Pathology
While the cyst component itself is not pathogenic, the formation of a cyst is what gives Giardia its primary tool of survival and its ability to spread from host to host. Ingestion of contaminated water, foods, or fecal matter gives rise to the most commonly diagnosed intestinal disease, giardiasis.
Whereas it was previously believed that encystment only served a purpose for the organism itself, it has been found that protozoan cysts have a harboring effect. Common pathogenic bacteria can also be found taking refuge in the cyst of free-living protozoa. Survival times for bacteria in these cysts range from a few days to a few months in harsh environments. Not all bacteria are guaranteed to survive in the cyst formation of a protozoan; many species of bacteria are digested by the protozoan as it undergoes cystic growth.
See also
Cryptobiosis
Spore (in bacteria, fungi and algae)
Endospore (in firmicute bacteria)
Resting spore (in fungi)
Trophozoite
References
Microbiology
Pathology | Microbial cyst | [
"Chemistry",
"Biology"
] | 2,560 | [
"Microbiology",
"Pathology",
"Microscopy"
] |
14,470,268 | https://en.wikipedia.org/wiki/Sir%20William%20Lawrence%2C%201st%20Baronet | Sir William Lawrence, 1st Baronet (16 July 1783 – 5 July 1867) was an English surgeon who became President of the Royal College of Surgeons of London and Serjeant Surgeon to the Queen.
In his mid-thirties, he published two books of his lectures which contained pre-Darwinian ideas on man's nature and, effectively, on evolution. He was forced to withdraw the second (1819) book after fierce criticism; the Lord Chancellor ruled it blasphemous. Lawrence's transition to respectability occurred gradually, and his surgical career was highly successful. In 1822, Lawrence was elected a member of the American Philosophical Society in Philadelphia. He was President of the Medical and Chirurgical Society of London in 1831.
Lawrence had a long and successful career as a surgeon. He reached the top of his profession, and just before his death in 1867 the Queen rewarded him with a baronetcy (see Lawrence baronets).
Early life and education
Lawrence was born in Cirencester, Gloucestershire, the son of William Lawrence, the town's chief surgeon and physician, and Judith Wood. His father's side of the family was descended from the Fettiplace family; his great-great-grandfather (also William Lawrence) married Elizabeth Fettiplace, granddaughter of Sir Edmund Fettiplace. His younger brother Charles Lawrence was one of the founding members of the Royal Agricultural College at Cirencester.
He was educated at Elmore Court School in Gloucester. At 15, he was apprenticed to, and lived with, John Abernethy (FRS 1796) for five years.
Career
Surgical career
Said to be a brilliant scholar, Lawrence was the translator of several anatomical works written in Latin, and was fully conversant with the latest research on the continent. He had good looks and a charming manner, and was a fine lecturer. His quality as a surgeon was never questioned. Lawrence helped the radical campaigner Thomas Wakley found the Lancet journal, and was prominent at mass meetings for medical reform in 1826. Elected to the Council of the RCS in 1828, he became its president in 1846, and again in 1855. He delivered their Hunterian Oration in 1834.
During Lawrence's surgical career he held the posts of Professor of Anatomy and Surgery, Royal College of Surgeons (1815–1822); Surgeon to the hospitals of Bridewell and Bethlem, and to the London Infirmary for Diseases of the Eye; Demonstrator of Anatomy, then Assistant Surgeon, later Surgeon, St Bartholomew's Hospital (1824–1865). Later in his career, he was appointed Surgeon Extraordinary, later Serjeant Surgeon, to the Queen. His specialty was ophthalmology, although he practised in and lectured and wrote on all branches of surgery. Pugin and Queen Victoria were among his patients with eye problems.
Shelley and his second wife Mary Shelley consulted him on a variety of ailments from 1814. Mary's novel Frankenstein might have been inspired by the vitalist controversy between Lawrence and Abernethy, and "Lawrence could have guided the couple's reading in the physical sciences". Both Samuel Coleridge and John Keats were also influenced by the vitalist controversy.
Despite reaching the height of his profession, with the outstanding quality of his surgical work, and his excellent textbooks, Lawrence is mostly remembered today for an extraordinary period in his early career which brought him fame and notoriety, and led him to the brink of ruin.
Controversy and Chancery
At the age of 30, in 1813, Lawrence was elected a Fellow of the Royal Society. In 1815, he was appointed Professor of Anatomy and Surgery by the College of Surgeons. His lectures started in 1816, and the set was published the same year. The book was immediately attacked by Abernethy and others for materialism, and for undermining the moral welfare of the people. One of the issues between Lawrence and his critics concerned the origin of thoughts and consciousness. For Lawrence, as for ourselves, mental processes were a function of the brain. John Abernethy and others thought differently: they explained thoughts as the product of vital acts of an immaterial kind. Abernethy also published his lectures, which contained his support for John Hunter's vitalism, and his objections to Lawrence's materialism.
In subsequent years Lawrence vigorously contradicted his critics until, in 1819, he published a second book, known by its short title of the Natural history of man. The book caused a storm of disapproval from conservative and clerical quarters for its supposed atheism, and within the medical profession because he advocated a materialist rather than vitalist approach to human life. He was linked by his critics with such other 'revolutionaries' as Thomas Paine and Lord Byron. It was "the first great scientific issue that widely seized the public imagination in Britain, a premonition of the debate over Darwin's theory of evolution by natural selection, exactly forty years later".
Hostility from the established Church of England was guaranteed. "A vicious review in the Tory Quarterly Review execrated his materialist explanation of man and mind"; the Lord Chancellor, Lord Eldon, in the Court of Chancery (1822), ruled his lectures blasphemous, on the grounds that the book contradicted Holy Scripture (the Bible). This destroyed the book's copyright. Lawrence was also repudiated by his own teacher, John Abernethy, with whom he had already had a controversy about John Hunter's teachings. There were supporters, such as Richard Carlile and Thomas Forster, and "The Monthly Magazine", in which Lawrence was compared to Galileo. However, faced with persecution, perhaps prosecution, and certainly ruin through the loss of surgical patients, Lawrence withdrew the book and resigned from his teaching position. The time had not yet arrived when a science which dealt with man as a species could be conducted without interference from the religious authorities.
It is interesting that the Court of Chancery was acting, here, in its most ancient role, that of a court of conscience. This entailed the moral law applied to prevent peril to the soul of the wrongdoer through mortal sin. The remedy was given to the plaintiff (the Crown, in this case) to look after the wrongdoer's soul; the benefit to the plaintiff was only incidental. This is also the explanation for specific performance, which compels the sinner to put matters right. The whole conception is mediæval in origin.
It is difficult to find a present-day parallel. The withholding of copyright, though only an indirect financial penalty, was both an official act and a hostile signal. We do not seem to have a word for this kind of indirect pressure, though suppression of dissent comes closer than censorship. Perhaps the modern 'naming and shaming' comes closest. The importance of respectability, reputation and public standing were critical in this case, as so often in traditional societies.
Transition to respectability
After repudiating his book, Lawrence returned to respectability, but not without regrets. He wrote in 1830 to William Hone, who was acquitted of libel in 1817, explaining his expediency and commending Hone's "much greater courage in these matters".
His last major contribution to the debate was an article on "Life" in the 1819 Rees's Cyclopaedia although this volume had in fact appeared in 1812.
He continued to espouse radical ideas and, led by the famous radical campaigner Thomas Wakley, Lawrence was part of the small group which launched The Lancet, and wrote material for it. Lawrence wrote pungent editorials, and chaired the public meetings in 1826 at the Freemasons' Tavern. He was also co-owner of the Aldersgate Private Medical Academy, with Frederick Tyrrell.
The 1826 meetings
Meetings for members of the college were attended by about 1200 people. The meetings were called to protest against the way surgeons abused their privileges to set student fees and control appointments.
In his opening speech Lawrence criticised the by-laws of the College of Surgeons for preventing all but a few teachers in London, Dublin, Edinburgh, Glasgow and Aberdeen from issuing certificates of attendance at preparatory lectures. He pointed out that Aberdeen and Glasgow had no cadavers for dissection, without which anatomy could not be properly taught.
A proposed change in the regulations of the College of Surgeons would soon cut the ground from under the private summer schools, since diplomas taken in the summer were not to be recognised.
"It would appear from the new regulations that sound knowledge was the sort acquired in the winter, when the hospital lecturers delivered their courses, while unsound knowledge was imparted in the summer when only the private schools could provide the instruction". Lawrence in his opening speech, Freemason's Tavern, 1826.
Lawrence concluded by protesting against the exclusion of the great provincial teachers from giving recognised certificates.
Gradual change
However, gradually Lawrence conformed more to the style of the College of Surgeons, and was elected to their Council in 1828. This somewhat wounded Wakley, who complained to Lawrence, and made some remarks in the Lancet. But, true to form, Wakley soon saw Lawrence's rise in the college as providing him with an inside track into the working of the institution he was hoping to reform. For some years Lawrence hunted with the Lancet and ran with the college. From the inside, Lawrence was able to help forward several of the much-needed reforms espoused by Wakley. The College of Surgeons was at last reformed, to some extent at least, by a new charter in 1843.
This episode marks Lawrence's return to respectability; in fact, Lawrence succeeded Abernethy as the 'dictator' of Bart's.
His need for respectability and worldly success might have been influenced by his marriage in 1828, at the age of 45, to the 25-year-old socially ambitious Louisa Senior.
At any rate, from then on Lawrence's career went ever forward. He never looked back: he became President of the Royal College of Surgeons, and Serjeant-Surgeon to Queen Victoria. Before he died she made him a baronet. He had for many years declined such honours, and family tradition was that he finally accepted to help his son's courtship of an aristocratic young woman (which did not succeed). "Never again [did] he venture to express his views on the processes of evolution, on the past or the future of man." He did, however, warn the young T.H. Huxley – in vain, it must be said – not to broach the dangerous topic of the evolution of man.
In 1844 Carl Gustav Carus, the physiologist and painter, made "a visit to Mr Lawrence, author of a work on the "Physiology of Man" which had interested me much some years ago, but which had rendered the author obnoxious to the clergy... He appears to have allowed himself to be frightened by this, and is now merely a practising surgeon, who keeps his Sunday in the old English fashion, and has let physiology and psychology alone for the present. I found him a rather dry, but honest man". Looking back in 1860 on his controversies with Abernethy, Lawrence wrote of "events which though important at the time of occurrence have long ceased to occupy my thoughts".
In 1828, he was elected a foreign member of the Royal Swedish Academy of Sciences and in 1855 a Foreign Honorary Member of the American Academy of Arts and Sciences.
Darwin
The careful anonymity in which the Vestiges of the Natural History of Creation was published in 1844, and the very great caution shown by Darwin in publishing his own evolutionary ideas, can be seen in the context of the need to avoid a direct conflict with the religious establishment. In 1838 Darwin referred in his "C" transmutation notebook to a copy of Lawrence's "Lectures on physiology, zoology, and the natural history of man", and historians have speculated that he brooded about the implied consequences of publishing his own ideas.
In Lawrence's day the impact of laws on sedition and blasphemy were even more threatening than they were in Darwin's time. Darwin referred to Lawrence (1819) six times in his Descent of man (1871).
Lawrence's Natural history of man contained some remarkable anticipations of later thought, but was ruthlessly suppressed. To this day, many historical accounts of evolutionary ideas do not mention Lawrence's contribution. He is omitted, for example, from many of the Darwin biographies, from some evolution textbooks, essay collections, and even from accounts of pre-Darwinian science and religion.
Although the only idea of interest which Darwin found in Lawrence was that of sexual selection in man, the influence on Alfred Russel Wallace was more positive. Wallace "found in Lawrence a possible mechanism of organic change, that of spontaneous variation leading to the formation of new species".
Context
Lawrence was one of three British medical men who wrote on evolution-related topics from 1813 to 1819. They would all have been familiar with Erasmus Darwin and Lamarck at least; and probably also Malthus. Two (Prichard and Lawrence) dedicated their works to Blumenbach, the founder of physical anthropology. "The men who took up the challenge of Lamarck were three English physicians, Wells, Lawrence and Prichard... All three men denied soft heredity (Lamarckism)" This account is not too accurate in biographical terms, as Lawrence was actually a surgeon, Wells was born in Carolina to a Scottish family, and Prichard was a Scot. However, it is correct in principle on the main issue. Each grasped aspects of Darwin's theory, yet none saw the whole picture, and none developed the ideas any further. The later publication of Robert Chambers' Vestiges and Matthew's Naval timber was more explicit; the existence of the whole group suggests there was something real (though intangible) about the intellectual atmosphere in Britain which is captured by the phrase 'evolution was in the air'.
The years 1815–1835 saw much political and social turmoil in Britain, not least in the medical profession. There were radical medical students and campaigners in both Edinburgh and London, the two main training centres for the profession at the time. Many of these were materialists who held views favouring evolution, but of a Lamarckian or Geoffroyan kind. It is the allegiance to hard inheritance or to natural selection which distinguishes Lawrence, Prichard and Wells, because those ideas have survived, and are part of the present-day account of evolution.
Lawrence on heredity
The existence of races is a token of change in the human species, and suggests there is some significance in geographical separation. Lawrence noted that racial characteristics were inherited, not caused by the direct effect of, for instance, climate. As an example, he considered the way skin colour was inherited by children of African origin when born in temperate climates: how their colour developed without exposure to the sun, and how this continued through generations. This was evidence against the direct effect of climate.
Lawrence's ideas on heredity were many years ahead of their time, as this extract shows: "The offspring inherit only [their parents'] connate peculiarities and not any of the acquired qualities". This is as clear a rejection of soft inheritance as one can find. However, Lawrence qualified it by including the origin of birth defects owing to influences on the mother (an old folk superstition). So Mayr credits Wilhelm His, Sr., writing in 1874, with the first unqualified rejection of soft inheritance. However, the number of places in the text where Lawrence explicitly rejects the direct action of the environment on heredity justifies his recognition as an early opponent of Geoffroyism.
Darlington's interpretation
Here, as seen by Cyril Darlington, are some of the ideas presented by Lawrence in his book, much abbreviated and rephrased in more modern terms:
Mental as well as physical differences in man are inherited.
Races of man have arisen by mutations such as may be seen in litters of kittens.
Sexual selection has improved the beauty of advanced races and governing classes.
The separation of races preserves their characters.
'Selections and exclusions' are the means of change and adaptation.
Men can be improved by selection in breeding just as domesticated cattle can be. Conversely, they can be ruined by inbreeding, a consequence which can be observed in many royal families.
Zoological study, the treatment of man as an animal, is the only proper foundation for teaching and research in medicine, morals, or even in politics.
Darlington's account goes further than other commentators. He seems to credit Lawrence with a modern appreciation of selection (which he definitely did not have); subsequently, Darlington's account was criticised as an over-statement. Darlington does not claim Lawrence actually enunciated a theory of evolution, though passages in Lawrence's book do suggest that races were historically developed. On heredity and adaptation, and the rejection of Lamarckism (soft inheritance), Lawrence is quite advanced.
Content of the second book
The introductory sections
Lecture I: introductory to the lectures of 1817. Reply to the charges of Mr Abernethy; Modern history and progress of comparative anatomy.
This follows the first publication of Lawrence's ideas in 1816, and Abernethy's criticism of them in his lectures for 1817.
"Gentlemen! I cannot presume to address you again... without first publicly clearing myself from a charge publicly made... of propagating opinions detrimental to society... for the purpose of loosening those restraints, on which the welfare of mankind depends."*[footnote] Physiological lectures, exhibiting a general view of Mr Hunter's Physiology &c &c. by John Abernethy FRS. [references] "too numerous to be particularized." This book of lectures at the same College of Surgeons contained the charge of which Lawrence complained. In this very long footnote Lawrence says that the elementary anatomy in Abernethy's text is used "like water in a medical prescription... an innocent vehicle for the more active ingredients."
The early part of the 1819 book is marked by Lawrence's reaction to Abernethy's attack on the 'materialism' of the first book. After a long preamble, in which Lawrence extols the virtues of freedom of speech, he eventually gets to the point:
"It is alleged that there is a party of modern sceptics, co-operating in the diffusion of these noxious opinions with a no less terrible band of French physiologists, for the purpose of demoralising mankind! Such is the general tenor of the accusation..." p3
"Where, Gentlemen! shall we find proofs of this heavy charge? p4 I see the animal functions inseparable from the animal organs... examine the mind... Do we not see it actually built up before our eyes by the actions of the five external senses, and of the gradually developed internal faculties? p5 (see also p74-81 on the functions of the brain)I say, physiologically speaking... because the theological doctrine of the soul, and its separate existence, has nothing to do with this physiological question, but rests on a species of proof altogether different." p6
Lawrence is here arguing that medical questions should be answered by medical evidence, in other words, he is arguing for rational thought and empiricism instead of revelation or received religion. In particular, he insisted that mental activity was produced as a function of the brain, and has nothing to do with metaphysical concepts such as the 'soul'. Also, there is an implication, never quite stated, that Abernethy's motive might be venal; that jealousy (for example) might be revealed by "a consideration of the real motives" (phrase from his long initial footnote). It is absolutely clear that the conflict predates the publication of Lawrence's book.
Evidence from geology and palaeontology
The discussion drawn from stratigraphy is interesting:
"The inferior layers, or the first in order of time, contain the remains most widely different from the animals of the living creation; and as we advance to the surface there is a gradual approximation to our present species." p39
Refers to Cuvier, Brongniart and Lamarck in France, and Parkinson in Britain in connection with fossils:
"... the extinct races of animals... those authentic memorials of beings... whose living existence... has been supposed, with considerable probability, to be of older date than the formation of the human race." p39
Summary of ideas on human races
Chapter VII raises the issue of whether different races have similar diseases (p162 et seq) and ends with a list of reasons for placing man in one distinct species. The reasons are mostly anatomical with some behavioural, such as speech. They remain valid today.
Next there is a lengthy discussion of variation in man, and of the differences between races. Then he considers causation. Lectures of 1818, Chapter IX: On the causes of the varieties of the human species:
"Having examined the principal points in which the several tribes of the human species differ from each other... I proceed to inquire whether the diversities enumerated ... are to be considered as characteristic distinctions coeval with the origin of the species, or as a result of subsequent variation; and in the event of the latter... whether they are the effect of external... causes, or of native or congenital variety." p343 "Great influence has at all times been ascribed to climate... [but] we have abundance of proof that [differences of climate] are entirely inadequate to account for the differences between the different races of men. p343–4
He shows clearly in several places that differences between races (and between varieties of domesticated animals) are inherited, and not caused by the direct action of the environment; then follows this admission:
"We do not understand the exact nature of the process by which it [meaning the correspondence between climate and racial characteristics] is effected." p345
So, after insisting on empirical (non-religious) evidence, he has clearly rejected Lamarckism but has not thought of natural selection.
Ideas on mechanism
Although in places Lawrence disclaims all knowledge of how the differences between races arose, elsewhere there are passages which hint at a mechanism. In Chapter IX, for example, we find:
"These signal diversities which constitute differences of race in animals... can only be explained by two principles... namely, the occasional production of an offspring with different characters from those of the parents, as a native or congenital variety; [ie heritable] and the propagation of such varieties by generation." p348 [continues with examples of heritable variety in offspring in one litter of kittens, or sheep. This is Mendelian inheritance and segregation]
Passages like this are interpreted by Darlington in his first two points above; there is more on variety and its origin in Chapter IV, p67-8. It is clear that Lawrence's understanding of heredity was well ahead of his time (ahead of Darwin, in fact) and that he lacked only the idea of selection to have a fully-fledged theory of evolution.
Introduction of the word biology
At least five people have been claimed as the first to use the word biology:
Michael Christoph Hanov (Philosophiae naturalis sive physicae dogmaticae: Geologia, biologia, phytologia generalis et dendrologia, 1767)
Karl Friedrich Burdach (in 1800)
Gottfried Reinhold Treviranus (Biologie oder Philosophie der lebenden Natur, 1802). Treviranus used it to apply to the study of human life and character.
Jean-Baptiste Lamarck (Hydrogéologie, 1802, p. 8)
Lawrence, in 1819. According to the OED, Lawrence was the first person to use the word in English.
Contradiction of the Bible
Direct contradiction of the Bible was something Lawrence might have avoided, but his honesty and forthright approach led him onto this dangerous ground:
"The representations of all the animals being brought before Adam in the first instance and subsequently of their being collected in the ark... are zoogically impossible." p169
"The entire or even partial inspiration of the... Old Testament has been, and is, doubted by many persons, including learned divines and distinguished oriental and biblical scholars. The account of the creation and of subsequent events, has the allegorical character common to eastern compositions..." p168-9 incl. footnotes.
"The astronomer does not portray the heavenly motions, or lay down the laws which govern them, according to the Jewish scriptures [Old Testament] nor does the geologist think it necessary to modify the results of experience according to the contents of the Mosaic writings. I conclude then, that the subject is open for discussion." p172
Passages such as these, fully in the tradition of British empiricism and the Age of Enlightenment, were no doubt pointed out to the Lord Chancellor. In his opinion, the subject was not open for discussion.
Ealing Park
In June 1838, Lawrence purchased the Ealing Park mansion along with the surrounding 100 acres known as "Little Ealing" (then in Middlesex) at a purchase price of £9,000. Ealing Park is described by Pevsner as "Low and long; nine bays with pediment over the centre and an Ionic one-storeyed colonnade all along." The property was grandly furnished, as may be seen from the catalogue of the sale of the contents after his wife Louisa's death. The estate boasted livestock, including poultry of all sorts, cows, sheep and pigs. There were thousands of bedding plants, stove plants, more than 600 plants in early forcing houses, nearly a hundred camellias, and more.
However, they mainly lived on Whitehall Place in the City of Westminster. His son later sold Ealing Park.
Personal life and family
On 4 August 1823, Lawrence married Louisa Senior (1803–1855), the daughter of a Mayfair haberdasher, who built up social fame through horticulture. They had two sons and three daughters. Their elder son died in childhood but their second son, Sir Trevor Lawrence, 2nd Baronet, was himself a prominent horticulturist and was for many years President of the Royal Horticultural Society. One daughter died at age 18 months and the other two died unmarried.
William James (10 October 1829 – buried 5 November 1839)
John James Trevor (30 December 1831 – 22 December 1913)
Mary Louisa (28 August 1833 – buried 7 March 1835)
Louisa Elizabeth (22 February 1836 – 4 January 1920)
Mary Wilhelmina (1 November 1839 – 24 November 1920)
Louisa Lawrence died 14 August 1855. Lawrence suffered an attack of apoplexy whilst descending the stairs at the College of Surgeons and died on 5 July 1867 at his house, 18 Whitehall Place, London.
References
Bibliography
Lawrence, William FRS 1816. An introduction to the comparative anatomy and physiology, being the two introductory lectures delivered at the Royal College of Surgeons on the 21st and 25th of March 1816. J. Callow, London. 179pp. [Chapter 2 'On life' was the start of his troubles, and caused the first attacks on the grounds of materialism &c]
Lawrence, William FRS 1819. Lectures on physiology, zoology and the natural history of man. J. Callow, London. 579pp. Reprinted 1822. There were a number of unauthorized reprints of this work, pirated (in the sense that the author went unrecompensed) but seemingly unexpurgated. These editions also lacked the protection of copyright, and date from 1819 to 1848. Some of them were by quite respectable publishers. Desmond's view is that the Chancery decision was "a ringing endorsement to atheist ears. Six pauper presses pirated the offending book, keeping it in print for decades. As a result, although officially withdrawn, Lawrence's magnum opus could be found on every dissident's bookshelf." Desmond & Moore 1991. Darwin p253. The text of all editions is probably identical, though no-one has published a full bibliographical study.
1822 W. Benbow. 500pp. Darwin's copy was of this edition.
1822 Kaygill & Price (no plates). 2 vols, 288+212pp.
1823 J&C Smith (new plates). 532pp.
1838 J. Taylor. ('twelve new engravings'; seventh edition – stereotyped). 396pp.
1844 J. Taylor (old plates; ninth edition – stereotyped). 396pp.
1848 H.G. Bohn (ninth edition, as above).
The British Library also holds a number of pamphlets, mostly attacking Lawrence's ideas.
Lawrence, William FRS 1807. Treatise on hernia. Callow, London. Later editions from 1816 entitled Treatise on ruptures: an anatomical description of each species with an account of its symptoms, progress, and treatment. 5th and last ed 1858. "The standard text for many years" Morton, A medical bibliography #3587.
[Lawrence, William] 1819. 'Life', an anonymous article in Abraham Rees' Cyclopaedia, vol 22. Longman, London.
Lawrence, W. 1833. A treatise on the diseases of the eye. Churchill, London. This work is based on lectures delivered at the London Ophthalmic Infirmary; later edition 1845. "He did much to advance the surgery of the eye. This comprehensive work marks an epoch in ophthalmic surgery." Morton, A medical bibliography #5849.
Lawrence, William 1834. The Hunterian Oration, delivered at the Royal College of Surgeons on the 14th of February 1834. Churchill, London.
Lawrence, William 1863. Lectures on surgery. London.
External links
Biography in Plarr's Lives of the Fellows Online
1783 births
1867 deaths
People from Cirencester
Fellows of the American Academy of Arts and Sciences
Fellows of the Royal Society
English zoologists
English surgeons
Proto-evolutionary biologists
Fellows of the Royal College of Surgeons of England
Members of the Royal Swedish Academy of Sciences
19th-century English writers | Sir William Lawrence, 1st Baronet | [
"Biology"
] | 6,247 | [
"Non-Darwinian evolution",
"Biology theories",
"Proto-evolutionary biologists"
] |
14,470,771 | https://en.wikipedia.org/wiki/Light%20effects%20on%20circadian%20rhythm | Light effects on circadian rhythm are the response of circadian rhythms to light.
Most animals and other organisms have a biological clock that synchronizes their physiology and behaviour with the daily changes in the environment. The physiological changes that follow these clocks are known as circadian rhythms. Because the endogenous period of these rhythms are approximately but not exactly 24 hours, these rhythms must be reset by external cues to synchronize with the daily cycles in the environment. This process is called entrainment. One of the most important cues to entrain circadian rhythms is light.
Mechanism
Light first enters a mammal's system through the retina and then takes one of two paths: it is collected by rod cells and cone cells, which relay signals to retinal ganglion cells (RGCs), or it is collected directly by these RGCs.
The RGCs use the photopigment melanopsin to absorb the light energy. Specifically, the class of RGCs involved here is referred to as "intrinsically photosensitive", meaning that these cells respond to light directly. There are five known types of intrinsically photosensitive retinal ganglion cells (ipRGCs): M1, M2, M3, M4, and M5. Each of these ipRGC types has a different melanopsin content and photosensitivity. These connect to amacrine cells in the inner plexiform layer of the retina. Ultimately, via the retinohypothalamic tract (RHT), the suprachiasmatic nucleus (SCN) of the hypothalamus receives light information from these ipRGCs.
The ipRGCs serve a different function from rods and cones: even when isolated from the other components of the retina, ipRGCs maintain their photosensitivity and, as a result, can be sensitive to different ranges of the light spectrum. Additionally, ipRGC firing patterns may respond to light conditions as low as 1 lux, whereas previous research had indicated that 2500 lux was required to suppress melatonin production. Circadian and other behavioral responses have been shown to be more sensitive at shorter wavelengths than would be predicted by the photopic luminous efficiency function, which is based on the sensitivity of cone receptors.
The core region of the SCN houses the majority of light-sensitive neurons. From here, signals are transmitted via a nerve connection with the pineal gland that regulates various hormones in the human body.
There are specific genes that determine the regulation of circadian rhythm in conjunction with light. When light activates NMDA receptors in the SCN, CLOCK gene expression in that region is altered and the SCN is reset, and this is how entrainment occurs. Genes also involved with entrainment are PER1 and PER2.
Some important structures directly impacted by the light–sleep relationship are the superior colliculus-pretectal area and the ventrolateral pre-optic nucleus.
The progressive yellowing of the crystalline lens with age reduces the amount of short-wavelength light reaching the retina and may contribute to circadian alterations observed in older adulthood.
Effects
Primary
The mechanisms of light-driven entrainment are not yet fully known; however, numerous studies have demonstrated the effectiveness of light in entraining organisms to the day/night cycle. Studies have shown that the timing of exposure to light influences entrainment, as seen in the phase response curve to light for a given species. In diurnal (day-active) species, exposure to light soon after waking advances the circadian rhythm, whereas exposure before sleeping delays the rhythm. An advance means that the individual will tend to wake up earlier on the following day(s). A delay, caused by light exposure before sleeping, means that the individual will tend to wake up later on the following day(s).
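As a toy illustration of the phase response described above, the sketch below (in Python, with made-up magnitudes rather than values from any study) maps the timing of a light pulse relative to waking onto the direction of the resulting phase shift for a diurnal species.

```python
def toy_phase_shift(hours_after_waking, waking_day_hours=16.0):
    """Toy phase response to a light pulse for a diurnal species.

    Returns a phase shift in hours: positive = advance (earlier waking),
    negative = delay (later waking). Magnitudes are illustrative only.
    """
    if hours_after_waking < 0 or hours_after_waking > waking_day_hours:
        return 0.0  # pulse falls outside the waking day; ignored in this sketch
    if hours_after_waking <= 2.0:
        return 1.0   # light soon after waking advances the rhythm
    if hours_after_waking >= waking_day_hours - 2.0:
        return -1.0  # light shortly before sleeping delays the rhythm
    return 0.0       # mid-day light is assumed to have little effect here

# Light 1 h after waking advances; light 15 h after waking (near bedtime) delays.
print(toy_phase_shift(1.0))   # 1.0
print(toy_phase_shift(15.0))  # -1.0
```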
The hormones cortisol and melatonin are affected by the signals light sends through the body's nervous system. These hormones help regulate blood sugar to give the body the appropriate amount of energy that is required throughout the day. Cortisol levels are high upon waking and gradually decrease over the course of the day; melatonin levels are high when the body is entering and exiting sleep and are very low over the course of waking hours. The earth's natural light-dark cycle is the basis for the release of these hormones.
The length of light exposure influences entrainment. Longer exposures have a greater effect than shorter exposures. Consistent light exposure has a greater effect than intermittent exposure. In rats, constant light eventually disrupts the cycle to the point that memory and stress coping may be impaired.
The intensity and the wavelength of light influence entrainment. Dim light can affect entrainment relative to darkness. Brighter light is more effective than dim light. In humans, lower-intensity short-wavelength (blue/violet) light appears to be as effective as higher-intensity white light.
In one study, exposure to monochromatic light at wavelengths of 460 nm and 550 nm yielded decreased sleepiness under the 460 nm light relative to the 550 nm and control conditions. Additionally, in the same study, when testing thermoregulation and heart rate, researchers found a significantly increased heart rate under 460 nm light over the course of a 1.5-hour exposure period.
In a study on the effect of lighting intensity on delta waves, a measure of sleepiness, high lighting levels (1700 lux) produced lower levels of delta waves, measured by EEG, than low lighting levels (450 lux). This suggests that lighting intensity is directly correlated with alertness in an office environment.
Humans are sensitive to light with a short wavelength. Specifically, melanopsin is sensitive to blue light with a wavelength of approximately 480 nm. The effect this wavelength of light has on melanopsin leads to physiological responses such as the suppression of melatonin production, increased alertness, and alterations to the circadian rhythm.
Secondary
While light has direct effects on circadian rhythm, there are indirect effects seen across studies. Seasonal affective disorder creates a model in which decreased day length during autumn and winter increases depressive symptoms. A shift in the circadian phase response curve creates a connection between the amount of light in a day (day length) and depressive symptoms in this disorder. Light seems to have therapeutic antidepressant effects when an organism is exposed to it at appropriate times during the circadian rhythm, regulating the sleep-wake cycle.
In addition to mood, learning and memory become impaired when the circadian system shifts due to light stimuli, which can be seen in studies modeling jet lag and shift work situations. Frontal and parietal lobe areas involved in working memory have been implicated in melanopsin responses to light information.
"In 2007, the International Agency for Research on Cancer classified shift work with circadian disruption or chronodisruption as a probable human carcinogen."
Exposure to light during the hours of melatonin production reduces melatonin production. Melatonin has been shown to mitigate the growth of tumors in rats. When melatonin production was suppressed over the course of the night, rats showed increased rates of tumor growth over a four-week period.
Artificial light at night causing circadian disruption additionally impacts sex steroid production. Increased levels of progestogens and androgens were found in night shift workers as compared to "working hour" workers.
Proper exposure to light has become an accepted way to alleviate some of the effects of seasonal affective disorder (SAD). In addition, exposure to light in the morning has been shown to assist Alzheimer's patients in regulating their waking patterns.
In response to light exposure, alertness levels can increase as a result of suppression of melatonin secretion. A linear relationship has been found between alerting effects of light and activation in the posterior hypothalamus.
Disruption of circadian rhythm as a result of light also produces changes in metabolism.
Measured lighting for rating systems
Historically, light was measured in the units of luminous intensity (candelas), luminance (candelas/m²) and illuminance (lumens/m², or lux). After the discovery of ipRGCs in 2002, additional units of light measurement have been researched in order to better estimate the impact of different parts of the light spectrum on the various photoreceptors. However, due to the variability in sensitivity between rods, cones and ipRGCs, and the variability between the different ipRGC types, a single unit does not perfectly reflect the effects of light on the human body.
The currently accepted unit is equivalent melanopic lux, which is a calculated melanopic ratio multiplied by the illuminance in lux. The melanopic ratio is determined by taking into account the type of light source and the melanopic illuminance values for the eye's photopigments. The light source, the unit used to measure illuminance and the value of illuminance inform the spectral power distribution. This is used to calculate the photopic illuminance and the melanopic lux for the five photopigments of the human eye, weighted by the optical density of each photopigment.
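A minimal sketch of that calculation, assuming the melanopic ratio of each source has already been derived from its spectral power distribution; the ratios below are made-up placeholders, not published values.

```python
def equivalent_melanopic_lux(photopic_lux, melanopic_ratio):
    """Equivalent melanopic lux (EML) = photopic illuminance (lux) x melanopic ratio.

    The melanopic ratio depends on the source's spectral power distribution,
    weighted by the sensitivity of melanopsin rather than the cone-based
    photopic luminous efficiency function.
    """
    return photopic_lux * melanopic_ratio

# Hypothetical sources with made-up melanopic ratios
sources = {"blue-rich white LED": 0.9, "warm incandescent": 0.5}

for name, ratio in sources.items():
    print(name, equivalent_melanopic_lux(300.0, ratio))  # same lux, different EML
```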
The WELL Building standard was designed for "advancing health and well-being in buildings globally". Part of the standard is the implementation of Credit 54: Circadian Lighting Design. Specific thresholds for different office areas are designated in order to achieve credits. Light is measured at 1.2 m above the finished floor for all areas.
Work areas must have a value of at least 200 equivalent melanopic lux at 75% or more of workstations between the hours of 09:00 and 13:00 for each day of the year when daylight is incorporated into the calculations. If daylight is not taken into account, all workstations require lighting of 150 equivalent melanopic lux or greater.
In living environments, which are bedrooms, bathrooms and rooms with windows, at least one fixture must provide a melanopic lux value of at least 200 during the day and a melanopic lux value of less than 50 during the night, measured 0.76 m above the finished floor.
Breakrooms require an average melanopic lux of 250.
Learning areas require either that lighting models, which may incorporate daylighting, provide an equivalent melanopic lux of 125 at 75% or more of desks for at least four hours per day, or that ambient lights maintain the standard lux recommendations set forth in Table 3 of IES-ANSI RP-3-13.
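To make the daytime targets above easier to compare, here is a small sketch that checks a measured equivalent melanopic lux value against them; the dictionary keys and function name are illustrative only, and the values simply restate the figures in the preceding paragraphs (breakroom as an average, living environments per qualifying fixture).

```python
# Daytime equivalent melanopic lux (EML) targets summarized from the text above
DAYTIME_EML_TARGETS = {
    "work_area_with_daylight": 200,
    "work_area_no_daylight": 150,
    "living_environment_day": 200,
    "breakroom_average": 250,
    "learning_area": 125,
}

def meets_daytime_target(area_type, measured_eml):
    """Return True if the measured EML reaches the daytime target for the area."""
    return measured_eml >= DAYTIME_EML_TARGETS[area_type]

print(meets_daytime_target("breakroom_average", 230))  # False: below the 250 target
print(meets_daytime_target("learning_area", 130))      # True: above the 125 target
```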
The WELL Building standard additionally provides direction for circadian emulation in multi-family residences. In order to more accurately replicate natural cycles lighting users must be able to set a wake and bed time. An equivalent melanopic lux of 250 must be maintained in the period of the day between the indicated wake time and two hours before the indicated bed time. An equivalent melanopic lux of 50 or less is required for the period of the day spanning from two hours before the indicated bed time through the wake time. In addition at the indicated wake time melanopic lux should increase from 0 to 250 over the course of at least 15 minutes.
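A minimal sketch of that residential emulation schedule, assuming user-set wake and bed times on a 24-hour clock; the linear ramp shape and the function name are assumptions, since the standard only requires the rise from 0 to 250 to take at least 15 minutes.

```python
def target_eml(hour, wake_hour=7.0, bed_hour=23.0, ramp_minutes=15):
    """Target equivalent melanopic lux (EML) for a given hour of the day.

    - From the end of the wake-time ramp until 2 h before bed time: 250 EML.
    - From 2 h before bed time through wake time: at or below 50 EML (0 here).
    - At wake time: ramp from 0 up to 250 over at least `ramp_minutes`.
    """
    ramp_hours = ramp_minutes / 60.0
    dim_start = bed_hour - 2.0
    if wake_hour <= hour < wake_hour + ramp_hours:
        return 250.0 * (hour - wake_hour) / ramp_hours  # assumed linear ramp
    if wake_hour + ramp_hours <= hour < dim_start:
        return 250.0
    return 0.0  # night-time and pre-sleep period

for h in (6.5, 7.1, 12.0, 21.5, 23.5):
    print(h, round(target_eml(h), 1))
```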
Other factors
Although many researchers consider light to be the strongest cue for entrainment, it is not the only factor acting on circadian rhythms. Other factors may enhance or decrease the effectiveness of entrainment. For instance, exercise and other physical activity, when coupled with light exposure, results in a somewhat stronger entrainment response. Other factors such as music and properly timed administration of the neurohormone melatonin have shown similar effects. Numerous other factors affect entrainment as well. These include feeding schedules, temperature, pharmacology, locomotor stimuli, social interaction, sexual stimuli and stress.
Circadian-based effects have also been found in the visual perception of discomfort glare. A light source that produces visual discomfort is not perceived the same way at all times of day. As the day progresses, people tend to become more tolerant of the same levels of discomfort glare (i.e., people are more sensitive to discomfort glare in the morning compared to later in the day). Further studies on chronotype show that early chronotypes can also tolerate more discomfort glare in the morning compared to late chronotypes.
See also
Chronobiology
Circadian advantage
Circadian clock
Circadian oscillator
Circadian rhythm disorders
Electronic media and sleep
Light therapy
Scotobiology
References
Circadian rhythm
Circadian
Health effects by subject | Light effects on circadian rhythm | [
"Physics",
"Biology"
] | 2,549 | [
"Physical phenomena",
"Behavior",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Waves",
"Circadian rhythm",
"Light",
"Sleep"
] |
14,470,857 | https://en.wikipedia.org/wiki/Radiation-induced%20cognitive%20decline | Radiation-induced cognitive decline describes the possible correlation between radiation therapy and cognitive impairment. Radiation therapy is used mainly in the treatment of cancer. Radiation therapy can be used to cure, care or shrink tumors that are interfering with quality of life. Sometimes radiation therapy is used alone; other times it is used in conjunction with chemotherapy and surgery. For people with brain tumors, radiation can be an effective treatment because chemotherapy is often less effective due to the blood–brain barrier. Unfortunately for some patients, as time passes, people who received radiation therapy may begin experiencing deficits in their learning, memory, and spatial information processing abilities. The learning, memory, and spatial information processing abilities are dependent on proper hippocampus functionality. Therefore, any hippocampus dysfunction will result in deficits in learning, memory, and spatial information processing ability.
The hippocampus is one of two structures of the central nervous system where neurogenesis continues after birth. The other structure that undergoes neurogenesis is the olfactory bulb. Therefore, it has been proposed that neurogenesis plays some role in the proper functionality of the hippocampus and the olfactory bulb. To test this proposal, a group of rats with normal hippocampal neurogenesis (control) were subjected to a placement recognition exercise that required proper hippocampus function to complete. Afterwards a second group of rats (experimental) were subjected to the same exercise but in that trial their neurogenesis in the hippocampus was arrested. It was found that the experimental group was not able to distinguish between its familiar and unexplored territory. The experimental group spent more time exploring the familiar territory, while the control group spent more time exploring the new territory. The results indicate that neurogenesis in the hippocampus is important for memory and proper hippocampal functionality. Therefore, if radiation therapy inhibits neurogenesis in the hippocampus it would lead to the cognitive decline observed in patients who have received this radiation therapy.
In animal studies discussed by Monje and Palmer in "Radiation Injury and Neurogenesis", it has been proven that radiation does indeed decrease or arrest neurogenesis altogether in the hippocampus. This decrease in neurogenesis is due to apoptosis of the neurons, which usually occurs after irradiation. However, it has not been proven whether the apoptosis is a direct result of the radiation itself or whether there are other factors that cause neuronal apoptosis, namely changes in the hippocampal micro-environment or damage to the precursor pool. Determining the exact cause of the apoptosis is important because it may then be possible to inhibit the apoptosis and reverse the effects of the arrested neurogenesis.
Radiation therapy
Ionizing radiation is classified as a neurotoxicant. A 2004 cohort study concluded that irradiation of the brain with dose levels overlapping those imparted by computed tomography can, in at least some instances, adversely affect intellectual development.
Radiation therapy at doses around 23.4 Gy was found to cause cognitive decline that was especially apparent in young children who underwent the treatment for cranial tumors between the ages of 5 and 11. Studies found, for example, that the IQ of 5-year-old children declined by several additional IQ points each year after treatment, so that the deficit continued to grow as the child got older, though it may plateau in adulthood.
In one Swedish radiation-therapy follow-up study, 100 mGy of radiation to the head in infancy marked the beginning of statistically significant cognitive deficits. Similarly, 1300–1500 mGy of radiation to the head in childhood was found to be roughly the threshold dose at which statistically significant increases in rates of schizophrenia begin.
Studies of those prenatally exposed at Hiroshima and Nagasaki found that survivors who experienced the prompt burst of ionizing radiation at 8–15 and 16–25 weeks after gestation had, especially among the closest survivors, a higher rate of severe mental retardation as well as variation in intelligence quotient (IQ) and school performance. It is uncertain whether there exists a threshold dose below which one or more of these effects of prenatal exposure to ionizing radiation do not occur, though analysis of the limited data suggests 0.1 Gy for both.
Warfare
Adult humans receiving an acute whole-body incapacitating dose (30 Gy) have their performance degraded almost immediately and become ineffective within several hours. A dose of 5.3 Gy to 8.3 Gy is considered lethal within months to half of male adults but is not immediately incapacitating. Personnel exposed to this amount of radiation have their cognitive performance degraded within two to three hours, depending on how physically demanding the tasks they must perform are, and remain in this disabled state for at least two days. However, at that point they experience a recovery period and can perform non-demanding tasks for about six days, after which they relapse for about four weeks. At this time they begin exhibiting symptoms of radiation poisoning of sufficient severity to render them totally ineffective. Death follows for about half of males at approximately six weeks after exposure.
Nausea and vomiting generally occur within 24–48 hours after exposure to mild (1–2 Gy) doses of radiation. Headache, fatigue, and weakness are also seen with mild exposure.
Exposure of adults to 150–500 mSv marks the threshold at which cerebrovascular pathology begins to be observed, and exposure to 300 mSv marks the threshold at which neuropsychiatric and neurophysiological dose-related effects begin to be observed. Cumulative equivalent doses above 500 mSv of ionizing radiation to the head have been shown by epidemiological evidence to cause cerebrovascular atherosclerotic damage, thus increasing the chances of stroke in later life. The equivalent dose of 0.5 Gy (500 mGy) of x-rays is 500 mSv.
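The unit relationship used in the last sentence, between absorbed dose (gray) and equivalent dose (sievert), can be sketched as follows; the radiation weighting factor of 1 for x-rays and other photons is standard, and the alpha-particle factor is shown only for contrast.

```python
def equivalent_dose_msv(absorbed_dose_mgy, radiation_weighting_factor=1.0):
    """Equivalent dose (mSv) = absorbed dose (mGy) x radiation weighting factor."""
    return absorbed_dose_mgy * radiation_weighting_factor

print(equivalent_dose_msv(500))        # 500.0 mSv for 0.5 Gy (500 mGy) of x-rays
print(equivalent_dose_msv(500, 20.0))  # 10000.0 mSv if the same absorbed dose were alpha particles
```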
Acute ablation of precursor cells
Recent studies have shown that there is a decrease in neurogenesis in the hippocampus after irradiation therapy. The decrease in neurogenesis is the result of a reduction in the stem cell pool due to apoptosis. However, the question remains whether radiation therapy results in a complete ablation of the stem cell pool in the hippocampus or whether some stem cells survive. Animal studies have been performed by Monje and Palmer to determine if there is an acute ablation of the stem cell pool. In the study, rats were subjected to a 10 Gy dose of radiation, which is comparable to the doses used in irradiation therapy in humans. One month after the dose was received, living precursor cells from these rats' hippocampi were successfully isolated and cultured. Therefore, a complete ablation of the precursor cell pool by irradiation does not occur.
Precursor cell integrity
Precursor cells may be damaged by radiation. This damage of the cells may prevent the precursor cells from differentiating into neurons and result in decreased neurogenesis. To determine whether the precursor cells are impaired in their ability to differentiate, two cultures were prepared by Fike et al. One of these cultures contained precursor cells from an irradiated rat's hippocampus and the second culture contained non-irradiated precursor cells from a rat hippocampus. The precursor cells were then observed while they continued to develop. The results indicated that the irradiated culture contained a higher number of differentiated neuronal and glial cells in comparison to the control. It was also found that the ratios of glial cells to neurons in both cultures were similar. These results suggest that the radiation did not impair the precursor cells' ability to differentiate into neurons, and therefore neurogenesis is still possible.
Alterations in hippocampus microenvironment
The microenvironment is an important component to consider for precursor survival and differentiation. It is the microenvironment that provides the signals to the precursor cells that help it survive, proliferate, and differentiate. To determine if the microenvironment is altered as a result of radiation, an animal study was performed by Fike et al. where highly enriched, BrdU labeled, non-irradiated stem cells from a rat hippocampus were implanted into a hippocampus that was irradiated one month prior. The stem cells were allowed to remain in the live rat for 3–4 weeks. Afterwards, the rat was killed and the stem cells were observed using immunohistochemistry and confocal microscopy. The results show that stem cell survival was similar to that found in a control subject (normal rat hippocampus); however, the number of neurons generated was decreased by 81%. Therefore, alterations of the microenvironment post radiation can lead to a decrease in neurogenesis.
In addition, studies mentioned by Fike et al. found that there are two main differences between the hippocampus of an irradiated rat and a non-irradiated rat that are part of the microenvironment. There was a significantly larger number of activated microglia cells in the hippocampus of irradiated rats in comparison to non-irradiated rats. The presence of microglia cells is characteristic of the inflammatory response which is most likely due to radiation exposure. Also the expected clustering of stem cells around the vasculature of the hippocampus was disrupted. Therefore, focusing on the microglial activation, inflammatory response, and microvasculature may produce a direct link to the decrease in neurogenesis post irradiation.
Inflammatory response affects neurogenesis
Radiation therapy usually results in chronic inflammation, and in the brain this inflammatory response comes in the form of activated microglia cells. Once activated, these microglia cells start to release stress hormones and various pro-inflammatory cytokines. Some of what is released by the activated microglia cells, like the glucocorticoid stress hormone, may result in a decrease in neurogenesis. To investigate this concept, an animal study was performed by Monje et al. in order to determine the specific cytokines or stress hormones that were released by activated microglial cells that decrease neurogenesis in an irradiated hippocampus. In this study, microglia cells were exposed to bacterial lipopolysaccharide to elicit an inflammatory response, thus activating the microglia cells. These activated microglia were then co-cultured with normal hippocampal neural stem cells. Also, as a control, non-activated microglia cells were co-cultured with normal hippocampal neural stem cells. In comparing the two co-cultures, it was determined that neurogenesis in the activated microglia cell culture was 50% less than in the control. A second study was also performed to ensure that the decrease in neurogenesis was the result of released cytokines and not cell-to-cell contact of microglia and stem cells. In this study, neural stem cells were cultured on preconditioned media from activated microglia cells and a comparison was made with a neural stem cells cultured on plain media. The results of this study indicated that neurogenesis also showed a similar decrease in the preconditioned media culture versus the control.
When microglia cells are activated, they release the pro-inflammatory cytokines IL-1β, TNF-α, IFN-γ, and IL-6. In order to identify the cytokines that decreased neurogenesis, Monje et al. allowed progenitor cells to differentiate while exposed to each cytokine. The results of the study showed that only exposure to recombinant IL-6 and TNF-α significantly reduced neurogenesis. When IL-6 was then inhibited, neurogenesis was restored. This implicates IL-6 as the main cytokine responsible for the decrease of neurogenesis in the hippocampus.
Microvasculature and neurogenesis
The microvasculature of the subgranular zone, located in the dentate gyrus of the hippocampus, plays an important role in neurogenesis. As precursor cells develop in the subgranular zone, they form clusters. These clusters usually contain dozens of cells. The clusters are made up of endothelial cells and neuronal precursor cells that have the ability to differentiate into either neurons or glia cells. With time, these clusters eventually migrate towards microvessels in the subgranular zone. As the clusters get closer to the vessels, some of the precursor cells differentiate into glia cells and eventually the remaining precursor cells will differentiate into neurons. Upon investigation of the close association between the vessels and clusters, it is apparent that the migration of the precursor cells to these vessels is not random. Since the endothelial cells forming the vessel wall secrete brain-derived neurotrophic factor, it is plausible that the neuronal precursor cells migrate to those regions in order to grow, survive, and differentiate. Also, since the clusters contain endothelial cells, they might be attracted to the vascular endothelial growth factor that is released in the area of vessels to promote endothelial survival and angiogenesis. However, as noted previously, clustering along the capillaries in the subgranular zone decreases when the brain is subjected to radiation. The exact reason for this disruption of the close association between clusters and vessels remains unknown. It is possible that any signaling that would normally attract the clusters to the region, for example brain-derived neurotrophic factor and vascular endothelial growth factor, may be suppressed.
Reversal
Blocking inflammatory cascade
Neurogenesis in the hippocampus usually decreases after exposure to radiation and usually leads to a cognitive decline in patients undergoing radiation therapy. As discussed above, the decrease in neurogenesis is heavily influenced by changes in the microenvironment of the hippocampus upon exposure to radiation. Specifically, disruption of the cluster/vessel association in the subgranular zone of the dentate gyrus and cytokines released by activated microglia as part of the inflammatory response impair neurogenesis in the irradiated hippocampus. Thus, several studies have used this knowledge to reverse the reduction in neurogenesis in the irradiated hippocampus. In one study, indomethacin treatment was given to the irradiated rat during and after irradiation treatment. It was found that the indomethacin treatment caused a 35% decrease in the number of activated microglia per dentate gyrus in comparison to microglia activation in irradiated rats without indomethacin treatment. This decrease in microglia activation reduces the amount of cytokines and stress-hormone release, thus reducing the effect of the inflammatory response. When the number of precursor cells adopting a neuronal fate was quantified, it was determined that the ratio of neurons to glia cells increased. This increase in neurogenesis was only 20–25% of that observed in control animals. However, in this study the inflammatory response was not eliminated entirely, and some cytokines or stress hormones continued to be secreted by the remaining activated microglia cells, causing the persisting reduction in neurogenesis. In a second study, the inflammatory cascade was also blocked at another stage. This study focused mainly on the c-Jun NH2-terminal kinase (JNK) pathway, which, when activated, results in the apoptosis of neurons. This pathway was chosen because, upon irradiation, it is the only mitogen-activated protein kinase that is activated. The mitogen-activated protein kinases are important for regulation of migration, proliferation, differentiation, and apoptosis. The JNK pathway is activated by cytokines released by activated microglia cells, and blocking this pathway significantly reduces neuronal apoptosis. In the study, JNK was inhibited using a 5 μM dose of SP600125, which resulted in a decrease in neural stem cell apoptosis. This decrease in apoptosis results in increased neuronal recovery.
Environmental enrichment
In previous work, environmental enrichment has been used to determine its effect on brain activity. In these studies, environmental enrichment positively impacted brain functionality in both normal, healthy animals and animals that had suffered severe brain injury. It has already been shown by Elodie Bruel-Jungerman et al. that subjecting animals to learning exercises that are heavily dependent on the hippocampus results in increased neurogenesis. Therefore, the question of whether environmental enrichment can enhance neurogenesis in an irradiated hippocampus is raised. In a study performed by Fan et al., the effects of environmental enrichment on gerbils were tested. There were four groups of gerbils used for this experiment: group one consisted of non-irradiated animals that lived in a standard environment, group two of non-irradiated animals that lived in an enriched environment, group three of irradiated animals that lived in a standard environment, and group four of irradiated animals that lived in an enriched environment. After two months of maintaining the gerbils in the required environments, they were killed and hippocampal tissue was removed for analysis. It was found that the number of precursor cells that differentiated into neurons in group four (irradiated and enriched environment) was significantly greater than in group three (irradiated and standard environment). Similarly, the number of neuron precursor cells was greater in group two (non-irradiated and enriched environment) than in group one (non-irradiated and standard environment). The results indicate that neurogenesis was increased in the animals exposed to the enriched environment, in comparison to animals in the standard environment. This outcome indicates that environmental enrichment can indeed increase neurogenesis and reverse the cognitive decline.
See also
Post-chemotherapy cognitive impairment
Targeted therapy
Electrochemotherapy
Electrotherapy
Chemotherapy
Radiotherapy
References
Further reading
Long-term consequences of in utero irradiation in mice indicate proteomic changes in synaptic plasticity-related signalling pathways involved in cognition, including the transcription factor cAMP response element-binding protein (CREB)
Convergence and divergence between the transcriptional responses to Zika virus infection and prenatal irradiation
EU-funded 'Cognitive and Cerebrovascular Effects Induced by Low Dose Ionising Radiation' (CEREBRAD)
Low dose cranial irradiation-induced cerebrovascular damage is reversible in mice.
Radiation health effects | Radiation-induced cognitive decline | [
"Chemistry",
"Materials_science"
] | 3,854 | [
"Radiation effects",
"Radiation health effects",
"Radioactivity"
] |
14,471,554 | https://en.wikipedia.org/wiki/Bed%20rest | Bed rest, also referred to as the rest-cure, is a medical treatment in which a person lies in bed for most of the time to try to cure an illness. Bed rest refers to voluntarily lying in bed as a treatment and not being confined to bed because of a health impairment which physically prevents leaving bed. The practice is still used although a 1999 systematic review found no benefits for any of the 17 conditions studied and no proven benefit for any conditions at all, beyond that imposed by symptoms.
In the United States, nearly 20% of pregnant women have some degree of restricted activity prescribed despite the growing data showing it to be dangerous, causing some experts to call its use "unethical".
Medical uses
Extended bed rest has been proven to be a potentially harmful treatment needing more careful evaluation.
Pregnancy
Women who are pregnant and are experiencing early labor, vaginal bleeding, and cervix complications have been prescribed bed rest. In 2013 this practice was strongly discouraged because of a lack of evidence of benefit and evidence of potential harm.
Evidence is unclear if it affects the risk of preterm birth and due to potential side effects the practice is not routinely recommended. It is also not recommended for routine use in pregnant women with high blood pressure or to prevent miscarriage.
Women pregnant with twins or higher-order multiples are at higher risk for pregnancy complications. Routine bed rest in twin pregnancies (bed rest in the absence of complications) does not improve outcomes. Bed rest is therefore not recommended routinely in those with a multiple pregnancy.
Use in combination with assisted reproductive technology such as embryo transfer is also not recommended.
Back pain
For people with back pain bed rest has previously been recommended. Bed rest, however, is less beneficial than staying active. As a treatment for low back pain, bed rest should not be used for more than 48 hours.
Other
As of 2016 it is unclear if bed rest is useful for people in wheelchairs who have pressure ulcers.
Bed rest may be sufficient treatment for mild cases of Sydenham chorea.
In those with deep vein thrombosis early movement rather than bed rest appears helpful.
Adverse effects
Prolonged bed rest has long been known to have deleterious physiological effects, such as muscle atrophy and other forms of deconditioning such as arterial constriction. Besides lack of physical exercise, it was shown that another important factor is that the hydrostatic pressure (caused by gravity) acts anomalously, resulting in an altered distribution of body fluids. In other words, when getting up, this can cause orthostatic hypotension, potentially inducing a vasovagal response.
Additionally, prolonged bed rest can lead to the formation of skin pressure ulcers. Even physical exercise in bed fails to address certain adverse effects.
Phlebothrombosis is marked by the formation of a clot in a vein without prior inflammation of the wall of the vein. It is associated with prolonged bed rest, surgery, pregnancy, and other conditions in which blood flow becomes sluggish or the blood coagulates more readily than normal. The affected area, usually the leg, may become swollen and tender. The danger is that the clot may become dislodged and travel to the lungs (a pulmonary embolism).
Technique
Complete bed rest refers to discouraging the person in treatment from sitting up for any reason, including daily activities like drinking water.
Placing the head of a bed lower than the foot is sometimes used as a means of simulating the physiology of spaceflight.
History
As a treatment, bed rest is mentioned in the earliest medical writings. The rest cure, or bed rest cure, was a 19th-century treatment for many mental disorders, particularly hysteria. "Taking to bed" and becoming an "invalid" for an indefinite period was a culturally accepted response to some of the adversities of life. Melville Arnott noted the increased use of bed rest in late-19th and early-20th century medical practice:
It has, of course always been recognised that rest is essential for the acutely ill person [...]. But there is little mention of bed rest in the 18th and early 19th century by such authors as Withering, Heberden, and Stokes. [...] The mid-19th century saw the impact of Hilton's Rest and Pain
[...]. In one case after another Hilton scored success, after all sorts of fantastic treatments had failed, because he recognised the value of rest in inflammation - particularly in osteomyelitis and bone and joint tuberculosis which was then so prevalent. As so often happens, opinion swung to the opposite extreme, and rest came to be regarded as the universal healer. [...] Another reason for undue emphasis on bed rest may be the tendency, since the 19th century, to treat illness in hospital, rather than at home. In most hospitals, even today, the patient is expected to be in bed: the whole organisation is geared to such a state, and there is little provision for the up patient. [...] Furthermore, the routine of the bed bath and the bedpan is firmly established in nursing care. Indeed, many of our older hospitals - especially those for the chronic sick, with large inadequately heated wards and too few nurses - enforce bed rest as the only modus operandi.
In addition to bed rest, patients were secluded from all family contact to reduce dependence on others. The only person whom bed-rest patients were allowed to see was the nurse who massaged, bathed, and clothed them. Not only were patients isolated in bed for an extended time, they were advised to avoid other activities that might mentally exhaust them - such as writing or drawing.
In some extreme cases electrotherapy was prescribed. The food the patient was served usually consisted of fatty dairy products to revitalize the body. This "rest cure" as well as its name were created by Doctor Silas Weir Mitchell (1829-1914),
and it was almost always prescribed to women, many of whom were suffering from depression, especially postpartum depression. It was not effective and caused many to go insane, suffer complications of prostration, or die.
Before the advent of effective antihypertension medications, bed rest was a standard treatment for markedly high blood pressure. It is still used in cases of carditis secondary to rheumatic fever. Its popularity and perceived efficacy have varied greatly over the centuries.
In 1892, feminist writer Charlotte Perkins Gilman published "The Yellow Wallpaper", a horror short-story based on her experience when placed under the rest cure by Dr. Silas W. Mitchell himself. She wasn't allowed to write in a journal, paint a picture, or release her imagination in any way, though she was artistically inclined. If she ever felt ill, she was simply told to return to bed. Her specific instructions from Dr. Mitchell were to "Live as domestic a life as possible. Have your child with you all the time... Lie down an hour after each meal. Have but two hours' intellectual life a day. And never touch pen, brush or pencil as long as you live." Gilman abided by Mitchell's instructions for several months before practically losing control of her sanity.
Eventually, Gilman divorced her husband and pursued a life as a writer and women's rights activist. She later explained in her 1935 autobiography The Living of Charlotte Perkins Gilman that she could not be restrained to the domestic lifestyle without losing her sanity, and that "it was not a choice between going and staying, but between going, sane, and staying, insane."
The narrator in "The Yellow Wallpaper" reflected her own authentic account. The narrator was advised by her husband to perform the rest cure and avoid creative activities while struggling with fits of depression. After becoming obsessed with the yellow wallpaper in her room, the narrator suffers a mental breakdown and frees a "woman behind the wall", metaphorically resembling Gilman's own mental break and release from female expectations. Gilman sent her short story to Dr. Mitchell, hoping that he might change his treatment of women with mental health and help save people from her own experience. The story became a symbol of feminism in the 1970s at the time of its rediscovery.
The author Virginia Woolf was prescribed the rest cure, which she parodied in her novel Mrs Dalloway (1925) with the description "you invoke proportion; order rest in bed; rest in solitude; silence and rest; rest without friends, without books, without messages; six months rest; until a man who went in weighing seven stone six comes out weighing twelve".
Some negative effects of bed rest were historically attributed to drugs taken during bed rest.
See also
Bedridden
Postpartum confinement, the period after giving birth
Lying-in, the historic term for enforced rest after giving birth
Reduced muscle mass, strength and performance in space
References
Further reading
Stuempfle, K., and D. Drury. "The Physiological Consequences of Bed Rest". Journal of Exercise Physiology online (June 2007) 10(3):32-41.
Medical treatments
Beds
Culture of beds | Bed rest | [
"Biology"
] | 1,866 | [
"Beds",
"Behavior",
"Sleep"
] |
14,471,564 | https://en.wikipedia.org/wiki/Outline%20of%20medicine | The following outline is provided as an overview of and topical guide to medicine:
Medicine – science of healing. It encompasses a variety of health care practices evolved to maintain health by the prevention and treatment of illness.
Aims
Cure
Health
Homeostasis
Medical ethics
Prevention of illness
Palliation
Branches of medicine
Anesthesiology – practice of medicine dedicated to the relief of pain and total care of the surgical patient before, during and after surgery.
Alternative medicine – any healing practice that does not fall within the realm of conventional medicine.
Cardiology – branch of medicine that deals with disorders of the heart and the blood vessels.
Critical care medicine – focuses on life support and the intensive care of the seriously ill.
Dentistry – branch of medicine that deals with treatment of diseases in the oral cavity
Dermatology – branch of medicine that deals with the skin, hair, and nails.
Emergency medicine – focuses on care provided in the emergency department
Endocrinology – branch of medicine that deals with disorders of the endocrine system.
Epidemiology – study of cause and prevalence of diseases and programs to contain them
First aid – assistance given to any person experiencing a sudden illness or injury, with care provided to preserve life, prevent the condition from worsening, and/or promote recovery. It includes initial intervention in a serious condition prior to professional medical help being available, such as performing CPR while awaiting an ambulance, as well as the complete treatment of minor conditions, such as applying a plaster to a cut.
Gastroenterology – branch of medicine that deals with the study and care of the digestive system.
General practice (often called family medicine) – branch of medicine that specializes in primary care.
Geriatrics – branch of medicine that deals with the general health and well-being of the elderly.
Gynaecology – diagnosis and treatment of the female reproductive system
Hematology – branch of medicine that deals with the blood and the circulatory system.
Hepatology – branch of medicine that deals with the liver, gallbladder and the biliary system.
Infectious disease (Outline of concepts) – branch of medicine that deals with the diagnosis and management of infectious disease, especially for complex cases and immunocompromised patients.
Internal medicine – involved with adult diseases
Neurology – branch of medicine that deals with the brain and the nervous system.
Nephrology – branch of medicine which deals with the kidneys.
Obstetrics – care of women during and after pregnancy
Occupational medicine – branch of medicine concerned with the maintenance of health in the workplace
Oncology – branch of medicine that studies the types of cancer.
Ophthalmology – branch of medicine that deals with the eyes.
Optometry – branch of medicine that involves examining the eyes and applicable visual systems for defects or abnormalities as well as the medical diagnosis and management of eye disease.
Orthopaedics – branch of medicine that deals with conditions involving the musculoskeletal system.
Otorhinolaryngology – branch of medicine that deals with the ears, nose and throat.
Pathology – study of causes and pathogenesis of diseases.
Pediatrics – branch of medicine that deals with the general health and well-being of children and in some countries like the U.S. young adults.
Preventive medicine – measures taken for disease prevention, as opposed to disease treatment.
Psychiatry – branch of medicine that deals with the study, diagnosis, treatment, and prevention of mental disorders.
Pulmonology – branch of medicine that deals with the respiratory system.
Radiology – branch of medicine that employs medical imaging to diagnose and treat disease.
Sports medicine – branch of medicine that deals with physical fitness and the treatment and prevention of injuries related to sports and exercise.
Rheumatology – branch of medicine that deals with the diagnosis and treatment of rheumatic diseases.
Surgery – branch of medicine that uses operative techniques to investigate or treat both disease and injury, or to help improve bodily function or appearance.
Urology – branch of medicine that deals with the urinary system of both sexes and the male reproductive system
History of medicine
Prehistoric medicine
Homeopathy
Herbalism
Siddha medicine
Ayurveda
Ancient Egyptian medicine
Babylonian medicine
Ancient Iranian medicine
Traditional Chinese medicine
Jewish medicine
Greco-Roman medicine
Medicine in the medieval Islamic world
Medieval medicine of Western Europe
Medical biology
Medical biology
Fields of medical biology
Anatomy – study of the physical structure of organisms. In contrast to macroscopic or gross anatomy, cytology and histology are concerned with microscopic structures.
List of anatomical topics
List of bones of the human skeleton
List of homologues of the human reproductive system
List of human anatomical features
List of human anatomical parts named after people
List of human blood components
List of human hormones
List of human nerves
List of muscles of the human body
List of regions in the human brain
Biochemistry – study of the chemistry taking place in living organisms, especially the structure and function of their chemical components.
Bioinformatics
Biological engineering
Biophysics
Biostatistics – application of statistics to biological fields in the broadest sense. A knowledge of biostatistics is essential in the planning, evaluation, and interpretation of medical research. It is also fundamental to epidemiology and evidence-based medicine.
Biotechnology
Nanobiotechnology
Cell biology – microscopic study of individual cells.
Embryology – study of the early development of organisms.
Gene therapy
Genetics – study of genes, and their role in biological inheritance.
Cytogenetics
Histology – study of the structures of biological tissues by light microscopy, electron microscopy and immunohistochemistry.
Immunology – study of the immune system, which includes the innate and adaptive immune system in humans, for example.
Laboratory medical biology
Microbiology – study of microorganisms, including protozoa, bacteria, fungi, and viruses.
Molecular biology
Neuroscience (outline) – includes those disciplines of science that are related to the study of the nervous system. A main focus of neuroscience is the biology and physiology of the human brain and spinal cord.
Parasitology
Pathology – study of disease, including the causes, course, progression and resolution thereof.
Physiology – study of the normal functioning of the body and the underlying regulatory mechanisms.
Systems biology
Virology
Toxicology – study of hazardous effects of drugs and poisons.
and many others (typically, life sciences that pertain to medicine)
Illness (diseases and disorders)
Disease
Disability
List of cancer types
List of childhood diseases
List of diseases caused by insects
List of eponymous diseases
List of fictional diseases
List of food-borne illness outbreaks in the United States
List of genetic disorders
List of human parasitic diseases
List of illnesses related to poor nutrition
List of infectious diseases
List of infectious diseases causing flu-like syndrome
List of latent human viral infections
List of mental illnesses
List of neurological disorders
List of notifiable diseases
List of parasites (human)
List of skin-related conditions
List of systemic diseases with ocular manifestations
Medical practice
Practice of medicine
Physical examination
Diagnosis
Surgery
Medication
Drugs
Drugs
Drug
Pharmaceutical drug/ Medication
Recreational drug
List of anaesthetic drugs
List of antibiotics
List of antiviral drugs
List of bestselling drugs
List of drugs affected by grapefruit
List of drugs banned from the Olympics
List of controlled drugs in the United Kingdom
List of medical inhalants
List of monoclonal antibodies
List of psychedelic drugs
List of psychiatric medications
List of psychiatric medications by condition treated
List of schedules of controlled substances (USA)
List of Schedule I drugs
List of Schedule II drugs
List of Schedule III drugs
List of Schedule IV drugs
List of Schedule V drugs
List of withdrawn drugs
Medical equipment
Medical equipment
MRI
Computed axial tomography
Medical labs
Blood test
Medical facilities
Clinic
Hospice
List of hospice programs
Hospital
List of hospitals in the United States
List of burn centers in the United States
List of Veterans Affairs medical facilities
Medical education
Medical education – education related to the practice of being a medical practitioner; either the initial training to become a physician, additional training thereafter, and fellowship.
Medical school
List of medical schools
Internship
Residency
Fellowship
Medical research
Medical research
Clinical research (outline)
Medical jargon
Medical terminology
List of medical roots, suffixes and prefixes
Medical abbreviations and acronyms
Acronyms in healthcare
List of medical abbreviations: Overview
List of medical abbreviations: Latin abbreviations
List of abbreviations for diseases and disorders
List of abbreviations for medical organisations and personnel
List of abbreviations used in medical prescriptions
List of abbreviations used in health informatics
List of optometric abbreviations
Medical glossaries
Glossary of alternative medicine
Glossary of anatomical terminology, definitions and abbreviations
Glossary of clinical research
Glossary of communication disorders
Glossary of diabetes
Glossary of medical terms related to communications disorders
Glossary of medicine
Glossary of psychiatry
Medical organizations
List of medical organisations
List of LGBT medical organizations
List of pharmacy associations
Government agencies
Centers for Disease Control and Prevention (US)
Food and Drug Administration (US)
National Academy of Medicine (US)
National Institutes of Health (US)
Medical publications
List of important publications in medicine
List of medical journals
List of defunct medical journals
List of medical and health informatics journals
Persons influential in medicine
List of physicians
Medical scholars
The earliest known physician, Hesyre.
The first recorded female physician, Peseshet.
Esagil-kin-apli of Borsippa, a Babylonian physician who wrote the Diagnostic Handbook.
The Iranian chemist, Rhazes.
Avicenna, the philosopher and physician.
Greco-Roman medical scholars:
Hippocrates, commonly considered the father of modern medicine.
Galen, known for his ambitious surgeries.
Andreas Vesalius
Oribasius, a Byzantine who compiled medical knowledge.
Abu al-Qasim, an Islamic physician known as the father of modern surgery.
Medieval European medical scholars:
Theodoric Borgognoni, one of the most significant surgeons of the medieval period, responsible for introducing and promoting important surgical advances including basic antiseptic practice and the use of anaesthetics.
Guy de Chauliac, considered to be one of the earliest fathers of modern surgery, after the great Islamic surgeon, Abu al-Qasim.
Realdo Colombo, anatomist and surgeon who contributed to understanding of lesser circulation.
Michael Servetus, considered to be the first European to discover the pulmonary circulation of the blood.
Ambroise Paré suggested using ligatures instead of cauterisation and tested the bezoar stone.
William Harvey describes blood circulation.
John Hunter, surgeon.
Amato Lusitano described venous valves and guessed their function.
Garcia de Orta first to describe Cholera and other tropical diseases and herbal treatments
Percivall Pott, surgeon.
Sir Thomas Browne physician and medical neologist.
Thomas Sydenham physician and so-called "English Hippocrates."
Kuan Huang, who studied abroad and brought his techniques back to his homeland, China.
Ignaz Semmelweis, who studied and decreased the incidence of childbed fever.
Louis Pasteur and Robert Koch founded bacteriology.
Alexander Fleming, whose accidental discovery of penicillin advanced the field of antibiotics.
Pioneers in medicine
Wilhelm Röntgen discovered x-rays, earning the first Nobel Prize in Physics in 1901, "in recognition of the extraordinary services he has rendered by the discovery of the remarkable rays (or x-rays)," and invented radiography.
Christiaan Barnard performed the first heart transplant
Ian Donald pioneered the use of the ultrasound scan, which led to its use as a diagnostic tool.
Sir Godfrey Hounsfield invented the computed tomography (CT) scanner, sharing the 1979 Nobel Prize in Physiology or Medicine with Allan M. Cormack, "for the development of computer assisted tomography."
Sir Peter Mansfield invented the MRI scanner, sharing the 2003 Nobel Prize in Physiology or Medicine with Paul Lauterbur for their "discoveries concerning magnetic resonance imaging."
Robert Jarvik, inventor of the artificial heart.
Anthony Atala, creator of the first lab-grown organ, an artificial urinary bladder.
General concepts in medicine
Epidemiology – study of the demographics of disease processes, and includes, but is not limited to, the study of epidemics.
Nutrition – study of the relationship of food and drink to health and disease, especially in determining an optimal diet. Medical nutrition therapy is done by dietitians and is prescribed for diabetes, cardiovascular diseases, weight and eating disorders, allergies, malnutrition, and neoplastic diseases.
Pharmacology – study of drugs and their actions.
Psychology – an academic and applied discipline that involves the scientific study of mental functions and behaviors.
Outline of nutrition
List of macronutrients
List of micronutrients
Outline of emergency medicine
List of emergency medicine courses
List of surgical procedures
List of eye surgical procedures
List of disabilities
List of disability-related terms with negative connotations
List of medical emergencies
List of eponymous fractures
List of AIDS-related topics
List of clinically important bacteria
List of distinct cell types in the adult human body
List of eponymous medical signs
List of life extension-related topics
List of medical inhalants
List of medical symptoms
List of oncology-related terms
List of oral health and dental topics
List of pharmaceutical companies
List of psychotherapies
List of vaccine topics
Outline of autism
Outline of exercise
Outline of obstetrics (pregnancy and childbirth)
Outline of psychology
Pharmacology, for list of medicinal substances
See also
Health
Outline of health
Outline of health sciences
External links
NLM (US National Library of Medicine, contains resources for patients and health care professionals)
U.S. National Library of Medicine
MedicineNet.com
Science-Based Medicine – exploring issues and controversies in science and medicine.
WebMD Health topics A-Z
Outline
Medicine
Medicine | Outline of medicine | [
"Biology"
] | 2,738 | [
"Medicine"
] |
14,471,620 | https://en.wikipedia.org/wiki/Outline%20of%20acoustics | The following outline is provided as an overview of and topical guide to acoustics:
Acoustics – interdisciplinary science that deals with the study of all mechanical waves in gases, liquids, and solids including topics such as vibration, sound, ultrasound and infrasound. A scientist who works in the field of acoustics is an acoustician while someone working in the field of acoustics technology may be called an acoustical engineer. The application of acoustics is present in almost all aspects of modern society with the most obvious being the audio and noise control industries.
History of acoustics
Branches of acoustics
Archaeoacoustics – study of sound within archaeology. This typically involves studying the acoustics of archaeological sites and artefacts.
Aeroacoustics – study of noise generated by air movement, for instance via turbulence, and the movement of sound through the fluid air. This knowledge is applied in acoustical engineering to study how to quieten aircraft. Aeroacoustics is important to understanding how wind musical instruments work.
Architectural acoustics – science of how to achieve a good sound within a building. It typically involves the study of speech intelligibility, speech privacy and music quality in the built environment. Also known as building acoustics.
Bioacoustics – scientific study of the hearing and calls of animals, as well as how animals are affected by the acoustics and sounds of their habitat.
Electroacoustics – concerned with the recording, manipulation and reproduction of audio using electronics. This might include products such as mobile phones, large scale public address systems or virtual reality systems in research laboratories.
Environmental noise – concerned with noise and vibration caused by railways, road traffic, aircraft, industrial equipment and recreational activities. The main aim of these studies is to reduce levels of environmental noise and vibration. Research work now also has a focus on the positive use of sound in urban and natural environments: soundscapes and tranquility.
Musical acoustics – study of the physics of acoustic instruments; the audio signal processing used in electronic music; the computer analysis of music and composition, and the perception and cognitive neuroscience of music.
Psychoacoustics – study of how humans respond to sounds.
Acoustic signal processing – electronic manipulation of acoustic signals. Applications include: active noise control; design for hearing aids or cochlear implants; echo cancellation; music information retrieval, and perceptual coding (e.g. MP3 or Opus).
Acoustics of speech – acousticians study the production, processing and perception of speech. Speech recognition and Speech synthesis are two important areas of speech processing using computers. The subject also overlaps with the disciplines of physics, physiology, psychology, and linguistics.
Ultrasound – Ultrasonics deals with sounds at frequencies too high to be heard by humans. Specialisms include medical ultrasonics (including medical ultrasonography), sonochemistry, material characterisation and underwater acoustics (Sonar).
Underwater acoustics – scientific study of natural and man-made sounds underwater. Applications include sonar to locate submarines, underwater communication by whales, climate change monitoring by measuring sea temperatures acoustically, sonic weapons, and marine bioacoustics.
Acoustics of vibration – study of how mechanical systems vibrate and interact with their surroundings. Applications might include: ground vibrations from railways; vibration isolation to reduce vibration in operating theatres; studying how vibration can damage health (vibration white finger); vibration control to protect a building from earthquakes, or measuring how structure-borne sound moves through buildings.
Acoustic software
Baudline
Beatmapping
Composers Desktop Project
Diamond Cut Audio Restoration Tools
Enhanced Acoustic Simulator for Engineers
Kyma (sound design language)
NU-Tech
Scratch Live
Unit generator
Vinyl emulation software
Acoustics organizations
Acoustics publications
Applied Acoustics
Journal of Sound and Vibration
Journal of the Acoustical Society of America
Ultrasonics
Influential acousticians
Christian Andreas Doppler
Lord Rayleigh
James Lighthill
See also
Sound
Wave
References
External links
Acoustical Society of America
Institute of Acoustic in UK
National Council of Acoustical Consultants
International Commission for Acoustics
Institute of Noise Control Engineers
Acoustics
Acoustics
Acoustics | Outline of acoustics | [
"Physics"
] | 820 | [
"Classical mechanics",
"Acoustics"
] |
14,471,804 | https://en.wikipedia.org/wiki/Deconditioning | Deconditioning is adaptation of an organism to a less demanding environment, or, alternatively, the decrease of physiological adaptation to normal conditions. Deconditioning can result from decreased physical activity, prescribed bed rest, orthopedic casting, paralysis, aging. A particular interest in the study of deconditioning is in aerospace medicine, to diagnose, fight, and prevent adverse effects of the conditions of space flight.
Deconditioning due to decreased physical effort results in muscle loss, including heart muscles.
Deconditioning due to lack of gravity or non-standard gravity action (e.g., during bed rest) results in abnormal distribution of body fluids.
See also
Atrophy
Effect of spaceflight on the human body
Long COVID
References
Physiology | Deconditioning | [
"Biology"
] | 158 | [
"Physiology"
] |
14,472,315 | https://en.wikipedia.org/wiki/Caterpillar%20930G | The Caterpillar 930G is a hydraulic front end loader manufactured by Caterpillar Inc. The 930G, with of net flywheel power at 2300 rpm, it is classified as a small wheeled loader in the line of Caterpillar's excavators. The MSRP of a standard 930G is $145,400.
Specifications
Engine
Net Flywheel Power: 149 hp (110 kW)
Net Power (ISO 9249)(1997): 150 hp (111 kW)
Net Power (SAE J1349): 149 hp (110 kW)
Net Power (EEC 80/1269): 150 hp (111 kW)
Weights
Operating Weight: 28,725 lb (13,029 kg)
Maximum Weight: 29,044 lb (13,174 kg)
Optional Counterweight: 470 kg (1040 lb)
Attachments and work tools
Angle blades
Angle broom
High dump and rollout buckets
Loader rakes
Log and lumber forks
Material handling buckets
Multi-purpose buckets
Pallet forks
Pickup broom
Reversible plows
Side dump buckets
Top clamp buckets
Woodchip buckets
References
930G
Construction equipment | Caterpillar 930G | [
"Engineering"
] | 240 | [
"Construction equipment",
"Construction",
"Engineering vehicles",
"Caterpillar Inc. vehicles",
"Industrial machinery"
] |
14,472,947 | https://en.wikipedia.org/wiki/Sacral%20nerve%20stimulation | Sacral nerve stimulation, also termed sacral neuromodulation, is a type of medical electrical stimulation therapy.
It typically involves the implantation of a programmable stimulator subcutaneously, which delivers low amplitude electrical stimulation via a lead to the sacral nerve, usually accessed via the S3 foramen.
The U.S. Food and Drug Administration has approved InterStim Therapy, by Medtronic, as a sacral nerve stimulator for treatment of urinary incontinence, high urinary frequency and urinary retention. Sacral nerve stimulation is also under investigation as treatment for other conditions, including constipation brought on by nerve damage due to surgical procedures. An experimental procedure for constipation in children is being conducted in Nationwide Children's Hospital.
In the event that the nerves and the brain are no longer communicating effectively, resulting in a bowel/bladder disorder, this type of treatment is designed to imitate a signal sent via the central nervous system.
One of the major nerve routes is from the brain, along the spinal cord and through the back. This is commonly referred to as the sacral area. This area controls the everyday function of the pelvic floor, urethral sphincter, bladder and bowel. By stimulating the sacral nerve (located in the lower back), a signal is sent that manipulates a contraction within the pelvic floor. Over time these contractions rebuild the strength of the organs and muscles within it. This effectively alleviates all symptoms of urinary/faecal disorders, and in many cases eliminates them completely.
Medical uses
Urge incontinence
Many studies have been initiated using the sacral nerve stimulation (SNS) technique to treat patients who suffer from urinary problems. When applying this procedure, proper patient screening is essential, because some disorders that affect the urinary tract (like bladder calculus or carcinoma in situ) have to be treated differently. Once a patient is selected, a temporary external pulse generator connected to wire leads at the S3 foramina is worn for 1–2 weeks. If the patient's symptoms improve by more than 50%, the permanent wire leads and stimulator are implanted in the subcutaneous tissue of the hip. The first follow-up happens 1–2 weeks later to check whether the permanent device is improving the patient's symptoms and to program the pulse generator adequately.
Bleeding, infection, pain, and unwanted stimulation in the extremities are some of the complications resulting from this therapy. Currently, battery replacements are necessary 5–10 years after implantation, depending upon the strength of the stimulation therapy. (The newest InterStim's battery can be wirelessly recharged (roughly weekly) using a paddle placed against the skin outside the implant.) This procedure has shown a long-term success rate that ranges from 50% to 90%, and one study concluded that it was a good option for patients with lower urinary tract dysfunction refractory to conservative and pharmacological interventions.
Fecal incontinence
Fecal incontinence, the involuntary loss of stool and flatus release afflicting mainly elderly people, can also be treated with sacral nerve stimulation as long as patients have intact sphincter muscles. The FDA approved the approach for treating the fecal incontinence in March 2011. The etiology is not well understood yet and both conservative treatments (like antidiarrheics, special diet and biofeedback) and surgical treatments for this disorder are not regarded as ideal options.
Pascual et al. (2011) reviewed the follow-up results of the first 50 people who underwent sacral nerve stimulation (SNS) to treat fecal incontinence in Madrid (Spain). The most common causes of the fecal incontinence were obstetric procedures, idiopathic origin, and prior anal surgery, and all these people were refractory to conservative treatment. The procedure consisted of placing a temporary pulse generator connected to a unilateral electrode at the S3 or S4 foramen for 2–4 weeks. After it was confirmed that the SNS was decreasing the incontinence episodes, the patients received the definitive electrode and pulse generator, which was implanted in the gluteus or in the abdomen. Two patients did not show improvement in the first step and did not receive the definitive stimulator. Mean follow-up was 17.02 months, and during this time the patients showed improvement in voluntary contraction pressure and a reduction of incontinence episodes. Complications comprised two cases of infection, two cases of pain, and one broken electrode. Therefore, although the reason the SNS is effective is unknown, the procedure had satisfactory results in these clinical cases with a low incidence of complications, and the study concluded that it was a good option for treatment of anal incontinence.
Limited evidence from a Cochrane review of randomised controlled trials suggests that sacral nerve stimulation may help to reduce fecal incontinence.
Method
TENS (transcutaneous electrical nerve stimulation) was patented and first used in 1974 for pain relief. TENS is non-invasive; it sends electric current through electrodes placed directly on the skin. Although predominantly carried out as a percutaneous procedure, it is possible to apply sacral nerve stimulation with the use of these external electrodes. It is not known if TENS helps with chronic pain in people with fibromyalgia or neuropathic pain. There are currently no studies into the efficacy of this approach for an overactive bladder and other associated symptoms of urinary incontinence. However, in a report published in Gut (an international peer-reviewed journal for health professionals and researchers in gastroenterology and hepatology), it was found that 20% of the group tested achieved complete continence. All others saw a significant reduction in the frequency of FI episodes and an improvement in the ability to defer defecation.
The first percutaneous sacral nerve stimulation study was performed in 1988. By penetrating the skin, sacral nerve stimulation aims to give a direct and localized electric current to specific nerves in order to elicit a favored response. Today it is one of the most common neuromodulation techniques.
Percutaneous procedure
Patients interested in getting a sacral nerve stimulator implanted in them because less severe methods have failed all must go through a trial for their own safety, known as the PNE (percutaneous nerve evaluation). PNE involves inserting a temporary electrode to the left or right of the S3 posterior foramen. This electrode is connected to an external pulse generator, which generates a signal for 3–5 days. If this neuromodulation has positive results for the patient, the option of implanting a permanent electrode for permanent sacral neuromodulation is possible.
The procedure has low level of invasiveness, as all incisions are relatively small. A pulse generator is implanted in a subcutaneous pocket in the upper, outer quadrant of the buttock or even the lower abdomen. The generator is attached to a thin lead wire with a small electrode tip which is anchored near the sacral nerve.
The most common postoperative complaints are pain and lead migration. In most studies, 5–10% of subjects need post-operative correction of lead migration, but since leads can be anchored near the sacral nerve, subsequent operations are generally unnecessary.
Mechanism
Stimulation of the sacral nerve causes contraction of the external sphincter and pelvic floor muscles, which in turn inhibits the bladder contractions that may involuntarily release urine. Researchers currently believe that sacral neuromodulation blocks the C-afferent fibers, which form a critical part of the afferent limb of a pathological reflex arc believed to be responsible for incontinence.
See also
Urinary incontinence
Fecal incontinence
Transcutaneous electrical nerve stimulation (TENS)
Electrical muscle stimulation
References
Bibliography
External links
Fecal Incontinence
Neurotechnology | Sacral nerve stimulation | [
"Biology"
] | 1,669 | [
"Incontinence",
"Excretion"
] |
14,473,130 | https://en.wikipedia.org/wiki/UX%20Tauri | UX Tauri, abbreviated as UX Tau, is a binary star system approximately 450 light-years away in the constellation of Taurus (the Bull). It is notable for the fact that, despite its recent (in stellar terms) creation, the Spitzer Space Telescope discovered that its protoplanetary disk contains a gap. The dust, which normally accumulates in an expanding ring starting right next to the star at such a young age, is either very thin or nonexistent at a range of 0.2 to 56 AU from the star. Typically, this means that the early ancestors of planets may be forming from the disk, though the star only ignited about 1 million years ago. In contrast, Earth was formed approximately 4.54 billion years ago, placing its formation about sixty million years after the Sun's ignition around 4.6 billion years ago.
See also
HD 98800
Vega
V4046 Sagittarii
References
External links
Circumstellar disks
Binary stars
T Tauri stars
Taurus (constellation)
Hypothetical planetary systems
Tauri, UX
K-type main-sequence stars
M-type main-sequence stars
Emission-line stars
285846
020990 | UX Tauri | [
"Astronomy"
] | 245 | [
"Taurus (constellation)",
"Constellations"
] |
14,473,878 | https://en.wikipedia.org/wiki/Human%E2%80%93computer%20information%20retrieval | Human–computer information retrieval (HCIR) is the study and engineering of information retrieval techniques that bring human intelligence into the search process. It combines the fields of human-computer interaction (HCI) and information retrieval (IR) and creates systems that improve search by taking into account the human context, or through a multi-step search process that provides the opportunity for human feedback.
History
This term human–computer information retrieval was coined by Gary Marchionini in a series of lectures delivered between 2004 and 2006. Marchionini's main thesis is that "HCIR aims to empower people to explore large-scale information bases but demands that people also take responsibility for this control by expending cognitive and physical energy."
In 1996 and 1998, a pair of workshops at the University of Glasgow on information retrieval and human–computer interaction sought to address the overlap between these two fields. Marchionini notes the impact of the World Wide Web and the sudden increase in information literacy – changes that were only embryonic in the late 1990s.
A few workshops have focused on the intersection of IR and HCI. The Workshop on Exploratory Search, initiated by the University of Maryland Human-Computer Interaction Lab in 2005, alternates between the Association for Computing Machinery Special Interest Group on Information Retrieval (SIGIR) and Special Interest Group on Computer-Human Interaction (CHI) conferences. Also in 2005, the European Science Foundation held an Exploratory Workshop on Information Retrieval in Context. Then, the first Workshop on Human Computer Information Retrieval was held in 2007 at the Massachusetts Institute of Technology.
Description
HCIR includes various aspects of IR and HCI. These include exploratory search, in which users generally combine querying and browsing strategies to foster learning and investigation; information retrieval in context (i.e., taking into account aspects of the user or environment that are typically not reflected in a query); and interactive information retrieval, which Peter Ingwersen defines as "the interactive communication processes that occur during the retrieval of information by involving all the major participants in information retrieval (IR), i.e. the user, the intermediary, and the IR system."
A key concern of HCIR is that IR systems intended for human users be implemented and evaluated in a way that reflects the needs of those users.
Most modern IR systems employ a ranked retrieval model, in which the documents are scored based on the probability of the document's relevance to the query. In this model, the system presents only the top-ranked documents to the user. These systems are typically evaluated based on their mean average precision over a set of benchmark queries from organizations like the Text Retrieval Conference (TREC).
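As an illustration (not drawn from any particular benchmark), the Python sketch below computes the average precision of a single ranked result list from hypothetical document IDs and relevance judgments; averaging this score over a set of benchmark queries gives the mean average precision used to compare systems.

def average_precision(ranked_ids, relevant_ids):
    # Mean of precision@k taken at each rank where a relevant document appears.
    relevant_ids = set(relevant_ids)
    hits, precisions = 0, []
    for k, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0

# Hypothetical ranking returned by a system and the judged relevant set for one query.
print(average_precision(["d3", "d7", "d1", "d9"], {"d3", "d1", "d5"}))  # about 0.56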
Because of its emphasis on using human intelligence in the information retrieval process, HCIR requires a different evaluation model – one that combines evaluation of the IR and HCI components of the system. A key area of research in HCIR involves the evaluation of these systems. Early work on interactive information retrieval, such as Juergen Koenemann and Nicholas J. Belkin's 1996 study of different levels of interaction for automatic query reformulation, leverages the standard IR measures of precision and recall but applies them to the results of multiple iterations of user interaction, rather than to a single query response. Other HCIR research, such as Pia Borlund's IIR evaluation model, applies a methodology more reminiscent of HCI, focusing on the characteristics of users, the details of experimental design, and so on.
Goals
HCIR researchers have put forth the following goals towards a system where the user has more control in determining relevant results.
Systems should
no longer only deliver the relevant documents, but must also provide semantic information along with those documents
increase user responsibility as well as control; that is, information systems require human intellectual effort
have flexible architectures so they may evolve and adapt to increasingly more demanding and knowledgeable user bases
aim to be part of information ecology of personal and shared memories and tools rather than discrete standalone services
support the entire information life cycle (from creation to preservation) rather than only the dissemination or use phase
support tuning by end users and especially by information professionals who add value to information resources
be engaging and fun to use
In short, information retrieval systems are expected to operate in the way that good libraries do. Systems should help users to bridge the gap between data or information (in the very narrow, granular sense of these terms) and knowledge (processed data or information that provides the context necessary to inform the next iteration of an information seeking process). That is, good libraries provide both the information a patron needs and a partner in the learning process — the information professional — to navigate that information, make sense of it, preserve it, and turn it into knowledge (which in turn creates new, more informed information needs).
Techniques
The techniques associated with HCIR emphasize representations of information that use human intelligence to lead the user to relevant results. These techniques also strive to allow users to explore and digest the dataset without penalty, i.e., without expending unnecessary costs of time, mouse clicks, or context shift.
Many search engines have features that incorporate HCIR techniques. Spelling suggestions and automatic query reformulation provide mechanisms for suggesting potential search paths that can lead the user to relevant results. These suggestions are presented to the user, putting control of selection and interpretation in the user's hands.
Faceted search enables users to navigate information hierarchically, going from a category to its sub-categories, but choosing the order in which the categories are presented. This contrasts with traditional taxonomies in which the hierarchy of categories is fixed and unchanging. Faceted navigation, like taxonomic navigation, guides users by showing them available categories (or facets), but does not require them to browse through a hierarchy that may not precisely suit their needs or way of thinking.
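A minimal sketch of the counting step behind a faceted interface follows; the records and facet names are invented for illustration. For each facet, the system tallies how many matching items carry each value, so the user can see which categories are available before narrowing the result set.

from collections import Counter

# Hypothetical search results, each tagged with values for several facets.
results = [
    {"format": "book",    "language": "English", "decade": "1990s"},
    {"format": "book",    "language": "French",  "decade": "2000s"},
    {"format": "article", "language": "English", "decade": "2000s"},
]

facet_counts = {
    facet: Counter(item[facet] for item in results)
    for facet in ("format", "language", "decade")
}
# facet_counts["language"] == Counter({"English": 2, "French": 1})

# Choosing one facet value simply filters the result set; counts are then recomputed.
narrowed = [item for item in results if item["language"] == "English"]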
Lookahead provides a general approach to penalty-free exploration. For example, various web applications employ AJAX to automatically complete query terms and suggest popular searches. Another common example of lookahead is the way in which search engines annotate results with summary information about those results, including both static information (e.g., metadata about the objects) and "snippets" of document text that are most pertinent to the words in the search query.
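The query-completion form of lookahead can be sketched as a prefix match against previously seen queries ranked by popularity; the query log below is invented for illustration.

# Hypothetical query log mapping past queries to their frequencies.
query_log = {"java tutorial": 120, "java island": 45, "javascript": 300, "jazz": 80}

def complete(prefix, log, limit=3):
    # Return the most popular logged queries that start with the typed prefix.
    matches = [(q, n) for q, n in log.items() if q.startswith(prefix)]
    matches.sort(key=lambda pair: pair[1], reverse=True)
    return [q for q, _ in matches[:limit]]

print(complete("jav", query_log))  # ['javascript', 'java tutorial', 'java island']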
Relevance feedback allows users to guide an IR system by indicating whether particular results are more or less relevant.
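One classical way of acting on such feedback is Rocchio query reformulation, sketched below with toy term-weight vectors and arbitrarily chosen coefficients; the reweighted query moves toward documents the user marked relevant and away from those marked non-relevant.

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    # Return a reweighted query vector (term -> weight) after user feedback.
    terms = set(query) | {t for doc in relevant + nonrelevant for t in doc}
    def centroid(docs, term):
        return sum(doc.get(term, 0.0) for doc in docs) / len(docs) if docs else 0.0
    return {t: alpha * query.get(t, 0.0)
               + beta * centroid(relevant, t)
               - gamma * centroid(nonrelevant, t)
            for t in terms}

# Toy example: feedback boosts "coffee" and penalises "island" for the query "java".
new_query = rocchio({"java": 1.0},
                    relevant=[{"java": 1.0, "coffee": 0.8}],
                    nonrelevant=[{"java": 1.0, "island": 0.9}])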
Summarization and analytics help users digest the results that come back from the query. Summarization here is intended to encompass any means of aggregating or compressing the query results into a more human-consumable form. Faceted search, described above, is one such form of summarization. Another is clustering, which analyzes a set of documents by grouping similar or co-occurring documents or terms. Clustering allows the results to be partitioned into groups of related documents. For example, a search for "java" might return clusters for Java (programming language), Java (island), or Java (coffee).
Visual representation of data is also considered a key aspect of HCIR. The representation of summarization or analytics may be displayed as tables, charts, or summaries of aggregated data. Other kinds of information visualization that allow users access to summary views of search results include tag clouds and treemapping.
Related areas
Exploratory video search
Information foraging
References
External links
Information retrieval genres
Human–computer interaction | Human–computer information retrieval | [
"Engineering"
] | 1,515 | [
"Human–computer interaction",
"Human–machine interaction"
] |
14,474,114 | https://en.wikipedia.org/wiki/Probabilistic%20causation | Probabilistic causation is a concept in a group of philosophical theories that aim to characterize the relationship between cause and effect using the tools of probability theory. The central idea behind these theories is that causes raise the probabilities of their effects, all else being equal.
Deterministic versus probabilistic theory
Interpreting causation as a deterministic relation means that if A causes B, then A must always be followed by B. In this sense, war does not cause deaths, nor does smoking cause cancer. As a result, many turn to a notion of probabilistic causation. Informally, A probabilistically causes B if A's occurrence increases the probability of B. This is sometimes interpreted to reflect imperfect knowledge of a deterministic system, but other times interpreted to mean that the causal system under study has an inherently indeterministic nature. (Propensity probability is an analogous idea, according to which probabilities have an objective existence and are not just limitations in a subject's knowledge).
Philosophers such as Hugh Mellor and Patrick Suppes have defined causation in terms of a cause preceding and increasing the probability of the effect. (Additionally, Mellor claims that cause and effect are both facts - not events - since even a non-event, such as the failure of a train to arrive, can cause effects such as my taking the bus. Suppes, by contrast, relies on events defined set-theoretically, and much of his discussion is informed by this terminology.)
Pearl argues that the entire enterprise of probabilistic causation has been misguided from the very beginning, because the central notion that causes "raise the probabilities" of their effects cannot be expressed in the language of probability theory. In particular, the inequality Pr(effect | cause) > Pr(effect | ~cause) which philosophers invoked to define causation, as well as its many variations and nuances, fails to capture the intuition behind "probability raising", which is inherently a manipulative or counterfactual notion.
The correct formulation, according to Pearl, should read:
Pr(effect | do(cause)) > Pr(effect | do(~cause))
where do(C) stands for an external intervention that compels the truth of C. The conditional probability Pr(E | C), in contrast, represents a probability resulting from a passive observation of C, and rarely coincides with Pr(E | do(C)). Indeed, observing the barometer falling increases the probability of a storm coming, but does not "cause" the storm; were the act of manipulating the barometer to change the probability of storms, the falling barometer would qualify as a cause of storms. In general, formulating the notion of "probability raising" within the calculus of do-operators resolves the difficulties that probabilistic causation has encountered in the past half-century (Cartwright, N. (1989). Nature's Capacities and Their Measurement, Clarendon Press, Oxford), among them the infamous Simpson's paradox, and clarifies precisely what relationships exist between probabilities and causation.
The establishing of cause and effect, even with this relaxed reading, is notoriously difficult, expressed by the widely accepted statement "Correlation does not imply causation". For instance, the observation that smokers have a dramatically increased lung cancer rate does not establish that smoking must be a cause of that increased cancer rate: maybe there exists a certain genetic defect which both causes cancer and a yearning for nicotine; or even perhaps nicotine craving is a symptom of very early-stage lung cancer which is not otherwise detectable. Scientists are always seeking the exact mechanisms by which Event A produces Event B. But scientists also are comfortable making a statement like, "Smoking probably causes cancer," when the statistical correlation between the two, according to probability theory, is far greater than chance. In this dual approach, scientists accept both deterministic and probabilistic causation in their terminology.
In statistics, it is generally accepted that observational studies (like counting cancer cases among smokers and among non-smokers and then comparing the two) can give hints, but can never establish cause and effect. Often, however, qualitative causal assumptions (e.g., absence of causation between some variables) may permit the derivation of consistent
causal effect estimates from observational studies.
The gold standard for causation here is the randomized experiment: take a large number of people, randomly divide them into two groups, force one group to smoke and prohibit the other group from smoking, then determine whether one group develops a significantly higher lung cancer rate. Random assignment plays a crucial role in the inference to causation because, in the long run, it renders the two groups equivalent in terms of all other possible effects on the outcome (cancer) so that any changes in the outcome will reflect only the manipulation (smoking). Obviously, for ethical reasons this experiment cannot be performed, but the method is widely applicable for less damaging experiments. One limitation of experiments, however, is that whereas they do a good job of testing for the presence of some causal effect they do less well at estimating the size of that effect in a population of interest. (This is a common criticism of studies of safety of food additives that use doses much higher than people consuming the product would actually ingest.)
Closed versus open systems
In a closed system, the data may suggest that the cause A * B precedes the effect C within a defined interval of time τ. Such a relationship can support a causal claim with confidence bounded by τ. In an open system, however, the same relationship may not hold with that confidence, because uncontrolled factors may affect the result.
An example would be a system of A, B and C, where A, B and C are known. Characteristics are below and limited to a given time (such as 50 ms, or 50 hours). "^" means "not", "*" means "and":
^A * ^B => ^C (99.9999998027%)
A * ^B => ^C (99.9999998027%)
^A * B => ^C (99.9999998027%)
A * B => C (99.9999998027%)
One can reasonably claim, within six standard deviations, that A * B causes C within the given time boundary (such as 50 ms, or 50 hours) if and only if A, B and C are the only parts of the system in question. Any result outside of this may be considered a deviation.
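The percentage quoted above appears to correspond to the two-sided coverage of six standard deviations of a normal distribution; a quick check, assuming SciPy is available:

```python
from scipy.stats import norm

# Probability mass of a normal distribution within +/- 6 standard deviations.
coverage = norm.cdf(6) - norm.cdf(-6)
print(f"{coverage:.12%}")   # ~99.999999802682%, i.e. the 99.9999998027% figure quoted above
```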
See also
Bayesian inference
Notes
References
Causality
Causal inference
causation | Probabilistic causation | [
"Physics"
] | 1,402 | [] |
14,474,795 | https://en.wikipedia.org/wiki/Community%20Broadband%20Bill | The Community Broadband Act was a bill (proposed law) that was never enacted into legislation by the U.S. Senate,110th Congress The act was intended to promote affordable broadband access by allowing municipal governments to provide telecommunications capability and services.
Supporters of the bill believed it would have encouraged widespread broadband development in the United States by overturning existing state bans on public broadband deployments and eliminating existing barriers to broadband development.
Acquiring municipal broadband for some communities is problematic because the laws of certain states prohibit local municipalities from installing their own broadband networks and private sector companies are unable to provide the electric services needed for broadband. As a result, many rural and remote communities are left without broadband services. Some municipalities may find broadband service, but it may be limited to already available commercial options, which may fall short of community needs.
Bill
Specific provisions of the bill:
Prevent State governments from enforcing or adopting laws that would prohibit municipalities from providing broadband services
Encourage the development of public-private partnerships to spread the use of broadband services
Initiate notice requirements about broadband deployment to ensure the public has adequate information available to evaluate options
Give private providers the opportunity to provide alternative broadband services
Ensure public and private providers of broadband services are treated equally with respect to the laws, guidelines and policies that apply to all providers of broadband services
Economic impact
The onset of free or low-cost municipal broadband access to citizens in competition with commercial broadband services would have economic implications.
The Senate Committee on Commerce, Science and Transportation on October 30, 2007 estimated enacting the community broadband bill would have no significant impact on the federal budget. Because the act would preempt laws in 15 states that presently ban the provision of broadband services by public entities, including municipalities, it would, however, impose mandates on some state and local governments. In accordance with the act, public providers would be required to publish notice of their intent to offer broadband services. Public providers would also be required to provide details about the types of broadband services they intend to offer in addition to allowing private bids for those services.
Since the preemption laws and private bidding requirements would be considered intergovernmental mandates as defined by the UMRA, the Senate Committee on Commerce, Science and Transportation determined that the cost of the mandates would not exceed the threshold established by UMRA, which is adjusted yearly for inflation. In 2007 the UMRA threshold was $66 million.
Background
Bush Administration
Although broadband access is a national problem, it must be addressed at the local level. Acknowledging the importance of broadband in the increasingly competitive global economy, President Bush in June 2004 set a goal of universal and affordable broadband access for every American by 2007. However, the United States was still far behind in reaching this goal. The Organisation for Economic Co-operation and Development conducted a study ranking the U.S. in 12th place worldwide in the percentage of people with broadband connections. The majority of nations in the top ranks were shown to have successfully used public-private partnerships to provide broadband access for citizens and businesses alike.
Proposal
On July 27, 2007, Senator Frank Lautenberg introduced the legislation, noting the importance of broadband services and how they are essential to providing important educational and economic opportunities, especially for rural areas. By providing public-private broadband service partnerships, the bill would make it easier for municipalities, cities, and towns across the nation to offer broadband access to their residents.
Telecommunications Industry
Despite the advocacy for public-private broadband partnerships and service, many telecommunication firms sought to bar the enactment of a community broadband act, arguing that government-backed networks would compete unfairly with private companies and would require heavy taxpayer subsidization that would minimize net benefits to local residents. Douglas Boone, chief executive of Premier Communications, speaking with the U.S. Telecom Association, said "setting up a government-owned network is like having city hall opening a chain of grocery stores or gas stations." Critics also argued that government-backed broadband might stifle innovation and inhibit technological advancement.
There are also telecommunication companies, such as EarthLink, that support universal community broadband deployments, believing that partnerships between governments and private companies can provide low-cost, high-speed citywide service that offers many advantages to residents, visitors and taxpayers. EarthLink Municipal Networks, a subsidiary created to design and implement wireless broadband services, held the country's biggest municipal ISP contracts, covering Philadelphia and Anaheim, California. The partnerships with EarthLink are a prime example of how affordable broadband service can be provided to low-income neighborhoods that would otherwise be passed over by private companies.
Community Broadband Coalition
On September 21, 2007, the Community Broadband Coalition, formed by trade associations, public interest organizations, and private companies with an interest in enhancing the availability of broadband services throughout the country, submitted a congressional letter in support of the bill. The letter urged other senators to cosponsor the bipartisan bill, pointing out the benefits of adopting community broadband networks.
Increased economic development and jobs, enhancing market competition
Improved and accelerated delivery of e-government services
Universal, affordable Internet access for all Americans.
Major organizations included in the Community Broadband Coalition letter in support of The Community Broadband Act:
ACUTA
American Association of Law Libraries
American Library Association
American Public Power Association
Association of Research Libraries
EDUCAUSE
Free Press
Google
Intel
Media Access Project
National Association of Counties
National Association of Telecommunications Officers and Advisors (NATOA)
Tropos Networks
Utah Telecommunication Open Infrastructure Agency (UTOPIA)
XO Communications
Cosponsors
United States senators who cosponsored the bill in the 110th United States Congress
Gordon H. Smith
John Kerry
John McCain
Claire McCaskill
Olympia Snowe
Ted Stevens
Daniel Inouye
Russell Feingold
See also
Telecommunications Act of 2005
municipal broadband
References
Proposed legislation of the 110th United States Congress
Telecommunications law
Computer law
Broadband | Community Broadband Bill | [
"Technology"
] | 1,140 | [
"Computer law",
"Computing and society"
] |
14,474,893 | https://en.wikipedia.org/wiki/Institut%20de%20Recherche%20en%20Astrophysique%20et%20Plan%C3%A9tologie | The Institut de Recherche en Astrophysique et Planétologie (IRAP), formerly the Centre d'Etude Spatiale des Rayonnements (CESR), is a French laboratory of space astrophysics. It is located in Toulouse. The center's main areas of investigation are: space plasmas, planetology, the high energy universe, and the cold universe.
The center is jointly operated by CNRS and Toulouse's Paul Sabatier University, and was opened on 1 January 2011.
Projects
The ChemCam instrument on the Curiosity rover (Mars Science Laboratory) was developed by CESR in conjunction with the Los Alamos National Laboratory. It landed on the planet Mars in August 2012.
The SuperCam instrument on the Perseverance rover (Mars 2020) was developed by IRAP in conjunction with the Los Alamos National Laboratory. It landed on the planet Mars in February 2021.
See also
Observatoire Midi-Pyrénées
References
External links
Official website
CNES
French National Centre for Scientific Research
Space science organizations
2011 establishments in France | Institut de Recherche en Astrophysique et Planétologie | [
"Astronomy"
] | 216 | [
"Outer space stubs",
"Outer space",
"Astronomy stubs"
] |
13,300,928 | https://en.wikipedia.org/wiki/Guajataca%20Lake | Guajataca Lake, or Lago Guajataca, is a reservoir of the Guajataca River created by the Puerto Rico Electric Power Authority in 1929. It is located between the municipalities of San Sebastián, Quebradillas, and Isabela in Puerto Rico, and receives most of its water from the Rio Guajataca and Rio Chiquito de Cibao rivers. The lake primarily functions as a water reservoir as well as for recreational activities such as boating and fishing. Various species of fish such as peacock bass, largemouth bass, sunfish, perch, catfish, tilapia and threadfin shad can be found in the lake. The Guajataka Scout Reservation partially borders the southern portion of the lake. The dam at Guajataca Lake experienced a structural failure on September 22, 2017, due to the hit from Hurricane Maria.
The reservoir is considered a tourist area.
Guajataca Dam
The Guajataca Dam is an earthen dam currently used for irrigation and potable water purposes. A hydroelectric power station was built, but it is no longer in use. The reservoir has a normal surface area of , its length is , its maximum width is , the mean depth is 12 m and the maximum depth, located near the dam, is 27 m. Its maximum discharge is per second. Its normal storage capacity is , and its drainage basin is .
Dam construction
The construction of the dam was authorized by act 63 of the Legislature of Puerto Rico, known as the "Isabela public irrigation law," approved April 19, 1919.
The dam was constructed starting in 1928. The reservoir had an initial storage capacity of , but by 1999 (71 years later), the capacity had been reduced to , about 13% less, attributed to sedimentation.
The surface area of the reservoir was in 1999.
According to the National Inventory of Dams, Guajataca Dam was designed by and is owned by the Puerto Rico Electric Power Authority.
Guajataca dam failure risk
On September 22, 2017, at 18:10 GMT, following Hurricane Maria, operators at Guajataca Dam announced that the dam's spillway was failing at the northern end of the lake and that it could result in the whole dam collapsing. The National Weather Service a few minutes later urged all 70,000 residents in the flood area to be evacuated. The National Weather Service stated the dam was a "life-threatening situation". "It’s a structural failure. I don’t have any more details," Governor Ricardo Rosselló stated. "We’re trying to evacuate as many people as possible." Rosselló ordered the Puerto Rico National Guard and the police to assist in the evacuation effort downstream. The dam lies across the Guajataca River to form a reservoir that can hold roughly 11 billion gallons of water.
The dam was last inspected on October 23, 2013.
The first phase of repairs to avoid the threat of flooding was completed on November 17, 2017. Since then, about 10,000 residents, including farmers, who depend on the waters of the reservoir have been struggling with the rationing of water. There is confusion and little transparency as to how the issue is being handled. Final repairs are expected to continue until 2028.
Gallery
See also
List of dams and reservoirs in Puerto Rico
References
External links
2017 in Puerto Rico
Dam failures in the United States
Isabela, Puerto Rico
Reservoirs in Puerto Rico
San Sebastián, Puerto Rico
United States Army Corps of Engineers
2017 disasters in the United States
September 2017 events in the United States | Guajataca Lake | [
"Engineering"
] | 722 | [
"Engineering units and formations",
"United States Army Corps of Engineers"
] |
13,301,401 | https://en.wikipedia.org/wiki/United%20States%20Radium%20Corporation | The United States Radium Corporation was a company, most notorious for its operations between the years 1917 to 1926 in Orange, New Jersey, in the United States that led to stronger worker protection laws. After initial success in developing a glow-in-the-dark radioactive paint, the company was subject to several lawsuits in the late 1920s in the wake of severe illnesses and deaths of workers (the Radium Girls) who had ingested radioactive material. The workers had been told that the paint was harmless. During World War I and World War II, the company produced luminous watches and gauges for the United States Army for use by soldiers.
U.S. Radium workers, especially women who painted the dials of watches and other instruments with luminous paint, suffered serious radioactive contamination. Lawyer Edward Markley was in charge of defending the company in these cases.
History
The company was founded in 1914 in New York City, by Dr. Sabin Arnold von Sochocky and Dr. George S. Willis, as the Radium Luminous Material Corporation. The company produced uranium from carnotite ore and eventually moved into the business of producing radioluminescent paint, and then to the application of that paint. Over the next several years, it opened facilities in Newark, Jersey City, and Orange. In August 1921, von Sochocky was forced from the presidency, and the company was renamed the United States Radium Corporation, with Arthur Roeder becoming its president. In Orange, where radium was extracted from 1917 to 1926, the U.S. Radium facility processed half a ton of ore per day. The ore was obtained from "Undark mines" in Paradox Valley, Colorado and in Utah.
A notable employee from 1921 to 1923 was Victor Francis Hess, who would later receive the Nobel Prize in Physics.
The company's luminescent paint, marketed as Undark, was a mixture of radium and zinc sulfide; the radiation caused the sulfide to fluoresce. During World War I, demand for dials, watches, and aircraft instruments painted with Undark surged, and the company expanded operations considerably. The delicate task of painting watch and gauge faces was done mostly by young women, who were instructed to maintain a fine tip on their paintbrushes by licking them.
At the time, the dangers of radiation were not well understood. Around 1920, a similar radium dial business, known as the Radium Dial Company, a division of the Standard Chemical Company, opened in Chicago. It soon moved its dial painting operation to Ottawa, Illinois to be closer to its major customer, the Westclox Clock Company. Several workers died, and the health risks associated with radium were allegedly known, but this company continued dial painting operations until 1940.
U.S. Radium's management and scientists took precautions such as masks, gloves, and screens, but did not similarly equip the workers. Unbeknownst to the women, the paint was highly radioactive and therefore, carcinogenic. The ingestion of the paint by the women, brought about while licking the brushes, resulted in a condition called radium jaw (radium necrosis), a painful swelling and porosity of the upper and lower jaws that ultimately led to many of their deaths. This led to litigation against U.S. Radium by the so-called Radium Girls, starting with former dial painter Marguerite Carlough in 1925. The case was eventually settled in 1926 and several more suits were brought against the company in 1927 by Grace Fryer and Katherine Schaub. The company did not stop the hand painting of dials until 1947.
The company struggled after World War I: the loss of military contracts sharply reduced demand for luminescent paint and dials, and in 1922, high-grade ore was discovered in Katanga, driving all U.S. suppliers out of business except U.S. Radium and the Standard Chemical Company. U.S. Radium consolidated its operations in Manhattan in 1927, leasing out the Orange plant and selling off other property. But demand for luminescent products surged again during World War II; by 1942, it employed as many as 1,000 workers, and in 1944 was reported to have radium mining, processing, and application facilities in Bloomsburg, Pennsylvania; Bernardsville, New Jersey; Whippany, New Jersey; and North Hollywood, California as well as New York City. In 1945 the Office of Strategic Services enlisted the company's help for tests of a psychological-warfare scheme to release foxes with glowing paint in Japan.
After the war came another period of retrenchment. Not only did military supply contracts end, but luminous dial manufacturing shifted to promethium-147 and tritium. Also, radium mining in Canada ceased in 1954, driving up supply costs. In that year, the company consolidated its operations at facilities in Morristown, New Jersey and South Centre Township east of Bloomsburg, Pennsylvania. In Bloomsburg, it continued to produce items with luminescent paint using radium, strontium-90 and cesium-137 such as watch dials, instrument gauge faces, deck markers, and paint. It ceased radium processing altogether in 1968, spinning off those operations as Nuclear Radiation Development Corporation, LLC, based in Grand Island, New York. The following year, a new facility at the Bloomsburg plant opened for the manufacturing of "tritiated metal foils and tritium activated self-luminous light tubes," and the company switched focus to the manufacture of glow-in-the-dark exit and aircraft signs using tritium.
Starting in 1979, the company underwent an extensive reorganization. A new corporation, Metreal, Inc., was created to hold the assets of the Bloomsburg plant. Manufacturing operations were subsequently moved into new wholly owned subsidiary corporations: Safety Light Corporation, USR Chemical Products, USR Lighting, USR Metals, and U.S. Natural Resources. Finally, in May 1980, U.S. Radium created a new holding company, USR Industries, Inc., and merged itself into it.
The Safety Light Corporation, in turn, was sold to its management and spun off as an independent entity in 1982. Tritium-illuminated signs were marketed under the name Isolite, which also became the name of new subsidiary to market and distribute Safety Light Corporation's products.
In 2005, the Nuclear Regulatory Commission declined to renew the licenses for the Bloomsburg facility, and shortly thereafter the EPA added the Bloomsburg facility to the National Priorities List for remediation through Superfund. All tritium operations at the plant ceased by the end of 2007.
Immediate aftermath
The chief medical examiner of Essex County, New Jersey, Harrison Stanford Martland, MD, published a report in 1925 that identified the radioactive material the women had ingested as the cause of their bone disease and aplastic anemia, and ultimately death.
Illness and death resulting from ingestion of radium paint and the subsequent legal action taken by the women forced closure of the company's Orange facility in 1927. The case was settled out of court in 1928, but not before a substantial number of the litigants were seriously ill or had died from bone cancer and other radiation-related illnesses. The company, it was alleged, deliberately delayed settling litigation, leading to further deaths.
In November 1928, Dr. von Sochocky, the inventor of the radium-based paint, died of aplastic anemia resulting from his exposure to the radioactive material, "a victim of his own invention."
The victims were so contaminated that radiation could still be detected at their graves in 1987 using a Geiger counter.
Superfund site
The company processed about 1,000 pounds of ore daily while in operation, which was dumped on the site. The radon and radiation resulting from the 1,600 tons of material on the abandoned factory resulted in the site's designation as a Superfund site by the United States Environmental Protection Agency in 1983. From 1997 through 2005, the EPA remediated the site in a process that involved the excavation and off-site disposal of radium-contaminated material at the former plant site, and at 250 residential and commercial properties that had been contaminated in the intervening decades. In 2009, the EPA wrapped up their long-running Superfund cleanup effort.
See also
Undark
Radium dials
Radium Girls
Radiation poisoning
Radium Dial Company
References
External links
Radium dial painters, 1920–1926
Radioluminescent Paint, Oak Ridge Associated Universities
Radium Luminous Material Corporation stock certificate
United States Radium Corporation stock certificate
Nuclear safety and security
Radium
Radioactivity
Orange, New Jersey
Historic American Engineering Record in New Jersey
History of New Jersey
Superfund sites in New Jersey
Defunct technology companies based in New Jersey
Chemical companies established in 1914
Manufacturing companies disestablished in 1980
1914 establishments in New York City
1980 disestablishments in New Jersey
American companies established in 1914
American companies disestablished in 1980
Defunct manufacturing companies based in New Jersey | United States Radium Corporation | [
"Physics",
"Chemistry"
] | 1,833 | [
"Radioactivity",
"Nuclear physics"
] |
13,301,464 | https://en.wikipedia.org/wiki/Sadr%20Region | The Sadr Region (also known as IC 1318 or the Gamma Cygni Nebula) is the diffuse emission nebula surrounding Sadr (γ Cygni) at the center of Cygnus's cross. The Sadr Region is one of the surrounding nebulous regions; others include the Butterfly Nebula and the Crescent Nebula. It contains many dark nebulae in addition to the emission diffuse nebulae.
Sadr itself has an apparent magnitude of approximately 2.2. The nebulous regions around it are also fairly bright.
Image gallery
See also
NGC 6910
References
http://stars.astro.illinois.edu/sow/sadr.html
H II regions
Cygnus (constellation)
IC objects | Sadr Region | [
"Astronomy"
] | 147 | [
"Nebula stubs",
"Cygnus (constellation)",
"Astronomy stubs",
"Constellations"
] |
13,301,535 | https://en.wikipedia.org/wiki/Seed%20nucleus | A seed nucleus is an isotope that is the starting point for any of a variety of fusion chain reactions. The mix of nuclei produced at the conclusion of the chain reaction generally depends strongly on the relative availability of the seed nucleus or nuclei and the component being fused—whether neutrons as in the r-process and s-process or protons as in the rp-process. A smaller proportion of seed nuclei will generally result in products of larger mass, whereas a larger seed-to-neutron or seed-to-proton ratio will tend to produce comparatively lighter masses.
Nuclear physics | Seed nucleus | [
"Physics"
] | 118 | [
"Nuclear and atomic physics stubs",
"Nuclear physics"
] |
13,301,558 | https://en.wikipedia.org/wiki/Dunhuang%20Star%20Chart | The Dunhuang map or Dunhuang Star map is one of the first known graphical representations of stars from ancient Chinese astronomy, dated to the Tang dynasty (618–907). Before this map, much of the star information mentioned in historical Chinese texts had been questioned. The map provides a graphical verification of the star observations, and are part of a series of pictures on one of the Dunhuang manuscripts. The astronomy behind the map is explained in an educational resource posted on the website of the International Dunhuang Project, where much of the research on the map has been done. The Dunhuang Star map is to date the world's oldest complete preserved star atlas.
History
Early in the 1900s, a walled-up cave containing a cache of manuscripts was discovered by the Chinese Taoist Wang Yuan-lu in the Mogao Caves. The scroll with the star chart was found amongst those documents by Aurel Stein when he visited and examined the contents of the cave in 1907. One of the first public mentions of this manuscript in Western studies was in Joseph Needham's 1959 volume of the book Science and Civilisation in China. Since that time, only a few publications have been devoted to the map, nearly all of them Chinese.
Colors
The symbols for the stars are divided into three different groups. The groups are presented in three colors representing the "Three Schools of Astronomical tradition".
See also
Chinese star maps
References
External links
"Star Atlas: Translation", by Imre Galambos, 2010, International Dunhuang Project.
Astronomy in China
British Library oriental manuscripts
Chinese manuscripts
Star atlases
Dunhuang manuscripts | Dunhuang Star Chart | [
"Astronomy"
] | 333 | [
"Astronomy in China",
"History of astronomy"
] |
13,301,794 | https://en.wikipedia.org/wiki/INVITE%20of%20Death | An INVITE of Death is a type of attack on a VoIP-system that involves sending a malformed or otherwise malicious SIP INVITE request to a telephony server, resulting in a crash of that server. Because telephony is usually a critical application, this damage causes significant disruption to the users and poses tremendous acceptance problems with VoIP. These kinds of attacks do not necessarily affect only SIP-based systems; all implementations with vulnerabilities in the VoIP area are affected. The DoS attack can also be transported in other messages than INVITE. For example, in December 2007 there was a report about a vulnerability in the BYE message ("BYE BYE") by using an obsolete header with the name "Also". However, sending INVITE packets is the most popular way of attacking telephony systems. The name is a reference to the ping of death attack that caused serious trouble in 1995–1997.
VoIP Servers (INVITE of Death)
The INVITE of Death vulnerability was found on February 16, 2009. The vulnerability allows the attacker to crash the server causing remote Denial of Service (DoS) by sending a single malformed packet. An impersonator can, using a malformed packet, overflow the specific string buffers, add a large number of token characters, and modify fields in an illegal fashion. As a result, a server is tricked into an undefined state, which can lead to call processing delays, unauthorized access, and a complete denial of service. The problem specifically exists in OpenSBC version 1.1.5-25 in the handling of the “Via” field from a maliciously crafted SIP packet. The INVITE of Death packet was also used to find a new vulnerability in the patched OpenSBC server through network dialog minimization.
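As a purely defensive illustration, a SIP implementation can mitigate this class of malformed-packet crash by bounding and sanity-checking header values such as "Via" before they reach fixed-size buffers or deeper parsing stages; the length limit and pattern below are illustrative assumptions, not part of any particular SIP stack:

```python
import re

# Illustrative limits only; real SIP stacks define their own validation policies.
MAX_VIA_LENGTH = 256
VIA_PATTERN = re.compile(r"^SIP/2\.0/(UDP|TCP|TLS) [\x21-\x7e]+(;[\x21-\x7e]+)*$")

def via_header_is_sane(value: str) -> bool:
    """Reject oversized or malformed Via headers instead of passing them on
    to fixed-size buffers or deeper parsing stages."""
    return len(value) <= MAX_VIA_LENGTH and VIA_PATTERN.match(value) is not None

print(via_header_is_sane("SIP/2.0/UDP host.example.com:5060;branch=z9hG4bK776asdhds"))  # True
print(via_header_is_sane("SIP/2.0/UDP " + "A" * 10_000))                                # False
```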
For the popular open-source Asterisk PBX, there are security advisories that cover not only signaling-related problems, but also problems with other protocols and their resolution. Problems may include malformed SDP attachments where codec numbers are out of the valid range, or obsolete headers such as "Also".
The INVITE of Death is specifically a problem for operators that run their servers on the public internet. Because SIP allows the usage of UDP packets, it is easy for an attacker to spoof any source address in the internet and send the INVITE of death from untraceable locations. By sending these kinds of requests periodically, attackers can completely interrupt the telephony service. The only choice for the service provider is to upgrade their systems until the attack does not crash the system anymore.
VoIP phones
A large number of VoIP vulnerabilities exist for IP phones. DoS attacks on VoIP phones are less critical than attacks on central devices such as an IP PBX, since usually only the endpoint is affected.
References
External links
Debian Security Advisory
Denial-of-service attacks | INVITE of Death | [
"Technology"
] | 585 | [
"Denial-of-service attacks",
"Computer security exploits"
] |
13,301,859 | https://en.wikipedia.org/wiki/Diffusion-controlled%20reaction | Diffusion-controlled (or diffusion-limited) reactions are reactions in which the reaction rate is equal to the rate of transport of the reactants through the reaction medium (usually a solution). The process of chemical reaction can be considered as involving the diffusion of reactants until they encounter each other in the right stoichiometry and form an activated complex which can form the product species. The observed rate of chemical reactions is, generally speaking, the rate of the slowest or "rate determining" step. In diffusion controlled reactions the formation of products from the activated complex is much faster than the diffusion of reactants and thus the rate is governed by collision frequency.
Diffusion control is rare in the gas phase, where rates of diffusion of molecules are generally very high. Diffusion control is more likely in solution where diffusion of reactants is slower due to the greater number of collisions with solvent molecules. Reactions where the activated complex forms easily and the products form rapidly are most likely to be limited by diffusion control. Examples are those involving catalysis and enzymatic reactions. Heterogeneous reactions where reactants are in different phases are also candidates for diffusion control.
One classical test for diffusion control of a heterogeneous reaction is to observe whether the rate of reaction is affected by stirring or agitation; if so then the reaction is almost certainly diffusion controlled under those conditions.
Derivation
The following derivation is adapted from Foundations of Chemical Kinetics.
This derivation assumes the reaction A + B → C. Consider a sphere of radius R_A, centered at a spherical molecule A, with reactant B flowing in and out of it. A reaction is considered to occur if molecules A and B touch, that is, when the distance between the two molecules is R_AB apart.
If we assume a local steady state, then the rate at which B reaches R_AB is the limiting factor and balances the reaction.
Therefore, the steady state condition becomes
1. k[A][B] = 4πr²J_B
where
J_B is the flux of B, as given by Fick's law of diffusion,
2. J_B = D_AB (d[B](r)/dr + ([B](r)/k_BT) dU(r)/dr),
where D_AB is the diffusion coefficient and can be obtained by the Stokes–Einstein equation, and the second term is the gradient of the chemical potential with respect to position. Note that [B] refers to the average concentration of B in the solution, while [B](r) is the "local concentration" of B at position r.
Inserting 2 into 1 results in
3. k[A][B] = 4πr²D_AB (d[B](r)/dr + ([B](r)/k_BT) dU(r)/dr).
It is convenient at this point to use the identity
e^(−U(r)/k_BT) d/dr([B](r) e^(U(r)/k_BT)) = d[B](r)/dr + ([B](r)/k_BT) dU(r)/dr,
allowing us to rewrite 3 as
4. k[A][B] = 4πr²D_AB e^(−U(r)/k_BT) d/dr([B](r) e^(U(r)/k_BT)).
Rearranging 4 allows us to write
5. (k[A][B]/4πD_AB) e^(U(r)/k_BT)/r² = d/dr([B](r) e^(U(r)/k_BT))
Using the boundary conditions that [B](r) → [B], i.e. the local concentration of B approaches that of the solution at large distances, and consequently U(r) → 0, as r → ∞, we can solve 5 by separation of variables, and we get
6. (k[A][B]/4πD_AB) ∫ from R_AB to ∞ of e^(U(r)/k_BT) r⁻² dr = [B] − [B](R_AB) e^(U(R_AB)/k_BT)
or
7. k[A][B] = 4πD_AB β ([B] − [B](R_AB) e^(U(R_AB)/k_BT)) (where β⁻¹ = ∫ from R_AB to ∞ of e^(U(r)/k_BT) r⁻² dr)
For the reaction between A and B, there is an inherent reaction constant k_r, so the rate at contact is k_r[A][B](R_AB) and hence [B](R_AB) = k[B]/k_r. Substituting this into 7 and rearranging yields
8. k = 4πD_AB β k_r / (k_r + 4πD_AB β e^(U(R_AB)/k_BT))
Limiting conditions
Very fast intrinsic reaction
Suppose k_r is very large compared to the diffusion process, so A and B react immediately. This is the classic diffusion limited reaction, and the corresponding diffusion limited rate constant, k_D, can be obtained from 8 as k_D = 4πD_AB β. 8 can then be re-written as the "diffusion influenced rate constant" as
9. k = k_D k_r / (k_r + k_D e^(U(R_AB)/k_BT))
Weak intermolecular forces
If the forces that bind A and B together are weak, i.e. U(r) ≈ 0 for all r except very small r, then β ≈ R_AB and the exponential factor in 9 is close to unity. The reaction rate 9 simplifies even further to
10. k = 4πR_AB D_AB k_r / (4πR_AB D_AB + k_r)
This equation is true for a very large proportion of industrially relevant reactions in solution.
Viscosity dependence
The Stokes–Einstein equation describes the frictional drag on a sphere of diameter d, which leads to a diffusion coefficient D = k_BT/(3πηd), where η is the viscosity of the solution. Inserting this into 9 gives an estimate for k_D of about 8RT/3η for two molecules of similar size, where R is the gas constant and η is given in centipoise. For small molecules in common solvents at room temperature, this corresponds to rate constants on the order of 10⁹ to 10¹⁰ L mol⁻¹ s⁻¹.
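As a rough numerical check of the 8RT/3η estimate above (a minimal sketch; the viscosity of water at 25 °C is a standard literature value, and the similar-size assumption is the one stated in the text):

```python
# Rough estimate of the diffusion-limited rate constant k_D ~ 8RT/(3*eta)
# for two similarly sized molecules in water at 298 K.
R = 8.314          # J mol^-1 K^-1
T = 298.0          # K
eta = 0.89e-3      # Pa*s, viscosity of water at 25 C (literature value)

k_D_si = 8 * R * T / (3 * eta)      # m^3 mol^-1 s^-1
k_D = k_D_si * 1000                 # L mol^-1 s^-1

print(f"k_D ~ {k_D:.1e} L mol^-1 s^-1")   # on the order of 7e9
```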
See also
Diffusion limited enzyme
References
Chemical reactions
Chemical reaction engineering
Chemical kinetics | Diffusion-controlled reaction | [
"Chemistry",
"Engineering"
] | 784 | [
"Chemical engineering",
"Chemical kinetics",
"Chemical reaction engineering",
"nan"
] |
13,301,952 | https://en.wikipedia.org/wiki/Orphan%20Train | The Orphan Train Movement was a supervised welfare program that transported children from crowded Eastern cities of the United States to foster homes located largely in rural areas of the Midwest short on farming labor. The orphan trains operated between 1854 and 1929, relocating from about 200,000 children. The co-founders of the orphan train movement claimed that these children were orphaned, abandoned, abused, or homeless, but this was not always true. They were mostly the children of new immigrants and the children of the poor and destitute families living in these cities. Criticisms of the program include ineffective screening of caretakers, insufficient follow-ups on placements, and that many children were used as strictly slave farm labor.
Three charitable institutions, Children's Village (founded 1851 by 24 philanthropists), the Children's Aid Society (established 1853 by Charles Loring Brace) and later, New York Foundling Hospital, endeavored to help these children. The institutions were supported by wealthy donors and operated by professional staff. The three institutions developed a program that placed homeless, orphaned, and abandoned city children, who numbered an estimated 30,000 in New York City alone in the 1850s, in foster homes throughout the country. The children were transported to their new homes on trains that were labeled "orphan trains" or "baby trains". This relocation of children ended in 1930 due to decreased need for farm labor in the Midwest.
Background
The first orphanage in the United States was reportedly established in 1729 in Natchez, Mississippi, but institutional orphanages were uncommon before the early 19th century. Relatives or neighbors usually raised children who had lost their parents. Arrangements were informal and rarely involved courts.
Around 1830, the number of homeless children in large Eastern cities such as New York City exploded. In 1850, there were an estimated 10,000 to 30,000 homeless children in New York City. At the time, New York City's population was only 500,000. Some children were orphaned when their parents died in epidemics of typhoid, yellow fever or the flu. Others were abandoned due to poverty, illness, or addiction. Many children sold matches, rags, or newspapers to survive. For protection against street violence, they banded together and formed gangs.
In 1853, a young minister named Charles Loring Brace became concerned with the plight of street children (often known as "street Arabs"). He founded the Children's Aid Society. During its first year the Children's Aid Society primarily offered boys religious guidance and vocational and academic instruction. Eventually, the society established the nation's first runaway shelter, the Newsboys' Lodging House, where vagrant boys received inexpensive room and board and basic education. Brace and his colleagues attempted to find jobs and homes for individual children, but they soon became overwhelmed by the numbers needing placement. Brace hit on the idea of sending groups of children to rural areas for adoption.
Brace believed that street children would have better lives if they left the poverty and debauchery of their lives in New York City and were instead raised by morally upright farm families. Recognizing the need for labor in the expanding farm country, Brace believed that farmers would welcome homeless children, take them into their homes and treat them as their own. His program would turn out to be a forerunner of modern foster care.
After a year of dispatching children individually to farms in nearby Connecticut, Pennsylvania and rural New York, the Children's Aid Society mounted its first large-scale expedition to the Midwest in September 1854.
The term "orphan train"
The phrase "orphan train" was first used in 1854 to describe the transportation of children from their home area via the railroad. However, the term "orphan train" was not widely used until long after the orphan train program had ended.
The Children's Aid Society referred to its relevant division first as the Emigration Department, then as the Home-Finding Department, and finally, as the Department of Foster Care. Later, the New York Foundling Hospital sent out what it called "baby" or "mercy" trains.
Organizations and families generally used the terms "family placement" or "out-placement" ("out" to distinguish it from the placement of children "in" orphanages or asylums) to refer to orphan train passengers.
Widespread use of the term "orphan train" may date to 1978, when CBS aired a fictional miniseries entitled The Orphan Trains. One reason the term was not used by placement agencies was that less than half of the children who rode the trains were in fact orphans, and as many as 25 percent had two living parents. Children with both parents living ended up on the trains—or in orphanages—because their families did not have the money or desire to raise them or because they had been abused or abandoned or had run away. And many teenage boys and girls went to orphan train sponsoring organizations simply in search of work or a free ticket out of the city.
The term "orphan trains" is also misleading because a substantial number of the placed-out children didn't take the railroad to their new homes and some didn't even travel very far. The state that received the greatest number of children (nearly one-third of the total) was New York. Connecticut, New Jersey, and Pennsylvania also received substantial numbers of children. For most of the orphan train era, the Children's Aid Society bureaucracy made no distinction between local placements and even its most distant ones. They were all written up in the same record books and, on the whole, managed by the same people. Also, the same child might be placed one time in the West and the next time—if the first home did not work out—in New York City. The decision about where to place a child was made almost entirely on the basis of which alternative was most readily available at the moment the child needed help.
The first orphan train
The first group of 45 children arrived in Dowagiac, Michigan, on 1 October 1854. The children had traveled for days in uncomfortable conditions. They were accompanied by E. P. Smith of the Children's Aid Society. Smith himself had let two different passengers on the riverboat from Manhattan adopt boys without checking their references. Smith added a boy he met in the Albany railroad yard—a boy whose claim to orphanhood Smith never bothered to verify. At a meeting in Dowagiac, Smith played on his audience's sympathy while pointing out that the boys were handy and the girls could be used for all types of housework.
In an account of the trip published by the Children's Aid Society, Smith said that in order to get a child, applicants had to have recommendations from their pastor and a justice of the peace, but it is unlikely that this requirement was strictly enforced. By the end of that first day, fifteen boys and girls had been placed with local families. Five days later, twenty-two more children had been adopted. Smith and the remaining eight children traveled to Chicago where Smith put them on a train to Iowa City by themselves where a Reverend C. C. Townsend, who ran a local orphanage, took them in and attempted to find them foster families. This first expedition was considered such a success that in January 1855 the society sent out two more parties of homeless children to Pennsylvania.
Logistics of orphan trains
Committees of prominent local citizens were organized in the towns where orphan trains stopped. These committees were responsible for arranging a site for the adoptions, publicizing the event, and arranging lodging for the orphan train group. These committees were also required to consult with the Children's Aid Society on the suitability of local families interested in adopting children.
Brace's system put its faith in the kindness of strangers. Orphan train children were placed in homes for free and were expected to serve as an extra pair of hands to help with chores around the farm. Families were expected to raise them as they would their natural-born children, providing them with decent food and clothing, a "common" education, and $100 when they turned twenty-one. Older children placed by The Children's Aid Society were supposed to be paid for their labors. Legal adoption was not required.
According to the Children's Aid Society's "Terms on Which Boys are Placed in Homes," boys under twelve were to be "treated by the applicants as one of their own children in matters of schooling, clothing, and training," and boys twelve to fifteen were to be "sent to a school a part of each year." Representatives from the society were supposed to visit each family once a year to check conditions, and children were expected to write letters back to the society twice a year. There were only a handful of agents to monitor thousands of placements.
Before they boarded the train, children were dressed in new clothing, given a Bible, and placed in the care of Children's Aid Society agents who accompanied them west. Few children understood what was happening. Once they did, their reactions ranged from delight at finding a new family to anger and resentment at being "placed out" when they had relatives "back home".
Most children on the trains were white. An attempt was made to place non-English speakers with people who spoke their language.
Babies were easiest to place, but finding homes for children older than 14 was always difficult because of concern that they were too set in their ways or might have bad habits. Children who were physically or mentally disabled or sickly were difficult to find homes for. Although many siblings were sent out together on orphan trains, prospective parents could choose to take a single child, separating siblings.
Many orphan train children went to live with families that placed orders specifying age, gender, and hair and eye color. Others were paraded from the depot into a local playhouse, where they were put up on stage, thus the origin of the term "up for adoption." According to an exhibit panel from the National Orphan Train Complex, the children "took turns giving their names, singing a little ditty, or 'saying a piece." According to Sara Jane Richter, professor of history at Oklahoma Panhandle State University, the children often had unpleasant experiences. "People came along and prodded them, and looked, and felt, and saw how many teeth they had."
Press accounts convey the spectacle, and sometimes auction-like atmosphere, attending the arrival of a new group of children. "Some ordered boys, others girls, some preferred light babies, others dark, and the orders were filled out properly and every new parent was delighted," reported The Daily Independent of Grand Island, NE in May 1912. "They were very healthy tots and as pretty as anyone ever laid eyes on."
Brace raised money for the program through his writings and speeches. Wealthy people occasionally sponsored trainloads of children. Charlotte Augusta Gibbs, wife of John Jacob Astor III, had sent 1,113 children west on the trains by 1884. Railroads gave discount fares to the children and the agents who cared for them.
Scope of the orphan train movement
The Children's Aid Society sent an average of 3,000 children via train each year from 1855 to 1875. Orphan trains were sent to 45 states, as well as Canada and Mexico. During the early years, Indiana received the largest number of children. At the beginning of the Children's Aid Society orphan train program, children were not sent to the southern states, as Brace was an ardent abolitionist.
By the 1870s, the New York Foundling Hospital and the New England Home for Little Wanderers in Boston had orphan train programs of their own.
New York Foundling Hospital "Mercy Trains"
The New York Foundling Hospital was established in 1869 by Sister Mary Irene Fitzgibbon of the Sisters of Charity of New York as a shelter for abandoned infants. The Sisters worked in conjunction with Priests throughout the Midwest and South in an effort to place these children in Catholic families. The Foundling Hospital sent infants and toddlers to prearranged Roman Catholic homes from 1875 to 1914. Parishioners in the destination regions were asked to accept children, and parish priests provided applications to approved families. This practice was first known as the "Baby Train," then later the "Mercy Train." By the 1910s, 1,000 children a year were placed with new families.
Criticisms
Linda McCaffery, a professor at Barton County Community College, explained the range of orphan train experiences: "Many were used as strictly slave farm labor, but there are stories, wonderful stories of children ending up in fine families that loved them, cherished them, [and] educated them."
Orphan train children faced obstacles ranging from the prejudice of classmates because they were "train children" to feeling like outsiders in their families during their entire lives. Many rural people viewed the orphan train children with suspicion, as the incorrigible offspring of drunkards and prostitutes.
Criticisms of the orphan train movement focused on concerns that initial placements were made hastily, without proper investigation, and that there was an insufficient follow-up on placements. Charities were also criticized for not keeping track of children placed while under their care. In 1883, Brace consented to an independent investigation. It found the local committees were ineffective at screening foster parents. The supervision was lax. Many older boys had run away. But its overall conclusion was positive. The majority of children under fourteen were leading satisfactory lives.
Applicants for children were supposed to be screened by committees of local businessmen, ministers, or physicians, but the screening was rarely very thorough. Small-town ministers, judges, and other local leaders were often reluctant to reject a potential foster parent as unfit if he were also a friend or customer.
Many children lost their identity through forced name changes and repeated moves. In 1996, Alice Ayler said, "I was one of the luckier ones because I know my heritage. They took away the identity of the younger riders by not allowing contact with the past."
Many children who were placed out west had survived on the streets of New York, Boston or other large Eastern cities and generally, they were not the obedient children which many families expected them to be. In 1880, a Mr. Coffin of Indiana editorialized, "Children so thrown out from the cities are a source of much corruption in the country places where they are thrown. ... Very few such children are useful."
Some residents of placement locations charged that orphan trains were dumping undesirable children from the East onto Western communities. In 1874, the National Prison Reform Congress charged that these practices resulted in increased correctional expenses in the West.
Older boys wanted to be paid for their labor; sometimes they asked for additional pay, or they left a placement in order to find a higher-paying one. It is estimated that young men initiated 80% of the placement changes.
One of the many children who rode the train was Lee Nailing. Lee's mother died of sickness; after her death, Lee's father could not afford to keep his children.
Another orphan train child was named Alice Ayler. Alice rode the train because her single mother could not provide for her children; before the journey, they lived on "berries" and "green water."
Catholic clergy maintained that some charities were deliberately placing Catholic children in Protestant homes in order to change their religious practices. The Society for the Protection of Destitute Roman Catholic Children in the City of New York (known as the Protectory) was founded in 1863. The Protectory ran orphanages and place out programs for Catholic youth in response to Brace's Protestant-centered program. Similar charges of conversion via adoption were made concerning the placement of Jewish children.
Not all of the orphan train children were real orphans, but they were classified as orphans after they were forcibly removed from their biological families and transported to other states. Some claimed that this practice was a deliberate pattern which was intended to break up immigrant Catholic families. Some abolitionists opposed placements of children with Western families, viewing indentureship as a form of slavery.
Orphan trains were the target of lawsuits, generally filed by parents who attempted to reclaim their children. Suits were occasionally filed by receiving parents or receiving family members who claimed that they either lost money or were harmed as the result of the placement.
The Minnesota State Board of Corrections and Charities reviewed Minnesota orphan train placements between 1880 and 1883. The Board found that while children were hastily placed into their placements without proper investigations, only a few children were "depraved" or abused. The review criticized local committee members who were swayed by pressure from wealthy and important individuals in their community. The Board also pointed out that older children were frequently placed with farmers who expected to profit from their labor. The Board recommended that paid agents replace or supplement local committees in investigating and reviewing all applications and placements.
A complicated lawsuit arose from a 1904 Arizona Territory orphan train placement in which the New York Foundling Hospital sent 40 white children between the ages of 18 months and 5 years to be indentured to Catholic families in an Arizona Territory parish. The families which were approved for placement by the local priest were identified as "Mexican Indian" families in the subsequent litigation. The nuns who escorted these children were unaware of the racial tension which existed between local Anglo and Mexican groups and as a result, they placed white children with Mexican Indian families. A group of white men, described as "just short of a lynch mob," forcibly took the children from the Mexican Indian homes and placed most of them with Anglo families. Some of the children were returned to the Foundling Hospital, but 19 of them remained with the Anglo Arizona Territory families. The Foundling Hospital filed a writ of habeas corpus in which it sought the return of these children. The Arizona Supreme Court ruled that the best interests of the children required them to remain in their new Arizona homes. On appeal, the U.S. Supreme Court ruled that the filing of a writ of habeas corpus which sought the return of a child constituted an improper use of the writ. Habeas corpus writs should be used "solely in cases of arrest and forcible imprisonment under color or claim of warrant of law," and they should not be used to obtain or transfer the custody of children. At the time, these events were well-documented in published newspaper stories which were titled "Babies Sold Like Sheep," telling readers that the New York Foundling Hospital "has for years been shipping children in car-loads all over the country, and they are given away and sold like cattle."
End of the orphan train movement
As the West was settled, the demand for adoptable children declined. Additionally, Midwestern cities such as Chicago, Cleveland, and St. Louis began to experience the neglected children problems that New York, Boston, and Philadelphia had experienced in the mid-1800s. These cities began to seek ways to care for their own orphan populations.
In 1895, Michigan passed a statute prohibiting out-of-state children from local placement without payment of a bond guaranteeing that children placed in Michigan would not become a public charge in the State. Similar laws were passed by Indiana, Illinois, Kansas, Minnesota, Missouri, and Nebraska. Negotiated agreements between one or more New York charities and several western states allowed the continued placement of children in these states. Such agreements included large bonds as a security for placed children. In 1929, however, these agreements expired and were not renewed as charities changed their child care support strategies.
Lastly, the need for the orphan train movement decreased as legislation was passed providing in-home family support. Charities began developing programs to support destitute and needy families, limiting the need for intervention to place out children.
Legacy of the program
Between 1854 and 1929, an estimated 200,000 American children traveled west by rail in search of new homes.
The Children's Aid Society rated its transplanted wards successful if they grew into "creditable members of society," and frequent reports documented the success stories. A 1910 survey concluded that 87 percent of the children sent to country homes had "done well," while 8 percent had returned to New York and the other 5 percent had either died, disappeared, or gotten arrested.
Brace's notion that children are better cared for by families than in institutions is the most basic tenet of present-day foster care.
Organizations
The Orphan Train Heritage Society of America, Inc., founded in 1986 in Springdale, Arkansas, preserves the history of the orphan train era. The National Orphan Train Complex in Concordia, Kansas, is a museum and research center dedicated to the Orphan Train Movement, the various institutions that participated, and the children and agents who rode the trains. The museum is located at the restored Union Pacific Railroad Depot in Concordia, which is listed on the National Register of Historic Places. The Complex maintains an archive of riders' stories and houses a research facility. Services offered by the museum include rider research, educational material, and a collection of photos and other memorabilia.
The Louisiana Orphan Train Museum was founded in 2009 in a restored Union Pacific freight depot housed within Le Vieux Village Heritage Park in Opelousas, Louisiana. The museum has a collection of original documents, clothing, and photographs of orphan train riders as both children and adults. It focuses particularly on how the riders assimilated into the South Louisiana community, as the majority were legally adopted into their foster families. The museum is also the seat for the Louisiana Orphan Train Society. Founded in 1990 and chartered in 2003, this society staffs the volunteer-run museum, conducts historical outreach, researches the stories of riders, and hosts a large annual event akin to a family reunion.
Forwarding institutions
Some of the children who took the trains came from the following institutions: (partial list)
Angel Guardian Home
Association for Befriending Children & Young Girls
Association for Benefit of Colored Orphans
Baby Fold
Baptist Children's Home of Long Island
Bedford Maternity, Inc.
Bellevue Hospital
Bensonhurst Maternity
Berachah Orphanage
Berkshire Farm for boys
Berwind Maternity Clinic
Beth Israel Hospital
Bethany Samaritan Society
Bethlehem Lutheran Children's Home
Booth Memorial Hospital
Borough Park Maternity Hospital
Brace Memorial Newsboys House
Bronx Maternity Hospital
Brooklyn Benevolent Society
Brooklyn Hebrew Orphan Asylum
Brooklyn Home for Children
Brooklyn Hospital
Brooklyn Industrial school
Brooklyn Maternity Hospital
Brooklyn Nursery & Infants Hospital
Brookwood Child Care
Catholic Child Care Society
Catholic Committee for Refugees
Catholic Guardian Society
Catholic Home Bureau
Child Welfare League of America
Children's Aid Society
Children's Haven
Children's Village, Inc.
Church Mission of Help
Colored Orphan Asylum
Convent of Mercy
Dana House
Door of Hope
Duval College for Infant Children
Edenwald School for Boys
Erlanger Home
Euphrasian Residence
Family Reception Center
Fellowship House for boys
Ferguson House
Five Points House of Industry
Florence Crittendon League
Goodhue Home
Grace Hospital
Graham Windham Services
Greer-Woodycrest Children's Services
Guardian Angel Home
Guild of the Infant Savior
Hale House for Infants, Inc.
Half-Orphan Asylum
Harman Home for Children
Heartsease Home
Hebrew Orphan Asylum
Hebrew Sheltering Guardian Society
Holy Angels' School
Home for Destitute Children
Home for Destitute Children of Seamen
Home for Friendless Women and Children
Hopewell Society of Brooklyn
House of the Good Shepherd
House of Mercy
House of Refuge
Howard Mission & Home for Little Wanderers
Infant Asylum
Infants' Home of Brooklyn
Institution of Mercy
Jewish Board of Guardians
Jewish Protector & Aid Society
Kallman Home for Children
Little Flower Children's Services
Maternity Center Association
McCloskey School & Home
McMahon Memorial Shelter
Mercy Orphanage
Messiah Home for Children
Methodist Child Welfare Society
Misericordia Hospital
Mission of the Immaculate Virgin
Morrisania City Hospital
Mother Theodore's Memorial Girls' Home
Mothers & Babies Hospital
Mount Sinai Hospital
New York Foundling Hospital
New York Home for Friendless Boys
New York House of Refuge
New York Juvenile Asylum (Children's Village)
New York Society for Prevention of Cruelty to Children
Ninth St. Day Nursery & Orphans' Home
Orphan Asylum Society of the City of Brooklyn
Orphan House
Ottilie Home for Children
In popular media
Big Brother by Annie Fellows Johnston, an 1893 children's fiction book.
Extra! Extra! The Orphan Trains and Newsboys of New York by Renée Wendinger, an unabridged nonfiction resource book and pictorial history about the orphan trains.
Good Boy (Little Orphan at the Train), a Norman Rockwell painting
"Eddie Rode The Orphan Train", a song by Jim Roll and covered by Jason Ringenberg
Last Train Home: An Orphan Train Story, a 2014 historical novella by Renée Wendinger
Orphan Train, a 1979 television film directed by William A. Graham.
"Rider on an Orphan Train", a song by David Massengill from his 1995 album The Return
Orphan Train, a 2013 novel by Christina Baker Kline
Placing Out, a 2007 documentary sponsored by the Kansas Humanities Council
Toy Story 3, a 2010 Pixar animated film in which "Orphan Train" is referenced briefly at 00:02:04–00:02:07. Foster relationships are a recurring theme throughout the series.
"Orphan Train", a song by U. Utah Phillips released on disc 3 of the 4-CD compilation Starlight on the Rails: A Songbook in 2005
Swamplandia!, a novel by Karen Russell, in which a character, Louis Thanksgiving, had been taken from New York to the Midwest on an Orphan Train by The New York Foundling Society after his unwed immigrant mother died in childbirth.
In his album The Man From God Knows Where, Tom Russell includes the track "Rider on an Orphan Train", recounting the loss to the family of two relatives who ran away from home on an orphan train while still young boys.
Lost Children Archive, a novel by Valeria Luiselli, where the main character researches the forced movement of several demographics throughout the Americas' history, including the Orphan Trains.
The Copper Children, a play by Karen Zacarías premiered in 2020 at the Oregon Shakespeare Festival.
My Heart Remembers, a 2008 novel by Kim Vogel Sawyer, where the main character and her siblings were separated at a young age as orphans on the orphan train.
"Orphan Train Series" by Jody Hedlund, a series about three orphaned sisters in the 1850s, the New York Children's Aid Society, and the resettling of orphans from New York to the Midwest
0.5 An Awakened Heart (2017)
1. With You Always (2017)
2. Together Forever (2018)
3. Searching for You (2018)
"Orphan Train", episode 16, season 2 of Dr. Quinn, Medicine Woman
Buffalo Kids, a 2024 animated film in which the main protagonists ride a train, are left behind, and eventually must rescue the other children from the film's main antagonists
Orphan Train children
eden ahbez, songwriter of "Nature Boy"
Joe Aillet
John Green Brady
Andrew H. Burke
Henry L. Jost
See also
Home Children – similar program in the UK
Treni della felicità – similar program in Italy
Notes
Further reading
Creagh, Dianne. "The Baby Trains: Catholic Foster Care and Western Migration, 1873–1929", Journal of Social History (2012) 46(1): 197–218.
Holt, Marilyn Irvin. The Orphan Trains: Placing Out in America. Lincoln: University of Nebraska Press, 1992.
Johnson, Mary Ellen, ed. Orphan Train Riders: Their Own Stories. (2 vol. 1992),
Magnuson, James and Dorothea G. Petrie. Orphan Train. New York: Dial Press, 1978.
O'Connor, Stephen. Orphan Trains: The Story of Charles Loring Brace and the Children He Saved and Failed. Boston: Houghton Mifflin, 2001.
Patrick, Michael, and Evelyn Trickel. Orphan Trains to Missouri. Columbia: University of Missouri Press, 1997.
Patrick, Michael, Evelyn Sheets, and Evelyn Trickel. We Are Part of History: The Story of the Orphan Trains. Santa Fe, NM: The Lightning Tree, 1990.
Riley, Tom. The Orphan Trains. New York: LGT Press, 2004.
Aviles, Donna Nordmark. Orphan Train to Kansas – A True Story. Wasteland Press, 2018.
Wendinger, Renée. Extra! Extra! The Orphan Trains and Newsboys of New York. Legendary Publications, 2009.
Kidder, Clark. Emily's Story – The Brave Journey of an Orphan Train Rider. 2007.
Downs, Susan Whitelaw, and Michael W. Sherraden. “The Orphan Asylum in the Nineteenth Century.” Social Service Review, vol. 57, no. 2, 1983, pp. 272–290. JSTOR. Accessed 1 March 2023.
Clement, Priscilla Ferguson. “Children and Charity: Orphanages in New Orleans, 1817–1914.” Louisiana History: The Journal of the Louisiana Historical Association, vol. 27, no. 4, 1986, pp. 337–351. JSTOR. Accessed 1 March 2023.
Facts about The Orphan Train Movement: America’s Largest Child Migration.
The Orphan Train
External links
West by Orphan Train – A documentary film by Colleen Bradford Krantz and Clark Kidder, 2014
DiPasquale, Connie. "Orphan Trains of Kansas"
"He rode the 'Orphan Train' across the country" – CNN
"Orphan train riders, offspring seek answers about heritage" – USA Today
"The Orphan Train" – CBS
"98-Year-Old Woman Recounts Experience As ‘Orphan Train’ Rider" – CBS
The Cawker City Public Record, 8 April 1886
"Placing Out" Department form
"The Orphan Trains", American Experience, PBS
National Orphan Train Complex
Adoption, fostering, orphan care and displacement
Child welfare in the United States
Adoption history
Rail transportation in the United States
History of New York City
1854 establishments in the United States
1929 disestablishments in the United States
Trains | Orphan Train | [
"Technology"
] | 6,021 | [
"Trains",
"Transport systems"
] |
13,302,082 | https://en.wikipedia.org/wiki/Revelation%20Space%20series | The Revelation Space series is a book series created by Alastair Reynolds. The fictional universe it is set in is used as the setting for a number of his novels and stories. Its fictional history follows the human species through various conflicts from the relatively near future (roughly 2200) to approximately 40,000 AD (all the novels to date are set between 2427 and 2858, although certain stories extend beyond this period). It takes its name from Revelation Space (2000), which was the first published novel set in the universe.
Universe
The Revelation Space universe is a fictional universe set in a future version of our world, with the addition of a number of extraterrestrial species and advanced technologies that are not necessarily grounded in current science. It is, nonetheless, somewhat "harder" than most examples of space opera, relying to a considerable extent on science Reynolds believes to be possible; in particular, faster-than-light travel is largely absent. Reynolds has said he prefers to keep the science in his fiction plausible, but he will adopt science he believes to be impossible when it is necessary for the story.
The name "Revelation Space universe" has been used by Alastair Reynolds in both the introductory text in the collections Diamond Dogs, Turquoise Days and Galactic North, and also on several editions of the novels set in the universe. He considered calling it the "Exordium universe" after a key plot device, but found that the name was already in use.
While a great deal of science fiction reflects either very optimistic or dystopian visions of the human future, the Revelation Space universe is notable in that human societies have not developed to either positive or negative extremes. Instead, despite their dramatically advanced technology, they are similar to those of today in terms of their moral ambiguity and mixture of cruelty and decency, corruption and opportunity.
The Revelation Space universe contains elements of Lovecraftian horror, with one posthuman entity stating explicitly that some things in the universe are fundamentally beyond human or transhuman understanding. Nevertheless, the main storyline is essentially optimistic, with humans continuing to survive even in a universe that seems fundamentally hostile to intelligent life.
The name "Revelation Space" appears in the novel of the same name during Philip Lascaille's account of his visit to Lascaille's Shroud, an anomalous region of the local universe. Lascaille says that "the key" to something momentous "was explained to me [. . .] while I was in Revelation Space."
Chronology
The chronology of the Revelation Space universe extends to roughly one billion years into the past, when the "Dawn War" — a galaxy-spanning conflict over the availability of various natural resources — resulted in almost all sentient life in the galaxy being wiped out. One race of survivors, later termed the Inhibitors, having converted itself to machine form, predicted that the impending Andromeda–Milky Way collision, roughly 3 billion years in our future, may severely damage the capacity of either galaxy to support life. Consequently, they planned to adjust the positions of stars in order to limit the damage the collision would cause. Also central to the Inhibitor project was the eradication of all species above a certain technological level until the crisis was over, as they believed no organic species would be capable of co-operating on such a large-scale project (an in-universe solution to the Fermi paradox). Whilst they were relatively successful, certain advanced species were able to hide from Inhibitor forces, or even fight back.
In human history, during the 21st and 22nd centuries numerous wars occurred, and a flotilla of generation ships were deployed to colonise a planet orbiting the star 61 Cygni (this becomes a major segment of the plot of Chasm City). The flotilla was later to reach a planet termed Sky's Edge, which was to be embroiled in war until human civilisation there was eradicated.
Meanwhile, in the Solar System in 2190, the Conjoiners emerged as a result of increased experimentation with neural implants. In response, the Coalition for Neural Purity was formed, opposed to the Conjoiners. Nevil Clavain fought on the side of the Coalition in the ensuing war, but defected later on after being betrayed. Clavain, and the Conjoiners, succeeded in escaping the Solar System and left for surrounding stars.
For the next few centuries, the so-called Belle Epoque, humanity enjoyed a period of relative peace and prosperity, with several planets being colonised. The most successful planet of all was Yellowstone, a planet orbiting the star Epsilon Eridani, site of the Glitter Band / Rust Belt and Chasm City. Technologies developed included the Conjoiner Drive, a gift from the Conjoiners (who resumed contact with humanity at an unknown time), advanced nanotechnology, and numerous other devices. With the exception of an attempted takeover of the Glitter Band, no major incidents affected humanity during this time.
The Belle Epoque was terminated by the advent of the Melding Plague in 2510, a nanotechnological virus that destroyed all other nanotechnology it came into contact with. Only the Conjoiners were unaffected by this disaster, which devastated the civilisation around Yellowstone. War between the Demarchists and Conjoiners erupted as a result of the plague.
Meanwhile, activities around a far-flung human colony on the planet Resurgam, orbiting the star Delta Pavonis, inadvertently attracted the attention of the Inhibitors. The Conjoiners, also made aware of this event, sent Clavain to recover the exceedingly powerful "Cache Weapons" from this system (said weapons having been stolen from the Conjoiners centuries before) that could be used to fend off the Inhibitors while the Conjoiners escaped. Clavain instead defected from the Conjoiners, intending to use the weapons to protect all of humanity. Skade, another Conjoiner, was sent to stop him and recover the weapons. They fought around the Resurgam system, with Clavain and his allies eventually obtaining the weapons. Clavain's ally Remontoire agreed to seek out alien assistance from the Hades Matrix, a nearby alien computer disguised as a neutron star, whilst Clavain sheltered refugees from Resurgam on another planet, later termed Ararat.
Remontoire returned in 2675, only a few days after Clavain's death at the hands of Skade, who had arrived with him. Remontoire and his allies were now at war with the Inhibitors, assisted by alien technology obtained from Hades. Even so, it was realised that the humans would not last indefinitely, and Clavain's people, now led by Scorpio, decided to seek out the mysterious "Shadows": a race believed to be near a moon called Hela, site of a theocracy. Aura, daughter of Ana Khouri (an ally of Remontoire) infiltrated the theocracy under the pseudonym Rashmika Els. After considerable conflict, Scorpio and Aura realised that contacting the Shadows was inadvisable. With the later assistance of the Conjoiner known as Glass, and of Clavain's estranged brother Warren, Scorpio and Aura (now going by the name Lady Arek) instead succeeded in contacting the Nestbuilders, an alien race who provided them with weapons capable of defeating the Inhibitors. As such, the Inhibitors were effectively eradicated from human space, with buffer zones and frontiers established to keep them at bay.
Humanity then enjoyed a second, 400-year-long golden age. After this, however, came the Greenfly outbreak, in which human civilisation was destroyed by a rogue terraforming system of human origin that destroyed planets and converted them to millions of orbiting, vegetation-filled habitats. The Greenfly began to subsume most of human space, with all efforts to stop them failing, due to the Greenfly having assimilated both aspects of the Melding Plague and Inhibitor technology. The storyline of the Revelation Space universe thus far concludes with humanity leaving the Milky Way galaxy in an attempt to set up a new civilisation elsewhere.
Books and stories set in the universe
All short stories and novellas in this universe to date are collected in Galactic North and Diamond Dogs, Turquoise Days, with the exception of "Monkey Suit", "The Last Log of the Lachrimosa", "Night Passage", "Open and Shut", and "Plague Music".
The Inhibitor Sequence
Revelation Space. London: Gollancz, 2000. .
Redemption Ark. London: Gollancz, 2002. .
Absolution Gap. London: Gollancz, 2003. .
Inhibitor Phase. London: Gollancz, 2021. .
Prefect Dreyfus Emergencies
The Prefect. London: Gollancz, 2007, . (Re-released as Aurora Rising in 2017, )
Elysium Fire. London: Gollancz, 2018, .
Machine Vendetta. London: Gollancz, 2023, .
Standalone
Chasm City. London: Gollancz, 2001. .
Short fiction
"Dilation Sleep" — originally published in Interzone #39 (September 1990); reprinted in Galactic North
"A Spy in Europa" — originally published in Interzone #120 (June 1997); reprinted in The Year's Best Science Fiction: Fifteenth Annual Collection (1998, ), Gardner Dozois, ed.; and in Galactic North; and posted free online at Infinity Plus
"Galactic North" — originally published in Interzone #145 (July 1999); reprinted in Space Soldiers (2001, ), Jack Dann and Gardner Dozois, eds.; and in The Year's Best Science Fiction: Seventeenth Annual Collection (2000, ), Gardner Dozois, ed.; and in Hayakawa's SF magazine; and in Galactic North
"Monkey Suit" — originally published in Death Ray #20 (July 2009); reprinted in Deep Navigation
"The Last Log of the Lachrimosa" — originally published in Subterranean Online (July 2014); reprinted in Beyond the Aquila Rift
"Night Passage" — originally published in the SF anthology Infinite Stars by Titan Books (October 2017, )
"Open and Shut" — A Prefect Dreyfus Emergency short story, originally published on the Gollancz website (January 2018)
"Plague Music" — originally published in Belladonna Nights and Other Stories, Subterranean Press (2021, )
Novellas
"Great Wall of Mars" — originally published in Spectrum SF #1 (February 2000); reprinted in The Year's Best Science Fiction: Eighteenth Annual Collection (2001, ), Gardner Dozois, ed.; and in Galactic North and in Beyond the Aquila Rift
"Glacial" — originally published in Spectrum SF #5 (March 2001); reprinted in The Year's Best Science Fiction: Nineteenth Annual Collection (2002, ), Gardner Dozois, ed.; and in Galactic North
Diamond Dogs — originally published as a chapbook from PS Publishing (2001, ); reprinted in Infinities (2002), Peter Crowther, ed.; and in Diamond Dogs, Turquoise Days and in Beyond the Aquila Rift
Turquoise Days — originally published as a chapbook from Golden Gryphon (2002, no ISBN); reprinted in The Year's Best Science Fiction: Twentieth Annual Collection (2003, ), Gardner Dozois, ed.; and in Best of the Best Volume 2: 20 Years of the Year's Best Short Science Fiction Novels (2007, ), Gardner Dozois, ed.; and in Diamond Dogs, Turquoise Days
"Weather" — originally published in Galactic North (2006); reprinted in Beyond the Aquila Rift
"Grafenwalder's Bestiary" — originally published in Galactic North (2006)
"Nightingale" — originally published in Galactic North (2006); reprinted in The Year's Best Science Fiction: Twenty-Fourth Annual Collection (2006, ), Gardner Dozois, ed.
Stories in chronological order
References
External links
Book series introduced in 2000
Future history
Revelation Space
Science fiction book series
Space opera
Fictional universes
Fiction about artificial intelligence
Fiction about nanotechnology
Fiction about consciousness transfer
Fiction set in the 7th millennium or beyond | Revelation Space series | [
"Materials_science"
] | 2,517 | [
"Fiction about nanotechnology",
"Nanotechnology"
] |
13,302,135 | https://en.wikipedia.org/wiki/Wuxian%20%28Shang%20dynasty%29 | Wuxian () was a Chinese shaman, or Wu () who practiced divination, prayer, sacrifice, rainmaking, and healing in Chinese traditions dating back over 3,000 years. Wuxian lived in the Shang dynasty (c. 1600–1046 BC) of China, and served under king Tai Wu. He is considered one of the main ancient Chinese astronomers alongside more historical figures such as Gan De and Shi Shen, the latter two of whom lived during the Warring States (403–221 BC). He has also been represented as one of the "Three Astronomical Traditions" on the Dunhuang map which was made during the Tang dynasty (618–907).
See also
Li Sao
Tai Wu
References
Ancient Chinese astronomers
Shang dynasty people
Year of birth unknown
Year of death unknown | Wuxian (Shang dynasty) | [
"Astronomy"
] | 164 | [
"Astronomers",
"Astronomer stubs",
"Astronomy stubs"
] |
13,302,262 | https://en.wikipedia.org/wiki/Fragment%20separator | A fragment separator is an ion-optical device used to focus and separate products from the collision of relativistic ion beams with thin targets. Selected products can then be studied individually. Fragment separators typically consist of a series of superconducting magnetic multipole elements. The thin target immediately before the separator allows the fragments produced through various reactions to escape the target material still at a very high velocity. The products are forward-focused because of the high velocity of the center-of-mass in the beam-target interaction, which allows fragment separators to collect a large fraction (in some cases nearly all) of the fragments produced in the target. Some examples of currently operating Fragment separators are the FRS at GSI, the A1900 at NSCL, and BigRIPS of Radioactive Isotope Beam Factory at RIKEN.
References
Experimental physics | Fragment separator | [
"Physics"
] | 179 | [
"Experimental physics"
] |
13,303,011 | https://en.wikipedia.org/wiki/Daintree%20Networks | Daintree Networks, Inc. was a building automation company that provided wireless control systems for commercial and industrial buildings. Founded in 2003, Daintree was headquartered in Los Altos, California, with an R&D lab in Melbourne, Australia.
Daintree's ControlScope wireless control system includes switches, sensors, LED drivers, programmable thermostats, and plug load controllers. Wireless communication is achieved either by wireless adaptation of traditional wired devices (such as sensors), or by building wireless communications modules directly into the devices.
Daintree had produced a design verification and operational support tool, the Sensor Network Analyzer (SNA), which supports wireless embedded technologies including IEEE 802.15.4, Zigbee, Zigbee RF4CE, 6LoWPAN, JenNet (from Jennic Limited), SimpliciTI (from Texas Instruments), and Synkro (from Freescale Semiconductor).
History
Daintree was founded in 2003 by Bill Wood, who had previously worked as a General Manager for Agilent Technologies, and Hewlett-Packard.
Daintree managers have previously held roles within wireless standards bodies, including chair of several working groups within the Zigbee Alliance.
In 2003, when many wireless technologies were new, Daintree provided design verification and operational support tools for wireless embedded developers. In 2007 the company began developing and delivering wireless systems for specific purposes; by 2009 it had narrowed its focus to lighting and building control.
On April 21, 2016, Current Lighting Solutions, an energy management startup within GE, acquired Daintree Networks for US$77 million to combine its open-standard wireless network with GE's open source platform Predix to offer a new energy management system to businesses.
Products
ControlScope Manager (CSM): Software used to configure, manage, and maintain key energy loads in commercial buildings. It includes management of individual devices and "zones" of multiple devices. This includes calibration, scheduling, alarm notification, energy monitoring, occupancy and daylight control, demand response controls, and an automated commissioning tool.
Wireless Area Controller (WAC): Hardware that manages the wireless network, contains the control algorithms that convert sensing data into commands for ballasts and luminaires, tracks devices and stores their states, and detects issues and repairs the system.
Wireless Adapter: Hardware that enables traditional wired devices to communicate wirelessly within the network. It interfaces wireless signals to wired controls, and can be used in conjunction with devices such as wired sensors, LED drivers, ballasts and switches.
Sensor Network Analyzer (SNA): Discontinued software for the development, deployment and management of wireless hardware devices and embedded applications based on technologies such as Zigbee, IEEE 802.15.4, 6LoWPAN, SimpliciTI and Synkro. It carries out performance measurements and graphical network visualization. The SNA was a part of Zigbee Alliance interoperability and certification events. (Discontinued 31 March 2010.)
Sensor Network Adapter: Discontinued hardware used as a capture device in wireless embedded networks, able to interact with live Zigbee and IEEE 802.15.4 networks to poll, configure and commission devices. (Discontinued 31 March 2010.)
Zigbee
Zigbee is a specification for a suite of high-level communication protocols using small, low-power digital radios based on the IEEE 802.15.4 standard for wireless personal area networks (WPANs). Zigbee was targeted at RF applications that require a low data rate, long battery life, and secure networking.
Daintree was an active member of the Zigbee Alliance, and its Sensor Network Analyzer was used by the Zigbee Alliance for product certification.
References
External links
Zigbee Alliance
IEEE 802.15.4
Wireless Embedded Network resources including white papers, glossary and specification updates
Building automation
Companies based in Mountain View, California
Computer companies established in 2003
Computer companies disestablished in 2016
Defunct computer companies of the United States
Defunct computer hardware companies
Defunct software companies of the United States
Environmental technology
Home automation companies
IEEE 802
Personal area networks
Software companies based in California
Wireless sensor network | Daintree Networks | [
"Technology",
"Engineering"
] | 860 | [
"Home automation",
"Home automation companies",
"Building engineering",
"Wireless networking",
"Wireless sensor network",
"Automation",
"Building automation"
] |
13,303,432 | https://en.wikipedia.org/wiki/Blue%20dwarf%20%28red-dwarf%20stage%29 | A blue dwarf is a hypothetical class of star that develops from a red dwarf after it has exhausted much of its hydrogen fuel supply. Because red dwarfs fuse their hydrogen slowly and are fully convective (allowing their entire hydrogen supply to be fused, instead of merely that in the core), they are predicted to have lifespans of trillions of years; the Universe is currently not old enough for any blue dwarfs to have formed yet. Their future existence is predicted based on theoretical models.
Hypothetical scenario
Stars increase in luminosity as they age, and a more luminous star must radiate energy more quickly to maintain equilibrium. For stars more massive than red dwarfs, the resulting internal pressure increases their size, causing them to become red giants with larger surface areas. However, it is predicted that red dwarfs with less than 0.25 solar masses, rather than expanding, will increase radiative rate through an increase in surface temperature, hence emitting more blue and less red light. This is because the surface layers of red dwarfs do not become significantly more opaque with increasing temperature, so higher-energy photons from the interior of the star can escape, rather than being absorbed and re-radiated at lower temperatures as occurs in larger stars.
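As a rough back-of-the-envelope illustration of this scaling (added here, not taken from the cited simulations), the Stefan–Boltzmann law L = 4πR²σT⁴ implies that a star whose luminosity grows while its radius stays roughly fixed must raise its effective temperature as the fourth root of the luminosity increase. The numbers in the Python sketch below are arbitrary examples.

```python
# Back-of-the-envelope sketch (illustrative values only): with L = 4*pi*R^2*sigma*T^4,
# brightening at (nearly) constant radius forces T to scale as L**(1/4).
def temperature_after_brightening(t_initial_k, luminosity_factor, radius_factor=1.0):
    """Effective temperature after luminosity grows by `luminosity_factor`
    while the radius changes by `radius_factor` (1.0 = unchanged)."""
    return t_initial_k * (luminosity_factor / radius_factor**2) ** 0.25

# e.g. a 3000 K dwarf that becomes 10 times more luminous without expanding
print(round(temperature_after_brightening(3000.0, 10.0)))   # ~5335 K
```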
Despite their name, blue dwarfs would not necessarily increase in temperature enough to become blue stars. Simulations have been conducted on the future evolution of red dwarfs with stellar mass between 0.06 and 0.25 solar masses.
Of the masses simulated, the bluest of the blue dwarf stars at the end of the simulation had begun as a 0.14 solar-mass red dwarf, and ended with a surface temperature of approximately , making it a type A blue-white star.
End of stellar life
Blue dwarfs are believed to eventually completely exhaust their store of hydrogen fuel, and their interior pressures are insufficient to fuse any other fuel. Once fusion ends, they are no longer main-sequence "dwarf" stars and become so-called white dwarfs – which, despite the name, are not main-sequence "dwarfs" and are not stars, but rather stellar remnants.
Once the former "blue"-dwarf stars have become degenerate, non-stellar white dwarfs, they cool, losing the remnant heat left over from their final hydrogen-fusing stage. The cooling process also requires enormous periods of time – much longer than the age of the universe at present – similar to the immense time previously required for them to change from their original red dwarf stage to their final blue dwarf stage. The stellar remnant white dwarf will eventually cool to become a black dwarf. (The universe is not old enough for any stellar remnants to have cooled to "black", so black dwarfs are also a well-founded, but still hypothetical object.)
It is also theoretically possible for these dwarfs at any stage of their lives to merge and become larger stars, such as helium stars. Such stars should ultimately also become white dwarfs, which like the others, will cool down to black dwarfs.
See also
Lists of stars
References
Hypothetical stars
Stellar evolution | Blue dwarf (red-dwarf stage) | [
"Physics"
] | 603 | [
"Astrophysics",
"Stellar evolution"
] |
13,304,056 | https://en.wikipedia.org/wiki/Mayhew%20Prize | The Mayhew Prize is a prize awarded annually by the Faculty of Mathematics, University of Cambridge to the student showing the greatest distinction in applied mathematics, primarily for courses offered by DAMTP, but also for some courses offered by the Statistical Laboratory, in the MASt examinations, also known as Part III of the Mathematical Tripos. This includes about half of all students taking the Tripos Math exam, since the rest are taking mainly pure mathematics courses. Since 2018 the Faculty have also awarded the Pure Mathematics Prize for pure mathematics, but due to an absence of funds there is no equivalent monetary reward.
The Mayhew Prize was founded in 1923 through a donation of £500 by William Loudon Mollison, Master of Clare College, in memory of his wife Ellen Mayhew (1846–1917).
List of winners
Most of this list is from The Times newspaper archive. The winners of the prize are published in the Cambridge University Reporter.
1925 Sydney Goldstein
1926 John Arthur Gaunt and Alan Herries Wilson
1927 James Hargreaves
1928 Sir Maurice Joseph Dean
1929 Kenneth Lawrence Dunkley and Eustace Neville Fox
1930 John Conrad Jaeger
1931 Geoffrey William Carter
1932 Robert Allan Smith
1935 Noel Bryan Slater
1936 Fred Hoyle and George Stanley Rushbrooke
1937 J. Corner and Charles Henry Brian Priestley
1938 F. Booth
1939 John Currie Gunn and A. Nisbet
1941 Kenneth Le Couteur and T. Paterson
1942 James G. Oldroyd
1947 Keith Stewartson
1948 John Pople
1950 Roger Tayler
1954 Jeffrey Goldstone and Stanley Mandelstam
1955 Gordon Robert Screaton
1956 M.H. McAndrew and Graham P. McCauley
1957 C. Hunter and J. Nuttall
1958 I. Hunter
1959 Christopher J. Bradley and Robin W. Lardner
1960 John Robert Taylor
1961 John B. Boyling
1962 David Branson and W.G. Dixon
1963 Tim Pedley
1964 Geoffrey Charles Fox
1965 Christopher J. R. Garrett
1966 Neil W. Macfadyen and David L. Moss
1967 Peter Goddard and A.P. Hunt
1968 David John Collop and John Ellis
1969 P.V. Collins
1970 John Margarson Huthnance
1971 David Martin Scott and Malcolm A. Swinbanks
1972 Peter David D'Eath
1973 M.J. Bolton and Peter Harrison
1974 Bernard Silverman and William Morton
1975 L Ruth Cairnie Thomlinson and Richard Weber
1976 J.Y. Probert and Chris Rogers
1977 Simon J. Hathrell
1978 Stephen John Cowley and Glyn Patrick Moody
1979 Paul R.W. Mansfield
1980 Russell J. Gerrard
1981 William Shaw
1982 Richard David Ball and S.G. Goodyear
1983 Peter Julian Ruback
1984 John Ronald Lister
1985 Andrew David Gilbert
1986 Andrew William Woods
1987 Oliver E. Jensen
1988 Paul S. Montague
1989 Nicolas P.E. Weeds
1990 O.J. Harris and M.L.T. Loke
1991 Michael A. Earnshaw
1992 Paul A. Shah
1993 Simon F. Ross
1994 Raphael Lehrer and Dean Rasheed
1995 Marika Taylor
1996 Damon Jude Wischik
1997 David W. Essex and Harvey S. Reall
1998 Toby Wiseman
1999 James Sparks
2000 Gareth J.R. Birdsall
2001 Sean Hartnoll and Aninda Sinha
2002 Robert J. Whittaker
2003 Joseph Conlon
2004 William Hall
2005 Claude Warnick
2006 Chris Cawthorn
2007 Steffen Gielen
2008 Antoine Labatie
2009 Andrew Crosby
2010 Rosie Oglethorpe
2011 Mike Blake
2012 Gunnar Peng
2013 Pierre Haas
2014 James Munro
2015 Julio Parra Martinez
2016 Matthew Colbrook
2017 Dominic Skinner
2018 Daniel Zhang
2019 Edward Beaty
2021 Wilfred Salmon
2022 Adam Wills
2023 Robert Bourne
See also
List of mathematics awards
References
Faculty of Mathematics, University of Cambridge
Mathematics awards | Mayhew Prize | [
"Technology"
] | 762 | [
"Science and technology awards",
"Mathematics awards"
] |
13,305,267 | https://en.wikipedia.org/wiki/LU%20reduction | LU reduction is an algorithm related to LU decomposition. This term is usually used in the context of super computing and highly parallel computing. In this context it is used as a benchmarking algorithm, i.e. to provide a comparative measurement of speed for different computers. LU reduction is a special parallelized version of an LU decomposition algorithm, an example can be found in (Guitart 2001). The parallelized version usually distributes the work for a matrix row to a single processor and synchronizes the result with the whole matrix (Escribano 2000).
Sources
J. Oliver, J. Guitart, E. Ayguadé, N. Navarro and J. Torres. Strategies for Efficient Exploitation of Loop-level Parallelism in Java. Concurrency and Computation: Practice and Experience(Java Grande 2000 Special Issue), Vol.13 (8-9), pp. 663–680. ISSN 1532-0634, July 2001, , last retrieved on Sept. 14 2007
J. Guitart, X. Martorell, J. Torres, and E. Ayguadé, Improving Java Multithreading Facilities: the Java Nanos Environment, Research Report UPC-DAC-2001-8, Computer Architecture Department, Technical University of Catalonia, March 2001, .
Arturo González-Escribano, Arjan J. C. van Gemund, Valentín Cardeñoso-Payo et al., Measuring the Performance Impact of SP-Restricted Programming in Shared-Memory Machines, In Vector and Parallel Processing — VECPAR 2000, Springer Verlag, pp. 128–141, , 2000,
Numerical linear algebra
Supercomputers | LU reduction | [
"Mathematics",
"Technology"
] | 341 | [
"Supercomputers",
"Applied mathematics",
"Applied mathematics stubs",
"Supercomputing"
] |
13,305,328 | https://en.wikipedia.org/wiki/Prolate%20spheroidal%20wave%20function | In mathematics, prolate spheroidal wave functions are eigenfunctions of the Laplacian in prolate spheroidal coordinates, adapted to boundary conditions on certain ellipsoids of revolution (an ellipse rotated around its long axis, “cigar shape“). Related are the oblate spheroidal wave functions (“pancake shaped” ellipsoid).
Solutions to the wave equation
Solve the Helmholtz equation,
\(\nabla^2 \Phi + k^2 \Phi = 0\),
by the method of separation of variables in prolate spheroidal coordinates, \((\xi, \eta, \varphi)\), with:
\(x = \tfrac{d}{2}\sqrt{(\xi^2 - 1)(1 - \eta^2)}\,\cos\varphi,\qquad y = \tfrac{d}{2}\sqrt{(\xi^2 - 1)(1 - \eta^2)}\,\sin\varphi,\qquad z = \tfrac{d}{2}\,\xi\,\eta,\)
and \(\xi \ge 1\), \(|\eta| \le 1\), and \(0 \le \varphi \le 2\pi\). Here, \(d\) is the interfocal distance of the elliptical cross section of the prolate spheroid.
Setting \(c = \tfrac{1}{2} k d\), the solution \(\Phi(\xi, \eta, \varphi)\) can be written
as the product of \(e^{i m \varphi}\), a radial spheroidal wave function \(R_{mn}(c, \xi)\) and an angular spheroidal wave function \(S_{mn}(c, \eta)\).
The radial wave function satisfies the linear ordinary differential equation:
\(\frac{d}{d\xi}\left[(\xi^2 - 1)\,\frac{d R_{mn}(c, \xi)}{d\xi}\right] - \left(\lambda_{mn}(c) - c^2 \xi^2 + \frac{m^2}{\xi^2 - 1}\right) R_{mn}(c, \xi) = 0.\)
The angular wave function satisfies the differential equation:
\(\frac{d}{d\eta}\left[(1 - \eta^2)\,\frac{d S_{mn}(c, \eta)}{d\eta}\right] + \left(\lambda_{mn}(c) - c^2 \eta^2 - \frac{m^2}{1 - \eta^2}\right) S_{mn}(c, \eta) = 0.\)
It is the same differential equation as in the case of the radial wave function. However, the range of the variable is different: in the radial wave function, \(\xi \ge 1\), while in the angular wave function, \(|\eta| \le 1\). The eigenvalue \(\lambda_{mn}(c)\) of this Sturm–Liouville problem is fixed by the requirement that \(S_{mn}(c, \eta)\) must be finite for \(|\eta| \to 1\).
For \(c = 0\) both differential equations reduce to the equations satisfied by the associated Legendre polynomials. For \(c \neq 0\), the angular spheroidal wave functions can be expanded as a series of Legendre functions.
If one writes \(S_{mn}(c, \eta) = (1 - \eta^2)^{m/2}\, u_{mn}(c, \eta)\), the function \(u_{mn}(c, \eta)\) satisfies
\((1 - \eta^2)\,\frac{d^2 u_{mn}}{d\eta^2} - 2(m + 1)\,\eta\,\frac{d u_{mn}}{d\eta} + \left(\lambda_{mn}(c) - c^2 \eta^2 - m(m + 1)\right) u_{mn} = 0,\)
which is known as the spheroidal wave equation. This auxiliary equation has been used by Stratton.
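The following short Python sketch (an illustration added here, not part of the article) evaluates the angular function and its eigenvalue with SciPy; the mode numbers, the spheroidal parameter c and the evaluation points are arbitrary choices.

```python
# Illustrative sketch: prolate spheroidal angular functions in SciPy.
# The mode numbers (m, n), parameter c and evaluation points are arbitrary examples.
import numpy as np
from scipy import special

m, n, c = 0, 1, 1.0
eta = np.linspace(-0.9, 0.9, 5)

lam = special.pro_cv(m, n, c)                               # eigenvalue lambda_mn(c)
s_values = [special.pro_ang1(m, n, c, x)[0] for x in eta]   # S_mn(c, eta); index [1] would be the derivative

print("lambda_mn(c) =", lam)
print("S_mn(c, eta) =", np.round(s_values, 4))
# pro_rad1 and pro_rad2 provide the corresponding radial functions R_mn(c, xi).
```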
Band-limited signals
In signal processing, the prolate spheroidal wave functions (PSWF) are useful as eigenfunctions of a time-limiting operation followed by a low-pass filter. Let \(D\) denote the time truncation operator, such that \(f(t) = D f(t)\) if and only if \(f\) has support on \([-T, T]\). Similarly, let \(B\) denote an ideal low-pass filtering operator, such that \(f(t) = B f(t)\) if and only if its Fourier transform is limited to \([-\Omega, \Omega]\). The operator \(B D\) turns out to be linear, bounded and self-adjoint. For \(n = 0, 1, 2, \ldots\) we denote with \(\psi_n(t)\) the \(n\)-th eigenfunction, defined as
\(B D\, \psi_n(t) = \frac{1}{2\pi} \int_{-T}^{T} \left[ \int_{-\Omega}^{\Omega} e^{i \omega (t - \tau)}\, d\omega \right] \psi_n(\tau)\, d\tau = \lambda_n\, \psi_n(t),\)
where \(1 > \lambda_0 > \lambda_1 > \cdots > 0\) are the associated eigenvalues, and \(c = T\Omega\) (the time–bandwidth product) is a constant. The band-limited functions \(\{\psi_n(t)\}\) are the prolate spheroidal wave functions, proportional to the angular functions \(S_{0n}(c, \cdot)\) introduced above. (See also Spectral concentration problem.)
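In discrete time, the analogous concentration problem is solved by the discrete prolate spheroidal (Slepian) sequences, which SciPy can compute directly; the sketch below is an added illustration with arbitrary length and time–half-bandwidth product, not part of the original text.

```python
# Illustration: discrete prolate spheroidal (Slepian) sequences, the sampled analogue
# of the time/band concentration problem above. N, NW and K are arbitrary examples.
from scipy.signal import windows

N = 128         # sequence length
NW = 3.0        # time--half-bandwidth product
K = 4           # number of sequences to return

tapers, ratios = windows.dpss(N, NW, Kmax=K, return_ratios=True)
print(tapers.shape)   # (4, 128): the first four Slepian sequences
print(ratios)         # concentration ratios (eigenvalues), close to 1 for the best-concentrated sequences
```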
Pioneering work in this area was performed by Slepian and Pollak, Landau and Pollak, and Slepian.
Prolate spheroidal wave functions whose domain is (a portion of) the surface of the unit sphere are more generally called "Slepian functions". These are of great utility in disciplines such as geodesy, cosmology, and tomography.
Technical information and history
There are different normalization schemes for spheroidal functions. A table of the different schemes can be found in Abramowitz and Stegun who follow the notation of Flammer.
The Digital Library of Mathematical Functions provided by NIST is an excellent resource for spheroidal wave functions.
Tables of numerical values of spheroidal wave functions are given in Flammer, Hunter, Hanish et al., and Van Buren et al.
Originally, the spheroidal wave functions were introduced by C. Niven, which lead to a Helmholtz equation in spheroidal coordinates. Monographs tying together many aspects of the theory of spheroidal wave functions were written by Strutt, Stratton et al., Meixner and Schafke, and Flammer.
Flammer provided a thorough discussion of the calculation of the eigenvalues, angular wavefunctions, and radial wavefunctions for both the prolate and the oblate case. Computer programs for this purpose have been developed by many, including King et al., Patz and Van Buren, Baier et al., Zhang and Jin, Thompson and Falloon. Van Buren and Boisvert have recently developed new methods for calculating prolate spheroidal wave functions that extend the ability to obtain numerical values to extremely wide parameter ranges. Fortran source code that combines the new results with traditional methods is available at http://www.mathieuandspheroidalwavefunctions.com.
Asymptotic expansions of angular prolate spheroidal wave functions for large values of have been derived by Müller. He also investigated the relation between asymptotic expansions of spheroidal wave functions.
References
External links
MathWorld Spheroidal Wave functions
MathWorld Prolate Spheroidal Wave Function
MathWorld Oblate Spheroidal Wave function
Special functions
Wavelets | Prolate spheroidal wave function | [
"Mathematics"
] | 972 | [
"Special functions",
"Combinatorics"
] |
13,305,402 | https://en.wikipedia.org/wiki/Global%20brain | The global brain is a neuroscience-inspired and futurological vision of the planetary information and communications technology network that interconnects all humans and their technological artifacts. As this network stores ever more information, takes over ever more functions of coordination and communication from traditional organizations, and becomes increasingly intelligent, it increasingly plays the role of a brain for the planet Earth. In the philosophy of mind, global brain finds an analog in Averroes's theory of the unity of the intellect.
Basic ideas
Proponents of the global brain hypothesis claim that the Internet increasingly ties its users together into a single information processing system that functions as part of the collective nervous system of the planet. The intelligence of this network is collective or distributed: it is not centralized or localized in any particular individual, organization or computer system. Therefore, no one can command or control it. Rather, it self-organizes or emerges from the dynamic networks of interactions between its components. This is a property typical of complex adaptive systems.
The World Wide Web in particular resembles the organization of a brain with its web pages (playing a role similar to neurons) connected by hyperlinks (playing a role similar to synapses), together forming an associative network along which information propagates. This analogy becomes stronger with the rise of social media, such as Facebook, where links between personal pages represent relationships in a social network along which information propagates from person to person.
Such propagation is similar to the spreading activation that neural networks in the brain use to process information in a parallel, distributed manner.
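A toy Python sketch of this mechanism is given below; the hyperlink graph, decay factor and number of steps are made-up illustrative values, and the routine is not a model proposed by any of the authors discussed in this article.

```python
# Toy illustration of spreading activation over a small, made-up hyperlink graph.
# Activation starts at one page and propagates along outgoing links with decay.
def spread_activation(links, start, decay=0.5, steps=3):
    """links maps each page to the pages it links to; returns activation per page."""
    activation = {page: 0.0 for page in links}
    activation[start] = 1.0
    for _ in range(steps):
        updated = dict(activation)
        for page, outlinks in links.items():
            if activation[page] > 0.0 and outlinks:
                share = decay * activation[page] / len(outlinks)
                for target in outlinks:
                    updated[target] += share
        activation = updated
    return activation

web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": []}
print(spread_activation(web, "A"))
```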
History
Although some of the underlying ideas were already expressed by Nikola Tesla in the late 19th century and were written about by many others before him, the term "global brain" was coined in 1982 by Peter Russell in his book The Global Brain. How the Internet might be developed to achieve this was set out in 1986. The first peer-reviewed article on the subject was published by Gottfried Mayer-Kress in 1995, while the first algorithms that could turn the world-wide web into a collectively intelligent network were proposed by Francis Heylighen and Johan Bollen in 1996.
Reviewing the strands of intellectual history that contributed to the global brain hypothesis, Francis Heylighen distinguishes four perspectives: organicism, encyclopedism, emergentism and evolutionary cybernetics. He asserts that these developed in relative independence but now are converging in his own scientific re-formulation.
Organicism
In the 19th century, the sociologist Herbert Spencer saw society as a social organism and reflected about its need for a nervous system. Entomologist William Wheeler developed the concept of the ant colony as a spatially extended organism, and in the 1930s he coined the term superorganism to describe such an entity. This concept was later adopted by thinkers such as Joël de Rosnay in a 1986 book and Gregory Stock in Metaman (1993) to describe planetary society as a superorganism.
The mental aspects of such an organic system at the planetary level were perhaps first broadly elaborated by palaeontologist and Jesuit priest Pierre Teilhard de Chardin. In 1945, he described a coming "planetisation" of humanity, which he saw as the next phase of accelerating human "socialisation". Teilhard described both socialization and planetization as irreversible, irresistible processes of macrobiological development culminating in the emergence of a noosphere, or global mind (see Emergentism below).
The more recent living systems theory describes both organisms and social systems in terms of the "critical subsystems" ("organs") they need to contain in order to survive, such as an internal transport system, a resource reserve, and a decision-making system. This theory has inspired several thinkers, including Peter Russell and Francis Heylighen to define the global brain as the network of information processing subsystems for the planetary social system.
Encyclopedism
In the perspective of encyclopedism, the emphasis is on developing a universal knowledge network. The first systematic attempt to create such an integrated system of the world's knowledge was the 18th century Encyclopédie of Denis Diderot and Jean le Rond d'Alembert. However, by the end of the 19th century, the amount of knowledge had become too large to be published in a single synthetic volume. To tackle this problem, Paul Otlet founded the science of documentation, now called information science. In the 1930s he envisaged a World Wide Web-like system of associations between documents and telecommunication links that would make all the world's knowledge available immediately to anybody. H. G. Wells proposed a similar vision of a collaboratively developed world encyclopedia that would be constantly updated by a global university-like institution. He called this a World Brain, as it would function as a continuously updated memory for the planet, although the image of humanity acting informally as a more organic global brain is a recurring motif in many of his other works.
Tim Berners-Lee, the inventor of the World Wide Web, too, was inspired by the free-associative possibilities of the brain for his invention. The brain can link different kinds of information without any apparent link otherwise; Berners-Lee thought that computers could become much more powerful if they could imitate this functioning, i.e. make links between any arbitrary piece of information. The most powerful implementation of encyclopedism to date is Wikipedia, which integrates the associative powers of the world-wide-web with the collective intelligence of its millions of contributors, approaching the ideal of a global memory. The Semantic web, also first proposed by Berners-Lee, is a system of protocols to make the pieces of knowledge and their links readable by machines, so that they could be used to make automatic inferences, thus providing this brain-like network with some capacity for autonomous "thinking" or reflection.
Emergentism
This approach focuses on the emergent aspects of the evolution and development of complexity, including the spiritual, psychological, and moral-ethical aspects of the global brain, and is at present the most speculative approach. The global brain is here seen as a natural and emergent process of planetary evolutionary development. Here again Pierre Teilhard de Chardin attempted a synthesis of science, social values, and religion in his The Phenomenon of Man, which argues that the telos (drive, purpose) of universal evolutionary process is the development of greater levels of both complexity and consciousness. Teilhard proposed that if life persists then planetization, as a biological process producing a global brain, would necessarily also produce a global mind, a new level of planetary consciousness and a technologically supported network of thoughts which he called the noosphere. Teilhard's proposed technological layer for the noosphere can be interpreted as an early anticipation of the Internet and the Web.
Evolutionary cybernetics
Systems theorists and cyberneticians commonly describe the emergence of a higher order system in evolutionary development as a "metasystem transition" (a concept introduced by Valentin Turchin) or a "major evolutionary transition". Such a metasystem consists of a group of subsystems that work together in a coordinated, goal-directed manner. It is as such much more powerful and intelligent than its constituent systems. Francis Heylighen has argued that the global brain is an emerging metasystem with respect to the level of individual human intelligence, and investigated the specific evolutionary mechanisms that promote this transition.
In this scenario, the Internet fulfils the role of the network of "nerves" that interconnect the subsystems and thus coordinates their activity. The cybernetic approach makes it possible to develop mathematical models and simulations of the processes of self-organization through which such coordination and collective intelligence emerges.
Recent developments
In 1994 Kevin Kelly, in his popular book Out of Control, posited the emergence of a "hive mind" from a discussion of cybernetics and evolutionary biology.
In 1996, Francis Heylighen and Ben Goertzel founded the Global Brain group, a discussion forum grouping most of the researchers that had been working on the subject of the global brain to further investigate this phenomenon. The group organized the first international conference on the topic in 2001 at the Vrije Universiteit Brussel.
After a period of relative neglect, the Global Brain idea has recently seen a resurgence in interest, in part due to talks given on the topic by Tim O'Reilly, the Internet forecaster who popularized the term Web 2.0, and Yuri Milner, the social media investor. In January 2012, the Global Brain Institute (GBI) was founded at the Vrije Universiteit Brussel to develop a mathematical theory of the "brainlike" propagation of information across the Internet. In the same year, Thomas W. Malone and collaborators from the MIT Center for Collective Intelligence have started to explore how the global brain could be "programmed" to work more effectively, using mechanisms of collective intelligence. The complexity scientist Dirk Helbing and his NervousNet group have recently started developing a "Planetary Nervous System", which includes a "Global Participatory Platform", as part of the large-scale FuturICT project, thus preparing some of the groundwork for a Global Brain.
In July 2017, Elon Musk founded the company Neuralink, which aims to create a brain-computer interface (BCI) with significantly greater information bandwidth than traditional human interface devices. Musk predicts that artificial intelligence systems will rapidly outpace human abilities in most domains and views them as an existential threat. He believes an advanced BCI would enable human cognition to remain relevant for longer. The firm raised US$27 million from 12 investors in 2017.
Criticisms
A common criticism of the idea that humanity would become directed by a global brain is that this would reduce individual diversity and freedom, and lead to mass surveillance. This criticism is inspired by totalitarian forms of government, as exemplified by George Orwell's character of "Big Brother". It is also inspired by the analogy between collective intelligence or swarm intelligence and insect societies, such as beehives and ant colonies, in which individuals are essentially interchangeable. In a more extreme view, the global brain has been compared with the Borg, a race of collectively thinking cyborgs conceived by the Star Trek science fiction franchise.
Global brain theorists reply that the emergence of distributed intelligence would lead to the exact opposite of this vision. James Surowiecki in his book The Wisdom of Crowds argued that the reason is that effective collective intelligence requires diversity of opinion, decentralization and individual independence.
See also
Noeme – a combination of a distinct physical brain function and that of an outsourced virtual one
Noosphere, described by Vladimir Vernadsky and Pierre Teilhard de Chardin
References
Further reading
Wide audience
(emphasis on philosophy and consciousness)
It from bit and fit from bit. On the origin and impact of information in the average evolution. Includes how life forms originate and from there evolve to become more and more complex, like organisations and multinational corporations and a "global brain" (Yves Decadt, 2000). Book published in Dutch with English paper summary in The Information Philosopher, http://www.informationphilosopher.com/solutions/scientists/decadt/
(new sciences and technologies).
(emphasis on global innovation management)
Advanced literature
(The classic on physical and psychological/mental development of global brain and global mind).
.
For more references, check the GBI bibliography:
External links
The Global Brain FAQ on the Principia Cybernetica Web
The Global Brain Institute at the Vrije Universiteit Brussel
Management cybernetics
Global civilization
Hypothetical technology
Holism
Superorganisms
Systems theory
Theories of history
World | Global brain | [
"Biology"
] | 2,398 | [
"Superorganisms",
"Symbiosis"
] |
13,305,657 | https://en.wikipedia.org/wiki/Energy%20and%20Minerals%20Business%20Council | The Energy and Minerals Business Council is a global business forum of mining and energy corporations formed in 2006 with an inaugural meeting at the Grand Hyatt Melbourne Hotel, in Melbourne, Australia on 18 November and 19 November 2006. The meeting was held to coincide with the 2006 G20 summit.
Membership is composed of the world's most powerful mining and oil companies including BHP, Rio Tinto, Brazilian iron ore miner Companhia Vale do Rio Doce (CVRD), Woodside Petroleum, Saudi Aramco the state-owned national oil company of Saudi Arabia, Anglo American, PetroCanada and the BG Group.
Inaugural Meeting
Fifteen chief executives from the mining and energy sector attended the inaugural meeting, with Chip Goodyear, CEO of BHP Billiton chairing the meeting.
In a first for a G20 meeting, an elite group of business leaders from the Energy and Minerals Business Council "was able to address G20 Finance Ministers and Reserve Bank Governors at dinner on Saturday evening and at a working lunch".
In a statement released on 18 November 2006 the Energy and Minerals Business Council said:
"The aim for the natural resources industry is to provide sufficient resources at a reasonable price and in a sustainable way to ensure social and economic development."
Resource industry leaders said there had been a lack of investment in capacity in the industry, with a decline in levels of both skills and research and development, and that growth in Asian demand had not been anticipated. "The industry has been responding to these changes and is making massive investments, but the lead times are typically up to 10 years," according to the statement reported in The Australian. The statement also argued for policy reforms to build extraction capacity, encourage market solutions, and promote accountability.
Rodrigo de Rato, Managing Director of the International Monetary Fund, addressed the meeting of the Energy and Minerals Business Council, as did Australian Treasurer Peter Costello.
See also
International Council on Mining and Metals
References
International business organizations
Mining trade associations
International energy organizations | Energy and Minerals Business Council | [
"Engineering"
] | 402 | [
"International energy organizations",
"Energy organizations"
] |
13,305,897 | https://en.wikipedia.org/wiki/Signature%20Towers | Signature Towers (formerly known as Dancing Towers) was a under construction for a three-tower, mixed-use complex in Dubai, United Arab Emirates. It was designed by Iraqi born architect Zaha Hadid after winning an international design competition which included proposals from OMA and Reiser & Umemoto among others. The developers were Dubai Properties, the company responsible for the earlier Jumeirah Beach Residence. Apart from these three towers, the project would also include a new building to house the Dubai Financial Market, a large podium containing retail space and a pedestrian bridge crossing the creek extension.
History
The project was first unveiled to the public in June 2006 at a Zaha Hadid exhibition in the Guggenheim Museum in New York City. At the time of the launch the name of the project was Dancing Towers; however, this has since been changed to Signature Tower & Dubai Financial Market Development, and construction started on 6 April 2024.
Gallery
See also
Supertall
Skyscraper
Zaha Hadid
List of tallest buildings in Dubai
List of tallest buildings designed by women
References
Proposed skyscrapers in Dubai
Futurist architecture
Architecture in Dubai
High-tech architecture
Postmodern architecture
Zaha Hadid buildings | Signature Towers | [
"Engineering"
] | 237 | [
"Postmodern architecture",
"Architecture"
] |
13,305,949 | https://en.wikipedia.org/wiki/William%20L.%20Sibert | Major General William Luther Sibert (October 12, 1860 – October 16, 1935) was a senior United States Army officer who commanded the 1st Division on the Western Front during World War I.
Early life and education
Sibert was born in Gadsden, Alabama, on October 12, 1860. After attending the University of Alabama from 1879 to 1880, he entered the United States Military Academy and was appointed a second lieutenant of Engineers, United States Army, on June 15, 1884. His appointment was a distinction as only the top 10 percent of each West Point class was then commissioned into the Engineers.
Military career
He graduated from the Engineer School of Applications in 1887 and went on to hold several engineer positions in the United States and overseas.
In 1899, he was assigned as the chief engineer of the 8th Army Corps and the chief engineer and general manager of the Manila and Dagupan Railroad during the Philippine Insurrection. Later, he returned to the United States where he was in charge of river and harbor districts and headquarters in Louisville and Pittsburgh.
From 1907 through 1914, Sibert was a member of the Panama Canal Commission and was responsible for the building of a number of critical parts of the Panama Canal, including the Gatun Locks and Dam, the West Breakwater in Colon, and the channel from Gatun Lake to the Pacific Ocean.
On March 15, 1915, Sibert, by now a lieutenant colonel, was promoted to the rank of brigadier general. Such a jump in rank, while not an uncommon practice in the Regular Army of the time, came about in an unusual way in Sibert's case. Congress wanted to make Sibert a brigadier general, but the Engineer Corps was only authorized one, so instead of expanding the Corps, they appointed Sibert to a line officer slot (i.e., Infantry). The Army, not knowing what to do with an engineer who had never led troops or trained for combat suddenly elevated to a general of infantry, decided to assign Sibert, who had been working on canal projects in the Mid-West and on advisory missions to China, to command the Pacific Coast's Coastal Artillery. Here, it was felt he could do little harm.
Unfortunately for Sibert, when the United States entered World War I in April 1917, Brigadier General Sibert was one of the only senior infantry officers on active duty. He was duly breveted to major general and deployed with the initial four regiments of the American Expeditionary Forces (AEF) which formed the 1st Division (nicknamed "The Big Red One") once in France. The AEF's Commander-in-Chief (C-in-C), General John J. Pershing, a long serving cavalry officer famous for his exploits at San Juan Hill in the Spanish–American War, and recently in charge of the campaign against Pancho Villa, was short on qualified general officers (he himself had only recently been promoted to his position) so Sibert was placed in charge of the 1st Division.
To his credit, Sibert opposed his own promotion as a line officer, protesting his lack of experience. In the peacetime Army prior to 1917, such a mismatch was relatively harmless; in the cauldron of the Western Front, it was a serious problem. The AEF suffered serious leadership problems throughout the final year of the war, as officers were rapidly promoted to positions for which they had little or no experience. The American Army was singularly unprepared for the war, and the strain of its rapid expansion created many personnel problems like Sibert's.
Part of the problem was the Army's promotion system, which continued to cause problems into World War II. The rank a Regular Army officer might hold, and their official rank were not always the same. Thus a "peacetime rank" and a "wartime" rank differed. An officer might start the war as a lieutenant colonel, end the war as a major general, and then revert to being a lieutenant colonel after the war. Incidentally, pay was not necessarily tied to rank, but depended on time in service and an individual's official rank. In the small Regular Army of 1917, most officers were below the rank of colonel, and few serving in general officer billets actually were recognized by Congress as holding the rank of general, rather, they were "breveted" to the higher rank. Actual promotion required Congressional approval, the number of positions limited by law, and was based solely on seniority. Breveting allowed the Army to bypass these restrictions, for better or worse. Thus, the problem of promoting Sibert to brigadier general in the Engineer Corps and the subsequent trouble it caused.
Sibert led the 1st Infantry Division during its initial training by French and British forces. In October 1917, Pershing wrote an extensive letter to Secretary of War Newton D. Baker expressing his concerns about some of his generals, "I hope you will permit me to speak very frankly and quite confidentially, but I fear that we have some general officers who have neither the experience, the energy, nor the aggressive spirit to prepare their units or to handle them under battle conditions, as they exist today. I shall comment in an enclosure on the individuals to whom I refer particularly."
In January 1918, the first elements of the AEF, part of the 1st Infantry Division, prepared to deploy into the line at Ansauville. MG Sibert was relieved by General John J. Pershing before the Division's deployment to the front. Pershing was dissatisfied with the Division's progress and elevated Brigadier General Robert Lee Bullard, a true line officer, to replace Sibert. Sibert returned to the United States in January 1918 where he became the commanding general of the Army Corps of Engineers Southeastern Department located at Charleston, South Carolina. Sibert was not alone in his relief, as Secretary Baker had approved Pershing's relief of a number of individuals. Pershing showed some measure of respect for Sibert, who was pushing 58 years old (a contributing factor to his relief), recognizing that the position Sibert was in, was not entirely of his own making. Pershing was not nearly as kind to others he removed from command during the war.
When the War Department created the Chemical Warfare Service (CWS) later that spring, Pershing was asked to name a general officer to head it. Pershing recommended Sibert to the War Department, demonstrating his understanding of Sibert's true ability as an engineer and project manager. Following his assignment to the CWS on June 28, 1918, Congress promoted Sibert to the rank of Major General, making the earlier brevet promotion official. Sibert led the CWS from May 1918 to February 1920. During that period the CWS in the United States focused on production and equipment. As commander of the CWS he oversaw the production of America's first chemical warfare agent, Lewisite, and the development of the US Army's chemical defense equipment, including the first US protective (or "gas") masks, the M-1 and M-2. The CWS in Europe, part of the AEF, did not fall under Sibert's control. Instead, that was led by Colonel Amos Fries, part of Pershing's Command Staff. When Sibert announced his retirement in 1919, Amos Fries, still in Europe, was selected to replace him. Today the US Army considers Sibert the "father of the US Army Chemical Corps" because he was the first commander of the CWS. Of course, he was also the first commanding officer of the 1st Infantry Division, the oldest continually serving Division in the United States Army.
Sibert retired from active duty in February 1920 and settled in Bowling Green, Kentucky. Following his retirement from the Army, Sibert led the modernization of the docks and waterways in Mobile, Alabama, and served on the Presidential Commission that led to the building of Hoover Dam. He was elected to the University of Alabama Engineering Hall of Fame in 1961.
For his services during World War I he was awarded the Army Distinguished Service Medal, the citation for which reads:
Personal life
Sibert married Mary Margaret Cummings in September 1887, with whom he had five sons and one daughter. After Mary's death in 1915, General Sibert married Juliette Roberts in June 1917. She died 15 months later and in 1922 Sibert married Evelyn Clyne Bairnsfather of Edinburgh, Scotland who remained his wife until his death on October 16, 1935, in Bowling Green. General Sibert is buried at Arlington National Cemetery. Two of his five sons, Edwin L. Sibert and Franklin C. Sibert, each retired as Major Generals in the Army.
Decorations
References
External links
William Luther Sibert in the Alabama Hall of Fame
US Army Chemical Corps Regimental Association Biography of MG William L. Sibert
1860 births
1935 deaths
People from Gadsden, Alabama
Chemical warfare
United States Army Corps of Engineers personnel
Burials at Arlington National Cemetery
United States Military Academy alumni
United States Army generals of World War I
United States Army generals
American military personnel of the Spanish–American War
American military personnel of the Philippine–American War
Recipients of the Distinguished Service Medal (US Army)
Commanders of the Legion of Honour
Military personnel from Alabama
19th-century United States Army personnel | William L. Sibert | [
"Chemistry"
] | 1,859 | [
"nan"
] |
13,306,439 | https://en.wikipedia.org/wiki/Greens%20Japan | The is an established national green party in Japan.
After the electoral success of Green activist Ryuhei Kawada in the 2007 House of Councillors election, the local green political network Rainbow and Greens had reportedly decided to dissolve itself and merge with the Japan Greens in December 2007. The two precedent organizations dissolved themselves and relaunched as Greens Japan, a political organization in late 2008, under its former Japanese name, Midori no Mirai (みどりの未来 - "green future").
History
The party was founded in July 2012 and held its first general assembly in that same month.
Representation
The party has a number of elected city council members/councillors in towns and cities across Japan. On 22 November 2010, Kazumi Inamura became the first popularly elected Greens Japan mayor, in the city of Amagasaki. As well as being the youngest mayor elected in Japan’s history at the age of 38, she is also the first popularly elected female mayor of the city. She won the mayoralty with 54% of the vote.
Party establishment
On 28 July 2012, the party was officially re-established under its new name by local assembly members and civic groups to run in the Upper House election.
Policies
The party opposes Japan's entry into the Trans-Pacific Partnership (TPP).
The party supports a universal basic income (UBI).
See also
Energy in Japan
Environmental issues in Japan
Universal basic income in Japan
References
External links
Midori no Tō (Greens Japan) (official website)
News articles
New Green Party formed in Japan/Group seeks to reflect anti-nuclear, environmental, pro-democracy movements (Article in Green Pages, newspaper of the Green Party of the United States. September 2012).
2008 establishments in Japan
2012 establishments in Japan
Anti-nuclear organizations
Environmentalism in Japan
Global Greens member parties
Green parties in Asia
Political parties in Japan | Greens Japan | [
"Engineering"
] | 374 | [
"Nuclear organizations",
"Anti-nuclear organizations"
] |
13,307,194 | https://en.wikipedia.org/wiki/Braden%20Allenby | Braden R. Allenby (born 1950) is an American environmental scientist, environmental attorney and Professor of Civil and Environmental Engineering, and of Law, at Arizona State University.
Biography
Allenby was born in Highland Park, Illinois, on December 29, 1950, to Dr. Richard J. Allenby, Jr. (1923–2017) and Julia T. Allenby (1925–2002). He is the oldest of three brothers, the others being Dr. Kent Allenby (born 1952) and Peter Allenby (born 1957).
Allenby graduated cum laude from Yale University in 1972, received his Juris Doctor from the University of Virginia Law School in 1978, his Master's in Economics from the University of Virginia in 1979, his Master's in Environmental Sciences from Rutgers University in the Spring of 1989, and his Ph.D. in Environmental Sciences from Rutgers in 1992.
He joined AT&T in 1983 as a telecommunications regulatory attorney, and was an environmental attorney and Senior Environmental Attorney for AT&T from 1984 to 1993. From 1991 to 1992 he was the J. Herbert Holloman Fellow at the National Academy of Engineering in Washington, DC. From 1995 to 1997 he was Director for Energy and Environmental Systems at Lawrence Livermore National Laboratory, on temporary assignment from his position as Research Vice President, Technology and Environment, for AT&T. From 1997 to 2004 he was the Environment, Health, and Safety Vice President for AT&T, with global responsibility for those operations for the firm. In 2004, he moved to Arizona State University, where he is now President's Professor, and Lincoln Professor of Engineering and Ethics. In June 2000, he chaired the second Gordon Conference on Industrial Ecology.
In 2007 he was President of the International Society for Industrial Ecology; Chair of the AAAS Committee on Science, Engineering, and Public Policy; and a Batten Fellow in Residence at the University of Virginia's Darden Graduate School of Business Administration.
He is a member of the Virginia Bar, and has worked as an attorney for the Civil Aeronautics Board and the Federal Communications Commission, as well as a strategic consultant on economic and technical telecommunications issues. He is a Fellow of the Royal Society for the Arts, Manufactures & Commerce. He is currently a member or former member of a number of editorial and advisory boards.
Work
His areas of expertise include: design for environment, earth systems engineering and management, industrial ecology, NBIC (i.e., nanotechnology, biotechnology, information and communication technology, and cognitive science), emerging technologies and technological evolution.
He has taught courses on industrial ecology and design for environment at the Yale University School of Forestry and Environmental Studies, the University of Virginia School of Engineering and Applied Science, and at the University of Wisconsin Engineering Extension School; and has lectured widely on earth systems engineering and management, industrial ecology, design for Environment, and the social and policy implications of emerging technologies, especially information and communication technologies.
Allenby has authored a number of books, articles and book chapters on his above mentioned interests.
Publications
Books:
1994, The Greening of Industrial Ecosystems, National Academy Press
1994, Environmental Threats and National Security: An International Challenge to Science and Technology, Lawrence Livermore National Laboratory
1995, Industrial Ecology, Prentice-Hall
1996, Design for Environment, Prentice-Hall
1997, Industrial Ecology and the Automobile, Prentice-Hall
1998, Industrial Ecology: Policy Framework and Implementation, Prentice-Hall
2001, Information Systems and the Environment, National Academy of Engineering – Technology & Engineering
2005, Reconstructing earth : Technology and environment in the age of humans. Washington, DC: Island Press.
2009, Industrial Ecology and Sustainable Engineering, Prentice-Hall
2011, The Techno-Human Condition, The MIT Press
2012, "The Theory and Practice of Sustainable Engineering", Pearson Education
2015, "The Applied Ethics of Emerging Military and Security Technologies", Ashgate
2016, "Future Conflict and Emerging Technologies", Consortium for Science, Policy & Outcomes
2017, "Weaponized Narrative: The New Battlespace," New America Foundation/ASU Center on the Future of War
2017, "Moral Injury: Towards an International Perspective," New America Foundation/ASU Center on the Future of War
Various Articles:
Earth systems engineering and management, IEEE Technology and Society Magazine, (2000)
“Earth systems engineering and management: A manifesto,” Environmental Science & Technology 41, no. 23 (2007): 7960–7965.
“The ontologies of industrial ecology?,” Progress in Industrial Ecology, An International Journal 3, no. 1 (2006): 28–40.
“Toward inherently secure and resilient societies,” Science 309, no. 5737 (2005): 1034.
“The Anthropocene as Media: Information Systems and the Creation of the Human Earth,” American Behavioral Scientist 52, no. 1 (2008): 107.
“From human to transhuman: Technology and the reconstruction of the world,” Templeton lecture, October 22 (2007): 2007.
“Ethical Systems in an Age of Accelerating Technological Evolution,” in Electronics and the Environment, 2006. Proceedings of the 2006 IEEE International Symposium on, 2006, 42–44.
“Complexity in urban systems: ICT and transportation,” in IEEE International Symposium on Electronics and the Environment, 2008. ISEE 2008, 2008, 1–3.
“Educating engineers in the anthropocene,” in IEEE International Symposium on Electronics and the Environment, 2008. ISEE 2008, 2008, 1–3.
“Sustainable Engineering Education: Translating Myth to Mechanism,” in Electronics & the Environment, Proceedings of the 2007 IEEE International Symposium on, 2007, 52–56.
“Earth systems engineering: The role of industrial ecology in an engineered world,” Journal of Industrial Ecology 2, no. 3 (1999): 73–93.
“Industrial ecology,” foresight 2, no. 02 (2000).
“Understanding industrial ecology from a biological systems perspective,” co-written with W.E.Cooper, Environmental Quality Management 3, no. 3 (1994).
“Culture and industrial ecology,” Journal of Industrial Ecology 3, no. 1 (1999): 2–4.
See also
Industrial ecology
Earth systems engineering and management
References
ASU Directory Profile: Braden Allenby – biography
http://schoolofsustainability.asu.edu/about/faculty/persbio.php?pid=4360 – biography
Braden R. Allenby – biography
External links
Video of talk on Earth Systems Engineering and Management
Center for Earth Systems Engineering and Management at Arizona State University
Article on ESEM in the Encyclopedia of Earth
1950 births
Arizona State University faculty
Environmental engineers
American environmental scientists
Industrial ecology
Living people
Yale University alumni | Braden Allenby | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,382 | [
"Environmental scientists",
"American environmental scientists",
"Industrial engineering",
"Environmental engineering",
"Industrial ecology"
] |
13,307,469 | https://en.wikipedia.org/wiki/Empirical%20software%20engineering | Empirical software engineering (ESE) is a subfield of software engineering (SE) research that uses empirical research methods to study and evaluate an SE phenomenon of interest. The phenomenon may refer to software development tools/technology, practices, processes, policies, or other human and organizational aspects.
ESE has roots in experimental software engineering, but as the field has matured the need for, and acceptance of, both quantitative and qualitative research have grown. Today, common research methods used in ESE for primary and secondary research are the following:
Primary research methods (experimentation, case study research, survey research, and simulations, in particular software process simulation)
Secondary research methods (systematic reviews, systematic mapping studies, rapid reviews, tertiary reviews)
Teaching empirical software engineering
Some comprehensive books for students, professionals and researchers interested in ESE are available.
Research community
Journals, conferences, and communities devoted specifically to ESE:
Empirical Software Engineering: An International Journal
International Symposium on Empirical Software Engineering and Measurement
International Software Engineering Research Network (ISERN)
References
Software engineering | Empirical software engineering | [
"Technology",
"Engineering"
] | 205 | [
"Software engineering",
"Systems engineering",
"Information technology",
"Computer engineering"
] |
13,307,577 | https://en.wikipedia.org/wiki/Data%20breach | A data breach, also known as data leakage, is "the unauthorized exposure, disclosure, or loss of personal information".
Attackers have a variety of motives, from financial gain to political activism, political repression, and espionage. There are several technical root causes of data breaches, including accidental or intentional disclosure of information by insiders, loss or theft of unencrypted devices, hacking into a system by exploiting software vulnerabilities, and social engineering attacks such as phishing where insiders are tricked into disclosing information. Although prevention efforts by the company holding the data can reduce the risk of data breach, it cannot bring it to zero.
The first reported breach was in 2002 and the number occurring each year has grown since then. A large number of data breaches are never detected. If a breach is made known to the company holding the data, post-breach efforts commonly include containing the breach, investigating its scope and cause, and notifications to people whose records were compromised, as required by law in many jurisdictions. Law enforcement agencies may investigate breaches, although the hackers responsible are rarely caught.
Many criminals sell data obtained in breaches on the dark web. Thus, people whose personal data was compromised are at elevated risk of identity theft for years afterwards and a significant number will become victims of this crime. Data breach notification laws in many jurisdictions, including all states of the United States and European Union member states, require the notification of people whose data has been breached. Lawsuits against the company that was breached are common, although few victims receive money from them. There is little empirical evidence of economic harm to firms from breaches except the direct cost, although there is some evidence suggesting a temporary, short-term decline in stock price.
Definition
A data breach is a violation of "organizational, regulatory, legislative or contractual" law or policy that causes "the unauthorized exposure, disclosure, or loss of personal information". Legal and contractual definitions vary. Some researchers include other types of information, for example intellectual property or classified information. However, companies mostly disclose breaches because it is required by law, and only personal information is covered by data breach notification laws.
Prevalence
The first reported data breach occurred on 5 April 2002 when 250,000 social security numbers collected by the State of California were stolen from a data center. Before the widespread adoption of data breach notification laws around 2005, the prevalence of data breaches is difficult to determine. Even afterwards, statistics per year cannot be relied on because data breaches may be reported years after they occurred, or not reported at all. Nevertheless, the statistics show a continued increase in the number and severity of data breaches. In 2016, researcher Sasha Romanosky estimated that data breaches (excluding phishing) outnumbered other security breaches by a factor of four.
Perpetrators
According to a 2020 estimate, 55 percent of data breaches were caused by organized crime, 10 percent by system administrators, 10 percent by end users such as customers or employees, and 10 percent by states or state-affiliated actors. Opportunistic criminals may cause data breaches—often using malware or social engineering attacks, but they will typically move on if the security is above average. More organized criminals have more resources and are more focused in their targeting of particular data. Both of them sell the information they obtain for financial gain. Another source of data breaches are politically motivated hackers, for example Anonymous, that target particular objectives. State-sponsored hackers target either citizens of their country or foreign entities, for such purposes as political repression and espionage. Often they use undisclosed zero-day vulnerabilities for which the hackers are paid large sums of money. The Pegasus spyware—a no-click malware developed by the Israeli company NSO Group that can be installed on most cellphones and spies on the users' activity—has drawn attention both for use against criminals such as drug kingpin El Chapo as well as political dissidents, facilitating the murder of Jamal Khashoggi.
Causes
Technical causes
Despite developers' goal of delivering a product that works entirely as intended, virtually all software and hardware contains bugs. If a bug creates a security risk, it is called a vulnerability. Patches are often released to fix identified vulnerabilities, but those that remain unknown (zero days) as well as those that have not been patched are still liable for exploitation. Both software written by the target of the breach and third party software used by them are vulnerable to attack. The software vendor is rarely legally liable for the cost of breaches, thus creating an incentive to make cheaper but less secure software.
Vulnerabilities vary in their ability to be exploited by malicious actors. The most valuable allow the attacker to inject and run their own code (called malware), without the user being aware of it. Some malware is downloaded by users via clicking on a malicious link, but it is also possible for malicious web applications to download malware just from visiting the website (drive-by download). Keyloggers, a type of malware that records a user's keystrokes, are often used in data breaches. The majority of data breaches could have been averted by storing all sensitive information in an encrypted format. That way, physical possession of the storage device or access to encrypted information is useless unless the attacker has the encryption key. Hashing is also a good solution for keeping passwords safe from brute-force attacks, but only if the algorithm is sufficiently secure.
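To make the hashing point concrete, here is a minimal sketch in Python using only the standard library; the salt length and iteration count are illustrative assumptions, not values prescribed by any standard discussed above:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Derive a slow, salted hash so that a stolen credential table resists brute force."""
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes, iterations: int = 600_000) -> bool:
    """Recompute the derived key and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored)
```

Storing only the salt and digest (never the plaintext) means that, even if the credential table is exfiltrated, each password has to be attacked individually and slowly.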
Many data breaches occur on the hardware operated by a partner of the organization targeted—including the 2013 Target data breach and 2014 JPMorgan Chase data breach. Outsourcing work to a third party leads to a risk of data breach if that company has lower security standards; in particular, small companies often lack the resources to take as many security precautions. As a result, outsourcing agreements often include security guarantees and provisions for what happens in the event of a data breach.
Human causes
Human causes of breach are often based on trust of another actor that turns out to be malicious. Social engineering attacks rely on tricking an insider into doing something that compromises the system's security, such as revealing a password or clicking a link to download malware. Data breaches may also be deliberately caused by insiders. One type of social engineering, phishing, obtains a user's credentials by sending them a malicious message impersonating a legitimate entity, such as a bank, and getting the user to enter their credentials onto a malicious website controlled by the cybercriminal. Two-factor authentication can prevent the malicious actor from using the credentials. Training employees to recognize social engineering is another common strategy.
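As a sketch of the two-factor authentication idea mentioned above, the following generates a time-based one-time password in the style of RFC 6238 using only the Python standard library; the shared secret and the 30-second window are illustrative assumptions:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32: str, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password: HMAC of the current time window, truncated to a few digits."""
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // interval          # which 30-second window we are in
    message = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(key, message, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the user's authenticator share the secret out of band;
# a phished password alone is then not enough to log in.
print(totp("JBSWY3DPEHPK3PXP"))
```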
Another source of breaches is accidental disclosure of information, for example publishing information that should be kept private. With the increase in remote work and bring your own device policies, large amounts of corporate data is stored on personal devices of employees. Via carelessness or disregard of company security policies, these devices can be lost or stolen. Technical solutions can prevent many causes of human error, such as encrypting all sensitive data, preventing employees from using insecure passwords, installing antivirus software to prevent malware, and implementing a robust patching system to ensure that all devices are kept up to date.
Breach lifecycle
Prevention
Although attention to security can reduce the risk of data breach, it cannot bring it to zero. Security is not the only priority of organizations, and an attempt to achieve perfect security would make the technology unusable. Many companies hire a chief information security officer (CISO) to oversee the company's information security strategy. To obtain information about potential threats, security professionals will network with each other and share information with other organizations facing similar threats. Defense measures can include an updated incident response strategy, contracts with digital forensics firms that could investigate a breach, cyber insurance, and monitoring the dark web for stolen credentials of employees. In 2024, the United States National Institute of Standards and Technology (NIST) issued a special publication, "Data Confidentiality: Identifying and Protecting Assets Against Data Breaches". The NIST Cybersecurity Framework also contains information about data protection. Other organizations have released different standards for data protection.
The architecture of a company's systems plays a key role in deterring attackers. Daswani and Elbayadi recommend having only one means of authentication, avoiding redundant systems, and making the most secure setting default. Defense in depth and distributed privilege (requiring multiple authentications to execute an operation) also can make a system more difficult to hack. Giving employees and software the least amount of access necessary to fulfill their functions (principle of least privilege) limits the likelihood and damage of breaches. Several data breaches were enabled by reliance on security by obscurity; the victims had put access credentials in publicly accessible files. Nevertheless, prioritizing ease of use is also important because otherwise users might circumvent the security systems. Rigorous software testing, including penetration testing, can reduce software vulnerabilities, and must be performed prior to each release even if the company is using a continuous integration/continuous deployment model where new versions are constantly being rolled out.
The principle of least persistence—avoiding the collection of data that is not necessary and destruction of data that is no longer necessary—can mitigate the harm from breaches. The challenge is that destroying data can be more complex with modern database systems.
Response
A large number of data breaches are never detected. Of those that are, most breaches are detected by third parties; others are detected by employees or automated systems. Responding to breaches is often the responsibility of a dedicated computer security incident response team, often including technical experts, public relations, and legal counsel. Many companies do not have sufficient expertise in-house, and subcontract some of these roles; often, these outside resources are provided by the cyber insurance policy. After a data breach becomes known to the company, the next steps typically include confirming it occurred, notifying the response team, and attempting to contain the damage.
To stop exfiltration of data, common strategies include shutting down affected servers, taking them offline, patching the vulnerability, and rebuilding. Once the exact way that the data was compromised is identified, there is typically only one or two technical vulnerabilities that need to be addressed in order to contain the breach and prevent it from reoccurring. A penetration test can then verify that the fix is working as expected. If malware is involved, the organization must investigate and close all infiltration and exfiltration vectors, as well as locate and remove all malware from its systems. If data was posted on the dark web, companies may attempt to have it taken down. Containing the breach can compromise investigation, and some tactics (such as shutting down servers) can violate the company's contractual obligations.
Gathering data about the breach can facilitate later litigation or criminal prosecution, but only if the data is gathered according to legal standards and the chain of custody is maintained. Database forensics can narrow down the records involved, limiting the scope of the incident. Extensive investigation may be undertaken, which can be even more expensive than litigation. In the United States, breaches may be investigated by government agencies such as the Office for Civil Rights, the United States Department of Health and Human Services, and the Federal Trade Commission (FTC). Law enforcement agencies may investigate breaches although the hackers responsible are rarely caught.
Notifications are typically sent out as required by law. Many companies offer free credit monitoring to people affected by a data breach, although only around 5 percent of those eligible take advantage of the service. Issuing new credit cards to consumers, although expensive, is an effective strategy to reduce the risk of credit card fraud. Companies try to restore trust in their business operations and take steps to prevent a breach from reoccurring.
Consequences
For consumers
After a data breach, criminals make money by selling data, such as usernames, passwords, social media or customer loyalty account information, debit and credit card numbers, and personal health information (see medical data breach). Criminals often sell this data on the dark web—parts of the internet where it is difficult to trace users and illicit activity is widespread—using platforms like .onion or I2P. Originating in the 2000s, the dark web, followed by untraceable cryptocurrencies such as Bitcoin in the 2010s, made it possible for criminals to sell data obtained in breaches with minimal risk of getting caught, facilitating an increase in hacking. One popular darknet marketplace, Silk Road, was shut down in 2013 and its operators arrested, but several other marketplaces emerged in its place. Telegram is also a popular forum for illegal sales of data.
This information may be used for a variety of purposes, such as spamming, obtaining products with a victim's loyalty or payment information, identity theft, prescription drug fraud, or insurance fraud. The threat of data breach or revealing information obtained in a data breach can be used for extortion.
Consumers may suffer various forms of tangible or intangible harm from the theft of their personal data, or not notice any harm. A significant portion of those affected by a data breach become victims of identity theft. A person's identifying information often circulates on the dark web for years, causing an increased risk of identity theft regardless of remediation efforts. Even if a customer does not end up footing the bill for credit card fraud or identity theft, they have to spend time resolving the situation. Intangible harms include doxxing (publicly revealing someone's personal information), for example medication usage or personal photos.
For organizations
There is little empirical evidence of economic harm from breaches except the direct cost, although there is some evidence suggesting a temporary, short-term decline in stock price. Other impacts on the company can range from lost business, reduced employee productivity due to systems being offline or personnel redirected to working on the breach, resignation or firing of senior executives, reputational damage, and increasing the future cost of auditing or security. Consumer losses from a breach are usually a negative externality for the business. Some experts have argued that the evidence suggests there is not enough direct costs or reputational damage from data breaches to sufficiently incentivize their prevention.
Estimating the cost of data breaches is difficult, both because not all breaches are reported and also because calculating the impact of breaches in financial terms is not straightforward. There are multiple ways of calculating the cost to businesses, especially when it comes to personnel time dedicated to dealing with the breach. Author Kevvie Fowler estimates that more than half the direct cost incurred by companies is in the form of litigation expenses and services provided to affected individuals, with the remaining cost split between notification and detection, including forensics and investigation. He argues that these costs are reduced if the organization has invested in security prior to the breach or has previous experience with breaches. The more data records involved, the more expensive a breach typically will be. In 2016, researcher Sasha Romanosky estimated that while the mean breach cost around the targeted firm $5 million, this figure was inflated by a few highly expensive breaches, and the typical data breach was much less costly, around $200,000. Romanosky estimated the total annual cost to corporations in the United States to be around $10 billion.
Laws
Notification
The law regarding data breaches is often found in legislation to protect privacy more generally, and is dominated by provisions mandating notification when breaches occur. Laws differ greatly in how breaches are defined, what type of information is protected, the deadline for notification, and who has standing to sue if the law is violated. Notification laws increase transparency and provide a reputational incentive for companies to reduce breaches. The cost of notifying the breach can be high if many people were affected and is incurred regardless of the company's responsibility, so it can function like a strict liability fine.
Thomas on Data Breach listed 62 United Nations member states that are covered by data breach notification laws. Some other countries require breach notification in more general data protection laws. Shortly after the first reported data breach in April 2002, California passed a law requiring notification when an individual's personal information was breached. In the United States, notification laws proliferated after the February 2005 ChoicePoint data breach, widely publicized in part because of the large number of people affected (more than 140,000) and also because of outrage that the company initially informed only affected people in California. In 2018, the European Union's General Data Protection Regulation (GDPR) took effect. The GDPR requires notification within 72 hours, with very high fines possible for large companies not in compliance. This regulation also stimulated the tightening of data privacy laws elsewhere. The only United States federal law requiring notification for data breaches is limited to medical data regulated under HIPAA, but all 50 states (since Alabama passed a law in 2018) have their own general data breach notification laws.
Security safeguards
Measures to protect data from a breach are typically absent from the law or vague. Filling this gap is standards required by cyber insurance, which is held by most large companies and functions as de facto regulation. Of the laws that do exist, there are two main approaches—one that prescribes specific standards to follow, and the reasonableness approach. The former is rarely used due to a lack of flexibility and reluctance of legislators to arbitrate technical issues; with the latter approach, the law is vague but specific standards can emerge from case law. Companies often prefer the standards approach for providing greater legal certainty, but they might check all the boxes without providing a secure product. An additional flaw is that the laws are poorly enforced, with penalties often much less than the cost of a breach, and many companies do not follow them.
Litigation
Many class-action lawsuits, derivative suits, and other litigation have been brought after data breaches. They are often settled regardless of the merits of the case due to the high cost of litigation. Even if a settlement is paid, few affected consumers receive any money as it usually is only cents to a few dollars per victim. Legal scholars Daniel J. Solove and Woodrow Hartzog argue that "Litigation has increased the costs of data breaches but has accomplished little else." Plaintiffs often struggle to prove that they suffered harm from a data breach. The contribution of a company's actions to a data breach varies, and likewise the liability for the damage resulting for data breaches is a contested matter. It is disputed what standard should be applied, whether it is strict liability, negligence, or something else.
See also
Full disclosure (computer security)
Medical data breach
Surveillance capitalism
Data breaches in India
References
Sources
Breach
Data laws
Secure communication
Security breaches | Data breach | [
"Engineering"
] | 3,831 | [
"Cybersecurity engineering",
"Data security"
] |
13,307,983 | https://en.wikipedia.org/wiki/Resistive%20ballooning%20mode | The resistive ballooning mode (RBM) is an instability occurring in magnetized plasmas, particularly in magnetic confinement devices such as tokamaks, when the pressure gradient is opposite to the effective gravity created by a magnetic field.
Linear growth rate
The linear growth rate of the RBM instability is given as

\gamma_b = \sqrt{\frac{g}{L_p}} = \sqrt{\frac{2 c_s^2}{R_0 L_p}},

where \nabla p is the pressure gradient (with characteristic length L_p), g \approx 2 c_s^2 / R_0 is the effective gravity produced by a non-homogeneous magnetic field, R_0 is the major radius of the device, and c_s is the plasma sound speed.
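As a rough numerical illustration of the estimate above; the parameters below (sound speed, major radius, pressure-gradient length) are assumed, tokamak-edge-like values chosen only for illustration:

```python
import math

def rbm_growth_rate(c_s: float, R0: float, Lp: float) -> float:
    """Interchange-type estimate gamma_b = sqrt(2 * c_s**2 / (R0 * Lp))."""
    return math.sqrt(2.0 * c_s ** 2 / (R0 * Lp))

c_s = 1.0e5   # plasma sound speed [m/s] (assumed)
R0 = 1.7      # major radius [m] (assumed)
Lp = 0.02     # pressure-gradient length [m] (assumed)

print(f"gamma_b ~ {rbm_growth_rate(c_s, R0, Lp):.2e} 1/s")  # roughly 8e5 1/s
```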
Similarity with the Rayleigh–Taylor instability
The RBM instability is similar to the Rayleigh–Taylor instability (RT), with Earth gravity replaced by the effective gravity g, except that for the RT instability, g acts on the mass density of the fluid, whereas for the RBM instability, g acts on the pressure of the plasma.
Plasma instabilities
Stability theory
Tokamaks | Resistive ballooning mode | [
"Physics",
"Mathematics"
] | 184 | [
"Physical phenomena",
"Plasma physics",
"Plasma phenomena",
"Plasma instabilities",
"Stability theory",
"Plasma physics stubs",
"Dynamical systems"
] |
13,308,254 | https://en.wikipedia.org/wiki/Land%20mobile%20service | Land mobile service (short: LMS) is – in line to ITU Radio Regulations – a mobile service between base stations and land mobile stations, or between land mobile stations.
In accordance with ITU Radio Regulations (article 1) variations of this radiocommunication service are classified as follows:
Mobile service (article 1.24)
Land mobile service (article 1.26)
Land mobile-satellite service (article 1.27)
Frequency allocation
The allocation of radio frequencies is provided according to Article 5 of the ITU Radio Regulations (edition 2012).
In order to improve harmonisation in spectrum utilisation, the majority of service-allocations stipulated in this document were incorporated in national Tables of Frequency Allocations and Utilisations which is within the responsibility of the appropriate national administration. The allocation might be primary, secondary, exclusive, and shared.
primary allocation: is indicated by writing in capital letters
secondary allocation: is indicated by small letters (see example below)
exclusive or shared utilization: is within the responsibility of administrations
However, military usage, in bands where there is civil usage, will be in accordance with the ITU Radio Regulations. In NATO countries, military land mobile utilizations will be in accordance with the NATO Joint Civil/Military Frequency Agreement (NJFA).
FCC LMR Narrowbanding Mandate
LMR Narrowbanding is the result of an FCC Order issued in December 2004 mandating that all CFR 47 Part 90 business, educational, industrial, public safety, and state and local government VHF (150-174 MHz) and UHF (421-470 MHz) Private Land Mobile Radio (PLMR) licensees operating legacy wideband (25 kHz bandwidth) voice or data/SCADA systems migrate to narrowband (12.5 kHz bandwidth or equivalent) systems by January 1, 2013.
See also
Business Radio Service
Land mobile radio system
Narrowband
Forest Industries Telecommunications
Radio station
Radiocommunication service
References
External links
FCC: Public Safety Radio Service
FCC: Industrial/Business Radio Service
FCC: Private Land Mobile Radio
Narrowbanding Information, Updates, and Licensee Resources
Mobile services ITU | Land mobile service | [
"Technology"
] | 428 | [
"Mobile telecommunications",
"Mobile services ITU"
] |
13,308,315 | https://en.wikipedia.org/wiki/Line%20%28unit%29 | The line (abbreviated L or l or ‴ or lin.) was a small English unit of length, variously reckoned as , , , or of an inch. It was not included among the units authorized as the British Imperial system in 1824.
Size
The line was not recognized by any statute of the English Parliament but was usually understood as 1/4 of a barleycorn (which itself was recognized by statute as 1/3 of an inch), making it 1/12 of an inch and 1/144 of a foot. The line was eventually decimalized as 1/10 of an inch, without recourse to barleycorns.
The US button trade uses the same or a similar term but defined as one-fortieth of the US-customary inch (making a button-maker's line equal to 0.635 mm).
In use
Botanists formerly used the units (usually as inch) to measure the size of plant parts. Linnaeus's Philosophia botanica (1751) includes the Linea in its summary of units of measurements, defining it as []; Stearns gives its length as . Even after metrication, British botanists continued to employ tools with gradations marked as linea (lines); the British line is approximately and the Paris line approximately .
Entomologists in the UK and other European countries in the 1800s used lines as a unit of measurement for insects, at least for the relatively large mantids and phasmids. Examples include Westwood, in the UK, and de Haan in the Netherlands.
Gunsmiths and armament companies also employed the 1/10-inch line (the "decimal line"), in part owing to the importance of the German and Russian arms industries. These are now given in terms of millimeters, but the seemingly arbitrary 7.62 mm (0.30 in) caliber was originally understood as a 3-line caliber (as with the 1891 Mosin–Nagant rifle). The caliber used by the M2 Browning machine gun was similarly a 5-line caliber.
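A small arithmetic check of the caliber examples above, treating the gunsmith's "decimal line" as one-tenth of an inch:

```python
INCH_MM = 25.4

def decimal_lines_to_mm(lines: float) -> float:
    """Convert 'decimal' lines (1/10 inch each) to millimetres."""
    return lines * INCH_MM / 10.0

print(decimal_lines_to_mm(3))  # 7.62 mm, the 3-line Mosin-Nagant caliber
print(decimal_lines_to_mm(5))  # 12.7 mm, the 5-line M2 Browning caliber
```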
Foreign units
Other similar small units called lines include:
The Russian liniya (ли́ния), 1/10 of the diuym, which had been set precisely equal to an English inch by Peter the Great
The French ligne or "Paris line", 1/12 of the French inch, 2.256 mm and about 1.06 L.
The Portuguese linha, 1/12 of the Portuguese inch or 12 "points", or 2.29 mm
The German Linie was usually 1/12 of the German inch but sometimes also 1/10 of the German inch
The Vienna line, 1/12 of a Vienna inch.
See also
English units used prior to 1824
Imperial units defined by the British Weights and Measures Act 1824
List of unusual units of measurement
References
Citations
Bibliography
.
.
.
.
.
Units of length
Obsolete units of measurement | Line (unit) | [
"Mathematics"
] | 546 | [
"Obsolete units of measurement",
"Quantity",
"Units of measurement",
"Units of length"
] |
13,308,386 | https://en.wikipedia.org/wiki/Lineshaft%20roller%20conveyor | A lineshaft roller conveyor or line-shaft conveyor is, as its name suggests, powered by a shaft beneath rollers. These conveyors are suitable for light applications up to 50 kg such as cardboard boxes and tote boxes.
A single shaft runs below the rollers running the length of the conveyor. On the shaft are a series of spools, one spool for each roller. An elastic polyurethane o-ring belt runs from a spool on the powered shaft to each roller. When the shaft is powered, the o-ring belt acts as a chain between the spool and the roller making the roller rotate. The rotation of the rollers pushes the product along the conveyor. The shaft is usually driven by an electrical motor that is generally controlled by an electronic PLC (programmable logic controller). The PLC electronically controls how specific sections of the conveyor system interact with the products being conveyed.
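To make the drive mechanism concrete, here is a minimal sketch of the resulting speed relationship, assuming the o-ring acts as a non-slipping belt and that goods ride on the roller's outer surface; all dimensions are invented, illustrative values:

```python
import math

def product_speed_m_per_min(shaft_rpm: float, spool_dia_mm: float,
                            groove_dia_mm: float, roller_dia_mm: float) -> float:
    """Linear speed of goods carried on the rollers."""
    # The o-ring couples the spool to a groove on the roller, so (with no slip)
    # their surface speeds match and roller rpm scales with the diameter ratio.
    roller_rpm = shaft_rpm * spool_dia_mm / groove_dia_mm
    # Goods travel at the outer surface speed of the roller.
    return math.pi * (roller_dia_mm / 1000.0) * roller_rpm

print(round(product_speed_m_per_min(shaft_rpm=240, spool_dia_mm=40,
                                    groove_dia_mm=45, roller_dia_mm=50), 1))  # ~33.5 m/min
```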
Advantages of this conveyor are quiet operation, easy installation, moderate maintenance and low expense. Line-shaft conveyors are also extremely safe for people to work around because the elastic belts can stretch and not injure fingers should any get caught underneath them. Moreover, the spools will slip and allow the rollers to stop moving if clothing, hands or hair gets caught in them. In addition, since the spools are slightly loose on the shaft, they act like clutches that slip when products are required to accumulate (stop moving and bump up against each other. i.e. queue up). With the exception of soft bottomed containers like cement bags, these conveyors can be utilized for almost all applications.
A disadvantage of the roller lineshaft conveyor is that it can only be used to convey products that span at least three rollers, but rollers can be as small as 17mm in diameter and as close together as 18.5mm. For items shorter than 74mm, the conveyor belt system is generally used as an alternative option.
See also
Conveyor systems
Conveyor belt
Chain conveyor
Line shaft
External links
Freight transport
Mechanical power transmission
Packaging machinery | Lineshaft roller conveyor | [
"Physics",
"Engineering"
] | 429 | [
"Packaging machinery",
"Mechanical power transmission",
"Mechanics",
"Industrial machinery"
] |
13,309,700 | https://en.wikipedia.org/wiki/Pentagonal%20bipyramidal%20molecular%20geometry | In chemistry, a pentagonal bipyramid is a molecular geometry with one atom at the centre with seven ligands at the corners of a pentagonal bipyramid. A perfect pentagonal bipyramid belongs to the molecular point group D5h.
The pentagonal bipyramid is a case where bond angles surrounding an atom are not identical (see also trigonal bipyramidal molecular geometry). This is one of the three common shapes for heptacoordinate transition metal complexes, along with the capped octahedron and the capped trigonal prism.
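As a short illustration of the point about non-identical bond angles, the snippet below computes the distinct ligand–centre–ligand angles for an idealized pentagonal bipyramid with unit bond lengths (idealized coordinates, used here only for illustration):

```python
import itertools
import math

# Idealized unit vectors: two axial ligands plus five equatorial ligands 72 degrees apart.
axial = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]
equatorial = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5), 0.0)
              for k in range(5)]
ligands = axial + equatorial

def angle_deg(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

angles = sorted({round(angle_deg(u, v), 1)
                 for u, v in itertools.combinations(ligands, 2)})
print(angles)  # [72.0, 90.0, 144.0, 180.0] -- the angles around the centre are not all equal
```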
Pentagonal bipyramids are claimed to be promising coordination geometries for lanthanide-based single-molecule magnets, since they present no extradiagonal crystal field terms, therefore minimising spin mixing, and all of their diagonal terms are in first approximation protected from low-energy vibrations, minimising vibronic coupling.
Examples
Iodine heptafluoride (IF7) with 7 bonding groups
Rhenium heptafluoride (ReF7)
Peroxo chromium(IV) complexes, e.g. [Cr(O2)2(NH3)3] where the peroxo groups occupy four of the planar positions.
References
External links
– Images of IF7
3D Chem – Chemistry, Structures, and 3D Molecules
IUMSC – Indiana University Molecular Structure Center
Stereochemistry
Molecular geometry | Pentagonal bipyramidal molecular geometry | [
"Physics",
"Chemistry"
] | 295 | [
"Molecular geometry",
"Molecules",
"Stereochemistry",
"Space",
"nan",
"Spacetime",
"Matter"
] |
13,309,886 | https://en.wikipedia.org/wiki/Reversing%3A%20Secrets%20of%20Reverse%20Engineering | Reversing: Secrets of Reverse Engineering is a textbook written by Eldad Eilam on the subject of reverse engineering software, mainly within a Microsoft Windows environment. It covers the use of debuggers and other low-level tools for working with binaries. Of particular interest is that it uses OllyDbg in examples, and is therefore one of the few practical, modern books on the subject that uses popular, real-world tools to facilitate learning. The book is designed for independent study and does not contain problem sets, but it is also used as a course book in some university classes.
The book covers several different aspects of reverse engineering, and demonstrates what can be accomplished:
How copy protection and DRM technologies can be defeated, and how they can be made stronger.
How malicious software such as worms can be analyzed and neutralized.
How to obfuscate code so that it becomes more difficult to reverse engineer.
The book also includes a detailed discussion of the legal aspects of reverse engineering, and examines some famous court cases and rulings that were related to reverse engineering.
Considering its relatively narrow subject matter, Reversing is a bestseller that has remained on Amazon.com's list of top 100 software books for several years, since its initial release.
Chapter Outline
Part I: Reversing 101.
Chapter 1: Foundations.
Chapter 2: Low-Level Software.
Chapter 3: Windows Fundamentals.
Chapter 4: Reversing Tools.
Part II: Applied Reversing.
Chapter 5: Beyond the Documentation.
Chapter 6: Deciphering File Formats.
Chapter 7: Auditing Program Binaries.
Chapter 8: Reversing Malware.
Part III: Cracking.
Chapter 9: Piracy and Copy Protection.
Chapter 10: Antireversing Techniques.
Chapter 11: Breaking Protections.
Part IV: Beyond Disassembly.
Chapter 12: Reversing .NET.
Chapter 13: Decompilation.
Appendix A: Deciphering Code Structures.
Appendix B: Understanding Compiled Arithmetic.
Appendix C: Deciphering Program Data.
Editions
Reversing: Secrets of Reverse Engineering, English, 2005. 595pp.
Reversing: 逆向工程揭密, Simplified Chinese, 2007. 598pp.
References
Software engineering books | Reversing: Secrets of Reverse Engineering | [
"Technology"
] | 458 | [
"Computing stubs",
"Computer book stubs"
] |
13,310,029 | https://en.wikipedia.org/wiki/International%20Archive%20of%20Women%20in%20Architecture | The International Archive of Women in Architecture (IAWA) was established in 1985 as a joint program of the College of Architecture and Urban Studies and the University Libraries at Virginia Tech.
Purpose
The purpose of the Archive is to document the history of women's involvement in architecture by acquiring, preserving, storing, and making available to researchers the professional papers of women architects, landscape architects, designers, architectural historians and critics, urban planners, and the records of women's architectural organizations.
Collections
The IAWA collects the papers of women who practiced at a time when there were few women in the field (i.e., before the 1950s) and to fill serious gaps in the availability of primary research materials for architectural, women's, and social history research. As of October 2006 there were over of materials in the 298 collections in the IAWA, which are housed in Virginia Tech's University Libraries' Special Collections.
As part of its mission to act as a clearinghouse of information about all women architects, past and present, the IAWA also collects and catalogs books, monographs and other publications written by or about women architects, designers, planners, etc. These publications are accessible through the Virginia Tech library's online catalog, Addison.
The IAWA began with a collecting focus on the papers of pioneering women in architecture, individuals who practiced at a time when there were few women in the field. Today, the IAWA includes materials that document multiple generations of women in architecture, providing vital primary source materials for architectural, women's, and social history research. The collections include material from women architects such as Diana Balmori, Olive Chadeayne, Doina Marilena Ciocănea, Mary Colter, L. Jane Hastings, Anna Keichline, Yasmeen Lari, Sarantsatral Ochirpureviin, Eleanore Pettersen, Berta Rahm, Trudy Rosen, Sigrid Lorenzen Rupp, Han Schröder, Anna Sokolina, Brinda Somaya, Pamela Webb, Beverly Willis, Zelma Wilson, and Liane Zimbler.
The IAWA also compiles biographical information. There is information about more than 650 women representing 48 countries and 42 states/territories in the United States available in the IAWA Biographical Database.
Some of the IAWA's resources, approximately 1200 images from 28 collections, have been scanned and are available through the VT ImageBase.
Board
The IAWA is overseen by a board of advisors that includes architects, city planners, industrial and interior designers, librarians, archivists, and faculty from around the world and the U.S. The head of Special Collections or her designee serves as the Archivist for the IAWA and sits on the Board of Advisors and the Executive Board. She prepares a report for presentation to the annual meeting held in the fall of each year at Virginia Tech's Newman Library in the President's Board Room.
Milka Bliznakov Research Prize
The Milka Bliznakov Research Prize was established in 2001 to honor IAWA founder and advisor emerita, Dr. Milka Bliznakov (1927-2010). The IAWA Center invites architects, scholars, professionals, students, and researchers to contribute research on women in architecture and related design fields. This research, in concert with the preservation efforts of the IAWA, will help fill the current void in historical knowledge about the achievements and work of women who shaped the built environment.
Past Milka Bliznakov Award and Research Prize Winners (2001-2016)
2016, Dr. Ines Moisset, "Women Architects on the Web" and Dr. Tanja Poppelreuter, "Refugee and émigré female architects before 1940"
2015, Claire Bonney Brüllman, "The Work and Life of Adrienne Gorska" and Sarah Rafson, "CARY (Chicks in Architecture Refuse to Yield)."
2014, Meredith Sattler, "Early Technological Innovation in the Systems Approach to Environmental Design: Situating Beverly Willis and Associates’ CARLA platform [Computerized Approach to Residential Land Analysis] within the developmental trajectory of Geographic Information Systems (GIS)."
2013, Robert Holton, "Natalie De Blois - The role and contribution in the design of three pivotal SOM projects completed in New York City between 1950-1960: the Lever House, the Pepsi-Cola building and the Union Carbide."
2012, Andrea J. Merrett, "Feminism in American Architecture: Organizing 1972-1975."
2011, Lindsay Nencheck, "Organizing Voices: Examining the 1974 Women in Architecture Symposium at Washington University in St. Louis."
2010, Inge Schaefer Horton, "Early Women Architects of the San Francisco Bay Area."
2009, Patrick Lee Lucas, "Sarah Hunter Kelly: Designing the House of Good Taste."
2008, Martha Alonso, Sonia Bevilacqua, and Graciela Brandariz, "Odilia Suárez: The Exemplary Trajectory of an Architect and Urbanist in Latin America."
2008, Despina Stratigakos, "A Woman’s Berlin."
2008, Milka Bliznakov Prize, Commendation, Lori Brown, feminist practices [exhibition].
2007, No prize awarded.
2006, Milka Bliznakov Prize, Commendation, Eran Ben-Joseph, Holly D. Ben-Joseph and Anne C. Dodge "Against All Odds: MIT's Pioneering Women of Landscape Architecture."
2005, Carmen Alonso Espegel, "Heroines of the Space."
2005, Isabel Bauer, "Architekturstudentinnen der Weimarer Republik."
2005, Bobbye Tigerman, "'I Am Not a Decorator' Florence Knoll, the Knoll Planning Unit, and the Making of the Modern Office."
2005, Milka Bliznakov Honorarium, Joseph Chuo Wang.
2004, Dorrita Hannah, "un-housing performance: The Heart of PQ."
2004, Janet Stoyel, "Sonicloth."
2003, Barbara Nadel,"Security Design: Achieving Transparency in Civic Architecture."
2003, Ozlem Erkarslan, "Turkish Women Architects in the Late Ottoman and Early Republican Era 1908-1960."
2002, Elizabeth Birmingham, "Searching for Marion Mahony: Gender, Erasure, and the Discourse of Architectural Studies."
2001, Claire Bonney, "The Work and Life of Adrienne Gorska."
References
External links
IAWA Guide to Collections
Special Collections Reading Room
History of the activities
IAWA Center in the College of Architecture & Urban Studies at Virginia Tech
Arts organizations based in Virginia
Architecture organizations based in the United States
Interior design
Industrial design
Urban planning organizations
Women's organizations based in the United States
Arts organizations established in 1985
1985 establishments in Virginia | International Archive of Women in Architecture | [
"Engineering"
] | 1,401 | [
"Industrial design",
"Design engineering",
"Design"
] |
13,310,687 | https://en.wikipedia.org/wiki/Convective%20momentum%20transport | Convective momentum transport usually describes a vertical flux of the momentum of horizontal winds or currents. That momentum is carried like a non-conserved flow tracer by vertical air motions in convection.
In the atmosphere, convective momentum transport by small but vigorous (cumulus type) cloudy updrafts can be understood as an interplay of three main mechanisms:
Vertical advection of ambient momentum due to subsidence of environmental air that compensates the in-cloud upward mass flux,
Detrainment of in-cloud momentum where updrafts stop ascending,
Accelerations by the pressure gradient force around clouds whose inner momentum differs from their environment.
The net effect of these interacting mechanisms depends on the detailed configuration or 'organization' of the convective cloud or storm system.
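In numerical weather and climate models, the combined effect of these mechanisms is commonly represented with a bulk mass-flux parameterization. The expression below is only a schematic sketch of that approach; the notation is illustrative, and individual schemes treat entrainment, detrainment and the pressure-gradient term differently.
```latex
% Schematic bulk mass-flux form of the convective momentum tendency
% (illustrative only; details differ between parameterization schemes).
\left(\frac{\partial \bar{u}}{\partial t}\right)_{\mathrm{CMT}}
  \approx -\frac{1}{\rho}\,\frac{\partial}{\partial z}
  \Bigl[\, M_{c}\,\bigl(u_{c}-\bar{u}\bigr) \Bigr]
```
Here \bar{u} is the grid-mean horizontal wind, u_c the in-cloud (updraft) momentum, M_c the convective mass flux and \rho the air density; an analogous expression applies to the meridional wind component.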
See also
momentum
vertical motion
References
Tropical meteorology
Continuum mechanics | Convective momentum transport | [
"Physics"
] | 171 | [
"Classical mechanics stubs",
"Classical mechanics",
"Continuum mechanics"
] |
13,311,117 | https://en.wikipedia.org/wiki/EIAJ%20MTS | EIAJ MTS is a multichannel television sound standard created by the EIAJ.
Bilingual and stereo sound television programs started being broadcast in Japan in October 1978 using an "FM-FM" system originally developed by the NHK Technical Research Labs during 1962–1969. This system was modified and standardised by the EIAJ in January 1979. Television stations in Japan with capability for bilingual and stereo sound transmissions used the callsign JO**-TAM, where "TAM" denotes their audio FM multiplex sub-carrier designation, until digital switchover to ISDB-T in 2010–2012 which eventually rendered EIAJ MTS obsolete.
The original System M TV standard has a monaural FM transmission at 4.5 MHz. For Japanese multichannel television sound a second channel, or sub-channel, is added to the original signal by using an FM sub-carrier at twice the line frequency (Fh, or 15,734 Hz). In order to identify the different modes (mono, stereo, or dual sound), a pilot tone is also added on an AM carrier at 3.5 times the line frequency. The pilot tone frequencies are 982.5 Hz for stereo and 922.5 Hz for dual sound. Contrary to Zweikanalton, these pilot tones are not coupled to the line frequency but were instead chosen to allow use of filters already employed in the Pocket Bell pager system.
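Because the multiplex carriers are defined as simple multiples of the line frequency, they are easy to compute. The sketch below assumes the System M line frequency of about 15,734.27 Hz; the variable names and printed output are purely illustrative.
```python
# EIAJ FM-FM multiplex carrier frequencies derived from the System M
# horizontal line frequency (all values in hertz).

F_H = 15_734.27            # System M line frequency (approximate)

sub_carrier = 2.0 * F_H    # FM sub-carrier for the second audio channel (~31.5 kHz)
pilot_carrier = 3.5 * F_H  # AM pilot carrier used for mode identification (~55.1 kHz)

# Pilot tones carried on the pilot carrier to signal the broadcast mode.
PILOT_TONES_HZ = {"stereo": 982.5, "dual sound": 922.5}

print(f"sub-carrier:   {sub_carrier:9.1f} Hz")
print(f"pilot carrier: {pilot_carrier:9.1f} Hz")
for mode, tone in PILOT_TONES_HZ.items():
    print(f"  {mode}: pilot tone {tone} Hz")
```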
See also
Multichannel Television Sound (3 additional audio channels on 4.5 MHz audio carriers)
NICAM
Zweikanalton A2
References
Broadcast engineering
Television technology
Sound | EIAJ MTS | [
"Technology",
"Engineering"
] | 321 | [
"Information and communications technology",
"Broadcast engineering",
"Electronic engineering",
"Television technology"
] |
13,311,662 | https://en.wikipedia.org/wiki/Oripavine | Oripavine is an opioid and the major metabolite of thebaine. It is the precursor to the semi-synthetic compounds etorphine and buprenorphine. Although this chemical compound has analgesic potency comparable to morphine, it is not used clinically due to severe adverse effects and a low therapeutic index. Being a precursor to a series of extremely strong opioids, oripavine is a controlled substance in some jurisdictions.
Pharmacological properties
Oripavine possesses an analgesic potency comparable to morphine; however, it is not clinically useful due to severe toxicity and low therapeutic index. In both mice and rats, toxic doses caused tonic-clonic seizures followed by death, similar to thebaine. Oripavine has a potential for dependence which is significantly greater than that of thebaine but slightly less than that of morphine.
Bridged derivatives (The Bentley compounds)
Of much greater relevance are the properties of the orvinols, a large family of semi-synthetic oripavine derivatives classically synthesized by the Diels-Alder reaction of thebaine with an appropriate dienophile followed by 3-O-demethylation to the corresponding bridged oripavine. These compounds were developed by the group led by K. W. Bentley in the 1960s, and these Bentley compounds represent the first series of "super-potent" μ-opioid agonists, with some compounds in the series being over 10,000 times the potency of morphine as an analgesic. The simple bridged oripavine parent compound 6,14-endoethenotetrahydrooripavine is already 40 times the potency of morphine, but adding a branched tertiary alcohol substituent on the C7 position results in a wide range of highly potent compounds.
Other notable derivatives then result from further modification of this template, with saturation of the 7,8-double bond of etorphine resulting in the even more potent dihydroetorphine (up to 12,000× potency of morphine) and acetylation of the 3-hydroxy group of etorphine resulting in acetorphine (8700× morphine)—although while the isopentyl homologue of etorphine is nearly three times more potent, its 7,8-dihydro and 3-acetyl derivatives are less potent than the corresponding derivatives of etorphine at 11,000 and 1300 times morphine, respectively. Replacing the N-methyl group with cyclopropylmethyl results in opioid antagonists such as diprenorphine (M5050, which is used as an antidote to reverse the effects of etorphine, M99), and partial agonists such as buprenorphine, which is widely used in the treatment of opioid addiction.
Legal status
Due to the relative ease of synthetic modification of oripavine to produce other narcotics (by either direct or indirect routes via thebaine), the World Health Organization's Expert Committee on Drug Dependence recommended in 2003 that oripavine be controlled under Schedule I of the 1961 Single Convention on Narcotic Drugs. On March 14, 2007, the United Nations Commission on Narcotic Drugs formally decided to accept these recommendations, and placed oripavine in the Schedule I.
Until recently, oripavine was a Schedule II drug in the United States by default as a thebaine derivative, although it was not explicitly listed. However, as a member state under the 1961 Single Convention on Narcotic Drugs, the US was obliged to specifically control the substance under the Controlled Substances Act following its international control by the UN Commission on Narcotic Drugs. On September 24, 2007, the Drug Enforcement Administration formally added oripavine to Schedule II.
Under the Controlled Substances Act of 1970, oripavine has an ACSCN of 9330 and is subject to an annual manufacturing quota.
Biosynthesis
Oripavine is part of the biosynthetic pathway of the morphinan alkaloids, in which thebaine and morphine are also involved.
References
4,5-Epoxymorphinans
Glycine receptor antagonists
Opiates
Oripavines
Phenol ethers
Neurotoxins | Oripavine | [
"Chemistry"
] | 893 | [
"Neurochemistry",
"Neurotoxins"
] |
13,311,819 | https://en.wikipedia.org/wiki/Therapy | A therapy or medical treatment is the attempted remediation of a health problem, usually following a medical diagnosis. Both words, treatment and therapy, are often abbreviated tx or Tx.
As a rule, each therapy has indications and contraindications. There are many different types of therapy. Not all therapies are effective. Many therapies can produce unwanted adverse effects.
Treatment and therapy are often synonymous, especially in the usage of health professionals. However, in the context of mental health, the term therapy may refer specifically to psychotherapy.
Semantic field
The words care, therapy, treatment, and intervention overlap in a semantic field, and thus they can be synonymous depending on context. Moving rightward through that order, the connotative level of holism decreases and the level of specificity (to concrete instances) increases. Thus, in health-care contexts (where its senses are always noncount), the word care tends to imply a broad idea of everything done to protect or improve someone's health (for example, as in the terms preventive care and primary care, which connote ongoing action), although it sometimes implies a narrower idea (for example, in the simplest cases of wound care or postanesthesia care, a few particular steps are sufficient, and the patient's interaction with the provider of such care is soon finished). In contrast, the word intervention tends to be specific and concrete, and thus the word is often countable; for example, one instance of cardiac catheterization is one intervention performed, and coronary care (noncount) can require a series of interventions (count). At the extreme, the piling on of such countable interventions amounts to interventionism, a flawed model of care lacking holistic circumspection—merely treating discrete problems (in billable increments) rather than maintaining health. Therapy and treatment, in the middle of the semantic field, can connote either the holism of care or the discreteness of intervention, with context conveying the intent in each use. Accordingly, they can be used in both noncount and count senses (for example, therapy for chronic kidney disease can involve several dialysis treatments per week).
The words aceology and iamatology are obscure and obsolete synonyms referring to the study of therapies.
The English word therapy comes via Latin therapīa from Greek θεραπεία (therapeía) and literally means "curing" or "healing". The Latin form therapia survives in English as a somewhat archaic doublet of the word therapy.
Types of therapies
By chronology, priority, or intensity
Levels of care
Levels of care classify health care into categories of chronology, priority, or intensity, as follows:
Urgent care handles health issues that need to be handled today but are not necessarily emergencies; the urgent care venue can send a patient to the emergency care level if it turns out to be needed.
In the United States (and possibly various other countries), urgent care centers also serve another function as their other main purpose: U.S. primary care practices have evolved in recent decades into a configuration whereby urgent care centers provide portions of primary care that cannot wait a month, because getting an appointment with the primary care practitioner is often subject to a waitlist of 2 to 8 weeks.
Emergency care handles medical emergencies and is a first point of contact or intake for less serious problems, which can be referred to other levels of care as appropriate.
Intensive care, also called critical care, is care for extremely ill or injured patients. It thus requires high resource intensity, knowledge, and skill, as well as quick decision making.
Ambulatory care is care provided on an outpatient basis. Typically patients can walk into and out of the clinic under their own power (hence "ambulatory"), usually on the same day.
Home care is care at home, including care from providers (such as physicians, nurses, and home health aides) making house calls, care from caregivers such as family members, and patient self-care.
Primary care is meant to be the main kind of care in general, and ideally a medical home that unifies care across referred providers.
Secondary care is care provided by medical specialists and other health professionals who generally do not have first contact with patients, for example, cardiologists, urologists and dermatologists. A patient reaches secondary care as a next step from primary care, typically by provider referral although sometimes by patient self-initiative. According to a systematic review, areas for development in secondary care, from the patients' viewpoint, may be classified into four domains that should usefully guide future improvement of this stage of care: “barriers to care, communication, coordination, and relationships and personal value”.
Tertiary care is specialized consultative care, usually for inpatients and on referral from a primary or secondary health professional, in a facility that has personnel and facilities for advanced medical investigation and treatment, such as a tertiary referral hospital.
Follow-up care is additional care during or after convalescence. Aftercare is generally synonymous with follow-up care.
End-of-life care is care near the end of one's life. It often includes the following:
Palliative care is supportive care, most especially (but not necessarily) near the end of life.
Hospice care is palliative care very near the end of life when cure is very unlikely. Its main goal is comfort, both physical and mental.
Lines of therapy
Treatment decisions often follow formal or informal algorithmic guidelines. Treatment options can often be ranked or prioritized into lines of therapy: first-line therapy, second-line therapy, third-line therapy, and so on. First-line therapy (sometimes referred to as induction therapy, primary therapy, or front-line therapy) is the first therapy that will be tried. Its priority over other options is usually either: (1) formally recommended on the basis of clinical trial evidence for its best-available combination of efficacy, safety, and tolerability or (2) chosen based on the clinical experience of the physician. If a first-line therapy either fails to resolve the issue or produces intolerable side effects, additional (second-line) therapies may be substituted or added to the treatment regimen, followed by third-line therapies, and so on.
An example of a context in which the formalization of treatment algorithms and the ranking of lines of therapy is very extensive is chemotherapy regimens. Because of the great difficulty in successfully treating some forms of cancer, one line after another may be tried. In oncology the count of therapy lines may reach 10 or even 20.
Often multiple therapies may be tried simultaneously (combination therapy or polytherapy). Thus combination chemotherapy is also called polychemotherapy, whereas chemotherapy with one agent at a time is called single-agent therapy or monotherapy.
Adjuvant therapy is therapy given in addition to the primary, main, or initial treatment, but simultaneously (as opposed to second-line therapy). Neoadjuvant therapy is therapy that is begun before the main therapy. Thus one can consider surgical excision of a tumor as the first-line therapy for a certain type and stage of cancer even though radiotherapy is used before it; the radiotherapy is neoadjuvant (chronologically first but not primary in the sense of the main event). Premedication is conceptually not far from this, but the words are not interchangeable; cytotoxic drugs to put a tumor "on the ropes" before surgery delivers the "knockout punch" are called neoadjuvant chemotherapy, not premedication, whereas things like anesthetics or prophylactic antibiotics before dental surgery are called premedication.
Step therapy or stepladder therapy is a specific type of prioritization by lines of therapy. It is controversial in American health care because unlike conventional decision-making about what constitutes first-line, second-line, and third-line therapy, which in the U.S. reflects safety and efficacy first and cost only according to the patient's wishes, step therapy attempts to mix cost containment by someone other than the patient (third-party payers) into the algorithm. Therapy freedom and the negotiation between individual and group rights are involved.
By intent
By therapy composition
Treatments can be classified according to the method of treatment:
By matter
by drugs: pharmacotherapy, chemotherapy (also, medical therapy often means specifically pharmacotherapy)
by medical devices: implantation
cardiac resynchronization therapy
by specific molecules: molecular therapy (although most drugs are specific molecules, molecular medicine refers in particular to medicine relying on molecular biology)
by specific biomolecular targets: targeted therapy
molecular chaperone therapy
by chelation: chelation therapy
by specific chemical elements:
by metals:
by heavy metals:
by gold: chrysotherapy (aurotherapy)
by platinum-containing drugs: platin therapy
by biometals
by lithium: lithium therapy
by potassium: potassium supplementation
by magnesium: magnesium supplementation
by chromium: chromium supplementation; phonemic neurological hypochromium therapy
by copper: copper supplementation
by nonmetals:
by diatomic oxygen: oxygen therapy, hyperbaric oxygen therapy (hyperbaric medicine)
transdermal continuous oxygen therapy
by triatomic oxygen (ozone): ozone therapy
by fluoride: fluoride therapy
by other gases: medical gas therapy
by water:
hydrotherapy
aquatic therapy
rehydration therapy
oral rehydration therapy
water cure (therapy)
by biological materials (biogenic substances, biomolecules, biotic materials, natural products), including their synthetic equivalents: biotherapy
by whole organisms
by viruses: virotherapy
by bacteriophages: phage therapy
by animal interaction: see animal interaction section
by constituents or products of organisms
by plant parts or extracts (but many drugs are derived from plants, even when the term phytotherapy is not used)
scientific type: phytotherapy
traditional (prescientific) type: herbalism
by animal parts: quackery involving shark fins, tiger parts, and so on, often driving threat or endangerment of species
by genes: gene therapy
gene therapy for epilepsy
gene therapy for osteoarthritis
gene therapy for color blindness
gene therapy of the human retina
gene therapy in Parkinson's disease
by epigenetics: epigenetic therapy
by proteins: protein therapy (but many drugs are proteins despite not being called protein therapy)
by enzymes: enzyme replacement therapy
by hormones: hormone therapy
hormonal therapy (oncology)
hormone replacement therapy
estrogen replacement therapy
androgen replacement therapy
hormone replacement therapy (menopause)
transgender hormone therapy
feminizing hormone therapy
masculinizing hormone therapy
antihormone therapy
androgen deprivation therapy
by whole cells: cell therapy (cytotherapy)
by stem cells: stem cell therapy
by immune cells: see immune system products below
by immune system products: immunotherapy, host modulatory therapy
by immune cells:
T-cell vaccination
cell transfer therapy
autologous immune enhancement therapy
TK cell therapy
by humoral immune factors: antibody therapy
by whole serum: serotherapy, including antiserum therapy
by immunoglobulins: immunoglobulin therapy
by monoclonal antibodies: monoclonal antibody therapy
by urine: urine therapy (some scientific forms; many prescientific or pseudoscientific forms)
by food and dietary choices:
medical nutrition therapy
grape therapy (quackery)
by salts (but many drugs are the salts of organic acids, even when drug therapy is not called by names reflecting that)
by salts in the air
by natural dry salt air: "taking the cure" in desert locales (especially common in prescientific medicine; for example, one 19th-century way to treat tuberculosis)
by artificial dry salt air:
low-humidity forms of speleotherapy
negative air ionization therapy
by moist salt air:
by natural moist salt air: seaside cure (especially common in prescientific medicine)
by artificial moist salt air: water vapor forms of speleotherapy
by salts in the water
by mineral water: spa cure ("taking the waters") (especially common in prescientific medicine)
by seawater: seaside cure (especially common in prescientific medicine)
by aroma: aromatherapy
by other materials with mechanism of action unknown
by occlusion with duct tape: duct tape occlusion therapy
By energy
by electric energy as electric current: electrotherapy, electroconvulsive therapy
Transcranial magnetic stimulation
Vagus nerve stimulation
by magnetic energy:
magnet therapy
pulsed electromagnetic field therapy
magnetic resonance therapy
by electromagnetic radiation (EMR):
by light: light therapy (phototherapy)
ultraviolet light therapy
PUVA therapy
photodynamic therapy
photothermal therapy
cytoluminescent therapy
blood irradiation therapy
by darkness: dark therapy
by lasers: laser therapy
low level laser therapy
by gamma rays: radiosurgery
Gamma Knife radiosurgery
stereotactic radiation therapy
cobalt therapy
by radiation generally: radiation therapy (radiotherapy)
intraoperative radiation therapy
by EMR particles:
particle therapy
proton therapy
electron therapy
intraoperative electron radiation therapy
Auger therapy
neutron therapy
fast neutron therapy
neutron capture therapy of cancer
by radioisotopes emitting EMR:
by nuclear medicine
by brachytherapy
quackery type: electromagnetic therapy (alternative medicine)
by mechanical: manual therapy as massotherapy and therapy by exercise as in physical therapy
inversion therapy
by sound:
by ultrasound:
ultrasonic lithotripsy
extracorporeal shockwave therapy
sonodynamic therapy
by music: music therapy
by temperature
by heat: heat therapy (thermotherapy)
by moderately elevated ambient temperatures: hyperthermia therapy
by dry warm surroundings: Waon therapy
by dry or humid warm surroundings: sauna, including infrared sauna, for sweat therapy
by cold:
by extreme cold to specific tissue volumes: cryotherapy
by ice and compression: cold compression therapy
by ambient cold:
hypothermia therapy for neonatal encephalopathy (in newborns)
targeted temperature management (therapeutic hypothermia, protective hypothermia)
by hot and cold alternation: contrast bath therapy
By procedure and human interaction
Surgery
by counseling, such as psychotherapy (see also: list of psychotherapies)
systemic therapy
by group psychotherapy
by cognitive behavioral therapy
by cognitive therapy
by behaviour therapy
by dialectical behavior therapy
by cognitive emotional behavioral therapy
by cognitive rehabilitation therapy
by family therapy
by education
by psychoeducation
by information therapy
by speech therapy, physical therapy, occupational therapy, vision therapy, massage therapy, chiropractic or acupuncture
by lifestyle modifications, such as avoiding unhealthy food or maintaining a predictable sleep schedule
by coaching
By animal interaction
by pets, assistance animals, or working animals: animal-assisted therapy
by horses: equine therapy, hippotherapy
by dogs: pet therapy with therapy dogs, including grief therapy dogs
by cats: pet therapy with therapy cats
by fish: ichthyotherapy (wading with fish), aquarium therapy (watching fish)
by maggots: maggot therapy
by worms:
by internal worms: helminthic therapy
by leeches: leech therapy
by immersion: animal bath
By meditation
by mindfulness: mindfulness-based cognitive therapy
By reading
by bibliotherapy
By creativity
by expression: expressive therapy
by writing: writing therapy
journal therapy
by play: play therapy
by art: art therapy
sensory art therapy
comic book therapy
by gardening: horticultural therapy
by dance: dance therapy
by drama: drama therapy
by recreation: recreational therapy
by music: music therapy
By sleeping and waking
by deep sleep: deep sleep therapy
by sleep deprivation: wake therapy
See also
Biophilia hypothesis
Classification of Pharmaco-Therapeutic Referrals
Compassion-focused therapy
Emotionally focused therapy
Greyhound therapy
Inverse benefit law
List of therapies
Mature minor doctrine
Medication
Medicine
Nutraceutical
Prevention
Psychedelic therapy
Therapeutic inertia
Therapeutic nihilism, the idea that treatment is useless
Treatment as prevention
References
External links
"Chapter Nine of the Book of Medicine Dedicated to Mansur, with the Commentary of Sillanus de Nigris" is a Latin book by Rhazes, from 1483, that is known for its ninth chapter, which is about therapeutics
Therapy
Drug discovery
Health policy
Medicinal chemistry
Pharmaceutical sciences | Therapy | [
"Chemistry",
"Biology"
] | 3,361 | [
"Life sciences industry",
"Drug discovery",
"nan",
"Medicinal chemistry",
"Biochemistry"
] |
13,312,106 | https://en.wikipedia.org/wiki/Yo-yo%20de-spin | A yo-yo de-spin mechanism is a device used to reduce the spin of satellites, typically soon after launch. It consists of two lengths of cable with weights on the ends. The cables are wrapped around the final stage and/or satellite, in the manner of a double yo-yo. When the weights are released, the spin of the rocket flings them away from the spin axis. This transfers enough angular momentum to the weights to reduce the spin of the satellite to the desired value. Subsequently, the weights are often released.
De-spin is needed since some final stages are spin-stabilized, and require fairly rapid rotation (now typically 30-60 rpm; some early missions, such as Pioneer, rotated at over 600 rpm) to remain stable during firing. (See, for example, the Star 48, a solid fuel rocket motor.) After firing, the satellite cannot be simply released, since such a spin rate is beyond the capability of the satellite's attitude control. Therefore, after rocket firing but before satellite release, the yo-yo weights are used to reduce the spin rates to something the satellite can cope with in normal operation (often 2-5 RPM). Yo-yo de-spin systems are commonly used on sub-orbital sounding rocket flights, as the vehicles are spin stabilized through ascent and have minimal flight time for roll cancellation using the payload's attitude control system.
As an example of yo-yo de-spin, on the Dawn spacecraft a pair of small weights on long cables reduced the initial spin rate of the spacecraft from 46 RPM to 3 RPM in the opposite direction. The relatively small weights have a large effect since they are far from the spin axis, and their effect increases as the square of the length of the cables.
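The figures quoted for Dawn can be understood from a standard conservation-of-angular-momentum analysis of the yo-yo mechanism. The sketch below uses the classical textbook result for cords unwinding tangentially from a spinning body; the numerical values are illustrative placeholders, not Dawn's actual mass properties.
```python
import math

def despin_ratio(I, m, R, cable_length):
    """Ratio of final to initial spin rate for a tangential yo-yo release.

    I            -- spacecraft moment of inertia about the spin axis (kg*m^2)
    m            -- combined mass of both end weights (kg)
    R            -- radius at which the cables are wrapped (m)
    cable_length -- length of each cable (m)

    Classical result: w/w0 = (c - (l/R)**2) / (c + (l/R)**2), c = 1 + I/(m*R**2).
    A negative value means the spin direction reverses, as happened on Dawn.
    """
    c = 1.0 + I / (m * R**2)
    phi_sq = (cable_length / R) ** 2
    return (c - phi_sq) / (c + phi_sq)

def full_despin_length(I, m, R):
    """Cable length that reduces the spin exactly to zero (tangential release)."""
    return R * math.sqrt(1.0 + I / (m * R**2))

# Illustrative numbers only (not Dawn's real properties).
I, m, R = 420.0, 1.5, 0.7                       # kg*m^2, kg, m
l_zero = full_despin_length(I, m, R)
print(f"cable length for full despin: {l_zero:.2f} m")
print(f"spin ratio with 10% longer cables: {despin_ratio(I, m, R, 1.1 * l_zero):+.3f}")
```
Because the required cable length grows only as the square root of the spacecraft's moment of inertia divided by the tip mass, even light weights on sufficiently long cables can remove essentially all of the spin.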
Yo-yo de-spin was invented, built, and tested at Caltech's Jet Propulsion Laboratory.
Yo-yo hardware can contribute to the space debris problem on orbital missions, but this is not a problem when used on the upper stages of earth escape missions such as Dawn, as the cables and weights are also on an escape trajectory.
Yo-weight
Sometimes only a single weight and cable is used; such an arrangement is colloquially called a "yo-weight." When the final stage is a solid rocket, the stage may continue to thrust slightly even after spacecraft release, because residual fuel and insulation in the motor casing continue to outgas even without significant combustion. In a few cases the spent stage has rammed the payload; for example, in the fourth launch attempt of Ohsumi, the third stage of the Lambda 4S rocket collided with the fourth stage. Releasing a single weight without a matching counterpart causes the stage to tumble. The tumbling motion prevents residual thrust from accumulating in a single direction; instead, the effect of the exhaust is spread over a wide range of directions and averages out to a much lower value.
In March 2009, a leftover yo-weight caused a scare when it came too close to the International Space Station.
See also
Attitude dynamics and control
Momentum exchange tether
Space debris
Further reading
Cornille, H. J., Jr., A Method of Accurately Reducing the Spin Rate of a Rotating Spacecraft, NASA Technical Note D- 1420, October 1962.
Fedor, J. V., Analytical Theory of the Stretch Yo-Yo for De-Spin of Satellites, NASA Technical Note D-1676, April 1963.
Fedor, J. V., Theory and Design Curves for a Yo-Yo De-Spin Mechanism for Satellites, NASA Technical Note D-708, August 1961.
References
Spacecraft propulsion
Spacecraft components
Articles containing video clips
Spacecraft design | Yo-yo de-spin | [
"Engineering"
] | 736 | [
"Spacecraft design",
"Design",
"Aerospace engineering"
] |
13,312,171 | https://en.wikipedia.org/wiki/Association%20of%20Registered%20Graphic%20Designers | The Association of Registered Graphic Designers (RGD; formerly ARGD/ON) is a non-profit, self-regulatory professional design association with over 3,000 members. It serves graphic design professionals, managers, educators and students. Created in 1996 by an Act of the Legislative Assembly of Ontario (Bill Pr56), the Association is Canada's only accredited body of graphic designers with a legislated title and the second such accredited body of graphic designers in the world. RGD certifies graphic designers and promotes knowledge sharing, continuous learning, research, advocacy and mentorship.
Advocacy
RGD works to establish professional standards and innovative thinking within the graphic design industry. The association assumes an advocacy role for best practices for both graphic designers and the clients they work with. They focus on issues such as spec work and crowdsourcing, accessibility, sustainability, salaries and billing practices, pro bono work and internship guidelines.
RGD advocacy initiatives include:
Supporting, defending and maintaining policies
Promoting measures that broadly benefit members and the industry
Increasing public awareness and disseminating information about industry best practices and the value of working with a Registered Graphic Designer (RGD)
Arguing in favour of a new idea
Speaking out on issues of concern
Mediating, coordinating, clarifying and advancing a particular point of view
Intervening with others on behalf of the profession
History
In 1956, Toronto-based designers Frank Davies, John Gibson, Frank Newfeld and Sam Smart formed the Society of Typographic Designers of Canada (TDC). The TDC was later renamed the Society of Graphic Designers of Canada (GDC) to reflect the wider interests of its members.
By 1984 many other design disciplines such as Architecture and Interior Design had been given Acts in Provincial Legislatures so that their respective associations could govern and grant their members exclusive professional designations. RGD's founders recognized the need to align Graphic Design with other design professionals. To ensure Graphic Design could also advance as an acknowledged profession the Association's founders decided to incorporate the Association of Registered Graphic Designers (RGD).
On April 25, 1996, Bill Pr56 was passed and Royal Assent was given to An Act Respecting The Association of Registered Graphic Designers by the Legislative Assembly of Ontario. The bill was sponsored by Mrs. Margaret Marland, Member of Provincial Parliament, and signed by the Honourable Hal Jackman C.M., O.Ont., O.ST.J., B.A., L.L.B., L.L.D., Lieutenant-Governor of the Province of Ontario.
In 1999 a separate Examination Board was established to administer the Registered Graphic Designers Qualification Examination, now referred to as the Certification Process for RGD.
Founders
Pauline Jarworski
Michael Large
Jamie Lees
Ivy Li RGD Emeritus
Helen Mah FGDC
Rod Nash RGD Emeritus
Albert Kai-Wing Ng O.Ont., RGD, FGDC
Rene Schoepflin RGD Emeritus
Robert Smith RGD
Philip Sung RGD Emeritus
Membership
In order to obtain the Registered Graphic Designer (RGD) designation, designers must complete a Certification Process that includes an application to determine eligibility, a multiple-choice online test, and a virtual portfolio interview. The RGD designation signifies knowledge, experience and ethical practice, guaranteeing that a designer is professionally competent in the areas of accessibility, business, design principles, research and ethics.
RGD offers various forms of membership for professional practitioners, managers, educators and students in graphic design, and for persons in allied professions.
Conferences
RGD organizes three annual conferences: a two-day design conference called DesignThinkers, a one-day career development conference for students and emerging designers called Creative Directions, and a one-day Design Educators Conference.
Publications
RGD has published three editions of The Business of Graphic Design: The RGD Professional Handbook.
It has also published AccessAbility: A Practical Handbook on Accessible Graphic Design and publishes a biennial national survey of graphic design salaries & billing practices.
Related organizations
Société des designers graphiques du Québec (SDGQ)
AIGA
Design Council
Icograda
References
External links
The Association of Registered Graphic Designers (RGD) Official Web Site
DesignThinkers Official Web Site
CreativeEarners: National Survey of Salaries & Billing Practices in the Communication Design Industry
Accessibility
Graphic design
Design institutions
Communication design
Professional associations based in Canada
Organizations based in Ontario
Arts organizations established in 1996
1996 establishments in Ontario | Association of Registered Graphic Designers | [
"Engineering"
] | 911 | [
"Design",
"Communication design",
"Design institutions"
] |
8,660,043 | https://en.wikipedia.org/wiki/Undark | Undark was a trade name for luminous paint made with a mixture of radioactive radium and zinc sulfide, as produced by the U.S. Radium Corporation between 1917 and 1938. It was used primarily in radium dials for watches and clocks. The people working in the industry who applied the radioactive paint became known as the Radium Girls because many of them became ill and some died from exposure to the radiation emitted by the radium contained within the product. The product was the direct cause of radium jaw in the dial painters. Undark was also available as a kit for general consumer use and marketed as glow-in-the-dark paint.
Similar products
Mixtures similar to Undark, consisting of radium and zinc sulfide, were used by other companies. Trade names include:
Luna, used by the Radium Dial Company, a division of Standard Chemical Company
Marvelite, used by Cold Light Manufacturing Company (a subsidiary of the Radium Company of Colorado)
See also
Self-powered lighting
Further reading
Clark, Claudia. (1987). Radium Girls: Women and Industrial Health Reform, 1910-1935. University of North Carolina Press. .
Ross Mullner. (1999) Deadly Glow. The Radium Dial Worker Tragedy. American Public Health Association. .
National Council on Radiation Protection and Measurements. "Radiation Exposure from Consumer Products and Miscellaneous Sources. NCRP Report No. 56. 1977.
Scientific American (Macklis RM, The great radium scandal. Sci.Am. 1993 Aug: 269(2):94-99)
External links
Roger Russel - Radium Dials
Damninteresting.com - Undark and the Radium Girls
orau.org - Radioluminescent paint
orau.org - Photo gallery of radioluminescent items
New York Times - "A Glow in the Dark, and a Lesson in Scientific Peril", Denise Grady, October 6, 1998
Luminescence
Radium
Paints
Brand name materials | Undark | [
"Chemistry"
] | 400 | [
"Paints",
"Coatings",
"Molecular physics",
"Luminescence"
] |
8,660,325 | https://en.wikipedia.org/wiki/Donald%20Leroy%20Truesdell | Donald Leroy Truesdell (name changed from Truesdale to Truesdell on 25 July 1942) (August 8, 1906 – September 21, 1993) was a United States Marine Corps corporal who received the Medal of Honor for actions during the Occupation of Nicaragua. He attempted to throw away a rifle grenade at the cost of his right hand. He later obtained the rank of chief warrant officer. He was later given a posthumous memorial by the South Carolina General Assembly on May 19, 2004.
Military career
Truesdale first enlisted with the Marines in November 1924 as a private. At the time of his Medal of Honor action, Truesdale was simultaneously a lieutenant in the Nicaraguan native army. Despite losing his right forearm, he continued to serve with the Marine Corps until his retirement as a chief warrant officer in May 1946.
Medal of Honor citation
The President of the United States of America, in the name of Congress, takes pleasure in presenting the Medal of Honor to Corporal Donald L. Truesdale, USMC, for service in Nicaragua as set forth in the following:
Citation:
For extraordinary heroism in the line of his profession above and beyond the call of duty at the risk of his life, as second in command of a Guardia Nacional Patrol on 24 April 1932, engaged, at the time, in active operations in the field against armed bandit forces in the vicinity of Constancia, near Coco River, Department of Jinotega, Northern Nicaragua. While the patrol was in formation on the trail searching for a bandit group, with which contact had just previously been had, a rifle grenade fell from its carrier, carried by a member of the patrol, and struck a rock, igniting the detonator. Several men of the patrol were in close proximity to the grenade at the time. Corporal Truesdale, who was several yards away at the time, could easily have sought cover and safety for himself but instead, knowing full well the grenade would explode within two or three seconds, and with utter disregard for his own personal safety, and at the risk of his own life, rushed for the grenade, grasped it in his right hand and attempted to throw it away from the patrol before it exploded. The grenade exploded in his hand, blowing it off and inflicting multiple serious wounds on his body. Corporal Truesdale, by his actions, took the full shock of the explosion of the grenade upon himself, thereby saving the lives of, or serious injury to, his comrades in arms. His actions were worthy of the highest traditions of the profession of arms.
See also
List of Medal of Honor recipients
References
External links
1906 births
1993 deaths
American amputees
United States Marine Corps personnel of World War II
United States Marine Corps Medal of Honor recipients
People from Lugoff, South Carolina
Military personnel from South Carolina
United States Marines
United States Marine Corps officers
Occupation of Nicaragua recipients of the Medal of Honor
Explosion survivors | Donald Leroy Truesdell | [
"Chemistry"
] | 579 | [
"Explosion survivors",
"Explosions"
] |
8,660,513 | https://en.wikipedia.org/wiki/American%20Association%20of%20Physics%20Teachers | The American Association of Physics Teachers (AAPT) was founded in 1930 for the purpose of "dissemination of knowledge of physics, particularly by way of teaching." There are more than 10,000 members in over 30 countries. AAPT publications include two peer-reviewed journals, the American Journal of Physics and The Physics Teacher. The association has two annual National Meetings (winter and summer) and has regional sections with their own meetings and organization. The association also offers grants and awards for physics educators, including the Richtmyer Memorial Award and programs and contests for physics educators and students. It is headquartered at the American Center for Physics in College Park, Maryland.
History
The American Association of Physics Teachers was founded on December 31, 1930, when forty-five physicists held a meeting during the joint APS-AAAS meeting in Cleveland specifically for that purpose.
The AAPT became a founding member of the American Institute of Physics after the other founding members were convinced of the stability of the AAPT itself after a new constitution for the AAPT was agreed upon.
Contests
The AAPT sponsors a number of competitions. The Physics Bowl, Six Flags' roller coaster contest, and the US Physics Team are just a few. The US physics team is determined by two preliminary exams and a week and a half long "boot camp". Each year, five members are selected to compete against dozens of countries in the International Physics Olympiad (IPHO).
Publications
The Physics Teacher
American Journal of Physics
See also
American Institute of Physics
Oersted Medal
Physics outreach
References
External links
American Association of Physics Teachers web page
AAPT sponsored events
Archival collections
Finding Aid for the American Association of Physics Teachers, South Atlantic Coast Section Records at the University of North Carolina at Greensboro
Niels Bohr Library & Archives
American Association of Physics Teachers Richard M. Sutton records, 1934-1949
American Association of Physics Teachers miscellaneous publications, 1934-2013
American Association of Physics Teachers records of David Locke Webster, 1930-1958
American Journal of Physics editor's reports, 1967-2001
AAPT Chesapeake Section records of the secretary, 1956-1984
American Association of Physics Teachers Office of the Executive Officer records of Bernard Khoury, 1985-2002
AAPT Office of the Secretary John Layman records, 1947-2000, 2011, undated
American Association of Physics Teachers Office of the Secretary records of Alfred Romer, 1960-1971
American Association of Physics Teachers Office of the Secretary records of Roderick M. Grant, 1968-1991 (bulk 1977-1983)
AAPT Physics Teaching Resource Agents program records, 1983-2007, undated
Commission on College Physics records, 1960-1971
Eastern Association of Physics Teachers records, 1895-1979
Physics education
Physics societies
Professional associations based in the United States
Academic organizations based in the United States
Organizations established in 1930
Educational organizations based in the United States
Teacher associations based in the United States
American education-related professional associations
1930 establishments in the United States | American Association of Physics Teachers | [
"Physics"
] | 582 | [
"Applied and interdisciplinary physics",
"Physics education"
] |
8,660,685 | https://en.wikipedia.org/wiki/Center%20of%20gravity%20of%20an%20aircraft | The center of gravity (CG) of an aircraft is the point over which the aircraft would balance. Its position is calculated after supporting the aircraft on at least two sets of weighing scales or load cells and noting the weight shown on each set of scales or load cells. The center of gravity affects the stability of the aircraft. To ensure the aircraft is safe to fly, the center of gravity must fall within specified limits established by the aircraft manufacturer.
Terminology
Ballast: Ballast is removable or permanently installed weight in an aircraft used to bring the center of gravity into the allowable range.
Center-of-Gravity Limits: Center of gravity (CG) limits are specified longitudinal (forward and aft) and/or lateral (left and right) limits within which the aircraft's center of gravity must be located during flight. The CG limits are indicated in the airplane flight manual. The area between the limits is called the CG range of the aircraft.
Weight and Balance: When the weight of the aircraft is at or below the allowable limit(s) for its configuration (parked, ground movement, take-off, landing, etc.) and its center of gravity is within the allowable range, and both will remain so for the duration of the flight, the aircraft is said to be within weight and balance. Different maximum weights may be defined for different situations; for example, large aircraft may have maximum landing weights that are lower than maximum take-off weights (because some weight is expected to be lost as fuel is burned during the flight). The center of gravity may change over the duration of the flight as the aircraft's weight changes due to fuel burn or by passengers moving forward or aft in the cabin.
Reference Datum: The reference datum is a reference plane that allows accurate and uniform measurements to any point on the aircraft. The location of the reference datum is established by the manufacturer and is defined in the aircraft flight manual. The horizontal reference datum is an imaginary vertical plane or point, placed along the longitudinal axis of the aircraft, from which all horizontal distances are measured for weight and balance purposes. There is no fixed rule for its location, and it may be located forward of the nose of the aircraft. For helicopters, it may be located at the rotor mast, the nose of the helicopter, or even at a point in space ahead of the helicopter. While the horizontal reference datum can be anywhere the manufacturer chooses, most small training helicopters have the horizontal reference datum 100 inches forward of the main rotor shaft centerline. This is to keep all the computed values positive. The lateral reference datum is usually located at the center of the helicopter.
Arm: The arm is the horizontal distance from the reference datum to the center of gravity (CG) of an item. The algebraic sign is plus (+) if measured aft of the datum or to the right side of the center line when considering a lateral calculation. The algebraic sign is minus (−) if measured forward of the datum or to the left side of the center line when considering a lateral calculation.
Moment: The moment is the moment of force, or torque, that results from an object's weight acting through its arm, measured from the zero point of the reference datum. Moment is also referred to as the tendency of an object to rotate or pivot about a point (the zero point of the datum, in this case). The further an object is from this point, the greater the moment it produces for a given weight. Moment is calculated by multiplying the weight of an object by its arm.
Mean Aerodynamic Chord (MAC): A specific chord line of a tapered wing; the MAC is the chord of an equivalent rectangular wing that has the same area and the same aerodynamic pitching-moment characteristics as the actual wing under the given conditions. On some aircraft, the center of gravity is expressed as a percentage of the length of the MAC. In order to make such a calculation, the position of the leading edge of the MAC must be known ahead of time. This position is defined as a distance from the reference datum and is found in the aircraft's flight manual and also on the aircraft's type certificate data sheet. If a general MAC is not given but a LeMAC (leading edge mean aerodynamic chord) and a TeMAC (trailing edge mean aerodynamic chord) are given (both of which are referenced as arms measured from the datum line), then the MAC length is simply the difference between the TeMAC and LeMAC arms.
Calculation
Center of gravity (CG) is calculated as follows:
Determine weights and arms for all mass within the aircraft.
Multiply weights by arms for all mass to calculate moments.
Add all moments together.
Add all weights together.
Divide total moment by total weight to give overall arm.
The arm that results from this calculation must be within the center of gravity limits dictated by the aircraft manufacturer. If it is not, weight in the aircraft must be removed, added (rarely), or redistributed until the center of gravity falls within the prescribed limits.
Aircraft center of gravity calculations are only performed along a single axis from the zero point of the reference datum that represents the longitudinal axis of the aircraft (to calculate fore-to-aft balance). Some helicopter types utilize lateral CG limits as well as longitudinal limits. Operation of such helicopters requires calculating CG along two axes: one calculation for longitudinal CG (fore-to-aft balance) and another calculation for lateral CG (left-to-right balance).
The weight, arm, and moment values of the fixed items on the aircraft (i.e. engines, wings, electronic components) do not change and are provided by the manufacturer on the Aircraft Equipment List. The manufacturer also provides information facilitating the calculation of moments for fuel loads. Removable weight items (i.e. crew members, passengers, baggage) must be properly accounted for in the weight and CG calculation by the aircraft operator.
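The steps above translate directly into a short calculation. The sketch below uses made-up item weights and arms (they are not taken from any particular aircraft or from the example that follows) and also shows the percent-MAC conversion described under Terminology.
```python
def center_of_gravity(items):
    """items: iterable of (weight_lb, arm_in) pairs -> (total weight, CG arm)."""
    total_weight = sum(w for w, _ in items)
    total_moment = sum(w * arm for w, arm in items)   # moment = weight * arm
    return total_weight, total_moment / total_weight

def percent_mac(cg_arm, lemac_arm, mac_length):
    """Express a CG arm as a percentage of the mean aerodynamic chord."""
    return 100.0 * (cg_arm - lemac_arm) / mac_length

# Hypothetical loading: empty aircraft, two occupants, fuel, baggage.
items = [(1600, 95.0), (170, 70.0), (150, 70.0), (200, 110.0), (60, 140.0)]
weight, cg_arm = center_of_gravity(items)
print(f"total weight = {weight} lb, CG = {cg_arm:.2f} in aft of datum")
print(f"CG = {percent_mac(cg_arm, lemac_arm=62.0, mac_length=80.0):.1f}% MAC")
```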
Example
For an aircraft with a total weight of 2,055 lb and a total moment of 193,193 lb-in, the center of gravity is found by dividing the total moment by the total weight: 193,193 / 2,055 = 94.01 inches aft of the datum plane.
In larger aircraft, weight and balance is often expressed as a percentage of mean aerodynamic chord, or MAC. For example, assume the leading edge of the MAC is 62 inches aft of the datum. Therefore, the CG calculated above lies 32 inches aft of the leading edge of the MAC. If the MAC is 80 inches in length, the percentage of MAC is 32 / 80 = 40%. If the allowable limits were 15% to 35%, the aircraft would not be properly loaded.
Incorrect weight and balance in fixed-wing aircraft
When the weight or center of gravity of an aircraft is outside the acceptable range, the aircraft may not be able to sustain flight, or it may be impossible to maintain level flight in some or all circumstances; a load that shifts in flight can also move the CG out of range. Placing the CG or weight of an aircraft outside the allowed range can lead to an unavoidable crash of the aircraft.
Center of gravity out of range
When the fore-aft center of gravity (CG) is out of range, serious aircraft control problems can occur. The fore-aft CG affects the longitudinal stability of the aircraft, with the stability increasing as the CG moves forward and decreasing as the CG moves aft. With a forward CG position, although the stability of the aircraft increases, the elevator control authority is reduced in the capability of raising the nose of the aircraft. This can cause a serious condition during the landing flare when the nose cannot be raised sufficiently to slow the aircraft. An aft CG position can cause severe handling problems due to the reduced pitch stability and increased elevator control sensitivity, with potential loss of aircraft control. Because the burning of fuel gradually produces a loss of weight and possibly a shift in the CG, it is possible for an aircraft to take off with the CG within normal operating range, and yet later develop an imbalance that results in control problems. Calculations of CG must take this into account (often part of this is calculated in advance by the manufacturer and incorporated into CG limits).
Adjusting CG within limits
The amount a weight must be moved can be found by using the following formula
shift distance = (total weight * cg change) / weight shifted
Example:
1,500 lb * 33.9 in = 50,850 lb-in moment (airplane)
100 lb * 84 in = 8,400 lb-in moment (baggage)
cg = 37 in = (50,850 + 8,400) / 1,600 lb (about 1/2 in beyond the aft cg limit)
We want to move the CG forward 1 in by shifting the 100 lb bag within the baggage compartment.
shift distance = (total weight * cg change) / weight shifted
16 in = (1,600 lb * 1 in) / 100 lb
Reworking the problem with the 100 lb bag moved 16 in forward, to an arm of 68 in, moves the CG 1 in:
1,500 lb * 33.9 in = 50,850 lb-in moment (airplane)
100 lb * 68 in = 6,800 lb-in moment (baggage)
cg = 36 in = (50,850 + 6,800) / 1,600 lb
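The worked example above can be checked with a few lines of code; the helper below is only a sketch of the shift-distance formula, reusing the same illustrative weights and arms.
```python
def shift_distance(total_weight, cg_change, weight_shifted):
    """Distance a load must be moved to change the CG by cg_change."""
    return total_weight * cg_change / weight_shifted

airplane_moment = 1500 * 33.9            # 50,850 lb-in
bag_weight, bag_arm = 100, 84.0          # baggage initially at 84 in
total_weight = 1500 + bag_weight         # 1,600 lb

cg_before = (airplane_moment + bag_weight * bag_arm) / total_weight
move = shift_distance(total_weight, cg_change=1.0, weight_shifted=bag_weight)
cg_after = (airplane_moment + bag_weight * (bag_arm - move)) / total_weight

print(f"CG before: {cg_before:.2f} in, move bag {move:.0f} in forward, CG after: {cg_after:.2f} in")
```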
Weight out of range
Few aircraft impose a minimum weight for flight (although a minimum pilot weight is often specified), but all impose a maximum weight. If the maximum weight is exceeded, the aircraft may not be able to achieve or sustain controlled flight. Excessive take-off weight may make it impossible to take off within available runway lengths, or it may completely prevent take-off. Excessive weight in flight may make climbing beyond a certain altitude difficult or impossible, or it may make it impossible to maintain an altitude.
Incorrect weight and balance in helicopters
The center of gravity is even more critical for helicopters than it is for fixed-wing aircraft (weight issues remain the same). As with fixed-wing aircraft, a helicopter may be properly loaded for takeoff, but near the end of a long flight when the fuel tanks are almost empty, the CG may have shifted enough for the helicopter to be out of balance laterally or longitudinally. For helicopters with a single main rotor, the CG is usually close to the main rotor mast. Improper balance of a helicopter's load can result in serious control problems. In addition to making a helicopter difficult to control, an out-of-balance loading condition also decreases maneuverability since cyclic control is less effective in the direction opposite to the CG location.
The pilot tries to perfectly balance a helicopter so that the fuselage remains horizontal in hovering flight, with no cyclic pitch control needed except for wind correction. Since the fuselage acts as a pendulum suspended from the rotor, changing the center of gravity changes the angle at which the aircraft hangs from the rotor. When the center of gravity is directly under the rotor mast, the helicopter hangs horizontal; if the CG is too far forward of the mast, the helicopter hangs
with its nose tilted down; if the CG is too far aft of the mast, the nose tilts up.
CG forward of forward limit
A forward CG may occur when a heavy pilot and passenger take off without baggage or proper ballast located aft of the rotor mast. This situation becomes worse if the fuel tanks are located aft of the rotor mast because as fuel burns the weight located aft of the rotor mast becomes less.
This condition is recognizable when coming to a hover following a vertical takeoff. The helicopter will have a nose-low attitude, and the pilot will need excessive rearward displacement of the cyclic control to maintain a hover in a no-wind condition. In this condition, the pilot could rapidly run out of rearward cyclic control as the helicopter consumes fuel. The pilot may also find it impossible to decelerate sufficiently to bring the helicopter to a stop. In the event of engine failure and the resulting autorotation, the pilot may not have enough cyclic control to flare properly for the landing.
A forward CG will not be as obvious when hovering into a strong wind, since less rearward cyclic displacement is required than when hovering with no wind. When determining whether a critical balance condition exists, it is essential to consider the wind velocity and its relation to the rearward displacement of the cyclic control.
CG aft of aft limit
Without proper ballast in the cockpit, exceeding the aft CG may occur when:
A lightweight pilot takes off solo with a full load of fuel located aft of the rotor mast.
A lightweight pilot takes off with maximum baggage allowed in a baggage compartment located aft of the rotor mast.
A lightweight pilot takes off with a combination of baggage and substantial fuel where both are aft of the rotor mast.
An aft CG condition can be recognized by the pilot when coming to a hover following a vertical takeoff. The helicopter will have a tail-low attitude, and the pilot will need excessive forward displacement of cyclic control to maintain a hover in a no-wind condition. If there is a wind, the pilot needs even greater forward cyclic. If flight is continued in this condition, the pilot may find it impossible to fly in the upper allowable airspeed range due to inadequate forward cyclic authority to maintain a nose-low attitude. In addition, with an extreme aft CG, gusty or rough air could accelerate the helicopter to a speed faster than that produced with full forward cyclic control. In this case, dissymmetry of lift and blade flapping could cause the rotor disc to tilt aft. With full forward cyclic control already applied, the rotor disc might not be able to be lowered, resulting in possible loss of control, or the rotor blades striking the tail boom.
Lateral balance
In fixed-wing aircraft, lateral balance is often much less critical than fore-aft balance, simply because most mass in the aircraft is located very close to its center. An exception is fuel, which may be loaded into the wings, but since fuel loads are usually symmetrical about the axis of the aircraft, lateral balance is not usually affected. The lateral center of gravity may become important if the fuel is not loaded evenly into tanks on both sides of the aircraft, or (in the case of small aircraft) when passengers are predominantly on one side of the aircraft (such as a pilot flying alone in a small aircraft). Small lateral deviations of CG that are within limits may cause an annoying roll tendency that pilots must compensate for, but they are not dangerous as long as the CG remains within limits for the duration of the flight.
For most helicopters, it is usually not necessary to determine the lateral CG for normal flight instruction and passenger flights. This is because helicopter cabins are relatively narrow and most optional equipment is located near the center line. However, some helicopter manuals specify the seat from which solo flight must be conducted. In addition, if there is an unusual situation, such as a heavy pilot and a full load of fuel on one side of the helicopter, which could affect the lateral CG, its position should be checked against the CG envelope. If carrying external loads in a position that requires large lateral cyclic control displacement to maintain level flight, fore and aft cyclic effectiveness could be dramatically limited.
Fuel dumping and overweight operations
Many large transport-category aircraft are able to take off at a greater weight than the weight at which they can land. This is possible because the wings can support a greater weight of fuel along their span in flight, or when the aircraft is parked or taxiing on the ground, than they can tolerate during the stress of landing and touchdown, when the aircraft's weight is not supported along the span of the wing.
Normally the portion of the aircraft's weight that exceeds the maximum landing weight (but falls within the maximum take-off weight) is entirely composed of fuel. As the aircraft flies, the fuel burns off, and by the time the aircraft is ready to land, it is below its maximum landing weight. However, if an aircraft must land early, sometimes the fuel that remains aboard still keeps the aircraft over the maximum landing weight. When this happens, the aircraft must either burn off the fuel (by flying in a holding pattern) or dump it (if the aircraft is equipped to do this) before landing to avoid damage to the aircraft. In an emergency, an aircraft may choose to land overweight, but this may damage it, and at the very least an overweight landing will mandate a thorough inspection to check for any damage.
In some cases, an aircraft may take off overweight deliberately. An example might be an aircraft being ferried over a very long distance with extra fuel aboard. An overweight take-off typically requires an exceptionally long runway. Overweight operations are not permitted with passengers aboard.
Many smaller aircraft have a maximum landing weight that is the same as the maximum take-off weight, in which case issues of overweight landing due to excess fuel being on board cannot arise.
CG of large commercial transport aircraft
This section shows data obtained from a NASA Ames research grant for large commercial transport aircraft.
The Operational CG Range is utilized during takeoff and landing phases of flight, and the Permissible CG Range is utilized during ground operations (i.e. while loading the aircraft with passengers, baggage and fuel).
Accidents
Air Midwest Flight 5481: in January 2003, a Beech 1900D was dispatched over its maximum weight, with the load concentrated in the rear so that its center of gravity was 5% aft of the aft limit. It crashed, killing all 21 on board.
In February 2005, a Challenger 600 departing Teterboro, New Jersey, was loaded so far forward that it was outside the forward CG limit and could not rotate; it crashed through the airport fence into a building, severely injuring three occupants and destroying the aircraft.
In August 2010, a Filair Let L-410 crashed in the Democratic Republic of Congo. The accident was reportedly the result of the occupants rushing to the front of the aircraft to escape from a crocodile smuggled on board by one of the passengers. The move compromised the aircraft's balance to the point that control of the aircraft was lost.
In July 2013, a de Havilland Canada DHC-3 Otter departed Soldotna, Alaska, stalled after rotation, and crashed beyond its brake-release point, as it was overloaded and its CG was well aft of the rear limit. All ten occupants died.
See also
Index of aviation articles
Weight distribution
References
Further reading
Aerodynamics
Gravity of an aircraft, Center of | Center of gravity of an aircraft | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 3,754 | [
"Point (geometry)",
"Geometric centers",
"Aerodynamics",
"Aerospace engineering",
"Symmetry",
"Fluid dynamics"
] |
8,660,698 | https://en.wikipedia.org/wiki/The%20Hitchhikers%20Guide%20to%20the%20Internet | The Hitchhikers Guide to the Internet, by Ed Krol, was published in 1987 through funding by the National Science Foundation. It was the first popular user's guide to the history and use of the Internet. The title was a reference to the popular The Hitchhiker's Guide to the Galaxy.
Background
In 1985, Ed Krol began working at the University of Illinois, became network manager for the National Center for Supercomputing Applications when it was formed and was involved in the establishment of the NSFNET. During this time, in August 1987, he published, through funding by the National Science Foundation, the online text document Hitchhiker's Guide to the Internet "because he had so much trouble getting information and was sick of telling the same story to everyone". Two years later this was republished as RFC 1118.
The text attracted Tim O'Reilly's attention. Krol reworked and extended it into book form and it was published by O'Reilly in 1992 as the Whole Internet User's Guide and Catalog, though the additional digital catalog related to the text was made freely available online.
See also
History of the Internet
Scientific American Special Issue on Communications, Computers, and Networks
References
External links
at Project Gutenberg
Books about the Internet
1987 non-fiction books
Texts related to the history of the Internet
Request for Comments | The Hitchhikers Guide to the Internet | [
"Technology"
] | 276 | [
"Computing stubs",
"Computer book stubs"
] |
8,660,721 | https://en.wikipedia.org/wiki/1%2C4%2C7-Trithiacyclononane | 1,4,7-Trithiacyclononane, also called 9-ane-S3, is the thia-crown ether with the formula (CH2CH2S)3. This cyclic thioether is most often encountered as a tridentate ligand in coordination chemistry, where it forms transition metal thioether complexes.
9-ane-S3 forms complexes with many metal ions, including those considered hard, such as copper(II) and iron(II). Most of its complexes have the formula [M(9-ane-S3)2]2+ and are octahedral. The point group of [M(9-ane-S3)2]2+ is S6.
Synthesis
This compound was first reported in 1977, and the current synthesis entails the assembly within the coordination sphere of a metal ion followed by decomplexation:
References
Chelating agents
Sulfur heterocycles
Thioethers
Macrocycles | 1,4,7-Trithiacyclononane | [
"Chemistry"
] | 206 | [
"Organic compounds",
"Chelating agents",
"Macrocycles",
"Process chemicals"
] |
8,661,171 | https://en.wikipedia.org/wiki/J-I | The J-I was a solid-fuel, expendable, small-lift launch vehicle developed by the National Space Development Agency of Japan and the Institute of Space and Astronautical Science. In an attempt to reduce development costs, it used the solid rocket booster from the H-II as the first stage, and the upper stages of the M-3SII. It flew only once on a suborbital flight taking place on 11 February 1996 from the Osaki Launch Complex at the Tanegashima Space Center in a partial configuration, to launch the demonstrator HYFLEX. The vehicle never flew in the final orbital capability configuration, which should have launched the OICETS satellite (OICETS was launched on a Russian R-36MUTTH Intercontinental ballistic missile-based Dnepr rocket instead).
On the HYFLEX mission a load of 1,054 kg was launched 1,300 km downrange. Apogee was 110 km; the HYFLEX payload achieved a speed of approximately 3.8 km/s.
See also
Epsilon (rocket)
Mu (rocket family)
M-V
Comparison of orbital launchers families
References
External links
Space launch vehicles of Japan | J-I | [
"Astronomy"
] | 245 | [
"Rocketry stubs",
"Astronomy stubs"
] |
8,661,211 | https://en.wikipedia.org/wiki/Dielectric%20barrier%20discharge | Dielectric-barrier discharge (DBD) is the electrical discharge between two electrodes separated by an insulating dielectric barrier. Originally called silent (inaudible) discharge and also known as ozone production discharge or partial discharge, it was first reported by Ernst Werner von Siemens in 1857.
Process
The process normally uses high voltage alternating current, ranging from lower RF to microwave frequencies. However, other methods were developed to extend the frequency range all the way down to DC. One method was to use a high resistivity layer to cover one of the electrodes. This is known as the resistive barrier discharge. Another technique, using a semiconductor layer of gallium arsenide (GaAs) to replace the dielectric layer, enables these devices to be driven by a DC voltage between 580 V and 740 V.
Construction
DBD devices can be made in many configurations, typically planar, using parallel plates separated by a dielectric or cylindrical, using coaxial plates with a dielectric tube between them. In a common coaxial configuration, the dielectric is shaped in the same form as common fluorescent tubing. It is filled at atmospheric pressure with either a rare gas or rare gas-halide mix, with the glass walls acting as the dielectric barrier. Due to the atmospheric pressure level, such processes require high energy levels to sustain. Common dielectric materials include glass, quartz, ceramics and polymers. The gap distance between electrodes varies considerably, from less than 0.1 mm in plasma displays, several millimetres in ozone generators and up to several centimetres in CO2 lasers.
Depending on the geometry, DBD can be generated in a volume (VDBD) or on a surface (SDBD). For VDBD the plasma is generated between two electrodes, for example between two parallel plates with a dielectric in between. At SDBD the microdischarges are generated on the surface of a dielectric, which results in a more homogeneous plasma than can be achieved using the VDBD configuration. Because the microdischarges at SDBD are confined to the surface, their density is higher than at VDBD. The plasma is generated on top of the surface of an SDBD plate. To easily ignite VDBD and obtain a uniformly distributed discharge in the gap, a pre-ionization DBD can be used.
A particular compact and economic DBD plasma generator can be built based on the principles of the piezoelectric direct discharge. In this technique, the high voltage is generated with a piezo-transformer, the secondary circuit of which acts also as the high voltage electrode. Since the transformer material is a dielectric, the produced electric discharge resembles properties of the dielectric barrier discharge.
Manipulation of the encapsulated electrode and distributing the encapsulated electrode throughout the dielectric layer has been shown to alter the performance of the dielectric barrier discharge (DBD) plasma actuator. Actuators with a shallow initial electrode are able to more efficiently impart momentum and mechanical power into the flow.
Operation
A multitude of random arcs form during operation when the gap between the two electrodes exceeds 1.5 mm during discharges in gases at atmospheric pressure. As the charges collect on the surface of the dielectric, they discharge in microseconds (millionths of a second), leading to their reformation elsewhere on the surface. Similar to other electrical discharge methods, the contained plasma is sustained if the continuous energy source provides the required degree of ionization, overcoming the recombination process leading to the extinction of the discharge plasma. Such recombinations are directly proportional to the collisions between the molecules and in turn to the pressure of the gas, as explained by Paschen's Law. The discharge process causes the emission of an energetic photon, the frequency and energy of which corresponds to the type of gas used to fill the discharge gap.
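Since Paschen's law is invoked here, the following sketch evaluates the textbook Paschen expression for the breakdown voltage, V_B = B·p·d / (ln(A·p·d) − ln(ln(1 + 1/γ))); the coefficients A and B for air and the secondary-emission coefficient γ are generic illustrative values, not parameters measured for any particular DBD device.

```python
import math

def paschen_breakdown_voltage(p_torr, d_cm, A=15.0, B=365.0, gamma=0.01):
    """Classic Paschen-law estimate of the breakdown voltage (volts).

    p_torr : gas pressure in Torr
    d_cm   : gap distance in cm
    A, B   : gas-dependent coefficients (illustrative textbook values for air)
    gamma  : secondary electron emission coefficient of the cathode (assumed)
    """
    pd = p_torr * d_cm
    denominator = math.log(A * pd) - math.log(math.log(1.0 + 1.0 / gamma))
    if denominator <= 0:
        raise ValueError("pd product is below the validity range of the formula")
    return B * pd / denominator

# Roughly atmospheric pressure (760 Torr) and a 0.1 cm gap, comparable to the
# small electrode gaps quoted for DBD devices above.
print(round(paschen_breakdown_voltage(760, 0.1)), "V")
```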
Applications
Usage of generated radiation
DBDs can be used to generate optical radiation by the relaxation of excited species in the plasma. The main application here is the generation of UV radiation. Such excimer ultraviolet lamps can produce light with short wavelengths, which can be used to produce ozone on an industrial scale. Ozone is still used extensively in industrial air and water treatment. Early 20th-century attempts at commercial nitric acid and ammonia production used DBDs, as several nitrogen-oxygen compounds are generated as discharge products.
Usage of the generated plasma
Since the 19th century, DBDs have been known for their decomposition of different gaseous compounds, such as NH3, H2S and CO2. Other modern applications include semiconductor manufacturing, germicidal processes, polymer surface treatment, high-power CO2 lasers typically used for welding and metal cutting, pollution control, plasma display panels, and aerodynamic flow control. The relatively lower temperature of DBDs makes them an attractive method of generating plasma at atmospheric pressure.
Industry
The plasma itself is used to modify or clean (plasma cleaning) surfaces of materials (e.g. polymers, semiconductor surfaces) that can also act as the dielectric barrier, or to modify gases that are then applied to "soft" plasma cleaning and to increasing the adhesion of surfaces prepared for coating or gluing (flat panel display technologies).
A dielectric barrier discharge is one method of plasma treatment of textiles at atmospheric pressure and room temperature. The treatment can be used to modify the surface properties of the textile to improve wettability, improve the absorption of dyes and adhesion, and for sterilization. DBD plasma provides a dry treatment that doesn't generate waste water or require drying of the fabric after treatment. For textile treatment, a DBD system requires a few kilovolts of alternating current, at between 1 and 100 kilohertz. Voltage is applied to insulated electrodes with a millimetre-size gap through which the textile passes.
An excimer lamp can be used as a powerful source of short-wavelength ultraviolet light, useful in chemical processes such as surface cleaning of semiconductor wafers. The lamp relies on a dielectric barrier discharge in an atmosphere of xenon and other gases to produce the excimers.
Water treatment
DBDs provide an additional treatment process alongside chlorine gas for the removal of bacteria and organic contaminants from drinking water supplies. Treatment of public swimming baths, aquariums and fish ponds involves the use of ultraviolet radiation produced by a dielectric barrier discharge in xenon gas, with glass acting as the dielectric.
Surface modification of materials
An application where DBDs can be successfully used is to modify the characteristics of a material surface. The modification can target a change in its hydrophilicity, the surface activation, the introduction of functional groups, and so on. Polymeric surfaces are easy to be processed using DBDs which, in some cases, offer a high processing area.
Medicine
Dielectric barrier discharges were used to generate relatively large volume diffuse plasmas at atmospheric pressure and applied to inactivate bacteria in the mid 1990s. This eventually led to the development of a new field of applications, the biomedical applications of plasmas. In the field of biomedical application, three main approaches have emerged: direct therapy, surface modification, and plasma polymer deposition. Plasma polymers can control and steer biological–biomaterial interactions (i.e. adhesion, proliferation, and differentiation) or inhibition of bacteria adhesion.
Aeronautics
Interest in plasma actuators as active flow control devices is growing rapidly due to their lack of mechanical parts, light weight and high response frequency.
Properties
Due to their nature, these devices have the following properties:
capacitive electric load: low power factor in range of 0.1 to 0.3
high ignition voltage 1–10 kV
huge amount of energy stored in electric field – requirement of energy recovery if DBD is not driven continuously
voltages and currents during discharge event have major influence on discharge behaviour (filamented, homogeneous).
Operation with continuous sine waves or square waves is mostly used in high power industrial installations. Pulsed operation of DBDs may lead to higher discharge efficiencies.
Driving circuits
Drivers for this type of electric load are power HF generators that in many cases contain a transformer for high voltage generation. They resemble the control gear used to operate compact fluorescent lamps or cold cathode fluorescent lamps. The operation mode and the topologies of circuits to operate [DBD] lamps with continuous sine or square waves are similar to those standard drivers. In these cases, the energy that is stored in the DBD's capacitance does not have to be recovered to the intermediate supply after each ignition. Instead, it stays within the circuit (oscillating between the [DBD]'s capacitance and at least one inductive component of the circuit) and only the real power that is consumed by the lamp has to be provided by the power supply. By contrast, drivers for pulsed operation suffer from a rather low power factor and in many cases must fully recover the DBD's energy. Since pulsed operation of [DBD] lamps can lead to increased lamp efficiency, international research has led to suitable circuit concepts. Basic topologies are the resonant flyback and the resonant half bridge. A flexible circuit that combines the two topologies is given in two patent applications, and may be used to adaptively drive DBDs with varying capacitance.
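As a rough illustration of why energy recovery matters in pulsed operation, the sketch below computes the energy stored in the DBD's capacitance (E = ½CV²) and the power that would be lost if that energy were discarded on every pulse; the capacitance, voltage, and repetition rate are assumed example values, not data for a real lamp.

```python
def stored_energy_joules(capacitance_f, voltage_v):
    """Energy stored in a capacitance charged to a given voltage: E = 1/2 * C * V^2."""
    return 0.5 * capacitance_f * voltage_v ** 2

# Assumed example values for a small DBD lamp.
capacitance = 100e-12      # 100 pF cell capacitance
ignition_voltage = 5e3     # 5 kV, within the 1-10 kV range quoted above
pulse_rate = 50e3          # 50 kHz pulse repetition rate

energy_per_pulse = stored_energy_joules(capacitance, ignition_voltage)
wasted_power = energy_per_pulse * pulse_rate   # if the stored energy is not recovered

print(f"Energy stored per pulse: {energy_per_pulse * 1e3:.2f} mJ")
print(f"Power lost without recovery: {wasted_power:.0f} W")
```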
An overview of different circuit concepts for the pulsed operation of DBD optical radiation sources is given in "Resonant Behaviour of Pulse Generators for the Efficient Drive of Optical Radiation Sources Based on Dielectric Barrier Discharges".
References
Electrical phenomena
Electricity
Electrostatics | Dielectric barrier discharge | [
"Physics"
] | 1,973 | [
"Physical phenomena",
"Electrical phenomena"
] |
8,661,521 | https://en.wikipedia.org/wiki/Semidione | Semidiones are radical anions analogous to semiquinones, obtained from the one-electron reduction of non-quinone conjugated dicarbonyls.
The simplest possible semidiones are derived from 1,2-dicarbonyls, making them the second member of a homologous series starting with ketyl radicals and continuing with semitriones.
They are often transient intermediates, appearing in reactions such as the final reduction step of the acyloin condensation.
Benzil semidione, synthesized by Auguste Laurent in 1836, is believed to have been the first radical ion ever characterized.
Semidehydroascorbate is a relatively stable semitrione produced by hydrogen abstraction from ascorbate (Vitamin C).
References
Free radicals
Ketones
Anions | Semidione | [
"Physics",
"Chemistry",
"Biology"
] | 167 | [
"Matter",
"Anions",
"Ketones",
"Free radicals",
"Functional groups",
"Senescence",
"Biomolecules",
"Ions",
"Organic chemistry stubs"
] |
8,661,899 | https://en.wikipedia.org/wiki/Polymer-based%20battery | A polymer-based battery uses organic materials instead of bulk metals to form a battery. Currently accepted metal-based batteries pose many challenges due to limited resources, negative environmental impact, and the approaching limit of progress. Redox active polymers are attractive options for electrodes in batteries due to their synthetic availability, high-capacity, flexibility, light weight, low cost, and low toxicity. Recent studies have explored how to increase efficiency and reduce challenges to push polymeric active materials further towards practicality in batteries. Many types of polymers are being explored, including conductive, non-conductive, and radical polymers. Batteries with a combination of electrodes (one metal electrode and one polymeric electrode) are easier to test and compare to current metal-based batteries, however batteries with both a polymer cathode and anode are also a current research focus. Polymer-based batteries, including metal/polymer electrode combinations, should be distinguished from metal-polymer batteries, such as a lithium polymer battery, which most often involve a polymeric electrolyte, as opposed to polymeric active materials.
Organic polymers can be processed at relatively low temperatures, lowering costs. They also produce less carbon dioxide.
History
Organic batteries are an alternative to the metal reaction battery technologies, and much research is taking place in this area.
A 1982 article titled "Plastic-Metal Batteries: New promise for the electric car" stated that "Two different organic polymers are being investigated for possible use in batteries" and indicated that the demonstration described was based on work begun in 1976.
Waseda University was approached by NEC in 2001, and began to focus on organic batteries. In 2002, an NEC researcher presented a paper on Piperidinoxyl Polymer technology, and by 2005 they had presented an organic radical battery (ORB) based on a modified PTMA, poly(2,2,6,6-tetramethylpiperidinyloxy-4-yl meth-acrylate).
In 2006, Brown University announced a technology based on polypyrrole. In 2007, Waseda announced a new ORB technology based on "soluble polymer, polynorborene with pendant nitroxide radical groups."
In 2015 researchers developed an efficient, conductive, electron-transporting polymer. The discovery employed a "conjugated redox polymer" design with a naphthalene-bithiophene polymer that has been used for transistors and solar cells. Doped with lithium ions it offered significant electronic conductivity and remained stable through 3,000 charge/discharge cycles. Polymers that conduct holes have been available for some time. The polymer exhibits the greatest power density for an organic material under practical measurement conditions. A battery could be 80% charged within 6 seconds. Energy density remained lower than inorganic batteries.
Electrochemistry
Like metal-based batteries, the reaction in a polymer-based battery is between a positive and a negative electrode with different redox potentials. An electrolyte transports charges between these electrodes. For a substance to be a suitable battery active material, it must be able to participate in a chemically and thermodynamically reversible redox reaction. Unlike metal-based batteries, whose redox process is based on the valence charge of the metals, the redox process of polymer-based batteries is based on a change of state of charge in the organic material. For a high energy density, the electrodes should have similar specific energies.
Classification of active materials
The active organic material could be a p-type, n-type, or b-type. During charging, p-type materials are oxidized and produce cations, while n-types are reduced and produce anions. B-type organics could be either oxidized or reduced during charging or discharging.
Charge and discharge
In a commercially available Li-ion battery, the Li+ ions are diffused slowly due to the required intercalation and can generate heat during charge or discharge. Polymer-based batteries, however, have a more efficient charge/discharge process, resulting in improved theoretical rate performance and increased cyclability.
Charge
To charge a polymer-based battery, a current is applied to oxidize the positive electrode and reduce the negative electrode. The electrolyte salt compensates the charges formed. The limiting factors upon charging a polymer-based battery differ from metal-based batteries and include the full oxidation of the cathode organic, full reduction of the anode organic, or consumption of the electrolyte.
Discharge
Upon discharge, the electrons go from the anode to cathode externally, while the electrolyte carries the released ions from the polymer. This process, and therefore the rate performance, is limited by the electrolyte ion travel and the electron-transfer rate constant, k0, of the reaction.
This electron transfer rate constant provides a benefit of polymer-based batteries, which typically have high values on the order of 10−1 cm s−1. The organic polymer electrodes are amorphous and swollen, which allows for a higher rate of ionic diffusion and further contributes to a better rate performance. Different polymer reactions, however, have different reaction rates. While a nitroxyl radical has a high reaction rate, organodisulfides have significantly lower rates because bonds are broken and new bonds are formed.
Batteries are commonly evaluated by their theoretical capacity (the total capacity of the battery if 100% of active material were utilized in the reaction). This value can be calculated as follows:

Q = (n × F × m) / M

where m is the total mass of active material, n is the number of transferred electrons per molar mass of active material, M is the molar mass of active material, and F is Faraday's constant.
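A minimal sketch of this calculation, expressed both as total charge and as specific capacity per gram of active material; the example values (one electron per repeat unit and a molar mass of roughly 240 g/mol, loosely based on a PTMA-like repeat unit) are illustrative assumptions rather than measured data.

```python
FARADAY = 96485.0  # Faraday's constant, C/mol

def theoretical_capacity_coulombs(mass_g, n_electrons, molar_mass_g_mol):
    """Total theoretical charge Q = n * F * m / M, in coulombs."""
    return n_electrons * FARADAY * mass_g / molar_mass_g_mol

def specific_capacity_mah_per_g(n_electrons, molar_mass_g_mol):
    """Specific capacity of the active material in mAh/g (1 mAh = 3.6 C)."""
    return n_electrons * FARADAY / (3.6 * molar_mass_g_mol)

# Illustrative values for a nitroxide radical polymer repeat unit (roughly PTMA-like):
# one electron transferred per ~240 g/mol repeat unit gives on the order of 110 mAh/g.
print(round(specific_capacity_mah_per_g(n_electrons=1, molar_mass_g_mol=240.0), 1), "mAh/g")
```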
Charge and discharge testing
Most polymer electrodes are tested in a metal-organic battery for ease of comparison to metal-based batteries. In this testing setup, the metal acts as the anode and either n- or p-type polymer electrodes can be used as the cathode. When testing the n-type organic, this metal-polymer battery is charged upon assembly and the n-type material is reduced during discharge, while the metal is oxidized. For p-type organics in a metal-polymer test, the battery is already discharged upon assembly. During initial charging, electrolyte salt cations are reduced and mobilized to the polymeric anode while the organic is oxidized. During discharging, the polymer is reduced while the metal is oxidized to its cation.
Types of active materials
Conductive polymers
Conductive polymers can be n-doped or p-doped to form an electrochemically active material with conductivity due to dopant ions on a conjugated polymer backbone. Conductive polymers (i.e. conjugated polymers) are embedded with the redox active group, as opposed to having pendant groups, with the exception of sulfur conductive polymers. They are ideal electrode materials due to their conductivity and redox activity, therefore not requiring large quantities of inactive conductive fillers. However they also tend to have low coulombic efficiency and exhibit poor cyclability and self-discharge. Due to the poor electronic separation of the polymer's charged centers, the redox potentials of conjugated polymers change upon charge and discharge due to a dependence on the dopant levels. As a result of this complication, the discharge profile (cell voltage vs. capacity) of conductive polymer batteries has a sloped curve.
Conductive polymers struggle with stability due to high levels of charge, failing to reach the ideal of one charge per monomer unit of polymer. Stabilizing additives can be incorporated, but these decrease the specific capacity.
Non-conjugated polymers with pendant groups
Despite the conductivity advantage of conjugated polymers, their many drawbacks as active materials have furthered the exploration of polymers with redox active pendant groups. Groups frequently explored include carbonyls, carbazoles, organosulfur compounds, viologen, and other redox-active molecules with high reactivity and stable voltage upon charge and discharge. These polymers present an advantage over conjugated polymers due to their localized redox sites and more constant redox potential over charge/discharge.
Carbonyl pendant groups
Carbonyl compounds have been heavily studied, and thus present an advantage, as new active materials with carbonyl pendant groups can be achieved by many different synthetic properties. Polymers with carbonyl groups can form multivalent anions. Stabilization depends on the substituents; vicinal carbonyls are stabilized by enolate formation, aromatic carbonyls are stabilized by delocalization of charge, and quinoidal carbonyls are stabilized by aromaticity.
Organosulfur groups
Sulfur is one of earth's most abundant elements, making organosulfur compounds advantageous as active electrode materials. Small-molecule organosulfur active materials exhibit poor stability, which is partially resolved via incorporation into a polymer. In disulfide polymers, electrochemical charge is stored in a thiolate anion, formed by a reversible two-electron reduction of the disulfide bond. Electrochemical storage in thioethers is achieved by the two-electron oxidation of a neutral thioether to a thioether with a +2 charge. As active materials, however, organosulfur compounds exhibit weak cyclability.
Radical groups
Polymeric electrodes in organic radical batteries are electrochemically active with stable organic radical pendant groups that have an unpaired electron in the uncharged state. Nitroxide radicals are the most commonly applied, though phenoxyl and hydrazyl groups are also often used. A nitroxide radical could be reversibly oxidized and the polymer p-doped, or reduced, causing n-doping. Upon charging, the radical is oxidized to an oxoammonium cation, and at the cathode, the radical is reduced to an aminoxyl anion. These processes are reversed upon discharge, and the radicals are regenerated. For stable charge and discharge, both the radical and doped form of the radical must be chemically stable. These batteries exhibit excellent cyclability and power density, attributed to the stability of the radical and the simple one-electron transfer reaction. A slight decrease in capacity after repeated cycling is likely due to a build-up of swollen polymer particles which increases the resistance of the electrode. Because the radical polymers are considerably insulating, conductive additives are often added, which lower the theoretical specific capacity. Nearly all organic radical batteries feature a nearly constant voltage during discharge, which is an advantage over conductive polymer batteries. The polymer backbone and cross-linking techniques can be tuned to minimize the solubility of the polymer in the electrolyte, thereby minimizing self-discharge.
Control and performance
Performance summary comparison of key polymer electrode types
During discharge, conductive polymers have a sloping voltage that hinders their practical applications. This sloping curve indicates electrochemical instability which could be due to morphology, size, the charge repulsions within the polymer chain during the reaction, or the amorphous state of polymers.
Effect of polymer morphology
Electrochemical performance of polymer electrodes is affected by polymer size, morphology, and degree of crystallinity. In a polypyrrole (PPy)/sodium-ion hybrid battery, a 2018 study demonstrated that a polymer anode with a fluffy structure consisting of chains of submicron particles performed with a much higher capacity (183 mAh g−1) than bulk PPy (34.8 mAh g−1). The structure of the submicron polypyrrole anode allowed for increased electrical contact between the particles, and the electrolyte was able to further penetrate the polymeric active material. It has also been reported that amorphous polymeric active materials perform better than their crystalline counterparts. In 2014, it was demonstrated that crystalline oligopyrene exhibited a discharge capacity of 42.5 mAh g−1, while amorphous oligopyrene has a higher capacity of 120 mAh g−1. Further, the crystalline version experienced a sloped charge and discharge voltage and considerable overpotential due to slow diffusion of ClO4−. The amorphous oligopyrene had a voltage plateau during charge and discharge, as well as significantly less overpotential.
Molecular weight control
The molecular weight of polymers affects their chemical and physical properties, and thus the performance of a polymer electrode. A 2017 study evaluated the effect of molecular weight on the electrochemical properties of PTMA. By increasing the monomer-to-initiator ratio from 50/1 to 1000/1, five different sizes were achieved, from 66 to 704 degrees of polymerization. A strong dependence on molecular weight was established, as the higher molecular weight polymers exhibited a higher specific discharge capacity and better cyclability. This effect was attributed to a reciprocal relationship between molecular weight and solubility in the electrolyte.
Advantages
Polymer-based batteries have many advantages over metal-based batteries. The electrochemical reactions involved are more simple, and the structural diversity of polymers and method of polymer synthesis allows for increased tunability for desired applications. While new types of inorganic materials are difficult to find, new organic polymers can be much more easily synthesized. Another advantage is that polymer electrode materials may have lower redox potentials, but they have a higher energy density than inorganic materials. And, because the redox reaction kinetics for organics is higher than that for inorganics, they have a higher power density and rate performance. Because of the inherent flexibility and light weight of organic materials as compared to inorganic materials, polymeric electrodes can be printed, cast, and vapor deposited, enabling application in thinner and more flexible devices. Further, most polymers can be synthesized at low cost or extracted from biomass and even recycled, while inorganic metals are limited in availability and can be harmful to the environment.
Organic small molecules also possess many of these advantages, however they are more susceptible to dissolving in the electrolyte. Polymeric organic active materials less easily dissolve and thus exhibit superior cyclability.
Challenges
Though superior in this sense to small organic molecules, polymers still exhibit solubility in electrolytes, and battery stability is threatened by dissolved active material that can travel between electrodes, leading to decreased cyclability and self-discharge, which indicates weaker mechanical capacity. This issue can be lessened by incorporating the redox-active unit in the polymeric backbone, but this can decrease the theoretical specific capacity and increase electrochemical polarization. Another challenge is that, aside from conductive polymers, most polymeric electrodes are electrically insulating and therefore require conductive additives, reducing the battery's overall capacity. While polymers have a low mass density, they have a lower volumetric energy density, which in turn would require an increase in the volume of the devices being powered.
Safety
A 2009 study evaluated the safety of a hydrophilic radical polymer and found that a radical polymer battery with an aqueous electrolyte is nontoxic, chemically stable, and non-explosive, and is thus a safer alternative to traditional metal-based batteries. Aqueous electrolytes present a safer option over organic electrolytes which can be toxic and can form HF acid. The one-electron redox reaction of a radical polymer electrode during charging generates little heat and therefore has a reduced risk of thermal runaway. Further studies are required to fully understand the safety of all polymeric electrodes.
See also
List of battery types
References
External links
"New material claimed to store more energy and cost less money than batteries", September 29, 2011, National University of Singapore's Nanoscience and Nanotechnology Initiative
"Organic Radical Battery with Piperidinoxyl Polymer", 2002.
"Flexible battery power", 19 March 2007
Battery types
Plastics applications
Polymers | Polymer-based battery | [
"Chemistry",
"Materials_science"
] | 3,245 | [
"Polymers",
"Polymer chemistry"
] |
8,663,141 | https://en.wikipedia.org/wiki/Behaviorally%20anchored%20rating%20scales | Behaviorally anchored rating scales (BARS) are scales used to rate performance. BARS are normally presented vertically with scale points ranging from five to nine. It is an appraisal method that aims to combine the benefits of narratives, critical incidents, and quantified ratings by anchoring a quantified scale with specific narrative examples of good, moderate, and poor performance.
Background
BARS were developed in response to dissatisfaction with the subjectivity involved in using traditional rating scales such as the graphic rating scale. A review of BARS concluded that the strength of this rating format may lie primarily in the performance dimensions which are gathered rather than the distinction between behavioral and numerical scale anchors.
Benefits of BARS
BARS are rating scales that add behavioral scale anchors to traditional rating scales (e.g., graphic rating scales). In comparison to other rating scales, BARS are intended to facilitate more accurate ratings of the target person's behavior or performance. However, whereas the BARS is often regarded as a superior performance appraisal method, BARS may still suffer from unreliability, leniency bias and lack of discriminant validity between performance dimensions.
Developing BARS
BARS are developed using data collected through the critical incident technique, or through the use of comprehensive data about the tasks performed by a job incumbent, such as might be collected through a task analysis. In order to construct BARS, several basic steps, outlined below, are followed.
Examples of effective and ineffective behavior related to job are collected from people with knowledge of job using the critical incident technique. Alternatively, data may be collected through the careful examination of data from a recent task analysis.
These data are then converted into performance dimensions. To convert these data into performance dimensions, examples of behavior (such as critical incidents) are sorted into homogeneous groups using the Q-sort technique. Definitions for each group of behaviors are then written to define each grouping of behaviors as a performance dimension
A group of subject matter experts (SMEs) are asked to re-translate the behavioral examples back into their respective performance dimensions. At this stage the behaviors for which there is not a high level of agreement (often 50–75%) are discarded while the behaviors which were re-translated back into their respective performance dimensions with a high level of SME agreement are retained. The re-translation process helps to ensure that behaviors are readily identifiable with their respective performance dimensions.
The retained behaviors are then scaled by having SMEs rate the effectiveness of each behavior. These ratings are usually done on a 5- to 9-point Likert-type scale.
Behaviors with a low standard deviation (for example, less than 1.50) are retained while behaviors with a higher standard deviation are discarded. This step helps to ensure SME agreement about the rating of each behavior.
Finally, the behaviors for each performance dimension that meet both the re-translation and standard-deviation criteria are used as scale anchors, as illustrated in the sketch following this list.
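The retention logic of the last few steps can be summarized in a short sketch; the agreement threshold and standard-deviation cut-off used here are example values within the ranges quoted above, and the behavior data are invented purely for illustration.

```python
from statistics import pstdev, mean

def retain_behaviors(behaviors, min_agreement=0.6, max_sd=1.5):
    """Keep behavioral examples that meet the re-translation and scaling criteria.

    behaviors: list of dicts with keys
      'agreement' - fraction of SMEs re-sorting the behavior into its original dimension
      'ratings'   - effectiveness ratings on a 5- to 9-point scale
    """
    retained = []
    for b in behaviors:
        if b["agreement"] < min_agreement:
            continue                       # fails the re-translation step
        if pstdev(b["ratings"]) >= max_sd:
            continue                       # SMEs disagree too much about effectiveness
        b["anchor_value"] = mean(b["ratings"])  # scale position of the retained anchor
        retained.append(b)
    return retained

# Invented example data.
examples = [
    {"name": "Greets customer promptly", "agreement": 0.9, "ratings": [7, 7, 6, 7]},
    {"name": "Keeps workspace tidy",     "agreement": 0.4, "ratings": [5, 5, 6, 5]},
    {"name": "Escalates complaints",     "agreement": 0.8, "ratings": [2, 8, 5, 7]},
]
for b in retain_behaviors(examples):
    print(b["name"], round(b["anchor_value"], 1))
```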
See also
Industrial and organizational psychology
References
Further reading
Behaviorism
Psychological tests and scales | Behaviorally anchored rating scales | [
"Biology"
] | 594 | [
"Behavior",
"Behaviorism"
] |
8,663,363 | https://en.wikipedia.org/wiki/Cabozoa | In the classification of eukaryotes (living organisms with a cell nucleus), Cabozoa was a taxon proposed by Cavalier-Smith. It was a putative clade comprising the Rhizaria and Excavata. More recent research places the Rhizaria with the Alveolata and Stramenopiles instead of the Excavata, however, so "Cabozoa" is polyphyletic.
See also
Corticata
References
Obsolete eukaryote taxa
Bikont unranked clades | Cabozoa | [
"Biology"
] | 113 | [
"Eukaryotes",
"Eukaryote taxa"
] |
8,663,894 | https://en.wikipedia.org/wiki/Mesembrine | Mesembrine is an alkaloid primarily derived from the plant Sceletium tortuosum, commonly known as kanna. This compound is noted for its psychoactive properties, particularly as a serotonin reuptake inhibitor, which contributes to its potential use in treating mood disorders and anxiety. Mesembrine has garnered interest in both traditional medicine and modern pharmacology, where it is explored for its effects on enhancing mood and cognitive function. The plant itself has a long history of use by indigenous peoples in southern Africa, who utilized it for its mood-enhancing and stress-relieving effects, often consuming it in various forms such as teas or chews.
Mesembrine has also been identified in Mesembryanthemum cordifolium, Delosperma echinatum, and Oscularia deltoides.
Pharmacology
Mesembrine has been shown to act as a serotonin reuptake inhibitor (Ki = 1.4 nM), and has also been found to behave as a weak inhibitor of the enzyme phosphodiesterase 4 (PDE4) (Ki = 7,800 nM). A concentrated mesembrine extract of Sceletium tortuosum may exert antidepressant effects by acting as a monoamine releasing agent. As such, mesembrine likely plays a dominant role in the antidepressant effects of kanna.
Rat studies have evaluated effects of kanna extract, finding analgesic and antidepressant potential. No adverse results were noted for a commercial extract up to 5000 mg/kg daily in rats.
Structure
Mesembrine was first isolated and characterized by Bodendorf et al. in 1957. It is a tricyclic molecule with two bridgehead chiral carbons located between the five-membered and six-membered rings. The naturally occurring form of mesembrine produced by plants is the levorotatory isomer, (−)-mesembrine, where the carbon atoms at positions 3a and 7a both have the S configuration (3aS,7aS).
Total synthesis
Because of its structure and bioactivity, mesembrine has been a target for total synthesis over the past 40 years. Over 40 total syntheses have been reported for mesembrine, most of which focused on different approaches and strategies for the construction of the bicyclic ring system and the quaternary carbon.
The first total synthesis of mesembrine was reported by Shamma, et al. in 1965. This route has 21 steps, which was among the longest synthetic routes for mesembrine. Key steps involve the construction of the six-membered ketone ring by Diels-Alder reaction, α-allylation for synthesis of the quaternary carbon, and conjugate addition reaction for the final five-membered ring closure. The final product from this route is a racemic mixture of (+)- and (-)-mesembrine.
In 1971, Yamada et al. reported the first asymmetric total synthesis of (+)-mesembrine. This synthesis introduced the quaternary carbon atom through an asymmetric Robinson annulation reaction, which was mediated by a chiral auxiliary derived from L-proline. In the final step, an intramolecular aza-Michael addition produced the fused pyrrolidine ring system.
References
Further reading
Antidepressants
Indole alkaloids
Ketones
Monoamine releasing agents
Phenol ethers
PDE4 inhibitors
Serotonin reuptake inhibitors
Total synthesis | Mesembrine | [
"Chemistry"
] | 752 | [
"Indole alkaloids",
"Ketones",
"Functional groups",
"Alkaloids by chemical classification",
"Chemical synthesis",
"Total synthesis"
] |
8,664,517 | https://en.wikipedia.org/wiki/Stratford%20tube%20crash | The Stratford tube crash occurred on 8 April 1953, on the Central line of the London Underground. 12 people died and 46 were injured as a result of a rear-end collision in a tunnel, caused by driver error after a signal failure. This was the worst accident involving trains on the London Underground until the Moorgate tube crash in 1975. A similar accident at exactly the same location, occurred in 1946, before the line was open for public traffic; one railwayman died.
Collision
The Central line was extended from Liverpool Street to Stratford in November 1946, and was extended further to Leytonstone in 1948.
A signal (A491) in the tunnel between Stratford and Leyton had been damaged, and this and the preceding signal (A489) were showing a permanent red aspect. Trains were being worked slowly past the failed signals under the "Stop and Proceed" rule, under which trains should proceed with extreme caution at very low speed. However, one train collided with the back of another which was waiting at signal A491, and the first and second coaches of the colliding train were partially telescoped.
12 people were killed, with 5 people suffering serious injuries and 41 people slightly injured.
Investigation
The Inspecting Officer considered that the extent of the damage indicated a considerable speed, and that when the driver had passed signal A489, he had simply coasted down the steep down gradient, not expecting to find another train before the next signal. The driver claimed to have been travelling slowly and that his vision had been obscured by a cloud of dust, but it was felt his memory could have been affected by concussion.
Memorial
A memorial plaque to the accident was unveiled at Stratford Station on 8 April 2016 by Lyn Brown, Member of Parliament for West Ham. Members of the families of those killed in the crash were also in attendance along with Mike Brown, Commissioner of Transport for London.
References
Railways Archive account, including official Accident Report
Railways Archive account and Accident Report of 1946 accident
Disasters on the London Underground
Railway accidents in 1953
1953 disasters in the United Kingdom
1953 in London
Stratford, London
April 1953 events in the United Kingdom
History of the London Borough of Newham
Train collisions in England
Rail accidents caused by a driver's error | Stratford tube crash | [
"Technology"
] | 451 | [
"Railway accidents and incidents",
"Rail accident stubs"
] |
8,664,662 | https://en.wikipedia.org/wiki/Generalized%20Pochhammer%20symbol | In mathematics, the generalized Pochhammer symbol of parameter and partition generalizes the classical Pochhammer symbol, named after Leo August Pochhammer, and is defined as
It is used in multivariate analysis.
References
Gamma and related functions
Factorial and binomial topics | Generalized Pochhammer symbol | [
"Mathematics"
] | 57 | [
"Number theory stubs",
"Factorial and binomial topics",
"Number theory",
"Combinatorics"
] |
8,664,961 | https://en.wikipedia.org/wiki/E-Group | E-Groups are unique architectural complexes found among a number of ancient Maya settlements. They are central components to the settlement organization of Maya sites and, like many other civic and ceremonial buildings, could have served for astronomical observations. These sites have been discovered in the Maya Lowlands and other regions of Mesoamerica and have been dated to Middle Preclassical to Terminal Classic Period. It has been a common opinion that the alignments incorporated in these structural complexes correspond to the sun's solstices and equinoxes. Recent research has shown, however, that the orientations of these assemblages are highly variable, but pertain to alignment groups that are widespread in the Maya area and materialized mostly in other types of buildings, recording different agriculturally significant dates.
Origin of the name
E-Groups are named after "Group E" at the Classic period site of Uaxactun, which was the first one documented by Mesoamerican archaeologists. At Uaxactun, the Group E complex consists of a long terraced platform with three supra-structures arranged along a linear axis oriented north-south. The two smaller outlying structures flank the larger central temple. A stairway leads down to a plaza formed by Uaxactun's Pyramid E-VII. Three stelae immediately front the E-Group, and a larger stela is located midway between Group E and Pyramid E-VII. Each of the four stairways incorporated into the complex (the main central one and three leading up to each supra-structure) bears two side masks (for a total of 16). There is a small platform, often a tiered structure, located on the western part of the plaza opposite the central of the three supra-structures.
From a point of observation on Pyramid E-VII, the three structures have the following orientation:
North structure (Temple E-I) – in line with the sunrise at the Summer (June) solstice
South structure (Temple E-III) – in line with the sunrise at the Winter (December) solstice
Central structure (Temple E-II) – in line with the sunrise at the equinoxes (September and March)
As revealed by excavation reports, however, these alignments could not have been observationally functional, because they connect architectural elements from different periods.
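For context on the solstitial and equinoctial sunrise alignments listed above, the sketch below estimates sunrise azimuths from the spherical-astronomy relation cos(A) = sin(δ)/cos(φ), ignoring atmospheric refraction and horizon elevation; the latitude used for Uaxactun is approximate.

```python
import math

def sunrise_azimuth_deg(latitude_deg, declination_deg):
    """Azimuth of sunrise measured clockwise from true north, for a flat horizon.

    Uses cos(A) = sin(declination) / cos(latitude), which ignores refraction,
    horizon elevation and the finite size of the solar disc.
    """
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    return math.degrees(math.acos(math.sin(dec) / math.cos(lat)))

UAXACTUN_LATITUDE = 17.4   # degrees north, approximate

for label, declination in [("June solstice", 23.44), ("Equinox", 0.0), ("December solstice", -23.44)]:
    print(f"{label:>17}: sunrise azimuth ~{sunrise_azimuth_deg(UAXACTUN_LATITUDE, declination):.1f} deg")
```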
Distribution in Mesoamerica
E-Group structures are found at a number of sites across the Maya area, particularly in the lowlands region. The oldest-known E-Groups coincide with the earliest Maya ceremonial sites of the Preclassic period, indicative of the central role played by astronomical and administrative concerns in the very beginnings of Maya ceremonial construction and planning. The oldest documented E-Group in the Yucatán Peninsula is found at the site of Seibal. However, many earlier E Groups have been found in the Olmec region, western Maya Lowlands and along the Pacific coast in Chiapas.
Construction of E-groups continues on through the Classic period, with examples of these including the Lost World Pyramid at Tikal in the Petén Basin of northern Guatemala, and Structure 5C-2nd at Cerros, in Belize. Caracol, also in Belize and the site that defeated Tikal during the Middle Classic, has a large-scale E-Group located in the western portion of its central core.
Significance
Astronomical Use
E-groups have been heavily theorized to serve as astronomical observatories. In this manner, E-groups were considered useful for farmers who needed to schedule agricultural activities throughout the varying seasons. They were also hypothesized to serve as timekeeping tools for trading purposes. The leading theory held that E-groups were useful for observing solar zeniths, as the sun's path was significant to Maya culture. Research has found that E-groups were not precise in their astronomical measurements, indicating that their use was more symbolic than observational.
Mesoamerican Ball Game
The Mesoamerican Ball Game has been associated with E-groups. Certain E-groups, such as Seibal, have ball game imagery indicating the game played by people of that site. In addition, sites like Tikal included ball courts near their E-groups.
Public Spaces
Viewsheds were one architectural aspect constructed at locations containing Middle Preclassic E-Groups, which were mostly located in the Central Maya Lowlands. This discovery indicated that large plazas and other similar architectural structures demonstrate a visible community. It was observed that settlers of this region intentionally spaced these monuments apart from one another as a method of defining different groups. Additionally, recent evidence suggests that these different community spaces were civic.
Directionality
In the E-Group found within Chan's Central Group, researchers discovered that the directionality of the E-Group buildings was not only cross-linked with astrological beliefs, but also designed to maximize the agricultural capabilities of the community. For example, the east and west buildings were correlated with the sun's natural cycle, while the north and south buildings were correlated with the sun's positions at midday and in the underworld, respectively. The builders believed the sun also passed through the underworld when it could not be seen by the naked eye, while the sun's position at midday (north) referred to the sun shining on the heavens, exemplifying supreme power. This data collection was completed using LiDAR technology.
History
1924–1954
Frans Blom is credited with the discovery of the first E Group in 1924 while working in Uaxactún, Guatemala, a northeast region of the Lowland Maya. This site has been dated to originate from the Pre Classic Mayan period. The E Group he identified was an open plaza defined in the west by a pyramid and in the east by a platform supporting three north–south oriented buildings. Blom posited that the assemblage was an astronomical observatory based on the observation that when viewed from the western pyramid, the three eastern buildings marked the position of the sun at sunrise on the equinoxes and solstices. From the western radial structure of the E Group, sunrise during the summer solstice could be seen above the northern structure while the sunrise during the winter solstice can be observed above the southern structure. In 1928, Oliver Ricketson theorized that the sunrises during the equinoxes could be observed over the central eastern structure.
In 1943 Karl Ruppert published his discovery of 13 more E Group structures contained in the classic Maya Lowlands. He also identified 6 more structures that were similar to Blom's original discovery but had slight differences. In addition to these, Thompson had already unknowingly excavated two E Groups. In total during this time period 25 E Groups were identified at 22 different sites–most within a 110 km radius of Uaxactún. At this point only 4 E Groups had been excavated.
1955–1984
During this time period 10 additional E Groups were reported, with 4 more being excavated. Arlen Chase excavated the Cenote E Group in 1983, which led to him defining two styles of E Group. The first is the Cenote style, which dates back to around 1000 BCE and is characterized by a long eastern platform supporting one larger central building. The second kind is the Uaxactún style, with the shorter eastern platform supporting three smaller structures.
In 1980 Marvin Cohodas began discussing the relationship of E Groups to celebrating agricultural cycles, an idea that was further investigated by James Aimers (1993:171–179), as well as Travis Stanton and David Freidel (2003). Cohodas also began to discuss notion that the E Group related to origin places for the sun and moon.
1985–2016
142 additional E Groups were discovered during this period, many located in the Southeast Petén. By this point 34 E Groups had been excavated in total. Anthony Aveni and Horst Hartung (1988, 1989) examined Uaxactún's Group E complex to test the theory that it functioned as an astronomical observatory, with their results indicating that it likely did. Juan Pedro Laporte (2001:141) conducted a survey of 177 sites in the Southeast Petén and found that 85% had an E Group assemblage. Laporte (2001:142) noted that E Groups were the largest open public space at most sites, hinting further at their central place in the community.
In 2003, an analysis of the alignments of 40 E Groups showed them to be observatories (Aveni et al. 2003:162, Table 1). The analysis also showed a shift from solstice dating to zenith passage dating, a sign of influence from Teotihuacan at around 250–500 CE. Other sites were aligned with the 20-day Winals (Mayan months). This demonstrates that the particular design of a site's E Group was aligned with the values of the people that inhabited the site. There is still an ongoing debate about whether E Groups had other ritual purposes that were more important than astronomical observations; however, it is likely that both uses were important and should continue to be researched.
Current Research
Current research on E Groups has produced many important findings. The first of these is that early E Groups were made by clearing the landscape to bedrock then forming the bedrock into something with building like features. This bedrock was later encased by E Group reconstruction fills. Forming of bedrock is a common practice and important motif found across ancient America.
A second result has come from the analysis of varying E Group sizes and locations. One E Group variant found in Belize (Robin et al. 2012) is small enough, and close enough to a residential complex, that it can be inferred the E Group was used by a single family. This is in contrast to the Uaxactún E Group, which would have been used by the whole populace. Researchers would like to use E Groups to study population density and societal structure further; however, extensive later occupation has made this difficult.
Finally, it has been discovered that most E Groups are placed strategically along crucial Mesoamerican trade routes. This calls for further investigation into the purpose of E Groups and whether they might have served some economic purposes.
Pseudo E-Groups
In 2006, archaeologist Thomas Guderjan conducted research on what he called "Pseudo E-Groups." The term refers to a regional variant of E-Groups, found mainly in the eastern Petén during the Late Classic period. These sites mainly consist of two buildings joined by a shared substructure. Additionally, Pseudo E-Groups lack the western building that acts as the observation point. This difference is associated only with the E-Groups of the eastern Petén. There are currently four known Pseudo E-Groups: Blue Creek, Chan Chich, San Jose, and Quam Hill.
Notes
References
Maya architecture
Buildings and structures in Mesoamerica
Ancient astronomical observatories
Solstices | E-Group | [
"Astronomy"
] | 2,222 | [
"Time in astronomy",
"Solstices"
] |
8,665,001 | https://en.wikipedia.org/wiki/Raymarine%20Marine%20Electronics | Raymarine is a manufacturer and major supplier of electronic equipment for marine use. The company targets both recreational and light commercial markets with their products, which include:
GPS Chartplotters
VHF Radios
Digital Fishfinders / Sonar
Radar
Self-steering gear (Autohelm / Autopilot)
Satellite television
Software
The Raymarine brand has been on the market for over 80 years. Over that time its product range has grown to include visual navigation displays that work together with onboard performance sensors and the systems' operating software. Raymarine has a global service network operating in over 80 countries.
Until 2005, the company manufactured the majority of its products itself, but then began outsourcing production. A year later the company announced that it had completed this reorganization. The company now focuses on development, marketing, sales and service.
Raymarine has been taking charge of the distribution of its products for a number of years. This is done, among other ways, by purchasing national distributors (such as Eissing in Germany and SD Marine in France) or opening their own branches, so-called subsidiaries. On 1 July 2011, Raymarine Nederland's own branch (in Velp) was opened, after the then importer, Holland Nautic Apeldoorn, had indicated that it wanted to stop distributing the brand.
History
The company began as a division of the American company Raytheon, a manufacturer of defense systems, when it launched its first echo depth sounder in 1923.
In 1958, Raytheon was able to increase its product offering and market share through the takeover of Apelco.
In 1974, Derek Fawcett, a British-born inventor and long-time sailor, founded Nautech. In the beginning, the company had only one product: automatic steering systems for which it used the brand name Autohelm. Due to the great success of these systems, the company was able to offer an ever-wider range of systems. In 1989, Fawcett came up with the SeaTalk network, a digital-communications protocol that allowed Autohelm units to “talk” with other onboard instrumentation via a single-cable connection (a predecessor to contemporary NMEA 0183 and NMEA 2000 protocols). In 1990 Raytheon purchased Nautech, mainly so the American-based firm could access Nautech's extensive European distribution and sell its radars on the continent.
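SeaTalk itself is a proprietary protocol, but the open NMEA 0183 standard mentioned above gives a feel for the kind of instrument data such single-cable networks carry. The sketch below validates and splits a typical NMEA 0183 sentence; the sentence and its values are illustrative only and are not taken from any particular Raymarine device.

```python
def parse_nmea0183(sentence: str):
    """Validate an NMEA 0183 sentence's XOR checksum and return its comma-separated fields."""
    if not sentence.startswith("$") or "*" not in sentence:
        raise ValueError("not an NMEA 0183 sentence")
    body, checksum = sentence[1:].split("*", 1)
    computed = 0
    for char in body:
        computed ^= ord(char)          # checksum is the XOR of all bytes between '$' and '*'
    if computed != int(checksum, 16):
        raise ValueError("checksum mismatch")
    return body.split(",")

# A depth-below-transducer (DBT) sentence with illustrative values.
fields = parse_nmea0183("$SDDBT,8.1,f,2.4,M,1.3,F*0B")
print(fields)   # ['SDDBT', '8.1', 'f', '2.4', 'M', '1.3', 'F']
```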
Due to reorganizations in 1993 and 1998, the current Raymarine - at the time still under the name of the parent company - was created.
In January 2001, Raymarine was formed when the division was acquired in a management buy-out backed by Hg. In December 2004 the company was floated on the London Stock Exchange quadrupling Hg's investment.
In 2005, the company was involved in Ellen MacArthur's solo world circumnavigation.
The Global Financial Crisis of 2007–2008 hit Raymarine, and the recreational boating industry in general, hard. In May 2010, with Raymarine's lenders forcing the company into administration (the British version of business bankruptcy), the company was acquired by FLIR Systems.
References
Companies based in Hampshire
Marine electronics
Navigation system companies
Sonar manufacturers | Raymarine Marine Electronics | [
"Engineering"
] | 650 | [
"Marine electronics",
"Marine engineering"
] |
8,665,621 | https://en.wikipedia.org/wiki/Jack%20function | In mathematics, the Jack function is a generalization of the Jack polynomial, introduced by Henry Jack. The Jack polynomial is a homogeneous, symmetric polynomial which generalizes the Schur and zonal polynomials, and is in turn generalized by the Heckman–Opdam polynomials and Macdonald polynomials.
Definition
The Jack function $J_\kappa^{(\alpha)}(x_1,x_2,\ldots,x_m)$ of an integer partition $\kappa$, parameter $\alpha$, and arguments $x_1,x_2,\ldots,x_m$ can be recursively defined as follows:

For m=1
$$J_{k}^{(\alpha)}(x_1) = x_1^k\,(1+\alpha)(1+2\alpha)\cdots\bigl(1+(k-1)\alpha\bigr).$$

For m>1
$$J_\kappa^{(\alpha)}(x_1,x_2,\ldots,x_m) = \sum_{\mu} J_\mu^{(\alpha)}(x_1,x_2,\ldots,x_{m-1})\; x_m^{|\kappa/\mu|}\; \beta_{\kappa\mu},$$
where the summation is over all partitions $\mu$ such that the skew partition $\kappa/\mu$ is a horizontal strip, namely
$$\kappa_1 \ge \mu_1 \ge \kappa_2 \ge \mu_2 \ge \cdots$$
(each $\mu_i$ lies between consecutive parts of $\kappa$, so that $\kappa/\mu$ has at most one box in any column) and
$$\beta_{\kappa\mu} = \frac{\prod_{(i,j)\in\kappa} B_{\kappa\mu}^{\kappa}(i,j)}{\prod_{(i,j)\in\mu} B_{\kappa\mu}^{\mu}(i,j)},$$
where $B_{\kappa\mu}^{\nu}(i,j)$ equals $\nu_j'-i+\alpha\,(\nu_i-j+1)$ if $\kappa_j'=\mu_j'$ and $\nu_j'-i+1+\alpha\,(\nu_i-j)$ otherwise. The expressions $\kappa'$ and $\mu'$ refer to the conjugate partitions of $\kappa$ and $\mu$, respectively. The notation $(i,j)\in\kappa$ means that the product is taken over all coordinates $(i,j)$ of boxes in the Young diagram of the partition $\kappa$.
Combinatorial formula
In 1997, F. Knop and S. Sahi gave a purely combinatorial formula for the Jack polynomials $J_\mu^{(\alpha)}$ in n variables:
$$J_\mu^{(\alpha)} = \sum_{T} d_T(\alpha) \prod_{s\in T} x_{T(s)}.$$
The sum is taken over all admissible tableaux of shape $\mu$, and
$$d_T(\alpha) = \prod_{s\in T\ \mathrm{critical}} d_\mu(s)(\alpha)$$
with
$$d_\mu(s)(\alpha) = \alpha\,\bigl(a_\mu(s)+1\bigr) + l_\mu(s) + 1.$$
An admissible tableau of shape $\mu$ is a filling of the Young diagram of $\mu$ with numbers 1,2,…,n such that for any box (i,j) in the tableau,
$T(i,j) \ne T(i',j)$ whenever $i' > i$,
$T(i,j) \ne T(i',j-1)$ whenever $j > 1$ and $i' < i$.
A box $s=(i,j)$ is critical for the tableau T if $j > 1$ and $T(i,j) = T(i,j-1)$.
This result can be seen as a special case of the more general combinatorial formula for Macdonald polynomials.
C normalization
The Jack functions form an orthogonal basis in a space of symmetric polynomials, with inner product:
$$\langle f,g\rangle = \int_{[0,2\pi]^n} f\!\left(e^{i\theta_1},\ldots,e^{i\theta_n}\right)\,\overline{g\!\left(e^{i\theta_1},\ldots,e^{i\theta_n}\right)}\,\prod_{1\le j<k\le n}\left|e^{i\theta_j}-e^{i\theta_k}\right|^{2/\alpha}\,d\theta_1\cdots d\theta_n.$$
This orthogonality property is unaffected by normalization. The normalization defined above is typically referred to as the J normalization. The C normalization is defined as
$$C_\kappa^{(\alpha)}(x_1,\ldots,x_n) = \frac{\alpha^{|\kappa|}\,|\kappa|!}{j_\kappa}\,J_\kappa^{(\alpha)}(x_1,\ldots,x_n),$$
where
$$j_\kappa = \prod_{(i,j)\in\kappa}\bigl(\kappa_j'-i+\alpha\,(\kappa_i-j+1)\bigr)\bigl(\kappa_j'-i+1+\alpha\,(\kappa_i-j)\bigr).$$
For $\alpha=2$, $C_\kappa^{(2)}$ is often denoted by $C_\kappa$ and called the Zonal polynomial.
P normalization
The P normalization is given by the identity $J_\lambda^{(\alpha)} = c_\lambda P_\lambda^{(\alpha)}$, where
$$c_\lambda = \prod_{s\in\lambda}\bigl(\alpha\, a_\lambda(s) + l_\lambda(s) + 1\bigr),$$
where $a_\lambda(s)$ and $l_\lambda(s)$ denote the arm and leg length of the box $s$, respectively. Therefore, for $\alpha=1$, $P_\lambda^{(1)}$ is the usual Schur function.
Similar to Schur polynomials, $P_\lambda^{(\alpha)}$ can be expressed as a sum over Young tableaux. However, one needs to add an extra weight to each tableau that depends on the parameter $\alpha$.
Thus, a formula for the Jack function $P_\lambda^{(\alpha)}$ is given by
$$P_\lambda^{(\alpha)} = \sum_T \psi_T(\alpha) \prod_{s\in\lambda} x_{T(s)},$$
where the sum is taken over all tableaux of shape $\lambda$, and $T(s)$ denotes the entry in box s of T.
The weight $\psi_T(\alpha)$ can be defined in the following fashion: Each tableau T of shape $\lambda$ can be interpreted as a sequence of partitions
$$\emptyset = \nu_0 \to \nu_1 \to \cdots \to \nu_n = \lambda,$$
where $\nu_i/\nu_{i-1}$ defines the skew shape with content i in T. Then
$$\psi_T(\alpha) = \prod_i \psi_{\nu_i/\nu_{i-1}}(\alpha),$$
where
$$\psi_{\lambda/\mu}(\alpha) = \prod_{s} \frac{b_\mu(s)}{b_\lambda(s)}, \qquad b_\lambda(s) = \frac{\alpha\, a_\lambda(s) + l_\lambda(s) + 1}{\alpha\,\bigl(a_\lambda(s)+1\bigr) + l_\lambda(s)},$$
and the product is taken only over all boxes s in $\mu$ such that s has a box from $\lambda/\mu$ in the same row, but not in the same column.
Connection with the Schur polynomial
When $\alpha=1$ the Jack function is a scalar multiple of the Schur polynomial
$$J^{(1)}_\kappa(x_1,x_2,\ldots,x_n) = H_\kappa\, s_\kappa(x_1,x_2,\ldots,x_n),$$
where
$$H_\kappa = \prod_{(i,j)\in\kappa} h_\kappa(i,j) = \prod_{(i,j)\in\kappa}\bigl(\kappa_i + \kappa_j' - i - j + 1\bigr)$$
is the product of all hook lengths of $\kappa$.
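As a small worked check of this identity (not taken from the source), consider $\kappa=(2,1)$, whose hook lengths are 3, 1 and 1, so $H_{(2,1)} = 3$. Expanding in monomial symmetric functions,
$$J_{(2,1)}^{(\alpha)} = (2+\alpha)\,m_{(2,1)} + 6\,m_{(1,1,1)} \;\longrightarrow\; 3\,m_{(2,1)} + 6\,m_{(1,1,1)} = 3\bigl(m_{(2,1)} + 2\,m_{(1,1,1)}\bigr) = 3\, s_{(2,1)} = H_{(2,1)}\, s_{(2,1)} \quad (\alpha = 1),$$
in agreement with the stated relation.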
Properties
If the partition has more parts than the number of variables, then the Jack function is 0:
$$J_\kappa^{(\alpha)}(x_1,x_2,\ldots,x_m) = 0 \quad \text{if } \kappa_{m+1} > 0.$$
Matrix argument
In some texts, especially in random matrix theory, authors have found it more convenient to use a matrix argument in the Jack function. The connection is simple. If $X$ is a matrix with eigenvalues $x_1,x_2,\ldots,x_m$, then
$$J_\kappa^{(\alpha)}(X) = J_\kappa^{(\alpha)}(x_1,x_2,\ldots,x_m).$$
References
External links
Software for computing the Jack function by Plamen Koev and Alan Edelman.
MOPS: Multivariate Orthogonal Polynomials (symbolically) (Maple Package)
SAGE documentation for Jack Symmetric Functions
Orthogonal polynomials
Special functions
Symmetric functions | Jack function | [
"Physics",
"Mathematics"
] | 661 | [
"Symmetry",
"Special functions",
"Combinatorics",
"Symmetric functions",
"Algebra"
] |
8,665,765 | https://en.wikipedia.org/wiki/Earth%20Gravitational%20Model | The Earth Gravitational Models (EGM) are a series of geopotential models of the Earth published by the National Geospatial-Intelligence Agency (NGA). They are used as the geoid reference in the World Geodetic System.
The NGA provides the models in two formats: as the series of numerical coefficients to the spherical harmonics which define the model, or a dataset giving the geoid height at each coordinate at a given resolution.
Three model versions have been published: EGM84 with n=m=180, EGM96 with n=m=360, and EGM2008 with n=m=2160. Here n and m are the degree and order of the harmonic coefficients; the higher they are, the more parameters the model has, and the more precise it is. EGM2008 also contains expansions to n=2190. Developmental versions of the EGM are referred to as Preliminary Gravitational Models (PGMs).
Each version of EGM has its own EPSG code as a vertical datum.
History
EGM84
The first EGM, EGM84, was defined as a part of WGS84 along with its reference ellipsoid. WGS84 combines the old GRS 80 with the then-latest data, namely available Doppler, satellite laser ranging, and Very Long Baseline Interferometry (VLBI) observations, and a new least squares method called collocation. It allowed for a model with n=m=180 to be defined, providing a raster for every half degree (30', 30 minute) of latitude and longitude of the world. NIMA also computed and made available 30′×30′ mean altimeter derived gravity anomalies from the GEOSAT Geodetic Mission. 15′×15′ is also available.
EGM96
EGM96 from 1996 is the result of a collaboration between the National Imagery and Mapping Agency (NIMA), the NASA Goddard Space Flight Center (GSFC), and the Ohio State University. It took advantage of new surface gravity data from many different regions of the globe, including data newly released from the NIMA archives. Major terrestrial gravity acquisitions by NIMA since 1990 include airborne gravity surveys over Greenland and parts of the Arctic and the Antarctic, surveyed by the Naval Research Lab (NRL) and cooperative gravity collection projects, several of which were undertaken with the University of Leeds. These collection efforts have improved the data holdings over many of the world's land areas, including Africa, Canada, parts of South America and Africa, Southeast Asia, Eastern Europe, and the former Soviet Union. In addition, there have been major efforts to improve NIMA's existing 30' mean anomaly database through contributions over various countries in Asia. EGM96 also included altimeter derived anomalies derived from ERS-1 by Kort & Matrikelstyrelsen (KMS), (National Survey and Cadastre, Denmark) over portions of the Arctic, and the Antarctic, as well as the altimeter derived anomalies of Schoene [1996] over the Weddell Sea. The raster from EGM96 is provided at 15'x15' resolution.
EGM96 is a composite solution, consisting of:
a combination solution to degree and order 70,
a block diagonal solution from degree 71 to 359,
and the quadrature solution at degree 360.
PGM2000A is an EGM96 derivative model that incorporates normal equations for the dynamic ocean topography implied by the POCM4B ocean general circulation model.
EGM2008
The official Earth Gravitational Model EGM2008 has been publicly released by the National Geospatial-Intelligence Agency (NGA) EGM Development Team. Among other new data sources, the GRACE satellite mission provided a very high resolution model of the global gravity field. This gravitational model is complete to spherical harmonic degree and order 2159 (block diagonal), and contains additional coefficients extending to degree 2190 and order 2159. It provides a raster of 2.5′×2.5′ and an accuracy approaching 10 cm. A 1′×1′ raster is also distributed in lossless but integer-quantised PGM form, whereas the original .gsb files preserve the floating-point data. Some libraries, such as GeographicLib, use the uncompressed PGM raster rather than the original floating-point data of the .gsb format; the 16-bit quantisation introduces an error of up to 0.3 mm, so using lossless floating-point GeoTIFF or the original .gsb files avoids this. The two grids can be recreated with a Fortran program and source data from the NGA. "Test versions" of EGM2008 include PGM2004, PGM2006, and PGM2007.
As with all spherical harmonic models, EGM2008 can be truncated to have fewer coefficients with lower resolution.
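As a rough rule of thumb, the finest half-wavelength a spherical harmonic expansion can represent is about the Earth's half-circumference divided by the maximum degree, so truncating the coefficient set coarsens the model in a predictable way. The sketch below applies that approximation to the three published models; the numbers are indicative only.

```python
import math

EARTH_RADIUS_KM = 6371.0

def half_wavelength_km(max_degree: int) -> float:
    """Approximate finest resolvable half-wavelength for a spherical
    harmonic model truncated at the given maximum degree."""
    return math.pi * EARTH_RADIUS_KM / max_degree

for name, degree in [("EGM84", 180), ("EGM96", 360), ("EGM2008", 2160)]:
    print(f"{name}: degree {degree} -> ~{half_wavelength_km(degree):.0f} km")
# EGM84:   degree 180  -> ~111 km
# EGM96:   degree 360  -> ~56 km
# EGM2008: degree 2160 -> ~9 km
```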
EGM2020
EGM2020 is a planned new release (still not released as of September 2024) with the same structure as EGM2008 but with improved accuracy, incorporating newer data. It was originally planned for release in April 2020. The precursor version XGM2016 (X stands for experimental) was released in 2016 up to degree and order (d/o) 719. XGM2019e was released in 2020 up to spheroidal d/o 5399 (corresponding to a spatial resolution of 2′, roughly 4 km) and spherical d/o 5540, using a different spheroidal harmonic construction followed by conversion back into spherical harmonics. XGM2020 has also been released more recently.
See also
ETRS89
NAD83
References
External links
EGM96: The NASA GSFC and NIMA Joint Geopotential Model
Earth Gravitational Model 2008 (EGM2008)
GeographicLib provides a utility GeoidEval (with source code) to evaluate the geoid height for the EGM84, EGM96, and EGM2008 Earth gravity models. Here is an online version of GeoidEval.
The Tracker Component Library from the United States Naval Research Laboratory is a free Matlab library with a number of gravitational synthesis routines. The function getEGMGeoidHeight can be used to evaluate the geoid height under the EGM96 and EGM2008 models. Additionally, the gravitational potential, acceleration, and gravity gradient (second spatial derivatives of the potential) can be evaluated using the spherHarmonicEval function, as demonstrated in DemoGravCode.
Geodesy | Earth Gravitational Model | [
"Mathematics"
] | 1,352 | [
"Applied mathematics",
"Geodesy"
] |
8,665,879 | https://en.wikipedia.org/wiki/HR%203407 | HR 3407 is a single star in the southern constellation of Vela. It has the Bayer designation C Velorum; HR 3407 is the designation in the Bright Star Catalogue. It is an orange-hued star that is dimly visible to the naked eye with an apparent visual magnitude of 5.01. The distance to this object is approximately 1,040 light years based on parallax measurements, and it is drifting further away with a radial velocity of 4 km/s.
This object is an aging K-type supergiant star with a stellar classification of K1.5Ib. It has about three times the mass of the Sun and has expanded to around 71 times the Sun's radius, equivalent to about 0.33 AU, or roughly one third of the distance from the Sun to the Earth. It is spinning with a projected rotational velocity of 4.1 km/s. The star displays microvariability with a frequency of 10.99 cycles per day and an amplitude of 0.0036 magnitudes. It is radiating around 1,600 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 4,324 K.
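These figures are mutually consistent through the Stefan–Boltzmann relation; as a quick check (taking 5772 K for the Sun's effective temperature):
$$\frac{L}{L_\odot} = \left(\frac{R}{R_\odot}\right)^{2}\left(\frac{T_{\mathrm{eff}}}{T_{\mathrm{eff},\odot}}\right)^{4} = 71^{2}\left(\frac{4324}{5772}\right)^{4} \approx 5041 \times 0.315 \approx 1.6\times10^{3}.$$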
References
K-type supergiants
Vela (constellation)
Velorum, C
Durchmusterung objects
073155
3407
042088 | HR 3407 | [
"Astronomy"
] | 270 | [
"Vela (constellation)",
"Constellations"
] |
8,665,913 | https://en.wikipedia.org/wiki/HD%2078004 | HD 78004 is a single star in the constellation Vela. It has the Bayer designation c Velorum, while HD 78004 is the identifier from the Henry Draper catalogue. The object has an orange hue and is visible to the naked eye with an apparent visual magnitude of 3.75. It is located at a distance of approximately 320 light years from the Sun based on parallax, and is drifting further away with a radial velocity of +24 km/s.
This is an aging K-type giant star with a stellar classification of K2III, having exhausted the supply of hydrogen at its core then cooled and expanded off the main sequence. At present, it has 27 times the radius of the Sun. The star is radiating 271.5 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 4,483 K.
References
K-type giants
Vela (constellation)
Velorum, c
Durchmusterung objects
078004
044511
3614 | HD 78004 | [
"Astronomy"
] | 212 | [
"Vela (constellation)",
"Constellations"
] |
8,665,933 | https://en.wikipedia.org/wiki/Navisworks | Navisworks (previously known as JetStream) is a 3D design review package for Microsoft Windows.
Used primarily in the architecture, engineering, and construction (AEC) industries to complement 3D design packages (such as Autodesk Revit, AutoCAD, and MicroStation), Navisworks allows users to open and combine 3D models, navigate around them in real time (though without WASD-style keyboard navigation), and review the model using a set of tools including comments, redlining, viewpoints, and measurements. A selection of plug-ins enhances the package, adding interference detection, 4D time simulation, photorealistic rendering, and PDF-like publishing.
The software was originally created by Sheffield, UK based developer NavisWorks (a subsidiary of Lightwork Design). NavisWorks was purchased by Autodesk for $25 million on June 1, 2007.
Components
Navisworks (formerly JetStream) is built around a core module called Roamer and has a number of built-in functionalities:
Roamer - The core part allows users to open models from a range of 3D design and laser scan formats and combine them into a single 3D model. Users can then navigate around the model in real-time and review the model with a range of mark-up tools.
Publisher - This allows users to publish the complete 3D model into a single NWD file that can be freely opened by anyone using Freedom, a free viewer.
Clash Detective - A functionality to enable interference detection: users can select parts of the model and check for places where the geometry conflicts, which helps find faults in the design (a conceptual sketch of such an overlap test follows this list).
Renderer (formerly Presenter) - With the Renderer, users can apply materials and lighting to the model and produce photorealistic images and animations.
Quantification - By "taking off" the model, users can automatically make material estimates, measure areas and count building components.
TimeLiner - Adds 4D simulation so the user can link geometry to times and dates and simulate the construction or demolition of the model over time. Also links with project scheduling software (such as Microsoft Project or Primavera products) to import task data.
Animator - A feature that allows users to animate the model and interact with it.
Scripter - This allows the user to set up a collection of actions that they want to happen when certain event conditions are met.
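Interference checking of the kind Clash Detective performs is, at its core, a geometric overlap test between pairs of model elements; real implementations add spatial indexing, exact geometry tests, tolerances, and clearance rules. As a purely conceptual sketch — not a description of Autodesk's actual algorithm — an axis-aligned bounding-box pre-filter looks like this:

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box of a model element (illustrative only)."""
    name: str
    min_corner: tuple  # (x, y, z)
    max_corner: tuple  # (x, y, z)

def boxes_clash(a: Box, b: Box, tolerance: float = 0.0) -> bool:
    """True if the two boxes overlap by more than the tolerance on every axis."""
    return all(
        a.min_corner[i] + tolerance < b.max_corner[i]
        and b.min_corner[i] + tolerance < a.max_corner[i]
        for i in range(3)
    )

duct = Box("duct", (0.0, 0.0, 2.7), (4.0, 0.4, 3.0))
beam = Box("beam", (2.0, -1.0, 2.8), (2.3, 2.0, 3.1))
print(boxes_clash(duct, beam))  # True: a candidate clash to inspect in detail
```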
File format support
Navisworks Simulate and Manage are most notable for its support for a wide range of design file formats. Formats natively supported include:
NavisWorks - .nwd, .nwf, .nwc (all versions, no full backward compatibility)
AutoCAD Drawing - .dwg, .dxf (up to AutoCAD 2018)
MicroStation (SE, J, V8, & XM) - .dgn, .prp, prw (up to v7, & v8)
3D Studio Max - .3ds, .prj (up to 3ds Max 2018)
ACIS SAT - .sat, .sab (all ASM SAT, up to ASM SAT v7)
DWF - .dwf, .dwfx (all versions)
CATIA - .model, session, .exp, dlv3, .CATPart, .CATProduct, .cgr (up to v4, & v5)
IFC - .ifc (IFC2X_PLATFORM, IFC2X_FINAL, IFC2X2_FINAL, IFC2X3, IFC4)
IGES - *.igs*, *.iges* (all versions)
Informatix/MicroGDS - .man, .cv7 (v10)
Inventor - .ipt, .iam, .ipj (up to Inventor 2018)
CIS/2 - .stp (STRUCTURAL_FRAME_SCHEMA)
JT Open - .jt (up to v10)
NX - .prt (up to v9)
Revit - .rvt (up to 2011–2022)
RVM - .rvm (up to v12.0 SP5)
SketchUp - .skp (v5 up to 2015)
PDS Design Review - .dri (legacy file format, support up to 2007)
STL - .stl (binary only)
VRML - .wrl, .wrz (VRML1, VRML2)
Parasolid - .x_b (up to schema 26)
FBX - .fbx (FBX SDK 2017)
Pro/ENGINEER - .prt, .asm, .g, .neu (Wildfire v5, Creo Parametric v1-v3)
STEP - .stp, .step (AP214, AP203E3, AP242)
Solidworks - .prt, .sldprt, .asm, .sldasm (2001, plus 2015)
PDF - .pdf (all versions)
Rhino - .3dm (up to v5)
Solid Edge - .stp, .prt
Additional products that are supported through Autodesk, and third parties:
Revit
MicroStation
3DS Max
ArchiCAD
References
External links
Autodesk products
3D graphics software
BIM software
Building information modeling
Computer-aided design software
Windows graphics-related software | Navisworks | [
"Engineering"
] | 1,109 | [
"Building engineering",
"Building information modeling"
] |
8,665,976 | https://en.wikipedia.org/wiki/Savart%20wheel | The Savart wheel is an acoustical device named after the French physicist Félix Savart (1791–1841), which was originally conceived and developed by the English scientist Robert Hooke (1635–1703).
A card held to the edge of a spinning toothed wheel will produce a tone whose pitch varies with the speed of the wheel. A mechanism of this sort, made using brass wheels, allowed Hooke to produce sound waves of a known frequency, and to demonstrate to the Royal Society in 1681 how pitch relates to frequency. For practical purposes Hooke's device was soon supplanted by the invention of the tuning fork.
About a century and a half after Hooke's work, the mechanism was taken up again by Savart for his investigations into the range of human hearing. In the 1830s Savart was able to construct large, finely-toothed brass wheels producing frequencies of up to 24 kHz that seem to have been the world's first artificial ultrasonic generators. In the later 19th century, Savart's wheels were also used in physiological and psychological investigations of time perception.
Nowadays, Savart wheels are commonly demonstrated in physics lectures, sometimes driven and sounded by an air hose (in place of the card mechanism).
Description
The basic device consists of a ratchet-wheel with a large number of uniformly spaced teeth. When the wheel is turned slowly while the edge of a card is held against the teeth a succession of distinct clicks can be heard. When the wheel is spun rapidly it produces a shrill tone, whereas if the wheel is allowed to turn more slowly the tone progressively decreases in pitch. Since the frequency of the tone is directly proportional to the rate at which the teeth strike the card, a Savart wheel can be calibrated to provide an absolute measure of pitch. Multiple wheels of different sizes, carrying different numbers of teeth, can also be attached so as to allow several pitches (or chords) to be produced while the axle is being turned at a constant rate.
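Because each tooth strike produces one pressure pulse, the pitch follows directly from the tooth count and the rotation rate. A minimal calculation (the rotation rate chosen here is only an assumed example, sized to match the 720-tooth, 24 kHz wheels described below) is:

```python
def savart_frequency_hz(teeth: int, revolutions_per_second: float) -> float:
    """Frequency of the tone produced by a toothed wheel striking a card."""
    return teeth * revolutions_per_second

# A 720-tooth wheel spun at an assumed 33.3 revolutions per second
# would give a tone near 24 kHz.
print(savart_frequency_hz(720, 33.3))   # ~23976 Hz
```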
Hooke's wheel
Hooke began work on his wheel in March 1676, in conjunction with the renowned clockmaker Thomas Tompion, following conversations with the music theorist William Holder. He had a longstanding interest in musical vibrations, and a decade earlier in 1666 had even boasted to Samuel Pepys that he could tell the rate a fly's wings were beating from the sound they made. In July 1681, he demonstrated to the Royal Society his new device for producing distinct musical tones by striking the teeth of fast-turning brass wheels. In this way, he was able to generate for the first time sound waves of known frequency, and provide an empirical demonstration of the correspondence between the human perception of pitch and the physical property of sound-wave frequency. Furthermore, by fitting different wheels alongside one another on the same axis, he was able to verify frequency ratios for musical intervals, such as perfect fifths and fourths, etc.
Hooke published his findings in 1705. Despite providing an objective measure of pitch, for everyday use his wheel was soon made irrelevant by the invention in 1711 of the tuning fork.
Savart's version
Hooke's device was not used again for study purposes for over a century. Its next documented usage was in 1830 when Savart reported his use of a system similar to Hooke's which he developed while investigating the lower range of human hearing. Savart's specific contribution was to attach a tachometer to the axis of the toothed wheel to facilitate calibration of the tooth rate. Savart used his wheel as a practical alternative to John Robison's siren, which was also being adopted at the time by Charles Cagniard de la Tour to test the range of human hearing. By 1834 Savart was constructing brass wheels with a width of 82 cm, containing as many as 720 teeth. These wheels, which could produce frequencies up to 24 kHz, have been tentatively proposed as the first artificial generators of ultrasound.
Use in time perception experiments
In the later 19th century, Savart's wheel was adapted for use in physiological and psychological investigations of the human perception of time. In 1873, the Austrian physiologist Sigmund Exner reported the auditory ability to distinguish successive clicks from the wheel (or, alternatively, rapidly snapped electric sparks) at time intervals as close as 2 milliseconds (1/500 sec). A modified wheel that produced varying numbers of clicks at different intervals was later used by the American psychologists G. Stanley Hall and Joseph Jastrow, who in 1886 reported on the limits to the human perception of acoustic discontinuities.
Musical and other applications
In 1894, French electrical engineer Gustave Trouvé patented an electrically (or clockwork) powered keyboard instrument capable of playing a series of 88 variously-sized Savart wheels from a piano keyboard, allowing harmonic chords and dynamics. The same principle is used in modern-day electromechanical organs, such as the Hammond organ, that make use of tonewheels.
The concept has also been adapted to produce an experimental musical instrument created by Bart Hopkin. This application of Savart's wheel consists of a series of 30 wooden disks of increasing size mounted on a motorized axle. Rasping vibrations are induced in a plectrum when it comes into contact with the ridges that line each disk at regular intervals, and are amplified in a styrofoam cup which acts as a sounding board. The instrument is claimed to make "the most obtrusive, obnoxious and irritating sound ever known."
Nowadays, Savart wheels are commonly used for demonstrations during physics lectures. In one variant, the wheel can be driven by an air hose blowing on the teeth; in this case, the pitch of the sound produced will vary with the force of the air current.
See also
Singing bird box
Tonometer
Tonewheel
Notes and references
Notes
References
External links
"Savart's Wheel" – musical instrument designed by Bart Hopkin
Acoustics
Pitch (music)
Physics experiments
Ultrasound
Experimental musical instruments
Mechanical musical instruments
Lamellophones | Savart wheel | [
"Physics",
"Technology"
] | 1,226 | [
"Mechanical musical instruments",
"Machines",
"Physics experiments",
"Classical mechanics",
"Acoustics",
"Physical systems",
"Experimental physics"
] |