| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
4,900,609 | https://en.wikipedia.org/wiki/Effective%20evolutionary%20time | The hypothesis of effective evolutionary time attempts to explain gradients, in particular latitudinal gradients, in species diversity. It was originally named "time hypothesis".
Background
Low (warm) latitudes contain significantly more species than high (cold) latitudes. This has been shown for many animal and plant groups, although exceptions exist (see latitudinal gradients in species diversity). An example of an exception is helminths of marine mammals, which have the greatest diversity in northern temperate seas, possibly because of small population densities of hosts in tropical seas that prevented the evolution of a rich helminth fauna, or because they originated in temperate seas and had more time for speciation there. It has become increasingly apparent that species diversity is best correlated with environmental temperature and, more generally, environmental energy. These findings are the basis of the hypothesis of effective evolutionary time. Species have accumulated fastest in areas where temperatures are highest: mutation rates and the speed of selection are highest there because of faster physiological rates, and generation times, which also determine the speed of selection, are shortest at high temperatures. This leads to a faster accumulation of species, which are absorbed into the abundantly available vacant niches, in the tropics. Vacant niches are available at all latitudes, and differences in the number of such niches can therefore not be the limiting factor for species richness. The hypothesis also incorporates a time factor: habitats with a long undisturbed evolutionary history will have greater diversity than habitats exposed to disturbances in evolutionary history.
The hypothesis of effective evolutionary time offers a causal explanation of diversity gradients, although it is recognized that many other factors can also contribute to and modulate them.
Historical aspects
Some aspects of the hypothesis are based on earlier studies. Bernhard Rensch, for example, stated that evolutionary rates also depend on temperature: numbers of generations in poikilotherms, but sometimes also in homoiotherms, are greater at higher temperatures, and the effectiveness of selection is therefore greater. Ricklefs refers to this hypothesis as the "hypothesis of evolutionary speed" or "higher speciation rates". Genera of Foraminifera in the Cretaceous and families of Brachiopoda in the Permian had greater evolutionary rates at low than at high latitudes. That mutation rates are greater at high temperatures has been known since the classical investigations of Nikolay Timofeev-Ressovsky et al. (1935), although few later studies have been conducted. Moreover, these findings were not applied to evolutionary problems.
The hypothesis of effective evolutionary time differs from these earlier approaches as follows. It proposes that species diversity is a direct consequence of temperature-dependent processes and the time ecosystems have existed under more or less equal conditions. Since vacant niches into which new species can be absorbed are available at all latitudes, the consequence is accumulation of more species at low latitudes. All earlier approaches remained without basis without the assumption of vacant niches, as there is no evidence that niches are generally narrower in the tropics, i.e., an accumulation of species cannot be explained by subdivision of previously utilized niches (see also Rapoport's rule). The hypothesis, in contrast to most other hypotheses attempting to explain latitudinal or other gradients in diversity, does not rely on the assumption that different latitudes or habitats generally have different "ceilings" for species numbers, which are higher in the tropics than in cold environments. Such different ceilings are thought to be, for example, determined by heterogeneity or area of the habitat. But such factors, although not setting ceilings, may well modulate the gradients.
Recent studies
A considerable number of recent studies support the hypothesis. Thus, diversity of marine benthos, interrupted by some collapses and plateaus, has risen from the Cambrian to the Recent, and there is no evidence that saturation has been reached. Rates of diversification per time unit for birds and butterflies increase towards the tropics. Allen et al. found a general correlation between environmental temperature and species richness for North and Central American trees, for amphibians, fish, Prosobranchia and fish parasites. They showed that species richness can be predicted from the biochemical kinetics of metabolism, and concluded that evolutionary rates are determined by generation times and mutation rates both correlated with metabolic rates which have the same Boltzmann relation with temperature. They further concluded that these findings support the mechanisms for latitudinal gradients proposed by Rohde. Gillooly et al. (2002) described a general model also based on first principles of allometry and biochemical kinetics which makes predictions about generation times as a function of body size and temperature. Empirical findings support the predictions: in all cases that were investigated (birds, fish, amphibians, aquatic insects, zooplankton) generation times are negatively correlated with temperature. Brown et al. (2004) further developed these findings into a general metabolic theory of ecology. Indirect evidence points to increased mutation rates at higher temperatures, and the energy-speciation hypothesis is the best predictor for species richness of ants. Finally, computer simulations using the Chowdhury ecosystem model have shown that results correspond most closely to empirical data when the number of vacant niches is kept large. Rohde gives detailed discussions of these and other examples. Of particular importance is the study by Wright et al. (2006) which was specifically designed to test the hypothesis. It showed that molecular substitution rates of tropical woody plants are more than twice as large as those of temperate species, and that more effective genetic drift in smaller tropical populations cannot be responsible for the differences, leaving only direct temperature effects on mutation rates as an explanation. Gillman et al. (2009) examined 260 mammal species of 10 orders and 29 families and found that substitution rates in the cytochrome B gene were substantially faster in species at warm latitudes and elevations, compared with those from cold latitudes and elevations. A critical examination of the data showed that this cannot be attributed to genetic drift or body mass differentials. The only possibilities left are a Red Queen effect or direct effects of thermal gradients (including possibly an effect of torpor/hibernation differentials). Rohde (1992, 1978) had already pointed out that “it may well be that mammalian diversity is entirely determined by the diversity of plants and poikilothermic animals further down in the hierarchy”, i.e., by a Red Queen effect. He also pointed out that exposure to irradiation including light is known to cause mutations in mammals, and that some homoiothermic animals have shorter generation times in the tropics, which, either separately or jointly, may explain the effect found by Gillman et al. Gillman et al. (2010) extended their earlier study on plants by determining whether the effect is also found within highly conserved DNA. They examined the 18S ribosomal gene in the same 45 pairs of plants.
Indeed, the rate of evolution was 51% faster in the tropical species than in their temperate sister species. Furthermore, the substitution rate in 18S correlated positively with that in the more variable ITS region. These results lend further strong support to the hypothesis. Wright et al. (2010) tested the hypothesis on 188 species of amphibians belonging to 18 families, using the mitochondrial RNA genes 12S and 16S, and found substantially faster substitution rates for species living in warmer habitats at both lower latitudes and lower elevations. Thus, the hypothesis has now been confirmed for several genes and for both plants and animals.
Vázquez, D.P. and Stevens, R.D. (2004) conducted a meta-analysis of previous studies and found no evidence that niches are generally narrower in the tropics than at high latitudes. This can be explained only by the assumption that niche space was not and is not saturated, having the capacity to absorb new species without affecting the niche width of species already present, as predicted by the hypothesis.
Depth gradients
Species diversity in the deep sea was largely underestimated until recently (e.g., Briggs 1994: total marine diversity less than 200,000 species). Although our knowledge is still very fragmentary, some recent studies appear to suggest much greater species numbers (e.g., Grassle and Maciolek 1992: 10 million macroinvertebrates in soft-bottom sediments of the deep sea). Further studies must show whether this can be verified. A rich diversity in the deep sea can be explained by the hypothesis of effective evolutionary time: although temperatures are low, conditions have been more or less equal over large time spans, certainly much longer than in most or all surface waters.
References
Evolutionary ecology
Biogeography
Systems ecology | Effective evolutionary time | Biology,Environmental_science | 1,789 |
63,167,039 | https://en.wikipedia.org/wiki/%28Z%29-6-Dodecen-4-olide | (Z)-6-Dodecen-4-olide is a volatile, unsaturated lipid and γ-lactone found in dairy products, and secreted as a pheromone by some even-toed ungulates. It has a creamy, cheesy, fatty flavour with slight floral undertones in small concentrations, but contributes towards the strong, musky smell of a few species of antelope and deer in higher concentrations.
Function
(Z)-6-Dodecen-4-olide is believed to play a part in olfactory communication between individuals of the Columbian black-tailed deer (Odocoileus hemionus columbianus), and is secreted into urine during a rut. (Z)-6-Dodecen-4-olide is then deposited onto the tuft of hair making up the tarsal gland of the deer, as the urine runs down the gland, during a behavior called rub-urination. Similarly, it has also been identified in secretions of the interdigital and pedal glands of the bontebok (Damaliscus pygargus) and the blesbok (Damaliscus pygargus phillipsi) where it is believed to play a role in carrying information about the dominance status, sex, health condition and possibly other characteristics of the animal it came from. The (Z)-6-dodecen-4-olide is replenished daily to maintain the pungent smell. It has also been isolated from Polianthes tuberosa, a perennial plant used in the perfume industry since the 17th century for its powerful floral scent.
References
Lactones
Unsaturated compounds | (Z)-6-Dodecen-4-olide | Chemistry | 354 |
1,191,600 | https://en.wikipedia.org/wiki/Document%20processing | Document processing is a field of research and a set of production processes aimed at making an analog document digital. Document processing does not simply aim to photograph or scan a document to obtain a digital image, but also to make it digitally intelligible. This includes extracting the structure of the document or the layout and then the content, which can take the form of text or images. The process can involve traditional computer vision algorithms, convolutional neural networks or manual labor. The problems addressed are related to semantic segmentation, object detection, optical character recognition (OCR), handwritten text recognition (HTR) and, more broadly, transcription, whether automatic or not. The term can also include the phase of digitizing the document using a scanner and the phase of interpreting the document, for example using natural language processing (NLP) or image classification technologies. It is applied in many industrial and scientific fields for the optimization of administrative processes, mail processing and the digitization of analog archives and historical documents.
Background
Document processing was initially, and to some extent still is, a kind of production-line work dealing with the treatment of documents, such as letters and parcels, with the aim of sorting and extracting data, often on a massive scale. This work could be performed in-house or through business process outsourcing. Document processing can indeed involve some kind of externalized manual labor, such as Mechanical Turk.
As an example of manual document processing, as recently as 2007, document processing for "millions of visa and citizenship applications" relied on the use of "approximately 1,000 contract workers" working to "manage mail room and data entry."
While document processing involved data entry via keyboard well before use of a computer mouse or a computer scanner, a 1990 article in The New York Times regarding what it called the "paperless office" stated that "document processing begins with the scanner". In this context, a former Xerox vice-president, Paul Strassman, expressed a critical opinion, saying that computers add rather than reduce the volume of paper in an office. It was said that the engineering and maintenance documents for an airplane weigh "more than the airplane itself".
Automatic document processing
As the state of the art advanced, document processing transitioned to handling "document components ... as database entities."
A technology called automatic document processing or sometimes intelligent document processing (IDP) emerged as a specific form of Intelligent Process Automation (IPA), combining artificial intelligence such as Machine Learning (ML), Natural Language Processing (NLP) or Intelligent Character Recognition (ICR) to extract data from several types of documents. Advancements in automatic document processing, also called Intelligent Document Processing, improve the ability to process unstructured data with fewer exceptions and greater speed.
Applications
Automatic document processing applies to a whole range of documents, whether structured or not. For instance, in the world of business and finance, technologies may be used to process paper-based invoices, forms, purchase orders, contracts, and currency bills. Financial institutions use intelligent document processing to process high volumes of forms such as regulatory forms or loan documents. IDP uses AI to extract and classify data from documents, replacing manual data entry.
In medicine, document processing methods have been developed to facilitate patient follow-up and streamline administrative procedures, in particular by digitizing medical or laboratory analysis reports. The goal is also to standardize medical databases. Algorithms are also directly used to assist physicians in medical diagnosis, e.g. by analyzing magnetic resonance images, or microscopic images.
Document processing is also widely used in the humanities and digital humanities, in order to extract historical big data from archives or heritage collections. Specific approaches were developed for various sources, including textual documents, such as newspaper archives, but also images, or maps.
Technologies
Although traditional computer vision algorithms were widely used to solve document processing problems from the 1980s onward, they have been gradually replaced by neural network technologies in the 2010s. However, traditional computer vision technologies are still used, sometimes in conjunction with neural networks, in some sectors.
Many technologies support the development of document processing, in particular optical character recognition (OCR), and handwritten text recognition (HTR), which allow the text to be transcribed automatically. Text segments as such are identified using instance or object detection algorithms, which can sometimes also be used to detect the structure of the document. The resolution of the latter problem sometimes also uses semantic segmentation algorithms.
These technologies often form the core of document processing. However, other algorithms may intervene before or after these processes. Indeed, document digitization technologies are also involved, whether in the form of classical or three-dimensional scanning. The digitization of 3D documents can in particular draw on techniques derived from photogrammetry. Sometimes, specific 2D scanners must also be developed to adapt to the size of the documents or for reasons of scanning ergonomics. Document processing also depends on the digital encoding of the documents in a suitable file format. Furthermore, the processing of heterogeneous databases can rely on image classification technologies.
At the other end of the chain are various image completion, extrapolation or data cleanup algorithms. For textual documents, the interpretation can use natural language processing (NLP) technologies.
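As a concrete illustration of the OCR and layout-detection steps described above, the following minimal sketch uses the open-source Tesseract engine via the pytesseract wrapper together with Pillow. The choice of these libraries, the language setting and the file name scan.png are assumptions made for the example rather than a reference to any particular production pipeline.

```python
# Minimal OCR sketch (assumes Tesseract, pytesseract and Pillow are installed).
# It converts a scanned page into plain text and reports per-word bounding boxes,
# i.e. a very small slice of a document processing pipeline: digitized image in,
# machine-readable text and rough layout information out.
from PIL import Image
import pytesseract

image = Image.open("scan.png")  # hypothetical scanned page

# Full-page transcription (plain text).
text = pytesseract.image_to_string(image, lang="eng")

# Word-level bounding boxes, a crude stand-in for layout analysis.
boxes = pytesseract.image_to_data(image, lang="eng", output_type=pytesseract.Output.DICT)

print(text[:200])
for word, left, top in zip(boxes["text"], boxes["left"], boxes["top"]):
    if word.strip():
        print(word, left, top)
```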
See also
Document automation
Document modelling
Data Processing
Document Imaging
Duplex scanning
Text mining
Workflow
References
Automatic identification and data capture
Applied data mining
Applications of computer vision | Document processing | Technology | 1,088 |
3,865,506 | https://en.wikipedia.org/wiki/%2A-autonomous%20category | In mathematics, a *-autonomous (read "star-autonomous") category C is a symmetric monoidal closed category equipped with a dualizing object . The concept is also referred to as Grothendieck—Verdier category in view of its relation to the notion of Verdier duality.
Definition
Let C be a symmetric monoidal closed category, with tensor product ⊗ and internal hom ⊸. For any object A and any object ⊥, there exists a morphism
∂_A : A → (A ⊸ ⊥) ⊸ ⊥
defined as the image, under the bijection defining the monoidal closure
Hom(A ⊗ (A ⊸ ⊥), ⊥) ≅ Hom(A, (A ⊸ ⊥) ⊸ ⊥),
of the morphism
ev ∘ σ : A ⊗ (A ⊸ ⊥) → ⊥,
where σ is the symmetry of the tensor product and ev is the evaluation morphism. An object ⊥ of the category C is called dualizing when the associated morphism ∂_A is an isomorphism for every object A of the category C.
Equivalently, a *-autonomous category is a symmetric monoidal category C together with a functor (−)* : C^op → C such that for every object A there is a natural isomorphism A ≅ A**, and for every three objects A, B and C there is a natural bijection
Hom(A ⊗ B, C*) ≅ Hom(A, (B ⊗ C)*).
The dualizing object of C is then defined by ⊥ = I*, where I is the monoidal unit. The equivalence of the two definitions is shown by identifying A* = A ⊸ ⊥.
Properties
Compact closed categories are *-autonomous, with the monoidal unit as the dualizing object. Conversely, if the unit of a *-autonomous category is a dualizing object then there is a canonical family of maps
.
These are all isomorphisms if and only if the *-autonomous category is compact closed.
Examples
A familiar example is the category of finite-dimensional vector spaces over any field k made monoidal with the usual tensor product of vector spaces. The dualizing object is k, the one-dimensional vector space, and dualization corresponds to transposition. Although the category of all vector spaces over k is not *-autonomous, suitable extensions to categories of topological vector spaces can be made *-autonomous.
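For the finite-dimensional example just described, the structure can be spelled out explicitly: taking the dualizing object to be k and V* = Hom_k(V, k), the natural bijection required by the second definition above is the usual tensor-hom adjunction, and the canonical map into the double dual is the evaluation map, an isomorphism precisely because the spaces are finite-dimensional. A minimal sketch of these two ingredients:

```latex
% FinVect_k as a *-autonomous category, with dualizing object k and V* = Hom_k(V, k).
\[
  \operatorname{Hom}(U \otimes V,\, W^{*}) \;\cong\; \operatorname{Hom}(U,\, (V \otimes W)^{*}),
  \qquad
  \partial_V \colon V \xrightarrow{\;\sim\;} V^{**}, \quad v \mapsto \bigl(\varphi \mapsto \varphi(v)\bigr).
\]
```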
On the other hand, the category of topological vector spaces contains an extremely wide full subcategory, the category Ste of stereotype spaces, which is a *-autonomous category with respect to a suitable dualizing object and tensor product.
Various models of linear logic form *-autonomous categories, the earliest of which was Jean-Yves Girard's category of coherence spaces.
The category of complete semilattices with morphisms preserving all joins but not necessarily meets is *-autonomous with dualizer the chain of two elements. A degenerate example (all homsets of cardinality at most one) is given by any Boolean algebra (as a partially ordered set) made monoidal using conjunction for the tensor product and taking 0 as the dualizing object.
The formalism of Verdier duality gives further examples of *-autonomous categories. For example, the bounded derived category of constructible ℓ-adic sheaves on an algebraic variety has this property. Further examples include derived categories of constructible sheaves on various kinds of topological spaces.
An example of a self-dual category that is not *-autonomous is the category of finite linear orders and continuous functions, which has * but is not autonomous: its dualizing object is the two-element chain but there is no tensor product.
The category of sets and their partial injections is self-dual because the converse of a partial injection is again a partial injection.
The concept of *-autonomous category was introduced by Michael Barr in 1979 in a monograph with that title. Barr defined the notion for the more general situation of V-categories, categories enriched in a symmetric monoidal or autonomous category V. The definition above specializes Barr's definition to the case V = Set of ordinary categories, those whose homobjects form sets (of morphisms). Barr's monograph includes an appendix by his student Po-Hsiang Chu that develops the details of a construction due to Barr showing the existence of nontrivial *-autonomous V-categories for all symmetric monoidal categories V with pullbacks, whose objects became known a decade later as Chu spaces.
Non symmetric case
In a biclosed monoidal category C, not necessarily symmetric, it is still possible to define a dualizing object and then define a *-autonomous category as a biclosed monoidal category with a dualizing object. They are equivalent definitions, as in the symmetric case.
References
Monoidal categories
Closed categories | *-autonomous category | Mathematics | 857 |
310,577 | https://en.wikipedia.org/wiki/The%20Abolition%20of%20Work | "The Abolition of Work" is an essay written by Bob Black in 1985. It was part of Black's first book, an anthology of essays entitled The Abolition of Work and Other Essays published by Loompanics Unlimited. It is an exposition of Black's "type 3 anarchism" – a blend of post-Situationist theory and individualist anarchism – focusing on a critique of the work ethic.
Influence
"The Abolition of Work" was a significant influence on futurist and design critic Bruce Sterling, who at the time was a leading cyberpunk science fiction author and called it "one of the seminal underground documents of the 1980s". The essay's critique of work formed the basis for the anti-labor faction in Sterling's 1988 novel Islands in the Net.
See also
Anarcho-syndicalism
Anti-work
Automation
Freedom of choice
He who does not work, neither shall he eat
Issues in anarchism
Libertarian socialism
Post-work society
Refusal of work
Universal basic income
Wage slavery
Work as play
Workers of the world, unite!
References
Further reading
External links
The Abolition of Work and Other Essays, the 1986 collection by Bob Black hosted in its entirety on Inspiracy.com
"The Approaching Obsolescence of Housework: A Working-Class Perspective", chapter thirteen of Women, Race & Class, by Angela Davis.
Post-left anarchism
Criticism of work
Refusal of work
1985 essays
Essays about anarchism
Anarchist works
Literature critical of work and the work ethic
Anarcho-syndicalism
Works about automation
Universal basic income in the United States | The Abolition of Work | Engineering | 329 |
1,303,530 | https://en.wikipedia.org/wiki/Hybrid%20seed | In agriculture and gardening, hybrid seed is produced by deliberately cross-pollinating parent plants which are genetically distinct. The parents are usually two inbred strains.
Hybrid seed is common in industrial agriculture and home gardening. It is one of the main contributors to the dramatic rise in agricultural output during the last half of the 20th century. Alternatives to hybridization include open pollination and clonal propagation.
An important factor is the heterosis that results from the genetic differences between the parents, which can produce higher yield and faster growth rate. Crossing any particular pair of inbred strains may or may not result in superior offspring. The parent strains used are carefully chosen so as to achieve the uniformity that comes from the uniformity of the parents, and the superior performance that comes from heterosis.
Elite inbred strains are used that express well-documented and consistent phenotypes with yield that is relatively good for inbred plants. Other characteristics of the parents are carefully chosen to provide desirable traits such as improved color, flavour, or disease resistance.
Hybrid seeds planted by the farmer produce similar plants, but the seeds of the next generation from those hybrids will not consistently have the desired characteristics because of genetic assortment. It is therefore rarely desirable to save the seeds from hybrid plants to start the next crop.
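As a toy illustration of this point, consider a single gene with alleles A and a: crossing the inbred parents AA × aa gives uniformly heterozygous Aa hybrids, while selfing those hybrids segregates into roughly 1 AA : 2 Aa : 1 aa. The sketch below simply enumerates gamete combinations; the one-gene model and the allele names are deliberate simplifications for illustration, since real traits typically involve many loci.

```python
# Toy single-locus model of why hybrid (F1) seed is uniform but F2 seed segregates.
from collections import Counter
from itertools import product

def cross(parent1, parent2):
    """Count offspring genotypes over all equally likely gamete combinations."""
    return Counter("".join(sorted(g1 + g2)) for g1, g2 in product(parent1, parent2))

f1 = cross("AA", "aa")   # Counter({'Aa': 4}): every F1 plant is Aa, hence uniform
f2 = cross("Aa", "Aa")   # Counter({'Aa': 2, 'AA': 1, 'aa': 1}): F2 segregates 1:2:1
print(f1, f2)
```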
History
In the US, experimental agriculture stations in the 1920s investigated hybrid crops, and by the 1930s farmers had widely adopted the first hybrid maize.
See also
F1 hybrid
Hybrid (biology)
Plant breeding
Seed saving
Sterile male plant
References
Hybrid plants
Seeds
Breeding
Hybridisation (biology)
Intensive farming
Ornamental plants
Pollination management | Hybrid seed | Chemistry,Biology | 326 |
58,283 | https://en.wikipedia.org/wiki/Prandtl%20number | The Prandtl number (Pr) or Prandtl group is a dimensionless number, named after the German physicist Ludwig Prandtl, defined as the ratio of momentum diffusivity to thermal diffusivity. The Prandtl number is given as:where:
: momentum diffusivity (kinematic viscosity), , (SI units: m2/s)
: thermal diffusivity, , (SI units: m2/s)
: dynamic viscosity, (SI units: Pa s = N s/m2)
: thermal conductivity, (SI units: W/(m·K))
: specific heat, (SI units: J/(kg·K))
: density, (SI units: kg/m3).
Note that whereas the Reynolds number and Grashof number are subscripted with a scale variable, the Prandtl number contains no such length scale and is dependent only on the fluid and the fluid state. The Prandtl number is often found in property tables alongside other properties such as viscosity and thermal conductivity.
The mass transfer analog of the Prandtl number is the Schmidt number and the ratio of the Prandtl number and the Schmidt number is the Lewis number.
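As a quick numerical illustration of the defining ratio Pr = c_p μ / k, the sketch below evaluates it for dry air near 20 °C. The property values used are typical textbook figures rather than reference data, so the result should only be read as reproducing the familiar Pr ≈ 0.71 for air.

```python
# Prandtl number from its definition, Pr = c_p * mu / k.
# Inputs are typical textbook values for dry air at about 20 °C and 1 bar;
# exact figures vary slightly between property tables.

def prandtl_number(mu, c_p, k):
    """Pr = momentum diffusivity / thermal diffusivity = c_p * mu / k."""
    return c_p * mu / k

mu_air = 1.81e-5   # dynamic viscosity, Pa·s
cp_air = 1005.0    # specific heat at constant pressure, J/(kg·K)
k_air = 0.0257     # thermal conductivity, W/(m·K)

print(round(prandtl_number(mu_air, cp_air, k_air), 3))  # ≈ 0.71
```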
Experimental values
Typical values
For most gases over a wide range of temperature and pressure, Pr is approximately constant. Therefore, it can be used to determine the thermal conductivity of gases at high temperatures, where it is difficult to measure experimentally due to the formation of convection currents.
Typical values for Pr are:
0.003 for molten potassium at 975 K
around 0.015 for mercury
0.065 for molten lithium at 975 K
around 0.16–0.7 for mixtures of noble gases or noble gases with hydrogen
0.63 for oxygen
around 0.71 for air and many other gases
1.38 for gaseous ammonia
between 4 and 5 for R-12 refrigerant
around 7.56 for water (At 18 °C)
13.4 and 7.2 for seawater (At 0 °C and 20 °C respectively)
50 for n-butanol
between 100 and 40,000 for engine oil
1000 for glycerol
10,000 for polymer melts
around 1 for Earth's mantle.
Formula for the calculation of the Prandtl number of air and water
For air with a pressure of 1 bar, the Prandtl numbers in the temperature range between −100 °C and +500 °C can be calculated using the formula given below. The temperature is to be used in the unit degree Celsius. The deviations are a maximum of 0.1% from the literature values.
The Prandtl numbers for water (1 bar) can be determined in the temperature range between 0 °C and 90 °C using the formula given below. The temperature is to be used in the unit degree Celsius. The deviations are a maximum of 1% from the literature values.
Physical interpretation
Small values of the Prandtl number, Pr ≪ 1, mean the thermal diffusivity dominates, whereas with large values, Pr ≫ 1, the momentum diffusivity dominates the behavior.
For example, the listed value for liquid mercury indicates that the heat conduction is more significant compared to convection, so thermal diffusivity is dominant.
However, engine oil, with its high viscosity and low heat conductivity, has a higher momentum diffusivity as compared to thermal diffusivity.
The Prandtl numbers of gases are about 1, which indicates that both momentum and heat dissipate through the fluid at about the same rate. Heat diffuses very quickly in liquid metals (Pr ≪ 1) and very slowly in oils (Pr ≫ 1) relative to momentum. Consequently, the thermal boundary layer is much thicker for liquid metals and much thinner for oils relative to the velocity boundary layer.
In heat transfer problems, the Prandtl number controls the relative thickness of the momentum and thermal boundary layers. When Pr is small, it means that the heat diffuses quickly compared to the velocity (momentum). This means that for liquid metals the thermal boundary layer is much thicker than the velocity boundary layer.
In laminar boundary layers, the ratio of the thermal to momentum boundary layer thickness over a flat plate is well approximated by
δ_t/δ ≈ Pr^(−1/3),
where δ_t is the thermal boundary layer thickness and δ is the momentum boundary layer thickness.
For incompressible flow over a flat plate, the two Nusselt number correlations are asymptotically correct:
where Re is the Reynolds number. These two asymptotic solutions can be blended together using the concept of a norm.
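To give a feel for the boundary-layer thickness ratio quoted above, the sketch below applies the Pr^(−1/3) approximation to three of the fluids whose typical Prandtl numbers are listed earlier in this article; the approximation is intended for laminar flow with Pr of order 0.6 or larger, so liquid metals are deliberately excluded.

```python
# Ratio of thermal to momentum boundary layer thickness on a flat plate
# (laminar flow), using delta_t / delta ≈ Pr**(-1/3) for Pr of order 0.6 or more.
typical_pr = {
    "air": 0.71,
    "water (18 °C)": 7.56,
    "glycerol": 1000.0,
}

for fluid, pr in typical_pr.items():
    ratio = pr ** (-1.0 / 3.0)   # delta_t / delta
    print(f"{fluid:15s} Pr = {pr:8.2f}   delta_t/delta ≈ {ratio:.2f}")
```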
See also
Turbulent Prandtl number
Magnetic Prandtl number
References
Further reading
Convection
Dimensionless numbers of fluid mechanics
Dimensionless numbers of thermodynamics
Fluid dynamics | Prandtl number | Physics,Chemistry,Engineering | 978 |
608,186 | https://en.wikipedia.org/wiki/%CE%92-Carboline | β-Carboline (9H-pyrido[3,4-b]indole) represents the basic chemical structure for more than one hundred alkaloids and synthetic compounds. The effects of these substances depend on their respective substituent. Natural β-carbolines primarily influence brain functions but can also exhibit antioxidant effects. Synthetically designed β-carboline derivatives have recently been shown to have neuroprotective, cognitive enhancing and anti-cancer properties.
Pharmacology
The pharmacological effects of specific β-carbolines are dependent on their substituents. For example, the natural β-carboline harmine has substituents on position 7 and 1. Thereby, it acts as a selective inhibitor of the DYRK1A protein kinase, a protein necessary for neurodevelopment. It also exhibits various antidepressant-like effects in rats by interacting with serotonin receptor 2A. Furthermore, it increases levels of the brain-derived neurotrophic factor (BDNF) in rat hippocampus. A decreased BDNF level has been associated with major depression in humans. The antidepressant effect of harmine might also be due to its function as a MAO-A inhibitor by reducing the breakdown of serotonin and noradrenaline.
A synthetic derivative, 9-methyl-β-carboline, has shown neuroprotective effects including increased expression of neurotrophic factors and enhanced respiratory chain activity. This derivative has also been shown to enhance cognitive function, increase dopaminergic neuron count and facilitate synaptic and dendritic proliferation. It also exhibited therapeutic effects in animal models for Parkinson's disease and other neurodegenerative processes.
However, β-carbolines with substituents in position 3 reduce the effect of benzodiazepine on GABA-A receptors and can therefore have convulsive, anxiogenic and memory enhancing effects. Moreover, 3-hydroxymethyl-beta-carboline blocks the sleep-promoting effect of flurazepam in rodents and – by itself – can decrease sleep in a dose-dependent manner. Another derivative, methyl-β-carboline-3-carboxylate, stimulates learning and memory at low doses but can promote anxiety and convulsions at high doses. With modification in position 9 similar positive effects have been observed for learning and memory without promotion of anxiety or convulsion.
β-carboline derivatives also enhance the production of the antibiotic reveromycin A in soil-dwelling Streptomyces species. Specifically, expression of biosynthetic genes is facilitated by binding of the β-carboline to a large ATP-binding regulator of the LuxR family.
Also, Lactobacillus spp. secrete a β-carboline (1-acetyl-β-carboline) that prevents the pathogenic fungus Candida albicans from changing to a more virulent growth form (yeast-to-filament transition). Thereby, the β-carboline reverses imbalances in the microbiome composition that cause pathologies ranging from vaginal candidiasis to fungal sepsis.
Since β-carbolines also interact with various cancer-related molecules such as DNA, enzymes (GPX4, kinases, etc.) and proteins (ABCG2/BRCP1, etc.), they are also discussed as potential anticancer agents.
Explorative human studies for the medical use of β-carbolines
The extract of the liana Banisteriopsis caapi has been used by the tribes of the Amazon as an entheogen and was described as a hallucinogen in the middle of the 19th century. In early 20th century, European pharmacists identified harmine as the active substance. This discovery stimulated the interest to further investigate its potential as a medicine. For example, Louis Lewin, a prominent pharmacologist, demonstrated a dramatic benefit in neurological impairments after injections of B. caapi in patients with postencephalitic Parkinsonism. By 1930, it was generally agreed that hypokinesia, drooling, mood, and sometimes rigidity improved by treatment with harmine. Altogether, 25 studies had been published in the 1920s and 1930s about patients with Parkinson's disease and postencephalitic Parkinsonism. The pharmacological effects of harmine have been attributed mainly to its central monoamine oxidase (MAO) inhibitory properties. In-vivo and rodent studies have shown that extracts of Banisteriopsis caapi and also Peganum harmala lead to striatal dopamine release. Furthermore, harmine supports the survival of dopaminergic neurons in MPTP-treated mice. Since harmine also antagonizes N-methyl-d-aspartate (NMDA) receptors, some researchers speculatively attributed the rapid improvement in patients with Parkinson's disease to these antiglutamatergic effects. However, the advent of synthetic anticholinergic drugs at that time led to the total abandonment of harmine.
Structure
β-Carbolines belong to the group of indole alkaloids and consist of a pyridine ring that is fused to an indole skeleton. The structure of β-carboline is similar to that of tryptamine, with the ethylamine chain re-connected to the indole ring via an extra carbon atom, to produce a three-ringed structure. The biosynthesis of β-carbolines is believed to follow this route from analogous tryptamines. Different levels of saturation are possible in the third ring, which is indicated here in the structural formula by coloring the optional double bonds red and blue:
Examples of β-carbolines
Some of the more important β-carbolines are tabulated by structure below. Their structures may contain the aforementioned bonds marked by red or blue.
Natural occurrence
β-Carboline alkaloids are widespread in prokaryotes, plants and animals. Some β-carbolines, notably tetrahydro-β-carbolines, may be formed naturally in plants and the human body with tryptophan, serotonin and tryptamine as precursors.
Altogether, eight plant families are known to express 64 different kinds of β-carboline alkaloids. For example, the β-carbolines harmine, harmaline, and tetrahydroharmine are components of the liana Banisteriopsis caapi and play a pivotal role in the pharmacology of the indigenous psychedelic drug ayahuasca. Moreover, the seeds of Peganum harmala (Syrian Rue) contain between 0.16% and 5.9% β-carboline alkaloids (by dry weight).
A specific group of β-carboline derivatives, termed eudistomins, were extracted from ascidians (marine tunicates of the family Ascidiacea) such as Ritterella sigillinoides, Lissoclinum fragile or Pseudodistoma aureum.
Nostocarboline was isolated from a freshwater cyanobacterium.
The fully aromatic β-carbolines also occur in many foodstuffs, however in lower concentrations. The highest amounts have been detected in brewed coffee, raisins, well-done fish and meats. Smoking is another source of fully aromatic β-carbolines, with levels up to thousands of μg per smoker each day.
β-Carbolines have also been found in the cuticle of scorpions, causing their skin to fluoresce upon exposure to ultraviolet light at certain wavelengths (e.g. blacklight).
See also
γ-Carboline
Harmala alkaloid
Oxopropaline
Tryptamine
References
External links
TiHKAL #44
TiHKAL in general
Beta-carbolines in coffee
Anxiolytics
Convulsants
Entheogens
Monoamine oxidase inhibitors
GABAA receptor negative allosteric modulators
Indole alkaloids
Tryptamines | Β-Carboline | Chemistry | 1,708 |
43,298,724 | https://en.wikipedia.org/wiki/PlayCanvas | PlayCanvas is an open-source 3D game engine/interactive 3D application engine alongside a proprietary cloud-hosted creation platform that allows for simultaneous editing from multiple computers via a browser-based interface. It runs in modern browsers that support WebGL, including Mozilla Firefox and Google Chrome. The engine is capable of rigid-body physics simulation, handling three-dimensional audio and 3D animations.
PlayCanvas has gained the support of ARM, Activision and Mozilla.
The PlayCanvas engine was open-sourced on June 4, 2014.
In April 2019, BusinessInsider.com reported that the company was acquired by Snap Inc. in 2017.
Features
The PlayCanvas platform has a collaborative real-time editor that allows a project to be edited by multiple developers simultaneously. The engine supports the WebGL 1.0 and 2.0 standards to produce GPU-accelerated 3D graphics and allows for scripting via the JavaScript programming language.
Projects can be distributed via a URL web link or packaged in native wrappers, e.g. for Android using CocoonJS or for Steam using Electron, among many other options and platforms.
Notable PlayCanvas applications
Various companies use PlayCanvas in projects of different disciplines of interactive 3D content in the web.
Disney created an educational game for Hour of Code based on its Moana film.
King published Shuffle Cats Mini as a launch title for Facebook Instant Games.
TANX – a massively multiplayer online game of cartoon-styled tanks.
Miniclip published a number of games on its platform as HTML5 games grew in popularity on the web.
Mozilla collaborated with the PlayCanvas team on the After the Flood demo, created to showcase cutting-edge features of WebGL 2.0.
See also
List of WebGL frameworks
List of game engines
JavaScript
HTML5
WebGL
References
External links
PlayCanvas Official Website
PlayCanvas Engine (Open Source)
PlayCanvas API Reference
PlayCanvas Tutorials
Various free-to-play games built with PlayCanvas
Cloud applications
Collaborative real-time editors
Cross-platform free software
Free 3D graphics software
Free game engines
Free software programmed in JavaScript
Graphics libraries
IOS video game engines
Software using the MIT license
Video game development software
Video game engines
Web applications
Web development
Web development software
Web software
WebGL | PlayCanvas | Technology,Engineering | 460 |
7,195,059 | https://en.wikipedia.org/wiki/Variable%20Valve%20Control | VVC (Variable Valve Control) is an automobile variable valve timing technology developed by Rover and applied to some high performance variants of the company's K Series 1800cc engine.
About
In order to improve the optimisation of the valve timing for differing engine speeds and loads, the system is able to vary the timing and duration of the inlet valve opening. It achieves this by using a complex and finely machined mechanism to drive the inlet camshafts. This mechanism can accelerate and decelerate the rotational speed of the camshaft during different parts of its cycle. e.g. to produce longer opening duration, it slows the rotation during the valve open part of the cycle and speeds it up during the valve closed period.
The system has the advantage that it is continuously variable rather than switching in at a set speed. Its disadvantage is the complexity of the system and the corresponding cost. Other systems achieve similar results at lower cost and with a simpler design (electronic control).
For a more detailed description, see the sandsmuseum link below.
Applications
MG Rover cars
MG F / MG TF
MG ZR
Rover 200 / 25
Non MG/Rover cars
Lotus Elise
Caterham 7
Caterham 21
GTM Libra
See also
Variable Valve Timing
Rover K-Series engine
External links
How the MGF VVC really works
MG Rover Group
Powertrain Ltd, manufacturer of the VVC engines
Valve Manufacturer
Sandsmuseum
Engine technology
Variable valve timing
Rover Company | Variable Valve Control | Technology | 290 |
14,880,338 | https://en.wikipedia.org/wiki/KCNMB3 | Calcium-activated potassium channel subunit beta-3 is a protein that in humans is encoded by the KCNMB3 gene.
MaxiK channels are large conductance, voltage and calcium-sensitive potassium channels which are fundamental to the control of smooth muscle tone and neuronal excitability. MaxiK channels can be formed by 2 subunits: the pore-forming alpha subunit and the modulatory beta subunit. The protein encoded by this gene is an auxiliary beta subunit which may partially inactivate or slightly decrease the activation time of MaxiK alpha subunit currents. At least four transcript variants encoding four different isoforms have been found for this gene.
See also
BK channel
Voltage-gated potassium channel
References
Further reading
Ion channels | KCNMB3 | Chemistry | 148 |
75,499,594 | https://en.wikipedia.org/wiki/Halosulfuron-methyl | Halosulfuron-methyl is a sulfonylurea post-emergence herbicide used to control some annual and perennial broad-leaved weeds and sedges (such as nutsedge/nutgrass) in a range of crops (particularly rice), established landscape woody ornamentals and turfgrass. It is marketed under several tradenames including Sedgehammer and Sandea.
Effects
Halosulfuron-methyl is systemic and selective, and acts as an inhibitor of acetohydroxyacid synthase (AHAS, also known as acetolactate synthase) restricting the biosynthesis of the essential amino acids, valine and isoleucine, thus restricting plant growth. Symptoms take several weeks to develop and include general stunting, chlorosis, and necrosis of the growing points. It typically does not affect other major annual and perennial weed grasses and broadleaves such as spurge, dandelions, lambsquarters, and oxalis.
References
External links
Herbicides
Pyrazoles
Pyrimidines
Sulfonylureas
Methyl esters | Halosulfuron-methyl | Biology | 229 |
28,367,737 | https://en.wikipedia.org/wiki/Lymphoepithelial%20lesion | In pathology, lymphoepithelial lesion refers to a discrete abnormality that consists of lymphoid cells and epithelium, which may or may not be benign.
It may refer to a benign lymphoepithelial lesion of the parotid gland or benign lymphoepithelial lesion of the lacrimal gland, or may refer to the infiltration of malignant lymphoid cells into epithelium, in the context of primary gastrointestinal lymphoma.
In the context of GI tract lymphoma, it is most often associated with MALT lymphomas.
See also
Gastric lymphoma
MALT lymphoma
References
Pathology | Lymphoepithelial lesion | Biology | 157 |
2,658,385 | https://en.wikipedia.org/wiki/Van%20Stockum%20dust | In general relativity, the van Stockum dust is an exact solution of the Einstein field equations where the gravitational field is generated by dust rotating about an axis of cylindrical symmetry. Since the density of the dust is increasing with distance from this axis, the solution is rather artificial, but as one of the simplest known solutions in general relativity, it stands as a pedagogically important example.
This solution is named after Willem Jacob van Stockum, who rediscovered it in 1938 independently of a much earlier discovery by Cornelius Lanczos in 1924. It is currently recommended that the solution be referred to as the Lanczos–van Stockum dust.
Derivation
One way of obtaining this solution is to look for a cylindrically symmetric perfect fluid solution in which the fluid exhibits rigid rotation. That is, we demand that the world lines of the fluid particles form a timelike congruence having nonzero vorticity but vanishing expansion and shear. (In fact, since dust particles feel no forces, this will turn out to be a timelike geodesic congruence, but we won't need to assume this in advance.)
A simple ansatz corresponding to this demand is expressed by the following frame field, which contains two undetermined functions of r:
To prevent misunderstanding, we should emphasize that taking the dual coframe
gives the metric tensor in terms of the same two undetermined functions:
Multiplying out gives
We compute the Einstein tensor with respect to this frame, in terms of the two undetermined functions,
and demand that the result have the form appropriate for a perfect fluid solution with the timelike unit vector everywhere tangent to the world line of a fluid particle. That is, we demand that
This gives the conditions
Solving for and then for gives the desired frame defining the van Stockum solution:
Note that this frame is only defined for r > 0.
Properties
Computing the Einstein tensor with respect to our frame shows that in fact the pressure vanishes, so we have a dust solution. The mass density of the dust turns out to be
Happily, this is finite on the axis of symmetry r = 0, but the density increases with radius, a feature which unfortunately severely limits possible astrophysical applications.
Solving the Killing equations shows that this spacetime admits a three-dimensional abelian Lie algebra of Killing vector fields, generated by
Here, the timelike Killing vector field has nonzero vorticity, so we have a stationary spacetime invariant under translation along the world lines of the dust particles, and also under translation along the axis of cylindrical symmetry and rotation about that axis.
Note that unlike the Gödel dust solution, in the van Stockum dust the dust particles are rotating about a geometrically distinguished axis.
As promised, the expansion and shear of the timelike geodesic congruence vanishes, but the vorticity vector is
This means that even though in our comoving chart the world lines of the dust particles appear as vertical lines, in fact they are twisting about one another as the dust particles swirl about the axis of symmetry. In other words, if we follow the evolution of a small ball of dust, we find that it rotates about its own axis (parallel to the axis of symmetry), but does not shear or expand; the latter properties define what we mean by rigid rotation. Notice that on the axis itself, the magnitude of the vorticity vector becomes simply a.
The tidal tensor is
which shows that observers riding on the dust particles experience isotropic tidal tension in the plane of rotation. The magnetogravitic tensor is
An apparent paradox
Consider a thought experiment in which an observer riding on a dust particle sitting on the axis of symmetry looks out at dust particles with positive radial coordinate. Does he see them to be rotating, or not?
Since the top array of null geodesics is obtained simply by translating upwards the lower array, and since the three world lines are all vertical (invariant under time translation), it might seem that the answer is "no". However, while the frame given above is an inertial frame, computing the covariant derivatives
shows that only the first vanishes identically. In other words, the remaining spatial vectors are spinning (that is, rotating about an axis parallel to the axis of cylindrical symmetry of this spacetime).
Thus, to obtain a nonspinning inertial frame we need to spin up our original frame, like this:
where q is a new undetermined function of r. Plugging in the requirement that the covariant derivatives vanish, we obtain
The new frame appears, in our comoving coordinate chart, to be spinning, but in fact it is gyrostabilized. In particular, since our observer with the green world line in the figure is presumably riding a nonspinning dust particle (otherwise spin-spin forces would be apparent in the dynamics of the dust), he in fact observes nearby radially separated dust particles to be rotating clockwise about his location with angular velocity a. This explains the physical meaning of the parameter which we found in our earlier derivation of the first frame.
(Pedantic note: alert readers will have noticed that we ignored the fact that neither of our frame fields is well defined on the axis. However, we can define a frame for an on-axis observer by an appropriate one-sided limit; this gives a discontinuous frame field, but we only need to define a frame along the world line of our on-axis observer in order to pursue the thought experiment considered in this section.)
It is worth remarking that the null geodesics spiral inwards in the above figure. This means that our on-axis observer sees the other dust particles at time-lagged locations, which is of course just what we would expect. The fact that the null geodesics appear "bent" in this chart is of course an artifact of our choice of comoving coordinates in which the world lines of the dust particles appear as vertical coordinate lines.
A genuine paradox
Let us draw the light cones for some typical events in the van Stockum dust, to see how their appearance (in our comoving cylindrical chart) depends on the radial coordinate:
As the figure shows, at r = 1/a the cones become tangent to the surfaces of constant t, and we obtain a closed null curve (the red circle). Note that this is not a null geodesic.
As we move further outward, we can see that horizontal circles with larger radii are closed timelike curves. The paradoxical nature of these CTCs was apparently first pointed out by van Stockum: observers whose world lines form a closed timelike curve can apparently revisit or affect their own past. Even worse, there is apparently nothing to prevent such an observer from deciding, on his third lifetime, say, to stop accelerating, which would give him multiple biographies.
These closed timelike curves are not timelike geodesics, so these paradoxical observers must accelerate to experience these effects. Indeed, as we would expect, the required acceleration diverges as these timelike circles approach the null circles lying in the critical cylinder r = 1/a.
Closed timelike curves turn out to exist in many other exact solutions in general relativity, and their common appearance is one of the most troubling theoretical objections to this theory. However, very few physicists refuse to use general relativity at all on the basis of such objections; rather most take the pragmatic attitude that using general relativity makes sense whenever one can get away with it, because of the relative simplicity and well established reliability of this theory in many astrophysical situations. This is not unlike the fact that many physicists use Newtonian mechanics every day, even though they are well aware that Galilean kinematics has been "overthrown" by relativistic kinematics.
See also
Dust solution
Gödel dust solution
References
Lanczos's paper announcing the first discovery of this solution.
Van Stockum's paper announcing his rediscovery of this solution.
Exact solutions in general relativity
Thought experiments in physics | Van Stockum dust | Mathematics | 1,618 |
256,310 | https://en.wikipedia.org/wiki/John%20Tyndall | John Tyndall (; 2 August 1820 – 4 December 1893) was an Irish physicist and chemist. His scientific fame arose in the 1850s from his study of diamagnetism. Later he made discoveries in the realms of infrared radiation and the physical properties of air, proving the connection between atmospheric CO and what is now known as the greenhouse effect in 1859.
Tyndall also published more than a dozen science books which brought state-of-the-art 19th century experimental physics to a wide audience. From 1853 to 1887 he was professor of physics at the Royal Institution of Great Britain in London. He was elected as a member to the American Philosophical Society in 1868.
Early years and education
Tyndall was born in Leighlinbridge, County Carlow, Ireland. His father was a local police constable, descended from Gloucestershire emigrants who settled in southeast Ireland around 1670. Tyndall attended the local schools (Ballinabranna Primary School) in County Carlow until his late teens, and was probably an assistant teacher near the end of his time there. Subjects learned at school notably included technical drawing and mathematics with some applications of those subjects to land surveying. He was hired as a draftsman by the Ordnance Survey of Ireland in his late teens in 1839, and moved to work for the Ordnance Survey for Great Britain in 1842. In the decade of the 1840s, a railway-building boom was in progress, and Tyndall's land surveying experience was valuable and in demand by the railway companies. Between 1844 and 1847, he was lucratively employed in railway construction planning.
In 1847, Tyndall opted to become a mathematics and surveying teacher at Queenwood College, a boarding school in Hampshire. Recalling this decision later, he wrote: "the desire to grow intellectually did not forsake me; and, when railway work slackened, I accepted in 1847 a post as master in Queenwood College." Another recently arrived young teacher at Queenwood was Edward Frankland, who had previously worked as a chemical laboratory assistant for the British Geological Survey. Frankland and Tyndall became good friends. On the strength of Frankland's prior knowledge, they decided to go to Germany to further their education in science. Among other things, Frankland knew that certain German universities were ahead of any in Britain in experimental chemistry and physics. (British universities were still focused on classics and mathematics and not laboratory science.) The pair moved to Germany in summer 1848 and enrolled at the University of Marburg, attracted by the reputation of Robert Bunsen as a teacher. Tyndall studied under Bunsen for two years. Perhaps more influential for Tyndall at Marburg was Professor Hermann Knoblauch, with whom Tyndall maintained communications by letter for many years afterwards. Tyndall's Marburg dissertation was a mathematical analysis of screw surfaces in 1850 (under Friedrich Ludwig Stegmann). Tyndall stayed in Germany for a further year doing research on magnetism with Knoblauch, including some months' visit at the Berlin laboratory of Knoblauch's main teacher, Heinrich Gustav Magnus. It is clear today that Bunsen and Magnus were among the very best experimental science instructors of the era. Thus, when Tyndall returned to live in England in summer 1851, he probably had as good an education in experimental science as anyone in England.
Early scientific work
Tyndall's early original work in physics was his experiments on magnetism and diamagnetic polarity, on which he worked from 1850 to 1856. His two most influential reports were the first two, co-authored with Knoblauch. One of them was entitled "The magneto-optic properties of crystals, and the relation of magnetism and diamagnetism to molecular arrangement", dated May 1850. The two described an inspired experiment, with an inspired interpretation. These and other magnetic investigations very soon made Tyndall known among the leading scientists of the day. He was elected a Fellow of the Royal Society in 1852. In his search for a suitable research appointment, he was able to ask the longtime editor of the leading German physics journal (Poggendorff) and other prominent men to write testimonials on his behalf. In 1853, he attained the prestigious appointment of Professor of Natural Philosophy (Physics) at the Royal Institution in London, due in no small part to the esteem his work had garnered from Michael Faraday, the leader of magnetic investigations at the Royal Institution. About a decade later Tyndall was appointed the successor to the positions held by Michael Faraday at the Royal Institution on Faraday's retirement.
Alpine mountaineering and glaciology
Tyndall visited the Alps mountains in 1856 for scientific reasons and ended up becoming a pioneering mountain climber. He visited the Alps almost every summer from 1856 onward, was a member of the very first mountain-climbing team to reach the top of the Weisshorn (1861), and led one of the early teams to reach the top of the Matterhorn (1868). His is one of the names associated with the "Golden age of alpinism" — the mid-Victorian years when the more difficult of the Alpine peaks were summited for the first time.
In the Alps, Tyndall studied glaciers, and especially glacier motion. His explanation of glacial flow brought him into dispute with others, particularly James David Forbes. Much of the early scientific work on glacier motion had been done by Forbes, but Forbes at that time did not know of the phenomenon of regelation, which was discovered a little later by Michael Faraday. Regelation played a key role in Tyndall's explanation. Forbes did not see regelation in the same way at all. Complicating their debate, a disagreement arose publicly over who deserved to get investigator credit for what. Articulate friends of Forbes, as well as Forbes himself, thought that Forbes should get the credit for most of the good science, whereas Tyndall thought the credit should be distributed more widely. Tyndall commented: "The idea of semi-fluid motion belongs entirely to Louis Rendu; the proof of the quicker central flow belongs in part to Rendu, but almost wholly to Louis Agassiz and Forbes; the proof of the retardation of the bed belongs to Forbes alone; while the discovery of the locus of the point of maximum motion belongs, I suppose, to me." When Forbes and Tyndall were in the grave, their disagreement was continued by their respective official biographers. Everyone tried to be reasonable, but agreement was not attained. More disappointingly, aspects of glacier motion remained not understood or not proved.
Numerous landforms and geographical features are named for John Tyndall, including Tyndall Glacier in Chile, Tyndall Glacier in Colorado, Tyndall Glacier in Alaska, Mount Tyndall in California, and Mount Tyndall in Tasmania.
Main scientific work
Work on glaciers alerted Tyndall to the research of Horace Bénédict de Saussure into the heating effect of sunlight, and the concept of Joseph Fourier, developed by Claude Pouillet and William Hopkins, that heat from the sun penetrates the atmosphere more easily than the "obscure heat" (infrared "terrestrial radiation") emitted by the warmed Earth, causing what we now call the greenhouse effect. In the spring of 1859, Tyndall began research into how thermal radiation, both visible and obscure, affects different gases and aerosols. He developed differential absorption spectroscopy using the electro-magnetic thermopile devised by Macedonio Melloni. Tyndall began intensive experiments on 9 May 1859, at first without significant results,
then improved the sensitivity of the apparatus and on 18 May wrote in his journal "Experimented all day; the subject is completely in my hands!" On 26 May he gave the Royal Society a note which described his methods, and stated "With the exception of the celebrated memoir of M. Pouillet on Solar Radiation through the atmosphere, nothing, so far as I am aware, has been published on the transmission of radiant heat through gaseous bodies. We know nothing of the effect even of air upon heat radiated from terrestrial sources."
On 10 June, he demonstrated the research in a Royal Society lecture, noting that coal gas and ether strongly absorbed (infrared) radiant heat, and his experimental confirmation of the (greenhouse effect) concept; that solar heat crosses an atmosphere, but "when the heat is absorbed by the planet, it is so changed in quality that the rays emanating from the planet cannot get with the same freedom back into space. Thus the atmosphere admits of the entrance of solar heat; but checks its exit, and the result is a tendency to accumulate heat at the surface of the planet."
Tyndall's studies of the action of radiant energy on the constituents of air led him onto several lines of inquiry, and his original research results included the following:
Tyndall explained the heat in the Earth's atmosphere in terms of the capacities of the various gases in the air to absorb radiant heat, in the form of infrared radiation. His measuring device, which used thermopile technology, is an early landmark in the history of absorption spectroscopy of gases. He was the first to correctly measure the relative infrared absorptive powers of the gases nitrogen, oxygen, water vapour, carbon dioxide, ozone, methane, and other trace gases and vapours. He concluded that water vapour is the strongest absorber of radiant heat in the atmosphere and is the principal gas controlling air temperature. Absorption by the other gases is not negligible but relatively small. Prior to Tyndall it was widely surmised that the Earth's atmosphere warms the surface in what was later called a greenhouse effect, but he was the first to prove it. The proof was that water vapour strongly absorbed infrared radiation. Three years earlier, in 1856, the American scientist Eunice Newton Foote had announced experiments demonstrating that water vapour and carbon dioxide absorb heat from solar radiation, but she did not differentiate the effects of infrared. Relatedly, Tyndall in 1860 was first to demonstrate and quantify that visually transparent gases are infrared emitters.
He devised demonstrations that advanced the question of how radiant heat is absorbed and emitted at the molecular level. He appears to be the first person to have demonstrated experimentally that emission of heat in chemical reactions has its physical origination within the newly created molecules (1864). He produced instructive demonstrations involving the incandescent conversion of infrared into visible light at the molecular level, which he called calorescence (1865), in which he used materials that are transparent to infrared and opaque to visible light or vice versa. He usually referred to infrared as "radiant heat", and sometimes as "ultra-red undulations", as the word "infrared" did not start coming into use until the 1880s. His main reports of the 1860s were republished as a 450-page collection in 1872 under the title Contributions to Molecular Physics in the Domain of Radiant Heat.
In the investigations on radiant heat in air it had been necessary to use air from which all traces of floating dust and other particulates had been removed. A very sensitive way to detect particulates is to bathe the air with intense light. The scattering of light by particulate impurities in air and other gases, and in liquids, is known today as the Tyndall effect or Tyndall scattering. In studying this scattering during the late 1860s Tyndall was a beneficiary of recent improvements in electric-powered lights. He also had the use of good light concentrators. He developed the nephelometer and similar instruments that show properties of aerosols and colloids through concentrated light beams against a dark background and are based on exploiting the Tyndall effect. (When combined with microscopes, the result is the ultramicroscope, which was developed later by others).
He was the first to observe and report the phenomenon of thermophoresis in aerosols. He spotted it surrounding hot objects while investigating the Tyndall effect with focused lightbeams in a dark room. He devised a better way to demonstrate it, and then simply reported it (1870), without investigating the physics of it in depth.
In radiant-heat experiments that called for much laboratory expertise in the early 1860s, he showed for a variety of readily vaporisable liquids that, molecule for molecule, the vapour form and the liquid form have essentially the same power to absorb radiant heat. (In modern experiments using narrow-band spectra, some small differences are found that Tyndall's equipment was unable to get at; see e.g. absorption spectrum of H2O).
He consolidated and enhanced the results of Paul-Quentin Desains, James D. Forbes, Hermann Knoblauch and others demonstrating that the principal properties of visible light can be reproduced for radiant heat – namely reflection, refraction, diffraction, polarisation, depolarisation, double refraction, and rotation in a magnetic field.
Using his expertise in radiant heat absorption by gases, he invented a system for measuring the amount of carbon dioxide in a sample of exhaled human breath (1862, 1864). The basic principle of Tyndall's system is in daily use in hospitals today for monitoring patients under anaesthesia. (See capnometry.)
When studying the absorption of radiant heat by ozone, he came up with a demonstration that helped confirm or reaffirm that ozone is an oxygen cluster (1862).
In the lab he came up with the following simple way to obtain "optically pure" air, i.e. air that has no visible signs of particulate matter. He built a square wooden box with a couple of glass windows on it. Before closing the box, he coated the inside walls and floor of the box with glycerin, which is a sticky syrup. He found that after a few days' wait the air inside the box was entirely particulate-free when examined with strong light beams through the glass windows. The various floating-matter particulates had all ended up getting stuck to the walls or settling on the sticky floor. Now, in the optically pure air there were no signs of any "germs", i.e. no signs of floating micro-organisms. Tyndall sterilised some meat-broths by simply boiling them, and then compared what happened when he let these meat-broths sit in the optically pure air, and in ordinary air. The broths sitting in the optically pure air remained "sweet" (as he said) to smell and taste after many months of sitting, while the ones in ordinary air started to become putrid after a few days. This demonstration extended Louis Pasteur's earlier demonstrations that the presence of micro-organisms is a precondition for biomass decomposition. However, the next year (1876) Tyndall failed to consistently reproduce the result. Some of his supposedly heat-sterilized broths rotted in the optically pure air. From this Tyndall was led to find viable bacterial spores (endospores) in supposedly heat-sterilized broths. He discovered the broths had been contaminated with dry bacterial spores from hay in the lab. He correctly contended, citing research by Ferdinand Cohn, that bacteria in their ordinary form are killed by simple boiling but that they also have a spore form able to survive it. Tyndall found a way to eradicate the bacterial spores that came to be known as "Tyndallization". Tyndallization historically was the earliest known effective way to destroy bacterial spores. At the time, it affirmed the "germ theory" against a number of critics whose experimental results had been defective from the same cause. During the mid-1870s Pasteur and Tyndall were in frequent communication.
He invented a better fireman's respirator, a hood that filtered smoke and noxious gas from air (1871, 1874).
In the late 1860s and early 1870s he wrote an introductory book about sound propagation in air, and was a participant in a large-scale British project to develop a better foghorn. In laboratory demonstrations motivated by foghorn issues, Tyndall established that sound is partially reflected (i.e. partially bounced back like an echo) at the location where an air mass of one temperature meets another air mass of a different temperature; and more generally when a body of air contains two or more air masses of different densities or temperatures, the sound travels poorly because of reflections occurring at the interfaces between the air masses, and very poorly when many such interfaces are present. (He then argued, though inconclusively, that this is the usual main reason why the same distant sound, e.g. foghorn, can be heard stronger or fainter on different days or at different times of day.)
An index of 19th-century scientific research journals has John Tyndall as the author of more than 147 papers in science research journals, with practically all of them dated between 1850 and 1884, which is an average of more than four papers a year over that 35-year period.
In his lectures at the Royal Institution Tyndall put a great value on, and was talented at producing, lively, visible demonstrations of physics concepts. In one lecture, Tyndall demonstrated the propagation of light down through a stream of falling water via total internal reflection of the light. It was referred to as the "light fountain". It is historically significant today because it demonstrates the scientific foundation for modern fibre optic technology. During the second half of the 20th century Tyndall was usually credited with being the first to make this demonstration. However, Jean-Daniel Colladon published a report of it in Comptes Rendus in 1842, and there is some suggestive evidence that Tyndall's knowledge of it came ultimately from Colladon and no evidence that Tyndall claimed to have originated it himself.
Molecular physics of radiant heat
Tyndall was an experimenter and laboratory apparatus builder, not an abstract model builder. But in his experiments on radiation and the heat-absorptive power of gases, he had an underlying agenda to understand the physics of molecules. Tyndall said in 1879: "During nine years of labour on the subject of radiation [in the 1860s], heat and light were handled throughout by me, not as ends, but as instruments by the aid of which the mind might perchance lay hold upon the ultimate particles of matter." This agenda is explicit in the title he picked for his 1872 book Contributions to Molecular Physics in the Domain of Radiant Heat. It is present less explicitly in the spirit of his widely read 1863 book Heat Considered as a Mode of Motion. Besides heat he also saw magnetism and sound propagation as reducible to molecular behaviours. Invisible molecular behaviours were the ultimate basis of all physical activity. With this mindset, and his experiments, he outlined an account whereby differing types of molecules have differing absorptions of infrared radiation because their molecular structures give them differing oscillating resonances. He'd gotten into the oscillating resonances idea because he'd seen that any one type of molecule has differing absorptions at differing radiant frequencies, and he was entirely persuaded that the only difference between one frequency and another is the frequency. He'd also seen that the absorption behaviour of molecules is quite different from that of the atoms composing the molecules. For example, the gas nitric oxide (NO) absorbed more than a thousand times more infrared radiation than either nitrogen (N2) or oxygen (O2). He'd also seen in several kinds of experiments that – no matter whether a gas is a weak absorber of broad-spectrum radiant heat – any gas will strongly absorb the radiant heat coming from a separate body of the same type of gas. That demonstrated a kinship between the molecular mechanisms of absorption and emission. Such a kinship was also in evidence in experiments by Balfour Stewart and others, cited and extended by Tyndall, that showed with respect to broad-spectrum radiant heat that molecules that are weak absorbers are weak emitters and strong absorbers are strong emitters. (For example, rock-salt is an exceptionally poor absorber of heat via radiation, and a good absorber of heat via conduction. When a plate of rock-salt is heated via conduction and let stand on an insulator, it takes an exceptionally long time to cool down; i.e., it's a poor emitter of infrared.) The kinship between absorption and emission was also consistent with some generic or abstract features of resonators. The chemical decomposition of molecules by lightwaves (photochemical effect) convinced Tyndall that the resonator could not be the molecule as a whole unit; it had to be some substructure, because otherwise the photochemical effect would be impossible. But he was without testable ideas as to the form of this substructure, and did not partake in speculation in print. His promotion of the molecular mindset, and his efforts to experimentally expose what molecules are, has been discussed by one historian under the title "John Tyndall, The Rhetorician of Molecularity".
Educator
Besides being a scientist, John Tyndall was a science teacher and evangelist for the cause of science. He spent a significant amount of his time disseminating science to the general public. He gave hundreds of public lectures to non-specialist audiences at the Royal Institution in London. When he went on a public lecture tour in the US in 1872, large crowds of non-scientists paid fees to hear him lecture about the nature of light. A typical statement of Tyndall's reputation at the time is this from a London publication in 1878: "Following the precedent set by Faraday, Professor Tyndall has succeeded not only in original investigation and in teaching science soundly and accurately, but in making it attractive.... When he lectures at the Royal Institution the theatre is crowded." Tyndall said of the occupation of teacher "I do not know a higher, nobler, and more blessed calling." His greatest audience was gained ultimately through his books, most of which were not written for experts or specialists. He published more than a dozen science books. From the mid-1860s on, he was one of the world's most famous living physicists, due firstly to his skill and industry as a tutorialist. Most of his books were translated into German and French with his main tutorials staying in print in those languages for decades.
As an indicator of his teaching attitude, here are his concluding remarks to the reader at the end of a 200-page tutorial book for a "youthful audience", The Forms of Water (1872): "Here, my friend, our labours close. It has been a true pleasure to me to have you at my side so long. In the sweat of our brows we have often reached the heights where our work lay, but you have been steadfast and industrious throughout, using in all possible cases your own muscles instead of relying upon mine. Here and there I have stretched an arm and helped you to a ledge, but the work of climbing has been almost exclusively your own. It is thus that I should like to teach you all things; showing you the way to profitable exertion, but leaving the exertion to you.... Our task seems plain enough, but you and I know how often we have had to wrangle resolutely with the facts to bring out their meaning. The work, however, is now done, and you are master of a fragment of that sure and certain knowledge which is founded on the faithful study of nature.... Here then we part. And should we not meet again, the memory of these days will still unite us. Give me your hand. Good bye."
As another indicator, here is the opening paragraph of his 350-page tutorial entitled Sound (1867): "In the following pages I have tried to render the science of acoustics interesting to all intelligent persons, including those who do not possess any special scientific culture. The subject is treated experimentally throughout, and I have endeavoured so to place each experiment before the reader that he should realise it as an actual operation." In the preface to the 3rd edition of this book, he reports that earlier editions were translated into Chinese at the expense of the Chinese government and translated into German under the supervision of Hermann von Helmholtz (a big name in the science of acoustics). His first published tutorial, which was about glaciers (1860), similarly states: "The work is written with a desire to interest intelligent persons who may not possess any special scientific culture."
His most widely praised tutorial, and probably his biggest seller, was the 550-page "Heat: a Mode of Motion" (1863; updated editions until 1880). It was in print for at least 50 years, and is in print today. Its primary feature is, as James Clerk Maxwell said in 1871, "the doctrines of the science [of heat] are forcibly impressed on the mind by well-chosen illustrative experiments."
Tyndall's three longest tutorials, namely Heat (1863), Sound (1867), and Light (1873), represented state-of-the-art experimental physics at the time they were written. Much of their contents were recent major innovations in the understanding of their respective subjects, which Tyndall was the first writer to present to a wider audience. One caveat is called for about the meaning of "state of the art". The books were devoted to laboratory science and they avoided mathematics. In particular, they contain absolutely no infinitesimal calculus. Mathematical modelling using infinitesimal calculus, especially differential equations, was a component of the state-of-the-art understanding of heat, light and sound at the time.
Demarcation of science from religion
The majority of the progressive and innovative British physicists of Tyndall's generation were conservative and orthodox on matters of religion. That includes for example James Joule, Balfour Stewart, James Clerk Maxwell, George Gabriel Stokes and Lord Kelvin – all names investigating heat or light contemporaneously with Tyndall. These conservatives believed, and sought to strengthen the basis for believing, that religion and science were consistent and harmonious with each other. Tyndall, however, was a member of a club that vocally supported Charles Darwin's theory of evolution and sought to strengthen the barrier, or separation, between religion and science. The most prominent member of this club was the anatomist Thomas Henry Huxley. Tyndall first met Huxley in 1851 and the two had a lifelong friendship. Chemist Edward Frankland and mathematician Thomas Archer Hirst, both of whom Tyndall had known since before going to university in Germany, were members too. Others included the social philosopher Herbert Spencer.
Though not nearly so prominent as Huxley in controversy over philosophical problems, Tyndall played his part in communicating to the educated public what he thought were the virtues of having a clear separation between science (knowledge & rationality) and religion (faith & spirituality). As the elected president of the British Association for the Advancement of Science in 1874, he gave a long keynote speech at the Association's annual meeting held that year in Belfast. The speech gave a favourable account of the history of evolutionary theories, mentioning Darwin's name favourably more than 20 times, and concluded by asserting that religious sentiment should not be permitted to "intrude on the region of knowledge, over which it holds no command". This was a hot topic. The newspapers carried the report of it on their front pages – in Britain, Ireland & North America, even the European Continent – and many critiques of it appeared soon after. The attention and scrutiny won the evolutionists' philosophical position more friends, and brought it closer to mainstream ascendancy.
In Rome in 1864, Pope Pius IX in his Syllabus of Errors decreed that it was an error that "reason is the ultimate standard by which man can and ought to arrive at knowledge" and an error that "divine revelation is imperfect" in the Bible – and anyone maintaining those errors was to be "anathematized" – and in 1888 his successor, Pope Leo XIII, decreed as follows: "The fundamental doctrine of rationalism is the supremacy of the human reason, which, refusing due submission to the divine and eternal reason, proclaims its own independence... A doctrine of such character is most hurtful both to individuals and to the State... It follows that it is quite unlawful to demand, to defend, or to grant, unconditional [or promiscuous] freedom of thought, speech, writing, or religion." Those principles and Tyndall's principles were profound enemies. Luckily for Tyndall, he did not need to get into a contest with them in Britain. Even in Italy, Huxley and Darwin were awarded honorary medals and most of the Italian governing class was hostile to the papacy. But in Ireland during Tyndall's lifetime the majority of the population grew increasingly doctrinaire and vigorous in its Roman Catholicism and also grew stronger politically. Between 1886 and 1893, Tyndall was active in the debate in England about whether to give the Catholics of Ireland more freedom to go their own way. Like the great majority of Irish-born scientists of the 19th century he opposed the Irish Home Rule Movement. He had ardent views about it, which were published in newspapers and pamphlets. For example, in an opinion piece in The Times on 27 December 1890 he saw priests and Catholicism as "the heart and soul of this movement" and wrote that placing the non-Catholic minority under the dominion of "the priestly horde" would be "an unspeakable crime". He tried unsuccessfully to get the UK's premier scientific society to denounce the Irish Home Rule proposal as contrary to the interests of science.
In several essays included in his book Fragments of Science for Unscientific People, Tyndall attempted to dissuade people from believing in the potential effectiveness of prayers. At the same time, though, he was not broadly anti-religious.
Many of his readers interpret Tyndall to be a confirmed agnostic, though he never explicitly declared himself to be so. The following statement from Tyndall is an example of Tyndall's agnostic mindset, made in 1867, and reiterated in 1878: "The phenomena of matter and force come within our intellectual range... but behind, and above, and around us the real mystery of the universe lies unsolved, and, as far as we are concerned, is incapable of solution.... Let us lower our heads, and acknowledge our ignorance, priest and philosopher, one and all."
Private life
Tyndall did not marry until age 55. His bride, Louisa Hamilton, was the 30-year-old daughter of a member of parliament (Lord Claud Hamilton, M.P.). The following year, 1877, they built a summer chalet at Belalp in the Swiss Alps. Before getting married Tyndall had been living for many years in an upstairs apartment at the Royal Institution and continued living there after marriage until 1885, when he and Louisa moved to a house near Haslemere, 45 miles southwest of London. The marriage was a happy one and without children. He retired from the Royal Institution at age 66, citing ill health.
Tyndall became financially well-off from sales of his popular books and fees from his lectures (but there is no evidence that he owned commercial patents). For many years he received non-trivial payments for being a part-time scientific advisor to a couple of quasi-governmental agencies and partly donated the payments to charity. His successful lecture tour of the United States in 1872 netted him a substantial sum in dollars, all of which he promptly donated to a trustee for fostering science in America. Late in life his money donations went most visibly to the Irish Unionist political cause. When he died, his wealth was £22,122. For comparison's sake, the income of a police constable in London was about £80 per year at the time.
Death
In his last years Tyndall often took chloral hydrate to treat his insomnia. When bedridden and ailing, he died from an accidental overdose of this drug in 1893 at the age of 73, and was buried at Haslemere. The overdose was administered by his wife Louisa. "My darling," said Tyndall when he realized what had happened, "you have killed your John."
Afterwards, Tyndall's wife took possession of his papers and assigned herself supervisor of an official biography of him. She procrastinated on the project, however, and it was still unfinished when she died in 1940 aged 95. The book eventually appeared in 1945, written by A. S. Eve and C. H. Creasey, whom Louisa Tyndall had authorised shortly before her death.
John Tyndall is commemorated by a memorial (the Tyndalldenkmal) erected high on the mountain slopes above the village of Belalp, where he had his holiday home, and in sight of the Aletsch Glacier, which he had studied.
John Tyndall's books
Tyndall, J. (1860), The glaciers of the Alps, Being a narrative of excursions and ascents, an account of the origin and phenomena of glaciers and an exposition of the physical principles to which they are related, (1861 edition) Ticknor and Fields, Boston
Tyndall, J. (1862), Mountaineering in 1861. A vacation tour, Longman, Green, Longman, and Roberts, London
Tyndall, J. (1865), On Radiation: One Lecture (40 pages)
Tyndall, J. (1868), Heat : A mode of motion, (1869 edition) D. Appleton, New York
Tyndall, J. (1869), Natural Philosophy in Easy Lessons (180 pages) (a physics book intended for use in secondary schools)
Tyndall, J. (1870), Faraday as a discoverer, Longmans, Green, London
Tyndall, J. (1870), Three Scientific Addresses by Prof. John Tyndall (75 pages)
Tyndall, J. (1870), Notes of a Course of Nine Lectures on Light (80 pages)
Tyndall, J. (1870), Notes of a Course of Seven Lectures on Electrical Phenomena and Theories (50 pages)
Tyndall, J. (1870), Researches on diamagnetism and magne-crystallic action: including the question of diamagnetic polarity, (a compilation of 1850s research reports), Longmans, Green, London
Tyndall, J. (1871), Hours of exercise in the Alps, Longmans, Green, and Co., London
Tyndall, J. (1871), Fragments of Science: A Series of Detached Essays, Lectures, and Reviews, (1872 edition), Longmans, Green, London
Tyndall, J. (1872), Contributions to Molecular Physics in the Domain of Radiant Heat, (a compilation of 1860s research reports), (1873 edition), D. Appleton and Company, New York
Tyndall, J. (1873), The forms of water in clouds & rivers, ice & glaciers, H. S. King & Co., London
Tyndall, J. (1873), Six Lectures on Light (290 pages)
Tyndall, J. (1876), Lessons in Electricity at the Royal Institution (100 pages), (intended for secondary school students)
Tyndall, J. (1878), Sound; delivered in eight lectures, (1969 edition), Greenwood Press, New York
Tyndall, J. (1882), Essays on the floating matter of the air, in relation to putrefaction and infection, D. Appleton, New York
Tyndall, J. (1887), Light and electricity: notes of two courses of lectures before the Royal Institution of Great Britain, D. Appleton and Company, New York
Tyndall, J. (1892), New Fragments (miscellaneous essays for a broad audience), D. Appleton, New York
See also
Ice sheet dynamics
Greenhouse gas
John Tyndall's system for measuring radiant heat absorption in gases
Notes
Sources
Biographies of John Tyndall
430 pages. This is the "official" biography.
William Tulloch Jeans wrote a 100-page biography of Professor Tyndall in 1887 (the year Tyndall retired from the Royal Institution). Downloadable. See also The Lives of Electricians: Professors Tyndall, Wheatstone, and Morse. (1887, Whittaker & Co.)
Louisa Charlotte Tyndall, his wife, wrote an 8-page biography of John Tyndall that was published in 1899 in Dictionary of National Biography (volume 57). It is readable online (and a 1903 republication of the same biography is also readable online).
Edward Frankland, a longtime friend, wrote a 16-page biography of John Tyndall as an obituary in 1894 in a scientific journal. It is readable online.
Gives an account of Tyndall's vocational development prior to 1853.
220 pages.
Arthur Whitmore Smith, a professor of physics, wrote a 10-page biography of John Tyndall in 1920 in a scientific monthly. Readable online.
John Walter Gregory, a naturalist, wrote a 9-page obituary of John Tyndall in 1894 in a natural science journal. Readable online.
An early, 8-page profile of John Tyndall appeared in 1864 in Portraits of Men of Eminence in Literature, Science and Art, Volume II, pages 25–32.
A brief profile of Tyndall based on information supplied by Tyndall himself appeared in 1874.
Claud Schuster, John Tyndall as a Mountaineer, 56-page essay included in Schuster's book Postscript to Adventure, year 1950 (New Alpine Library: Eyre & Spottiswoode, London).
The first major biography of Tyndall since 1945.
Further reading
External links
A blog maintained by a historian who is involved in transcribing Tyndall's letters.
The Tyndall Correspondence Project website
1820 births
1893 deaths
Atmospheric physicists
Experimental physicists
University of Marburg alumni
19th-century Irish physicists
Optical physicists
Glaciologists
Irish mountain climbers
Scientists from County Carlow
Royal Medal winners
Fellows of the Royal Society
Drug-related deaths in England
People from Leighlinbridge | John Tyndall | Physics | 7,937 |
3,587,430 | https://en.wikipedia.org/wiki/Pore%20water%20pressure | Pore water pressure (sometimes abbreviated to pwp) refers to the pressure of groundwater held within a soil or rock, in gaps between particles (pores). Pore water pressures below the phreatic level of the groundwater are measured with piezometers. The vertical pore water pressure distribution in aquifers can generally be assumed to be close to hydrostatic.
In the unsaturated ("vadose") zone, the pore pressure is determined by capillarity and is also referred to as tension, suction, or matric pressure. Pore water pressures under unsaturated conditions are measured with tensiometers, which operate by allowing the pore water to come into equilibrium with a reference pressure indicator through a permeable ceramic cup placed in contact with the soil.
Pore water pressure is vital in calculating the stress state in the ground in soil mechanics, through Terzaghi's expression for the effective stress of the soil.
General principles
Pressure develops due to:
Water elevation difference: water flowing from a higher elevation to a lower elevation and causing a velocity head, or with water flow, as exemplified in Bernoulli's energy equations.
Hydrostatic water pressure: resulting from the weight of material above the point measured.
Osmotic pressure: inhomogeneous aggregation of ion concentrations, which causes a force in water particles as they attract by the molecular laws of attraction.
Absorption pressure: attraction of surrounding soil particles to one another by adsorbed water films.
Matric suction: the defining trait of unsaturated soil, this term corresponds to the pressure dry soil exerts on the surrounding material to equalise the moisture content in the overall block of soil and is defined as the difference between pore air pressure, ua, and pore water pressure, uw.
Below the water table
The buoyancy effects of water have a large impact on certain soil properties, such as the effective stress present at any point in a soil medium. Consider an arbitrary point five meters below the ground surface. In dry soil, particles at this point experience a total overhead stress equal to the depth underground (5 meters) multiplied by the specific weight of the soil. However, when the local water table lies within those five meters, the total stress felt five meters below the surface is decreased by the product of the height of the water column standing above that point (i.e., the portion of the five meters lying below the water table) and the specific weight of water, 9.81 kN/m³. The resulting quantity is called the effective stress of the soil, equal to the difference between the soil's total stress and its pore water pressure. The pore water pressure is therefore essential in differentiating a soil's total stress from its effective stress. A correct representation of stress in the soil is necessary for accurate field calculations in a variety of engineering trades.
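As a rough illustration of this calculation (a minimal Python sketch; the saturated soil unit weight of 20 kN/m³ and the 2 m water-table depth are hypothetical placeholder values, not figures from the article):

GAMMA_W = 9.81  # unit weight of water, kN/m3

def stresses_at_depth(depth_m, water_table_depth_m, soil_unit_weight_kn_m3=20.0):
    # Total vertical (overburden) stress, using one unit weight for the whole profile
    # (a simplification; real profiles use different unit weights above and below the water table).
    total = soil_unit_weight_kn_m3 * depth_m
    # Hydrostatic pore water pressure: height of water standing above the point times gamma_w.
    head = max(depth_m - water_table_depth_m, 0.0)
    pore = GAMMA_W * head
    # Terzaghi's effective stress: sigma' = sigma - u.
    return total, pore, total - pore

total, pore, effective = stresses_at_depth(5.0, 2.0)
print(total, pore, effective)  # 100.0, 29.43, 70.57 (all in kPa)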
Equation for calculation
When there is no flow, the pore pressure at depth, hw, below the water surface is:
ps = gw · hw,
where:
ps is the saturated pore water pressure (kPa)
gw is the unit weight of water (kN/m3),
hw is the depth below the water table (m),
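In code, the hydrostatic expression above is a one-liner (a minimal sketch; the 3 m depth used in the example call is an arbitrary illustrative value):

def saturated_pore_pressure(h_w_m, gamma_w_kn_m3=9.81):
    # ps = gw * hw: hydrostatic pore water pressure (kPa) at h_w_m metres below the water table.
    return gamma_w_kn_m3 * h_w_m

print(saturated_pore_pressure(3.0))  # about 29.4 kPa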
Measurement methods and standards
The standard method for measuring pore water pressure below the water table employs a piezometer, which measures the height to which a column of the liquid rises against gravity; i.e., the static pressure (or piezometric head) of groundwater at a specific depth. Piezometers often employ electronic pressure transducers to provide data. The United States Bureau of Reclamation has a standard for monitoring water pressure in a rock mass with piezometers. It cites ASTM D4750, "Standard Test Method for Determining Subsurface Liquid Levels in a Borehole or Monitoring Well (Observation Well)".
Above the water table
At any point above the water table, in the vadose zone, the effective stress is approximately equal to the total stress, as proven by Terzaghi's principle. Realistically, the effective stress is greater than the total stress, as the pore water pressure in these partially saturated soils is actually negative. This is primarily due to the surface tension of pore water in voids throughout the vadose zone causing a suction effect on surrounding particles, i.e. matric suction. This capillary action is the "upward movement of water through the vadose zone" (Coduto, 266). Increased water infiltration, such as that caused by heavy rainfall, brings about a reduction in matric suction, following the relationship described by the soil water characteristic curve (SWCC), resulting in a reduction of the soil's shear strength, and reduced slope stability. Capillary effects in soil are more complex than in free water due to the randomly connected void space and particle interference through which to flow; regardless, the height of this zone of capillary rise, where negative pore water pressure generally peaks, can be closely approximated by a simple equation. The height of capillary rise is inversely proportional to the diameter of void space in contact with water. Therefore, the smaller the void space, the higher water will rise due to tension forces. Sandy soils consist of coarser material with more room for voids, and therefore tend to have a much shallower capillary zone than do more cohesive soils, such as clays and silts.
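The article does not state which equation is meant; one standard approximation for an idealised cylindrical pore is Jurin's law, h = 4·T·cos(α) / (ρ·g·d). The sketch below assumes water properties at roughly 20 °C (surface tension T ≈ 0.0728 N/m) and a zero contact angle; these values and the example pore diameters are assumptions for illustration only.

import math

def capillary_rise_m(pore_diameter_m, surface_tension_n_m=0.0728,
                     contact_angle_deg=0.0, water_density_kg_m3=1000.0, g=9.81):
    # Jurin's law for an idealised tube: h = 4 T cos(alpha) / (rho g d).
    # Real soil voids are irregular, so this only indicates the trend: smaller voids, higher rise.
    return (4.0 * surface_tension_n_m * math.cos(math.radians(contact_angle_deg))
            / (water_density_kg_m3 * g * pore_diameter_m))

print(capillary_rise_m(1e-4))  # ~0.30 m for a 0.1 mm (sand-sized) pore
print(capillary_rise_m(1e-6))  # ~30 m for a 1 micrometre (clay-sized) pore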
Equation for calculation
If the water table is at depth dw in fine-grained soils, then the pore pressure at the ground surface is:
pg = -gw · dw,
where:
pg is the unsaturated pore water pressure (kPa) at ground level,
gw is the unit weight of water (kN/m3),
dw is the depth of the water table (m),
and the pore pressure at depth, z, below the surface is:
pu = -gw · (dw - z),
where:
pu is the unsaturated pore water pressure (kPa) at depth z below ground level,
z is depth below ground level.
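Combining the saturated and unsaturated expressions into one sign convention (negative values denote suction above the water table) gives the hedged sketch below; the depths used in the example calls are arbitrary, and the negative linear profile above the water table assumes the hydrostatic, no-flow, fine-grained conditions described above.

def pore_pressure_kpa(z_m, water_table_depth_m, gamma_w_kn_m3=9.81):
    # Below the water table this reduces to ps = gw * hw; above it, to pu = -gw * (dw - z).
    return gamma_w_kn_m3 * (z_m - water_table_depth_m)

print(pore_pressure_kpa(0.0, 2.0))  # about -19.6 kPa at the ground surface (suction)
print(pore_pressure_kpa(5.0, 2.0))  # about +29.4 kPa, 3 m below the water table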
Measurement methods and standards
A tensiometer is an instrument used to determine the matric water potential (soil moisture tension) in the vadose zone. An ISO standard, "Soil quality — Determination of pore water pressure — Tensiometer method", ISO 11276:1995, "describes methods for the determination of pore water pressure (point measurements) in unsaturated and saturated soil using tensiometers. Applicable for in situ measurements in the field and, e. g. soil cores, used in experimental examinations." It defines pore water pressure as "the sum of matric and pneumatic pressures".
Matric pressure
The amount of work that must be done in order to transport reversibly and isothermally an infinitesimal quantity of water, identical in composition to the soil water, from a pool at the elevation and the external gas pressure of the point under consideration, to the soil water at the point under consideration, divided by the volume of water transported.
Pneumatic pressure
The amount of work that must be done in order to transport reversibly and isothermally an infinitesimal quantity of water, identical in composition to the soil water, from a pool at atmospheric pressure and at the elevation of the point under consideration, to a similar pool at an external gas pressure of the point under consideration, divided by the volume of water transported.
See also
Internal erosion
Frost weathering
Geotechnical engineering
Water potential
Well engineering
References
Soil mechanics | Pore water pressure | Physics | 1,534 |
72,722,473 | https://en.wikipedia.org/wiki/2023%20FAA%20system%20outage | On January 11, 2023, U.S. flights were grounded or delayed as the Federal Aviation Administration (FAA) attempted to fix a system outage. The FAA paused all flight departures between 7:30 a.m. and 9 a.m. ET. Flights already in the air were allowed to continue to their destinations. Around 8:30 a.m. ET, flights began to resume departures. The outage marked the first time since September 11, 2001, that the FAA issued a nationwide ground stop in the United States.
A preliminary investigation of the incident demonstrated to FAA investigators that a "damaged database file" may have caused the outage of the FAA's Notice to Air Missions (NOTAM) system, responsible for notifying pilots of safety hazards. The FAA told CNN that there was "no evidence of a cyberattack" on its NOTAM system.
Incident
On January 10, 2023, the NOTAM system stopped processing updates at 3:28 p.m. ET, and the FAA issued the first Air Traffic Control System Command Center Advisory about the incident at 7:47 p.m. ET.
At 7:30 a.m. ET on January 11, the FAA ordered airlines to pause all domestic departures after its pilot-alerting Notice to Air Missions (NOTAM) system went offline overnight, causing extensive disruption. Around 8:30 a.m. ET, flights were beginning to resume departures after the FAA terminated the NOTAM outage advisory, and departures at other airports were expected to resume by 9 a.m. ET. However, once the ground stop was lifted, the airlines were free to implement their own ground delay programs, potentially leading to further timetable issues.
Aftermath
As of 8:07 a.m. ET, a total of 32,578 flights within, into or out of the United States had been delayed, and another 409 had been canceled.
After the incident, shares of U.S. carriers fell in premarket trading: Southwest Airlines was down 2.4%, while Delta Air Lines Inc, United Airlines and American Airlines were down about 1%.
Delta Air Lines reported that it had a working backup to the FAA system, but decided not to use it. Delta Air Lines CEO Ed Bastian stated that they didn't use the backup system "out of deference" to the FAA and allowing the FAA to make the decisions.
The FAA adopted new procedures for maintenance of the NOTAM system to prevent future outages.
Investigation
On January 13, 2023, the FAA stated that preliminary analysis of the outage indicated that it was caused by the failure of FAA personnel to follow proper procedures. The incident happened during routine scheduled maintenance. According to the FAA, one engineer mistakenly "replaced one file with another", not realizing that a mistake had been made. FAA officials stated that it was an "honest mistake that cost the country millions." It was later determined that a contractor from Spatial Front, Inc. had unintentionally deleted files while "working to correct synchronization between the live primary database and a backup database."
Reactions
President Joe Biden was briefed on the FAA system outage. The White House said there was no evidence of a cyberattack in relation to the system outage, but the president has asked for an investigation.
House Transportation Committee Chair Sam Graves (R-MO) and Rick Larsen (D-WA) stated that the committee intended to conduct vigorous oversight of the Department of Transportation's plan to prevent such disruption from happening again. A group of more than 120 U.S lawmakers also told the FAA that such incidents were "completely unacceptable."
See also
2023 Philippine airspace closure, a similar incident that occurred ten days prior.
References
Aviation accidents and incidents in the United States in 2023
January 2023 events in the United States
Federal Aviation Administration
Technological failures | 2023 FAA system outage | Technology | 794 |
37,284,632 | https://en.wikipedia.org/wiki/Ethylene%20diurea | Ethylene diurea (EDU) is an organic compound with the formula (CH2NHCONH2)2. It is a white solid.
The compound has attracted interest as a potential antiozonant for crop protection. With respect to ozone damage to crops, EDU appears either to prevent the harmful effects of ozone or to stimulate plant growth. Trees treated with EDU were significantly healthier, showing better leaf longevity and water-use efficiency.
The effectiveness of EDU depends upon several environmental factors.
References
Ureas | Ethylene diurea | Chemistry | 113 |
6,591,796 | https://en.wikipedia.org/wiki/Support%20%28measure%20theory%29 | In mathematics, the support (sometimes topological support or spectrum) of a measure on a measurable topological space is a precise notion of where in the space the measure "lives". It is defined to be the largest (closed) subset of the space for which every open neighbourhood of every point of the set has positive measure.
Motivation
A (non-negative) measure μ on a measurable space (X, Σ) is really a function μ : Σ → [0, +∞]. Therefore, in terms of the usual definition of support, the support of μ is a subset of the σ-algebra Σ:
supp(μ) := cl {A ∈ Σ : μ(A) ≠ 0},
where cl denotes set closure. However, this definition is somewhat unsatisfactory: we use the notion of closure, but we do not even have a topology on Σ. What we really want to know is where in the space X the measure μ is non-zero. Consider two examples:
Lebesgue measure λ on the real line ℝ. It seems clear that λ "lives on" the whole of the real line.
A Dirac measure δp at some point p ∈ ℝ. Again, intuition suggests that the measure "lives at" the point p and nowhere else.
In light of these two examples, we can reject the following candidate definitions in favour of the one in the next section:
We could remove the points where μ is zero, and take the support to be the remainder {x ∈ X : μ({x}) ≠ 0}. This might work for the Dirac measure δp, but it would definitely not work for λ: since the Lebesgue measure of any singleton is zero, this definition would give λ empty support.
By comparison with the notion of strict positivity of measures, we could take the support to be the set of all points with a neighbourhood of positive measure: {x ∈ X : there is an open neighbourhood Nx of x with μ(Nx) > 0} (or the closure of this). It is also too simplistic: by taking Nx = X for all points x, this would make the support of every measure except the zero measure the whole of X.
However, the idea of "local strict positivity" is not too far from a workable definition.
Definition
Let (X, T) be a topological space; let B(T) denote the Borel σ-algebra on X, i.e. the smallest σ-algebra on X that contains all open sets U ∈ T. Let μ be a measure on (X, B(T)). Then the support (or spectrum) of μ is defined as the set of all points x in X for which every open neighbourhood Nx of x has positive measure:
supp(μ) := {x ∈ X : μ(Nx) > 0 for every open neighbourhood Nx of x}.
Some authors prefer to take the closure of the above set. However, this is not necessary: see "Properties" below.
An equivalent definition of support is as the largest C ∈ B(T) (with respect to inclusion) such that every open set which has non-empty intersection with C has positive measure, i.e. the largest C such that:
for every open U ∈ T, U ∩ C ≠ ∅ implies μ(U ∩ C) > 0.
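As an informal numerical illustration only (this is not part of the formal definition), one can probe the "every neighbourhood has positive measure" condition for a sample-based approximation of a measure on the real line; the sample size, the fixed neighbourhood radius and the test points below are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
sample = rng.uniform(0.0, 1.0, size=100_000)  # empirical stand-in for the uniform measure on (0, 1)

def looks_like_support_point(x, points, eps=0.01):
    # Crude, fixed-scale proxy for the definition: does the eps-neighbourhood of x
    # carry positive empirical mass?
    return np.mean(np.abs(points - x) < eps) > 0.0

print(looks_like_support_point(0.5, sample))  # True: interior point of (0, 1)
print(looks_like_support_point(0.0, sample))  # True: boundary point, small neighbourhoods still meet (0, 1)
print(looks_like_support_point(1.5, sample))  # False: small neighbourhoods carry no mass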
Signed and complex measures
This definition can be extended to signed and complex measures.
Suppose that μ is a signed measure. Use the Hahn decomposition theorem to write
μ = μ⁺ − μ⁻,
where μ⁺ and μ⁻ are both non-negative measures. Then the support of μ is defined to be
supp(μ) := supp(μ⁺) ∪ supp(μ⁻).
Similarly, if μ is a complex measure, the support of μ is defined to be the union of the supports of its real and imaginary parts.
Properties
holds.
A measure μ on X is strictly positive if and only if it has support supp(μ) = X. If μ is strictly positive and x ∈ X is arbitrary, then any open neighbourhood of x, since it is a non-empty open set, has positive measure; hence, x ∈ supp(μ), so supp(μ) = X. Conversely, if supp(μ) = X, then every non-empty open set (being an open neighbourhood of some point in its interior, which is also a point of the support) has positive measure; hence, μ is strictly positive.
The support of a measure μ is closed in X, as its complement is the union of the open sets of measure 0.
In general the support of a nonzero measure may be empty: see the examples below. However, if X is a Hausdorff topological space and μ is a Radon measure, a Borel set A outside the support has measure zero: A ∩ supp(μ) = ∅ implies μ(A) = 0.
The converse is true if A is open, but it is not true in general: it fails if there exists a point x ∈ supp(μ) such that μ({x}) = 0 (e.g. Lebesgue measure). Thus, one does not need to "integrate outside the support": for any measurable function f with values in ℝ or ℂ, ∫_X f dμ = ∫_supp(μ) f dμ.
The concept of support of a measure and that of spectrum of a self-adjoint linear operator on a Hilbert space are closely related. Indeed, if μ is a regular Borel measure on the line ℝ, then the multiplication operator (Af)(x) = x·f(x) is self-adjoint on its natural domain
D(A) = {f ∈ L²(ℝ, dμ) : x·f(x) ∈ L²(ℝ, dμ)},
and its spectrum coincides with the essential range of the identity function x ↦ x, which is precisely the support of μ.
Examples
Lebesgue measure
In the case of Lebesgue measure λ on the real line ℝ, consider an arbitrary point x ∈ ℝ. Then any open neighbourhood N of x must contain some open interval (x − ε, x + ε) for some ε > 0. This interval has Lebesgue measure 2ε > 0, so λ(N) ≥ 2ε > 0. Since x was arbitrary, supp(λ) = ℝ.
Dirac measure
In the case of Dirac measure δp, let x ∈ ℝ and consider two cases:
if x = p, then every open neighbourhood Nx of x contains p, so δp(Nx) = 1 > 0;
on the other hand, if x ≠ p, then there exists a sufficiently small open ball B around x that does not contain p, so δp(B) = 0.
We conclude that supp(δp) is the closure of the singleton set {p}, which is {p} itself.
In fact, a measure μ on the real line is a Dirac measure δp for some point p if and only if the support of μ is the singleton set {p}. Consequently, Dirac measure on the real line is the unique measure with zero variance (provided that the measure has variance at all).
A uniform distribution
Consider the measure μ on the real line ℝ defined by
μ(A) := λ(A ∩ (0, 1)),
i.e. a uniform measure on the open interval (0, 1). A similar argument to the Dirac measure example shows that supp(μ) = [0, 1]. Note that the boundary points 0 and 1 lie in the support: any open set containing 0 (or 1) contains an open interval about 0 (or 1), which must intersect (0, 1), and so must have positive μ-measure.
A nontrivial measure whose support is empty
The space of all countable ordinals with the topology generated by "open intervals" is a locally compact Hausdorff space. The measure ("Dieudonné measure") that assigns measure 1 to Borel sets containing an unbounded closed subset and assigns 0 to other Borel sets is a Borel probability measure whose support is empty.
A nontrivial measure whose support has measure zero
On a compact Hausdorff space the support of a non-zero measure is always non-empty, but may have measure 0. An example of this is given by adding the first uncountable ordinal Ω to the previous example: the support of the measure is the single point {Ω}, which has measure 0.
References
Measures (measure theory)
Measure theory | Support (measure theory) | Physics,Mathematics | 1,262 |
19,097,104 | https://en.wikipedia.org/wiki/NEFA%20%28drug%29 | NEFA is a moderate affinity NMDA antagonist (IC50 = 0.51 μM). It is a structural analog of phencyclidine. It was first synthesized by a team at Parke-Davis in the late 1950s.
References
Dissociative drugs
NMDA receptor antagonists
Fluorenes
Amines | NEFA (drug) | Chemistry | 67 |
74,011,849 | https://en.wikipedia.org/wiki/Centre%20Port | Centre Port is a proposed development across The Wash in Eastern England, which would link Norfolk and Lincolnshire by road. The plan is to link Hunstanton in Norfolk with Gibraltar Point in Lincolnshire, creating a road with a port and a railway at the midway point. Additionally, the development would act as a tidal barrage to prevent sea flooding, and would use tidal power to generate enough electricity to power 600,000 homes. Whilst no formal plans have yet been submitted, the scheme has come under widespread criticism from those living in the area and from wildlife groups.
History
The Wash is a large tidal area between the counties of Lincolnshire and Norfolk on the eastern coast of England. The Wash is fed by several major watercourses, the main ones being (anticlockwise from the north) the Steeping River, River Witham, River Welland, River Nene and the River Great Ouse. The Nene, Ouse, Welland and Witham collectively discharge a large average flow of fresh water into The Wash. The sea area of The Wash is significantly smaller than it was in the 16th century, as much of the land has been reclaimed - places such as Wisbech, now far inland, were subjected to flooding such as in 1236 when hundreds died in a massive sea-storm. A large swathe of land bordered by Boston in the north, Peterborough in the west, Wisbech and Ely in the south, and King's Lynn in the east, which fringes the coastline of The Wash, is still below sea level.
The proposal to build a barrage across The Wash was first mooted in the 1960s; however, the plan then was that it would involve reclaiming some of the land and diverting fresh water spilling into The Wash to feed water requirements in the south-east of England, particularly for agricultural needs. This would have required a deep sea wall and would have released fresh water for supply every day. The scheme was costed at between £150 million and £287 million, saving an annual bill of £300,000 on sea wall repairs in The Wash area. The project was dropped when the approval for a large on-land reservoir (Rutland Water) passed in Parliament.
Plans were also announced in the 1970s with a barrage much further south in The Wash, but the consultant engineers stated that the project was beyond their technology, something with which a team of Dutch engineers agreed. However, the Wash Water Storage Scheme was developed to determine the project's feasibility, in terms of its geological, and ecological impacts. Further schemes were mooted in 2008, and again in 2019 when the Environment Agency suggested such a plan to protect flooding damage to King's Lynn.
In 2023, The Wash, and other associated wetlands in Britain which are part of the East Atlantic Flyway, were nominated for UNESCO World Heritage status. The Wash estuary has several recognised protections, notably EMS (European Marine Site), NNR, RAMSAR, SAC, SSSI, and SPA.
The 2022 proposal
Developer Port Evo announced plans in November 2022 to develop a tidal barrage across The Wash. This would include a causeway with a road along the entire length of the barrage, and also include:
a container port known as Centre Port, capable of handling ships carrying 23,000 twenty-foot equivalent units
a railway linking onto the Poacher Line in Lincolnshire
a tidal energy scheme to generate electricity for 600,000 homes
sea flood prevention of The Wash area
The port would provide a deep-water trans-shipment point for container traffic by rail. This would feed into the Poacher Line, providing a quicker route into the Midlands and Northern England than the current route from the East Anglian ports. The road would be a dual-carriageway from the container port area to Wainfleet, and a single-carriageway road from the port to Hunstanton in the south, in what would be a 20-minute journey end-to-end. The project has an initial estimate of 2028 for completion.
Support
The energy firm Centrica announced their support and funding towards a feasibility study. The managing director of the firm said "We’re excited to help Centre Port explore their ambitious plans for The Wash. The project represents one of the largest tidal power schemes anywhere in the world and would provide a reliable source of green energy to the UK."
Objections
In November 2022, the National Trust stated "The Wash is one of the most important estuaries in the UK. Therefore, news of a potential new container terminal and tidal scheme in an area designated for its importance to wildlife, is deeply concerning." They also said: "Some bold claims are being made about ecology and we are keen to seek further information on the detailed plans and data to back these up. We are ready to scrutinise these plans and hold developers to account on their promises."
Greenpeace were similarly critical of the effect on wildlife. Their chief scientist was quoted as saying: "Greenpeace remains highly sceptical that a tidal barrage on the Wash is a useful project, and it should certainly not be a priority for government support. Given the environmental impacts, needs for port infrastructure or flood defence should be met in a more targeted way." The RSPB labelled the project as "outlandish, [and] unworkable."
Both the Lincolnshire Wildlife Trust, and the Wild Ken Hill nature reserve in Norfolk object to the scheme. The latter pointed out that "...The way the Wash works is it’s quite a dynamic wilderness, so you get very complex movements of water and sediment which creates a mosaic of mudflats, salt marsh, channels, tidal streams … and that is what makes it so great for wildlife. Over 2 million birds visit a year, it hosts 50% of Europe’s common seals, and it has eels which are critically endangered globally. To interfere with those processes is highly likely to be very damaging."
See also
Cardiff Bay Barrage
Mersey Barrage
Outer Trial Bank - a trial for fresh water storage in The Wash in the 1970s
Severn Barrage
Notes
References
Sources
External links
Map of reclaimed land around The Wash
Tidal barrages
Proposed tidal power stations
Power stations in Lincolnshire
Coastal construction
Geography of Lincolnshire
Flood barriers
Proposed renewable energy power stations in England
Flood control in the United Kingdom
Proposed infrastructure in England | Centre Port | Engineering | 1,330 |
67,822,906 | https://en.wikipedia.org/wiki/MadamePee | madamePee is a mobile female urinal, without contact and without water supply. It is designed to be used at public events such as concerts or music festivals, but also in more durable situations such as construction sites, public gardens, etc.
Context
Female urination at public events is an ongoing issue (see section History in female urinal): differences in needs, conventions and practices translate into a blatant inequality of access between men and women, with longer queues and waiting times for women. Since the beginning of the 20th century, many initiatives have been taken (see female urinal devices) to deal with this problem, including portable individual urinals, men-style urinals adapted to female morphology, unisex urinals, specific cabin urinals, etc. However, anyone who has attended outdoor rock concerts can attest that no standard and durable solution has been found and adopted.
Rationales
Studies have shown that the separation of urination and defecation devices, such as for men, increases the efficiency of women's toilets, in terms of space optimization and service duration; for event planners, this means more devices, used more efficiently, with constant resources.
Implementation in public of female urinals has psychological and social implications, which strongly depend on the cultural environment. The degree of intimacy preservation is an important issue, viewed differently in unisex toilets or in cabin toilets.
Concept
Nathalie des Isnards, upset at missing the show of her favorite rock group because of the time spent reaching the toilets, contacted several designers, installation providers and psychologists to find an industrial solution. Building on previous experience, such as contactless urination devices, madamePee is based on the following premises:
Mobility: devices should be easily installed and uninstalled;
Environmental sustainability: no need for water supply (which adds to the mobility above) and urine collection for fertilizer uses;
Privacy: to meet the needs of various countries and contexts, lightweight cabins with hinged doors, possibly with a veil as a roof.
Several patents have been filed, for example for the urinal itself, which must not retain bad odors after use.
madamePee cabins have been installed at major public events for several years (e.g. Hellfest, Paris Plages, Solidays); they are distributed by major rental companies of mobile sanitary facilities. They are now also installed in countries outside France: Portugal, Belgium, Andorra, Ivory Coast and Canada.
In 2022, Nathalie des Isnards was recognized as "Woman entrepreneur of the year, favorite of the jury" at the "Women in Industry" trophies (Paris 2022) awarded by the magazine "l'Usine Nouvelle".
Developments
The COVID-19 pandemic halted outdoor festivals worldwide in 2019–2020; such festivals had been the first outlet for madamePee female urinals. Since the end of 2021, festivals bringing together hundreds of thousands of participants have again been organized; madamePee urinals were present at major events such as Hellfest 2022 (420,000 tickets sold) and Solidays in Paris.
The pandemic, with its restrictions on access to cafés and bistros, highlighted the need for public toilets for women in cities. Large cities in Western Europe are concerned with installing toilets in public places that are easy to maintain and require no water connection; about ten cities in France are experimenting with madamePee urinals permanently installed in urban areas.
Climate change resulted in extreme drought in Western Europe in 2022, after several unusually dry summers; the use of drinking water in toilets is increasingly questioned, which has become a determining factor in the development of dry toilets (toilets with no connection to the drinking water network).
Finally, human urine can serve as a fertilizer, offering an alternative to chemical fertilizers. Urine collection is not possible in general-purpose toilets; madamePee-type urinals provide undiluted urine, which is collected and transformed.
A version for men has been developed (misterPee 2022) based on the same characteristics as the madamePee urinals: no contact, no water, no need for connection to the sewer.
In 2022, a European standard on "mobile non-sewer-connected toilet cabins" was adopted; it was published in 2023 by AFNOR. It states the requirements for services and products relating to the deployment of sanitary cabins and applies to madamePee's products.
See also
Female urinals
Female urination device
Public toilets
Nathalie_des_Isnards (in French)
Pollee
References
Toilets
Sanitation
Urinals
Feminine hygiene | MadamePee | Biology | 937 |
3,321,820 | https://en.wikipedia.org/wiki/Actiosaurus | Actiosaurus (meaning "coast lizard") is an extinct genus of reptile first described by Henri Sauvage in 1883 from Antully bonebed, Autun (Triassic of France). The type species is A. gaudryi (commonly misspelled A. gaudrii after Boulenger). Little is known of it, and it is considered a nomen dubium. Actiosaurus was originally described as a dinosaur in 1883 and was reinterpreted as an ichthyosaur in 1908. Actiosaurus may instead represent the remains of a choristodere. Fischer et al. (2014) considered A. gaudryi to be a species inquirenda, and noted the similarity of its bones to the limb bones of choristoderes.
See also
Rachitrema
List of ichthyosaurs
Timeline of ichthyosaur research
References
External links
Actiosaurus is a choristodere not an ichthyosaur.
Nomina dubia
Fossil taxa described in 1883
Fossils of France
Choristodera
Prehistoric marine reptiles | Actiosaurus | Biology | 226 |
3,303,019 | https://en.wikipedia.org/wiki/Robinson%E2%80%93Dadson%20curves | The Robinson–Dadson curves are one of many sets of equal-loudness contours for the human ear, determined experimentally by D. W. Robinson and R. S. Dadson.
Until recently, it was common to see the term Fletcher–Munson used to refer to equal-loudness contours generally, even though the re-determination carried out by Robinson and Dadson in 1956 became the basis for an ISO standard, ISO 226, which was only recently revised.
It is now better to use the term equal-loudness contours as the generic term, especially as a recent survey by ISO redefined the curves in a new standard, ISO 226:2003.
According to the ISO report, the Robinson–Dadson results were the odd one out, differing more from the current standard than did the Fletcher–Munson curves. It comments that it is fortunate that the 40-phon Fletcher–Munson curve, on which the A-weighting standard was based, turns out to have been in good agreement with modern determinations.
The article also comments on the large differences apparent in the low-frequency region, which remain unexplained. Possible explanations are:
The equipment used was not properly calibrated.
The criteria used for judging equal loudness (which is tricky) differed.
Different races actually vary greatly in this respect (possible, and most recent determinations were by the Japanese).
Subjects were not properly rested for days in advance, or were exposed to loud noise while travelling to the tests, which tensed the tensor tympani and stapedius muscles that control low-frequency mechanical coupling.
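The A-weighting standard mentioned above is commonly expressed as a closed-form magnitude response. The following Python sketch is illustrative only (the constants are those of the widely published analytic form of the A-weighting curve, normalised to 0 dB at 1 kHz; the function name is chosen for this example) and shows how strongly low frequencies are attenuated, mirroring the reduced low-frequency sensitivity of the 40-phon contour.

```python
import math

def a_weighting_db(f: float) -> float:
    """Approximate A-weighting gain in dB at frequency f (Hz).

    Uses the standard analytic magnitude response; the +2.0 dB offset
    normalises the curve to 0 dB at 1 kHz.
    """
    ra = (12194.0 ** 2 * f ** 4) / (
        (f ** 2 + 20.6 ** 2)
        * math.sqrt((f ** 2 + 107.7 ** 2) * (f ** 2 + 737.9 ** 2))
        * (f ** 2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.0

print(round(a_weighting_db(100), 1))   # about -19 dB: low frequencies are strongly attenuated
print(round(a_weighting_db(1000), 1))  # about 0 dB, by construction
```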
See also
A-weighting
ITU-R 468 noise weighting
References
External links
ISO Standard
Fletcher–Munson is not Robinson–Dadson
Full Revision of International Standards for Equal-Loudness Level Contours (ISO 226)
Hearing curves and on-line hearing test
Equal-loudness contours by Robinson and Dadson
Acoustics
Hearing
Audio engineering
Sound
Psychoacoustics | Robinson–Dadson curves | Physics,Engineering | 407 |
63,505,865 | https://en.wikipedia.org/wiki/IBM%20Enterprise%20Systems%20Architecture | IBM Enterprise Systems Architecture is an instruction set architecture introduced by IBM as ESA/370 in 1988. It is based on the IBM System/370-XA architecture.
It extended the dual-address-space mechanism introduced in later IBM System/370 models by adding a new mode in which general-purpose registers 1–15 are each associated with an access register referring to an address space; instruction operands whose addresses are computed with a given general-purpose register as a base register reside in the address space referred to by the corresponding access register.
The later ESA/390, introduced in 1990, added a facility to allow device descriptions to be read using channel commands and, in later models, added instructions to perform IEEE 754 floating-point operations and increased the number of floating-point registers from 4 to 16.
Enterprise Systems Architecture is essentially a 32-bit architecture; as with System/360, System/370, and 370-XA, the general-purpose registers are 32 bits long, and the arithmetic instructions support 32-bit arithmetic. Only the addressing of byte-addressable real memory (Central Storage) and virtual storage is limited to 31 bits, as is the case with 370-XA. (IBM reserved the most significant bit to easily support applications expecting 24-bit addressing, as well as to sidestep a problem with extending two instructions to handle 32-bit unsigned addresses.) It maintains problem-state backward compatibility dating back to 1964 with the 24-bit-address/32-bit-data (System/360 and System/370) and the subsequent 24/31-bit-address/32-bit-data architecture (System/370-XA). However, the I/O subsystem is based on System/370 Extended Architecture (S/370-XA), not on the original S/370 I/O instructions.
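As an illustration of the reserved high-order bit described above, the Python sketch below (purely illustrative, not IBM code; the constant and function names are invented for this example) shows how the same 32-bit address word yields a 24-bit or 31-bit effective address depending on the addressing mode in effect.

```python
AMODE31_MASK = 0x7FFF_FFFF  # low-order 31 bits carry the address
AMODE24_MASK = 0x00FF_FFFF  # low-order 24 bits, for programs expecting 24-bit addressing

def effective_address(word: int, amode31: bool) -> int:
    """Illustrative only: extract the usable address from a 32-bit word.

    The most significant bit is never part of the address, which is why
    real and virtual addressing stop at 31 bits even though the registers
    and arithmetic are 32 bits wide.
    """
    return word & (AMODE31_MASK if amode31 else AMODE24_MASK)

word = 0x81F1_2345  # one 32-bit word, interpreted under the two addressing modes
print(hex(effective_address(word, amode31=True)))   # 0x1f12345
print(hex(effective_address(word, amode31=False)))  # 0xf12345
```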
ESA/370 architecture
On February 15, 1988, IBM announced Enterprise Systems Architecture/370 (ESA/370) for 3090 enhanced ("E") models and for 4381 model groups 91E and 92E.
In addition to the primary-space and secondary-space addressing modes that later System/370 models and System/370 Extended Architecture (S/370-XA) models support, ESA has an access-register mode in which each use of general registers 1–15 as a base register uses an associated access register to select an address space. In addition to the normal address spaces that machines with the dual-address-space facility support, ESA also allows data spaces, which contain no executable code.
A machine may be divided into Logical Partitions (LPARs), each with its own virtual system memory so that multiple operating systems may run concurrently on one machine.
ESA/390 architecture
An important capability to form a Parallel Sysplex was added to the architecture in 1994.
ESA/390 also extends the Sense ID command to provide additional information about a device, and additional device-dependent channel commands, the command codes for which are provided in the Sense ID information, to allow device description information to be fetched from a device.
Starting with the System/390 G5, IBM introduced:
the basic floating-point extensions facility, which increases the number of floating-point registers from 4 (0, 2, 4, 6) to 16 (0-15);
the binary floating-point (BFP) extensions facility, which supports IEEE 754 binary floating-point numbers, with an additional floating-point control (FPC) register to support IEEE 754 modes and errors;
the floating-point support (FPS) extensions facility, which adds instructions to load and store floating-point numbers regardless of whether they're in hexadecimal or IEEE 754 format and to convert between those formats;
the hexadecimal floating-point (HFP) extensions facility, which adds new hexadecimal floating-point instructions corresponding to some binary floating-point instructions.
Some PC-based IBM-compatible mainframes providing ESA/390 processors in smaller machines have been released over time, but they are intended only for software development.
New facilities
ESA/390 adds the following facilities
All models
Access-list-controlled protection
Some models
Concurrent sense
PER 2
Storage-protection override
Move-page facility 2
Square root
String instruction
Suppression on protection with virtual-address enhancement
Set address space control fast
Subspace group
Called-space identification
Checksum
Compare and move extended
Immediate and relative instruction
Branch and set authority
Perform locked operation
Additional floating-point
Program call fast
Resume program
Trap
Extended TOD clock
TOD-clock-control override
Store system information
Extended translation 1
Extended translation 2
z/Architecture (certain instructions)
Enhanced input/output
New channel commands
The following channel commands are new, or have their functionality changed, in ESA/390:
Notes
References
S370-ESA
S/390-ESA
Enterprise Systems Architecture
Computing platforms
Computer-related introductions in 1988
2000s disestablishments
32-bit computers | IBM Enterprise Systems Architecture | Technology | 1,011 |
1,048,987 | https://en.wikipedia.org/wiki/Self-incompatibility | Self-incompatibility (SI) is a general name for several genetic mechanisms that prevent self-fertilization in sexually reproducing organisms, and thus encourage outcrossing and allogamy. It is contrasted with separation of sexes among individuals (dioecy), and their various modes of spatial (herkogamy) and temporal (dichogamy) separation.
SI is best-studied and particularly common in flowering plants, although it is present in other groups, including sea squirts and fungi. In plants with SI, when a pollen grain produced in a plant reaches a stigma of the same plant or another plant with a matching allele or genotype, the process of pollen germination, pollen-tube growth, ovule fertilization, or embryo development is inhibited, and consequently no seeds are produced. SI is one of the most important means of preventing inbreeding and promoting the generation of new genotypes in plants and it is considered one of the causes of the spread and success of angiosperms on Earth.
Mechanisms of single-locus self-incompatibility
The best studied mechanisms of SI act by inhibiting the germination of pollen on stigmas, or the elongation of the pollen tube in the styles. These mechanisms are based on protein-protein interactions, and the best-understood mechanisms are controlled by a single locus termed S, which has many different alleles in the species population. Despite their similar morphological and genetic manifestations, these mechanisms have evolved independently, and are based on different cellular components; therefore, each mechanism has its own, unique S-genes.
The S-locus contains two basic protein coding regions – one expressed in the pistil, and the other in the anther and/or pollen (referred to as the female and male determinants, respectively). Due to their physical proximity, these are genetically linked, and are inherited as a unit. The units are called S-haplotypes. The translation products of the two regions of the S-locus are two proteins which, by interacting with one another, lead to the arrest of pollen germination and/or pollen tube elongation, and thereby generate an SI response, preventing fertilization. However, when a female determinant interacts with a male determinant of a different haplotype, no SI is created, and fertilization ensues. This is a simplistic description of the general mechanism of SI, which is more complicated, and in some species the S-haplotype contains more than two protein coding regions.
Following is a detailed description of the different known mechanisms of SI in plants.
Gametophytic self-incompatibility (GSI)
In gametophytic self-incompatibility (GSI), the SI phenotype of the pollen is determined by its own gametophytic haploid genotype. This is the most common type of SI. Two different mechanisms of GSI have been described in detail at the molecular level, and their description follows.
The RNase mechanism
In this mechanism, pollen tube elongation is halted when it has proceeded approximately one third of the way through the style. The female component, a ribonuclease protein termed S-RNase, probably causes degradation of the ribosomal RNA (rRNA) inside the pollen tube when the male and female S alleles are identical; consequently, pollen tube elongation is arrested and the pollen grain dies.
Within a decade of the initial confirmation of their role in GSI, proteins belonging to the same RNase gene family were also found to cause pollen rejection in species of Rosaceae and Plantaginaceae. Despite initial uncertainty about the common ancestry of RNase-based SI in these distantly related plant families, phylogenetic studies and the finding of shared male determinants (F-box proteins) strongly supported homology across eudicots. Therefore, this mechanism likely arose approximately 90 million years ago, and it is the inferred ancestral state for approximately 50% of all plant species.
In the past decade, the predictions about the wide distribution of this mechanism of SI have been confirmed, lending additional support to its single ancient origin. Specifically, a style-expressed T2/S-RNase gene and pollen-expressed F-box genes are now implicated in causing SI among the members of Rubiaceae, Rutaceae, and Cactaceae. Therefore, other mechanisms of SI in eudicots are thought to be derived, in some cases relatively recently. One particularly interesting case is the Prunus SI system, which functions through self-recognition: the cytotoxic activity of the S-RNases is inhibited by default and selectively activated by the pollen partner SFB (S-haplotype-specific F-box protein) upon self-pollination. SI in the other species with S-RNases functions through non-self recognition: the S-RNases are selectively detoxified upon cross-pollination.
The S-glycoprotein mechanism
In this mechanism, pollen growth is inhibited within minutes of its placement on the stigma. The mechanism is described in detail for Papaver rhoeas and so far appears restricted to the plant family Papaveraceae.
The female determinant is a small, extracellular molecule, expressed in the stigma; the identity of the male determinant remains elusive, but it is probably some cell membrane receptor. The interaction between male and female determinants transmits a cellular signal into the pollen tube, resulting in strong influx of calcium cations; this interferes with the intracellular concentration gradient of calcium ions which exists inside the pollen tube, essential for its elongation. The influx of calcium ions arrests tube elongation within 1–2 minutes. At this stage, pollen inhibition is still reversible, and elongation can be resumed by applying certain manipulations, resulting in ovule fertilization.
Subsequently, the cytosolic protein p26, a pyrophosphatase, is inhibited by phosphorylation, possibly resulting in arrest of synthesis of molecular building blocks, required for tube elongation. There is depolymerization and reorganization of actin filaments, within the pollen cytoskeleton. Within 10 minutes from the placement on the stigma, the pollen is committed to a process which ends in its death. At 3–4 hours past pollination, fragmentation of pollen DNA begins, and finally (at 10–14 hours), the cell dies apoptotically.
Sporophytic self-incompatibility (SSI)
In sporophytic self-incompatibility (SSI), the SI phenotype of the pollen is determined by the diploid genotype of the anther (the sporophyte) in which it was created. This form of SI was identified in the families: Brassicaceae, Asteraceae, Convolvulaceae, Betulaceae, Caryophyllaceae, Sterculiaceae and Polemoniaceae. Up to this day, only one mechanism of SSI has been described in detail at the molecular level, in Brassica (Brassicaceae).
Since SSI is determined by a diploid genotype, the pollen and pistil each express the translation products of two different alleles, i.e. two male and two female determinants. Dominance relationships often exist between pairs of alleles, resulting in complicated patterns of compatibility/self-incompatibility. These dominance relationships also allow the generation of individuals homozygous for a recessive S allele.
Compared to a population in which all S alleles are co-dominant, the presence of dominance relationships in the population raises the chances of compatible mating between individuals. The frequency ratio between recessive and dominant S alleles reflects a dynamic balance between reproductive assurance (favoured by recessive alleles) and avoidance of selfing (favoured by dominant alleles).
The SI mechanism in Brassica
As previously mentioned, the SI phenotype of the pollen is determined by the diploid genotype of the anther. In Brassica, the pollen coat, derived from the anther's tapetum tissue, carries the translation products of the two S alleles. These are small, cysteine-rich proteins. The male determinant is termed SCR or SP11, and is expressed in the anther tapetum as well as in the microspore and pollen (i.e. sporophytically). There are possibly up to 100 polymorphs of the S-haplotype in Brassica, and within these there is a dominance hierarchy.
The female determinant of the SI response in Brassica, is a transmembrane protein termed SRK, which has an intracellular kinase domain, and a variable extracellular domain. SRK is expressed in the stigma, and probably functions as a receptor for the SCR/SP11 protein in the pollen coat. Another stigmatic protein, termed SLG, is highly similar in sequence to the SRK protein, and seems to function as a co-receptor for the male determinant, amplifying the SI response.
The interaction between the SRK and SCR/SP11 proteins results in autophosphorylation of the intracellular kinase domain of SRK, and a signal is transmitted into the papilla cell of the stigma. Another protein essential for the SI response is MLPK, a serine-threonine kinase, which is anchored to the plasma membrane from its intracellular side. A downstream signaling cascade leads to proteasomal degradation that produces an SI response.
Other mechanisms of self-incompatibility
These mechanisms have received only limited attention in scientific research. Therefore, they are still poorly understood.
2-locus gametophytic self-incompatibility
The grass subfamily Pooideae, and perhaps all of the family Poaceae, have a gametophytic self-incompatibility system that involves two unlinked loci referred to as S and Z. If the alleles expressed at these two loci in the pollen grain both match the corresponding alleles in the pistil, the pollen grain will be recognized as incompatible. At both loci, S and Z, two male and one female determinant can be found. All four male determinants encode proteins belonging to the same family (DUF247) and are predicted to be membrane-bound. The two female determinants are predicted to be secreted proteins with no protein family membership.
Heteromorphic self-incompatibility
A distinct SI mechanism exists in heterostylous flowers, termed heteromorphic self-incompatibility. This mechanism is probably not evolutionarily related to the more familiar mechanisms, which are differentially defined as homomorphic self-incompatibility.
Almost all heterostylous taxa feature SI to some extent. The loci responsible for SI in heterostylous flowers are strongly linked to the loci responsible for flower polymorphism, and these traits are inherited together. Distyly is determined by a single locus with two alleles; tristyly is determined by two loci, each with two alleles. Heteromorphic SI is sporophytic, i.e. both alleles in the male plant determine the SI response in the pollen. SI loci always contain only two alleles in the population, one of which is dominant over the other in both pollen and pistil. Variance in SI alleles parallels the variance in flower morphs; thus pollen from one morph can fertilize only pistils of the other morph. In tristylous flowers, each flower contains two types of stamens; each stamen produces pollen capable of fertilizing only one flower morph out of the three existing morphs.
A population of a distylous plant contains only two SI genotypes: ss and Ss. Fertilization is possible only between the two genotypes; neither genotype can fertilize itself. This restriction maintains a 1:1 ratio between the two genotypes in the population, and genotypes are usually randomly scattered in space. Tristylous plants contain, in addition to the S locus, the M locus, also with two alleles. The number of possible genotypes is greater here, but a 1:1 ratio exists between individuals of each SI type.
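As a worked illustration of that 1:1 ratio (a sketch written for this article, not taken from the cited literature), the short Python snippet below enumerates the offspring genotypes of the only compatible cross in a distylous population, Ss × ss.

```python
from collections import Counter
from itertools import product

def offspring_ratio(parent1: str, parent2: str) -> Counter:
    """Mendelian cross: combine one allele from each parent and count the genotypes."""
    return Counter("".join(sorted(pair)) for pair in product(parent1, parent2))

# The only compatible distylous cross is between the two morphs, Ss x ss.
print(offspring_ratio("Ss", "ss"))
# Counter({'Ss': 2, 'ss': 2})  (a 1:1 ratio, so both morphs persist in the population)
```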
Cryptic self-incompatibility (CSI)
Cryptic self-incompatibility (CSI) exists in a limited number of taxa (for example, there is evidence for CSI in Silene vulgaris, Caryophyllaceae). In this mechanism, the simultaneous presence of cross and self pollen on the same stigma results in higher seed set from cross pollen relative to self pollen. However, as opposed to 'complete' or 'absolute' SI, in CSI self-pollination without the presence of competing cross pollen results in successful fertilization and seed set; in this way, reproduction is assured even in the absence of cross-pollination. CSI acts, at least in some species, at the stage of pollen tube elongation, and leads to faster elongation of cross pollen tubes relative to self pollen tubes. The cellular and molecular mechanisms of CSI have not been described.
The strength of a CSI response can be defined as the ratio of crossed to selfed ovules formed when equal amounts of cross and self pollen are placed upon the stigma; in the taxa described to date, this ratio ranges between 3.2 and 11.5.
Late-acting self-incompatibility (LSI)
Late-acting self-incompatibility (LSI) is also termed ovarian self-incompatibility (OSI). In this mechanism, self pollen germinates and reaches the ovules, but no fruit is set. LSI can be pre-zygotic (e.g. deterioration of the embryo sac prior to pollen tube entry, as in Narcissus triandrus) or post-zygotic (malformation of the zygote or embryo, as in certain species of Asclepias and in Spathodea campanulata).
The existence of the LSI mechanism among different taxa, and in general, is a subject of scientific debate. Critics claim that the absence of fruit set is due to genetic defects (homozygosity for lethal recessive alleles), which are the direct result of self-fertilization (inbreeding depression). Supporters, on the other hand, argue for the existence of several basic criteria that differentiate certain cases of LSI from the inbreeding depression phenomenon.
Self-compatibility (SC)
Self-compatibility (SC) is the absence of genetic mechanisms that prevent self-fertilization, resulting in plants that can reproduce successfully via both self pollen and pollen from other individuals. Approximately one half of angiosperm species are SI, the remainder being SC. Mutations that disable SI (resulting in SC) may become common in, or even come to dominate, natural populations. Pollinator decline, variability in pollinator service, and the so-called "automatic advantage" of self-fertilisation, among other factors, may favor the loss of SI.
Many cultivated plants are SC, although there are notable exceptions, such as apples and Brassica oleracea. Human-mediated artificial selection through selective breeding is often responsible for SC among these agricultural crops. SC enables more efficient breeding techniques to be employed for crop improvement. However, when genetically similar SI cultivars are bred, inbreeding depression can cause a cross-incompatible form of SC to arise, such as in apricots and almonds. In this rare, intraspecific, cross-incompatible mechanism, individuals have more reproductive success when self-pollinated rather than when cross-pollinated with other individuals of the same species. In wild populations, intraspecific cross-incompatibility has been observed in Nothoscordum bivalve.
See also
References
Further reading
External links
Pollination
Plant reproduction
Population genetics
Plant sexuality | Self-incompatibility | Biology | 3,389 |
39,582,537 | https://en.wikipedia.org/wiki/Dan%20Willard | Dan Edward Willard (September 19, 1948 – January 21, 2023) was an American computer scientist and logician, and a professor of computer science at the University at Albany.
Education and career
Willard did his undergraduate studies in mathematics at Stony Brook University, graduating in 1970. He went on to graduate studies in mathematics at Harvard University, earning a master's degree in 1972 and a doctorate in 1978. After leaving Harvard, he worked at Bell Labs for four years before joining the Albany faculty in 1983.
Contributions
Although trained as a mathematician and employed as a computer scientist, Willard's most highly cited publication is in evolutionary biology. In 1973, with biologist Robert Trivers, Willard published the Trivers–Willard hypothesis, that female mammals could control the sex ratio of their offspring, and that it would be evolutionarily advantageous for healthier or higher-status females to have more male offspring and for less healthy or lower-status females to have more female offspring. Controversial at the time, especially because it proposed no mechanism for this control, this theory was later validated through observation, and it has been called "one of the most influential and highly cited papers of 20th century evolutionary biology".
Willard's 1978 thesis work on range searching data structures was one of the predecessors to the technique of fractional cascading, and throughout the 1980s Willard continued to work on related data structure problems. As well as continuing to work on range searching, he did important early work on the order-maintenance problem, and invented the x-fast trie and y-fast trie, data structures for storing and searching sets of small integers with low memory requirements.
In computer science, Willard is best known for his work with Michael Fredman in the early 1990s on integer sorting and related data structures. Before their research, it had long been known that comparison sorting required time proportional to n log n to sort a set of n items, but that faster algorithms were possible if the keys by which the items were sorted could be assumed to be integers of moderate size. For instance, sorting n keys whose range is bounded by a polynomial in n can be accomplished in O(n) time by radix sorting. However, it was assumed that integer sorting algorithms would necessarily have a time bound depending on the magnitude of the keys, and would necessarily be slower than comparison sorting for sufficiently large keys. In research originally announced in 1990, Fredman and Willard changed these assumptions by introducing the transdichotomous model of computation. In this model, they showed that integer sorting could be done in O(n log n / log log n) time by an algorithm using their fusion tree data structure as a priority queue. In a follow-up to this work, Fredman and Willard also showed that similar speedups could be applied to other standard algorithmic problems, including minimum spanning trees and shortest paths.
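For context, the Python sketch below is illustrative only and shows the older style of integer sorting discussed above: a least-significant-digit radix sort whose running time depends on the magnitude of the keys. It is not the fusion-tree algorithm of Fredman and Willard, which removes that dependence; the function name and choice of base are simply this sketch's own.

```python
def radix_sort(keys):
    """Sort non-negative integers with LSD radix sort, one byte (base 256) per pass.

    The running time is O(n * d), where d is the number of bytes in the largest
    key, so the bound grows with the key magnitude, which is exactly the
    dependence the fusion tree avoids.
    """
    out = list(keys)
    if not out:
        return out
    max_key, shift = max(out), 0
    while (max_key >> shift) > 0:
        buckets = [[] for _ in range(256)]
        for k in out:
            buckets[(k >> shift) & 0xFF].append(k)  # stable bucketing on the current byte
        out = [k for bucket in buckets for k in bucket]
        shift += 8
    return out

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```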
After 2000, Willard's publications primarily concerned self-verifying theories: systems of logic that have been weakened sufficiently, compared to more commonly studied systems, to prevent Gödel's incompleteness theorems from applying to them. Within these systems, it is possible to prove that the systems themselves are logically consistent, without this deduction leading to the self-contradiction that Gödel's theorem implies for stronger systems. In a preprint summarizing his oeuvre of work in this area, Willard speculated that these logical systems will be of importance in developing artificial intelligences that can survive the potential extinction of mankind, reason consistently, and recognize their own consistency.
Selected publications
References
1948 births
2023 deaths
American computer scientists
American logicians
Mathematical logicians
American theoretical computer scientists
Stony Brook University alumni
University at Albany, SUNY faculty
Harvard Graduate School of Arts and Sciences alumni | Dan Willard | Mathematics | 725 |
9,492,012 | https://en.wikipedia.org/wiki/Galanin%20receptor | The galanin receptor is a G protein-coupled receptor, or metabotropic receptor which binds galanin.
Galanin receptors can be found throughout the peripheral and central nervous systems and the endocrine system. So far three subtypes are known to exist: GAL-R1, GAL-R2, and GAL-R3. The specific function of each subtype remains to be fully elucidated, although as of 2009 considerable progress was being made in this respect, with the generation of receptor subtype-specific knockout mice and the first selective ligands for galanin receptor subtypes. Selective galanin agonists are anticonvulsant, while antagonists produce antidepressant and anxiolytic effects in animals, so either agonist or antagonist ligands for the galanin receptors may be potentially therapeutic compounds in humans.
Ligands
Agonists
Non-selective
Galanin
Galanin 1-15 fragment
Galanin-like peptide - agonist at GAL1 and GAL2 but not GAL3
Galmic
Galnon
NAX 5055
D-Gal(7-Ahp)-B2
GAL1 selective
M617
GAL1/2 selective
M1154 - has no GalR3 interaction
GAL2 selective
Galanin 2-11 amide - also called AR-M 1896, anticonvulsant in mice, CAS# 367518-31-8
M1145 - selective compared to both GalR1 and GalR3
M1153 - selective compared to both GalR1 and GalR3
CYM 2503 (positive allosteric modulator)
Antagonists
Non-selective
M35 peptide
GAL1 selective
SCH-202,596
GAL2 selective
M871 peptide
GAL3 selective
SNAP-37889
SNAP-398,299
References
External links
G protein-coupled receptors | Galanin receptor | Chemistry | 379 |
71,836,696 | https://en.wikipedia.org/wiki/Rurouni%20Kenshin%20%282023%20TV%20series%29 | is a Japanese anime television series, based on Nobuhiro Watsuki's manga series Rurouni Kenshin. It is the second anime television series adaptation after the 1996–98 series. Animated by Liden Films, the series' first season, which was directed and storyboarded by Hideyo Yamamoto, aired from July to December 2023 on Fuji TV's Noitamina programming block. A second season, subtitled Kyoto Disturbance and directed by Yuki Komada, premiered in October 2024.
Premise
In Meiji-era Japan, Himura Kenshin is a pacifistic swordsman who wanders the country, helping people with his swordsmanship skills. Once a deadly and feared political assassin known as Hitokiri Battōsai, he has since followed a path of peace, wielding a reverse-bladed sword, known as a sakabatō, in a vow to never again take another life.
Voice cast
Production
On December 19, 2021, at the Jump Festa '22 event, it was announced that a new television series adaptation of the Rurouni Kenshin manga would be animated by Liden Films. A promotional video was shown at the Aniplex Online Fest 2022 on September 24, 2022. The series re-adapts the original manga story. It is directed by Hideyo Yamamoto, with scripts written by Hideyuki Kurata, character designs by , and music composed by . The original manga author, Nobuhiro Watsuki, supervised the character designs and scenario.
Yamamoto said that he watched all of the Rurouni Kenshin works during his youth and was particularly impressed by the original video animations for their different artwork. He came to think that the main appeal of the series was how it showed people's lives in the Meiji era and how Kenshin not only fought enemies but also helped them redeem themselves from their crimes while interacting with them. The latter, he noted, makes Kenshin a man every viewer wants to be like. From the beginning, he wanted to show "a way of foreshadowing" but wanted the details to be subtle, similar to the dramas that often air alongside Kenshin. In contrast to the more comical original work, Yamamoto aimed to make the narrative more serious and to avoid slapstick or super-deformed designs in order to make it more realistic. In animating the work, the staff used 3DCG with 3-D models of the rooms, fitting for the modern age. Horse carriages were animated through CGI, as they were a common vehicle in the Meiji era. Regarding the action, the fight scenes were given a more distinctive animation style. The designs were made by Nishii under supervision by Yamamoto and Watsuki. Careful attention was given to the kimono and other clothing featured in the anime.
Through the many supervisions of the series, the staff and Watsuki aimed to make it fitting for the Reiwa era as well as accessible to both newcomers and the returning audience. The story arc involving Raijuta was revised in order to improve it in the 2023 anime; Kurosaki in particular revised the scripts of the Raijuta episodes. Kurata came up with new ideas to revisit Sanosuke's backstory in order to bring further depth to the character. Meetings were held to supervise most episodes. While the clothing remained the same, the way bodies are drawn was revised, reflecting improvements in graphic style since the manga and the 1990s version. Soma Saito was chosen because the staff found him fitting to portray both the gentle and rude demeanors of Kenshin. After the musical based on the manga was produced, the staff found the manga very comical and wanted a different style for the new anime, to the point that Kenshin no longer says his expression "Oro?", which was meant as comic relief. Nevertheless, there was still a desire to keep some comical scenes.
Kenshin's voice actor, Soma Saito, said he had been a fan of the series ever since he was a child and looked forward to creating his own take on Kenshin. Meanwhile, Kaoru's voice actress, Rie Takahashi, was surprised she was selected to voice the heroine and, like Saito, wanted to create an appealing version of Kaoru. Other main voice actors include Makoto Koichi, Yahiko's voice actor, who commented that she aimed to portray his passionate spirit, and Taku Yashiro, Sanosuke's voice actor, who said he aimed to portray various aspects of his personality, such as his "straightness, youthfulness, roughness, strength, and warmth."
On December 16, 2023, a second season, subtitled , was announced at Jump Festa '24. Yuki Komada replaced Yamamoto for the second season, while Kazuo Watanabe (first season's sub-character designer and chief animation director) joined Nishii for the character designs.
Release
The first season of Rurouni Kenshin ran for two consecutive cours, for a total of 24 episodes aired from July 7 to December 15, 2023, on Fuji TV's Noitamina programming block. For the first season, the first opening theme is , performed by Ayase and R-Shitei (under the name Ayase×R-Shitei), while the ending theme is , performed by Reol. The second opening theme is , performed by Masaki Suda and Tokyo Ska Paradise Orchestra (under the name Masaki Suda×Tokyo Ska Paradise Orchestra), while the second ending theme is , performed by Kid Phenomenon. The episodes were collected by Aniplex on eight DVD and Blu-ray sets; the first volume was released on October 25, 2023, and the last one on May 29, 2024.
Following the first season's finale, it was announced that the series was renewed for a second season. Subtitled Kyoto Disturbance, it premiered on October 4, 2024, on the same programming block, and is set to run for two consecutive cours. For the second season, the first opening theme is "Chained" (also known as ), performed by Tatsuya Kitani and Natori (under the name Tatsuya Kitani×Natori), while the first ending theme is , performed by . The second opening theme is "Burn", performed by Yama and WurtS (under the name Yama×WurtS), while the second ending theme is , performed by .
Aniplex of America screened the U.S. premiere for the series at the 2023 Anime Expo on July 3 in the Main Events stage of the Los Angeles Convention Center. A conversation between Aniplex producer Masami Niwa and voice actors Soma Saito and Rie Takahashi followed the screening. Crunchyroll is streaming the series outside of Asia. An English dub premiered on October 15, 2023, although neither Aniplex of America nor Crunchyroll revealed the cast. The English dub for the second season premiered on October 24, 2024.
Episodes
Season 1 (2023)
Season 2
Home media release
Season 1
Season 2
Notes
References
External links
Rurouni Kenshin
Anime reboots
Anime series based on manga
Aniplex
Fiction set in 1878
Historical anime and manga
Liden Films
Meiji era in fiction
Noitamina
Samurai in anime and manga
Television series set in the 1870s
Television shows set in Japan
Works about atonement | Rurouni Kenshin (2023 TV series) | Biology | 1,520 |
10,298,849 | https://en.wikipedia.org/wiki/African%20Society%20for%20Bioinformatics%20and%20Computational%20Biology | The African Society for Bioinformatics and Computational Biology (ASBCB) is a non-profit professional association dedicated to the advancement of bioinformatics and computational biology in Africa. Transformed from the African Bioinformatics Network (ABioNET), ASBCB was established in February 2004 at a meeting in Cape Town, South Africa. The Society serves as an international forum and resource devoted to developing competence and expertise in bioinformatics and computational biology in Africa. It complements its activities with those of other international and national societies, associations and institutions, public and private, that have similar aims. It also promotes the standing of African bioinformatics and computational biology in the global arena through liaison and cooperation with other international bodies.
It is an affiliated regional group of the International Society for Computational Biology (ISCB).
Objectives
Identify, promote and establish opportunities for networking.
Encourage and develop bioinformatics and computational biology nodes.
Increase awareness and promote the use of bioinformatics and computational biology.
Facilitate access to bioinformatics and computational biology infrastructure.
Promote bioinformatics and computational biology education.
The Society also cooperates closely with the newly re-established African BIOinformatics NETwork (ABioNET), which is developing a programme of training and research linking key sites of African bioinformatics to provide regional capacity development.
References
Organizations established in 2004
Biology societies
Bioinformatics organizations
Professional associations based in Africa | African Society for Bioinformatics and Computational Biology | Biology | 303 |
35,124,443 | https://en.wikipedia.org/wiki/Peterson%E2%80%93Stein%20formula | In mathematics, the Peterson–Stein formula, introduced by , describes the Spanier–Whitehead dual of a secondary cohomology operation.
References
Theorems in algebraic topology | Peterson–Stein formula | Mathematics | 35 |
310,802 | https://en.wikipedia.org/wiki/Trompe-l%27%C5%93il | {{DISPLAYTITLE:Trompe-l'œil}}
Trompe-l'œil (French for "deceives the eye") is an artistic term for the highly realistic optical illusion of three-dimensional space and objects on a two-dimensional surface. Trompe-l'œil, which is most often associated with painting, tricks the viewer into perceiving painted objects or spaces as real. Forced perspective is a related illusion in architecture.
History in painting
The phrase, which can also be spelled without the hyphen and ligature in English as trompe l'oeil, originates with the artist Louis-Léopold Boilly, who used it as the title of a painting he exhibited in the Paris Salon of 1800. Although the term gained currency only in the early 19th century, the illusionistic technique associated with trompe-l'œil dates much further back. It was (and is) often employed in murals. Instances from Greek and Roman times are known, for example in Pompeii. A typical mural might depict a window, door, or hallway, intended to suggest a larger room.
A version of an oft-told ancient Greek story concerns a contest between two renowned painters. Zeuxis (born around 464 BC) produced a still life painting so convincing that birds flew down to peck at the painted grapes. A rival, Parrhasius, asked Zeuxis to judge one of his paintings that was behind a pair of tattered curtains in his study. Parrhasius asked Zeuxis to pull back the curtains, but when Zeuxis tried, he could not, as the curtains were included in Parrhasius's painting—making Parrhasius the winner.
Perspective
A fascination with perspective drawing arose during the Renaissance, but Giotto had already begun using perspective at the end of the 13th century in his cycle of Saint Francis stories at Assisi. Many Italian painters of the late Quattrocento, such as Andrea Mantegna (1431–1506) and Melozzo da Forlì (1438–1494), began painting illusionistic ceiling paintings, generally in fresco, that employed perspective and techniques such as foreshortening to create the impression of greater space for the viewer below. This type of illusionism, as specifically applied to ceiling paintings, is known as di sotto in sù, meaning "from below, upward" in Italian. The elements above the viewer are rendered as if viewed from true vanishing-point perspective. Well-known examples are the Camera degli Sposi in Mantua and Antonio da Correggio's (1489–1534) Assumption of the Virgin in the Parma Cathedral.
Similarly, Vittorio Carpaccio (1460–1525) and Jacopo de' Barbari (c. 1440 – before 1516) added small trompe-l'œil features to their paintings, playfully exploring the boundary between image and reality. For example, a painted fly might appear to be sitting on the painting's frame, or a curtain might appear to partly conceal the painting, a piece of paper might appear to be attached to a board, or a person might appear to be climbing out of the painting altogether—all in reference to the contest of Zeuxis and Parrhasius.
Quadratura
Perspective theories in the 17th century allowed a more fully integrated approach to architectural illusion, which when used by painters to "open up" the space of a wall or ceiling is known as quadratura. Examples include Pietro da Cortona's Allegory of Divine Providence in the Palazzo Barberini and Andrea Pozzo's Apotheosis of St Ignatius on the ceiling of the Roman church of Sant'Ignazio in Campo Marzio.
The Mannerist and Baroque style interiors of Jesuit churches in the 16th and 17th centuries often included such ceiling paintings, which optically "open" the ceiling or dome to the heavens with a depiction of Jesus', Mary's, or a saint's ascension or assumption. An example of a perfect architectural trompe-l'œil is the illusionistic dome in the Jesuit church, Vienna, by Andrea Pozzo, which is only slightly curved but gives the impression of true architecture.
Trompe-l'œil paintings became very popular in Flemish and later in Dutch painting in the 17th century, arising from the development of still life painting. The Flemish painter Cornelis Norbertus Gysbrechts created a chantourné painting showing an easel holding a painting. Chantourné literally means 'cut out' and refers to a trompe-l'œil representation designed to stand away from a wall. The Dutch painter Samuel Dirksz van Hoogstraten was a master of trompe-l'œil and theorized on the role of art as the lifelike imitation of nature in his 1678 book, the Introduction to the Academy of Painting, or the Visible World (Inleyding tot de hooge schoole der schilderkonst: anders de zichtbaere werelt, Rotterdam, 1678).
A fanciful form of architectural trompe-l'œil, quodlibet, features realistically rendered paintings of such items as paper knives, playing cards, ribbons, and scissors, apparently accidentally left lying around.
Trompe-l'œil can also be found painted on tables and other items of furniture, on which, for example, a deck of playing cards might appear to be sitting on the table. A particularly impressive example can be seen at Chatsworth House in Derbyshire, where one of the internal doors appears to have a violin and bow suspended from it, in a trompe-l'œil painted around 1723 by Jan van der Vaart. Another example can be found in the Painted Hall at the Old Royal Naval College, Greenwich, London. This Wren building was painted by Sir James Thornhill, the first British-born painter to be knighted, and is a classic example of the Baroque style popular in the early 18th century. The American 19th-century still-life painter William Harnett specialized in trompe-l'œil.
In the 20th century, from the 1960s on, the American Richard Haas and many others painted large murals on the sides of city buildings. From the beginning of the 1980s, when German artist Rainer Maria Latzke began to combine classical fresco art with contemporary content, trompe-l'œil became increasingly popular for interior murals. The Spanish painter Salvador Dalí utilized the technique for a number of his paintings.
In other art forms
Trompe-l'œil, in the form of "forced perspective", has long been used in stage-theater set design, so as to create the illusion of a much deeper space than the existing stage. A famous early example is the Teatro Olimpico in Vicenza, with Vincenzo Scamozzi's seven forced-perspective "streets" (1585), which appear to recede into the distance.
Trompe-l'œil is employed in Donald O'Connor's famous "running up the wall" scene in the film Singin' in the Rain (1952). During the finale of his "Make 'em Laugh" number he first runs up a real wall. Then he runs towards what appears to be a hallway, but when he runs up this as well, we realize that it is a large mural. More recently, Roy Andersson has made use of similar techniques in his feature films.
Matte painting is a variant of trompe-l'œil used in film production, in which elements of a scene are painted on glass panels mounted in front of the camera.
Elsa Schiaparelli frequently made use of trompe-l'œil in her designs, most famously perhaps in her Bowknot Sweater, which some consider to be the first use of trompe-l'œil in fashion. The Tears Dress, which she did in collaboration with Salvador Dalí, features both appliqué tears on the veil and trompe-l'œil tears on the dress itself.
Fictional trompe-l'œil appears in many Looney Tunes, such as the Road Runner cartoons, where, for example, Wile E. Coyote paints a tunnel on a rock wall, and Road Runner then races through the fake tunnel. This is usually followed by the coyote's foolishly trying to run through the tunnel after the road runner, only to smash into the hard rock-face. This sight gag was employed in Who Framed Roger Rabbit.
In Chicago's Near North Side, Richard Haas used a 16-story 1929 apartment hotel, converted in 1981 into an apartment building, for murals in homage to Chicago school architecture. One of the building's sides features the Chicago Board of Trade Building, intended as a reflection of the building located two miles south.
Several contemporary artists use chalk on pavements or sidewalks to create trompe-l'œil works, a technique called street painting or "pavement art". These creations last only until washed away, and therefore must be photographed to be preserved. Practitioners of this form include Julian Beever, Edgar Mueller, Leon Keer, and Kurt Wenner.
The Palazzo Salis of Tirano, Italy, has over centuries and throughout the palace used trompe-l'œil in place of more expensive real masonry, doors, staircases, balconies, and draperies to create an illusion of sumptuousness and opulence.
Trompe-l'œil in the form of illusion architecture and Lüftlmalerei is common on façades in the Alpine region.
Trompe-l'œil, in the form of "illusion painting", is also used in contemporary interior design, where illusionary wall paintings have experienced a renaissance since around 1980. Significant artists in this field are the German muralist Rainer Maria Latzke, who in the 1990s invented a new method of producing illusion paintings, frescography, and the English artist Graham Rust.
OK Go's music video for "The Writing's on the Wall" uses a number of trompe-l'œil illusions alongside other optical illusions, captured in a one-shot take. Trompe-l'œil illusions have been used as gameplay mechanics in video games such as The Witness and Superliminal.
Japanese filmmaker and animator Isao Takahata regarded achieving a sense of trompe-l'œil as important for his work, stating that an animated world should feel as if it "existed right there" so that "people believe in a fantasy world and characters that no one has seen in reality."
Tourist attractions employing large-scale illusory art allowing visitors to photograph themselves in fantastic scenes have opened in several Asian countries, such as the Trickeye Museum and Hong Kong 3D Museum.
Recently a Trick Art Museum opened in Europe and uses more photographic approaches.
Artists
Old Masters
Cornelis Biltius
Jacob Biltius
Donato Bramante
Petrus Christus
Antonio da Correggio
Carlo Crivelli
Luca Giordano
Cornelis Norbertus Gysbrechts
Franciscus Gijsbrechts
Samuel Dirksz van Hoogstraten
Andrea Mantegna
Masaccio
Jean-Francois de la Motte
Charles Willson Peale
Jacobus Plasschaert
Andrea Pozzo
Vincenzo Scamozzi
Giovanni Battista Tiepolo
19th century and modern masters
Henry Alexander
Aaron Bohrod
Louis-Léopold Boilly
Salvador Dalí
Walter Goodman
John Haberle
William Harnett
Claude Raguet Hirst
René Magritte
John F. Peto
Contemporary
Ellen Altfest
Martin Battersby
Julian Beever
Daniela Benedini
Henri Bol
Henri Cadiou
Dan Colen
Piero Fornasetti
Ronald Francis
Joanne Gair
Frederic Gracia
Richard Haas
Jonty Hurwitz
Lorena Kloosterboer
Rainer Maria Latzke
Attila Meszlenyi
István Orosz (Utisz)
Os Gêmeos, "The Twins"
Jacques Poirier
Susan Powers
John Pugh
Pierre-Marie Rudelle
Graham Rust
Anthony Waichulis
Kurt Wenner
Raymond. A. Whyte
Tavar Zawacki
Paintings
Murals
Sculptures
Architecture
Use in films
Singin' in the Rain (1952)
Willy Wonka & the Chocolate Factory (1971)
Indiana Jones and the Last Crusade (1989)
Where the Heart Is (1990)
Millennium Actress (2001)
Eternal Sunshine of the Spotless Mind (2004)
Bewitched (2005)
Westworld (Season 1, Episode 7) (2016)
See also
2.5D—enhancement of 2-dimensional graphics by limited application of some 3D effects to them
Bump mapping, normal mapping and parallax mapping—graphical techniques used to add fake details that enhance 2D representations of 3D objects (in the context of that branch of computer graphics that aims to give a realistic 3D view on the screen)
Camouflage
Marbling
Faux painting
Photorealism
Anamorphosis
List of art techniques
Notes
External links
Deceptions and Illusions, National Gallery of Art exhibition on paintings
Trompe l'œil Tricks: Borges' Baroque Illusionism, essay by Lois Parkinson Zamora comparing to the literature of Borges
Custom trompe l'œil Paintings, Fresco Blog
murals.trompe-l-oeil.info , More than 10 000 pictures and 1200 Outdoor murals of France and Europe
Paris: Trompe-l'œil, surréalisme urbain?, Avenue George V. Text and photography by Catherine-Alice Palagret
"The Mechanics of the Art World," Vistas: Visual Culture in Spanish America, 1520-1820.
Trick Art Museum: Magic World Museum Barcelona
Visual arts genres
Architectural elements
Artistic techniques
Painting techniques
Optical illusions
Decorative arts
Composition in visual art | Trompe-l'œil | Physics,Technology,Engineering | 2,676 |
4,784,103 | https://en.wikipedia.org/wiki/Psilocybe%20quebecensis | Psilocybe quebecensis is a moderately active hallucinogenic mushroom in the section Aztecorum, having psilocybin and psilocin as main active compounds. Native to Quebec, it is the most northern known psilocybin mushroom after Psilocybe semilanceata in northern Scandinavia.
Macroscopically this mushroom somewhat resembles Psilocybe baeocystis.
Etymology
Named for the province Quebec, where it was discovered.
Description
Cap: () in diameter. Nearly hemispheric to convex at first, becoming subcampanulate to more or less plane when mature, viscid and even to translucent-striate when moist, hygrophanous, brownish to straw colored, yellowish to milk white when dry. Surface smooth, may become finely wrinkled with age, flesh whitish. Readily stains blue-green where injured.
Gills: Adnate, thin, moderately broad to swollen in the middle. Grayish yellow with green tones becoming dark brown at maturity, with the edges remaining whitish.
Spore Print: Dark purplish brown.
Stipe: () long by () thick. Equal, slightly subbulbous, smooth to striate, brittle, tough, and fibrous, base is furnished with long conspicuous rhizomorphs. Yellowish or brownish towards the base, whitish when dry, partial veil cortinate, and soon disappearing, no annulus present, readily bruises blue.
Taste: Somewhat farinaceous
Odor: Farinaceous
Microscopic features: Spores ellipsoid to subovoid in side and face view some spores mango shaped, 8.8–11 μm (16 μm) × 6.6–7.7 μm (8.8 μm). Basidia 15–20 μm (28 μm) 4-spored. Pleurocystidia present, 12–25 μm (35 μm) × (3 μm) 5–10 μm (15 μm), relatively polymorphic, often fusiform-ventricose or ampullaceous. Cheilocystidia (18 μm) 22–36 μm × 5.5–8.8 μm (10 μm), fusoid-ampullaceous with an extended neck, 2–3.3 μm thick, abundant, forming a sterile band, sometimes with a hyaline viscous drop at the apex.
Habitat and formation
Solitary to gregarious, rarely cespitose, on rotting wood, particularly in the outwashes of streams in the decayed-wood substratum of alder, birch, fir and spruce in the late summer and fall. Reported from Quebec, Canada, specifically in the Jacques-Cartier River Valley, fruiting at a temperature of from summer to late October. It has recently been found in the United States (Michigan). Originally discovered in 1966, P. quebecensis has also been confirmed growing in at least one area within Cape Breton, Nova Scotia.
References
Further reading
Guzman, G. The Genus Psilocybe: A Systematic Revision of the Known Species Including the History, Distribution and Chemistry of the Hallucinogenic Species. Beihefte zur Nova Hedwigia Heft 74. J. Cramer, Vaduz, Germany (1983) [now out of print].
Ola'h, Gyorgy Miklos & Heim, R. 1967. Une nouvelle espèce nord-américaine de Psilocybe hallucinogène: Psilocybe quebecensis. Comptes Rendus Hebdomadaires des Séances de l'Académie des Sciences 264: 1601-1604.
Entheogens
Psychoactive fungi
quebecensis
Psychedelic tryptamine carriers
Fungi of North America
Fungus species | Psilocybe quebecensis | Biology | 781 |
75,422,675 | https://en.wikipedia.org/wiki/Chiglitazar | Chiglitazar (trade name Bilessglu) is a drug for the treatment of type 2 diabetes. It is a peroxisome proliferator-activated receptor (PPAR) agonist.
In China, chiglitazar is approved for glycemic control in adult patients with type 2 diabetes when used in combination with diet and exercise.
References
Carbazoles
PPAR agonists
Benzophenones
4-Fluorophenyl compounds
Anilines
Amino acid derivatives | Chiglitazar | Chemistry | 104 |
63,008,623 | https://en.wikipedia.org/wiki/Conical%20screw%20compressor | The relatively recently developed conical screw compressor is a type of rotary-screw compressor using a different topology from the typical dual-screw type. In effect it can be thought of as a conical spiral extension of a gerotor, although the exact geometry is somewhat different due to the angular offset. Because of this it does not have the inherent "blow-hole" leakage path which in typical screw compressors is responsible for significant leakage through the assembly and makes low-speed operation impractical. This theoretically allows much smaller rotors to have practical efficiency since at smaller sizes the leakage area does not become as large a portion of the pumping area as in straight screw compressors. In conjunction with the decreasing diameter of the cone shaped rotor this also allows much higher compression ratios to be achieved in a single stage.
A significant impediment to production is the machining of the outer rotor to the tolerances required. Although the inner rotor can be manufactured with precise CNC machines using common methods, the outer rotor presents significant difficulties due to the restricted access to the interior for precision tooling, and assembling the outer rotor from easier-to-machine segments presents its own problems. The cost of units is currently very high, but development is ongoing, primarily by VERT Rotors, who hold patents on this topology.
References
Gas compressors | Conical screw compressor | Chemistry | 274 |
41,103,339 | https://en.wikipedia.org/wiki/Arctic%2C%20Antarctic%2C%20and%20Alpine%20Research | Arctic, Antarctic, and Alpine Research is a peer-reviewed scientific journal published by the Institute of Arctic and Alpine Research (University of Colorado Boulder). It covers research on all aspects of Arctic, Antarctic, and alpine environments, including subarctic, subantarctic, subalpine, and paleoenvironments. Jack D. Ives founded the journal in 1969 as Arctic and Alpine Research and the name was expanded to include the Antarctic in 1999. The editors-in-chief are Anne E. Jennings and Bill Bowman (University of Colorado Boulder).
Abstracting and indexing
The journal is abstracted and indexed in the Science Citation Index, Current Contents/Agriculture, Biology & Environmental Sciences, The Zoological Record, and BIOSIS Previews. According to the Journal Citation Reports, the journal has a 2017 impact factor of 2.231.
References
External links
Earth and atmospheric sciences journals
Environmental science journals
Academic journals established in 1969
Arctic research
Quarterly journals
University of Colorado Boulder
English-language journals
Antarctic research | Arctic, Antarctic, and Alpine Research | Environmental_science | 203 |
12,751,285 | https://en.wikipedia.org/wiki/Bootstrapping%20Server%20Function | A Bootstrapping Server Function (BSF) is an intermediary element in Cellular networks which provides application-independent functions for mutual authentication of user equipment and servers unknown to each other and for 'bootstrapping' the exchange of secret session keys afterwards. This allows the use of additional services like Mobile TV and PKI, which need authentication and secured communication.
GBA/GAA Setup
The setup and function to deploy a generic security relation as described is called Generic Bootstrapping Architecture (GBA) or Generic Authentication Architecture (GAA). In short, it consists of the following elements.
user equipment (UE), e. g. a mobile cellular telephone; needs access to a specific service
application server (NAF: Network Application Function), e. g. for mobile TV; provides the service
BSF (Bootstrapping Server Function); arranges security relation between UE and NAF
mobile network operator's Home Subscriber Server (HSS); hosts user profiles.
In this case, the term 'bootstrapping' is related to building a security relation with a previously unknown device first and to allow installing security elements (keys) in the device and the BSF afterwards.
Workflow
The BSF is introduced by the application server (NAF), after an unknown UE device is trying to get service access: the NAF refers the UE to the BSF. UE and BSF mutually authenticate via 3GPP protocol AKA (Authentication and Key Agreement); additionally, the BSF sends related queries to the Home Subscriber Server (HSS).
Afterwards, UE and BSF agree on a session key to be used for encrypted data exchange with the application server (NAF). When the UE again connects to the NAF, the NAF is able to obtain the session key as well as user-specific data from the BSF and can start data exchange with the end device (UE), using the related session keys for encryption.
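As an illustration of this workflow, the following minimal Python sketch mirrors the roles described above (UE, BSF, NAF, HSS). The HMAC-based key derivation, the in-memory HSS table, and all function names are assumptions chosen for readability; they are not the key-derivation functions or interfaces specified by 3GPP, and the AKA challenge-response exchange is abstracted to a single random value.

```python
import hashlib
import hmac
import os
import uuid

# Illustrative sketch of the GBA bootstrapping flow described above.
# The key derivation (HMAC-SHA256) is a simplification, NOT the 3GPP KDF;
# names such as Ks, B-TID and Ks_NAF follow GBA terminology, but the data
# structures and interfaces here are assumptions for illustration only.

HSS = {"user-001": b"shared-secret-from-USIM"}   # stand-in for subscriber keys


class BSF:
    def __init__(self, hss):
        self.hss = hss
        self.sessions = {}                        # B-TID -> (Ks, user id)

    def bootstrap(self, user_id):
        """Mutual authentication (abstracted) and session key agreement."""
        k = self.hss[user_id]                     # query the HSS for the subscriber key
        rand = os.urandom(16)                     # challenge; the AKA exchange is omitted
        ks = hmac.new(k, rand, hashlib.sha256).digest()
        btid = str(uuid.uuid4())                  # bootstrapping transaction identifier
        self.sessions[btid] = (ks, user_id)
        return btid, rand                         # UE derives the same Ks from k and rand

    def key_for_naf(self, btid, naf_id):
        """Give the application server (NAF) a NAF-specific key, never Ks itself."""
        ks, user_id = self.sessions[btid]
        ks_naf = hmac.new(ks, naf_id.encode(), hashlib.sha256).digest()
        return ks_naf, user_id


# Usage: the NAF redirects an unknown UE to the BSF, then fetches the session key.
bsf = BSF(HSS)
btid, rand = bsf.bootstrap("user-001")            # UE <-> BSF
ue_ks = hmac.new(HSS["user-001"], rand, hashlib.sha256).digest()          # same Ks on the UE side
ue_ks_naf = hmac.new(ue_ks, b"mobile-tv.example", hashlib.sha256).digest()
naf_ks_naf, _ = bsf.key_for_naf(btid, "mobile-tv.example")                # NAF <-> BSF
assert ue_ks_naf == naf_ks_naf                    # both ends now share a NAF-specific key
```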
Standards
BSF is standardised in recent versions of 3GPP standards: GAA (Generic Authentication Architecture) and GBA (Generic Bootstrapping Architecture), and 3GPP TS 33.919, 33.220, 24.109, and 29.109.
External links
DVB-H News
BMCO forum
Open Mobile Alliance
3GPP
BSF in LTE network
castLabs (commercial BSF supplier)
Nexcom Systems (OEM commercial BSF supplier)
3GPP TS 24.109 version 8.3.0 Release 8
Mobile telecommunications standards | Bootstrapping Server Function | Technology | 528 |
24,009,615 | https://en.wikipedia.org/wiki/Fibronectin%20type%20III%20domain | The Fibronectin type III domain is an evolutionarily conserved protein domain that is widely found in animal proteins. The fibronectin protein in which this domain was first identified contains 16 copies of this domain. The domain is about 100 amino acids long and possesses a beta sandwich structure. Of the three fibronectin-type domains, type III is the only one without disulfide bonding present. Fibronectin domains are found in a wide variety of extracellular proteins. They are widely distributed in animal species, but also found sporadically in yeast, plant and bacterial proteins.
Human proteins containing this domain
ABI3BP; ANKFN1; ASTN2; AXL; BOC; BZRAP1; C20orf75; CDON;
CHL1; CMYA5; CNTFR; CNTN1; CNTN2; CNTN3; CNTN4; CNTN5;
CNTN6; COL12A1; COL14A1; COL20A1; COL7A1; CRLF1; CRLF3; CSF2RB;
CSF3R; DCC; DSCAM; DSCAML1; EBI3; EGFLAM; EPHA1; EPHA10;
EPHA2; EPHA3; EPHA4; EPHA5; EPHA6; EPHA7; EPHA8; EPHB1;
EPHB2; EPHB3; EPHB4; EPHB6; EPOR; FANK1; FLRT1; FLRT2;
FLRT3; FN1; FNDC1; FNDC3A; FNDC3B; FNDC4; FNDC5; FNDC7;
FNDC8; FSD1; FSD1L; FSD2; GHR; HCFC1; HCFC2; HUGO;
IFNGR2; IGF1R; IGSF22; IGSF9; IGSF9B; IL4R; IL11RA; IL12B; IL12RB1;
IL12RB2; IL20RB; IL23R; IL27RA; IL31RA; IL6R; IL6ST; IL7R;
INSR; INSRR; ITGB4; KAL1; KALRN; L1CAM; LEPR;
LIFR; LRFN2; LRFN3; LRFN4; LRFN5; LRIT1; LRRN1; LRRN3;
MERTK; MID1; MID2; MPL; MYBPC1; MYBPC2; MYBPC3; MYBPH;
MYBPHL; MYLK; MYOM1; MYOM2; MYOM3; NCAM1; NCAM2; NEO1;
NFASC; NOPE; NPHS1; NRCAM; OBSCN; OBSL1; OSMR; PHYHIP;
PHYHIPL; PRLR; PRODH2; PTPRB; PTPRC; PTPRD; PTPRF; PTPRG;
PTPRH; PTPRJ; PTPRK; PTPRM; PTPRO; PTPRS; PTPRT; PTPRU;
PTPRZ1; PTPsigma; PUNC; RIMBP2; ROBO1; ROBO2; ROBO3; ROBO4;
ROS1; SDK1; SDK2; SNED1; SORL1; SPEG; TEK; TIE1;
TNC; TNN; TNR; TNXB; TRIM36; TRIM42; TRIM46; TRIM67;
TRIM9; TTN; TYRO3; UMODL1; USH2A; VASN; VWA1; dJ34F7.1;
fmi;
See also
Monobodies are engineered (synthetic) antibody mimetics based on a fibronectin type III domain (specifically, the 10th FN3 domain of human fibronectin). Monobodies feature either diversified loops or diversified strands of a flat beta-sheet surface, which serve as interaction epitopes. Monobody binders have been selected against a wide variety of target molecules, and have expanded beyond the potential range of binding interfaces observed in both natural and synthetic antibodies.
References
Protein domains
Single-pass transmembrane proteins | Fibronectin type III domain | Biology | 936 |
249,919 | https://en.wikipedia.org/wiki/Basal%20rate | Basal rate, in biology, is the rate of continuous supply of some chemical or process. In the case of diabetes mellitus, it is a low rate of continuous insulin supply needed for such purposes as controlling cellular glucose and amino acid uptake.
Together with a bolus of insulin, the basal insulin completes the total insulin needs of an insulin-dependent person.
An insulin pump and wristop controller is one way to arrange for a closely controlled basal insulin rate. The slow-release insulins (e.g., Lantus and Levemir) can provide a similar effect.
In healthy individuals, the basal rate is maintained by the pancreas, which provides a regular amount of insulin at all times. The body requires this flow of insulin to enable it to utilize glucose in the blood stream, so the energy in glucose can be used to carry out bodily functions. Basal rate requirements can differ for individuals depending on the activities they will carry out on that particular day. For example, if one is not highly active on a certain day, they will have a decreased basal rate because they are not using a lot of energy. On the other hand, basal rate increases dramatically when an individual is highly active.
Basal rates often even vary from hour to hour throughout the day. For example, one's insulin needs vary from activity to activity. Activities, such as sports, housework, shopping, gardening, tidying the house, and consuming alcohol all require a lowering in basal rate. These activities all require energy and, thus, use glucose; basal rate must decrease in order to keep glucose levels high enough to be used as fuel for the body. On the other hand, fevers, having a cold, taking a nap, taking cortisone-containing medication, and moments of excitement call for different basal rate needs. In these instances, the body has an overwhelming supply of glucose, and glucose levels need to decrease. To induce this decrease, basal rate needs to increase to increase insulin release to absorb some of the excess glucose from the blood stream.
Those with diabetes mellitus must be aware of their basal rates and regulate them accordingly. Basal rate can be raised and lowered through various methods. For example, individuals with diabetes mellitus often use an insulin pump to supply an increased amount of insulin into the blood stream. Those with diabetes also may eat carbohydrates or sugars to account for low blood sugar. However one monitors and regulates their blood sugar levels and basal rates, it is important to make changes gradually. An initial lowering in basal rate should be no more than 10% of the original. After the initial lowering point, one must note the factor by which one's blood sugar changes. If blood sugar levels decreased, one should lower their basal rate by 20% next time. If their blood sugar levels increased, a lowering of 10% was too great, and one should not lower their basal rate at all next time. If blood sugar levels remained relatively constant, a drop in basal rate of 10% was sufficient.
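As a minimal sketch of the stepwise adjustment rule just described (assuming an initial 10% reduction), the following function simply encodes the three cases from the text; the name and structure are illustrative only, not medical guidance.

```python
def next_basal_reduction(glucose_trend):
    """Encode the stepwise rule described above, starting from an initial
    10% reduction. glucose_trend is the observed change in blood sugar
    after that first reduction. Illustrative only, not medical guidance."""
    if glucose_trend == "decreased":
        return 20   # blood sugar fell: the text suggests a 20% reduction next time
    if glucose_trend == "increased":
        return 0    # 10% was too great: do not lower the basal rate next time
    return 10       # roughly constant: a 10% reduction was sufficient


print(next_basal_reduction("decreased"))   # -> 20
```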
Just as the action to change basal rate should be gradual in nature, the actual response from changing basal rate does not happen instantly. A change in basal rate is felt around two hours after the action is done. This is especially important for those with diabetes to note, as it affects when they should act to monitor their basal rates. For example, if there is a particular time in the day when one notices a problem with blood glucose levels, they should act to change their basal rate accordingly two hours prior to when the problem was previously experienced.
Causes of Basal Rate
The liver is the primary contributing organ which produces glucose continuously even when nothing is being eaten. The liver will supply glucose either from fats or from previously eaten foods. Therefore, the basal rate can be thought of as a sort of "second bolus" after the initial bolus intake of insulin.
Modelling the Basal Rate
Most adult diabetics (over the age of 21) will have a fairly constant bolus:basal ratio of 60%:40%, where 60% of all insulin intake in a single 24-hour period is attributed to meals (bolus) and 40% to the basal rate. This ratio will fluctuate from person to person depending on size, activity level, and caloric intake, but it is a good baseline for determining the correct basal rate for an adult diabetic. Thus, the basal rate could theoretically be set based on the bolus insulin intake averaged over several days. Averaging the total daily bolus and then dividing this number by 36 gives the required hourly basal rate for any individual with a 60:40 ratio established.
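The divide-by-36 shortcut follows directly from the 60:40 split: the basal share is two thirds of the bolus total, spread over 24 hours. A short sketch (function and variable names are illustrative assumptions) makes the arithmetic explicit.

```python
def hourly_basal_from_bolus(avg_daily_bolus_units, bolus_share=0.60):
    """Estimate an hourly basal rate from an averaged daily bolus total,
    assuming the bolus:basal split described above (60%:40% by default)."""
    total_daily = avg_daily_bolus_units / bolus_share    # total insulin per 24 h
    basal_daily = total_daily * (1 - bolus_share)        # the basal share of it
    return basal_daily / 24                              # spread over 24 hours


# With the 60:40 split this reduces to dividing the bolus total by 36:
print(hourly_basal_from_bolus(36.0))   # -> 1.0 unit/hour
print(36.0 / 36)                       # same result via the shortcut
```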
References
External links
Insulin Pump Terminology: Basal Rates
Information on the varying basal rate needs throughout the day
General guidelines on changing basal rate
Further guidelines in changing basal rate
Diabetes
Insulin therapies
Temporal rates | Basal rate | Physics | 991 |
18,864,203 | https://en.wikipedia.org/wiki/HD%20157819 | HD 157819 is the Henry Draper Catalogue designation for a star in the southern constellation of Ara. It is faintly visible to the naked eye at an apparent visual magnitude of 5.94 and is approximately distant from the Earth. The spectrum of this star fits a stellar classification of G8 II-III, indicating it is a G-type star that is somewhere between the giant and bright giant stages of its evolution.
References
External links
HR 6487
CCDM J17286-5510
Image HD 157819
Ara (constellation)
157819
Double stars
G-type bright giants
6487
085520
Durchmusterung objects | HD 157819 | Astronomy | 132 |
23,974,496 | https://en.wikipedia.org/wiki/C7H10N2O2S | The molecular formula C7H10N2O2S may refer to:
Carbimazole, drug used to treat hyperthyroidism
Mafenide, a sulfonamide-type medication used as an antibiotic | C7H10N2O2S | Chemistry | 66 |
23,219,749 | https://en.wikipedia.org/wiki/Folksonomy | Folksonomy is a classification system in which end users apply public tags to online items, typically to make those items easier for themselves or others to find later. Over time, this can give rise to a classification system based on those tags and how often they are applied or searched for, in contrast to a taxonomic classification designed by the owners of the content and specified when it is published. This practice is also known as collaborative tagging, social classification, social indexing, and social tagging. Folksonomy was originally "the result of personal free tagging of information [...] for one's own retrieval", but online sharing and interaction expanded it into collaborative forms. Social tagging is the application of tags in an open online environment where the tags of other users are available to others. Collaborative tagging (also known as group tagging) is tagging performed by a group of users. This type of folksonomy is commonly used in cooperative and collaborative projects such as research, content repositories, and social bookmarking.
The term was coined by Thomas Vander Wal in 2004 as a portmanteau of folk and taxonomy. Folksonomies became popular as part of social software applications such as social bookmarking and photograph annotation that enable users to collectively classify and find information via shared tags. Some websites include tag clouds as a way to visualize tags in a folksonomy.
Folksonomies can be used for K–12 education, business, and higher education. More specifically, folksonomies may be implemented for social bookmarking, teacher resource repositories, e-learning systems, collaborative learning, collaborative research, professional development and teaching. Wikipedia is also a prime example of folksonomy.
Benefits and disadvantages
Folksonomies are a trade-off between traditional centralized classification and no classification at all, and have several advantages:
Tagging is easy to understand and do, even without training and previous knowledge in classification or indexing
The vocabulary in a folksonomy directly reflects the user's vocabulary
Folksonomies are flexible, in the sense that the user can add or remove tags
Tags consist of both popular content and long-tail content, enabling users to browse and discover new content even in narrow topics
Tags reflect the user's conceptual model without cultural, social, or political bias
Enable the creation of communities, in the sense that users who apply the same tag have a common interest
Folksonomies are multi-dimensional, in the sense that users can assign any number and combination of tags to express a concept
There are several disadvantages with the use of tags and folksonomies as well, and some of the advantages can lead to problems. For example, the simplicity in tagging can result in poorly applied tags. Further, while controlled vocabularies are exclusionary by nature, tags are often ambiguous and overly personalized. Users apply tags to documents in many different ways and tagging systems also often lack mechanisms for handling synonyms, acronyms and homonyms, and they also often lack mechanisms for handling spelling variations such as misspellings, singular/plural form, conjugated and compound words. Some tagging systems do not support tags consisting of multiple words, resulting in tags like "viewfrommywindow". Sometimes users choose specialized tags or tags without meaning to others.
Elements and types
A folksonomy emerges when users tag content or information, such as web pages, photos, videos, podcasts, tweets, scientific papers and others. Strohmaier et al. elaborate the concept: the term "tagging" refers to a "voluntary activity of users who are annotating resources with terms – so-called 'tags' – freely chosen from an unbounded and uncontrolled vocabulary". Others describe tags as unstructured textual labels or keywords that serve as a simple form of metadata.
Folksonomies consist of three basic entities: users, tags, and resources. Users create tags to mark resources such as: web pages, photos, videos, and podcasts. These tags are used to manage, categorize and summarize online content. This collaborative tagging system also uses these tags as a way to index information, facilitate searches and navigate resources. Folksonomy also includes a set of URLs that are used to identify resources that have been referred to by users of different websites. These systems also include category schemes that have the ability to organize tags at different levels of granularity.
Vander Wal identifies two types of folksonomy: broad and narrow. A broad folksonomy arises when multiple users can apply the same tag to an item, providing information about which tags are the most popular. A narrow folksonomy occurs when users, typically fewer in number and often including the item's creator, tag an item with tags that can each be applied only once. While both broad and narrow folksonomies enable the searchability of content by adding an associated word or phrase to an object, a broad folksonomy allows for sorting based on the popularity of each tag, as well as the tracking of emerging trends in tag usage and developing vocabularies.
An example of a broad folksonomy is del.icio.us, a website where users can tag any online resource they find relevant with their own personal tags. The photo-sharing website Flickr is an oft-cited example of a narrow folksonomy.
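A broad folksonomy of this kind can be pictured as a set of (user, tag, resource) triples. The following minimal Python sketch, with made-up data and names chosen purely for illustration, shows how per-resource tag popularity falls out of that representation.

```python
from collections import Counter, defaultdict

# Minimal sketch of a broad folksonomy as (user, tag, resource) triples.
# The data and names are illustrative assumptions, not any real service's API.
triples = [
    ("alice", "maps",       "https://www.openstreetmap.org"),
    ("bob",   "maps",       "https://www.openstreetmap.org"),
    ("carol", "geodata",    "https://www.openstreetmap.org"),
    ("alice", "folksonomy", "https://en.wikipedia.org/wiki/Folksonomy"),
]

# In a broad folksonomy many users may apply the same tag to one resource,
# so per-resource tag counts give a popularity ordering.
tags_per_resource = defaultdict(Counter)
for user, tag, resource in triples:
    tags_per_resource[resource][tag] += 1

for resource, counts in tags_per_resource.items():
    print(resource, counts.most_common())   # most popular tags first
```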
Folksonomy versus taxonomy
'Taxonomy' refers to a hierarchical categorization in which relatively well-defined classes are nested under broader categories. A folksonomy establishes categories (each tag is a category) without stipulating or necessarily deriving a hierarchical structure of parent-child relations among different tags. (Work has been done on techniques for deriving at least loose hierarchies from clusters of tags.)
Supporters of folksonomies claim that they are often preferable to taxonomies because folksonomies democratize the way information is organized, they are more useful to users because they reflect current ways of thinking about domains, and they express more information about domains. Critics claim that folksonomies are messy and thus harder to use, and can reflect transient trends that may misrepresent what is known about a field.
An empirical analysis of the complex dynamics of tagging systems, published in 2007, has shown that consensus around stable distributions and shared vocabularies does emerge, even in the absence of a central controlled vocabulary. For content to be searchable, it should be categorized and grouped. While this was believed to require commonly agreed on sets of content describing tags (much like keywords of a journal article), some research has found that in large folksonomies common structures also emerge on the level of categorizations.
Accordingly, it is possible to devise mathematical models of collaborative tagging that allow for translating from personal tag vocabularies (personomies) to the vocabulary shared by most users.
Folksonomy is unrelated to folk taxonomy, a cultural practice that has been widely documented in anthropological and folkloristic work. Folk taxonomies are culturally supplied, intergenerationally transmitted, and relatively stable classification systems that people in a given culture use to make sense of the entire world around them (not just the Internet).
The study of the structuring or classification of folksonomy is termed folksontology. This branch of ontology deals with the intersection between highly structured taxonomies or hierarchies and loosely structured folksonomy, asking what best features can be taken from both for a system of classification. The strength of flat-tagging schemes is their ability to relate one item to others like it. Folksonomy allows large disparate groups of users to collaboratively label massive, dynamic information systems. The strength of taxonomies is their browsability: users can easily start from more generalized knowledge and target their queries towards more specific and detailed knowledge. Folksonomy looks to categorize tags and thus create browsable spaces of information that are easy to maintain and expand.
Social tagging for knowledge acquisition
Social tagging for knowledge acquisition is the specific use of tagging for finding and re-finding specific content for an individual or group. Social tagging systems differ from traditional taxonomies in that they are community-based systems lacking the traditional hierarchy of taxonomies. Rather than a top-down approach, social tagging relies on users to create the folksonomy from the bottom up.
Common uses of social tagging for knowledge acquisition include personal development for individual use and collaborative projects. Social tagging is used for knowledge acquisition in secondary, post-secondary, and graduate education as well as personal and business research. The benefits of finding/re-finding source information are applicable to a wide spectrum of users. Tagged resources are located through search queries rather than searching through a more traditional file folder system. The social aspect of tagging also allows users to take advantage of metadata from thousands of other users.
Users choose individual tags for stored resources. These tags reflect personal associations, categories, and concepts, all of which are individual representations based on meaning and relevance to that individual. The tags, or keywords, are designated by users. Consequently, tags represent a user's associations corresponding to the resource. Commonly tagged resources include videos, photos, articles, websites, and email. Tags are beneficial for a couple of reasons. First, they help to structure and organize large amounts of digital resources in a manner that makes them easily accessible when users attempt to locate the resource at a later time. The second aspect is social in nature, that is to say that users may search for new resources and content based on the tags of other users. Even the act of browsing through common tags may lead to further resources for knowledge acquisition.
Tags that occur more frequently with specific resources are said to be more strongly connected. Furthermore, tags may be connected to each other. This may be seen in the frequency in which they co-occur. The more often they co-occur, the stronger the connection. Tag clouds are often utilized to visualize connectivity between resources and tags. Font size increases as the strength of association increases.
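The co-occurrence counting and tag-cloud sizing described above can be sketched as follows; the linear font scaling is just one simple choice, and the data and names are illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

# Tag sets applied to individual resources (illustrative data).
tagged_resources = [
    {"maps", "geodata", "openstreetmap"},
    {"maps", "geodata"},
    {"maps", "folksonomy"},
]

# Connection strength between two tags = how often they co-occur on a resource.
co_occurrence = Counter()
for tags in tagged_resources:
    for pair in combinations(sorted(tags), 2):
        co_occurrence[pair] += 1

# Simple tag-cloud sizing: font size grows with how often a tag is used overall.
tag_counts = Counter(tag for tags in tagged_resources for tag in tags)
min_pt, max_pt = 10, 32
most = max(tag_counts.values())
font_sizes = {tag: min_pt + (max_pt - min_pt) * count / most
              for tag, count in tag_counts.items()}

print(co_occurrence.most_common(3))   # strongest tag-tag connections
print(font_sizes)                     # e.g. 'maps' gets the largest font
```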
Tags show interconnections of concepts that were formerly unknown to a user. Therefore, a user's current cognitive constructs may be modified or augmented by the metadata information found in aggregated social tags. This process promotes knowledge acquisition through cognitive irritation and equilibration. This theoretical framework is known as the co-evolution model of individual and collective knowledge.
The co-evolution model focuses on cognitive conflict in which a learner's prior knowledge and the information received from the environment are dissimilar to some degree. When this incongruence occurs, the learner must work through a process of cognitive equilibration in order to make personal cognitive constructs and outside information congruent. According to the co-evolution model, this may require the learner to modify existing constructs or simply add to them. The additional cognitive effort promotes information processing which in turn allows individual learning to occur.
Examples
Archive of Our Own: fan fiction archive
BibSonomy: social bookmarking and publication-sharing system
del.icio.us: public tagging service
Diigo: social bookmarking website
Flickr: shared photos
Instagram: online photo-sharing and social networking service
Many libraries' online catalogs
Last.fm: music listening community and algorithmic radio stations
Mendeley: social reference management software
MusicBrainz: online music metadata database
OpenStreetMap: map database
Pinterest: photo sharing and saving website
Steam: video game store
StumbleUpon: content discovery engine
Twitter hashtags
Tumblr tags
The World Wide Web Consortium's Annotea project with user-generated tags in 2002.
WordPress: blogging tool and Content Management System
See also
Autotagging
Blogosphere
Collective intelligence
Enterprise bookmarking
Faceted classification
Hierarchical clustering
Semantic annotation
Semantic similarity
Thesaurus
Weak ontology
Wiki
References
External links
Folksonomies as a tool for professional scientific databases
"The Three Orders": 2005 explanation of tagging and folksonomies (Archived version)
Vanderwal's definition of folksonomy
Vanderwal's take on Wikipedia's definition of folksonomy
Classroom Collaboration Using Social Bookmarking Service Diigo
Collective intelligence
Knowledge representation
Metadata
Semantic Web
Social bookmarking
Taxonomy
Web 2.0 neologisms
Sociology of knowledge
Information architecture
Crowdsourcing | Folksonomy | Technology | 2,546 |
8,591,468 | https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Crux | This is the list of notable stars in the constellation Crux, sorted by decreasing brightness.
See also
List of star names in Crux
List of stars by constellation
Bandeira do Brasil: Sobre as estrelas (Portuguese)
Notes
References
List
Crux | List of stars in Crux | Astronomy | 53 |
167,748 | https://en.wikipedia.org/wiki/Sex%20position | A sex position is a positioning of the bodies that people use to engage in sexual intercourse or other sexual activities. Sexual acts are generally described by the positions the participants adopt in order to perform those acts. Though sexual intercourse generally involves penetration of the body of one person by another, sex positions commonly involve non-penetrative sexual activities.
Three broad and overlapping categories of sexual activity are commonly practiced: vaginal sex, anal sex, and oral sex (mouth-on-genital or mouth-on-anus). Sex acts may also be part of a fourth category, manual sex, which is stimulating the genitals or anus by using fingers or hands. Some acts may include stimulation by a device (sex toy), such as a dildo or vibrator. There are numerous sex positions that participants may adopt in any of these types of sex acts, and some authors have argued that the number of sex positions is essentially limitless.
History
Sex manuals typically present a guide to sex positions. They have a long history. In the Greco-Roman era, a sex manual was written by
Philaenis of Samos, possibly a hetaira (courtesan) of the Hellenistic period (3rd–1st century BC). The Kama Sutra of Vatsyayana, believed to have been written in the 1st to 6th centuries, has a notorious reputation as a sex manual. Different sex positions result in differences in the depth and angle of sexual penetration. Alfred Kinsey categorized six primary positions. The earliest known European medieval text dedicated to sexual positions is the Speculum al foderi (The Mirror of Coitus), a 15th-century Catalan text discovered in the 1970s.
Exclusively penetrative
These positions involve the insertion of a phallic object(s) (such as a penis, strap-on dildo, plug, or other nonporous object(s)) into a vagina, anus or mouth.
Penetrating partner on top with front entry
The most used sex position is the missionary position. In this position, the participants face each other. The receiving partner lies on their back with legs apart, while the penetrating partner lies on top. This position and the following variations may be used for vaginal or anal intercourse.
The penetrating partner stands in front of the receiving partner, whose legs dangle over the edge of a bed or some other platform like a table.
With the receiving partner's legs lifted towards the ceiling and resting against the penetrating partner, this is sometimes called the butterfly position. This can also be done as a kneeling position.
The receiving partner lies on their back. The penetrating partner stands and lifts the receiving partner's pelvis for penetration. A variant is for the receiving partner to rest their legs on the penetrating partner's shoulders.
The receiving partner lies on their back, legs pulled up straight and knees near to the head. The penetrating partner holds the receiving partner's legs and penetrates from above.
Similarly to the previous position, but the receiving partner's legs need not be straight and the penetrating partner wraps their arms around the receiving partner to push the legs as close as possible to the chest. Called the stopperage in Burton's translation of The Perfumed Garden.
The coital alignment technique, a position where a woman is vaginally penetrated by a man, and the penetrating partner moves upward along the woman's body until the penis is pointing down, the dorsal side of the penis now rubbing against the clitoris.
The receiving partner crosses their feet behind their head (or at least puts their feet next to their ears), while lying on their back. The penetrating partner then holds the receiving partner tightly around each instep or ankle and lies on the receiving partner full-length. A variation is to have the receiving partner cross their ankles on their stomach, knees to shoulders, and then have the penetrating partner lie on the receiving partner's crossed ankles with their full weight. Called the Viennese oyster by The Joy of Sex.
Penetrating from behind
Most of these positions can be used for either vaginal or anal penetration. Variants include:
The receiving partner is on all fours with their torso horizontal and the penetrating partner inserts either their penis or sex toy into either the vagina or anus from behind.
The receiving partner's torso is angled downwards and the penetrating partner raises their own hips above those of the receiving partner for maximum penetration.
The penetrating partner places their feet on each side of the receiving partner while keeping their knees bent and effectively raising up as high as possible while maintaining penetration. The penetrating partner's hands usually have to be placed on the receiving partner's back to keep from falling forward.
The receiving partner kneels upright while the penetrating partner gently pulls the receiving partner's arms backwards at the wrists towards them.
In the spoons position both partners lie on their side, facing the same direction. Variants of this technique include the following:
The receiving partner lies on their side. The penetrating partner kneels and penetrates from behind. Alternatively, the penetrating partner can stand if the receiving partner is on a raised surface.
The receiving partner lies facing down in prone position, possibly with their legs spread. The penetrating partner lies on top of them. The placement of a pillow beneath the receiving partner's hips can help increase stimulation in this position.
The receiving partner lies face down, knees together. The penetrating partner lies on top with spread legs.
The receiving partner lies on their side with their uppermost leg forward. The penetrating partner kneels astride the receiver's lowermost leg.
Receiving partner on top
Most of these positions can be used for either vaginal or anal penetration.
When the receiving partner is a woman, these positions are sometimes called the woman on top, or cowgirl positions.
A feature of these positions is that the penetrating partner lies on their back with the receiving partner on top:
The receiving partner can kneel while straddling the penetrating partner, with the participants facing each other.
Alternatively, the receiving partner can face away from the penetrating partner. This position is sometimes called the reverse cowgirl position.
The receiving partner can arch back with hands on the ground.
The receiving partner can squat (instead of kneel) facing the penetrating partner.
The receiving partner can bring forward their knees against the ground.
The penetrating partner lies with their upper back on a low table, couch, chair or edge of bed, keeping their feet flat on the floor and back parallel to floor. The receiving partner straddles them, also keeping their feet on the floor. Receiving partner can assume any of various positions.
The lateral coital position was recommended by Masters and Johnson, and was preferred by three quarters of their heterosexual study participants after having tried it. The position involves the male on his back, with the female rolled slightly to the side so that her pelvis is atop his, but her weight is beside his. This position can also be used for anal penetration, and is not limited to heterosexual partners.
Sitting and kneeling
Most of these positions can be used for either vaginal or anal penetration.
The penetrating partner sits on a flat surface, legs outstretched. The receiving partner sits on top and wraps their legs around the penetrating partner. Called pounding on the spot in the Burton translation of The Perfumed Garden. If the penetrating partner sits cross-legged, it is called the lotus position or lotus flower. The position can be combined with fondling of erogenous zones.
The penetrating partner sits in a chair. The receiving partner straddles penetrating partner and sits, facing the penetrating partner, feet on floor. This is sometimes called a lap dance, which is somewhat erroneous as a lap dance typically does not involve penetration. The receiving partner may also sit in reverse, with their back to the penetrating partner.
The penetrating partner sits on a couch or in a chair that has armrests. The receiving partner sits in the penetrating partner's lap, perpendicular to penetrating partner, with their back against the armrest.
The penetrating partner kneels while the receiving partner lies on their back, ankles on each side of penetrating partner's shoulders.
Standing
Most of these positions can be used for either vaginal or anal penetration. In the basic standing position, both partners stand facing each other. The following variations are possible:
In the basic standing position, both partners stand facing each other and engage in vaginal sex. In order to match heights, the shorter partner can, for instance, stand on a stair or wear high heels. It may be easier to maintain solid thrusts if the woman has her back to a wall. With such a support, the Kama Sutra calls this position the Suspended Congress. This position is most often used in upright places, such as a wall in a bedroom or a shower.
The penetrating partner stands, and the receiving partner wraps their arms around his neck, and their legs around his waist, thereby exposing either the vagina or anus to the man's penis. This position is made easier with the use of a solid object behind the receiver, as above. To assume this position, it can be easier to start with the receiving partner laying on their back on the edge of a bed; the penetrating partner puts his elbows under their knees, enters them, and then lifts them as he rises to a standing position. In Japan, this is colloquially called the Ekiben position, after a specific bento lunch box sold at train stations.
Alternatively, the receiving partner can face away from the penetrating partner which allows for anal sex. This position is varied by having the receiving partner assume different semi-standing positions. For instance, they may bend at the waist, resting their hands or elbows on a table.
Anal sex positions
These positions involve anal penetration:
Doggy style penetration maximizes the depth of penetration, but can pose the risk of pushing against the sigmoid colon. If the receiving partner is male, this increases the chances of stimulating the prostate. The penetrating partner controls the thrusting rhythm. A variation is the leapfrog position, in which the receiving partner angles their torso downward. The receiving partner may also lie flat and face down, with the penetrating partner straddling their thighs.
In the missionary positions, to achieve optimal alignment, the receiving partner's legs should be in the air with the knees drawn towards their chest. Some sort of support (such as a pillow) under the receiving partner's hips can also be useful. The penetrating partner positions themselves between the receiving partner's legs. The penetrating partner controls the thrusting rhythm. This position is often cited as good for beginners, because it allows them to relax more fully than is usual in the doggy style position.
The spoons position allows the receiving partner to control initial penetration and the depth, speed and force of subsequent thrusting.
The receiving partner on top positions allow the receiving partner more control over the depth, rhythm and speed of penetration. More specifically, the receiving partner can slowly push their anus down on the penetrating partner, allowing time for their muscles to relax.
Less common positions
These positions are more innovative, and perhaps not as widely known or practiced as the ones listed above.
The receiving partner lies on their back with knees up and legs apart. The penetrating partner lies on their side perpendicular to the receiver, with the penetrating partner's hips under the arch formed by receiver's legs. This position is sometimes called the T-square.
The receiving partner's legs are together turning to one side while looking up towards the penetrator, who has spread legs and is kneeling straight behind the other's hips. The penetrator's hands are on the other's hips. This position can be called the modified T-square.
The Seventh Posture of Burton's translation of The Perfumed Garden is an unusual position not described in other classical sex manuals. The receiving partner lies on their side. The penetrating partner faces the receiver, straddling the receiver's lower leg, and lifts the receiver's upper leg on either side of the body onto the crook of penetrating partner's elbow or onto the shoulder. While some references describe this position as being "for acrobats and not to be taken seriously", others have found it very comfortable, especially during pregnancy.
The piledriver is a difficult position sometimes seen in porn videos. It is described in many ways by different sources. In a heterosexual context, the woman lies on her back, then raises her hips as high as possible, so that her partner, standing, can enter her vaginally or anally. The position places considerable strain on the woman's neck, so firm cushions should be used to support her.
The receiver lies face down legs spread on the edge of the bed and parallel to the floor, while the penetrator stands behind, holding both legs.
The rusty bike pump is similar to a piledriver where penetration is achieved from above at a downward angle with the receiving partner bottom side up.
Others
The receiving partner is on the bottom. The penetrating partner lies on top perpendicularly to them.
The penetrating partner lies on their back, legs spread. The receiving partner is on their back on top of the penetrator, legs spread, facing the opposite direction.
The penetrator and the receiver lie on their backs, heads pointed away from one another. Each places one leg on the other's shoulder (as a brace) and the other leg out somewhat to the side.
The receiving partner lies on their back with the penetrating partner lying perpendicular. The receiving partner bends the knee closest to the penetrating partner's head enough so that there is room for the penetrating partner's waist to fit beneath it, while the penetrating partner's legs straddle the receiving partner's other leg. The in-and-out thrusting action will move more along a side-to-side rather than top-to-bottom axis. This position allows for breast stimulation during sex, for partners to maintain eye contact if they wish, and for a good view of both partners as they reach orgasm.
The penetrating partner sits on edge of a bed or chair with feet spread wide on floor. The receiving partner lies on their back on the floor and drapes their legs and thighs over the legs of the penetrating partner. The penetrating partner holds the knees of the receiving partner and controls thrusts.
Using furniture or special apparatus
Most sex acts are typically performed on a bed or other simple platform. As the range of supports available increases, so does the range of positions that are possible. Ordinary furniture can be used for this purpose. Also, various forms of erotic furniture and other apparatus such as fisting slings and trapezes have been used to facilitate even more exotic sexual positions.
Positions to promote or prevent conception
Pregnancy is a potential result of any form of sexual activity where sperm comes in contact with the vagina; this is typically during vaginal sex, but pregnancy can result from anal sex, digital sex (fingering), oral sex, or by another body part, if sperm is transferred from one area to the vagina between a fertile female and a fertile male. Men and women are typically fertile during puberty. Though certain sexual positions are believed to produce more favorable results than others, none of these are effective means of contraception.
Positions during pregnancy
The goal is to prevent excessive pressure on the belly and to restrict penetration as required by the particular partners. Some of the positions below are popular positions for sex during pregnancy.
Woman on top: takes the pressure off of the woman's abdomen and allows her to control the depth and frequency of thrusting.
Woman on back: like the missionary, but with less pressure on abdomen or uterus. The woman lies on her back and raises her knees up towards her chest. The partner kneels between her legs and enters from the front. A pillow is placed under her bottom for added comfort.
Sideways: also keeps pressure off of her abdomen while supporting her uterus at the same time.
Spooning: very popular positions to use during the late stages of pregnancy; allowing only shallow penetration and relieves the pressure on the abdomen.
Sitting: she mounts the sitting partner, relieving her abdomen of pressure.
From behind: allowing her to support abdomen and breasts.
Non-exclusively penetrative
Oral sex positions
Oral sex is genital stimulation by the mouth. It may be penetrative or non-penetrative, and may take place before, during, as, or following intercourse. It may also be performed simultaneously (for example, when one partner performs cunnilingus, while the other partner performs fellatio), or only one partner may perform upon the other; this creates a multitude of variations.
Fellatio
Fellatio is oral sex performed on the penis. Possible positions include:
Sitting
The receiver lies on their back while the partner kneels between the receiver's legs.
The receiver lies on their back while the partner lies off to the side of their legs.
The receiver sits in a chair, the partner kneels in front of them between their legs.
Standing
The receiver stands while the partner either kneels in front of them or sits (in a chair or on the edge of a bed, etc.) and bends forward.
The receiver stands while the partner, also standing, bends forward at the waist.
The receiver stands or crouches at the edge of the bed, facing the bed. The active partner lies on the bed with their head hanging over the edge of the bed backward. The receiver inserts their penis into the partner's mouth, usually to achieve deep throat penetration.
Lying
While the active partner lies on their back, the receiver assumes the missionary position but adjusted forward.
The active partner (with breasts) lies on their back, and the receiver inserts their penis between the breasts, and into the mouth.
Cunnilingus
Cunnilingus is oral sex performed on the vulva. Possible positions include:
The receiver lies on her back as in the missionary position. The active partner lies on their front between their legs.
The active partner sits. The receiver stands facing away and bends at the hips.
The active partner sits. The receiver stands or squats facing towards partner and may arch their back, to create further stimulation.
The active partner lies on their back while the receiver kneels with their legs at their sides and their vulva on their mouth. In other words, the receiver sits on the partner's face.
The receiver rests on all fours as in the doggy style position. The partner lies on their back with their head under their vulva. Their feet may commonly extend off the bed and rest on the floor.
The receiver stands, possibly bracing themself against a wall. The active partner kneels in front of them.
The receiver sits on the bed with their legs open, the active partner kneels in front of them.
The receiver is upside-down (standing on hands, held by partner, or using support, such as bondage or furniture), with the active partner standing or kneeling (depending on elevation) in front or behind. Such a position may be difficult to achieve, or maintain for extended time periods, but the rush of blood to the brain can alter stimulation's effect.
The receiver stands on hands, resting each leg on either side of the active partner's head, with the active partner standing or kneeling facing them. Depending on which way up the receiver is facing, different stimulation and levels of comfort may be available.
Sixty-nine
Simultaneous oral sex between two people is called 69. They can lie side-by-side, lie one on top of the other, or stand with one partner holding the other upside down.
Anilingus
Anilingus is oral sex performed on the anus. Positions for anilingus are often variants on those for genital-oral sex. Possible positions include:
The passive partner is on all fours in the doggy position with the active partner behind.
The passive partner is on their back in the missionary position with their legs up.
The passive partner on top in the 69 position.
The rusty trombone, in which a male stands while the active partner performs both anilingus from behind, generally from a kneeling position, and also manually stimulates the standing partner's penis, thus somewhat resembling someone playing the trombone.
Other positions
Fingering of the vulva, vagina or anus.
Fisting: inserting the entire hand into the vagina or rectum.
Non-penetrative
Non-penetrative sex or frottage generally refers to a sexual activity that excludes penetration, and often includes rubbing one's genitals on one's sexual partner. This may include the partner's genitals or buttocks, and can involve different sex positions. As part of foreplay or to avoid penetrative sex, people engage in a variety of non-penetrative sexual behavior, which may or may not lead to orgasm.
Dry humping: frottage while clothed. This act is common, although not essential, in the dance style known as "grinding".
Handjob: manual stimulation of a partner's penis.
Fingering: manual stimulation of a partner's vulva.
Footjob: using the feet to stimulate the genitals.
Mammary intercourse: using the breasts together to stimulate the penis through the cleavage.
Axillary intercourse: with the penis in the armpit. Commonly known as "bagpiping".
Orgasm control: By self or by a partner managing the physical stimulation and sensation connected with the emotional and physiologic excitement levels. Through the practice of masturbation, an individual can learn to develop control of their own body's orgasmic response and timing. In partnered stimulation, either partner can control their own orgasmic response and timing. With mutual agreement, either partner can similarly learn to control or enhance their partner's orgasmic response and timing. Partnered stimulation orgasm techniques referred to as expanded orgasm, extended orgasm or orgasm control can be learned and practiced for either partner to refine their control of the orgasmic response of the other. Partners mutually choose which is in control or in response to the other.
The slang term humping may refer to masturbation—thrusting one's genitals against the surface of non-sexual objects, clothed or unclothed; or it may refer to penetrative sex.
Genital-genital rubbing
Genital-genital rubbing (often termed GG rubbing by primatologists to describe the ubiquitous behavior among female bonobos) is the sexual act of mutually rubbing genitals; it is commonly grouped with frottage, as well as other terms, such as non-penetrative sex or outercourse:
Intercrural sex or interfemoral sex: the penis is placed between the partner's thighs, perhaps rubbing the vulva, scrotum or perineum.
Frot: two males mutually rubbing penises together.
Tribadism: two females mutually rubbing vulvae together.
Docking: inserting the glans penis into the foreskin of another penis.
Group sex
People may participate in group sex. While group sex does not imply that all participants must be in sexual contact with all others simultaneously, some positions are only possible with three or more people.
As with the positions listed above, more group sex positions become practical if erotic furniture is used.
Threesomes
When three people have sex with each other, it is called a threesome. Possible ways of having all partners in sexual contact with each include some of the following:
One person performs oral sex on one partner while they engage in receptive anal or vaginal intercourse with the other partner. Sometimes called a spit roast.
The 369 position is where two people engage in oral sex in the 69 position while a third person positions himself to penetrate one of the others; usually a man engaging in sex doggie-style with the woman on top in the 69 position.
A man has vaginal or anal sex with one partner, while himself being anally penetrated by another (possibly with a strap-on dildo).
Two participants engage in cowgirl position, a third straddles man's face allowing him to go down on them. Generally called a double cowgirl.
Three partners lie or stand in parallel, with one between the other two. Sometimes called a sandwich. This term may specifically refer to the double penetration of a woman, with one penis in her anus, and the other in her vagina or of a male, with two penises in his anus.
Two participants have vaginal/anal sex with each other, and one/both perform oral sex on a third.
Three people perform oral/vaginal/anal sex on one another simultaneously, commonly called a daisy chain.
The slang term lucky Pierre is sometimes used in reference to the person playing the middle role in a threesome, being anally penetrated while engaging in penetrative anal or vaginal sex.
Foursomes
A 469 is a four-person sexual position where two individuals engage in 69 oral sex while a third and a fourth person both position themselves on each end to penetrate the two engaged in simultaneous oral sex; similar to a 369, with the addition of a fourth person.
With many participants
These positions can be expanded to accommodate any number of participants:
A group of males masturbating is called a circle jerk.
Sexual intercourse involving multiple women in which one man is the central focus is known as reverse gangbang.
A group of males masturbating and ejaculating on one person's face is known as bukkake.
A group of men, women, or both, each performing oral sex upon each other, in a circular arrangement, is a daisy chain.
When one woman or man is given the serial or parallel attention of many, often involving a queue (pulling a train), it is often termed a gang bang.
Multiple penetration
A person may be sexually penetrated multiple times simultaneously. Penetration may involve use of fingers, toes, sex toys, or penises. Scenes of multiple penetration are common in pornography.
If one person is penetrated by two objects, it is generically called double penetration (DP). Double penetration of the vagina, anus, or mouth can involve:
Simultaneous penetration of the anus by two penises or other objects. This is commonly called double anal penetration (DAP).
Simultaneous penetration of the vagina by two penises or other objects. This is commonly called double vaginal penetration (DVP) or double stuffing.
Simultaneous penetration of the vagina and anus. The shocker accomplishes this using several fingers of one hand.
Simultaneous penetration of the mouth and either the vagina or anus. If the penetrating objects are penises, this is sometimes called the spit roast, the Chinese finger trap, or the Eiffel tower.
Cultural differences and preferences
Sexual practices vary between cultures. Latin American couples that recorded their sexual activities do not practice the missionary position as much as couples from the United States reported. The duration of sexual intercourse seems to be similar amongst European and Latin American couples.
See also
Bondage positions and methods
References
Further reading
Historical
Kama Sutra
The Perfumed Garden
Modern
(235 pages)
(272 pages)
(101 pages—design criteria for assistive furniture, with sections on accommodation of disabled persons.)
(96 pages)
(376 pages)
External links
Sex positions
Sexology
Sexual intercourse | Sex position | Biology | 5,571 |
5,021,571 | https://en.wikipedia.org/wiki/Aluminium%20fluoride | Aluminium fluoride is an inorganic compound with the formula AlF3. It forms hydrates AlF3·xH2O. Anhydrous AlF3 and its hydrates are all colorless solids. Anhydrous AlF3 is used in the production of aluminium. Several forms occur as minerals.
Occurrence and production
Aside from anhydrous AlF3, several hydrates are known. With the formula AlF3·xH2O, these compounds include a monohydrate (x = 1), two polymorphs of the trihydrate (x = 3), a hexahydrate (x = 6), and a nonahydrate (x = 9).
The majority of aluminium fluoride is produced by treating alumina with hydrogen fluoride at 700 °C: Al2O3 + 6 HF → 2 AlF3 + 3 H2O. Hexafluorosilicic acid may also be used to make aluminium fluoride.
Alternatively, it is manufactured by thermal decomposition of ammonium hexafluoroaluminate. For small scale laboratory preparations, can also be prepared by treating aluminium hydroxide or aluminium with hydrogen fluoride.
Aluminium fluoride trihydrate is found in nature as the rare mineral rosenbergite.
The anhydrous form appears as the relatively recently (as of 2020) recognized mineral óskarssonite. A related, exceedingly rare mineral is zharchikhite.
Structure
According to X-ray crystallography, anhydrous AlF3 adopts the rhenium trioxide motif, featuring distorted AlF6 octahedra. Each fluoride is connected to two Al centers. Because of its three-dimensional polymeric structure, AlF3 has a high melting point. The other trihalides of aluminium differ in the solid state: AlCl3 has a layer structure, while AlBr3 and AlI3 are molecular dimers. They also have low melting points and evaporate readily to give dimers. In the gas phase, aluminium fluoride exists as trigonal AlF3 molecules of D3h symmetry. The Al–F bond lengths of this gaseous molecule are 163 pm.
Applications
Aluminium fluoride is an important additive for the production of aluminium by electrolysis. Together with cryolite, it lowers the melting point to below 1000 °C and increases the conductivity of the solution. It is into this molten salt that aluminium oxide is dissolved and then electrolyzed to give bulk Al metal.
Aluminium fluoride complexes are used to study the mechanistic aspects of phosphoryl transfer reactions in biology, which are of fundamental importance to cells, as phosphoric acid anhydrides such as adenosine triphosphate and guanosine triphosphate control most of the reactions involved in metabolism, growth and differentiation. The observation that aluminium fluoride can bind to and activate heterotrimeric G proteins has proven to be useful for the study of G protein activation in vivo, for the elucidation of three-dimensional structures of several GTPases, and for understanding the biochemical mechanism of GTP hydrolysis, including the role of GTPase-activating proteins.
Niche uses
Together with zirconium fluoride, aluminium fluoride is an ingredient for the production of fluoroaluminate glasses.
It is also used to inhibit fermentation.
Like magnesium fluoride it is used as a low-index optical thin film, particularly when far UV transparency is required. Its deposition by physical vapor deposition, particularly by evaporation, is favorable.
Safety
The reported oral animal lethal dose (LD50) of aluminium fluoride is 100 mg/kg. Repeated or prolonged inhalation exposure may cause asthma, and may have effects on the bone and nervous system, resulting in bone alterations (fluorosis), and nervous system impairment.
Many of the neurotoxic effects of fluoride are due to the formation of aluminium fluoride complexes, which mimic the chemical structure of a phosphate and influence the activity of ATP phosphohydrolases and phospholipase D. Only micromolar concentrations of aluminium are needed to form aluminium fluoride.
Human exposure to aluminium fluoride can occur in an industrial setting, such as emissions from aluminium reduction processes, or when a person ingests both a fluoride source (e.g., fluoride in drinking water or residue of fluoride-based pesticides) and an aluminium source; sources of human exposure to aluminium include drinking water, tea, food residues, infant formula, aluminium-containing antacids or medications, deodorants, cosmetics, and glassware. Fluoridation chemicals may also contain aluminium fluoride. Data on the potential neurotoxic effects of chronic exposure to the aluminium species existing in water are limited.
See also
Aluminium monofluoride
References
External links
MSDS
ToxNet Profile
PubChem
Aluminium compounds
Fluorides
Metal halides | Aluminium fluoride | Chemistry | 984 |
77,769,841 | https://en.wikipedia.org/wiki/Bedoradrine | Bedoradrine (developmental code names KUR-1246, MN-221) is a sympathomimetic and bronchodilator medication that was developed for the treatment of preterm labor, asthma, and chronic obstructive pulmonary disease (COPD) but was never marketed. It acts as an ultra-selective long-acting β2-adrenergic receptor agonist. The drug was intended for intravenous administration.
See also
Hexoprenaline
Ritodrine
Terbutaline
References
2-Aminotetralins
Abandoned drugs
Beta2-adrenergic agonists
Bronchodilators
Dimethylamino compounds
Enantiopure drugs
Phenols
Phenylethanolamines
Tocolytics
Diols | Bedoradrine | Chemistry | 165 |
8,103,418 | https://en.wikipedia.org/wiki/Beeturia | Beeturia is the passing of red or pink urine after eating beetroots or foods colored with beetroot extract or beetroot betalain pigments. The color is caused by the excretion of the betalain pigments, such as betanin.
The coloring is highly variable between individuals and between different feeding occasions, and can vary in intensity from light pink urine to strongly-colored deep red urine. The condition is benign and dissipates promptly with avoidance of beet foods. Beeturia occurs in about 10-14% of the public, with higher frequency and intensity occurring in people with iron deficiency, pernicious anemia or digestive diseases.
The pigment is sensitive to oxidative degradation under strongly acidic conditions. Therefore, the urine coloring depends on stomach acidity and dwell time, as well as the presence in foods of betalain-protecting substances, such as oxalic acid. Beeturia is often associated with red or pink feces.
Cause
The red color seen in beeturia is caused by the presence of unmetabolized betalain pigments such as betanin in beetroot passed through the body. The pigments are absorbed in the colon. Betalains are oxidation-sensitive redox indicators that are decolorized by hydrochloric acid, ferric ions, and colonic bacteria preparations. The gut flora play a not-yet-evaluated role in the breakdown of the pigment.
Differential diagnosis
The incidence of beeturia increases in people with pernicious anemia and iron deficiency. There is no known relation to deficiencies in liver metabolism or removal from the body by the kidneys. There is no known direct genetic influence, and no single gene variant, that differentiates excreters from non-excreters.
Factors
The extent of excreted pigment depends on the beet pigment content of the meal, including the addition of concentrated beetroot extract as a food additive to certain processed foods. Storage conditions of the beet foods, including light, heat, and oxygen exposure, and repeated freeze-thaw cycles could degrade the beet pigments. Stomach acidity and dwell time may affect urine color intensity. The presence of beet pigment-protecting substances, such as oxalic acid, in the meal and during intestinal passage, increase the color intensity in the urine. Medications may affect stomach acidity, such as proton pump inhibitors, thereby affecting urine color.
See also
Food coloring
Porphyria, a group of disorders that may cause reddish urine
Blue diaper syndrome
References
Symptoms and signs: Urinary system
Urine | Beeturia | Biology | 530 |
33,024,085 | https://en.wikipedia.org/wiki/Narnatumab | Narnatumab is a human monoclonal antibody designed for the treatment of cancer. Clinical development was abandoned after phase I trials.
Narnatumab was developed by ImClone Systems.
References
Monoclonal antibodies
Abandoned drugs | Narnatumab | Chemistry | 48 |
39,731,430 | https://en.wikipedia.org/wiki/Piotr%20Piecuch | Piotr Piecuch (born January 21, 1960) is a Polish-born American physical chemist. He holds the title of university distinguished professor in the department of chemistry at Michigan State University, East Lansing, Michigan, United States. He supervises a group, whose research focuses on theoretical and computational chemistry as well as theoretical and computational physics, particularly on the development and applications of many-body methods for accurate quantum calculations for molecular systems and atomic nuclei, including methods based on coupled cluster theory, mathematical methods of chemistry and physics, and theory of intermolecular forces. His group is also responsible for the development of the coupled-cluster computer codes incorporated in the widely used GAMESS (US) package.
Education and academic posts
Piecuch studied chemistry at the undergraduate and graduate levels at the University of Wrocław, Poland. He received his M.S. degree in 1983 and Ph.D. degree in 1988. After postdoctoral and research faculty appointments at the University of Waterloo, Canada (1988–91, 1994–95), where he worked with Professors Josef Paldus and Jiri Čížek, University of Arizona (1992–93), where he worked with Professor Ludwik Adamowicz, University of Toronto, Canada (1995–97), where he worked with the recipient of the 1986 Nobel Prize in Chemistry, Professor John C. Polanyi, and University of Florida (1997–98), where he worked with Professor Rodney J. Bartlett, he joined the faculty at Michigan State University as an assistant professor in 1998. He was promoted to an associate professor in 2002 and professor in 2004. He was named a university distinguished professor in 2007. While his primary appointment at Michigan State University is with the department of chemistry, he has also held adjunct professorship appointments in the Department of Physics and Astronomy (2003–10, 2014-). During his tenure at Michigan State University, he was named a visiting professor at the University of Coimbra, Portugal (2006), Kyoto University, Japan (2005), Institute for Molecular Science, National Institutes of Natural Sciences, in Okazaki, Japan (2012–13), and a Clark Way Harrison Distinguished Visiting Professor at the Washington University in St. Louis, United States (2016). The latter visit resulted in the creation of the on-line lecture series on algebraic and diagrammatic methods for many-fermion systems, consisting of more than 40 high-definition videos, available on YouTube.
Research interests and accomplishments
Piecuch has established himself as one of the leaders of electronic structure theory. Of particular note are his contributions to coupled-cluster and many-body theories. His work on the renormalized and active-space coupled-cluster methods is especially important, since the resulting approximations, such as CR-CC(2,3), CCSDt, or CC(t;3), and their extensions utilizing the equation-of-motion coupled-cluster concepts, for example, CR-EOMCC, EOMCCSDt, etc., can accurately describe potential energy surfaces, biradicals, and electronic excitations in molecules without resorting to complex multi-reference wave functions.
In general, Piecuch has been among the early and lead developers of multi-reference, response, extended, generalized, and externally corrected coupled-cluster methods, including approximate coupled-pair approaches for strongly correlated systems. His group and collaborators have also implemented linear scaling, local correlation coupled-cluster methods for large systems. The resulting multi-level local schemes that combine higher-level methods, such as CR-CC(2,3), to treat reactive parts of large molecular systems, with lower-order local or canonical methods, such as MP2 or CCSD, to describe chemically inactive regions are particularly valuable.
Although the exponential wave function ansatz of coupled-cluster theory was originally proposed by nuclear physicists, it initially found limited applications in nuclear structure theory. Piecuch and his associates, and their co-workers working in the area of nuclear theory have demonstrated the great utility of quantum-chemistry-inspired coupled-cluster approximations in the field of nuclear physics.
In addition to his coupled-cluster work, Piecuch has made major contributions to fundamental understanding and formal description of intermolecular forces, particularly pairwise non-additive effects, and developed potential energy surface extrapolation schemes based on scaling correlation energies. He has applied theoretical methods to solve many important problems in chemistry and physics. This is exemplified in papers by his group and collaborators on spectroscopy, reaction dynamics, ro-vibrational resonances in van der Waals complexes, several important reaction mechanisms in organic and bioinorganic chemistry, catalysis, and photochemistry.
As of July 6, 2018, his research has resulted in more than 200 publications that according to the Web of Science have received more than 10,100 citations and the h-index of 56. On July 6, 2018, Google Scholar reported more than 11,500 citations and the h-index of 61. In particular, Piecuch's original contributions to coupled-cluster theory, as applied to molecular problems, have been extensively discussed in the scientific literature. His intermolecular forces theory effort has been reviewed by several authors as well. His nuclear coupled-cluster theory research, in addition to being cited in the scientific literature, has received attention in more popular publications. As of July 6, 2018, he has given 235 invited lectures at national and international symposia, and academic and research institutions in the United States, Australia, Brazil, Canada, Chile, China, Czech Republic, France, Germany, Greece, Hungary, India, Italy, Japan, New Zealand, Poland, Portugal, Russia, Slovakia, South Africa, Spain, Sweden, Switzerland, Tunisia, and United Kingdom. He has co-edited 6 books and 2 special journal issues, and served on many scientific committees and advisory boards, including the editorial boards of several scientific journals and book series.
Selected publications
Awards and honors
Piecuch is an elected member of the International Academy of Quantum Molecular Science (2018), an elected Fellow of the Royal Society of Chemistry (2016), an elected Distinguished Fellow of the Kosciuszko Foundation Collegium of Eminent Scientists (2015), an elected Fellow of the American Association for the Advancement of Science (2011), an elected Fellow of the American Physical Society (2008), an Elected Member of the European Academy of Sciences, Arts and Humanities in Paris, France (2003), and a recipient of a number of other awards and honors, including, in addition to the title of the university distinguished professor that he received in 2007, the Lawrence J. Schaad Lectureship in Theoretical Chemistry at Vanderbilt University (2017), the S.R. Palit Memorial Lecture at the Indian Association for the Cultivation of Science (Kolkata, India, 2007), the Invitation Fellowship of the Japan Society for the Promotion of Science (2005), the QSCP Promising Scientist Prize of Centre de Mécanique Ondulatoire Appliquée, France, for "Scientific and Human Endeavour and Achievement" (2004), the Alfred P. Sloan Research Fellowship (2002–2004), the Wiley–International Journal of Quantum Chemistry Young Investigator Award (2000), three awards from the Polish Chemical Society for Research (1983, 1986, 1992), the award from the Minister of National Education of Poland (1989), and two awards from the Polish Academy of Sciences (1982).
Personal life
Piecuch was born in Wrocław, Poland to Telesfor and Hanna Piecuch, and has one sister, Katarzyna. He is married to Jolanta Piecuch (maiden name Sanetra). They have one child, Anna Piecuch.
References
External resources
Piotr Piecuch's MSU profile
1960 births
Living people
American physical chemists
Theoretical chemists
Michigan State University faculty
Fellows of the American Physical Society
Fellows of the American Association for the Advancement of Science
Fellows of the Royal Society of Chemistry
Members of the International Academy of Quantum Molecular Science
American people of Polish descent
Polish physical chemists
Canadian physical chemists
Computational chemists
University of Wrocław alumni
Academic staff of the University of Coimbra
Academic staff of Kyoto University
Washington University in St. Louis faculty | Piotr Piecuch | Chemistry | 1,684 |
15,624,925 | https://en.wikipedia.org/wiki/Transom%20%28architecture%29 | In architecture, a transom is a transverse horizontal structural beam or bar, or a crosspiece separating a door from a window above it. This contrasts with a mullion, a vertical structural member. Transom or transom window is also the customary U.S. word used for a transom light, the window over this crosspiece. In Britain, the transom light is usually referred to as a fanlight, often with a semi-circular shape, especially when the window is segmented like the slats of a folding hand fan. A prominent example of this is at the main entrance of 10 Downing Street, the official residence of the British prime minister.
History
In early Gothic ecclesiastical work, transoms are found only in belfry unglazed windows or spire lights, where they were deemed necessary to strengthen the mullions in the absence of the iron stay bars, which in glazed windows served a similar purpose. In the later Gothic, and more especially the Perpendicular Period, the introduction of transoms became common in windows of all kinds.
Function
Transom windows which could be opened to provide cross-ventilation while maintaining security and privacy (due to their small size and height above floor level) were a common feature of apartments, homes, office buildings, schools, and other buildings before central air conditioning and heating became common beginning in the early-to-mid 20th century.
In order to operate opening transom windows, they were generally fitted with transom operators, a sort of wand assembly. In industrial buildings, transom operators could use a variety of mechanical arrangements.
Idiomatic usage
The phrase "over the transom" refers to works submitted for publication without being solicited. The image evoked is of a writer tossing a manuscript through the open window over the door of the publisher's office.
Similarly, the phrase is used to describe the means by which confidential documents, information or tips were delivered anonymously to someone who is not officially supposed to have them.
Some such phrases may refer instead to the transom of a ship – large waves from behind can bring water over the transom.
"Like pushing a piano through a transom" is a folk idiom used to describe something exceedingly difficult; its application to childbirth (and possibly its origin) has been attributed to Alice Roosevelt Longworth and Fannie Brice.
France
In French, a transom window is called an imposte. The term vasistas, from the German Was ist das? ("what is that?"), refers to any single pane within a door or window sash which is hinged independently to provide discrete ventilation without opening the entire sash.
Japan
Architectural details called ranma are often found above doors in traditional Japanese buildings.
These details can be anything from simple shōji-style dividers to elaborate wooden carvings, and they serve as a traditional welcome to visitors of the head of the household.
See also
Fortochka
Lev door
Roof lantern
Sidelight
Skylight
References
External links
Architectural elements
Windows
Doors | Transom (architecture) | Technology,Engineering | 597 |
10,180,397 | https://en.wikipedia.org/wiki/Hapke%20parameters | The Hapke parameters are a set of parameters for an empirical model that is commonly used to describe the directional reflectance properties of the airless regolith surfaces of bodies in the Solar System. The model has been developed by astronomer Bruce Hapke at the University of Pittsburgh.
The parameters are:
ω — Single scattering albedo. This is the ratio of scattering efficiency to total light extinction (which also includes absorption) for small-particle scattering of light. That is, ω = σs / (σs + σa), where σs is the scattering coefficient and σa is the absorption coefficient.
h — The width of the opposition surge.
B0 or S(0) — The strength of the opposition surge.
g — The particle phase function parameter, also called the asymmetry factor.
θ̄ — The effective surface tilt, also called the macroscopic roughness angle.
The Hapke parameters can be used to derive other albedo and scattering properties, such as the geometric albedo, the phase integral, and the Bond albedo.
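These derived quantities are linked by the standard, model-independent relation between the Bond albedo A_B, the geometric albedo p, and the phase integral q (included here for illustration; it is a general definition rather than a result specific to Hapke's model):

A_B = p q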
See also
Albedo
Geometric albedo
Bidirectional reflectance distribution function
References
Radiometry
Scattering, absorption and radiative transfer (optics)
Equations of astronomy | Hapke parameters | Physics,Chemistry,Materials_science,Astronomy,Engineering | 225 |
2,381,034 | https://en.wikipedia.org/wiki/Nefazodone | Nefazodone, sold formerly under the brand names Serzone, Dutonin, and Nefadar among others, is an atypical antidepressant medication which is used in the treatment of depression and for other uses. Nefazodone is still available in the United States, but was withdrawn from other countries due to rare liver toxicity. The medication is taken by mouth.
Side effects of nefazodone include dry mouth, sleepiness, nausea, dizziness, blurred vision, weakness, lightheadedness, confusion, and postural low blood pressure, among others. Rarely, nefazodone can cause serious liver damage, with an incidence of death or liver transplantation of about 1 in every 250,000 to 300,000 patient years. Nefazodone is a phenylpiperazine compound and is related to trazodone. It has been described as a serotonin antagonist and reuptake inhibitor (SARI) due to its combined actions as a potent antagonist of the serotonin 5-HT2A and 5-HT2C receptors and weak serotonin–norepinephrine–dopamine reuptake inhibitor (SNDRI).
Nefazodone was introduced for medical use in 1994. Generic versions were introduced in 2003. Serious liver toxicity was first reported with nefazodone in 1998, and it was withdrawn from most markets by 2004. However, as of 2023, it continues to be available in the United States in generic form from one manufacturer, Teva Pharmaceuticals, and is manufactured in Israel.
Medical uses
Nefazodone is used to treat major depressive disorder, aggressive behavior, anxiety, and panic disorder.
Available forms
Nefazodone is available as 50 mg, 100 mg, 150 mg, 200 mg, and 250 mg tablets for oral ingestion.
Contraindications
Contraindications include the coadministration of terfenadine, astemizole, cisapride, pimozide, or carbamazepine. Nefazodone is contraindicated in patients who were withdrawn from nefazodone because of evident liver injury as well as those that have shown hypersensitivity to the drug, its inactive ingredients, or other phenylpiperazine antidepressants. Furthermore, the coadministration of triazolam and nefazodone should be avoided for all patients, including the elderly, since it causes a significant increase in the plasma level of triazolam and not all commercially available dosage forms of triazolam permit a sufficient dosage reduction. If coadministrated, a 75% reduction in the initial dosage of triazolam is recommended.
Side effects
Common and mild side effects of nefazodone reported in clinical trials more often than placebo include dry mouth (25%), sleepiness (25%), nausea (22%), dizziness (17%), blurred vision (16%), weakness (11%), lightheadedness (10%), confusion (7%), and orthostatic hypotension (5%). Rare and serious adverse reactions may include allergic reactions, fainting, painful/prolonged erection, and jaundice. Nefazodone is not especially associated with increased appetite and weight gain. It is also known for having low levels of sexual side effects in comparisons to SSRIs.
Nefazodone can cause severe liver damage which may lead to the need for liver transplantation or to death. The incidence of severe liver damage is approximately 1 in every 250,000 to 300,000 patient-years. By the time it started to be withdrawn from the markets in 2003, nefazodone had been associated with at least 53 cases of liver injury (of which 11 led to death) in the United States, and 51 cases of liver toxicity (of which 2 led to transplantation) in Canada. In a 2002 Canadian study of 32 cases, it was noted that databases like those used in the study tended to include only a small proportion of suspected drug reactions.
Treatment protocols suggest screening for pre-existing liver disease before initiating nefazodone, and those with known liver disease should not be prescribed nefazodone. If serum AST or serum ALT levels are more than 3 times the upper limit of normal (ULN), treatment should be permanently withdrawn. Enzyme labs should be done every six months, and nefazodone should not be a first-line treatment.
Interactions
Nefazodone is a potent inhibitor of CYP3A4, and may interact adversely with many commonly used medications that are metabolized by CYP3A4.
Pharmacology
Pharmacodynamics
Nefazodone acts primarily as a potent antagonist of the serotonin 5-HT2A receptor and to a lesser extent of the serotonin 5-HT2C receptor. It also has high affinity for the α1-adrenergic receptor and serotonin 5-HT1A receptor, and relatively lower affinity for the α2-adrenergic receptor and dopamine D2 receptor. Nefazodone has low but significant affinity for the serotonin, norepinephrine, and dopamine transporters as well, and therefore acts as a weak serotonin-norepinephrine-dopamine reuptake inhibitor (SNDRI). It has low but potentially significant affinity for the histamine H1 receptor, where it is an antagonist, and hence may have some antihistamine activity. Nefazodone has negligible activity at muscarinic acetylcholine receptors, and accordingly, has no anticholinergic effects.
Pharmacokinetics
The bioavailability of nefazodone is low and variable, about 20%. Its plasma protein binding is approximately 99%, but it is bound loosely.
Nefazodone is metabolized in the liver, with the main enzyme involved thought to be CYP3A4. The drug has at least four active metabolites, which include hydroxynefazodone, para-hydroxynefazodone, triazoledione, and meta-chlorophenylpiperazine (mCPP). Nefazodone has a short elimination half-life of about 2 to 4 hours. Its metabolite hydroxynefazodone similarly has an elimination half-life of about 1.5 to 4 hours, whereas the elimination half-lives of triazoledione and mCPP are longer at around 18 hours and 4 to 8 hours, respectively. Due to its long elimination half-life, triazoledione is the major metabolite and predominates in the circulation during nefazodone treatment, with plasma levels that are 4 to 10 times higher than those of nefazodone itself. Conversely, hydroxynefazodone levels are about 40% of those of nefazodone at steady state. Plasma levels of mCPP are very low at about 7% of those of nefazodone; hence, mCPP is only a minor metabolite. mCPP is thought to be formed from nefazodone specifically by CYP2D6.
The ratios of brain-to-plasma concentrations of mCPP to nefazodone are 47:1 in mice and 10:1 in rats, suggesting that brain exposure to mCPP may be much higher than plasma exposure. Conversely, hydroxynefazodone levels in the brain are 10% of those in plasma in rats. As such, in spite of its relatively low plasma concentrations, brain exposure to mCPP may be substantial, whereas that of hydroxynefazodone may be minimal.
Chemistry
Nefazodone is a phenylpiperazine; it is an alpha-phenoxyl derivative of etoperidone which in turn was a derivative of trazodone.
History
Nefazodone was discovered by scientists at Bristol-Myers Squibb (BMS) who were seeking to improve on trazodone by reducing its sedating qualities.
BMS obtained marketing approvals for nefazodone worldwide, including in the United States and Europe, in 1994. It was marketed in the United States under the brand name Serzone and in Europe under the brand name Dutonin.
The first reports of serious liver toxicity with nefazodone were published in 1998 and 1999. These instances were quickly followed by many additional cases.
In 2002 the United States Food and Drug Administration (FDA) obligated BMS to add a black box warning about potential fatal liver toxicity to the drug label. Worldwide sales in 2002 were $409 million.
In 2003 Public Citizen filed a citizen petition asking the FDA to withdraw the marketing authorization in the United States, and in early 2004 the organization sued the FDA to attempt to force withdrawal of the drug. The FDA issued a response to the petition in June 2004 and filed a motion to dismiss, and Public Citizen withdrew the suit.
Sales of nefazodone were about $100 million in 2003. By that time, it was also being marketed under the additional brand names Serzonil, Nefadar, and Rulivan.
Generic versions were introduced in the United States in 2003 and Health Canada withdrew the marketing authorization that same year.
In April 2004, BMS announced that it was going to discontinue the sale of Serzone in the United States in June 2004 and said that this was due to declining sales and generic versions were available in the United States. By that time, BMS had already withdrawn the drug from the market in Europe, Australia, New Zealand, and Canada.
In August 2020, Teva Pharmaceuticals placed nefazodone in shortage due to a shortage of a raw ingredient. On December 20, 2021, nefazodone was again made available in all strengths.
Society and culture
Generic names
Nefazodone is the generic name of the drug and its INN and BAN, while néfazodone is its DCF and nefazodone hydrochloride is its USAN and JAN.
Brand names
Nefazodone has been marketed under a number of brand names including Dutonin, Menfazona, Nefadar, Nefazodone BMS, Nefazodone Hydrochloride Teva, Reseril, Rulivan, and Serzone.
Research
Nefazodone was under development for the treatment of panic disorder, and reached phase 3 clinical trials for this indication, but development was discontinued in 2004.
The use of nefazodone to prevent migraine has been studied, due to its antagonism of the serotonin 5-HT2A and 5-HT2C receptors.
References
External links
Alpha-1 blockers
Alpha-2 blockers
Antidepressants
Anxiolytics
5-HT1A agonists
5-HT2A antagonists
5-HT2C antagonists
CYP3A4 inhibitors
H1 receptor antagonists
Hepatotoxins
Phenol ethers
Piperazines
1,2,4-Triazol-3-ones
Serotonin–norepinephrine–dopamine reuptake inhibitors
Ureas
Withdrawn drugs
3-Chlorophenyl compounds | Nefazodone | Chemistry | 2,422 |
43,646,390 | https://en.wikipedia.org/wiki/Elephant-built%20bridge | An elephant bridge, in the sense of a bridge built largely by elephants working under skilled human supervision, is a bridge whose structure consists primarily of logs that are both carried to the site and put in place by domesticated Indian elephants. Typically such bridges are built in conjunction with logging operations in South and Southeast Asia.
Elephant bridges were built for military purposes in World War II by the Allies in the China Burma India Theater, and operations behind the lines of invading Japanese troops were undertaken to move elephants and elephant handlers to behind Allied lines. The native personnel involved, and the elephants, were, before the arrival of Japanese forces, largely engaged in the logging of teak for export. The British soldier James Howard "Elephant Bill" Williams, who oversaw the evacuation, had worked in the Burma teak trade; the success of the operation is said to have hinged on his long-standing personal relationship with one particularly large elephant.
Further reading
Croke, Vicki Constantine, Elephant Company: The Inspiring Story of an Unlikely Hero and the Animals Who Helped Him Save Lives in World War II, Random House, 2014
References
Bridges | Elephant-built bridge | Engineering | 224 |
644,797 | https://en.wikipedia.org/wiki/Ricci-flat%20manifold | In the mathematical field of differential geometry, Ricci-flatness is a condition on the curvature of a Riemannian manifold. Ricci-flat manifolds are a special kind of Einstein manifold. In theoretical physics, Ricci-flat Lorentzian manifolds are of fundamental interest, as they are the solutions of Einstein's field equations in a vacuum with vanishing cosmological constant.
In Lorentzian geometry, a number of Ricci-flat metrics are known from works of Karl Schwarzschild, Roy Kerr, and Yvonne Choquet-Bruhat. In Riemannian geometry, Shing-Tung Yau's resolution of the Calabi conjecture produced a number of Ricci-flat metrics on Kähler manifolds.
Definition
A pseudo-Riemannian manifold is said to be Ricci-flat if its Ricci curvature is zero. It is straightforward to verify that, except in dimension two, a metric is Ricci-flat if and only if its Einstein tensor is zero. Ricci-flat manifolds are one of three special types of Einstein manifold, arising as the special case of scalar curvature equaling zero.
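In index notation (a standard formulation, included here for illustration), the Ricci-flatness condition and the Einstein tensor read

\mathrm{Ric}_{ij} = 0, \qquad G_{ij} = \mathrm{Ric}_{ij} - \tfrac{1}{2} R\, g_{ij},

where R denotes the scalar curvature. Taking the trace of G_{ij} = 0 in dimension n gives (1 - n/2)R = 0, so for n ≠ 2 the vanishing of the Einstein tensor forces R = 0 and hence Ric_{ij} = 0; conversely, Ric_{ij} = 0 implies R = 0 and therefore G_{ij} = 0.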
From the definition of the Weyl curvature tensor, it is immediate that any Ricci-flat metric has Weyl curvature equal to its Riemann curvature tensor. By taking traces, it is straightforward to see that the converse also holds. This may also be phrased as saying that Ricci-flatness is characterized by the vanishing of the two non-Weyl parts of the Ricci decomposition.
Since the Weyl curvature vanishes in two or three dimensions, every Ricci-flat metric in these dimensions is flat. Conversely, it is automatic from the definitions that any flat metric is Ricci-flat. The study of flat metrics is usually considered as a topic unto itself. As such, the study of Ricci-flat metrics is only a distinct topic in dimension four and above.
Examples
As noted above, any flat metric is Ricci-flat. However it is nontrivial to identify Ricci-flat manifolds whose full curvature is nonzero.
In 1916, Karl Schwarzschild found the Schwarzschild metrics, which are Ricci-flat Lorentzian manifolds of nonzero curvature. Roy Kerr later found the Kerr metrics, a two-parameter family containing the Schwarzschild metrics as a special case. These metrics are fully explicit and are of fundamental interest in the mathematics and physics of black holes. More generally, in general relativity, Ricci-flat Lorentzian manifolds represent the vacuum solutions of Einstein's field equations with vanishing cosmological constant.
Many pseudo-Riemannian manifolds are constructed as homogeneous spaces. However, these constructions are not directly helpful for Ricci-flat Riemannian metrics, in the sense that any homogeneous Riemannian manifold which is Ricci-flat must be flat. However, there are homogeneous (and even symmetric) Lorentzian manifolds which are Ricci-flat but not flat, as follows from an explicit construction and computation of Lie algebras.
Until Shing-Tung Yau's resolution of the Calabi conjecture in the 1970s, it was not known whether every Ricci-flat Riemannian metric on a closed manifold is flat. His work, using techniques of partial differential equations, established a comprehensive existence theory for Ricci-flat metrics in the special case of Kähler metrics on closed complex manifolds. Due to his analytical techniques, the metrics are non-explicit even in the simplest cases. Such Riemannian manifolds are often called Calabi–Yau manifolds, although various authors use this name in slightly different ways.
Analytical character
Relative to harmonic coordinates, the condition of Ricci-flatness for a Riemannian metric can be interpreted as a system of elliptic partial differential equations. It is a straightforward consequence of standard elliptic regularity results that any Ricci-flat Riemannian metric on a smooth manifold is analytic, in the sense that harmonic coordinates define a compatible analytic structure, and the local representation of the metric is real-analytic. This also holds in the broader setting of Einstein Riemannian metrics.
Analogously, relative to harmonic coordinates, Ricci-flatness of a Lorentzian metric can be interpreted as a system of hyperbolic partial differential equations. Based on this perspective, Yvonne Choquet-Bruhat developed the well-posedness of the Ricci-flatness condition. She reached a definitive result in collaboration with Robert Geroch in the 1960s, establishing how a certain class of maximally extended Ricci-flat Lorentzian metrics are prescribed and constructed by certain Riemannian data. These are known as maximal globally hyperbolic developments. In general relativity, this is typically interpreted as an initial value formulation of Einstein's field equations for gravitation.
The study of Ricci-flatness in the Riemannian and Lorentzian cases are quite distinct. This is already indicated by the fundamental distinction between the geodesically complete metrics which are typical of Riemannian geometry and the maximal globally hyperbolic developments which arise from Choquet-Bruhat and Geroch's work. Moreover, the analyticity and corresponding unique continuation of a Ricci-flat Riemannian metric has a fundamentally different character than Ricci-flat Lorentzian metrics, which have finite speeds of propagation and fully localizable phenomena. This can be viewed as a nonlinear geometric analogue of the difference between the Laplace equation and the wave equation.
Topology of Ricci-flat Riemannian manifolds
Yau's existence theorem for Ricci-flat Kähler metrics established the precise topological condition under which such a metric exists on a given closed complex manifold: the first Chern class of the holomorphic tangent bundle must be zero. The necessity of this condition was previously known by Chern–Weil theory.
Beyond Kähler geometry, the situation is not as well understood. A four-dimensional closed and oriented manifold supporting any Einstein Riemannian metric must satisfy the Hitchin–Thorpe inequality on its topological data. As particular cases of well-known theorems on Riemannian manifolds of nonnegative Ricci curvature, any manifold with a complete Ricci-flat Riemannian metric must:
have first Betti number less than or equal to the dimension, whenever the manifold is closed
have fundamental group of polynomial growth.
Mikhael Gromov and Blaine Lawson introduced the notion of enlargeability of a closed manifold. The class of enlargeable manifolds is closed under homotopy equivalence, the taking of products, and under the connected sum with an arbitrary closed manifold. Every Ricci-flat Riemannian manifold in this class is flat, which is a corollary of Cheeger and Gromoll's splitting theorem.
Ricci-flatness and holonomy
On a simply-connected Kähler manifold, a Kähler metric is Ricci-flat if and only if the holonomy group is contained in the special unitary group. On a general Kähler manifold, the if direction still holds, but only the restricted holonomy group of a Ricci-flat Kähler metric is necessarily contained in the special unitary group.
A hyperkähler manifold is a Riemannian manifold whose holonomy group is contained in the symplectic group. This condition on a Riemannian manifold may also be characterized (roughly speaking) by the existence of a 2-sphere of complex structures which are all parallel. This says in particular that every hyperkähler metric is Kähler; furthermore, via the Ambrose–Singer theorem, every such metric is Ricci-flat. The Calabi–Yau theorem specializes to this context, giving a general existence and uniqueness theorem for hyperkähler metrics on compact Kähler manifolds admitting holomorphically symplectic structures. Examples of hyperkähler metrics on noncompact spaces had earlier been obtained by Eugenio Calabi. The Eguchi–Hanson space, discovered at the same time, is a special case of his construction.
A quaternion-Kähler manifold is a Riemannian manifold whose holonomy group is contained in the Lie group Sp(n)·Sp(1). Marcel Berger showed that any such metric must be Einstein. Furthermore, any Ricci-flat quaternion-Kähler manifold must be locally hyperkähler, meaning that the restricted holonomy group is contained in the symplectic group.
A G2 manifold or Spin(7) manifold is a Riemannian manifold whose holonomy group is contained in the Lie groups G2 or Spin(7). The Ambrose–Singer theorem implies that any such manifold is Ricci-flat. The existence of closed manifolds of this type was established by Dominic Joyce in the 1990s.
Marcel Berger commented that all known examples of irreducible Ricci-flat Riemannian metrics on simply-connected closed manifolds have special holonomy groups, according to the above possibilities. It is not known whether this suggests an unknown general theorem or simply a limitation of known techniques. For this reason, Berger considered Ricci-flat manifolds to be "extremely mysterious."
References
Notes.
Sources.
Riemannian manifolds | Ricci-flat manifold | Mathematics | 1,892 |
3,626,780 | https://en.wikipedia.org/wiki/SIGCUM | SIGCUM, also known as Converter M-228, was a rotor cipher machine used to encrypt teleprinter traffic by the United States Army. Hastily designed by William Friedman and Frank Rowlett, the system was put into service in January 1943 before any rigorous analysis of its security had taken place. SIGCUM was subsequently discovered to be insecure by Rowlett, and was immediately withdrawn from service. The machine was redesigned to improve its security, reintroduced into service by April 1943, and remained in use until the 1960s.
Development
In 1939, Friedman and Rowlett worked on the problem of creating a secure teleprinter encryption system. They decided against using a tape-based system, such as those proposed by Gilbert Vernam, and instead conceived of the idea of generating a stream of five-bit pulses by use of wired rotors. Because of lack of funds and interest, however, the proposal was not pursued any further at that time. This changed with the United States' entry into World War II in December 1941. Rowlett was assigned to develop a teleprinter encryption system for use between Army command centers in United Kingdom and Australia (and later in North Africa).
Friedman described to Rowlett a concrete design for a teleprinter cipher machine that he had invented. However, Rowlett discovered some flaws in Friedman's proposed circuitry that showed the design to be flawed. Under pressure to report to a superior about the progress of the machine, Friedman responded angrily, accusing Rowlett of trying to destroy his reputation as a cryptanalyst. After Friedman calmed down, Rowlett proposed some designs for a replacement machine based on rotors. They settled on one, and agreed to write up a complete design and have it reviewed by another cryptanalyst by the following day.
The design agreed upon was a special attachment for a standard teleprinter. The attachment used a stack of five 26-contact rotors, the same as those used in the SIGABA, the highly secure US off-line cipher machine. Each time a key character was needed, thirteen inputs to the rotor stack were energized at the input endplate. Passing through the rotor stack, these thirteen inputs were to be scrambled at the output endplate. However, only five live contacts would be used. These five outputs would form five binary impulses, which would form the keystream for the cipher, to be combined with the message itself, encoded in the 5-bit Baudot code.
The rotors advanced odometrically; that is, after each encipherment, the "fast" rotor would advance one step. Once every revolution of the fast rotor, the "medium" rotor would step once. Similarly, with every revolution of the medium rotor, the "slow" rotor would step, and so on for the other two rotors. However, which rotor was assigned as the "fast", "medium", "slow", etc. rotor was controlled by a set of five multi-switches. This gave a total of 5! = 120 different rotor stepping patterns. The machine was equipped with a total of 10 rotors, each of which could be inserted "direct" or in reversed order, yielding 10 × 9 × 8 × 7 × 6 × 2^5 = 967,680 possible rotor orderings and alignments.
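The following Python sketch illustrates the general scheme described above: an odometrically stepped stack of five rotors producing a five-bit keystream that is combined with Baudot-coded text by XOR. The rotor wirings, the choice of input and output contacts, and the miniature Baudot table are all invented for illustration; they are not the historical SIGCUM values.

import random

random.seed(1)
N = 26
# Five rotors, each a fixed permutation of 26 contacts (invented wirings).
rotors = [random.sample(range(N), N) for _ in range(5)]
positions = [0, 0, 0, 0, 0]          # current rotational position of each rotor

def through_stack(contact):
    # Trace one energized input contact through all five rotors in turn.
    for wiring, pos in zip(rotors, positions):
        contact = (wiring[(contact + pos) % N] - pos) % N
    return contact

def key_bits():
    # Energize five assumed input contacts and read five assumed output contacts
    # as one 5-bit key value.
    live = {through_stack(c) for c in (0, 5, 10, 15, 20)}
    outputs = (1, 6, 11, 16, 21)
    return sum(1 << i for i, c in enumerate(outputs) if c in live)

def step():
    # Odometric stepping: the fast rotor steps every time; each slower rotor
    # steps once per full revolution of the rotor before it.
    for i in range(5):
        positions[i] = (positions[i] + 1) % N
        if positions[i] != 0:
            break

BAUDOT = {'A': 0b00011, 'B': 0b11001, 'C': 0b01110}   # tiny sample of a 5-bit code table

def encrypt(text):
    out = []
    for ch in text:
        out.append(BAUDOT[ch] ^ key_bits())   # XOR plaintext code with the keystream
        step()
    return out

print(encrypt("ABC"))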
Introduction of the machine
The design for this machine, which was designated the Converter M-228, or SIGCUM, was given to the Teletype Corporation, who were also producing SIGABA. Rowlett recommended that the adoption of the machine be postponed until after a study of its cryptographic security, but SIGCUM was urgently needed by the Army, and the machine was put into production. Rowlett then proposed that the machine used in the Pentagon code room be monitored by connecting a page-printing "spy machine". The output could be then studied to establish whether the machine was resistant to attack. Rowlett's suggestion was implemented at the same time the first M-228 machines were installed at the Pentagon in January 1943, used for the Washington-Algiers link.
The machines worked as planned, and, initially, Rowlett's study of its security, joined by cryptanalyst Robert Ferner, uncovered no signs of cryptographic weakness. However, after a few days, a SIGCUM operator made a serious operating error, retransmitting the same message twice using the same machine settings, producing a depth.
From this, Rowlett was able to deduce the underlying plaintext and keystream used by the machine. By 2 a.m., an analysis of the keystream allowed him to deduce the wiring of the fast and medium rotors, and of the output wiring. SIGCUM was immediately withdrawn from service, and work on a replacement system, SIGTOT — a one-time tape machine designed by Leo Rosen — was given top priority.
Redesign
Meanwhile, the M-228 was redesigned to improve its security. Only five inputs, rather than thirteen, were energized. Instead of five output contacts being used as the five output bits directly, each output bit was connected by three leads to three different output points. That meant that an output bit could be energized by any of three different outputs from the rotor maze, making analysis of the machine more complex. The reduced number of inputs ensured that the generated key would not be biased.
The rotor stepping was also made more complex. The slowest two rotors, which originally were unlikely to step during the course of an encipherment, were redesigned so that they stepped depending on the output of the previous key output. One rotor, designated the "fast bump" rotor, would step if the fourth and fifth bits of the previous output were both true; and similarly the "slow bump" rotor would do the same for the first, second and third bits.
Certain of the rotor stepping arrangements were discovered to be weaker than others, and so these were ruled out for key lists.
This redesigned version of the M-228 was put into service by April 1943. However, the machine was judged to be only secure enough to handle traffic up to SECRET by landline, and CONFIDENTIAL by radio. The machine was also shared with the United Kingdom for joint communications.
A further-modified version of the M-228 that could be used for the highest level traffic, was designated M-228-M, or SIGHUAD.
From that point on, the Army monitored the communications of its high-level systems to ensure that good operational procedure was being followed, even for highly secure devices such as the SIGABA and SIGTOT devices. As a result, poor operator practices, such as transmitting messages in depth, were largely eliminated.
References
Stephen J. Kelley, "The SIGCUM Story: Cryptographic Failure, Cryptographic Success", in Cryptologia 21(4), October 1997, pp. 289–316.
External links
Converter M-228 or SIGCUM by John Savard
Rotor machines
Encryption devices | SIGCUM | Physics,Technology | 1,420 |
23,937,637 | https://en.wikipedia.org/wiki/Calvatia%20cyathiformis | Calvatia cyathiformis, or purple-spored puffball, is a large edible saprobic species of Calvatia. This terrestrial puffball has purplish or purple-brown spores, which distinguish it from other large Agaricales. It is found in North America and Australia, mostly in prairie or grassland environments.
Description
The fruiting body is high and/or broad. When young it is relatively smooth and spherical or slightly flattened and purplish or brownish. It has a chocolate-brown or purple-colored gleba with a smooth exoperidium. As it matures it often becomes pear or irregularly-shaped and the exterior skin takes on a dark or silvery colour. As it ages the exterior dries and cracks and the fleshy spore-bearing interior breaks away to be distributed by wind and rain. After the spores completely disperse, "a soft leathery cup-shaped sterile base lightly rooted to the ground remains".
According to MushroomExpert.Com, the spores are 3.5–7.5 μm in diameter, "round, spiny or warty to nearly smooth. Capillitial threads 3–7.5 μm wide; thick-walled; minutely pitted."
The spore mass turns from white to yellow to dull purple or purple-brown at maturity. It is said to be edible until the flesh begins to turn to a tan colour.
To make a meal from most mushrooms, you probably hope to find at least a half dozen to a dozen, depending on the size. The large Calvatia species are special, because one or two at the most will probably be sufficient for a dinner for two. While this puffball does not have a strong flavor of its own, it is still quite good, and its ability to absorb flavors makes it a rewarding find. Lycoperdon utriforme is a similar species.
Distribution
Calvatia cyathiformis is commonly found in grazing paddocks and grassed areas around the wet areas of Australia in the southwest of Western Australia, and from Adelaide in South Australia to Cooktown, on Cape York Peninsula, as well as in Darwin, Northern Territory.
Footnotes
References
Hall, Ian, et al. (2003). Edible and Poisonous Mushrooms of the World. Timber Press. .
External links
Calvatia cyathiformis at Mushroomobserver.org.
Agaricaceae
Edible fungi
Fungi of North America
Fungi of Australia
Fungi of Africa
cyathiformis
Fungus species | Calvatia cyathiformis | Biology | 515 |
1,265,269 | https://en.wikipedia.org/wiki/Xenobiology | Xenobiology (XB) is a subfield of synthetic biology, the study of synthesizing and manipulating biological devices and systems. The name "xenobiology" derives from the Greek word xenos, which means "stranger, alien". Xenobiology describes a form of biology that is not (yet) familiar to science and is not found in nature. In practice, it describes novel biological systems and biochemistries that differ from the canonical DNA–RNA-20 amino acid system (see central dogma of molecular biology). For example, instead of DNA or RNA, XB explores nucleic acid analogues, termed xeno nucleic acids (XNA), as information carriers. It also focuses on an expanded genetic code and the incorporation of non-proteinogenic amino acids, or “xeno amino acids”, into proteins.
Difference between xeno-, exo-, and astro-biology
"Astro" means "star" and "exo" means "outside". Both exo- and astrobiology deal with the search for naturally evolved life in the Universe, mostly on other planets in the circumstellar habitable zone. (These are also occasionally referred to as xenobiology.) Whereas astrobiologists are concerned with the detection and analysis of life elsewhere in the Universe, xenobiology attempts to design forms of life with a different biochemistry or different genetic code than on planet Earth.
Aims
Xenobiology has the potential to reveal fundamental knowledge about biology and the origin of life. In order to better understand the origin of life, it is necessary to know why life evolved seemingly via an early RNA world to the DNA-RNA-protein system and its nearly universal genetic code. Was it an evolutionary "accident" or were there constraints that ruled out other types of chemistries? By testing alternative biochemical "primordial soups", it is expected to better understand the principles that gave rise to life as we know it.
Xenobiology is an approach to developing industrial production systems with novel capabilities by means of biopolymer engineering and pathogen resistance. In all organisms, the genetic code encodes 20 canonical amino acids that are used for protein biosynthesis. In rare cases, special amino acids such as selenocysteine or pyrrolysine can be incorporated into proteins by the translational apparatus of some organisms. Together, these 20+2 amino acids are known as the 22 proteinogenic amino acids. By using additional amino acids from among the over 700 known to biochemistry, the capabilities of proteins may be altered to give rise to more efficient catalytic or material functions. The EC-funded project Metacode, for example, aims to incorporate metathesis (a useful catalytic function so far not known in living organisms) into bacterial cells. Another reason why XB could improve production processes lies in the possibility of reducing the risk of virus or bacteriophage contamination in cultivations, since XB cells would no longer provide suitable host cells, rendering them more resistant (an approach called semantic containment).
Xenobiology offers the option to design a "genetic firewall", a novel biocontainment system, which may help to strengthen and diversify current bio-containment approaches. One concern with traditional genetic engineering and biotechnology is horizontal gene transfer to the environment and possible risks to human health. One major idea in XB is to design alternative genetic codes and biochemistries so that horizontal gene transfer is no longer possible. Additionally alternative biochemistry also allows for new synthetic auxotrophies. The idea is to create an orthogonal biological system that would be incompatible with natural genetic systems.
Scientific approach
In xenobiology, the aim is to design and construct biological systems that differ from their natural counterparts on one or more fundamental levels. Ideally these new-to-nature organisms would be different in every possible biochemical aspect exhibiting a very different genetic code. The long-term goal is to construct a cell that would store its genetic information not in DNA but in an alternative informational polymer consisting of xeno nucleic acids (XNA), different base pairs, using non-canonical amino acids and an altered genetic code. So far cells have been constructed that incorporate only one or two of these features.
Xeno nucleic acids (XNA)
Originally this research on alternative forms of DNA was driven by the question of how life evolved on Earth and why RNA and DNA were selected by (chemical) evolution over other possible nucleic acid structures. Two hypotheses for the selection of RNA and DNA as life's backbone are that either they were favored under the conditions of life on Earth, or that they were coincidentally present in pre-life chemistry and continue to be used now. Systematic experimental studies aiming at the diversification of the chemical structure of nucleic acids have resulted in completely novel informational biopolymers. So far a number of XNAs with new chemical backbones or leaving groups of the DNA have been synthesized, e.g. hexose nucleic acid (HNA), threose nucleic acid (TNA), glycol nucleic acid (GNA), and cyclohexenyl nucleic acid (CeNA). The incorporation of XNA in a plasmid, involving 3 HNA codons, was accomplished as early as 2003. This XNA is used in vivo (E. coli) as a template for DNA synthesis. This study, using a binary (G/T) genetic cassette and two non-DNA bases (Hx/U), was extended to CeNA, while GNA seems to be too alien at this moment for the natural biological system to be used as a template for DNA synthesis. Extended bases using a natural DNA backbone could, likewise, be transliterated into natural DNA, although to a more limited extent.
Aside being used as extensions to template DNA strands, XNA activity has been tested for use as genetic catalysts. Although proteins are the most common components of cellular enzymatic activity, nucleic acids are also used in the cell to catalyze reactions. A 2015 study found several different kinds of XNA, most notably FANA (2'-fluoroarabino nucleic acids), as well as HNA, CeNA and ANA (arabino nucleic acids) could be used to cleave RNA during post-transcriptional RNA processing acting as XNA enzymes, hence the name XNAzymes. FANA XNAzymes also showed the ability to ligate DNA, RNA and XNA substrates. Although XNAzyme studies are still preliminary, this study was a step in the direction of searching for synthetic circuit components that are more efficient than those containing DNA and RNA counterparts that can regulate DNA, RNA, and their own, XNA, substrates.
Expanding the genetic alphabet
While XNAs have modified backbones, other experiments target the replacement or enlargement of the genetic alphabet of DNA with unnatural base pairs. For example, DNA has been designed that has – instead of the four standard bases A, T, G, and C – six bases A, T, G, C, and the two new ones P and Z (where Z stands for 6-amino-5-nitro-3-(1'-β-D-2'-deoxyribofuranosyl)-2(1H)-pyridone, and P stands for 2-amino-8-(1'-β-D-2'-deoxyribofuranosyl)imidazo[1,2-a]-1,3,5-triazin-4(8H)-one). In a systematic study, Leconte et al. tested the viability of 60 candidate bases (yielding potentially 3600 base pairs) for possible incorporation in the DNA.
In 2002, Hirao et al. developed an unnatural base pair between 2-amino-8-(2-thienyl)purine (s) and pyridine-2-one (y) that functions in vitro in transcription and translation, toward a genetic code for protein synthesis containing a non-standard amino acid. In 2006, they created 7-(2-thienyl)imidazo[4,5-b]pyridine (Ds) and pyrrole-2-carbaldehyde (Pa) as a third base pair for replication and transcription, and afterward, Ds and 4-[3-(6-aminohexanamido)-1-propynyl]-2-nitropyrrole (Px) were discovered as a high-fidelity pair in PCR amplification. In 2013, they applied the Ds-Px pair to DNA aptamer generation by in vitro selection (SELEX) and demonstrated that the genetic alphabet expansion significantly augments DNA aptamer affinities to target proteins.
In May 2014, researchers announced that they had successfully introduced two new artificial nucleotides into bacterial DNA, alongside the four naturally occurring nucleotides, and by including individual artificial nucleotides in the culture media, were able to passage the bacteria 24 times; they did not create mRNA or proteins able to use the artificial nucleotides.
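As a rough illustration of why adding letters to the genetic alphabet matters, the following Python sketch enumerates the codon space; it is purely illustrative, the six-letter alphabet simply follows the A, T, G, C, P, Z example above, and no codon-to-amino-acid assignments are implied.

from itertools import product

def codon_space(alphabet, codon_length=3):
    # Enumerate every possible codon over the given nucleotide alphabet.
    return ["".join(c) for c in product(alphabet, repeat=codon_length)]

natural = codon_space("ATGC")      # the standard four-letter alphabet
expanded = codon_space("ATGCPZ")   # four standard bases plus the unnatural P and Z

print(len(natural))    # 64 triplet codons
print(len(expanded))   # 216 triplet codons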
Novel polymerases
Neither the XNA nor the unnatural bases are recognized by natural polymerases. One of the major challenges is to find or create novel types of polymerases that will be able to replicate these new-to-nature constructs. In one case a modified variant of the HIV-reverse transcriptase was found to be able to PCR-amplify an oligonucleotide containing a third type base pair.
Pinheiro et al. (2012) demonstrated that the method of polymerase evolution and design successfully led to the storage and recovery of genetic information (of less than 100bp length) from six alternative genetic polymers based on simple nucleic acid architectures not found in nature, xeno nucleic acids.
Genetic code engineering
One of the goals of xenobiology is to rewrite the genetic code. The most promising approach to change the code is the reassignment of seldom used or even unused codons.
In an ideal scenario, the genetic code is expanded by one codon that has been liberated from its old function and fully reassigned to a non-canonical amino acid (ncAA) ("code expansion"). As these methods are laborious to implement, short cuts can be applied ("code engineering"), for example in bacteria that are auxotrophic for specific amino acids and at some point in the experiment are fed isostructural analogues instead of the canonical amino acids for which they are auxotrophic. In that situation, the canonical amino acid residues in native proteins are substituted with the ncAAs. Even the insertion of multiple different ncAAs into the same protein is possible. Finally, the repertoire of 20 canonical amino acids can not only be expanded, but also reduced to 19.
By reassigning transfer RNA (tRNA)/aminoacyl-tRNA synthetase pairs, the codon specificity can be changed. Cells endowed with such aminoacyl-tRNA synthetases are thus able to read mRNA sequences that make no sense to the existing gene expression machinery. Altering these codon–synthetase pairs may lead to the in vivo incorporation of non-canonical amino acids into proteins.
In the past, reassigning codons was mainly done on a limited scale. In 2013, however, Farren Isaacs and George Church at Harvard University reported the replacement of all 321 TAG stop codons present in the genome of E. coli with synonymous TAA codons, thereby demonstrating that massive substitutions can be combined into higher-order strains without lethal effects. Following the success of this genome-wide codon replacement, the authors continued and achieved the reprogramming of 13 codons throughout the genome, directly affecting 42 essential genes.
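A toy sketch may help make the idea of synonymous stop-codon replacement concrete. The snippet below simply rewrites in-frame TAG codons as TAA in a coding-sequence string; it only illustrates the concept – the actual genome-wide recoding was carried out with multiplexed genome-engineering techniques, not by editing text, and the function and sequence here are assumptions for illustration.

```python
def recode_stop_codons(cds, old="TAG", new="TAA"):
    """Replace every in-frame occurrence of one stop codon with a synonymous one.

    cds is assumed to be a coding sequence whose length is a multiple of three,
    read in frame from the first base.
    """
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    return "".join(new if codon == old else codon for codon in codons)

print(recode_stop_codons("ATGGCTTAG"))  # -> ATGGCTTAA
```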
An even more radical change in the genetic code is the change of a triplet codon to a quadruplet or even quintuplet codon, pioneered by Sisido in cell-free systems and by Schultz in bacteria. Finally, non-natural base pairs can be used to introduce novel amino acids into proteins.
Directed evolution
The goal of substituting DNA with XNA may also be reached by another route, namely by engineering the environment instead of the genetic modules. This approach has been successfully demonstrated by Marlière and Mutzel with the production of an E. coli strain whose DNA is composed of standard A, C and G nucleotides but has the synthetic thymine analogue 5-chlorouracil instead of thymine (T) in the corresponding positions of the sequence. These cells are then dependent on externally supplied 5-chlorouracil for growth, but otherwise they look and behave like normal E. coli. These cells, however, are not yet fully auxotrophic for the xeno-base, since they still grow on thymine when it is supplied to the medium.
Biosafety
Xenobiological systems are designed to be orthogonal to natural biological systems. A (still hypothetical) organism that uses XNA, different base pairs and polymerases and has an altered genetic code will hardly be able to interact with natural forms of life on the genetic level. Thus, these xenobiological organisms represent a genetic enclave that cannot exchange information with natural cells. Altering the genetic machinery of the cell leads to semantic containment. In analogy to information processing in IT, this safety concept is termed a “genetic firewall”. The concept of the genetic firewall seems to overcome a number of limitations of previous safety systems. The first experimental evidence for the theoretical concept of the genetic firewall was achieved in 2013 with the construction of a genomically recoded organism (GRO). In this GRO, all known UAG stop codons in E. coli were replaced by UAA codons, which allowed for the deletion of release factor 1 and the reassignment of UAG translation function. The GRO exhibited increased resistance to T7 bacteriophage, showing that alternative genetic codes do reduce genetic compatibility. This GRO, however, is still very similar to its natural “parent” and cannot be regarded as having a genetic firewall. The possibility of reassigning the function of a large number of triplets opens the prospect of strains that combine XNA, novel base pairs, new genetic codes, etc., and that cannot exchange any information with the natural biological world.
Regardless of changes leading to a semantic containment mechanism in new organisms, any novel biochemical system still has to undergo toxicological screening. XNA, novel proteins, etc. might represent novel toxins or have allergenic potential, which needs to be assessed.
Governance and regulatory issues
Xenobiology might challenge the regulatory framework, as current laws and directives deal with genetically modified organisms and do not directly mention chemically or genomically modified organisms. Given that real xenobiological organisms are not expected in the next few years, policy makers have some time to prepare for this upcoming governance challenge. Since 2012, the following groups have picked up the topic as a developing governance issue: policy advisers in the US, four national biosafety boards in Europe, the European Molecular Biology Organisation, and the European Commission's Scientific Committee on Emerging and Newly Identified Health Risks (SCENIHR), the latter in three opinions (on definitions; on risk assessment methodologies and safety aspects; and on risks to the environment and biodiversity and research priorities in the field of synthetic biology).
See also
Auxotrophy
Biological dark matter
Body plan
Directed evolution
Expanded genetic code
Foldamer
Hachimoji DNA
Hypothetical types of biochemistry
Life definitions
Nucleic acid analogue
Purple Earth hypothesis
RNA world
Shadow biosphere
References
External links
XB1: The First Conference on Xenobiology May 6–8, 2014. Genoa, Italy.
XB2: The Second Conference on Xenobiology May 24–26, 2016. Berlin, Germany.
Bioinformatics
Biotechnology
Synthetic biology | Xenobiology | Engineering,Biology | 3,301 |
47,770,094 | https://en.wikipedia.org/wiki/All%20of%20Us%20%28initiative%29 | The All of Us Research Program (previously known as the Precision Medicine Initiative Cohort Program) is a research program created in 2015 during the tenure of Barack Obama with $130 million in funding that aims to make advances in tailoring medical care to the individual. The mission of All of Us is to accelerate health and medical breakthroughs, enabling individualized prevention, treatment and care.
The project aims to collect genetic and health data from one million volunteers. The initiative was announced during the 2015 State of the Union Address, and is run by the National Institutes of Health (NIH). The program is bilingual, with information and materials available in Spanish and English.
Who can enroll
Eligible adults (18 and over) can enroll in the program. People who are not eligible include those in prison and people who cannot consent on their own.
According to a sample consent form released in June 2018, participation in All of Us is voluntary and does not affect a participant's medical care. The form explains that if a participant quits the program, their samples will be destroyed. Children may also be able to enroll in the program.
By January 2018, an initial pilot project had enrolled about 10,000 people, and 2022 was targeted for reaching one million. As of May 2019, at the one-year anniversary of the launch, enrollment stood at more than 187,000 participants, of whom more than 132,000 had already given biosamples.
The NIH reported in May 2018 that they were pleased with the high enrollment by underrepresented groups including communities of color and individuals with lower incomes. Up to three-quarters of beta phase participants came from those communities.
Program partners
All of Us has more than 100 partners and champions working together to implement and support the mission and goals of the research program. One partner is Verily Life Sciences, a Google life-sciences "moonshot" startup with the goal of "transform[ing] the way we detect, prevent, and manage disease".
The initiative was identified by a 2019 review as involving the public in every stage of the research.
Program budget
The All of Us Research Program budget has increased every year since it launched: FY2016 - $130 million; FY2017 - $230 million; and FY2018 - $290 million.
Responses to the Initiative
Professor Kenneth Weiss from Pennsylvania State University, in a skeptical review of this project in 2017, suggested that the funding could be better spent elsewhere.
The project faced backlash in 2024 over its use of UMAP plots to depict ancestry, rather than principal component analysis.
Program progress
Enrollment
The research program was launched for national enrollment on May 6, 2018. In the summer of 2019, one year after its official launch, All of Us had enrolled 230,000 participants, almost one quarter of the program's goal of 1,000,000 individuals. Approximately 80% of those people are from groups that have been traditionally underrepresented in biomedical research. One of All of Us's main goals is to include many people from diverse ancestries. By June 2020, enrollment had reached approximately 350,000 individuals.
All of Us Researcher Workbench
On May 27, 2020, the All of Us research program announced the launch of their research platform, the All of Us Researcher Workbench, for beta testing. Select data collected by the initiative, including electronic health records and survey responses from the first 225,000 program participants, will be available to approved researchers through the workbench. Researchers may apply for access to the data if they have an NIH eRA Commons account (for identity verification) and are affiliated with an institution that has signed a data use agreement with All of Us.
Response to COVID-19 pandemic
In June 2020, the NIH announced that research materials collected as part of the All of Us initiative will be used to address the COVID-19 pandemic. Blood samples collected from recent volunteers will be tested for SARS-CoV-2 antibodies in order to track prior infections within the US population. Electronic health records shared by All of Us participants will also be evaluated for potential patterns associated with SARS-CoV-2 infection. All of Us also added monthly participant surveys with questions about the physical, mental, and socioeconomic impacts of the COVID-19 pandemic.
Administration
The founding program director was Eric Dishman, who stepped down to become the Chief Innovation Officer. In 2019, Joshua Denny was selected to be the second director. In October 2016, the project was renamed "All of Us".
See also
100,000 Genomes Project (UK)
21st Century Cures Act
Baseline Study
Precision medicine
precisionFDA
References
External links
Join All of Us homepage to join the All of Us research program
All of Us homepage at the National Institutes of Health
NIH Innovation account on USAspending.gov
American medical research
Supercomputing
Biological databases
Biobank organizations
Epidemiology | All of Us (initiative) | Technology,Environmental_science | 992 |
88,340 | https://en.wikipedia.org/wiki/Galvanism | Galvanism is a term invented by the late 18th-century physicist and chemist Alessandro Volta to refer to the generation of electric current by chemical action. The term also came to refer to the discoveries of its namesake, Luigi Galvani, specifically the generation of electric current within biological organisms and the contraction/convulsion of biological muscle tissue upon contact with electric current. While Volta theorized and later demonstrated the phenomenon of his "Galvanism" to be replicable with otherwise inert materials, Galvani thought his discovery to be a confirmation of the existence of "animal electricity," a vital force which gave life to organic matter.
History
Johann Georg Sulzer
Galvanic phenomena were described in the literature before it was understood that they were of an electrical nature. In 1752, when the Swiss mathematician and physicist Johann Georg Sulzer placed his tongue between a piece of lead and a piece of silver, joined at their edges, he perceived a taste similar to that of iron(II) sulfate. Neither of the metals alone produced this taste. He realized that the contact between the metals probably did not produce a solution of either on the tongue. He did not, however, realize that this was an electrical phenomenon. He concluded that the contact between the metals caused their particles to vibrate, producing this taste by stimulating the nerves of the tongue.
Luigi Galvani
According to popular legend, Galvani discovered the effects of electricity on muscle tissue in the 1780s and 1790s while investigating an unrelated phenomenon that required skinned frogs. His assistant is claimed to have accidentally touched a scalpel to the sciatic nerve of a frog, which resulted in a spark and animation of its legs. This built on the theories of Giovanni Battista Beccaria, Felice Fontana, Leopoldo Marco Antonio Caldani, and others. Galvani was investigating the effects of distant atmospheric electricity (lightning) on prepared frog legs when he discovered that the legs convulsed not only when lightning struck but also when he pressed the brass hooks attached to the frog's spinal cord against the iron railing they were suspended from. In his laboratory, Galvani later discovered that he could replicate this phenomenon by touching brass electrodes connected to the frog's spinal cord to an iron plate. He concluded that this was proof of "animal electricity," the electric power which animated living things.
Alessandro Volta
Alessandro Volta, a contemporary physicist, believed that the effect was explicable not by any vital force but rather it was the presence of two different metals that was generating the electricity. Volta demonstrated his theory by creating the first chemical electric battery. Despite their differences in opinion, Volta named the phenomenon of the chemical generation of electricity "Galvanism" after Galvani.
Galvani publishes his work
On March 27, 1791, Galvani published a book about his work on animal electricity. It contained comprehensive details of his 11 years of research and experimentation on the topic.
The 1797 edition of Gren’s Grundriss der Naturlehre provides the first explicit definition of 'galvanism' as clearly reflecting Volta’s opinion in the following terms:
Galvani from Bologna was the first to observe muscular motions elicited by the contact between two different metals; after him, the phenomena of this sort were termed and included under the name of Galvanism.
Giovanni Aldini
Giovanni Aldini, Galvani's nephew, continued his uncle's work after Luigi Galvani died in 1798. In 1803, Aldini performed a famous public demonstration of the electro-stimulation technique on the limbs of the corpse of the executed criminal George Foster at Newgate in London. The Newgate Calendar describes what happened when the galvanic process was used on the body:
Galvani has been called the father of electrophysiology. The debate between Galvani and Volta "would result in the creation of electrophysiology, electromagnetism, electrochemistry and the electric battery."
Scientific and intellectual legacy
Literature
Mary Shelley's Frankenstein, wherein a man stitches together a human body from corpses and brings it to life, was inspired in part by the theory and demonstrations of Galvanism which may have been conducted by James Lind. Although the Creature was described in later works as a composite of whole body parts grafted together from cadavers and reanimated by the use of electricity, this description is not consistent with Shelley's work; both the use of electricity and the cobbled-together image of Frankenstein's monster were more the result of James Whale's popular 1931 film adaptation of the story.
Abiogenesis
Galvanism influenced metaphysical thought in the domain of abiogenesis, the underlying process of the generation of living forms. In 1836, Andrew Crosse recorded what he referred to as "the perfect insect, standing erect on a few bristles which formed its tail," as having appeared during an experiment wherein he used electricity to produce mineral crystals. While Crosse himself never claimed to have generated the insects, even in private, the scientific world at the time viewed the connection between life and electricity to be sufficiently clear that he received threats against his life for this "blasphemy."
Medicine
Giovanni Aldini is claimed to have applied Galvanic principles (the application of electricity to biological organisms) in alleviating the symptoms of "several cases of insanity", and with "complete success". Today, electroconvulsive therapy is used as a treatment option for severely depressed pregnant mothers (as it is the least harmful for the developing fetus) and for people with treatment-resistant major depressive disorder. It is found to be effective for about half of those who receive treatment, and half of those who respond may relapse within 12 months.
The modern application of electricity to the human body for medical diagnostics and treatments is practiced under the term electrophysiology. This includes the monitoring of the electric activity of the heart, muscles, and even the brain, respectively termed electrocardiography, electromyography, and electrocorticography.
See also
Action potential
Bioelectromagnetics
Electrochemistry
Electrohomeopathy
Electrotherapy
Electrotherapy (cosmetic)
Hallerian physiology, for a counter-theory to Galvanism
References
External links
The history of galvanism
Electrochemistry
Muscular system | Galvanism | Chemistry | 1,267 |
14,878 | https://en.wikipedia.org/wiki/International%20Astronomical%20Union | The International Astronomical Union (IAU; , UAI) is an international non-governmental organization (INGO) with the objective of advancing astronomy in all aspects, including promoting astronomical research, outreach, education, and development through global cooperation. It was founded on 28 July 1919 in Brussels, Belgium and is based in Paris, France.
The IAU is composed of individual members, who include both professional astronomers and junior scientists, and national members, such as professional associations, national societies, or academic institutions. Individual members are organised into divisions, committees, and working groups centered on particular subdisciplines, subjects, or initiatives. The Union had 85 national members and 12,734 individual members, spanning 90 countries and territories.
Among the key activities of the IAU is serving as a forum for scientific conferences. It sponsors nine annual symposia and holds a triennial General Assembly that sets policy and includes various scientific meetings. The Union is best known for being the leading authority in assigning official names and designations to astronomical objects, and for setting uniform definitions for astronomical principles. It also coordinates with national and international partners, such as UNESCO, to fulfill its mission.
The IAU is a member of the International Science Council, which is composed of international scholarly and scientific institutions and national academies of sciences.
Function
The International Astronomical Union is an international association of professional astronomers, at the PhD level and beyond, active in professional research and education in astronomy. Among other activities, it acts as the recognized authority for assigning designations and names to celestial bodies (stars, planets, asteroids, etc.) and any surface features on them.
The IAU is a member of the International Science Council. Its main objective is to promote and safeguard the science of astronomy in all its aspects through international cooperation. The IAU maintains friendly relations with organizations that include amateur astronomers in their membership. The IAU has its head office on the second floor of the in the 14th arrondissement of Paris.
This organisation has many working groups. For example, the Working Group for Planetary System Nomenclature (WGPSN), which maintains the astronomical naming conventions and planetary nomenclature for planetary bodies, and the Working Group on Star Names (WGSN), which catalogues and standardizes proper names for stars. The IAU is also responsible for the system of astronomical telegrams which are produced and distributed on its behalf by the Central Bureau for Astronomical Telegrams. The Minor Planet Center also operates under the IAU, and is a "clearinghouse" for all non-planetary or non-moon bodies in the Solar System.
History
The IAU was founded on 28 July 1919, at the Constitutive Assembly of the International Research Council (now the International Science Council) held in Brussels, Belgium. Two subsidiaries of the IAU were also created at this assembly: the International Time Commission seated at the International Time Bureau in Paris, France, and the International Central Bureau of Astronomical Telegrams initially seated in Copenhagen, Denmark.
The seven initial member states were Belgium, Canada, France, Great Britain, Greece, Japan, and the United States, soon to be followed by Italy and Mexico. The first executive committee consisted of Benjamin Baillaud (President, France), Alfred Fowler (General Secretary, UK), and four vice presidents: William Campbell (US), Frank Dyson (UK), Georges Lecointe (Belgium), and Annibale Riccò (Italy). Thirty-two Commissions (referred to initially as Standing Committees) were appointed at the Brussels meeting and focused on topics ranging from relativity to minor planets. The reports of these 32 Commissions formed the main substance of the first General Assembly, which took place in Rome, Italy, 2–10 May 1922.
By the end of the first General Assembly, ten additional nations (Australia, Brazil, Czechoslovakia, Denmark, the Netherlands, Norway, Poland, Romania, South Africa, and Spain) had joined the Union, bringing the total membership to 19 countries. Although the Union was officially formed eight months after the end of World War I, international collaboration in astronomy had been strong in the pre-war era (e.g., the Astronomische Gesellschaft Katalog projects since 1868, the Astrographic Catalogue since 1887, and the International Union for Solar research since 1904).
The first 50 years of the Union's history are well documented. Subsequent history is recorded in the form of reminiscences of past IAU Presidents and General Secretaries. Twelve of the fourteen past General Secretaries in the period 1964–2006 contributed their recollections of the Union's history in IAU Information Bulletin No. 100. Six past IAU Presidents in the period 1976–2003 also contributed their recollections in IAU Information Bulletin No. 104.
In 2015 and 2019, the Union held the NameExoWorlds contests.
Starting in 2024, the Union, in partnership with the United Nations, is poised to play a critical role in developing the legislation and framework for lunar industrialization.
Composition
As of 1 August 2019, the IAU has a total of 13,701 individual members, who are professional astronomers from 102 countries worldwide; 81.7% of individual members are male, while 18.3% are female.
Membership also includes 82 national members, professional astronomical communities representing their country's affiliation with the IAU. National members include the Australian Academy of Science, the Chinese Astronomical Society, the French Academy of Sciences, the Indian National Science Academy, the National Academies (United States), the National Research Foundation of South Africa, the National Scientific and Technical Research Council (Argentina), the Council of German Observatories, the Royal Astronomical Society (United Kingdom), the Royal Astronomical Society of New Zealand, the Royal Swedish Academy of Sciences, the Russian Academy of Sciences, and the Science Council of Japan, among many others.
The sovereign body of the IAU is its General Assembly, which comprises all members. The Assembly determines IAU policy, approves the Statutes and By-Laws of the Union (and amendments proposed thereto) and elects various committees.
The right to vote on matters brought before the Assembly varies according to the type of business under discussion. The Statutes consider such business to be divided into two categories:
issues of a "primarily scientific nature" (as determined by the Executive Committee), upon which voting is restricted to individual members, and
all other matters (such as Statute revision and procedural questions), upon which voting is restricted to the representatives of national members.
On budget matters (which fall into the second category), votes are weighted according to the relative subscription levels of the national members. A second category vote requires a turnout of at least two-thirds of national members to be valid. An absolute majority is sufficient for approval in any vote, except for Statute revision which requires a two-thirds majority. An equality of votes is resolved by the vote of the President of the Union.
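The validity and outcome rules above can be condensed into a short sketch. The following Python function is purely illustrative – the function name, data shapes and weighting scheme are assumptions, not the IAU's actual procedure or any software it uses.

```python
def second_category_outcome(votes, weights, n_national_members,
                            statute_revision=False, president_vote=None):
    """votes: dict mapping national member -> 'yes' or 'no'.
    weights: dict mapping national member -> voting weight (e.g. subscription level
    on budget matters; use 1 for every member when votes are unweighted)."""
    if len(votes) < (2 / 3) * n_national_members:      # turnout of at least two-thirds required
        return "invalid (turnout below two-thirds)"
    yes = sum(weights.get(m, 1) for m, v in votes.items() if v == "yes")
    no = sum(weights.get(m, 1) for m, v in votes.items() if v == "no")
    if statute_revision:                               # Statute revision needs a two-thirds majority
        return "approved" if yes >= 2 * no else "rejected"
    if yes == no:                                      # a tie is resolved by the President's vote
        return president_vote or "tied"
    return "approved" if yes > no else "rejected"      # otherwise an absolute majority suffices
```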
List of national members
Africa
Asia
Europe
North America
Oceania
South America
Terminated national members
General Assemblies
Since 1922, the IAU General Assembly meets every three years, except for the period between 1938 and 1948, due to World War II.
After a Polish request in 1967, and by a controversial decision of the then President of the IAU, an Extraordinary IAU General Assembly was held in September 1973 in Warsaw, Poland, to commemorate the 500th anniversary of the birth of Nicolaus Copernicus, soon after the regular 1973 GA had been held in Sydney.
List of the presidents of the IAU
Commission 46: Education in astronomy
Commission 46 is a committee of the Executive Committee of the IAU, playing a special role in the discussion of astronomy development with governments and scientific academies. The IAU is affiliated with the International Council of Scientific Unions (ICSU), a non-governmental organization representing a global membership that includes both national scientific bodies and international scientific unions; together they often encourage countries to become members of the IAU. The Commission further seeks to develop, share information on, and improve astronomical education. Part of Commission 46 is the Teaching Astronomy for Development (TAD) program, aimed at countries where there is currently very little astronomical education. Another program, the Galileo Teacher Training Program (GTTP), is a project of the International Year of Astronomy 2009, which, together with Hands-On Universe, concentrates resources on education activities for children and schools designed to advance sustainable global development. GTTP is also concerned with the effective use and transfer of astronomy education tools and resources into classroom science curricula. A strategic plan for the period 2010–2020 has been published.
Publications
In 2004 the IAU contracted with the Cambridge University Press to publish the Proceedings of the International Astronomical Union.
In 2007, the Communicating Astronomy with the Public Journal Working Group prepared a study assessing the feasibility of the Communicating Astronomy with the Public Journal (CAP Journal).
See also
List of astronomy acronyms
Astronomical naming conventions
List of proper names of stars
Planetary nomenclature
References
Statutes of the IAU, VII General Assembly (1948), pp. 13–15
External links
XXVIth General Assembly 2006
XXVIIth General Assembly 2009
XXVIIIth General Assembly 2012
XXIXth General Assembly 2015
XXXth General Assembly 2018
XXXIst General Assembly 2022
XXXIInd General Assembly 2024
Astronomy organizations
International organizations based in France
International professional associations
Members of the International Council for Science
Organizations based in Paris
Scientific organizations based in France
Scientific organizations established in 1919
1919 establishments in France
Standards organizations in France
International scientific organizations
International scientific organizations based in Europe
Members of the International Science Council | International Astronomical Union | Astronomy | 1,967 |
20,575,238 | https://en.wikipedia.org/wiki/Synthetic%20rescue | Synthetic rescue (or synthetic recovery, or synthetic viability when a lethal phenotype is rescued) refers to a genetic interaction in which a cell that is nonviable, sensitive to a specific drug, or otherwise impaired due to the presence of a genetic mutation becomes viable when the original mutation is combined with a second mutation in a different gene. The second mutation can either be a loss-of-function mutation (equivalent to a knockout) or a gain-of-function mutation.
Synthetic rescue could potentially be exploited for gene therapy, but it also provides information on the function of the genes involved in the interaction.
Types of genetic suppression
Dosage-mediated suppression
Dosage-mediated suppression occurs when the suppression of the mutant phenotype is mediated by the overexpression of a second suppressor gene. This can occur when the initial mutation destabilizes a protein–protein interaction and overexpression of the interacting protein bypasses the negative effect of the initial mutation.
Interaction-mediated suppression
Interaction-mediated suppression occurs when a deleterious mutation in a component of a protein complex destabilizes the complex. A compensatory mutation in another component of the protein complex can then suppress the deleterious phenotype by re-establishing the interaction between the two proteins. It usually means that the deleterious mutation and the suppressive mutation occur in two residues that are closely located in the three-dimensional structure of the multi-protein complex. Thus, this kind of suppression provides indirect information on the molecular structure of the proteins involved.
Experimental observation of theoretical prediction
The strongest form of synthetic rescue, in which the deleterious impact of a gene knockout is mitigated by an additional genetic perturbation that is also deleterious when considered in isolation, was modeled and predicted theoretically for gene interactions mediated by the metabolic network.
This strong form of synthetic rescue has recently been observed experimentally in both Saccharomyces cerevisiae and Escherichia coli.
Patient survival analysis was also shown to predict synthetic rescues and other types of interactions.
tRNA-mediated suppression
Genetic suppression can be mediated by tRNA genes when a mutation alters their anticodon sequence. For example, a tRNA designated for the recognition of the codon TCA and the corresponding insertion of serine in the growing polypeptide chain can mutate so that it recognizes a TAA stop codon and promotes the insertion of serine instead of the termination of the polypeptide chain. This could be particularly useful when a nonsense mutation (TCA → TAA) prevents the expression of a gene by either leading to a partially completed polypeptide or degradation of the mRNA by nonsense-mediated decay. The redundancy of tRNA genes ensures that such a mutation does not prevent the normal insertion of serines when the TCA codon specifies them.
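A toy translation sketch can make this mechanism concrete. The codon-table fragment, sequences and function below are illustrative assumptions only; suppression is modeled simply as a mapping that tells the "ribosome" to read a given stop codon as an amino acid.

```python
# Minimal fragment of the standard genetic code, written as DNA codons
CODON_TO_AA = {"ATG": "M", "TCA": "S", "GGC": "G", "TAA": "*", "TAG": "*", "TGA": "*"}

def translate(seq, suppressors=None):
    """Translate an in-frame DNA sequence; suppressors maps a stop codon to the
    amino acid inserted by a mutant (suppressor) tRNA."""
    suppressors = suppressors or {}
    protein = []
    for i in range(0, len(seq) - 2, 3):
        codon = seq[i:i + 3]
        aa = suppressors.get(codon, CODON_TO_AA.get(codon, "X"))
        if aa == "*":            # termination at an unsuppressed stop codon
            break
        protein.append(aa)
    return "".join(protein)

wild_type = "ATGTCAGGCTGA"   # Met-Ser-Gly, then stop
mutant = "ATGTAAGGCTGA"      # nonsense mutation TCA -> TAA truncates after Met
print(translate(wild_type))                           # MSG
print(translate(mutant))                              # M
print(translate(mutant, suppressors={"TAA": "S"}))    # MSG - the suppressor tRNA reads TAA as Ser
```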
See also
Complex networks
Gene therapy
Suppressor mutation
Synthetic lethality
References
Genetics
Gene therapy | Synthetic rescue | Engineering,Biology | 596 |
5,888,617 | https://en.wikipedia.org/wiki/SimGear | SimGear is a group of libraries which provide capabilities useful for building simulations, visualizations, and even games.
SimGear is a relatively new project, and while quite a bit of code has been written in conjunction with the FlightGear project, the final interface and arrangements are still evolving.
All the SimGear code is designed to be portable across a wide variety of platforms and compilers. It was originally developed in support of the FlightGear project, but as development moved forward, it has become useful for other types of applications as well.
SimGear is free software, licensed under the terms of the GNU LGPL.
External links
official website
Documentation
Computer libraries
Free simulation software
Video game development software | SimGear | Technology | 149 |
70,889,655 | https://en.wikipedia.org/wiki/Magic%20Kombat | Magic Kombat is a 1995 Philippine sci-fi fantasy comedy film written and directed by Junn Cabreira. The film stars Smokey Manaloto and Eric Fructuoso, and centers on Mario (Manaloto) and Luigi (Fructuoso) as they are accidentally transported into a video game world and are forced to fight their way out of it. Many of the film's scenarios, sound effects and characters – including those of Mario and Luigi – were unauthorized parodies of Super Mario Bros., Street Fighter II and other video games popular in the Philippines during the 80s and 90s.
It was one of the entries in the 1995 Metro Manila Film Festival.
Plot
Working students Mario (Manaloto) and Luigi (Fructuoso) become unemployed after an incident at a mall and struggle to make ends meet until they find work as janitors and technicians at a school, where they get a chance to study.
When a video game character named Rio suddenly materializes into the real world during a gaming session by Mario on a stormy night, the Mario brothers, along with their friend Diana, set out to get Rio back to her home world, but things get complicated when Mario and Luigi are sucked into the video game realm instead of Rio. The two brothers are then forced to fight their way through each of the game's levels and are later aided by Rio, who made her way back to her home realm. After a final encounter with supernatural creatures in a cave, Mario and Luigi use the gems they retrieved from their previous encounters, unlocking the door which leads them back to the real world.
Back in their old job as janitors, Mario and Luigi chance upon a student who bears a striking resemblance to Rio.
Cast
Smokey Manaloto as Mario
Eric Fructuoso as Luigi
Dandin Ranillo as Janitor
Beth Tamayo as Diana
Joanne Pascual as Rio
Sharmaine Suarez as Blanka
Ernie Ortega as Samurai Man
Aga Fazon as Gorilla
Jan Cassie Espolong as Goko
Cita Astals as School Dean
Jaime Fabregas as Asst. Dean
Francis Enriquez as Student
Nonong de Andres as Albularyo
Solita Carreon as Recruiter
Cris Daluz as Uncle Teong
Awards
Notes
References
External links
1995 films
1995 fantasy films
1990s parody films
1990s science fiction adventure films
Filipino-language films
Philippine fantasy comedy films
Philippine parody films
Philippine science fiction comedy films
Films about artificial intelligence
Films about computing
Films about video games
Metafictional works
Moviestars Production films
Parodies of video games
Unofficial works based on Mario
Works about janitors
Works based on Street Fighter
Works about Nintendo
1995 science fiction films | Magic Kombat | Technology | 537 |
1,186,676 | https://en.wikipedia.org/wiki/Kaminsky%20catalyst | A Kaminsky catalyst is a catalytic system for alkene polymerization. Kaminsky catalysts are based on metallocenes of group 4 transition metals (Ti, Zr, Hf) activated with methylaluminoxane (MAO). These and other innovations have inspired development of new classes of catalysts that in turn led to commercialization of novel engineering polyolefins.
Catalyst development
The catalyst is named after German chemist Walter Kaminsky, who first described it in 1980 along with Hansjörg Sinn and others. Prior to Kaminsky's work, titanium chlorides supported on various materials were widely used (and still are) as heterogeneous catalysts for alkene polymerization. These halides are typically activated by treatment with trimethylaluminium. Kaminsky discovered that titanocene and related complexes emulated some aspects of these Ziegler–Natta catalysts but with low activity. He subsequently found that high activity could be achieved upon activation of these metallocenes with methylaluminoxane (MAO). The MAO serves two roles: (i) alkylation of the metallocene halide and (ii) abstraction of an anionic ligand (chloride or methyl) to give an electrophilic catalyst with a labile coordination site.
Ligand design
Kaminsky's discovery of well-defined, high-activity homogeneous catalysts led to many innovations in the design of novel cyclopentadienyl ligands. These innovations include ansa-metallocenes, Cs-symmetric fluorenyl–Cp ligands, and constrained geometry catalysts. Some Kaminsky-inspired catalysts use chiral metallocenes that have bridged cyclopentadienyl rings. These innovations made possible highly stereoselective (or stereoregular) polymerization of α-olefins, some of which have been commercialized.
References
Homogeneous catalysis
Organometallic chemistry
Polymer chemistry
Catalysts | Kaminsky catalyst | Chemistry,Materials_science,Engineering | 404 |
9,206,525 | https://en.wikipedia.org/wiki/Hydron%20%28chemistry%29 | In chemistry, the hydron, informally called proton, is the cationic form of atomic hydrogen, represented with the symbol H⁺. The general term "hydron", endorsed by IUPAC, encompasses cations of hydrogen regardless of isotope: thus it refers collectively to protons (¹H⁺) for the protium isotope, deuterons (²H⁺ or D⁺) for the deuterium isotope, and tritons (³H⁺ or T⁺) for the tritium isotope.
Unlike most other ions, the hydron consists only of a bare atomic nucleus. The negatively charged counterpart of the hydron is the hydride anion, H⁻.
Properties
Solute properties
Other things being equal, compounds that readily donate hydrons (Brønsted acids, see below) are generally polar, hydrophilic solutes and are often soluble in solvents with high relative static permittivity (dielectric constants). Examples include organic acids like acetic acid (CH₃COOH) or methanesulfonic acid (CH₃SO₃H). However, large nonpolar portions of the molecule may attenuate these properties. Thus, as a result of its alkyl chain, octanoic acid (CH₃(CH₂)₆COOH) is considerably less hydrophilic than acetic acid.
The unsolvated hydron (a completely free or "naked" hydrogen atomic nucleus) does not exist in the condensed (liquid or solid) phase. As the surface electric field strength is inversely proportional to the radius, a tiny bare nucleus interacts thousands of times more strongly with nearby electrons than any partly ionized atom does.
Although superacids are sometimes said to owe their extraordinary hydron-donating power to the presence of "free hydrons", such a statement is misleading: even for a source of "free hydrons" like H₂F⁺, one of the superacidic cations present in the superacid fluoroantimonic acid (HF:SbF₅), detachment of a free H⁺ still comes at an enormous energetic penalty on the order of several hundred kcal/mol. This effectively rules out the possibility of the free hydron being present in solution. For this reason, in liquid strong acids, hydrons are believed to diffuse by sequential transfer from one molecule to the next along a network of hydrogen bonds through what is known as the Grotthuss mechanism.
Acidity
The hydron ion can incorporate an electron pair from a Lewis base into the molecule by adduction:
H⁺ + :L → HL⁺
Because of this capture of the Lewis base (L), the hydron ion has Lewis acidic character. In terms of Hard/Soft Acid Base (HSAB) theory, the bare hydron is an infinitely hard Lewis acid.
The hydron plays a central role in Brønsted–Lowry acid–base theory: a species that behaves as a hydron donor in a reaction is known as the Brønsted acid, while the species accepting the hydron is known as the Brønsted base. In the generic acid–base reaction shown below, HA is the acid, while B (shown with a lone pair) is the base:
HA + :B → HB⁺ + :A⁻
The hydrated form of the hydrogen cation, the hydronium (hydroxonium) ion H₃O⁺(aq), is a key object of Arrhenius' definition of acid. Other hydrated forms, the Zundel cation H₅O₂⁺, which is formed from a proton and two water molecules, and the Eigen cation H₉O₄⁺, which is formed from a hydronium ion and three water molecules, are theorized to play an important role in the diffusion of protons through an aqueous solution according to the Grotthuss mechanism. Although the ion H₃O⁺(aq) is often shown in introductory textbooks to emphasize that the hydron is never present as an unsolvated species in aqueous solution, it is somewhat misleading, as it oversimplifies the infamously complex speciation of the solvated proton in water; the notation H⁺(aq) is often preferred, since it conveys aqueous solvation while remaining noncommittal with respect to the number of water molecules involved.
Isotopes of hydron
Proton, having the symbol p or ¹H⁺, is the +1 ion of protium, ¹H.
Deuteron, having the symbol ²H⁺ or D⁺, is the +1 ion of deuterium, ²H or D.
Triton, having the symbol ³H⁺ or T⁺, is the +1 ion of tritium, ³H or T.
Other isotopes of hydrogen are too unstable to be relevant in chemistry.
History of the term
The term "hydron" is recommended by IUPAC to be used instead of "proton" if no distinction is made between the isotopes proton, deuteron and triton, all found in naturally occurring isotope mixtures. The name "proton" refers to the isotopically pure ¹H⁺.
On the other hand, calling the hydron simply hydrogen ion is not recommended because hydrogen anions also exist.
The term "hydron" was defined by IUPAC in 1988.
Traditionally, the term "proton" was and is used in place of "hydron".
The latter term is generally only used in contexts where comparisons between the various isotopes of hydrogen are important (as in the kinetic isotope effect or hydrogen isotopic labeling). Otherwise, referring to hydrons as protons is still considered acceptable, for example in such terms as protonation, deprotonation, proton pump, or proton channel. The transfer of H⁺ in an acid–base reaction is usually referred to as proton transfer. Acids and bases are referred to as proton donors and acceptors, respectively.
99.9844% of natural hydrons (hydrogen nuclei) are protons, and the remainder (about 156 per million in sea water) are deuterons (see deuterium), except for some very rare natural tritons (see tritium).
See also
Deprotonation
Dihydrogen cation
Hydrogen ion cluster
Solvated electron
Superacid
Trihydrogen cation
References
Cations
Hydrogen
Proton
Deuterium
Tritium | Hydron (chemistry) | Physics,Chemistry | 1,255 |
36,467,007 | https://en.wikipedia.org/wiki/Peziza%20domiciliana | Peziza domiciliana, commonly known as the domicile cup fungus, is a species of fungus in the genus Peziza, family Pezizaceae. Described by English mycologist Mordecai Cubitt Cooke, the fungus grows on rotten wood, drywall/plasterboard, and plaster in homes, damp cellars, and basements. It is known from Asia, Europe, North America, and Antarctica.
Taxonomy and phylogeny
The fungus was first described in 1877 by the British botanist Mordecai Cubitt Cooke, based on specimens sent to him that had been found growing on the walls, ceilings, and floors of a house in Edinburgh that had been partially destroyed by fire. The species was transferred to genus Aleuria by Ethel Irene McLennan & Halsey in 1936, and later into Galactinia by Irma J. Gamundi in 1960; both of the binomials resulting from these generic transfers are synonyms of P. domiciliana.
Peziza domiciliana is commonly known as the "domicile cup fungus".
Description
The fruit bodies of P. domiciliana are cup-shaped; initially concave, they later develop an undulating margin and a depressed center. The outer surface of the cup is whitish, and the margin of the cup can either remain intact or split. Fruit bodies reach an upper diameter of . The inner surface of the cup is the fertile, spore-bearing hymenium; it is initially white before turning buff, tan, or brownish. The whitish stem does not typically become longer than .
The asci (the spore-bearing cells) are cylindrical or roughly so, reaching dimensions of 225–250 μm long by 15 μm wide. The spores are ellipsoid, hyaline (translucent) when young, often contain two small oil droplets, and measure 11–15 by 6–10 μm. The paraphyses are slender, contain septa, and are slightly enlarged above. The species is inedible.
Similar species
Peziza domiciliana is similar in appearance to P. repanda and has often been mistaken for it. Peziza badia is darker brown, grows on the ground or well-decayed wood, and has longer spores measuring 15–19 by 7–10 μm. Other Peziza species have been reported to grow indoors, including P. varia and P. petersii.
Habitat and distribution
The fruit bodies of Peziza domiciliana grow singly, in groups, or in clusters on plaster, sand, gravel and coal-dust in cellars, caves, and greenhouses. The species is known from Europe, North America, and South America (Argentina). The fungus has been identified as one of several responsible for the degradation of construction wood used in historical monuments in Moldavia. It has also been recorded from Deception Island of Antarctica, and from the eastern Himalayas. The fungus has been implicated in a case of hypersensitivity pneumonitis (called El Niño lung in the original report), in which a previously healthy woman developed severe dyspnea and was found to have restrictive lung disease and evidence of alveolitis. A search of her home, which had recently been flooded as a result of heavy rains, revealed the mushroom in her basement, and air sampling confirmed the presence of P. domiciliana spores.
References
External links
Fungi described in 1877
Fungi of Asia
Fungi of Europe
Fungi of North America
Inedible fungi
Pezizaceae
Fungi of Antarctica
Taxa named by Mordecai Cubitt Cooke
Fungus species | Peziza domiciliana | Biology | 746 |
1,256,165 | https://en.wikipedia.org/wiki/Calorie%20restriction | Calorie restriction (also known as caloric restriction or energy restriction) is a dietary regimen that reduces the energy intake from foods and beverages without incurring malnutrition. The possible effect of calorie restriction on body weight management, longevity, and aging-associated diseases has been an active area of research.
Dietary guidelines
Caloric intake control, and reduction for overweight individuals, is recommended by US dietary guidelines and science-based societies.
Calorie restriction is recommended for people with diabetes and prediabetes, in combination with physical exercise and a weight-loss goal of 5–15% for diabetes and 7–10% for prediabetes to prevent progression to diabetes. Mild calorie restriction may be beneficial for pregnant women to reduce weight gain (without weight loss) and reduce perinatal risks for both the mother and child. For overweight or obese individuals, calorie restriction may improve health through weight loss, although a gradual weight regain may occur in subsequent years.
Risks of malnutrition
The term "calorie restriction" as used in the study of aging refers to dietary regimens that reduce calorie intake without incurring malnutrition. If a restricted diet is not designed to include essential nutrients, malnutrition may result in serious deleterious effects, as shown in the Minnesota Starvation Experiment. This study was conducted during World War II on a group of lean men, who restricted their calorie intake by 45% for six months and composed roughly 77% of their diet with carbohydrates. As expected, this malnutrition resulted in metabolic adaptations, such as decreased body fat, improved lipid profile, and decreased resting heart rate. The experiment also caused negative effects, such as anemia, edema, muscle wasting, weakness, dizziness, irritability, lethargy, and depression.
Typical low-calorie diets may not supply the nutrient intake that is typically included in a well-designed calorie restriction diet.
Possible side effects
People losing weight during calorie restriction risk developing side effects, such as cold sensitivity, menstrual irregularities, infertility, or hormonal changes.
Research
Humans
Decreasing caloric intake by 20–30%, while fulfilling nutrient requirements, has been found to remedy diseases of aging, including cancer, cardiovascular disease, dementia, and diabetes in humans, and to result in loss of body weight, but because of the long lifespan of humans, evidence that calorie restriction could prevent age-related disease in humans remains under preliminary research. While calorie restriction leads to weight and fat loss, the precise amount of calorie intake and associated fat mass for optimal health in humans is not known. Moderate amounts of calorie restriction may have harmful effects on certain population groups, such as lean people with low body fat.
Life extension
As of 2021, intermittent fasting and calorie restriction remain under preliminary research to assess the possible effects on disease burden and increased lifespan during aging, although the relative risks associated with long-term fasting or calorie restriction remain undetermined.
Intermittent fasting refers to periods with intervals during which no food but only clear fluids are ingested – such as a period of daily time-restricted eating with a window of 8 to 12 hours for any caloric intake – and could be combined with overall calorie restriction and variants of the Mediterranean diet which may contribute to long-term cardiovascular health and longevity.
Minnesota Starvation Experiment
The Minnesota Starvation Experiment examined the physical and psychological effects of extreme calorie restriction on 32 young and lean 24-year-old men during a 40% reduction in energy intake for 6 months. The study was designed to mimic dietary conditions during World War II. Participants could only eat 1800 kcal per day, but were required to walk 5 km per day and expend 3000 calories. The men lost about 25% of their body weight of which 67% was fat mass and 17% fat-free mass. The quality of the diet was insufficient to accurately represent the diet during war due to the inadequate consumption of protein, and a lack of fruits and vegetables. Despite the extreme calorie restriction, the experiment was not representative of true calorie-restrictive diets, which adhere to intake guidelines for macronutrients and micronutrients. Chronic weakness, decreased aerobic capacity, and painful lower limb edema was caused by the malnourished calorie restrictive diet. Emotional distress, confusion, apathy, depression, hysteria, hypochondriasis, suicidal thoughts, and loss of sex drive were among the abnormal psychological behaviors that occurred within six weeks.
Intensive care
Current clinical guidelines recommend that hospitals ensure patients are fed with 80–100% of their energy expenditure (normocaloric feeding). A systematic review investigated whether people in intensive care units have different outcomes with normocaloric or hypocaloric feeding, and found no difference. However, a comment criticized the inadequate control of protein intake and raised concerns that the safety of hypocaloric feeding should be further assessed in underweight, critically ill people.
Non-human primates
A calorie restriction study started in 1987 by the National Institute on Aging showed that calorie restriction did not extend years of life or reduce age-related deaths in non-obese rhesus macaques. It did improve certain measures of health, however. These results were publicized as being different from the Wisconsin rhesus macaque calorie restriction study, which also started in 1987 and showed an increase in the lifespan of rhesus macaques following calorie restriction.
In a 2017 report on rhesus monkeys, caloric restriction in the presence of adequate nutrition was effective in delaying the effects of aging. Older age of onset, female sex, lower body weight and fat mass, reduced food intake, diet quality, and lower fasting blood glucose levels were factors associated with fewer disorders of aging and with improved survival rates. Specifically, reduced food intake was beneficial in adult and older primates, but not in younger monkeys. The study indicated that caloric restriction provided health benefits with fewer age-related disorders in elderly monkeys and, because rhesus monkeys are genetically similar to humans, the benefits and mechanisms of caloric restriction may apply to human health during aging.
Activity levels
Calorie restriction preserves muscle tissue in nonhuman primates and rodents. Muscle tissue grows when stimulated, so it has been suggested that the calorie-restricted test animals exercised more than their companions on higher calorie intakes, perhaps because animals enter a foraging state during calorie restriction. However, studies show that overall activity levels are no higher in calorie-restricted than in ad libitum-fed animals in youth.
Sirtuin-mediated mechanism
Preliminary research indicates that sirtuins are activated by fasting and serve as "energy sensors" during metabolism. Sirtuins, specifically Sir2 (found in yeast), have been implicated in the aging of yeast and are a class of highly conserved, NAD+-dependent histone deacetylase enzymes. Sir2 homologs have been identified in a wide range of organisms, from bacteria to humans.
See also
Calorie deficit
CR Society International
Fasting
Intermittent fasting
List of diets
Okinawa diet
Very low calorie diet
References
Further reading
Diets
Eating behaviors
Life extension
Senescence | Calorie restriction | Chemistry,Biology | 1,508 |
10,640,172 | https://en.wikipedia.org/wiki/Line%20representations%20in%20robotics | Line representations in robotics are used for the following:
They model joint axes: a revolute joint makes any connected rigid body rotate about the line of its axis; a prismatic joint makes the connected rigid body translate along its axis line.
They model edges of the polyhedral objects used in many task planners or sensor processing modules.
They are needed for shortest distance calculation between robots and obstacles.
When using such lines, conventions are needed so that the representations are clearly defined. This article discusses several of these methods.
Non-minimal vector coordinates
A line is completely defined by the ordered set of two vectors:
a point vector p, indicating the position of an arbitrary point on the line
one free direction vector d, giving the line a direction as well as a sense.
Each point x on the line is given a parameter value t that satisfies:
x = p + t d. The parameter t is unique once p and d are chosen. The representation is not minimal, because it uses six parameters for only four degrees of freedom. The following two constraints apply:
The direction vector d can be chosen to be a unit vector
the point vector p can be chosen to be the point on the line that is nearest the origin, so that p is orthogonal to d.
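As a brief illustration, the sketch below (using NumPy; the function names and example numbers are my own, not from any robotics library) builds the (p, d) pair from two points, applies both constraints, and evaluates a point on the line.

```python
import numpy as np

def line_from_points(p1, p2):
    """Return (p, d): a unit direction d and the point p on the line nearest the origin."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = p2 - p1
    d = d / np.linalg.norm(d)          # constraint 1: unit direction vector
    p = p1 - np.dot(p1, d) * d         # constraint 2: p nearest the origin, so p is orthogonal to d
    return p, d

def point_on_line(p, d, t):
    return p + t * d                   # each point of the line: x = p + t*d

p, d = line_from_points([1.0, 2.0, 0.0], [1.0, 2.0, 5.0])
print(p, d, point_on_line(p, d, 3.0))  # p = (1, 2, 0), d = (0, 0, 1), point (1, 2, 3)
```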
Plücker coordinates
Arthur Cayley and Julius Plücker introduced an alternative representation using two free vectors. This representation was finally named after Plücker.
The Plücker representation is denoted (d, m). Both d and m are free vectors: d represents the direction of the line and m = p × d is its moment about the chosen reference origin. (m is independent of which point p on the line is chosen!)
The advantage of the Plücker coordinates is that they are homogeneous.
A line in Plücker coordinates still has four out of six independent parameters, so it is not a minimal representation. The two constraints on the six Plücker coordinates are
the homogeneity constraint: (d, m) and (λd, λm), λ ≠ 0, represent the same line
the orthogonality constraint: d · m = 0
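Under the same assumptions as the previous sketch (NumPy, illustrative function names), Plücker coordinates and their constraints can be computed as follows.

```python
import numpy as np

def plucker_from_point_direction(p, d):
    """Plücker coordinates (d, m) of the line through point p with direction d."""
    p, d = np.asarray(p, float), np.asarray(d, float)
    m = np.cross(p, d)                     # moment about the origin; independent of the chosen point p
    return d, m

def satisfies_orthogonality(d, m):
    return np.isclose(np.dot(d, m), 0.0)   # orthogonality constraint d . m = 0

def closest_point_to_origin(d, m):
    return np.cross(d, m) / np.dot(d, d)

d, m = plucker_from_point_direction([1.0, 2.0, 0.0], [0.0, 0.0, 1.0])
# Homogeneity: (2*d, 2*m) describes the same line as (d, m)
print(d, m, satisfies_orthogonality(d, m), closest_point_to_origin(d, m))
```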
Minimal line representation
A line representation is minimal if it uses four parameters, which is the minimum needed to represent all possible lines in the Euclidean Space (E³).
Denavit–Hartenberg line coordinates
Jacques Denavit and Richard S. Hartenberg presented the first minimal representation for a line, which is now widely used. The common normal between two lines was the main geometric concept that allowed Denavit and Hartenberg to find a minimal representation. Engineers use the Denavit–Hartenberg (D–H) convention to help them describe the positions of links and joints unambiguously. Every link gets its own coordinate system. There are a few rules to consider in choosing the coordinate system:
the z-axis is in the direction of the joint axis
the x-axis is parallel to the common normal: if there is no unique common normal (parallel z-axes), then d (below) is a free parameter.
the y-axis follows from the x- and z-axes by choosing it to be a right-handed coordinate system.
Once the coordinate frames are determined, inter-link transformations are uniquely described by the following four parameters:
θ: angle about previous z, from old x to new x
d: offset along previous z to the common normal
r: length of the common normal (also called a, but if using that notation, do not confuse with α). Assuming a revolute joint, this is the radius about previous z.
α: angle about common normal, from old z axis to new z axis
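Under this convention the four parameters combine into a single homogeneous transform between consecutive link frames: a rotation θ about and translation d along the previous z axis, followed by a translation r along and rotation α about the new x axis. A minimal Python sketch (illustrative, not from the cited references):

```python
import numpy as np

def dh_transform(theta, d, r, alpha):
    """Homogeneous transform between consecutive link frames
    from the Denavit–Hartenberg parameters (theta, d, r, alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, r * ct],
        [st,  ct * ca, -ct * sa, r * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Forward kinematics of a serial chain is the product of the link transforms:
T = dh_transform(np.pi / 2, 0.1, 0.3, 0.0) @ dh_transform(0.0, 0.0, 0.2, np.pi / 2)
```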
Hayati–Roberts line coordinates
The Hayati–Roberts line representation is another minimal line representation, with parameters:
the x and y components of a unit direction vector along the line. This requirement eliminates the need for a z component, since that component follows from the unit-norm condition.
the x and y coordinates of the intersection point of the line with the plane through the origin of the world reference frame, and normal to the line. The reference frame on this normal plane has the same origin as the world reference frame, and its x and y frame axes are images of the world frame's x and y axes through parallel projection along the line.
This representation is unique for a directed line. The coordinate singularities are different from the DH singularities: it has singularities if the line becomes parallel to either the x or y axis of the world frame.
Product of exponentials formula
The product of exponentials formula represents the kinematics of an open-chain mechanism as the product of exponentials of twists, and may be used to describe a series of revolute, prismatic, and helical joints.
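For instance, the displacement produced by turning a revolute joint by an angle θ is the matrix exponential of θ times the 4×4 twist matrix of its axis line (direction ω through a point q). A rough sketch using SciPy (the helper names are illustrative, not from the references):

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    """3x3 skew-symmetric matrix such that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def revolute_twist(omega, q):
    """4x4 twist matrix of a revolute joint whose axis is the line
    through point q with unit direction omega."""
    S = np.zeros((4, 4))
    S[:3, :3] = skew(omega)
    S[:3, 3] = -np.cross(omega, q)   # linear velocity part v = -omega x q
    return S

# 90-degree rotation about the vertical line through (1, 0, 0):
T1 = expm(revolute_twist(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])) * np.pi / 2)
# For an open chain: T = expm([S1]*theta1) @ expm([S2]*theta2) @ ... @ M (home configuration)
```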
See also
List of basic robotics topics
References
Giovanni Legnani, Federico Casolo, Paolo Righettini and Bruno Zappa A homogeneous matrix approach to 3D kinematics and dynamics — I. Theory Mechanism and Machine Theory, Volume 31, Issue 5, July 1996, Pages 573–587
Giovanni Legnani, Federico Casalo, Paolo Righettini and Bruno Zappa A homogeneous matrix approach to 3D kinematics and dynamics—II. Applications to chains of rigid bodies and serial manipulators Mechanism and Machine Theory, Volume 31, Issue 5, July 1996, Pages 589–605
A. Bottema and B. Roth. Theoretical Kinematics. Dover Books on Engineering. Dover Publications, Inc. Mineola, NY, 1990
A. Cayley. On a new analytical representation of curves in space. Quarterly Journal of Pure and Applied Mathematics,3:225–236,1860
K.H. Hunt. Kinematic Geometry of Mechanisms. Oxford Science Publications, Oxford, England, 2n edition, 1990
J. Plücker. On a new geometry of space. Philosophical Transactions of the Royal Society of London, 155:725–791, 1865
J. Plücker. Fundamental views regarding mechanics. Philosophical Transactions of the Royal Society of London, 156:361–380, 1866
J. Denavit and R.S. Hartenberg. A kinematic notation for lower-pair mechanisms based on matrices. Trans ASME J. Appl. Mech, 23:215–221,1955
R.S. HartenBerg and J. Denavit Kinematic synthesis of linkages McGraw–Hill, New York, NY, 1964
R. Bernhardt and S.L. Albright. Robot Calibration, Chapman & Hall, 1993
S.A. Hayati and M. Mirmirani. Improving the absolute positioning accuracy of robot manipulators. J. Robotic Systems, 2(4):397–441, 1985
K.S. Roberts. A new representation for a line. In Proceedings of the Conference on Computer Vision and Pattern Recognition, pages 635–640, Ann Arbor, MI, 1988
External links
Denavit Hartenburg Convention Computational Software, Wolfram.com 'Math Source' Author: Jason Desjardins 2002
Robotics engineering | Line representations in robotics | Technology,Engineering | 1,347 |
6,945,596 | https://en.wikipedia.org/wiki/Tegart%27s%20Wall | Tegart's Wall was a barbed wire fence erected in May–June 1938 by British Mandatory authorities in the Upper Galilee near the northern border of the territory in order to keep militants from infiltrating from French-controlled Mandatory Lebanon and Syria to join the 1936–1939 Arab revolt in Palestine. With time the security system further included police forts, smaller pillbox-type fortified positions, and mounted police squads patrolling along it. It was described as an "ingenious solution for handling terrorism in Mandatory Palestine."
History
The wall was built on the advice of Charles Tegart, adviser to the Palestine Government on the suppression of terrorism. In his first report, Tegart wrote that the border could not be defended along most of its length under the prevailing topographical conditions. The barrier was strung from Ras en Naqura on the Mediterranean coast to the north edge of Lake Tiberias at a cost of $450,000. It included a nine-foot barbed wire fence that roughly followed the border between Palestine and French-mandated Lebanon but the Galilee panhandle was left on the outside. Before the fence was completed, "a band of Arab terrorists swooped down on a section of the fence… ripped it up and carted it across the frontier into Lebanon."
Five Tegart forts and twenty pillboxes were built along the route of the fence. Nevertheless, the infiltrators easily overcame the fence and evaded mobile patrols along the frontier road.
The barrier, which impeded both legal and illegal trade, angered local inhabitants on both sides of the border because it bisected pastures and private property. After the rebellion was suppressed in 1939, the wall was dismantled.
See also
Separation barrier
References
Bibliography
History of Mandatory Palestine
1936–1939 Arab revolt in Palestine
Border barriers
Walls
1938 establishments in Mandatory Palestine
1940s disestablishments in Mandatory Palestine
Upper Galilee | Tegart's Wall | Engineering | 379 |
24,284,131 | https://en.wikipedia.org/wiki/ExtremeXOS | ExtremeXOS is the network operating system used in newer Extreme Networks network switches. It is Extreme Networks' second-generation operating system, after the VxWorks-based ExtremeWare operating system.
ExtremeXOS is based on the Linux kernel and BusyBox. In July 2008 legal action was taken against Extreme Networks due to alleged violation of the GNU General Public License. Three months later the lawsuit was settled out of court.
References
Linux-based devices
Embedded Linux
Network operating systems | ExtremeXOS | Engineering | 95 |
31,443,350 | https://en.wikipedia.org/wiki/Optical%20depth%20%28astrophysics%29 | Optical depth in astrophysics refers to a specific level of transparency. Optical depth and actual depth, τ and z respectively, can vary widely depending on the absorptivity of the astrophysical environment. Indeed, the relationship between these two quantities can lead to a greater understanding of the structure inside a star.
Optical depth is a measure of the extinction coefficient or absorptivity up to a specific 'depth' of a star's makeup.
The assumption here is that either the extinction coefficient or the column number density is known. These can generally be calculated from other equations if a fair amount of information is known about the chemical makeup of the star. From the definition, it is also clear that large optical depths correspond to higher rate of obscuration. Optical depth can therefore be thought of as the opacity of a medium.
The extinction coefficient can be calculated using the transfer equation. In most astrophysical problems, this is exceptionally difficult to solve since solving the corresponding equations requires the incident radiation as well as the radiation leaving the star. These values are usually theoretical.
In some cases the Beer–Lambert law can be useful in finding the optical depth.
Here n is the refractive index and λ is the wavelength of the incident light before being absorbed or scattered. The Beer–Lambert law is only appropriate when the absorption occurs at a specific wavelength, λ. For a gray atmosphere, for instance, it is most appropriate to use the Eddington approximation.
In a gray atmosphere the extinction coefficient is effectively constant, so the optical depth depends simply on the physical distance from the outside of the star. To find the optical depth at a particular depth z, the above equation may be used with a constant extinction coefficient, integrating from 0 to z.
The Eddington approximation and the depth of the photosphere
Since it is difficult to define where the interior of a star ends and the photosphere begins, astrophysicists usually rely on the Eddington approximation to derive a formal definition of the base of the photosphere.
Devised by Sir Arthur Eddington, the approximation takes into account the fact that the absorption in the atmosphere of a star is "gray", that is, independent of any specific wavelength, acting along the entire electromagnetic spectrum. In that case,
T^4(τ) = (3/4) T_eff^4 (τ + 2/3),
where T is the temperature at the given optical depth, T_eff is the effective temperature, and τ is the optical depth.
This illustrates not only that the observable temperature and actual temperature at a certain physical depth of a star vary, but that the optical depth plays a crucial role in understanding the stellar structure. It also serves to demonstrate that the depth of the photosphere of a star is highly dependent upon the absorptivity of its environment. The photosphere extends down to a point where τ is about 2/3, which corresponds to a state where a photon would experience, in general, less than one scattering before leaving the star.
The above equation can also be rewritten in terms of the extinction coefficient, which is useful, for example, when the optical depth is not known directly but the extinction coefficient is.
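As a small numerical illustration of the relation quoted above (a sketch with an arbitrary choice of effective temperature, not from the article), the temperature equals the effective temperature exactly at an optical depth of 2/3:

```python
def eddington_temperature(tau, t_eff):
    """Temperature at optical depth tau in a gray atmosphere:
    T^4 = (3/4) * T_eff^4 * (tau + 2/3)."""
    return t_eff * (0.75 * (tau + 2.0 / 3.0)) ** 0.25

t_eff = 5772.0                                    # solar effective temperature in kelvin
print(eddington_temperature(2.0 / 3.0, t_eff))    # 5772.0  -> T = T_eff at tau = 2/3
print(eddington_temperature(0.0, t_eff))          # ~4854   -> about 0.84 T_eff at the very surface
```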
References
Astrophysics
Scattering, absorption and radiative transfer (optics) | Optical depth (astrophysics) | Physics,Chemistry,Astronomy | 591 |
6,806 | https://en.wikipedia.org/wiki/Computer%20memory | Computer memory stores information, such as data and programs, for immediate use in the computer. The term memory is often synonymous with the terms RAM, main memory, or primary storage. Archaic synonyms for main memory include core (for magnetic core memory) and store.
Main memory operates at a high speed compared to mass storage which is slower but less expensive per bit and higher in capacity. Besides storing opened programs and data being actively processed, computer memory serves as a mass storage cache and write buffer to improve both reading and writing performance. Operating systems borrow RAM capacity for caching so long as it is not needed by running software. If needed, contents of the computer memory can be transferred to storage; a common way of doing this is through a memory management technique called virtual memory.
Modern computer memory is implemented as semiconductor memory, where data is stored within memory cells built from MOS transistors and other components on an integrated circuit. There are two main kinds of semiconductor memory: volatile and non-volatile. Examples of non-volatile memory are flash memory and ROM, PROM, EPROM, and EEPROM memory. Examples of volatile memory are dynamic random-access memory (DRAM) used for primary storage and static random-access memory (SRAM) used mainly for CPU cache.
Most semiconductor memory is organized into memory cells each storing one bit (0 or 1). Flash memory organization includes both one bit per memory cell and a multi-level cell capable of storing multiple bits per cell. The memory cells are grouped into words of fixed word length, for example, 1, 2, 4, 8, 16, 32, 64 or 128 bits. Each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory.
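As a quick worked example (not from the article), the capacity reachable with an N-bit address is 2^N words, so it grows exponentially with address width:

```python
def addressable_bytes(address_bits, bytes_per_word):
    """Capacity reachable with a binary address of the given width."""
    return (2 ** address_bits) * bytes_per_word

print(addressable_bytes(16, 2))   # 131072      -> 128 KiB with a 16-bit address over 16-bit words
print(addressable_bytes(32, 1))   # 4294967296  -> 4 GiB with a 32-bit byte address
```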
History
In the early 1940s, memory technology often permitted a capacity of a few bytes. The first electronic programmable digital computer, the ENIAC, using thousands of vacuum tubes, could perform simple calculations involving 20 numbers of ten decimal digits stored in the vacuum tubes.
The next significant advance in computer memory came with acoustic delay-line memory, developed by J. Presper Eckert in the early 1940s. Through the construction of a glass tube filled with mercury and plugged at each end with a quartz crystal, delay lines could store bits of information in the form of sound waves propagating through the mercury, with the quartz crystals acting as transducers to read and write bits. Delay-line memory was limited to a capacity of up to a few thousand bits.
Two alternatives to the delay line, the Williams tube and Selectron tube, originated in 1946, both using electron beams in glass tubes as means of storage. Using cathode-ray tubes, Fred Williams invented the Williams tube, which was the first random-access computer memory. The Williams tube was able to store more information than the Selectron tube (the Selectron was limited to 256 bits, while the Williams tube could store thousands) and was less expensive. The Williams tube was nevertheless frustratingly sensitive to environmental disturbances.
Efforts began in the late 1940s to find non-volatile memory. Magnetic-core memory allowed for memory recall after power loss. It was developed by Frederick W. Viehe and An Wang in the late 1940s, and improved by Jay Forrester and Jan A. Rajchman in the early 1950s, before being commercialized with the Whirlwind I computer in 1953. Magnetic-core memory was the dominant form of memory until the development of MOS semiconductor memory in the 1960s.
The first semiconductor memory was implemented as a flip-flop circuit in the early 1960s using bipolar transistors. Semiconductor memory made from discrete devices was first shipped by Texas Instruments to the United States Air Force in 1961. In the same year, the concept of solid-state memory on an integrated circuit (IC) chip was proposed by applications engineer Bob Norman at Fairchild Semiconductor. The first bipolar semiconductor memory IC chip was the SP95 introduced by IBM in 1965. While semiconductor memory offered improved performance over magnetic-core memory, it remained larger and more expensive and did not displace magnetic-core memory until the late 1960s.
MOS memory
The invention of the metal–oxide–semiconductor field-effect transistor (MOSFET) enabled the practical use of metal–oxide–semiconductor (MOS) transistors as memory cell storage elements. MOS memory was developed by John Schmidt at Fairchild Semiconductor in 1964. In addition to higher performance, MOS semiconductor memory was cheaper and consumed less power than magnetic core memory. In 1965, J. Wood and R. Ball of the Royal Radar Establishment proposed digital storage systems that use CMOS (complementary MOS) memory cells, in addition to MOSFET power devices for the power supply, switched cross-coupling, switches and delay-line storage. The development of silicon-gate MOS integrated circuit (MOS IC) technology by Federico Faggin at Fairchild in 1968 enabled the production of MOS memory chips. NMOS memory was commercialized by IBM in the early 1970s. MOS memory overtook magnetic core memory as the dominant memory technology in the early 1970s.
The two main types of volatile random-access memory (RAM) are static random-access memory (SRAM) and dynamic random-access memory (DRAM). Bipolar SRAM was invented by Robert Norman at Fairchild Semiconductor in 1963, followed by the development of MOS SRAM by John Schmidt at Fairchild in 1964. SRAM became an alternative to magnetic-core memory, but requires six transistors for each bit of data. Commercial use of SRAM began in 1965, when IBM introduced their SP95 SRAM chip for the System/360 Model 95.
Toshiba introduced bipolar DRAM memory cells for its Toscal BC-1411 electronic calculator in 1965. While it offered improved performance, bipolar DRAM could not compete with the lower price of the then dominant magnetic-core memory. MOS technology is the basis for modern DRAM. In 1966, Robert H. Dennard at the IBM Thomas J. Watson Research Center was working on MOS memory. While examining the characteristics of MOS technology, he found it was possible to build capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of a single-transistor DRAM memory cell. In 1967, Dennard filed a patent for a single-transistor DRAM memory cell based on MOS technology. This led to the first commercial DRAM IC chip, the Intel 1103 in October 1970. Synchronous dynamic random-access memory (SDRAM) later debuted with the Samsung KM48SL2000 chip in 1992.
The term memory is also often used to refer to non-volatile memory including read-only memory (ROM) through modern flash memory. Programmable read-only memory (PROM) was invented by Wen Tsing Chow in 1956, while working for the Arma Division of the American Bosch Arma Corporation. In 1967, Dawon Kahng and Simon Sze of Bell Labs proposed that the floating gate of a MOS semiconductor device could be used for the cell of a reprogrammable ROM, which led to Dov Frohman of Intel inventing EPROM (erasable PROM) in 1971. EEPROM (electrically erasable PROM) was developed by Yasuo Tarui, Yutaka Hayashi and Kiyoko Naga at the Electrotechnical Laboratory in 1972. Flash memory was invented by Fujio Masuoka at Toshiba in the early 1980s. Masuoka and colleagues presented the invention of NOR flash in 1984, and then NAND flash in 1987. Toshiba commercialized NAND flash memory in 1987.
Developments in technology and economies of scale have made possible so-called very large memory (VLM) computers.
Volatility categories
Volatile memory
Volatile memory is computer memory that requires power to maintain the stored information. Most modern semiconductor volatile memory is either static RAM (SRAM) or dynamic RAM (DRAM). DRAM dominates for desktop system memory. SRAM is used for CPU cache. SRAM is also found in small embedded systems requiring little memory.
SRAM retains its contents as long as the power is connected and may use a simpler interface, but commonly uses six transistors per bit. Dynamic RAM is more complicated for interfacing and control, needing regular refresh cycles to prevent losing its contents, but uses only one transistor and one capacitor per bit, allowing it to reach much higher densities and much cheaper per-bit costs.
Non-volatile memory
Non-volatile memory can retain the stored information even when not powered. Examples of non-volatile memory include read-only memory, flash memory, most types of magnetic computer storage devices (e.g. hard disk drives, floppy disks and magnetic tape), optical discs, and early computer storage methods such as magnetic drum, paper tape and punched cards.
Non-volatile memory technologies under development include ferroelectric RAM, programmable metallization cell, Spin-transfer torque magnetic RAM, SONOS, resistive random-access memory, racetrack memory, Nano-RAM, 3D XPoint, and millipede memory.
Semi-volatile memory
A third category of memory is semi-volatile. The term is used to describe a memory that has some limited non-volatile duration after power is removed, but then data is ultimately lost. A typical goal when using a semi-volatile memory is to provide the high performance and durability associated with volatile memories while providing some benefits of non-volatile memory.
For example, some non-volatile memory types experience wear when written. A worn cell has increased volatility but otherwise continues to work. Data locations which are written frequently can thus be directed to use worn circuits. As long as the location is updated within some known retention time, the data stays valid. After a period of time without update, the value is copied to a less-worn circuit with longer retention. Writing first to the worn area allows a high write rate while avoiding wear on the not-worn circuits.
As a second example, an STT-RAM can be made non-volatile by building large cells, but doing so raises the cost per bit and power requirements and reduces the write speed. Using small cells improves cost, power, and speed, but leads to semi-volatile behavior. In some applications, the increased volatility can be managed to provide many benefits of a non-volatile memory, for example by removing power but forcing a wake-up before data is lost; or by caching read-only data and discarding the cached data if the power-off time exceeds the non-volatile threshold.
The term semi-volatile is also used to describe semi-volatile behavior constructed from other memory types, such as nvSRAM, which combines SRAM and a non-volatile memory on the same chip, where an external signal copies data from the volatile memory to the non-volatile memory, but if power is removed before the copy occurs, the data is lost. Another example is battery-backed RAM, which uses an external battery to power the memory device in case of external power loss. If power is off for an extended period of time, the battery may run out, resulting in data loss.
Management
Proper management of memory is vital for a computer system to operate properly. Modern operating systems have complex systems to properly manage memory. Failure to do so can lead to bugs or slow performance.
Bugs
Improper management of memory is a common cause of bugs and security vulnerabilities, including the following types:
A memory leak occurs when a program requests memory from the operating system and never returns the memory when it is done with it. A program with this bug will gradually require more and more memory until the program fails as the operating system runs out.
A segmentation fault results when a program tries to access memory that it does not have permission to access. Generally, a program doing so will be terminated by the operating system.
A buffer overflow occurs when a program writes data to the end of its allocated space and then continues to write data beyond this to memory that has been allocated for other purposes. This may result in erratic program behavior, including memory access errors, incorrect results, a crash, or a breach of system security. They are thus the basis of many software vulnerabilities and can be maliciously exploited.
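The first of these is easy to reproduce even in a garbage-collected language: memory that stays reachable can never be reclaimed. A deliberately leaky sketch (illustrative only, not a real API):

```python
cache = {}   # grows without bound: every entry stays reachable, so it is never freed

def handle_request(request_id):
    # Bug: results are cached but never evicted, so memory use only ever grows.
    cache[request_id] = bytearray(1024 * 1024)   # keep 1 MiB per request

for request_id in range(1_000_000):
    handle_request(request_id)   # eventually exhausts the available memory
```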
Virtual memory
Virtual memory is a system where physical memory is managed by the operating system typically with assistance from a memory management unit, which is part of many modern CPUs. It allows multiple types of memory to be used. For example, some data can be stored in RAM while other data is stored on a hard drive (e.g. in a swapfile), functioning as an extension of the cache hierarchy. This offers several advantages. Computer programmers no longer need to worry about where their data is physically stored or whether the user's computer will have enough memory. The operating system will place actively used data in RAM, which is much faster than hard disks. When the amount of RAM is not sufficient to run all the current programs, it can result in a situation where the computer spends more time moving data from RAM to disk and back than it does accomplishing tasks; this is known as thrashing.
Protected memory
Protected memory is a system where each program is given an area of memory to use and is prevented from going outside that range. If the operating system detects that a program has tried to alter memory that does not belong to it, the program is terminated (or otherwise restricted or redirected). This way, only the offending program crashes, and other programs are not affected by the misbehavior (whether accidental or intentional). Use of protected memory greatly enhances both the reliability and security of a computer system.
Without protected memory, it is possible that a bug in one program will alter the memory used by another program. This will cause that other program to run off of corrupted memory with unpredictable results. If the operating system's memory is corrupted, the entire computer system may crash and need to be rebooted. At times programs intentionally alter the memory used by other programs. This is done by viruses and malware to take over computers. It may also be used benignly by desirable programs which are intended to modify other programs, debuggers, for example, to insert breakpoints or hooks.
See also
Memory geometry
Memory hierarchy
Memory organization
Processor registers store data but normally are not considered as memory, since they only store one word and do not include an addressing mechanism.
Universal memory, memory combining both large capacity and high speed
Notes
References
Further reading
MOSFETs
Digital electronics | Computer memory | Engineering | 3,018 |
66,977,007 | https://en.wikipedia.org/wiki/Balt%20%28company%29 | Balt is a medical equipment manufacturer specializing in medical devices designed to treat stroke and other neurovascular diseases.
History
Balt was established in 1977 in France as a small family business by Leopold Płowiecki. His son, Nicolas, oversaw the company's growth until 2018. In 2015, Bridgepoint Advisers acquired a stake in the company, and Balt began to expand internationally. Pascal Girin, who had been serving as CEO of Balt International since 2016, was appointed CEO of the entire company in late 2018.
Balt initially focused on plastic micro-tubes for the pharmaceutical industry. In 1987, it developed a microcatheter enabling the treatment of arteriovenous malformations. The Silk Vista Baby, one of the company's most recent innovations released in 2018, is the world's smallest collapsible intracranial stent.
Balt's revenues tripled between 2015 and 2020, and its workforce quadrupled, reaching 500 employees worldwide in 2020. Balt sells its products in 100 countries around the world. Between 2016 and 2019, the company acquired the American start-up Blockade Medical, which also produces medical devices, as well as several of its distributors in Europe, China, India and Brazil. Today, Balt operates in ten different geographical locations worldwide.
References
External links
French companies established in 1977
Companies based in Île-de-France
Life sciences industry
Medical and health organizations based in France | Balt (company) | Biology | 294 |
62,023,864 | https://en.wikipedia.org/wiki/Riverbank%20Publications | The Riverbank Publications is a series of pamphlets written by the people who worked for millionaire George Fabyan in the multi-discipline research facility he built in the early 20th century near Chicago. They were published by Fabyan, often without author credit. The publications on cryptanalysis, mostly written by William Friedman, with contributions from Elizebeth Smith Friedman and others, are considered seminal in the field. In particular, Publication 22 introduced the Index of Coincidence, a powerful statistical tool for cryptanalysis.
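The index of coincidence measures the probability that two letters drawn at random from a text are identical; monoalphabetically enciphered English stays near the plain-English value of about 0.066, while a uniformly random letter stream gives about 0.038 (1/26), which is what makes it useful for distinguishing and attacking cipher systems. A minimal sketch of the calculation (not taken from Publication 22 itself):

```python
from collections import Counter

def index_of_coincidence(text):
    """Probability that two letters chosen at random from the text coincide."""
    letters = [c for c in text.upper() if c.isalpha()]
    n = len(letters)
    counts = Counter(letters)
    return sum(f * (f - 1) for f in counts.values()) / (n * (n - 1))

print(index_of_coincidence("DEFEND THE EAST WALL OF THE CASTLE"))
# ≈ 0.082 for this short sample; long English text tends toward ~0.066
```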
List of publications on cryptography
The Riverbank Publications dealt with many subjects investigated at the laboratories. The ones dealing with cryptography began with number 15 and consist of:
15, A Method of Reconstructing the Primary Alphabet From a Single One of the Series of Secondary Alphabets, 1917
16, Methods for the Solution of Running-Key Ciphers, 1918
17, An Introduction to Methods for the Solution of Ciphers, 1918
18, Synoptic Tables for the Solution of Ciphers and A Bibliography of Cryptographic Literature, 1918
19, Formulae for the Solution of Geometrical Transposition Ciphers, written with Capt. Lenox R. Lohr, 1918
20, Several Machine Ciphers and Methods for their Solution, 1918
21, Methods for the Reconstruction of Primary Alphabets, written with Elizebeth Smith Friedman, 1918
22, The Index of Coincidence and Its Applications in Cryptography, imprint L. Fournier, Paris, 1922
50, The production and detection of messages in concealed writing and images, by H. O. Nolan, 1918
75, Memorization Methods: Specifically Illustrated in Respect to Their Applicability to Codes and Topographic Material, by H. O. Nolan, 1919,
Except as noted, the above publications were written by William F. Friedman and were published by George Fabyan's Riverbank Laboratories in Geneva, Illinois.
References
Cryptography books
World War I-related lists
Cryptographic attacks
Riverbank Laboratories | Riverbank Publications | Technology | 393 |
5,317,576 | https://en.wikipedia.org/wiki/9-Crown-3 | 9-Crown-3, also called 1,4,7-trioxonane or 1,4,7-trioxacyclononane is a crown ether with the formula (C2H4O)3. A colorless liquid, it is obtained in low yield by the acid-catalyzed oligomerization of ethylene oxide.
In contrast to larger crown ethers (12-crown-4, and 18-crown-6), 9-crown-3 has elicited very little interest, except from theorists.
See also
1,4-Dioxane
References
Crown ethers
Tridentate ligands
Nine-membered rings | 9-Crown-3 | Chemistry | 137 |
503,581 | https://en.wikipedia.org/wiki/Sex%20ratio | A sex ratio is the ratio of males to females in a population. As explained by Fisher's principle, for evolutionary reasons this is typically about 1:1 in species which reproduce sexually. However, many species deviate from an even sex ratio, either periodically or permanently. These include parthenogenic and androgenetic species, periodically mating organisms such as aphids, some eusocial wasps, bees, ants, and termites.
Types
In most species, the sex ratio varies according to the age profile of the population.
It is generally divided into four subdivisions:
primary sex ratio — ratio at fertilization
secondary sex ratio — ratio at birth
tertiary sex ratio — ratio in sexually mature organisms
The tertiary sex ratio is equivalent to the adult sex ratio (ASR), which is defined as the ratio of adult males to females in a population.
The operational sex ratio (OSR) is the ratio of sexually active males to females in a population, and is therefore derived from a subset of the individuals included when calculating the ASR. Although the two measures are conceptually distinct, researchers have sometimes equated the ASR with the OSR, particularly in experimental studies of animals where the difference between the two values may not always be readily apparent.
quaternary sex ratio — ratio in post-reproductive organisms
These definitions can be somewhat subjective since they lack clear boundaries.
Sex ratio theory
Sex ratio theory is a field of academic study which seeks to understand the sex ratios observed in nature from an evolutionary perspective. It continues to be heavily influenced by the work of Eric Charnov. He defines five major questions, both for his book and the field in general (slightly abbreviated here):
For a dioecious species, what is the equilibrium sex ratio maintained by natural selection?
For a sequential hermaphrodite, what is the equilibrium sex order and time of sex change?
For a simultaneous hermaphrodite, what is the equilibrium allocation of resources to male versus female function in each breeding season?
Under what conditions are the various states of hermaphroditism or dioecy evolutionarily stable? When is a mixture of sexual types stable?
When does selection favour the ability of an individual to alter its allocation to male versus female function, in response to particular environmental or life history situations?
Biological research mostly concerns itself with sex allocation rather than sex ratio, sex allocation denoting the allocation of energy to either sex. Common research themes are the effects of local mate and resource competition (often abbreviated LMC and LRC, respectively).
Fisher's principle
Fisher's principle (1930) explains why in most species, the sex ratio is approximately 1:1. His argument was summarised by W. D. Hamilton (1967) as follows, assuming that parents invest the same whether raising male or female offspring:
Suppose male births are less common than female.
A newborn male then has better mating prospects than a newborn female, and therefore can expect to have more offspring.
Therefore parents genetically disposed to produce males tend to have more than average numbers of grandchildren born to them.
Therefore the genes for male-producing tendencies spread, and male births become more common.
As the 1:1 sex ratio is approached, the advantage associated with producing males dies away.
The same reasoning holds if females are substituted for males throughout. Therefore 1:1 is the equilibrium ratio.
In modern language, the 1:1 ratio is the evolutionarily stable strategy (ESS). This ratio has been observed in many species, including the bee Macrotera portalis. A study performed by Danforth observed no significant difference in the number of males and females from the 1:1 sex ratio.
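The logic of the argument can be made concrete with a small calculation (an illustrative sketch, not from the cited study): every offspring has exactly one father and one mother, so if a fraction m of breeders is male, the expected reproductive value of a son relative to a daughter is (1 − m)/m. Producing the rarer sex therefore pays, and the payoff is equal only at m = 1/2:

```python
def relative_payoff_of_sons(male_fraction):
    """Expected reproductive value of a son relative to a daughter when a
    fraction `male_fraction` of the breeding population is male."""
    m = male_fraction
    return (1.0 - m) / m   # each offspring has exactly one father and one mother

for m in (0.3, 0.5, 0.7):
    print(m, round(relative_payoff_of_sons(m), 2))
# 0.3 -> 2.33 (sons favoured), 0.5 -> 1.0 (equilibrium), 0.7 -> 0.43 (daughters favoured)
```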
Human sex ratio
The human sex ratio is of particular interest to anthropologists and demographers. In human societies, sex ratios at birth may be considerably skewed by factors such as the age of mother at birth and by sex-selective abortion and infanticide. Exposure to pesticides and other environmental contaminants may be a significant contributing factor as well. As of 2024, the global sex ratio at birth is estimated at 107 boys to 100 girls (1,000 boys per 934 girls). By old age, the sex ratio reverses, with 81 older men for every 100 older women; across all ages, the global population is nearly balanced, with 101 males for every 100 females.
Examples in non-human species
Environmental and individual control
Spending equal amounts of resources to produce offspring of either sex is an evolutionarily stable strategy: if the general population deviates from this equilibrium by favoring one sex, one can obtain higher reproductive success with less effort by producing more of the other. For species where the cost of successfully raising one offspring is roughly the same regardless of its sex, this translates to an approximately equal sex ratio.
Bacteria of the genus Wolbachia cause skewed sex ratios in some arthropod species as they kill males. Sex-ratio of adult populations of pelagic copepods is usually skewed towards dominance of females. However, there are differences in adult sex ratios between families: in families in which females require multiple matings to keep producing eggs, sex ratios are less biased (close to 1); in families in which females can produce eggs continuously after only one mating, sex ratios are strongly skewed towards females.
Several species of reptiles have temperature-dependent sex determination, where the incubation temperature of the eggs determines the sex of the individual. In the American alligator, for example, females are hatched from eggs incubated at lower temperatures, whereas males are hatched from eggs incubated at higher temperatures. In this method, however, all eggs in a clutch (20–50) will be of the same sex. In fact, the natural sex ratio of this species is five females to one male.
In birds, mothers can influence the sex of their chicks. In peafowl, maternal body condition can influence the proportion of daughters in the range from 25% to 87%.
Dichogamy (sequential hermaphroditism) is normal in several groups of fish, such as wrasses, parrotfish and clownfish. This can cause a discrepancy in the sex ratios as well. In the bluestreak cleaner wrasse, there is only one male for every group of 6-8 females. If the male fish dies, the strongest female changes its sex to become the male for the group. All of these wrasses are born female, and only become male in this situation. Other species, like clownfish, do this in reverse, where all start out as non-reproductive males, and the largest male becomes a female, with the second-largest male maturing to become reproductive.
Domesticated animals
Traditionally, farmers have discovered that the most economically efficient community of animals will have a large number of females and a very small number of males. A herd of cows with a few bulls or a flock of hens with one rooster are the most economical sex ratios for domesticated livestock.
Dioecious plants secondary sex ratio and amount of pollen
It was found that the amount of fertilizing pollen can influence secondary sex ratio in dioecious plants. Increase in pollen amount leads to decrease in number of male plants in the progeny. This relationship was confirmed on four plant species from three families – Rumex acetosa (Polygonaceae), Melandrium album (Caryophyllaceae), Cannabis sativa and Humulus japonicus (Cannabinaceae).
Polyandrous and cooperatively breeding homeotherms
In charadriiform birds, recent research has shown clearly that polyandry and sex-role reversal (where males care and females compete for mates) as found in phalaropes, jacanas, painted snipe and a few plover species is clearly related to a strongly male-biased adult sex ratio. Those species with male care and polyandry invariably have adult sex ratios with a large surplus of males, which in some cases can reach as high as six males per female.
Male-biased adult sex ratios have also been shown to correlate with cooperative breeding in mammals such as alpine marmots and wild canids. This correlation may also apply to cooperatively breeding birds, though the evidence is less clear. It is known, however, that both male-biased adult sex ratios and cooperative breeding tend to evolve where caring for offspring is extremely difficult due to low secondary productivity, as in Australia and Southern Africa. It is also known that in cooperative breeders where both sexes are philopatric like the varied sittella, adult sex ratios are equally or more male-biased than in those cooperative species, such as fairy-wrens, treecreepers and the noisy miner where females always disperse.
See also
Evolution of sex
Operational sex ratio
Sex allocation
Trivers–Willard hypothesis
XY sex-determination system
Humans:
List of countries by sex ratio
Bride kidnapping
Groom kidnapping
Demographic transition
Sex selection
Sex-selective abortion and infanticide
Youth bulge
References
Further reading
Also printed as
External links
CIA listing of sex ratios for individual countries (including age divisions)
A review of sex ratio theory
Population
Selection
Biostatistics | Sex ratio | Biology | 1,838 |
17,871,500 | https://en.wikipedia.org/wiki/Damascone | Damascones are a series of closely related chemical compounds that are components of a variety of essential oils. The damascones belong to a family of chemicals known as rose ketones, which also includes damascenones and ionones. beta-Damascone is a contributor to the aroma of roses, despite its relatively low concentration, and is an important fragrance chemical used in perfumery.
The damascones are derived from the degradation of carotenoids.
See also
Rose oil
References
Further reading
Carotenoids
Enones
Perfume ingredients
Cyclohexenes
B | Damascone | Biology | 117 |
595,896 | https://en.wikipedia.org/wiki/Chowla%E2%80%93Mordell%20theorem | In mathematics, the Chowla–Mordell theorem is a result in number theory determining cases where a Gauss sum is the square root of a prime number, multiplied by a root of unity. It was proved and published independently by Sarvadaman Chowla and Louis Mordell, around 1951.
In detail, if p is a prime number, χ a nontrivial Dirichlet character modulo p, and
g(χ) = χ(1)ζ + χ(2)ζ^2 + ⋯ + χ(p − 1)ζ^(p−1),
where ζ is a primitive p-th root of unity in the complex numbers, then
g(χ)/√p is a root of unity if and only if χ is the quadratic residue symbol modulo p. The 'if' part was known to Gauss: the contribution of Chowla and Mordell was the 'only if' direction. The ratio g(χ)/√p in the theorem occurs in the functional equation of L-functions.
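A quick numerical check (an illustrative sketch; it uses the Legendre symbol as the quadratic residue character): for the quadratic character the normalized sum g(χ)/√p comes out as 1 when p ≡ 1 (mod 4) and as i when p ≡ 3 (mod 4), Gauss's classical evaluation, and so is a root of unity.

```python
import cmath

def legendre(a, p):
    """Quadratic residue symbol (a/p) for an odd prime p via Euler's criterion."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def gauss_sum(p, chi):
    """g(chi) = sum_{a=1}^{p-1} chi(a) * zeta^a with zeta = exp(2*pi*i/p)."""
    zeta = cmath.exp(2j * cmath.pi / p)
    return sum(chi(a) * zeta ** a for a in range(1, p))

for p in (5, 13, 7, 19):
    g = gauss_sum(p, lambda a, p=p: legendre(a, p))
    print(p, g / p ** 0.5)   # ~1 for p = 5, 13 (p ≡ 1 mod 4); ~i for p = 7, 19 (p ≡ 3 mod 4)
```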
References
Gauss and Jacobi Sums by Bruce C. Berndt, Ronald J. Evans and Kenneth S. Williams, Wiley-Interscience, p. 53.
Cyclotomic fields
Zeta and L-functions
Theorems in number theory | Chowla–Mordell theorem | Mathematics | 217 |
64,799,736 | https://en.wikipedia.org/wiki/Wendy%20Lee%20Queen | Wendy Lee Queen (born 1981 in South Carolina) is an American chemist and materials scientist. Her research interests focus on the development, design and production of hybrid organic/inorganic materials at the intersection of chemistry, chemical engineering and materials science. As of 2020 she is a tenure-track assistant professor at the École polytechnique fédérale de Lausanne (EPFL) in Switzerland, where she directs the Laboratory for Functional Inorganic Materials.
Career
Queen studied chemistry and mathematics at Lander University in Greenwood, South Carolina, USA. She then pursued a PhD in inorganic chemistry at Clemson University under the mentorship of Shiou-Jyh Hwu. In 2009 she joined the Center for Neutron Research at National Institute of Standards and Technology. From 2011 to 2012 she was a visiting scholar in laboratory of Jeffrey R. Long at University of California Berkeley before returning to the Center for Neutron Research as a postdoctoral fellow with Craig Brown.
In the position of a project scientist, Queen joined the Molecular Foundry at Lawrence Berkeley National Laboratory in 2012. Here she helped build a new user program focused on the synthesis and characterization of porous adsorbents. During her time there she worked on a number of projects focused on the use of polymer-metal-organic frameworks (MOF) or MOF-based membranes for a variety of globally relevant gas separations such as carbon dioxide capture from flue gas and water capture from air.
In 2015, she was nominated as tenure-track assistant professor at Department of Chemical Engineering of École polytechnique fédérale de Lausanne (EPFL) in Switzerland. Her Laboratory for Functional Inorganic Materials is based at the EPFL Valais Wallis campus in Sion, Switzerland.
Research
Queen's research is focused on the synthesis and characterization of novel porous adsorbents, namely metal-organic frameworks, and their corresponding composites, which are of interest in a number of host-guest applications. Her research aims at contributing knowledge towards solving globally relevant problems, like reducing energy consumption, cutting CO2 emissions, water purification, the extraction of valuable commodities from waste, and chemical conversion processes.
Queen became known to a wider audience through her TEDx Talk "Cut Carbon to Save Lives", her Aeon article "Could mining gold from waste reduce its great cost?", and multiple appearances in news outlets.
Distinctions
In 2020, Queen was nominated as one of Chemical & Engineering News's “Talented 12”. She is a member of the board of Scientific Advisors at novoMOF.
References
External links
American women chemists
Women materials scientists and engineers
Academic staff of the École Polytechnique Fédérale de Lausanne
21st-century American women scientists
21st-century American chemists
American expatriate academics
American expatriates in Switzerland
American materials scientists
Lander University alumni
1981 births
Living people
Clemson University alumni
American women academics
Scientists from South Carolina
Chemists from South Carolina | Wendy Lee Queen | Materials_science,Technology | 583 |
2,222,206 | https://en.wikipedia.org/wiki/K%E1%B9%9Bttik%C4%81 | The star cluster Kṛttikā (Sanskrit: कृत्तिका, popularly transliterated Krittika), sometimes known as Kārtikā, corresponds to the open star cluster called the Pleiades in western astronomy; it is one of the clusters which makes up the constellation Taurus. In Indian astronomy and Hindu astrology the name literally translates to "the cutters". It is also the name of its goddess-personification, who is a daughter of Daksha and Panchajani, and thus a half-sister to Khyati. The spouse of Kṛttikā is Chandra ("the moon"). The six Krittikas who raised the Hindu god Kartikeya are Śiva, Sambhūti, Prīti, Sannati, Anasūya and Kṣamā.
In Hindu astrology, Kṛttikā is the third of the 27 nakshatras. It is ruled by the Sun.
Under the traditional Hindu principle of naming individuals according to their Ascendant/Lagna, the following Sanskrit syllables correspond with this nakshatra and would belong at the beginning of the first name of an individual born under it: A (अ), I (ई), U (उ) and E (ए).
See also
List of Nakshatras
Pleione
References
Taurus (constellation)
Nakshatra
Daughters of Daksha | Kṛttikā | Astronomy | 273 |
174,431 | https://en.wikipedia.org/wiki/Fiberglass | Fiberglass (American English) or fibreglass (Commonwealth English) is a common type of fiber-reinforced plastic using glass fiber. The fibers may be randomly arranged, flattened into a sheet called a chopped strand mat, or woven into glass cloth. The plastic matrix may be a thermoset polymer matrix—most often based on thermosetting polymers such as epoxy, polyester resin, or vinyl ester resin—or a thermoplastic.
Cheaper and more flexible than carbon fiber, it is stronger than many metals by weight, non-magnetic, non-conductive, transparent to electromagnetic radiation, can be molded into complex shapes, and is chemically inert under many circumstances. Applications include aircraft, boats, automobiles, bath tubs and enclosures, swimming pools, hot tubs, septic tanks, water tanks, roofing, pipes, cladding, orthopedic casts, surfboards, and external door skins.
Other common names for fiberglass are glass-reinforced plastic (GRP), glass-fiber reinforced plastic (GFRP) or GFK (from German glasfaserverstärkter Kunststoff). Because glass fiber itself is sometimes referred to as "fiberglass", the composite is also called fiberglass-reinforced plastic (FRP). This article uses "fiberglass" to refer to the complete fiber-reinforced composite material, rather than only to the glass fiber within it.
History
Glass fibers have been produced for centuries, but the earliest patent was awarded to the Prussian inventor Hermann Hammesfahr (1845–1914) in the U.S. in 1880.
Mass production of glass strands was accidentally discovered in 1932 when Games Slayter, a researcher at Owens-Illinois, directed a jet of compressed air at a stream of molten glass and produced fibers. A patent for this method of producing glass wool was first applied for in 1933. Owens joined with the Corning company in 1935 and the method was adapted by Owens Corning to produce its patented "Fiberglas" (spelled with one "s") in 1936. Originally, Fiberglas was a glass wool with fibers entrapping a great deal of gas, making it useful as an insulator, especially at high temperatures.
A suitable resin for combining the fiberglass with a plastic to produce a composite material was developed in 1936 by DuPont. The first ancestor of modern polyester resins is Cyanamid's resin of 1942. Peroxide curing systems were used by then. With the combination of fiberglass and resin the gas content of the material was replaced by plastic. This reduced the insulation properties to values typical of the plastic, but now for the first time, the composite showed great strength and promise as a structural and building material. Many glass fiber composites continued to be called "fiberglass" (as a generic name) and the name was also used for the low-density glass wool product containing gas instead of plastic.
Ray Greene of Owens Corning is credited with producing the first composite boat in 1937 but did not proceed further at the time because of the brittle nature of the plastic used. In 1939 Russia was reported to have constructed a passenger boat of plastic materials, and the United States a fuselage and wings of an aircraft. The first car to have a fiberglass body was a 1946 prototype of the Stout Scarab, but the model did not enter production.
Fiber
Unlike glass fibers used for insulation, for the final structure to be strong, the fiber's surfaces must be almost entirely free of defects, as this permits the fibers to reach gigapascal tensile strengths. If a bulk piece of glass were defect-free, it would be as strong as glass fibers; however, it is generally impractical to produce and maintain bulk material in a defect-free state outside of laboratory conditions.
Production
Pultrusion is one technique used to manufacture fiberglass composite profiles (see below); the glass fibers themselves are produced separately. The manufacturing process for glass fibers suitable for reinforcement uses large furnaces to gradually melt the silica sand, limestone, kaolin clay, fluorspar, colemanite, dolomite and other minerals until a liquid forms. It is then extruded through bushings (spinnerets), which are bundles of very small orifices (typically 5–25 micrometres in diameter for E-glass, 9 micrometres for S-glass).
These filaments are then sized (coated) with a chemical solution. The individual filaments are now bundled in large numbers to provide a roving. The diameter of the filaments, and the number of filaments in the roving, determine its weight, typically expressed in one of two measurement systems:
yield, or yards per pound (the number of yards of fiber in one pound of material; thus a smaller number means a heavier roving). Examples of standard yields are 225yield, 450yield, 675yield.
tex, or grams per km (how many grams 1 km of roving weighs, inverted from yield; thus a smaller number means a lighter roving). Examples of standard tex are 750tex, 1100tex, 2200tex.
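The two systems measure the same quantity in reciprocal units, so one converts to the other directly (a sketch, using 1 lb ≈ 453.6 g and 1 yd ≈ 0.9144 m; the nominal figures above are rounded values of the exact conversion):

```python
GRAMS_PER_POUND = 453.59237
METRES_PER_YARD = 0.9144

def yield_to_tex(yards_per_pound):
    """Convert a roving 'yield' (yards per pound) to 'tex' (grams per kilometre)."""
    grams_per_metre = GRAMS_PER_POUND / (yards_per_pound * METRES_PER_YARD)
    return grams_per_metre * 1000.0

for y in (225, 450, 675):
    print(y, round(yield_to_tex(y)))   # 2205, 1102, 735 -> nominally 2200, 1100, 750 tex
```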
These rovings are then either used directly in a composite application such as pultrusion, filament winding (pipe), gun roving (where an automated gun chops the glass into short lengths and drops it into a jet of resin, projected onto the surface of a mold), or in an intermediary step, to manufacture fabrics such as chopped strand mat (CSM) (made of randomly oriented small cut lengths of fiber all bonded together), woven fabrics, knit fabrics or unidirectional fabrics.
Chopped strand mat
Chopped strand mat (CSM) is a form of reinforcement used in fiberglass. It consists of glass fibers laid randomly across each other and held together by a binder. It is typically processed using the hand lay-up technique, where sheets of material are placed on a mold and brushed with resin. Because the binder dissolves in resin, the material easily conforms to different shapes when wetted out. After the resin cures, the hardened product can be taken from the mold and finished. Using chopped strand mat gives the fiberglass isotropic in-plane material properties.
Sizing
A coating or primer is applied to the roving to help protect the glass filaments for processing and manipulation and to ensure proper bonding to the resin matrix, thus allowing for the transfer of shear loads from the glass fibers to the thermoset plastic. Without this bonding, the fibers can 'slip' in the matrix causing localized failure.
Properties
An individual structural glass fiber is both stiff and strong in tension and compression—that is, along its axis. Although it might be assumed that the fiber is weak in compression, it is actually only the long aspect ratio of the fiber which makes it seem so; i.e., because a typical fiber is long and narrow, it buckles easily. On the other hand, the glass fiber is weak in shear—that is, across its axis. Therefore, if a collection of fibers can be arranged permanently in a preferred direction within a material, and if they can be prevented from buckling in compression, the material will be preferentially strong in that direction.
Furthermore, by laying multiple layers of fiber on top of one another, with each layer oriented in various preferred directions, the material's overall stiffness and strength can be efficiently controlled. In fiberglass, it is the plastic matrix which permanently constrains the structural glass fibers to directions chosen by the designer. With chopped strand mat, this directionality is essentially an entire two-dimensional plane; with woven fabrics or unidirectional layers, directionality of stiffness and strength can be more precisely controlled within the plane.
A fiberglass component is typically of a thin "shell" construction, sometimes filled on the inside with structural foam, as in the case of surfboards. The component may be of nearly arbitrary shape, limited only by the complexity and tolerances of the mold used for manufacturing the shell.
The mechanical functionality of materials is heavily reliant on the combined performance of both the resin (also known as the matrix) and the fibers. For example, in severe temperature conditions (over 180 °C), the resin component of the composite may lose its functionality, partially due to bond deterioration between resin and fiber. However, GFRPs can still show significant residual strength after experiencing high temperatures (200 °C).
One notable feature of fiberglass is that the resins used are subject to contraction during the curing process. For polyester this contraction is often 5–6%; for epoxy, about 2%. Because the fibers do not contract, this differential can create changes in the shape of the part during curing. Distortions can appear hours, days, or weeks after the resin has set. While this distortion can be minimized by symmetric use of the fibers in the design, a certain amount of internal stress is created; and if it becomes too great, cracks form.
Types
The most common type of glass fiber used in fiberglass is E-glass, which is alumino-borosilicate glass with less than 1% w/w alkali oxides, mainly used for glass-reinforced plastics. Other types of glass used are A-glass (alkali-lime glass with little or no boron oxide), E-CR-glass (Electrical/Chemical Resistance; alumino-lime silicate with less than 1% w/w alkali oxides, with high acid resistance), C-glass (alkali-lime glass with high boron oxide content, used for glass staple fibers and insulation), D-glass (borosilicate glass, named for its low Dielectric constant), R-glass (alumino silicate glass without MgO and CaO with high mechanical requirements as Reinforcement), and S-glass (alumino silicate glass without CaO but with high MgO content with high tensile strength).
Pure silica (silicon dioxide), when cooled as fused quartz into a glass with no true melting point, can be used as a glass fiber for fiberglass but has the drawback that it must be worked at very high temperatures. In order to lower the necessary work temperature, other materials are introduced as "fluxing agents" (i.e., components to lower the melting point). Ordinary A-glass ("A" for "alkali-lime") or soda lime glass, crushed and ready to be remelted, as so-called cullet glass, was the first type of glass used for fiberglass. E-glass ("E" because of initial Electrical application), is alkali-free and was the first glass formulation used for continuous filament formation. It now makes up most of the fiberglass production in the world, and also is the single largest consumer of boron minerals globally. It is susceptible to chloride ion attack and is a poor choice for marine applications. S-glass ("S" for "stiff") is used when tensile strength (high modulus) is important and is thus an important building and aircraft epoxy composite (it is called R-glass, "R" for "reinforcement" in Europe). C-glass ("C" for "chemical resistance") and T-glass ("T" is for "thermal insulator"—a North American variant of C-glass) are resistant to chemical attack; both are often found in insulation-grades of blown fiberglass.
Table of some common fiberglass types
Applications
Fiberglass is versatile because it is lightweight, strong, weather-resistant, and can have a variety of surface textures.
During World War II, fiberglass was developed as a replacement for the molded plywood used in aircraft radomes (fiberglass being transparent to microwaves). Its first main civilian application was for the building of boats and sports car bodies, where it gained acceptance in the 1950s. Its use has broadened to the automotive and sport equipment sectors. In the production of some products, such as aircraft, carbon fiber is now used instead of fiberglass, which is stronger by volume and weight.
Advanced manufacturing techniques such as pre-pregs and fiber rovings extend fiberglass's applications and the tensile strength possible with fiber-reinforced plastics.
Fiberglass is also used in the telecommunications industry for shrouding antennas, due to its RF permeability and low signal attenuation properties. It may also be used to conceal other equipment where no signal permeability is required, such as equipment cabinets and steel support structures, due to the ease with which it can be molded and painted to blend with existing structures and surfaces. Other uses include sheet-form electrical insulators and structural components commonly found in power-industry products. Because of fiberglass's lightweight and durability, it is often used in protective equipment such as helmets. Many sports use fiberglass protective gear, such as goaltenders' and catchers' masks.
Storage tanks
Storage tanks can be made of fiberglass with capacities up to about 300 tonnes. Smaller tanks can be made with chopped strand mat cast over a thermoplastic inner tank which acts as a preform during construction. Much more reliable tanks are made using woven mat or filament wound fiber, with the fiber orientation at right angles to the hoop stress imposed in the sidewall by the contents. Such tanks tend to be used for chemical storage because the plastic liner (often polypropylene) is resistant to a wide range of corrosive chemicals. Fiberglass is also used for septic tanks.
House building
Glass-reinforced plastics are also used to produce house building components such as roofing laminate, door surrounds, over-door canopies, window canopies and dormers, chimneys, coping systems, and heads with keystones and sills. The material's reduced weight and easier handling, compared to wood or metal, allows faster installation. Mass-produced fiberglass brick-effect panels can be used in the construction of composite housing, and can include insulation to reduce heat loss.
Oil and gas artificial lift systems
In rod pumping applications, fiberglass rods are often used for their high tensile strength to weight ratio. Fiberglass rods provide an advantage over steel rods because they stretch more elastically (lower Young's modulus) than steel for a given weight, meaning more oil can be lifted from the hydrocarbon reservoir to the surface with each stroke, all while reducing the load on the pumping unit.
Fiberglass rods must be kept in tension, however, as they frequently part if placed in even a small amount of compression. The buoyancy of the rods within a fluid amplifies this tendency.
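As a rough illustration of why the lower modulus matters, the axial stretch of a rod under load follows delta = F L / (A E). The Python sketch below compares steel and fiberglass rod strings of the same diameter; the load, length, diameter, and moduli are typical handbook-style assumptions rather than values from this article.

# Minimal sketch of axial rod stretch, delta = F * L / (A * E).
# All inputs are illustrative assumptions.
import math

def rod_stretch_m(load_n, length_m, diameter_m, youngs_modulus_pa):
    area = math.pi * (diameter_m / 2) ** 2
    return load_n * length_m / (area * youngs_modulus_pa)

LOAD = 40e3        # N, assumed rod load
LENGTH = 1500.0    # m, assumed rod string length
DIAMETER = 0.022   # m, roughly a 7/8 in rod

steel = rod_stretch_m(LOAD, LENGTH, DIAMETER, 200e9)  # ~200 GPa, typical steel
frp = rod_stretch_m(LOAD, LENGTH, DIAMETER, 50e9)     # ~50 GPa, typical fiberglass rod
print(f"steel stretch      = {steel:.2f} m")
print(f"fiberglass stretch = {frp:.2f} m")  # about four times the steel value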
Piping
GRP and GRE pipe can be used in a variety of above- and below-ground systems, including those for desalination, water treatment, water distribution networks, chemical process plants, water used for firefighting, hot and cold drinking water, wastewater/sewage, municipal waste and liquefied petroleum gas.
Boating
Fiberglass composite boats have been made since the early 1940s, and many sailing vessels made after 1950 were built using the fiberglass lay-up process. As of 2022, boats continue to be made with fiberglass, though more advanced techniques such as vacuum bag moulding are used in the construction process.
Armour
Though most bullet-resistant armour is made using other textile fibers, fiberglass composites have been shown to be effective as ballistic armor.
Construction methods
Filament winding
Filament winding is a fabrication technique mainly used for manufacturing open (cylinders) or closed-end structures (pressure vessels or tanks). The process involves winding filaments under tension over a male mandrel. The mandrel rotates while a wind eye on a carriage moves horizontally, laying down fibers in the desired pattern. The most common filaments are carbon or glass fiber and are coated with synthetic resin as they are wound. Once the mandrel is completely covered to the desired thickness, the resin is cured; often the mandrel is placed in an oven to achieve this, though sometimes radiant heaters are used with the mandrel still turning in the machine. Once the resin has cured, the mandrel is removed, leaving the hollow final product. For some products such as gas bottles, the 'mandrel' is a permanent part of the finished product forming a liner to prevent gas leakage or as a barrier to protect the composite from the fluid to be stored.
Filament winding is well suited to automation, and there are many applications, such as pipes and small pressure vessels, that are wound and cured without any human intervention. The controlled variables for winding are fiber type, resin content, wind angle, tow or bandwidth, and thickness of the fiber bundle. The angle at which the fiber is laid down has an effect on the properties of the final product. A high angle "hoop" will provide circumferential or "burst" strength, while lower angle patterns (polar or helical) will provide greater longitudinal tensile strength.
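The effect of wind angle can be made concrete with classical netting analysis, which assumes the fibers carry all of the load: a +/- alpha helical winding (measured from the cylinder axis) has a hoop-to-axial load-carrying ratio of tan^2(alpha). The Python sketch below is illustrative only; the angles are arbitrary examples.

# Minimal sketch of netting analysis for a filament-wound cylinder.
# Fibers at +/- alpha to the axis carry all the load, so the
# hoop/axial capability ratio is tan^2(alpha).
import math

def hoop_to_axial_ratio(alpha_deg):
    return math.tan(math.radians(alpha_deg)) ** 2

for alpha in (30.0, 45.0, 54.74, 75.0, 85.0):
    print(f"wind angle {alpha:5.2f} deg -> hoop/axial ratio {hoop_to_axial_ratio(alpha):7.2f}")

# A closed-end pressure vessel sees twice as much hoop stress as axial stress,
# so a balanced helical layup solves tan^2(alpha) = 2, i.e. alpha of about 54.7 deg;
# near-circumferential "hoop" windings mainly add burst strength.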
Products currently being produced using this technique range from pipes, golf clubs, reverse osmosis membrane housings, oars, bicycle forks, bicycle rims, power and transmission poles, and pressure vessels to missile casings, aircraft fuselages, lamp posts, and yacht masts.
Fiberglass hand lay-up operation
A release agent, usually in either wax or liquid form, is applied to the chosen mold to allow the finished product to be cleanly removed from the mold. Resin (typically a two-part thermoset polyester, vinyl ester, or epoxy) is mixed with its hardener and applied to the surface. Sheets of fiberglass matting are laid into the mold, then more resin mixture is added using a brush or roller. The material must conform to the mold, and air must not be trapped between the fiberglass and the mold. Additional resin is applied and possibly additional sheets of fiberglass. Hand pressure, vacuum or rollers are used to be sure the resin saturates and fully wets all layers, and that any air pockets are removed. The work must be done quickly before the resin starts to cure, unless high-temperature resins are used, which will not cure until the part is warmed in an oven. In some cases, the work is covered with plastic sheets and a vacuum is drawn on the work to remove air bubbles and press the fiberglass to the shape of the mold.
Fiberglass spray lay-up operation
The fiberglass spray lay-up process is similar to the hand lay-up process but differs in the application of the fiber and resin to the mold. Spray-up is an open-molding composites fabrication process where resin and reinforcements are sprayed onto a mold. The resin and glass may be applied separately or simultaneously "chopped" in a combined stream from a chopper gun. Workers roll out the spray-up to compact the laminate. Wood, foam or other core material may then be added, and a secondary spray-up layer embeds the core between the laminates. The part is then cured, cooled, and removed from the reusable mold.
Pultrusion operation
Pultrusion is a manufacturing method used to make strong, lightweight composite materials. In pultrusion, material is pulled through forming machinery using either a hand-over-hand method or a continuous-roller method (as opposed to extrusion, where the material is pushed through dies).
In fiberglass pultrusion, fibers (the glass material) are pulled from spools through a device that coats them with a resin. They are then typically heat-treated and cut to length. Fiberglass produced this way can be made in a variety of shapes and cross-sections, such as W or S cross-sections.
Health hazards
Exposure
People can be exposed to fiberglass in the workplace during its fabrication, installation or removal, by breathing it in, by skin contact, or by eye contact.
Furthermore, in the manufacturing process of fiberglass, styrene vapors are released while the resins cure. These also irritate the mucous membranes and respiratory tract.
The general population can be exposed to fiberglass from insulation and building materials, from fibers in the air near manufacturing facilities, or near building fires or implosions. The American Lung Association advises that fiberglass insulation should never be left exposed in an occupied area. Since work practices are not always followed, and fiberglass is often left exposed in basements that later become occupied, such exposure does occur. No readily usable biological or clinical indices of exposure exist.
Symptoms and signs, health effects
Fiberglass will irritate the eyes, skin, and respiratory system. Symptoms can therefore include itching of the eyes, skin, and nose, sore throat, hoarseness, dyspnea (difficulty breathing), and cough. Peak alveolar deposition was observed in rodents and humans for fibers with diameters of 1 to 2 μm.
In animal experiments, adverse lung effects such as lung inflammation and lung fibrosis have occurred, and increased incidences of mesothelioma, pleural sarcoma, and lung carcinoma have been found with intrapleural or intratracheal instillations in rats.
As of 2001, in humans only the more biopersistent materials like ceramic fibres, which are used industrially as insulation in high-temperature environments such as blast furnaces, and certain special-purpose glass wools not used as insulating materials remain classified as possible carcinogens (IARC Group 2B). The more commonly used glass fibre wools including insulation glass wool, rock wool and slag wool are considered not classifiable as to carcinogenicity to humans (IARC Group 3).
In October 2001, all fiberglass wools commonly used for thermal and acoustical insulation were reclassified by the International Agency for Research on Cancer (IARC) as "not classifiable as to carcinogenicity to humans" (IARC group 3). "Epidemiologic studies published during the 15 years since the previous IARC monographs review of these fibers in 1988 provide no evidence of increased risks of lung cancer or mesothelioma (cancer of the lining of the body cavities) from occupational exposures during the manufacture of these materials, and inadequate evidence overall of any cancer risk."
In June 2011, the US National Toxicology Program (NTP) removed from its Report on Carcinogens all biosoluble glass wool used in home and building insulation and for non-insulation products. However, NTP still considers fibrous glass dust to be "reasonably anticipated [as] a human carcinogen (Certain Glass Wool Fibers (Inhalable))". Similarly, California's Office of Environmental Health Hazard Assessment (OEHHA) published a November, 2011 modification to its Proposition 65 listing to include only "Glass wool fibers (inhalable and biopersistent)." Therefore a cancer warning label for biosoluble fiber glass home and building insulation is no longer required under federal or California law. As of 2012, the North American Insulation Manufacturers Association stated that fiberglass is safe to manufacture, install and use when recommended work practices are followed to reduce temporary mechanical irritation.
As of 2012, the European Union and Germany have classified synthetic glass fibers as possibly or probably carcinogenic, but fibers can be exempt from this classification if they pass specific tests. A 2012 health hazard review for the European Commission stated that inhalation of fiberglass at concentrations of 3, 16 and 30 mg/m3 "did not induce fibrosis nor tumours except transient lung inflammation that disappeared after a post-exposure recovery period."
Historic reviews of the epidemiology studies were conducted by Harvard's Medical and Public Health Schools in 1995, the National Academy of Sciences in 2000, the Agency for Toxic Substances and Disease Registry ("ATSDR") in 2004, and the National Toxicology Program in 2011, all of which reached the same conclusion as IARC: that there is no evidence of increased risk from occupational exposure to glass wool fibers.
Pathophysiology
Genetic and toxic effects are exerted through production of reactive oxygen species, which can damage DNA, and cause chromosomal aberrations, nuclear abnormalities, mutations, gene amplification in proto-oncogenes, and cell transformation in mammalian cells. There is also indirect, inflammation-driven genotoxicity through reactive oxygen species produced by inflammatory cells. The longer, thinner, and more durable (biopersistent) the fibers were, the more potent they were in causing damage.
Regulation, exposure limits
In the US, fine mineral fiber emissions are regulated by the EPA, while respirable fibers ("particulates not otherwise regulated") fall under the Occupational Safety and Health Administration (OSHA). OSHA has set the legal limit (permissible exposure limit) for fiberglass exposure in the workplace at 15 mg/m3 total dust and 5 mg/m3 for the respirable fraction over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 3 fibers/cm3 (fibers less than 3.5 micrometers in diameter and greater than 10 micrometers in length) as a time-weighted average over an 8-hour workday, and a 5 mg/m3 total limit.
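Limits of this kind are applied as 8-hour time-weighted averages (TWA). A minimal Python sketch of the TWA arithmetic, using made-up sampled concentrations and the 15 mg/m3 total-dust limit quoted above:

# Minimal sketch of an 8-hour time-weighted average (TWA) exposure calculation.
# The sampled concentrations are made-up illustrative values.

samples = [        # (hours sampled, measured concentration in mg/m3)
    (2.0, 4.0),
    (3.0, 12.0),
    (3.0, 6.0),
]

PEL_TOTAL = 15.0   # mg/m3, OSHA total-dust limit quoted above

twa = sum(hours * conc for hours, conc in samples) / 8.0
print(f"8-hour TWA = {twa:.2f} mg/m3")
print("within the PEL" if twa <= PEL_TOTAL else "exceeds the PEL")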
As of 2001, the Hazardous Substances Ordinance in Germany set a maximum occupational exposure limit of 86 mg/m3. At certain dust concentrations, a potentially explosive mixture may occur. Further processing of GRP components (grinding, cutting, sawing) creates fine dust and chips containing glass filaments, as well as tacky dust, in quantities high enough to affect health and the functionality of machines and equipment. The installation of effective extraction and filtration equipment is required to ensure safety and efficiency.
See also
Bulk moulding compound
Fiberglass sheet laminating
G-10 (material)
Glass fiber reinforced concrete
Hobas
Ignace Dubus-Bonnel
Sheet moulding compound
Carbon-fiber-reinforced polymers: reinforcement with carbon fibers
Vegetarianism
Vegetarianism is the practice of abstaining from the consumption of meat (red meat, poultry, seafood, insects, and the flesh of any other animal). It may also include abstaining from eating all by-products of animal slaughter. A person who practices vegetarianism is known as a vegetarian.
Vegetarianism may be adopted for various reasons. Many people object to eating meat out of respect for sentient animal life. Such ethical motivations have been codified under various religious beliefs as well as animal rights advocacy. Other motivations for vegetarianism are health-related, political, environmental, cultural, aesthetic, economic, taste-related, or relate to other personal preferences.
There are many variations of the vegetarian diet: an ovo-vegetarian diet includes eggs and a lacto-vegetarian diet includes dairy products, while a lacto-ovo vegetarian diet includes both. As the strictest of vegetarian diets, a vegan diet excludes all animal products, and can be accompanied by abstention from the use of animal-derived products, such as leather shoes.
Vegetarian diets pose some difficulties. For vitamin B12, depending on the presence or absence of eggs and dairy products in the diet or other reliable B12 sources, vegetarians may incur a nutritional deficiency. Packaged and processed foods may contain minor quantities of animal ingredients. While some vegetarians scrutinize product labels for such ingredients, others do not object to consuming them, or are unaware of their presence.
Etymology
The term "vegetarian" first came into written use in the early 19th century, when authors referred to a "vegetable regimen" diet. Historically, 'vegetable' could be used to refer to any type of edible vegetation. Modern dictionaries explain its origin as a compound of vegetable (adjective) and the suffix -arian (in the sense of agrarian). The term was popularized with the foundation of the Vegetarian Society in Manchester in 1847, although it may have appeared in print before 1847. The earliest occurrences of the term seem to be related to Alcott House (a school on the north side of Ham Common, London), which was opened in July 1838 by James Pierrepont Greaves. From 1841, it was known as A Concordium, or Industry Harmony College, and the institution then began to publish its own pamphlet, The Healthian, which provides some of the earliest appearances of the term "vegetarian".
History
The earliest records of vegetarianism come from the 9th century BCE, in teachings inculcating tolerance towards all living beings. Parshwanatha and Mahavira, the 23rd and 24th tirthankaras in Jainism, respectively, revived and advocated ahimsa and Jain vegetarianism between the 8th and 6th centuries BCE; this is the most comprehensive and strictest form of vegetarianism. In Indian culture, vegetarianism has been closely connected with the attitude of nonviolence towards animals (called ahimsa in India) for millennia and was promoted by religious groups and philosophers. The Ācārāṅga Sūtra from the 5th century BCE advocates Jain vegetarianism and forbids monks from walking on grass, in order to avoid injuring the blades and killing the small insects dwelling within. The ancient Indian work of the Tirukkuṟaḷ, dated before the 5th century CE, explicitly and unambiguously emphasizes shunning meat and non-killing as a common man's virtues. Chapter 26 of the Tirukkural, particularly couplets 251–260, deals exclusively with moral vegetarianism or veganism.
Among the Hellenes, Egyptians, and others, vegetarianism had medical or ritual purification purposes. Vegetarianism was also practiced in ancient Greece and the earliest reliable evidence for vegetarian theory and practice in Greece dates from the 6th century BCE. The Orphics, a religious movement spreading in Greece at that time, also practiced and promoted vegetarianism. Greek teacher Pythagoras, who promoted the altruistic doctrine of metempsychosis, may have practiced vegetarianism, but is also recorded as eating meat. A fictionalized portrayal of Pythagoras appears in Ovid's Metamorphoses, in which he advocates a form of strict vegetarianism. It was through this portrayal that Pythagoras was best known to English-speakers throughout the early modern period and, prior to the coinage of the word "vegetarianism", vegetarians were referred to in English as "Pythagoreans". Vegetarianism was also practiced about six centuries later in another instance (30 BCE–50 CE) in the northern Thracian region by the Moesi tribe (who inhabited present-day Serbia and Bulgaria), feeding themselves on honey, milk, and cheese.
In Japan in 675, the Emperor Tenmu prohibited the killing and the eating of meat during the busy farming period between April and September but excluded the eating of wild birds and wild animals. These bans and several others that followed over the centuries were overturned in the nineteenth century during the Meiji Restoration. In China, during the Song dynasty, Buddhist cuisine became popular enough that vegetarian restaurants appeared, where chefs used ingredients such as beans, gluten, root vegetables, and mushrooms to create meat analogues including pork, fowl, eggs, and crab roe. Many meat substitutes still used today, such as tofu, seitan, and konjac, originate in Chinese Buddhist cuisine.
Following the Christianization of the Roman Empire in late antiquity, vegetarianism practically disappeared from Europe, as it did elsewhere, except in India. Several orders of monks in medieval Europe restricted or banned the consumption of meat for ascetic reasons, but none of them eschewed fish. Moreover, the medieval definition of "fish" included such animals as seals, porpoises, dolphins, barnacle geese, puffins, and beavers. Vegetarianism re-emerged during the Renaissance, becoming more widespread in the 19th and 20th centuries. In 1847, the first Vegetarian Society was founded in the United Kingdom; Germany, the Netherlands, and other countries followed. In 1886, the vegetarian colony Nueva Germania was founded in Paraguay, though its vegetarian aspect would prove short-lived. The International Vegetarian Union, an association of the national societies, was founded in 1908. In the Western world, the popularity of vegetarianism grew during the 20th century as a result of nutritional, ethical, and—more recently—environmental and economic concerns.
Varieties
There are a number of vegetarian diets that exclude or include various foods:
Fruitarianism permits only fruit, nuts, seeds, and other plant matter that can be gathered without harming the plant.
Macrobiotic diets consist mostly of whole grains and beans.
Lacto vegetarianism includes dairy products but not eggs.
Ovo vegetarianism includes eggs but not dairy products.
Lacto-ovo vegetarianism (or ovo-lacto vegetarianism) includes animal products such as eggs, milk, and honey.
Sattvic diet (also known as yogic diet), a plant-based diet which may also include dairy and honey, but excludes eggs, red lentils, durian, mushrooms, alliums, blue cheeses, fermented foods or sauces, and alcoholic drinks. Coffee, black or green tea, chocolate, nutmeg, and any other type of stimulant (including excessively pungent spices) are sometimes excluded, as well.
Veganism excludes all animal flesh and by-products, such as eggs, milk, honey, edible bird's nest and items refined or manufactured through any such product, such as animal-tested baking soda or white sugar refined with bone char.
Raw veganism includes only fresh and uncooked fruit, nuts, seeds, and vegetables. Food must not be heated above a certain temperature to be considered "raw". Usually, raw vegan food is only ever "cooked" with a food dehydrator at low temperatures.
Within the "ovo-" groups, there are many who refuse to consume fertilized eggs (with balut being an extreme example); however, such distinction is typically not specifically addressed.
Some vegetarians also avoid products that may use animal ingredients not listed on their labels, or which use animal products in their manufacturing. Examples include sugars whitened with bone char, cheeses made with animal rennet (enzymes from animal stomach lining), gelatin (derived from the collagen in animals' skin, bones, and connective tissue), some cane sugar (but not beet sugar), and beverages (such as apple juice and alcohol) clarified with gelatin or crushed shellfish and sturgeon; other vegetarians are unaware of, or do not mind, such ingredients. In the 21st century, 90% of the rennet and chymosin used in cheesemaking is derived from industrial fermentation processes, which satisfy both kosher and halal requirements.
Individuals sometimes label themselves "vegetarian" while practicing a semi-vegetarian diet, as some dictionary definitions describe vegetarianism as sometimes including the consumption of fish, or only include mammalian flesh as part of their definition of meat, while other definitions exclude fish and all animal flesh. In other cases, individuals may describe themselves as "flexitarian".
These diets may be followed by those who reduce animal flesh consumed as a way of transitioning to a complete vegetarian diet or for health, ethical, environmental, or other reasons. Semi-vegetarian diets include:
Pescetarianism, which includes fish and possibly other forms of seafood.
Pollotarianism, which includes chicken and possibly other poultry.
Semi-vegetarianism is contested by vegetarian groups, such as the Vegetarian Society, which states that vegetarianism excludes all animal flesh.
Consumption of eggs is not considered part of a vegetarian diet in India, as an egg is an animal product from which the next generation of the species develops.
Health research
In Western countries, the most common motive for people practicing vegetarianism is health consciousness. The Academy of Nutrition and Dietetics has stated that at all stages of life, a properly planned vegetarian diet can be "healthful, nutritionally adequate, and may be beneficial in the prevention and treatment of certain diseases." Vegetarian diets offer lower levels of saturated fat, cholesterol and animal protein, and higher levels of carbohydrates, fibre, magnesium, potassium, folate, vitamins C and E, and phytochemicals.
Bones
Studies have shown that a (non-lacto) vegetarian diet may increase the risk of calcium deficiency and low bone mineral density. A 2019 review found that vegetarians have lower bone mineral density at the femoral neck and lumbar spine compared to omnivores. A 2020 meta-analysis found that infants fed a lacto-vegetarian diet exhibited normal growth and development. A 2021 review found no differences in growth between vegetarian and meat-eating children.
Diabetes
Vegetarian diets are under preliminary research for their potential to help people with type 2 diabetes.
Cardiovascular system
Meta-analyses have reported a reduced risk of death from ischemic heart disease and from cerebrovascular disease among vegetarians.
Mental health
Reviews of vegan and vegetarian diets showed a possible association with depression and anxiety, particularly among people under 26 years old. Another review found no significant associations between a vegetarian diet and depression or anxiety.
Eating disorders
The American Dietetic Association has noted that vegetarian diets may be more common among adolescents with eating disorders, indicating that vegetarian diets do not cause eating disorders, but rather that "vegetarian diets may be selected to camouflage an existing eating disorder".
Mortality risk
A 2012 study found a reduced risk in all-cause mortality in vegetarians. A 2017 review found a lower mortality (−25%) from ischemic heart disease.
Diet composition and nutrition
Western vegetarian diets are typically high in carotenoids, but relatively low in omega-3 fatty acids and vitamin B12. Vegans can have a particularly low intake of vitamin B12 and calcium if they do not eat enough items such as collard greens, leafy greens, tempeh and tofu (soy). High levels of dietary fiber, folic acid, vitamins C and E, and magnesium, and low consumption of saturated fat are all considered to be beneficial aspects of a vegetarian diet. A well-planned vegetarian diet will provide all nutrients in a meat-eater's diet to the same level for all stages of life.
Protein
Protein intake in vegetarian diets tends to be lower than in meat diets but can meet the daily requirements for most people. Studies at Harvard University as well as other studies conducted in the United States, United Kingdom, Canada, Australia, New Zealand, and various European countries, confirmed that vegetarian diets provide sufficient protein intake as long as a variety of plant sources are available and consumed.
Iron
Vegetarian diets typically contain similar levels of iron to non-vegetarian diets, but this has lower bioavailability than iron from meat sources, and its absorption can sometimes be inhibited by other dietary constituents. According to the Vegetarian Resource Group, consuming food that contains vitamin C, such as citrus fruit or juices, tomatoes, or broccoli, is a good way to increase the amount of iron absorbed at a meal. Vegetarian foods rich in iron include black beans, cashews, hempseed, kidney beans, broccoli, lentils, oatmeal, raisins, jaggery, spinach, cabbage, lettuce, black-eyed peas, soybeans, many breakfast cereals, sunflower seeds, chickpeas, tomato juice, tempeh, molasses, thyme, and whole-wheat bread. The related vegan diets can often be higher in iron than vegetarian diets, because dairy products are low in iron. Iron stores often tend to be lower in vegetarians than non-vegetarians, and a few small studies report very high rates of iron deficiency (up to 40%, and 58% of the respective vegetarian or vegan groups). However, the American Dietetic Association states that iron deficiency is no more common in vegetarians than non-vegetarians (adult males are rarely iron deficient); iron deficiency anaemia is rare no matter the diet.
Vitamin B12
Vitamin B12 is not generally present in plants but is naturally found in foods of animal origin. Lacto-ovo vegetarians can obtain B12 from dairy products and eggs, and vegans can obtain it from manufactured fortified foods (including plant-based products and breakfast cereals) and dietary supplements. A strict vegan diet avoiding consumption of all animal products risks vitamin B12 deficiency, which can lead to hyperhomocysteinemia, a risk factor for several health disorders, including anemia, neurological deficits, gastrointestinal problems, platelet disorders, and increased risk for cardiovascular diseases. The recommended daily dietary intake of B12 in the United States and Canada is 0.4 mcg (ages 0–6 months), rising to 1.8 mcg (9–13 years), 2.4 mcg (14+ years), and 2.8 mcg (lactating female). While the body's daily requirement for vitamin B12 is in microgram amounts, deficiency of the vitamin through strict practice of a vegetarian diet without supplementation can increase the risk of several chronic diseases.
Fatty acids
Plant-based, or vegetarian, sources of Omega 3 fatty acids include soy, walnuts, pumpkin seeds, canola oil, kiwifruit, hempseed, algae, chia seed, flaxseed, echium seed and leafy vegetables such as lettuce, spinach, cabbage and purslane. Purslane contains more Omega 3 than any other known leafy green. Olives (and olive oil) are another important plant source of unsaturated fatty acids. Plant foods can provide alpha-linolenic acid, which the human body uses to synthesize the long-chain n-3 fatty acids EPA and DHA. EPA and DHA can be obtained directly in high amounts from oily fish, fish oil, or algae oil. Vegetarians, and particularly vegans, have lower levels of EPA and DHA than meat-eaters. While the health effects of low levels of EPA and DHA are unknown, it is unlikely that supplementation with alpha-linolenic acid will significantly increase levels. Significantly, for vegetarians, certain algae such as spirulina are good sources of gamma-linolenic acid (GLA), alpha-linolenic acid (ALA), linoleic acid (LA), stearidonic acid (SDA), eicosapentaenoic acid (EPA), docosahexaenoic acid (DHA), and arachidonic acid (AA).
Calcium
Calcium intake in vegetarians and vegans can be similar to non-vegetarians, as long as the diet is properly planned. Lacto-ovo vegetarians that include dairy products can still obtain calcium from dairy sources like milk, yogurt, and cheese.
Non-dairy milks that are fortified with calcium, such as soymilk and almond milk, can also contribute a significant amount of calcium to the diet. Broccoli, bok choy, and kale have also been found to have calcium that is well absorbed in the body. Though the calcium content per serving is lower in these vegetables than in a glass of milk, the absorption of the calcium into the body is higher. Other foods that contain calcium include calcium-set tofu, blackstrap molasses, turnip greens, mustard greens, soybeans, tempeh, almonds, okra, dried figs, and tahini. Though calcium can be found in spinach, Swiss chard, beans and beet greens, these are generally not considered to be good sources, since the calcium binds to oxalic acid and is poorly absorbed into the body. Phytic acid found in nuts, seeds, and beans may also impact calcium absorption rates. See the National Institutes of Health Office of Dietary Supplements for calcium needs for various ages, and the Vegetarian Resource Group and the Vegetarian Nutrition Calcium Fact Sheet from the Academy of Nutrition and Dietetics for more specifics on how to obtain adequate calcium intake on a vegetarian or vegan diet.
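The distinction between calcium content and calcium actually absorbed can be sketched by multiplying each serving by a fractional-absorption estimate. The figures in the Python sketch below are rough, literature-style assumptions used only for illustration, not values taken from this article.

# Minimal sketch: "absorbable" calcium = content per serving * fractional absorption.
# All figures are rough illustrative assumptions.

servings = {
    # food: (calcium per serving in mg, assumed fractional absorption)
    "cow's milk (1 cup)":       (300, 0.32),
    "calcium-set tofu (1/2 c)": (250, 0.31),
    "bok choy (1/2 c cooked)":  (80, 0.54),
    "spinach (1/2 c cooked)":   (115, 0.05),  # oxalate binds most of the calcium
}

for food, (mg, fraction) in servings.items():
    print(f"{food:26s} ~{mg * fraction:5.1f} mg absorbed")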
Vitamin D
Vitamin D needs can be met via the human body's own generation upon sufficient and sensible exposure to ultraviolet (UV) light in sunlight. Products including milk, soy milk and cereal grains may be fortified to provide a source of vitamin D. For those who do not get adequate sun exposure or food sources, vitamin D supplementation may be necessary.
Vitamin D2
Plants
Alfalfa (Medicago sativa subsp. sativa), shoot: 4.8 μg (192 IU) vitamin D2, 0.1 μg (4 IU) vitamin D3
Fungus, from USDA nutrient database, per 100 g:
Mushrooms, portabella, exposed to ultraviolet light, raw: Vitamin D2: 11.2 μg (446 IU)
Mushrooms, portabella, exposed to ultraviolet light, grilled: Vitamin D2: 13.1 μg (524 IU)
Mushrooms, shiitake, dried: Vitamin D2: 3.9 μg (154 IU)
Mushrooms, shiitake, raw: Vitamin D2: 0.4 μg (18 IU)
Mushrooms, portabella, raw: Vitamin D2: 0.3 μg (10 IU)
Mushroom powder, any species, illuminated with sunlight or artificial ultraviolet light sources
Vitamin D2, or ergocalciferol, is found in fungi (alfalfa, a plant, being the exception above) and is created from viosterol, which in turn is created when ultraviolet light activates ergosterol (a sterol found in fungi and named after ergot). Any UV-irradiated fungus, including yeast, forms vitamin D2. According to one study, vitamin D2 from button mushrooms enhanced via UV-B irradiation is bioavailable in humans, is effective in improving vitamin D status, and is no different from a vitamin D2 supplement. For example, vitamin D2 from UV-irradiated yeast baked into bread is bioavailable.
By visual assessment or using a chromometer, no significant discoloration of irradiated mushrooms, as measured by the degree of "whiteness", was observed, making it hard to tell whether they have been treated without labeling. Claims have been made that a normal serving (approximately 3 oz, 1/2 cup, or 60 grams) of mushrooms exposed to just 5 minutes of ultraviolet light after being harvested increases their vitamin D content to levels of up to 80 micrograms, or 2700 IU.
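The microgram and IU figures listed above are linked by the standard vitamin D conversion of 1 microgram = 40 IU (small mismatches in the listed pairs presumably reflect rounding in the underlying data). A minimal sketch:

# Standard vitamin D unit conversion: 1 microgram = 40 IU.

def micrograms_to_iu(micrograms):
    return micrograms * 40.0

for ug in (0.4, 3.9, 4.8, 11.2, 13.1):
    print(f"{ug:5.1f} ug vitamin D2 is about {micrograms_to_iu(ug):5.0f} IU")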
Choline
Choline is a nutrient that helps transfer signals between nerve cells and is involved in liver function. It is highest in dairy foods and meat, but it can also be obtained through a vegan diet.
Ethics and diet
General
With regard to the ethics of eating meat, scholars consider vegetarianism an ideology and a social movement. Ethical reasons for choosing vegetarianism vary and are usually predicated on the interests of non-human animals. In many societies, controversies and debates have arisen over the ethics of eating animals. Some people, while not vegetarians, refuse to eat the flesh of certain animals due to cultural taboo, such as cats, dogs, horses or rabbits. Others support meat eating for scientific, nutritional and cultural reasons, including religious ones. Some meat eaters abstain from the meat of animals reared in particular ways, such as factory farms, or avoid certain meats, such as veal or foie gras. Some people follow vegetarian or vegan diets not because of moral concerns involving the raising or consumption of animals in general, but because of concerns about the specific treatment and practices involved in the processing of animals for food. Others still avoid meat out of concern that meat production places a greater burden on the environment than production of an equivalent amount of plant protein. Ethical objections based on consideration for animals are generally divided into opposition to the act of killing in general, and opposition to certain agricultural practices surrounding the production of meat.
Ethics of killing for food
Ethical vegetarians believe that killing an animal, like killing a human being (especially one with cognitive abilities equal to or lesser than the animal in question), can only be justified in extreme circumstances, and that consuming a living creature for its enjoyable taste, convenience, or nutritional value is not a sufficient cause. Another common view is that humans are morally conscious of their behavior in a way other animals are not, and therefore subject to higher standards. Jeff McMahan proposes that denying the right to life and humane treatment to animals with equal or greater cognitive abilities than mentally disabled humans is an arbitrary and discriminatory practice based on habit instead of logic. Opponents of ethical vegetarianism argue that animals are not moral equals to humans and so consider the comparison of eating livestock with killing people to be fallacious. This view does not excuse cruelty, but maintains that animals do not possess the rights a human has.
Dairy and eggs
One of the main differences between a vegan and a lacto-ovo vegetarian diet is the avoidance of both eggs and dairy products such as milk, cheese, butter and yogurt. Ethical vegans do not consume dairy or eggs because they state that their production causes the animal suffering or a premature death.
To produce milk from dairy cattle, farmers separate calves from their mothers soon after birth to retain cow milk for human consumption.
Treatment of animals
Ethical vegetarianism has become popular in developed countries particularly because of the spread of factory farming and environmental consciousness. Some believe that the current mass-demand for meat cannot be satisfied without a mass-production system that disregards the welfare of animals, while others believe that practices like well-managed free-range farming or the consumption of game (particularly from species whose natural predators have been significantly eliminated) could substantially alleviate consumer demand for mass-produced meat.
Religion and diet
Jainism teaches vegetarianism as moral conduct, as do some sects of Hinduism. Buddhism in general does not prohibit meat eating, but Mahayana Buddhism encourages vegetarianism as beneficial for developing compassion. Other denominations that advocate a vegetarian diet include the Seventh-day Adventists, the Rastafari movement, the Ananda Marga movement and the Hare Krishnas. Sikhism does not equate spirituality with diet and does not specify a vegetarian or meat diet.
Baháʼí Faith
While there are no dietary restrictions in the Baháʼí Faith, `Abdu'l-Bahá, the son of the religion's founder, noted that a vegetarian diet consisting of fruits and grains was desirable, except for people with a weak constitution or those that are sick. He stated that there are no requirements that Baháʼís become vegetarian, but that a future society should gradually become vegetarian. `Abdu'l-Bahá also stated that killing animals was contrary to compassion. While Shoghi Effendi, the head of the Bahá'í Faith in the first half of the 20th century, stated that a purely vegetarian diet would be preferable since it avoided killing animals, both he and the Universal House of Justice, the governing body of the Baháʼís have stated that these teachings do not constitute a Baháʼí practice and that Baháʼís can choose to eat whatever they wish but should be respectful of others' beliefs.
Buddhism
Theravadins in general eat meat. If Buddhist monks "see, hear or know" a living animal was killed specifically for them to eat, they must refuse it or else incur an offense. However, this does not include eating meat which was given as alms or commercially purchased. In the Theravada canon, Shakyamuni Buddha did not make any comment discouraging them from eating meat (except specific types, such as human, elephant, horse, dog, snake, lion, tiger, leopard, bear, and hyena flesh) but he specifically refused to institute vegetarianism in his monastic code when a suggestion had been made.
In several Sanskrit texts of Mahayana Buddhism, Buddha instructs his followers to avoid meat. However, each branch of Mahayana Buddhism selects which sutra to follow, and some branches, including the majority of Tibetan and Japanese Buddhists, actually do eat meat.
Meanwhile, monks and nuns in Chinese, Korean, and Vietnamese Buddhism (in some sectors of East Asian Buddhism) are expected to abstain from meat and, traditionally, to abstain from eggs and dairy as well.
Different Buddhist traditions have differing teachings on diet, which may also vary for ordained monks and nuns compared to others. Many interpret the precept "not to kill" to require abstinence from meat, but not all. In Taiwan, su vegetarianism excludes not only all animal products but also vegetables in the allium family (which have the characteristic aroma of onion and garlic): onion, garlic, scallions, leeks, chives, or shallots.
Christianity
Various groups within Christianity have practiced specific dietary restrictions for various reasons. The Council of Jerusalem in around 50 AD, recommended Christians keep following some of the Jewish food laws concerning meat. The early sect known as the Ebionites are considered to have practiced vegetarianism. Surviving fragments from their Gospel indicate their belief that – as Christ is the Passover sacrifice and eating the Passover lamb is no longer required – a vegetarian diet may (or should) be observed. However, orthodox Christianity does not accept their teaching as authentic. Indeed, their specific injunction to strict vegetarianism was cited as one of the Ebionites' "errors".
At a much later time, the Bible Christian Church founded by Reverend William Cowherd in 1809 followed a vegetarian diet. Cowherd was one of the philosophical forerunners of the Vegetarian Society. Cowherd encouraged members to abstain from eating of meat as a form of temperance.
Seventh-day Adventists are encouraged to engage in healthy eating practices, and lacto-ovo-vegetarian diets are recommended by the General Conference of Seventh-day Adventists Nutrition Council (GCNC). They have also sponsored and participated in many scientific studies exploring the impact of dietary decisions upon health outcomes. The GCNC has in addition adapted the USDA's food pyramid for a vegetarian dietary approach. However, the only kinds of meat specifically frowned upon by the SDA health message are unclean meats, or those forbidden in scripture.
Additionally, some monastic orders follow a pescatarian diet, and members of the Eastern Orthodox Church follow a vegan diet during fasts. There is also a strong association between the Quakers and vegetarianism dating back at least to the 18th century. The association grew in prominence during the 19th century, coupled with growing Quaker concerns in connection with alcohol consumption, anti-vivisection and social purity. The association between the Quaker tradition and vegetarianism, however, becomes most significant with the founding of the Friends' Vegetarian Society in 1902 "to spread a kindlier way of living amongst the Society of Friends."
Seventh-day Adventist
The Seventh-day Adventist Church is well known for presenting a health message that recommends vegetarianism and expects adherence to the kosher laws in Leviticus 11. Obedience to these laws means abstinence from pork, shellfish, and other animals proscribed as "unclean". The church discourages its members from consuming alcoholic beverages, tobacco or illegal drugs (compare Christianity and alcohol). In addition, some Adventists avoid coffee, tea, cola, and other beverages containing caffeine.
The pioneers of the Adventist Church had much to do with the common acceptance of breakfast cereals into the Western diet, and the "modern commercial concept of cereal food" originated among Adventists. John Harvey Kellogg was one of the early founders of Adventist health work. His development of breakfast cereals as a health food led to the founding of Kellogg's by his brother William. In both Australia and New Zealand, the church-owned Sanitarium Health and Wellbeing Company is a leading manufacturer of health and vegetarian-related products, most prominently Weet-Bix. Kellogg encouraged his students Daniel H. Kress and Lauretta E. Kress to study medicine together at the University of Michigan Medical School and become public advocates of vegetarianism; together they published an important vegetarian cookbook and became early founders of what was later Washington Adventist Hospital.
Research funded by the U.S. National Institutes of Health has shown that the average Adventist in California lives 4 to 10 years longer than the average Californian. The research, as cited by the cover story of the November 2005 issue of National Geographic, asserts that Adventists live longer because they do not smoke or drink alcohol, have a day of rest every week, and maintain a healthy, low-fat vegetarian diet that is rich in nuts and beans. The cohesiveness of Adventists' social networks has also been put forward as an explanation for their extended lifespan.
Since Dan Buettner's 2005 National Geographic story about Adventist longevity, his book, The Blue Zones: Lessons for Living Longer From the People Who've Lived the Longest, named Loma Linda, California, a "blue zone" because of the large concentration of Seventh-day Adventists. He cites the Adventist emphasis on health, diet, and Sabbath-keeping as primary factors for Adventist longevity.
An estimated 35% of Adventists practice vegetarianism or veganism, according to a 2002 worldwide survey of local church leaders. North American Adventist health study recruitments from 2001 to 2007 found a similar prevalence of vegetarianism/veganism. A small majority of Adventists, 54%, were conventional meat-eaters. Of the remaining 46% it was found that 28% were Ovo/Lacto-vegetarians, 10% were Pesco-vegetarians and 8% were vegans. It is common for Adventists who choose to eat meat to also eat plant-based foods; 6% of the "meat-eaters" group restricted their intake of meat/fish to no more than once per week.
Hinduism
Though there is no strict rule on what to consume and what not to, the food habits of Hindus vary according to their specific caste and sub-caste, community, location, custom and varying traditions. Historically and currently, a majority of Hindus (about 70%) eat meat, while a large proportion of Hindus are vegetarian (about 30%).
Some sects of Hinduism such as Vaishnavism follow the purest form of vegetarianism as an ideal, while Shaktism and Tantric sects freely consume chicken, mutton (goat and sheep meat), fish and eggs. The reasons stated by Jains and Vaishnavas are: the principle of nonviolence (ahimsa) applied to animals; the intention to offer only "pure" (vegetarian) food to a deity and then to receive it back as prasada; and the conviction that a sattvic diet is beneficial for a healthy body. A sattvic diet is lacto-vegetarian, which includes dairy but excludes eggs. An overwhelming majority of Hindus consider the cow to be a holy and sacred animal whose slaughter for meat is forbidden. Thus, beef is taboo for the majority of Hindus, Jains and Sikhs.
Islam
Some followers of Islam, or Muslims, choose to be vegetarian for health, ethical, or personal reasons. However, the choice to become vegetarian for non-medical reasons can sometimes be controversial due to conflicting fatwas and differing interpretations of the Quran. Though some more traditional Muslims may keep quiet about their vegetarian diet, the number of vegetarian Muslims is increasing.
Notable Muslim vegetarians include the Sri Lankan Sufi master Bawa Muhaiyaddeen, who established The Bawa Muhaiyaddeen Fellowship of North America in Philadelphia. The former Indian president Dr. A. P. J. Abdul Kalam was also famously a vegetarian.
In January 1996, The International Vegetarian Union announced the formation of the Muslim Vegetarian/Vegan Society.
Many non-vegetarian Muslims will select vegetarian (or seafood) options when dining in non-halal restaurants. However, this is a matter of not having the right kind of meat rather than preferring not to eat meat on the whole.
Jainism
Followers of Jainism believe that all living organisms, including microorganisms, are living and have a soul, and have one or more senses out of five senses. They go to great lengths to minimise any harm to any living organism. Most Jains are lacto-vegetarians, but more devout Jains do not eat root vegetables, because they believe that root vegetables contain many more microorganisms as compared to other vegetables, and that, by eating them, violence against these microorganisms is inevitable. They therefore prefer eating beans and fruits, whose cultivation involves killing fewer microorganisms. No products obtained from already-dead animals are allowed because of potential violence against decomposing microorganisms. Some particularly dedicated individuals are fruitarians. Honey is forbidden, being the regurgitation of nectar by bees and potentially containing eggs, excreta and dead bees. Many Jains do not consume plant parts that grow underground such as roots and bulbs, because the plants themselves and tiny animals may be killed when the plants are pulled up.
Judaism
While classical Jewish law neither requires nor prohibits the consumption of meat, Jewish vegetarians often cite Jewish principles regarding animal welfare, environmental ethics, moral character, and health as reasons for adopting a vegetarian or vegan diet.
Rabbis may advocate vegetarianism or veganism primarily because of concerns about animal welfare, especially in light of the traditional prohibition on causing unnecessary "pain to living creatures" (tza'ar ba'alei hayyim). Some Jewish vegetarian groups and activists believe that the halakhic permission to eat meat is a temporary leniency for those who are not ready yet to accept the vegetarian diet.
The book of Daniel starts in its first chapter with the benefits of vegetarianism. Due to its size, its late time of origin, and its revealing content, the book is of particular importance for the period of the subsequent exile, which has now lasted 2,000 years and technically continues until the Temple in Jerusalem is rebuilt. A diet described as "pulse and water" is presented along with benefits such as accordance with the biblical dietary laws, health, beauty, wisdom and vision. Vegetarianism can be seen as a safeguard around the dietary laws or as a beautification of them.
Jewish vegetarianism and veganism have become especially popular among Israeli Jews. In 2016, Israel was described as "the most vegan country on Earth", as five percent of its population eschewed all animal products. Interest in veganism has grown among both non-Orthodox and Orthodox Jews in Israel.
Rastafari
Within the Afro-Caribbean community, a minority are Rastafari and follow the dietary regulations with varying degrees of strictness. The most orthodox eat only "Ital" or natural foods, in which the matching of herbs or spices with vegetables is the result of long tradition originating from the African ancestry and cultural heritage of Rastafari. "Ital", which is derived from the word vital, means essential to human existence. Ital cooking in its strictest form prohibits the use of salt, meat (especially pork), preservatives, colorings, flavorings and anything artificial. Most Rastafari are vegetarian.
Sikhism
The tenets of Sikhism do not advocate a particular stance on either vegetarianism or the consumption of meat, but leave the decision of diet to the individual. The tenth guru, Guru Gobind Singh, however, prohibited "Amritdhari" Sikhs, or those that follow the Sikh Rehat Maryada (the Official Sikh Code of Conduct) from eating Kutha meat, or meat which has been obtained from animals which have been killed in a ritualistic way. This is understood to have been for the political reason of maintaining independence from the then-new Muslim hegemony, as Muslims largely adhere to the ritualistic halal diet.
"Amritdharis" that belong to some Sikh sects (e.g. Akhand Kirtani Jatha, Damdami Taksal, Namdhari and Rarionwalay, etc.) are vehemently against the consumption of meat and eggs (though they do consume and encourage the consumption of milk, butter and cheese). This vegetarian stance has been traced back to the times of the British Raj, with the advent of many new Vaishnava converts. In response to the varying views on diet throughout the Sikh population, Sikh Gurus have sought to clarify the Sikh view on diet, stressing their preference only for simplicity of diet. Guru Nanak said that over-consumption of food (Lobh, Greed) involves a drain on the Earth's resources and thus on life. Passages from the Guru Granth Sahib (the holy book of Sikhs, also known as the Adi Granth) say that it is "foolish" to argue for the superiority of animal life, because though all life is related, only human life carries more importance: "Only fools argue whether to eat meat or not. Who can define what is meat and what is not meat? Who knows where the sin lies, being a vegetarian or a non-vegetarian?" The Sikh langar, or free temple meal, is largely lacto-vegetarian, though this is understood to be a result of efforts to present a meal that is respectful of the diets of any person who would wish to dine, rather than out of dogma.
Environment and diet
Environmental vegetarianism is based on the concern that the production of meat and animal products for mass consumption, especially through factory farming, is environmentally unsustainable. According to a 2006 United Nations initiative, the livestock industry is one of the largest contributors to environmental degradation worldwide, and modern practices of raising animals for food contribute on a "massive scale" to air and water pollution, land degradation, climate change, and loss of biodiversity. The initiative concluded that "the livestock sector emerges as one of the top two or three most significant contributors to the most serious environmental problems, at every scale from local to global."
In addition, animal agriculture is a large source of greenhouse gases. According to a 2006 report it is responsible for 18% of the world's greenhouse gas emissions as estimated in 100-year CO2 equivalents. Livestock sources (including enteric fermentation and manure) account for about 3.1 percent of US anthropogenic GHG emissions expressed as carbon dioxide equivalents. This EPA estimate is based on methodologies agreed to by the Conference of Parties of the UNFCCC, with 100-year global warming potentials from the IPCC Second Assessment Report used in estimating GHG emissions as carbon dioxide equivalents.
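The carbon dioxide equivalent convention mentioned above simply weights each gas by its 100-year global warming potential; the IPCC Second Assessment Report values are 21 for methane and 310 for nitrous oxide. A minimal Python sketch with made-up emission masses:

# Minimal sketch of a CO2-equivalent calculation using 100-year global warming
# potentials from the IPCC Second Assessment Report. Emission masses are made up.

GWP100_SAR = {"CO2": 1, "CH4": 21, "N2O": 310}

emissions_tonnes = {"CO2": 1000.0, "CH4": 80.0, "N2O": 5.0}  # illustrative only

co2e = sum(mass * GWP100_SAR[gas] for gas, mass in emissions_tonnes.items())
print(f"total = {co2e:.0f} tonnes CO2-equivalent")  # 1000*1 + 80*21 + 5*310 = 4230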
Meat produced in a laboratory (called in vitro meat) may be more environmentally sustainable than regularly produced meat. Reactions of vegetarians vary. Rearing a relatively small number of grazing animals can be beneficial, as the Food Climate Research Network at Surrey University reports: "A little bit of livestock production is probably a good thing for the environment".
In May 2009, Ghent, Belgium, was reported to be "the first [city] in the world to go vegetarian at least once a week" for environmental reasons, when local authorities decided to implement a "weekly meatless day". Civil servants would eat vegetarian meals one day per week, in recognition of the United Nations' report. Posters were put up by local authorities to encourage the population to take part on vegetarian days, and "veggie street maps" were printed to highlight vegetarian restaurants. In September 2009, schools in Ghent were also due to introduce a weekly veggiedag ("vegetarian day").
Public opinion and acceptance of meat-free food is expected to be more successful if its descriptive words focus less on the health aspects and more on the flavor.
Labor conditions and diet
Some groups, such as PETA, promote vegetarianism as a way to offset poor treatment and working conditions of workers in the contemporary meat industry. These groups cite studies showing the psychological damage caused by working in the meat industry, especially in factory and industrialised settings, and argue that the meat industry violates its labourers' human rights by assigning difficult and distressing tasks without adequate counselling, training and debriefing. However, the working conditions of agricultural workers as a whole, particularly non-permanent workers, remain poor and well below conditions prevailing in other economic sectors. Accidents, including pesticide poisoning, among farmers and plantation workers contribute to increased health risks, including increased mortality. According to the International Labour Organization, agriculture is one of the three most dangerous jobs in the world.
Economics and diet
Similar to environmental vegetarianism is the concept of economic vegetarianism. An economic vegetarian is someone who practices vegetarianism from a philosophical viewpoint concerning issues such as public health and curbing world starvation, from the belief that the consumption of meat is economically unsound, as part of a conscious simple living strategy, or simply out of necessity. According to the Worldwatch Institute, "Massive reductions in meat consumption in industrial nations will ease their health care burden while improving public health; declining livestock herds will take pressure off rangelands and grainlands, allowing the agricultural resource base to rejuvenate. As populations grow, lowering meat consumption worldwide will allow more efficient use of declining per capita land and water resources, while at the same time making grain more affordable to the world's chronically hungry." According to estimates in 2016, adoption of vegetarianism would contribute substantially to global healthcare and environmental savings.
Demographics
Prejudice researcher Gordon Hodson argues that vegetarians and vegans frequently face discrimination where eating meat is held as a cultural norm.
Turnover
Research suggests that, at least in the United States, vegetarianism has a high turnover rate, with less than 20% of adopters persisting for more than a year. Research also shows that a lack of social support contributes to lapses. A 2019 analysis found that adhering to any kind of restricted diet (gluten-free, vegetarian, kosher, teetotal) was associated with feelings of loneliness and increased social isolation.
Vegetarians or vegans who adopted their diet abruptly might be more likely to eventually abandon it when compared to individuals adopting their diet gradually with incremental changes.
Country-specific information
The rate of vegetarianism by country varies substantially from relatively low levels in countries such as the Netherlands (5%) to more considerable levels in India (20–40%). Estimates for the number of vegetarians per country can be subject to methodological difficulties, as respondents may identify as vegetarian even if they include some meat in their diet, and thus some researchers suggest the percentage of vegetarians may be significantly overestimated.
Media
Vegetarianism is occasionally depicted in mass media. Some scholars have argued that mass media serves as a "source of information for individuals" interested in vegetarianism or veganism, while there are "increasing social sanctions against eating meat". Over time, societal attitudes of vegetarianism have changed, as have perceptions of vegetarianism in popular culture, leading to more "vegetarian sentiment". Even so, there are still existing "meat-based" food metaphors which infuse daily speech, and those who are vegetarian and vegan are met with "acceptance, tolerance, or hostility" after they divulge they are vegetarian or vegan. Some writers, such as John L. Cunningham, editor of the Vegetarian Resource Group's newsletter, have argued for "more sympathetic vegetarian characters in the mass media".
Literature
In Western literature, vegetarianism, and topics that relate to it, have informed a "gamut of literary genres", whether literary fiction or those fictions focusing on utopias, dystopias, or apocalypses, with authors shaped by questions about human identity and "our relation to the environment", implicating vegetarianism and veganism. Others have pointed to the lack of "memorable characters" who are vegetarian. There are also vegetarian themes in horror fiction, science fiction and poetry.
In 1818, Mary Shelley published the novel Frankenstein. Writer and animal rights advocate Carol J. Adams argued in her seminal book, The Sexual Politics of Meat, that the unnamed creature in the novel was a vegetarian. She argued that the book was "indebted to the vegetarian climate" of its day and that vegetarianism is a major theme in the novel as a whole. She notes that the creature gives an "emotional speech" about its dietary principles, which makes it a "more sympathetic being" than others. She also said that the novel connected with Romantic-era vegetarians who believed that the Garden of Eden was meatless, with retellings of the myth of Prometheus, with the ideas of Jean-Jacques Rousseau, and with feminist symbolism. Adams concludes that the "vegetarian revelations" in the novel are more likely "silenced" due to the lack of a "framework into which we can assimilate them." Apart from Adams, scholar Suzanne Samples pointed to "gendered spaces of eating and consumption" within Victorian England which influenced literary characters of the time, in works such as Alfred, Lord Tennyson's poem The Charge of the Light Brigade, Christina Rossetti's volume of poetry Goblin Market and Other Poems, Lewis Carroll's Alice's Adventures in Wonderland, Mary Seacole's autobiographical account Wonderful Adventures of Mrs. Seacole in Many Lands, and Anthony Trollope's novel Orley Farm. Samples also argued that vegetarianism in the Victorian era "presented a unique lifestyle choice that avoided meat but promoted an awareness of health", which initially was seen as rebellious but later became more normalized.
In Irene Clyde's 1909 feminist utopian novel, Beatrice the Sixteenth, Mary Hatherley accidentally travels through time, discovering a lost world, which is a postgender society named Armeria, with the inhabitants following a strict vegetarian diet, having ceased to slaughter animals for over a thousand years. Some reviewers of the book praised the vegetarianism of the Armerians.
James Joyce's 1922 novel, Ulysses is said to have vegetarian themes. Scholar Peter Adkins argued that while Joyce was critical of the vegetarianism of George A.E. Russell, the novel engages with "questions of animal ethics through its portrayal of Ireland's cattle industry, animal slaughter and the cultural currency of meat," unlike some of his other novels. He also stated that the novel "historicizes and theorizes animal life and death," and that it demonstrates the ways that symbolism and materiality of meat are "co-opted within patriarchal political structures," putting it in the same space as theorists like Carol J. Adams, Donna J. Haraway, Laura Wright, and Cary Wolfe, and writers such as J. M. Coetzee.
In 1997, S. Reneé Wheeler wrote in the Vegetarian Journal, saying that "finding books with vegetarian themes" is important for helping children "feel legitimate in being vegetarian." In 2004, writer J. M. Coetzee argued that since the "mode of consciousness of nonhuman species is quite different from human consciousness," it is hard for writers to realize this for animals, with a "temptation to project upon them feelings and thoughts that may belong only to our own human mind and heart," and stated that reviewers have ignored the presence of animals in his books. He also stated that animals are present in his "fiction either not at all or in a merely subsidiary role" because they occupy "a subsidiary place in our lives" and argued that it is not "possible to write about the inner lives of animals in any complex way."
In 2014, The New Yorker published a short story by Jonathan Lethem titled "Pending Vegan" which follows "one family, a husband and wife and their four-year-old twin daughters" on a trip to SeaWorld in San Diego, California. The protagonist of the story, Paul Espeseth, renames himself "Pending Vegan" in order to acknowledge his "increasing uneasiness with the relationship between man and beast."
In 2016, a three-part Korean novel by Han Kang titled The Vegetarian was published in the U.S. It focuses on a woman named Young-hye, who "sees vegetarianism as a way of not inflicting harm on anything," with eating meat symbolizing human violence itself, and who later identifies as a plant rather than as a human "and stops eating entirely." Some argued the book was more about mental illness than vegetarianism. Others compared it to fictional works by Margaret Atwood.
Television
Vegetarians, and vegetarian themes, have appeared in various TV shows, such as Buffy the Vampire Slayer, True Blood, The Simpsons, King of the Hill, and South Park.
Mr. Spock of Star Trek has been called "television's first vegetarian." He and his fellow Vulcans do not eat meat due to a "philosophy of non-violence." He is identified as vegetarian following an episode in which he was "transported back to pre-civilised times" and ate meat. Richard Marranca, writing in an issue of the Vegetarian Journal, said that for Spock, like Kwai Chang Caine in Kung Fu, "vegetarianism was something authentic and taken for granted; it was the right thing to do based on compassion and logic."
In 1995, The Simpsons episode "Lisa the Vegetarian" aired. Showrunner David Mirkin, who had recently stopped consuming meat, gave Linda and Paul McCartney "a container of his favorite turkey substitute" before they recorded their lines; both voiced characters in an episode focused on vegetarianism. Critic Alan Siegel said that before the episode vegetarians had been portrayed "rarely as anything but one-dimensional hippies" but that this episode was different as it was "told from the point of view of the person becoming a vegetarian." He said that the episode was one of the "first times on television that vegetarians saw an honest depiction of themselves" and of people's reaction to their dietary choices. The idea for the episode was originally proposed by David X. Cohen, and the McCartneys agreed to take part on the condition that Lisa remain a vegetarian; both were satisfied with how the episode turned out. In the episode, Lisa decides to stop eating meat after bonding with a lamb at a petting zoo. Her schoolmates and family members ridicule her for her beliefs, but with the help of Apu as well as Paul and Linda McCartney, she commits to vegetarianism. The staff promised that she would remain a vegetarian, resulting in one of the few permanent character changes made in the show. In an August 2020 interview, McCartney said that he and his wife were worried that Lisa "would be a vegetarian for a week, then Homer would persuade her to eat a hot dog," but were assured by the producers that she would remain that way, and he was delighted that they "kept their word."
In September 1998, the King of the Hill episode "And They Call It Bobby Love" aired on FOX. In the episode, "Bobby has a relationship with a vegetarian named Marie. She later dumps him after he eats a steak in front of her." In the March 2002 South Park episode "Fun with Veal", Stan Marsh becomes a vegetarian after he learns that veal is made of baby cows, which Cartman makes fun of. The episode ends with the boys, including Stan, getting grounded, but not before going out with their parents for burgers, meaning that Stan is no longer a vegetarian. In the DVD commentary, the creators said they wanted to balance their message against eating baby animals by not, at the same time, advocating that people abstain from meat consumption altogether.
Aang, in the animated series Avatar: The Last Airbender and The Legend of Korra, was a vegetarian. According to the show's creators, "Buddhism and Taoism have been huge inspirations behind the idea for Avatar." As shown in the episodes "The King of Omashu" and "The Headband", a notable aspect of Aang's character is his vegetarian diet, which is consistent with Buddhism, Hinduism, and Taoism. In the Brahmajala Sutra, a Buddhist code of ethics, vegetarianism is encouraged.
Other fictional characters who are vegetarians include Count Duckula in Count Duckula, Beast Boy in Teen Titans and Teen Titans Go!, Lenore in Supernatural, and Norville "Shaggy" Rogers in the animated series What's New, Scooby Doo?. Before the latter animated series, Shaggy was known for having an "enormous appetite" earlier in the Scooby-Doo franchise. The decision to make Shaggy a vegetarian occurred after his voice actor, Casey Kasem, convinced the producers to do so, since he was a vegan who supported animal rights and opposed factory farming, saying he would refuse to voice Shaggy unless the character was vegetarian.
Also, a Netflix original, Okja, focused on vegetarianism, while an October 2019 South Park episode, "Let Them Eat Goo", featured a vegetarian character. Additionally, Steven Universe, the protagonist in the show Steven Universe and the limited epilogue series, Steven Universe Future, is a vegetarian. In the episode "Snow Day" of Steven Universe Future, Steven tells the Gems he lives with that he has been a vegetarian for a month, drinks protein shakes and mentions that he does "his own skincare routine."
Film
In the 1999 film Notting Hill, Keziah, played by Emma Bernard, is a vegetarian. In one scene, Keziah tells William "Will" Thacker (played by Hugh Grant) that she is a fruitarian. She says she believes that "fruits and vegetables have feeling", so she opposes cooking them and eats only things that have "actually fallen off a tree or bush" and are already dead, leading to what some describe as a negative depiction.
In the 2000 film But I'm a Cheerleader, before Megan, one of the film's protagonists, is sent to a conversion therapy camp, her parents and others claim she is a lesbian because she is a vegetarian. Legally Blonde, a 2001 film, also featured a vegetarian: when Elle Woods introduces herself at Harvard Law School, she describes herself and her dog as "Gemini vegetarians".
In the 2012 film Life of Pi, Pi, played by Suraj Sharma, is a vegetarian, reflecting the three religions he follows: Hinduism, Christianity, and Islam. In the ship scene, a Taiwanese sailor, played by Bo-Chieh Wang, is also a vegetarian because of his Buddhist faith, eating rice with vegetarian gravy.
In the 2018 Hollywood blockbuster Black Panther, M'Baku (played by Winston Duke), the Jabari tribe leader who lives in the mountains of Wakanda, declares to a White CIA agent named Everett Ross (played by Martin Freeman), "if you say one more word, I'll feed you to my children!" When Ross is shaken by these words, M'Baku reveals he is joking, since all those in his tribe, including himself, are vegetarians. Some praised this scene for challenging a stereotype of Black culture and the perception of what vegetarians look like. Duke later said that some Black outlets cooked vegan meals for him, and that the scene is "kind of teaching kids that eating vegetables is cool," which is something he supports.
Vegetarian themes have also been noted in the Twilight novel (2005–2008) and film franchise (2008–2012), The Road (2006) and The Year of the Flood (2009). In March 2020, scholar Nathan Poirer reviewed Thinking Veganism in Literature and Culture: Towards a Vegan Theory, a book edited by Emelia Quinn and Benjamin Westwood. He concluded that veganism could "infiltrate popular culture without being perceived as threatening," noted other contributors to the book who examine vegan cinema that "challenges the normality of human supremacy by situating humans as potential prey," and stated that the essays outline ways veganism can be successful in popular culture.
Other scholars noted vegetarian themes in the films The Fault in Our Stars, The Princess Diaries series, and the 2009 film, Vegetarian.
See also
European Vegetarian Union
International Vegetarian Union
List of vegetarians
Plant-based diet
ProVeg International
Vegetarian and vegan symbolism
Vegetarian cuisine
Vegetarian Diet Pyramid
Vegetarian nutrition
References
External links
The Complete Vegetarian Cookbook
The Logic of Vegetarianism: Essays and Dialogues by Henry S. Salt
Diets
Applied ethics
Intentional living
Nonviolence | Vegetarianism | Biology | 11,975 |
41,443,085 | https://en.wikipedia.org/wiki/External%20image | In psychology, the external image (also alien image, foreign image, public image, or third-party image) is the image other people have of a person, i.e., a person's external image is the way they are viewed by other people. It contrasts with a person's self-image; how the external image is communicated to a person may affect their self-esteem positively or negatively.
Definition
An external image is the totality of all perceptions, feelings, and judgments that third parties make about an individual. These interpersonal perceptions are automatically linked to earlier experiences with the person being observed and with the feelings arising from these interactions and evaluations. The image that others have of a person shapes their expectations of this person, and significantly affects their mutual social interaction.
External image and self image
A person's external image, or more precisely, how this image is communicated to the individual, and how others react to the individual as a result of his or her external image, significantly affects the person's self image. Positive, appreciative external images strengthen an individual's self confidence and self esteem. In extreme cases, negative or conflicting external images can cause mental illness.
The external image always differs from an individual's self-image. Social interactions evolve from these two perspectives and the differences between them, or more precisely from the inferences that the two parties draw for themselves, and are influenced by the parties' own selves.
In group dynamics
Conscious handling of images about each other plays an important part in group dynamics. In feedback exercises, subjects are trained in giving and receiving external images. The Johari window describes the relationship between external and self images, and that between conscious and unconscious parts of these images. With mindful "awareness exercises", a person is trained to detect previously unconscious expectations of third parties, and with communication exercises, they are trained to reconcile their own and others' images and expectations of each other.
In psychotherapy
Psychotherapy also deals with external images when treating depression or in dealing with the effects of trauma or bullying, or more generally in counseling members of marginalized groups.
References
See also
Constructivism
Othering
Conceptions of self
Perception
Cognitive psychology
Interpersonal relationships
Interpersonal communication | External image | Biology | 451 |
1,048,389 | https://en.wikipedia.org/wiki/Pirbright%20Institute | The Pirbright Institute (formerly the Institute for Animal Health) is a research institute in Surrey, England, dedicated to the study of infectious diseases of farm animals. It forms part of the UK government's Biotechnology and Biological Sciences Research Council (BBSRC). The institute employs scientists, vets, PhD students, and operations staff.
History
It began in 1914 to test cows for tuberculosis. More buildings were added in 1925. Compton was established by the Agricultural Research Council in 1937. Pirbright became a research institute in 1939 and Compton in 1942. The Houghton Poultry Research Station at Houghton, Cambridgeshire was established in 1948. In 1963 Pirbright became the Animal Virus Research Institute and Compton became the Institute for Research on Animal Diseases. The Neuropathogenesis Unit (NPU) was established in Edinburgh in 1981. This became part of the Roslin Institute in 2007.
In 1987, Compton, Houghton and Pirbright became the Institute for Animal Health, funded by the BBSRC. Houghton closed in 1992, and operations at Compton ended in 2015.
The Edward Jenner Institute for Vaccine Research was sited at Compton until October 2005, when it merged with the vaccine programmes of the University of Oxford and the Institute for Animal Health.
The Pirbright site was implicated in the 2007 United Kingdom foot-and-mouth outbreak, with the Health and Safety Executive (HSE) concluding that a local case of the disease was a result of contaminated effluent release either from the Pirbright Institute or the neighbouring Merial Animal Health laboratory.
Significant investment (over £170 million) took place at Pirbright with the development of new world-class laboratory and animal facilities. The institute has been known as "The Pirbright Institute" since October 2012.
On 14 June 2019 the largest stock of the rinderpest virus was destroyed at the Pirbright Institute.
Directors of note
John Burns Brooksby, 1964 to 1980
Structure
The work previously carried out at Compton has either moved out to the university sector, ended, or been transferred to the Pirbright site. The Compton site formerly carried out work on endemic (commonplace) animal diseases, including some avian viruses and a small amount of bovine immunology, whilst Pirbright works on exotic (unusual) animal diseases (usually caused by virus outbreaks). Pirbright hosts national and international reference laboratories for several diseases. It is a biosafety level 4 laboratory (commonly referred to as "P4" or BSL-4).
Funding
25% of its income comes from a core grant from the BBSRC of around £11 million. Around 50% comes from research grants from related government organisations, such as DEFRA, or industry and charities (such as the Wellcome Trust). The remaining 25% comes from direct payments for work carried out.
The Bill & Melinda Gates Foundation has provided funding to the institute for research into veterinary infectious diseases and universal flu vaccine development.
Function
The Pirbright Institute carries out research, diagnostics and surveillance of viruses carried predominantly by farm animals, such as foot-and-mouth disease virus (FMDV), African swine fever, bluetongue, lumpy skin disease and avian and swine flu. Understanding of viruses comes from molecular biology.
It carries out surveillance activities on farm animal health and disease movement in the UK.
Services
Arthropod supplies
Diagnostics & Surveillance
Disinfectant testing
Flow cytometry & cell sorting
Products – Includes positive sera, inactivated antigens, diagnostic kits, viral cultures and live midges.
Training courses
Location
The institute had two sites:
Compton in Berkshire – closed in August 2015 with services relocated to new facilities at Pirbright.
Pirbright in Surrey – shared with commercial company Merial
See also
2007 United Kingdom foot-and-mouth outbreak
World Organisation for Animal Health
Bluetongue disease
Veterinary Laboratories Agency (now part of the Animal Health and Veterinary Laboratories Agency)
Animal Health (now part of the Animal Health and Veterinary Laboratories Agency)
Animal Health and Veterinary Laboratories Agency (an Executive Agency of the Department of Environment, Food and Rural Affairs)
References
External links
Agricultural research institutes in the United Kingdom
Agricultural organisations based in England
Animal health in England
Animal research institutes
Animal virology
Biotechnology in the United Kingdom
Biotechnology organizations
Genetics or genomics research institutions
Medical research institutes in the United Kingdom
Microbiology institutes
Research institutes established in 1987
Research institutes in Berkshire
Research institutes in Surrey
Veterinary research institutes
1987 establishments in England
Veterinary medicine in England | Pirbright Institute | Engineering,Biology | 906 |
930,016 | https://en.wikipedia.org/wiki/Ascidiacea | Ascidiacea, commonly known as the ascidians or sea squirts, is a paraphyletic class in the subphylum Tunicata of sac-like marine invertebrate filter feeders. Ascidians are characterized by a tough outer test or "tunic" made of the polysaccharide cellulose.
Ascidians are found all over the world, usually in shallow water with salinities over 2.5%. While members of the Thaliacea (salps, doliolids and pyrosomes) and Appendicularia (larvaceans) swim freely like plankton, sea squirts are sessile animals after their larval phase: they then remain firmly attached to their substratum, such as rocks and shells.
There are 2,300 species of ascidians and three main types: solitary ascidians, social ascidians that form clumped communities by attaching at their bases, and compound ascidians that consist of many small individuals (each individual is called a zooid) forming large colonies.
Sea squirts feed by taking in water through a tube, the oral siphon. The water enters the mouth and pharynx, flows through mucus-covered gill slits (also called pharyngeal stigmata) into a water chamber called the atrium, then exits through the atrial siphon.
Some authors now include the thaliaceans in Ascidiacea, making it monophyletic.
Anatomy
Sea squirts are rounded or cylindrical animals that vary considerably in size. One end of the body is always firmly fixed to rock, coral, or some similar solid surface. The lower surface is pitted or ridged, and in some species has root-like extensions that help the animal grip the surface. The body wall is covered by a smooth thick tunic, which is often quite rigid. The tunic consists of cellulose, along with proteins and calcium salts. Unlike the shells of molluscs, the tunic is composed of living tissue and often has its own blood supply. In some colonial species, the tunics of adjacent individuals are fused into a single structure.
The upper surface of the animal, opposite to the part gripping the substratum, has two openings, or siphons. When removed from the water, the animal often violently expels water from these siphons, hence the common name of "sea squirt". The body itself can be divided into up to three regions, although these are not clearly distinct in most species. The pharyngeal region contains the pharynx, while the abdomen contains most of the other bodily organs, and the postabdomen contains the heart and gonads. In many sea squirts, the postabdomen, or even the entire abdomen, are absent, with their respective organs being located more anteriorly.
As its name implies, the pharyngeal region is occupied mainly by the pharynx. The large buccal siphon opens into the pharynx, acting like a mouth. The pharynx itself is ciliated and contains numerous perforations, or stigmata, arranged in a grid-like pattern around its circumference. The beating of the cilia sucks water through the siphon, and then through the stigmata. A long ciliated groove, or endostyle, runs along one side of the pharynx, and a projecting ridge along the other. The endostyle may be homologous with the thyroid gland of vertebrates, despite its differing function.
The pharynx is surrounded by an atrium, through which water is expelled through a second, usually smaller, siphon. Cords of connective tissue cross the atrium to maintain the general shape of the body. The outer body wall consists of connective tissue, muscle fibres, and a simple epithelium directly underlying the tunic.
Digestive system
The pharynx forms the first part of the digestive system. The endostyle produces a supply of mucus which is then passed into the rest of the pharynx by the beating of flagella along its margins. The mucus then flows in a sheet across the surface of the pharynx, trapping planktonic food particles as they pass through the stigmata, and is collected in the ridge on the dorsal surface. The ridge bears a groove along one side, which passes the collected food downwards and into the oesophageal opening at the base of the pharynx.
The esophagus runs downwards to a stomach in the abdomen, which secretes enzymes that digest the food. An intestine runs upwards from the stomach parallel to the oesophagus and eventually opens, through a short rectum and anus, into a cloaca just below the atrial siphon. In some highly developed colonial species, clusters of individuals may share a single cloaca, with all the atrial siphons opening into it, although the buccal siphons all remain separate. A series of glands lie on the outer surface of the intestine, opening through collecting tubules into the stomach, although their precise function is unclear.
Circulatory system
The heart is a curved muscular tube lying in the postabdomen, or close to the stomach. Each end opens into a single vessel, one running to the endostyle, and the other to the dorsal surface of the pharynx. The vessels are connected by a series of sinuses, through which the blood flows. Additional sinuses run from that on the dorsal surface, supplying blood to the visceral organs, and smaller vessels commonly run from both sides into the tunic. Nitrogenous waste, in the form of ammonia, is excreted directly from the blood through the walls of the pharynx, and expelled through the atrial siphon.
Unusually, the heart of sea squirts alternates the direction in which it pumps blood every three to four minutes. There are two excitatory areas, one at each end of the heart, with first one being dominant, to push the blood through the ventral vessel, and then the other, pushing it dorsally.
There are four different types of blood cell: lymphocytes, phagocytic amoebocytes, nephrocytes and morula cells. The nephrocytes collect waste material such as uric acid and accumulate it in renal vesicles close to the digestive tract. The morula cells help to form the tunic, and can often be found within the tunic substance itself. In some species, the morula cells possess pigmented reducing agents containing iron (hemoglobin), giving the blood a red colour, or vanadium (hemovanadin) giving it a green colour. In that case the cells are also referred to as vanadocytes.
Nervous system
The ascidian central nervous system is formed from a plate that rolls up to form a neural tube. The number of cells within the central nervous system is very small. The neural tube is composed of the sensory vesicle, the neck, the visceral or tail ganglion, and the caudal nerve cord. The anteroposterior regionalization of the neural tube in ascidians is comparable to that in vertebrates.
Although there is no true brain, the largest ganglion is located in the connective tissue between the two siphons, and sends nerves throughout the body. Beneath this ganglion lies an exocrine gland that empties into the pharynx. The gland is formed from the nerve tube, and is therefore homologous to the spinal cord of vertebrates.
Sea squirts lack special sense organs, although the body wall incorporates numerous individual receptors for touch, chemoreception, and the detection of light.
Life history
Almost all ascidians are hermaphrodites and conspicuous mature ascidians are sessile. The gonads are located in the abdomen or postabdomen, and include one testis and one ovary, each of which opens via a duct into the cloaca. Broadly speaking, the ascidians can be divided into species which exist as independent animals (the solitary ascidians) and those which are interdependent (the colonial ascidians). Different species of ascidians can have markedly different reproductive strategies, with colonial forms having mixed modes of reproduction.
Solitary ascidians release many eggs from their atrial siphons; external fertilization in seawater takes place with the coincidental release of sperm from other individuals. A fertilized egg spends 12 hours to a few days developing into a free-swimming tadpole-like larva, which then takes no more than 36 hours to settle and metamorphose into a juvenile.
As a general rule, the larva possesses a long tail, containing muscles, a hollow dorsal nerve tube and a notochord, both features clearly indicative of the animal's chordate affinities. One group though, the molgulid ascidians, have evolved tailless species on at least four separate occasions, and even direct development. A notochord is formed early in development and always consists of a row of exactly 40 cells. The nerve tube enlarges in the main body, and will eventually become the cerebral ganglion of the adult. The tunic develops early in embryonic life and extends to form a fin along the tail in the larva. The larva also has a statocyst and a pigmented cup above the mouth, which opens into a pharynx lined with small clefts opening into a surrounding atrium. The mouth and anus are originally at opposite ends of the animal, with the mouth only moving to its final (posterior) position during metamorphosis.
The larva selects and settles on appropriate surfaces using receptors sensitive to light, orientation to gravity, and tactile stimuli. When its anterior end touches a surface, papillae (small, finger-like nervous projections) secrete an adhesive for attachment. Adhesive secretion prompts an irreversible metamorphosis: various organs (such as the larval tail and fins) are lost while others rearrange to their adult positions, the pharynx enlarges, and organs called ampullae grow from the body to permanently attach the animal to the substratum. The siphons of the juvenile ascidian become orientated to optimise current flow through the feeding apparatus. Sexual maturity can be reached in as little as a few weeks. Since the larva is more advanced than its adult, this type of metamorphosis is called 'retrogressive metamorphosis'. This feature is a landmark for the 'theory of retrogressive metamorphosis or ascidian larva theory'; the true chordates are hypothesized to have evolved from sexually mature larvae.
Direct development in ascidians
Some ascidians, especially in Molgulidae family, have direct development in which the embryo develops directly into the juvenile without developing a tailed larva.
Colonial species
Colonial ascidians reproduce both asexually and sexually. Colonies can survive for decades. An ascidian colony consists of individual elements called zooids. Zooids within a colony are usually genetically identical and some have a shared circulation.
Sexual reproduction
Different colonial ascidian species produce sexually derived offspring by one of two dispersal strategies – colonial species are either broadcast spawners (long-range dispersal) or philopatric (very short-range dispersal). Broadcast spawners release sperm and ova into the water column and fertilization occurs near to the parent colonies. Some species are also viviparous. Resultant zygotes develop into microscopic larvae that may be carried great distances by oceanic currents. The larvae of sessile forms which survive eventually settle and complete maturation on the substratum- then they may bud asexually to form a colony of zooids.
The picture is more complicated for the philopatrically dispersed ascidians: sperm from a nearby colony (or from a zooid of the same colony) enter the atrial siphon and fertilization takes place within the atrium. Embryos are then brooded within the atrium where embryonic development takes place: this results in macroscopic tadpole-like larvae. When mature, these larvae exit the atrial siphon of the adult and then settle close to the parent colony (often within meters). The combined effect of short sperm range and philopatric larval dispersal results in local population structures of closely related individuals/inbred colonies. Generations of colonies which are restricted in dispersal are thought to accumulate adaptations to local conditions, thereby providing advantages over newcomers.
Trauma or predation often results in fragmentation of a colony into subcolonies. Subsequent zooid replication can lead to coalescence and circulatory fusion of the subcolonies. Closely related colonies which are proximate to each other may also fuse if they coalesce and if they are histocompatible. Ascidians were among the first animals to be able to immunologically distinguish self from non-self as a mechanism to prevent unrelated colonies from fusing to them and parasitizing them.
Fertilization
Sea squirt eggs are surrounded by a fibrous vitelline coat and a layer of follicle cells that produce sperm-attracting substances. In fertilization, the sperm passes through the follicle cells and binds to glycosides on the vitelline coat. The sperm's mitochondria are left behind as the sperm enters and drives through the coat; this translocation of the mitochondria might provide the necessary force for penetration. The sperm swims through the perivitelline space, finally reaching the egg plasma membrane and entering the egg. This prompts rapid modification of the vitelline coat, through processes such as the egg's release of glycosidase into the seawater, so no more sperm can bind and polyspermy is avoided. After fertilization, free calcium ions are released in the egg cytoplasm in waves, mostly from internal stores. The temporary large increase in calcium concentration prompts the physiological and structural changes of development.
The dramatic rearrangement of egg cytoplasm following fertilization, called ooplasmic segregation, determines the dorsoventral and anteroposterior axes of the embryo. There are at least three types of sea squirt egg cytoplasm: ectoplasm containing vesicles and fine particles, endoplasm containing yolk platelets, and myoplasm containing pigment granules, mitochondria, and endoplasmic reticulum. In the first phase of ooplasmic segregation, the myoplasmic actin-filament network contracts to rapidly move the peripheral cytoplasm (including the myoplasm) to the vegetal pole, which marks the dorsal side of the embryo. In the second phase, the myoplasm moves to the subequatorial zone and extends into a crescent, which marks the future posterior of the embryo. The ectoplasm with the zygote nucleus ends up at the animal hemisphere while the endoplasm ends up in the vegetal hemisphere.
Promotion of out-crossing
Ciona intestinalis is a hermaphrodite that releases sperm and eggs into the surrounding seawater almost simultaneously. It is self-sterile, and thus has been used for studies on the mechanism of self-incompatibility. Self/non-self-recognition molecules play a key role in the process of interaction between sperm and the vitelline coat of the egg. It appears that self/non-self recognition in ascidians such as C. intestinalis is mechanistically similar to self-incompatibility systems in flowering plants. Self-incompatibility promotes out-crossing, and thus provides the adaptive advantage at each generation of masking deleterious recessive mutations (i.e. genetic complementation).
Ciona savignyi is highly self-fertile. However, non-self sperm out-compete self-sperm in fertilization competition assays. Gamete recognition is not absolute allowing some self-fertilization. It was speculated that self-incompatibility evolved to avoid inbreeding depression, but that selfing ability was retained to allow reproduction at low population density.
Botryllus schlosseri is a colonial tunicate able to reproduce both sexually and asexually. B. schlosseri is a sequential (protogynous) hermaphrodite, and in a colony, eggs are ovulated about two days before the peak of sperm emission. Thus self-fertilization is avoided, and cross-fertilization is favored. Although avoided, self-fertilization is still possible in B. schlosseri. Self-fertilized eggs develop with a substantially higher frequency of anomalies during cleavage than cross-fertilized eggs (23% vs. 1.6%). Also, a significantly lower percentage of larvae derived from self-fertilized eggs metamorphose, and the growth of the colonies derived from their metamorphosis is significantly lower. These findings suggest that self-fertilization gives rise to inbreeding depression associated with developmental deficits that are likely caused by expression of deleterious recessive mutations.
Asexual reproduction
Many colonial sea squirts are also capable of asexual reproduction, although the means of doing so are highly variable between different families. In the simplest forms, the members of the colony are linked only by rootlike projections from their undersides known as stolons. Buds containing food storage cells can develop within the stolons and, when sufficiently separated from the 'parent', may grow into a new adult individual.
In other species, the postabdomen can elongate and break up into a string of separate buds, which can eventually form a new colony. In some, the pharyngeal part of the animal degenerates, and the abdomen breaks up into patches of germinal tissue, each combining parts of the epidermis, peritoneum, and digestive tract, and capable of growing into new individuals.
In yet others, budding begins shortly after the larva has settled onto the substrate. In the family Didemnidae, for instance, the individual essentially splits into two, with the pharynx growing a new digestive tract and the original digestive tract growing a new pharynx.
DNA repair
Apurinic/apyrimidinic (AP) sites are a common form of DNA damage that inhibit DNA replication and transcription. AP endonuclease 1 (APEX1), an enzyme produced by C. intestinalis, is employed in the repair of AP sites during early embryonic development. Lack of such repair leads to abnormal development. C. intestinalis also has a set of genes that encode proteins homologous to those employed in the repair of DNA interstrand crosslinks in humans.
Ecology
The exceptional filtering capability of adult sea squirts causes them to accumulate pollutants that may be toxic to embryos and larvae as well as impede enzyme function in adult tissues. This property has made some species sensitive indicators of pollution.
Over the last few hundred years, most of the world's harbors have been invaded by non-native sea squirts that have been introduced by accident from the shipping industry. Several factors, including quick attainment of sexual maturity, tolerance of a wide range of environments, and a lack of predators, allow sea squirt populations to grow rapidly. Unwanted populations on docks, ship hulls, and farmed shellfish cause significant economic problems, and sea squirt invasions have disrupted the ecosystem of several natural sub-tidal areas by smothering native animal species.
Sea squirts are the natural prey of many animals, including nudibranchs, flatworms, molluscs, rock crabs, sea stars, fish, birds, and sea otters. Some are also eaten by humans in many parts of the world, including Japan, Korea, Chile, and Europe (where they are sold under the name "sea violet"). As chemical defenses, many sea squirts intake and maintain an extremely high concentration of vanadium in the blood, have a very low pH of the tunic due to acid in easily ruptured bladder cells, and (or) produce secondary metabolites harmful to predators and invaders. Some of these metabolites are toxic to cells and are of potential use in pharmaceuticals.
Evolution
Fossil record
Ascidians are soft-bodied animals, and for this reason, their fossil record is almost entirely lacking. The earliest reliable ascidian is Shankouclava shankouense from the Lower Cambrian Maotianshan Shale (Yunnan, South China). There are also two enigmatic species from the Ediacaran period with some affinity to the ascidians – Ausia from the Nama Group of Namibia and Burykhia from the Onega Peninsula, White Sea of northern Russia.
They are also recorded from the Lower Jurassic (Bonet and Benveniste-Velasquez, 1971; Buge and Monniot, 1972) and the Tertiary of France (Deflandre-Riguard, 1949, 1956; Durand, 1952; Deflandre and Deflandre-Rigaud, 1956; Bouche, 1962; Lezaud, 1966; Monniot and Buge, 1971; Varol and Houghton, 1996). Older (Triassic) records are ambiguous. From the Early Jurassic, the species Didemnum cassianum, Quadrifolium hesselboi, Palaeoquadrum ullmanni and other indeterminate genera are recorded. Representatives of the genus Cystodytes (family Polycitoridae) have been described from the Pliocene of France by Monniot (1970, 1971) and Deflandre-Rigaud (1956), from the Eocene of France by Monniot and Buge (1971), and most recently from the Late Eocene of southern Australia by Łukowiak (2012).
Phylogeny
On morphological evidence, the ascidians were treated as sister to the Thaliacea and Appendicularia, but molecular evidence has suggested that ascidians could be polyphyletic within the Tunicata.
In 2017 and 2018, two studies were published, which suggested an alternate phylogeny, placing Appendicularia as sister to the rest of Tunicata, and Thaliacea nested inside Ascidiacea. A grouping of Thaliacea and Ascidiacea to the exclusion of Appendicularia had already been suggested for a long time, under the name of Acopa. Brusca et al. treat Ascidiacea as a monophyletic group including pelagic Thaliacea.
Uses
Culinary
Various ascidians are eaten by humans around the world as delicacies.
Sea pineapple (Halocynthia roretzi) is cultivated in Japan (hoya, maboya) and Korea (meongge). When served raw, they have a chewy texture and peculiar flavor likened to "rubber dipped in ammonia" which has been attributed to a naturally occurring chemical known as cynthiaol. Styela clava is farmed in parts of Korea where it is known as mideoduk and is added to various seafood dishes such as agujjim. Tunicate bibimbap is a specialty of Geoje Island, not far from Masan.
Microcosmus species from the Mediterranean Sea are eaten in France (figue de mer, violet), Italy (limone di mare, uova di mare) and Greece (fouska, φούσκα), for example, raw with lemon, or in salads with olive oil, lemon and parsley.
The piure (Pyura chilensis) is used in the cuisine of Chile – it is consumed both raw and in seafood stews similar to bouillabaisse.
Pyura praeputialis is known as cunjevoi in Australia. It was once used as a food source by Aboriginal people living around Botany Bay, but is now used mainly for fishing bait.
Ciona is being developed in Norway as a potential protein substitute for meat, after processing to remove its 'marine taste' and to make its texture less 'squid-like'.
Model organisms for research
Several factors make sea squirts good models for studying the fundamental developmental processes of chordates, such as cell-fate specification. The embryonic development of sea squirts is simple, rapid, and easily manipulated. Because each embryo contains relatively few cells, complex processes can be studied at the cellular level, while remaining in the context of the whole embryo.
The eggs of some species contain little yolk and are therefore transparent, making them ideal for fluorescent imaging. In a few species, maternally derived proteins are naturally associated with pigment, so cell lineages are easily labeled, allowing scientists to visualize embryogenesis from beginning to end.
Sea squirts are also valuable because of their unique evolutionary position: as an approximation of ancestral chordates, they can provide insight into the link between chordates and ancestral non-chordate deuterostomes, as well as the evolution of vertebrates from simple chordates. The sequenced genomes of the related sea squirts Ciona intestinalis and Ciona savignyi are small and easily manipulated; comparisons with the genomes of other organisms such as flies, nematodes, pufferfish and mammals provides valuable information regarding chordate evolution. A collection of over 480,000 cDNAs have been sequenced and are available to support further analysis of gene expression, which is expected to provide information about complex developmental processes and regulation of genes in vertebrates. Gene expression in embryos of sea squirts can be conveniently inhibited using Morpholino oligos.
References
Citations
General and cited references
External links
The Dutch Ascidians Homepage
Encyclopedia of Marine Life of Britain and Ireland
A fate map of the ascidian egg
Ciona savignyi Database
ANISEED Ascidian Network for In Situ Expression and Embryological Data
Chordate classes
Cambrian Series 2 first appearances
Extant Cambrian first appearances
Paraphyletic groups | Ascidiacea | Biology | 5,524 |
8,651 | https://en.wikipedia.org/wiki/Dark%20matter | In astronomy, dark matter is an invisible and hypothetical form of matter that does not interact with light or other electromagnetic radiation. Dark matter is implied by gravitational effects which cannot be explained by general relativity unless more matter is present than can be observed. Such effects occur in the context of formation and evolution of galaxies, gravitational lensing, the observable universe's current structure, mass position in galactic collisions, the motion of galaxies within galaxy clusters, and cosmic microwave background anisotropies.
In the standard Lambda-CDM model of cosmology, the mass–energy content of the universe is 5% ordinary matter, 26.8% dark matter, and 68.2% a form of energy known as dark energy. Thus, dark matter constitutes 85% of the total mass, while dark energy and dark matter constitute 95% of the total mass–energy content.
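As a quick check (not stated explicitly in the text, but following directly from the figures above), the 85% figure is simply the dark matter share of the total matter budget:

\Omega_{dm} / (\Omega_{dm} + \Omega_b) \approx 26.8 / (26.8 + 5.0) \approx 0.84,

i.e., roughly 85% of all matter, while dark energy and dark matter together give 68.2% + 26.8% = 95% of the total mass–energy content.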
Dark matter is not known to interact with ordinary baryonic matter and radiation except through gravity, making it difficult to detect in the laboratory. The most prevalent explanation is that dark matter is some as-yet-undiscovered subatomic particle, such as either weakly interacting massive particles (WIMPs) or axions. The other main possibility is that dark matter is composed of primordial black holes.
Dark matter is classified as "cold", "warm", or "hot" according to velocity (more precisely, its free streaming length). Recent models have favored a cold dark matter scenario, in which structures emerge by the gradual accumulation of particles.
Although the astrophysics community generally accepts the existence of dark matter, a minority of astrophysicists, intrigued by specific observations that are not well explained by ordinary dark matter, argue for various modifications of the standard laws of general relativity. These include modified Newtonian dynamics, tensor–vector–scalar gravity, or entropic gravity. So far none of the proposed modified gravity theories can describe every piece of observational evidence at the same time, suggesting that even if gravity has to be modified, some form of dark matter will still be required.
History
Early history
The hypothesis of dark matter has an elaborate history.
Wm. Thomson, Lord Kelvin, discussed the potential number of stars around the Sun in the appendices of a book based on a series of lectures given in 1884 in Baltimore. He inferred their density using the observed velocity dispersion of the stars near the Sun, assuming that the Sun was 20–100 million years old. He posed what would happen if there were a thousand million stars within 1 kiloparsec of the Sun (at which distance their parallax would be 1 milli-arcsecond). Kelvin concluded
Many of our supposed thousand million stars – perhaps a great majority of them – may be dark bodies.
In 1906, Poincaré used the French term matière obscure ("dark matter") in discussing Kelvin's work. He concluded, incorrectly as it turns out, that the amount of dark matter would need to be less than that of visible matter.
The second to suggest the existence of dark matter using stellar velocities was Dutch astronomer Jacobus Kapteyn in 1922.
A publication from 1930 by Swedish astronomer Knut Lundmark points to him being the first to realise that the universe must contain much more mass than can be observed. Dutch radio astronomy pioneer Jan Oort also hypothesized the existence of dark matter in 1932. Oort was studying stellar motions in the galactic neighborhood and found the mass in the galactic plane must be greater than what was observed, but this measurement was later determined to be incorrect.
In 1933, Swiss astrophysicist Fritz Zwicky studied galaxy clusters while working at Caltech and made a similar inference. Zwicky applied the virial theorem to the Coma Cluster and obtained evidence of unseen mass he called dunkle Materie ('dark matter'). Zwicky estimated its mass based on the motions of galaxies near its edge and compared that to an estimate based on its brightness and number of galaxies. He estimated the cluster had about 400 times more mass than was visually observable. The gravity effect of the visible galaxies was far too small for such fast orbits, thus mass must be hidden from view. Based on these conclusions, Zwicky inferred some unseen matter provided the mass and associated gravitational attraction to hold the cluster together. Zwicky's estimates were off by more than an order of magnitude, mainly due to an obsolete value of the Hubble constant; the same calculation today shows a smaller fraction, using greater values for luminous mass. Nonetheless, Zwicky did correctly conclude from his calculation that most of the gravitational matter present was dark. However, unlike modern theories, Zwicky considered "dark matter" to be non-luminous ordinary matter.
Further indications of mass-to-light ratio anomalies came from measurements of galaxy rotation curves. In 1939, H.W. Babcock reported the rotation curve for the Andromeda nebula (now called the Andromeda Galaxy), which suggested the mass-to-luminosity ratio increases radially. He attributed it to either light absorption within the galaxy or modified dynamics in the outer portions of the spiral, rather than to unseen matter. Following Babcock's 1939 report of unexpectedly rapid rotation in the outskirts of the Andromeda Galaxy and a mass-to-light ratio of 50, Oort in 1940 discovered and wrote about the large non-visible halo of NGC 3115.
1970s
The hypothesis of dark matter largely took root in the 1970s. Several different observations were synthesized to argue that galaxies should be surrounded by halos of unseen matter. In two papers that appeared in 1974, this conclusion was drawn in tandem by independent groups: in Princeton, New Jersey, by Jeremiah Ostriker, Jim Peebles, and Amos Yahil, and in Tartu, Estonia, by Jaan Einasto, Enn Saar, and Ants Kaasik.
One of the observations that served as evidence for the existence of galactic halos of dark matter was the shape of galaxy rotation curves. These observations were done in optical and radio astronomy. In optical astronomy, Vera Rubin and Kent Ford worked with a new spectrograph to measure the velocity curve of edge-on spiral galaxies with greater accuracy.
At the same time, radio astronomers were making use of new radio telescopes to map the 21 cm line of atomic hydrogen in nearby galaxies. The radial distribution of interstellar atomic hydrogen (H) often extends to much greater galactic distances than can be observed as collective starlight, expanding the sampled distances for rotation curves – and thus of the total mass distribution – to a new dynamical regime. Early mapping of Andromeda with the telescope at Green Bank and the dish at Jodrell Bank already showed the H rotation curve did not trace the decline expected from Keplerian orbits.
As more sensitive receivers became available, Roberts & Whitehurst (1975) were able to trace the rotational velocity of Andromeda to 30 kpc, much beyond the optical measurements. Illustrating the advantage of tracing the gas disk at large radii; that paper's Figure 16 combines the optical data (the cluster of points at radii of less than 15 kpc with a single point further out) with the H data between 20 and 30 kpc, exhibiting the flatness of the outer galaxy rotation curve; the solid curve peaking at the center is the optical surface density, while the other curve shows the cumulative mass, still rising linearly at the outermost measurement. In parallel, the use of interferometric arrays for extragalactic H spectroscopy was being developed. Rogstad & Shostak (1972) published H rotation curves of five spirals mapped with the Owens Valley interferometer; the rotation curves of all five were very flat, suggesting very large values of mass-to-light ratio in the outer parts of their extended H disks. In 1978, Albert Bosma showed further evidence of flat rotation curves using data from the Westerbork Synthesis Radio Telescope.
By the late 1970s the existence of dark matter halos around galaxies was widely recognized as real, and became a major unsolved problem in astronomy.
1980–1990s
A stream of observations in the 1980s and 1990s supported the presence of dark matter, including one study notable for the investigation of 967 spirals. The evidence for dark matter also included gravitational lensing of background objects by galaxy clusters, the temperature distribution of hot gas in galaxies and clusters, and the pattern of anisotropies in the cosmic microwave background.
According to the current consensus among cosmologists, dark matter is composed primarily of some type of not-yet-characterized subatomic particle.
The search for this particle, by a variety of means, is one of the major efforts in particle physics.
Technical definition
In standard cosmological calculations, "matter" means any constituent of the universe whose energy density scales with the inverse cube of the scale factor a, i.e., ρ ∝ a⁻³. This is in contrast to "radiation", which scales as the inverse fourth power of the scale factor (ρ ∝ a⁻⁴), and a cosmological constant, which does not change with respect to a (ρ ∝ a⁰). The different scaling factors for matter and radiation are a consequence of radiation redshift. For example, after doubling the diameter of the observable Universe via cosmic expansion, the scale factor a has doubled. The energy of the cosmic microwave background radiation has been halved (because the wavelength of each photon has doubled); the energy of ultra-relativistic particles, such as early-era standard-model neutrinos, is similarly halved. The cosmological constant, as an intrinsic property of space, has a constant energy density regardless of the volume under consideration.
In principle, "dark matter" means all components of the universe which are not visible but still obey the matter scaling ρ ∝ a⁻³. In practice, the term "dark matter" is often used to mean only the non-baryonic component of dark matter, i.e., excluding "missing baryons". Context will usually indicate which meaning is intended.
Observational evidence
Galaxy rotation curves
The arms of spiral galaxies rotate around their galactic center. The luminous mass density of a spiral galaxy decreases as one goes from the center to the outskirts. If luminous mass were all the matter, then the galaxy could be modelled as a point mass at the centre with test masses orbiting around it, similar to the Solar System. From Kepler's Third Law, the rotation velocities would then be expected to decrease with distance from the center, as in the Solar System. This is not observed. Instead, the galaxy rotation curve remains flat or even increases as distance from the center increases.
If Kepler's laws are correct, then the obvious way to resolve this discrepancy is to conclude the mass distribution in spiral galaxies is not similar to that of the Solar System. In particular, there may be a lot of non-luminous matter (dark matter) in the outskirts of the galaxy.
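A worked form of this argument (a standard textbook relation, not quoted in the article): for circular orbits around an enclosed mass M(<r), the rotation speed is

v(r) = \sqrt{\frac{G\,M(<r)}{r}}

For a central point mass, M(<r) is constant and v falls off as r^{-1/2} (the Keplerian decline); an observed flat curve, v ≈ constant, instead implies M(<r) ∝ r, i.e., the enclosed mass keeps growing with radius even where little light is seen.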
Velocity dispersions
Stars in bound systems must obey the virial theorem. The theorem, together with the measured velocity distribution, can be used to measure the mass distribution in a bound system, such as elliptical galaxies or globular clusters. With some exceptions, velocity dispersion estimates of elliptical galaxies do not match the predicted velocity dispersion from the observed mass distribution, even assuming complicated distributions of stellar orbits.
As with galaxy rotation curves, the obvious way to resolve the discrepancy is to postulate the existence of non-luminous matter.
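An order-of-magnitude form of the virial estimate (a standard relation, not given in the text; the prefactor depends on the density profile and orbital structure) is

M \sim \frac{\sigma^{2} R}{G}

where σ is the measured velocity dispersion and R the size of the system; when this dynamical mass exceeds the mass inferred from starlight, non-luminous matter is implied.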
Galaxy clusters
Galaxy clusters are particularly important for dark matter studies since their masses can be estimated in three independent ways:
From the scatter in radial velocities of the galaxies within clusters
From X-rays emitted by hot gas in the clusters. From the X-ray energy spectrum and flux, the gas temperature and density can be estimated, hence giving the pressure; assuming that pressure and gravity balance then determines the cluster's mass profile (a standard form of this hydrostatic estimate is sketched below, after this list).
Gravitational lensing (usually of more distant galaxies) can measure cluster masses without relying on observations of dynamics (e.g., velocity).
Generally, these three methods are in reasonable agreement that dark matter outweighs visible matter by approximately 5 to 1.
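The X-ray method listed above is usually written as the hydrostatic-equilibrium mass estimate (a standard expression, not quoted in the article), in which k_B is Boltzmann's constant, μ the mean molecular weight, and m_p the proton mass:

M(<r) = -\frac{k_{B} T(r)\, r}{G \mu m_{p}} \left( \frac{d\ln \rho_{\mathrm{gas}}}{d\ln r} + \frac{d\ln T}{d\ln r} \right)

Measuring the gas density and temperature profiles from the X-ray data therefore yields the total gravitating mass, which in clusters comes out well in excess of the mass of the gas and stars alone.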
Gravitational lensing
One of the consequences of general relativity is the gravitational lens. Gravitational lensing occurs when massive objects between a source of light and the observer act as a lens to bend light from this source. Lensing does not depend on the properties of the mass; it only requires there to be a mass. The more massive an object, the more lensing is observed. An example is a cluster of galaxies lying between a more distant source such as a quasar and an observer. In this case, the galaxy cluster will lens the quasar.
Strong lensing is the observed distortion of background galaxies into arcs when their light passes through such a gravitational lens. It has been observed around many distant clusters including Abell 1689. By measuring the distortion geometry, the mass of the intervening cluster can be obtained. In the weak regime, lensing does not distort background galaxies into arcs, causing minute distortions instead. By examining the apparent shear deformation of the adjacent background galaxies, the mean distribution of dark matter can be characterized. The measured mass-to-light ratios correspond to dark matter densities predicted by other large-scale structure measurements.
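For orientation (an idealised point-mass expression, not from the article; real cluster lenses are modelled with extended mass distributions), the angular Einstein radius that sets the scale of strong-lensing arcs is

\theta_{E} = \sqrt{\frac{4 G M}{c^{2}}\, \frac{D_{LS}}{D_{L} D_{S}}}

where D_L, D_S and D_LS are the distances to the lens, to the source, and between lens and source; measuring the arc geometry therefore constrains the lensing mass M.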
Cosmic microwave background
Although both dark matter and ordinary matter are matter, they do not behave in the same way. In particular, in the early universe, ordinary matter was ionized and interacted strongly with radiation via Thomson scattering. Dark matter does not interact directly with radiation, but it does affect the cosmic microwave background (CMB) by its gravitational potential (mainly on large scales) and by its effects on the density and velocity of ordinary matter. Ordinary and dark matter perturbations, therefore, evolve differently with time and leave different imprints on the CMB.
The CMB is very close to a perfect blackbody but contains very small temperature anisotropies of a few parts in 100,000. A sky map of anisotropies can be decomposed into an angular power spectrum, which is observed to contain a series of acoustic peaks at near-equal spacing but different heights. The locations of these peaks depend on cosmological parameters. Matching theory to data, therefore, constrains cosmological parameters.
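Schematically, in standard notation rather than that of any particular experiment, the temperature anisotropy is expanded in spherical harmonics and the angular power spectrum is the variance of the expansion coefficients:

\frac{\Delta T}{T}(\hat{n}) = \sum_{\ell, m} a_{\ell m} Y_{\ell m}(\hat{n}), \qquad C_{\ell} = \langle |a_{\ell m}|^{2} \rangle.

It is the positions and relative heights of the acoustic peaks in C_ℓ that constrain the baryon and dark matter densities.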
The CMB anisotropy was first discovered by COBE in 1992, though this had too coarse resolution to detect the acoustic peaks.
After the discovery of the first acoustic peak by the balloon-borne BOOMERanG experiment in 2000, the power spectrum was precisely observed by WMAP in 2003–2012, and even more precisely by the Planck spacecraft in 2013–2015. The results support the Lambda-CDM model.
The observed CMB angular power spectrum provides powerful evidence in support of dark matter, as its precise structure is well fitted by the Lambda-CDM model, but difficult to reproduce with any competing model such as modified Newtonian dynamics (MOND).
Structure formation
Structure formation refers to the period after the Big Bang when density perturbations collapsed to form stars, galaxies, and clusters. Prior to structure formation, the Friedmann solutions to general relativity describe a homogeneous universe. Later, small anisotropies gradually grew and condensed the homogeneous universe into stars, galaxies and larger structures. Ordinary matter is affected by radiation, which is the dominant element of the universe at very early times. As a result, its density perturbations are washed out and unable to condense into structure. If there were only ordinary matter in the universe, there would not have been enough time for density perturbations to grow into the galaxies and clusters currently seen.
Dark matter provides a solution to this problem because it is unaffected by radiation. Therefore, its density perturbations can grow first. The resulting gravitational potential acts as an attractive potential well for ordinary matter collapsing later, speeding up the structure formation process.
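A commonly quoted result of linear perturbation theory, stated here only as a simplified sketch, is that in a matter-dominated universe a small density contrast grows in proportion to the scale factor,

\delta \equiv \frac{\delta\rho}{\rho} \propto a \quad (\text{matter domination}),

whereas baryonic perturbations cannot grow until after recombination. Dark matter perturbations therefore get a head start, which is what allows galaxies to form as early as they are observed to.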
Bullet Cluster
The Bullet Cluster is the result of a recent collision of two galaxy clusters. It is of particular note because the location of the center of mass as measured by gravitational lensing is different from the location of the center of mass of visible matter. This is difficult to explain for modified gravity theories, which generally predict lensing centered on the visible matter. Standard dark matter theory, however, has no issue: the hot, visible gas in each cluster would be cooled and slowed down by electromagnetic interactions, while dark matter (which does not interact electromagnetically) would not. This leads to the dark matter separating from the visible gas, producing the separate lensing peak as observed.
Type Ia supernova distance measurements
Type Ia supernovae can be used as standard candles to measure extragalactic distances, which can in turn be used to measure how fast the universe has expanded in the past. The data indicate the universe is expanding at an accelerating rate, the cause of which is usually ascribed to dark energy. Since observations indicate the universe is almost flat, the total energy density of everything in the universe should sum to the critical density (Ω_tot ≈ 1). The measured dark energy density and the observed ordinary (baryonic) matter energy density account for only part of this total, and the energy density of radiation is negligible. The remaining share nonetheless behaves like matter (see technical definition section above): this is dark matter.
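As a hedged numerical sketch using approximate Planck-era values (the exact figures depend on the data set and analysis), the flatness budget reads

\Omega_{\Lambda} + \Omega_{b} + \Omega_{\mathrm{dm}} + \Omega_{r} \approx 1, \qquad \Omega_{\Lambda} \approx 0.69,\; \Omega_{b} \approx 0.05,\; \Omega_{r} \approx 0 \;\Rightarrow\; \Omega_{\mathrm{dm}} \approx 0.26,

which is the "missing" matter-like component identified with dark matter.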
Sky surveys and baryon acoustic oscillations
Baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe on large scales. These are predicted to arise in the Lambda-CDM model due to acoustic oscillations in the photon–baryon fluid of the early universe and can be observed in the cosmic microwave background angular power spectrum. BAOs set up a preferred length scale for baryons. As the dark matter and baryons clumped together after recombination, the effect is much weaker in the galaxy distribution in the nearby universe, but is detectable as a subtle (≈1 percent) preference for pairs of galaxies to be separated by 147 Mpc, compared to those separated by 130–160 Mpc. This feature was predicted theoretically in the 1990s and then discovered in 2005, in two large galaxy redshift surveys, the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey. Combining the CMB observations with BAO measurements from galaxy redshift surveys provides a precise estimate of the Hubble constant and the average matter density in the Universe. The results support the Lambda-CDM model.
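The preferred separation is set by the comoving sound horizon at the drag epoch; schematically, with c_s the sound speed of the photon–baryon fluid and H the Hubble rate (a standard definition, quoted here only as an illustration),

r_{s} = \int_{z_{\mathrm{drag}}}^{\infty} \frac{c_{s}(z)}{H(z)}\, dz \approx 147\ \mathrm{Mpc}.

Using this scale as a standard ruler at different redshifts constrains the expansion history and the matter density.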
Redshift-space distortions
Large galaxy redshift surveys may be used to make a three-dimensional map of the galaxy distribution. These maps are slightly distorted because distances are estimated from observed redshifts; the redshift contains a contribution from the galaxy's so-called peculiar velocity in addition to the dominant Hubble expansion term. On average, superclusters are expanding more slowly than the cosmic mean due to their gravity, while voids are expanding faster than average. In a redshift map, galaxies in front of a supercluster have excess radial velocities towards it and have redshifts slightly higher than their distance would imply, while galaxies behind the supercluster have redshifts slightly low for their distance. This effect causes superclusters to appear squashed in the radial direction, and likewise voids are stretched. Their angular positions are unaffected. This effect is not detectable for any one structure since the true shape is not known, but can be measured by averaging over many structures. It was predicted quantitatively by Nick Kaiser in 1987, and first decisively measured in 2001 by the 2dF Galaxy Redshift Survey. Results are in agreement with the Lambda-CDM model.
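Kaiser's linear-theory result can be summarized, with μ the cosine of the angle between a wavevector and the line of sight and β the ratio of the growth rate to the galaxy bias, as an angle-dependent enhancement of the redshift-space power spectrum:

P_{s}(k,\mu) = (1 + \beta\mu^{2})^{2}\, P_{r}(k).

Measuring this anisotropy yields the growth rate of structure, which in the Lambda-CDM model is tied to the total matter density.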
Lyman-alpha forest
In astronomical spectroscopy, the Lyman-alpha forest is the sum of the absorption lines arising from the Lyman-alpha transition of neutral hydrogen in the spectra of distant galaxies and quasars. Lyman-alpha forest observations can also constrain cosmological models. These constraints agree with those obtained from WMAP data.
Theoretical classifications
Dark matter can be divided into cold, warm, and hot categories. These categories refer to velocity rather than an actual temperature, and indicate how far corresponding objects moved due to random motions in the early universe, before they slowed due to cosmic expansion. This distance is called the free streaming length (FSL). The categories of dark matter are set with respect to the size of a protogalaxy (an object that later evolves into a dwarf galaxy): dark matter particles are classified as cold, warm, or hot if their FSL is much smaller (cold), similar to (warm), or much larger (hot) than a protogalaxy. Mixtures of the above are also possible: a theory of mixed dark matter was popular in the mid-1990s, but was rejected following the discovery of dark energy.
The significance of the free streaming length is that the universe began with some primordial density fluctuations from the Big Bang (in turn arising from quantum fluctuations at the microscale). Particles from overdense regions will naturally spread to underdense regions, but because the universe is expanding quickly, there is a time limit for them to do so. Faster particles (hot dark matter) can beat the time limit while slower particles cannot. The particles travel a free streaming length's worth of distance within the time limit; therefore this length sets a minimum scale for later structure formation. Because galaxy-size density fluctuations get washed out by free-streaming, hot dark matter implies the first objects that can form are huge supercluster-size pancakes, which then fragment into galaxies, while the reverse is true for cold dark matter.
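Schematically, ignoring numerical factors, the comoving free streaming length is the distance a particle of typical velocity v(t) travels before structure formation begins:

\lambda_{\mathrm{FS}} \sim \int_{0}^{t_{\mathrm{eq}}} \frac{v(t)}{a(t)}\, dt.

Hot particles (large v) erase fluctuations out to supercluster scales, while cold particles (small v) leave galaxy-scale fluctuations intact.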
Deep-field observations show that galaxies formed first, followed by clusters and superclusters as galaxies clump together, and therefore that most dark matter is cold. This is also the reason why neutrinos, which move at nearly the speed of light and therefore would fall under hot dark matter, cannot make up the bulk of dark matter.
Composition
The identity of dark matter is unknown, but there are many hypotheses about what dark matter could consist of, as set out in the table below.
Baryonic matter
Dark matter can refer to any substance which interacts predominantly via gravity with visible matter (e.g., stars and planets). Hence in principle it need not be composed of a new type of fundamental particle but could, at least in part, be made up of standard baryonic matter, such as protons or neutrons. Most of the ordinary matter familiar to astronomers, including planets, brown dwarfs, red dwarfs, visible stars, white dwarfs, neutron stars, and black holes, falls into this category. A black hole would ingest both baryonic and non-baryonic matter that comes close enough to its event horizon; afterwards, the distinction between the two is lost.
These massive objects that are hard to detect are collectively known as MACHOs. Some scientists initially hoped that baryonic MACHOs could account for and explain all the dark matter.
However, multiple lines of evidence suggest the majority of dark matter is not baryonic:
Sufficient diffuse, baryonic gas or dust would be visible when backlit by stars.
The theory of Big Bang nucleosynthesis predicts the observed abundance of the chemical elements. If there are more baryons, then there should also be more helium, lithium and heavier elements synthesized during the Big Bang. Agreement with observed abundances requires that baryonic matter makes up 4–5% of the universe's critical density. In contrast, large-scale structure and other observations indicate that the total matter density is about 30% of the critical density.
Astronomical searches for gravitational microlensing in the Milky Way found at most only a small fraction of the dark matter may be in dark, compact, conventional objects (MACHOs, etc.); the excluded range of object masses is from half the Earth's mass up to 30 solar masses, which covers nearly all the plausible candidates.
Detailed analysis of the small irregularities (anisotropies) in the cosmic microwave background by WMAP and Planck indicates that around five-sixths of the total matter is in a form that interacts with ordinary matter or photons only through gravitational effects.
Non-baryonic matter
If baryonic matter cannot make up most of dark matter, then dark matter must be non-baryonic. There are two main candidates for non-baryonic dark matter: new hypothetical particles and primordial black holes.
Unlike baryonic matter, nonbaryonic particles do not contribute to the formation of the elements in the early universe (Big Bang nucleosynthesis), and so their presence is felt only via gravitational effects (such as weak lensing). In addition, some dark matter candidates can interact with themselves (self-interacting dark matter) or with ordinary particles (e.g., WIMPs, or Weakly Interacting Massive Particles), possibly resulting in observable by-products such as gamma rays and neutrinos (indirect detection). Candidates abound (see the table above), each with their own strengths and weaknesses.
Undiscovered massive particles
There exists no formal definition of a Weakly Interacting Massive Particle, but broadly, it is an elementary particle which interacts via gravity and any other force (or forces) which is as weak as or weaker than the weak nuclear force, but also non-vanishing in strength. Many WIMP candidates are expected to have been produced thermally in the early Universe, similarly to the particles of the Standard Model according to Big Bang cosmology, and usually will constitute cold dark matter. Obtaining the correct abundance of dark matter today via thermal production requires a self-annihilation cross section of roughly 3 × 10⁻²⁶ cm³ s⁻¹, which is about what is expected for a new particle in the 100 GeV mass range that interacts via the electroweak force.
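The "WIMP miracle" is often summarized by an approximate relic-abundance relation for a thermally produced particle (a rule of thumb, not an exact result):

\Omega_{\chi} h^{2} \approx \frac{3 \times 10^{-27}\ \mathrm{cm^{3}\,s^{-1}}}{\langle \sigma v \rangle},

so the observed \Omega_{\mathrm{dm}} h^{2} \approx 0.12 points to \langle \sigma v \rangle \approx 3 \times 10^{-26}\ \mathrm{cm^{3}\,s^{-1}}, the value quoted above.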
Because supersymmetric extensions of the Standard Model of particle physics readily predict a new particle with these properties, this apparent coincidence is known as the "WIMP miracle", and a stable supersymmetric partner has long been a prime explanation for dark matter. Experimental efforts to detect WIMPs include the search for products of WIMP annihilation, including gamma rays, neutrinos and cosmic rays in nearby galaxies and galaxy clusters; direct detection experiments designed to measure the collision of WIMPs with nuclei in the laboratory, as well as attempts to directly produce WIMPs in colliders, such as the Large Hadron Collider at CERN.
In the early 2010s, results from direct-detection experiments along with the lack of evidence for supersymmetry at the Large Hadron Collider (LHC) experiment have cast doubt on the simplest WIMP hypothesis.
Undiscovered ultralight particles
Axions are hypothetical elementary particles originally theorized in 1978 independently by Frank Wilczek and Steven Weinberg as the Goldstone boson of Peccei–Quinn theory, which had been proposed in 1977 to solve the strong CP problem in quantum chromodynamics (QCD). QCD effects produce an effective periodic potential in which the axion field moves. Expanding the potential about one of its minima, one finds that the product of the axion mass with the axion decay constant is determined by the topological susceptibility of the QCD vacuum. An axion with mass much less than 60 keV is long-lived and weakly interacting: a perfect dark matter candidate.
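The statement about the topological susceptibility can be written compactly, with χ the QCD topological susceptibility (the numerical value is an approximate lattice result quoted only for illustration):

m_{a} f_{a} = \sqrt{\chi}, \qquad \chi^{1/4} \approx 75\ \mathrm{MeV},

so a larger decay constant f_a implies a proportionally lighter axion.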
The oscillations of the axion field about the minimum of the effective potential, the so-called misalignment mechanism, generate a cosmological population of cold axions with an abundance depending on the mass of the axion. With a mass above 5 μeV/c² (roughly 10⁻¹¹ times the electron mass), axions could account for dark matter, and thus be both a dark-matter candidate and a solution to the strong CP problem. If inflation occurs at a low scale and lasts sufficiently long, the axion mass can be as low as 1 peV/c².
Because axions have extremely low mass, their de Broglie wavelength is very large, in turn meaning that quantum effects could help resolve the small-scale problems of the Lambda-CDM model. A single ultralight axion with a decay constant at the grand unified theory scale provides the correct relic density without fine-tuning.
Axions have gained in popularity as a dark matter candidate in recent years because of the non-detection of WIMPs.
Primordial black holes
Primordial black holes are hypothetical black holes that formed soon after the Big Bang. In the inflationary era and early radiation-dominated universe, extremely dense pockets of subatomic matter may have been tightly packed to the point of gravitational collapse, creating primordial black holes without the supernova compression typically needed to make black holes today. Because the creation of primordial black holes would pre-date the first stars, they are not limited to the narrow mass range of stellar black holes and also not classified as baryonic dark matter.
The idea that black holes could form in the early universe was first suggested by Yakov Zeldovich and Igor Dmitriyevich Novikov in 1967, and independently by Stephen Hawking in 1971. It quickly became clear that such black holes might account for at least part of dark matter. As a dark matter candidate, primordial black holes have the major advantage that the hypothesis rests on a well-understood theory (general relativity) and on objects (black holes) already known to exist. However, producing primordial black holes requires exotic cosmic inflation or physics beyond the standard model of particle physics, and might also require fine-tuning. Primordial black holes can also span nearly the entire possible mass range, from atom-sized to supermassive.
The idea that primordial black holes make up dark matter gained prominence in 2015 following results of gravitational wave measurements which detected the merger of intermediate-mass black holes. Black holes with about 30 solar masses are not predicted to form by either stellar collapse (typically less than 15 solar masses) or by the merger of black holes in galactic centers (millions or billions of solar masses), which suggests that the detected black holes might be primordial. A later survey of about a thousand supernovae detected no gravitational lensing events, when about eight would be expected if intermediate-mass primordial black holes above a certain mass range accounted for over 60% of dark matter. However, that study assumed that all black holes have the same or similar mass to the LIGO/Virgo mass range, which might not be the case (as suggested by subsequent James Webb Space Telescope observations).
The possibility that atom-sized primordial black holes account for a significant fraction of dark matter was ruled out by measurements of positron and electron fluxes outside the Sun's heliosphere by the Voyager 1 spacecraft. Tiny black holes are theorized to emit Hawking radiation. However, the detected fluxes were too low and did not have the expected energy spectrum, suggesting that tiny primordial black holes are not widespread enough to account for dark matter. Nonetheless, research and theories proposing that dense dark matter objects account for dark matter continue as of 2018, including approaches to dark matter cooling, and the question remains unsettled. In 2019, the lack of microlensing effects in observations of Andromeda suggested that tiny black holes do not exist.
Nonetheless, there still exists a largely unconstrained mass range smaller than that which can be limited by optical microlensing observations, where primordial black holes may account for all dark matter.
Dark matter aggregation and dense dark matter objects
If dark matter is composed of weakly interacting particles, then an obvious question is whether it can form objects equivalent to planets, stars, or black holes. Historically, the answer has been it cannot, because of two factors:
It lacks an efficient means to lose energy
Ordinary matter forms dense objects because it has numerous ways to lose energy. Losing energy would be essential for object formation, because a particle that gains energy during compaction or falling "inward" under gravity, and cannot lose it any other way, will heat up and increase its velocity and momentum. Dark matter appears to lack a means to lose energy, simply because it does not seem to interact through any means other than gravity. The virial theorem suggests that such a particle would not stay bound to the gradually forming object – as the object began to form and compact, the dark matter particles within it would speed up and tend to escape.
It lacks a diversity of interactions needed to form structures
Ordinary matter interacts in many different ways, which allows the matter to form more complex structures. For example, stars form through gravity, but the particles within them interact and can emit energy in the form of neutrinos and electromagnetic radiation through fusion when they become energetic enough. Protons and neutrons can bind via the strong interaction and then form atoms with electrons largely through electromagnetic interaction. There is no evidence that dark matter is capable of such a wide variety of interactions, since it seems to only interact through gravity (and possibly through some means no stronger than the weak interaction, although until dark matter is better understood, this is only speculation).
Detection of dark matter particles
If dark matter is made up of subatomic particles, then millions, possibly billions, of such particles must pass through every square centimeter of the Earth each second. Many experiments aim to test this hypothesis. Although WIMPs have been the main search candidates, axions have drawn renewed attention, with the Axion Dark Matter Experiment (ADMX) searching for axions and many more experiments planned for the future. Another candidate is heavy hidden sector particles which only interact with ordinary matter via gravity.
These experiments can be divided into two classes: direct detection experiments, which search for the scattering of dark matter particles off atomic nuclei within a detector; and indirect detection, which look for the products of dark matter particle annihilations or decays.
Direct detection
Direct detection experiments aim to observe low-energy recoils (typically a few keVs) of nuclei induced by interactions with particles of dark matter, which (in theory) are passing through the Earth. After such a recoil, the nucleus will emit energy in the form of scintillation light or phonons, which is registered by sensitive detection apparatus. To do so effectively, it is crucial to maintain an extremely low background, which is why such experiments typically operate deep underground, where interference from cosmic rays is minimized. Examples of underground laboratories with direct detection experiments include the Stawell mine, the Soudan mine, the SNOLAB underground laboratory at Sudbury, the Gran Sasso National Laboratory, the Canfranc Underground Laboratory, the Boulby Underground Laboratory, the Deep Underground Science and Engineering Laboratory and the China Jinping Underground Laboratory.
These experiments mostly use either cryogenic or noble liquid detector technologies. Cryogenic detectors, operating at temperatures below 100 mK, detect the heat produced when a particle hits an atom in a crystal absorber such as germanium. Noble liquid detectors detect the scintillation produced by a particle collision in liquid xenon or argon. Cryogenic detector experiments include such projects as CDMS, CRESST, EDELWEISS, and EURECA, while noble liquid experiments include LZ, XENON, DEAP, ArDM, WARP, DarkSide, PandaX, and LUX, the Large Underground Xenon experiment. Both of these techniques focus strongly on their ability to distinguish background particles (which predominantly scatter off electrons) from dark matter particles (which scatter off nuclei). Other experiments include SIMPLE and PICASSO, which use alternative methods in their attempts to detect dark matter.
Currently there has been no well-established claim of dark matter detection from a direct detection experiment, leading instead to strong upper limits on the mass and interaction cross section with nucleons of such dark matter particles. The DAMA/NaI and more recent DAMA/LIBRA experimental collaborations have detected an annual modulation in the rate of events in their detectors, which they claim is due to dark matter. This results from the expectation that as the Earth orbits the Sun, the velocity of the detector relative to the dark matter halo will vary by a small amount. This claim is so far unconfirmed and in contradiction with negative results from other experiments such as LUX, SuperCDMS and XENON100.
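The modulation searched for by DAMA-type experiments is usually parameterized with a standard phenomenological form (R_0 is the mean rate, S_m the modulation amplitude, T one year, and t_0 the phase, expected around early June when the Earth's orbital velocity adds maximally to the Sun's motion through the halo):

R(t) = R_{0} + S_{m} \cos\!\left(\frac{2\pi\,(t - t_{0})}{T}\right).

Confirming or refuting a dark matter origin requires other experiments to reproduce both the amplitude and this phase.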
A special case of direct detection experiments covers those with directional sensitivity. This is a search strategy based on the motion of the Solar System around the Galactic Center. A low-pressure time projection chamber makes it possible to access information on recoiling tracks and constrain WIMP-nucleus kinematics. WIMPs coming from the direction in which the Sun travels (approximately towards Cygnus) may then be separated from background, which should be isotropic. Directional dark matter experiments include DMTPC, DRIFT, Newage and MIMAC.
Indirect detection
Indirect detection experiments search for the products of the self-annihilation or decay of dark matter particles in outer space. For example, in regions of high dark matter density (e.g., the centre of the Milky Way) two dark matter particles could annihilate to produce gamma rays or Standard Model particle–antiparticle pairs. Alternatively, if a dark matter particle is unstable, it could decay into Standard Model (or other) particles. These processes could be detected indirectly through an excess of gamma rays, antiprotons or positrons emanating from high density regions in the Milky Way and other galaxies. A major difficulty inherent in such searches is that various astrophysical sources can mimic the signal expected from dark matter, and so multiple signals are likely required for a conclusive discovery.
A few of the dark matter particles passing through the Sun or Earth may scatter off atoms and lose energy. Thus dark matter may accumulate at the center of these bodies, increasing the chance of collision/annihilation. This could produce a distinctive signal in the form of high-energy neutrinos. Such a signal would be strong indirect proof of WIMP dark matter. High-energy neutrino telescopes such as AMANDA, IceCube and ANTARES are searching for this signal. The detection by LIGO in September 2015 of gravitational waves opens the possibility of observing dark matter in a new way, particularly if it is in the form of primordial black holes.
Many experimental searches have been undertaken to look for such emission from dark matter annihilation or decay, examples of which follow.
The Energetic Gamma Ray Experiment Telescope observed more gamma rays in 2008 than expected from the Milky Way, but scientists concluded this was most likely due to incorrect estimation of the telescope's sensitivity.
The Fermi Gamma-ray Space Telescope is searching for similar gamma rays. In 2009, an as yet unexplained surplus of gamma rays from the Milky Way's galactic center was found in Fermi data. This Galactic Center GeV excess might be due to dark matter annihilation or to a population of pulsars. In April 2012, an analysis of previously available data from Fermi's Large Area Telescope instrument produced statistical evidence of a 130 GeV signal in the gamma radiation coming from the center of the Milky Way. WIMP annihilation was seen as the most probable explanation.
At higher energies, ground-based gamma-ray telescopes have set limits on the annihilation of dark matter in dwarf spheroidal galaxies and in clusters of galaxies.
The PAMELA experiment (launched in 2006) detected excess positrons. They could be from dark matter annihilation or from pulsars. No excess antiprotons were observed.
In 2013, results from the Alpha Magnetic Spectrometer on the International Space Station indicated excess high-energy cosmic rays which could be due to dark matter annihilation.
Collider searches for dark matter
An alternative approach to the detection of dark matter particles in nature is to produce them in a laboratory. Experiments with the Large Hadron Collider (LHC) may be able to detect dark matter particles produced in collisions of the LHC proton beams. Because a dark matter particle should have negligible interactions with normal visible matter, it may be detected indirectly as (large amounts of) missing energy and momentum that escape the detectors, provided other (non-negligible) collision products are detected. Constraints on dark matter also exist from the LEP experiment using a similar principle, but probing the interaction of dark matter particles with electrons rather than quarks. Any discovery from collider searches must be corroborated by discoveries in the indirect or direct detection sectors to prove that the particle discovered is, in fact, dark matter.
Alternative hypotheses
Because dark matter has not yet been identified, many other hypotheses have emerged aiming to explain the same observational phenomena without introducing a new unknown type of matter. The theory underpinning most observational evidence for dark matter, general relativity, is well-tested on Solar System scales, but its validity on galactic or cosmological scales has not been well proven. A suitable modification to general relativity could conceivably eliminate the need for dark matter. The best-known theories of this class are MOND and its relativistic generalization tensor–vector–scalar gravity (TeVeS), f(R) gravity, negative mass, dark fluid, and entropic gravity. Alternative theories abound.
A problem with alternative hypotheses is that observational evidence for dark matter comes from so many independent approaches (see the "observational evidence" section above). Explaining any individual observation is possible but explaining all of them in the absence of dark matter is very difficult. Nonetheless, there have been some scattered successes for alternative hypotheses, such as a 2016 test of gravitational lensing in entropic gravity and a 2020 measurement of a unique MOND effect.
The prevailing opinion among most astrophysicists is that while modifications to general relativity can conceivably explain part of the observational evidence, there is probably enough data to conclude there must be some form of dark matter present in the universe.
In popular culture
Dark matter regularly appears as a topic in hybrid periodicals that cover both factual scientific topics and science fiction, and dark matter itself has been referred to as "the stuff of science fiction".
Mention of dark matter is made in works of fiction. In such cases, it is usually attributed extraordinary physical or magical properties, thus becoming inconsistent with the hypothesized properties of dark matter in physics and cosmology. For example:
Dark matter serves as a plot device in the 1995 X-Files episode "Soft Light".
A dark-matter-inspired substance known as "Dust" features prominently in Philip Pullman's His Dark Materials trilogy.
Beings made of dark matter are antagonists in Stephen Baxter's Xeelee Sequence.
More broadly, the phrase "dark matter" is used metaphorically in fiction to evoke the unseen or invisible.
Gallery
See also
Related theories
Density wave theory – A theory in which waves of compressed gas, which move slower than the galaxy, maintain galaxy's structure
Dark matter candidates
Weakly interacting slim particle (WISP) – Low-mass counterpart to WIMP
Other
Luminiferous aether – A once theorized invisible and infinite material with no interaction with physical objects, used to explain how light could travel through a vacuum (now disproven)
Notes
References
Further reading
Weiss, Rainer (July/August 2023). "The Dark Universe Comes into Focus". Scientific American, vol. 329, no. 1, pp. 7–8.
External links
Celestial mechanics
Large-scale structure of the cosmos
Physics beyond the Standard Model
Astroparticle physics
Exotic matter
Matter
Concepts in astronomy
Unsolved problems in astronomy
Articles containing video clips
Dark concepts in astrophysics | Dark matter | Physics,Astronomy | 9,000 |
17,604,172 | https://en.wikipedia.org/wiki/Tecticornia%20arbuscula | Tecticornia arbuscula, the shrubby glasswort or scrubby samphire, is a species of plant in the family Amaranthaceae, native to Australia. It is a shrub that grows to 2 metres in height, with a spreading habit. It has succulent swollen branchlets with small leaf lobes.
The species occurs on shorelines in coastal or estuarine areas or in salt marshes, especially marshes subject to occasional inundation by the ocean. It has a patchy distribution across south coastal Australia, occurring in southern Western Australia, South Australia, Victoria, New South Wales and Tasmania.
Seeds of the species are enclosed in a hard, vaguely pyramid-shaped pericarp and are narrow and about 1.5 mm long. These seeds appear golden brown, transparent and unornamented.
Originally published by Robert Brown under the name Salicornia arbuscula, it was transferred into Sclerostegia by Paul G. Wilson in 1980, before being merged into Tecticornia in 2007.
References
arbuscula
Caryophyllales of Australia
Eudicots of Western Australia
Flora of South Australia
Flora of Victoria (state)
Flora of New South Wales
Flora of the Northern Territory
Halophytes
Taxa named by Robert Brown (botanist, born 1773) | Tecticornia arbuscula | Chemistry | 265 |
64,523,032 | https://en.wikipedia.org/wiki/Muneeb%20Ali | Muneeb Ali is a Pakistani-American computer scientist and internet entrepreneur. He is a co-founder of Stacks, an open-source smart contract platform for Bitcoin. He is known for the regulatory framework that resulted in the first SEC-qualified offering for a crypto asset and for his doctoral dissertation which formed the basis of the Stacks network. He is a co-author of Protothread and Proof-of-Transfer (PoX) consensus.
Career
Ali studied Computer Science at LUMS and received his PhD in Computer Science from Princeton University in 2017. Ali co-founded Stacks (formerly Blockstack) with Ryan Shea and went through Y Combinator in 2014.
His work mainly focused on sensor networks, blockchains, and cloud computing.
Ali was a technical advisor to the HBO Silicon Valley show, and appeared in the Amazon Prime Video Rizqi Presents: Blockchain show.
In 2019, he convinced the SEC regulators to allow his company to start a token offering under Reg A+ exemption, becoming the first to do so. In 2020, Ali released a legal framework for non-security status of Stacks.
References
Living people
American computer scientists
Pakistani emigrants to the United States
21st-century American businesspeople
Princeton University alumni
Year of birth missing (living people) | Muneeb Ali | Technology | 262 |
25,509,720 | https://en.wikipedia.org/wiki/Piperic%20acid | Piperic acid is a chemical often obtained by the base-hydrolysis of the alkaloid piperine from black pepper, followed by acidification of the corresponding salt. Piperic acid is an intermediate in the synthesis of other compounds such as piperonal, and as-such may be used to produce fragrances, perfumes flavorants and drugs as well as other useful compounds.
Preparation
Piperic acid can be prepared from the commercially available alkaloid piperine, a cyclic amide containing a piperidine group, by reacting it with a hydroxide such as potassium hydroxide and then acidifying the piperate salt formed with hydrochloric acid or another acid. The toxic compound piperidine is given off during the base hydrolysis of piperine, so appropriate safety precautions should be taken.
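As a hedged sketch of the overall transformation (molecular formulas follow the usual compositions of the named compounds; reaction conditions are omitted), the hydrolysis and acidification can be written as:

C17H19NO3 (piperine) + KOH -> C12H9KO4 (potassium piperate) + C5H11N (piperidine)
C12H9KO4 + HCl -> C12H10O4 (piperic acid) + KCl

Both equations balance, which is a quick check that one equivalent of piperidine is released for each equivalent of piperic acid obtained.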
Reactions
Reaction of piperic acid with strong oxidizers such as potassium permanganate or ozone, or with a halogen such as bromine followed by sodium hydroxide, causes oxidative cleavage of the double bonds, yielding piperonal and piperonylic acid. Piperonal has many uses in industry and is itself a precursor to a range of other chemicals. On reduction with sodium amalgam, piperic acid forms α- and β-dihydropiperic acid, C12H12O4, and the latter can take up two further atoms of hydrogen to produce tetrahydropiperic acid.
See also
Piperonal
Piperine
Safrole
Isosafrole
Sesamol
Piperonyl butoxide
Cinnamic acid
References
Carboxylic acids
Benzodioxoles | Piperic acid | Chemistry | 329 |
27,856,340 | https://en.wikipedia.org/wiki/ISO%207200 | ISO 7200, titled Technical product documentation - Data fields in title blocks and document headers, is an international technical standard defined by ISO which describes title block formats to be used in technical drawings.
Revisions
ISO 7200:1984
ISO 7200:2004
Other ISO standards related to technical drawing
ISO 128 for the general principles of presentation in technical drawings
ISO 216 for paper sizes
See also
Engineering drawing : Title block
List of International Organization for Standardization standards
References
07200
Technical drawing | ISO 7200 | Engineering | 94 |
50,697,538 | https://en.wikipedia.org/wiki/Calonarius%20verrucisporus | Calonarius verrucisporus is a species of mushroom producing fungus in the family Cortinariaceae.
Taxonomy
It was described as new to science in 1969 by the mycologists Harry Delbert Thiers and Alexander H. Smith who classified it as Cortinarius verrucisporus.
In 2022 the species was transferred from Cortinarius and reclassified as Calonarius verrucisporus based on genomic data.
Description
The mushroom is brownish-yellow. Its cap is 3–7 cm wide, convex, brownish-yellow, dry, with firm yellow flesh, and mild odor and taste. The gills are adnate to notched, whitish to yellow, browning as the spores mature. The stalk is 1–3 cm tall, 1–2 cm wide, equal or clavate, with a yellow partial veil. The spores are brown, elliptical, and warted.
Its edibility is unknown, but it is not recommended due to its similarity to deadly poisonous species.
Cortinarius magnivelatus is similar in appearance, but with a white veil and flesh.
The species is characterized by a long-lasting membranous universal veil.
Habitat and distribution
The specimens studied by Thiers and Smith were found growing solitary under conifers at Silver Lake, California, in June.
See also
List of Cortinarius species
References
External links
Cortinariaceae
Fungi of the United States
Fungi described in 1969
Fungi without expected TNC conservation status
Fungus species | Calonarius verrucisporus | Biology | 309 |
33,919,913 | https://en.wikipedia.org/wiki/Verbal%20aggression | Verbal aggressiveness in communication has been studied to examine the underlying message of how the aggressive communicator gains control over different things that occur, through the usage of verbal aggressiveness. Scholars have identified that individuals who express verbal aggressiveness have the goal of controlling and manipulating others through language. Infante and Wigley defined verbal aggressiveness as "a personality trait that predisposes persons to attack the self-concepts of other people instead of, or in addition to, their positions on topics of communication". Self-concept can be described as a group of values and beliefs that one has. Verbal aggressiveness is thought to be mainly a destructive form of communication, but it can produce positive outcomes. Infante and Wigley described aggressive behavior in interpersonal communication as products of individual's aggressive traits and the way the person perceives the aggressive circumstances that prevents them or something in a situation.
Infante, Trebing, Shepard, and Seeds collaborated to showcase the relationship between argumentativeness and verbal aggression. The study investigated two things. The first component investigated whether high, moderate, or low behaviors differ in how easily they are caused by an opponent that selects verbally aggressive responses. The second focused on whether different sexes display different levels of verbal aggression. The results concluded that people who scored high on argumentativeness were the least likely to prefer verbal aggression. Argumentativeness is a constructive, positive trait that recognizes different positions which might exist on issues that are controversial. As for the difference between sexes, males are more likely than females to use verbal aggression because males have been conditioned to be more dominant and competitive.
The Verbal Aggressiveness Scale (VAS) measures the personality trait of verbal aggressiveness and has been widely used in communication research. The VAS has 20 items: 10 worded negatively/aggressively and 10 worded positively/kindly. Infante and Wigley's scale is often scored as unidimensional.
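A minimal sketch of how a unidimensional score might be computed, assuming a 1–5 Likert response per item and that the 10 positively worded items are reverse-scored before summing (the item numbers, scale range, and function name here are illustrative assumptions, not details of Infante and Wigley's instrument):

def score_vas(responses, positive_items, scale_min=1, scale_max=5):
    # responses: dict mapping item number (1-20) to the respondent's rating.
    # positive_items: set of item numbers worded positively/kindly; these are
    # reverse-scored so that a higher total always means more aggressiveness.
    total = 0
    for item, rating in responses.items():
        if item in positive_items:
            rating = scale_max + scale_min - rating  # reverse-score kind items
        total += rating
    return total

Under these assumptions the total ranges from 20 (least aggressive) to 100 (most aggressive).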
Types of messages
Reasons or causes
There are four primary reasons or causes suggested by Infante, Trebing, Shepard, and Seeds, which are:
Frustration—in which a goal is blocked by someone or having to deal with an individual deemed "unworthy" of one's time
Social learning—in which the aggressive behavior has been learned from observing other individuals
Psychopathology—in which an individual attacks other persons because of unresolved issues
Argumentative skill deficiency—in which an individual lacks verbal skills to deal with an issue, and therefore resorts to verbal aggressiveness
These motivators of verbal aggressiveness contribute to an individual with a verbally aggressive personality trait.
More recently Shaw, Kotowski, Boster, and Levine demonstrated that verbal aggression may be caused by variation in prenatal testosterone exposure. They conducted two studies in which they measured the length of the second and fourth digits (2D:4D) on each hand of participants, an indicator of amount of prenatal androgen exposure, and conducted a questionnaire to determine the verbal aggressiveness of participants. A negative correlation between 2D:4D and verbal aggressiveness was determined.
Effects
Self-concept damage is the most fundamental effect, and it can cause longer-lasting and more harmful results than the temporary effects. The more temporary, short-term effects are: hurt feelings, anger, irritation, embarrassment, discouragement, humiliation, despair, and depression. Verbal aggressiveness that harms an individual's self-concept can follow an individual throughout their life. For instance, Infante and Wigley state "the self-concept damage done by teasing a child about an aspect of physical appearance can endure for a lifetime and exert an enormous impact on the amount of unhappiness experienced". Verbal aggressiveness is also a major cause of violence. When verbal aggressiveness escalates, it can lead to physical violence.
Constructive
The constructive traits which produce satisfaction and increase relationship contentment by helping to increase understanding between different positions are assertiveness and argumentativeness. Assertiveness is often confused with aggressiveness, but assertive individuals often possess traits like dominance, independence, and competitiveness. Infante and Rancer define argumentativeness as the "trait-like behavior that predisposes an individual to take a stand on controversial issues and attack the positions that other people take". Argumentative individuals focus on the topic rather than attacking an individual. Productive argumentativeness can produce positive outcomes in communication by challenging and defending standpoints through justification. This allows for reasoning between individuals to resolve the issue and terminate the disagreement. Argumentative encounters such as these have a positive correlation with relational satisfaction.
Destructive
The destructive traits, hostility and verbal aggressiveness, lead to dissatisfaction in communication and relationship deterioration. Destructive verbal aggressiveness is used for revenge, teasing, and to manipulate others. Verbal aggressiveness is destructive and links to the hostility trait. Unlike argumentativeness, verbal aggressiveness is focused on defending one's identity and attacking others; not trying to resolve the dispute but instead attacking individuals self-concept. Also, verbally aggressive individuals often do not provide as much evidence to support their standpoint. In many cases these individuals possess verbally aggressive traits because they lack the skills to argue rationally and effectively, and therefore use verbally aggressive messages as their defense mechanism. Individuals with argumentative skill deficiency often see violence as their only alternative. These aggressive tactics cause a digression by using personal attacks which do not allow for the disagreement to ever be resolved.
In romantic relationships
The manner in which conflicts are dealt with in romantic relationships differ among each partnership. There are numerous concepts, qualities, and traits that predict the verbal aggressiveness of each partner within a romantic relationship. How couples deal with arguments and controversy has been a major topic amongst researchers for many years. When resolving a dispute is the objective amongst a couple, each individual's argumentative traits come into play. The way in which couples engage and act during a discrepancy can play a chief role in the satisfaction of each partner.
Verbal aggressiveness often results in deterioration of relational satisfaction. Romantically involved couples can perceive verbally aggressive messages as unaffectionate communication. Infante et al. found that "an act of verbal aggression produces a negative emotional reaction (e.g., anger); the negative reaction can remain covert, leaving a trace effect that can combine additively with subsequent verbal aggression. If the effect is not dissipated through some means, it can lead to the formation of intentions to behave with physical aggression toward the origin or perceived origin of the verbal aggression". Verbal aggressiveness is impacted by the commitment levels of the partners in a relationship. Research findings have shown a negative correlation between commitment and destructive confrontation, and also between commitment and communicative acts of abuse.
The arguments that occur between romantic partners play a crucial role in the quality and course of relationships. Arguing successfully means, at least in some part, that a couple will avoid unwarranted negativity and approach discrepancies in confidence that discussing dissimilarities of opinion will supply positive results. Many couples refocus the argument and attack the other partner rather than staying on track with the differences of opinion on a subject. Unhappily married couples tend to use a more destructive approach to conflict. Verbal aggressiveness is resorted to in conflict and controversy. Infante et al. found that in violent marriages more character attacks and competence attacks are used during disputes. Happily married couples were more likely to resolve disputes without the use of verbally aggressive messages, using instead argumentativeness to negotiate an agreement.
In families
Communication between parents and children influences children and can have important effects for the well-being of the child. It can also have important influences on the relationship between the child and the parent. Muris, Meesters, Morren, and Moorman found that, "attachment style and perceived parent rearing styles that included low levels of emotional warmth were more likely to result in anger and hostility in children". Also, Riesch, Anderson, and Krueger argued that "parent-child communication can help reduce risk behaviors through individual risk factors such as self-esteem, academic achievement, and parental involvement in monitoring". Knapp, Stafford, and Daly stated, "verbally aggressive behavior is contextual: most parents likely have said something verbally aggressive to their child at some point, even if they later regretted doing so".
The parental use of verbal aggressiveness can cause a disruption in the relationship between the child and the parent. When a parent uses verbally aggressive behavior, children are often frightened, which leads to avoidance of the parent. The verbal aggressiveness causes the child to feel fear and anxiety, and therefore the child loses trust in their relationship. Parental verbal aggressiveness has a negative correlation with relational satisfaction and closeness to their children. Studies found that parents who are verbally aggressive tend to have children who are also verbally aggressive. This is consistent with Bandura's social learning theory. Children who are consistently around their parents are likely to model their behavior.
According to the attachment theory, all humans are dependent on one or several individuals during the early years of their lives. It is important to understand how a parent's verbal aggressiveness can change the attachment style the child has toward the parent. If a parent is shown as attacking a child's self-image, it is likely that these attacks will hinder the growth of a confident attachment style. Styron and Janoff-Bulman found, "more than 60% of participants who had been verbally abused as children had reported an insecure attachment style".
Authoritative parents are characterized by encouraging and democratic behaviors. These types of parents value verbal "give-and-take." Authoritarian parents prefer punishment as a way to control their child's behavior and they value obedience from their children. Parents low in verbal aggression tend to adopt an authoritative parenting style and that is positively related to a secure attachment style.
In athletics
Communication plays a significant role in the athlete-coach relationship. Verbal aggression has been identified as prominent in athletics. Coaches who exhibit verbally aggressive behavior may influence athletes' performance, competence, overall enjoyment, and motivation. Symrpas and Bekiari conducted a study aimed at determining two things. The first was to explore the perceived leadership style and verbal aggressiveness profile of coaches. The second was to look for differences in athletes' satisfaction and achievement goal orientation based on perceived coaches' leadership style and verbal aggressiveness profile.
The study supported two profiles of coaches. The first profile included coaches who present low autocratic (harsh) behavior, high democratic (fair) behavior, and low verbally aggressive behavior. The second profile included coaches who present high autocratic, low democratic, and high verbally aggressive behavior. Based upon the results, coaches in the first profile promoted athletes' satisfaction and a mental state marked by compassion. Athletes who are more task-oriented, performing tasks to achieve a desired outcome, considered their coaches to belong to the first profile, which did not impact their performance in a negative way.
In customer service
Customer incivility can be described as verbal aggression towards customer service employees. It can negatively impact customer service perceptions and potentially undermine an organization's competitive status. Today, customer incivility is understood as customer verbal aggression towards employees through language content and communication style. Customer verbal aggression can happen in places such as restaurants, retail stores, and banks.
Walker, Jaarsveld, and Skarlicki performed a study that focused on developing an understanding of what customers do in service encounters that can increase employee incivility toward others. Four factors were examined: aggressive words, second-person pronoun use (you, your), interruptions, and positive emotion words. Positive associations between customer aggressive words and employee incivility were clear when the verbal aggression included second-person pronouns, labeled as targeted aggression. The researchers observed two-way interactions between targeted aggression and customer interruptions, in which employees used more offensive language when targeted customer verbal aggression was accompanied by more interruptions. The two-way interactions predicting employee incivility were weakened when customers used positive emotion words. Saying something like, "I know you charged me twice, but we can try to work this out together", is an example. The results suggested that customer verbal aggression depletes employees, leading to self-regulation failure. A customer using positive emotional language increases the ability of the employee to engage in self-regulation and reduce incivility.
Self-regulation is important with interactions in the workplace. To communicate effectively in social environments while helping customers, the key objective must be emotional labor. Emotional labor is the self-regulatory process that unfolds over the course of customer interactions, with employees monitoring and adjusting their felt and expressed emotion. A goal in which employees use during emotional labor is to produce effective and emotional displays that enhance the customer experience. The self-regulatory approach provides insight into how felt emotions, displayed emotions, and emotion regulation may relate to each other over time.
See also
Trash talk
Verbal abuse
References
Aggression
Speech | Verbal aggression | Biology | 2,646 |
66,944,870 | https://en.wikipedia.org/wiki/Fascaplysin | Fascaplysin is a marine alkaloid based on 12H-pyrido[1–2-a:3,4-b′]diindole ring system. It was first isolated as a red pigment from the marine sponge Fascaplysinopsis reticulata that was collected in the South Pacific near Fiji in 1988. Fascaplysin possesses a broad range of in vitro biological activities including analgesic, antimicrobial, antifungal, antiviral, antimalarial, anti-angiogenic, and antiproliferative activity against numerous cancer cell lines.
Synthesis
The first total synthesis of fascaplysin was performed in seven steps from indole in 1990. Fascaplysin and its derivatives can be synthesized from tryptamine, beta-carboline, indoleketones, and indigo.
References
Alkaloids
Heterocyclic compounds with 5 rings
Nitrogen heterocycles
Quaternary ammonium compounds | Fascaplysin | Chemistry | 201 |
168,393 | https://en.wikipedia.org/wiki/Polystyrene | Polystyrene (PS) is a synthetic polymer made from monomers of the aromatic hydrocarbon styrene. Polystyrene can be solid or foamed. General-purpose polystyrene is clear, hard, and brittle. It is an inexpensive resin per unit weight. It is a poor barrier to air and water vapor and has a relatively low melting point. Polystyrene is one of the most widely used plastics, with the scale of its production being several million tonnes per year. Polystyrene is naturally transparent, but can be colored with colorants. Uses include protective packaging (such as packing peanuts and optical disc jewel cases), containers, lids, bottles, trays, tumblers, disposable cutlery, in the making of models, and as an alternative material for phonograph records.
As a thermoplastic polymer, polystyrene is in a solid (glassy) state at room temperature but flows if heated above about 100 °C, its glass transition temperature. It becomes rigid again when cooled. This temperature behavior is exploited for extrusion (as in Styrofoam) and also for molding and vacuum forming, since it can be cast into molds with fine detail. The temperature behavior can be controlled by photocrosslinking.
Under ASTM standards, polystyrene is regarded as not biodegradable. It is accumulating as a form of litter in the outside environment, particularly along shores and waterways, especially in its foam form, and in the Pacific Ocean.
History
Polystyrene was discovered in 1839 by Eduard Simon, an apothecary from Berlin. From storax, the resin of the Oriental sweetgum tree Liquidambar orientalis, he distilled an oily substance that he named styrol, now called styrene. Several days later, Simon found that it had thickened into a jelly, now known to have been a polymer, that he dubbed styrol oxide ("Styroloxyd") because he presumed that it had resulted from oxidation (styrene oxide is a distinct compound). By 1845 Jamaican-born chemist John Buddle Blyth and German chemist August Wilhelm von Hofmann showed that the same transformation of styrol took place in the absence of oxygen. They called the product "meta styrol"; analysis showed that it was chemically identical to Simon's Styroloxyd. In 1866 Marcellin Berthelot correctly identified the formation of meta styrol/Styroloxyd from styrol as a polymerisation process. About 80 years later it was realized that heating of styrol starts a chain reaction that produces macromolecules, following the thesis of German organic chemist Hermann Staudinger (1881–1965). This eventually led to the substance receiving its present name, polystyrene.
The company I. G. Farben began manufacturing polystyrene in Ludwigshafen, about 1931, hoping it would be a suitable replacement for die-cast zinc in many applications. Success was achieved when they developed a reactor vessel that extruded polystyrene through a heated tube and cutter, producing polystyrene in pellet form.
Ray McIntire (1918–1996), a chemical engineer of Dow Chemical, rediscovered a process first patented in early 1930s by Swedish inventor Carl Munters. According to the Science History Institute, "Dow bought the rights to Munters's method and began producing a lightweight, water-resistant, and buoyant material that seemed perfectly suited for building docks and watercraft and for insulating homes, offices, and chicken sheds." In 1944, Styrofoam was patented.
Before 1949, chemical engineer Fritz Stastny (1908–1985) developed pre-expanded PS beads by incorporating aliphatic hydrocarbons, such as pentane. These beads are the raw material for molding parts or extruding sheets. BASF and Stastny applied for a patent that was issued in 1949. The molding process was demonstrated at the Kunststoff Messe 1952 in Düsseldorf. Products were named Styropor.
The crystal structure of isotactic polystyrene was reported by Giulio Natta.
In 1954, the Koppers Company in Pittsburgh, Pennsylvania, developed expanded polystyrene (EPS) foam under the trade name Dylite. In 1960, Dart Container, the largest manufacturer of foam cups, shipped their first order.
Structure and production
In chemical terms, polystyrene is a long chain hydrocarbon wherein alternating carbon centers are attached to phenyl groups (a derivative of benzene). Polystyrene's chemical formula is (C8H8)n; it contains the chemical elements carbon and hydrogen.
The material's properties are determined by short-range van der Waals attractions between polymer chains. Since the molecules consist of thousands of atoms, the cumulative attractive force between the molecules is large. When heated (or deformed at a rapid rate, due to a combination of viscoelastic and thermal insulation properties), the chains can take on a higher degree of conformation and slide past each other. This intermolecular weakness (versus the high intramolecular strength due to the hydrocarbon backbone) confers flexibility and elasticity. The ability of the system to be readily deformed above its glass transition temperature allows polystyrene (and thermoplastic polymers in general) to be readily softened and molded upon heating. Extruded polystyrene is about as strong as unalloyed aluminium but much more flexible and much less dense (1.05 g/cm3 for polystyrene vs. 2.70 g/cm3 for aluminium).
Production
Polystyrene is an addition polymer that results when styrene monomers polymerize (interconnect). In the polymerization, the carbon-carbon π bond of the vinyl group is broken and a new carbon-carbon σ bond is formed, attaching the carbon of another styrene monomer to the chain. Since only one kind of monomer is used in its preparation, it is a homopolymer. The newly formed σ bond is stronger than the π bond that was broken, thus it is difficult to depolymerize polystyrene. A chain of polystyrene typically comprises a few thousand monomers, giving a molar mass of 100,000–400,000 g/mol.
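As a quick, illustrative cross-check of these figures (not taken from the cited sources), the following Python sketch divides the quoted molar-mass range by the molar mass of the styrene repeat unit, about 104.15 g/mol, to estimate the number of repeat units per chain:

```python
# The styrene repeat unit (C8H8) has a molar mass of about 104.15 g/mol, so
# the number of repeat units per chain is roughly chain mass / unit mass.
STYRENE_UNIT_G_PER_MOL = 104.15

def degree_of_polymerization(chain_molar_mass_g_per_mol):
    """Approximate number of styrene repeat units in a chain of the given molar mass."""
    return chain_molar_mass_g_per_mol / STYRENE_UNIT_G_PER_MOL

for m in (100_000, 400_000):
    print(f"{m:>7} g/mol -> ~{degree_of_polymerization(m):.0f} repeat units")
# ~960 and ~3840 units, consistent with "a few thousand monomers" per chain.
```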
Each carbon of the backbone has tetrahedral geometry, and those carbons that have a phenyl group (benzene ring) attached are stereogenic. If the backbone were to be laid as a flat elongated zig-zag chain, each phenyl group would be tilted forward or backward compared to the plane of the chain.
The relative stereochemical relationship of consecutive phenyl groups determines the tacticity, which affects various physical properties of the material.
Tacticity
In polystyrene, tacticity describes the extent to which the phenyl group is uniformly aligned (arranged at one side) in the polymer chain. Tacticity has a strong effect on the properties of the plastic. Standard polystyrene is atactic. The diastereomer where all of the phenyl groups are on the same side is called isotactic polystyrene, which is not produced commercially.
Atactic polystyrene
The only commercially important form of polystyrene is atactic, in which the phenyl groups are randomly distributed on both sides of the polymer chain. This random positioning prevents the chains from aligning with sufficient regularity to achieve any crystallinity. The plastic has a glass transition temperature Tg of ≈90 °C. Polymerization is initiated with free radicals.
Syndiotactic polystyrene
Ziegler–Natta polymerization can produce an ordered syndiotactic polystyrene with the phenyl groups positioned on alternating sides of the hydrocarbon backbone. This form is highly crystalline with a Tm (melting point) of about 270 °C. Syndiotactic polystyrene resin is currently produced under the trade name XAREC by Idemitsu Corporation, who use a metallocene catalyst for the polymerisation reaction.
Degradation
Polystyrene is relatively chemically inert. While it is waterproof and resistant to breakdown by many acids and bases, it is easily attacked by many organic solvents (e.g. it dissolves quickly when exposed to acetone), chlorinated solvents, and aromatic hydrocarbon solvents. Because of its resilience and inertness, it is used for fabricating many objects of commerce. Like other organic compounds, polystyrene burns to give carbon dioxide and water vapor, in addition to other thermal degradation by-products. Polystyrene, being an aromatic hydrocarbon, typically combusts incompletely as indicated by the sooty flame.
The process of depolymerizing polystyrene into its monomer, styrene, is called pyrolysis. This involves using high heat and pressure to break down the chemical bonds between the styrene units. Pyrolysis is usually carried out at temperatures of up to 430 °C. The high energy cost of doing this has made commercial recycling of polystyrene back into styrene monomer difficult.
Organisms
Polystyrene is generally considered to be non-biodegradable. However, certain organisms are able to degrade it, albeit very slowly.
In 2015, researchers discovered that mealworms, the larvae form of the darkling beetle Tenebrio molitor, could digest and subsist healthily on a diet of EPS. About 100 mealworms could consume between 34 and 39 milligrams of this white foam in a day. The droppings of mealworm were found to be safe for use as soil for crops.
In 2016, it was also reported that superworms (Zophobas morio) may eat expanded polystyrene (EPS). A group of high school students in Ateneo de Manila University found that compared to Tenebrio molitor larvae, Zophobas morio larvae may consume greater amounts of EPS over longer periods of time.
In 2022 scientists identified several bacterial genera, including Pseudomonas, Rhodococcus and Corynebacterium, in the gut of superworms that contain encoded enzymes associated with the degradation of polystyrene and the breakdown product styrene.
The bacterium Pseudomonas putida is capable of converting styrene oil into the biodegradable plastic PHA. This may someday be of use in the effective disposal of polystyrene foam. Note, however, that the polystyrene must first undergo pyrolysis to produce styrene oil.
Forms produced
Polystyrene is commonly injection molded, vacuum formed, or extruded, while expanded polystyrene is either extruded or molded in a special process.
Polystyrene copolymers are also produced; these contain one or more other monomers in addition to styrene. In recent years the expanded polystyrene composites with cellulose and starch have also been produced. Polystyrene is used in some polymer-bonded explosives (PBX).
Sheet or molded polystyrene
Polystyrene (PS) is used for producing disposable plastic cutlery and dinnerware, CD "jewel" cases, smoke detector housings, license plate frames, plastic model assembly kits, and many other objects where a rigid, economical plastic is desired. Production methods include thermoforming (vacuum forming) and injection molding.
Polystyrene Petri dishes and other laboratory containers such as test tubes and microplates play an important role in biomedical research and science. For these uses, articles are almost always made by injection molding, and often sterilized post-molding, either by irradiation or by treatment with ethylene oxide. Post-mold surface modification, usually with oxygen-rich plasmas, is often done to introduce polar groups. Much of modern biomedical research relies on the use of such products; they, therefore, play a critical role in pharmaceutical research.
Thin sheets of polystyrene are used in polystyrene film capacitors, as they form a very stable dielectric, but this use has largely been superseded by polyester.
Foams
Polystyrene foams are 95–98% air. Polystyrene foams are good thermal insulators and are therefore often used as building insulation materials, such as in insulating concrete forms and structural insulated panel building systems. Grey polystyrene foam, incorporating graphite, has superior insulation properties.
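As an illustrative calculation (assumed, not from the cited sources), the air content of a foam can be estimated from its bulk density and the density of solid polystyrene (about 1.05 g/cm3, as noted above):

```python
# Volume fraction of air in a foam estimated from its bulk density and the
# density of solid polystyrene (~1050 kg/m^3, i.e. 1.05 g/cm^3 as noted above).
SOLID_PS_KG_PER_M3 = 1050.0

def air_fraction(foam_density_kg_per_m3):
    """Approximate air volume fraction of the foam (blowing agent ignored)."""
    return 1.0 - foam_density_kg_per_m3 / SOLID_PS_KG_PER_M3

for rho in (11, 20, 32, 50):
    print(f"{rho:>3} kg/m^3 -> {air_fraction(rho):.1%} air")
# Densities in the 11-32 kg/m^3 range quoted below for EPS work out to
# roughly 97-99% air, consistent with the 95-98% figure for foams generally.
```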
Carl Munters and John Gudbrand Tandberg of Sweden received a US patent for polystyrene foam as an insulation product in 1935 (USA patent number 2,023,204).
PS foams also exhibit good damping properties and are therefore used widely in packaging. The trademark Styrofoam by Dow Chemical Company is informally used (mainly in the US and Canada) for all foamed polystyrene products, although strictly it should only be used for "extruded closed-cell" polystyrene foams made by Dow Chemical.
Foams are also used for non-weight-bearing architectural structures (such as ornamental pillars).
Expanded polystyrene (EPS)
Expanded polystyrene (EPS) is a rigid and tough, closed-cell foam with a normal density range of 11 to 32 kg/m3. It is usually white and made of pre-expanded polystyrene beads. The manufacturing process for EPS conventionally begins with the creation of small polystyrene beads. Styrene monomers (and potentially other additives) are suspended in water, where they undergo free-radical addition polymerization. The polystyrene beads formed by this mechanism may have an average diameter of around 200 μm. The beads are then permeated with a "blowing agent", a material that enables the beads to be expanded. Pentane is commonly used as the blowing agent. The beads are added to a continuously agitated reactor with the blowing agent, among other additives, and the blowing agent seeps into pores within each bead. The beads are then expanded using steam.
EPS is used for food containers, molded sheets for building insulation, and packing material either as solid blocks formed to accommodate the item being protected or as loose-fill "peanuts" cushioning fragile items inside boxes. EPS also has been widely used in automotive and road safety applications such as motorcycle helmets and road barriers on automobile race tracks.
A significant portion of all EPS products are manufactured through injection molding. Mold tools tend to be manufactured from steels (which can be hardened and plated) and aluminum alloys. The molds are controlled through a split via a channel system of gates and runners. EPS is colloquially called "styrofoam" in the Anglosphere, a genericization of Dow Chemical's brand of extruded polystyrene.
EPS in building construction
Sheets of EPS are commonly packaged as rigid panels. A common size in Europe is 100 cm x 50 cm (in practice, depending on the intended type of connection and gluing technique, often 99.5 cm x 49.5 cm or 98 cm x 48 cm); 120 cm x 60 cm is less common, and a different standard size is used in the United States. Common thicknesses are from 10 mm to 500 mm. Many customizations, additives, and thin additional external layers on one or both sides are often added to help with various properties. An example of this is lamination with cement board to form a structural insulated panel.
Thermal conductivity is measured according to EN 12667. Typical values range from 0.032 to 0.038 W/(m⋅K) depending on the density of the EPS board. The value of 0.038 W/(m⋅K) was obtained at 15 kg/m3 while the value of 0.032 W/(m⋅K) was obtained at 40 kg/m3, according to the datasheet of K-710 from StyroChem Finland. Adding fillers (graphites, aluminum, or carbons) has recently allowed the thermal conductivity of EPS to reach around 0.030–0.034 W/(m⋅K) (as low as 0.029 W/(m⋅K)); such material has a grey/black color which distinguishes it from standard EPS. Several EPS producers have developed varieties of this increased-thermal-resistance EPS for use in the UK and EU.
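As an illustration of how these conductivity figures translate into insulation performance, the following sketch computes the thermal resistance R = thickness/λ and the resulting steady-state heat flux; the board thickness and temperature difference below are illustrative assumptions, not values from the datasheets cited above:

```python
# Thermal resistance of a homogeneous board is R = thickness / conductivity,
# and the steady-state heat flux across it is q = delta_T / R.
def thermal_resistance(thickness_m, conductivity_w_per_mk):
    """R-value in m^2*K/W."""
    return thickness_m / conductivity_w_per_mk

def heat_flux(delta_t_k, r_value):
    """Heat flux in W/m^2 for a given temperature difference."""
    return delta_t_k / r_value

# Illustrative case: a 100 mm board with 20 K across it, comparing standard
# EPS (0.038 W/(m*K)) with graphite-enhanced "grey" EPS (0.031 W/(m*K)).
for name, lam in (("standard EPS", 0.038), ("grey EPS", 0.031)):
    r = thermal_resistance(0.100, lam)
    print(f"{name}: R = {r:.2f} m^2*K/W, q = {heat_flux(20, r):.1f} W/m^2")
```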
Water vapor diffusion resistance (μ) of EPS is around 30–70.
ICC-ES (International Code Council Evaluation Service) requires EPS boards used in building construction to meet ASTM C578 requirements. One of these requirements is that the limiting oxygen index of EPS as measured by ASTM D2863 be greater than 24 volume %. Typical EPS has an oxygen index of around 18 volume %; thus, a flame retardant is added to styrene or polystyrene during the formation of EPS.
The boards containing a flame retardant when tested in a tunnel using test method UL 723 or ASTM E84 will have a flame spread index of less than 25 and a smoke-developed index of less than 450. ICC-ES requires the use of a 15-minute thermal barrier when EPS boards are used inside of a building.
According to the EPS-IA ICF organization, the EPS typically used for insulated concrete forms (expanded polystyrene concrete) is either Type II or Type IX EPS according to ASTM C578. EPS blocks or boards used in building construction are commonly cut using hot wires.
Extruded polystyrene (XPS)
Extruded polystyrene foam (XPS) consists of closed cells. It offers improved surface roughness, higher stiffness and reduced thermal conductivity. The density range is about 28–34 kg/m3.
Extruded polystyrene material is also used in crafts and model building, in particular architectural models. Because of the extrusion manufacturing process, XPS does not require facers to maintain its thermal or physical property performance. Thus, it makes a more uniform substitute for corrugated cardboard. Thermal conductivity varies between 0.029 and 0.039 W/(m·K) depending on bearing strength/density and the average value is ≈0.035 W/(m·K).
Water vapor diffusion resistance (μ) of XPS is around 80–250.
Commonly extruded polystyrene foam materials include:
Styrofoam, also known as Blue Board, produced by DuPont
Depron, a thin insulation sheet also used for model building
Water absorption of polystyrene foams
Although they are closed-cell foams, neither expanded nor extruded polystyrene is entirely waterproof or vaporproof. In expanded polystyrene there are interstitial gaps between the expanded closed-cell pellets that form an open network of channels between the bonded pellets, and this network of gaps can become filled with liquid water. If the water freezes into ice, it expands and can cause polystyrene pellets to break off from the foam. Extruded polystyrene is also permeable to water molecules and cannot be considered a vapor barrier.
Water-logging commonly occurs over a long period in polystyrene foams that are constantly exposed to high humidity or are continuously immersed in water, such as in hot tub covers, in floating docks, as supplemental flotation under boat seats, and for below-grade exterior building insulation constantly exposed to groundwater. Typically an exterior vapor barrier such as impermeable plastic sheeting or a sprayed-on coating is necessary to prevent saturation.
Oriented polystyrene
Oriented polystyrene (OPS) is produced by stretching extruded PS film, improving visibility through the material by reducing haziness and increasing stiffness. This is often used in packaging where the manufacturer would like the consumer to see the enclosed product. Some benefits to OPS are that it is less expensive to produce than other clear plastics such as polypropylene (PP), polyethylene terephthalate (PET), and high-impact polystyrene (HIPS), and it is less hazy than HIPS or PP. The main disadvantage of OPS is that it is brittle, and will crack or tear easily.
Co-polymers
Ordinary (homopolymeric) polystyrene has an excellent property profile with regard to transparency, surface quality and stiffness. Its range of applications is further extended by copolymerization and other modifications (blends, e.g. with PC, and syndiotactic polystyrene). Several copolymers based on styrene are used: the brittleness of homopolymeric polystyrene is overcome by elastomer-modified styrene-butadiene copolymers. Copolymers of styrene and acrylonitrile (SAN) are more resistant to thermal stress, heat and chemicals than homopolymers and are also transparent. Copolymers called ABS have similar properties and can be used at low temperatures, but they are opaque.
Styrene-butadiene co-polymers
Styrene-butadiene co-polymers can be produced with a low butadiene content. Styrene-butadiene co-polymers include PS-I and SBC (see below); both co-polymers are impact resistant. PS-I is prepared by graft co-polymerization, SBC by anionic block co-polymerization, which makes it transparent in case of appropriate block size.
If a styrene-butadiene co-polymer has a high butadiene content, styrene-butadiene rubber (SBR) is formed.
The impact strength of styrene-butadiene co-polymers is based on phase separation: polystyrene and polybutadiene are not soluble in each other (see Flory–Huggins solution theory). Co-polymerization creates a boundary layer without complete mixing. The butadiene fractions (the "rubber phase") assemble to form particles embedded in a polystyrene matrix. A decisive factor for the improved impact strength of styrene-butadiene copolymers is their higher absorption capacity for deformation work. Without applied force, the rubber phase initially behaves like a filler. Under tensile stress, crazes (microcracks) are formed, which spread to the rubber particles. The energy of the propagating crack is then transferred to the rubber particles along its path. A large number of cracks give the originally rigid material a laminated structure. The formation of each lamella contributes to the consumption of energy and thus to an increase in elongation at break. Polystyrene homo-polymers deform when a force is applied until they break. Styrene-butadiene co-polymers do not break at this point, but begin to flow, solidify to tensile strength and only break at much higher elongation.
With a high proportion of polybutadiene, the effect of the two phases is reversed. Styrene-butadiene rubber behaves like an elastomer but can be processed like a thermoplastic.
Impact-resistant polystyrene (PS-I)
PS-I (impact-resistant polystyrene) consists of a continuous polystyrene matrix and a rubber phase dispersed therein. It is produced by polymerization of styrene in the presence of polybutadiene dissolved in styrene. Polymerization takes place simultaneously in two ways:
Graft copolymerization: The growing polystyrene chain reacts with a double bond of the polybutadiene. As a result, several polystyrene chains are attached to one polybutadiene.
In the schematic representation below, S represents the styrene repeat unit and B the butadiene repeat unit. The middle block, however, often does not consist of a pure butadiene homo-polymer but of a styrene-butadiene co-polymer:
SSSSSSSSSSSSSSSSSSSBBSBBSBSBBBBSBSSBBBSBSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
By using a statistical copolymer at this position, the polymer becomes less susceptible to cross-linking and flows better in the melt. For the production of SBS, styrene is first homopolymerized via anionic polymerization. Typically, an organometallic compound such as butyllithium is used as a catalyst. Butadiene is then added, followed by styrene again. The catalyst remains active during the whole process (for which the chemicals used must be of high purity). The molecular weight distribution of the polymers is very narrow (polydispersity in the range of 1.05; the individual chains thus have very similar lengths). The length of the individual blocks can be adjusted by the ratio of catalyst to monomer. The size of the rubber sections, in turn, depends on the block length. The production of small structures (smaller than the wavelength of light) ensures transparency. In contrast to PS-I, however, the block copolymer does not form any particles but has a lamellar structure.
Styrene-butadiene rubber
Styrene-butadiene rubber (SBR) is produced like PS-I by graft copolymerization, but with a lower styrene content. Styrene-butadiene rubber thus consists of a rubber matrix with a polystyrene phase dispersed therein. Unlike PS-I and SBC, it is not a thermoplastic, but an elastomer. Within the rubber phase, the polystyrene phase is assembled into domains. This causes physical cross-linking on a microscopic level. When the material is heated above the glass transition point, the domains disintegrate, the cross-linking is temporarily suspended and the material can be processed like a thermoplastic.
Acrylonitrile butadiene styrene
Acrylonitrile butadiene styrene (ABS) is a material that is stronger than pure polystyrene.
Others
SMA is a copolymer with maleic anhydride. Styrene can be copolymerized with other monomers; for example, divinylbenzene can be used for cross-linking the polystyrene chains to give the polymer used in solid phase peptide synthesis. Styrene-acrylonitrile resin (SAN) has a greater thermal resistance than pure styrene.
Environmental issues
Production
Polystyrene foams are produced using blowing agents that form bubbles and expand the foam. In expanded polystyrene, these are usually hydrocarbons such as pentane, which may pose a flammability hazard in manufacturing or storage of newly manufactured material, but have relatively mild environmental impact. Extruded polystyrene is usually made with hydrofluorocarbons (HFC-134a), which have global warming potentials of approximately 1000–1300 times that of carbon dioxide. Packaging, particularly expanded polystyrene, is a contributor of microplastics from both land and maritime activities.
Environmental degradation
Polystyrene is not biodegradable, but it is susceptible to photo-oxidation. For this reason, commercial products contain light stabilizers.
Litter
Animals do not recognize polystyrene foam as an artificial material and may even mistake it for food.
Polystyrene foam blows in the wind and floats on water due to its low specific gravity. It can have serious effects on the health of birds and marine animals that swallow significant quantities. Juvenile rainbow trout exposed to polystyrene fragments show toxic effects in the form of substantial histomorphometrical changes.
Reducing
Restricting the use of foamed polystyrene takeout food packaging is a priority of many solid waste environmental organisations. Efforts have been made to find alternatives to polystyrene, especially foam in restaurant settings. The original impetus was to eliminate chlorofluorocarbons (CFCs), formerly a component of foam.
United States
In 1987, Berkeley, California, banned CFC food containers. The following year, Suffolk County, New York, became the first U.S. jurisdiction to ban polystyrene in general. However, legal challenges by the Society of the Plastics Industry kept the ban from going into effect, and it was ultimately delayed when the Republican and Conservative parties gained the majority of the county legislature. In the meantime, Berkeley became the first city to ban all foam food containers. As of 2006, about one hundred localities in the United States, including Portland, Oregon, and San Francisco, had some sort of ban on polystyrene foam in restaurants. For instance, in 2007 Oakland, California, required restaurants to switch to disposable food containers that would biodegrade if added to food compost. In 2013, San Jose reportedly became the largest city in the country to ban polystyrene foam food containers. Some communities have implemented wide polystyrene bans, such as Freeport, Maine, which did so in 1990. In 1988, the first U.S. ban of general polystyrene foam was enacted in Berkeley, California.
On 1 July 2015, New York City became the largest city in the United States to attempt to prohibit the sale, possession, and distribution of single-use polystyrene foam (the initial decision was overturned on appeal). In San Francisco, supervisors approved the toughest ban on "Styrofoam" (EPS) in the US which went into effect 1 January 2017. The city's Department of the Environment can make exceptions for certain uses like shipping medicines at prescribed temperatures.
The U.S. Green Restaurant Association does not allow polystyrene foam to be used as part of its certification standard. Several green leaders, including the Dutch Ministry of the Environment, advise people to reduce their environmental harm by using reusable coffee cups.
In March 2019, Maryland banned polystyrene foam food containers and became the first state in the country to pass a food container foam ban through the state legislature. Maine was the first state to officially get a foam food container ban onto the books. In May 2019, Maryland Governor Hogan allowed the foam ban (House Bill 109) to become law without a signature making Maryland the second state to have a food container foam ban on the books, but is the first one to take effect on 1 July 2020.
In September 2020, the New Jersey state legislature voted to ban disposable foam food containers and cups made of polystyrene foam.
Outside the United States
China banned expanded polystyrene takeout/takeaway containers and tableware around 1999. However, compliance has been a problem and, in 2013, the Chinese plastics industry was lobbying for the ban's repeal.
India and Taiwan also banned polystyrene-foam food-service ware before 2007.
The government of Zimbabwe, through its Environmental Management Agency (EMA), banned polystyrene containers (popularly called 'kaylite' in the country), under Statutory Instrument 84 of 2012 (Plastic Packaging and Plastic Bottles) (Amendment) Regulations, 2012 (No 1.)
The city of Vancouver, Canada, has announced its Zero Waste 2040 plan in 2018. The city will introduce bylaw amendments to prohibit business license holders from serving prepared food in polystyrene foam cups and take-out containers, beginning 1 June 2019.
In 2019, the European Union voted to ban expanded polystyrene food packaging and cups, with the law officially going into effect in 2021.
Fiji passed the Environmental Management Bill in December 2020. Imports of polystyrene products were banned in January 2021.
Recycling
In general, polystyrene is not accepted in curbside collection recycling programs and is not separated and recycled where it is accepted. In Germany, polystyrene is collected as a consequence of the packaging law (Verpackungsverordnung) that requires manufacturers to take responsibility for recycling or disposing of any packaging material they sell.
Most polystyrene products are currently not recycled due to the lack of incentive to invest in the compactors and logistical systems required. Due to the low density of polystyrene foam, it is not economical to collect. However, if the waste material goes through an initial compaction process, the material changes density from typically 30 kg/m3 to 330 kg/m3 and becomes a recyclable commodity of high value for producers of recycled plastic pellets. Expanded polystyrene scrap can be easily added to products such as EPS insulation sheets and other EPS materials for construction applications; many manufacturers cannot obtain sufficient scrap because of collection issues. When it is not used to make more EPS, foam scrap can be turned into products such as clothes hangers, park benches, flower pots, toys, rulers, stapler bodies, seedling containers, picture frames, and architectural molding from recycled PS. As of 2016, around 100 tonnes of EPS are recycled every month in the UK.
Recycled EPS is also used in many metal casting operations. Rastra is made from EPS that is combined with cement to be used as an insulating amendment in the making of concrete foundations and walls. American manufacturers have produced insulating concrete forms made with approximately 80% recycled EPS since 1993.
Upcycling
A March 2022 joint study by scientists Sewon Oh and Erin Stache at Cornell University in Ithaca, New York found a new processing method of upcycling polystyrene to benzoic acid. The process involved irradiation of polystyrene with iron chloride and acetone under white light and oxygen for 20 hours. The scientists also demonstrated a similar scalable commercial process of upcycling polystyrene into valuable small-molecules (like benzoic acid) taking just a few hours.
Incineration
If polystyrene is properly incinerated at high temperatures (up to 1000 °C) and with plenty of air (14 m3/kg), the chemicals generated are water, carbon dioxide, and possibly small amounts of residual halogen-compounds from flame-retardants. If only incomplete incineration is done, there will also be leftover carbon soot and a complex mixture of volatile compounds. According to the American Chemistry Council, when polystyrene is incinerated in modern facilities, the final volume is 1% of the starting volume; most of the polystyrene is converted into carbon dioxide, water vapor, and heat. Because of the amount of heat released, it is sometimes used as a power source for steam or electricity generation.
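The quoted air requirement can be sanity-checked with idealized combustion stoichiometry, treating polystyrene as its repeat unit C8H8 (C8H8 + 10 O2 → 8 CO2 + 4 H2O); the sketch below is an illustrative estimate, not a figure taken from the sources above:

```python
# Idealized stoichiometry, treating polystyrene as its repeat unit C8H8:
#   C8H8 + 10 O2 -> 8 CO2 + 4 H2O
UNIT_MASS_G = 104.15       # g/mol of the C8H8 repeat unit
O2_PER_UNIT = 10           # mol O2 per mol of repeat unit
MOLAR_VOLUME_L = 22.4      # L/mol of an ideal gas at 0 degrees C and 1 atm
O2_FRACTION_IN_AIR = 0.21  # by volume

mol_units_per_kg = 1000.0 / UNIT_MASS_G
o2_volume_m3 = mol_units_per_kg * O2_PER_UNIT * MOLAR_VOLUME_L / 1000.0
air_volume_m3 = o2_volume_m3 / O2_FRACTION_IN_AIR
print(f"Stoichiometric air requirement: ~{air_volume_m3:.1f} m^3 per kg")
# ~10 m^3/kg at exact stoichiometry; the ~14 m^3/kg quoted above presumably
# reflects operation with a substantial excess of air.
```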
When polystyrene was burned at temperatures of 800–900 °C (the typical range of a modern incinerator), the products of combustion consisted of "a complex mixture of polycyclic aromatic hydrocarbons (PAHs) from alkyl benzenes to benzoperylene. Over 90 different compounds were identified in combustion effluents from polystyrene." The American National Bureau of Standards Center for Fire Research found 57 chemical by-products released during the combustion of expanded polystyrene (EPS) foam.
Safety
Health
The American Chemistry Council, formerly known as the Chemical Manufacturers' Association, writes:
From 1999 to 2002, a comprehensive review of the potential health risks associated with exposure to styrene was conducted by a 12-member international expert panel selected by the Harvard Center for Risk Assessment. The scientists had expertise in toxicology, epidemiology, medicine, risk analysis, pharmacokinetics, and exposure assessment. The Harvard study reported that styrene is naturally present in trace quantities in foods such as strawberries, beef, and spices, and is naturally produced in the processing of foods such as wine and cheese. The study also reviewed all the published data on the quantity of styrene contributing to the diet due to migration of food packaging and disposable food contact articles, and concluded that risk to the general public from exposure to styrene from foods or food-contact applications (such as polystyrene packaging and foodservice containers) was at levels too low to produce adverse effects.
Polystyrene is commonly used in containers for food and drinks. The styrene monomer (from which polystyrene is made) is a cancer suspect agent. Styrene is "generally found in such low levels in consumer products that risks aren't substantial". Polystyrene which is used for food contact may not contain more than 1% (0.5% for fatty foods) of styrene by weight. Styrene oligomers in polystyrene containers used for food packaging have been found to migrate into the food. Another Japanese study conducted on wild-type and AhR-null mice found that the styrene trimer, which the authors detected in cooked polystyrene container-packed instant foods, may increase thyroid hormone levels.
Whether polystyrene can be microwaved with food is controversial. Some containers may be safely used in a microwave, but only if labeled as such. Some sources suggest that foods containing carotene (vitamin A) or cooking oils must be avoided.
Because of the pervasive use of polystyrene, these serious health-related issues remain topical.
Fire hazards
Like other organic compounds, polystyrene is flammable. Polystyrene is classified according to DIN4102 as a "B3" product, meaning highly flammable or "Easily Ignited". As a consequence, although it is an efficient insulator at low temperatures, its use is prohibited in any exposed installations in building construction if the material is not flame-retardant. It must be concealed behind drywall, sheet metal, or concrete. Foamed polystyrene plastic materials have been accidentally ignited and caused huge fires and losses of life, for example at the Düsseldorf International Airport and in the Channel Tunnel (where polystyrene was inside a railway carriage that caught fire).
See also
Styrofoam
Foam food container
Bioplastic
Geofoam
Structural insulated panel
Polystyrene sulfonate
Shrinky Dinks
Insulating concrete form
Foamcore
References
Sources
Bibliography
External links
Polystyrene Composition – The University of Southern Mississippi
SPI resin identification code – Society of the Plastics Industry
Polystyrene: Local Ordinances – Californians Against Waste
Take a Closer Look at Today's Polystyrene Packaging (brochure by the industry group American Chemistry Council, arguing that the material is "safe, affordable and environmentally responsible")
Insulators
Building insulation materials
Organic polymers
Packaging materials
Food packaging
Thermoplastics
Commodity chemicals
Vinyl polymers | Polystyrene | Chemistry | 8,076 |
7,854,648 | https://en.wikipedia.org/wiki/Black%20cocaine | Black cocaine () is a mixture of regular cocaine base or cocaine hydrochloride with various other substances. These other substances are added
to camouflage the typical appearance (pigments and dyes, e.g. charcoal),
to interfere with color-based drug tests (mixing thiocyanates and iron salts or cobalt salts forms deep red complexes in solution),
to make the mixture undetectable by drug sniffing dogs (activated carbon may sufficiently absorb trace odors).
Since the result is usually black, it is generally smuggled as toner, fingerprint powder, fertilizer, pigment, metal moldings, or charcoal. The pure cocaine base can be recovered from the mixture by extraction (freebase) or acid-base extraction (hydrochloride) using common organic solvents such as methylene chloride or acetone. A second process is required to convert cocaine base into powdered cocaine hydrochloride.
It was reported that in the mid-1980s Chilean dictator Augusto Pinochet ordered his army to build a clandestine cocaine laboratory in Chile where chemists mixed cocaine with other chemicals to produce what Pinochet's former top aide for intelligence Manuel Contreras described as a "black cocaine" capable of being smuggled past drug agents in the US and Europe.
Black cocaine was detected in Bogota, Colombia in May 1998. In 2008, a new type of black cocaine was discovered by police in Spain. It had been manufactured into rubber-like sheets and made into luggage. In 2021, a consignment of black cocaine disguised as charcoal, packed in 30 sacks among 1,364 sacks of charcoal, was seized in Spain, one of the biggest cocaine seizures recorded in Castilla y León.
See also
Cocaine paste
Black tar heroin
References
Cocaine
Smuggling
Military dictatorship of Chile (1973–1990)
Adulteration | Black cocaine | Chemistry | 366 |
66,990,938 | https://en.wikipedia.org/wiki/List%20of%20parties%20to%20weapons%20of%20mass%20destruction%20treaties | The list of parties to weapons of mass destruction treaties encompasses the states which have signed and ratified, succeeded, or acceded to any of the major multilateral treaties prohibiting or restricting weapons of mass destruction (WMD), in particular nuclear, biological, or chemical weapons.
Overview
List of states parties to weapons of mass destruction treaties
The following list was last updated in March 2021.
Legend:
GP = Geneva Protocol
BWC = Biological Weapons Convention
CWC = Chemical Weapons Convention
NPT = Nuclear Non-Proliferation Treaty
TPNW = Treaty on the Prohibition of Nuclear Weapons
CTBT = Comprehensive Nuclear-Test-Ban Treaty
S = signing; R = ratification; A = accession; Su = succession; Ac = acceptance
Notes
See also
List of parties to the Biological Weapons Convention
List of parties to the Chemical Weapons Convention
List of parties to the Comprehensive Nuclear-Test-Ban Treaty
List of parties to the Treaty on the Non-Proliferation of Nuclear Weapons
List of parties to the Treaty on the Prohibition of Nuclear Weapons
List of parties to the Partial Nuclear Test Ban Treaty
References
WMD treaty parties | List of parties to weapons of mass destruction treaties | Chemistry,Biology | 232 |
15,820,232 | https://en.wikipedia.org/wiki/Common%20Arrangement%20of%20Work%20Sections | Common Arrangement of Work Sections (CAWS), first published in 1987, is a construction industry working convention in the UK. It was designed to promote standardisation of, and detailed coordination between, bills of quantities and specifications. It is part of an industry-wide initiative to produce coordinated projects information (now managed by the Construction Project Information Committee). CAWS has been used for the arrangement of the National Building Specification, the National Engineering Specification and the Standard Method of Measurement of Building Works (SMM7) (7th ed).
The new edition aligns CAWS with the Unified Classification for the Construction Industry (Uniclass) which was published in 1997.
The Common Arrangement is the authoritative UK classification of work sections for building work, for use in arranging project specifications and bills of quantities. Over 300 work sections are defined in detail to give:
good coordination between drawings, specifications and bills of quantities
predictability of location of relevant information
fewer oversights and discrepancies between documents
flexibility to the contractor in dividing the project information into work packages.
The classification of work sections is separate from, and complementary to, the classification of other concepts such as building types, elements, construction products and properties/characteristics. Uniclass, published in 1997, is the definitive overall set of classification tables, one of which covers work sections for buildings and comprises the Common Arrangement group, sub-group and work section headings.
External links
NBS
Civil engineering
Construction industry of the United Kingdom | Common Arrangement of Work Sections | Engineering | 296 |
961,677 | https://en.wikipedia.org/wiki/Krytron | The krytron is a cold-cathode gas-filled tube intended for use as a very high-speed switch, somewhat similar to the thyratron. It consists of a sealed glass tube with four electrodes. A small triggering pulse on the grid electrode switches the tube on, allowing a large current to flow between the cathode and anode electrodes. The vacuum version is called a vacuum krytron, or sprytron. The krytron was one of the earliest developments of the EG&G Corporation.
Description
Unlike most other gas switching tubes, the krytron conducts by means of an arc discharge, to handle very high voltages and currents (reaching several kilovolts and several kiloamperes), rather than the low-current glow discharge used in other thyratrons. The krytron is a development of the triggered spark gaps and thyratrons originally developed for radar transmitters during World War II.
The gas used in krytrons is hydrogen; noble gases (usually krypton), or a Penning mixture can also be used.
Operation
A krytron has four electrodes. Two are a conventional anode and cathode. One is a keep-alive electrode, placed near the cathode. The keep-alive has a low positive voltage applied, which causes a small area of gas to ionize near the cathode. High voltage is applied to the anode, but primary conduction does not occur until a positive pulse is applied to the trigger electrode (grid). Once started, arc conduction carries a considerable current.
The fourth is a control grid, usually wrapped around the anode, except for a small opening on its top.
In place of or in addition to the keep-alive electrode, some krytrons may contain a tiny amount of radioactive material (usually nickel-63), which emits beta particles (high-speed electrons) to make ionization easier. The radiation source serves to increase the reliability of ignition and formation of the keep-alive electrode discharge.
The gas filling provides ions for neutralizing the space charge and allowing high currents at lower voltage. The keep-alive discharge populates the gas with ions, forming a preionized plasma. This can shorten the arc formation time by 3–4 orders of magnitude in comparison with non-preionized tubes, as time does not have to be spent on ionizing the medium during formation of the arc path.
The electric arc is self-sustaining. Once the tube is triggered, it conducts until the arc is interrupted by the current falling too low for too long (under 10 milliamperes for more than 100 microseconds for the KN22 krytrons).
Krytrons and sprytrons are triggered by a high voltage from a capacitor discharge via a trigger transformer, in a similar way flashtubes for e.g. photoflash applications are triggered. Devices integrating a krytron with a trigger transformer are available.
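As a rough illustration of such a trigger circuit (the component values below are assumed for illustration and are not taken from any datasheet), the stored energy and the stepped-up pulse voltage follow from E = ½CV² and the transformer turns ratio:

```python
# Illustrative (not datasheet) component values for a capacitor-discharge
# trigger: stored energy E = 1/2 * C * V^2, stepped up by an ideal pulse
# transformer with the given turns ratio.
def stored_energy_j(capacitance_f, voltage_v):
    return 0.5 * capacitance_f * voltage_v ** 2

def secondary_voltage_v(primary_v, turns_ratio):
    return primary_v * turns_ratio

C = 100e-9  # 100 nF trigger capacitor (assumed)
V = 300.0   # charged to 300 V (assumed)
N = 5.0     # 1:5 step-up transformer (assumed)
print(f"Stored energy: {stored_energy_j(C, V) * 1e3:.1f} mJ")
print(f"Open-circuit trigger pulse: ~{secondary_voltage_v(V, N):.0f} V")
# ~4.5 mJ and ~1500 V, within the 200-2000 V trigger-pulse range quoted
# below for krytrons; sprytrons need a stronger pulse.
```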
Sprytron
A sprytron, also known as vacuum krytron or triggered vacuum switch (TVS), is a vacuum, rather than a gas-filled, version. It is designed for use in environments with high levels of ionizing radiation, which might trigger a gas-filled krytron spuriously. It is also more immune to electromagnetic interference than gas-filled tubes.
Sprytrons lack the keep alive electrode and the preionization radioactive source. The trigger pulse must be stronger than for a krytron. Sprytrons are able to handle higher currents. Krytrons tend to be used for triggering a secondary switch, e.g., a triggered spark gap, while sprytrons are usually connected directly to the load.
The trigger pulse has to be much more intense, as there is no preionized gas path for the electric current, and a vacuum arc must form between the cathode and anode. An arc first forms between the cathode and the grid, then a breakdown occurs between the cathode–grid conductive region and the anode.
Sprytrons are evacuated to hard vacuum, typically 0.001 Pa. As kovar and other metals are somewhat permeable to hydrogen, especially during the 600 °C bake-out before evacuation and sealing, all external metal surfaces must be plated with a thick (25 microns or more) layer of soft gold. The same metallization is used for other switch tubes as well.
Sprytrons are often designed similar to trigatrons, with the trigger electrode coaxial to the cathode. In one design the trigger electrode is formed as metallization on the inner surface of an alumina tube. The trigger pulse causes surface flashover, which liberates electrons and vaporized surface discharge material into the inter-electrode gap, which facilitates formation of a vacuum arc, closing the switch. The short switching time suggests electrons from the trigger discharge and the corresponding secondary electrons knocked from the anode as the initiation of the switching operation; the vaporized material travels too slowly through the gap to play significant role. The repeatability of the triggering can be improved by special coating of the surface between the trigger electrode and the cathode, and the jitter can be improved by doping the trigger substrate and modifying the trigger probe structures. Sprytrons can degrade in storage, by outgassing from their components, diffusion of gases (especially hydrogen) through the metal components, and gas leaks through the hermetic seals. An example tube manufactured with internal pressure of 0.001 Pa will exhibit spontaneous gap breakdowns when the pressure inside rises to 1 Pa. Accelerated testing of storage life can be done by storing in increased ambient pressure, optionally with added helium for leak testing, and increased temperature storage (150 °C) for outgassing testing. Sprytrons can be made miniaturized and rugged.
Sprytrons can be also triggered by a laser pulse. In 1999 the laser pulse energy needed to trigger a sprytron was reduced to 10 microjoules.
Sprytrons are usually manufactured as rugged metal/ceramic parts. They typically have low inductance (10 nanohenries) and low electrical resistance when switched on (10–30 milliohms). After triggering, just before the sprytron switches fully on in avalanche mode, it briefly becomes slightly conductive (carrying 100–200 amperes); high-power MOSFET transistors operating in avalanche mode show similar behavior. SPICE models for sprytrons are available.
Performance
This design, dating from the late 1940s, is still capable of pulse-power performance that even the most advanced semiconductors (even IGBTs) cannot match easily. Krytrons and sprytrons are capable of handling high-current high-voltage pulses, with very fast switching times, and constant, low jitter time delay between application of the trigger pulse and switching on.
Krytrons can switch currents of up to about 3000 amperes and voltages up to about 5000 volts. Commutation time of less than 1 nanosecond can be achieved, with a delay between the application of the trigger pulse and switching as low as about 30 nanoseconds. The achievable jitter may be below 5 nanoseconds. The required trigger pulse voltage is about 200–2000 volts; higher voltages decrease the switching delay to some degree. Commutation time can be somewhat shortened by increasing the trigger pulse rise time. A given krytron tube will give very consistent performance to identical trigger pulses (low jitter). The keep-alive current ranges from tens to hundreds of microamperes. The pulse repetition rate can range from one per minute to tens of thousands per minute.
Switching performance is largely independent of the environment (temperature, acceleration, vibration, etc.). However, the formation of the keep-alive glow discharge is more sensitive, which necessitates the use of a radioactive source to aid its ignition.
Krytrons have a limited lifetime, ranging, according to type, typically from tens of thousands to tens of millions of switching operations, and sometimes only a few hundreds.
Sprytrons have somewhat faster switching times than krytrons.
Hydrogen-filled thyratrons may be used as a replacement in some applications.
Applications
Krytrons and their variations are manufactured by Perkin-Elmer Components and used in a variety of industrial and military devices. They are best known for their use in igniting exploding-bridgewire and slapper detonators in nuclear weapons, their original application, either directly (sprytrons are usually used for this) or by triggering higher-power spark gap switches. They are also used to trigger thyratrons, large flashlamps in photocopiers, lasers and scientific apparatus, and for firing ignitors for industrial explosives.
Export restrictions in the United States
Because of their potential for use as triggers of nuclear weapons, the export of krytrons is tightly regulated in the United States. A number of cases involving the smuggling or attempted smuggling of krytrons have been reported, as countries seeking to develop nuclear weapons have attempted to procure supplies of krytrons for igniting their weapons. One prominent case was that of Richard Kelly Smyth, who allegedly helped Arnon Milchan smuggle 15 orders of 810 krytrons total to Israel in the early 1980s. 469 of these were returned to the United States, with Israel claiming the remaining 341 were "destroyed in testing".
Krytrons and sprytrons handling voltages of 2,500 V and above, currents of 100 A and above, and switching delays of under 10 microseconds are typically suitable for nuclear weapon triggers.
In popular culture
A krytron was the "MacGuffin" in Roman Polanski's 1988 film Frantic. The device in the film was actually a Krytron-Pac, which consisted of a Krytron tube along with a trigger transformer encased in black epoxy.
The krytron, incorrectly called a "kryton", also appeared in the Tom Clancy nuclear terrorism novel The Sum of All Fears.
The plot of Larry Collins' book The Road to Armageddon revolved heavily around American-made krytrons that Iranian mullahs wanted for three Russian nuclear artillery shells they had hoped to upgrade to full nuclear weapons.
The term "krytron" appeared in the season 3, episode 14 (Provenance) of the television drama Person of Interest.
In Season 3 of NCIS episode "Kill Ari, Part 2", it was revealed that Ari Haswari, a rogue Mossad operative, had been tasked with acquiring a krytron trigger. Along with stolen plutonium from Dimona, these were key components for an Israeli sting operation. The krytron was also incorrectly called a "kryton".
Further developments
Optically triggered solid-state switches based on diamond are a potential candidate for krytron replacement.
Notes
References
EG&G Electronic Components Catalog, 1994.
CBS/Hytron second source documentation:
"Krytron Trigger Tubes" spec sheets E-337, E-337A-1, E-337A-2
"7229 Cold-Cathode Trigger Tube" data sheet E287B
"7230 Reliable Cold-Cathode Trigger Tube" data sheet E287C
"7231 Subminiature Cold-Cathode Trigger Tube" data sheet E287D
"7232 Reliable Subminiature Cold-Cathode Trigger Tube" data sheet E287E
External links
John Pasley's article about gas-filled switch tubes, Krytron section
Photo of a small glass krytron
40 month sentence to illegal exporter (though the sentence was definitely related to the 'fugitive' details)
Gas-filled tubes
Nuclear weapons
Pulsed power
Switching tubes
Vacuum tubes | Krytron | Physics | 2,468 |
28,615 | https://en.wikipedia.org/wiki/Sequencing | In genetics and biochemistry, sequencing means to determine the primary structure (sometimes incorrectly called the primary sequence) of an unbranched biopolymer. Sequencing results in a symbolic linear depiction known as a sequence which succinctly summarizes much of the atomic-level structure of the sequenced molecule.
DNA sequencing
DNA sequencing is the process of determining the nucleotide order of a given DNA fragment. So far, most DNA sequencing has been performed using the chain termination method developed by Frederick Sanger. This technique uses sequence-specific termination of a DNA synthesis reaction using modified nucleotide substrates. However, new sequencing technologies such as pyrosequencing are gaining an increasing share of the sequencing market. More genome data are now being produced by pyrosequencing than Sanger DNA sequencing. Pyrosequencing has enabled rapid genome sequencing. Bacterial genomes can be sequenced in a single run with several times coverage with this technique. This technique was also used to sequence the genome of James Watson recently.
The sequence of DNA encodes the necessary information for living things to survive and reproduce. Determining the sequence is therefore useful in fundamental research into why and how organisms live, as well as in applied subjects. Because of the key importance DNA has to living things, knowledge of DNA sequences is useful in practically any area of biological research. For example, in medicine it can be used to identify, diagnose, and potentially develop treatments for genetic diseases. Similarly, research into pathogens may lead to treatments for contagious diseases. Biotechnology is a burgeoning discipline, with the potential for many useful products and services.
The Carlson curve is a term coined by The Economist to describe the biotechnological equivalent of Moore's law, and is named after author Rob Carlson. Carlson accurately predicted the doubling time of DNA sequencing technologies (measured by cost and performance) would be at least as fast as Moore's law. Carlson curves illustrate the rapid (in some cases hyperexponential) decreases in cost, and increases in performance, of a variety of technologies, including DNA sequencing, DNA synthesis, and a range of physical and computational tools used in protein expression and in determining protein structures.
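As a purely arithmetic illustration of what a given doubling time implies (the starting cost and the doubling time below are assumed for illustration, not historical data):

```python
# If price-performance doubles every `doubling_months`, cost falls by a
# factor of 2 ** (elapsed / doubling_months). The starting cost and the
# 18-month doubling time below are assumed purely for illustration.
def cost_after(initial_cost, months_elapsed, doubling_months):
    return initial_cost / 2 ** (months_elapsed / doubling_months)

initial = 10_000_000.0  # arbitrary starting cost
for years in (2, 5, 10):
    print(f"after {years:>2} yr: {cost_after(initial, years * 12, 18):,.0f}")
# A Moore's-law-like 18-month doubling time cuts cost roughly 100-fold per
# decade; during hyperexponential periods, sequencing costs fell even faster.
```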
Sanger sequencing
In chain terminator sequencing (Sanger sequencing), extension is initiated at a specific site on the template DNA by using a short oligonucleotide 'primer' complementary to the template at that region. The oligonucleotide primer is extended using a DNA polymerase, an enzyme that replicates DNA. Included with the primer and DNA polymerase are the four deoxynucleotide bases (DNA building blocks), along with a low concentration of a chain-terminating nucleotide (most commonly a di-deoxynucleotide). The dideoxynucleotides lack the OH group at both the 2' and 3' positions of the ribose, so once they are incorporated into a DNA molecule they prevent it from being further elongated. In this sequencer four different vessels are employed, each containing only one of the four dideoxyribonucleotides; the incorporation of the chain-terminating nucleotides by the DNA polymerase at random positions results in a series of related DNA fragments, of different sizes, that terminate with a given dideoxyribonucleotide. The fragments are then size-separated by electrophoresis in a slab polyacrylamide gel, or more commonly now, in a narrow glass tube (capillary) filled with a viscous polymer.
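The chain-termination principle can be illustrated with a small toy model (a simplification for illustration only; real reactions produce fragments probabilistically and require a primer): each of the four reactions yields a ladder of fragment lengths ending in its dideoxynucleotide, and reading the ladders from shortest to longest fragment recovers the synthesized strand.

```python
# Toy model of chain-termination (Sanger) sequencing. In each of the four
# reactions, chains can terminate wherever the matching dideoxynucleotide is
# incorporated, producing a ladder of fragment lengths; reading the four
# ladders from the shortest to the longest fragment recovers the synthesized
# strand (the complement of the template).
def termination_ladder(template):
    complement = {"A": "T", "T": "A", "G": "C", "C": "G"}
    synthesized = "".join(complement[b] for b in template)  # strand being built
    ladder = {base: [] for base in "ACGT"}
    for position, base in enumerate(synthesized, start=1):
        ladder[base].append(position)  # a chain of this length ends in that ddNTP
    return ladder

template = "TACGGTCA"
ladder = termination_ladder(template)
print(ladder)  # fragments ending in ddATP have lengths 1 and 6, etc.
read = "".join(b for _, b in sorted((l, b) for b, ls in ladder.items() for l in ls))
print(read)    # "ATGCCAGT" -- the complement of the template, read off the gel
```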
An alternative to the labelling of the primer is to label the terminators instead, commonly called 'dye terminator sequencing'. The major advantage of this approach is the complete sequencing set can be performed in a single reaction, rather than the four needed with the labeled-primer approach. This is accomplished by labelling each of the dideoxynucleotide chain-terminators with a separate fluorescent dye, which fluoresces at a different wavelength. This method is easier and quicker than the dye primer approach, but may produce more uneven data peaks (different heights), due to a template dependent difference in the incorporation of the large dye chain-terminators. This problem has been significantly reduced with the introduction of new enzymes and dyes that minimize incorporation variability.
This method is now used for the vast majority of sequencing reactions as it is both simpler and cheaper. The major reason for this is that the primers do not have to be separately labelled (which can be a significant expense for a single-use custom primer), although this is less of a concern with frequently used 'universal' primers. This is changing rapidly due to the increasing cost-effectiveness of second- and third-generation systems from Illumina, 454, ABI, Helicos, and Dover.
Pyrosequencing
The pyrosequencing method is based on the detection of the pyrophosphate released on nucleotide incorporation. Before performing pyrosequencing, the DNA strand to sequence has to be amplified by PCR. Then the order in which the nucleotides are to be added in the sequencer is chosen (i.e. G-A-T-C). When a specific nucleotide is added, if the DNA polymerase incorporates it in the growing chain, pyrophosphate is released and converted into ATP by ATP sulfurylase. ATP powers the oxidation of luciferin by luciferase; this reaction generates a light signal recorded as a pyrogram peak. In this way, nucleotide incorporation is correlated to a signal. The light signal is proportional to the number of nucleotides incorporated during the synthesis of the DNA strand (i.e. two nucleotides incorporated in one flow give a pyrogram peak of double height). When the added nucleotides are not incorporated in the DNA molecule, no signal is recorded; the enzyme apyrase removes any unincorporated nucleotide remaining in the reaction.
This method requires neither fluorescently-labelled nucleotides nor gel electrophoresis.
Pyrosequencing, which was developed by Pål Nyrén and Mostafa Ronaghi, has been commercialized by Biotage (for low-throughput sequencing) and 454 Life Sciences (for high-throughput sequencing). The latter platform sequences roughly 100 megabases [now up to 400 megabases] in a seven-hour run with a single machine. In the array-based method (commercialized by 454 Life Sciences), single-stranded DNA is annealed to beads and amplified via emulsion PCR (emPCR). These DNA-bound beads are then placed into wells on a fiber-optic chip along with enzymes which produce light in the presence of ATP. When free nucleotides are washed over this chip, light is produced as ATP is generated when nucleotides join with their complementary base pairs. Addition of one (or more) nucleotide(s) results in a reaction that generates a light signal that is recorded by the CCD camera in the instrument. The signal strength is proportional to the number of nucleotides incorporated in a single nucleotide flow, for example in homopolymer stretches.
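The relationship between flows and pyrogram peaks can be sketched with a small simulation (an idealized toy model, not vendor software): nucleotides are offered in a fixed order, and each flow records a peak proportional to the number of matching bases incorporated.

```python
from itertools import cycle

# Idealized pyrosequencing signal model: nucleotides are flowed in a fixed
# order (here G-A-T-C); each flow incorporates however many matching bases
# come next in the growing strand, and the recorded peak height is
# proportional to that number. Flows whose nucleotide is not incorporated
# give zero signal.
def pyrogram(synthesized_strand, flow_order="GATC"):
    peaks, i = [], 0
    flows = cycle(flow_order)
    while i < len(synthesized_strand):
        nucleotide = next(flows)
        run = 0
        while i < len(synthesized_strand) and synthesized_strand[i] == nucleotide:
            run += 1
            i += 1
        peaks.append((nucleotide, run))  # (flowed nucleotide, peak height)
    return peaks

print(pyrogram("GGATTTC"))
# [('G', 2), ('A', 1), ('T', 3), ('C', 1)] -- the homopolymers GG and TTT
# appear as single peaks of height 2 and 3 rather than as separate peaks.
```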
True single molecule sequencing
Large-scale sequencing
Whereas the sections above describe various sequencing methods, separate related terms are used when a large portion of a genome is sequenced. Several platforms were developed to perform exome sequencing (a subset of all DNA across all chromosomes that encodes genes) or whole genome sequencing (sequencing of all the nuclear DNA of a human).
RNA sequencing
RNA is less stable in the cell, and also more prone to nuclease attack experimentally. As RNA is generated by transcription from DNA, the information is already present in the cell's DNA. However, it is sometimes desirable to sequence RNA molecules. While sequencing DNA gives a genetic profile of an organism, sequencing RNA reflects only the sequences that are actively expressed in the cells. To sequence RNA, the usual method is first to reverse transcribe the RNA extracted from the sample to generate cDNA fragments. This can then be sequenced as described above.
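A minimal sketch of this first step follows (reverse transcription is modeled here simply as base-pairing; a real protocol also requires a primer and a reverse transcriptase enzyme):

```python
# Reverse transcription modeled simply as base-pairing the RNA into a
# first-strand cDNA (a real protocol also needs a primer and a reverse
# transcriptase enzyme).
RNA_TO_CDNA = {"A": "T", "U": "A", "G": "C", "C": "G"}

def first_strand_cdna(rna):
    """Return the cDNA strand complementary to the given RNA sequence."""
    return "".join(RNA_TO_CDNA[base] for base in rna.upper())

mrna = "AUGGCCUUU"
print(first_strand_cdna(mrna))  # TACCGGAAA -- this cDNA is then sequenced as DNA
```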
The bulk of the RNA expressed in cells consists of ribosomal RNAs and small RNAs, which are essential for cellular translation but usually not the focus of a study. This fraction can be removed in vitro, however, to enrich for the messenger RNA, which is usually the fraction of interest. Derived from exons, these mRNAs are later translated into proteins that support particular cellular functions. The expression profile therefore indicates cellular activity, which is of particular interest in studies of diseases, cellular behaviour, and responses to reagents or stimuli. Eukaryotic RNA molecules are not necessarily co-linear with their DNA template, as introns are excised. This adds complexity to mapping the read sequences back to the genome and thereby identifying their origin.
For more information on the capabilities of next-generation sequencing applied to whole transcriptomes see: RNA-Seq and MicroRNA Sequencing.
Protein sequencing
Methods for performing protein sequencing include:
Edman degradation
Peptide mass fingerprinting
Mass spectrometry
Protease digests
If the gene encoding the protein is known, it is currently much easier to sequence the DNA and infer the protein sequence. Determining part of a protein's amino-acid sequence (often one end) by one of the above methods may be sufficient to identify a clone carrying this gene.
Polysaccharide sequencing
Though polysaccharides are also biopolymers, it is not so common to talk of 'sequencing' a polysaccharide, for several reasons. Although many polysaccharides are linear, many have branches. Many different units (individual monosaccharides) can be used, and bonded in different ways. However, the main theoretical reason is that whereas the other polymers listed here are primarily generated in a 'template-dependent' manner by one processive enzyme, each individual join in a polysaccharide may be formed by a different enzyme. In many cases the assembly is not uniquely specified; depending on which enzyme acts, one of several different units may be incorporated. This can lead to a family of similar molecules being formed. This is particularly true for plant polysaccharides. Methods for the structure determination of oligosaccharides and polysaccharides include NMR spectroscopy and methylation analysis.
See also
Exome sequencing
Full genome sequencing
Genetic code
Pathogenomics
RNA-Seq
MicroRNA sequencing
Sequence motif
References
Links
https://www.nature.com/subjects/sequencing
Biochemistry methods
Molecular biology | Sequencing | Chemistry,Biology | 2,197 |
1,158,068 | https://en.wikipedia.org/wiki/Mayer%20f-function | The Mayer f-function is an auxiliary function that often appears in the series expansion of thermodynamic quantities related to classical many-particle systems. It is named after chemist and physicist Joseph Edward Mayer.
Definition
Consider a system of classical particles interacting through a pair-wise potential Φ(i, j), where the bold labels i and j denote the continuous degrees of freedom associated with the particles, e.g., i = r_i for spherically symmetric particles and i = (r_i, Ω_i) for rigid non-spherical particles, where r denotes position and Ω the orientation, parametrized e.g. by Euler angles. The Mayer f-function is then defined as

f(i, j) = exp(−β Φ(i, j)) − 1,

where β = 1/(kBT) is the inverse absolute temperature in units of energy−1, with kB the Boltzmann constant and T the absolute temperature.
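As a numerical illustration (in assumed reduced Lennard-Jones units, not part of the original definition), the f-function can be evaluated for a Lennard-Jones pair potential and integrated to give the second virial coefficient B2(T) = −2π ∫ f(r) r² dr:

```python
import math

# Reduced Lennard-Jones units are assumed (sigma = epsilon = 1), so beta is
# simply 1/T*. f(r) = exp(-beta*phi(r)) - 1, and the second virial
# coefficient is B2(T) = -2*pi * integral of f(r) * r^2 dr.
def lennard_jones(r):
    return 4.0 * (r ** -12 - r ** -6)

def mayer_f(r, beta):
    return math.exp(-beta * lennard_jones(r)) - 1.0

def second_virial(beta, r_max=10.0, n=20000):
    dr = r_max / n
    total = sum(mayer_f((k + 0.5) * dr, beta) * ((k + 0.5) * dr) ** 2 * dr
                for k in range(n))
    return -2.0 * math.pi * total

for t_reduced in (1.0, 1.5, 3.0):
    print(f"T* = {t_reduced}: B2* = {second_virial(1.0 / t_reduced):+.2f}")
# B2 is negative at low temperature (attraction dominates) and increases
# toward positive values at high temperature, changing sign at the Boyle point.
```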
See also
Virial coefficient
Cluster expansion
Excluded volume
Notes
Special functions | Mayer f-function | Mathematics | 141 |