rely on biological signals of reproductive success and non-biological signals, such as the female's willingness to marry. Unlike many animals, humans are not able to consciously display physical changes to their bodies when they are ready to mate, so they have to rely on other forms of communication before engaging in a consensual relationship. Romantic love is the mechanism through which long-term mate choice occurs in human males. For long-term sexual relationships, men are usually just as choosy as women because their parental investment is similar: they invest heavily in the offspring in the form of resource provisioning. Males may look for:

Commitment and marriage: A human male may be interested in mating with a female who seeks marriage. This is because he has exclusive sexual access to the female, so any offspring produced in the relationship will be genetically related to him (unless the female has sexual intercourse with another male outside of the marriage). This increases the likelihood of paternity certainty. With two married parents investing in the offspring, their chance of survival may increase; therefore the male's DNA will be passed on to the children of his offspring. Also, a male who is interested in committing to a female may be more attractive to potential mates. A male who can promise resources and future parental investment is likely to be more appealing to women than one who is unwilling to commit.

Facial symmetry: Symmetrical faces have been judged to signal good general health and a woman's ability to withstand adverse environmental factors, such as illness.

Femininity: A feminine face can be a signal of youth, which in turn signals strong reproductive value. As a woman gets older, her facial features become less feminine due to ageing. Femininity can also be linked
to disease resistance and high estrogen levels, which are factors that suggest reproductive value to a potential mate.

Physical beauty: Observable characteristics of a woman can indicate good health and the ability to reproduce, qualities which are likely to be desired by a male. These may include smooth skin, absence of lesions, muscle tone, long hair and high energy levels. Women with darker features (lips, eyes, eyebrows) relative to their facial skin have been found to be more attractive, as this increases facial contrast (the same features appear to decrease male attractiveness).

Waist-to-hip ratio: A waist-to-hip ratio of 0.7 is an indicator of fertility and lower long-term health risks, and suggests that the woman is not already pregnant. A male is likely to desire these qualities in a mate, as they will increase the chance of survival of any offspring the couple have together.

Breasts: The pigmentation of the nipples and breasts appears to be the most important quality of breast attractiveness. Men rated women with dark nipples and dark areolas as significantly more attractive than those with light-colored nipples or areolas. Breasts of medium cup size were found to be the most attractive; however, the authors noted that men focused primarily on the coloration of the nipples and areolas rather than on breast size.

Youth: Both young and old men are attracted to women in their twenties. Faces that appear younger are usually rated as more attractive by males.

== Parasite stress on mate choice == The parasite-stress theory, otherwise known as pathogen stress, states that parasites or diseases put stress on the life development of an organism, leading to a change in the appearance of its sexually attractive traits. The initial research on the Hamilton–Zuk hypothesis (see indicator traits) showed that, within a single species of brightly colored bird, there was greater sexual selection for males that
had brighter plumage (feathers). In addition, Hamilton and Zuk showed that, comparing across multiple species, there is greater selection for physical attributes in species under greater parasitic stress. This has influenced research regarding human mate choice. In societies with a high prevalence of parasites or pathogens, members would derive greater evolutionary advantage from selecting for physical attractiveness/good looks in mate choice compared to that derived by members of societies with lower prevalence. Humans could use physical attractiveness to determine resistance to parasites and diseases, which are believed to lower their sufferers' ability to display attractive traits from then on and to limit the number of high-quality, pathogen-resistant mates. In cultures where parasitic infection is especially high, members could use the cues available to them to determine the physical health status of a potential mate. Regardless of wealth or ideology, females in areas that are more at risk or have higher rates of parasites and diseases would weigh masculinity more highly when rating potential mates.

Scarification: In pre-industrial societies, body markings such as tattoos or scarifications are predicted to have been a way in which individuals could attract potential mates, by indicating the reproductive quality of a person. That is, scars on the body could be viewed by prospective mates as evidence that a person has overcome parasites, making the person more attractive to potential mates. Research investigating this hypothesis (Singh and Bronstad 1997) found that, in instances of increased pathogen prevalence, the only anatomical area with evidence of scarification in females was the stomach, with no evidence found for male scarification.

Masculinity: In societies where there are high levels of parasites or diseases, females are predicted to place increasing emphasis on masculinity in their mate preferences as the overall health of members decreases. Women look for signs of masculinity in
areas such as the voice, face and body shape of males. The face, in particular, may hold several cues for parasitic resistance and has been the subject of most attractiveness research.

Polygamy: Tropical areas were originally associated with polygynous societies as a result of the surrounding environment being both ecologically richer and more homogeneous. However, whilst tropical areas were associated with polygamy, pathogen stress is predicted to be a better indicator of polygamy and has been positively correlated with it. Furthermore, over the course of human evolution, areas which had high levels of parasite stress may have shifted the polygamy threshold and increased the presence of certain types of polygamy in a society.

=== Criticisms === Gangestad and Buss (2009) say that research indicates that parasite stress may have influenced mate choice only through females searching for "good genes" which show parasite resistance, in areas which have a high prevalence of parasites. John Cartwright also points out that females may simply be avoiding the transmission of parasites to themselves, rather than choosing males with good genes, and that females look for more than just parasite-resistant genes.

== MHC-correlated mate choice == The major histocompatibility complex (MHC) or, in humans, human leukocyte antigen (HLA) produces proteins that are essential for immune system functioning. The genes of the MHC complex have extremely high variability, assumed to be a result of frequency-dependent parasite-driven selection and mate choice. This is believed to promote heterozygosity, improving the chances of survival for the offspring.

=== Odor preferences === In humans, there is evidence that women will rate men's odor as more pleasant if the odor carries MHC-dissimilar antigens, which is proposed as a way of avoiding inbreeding and increasing heterozygosity. However, women on contraceptive pills rate the odor of MHC-similar men as more pleasant;
it is unknown why women on contraceptive pills rate odors in this way. It was also found that MHC-similar smells were processed faster. Contrary to these findings, other studies have found no correlation between attraction and odor when testing males' preferences for women's odors, concluding that there is no correlation in attraction between men and women of dissimilar HLA proteins. Research completed on a southern Brazilian student population produced similar findings regarding the attraction ratings given to male sweat and MHC-dissimilarity.

=== Facial preferences === Human facial preferences have been shown to correlate with both MHC-similarity and MHC-heterozygosity. Research into MHC-similarity with regard to facial attractiveness is limited. One study found that women may prefer mates with MHC-similar faces, despite evidence that they prefer men with dissimilar body odors. While facial asymmetry has not been correlated with MHC-heterozygosity, the perceived healthiness of skin appears to be. Only MHC-heterozygosity, and no other genetic marker, appears to be correlated with facial attractiveness in males; so far, no such correlation has been found in females. Slightly different from facial attractiveness, facial masculinity has not been shown to correlate with MHC-heterozygosity (a common measure of immunocompetence).

=== Criticisms === A review article published in June 2018 concluded that there is no correlation between HLA and mate choice. In addition to assessing previous studies of HLA-mate-choice analysis to identify errors in their research methods (such as small population sizes), the study collected a larger set of data and re-ran the analysis of the previous studies. By using the larger data set to conduct analysis on 30 couples of European descent, the authors generated findings contrary to previous studies that had identified significant
divergence in mate choice according to HLA genotype. Additional studies conducted on both African and European populations have shown a correlation of MHC divergence with mate choice in European but not African populations.

== See also == Human mating strategies Mating preferences

== References ==
{
"page_id": 70324833,
"source": null,
"title": "Mate choice in humans"
}
The Canberra Ornithologists Group (COG) was founded on 15 April 1970 when the ACT branch of the Royal Australasian Ornithologists Union (RAOU) became defunct following drastic reform within the RAOU in the late 1960s which abolished all its branches. It publishes a quarterly journal, Canberra Bird Notes, as well as a monthly newsletter, Gang-gang. Its aims are to: encourage interest in, and develop knowledge of, the birds of the Canberra region; promote and co-ordinate the study of birds; and promote the conservation of native birds and their habitat. COG holds monthly meetings in Canberra as well as regular field excursions. The logo of COG is the gang-gang cockatoo.

== References == Robin, Libby (2001). The Flight of the Emu: A Hundred Years of Australian Ornithology 1901–2001. Melbourne University Press: Carlton. ISBN 0-522-84987-3.

== External links == Canberra Ornithologists Group canberrabirds mailing list
{
"page_id": 7803491,
"source": null,
"title": "Canberra Ornithologists Group"
}
In mathematics, the Gibbons–Hawking ansatz is a method of constructing gravitational instantons introduced by Gary Gibbons and Stephen Hawking (1978, 1979). It gives examples of hyperkähler manifolds in dimension 4 that are invariant under a circle action.

== Description == Suppose that $U$ is an open subset of $\mathbb{R}^3$, and let $*$ denote the Hodge star operator on $\mathbb{R}^3$ with respect to the usual (flat) Euclidean metric. Let $V$ be a harmonic function defined on $U$ such that the cohomology class $\left[\tfrac{1}{2\pi}*dV\right]$ is integral, i.e. lies in the image of $H^2(U;\mathbb{Z})\hookrightarrow H^2(U;\mathbb{R})$. Then there is a $U(1)$-principal bundle $\pi\colon P\to U$ equipped with a connection 1-form $\eta\in\Omega^1(P;\mathfrak{u}(1))$ whose curvature form is $d\eta=\pi^*(*dV)$. The Riemannian metric

$$g=V\sum_{j=1}^{3}dx_j\otimes dx_j+\frac{1}{V}\,\eta\otimes\eta$$

is then hyperkähler, and typically extends to the boundary of $U$.

== Examples ==

=== Quaternions === The usual (flat) metric on the quaternions $\mathbb{H}\cong\mathbb{C}^2$ is hyperkähler. It can be obtained as a result of the Gibbons–Hawking ansatz applied to the open subset $U=\mathbb{R}^3\setminus\{0\}$ and the harmonic function $V(x)=\frac{1}{2|x|}$.

=== ALE gravitational instantons === The ALE gravitational instanton of type $A_{k-1}$ can be obtained by applying the Gibbons–Hawking ansatz to the open subset $U=\mathbb{R}^3\setminus\{p_1,\ldots,p_k\}$ for $k$ distinct collinear points $p_1,\ldots,p_k$ and the harmonic function $V(x)=\sum_{j=1}^{k}\frac{1}{2|x-p_j|}$. In the case $k=2$, we recover the Eguchi–Hanson metric on $T^*\mathbb{P}^1$.
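Both potentials used in the examples above are superpositions of terms of the form $\frac{1}{2|x-p_j|}$, so the harmonicity the ansatz requires reduces, by linearity, to a single radial computation. The following short verification is ours, using the radial form of the Laplacian on $\mathbb{R}^3$:

```latex
% Sketch (ours): the potential V(r) = 1/(2r) is harmonic away from r = 0.
% For a radial function f on R^3 the Laplacian is \Delta f = r^{-2} (r^2 f')'.
\Delta\left(\frac{1}{2r}\right)
  = \frac{1}{r^{2}}\frac{d}{dr}\left(r^{2}\,\frac{d}{dr}\,\frac{1}{2r}\right)
  = \frac{1}{r^{2}}\frac{d}{dr}\left(-\frac{1}{2}\right)
  = 0, \qquad r \neq 0.
```

By linearity, the ALE potential $V(x)=\sum_j\frac{1}{2|x-p_j|}$ is then also harmonic on $U$. Moreover, the flux of $*dV$ through a small sphere around each center is $\pm 2\pi$ (the sign depending on orientation), so the class $\left[\tfrac{1}{2\pi}*dV\right]$ is integral, as the construction of the circle bundle requires.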
== See also == Gibbons–Hawking space Ooguri–Vafa metric

== References == Gibbons, G. W.; Hawking, S. W. (1978), "Gravitational multi-instantons", Physics Letters B, 78 (4): 430–432, Bibcode:1978PhLB...78..430G, doi:10.1016/0370-2693(78)90478-1, ISSN 0370-2693. Gibbons, G. W.; Hawking, S. W. (1979), "Classification of gravitational instanton symmetries", Communications in Mathematical Physics, 66 (3): 291–310, Bibcode:1979CMaPh..66..291G, doi:10.1007/bf01197189, ISSN 0010-3616, MR 0535152, S2CID 123183399. Gonzalo Pérez, Jesús; Geiges, Hansjörg (2010), "A homogeneous Gibbons–Hawking ansatz and Blaschke products", Advances in Mathematics, 225 (5): 2598–2615, arXiv:0807.0086, doi:10.1016/j.aim.2010.05.006, ISSN 0001-8708, MR 2680177.
{
"page_id": 35132005,
"source": null,
"title": "Gibbons–Hawking ansatz"
}
The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata: deposits laid down by volcanism or by the deposition of sediment derived from weathering detritus (clays, sands, etc.). This includes all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column that may be intruded by igneous rocks and disrupted by tectonic events.

== Correlating the rock record == At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods, for the particular geographic region or regions. The geologic record is nowhere entirely complete, for where geologic forces in one age provide a low-lying region that accumulates deposits much like a layer cake, in the next age they may have uplifted the region, so that the same area is instead one that is weathering and being torn down by chemistry, wind, temperature, and water. This is to say that in a given location the geologic record can be, and quite often is, interrupted as the ancient local environment is converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which go 7 miles (11 km) deep,
thoroughly support the law of superposition. Moreover, using broadly occurring deposited layers trapped within rock columns in different locations, geologists have pieced together a system of units covering most of the geologic time scale using the law of superposition: where tectonic forces have uplifted one ridge, newly subject to erosion and weathering as the strata are folded and faulted, they have also created a nearby trough or structural basin that lies at a relatively lower elevation and can accumulate additional deposits. By comparing overall formations, geologic structures and local strata, calibrated against those layers which are widespread, a nearly complete geologic record has been constructed since the 17th century.

== Discordant strata example == Correcting for discordancies can be done in a number of ways, utilizing a number of technologies and field research results from studies in other disciplines. In this example, the study of layered rocks and the fossils they contain is called biostratigraphy and utilizes amassed geobiological and paleobiological knowledge. Fossils can be used to recognize rock layers of the same or different geologic ages, thereby coordinating locally occurring geologic stages with the overall geologic timeline. The pictures of the fossils of monocellular algae in this USGS figure were taken with a scanning electron microscope and have been magnified 250 times. In the U.S. state of South Carolina, three marker species of fossil algae are found in a rock core, whereas in Virginia only two of the three species are found in the Eocene Series of rock layers spanning three stages and the geologic ages from 37.2 to 55.8 Ma. Comparing the discordance in the record to the full rock column shows the non-occurrence of the missing species: that portion of the local rock record, from the early part of the middle Eocene, is
missing there. This is one form of discordancy, and illustrates the means geologists use to compensate for local variations in the rock record. With the two remaining marker species it is possible to correlate rock layers of the same age (early Eocene and the latter part of the middle Eocene) in both South Carolina and Virginia, and thereby "calibrate" the local rock column into its proper place in the overall geologic record.

== Lithology vs paleontology == Consequently, as the picture of the overall rock record emerged, and discontinuities and similarities in one place were cross-correlated to those in others, it became useful to subdivide the overall geologic record into a series of component sub-sections representing different-sized groups of layers within known geologic time, from the stage (the shortest time span) to the eonothem (the largest and thickest strata) and the eon (the longest time span). Concurrent work in other natural science fields required that a time continuum be defined, and earth scientists decided to coordinate the system of rock layers and their identification criteria with that of the geologic time scale. This gives a pairing between the physical rock-layer units and the corresponding units of geologic time.

== References ==
{
"page_id": 18092646,
"source": null,
"title": "Geologic record"
}
The history of loop quantum gravity spans more than three decades of intense research.

== History ==

=== Classical theories of gravitation === General relativity is the theory of gravitation published by Albert Einstein in 1915. According to it, the force of gravity is a manifestation of the local geometry of spacetime. Mathematically, the theory is modelled after Bernhard Riemann's metric geometry, but the Lorentz group of spacetime symmetries (an essential ingredient of Einstein's own theory of special relativity) replaces the group of rotational symmetries of space. (Later, loop quantum gravity inherited this geometric interpretation of gravity, and posits that a quantum theory of gravity is fundamentally a quantum theory of spacetime.) In the 1920s, the French mathematician Élie Cartan formulated Einstein's theory in the language of bundles and connections, a generalization of Riemannian geometry to which Cartan made important contributions. The so-called Einstein–Cartan theory of gravity not only reformulated but also generalized general relativity, allowing spacetimes with torsion as well as curvature. In Cartan's geometry of bundles, the concept of parallel transport is more fundamental than that of distance, the centerpiece of Riemannian geometry. A similar conceptual shift occurs between the invariant interval of Einstein's general relativity and the parallel transport of Einstein–Cartan theory.

=== Spin networks === In 1971, physicist Roger Penrose explored the idea of space arising from a quantum combinatorial structure. His investigations resulted in the development of spin networks. Because this was a quantum theory of the rotational group and not the Lorentz group, Penrose went on to develop twistors.

=== Loop quantum gravity === In 1982, Amitabha Sen sought a Hamiltonian formulation of general relativity based on spinorial variables, where these variables are the left and right spinorial component equivalents of the Einstein–Cartan connection of general relativity. Particularly, Sen discovered a new
way to write down the two constraints of the ADM Hamiltonian formulation of general relativity in terms of these spinorial connections. In this form, the constraints are simply the conditions that the spinorial Weyl curvature be trace-free and symmetric. He also discovered the presence of new constraints, which he suggested should be interpreted as the equivalent of the Gauss constraint of Yang–Mills field theories. But Sen's work fell short of giving a full, clear, systematic theory; in particular, it failed to clearly discuss the conjugate momenta to the spinorial variables, their physical interpretation, and their relation to the metric (in his work he indicated this as some lambda variable). In 1986–87, physicist Abhay Ashtekar completed the project which Amitabha Sen began. He clearly identified the fundamental conjugate variables of spinorial gravity: the configuration variable is a spinorial connection (a rule for parallel transport; technically, a connection), and the conjugate momentum variable is a coordinate frame (called a vierbein) at each point. These variables became what we know as Ashtekar variables, a particular flavor of Einstein–Cartan theory with a complex connection. General relativity expressed in this way made it possible to pursue its quantization using well-known techniques from quantum gauge field theory. The quantization of gravity in the Ashtekar formulation was based on Wilson loops, a technique developed by Kenneth G. Wilson in 1974 to study the strong-interaction regime of quantum chromodynamics (QCD). It is interesting in this connection that Wilson loops were known to be ill-behaved in the case of standard quantum field theory on (flat) Minkowski space, and so did not provide a nonperturbative quantization of QCD. However, because the Ashtekar formulation was background-independent, it was possible to use Wilson loops as the basis for a nonperturbative quantization of gravity. Due to efforts by Sen and Ashtekar, a setting
in which the Wheeler–DeWitt equation was written in terms of a well-defined Hamiltonian operator on a well-defined Hilbert space was obtained. This led to the construction of the first known exact solution, the so-called Chern–Simons form or Kodama state. The physical interpretation of this state remains obscure. In 1988–90, Carlo Rovelli and Lee Smolin obtained an explicit basis of states of quantum geometry, which turned out to be labeled by Penrose's spin networks. In this context, spin networks arose as a generalization of Wilson loops necessary to deal with mutually intersecting loops. Mathematically, spin networks are related to group representation theory and can be used to construct knot invariants such as the Jones polynomial. Loop quantum gravity (LQG) thus became related to topological quantum field theory and group representation theory. In 1994, Rovelli and Smolin showed that the quantum operators of the theory associated to area and volume have a discrete spectrum. Work on the semi-classical limit, the continuum limit, and dynamics was intense after this, but progress was slower. On the semi-classical limit front, the goal is to obtain and study analogues of the harmonic oscillator coherent states (candidates are known as weave states). === Hamiltonian dynamics === LQG was initially formulated as a quantization of the Hamiltonian ADM formalism, according to which the Einstein equations are a collection of constraints (Gauss, Diffeomorphism and Hamiltonian). The kinematics are encoded in the Gauss and Diffeomorphism constraints, whose solution is the space spanned by the spin network basis. The problem is to define the Hamiltonian constraint as a self-adjoint operator on the kinematical state space. The most promising work in this direction is Thomas Thiemann's Phoenix Project. === Covariant dynamics === Much of the recent work in LQG has been done in the covariant formulation of the theory, called "spin foam
theory." The present version of the covariant dynamics is due to the convergent work of different groups, but it is commonly named after a paper by Jonathan Engle, Roberto Pereira and Carlo Rovelli in 2007–08. Heuristically, it would be expected that evolution between spin network states might be described by discrete combinatorial operations on the spin networks, which would then trace a two-dimensional skeleton of spacetime. This approach is related to state-sum models of statistical mechanics and topological quantum field theory, such as the Turaev–Viro model of 3D quantum gravity, and also to the Regge calculus approach to calculating the Feynman path integral of general relativity by discretizing spacetime.

== See also == History of string theory

== References ==

== Further reading == Topical reviews: Carlo Rovelli, "Loop Quantum Gravity", Living Reviews in Relativity 1 (1998), 1, online article, 2001 version. Thomas Thiemann, "Lectures on Loop Quantum Gravity", e-print available as gr-qc/0210094. Abhay Ashtekar and Jerzy Lewandowski, "Background Independent Quantum Gravity: A Status Report", e-print available as gr-qc/0404018. Carlo Rovelli and Marcus Gaul, "Loop Quantum Gravity and the Meaning of Diffeomorphism Invariance", e-print available as gr-qc/9910079. Lee Smolin, "The Case for Background Independence", e-print available as hep-th/0507235. Popular books: Julian Barbour, The End of Time: The Next Revolution in Our Understanding of the Universe (1999). Lee Smolin, Three Roads to Quantum Gravity (2001). Carlo Rovelli, Che cos'è il tempo? Che cos'è lo spazio?, Di Renzo Editore, Roma, 2004. French translation: Qu'est-ce que le temps? Qu'est-ce que l'espace?, Bernard Gilson ed, Brussels, 2006. English translation: What is Time? What is Space?, Di Renzo Editore, Roma, 2006. Magazine articles: Lee Smolin, "Atoms of Space and Time", Scientific American, January 2004. Easier introductory, expository or critical works: Abhay Ashtekar, "Gravity and the Quantum", e-print available as gr-qc/0410054. John C. Baez
and Javier P. Muniain, Gauge Fields, Knots and Quantum Gravity, World Scientific (1994). Carlo Rovelli, "A Dialog on Quantum Gravity", e-print available as hep-th/0310077. More advanced introductory/expository works: Carlo Rovelli, Quantum Gravity, Cambridge University Press (2004); draft available online. Thomas Thiemann, "Introduction to Modern Canonical Quantum General Relativity", e-print available as gr-qc/0110034. Abhay Ashtekar, New Perspectives in Canonical Gravity, Bibliopolis (1988). Abhay Ashtekar, Lectures on Non-Perturbative Canonical Gravity, World Scientific (1991). Rodolfo Gambini and Jorge Pullin, Loops, Knots, Gauge Theories and Quantum Gravity, Cambridge University Press (1996). Hermann Nicolai, Kasper Peeters and Marija Zamaklar, "Loop Quantum Gravity: An Outside View", e-print available as hep-th/0501114. H. Nicolai and K. Peeters, "Loop and Spin Foam Quantum Gravity: A Brief Guide for Beginners", e-print available as hep-th/0601129. Edward Witten, "Quantum Background Independence In String Theory", e-print available as hep-th/9306122. Conference proceedings: John C. Baez (ed.), Knots and Quantum Gravity (1993).
{
"page_id": 856680,
"source": null,
"title": "History of loop quantum gravity"
}
The British Oceanographic Data Centre (BODC) is a national facility in the United Kingdom that collects and distributes marine environmental data. It serves as the designated marine science data centre for the UK and is part of the National Oceanography Centre (NOC). Most of its operations are conducted at its Liverpool facility, with a smaller team based in Southampton. The BODC supports science, education, industry, and the general public by providing access to comprehensive marine data.

== History == In 1969, the Natural Environment Research Council (NERC) created the British Oceanographic Data Service (BODS). Located at the National Institute of Oceanography in Wormley, Surrey, its stated purposes were to act as the UK's national oceanographic data centre and to participate in the international exchange of data as part of the Intergovernmental Oceanographic Commission (IOC) network of national data centres. In 1975, BODS was transferred to Bidston Observatory on the Wirral, near Liverpool, as part of the newly formed Institute of Oceanographic Sciences. The following year, BODS became the Marine Information and Advisory Service (MIAS). Its primary activity was to manage the data collected from weather ships, oil rigs, and data buoys. The data banking component of MIAS was restructured to form BODC in April 1989. In December 2004, BODC moved to the purpose-built Joseph Proudman Building on the campus of the University of Liverpool. A small number of its staff are based in the National Oceanography Centre (NOC), Southampton.

== National Role == BODC is one of five designated data centres that make up the NERC Environmental Data Service and manage NERC's environmental data. The BODC has stated that it has a number of national roles and responsibilities: performing data management for NERC-funded marine projects; maintaining and developing its archive of marine data, the National Oceanographic Database (NODB); managing, checking
and archiving data from tide gauges around the UK coast for the National Tide Gauge Network, which aims to obtain high-quality tidal information and to provide warning of possible flooding of coastal areas around the British Isles, as part of the National Tidal & Sea Level Facility (NTSLF); hosting the Marine Environmental Data and Information Network; and working in partnership with other NERC marine research centres: the British Antarctic Survey (BAS); the National Oceanography Centre (NOC), Liverpool, formerly the Proudman Oceanographic Laboratory (POL); the National Oceanography Centre (NOC), Southampton; the Plymouth Marine Laboratory (PML); the Scottish Association for Marine Science (SAMS); and the Sea Mammal Research Unit (SMRU).

== International Role == The BODC's stated international roles and responsibilities include: contributing to International Council for the Exploration of the Sea (ICES) marine data management; and creating, maintaining and publishing the General Bathymetric Chart of the Oceans (GEBCO) Digital Atlas. BODC is one of over 60 national oceanographic data centres that form part of the IOC International Oceanographic Data and Information Exchange (IODE).

== References ==

== External links == Official website
{
"page_id": 19075690,
"source": null,
"title": "British Oceanographic Data Centre"
}
Playing God refers to assuming powers of decision, intervention, or control metaphorically reserved to God. Acts described as playing God may include, for example, deciding who should live or die in a situation where not everyone can be saved, or the use and development of biotechnologies such as synthetic biology and in vitro fertilisation. The expression is usually used pejoratively, to criticize or argue against supposedly God-like actions.

== Description == Playing God is a broad concept that spans both theological and scientific topics. The term can refer to people who try to exercise great authority and power. It is usually pejorative and suggests arrogance, misappropriation of power, or tampering with matters in which humans should not meddle.

== Etymology == Playing God generally refers to someone using their power to make decisions regarding the fate of another's life, or of many lives. Theologian Paul Ramsey is noted for saying, "Men ought not to play God before they learn to be men, and after they have learned to be men they will not play God." The religious framework of approach to this phrase holds that the religion's deity has a set plan for mankind; man's hubris may therefore lead to the misuse of technology related to sacred life or nature. Other famous literary texts that allude to a man-and-God complex include Men Like Gods by H. G. Wells and You Shall Be as Gods by Erich Fromm. The notion of god-like knowledge or power in humans goes back at least to the story of the forbidden fruit in Genesis 3:4–5, whose traditional English translation includes the words "ye shall be as gods".

== History of the accusation ==

=== In bioethics === In modern history, there have been many scientific projects
which have been considered to be attempted acts of playing God. Biomedical projects such as the attempted creation of artificial sperm, and the creation of artificial life itself, have brought the sci-fi stories of the 1900s out of fantasy and closer to reality. Other projects scientists have attempted include cloning (Dolly the sheep) and even bringing back extinct species that were previously thought to have been lost to time and could possibly be reintroduced to the wild. The fairly recent discovery of DNA has led scientists to toy with the idea that human genetics could be edited and possibly improved, despite opposition regarding unknown and possibly dire consequences. The most common form of "playing God" in the modern era is thus often attributed to bioethics. Bioethics refers to ethical issues regarding biological science, medicine, etc. IVF treatment, abortion, genetic engineering, and artificial insemination are a few of the major topics regarding synthetic reproduction. Cloning was the centre of the playing-God topic for decades and is still a taboo scientific subject because of this. In 1694, Nicolaas Hartsoeker studied sperm under a microscope and proposed a diagram of what sperm was: a homunculus in the head of the human sperm. A very little human was said to be observed, and this continued an Aristotelian thought that the sperm was, in fact, a sacred little person. Rabbis continued to use Hartsoeker's image centuries later, attempting to prove that artificial interference with an embryo or birth was murder, the destruction of life. Western nations such as the United States, the United Kingdom, and Australia have made many advances in fields such as IVF; however, places like the Far East do not show nearly as much interest in the topic. Eastern philosophy has its own outlook on issues regarding "playing
God", such as the Confucian school of thought. This provides another angle of analysis that can be brought to this complicated matter.

==== In genetic modification ==== There is a strong debate regarding morality and the consequences of science and playing God. Gene editing is a big topic that has been at the centre of the argument for decades. Many religious figures believe that life is the plan of God and is not to be taken away, or synthetically given, by man, while some scientists argue that if humans are able to do so, then God must have meant it to be. The bioethical debate regarding genetic modification in food and humans has many arguments for and against. In the UK, 4% of the half a million children born have life-affecting genetic defects, including genetic diseases that can lead to early death, long-term mental issues, or a lifetime of debilitating physical health problems. Many scientists and supporters of genetic modification argue that DNA is not sacred and is in fact just chemical sequences in an organism; under the microscope, DNA is just atoms made of elements, like any other living or non-living matter. In 2016, researchers at the University of Pennsylvania used mice with a genetic liver disease and were able to genetically edit the mice at birth so that they did not have this deadly disease. It is also argued that since humans are part of nature, all actions of humanity are technically natural: a beaver building a dam is considered natural, and a bird building a nest is also considered natural, so the activities of humans are likewise natural and a result of autonomy and free will. This argument deduces that certain animals evolved with special traits to assist with their survival, and humans developed the
special trait of technological advancement. A common argument against genetic editing, especially that of children, is the designer-baby argument. Designer babies would be children who have been created to be stronger, smarter, possibly more attractive, and with many other desirable traits. According to opponents of genetic editing, this technology would only be accessible to the rich, and it would create a big divide in society between the rich and the poor, not only in wealth but also in physical appearance and physical ability. The non-secular aspect of opposition to genetic modification is the idea that genetic modification and editing go a step further than selective breeding, into an area where humanity should not trespass. King Charles III strongly opposes genetically modified crops and states that mixing genetic material from different species is dangerous and a matter we should not delve into. It is argued that the crucial boundary between humanity's choice and chance rests on the spine of ethics and morality; a minor shift in this boundary could cause serious harm to the future of society.

=== In geo-engineering === Climate and weather are also factors that scientists have investigated bringing under human control, through terraforming and through cities around the world that are made from scratch and planned out, including their geography. Geo-engineering is an example of changing the planet that many deem to be "unnatural and against God". It involves large-scale manipulation of the Earth's natural elements, such as the seas, skies, or even the atmosphere, to counteract certain environmental issues such as climate change. The debate among scholars is an ongoing battle, in which they seek to bring awareness to critical issues and to answer questions that relate to the different moral positions on the manipulation of the Earth's elements. When
focusing on climate engineering and changing the very critical environment that God has provided, humans need to be aware of the possible negative outcomes that can arise when engineering the climate, and need to be ready for anything. One must think about who the vulnerable people are that are going to be affected by the unforeseen consequences. With climate engineering, people are left to question the religious morality of what the human role is in the grand scheme of the universe. Climate change and geo-engineering bring in the "playing God" critique when dealing with policy changes. The "playing God" critique refers to the idea that the human species should not be allowed to manipulate the planet in a way that goes beyond humanity's conventional involvement and action with the world around us. Many new technological advances, such as the more recent AI or gene modification, are examples that feed the idea of humans "playing God", presumably undertaking power that rightfully belongs to both God and the land. Climate engineering, once an invention of science fiction, is now very real and part of an international political conversation. More extreme practices of climate engineering range from stimulating phytoplankton blooms in the ocean by seeding iron, to absorb excessive carbon dioxide in the atmosphere, to spraying aerosols in the skies to give clouds maximum reflectivity and brighten them. Many secular and even non-secular individuals advocate against geo-engineering and altering the climate simply because the perceived risks are too great. Due to humans' lack of understanding regarding the consequences of putting different chemicals into the atmosphere or seeding the oceans, opponents of geo-engineering suggest it be abandoned (Hartman, 2017). However, climate scientists who support the geo-engineering idea, such as Ken Caldeira of
Stanford University, suggest that instead of abandoning the idea due to risk, there should be continued research into the consequences of geo-engineering so that the exact probabilities and effects of those consequences are understood. Scientists also argue that geo-engineering can in some instances be cheaper and quite financially feasible; the opposing view is that it is a mere quick fix that moves attention away from the development of long-term solutions.

=== In artificial intelligence === Artificial intelligence has been a frequent topic of moral questioning in the 21st century. Many deem the human creation of another being that is sentient and possibly near-identical to human intelligence to be an act of playing God. In contrast to bioethics and geo-engineering, artificial intelligence does not physically intervene in nature and its processes. Since the invention of the Internet and of complex computing systems and algorithms, artificial intelligence has improved exponentially and is now used in everyday technology. The term "artificial intelligence" contrasts with natural intelligence, displayed by biological organisms. Major organisations around the world, including the United Nations, have commented on the ways artificial intelligence may negatively impact human lives. UN Secretary-General António Guterres noted that AI drone strikes have the capability to go rogue and take lives without human involvement. Other practices of AI include many other matters, such as Deep Blue, the IBM supercomputer that is capable of beating grandmasters at chess.

== Criticism == Philip Ball has argued that "playing God" is a meaningless and dangerous cliché that has no basis in theology. He claims that it was adopted as a rhetorical weapon by bioethicist "theocons", that it owes its origin as a meme to the 1931 film version of Frankenstein, and that it has been used by
journalists to refer to things they disagree with. Alexandre Erler, in response to Ball, has argued that while the phrase is not meaningless, it is extremely vague and requires further clarification to be useful within the context of an argument.

=== The transhumanist objection ===

== See also ==

== Notes ==

== References ==

== Further reading == Basinger, D. (2023). God and Human Genetic Engineering. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781009269360 Clay, Eugene (2012). "Transhumanism and the Orthodox Christian Tradition". In H. Tirosh-Samuelson & K. Mossman (eds.), Building Better Humans?: Refocusing the Debate on Transhumanism. Peter Lang. https://doi.org/10.3726/978-3-653-01824-0 Coady, C. A. J. (2009). "The religious perspective". In Julian Savulescu & Nick Bostrom (eds.), Human Enhancement. Oxford University Press. p. 155. Grey, William (2001). "Playing God". In Ruth Chadwick (ed.), The Concise Encyclopedia of the Ethics of New Technologies. Academic Press. pp. 335–339. Savulescu, Julian (2010). "The Human Prejudice and the Moral Status of Enhanced Beings: What Do We Owe the Gods?" In Julian Savulescu & Nick Bostrom (eds.), Human Enhancement. Oxford University Press. Shabana, Ayman (2022). "Between Treatment and Enhancement: Islamic Discourses on the Boundaries of Human Genetic Modification". Journal of Religious Ethics 50 (3): 386–411. doi:10.1111/jore.12404.
{
"page_id": 4395638,
"source": null,
"title": "Playing God (ethics)"
}
Statistical learning theory is a framework for machine learning drawing from the fields of statistics and functional analysis. Statistical learning theory deals with the statistical inference problem of finding a predictive function based on data. Statistical learning theory has led to successful applications in fields such as computer vision, speech recognition, and bioinformatics.

== Introduction == The goals of learning are understanding and prediction. Learning falls into many categories, including supervised learning, unsupervised learning, online learning, and reinforcement learning. From the perspective of statistical learning theory, supervised learning is best understood. Supervised learning involves learning from a training set of data. Every point in the training set is an input–output pair, where the input maps to an output. The learning problem consists of inferring the function that maps between the input and the output, such that the learned function can be used to predict the output from future input. Depending on the type of output, supervised learning problems are either problems of regression or problems of classification. If the output takes a continuous range of values, it is a regression problem. Using Ohm's law as an example, a regression could be performed with voltage as input and current as output. The regression would find the functional relationship between voltage and current to be $R$, such that $V=IR$.
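As a concrete version of the Ohm's law example, the following minimal sketch (ours; the measurement values are invented for illustration) fits the single coefficient of the linear relationship by least squares:

```python
import numpy as np

# Hypothetical noisy measurements: voltage is the input, current the output.
voltage = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
current = np.array([0.52, 0.98, 1.55, 2.01, 2.49])  # roughly V / 2 ohms

# Fit the coefficient a in I = a * V by least squares; then R = 1 / a.
a, *_ = np.linalg.lstsq(voltage.reshape(-1, 1), current, rcond=None)
print(f"estimated resistance R = {1.0 / a[0]:.2f} ohms")  # close to 2 ohms
```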
Classification problems are those for which the output will be an element from a discrete set of labels. Classification is very common for machine learning applications. In facial recognition, for instance, a picture of a person's face would be the input, and the output label would be that person's name. The input would be represented by a large multidimensional vector whose elements represent pixels in the picture. After learning a function based on the training set data, that function is validated on a test set of data, data that did not appear in the training set.

== Formal description == Take $X$ to be the vector space of all possible inputs, and $Y$ to be the vector space of all possible outputs. Statistical learning theory takes the perspective that there is some unknown probability distribution over the product space $Z=X\times Y$, i.e. there exists some unknown $p(z)=p(\mathbf{x},y)$. The training set is made up of $n$ samples from this probability distribution, and is notated

$$S=\{(\mathbf{x}_1,y_1),\dots,(\mathbf{x}_n,y_n)\}=\{\mathbf{z}_1,\dots,\mathbf{z}_n\}$$

Every $\mathbf{x}_i$ is an input vector from the training data, and $y_i$ is the output that corresponds to it. In this formalism, the inference problem consists of finding a function $f:X\to Y$ such that $f(\mathbf{x})\sim y$. Let $\mathcal{H}$ be a space of functions $f:X\to Y$ called the hypothesis space. The hypothesis space is the space of functions the algorithm will search through. Let $V(f(\mathbf{x}),y)$ be the loss function, a metric for the difference between the predicted value $f(\mathbf{x})$ and the actual value $y$.
The expected risk is defined to be

$$I[f]=\int_{X\times Y}V(f(\mathbf{x}),y)\,p(\mathbf{x},y)\,d\mathbf{x}\,dy$$

The target function, the best possible function $f$ that can be chosen, is given by the $f$ that satisfies

$$f=\operatorname*{argmin}_{h\in\mathcal{H}}I[h]$$

Because the probability distribution $p(\mathbf{x},y)$ is unknown, a proxy measure for the expected risk must be used. This measure is based on the training set, a sample from this unknown probability distribution. It is called the empirical risk

$$I_S[f]=\frac{1}{n}\sum_{i=1}^{n}V(f(\mathbf{x}_i),y_i)$$

A learning algorithm that chooses the function $f_S$ that minimizes the empirical risk is called empirical risk minimization.
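Because the empirical risk is directly computable from the training set, empirical risk minimization can be written out concretely. Here is a minimal sketch (ours, not from the article): the hypothesis space is a finite family of one-dimensional threshold classifiers, and the learner returns the hypothesis with the smallest empirical risk under the 0-1 loss defined in the next section.

```python
import numpy as np

# Toy one-dimensional training set: inputs x_i with labels y_i in {-1, +1}.
x = np.array([0.5, 1.2, 1.9, 2.7, 3.1, 3.8])
y = np.array([-1, -1, -1, 1, 1, 1])

# Finite hypothesis space H: threshold classifiers f_t(x) = sign(x - t).
thresholds = np.linspace(0.0, 4.0, 41)

def empirical_risk(t):
    """Empirical risk I_S[f_t] under the 0-1 loss."""
    predictions = np.where(x > t, 1, -1)
    return np.mean(predictions != y)

# Empirical risk minimization: return the hypothesis minimizing I_S.
best_t = min(thresholds, key=empirical_risk)
print(f"chosen threshold: {best_t:.2f}, empirical risk: {empirical_risk(best_t):.2f}")
```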
== Loss functions == The choice of loss function is a determining factor on the function $f_S$ that will be chosen by the learning algorithm. The loss function also affects the convergence rate for an algorithm. It is important for the loss function to be convex. Different loss functions are used depending on whether the problem is one of regression or one of classification.

=== Regression === The most common loss function for regression is the square loss function (also known as the L2-norm). This familiar loss function is used in ordinary least squares regression. The form is:

$$V(f(\mathbf{x}),y)=(y-f(\mathbf{x}))^2$$

The absolute value loss (also known as the L1-norm) is also sometimes used:

$$V(f(\mathbf{x}),y)=|y-f(\mathbf{x})|$$

=== Classification === In some sense the 0-1 indicator function is the most natural loss function for classification. It takes the value 0 if the predicted output is the same as the actual output, and it takes the value 1 if the predicted output is different from the actual output. For binary classification with $Y=\{-1,1\}$, this is:

$$V(f(\mathbf{x}),y)=\theta(-yf(\mathbf{x}))$$

where $\theta$ is the Heaviside step function.
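A direct transcription of these three losses (a sketch, ours; vectorized with numpy for convenience) makes the distinction concrete:

```python
import numpy as np

def square_loss(prediction, y):
    # Square (L2) loss, as used in ordinary least squares regression.
    return (y - prediction) ** 2

def absolute_loss(prediction, y):
    # Absolute value (L1) loss.
    return np.abs(y - prediction)

def zero_one_loss(prediction, y):
    # 0-1 loss for labels in {-1, +1}: Heaviside(-y * f(x)).
    return np.where(-y * prediction > 0, 1.0, 0.0)

print(square_loss(0.5, 1.0))    # 0.25
print(absolute_loss(0.5, 1.0))  # 0.5
print(zero_one_loss(-0.5, 1))   # 1.0 (predicted sign disagrees with label)
```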
== Regularization == In machine learning problems, a major problem that arises is that of overfitting. Because learning is a prediction problem, the goal is not to find a function that most closely fits the (previously observed) data, but to find one that will most accurately predict output from future input. Empirical risk minimization runs this risk of overfitting: finding a function that matches the data exactly but does not predict future output well. Overfitting is symptomatic of unstable solutions; a small perturbation in the training set data would cause a large variation in the learned function. It can be shown that if the stability of the solution can be guaranteed, generalization and consistency are guaranteed as well. Regularization can solve the overfitting problem and give the problem stability. Regularization can be accomplished by restricting the hypothesis space $\mathcal{H}$. A common example would be restricting $\mathcal{H}$ to linear functions: this can be seen as a reduction to the standard problem of linear regression. $\mathcal{H}$ could also be restricted to polynomials of degree $p$, exponentials, or bounded functions on L1. Restriction of the hypothesis space avoids overfitting because the form of the potential functions is limited, and so does not allow for the choice of a function that gives empirical risk arbitrarily close to zero. One example of regularization is Tikhonov regularization. This consists of minimizing

$$\frac{1}{n}\sum_{i=1}^{n}V(f(\mathbf{x}_i),y_i)+\gamma\left\|f\right\|_{\mathcal{H}}^{2}$$

where $\gamma$ is a fixed and positive parameter, the regularization parameter. Tikhonov regularization ensures existence, uniqueness, and stability of the solution.
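For the special case of linear hypotheses with the square loss, the Tikhonov-regularized minimizer can be computed in closed form. The following sketch (ours; the data and the value of gamma are arbitrary illustrations) solves the normal equations obtained by setting the gradient of the regularized empirical risk to zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear data: y = X @ w_true + noise.
n, d = 50, 3
X = rng.normal(size=(n, d))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

# Tikhonov objective: (1/n) * sum_i (y_i - w.x_i)^2 + gamma * ||w||^2.
# Setting its gradient to zero gives (X^T X + n * gamma * I) w = X^T y.
gamma = 0.1
w = np.linalg.solve(X.T @ X + n * gamma * np.eye(d), X.T @ y)
print("estimated weights:", np.round(w, 2))  # shrunk toward zero vs. w_true
```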
== Bounding empirical risk == Consider a binary classifier $f:\mathcal{X}\to\{0,1\}$. We can apply Hoeffding's inequality, since the empirical risk of a bounded loss concentrates in a sub-Gaussian manner, to bound the probability that the empirical risk deviates from the true risk:

$$\mathbb{P}(|\hat{R}(f)-R(f)|\geq\epsilon)\leq 2e^{-2n\epsilon^2}$$

But generally, when we do empirical risk minimization, we are not given a classifier; we must choose it. Therefore, a more useful result is to bound the probability of the supremum of the difference over the whole class:

$$\mathbb{P}\Big(\sup_{f\in\mathcal{F}}|\hat{R}(f)-R(f)|\geq\epsilon\Big)\leq 2S(\mathcal{F},n)\,e^{-n\epsilon^2/8}\approx n^{d}e^{-n\epsilon^2/8}$$

where $S(\mathcal{F},n)$ is the shattering number and $n$ is the number of samples in the dataset. The exponential term comes from Hoeffding, but there is an extra cost of taking the supremum over the whole class, which is the shattering number.

== See also == Reproducing kernel Hilbert spaces are a useful choice for $\mathcal{H}$. Proximal gradient methods for learning; Rademacher complexity; Vapnik–Chervonenkis dimension

== References ==
{
"page_id": 1053303,
"source": null,
"title": "Statistical learning theory"
}
A chromatin bridge is a mitotic occurrence that forms when the telomeres of sister chromatids fuse together and fail to completely segregate into their respective daughter cells. Because this event is most prevalent during anaphase, the term anaphase bridge is often used as a substitute. After the formation of individual daughter cells, the DNA bridge connecting homologous chromosomes remains fixed. As the daughter cells exit mitosis and re-enter interphase, the chromatin bridge becomes known as an interphase bridge. These phenomena are usually visualized using the laboratory techniques of staining and fluorescence microscopy.

== Background == The faithful inheritance of genetic information from one cellular generation to the next relies heavily on the duplication of deoxyribonucleic acid (DNA), as well as on the formation of two identical daughter cells. This complicated cellular process, known as mitosis, depends on a multitude of cellular checkpoints, signals, interactions and signal cascades for accurate and faithful functioning. Cancer, characterized by uncontrollable cell growth mechanisms and high tendencies for proliferation and metastasis, is highly prone to mitotic mistakes. As a result, several forms of chromosomal aberrations occur, including, but not limited to, binucleated cells, multipolar spindles and micronuclei. Chromatin bridges may serve as a marker of cancer activity.

== Process of formation == Chromatin bridges may form by any number of processes wherein chromosomes remain topologically entangled during mitosis. One way in which this may occur is the failure to resolve joint molecules formed during homologous-recombination-mediated DNA repair, a process that ensures that replicated chromosomes are intact before they are segregated during cell division. In particular, genetic studies have demonstrated that loss of the enzyme BLM (Bloom's syndrome helicase) or FANCM each results in a dramatic increase in the number of chromatin bridges, because loss of these genes causes an increase in chromosome fusions. Chromosomes that become fused,
|
{
"page_id": 33231480,
"source": null,
"title": "Chromatin bridge"
}
|
either in an end-to-end manner or through topological entrapment (e.g., catenation or unresolved DNA cross-links), have also been associated with chromatin bridge formation. When viewed under a fluorescence microscope and immunostained for cytological markers, these chromatin bridges appear to emanate from either centromeres, telomeres or DNA crosslinks (as marked by FANCD2). == Fluorescence techniques == Chromatin bridges can be viewed utilizing a laboratory technique known as fluorescence microscopy. Fluorescence is the process that involves excitation of a fluorophore (a molecule with the ability to emit fluorescent light in the visible light spectrum) using ultraviolet light. After the fluorophore becomes chemically excited by the presence of UV light, it emits visible light at a specific wavelength, producing different colors. Fluorophores may be added as a molecular tag to different portions of a cell. DAPI is a fluorophore that specifically binds to DNA and fluoresces blue. In addition, immunofluorescence may be used as a laboratory technique to tag cells with specific fluorophores using antibodies, immune proteins created by B lymphocytes. Antibodies are utilized by the immune system in the identification and binding of foreign substances. Tubulin is a monomer of microtubules that compose the cellular cytoskeleton. The antibody anti-tubulin specifically binds to these tubulin monomeric subunits. A fluorophore can be chemically attached to the anti-tubulin antibody, which then fluoresces green. Numerous antibodies may bind to microtubules in order to amplify the fluorescent signal. Fluorescence microscopy allows for the observation of different components of the cell against a dark background for high intensity and specificity. == Practical applications == === Detection === Chromatin bridges are easiest and most readily visible when observing chromosomes stained with DAPI. DNA bridges appear to be a blue, "string-like" connection between two separated daughter cells. This effect is created when sticky ends of chromosomes remain connected to one
|
{
"page_id": 33231480,
"source": null,
"title": "Chromatin bridge"
}
|
another, even after mitosis. A chromatin bridge may also be observed using indirect immunofluorescence, in which anti-tubulin emits a green coloration when bound to microtubules in the presence of UV light. Because microtubules maintain the positions of the chromosomes during mitosis, they appear to be densely pinched between the two dividing daughter cells. Chromatin bridges can be difficult to locate using fluorescence microscopy, as bridges are relatively rare and tend to appear faint against the dark background. === Cancer === Recently, chromatin bridges have been implicated as a diagnostic marker for cancer and have been linked to tumorigenesis in humans. This premise is based on the fact that as the mitotic cell divides and the daughter cells move further apart, stress on the DNA bridge leads to breakages in the chromosome at random points. As previously stated, the disruptions in the chromosome may lead to single chromosome mutations, including deletion, duplication and inversion, among others. This instability, defined as frequent changes in chromosomal structure and number, may be the basis of the development of cancer. While the frequency of chromatin bridges may be greater in tumor cells relative to normal cells, it may not be practical to utilize this phenomenon as a diagnostic tool. The process of staining and mounting sample cells using indirect immunofluorescence is time-consuming. Even though DAPI staining is quick, neither laboratory technique can guarantee the presence of the bridges under the fluorescence microscope. The rarity of chromatin bridges, even in cancerous cells, makes it difficult for this phenomenon to become a widely accepted diagnostic marker for cancer. == References ==
|
{
"page_id": 33231480,
"source": null,
"title": "Chromatin bridge"
}
|
Kojic acid is an organic compound with the formula HOCH2C5H2O2OH. It is a derivative of 4-pyrone that functions in nature as a chelation agent produced by several species of fungus, especially Aspergillus oryzae, which has the Japanese common name koji. Kojic acid is a by-product in the fermentation process of malting rice, for use in the manufacturing of sake, the Japanese rice wine. It is a mild inhibitor of the formation of pigment in plant and animal tissues, and is used in food and cosmetics to preserve or change colors of substances. It forms a bright red complex with ferric ions. == Biosynthesis == 13C-Labeling studies have revealed at least two pathways to kojic acid. In the usual route, dehydratase enzymes convert glucose to kojic acid. Pentoses are also viable precursors in which case dihydroxyacetone is invoked as an intermediate. == Applications == Kojic acid may be used on cut fruits to prevent oxidative browning, in seafood to preserve pink and red colors, and in cosmetics to lighten skin. As an example of the latter, it is used to treat skin diseases like melasma. Kojic acid also has antibacterial and antifungal properties. == Chemical reactions == Deprotonation of the ring-OH group converts kojic acid to kojate. Kojate chelates to iron(III), forming a red complex Fe(HOCH2C5OH2O2)3. This kind of reaction may be the basis of the biological function of kojic acid, that is, to solubilize ferric iron. Being a multifunctional molecule, kojic acid has diverse organic chemistry. The hydroxymethyl group gives the chloromethyl derivative upon treatment with thionyl chloride. == Safety == Kojic acid may be weakly carcinogenic, according to some animal studies. It is not believed to reach carcinogenic thresholds in human skin, and is demonstrably safe at the level used in cosmetics. == References == == External links ==
|
{
"page_id": 5116536,
"source": null,
"title": "Kojic acid"
}
|
Safety MSDS data Mohajer, Fatemeh; Mohammadi Ziarani, Ghodsi (2021). "An Overview of Quantitative and Qualitative Approaches on the Synthesis of Heterocyclic Kojic Acid Scaffolds through the Multi-Component Reactions". Heterocycles. 102 (2). Japan Institute of Heterocyclic Chemistry: 211. doi:10.3987/REV-20-936.
|
{
"page_id": 5116536,
"source": null,
"title": "Kojic acid"
}
|
Peptidyl-L-lysine(-L-arginine) hydrolase may refer to: Lysine carboxypeptidase, an enzyme Carboxypeptidase E, an enzyme
|
{
"page_id": 39260790,
"source": null,
"title": "Peptidyl-L-lysine(-L-arginine) hydrolase"
}
|
Instrumental analysis is a field of analytical chemistry that investigates analytes using scientific instruments. == Spectroscopy == Spectroscopy measures the interaction of molecules with electromagnetic radiation. Spectroscopy encompasses many different techniques, such as atomic absorption spectroscopy, atomic emission spectroscopy, ultraviolet-visible spectroscopy, X-ray fluorescence spectroscopy, infrared spectroscopy, Raman spectroscopy, nuclear magnetic resonance spectroscopy, photoemission spectroscopy, Mössbauer spectroscopy, and circular dichroism spectroscopy. == Nuclear spectroscopy == Methods of nuclear spectroscopy use properties of a nucleus to probe a material's properties, especially the material's local structure. Common methods include nuclear magnetic resonance spectroscopy (NMR), Mössbauer spectroscopy (MBS), and perturbed angular correlation (PAC). == Mass spectrometry == Mass spectrometry measures the mass-to-charge ratio of molecules using electric and magnetic fields. There are several ionization methods: electron ionization, chemical ionization, electrospray, fast atom bombardment, matrix-assisted laser desorption/ionization, and others. Mass spectrometers are also categorized by the type of mass analyzer: magnetic-sector, quadrupole mass analyzer, quadrupole ion trap, time-of-flight, Fourier transform ion cyclotron resonance, and so on. == Crystallography == Crystallography is a technique that characterizes the chemical structure of materials at the atomic level by analyzing the diffraction patterns of electromagnetic radiation or particles that have been deflected by atoms in the material. X-rays are most commonly used. From the raw data, the relative placement of atoms in space may be determined. == Electrochemical analysis == Electroanalytical methods measure the electric potential in volts and/or the electric current in amps in an electrochemical cell containing the analyte. These methods can be categorized according to which aspects of the cell are controlled and which are measured. The three main categories are potentiometry (the difference in electrode potentials is measured), coulometry (the cell's current is measured over time), and voltammetry (the cell's current is measured while actively altering the cell's potential). == Thermal analysis == Calorimetry and
|
{
"page_id": 26219128,
"source": null,
"title": "Instrumental chemistry"
}
|
thermogravimetric analysis measure the interaction of a material and heat. == Separation == Separation processes are used to decrease the complexity of material mixtures. Chromatography and electrophoresis are representative of this field. == Hybrid techniques == Combinations of the above techniques produce "hybrid" or "hyphenated" techniques. Several examples are in popular use today and new hybrid techniques are under development. Hyphenated separation techniques refer to a combination of two or more techniques used to separate chemicals from solutions and detect them. Most often, the other technique is some form of chromatography. Hyphenated techniques are widely used in chemistry and biochemistry. A slash is sometimes used instead of a hyphen, especially if the name of one of the methods contains a hyphen itself. Examples of hyphenated techniques: Gas chromatography-mass spectrometry (GC-MS) Liquid chromatography–mass spectrometry (LC-MS) Liquid chromatography-infrared spectroscopy (LC-IR) High-performance liquid chromatography/electrospray ionization-mass spectrometry (HPLC/ESI-MS) Chromatography-diode-array detection (LC-DAD) Capillary electrophoresis-mass spectrometry (CE-MS) Capillary electrophoresis-ultraviolet-visible spectroscopy (CE-UV) Ion-mobility spectrometry–mass spectrometry Prolate trochoidal mass spectrometer == Microscopy == The visualization of single molecules, single biological cells, biological tissues and nanomaterials is an important and attractive approach in analytical science. Hybridization with other traditional analytical tools is also revolutionizing analytical science. Microscopy can be categorized into three different fields: optical microscopy, electron microscopy, and scanning probe microscopy. This field has progressed rapidly in recent years because of the rapid development of the computer and camera industries. == Lab-on-a-chip == Lab-on-a-chip devices integrate multiple laboratory functions on a single chip of only a few square millimeters or centimeters in size and are capable of handling extremely small fluid volumes, down to less than a picoliter. == See also == Characterization (materials science) == References ==
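As a concrete illustration of the diffraction analysis mentioned under Crystallography above, Bragg's law, nλ = 2d sin θ, relates the angle of a diffraction peak to an interplanar spacing in the crystal. A small sketch follows; the wavelength is the common Cu Kα laboratory source, but the measured peak angle is a hypothetical value chosen for illustration:

```python
import math

# Bragg's law: n * lam = 2 * d * sin(theta). Given a first-order (n = 1)
# diffraction peak at scattering angle 2-theta, recover the spacing d.

lam = 1.5406          # angstroms, Cu K-alpha X-ray wavelength
two_theta = 38.4      # degrees, hypothetical measured peak position
theta = math.radians(two_theta / 2)
d = lam / (2 * math.sin(theta))
print(f"d = {d:.3f} angstroms")   # ~2.34 A, close to the Al (111) spacing
```

Repeating this for every peak in a pattern, and indexing the resulting d-spacings, is how the relative placement of atoms mentioned above is ultimately recovered.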
|
{
"page_id": 26219128,
"source": null,
"title": "Instrumental chemistry"
}
|
Incipient wetness impregnation (IW or IWI), also called capillary impregnation or dry impregnation, is a commonly used technique for the synthesis of heterogeneous catalysts. Typically, the active metal precursor is dissolved in an aqueous or organic solution. The metal-containing solution is then added to a catalyst support whose total pore volume equals the volume of solution being added. Capillary action draws the solution into the pores. Solution added in excess of the support pore volume causes the solution transport to change from a capillary action process to a diffusion process, which is much slower. The catalyst can then be dried and calcined to drive off the volatile components within the solution, depositing the metal on the catalyst surface. The maximum loading is limited by the solubility of the precursor in the solution. The concentration profile of the impregnated compound depends on the mass transfer conditions within the pores during impregnation and drying. == References ==
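Because the solution volume is deliberately matched to the support pore volume, the required quantities follow from simple bookkeeping. A minimal sketch, in which every number (support mass, pore volume, target loading) is an illustrative assumption rather than a recommended recipe:

```python
# Incipient wetness bookkeeping: the impregnating solution volume is set
# equal to the support's total pore volume, so the precursor metal must
# dissolve in exactly that volume. All numbers below are assumptions.

support_mass = 10.0      # g of support
pore_volume = 0.6        # mL of pore volume per g of support (assumed)
target_loading = 0.05    # 5 wt% metal on the finished catalyst (assumed)

solution_volume = support_mass * pore_volume   # mL of solution to add
# Finished catalyst mass = support + metal, so metal = L*support/(1 - L):
metal_mass = target_loading * support_mass / (1 - target_loading)

print(f"solution volume: {solution_volume:.1f} mL")
print(f"metal required : {metal_mass:.3f} g "
      f"({metal_mass / solution_volume:.3f} g metal per mL of solution)")
# Adding more than this volume would overfill the pores and shift uptake
# from capillary action to slow diffusion, as described above; whether the
# computed concentration is achievable is capped by precursor solubility.
```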
|
{
"page_id": 7737980,
"source": null,
"title": "Incipient wetness impregnation"
}
|
This article includes a list of the most massive known objects of the Solar System and partial lists of smaller objects by observed mean radius. These lists can be sorted according to an object's radius and mass and, for the most massive objects, volume, density, and surface gravity, if these values are available. These lists contain the Sun, the planets, dwarf planets, many of the larger small Solar System bodies (which includes the asteroids), all named natural satellites, and a number of smaller objects of historical or scientific interest, such as comets and near-Earth objects. Many trans-Neptunian objects (TNOs) have been discovered; in many cases their positions in this list are approximate, as there is frequently a large uncertainty in their estimated diameters due to their distance from Earth. Solar System objects more massive than 10^21 kilograms are known or expected to be approximately spherical. Astronomical bodies relax into rounded shapes (spheroids), achieving hydrostatic equilibrium, when their own gravity is sufficient to overcome the structural strength of their material. It was believed that the cutoff for round objects was somewhere between 100 km and 200 km in radius if they have a large amount of ice in their makeup; however, later studies revealed that icy satellites as large as Iapetus (1,470 kilometers in diameter) are not in hydrostatic equilibrium at this time, and a 2019 assessment suggests that many TNOs in the size range of 400–1,000 kilometers may not even be fully solid bodies, much less gravitationally rounded. Objects that are ellipsoids due to their own gravity are here generally referred to as being "round", whether or not they are actually in equilibrium today, while objects that are clearly not ellipsoidal are referred to as being "irregular." Spheroidal bodies typically have some polar flattening due to the centrifugal force from
|
{
"page_id": 594550,
"source": null,
"title": "List of Solar System objects by size"
}
|
their rotation, and can sometimes even have quite different equatorial diameters (scalene ellipsoids such as Haumea). Unlike bodies such as Haumea, the irregular bodies have a significantly non-ellipsoidal profile, often with sharp edges. There can be difficulty in determining the diameter (within a factor of about 2) for typical objects beyond Saturn (see: 2060 Chiron § Physical characteristics, for an example). For TNOs there is some confidence in the diameters, but for non-binary TNOs there is no real confidence in the masses/densities. Many TNOs are often just assumed to have Pluto's density of 2.0 g/cm3, but it is just as likely that they have a comet-like density of only 0.5 g/cm3. For example, if a TNO is incorrectly assumed to have a mass of 3.59×10^20 kg based on a radius of 350 km with a density of 2 g/cm3 but is later discovered to have a radius of only 175 km with a density of 0.5 g/cm3, its true mass would be only 1.12×10^19 kg. The sizes and masses of many of the moons of Jupiter and Saturn are fairly well known due to numerous observations and interactions of the Galileo and Cassini orbiters; however, many of the moons with a radius less than ≈100 km, such as Jupiter's Himalia, have far more uncertain masses. Further out from Saturn, the sizes and masses of objects are less clear. There has not yet been an orbiter around Uranus or Neptune for long-term study of their moons. For the small outer irregular moons of Uranus, such as Sycorax, which were not discovered by the Voyager 2 flyby, even different NASA web pages, such as the National Space Science Data Center and JPL Solar System Dynamics, give somewhat contradictory size and albedo estimates depending on which research paper is being cited. There are
|
{
"page_id": 594550,
"source": null,
"title": "List of Solar System objects by size"
}
|
uncertainties in the figures for mass and radius, and irregularities in the shape and density, with accuracy often depending on how close the object is to Earth or whether it has been visited by a probe. == Graphical overview == == Objects with radii over 400 km == The following objects have a nominal mean radius of 400 km or greater. It was once expected that any icy body larger than approximately 200 km in radius was likely to be in hydrostatic equilibrium (HE). However, Ceres (r = 470 km) is the smallest body for which detailed measurements are consistent with hydrostatic equilibrium, whereas Iapetus (r = 735 km) is the largest icy body that has been found to not be in hydrostatic equilibrium. The known icy moons in this range are all ellipsoidal (except Proteus), but trans-Neptunian objects up to 450–500 km radius may be quite porous. For simplicity and comparative purposes, the values are manually calculated assuming that the bodies are all spheres. The size of solid bodies does not include an object's atmosphere. For example, Titan looks bigger than Ganymede, but its solid body is smaller. For the giant planets, the "radius" is defined as the distance from the center at which the atmosphere reaches 1 bar of atmospheric pressure. Because Sedna and 2002 MS4 have no known moons, directly determining their mass is impossible without sending a probe (estimated to be from 1.7×10^21 to 6.1×10^21 kg for Sedna). == Smaller objects by mean radius == === From 200 to 399 km === All imaged icy moons with radii greater than 200 km except Proteus are clearly round, although those under 400 km that have had their shapes carefully measured are not in hydrostatic equilibrium. The known densities of TNOs in this size range are remarkably low
|
{
"page_id": 594550,
"source": null,
"title": "List of Solar System objects by size"
}
|
(1–1.2 g/cm3), implying that the objects retain significant internal porosity from their formation and were never gravitationally compressed into fully solid bodies. === From 100 to 199 km === This list contains a selection of objects estimated to be between 100 and 199 km in radius (200 and 399 km in diameter). The largest of these may have a hydrostatic-equilibrium shape, but most are irregular. Most of the trans-Neptunian objects (TNOs) listed with a radius smaller than 200 km have "assumed sizes based on a generic albedo of 0.09" since they are too far away to directly measure their sizes with existing instruments. Mass switches from 10^21 kg to 10^18 kg (Zg). Main-belt asteroids have orbital elements constrained by (2.0 AU < a < 3.2 AU; q > 1.666 AU) according to JPL Solar System Dynamics (JPLSSD). Many TNOs are omitted from this list as their sizes are poorly known. === From 50 to 99 km === This list contains a selection of objects 50 to 99 km in radius (100 km to 199 km in average diameter). The listed objects currently include most objects in the asteroid belt and moons of the giant planets in this size range, but many newly discovered objects in the outer Solar System are missing, such as those included in the following reference. Asteroid spectral types are mostly Tholen, but some might be SMASS. === From 20 to 49 km === This list includes only a few examples, since there are about 589 asteroids in the asteroid belt with a measured radius between 20 and 49 km. Many thousands of objects of this size range have yet to be discovered in the trans-Neptunian region. The number of digits is not an endorsement of significant figures. The table switches from ×10^18 kg to ×10^15 kg (Eg). Most
|
{
"page_id": 594550,
"source": null,
"title": "List of Solar System objects by size"
}
|
mass values of asteroids are assumed. === From 1 to 19 km === This list contains some examples of Solar System objects between 1 and 19 km in radius. This is a common size for asteroids, comets and irregular moons. === Below 1 km === This list contains examples of objects below 1 km in radius. Radius here is the mean geometric radius, which means that irregular bodies can have a longer chord in some directions; the mean radius averages this out. In the asteroid belt alone there are estimated to be between 1.1 and 1.9 million objects with a radius above 0.5 km, many of which are in the range 0.5–1.0 km. Countless more have a radius below 0.5 km. Very few objects in this size range have been explored or even imaged. The exceptions are objects that have been visited by a probe, or have passed close enough to Earth to be imaged. The number of digits is not an endorsement of significant figures. The mass scale shifts from ×10^15 to 10^9 kg, which is equivalent to one billion kg or 10^12 grams (teragram, Tg). Currently most of the objects listed here with a mass between 10^9 kg and 10^12 kg (less than 1000 teragrams) are near-Earth asteroids (NEAs). The Aten asteroid 1994 WR12 has less mass than the Great Pyramid of Giza, 5.9 × 10^9 kg. For more about very small objects in the Solar System, see meteoroid, micrometeoroid, cosmic dust, and interplanetary dust cloud. (See also Visited/imaged bodies.) == Gallery == == See also == List of gravitationally rounded objects of the Solar System List of dwarf planets List of minor planets List of natural satellites List of Solar System objects most distant from the Sun List of space telescopes Lists of astronomical objects == Notes == ==
|
{
"page_id": 594550,
"source": null,
"title": "List of Solar System objects by size"
}
|
References == == Further reading == NASA Planetary Data System (PDS) Asteroids with Satellites Minor Planet discovery circumstances Supplemental IRAS Minor Planet Survey (SIMPS) and IRAS Minor Planet Survey (IMPS) SIMPS & IMPS (V6, additional, from here) Asteroid Data Archive Archive Planetary Science Institute == External links == Planetary fact sheets Asteroid fact sheet
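The density caveat for TNOs quoted earlier (a 350 km body at 2 g/cm3 versus a 175 km body at 0.5 g/cm3) is easy to reproduce, since the mass of an assumed uniform sphere is just (4/3)πr³ρ. A quick check in Python:

```python
import math

# Reproduce the TNO mass comparison given in the text above:
# mass of a uniform sphere is (4/3) * pi * r^3 * rho.

def sphere_mass(radius_km, density_g_cm3):
    r_m = radius_km * 1e3                 # km -> m
    rho = density_g_cm3 * 1e3             # g/cm^3 -> kg/m^3
    return 4.0 / 3.0 * math.pi * r_m**3 * rho

print(f"{sphere_mass(350, 2.0):.3g} kg")   # ~3.59e20 kg, as stated
print(f"{sphere_mass(175, 0.5):.3g} kg")   # ~1.12e19 kg, as stated
```

Halving the radius alone cuts the mass by a factor of 8, and the factor-of-4 density change compounds it, which is why poorly constrained TNO diameters translate into order-of-magnitude mass uncertainty.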
|
{
"page_id": 594550,
"source": null,
"title": "List of Solar System objects by size"
}
|
In quantum mechanics, Bargmann's limit, named for Valentine Bargmann, provides an upper bound on the number N ℓ {\displaystyle N_{\ell }} of bound states with azimuthal quantum number ℓ {\displaystyle \ell } in a system with central potential V {\displaystyle V} . It takes the form N ℓ < 1 2 ℓ + 1 2 m ℏ 2 ∫ 0 ∞ r | V ( r ) | d r {\displaystyle N_{\ell }<{\frac {1}{2\ell +1}}{\frac {2m}{\hbar ^{2}}}\int _{0}^{\infty }r|V(r)|\,dr} This limit is the best possible upper bound in such a way that for a given ℓ {\displaystyle \ell } , one can always construct a potential V ℓ {\displaystyle V_{\ell }} for which N ℓ {\displaystyle N_{\ell }} is arbitrarily close to this upper bound. Note that the Dirac delta function potential attains this limit. After the first proof of this inequality by Valentine Bargmann in 1953, Julian Schwinger presented an alternative way of deriving it in 1961. == Rigorous formulation and proof == Stated in a formal mathematical way, Bargmann's limit goes as follows. Let V : R 3 → R : r ↦ V ( r ) {\displaystyle V:\mathbb {R} ^{3}\to \mathbb {R} :\mathbf {r} \mapsto V(r)} be a spherically symmetric potential, such that it is piecewise continuous in r {\displaystyle r} , V ( r ) = O ( 1 / r a ) {\displaystyle V(r)=O(1/r^{a})} for r → 0 {\displaystyle r\to 0} and V ( r ) = O ( 1 / r b ) {\displaystyle V(r)=O(1/r^{b})} for r → + ∞ {\displaystyle r\to +\infty } , where a ∈ ( 2 , + ∞ ) {\displaystyle a\in (2,+\infty )} and b ∈ ( − ∞ , 2 ) {\displaystyle b\in (-\infty ,2)} . If ∫ 0 + ∞ r | V ( r ) |
|
{
"page_id": 1184376,
"source": null,
"title": "Bargmann's limit"
}
|
d r < + ∞ , {\displaystyle \int _{0}^{+\infty }r|V(r)|dr<+\infty ,} then the number of bound states N ℓ {\displaystyle N_{\ell }} with azimuthal quantum number ℓ {\displaystyle \ell } for a particle of mass m {\displaystyle m} obeying the corresponding Schrödinger equation, is bounded from above by N ℓ < 1 2 ℓ + 1 2 m ℏ 2 ∫ 0 + ∞ r | V ( r ) | d r . {\displaystyle N_{\ell }<{\frac {1}{2\ell +1}}{\frac {2m}{\hbar ^{2}}}\int _{0}^{+\infty }r|V(r)|dr.} Although the original proof by Valentine Bargmann is quite technical, the main idea follows from two general theorems on ordinary differential equations, the Sturm Oscillation Theorem and the Sturm-Picone Comparison Theorem. If we denote by u 0 ℓ {\displaystyle u_{0\ell }} the wave function subject to the given potential with total energy E = 0 {\displaystyle E=0} and azimuthal quantum number ℓ {\displaystyle \ell } , the Sturm Oscillation Theorem implies that N ℓ {\displaystyle N_{\ell }} equals the number of nodes of u 0 ℓ {\displaystyle u_{0\ell }} . From the Sturm-Picone Comparison Theorem, it follows that when subject to a stronger potential W {\displaystyle W} (i.e. W ( r ) ≤ V ( r ) {\displaystyle W(r)\leq V(r)} for all r ∈ R 0 + {\displaystyle r\in \mathbb {R} _{0}^{+}} ), the number of nodes either grows or remains the same. Thus, more specifically, we can replace the potential V {\displaystyle V} by − | V | {\displaystyle -|V|} . For the corresponding wave function with total energy E = 0 {\displaystyle E=0} and azimuthal quantum number ℓ {\displaystyle \ell } , denoted by ϕ 0 ℓ {\displaystyle \phi _{0\ell }} , the radial Schrödinger equation becomes d 2 d r 2 ϕ 0 ℓ ( r ) − ℓ ( ℓ + 1 )
|
{
"page_id": 1184376,
"source": null,
"title": "Bargmann's limit"
}
|
r 2 ϕ 0 ℓ ( r ) = − W ( r ) ϕ 0 ℓ ( r ) , {\displaystyle {\frac {d^{2}}{dr^{2}}}\phi _{0\ell }(r)-{\frac {\ell (\ell +1)}{r^{2}}}\phi _{0\ell }(r)=-W(r)\phi _{0\ell }(r),} with W = 2 m | V | / ℏ 2 {\displaystyle W=2m|V|/\hbar ^{2}} . By applying variation of parameters, one can obtain the following implicit solution ϕ 0 ℓ ( r ) = r ℓ + 1 − ∫ 0 p G ( r , ρ ) ϕ 0 ℓ ( ρ ) W ( ρ ) d ρ , {\displaystyle \phi _{0\ell }(r)=r^{\ell +1}-\int _{0}^{p}G(r,\rho )\phi _{0\ell }(\rho )W(\rho )d\rho ,} where G ( r , ρ ) {\displaystyle G(r,\rho )} is given by G ( r , ρ ) = 1 2 ℓ + 1 [ r ( r ρ ) ℓ − ρ ( ρ r ) ℓ ] . {\displaystyle G(r,\rho )={\frac {1}{2\ell +1}}\left[r{\bigg (}{\frac {r}{\rho }}{\bigg )}^{\ell }-\rho {\bigg (}{\frac {\rho }{r}}{\bigg )}^{\ell }\right].} If we now denote all successive nodes of ϕ 0 ℓ {\displaystyle \phi _{0\ell }} by 0 = ν 1 < ν 2 < ⋯ < ν N {\displaystyle 0=\nu _{1}<\nu _{2}<\dots <\nu _{N}} , one can show from the implicit solution above that for consecutive nodes ν i {\displaystyle \nu _{i}} and ν i + 1 {\displaystyle \nu _{i+1}} 2 m ℏ 2 ∫ ν i ν i + 1 r | V ( r ) | d r > 2 ℓ + 1. {\displaystyle {\frac {2m}{\hbar ^{2}}}\int _{\nu _{i}}^{\nu _{i+1}}r|V(r)|dr>2\ell +1.} From this, we can conclude that 2 m ℏ 2 ∫ 0 + ∞ r | V ( r ) | d r ≥ 2 m ℏ 2 ∫ 0 ν N r | V ( r ) | d r > N (
|
{
"page_id": 1184376,
"source": null,
"title": "Bargmann's limit"
}
|
2 ℓ + 1 ) ≥ N ℓ ( 2 ℓ + 1 ) , {\displaystyle {\frac {2m}{\hbar ^{2}}}\int _{0}^{+\infty }r|V(r)|dr\geq {\frac {2m}{\hbar ^{2}}}\int _{0}^{\nu _{N}}r|V(r)|dr>N(2\ell +1)\geq N_{\ell }(2\ell +1),} proving Bargmann's limit. Note that as the integral on the right is assumed to be finite, so must be N {\displaystyle N} and N ℓ {\displaystyle N_{\ell }} . Furthermore, for a given value of ℓ {\displaystyle \ell } , one can always construct a potential V ℓ {\displaystyle V_{\ell }} for which N ℓ {\displaystyle N_{\ell }} is arbitrarily close to Bargmann's limit. The idea to obtain such a potential, is to approximate Dirac delta function potentials, as these attain the limit exactly. An example of such a construction can be found in Bargmann's original paper. == References ==
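The proof sketch above leans on the Sturm oscillation theorem: N_ℓ equals the number of nodes of the zero-energy radial solution. That makes the bound easy to test numerically. The following rough sketch works in units ħ = m = 1; the square-well depth, radius, and integration grid are arbitrary choices for illustration:

```python
# Numerical test of Bargmann's limit (units hbar = m = 1).
# By the Sturm oscillation theorem used above, N_l equals the number of
# nodes of the zero-energy solution of u'' = [l(l+1)/r^2 + 2 V(r)] u.

def count_nodes(V, l, r_max=10.0, steps=200_000):
    dr = r_max / steps
    r = dr
    u, du = r ** (l + 1), (l + 1) * r ** l      # u ~ r^(l+1) as r -> 0
    nodes = 0
    for _ in range(steps):                      # simple Euler integration
        ddu = (l * (l + 1) / r**2 + 2.0 * V(r)) * u
        u_next = u + du * dr
        du += ddu * dr
        if u_next * u < 0:                      # sign change = one node
            nodes += 1
        u, r = u_next, r + dr
    return nodes

V0, a = 40.0, 1.0                               # assumed depth and radius
V = lambda r: -V0 if r < a else 0.0
bargmann = 2.0 * (V0 * a**2 / 2.0)              # (2m/hbar^2) Int r|V| dr, l = 0
print("bound states N_0 :", count_nodes(V, l=0))  # 3 for this well
print("Bargmann bound   :", bargmann)             # 40: satisfied, loosely
```

For a square well the bound is far from tight, consistent with the remark above that the limit is only approached by potentials that mimic Dirac delta functions.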
|
{
"page_id": 1184376,
"source": null,
"title": "Bargmann's limit"
}
|
"The Platonic Permutation" is the ninth episode of the ninth season of The Big Bang Theory. The 192nd episode overall, it first aired on CBS on November 19, 2015. The story follows the characters throughout Thanksgiving. The first storyline explores Sheldon and Amy's relationship as they meet-up, after being broken up. The next subplot follows Bernadette, Raj, Emily and Howard where they volunteer at a soup kitchen and the last follows Leonard and Penny after they have a minor conflict as Penny forgets Leonard's birthday. "The Platonic Permutation" features a guest appearance of South-African American entrepreneur and business magnate, Elon Musk as himself. Critics had mixed reviews of the episode. Critics praised Sheldon and Amy's storyline, but were critical of the other two subplots and Musk's appearance. == Plot == With Sheldon Cooper and Amy Farrah Fowler still broken up and all of his friends busy for Thanksgiving, Sheldon tries to give Amy tickets he bought them to Thanksgiving dinner at the aquarium, but Amy suggests they can still go as friends. Along the way, Sheldon asks Amy questions about her current dating life and plays a game about fish. Despite the initial awkwardness, Amy answers his questions, and the two reconnect as friends. Bernadette Rostenkowski, Raj Koothrappali and Emily Sweeney drag Howard Wolowitz to a soup kitchen to volunteer for the day, after Howard lies about going there to avoid Sheldon. At the soup kitchen, Howard encounters Elon Musk, the founder of SpaceX, and they bond over their interest in space travel. Meanwhile, Leonard Hofstadter and Penny prepare Thanksgiving dinner at home for the gang. When he realizes she does not know his birthday, he proceeds to list personal things he knows about her but accidentally reveals his knowing that she hates the orange lingerie he bought her, which
|
{
"page_id": 58593920,
"source": null,
"title": "The Platonic Permutation"
}
|
she had only disclosed in her journal. To apologize for reading it without permission, Leonard dances in the lingerie, asking Penny to post an image of him on her social media as punishment. Howard, Raj, Bernadette, and Emily arrive at the apartment around this time, to Leonard's embarrassment. Later, Amy tells Sheldon she is ready to be his girlfriend again, but Sheldon declines, telling her that getting over their breakup was too difficult, but that he wishes to remain friends. Amy accepts but is disappointed by this. == Production == The story was written by Jim Reynolds, Jeremy Howe, and Tara Hernandez, while the teleplay was written by Steve Holland, Maria Ferrari, and Adam Faberman. The episode was directed by Mark Cendrowski. It features guest appearances by Wayne Wilderson as Travis and by Elon Musk as himself. It first aired in the US on CBS on November 19, 2015, and first aired in the UK on E4 on December 17, 2015. == Reception == === Ratings === The episode was watched live by 15.19 million viewers and had a ratings share of 3.8 during its original broadcast in the US. The 7-day data showed the episode received a total of 21.23 million viewers. Its UK premiere received 2.285 million viewers (7-day data), with the expanded 28-day data showing 2.515 million viewers, making it the most watched program on E4 for the week. === Critical response === "The Platonic Permutation" received mixed reviews from critics. Critics praised Sheldon and Amy's storyline but were critical of the other two subplots. IGN's Jesse Schedeen said Sheldon and Amy's storyline saved the episode from "total mediocrity". Caroline Preece of Den of Geek praised Sheldon and Amy's storyline as well, saying the resolutions of
|
{
"page_id": 58593920,
"source": null,
"title": "The Platonic Permutation"
}
|
the other two subplots had "left something to be desired". Digital Spy's Tom Eames said of "The Platonic Permutation": "Aside from a couple of sweet scenes with Sheldon and Amy, this was one of those episodes where you'd have a better time if you just looked at the photos and caught up on the synopsis on Wikipedia". Schedeen opined that the soup kitchen storyline had potential but ultimately lacked humor. He and Eames criticized the lack of a message for viewers about being thankful or generous and said that Howard's selfishness is instead rewarded by meeting Elon Musk. Preece was critical of the subplot, saying Howard was "misused" by "providing lazy comedic relief to compensate for some of the more dramatic things" the show has done in "The Platonic Permutation" and the wider season. Schedeen criticized Musk's appearance, saying he was not put to "very good use" and that his interactions with Howard were stiff and awkward. Eames concurred, calling his appearance a "bit pointless and self-indulgent" and criticizing his acting abilities. Schedeen said Leonard and Penny's subplot was mildly more entertaining than the soup kitchen one, but felt parts of it were repetitive. Eames concurred, calling the storyline "mildly funny". Schedeen praised Sheldon and Amy's storyline as a "welcome way of bringing Sheldon and Amy back together without needlessly pushing them back into each other's arms". Eames and Schedeen praised the rare downbeat ending for the series when Sheldon declined to be Amy's boyfriend. Eames opined that it was refreshing to see "Sheldon act so mature by admitting that getting over Amy was the one thing he hasn't 'excelled at', and deciding to stay just friends", calling it "one of the most realistic moments the show has ever had". Eames felt more focus should
|
{
"page_id": 58593920,
"source": null,
"title": "The Platonic Permutation"
}
|
have been given to this storyline, calling the other two "lazy and boring". === Awards === At the Art Directors Guild Awards 2015, John Shaffner (production designer), along with Francoise Cherry-Cohen (set designer) and Ann Shea (set decorator), won the Excellence in Production Design Award - Multi-Camera Television Series for The Big Bang Theory for their work on "The Platonic Permutation", "The Skywalker Incursion", and "The Mystery Date Observation". == References == == External links == "The Platonic Permutation" at IMDb
|
{
"page_id": 58593920,
"source": null,
"title": "The Platonic Permutation"
}
|
In general, a sample is a limited quantity of something which is intended to be similar to and represent a larger amount of that thing(s). The things could be countable objects such as individual items available as units for sale, or an uncountable material. Even though the word "sample" implies a smaller quantity taken from a larger amount, sometimes full biological or mineralogical specimens are called samples if they are taken for analysis, testing, or investigation like other samples. They are also considered samples in the sense that even whole specimens are "samples" of the full population of many individual organisms. The act of obtaining a sample is called "sampling" and can be performed manually by a person or by automatic process. Samples of material can be taken or provided for testing, analysis, investigation, quality control, demonstration, or trial use. Sometimes, sampling may be performed continuously. == Aliquot part == In science, a representative liquid sample taken from a larger amount of liquid is sometimes called an aliquot or aliquot part where the sample is an exact divisor of the whole. For example, 10mL would be an aliquot part of a 100mL sample. == Sample characteristics == The material may be solid, liquid, gas, a material of some intermediate characteristics such as gel or sputum, tissue, organism, or a combination of these. Even if a material sample is not countable as individual items, the quantity of the sample may still be describable in terms of its volume, mass, size, or other such dimensions. A solid sample can come in one or a few discrete pieces, or it can be fragmented, granular, or powdered. A section of a rod, wire, cord, sheeting, or tubing may be considered a sample. Samples which are not a solid piece are commonly kept in a
|
{
"page_id": 10949251,
"source": null,
"title": "Sample (material)"
}
|
container of some sort. Where goods are sold or supplied by reference to a sample, relevant sale of goods legislation may dictate the supplier's legal obligations in ensuring that the bulk of the goods corresponds with the goods comprising the sample, for example in the UK, the Sale of Goods Act 1979, section 15, the Supply of Goods and Services Act 1982, section 5, and the Consumer Rights Act 2015, section 13. == See also == Core sample Ice core Specimen (disambiguation) == References ==
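The "exact divisor" condition that defines an aliquot part, as described earlier, can be stated in one line. A trivial sketch (the volumes are illustrative):

```python
# An aliquot part must divide the whole exactly; volumes here are in mL.

def is_aliquot_part(sample_ml, whole_ml):
    return whole_ml % sample_ml == 0

print(is_aliquot_part(10, 100))   # True:  10 mL divides 100 mL exactly
print(is_aliquot_part(30, 100))   # False: 30 mL is not an exact divisor
```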
|
{
"page_id": 10949251,
"source": null,
"title": "Sample (material)"
}
|
In statistical mechanics, the hard hexagon model is a 2-dimensional lattice model of a gas, where particles are allowed to be on the vertices of a triangular lattice but no two particles may be adjacent. The model was solved by Rodney Baxter (1980), who found that it was related to the Rogers–Ramanujan identities. == The partition function of the hard hexagon model == The hard hexagon model occurs within the framework of the grand canonical ensemble, where the total number of particles (the "hexagons") is allowed to vary naturally, and is fixed by a chemical potential. In the hard hexagon model, all valid states have zero energy, and so the only important thermodynamic control variable is the ratio of chemical potential to temperature μ/(kT). The exponential of this ratio, z = exp(μ/(kT)) is called the activity and larger values correspond roughly to denser configurations. For a triangular lattice with N sites, the grand partition function is Z ( z ) = ∑ n z n g ( n , N ) = 1 + N z + 1 2 N ( N − 7 ) z 2 + ⋯ {\displaystyle \displaystyle {\mathcal {Z}}(z)=\sum _{n}z^{n}g(n,N)=1+Nz+{\tfrac {1}{2}}N(N-7)z^{2}+\cdots } where g(n, N) is the number of ways of placing n particles on distinct lattice sites such that no 2 are adjacent. The function κ is defined by κ ( z ) = lim N → ∞ Z ( z ) 1 / N = 1 + z − 3 z 2 + ⋯ {\displaystyle \kappa (z)=\lim _{N\rightarrow \infty }{\mathcal {Z}}(z)^{1/N}=1+z-3z^{2}+\cdots } so that log(κ) is the free energy per unit site. Solving the hard hexagon model means (roughly) finding an exact expression for κ as a function of z. The mean density ρ is given for small z by ρ = z d
|
{
"page_id": 20058756,
"source": null,
"title": "Hard hexagon model"
}
|
log ( κ ) d z = z − 7 z 2 + 58 z 3 − 519 z 4 + 4856 z 5 + ⋯ . {\displaystyle \rho =z{\frac {d\log(\kappa )}{dz}}=z-7z^{2}+58z^{3}-519z^{4}+4856z^{5}+\cdots .} The vertices of the lattice fall into 3 classes numbered 1, 2, and 3, given by the 3 different ways to fill space with hard hexagons. There are 3 local densities ρ1, ρ2, ρ3, corresponding to the 3 classes of sites. When the activity is large the system approximates one of these 3 packings, so the local densities differ, but when the activity is below a critical point the three local densities are the same. The critical point separating the low-activity homogeneous phase from the high-activity ordered phase is z c = ( 11 + 5 5 ) / 2 = ϕ 5 = 11.09017.... {\displaystyle z_{c}=(11+5{\sqrt {5}})/2=\phi ^{5}=11.09017....} with golden ratio φ. Above the critical point the local densities differ and in the phase where most hexagons are on sites of type 1 can be expanded as ρ 1 = 1 − z − 1 − 5 z − 2 − 34 z − 3 − 267 z − 4 − 2037 z − 5 − ⋯ {\displaystyle \rho _{1}=1-z^{-1}-5z^{-2}-34z^{-3}-267z^{-4}-2037z^{-5}-\cdots } ρ 2 = ρ 3 = z − 2 + 9 z − 3 + 80 z − 4 + 965 z − 5 − ⋯ . {\displaystyle \rho _{2}=\rho _{3}=z^{-2}+9z^{-3}+80z^{-4}+965z^{-5}-\cdots .} == Solution == The solution is given for small values of z < zc by z = − x H ( x ) 5 G ( x ) 5 {\displaystyle \displaystyle z={\frac {-xH(x)^{5}}{G(x)^{5}}}} κ = H ( x ) 3 Q ( x 5 ) 2 G ( x ) 2 ∏ n ≥ 1 ( 1 − x 6 n −
|
{
"page_id": 20058756,
"source": null,
"title": "Hard hexagon model"
}
|
4 ) ( 1 − x 6 n − 3 ) 2 ( 1 − x 6 n − 2 ) ( 1 − x 6 n − 5 ) ( 1 − x 6 n − 1 ) ( 1 − x 6 n ) 2 {\displaystyle \kappa ={\frac {H(x)^{3}Q(x^{5})^{2}}{G(x)^{2}}}\prod _{n\geq 1}{\frac {(1-x^{6n-4})(1-x^{6n-3})^{2}(1-x^{6n-2})}{(1-x^{6n-5})(1-x^{6n-1})(1-x^{6n})^{2}}}} ρ = ρ 1 = ρ 2 = ρ 3 = − x G ( x ) H ( x 6 ) P ( x 3 ) P ( x ) {\displaystyle \rho =\rho _{1}=\rho _{2}=\rho _{3}={\frac {-xG(x)H(x^{6})P(x^{3})}{P(x)}}} where G ( x ) = ∏ n ≥ 1 1 ( 1 − x 5 n − 4 ) ( 1 − x 5 n − 1 ) {\displaystyle G(x)=\prod _{n\geq 1}{\frac {1}{(1-x^{5n-4})(1-x^{5n-1})}}} H ( x ) = ∏ n ≥ 1 1 ( 1 − x 5 n − 3 ) ( 1 − x 5 n − 2 ) {\displaystyle H(x)=\prod _{n\geq 1}{\frac {1}{(1-x^{5n-3})(1-x^{5n-2})}}} P ( x ) = ∏ n ≥ 1 ( 1 − x 2 n − 1 ) = Q ( x ) / Q ( x 2 ) {\displaystyle P(x)=\prod _{n\geq 1}(1-x^{2n-1})=Q(x)/Q(x^{2})} Q ( x ) = ∏ n ≥ 1 ( 1 − x n ) . {\displaystyle Q(x)=\prod _{n\geq 1}(1-x^{n}).} For large z > zc the solution (in the phase where most occupied sites have type 1) is given by z = G ( x ) 5 x H ( x ) 5 {\displaystyle \displaystyle z={\frac {G(x)^{5}}{xH(x)^{5}}}} κ = x − 1 3 G ( x ) 3 Q ( x 5 ) 2 H ( x ) 2 ∏ n ≥ 1 ( 1 − x 3 n − 2 ) ( 1 − x 3 n − 1 ) ( 1 − x 3 n )
|
{
"page_id": 20058756,
"source": null,
"title": "Hard hexagon model"
}
|
2 {\displaystyle \kappa =x^{-{\frac {1}{3}}}{\frac {G(x)^{3}Q(x^{5})^{2}}{H(x)^{2}}}\prod _{n\geq 1}{\frac {(1-x^{3n-2})(1-x^{3n-1})}{(1-x^{3n})^{2}}}} ρ 1 = H ( x ) Q ( x ) ( G ( x ) Q ( x ) + x 2 H ( x 9 ) Q ( x 9 ) ) Q ( x 3 ) 2 {\displaystyle \rho _{1}={\frac {H(x)Q(x)(G(x)Q(x)+x^{2}H(x^{9})Q(x^{9}))}{Q(x^{3})^{2}}}} ρ 2 = ρ 3 = x 2 H ( x ) Q ( x ) H ( x 9 ) Q ( x 9 ) Q ( x 3 ) 2 {\displaystyle \rho _{2}=\rho _{3}={\frac {x^{2}H(x)Q(x)H(x^{9})Q(x^{9})}{Q(x^{3})^{2}}}} R = ρ 1 − ρ 2 = Q ( x ) Q ( x 5 ) Q ( x 3 ) 2 . {\displaystyle R=\rho _{1}-\rho _{2}={\frac {Q(x)Q(x^{5})}{Q(x^{3})^{2}}}.} The functions G and H turn up in the Rogers–Ramanujan identities, and the function Q is the Euler function, which is closely related to the Dedekind eta function. If x = e2πiτ, then x−1/60G(x), x11/60H(x), x−1/24P(x), z, κ, ρ, ρ1, ρ2, and ρ3 are modular functions of τ, while x1/24Q(x) is a modular form of weight 1/2. Since any two modular functions are related by an algebraic relation, this implies that the functions κ, z, R, ρ are all algebraic functions of each other (of quite high degree) (Joyce 1988). In particular, the value of κ(1), which Eric Weisstein dubbed the hard hexagon entropy constant (Weisstein), is an algebraic number of degree 24 equal to 1.395485972... (OEIS: A085851). == Related models == The hard hexagon model can be defined similarly on the square and honeycomb lattices. No exact solution is known for either of these models, but the critical point zc is near 3.7962±0.0001 for the square lattice and 7.92±0.08 for the honeycomb lattice; κ(1) is approximately 1.503048082... (OEIS: A085850) for the square lattice and 1.546440708... for the honeycomb lattice (Baxter
|
{
"page_id": 20058756,
"source": null,
"title": "Hard hexagon model"
}
|
1999). == References == Andrews, George E. (1981), "The hard-hexagon model and Rogers-Ramanujan type identities", Proceedings of the National Academy of Sciences of the United States of America, 78 (9): 5290–5292, Bibcode:1981PNAS...78.5290A, doi:10.1073/pnas.78.9.5290, ISSN 0027-8424, MR 0629656, PMC 348728, PMID 16593082 Baxter, Rodney J. (1980), "Hard hexagons: exact solution", Journal of Physics A: Mathematical and General, 13 (3): L61 – L70, Bibcode:1980JPhA...13L..61B, doi:10.1088/0305-4470/13/3/007, ISSN 0305-4470, MR 0560533 Baxter, Rodney J. (1982), Exactly solved models in statistical mechanics (PDF), London: Academic Press Inc. [Harcourt Brace Jovanovich Publishers], ISBN 978-0-12-083180-7, MR 0690578, archived from the original (PDF) on 2021-04-14, retrieved 2012-08-12 Joyce, G. S. (1988), "Exact results for the activity and isothermal compressibility of the hard-hexagon model", Journal of Physics A: Mathematical and General, 21 (20): L983 – L988, Bibcode:1988JPhA...21L.983J, doi:10.1088/0305-4470/21/20/005, ISSN 0305-4470, MR 0966792 Exton, H. (1983), q-Hypergeometric Functions and Applications, New York: Halstead Press, Chichester: Ellis Horwood Weisstein, Eric W., "Hard Hexagon Entropy Constant", MathWorld Baxter, R. J.; Enting, I. G.; Tsang, S. K. (April 1980), "Hard-square lattice gas", Journal of Statistical Physics, 22 (4): 465–489, Bibcode:1980JSP....22..465B, doi:10.1007/BF01012867, S2CID 121413715 Runnels, L. K.; Combs, L. L.; Salvant, James P. (15 November 1967), "Exact Finite Method of Lattice Statistics. II. Honeycomb-Lattice Gas of Hard Molecules", The Journal of Chemical Physics, 47 (10): 4015–4020, Bibcode:1967JChPh..47.4015R, doi:10.1063/1.1701569 Baxter, R. J. (1 June 1999), "Planar lattice gases with nearest-neighbor exclusion", Annals of Combinatorics, 3 (2): 191–203, arXiv:cond-mat/9811264, doi:10.1007/BF01608783, S2CID 13600601 == External links == Weisstein, Eric W. "Hard Hexagon Entropy Constant". MathWorld.
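The parametric small-z solution quoted above is straightforward to evaluate numerically by truncating the infinite products; for |x| well below 1 the omitted factors differ from 1 by a negligible amount. The sketch below (truncation depth and the test value of x are arbitrary choices) checks κ against the series κ = 1 + z − 3z² + ⋯ and prints the closed form of the critical activity:

```python
import math

# Evaluate Baxter's small-z parametric solution by truncating the
# infinite products for G, H, Q at N_MAX factors (ample for |x| << 1).

N_MAX = 100

def product(factor):
    p = 1.0
    for n in range(1, N_MAX + 1):
        p *= factor(n)
    return p

G = lambda x: product(lambda n: 1 / ((1 - x**(5*n - 4)) * (1 - x**(5*n - 1))))
H = lambda x: product(lambda n: 1 / ((1 - x**(5*n - 3)) * (1 - x**(5*n - 2))))
Q = lambda x: product(lambda n: 1 - x**n)

def kappa(x):
    rest = product(lambda n:
        (1 - x**(6*n - 4)) * (1 - x**(6*n - 3))**2 * (1 - x**(6*n - 2))
        / ((1 - x**(6*n - 5)) * (1 - x**(6*n - 1)) * (1 - x**(6*n))**2))
    return H(x)**3 * Q(x**5)**2 / G(x)**2 * rest

x = -0.01                          # small negative x gives small positive z
z = -x * H(x)**5 / G(x)**5
print("z      =", z)
print("kappa  =", kappa(x))
print("series =", 1 + z - 3 * z**2)              # agreement to ~z^3
print("z_c    =", (11 + 5 * math.sqrt(5)) / 2)   # = phi^5 = 11.09017...
```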
|
{
"page_id": 20058756,
"source": null,
"title": "Hard hexagon model"
}
|
Missense mRNA is a messenger RNA bearing one or more mutated codons that yield polypeptides with an amino acid sequence different from the wild-type or naturally occurring polypeptide. Missense mRNA molecules are created when template DNA strands or the mRNA strands themselves undergo a missense mutation, in which a protein coding sequence is mutated and an altered amino acid sequence is coded for. == Biogenesis == A missense mRNA arises from a missense mutation, in the event of which a DNA nucleotide base pair in the coding region of a gene is changed such that it results in the substitution of one amino acid for another. The point mutation is nonsynonymous because it alters the RNA codon in the mRNA transcript such that translation results in an amino acid change. An amino acid change may not result in appreciable changes in protein structure, depending on whether the amino acid change is conservative or non-conservative. This owes to the similar physicochemical properties exhibited by some amino acids. Missense mRNAs may arise from two different types of point mutations: spontaneous mutations and induced mutations. Spontaneous mutations occur during the DNA replication process, when a non-complementary nucleotide is deposited by the DNA polymerase in the extension phase. The next round of replication then fixes the error as a point mutation. If the resulting mRNA codon is one that changes the amino acid, a missense mRNA would be detected. A hypergeometric distribution study involving DNA polymerase β replication errors in the APC gene revealed 282 possible substitutions that could result in missense mutations. When the APC mRNA was analyzed in the mutational spectrum, it showed 3 sites where the frequency of substitutions was high. Induced mutations caused by mutagens can give rise to missense mutations. Nucleoside analogues such as 2-aminopurine and 5-bromouracil
|
{
"page_id": 10359432,
"source": null,
"title": "Missense mRNA"
}
|
can insert in place of A and T respectively. Ionizing radiation like x-rays and γ-rays can deaminate cytosine to uracil. Missense mRNAs may be applied synthetically in forward and reverse genetic screens used to interrogate the genome. Site-directed mutagenesis is a technique often employed to create knock-in and knock-out models that express missense mRNAs. For example, in knock-in studies, human orthologs are identified in model organisms to introduce missense mutations, or a human gene with a substitution mutation is integrated into the genome of the model organism. The subsequent loss-of-function or gain-of-function phenotypes are measured to model genetic diseases and discover novel drugs. While homologous recombination has been widely used to generate single-base substitutions, novel approaches that co-inject gRNA and hCas9 mRNA of the CRISPR/Cas9 system, in conjunction with single-strand oligodeoxynucleotide (ssODN) donor sequences, have proven efficient at generating point mutations in the genome. == Evolutionary implications == === Non-synonymous RNA editing === Substitutions can occur on the level of both DNA and RNA. RNA editing-dependent amino acid substitutions can produce missense mRNAs through hydrolytic deaminase reactions. Two of the most prevalent deaminase reactions occur through the apolipoprotein B mRNA editing enzyme (APOBEC) and the adenosine deaminase acting on RNA enzyme (ADAR), which are responsible for the conversion of cytidine to uridine (C-to-U) and the deamination of adenosine to inosine (A-to-I), respectively. Such selective substitutions of uridine for cytidine, and inosine for adenosine, in RNA editing can produce differential isoforms of missense mRNA transcripts, and confer transcriptome diversity and enhanced protein function in response to selective pressures. == See also == Nonsense mutation Start codon Stop codon == References ==
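The defining property of a missense mRNA, that the codon change alters the encoded amino acid, is easy to express programmatically. A minimal sketch follows, using only a tiny excerpt of the standard genetic code table; the example codons are illustrative, with GAG→GTG being the classic Glu→Val missense substitution (as in the sickle-cell HBB mutation):

```python
# Classify a codon substitution as synonymous, missense, or nonsense,
# the distinction drawn above. Only an excerpt of the standard genetic
# code is included here.

CODON_TABLE = {"GAG": "Glu", "GAA": "Glu", "GTG": "Val", "TAG": "Stop"}

def classify_substitution(ref_codon, alt_codon):
    ref, alt = CODON_TABLE[ref_codon], CODON_TABLE[alt_codon]
    if alt == ref:
        return "synonymous"        # same amino acid: not a missense change
    return "nonsense" if alt == "Stop" else "missense"

print(classify_substitution("GAG", "GAA"))  # synonymous (Glu -> Glu)
print(classify_substitution("GAG", "GTG"))  # missense   (Glu -> Val)
print(classify_substitution("GAG", "TAG"))  # nonsense   (Glu -> Stop)
```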
|
{
"page_id": 10359432,
"source": null,
"title": "Missense mRNA"
}
|
An electrostatic precipitator (ESP) is a filterless device that removes fine particles, such as dust and smoke, from a flowing gas using the force of an induced electrostatic charge, while only minimally impeding the flow of gas through the unit. In contrast to wet scrubbers, which apply energy directly to the flowing fluid medium, an ESP applies energy only to the particulate matter being collected and therefore is very efficient in its consumption of energy (in the form of electricity). == Invention == The first use of corona discharge to remove particles from an aerosol was by Hohlfeld in 1824. However, it was not commercialized until almost a century later. In 1907 Frederick Gardner Cottrell, a professor of chemistry at the University of California, Berkeley, applied for a patent on a device for charging particles and then collecting them through electrostatic attraction: the first electrostatic precipitator. Cottrell first applied the device to the collection of sulphuric acid mist and lead oxide fumes emitted from various acid-making and smelting activities. Wine-producing vineyards in northern California were being adversely affected by the lead emissions. At the time of Cottrell's invention, the theoretical basis for operation was not understood. The operational theory was developed later in Germany, with the work of Walter Deutsch and the formation of the Lurgi company. Cottrell used proceeds from his invention to fund scientific research through the creation of a foundation called Research Corporation in 1912, to which he assigned the patents. The intent of the organization was to bring inventions made by educators (such as Cottrell) into the commercial world for the benefit of society at large. The operation of Research Corporation is funded by royalties paid by commercial firms after commercialization occurs. Research Corporation has provided vital funding to many scientific projects: Goddard's rocketry experiments, Lawrence's cyclotron, production methods
|
{
"page_id": 1839752,
"source": null,
"title": "Electrostatic precipitator"
}
|
for vitamins A and B1, among many others. Research Corporation set territories for manufacturers of this technology, which included Western Precipitation (Los Angeles), Lodge-Cottrell (England), Lurgi Apparatebau-Gesellschaft (Germany), and Japanese Cottrell Corp. (Japan), and was a clearinghouse for any process improvements. However, anti-trust concerns forced Research Corporation to eliminate territory restrictions in 1946. Electrophoresis is the term used for migration of gas-suspended charged particles in a direct-current electrostatic field. Traditional CRT television sets tend to accumulate dust on the screen because of this phenomenon (a CRT is a direct-current machine operating at about 15 kilovolts). == Types == There are two main types of precipitators: High-voltage, single-stage - Single-stage precipitators combine an ionization and a collection step. They are commonly referred to as Cottrell precipitators. Low-voltage, two-stage - Two-stage precipitators use a similar principle; however, the ionizing section is followed by collection plates. Described below is the high-voltage, single-stage precipitator, which is widely used in minerals processing operations. The low-voltage, two-stage precipitator is generally used for filtration in air-conditioning systems. === Plate and bar === The majority of electrostatic precipitators installed are the plate type. Particles are collected on flat, parallel surfaces that are 8 to 12 in. (20 to 30 cm) apart, with a series of discharge electrodes spaced along the centerline of two adjacent plates. The contaminated gases pass through the passage between the plates, and the particles become charged and adhere to the collection plates. Collected particles are usually removed by rapping the plates and deposited in bins or hoppers at the base of the precipitator. The most basic precipitator contains a row of thin vertical wires, and followed by a stack of large flat metal plates oriented vertically, with the plates typically spaced about 1 cm to 18 cm apart, depending on the application. The air
|
{
"page_id": 1839752,
"source": null,
"title": "Electrostatic precipitator"
}
|
stream flows horizontally through the spaces between the wires, and then passes through the stack of plates. A negative voltage of several thousand volts is applied between wire and plate. If the applied voltage is high enough, an electric corona discharge ionizes the air around the electrodes, which then ionizes the particles in the air stream. The ionized particles, due to the electrostatic force, are diverted towards the grounded plates. Particles build up on the collection plates and are removed from the air stream. A two-stage design (separate charging section ahead of the collecting section) has the benefit of minimizing ozone production, which would adversely affect the health of personnel working in enclosed spaces. For shipboard engine rooms where gearboxes generate an oil mist, two-stage ESP's are used to clean the air, improving the operating environment and preventing buildup of flammable oil fog accumulations. Collected oil is returned to the gear lubricating system. === Tubular === Tubular precipitators consist of cylindrical collection electrodes with discharge electrodes located on the axis of the cylinder. The contaminated gases flow around the discharge electrode and up through the inside of the cylinders. The charged particles are collected on the grounded walls of the cylinder. The collected dust is removed from the bottom of the cylinder. Tubular precipitators are often used for mist or fog collection or for adhesive, sticky, radioactive, or extremely toxic materials. == Components == The four main components of all electrostatic precipitators are: Power supply unit, to provide high-voltage DC power Ionizing section, to impart a charge to particulates in the gas stream A means of removing the collected particulates A housing to enclose the precipitator zone The collected material on the electrodes is removed by rapping or vibrating the collecting electrodes either continuously or at a predetermined interval. Cleaning
|
{
"page_id": 1839752,
"source": null,
"title": "Electrostatic precipitator"
}
|
a precipitator can usually be done without interrupting the airflow. == Collection efficiency (R) == The following factors affect the efficiency of electrostatic precipitators: Larger collection-surface areas and lower gas-flow rates increase efficiency because of the increased time available for electrical activity to treat the dust particles. An increase in the dust-particle migration velocity to the collecting electrodes increases efficiency. The migration velocity can be increased by: Decreasing the gas viscosity Increasing the gas temperature Increasing the voltage field Precipitator performance is very sensitive to two particulate properties: 1) electrical resistivity; and 2) particle size distribution. These properties can be measured economically and accurately in the laboratory, using standard tests. Resistivity can be determined as a function of temperature in accordance with IEEE Standard 548. This test is conducted in an air environment containing a specified moisture concentration. The test is run as a function of ascending or descending temperature, or both. Data is acquired using an average ash layer electric field of 4 kV/cm. Since relatively low applied voltage is used and no sulfuric acid vapor is present in the test environment, the values obtained indicate the maximum ash resistivity. In an ESP, where particle charging and discharging are key functions, resistivity is an important factor that significantly affects collection efficiency. While resistivity is an important phenomenon in the inter-electrode region where most particle charging takes place, it has a particularly important effect on the dust layer at the collection electrode where discharging occurs. Particles that exhibit high resistivity are difficult to charge. But once charged, they do not readily give up their acquired charge on arrival at the collection electrode. On the other hand, particles with low resistivity easily become charged and readily release their charge to the grounded collection plate. Both extremes in resistivity impede the efficient
Precipitator performance is very sensitive to two particulate properties: 1) electrical resistivity; and 2) particle size distribution. These properties can be measured economically and accurately in the laboratory, using standard tests. Resistivity can be determined as a function of temperature in accordance with IEEE Standard 548. This test is conducted in an air environment containing a specified moisture concentration. The test is run as a function of ascending or descending temperature, or both. Data is acquired using an average ash layer electric field of 4 kV/cm. Since a relatively low applied voltage is used and no sulfuric acid vapor is present in the test environment, the values obtained indicate the maximum ash resistivity. In an ESP, where particle charging and discharging are key functions, resistivity is an important factor that significantly affects collection efficiency. While resistivity is an important phenomenon in the inter-electrode region where most particle charging takes place, it has a particularly important effect on the dust layer at the collection electrode where discharging occurs. Particles that exhibit high resistivity are difficult to charge, but once charged, they do not readily give up their acquired charge on arrival at the collection electrode. On the other hand, particles with low resistivity become charged easily and readily release their charge to the grounded collection plate. Both extremes in resistivity impede the efficient functioning of ESPs.
ESPs work best under normal resistivity conditions. Resistivity, which is a characteristic of particles in an electric field, is a measure of a particle's resistance to transferring charge (both accepting and giving up charges). Resistivity is a function of a particle's chemical composition as well as flue gas operating conditions such as temperature and moisture. Particles can have high, moderate (normal), or low resistivity. Bulk resistivity is defined using a more general version of Ohm's law, as given in Equation (1) below:
E = j ρ (1)
where E is the electric field strength (V/cm), j is the current density (A/cm^2), and ρ is the resistivity (ohm-cm). A more practical form solves for resistivity as a function of the applied voltage and measured current, as given in Equation (2) below:
ρ = (V / I) × (A / l) (2)
where ρ is the resistivity (ohm-cm), V is the applied DC potential (volts), I is the measured current (amperes), l is the ash layer thickness (cm), and A is the face area of the current-measuring electrode (cm^2). Resistivity is the electrical resistance of a dust sample 1.0 cm^2 in cross-sectional area and 1.0 cm thick, and is recorded in units of ohm-cm. A method for measuring resistivity is described in this article. The table below gives value ranges for low, normal, and high resistivity.
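As a quick illustration of Equation (2), here is a minimal helper under the stated units; all numbers are hypothetical, though the 2 kV across a 0.5 cm layer matches the 4 kV/cm average field mentioned above:

```python
def ash_resistivity(V, I, l, A):
    """Equation (2): rho = (V / I) * (A / l), giving ohm-cm when
    V is in volts, I in amperes, l (layer thickness) in cm and
    A (electrode face area) in cm^2."""
    return (V / I) * (A / l)

# Hypothetical laboratory reading: 2 kV across a 0.5 cm ash layer under a
# 5 cm^2 electrode, drawing 2 nA -> 1e13 ohm-cm (a high-resistivity ash).
print(ash_resistivity(V=2000.0, I=2e-9, l=0.5, A=5.0))
```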
=== Dust layer resistance === Resistance affects electrical conditions in the dust layer: a potential electric field (voltage drop) forms across the layer as negatively charged particles arrive at its surface and leak their electrical charges to the collection plate. At the metal surface of the electrically grounded collection plate, the voltage is zero, whereas at the outer surface of the dust layer, where new particles and ions are arriving, the electrostatic voltage caused by the gas ions can be quite high. The strength of this electric field depends on the resistance and
thickness of the dust layer. In high-resistance dust layers, the dust is not sufficiently conductive, so electrical charges have difficulty moving through the dust layer. Consequently, electrical charges accumulate on and beneath the dust layer surface, creating a strong electric field. Voltages can be greater than 10,000 volts. Dust particles with high resistance are held too strongly to the plate, making them difficult to remove and causing rapping problems. In low-resistance dust layers, the corona current passes readily to the grounded collection electrode, so only a relatively weak electric field, of several thousand volts, is maintained across the dust layer. Collected dust particles with low resistance do not adhere strongly enough to the collection plate; they are easily dislodged and become re-entrained in the gas stream. The electrical conductivity of a bulk layer of particles depends on both surface and volume factors. Volume conduction, or the motion of electrical charges through the interiors of particles, depends mainly on the composition and temperature of the particles. In the higher temperature regions, above 500 °F (260 °C), volume conduction controls the conduction mechanism. Volume conduction also involves ancillary factors, such as compression of the particle layer, particle size and shape, and surface properties. Volume conduction is represented in the figures as a straight line at temperatures above 500 °F (260 °C). At temperatures below about 450 °F (230 °C), electrical charges begin to flow across surface moisture and chemical films adsorbed onto the particles. Surface conduction begins to lower the resistivity values and bend the curve downward at temperatures below 500 °F (260 °C). These films usually differ both physically and chemically from the interiors of the particles owing to adsorption phenomena. Theoretical calculations indicate that moisture films only a few molecules thick are adequate to provide the desired surface conductivity. Surface
conduction on particles is closely related to the surface-leakage currents occurring on electrical insulators, which have been extensively studied. An interesting practical application of surface leakage is the determination of dew point by measurement of the current between adjacent electrodes mounted on a glass surface. A sharp rise in current signals the formation of a moisture film on the glass. This method has been used effectively for determining the marked rise in dew point which occurs when small amounts of sulfuric acid vapor are added to an atmosphere (commercial dew-point meters are available on the market). The following discussion of normal, high, and low resistivity applies to ESPs operated in a dry state; resistivity is not a problem in the operation of wet ESPs because of the moisture concentration in the ESP. The relationship between moisture content and resistivity is explained later in this work. === Normal resistivity === As stated above, ESPs work best under normal resistivity conditions. Particles with normal resistivity do not rapidly lose their charge on arrival at the collection electrode. These particles slowly leak their charge to the grounded plates and are retained on the collection plates by intermolecular adhesive and cohesive forces. This allows a particulate layer to be built up and then dislodged from the plates by rapping. Within the range of normal dust resistivity (between 10^7 and 2 × 10^10 ohm-cm), fly ash is collected more easily than dust having either low or high resistivity.
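The resistivity regimes described in this and the next two subsections can be summarized in a small helper. The numeric band is the normal range quoted above; treating the band edges as inclusive is an assumption:

```python
def classify_resistivity(rho_ohm_cm):
    """Bucket a dust resistivity into the regimes discussed in the text
    (normal range taken as 1e7 to 2e10 ohm-cm)."""
    if rho_ohm_cm < 1e7:
        return "low: particles shed charge quickly and re-entrain easily"
    if rho_ohm_cm <= 2e10:
        return "normal: best ESP collection performance"
    return "high: risk of back corona and increased sparking"

for rho in (1e5, 1e9, 5e12):
    print(f"{rho:.0e} ohm-cm -> {classify_resistivity(rho)}")
```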
=== High resistivity === If the voltage drop across the dust layer becomes too high, several adverse effects can occur. First, the high voltage drop reduces the voltage difference between the discharge electrode and the collection electrode, and thereby reduces the electrostatic field strength used to drive the gas-ion-charged particles over to the collected dust layer. As the dust layer
builds up, and the electrical charges accumulate on the surface of the dust layer, the voltage difference between the discharge and collection electrodes decreases. The migration velocities of small particles are especially affected by the reduced electric field strength. Another problem that occurs with high-resistivity dust layers is called back corona. This occurs when the potential drop across the dust layer is so great that corona discharges begin to appear in the gas that is trapped within the dust layer. The dust layer breaks down electrically, producing small holes or craters from which back corona discharges occur. Positive gas ions are generated within the dust layer and are accelerated toward the negatively charged discharge electrode. The positive ions reduce some of the negative charges on the dust layer and neutralize some of the negative ions on the charged particles heading toward the collection electrode. Disruptions of the normal corona process greatly reduce the ESP's collection efficiency, which in severe cases may fall below 50%. When back corona is present, the dust particles build up on the electrodes, forming a layer of insulation. Often this cannot be repaired without taking the unit offline. The third, and generally most common, problem with high-resistivity dust is increased electrical sparking. When the sparking rate exceeds the set spark-rate limit, the automatic controllers limit the operating voltage of the field. This causes reduced particle charging and reduced migration velocities toward the collection electrode. High resistivity can generally be reduced by: adjusting the temperature; increasing the moisture content; adding conditioning agents to the gas stream; increasing the collection surface area; and using hot-side precipitators (occasionally, and with foreknowledge of sodium depletion). Thin dust layers and high-resistivity dust especially favor the formation of back corona craters. Severe back corona has
been observed with dust layers as thin as 0.1 mm, but a dust layer just over one particle thick can reduce the sparking voltage by 50%. The most marked effects of back corona on the current–voltage characteristics are: a reduction of the sparkover voltage by as much as 50% or more; current jumps or discontinuities caused by the formation of stable back-corona craters; and a large increase in the maximum corona current, which just below sparkover of the corona gap may be several times the normal current. The figure below and to the left shows the variation in resistivity with changing gas temperature for six different industrial dusts along with three coal-fired fly ashes. The figure on the right illustrates resistivity values measured for various chemical compounds that were prepared in the laboratory. Results for Fly Ash A (in the figure to the left) were acquired in the ascending-temperature mode. These data are typical for an ash with a moderate to high combustibles content. Data for Fly Ash B are from the same sample, acquired during the descending-temperature mode. The differences between the ascending and descending temperature modes are due to the presence of unburned combustibles in the sample. Between the two test modes, the samples are equilibrated in dry air for 14 hours (overnight) at 850 °F (450 °C). This overnight annealing process typically removes between 60% and 90% of any unburned combustibles present in the samples. Exactly how carbon works as a charge carrier is not fully understood, but it is known to significantly reduce the resistivity of a dust. Carbon can act, at first, like a high-resistivity dust in the precipitator: higher voltages can be required for corona generation to begin. These higher voltages can be problematic for the TR-set controls. The problem lies in the onset of
corona causing large amounts of current to surge through the (low-resistivity) dust layer. The controls sense this surge as a spark. As precipitators are operated in spark-limiting mode, power is terminated and the corona generation cycle re-initiates. Thus, lower power (current) readings are noted alongside relatively high voltage readings. The same thing is believed to occur in laboratory measurements. Parallel-plate geometry is used in laboratory measurements without corona generation. A stainless steel cup holds the sample. Another stainless steel electrode weight sits on top of the sample (in direct contact with the dust layer). As the voltage is increased from small values (e.g. 20 V), no current is measured. Then a threshold voltage level is reached, at which current surges through the sample, so much so that the voltage supply unit can trip off. After removal of the unburned combustibles during the above-mentioned annealing procedure, the descending-temperature-mode curve shows the typical inverted "V" shape one might expect. === Low resistivity === Particles that have low resistivity are difficult to collect because they are easily charged (very conductive) and rapidly lose their charge on arrival at the collection electrode. The particles take on the charge of the collection electrode, bounce off the plates, and become re-entrained in the gas stream. Thus, the attractive and repulsive electrical forces that are normally at work at normal and higher resistivities are lacking, and the binding forces to the plate are considerably lessened. Examples of low-resistivity dusts are unburned carbon in fly ash and carbon black. If these conductive particles are coarse, they can be removed upstream of the precipitator by using a device such as a cyclone mechanical collector. The addition of liquid ammonia (NH3) into the gas stream as a conditioning agent has found wide use in recent years. It is
theorized that ammonia reacts with H2SO4 contained in the flue gas to form an ammonium sulfate compound that increases the cohesivity of the dust. This additional cohesivity makes up for the loss of electrical attraction forces. The table below summarizes the characteristics associated with low-, normal- and high-resistivity dusts. The moisture content of the flue gas stream also affects particle resistivity. Increasing the moisture content of the gas stream, by spraying water or injecting steam into the ductwork preceding the ESP, lowers the resistivity. In both temperature adjustment and moisture conditioning, one must maintain gas conditions above the dew point to prevent corrosion problems in the ESP or downstream equipment. The figure to the right shows the effect of temperature and moisture on the resistivity of a cement dust. As the percentage of moisture in the gas stream increases from 6 to 20%, the resistivity of the dust decreases dramatically. Also, raising or lowering the temperature can decrease cement dust resistivity for all the moisture percentages represented. The presence of SO3 in the gas stream has been shown to favor the electrostatic precipitation process when problems with high resistivity occur. Most of the sulfur content in the coal burned for combustion sources converts to SO2, but approximately 1% of the sulfur converts to SO3. The amount of SO3 in the flue gas normally increases with increasing sulfur content of the coal, and the resistivity of the particles decreases as the sulfur content of the coal increases. Other conditioning agents, such as sulfuric acid, ammonia, sodium chloride, and soda ash (sometimes as raw trona), have also been used to reduce particle resistivity. Therefore, the chemical composition of the flue gas stream is important with regard to the resistivity of the particles to be collected in the ESP. The table below
lists various conditioning agents and their mechanisms of operation. If injection of ammonium sulfate occurs at a temperature greater than about 600 °F (320 °C), dissociation into ammonia and sulfur trioxide results. Depending on the ash, the SO3 may preferentially interact with the fly ash, acting as SO3 conditioning. The remainder recombines with ammonia to add to the space charge as well as to increase the cohesiveness of the ash. More recently, it has been recognized that a major reason for loss of efficiency of the electrostatic precipitator is particle buildup on the charging wires in addition to the collection plates (Davidson and McKinney, 1998). This is easily remedied by making sure that the wires themselves are cleaned at the same time as the collecting plates. Sulfuric acid vapor (SO3) enhances the effects of water vapor on surface conduction. It is physically adsorbed within the layer of moisture on the particle surfaces. The effects of relatively small amounts of acid vapor can be seen in the figure below and to the right. The inherent resistivity of the sample at 300 °F (150 °C) is 5 × 10^12 ohm-cm; an equilibrium concentration of just 1.9 ppm sulfuric acid vapor lowers that value to about 7 × 10^9 ohm-cm. == Modern industrial electrostatic precipitators == ESPs continue to be excellent devices for control of many industrial particulate emissions, including smoke from electricity-generating utilities (coal- and oil-fired), salt cake collection from black liquor boilers in pulp mills, and catalyst collection from fluidized-bed catalytic cracker units in oil refineries, to name a few. These devices treat gas volumes from several hundred thousand ACFM to 2.5 million ACFM (1,180 m³/s) in the largest coal-fired boiler applications. For a coal-fired boiler the collection is usually performed downstream of the air preheater at about 160 °C
(320 °F), which provides optimal resistivity of the coal-ash particles. For some difficult applications with low-sulfur fuel, hot-end units have been built operating above 370 °C (698 °F). The original parallel plate–weighted wire design (see the figure of the plate-and-bar precipitator above) has evolved as more efficient (and robust) discharge electrode designs were developed, today focusing on rigid (pipe-frame) discharge electrodes to which many sharpened spikes are attached (barbed wire), maximizing corona production. Transformer-rectifier systems apply voltages of 50–100 kV at relatively high current densities. Modern controls, such as automatic voltage control, minimize electric sparking and prevent arcing (sparks are quenched within half a cycle of the TR set), avoiding damage to the components. Automatic plate-rapping systems and hopper-evacuation systems remove the collected particulate matter while on line, theoretically allowing ESPs to stay in continuous operation for years at a time. == Electrostatic sampling for bioaerosols == Electrostatic precipitators can be used to sample biological airborne particles or aerosols for analysis. Sampling for bioaerosols requires precipitator designs optimised with a liquid counter-electrode, which can be used to sample biological particles, e.g. viruses, directly into a small liquid volume to reduce unnecessary sample dilution. See Bioaerosols for more details. == Wet electrostatic precipitator == A wet electrostatic precipitator (WESP or wet ESP) operates with water-vapor-saturated air streams (100% relative humidity). WESPs are commonly used to remove liquid droplets such as sulfuric acid mist from industrial process gas streams. The WESP is also commonly used where the gases are high in moisture content, contain combustible particulate, or have particles that are sticky in nature. == Household electrostatic air cleaners == Plate precipitators are commonly marketed to the public as air purifier devices or as a permanent replacement for furnace filters, but all have the undesirable attribute of being somewhat
messy to clean. A negative side effect of electrostatic precipitation devices is the potential production of toxic ozone and NOx. However, electrostatic precipitators offer benefits over other air purification technologies, such as HEPA filtration, which require expensive filters and can become "production sinks" for many harmful forms of bacteria. With electrostatic precipitators, if the collection plates are allowed to accumulate large amounts of particulate matter, the particles can sometimes bond so tightly to the metal plates that vigorous washing and scrubbing may be required to clean the collection plates completely. The close spacing of the plates can make thorough cleaning difficult, and the stack of plates often cannot be easily disassembled for cleaning. One solution, suggested by several manufacturers, is to wash the collector plates in a dishwasher. Some consumer precipitation filters are sold with special soak-off cleaners, where the entire plate array is removed from the precipitator and soaked in a large container overnight, to help loosen the tightly bonded particulates. A study by the Canada Mortgage and Housing Corporation testing a variety of forced-air furnace filters found that ESP filters provided the best, and most cost-effective, means of cleaning air using a forced-air system. The first portable electrostatic air filter systems for homes were marketed in 1954 by Raytheon. == See also == Air ionizer Air purge system Ozone generator Scrubber == References == == External links == Parker, K.R. (1997). Applied Electrostatic Precipitation. Springer. ISBN 0751402664.
Plague is an infectious disease caused by the bacterium Yersinia pestis. Symptoms include fever, weakness and headache, usually beginning one to seven days after exposure. There are three forms of plague, each affecting a different part of the body and causing associated symptoms. Pneumonic plague infects the lungs, causing shortness of breath, coughing and chest pain; bubonic plague affects the lymph nodes, making them swell; and septicemic plague infects the blood and can cause tissues to turn black and die. The bubonic and septicemic forms are generally spread by flea bites or handling an infected animal, whereas pneumonic plague is generally spread between people through the air via infectious droplets. Diagnosis is typically by finding the bacterium in fluid from a lymph node, blood or sputum. Those at high risk may be vaccinated, and those exposed to a case of pneumonic plague may be treated with preventive medication. If infected, treatment is with antibiotics and supportive care; typically the antibiotics include a combination of gentamicin and a fluoroquinolone. The risk of death with treatment is about 10%, while without treatment it is about 70%. Globally, about 600 cases are reported a year. In 2017, the countries with the most cases included the Democratic Republic of the Congo, Madagascar and Peru. In the United States, infections occasionally occur in rural areas, where the bacteria are believed to circulate among rodents. Plague has historically occurred in large outbreaks, the best known being the Black Death in the 14th century, which resulted in more than 50 million deaths in Europe. == Signs and symptoms == There are several different clinical manifestations of plague. The most common form is bubonic plague, followed by septicemic and pneumonic plague. Other clinical manifestations include plague meningitis, plague pharyngitis, and ocular plague. General symptoms of plague include fever, chills,
headaches, and nausea. Many people experience swelling in their lymph nodes if they have bubonic plague. For those with pneumonic plague, symptoms may (or may not) include a cough, pain in the chest, and haemoptysis. === Bubonic plague === When a flea bites a human and contaminates the wound with regurgitated blood, the plague-causing bacteria are passed into the tissue. Y. pestis can reproduce inside cells, so even if phagocytosed, they can still survive. Once in the body, the bacteria can enter the lymphatic system, which drains interstitial fluid. Plague bacteria secrete several toxins, one of which is known to cause beta-adrenergic blockade. Y. pestis spreads through the lymphatic vessels of the infected human until it reaches a lymph node, where it causes acute lymphadenitis. The swollen lymph nodes form the characteristic buboes associated with the disease, and autopsies of these buboes have revealed them to be mostly hemorrhagic or necrotic. If the lymph node is overwhelmed, the infection can pass into the bloodstream, causing secondary septicemic plague, and if the lungs are seeded, it can cause secondary pneumonic plague. === Septicemic plague === Lymphatics ultimately drain into the bloodstream, so the plague bacteria may enter the blood and travel to almost any part of the body. In septicemic plague, bacterial endotoxins cause disseminated intravascular coagulation (DIC), producing tiny clots throughout the body and possibly ischemic necrosis (tissue death due to lack of circulation or perfusion to that tissue) from the clots. DIC results in depletion of the body's clotting resources, so that it can no longer control bleeding. Consequently, there is bleeding into the skin and other organs, which can cause a red and/or black patchy rash and hemoptysis/hematemesis (coughing up or vomiting blood). There are bumps on the skin that look somewhat like insect bites; these are usually red, and sometimes
white in the centre. Untreated, septicemic plague is usually fatal; early treatment with antibiotics reduces the mortality rate to between 4 and 15 per cent. === Pneumonic plague === The pneumonic form of plague arises from infection of the lungs. It causes coughing and thereby produces airborne droplets that contain bacterial cells and are likely to infect anyone inhaling them. The incubation period for pneumonic plague is short, usually two to four days, but sometimes just a few hours. The initial signs are indistinguishable from several other respiratory illnesses; they include headache, weakness, and spitting or vomiting of blood. The course of the disease is rapid; unless diagnosed and treated soon enough, typically within a few hours, death may follow in one to six days, and in untreated cases mortality is nearly 100%. == Cause == Transmission of Y. pestis to an uninfected individual is possible by any of the following means:
droplet contact – coughing or sneezing on another person
direct physical contact – touching an infected person, including sexual contact
indirect contact – usually by touching soil contamination or a contaminated surface
airborne transmission – if the microorganism can remain in the air for long periods
fecal-oral transmission – usually from contaminated food or water sources
vector-borne transmission – carried by insects or other animals
Yersinia pestis circulates in animal reservoirs, particularly in rodents, in the natural foci of infection found on all continents except Australia. The natural foci of plague are situated in a broad belt in the tropical and sub-tropical latitudes and the warmer parts of the temperate latitudes around the globe, between the parallels 55° N and 40° S. Contrary to popular belief, rats did not directly start the spread of the bubonic plague. It is mainly a disease of the fleas (Xenopsylla cheopis)
that infested the rats, making the rats themselves the first victims of the plague. Rodent-borne infection in a human occurs when a person is bitten by a flea that has been infected by biting a rodent that was itself infected by the bite of a flea carrying the disease. The bacteria multiply inside the flea, sticking together to form a plug that blocks its stomach and causes it to starve. The flea then bites a host and continues to feed, even though it cannot quell its hunger, and consequently the flea vomits blood tainted with the bacteria back into the bite wound. The bubonic plague bacterium then infects a new person, and the flea eventually dies from starvation. Serious outbreaks of plague are usually started by other disease outbreaks in rodents or by a rise in the rodent population. A 21st-century study of a 1665 outbreak of plague in the village of Eyam in England's Derbyshire Dales – which isolated itself during the outbreak, facilitating modern study – found that three-quarters of cases are likely to have been due to human-to-human transmission, especially within families, a much larger proportion than previously thought. == Diagnosis == Symptoms of plague are usually non-specific, and laboratory testing is required to diagnose it definitively. Y. pestis can be identified both under a microscope and by culturing a sample, and this is used as a reference standard to confirm that a person has a case of plague. The sample can be obtained from the blood, mucus (sputum), or aspirate extracted from inflamed lymph nodes (buboes). If a person is administered antibiotics before a sample is taken, if there is a delay in transporting the person's sample to a laboratory, or if the sample is poorly stored, there is a possibility of false negative results. Polymerase chain reaction (PCR) may also be used to diagnose plague, by detecting the presence of bacterial genes such as the pla gene (plasminogen activator) and the caf1 gene (F1 capsule antigen). PCR testing requires a very small sample and is effective for both live and dead bacteria. For this reason, if a person receives antibiotics before a sample is collected for laboratory testing, they may have a false negative culture and a positive PCR result. Blood tests to detect antibodies against Y. pestis can also be used to diagnose plague; however, this requires taking blood samples at different periods to detect differences between the acute and convalescent phases of F1 antibody titres. In 2020, a study was released on rapid diagnostic tests that detect the F1 capsule antigen (F1RDT) by sampling sputum or bubo aspirate. Results show that the rapid diagnostic F1RDT test can be used for people who have suspected pneumonic or bubonic plague, but cannot be used in asymptomatic people. F1RDT may be useful in providing a fast result for prompt treatment and a fast public health response, as studies suggest that it is highly sensitive for both pneumonic and bubonic plague. However, when using the rapid test, both positive and negative results need to be confirmed to establish or reject the diagnosis of a confirmed case of plague, and the test result needs to be interpreted within the epidemiological context: study findings indicate that although 40 out of 40 people who had the plague in a population of 1,000 were correctly diagnosed, 317 people were falsely diagnosed as positive.
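Working through those reported numbers makes the caveat concrete: the test misses no cases, but most positives are false alarms, so the positive predictive value is low:

```python
# Worked numbers from the F1RDT example above: in a population of 1,000,
# all 40 true cases tested positive, but so did 317 people without plague.
true_pos, false_neg = 40, 0
false_pos = 317
true_neg = 1000 - true_pos - false_pos - false_neg  # 643

sensitivity = true_pos / (true_pos + false_neg)   # 1.00
specificity = true_neg / (true_neg + false_pos)   # ~0.67
ppv = true_pos / (true_pos + false_pos)           # ~0.11

# Only about 11% of positives are real cases, which is why the text insists
# that rapid-test results be confirmed by culture or PCR.
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} PPV={ppv:.2f}")
```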
== Prevention == === Vaccination === Bacteriologist Waldemar Haffkine developed the first plague vaccine in 1897. He conducted a massive inoculation program in British India, and it is estimated that 26 million doses of Haffkine's anti-plague vaccine were sent out from Bombay
between 1897 and 1925, reducing the plague mortality by 50–85%. Since human plague is rare in most parts of the world as of 2023, routine vaccination is not needed other than for those at particularly high risk of exposure, nor for people living in areas with enzootic plague, meaning it occurs at regular, predictable rates in populations and specific areas, such as the western United States. It is not even indicated for most travellers to countries with known recent reported cases, particularly if their travel is limited to urban areas with modern hotels. The United States CDC thus only recommends vaccination for: (1) all laboratory and field personnel who are working with Y. pestis organisms resistant to antimicrobials; (2) people engaged in aerosol experiments with Y. pestis; and (3) people engaged in field operations in areas with enzootic plague where preventing exposure is not possible (such as some disaster areas). A systematic review by the Cochrane Collaboration found no studies of sufficient quality to make any statement on the efficacy of the vaccine. === Early diagnosis === Diagnosing plague early leads to a decrease in transmission or spread of the disease. === Prophylaxis === Pre-exposure prophylaxis for first responders and health care providers who will care for patients with pneumonic plague is not considered necessary as long as standard and droplet precautions can be maintained. In cases of surgical mask shortages, patient overcrowding, poor ventilation in hospital wards, or other crises, pre-exposure prophylaxis might be warranted if sufficient supplies of antimicrobials are available. Postexposure prophylaxis should be considered for people who had close (<6 feet), sustained contact with a patient with pneumonic plague and were not wearing adequate personal protective equipment. Antimicrobial postexposure prophylaxis can also be considered for laboratory workers accidentally exposed to infectious materials and for people who had
close (<6 feet) or direct contact with infected animals, such as veterinary staff, pet owners, and hunters. Specific recommendations on pre- and post-exposure prophylaxis are available in the clinical guidelines on treatment and prophylaxis of plague published in 2021. == Treatments == If diagnosed in time, the various forms of plague are usually highly responsive to antibiotic therapy. The antibiotics often used are streptomycin, chloramphenicol and tetracycline. Amongst the newer generation of antibiotics, gentamicin and doxycycline have proven effective in monotherapeutic treatment of plague. Guidelines on treatment and prophylaxis of plague were published by the Centers for Disease Control and Prevention in 2021. The plague bacterium could develop drug resistance and again become a major health threat. One case of a drug-resistant form of the bacterium was found in Madagascar in 1995, and further outbreaks in Madagascar were reported in November 2014 and October 2017. == Epidemiology == Globally, about 600 cases are reported a year. In 2017, the countries with the most cases included the Democratic Republic of the Congo, Madagascar and Peru. It has historically occurred in large outbreaks, with the best known being the Black Death in the 14th century, which resulted in more than 50 million dead. In recent years, cases have been distributed between small seasonal outbreaks, which occur primarily in Madagascar, and sporadic outbreaks or isolated cases in endemic areas. In 2022, the possible origin of all modern strains of Yersinia pestis was identified from DNA in human remains in three graves located in Kyrgyzstan, dated to 1338 and 1339. The siege of Caffa in Crimea in 1346 is known to have been the first plague outbreak of the strains that later spread over Europe. Comparing the sequenced DNA with other ancient and modern strains paints a family tree of the bacteria. Bacteria today affecting marmots in
Kyrgyzstan are closest to the strain found in the graves, suggesting this is also the location where plague transferred from animals to humans. == Biological weapon == The plague has a long history as a biological weapon. Historical accounts from ancient China and medieval Europe detail the use of infected animal carcasses, such as cows or horses, and human carcasses, by the Xiongnu/Huns, Mongols, Turks and other groups, to contaminate enemy water supplies. Han dynasty general Huo Qubing is recorded to have died of such contamination while engaging in warfare against the Xiongnu. Plague victims were also reported to have been tossed by catapult into cities under siege. In 1347, the Genoese possession of Caffa, a great trade emporium on the Crimean peninsula, came under siege by an army of Mongol warriors of the Golden Horde under the command of Jani Beg. After a protracted siege, during which the Mongol army was reportedly withering from the disease, they decided to use the infected corpses as a biological weapon. The corpses were catapulted over the city walls, infecting the inhabitants. This event might have led to the transfer of the Black Death via their ships into the south of Europe, possibly explaining its rapid spread. During World War II, the Japanese Army developed weaponized plague, based on the breeding and release of large numbers of fleas. During the Japanese occupation of Manchuria, Unit 731 deliberately infected Chinese, Korean and Manchurian civilians and prisoners of war with the plague bacterium. These subjects, termed "maruta" or "logs", were then studied, some by dissection, others by vivisection while still conscious. Members of the unit such as Shirō Ishii were exonerated from the Tokyo tribunal by Douglas MacArthur, but 12 of them were prosecuted in the Khabarovsk War Crime Trials in 1949, during which some admitted
having spread bubonic plague within a 36-kilometre (22 mi) radius around the city of Changde. Ishii innovated bombs containing live mice and fleas, with very small explosive loads, to deliver the weaponized microbes; the problem of the explosive killing the infected animal and insect was overcome by using a ceramic, rather than metal, casing for the warhead. While no records survive of the actual usage of the ceramic shells, prototypes exist and are believed to have been used in experiments during WWII. After World War II, both the United States and the Soviet Union developed means of weaponising pneumonic plague. Experiments included various delivery methods, vacuum drying, sizing the bacterium, developing strains resistant to antibiotics, combining the bacterium with other diseases (such as diphtheria), and genetic engineering. Scientists who worked in USSR bio-weapons programs have stated that the Soviet effort was formidable and that large stocks of weaponised plague bacteria were produced. Information on many of the Soviet and US projects is largely unavailable. Aerosolized pneumonic plague remains the most significant threat. The plague can be easily treated with antibiotics, and some countries, such as the United States, have large supplies on hand should such an attack occur, making the threat less severe. == See also == Timeline of plague == References == == Further reading == Nelson CA, Meaney-Delman D, Fleck-Derderian S, Cooley KM, Yu PA, Mead PS (July 2021). "Antimicrobial Treatment and Prophylaxis of Plague: Recommendations for Naturally Acquired Infections and Bioterrorism Response" (PDF). MMWR Recomm Rep. 70 (3): 1–27. doi:10.15585/mmwr.rr7003a1. PMC 8312557. PMID 34264565. Archived (PDF) from the original on 2022-10-09. == External links ==
WHO health topic
CDC plague map: world distribution, publications, and information on bioterrorism preparedness and response regarding plague
Symptoms, causes, and pictures of bubonic plague
A secondary carbon is a carbon atom bound to two other carbon atoms, with sp3 hybridization. For this reason, secondary carbon atoms are found in almost all hydrocarbons having at least three carbon atoms (neopentane, for example, does not have any secondary carbon atoms). In unbranched alkanes, the inner carbon atoms are always secondary carbon atoms (see figure).
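To make the definition concrete, here is a small sketch that classifies each carbon of an alkane by counting its carbon neighbours; the molecule (2-methylbutane) and the adjacency-list encoding are illustrative choices, not from the original text:

```python
# Carbon skeleton of 2-methylbutane, CH3-CH(CH3)-CH2-CH3, as an adjacency
# list (hydrogens are implicit).
bonds = {
    "C1": ["C2"],
    "C2": ["C1", "C3", "C4"],
    "C3": ["C2"],
    "C4": ["C2", "C5"],
    "C5": ["C4"],
}

NAMES = {1: "primary", 2: "secondary", 3: "tertiary", 4: "quaternary"}

def carbon_class(atom):
    """Classify a carbon by how many other carbons it is bonded to."""
    return NAMES[len(bonds[atom])]

for atom in sorted(bonds):
    print(atom, carbon_class(atom))  # only C4 is a secondary carbon here
```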
The balance of nature, also known as ecological balance, is a theory proposing that ecological systems are usually in a stable equilibrium or homeostasis, which is to say that a small change (in the size of a particular population, for example) will be corrected by some negative feedback that will bring the parameter back to its original "point of balance" with the rest of the system. The balance is sometimes depicted as easily disturbed and delicate, while at other times it is inversely portrayed as powerful enough to correct any imbalances by itself. The concept has been described as "normative", as well as teleological, as it makes a claim about how nature should be: nature is balanced because "it is supposed to be balanced". The theory has been employed to describe how populations depend on each other, for example in predator-prey systems, or in relationships between herbivores and their food source. It is also sometimes applied to the relationship between the Earth's ecosystem, the composition of the atmosphere, and weather. The theory has been discredited by scientists working in ecology, as it has been found that constant disturbances leading to chaotic and dynamic changes are the norm in nature. During the latter half of the 20th century, it was superseded by catastrophe theory, chaos theory, and thermodynamics. Nevertheless, the idea maintains popularity amongst conservationists, environmentalists and the general public. == History of the theory == The concept that nature maintains its condition is of ancient provenance. Herodotus asserted that predators never excessively consume prey populations and described this balance as "wonderful". Two of Plato's dialogues, the Timaeus and the Protagoras, contain myths that support the balance of nature concept. Cicero advanced the theory of "a balance of nature generated by different reproductive rates and traits among species, as well as interactions among species". The balance
of nature concept once ruled ecological research and governed the management of natural resources. This led to a doctrine popular among some conservationists that nature was best left to its own devices, and that human intervention into it was by definition unacceptable. The theory was a central theme of the 1962 book Silent Spring by Rachel Carson, widely considered to be the most important environmental book of the 20th century. The controversial Gaia hypothesis was developed in the 1970s by James Lovelock and Lynn Margulis. It asserts that living beings interact with Earth to form a complex system which self-regulates to maintain the balance of nature. The validity of a balance of nature was already being questioned in the early 1900s, but the general abandonment of the theory by scientists working in ecology only happened in the last quarter of that century, when studies showed that it did not match what could be observed among plant and animal populations. == Predator-prey interactions == Predator-prey populations tend to show chaotic behavior within limits: the sizes of populations change in a way that may appear random but in fact obeys deterministic laws based only on the relationship between a population and its food source, as illustrated by the Lotka–Volterra equations (see the sketch below). An experimental example of this was shown in an eight-year study on small Baltic Sea creatures such as plankton, which were isolated from the rest of the ocean. Each member of the food web was shown to take turns multiplying and declining, even though the scientists kept the outside conditions constant. An article in the journal Nature stated: "Advanced mathematical techniques proved the indisputable presence of chaos in this food web ... short-term prediction is possible, but long-term prediction is not."
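A minimal simulation in the spirit of that result, integrating the Lotka–Volterra equations with a simple forward-Euler step (all parameter values are illustrative, not fitted to any real system):

```python
# Forward-Euler sketch of the Lotka-Volterra predator-prey equations:
#   dx/dt = alpha*x - beta*x*y     (prey)
#   dy/dt = delta*x*y - gamma*y    (predator)
def lotka_volterra(x, y, alpha=1.0, beta=0.1, delta=0.075, gamma=1.5,
                   dt=0.001, steps=20000):
    trajectory = [(x, y)]
    for _ in range(steps):
        dx = (alpha * x - beta * x * y) * dt
        dy = (delta * x * y - gamma * y) * dt
        x, y = x + dx, y + dy
        trajectory.append((x, y))
    return trajectory

# Starting away from the fixed point (x = gamma/delta = 20, y = alpha/beta = 10),
# the populations cycle indefinitely instead of settling into a "balance".
final_prey, final_pred = lotka_volterra(10.0, 5.0)[-1]
print(final_prey, final_pred)
```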
== Human intervention == Although some conservationist organizations argue that human activity is incompatible with a balanced ecosystem, there are numerous examples in history showing that several modern-day habitats originate from human activity: some of Latin America's rain forests owe their existence to humans planting and transplanting them, while the abundance of grazing animals in the Serengeti plain of Africa is thought by some ecologists to be partly due to human-set fires that created savanna habitats. One of the best-known, and often misunderstood, examples of ecosystem balance being enhanced by human activity is the Australian Aboriginal practice of fire-stick farming. This uses low-intensity fire, when there is sufficient humidity to limit its action, to reduce the quantity of ground-level combustible material and so lessen the intensity and devastation of forest fires caused by lightning at the end of the dry season. Several plant species are adapted to fire, some even requiring its extreme heat to germinate their seeds. == Continued popularity of the theory == Despite being discredited among ecologists, the theory is widely held to be true by the general public, conservationists and environmentalists, with one author calling it an "enduring myth". Environmental and conservation organizations such as the WWF, Sierra Club and Canadian Wildlife Federation continue to promote the theory, as do animal rights organizations such as PETA. Kim Cuddington considers the balance of nature to be a "foundational metaphor in ecology", which is still in active use by ecologists. She argues that many ecologists see nature as a "beneficent force" and that they also view the universe as being innately predictable; Cuddington asserts that the balance of nature acts as a "shorthand for the paradigm expressing this worldview". Douglas Allchin and Alexander J. Werth assert that although "ecologists formally eschew the concept of balance of nature, it remains a widely adopted preconception and a feature of language that seems not
to disappear entirely." At least in Midwestern America, the balance of nature idea was shown to be widely held by both science majors and the general student population. In a study at the University of Patras, educational sciences students were asked to reason about the future of ecosystems that had suffered human-driven disturbances. Subjects agreed that it was very likely for the ecosystems to fully recover their initial state, referring either to a 'recovery process' which restores the initial 'balance', or to specific 'recovery mechanisms' as an ecosystem's inherent characteristic. In a 2017 study, Ampatzidis and Ergazaki discuss the learning objectives and design criteria that a learning environment for non-biology-major students should meet to support them in challenging the balance of nature concept. In a 2018 study, the same authors report on the theoretical output of a design research study concerning the design of a learning environment for helping students challenge their beliefs regarding the balance of nature and reach an up-to-date understanding of ecosystems' contingency. == In popular culture == In Ursula K. Le Guin's Earthsea fantasy series, to use magic is to "respect and preserve the immanent metaphysical balance of nature." The balance of nature (referred to as "the circle of life") is a major theme of the 1994 film The Lion King. In one scene, the character Mufasa describes to his son Simba how everything exists in a state of delicate balance. The character Agent Smith, in the 1999 film The Matrix, describes humanity as a virus, claiming that, unlike other mammals, humans fail to reach an equilibrium with their surrounding environment. The disruption of the balance of nature is a common theme in Hayao Miyazaki's films: Nausicaä of the Valley of the Wind, released in 1984, is set in a post-apocalyptic world where humans have upset the balance
of nature through war; the 1997 film Princess Mononoke depicts irresponsible activities by humans as having damaged the balance of nature; and in the 2008 film Ponyo, the titular character disturbs the balance of nature when she seeks to become human. The titular character of the 2014 film Godzilla fights other sea monsters known as "MUTOs" in a bid to restore the balance of nature. In the 2018 film Avengers: Infinity War, the villain Thanos seeks to restore the balance of nature by eliminating half of the beings in the universe. == See also == Ecological footprint Social metabolism
Distance measures are used in physical cosmology to give a natural notion of the distance between two objects or events in the universe. They are often used to tie some observable quantity (such as the luminosity of a distant quasar, the redshift of a distant galaxy, or the angular size of the acoustic peaks in the cosmic microwave background (CMB) power spectrum) to another quantity that is not directly observable, but is more convenient for calculations (such as the comoving coordinates of the quasar, galaxy, etc.). The distance measures discussed here all reduce to the common notion of Euclidean distance at low redshift. In accord with our present understanding of cosmology, these measures are calculated within the context of general relativity, where the Friedmann–Lemaître–Robertson–Walker solution is used to describe the universe. == Overview == There are a few different definitions of "distance" in cosmology, which are all asymptotic to one another for small redshifts. The expressions for these distances are most practical when written as functions of redshift z, since redshift is always the observable. They can also be written as functions of the scale factor a = 1/(1 + z). In the remainder of this article, the peculiar velocity is assumed to be negligible unless specified otherwise. We first give formulas for several distance measures, and then describe them in more detail further down. Defining the "Hubble distance" as
d_H = c / H_0 ≈ 3000 h⁻¹ Mpc ≈ 9.26 × 10²⁵ h⁻¹ m
where c is the speed of light, H_0 is the Hubble parameter today, and h is the dimensionless Hubble constant, all the distances are asymptotic to z · d_H for small z.
According to the Friedmann equations, we also define a dimensionless Hubble parameter:
E(z) = H(z) / H_0 = √( Ω_r (1+z)⁴ + Ω_m (1+z)³ + Ω_k (1+z)² + Ω_Λ )
Here, Ω_r, Ω_m, and Ω_Λ are normalized values of the present radiation energy density, matter density, and "dark energy density", respectively (the latter representing the cosmological constant), and Ω_k = 1 − Ω_r − Ω_m − Ω_Λ determines the curvature. The Hubble parameter at a given redshift is then H(z) = H_0 E(z). The formula for comoving distance, which serves as the basis for most of the other formulas, involves an integral. Although for some limited choices of parameters (see below) the comoving distance integral has a closed analytic form, in general, and specifically for the parameters of our universe, we can only find a solution numerically. Cosmologists commonly use the following measures for distances from the observer to an object at redshift z along the line of sight (LOS):
Comoving distance: d_C(z) = d_H ∫₀^z dz′ / E(z′)
Transverse comoving distance: d_M(z) = (d_H / √Ω_k) sinh( √Ω_k · d_C(z) / d_H ) for Ω_k > 0; d_C(z) for Ω_k = 0; (d_H / √|Ω_k|) sin( √|Ω_k| · d_C(z) / d_H ) for Ω_k < 0
Angular diameter distance: d_A(z) = d_M(z) / (1 + z)
Luminosity distance: d_L(z) = (1 + z) d_M(z)
Light-travel distance: d_T(z) = d_H ∫₀^z dz′ / [ (1 + z′) E(z′) ]
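A minimal numerical sketch of these definitions, assuming SciPy is available; the cosmological parameters below (H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7) are illustrative placeholders, not values taken from the text:

```python
import numpy as np
from scipy.integrate import quad

C = 299792.458                     # speed of light, km/s
H0 = 70.0                          # assumed Hubble constant, km/s/Mpc
OM_R, OM_M, OM_L = 0.0, 0.3, 0.7   # assumed density parameters
OM_K = 1.0 - OM_R - OM_M - OM_L
D_H = C / H0                       # Hubble distance, Mpc

def E(z):
    return np.sqrt(OM_R*(1+z)**4 + OM_M*(1+z)**3 + OM_K*(1+z)**2 + OM_L)

def comoving_distance(z):
    return D_H * quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]

def transverse_comoving_distance(z):
    dc = comoving_distance(z)
    if OM_K > 0:
        return D_H / np.sqrt(OM_K) * np.sinh(np.sqrt(OM_K) * dc / D_H)
    if OM_K < 0:
        return D_H / np.sqrt(-OM_K) * np.sin(np.sqrt(-OM_K) * dc / D_H)
    return dc

def angular_diameter_distance(z):
    return transverse_comoving_distance(z) / (1.0 + z)

def luminosity_distance(z):
    return (1.0 + z) * transverse_comoving_distance(z)

def light_travel_distance(z):
    return D_H * quad(lambda zp: 1.0 / ((1.0 + zp) * E(zp)), 0.0, z)[0]

# All measures agree to first order at low z and diverge at high z;
# e.g. comoving_distance(1.0) is roughly 3.3 Gpc for these parameters.
for z in (0.01, 1.0):
    print(z, comoving_distance(z), angular_diameter_distance(z),
          luminosity_distance(z), light_travel_distance(z))
```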
== Alternative terminology == Peebles calls the transverse comoving distance the "angular size distance", which is not to be mistaken for the angular diameter distance. Occasionally, the symbols χ or r are used to denote both the comoving and the angular diameter distance. Sometimes, the light-travel distance is also called the "lookback distance" and/or "lookback time". == Details == === Peculiar velocity === In real observations, the movement of the Earth with respect to the Hubble flow has an effect on the observed redshift. There are actually two notions of redshift. One is the redshift that would be observed if both the Earth and the object were not moving with respect to the "comoving" surroundings (the Hubble flow), defined by the cosmic microwave background. The other is the actual redshift measured, which depends both on the peculiar velocity of the object observed and on our own peculiar velocity. Since the Solar System is moving at around 370 km/s in a direction between Leo and Crater, this decreases 1 + z for distant objects in that direction by a factor of about
1.0012 and increases it by the same factor for distant objects in the opposite direction. (The speed of the motion of the Earth around the Sun is only 30 km/s.) === Comoving distance === The comoving distance d_C between fundamental observers, i.e. observers that are both moving with the Hubble flow, does not change with time, as comoving distance accounts for the expansion of the universe. Comoving distance is obtained by integrating the proper distances of nearby fundamental observers along the line of sight (LOS), whereas the proper distance is what a measurement at constant cosmic time would yield. In standard cosmology, comoving distance and proper distance are two closely related distance measures used by cosmologists to measure distances between objects; the comoving distance is the proper distance at the present time. The comoving distance (with a small correction for our own motion) is the distance that would be obtained from parallax, because the parallax in degrees equals the ratio of an astronomical unit to the circumference of a circle at the present time going through the Sun and centred on the distant object, multiplied by 360°. However, objects beyond a megaparsec have parallax too small to be measured (the Gaia space telescope measures the parallax of the brightest stars with a precision of 7 microarcseconds), so the parallax of galaxies outside our Local Group is too small to be measured. There is a closed-form expression for the integral in the definition of the comoving distance if Ω_r = Ω_m = 0 or, by substituting the scale factor a for 1/(1 + z), if Ω_Λ = 0. Our universe now seems to be closely represented by
Ω_r = Ω_k = 0. In this case, we have:
d_C(z) = d_H Ω_m^(−1/3) Ω_Λ^(−1/6) [ f( (1 + z) (Ω_m / Ω_Λ)^(1/3) ) − f( (Ω_m / Ω_Λ)^(1/3) ) ]
where
f(x) ≡ ∫₀^x dx / √(x³ + 1)
The comoving distance should be calculated using the value of z that would pertain if neither the object nor we had a peculiar velocity. Together with the scale factor it gives the proper distance of the object at the time when the light we see now was emitted by it and set off on its journey to us: d = a · d_C.
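Continuing the sketch above, this closed-form-style expression reduces the comoving distance to a single universal function f(x); f itself is still evaluated numerically here:

```python
# Flat-universe (Omega_r = Omega_k = 0) comoving distance via
# f(x) = integral_0^x dx / sqrt(x^3 + 1), reusing np, quad, D_H, OM_M, OM_L.
def f(x):
    return quad(lambda t: 1.0 / np.sqrt(t**3 + 1.0), 0.0, x)[0]

def comoving_distance_flat(z, om=OM_M, ol=OM_L):
    ratio = (om / ol) ** (1.0 / 3.0)
    return D_H * om**(-1.0/3.0) * ol**(-1.0/6.0) * (f((1.0 + z) * ratio) - f(ratio))

# Agrees with the direct numerical integral for these parameters.
print(comoving_distance_flat(1.0), comoving_distance(1.0))
```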
=== Proper distance === Proper distance roughly corresponds to where a distant object would be at a specific moment of cosmological time, which can change over time due to the expansion of the universe. Comoving distance factors out the expansion of the universe, giving a distance that does not change in time due to the expansion of space (though this may change due to other, local factors, such as the motion of a galaxy within a cluster); the comoving distance is the proper distance at the present time. === Transverse comoving distance === Two comoving objects at constant redshift z that are separated by an angle δθ on the sky are said to have the distance δθ · d_M(z)
, where the transverse comoving distance d_M is defined appropriately. === Angular diameter distance === An object of size x at redshift z that appears to have angular size δθ has the angular diameter distance d_A(z) = x / δθ. This is commonly used to observe so-called standard rulers, for example in the context of baryon acoustic oscillations. When accounting for the Earth's peculiar velocity, the redshift that would pertain in that case should be used, but d_A should be corrected for the motion of the Solar System by a factor between 0.99867 and 1.00133, depending on the direction. (If one starts to move with velocity v towards an object, at any distance, the angular diameter of that object decreases by a factor of √[ (1 + v/c) / (1 − v/c) ].) === Luminosity distance === If the intrinsic luminosity L of a distant object is known, we can calculate its luminosity distance by measuring the flux S and determining d_L(z) = √( L / (4π S) ), which turns out to be equivalent to the expression above for d_L(z). This quantity is important for measurements of standard candles like Type Ia supernovae, which were first used to discover the acceleration of the expansion of the universe. When accounting for the Earth's peculiar velocity, the redshift that would pertain in that case should be used for d_M, but the factor (1 + z) should use the measured redshift, and another correction should be made for the peculiar velocity of the object by multiplying by √[ (1 + v/c) / (1 − v/c) ], where now v is the component of the object's peculiar velocity away from us. In this way, the luminosity distance will be equal to the angular diameter distance multiplied by (1 + z)², where z is the measured redshift, in accordance with Etherington's reciprocity theorem (see below).
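Reusing the helpers from the sketch above, Etherington's reciprocity relation can be checked directly (peculiar velocities neglected):

```python
# Etherington's reciprocity theorem: d_L = (1 + z)^2 * d_A, which follows
# immediately from d_L = (1+z) d_M and d_A = d_M / (1+z).
z = 1.0
dl = luminosity_distance(z)
da = angular_diameter_distance(z)
print(dl, (1.0 + z)**2 * da)   # identical up to floating-point error
```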
=== Light-travel distance === (also known as "lookback time" or "lookback distance") This distance d_T is the time that it took light to reach the observer from the object, multiplied by the speed of light. For instance, the radius of the observable universe in this distance measure becomes the age of the universe multiplied by the speed of light (1 light-year/year), which turns out to be approximately 13.8 billion light-years. There is a closed-form solution of the light-travel distance if Ω_r = Ω_m = 0, involving the inverse hyperbolic functions arcosh or arsinh (or involving inverse trigonometric functions if the cosmological constant has the other sign). If Ω_r = Ω_Λ = 0, then there is a closed-form solution for d_T(z) but not for z(d_T). Note that the comoving distance is recovered from the transverse comoving distance by taking the limit Ω_k → 0, such that the two distance measures are equivalent in a flat universe. There are websites for calculating light-travel distance from redshift. The age of