id (int64) | url (string) | text (string) | source (string) | categories (string) | token_count (int64) |
|---|---|---|---|---|---|
24,572,347 | https://en.wikipedia.org/wiki/Spinellus%20fusiger | Spinellus fusiger, commonly known as bonnet mold, is a species of fungus in the phylum Mucoromycota. It is a pin mold characterized by erect sporangiophores (specialized hyphae that bear a sporangium) that are simple in structure, brown or yellowish-brown in color, with branched aerial filaments that bear the zygospores. It grows as a parasitic mold on mushrooms, including several species of the genus Mycena, among them M. haematopus, M. pura, M. epipterygia, and M. leptocephala, and various Collybia species, such as C. alkalivirens, C. luteifolia, C. dryophila, and C. butyracea. It has also been found growing on agaric species in Amanita, Gymnopus, and Hygrophorus.
Taxonomy
The species was first described by German naturalist Christian Gottfried Ehrenberg in 1818 as Mucor rhombosporus, but he later conceded that he had made a mistake in examining the spores. Link later suggested the name Mucor fusiger for the species, and it has been known under a variety of names, such as Mucor macrocarpus, Phycomyces agaricicola, Spinellus macrocarpus, and Spinellus rhombosporus. It was assigned its current name by French botanist Philippe Édouard Léon Van Tieghem in 1875.
Description
During the reproductive phase of its life cycle, Spinellus fusiger grows throughout the cap of the mushroom host, eventually breaking through to produce radiating reproductive stalks (sporangiophores) bearing minute, spherical, terminal spore-containing structures called sporangia. Ultimately, the spores in the sporangia are released after the breakdown of the outer sporangial wall, becoming passively dispersed to new locations via wind, water, and insects. The sporangia contain non-motile mitospores known as aplanospores. Like other Spinellus species, S. fusiger is homothallic, and sexual spores known as zygospores are produced following the union of branches called gametangia that arise from the same mycelium.
References
External links
Fungi of Poland Picture of spores
Fungi described in 1824
Zygomycota
Taxa named by Johann Heinrich Friedrich Link
Fungus species | Spinellus fusiger | Biology | 502 |
23,442,246 | https://en.wikipedia.org/wiki/Channelome | The channelome, sometimes called the "ion channelome", is the complete set of ion channels and porins expressed in a biological tissue or organism. It is analogous to the genome, the metabolome (describing metabolites), the proteome (describing general protein expression), and the microbiome. Characterization of the ion channelome, referred to as channelomics, is a branch of physiology, biophysics, neuroscience, and pharmacology, with particular attention paid to gene expression. It can be performed by a variety of techniques, including patch clamp electrophysiology, PCR, and immunohistochemistry. Channelomics is being used to screen and discover new medicines.
Functional studies
Structure and function of membrane channels are closely linked; perhaps the most famous work studying the structure of ion channels is the paper by Doyle et al. (1998), which led to the Nobel Prize in Chemistry for Roderick MacKinnon. Abnormalities of channel structure consequently result in physiological dysfunction. Channelomic studies include the systematic study of diseases resulting from such dysfunctions; such a disease is termed a channelopathy. In addition, channelomic studies screen potential drugs for their effectiveness against channelopathies by examining the binding affinities of candidate drug compounds.
References
Membrane biology
Ion channels
Channelopathies
Electrophysiology
Integral membrane proteins | Channelome | Chemistry | 281 |
23,802,570 | https://en.wikipedia.org/wiki/Ak%20singularity | In mathematics, and in particular singularity theory, an $A_k$ singularity, where $k \ge 0$ is an integer, describes a level of degeneracy of a function. The notation was introduced by V. I. Arnold.
Let $f \colon \mathbb{R} \to \mathbb{R}$ be a smooth function. We denote by $\Omega(\mathbb{R},\mathbb{R})$ the infinite-dimensional space of all such functions. Let $\operatorname{diff}(\mathbb{R})$ denote the infinite-dimensional Lie group of diffeomorphisms of the source $\mathbb{R}$, and likewise for the target $\mathbb{R}$. The product group $\operatorname{diff}(\mathbb{R}) \times \operatorname{diff}(\mathbb{R})$ acts on $\Omega(\mathbb{R},\mathbb{R})$ in the following way: let $\varphi$ and $\psi$ be diffeomorphisms and $f$ any smooth function. We define the group action as follows:
$$(\varphi,\psi)\cdot f := \psi \circ f \circ \varphi^{-1}.$$
The orbit of $f$, denoted $\operatorname{orb}(f)$, of this group action is given by
$$\operatorname{orb}(f) = \{\, \psi \circ f \circ \varphi^{-1} : \varphi, \psi \in \operatorname{diff}(\mathbb{R}) \,\}.$$
The members of a given orbit of this action have the following fact in common: we can find a diffeomorphic change of coordinate in the source and a diffeomorphic change of coordinate in the target such that one member of the orbit is carried to any other. A function $f$ is said to have a type $A_k$-singularity if it lies in the orbit of
$$f(x) = \pm x^{k+1},$$
where $k \ge 1$ is an integer.
By a normal form we mean a particularly simple representative of any given orbit. The above expressions for $f$ give normal forms for the type $A_k$-singularities. The type $A_k$-singularities are special because they are amongst the simple singularities; this means that there are only a finite number of other orbits in a sufficiently small neighbourhood of the orbit of $f$.
This idea extends over the complex numbers, where the normal forms are much simpler; for example, there is no need to distinguish $+x^{k+1}$ from $-x^{k+1}$.
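In one variable, membership in these orbits is detected by a simple derivative test: $f$ has a type $A_k$-singularity at the origin precisely when its first $k$ derivatives vanish there while the $(k+1)$-st does not. A minimal SymPy sketch of this test (the helper `classify_Ak` is illustrative, not a standard library routine):

```python
import sympy as sp

x = sp.symbols('x')

def classify_Ak(f, max_k=10):
    """Return k if f has a type A_k singularity at x = 0.

    Uses the one-variable criterion: the first k derivatives vanish at 0
    while the (k+1)-st does not.  Returns 0 for a regular (non-singular)
    point, or None if the degeneracy exceeds max_k.
    """
    deriv = f
    for k in range(max_k + 1):
        deriv = sp.diff(deriv, x)
        if sp.simplify(deriv.subs(x, 0)) != 0:
            return k
    return None

print(classify_Ak(x**2))           # 1: a Morse point, type A_1
print(classify_Ak(x**3 - 5*x**4))  # 2: equivalent to the normal form x^3
print(classify_Ak(sp.cos(x)))      # 1: cos x = 1 - x^2/2 + ..., a maximum
```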
References
Singularity theory | Ak singularity | Mathematics | 307 |
24,003,670 | https://en.wikipedia.org/wiki/C8H12N2O2 | The molecular formula C8H12N2O2 (molar mass: 168.193 g/mol) may refer to:
Hexamethylene diisocyanate
Pyridoxamine
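As a quick check, the quoted molar mass can be recomputed from standard atomic weights (a minimal sketch; the weights below are rounded IUPAC values):

```python
# Recompute the molar mass of C8H12N2O2 from standard atomic weights (g/mol).
weights = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
formula = {"C": 8, "H": 12, "N": 2, "O": 2}
molar_mass = sum(weights[el] * n for el, n in formula.items())
print(f"{molar_mass:.3f} g/mol")  # 168.196, matching 168.193 up to rounding
```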
Molecular formulas | C8H12N2O2 | Physics,Chemistry | 63 |
8,895,933 | https://en.wikipedia.org/wiki/ReBoot%3A%20My%20Two%20Bobs | ReBoot: My Two Bobs is a 2001 Canadian television film, based on the series ReBoot, that continues the events set in motion by the cliffhanger ending of Daemon Rising. Along with Daemon Rising, the two films are considered the fourth season. It was originally broadcast in Canada as a film, but was later rebroadcast as four individual episodes, titled "My Two Bobs", "Life's a Glitch", "Null-Bot of the Bride", and "Crouching Binome, Hidden Virus". It was released on DVD along with Daemon Rising.
Plot
At the end of Daemon Rising, Bob and Dot got engaged. To the confusion of everyone, however, a portal then opened from the web, and Ray Tracer and another Bob stepped through it. Because the second Bob looks like the original from Seasons 1 and 2, Dot calls him Bob, and calls the Bob who merged with his keytool Glitch Bob. Most of My Two Bobs is taken up by the efforts of Dot, the two Bobs themselves, and the other Mainframers to ascertain which Bob is the original and which is the copy, and to come to terms with the situation in general.
Because Bob can reboot and Glitch Bob can't, Bob spends much of the first half of the film bonding with Matrix and the others by helping them win games. After some counseling from Phong and Mouse, Dot decides to marry the new Bob, whereupon Glitch Bob — the nominal original — earnestly attempts to return himself to his original form in order to win Dot back. His efforts ultimately fail and leave him in a catatonic state, covered in a dark, starry crystal that proves to be impenetrable. Dot continues with her wedding plans as Glitch Bob is treated at the Supercomputer. She laments that her father Welman is too nullified to attend, but when the infection in Enzo's icon is transferred to Welman, he becomes intelligible enough to walk her down the aisle in a mechanical suit.
Glitch Bob's condition steadily worsens on Dot's wedding day. The impenetrable starry substance covering him gradually dims completely, and the Guardians believe that they have lost him. This moment of crisis prompts all of the other keytools (which had disconnected from the Guardians when they were infected by Daemon) to return to the Supercomputer to separate Glitch from Bob and revive him, before returning to the Guardians. The Guardians discover that this Bob's web-degraded code no longer matches what they have on file, suggesting that he is in fact the copy.
Web Bob returns to Mainframe to stop the wedding, but Dot rejects him in favor of the new Bob. Even Glitch seems to leave him for the new Bob, leading everyone to believe that Web Bob is the copy. Just as Web Bob starts to leave in despair, Glitch steals some code from the groom and gives it to Web Bob, which restores his body to its original form. The loss of that code causes the Bob Dot was marrying to shapeshift, revealing a terrible truth: Web Bob was the original, while the new Bob had been Megabyte in disguise. Bob engages Megabyte in a spectacular battle in the church, but Megabyte escapes by disguising himself as a Binome.
An investigation reveals that Megabyte has become a Trojan Horse virus, which gives him the power to shapeshift and effectively disguise himself as anyone. It is also revealed that Megabyte had inadvertently stolen part of Bob's Guardian code when he crushed Glitch at the end of Season 2, and he used that code to impersonate him until Glitch returned it to the real Bob during the wedding. Meanwhile, Megabyte starts disguising himself as other Mainframers, including Mike the TV, and reassembles his viral army. Megabyte eludes capture by using various aliases and a doppelgänger and ultimately infiltrates the war room by taking on the form of Frisket. After suborning various personnel, including Dot's father, and capturing Enzo, Megabyte gains "complete control" of the Principal Office. The film ends with him proclaiming that he will now follow his predatory virus nature; he is no longer out to take over Mainframe again or even the Supercomputer, he just wants revenge on the Mainframers. His last words, which are the final words of the series, are "Prepare yourselves... for the hunt!"
Cast
Kathleen Barr: Dot Matrix
Michael Benyaer: Bob
Garry Chalk: Slash
Michael Donovan: Mike the TV / Phong
Paul Dobson: Matrix (adult Enzo)
Christopher Gray: Enzo Matrix
Tony Jay: Megabyte
Scott McNeil: Hack
Shirley Millner: Hexadecimal
References
External links
2001 television films
Canadian animated television films
2001 computer-animated films
ReBoot
Films about computing
Films set in computers
Cyberpunk films
Mainframe Studios films
Films based on television series
Television films based on television series
2001 films
2000s Canadian animated films | ReBoot: My Two Bobs | Technology | 1,062 |
11,503,447 | https://en.wikipedia.org/wiki/Immunolabeling | Immunolabeling is a biochemical process that enables the detection and localization of an antigen to a particular site within a cell, tissue, or organ. Antigens are organic molecules, usually proteins, capable of binding to an antibody. These antigens can be visualized using a combination of antigen-specific antibody as well as a means of detection, called a tag, that is covalently linked to the antibody. If the immunolabeling process is meant to reveal information about a cell or its substructures, the process is called immunocytochemistry. Immunolabeling of larger structures is called immunohistochemistry.
There are two complex steps in the manufacture of antibody for immunolabeling. The first is producing the antibody that binds specifically to the antigen of interest and the second is fusing the tag to the antibody. Since it is impractical to fuse a tag to every conceivable antigen-specific antibody, most immunolabeling processes use an indirect method of detection. This indirect method employs a primary antibody that is antigen-specific and a secondary antibody fused to a tag that specifically binds the primary antibody. This indirect approach permits mass production of secondary antibody that can be bought off the shelf. Pursuant to this indirect method, the primary antibody is added to the test system. The primary antibody seeks out and binds to the target antigen. The tagged secondary antibody, designed to attach exclusively to the primary antibody, is subsequently added.
Typical tags include: a fluorescent compound, gold beads, a particular epitope tag, or an enzyme that produces a colored compound. The association of the tags to the target via the antibodies provides for the identification and visualization of the antigen of interest in its native location in the tissue, such as the cell membrane, cytoplasm, or nuclear membrane. Under certain conditions the method can be adapted to provide quantitative information.
Immunolabeling can be used in pharmacology, molecular biology, biochemistry and any other field where it is important to know of the precise location of an antibody-bindable molecule.
Indirect vs. direct method
There are two methods involved in immunolabeling, the direct and the indirect methods. In the direct method of immunolabeling, the primary antibody is conjugated directly to the tag. The direct method is useful in minimizing cross-reaction, a measure of nonspecificity that is inherent in all antibodies and that is multiplied with each additional antibody used to detect an antigen. However, the direct method is far less practical than the indirect method, and is not commonly used in laboratories, since the primary antibodies must be covalently labeled, which requires an abundant supply of purified antibody. Also, the direct method is potentially far less sensitive than the indirect method. Since several secondary antibodies are capable of binding to different parts, or domains, of a single primary antibody binding the target antigen, there is more tagged antibody associated with each antigen. More tag per antigen results in more signal per antigen.
Different indirect methods can be employed to achieve high degrees of specificity and sensitivity. First, two-step protocols are often used to avoid cross-reaction between the multiple primary and secondary antibody mixtures used in immunolabeling; secondary-antibody Fab fragments are frequently used here. Second, haptenylated primary antibodies can be used, where the secondary antibody recognizes the associated hapten. The hapten is covalently linked to the primary antibody by succinimidyl esters or conjugated via IgG Fc-specific Fab fragments. Lastly, primary monoclonal antibodies of different Ig isotypes can be detected by secondary antibodies specific for the isotype of interest.
Antibody binding and specificity
Overall, antibodies must bind to the antigens with a high specificity and affinity. The specificity of the binding refers to an antibody's capacity to bind one and only one target antigen. Scientists commonly use monoclonal antibodies and polyclonal antibodies, which are often raised against synthetic peptides. During the manufacture of these antibodies, antigen-specific antibodies are sequestered by attaching the antigenic peptide to an affinity column and allowing nonspecific antibody to simply pass through the column. This decreases the likelihood that the antibodies will bind to an unwanted epitope of the antigen not found on the initial peptide. Hence, the specificity of the antibody is established by its specific reaction with the protein or peptide used for immunization, as verified by methods such as immunoblotting or immunoprecipitation.
In establishing the specificity of antibodies, the key factor is the type of synthetic peptides or purified proteins being used. The lower the specificity of the antibody, the greater the chance of visualizing something other than the target antigen. In the case of synthetic peptides, the advantage is that the amino acid sequence is easily accessible, but the peptides do not always resemble the 3-D structure or post-translational modifications found in the native form of the protein. Therefore, antibodies that are produced to work against a synthetic peptide may have problems with the native 3-D protein. These types of antibodies would lead to poor results in immunoprecipitation or immunohistochemistry experiments, yet may still be capable of binding to the denatured form of the protein during an immunoblotting run. Conversely, if the antibody works well for purified proteins in their native form but not denatured, an immunoblot cannot be used as a standardized test to determine the specificity of the antibody binding, particularly in immunohistochemistry.
Specific immunolabeling techniques
Immunolabeling for light microscopy
Light microscopy is the use of a light microscope, an instrument that uses visible light to view a magnified specimen. In general, a compound light microscope is frequently used, where two lenses, the eyepiece and the objective, work simultaneously to generate the magnification of the specimen. Light microscopy frequently uses immunolabeling to observe targeted tissues or cells. For instance, a study was conducted to view the morphology and the production of hormones in pituitary adenoma cell cultures via light microscopy and other electron microscopic methods. This type of microscopy confirmed that the primary adenoma cell cultures keep their physiological characteristics in vitro, which matched the histology inspection. Moreover, cell cultures of human pituitary adenomas were viewed by light microscopy and immunocytochemistry, where these cells were fixed and immunolabeled with a monoclonal mouse antibody against human GH and a polyclonal rabbit antibody against PRL. This is an example of how an immunolabeled cell culture of pituitary adenoma cells, viewed via light microscopy and by other electron microscopy techniques, can assist with the proper diagnosis of tumors.
Immunolabeling for electron microscopy
Electron microscopy (EM) is a focused area of science that uses the electron microscope as a tool for viewing tissues. Electron microscopy has a magnification level up to 2 million times, whereas light microscopy only has a magnification up to 1000-2000 times. There are two types of electron microscopes, the transmission electron microscope and the scanning electron microscope.
Electron microscopy is a common method that uses the immunolabeling technique to view tagged tissues or cells. The electron microscope method follows many of the same concepts as immunolabeling for light microscopy, where the particular antibody is able to recognize the location of the antigen of interest and then be viewed by the electron microscope. The advantage of electron microscopy over light microscopy is the ability to view the targeted areas at their subcellular level. Generally, a heavy metal that is electron dense is used for EM, which can reflect the incident electrons. Immunolabeling is typically confirmed using the light microscope to assure the presence of the antigen and then followed up with the electron microscope.
Immunolabeling and electron microscopy are often used to view chromosomes. A study was conducted to explore possible improvements in immunolabeling chromosome structures, such as topoisomerase IIα and condensin, in dissected mitotic chromosomes. In particular, these investigators showed how UV irradiation of isolated nuclei or chromosomes can assist in achieving high levels of specific immunolabeling, which was assessed by electron microscopy.
Immunolabeling for transmission electron microscopy
Transmission electron microscopy (TEM) uses a transmission electron microscope to form a two-dimensional image by shooting electrons through a thin piece of tissue. The brighter certain areas are on the image, the more electrons that are able to move through the specimen. Transmission Electron Microscopy has been used as a way to view immunolabeled tissues and cells. For instance, bacteria can be viewed by TEM when immunolabeling is applied. A study was conducted to examine the structures of CS3 and CS6 fimbriae in different Escherichia coli strains, which were detected by TEM followed by negative staining, and immunolabeling. More specifically, immunolabeling of the fimbriae confirmed the existence of different surface antigens.
Immunolabeling for scanning electron microscopy
Scanning electron microscopy (SEM) uses a scanning electron microscope, which produces large images that are perceived as three-dimensional when, in fact, they are not. This type of microscope concentrates a beam of electrons across a very small area (2-3 nm) of the specimen in order to produce electrons from said specimen. These secondary electrons are detected by a sensor, and the image of the specimen is generated over a certain time period.
Scanning electron microscopy is a frequently used immunolabeling technique. SEM is able to detect the surface of cellular components in high resolution. This immunolabeling technique is very similar to the immunofluorescence method, but a colloidal gold tag is used instead of a fluorophore. Overall, the concepts are very parallel in that an unconjugated primary antibody is used and sequentially followed by a tagged secondary antibody that works against the primary antibody. Sometimes SEM in conjunction with gold-particle immunolabeling is troublesome with regard to resolving the particles and to charging under the electron beam; however, this setback has been addressed by improvements in SEM instrumentation, such as backscattered electron imaging. This is because electron backscattered diffraction patterns provide a clean surface of the sample to interact with the primary electron beam.
Immunolabeling with gold (Immunogold Labeling)
Immunolabeling with gold particles, also known as immunogold staining, is used regularly with scanning electron microscopy and transmission electron microscopy to successfully identify the area within cells and tissues where antigens are located. The gold particle labeling technique was first published by Faulk, W. and Taylor, G. when they were able to attach gold particles to anti-Salmonella rabbit gamma globulins in one step in order to identify the location of Salmonella antigens.
Studies have shown that the size of the gold particle must be enlarged (>40 nm) to view the cells at low magnification, but gold particles that are too large can decrease the efficiency of binding of the gold tag. Scientists have therefore concluded that smaller gold particles (1-5 nm) should be used and then enlarged and enhanced with silver. Although osmium tetroxide staining can scratch the silver, gold particle enhancement was found not to be susceptible to scratching by osmium tetroxide staining; therefore, many cell adhesion studies of different substrates can use the immunogold labeling mechanism via the enhancement of the gold particles.
Further applications
Research has been conducted to test the compatibility of immunolabeling with fingerprints. Sometimes, fingerprints are not clear enough to recognize the ridge pattern. Immunolabeling may be a way for forensic personnel to narrow down who left the print. Researchers conducted a study which tested the compatibility of immunolabeling with many developmental techniques for fingerprints. They found that indanedione-zinc (IND-ZnCl), IND-ZnCl followed by ninhydrin spraying (IND-NIN), physical developer (PD), cyanoacrylate fuming (CA), cyanoacrylate followed by basic yellow staining (CA-BY), lumicyanoacrylate fuming (Lumi-CA) and polycyanoacrylate fuming (Poly-CA) were all compatible with immunolabeling. Immunolabeling can not only extract donor profiling information from fingerprints, but can also enhance the quality of the fingerprints, both of which would be beneficial in a forensic case.
References
External links
Labeling Procedures
Molecular cross-talk between the transcription, translation, and nonsense-mediated decay machineries
Nanoprobes Technical Help: Successful EM Immunolabeling
Immunolabeling as a tool for understanding the spatial distribution of fiber wall components and their biosynthetic enzymes
Medical diagnosis
Immunologic tests | Immunolabeling | Biology | 2,702 |
4,157,567 | https://en.wikipedia.org/wiki/Small%20snub%20icosicosidodecahedron | In geometry, the small snub icosicosidodecahedron or snub disicosidodecahedron is a uniform star polyhedron, indexed as U32. It has 112 faces (100 triangles and 12 pentagrams), 180 edges, and 60 vertices. Its stellation core is a truncated pentakis dodecahedron. It is also called a holosnub icosahedron.
The 40 non-snub triangular faces form 20 coplanar pairs, forming star hexagons that are not quite regular. Unlike most snub polyhedra, it has reflection symmetries.
Convex hull
Its convex hull is a nonuniform truncated icosahedron.
Cartesian coordinates
Let $\xi$ be the largest (least negative) zero of the polynomial $x^3 + 2x^2 - \varphi^{-2}$, where $\varphi$ is the golden ratio, and let the point $p$ be the point whose coordinates are given by certain algebraic expressions in $\xi$ and $\varphi$. Let the matrix $M$ be given by
$$M = \begin{pmatrix} 1/2 & -\varphi/2 & 1/(2\varphi) \\ \varphi/2 & 1/(2\varphi) & -1/2 \\ 1/(2\varphi) & 1/2 & \varphi/2 \end{pmatrix}.$$
$M$ is the rotation around the axis $(1, 0, \varphi)$ by an angle of $2\pi/5$, counterclockwise. Let the linear transformations $T_0, \ldots, T_{11}$
be the transformations which send a point $(x, y, z)$ to the even permutations of $(\pm x, \pm y, \pm z)$ with an even number of minus signs.
The transformations $T_i$ constitute the group of rotational symmetries of a regular tetrahedron.
The transformations $T_i M^j$ ($i = 0, \ldots, 11$; $j = 0, \ldots, 4$) constitute the group of rotational symmetries of a regular icosahedron.
Then the 60 points $T_i M^j p$ are the vertices of a small snub icosicosidodecahedron. The edge length, the circumradius, and the midradius are all algebraic expressions in $\xi$ and $\varphi$.
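The 60-point orbit is easy to generate numerically. Below is a minimal NumPy sketch; the seed point `p` is an arbitrary placeholder rather than the specific point defined above, so the output illustrates the orbit construction $T_i M^j p$ rather than the actual vertex coordinates.

```python
import numpy as np
from itertools import product

phi = (1 + np.sqrt(5)) / 2

# The rotation M from the text: a rotation by 2*pi/5 about the axis (1, 0, phi).
M = np.array([[1/2,       -phi/2,     1/(2*phi)],
              [phi/2,      1/(2*phi), -1/2     ],
              [1/(2*phi),  1/2,        phi/2   ]])
assert np.allclose(M @ M.T, np.eye(3))                    # M is orthogonal
assert np.isclose(np.trace(M), 1 + 2*np.cos(2*np.pi/5))   # rotation angle 2*pi/5

# The 12 transformations T_i: cyclic (even) permutations of the coordinates
# combined with an even number of sign changes -- the rotation group of a
# regular tetrahedron.
perms = [np.eye(3), np.eye(3)[[1, 2, 0]], np.eye(3)[[2, 0, 1]]]
signs = [np.diag(s) for s in product([1, -1], repeat=3) if np.prod(s) == 1]
T = [P @ S for P in perms for S in signs]
assert len(T) == 12

p = np.array([0.1, 0.2, 0.3])   # placeholder seed point, NOT the true p
orbit = [Ti @ np.linalg.matrix_power(M, j) @ p for Ti in T for j in range(5)]
print(len(orbit))               # 60 points T_i M^j p
```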
For a small snub icosicosidodecahedron whose edge length is 1, the circumradius and midradius are obtained by rescaling the corresponding expressions in $\xi$.
The other zero of the polynomial plays a similar role in the description of the small retrosnub icosicosidodecahedron.
See also
List of uniform polyhedra
Small retrosnub icosicosidodecahedron
External links
Uniform polyhedra | Small snub icosicosidodecahedron | Physics | 382 |
9,080,562 | https://en.wikipedia.org/wiki/250%20nm%20process | The 250 nm process (250 nanometer process or 0.25 μm process) is a level of semiconductor process technology that was reached by most manufacturers in the 1997–1998 timeframe.
Products featuring 250 nm manufacturing process
The DEC Alpha 21264A, which was made commercially available in 1999.
The AMD K6-2 Chomper and Chomper Extended. Chomper was released on May 28, 1998.
The AMD K6-III "Sharptooth" used 250 nm.
The mobile Pentium MMX Tillamook, released in August 1997.
The Pentium II Deschutes.
The Pentium III Katmai.
The Dreamcast's CPU and GPU.
The initial version of the Emotion Engine processor used in the PlayStation 2.
Computer-related introductions in 1998 | 250 nm process | Materials_science | 167 |
152,759 | https://en.wikipedia.org/wiki/Hilbert%27s%20second%20problem | In mathematics, Hilbert's second problem was posed by David Hilbert in 1900 as one of his 23 problems. It asks for a proof that arithmetic is consistent – free of any internal contradictions. Hilbert stated that the axioms he considered for arithmetic were the ones he had presented in an earlier publication, which include a second order completeness axiom.
In the 1930s, Kurt Gödel and Gerhard Gentzen proved results that cast new light on the problem. Some feel that Gödel's theorems give a negative solution to the problem, while others consider Gentzen's proof as a partial positive solution.
Hilbert's problem and its interpretation
In one English translation, Hilbert asks:
"When we are engaged in investigating the foundations of a science, we must set up a system of axioms which contains an exact and complete description of the relations subsisting between the elementary ideas of that science. ... But above all I wish to designate the following as the most important among the numerous questions which can be asked with regard to the axioms: To prove that they are not contradictory, that is, that a definite number of logical steps based upon them can never lead to contradictory results. In geometry, the proof of the compatibility of the axioms can be effected by constructing a suitable field of numbers, such that analogous relations between the numbers of this field correspond to the geometrical axioms. ... On the other hand a direct method is needed for the proof of the compatibility of the arithmetical axioms."
Hilbert's statement is sometimes misunderstood, because by the "arithmetical axioms" he did not mean a system equivalent to Peano arithmetic, but a stronger system with a second-order completeness axiom. The system Hilbert asked for a consistency proof of is more like second-order arithmetic than first-order Peano arithmetic.
As a nowadays common interpretation, a positive solution to Hilbert's second question would in particular provide a proof that Peano arithmetic is consistent.
There are many known proofs that Peano arithmetic is consistent that can be carried out in strong systems such as Zermelo–Fraenkel set theory. These do not provide a resolution to Hilbert's second question, however, because someone who doubts the consistency of Peano arithmetic is unlikely to accept the axioms of set theory (which are much stronger) to prove its consistency. Thus a satisfactory answer to Hilbert's problem must be carried out using principles that would be acceptable to someone who does not already believe PA is consistent. Such principles are often called finitistic because they are completely constructive and do not presuppose a completed infinity of natural numbers. Gödel's second incompleteness theorem (see Gödel's incompleteness theorems) places a severe limit on how weak a finitistic system can be while still proving the consistency of Peano arithmetic.
Gödel's incompleteness theorem
Gödel's second incompleteness theorem shows that it is not possible for any proof that Peano arithmetic is consistent to be carried out within Peano arithmetic itself. This theorem shows that if the only acceptable proof procedures are those that can be formalized within arithmetic then Hilbert's call for a consistency proof cannot be answered. However, as Nagel and Newman explain, there is still room for a proof that cannot be formalized in arithmetic:
"This imposing result of Godel's analysis should not be misunderstood: it does not exclude a meta-mathematical proof of the consistency of arithmetic. What it excludes is a proof of consistency that can be mirrored by the formal deductions of arithmetic. Meta-mathematical proofs of the consistency of arithmetic have, in fact, been constructed, notably by Gerhard Gentzen, a member of the Hilbert school, in 1936, and by others since then. ... But these meta-mathematical proofs cannot be represented within the arithmetical calculus; and, since they are not finitistic, they do not achieve the proclaimed objectives of Hilbert's original program. ... The possibility of constructing a finitistic absolute proof of consistency for arithmetic is not excluded by Gödel’s results. Gödel showed that no such proof is possible that can be represented within arithmetic. His argument does not eliminate the possibility of strictly finitistic proofs that cannot be represented within arithmetic. But no one today appears to have a clear idea of what a finitistic proof would be like that is not capable of formulation within arithmetic."
Gentzen's consistency proof
In 1936, Gentzen published a proof that Peano Arithmetic is consistent. Gentzen's result shows that a consistency proof can be obtained in a system that is much weaker than set theory.
Gentzen's proof proceeds by assigning to each proof in Peano arithmetic an ordinal number, based on the structure of the proof, with each of these ordinals less than ε₀. He then proves by transfinite induction on these ordinals that no proof can conclude in a contradiction. The method used in this proof can also be used to prove a cut elimination result for Peano arithmetic in a stronger logic than first-order logic, but the consistency proof itself can be carried out in ordinary first-order logic using the axioms of primitive recursive arithmetic and a transfinite induction principle. A game-theoretic interpretation of Gentzen's method has also been given.
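The ordinal ε₀ bounding Gentzen's proof ordinals has a standard characterisation: it is the least fixed point of ordinal exponentiation with base ω,

$$\varepsilon_0 = \sup\{\omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \ldots\}, \qquad \omega^{\varepsilon_0} = \varepsilon_0.$$

Transfinite induction up to any fixed ordinal below ε₀ can be proved within Peano arithmetic; it is induction up to ε₀ itself that Peano arithmetic cannot prove, which is why Gentzen's argument cannot be formalized in Peano arithmetic and does not contradict Gödel's second incompleteness theorem.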
Gentzen's consistency proof initiated the program of ordinal analysis in proof theory. In this program, formal theories of arithmetic or set theory are assigned ordinal numbers that measure the consistency strength of the theories. A theory will be unable to prove the consistency of another theory with a higher proof theoretic ordinal.
Modern viewpoints on the status of the problem
While the theorems of Gödel and Gentzen are now well understood by the mathematical logic community, no consensus has formed on whether (or in what way) these theorems answer Hilbert's second problem. Some logicians argue that Gödel's incompleteness theorem shows that it is not possible to produce finitistic consistency proofs of strong theories. Others state that although Gödel's results imply that no finitistic syntactic consistency proof can be obtained, semantic (in particular, second-order) arguments can be used to give convincing consistency proofs. Still others argue that Gödel's theorem does not prevent a consistency proof, because its hypotheses might not apply to all the systems in which a consistency proof could be carried out. Yet others call the belief that Gödel's theorem eliminates the possibility of a persuasive consistency proof "erroneous", citing the consistency proof given by Gentzen and a later one given by Gödel in 1958.
See also
Takeuti conjecture
Notes
References
External links
Original text of Hilbert's talk, in German
English translation of Hilbert's 1900 address
| Hilbert's second problem | Mathematics | 1,396 |
11,944,410 | https://en.wikipedia.org/wiki/Trimaximal%20mixing | Trimaximal mixing (also known as threefold maximal mixing) refers to the highly symmetric, maximally CP-violating, fermion mixing configuration, characterised by a unitary matrix $U$ having all its elements equal in modulus
($|U_{\alpha i}| = 1/\sqrt{3}$, $\alpha, i = 1, 2, 3$) as may be written, e.g.:
$$U = \frac{1}{\sqrt{3}} \begin{pmatrix} 1 & 1 & 1 \\ 1 & \omega & \bar{\omega} \\ 1 & \bar{\omega} & \omega \end{pmatrix}$$
where $\omega = \exp(2\pi i/3)$ and $\bar{\omega} = \exp(-2\pi i/3)$
are the complex cube roots of unity. In the standard PDG convention, trimaximal mixing corresponds to: $\theta_{12} = \theta_{23} = 45^{\circ}$, $\sin\theta_{13} = 1/\sqrt{3}$ and $|\delta| = 90^{\circ}$. The Jarlskog $CP$-violating parameter $J$ takes its extremal value $|J| = 1/(6\sqrt{3})$.
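These properties are easy to verify numerically. A minimal sketch, using the standard quartet definition of the Jarlskog invariant, $J = \operatorname{Im}(U_{e1} U_{\mu 2} U_{e2}^{*} U_{\mu 1}^{*})$:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)          # complex cube root of unity
U = np.array([[1, 1,               1],
              [1, w,               w.conjugate()],
              [1, w.conjugate(),   w]]) / np.sqrt(3)

print(np.allclose(U @ U.conj().T, np.eye(3)))   # True: U is unitary
J = np.imag(U[0, 0] * U[1, 1] * np.conj(U[0, 1]) * np.conj(U[1, 0]))
print(J, 1 / (6 * np.sqrt(3)))                  # both ~0.0962 = 1/(6*sqrt(3))
```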
Originally proposed as a candidate lepton mixing matrix, and actively studied as such (and even as a candidate quark mixing matrix), trimaximal mixing is now definitively ruled-out as a phenomenologically viable lepton mixing scheme by neutrino oscillation experiments, especially the Chooz reactor experiment, in favour of the no longer tenable (related) tribimaximal mixing scheme.
References
Leptons
Standard Model
Particle physics
Neutrinos | Trimaximal mixing | Physics | 203 |
23,980,762 | https://en.wikipedia.org/wiki/C19H21NS | The molecular formula C19H21NS (molar mass: 295.44 g/mol, exact mass: 295.1395 u) may refer to:
Dosulepin, also known as dothiepin
McN5652
Pizotifen, or pizotyline | C19H21NS | Chemistry | 74 |
33,092,714 | https://en.wikipedia.org/wiki/Energies%20%28journal%29 | Energies is a biweekly peer-reviewed open-access scientific journal. It was established in 2008 and is published by MDPI. The editor-in-chief is Enrico Sciubba (Sapienza University of Rome). The journal publishes original papers, review articles, technical notes, and letters to the editor. It concentrates on scientific research, technology, engineering, and management in relation to the field of energy supply, conversion, dispatch, and final usage.
The journal occasionally publishes special issues on specific topics.
Abstracting and indexing
The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, it has a 2020 impact factor of 3.004.
References
External links
Energy and fuel journals
English-language journals
Monthly journals
MDPI academic journals
Open access journals
Academic journals established in 2008 | Energies (journal) | Environmental_science | 166 |
21,987,832 | https://en.wikipedia.org/wiki/Jurkat%E2%80%93Richert%20theorem | The Jurkat–Richert theorem is a mathematical theorem in sieve theory. It is a key ingredient in proofs of Chen's theorem on Goldbach's conjecture.
It was proved in 1965 by Wolfgang B. Jurkat and Hans-Egon Richert.
Statement of the theorem
This formulation is from Diamond & Halberstam.
Other formulations are in Jurkat & Richert, Halberstam & Richert,
and Nathanson.
Suppose $A$ is a finite sequence of integers and $P$ is a set of primes. Write $A_d$ for the number of items in $A$ that are divisible by $d$, and write $P(z)$ for the product of the elements in $P$ that are less than $z$. Write $\omega(d)$ for a multiplicative function such that $\omega(p)/p$ is approximately the proportion of elements of $A$ divisible by $p$, write $X$ for any convenient approximation to $|A|$, and write the remainder as
$$r_d = A_d - \frac{\omega(d)}{d}\,X.$$
Write $S(A,P,z)$ for the number of items in $A$ that are relatively prime to $P(z)$. Write
$$V(z) = \prod_{p \mid P(z)} \left(1 - \frac{\omega(p)}{p}\right).$$
Write $\nu(m)$ for the number of distinct prime divisors of $m$. Write $F_1$ and $f_1$ for functions satisfying certain difference differential equations (see Diamond & Halberstam for the definition and properties).
We assume the dimension (sifting density) is 1: that is, there is a constant $C$ such that for $2 \le z < w$ we have
$$\prod_{\substack{z \le p < w \\ p \in P}} \left(1 - \frac{\omega(p)}{p}\right)^{-1} \le \frac{\ln w}{\ln z}\left(1 + \frac{C}{\ln z}\right).$$
(The book of Diamond & Halberstam extends the theorem to dimensions higher than 1.) Then the Jurkat–Richert theorem states that for any numbers $y$ and $z$ with $2 \le z \le y \le X$ we have
$$S(A,P,z) \le X V(z) \left( F_1\!\left(\frac{\ln y}{\ln z}\right) + O\!\left(\frac{(\ln\ln y)^{3/4}}{(\ln y)^{1/4}}\right)\right) + \sum_{\substack{d \mid P(z) \\ d < y}} 4^{\nu(d)}\,|r_d|$$
and
$$S(A,P,z) \ge X V(z) \left( f_1\!\left(\frac{\ln y}{\ln z}\right) - O\!\left(\frac{(\ln\ln y)^{3/4}}{(\ln y)^{1/4}}\right)\right) - \sum_{\substack{d \mid P(z) \\ d < y}} 4^{\nu(d)}\,|r_d|.$$
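To make the notation concrete, here is a minimal numerical sketch of the objects in the statement for the simplest sifting situation, taking $A = \{1, \dots, N\}$, $P$ the set of all primes, $\omega(p) = 1$ and $X = N$; the sieve functions $F_1$, $f_1$ themselves are not computed:

```python
from math import gcd, prod
from sympy import primerange

N, z = 10_000, 30
A = range(1, N + 1)
P = list(primerange(2, z))   # the elements of P below z
Pz = prod(P)                 # P(z): the product of those primes

# S(A, P, z): members of A relatively prime to P(z)
S = sum(1 for n in A if gcd(n, Pz) == 1)

# Main term X * V(z), with X = N and omega(p) = 1
V = prod(1 - 1 / p for p in P)
print(S, N * V)              # the sifted count vs. the main term X*V(z)

# The remainders r_d = A_d - (omega(d)/d) * X are small here: |r_d| < 1
d = 3
A_d = sum(1 for n in A if n % d == 0)
print(A_d - N / d)           # -1/3 for d = 3, since 3 does not divide N
```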
Notes
Sieve theory
Theorems in analytic number theory | Jurkat–Richert theorem | Mathematics | 355 |
32,551,420 | https://en.wikipedia.org/wiki/CUT%20domain | In molecular biology, the CUT domain (also known as ONECUT) is a DNA-binding motif which can bind independently or in cooperation with the homeodomain, which is often found downstream of the CUT domain. Proteins containing these domains display two modes of DNA binding, which hinge on the homeodomain and on the linker that separates it from the CUT domain, and two modes of transcriptional stimulation, which hinge on the homeodomain.
References
Protein domains | CUT domain | Biology | 97 |
29,078,220 | https://en.wikipedia.org/wiki/Moiety%20%28kinship%29 | In the anthropological study of kinship, a moiety () is a descent group that coexists with only one other descent group within a society. In such cases, the community usually has unilineal descent (either patri- or matrilineal) so that any individual belongs to one of the two moiety groups by birth, and all marriages take place between members of opposite moieties. It is an exogamous clan system with only two clans.
In the case of a patrilineal descent system, one can interpret a moiety system as one in which women are exchanged between the two moieties. Moiety societies operate particularly among the indigenous peoples of North America, Australia (see Australian Aboriginal kinship for details of Aboriginal moieties), and Indonesia.
Etymology
The word moiety comes from Latin medietat-, meaning 'a half', through Anglo-Norman moité.
References
Further reading
Anthropology
Kinship and descent | Moiety (kinship) | Biology | 195 |
31,161,659 | https://en.wikipedia.org/wiki/Escherichia%20coli%20in%20molecular%20biology | Escherichia coli (; commonly abbreviated E. coli) is a Gram-negative gammaproteobacterium commonly found in the lower intestine of warm-blooded organisms (endotherms). The descendants of two isolates, K-12 and B strain, are used routinely in molecular biology as both a tool and a model organism.
Diversity
Escherichia coli is one of the most diverse bacterial species, with several pathogenic strains that cause different symptoms, and with only 20% of the genome common to all strains. Furthermore, from the evolutionary point of view, the members of genus Shigella (dysenteriae, flexneri, boydii, sonnei) are actually E. coli strains "in disguise" (i.e. E. coli is paraphyletic to the genus).
History
In 1885, Theodor Escherich, a German pediatrician, first discovered this species in the feces of healthy individuals and called it Bacterium coli commune because it is found in the colon. Early classifications of prokaryotes placed these in a handful of genera based on their shape and motility (at that time Ernst Haeckel's classification of bacteria in the kingdom Monera was in place).
Following a revision of Bacteria it was reclassified as Bacillus coli by Migula in 1895 and later reclassified as Escherichia coli.
Due to its ease of culture and fast doubling, it was used in early microbiology experiments; however, bacteria were considered primitive and pre-cellular and received little attention before 1944, when Avery, MacLeod and McCarty demonstrated that DNA was the genetic material using pneumococcus. Following this, Escherichia coli was used for linkage mapping studies.
Strains
Four of the many E. coli strains (K-12, B, C, and W) are thought of as model organism strains. These are classified in Risk Group 1 in biosafety guidelines.
Escherich's isolate
The first isolate of Escherich was deposited in NCTC in 1920 by the Lister Institute in London (NCTC 86).
K-12
A strain was isolated from a stool sample of a patient convalescent from diphtheria and was labelled K-12 (not an antigen) in 1922 at Stanford University. This isolate was used in the 1940s by Charles E. Clifton to study nitrogen metabolism; he deposited it in ATCC (strain ATCC 10798) and lent it to Edward Tatum for his tryptophan biosynthesis experiments, despite its idiosyncrasies due to the F+ λ+ phenotype.
In the course of these passages it lost its O antigen. In 1953 it was cured first of its lambda phage (yielding strain W1485, obtained by UV irradiation by Joshua Lederberg and colleagues) and then, in 1985, of the F plasmid by acridine orange curing, yielding strain MG1655. Strains derived from MG1655 include DH1, parent of DH5α and in turn of DH10B (rebranded as TOP10 by Invitrogen).
An alternative lineage from W1485 is that of W2637 (which contains an inversion rrnD-rrnE), which in turn resulted in W3110.
Due to the lack of specific record-keeping, the "pedigree" of strains was not available and had to be inferred by consulting lab-book and records in order to set up the E. coli Genetic Stock Centre at Yale by Barbara Bachmann. The different strains have been derived through treating E. coli K-12 with agents such as nitrogen mustard, ultra-violet radiation, X-ray etc. An extensive list of Escherichia coli K-12 strain derivatives and their individual construction, genotypes, phenotypes, plasmids and phage information can be viewed at Ecoliwiki.
B strain
A second common laboratory strain is the B strain, whose history is less straightforward; the first naming of the strain as E. coli B was by Delbrück and Luria in 1942 in their study of bacteriophages T1 and T7. The original E. coli B strain, then known as Bacillus coli, came from Félix d'Herelle of the Institut Pasteur in Paris, who studied bacteriophages around 1918 and claimed that the strain originated from the collection of the Institut Pasteur, although no strains of that period survive. D'Herelle's strain was passed to Jules Bordet, Director of the Institut Pasteur du Brabant in Bruxelles, and his student André Gratia. Bordet passed the strain to Ann Kuttner ("the Bact. coli obtained from Dr. Bordet") and in turn to Eugène Wollman (B. coli Bordet), whose son deposited it in 1963 (CIP 63.70) as "strain BAM" (B American). André Gratia meanwhile passed the strain to Martha Wollstein, a researcher at Rockefeller, who referred to it as the "Brussels strain of Bacillus coli" in 1921; she passed it to Jacques Bronfenbrenner (B. coli P.C.), who passed it to Delbrück and Luria.
This strain gave rise to several other strains, such as REL606 and BL21.
C strain
E. coli C is morphologically distinct from other E. coli strains; it is more spherical in shape and has a distinct distribution of its nucleoid.
W strain
The W strain was isolated from the soil near Rutgers University by Selman Waksman.
Role in biotechnology
Because of its long history of laboratory culture and ease of manipulation, E. coli also plays an important role in modern biological engineering and industrial microbiology. The work of Stanley Norman Cohen and Herbert Boyer in E. coli, using plasmids and restriction enzymes to create recombinant DNA, became a foundation of biotechnology.
Considered a very versatile host for the production of heterologous proteins, researchers can introduce genes into the microbes using plasmids, allowing for the mass production of proteins in industrial fermentation processes. Genetic systems have also been developed which allow the production of recombinant proteins using E. coli. One of the first useful applications of recombinant DNA technology was the manipulation of E. coli to produce human insulin. Modified E. coli have been used in vaccine development, bioremediation, and production of immobilised enzymes.
E. coli have been used successfully to produce proteins previously thought difficult or impossible to express in E. coli, such as those containing multiple disulfide bonds or those requiring post-translational modification for stability or function. The cellular environment of E. coli is normally too reducing for disulfide bonds to form; proteins with disulfide bonds may therefore be secreted into its periplasmic space. However, mutants in which the reduction of both thioredoxins and glutathione is impaired also allow disulfide-bonded proteins to be produced in the cytoplasm of E. coli. It has also been used to produce proteins with various post-translational modifications, including glycoproteins, by using the N-linked glycosylation system of Campylobacter jejuni engineered into E. coli. Efforts are currently under way to expand this technology to produce complex glycosylations.
Studies are also being performed into programming E. coli to potentially solve complicated mathematics problems such as the Hamiltonian path problem.
Model organism
E. coli is frequently used as a model organism in microbiology studies. Cultivated strains (e.g. E. coli K-12) are well-adapted to the laboratory environment, and, unlike wild type strains, have lost their ability to thrive in the intestine. Many lab strains lose their ability to form biofilms. These features protect wild type strains from antibodies and other chemical attacks, but require a large expenditure of energy and material resources.
In 1946, Joshua Lederberg and Edward Tatum first described the phenomenon known as bacterial conjugation using E. coli as a model bacterium, and it remains a primary model to study conjugation. E. coli was an integral part of the first experiments to understand phage genetics, and early researchers, such as Seymour Benzer, used E. coli and phage T4 to understand the topography of gene structure. Prior to Benzer's research, it was not known whether the gene was a linear structure, or if it had a branching pattern.
E. coli was one of the first organisms to have its genome sequenced; the complete genome of E. coli K-12 was published by Science in 1997.
Lenski's long-term evolution experiment
The long-term evolution experiments using E. coli, begun by Richard Lenski in 1988, have allowed direct observation of major evolutionary shifts in the laboratory. In this experiment, one population of E. coli unexpectedly evolved the ability to aerobically metabolize citrate. This capacity is extremely rare in E. coli. As the inability to grow aerobically on citrate is normally used as a diagnostic criterion with which to differentiate E. coli from other, closely related bacteria such as Salmonella, this innovation may mark a speciation event observed in the lab.
References
Escherichia coli
Gram-negative bacteria | Escherichia coli in molecular biology | Biology | 1,929 |
46,363,918 | https://en.wikipedia.org/wiki/Squad%20%28video%20game%29 | Squad is a realism-based military tactical first-person shooter video game developed and published by Canadian indie developer Offworld Industries exclusively through the Steam distribution platform. It is a spiritual successor to the Project Reality modification for Battlefield 2. The game depicts realistic modern warfare between military and paramilitary factions in large and expansive battlefields. Squad became available on Steam Early Access in December 2015, and was officially released on Steam in September 2020.
Gameplay
Squad is a tactical shooter based around squad gameplay designed to encourage teamwork and communication. A match is played between two belligerent teams, with each team being made up of squads that cap at nine players. Players in squads select from various soldier classes which play distinct roles in combat, such as suppressive fire from an automatic rifleman, anti-tank support from a MANPATS gunner, or medical support from a combat medic. A squad of players is led by a squad leader, who can communicate with other allied squad leaders and construct structures such as forward operating bases and defensive emplacements. Any squad leader can additionally nominate himself for the position of commander, who, once voted into position, can coordinate his team's battle plan and call in additional support such as UAV recon and artillery.
Squad borrows many of its gameplay aspects from its predecessor, Project Reality, with its game modes placing heavy emphasis on team coordination. Matches take place on extremely large, realistic battlefields up to several kilometres across, facilitating the use of a wide variety of vehicles such as main battle tanks, armored personnel carriers, transport trucks, and helicopters. The two teams, utilizing vehicle and infantry warfare, compete over various objectives, such as strategic locations to hold or weapon caches to destroy. To facilitate maneuver warfare, squad leaders can construct forward operating bases around the map, which provide a team-wide location for soldiers to respawn and must be kept supplied by a logistics network.
Both teams are kept in check by the "ticket" system, which simulates combat effectiveness. The loss of strategic locations, destruction of forward operating bases or vehicles and soldier deaths all remove tickets from a team's pool. A match ends once a team's ticket pool has been reduced to zero. A team that fails at defending too many of its strategic objectives will begin to lose tickets rapidly, starting at a loss of one per minute and capping out at a maximum of ten per minute. As such, teams have the ability to achieve victory in multiple ways, such as exhausting the enemy's ability to fight or by completing all of their tactical objectives.
There are currently fourteen playable factions in Squad, which are divided into three categories: BLUFOR, REDFOR, and INDFOR, and which change based on the map and game mode being played. The real-life militaries featured include the United States Army, United States Marine Corps, British Army, Canadian Army and Australian Army for the BLUFOR side, as well as the Russian Ground Forces, Russian Airborne Forces, People's Liberation Army and People's Liberation Army Navy Marine Corps for the REDFOR. Five conventional and unconventional stock factions are included as part of INDFOR: the Insurgents, modeled after various Middle Eastern insurgent groups; the Irregular Militia, modeled after paramilitary forces in Eastern Europe and the Balkans; the Middle Eastern Alliance, a fictional alliance amalgamating the armed forces of various Middle Eastern and Central Asian countries; the Turkish Land Forces; and a Private Military Contractor faction based on various Western PMCs.
Development
Development of Squad was announced in October 2014 when Project Reality developer Sniperdog (a.k.a. Will Stahl) made a post on the Project Reality forums. The announcement carried the news that the team of fifteen was making a spiritual successor to Project Reality on Epic Games' Unreal Engine 4.
The game's Kickstarter campaign started in May 2015. It featured six backer levels with various rewards such as merchandise, in-game rewards, and pre-alpha testing access. Five days after the Kickstarter launch, the game had raised over $200,000.
On April 5, 2015, Squad appeared in Steam's Greenlight service and was announced in an update called "Vote For Us". It was officially greenlit eight days later.
Squad was released on Steam Early Access on December 15, 2015, and officially released on Steam on September 23, 2020.
As of 26 September 2024, Squad is at version 8.1.
Reception
As of 2022, the game has sold over 3 million copies.
See also
Arma (series) – series with similar gameplay developed by Bohemia Interactive
Post Scriptum – a World War II tactical shooter published by Offworld Industries; originally a mod for Squad
Project Reality
References
External links
Squad website
2020 video games
Crowdfunded video games
First-person shooters
Kickstarter-funded video games
Multiplayer online games
Asymmetrical multiplayer video games
Tactical shooters
Unreal Engine 4 games
Indie games
Video games developed in Canada
Windows games
Windows-only games
Video games set in Europe
Video games set in the Middle East
Video games set in Afghanistan
Video games set in Canada
Video games set in Iraq
Video games set in Russia
Offworld Industries games | Squad (video game) | Physics | 1,043 |
16,898,244 | https://en.wikipedia.org/wiki/B3%20domain | The B3 DNA binding domain (DBD) is a highly conserved domain found exclusively in transcription factors (≥40 species) combined with other domains. It consists of 100-120 residues and includes seven beta strands and two alpha helices that form a DNA-binding pseudobarrel protein fold; it interacts with the major groove of DNA.
B3 families
In Arabidopsis thaliana, there are three main families of transcription factors that contain the B3 domain:
ARF (Auxin Response Factors)
ABI3 (ABscisic acid Insensitive3)
RAV (Related to ABI3/VP1)
To date, only two NMR solution-phase structures of the B3 DNA binding domain are known.
Related proteins
The N-terminal domain of restriction endonuclease EcoRII and the C-terminal domain of restriction endonuclease BfiI possess a similar DNA-binding pseudobarrel protein fold.
See also
Restriction endonuclease EcoRII
Auxin
Abscisic acid
References
External links
DBD database of predicted transcription factors Uses a curated set of DNA-binding domains to predict transcription factors in all completely sequenced genomes
Classification in the "Transcription factors" table according to the Transfac database.
Database of Arabidopsis Transcription Factors
B3, RAV, and ARF family at PlantTFDB:Plant Transcription Factor Database
Molecular genetics
Gene expression
Transcription factors
Protein domains | B3 domain | Chemistry,Biology | 293 |
2,728,797 | https://en.wikipedia.org/wiki/Delta%20Velorum | Delta Velorum (δ Velorum, abbreviated Delta Vel, δ Vel) is a triple star system in the southern constellation of Vela, near the border with Carina, and is part of the False Cross. Based on parallax measurements, it is approximately 80 light-years from the Sun. It is one of the stars that at times lies near the south celestial pole due to precession.
δ Velorum consists of an eclipsing binary, designated Delta Velorum A, and a more distant third companion, Delta Velorum B. δ Velorum A's two components are themselves designated Aa (officially named Alsephina, from the traditional name for the entire system) and Ab.
Nomenclature
δ Velorum (Latinised to Delta Velorum) is the system's Bayer designation. The designations of the two constituents as Delta Velorum A and B, and those of A components—Delta Velorum Aa and Ab—derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
Delta Velorum bore the traditional name Alsafinah, which stems from the Arabic name al-safīnah meaning "the ship", referring to the ancient Greek constellation Argo Navis, the ship of the Argonauts. The name was first used in a 10th-century Arabic translation of the Almagest, the second-century work of the Greek astronomer Ptolemy. Although the name originally referred to an entire constellation, it was assigned to this particular bright star at least as early as 1660, when it appeared in Andreas Cellarius's renowned Harmonia Macrocosmica, a magnificently illustrated 17th-century Dutch book about the cosmos. In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Alsephina for the component δ Velorum Aa on 5 September 2017 and it is now so included in the List of IAU-approved Star Names.
The False Cross is an asterism formed of Delta and Kappa Velorum along with Iota Carinae and Epsilon Carinae. It is so called because it is sometimes mistaken for the Southern Cross, causing errors in astronavigation.
In Chinese, 天社 (Tiān Shè), meaning Celestial Earth God's Temple, refers to an asterism consisting of Delta Velorum, Gamma2 Velorum, Kappa Velorum and b Velorum. Consequently, Delta Velorum itself is known as 天社三 (Tiān Shè sān), "the Third Star of Celestial Earth God's Temple". In a different Chinese view, this star appears in an asterism with the given name of Koo She (Chinese: 弧矢, hú shǐ, "Bow and Arrow"), comprising Delta Velorum, Omega Carinae and stars from Canis Major.
Stellar system
Delta Velorum is a triple star system. The outer components, δ Velorum A and B, have a wide orbit with a 143-year period. The primary component A has an apparent magnitude of 2.00, while the secondary B is magnitude 5.54, with a combined magnitude measured at 1.96. As of 2013, the two stars were separated by 0.6", but they have an eccentric orbit and their average separation over the whole orbit is nearly 2".
In 1978 the primary component was reported to be a spectroscopic binary in the Proceedings of the Australian Astronomical Observatory, and this was confirmed by the Hipparcos satellite.
In 2000 it was announced that the inner components Aa and Ab form an eclipsing binary, having an orbital period of 45.15 days and an eccentricity of 0.230. The semi-major axis of their orbit corresponds to a mean separation of . Delta Velorum is the brightest known eclipsing binary, although Algol has a deeper minimum and is easier to observe visually. Observations of variability in the Delta Velorum system were made independently by ground-based astronomers and the Galileo space probe at Jupiter. The inner pair were resolved using interferometry in 2007, and then using NACO adaptive optics with the Very Large Telescope. Photometry of the components of δ Velorum A gives apparent visual magnitudes of 2.33 and 3.44. The precise orbits allow a dynamical parallax of to be derived, representing a distance of 25.1 parsecs.
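The dynamical-parallax step can be made concrete with a short calculation. The Python sketch below applies Kepler's third law in solar units, a_AU³ = (M₁+M₂)·P_yr², to the inner pair, using the masses and period quoted in this article; the angular semi-major axis of about 16.8 milliarcseconds is an assumed illustrative value, since the measured figure is not reproduced here.

```python
# Dynamical parallax: Kepler's third law in solar units gives the physical
# semi-major axis; comparing it with the angular one yields the parallax.
def dynamical_parallax(alpha_arcsec, period_yr, total_mass_msun):
    a_au = (total_mass_msun * period_yr**2) ** (1 / 3)  # physical semi-major axis [AU]
    parallax = alpha_arcsec / a_au                      # pi = alpha / a
    return parallax, 1.0 / parallax                     # (parallax in arcsec, distance in pc)

# Masses (2.5 + 2.4 Msun) and the 45.15 d period are from this article;
# alpha = 0.0168" is an assumed value for illustration only.
pi_as, d_pc = dynamical_parallax(0.0168, 45.15 / 365.25, 2.5 + 2.4)
print(f"parallax ~ {pi_as * 1000:.1f} mas, distance ~ {d_pc:.1f} pc")  # ~39.9 mas, ~25.1 pc
```

With these inputs the sketch reproduces the quoted 25.1-parsec distance, which is the point of the dynamical-parallax method: orbital geometry substitutes for a direct trigonometric measurement.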
Another binary system is located at an angular separation of 69 arcseconds from δ Velorum, sometimes referred to as δ Velorum C and D. The pair is composed of an 11th-magnitude star and a 13th-magnitude star, which are 6 arcseconds apart. The two stars, with approximate spectral types of G8V and K0V, are expected to be more distant than δ Velorum and not physically associated.
HD 76653 is a probable (96% chance) co-moving companion; the two have an estimated physical separation of with similar proper motions. Both are likely members of the Ursa Major Moving Group.
Physical properties
The brightnesses of the three stars have been measured at visual and infrared wavelengths using adaptive optics. The physical properties implied by their surface brightnesses and colour indices suggest spectral types of A2IV, A4V and F8V, respectively.
More precise physical properties for the stars can be calculated using accurate orbital parameters. Both members of the spectroscopic binary Delta Velorum A are slightly evolved stars that are still on the main sequence. Component Aa has 2.5 times the mass of the Sun, 2.6 times the Sun's radius, and is radiating 56 times the luminosity of the Sun at an effective temperature of . Component Ab is only slightly smaller, with 2.4 times the Sun's mass and radius, with a luminosity of 47 times the Sun and an effective temperature of 9,370 K.
Both stars are rotating rapidly and are significantly oblate, with polar radii smaller than their equatorial radii. Gravity darkening results in their effective temperatures at the pole being higher. For component Aa, the polar radius and temperature are and 10,100 K, respectively, while the equatorial radius and temperature are and 9,700 K, respectively. For component Ab, the corresponding polar values are and 10,120 K, and the equatorial values are and 9,560 K. This results in the stars being brighter when seen along their axes of rotation and fainter when observed at their equators. From Earth, the pair is observed nearly equatorially and the absolute visual magnitude is +0.02; from a different direction the absolute magnitude would be −0.138 or less.
Delta Velorum B is a smaller main-sequence star, with a mass of about , a temperature of 6,600 K, a radius of , and a bolometric luminosity of .
Southern pole star
The south celestial pole will pass close to Delta Velorum around 9000 AD because of precession.
References
External links
A-type main-sequence stars
F-type main-sequence stars
Algol variables
Triple star systems
Ursa Major moving group
Velorum, Delta
Vela (constellation)
CD-54 2351
074956
042913
3485
Alsephina | Delta Velorum | Astronomy | 1,522 |
19,493,432 | https://en.wikipedia.org/wiki/Mary%20Tiles | Mary Tiles (born 1946) is a philosopher and historian of mathematics and science. From 2006 until 2009, she served as chair of the philosophy department of the University of Hawaii at Manoa. She retired in 2009.
Life
At Bristol University, Tiles obtained her B.A. in philosophy and mathematics in 1967, and her Ph.D. in philosophy in 1973, followed by a B.Phil. in philosophy in 1974 at Oxford and an M.A. in 1978 at Cambridge. After positions as lecturer and visiting associate professor at different institutions, Tiles became associate professor of philosophy at the University of Hawaii at Manoa in 1989, and full professor in 1992.
Work
Tiles' area of work is primarily philosophy and history of logic, mathematics and science, with a special emphasis on French contributions to this area, e.g. by Gaston Bachelard, Georges Canguilhem, Bruno Latour,
Michel Foucault, Pierre Bourdieu, Michel Serres, Jean-Claude Martzloff, Karine Chemla, Catherine Jami, and François Jullien.
One of her publications is the 1989 book The Philosophy of Set Theory: An Historical Introduction to Cantor's Paradise. As the subtitle suggests, it is an example of a book that treats the philosophy of mathematics as inseparable from historical concerns. Despite some criticism of its lack of technical detail and correctness, and of its pressing the author's philosophical agenda on readers, it has been recommended as an introductory textbook for undergraduates interested in the philosophy of mathematics.
Bibliography
with Hans Oberdiek, Living in a Technological Culture: Human Tools and Human Values, Routledge 1995.
with Jim Tiles, An Introduction to Historical Epistemology: The Authority of Knowledge, Oxford 1993.
Mathematics and the Image of Reason, Routledge 1991.
The Philosophy of Set Theory: An Historical Introduction to Cantor's Paradise, Blackwell 1989; reprinted by Dover 2004.
Bachelard: Science and Objectivity, Cambridge University Press 1984.
References
1946 births
Living people
Philosophers of science
American historians of mathematics
Mathematical logicians
Women logicians
American set theorists
British mathematicians
Alumni of the University of Oxford
Alumni of the University of Cambridge
University of Hawaiʻi faculty
20th-century American mathematicians
21st-century American mathematicians
21st-century British women mathematicians
Philosophers of technology
20th-century American women mathematicians
21st-century American women mathematicians | Mary Tiles | Mathematics | 479 |
43,339,202 | https://en.wikipedia.org/wiki/10%20Serpentis | 10 Serpentis is a single, white-hued star in Serpens Caput, the western section of the equatorial constellation of Serpens. It is faintly visible to the naked eye with an apparent visual magnitude of 5.15. Located around distant, it is moving closer to the Sun with a heliocentric radial velocity of −10 km/s and will make its closest approach in around 983,000 years at a separation of about .
Abt and Morrell (1995) gave this star a stellar classification of A6 III, matching an evolved giant star that has used up its core hydrogen. In contrast, Houk and Swift (1999) classed it A7 IV, which is more in line with an evolving subgiant star that is on its way to becoming a giant. It has a high rate of spin with a projected rotational velocity of 115 km/s, giving it an oblate shape with an equatorial bulge that is an estimated 7% larger than the polar radius. The star is about 424 million years old with 1.64 times the mass of the Sun and is radiating 12 times the Sun's luminosity from its photosphere at an effective temperature of roughly 7,872 K.
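The quoted oblateness can be sanity-checked from the other figures in this paragraph. The Python sketch below first recovers the radius from the Stefan–Boltzmann law, then applies the Roche approximation f ≈ v_eq²R/(2GM). Because the 115 km/s figure is a projected velocity (v sin i), it only bounds the true equatorial velocity from below, so the computed flattening is a lower bound consistent with the ~7% quoted above.

```python
import math

# Values from this article: L = 12 Lsun, Teff ~ 7872 K, M = 1.64 Msun, v sin i = 115 km/s.
L, T, M, v_sini = 12.0, 7872.0, 1.64, 115e3
R_SUN, M_SUN, T_SUN, G = 6.957e8, 1.989e30, 5772.0, 6.674e-11  # SI and solar constants

# Stefan-Boltzmann: L = 4*pi*R^2*sigma*T^4  =>  R/Rsun = sqrt(L/Lsun) * (Tsun/T)^2
R = math.sqrt(L) * (T_SUN / T) ** 2          # ~1.86 solar radii

# Roche approximation for rotational flattening: f ~ v_eq^2 * R / (2*G*M).
# v sin i only bounds v_eq from below, so this is a lower bound on f.
f = v_sini**2 * (R * R_SUN) / (2 * G * M * M_SUN)
print(f"R ~ {R:.2f} Rsun, flattening lower bound ~ {f * 100:.1f}%")  # ~3.9%
```

A ~4% lower bound on the flattening is compatible with the quoted 7% equatorial bulge provided the rotation axis is moderately inclined to the line of sight.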
References
A-type giants
Serpens
Durchmusterung objects
Serpentis, 10
137898
075761
5746 | 10 Serpentis | Astronomy | 281 |
5,556,716 | https://en.wikipedia.org/wiki/Romanian%20numbers | Romanian numbers are the system of number names used in Romanian to express counts, quantities, ranks in ordered sets, fractions, multiplication, and other information related to numbers.
In Romanian grammar, the words expressing numbers are sometimes considered a separate part of speech, called (plural: ), along with nouns, verbs, etc. (Note that the English word "numeral" can mean both the symbols used for writing numbers and the names of those numbers in a given language; also, the Romanian term only partially overlaps in meaning with English number.) Nevertheless, these words play the same roles in the sentence as they do in English: adjective, pronoun, noun, and adverb. This article focuses on the mechanism of naming numbers in Romanian and the use of the number names in sentences.
The symbols for numbers in Romanian texts are the same as those used in English, with the exception of using the comma as the decimal separator and the period or the space (ideally a narrow space) for grouping digits by three in large numbers. For example, in Romanian 1,5 V means one and a half volts, and 1.000.000 or 1 000 000 means one million.
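This separator convention is easy to mechanize. The following Python snippet is an illustrative helper (format_ro is a hypothetical name, not part of any standard locale library) that rewrites an English-style formatted number into the Romanian convention:

```python
def format_ro(value, decimals=0):
    """Format a number Romanian-style: '.' groups thousands, ',' marks decimals."""
    s = f"{value:,.{decimals}f}"   # English-style first, e.g. 1,234,567.8
    # Swap the two separator roles via a placeholder character.
    return s.replace(",", "\x00").replace(".", ",").replace("\x00", ".")

print(format_ro(1_000_000))   # 1.000.000
print(format_ro(1.5, 1))      # 1,5
```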
General characteristics
As in other numeral systems, the Romanian number names use a limited set of words and combining rules, which can be applied to generate the name of any number within sufficiently large limits.
The general characteristics of the number formation rules in Romanian are:
The numeration base used is decimal.
Word order is big-endian with the exception of numbers from 11 to 19.
Large numbers use a combined form of the long and short scales.
Connection words are used in certain situations.
Some number names have two gender-specific forms.
Cardinal numbers
Cardinal numbers are the words we use for counting objects or expressing quantity.
Number name for 0
The number 0 is called . Like in English, it requires the plural form of nouns: "zero degrees", with being the plural form of ). Unlike English, the reading of number/numeral 0 is always and never replaced with words like oh, naught, nil, love, etc.
Numbers from 1 to 10
The number names from 1 to 10 derive from Latin. The table below gives the cardinal numbers in Romanian and the three other Eastern Romance languages (sometimes considered to be its dialects): Aromanian, Megleno-Romanian and Istro-Romanian.
Notes
1. When counting, the number names for 1 and 2 have the forms given in the table; however, when used in a sentence, they change according to the gender of the noun they modify or replace. It is worth noting that the two adjectival forms of the cardinal number for 1 ( and ) are identical with the corresponding indefinite articles.
"one boy, a boy",
"one of the boys",
"one girl, a girl",
"one of the girls",
"two boys",
"two girls".
2. The name for number five in Aromanian, written or , might be responsible for nicknaming the Aromanians țințar.
3. Sometimes pronounced as (initially a regionalism), more common when communicating telephone numbers, in order to avoid a possible confusion between and .
4. In Istro-Romanian, depending on the speaker, some number names are replaced with their Croatian (Slavic) equivalents.
Numbers from 11 to 19
Unlike all other Romance languages, Romanian has a consistent way of naming the numbers from 11 to 19. These are obtained by joining three elements: the units, the word (derived from Latin "over", but now meaning "towards" in Romanian), and the word for "ten". For example, fifteen is , which literally means "five over ten". This is the only exception to the big-endian principle of number naming.
The table below gives the forms of all nine such number names. Each number in the series has one or more shortened variants, often used in informal speech, where the element is replaced by . Prescriptive grammarians consider the informal variants to be indicative of careless speech.
Notes
1. The number name for 12 given in the table is the masculine form; this is the only number in this range that also has a feminine form: (informal ). However, the masculine form is sometimes used even with feminine nouns, especially when the number follows the noun it determines, as in "12 o'clock" or ("12th grade", see below for ordinal numbers); such use is considered incorrect.
2. Number names for 14 and 16 do not exactly follow the forming rule, possibly under the influence of the number names for 12 and 13. The forms and do exist, but are perceived as hypercorrect and very rarely used (one might hear them in telephone conversations, for the sake of correct transmission).
3. Instead of sometimes is used.
4. The number name for 18 is notorious for being the word in Romanian with the longest consonant cluster (five consonants with no intervening vowels): , split into two syllables, . For this reason, the variants (with a missing ) and or (with an additional vowel to break the consonant cluster) are frequent.
Numbers from 20 to 99
The numbers in this range that are multiple of 10 (that is, 20, 30, ..., 90) are named by joining the number of tens with the word (the plural of ), as shown in the table below. Note that they are spelled as a single word.
Notes
1. is often pronounced (but not written) . Similarly, is often pronounced .
2. does not follow the formation rule exactly. The expected form does not exist.
3. This is a direct descendant of Latin , which did not survive in Romanian.
The other numbers between 20 and 99 are named by combining three words: the number of tens, the conjunction "and", and the units. For example, 42 is .
For those numbers whose unit figure is 1 or 2 the corresponding number name has two gender-dependent forms:
masculine: "31 men"; "32 men";
feminine: "31 women"; "32 women";
neuter: "31 degrees"; "32 degrees".
Short versions
The numbers from 20 to 99 also have an informal, simplified pronunciation: The part shortens to when the units name starts with an unvoiced consonant or a vowel. For 50 and 80 this contraction is incomplete, reducing only to . When the next word starts with a voiced consonant the same rule applies except that is pronounced voiced as . The same rule applies if the units number is 0 and if the next word is the preposition . Examples:
→ ("75");
→ ("51");
→ ("88");
→ ("32");
→ ("20 times").
In regional speech further simplification is possible ( becoming and becoming ). Also, the number , when it refers to the revolutions of 1848, is pronounced , which also gave words like (meaning "participant in the Romanian 1848 Revolution" or "supporter of its ideology").
Numbers from 100 to 999
Any given number from 100 to 999 can be named by first saying the hundreds and then, without any connecting word, the two-digit number of tens and units; for example, 365 is trei sute șaizeci și cinci.
Note that the word for "hundred" is sută, and that if the number of hundreds is 2 or larger, the plural sute is required. The noun sută itself is feminine and as such the numbers 100 and 200 are o sută and două sute.
In fast utterances, the numbers 500 and 800 are usually pronounced cinsute and opsute, instead of the standard forms cinci sute and opt sute, respectively. In writing, however, the informal variants are only used for stylistic effects.
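The rules stated so far, for the units, for 11–19, for tens joined to units with și, and for the hundreds, are regular enough to state as a short algorithm. The Python sketch below is an illustrative implementation covering 0–999 in the masculine counting forms only; it ignores the feminine forms of 1, 2, and 12, and the informal variants described above.

```python
UNITS = ["zero", "unu", "doi", "trei", "patru", "cinci",
         "șase", "șapte", "opt", "nouă", "zece"]
TEENS = ["unsprezece", "doisprezece", "treisprezece", "paisprezece",
         "cincisprezece", "șaisprezece", "șaptesprezece",
         "optsprezece", "nouăsprezece"]
TENS = {2: "douăzeci", 3: "treizeci", 4: "patruzeci", 5: "cincizeci",
        6: "șaizeci", 7: "șaptezeci", 8: "optzeci", 9: "nouăzeci"}

def ro_cardinal(n):
    """Cardinal number name for 0-999 (masculine counting forms only)."""
    if n <= 10:
        return UNITS[n]
    if n < 20:
        return TEENS[n - 11]                 # units + 'spre' + 'zece'
    if n < 100:
        tens, unit = divmod(n, 10)
        return TENS[tens] if unit == 0 else f"{TENS[tens]} și {UNITS[unit]}"
    hundreds, rest = divmod(n, 100)
    # 'sută' is feminine: o sută (100), două sute (200), trei sute (300), ...
    head = {1: "o sută", 2: "două sute"}.get(hundreds, f"{UNITS[hundreds]} sute")
    return head if rest == 0 else f"{head} {ro_cardinal(rest)}"

assert ro_cardinal(42) == "patruzeci și doi"
assert ro_cardinal(365) == "trei sute șaizeci și cinci"   # example from the text
```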
Large numbers
The table below lists the numbers representing powers of 10 larger than 100, that have a corresponding single-word name. The word for 1000 is feminine, all the others are neuter; this is important in the number naming. In Romanian, neuter nouns behave like masculine in the singular and like feminine in the plural.
To say any cardinal number larger than 1000 the number is split in groups of three digits, from right to left (into units, thousands, millions, etc.), then the groups are read from left to right as in the example below.
12,345,678 (written in Romanian 12.345.678) = douăsprezece milioane trei sute patruzeci și cinci de mii șase sute șaptezeci și opt
When a digit is zero, the corresponding quantity is simply not pronounced:
101,010 (written in Romanian 101.010) = o sută una mii zece
In writing, the groups of three digits are separated by dots. The comma is used as decimal separator. This may be confusing for native English speakers, who use the two symbols the other way around.
Decimal fractions
Numbers represented as decimal fractions (for example 1.62) are expressed by reading in order the integer part, the decimal separator, and the fractional part. This is the same as in English, with the following exceptions:
The decimal separator is the comma, in Romanian virgulă. For example, 2.5 is written 2,5 and pronounced doi virgulă cinci.
The fractional part is read as a multi-digit number, not by saying each digit independently. For example, 3.14 (written 3,14) is pronounced trei virgulă paisprezece (literally three comma fourteen). However, when the number of decimals is too large, they can be read one by one as a string of digits: trei virgulă unu patru unu cinci nouă (3.14159).
Decimal fractions whose integer part is 0 (such as 0.6) are always written and pronounced in Romanian together with the initial zero: 0,6 is read zero virgulă șase, unlike English point six.
In some situations it is customary to say cu "with" instead of virgulă. For example, medical staff might be heard stating the body temperature in words like treizeci și șapte cu cinci, meaning 37.5 °C.
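Combining these rules with the cardinal-number sketch above, reading a decimal literal can be mechanized as follows (illustrative only; it assumes the fractional part has no leading zeros, which would require digit-by-digit reading instead):

```python
def ro_decimal(literal):
    """Read a Romanian decimal literal, e.g. '3,14' -> 'trei virgulă paisprezece'."""
    int_part, _, frac_part = literal.partition(",")
    return f"{ro_cardinal(int(int_part))} virgulă {ro_cardinal(int(frac_part))}"

print(ro_decimal("2,5"))    # doi virgulă cinci
print(ro_decimal("0,6"))    # zero virgulă șase
```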
Percents
Percentages (%) and permillages (‰) are read using the words la sută and la mie, like in the examples: cinci la sută (5%), nouă la mie (9‰). For percentages an alternative reading uses the neuter noun procent, meaning 1%; the previous example becomes cinci procente.
Negative numbers
Negative numbers are named just like in English, by placing the word minus, pronounced , at the beginning: −10 m is minus zece metri.
Preposition de
Syntactically, when a cardinal number determines a noun and when the number has certain values, the preposition de (roughly equivalent to of) is inserted between the number name and the modified noun in a way similar to English hundreds of birds. Example: șaizeci de minute "sixty minutes".
The rules governing the use of preposition de are as follows:
For numbers from 0 to 19 de is not used. The same applies to numbers whose last two digits make a number in the range from 1 to 19. Examples: șapte case "seven houses", șaisprezece ani "16 years (old)", o sută zece metri "110 meters".
An exception to this rule is when the objects that are counted are symbols (letters, numbers). In this case, for better understanding the meaning, de can be used, although the practice is sometimes criticized. Example: se scrie cu doi de i "it's written with two i's", doi de zece "two tens", "two A grades".
Another exception is for numbers whose last two digits are 01, in which case an optional de is sometimes used. Examples: o mie una de ori "1001 times", o sută unu de dalmațieni "101 Dalmatians". In the latter case the choice might be influenced by euphony (avoidance of the alliteration).
For integer numbers from 20 to 100, preposition de is placed between the number name and the modified noun. The same applies to numbers whose last two digits are either 00 or make a number in the range from 20 to 99. Examples: douăzeci de metri "twenty meters", o mie de ori "a thousand times".
In technical contexts, to save space, the preposition de may be dropped, especially in writing: 200 metri plat "200 meters sprint". In expressing quantities using measurement unit symbols the preposition de is never written, but usually pronounced: 24 V → douăzeci și patru de volți "24 V, twenty-four volts".
For non-integer decimal numbers de is never used: 20,5 kg (read douăzeci virgulă cinci kilograme, "20.5 kg").
For negative numbers all the rules and exceptions above apply unchanged: −20 °C is minus douăzeci de grade Celsius, −5 m is minus cinci metri, −23,4 V is minus douăzeci și trei virgulă patru volți, etc.
The preposition de is also used within the syntax of the number itself, for stating the number of thousands, millions, billions, etc.: douăzeci de mii "twenty thousand" (also note the plural mii, unlike the singular thousand in English). The rules for this de are the same as those described above: it is used when the last two digits of the number of thousands, millions, etc. are 00 or 20–99. Again, in technical contexts, this de may be dropped: treizeci milioane euro "thirty million euros".
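Stripped of the exceptions, the core rule reduces to a test on the last two digits of an integer. Below is a minimal sketch; the optional de after symbols and after numbers ending in 01 is deliberately left out, and non-integer values never take de, as stated above.

```python
def needs_de(n):
    """Whether 'de' is inserted between an integer and the counted noun."""
    n = abs(int(n))
    if n <= 19:
        return False                          # șapte case, șaisprezece ani
    last_two = n % 100
    return last_two == 0 or last_two >= 20    # douăzeci de metri, o mie de ori

assert not needs_de(110)   # o sută zece metri
assert needs_de(1000)      # o mie de ori
```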
Agreement between number name and modified noun
The number name and the noun it modifies must agree in number and gender.
The rule for number agreement is simple: When the number is 1, the modified noun is put in its singular form, otherwise it takes the plural form, including the case of number 0 and all non-integer numbers.
The gender agreement is somewhat complicated by the fact that the Romanian nouns are classified into three genders: masculine, feminine, and neuter. Specifically, the neuter gender is a combination of the other two: A neuter noun behaves like a masculine noun in the singular, and like a feminine noun in the plural. The gender has implications on the morphology of some of the grammatically connected words, including the number names.
When the units digit of a number is 1 or 2, its name has two distinct forms, masculine and feminine. The only exception is unsprezece "eleven" which has only one form used for both genders.
The gender agreement requires the choice of masculine number names for masculine nouns, and feminine number names for feminine nouns. For the neuter nouns the agreement is obtained by choosing the masculine name of the number not just for number 1, but for all other numbers whose units digit is 1, despite the fact that the noun behaves as feminine; for numbers whose last digit is 2 the feminine numeral is chosen. Examples:
Note
1. Although, as a neuter noun in the plural, scaune behaves like a feminine noun, the masculine form of the numeral douăzeci și unu is used. This is because unu "one" also represents a number by itself; in the singular, the neuter noun requires a masculine modifier. If the noun is also modified by an adjective, the feminine form of the adjective is used: douăzeci și unu de scaune galbene "21 yellow chairs".
Distributive numbers
Distributive numbers are used to show how a larger quantity is divided into smaller, equal portions. These numbers are named using the cardinal number names and the word câte (or cîte, depending on the spelling convention), roughly meaning "each", but requiring a different word order. The following examples show some distributive numbers in various cases:
Punem câte patru prăjituri pe fiecare farfurie. "We put four cakes on each plate."
Copiii merg doi câte doi. "The children are walking two by two."
Hai să ne despărțim în grupe de câte trei. "Let's split in groups of three each."
Au fost expuse desenele a câte doi elevi din fiecare clasă. "The drawings of two students in each class were displayed."
Am dat formularele câte unui copil din fiecare grupă. "I gave the forms to one child in each group." – Am dat formularele la cîte doi copii din fiecare grupă. "I gave the forms to two children in each group."
Collective numbers
Collective numbers are used when all members of a group are referred to by their number, like English all four wheels. Generally, for sets of more than a few elements, the word toți / toate ("all", masculine / feminine) is used together with the cardinal number. The use of the demonstrative cei / cele is optional in the nominative-accusative, but required in the genitive-dative. The genitive-dative form is tuturor celor for both genders. In the following examples note that the modified noun always has the nominative form, and that the definite article goes to the demonstrative where it is used:
nominative-accusative:
masculine: toți șapte piticii, toți cei șapte pitici "all seven dwarfs";
feminine: toate trei fiicele, toate cele trei fiice "all three daughters";
genitive-dative:
tuturor celor șapte pitici "of/to all seven dwarfs";
tuturor celor trei fiice "of/to all three daughters";
genitive (another pattern, using the preposition a):
numele a toți șapte piticii, numele a toți cei șapte pitici "the names of all seven dwarfs";
numele a toate trei fiicele, numele a toate cele trei fiice "the names of all three daughters";
dative (another pattern, using the preposition la):
le-am spus la toți șapte piticii, le-am spus la toți cei șapte pitici "I told all seven dwarfs";
le-am spus la toate trei fiicele, le-am spus la toate cele trei fiice "I told all three daughters".
Special words
When the number is 2 or sometimes 3 or 4, special words are used instead of toți, just as the word both replaces *all two in English. The most frequent of these words are:
amândoi/amîndoi, amândouă/amîndouă "both", with the genitive-dative form amândurora/amîndurora, which does not follow the usual declension rules;
ambii, ambele (also "both", but somewhat formal);
tustrei, tustrele "all three". This and the following collective numerals are used mainly for people and reflects a rather old style.
câteșitrei/cîteștrei, câteșitrele/cîteștrele "all three";
tuspatru "all four";
câteșipatru/cîteșipatru "all four".
Adverbial numbers
The adverbial number is the number used to show the repetition of a certain event, in constructions such as de cinci ori "five times". The table below shows a few examples of adverbial numbers.
For number 1 the usual form is o dată ("once", "one time"). The construction o oară is possible, but rarely used. In the plural, the adverbial numbers are formed using the preposition de, the cardinal number in the feminine, and the noun ori "times", which is the plural of the feminine noun oară.
Sample sentences:
Am citit cartea de trei ori. "I've read the book three times."
„Poștașul sună întotdeauna de două ori” "The postman always rings twice"
Approximate numbers can be used, like in the examples below.
Ți-am spus de zeci de ori că nu mă interesează. "I've told you dozens (literally: tens) of times I'm not interested."
Am ascultat cîntecul acesta de sute de ori. "I've listened to this song hundreds of times."
Multiplicative numbers
For some numbers, special words are used to show multiplication of size, number, etc. The table below gives the most frequent such words, with their English equivalents.
The traditional multiplicative numbers are formed from the respective cardinal number with the prefix în- (changed into îm- when the following sound is a bilabial plosive), and the suffix -it, the same used to form the past participle of a large category of verbs.
In contemporary Romanian the neologisms are more frequently used.
The multiplicative number can be used as adjective and as adverb. Examples:
Adjective (note the gender agreement):
salariu întreit, salariu triplu ("triple wage", "wage three times as much");
putere întreită, putere triplă "three times more power".
Adverb (no agreement required):
Am muncit întreit. Am muncit triplu. "I worked three times harder."
Am economisit înzecit față de anul trecut. "I saved ten times as much as last year."
Often instead of the multiplicative numbers an adverbial construction is used. This can be applied for any number larger than 1.
Am muncit de trei ori mai mult față de anul trecut și am primit un salariu de zece ori mai mare. "I worked three times more than last year and earned a salary ten times bigger."
Fractional numbers
Numbers expressed as parts of a unit (such as "two thirds") are named using the cardinal number, in its masculine form, with the suffix -ime. Other morphological changes take place, as shown below.
A number like 3/5 is expressed as trei cincimi "three fifths". Since all the fractional number names behave like feminine nouns, when the numerator is 1, 2, or any other number with a distinct feminine form, that form must be used: două treimi (2/3). The preposition de is used depending also on the numerator: douăzeci de sutimi (20/100), o sută zece miimi (110/1000).
In music several other such words are frequently used for note lengths:
șaisprezecime "sixteenth note";
treizecișidoime "thirty-second note" - often pronounced treijdoime (informal);
șaizecișipătrime "sixty-fourth note" - often pronounced șaișpătrime (informal).
Fractions involving larger numbers tend to become hard to read. Especially in mathematics it is common to read fractions only using cardinal numbers and the words pe or supra ("on", "over"). For example, două treimi "two thirds" becomes doi pe trei or doi supra trei.
Ordinal numbers
The ordinal number is used to express the position of an object in an ordered sequence, as shown in English by words such as first, second, third, etc. In Romanian, with the exception of number 1, all ordinal numbers are named based on the corresponding cardinal number. Two gender-dependent forms exist for each number. The masculine form (also used with neuter nouns) ends in -lea, whereas the feminine form ends in -a. Starting from 2 they are preceded by the possessive article al / a.
Examples:
Am terminat de scris al treilea roman. "I finished writing the third novel."
Locuim la a cincea casă pe dreapta. "We live in the fifth house on the right."
Basic forms
The basic forms of the ordinal number are given in the table below. All other forms are made using them.
Number | Masculine | Feminine | Meaning
1 | primul (întâiul/întîiul) | prima (întâia/întîia) | "the first"
2 | al doilea | a doua | "the second"
3 | al treilea | a treia | "the third"
4 | al patrulea | a patra | "the fourth"
5 | al cincilea | a cincea | "the fifth"
6 | al șaselea | a șasea | "the sixth"
7 | al șaptelea | a șaptea | "the seventh"
8 | al optulea | a opta | "the eighth"
9 | al nouălea | a noua | "the ninth"
10 | al zecelea | a zecea | "the tenth"
100 | al o sutălea | a o suta | "the one hundredth"
1000 | al o miilea | a o mia | "the one thousandth"
10^6 | al un milionulea | a o milioana | "the one millionth"
10^9 | al un miliardulea | a o miliarda | "the one billionth"
11-19
Ordinal numbers in this range can be formed by modifying the corresponding cardinal number: the ending -zece is transformed into -zecelea and -zecea for the masculine and feminine ordinal number. Examples:
al unsprezecelea, a unsprezecea "the 11th";
al doisprezecelea, a douăsprezecea "the 12th" (note the gender difference doi-, două-);
al treisprezecelea, a treisprezecea "the 13th", and so on.
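This suffix swap is purely mechanical, as the sketch below illustrates (reusing the TEENS list from the earlier cardinal-number sketch; the only irregularity handled is the distinct feminine cardinal of 12):

```python
def ro_ordinal_teen(n, feminine=False):
    """Ordinal for 11-19: cardinal + 'lea' (masc.) or cardinal + 'a' (fem.)."""
    cardinal = "douăsprezece" if (feminine and n == 12) else TEENS[n - 11]
    return f"a {cardinal}a" if feminine else f"al {cardinal}lea"

assert ro_ordinal_teen(11) == "al unsprezecelea"
assert ro_ordinal_teen(12, feminine=True) == "a douăsprezecea"
```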
20-99
Ordinal numbers in this range that have the unit digit 0 are formed by replacing the ending -zeci of the corresponding cardinal number with -zecilea and -zecea (masculine and feminine):
al douăzecilea, a douăzecea "the 20th";
al treizecilea, a treizecea "the 30th", and so on.
When the unit digit is not 0, the cardinal number is used for the tens and the ordinal number for the units. The only exception is when the unit digit is 1; in this case, instead of primul, prima a different word is used: unulea, una. Examples:
al douăzeci și unulea, a douăzeci și una "the 21st";
al douăzeci și doilea, a douăzeci și doua "the 22nd";
al douăzeci și treilea, a douăzeci și treia "the 23rd", and so on.
All other numbers
The general rule for ordinal number formation is to combine the following elements:
the possessive article al, a;
the cardinal number without the last pronounced digit;
the ordinal number corresponding to the last pronounced digit.
Examples:
101st: al o sută unulea, a o sută una;
210th: al două sute zecelea, a două sute zecea;
700th: al șapte sutelea, a șapte suta;
As seen in the last example above, the ordinal form of the plural of 100, 1000, etc. is needed for this process. These forms are:
Examples with large numbers:
1500th: al o mie cinci sutelea, a o mie cinci suta;
2000th: al două miilea, a două mia;
17,017th: al șaptesprezece mii șaptesprezecilea, a șaptesprezece mii șaptesprezecea;
20,000th: al douăzeci de miilea, a douăzeci de mia;
2,000,000th: al două milioanelea, a două milioana;
2,000,000,000th: al două miliardelea, a două miliarda;
5,500,000,000th: al cinci miliarde cinci sute de milioanelea, a cinci miliarde cinci sute de milioana;
8,621,457,098th: al opt miliarde, șase sute douăzeci și unu de milioane, patru sute cincizeci și șapte de mii, nouăzeci și optulea; a opt miliarde, șase sute douăzeci și una de milioane, patru sute cincizeci și șapte de mii, nouăzeci și opta.
Reverse order
In certain situations the word order in expressing the ordinal number is reversed. This occurs when the object is not necessarily perceived as an element in a sequence but rather as an indexed object. For example, instead of al treilea secol the expression secolul al treilea "third century" is used. Note that the noun must have the definite article appended. Other examples:
etajul al cincilea "fifth floor";
partea a doua "second part, part two";
volumul al treilea "third volume, volume three";
grupa a patra "fourth group".
For simplification, often the cardinal number replaces the ordinal number, although some grammarians criticize this practice: The form secolul douăzeci is seen as an incorrect variant of secolul al douăzecilea "20th century".
For number 1, the form of the ordinal number in this reverse-order construction is întâi (or întîi), in both genders: deceniul întâi "first decade", clasa întâi "first grade". For the feminine, sometimes întâia is used, which until recently used to be considered incorrect by normative works.
The same reverse order is used when naming historical figures:
Carol I (read Carol Întâi);
Carol al II-lea (read Carol al Doilea).
As seen above, ordinal numbers are often written using Roman numerals, especially in this reverse order case. The ending specific to the ordinal numbers (-lea, -a) must be preserved and connected to the Roman numeral with a hyphen. Examples:
secolul al XIX-lea "19th century";
clasa a V-a "5th grade";
volumul I, volumul al II-lea "volume I, II".
Pronunciation
In the morphological processes described above, some pronunciation changes occur that are usually marked in writing. This section gives a few details about those pronunciation aspects not "visible" in the written form.
Non-syllabic "i"
The letter i in the word zeci (both as a separate word and in compounds), although thought by native speakers to indicate an independent sound, is only pronounced as a palatalization of the previous consonant. It does not form a syllable by itself: patruzeci "forty" is pronounced . The same applies to the last i in cinci: , including compounds: 15 is pronounced and 50 is .
However, in the case of ordinal numbers in the masculine form, before -lea the nonsyllabic i becomes a full syllabic i in words like douăzecilea "20th" and in cincilea "5th" .
Semivocalic i can remain a semivowel or switch to a full vowel when followed by -lea: doi , al doilea or ("the second", masculine). It remains a semivowel when followed by -a: a treia ("the third", feminine).
Stress
The stress in numbers from 11 to 19 is on the units number, that is, the first element of the compound. Since in all nine cases that element has the stress on its first syllable, the compound itself will also have the stress on the first syllable. The same is valid for the informal short versions:
unsprezece, unșpe (11);
șaptesprezece, șapteșpe (17);
Numbers in the series 20, 30, ..., 90 have the normal stress on the element -zeci. However, a stress shift to the first element often occurs, probably because that element carries more information:
treizeci (30);
„șaizeci? – Nu, șaptezeci!” "Sixty? – No, seventy!"
Etymology
With few exceptions, the words involved in the formation of Romanian number names are inherited directly from Latin. This includes the names of all the non-zero digits, all the connecting words (și, spre, de), most of the words and prefixes used to express the non-cardinal types of numbers (toți, ori, al, în- etc.), and part of the multiple names (zece, mie). The remainder are largely relatively recent borrowings from French, such as zero, dublu, triplu, minus, plus, virgulă, milion, miliard, etc., most of which are used internationally.
But the most remarkable exception is the word sută, whose origin is still debated. It is possibly an old Slavic borrowing, although the phonetic evolution from sŭto to sută proves hard to explain. A Persian origin has also been suggested.
Usage
Dates. Calendar dates in Romanian are expressed using cardinal numbers, unlike English. For example, "the 21st of April" is 21 aprilie (read douăzeci și unu aprilie). For the first day of a month the ordinal number întâi is often used: 1 Decembrie (read Întâi Decembrie; upper case is used for names of national or international holidays). Normally the masculine form of the number is used everywhere, but when the units digit is 2, the feminine is also frequent: 2 ianuarie can be read both doi ianuarie and două ianuarie; the same applies for days 12 and 22.
Centuries. Centuries are named using ordinal numbers in reverse order: "14th century" is secolul al paisprezecelea (normally written secolul al XIV-lea). Cardinal numbers are often used although considered incorrect: secolul paisprezece. See above for details.
Royal titles. Ordinal numbers (in reverse word order) are used for naming ruling members of a monarchy and the Popes. For example: Carol al II-lea, Papa Benedict al XVI-lea. See above for details.
Particularities
In Romanian, a number like 1500 is never read in a way similar to English fifteen hundred, but always o mie cinci sute "one thousand five hundred".
Sometimes, the numbers 100 and 1000 are spelled out as una sută and una mie, instead of the usual o sută, o mie. This is to ensure that the number of hundreds or thousands is understood correctly, for example when writing out numbers as words, mostly in contexts dealing with money amounts, in forms, telegrams, etc. For example, the 100 lei note is marked with the text "UNA SUTĂ LEI". Such a spelling is very formal and used almost exclusively in writing.
The title of the book Arabian Nights is translated into Romanian as O mie și una de nopți (textually One thousand and one nights), using the conjunction și although not required by the number naming rules.
See also
Names of numbers in English
Notes
References
The Number System of Romanian
Numbers in Indo-European Languages
Detailed Romanian grammar with a section on numerals (PDF, 183 pages, 4.6 MB)
DEX online, a collection of Romanian dictionaries.
Web DEX online, web 2.0 Romanian dictionaries.
Narcisa Forăscu, "Grammar difficulties of the Romanian language": use the index on the left and select the terms "numerale" and "de (prepoziție)".
Capidan, Theodor. Aromânii, dialectul Aromân, Academia Română, Studii și cercetări, XX, 1932.
Romanian grammar
Numerals | Romanian numbers | Mathematics | 7,963 |
44,724,074 | https://en.wikipedia.org/wiki/Agricultural%20Hall%20of%20Fame%20of%20Quebec | The Agricultural Hall of Fame of Quebec (French: Temple de la renommée de l'agriculture du Québec) honours and celebrates those who have made a lasting contribution to the advancement in the field of agriculture in the province of Quebec, Canada.
A non-profit association founded in 1991 and located in Saint-Hyacinthe, Quebec, the Agricultural Hall of Fame of Quebec is administered by a nine-member board elected at an annual general assembly.
Gallery
The hall of fame displays portraits of its inductees in a gallery. The gallery was originally located at the ExpoCité complex in Quebec City but was moved to the La Coop building in Saint-Hyacinthe in 2014. Candidates for induction are submitted by members of the association, who then form a selection committee. Each year's inductees are honored at a banquet following the general assembly, where their portraits are unveiled and permanently hung in the gallery.
Notable inductees
See also
Canadian Agricultural Hall of Fame
List of agriculture awards
References
Halls of fame in Canada
Awards established in 1991
1991 establishments in Quebec
Agriculture awards
Quebec awards | Agricultural Hall of Fame of Quebec | Technology | 227 |
62,262,369 | https://en.wikipedia.org/wiki/Topo%20Chico | Topo-Chico is a brand of sparkling mineral water from Mexico. Topo-Chico is both naturally carbonated at the source and artificially carbonated.
History
Topo-Chico has been sourced from and bottled in Monterrey, Mexico since 1895. The drink takes its name from the mountain Cerro del Topo Chico in Monterrey.
In 2017, The Coca-Cola Company purchased Topo-Chico for $220 million. The brand was originally popular in northern Mexico and Texas, with the Coca-Cola Company later helping popularize it across the United States. The drink has a cult following.
Ranch water, a cocktail popular in Texas, is made with tequila, lime juice and Topo-Chico, served over ice. A similar drink, the Chilton, substitutes lemon for the lime and vodka for the tequila, and adds salt on the rim. The Chilton allegedly derives its name from a doctor in Lubbock.
Topo-Chico Hard Seltzer
In 2021, The Coca-Cola Company used its sparkling mineral water brand Topo-Chico to launch a range of vegan-friendly alcoholic hard seltzers in the United Kingdom and, with Molson Coors, in the United States. The range includes Tangy Lemon Lime, Tropical Mango and Cherry Acai flavors in the United Kingdom, and flavors such as Tangy Lemon Lime, Tropical Mango, Strawberry Guava and Exotic Pineapple in the US.
In early 2022, Topo Chico launched its Topo Chico Ranch Water hard seltzer in select markets, along with the national rollout of its variety pack. The product is now available in stores across Alabama, Arizona, California, Colorado, Georgia, New Mexico, Oklahoma, Tennessee, and Texas.
Neither Topo Chico Hard Seltzer nor Topo Chico Ranch Water are made with mineral water from the original Topo Chico spring. Rather, they are “inspired by the taste” of the original drink.
Legal issues
In 2023, a New York resident sued Coca-Cola because its Topo Chico Margarita Hard Seltzers do not contain tequila and cited that the product's packaging was misleading about the contents of the beverage. The lawsuit was dismissed later that year. In 2024, a Florida resident brought a similar suit against Coca-Cola, also citing that the product's packaging includes "false and misleading representations and omissions" suggesting that the product contains tequila.
In popular culture
"Topo Chico" is the subject and title of the last song on Robert Ellis's 2019 album Texas Piano Man.
Topo Chico is featured on the album cover of Seattle WA band iji's 2013 album UNLTD. COOL DRINKS.
See also
List of bottled water brands
List of Coca-Cola brands
References
Further reading
External links
Topo Chico Water Analysis Report
Coca-Cola brands
Mexican brands
Mineral water
Monterrey | Topo Chico | Chemistry | 572 |
55,669,529 | https://en.wikipedia.org/wiki/Costume%20book | A costume book is a collection of images or figures of dress worn by different people of different ranks and places. It emerged as a pictorial genre in the sixteenth century in Europe. Earlier costume books include figures from around the world. They are sometimes accompanied by text describing the costume and customs. An example of a costume book by Cesare Vecellio is Degli habit antichi et modern di diverse parti del mondo, published in Venice by Damaro Zen in 1590 and subsequently revised and published by the Sessa brothers in 1598 under the title Habiti antichi et modern di tutto il mondo.
Costume books are difficult to define, as they may include hand-painted and printed illustrations resembling travel accounts or encyclopedic collections. Early costume books are seen as ethnographic studies for understanding foreign cultures, especially from the era before photography was invented.
A significant example of an early 19th century costume book is French painter Louis Dupré's Voyage à Athènes et à Constantinople (Paris: Dondey-Dupré, 1825). The book primarily depicts the inhabitants of Ottoman Greece. It emphasizes modernity and cultural diversity with a heavily philhellenic bias. However, it moves away from the stereotypes common in other costume studies in order to delineate the antiquity of the contemporary Greek scene.
Two major art historians working on costume books are Ann Rosalind Jones and Ulrike Ilg. Ilg discusses the manner in which costume books transposed the issue of morality upon clothing, noting how various albums dealt with concepts of modesty and luxury, particularly within the context of existing sumptuary laws and other regulations placed on dress.
References
Costume design
Design books | Costume book | Engineering | 337 |
8,471,957 | https://en.wikipedia.org/wiki/Phosphophyllite | Phosphophyllite (, and phosphate) is a rare mineral with the chemical formula , composed of hydrated zinc phosphate. It is highly prized by collectors for its rarity and for its delicate bluish green colour. Phosphophyllite is rarely cut because it is fragile and brittle, and large crystals are too valuable to be broken up.
The finest phosphophyllite crystals come from Potosí, Bolivia, but it is no longer mined there. Other sources include New Hampshire, United States and Hagendorf, Bavaria, Germany. It is often found in association with the minerals chalcopyrite and triphylite.
Phosphophyllite has been synthesized by the addition of diammonium phosphate to a solution of zinc and iron sulfate.
Popular culture
An anthropomorphic form of phosphophyllite is the protagonist of the manga and anime series Land of the Lustrous, with key features of the mineral such as its brittle nature and vibrant color reflected in their character traits and design.
References
Zinc minerals
Iron(II) minerals
Phosphate minerals
Tetrahydrate minerals
Luminescent minerals
Monoclinic minerals
Minerals in space group 14 | Phosphophyllite | Chemistry | 241 |
32,908,673 | https://en.wikipedia.org/wiki/Bibenzyl | Bibenzyl is the organic compound with the formula (C6H5CH2)2. It can be viewed as a derivative of ethane in which one phenyl group is bonded to each carbon atom. It is a colorless solid.
Occurrences
The compound is the product of the coupling of a pair of benzyl radicals.
Bibenzyl forms the central core of some natural products like dihydrostilbenoids and isoquinoline alkaloids. Marchantins are a family of bis(bibenzyl)-containing macrocycles.
See also
Benzil
Benzoin
References
Hydrocarbons
Benzyl compounds | Bibenzyl | Chemistry | 133 |
16,759 | https://en.wikipedia.org/wiki/Kevlar | Kevlar (para-aramid) is a strong, heat-resistant synthetic fiber, related to other aramids such as Nomex and Technora. Developed by Stephanie Kwolek at DuPont in 1965, the high-strength material was first used commercially in the early 1970s as a replacement for steel in racing tires. It is typically spun into ropes or fabric sheets that can be used as such, or as an ingredient in composite material components.
Kevlar has many applications, ranging from bicycle tires and racing sails to bulletproof vests, all due to its high tensile strength-to-weight ratio; by this measure it is five times stronger than steel. It is also used to make modern marching drumheads that withstand high impact; and for mooring lines and other underwater applications.
A similar fiber called Twaron with the same chemical structure was developed by Akzo in the 1970s; commercial production started in 1986, and Twaron is manufactured by Teijin Aramid.
History
Poly-paraphenylene terephthalamide (K29) – branded Kevlar – was invented by the Polish-American chemist Stephanie Kwolek while working for DuPont, in anticipation of a gasoline shortage. In 1964, her group began searching for a new lightweight strong fiber to use for light, but strong, tires. The polymers she had been working with, poly-p-phenylene-terephthalate and polybenzamide, formed liquid crystals in solution, something unique to polymers at the time.
The solution was "cloudy, opalescent upon being stirred, and of low viscosity" and usually was thrown away. However, Kwolek persuaded the technician, Charles Smullen, who ran the spinneret, to test her solution, and was amazed to find that the fiber did not break, unlike nylon. Her supervisor and her laboratory director understood the significance of her discovery and a new field of polymer chemistry quickly arose. By 1971, modern Kevlar was introduced. However, Kwolek was not very involved in developing the applications of Kevlar.
In 1971, Lester Shubin, who was then the Director of Science and Technology for the National Institute for Law Enforcement and Criminal Justice, suggested using Kevlar to replace nylon in bullet-proof vests. Prior to the introduction of Kevlar, flak jackets made of nylon had provided much more limited protection to users. Shubin later recalled how the idea developed: "We folded it over a couple of times and shot at it. The bullets didn't go through." In tests, they strapped Kevlar onto anesthetized goats and shot at their hearts, spinal cords, livers and lungs. They monitored the goats' heart rate and blood gas levels to check for lung injuries. After 24 hours, one goat died and the others had wounds that were not life threatening. Shubin received a $5 million grant to research the use of the fabric in bullet-proof vests.
Kevlar 149 was invented by Jacob Lahijani of DuPont in the 1980s.
Production
Kevlar is synthesized in solution from the monomers 1,4-phenylene-diamine (para-phenylenediamine) and terephthaloyl chloride in a condensation reaction yielding hydrochloric acid as a byproduct. The result has liquid-crystalline behavior, and mechanical drawing orients the polymer chains in the fiber's direction. Hexamethylphosphoramide (HMPA) was the solvent initially used for the polymerization, but for safety reasons, DuPont replaced it with a solution of N-methyl-pyrrolidone and calcium chloride. As this process had been patented by Akzo (see above) in the production of Twaron, a patent war ensued.
Kevlar production is expensive because of the difficulties arising from using concentrated sulfuric acid, needed to keep the water-insoluble polymer in solution during its synthesis and spinning.
Several grades of Kevlar are available:
Kevlar K-29 – in industrial applications, such as cables, asbestos replacement, tires, and brake linings.
Kevlar K49 – high modulus used in cable and rope products.
Kevlar K100 – colored version of Kevlar
Kevlar K119 – higher-elongation, flexible and more fatigue resistant
Kevlar K129 – higher tenacity for ballistic applications
Kevlar K149 – highest tenacity for ballistic, armor, and aerospace applications
Kevlar AP – 15% higher tensile strength than K-29
Kevlar XP – lighter weight resin and KM2 plus fiber combination
Kevlar KM2 – enhanced ballistic resistance for armor applications
The ultraviolet component of sunlight degrades and decomposes Kevlar, a problem known as UV degradation, and so it is rarely used outdoors without protection against sunlight.
Structure and properties
When Kevlar is spun, the resulting fiber has a tensile strength of about , and a relative density of 1.44 (0.052 lb/in3). The polymer owes its high strength to the many inter-chain bonds. These inter-molecular hydrogen bonds form between the carbonyl groups and NH centers. Additional strength is derived from aromatic stacking interactions between adjacent strands. These interactions have a greater influence on Kevlar than the van der Waals interactions and chain length that typically influence the properties of other synthetic polymers and fibers such as ultra-high-molecular-weight polyethylene. The presence of salts and certain other impurities, especially calcium, could interfere with the strand interactions, and care is taken to avoid their inclusion in its production. Kevlar's structure consists of relatively rigid molecules which tend to form mostly planar sheet-like structures rather like silk protein.
Thermal properties
Kevlar maintains its strength and resilience down to cryogenic temperatures (): in fact, it is slightly stronger at low temperatures. At higher temperatures the tensile strength is immediately reduced by about 10–20%, and after some hours the strength progressively reduces further. For example: enduring for 500 hours, its strength is reduced by about 10%; and enduring for 70 hours, its strength is reduced by about 50%.
Applications
Science
Kevlar is often used in the field of cryogenics for its low thermal conductivity and high strength relative to other materials for suspension purposes. It is most often used to suspend a paramagnetic salt enclosure from a superconducting magnet mandrel in order to minimize any heat leaks to the paramagnetic material. It is also used as a thermal standoff or structural support where low heat leaks are desired.
A thin Kevlar window has been used by the NA48 experiment at CERN to separate a vacuum vessel from a vessel at nearly atmospheric pressure, both in diameter. The window has provided vacuum tightness combined with a reasonably small amount of material (only 0.3% to 0.4% of a radiation length).
Protection
Kevlar is a well-known component of personal armor such as combat helmets, ballistic face masks, and ballistic vests. The PASGT helmet and vest that were used by United States military forces used Kevlar as a key component in their construction. Other military uses include bulletproof face masks and spall liners used to protect the crews of armoured fighting vehicles. Nimitz-class aircraft carriers use Kevlar reinforcement in vital areas. Civilian applications include: high heat resistance uniforms worn by firefighters, body armour worn by police officers, security, and police tactical teams such as SWAT.
Kevlar is used to manufacture gloves, sleeves, jackets, chaps and other articles of clothing designed to protect users from cuts, abrasions and heat. Kevlar-based protective gear is often considerably lighter and thinner than equivalent gear made of more traditional materials.
It is used for motorcycle safety clothing, especially in the areas featuring padding such as the shoulders and elbows. In the sport of fencing it is used in the protective jackets, breeches, plastrons and the bib of the masks. It is increasingly being used in the peto, the padded covering which protects the picadors' horses in the bullring. Speed skaters also frequently wear an under-layer of Kevlar fabric to prevent potential wounds from skates in the event of a fall or collision.
Sport
In kyudo, or Japanese archery, it may be used for bow strings, as an alternative to the more expensive hemp. It is one of the main materials used for paraglider suspension lines. It is used as an inner lining for some bicycle tires to prevent punctures. In table tennis, plies of Kevlar are added to custom ply blades, or paddles, in order to increase bounce and reduce weight. Tennis racquets are sometimes strung with Kevlar. It is used in sails for high performance racing boats.
In 2013, with advancements in technology, Nike used Kevlar in shoes for the first time. It launched the Elite II Series, enhancing its earlier basketball shoes by using Kevlar in the anterior of the shoe as well as in the laces. This was done to decrease the elasticity of the tip of the shoe in contrast to the conventionally used nylon, as Kevlar expands by about 1% compared with roughly 30% for nylon. Shoes in this range included the LeBron, HyperDunk and Zoom Kobe VII. However, these shoes were launched at a price range much higher than the average cost of basketball shoes. Kevlar was also used in the laces for the Adidas F50 adiZero Prime football boot.
Several companies, including Continental AG, manufacture cycle tires with Kevlar to protect against punctures.
Folding-bead bicycle tires, introduced to cycling by Tom Ritchey in 1984, use Kevlar as a bead in place of steel for weight reduction and strength. A side effect of the folding bead is a reduction in shelf and floor space needed to display cycle tires in a retail environment, as they are folded and placed in small boxes.
Music
Kevlar has also been found to have useful acoustic properties for loudspeaker cones, specifically for bass and mid range drive units. Additionally, Kevlar has been used as a strength member in fiber optic cables such as the ones used for audio data transmissions.
Kevlar can be used as an acoustic core on bows for string instruments. Kevlar's physical properties provide strength, flexibility, and stability for the bow's user. To date, the only manufacturer of this type of bow is CodaBow.
Kevlar is also presently used as a material for tailcords (a.k.a. tailpiece adjusters), which connect the tailpiece to the endpin of bowed string instruments.
Kevlar is sometimes used as a material on marching snare drums. It allows for an extremely high amount of tension, resulting in a cleaner sound. There is usually a resin poured onto the Kevlar to make the head airtight, and a nylon top layer to provide a flat striking surface. This is one of the primary types of marching snare drum heads. Remo's Falam Slam patch is made with Kevlar and is used to reinforce bass drum heads where the beater strikes.
Kevlar is used in the woodwind reeds of Fibracell. The material of these reeds is a composite of aerospace materials designed to duplicate the way nature constructs cane reed: very stiff but sound-absorbing Kevlar fibers are suspended in a lightweight resin formulation.
Motor vehicles
Kevlar is sometimes used in structural components of cars, especially high-value performance cars such as the Ferrari F40.
The chopped fiber has been used as a replacement for asbestos in brake pads. Aramids such as Kevlar release less airborne fibres than asbestos brakes and do not have the carcinogenic properties associated with asbestos.
Other uses
Wicks for fire dancing props are made of composite materials with Kevlar in them. Kevlar by itself does not absorb fuel very well, so it is blended with other materials such as fiberglass or cotton. Kevlar's high heat resistance allows the wicks to be reused many times.
Kevlar is sometimes used as a substitute for Teflon in some non-stick frying pans.
Kevlar fiber is used in rope and in cable, where the fibers are kept parallel within a polyethylene sleeve. The cables have been used in suspension bridges such as the bridge at Aberfeldy, Scotland. They have also been used to stabilize cracking concrete cooling towers by circumferential application followed by tensioning to close the cracks. Kevlar is widely used as a protective outer sheath for optical fiber cable, as its strength protects the cable from damage and kinking. When used in this application it is commonly known by the trademarked name Parafil.
Kevlar was used by scientists at Georgia Institute of Technology as a base textile for an experiment in electricity-producing clothing. This was done by weaving zinc oxide nanowires into the fabric. If successful, the new fabric will generate about 80 milliwatts per square meter.
A retractable Kevlar roof was a key part of the design of the Olympic Stadium in Montreal, built for the 1976 Summer Olympics. The roof was spectacularly unsuccessful: it was completed ten years late and was replaced just ten years later, in May 1998, after a series of problems.
Kevlar can be found as a reinforcing layer in rubber bellows expansion joints and rubber hoses, for use in high temperature applications, and for its high strength. It is also found as a braid layer used on the outside of hose assemblies, to add protection against sharp objects.
Some cellphones (including the Motorola RAZR Family, the Motorola Droid Maxx, OnePlus 2 and Pocophone F1) have a Kevlar backplate, chosen over other materials such as carbon fiber due to its resilience and lack of interference with signal transmission.
Kevlar fiber/epoxy-matrix composite materials can be used in marine current turbines (MCTs) or wind turbines because of their high specific strength and light weight compared to other fibers.
Composite materials
Aramid fibers are widely used for reinforcing composite materials, often in combination with carbon fiber and glass fiber. The matrix for high performance composites is usually epoxy resin. Typical applications include monocoque bodies for Formula 1 cars, helicopter rotor blades, tennis, table tennis, badminton and squash rackets, kayaks, cricket bats, and field hockey, ice hockey and lacrosse sticks.
Kevlar 149, the strongest and most crystalline variant of the fiber, is an alternative in certain parts of aircraft construction. The wing leading edge is one application, as Kevlar is less prone than carbon or glass fiber to breaking in bird collisions.
See also
Innegra S
Ultra-high-molecular-weight polyethylene
Twaron
Vectran
References
External links
Aramids
Matweb material properties of Kevlar
Kevlar
Kevlar in body armor
Synthesis of Kevlar
Aberfeldy Footbridge over the River Tay
Kevlar at Plastics Wiki
Organic polymers
Body armor
DuPont products
Synthetic fibers
Technical fabrics
Brand name materials
Products introduced in 1965
American inventions | Kevlar | Chemistry | 3,166 |
8,050,342 | https://en.wikipedia.org/wiki/Nose%20ring%20%28animal%29 | A nose ring is inserted into the nose of an animal. Nose rings are used to control bulls and occasionally cows, and to help wean young cattle by preventing suckling. Nose rings are used on pigs to discourage rooting. Some nose rings are installed through a pierced hole in the nasal septum or rim of the nose and remain there, while others are temporary tools.
History
Historically, the use of nose rings for controlling animals dates to the dawn of recorded human civilization. They were used in ancient Sumer and are seen on the Standard of Ur, where they appear on both bovines and equines. There are theories that the rod and ring of the rod-and-ring symbol represent a shepherd's crook and a nose rope.
Calf-weaning nose ring
Calf-weaning nose rings, sometimes called weaners, are pain-based anti-suckling devices. These nose rings (usually made of plastic) clip onto the nose without piercing it, and are reusable. They provide an alternative to separating calves from their mothers during the weaning period. They have plastic spikes which are uncomfortable for the dam when her calf presses against her udder, causing her to reject the calf's efforts at suckling. Use of calf-weaning nose rings reduces the stress of weaning by separating it into two stages. First, the calf is weaned from suckling milk—this stage usually lasts up to 14 days. Then later the calf is separated from its dam. Weaning nose rings are also available for sheep and goats.
Rings for adult cattle
The nose ring assists the handler to control a potentially dangerous animal with minimal risk of injury or disruption by exerting stress on one of the most sensitive parts of the animal, the nose. Bulls, especially, are powerful and sometimes unpredictable animals which, if uncontrolled, can kill or severely injure a human handler.
Control of the bull may be done by holding the ring by hand, attaching a lead rope to it, or clipping on a bull staff or bull pole. A rope or chain from the ring may be attached to a bull's horns or to a head-collar for additional control. A short length of chain or rope may be left hanging loose from the ring of an aggressive bull, so when he ducks in a threatening manner the bull will step on the chain and be deterred from attacking. This dangling lead may also facilitate capture and control of a frisky bull.
For safety reasons, many show societies require bulls over 10 months to be accompanied by two people, wear a halter and lead, and be led with a rope, chain, or bull pole attached to the bull's nose ring. Some shows require other cattle to be led with nose grips (bulldogs).
A bull pole or bull staff is a wooden or metal pole with a special hook on the end that snaps onto the nose ring. The James Safety First Bull Staff (1919) was a five-foot-long steel tube with a lock hook on the bull's end operated from the handler's end of the pole. The pole is used to keep a distance between the handler and the bull, and can be used to push a bull out of a pen without requiring the handler to enter the pen for cleaning or feeding. There is some risk that a bull might drive the staff into the handler if the bull misbehaves. One veterinary text recommends the use of a bull staff in addition to the nose ring: "If you do choose to have a bull, be sure you are prepared to handle him properly. Many handlers rely on a nose ring to control a bull. But a ring in his nose is no good unless you have a bull staff and use it. A bull staff is a pole with a snap in the end that clips to the bull ring. Leading a bull with a staff gives you a lot more handling power as the bull can't get any closer to you than the length of the staff allows. Leading him only by a chain in the ring lets him run over you at will. Even with a staff, it's smart to never completely trust the ring; I have seen bulls rip rings out of their noses when they got angry enough."
Bull rings vary in diameter depending on the size of the bull. They are commonly made from aluminium, stainless steel or copper, in the form of a pair of hinged semicircles, held closed by a small brass bolt whose head is broken off during installation. If a ring needs to be removed (for example, if the bull has grown out of it), it is cut or unscrewed.
The ring is normally placed on the bull between 9 and 12 months of age. It is usually done by a veterinarian, who pierces the septum with a scalpel or punch. Self-piercing rings (with sharp ends designed to be pressed through the septum and then pulled together with a screw) have been available for many years; these are also usually installed by a veterinarian rather than the owner.
Nose tongs
Another restraint method is tongs which temporarily grasp the septum. They are variously called nose clamps, nose tongs, dogs, bulldogs, bull tongs, or barnacles.
Self-locking or spring-closing show-lead nose rings, also called "bulldogs" or nose grips, are removable rings that do not require the nose to be pierced. They are often used on steers and cows, along with a halter, at agricultural shows, or when handling cattle for examination, marking or treatment. They stay shut until released, and usually have a loop for the attachment of a cord or lead rope. They give similar control to a bull ring without the need for permanent attachment.
Bull-holders, also known as bull-tongs, have a pliers action and are used for short periods on grown cattle when they are being mouthed or drenched. A chain, rope or strap keeps the grips closed and may be passed over a bar at the front of a head bail to elevate the head. The thumb and forefinger may also be used in this way on smaller animals.
Pig nose rings
Rooting is the act of a pig nudging into something with its snout, such as into the dirt to unearth plants to eat. In some circumstances, owners of pigs may find this undesirable. Nose rings make rooting painful for the animal, although a ringed pig may still be able to forage freely through leaf litter and surface vegetation. Pig nose-ringing may sometimes be required by local regulations, as when farm pigs are released into public woods to pannage (such as on the New Forest in southern England).
Nose rings specifically designed for pigs usually consist of open copper or steel wire rings with sharp ends. These are typically clipped to the rim of the nose rather than through the septum, as a ring through the rim is far more painful to the pig and is therefore considered more effective for deterring the pig from rooting than a ring through the septum. As they may sometimes become dislodged, an adult pig may be given three to four rings.
See also
Bull § Handling
Animal husbandry
References
Animal welfare
Animal equipment
Cattle
Bulls
Pigs
Livestock
Metal rings
Farming tools | Nose ring (animal) | Biology | 1,488 |
24,202,148 | https://en.wikipedia.org/wiki/C6H3N3O8 | The molecular formula C6H3N3O8 may refer to:
Styphnic acid | C6H3N3O8 | Chemistry | 38 |
33,678,989 | https://en.wikipedia.org/wiki/Mayer%27s%20reagent | Mayer's reagent is an alkaloidal precipitating reagent used for the detection of alkaloids in natural products. Mayer's reagent is freshly prepared by dissolving mercuric chloride (1.36 g) and potassium iodide (5.00 g) in water (100.0 ml). Most alkaloids are precipitated from neutral or slightly acidic solution by Mayer's reagent (potassiomercuric iodide solution) to give a cream-coloured precipitate. This test was invented by and named after the German chemist Julius Robert von Mayer (1814–1878).
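The active precipitating species is generally described as the tetraiodomercurate(II) ion; a sketch of the underlying reactions, assuming the standard formulation of the reagent (the quoted quantities correspond to an excess of potassium iodide, which keeps the complex in solution):

HgCl2 + 2 KI → HgI2 + 2 KCl
HgI2 + 2 KI → K2[HgI4] (potassium tetraiodomercurate(II))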
References
Potassium compounds
Mercury(II) compounds
Alkaloids
Chemical tests
Drug testing reagents | Mayer's reagent | Chemistry | 163 |
61,585,646 | https://en.wikipedia.org/wiki/NGC%204312 | NGC 4312 is an edge-on unbarred spiral galaxy located about 55 million light-years away in the constellation Coma Berenices. It was discovered by astronomer William Herschel on January 14, 1787. NGC 4312 is a member of the Virgo Cluster and is a LINER galaxy.
It has undergone ram-pressure stripping in the past.
Black hole
NGC 4312 may harbor an intermediate-mass black hole with an estimated mass ranging from 10,000 (10^4) to 300,000 (3×10^5) solar masses.
See also
List of NGC objects (4001–5000)
References
External links
4312
040095
Coma Berenices
17870114
Unbarred spiral galaxies
7442
Virgo Cluster
LINER galaxies | NGC 4312 | Astronomy | 156 |
6,881,874 | https://en.wikipedia.org/wiki/Sakari%20Pinom%C3%A4ki | Sakari Pinomäki (1933–2011) was a Finnish systems engineer and an inventor, who pioneered the mechanized forestry industry. He was the founder of PIKA Forest Machines, which produced the first purpose-built forest machine in 1964 in Ylöjärvi, Finland. He held more than 50 patents for his inventions.
History
Sakari Pinomäki's first company, PIKA Forest Machines, is credited with designing the first self-propelled tree-length timber processor, the PIKA Model 60, in 1968, and the first fully mobile timber "harvester", the PIKA Model 75, in 1974. These machines differed significantly from other "retro-fitted" forestry machines in that they were designed from inception as timber harvesting and processing equipment, and were not conventional farming or earth-moving equipment with additional apparatus welded on to allow timber processing work. Pinomäki coined the term "harvester" to describe his Model 75 machine, which differs from a tree-length processor in that a harvester grips, fells, de-limbs and sections the tree on site, while a processor simply de-limbs a tree that has been felled by chain saws and dragged to the delimbing equipment. His designs and innovations have subsequently been copied by at least five other major manufacturers of heavy timber equipment, including Timberjack, Valmet and Ponsse, and were instrumental in developing the "Scandinavian" system of timber harvesting, which is far more sustainable and nature-conserving than the methods employed until the mid-20th century. The two-machine harvester–forwarder system consequently became the worldwide standard for sustainable forestry.
One of the most significant of PIKA's inventions has been the Paralcon Hydraulic valve system that can be used on any twin boom extending-retracting crane. This valve uses return oil flow pressure to power the extension piston, and flow oil pressure to power the retraction movement of the crane, as opposed to standard configurations that use additional pumps to power these crane movements. The result is that far less motor torque is necessary to operate the crane, which consumes far less fuel; this in turn saves operators money and reduces CO2 emissions for the machinery in use.
Additionally, S. Pinomäki Ky PIKA was first to market in the early 21st century with the world's first production "combination" machine, a single machine that can function in both harvester and forwarder roles. This 50 percent reduction in the machinery needed to perform the same harvesting work means far less environmental pollution from CO2 emissions and less terrain damage from machinery operation, and the concept has again been adopted by every major manufacturer of heavy timber processing equipment.
Sakari died on July 29, 2011, after a battle with pancreatic cancer. He is survived by his two daughters and three grandchildren.
See also
Forestry
Harvester (forestry)
Logging
References
External links
http://www.tts.fi/uk/publication/teho-magazine/teho06_1.htm
http://www.fao.org/docrep/w2809E/w2809e06.htm
http://www.websters-online-dictionary.org/definition/Hoisting+boom+assembly
http://www.pinox.com/suomi/ajankohtaista/uutiset2005/news_17062005_fi.htm
https://web.archive.org/web/20050211073051/http://koneviesti.fi/VANHAT/kv18%202004/keskiosa.html
20th-century Finnish engineers
Systems engineers
1933 births
2011 deaths
Finnish foresters
Forestry equipment | Sakari Pinomäki | Engineering | 762 |
49,842,259 | https://en.wikipedia.org/wiki/Lethal%20Injection%20Secrecy%20Act | The Lethal Injection Secrecy Act is a statute in the US state of Georgia that was signed into law in 2013 by the state's governor, Nathan Deal, and went into effect that July. The law makes the identities of people who prescribe drugs used in lethal injections, as well as those of the companies that produce and supply them, state secrets. It also makes the identities of prison staff who carry out executions a state secret. It has been called the strictest law of its kind in the country.
Legal challenges
In July 2013, the law was challenged by the lawyers of Warren Hill, a prisoner who was sentenced to death in 1989 for murdering his cellmate in prison while serving a life sentence for murdering his girlfriend. Hill's lawyers argued that the law was unconstitutional. On July 18, Fulton County Superior Court Judge Gail S. Tusan granted a stay on Hill's execution, on the basis that the secrecy law violated the First Amendment by concealing information "essential to the determination of the efficacy and potency of lethal injection drugs" from the public. The state appealed this ruling, and in May 2014, the Georgia Supreme Court upheld the secrecy law. On February 2, 2016, the 11th U.S. Circuit Court of Appeals rejected a request from Brandon Astor Jones' lawyers that his execution be stayed on the basis that he had waited too long to request such a stay. The five dissenting judges in this ruling warned of the dangers of the secrecy law's effects—namely, not knowing the qualifications of the company that made the drug or its source.
References
Capital punishment in Georgia (U.S. state)
Georgia (U.S. state) statutes
Lethal injection | Lethal Injection Secrecy Act | Environmental_science | 337 |
1,082,916 | https://en.wikipedia.org/wiki/Magnetic%20helicity | In plasma physics, magnetic helicity is a measure of the linkage, twist, and writhe of a magnetic field.
Magnetic helicity is a useful concept in the analysis of systems with extremely low resistivity, such as astrophysical systems. When resistivity is low, magnetic helicity is conserved over longer timescales, to a good approximation. Magnetic helicity dynamics are particularly important in analyzing solar flares and coronal mass ejections. Magnetic helicity is relevant in the dynamics of the solar wind. Its conservation is significant in dynamo processes, and it also plays a role in fusion research, such as reversed field pinch experiments.
When a magnetic field contains magnetic helicity, it tends to form large-scale structures from small-scale ones. This process can be referred to as an inverse transfer in Fourier space. This property of increasing the scale of structures makes magnetic helicity special in three dimensions, as other three-dimensional flows in ordinary fluid mechanics are the opposite, being turbulent and having the tendency to "destroy" structure, in the sense that large-scale vortices break up into smaller ones, until dissipating through viscous effects into heat. Through a parallel but inverted process, the opposite happens for magnetic vortices, where small helical structures with non-zero magnetic helicity combine and form large-scale magnetic fields. This is visible in the dynamics of the heliospheric current sheet, a large magnetic structure in the Solar System.
Mathematical definition
Generally, the helicity H^F of a smooth vector field F confined to a volume V is the standard measure of the extent to which the field lines wrap and coil around one another. It is defined as the volume integral over V of the scalar product of F and its curl, ∇ × F:

H^F = \int_V \mathbf{F} \cdot \left(\nabla \times \mathbf{F}\right) dV.
Magnetic helicity
Magnetic helicity H_m is the helicity of a magnetic vector potential A, where B = ∇ × A is the associated magnetic field confined to a volume V. Magnetic helicity can then be expressed as

H_m = \int_V \mathbf{A} \cdot \mathbf{B} \, dV.
Since the magnetic vector potential is not gauge invariant, the magnetic helicity is also not gauge invariant in general. As a consequence, the magnetic helicity of a physical system cannot be measured directly. In certain conditions and under certain assumptions, one can however measure the current helicity of a system and from it, when further conditions are fulfilled and under further assumptions, deduce the magnetic helicity.
Magnetic helicity has units of magnetic flux squared: Wb² (webers squared) in SI units and Mx² (maxwells squared) in Gaussian units.
Current helicity
The current helicity H_c, or helicity of the magnetic field B confined to a volume V, can be expressed as

H_c = \int_V \mathbf{B} \cdot \left(\nabla \times \mathbf{B}\right) dV,

where ∇ × B = μ₀J and J is the current density. Unlike magnetic helicity, current helicity is not an ideal invariant (it is not conserved even when the electrical resistivity is zero).
Gauge considerations
Magnetic helicity is a gauge-dependent quantity, because the vector potential A can be redefined by adding a gradient to it (choosing a gauge). However, for perfectly conducting boundaries or periodic systems without a net magnetic flux, the magnetic helicity contained in the whole domain is gauge invariant, that is, independent of the gauge choice. A gauge-invariant relative helicity has been defined for volumes with non-zero magnetic flux on their boundary surfaces.
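In the gauge-invariant periodic case, the helicity can be computed directly from a snapshot of the field. A minimal numerical sketch follows; the grid size, the test field and the Coulomb-gauge construction of A are illustrative choices, not taken from the text:

import numpy as np

n, L = 64, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# Maximally helical Beltrami (ABC-flow-like) test field with curl B = B
B = np.stack([np.sin(Z) + np.cos(Y),
              np.sin(X) + np.cos(Z),
              np.sin(Y) + np.cos(X)])

k1 = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # integer wavenumbers for this box
KX, KY, KZ = np.meshgrid(k1, k1, k1, indexing="ij")
K = np.stack([KX, KY, KZ])
k2 = KX**2 + KY**2 + KZ**2
k2[0, 0, 0] = 1.0   # avoid 0/0; the k = 0 mode (net flux) must vanish anyway

Bh = np.fft.fftn(B, axes=(1, 2, 3))
Ah = 1j * np.cross(K, Bh, axis=0) / k2   # Coulomb-gauge potential: A_k = i k x B_k / k^2
A = np.real(np.fft.ifftn(Ah, axes=(1, 2, 3)))

dV = (L / n) ** 3
H_m = np.sum(A * B) * dV            # magnetic helicity
E_m = 0.5 * np.sum(B * B) * dV      # magnetic energy
print(H_m, 2 * E_m)                 # equal for this k = 1 Beltrami field

For this test field the two printed values agree, reflecting that a fully helical field at wavenumber k has H_m = 2E_m/k, the bound that reappears under § Inverse transfer below.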
Topological interpretation
The name "helicity" is because the trajectory of a fluid particle in a fluid with velocity and vorticity forms a helix in regions where the kinetic helicity . When , the resulting helix is right-handed and when it is left-handed. This behavior is very similar to that found concerning magnetic field lines.
Regions where magnetic helicity is not zero can also contain other sorts of magnetic structures, such as helical magnetic field lines. Magnetic helicity is a continuous generalization of the topological concept of linking number to the differential quantities required to describe the magnetic field. Where linking numbers describe how many times curves are interlinked, magnetic helicity describes how many magnetic field lines are interlinked.
Magnetic helicity is proportional to the sum of the topological quantities twist and writhe for magnetic field lines. The twist is the rotation of the flux tube around its axis, and writhe is the rotation of the flux tube axis itself. Topological transformations can change twist and writhe numbers, but conserve their sum. As magnetic flux tubes (collections of closed magnetic field line loops) tend to resist crossing each other in magnetohydrodynamic fluids, magnetic helicity is very well-conserved.
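For a single closed flux tube carrying magnetic flux Φ, this proportionality takes a compact standard form (stated here for illustration; it is not derived in this article):

H_m = \left(Tw + Wr\right)\Phi^2 = Lk\,\Phi^2,

where Lk is the linking number of the tube's field lines. Topological deformations can exchange twist Tw and writhe Wr, but they leave the sum, and hence the helicity, unchanged.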
As with many quantities in electromagnetism, magnetic helicity is closely related to fluid mechanical helicity, the corresponding quantity for fluid flow lines, and their dynamics are interlinked.
Properties
Ideal quadratic invariance
In the late 1950s, Lodewijk Woltjer and Walter M. Elsässer discovered independently the ideal invariance of magnetic helicity, that is, its conservation when resistivity is zero. Woltjer's proof, valid for a closed system, is repeated in the following:
In ideal magnetohydrodynamics, the time evolution of the magnetic field and of the magnetic vector potential can be expressed using the induction equation as

\frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{v} \times \mathbf{B}), \qquad \frac{\partial \mathbf{A}}{\partial t} = \mathbf{v} \times \mathbf{B} + \nabla \phi,

respectively, where φ is a scalar potential given by the gauge condition (see § Gauge considerations). Choosing the gauge so that the scalar potential vanishes, φ = 0, the time evolution of magnetic helicity in a volume V is given by:

\frac{dH_m}{dt} = \int_V \left( \frac{\partial \mathbf{A}}{\partial t} \cdot \mathbf{B} + \mathbf{A} \cdot \frac{\partial \mathbf{B}}{\partial t} \right) dV = \int_V (\mathbf{v} \times \mathbf{B}) \cdot \mathbf{B} \, dV + \int_V \mathbf{A} \cdot \left( \nabla \times (\mathbf{v} \times \mathbf{B}) \right) dV.

The dot product in the integrand of the first term is zero since B is orthogonal to the cross product v × B, and the second term can be integrated by parts to give

\frac{dH_m}{dt} = \int_V (\mathbf{v} \times \mathbf{B}) \cdot (\nabla \times \mathbf{A}) \, dV + \oint_{\partial V} \left( (\mathbf{v} \times \mathbf{B}) \times \mathbf{A} \right) \cdot d\mathbf{S},

where the second term is a surface integral over the boundary surface ∂V of the closed system. The dot product in the integrand of the first term is zero because ∇ × A = B is orthogonal to v × B. The second term also vanishes because motions inside the closed system cannot affect the vector potential outside, so that v × B = 0 at the boundary surface since the magnetic vector potential is a continuous function. Therefore,

\frac{dH_m}{dt} = 0,

and magnetic helicity is ideally conserved. In all situations where magnetic helicity is gauge invariant, magnetic helicity is ideally conserved without the need for the specific gauge choice φ = 0.
Magnetic helicity remains conserved in a good approximation even with a small but finite resistivity, in which case magnetic reconnection dissipates energy.
Inverse transfer
Small-scale helical structures tend to form larger and larger magnetic structures. This can be called an inverse transfer in Fourier space, as opposed to the (direct) energy cascade in three-dimensional turbulent hydrodynamical flows. The possibility of such an inverse transfer was first proposed by Uriel Frisch and collaborators and has been verified through many numerical experiments. As a consequence, the presence of magnetic helicity is a possibility to explain the existence and sustainment of large-scale magnetic structures in the Universe.
An argument for this inverse transfer is repeated here. It is based on the so-called "realizability condition" on the magnetic helicity Fourier spectrum Ĥ^M_k = Â_k · B̂*_k (where B̂_k is the Fourier coefficient of the magnetic field B at the wavevector k, and similarly for Â_k, the star denoting the complex conjugate). The "realizability condition" corresponds to an application of the Cauchy–Schwarz inequality, which yields:

\left|\hat{H}^M_{\mathbf{k}}\right| \le \frac{2\hat{E}^M_{\mathbf{k}}}{k},

with Ê^M_k = |B̂_k|²/2 the magnetic energy spectrum. To obtain this inequality, the fact that |Â^s_k| = |B̂_k|/k (with Â^s_k the solenoidal part of the Fourier transformed magnetic vector potential, orthogonal to the wavevector in Fourier space) has been used, since B̂_k = i k × Â_k. The factor 2 is not present in the paper since the magnetic helicity is defined there alternatively as ½ ∫_V A · B dV.
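Written out, the Cauchy–Schwarz step above is short (a worked expansion under the same conventions; the compressive part of Â_k drops out of the dot product because B̂_k is orthogonal to k):

\left|\hat{H}^M_{\mathbf{k}}\right| = \left|\hat{\mathbf{A}}^s_{\mathbf{k}} \cdot \hat{\mathbf{B}}^*_{\mathbf{k}}\right| \le \left|\hat{\mathbf{A}}^s_{\mathbf{k}}\right| \left|\hat{\mathbf{B}}_{\mathbf{k}}\right| = \frac{\left|\hat{\mathbf{B}}_{\mathbf{k}}\right|^2}{k} = \frac{2\hat{E}^M_{\mathbf{k}}}{k}.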
One can then imagine an initial situation with no velocity field and a magnetic field present only at two wavevectors p and q. We assume a fully helical magnetic field, which means that it saturates the realizability condition: Ĥ^M_p = 2Ê^M_p/p and Ĥ^M_q = 2Ê^M_q/q. Assuming that all the energy and magnetic helicity transfers are done to another wavevector k, the conservation of magnetic helicity on the one hand and of the total energy (the sum of magnetic and kinetic energy) on the other hand gives:

\hat{H}^M_{\mathbf{k}} = \hat{H}^M_{\mathbf{p}} + \hat{H}^M_{\mathbf{q}},

\hat{E}^T_{\mathbf{k}} = \hat{E}^M_{\mathbf{p}} + \hat{E}^M_{\mathbf{q}}.

The second equality for energy comes from the fact that we consider an initial state with no kinetic energy. Then we necessarily have k ≤ max(p, q). Indeed, if we had k > max(p, q), then:

\hat{H}^M_{\mathbf{k}} = \frac{2\hat{E}^M_{\mathbf{p}}}{p} + \frac{2\hat{E}^M_{\mathbf{q}}}{q} > \frac{2\left(\hat{E}^M_{\mathbf{p}} + \hat{E}^M_{\mathbf{q}}\right)}{k} = \frac{2\hat{E}^T_{\mathbf{k}}}{k} \ge \frac{2\hat{E}^M_{\mathbf{k}}}{k},

which would break the realizability condition. This means that k ≤ max(p, q). In particular, for k < min(p, q), the magnetic helicity is transferred to a smaller wavevector, which means to larger scales.
See also
Woltjer's theorem
References
External links
A. A. Pevtsov's Helicity Page
Mitch Berger's Publications Page
Physical quantities
Plasma parameters
Astrophysics | Magnetic helicity | Physics,Astronomy,Mathematics | 1,749 |
32,955,012 | https://en.wikipedia.org/wiki/GCM%20transcription%20factors | In molecular biology, the GCM transcription factors are a family of proteins which contain a GCM motif. The GCM motif is a domain that has been identified in proteins belonging to a family of transcriptional regulators involved in fundamental developmental processes which comprise Drosophila melanogaster GCM and its mammalian homologues (human GCM1 and GCM2). In GCM transcription factors the N-terminal moiety contains a DNA-binding domain of 150 amino acids. Sequence conservation is highest in this GCM domain. In contrast, the C-terminal moiety contains one or two transactivating regions and is only poorly conserved.
The GCM motif has been shown to be a DNA binding domain that recognises preferentially the nonpalindromic octamer 5'-ATGCGGGT-3'. The GCM motif contains many conserved basic amino acid residues, seven cysteine residues, and four histidine residues. The conserved cysteines are involved in shaping the overall conformation of the domain, in the process of DNA binding and in the redox regulation of DNA binding. The GCM domain represents a new class of Zn-containing DNA-binding domain with no similarity to any other DNA-binding domain. The GCM domain consists of a large and a small domain tethered together by one of the two Zn ions present in the structure. The large and the small domains comprise five- and three-stranded beta-sheets, respectively, with three small helical segments packed against the same side of the two beta-sheets. The GCM domain exercises a novel mode of sequence-specific DNA recognition, where the five-stranded beta-pleated sheet inserts into the major groove of the DNA. Residues protruding from the edge strand of the beta-pleated sheet and the following loop and strand contact the bases and backbone of both DNA strands, providing specificity for its DNA target site.
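As an illustration of the reported binding preference, a hypothetical scan for the octamer on both strands of an invented test sequence (the sequence and the function names are made up for this example):

MOTIF = "ATGCGGGT"

def revcomp(seq: str) -> str:
    """Reverse complement of an upper-case DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def gcm_sites(seq: str):
    """Yield (position, strand) for exact matches to the GCM octamer."""
    for strand, s in (("+", seq), ("-", revcomp(seq))):
        start = 0
        while (i := s.find(MOTIF, start)) != -1:
            # report the position on the forward strand in both cases
            pos = i if strand == "+" else len(seq) - i - len(MOTIF)
            yield pos, strand
            start = i + 1

dna = "GGATGCGGGTCCTTACCCGCATCC"
print(list(gcm_sites(dna)))   # [(2, '+'), (14, '-')]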
References
Protein families | GCM transcription factors | Biology | 400 |
46,822,665 | https://en.wikipedia.org/wiki/H1504%2B65 | H1504+65 is an enigmatic peculiar star in the constellation Ursa Minor. With a surface temperature of 200,000 K (360,000°F) and an atmosphere composed of carbon, oxygen and 2% neon, it is the second hottest white dwarf ever discovered, with only RX J0439.8−6809 being hotter. It is thought to be the stellar core of a post-asymptotic giant branch star, though its composition cannot be explained by current models of stellar evolution.
References
White dwarfs
Ursa Minor | H1504+65 | Astronomy | 119 |
23,342,173 | https://en.wikipedia.org/wiki/Reed%20%28weaving%29 | A reed is part of a weaving loom, and resembles a comb or a frame with many vertical slits. It is used to separate and space the warp threads, to guide the shuttle's motion across the loom, and to push the weft threads into place.
In most floor looms, the reed is held securely by the beater. Floor looms and mechanized looms both use a beater with a reed, whereas inkle weaving and tablet weaving do not use reeds.
History
Modern reeds are made by placing flattened strips of wire (made of carbon or stainless steel) between two half round ribs of wood, and binding the whole together with tarred string.
Historically, reeds were made of reed or split cane. The split cane was then bound between ribs of wood in the same manner as wire is now.
In 1738, John Kay replaced split cane with flattened iron or brass wire, and the change was quickly adopted.
To make a reed, wire is flattened to a uniform thickness by passing it between rollers. The flat wire is then straightened, given rounded edges, and filed smooth. The final step is to cut the wire to the correct length and assemble. The tarred cord that binds the reed together is wrapped around each set of wooden ribs and between the dents to hold the ribs together and at the correct spacing.
The length of the metal wires varies depending on the type of fabric and the type of loom being used; machine-powered cotton looms and hand-powered floor looms each have their own customary wire lengths.
Dents
Both the wires and the slots in the reed are known as dents (namely, teeth). The warp threads pass through the dents after going through the heddles and before becoming woven cloth. The number of dents per inch (or per cm, or per 10 cm) indicates the number of gaps in the reed per unit of width. The number of warp ends across the weaving width determines the fineness of the cloth. One or more warp threads may pass through each dent. The number of warp threads that go through each dent depends on the warp and the desired characteristics of the final fabric, and it is possible that the number of threads per dent is not constant across a whole warp. The number of threads per dent might vary if the weaver alternates two and three threads per dent, in order to get a number of ends per inch that is 2.5 times the number of dents per inch, or if the thickness of the warp threads changes at some point, making the fabric thicker or thinner in that section.
One thread per dent is most common for coarse work. However for finer work (20 or more ends per inch), two or more threads are put through each dent. Threads can be doubled in every other space, so that a reed with 10 dents per inch could give 15 ends per inch, or 20 if the threads were simply doubled. Also, threads can be put in every other dent so as to make a cloth with 6 ends per inch from a reed with 12 dents per inch. Putting more than one thread through each dent reduces friction and the number of reeds that one weaver needs, and is used in weaving mills. If too many threads are put through one dent there may be reed marks left in the fabric, especially in linen and cotton.
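The arithmetic in these examples is easy to capture in a few lines; the following helper is a hypothetical illustration, not taken from any weaving software:

def ends_per_inch(dents_per_inch, threads_per_dent):
    """Ends per inch from a reed and one repeat of the sleying pattern.

    threads_per_dent lists the threads in each dent of one repeat,
    e.g. [1, 2] doubles every other dent and [1, 0] skips every other dent.
    """
    average = sum(threads_per_dent) / len(threads_per_dent)
    return dents_per_inch * average

print(ends_per_inch(10, [1, 2]))  # 15.0: doubled in every other dent
print(ends_per_inch(10, [2]))     # 20.0: threads simply doubled
print(ends_per_inch(12, [1, 0]))  # 6.0: every other dent left empty
print(ends_per_inch(10, [2, 3]))  # 25.0: alternating two and three threads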
For cotton fabrics, reeds typically have between 6 and 90 dents per inch. When the reed has a very high number of dents per inch, it may contain two offset rows of wires. This minimizes friction between the dents and warp threads and prevents loose fibers from twisting and blocking the shed.
Interchangeability
Handweaving looms (including floor and table looms) use interchangeable reeds, where the reeds can vary in width and dents per inch. This allows the same loom to be used for making both very fine and very coarse fabric, as well as weaving threads at dramatically different densities.
The width of the reed sets the maximum width of the warp.
Common reed sizes for the hand-weaver are 6, 8, 10, 12, or 15 dents per inch, although sizes between 5 and 24 are not uncommon. A reed with a larger number of dents per inch is generally used to weave finer fabric with a larger number of ends per inch. Because it is used to beat the weft into place, the reed regulates the distance between threads or groups of threads.
Sleying the reed
Sleying is the term used for pulling the warp threads through the reed, which happens during the warping process (putting a warp on the loom). Sleying is done by inserting a reed hook through the reed, hooking the warp threads and then pulling them through the dent. The warp threads are taken in the order they come from the heddles, so as to avoid crossing threads. If the threads cross, the shed will not open correctly when weaving begins.
Use in cooking
In Emilia-Romagna, Italy, wooden reeds are still used for the traditional making of garganelli and maccheroni al pèttine (macaroni on the reed). A small square of fresh egg pasta is cut, rolled around a stick, and pressed onto a wooden reed.
With this culinary technique, the pasta is ridged around the circumference; extruded pasta could only have longitudinal ridges.
These ridges help the pasta "hold" the dressings like bolognese sauce better than it would without ridges or with longitudinal ones.
References
Weaving equipment | Reed (weaving) | Engineering | 1,141 |
22,782,145 | https://en.wikipedia.org/wiki/Commander%20%28knife%29 | The Commander is a large recurve folding knife made by Emerson Knives, Inc., based on a custom design, the ES1-M, by Ernest Emerson that he originally built for a West Coast Navy SEAL team. It was the winner of the Blade Magazine Overall Knife of the Year Award for 1999.
History
The Commander has its origins in Emerson's CQC-8 or "Banana" folding fighting knife, based on the Bob Taylor Warrior Knife and the Bill Moran ST-23: a knife designed with the blade in line for reverse-grip or sabre-grip fighting. This knife became popular among the British SAS and the US Navy SEALs; however, the SEALs wanted something more aggressive, so Emerson developed the SSRT (Silent Sentry Removal Tool) model: a larger, hooked blade with a serrated, double-edged spine. The blade's profile resembled the horn of a rhinoceros, and its more popular name is the Rhino. The blade folded below the level of the handle scales so the user could not be cut by this extra edge, and a small hole drilled through both handle scales and liners allowed the blade to be held in place so it would not open on a parachute jump and cause harm to the operator. Although the knife functioned perfectly in the field, its final design was too specific for the Navy.
During this same period, Emerson was working on a SERE (Survival, Escape, Resistance, and Evasion) folding knife for troops at Fort Bragg. When officers from Naval Special Warfare saw this knife they felt that with a few small changes such as the addition of a blade-catcher, it would suit their needs: the final result was dubbed the ES1-M. A civilian version was made and called the ES1-C; this model did not include the blade-catcher.
In field-testing it was realized that the blade-catcher would open the knife when drawn from the pocket. Emerson modified this design and secured a patent for it in March 1999. This mechanism is known as the Wave and the knife was added to the production line of Emerson's new factory and was called "The Commander". The Commander was the winner of the Blade Magazine Overall Knife of the Year Award for 1999.
The Emerson Commander was the third model manufactured by Emerson Knives, Inc. The earliest run in 1998 is one of the most sought-after production models by collectors, as the majority of the work from waterjet cutting the liners to grinding the blades was performed by hand, by Emerson himself. In 2000, Emerson Knives offered a larger version based on the original size of the ES1-M and called it the "Super Commander" as well as a 10% downsized version dubbed the "Mini Commander". The first runs of Super Commanders were made as limited editions for Triple Aught Design (TAD) Gear of San Francisco and featured the company's logo on the reverse side of the blade. In 2005 the Super Commander became a regular model in the company's lineup.
In 2006, Emerson released a Commander with a larger clip-point blade and no recurve. This model was specifically made for the hunting market as a skinning knife. Its designation is the CQC-16. In 2009, at the annual Blade Show in Atlanta, Georgia, Emerson announced a collaboration with Kershaw Knives to manufacture an automatic version of the Commander. At the 2010 SHOT Show in Las Vegas, NV, Emerson unveiled the UBR Commander, a 10% scaled-up version of the Super Commander.
Specifications
The Commander features a recurve-shaped blade hardened to a Rockwell hardness of 57–59 HRC. The blade steel is Crucible's 154 CM, although CPM S30V steel, ATS-34, Damascus steel, and titanium with a carbide edge have also been used. The butt end of the knife features a hole for tying a lanyard. Some models are made with partially serrated blades to aid in the cutting of seatbelts or webbing.
The handle material of the Commander is composed of two titanium liners utilizing a Walker liner lock and a double detent as the locking mechanism. The reasons for using titanium as a liner lock material were due to its exceptional strength-to-weight ratio and corrosion resistance. The handle's scales are made from black G-10 Fiberglass, although models were made for a few years utilizing green G-10 and limited runs have been made with a desert tan color. A pocket clip held in place by three screws allows the knife to be clipped to a pocket, web-gear, or MOLLE.
Each model is equipped with Emerson's Wave opening mechanism, aside from one-off versions that lacked it. The Wave is a small hook on the spine of the blade designed to catch the edge of a user's pocket. The blade opens as the knife is drawn.
The Mini Commander and the Super Commander are built to the same specifications, but are scaled down and up from the standard model, respectively, in both blade length and overall length.
In the media
The Commander was featured in the short-lived UPN television series Soldier of Fortune, Inc. and was used by the character Zak in the 1998 movie The Placebo Effect. The knife appears in the military and spy novels of Dennis Chalker and Marcus Wynne, and in Barry Eisler's John Rain series.
References
External links
Emerson Knives Official Site
Patent for the WAVE
Military knives
Equipment of the United States Navy
Pocket knives
Mechanical hand tools
Goods manufactured in the United States | Commander (knife) | Physics | 1,138 |
1,331,039 | https://en.wikipedia.org/wiki/Dirac%20large%20numbers%20hypothesis | The Dirac large numbers hypothesis (LNH) is an observation made by Paul Dirac in 1937 relating ratios of size scales in the Universe to that of force scales. The ratios constitute very large, dimensionless numbers: some 40 orders of magnitude in the present cosmological epoch. According to Dirac's hypothesis, the apparent similarity of these ratios might not be a mere coincidence but instead could imply a cosmology with these unusual features:
The strength of gravity, as represented by the gravitational constant, is inversely proportional to the age of the universe: G ∝ 1/t.
The mass of the universe is proportional to the square of the universe's age: M ∝ t².
Physical constants are actually not constant. Their values depend on the age of the Universe.
Stated in another way, the hypothesis states that all very large dimensionless quantities occurring in fundamental physics should be simply related to a single very large number, which Dirac chose to be the age of the universe.
Background
LNH was Dirac's personal response to a set of large number "coincidences" that had intrigued other theorists of his time. The "coincidences" began with Hermann Weyl (1919), who speculated that the observed radius of the universe, R_U, might also be the hypothetical radius r_H of a particle whose rest energy is equal to the gravitational self-energy of the electron:

R_U \approx r_H,

where

r_H = \frac{e^2}{4\pi\varepsilon_0\, m_H c^2},

with

m_H c^2 = \frac{G m_e^2}{r_e},

and r_e is the classical electron radius, m_e is the mass of the electron, m_H denotes the mass of the hypothetical particle, and r_H is its electrostatic radius.
The coincidence was further developed by Arthur Eddington (1931), who related the above ratios to N, the estimated number of charged particles in the universe (of order 10^80), with the following ratio (m_p denoting the proton mass):

\sqrt{N} \approx \frac{e^2}{4\pi\varepsilon_0 G m_e m_p} \approx 10^{40}.
In addition to the examples of Weyl and Eddington, Dirac was also influenced by the primeval-atom hypothesis of Georges Lemaître, who lectured on the topic in Cambridge in 1933. The notion of a varying-G cosmology first appears in the work of Edward Arthur Milne a few years before Dirac formulated LNH. Milne was inspired not by large number coincidences but by a dislike of Einstein's general theory of relativity. For Milne, space was not a structured object but simply a system of reference in which relations such as this could accommodate Einstein's conclusions:

G = \frac{c^3 t}{M_U},

where M_U is the mass of the universe and t is the age of the universe. According to this relation, G increases over time.
Dirac's interpretation of the large number coincidences
The Weyl and Eddington ratios above can be rephrased in a variety of ways, as for instance in the context of time:

\frac{c\,t}{r_e} \approx 10^{40},

where t is the age of the universe, c is the speed of light and r_e is the classical electron radius. Hence, in units where c = 1 and r_e = 1, the age of the universe is about 10^40 units of time. This is the same order of magnitude as the ratio of the electrical to the gravitational forces between a proton and an electron:

\frac{e^2}{4\pi\varepsilon_0 G m_p m_e} \approx 10^{40}.

Hence, interpreting the charge e of the electron, the masses m_p and m_e of the proton and electron, and the permittivity factor 4πε₀ in atomic units (equal to 1), the value of the gravitational constant G is approximately 10^−40. Dirac interpreted this to mean that G varies with time as G ∝ 1/t. Although George Gamow noted that such a temporal variation does not necessarily follow from Dirac's assumptions, a corresponding change of G has not been found.
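Both ratios are straightforward to check numerically; a rough sketch with rounded constants (the age of the universe is taken as about 13.8 billion years, an assumption not stated in this article):

import math

e = 1.602e-19         # elementary charge, C
eps0 = 8.854e-12      # vacuum permittivity, F/m
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
m_p = 1.673e-27       # proton mass, kg
m_e = 9.109e-31       # electron mass, kg
c = 2.998e8           # speed of light, m/s
r_e = 2.818e-15       # classical electron radius, m
t = 13.8e9 * 3.156e7  # assumed age of the universe, s

# Ratio of electric to gravitational force between a proton and an electron
force_ratio = e**2 / (4 * math.pi * eps0 * G * m_p * m_e)
# Age of the universe in units of the light-crossing time of r_e
age_ratio = t * c / r_e

print(f"force ratio ~ {force_ratio:.1e}")  # ~2.3e39
print(f"age ratio   ~ {age_ratio:.1e}")    # ~4.6e40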
According to general relativity, however, G is constant, otherwise the law of conserved energy is violated. Dirac met this difficulty by introducing into the Einstein field equations a gauge function that describes the structure of spacetime in terms of a ratio of gravitational and electromagnetic units. He also provided alternative scenarios for the continuous creation of matter, one of the other significant issues in LNH:
'additive' creation (new matter is created uniformly throughout space) and
'multiplicative' creation (new matter is created where there are already concentrations of mass).
Later developments and interpretations
Dirac's theory has inspired and continues to inspire a significant body of scientific literature in a variety of disciplines, with it sparking off many speculations, arguments and new ideas in terms of applications. In the context of geophysics, for instance, Edward Teller seemed to raise a serious objection to LNH in 1948 when he argued that variations in the strength of gravity are not consistent with paleontological data. However, George Gamow demonstrated in 1962 how a simple revision of the parameters (in this case, the age of the Solar System) can invalidate Teller's conclusions. The debate is further complicated by the choice of LNH cosmologies: In 1978, G. Blake argued that paleontological data is consistent with the "multiplicative" scenario but not the "additive" scenario. Arguments both for and against LNH are also made from astrophysical considerations. For example, D. Falik argued that LNH is inconsistent with experimental results for microwave background radiation whereas Canuto and Hsieh argued that it is consistent. One argument that has created significant controversy was put forward by Robert Dicke in 1961. Known as the anthropic coincidence or fine-tuned universe, it simply states that the large numbers in LNH are a necessary coincidence for intelligent beings since they parametrize fusion of hydrogen in stars and hence carbon-based life would not arise otherwise.
Various authors have introduced new sets of numbers into the original "coincidence" considered by Dirac and his contemporaries, thus broadening or even departing from Dirac's own conclusions. Jordan (1947) noted that the mass ratio for a typical star (specifically, a star of the Chandrasekhar mass, itself a constant of nature, approx. 1.44 solar masses) and an electron approximates to 1060, an interesting variation on the 1040 and 1080 that are typically associated with Dirac and Eddington respectively. (The physics defining the Chandrasekhar mass produces a ratio that is the −3/2 power of the gravitational fine-structure constant, 10−40.)
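Taking the quoted value of the gravitational fine-structure constant α_G at face value, the order of magnitude of Jordan's ratio follows directly (a one-line check, not a derivation of the Chandrasekhar mass):

\frac{M_{\mathrm{Ch}}}{m_e} \sim \alpha_G^{-3/2} \approx \left(10^{-40}\right)^{-3/2} = 10^{60}.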
Modern studies
Several authors have recently identified and pondered the significance of yet another large number, approximately 120 orders of magnitude. This is for example the ratio of the theoretical and observational estimates of the energy density of the vacuum, which Nottale (1993) and Matthews (1997) associated in an LNH context with a scaling law for the cosmological constant. Carl Friedrich von Weizsäcker identified 10120 with the ratio of the universe's volume to the volume of a typical nucleon bounded by its Compton wavelength, and he identified this ratio with the sum of elementary events or bits of information in the universe.
Valev (2019) found an equation connecting cosmological parameters (for example, the density of the universe) and Planck units (for example, the Planck density). This ratio of densities, and other ratios using four fundamental constants (the speed of light in vacuum c, the Newtonian constant of gravity G, the reduced Planck constant ℏ, and the Hubble constant H), compute to exact dimensionless numbers. This provides evidence for the Dirac large numbers hypothesis by connecting the macro-world and the micro-world.
See also
References
Further reading
External links
Audio of Dirac talking about the large numbers hypothesis
Full transcript of Dirac's speech.
Robert Matthews: Dirac's coincidences sixty years on
The Mysterious Eddington–Dirac Number
Physical cosmology
Obsolete scientific theories
Large Numbers Hypothesis
Astronomical hypotheses
1937 introductions
Coincidence | Dirac large numbers hypothesis | Physics,Astronomy | 1,511 |
1,827,330 | https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20of%20Biochemistry | The Max Planck Institute of Biochemistry (; abbreviated MPIB) is a research institute of the Max Planck Society located in Martinsried, a suburb of Munich. The institute was founded in 1973 by the merger of three formerly independent institutes: the Max Planck Institute of Biochemistry, the Max Planck Institute of Protein and Leather Research (founded 1954 in Regensburg), and the Max Planck Institute of Cell Chemistry (founded 1956 in Munich).
With about 750 employees in currently nine research departments and more than 20 research groups, the MPIB is one of the largest institutes of the Max Planck Society.
Departments
There are nine departments currently in the institute:
Cell and Virus Structure (John A. G. Briggs)
Cellular Biochemistry (Franz-Ulrich Hartl)
Cellular and Molecular Biophysics (Petra Schwille)
Machine Learning and Systems Biology (Karsten Borgwardt)
Molecular Machines and Signaling (Brenda Schulman)
Molecular Medicine (Reinhard Fässler)
Proteomics and Signal Transduction (Matthias Mann)
Structural Cell Biology (Elena Conti)
Totipotency (Kikuë Tachibana)
Research groups
There are 26 research groups currently based at the MPIB, including 3 emeritus research groups:
Bacteriophages: Microbiology, Bacteriophages, RNA polymerases, Structural Biology, Biochemistry (Maria Sokolova)
Cell and Virus Structures: Structural Biology, Cryo-Electron Tomography, Viruses, Membrane trafficking (John Briggs)
Cell Dynamics: Phagocytosis, Actin Dynamics, Cell Motility (Günther Gerisch)
Cellular and Molecular Biophysics: Biophysics, Fluorescence Correlation Spectroscopy, Atomic Force Microscopy, Single Molecule, Synthetic Biology (Petra Schwille)
Cellular Biochemistry: Molecular Chaperones, Protein Folding, Protostasis, Aging and Neurodegenerative Diseases (Franz-Ulrich Hartl)
Chaperonin-assisted Protein Folding: Protein Folding and Assembly, Rubisco, GroEL and GroES, Mass Spectrometry (Manajit Hayer-Hartl)
Chromatin Biology: Genetics and Biochemistry of Chromatin, Transcription, Histone Modifications, Drosophila Development (Jürg Müller)
Chromosome Biology: Cell division, Meiosis (Wolfgang Zachariae)
Computational Systems Biochemistry: Systems Biology, Proteomics, Mass Spectrometry, Bioinformatics (Jürgen Cox)
CryoEM Technology: Cryo-Electron Tomography, Focused Ion Beam Milling, correlated light microscopy, in situ Structural Biology, visual Proteomics (Jürgen Plitzko)
DNA Hybridnanomaterials: DNA Nanotechnology, DNA-Silica Hybridnanomaterials, Bionanotechnology, Biophysics (Amelie Heuer-Jungemann)
Immunoregulation: Immunity, Macrophage: T cell cross-talk, Self-Tolerance, Amino Acid metabolism (Peter Murray)
Machine Learning and Systems Biology: biomedical research, data mining, machine learning, biological systems (Karsten Borgwardt)
Mechanisms of Protein Biogenesis: RNA Biology; Translation Dynamics; Protein Folding; Systems Biology (Danny Nedialkova)
Molecular Imaging and Bionanotechnology: Super-Resolution Microscopy, DNA Nanotechnology, Biophysics, Single-Molecule Studies (Ralf Jungmann)
Molecular Machines and Signaling: Structural Biology, Ubiquitin Proteasome System, Ubiquitin-like Proteins (Brenda Schulman)
Molecular Medicine: Integrin, Adhesion Signalling, Mouse Genetics (Reinhard Fässler)
Molecular Structural Biology: Cryo-Electron Tomography, Electron Microscopical Structure Research, Protein and Cell Structure, Protein Degradation (Wolfgang Baumeister)
Proteomics and Signal Transduction: Mass Spectrometry, Systems Biology, Bioinformatics, Cancer (Matthias Mann)
Structural Cell Biology: Structural Studies, RNA Transport, RNA Surveillance, RNA Degradation (Elena Conti)
Structure and Dynamics of Molecular Machines: DNA Replication Dynamics, Structural Biology, Single-Molecule Imaging, Biophysics (Karl Duderstadt)
Structure Research: Structural Biology, Methods of Protein Crystallography, Protein Degradation, Medicinal Chemistry (Robert Huber)
Totipotency: Mechanistic Cell Biology, Genomics and Biochemistry of Chromatin Reprogramming, Transcription, Mouse Genetics (Kikuë Tachibana)
Translational Medicine: Fibronectin, Integrin, Bone, Disease (Inaam Nakchbandi)
Graduate Program
The International Max Planck Research School for Molecules of Life (IMPRS-ML) is a PhD program covering various aspects of life science ranging from biochemistry to computational biology. The school is run in cooperation with the Max Planck Institute for Biological Intelligence, the Ludwig Maximilian University of Munich, and the Technical University of Munich.
References
External links
Homepage of the Max Planck Institute of Biochemistry (MPIB)
Homepage of the International Max Planck Research School for Molecules of Life (IMPRS-ML)
Biochemistry
Biochemistry research institutes
Research institutes in Munich
Genetics in Germany | Max Planck Institute of Biochemistry | Chemistry,Biology | 1,010 |
59,034,214 | https://en.wikipedia.org/wiki/Digital%20thread | Digital thread, also known as digital chain, is defined as "the use of digital tools and representations for design, evaluation, and life cycle management." It is a data-driven architecture that links data gathered during a product's lifecycle from all involved and distributed manufacturing systems. This data can come from any part of the product's lifecycle, its transportation, or its supply chain. Digital thread "enables the collection, transmission, and sharing of data and information between systems across the product lifecycle" to enable real-time decision making, gather data, and iterate on the product.
The term 'digital thread' was first used in the Global Horizons 2013 report by the USAF Global Science and Technology Vision Task Force. Digital thread was further refined in 2018 by Singh and Willcox at MIT in their paper entitled "Engineering with a Digital Thread". In this academic paper the term digital thread is defined as "a data-driven architecture that links together information generated from across the product lifecycle and is envisioned to be the primary or authoritative data and communication platform for a company’s products at any instance of time."
Digital thread enables "data to be integrated into one platform, allowing seamless use of and ease of access to all data".
Applications
Digital twin
Idaho National Laboratory describes the digital twin as "the merging of integrated and connected data, sensors and instrumentation, artificial intelligence, and online monitoring into a single cohesive unit."
It is a critical capability of model-based systems engineering (MBSE) and the foundation for a Digital twin, which is defined as "a digital replica of a physical entity". In fact, digital thread was first described as related to Digital twin in the Global Horizons 2013 report. Digital thread is a means to gather data for use in the development of a Digital twin; "some argue [digital thread] is the backbone of digital twin applications". "digital thread platforms can capture data from different systems, standardize it, and provide a seamless link between the physical process or product and the digital twin". The term digital thread is also used to describe the traceability of the digital twin back to the requirements, parts and control systems that make up the physical asset.
Although digital thread and Digital twin are "every so often understood to be synonymous...they are not the same as Digital Twin relies on real-time data from its physical counterpart". "In short, digital thread describes the process while digital twin symbolizes technology". "Compared to the digital twin, the digital thread can support decision-making by designing and regulating the data interaction and processing instead of high-fidelity system models".
A digital thread enables a digital twin by ensuring that incoming data is made uniform and easily accessible through the three main data chains (a minimal data-model sketch follows the list):
The Product Innovation chain - Product designs, processes, and design flow are incorporated into the digital thread
The Enterprise Value chain - Supplier information, material data, and manufacturing processes are incorporated into the digital thread.
The Field and Service chain - Maintenance manuals and part availability are incorporated into the digital thread.
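A hypothetical sketch of what a record linking the three chains could look like; the class, field names and values are illustrative only, not a real product API:

from dataclasses import dataclass, field

@dataclass
class ThreadRecord:
    """One part's digital thread, spanning the three data chains."""
    part_id: str
    design: dict = field(default_factory=dict)         # product innovation chain
    manufacturing: dict = field(default_factory=dict)  # enterprise value chain
    service: list = field(default_factory=list)        # field and service chain

record = ThreadRecord(part_id="PN-1047")
record.design["cad_revision"] = "C"
record.manufacturing["supplier"] = "Acme Castings"
record.service.append({"date": "2024-05-01", "event": "inspection passed"})
print(record.part_id, record.service[-1]["event"])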
Enabling a Digital twin could result in petabytes of data, and "necessitate the use of highly sophisticated tools and software."
Tools
DeepLynx
"DeepLynx is an ontological data warehouse with timeseries data support". It was primarily authored by John Darrington and Cristopher Ritter to tackle Model-Based Systems Engineering (MBSE) tool integrations and warehousing, and has evolved to enable support for digital twin.
Internet of things
A key aspect of digital thread is the Internet of things, whose "cyber-physical systems, sensors, and so-called smart devices" are an important source of the data required by digital thread. "The ability to gather massive amounts of data through the aspired omnipresence of sensors furthermore fuels the emergence of other key technologies" such as Big data analytics, Artificial intelligence, and Cloud computing. "Thus, the data collected by using IoT technologies constitute the basis of advanced simulation models, which is in essence the livelihood of the digital twin paradigm and therefore also an integral part of the wider digital thread."
Smart manufacturing
Big data analytics and artificial intelligence used in conjunction with Digital Thread are increasingly more required in smart manufacturing applications. Big data analytics is a "prerequisite for managing highly variable" data of smart manufacturing processes, gathered through digital thread. Artificial Intelligence can be trained using this data to create "autonomously self-improving production processes [14] and to facilitate organizational decision-making". "the digital thread paradigm not only leads to the accumulation and processing of massive amounts of data but is also shaped by the analytical results these both technologies provide".
References
Software frameworks | Digital thread | Technology | 949 |
30,612,124 | https://en.wikipedia.org/wiki/Left-hand%E2%80%93right-hand%20activity%20chart | Left-hand–right-hand activity chart is an illustration that shows the contributions of a worker's right and left hands and the balance of the workload between them.
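As a rough illustration (the chart itself is conventionally drawn as a two-column table of simultaneous work elements; all task names and times below are invented), the underlying bookkeeping amounts to tallying element times per hand and comparing the totals:

```python
# Hypothetical element times (seconds) for each hand in one work cycle.
left  = [("reach for part", 1.2), ("hold part", 4.0), ("idle", 0.8)]
right = [("reach for tool", 1.0), ("fasten screw", 3.5), ("set aside", 1.5)]

t_left  = sum(t for _, t in left)
t_right = sum(t for _, t in right)
balance = min(t_left, t_right) / max(t_left, t_right)

print(f"left hand:  {t_left:.1f} s")
print(f"right hand: {t_right:.1f} s")
print(f"workload balance: {balance:.0%}")  # 100% = perfectly balanced
```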
References
Further reading
Aft, L. S. (2000). Work measurements and methods improvement, Wiley, .
Industrial engineering | Left-hand–right-hand activity chart | Engineering | 65 |
218,320 | https://en.wikipedia.org/wiki/Ultraviolet%20catastrophe | The ultraviolet catastrophe, also called the Rayleigh–Jeans catastrophe, was the prediction of late 19th century and early 20th century classical physics that an ideal black body at thermal equilibrium would emit an unbounded quantity of energy as wavelength decreased into the ultraviolet range. The term "ultraviolet catastrophe" was first used in 1911 by Paul Ehrenfest, but the concept originated with the 1900 statistical derivation of the Rayleigh–Jeans law.
The phrase refers to the fact that the empirically derived Rayleigh–Jeans law, which accurately predicted experimental results at large wavelengths, failed to do so for short wavelengths. The problem arose because the theory diverged from empirical observations as these frequencies reached the ultraviolet region of the electromagnetic spectrum. The cause was later found to be a property of quanta as proposed by Max Planck: there could be no fraction of a discrete energy package already carrying minimal energy.
Since the first use of this term, it has also been used for other predictions of a similar nature, as in quantum electrodynamics and such cases as ultraviolet divergence.
Problem
The Rayleigh–Jeans law is an approximation to the spectral radiance of electromagnetic radiation as a function of wavelength from a black body at a given temperature through classical arguments. For wavelength $\lambda$, it is:

$$B_\lambda(T) = \frac{2 c k_{\mathrm{B}} T}{\lambda^4}$$

where $B_\lambda$ is the spectral radiance, the power emitted per unit emitting area, per steradian, per unit wavelength; $c$ is the speed of light; $k_{\mathrm{B}}$ is the Boltzmann constant; and $T$ is the temperature in kelvins. For frequency $\nu$, the expression is instead

$$B_\nu(T) = \frac{2 \nu^2 k_{\mathrm{B}} T}{c^2}$$

This formula is obtained from the equipartition theorem of classical statistical mechanics, which states that all harmonic oscillator modes (degrees of freedom) of a system at equilibrium have an average energy of $k_{\mathrm{B}} T$.
The "ultraviolet catastrophe" is the expression of the fact that the formula misbehaves at higher frequencies; it predicts infinite energy emission because $B_\lambda(T) \to \infty$ as $\lambda \to 0$.
An example, from Mason's A History of the Sciences, illustrates multi-mode vibration via a piece of string. As a natural vibrator, the string will oscillate with specific modes (the standing waves of a string in harmonic resonance), dependent on the length of the string. In classical physics, a radiator of energy will act as a natural vibrator. Since each mode will have the same energy, most of the energy in a natural vibrator will be in the smaller wavelengths and higher frequencies, where most of the modes are.
According to classical electromagnetism, the number of electromagnetic modes in a 3-dimensional cavity, per unit frequency, is proportional to the square of the frequency. This implies that the radiated power per unit frequency should be proportional to frequency squared. Thus, both the power at a given frequency and the total radiated power are unlimited as higher and higher frequencies are considered: this is unphysical, as the total radiated power of a cavity is not observed to be infinite, a point that was made independently by Einstein, Lord Rayleigh, and Sir James Jeans in 1905.
Solution
In 1900, Max Planck derived the correct form for the intensity spectral distribution function by making some assumptions that were strange for the time. In particular, Planck assumed that electromagnetic radiation can be emitted or absorbed only in discrete packets, called quanta, of energy:

$$E = h\nu = \frac{hc}{\lambda}$$

where:
$h$ is the Planck constant,
$\nu$ is the frequency of light,
$c$ is the speed of light,
$\lambda$ is the wavelength of light.
By applying this new energy to the partition function in statistical mechanics, Planck's assumptions led to the correct form of the spectral distribution functions:

$$B_\lambda(T) = \frac{2 h c^2}{\lambda^5} \, \frac{1}{\exp\!\left(\frac{hc}{\lambda k_{\mathrm{B}} T}\right) - 1}$$

where:
$T$ is the absolute temperature of the body,
$k_{\mathrm{B}}$ is the Boltzmann constant,
$\exp$ denotes the exponential function.
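A short numerical comparison makes the catastrophe concrete. The sketch below (illustrative values only; 5000 K is an arbitrary choice) evaluates both laws at a few wavelengths: the two roughly agree in the infrared, but toward the ultraviolet the Rayleigh–Jeans value grows without bound while Planck's law stays finite.

```python
import math

h  = 6.62607015e-34   # Planck constant, J s
c  = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23     # Boltzmann constant, J/K
T  = 5000.0           # illustrative temperature, K

def rayleigh_jeans(lam):
    """Classical spectral radiance: diverges as lam -> 0."""
    return 2.0 * c * kB * T / lam**4

def planck(lam):
    """Planck's law: finite everywhere, peaks and then falls off."""
    return (2.0 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

for lam in (10e-6, 1e-6, 100e-9):   # infrared -> near-infrared -> ultraviolet
    print(f"{lam*1e9:7.0f} nm   Rayleigh-Jeans: {rayleigh_jeans(lam):9.3e}"
          f"   Planck: {planck(lam):9.3e}")
```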
In 1905, Albert Einstein solved the problem physically by postulating that Planck's quanta were real physical particles – what we now call photons – not just a mathematical fiction. He modified statistical mechanics in the style of Boltzmann to describe an ensemble of photons. Einstein's photon had an energy proportional to its frequency and also explained an unpublished law of Stokes and the photoelectric effect. This published postulate was specifically cited by the Nobel Prize in Physics committee in their decision to award the prize for 1921 to Einstein.
See also
Wien approximation
Vacuum catastrophe
Planckian locus
References
Bibliography
Further reading
Foundational quantum physics
Physical paradoxes
Physical phenomena | Ultraviolet catastrophe | Physics | 882 |
59,919,751 | https://en.wikipedia.org/wiki/Hayward%20metric | The Hayward metric is the simplest description of a black hole which is non-singular. The metric was written down by Sean Hayward as the minimal model which is regular, static, spherically symmetric and asymptotically flat. The metric is not derived from any particular alternative theory of gravity, but provides a framework to test the formation and evaporation of non-singular black holes both within general relativity and beyond. Hayward first published his metric in 2005 and numerous papers have studied it since.
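For orientation (the following form is standard in the literature, though not quoted in the text above), the metric is usually written, in units with $G = c = 1$, as

$$ds^2 = -f(r)\,dt^2 + f(r)^{-1}\,dr^2 + r^2\,d\Omega^2, \qquad f(r) = 1 - \frac{2mr^2}{r^3 + 2m\ell^2}$$

where $m$ is the mass and $\ell$ a length scale. At large $r$, $f(r) \to 1 - 2m/r$, recovering the asymptotically flat Schwarzschild form, while as $r \to 0$, $f(r) \to 1 - r^2/\ell^2$, a regular de Sitter core, which is what removes the central singularity.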
References
Theories of gravity
General relativity | Hayward metric | Physics | 107 |
2,151,656 | https://en.wikipedia.org/wiki/Wave%20tank | A wave tank is a laboratory setup for observing the behavior of surface waves. The typical wave tank is a box filled with liquid, usually water, leaving open or air-filled space on top. At one end of the tank, an actuator generates waves; the other end usually has a wave-absorbing surface. A similar device is the ripple tank, which is flat and shallow and used for observing patterns of surface waves from above.
Wave basin
A wave basin is a wave tank which has a width and length of comparable magnitude, often used for testing ships, offshore structures and three-dimensional models of harbors (and their breakwaters).
Wave flume
A wave flume (or wave channel) is a special sort of wave tank: the width of the flume is much less than its length. The generated waves are therefore – more or less – two-dimensional in a vertical plane (2DV), meaning that the orbital flow velocity component in the direction perpendicular to the flume side wall is much smaller than the other two components of the three-dimensional velocity vector. This makes a wave flume a well-suited facility to study near-2DV structures, like cross-sections of a breakwater. Also (3D) constructions providing little blockage to the flow may be tested, e.g. measuring wave forces on vertical cylinders with a diameter much less than the flume width.
Wave flumes may be used to study the effects of water waves on coastal structures, offshore structures, sediment transport and other transport phenomena.
The waves are most often generated with a mechanical wavemaker, although there are also wind–wave flumes with (additional) wave generation by an air flow over the water – with the flume closed above by a roof above the free surface. The wavemaker frequently consists of a translating or rotating rigid wave board. Modern wavemakers are computer controlled, and can generate besides periodic waves also random waves, solitary waves, wave groups or even tsunami-like wave motion. The wavemaker is at one end of the wave flume, and at the other end is the construction being tested, or a wave absorber (a beach or special wave absorbing constructions).
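As a rough sketch of the random-wave case (illustrative only, not any particular laboratory's control software; the spectrum and all numbers are invented), an irregular wave signal can be synthesized by superposing many sinusoidal components with random phases:

```python
import math
import random

random.seed(1)

# Band of component frequencies (Hz) and an invented amplitude spectrum (m).
N = 50
freqs  = [0.2 + 0.02 * i for i in range(N)]
amps   = [0.01 / (1.0 + (f - 0.5) ** 2) for f in freqs]
phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]

def surface_elevation(t):
    """Target free-surface elevation (m) at time t (s) at the wave board."""
    return sum(a * math.cos(2.0 * math.pi * f * t + p)
               for a, f, p in zip(amps, freqs, phases))

for t in range(10):
    print(f"t = {t:2d} s   eta = {surface_elevation(t):+.4f} m")
```

In practice, the target elevation signal is converted to wave-board displacement through a transfer function that depends on the board type and the water depth.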
Often, the side walls contain glass windows, or are completely made of glass, allowing for a clear visual observation of the experiment, and the easy deployment of optical instruments (e.g. by Laser Doppler velocimetry or particle image velocimetry).
Circular wave basin
In 2014, the first circular, combined current and wave test basin, FloWaveTT, was commissioned at the University of Edinburgh. This allows "true" 360° waves to be generated to simulate rough storm conditions as well as scientifically controlled waves in the same facility.
See also
Water tunnel (hydrodynamic)
Airy wave theory
Ocean waves
Ripple tank
Shallow water equations
Further reading
References
External links
Experimental physics
Hydrodynamics
Water waves
Scale modeling
Physical models
Articles containing video clips | Wave tank | Physics,Chemistry | 600 |
61,611,982 | https://en.wikipedia.org/wiki/Richard%20Evershed | Richard Evershed is a Professor of Biogeochemistry and Fellow of the Royal Society.
Education and career
Evershed attended St Ivo School, St Ives in the late 1960s and graduated in 1978 from Nottingham Trent University (Trent Polytechnic, Nottingham) with a BSc in Applied Chemistry. He undertook his PhD in the Department of Chemistry at the University of Keele, investigating pheromones in social insects. Following his PhD he worked as a postdoctoral researcher in the Organic Geochemistry Unit in the School of Chemistry, University of Bristol, where he worked with Professor Geoffrey Eglinton and Professor James Maxwell to develop GC/MS and HPLC methodologies to investigate porphyrins in crude oils and source rocks. He moved to the Department of Chemistry, University of Liverpool in 1984 to manage a biochemical mass spectrometry unit, before taking up a position as Lecturer in the School of Chemistry, University of Bristol, in 1993. He was promoted to Reader in 1996, and a Chair of Biochemistry in 2000.
He is currently the Director of the Bristol Biogeochemistry Research Centre, and the Bristol node of the NERC Life Sciences Mass Spectrometry Facility. He was elected a Fellow of the Royal Society in 2010.
Research
Evershed's research is highly interdisciplinary. He applies the principles and techniques of organic and analytical chemistry, to address questions spanning archaeological chemistry and palaeontology to biogeochemistry. These diverse areas are linked by his overarching interests in the preservation, recycling, decay and transport processes that impact biological materials once they enter the geosphere.
He pioneered several methodologies to analyse archaeological materials and provide ‘chemical fingerprints’, for example the method of lipid residue analysis in archaeological pottery. He has also developed techniques for comparing and distinguishing between food signatures and possible environmental contamination. His research has had a significant impact on our understanding of human activity in the past, opening new avenues for the identification of plant and animal exploitation in the past. These methods have contributed to our understanding of the origins of dairying, and provided evidence for the earliest use of beeswax, for example.
Other areas his research has focused on includes stable isotope applications for studying ancient diet and agriculture, the study of marker compounds in ancient soils, and the analysis of ancient tars, resins and embalming agents. His palaeontological research has applied a similar approach to fossils, to develop a better understanding of the processes involved in the diagenesis of fossil and sub fossil organisms.
In biogeochemistry his research has focused on understanding the fate of soil organic matter. His research has developed biomolecular and isotopic methods to characterise soil organic matter and to understand how soil organisms impact the cycling of organic matter. The wider aim of this research is to produce better models for nutrient cycles, which are central to understanding the effects of global warming and intensive agriculture. This study of organic matter has also been applied to palaeoenvironment and palaeoclimatic reconstruction, using sedimentary archives such as ocean sediments and peat bogs.
One of his areas of research involves the relationships between prehistoric milk use and the evolution of lactase persistence. His research suggests that milk was being processed in pots in Europe in the 7th millennium BC, well before the lactase persistence allele became common there.
Evershed was awarded a European Research Council Advanced Grant (2013–2018) for Neo-Milk, The Milking Revolution in Temperate Neolithic Europe, which investigated where, when and why dairying arose in temperate Neolithic Europe.
Awards
Evershed was awarded the Royal Society of Chemistry’s Interdisciplinary Award in 2003, and the Aston Medal of the British Mass Spectrometry Society in 2010. In 2016, he was the winner of the Royal Society of Chemistry's Robert Boyle Prize for Analytical Science. In 2002, he was awarded the Royal Society of Chemistry Theophilus Redwood Lectureship.
Selected publications
With Nicola Temple he wrote Sorting the beef from the bull a book on the science of food fraud forensics.
Lloyd, C., Michaelides, K., Chadwick, D., Dungait, J. & Evershed, R. (2011). "Tracing the flow-driven vertical transport of livestock-derived organic matter through soil using biomarkers". Organic Geochemistry, pp. 56–66.
Styring, A., Sealy, J. & Evershed, R. (2010). "Resolving the bulk δ15N values of ancient human and animal bone collagen via compound-specific nitrogen isotope analysis of constituent amino acids". Geochimica et Cosmochimica Acta, vol. 74, pp. 241–251.
Outram, A., Stear, N., Bendrey, R., Olsen, S., Kasparov, A., Zaibert, V., Thorpe, N. & Evershed, R. (2009). "Earliest horse harnessing and milking". Science, vol. 323, pp. 1332–1335.
Bull, I., Berstan, R., Vass, A. & Evershed, R. (2009). "Identification of a disinterred grave by molecular and stable isotope analysis". Science and Justice, vol. 49, pp. 142–149.
References
Living people
Alumni of Nottingham Trent University
Fellows of the Royal Society
Biogeochemists
Year of birth missing (living people) | Richard Evershed | Chemistry | 1,088 |
5,826,615 | https://en.wikipedia.org/wiki/Representation%20rigid%20group | In mathematics, in the representation theory of groups, a group is said to be representation rigid if for every $n$, it has only finitely many isomorphism classes of complex irreducible representations of dimension $n$.
External links
The proalgebraic completion of rigid groups
Properties of groups
Representation theory of groups | Representation rigid group | Mathematics | 61 |
34,266,985 | https://en.wikipedia.org/wiki/Dermabacteraceae | Dermabacteraceae is an Actinomycetota family.
Phylogeny
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature and the phylogeny is based on whole-genome sequences.
Notes
References
Micrococcales
Soil biology | Dermabacteraceae | Biology | 62 |
34,133,981 | https://en.wikipedia.org/wiki/Autotoky | Autotoky is uniparental reproduction by self-fertilization or by parthenogenesis. The word comes from the Greek words auto meaning self and tokos meaning birth. Plants that reproduce by parthenogenesis usually do so by apomixis, a process that retains meiosis. Animals that reproduce by parthenogenesis usually use automixis, another process that retains meiosis. The elements of meiosis that are retained in these reproductive systems are (1) pairing of homologous chromosomes, (2) DNA double-strand break formation, and (3) recombinational repair at prophase I. The adaptive function of meiosis that is retained in these forms of autotoky appears to be repair of DNA damage.
References
Asexual reproduction | Autotoky | Biology | 164 |
11,420,931 | https://en.wikipedia.org/wiki/HgcC%20family%20RNA | HgcC is a small non-coding RNA (ncRNA). It is the functional product of a gene which is not translated into protein.
This ncRNA gene was originally identified by computationally searching the genome of the thermophilic archaeon Methanococcus jannaschii for non-coding regions of high guanine-cytosine (GC) content. The original rationale for this search was based on the observation that the genomes of these organisms are adenine-thymine (AT) rich and consequently have a low GC content. However, the GC content of ribosomal RNA (rRNA) and transfer RNA (tRNA) genes in hyperthermophiles shows a strong correlation with optimal growth temperature. It was proposed that non-coding regions of high GC content might encode functional RNA products. The computational screen identified a number of novel ncRNA genes in the genome of M. jannaschii. These were named hgc- ("high GC") A, B, C, D, E, F and G. Two other homologues were detected, called HhcA and HhcB after "homologue of hgcC". A further RNA element, SscA RNA, was also identified.
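A minimal sketch of this kind of screen, assuming a simple non-overlapping sliding window (the published method's actual window size and threshold are not given here):

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def high_gc_windows(genome: str, window: int = 100, threshold: float = 0.6):
    """Yield (start, gc) for each window whose GC content exceeds threshold."""
    for i in range(0, len(genome) - window + 1, window):
        gc = gc_content(genome[i:i + window])
        if gc > threshold:
            yield i, gc

# Toy AT-rich "genome" with one embedded GC-rich island.
genome = "AT" * 200 + "GC" * 100 + "TA" * 200
for start, gc in high_gc_windows(genome):
    print(f"candidate window at position {start}: GC = {gc:.2f}")
```

In an AT-rich genome such high-GC windows stand out sharply, which is what made the screen effective.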
The HgcC gene product was experimentally validated by Northern blot and RACE-PCR analysis. The function of this ncRNA is unknown.
References
External links
Non-coding RNA | HgcC family RNA | Chemistry | 302 |
45,637,541 | https://en.wikipedia.org/wiki/Home%20lift | A home lift, not to be confused with a home elevator, is a type of lift specifically designed for private homes, where the design takes the following four factors into consideration:
1. Compact design in view of the limitations of space in a private residence,
2. Usage of the lift restricted primarily to the residents of the private homes,
3. Special facilities to meet the needs of elderly or disabled persons, including wheelchair users, and
4. Quiet, smooth, jerk-free movement of the lift, and controls that are easy to operate.
A home lift may be linked to specific country codes or directives. For example, the European Machinery Directive 2006/42/EC requires compliance with 194 parameters of safety for a lift to be installed inside a private property.
Overview
Home lifts are compact lifts for 2 to 4 persons which typically run on domestic electricity. Unlike hydraulic lifts or traditional "gear and counterweight" operated elevators, a home lift does not require additional space for a machine room, overhead clearance, or a pit, making it more suitable for domestic and private use. Maintenance costs are often also lower than for a more conventional lift.
The driving system for a home lift can be built inside the lift structure itself and features a screw, an electric motor, and a nut mounted behind the control panel of the lift's platform; it is thus referred to as a "screw and nut" system. When the lift is operated, the motor forces the nut to rotate around the screw, pushing the lift up and down. Most home lifts come with an open platform structure to free even more space and grant access from 3 different sides of the platform. This requires all producers to include specific safety mechanisms and, in some countries, to limit the travel speed.
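As a back-of-envelope illustration of the screw-and-nut drive (all numbers hypothetical), the travel speed follows directly from the screw lead and the nut's rotation rate:

```python
# Travel speed of a screw-and-nut drive = screw lead * nut rotation rate.
lead_m = 0.014   # m of travel per nut revolution (hypothetical value)
rpm    = 650.0   # nut rotation speed in rev/min (hypothetical value)

speed = lead_m * rpm / 60.0
print(f"travel speed ~ {speed:.2f} m/s")   # ~0.15 m/s
```

Speeds of this order are typical where regulations cap home-lift travel speed.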
Home lifts have been present on the market for decades and represent a growing trend. Many home lift producers sell their products through their own networks, but it is not rare to see them providing their lifts to bigger elevating-system groups. Several lift manufacturers have entered new markets such as India through customization and installation partners who have scaled up their technical capabilities.
Types
Electric home lifts
Electric home lifts are powered by an electric motor that plugs into a 13-amp power socket, like any other household appliance. They use a steel roped drum-braked gear motor drive system which means it is self-contained within the roof space of the lift car itself. 'Through floor' dual rail lifts create a self-supporting structure and the weight of the entire structure and lift are in compression through the rails into the floor of the home.
Cable-driven home lifts
Cable-driven home lifts consist of a shaft, a cabin, a control system and counterweights. Some models also require a technical room. Cable-driven lifts are similar to those found in commercial buildings. These elevators take up most space due to the shaft and the equipment room, so installing a cable system in a new building is much easier than trying to retrofit an existing building. Traction elevators need a pulley system for movement. They are less common for new buildings, as hydraulic technology is used in most cases.
Chain-driven home lifts
Chain-driven home lifts are similar to cable-driven lifts, but they use a chain wrapped around a drum instead of a cable to raise and lower the car. Chains are more durable than cables and do not need to be replaced as often. Chain-driven home lifts also do not require a separate machine room, which saves space.
Machine room-less home lifts
Machine room-less home lifts operate by sliding up and down a travel path with a counterweight. This type is an excellent choice for existing residential buildings, since neither machine rooms nor pits reaching into the ground are required. However, traction elevators still require additional space above the elevator roof to accommodate the components required to raise and lower the car. Shaftless home lifts consist of a rectangular elevator cabin positioned on a rail. The lift travels on the route from the lower floor to the upper floor and back.
Hydraulic home lifts
Hydraulic home lifts are driven by a piston that moves in a cylinder. Since the drive system is completely housed in the elevator shaft, no machine room is required and the control system is small enough to fit into a cabinet on a wall near the elevator. For holed hydraulic systems, the cylinder must extend into the ground to a depth corresponding to the lift's travel, while holeless hydraulic systems do not require this.
Pneumatic home lifts
See Pneumatic Elevators
Pneumatic home lifts use a vacuum system inside a tube to drive their movement. A pit or machine room is not required, so pneumatic home lifts are the easiest to retrofit into an existing home. Pneumatic lifts consist of acrylic or glass tubes (typically about 80 cm in diameter), resembling a larger version of the pneumatic mail tubes found in older buildings. Pneumatic elevators are not hidden in the wall and are normally placed near a staircase.
Screw-nut driven home lifts
Screw-nut driven home lifts are designed around a motor that rotates a nut along a screw, moving the lift up and down. The system is known to be reliable, safe and space-efficient, requires less maintenance than hydraulic or belt-driven elevators, and is most commonly used for buildings of up to 6 floors.
Design and customizability
Home lifts, whether pre-installed or retrofitted, usually come with design options so the owner can make the lift fit their house. Colour and size are the most common choices, with finishes such as white, grey and black. Some lift producers go beyond this and offer options for the art wall (back wall) and carpet colours and patterns, giving the customer a variety of options to match each home's interior design.
See also
Elevator
Stairlift
Wheelchair lift
References
Elevators
Home automation | Home lift | Technology,Engineering | 1,174 |
18,211,737 | https://en.wikipedia.org/wiki/HD%2017156%20b | HD 17156 b, named Mulchatna by the IAU, is an extrasolar planet approximately 255 light-years away in the constellation of Cassiopeia. The planet was discovered orbiting the yellow subgiant star HD 17156 in April 2007. The planet is classified as a relatively cool hot Jupiter, slightly smaller than Jupiter but slightly larger than Saturn. Its highly eccentric three-week orbit takes it to within approximately 0.0523 AU of the star at periastron before swinging out to approximately 0.2665 AU at apastron. Its eccentricity is about the same as that of 16 Cygni Bb, a so-called "eccentric Jupiter". Until 2009, HD 17156 b was the transiting planet with the longest orbital period.
Discovery
The planet was discovered on April 14, 2007, by a team using the radial velocity method on the Keck and Subaru telescopes. The team made an initial transit search, which was negative, but they were only able to cover 25% of the search space. This left the possibility of a transit open.
After the possibility of a transit was discussed on oklo.org, various groups performed a follow-on search. These searches confirmed a three-hour transit on October 2, 2007, and a paper was published two days later.
Name
The planet was originally named "HD 17156 b", being the second object in the HD 17156 system.
The planet was given the name "Mulchatna" by the IAU, chosen by United States representatives for the NameExoWorlds contest, with the comment that "The Mulchatna River is a tributary of the Nushagak River in southwestern Alaska, USA". Its parent star was simultaneously named Nushagak in the contest.
Orbit
Careful radial velocity measurements have made it possible to detect the Rossiter–McLaughlin effect, the shifting in photospheric spectral lines caused by the planet occulting a part of the rotating stellar surface. This effect allows the measurement of the angle between the planet's orbital plane and the equatorial plane of the star. This planet's spin-orbit angle was initially measured by Narita in 2007 as +62 ± 25 degrees, but has been remeasured by Cochran as +9.4 ± 9.3 degrees. A study in 2012 refined the misalignment angle to 10°.
Due to its high eccentricity and large distance from its star, HD 17156 b has a low probability of ever entering a secondary eclipse, so the planet's true temperature cannot be measured with accuracy. Due to the high eccentricity of its orbit, the atmosphere of HD 17156 b undergoes a 27-fold variation in stellar flux during each orbit.
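The quoted flux variation follows from the inverse-square law applied to the orbital distances given above; a quick check (the small difference from 27 reflects rounding of the quoted distances):

```python
# Stellar flux scales as 1/r^2, so the periastron/apastron flux ratio
# follows from the two orbital distances quoted in this article.
r_peri = 0.0523   # AU, periastron distance
r_apo  = 0.2665   # AU, apastron distance

flux_ratio = (r_apo / r_peri) ** 2
eccentricity = (r_apo - r_peri) / (r_apo + r_peri)
print(f"flux variation: about {flux_ratio:.0f}-fold per orbit")  # ~26
print(f"implied orbital eccentricity: {eccentricity:.2f}")       # ~0.67
```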
See also
16 Cygni Bb
References
External links
Cassiopeia (constellation)
Transiting exoplanets
Giant planets
Exoplanets discovered in 2007
Exoplanets detected by radial velocity
Exoplanets with proper names | HD 17156 b | Astronomy | 594 |
24,797,786 | https://en.wikipedia.org/wiki/Misonidazole | Misonidazole is a radiosensitizer that was investigated in clinical trials. It was used in these trials for radiation therapy to cause normally resistant hypoxic tumor cells to become sensitive to the treatment.
See also
Etanidazole
References
Secondary alcohols
Ethers
Nitroimidazoles | Misonidazole | Chemistry | 65 |
2,538,735 | https://en.wikipedia.org/wiki/Environmental%20education | Environmental education (EE) refers to organized efforts to teach how natural environments function, and particularly, how human beings can manage behavior and ecosystems to live sustainably. It is a multi-disciplinary field integrating disciplines such as biology, chemistry, physics, ecology, earth science, atmospheric science, mathematics, and geography.
The United Nations Educational, Scientific and Cultural Organization (UNESCO) states that EE is vital in imparting an inherent respect for nature among society and in enhancing public environmental awareness. UNESCO emphasises the role of EE in safeguarding future global developments of societal quality of life (QOL), through the protection of the environment, eradication of poverty, minimization of inequalities, and assurance of sustainable development.
The term often implies education within the school system, from primary to post-secondary. However, it sometimes includes all efforts to educate the public and other audiences, including print materials, websites, media campaigns, etc. There are also ways that environmental education is taught outside the traditional classroom: aquariums, zoos, parks, and nature centers all have ways of teaching the public about the environment.
UNESCO and environmental awareness and education
UNESCO's involvement in environmental awareness and education goes back to the very beginnings of the Organization, with the creation in 1948 of the IUCN (International Union for the Conservation of Nature, now the World Conservation Union), the first major non-governmental organization (NGO) mandated to help preserve the natural environment. UNESCO was also closely involved in convening the United Nations Conference on the Human Environment in Stockholm, Sweden in 1972, which led to the setting up of the United Nations Environment Programme (UNEP). Subsequently, for two decades, UNESCO and UNEP led the International Environmental Education Programme (1975-1995), which set out a vision for, and gave practical guidance on how to mobilize education for environmental awareness.
In 1976, UNESCO launched an environmental education newsletter 'Connect' as the official organ of the UNESCO-UNEP International Environmental Education Programme (IEEP). It served as a clearinghouse to exchange information on Environmental Education (EE) in general and to promote the aims and activities of the IEEP in particular, as well as being a network for institutions and individuals interested and active in environment education until 2007.
The long-standing cooperation between UNESCO and UNEP on environmental education (and later ESD) also led to the co-organization of four major international conferences on environmental education since 1977: the First Intergovernmental Conference on Environmental Education in Tbilisi, Georgia (October 1977); the Conference "International Strategy for Action in the Field of Environmental Education and Training for the 1990s" in Moscow, Russian Federation (August 1987); the third International Conference "Environment and Society: Education and Public Awareness for Sustainability" at Thessaloniki, Greece (December 1997); and the Fourth International Conference on Environmental Education towards a Sustainable Future in Ahmedabad, India (November 2007). These meetings highlighted the pivotal role education plays in sustainable development.
It was at the Tbilisi conference in 1977 that the essential role of 'education in environmental matters' (as stated in the recommendations of the 1972 Stockholm Conference) was fully explored. Organized by UNESCO in cooperation with UNEP, this was the world's first intergovernmental conference on environmental education. In the subsequent Tbilisi Declaration, environment was interpreted in its 'totality—natural and built, technological and social (economic, political, cultural-historical, ethical, aesthetic)' (point 3). The goals formulated for environmental education went far beyond ecology in the curriculum and included development of a 'clear awareness of, and concern about, economic, social, political, and ecological interdependence in urban and rural areas' (point 2) which became one of the major bases of ESD.
Focus
Environmental education focuses on:
1. Engaging with citizens of all demographics to:
2. Think critically, ethically, and creatively when evaluating environmental issues;
3. Make educated judgments about those environmental issues;
4. Develop skills and a commitment to act independently and collectively to sustain and enhance the environment; and,
5. Enhance their appreciation of the environment; resulting in positive environmental behavioural change.
Attributes
There are a few central qualities involved in environmental education that are useful contributions to the individual.
Environmental education:
Enhances real-world problem solving.
Strengthens physical activity and diet quality.
Improves communication/leadership when working in groups.
Careers
There are various different career paths one could delve into within environmental education. Many of these careers require discovering and planning how to resolve environmental issues occurring in today's world. The specific responsibilities associated with each career will depend in part on their physical location, taking into account what environmental issue is most prevalent in the area. A general outlook of some careers in this field are:
Federal Government Park Ranger- Responsible for protecting the national parks, historical sites, and national seashores across the United States including the wildlife and ecosystems within them. There are many qualifications in order for one to become a park ranger and some include: obtaining a bachelor's degree and a passing grade in the PEB. Some focuses within this field include: enforcing park rules, giving tours to groups for educational purposes, and protecting parks from forest fires.
Outdoor Education Teacher- Teach students by using outdoor field and classroom work. Some invite guest speakers who are experts in their field to help teach how the basic principles of science are implemented in the real world. Some requirements for this career include becoming CPR certified and having a bachelor's degree in either environmental science or a related field. It can be a problematic field, as there is no consensus on the central concepts to be taught, and teachers do not agree on what constitutes an important environmental issue.
Environmental Scientist- Use of field work to research contamination in nature when writing plans in creating projects for environmental research. Environmental Scientists research topics such as air pollution, water quality, and wildlife. They also study how human health is affected by changes in the environment. Some requirements for this career are a bachelor's degree with a double major in environmental science and either biology, physics or chemistry.
Environmental Engineer- Involves the combination of biology/chemistry with engineering to generate ways to ensure the health of the planet. Scientific research is analyzed and projects are designed as a result of that research in order to come up with solutions to issues of the environment like air pollution. A bachelor's degree in civil engineering or general engineering is required as well as some experience in this field.
Related fields
Environmental education has crossover with multiple other disciplines. These fields of education complement environmental education yet have unique philosophies.
Citizen Science (CS) aims to address both scientific and environmental outcomes through enlisting the public in the collection of data, through relatively simple protocols, generally from local habitats over long periods of time.
Education for Sustainable Development (ESD) aims to reorient education to empower individuals to make informed decisions for environmental integrity, social justice, and economic viability for both present and future generations, whilst respecting cultural diversities.
Climate Change Education (CCE) aims to enhance the public's understanding of climate change, its consequences, and its problems, and to prepare current and future generations to limit the magnitude of climate change and to respond to its challenges. Specifically, CCE needs to help learners develop the knowledge, skills, values and actions required to engage with the causes, impact and management of climate change.
Science Education (SE) focuses primarily on teaching knowledge and skills, to develop innovative thought in society.
Outdoor Education (OE) relies on the assumption that learning experiences outdoors in 'nature' foster an appreciation of nature, resulting in pro-environmental awareness and action. Outdoor education means learning "in" and "for" the outdoors.
Experiential education (ExE) is "a process through which a learner constructs knowledge, skill, and value from direct experiences". Experiential education can be viewed as both a process and a method to deliver the ideas and skills associated with environmental education.
Garden-based learning (GBL) is an instructional strategy that utilizes the garden as a teaching tool. It encompasses programs, activities and projects in which the garden is the foundation for integrated learning, in and across disciplines, through active, engaging, real-world experiences that have personal meaning for children, youth, adults and communities in an informal outside learning setting.
Inquiry-based Science (IBS) is an active open style of teaching in which students follow scientific steps in a similar manner as scientists to study some problem. Often used in biological and environmental settings.
While each of these educational fields has their own objectives, there are points where they overlap with the intentions and philosophy of environmental education.
History
The roots of environmental education can be traced back as early as the 18th century when Jean-Jacques Rousseau stressed the importance of an education that focuses on the environment in Emile: or, On Education. Several decades later, Louis Agassiz, a Swiss-born naturalist, echoed Rousseau's philosophy as he encouraged students to "Study nature, not books." These two influential scholars helped lay the foundation for a concrete environmental education program, known as nature study, which took place in the late 19th and early 20th century.
The nature study movement used fables and moral lessons to help students develop an appreciation of nature and embrace the natural world. Anna Botsford Comstock, the head of the Department of Nature Study at Cornell University, was a prominent figure in the nature study movement. She wrote the Handbook for Nature Study in 1911 which used nature to educate children on cultural values. Comstock and the other leaders of the movement, such as Liberty Hyde Bailey, helped Nature Study garner tremendous amounts of support from community leaders, teachers, and scientists to change the science curriculum for children across the United States.
A new type of environmental education, Conservation Education, emerged in the US as a result of the Great Depression and Dust Bowl during the 1920s and 1930s. Conservation Education dealt with the natural world in a drastically different way from Nature Study because it focused on rigorous scientific training rather than natural history. Conservation Education was a major scientific management and planning tool that helped solve social, economic, and environmental problems during this time period.
The modern environmental education movement, which gained significant momentum in the late 1960s and early 1970s, stems from Nature Study and Conservation Education. During this time period, many events—such as the Cold War, the Civil rights movement and the Vietnam War—placed many Americans at odds with one another and the U.S. government. However, as more people began to fear the fallout from radiation, the chemical pesticides mentioned in Rachel Carson's Silent Spring, and the significant amounts of air pollution and waste, the public's concern for their health and the health of their natural environment led to a unifying phenomenon known as environmentalism. Environmental education was born of the realization that solving complex local and global problems cannot be accomplished by politicians and experts alone, but requires "the support and active participation of an informed public in their various roles as consumers, voters, employers, and business and community leaders." In 1960 the National Rural Studies Association (now known as the National Association for Environmental Education) was established in the UK to promote environmental education and support teachers in incorporating sustainability into their curricula.
One of the first articles about environmental education as a new movement appeared in the Phi Delta Kappan in 1969, authored by James A. Swan. A definition of "Environmental Education" first appeared in The Journal of Environmental Education in 1969, written by William B. Stapp. Stapp later went on to become the first Director of Environmental Education for UNESCO, and then the Global Rivers International Network.
Ultimately, the first Earth Day on April 22, 1970 – a national teach-in about environmental problems – paved the way for the modern environmental education movement. Later that same year, President Nixon passed the National Environmental Education Act, which was intended to incorporate environmental education into K-12 schools. Then, in 1971, the National Association for Environmental Education (now known as the North American Association for Environmental Education) was created to improve environmental literacy by providing resources to teachers and promoting environmental education programs.
Internationally, environmental education gained recognition when the UN Conference on the Human Environment held in Stockholm, Sweden, in 1972, declared environmental education must be used as a tool to address global environmental problems. The United Nations Education Scientific and Cultural Organization (UNESCO) and United Nations Environment Program (UNEP) created three major declarations that have guided the course of environmental education.
In 2002, the United Nations Decade of Education for Sustainable Development 2005–2014 (UNDESD) was formed as a way to reconsider, excite, and change approaches to acting positively on global challenges. The Commission on Education and Communication (CEC) helped support the work of the UNDESD by composing a backbone structure for education for sustainability, which contained five major components: "Imagining a better future", "Critical thinking and reflection", "Participation in decision making", "Partnerships", and "Systemic thinking".
On June 9–14, 2013, the seventh World Environmental Education Congress was held in Marrakesh, Morocco. The overall theme of the conference was "Environmental education and issues in cities and rural areas: seeking greater harmony", and it incorporated 11 different areas of concern. The congress drew 2,400 participants representing over 150 countries. This meeting was the first time it had been held in an Arab country, and it was put together by two organizations, the Mohamed VI Foundation for Environmental Protection and the World Environmental Education Congress Permanent Secretariat in Italy. Topics addressed at the congress included stressing the importance of environmental education and its role to empower, establishing partnerships to promote environmental education, how to mainstream environmental and sustainability education, and even how to make universities "greener".
Stockholm Declaration
June 5–16, 1972 - The Declaration of the United Nations Conference on the Human Environment. The document was made up of 7 proclamations and 26 principles "to inspire and guide the peoples of the world in the preservation and enhancement of the human environment."
Belgrade Charter
October 13–22, 1975 - The Belgrade Charter was the outcome of the International Workshop on Environmental Education held in Belgrade, then in Yugoslavia, now in Serbia. The Belgrade Charter was built upon the Stockholm Declaration and added goals, objectives, and guiding principles of environmental education programs. It defined an audience for environmental education, which included the general public.
Tbilisi Declaration
October 14–26, 1977 - The Tbilisi Declaration "noted the unanimous accord in the important role of environmental education in the preservation and improvement of the world's environment, as well as in the sound and balanced development of the world's communities." The Tbilisi Declaration updated and clarified The Stockholm Declaration and The Belgrade Charter by including new goals, objectives, characteristics, and guiding principles of environmental education.
Later that decade, in 1977, the Intergovernmental Conference on Environmental Education in Tbilisi, Georgian SSR, Soviet Union emphasized the role of Environmental Education in preserving and improving the global environment and sought to provide the framework and guidelines for environmental education. The Conference laid out the role, objectives, and characteristics of environmental education, and provided several goals and principles for environmental education.
Pope Francis, in his 2015 encyclical letter Laudato si', referred to a broadening of the goals of environmental education.
Environmental education in the teaching curriculum
Environmental education has been considered an additional or elective subject in much of traditional K-12 curriculum. At the elementary school level, environmental education can take the form of science enrichment curriculum, natural history field trips, community service projects, and participation in outdoor science schools. EE policies assist schools and organizations in developing and improving environmental education programs that provide citizens with an in-depth understanding of the environment. School related EE policies focus on three main components: curricula, green facilities, and training.
Schools can integrate environmental education into their curricula with sufficient funding from EE policies. This approach – known as using the "environment as an integrating context" for learning – uses the local environment as a framework for teaching state and district education standards. In addition to funding environmental curricula in the classroom, environmental education policies allot the financial resources for hands-on, outdoor learning. These activities and lessons help address and mitigate "nature deficit disorder", as well as encourage healthier lifestyles.
Green schools, or green facility promotion, are another main component of environmental education policies. Greening school facilities cost, on average, a little less than 2 percent more than creating a traditional school, but payback from these energy efficient buildings occur within only a few years. Environmental education policies help reduce the relatively small burden of the initial start-up costs for green schools. Green school policies also provide grants for modernization, renovation, or repair of older school facilities. Additionally, healthy food options are also a central aspect of green schools. These policies specifically focus on bringing freshly prepared food, made from high-quality, locally grown ingredients into schools.
In secondary school, environmental curriculum can be a focused subject within the sciences or is a part of student interest groups or clubs. At the undergraduate and graduate level, it can be considered its own field within education, environmental studies, environmental science and policy, ecology, or human/cultural ecology programs.
Environmental education is not restricted to in-class lesson plans. Children can learn about the environment in many ways. Experiential lessons in the school yard, field trips to national parks, after-school green clubs, and school-wide sustainability projects help make the environment an easily accessible topic. Furthermore, celebration of Earth Day or participation in EE week (run through the National Environmental Education Foundation) can help further environmental education. Effective programs promote a holistic approach and lead by example, using sustainable practices in the school to encourage students and parents to bring environmental education into their home.
The final aspect of environmental education policies involves training individuals to thrive in a sustainable society. In addition to building a strong relationship with nature, citizens must have the skills and knowledge to succeed in a 21st-century workforce. Thus, environmental education policies fund both teacher training and worker training initiatives. Teachers train to effectively teach and incorporate environmental studies. On the other hand, the current workforce must be trained or re-trained so they can adapt to the new green economy. Environmental education policies that fund training programs are critical to educating citizens to prosper in a sustainable society.
In the United States
Following the 1970s, non-governmental organizations that focused on environmental education continued to form and grow, the number of teachers implementing environmental education in their classrooms increased, and the movement gained stronger political backing. A critical move forward came when the United States Congress passed the National Environmental Education Act of 1990, which placed the Office of Environmental Education in the U.S. Environmental Protection Agency (EPA) and allowed the agency to create environmental education initiatives at the federal level.
EPA defines environmental education as "a process that allows individuals to explore environmental issues, engage in problem solving, and take action to improve the environment. As a result, individuals develop a deeper understanding of environmental issues and have the skills to make informed and responsible decisions." EPA has listed the components of what should be gained from EE:
Awareness and sensitivity to the environment and environmental challenges
Knowledge and understanding of the environment and environmental challenges
Attitudes of concern for the environment and motivation to improve or maintain environmental quality
Skills to identify and help resolve environmental challenges
Participation in activities that lead to the resolution of environmental challenges.
Through the EPA Environmental Education (EE) Grant Program, public schools, communities agencies, and NGO's are eligible to receive federal funding for local educational projects that reflect the EPA's priorities: air quality, water quality, chemical safety, and public participation among the communities.
In the United States some of the antecedents of environmental education were the Nature Study movement, conservation education and school camping. Nature study integrated an academic approach with outdoor exploration. Conservation education raised awareness about the misuse of natural resources and the need for their preservation. George Perkins Marsh discoursed on humanity's integral place in the natural world. Governmental agencies such as the U.S. Forest Service and the EPA supported conservation efforts. Conservation ideals still guide environmental education today. School camping exposed students to the environment and used resources outside of the classroom for educational purposes. The legacies of these antecedents are still present in the evolving arena of environmental education.
Obstacles
A study of Ontario teachers explored obstacles to environmental education. Through an internet-based survey questionnaire, 300 K-12 teachers from Ontario, Canada responded. Based on the results of the survey, the most significant challenges identified by the sample of Ontario teachers include over-crowded curriculum, lack of resources, low priority of environmental education in schools, limited access to the outdoors, student apathy to environmental issues, and the controversial nature of sociopolitical action.
An influential article by Stevenson outlines conflicting goals of environmental education and traditional schooling. According to Stevenson, the recent critical and action orientation of environmental education creates a challenging task for schools. Contemporary environmental education strives to transform values that underlie decision making from ones that aid environmental (and human) degradation to those that support a sustainable planet. This contrasts with the traditional purpose of schools of conserving the existing social order by reproducing the norms and values that currently dominate environmental decision making. Confronting this contradiction is a major challenge to environmental education teachers.
Additionally, the dominant narrative that all environmental educators have an agenda can present difficulties in expanding reach. It is said that an environmental educator is one "who uses information and educational processes to help people analyze the merits of the many and varied points of view usually present on a given environmental issue". Greater efforts must be made to train educators on the importance of staying within the profession's substantive structure, and in informing the general public of the profession's intention to empower fully informed decision making.
Another obstacle facing the implementation of environmental education lies in the quality of education itself. Charles Saylan, the executive director of the Ocean Conservation Society, represents alternate views and critiques of environmental education in his book The Failure of Environmental Education (And How We Can Fix It). In a Yale Environment 360 interview, Saylan discusses his book and outlines several flaws within environmental education, particularly its failed efforts to "reach its potential in fighting climate change, biodiversity loss, and environmental degradation". He believes that environmental education is not "keeping pace with environmental degradation" and encourages structural reform by increasing student engagement as well as improving the relevance of information. These same critiques are discussed in Stewart Hudson's BioScience paper, "Challenges for Environmental Education: Issues and Ideas for the 21st Century". Another study describes obstacles in environmental education rooted in the capability of school leaders, who are the epicentre of education; implementing ESD in schools is difficult because school leaders face too many challenges for any single plan to work.
In 2017, a study found that high school science textbooks and government resources on climate change from United States, EU, Canada and Australia did focus their recommendations for emission reductions on lower-impact actions instead of promoting the most effective emission-reduction strategies.
Movement
A movement that has progressed since the relatively recent founding of environmental education in industrial societies has transported the participant from nature appreciation and awareness to education for an ecologically sustainable future. This trend may be viewed as a microcosm of how many environmental education programs seek to first engage participants through developing a sense of nature appreciation which then translates into actions that affect conservation and sustainability.
Programs range from New York to California, including Life Lab at the University of California, Santa Cruz, as well as Cornell University in New York.
Environmental Education in the Global South
Environmentalism has also begun to make waves in the development of the global South, as the "First World" takes on the responsibility of helping developing countries to combat environmental issues produced and prolonged by conditions of poverty. Unique to environmental education in the Global South is its particular focus on sustainable development. This goal has been a part of the international agenda since the 1900s, with the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the Earth Council Alliance (ECA) at the forefront of pursuing sustainable development in the south.
The 1977 Tbilisi intergovernmental conference played a key role in the development of environmental education. The outcome of the conference was the Tbilisi Declaration, a unanimous accord which "constitutes the framework, principles, and guidelines for environmental education at all levels—local, national, regional, and international—and for all age groups both inside and outside the formal school system", recommended as a criterion for implementing environmental education. The Declaration was established with the intention of increasing environmental stewardship, awareness and behavior, which paved the way for the rise of modern environmental education.
After the 1992 Rio Earth Summit, over 80 National Councils for Sustainable Development in developing countries were created between 1992–1998 to aid in compliance of international sustainability goals and encourage "creative solutions".
In 1993, the Earth Council Alliance released the Treaty on environmental education for sustainable societies and global responsibility, sparking discourse on environmental education. The Treaty, in 65 statements, outlines the role of environmental education in facilitating sustainable development through all aspects of democratized participation and provides a methodology for the Treaty's signatories. It has been instrumental in expanding the field towards the global South, wherein the discourse of "environmental education for sustainable development" recognizes a need to include human population dynamics in EE and emphasizes "aspects related to contemporary economic realities and by placing greater emphasis on concerns for planetary solidarity". Even as a necessary tool for the proliferation of environmental stewardship, environmental education implemented in the South varies and addresses environmental issues in relation to their impact on different communities and specific community needs. Whereas in the developed global North environmentalist sentiment centers on conservation without taking into consideration "the needs of people living within communities", the global South must push forth a conservation agenda that parallels social, economic, and political development. The role of environmental education in the South is centered around potential economic growth in development projects, as explicitly stated by UNESCO, which calls for applying environmental education for sustainable development through a "creative and effective use of human potential and all forms of capital to ensure rapid and more equitable economic growth, with minimal impact on the environment".
Moving into the 21st century, EE was furthered by the United Nations as a part of the 2000 Millennium Development Goals (MDGs) to improve the planet by 2015. The MDGs included global efforts to end extreme poverty, work towards gender equality, widen access to education, and promote sustainable development, to name a few. Although the MDGs produced notable outcomes, their objectives were not fully met, and the MDGs were soon replaced by the Sustainable Development Goals (SDGs). A "universal call to action to end poverty, protect the planet and ensure that all people enjoy peace and prosperity", the SDGs became the new face of global priorities. These new goals incorporated objectives from the MDGs yet added a necessary environmental framework to "address key systemic barriers to sustainable development such as inequality, unsustainable consumption patterns, weak institutional capacity and environmental degradation that the MDGs neglected".
Trends
One of the current trends within environmental education seeks to move from an approach of ideology and activism to one that allows students to make informed decisions and take action based on experience as well as data. Within this process, environmental curricula have progressively been integrated into governmental education standards. A study found that standardized curricula can be a significant impediment to the implementation of environmental education. Some environmental educators find this movement distressing, as it moves away from the original political and activist approach to environmental education, while others find the newer approach more valid and accessible. Regardless, many educational institutions are encouraging students to take an active role in environmental education and stewardship at their institutions. They know that "to be successful, greening initiatives require both grassroots support from the student body and top down support from high-level campus administrators."
Italy announced in 2019 that environmental education (including topics of sustainability and climate change) will be integrated into other subject matter and will be a mandatory part of the curriculum in public schools.
In the United States, Title IV, Part A of the Every Student Succeeds Act states that environmental education is eligible for grant funding. The program supports a "well-rounded" education as well as access to student health and safety programs. Title IV, Part B states that environmental literacy programs are also eligible for funding through the 21st Century Community Learning Centers Program. The funds that are available for both parts are block granted to the states using the Title I formula. In the FY2018 budget, Titles IVA and IVB were given $1.1 billion and $1.2 billion, respectively. For Title IVA, this is a $700 million increase from the 2017 budget, which made more funding available for environmental education in the 2018–2019 school year than ever before.
Renewable Energy Education
Renewable energy education (REE) is a relatively new field of education. The overall objectives of REE pertain to giving a working knowledge and understanding of the concepts, facts, principles, and technologies for harnessing renewable sources of energy. Based on these objectives, the role of renewable energy education programs should be informative, investigative, educative, and imaginative. REE should be taught with the world's population in mind, as non-renewable resources are projected to run out within the next century. Renewable energy education is also being brought to political leaders as a means of encouraging more sustainable development around the globe, in the hope that it will lift millions of people out of poverty and into a better quality of life in many countries. Renewable energy education is also about bringing awareness of climate change to the general public, as well as an understanding of current renewable energy technologies. An understanding of the new technologies is imperative to getting them streamlined and accepted by the vast majority of the public.
See also
The Amazonia Conference
Arts-based environmental education
Citizen Science
Climate Change
Ecological empathy
Education for Sustainable Development
Environmental adult education
Environmental protection
Environmental psychology
Environmental science
Environmental studies
Expeditionary education
Fourth International Conference on Environmental Education
Global education
Go Green Initiative (GGI)
Human rights education
Learnscapes
List of environmental degrees
List of environmental education institutions
Nature centers
Network of Conservation Educators and Practitioners
Outdoor education
Quality of life
Science Education
Science, Technology, Society and Environment Education
UNESCO
Sources
Notes
References
Bibliography
Hoelscher, David W. 2009. "Cultivating the Ecological Conscience: Smith, Orr, and Bowers on Ecological Education." M.A. thesis, University of North Texas. https://digital.library.unt.edu/ark:/67531/metadc12133/m1/
Lieberman, G.A. & L.L. Hoody. 1998. "Closing the Achievement Gap: Using the Environment as an Integrating Context for Learning." State Education and Environment Roundtable, Poway, CA.
Lieberman, Gerald A. 2013. Education and the Environment: Creating Standards-Based Programs in Schools and Districts. Cambridge, MA: Harvard Education Press.
Palmer, J.A, 1998. Environmental Education in the 21st Century: Theory, Practice, Progress, and Promise. Routledge.
Roth, Charles E. 1978. "Off the Merry-Go-Round and on to the Escalator". pp. 12–23 in From Ought to Action in Environmental Education, ed. William B. Stapp. Columbus, OH: SMEAC Information Reference Center. Ed 159 046.
Beatty. A., 2012. Climate Change Education. Washington, DC: The National Academies Press
Education Resources Information Centre (ERIC), 2002. Outdoor, Experiential, and Environmental Education: Converging or Diverging Approaches? [pdf]. ERIC Development Team. Available at: <http://files.eric.ed.gov/fulltext/ED467713.pdf>
United Nations Educational, Scientific and Cultural Organization., 2014a. Ecological Sciences for Sustainable Development. [online] Available at: <http://www.unesco.org/new/en/natural-sciences/environment/ecological-sciences/capacity-building-and-partnerships/educational-materials/>
United Nations Educational, Scientific and Cultural Organization., 2014b. Shaping the Future We Want: UN Decade of Education for Sustainable Development. [pdf] Paris: UNESCO. Available at: < http://unesdoc.unesco.org/images/0023/002301/230171e.pdf>
External links
Belgrade Charter
Council for Environmental Education (CEE)
Earth Day Network
Environmental Education Linked Network
Mobile Environmental Education Projects (MEEPs)
National Environmental Education Foundation
State Education and Environment Roundtable (SEER)
United Nations Environment Programme (UNEP)
Alternative education
Environmental social science
Outdoor education | Environmental education | Environmental_science | 6,627 |
65,895,602 | https://en.wikipedia.org/wiki/Maushop | Maushop (sometimes Moshup) is a mythical hero and giant from Wampanoag folklore. He is said to have several companions, including a giant frog and his wife Granny Squannit.
Mythology
Maushop served as an explanation for geographical locations. According to legend, he came from Aquinnah on Martha's Vineyard and lived there from a time before the Wampanoag. Maushop was so large that his diet consisted mainly of whales. To catch them, he threw boulders into the water to make stepping stones. During a celebration, he emptied his pipe ashes into the ocean, and they became Nantucket.
At one point, a crab bit his toe, causing him to stomp around and leave large footprints in the ground. Moshup's Rock is named for this story; Christian missionaries later renamed it the "Devil's Footprint."
Maushop was seen as a provider for the Wampanoag, teaching them how to hunt and fish. The Wampanoag apparently became too reliant on him, so he left so that they would learn how to survive on their own.
References
Creation myths
Heroes in mythology and legend
Native American giants
Legendary footprints | Maushop | Astronomy | 241 |
1,122,854 | https://en.wikipedia.org/wiki/Equilibrium%20constant | The equilibrium constant of a chemical reaction is the value of its reaction quotient at chemical equilibrium, a state approached by a dynamic chemical system after sufficient time has elapsed at which its composition has no measurable tendency towards further change. For a given set of reaction conditions, the equilibrium constant is independent of the initial analytical concentrations of the reactant and product species in the mixture. Thus, given the initial composition of a system, known equilibrium constant values can be used to determine the composition of the system at equilibrium. However, reaction parameters like temperature, solvent, and ionic strength may all influence the value of the equilibrium constant.
A knowledge of equilibrium constants is essential for the understanding of many chemical systems, as well as the biochemical processes such as oxygen transport by hemoglobin in blood and acid–base homeostasis in the human body.
Stability constants, formation constants, binding constants, association constants and dissociation constants are all types of equilibrium constants.
Basic definitions and properties
For a system undergoing a reversible reaction described by the general chemical equation

α A + β B ⇌ ρ R + σ S

a thermodynamic equilibrium constant, denoted by K⊖, is defined to be the value of the reaction quotient Q when forward and reverse reactions occur at the same rate. At chemical equilibrium, the chemical composition of the mixture does not change with time, and the Gibbs free energy change for the reaction is zero. If the composition of a mixture at equilibrium is changed by addition of some reagent, a new equilibrium position will be reached, given enough time. An equilibrium constant is related to the composition of the mixture at equilibrium by

K⊖ = ({R}^ρ {S}^σ)/({A}^α {B}^β)

where {X} denotes the thermodynamic activity of reagent X at equilibrium, [X] the numerical value of the corresponding concentration in moles per liter, and γ the corresponding activity coefficient. If X is a gas, instead of [X] the numerical value of the partial pressure in bar is used. If it can be assumed that the quotient of activity coefficients is constant over a range of experimental conditions, such as pH, then an equilibrium constant can be derived as a quotient of concentrations.
An equilibrium constant is related to the standard Gibbs free energy change of reaction by

ΔG⊖ = −RT ln K⊖

where R is the universal gas constant, T is the absolute temperature (in kelvins), and ln is the natural logarithm. This expression implies that K⊖ must be a pure number and cannot have a dimension, since logarithms can only be taken of pure numbers; as a quotient of activities, which are themselves dimensionless, it is indeed a pure number. On the other hand, the reaction quotient at equilibrium expressed in terms of concentrations,

Kc = ([R]^ρ [S]^σ)/([A]^α [B]^β),

does have the dimension of concentration raised to some power (see Dimensionality, below). Such reaction quotients are often referred to, in the biochemical literature, as equilibrium constants.
For an equilibrium mixture of gases, an equilibrium constant can be defined in terms of partial pressure or fugacity.
An equilibrium constant is related to the forward and backward rate constants, kf and kr, of the reactions involved in reaching equilibrium:

K = kf/kr
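The two relations above lend themselves to a quick numerical check. The following is a minimal sketch, assuming SI units throughout; the function names and the example value of −20 kJ/mol are illustrative, not taken from the article:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def K_from_delta_G(delta_G_standard, T):
    """Thermodynamic route: K = exp(-dG/(R*T)) for dG in J/mol, T in K."""
    return math.exp(-delta_G_standard / (R * T))

def K_from_rate_constants(k_forward, k_reverse):
    """Kinetic route: at equilibrium, K equals the ratio kf/kr."""
    return k_forward / k_reverse

# A hypothetical reaction with dG = -20 kJ/mol at 298 K
print(K_from_delta_G(-20e3, 298.0))        # ~3.2e3
print(K_from_rate_constants(3.2e3, 1.0))   # same K if kf/kr matches
```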
Types of equilibrium constants
Cumulative and stepwise formation constants
A cumulative or overall constant, given the symbol β, is the constant for the formation of a complex from reagents. For example, the cumulative constant for the formation of ML2 is given by
M + 2 L ⇌ ML2;  [ML2] = β12[M][L]^2
The stepwise constant, K, for the formation of the same complex from ML and L is given by
ML + L ⇌ ML2;  [ML2] = K[ML][L] = Kβ11[M][L]^2
It follows that
β12 = Kβ11
A cumulative constant can always be expressed as the product of stepwise constants. There is no agreed notation for stepwise constants, though a symbol such as K is sometimes found in the literature. It is best always to define each stability constant by reference to an equilibrium expression.
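As a small illustration of the product rule for cumulative constants (a sketch with hypothetical stepwise values, not data from the article):

```python
def cumulative_constant(stepwise_constants):
    """Overall formation constant as the product of stepwise constants,
    e.g. beta_12 = K(M + L -> ML) * K(ML + L -> ML2)."""
    beta = 1.0
    for K in stepwise_constants:
        beta *= K
    return beta

beta_11 = 1.0e5  # hypothetical: M + L <-> ML
K_step = 1.0e3   # hypothetical: ML + L <-> ML2
print(cumulative_constant([beta_11, K_step]))  # beta_12 = 1e8
```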
Competition method
A particular use of a stepwise constant is in the determination of stability constant values outside the normal range for a given method. For example, EDTA complexes of many metals are outside the range for the potentiometric method. The stability constants for those complexes were determined by competition with a weaker ligand.
ML + L′ ⇌ ML′ + L
The formation constant of [Pd(CN)4]2− was determined by the competition method.
Association and dissociation constants
In organic chemistry and biochemistry it is customary to use pKa values for acid dissociation equilibria.

pKa = −log10 Kdiss

where log10 denotes the logarithm to base 10, or common logarithm, and Kdiss is a stepwise acid dissociation constant. For bases, the base association constant, pKb, is used. For any given acid or base the two constants are related by pKa + pKb = pKw, so pKa can always be used in calculations.
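A minimal sketch of these relations (the acetic acid pKa of 4.76 is a textbook value used only for illustration):

```python
pKw = 14.0  # ionic product of water at 25 °C, approximately

def pKb_from_pKa(pKa):
    """For a conjugate acid-base pair, pKa + pKb = pKw."""
    return pKw - pKa

def K_from_pK(pK):
    """Recover a dissociation constant from its negative base-10 log."""
    return 10.0 ** (-pK)

print(pKb_from_pKa(4.76))  # 9.24 for the acetate ion
print(K_from_pK(4.76))     # Ka ~ 1.7e-5
```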
On the other hand, stability constants for metal complexes, and binding constants for host–guest complexes are generally expressed as association constants. When considering equilibria such as
M + HL ⇌ ML + H
it is customary to use association constants for both ML and HL. Also, in generalized computer programs dealing with equilibrium constants it is general practice to use cumulative constants rather than stepwise constants and to omit ionic charges from equilibrium expressions. For example, if NTA, nitrilotriacetic acid, N(CH2CO2H)3 is designated as H3L and forms complexes ML and MHL with a metal ion M, the following expressions would apply for the dissociation constants.
The cumulative association constants can be expressed as
Note how the subscripts define the stoichiometry of the equilibrium product.
Micro-constants
When two or more sites in an asymmetrical molecule may be involved in an equilibrium reaction there is more than one possible equilibrium constant. For example, the molecule L-DOPA has two non-equivalent hydroxyl groups which may be deprotonated. Denoting L-DOPA as LH2, the following diagram shows all the species that may be formed.
The concentration of the species LH is equal to the sum of the concentrations of the two micro-species with the same chemical formula, labelled L1H and L2H. The constant K2 is for a reaction with these two micro-species as products, so that [LH] = [L1H] + [L2H] appears in the numerator, and it follows that this macro-constant is equal to the sum of the two micro-constants for the component reactions.
K2 = k21 + k22
However, the constant K1 is for a reaction with these two micro-species as reactants, and [LH] = [L1H] + [L2H] in the denominator, so that in this case
1/K1 = 1/k11 + 1/k12,
and therefore K1 = k11k12/(k11 + k12).
Thus, in this example there are four micro-constants whose values are subject to two constraints; in consequence, only the two macro-constant values, for K1 and K2 can be derived from experimental data.
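The two macro/micro relations can be checked numerically; the micro-constant values below are hypothetical and chosen only to make the arithmetic visible:

```python
# Hypothetical micro-constant values, for illustration only
k21, k22 = 5.0e-11, 4.5e-11  # reactions with the micro-species as products
k11, k12 = 2.0e-9, 1.8e-9    # reactions with the micro-species as reactants

K2 = k21 + k22                  # macro-constant, micro-species as products
K1 = (k11 * k12) / (k11 + k12)  # macro-constant, micro-species as reactants

print(K2)  # 9.5e-11
print(K1)  # ~9.5e-10
```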
Micro-constant values can, in principle, be determined using a spectroscopic technique, such as infrared spectroscopy, where each micro-species gives a different signal. Methods which have been used to estimate micro-constant values include
Chemical: blocking one of the sites, for example by methylation of a hydroxyl group, followed by determination of the equilibrium constant of the related molecule, from which the micro-constant value for the "parent" molecule may be estimated.
Mathematical: applying numerical procedures to 13C NMR data.
Although the value of a micro-constant cannot be determined directly from experimental data, site occupancy, which is proportional to the micro-constant value, can be very important for biological activity. Therefore, various methods have been developed for estimating micro-constant values. For example, the isomerization constant for L-DOPA has been estimated to have a value of 0.9, so the micro-species L1H and L2H have almost equal concentrations at all pH values.
pH considerations (Brønsted constants)
pH is defined in terms of the activity of the hydrogen ion
pH = −log10 {H+}
In the approximation of ideal behaviour, activity is replaced by concentration. Because pH is measured by means of a glass electrode, a mixed equilibrium constant, also known as a Brønsted constant, may result:

HL ⇌ L + H;  K = {H}[L]/[HL]

It all depends on whether the electrode is calibrated by reference to solutions of known activity or known concentration. In the latter case the equilibrium constant would be a concentration quotient. If the electrode is calibrated in terms of known hydrogen ion concentrations it would be better to write p[H] rather than pH, but this suggestion is not generally adopted.
Hydrolysis constants
In aqueous solution the concentration of the hydroxide ion is related to the concentration of the hydrogen ion by
KW = [H][OH]
[OH] = KW[H]^−1
The first step in metal ion hydrolysis can be expressed in two different ways:

M + H2O ⇌ M(OH) + H;  β* = [M(OH)][H]/[M]
M + OH ⇌ M(OH);  K = [M(OH)]/([M][OH])

It follows that β* = K·KW. Hydrolysis constants are usually reported in the β* form and therefore often have values much less than 1. For example, if K = 10^4 and KW = 10^−14, then β* = 10^4 × 10^−14 = 10^−10. In general, when the hydrolysis product contains n hydroxide groups, β* = K·(KW)^n.
Conditional constants
Conditional constants, also known as apparent constants, are concentration quotients which are not true equilibrium constants but can be derived from them. A very common instance is where pH is fixed at a particular value. For example, in the case of iron(III) interacting with EDTA, a conditional constant could be defined by

Kcond = [total Fe bound to EDTA] / ([total Fe not bound] × [total EDTA not bound])
This conditional constant will vary with pH. It has a maximum at a certain pH. That is the pH where the ligand sequesters the metal most effectively.
In biochemistry equilibrium constants are often measured at a pH fixed by means of a buffer solution. Such constants are, by definition, conditional and different values may be obtained when using different buffers.
Gas-phase equilibria
For equilibria in a gas phase, fugacity, f, is used in place of activity. However, fugacity has the dimension of pressure, so it must be divided by a standard pressure, usually 1 bar, in order to produce the dimensionless quantity f/p⊖. An equilibrium constant is expressed in terms of this dimensionless quantity. For example, for the equilibrium 2 NO2 ⇌ N2O4,

K = (f(N2O4)/p⊖) / (f(NO2)/p⊖)^2
Fugacity is related to partial pressure, p, by a dimensionless fugacity coefficient ϕ: f = ϕp. Thus, for the example,

K = [ϕ(N2O4) p(N2O4) / p⊖] / [ϕ(NO2) p(NO2) / p⊖]^2
Usually the standard pressure is omitted from such expressions. Expressions for equilibrium constants in the gas phase then resemble the expression for solution equilibria with fugacity coefficient in place of activity coefficient and partial pressure in place of concentration.
Thermodynamic basis for equilibrium constant expressions
Thermodynamic equilibrium is characterized by the free energy for the whole (closed) system being a minimum. For systems at constant temperature and pressure the Gibbs free energy is minimum. The slope of the reaction free energy with respect to the extent of reaction, ξ, is zero when the free energy is at its minimum value.
The free energy change, dGr, can be expressed as a weighted sum of change in amount times the chemical potential, the partial molar free energy of the species. The chemical potential, μi, of the ith species in a chemical reaction is the partial derivative of the free energy with respect to the number of moles of that species, Ni:

μi = (∂G/∂Ni)T,P,Nj≠i
A general chemical equilibrium can be written as

n1 Reactant1 + n2 Reactant2 + ... ⇌ m1 Product1 + m2 Product2 + ...

where nj are the stoichiometric coefficients of the reactants in the equilibrium equation, and mj are the coefficients of the products. At equilibrium

Σj nj μj = Σk mk μk
The chemical potential, μi, of the ith species can be calculated in terms of its activity, ai:

μi = μi⊖ + RT ln ai

where μi⊖ is the standard chemical potential of the species, R is the gas constant and T is the temperature. Setting the sum for the reactants j to be equal to the sum for the products, k, so that δGr(Eq) = 0:

Σj nj(μj⊖ + RT ln aj) = Σk mk(μk⊖ + RT ln ak)
Rearranging the terms,

Σk mk μk⊖ − Σj nj μj⊖ = −RT ln[(Πk ak^mk)/(Πj aj^nj)]

that is, ΔG⊖ = −RT ln K. This relates the standard Gibbs free energy change, ΔG⊖, to an equilibrium constant, K, the reaction quotient of activity values at equilibrium:

K = (Πk ak^mk)/(Πj aj^nj)
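As a numerical illustration of the relation just derived, here is a minimal sketch with hypothetical standard chemical potentials for a reaction A + B ⇌ AB (none of the values come from the article):

```python
import math

R, T = 8.314, 298.0  # J/(mol*K), K

# Hypothetical standard chemical potentials (J/mol)
mu_standard = {"A": -50e3, "B": -30e3, "AB": -95e3}

# dG = sum over products minus sum over reactants
delta_G_standard = mu_standard["AB"] - (mu_standard["A"] + mu_standard["B"])
K = math.exp(-delta_G_standard / (R * T))
print(delta_G_standard, K)  # -15000.0 J/mol, K ~ 4.3e2
```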
Equivalence of thermodynamic and kinetic expressions for equilibrium constants
At equilibrium the rate of the forward reaction is equal to the backward reaction rate. A simple reaction, such as ester hydrolysis
AB + H2O ⇌ AH + B(OH)
has reaction rates given by the expressions

forward rate = kf[AB][H2O]
backward rate = kr[AH][B(OH)]

According to Guldberg and Waage, equilibrium is attained when the forward and backward reaction rates are equal to each other. In these circumstances, an equilibrium constant is defined to be equal to the ratio of the forward and backward reaction rate constants:

K = kf/kr = [AH][B(OH)]/([AB][H2O]).

The concentration of water may be taken to be constant, resulting in the simpler expression

Kc = [AH][B(OH)]/[AB].
This particular concentration quotient, , has the dimension of concentration, but the thermodynamic equilibrium constant, , is always dimensionless.
Unknown activity coefficient values
It is very rare for activity coefficient values to have been determined experimentally for a system at equilibrium. There are three options for dealing with the situation where activity coefficient values are not known from experimental measurements.
Use calculated activity coefficients, together with concentrations of reactants. For equilibria in solution estimates of the activity coefficients of charged species can be obtained using Debye–Hückel theory, an extended version, or SIT theory; a minimal sketch of the Debye–Hückel limiting law follows this list. For uncharged species, the activity coefficient γ0 mostly follows a "salting-out" model: log10 γ0 = bI where I stands for ionic strength.
Assume that the activity coefficients are all equal to 1. This is acceptable when all concentrations are very low.
For equilibria in solution use a medium of high ionic strength. In effect this redefines the standard state as referring to the medium. Activity coefficients in the standard state are, by definition, equal to 1. The value of an equilibrium constant determined in this manner is dependent on the ionic strength. When published constants refer to an ionic strength other than the one required for a particular application, they may be adjusted by means of specific ion theory (SIT) and other theories.
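The following sketch implements the Debye–Hückel limiting law mentioned in the first option, assuming aqueous solution at 25 °C, where the limiting-law constant is approximately 0.509 (mol/L)^−1/2; it is valid only at low ionic strength:

```python
import math

A = 0.509  # Debye-Hueckel limiting-law constant, water at 25 °C

def log10_gamma(z, ionic_strength):
    """Limiting-law estimate of log10 of the activity coefficient of an
    ion of charge z at the given ionic strength (mol/L). Low-I only."""
    return -A * z ** 2 * math.sqrt(ionic_strength)

# Activity coefficient of a doubly charged ion at I = 0.01 mol/L
print(10 ** log10_gamma(2, 0.01))  # ~0.63
```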
Dimensionality
An equilibrium constant is related to the standard Gibbs free energy change of reaction, ΔG⊖, for the reaction by the expression

ΔG⊖ = −RT ln K

Therefore, K must be a dimensionless number from which a logarithm can be derived. In the case of a simple equilibrium
A + B ⇌ AB,
the thermodynamic equilibrium constant is defined in terms of the activities, {AB}, {A} and {B}, of the species in equilibrium with each other:

K = {AB}/({A}{B})

Now, each activity term can be expressed as a product of a concentration and a corresponding activity coefficient, γ. Therefore,

K = ([AB]/([A][B])) × (γAB/(γA γB))

When the quotient of activity coefficients, Γ, is set equal to 1, we get

K = [AB]/([A][B])
K then appears to have the dimension of 1/concentration. This is what usually happens in practice when an equilibrium constant is calculated as a quotient of concentration values. This can be avoided by dividing each concentration by its standard-state value (usually mol/L or bar), which is standard practice in chemistry.
The assumption underlying this practice is that the quotient of activities is constant under the conditions in which the equilibrium constant value is determined. These conditions are usually achieved by keeping the reaction temperature constant and by using a medium of relatively high ionic strength as the solvent. It is not unusual, particularly in texts relating to biochemical equilibria, to see an equilibrium constant value quoted with a dimension. The justification for this practice is that the concentration scale used may be either mol dm−3 or mmol dm−3, so that the concentration unit has to be stated in order to avoid there being any ambiguity.
Note. When the concentration values are measured on the mole fraction scale all concentrations and activity coefficients are dimensionless quantities.
In general equilibria between two reagents can be expressed as
p A + q B ⇌ ApBq,

in which case the equilibrium constant is defined, in terms of numerical concentration values, as

K = [ApBq]/([A]^p [B]^q)

The apparent dimension of this K value is concentration^(1−p−q); this may be written as M^(1−p−q) or mM^(1−p−q), where the symbol M signifies a molar concentration (1 M = 1 mol dm−3). The apparent dimension of a dissociation constant is the reciprocal of the apparent dimension of the corresponding association constant, and vice versa.
When discussing the thermodynamics of chemical equilibria it is necessary to take dimensionality into account. There are two possible approaches.
Set the dimension of Γ, the quotient of activity coefficients, to be the reciprocal of the dimension of the concentration quotient. This is almost universal practice in the field of stability constant determinations. The "equilibrium constant" K is then dimensionless. It will be a function of the ionic strength of the medium used for the determination. Setting the numerical value of Γ to be 1 is equivalent to re-defining the standard states.
Replace each concentration term [X] by the dimensionless quotient [X]/c⊖, where c⊖ is the concentration of reagent X in its standard state (usually 1 mol/L or 1 bar). By definition the numerical value of c⊖ is 1, so dividing by it removes the dimension while leaving each numerical value unchanged; a numerical sketch follows below.
In both approaches the numerical value of the stability constant is unchanged. The first is more useful for practical purposes; in fact, the unit of the concentration quotient is often attached to a published stability constant value in the biochemical literature. The second approach is consistent with the standard exposition of Debye–Hückel theory, where , etc. are taken to be pure numbers.
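A minimal numerical sketch of the second approach, with hypothetical concentrations (in mol/L) for the simple equilibrium A + B ⇌ AB:

```python
# Dividing every concentration by the standard-state value c0 = 1 mol/L
# removes the apparent dimension without changing the number.
c0 = 1.0                       # standard-state concentration, mol/L
AB, A, B = 2.0e-3, 1.0e-4, 5.0e-4

K_apparent = AB / (A * B)      # apparent units of 1/concentration, L/mol
K_dimensionless = (AB / c0) / ((A / c0) * (B / c0))

print(K_apparent, K_dimensionless)  # both print 40000.0
```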
Water as both reactant and solvent
For reactions in aqueous solution, such as an acid dissociation reaction
AH + H2O ⇌ A− + H3O+
the concentration of water may be taken as being constant and the formation of the hydronium ion is implicit.
AH ⇌ A− + H+
Water concentration is omitted from expressions defining equilibrium constants, except when solutions are very concentrated.
K = [A−][H+]/[AH]  (K defined as a dissociation constant)
Similar considerations apply to metal ion hydrolysis reactions.
Enthalpy and entropy: temperature dependence
If both the equilibrium constant, K, and the standard enthalpy change, ΔH⊖, for a reaction have been determined experimentally, the standard entropy change for the reaction is easily derived. Since ΔG⊖ = ΔH⊖ − TΔS⊖ and ΔG⊖ = −RT ln K,

ΔS⊖ = ΔH⊖/T + R ln K
To a first approximation the standard enthalpy change is independent of temperature. Using this approximation, definite integration of the van 't Hoff equation

d(ln K)/dT = ΔH⊖/(RT^2)

gives

ln K(T2) − ln K(T1) = −(ΔH⊖/R)(1/T2 − 1/T1)
This equation can be used to calculate the value of log K at a temperature, T2, knowing the value at temperature T1.
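A minimal sketch of that temperature correction, assuming a temperature-independent ΔH⊖ (the numbers are hypothetical):

```python
import math

R = 8.314  # J/(mol*K)

def log10_K_at_T2(log10_K_T1, delta_H_standard, T1, T2):
    """Integrated van 't Hoff equation with constant dH (J/mol):
    ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1)."""
    ln_ratio = -(delta_H_standard / R) * (1.0 / T2 - 1.0 / T1)
    return log10_K_T1 + ln_ratio / math.log(10)

# Exothermic example, dH = -50 kJ/mol: K decreases on warming,
# in accordance with Le Chatelier's principle.
print(log10_K_at_T2(10.0, -50e3, 298.0, 323.0))  # ~9.32
```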
The van 't Hoff equation also shows that, for an exothermic reaction (), when temperature increases K decreases and when temperature decreases K increases, in accordance with Le Chatelier's principle. The reverse applies when the reaction is endothermic.
When K has been determined at more than two temperatures, a straight-line fitting procedure may be applied to a plot of ln K against 1/T to obtain a value for ΔH⊖. Error propagation theory can be used to show that, with this procedure, the error on the calculated ΔH⊖ value is much greater than the error on individual log K values. Consequently, K needs to be determined to high precision when using this method. For example, with a silver ion-selective electrode each log K value was determined with a precision of ca. 0.001 and the method was applied successfully.
Standard thermodynamic arguments can be used to show that, more generally, enthalpy will change with temperature:

d(ΔH⊖)/dT = ΔCp⊖

where Cp is the heat capacity at constant pressure.
A more complex formulation
The calculation of K at a particular temperature from a known K at another given temperature can be approached as follows if standard thermodynamic properties are available. The effect of temperature on the equilibrium constant is equivalent to the effect of temperature on Gibbs energy, because

ln K = −ΔrG⊖/(RT)

where ΔrG⊖ is the reaction standard Gibbs energy, which is the sum of the standard Gibbs energies of the reaction products minus the sum of standard Gibbs energies of reactants.
Here, the term "standard" denotes the ideal behaviour (i.e., an infinite dilution) and a hypothetical standard concentration (typically 1 mol/kg). It does not imply any particular temperature or pressure because, although contrary to IUPAC recommendation, it is more convenient when describing aqueous systems over wide temperature and pressure ranges.
The standard Gibbs energy (for each species or for the entire reaction) can be represented (from the basic definitions) as:

G⊖(T) = H⊖(298 K) + ∫ Cp⊖ dT − T [S⊖(298 K) + ∫ (Cp⊖/T) dT]

with both integrals taken from 298 K to T. In the above equation, the effect of temperature on Gibbs energy (and thus on the equilibrium constant) is ascribed entirely to heat capacity. To evaluate the integrals in this equation, the form of the dependence of heat capacity on temperature needs to be known.
If the standard molar heat capacity C can be approximated by some analytic function of temperature (e.g. the Shomate equation), then the integrals involved in calculating other parameters may be solved to yield analytic expressions for them. For example, using approximations of the following forms:
For pure substances (solids, gas, liquid):
For ionic species at :
then the integrals can be evaluated and the following final form is obtained:
The constants A, B, C, a, b and the absolute entropy, S̆, required for evaluation of C(T), as well as the values of G298 K and S298 K for many species are tabulated in the literature.
Pressure dependence
The pressure dependence of the equilibrium constant is usually weak in the range of pressures normally encountered in industry, and therefore, it is usually neglected in practice. This is true for condensed reactant/products (i.e., when reactants and products are solids or liquid) as well as gaseous ones.
For a gaseous-reaction example, one may consider the well-studied reaction of hydrogen with nitrogen to produce ammonia:
N2 + 3 H2 ⇌ 2 NH3
If the pressure is increased by the addition of an inert gas, then neither the composition at equilibrium nor the equilibrium constant are appreciably affected (because the partial pressures remain constant, assuming an ideal-gas behaviour of all gases involved). However, the composition at equilibrium will depend appreciably on pressure when:
the pressure is changed by compression or expansion of the gaseous reacting system, and
the reaction results in the change of the number of moles of gas in the system.
In the example reaction above, the number of moles changes from 4 to 2, and an increase of pressure by system compression will result in appreciably more ammonia in the equilibrium mixture. In the general case of a gaseous reaction:
α A + β B ⇌ σ S + τ T
the change of mixture composition with pressure can be quantified using

Kp = (pS^σ pT^τ)/(pA^α pB^β) = (XS^σ XT^τ)/(XA^α XB^β) × P^(σ+τ−α−β) = KX P^(σ+τ−α−β)

where p denotes the partial pressures and X the mole fractions of the components, P is the total system pressure, Kp is the equilibrium constant expressed in terms of partial pressures and KX is the equilibrium constant expressed in terms of mole fractions.
The above change in composition is in accordance with Le Chatelier's principle and does not involve any change of the equilibrium constant with the total system pressure. Indeed, for ideal-gas reactions Kp is independent of pressure.
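A minimal sketch of the Kp/KX relation for the ammonia example above, with a hypothetical Kp value; since Δn = 2 − 4 = −2, KX grows as P², which is the quantitative content of Le Chatelier's principle here:

```python
def K_X(K_p, P, dn):
    """Mole-fraction constant from the (dimensionless) Kp for ideal
    gases: Kp = KX * P**dn, so KX = Kp * P**(-dn)."""
    return K_p * P ** (-dn)

K_p = 6.0e-2  # hypothetical dimensionless Kp for N2 + 3 H2 <-> 2 NH3
dn = -2       # change in moles of gas: 2 - (1 + 3)
for P in (1.0, 10.0, 100.0):   # total pressure in bar
    print(P, K_X(K_p, P, dn))  # KX: 0.06, 6.0, 600.0
```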
In a condensed phase, the pressure dependence of the equilibrium constant is associated with the reaction volume. For reaction:
α A + β B ⇌ σ S + τ T
the reaction volume is:

ΔV̄ = σ V̄S + τ V̄T − α V̄A − β V̄B

where V̄ denotes a partial molar volume of a reactant or a product.
For the above reaction, one can expect the change of the reaction equilibrium constant (based either on mole-fraction or molal-concentration scale) with pressure at constant temperature to be:

(∂ ln K/∂P)T = −ΔV̄/(RT)
The matter is complicated as partial molar volume is itself dependent on pressure.
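Treating the reaction volume as pressure-independent gives a simple closed form on integration, ln[K(P2)/K(P1)] = −ΔV̄(P2 − P1)/(RT); a minimal sketch with a hypothetical reaction volume:

```python
import math

R = 8.314  # J/(mol*K)

def K_ratio_with_pressure(delta_V, P1, P2, T):
    """K(P2)/K(P1) at constant T, assuming the reaction volume
    delta_V (m^3/mol) does not itself vary with pressure."""
    return math.exp(-delta_V * (P2 - P1) / (R * T))

# Hypothetical reaction volume of -20 cm^3/mol, 1 bar -> 1000 bar, 298 K
delta_V = -20e-6  # m^3/mol
print(K_ratio_with_pressure(delta_V, 1e5, 1e8, 298.0))  # ~2.2
```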
Effect of isotopic substitution
Isotopic substitution can lead to changes in the values of equilibrium constants, especially if hydrogen is replaced by deuterium (or tritium). This equilibrium isotope effect is analogous to the kinetic isotope effect on rate constants, and is primarily due to the change in zero-point vibrational energy of H–X bonds due to the change in mass upon isotopic substitution. The zero-point energy is inversely proportional to the square root of the mass of the vibrating hydrogen atom, and will therefore be smaller for a D–X bond than for an H–X bond.
An example is a hydrogen atom abstraction reaction R' + H–R ⇌ R'–H + R with equilibrium constant KH, where R' and R are organic radicals such that R' forms a stronger bond to hydrogen than does R. The decrease in zero-point energy due to deuterium substitution will then be more important for R'–H than for R–H, and R'–D will be stabilized more than R–D, so that the equilibrium constant KD for R' + D–R ⇌ R'–D + R is greater than KH. This is summarized in the rule "the heavier atom favors the stronger bond".
Similar effects occur in solution for acid dissociation constants (Ka) which describe the transfer of H+ or D+ from a weak aqueous acid to a solvent molecule: HA + H2O ⇌ H3O+ + A− or DA + D2O ⇌ D3O+ + A−. The deuterated acid is studied in heavy water, since if it were dissolved in ordinary water the deuterium would rapidly exchange with hydrogen in the solvent.
The product species H3O+ (or D3O+) is a stronger acid than the solute acid, so that it dissociates more easily, and its H–O (or D–O) bond is weaker than the H–A (or D–A) bond of the solute acid. The decrease in zero-point energy due to isotopic substitution is therefore less important in D3O+ than in DA so that KD < KH, and the deuterated acid in D2O is weaker than the non-deuterated acid in H2O. In many cases the difference of logarithmic constants pKD – pKH is about 0.6, so that the pD corresponding to 50% dissociation of the deuterated acid is about 0.6 units higher than the pH for 50% dissociation of the non-deuterated acid.
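The quoted shift of about 0.6 pK units translates directly into a ratio of dissociation constants; a one-line check:

```python
# pKD - pKH ~ 0.6 implies KH/KD = 10**0.6, i.e. the protiated acid
# is roughly four times stronger than its deuterated analogue.
print(10 ** 0.6)  # ~3.98
```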
For similar reasons the self-ionization of heavy water is less than that of ordinary water at the same temperature.
See also
Determination of equilibrium constants
Stability constants of complexes
Equilibrium fractionation
References
Data sources
IUPAC SC-Database A comprehensive database of published data on equilibrium constants of metal complexes and ligands
NIST Standard Reference Database 46 : Critically selected stability constants of metal complexes
Inorganic and organic acids and bases pKa data in water and DMSO
NASA Glenn Thermodynamic Database webpage with links to (self-consistent) temperature-dependent specific heat, enthalpy, and entropy for elements and molecules
Equilibrium chemistry
Dimensionless numbers of chemistry | Equilibrium constant | Chemistry | 5,648 |
11,306,710 | https://en.wikipedia.org/wiki/Phyllosticta%20coryli | Phyllosticta coryli is a plant pathogen infecting hazelnut.
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Hazelnut tree diseases
coryli
Fungi described in 1872
Fungus species | Phyllosticta coryli | Biology | 45 |
48,979,229 | https://en.wikipedia.org/wiki/NGC%201084 | NGC 1084 is an unbarred spiral galaxy in the constellation Eridanus. It is located at a distance of about 63 million light-years away from the Milky Way. The galaxy was discovered by William Herschel on 10 January 1785. It has multiple spiral arms, which are not well defined. It belongs in the same galaxy group with NGC 988, NGC 991, NGC 1022, NGC 1035, NGC 1042, NGC 1047, NGC 1052 and NGC 1110. This group is in turn associated with the Messier 77 group.
Star formation in the galaxy is chaotic and not confined to the spiral arms, but the rate is not high enough to classify it as a starburst galaxy. Star formation has taken place in small bursts in the last 40 million years. The cause of this activity has been proposed as a merger with a gas-rich dwarf galaxy. A radio source has been detected 3.5' south-west of the galaxy, connected to it by a bridge.
Supernovae
Five supernovae have been observed in NGC 1084:
SN 1963P (type Ia, mag. 14.5) was discovered by Paul Wild on 18 September 1963.
SN 1996an (type II, mag. 14) was discovered by Masakatsu Aoki on 27 July 1996.
SN 1998dl (Type II, mag. 16) was discovered by the Lick Observatory Supernova Search (LOSS) on 20 August 1998.
SN 2009H (Type II, mag. 17.4) was discovered by the Lick Observatory Supernova Search (LOSS) on 2 January 2009.
SN 2012ec (Type IIP, mag. 14.5) was discovered by Berto Monard on 11 August 2012.
References
External links
Unbarred spiral galaxies
Eridanus (constellation)
1084
10464 | NGC 1084 | Astronomy | 376 |
320,340 | https://en.wikipedia.org/wiki/Needle%20and%20syringe%20programmes | A needle and syringe programme (NSP), also known as needle exchange program (NEP), is a social service that allows injection drug users (IDUs) to obtain clean and unused hypodermic needles and associated paraphernalia at little or no cost. It is based on the philosophy of harm reduction that attempts to reduce the risk factors for blood-borne diseases such as HIV/AIDS and hepatitis.
History
Needle-exchange programmes can be traced back to informal activities undertaken during the 1970s. The idea is likely to have been rediscovered in multiple locations. The first government-approved initiative (Netherlands) was undertaken in the early to mid-1980s, followed closely by initiatives in the United Kingdom and Australia by 1986. While the initial programme was motivated by an outbreak of hepatitis B, the AIDS pandemic motivated the rapid adoption of these programmes around the world.
Operation
Needle and syringe programs operate differently in different parts of the world; the first NSPs in Europe and Australia gave out sterile equipment to drug users, having begun in the context of the early AIDS epidemic. The United States took a far more reluctant approach, typically requiring IDUs to present already-used needles to exchange for sterile ones; under this "one-for-one" system, the same number of syringes must be returned as are dispensed.
According to Santa Cruz County, California, exchange staff interviewed by Santa Cruz Local in 2019, it is a common practice not to count the number of exchanged needles exactly, but rather to estimate the number based on a container's volume. Holyoke, Massachusetts, also uses the volume system. The United Nations Office on Drugs and Crime for South Asia suggests visual estimation or asking the client how many they brought back. The volume-based method leaves potential for gaming the system, and an exchange agency in Vancouver devoted significant effort to gaming it.
Some programs, such as Columbus Public Health in Ohio, weigh the returned sharps rather than counting them.
The practices and policies vary between needle and syringe program sites. In addition to exchange, there is a model called "needs-based" where the syringes are handed out without requiring any to be returned.
According to a report published in 1994, Montreal's CACTUS exchange, which has a policy of one-for-one plus one needle, with a limit of 15, had a return rate of 75–80% between 1991 and 1993.
An exchange in Boulder, Colorado, implemented a one-for-one with four starter needles and reported an exchange rate of 89.1% in 1992.
In the United States, where the one-for-one system still dominates, some 25% of injecting drug users are living with HIV; in Australia, which hands out equipment free to anyone needing it (charging only a small fee for some more expensive equipment, such as wheel filters and higher-quality tourniquets), only 1% of the IDU population was HIV-positive as of 2015, compared to over 20% in the late 1980s when NSP programs began to spread nationally and became accessible to most of the population.
International experience
Programs providing sterile needles and syringes currently operate in 87 countries around the world. A comprehensive 2004 study by the World Health Organization (WHO) found a "compelling case that NSPs substantially and cost effectively reduce the spread of HIV among IDUs and do so without evidence of exacerbating injecting drug use at either the individual or societal level." WHO's findings have also been supported by the American Medical Association (AMA), which in 2000 adopted a position strongly supporting NSPs when combined with addiction counseling.
Australia
The Melbourne, Australia, inner-city suburbs of Richmond and Abbotsford are locations in which the use and dealing of heroin have been concentrated. The Burnet Institute research organisation completed the 2013 'North Richmond Public Injecting Impact Study' in collaboration with the Yarra Drug and Health Forum and North Richmond Community Health Centre and recommended 24-hour access to sterile injecting equipment due to the ongoing "widespread, frequent and highly visible" nature of illicit drug use in the areas. Between 2010 and 2012 a four-fold increase in the levels of inappropriately discarded injecting equipment was documented for the two suburbs. In the surrounding City of Yarra, an average of 1,550 syringes per month was collected from public syringe disposal bins in 2012. Paul Dietze stated, "We have tried different measures and the problem persists, so it's time to change our approach".
On 28 May 2013, the Burnet Institute stated that it recommended 24-hour access to sterile injecting equipment in the Melbourne suburb of Footscray after the area's drug culture continued to grow after more than ten years of intense law enforcement efforts. The institute's research concluded that public injecting behaviour is frequent in the area and injecting paraphernalia has been found in carparks, parks, footpaths, and drives. Furthermore, people who inject drugs have broken into syringe disposal bins to reuse discarded equipment.
A study commissioned by the Australian Government revealed that for every A$1 invested in NSPs in Australia, $4 was saved in direct healthcare costs, and if productivity and economic benefits are included, the programs returned $27 for every $1 invested. The study notes that over a longer time horizon than the one considered (10 years) the cost-benefit ratio grows even further. In terms of infections averted and lives saved, the study finds that, between 2000 and 2009, 32,000 HIV infections and 96,667 hepatitis C infections were averted, and approximately 140,000 disability-adjusted life years were gained.
United Kingdom
From the 1980s, Maggie Telfer from the Bristol Drugs Project advocated for needle exchanges to be established in the United Kingdom. The British public body, the National Institute for Health and Care Excellence (NICE), introduced a recommendation in April 2014 due to an increase in the number of young people who inject steroids presenting at UK needle exchanges. NICE previously published needle exchange guidelines in 2009, in which needle and syringe services were not advised for people under 18, but the organisation's director Professor Mike Kelly explained that a "completely different group" of people were presenting at programmes. In the updated guidance, NICE recommended the provision of specialist services for "rapidly increasing numbers of steroid users", and that needles should be provided to people under the age of 18—a first for NICE—following reports of 15-year-old steroid injectors seeking to develop their muscles.
United States
The first program in the United States to be operated at public expense was established in Tacoma, Washington, in November 1988. The Centers for Disease Control and Prevention and the National Institutes of Health confirm that needle exchange is an effective strategy for the prevention of HIV. The NIH estimated in 2002 that in the United States, 15–20% of injection drug users have HIV and at least 70% have hepatitis C. The Centers for Disease Control (CDC) reports one-fifth of all new HIV infections and the vast majority of hepatitis C infections are the result of injection drug use. The United States Department of Health and Human Services reports 7%, or 2,400 cases, of HIV infections in 2018 were among drug users.
Portland, Oregon, was the first city in the nation to expend public funds on an NSP, which opened in 1989. It is also one of the longest-running programmes in the country. Despite the word "exchange" in the programme name, the Portland needle exchange operated by Multnomah County hands out syringes to addicts who do not present any to exchange. The exchange programme reports that 70% of its users are transients who experience "homelessness or unstable housing". It was reported that during the fiscal year 2015–2016, the county dispensed 2,478,362 syringes and received 2,394,460 back, a shortfall of 83,902 needles. In 2016, it was reported that the Cleveland needle exchange program sees "mostly white suburban kids ages 18 to 25".
San Francisco
Since the full sanction of syringe exchange programs (SEP) by then-Mayor Frank Jordan in 1993, the San Francisco Department of Public Health has been responsible for the management of syringe access and the proper disposal of these devices in the city. This sanction, which was originally executed as a state of emergency to address the HIV epidemic, allowed SEPs to provide sterile syringes, take back used devices, and operate as a service for health education to support individuals struggling with substance use disorders. Since then, it was approximated that from July 1, 2017, to December 31, 2017, only 1,672,000 out of the 3,030,000 distributed needles (about 55%) were returned to the designated sites. In April 2018, acting Mayor Mark Farrell allocated $750,000 towards the removal of abandoned needles littering the streets of San Francisco.
General characteristics
As of 2011, at least 221 programmes operated in the US. Most (91%) were legally authorized to operate; 38.2% were managed by their local health authorities. The CDC reported in 1993 that the most significant expense for NSPs is personnel cost, which represents 66% of the budget.
More than 36 million syringes were distributed annually, mostly through large urban programmes operating a stationary site. More generally, US NEPs distribute syringes through a variety of methods including mobile vans, delivery services and backpack/pedestrian routes that include secondary (peer-to-peer) exchange.
Funding
In the United States, a ban on federal funding for needle exchange programs began in 1988, when Republican North Carolina Senator Jesse Helms led Congress to enact a prohibition on the use of federal funds to encourage drug abuse. The ban was briefly lifted in 2009, reinstated in 2010, and partially lifted again in 2015. Currently, federal funds still cannot be used for the purchase of needles and syringes or other injecting paraphernalia by needle exchange programs, though they can be used for training and other program support in the case of a declared public health emergency. In the time between 2010 and 2011 when no ban was in place, at least three programmes were able to obtain federal funds and two-thirds reported planning to pursue such funding. A 1997 study estimated that while the funding ban was in effect, it "may have led to HIV infection among thousands of IDUs, their sexual partners, and their children." US NEPs continue to be funded through a mixture of state and local government funds, supplemented by private donations. The funding ban was effectively lifted for every aspect of the exchanges except the needles themselves in the omnibus spending bill passed in December 2015 and signed by President Obama. This change was first suggested by Kentucky Republicans Hal Rogers and Mitch McConnell, according to their spokespeople.
Legal aspects
Many states criminalized needle possession without a prescription, arresting people as they left underground needle exchange efforts. In some jurisdictions, such as New York, needle exchange activists challenged the laws in court, with judges ruling that their actions were justified by a "necessity defense" which permits breaking of a law to prevent an imminent harm. In other jurisdictions where syringe possession without a prescription remained illegal, physician-based prescription programmes have shown promise. Epidemiological research demonstrating that syringe access programmes are both effective and cost-effective helped to change state and local NEP-operation laws, as well as the status of syringe possession more broadly. For example, between 1989 and 1992, three exchanges in New York City tagged syringes to help demonstrate rates of return prior to the legalization of the approach.
By 2012, legal syringe exchange programmes existed in at least 35 states. In some settings, syringe possession and purchase is decriminalized, while in others, authorized NEP clients are exempt from certain drug paraphernalia laws. However, despite the legal changes, gaps between the formal law and environment mean that many programmes continue to face law enforcement interference and covert programmes continue to exist within the U.S.
Colorado allows covert syringe exchange programmes to operate. Current Colorado laws leave room for interpretation over the requirement of a prescription to purchase syringes. Based on such laws, the majority of pharmacies do not sell syringes without a prescription and police arrest people who possess syringes without a prescription. The Boulder County health department reports that between January 2012 and March 2012, the group received over 45,000 dirty needles and distributed around 45,200 sterile syringes.
As of 2017, NSPs are illegal in 15 states. NSPs are prohibited by local regulations in cities in Orange County, California, even though it is not disallowed by state law in California.
Law enforcement
Conflict with law enforcement
Removal of legal barriers to the operation of NEPs and other syringe access initiatives has been identified as an important part of a comprehensive approach to reducing HIV transmission among IDUs. Legal barriers include both "law on the books" and "law on the streets", i.e., the actual practices of law enforcement officers, which may or may not reflect relevant law. Changes in syringe and drug control policy can be ineffective in reducing such barriers if police continue to treat syringe possession as a crime or participation in NEP as evidence of criminal activity.
Although most US NEPs operate legally, many report some form of police interference. In a 2009 national survey of 111 US NEP managers, 43% reported at least monthly client harassment, 31% at least monthly unauthorized confiscation of clients' syringes, 12% at least monthly client arrest en route to or from NEP and 26% uninvited police appearances at program sites at least every six months. In multivariate modeling, legal status of the program (operating legally vs illegally) and jurisdiction's syringe regulation environment were not associated with frequency of police interference.
A detailed 2011 analysis of NEP client experiences in Los Angeles suggested that as many as 7% of clients report negative encounters with security officers in any given month. Given that syringes are not prohibited in the jurisdiction and their confiscation can only occur as part of an otherwise authorized arrest, almost 40% of those who reported syringe confiscation were not arrested. This raises concerns about extrajudicial confiscation of personal property. Approximately 25% of the encounters detailed by respondents involved private security personnel, rather than local police.
Similar findings have emerged internationally. For example, despite instituting laws protecting syringe access and possession and adopting NEPs, IDUs and sex workers in Mexico's Northern Border regions report frequent syringe confiscation by law enforcement personnel. In this region as well as elsewhere, reports of syringe confiscation are correlated with increases in risky behaviors, such as groin injecting, public injection and utilization of pharmacies. These practices translate to risk for HIV and other blood-borne diseases.
Racial gradient
NEPs serving predominantly IDUs of color may be almost four times more likely to report frequent client arrest en route to or from the program and almost four times more likely to report unauthorized syringe confiscation. A 2005 study in Philadelphia found that African-Americans accessing the city's legally operated exchange decreased at more than twice the rate of white individuals after the initiation of a police anti-drug operation. These and other findings illustrate a possible mechanism by which racial disparities in law enforcement can translate into disparities in HIV transmission. The majority (56%) of respondents reported not documenting adverse police events; those who did were 2.92 times more likely to report unauthorized syringe confiscation. These findings suggest that systematic surveillance and interventions are needed to address police interference.
Causes
Police interference with legal NEP operations may be partially explained by training defects. A study of police officers in an urban police department four years after the decriminalization of syringe purchase and possession in the US state of Rhode Island suggested that up to a third of police officers were not aware that the law had changed. This knowledge gap parallels other areas of public health law, underscoring pervasive gaps in dissemination.
Even police officers with accurate knowledge of the law, however, reported intention to confiscate syringes from drug users as a way to address problematic substance use. Police also reported anxiety about accidental needle sticks and acquiring communicable diseases from IDUs, but were not trained or equipped to deal with this occupational risk; this anxiety was intertwined with negative attitudes towards syringe access initiatives.
Training and interventions to address law enforcement barriers
US NEPs have successfully trained police, especially when framed as addressing police occupational safety and human resources concerns. Preliminary evidence also suggests that training can shift police knowledge and attitudes regarding NEPs specifically and public health-based approaches towards problematic drug use in general.
According to a 2011 survey, 20% of US NEPs reported training police during the previous year. Covered topics included the public health rationale behind NEPs (71%), police occupational health (67%), needle stick injury (62%), NEPs' legal status (57%), and harm reduction philosophy (67%). On average, training was seen as moderately effective, but only four programmes reported conducting any formal evaluation. Assistance with training police was identified by 72% of respondents as the key to improving police relations.
Advocacy
Organizations ranging from the NIH, CDC, the American Bar Association, the American Medical Association, the American Psychological Association, the World Health Organization and many others endorsed low-threshold programmes including needle exchange.
Needle exchange programmes have faced opposition on both political and moral grounds, from advocacy groups including the National District Attorneys Association (NDAA), Drug Watch International, The Heritage Foundation, and Drug Free Australia, as well as from religious organizations such as the Catholic Church.
In the United States NEP programmes have proliferated, despite lack of public acceptance. Internationally, needle exchange is widely accepted.
Research
Disease transmission
Two 2010 'reviews of reviews' by a team originally led by Norah Palmateer that examined systematic reviews and meta-analyses on the topic found insufficient evidence that NSP prevents transmission of the hepatitis C virus, tentative evidence that it prevents transmission of HIV, and sufficient evidence that it reduces self-reported risky injecting behaviour. In a comment Palmateer warned politicians not to use her team's review of reviews as a justification to close existing programmes or to hinder the introduction of new needle-exchange schemes. The weak evidence on the programmes' disease prevention effectiveness is due to inherent design limitations of the reviewed primary studies and should not be interpreted as the programmes lacking preventive effects.
The second of the Palmateer team's 'review of reviews' scrutinised 10 previous formal reviews of needle exchange studies, and after critical appraisal only four reviews were considered rigorous enough to meet the inclusion criteria. Those were done by the teams of Gibson (2001), Wodak and Cooney (2004), Tilson (2007) and Käll (2007). The Palmateer team judged that their conclusion in favour of NSP effectiveness was not consistent with the results from the HIV studies they reviewed.
The Wodak and Cooney review had, from 11 studies of what they determined as demonstrating acceptable rigour, found 6 that were positive regarding the effectiveness of NSPs in preventing HIV, 3 that were negative and 2 inconclusive. However a review by Käll et al. disagreed with the Wodak and Cooney review, reclassifying the studies on NSP effectiveness to 3 positive, 3 negative and 5 inconclusive. The US Institute of Medicine evaluated the conflicting evidence of both Drs Wodak and Käll in their Geneva session and concluded that although multicomponent HIV prevention programmes that include needle and syringe exchange reduced intermediate HIV risk behavior, evidence regarding the effect of needle and syringe exchange alone on HIV incidence was limited and inconclusive, given "myriad design and methodological issues noted in the majority of studies." Four studies that associated needle exchange with reduced HIV prevalence failed to establish a causal link, because they were designed as population studies rather than assessing individuals.
NEPs successfully serve as one component of HIV prevention strategies. Multi-component HIV prevention programmes that include NSE reduce drug-related HIV risk behaviors and enhance the impact of harm reduction services.
Tilson (2007) concluded that only comprehensive packages of services in multi-component prevention programmes can be effective in reducing drug-related HIV risks. In such packages, it is unclear what the relative contribution of needle exchange may be to reductions in risk behavior and HIV incidence.
Multiple examples can be cited showing the relative ineffectiveness of needle exchange programmes alone in stopping the spread of blood-borne disease. Many needle exchange programmes do not make any serious effort to treat drug addiction. For example, David Noffs of the Life Education Center wrote, "I have visited sites around Chicago where people who request info on quitting their habit are given a single sheet on how to go cold turkey—hardly effective treatment or counseling."
A 2013 systematic review found support for the use of NEPs to prevent and treat HIV and HCV infection. A 2014 systematic review and meta-analysis found evidence that NEPs were effective in reducing HIV transmission among injection drug users, but that other harm reduction programmes have probably also contributed to the decrease in HIV incidence. NEPs appear to be as effective in low- and middle-income countries as in high-income ones.
Worker training
Lemon and Shah presented a 2013 paper at the International Congress of Psychiatrists that highlighted lack of training for needle exchange workers and also showed the workers performing a range of tasks beyond contractual obligations, for which they had little support or training. It also showed how needle exchange workers were a common first contact for distressed drug users. Perhaps the most concerning finding was that workers were not legally allowed to provide Naloxone should it be needed.
Drug use
According to a 2022 study by Vanderbilt University economist Analisa Packham, syringe exchange programs reduce HIV rates by 18.2 percent but lead to greater drug use. Syringe exchange programmes increased drug-related mortality rates by 11.7 percent and opioid-related mortality rates by 21.6 percent.
Arguments for and against
Needle disposal
NSPs Do Not Increase Litter: Broad Arguments
Activist groups claim there is no way to ensure that syringes distributed by SEPs will be properly disposed of. Peer-reviewed studies suggest, however, that there are fewer improperly discarded syringes in cities with needle exchange programs than in cities without them. Other studies of similar design find that syringe exchange program drop boxes were associated with an overall decrease in improper syringe disposal (over 98%), and that improper disposal increases with distance from exchange sites. Ethnographic studies find evidence that criminal drug possession laws further increase improper needle disposal, and that reducing the severity of possession laws may improve proper syringe disposal. This corroborates the CDC's own guidelines on syringe disposal, which state that "Studies have found that syringe litter is more likely in areas without SSPs".
NSPs Do Increase Litter: Broad Arguments
On the other hand, there is data to suggest SEPs do increase improper syringe disposal. Opposition groups offer photographic evidence of increased needle litter. Opponents also argue that programs which do not mandate a one-for-one needle exchange encourage the more convenient improper discarding of needles when the programs are not open or are not accepting returns, and that programs allowing unlimited access to needles increase litter to a much higher degree by increasing the total number of needles in circulation. Portland residents in areas where syringe acquisition is unlimited claim to be "drowning in needles", picking up upwards of 100 per week. Opposition groups also argue that government action to increase the number of syringe disposal boxes is slow.
NSPs that strictly adhere to a one-for-one exchange policy and do not furnish starter syringes/needles do not increase the number of needles in circulation.
The few studies that specifically evaluated the effects of NEPs produced "modest" evidence of no impact on improper needle discards and injection frequency and "weak" evidence on lack of impact on numbers of drug users, high-risk user networks and crime trends.
Some NSPs hand out needles without expecting used syringes to be returned. One NSP in Portland, Oregon, hands out syringes without question; neighbors near the NSP routinely find discarded syringes, and the neighborhood organization to which they belong, the University Park neighborhood association, wants the needle handout operation to stop. A local resident who visited an NSP in Chico, California, was handed 100 syringes without question, and the Chico City Council is discussing banning the operation.
A 2003 Australian bi-partisan Federal Parliamentary inquiry published recommendations, registering concern about the lack of accountability of Australia's needle exchanges, and lack of a national program to track needle stick injuries. Community concern about discarded needles and needle stick injury led Australia to allocate $17.5 million in 2003/4 to investigating retractable technology for syringes.
Treatment program enrollment
IDUs risk multiple health problems from non-sterile injecting practices, drug complications and associated lifestyle choices. Unrelated health problems such as diabetes may be neglected because of drug dependence. IDUs are typically reluctant to use conventional health services. Such reluctance and neglect imply poorer health and increased use of emergency services, creating added costs. Harm reduction-based health care centres, also known as targeted or low-threshold health care outlets for IDUs, have been established to address this issue.
NSP staff facilitate connections among people who use drugs and medical facilities, thereby exposing them to voluntary physical, psychological and emotional treatment programmes.
Social services for addicts can be organized around needle exchanges, increasing their accessibility.
Cost effectiveness
As of 2011, the CDC estimated that every HIV infection prevented through a needle exchange program saves more than US$178,000. Separately it reported an overall reduction of 30 percent or more in HIV cases among IDUs.
Proponents
Proponents of harm reduction argue that the provision of a needle exchange provides a social benefit in reducing health costs and also provides a safe means to dispose of used syringes. For example, in the United Kingdom, proponents of SEPs assert that, along with other programmes, they have reduced the spread of HIV among intravenous drug users. These supposed benefits have led to an expansion of these programmes in most jurisdictions that have introduced them, increasing geographical coverage and operating hours. Vending machines that automatically dispense injecting equipment have been successfully introduced.
Other promoted benefits of these programmes include providing a first point of contact for formal drug treatment, access to health and counselling service referrals, the provision of up-to-date information about safe injecting practices, access to contraception and sexual health services, and providing a means for data collection from users about their behaviour and/or drug use patterns. SEP outlets in some settings offer basic primary health care. These are known as 'targeted primary health care outlets', because they primarily target people who inject drugs, and/or 'low-threshold health care outlets', because they reduce common barriers to care found at conventional health care outlets. Clients frequently visit SEP outlets for help accessing sterile injecting equipment. These visits are used opportunistically to offer other health care services.
A clinical trial of needle exchange found that needle exchange did not cause an increase in drug injection.
California Environmental Quality Act (CEQA)
Within California, those opposed to syringe exchange programs have frequently invoked the California Environmental Quality Act (CEQA) as a means to bar syringe exchange programs from operating, citing the environmental impact of improper syringe disposal. Notable examples include SEP opposition within Santa Cruz and Orange County, whose only syringe exchange program, The Orange County Needle Exchange Program (OCNEP), was blocked from operating in October 2019 by an Orange County lawsuit charging the program with creating hazardous conditions and litter for residents. The OCNEP contends that public needle litter has persisted after the shutdown of its program.
Legislation in California signed by governor Gavin Newsom in 2021, AB-1344, aimed to block the use of CEQA to challenge SEPs. The provision states that "Needle and syringe exchange services application submissions, authorizations, and operations performed pursuant to this chapter shall be exempt from review under the California Environmental Quality Act, Division 13 (commencing with Section 21000) of the Public Resources Code."
The provision was passed on the basis of curtailing the opioid epidemic; no part of the bill explicitly addresses the environmental concerns of the plaintiffs.
Scope
In a 1993 mortality study among 415 injection drug users in the Philadelphia area, over four years, 28 died: 5 from HIV-related causes, 7 from overdose, 5 from homicide, 4 from heart disease, 3 from renal failure, 2 from liver disease, 1 from suicide and 1 from cancer.
Community issues
NSP effectiveness studies usually focused on addict health effects; the United States National District Attorneys Association argues that they neglect effects on the broader community.
NSPs may concentrate drug activity into communities in which they operate. Only a small number of short-term studies considered whether NSPs have such effects. To the extent that this happens, they may negatively affect property values, increase localized crime rates and damage broader perceptions about the host community. In 1987 in the Platzspitz park in Zürich "...authorities chose to allow illegal drug use and sales at the park, in an effort to contain Zürich's growing drug problem. Police were not allowed to enter the park or make arrests. Clean needles were given out to addicts as part of the Zürich Intervention Pilot Project, or ZIPP-AIDS program. However, lack of control over what went on in the park caused a multitude of problems. Drug dealers and users arrived from all over Europe, and crime became rampant as dealers fought for control and addicts (who numbered up to 20,000) stole to support their habit."
In Australia, which is considered a leading proponent of harm reduction, a survey showed that one-third of the public believed that NSPs encouraged drug use, and 20% believed that NSPs dispensed drugs.
Diversion
NPR interviewed the syringe exchange program Prevention Point Philadelphia, in Philadelphia, United States, and some of its clients. Prevention Point allows anyone presenting syringes to exchange them for the same quantity without limitation, which has led to drug addicts selling clean syringes to other drug addicts to make drug money. Some drug dealers use the needle exchange to obtain large quantities of needles to sell or give to their drug buyers.
Some participants interviewed by The Baltimore Sun in February 2000 revealed that they sold some of the new syringes obtained from the exchange in order to make drug money, and that the exchange did not always stop needle sharing among drug addicts.
See also
Supervised injection site
References
Addiction medicine
Drug culture
Drug paraphernalia
Drug safety
Harm reduction
Medical hygiene
Infection-control measures
Medical waste
Prevention of HIV/AIDS
Public policy
Public services | Needle and syringe programmes | Chemistry,Biology | 6,393 |
23,783,987 | https://en.wikipedia.org/wiki/Roentgen%20%28unit%29 | The roentgen or röntgen (; symbol R) is a legacy unit of measurement for the exposure of X-rays and gamma rays, and is defined as the electric charge freed by such radiation in a specified volume of air divided by the mass of that air (statcoulomb per kilogram).
In 1928, it was adopted as the first international measurement quantity for ionizing radiation to be defined for radiation protection, as it was then the most easily replicated method of measuring air ionization by using ion chambers. It is named after the German physicist Wilhelm Röntgen, who discovered X-rays and was awarded the first Nobel Prize in Physics for the discovery.
However, although this was a major step forward in standardising radiation measurement, the roentgen has the disadvantage that it is only a measure of air ionisation, and not a direct measure of radiation absorption in other materials, such as different forms of human tissue. For instance, one roentgen deposits 0.00877 grays (0.877 rad) of absorbed dose in dry air, or 0.0096 Gy (0.96 rad) in soft tissue. One roentgen of X-rays may deposit anywhere from 0.01 to 0.04 Gy (1.0 to 4.0 rad) in bone depending on the beam energy.
As the science of radiation dosimetry developed, it was realised that the ionising effect, and hence tissue damage, was linked to the energy absorbed, not just radiation exposure. Consequently new radiometric units for radiation protection were defined which took this into account. In 1953 the International Commission on Radiation Units and Measurements (ICRU) recommended the rad, equal to 100 erg/g, as the unit of measure of the new radiation quantity absorbed dose. The rad was expressed in coherent cgs units. In 1975 the unit gray was named as the SI unit of absorbed dose. One gray is equal to 1 J/kg (i.e. 100 rad). Additionally, a new quantity, kerma, was defined for air ionisation as the exposure for instrument calibration, and from this the absorbed dose can be calculated using known coefficients for specific target materials. Today, for radiation protection, the modern units, absorbed dose for energy absorption and the equivalent dose (sievert) for stochastic effect, are overwhelmingly used, and the roentgen is rarely used. The International Committee for Weights and Measures (CIPM) has never accepted the use of the roentgen.
The roentgen has been redefined over the years. It was last defined by the U.S.'s National Institute of Standards and Technology (NIST) in 1998 as 2.58 × 10−4 C/kg, with a recommendation that the definition be given in every document where the roentgen is used.
History
The roentgen has its roots in the Villard unit defined in 1908 by the American Roentgen Ray Society as "the quantity of radiation which liberates by ionisation one esu of electricity per cm3 of air under normal conditions of temperature and pressure." Using 1 esu ≈ 3.33564 × 10−10 C and the air density of ~1.293 kg/m3 at 0 °C and 101 kPa, this converts to 2.58 × 10−4 C/kg, which is the modern value given by NIST:
(1 esu/cm3) × (3.33564 × 10−10 C/esu) × (1,000,000 cm3/m3) ÷ (1.293 kg/m3) = 2.58 × 10−4 C/kg
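The same arithmetic can be checked with a few lines of code (a minimal sketch; the constant names are illustrative, not from any particular library):

esu_to_coulomb = 3.33564e-10  # 1 esu (statcoulomb) expressed in coulombs
air_density = 1.293           # kg/m3, dry air at 0 °C and 101 kPa
cm3_per_m3 = 1e6              # cubic centimetres per cubic metre

# charge per cm3 of air, converted to charge per kilogram of air
roentgen_in_c_per_kg = 1 * esu_to_coulomb * cm3_per_m3 / air_density
print(round(roentgen_in_c_per_kg, 7))  # prints 0.000258, i.e. 2.58 × 10−4 C/kg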
This definition was used under different names (e, R, and German unit of radiation) for the next 20 years. In the meantime, the French Roentgen was given a different definition which amounted to 0.444 German R.
ICR definitions
In 1928, the International Congress of Radiology (ICR) defined the roentgen as "the quantity of X-radiation which, when the secondary electrons are fully utilised and the wall effect of the chamber is avoided, produces in 1 cc of atmospheric air at 0 °C and 76 cm of mercury pressure such a degree of conductivity that 1 esu of charge is measured at saturation current." The stated 1 cc of air would have a mass of 1.293 mg at the conditions given, so in 1937 the ICR rewrote this definition in terms of this mass of air instead of volume, temperature and pressure. The 1937 definition was also extended to gamma rays, but this was later capped at 3 MeV in 1950.
GOST definition
The USSR all-union committee of standards (GOST) had meanwhile adopted a significantly different definition of the roentgen in 1934. GOST standard 7623 defined it as "the physical dose of X-rays which produces charges each of one electrostatic unit in magnitude per cm3 of irradiated volume in air at 0 °C and normal atmospheric pressure when ionization is complete." The distinction of physical dose from dose caused confusion, some of which may have led Cantrill and Parker to report that the roentgen had become shorthand for 83 ergs per gram (0.0083 Gy) of tissue. They named this derivative quantity the roentgen equivalent physical (rep) to distinguish it from the ICR roentgen.
ICRP definition
The introduction of the roentgen measurement unit, which relied upon measuring the ionisation of air, replaced earlier less accurate practices that relied on timed exposure, film exposure, or fluorescence. This led the way to setting exposure limits, and the National Council on Radiation Protection and Measurements of the United States established the first formal dose limit in 1931 as 0.1 roentgen per day. The International X-ray and Radium Protection Committee, now known as the International Commission on Radiological Protection (ICRP) soon followed with a limit of 0.2 roentgen per day in 1934. In 1950, the ICRP reduced their recommended limit to 0.3 roentgen per week for whole-body exposure.
The International Commission on Radiation Units and Measurements (ICRU) took over the definition of the roentgen in 1950, defining it as "the quantity of X or γ-radiation such that the associated corpuscular emission per 0.001293 gram of air produces, in air, ions carrying 1 electrostatic unit of quantity of electricity of either sign." The 3 MeV cap was no longer part of the definition, but the degraded usefulness of this unit at high beam energies was mentioned in the accompanying text. In the meantime, the new concept of roentgen equivalent man (rem) had been developed.
Starting in 1957, the ICRP began to publish their recommendations in terms of rem, and the roentgen fell into disuse. The medical imaging community still has a need for ionization measurements, but they gradually converted to using C/kg as legacy equipment was replaced. The ICRU recommended redefining the roentgen to be exactly 2.58 × 10−4 C/kg in 1971.
European Union
In 1971 the European Economic Community, in Directive 71/354/EEC, catalogued the units of measure that could be used "for ... public health ... purposes". The directive included the curie, rad, rem, and roentgen as permissible units, but required that the use of the rad, rem and roentgen be reviewed before 31 December 1977. This document defined the roentgen as exactly 2.58 × 10−4 C/kg, as per the ICRU recommendation. Directive 80/181/EEC, published in December 1979, which replaced directive 71/354/EEC, explicitly catalogued the gray, becquerel, and sievert for this purpose and required that the curie, rad, rem and roentgen be phased out by 31 December 1985.
NIST definition
Today the roentgen is rarely used, and the International Committee for Weights and Measures (CIPM) never accepted the use of the roentgen. From 1977 to 1998, the US NIST's translations of the SI brochure stated that the CIPM had temporarily accepted the use of the roentgen (and other radiology units) with SI units since 1969. However, the only related CIPM decision shown in the appendix is with regard to the curie in 1964. The NIST brochures defined the roentgen as 2.58 × 10−4 C/kg, to be employed with exposures of x or γ radiation, but did not state the medium to be ionized. The CIPM's current SI brochure excludes the roentgen from the tables of non-SI units accepted for use with the SI. The US NIST clarified in 1998 that it was providing its own interpretations of the SI system, whereby it accepted the roentgen for use in the US with the SI, while recognizing that the CIPM did not. By then, the limitation to x and γ radiation had been dropped. NIST recommends defining the roentgen in every document where this unit is used. The continued use of the roentgen is strongly discouraged by the NIST.
Development of replacement radiometric quantities
Although a convenient quantity to measure with an air ion chamber, the roentgen had the disadvantage that it was not a direct measure of either the intensity of X-rays or their absorption, but rather was a measurement of the ionising effect of X-rays in a specific circumstance; which was dry air at 0 °C and 1 standard atmosphere of pressure.
Because of this the roentgen had a variable relationship to the amount of energy absorbed dose per unit mass in the target material, as different materials have different absorption characteristics. As the science of radiation dosimetry developed, this was seen as a serious shortcoming.
In 1940, Louis Harold Gray, who had been studying the effect of neutron damage on human tissue, together with William Valentine Mayneord and the radiobiologist John Read, published a paper in which a unit of measure, dubbed the "gram roentgen" (symbol: gr) defined as "that amount of neutron radiation which produces an increment in energy in unit volume of tissue equal to the increment of energy produced in unit volume of water by one roentgen of radiation" was proposed. This unit was found to be equivalent to 88 ergs in air. In 1953 the ICRU recommended the rad, equal to 100 erg/g, as the new unit of measure of absorbed radiation. The rad was expressed in coherent cgs units.
In the late 1950s the General Conference on Weights and Measures (CGPM) invited the ICRU to join other scientific bodies to work with the International Committee for Weights and Measures (CIPM) in the development of a system of units that could be used consistently over many disciplines. This body, initially known as the "Commission for the System of Units", renamed in 1964 as the "Consultative Committee for Units" (CCU), was responsible for overseeing the development of the International System of Units (SI). At the same time it was becoming increasingly obvious that the definition of the roentgen was unsound, and in 1962 it was redefined.
The CCU decided to define the SI unit of absorbed radiation in terms of energy per unit mass, which in MKS units was J/kg. This was confirmed in 1975 by the 15th CGPM, and the unit was named the "gray" in honour of Louis Harold Gray, who had died in 1965. The gray was equal to 100 rad. The definition of the roentgen had had the attraction of being relatively simple to define for photons in air, but the gray is independent of the primary ionizing radiation type, and can be used for both kerma and absorbed dose in a wide range of matter.
When measuring absorbed dose in a human due to external exposure, the SI unit the gray, or the related non-SI rad are used. From these can be developed the dose equivalents to consider biological effects from differing radiation types and target materials. These are equivalent dose, and effective dose for which the SI unit sievert or the non-SI rem are used.
Radiation-related quantities
The following table shows radiation quantities in SI and non-SI units:
See also
Gray (unit) – SI unit of absorbed dose
Orders of magnitude (radiation)
Rad (unit) – cgs unit of absorbed dose
Roentgen equivalent man, or rem – a unit of radiation dose equivalent
Sievert (symbol: Sv) – the SI derived unit of dose equivalent
Wilhelm Röntgen
References
External links
NIST: Units outside the SI
Radiation Dose Units – Health Physics Society
Units of radiation dose
Non-SI metric units | Roentgen (unit) | Mathematics | 2,562 |
46,471,596 | https://en.wikipedia.org/wiki/T%20Persei | T Persei is a red supergiant located in the constellation Perseus. It varies in brightness between magnitudes 8.3 and 9.7 and is considered to be a member of the Double Cluster.
T Persei is a member of the Perseus OB1 association around the h and χ Persei open clusters, around 2 degrees north of the centre of the clusters. It is generally treated as an outlying member of the clusters. It lies half a degree away from S Persei, another red supergiant Double Cluster member.
Vojtěch Šafařík discovered the star's variability in 1882. It was listed with its variable star designation, T Persei, in Annie Jump Cannon's 1907 work Second Catalog of Variable Stars. T Per is a semiregular variable star whose brightness varies from magnitude 8.34 to 9.7 over a period of 2,430 days. Unlike many red supergiants, it does not appear to have a long secondary period. It is relatively inactive for a red supergiant, with a low mass-loss rate and no detectable dust shell.
The Washington Double Star Catalog lists T Persei as having a 9th magnitude companion, derived from Hipparcos measurements. However, no other sources report a companion.
References
M-type supergiants
Perseus (constellation)
Persei, T
014142
Semiregular variable stars
010829
BD+58 439
J02192186+5857403 | T Persei | Astronomy | 313 |
49,376,069 | https://en.wikipedia.org/wiki/Premio%20M%C3%A9xico%20de%20Ciencia%20y%20Tecnolog%C3%ADa | Premio México de Ciencia y Tecnología is an award bestowed by CONACYT on Ibero-American (Latin America plus the Iberian Peninsula) scholars in recognition of advances in science and/or technology. In selecting recipients, particular weight is given to work done at institutions located in Ibero-America.
Award winners
Jacinto Convit, 1990
Juan José Giambiagi, 1991
Johanna Döbereiner, 1992
José Leite Lopes, 1993
Ignacio Rodriguez-Iturbe, 1994
José Luis Massera, 1997
Margarita Salas Falgueras, 1998
Sergio Enrique Ferreira, 1999
Jacob Palis, Jr., 2000
Ricardo Bressani Castignoli, 2001
Martín Schmal, 2002
Constantino Tsallis, 2003
Ginés Morata Pérez, 2004
Avelino Corma Canós, 2005
Antonio García-Bellido y García de Diego, 2006
Ramón Latorre de la Cruz, 2007
Mayana Zatz, 2008
Miguel Ángel Alario y Franco, 2009
Boaventura de Sousa Santos, 2010
Carlos López Otín, 2011
Juan Carlos Castilla Zenobi, 2012
Víctor Alberto Ramos, 2013
Carlos Martínez Alonso, 2014
Andrés Moya, 2015
Rafael Radi Isola, 2016
María Ángela Nieto Toledano, 2017
José W. F. Valle, 2018
See also
CONACYT
History of science and technology in Mexico
National Prize for Arts and Sciences
References
1990 establishments in Mexico
Academic awards
Awards established in 1990
International awards
Science and technology in Mexico
Science and technology awards | Premio México de Ciencia y Tecnología | Technology | 313 |
14,891,677 | https://en.wikipedia.org/wiki/Demolition%20belt | In military terminology, a demolition belt is a selected land area sown with explosive charges, mines, and other available obstacles to deny use of the land to enemy operations, and as a protection to friendly troops.
There are two types of demolition belt:
A primary demolition belt is a continuous series of obstacles across the whole front, selected by the division or higher commander. The preparation of such a belt is normally a priority engineer task.
A subsidiary demolition belt is a supplement to the primary belt to give depth in front or behind or to protect the flanks.
See also
Camouflet
References
Area denial weapons
Force protection tactics
Military terminology of the United States | Demolition belt | Engineering | 131 |
93,427 | https://en.wikipedia.org/wiki/Lazy%20initialization | In computer programming, lazy initialization is the tactic of delaying the creation of an object, the calculation of a value, or some other expensive process until the first time it is needed. It is a kind of lazy evaluation that refers specifically to the instantiation of objects or other resources.
This is typically accomplished by augmenting an accessor method (or property getter) to check whether a private member, acting as a cache, has already been initialized. If it has, it is returned straight away. If not, a new instance is created, placed into the member variable, and returned to the caller just-in-time for its first use.
If objects have properties that are rarely used, this can improve startup speed. Average program performance may be slightly worse in terms of memory (for the condition variables) and execution cycles (to check them), but the impact of object instantiation is spread in time ("amortized") rather than concentrated in the startup phase of a system, and thus median response times can be greatly improved.
In multithreaded code, access to lazy-initialized objects/state must be synchronized to guard against race conditions.
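As an illustration, here is a minimal Python sketch of such synchronization (the Config class and _expensive_load helper are invented for this example):

import threading

class Config:
    def __init__(self):
        self._data = None
        self._lock = threading.Lock()

    @property
    def data(self):
        # Fast path: return the cached value if it already exists.
        if self._data is None:
            with self._lock:
                # Re-check inside the lock: another thread may have
                # initialized the value while we were waiting.
                if self._data is None:
                    self._data = self._expensive_load()
        return self._data

    def _expensive_load(self):
        return {"loaded": True}  # stands in for expensive work

The unsynchronized first check keeps the common path cheap; the second check inside the lock ensures the expensive work runs at most once. This is the double-checked locking idiom shown again in the Java example below.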
The "lazy factory"
In a software design pattern view, lazy initialization is often used together with a factory method pattern. This combines three ideas:
Using a factory method to create instances of a class (factory method pattern)
Storing the instances in a map, and returning the same instance to each request for an instance with same parameters (multiton pattern)
Using lazy initialization to instantiate the object the first time it is requested (lazy initialization pattern)
Examples
ActionScript 3
The following is an example of a class with lazy initialization implemented in ActionScript:
package examples.lazyinstantiation
{
import flash.utils.Dictionary;

public class Fruit
{
private var _typeName:String;
private static var instancesByTypeName:Dictionary = new Dictionary();
public function Fruit(typeName:String):void
{
this._typeName = typeName;
}
public function get typeName():String
{
return _typeName;
}
public static function getFruitByTypeName(typeName:String):Fruit
{
return instancesByTypeName[typeName] ||= new Fruit(typeName);
}
public static function printCurrentTypes():void
{
for each (var fruit:Fruit in instancesByTypeName)
{
// iterates through each value
trace(fruit.typeName);
}
}
}
}
Basic use:
package
{
import examples.lazyinstantiation;
public class Main
{
public function Main():void
{
Fruit.getFruitByTypeName("Banana");
Fruit.printCurrentTypes();
Fruit.getFruitByTypeName("Apple");
Fruit.printCurrentTypes();
Fruit.getFruitByTypeName("Banana");
Fruit.printCurrentTypes();
}
}
}
C
In C, lazy initialization would normally be implemented inside one function, or one source file, using static variables.
In a function:
#include <string.h>
#include <stdlib.h>
#include <stddef.h>
#include <stdio.h>
struct fruit {
char *name;
struct fruit *next;
int number;
/* Other members */
};
struct fruit *get_fruit(char *name) {
static struct fruit *fruit_list;
static int seq;
struct fruit *f;
for (f = fruit_list; f; f = f->next)
if (0 == strcmp(name, f->name))
return f;
if (!(f = malloc(sizeof(struct fruit))))
return NULL;
if (!(f->name = strdup(name))) {
free(f);
return NULL;
}
f->number = ++seq;
f->next = fruit_list;
fruit_list = f;
return f;
}
/* Example code */
int main(int argc, char *argv[]) {
int i;
struct fruit *f;
if (argc < 2) {
fprintf(stderr, "Usage: fruits fruit-name [...]\n");
exit(1);
}
for (i = 1; i < argc; i++) {
if ((f = get_fruit(argv[i]))) {
printf("Fruit %s: number %d\n", argv[i], f->number);
}
}
return 0;
}
Using one source file instead allows the state to be shared between multiple functions, while still hiding it from non-related functions.
fruit.h:
#ifndef _FRUIT_INCLUDED_
#define _FRUIT_INCLUDED_
struct fruit {
char *name;
struct fruit *next;
int number;
/* Other members */
};
struct fruit *get_fruit(char *name);
void print_fruit_list(FILE *file);
#endif /* _FRUIT_INCLUDED_ */
fruit.c:
#include <string.h>
#include <stdlib.h>
#include <stddef.h>
#include <stdio.h>
#include "fruit.h"
static struct fruit *fruit_list;
static int seq;
struct fruit *get_fruit(char *name) {
struct fruit *f;
for (f = fruit_list; f; f = f->next)
if (0 == strcmp(name, f->name))
return f;
if (!(f = malloc(sizeof(struct fruit))))
return NULL;
if (!(f->name = strdup(name))) {
free(f);
return NULL;
}
f->number = ++seq;
f->next = fruit_list;
fruit_list = f;
return f;
}
void print_fruit_list(FILE *file) {
struct fruit *f;
for (f = fruit_list; f; f = f->next)
fprintf(file, "%4d %s\n", f->number, f->name);
}
main.c:
#include <stdlib.h>
#include <stdio.h>
#include "fruit.h"
int main(int argc, char *argv[]) {
int i;
struct fruit *f;
if (argc < 2) {
fprintf(stderr, "Usage: fruits fruit-name [...]\n");
exit(1);
}
for (i = 1; i < argc; i++) {
if ((f = get_fruit(argv[i]))) {
printf("Fruit %s: number %d\n", argv[i], f->number);
}
}
printf("The following fruits have been generated:\n");
print_fruit_list(stdout);
return 0;
}
C#
In .NET Framework 4.0, Microsoft included a Lazy<T> class that can be used for lazy loading.
Below is some dummy code that lazily loads the class Fruit:
var lazyFruit = new Lazy<Fruit>();
Fruit fruit = lazyFruit.Value;
Here is a dummy example in C#.
The Fruit class itself doesn't do anything here; the class variable _typesDictionary is a Dictionary/Map used to store Fruit instances by typeName.
using System;
using System.Collections;
using System.Collections.Generic;
public class Fruit
{
private string _typeName;
private static IDictionary<string, Fruit> _typesDictionary = new Dictionary<string, Fruit>();
private Fruit(string typeName)
{
this._typeName = typeName;
}
public static Fruit GetFruitByTypeName(string type)
{
Fruit fruit;
if (!_typesDictionary.TryGetValue(type, out fruit))
{
// Lazy initialization
fruit = new Fruit(type);
_typesDictionary.Add(type, fruit);
}
return fruit;
}
public static void ShowAll()
{
if (_typesDictionary.Count > 0)
{
Console.WriteLine("Number of instances made = {0}", _typesDictionary.Count);
foreach (KeyValuePair<string, Fruit> kvp in _typesDictionary)
{
Console.WriteLine(kvp.Key);
}
Console.WriteLine();
}
}
public Fruit()
{
// required so the sample compiles
}
}
class Program
{
static void Main(string[] args)
{
Fruit.GetFruitByTypeName("Banana");
Fruit.ShowAll();
Fruit.GetFruitByTypeName("Apple");
Fruit.ShowAll();
// returns pre-existing instance from first
// time Fruit with "Banana" was created
Fruit.GetFruitByTypeName("Banana");
Fruit.ShowAll();
Console.ReadLine();
}
}
A fairly straightforward 'fill-in-the-blanks' example of the lazy initialization design pattern, except that this uses an enumeration for the type:
namespace DesignPatterns.LazyInitialization;
public class LazyFactoryObject
{
// internal collection of items
// IDictionary makes sure they are unique
private IDictionary<LazyObjectSize, LazyObject> _LazyObjectList =
new Dictionary<LazyObjectSize, LazyObject>();
// enum for passing name of size required
// avoids passing strings and is part of LazyObject ahead
public enum LazyObjectSize
{
None,
Small,
Big,
Bigger,
Huge
}
// standard type of object that will be constructed
public struct LazyObject
{
public LazyObjectSize Size;
public IList<int> Result;
}
// takes size and create 'expensive' list
private IList<int> Result(LazyObjectSize size)
{
IList<int> result = null;
switch (size)
{
case LazyObjectSize.Small:
result = CreateSomeExpensiveList(1, 100);
break;
case LazyObjectSize.Big:
result = CreateSomeExpensiveList(1, 1000);
break;
case LazyObjectSize.Bigger:
result = CreateSomeExpensiveList(1, 10000);
break;
case LazyObjectSize.Huge:
result = CreateSomeExpensiveList(1, 100000);
break;
case LazyObjectSize.None:
result = null;
break;
default:
result = null;
break;
}
return result;
}
// not an expensive item to create, but you get the point
// delays creation of some expensive object until needed
private IList<int> CreateSomeExpensiveList(int start, int end)
{
IList<int> result = new List<int>();
for (int counter = 0; counter < (end - start); counter++)
{
result.Add(start + counter);
}
return result;
}
public LazyFactoryObject()
{
// empty constructor
}
public LazyObject GetLazyFactoryObject(LazyObjectSize size)
{
// yes, i know it is illiterate and inaccurate
LazyObject noGoodSomeOne;
// retrieves LazyObjectSize from list via out, else creates one and adds it to list
if (!_LazyObjectList.TryGetValue(size, out noGoodSomeOne))
{
noGoodSomeOne = new LazyObject();
noGoodSomeOne.Size = size;
noGoodSomeOne.Result = this.Result(size);
_LazyObjectList.Add(size, noGoodSomeOne);
}
return noGoodSomeOne;
}
}
C++
This example is in C++.
import std;
class Fruit {
public:
static Fruit* GetFruit(const std::string& type);
static void PrintCurrentTypes();
private:
// Note: constructor private forcing one to use static |GetFruit|.
Fruit(const std::string& type) : type_(type) {}
static std::map<std::string, Fruit*> types;
std::string type_;
};
// static
std::map<std::string, Fruit*> Fruit::types;
// Lazy Factory method, gets the |Fruit| instance associated with a certain
// |type|. Creates new ones as needed.
Fruit* Fruit::GetFruit(const std::string& type) {
auto [it, inserted] = types.emplace(type, nullptr);
if (inserted) {
it->second = new Fruit(type);
}
return it->second;
}
// For example purposes to see pattern in action.
void Fruit::PrintCurrentTypes() {
std::println("Number of instances made = {}", types.size());
for (const auto& [type, fruit] : types) {
std::println("{}", type);
}
std::println();
}
int main() {
Fruit::GetFruit("Banana");
Fruit::PrintCurrentTypes();
Fruit::GetFruit("Apple");
Fruit::PrintCurrentTypes();
// Returns pre-existing instance from first time |Fruit| with "Banana" was
// created.
Fruit::GetFruit("Banana");
Fruit::PrintCurrentTypes();
}
// OUTPUT:
//
// Number of instances made = 1
// Banana
//
// Number of instances made = 2
// Apple
// Banana
//
// Number of instances made = 2
// Apple
// Banana
//
Crystal
class Fruit
private getter type : String
@@types = {} of String => Fruit
def initialize(@type)
end
def self.get_fruit_by_type(type : String)
@@types[type] ||= Fruit.new(type)
end
def self.show_all
puts "Number of instances made: #{@@types.size}"
@@types.each do |type, fruit|
puts "#{type}"
end
puts
end
def self.size
@@types.size
end
end
Fruit.get_fruit_by_type("Banana")
Fruit.show_all
Fruit.get_fruit_by_type("Apple")
Fruit.show_all
Fruit.get_fruit_by_type("Banana")
Fruit.show_all
Output:
Number of instances made: 1
Banana
Number of instances made: 2
Banana
Apple
Number of instances made: 2
Banana
Apple
Haxe
This example is in Haxe.
class Fruit {
private static var _instances = new Map<String, Fruit>();
public var name(default, null):String;
public function new(name:String) {
this.name = name;
}
public static function getFruitByName(name:String):Fruit {
if (!_instances.exists(name)) {
_instances.set(name, new Fruit(name));
}
return _instances.get(name);
}
public static function printAllTypes() {
trace([for(key in _instances.keys()) key]);
}
}

Usage:

class Test {
public static function main () {
var banana = Fruit.getFruitByName("Banana");
var apple = Fruit.getFruitByName("Apple");
var banana2 = Fruit.getFruitByName("Banana");
trace(banana == banana2); // true. same banana
Fruit.printAllTypes(); // ["Banana","Apple"]
}
}
Java
This example is in Java.
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;
public class Program {
/**
* @param args
*/
public static void main(String[] args) {
Fruit.getFruitByTypeName(FruitType.banana);
Fruit.showAll();
Fruit.getFruitByTypeName(FruitType.apple);
Fruit.showAll();
Fruit.getFruitByTypeName(FruitType.banana);
Fruit.showAll();
}
}
enum FruitType {
none,
apple,
banana,
}
class Fruit {
private static Map<FruitType, Fruit> types = new HashMap<>();
/**
* Using a private constructor to force the use of the factory method.
* @param type
*/
private Fruit(FruitType type) {
}
/**
* Lazy Factory method, gets the Fruit instance associated with a certain
* type. Instantiates new ones as needed.
* @param type Any allowed fruit type, e.g. APPLE
* @return The Fruit instance associated with that type.
*/
public static Fruit getFruitByTypeName(FruitType type) {
Fruit fruit;
// This has concurrency issues. Here the read to types is not synchronized,
// so types.put and types.containsKey might be called at the same time.
// Don't be surprised if the data is corrupted.
if (!types.containsKey(type)) {
// Lazy initialisation
fruit = new Fruit(type);
types.put(type, fruit);
} else {
// OK, it's available currently
fruit = types.get(type);
}
return fruit;
}
/**
* Lazy Factory method, gets the Fruit instance associated with a certain
* type. Instantiates new ones as needed. Uses double-checked locking
* pattern for using in highly concurrent environments.
* @param type Any allowed fruit type, e.g. APPLE
* @return The Fruit instance associated with that type.
*/
public static Fruit getFruitByTypeNameHighConcurrentVersion(FruitType type) {
if (!types.containsKey(type)) {
synchronized (types) {
// Check again, after having acquired the lock to make sure
// the instance was not created meanwhile by another thread
if (!types.containsKey(type)) {
// Lazy initialisation
types.put(type, new Fruit(type));
}
}
}
return types.get(type);
}
/**
* Displays all entered fruits.
*/
public static void showAll() {
if (types.size() > 0) {
System.out.println("Number of instances made = " + types.size());
for (Entry<FruitType, Fruit> entry : types.entrySet()) {
String fruit = entry.getKey().toString();
fruit = Character.toUpperCase(fruit.charAt(0)) + fruit.substring(1);
System.out.println(fruit);
}
System.out.println();
}
}
}
Output
Number of instances made = 1
Banana
Number of instances made = 2
Banana
Apple
Number of instances made = 2
Banana
Apple
JavaScript
This example is in JavaScript.
var Fruit = (function() {
var types = {};
function Fruit() {};
// count own properties in object
function count(obj) {
return Object.keys(obj).length;
}
var _static = {
getFruit: function(type) {
if (typeof types[type] == 'undefined') {
types[type] = new Fruit;
}
return types[type];
},
printCurrentTypes: function () {
console.log('Number of instances made: ' + count(types));
for (var type in types) {
console.log(type);
}
}
};
return _static;
})();
Fruit.getFruit('Apple');
Fruit.printCurrentTypes();
Fruit.getFruit('Banana');
Fruit.printCurrentTypes();
Fruit.getFruit('Apple');
Fruit.printCurrentTypes();
Output
Number of instances made: 1
Apple
Number of instances made: 2
Apple
Banana
Number of instances made: 2
Apple
Banana
PHP
Here is an example of lazy initialization in PHP 7.4:
<?php
header('Content-Type: text/plain; charset=utf-8');
class Fruit
{
private string $type;
private static array $types = array();
private function __construct(string $type)
{
$this->type = $type;
}
public static function getFruit(string $type)
{
// Lazy initialization takes place here
if (!isset(self::$types[$type])) {
self::$types[$type] = new Fruit($type);
}
return self::$types[$type];
}
public static function printCurrentTypes(): void
{
echo 'Number of instances made: ' . count(self::$types) . "\n";
foreach (array_keys(self::$types) as $key) {
echo "$key\n";
}
echo "\n";
}
}
Fruit::getFruit('Apple');
Fruit::printCurrentTypes();
Fruit::getFruit('Banana');
Fruit::printCurrentTypes();
Fruit::getFruit('Apple');
Fruit::printCurrentTypes();
/*
OUTPUT:
Number of instances made: 1
Apple
Number of instances made: 2
Apple
Banana
Number of instances made: 2
Apple
Banana
*/
Python
This example is in Python.
class Fruit:
def __init__(self, item: str):
self.item = item
class FruitCollection:
def __init__(self):
self.items = {}
def get_fruit(self, item: str) -> Fruit:
if item not in self.items:
self.items[item] = Fruit(item)
return self.items[item]
if __name__ == "__main__":
fruits = FruitCollection()
print(fruits.get_fruit("Apple"))
print(fruits.get_fruit("Lime"))
Ruby
This example, in Ruby, lazily initializes an authentication token from a remote service like Google. The way that @auth_token is cached is also an example of memoization.
require 'net/http'
class Blogger
def auth_token
@auth_token ||=
(res = Net::HTTP.post_form(uri, params)) &&
get_token_from_http_response(res)
end
# get_token_from_http_response, uri and params are defined later in the class
end
b = Blogger.new
b.instance_variable_get(:@auth_token) # returns nil
b.auth_token # returns token
b.instance_variable_get(:@auth_token) # returns token
Rust
Rust has std::cell::LazyCell in its standard library:
use std::cell::LazyCell;

fn main() {
    let lazy = LazyCell::new(|| 42); // closure is not evaluated yet
    assert_eq!(*lazy, 42); // evaluated here, on first access
}
Scala
Scala has built-in support for lazy variable initialization.
scala> val x = { println("Hello"); 99 }
Hello
x: Int = 99
scala> lazy val y = { println("Hello!!"); 31 }
y: Int = <lazy>
scala> y
Hello!!
res2: Int = 31
scala> y
res3: Int = 31
Smalltalk
This example, in Smalltalk, shows a typical accessor method that returns the value of a variable using lazy initialization.
height
^height ifNil: [height := 2.0].
The 'non-lazy' alternative is to use an initialization method that is run when the object is created and then use a simpler accessor method to fetch the value.
initialize
height := 2.0
height
^height
Note that lazy initialization can also be used in non-object-oriented languages.
Theoretical computer science
In the field of theoretical computer science, lazy initialization (also called a lazy array) is a technique to design data structures that can work with memory that does not need to be initialized. Specifically, assume that we have access to a table T of n uninitialized memory cells (numbered from 1 to n), and want to assign m cells of this array, e.g., we want to assign T[ki] := vi for pairs (k1, v1), ..., (km, vm) with all ki being different. The lazy initialization technique allows us to do this in just O(m) operations, rather than spending O(m+n) operations to first initialize all array cells. The technique is simply to allocate a table V storing the pairs (ki, vi) in some arbitrary order, and to write for each i in the cell T[ki] the position in V where key ki is stored, leaving the other cells of T uninitialized. This can be used to handle queries in the following fashion: when we look up cell T[k] for some k, we can check if T[k] is in the range {1, ..., m}: if it is not, then T[k] is uninitialized. Otherwise, we check V[T[k]], and verify that the first component of this pair is equal to k. If it is not, then T[k] is uninitialized (and just happened by accident to fall in the range {1, ..., m}). Otherwise, we know that T[k] is indeed one of the initialized cells, and the corresponding value is the second component of the pair.
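The bookkeeping described above is short enough to sketch directly. The following Python fragment is an illustrative toy, not a canonical implementation; all names are invented, and Python lists are zero-indexed whereas the description above numbers cells from 1:

import random

class LazyArray:
    """Toy model of the lazy-array technique: T simulates uninitialized
    memory, V stores the (key, value) pairs actually written."""
    def __init__(self, n):
        self.T = [random.randrange(n * 10) for _ in range(n)]  # "garbage" cells
        self.V = []  # O(m) extra space after m assignments

    def assign(self, k, v):
        self.V.append((k, v))
        self.T[k] = len(self.V) - 1  # T[k] points at V's entry for key k

    def lookup(self, k):
        i = self.T[k]
        # T[k] is initialized only if it points into V at a pair whose key is k;
        # a garbage value that happens to land in range fails the key check.
        if 0 <= i < len(self.V) and self.V[i][0] == k:
            return self.V[i][1]
        return None  # cell k was never assigned

table = LazyArray(1000)
table.assign(42, "forty-two")
print(table.lookup(42), table.lookup(7))  # prints: forty-two None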
See also
Double-checked locking
Lazy loading
Proxy pattern
Singleton pattern
References
External links
Article "Java Tip 67: Lazy instantiation - Balancing performance and resource usage" by Philip Bishop and Nigel Warren
Java code examples
Use Lazy Initialization to Conserve Resources
Description from the Portland Pattern Repository
Lazy Initialization of Application Server Services
Lazy Inheritance in JavaScript
Lazy Inheritance in C#
Software design patterns
Programming language comparisons
Articles with example C code
Articles with example C++ code
Articles with example C Sharp code
Articles with example Java code
Articles with example JavaScript code
Articles with example PHP code
Articles with example Python (programming language) code
Articles with example Ruby code
Articles with example Smalltalk code | Lazy initialization | Technology | 5,829 |
55,611 | https://en.wikipedia.org/wiki/Alexandroff%20extension | In the mathematical field of topology, the Alexandroff extension is a way to extend a noncompact topological space by adjoining a single point in such a way that the resulting space is compact. It is named after the Russian mathematician Pavel Alexandroff.
More precisely, let X be a topological space. Then the Alexandroff extension of X is a certain compact space X* together with an open embedding c : X → X* such that the complement of X in X* consists of a single point, typically denoted ∞. The map c is a Hausdorff compactification if and only if X is a locally compact, noncompact Hausdorff space. For such spaces the Alexandroff extension is called the one-point compactification or Alexandroff compactification. The advantages of the Alexandroff compactification lie in its simple, often geometrically meaningful structure and the fact that it is in a precise sense minimal among all compactifications; the disadvantage lies in the fact that it only gives a Hausdorff compactification on the class of locally compact, noncompact Hausdorff spaces, unlike the Stone–Čech compactification which exists for any topological space (but provides an embedding exactly for Tychonoff spaces).
Example: inverse stereographic projection
A geometrically appealing example of one-point compactification is given by the inverse stereographic projection. Recall that the stereographic projection S gives an explicit homeomorphism from the unit sphere minus the north pole (0,0,1) to the Euclidean plane. The inverse stereographic projection S−1 : R2 → S2 is an open, dense embedding into a compact Hausdorff space obtained by adjoining the additional point ∞ = (0,0,1). Under the stereographic projection latitudinal circles z = c get mapped to planar circles of radius √((1+c)/(1−c)). It follows that the deleted neighborhood basis of (0,0,1) given by the punctured spherical caps corresponds to the complements of closed planar disks. More qualitatively, a neighborhood basis at ∞ is furnished by the sets S−1(R2 ∖ K) ∪ {(0,0,1)} as K ranges through the compact subsets of R2. This example already contains the key concepts of the general case.
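For concreteness, one standard formula for the inverse stereographic projection (quoted here for illustration; the argument above does not depend on the particular formula) is, in LaTeX notation:

S^{-1}(x, y) = \left( \frac{2x}{1 + x^2 + y^2},\ \frac{2y}{1 + x^2 + y^2},\ \frac{x^2 + y^2 - 1}{1 + x^2 + y^2} \right)

As x2 + y2 → ∞ this point approaches the north pole (0,0,1), matching the neighborhood basis at ∞ described above.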
Motivation
Let c : X → Y be an embedding from a topological space X to a compact Hausdorff topological space Y, with dense image and one-point remainder {∞} = Y ∖ c(X). Then c(X) is open in a compact Hausdorff space so is locally compact Hausdorff, hence its homeomorphic preimage X is also locally compact Hausdorff. Moreover, if X were compact then c(X) would be closed in Y and hence not dense. Thus a space can only admit a Hausdorff one-point compactification if it is locally compact, noncompact and Hausdorff. Moreover, in such a one-point compactification the image of a neighborhood basis for x in X gives a neighborhood basis for c(x) in c(X), and—because a subset of a compact Hausdorff space is compact if and only if it is closed—the open neighborhoods of ∞ must be all sets obtained by adjoining ∞ to the image under c of a subset of X with compact complement.
The Alexandroff extension
Let X be a topological space. Put X* = X ∪ {∞}, and topologize X* by taking as open sets all the open sets in X together with all sets of the form V = (X ∖ C) ∪ {∞} where C is closed and compact in X. Here, X ∖ C denotes the complement of C in X. Note that V is an open neighborhood of ∞, and thus any open cover of X* will contain all of X* except a compact subset C of X, implying that X* is compact.
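In symbols, the definition above reads, in LaTeX notation:

X^{*} = X \cup \{\infty\}, \qquad \tau_{X^{*}} = \tau_{X} \cup \left\{ (X \setminus C) \cup \{\infty\} : C \subseteq X \text{ closed and compact} \right\}

where τX denotes the topology of X.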
The space X* is called the Alexandroff extension of X (Willard, 19A). Sometimes the same name is used for the inclusion map c : X → X*.
The properties below follow from the above discussion:
The map c is continuous and open: it embeds X as an open subset of X*.
The space X* is compact.
The image c(X) is dense in X*, if X is noncompact.
The space X* is Hausdorff if and only if X is Hausdorff and locally compact.
The space X* is T1 if and only if X is T1.
The one-point compactification
In particular, the Alexandroff extension is a Hausdorff compactification of X if and only if X is Hausdorff, noncompact and locally compact. In this case it is called the one-point compactification or Alexandroff compactification of X.
Recall from the above discussion that any Hausdorff compactification with one-point remainder is necessarily (isomorphic to) the Alexandroff compactification. In particular, if X is a compact Hausdorff space and p is a limit point of X (i.e. not an isolated point of X), X is the Alexandroff compactification of X ∖ {p}.
Let X be any noncompact Tychonoff space. Under the natural partial ordering on the set of equivalence classes of compactifications, any minimal element is equivalent to the Alexandroff extension (Engelking, Theorem 3.5.12). It follows that a noncompact Tychonoff space admits a minimal compactification if and only if it is locally compact.
Non-Hausdorff one-point compactifications
Let X be an arbitrary noncompact topological space. One may want to determine all the compactifications (not necessarily Hausdorff) of X obtained by adding a single point, which could also be called one-point compactifications in this context.
So one wants to determine all possible ways to give X* = X ∪ {∞} a compact topology such that X is dense in it and the subspace topology on X induced from X* is the same as the original topology. The last compatibility condition on the topology automatically implies that X is dense in X*, because X is not compact, so it cannot be closed in a compact space.
Also, it is a fact that the inclusion map c : X → X* is necessarily an open embedding, that is, X must be open in X* and the topology on X* must contain every member of the topology of X.
So the topology on X* is determined by the neighbourhoods of ∞. Any neighborhood of ∞ is necessarily the complement in X* of a closed compact subset of X, as previously discussed.
The topologies on X* that make it a compactification of X are as follows:
The Alexandroff extension of X defined above. Here we take the complements of all closed compact subsets of X as neighborhoods of ∞. This is the largest topology that makes X* a one-point compactification of X.
The open extension topology. Here we add a single neighborhood of ∞, namely the whole space X*. This is the smallest topology that makes X* a one-point compactification of X.
Any topology intermediate between the two topologies above. For neighborhoods of ∞ one has to pick a suitable subfamily of the complements of all closed compact subsets of X; for example, the complements of all finite closed compact subsets, or the complements of all countable closed compact subsets.
Further examples
Compactifications of discrete spaces
The one-point compactification N* of the set of positive integers N is homeomorphic to the space consisting of K = {0} ∪ {1/n | n is a positive integer} with the order topology.
A sequence (an) in a topological space X converges to a point a in X, if and only if the map f : N* → X given by f(n) = an for n in N and f(∞) = a is continuous. Here N has the discrete topology.
Polyadic spaces are defined as topological spaces that are the continuous image of the power of a one-point compactification of a discrete, locally compact Hausdorff space.
Compactifications of continuous spaces
The one-point compactification of n-dimensional Euclidean space Rn is homeomorphic to the n-sphere Sn. As above, the map can be given explicitly as an n-dimensional inverse stereographic projection.
The one-point compactification of the product of κ copies of the half-closed interval [0,1), that is, of [0,1)κ, is (homeomorphic to) [0,1]κ.
Since the closure of a connected subset is connected, the Alexandroff extension of a noncompact connected space is connected. However a one-point compactification may "connect" a disconnected space: for instance the one-point compactification of the disjoint union of a finite number of copies of the interval (0,1) is a wedge of circles.
The one-point compactification of the disjoint union of a countable number of copies of the interval (0,1) is the Hawaiian earring. This is different from the wedge of countably many circles, which is not compact.
Given X compact Hausdorff and C any closed subset of X, the one-point compactification of X ∖ C is X/C, where the forward slash denotes the quotient space.
If X and Y are locally compact Hausdorff, then (X × Y)* = X* ∧ Y*, where ∧ is the smash product. Recall the definition of the smash product: A ∧ B = (A × B) / (A ∨ B), where A ∨ B is the wedge sum, and again, / denotes the quotient space.
As a functor
The Alexandroff extension can be viewed as a functor from the category of topological spaces with proper continuous maps as morphisms to the category whose objects are continuous maps c : X → Y and for which the morphisms from c1 : X1 → Y1 to c2 : X2 → Y2 are pairs of continuous maps f : X1 → X2, g : Y1 → Y2 such that c2 ∘ f = g ∘ c1. In particular, homeomorphic spaces have isomorphic Alexandroff extensions.
See also
Notes
References
General topology
Compactification (mathematics) | Alexandroff extension | Mathematics | 1,867 |
40,600,688 | https://en.wikipedia.org/wiki/Ivlia%20%28ship%29 | Ivlia (bireme) is a modern reconstruction of an ancient Greek rowing warship (galley) with oars at two levels, and is an example of experimental archaeology. Between 1989 and 1994, this vessel undertook six international historical and geographical expeditions, tracing the route of the ancient seafarers.
Ship construction
After processing the available scientific data using ancient illustrations on vases and reliefs, as well as written and archaeological sources, members of the Odesa Archeological Museum, under the direction of Prof. Vladimir N. Stanko, Ph.D., proposed the building of a bireme because, in antiquity, it had been the most widely used vessel in the northern Black Sea region.
The ship was constructed in 1989 at the Sochi Naval Shipyard by a team led by shipwright Damir S. Shkhalakhov. Ivlia was built from Durmast oak and Siberian larch, while the oars were made of beech. The technical design of the project was carried out by specialists of the Nikolayev University of Shipbuilding. The main sponsor of the construction of the ship was the Black Sea Shipping Company.
Expedition route
Starting from Odesa in Ukraine in 1989, Ivlia followed the routes of the ancient mariners on the Black Sea and the Mediterranean Sea as well as the Atlantic Ocean, covering more than 3,000 nautical miles in six expedition seasons and visiting over 50 European ports, finally sailing up the river Seine to reach Paris. To celebrate the completion of the voyages, the Mayor of Paris and future President of France, Jacques Chirac, was received on board the Ivlia.
The expedition's progress was widely covered by international media. During the time of the voyage, hundreds of articles were published, along with dozens of TV and radio reports. The ship was regularly visited by official delegations and thousands of tourists. Ivlia also took part in international maritime festivals: Colombo'92 in Genoa (Italy), Brest’92, Cancal’93, and Vieux Greements’94 (France). Over six seasons the crew members included more than 200 people – citizens of Russia, Ukraine, Moldova, France, Greece and Georgia.
Scientific aspects
The authors of the project, Igor Melnik, Mikhail Agbunov and Pavel Goncharuk, together with the staff of the Odesa Archaeological Museum and the Nikolayev University of Shipbuilding, developed the research program of the expedition primarily to address the following objectives:
Clarification of written and archaeological sources of data on the design, construction technology, and load capacity of ancient Greek ships.
Practical research into the seaworthiness of antique biremes. The bireme performed well, even with tailwinds of up to force 7 on the Beaufort scale.
Verification of cabotage routes of Hellenic sailors, as well as the possibility of galleys in antiquity making voyages on the open sea, out of sight of the coast. In the scientific world, there is continuing debate about how far the routes of ancient mariners were from the coastline. Many scholars believe that ancient seagoing ships were weak, and consequently, their pilots kept close to the shores. However, a coast that was unlit and unequipped with navigation marks, as it was in ancient times, posed far more perils for navigation than the open sea. To this must be added the threat of pirate attacks, since many of the coastal peoples engaged in brigandage.
Clarifying the details of ancient periplus and verifying a range of hypotheses from the project authors to localize the ancient Greek settlement of the North-Western Black Sea region.
Mastering the ancient art of navigation, control of an antique vessel by sail, and methods of mooring and anchoring galleys.
The practical experience gained on Ivlia's expeditions enabled the project authors to affirm:
The level of cartographical advancement and navigational knowledge of the ancient Greeks and the seaworthiness of their ships were likely higher than is commonly believed.
Ancient mariners were likely able to navigate by the stars, make open sea crossings, were capable of sailing away from the coast, were familiar with and used the prevailing winds and currents.
The famous triremes, which were distinguished by their performance in combat, were less seaworthy than the biremes: they were built for battle and used in large naval campaigns.
Biremes were the most common type of vessel used during the great Greek colonization of the Mediterranean. The geographical discoveries of antiquity were made on these vessels, suited as they were to long ocean voyages. As the most seaworthy ships of the time, it is likely that biremes were used by the Carthaginian explorers Himilco and Hanno for explorations beyond the Pillars of Hercules, as well as Pytheas of Massalia's voyage to the legendary island of Thule.
In addition, the research program conducted on board Ivlia included the participation of the Institute of Biology of the Southern Seas, under the leadership of Acad. Y. P. Zaitsev. During the expedition, the density, salinity, transparency and contamination of seawater were regularly measured, along with other environmental parameters; the state of marine flora and fauna was assessed; and a variety of medical experiments were conducted. The data obtained during the six years of voyages are summarized in the articles and books subsequently published by the authors of the project.
Gallery
References
Literature
Bockius, Ronald (2007). Schifffahrt und Schiffbau in der Antike, pp. 52–64. Theiss Verlag.
Casson, Lionel (1991). The Ancient Mariners, ch. 8. Princeton University Press.
Gilles, Daniel (1992). L'Album Souvenir de la Fete Brest'92, pp. 7, 111, 236, 257. Le Chasse Maree. Armen.
Mark, Samuel (2005). Homeric Seafaring. Texas University Press.
Melnik, Igor K. (2010). Historical Maritime Sailing in Models & Reconstruktions, pp. 46–49. Kyiv, Phoenix.
Morrison, John. The Athenian Trireme, pp. 28–30. Cambridge University Press.
"Il Secolo XIX" (Italy), 23.05.1992. "In porto, dopo 3 anni d'odissea, una triremi russa", Giorgio Carrozi.
"La Stampa" (Italy), 31.05.1992 (No. 147). "E'approdata a Sanremo la triremi dell'antica Grecia".
"Il Tirreno" (Italy), 06.05.1992. "Una mostra per l'Ivlia".
"Le Monde" (France), 19.07.1992. "Pavel, galerien d'Odessa", Annick Cojean.
"Revue Thalassa" (France), 07.1992 (No. 3). "Et vogue la galere", pp. 64–65.
"Presse – Ocean (Ouest)" (France), 09.09.1994. "Ivlia se prepare pour une transatlantique", Severine Le Bourhis, p. 15.
"Le Télégramme" (France), 02.08.1994. "La galere antique a la conquete de l'Atlantique", Noel Pochet.
"La Presse de la Manche" (France), 14.08.1993. "Et vogue la galere ukrainienne", Th. Motte, pp. 3–4.
"Le Chasse Maree" (France), 07.1992 (No. 67). "Ivlia, la galere", p. 16.
"Le Marin" (France), 21.05.1993. "Sous le vent de la galere", Cristhine Le Portal.
"Le Parisien" (edition Paris), 16.09.1993. "Une galere antique", Laurent Mauron.
"Libération" (France), 07.12.1993. "Ivlia ou l'Odyssee suspendue", Patrick Le Roux, pp. 28–29.
External links
Official website of the Ivlia project.
Youtube Ivlia Project.
Books of Igor Melnik in electronic form.
Documentary.
X*Legio Project.
Around the World Magazine.
48 Oar Bireme Rows to Sea.
Club Polar Odyssey.
Engineering Concepts applied to Ancient Greek Warships.
Ships of ancient Greece
Navy of ancient Rome
Naval warfare of antiquity
History of Odesa
Replications of ancient voyages
Shipbuilding
Human-powered vehicles
Human-powered transport
History of rowing
Rowing boats
Galleys
1989 ships
Replica ships | Ivlia (ship) | Engineering | 1,810 |
54,959,185 | https://en.wikipedia.org/wiki/Spiroligomer | Spiroligomer molecules (also known as bis-peptides) are synthetic oligomers made by coupling pairs of bis-amino acids into a fused ring system. Spiroligomer molecules are rich in stereochemistry and functionality because of the variety of bis-amino acids that are capable of being incorporated during synthesis. Due to the rigidity of the fused ring system, the three-dimensional shape of a Spiroligomer molecule – as well as the display of any functional groups – can be predicted, allowing for molecular modeling and dynamics.
Synthesis
Spiroligomer molecules are synthesized in a step-wise approach by adding a single bis-amino acid at each stage of the synthesis. This stepwise elongation allows for complete control of the stereochemistry, as any bis-amino acid can be incorporated to continue elongation, or any mono-amino acid can be added to terminate a chain. This can be accomplished using either solution-phase or solid-phase reactions. The original synthesis of Spiroligomer molecules allowed for functionalization at the ends of the oligomers, but it did not allow for the incorporation of functionality on the interior diketopiperazine (DKP) nitrogens. Much work has been done to allow for the functionalization of the entire Spiroligomer molecule, as opposed to just the ends. By exploiting a neighboring group effect, Spiroligomer molecules can be synthesized with a variety of functional groups along the length of the molecule.
Structure
Spiroligomer molecules can be synthesized in any direction, and between any pair of bis-amino acids.
Spiroligomer diketopiperazines can be created between either end of a bis-amino acid.
Spiroligomer molecules are known to be conformationally rigid, due to the fused-ring backbone.
Chemical characteristics
Spiroligomer molecules are peptidomimetics, completely resistant to proteases, and not likely to raise an immune response.
Uses
Spiroligomer molecules have been utilized for a variety of applications, including catalysis, protein binding, metal binding, molecular scaffolds, and charge-transfer studies.
Catalysis
Two unique types of Spiroligomer catalysts (spiroligozymes) have been developed: an esterase mimic and a Claisen catalyst.
Transesterification
The first Spiroligomer catalyst was an esterase-mimic, which catalyzed the transfer of a trifluoroacetate group.
Aromatic Claisen rearrangement
The second Spiroligomer catalyst accelerated an aromatic Claisen rearrangement with a catalytic dyad similar to that found in ketosteroid isomerase.
Protein binding
A Spiroligomer peptidomimetic was designed to mimic p53 and bind HDM2. The molecule enters cells through passive diffusion, and this mimic was shown to stabilize HDM2 in cell culture.
Metal binding
Binuclear metal binding
Molecular scaffolds
Rods used for distance measuring with spin probes.
Electron transfer
Donor-Bridge-Acceptor
Other uses
Possible applications currently under investigation include the binding and inactivation of cholera toxin and the cross-linking of surface proteins of various viruses (HIV, Ebola virus). Furthermore, the group of Christian Schafmeister has developed molecular hinges, which can be used in the construction of molecular machines such as nano-valves or data storage systems.
See also
Molecular engineering
Molecular machine
Molecular nanotechnology
Nanotechnology
References
Oligomers
Amino acids
Molecular modelling
Molecular biology | Spiroligomer | Chemistry,Materials_science,Biology | 745 |
24,573,437 | https://en.wikipedia.org/wiki/Narcissistic%20parent | A narcissistic parent is a parent affected by narcissism or narcissistic personality disorder. Typically, narcissistic parents are exclusively and possessively close to their children and are threatened by their children's growing independence. This results in a pattern of narcissistic attachment, in which the parent believes that the child exists solely to fulfill the parent's needs and wishes. A narcissistic parent will often try to control their children with threats and emotional abuse. Narcissistic parenting adversely affects children's psychological development, affecting their reasoning and their emotional, ethical, and societal behaviors and attitudes. Personal boundaries are often disregarded so the narcissistic parent can mold and manipulate the child to satisfy the parent's expectations.
Narcissistic people have low self-esteem and feel the need to control how others regard them, fearing that otherwise they will be blamed or rejected and that their personal inadequacies will be exposed. Narcissistic parents are self-absorbed, often to the point of grandiosity. They also tend to be inflexible and lack the empathy necessary for child raising.
Characteristics
Narcissism, as described in Sigmund Freud’s clinical study, includes behaviors such as self-aggrandizement, self-esteem, vulnerability, fear of failure, fear of losing people's affection, reliance on defense mechanisms, perfectionism, and interpersonal conflict.
To maintain their self-esteem and protect their vulnerable true selves, narcissists seek to control others' behavior, particularly that of their children, whom they view as extensions of themselves. Thus, narcissistic parents may speak of "carrying the torch", maintaining the family image, or making the mother or father proud. They may reproach their children for exhibiting weakness, being too dramatic, being selfish, or not meeting expectations. Children of narcissists learn to play their part and to show off their special skills, especially in public or for others.
Destructive narcissistic parents have a pattern of consistently needing to be the focus of attention, exaggerating, seeking compliments, and putting their children down. Punishment in the form of blame, criticism or emotional blackmail, and attempts to induce guilt may be used to ensure compliance with the parent's wishes and fuel their need for narcissistic supply.
Children of narcissists
Narcissism tends to play out intergenerationally, with narcissistic parents producing either narcissistic or codependent children. While a self-confident parent, or good-enough parent, can allow a child autonomous development, the narcissistic parent may instead use the child to promote their own image. A parent concerned with self-enhancement, or with being mirrored and admired by their child, may leave the child feeling like a puppet to the parent's emotional and intellectual demands.
Children of a narcissistic parent may not be supportive of others in the home. Observing the parent's behavior, the child learns that manipulation and guilt are effective strategies for getting what they want. The child may also develop a false self and use aggression and intimidation to get their way. Or instead, the child may invest in opposite behaviors if they have observed them among friends and other families. When a child of a narcissistic parent experiences safe, real love or sees the example played out in other families, they may identify and act on the differences between their life and that of a child in a healthy family. For example, volatility and a lack of empathy at home may increase a child's empathy and desire to be respectful. Similarly, intense emotional control and disrespect for boundaries at home may increase a child's value for emotional expression and their desire to extend respect to others. The child observes the narcissistic parent's behavior and is often on the receiving end of that behavior. When an alternative arises to the pain and distress caused at home, the child may choose to focus on more comforting, safety-inducing behaviors.
Some common issues in narcissistic parenting result from a lack of appropriate, responsible nurturing. This may lead to a child feeling empty, feeling insecure in loving relationships, developing fears, mistrusting others, experiencing identity conflict, and developing commitment issues.
Sensitive, guilt-ridden children in the family may learn to meet the parent's needs for gratification and seek love by accommodating the parent's wishes. The child's normal feelings are ignored, denied, and eventually repressed in attempts to gain the parent's "love". Guilt and shame keep the child locked in a developmental arrest. Aggressive impulses and rage may become split off and not integrated with normal development. Some children develop a false self as a defense mechanism and become codependent in relationships. A child's unconscious denial of their true self may perpetuate a cycle of self-hatred, in which they fear any reminder of the authentic self.
Narcissistic parenting may also lead to children being either victims or bullies, having a poor or overly inflated body image, using or abusing drugs or alcohol, or acting out (in a potentially harmful manner) for attention.
In most cases, a narcissist will select one child in the family to be the Golden Child and another child to be the Scapegoat. The Golden Child becomes an extension of the narcissist, who lives vicariously through them. As a result, many golden children do not develop a healthy sense of self and struggle with boundaries. Scapegoats, on the other hand, become the receptacle for all the negative emotions of the narcissistic parent, who blames them for everything that goes wrong in the family.
Short-term and long-term effects
Because of their vulnerability, children are extremely affected by the behavior of a narcissistic parent. A narcissistic parent will often abuse the normal parental role of guiding children and being the primary decision-maker in a child's life, becoming overly possessive and controlling. This possessiveness and excessive control weaken the child; the parent sees the child simply as an extension of the parent. This may affect the child's imagination and level of curiosity, and the child often develops an extrinsic style of motivation. This heightened level of control may be due to the narcissistic parent's need to maintain the child's dependence on them.
Narcissistic parents are quick to anger, putting their children at risk for physical and emotional abuse. To avoid anger and punishment, children of abusive parents often resort to complying with their parent's every demand. This affects both the child's well-being and ability to make logical decisions on their own, and as adults, such individuals often lack self-confidence and the ability to gain control over their lives. Identity crisis, loneliness, and struggle with self-expression are also commonly seen in children raised by a narcissistic parent. The struggle to discover one's self as an adult stems from the substantial amount of projective identification that the now adult experiences as a child. Because of excessive identification with the parent, the child may never get the opportunity to experience their own identity.
Mental health effects
Studies have found that children of narcissistic parents have significantly higher rates of depression and lower self-esteem during adulthood than those who did not perceive their caregivers as narcissistic. The parent's lack of empathy towards their child contributes to this, as the child's desires are often denied, their feelings restrained, and their overall emotional well-being ignored.
Children of narcissistic parents are taught to submit and conform, causing them to lose touch with themselves as individuals. This can leave the child with very few memories of feeling appreciated or loved by their parents for being themselves, as they instead associate love and appreciation with conformity. Children may benefit from distance from the narcissistic parent. Some children of narcissistic parents resort to leaving home during adolescence if they grow to view the relationship with their parent(s) as toxic.
One study indicated that narcissistic parenting behaviours have an impact on children's self-esteem far into adulthood. Many respondents mentioned that they needed the approval or affirmation of others in order to feel competent or deserving, and some said that their sense of self depended entirely on how "successful" they perceived themselves to be in terms of their appearance, social life, or academic or professional accomplishments. Respondents also described how these consequences affected their friendships and romantic relationships as adults, and one participant raised concern about how these effects would affect her children.
See also
References
Further reading
Donaldson-Pressman, S & Pressman, RM The Narcissistic Family: Diagnosis and Treatment (1997)
Miller A The Drama of the Gifted Child, How Narcissistic Parents Form and Deform the Emotional Lives of their Talented Children, Basic Books, Inc (1981)
Payson, Eleanor The Wizard of Oz and Other Narcissists: Coping with the One-Way Relationship in Work, Love, and Family (2002) – see Chapter 5
Family
Narcissism
Parenting
Domestic violence | Narcissistic parent | Biology | 1,900 |
10,238,162 | https://en.wikipedia.org/wiki/Mark%20Goresky | Robert Mark Goresky is a Canadian mathematician who invented intersection homology with his advisor and life partner Robert MacPherson.
Career
Goresky received his Ph.D. from Brown University in 1976. His thesis, titled Geometric Cohomology and Homology of Stratified Objects, was written under the direction of MacPherson. Many of the results in his thesis were published in 1981 by the American Mathematical Society. He has taught at the University of British Columbia in Vancouver, and Northeastern University.
Awards
Goresky received a Sloan Research Fellowship in 1981. He received the Coxeter–James Prize in 1984. In 2002, Goresky and MacPherson were jointly awarded the Leroy P. Steele Prize for Seminal Contribution to Research by the American Mathematical Society.
In 2012 Goresky became a fellow of the American Mathematical Society.
Personal
Goresky's PhD advisor, Robert D. MacPherson, later became his life partner. Their discovery of intersection homology made "both of them famous." After the collapse of the Soviet Union, they were instrumental in channeling aid to Russian mathematicians, especially many who had to hide their sexuality.
Selected publications
Goresky, Mark; MacPherson, Robert, La dualité de Poincaré pour les espaces singuliers, C. R. Acad. Sci. Paris Sér. A-B 284 (1977), no. 24, A1549–A1551.
Goresky, Mark; MacPherson, Robert, Intersection homology theory, Topology 19 (1980), no. 2, 135–162.
Goresky, Mark, Whitney stratified chains and cochains, Trans. Amer. Math. Soc. 267 (1981), 175–196.
Goresky, Mark; MacPherson, Robert, Intersection homology. II, Inventiones Mathematicae 72 (1983), no. 1, 77–129.
Goresky, Mark; MacPherson, Robert, Stratified Morse Theory, Springer Verlag, N. Y. (1989), Ergebnisse vol. 14.
References
External links
Home page
20th-century American mathematicians
21st-century American mathematicians
Canadian mathematicians
Topologists
Brown University alumni
Fellows of the American Mathematical Society
Living people
1950 births
Northeastern University faculty
Academic staff of the University of British Columbia
Canadian expatriate academics in the United States | Mark Goresky | Mathematics | 477 |
27,302,606 | https://en.wikipedia.org/wiki/Legal%20status%20of%20methamphetamine | The production, distribution, and sale of methamphetamine is restricted or illegal in many jurisdictions.
Legal status by country
Legality of similar chemicals
See ephedrine and pseudoephedrine for legal restrictions in place as a result of their use as precursors in the clandestine manufacture of methamphetamine.
References
Drug control law
Methamphetamine
Drug policy by country | Legal status of methamphetamine | Chemistry | 79 |
31,981,546 | https://en.wikipedia.org/wiki/Coral%20Reef%20Initiative%20for%20the%20South%20Pacific | Coral Reef Initiative for the South Pacific (CRISP) is a French inter-ministerial project founded in 2002. Its aims focus on developing a vision for the future for coral reef ecosystems and the communities that depend on them within the French overseas territories and Pacific Island developing countries. Programme coordination is provided by the CRISP Coordination Unit and a programme manager who is supported by scientific counselors. The programme is hosted by the Secretariat of the Pacific Community who is located in Nouméa, New Caledonia. CRISP is under the institutional protection from the Pacific Community and the South Pacific Regional Environment Programme. It is a regional initiative that promotes the protection and sustainable management of the coral reefs of the Pacific island states.
History
During the French-Oceania Summit of 2003, French President Jacques Chirac promoted the idea of bringing together Oceania participants to work towards sustainable development of the Pacific Ocean coral reefs. The initiative's launch was announced in September 2004 during the South Pacific Regional Environment Programme (SPREP) meeting held in Papeete, French Polynesia. At the initial launch, the project was valued at €10 million over the course of three years, involving fifteen Pacific Island countries and three French Pacific territories (New Caledonia, French Polynesia, and Wallis and Futuna), envisioned as a "driving belt" between these locales. The programme's implementation was facilitated by the United Nations Environment Programme (UNEP). Its establishment also reflected a political desire for local oversight in the Pacific region.
Programmes
Some of CRISP's components include integrated coastal and watershed management, and development of coral ecosystems. The ReefBase Pacific project is a collaborative programme with Secretariat of the Pacific Regional Environment Programme (SPREP). International Coral Reef Action Network (IRCAN) projects have also been incorporated into CRISP.
An additional component is educational, such as the Workshop on Economic Evaluation of MPAs that was sponsored by CRISP in 2008. In partnership with SPREP, CRISP also supports activities of various societies such as the Aiga Folau o Samoa (Samoa Voyaging Society), which is promoting the spread of regional awareness in protecting the environment. CRISP provides support to organizations in developing case studies, of which Navakavu Locally Managed Marine Area, Viti Levu, Fiji (2009) is one example. For the Navakavu Locally Managed Marine Area study, CRISP provided biological monitoring test and comparison, as well as fish larvae research. Pacific COREMO (Coral Reef Monitoring) database training of 2009 through the Institute of Marine Resources at the University of the South Pacific included representatives from CRISP, one of its partner organizations. Supporting Kanak traditions, CRISP's partnership with Conservation International provided recommendations and underwater species guides to the Kanak people.
References
External links
Official website
Organizations established in 2002
Coral reefs
Animal welfare organizations based in France
Organizations based in New Caledonia
2002 establishments in France | Coral Reef Initiative for the South Pacific | Biology | 570 |
73,742,058 | https://en.wikipedia.org/wiki/Pestalotiopsis%20pauciseta | Pestalotiopsis pauciseta is an endophytic fungi isolated from the leaves of several medicinal plants in tropical climates. Pestalotiopsis pauciseta is known for its role in medical mycology, having the ability to produce a chemical compound called paclitaxel (taxol). Taxol is the first billion-dollar anticancer drug, notably the fungal-taxol produced by Pestalotiopsis pauciseta was determined to be comparable to standard taxol.
Taxonomy
Pestalotiopsis pauciseta was initially described by Pier Andrea Saccardo as Pestalotia pauciseta in 1914, and was later transferred to the genus Pestalotiopsis by Y.X. Chen and G. Wei in 1993.
Description
Pestalotiopsis pauciseta has amphigenous pustules, which range from globose to lenticular in shape and are usually black, scattered and hemispherical (80–200 μm). Conidiomata are eustromatic, cupulate, separate or confluent, dark brown, initially immersed and later erumpent, thick-walled, and irregularly dehiscent.
Habitat/distribution
Many species of Pestalotiopsis are saprobes in soil, degraders of plant matter, or organisms growing upon rotting wild fruits. Others are plant pathogens or occupy plant leaves and twigs as endophytes. Species of Pestalotiopsis have been repeatedly isolated as saprobes from dead leaves, bark, and twigs. Species have been isolated from polluted stream water and are associated with the deterioration of wood, paper and fabrics and the decay of wool. The genus Pestalotiopsis is known for its plant pathogens; P. pauciseta, isolated as an endophyte, likely has both endophytic and pathogenic stages.
Bioactivity
Fungal-taxol is an anticancer compound that has been developed into a medication used to treat ovarian, lung, breast, and head and neck cancers. The UV absorption spectrum of taxol isolated from Pestalotiopsis pauciseta VM1 was similar to that of standard taxol with maximum absorption at 235 nm and 232 nm.
More than 130 unique compounds have been isolated from various species of Pestalotiopsis. Antifungal, anticancer, antimicrobial, and antitumor activities are some of the most significant bioactivities of the secondary metabolites isolated from this genus. It is suspected that P. pauciseta is one of many fungal plant endophytes with the ability to produce bioactive compounds originally derived from their host plant.
References
pauciseta
Fungus species | Pestalotiopsis pauciseta | Biology | 564 |
5,642,853 | https://en.wikipedia.org/wiki/Gambling%20and%20information%20theory | Statistical inference might be thought of as gambling theory applied to the world around us. The myriad applications for logarithmic information measures tell us precisely how to take the best guess in the face of partial information. In that sense, information theory might be considered a formal expression of the theory of gambling. It is no surprise, therefore, that information theory has applications to games of chance.
Kelly Betting
Kelly betting or proportional betting is an application of information theory to investing and gambling. Its discoverer was John Larry Kelly, Jr.
Part of Kelly's insight was to have the gambler maximize the expectation of the logarithm of his capital, rather than the expected profit from each bet. This is important, since in the latter case, one would be led to gamble all he had when presented with a favorable bet, and if he lost, would have no capital with which to place subsequent bets. Kelly realized that it was the logarithm of the gambler's capital which is additive in sequential bets, and "to which the law of large numbers applies."
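For the simplest case, a bet paying $b$-to-one that is won with probability $p$ and lost with probability $q = 1 - p$ (the standard textbook setting, stated here for concreteness), maximizing the expected logarithm of capital gives the Kelly fraction

$$f^* = \frac{bp - q}{b} = p - \frac{q}{b},$$

the share of current capital to wager; when the edge $bp - q$ is zero or negative, the optimal wager is nothing.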
Side information
A bit is the amount of entropy in a bettable event with two possible outcomes and even odds. Obviously we could double our money if we knew beforehand what the outcome of that event would be. Kelly's insight was that no matter how complicated the betting scenario is, we can use an optimum betting strategy, called the Kelly criterion, to make our money grow exponentially with whatever side information we are able to obtain. The value of this "illicit" side information is measured as mutual information relative to the outcome of the betable event:

$$I(X;Y) = \mathbb{E}_{Y}\!\left[\, D_{\mathrm{KL}}\big( P(X \mid Y, I) \,\|\, P(X \mid I) \big) \,\right],$$
where Y is the side information, X is the outcome of the betable event, and I is the state of the bookmaker's knowledge. This is the average Kullback–Leibler divergence, or information gain, of the a posteriori probability distribution of X given the value of Y relative to the a priori distribution, or stated odds, on X. Notice that the expectation is taken over Y rather than X: we need to evaluate how accurate, in the long term, our side information Y is before we start betting real money on X. This is a straightforward application of Bayesian inference. Note that the side information Y might affect not just our knowledge of the event X but also the event itself. For example, Y might be a horse that had too many oats or not enough water. The same mathematics applies in this case, because from the bookmaker's point of view, the occasional race fixing is already taken into account when he makes his odds.
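As a minimal numerical sketch (the race probabilities below are invented for illustration), the value of a tip can be computed directly as this average Kullback–Leibler divergence:

import math

# Hypothetical two-horse race: the bookmaker's (prior) win probabilities for X.
prior = {"A": 0.5, "B": 0.5}
# A hypothetical tipster signal Y, with p(y) and the posterior p(x|y) it induces.
p_y = {"tip_A": 0.5, "tip_B": 0.5}
posterior = {
    "tip_A": {"A": 0.8, "B": 0.2},
    "tip_B": {"A": 0.2, "B": 0.8},
}

def kl_bits(p, q):
    """Kullback-Leibler divergence D(p || q) in bits."""
    return sum(p[x] * math.log2(p[x] / q[x]) for x in p if p[x] > 0)

# Value of the side information: the average information gain over Y.
info_gain = sum(p_y[y] * kl_bits(posterior[y], prior) for y in p_y)
print(f"I(X;Y) = {info_gain:.3f} bits per race")  # ~0.278 bits

At fair even odds, this figure is also the extra doubling rate per race that a Kelly bettor could extract from the tipster.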
The nature of side information is extremely finicky. We have already seen that it can affect the actual event as well as our knowledge of the outcome. Suppose we have an informer, who tells us that a certain horse is going to win. We certainly do not want to bet all our money on that horse just upon a rumor: that informer may be betting on another horse, and may be spreading rumors just so he can get better odds himself. Instead, as we have indicated, we need to evaluate our side information in the long term to see how it correlates with the outcomes of the races. This way we can determine exactly how reliable our informer is, and place our bets precisely to maximize the expected logarithm of our capital according to the Kelly criterion. Even if our informer is lying to us, we can still profit from his lies if we can find some reverse correlation between his tips and the actual race results.
Doubling rate
Doubling rate in gambling on a horse race is

$$W(b, p) = \sum_{i=1}^{m} p_i \log_2 (b_i o_i),$$

where there are $m$ horses, the probability of the $i$th horse winning being $p_i$, the proportion of wealth bet on the horse being $b_i$, and the odds (payoff) being $o_i$ (e.g., $o_i = 2$ if the $i$th horse winning pays double the amount bet). This quantity is maximized by proportional (Kelly) gambling:

$$b_i = p_i,$$

for which

$$\max_{b} W(b, p) = \sum_{i} p_i \log_2 o_i - H(p),$$

where $H(p)$ is the information entropy of $p$.
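A short sketch (with invented probabilities and odds) checking numerically that the proportional allocation maximizes the doubling rate:

import itertools
import math

p = [0.5, 0.3, 0.2]      # hypothetical win probabilities p_i
odds = [2.0, 4.0, 5.0]   # hypothetical o_i-for-1 payoffs

def doubling_rate(b):
    """W(b, p) = sum_i p_i * log2(b_i * o_i), in bits per race."""
    return sum(pi * math.log2(bi * oi) for pi, bi, oi in zip(p, b, odds))

kelly = doubling_rate(p)  # proportional betting: b_i = p_i
# Compare against a coarse grid of alternative allocations summing to 1.
grid = (b for b in itertools.product([i / 20 for i in range(1, 20)], repeat=3)
        if abs(sum(b) - 1.0) < 1e-9)
best_alternative = max(doubling_rate(b) for b in grid)
print(f"Kelly (b = p): {kelly:.4f} bits/race; best grid alternative: {best_alternative:.4f}")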
Expected gains
An important but simple relation exists between the amount of side information a gambler obtains and the expected exponential growth of his capital (Kelly):

$$\mathbb{E}\big[\log_2 K_t\big] = \log_2 K_0 + \sum_{i=1}^{t} H_i$$

for an optimal betting strategy, where $K_0$ is the initial capital, $K_t$ is the capital after the $t$th bet, and $H_i$ is the amount of side information obtained concerning the $i$th bet (in particular, the mutual information relative to the outcome of each betable event).
This equation applies in the absence of any transaction costs or minimum bets. When these constraints apply (as they invariably do in real life), another important gambling concept comes into play: in a game with negative expected value, the gambler (or unscrupulous investor) must face a certain probability of ultimate ruin, which is known as the gambler's ruin scenario. Note that even food, clothing, and shelter can be considered fixed transaction costs and thus contribute to the gambler's probability of ultimate ruin.
This equation was the first application of Shannon's theory of information outside its prevailing paradigm of data communications (Pierce).
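A toy simulation of this relation (all numbers invented: a fair, even-odds binary event and a tip of assumed reliability q; the log of capital is tracked directly so the values stay in floating-point range):

import math
import random

random.seed(1)
q = 0.8  # assumed reliability of a binary tip Y about the outcome X
# Side information per bet: I(X;Y) = 1 - H(q) for a fair coin and a noisy tip.
info = 1 + q * math.log2(q) + (1 - q) * math.log2(1 - q)
log2_capital, bets = 0.0, 10_000  # log2 of K_t / K_0
for _ in range(bets):
    x = random.random() < 0.5                # fair, even-odds binary event
    y = x if random.random() < q else not x  # noisy side information
    # Kelly: stake fraction q on the tipped side, 1 - q on the other (2-for-1 payout).
    log2_capital += math.log2(2 * (q if y == x else 1 - q))
print(f"I(X;Y) = {info:.3f} bits/bet; realized growth = {log2_capital / bets:.3f} bits/bet")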
Applications for self-information
The logarithmic probability measure self-information or surprisal, whose average is information entropy/uncertainty and whose average difference is KL-divergence, has applications to odds-analysis all by itself. Its two primary strengths are that surprisals: (i) reduce minuscule probabilities to numbers of manageable size, and (ii) add whenever probabilities multiply.
For example, one might say that "the number of states equals two to the number of bits", i.e. #states = 2^(#bits). Here the quantity that's measured in bits is the logarithmic information measure mentioned above. Hence there are N bits of surprisal in landing all heads on one's first toss of N coins.
The additive nature of surprisals, and one's ability to get a feel for their meaning with a handful of coins, can help one put improbable events (like winning the lottery, or having an accident) into context. For example if one out of 17 million tickets is a winner, then the surprisal of winning from a single random selection is about 24 bits. Tossing 24 coins a few times might give you a feel for the surprisal of getting all heads on the first try.
The additive nature of this measure also comes in handy when weighing alternatives. For example, imagine that the surprisal of harm from a vaccination is 20 bits. If the surprisal of catching a disease without it is 16 bits, but the surprisal of harm from the disease if you catch it is 2 bits, then the surprisal of harm from NOT getting the vaccination is only 16+2=18 bits. Whether or not you decide to get the vaccination (e.g. the monetary cost of paying for it is not included in this discussion), you can in that way at least take responsibility for a decision informed to the fact that not getting the vaccination involves more than one bit of additional risk.
More generally, one can relate probability p to bits of surprisal sbits as probability = 1/2^sbits. As suggested above, this is mainly useful with small probabilities. However, Jaynes pointed out that with true-false assertions one can also define bits of evidence ebits as the surprisal against minus the surprisal for. This evidence in bits relates simply to the odds ratio = p/(1−p) = 2^ebits, and has advantages similar to those of self-information itself.
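A few of the figures above, checked in code (the lottery and vaccination numbers are the article's illustrative values):

import math

def surprisal_bits(p):
    """Self-information -log2(p) in bits."""
    return -math.log2(p)

print(surprisal_bits(1 / 17_000_000))  # lottery win: ~24.0 bits
print(24 * surprisal_bits(0.5))        # 24 heads in a row: exactly 24 bits
# Surprisals add when probabilities multiply (vaccination example): 16 + 2 = 18 bits.
print(surprisal_bits(2 ** -16) + surprisal_bits(2 ** -2))
# Evidence in bits from an odds ratio p/(1-p) = 2**ebits:
p = 0.8
print(math.log2(p / (1 - p)))          # 2.0 bits of evidence for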
Applications in games of chance
Information theory can be thought of as a way of quantifying information so as to make the best decision in the face of imperfect information. That is, how to make the best decision using only the information you have available. The point of betting is to rationally assess all relevant variables of an uncertain game/race/match, then compare them to the bookmaker's assessments, which usually come in the form of odds or spreads, and place the proper bet if the assessments differ sufficiently. The area of gambling where this has the most use is sports betting. Sports handicapping lends itself to information theory extremely well because of the availability of statistics. For many years noted economists have tested different mathematical theories using sports as their laboratory, with vastly differing results.
One theory regarding sports betting is that it is a random walk. A random walk is a scenario where new information, prices and returns fluctuate by chance; this is part of the efficient-market hypothesis. The underlying belief of the efficient-market hypothesis is that the market will always make adjustments for any new information. Therefore, no one can beat the market, because everyone is trading on the same information to which the market has already adjusted. However, according to Fama, an efficient market requires three qualities to be met:
There are no transaction costs in trading securities
All available information is costlessly available to all market participants
All agree on the implications of the current information for the current price and distributions of future prices of each security
Statisticians have shown that it is the third condition which allows information theory to be useful in sports handicapping. When not everyone agrees on how information will affect the outcome of the event, differing opinions result.
See also
Principle of indifference
Statistical association football predictions
Advanced NFL Stats
References
External links
Statistical analysis in sports handicapping models
DVOA as an explanatory variable
Gambling mathematics
Wagering
Information theory
Statistical inference | Gambling and information theory | Mathematics,Technology,Engineering | 1,939 |
4,923,477 | https://en.wikipedia.org/wiki/The%20Cheviot%2C%20the%20Stag%20and%20the%20Black%2C%20Black%20Oil | The Cheviot, the Stag and the Black, Black Oil is a play written in the 1970s by Merseyside-born playwright John McGrath. From April 1973, beginning at a venue in Aberdeen (Aberdeen Arts Centre), it was performed in a touring production in community centres on Scotland by 7:84 and other community theatre groups. A television version directed by John Mackenzie was broadcast on 6 June 1974 by the BBC as part of the Play for Today series.
Plot
A musical drama, Cheviot recounts the history of economic change in the Scottish Highlands, from the Highland Clearances in the early 19th century through to the contemporary oil boom at the time of its first production. The 7:84 Touring Theatre Company presents its live stage play to the people of South and North Uist, Benbecula and Lewis. The stage play is mixed with filmed reconstructions of documented events in the Highland Clearances, darkly humorous songs and sketches and, later, interviews with those participating and affected by the North Sea Oil industry in 1974.
Scotland from 270 miles above the Earth. Castle from helicopter. Land mixes with water, seabirds, fiddle music; people enter the presentation in a community hall. Images of giant earth-moving equipment, sheep, stag, gas flare, then the faces of the locals watching the play – some baffled, some sceptical, some participating, particularly in a song sung in Scottish Gaelic...
Each sketch and reconstruction is supported by a continuous narration of facts and statistics, presenting an account of Scottish history from 1746 to 1974. Scenes describe 60 years of poverty, abuse and small scale eviction endured by the crofting tenants of the Highlands from 1746 – "Culloden and all that" – when speaking, singing or writing Scottish Gaelic and the wearing of the plaid were forcibly forbidden by the government.
The sudden expansion of English and Scottish capital and estate enlargement – "more money to buy more land" – at the beginning of the 19th century is outlined next. Patrick Sellar, a factor of the Duke of Sutherland, is introduced. His systemised evictions of the Highlanders were the broadest and most brutal of all the Clearances, and he is evoked as representative of the issues of land ownership in the Highlands and Islands and the north of Scotland. With frequent shots of the audience the play gives dispassionate readings of the equally dispassionate contemporary accounts of the brutality involved in evicting Highland crofting tenants to make way for the more profitable Cheviot, and later Blackface, sheep.
The reasons for the Clearances are explained and how they were enabled for the 'ruling classes' with the connivance of the church, the Law, the police and the military. It details where the people went: often to allotments on the seashore with wretched soil and conditions, where they were supposed to fish and gather kelp for the soda ash industry. It details the economic reasons why the men were often away south for much of the year, trying to find work to pay the rents on their crofts, or in the Highland regiments defending the British Empire. It also details the emigrations to the Victorian slums of Glasgow and to the rest of the world.
The few, but hugely important, successful instances of organised resistance to the evictions feature. It lists political resistance to the evictions such as the Land Leagues of the 1880s, which are contrasted with the Victorian landed gentry's passion for stag hunting; this and the sheep industry now having taken over many millions of acres. The land raids by crofters in the early 20th century are mentioned.
The play briefly mentions the modern day (1974) exploitation of the Highlands by the tourist industry then makes political comparisons between the past and 1974. McGrath explained in 1981: "At the first sniff of oil off the east coast of Scotland, things began to jump. First in Aberdeen and the North-East. Then all over. Suddenly villages that did not merit even an advance factory for 100 workers are being taken over by thousands of men in labour camps building oil-rigs, and oil-production platforms."
Oilmen at Aberdeen are interviewed about conditions, health and safety at work, and wages. These are followed by interviews with American oil bosses and members of the population of Aberdeen on issues such as the unaffordability of housing after the oil boom.
The play details the political history of North Sea oil from the North Sea Gas explorations of 1962, and explores issues of shore and village destruction and pollution, accompanied by shots of refineries and plant. It explains that exploration is now looking to the West and has in fact already started off the Butt of Lewis.
With a final montage of images from 1746 to the Aberdeen riggers, the performers tell audience members that this is their land and urges them to resist exploitation, warning them that they will find the oil corporations even more insensitive than Patrick Sellar.
References
Theatre in Scotland
1973 plays
Highland Clearances
Plays set in the 19th century
Plays set in the 20th century
Plays based on actual events
Plays by John McGrath
Plays set in Scotland
Works about petroleum | The Cheviot, the Stag and the Black, Black Oil | Chemistry | 1,042 |
2,828,566 | https://en.wikipedia.org/wiki/String%20theory%20landscape | In string theory, the string theory landscape (or landscape of vacua) is the collection of possible false vacua, together comprising a collective "landscape" of choices of parameters governing compactifications.
The term "landscape" comes from the notion of a fitness landscape in evolutionary biology. It was first applied to cosmology by Lee Smolin in his book The Life of the Cosmos (1997), and was first used in the context of string theory by Leonard Susskind.
Compactified Calabi–Yau manifolds
In string theory the number of flux vacua is commonly thought to be roughly $10^{500}$, but could be $10^{272{,}000}$ or higher. The large number of possibilities arises from choices of Calabi–Yau manifolds and choices of generalized magnetic fluxes over various homology cycles, found in F-theory.
If there is no structure in the space of vacua, the problem of finding one with a sufficiently small cosmological constant is NP-complete. This is a version of the subset sum problem.
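A toy illustration (a made-up Bousso–Polchinski-style model, not a real compactification) of why the search resembles subset sum: one scans integer flux choices for a combination whose contributions nearly cancel a large bare term, and brute force scales exponentially with the number of fluxes:

import itertools

# Toy model: Lambda = Lambda0 + sum_i n_i^2 * q_i^2, with integer fluxes n_i.
LAMBDA0 = -100.0
q_squared = [3.1, 5.7, 7.3, 11.9, 13.4]  # invented flux charges (squared)

def residual(n):
    return LAMBDA0 + sum(ni * ni * qi for ni, qi in zip(n, q_squared))

best = min(itertools.product(range(6), repeat=len(q_squared)),
           key=lambda n: abs(residual(n)))
print(best, residual(best))  # flux vector with the smallest |Lambda| on the grid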
A possible mechanism of string theory vacuum stabilization, now known as the KKLT mechanism, was proposed in 2003 by Shamit Kachru, Renata Kallosh, Andrei Linde, and Sandip Trivedi.
Fine-tuning by the anthropic principle
Fine-tuning of constants like the cosmological constant or the Higgs boson mass are usually assumed to occur for precise physical reasons as opposed to taking their particular values at random. That is, these values should be uniquely consistent with underlying physical laws.
The number of theoretically allowed configurations has prompted suggestions that this is not the case, and that many different vacua are physically realized. The anthropic principle proposes that fundamental constants may have the values they have because such values are necessary for life (and therefore intelligent observers to measure the constants). The anthropic landscape thus refers to the collection of those portions of the landscape that are suitable for supporting intelligent life.
Weinberg model
In 1987, Steven Weinberg proposed that the observed value of the cosmological constant was so small because it is impossible for life to occur in a universe with a much larger cosmological constant.
Weinberg attempted to predict the magnitude of the cosmological constant based on probabilistic arguments. Other attempts have been made to apply similar reasoning to models of particle physics.
Such attempts are based in the general ideas of Bayesian probability; interpreting probability in a context where it is only possible to draw one sample from a distribution is problematic in frequentist probability but not in Bayesian probability, which is not defined in terms of the frequency of repeated events.
In such a framework, the probability of observing some fundamental parameters $x$ is given by

$$P(x) \propto P_{\mathrm{prior}}(x)\, f(x),$$

where $P_{\mathrm{prior}}(x)$ is the prior probability, from fundamental theory, of the parameters $x$ and $f(x)$ is the "anthropic selection function", determined by the number of "observers" that would occur in the universe with parameters $x$.
These probabilistic arguments are the most controversial aspect of the landscape. Technical criticisms of these proposals have pointed out that:
The function $P_{\mathrm{prior}}(x)$ is completely unknown in string theory and may be impossible to define or interpret in any sensible probabilistic way.
The function $f(x)$ is completely unknown, since so little is known about the origin of life. Simplified criteria (such as the number of galaxies) must be used as a proxy for the number of observers. Moreover, it may never be possible to compute it for parameters radically different from those of the observable universe.
Simplified approaches
Tegmark et al. have recently considered these objections and proposed a simplified anthropic scenario for axion dark matter in which they argue that the first two of these problems do not apply.
Vilenkin and collaborators have proposed a consistent way to define the probabilities for a given vacuum.
A problem with many of the simplified approaches people have tried is that they "predict" a cosmological constant that is too large by 10–1000 orders of magnitude (depending on one's assumptions) and hence suggest that the cosmic acceleration should be much more rapid than is observed.
Interpretation
Few dispute the large number of metastable vacua. The existence, meaning, and scientific relevance of the anthropic landscape, however, remain controversial.
Cosmological constant problem
Andrei Linde, Sir Martin Rees and Leonard Susskind advocate it as a solution to the cosmological constant problem.
Weak scale supersymmetry from the landscape
The string landscape ideas can be applied to the notion of weak scale supersymmetry and the Little Hierarchy problem.
For string vacua which include the MSSM (Minimal Supersymmetric Standard Model) as the low energy effective field theory, all values of the SUSY breaking fields are expected to be equally likely on the landscape. This led Douglas and others to propose that the SUSY breaking scale is distributed as a power law in the landscape,

$$f_{\mathrm{SUSY}} \sim m_{\mathrm{soft}}^{\,2 n_F + n_D - 1},$$

where $n_F$ is the number of F-breaking fields (distributed as complex numbers) and $n_D$ is the number of D-breaking fields (distributed as real numbers). Next, one may impose the Agrawal, Barr, Donoghue, Seckel (ABDS) anthropic requirement that the derived weak scale lie within a factor of a few of our measured value (lest the nuclei needed for life as we know it become unstable: the atomic principle). Combining these effects with a mild power-law draw to large soft SUSY breaking terms, one may calculate the Higgs boson and superparticle masses expected from the landscape. The Higgs mass probability distribution peaks around 125 GeV, while sparticles (with the exception of light higgsinos) tend to lie well beyond current LHC search limits. This approach is an example of the application of stringy naturalness.
Scientific relevance
David Gross suggests that the idea is inherently unscientific, unfalsifiable or premature. A famous debate on the anthropic landscape of string theory is the Smolin–Susskind debate on the merits of the landscape.
Popular reception
There are several popular books about the anthropic principle in cosmology. The authors of two physics blogs, Lubos Motl and Peter Woit, are opposed to this use of the anthropic principle.
See also
Swampland
Extra dimensions
References
External links
String landscape; moduli stabilization; flux vacua; flux compactification on arxiv.org.
Physical cosmology
String theory
Multiverse | String theory landscape | Physics,Astronomy | 1,315 |
7,963,586 | https://en.wikipedia.org/wiki/Anal%20columns | Anal columns (Columns of Morgagni or less commonly Morgagni's columns) are a number of vertical folds, produced by an infolding of the mucous membrane and some of the muscular tissue in the upper half of the lumen of the anal canal. They are named after Giovanni Battista Morgagni, who has several other eponyms named after him.
References
External links
— "The Female Pelvis: The Rectum"
Digestive system | Anal columns | Biology | 96 |
6,904,406 | https://en.wikipedia.org/wiki/MHC%20restriction | MHC-restricted antigen recognition, or MHC restriction, refers to the fact that a T cell can interact with a self-major histocompatibility complex molecule and a foreign peptide bound to it, but will only respond to the antigen when it is bound to a particular MHC molecule.
When foreign proteins enter a cell, they are broken into smaller pieces called peptides. These peptides, also known as antigens, can derive from pathogens such as viruses or intracellular bacteria. Foreign peptides are brought to the surface of the cell and presented to T cells by proteins called the major histocompatibility complex (MHC). During T cell development, T cells go through a selection process in the thymus to ensure that the T cell receptor (TCR) will not recognize MHC molecules presenting self-antigens too strongly, i.e., that its affinity is not too high. High affinity means the T cell will be autoreactive, while no affinity means it will not bind strongly enough to the MHC. The selection process results in developed T cells with specific TCRs that may respond only to certain MHC molecules but not others. The fact that the TCR will recognize only some MHC molecules but not others constitutes "MHC restriction". The biological rationale for MHC restriction is to prevent the generation of supernumerary wandering lymphocytes, thereby saving energy and cell-building materials.
T cells are a type of lymphocyte with a significant role in the immune system, including the activation of other immune cells. T cells recognize foreign peptides through T cell receptors (TCRs) on their surface and then, depending on the type of T cell, perform different roles to defend the host from the foreign peptide, which may have come from pathogens such as bacteria, viruses or parasites. Enforcing the restriction that T cells are activated by peptide antigens only when the antigens are bound to self-MHC molecules, MHC restriction adds another dimension to the specificity of T cell receptors, so that an antigen is recognized only as a peptide-MHC complex.
MHC restriction in T cells occurs during their development in the thymus, specifically during positive selection. Only the thymocytes (developing T cells in the thymus) that are capable of binding, with an appropriate affinity, to the MHC molecules can receive a survival signal and go on to the next level of selection. MHC restriction is significant for the proper functioning of T cells after they leave the thymus, because it allows T cell receptors to bind MHC and detect cells that are infected by intracellular pathogens, that express viral proteins, or that bear genetic defects. Two models explaining how restriction arose are the germline model and the selection model.
The germline model suggests that MHC restriction is a result of evolutionary pressure favoring T cell receptors that are capable of binding to MHC. The selection model suggests that not all T cell receptors show MHC restriction; rather, only the T cell receptors with MHC restriction are expressed after thymic selection. In fact, both hypotheses are reflected in the determination of TCR restriction: both germline-encoded interactions between TCR and MHC and co-receptor interactions with CD4 or CD8 that signal T cell maturation occur during selection.
Introduction
The TCRs of T cells recognize linear peptide antigens only if coupled with a MHC molecule. In other words, the ligands of TCRs are specific peptide-MHC complexes. MHC restriction is particularly important for self-tolerance, which makes sure that the immune system does not target self-antigens. When primary lymphocytes are developing and differentiating in the thymus or bone marrow, T cells die by apoptosis if they express high affinity for self-antigens presented by an MHC molecule or express too low an affinity for self MHC.
T cell maturation involves two distinct developmental stages: positive selection and negative selection. Positive selection ensures that any T cells with a high enough affinity for MHC-bound peptide survive and go on to negative selection, while negative selection induces death in T cells which bind the self-peptide-MHC complex too strongly. Ultimately, the T cells differentiate and mature to become either T helper cells or T cytotoxic cells. At this point the T cells leave the primary lymphoid organ and enter the blood stream.
The interaction between TCRs and peptide-MHC complex is significant in maintaining the immune system against foreign antigens. MHC restriction allows TCRs to detect host cells that are infected by pathogens, contains non-self proteins or bears foreign DNA. However, MHC restriction is also responsible for chronic autoimmune diseases and hypersensitivity.
Structural specificity
The peptide-MHC complex presents a surface that looks like an altered self to the TCR. The surface consisting of two α helices from the MHC and a bound peptide sequence is projected away from the host cell to the T cells, whose TCRs are projected away from the T cells towards the host cells. In contrast with T cell receptors which recognize linear peptide epitopes, B cell receptors recognize a variety of conformational epitopes (including peptide, carbohydrate, lipid and DNA) with specific three-dimensional structures.
Imposition
The imposition of MHC restriction on the highly variable TCR has caused heated debate. Two models have been proposed to explain it. The germline model proposes that MHC restriction is hard-wired in the TCR germline sequence, owing to the co-evolution of TCR and MHC to interact with each other. The selection model suggests that MHC restriction is not a hard-wired property of the germline sequences of TCRs, but is imposed on them by the CD4 and CD8 co-receptors during positive selection. The relative importance of the two models is not yet determined.
Germline model
The germline hypothesis suggests that the ability to bind MHC is intrinsic and encoded within the germline DNA coding for TCRs. This is because evolutionary pressure selects for TCRs that are capable of binding to MHC and selects against those that are not. Since the emergence of TCR and MHC ~500 million years ago, there has been ample opportunity for TCR and MHC to coevolve to recognize each other. Therefore, it is proposed that evolutionary pressure would lead to conserved amino acid sequences at the regions of TCRs that contact MHCs.
Evidence from X-ray crystallography has shown comparable binding topologies between various TCR and MHC-peptide complexes. In addition, conserved interactions between TCR and specific MHCs support the hypothesis that MHC restriction is related to the co-evolution of TCR and MHC to some extent.
Selection model
The selection hypothesis argues that instead of being an intrinsic property, MHC restriction is imposed on the T cells during positive thymic selection after random TCRs are produced. According to this model, T cells are capable of recognizing a variety of peptide epitopes independent of MHC molecules before undergoing thymic selection. During thymic selection, only the T cells with affinity to MHC are signaled to survive after the CD4 or CD8 co-receptors also bind to the MHC molecule. This is called positive selection.
During positive selection, co-receptors CD4 and CD8 initiate a signaling cascade following MHC binding. This involves the recruitment of Lck, a tyrosine kinase essential for T cell maturation that is associated with the cytoplasmic tail of the CD4 or CD8 co-receptors. The selection model argues that Lck is directed to TCRs by the co-receptors CD4 and CD8 when they recognize MHC molecules. Since TCRs interact best with Lck when the TCR and a co-receptor bind the same MHC molecule in a ternary complex, only T cells whose TCRs engage MHC molecules bound by the co-receptors can activate the Lck kinase and receive a survival signal.
Supporting this argument, genetically modified T cells without CD4 and CD8 co-receptors express MHC-independent TCRs. It follows that MHC restriction is imposed by the CD4 and CD8 co-receptors during positive selection.
Reconciliation
A reconciliation of the two models was offered later on suggesting that both co-receptor and germline predisposition to MHC binding play significant roles in imposing MHC restriction. Since only those T cells that are capable of binding to MHCs are selected for during positive selection in the thymus, to some extent evolutionary pressure selects for germline TCR sequences that bind MHC molecules. On the other hand, as suggested by the selection model, T cell maturation requires the TCRs to bind to the same MHC molecules as the CD4 or CD8 co-receptor during T cell selection, thus imposing MHC restriction.
References
External links
Immune system | MHC restriction | Biology | 1,848 |
59,863 | https://en.wikipedia.org/wiki/Correspondence%20principle | In physics, a correspondence principle is any one of several premises or assertions about the relationship between classical and quantum mechanics.
The physicist Niels Bohr coined the term in 1920 during the early development of quantum theory; he used it to explain how quantized classical orbitals connect to quantum radiation.
Modern sources often use the term for the idea that the behavior of systems described by quantum theory reproduces classical physics in the limit of large quantum numbers: for large orbits and for large energies, quantum calculations must agree with classical calculations. A "generalized" correspondence principle refers to the requirement for a broad set of connections between any old and new theory.
History
Max Planck was the first to introduce the idea of quanta of energy, while studying black-body radiation in 1900. In 1906, he was also the first to write that quantum theory should replicate classical mechanics at some limit, particularly if the Planck constant h were taken to be infinitesimal. With this idea, he showed that Planck's law for thermal radiation leads to the Rayleigh–Jeans law, the classical prediction (valid for large wavelength).
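The limit can be checked directly (a standard textbook computation added here for illustration; B_ν denotes the spectral radiance and k_B the Boltzmann constant). In LaTeX notation, Planck's law gives

B_\nu(T) = \frac{2 h \nu^3}{c^2}\,\frac{1}{e^{h\nu/k_B T} - 1} \;\longrightarrow\; \frac{2 \nu^2 k_B T}{c^2} \quad (h \to 0),

since e^x − 1 ≈ x for small x = hν/k_B T; the factors of h cancel, leaving the classical Rayleigh–Jeans law.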
Niels Bohr used a similar idea while developing his model of the atom. In 1913, he provided the first postulates of what is now known as old quantum theory. Using these postulates he showed that, for the hydrogen atom, the energy spectrum approaches the classical continuum for large n (a quantum number that encodes the energy of the orbit). Bohr coined the term "correspondence principle" during a lecture in 1920.
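A short worked version of this limit (standard Bohr-model algebra, not part of the original article; R_E denotes the Rydberg energy): the Bohr levels of hydrogen are E_n = −R_E/n², so the frequency radiated in the jump n → n − 1 is

\nu = \frac{E_n - E_{n-1}}{h} = \frac{R_E}{h}\left(\frac{1}{(n-1)^2} - \frac{1}{n^2}\right) \approx \frac{2 R_E}{h\,n^3} \quad (n \gg 1),

which coincides with the classical orbital frequency of the n-th orbit, so neighbouring spectral lines merge into the classical continuum for large n.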
Arnold Sommerfeld refined Bohr's theory, leading to the Bohr-Sommerfeld quantization condition. In 1921, Sommerfeld referred to the correspondence principle as Bohr's magic wand (Zauberstab).
Bohr's correspondence principle
The seeds of Bohr's correspondence principle appeared from two sources. First, Sommerfeld and Max Born developed a "quantization procedure" based on the action-angle variables of classical Hamiltonian mechanics. This gave a mathematical foundation for stationary states of the Bohr-Sommerfeld model of the atom. The second seed was Albert Einstein's quantum derivation of Planck's law in 1916. Einstein developed the statistical mechanics for Bohr-model atoms interacting with electromagnetic radiation, leading to absorption and two kinds of emission, spontaneous and stimulated emission. But for Bohr the important result was the use of classical analogies and the Bohr atomic model to fix inconsistencies in Planck's derivation of the blackbody radiation formula.
Bohr used the word "correspondence" in italics in lectures and writing before calling it a correspondence principle. He viewed this as a correspondence between quantum motion and radiation, not between classical and quantum theories. He writes in 1920 that there exists "a far-reaching correspondence between the various types of possible transitions between the stationary states on the one hand and the various harmonic components of the motion on the other hand."
Bohr's first article containing the definition of the correspondence principle was in 1923, in a summary paper entitled (in the English translation) "On the application of quantum theory to atomic structure". In his chapter II, "The process of radiation", he defines his correspondence principle as a condition connecting harmonic components of the electron moment to the possible occurrence of a radiative transition. In modern terms, this condition is a selection rule, saying that a given quantum jump is possible if and only if a particular type of motion exists in the corresponding classical model.
Following his definition of the correspondence principle, Bohr describes two applications. First he shows that the frequency of emitted radiation is related to an integral which can be well approximated by a sum when the quantum numbers inside the integral are large compared with their differences. Similarly he shows a relationship for the intensities of spectral lines and thus the rates at which quantum jumps occur.
These asymptotic relationships are expressed by Bohr as consequences of his general correspondence principle. However, historically each of these applications has been called "the correspondence principle".
In his PhD dissertation, Hans Kramers, working in Bohr's group in Copenhagen, applied Bohr's correspondence principle to account for all of the known facts of the spectroscopic Stark effect, including some spectral components not known at the time of Kramers' work.
Sommerfeld had been skeptical of the correspondence principle as it did not seem to be a consequence of a fundamental theory; Kramers' work convinced him that the principle had heuristic utility nevertheless. Other physicists picked up the concept, including work by John Van Vleck and by Kramers and Heisenberg on dispersion theory. The principle became a cornerstone of the semi-classical Bohr-Sommerfeld atomic theory.
Bohr's 1922 Nobel prize was partly awarded for his work with the correspondence principle.
Despite the successes, the physical theories based on the principle faced increasing challenges in the early 1920s. Theoretical calculations by Van Vleck and by Kramers of the ionization potential of helium disagreed significantly with experimental values. Bohr, Kramers, and John C. Slater responded with a new theoretical approach now called the BKS theory, based on the correspondence principle but disavowing conservation of energy. Einstein and Wolfgang Pauli criticized the new approach, and the Bothe–Geiger coincidence experiment showed that energy was conserved in quantum collisions.
With the existing theories in conflict with observations, two new quantum mechanics concepts arose. First, Heisenberg's 1925 Umdeutung paper on matrix mechanics was inspired by the correspondence principle, although he did not cite Bohr. Further development in collaboration with Pascual Jordan and Max Born resulted in a mathematical model without connection to the principle. Second, Schrödinger's wave mechanics in the following year similarly did not use the principle. Both pictures were later shown to be equivalent and accurate enough to replace old quantum theory. These approaches have no atomic orbits: the correspondence is more of an analogy than a principle.
Dirac's correspondence
Paul Dirac developed significant portions of the new quantum theory in the second half of the 1920s. While he did not apply Bohr's correspondence principle, he developed a different, more formal classical–quantum correspondence. Dirac connected the structures of classical mechanics known as Poisson brackets to analogous structures of quantum mechanics known as commutators: the classical bracket {A, B} corresponds to the quantum commutator [A, B] divided by iħ, with A and B replaced by the corresponding operators.
By this correspondence, now called canonical quantization, Dirac showed how the mathematical form of classical mechanics could be recast as a basis for the new mathematics of quantum mechanics.
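As an illustration (an added example using the canonical position–momentum pair): the classical bracket {x, p} = 1 is carried by Dirac's correspondence to

\frac{1}{i\hbar}\,[\hat{x}, \hat{p}] = 1, \qquad \text{i.e.} \qquad [\hat{x}, \hat{p}] = i\hbar,

the canonical commutation relation that underlies the Heisenberg uncertainty principle.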
Dirac developed these connections by studying the work of Heisenberg and Kramers on dispersion, work that was directly built on Bohr's correspondence principle; the Dirac approach provides a mathematically sound path towards Bohr's goal of a connection between classical and quantum mechanics. While Dirac did not call this correspondence a "principle", physics textbooks refer to his connections as a "correspondence principle".
The classical limit of wave mechanics
The outstanding success of classical mechanics in describing natural phenomena up to the 20th century means that quantum mechanics must reproduce its results in similar circumstances.
One way to quantitatively define this concept is to require quantum mechanical theories to produce classical mechanics results as the quantum of action goes to zero, ħ → 0. This transition can be accomplished in two different ways.
First, the particle can be approximated by a wave packet, and the indefinite spread of the packet with time can be ignored. In 1927, Paul Ehrenfest proved his namesake theorem that showed that Newton's laws of motion hold on average in quantum mechanics: the quantum statistical expectation value of the position and momentum obey Newton's laws.
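Stated explicitly (the standard form of the theorem, added here for reference), Ehrenfest's relations read

\frac{d}{dt}\langle x \rangle = \frac{\langle p \rangle}{m}, \qquad \frac{d}{dt}\langle p \rangle = -\left\langle V'(x) \right\rangle,

which reduce to Newton's second law for the mean values whenever ⟨V′(x)⟩ ≈ V′(⟨x⟩), e.g. for narrow wave packets or for potentials at most quadratic in x.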
Second, the individual particle view can be replaced with a statistical mixture of classical particles with a density matching the quantum probability density. This approach led to the concept of semiclassical physics, beginning with the development of WKB approximation used in descriptions of quantum tunneling for example.
Modern view
While Bohr viewed "correspondence" as a principle aiding his description of quantum phenomena, fundamental differences between the mathematical structures of quantum and classical mechanics prevent correspondence in many cases. Rather than a principle, "there may be in some situations an approximate correspondence between classical and quantum concepts," as physicist Asher Peres put it. Since quantum mechanics operates in a discrete space and classical mechanics in a continuous one, any correspondence will be necessarily fuzzy and elusive.
Introductory quantum mechanics textbooks suggest that quantum mechanics goes over to classical theory in the limit of high quantum numbers or in a limit where the Planck constant in the quantum formula is reduced to zero, ħ → 0. However, such correspondence is not always possible. For example, classical systems can exhibit chaotic orbits which diverge, but quantum states are unitary and maintain a fixed overlap.
Generalized correspondence principle
The term "generalized correspondence principle" has been used in the study of the history of science to mean the reduction of a new scientific theory to an earlier scientific theory in appropriate circumstances. This requires that the new theory explain all the phenomena under circumstances for which the preceding theory was known to be valid; it also means that new theory will retain large parts of the older theory. The generalized principle applies correspondence across aspects of a complete theory, not just a single formula as in the classical limit correspondence. For example, Albert Einstein in his 1905 work on relativity noted that classical mechanics relied on Galilean relativity while electromagnetism did not, and yet both work well. He produced a new theory that combined them in a away that reduced to these separate theories in approximations.
Ironically, the singular failure of this "generalized correspondence principle" concept of scientific theories is the replacement of classical mechanics with quantum mechanics.
See also
Quantum decoherence
Classical limit
Classical probability density
Leggett–Garg inequality
References
Quantum mechanics
Theory of relativity
Philosophy of physics
Principles
Metatheory | Correspondence principle | Physics | 1,970 |
3,023,644 | https://en.wikipedia.org/wiki/Idose | Idose is a hexose, a six carbon monosaccharide. It has an aldehyde group and is thus an aldose. Idose is not found in nature, but its oxidized derivative iduronic acid, is a component of dermatan sulfate and heparan sulfate, which are glycosaminoglycans. The first and third hydroxyls point the opposite way from the second and fourth. It is made by aldol condensation of D- and L-glyceraldehyde. L-Idose is a C-5 epimer of D-glucose.
It can be identified by mass spectrometry.
References
Aldohexoses
Pyranoses | Idose | Chemistry | 152 |
29,253 | https://en.wikipedia.org/wiki/Spandrel | A spandrel is a roughly triangular space, usually found in pairs, between the top of an arch and a rectangular frame, between the tops of two adjacent arches, or one of the four spaces between a circle within a square. They are frequently filled with decorative elements.
Meaning
There are four or five accepted and cognate meanings of the term spandrel in architectural and art history, mostly relating to the space between a curved figure and a rectangular boundary – such as the space between the curve of an arch and a rectilinear bounding moulding, or the wallspace bounded by adjacent arches in an arcade and the stringcourse or moulding above them, or the space between the central medallion of a carpet and its rectangular corners, or the space between the circular face of a clock and the corners of the square revealed by its hood. Also included is the space under a flight of stairs, if it is not occupied by another flight of stairs.
In a building with more than one floor, the term spandrel is also used to indicate the space between the top of the window in one story and the sill of the window in the story above. The term is typically employed when there is a sculpted panel or other decorative element in this space, or when the space between the windows is filled with opaque or translucent glass, in this case called "spandrel glass". In concrete or steel construction, an exterior beam extending from column to column usually carrying an exterior wall load is known as a "spandrel beam".
In architectural ornamentation, the horizontal decorative elements that are hung over interior and exterior openings between the posts are called spandrels. They can be made of sawn out wood, ball-and-dowels, and spindles. Wooden ornamental spandrels are known as gingerbread spandrels. If they are in an arch form, they are called gingerbread arch spandrels. The spandrels over doorways in perpendicular work are generally richly decorated. At Magdalen College, Oxford, is one which is perforated. The spandrel of doors is sometimes ornamented in the Decorated period, but seldom forms part of the composition of the doorway itself, being generally over the label.
Domes
Spandrels can also occur in the construction of domes and are typical in grand architecture from the medieval period onwards. Where a dome needed to rest on a square or rectangular base, the dome was raised above the level of the supporting pillars, with three-dimensional spandrels called pendentives taking the weight of the dome and concentrating it onto the pillars.
See also
Alfiz, an area encompassing the spandrels and voussoirs, sometimes also extending to the floor
Cathedral architecture
Spandrel (biology)
Squinch
Skeuomorph
References
External links
Ornaments (architecture)
Architectural elements | Spandrel | Technology,Engineering | 582 |
78,371,710 | https://en.wikipedia.org/wiki/Butyrate%20fermentation | Butyrate fermentation is a process that produces butyric acid via anaerobic bacteria. This process occurs commonly in clostridia which can be isolated from many anaerobic environments such as mud, fermented foods, and intestinal tracts or feces. Clostridium can ferment carbohydrates into butyric acid, producing byproducts including hydrogen gas, carbon dioxide, and acetate. Butyrate fermentation is currently being utilized in the production of a variety of biochemicals and biofuels.
Butyrate in humans originates from the anaerobic microbes that ferment dietary fibers in the lower intestinal tract. Butyrate plays an important role in immune and inflammatory responses, as well as the formation of the intestinal barrier. The presence of short-chain fatty acids lowers the pH of the gut allowing optimal growth for butyrate-producing bacteria. The two major metabolic pathways used for butyrate fermentation are butyryl-CoA phosphorylation and acetate CoA transferase.
Microbial Biosynthesis
Butyrate is produced by several fermentation processes performed by obligate anaerobic bacteria. This fermentation pathway was discovered by Louis Pasteur in 1861. Examples of butyrate-producing species of bacteria include:
Clostridium butyricum
Clostridium kluyveri
Clostridium pasteurianum
Faecalibacterium prausnitzii
Fusobacterium nucleatum
Butyrivibrio fibrisolvens
Eubacterium limosum
The pathway starts with the glycolytic cleavage of glucose to two molecules of pyruvate, as happens in most organisms. Pyruvate is oxidized into acetyl coenzyme A in a reaction catalyzed by pyruvate:ferredoxin oxidoreductase. Two molecules of carbon dioxide (CO2) and two molecules of hydrogen (H2) are formed as waste products. Subsequently, ATP is produced in the last step of the fermentation. Three molecules of ATP are produced for each glucose molecule, a relatively high yield. The balanced equation for this fermentation is
C6H12O6 → C4H8O2 + 2 CO2 + 2 H2
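The atom balance can be machine-checked; the following minimal Python sketch (an added illustration, not from the article) counts the atoms on each side of the equation above:

from collections import Counter

def atoms(molecule, coeff=1):
    # molecule: dict mapping element symbol -> atom count for one molecule
    return Counter({element: count * coeff for element, count in molecule.items()})

glucose  = {"C": 6, "H": 12, "O": 6}   # C6H12O6
butyrate = {"C": 4, "H": 8,  "O": 2}   # butyric acid, C4H8O2
co2      = {"C": 1, "O": 2}
h2       = {"H": 2}

left  = atoms(glucose)
right = atoms(butyrate) + atoms(co2, 2) + atoms(h2, 2)
assert left == right
print("balanced:", dict(left), "=", dict(right))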
Other pathways to butyrate include succinate reduction and crotonate disproportionation.
Several species form acetone and n-butanol in an alternative pathway, which starts as butyrate fermentation. Some of these species are:
Clostridium acetobutylicum, the most prominent acetone and butanol producer, used also in industry
Clostridium beijerinckii
Clostridium tetanomorphum
Clostridium aurantibutyricum
These bacteria begin with butyrate fermentation, as described above, but, when the pH drops below 5, they switch into butanol and acetone production to prevent further lowering of the pH. Two molecules of butanol are formed for each molecule of acetone.
The change in the pathway occurs after acetoacetyl CoA formation. This intermediate then takes two possible pathways:
acetoacetyl CoA → acetoacetate → acetone
acetoacetyl CoA → butyryl CoA → butyraldehyde → butanol
Butyrate can be produced from dietary fibers through two different metabolic pathways. In the first pathway, butyryl-CoA is phosphorylated to form butyryl-phosphate, which is transformed to butyrate via butyrate kinase. In the second pathway, the CoA moiety of butyryl-CoA is transferred to acetate via butyryl-CoA:acetate CoA-transferase, leading to the formation of butyrate and acetyl-CoA.
Applications for Commercial Use
For commercial purposes, Clostridium species are preferred for butyric acid or butanol production.
Butyric acid that is produced via butyrate fermentation is a common food additive and is found within products including butter, milk, cheese, and vegetable oils. Some species within the genus Clostridium are capable of producing biochemicals and biofuels. This fermentation process is able to produce acetone, butanol, and ethanol, and is one of the first commercial fermentation processes used for bulk chemical production. Clostridium species have also been used in therapy, research, and even cosmetics (such as perfumes). They have also been applied to bioprocesses such as the manufacturing of yogurt, with the most common species used for probiotics being Clostridium butyricum.
Roles in Metabolism
Butyrate, one of the main products of gut microbial fermentation, plays many metabolic roles in the homeostasis of the human body. Butyrate increases energy expenditure to counteract high-fat diet (HFD) induced obesity. It does so by activating thermogenesis, the process by which adipose tissue dissipates chemical energy as heat via uncoupling proteins, contributing to energy usage and body temperature regulation. Butyrate also promotes fatty acid oxidation, decreases HFD-induced triglyceride elevation, and reduces the respiratory exchange ratio. In metabolic disorders such as obesity and diabetes, glucose homeostasis is disrupted by decreased insulin sensitivity and pancreatic β cell dysfunction, which can lead to reduced insulin secretion. Butyrate helps regulate glucose homeostasis by improving pancreatic β cell development and insulin sensitivity. It has also been shown that in children with β cell autoimmunity there is a low abundance of butyrate-producing intestinal bacteria.
Inflammation of the Gut
When butyrate is present in the intestine, IFN-γ, TNF-α, IL-6, and IL-8 are inhibited. These are proinflammatory cytokines which increase inflammation and can cause tissue destruction. Butyrate is also capable of inducing IL-10 and TGF-β, which are anti-inflammatory cytokines. Short-chain fatty acids are capable of modifying neutrophil recruitment, which improves the immune response. This is clinically significant in inflammatory bowel disease because of the disease's chronic inflammatory nature. In inflammatory bowel disease, a reduction of butyrate-producing bacteria is seen, which greatly diminishes the defense mechanisms of the mucosal barrier of the gut.
References
Wikipedia Student Program
Fermentation | Butyrate fermentation | Chemistry,Biology | 1,367 |
44,546,627 | https://en.wikipedia.org/wiki/Brook%20Advisory%20Centres | Brook Advisory Centres were set up by Lady Helen Brook in 1964 offering contraceptive advice to young single people under the age of 25.
Brook was asked in 1958 by the Eugenics Society to run the birth control clinic they had just been bequeathed by Marie Stopes. This clinic, unlike the Family Planning Association where Brook had previously worked, was not required to confine its service to married women or women who could prove that they were very shortly to be married. The work of the Centres was facilitated by the National Health Service (Family Planning) Act 1967.
Brook, who had worked as a volunteer for the Family Planning Association, was Chairman of the organisation from 1964 to 1974 and President from 1974 to 1997. Until her death in 1997, despite severe eye problems in later life, she retained a keen interest in, and supported, the activities of the centres that bear her name.
By 1969 the Centres were offering contraceptive advice to more than ten thousand unmarried people under 25; the majority were aged between 19 and 21, with around one in six being under 19. By 1997 there were 18 branches, funded by local health authorities.
Brook established solid relationships with central and local government authorities; the Guest of Honour at its 25th anniversary in 1989 was the Princess Royal.
The organisation changed its name to Brook.
See also
Teenage pregnancy and sexual health in the United Kingdom
References
External links
Brook
Birth control in the United Kingdom
Private providers of NHS services | Brook Advisory Centres | Biology | 283 |
1,129,005 | https://en.wikipedia.org/wiki/Substantial%20equivalence | In food safety, the concept of substantial equivalence holds that the safety of a new food, particularly one that has been genetically modified (GM), may be assessed by comparing it with a similar traditional food that has proven safe in normal use over time. It was first formulated as a food safety policy in 1993, by the Organisation for Economic Co-operation and Development (OECD).
As part of a food safety testing process, substantial equivalence is the initial step, establishing toxicological and nutritional differences in the new food compared to a conventional counterpart—differences are analyzed and evaluated, and further testing may be conducted, leading to a final safety assessment.
Substantial equivalence is the underlying principle in GM food safety assessment for a number of national and international agencies, including the Canadian Food Inspection Agency (CFIA), Japan's Ministry of Health, Labour and Welfare (MHLW), the US Food and Drug Administration (FDA), and the United Nations' Food and Agriculture Organization (FAO) and World Health Organization.
Origin
The concept of comparing genetically modified foods to traditional foods as a basis for safety assessment was first introduced as a recommendation during the 1990 Joint FAO/WHO Expert Consultation on biotechnology and food safety (a scientific conference of officials and industry), although the term substantial equivalence was not used. Adopting the term, substantial equivalence was formulated as a food safety policy by the OECD, first described in their 1993 report, "Safety Evaluation of Foods Derived by Modern Biotechnology: Concepts and Principles".
The term was borrowed from the FDA's 1976 substantial equivalence definition for new medical devices—under Premarket Notification 510(k), a new Class II device that is essentially similar to an existing device can be cleared for release without further testing. The underlying approach of comparing a new product or technique to an existing one has long been used in various fields of science and technology.
In June 1999, G8 leaders requested the OECD to “undertake a study on the implications of biotechnology and other aspects of food safety.” In 2000, the OECD Edinburgh Conference on Scientific and Health Aspects of Genetically Modified Foods was held. Following those discussions, the OECD published an opinion that substantial equivalence is an important tool in analyzing the safety of novel foods, including GM foods. The document noted that substantial equivalence serves as a framework for approaching food safety assessment, rather than functioning as a quantitative standard or measure.
Description
The OECD bases the substantial equivalence principle on a definition of food safety where we can assume that a food is safe for consumption if it has been eaten over time without evident harm. It recognizes that traditional foods may naturally contain toxic components (usually called antinutrients)—such as the glycoalkaloids solanine in potatoes and alpha-tomatine in tomatoes—which do not affect their safety when prepared and eaten in traditional ways.
The report proposes that, while biotechnology broadens the scope of food modification, it does not inherently introduce additional risk, and therefore, GM products may be assessed in the same way as conventionally bred products. Further, the relative precision of biotech methods should allow assessment to be focused on the most likely problem areas. The concept of substantial equivalence is then described as a comparison between a GM food and a similar conventional food, taking into account food processing, and how the food is normally consumed, including quantity, dietary patterns, and the characteristics of the consuming population.
Assessment process
Substantial equivalence is the starting point for GM food safety assessment: significant differences between a new food item and its conventional counterpart would indicate the need for further testing. A "targeted approach" is taken, by selecting specific relevant molecules for comparison. For plants, selection of a suitable comparator may involve growing the new plant side by side with genetically closely-related varieties, or using publicly available composition data for closely-related varieties.
Evaluation for substantial equivalence can be applied at different points in the food chain, from unprocessed harvested crop to final ingredient or product, depending on the nature of the food item and its intended use.
For a GM plant, the overall evaluation process may be viewed in four phases:
Substantial equivalence analysisConsidering introduced genes, newly expressed proteins, and new secondary metabolites
Toxicological and nutritional analysis of detected differencesGene transfer, allergenicity, degradation characteristics, bioavailability, toxicity, and estimated intake levels
Toxicological and nutritional evaluationIf necessary, additional toxicity testing, possibly including whole foods (return to Phase 2).
Final safety assessment of GM plant
Technological developments
There has been discussion about applying new biochemical concepts and methods in evaluating substantial equivalence, such as metabolic profiling and protein profiling. These concepts refer, respectively, to the complete measured biochemical spectrum (total fingerprint) of compounds (metabolites) or of proteins present in a food or crop. The goal would be to compare overall the biochemical profile of a new food to an existing food to see if the new food's profile falls within the range of natural variation already exhibited by the profile of existing foods or crops. However, these techniques are not considered sufficiently evaluated, and standards have not yet been developed, to apply them.
Adoption
Approaches to GM food regulation vary by country, while substantial equivalence is generally the underlying principle of GM food safety assessment. This is the case for national and international agencies that include the Canadian Food Inspection Agency (CFIA), Japan's Ministry of Health, Labour and Welfare (MHLW), the US Food and Drug Administration (FDA), and the United Nations' Food and Agriculture Organization (FAO) and World Health Organization. In 1997, the European Union established a novel food assessment procedure whereby, once the producer has confirmed substantial equivalence with an existing food, government notification, with accompanying scientific evidence, is the only requirement for commercial release; however, foods containing genetically modified organisms (GMOs) are excluded and require mandatory authorization.
To establish substantial equivalence, the modified product is tested by the manufacturer for unexpected changes to a targeted set of components such as toxins, nutrients, or allergens, that are present in a similar unmodified food. The manufacturer's data is then assessed by a regulatory agency. If regulators determine that there is no significant difference between the modified and unmodified products, then there will generally be no further requirement for food safety testing. However, if the product has no natural equivalent, or shows significant differences from the unmodified food, or for other reasons that regulators may have (for instance, if a gene produces a protein that has not been a food component before), further safety testing may be required.
Issues
There have been criticisms of the effectiveness of substantial equivalence.
See also
GRAS - Generally Recognized As Safe, an FDA designation
Notes
References
Food safety
Genetically modified organisms in agriculture
Biotechnology | Substantial equivalence | Biology | 1,380 |
44,970,184 | https://en.wikipedia.org/wiki/National%20GPS%20Network | The British National GPS Network, known as OS Net, is a network of global navigation satellite system GNSS base stations covering Great Britain. It is managed by Ordnance Survey.
It provides access to a stable, national coordinate reference system (through downloaded GNSS data) that allows highly accurate location to be determined using suitable equipment, and is used in the surveying, construction and precision agriculture industries, among other uses. The use of ground-based stations makes this system more accurate than satellite-based GPS positioning alone.
Using a single receiver, without any additional corrections, a civilian user can achieve a positional accuracy of 5–10 m 95% of the time, and a height accuracy of 15–20 m 95% of the time. Combined with data or corrections from a service such as OS Net, a positional accuracy of 1–2 cm is achievable, depending on the equipment used and environmental factors.
References
Global Positioning System
Ordnance Survey | National GPS Network | Technology,Engineering | 193 |
15,516,517 | https://en.wikipedia.org/wiki/Application%20Level%20Events | Application Level Events (ALE) is a standard created by EPCglobal, an organization of industry leaders devoted to the development of standards for the Electronic Product Code (EPC) and Radio-frequency identification (RFID) technologies and standards. The ALE specification is a software specification indicating required functionality and behavior, as well as a common API expressed through XML Schema Definition (XSD) and Web Services Description Language (WSDL).
External links
Application Level Events (ALE) Standard at GS1 website
RFID Journal - ALE: A New Standard for Data Access
References
Radio-frequency identification
GS1 standards | Application Level Events | Engineering | 125 |
63,030,703 | https://en.wikipedia.org/wiki/Matching%20in%20hypergraphs | In graph theory, a matching in a hypergraph is a set of hyperedges, in which every two hyperedges are disjoint. It is an extension of the notion of matching in a graph.
Definition
Recall that a hypergraph H is a pair (V, E), where V is a set of vertices and E is a set of subsets of V called hyperedges. Each hyperedge may contain one or more vertices.
A matching in H is a subset M of E, such that every two hyperedges e1 and e2 in M have an empty intersection (have no vertex in common).
The matching number of a hypergraph H is the largest size of a matching in H. It is often denoted by ν(H).
As an example, let V be the set {1, 2, 3, 4, 5, 6}. Consider a 3-uniform hypergraph on V (a hypergraph in which each hyperedge contains exactly 3 vertices). Let H be a 3-uniform hypergraph with 4 hyperedges:
{ {1,2,3}, {1,4,5}, {4,5,6}, {2,3,6} }
Then H admits several matchings of size 2, for example:
{ {1,2,3}, {4,5,6} }
{ {1,4,5}, {2,3,6} }
However, in any subset of 3 hyperedges, at least two of them intersect, so there is no matching of size 3. Hence, the matching number of H is 2.
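This small example can be verified by brute force; the following Python sketch (an added illustration, using the sets named above) tries all families of hyperedges from largest to smallest:

from itertools import combinations

E = [{1, 2, 3}, {1, 4, 5}, {4, 5, 6}, {2, 3, 6}]

def matching_number(edges):
    # Largest k for which some k hyperedges are pairwise disjoint.
    for k in range(len(edges), 0, -1):
        for family in combinations(edges, k):
            if all(a.isdisjoint(b) for a, b in combinations(family, 2)):
                return k
    return 0

print(matching_number(E))  # prints 2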
Intersecting hypergraph
A hypergraph H is called intersecting if every two hyperedges in H have a vertex in common. A hypergraph H is intersecting if and only if it has no matching with two or more hyperedges, if and only if ν(H) ≤ 1.
Matching in a graph as a special case
A graph without self-loops is just a 2-uniform hypergraph: each edge can be considered as a set of the two vertices that it connects. For example, the 2-uniform hypergraph with vertices {1, 2, 3, 4} and hyperedges { {1,2}, {2,3}, {3,4} } represents a graph with 4 vertices and 3 edges.
By the above definition, a matching in a graph is a set M of edges, such that every two edges in M have an empty intersection. This is equivalent to saying that no two edges in M are adjacent to the same vertex, which is exactly the definition of a matching in a graph.
Fractional matching
A fractional matching in a hypergraph is a function that assigns a fraction in [0, 1] to each hyperedge, such that for every vertex v in V, the sum of fractions of hyperedges containing v is at most 1. A matching is a special case of a fractional matching in which all fractions are either 0 or 1. The size of a fractional matching is the sum of fractions of all hyperedges.
The fractional matching number of a hypergraph H is the largest size of a fractional matching in H. It is often denoted by ν*(H).
Since a matching is a special case of a fractional matching, for every hypergraph H, the matching number is at most the fractional matching number.
Symbolically, this principle is written:
ν(H) ≤ ν*(H)
In general, the fractional matching number may be larger than the matching number. A theorem by Zoltán Füredi provides upper bounds on the ratio ν*(H) / ν(H):
If each hyperedge in H contains at most r vertices, then ν*(H) ≤ (r − 1 + 1/r) · ν(H).
In particular, in a simple graph: ν*(H) ≤ (3/2) · ν(H).
The inequality is sharp: Let H be the r-uniform finite projective plane. Then ν(H) = 1, since every two hyperedges intersect, and ν*(H) = r − 1 + 1/r by the fractional matching that assigns a weight of 1/r to each hyperedge (it is a fractional matching since each vertex is contained in r hyperedges, and its size is r − 1 + 1/r since there are r² − r + 1 hyperedges). Therefore the ratio is exactly r − 1 + 1/r.
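Concretely, for r = 3 the 3-uniform projective plane is the Fano plane: 7 points and 7 lines, with each point on exactly 3 lines (a worked instance added for illustration). Assigning weight 1/3 to every line gives, in LaTeX notation,

\nu^*(H) = \frac{7}{3} = 3 - 1 + \frac{1}{3}, \qquad \nu(H) = 1,

so the ratio ν*/ν attains the bound r − 1 + 1/r at r = 3.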
If r is such that the r-uniform finite projective plane does not exist (for example, r = 7), then a stronger inequality holds: ν*(H) ≤ (r − 1) · ν(H).
If H is r-partite (the vertices are partitioned into r parts and each hyperedge contains a vertex from each part), then: ν*(H) ≤ (r − 1) · ν(H)
In particular, in a bipartite graph, ν*(H) = ν(H). This was proved by András Gyárfás.
The inequality is sharp: Let H be the truncated projective plane of order r − 1. Then ν(H) = 1, since every two hyperedges intersect, and ν*(H) = r − 1 by the fractional matching that assigns a weight of 1/(r − 1) to each hyperedge (there are (r − 1)² hyperedges, and each vertex is contained in r − 1 of them).
Perfect matching
A matching M is called perfect if every vertex v in V is contained in exactly one hyperedge of M. This is the natural extension of the notion of perfect matching in a graph.
A fractional matching is called perfect if for every vertex v in V, the sum of fractions of hyperedges containing v is exactly 1.
Consider a hypergraph H in which each hyperedge contains at most r vertices. If H admits a perfect fractional matching, then its fractional matching number is at least |V| / r. If each hyperedge in H contains exactly r vertices, then its fractional matching number is exactly |V| / r. This is a generalization of the fact that, in a graph, the size of a perfect matching is |V| / 2.
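The bound follows by summing the vertex constraints (a one-line derivation added for clarity): a perfect fractional matching w satisfies, in LaTeX notation,

|V| = \sum_{v \in V} \sum_{e \ni v} w(e) = \sum_{e \in E} w(e)\,|e| \le r \sum_{e \in E} w(e),

so the size Σ w(e) is at least |V| / r, with equality when every hyperedge has exactly r vertices.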
Given a set V of vertices, a collection E of subsets of V is called balanced if the hypergraph (V, E) admits a perfect fractional matching.
For example, if V = {1, 2, 3} and E = { {1,2}, {2,3}, {3,1} }, then E is balanced, with the perfect fractional matching that assigns a weight of 1/2 to each hyperedge.
There are various sufficient conditions for the existence of a perfect matching in a hypergraph:
Hall-type theorems for hypergraphs - presents sufficient conditions analogous to Hall's marriage theorem, based on sets of neighbors.
Perfect matching in high-degree hypergraphs - presents sufficient conditions analogous to Dirac's theorem on Hamiltonian cycles, based on degree of vertices.
Keevash and Mycroft developed a geometric theory for hypergraph matching.
Balanced set-family
A set-family E over a ground set V is called balanced (with respect to V) if the hypergraph (V, E) admits a perfect fractional matching.
For example, consider the vertex set V = {1, 2, 3, 4} and the edge set E = { {1,2}, {3,4}, {1,3}, {2,4} }. E is balanced, since there is a perfect fractional matching with weights 1/2, 1/2, 1/2, 1/2.
Computing a maximum matching
The problem of finding a maximum-cardinality matching in a hypergraph, thus calculating ν(H), is NP-hard even for 3-uniform hypergraphs (see 3-dimensional matching). This is in contrast to the case of simple (2-uniform) graphs, in which a maximum-cardinality matching can be computed in polynomial time.
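A maximal matching can still be found greedily in polynomial time, and in an r-uniform hypergraph it is within a factor r of optimal, since each chosen hyperedge can intersect at most r hyperedges of a maximum matching (one per vertex). A Python sketch of this heuristic (an added illustration, not part of the article):

def greedy_matching(edges):
    # Keep each hyperedge that avoids all vertices used so far.
    used, matching = set(), []
    for e in edges:
        if used.isdisjoint(e):
            matching.append(e)
            used |= e
    return matching

E = [{1, 2, 3}, {1, 4, 5}, {4, 5, 6}, {2, 3, 6}]
print(greedy_matching(E))  # [{1, 2, 3}, {4, 5, 6}]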
Matching and covering
A vertex-cover in a hypergraph H = (V, E) is a subset T of V, such that every hyperedge in E contains at least one vertex of T (it is also called a transversal or a hitting set, and is equivalent to a set cover). It is a generalization of the notion of a vertex cover in a graph.
The vertex-cover number of a hypergraph H is the smallest size of a vertex cover in H. It is often denoted by τ(H), for transversal.
A fractional vertex-cover is a function assigning a weight to each vertex in V, such that for every hyperedge e in E, the sum of fractions of vertices in e is at least 1. A vertex cover is a special case of a fractional vertex cover in which all weights are either 0 or 1. The size of a fractional vertex-cover is the sum of fractions of all vertices.
The fractional vertex-cover number of a hypergraph H is the smallest size of a fractional vertex-cover in H. It is often denoted by τ*(H).
Since a vertex-cover is a special case of a fractional vertex-cover, for every hypergraph H:
fractional-vertex-cover-number(H) ≤ vertex-cover-number(H), i.e. τ*(H) ≤ τ(H).
Linear programming duality implies that, for every hypergraph H:
fractional-matching-number(H) = fractional-vertex-cover-number(H), i.e. ν*(H) = τ*(H).
Hence, for every hypergraph H:
ν(H) ≤ ν*(H) = τ*(H) ≤ τ(H)
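Both fractional quantities are linear programs, so they can be computed with any LP solver. A sketch using SciPy (assumed available; an added illustration) for the triangle graph, where ν = 1, ν* = τ* = 3/2 and τ = 2:

import numpy as np
from scipy.optimize import linprog

# Triangle: vertices 0, 1, 2; edges {0,1}, {1,2}, {0,2}.
# Incidence matrix A[v][e] = 1 if vertex v lies in edge e.
A = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]], dtype=float)

# Fractional matching: maximize sum(w) subject to A @ w <= 1, 0 <= w <= 1.
m = linprog(c=-np.ones(3), A_ub=A, b_ub=np.ones(3), bounds=(0, 1))
# Fractional vertex cover: minimize sum(x) subject to A.T @ x >= 1, 0 <= x <= 1.
c = linprog(c=np.ones(3), A_ub=-A.T, b_ub=-np.ones(3), bounds=(0, 1))

print(-m.fun, c.fun)  # 1.5 1.5, equal as LP duality predicts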
If the size of each hyperedge in H is at most r, then the union of all hyperedges in a maximum matching is a vertex-cover (if there were an uncovered hyperedge, we could have added it to the matching). Therefore:
τ(H) ≤ r · ν(H)
This inequality is tight: equality holds, for example, when V contains 2r − 1 vertices and E contains all subsets of r vertices.
However, in general τ*(H) < r · ν(H), since ν*(H) = τ*(H) and ν*(H) ≤ (r − 1 + 1/r) · ν(H); see Fractional matching above.
Ryser's conjecture says that, in every r-partite r-uniform hypergraph:
τ(H) ≤ (r − 1) · ν(H)
Some special cases of the conjecture have been proved; see Ryser's conjecture.
Kőnig's property
A hypergraph has the Kőnig property if its maximum matching number equals its minimum vertex-cover number, namely if ν(H) = τ(H). The Kőnig-Egerváry theorem shows that every bipartite graph has the Kőnig property. To extend this theorem to hypergraphs, we need to extend the notion of bipartiteness to hypergraphs.
A natural generalization is as follows. A hypergraph is called 2-colorable if its vertices can be 2-colored so that every hyperedge (of size at least 2) contains at least one vertex of each color. An alternative term is Property B. A simple graph is bipartite iff it is 2-colorable. However, there are 2-colorable hypergraphs without Kőnig's property. For example, consider the hypergraph with V = {1, 2, 3, 4} whose hyperedges are all triplets of V. It is 2-colorable; for example, we can color {1, 2} blue and {3, 4} white. However, its matching number is 1 and its vertex-cover number is 2.
A stronger generalization is as follows. Given a hypergraph H = (V, E) and a subset V' of V, the restriction of H to V' is the hypergraph whose vertices are V', and which, for every hyperedge e in E that intersects V', has a hyperedge that is the intersection of e and V'. A hypergraph is called balanced if all its restrictions are essentially 2-colorable, meaning that we ignore singleton hyperedges in the restriction. A simple graph is bipartite iff it is balanced.
A simple graph is bipartite iff it has no odd-length cycles. Similarly, a hypergraph is balanced iff it has no unbalanced odd-length circuits. A circuit of length k in a hypergraph is an alternating sequence (v1, e1, v2, e2, …, vk, ek, v1), where the vi are distinct vertices and the ei are distinct hyperedges, and each hyperedge contains the vertex to its left and the vertex to its right in the sequence. The circuit is called unbalanced if each of its hyperedges contains no other vertices in the circuit. Claude Berge proved that a hypergraph is balanced if and only if it does not contain an unbalanced odd-length circuit. Every balanced hypergraph has Kőnig's property.
The following are equivalent:
Every partial hypergraph of H (i.e., a hypergraph derived from H by deleting some hyperedges) has the Kőnig property.
Every partial hypergraph of H has the property that its maximum degree equals its minimum edge coloring number.
H has the Helly property, and the intersection graph of E (the simple graph in which the vertices are the elements of E, and two elements of E are linked if and only if they intersect) is a perfect graph.
Matching and packing
The problem of set packing is equivalent to hypergraph matching.
A vertex-packing in a (simple) graph G is a subset P of its vertices, such that no two vertices in P are adjacent.
The problem of finding a maximum vertex-packing in a graph is equivalent to the problem of finding a maximum matching in a hypergraph:
Given a hypergraph H = (V, E), define its intersection graph Int(H) as the simple graph whose vertices are the elements of E and whose edges are pairs (e1, e2) such that e1 and e2 have a vertex in common. Then every matching in H is a vertex-packing in Int(H) and vice versa.
Given a graph G = (V, E), define its star hypergraph St(G) as the hypergraph whose vertices are the elements of E and whose hyperedges are the stars of the vertices of G (i.e., for each vertex v in V, there is a hyperedge in St(G) that contains all edges in E that are adjacent to v). Then every vertex-packing in G is a matching in St(G) and vice versa.
Alternatively, given a graph G = (V, E), define its clique hypergraph Cl(G) as the hypergraph whose vertices are the cliques of G, where for each vertex v in V there is a hyperedge in Cl(G) containing all cliques in G that contain v. Then again, every vertex-packing in G is a matching in Cl(G) and vice versa. Note that Cl(G) cannot be constructed from G in polynomial time, so it cannot be used as a reduction for proving NP-hardness. But it has some theoretical uses.
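The first reduction is simple to implement; a Python sketch (the helper name intersection_graph is illustrative, not from the article):

from itertools import combinations

def intersection_graph(edges):
    # Nodes are hyperedge indices; two nodes are linked iff the hyperedges intersect.
    links = {(i, j) for i, j in combinations(range(len(edges)), 2)
             if edges[i] & edges[j]}
    return links

E = [{1, 2, 3}, {1, 4, 5}, {4, 5, 6}, {2, 3, 6}]
print(intersection_graph(E))  # {(0, 1), (0, 3), (1, 2), (2, 3)} (order may vary)

Indices 0 and 2 are not linked, so {0, 2} is a vertex-packing of the intersection graph, matching the fact that {1,2,3} and {4,5,6} form a matching in the hypergraph.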
See also
3-dimensional matching – a special case of hypergraph matching to 3-uniform hypergraphs.
Vertex cover in hypergraphs
Bipartite hypergraph
Rainbow matching in hypergraphs
D-interval hypergraph - an infinite hypergraph in which there is some relation between the matching and the covering number.
Erdős–Ko–Rado theorem on pairwise non-disjoint edges in hypergraphs
References
Hypergraphs
Matching (graph theory) | Matching in hypergraphs | Mathematics | 2,478 |
581,220 | https://en.wikipedia.org/wiki/Contraceptive%20sponge | The contraceptive sponge combines barrier and spermicidal methods to prevent conception. Sponges work in two ways. First, the sponge is inserted into the vagina, so it can cover the cervix and prevent any sperm from entering the uterus. Secondly, the sponge contains spermicide.
The sponges are inserted vaginally prior to intercourse and must be placed over the cervix to be effective.
Sponges provide no protection from sexually transmitted infections. Sponges can provide contraception for multiple acts of intercourse over a 24-hour period, but cannot be reused beyond that time or once removed.
Effectiveness
The sponge's effectiveness is 91% when used perfectly by women who have never given birth, and 80% when used perfectly by women who have given birth at least once. Since it is hard to use the sponge perfectly every time, its effectiveness in typical use is lower, and it is advised to combine the sponge with other birth control methods, such as withdrawal of the penis before ejaculation or condoms.
Use
To use the sponge, wet it, squeeze it, fold it, and put it in the vagina so that it covers the cervix. A sponge works for 24 hours once inserted, during which the user can have sex multiple times. Once the sponge is pulled out, it should not be reused; it should be discarded in the trash, not flushed. The sponge should be left in place for 6 hours after having sex, and should not remain in the vagina for more than 30 hours.
Spermicide
Sponges are a physical barrier, trapping sperm and preventing their passage through the cervix into the reproductive system. The spermicide is an important component of pregnancy prevention.
Side effects
People sensitive to Nonoxynol-9, an ingredient in the spermicide used in the sponge, may experience unpleasant irritation and may face an increased risk of sexually transmitted infections. Sponge users may have a slightly higher risk of toxic shock syndrome.
In popular culture
Shortly after they were taken off the U.S. market, the sponge was featured in an episode of the sitcom Seinfeld titled "The Sponge". In the episode, Elaine Benes conserves her remaining sponges by choosing to not have intercourse unless she is certain her partner is "sponge-worthy".
References
External links
The Contraceptive Sponge – DrDonnica.com
Barrier contraception
Spermicide
Products introduced in 1983 | Contraceptive sponge | Biology | 484 |
23,561,715 | https://en.wikipedia.org/wiki/Carbon%20profiling | Carbon profiling is a mathematical process that calculates how much carbon dioxide is put into the atmosphere per m2 of space in a building over one year. The analysis has two parts that are added together to produce an overall figure that is termed the 'carbon profile':
Operational carbon emissions
Embodied carbon emissions
Embodied carbon emissions
Embodied carbon emissions relate to the amount of carbon dioxide emitted into the atmosphere from creating and maintaining the materials that form a building, e.g. the carbon dioxide released from the baking of bricks or smelting of iron. These emissions can also be considered to be Upfront Carbon Emissions, or UCE. “Embodied carbon refers to the carbon footprint associated with building materials, from cradle to grave," and can be quantified as a part of environmental impact using life-cycle assessment (LCA).
In the Carbon Profiling Model these emissions are measured as Embodied Carbon Efficiency (ECE), measured as kg of CO2/m2/year.
As of 2018, "Embodied carbon is responsible 11% of global GHG emissions and 28% of global building sector emissions ... Embodied carbon will be responsible for almost half of total new construction emissions between now and 2050." Zero-carbon architecture (similar to zero-energy building), incorporates design techniques that maximize embodied carbon.
Steve Webb, co-founder of Webb Yates Engineers, says: "We’ve known for a long time that aluminium, steel, concrete and ceramics have very high embodied energy ... High carbon frames should be taxed like cigarettes. There should be a presumption in favour of timber and stone."
Occupational carbon emissions
Occupational carbon emissions relate to the amount of carbon dioxide emitted into the atmosphere from the direct use of energy to run the building, e.g. the heating or electricity used by the building over the year. In the Carbon Profiling Model these emissions are measured in BERs (Building Emission Rates) in kg of CO2/m2/year.
The BER is a United Kingdom government accepted unit of measurement that comes from an approved calculation process called SBEM (Simplified Building Energy Model).
The purpose of Carbon Profiling is to provide a method of analyzing and comparing both operational and embodied carbon emissions at the same time. With this information it is then possible to allocate a project's resources in such a way as to minimize the total amount of carbon dioxide emitted into the atmosphere through the use of a given piece of space.
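A minimal Python sketch of the arithmetic (illustrative figures only; BER and ECE as defined above):

def carbon_profile(ber, ece):
    # Carbon profile = operational emissions (BER) + embodied emissions (ECE),
    # both in kg of CO2 per m2 of space per year.
    return ber + ece

# Hypothetical building: 40 kg CO2/m2/year operational, 15 kg CO2/m2/year
# embodied (figures invented for illustration).
print(carbon_profile(40.0, 15.0))  # 55.0 kg CO2/m2/year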
A secondary benefit is that, having quantified the carbon profiles of different buildings, it is then possible to make comparisons and rank buildings in terms of their performance. This allows investors and occupiers to identify which buildings are good and bad carbon investments.
Simon Sturgis and Gareth Roberts of Sturgis Associates in the United Kingdom originally developed ‘Carbon Profiling’ in December 2007.
References
Further reading
Carbon finance
Carbon dioxide
Environmental monitoring | Carbon profiling | Chemistry | 582 |
58,046,455 | https://en.wikipedia.org/wiki/Kawai%20Q-80 | The Kawai Q-80 by Kawai Musical Instruments in 1989, is a music sequencer that has a built in 2DD floppy disk drive for storage. It allows playback, editing, and recording via its MIDI connections. There is a battery backup to hold the configuration when the unit is powered down. The tempo can be set from 40-250 beats per minute.
Active quantisation
Active quantisation corrects only the notes that are completely out of time with the rest of the track, giving the performance a more natural, less robotic feel.
Connections
MIDI in, out and Thru.
Tape sync in and out
Metronome
Footswitch input
Storage
Using the unit's internal S-RAM, the Q-80 can hold:
A total of 26,000 notes, consisting of 10 songs (up to 32 tracks, 15,000 notes per track)
100 motifs per song (similar to a pattern in a drum machine)
References
External links
Owners manual
https://drive.google.com/file/d/0B3OQk-sD72jXN3UwUF9tNGF6UmM/view
Kawai synthesizers
Music sequencers
Products introduced in 1989 | Kawai Q-80 | Engineering | 239 |
61,766,234 | https://en.wikipedia.org/wiki/Capronia%20mansonii | Capronia mansonii is a mesophilic black yeast that is a part of the Herpotrichiellaceae. The species is uncommon in nature but is saprotrophic in nature and been discovered on decaying plant matter, particularly wood. This fungus is naturally found in the Netherlands and has successfully been cultured in lab. It is a teleomorph of the ascomycota division and possesses brown spores.
History and taxonomy
Capronia mansonii is a type of black yeast that was first described from an isolated strain in 1968. The fungus was originally described by Marie Beatrice Schol-Schwarz from a strain grown in vitro, found in Norway on an aspen tree, and it has not yet been described in situ. This fungus was the first species in Herpotrichiellaceae discovered to create ascomata in an isolated culture. It is one of only five species out of thirty Capronia species that have successfully produced ascomata in vitro. The basionym for this species is Dictyotrichiella mansonii. Its anamorph is thought to be Exophiala mansonii, but uncertainty and discourse remain. The original anamorph was first thought to be Rhinocladiella atrovirens and then Exophiala castellanii. An analysis of rRNA gene sequences concluded that C. mansonii is the same biological species as E. castellanii. Capronia mansonii is often misidentified as its sister species Capronia munkii, but can be differentiated by its larger and thicker cell walls and more frequent transverse septa in its ascospores. It is also differentiated from its anamorph because it lacks conidia, slimy colonies, and aerial hyphae.
Growth and morphology
This fungus is a teleomorph, or sexual form, that is formed in vitro. This species has yet to be described in situ. The fungus is thought to be closely related to Exophiala dermatitidis, and is often hypothesized in the literature to be the teleomorph of E. dermatitidis. The fungus is part of the phylum Ascomycota, whose members are commonly known as sac fungi. This phylum is defined by the possession of asci, microscopic sexual structures that produce non-motile spores called ascospores. The asci of C. mansonii produce 8 ascospores upon germination. These ascospores begin with a glassy, transparent appearance and then progress to a grey-yellow, olive, and finally brown colour. The ascospores have 4–5 transverse thick-walled septa and 1 incomplete longitudinal septum. The spores have been described in the literature as not constricted at the septa. Juvenile asci have thicker, longer, and more lightly coloured ascus walls, whereas fully matured asci form thinner dark brown walls that are filled with ascospores. The ascomatal wall itself can range from a brown-yellow to a light brown colour, which is commonly seen in other black yeasts.
Physiology and reproduction
This mesophilic fungus has been successfully cultured by Untereiner at room temperatures ranging from 20 to 25 °C. C. mansonii has also been observed in a budding yeast form. This fungus has a homothallic breeding system, indicating that it does not need a partner to reproduce sexually. The ascospores of this fungus have been described as germinating within 12 hours on oatmeal agar. They appear slimy and resemble yeast within 48 hours, reaching full maturity at 16 weeks. The ascomata that have been grown in the lab have been shown to fully mature and develop septa but are unable to produce asci and ascospores. Artificial daylight is thought to be the limiting factor that prevents the production of asci. Further replications of the above experiments revealed that the structure formed may actually be a pseudothecium, an ascocarp that resembles a perithecium but whose asci are not organized into a hymenium. The pseudothecia grew in abundance and also failed to produce ascospores.
Habitat and ecology
Members of the genus Capronia are described as saprotrophic, meaning they get their nutrients from decaying matter. Strains of this fungus have been found on various plant hosts, particularly on their leaves. They are regularly found on other decaying ascomycota and basidiomycota in the Netherlands, particularly on the wood of Populus tremula. The holotype was discovered on the stems of a Lupinus polyphyllus by Schol-Schwarz in 1968. This fungus has occasionally been found on fresh sausages consisting of pork, beef, or mixed meats. They remain unstable on meat and are unable to persist for more than three days in the presence of other lactic acid bacteria.
References
Eurotiomycetes
Fungi described in 1968
Fungi of Europe
Fungus species | Capronia mansonii | Biology | 1,021 |
47,686,993 | https://en.wikipedia.org/wiki/Root%20mucilage | Root mucilage is made of plant-specific polysaccharides or long chains of sugar molecules. This polysaccharide secretion of root exudate forms a gelatinous substance that sticks to the caps of roots. Root mucilage is known to play a role in forming relationships with soil-dwelling life forms. Just how this root mucilage is secreted is debated, but there is growing evidence that mucilage derives from ruptured cells. As roots penetrate through the soil, many of the cells surrounding the caps of roots are continually shed and replaced. These ruptured or lysed cells release their component parts, which include the polysaccharides that form root mucilage. These polysaccharides come from the Golgi apparatus and plant cell wall, which are rich in plant-specific polysaccharides. Unlike animal cells, plant cells have a cell wall that acts as a barrier surrounding the cell providing strength, which supports plants just like a skeleton.
This cell wall is used to produce everyday products such as timber, paper, and natural fabrics, including cotton.
Root mucilage is a part of a wider secrete from plant roots known as root exudate. Plant roots secrete a variety of organic molecules into the surrounding soil, such as proteins, enzymes, DNA, sugars and amino acids, which are the building blocks of life. This collective secretion is known as root exudate. This root exudate prevents root infection from bacteria and fungi, helps the roots to penetrate through the soil, and can create a micro-climate that is beneficial to the plant.
Root mucilage composition
To determine the sugars within root mucilage, monosaccharide analysis and monosaccharide linkage analysis are undertaken. Monosaccharide linkage analysis involves methylating the root mucilage, which contains polysaccharides. The root mucilage is hydrolysed using acid to break down the polysaccharides into their monosaccharide components. The resulting monosaccharides are then reduced to open their rings. The open-ring monosaccharides are then acetylated and separated, typically by gas chromatography, although liquid chromatography is also used. The masses of the monosaccharides are then detected using mass spectrometry. The gas chromatography retention times and the mass spectrometry chromatogram are used to identify how the monosaccharides are linked to form the polysaccharides that make up root mucilage. For monosaccharide analysis, which reveals the sugars that make up root mucilage, scientists hydrolyse the root mucilage using acid and put the samples directly through gas chromatography linked to mass spectrometry.
Several scientists have determined the composition of plant root mucilage using monosaccharide analysis and linkage analysis. Maize (Zea mays) root mucilage contains high levels of galactose, xylose, arabinose, rhamnose, and glucose, and lower levels of uronic acid, mannose, fucose, and glucuronic acid. Wheat (Triticum aestivum) root mucilage also contains high levels of xylose, arabinose, galactose, and glucose, and lower levels of rhamnose, glucuronic acid, and mannose. Cowpea (Vigna unguiculata) likewise contains high levels of arabinose, galactose, glucose, fucose, and xylose, and lower levels of rhamnose, mannose, and glucuronic acid. Many other plants have had their root mucilage composition determined using monosaccharide analysis and monosaccharide linkage analysis. With these monosaccharides and their linkages determined, scientists have identified the presence of pectin, arabinogalactan proteins, xyloglucan, arabinan, and xylan, which are plant-specific polysaccharides, within the root mucilage of plants.
Importance and role of root mucilage
Plants use up to 40% of their energy secreting root mucilage, which they generate from photosynthesis that takes place in the leaves. Root mucilage plays a role in developing a symbiotic relationship with soil-dwelling fungi. This important relationship is known to affect 94% of land plants, and benefits plants by increasing water and nutrient uptake from the soil, particularly of phosphorus. In return, the fungi receive food from the plant as carbohydrates, in the form of broken-down root mucilage. Without this relationship, many plants would struggle to gain sufficient water or nutrients.
Root mucilage also helps soil to stick to roots. The purpose of this is to maintain the plant's contact with the soil so that the plant can regulate the levels of water it can absorb, decrease friction so that roots can penetrate through the soil, and maintain a micro-climate. Root mucilage contributes to the particular hydrophysical properties of the rhizosphere, which can affect the plant's response to water deficit. For example, root mucilage can reduce evaporation and store water in the rhizosphere.
See also
Mucilage
Marine mucilage
References
Polysaccharides | Root mucilage | Chemistry | 1,148 |
33,968,017 | https://en.wikipedia.org/wiki/Enders%20SAMP/RAMP%20hydrazone-alkylation%20reaction | The Enders SAMP/RAMP hydrazone alkylation reaction is an asymmetric carbon-carbon bond formation reaction facilitated by pyrrolidine chiral auxiliaries. It was pioneered by E. J. Corey and Dieter Enders in 1976, and was further developed by Enders and his group. This method is usually a three-step sequence. The first step is to form the hydrazone between (S)-1-amino-2-methoxymethylpyrrolidine (SAMP) or (R)-1-amino-2-methoxymethylpyrrolidine (RAMP) and a ketone or aldehyde. Afterwards, the hydrazone is deprotonated by lithium diisopropylamide (LDA) to form an azaenolate, which reacts with alkyl halides or other suitable electrophiles to give alkylated hydrazone species with the simultaneous generation of a new chiral center. Finally, the alkylated ketone or aldehyde can be regenerated by ozonolysis or hydrolysis.
This reaction is a useful technique for asymmetric α-alkylation of ketones and aldehydes, which are common synthetic intermediates for medicinally interesting natural products and other related organic compounds. These natural products include (-)-C10-demethyl arteannuin B, the structural analog of antimalarial artemisinin, the polypropionate metabolite (-)-denticulatin A and B isolated from Siphonaria denticulata, zaragozic acid A, a potent inhibitor of sterol synthesis, and epothilone A and B, which have been proven to be very effective anticancer drugs.
History
Regioselective and stereoselective formation of carbon-carbon bonds adjacent to carbonyl group is an important procedure in organic chemistry. Alkylation reaction of enolates has been the main focus of the field. Both A. G. Myers and D. A. Evans developed asymmetric alkylation reactions for enolates.
The apparent shortcoming of enolate alkylation reactions is over-alkylation, even if the amount of base added for enolization and the reaction temperature are carefully controlled. Ketene formation during the deprotonation of substrates bearing Evans' oxazolidinone is also a main side reaction for the related alkylation reactions. Developments in the field of enamine chemistry and the use of imine derivatives of enolates provided an alternative to enolate alkylation reactions.
In 1963, G. Stork reported the first enamine alkylation reaction for ketones: the Stork enamine alkylation reaction.
In 1976, Meyers reported the first alkylation reaction of metallated azaenolates of hydrazones with an acyclic amino acid-based auxiliary. Compared with the free carbonyl compounds and the chiral enamine species reported previously, the hydrazones exhibit higher reactivity, regioselectivity and stereoselectivity.
The combination of cyclic amino acid derivatives (SAMP and RAMP) with the powerful hydrazone techniques was pioneered by E. J. Corey and D. Enders in 1976, and was independently developed by D. Enders later. Both SAMP and RAMP are synthesized from amino acids. The detailed synthesis of these two auxiliaries is shown below.
Mechanism
The Enders SAMP/RAMP hydrazone alkylation begins with the synthesis of the hydrazone from an N,N-dialkylhydrazine and a ketone or aldehyde.
The hydrazone is then deprotonated on the α-carbon position by a strong base, such as lithium diisopropylamide (LDA), leading to the formation of a resonance stabilized anion - an azaenolate. This anion is a very good nucleophile and readily attacks electrophiles, such as alkyl halides, to generate alkylated hydrazones with simultaneous creation of a new chiral center at the α-carbon.
The stereochemistry of this reaction is discussed in detail in next section.
Stereochemistry
Stereochemistry of the azaenolate
After the deprotonation, the hydrazone turns into an azaenolate with lithium cation chelating both the nitrogen and oxygen. There are two possible options for lithium chelation. One is that lithium is antiperiplanar to the C=C bond (blue colored), leading to the conformation of ZC-N; the other one is that lithium and the C=C bond are at the same side of the C-N bond (red colored), leading to the EC-N conformer. There are also two available orientations for the chelating nitrogen and R2 group, being either EC=C or ZC=C. This leads to four possible azaenolate intermediates (A, B, C and D) for the Enders' SAMP/RAMP hydrazone alkylation reaction.
Experiments and calculations show that one specific stereoisomer of the azaenolate is favored over the other three possible candidates. Therefore, although four isomers are possible for the azaenolate, only the one (azaenolate A) whose C=C double bond has E stereochemistry and whose C-N bond has Z stereochemistry (EC=CZC-N) is dominant for both cyclic and acyclic ketones.
Stereochemistry of alkylation
The favored azaenolate is the dominant starting material for the subsequent alkylation reaction. There are two possible faces from which an electrophile can approach. The steric interaction between the pyrrolidine ring and the electrophilic reagent hinders attack from the top face; when the electrophile attacks from the bottom face, no such unfavorable interaction exists. Therefore, the electrophilic attack proceeds from the sterically more accessible face.
Variants
The chelation of lithium cation with the methoxy group is one of the most important features of the transition state for Enders' hydrazone alkylation reaction. It is necessary to have this chelation effect to achieve high stereoselectivity. The development and modification of Enders' hydrazone alkylation reaction mainly focus on the addition of more steric hindrance on the pyrrolidine rings of both SAMP and RAMP, while preserving the methoxy group for lithium chelation.
The four most famous variants of SAMP and RAMP are SADP, SAEP, SAPP and RAMBO, whose structures are shown below.
In 2011, several N-amino cyclic carbamates were synthesized and studied for asymmetric hydrazone alkylation reactions. Both the stereochemistry and regioselectivity of the reactions turned out to be very promising. These new compounds consist of a new class of chiral auxiliary based on the carbamate structure and, therefore, no longer belong to the family of SAMP and RAMP. But they do provide very powerful alternatives to the traditional pyrrolidine systems.
Auxiliary release
Hydrazones are usually very stable towards solvolysis, and conversion to the ketone can require vigorous conditions. Also, aldehydic hydrazones often instead disproportionate to a nitrile and amine.
Two principal workup environments are common: oxidation and solvolysis. Reductive conversions are possible with low-valent transition metals, but remain relatively unstudied.
Oxidative cleavage has high yields and is most frequently used. Ozone or singlet oxygen can ozonolyze the diazene bond (and any olefinic moieties present), leaving a carbonyl, a nitrosamine, and dioxygen. Lemieux's gentler oxidation tolerates acetals and benzyl ethers. Peroxide reagents (e.g. NaBO3, (tBu4NSO4)2, or m-ClBzO2H) cleave the hydrazone with varying speeds, selectivities, and mechanisms, but the Baeyer-Villiger oxidation is a common side-reaction. High-valent transition metal oxyhalides (e.g. WF6, CoF3, MoOCl3) appear to primarily cleave via radicals. All except ozone and singlet oxygen generate nitriles from aldehydic hydrazones, either as the major or a substantial minor product.
Certain electrophiles also elicit nitriles: chloroformates, strongly-activated alkynes, or methyl iodide and a hindered base. Methyl iodide is also useful for hydrolysis: the alkylated hydrazonium iodide easily hydrolyzes to a carbonyl and hydrazoform, and air cleaves the hydrazoform to the hydrazine and carbon dioxide.
Indeed, a wide variety of acids promote hydrolysis. Bismuth trichloride cleaves arbitrary hydrazones in a microwave. Oxalic acid abstracts hydrazine from ketonic hydrazones; the oxalate adduct then decomposes to the original auxiliary in aqueous base. Silica gel hydrolyzes exquisitely acid-sensitive substrates, but is too weak to affect ketonic hydrazones adjacent to a primary carbon. Ketonic hydrazones adjacent to a secondary or tertiary carbon hydrolyze in the presence of catalytic cupric salts; that procedure also preserves substrates disturbed by oxidants or strong acids.
Boron trifluoride etherate catalyzes thioketalization, and Baker's yeast will hydrolyze non-bioactive substrates.
Hydrazone carbamates are cleaved much more readily than their parent hydrazones: para-toluenesulfonic acid affords the corresponding ketones in near-quantitative yields.
Conditions
Enders' hydrazone alkylation reaction is usually run as a three-step sequence. The first step is always the synthesis of the hydrazone: the ketone or aldehyde is mixed with either SAMP or RAMP and allowed to react under argon for 12 hours, and the crude hydrazone obtained is purified by distillation or recrystallization. At 0 °C, the hydrazone is transferred into an ether solution of lithium diisopropylamide. This mixture is then cooled to −110 °C, and the alkyl halide is added slowly. The mixture is allowed to warm to room temperature and react for 12 hours, after which the crude alkylated hydrazone is treated with ozone in a Schlenk tube to cleave the C=N bond. Distillation or column chromatography then gives the pure alkylation product.
Applications
Synthesis of zaragozic acid A
K. C. Nicolaou and coworkers at Scripps Research Institute generated the chiral hydrazone through Enders' hydrazone alkylation reaction with high stereoselectivity (de > 95%). The subsequent ozonolysis and Wittig reaction led to the side chain fragment of zaragozic acid A, which is a potent medicine for coronary heart disease.
Synthesis of denticulatin A and B
Ziegler and coworkers reacted an allyl iodide with the azaenolate to generate a chiral hydrocarbon chain. To avoid loss of the enantiomeric purity of the product, the authors used cupric acetate to regenerate the carbonyl group, obtaining only moderate yield for the cleavage of the C=N bond but good enantioselectivity (ee = 89%). The ketone was transformed after several steps into denticulatin A and B, polypropionate metabolites isolated from Siphonaria denticulata.
Synthesis of the derivative of arteannuin
(-)-C10-demethyl arteannuin B is a structural analog of the antimalarial artemisinin. It exhibits potent antimalarial activity even against a drug-resistant strain. Little and coworkers obtained the alkylated hydrazone in diastereomerically pure form (de > 95%) through the Enders' alkylation reaction. This intermediate was then elaborated into (-)-C10-demethyl arteannuin B.
Synthesis of epothilone A
Epothilone A and B are reported to be highly effective anticancer drugs. Several of their structural derivatives show very promising inhibition of breast cancer with only mild side effects, and some of them are now in clinical trials. In 1997, K. C. Nicolaou and coworkers reported the first total synthesis of both epothilone A and B. Enders' alkylation reaction was utilized at the very beginning of the synthesis to install the stereogenic center at C8. The reaction proceeded with both high yield and high diastereoselectivity.
See also
Myers' asymmetric alkylation
Stork enamine alkylation
Enders' reagents
Hajos–Parrish–Eder–Sauer–Wiechert reaction
References
Organic reactions
Name reactions | Enders SAMP/RAMP hydrazone-alkylation reaction | Chemistry | 2,782 |
21,257,731 | https://en.wikipedia.org/wiki/Aranella%20fimbriata | Aranella fimbriata is a taxonomic synonym that may refer to:
Utricularia longeciliata syn. [Aranella fimbriata Gleason]
Utricularia fimbriata syn. [Aranella fimbriata (Kunth) Barnhart]
Utricularia simulans syn. [Aranella fimbriata Barnhart]
Utricularia by synonymy | Aranella fimbriata | Biology | 92 |
62,035,010 | https://en.wikipedia.org/wiki/Sh%202-9 | Sh 2-9, also known as Gum 65, is a combination emission and reflection nebula in the constellation Scorpius, surrounding the multiple star system Sigma Scorpii. Sigma Scorpii is 1° to the northwest of Messier 4, and the nebula can be easily seen with small telescopes.
Sharpless 9 is a red emission nebula that surrounds the star Sigma Scorpii. It is thought the star Sigma Scorpii, a variable giant star, is ionizing this region. It is also recorded as reflection nebula C130.
This region is noted as both an emission and reflection nebula, although sometimes only one aspect is noted.
The magnitude 1.1 Antares is 2° to the southeast of this nebula.
One of the strongest 2.3 GHz sources in the region coincides with Sharpless 9.
There is a radio source on the edge, and it has been proposed that this is because there is a collision between this nebula and the dark nebula Kh 527.
Catalogs
Examples:
Sharpless 9
Gum 65
Cederblad 140
References
External links
Image of Sharpless-2 9
Scorpius
Emission nebulae
Reflection nebulae | Sh 2-9 | Astronomy | 238 |
3,126,169 | https://en.wikipedia.org/wiki/Ultra-low%20particulate%20air | Ultra-low particulate air (ULPA) is a type of air filter. A ULPA filter can remove from the air at least 99.999% of dust, pollen, mold, bacteria and any airborne particles with a minimum particle penetration size of 120 nanometres (0.12 μm, ultrafine particles). A ULPA filter can remove—to a large extent but not 100%—oil smoke, tobacco smoke, rosin smoke, smog, and insecticide dust. It can also remove carbon black to some extent. Some fan filter units incorporate ULPA filters. The EN 1822 and ISO 29463 standards may be used to rate ULPA filters.
Materials used in ULPA filters
Both high-efficiency particulate air (HEPA) and ULPA filter media have similar designs.
The filter media is like an enormous web of randomly arranged fibres. When air passes through this dense web, the solid particles get attached to the fibres and thus eliminated from the air.
Porosity is one of the key considerations of these fibres. Lower porosity, while decreasing the speed of filtration, increases the quality of filtered air. This parameter is measured in pores per linear inch.
Method of functioning
Physically blocking particles with a filter, called sieving, cannot remove smaller-sized particles. Depending on the particle size of the pollutant, the cleaning process relies on four techniques:
Sieving
Diffusion
Inertial impaction
Interception
A number of recommended practices have been written on testing these filters, including:
IEST-RP-CC001: HEPA and ULPA Filters,
IEST-RP-CC007: Testing ULPA Filters,
IEST-RP-CC022: Testing HEPA and ULPA Filter Media, and
IEST-RP-CC034: HEPA and ULPA Filter Leak Tests.
Specifications
See also the different classes for air filters for comparison
See also
Minimum efficiency reporting value (MERV)
High-efficiency particulate air (HEPA)
Microparticle performance rating (MPR)
References
External links
ULPA Filter Efficiency Chart: Sentry Air Systems
European Standard for EPA, HEPA & ULPA Filters — EN 1822 p. 6
EN 1822: the standard that greatly impacted the European cleanrooms market
Ulpa Filter Designs and How it clears the air
Filters
Cleanroom technology | Ultra-low particulate air | Chemistry,Engineering | 483 |
13,733,769 | https://en.wikipedia.org/wiki/Darcy%20friction%20factor%20formulae | In fluid dynamics, the Darcy friction factor formulae are equations that allow the calculation of the Darcy friction factor, a dimensionless quantity used in the Darcy–Weisbach equation, for the description of friction losses in pipe flow as well as open-channel flow.
The Darcy friction factor is also known as the Darcy–Weisbach friction factor, resistance coefficient or simply friction factor; by definition it is four times larger than the Fanning friction factor.
Notation
In this article, the following conventions and definitions are to be understood:
The Reynolds number Re is taken to be Re = V D / ν, where V is the mean velocity of fluid flow, D is the pipe diameter, and ν is the kinematic viscosity μ / ρ, with μ the fluid's dynamic viscosity and ρ the fluid's density.
The pipe's relative roughness ε / D, where ε is the pipe's effective roughness height and D the pipe (inside) diameter.
f stands for the Darcy friction factor. Its value depends on the flow's Reynolds number Re and on the pipe's relative roughness ε / D.
The log function is understood to be base-10 (as is customary in engineering fields): if x = log(y), then y = 10x.
The ln function is understood to be base-e: if x = ln(y), then y = ex.
Flow regime
Which friction factor formula may be applicable depends upon the type of flow that exists:
Laminar flow
Transition between laminar and turbulent flow
Fully turbulent flow in smooth conduits
Fully turbulent flow in rough conduits
Free surface flow.
Transition flow
Transition (neither fully laminar nor fully turbulent) flow occurs in the range of Reynolds numbers between 2300 and 4000. The value of the Darcy friction factor is subject to large uncertainties in this flow regime.
Turbulent flow in smooth conduits
The Blasius correlation is the simplest equation for computing the Darcy friction factor. Because the Blasius correlation has no term for pipe roughness, it is valid only for smooth pipes. However, the Blasius correlation is sometimes used in rough pipes because of its simplicity. The Blasius correlation is valid up to a Reynolds number of 100000.
Turbulent flow in rough conduits
The Darcy friction factor for fully turbulent flow (Reynolds number greater than 4000) in rough conduits can be modeled by the Colebrook–White equation.
Free surface flow
The last formula in the Colebrook equation section of this article is for free surface flow. The approximations elsewhere in this article are not applicable for this type of flow.
Choosing a formula
Before choosing a formula it is worth knowing that in the paper on the Moody chart, Moody stated the accuracy is about ±5% for smooth pipes and ±10% for rough pipes. If more than one formula is applicable in the flow regime under consideration, the choice of formula may be influenced by one or more of the following:
Required accuracy
Speed of computation required
Available computational technology:
calculator (minimize keystrokes)
spreadsheet (single-cell formula)
programming/scripting language (subroutine).
Colebrook–White equation
The phenomenological Colebrook–White equation (or Colebrook equation) expresses the Darcy friction factor f as a function of Reynolds number Re and pipe relative roughness ε / Dh, fitting the data of experimental studies of turbulent flow in smooth and rough pipes.
The equation can be used to (iteratively) solve for the Darcy–Weisbach friction factor f.
For a conduit flowing completely full of fluid at Reynolds numbers greater than 4000, it is expressed as:

1/√f = −2 log( ε/(3.7 D_h) + 2.51/(Re √f) )

or

1/√f = −2 log( ε/(14.8 R_h) + 2.51/(Re √f) )

where:
D_h = hydraulic diameter (m, ft) – for fluid-filled, circular conduits, D_h = D = inside diameter
R_h = hydraulic radius (m, ft) – for fluid-filled, circular conduits, R_h = D/4 = (inside diameter)/4
Note: Some sources use a constant of 3.71 in the denominator for the roughness term in the first equation above.
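Because f appears on both sides, the Colebrook equation is commonly solved by fixed-point iteration on f. The following Python sketch is illustrative only: the function name is invented, and the Swamee–Jain explicit approximation (described below) is assumed as the starting guess.

import math

def colebrook_f(Re, eps_over_D, tol=1e-10, max_iter=100):
    # Iteratively solve the Colebrook-White equation for the Darcy
    # friction factor f (full-flowing circular pipe, Re > 4000).
    # Starting guess from the Swamee-Jain explicit approximation.
    f = 0.25 / math.log10(eps_over_D / 3.7 + 5.74 / Re**0.9) ** 2
    for _ in range(max_iter):
        # 1/sqrt(f) = -2 log10( eps/D / 3.7 + 2.51 / (Re sqrt(f)) )
        inv_sqrt_f = -2.0 * math.log10(eps_over_D / 3.7 + 2.51 / (Re * math.sqrt(f)))
        f_new = 1.0 / inv_sqrt_f**2
        if abs(f_new - f) < tol:
            return f_new
        f = f_new
    return f

print(colebrook_f(Re=1e5, eps_over_D=1e-4))  # approximately 0.0186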
Solving
The Colebrook equation is usually solved numerically due to its implicit nature. Recently, the Lambert W function has been employed to obtain an exact solution in an explicit reformulation of the Colebrook equation.
Expanded forms
Additional, mathematically equivalent forms of the Colebrook equation are:

1/√f = 1.7384... − 2 log( 2ε/D + 18.574/(Re √f) )

where:
1.7384... = 2 log (2 × 3.7) = 2 log (7.4)
18.574 = 2.51 × 3.7 × 2

and

1/√f = 1.1364... − 2 log( ε/D + 9.287/(Re √f) )

where:
1.1364... = 1.7384... − 2 log (2) = 2 log (7.4) − 2 log (2) = 2 log (3.7)
9.287 = 18.574 / 2 = 2.51 × 3.7.
The additional equivalent forms above assume that the constants 3.7 and 2.51 in the formula at the top of this section are exact. The constants are probably values which were rounded by Colebrook during his curve fitting; but they are effectively treated as exact when comparing (to several decimal places) results from explicit formulae (such as those found elsewhere in this article) to the friction factor computed via Colebrook's implicit equation.
Equations similar to the additional forms above (with the constants rounded to fewer decimal places, or perhaps shifted slightly to minimize overall rounding errors) may be found in various references. It may be helpful to note that they are essentially the same equation.
Free surface flow
Another form of the Colebrook-White equation exists for free surfaces. Such a condition may exist in a pipe that is flowing partially full of fluid. For free surface flow:
The above equation is valid only for turbulent flow. Another approach for estimating f in free surface flows, which is valid under all the flow regimes (laminar, transition and turbulent) is the following:
where a and b are parameters that depend on Reh, the Reynolds number based on the characteristic hydraulic length h (the hydraulic radius for 1D flows or the water depth for 2D flows), and on Rh, the hydraulic radius (for 1D flows) or the water depth (for 2D flows). The solution is expressed with the Lambert W function.
Approximations of the Colebrook equation
Haaland equation
The Haaland equation was proposed in 1983 by Professor S.E. Haaland of the Norwegian Institute of Technology. It is used to solve directly for the Darcy–Weisbach friction factor f for a full-flowing circular pipe. It is an approximation of the implicit Colebrook–White equation, but the discrepancy from experimental data is well within the accuracy of the data.
The Haaland equation is expressed as:

1/√f = −1.8 log[ (ε/D / 3.7)^1.11 + 6.9/Re ]
Swamee–Jain equation
The Swamee–Jain equation is used to solve directly for the Darcy–Weisbach friction factor f for a full-flowing circular pipe. It is an approximation of the implicit Colebrook–White equation:

f = 0.25 / [ log( ε/(3.7 D) + 5.74/Re^0.9 ) ]²
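Both the Haaland and Swamee–Jain formulas are explicit in f, so they can be evaluated directly; the short Python sketch below is illustrative (the function names are not library routines):

import math

def f_haaland(Re, eps_over_D):
    # 1/sqrt(f) = -1.8 log10[ (eps/D / 3.7)^1.11 + 6.9/Re ]
    return (-1.8 * math.log10((eps_over_D / 3.7) ** 1.11 + 6.9 / Re)) ** -2

def f_swamee_jain(Re, eps_over_D):
    # f = 0.25 / [ log10( eps/D / 3.7 + 5.74 / Re^0.9 ) ]^2
    return 0.25 / math.log10(eps_over_D / 3.7 + 5.74 / Re**0.9) ** 2

print(f_haaland(1e5, 1e-4), f_swamee_jain(1e5, 1e-4))  # both near 0.018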
Serghides's solution
Serghides's solution is used to solve directly for the Darcy–Weisbach friction factor f for a full-flowing circular pipe. It is an approximation of the implicit Colebrook–White equation. It was derived using Steffensen's method.
The solution involves calculating three intermediate values and then substituting those values into a final equation:

A = −2 log( ε/(3.7 D) + 12/Re )
B = −2 log( ε/(3.7 D) + 2.51 A/Re )
C = −2 log( ε/(3.7 D) + 2.51 B/Re )

f = [ A − (B − A)² / (C − 2B + A) ]⁻²
The equation was found to match the Colebrook–White equation within 0.0023% for a test set with a 70-point matrix consisting of ten relative roughness values (in the range 0.00004 to 0.05) by seven Reynolds numbers (2500 to 10⁸).
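A minimal Python sketch of Serghides's three-step scheme (the function name is an illustrative choice):

import math

def f_serghides(Re, eps_over_D):
    # Serghides's explicit approximation of the Colebrook-White equation
    x = eps_over_D / 3.7
    A = -2.0 * math.log10(x + 12.0 / Re)
    B = -2.0 * math.log10(x + 2.51 * A / Re)
    C = -2.0 * math.log10(x + 2.51 * B / Re)
    # Steffensen-type extrapolation of the three iterates
    return (A - (B - A) ** 2 / (C - 2.0 * B + A)) ** -2

print(f_serghides(1e5, 1e-4))  # approximately 0.0185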
Goudar–Sonnad equation
The Goudar equation is the most accurate approximation for solving directly for the Darcy–Weisbach friction factor f for a full-flowing circular pipe. It is an approximation of the implicit Colebrook–White equation.
Brkić solution
Brkić shows one approximation of the Colebrook equation based on the Lambert W-function.
The equation was found to match the Colebrook–White equation within 3.15%.
Brkić-Praks solution
Brkić and Praks show one approximation of the Colebrook equation based on the Wright ω-function, a cognate of the Lambert W-function.
The equation was found to match the Colebrook–White equation within 0.0497%.
Praks-Brkić solution
Praks and Brkić show one approximation of the Colebrook equation based on the Wright ω-function, a cognate of the Lambert W-function.
The equation was found to match the Colebrook–White equation within 0.0012%.
Niazkar's solution
Since Serghides's solution was found to be one of the most accurate approximations of the implicit Colebrook–White equation, Niazkar modified Serghides's solution to solve directly for the Darcy–Weisbach friction factor f for a full-flowing circular pipe.
Niazkar's solution was found to be the most accurate correlation based on a comparative analysis conducted in the literature among 42 different explicit equations for estimating Colebrook friction factor.
Blasius correlations
Early approximations for smooth pipes by Paul Richard Heinrich Blasius in terms of the Darcy–Weisbach friction factor are given in an article of 1913:

f = 0.3164 Re^(−1/4).
Johann Nikuradse in 1932 proposed that this corresponds to a power law correlation for the fluid velocity profile.
Mishra and Gupta in 1979 proposed a correction for curved or helically coiled tubes, taking into account the equivalent curve radius, Rc:

Rc = R [ 1 + (H/(2πR))² ],

where f is a function of:
Pipe diameter, D (m, ft)
Curve radius, R (m, ft)
Helicoidal pitch, H (m, ft)
Reynolds number, Re (dimensionless)
valid for:
Retr < Re < 10⁵
6.7 < 2Rc/D < 346.0
0 < H/D < 25.4
Swamee equation
The Swamee equation is used to solve directly for the Darcy–Weisbach friction factor (f) for a full-flowing circular pipe for all flow regimes (laminar, transitional, turbulent). It is an exact solution for the Hagen–Poiseuille equation in the laminar flow regime and an approximation of the implicit Colebrook–White equation in the turbulent regime with a maximum deviation of less than 2.38% over the specified range. Additionally, it provides a smooth transition between the laminar and turbulent regimes to be valid as a full-range equation, 0 < Re < 108.
Table of Approximations
The following table lists historical approximations to the Colebrook–White relation for pressure-driven flow. The Churchill equation (1977) is the only equation that can be evaluated for very slow flow (Reynolds number < 1), but the Cheng (2008) and Bellos et al. (2018) equations also return an approximately correct value for the friction factor in the laminar flow region (Reynolds number < 2300). All of the others are for transitional and turbulent flow only.
References
Further reading
Brkić, Dejan; Praks, Pavel (2019). "Accurate and efficient explicit approximations of the Colebrook flow friction equation based on the Wright ω-function". Mathematics 7 (1): article 34. https://doi.org/10.3390/math7010034. ISSN 2227-7390
Praks, Pavel; Brkić, Dejan (2020). "Review of new flow friction equations: Constructing Colebrook’s explicit correlations accurately". Revista Internacional de Métodos Numéricos para Cálculo y Diseño en Ingeniería 36 (3): article 41. https://doi.org/10.23967/j.rimni.2020.09.001. ISSN 1886-158X (online version) - ISSN 0213-1315 (printed version)
External links
Web-based calculator of Darcy friction factors by Serghides' solution.
Open source pipe friction calculator.
Equations of fluid dynamics
Piping
Fluid mechanics | Darcy friction factor formulae | Physics,Chemistry,Engineering | 2,551
2,972,272 | https://en.wikipedia.org/wiki/Radite | Radite is a trade name for an early plastic, formed of pyroxylin (a partially nitrated cellulose), manufactured by DuPont and introduced by the Sheaffer Pen Company in 1924, when plastics were first used as a material for pen manufacture.
Sheaffer's Radite pens were the first commercial plastic pens, and Sheaffer marketed the material as "indestructible." Jade green in color, the pens were best sellers at the time. The material is credited with helping Sheaffer capture 25% of the market.
Radite is extremely similar to other celluloid pen materials trademarked at the time, such as Permanite, Pyralin, Fiberloid, Viscoloid, and Herculoid.
References
Plastics | Radite | Physics | 157 |
46,984,439 | https://en.wikipedia.org/wiki/Penicillium%20ootensis | Penicillium ootensis is a species of fungus in the genus Penicillium.
References
Further reading
ootensis
Fungi described in 1996
Fungus species | Penicillium ootensis | Biology | 34 |
1,988,157 | https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay%20filter | A Savitzky–Golay filter is a digital filter that can be applied to a set of digital data points for the purpose of smoothing the data, that is, to increase the precision of the data without distorting the signal tendency. This is achieved, in a process known as convolution, by fitting successive sub-sets of adjacent data points with a low-degree polynomial by the method of linear least squares. When the data points are equally spaced, an analytical solution to the least-squares equations can be found, in the form of a single set of "convolution coefficients" that can be applied to all data sub-sets, to give estimates of the smoothed signal, (or derivatives of the smoothed signal) at the central point of each sub-set. The method, based on established mathematical procedures, was popularized by Abraham Savitzky and Marcel J. E. Golay, who published tables of convolution coefficients for various polynomials and sub-set sizes in 1964. Some errors in the tables have been corrected. The method has been extended for the treatment of 2- and 3-dimensional data.
Savitzky and Golay's paper is one of the most widely cited papers in the journal Analytical Chemistry and is classed by that journal as one of its "10 seminal papers" saying "it can be argued that the dawn of the computer-controlled analytical instrument can be traced to this article".
Applications
The data consists of a set of points (x_j, y_j), j = 1, ..., n, where x_j is an independent variable and y_j is an observed value. The points are treated with a set of m convolution coefficients, C_i, according to the expression

Y_j = Σ_{i=−(m−1)/2}^{(m−1)/2} C_i y_{j+i},   (m+1)/2 ≤ j ≤ n − (m−1)/2.
Selected convolution coefficients are shown in the tables below. For example, for smoothing by a 5-point quadratic polynomial, m = 5, i = −2, −1, 0, 1, 2, and the jth smoothed data point, Y_j, is given by

Y_j = (−3y_{j−2} + 12y_{j−1} + 17y_j + 12y_{j+1} − 3y_{j+2}) / 35,

where C_{−2} = −3/35, C_{−1} = 12/35, etc. There are numerous applications of smoothing, such as avoiding the propagation of noise through an algorithm chain, or sometimes simply to make the data appear to be less noisy than it really is.
The following are applications of numerical differentiation of data. Note: when calculating the nth derivative, an additional scaling factor of n!/hⁿ may be applied to all calculated data points to obtain absolute values (see the expressions for dⁿY/dxⁿ, below, for details).
Location of maxima and minima in experimental data curves. This was the application that first motivated Savitzky. The first derivative of a function is zero at a maximum or minimum. The diagram shows data points belonging to a synthetic Lorentzian curve, with added noise (blue diamonds). Data are plotted on a scale of half width, relative to the peak maximum at zero. The smoothed curve (red line) and 1st derivative (green) were calculated with 7-point cubic Savitzky–Golay filters. Linear interpolation of the first derivative values at positions either side of the zero-crossing gives the position of the peak maximum. 3rd derivatives can also be used for this purpose.
Location of an end-point in a titration curve. An end-point is an inflection point where the second derivative of the function is zero. The titration curve for malonic acid illustrates the power of the method. The first end-point at 4 ml is barely visible, but the second derivative allows its value to be easily determined by linear interpolation to find the zero crossing.
Baseline flattening. In analytical chemistry it is sometimes necessary to measure the height of an absorption band against a curved baseline. Because the curvature of the baseline is much less than the curvature of the absorption band, the second derivative effectively flattens the baseline. Three measures of the derivative height, which is proportional to the absorption band height, are the "peak-to-valley" distances h1 and h2, and the height from baseline, h3.
Resolution enhancement in spectroscopy. Bands in the second derivative of a spectroscopic curve are narrower than the bands in the spectrum: they have reduced half-width. This allows partially overlapping bands to be "resolved" into separate (negative) peaks. The diagram illustrates how this may be used also for chemical analysis, using measurement of "peak-to-valley" distances. In this case the valleys are a property of the 2nd derivative of a Lorentzian. (x-axis position is relative to the position of the peak maximum on a scale of half width at half height).
Resolution enhancement with 4th derivative (positive peaks). The minima are a property of the 4th derivative of a Lorentzian.
Moving average
The "moving average filter" is a trivial example of a Savitzky–Golay filter that is commonly used with time series data to smooth out short-term fluctuations and highlight longer-term trends or cycles.
Each subset of the data set is fit with a straight horizontal line as opposed to a higher order polynomial. An unweighted moving average filter is the simplest convolution filter.
The moving average is often used for a quick technical analysis of financial data, like stock prices, returns or trading volumes. It is also used in economics to examine gross domestic product, employment or other macroeconomic time series.
It was not included in some tables of Savitzky–Golay convolution coefficients as all the coefficient values are identical, with the value 1/m.
Derivation of convolution coefficients
When the data points are equally spaced, an analytical solution to the least-squares equations can be found. This solution forms the basis of the convolution method of numerical smoothing and differentiation. Suppose that the data consists of a set of n points (x_j, y_j) (j = 1, ..., n), where x_j is an independent variable and y_j is a datum value. A polynomial will be fitted by linear least squares to a set of m (an odd number) adjacent data points, each separated by an interval h. Firstly, a change of variable is made,

z = (x − x̄)/h,

where x̄ is the value of the central point. z takes the values (e.g. m = 5 → z = −2, −1, 0, 1, 2). The polynomial of degree k is defined as

Y = a₀ + a₁z + a₂z² + ... + a_k zᵏ.

The coefficients a₀, a₁, etc. are obtained by solving the normal equations (bold a represents a vector, bold J represents a matrix),

a = (JᵀJ)⁻¹Jᵀy,

where J is a Vandermonde matrix, that is, the i-th row of J has the values 1, z_i, z_i², ..., z_iᵏ.
For example, for a cubic polynomial fitted to 5 points, z = −2, −1, 0, 1, 2, the normal equations are solved as follows. The normal equations can be factored into two separate sets of equations by rearranging rows and columns, one set involving the even coefficients (a₀, a₂) and the other the odd coefficients (a₁, a₃). Expressions for the inverse of each of these matrices can be obtained using Cramer's rule. Multiplying out and removing common factors gives, for this example,

a₀ = (1/35)(−3y₁ + 12y₂ + 17y₃ + 12y₄ − 3y₅)
a₁ = (1/12)(y₁ − 8y₂ + 8y₄ − y₅).

The coefficients of y in these expressions are known as convolution coefficients. They are elements of the matrix C which, in general, is

C = (JᵀJ)⁻¹Jᵀ.

In matrix notation this example is written as

a = C y.
Tables of convolution coefficients, calculated in the same way for m up to 25, were published for the Savitzky–Golay smoothing filter in 1964. The value of the central point, z = 0, is obtained from a single set of coefficients, a₀ for smoothing, a₁ for the 1st derivative, etc. The numerical derivatives are obtained by differentiating Y. This means that the derivatives are calculated for the smoothed data curve. For a cubic polynomial,

Y = a₀ + a₁z + a₂z² + a₃z³
dY/dx = (1/h)(a₁ + 2a₂z + 3a₃z²)
d²Y/dx² = (1/h²)(2a₂ + 6a₃z)
d³Y/dx³ = 6a₃/h³,

so that at the central point, z = 0, the derivatives are a₁/h, 2a₂/h², and 6a₃/h³.
In general, polynomials of degree (0 and 1), (2 and 3), (4 and 5) etc. give the same coefficients for smoothing and even derivatives. Polynomials of degree (1 and 2), (3 and 4) etc. give the same coefficients for odd derivatives.
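For any window size m and polynomial degree k, the matrix C = (JᵀJ)⁻¹Jᵀ can be generated numerically rather than taken from tables. A small NumPy sketch (the helper name is an illustrative assumption):

import numpy as np

def savgol_coefficient_matrix(m, k):
    # C = (J^T J)^-1 J^T; row r gives the convolution coefficients
    # that produce the polynomial coefficient a_r from the window.
    z = np.arange(m) - (m - 1) // 2           # z = -(m-1)/2 ... (m-1)/2
    J = np.vander(z, k + 1, increasing=True)  # Vandermonde design matrix
    return np.linalg.solve(J.T @ J, J.T)

C = savgol_coefficient_matrix(5, 3)
print(np.round(C[0] * 35))  # smoothing row: [-3. 12. 17. 12. -3.]
print(np.round(C[1] * 12))  # a1 row: [ 1. -8.  0.  8. -1.]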
Algebraic expressions
It is not always necessary to use the Savitzky–Golay tables. The summations in the matrix JᵀJ can be evaluated in closed form,

Σz² = m(m² − 1)/12
Σz⁴ = m(m² − 1)(3m² − 7)/240
Σz⁶ = m(m² − 1)(3m⁴ − 18m² + 31)/1344,

so that algebraic formulae can be derived for the convolution coefficients. Functions that are suitable for use with a curve that has an inflection point are (in each case the index runs over −(m − 1)/2 ≤ i ≤ (m − 1)/2):

Smoothing, polynomial degree 2,3: C_i = [(3m² − 7 − 20i²)/4] / [m(m² − 4)/3]

1st derivative, polynomial degree 3,4: C_i = [5(3m⁴ − 18m² + 31)i − 28(3m² − 7)i³] / [m(m² − 1)(3m⁴ − 39m² + 108)/15]

2nd derivative, polynomial degree 2,3: C_i = [12mi² − m(m² − 1)] / [m²(m² − 1)(m² − 4)/30]

3rd derivative, polynomial degree 3,4: C_i = [i³ − i(3m² − 7)/20] / [m(m² − 1)(3m⁴ − 39m² + 108)/50400]

Simpler expressions that can be used with curves that don't have an inflection point are:

Smoothing, polynomial degree 0,1 (moving average): C_i = 1/m

1st derivative, polynomial degree 1,2: C_i = i / [m(m² − 1)/12]
Higher derivatives can be obtained. For example, a fourth derivative can be obtained by performing two passes of a second derivative function.
Use of orthogonal polynomials
An alternative to fitting m data points by a simple polynomial in the subsidiary variable, z, is to use orthogonal polynomials:

Y = b₀P₀(z) + b₁P₁(z) + ... + b_k P_k(z),

where P₀, ..., P_k is a set of mutually orthogonal polynomials of degree 0, ..., k. Full details on how to obtain expressions for the orthogonal polynomials and the relationship between the coefficients b and a are given by Guest. Expressions for the convolution coefficients are easily obtained because the normal equations matrix, JᵀJ, is a diagonal matrix, as the product of any two orthogonal polynomials is zero by virtue of their mutual orthogonality. Therefore, each non-zero element of its inverse is simply the reciprocal of the corresponding element in the normal equation matrix. The calculation is further simplified by using recursion to build orthogonal Gram polynomials. The whole calculation can be coded in a few lines of PASCAL, a computer language well-adapted for calculations involving recursion.
Treatment of first and last points
Savitzky–Golay filters are most commonly used to obtain the smoothed or derivative value at the central point, z = 0, using a single set of convolution coefficients. (m − 1)/2 points at the start and end of the series cannot be calculated using this process. Various strategies can be employed to avoid this inconvenience.
The data could be artificially extended by adding, in reverse order, copies of the first (m − 1)/2 points at the beginning and copies of the last (m − 1)/2 points at the end. For instance, with m = 5, two points are added at the start and end of the data y1, ..., yn.
y3,y2,y1, ... ,yn, yn−1, yn−2.
Looking again at the fitting polynomial, it is obvious that data can be calculated for all values of z by using all sets of convolution coefficients for a single polynomial, a₀ ... a_k. For a cubic polynomial,

Y = a₀ + a₁z + a₂z² + a₃z³.

Convolution coefficients for the missing first and last points can also be easily obtained. This is also equivalent to fitting the first (m + 1)/2 points with the same polynomial, and similarly for the last points.
Weighting the data
It is implicit in the above treatment that the data points are all given equal weight. Technically, the objective function

U = Σ_i w_i (Y_i − y_i)²

being minimized in the least-squares process has unit weights, w_i = 1. When weights are not all the same the normal equations become

a = (JᵀWJ)⁻¹JᵀW y,

where W is the diagonal matrix of weights. If the same set of diagonal weights is used for all data subsets, an analytical solution to the normal equations can be written down. For example, with a quadratic polynomial, JᵀWJ is a 3 × 3 matrix of weighted sums of powers of z, and an explicit expression for its inverse can be obtained using Cramer's rule. A set of convolution coefficients may then be derived as

C = (JᵀWJ)⁻¹JᵀW.
Alternatively the coefficients, C, could be calculated in a spreadsheet, employing a built-in matrix inversion routine to obtain the inverse of the normal equations matrix. This set of coefficients, once calculated and stored, can be used with all calculations in which the same weighting scheme applies. A different set of coefficients is needed for each different weighting scheme.
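The weighted coefficients can equally be generated in code; a NumPy sketch under the same conventions as above (the helper name and the triangular example weights are illustrative assumptions):

import numpy as np

def weighted_savgol_matrix(m, k, w):
    # C = (J^T W J)^-1 J^T W for an m-point window, degree k,
    # and a length-m vector of weights w.
    z = np.arange(m) - (m - 1) // 2
    J = np.vander(z, k + 1, increasing=True)
    W = np.diag(w)
    return np.linalg.solve(J.T @ W @ J, J.T @ W)

# Weights that emphasise the centre of a 5-point window
C = weighted_savgol_matrix(5, 2, np.array([1.0, 2.0, 3.0, 2.0, 1.0]))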
It was shown that Savitzky–Golay filter can be improved by introducing weights that decrease at the ends of the fitting interval.
Two-dimensional convolution coefficients
Two-dimensional smoothing and differentiation can also be applied to tables of data values, such as intensity values in a photographic image which is composed of a rectangular grid of pixels.
Such a grid is referred as a kernel, and the data points that constitute the kernel are referred as nodes. The trick is to transform the rectangular kernel into a single row by a simple ordering of the indices of the nodes. Whereas the one-dimensional filter coefficients are found by fitting a polynomial in the subsidiary variable z to a set of m data points, the two-dimensional coefficients are found by fitting a polynomial in subsidiary variables v and w to a set of the values at the m × n kernel nodes. The following example, for a bivariate polynomial of total degree 3, m = 7, and n = 5, illustrates the process, which parallels the process for the one dimensional case, above.
The rectangular kernel of 35 data values, d1 ... d35,

v:      −3    −2    −1     0     1     2     3
w = −2: d1    d2    d3    d4    d5    d6    d7
w = −1: d8    d9    d10   d11   d12   d13   d14
w =  0: d15   d16   d17   d18   d19   d20   d21
w =  1: d22   d23   d24   d25   d26   d27   d28
w =  2: d29   d30   d31   d32   d33   d34   d35
becomes a vector when the rows are placed one after another.
d = (d1 ... d35)T
The Jacobian has 10 columns, one for each of the parameters a00 − a03, and 35 rows, one for each pair of v and w values. Each row has the form

(1, v, w, v², vw, w², v³, v²w, vw², w³).
The convolution coefficients are calculated as

C = (JᵀJ)⁻¹Jᵀ.
The first row of C contains 35 convolution coefficients, which can be multiplied with the 35 data values, respectively, to obtain the polynomial coefficient a₀₀, which is the smoothed value at the central node of the kernel (i.e. at the 18th node of the above table). Similarly, other rows of C can be multiplied with the 35 values to obtain other polynomial coefficients, which, in turn, can be used to obtain smoothed values and different smoothed partial derivatives at different nodes.
Nikitas and Pappa-Louisi showed that, depending on the format of the used polynomial, the quality of smoothing may vary significantly. They recommend using the polynomial of the form

Y = Σ_{i=0}^{p} Σ_{j=0}^{q} a_{ij} vⁱ wʲ,
because such polynomials can achieve good smoothing both in the central and in the near-boundary regions of a kernel, and therefore they can be confidently used in smoothing both at the internal and at the near-boundary data points of a sampled domain. In order to avoid ill-conditioning when solving the least-squares problem, p < m and q < n. For software that calculates the two-dimensional coefficients and for a database of such C's, see the section on multi-dimensional convolution coefficients, below.
Multi-dimensional convolution coefficients
The idea of two-dimensional convolution coefficients can be extended to higher spatial dimensions as well, in a straightforward manner, by arranging the multidimensional distribution of kernel nodes in a single row. Following the aforementioned finding by Nikitas and Pappa-Louisi in two-dimensional cases, usage of the following form of polynomial is recommended in multidimensional cases:

Y = Σ_{i₁=0}^{p₁} Σ_{i₂=0}^{p₂} ... Σ_{i_D=0}^{p_D} a_{i₁i₂...i_D} u₁^{i₁} u₂^{i₂} ... u_D^{i_D},

where D is the dimension of the space, the a's are the polynomial coefficients, and the u's are the coordinates in the different spatial directions. Algebraic expressions for partial derivatives of any order, be they mixed or otherwise, can be easily derived from the above expression. Note that C depends on the manner in which the kernel nodes are arranged in a row and on the manner in which the different terms of the expanded form of the above polynomial are arranged, when preparing the Jacobian.
Accurate computation of C in multidimensional cases becomes challenging, as precision of standard floating point numbers available in computer programming languages no longer remain sufficient. The insufficient precision causes the floating point truncation errors to become comparable to the magnitudes of some C elements, which, in turn, severely degrades its accuracy and renders it useless. Chandra Shekhar has brought forth two open source software, Advanced Convolution Coefficient Calculator (ACCC) and Precise Convolution Coefficient Calculator (PCCC), which handle these accuracy issues adequately. ACCC performs the computation by using floating point numbers, in an iterative manner. The precision of the floating-point numbers is gradually increased in each iteration, by using GNU MPFR. Once the obtained C's in two consecutive iterations start having same significant digits until a pre-specified distance, the convergence is assumed to have reached. If the distance is sufficiently large, the computation yields a highly accurate C. PCCC employs rational number calculations, by using GNU Multiple Precision Arithmetic Library, and yields a fully accurate C, in the rational number format. In the end, these rational numbers are converted into floating point numbers, until a pre-specified number of significant digits.
A database of C's that are calculated by using ACCC, for symmetric kernels and both symmetric and asymmetric polynomials, on unity-spaced kernel nodes, in the 1, 2, 3, and 4 dimensional spaces, is made available. Chandra Shekhar has also laid out a mathematical framework that describes usage of C calculated on unity-spaced kernel nodes to perform filtering and partial differentiations (of various orders) on non-uniformly spaced kernel nodes, allowing usage of C provided in the aforementioned database. Although this method yields approximate results only, they are acceptable in most engineering applications, provided that non-uniformity of the kernel nodes is weak.
Some properties of convolution
The sum of convolution coefficients for smoothing is equal to one. The sum of coefficients for odd derivatives is zero.
The sum of squared convolution coefficients for smoothing is equal to the value of the central coefficient.
Smoothing of a function leaves the area under the function unchanged.
Convolution of a symmetric function with even-derivative coefficients conserves the centre of symmetry.
Properties of derivative filters.
Signal distortion and noise reduction
It is inevitable that the signal will be distorted in the convolution process. From property 3 above, when data which has a peak is smoothed the peak height will be reduced and the half-width will be increased. Both the extent of the distortion and S/N (signal-to-noise ratio) improvement:
decrease as the degree of the polynomial increases
increase as the width, m of the convolution function increases
For example, if the noise in all data points is uncorrelated and has a constant standard deviation, σ, the standard deviation of the noise will be decreased by convolution with an m-point smoothing function to:

polynomial degree 0 or 1 (moving average): σ √(1/m)
polynomial degree 2 or 3: σ √[ 3(3m² − 7) / (4m(m² − 4)) ].
These functions are shown in the plot at the right. For example, with a 9-point linear function (moving average) two thirds of the noise is removed, and with a 9-point quadratic/cubic smoothing function only about half the noise is removed. Most of the noise remaining is low-frequency noise (see Frequency characteristics of convolution filters, below).
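Both factors equal the square root of the sum of the squared coefficients (compare property 2 above), which is easy to check numerically, assuming SciPy is available:

import numpy as np
from scipy.signal import savgol_coeffs

for degree in (1, 2):
    c = savgol_coeffs(9, degree)            # 9-point smoothing coefficients
    print(degree, np.sqrt(np.sum(c ** 2)))  # factor by which noise std is scaled
# degree 1 (moving average): 0.333...; degree 2 (quadratic/cubic): about 0.505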
Although the moving average function gives better noise reduction it is unsuitable for smoothing data which has curvature over m points. A quadratic filter function is unsuitable for getting a derivative of a data curve with an inflection point because a quadratic polynomial does not have one. The optimal choice of polynomial order and number of convolution coefficients will be a compromise between noise reduction and distortion.
Multipass filters
One way to mitigate distortion and improve noise removal is to use a filter of smaller width and perform more than one convolution with it. For two passes of the same filter this is equivalent to one pass of a filter obtained by convolution of the original filter with itself. For example, 2 passes of the filter with coefficients (1/3, 1/3, 1/3) is equivalent to 1 pass of the filter with coefficients
(1/9, 2/9, 3/9, 2/9, 1/9).
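This equivalence can be verified directly with NumPy; the random test signal is an arbitrary illustration:

import numpy as np

w = np.array([1.0, 1.0, 1.0]) / 3  # 3-point moving average
print(np.convolve(w, w))           # [1/9, 2/9, 3/9, 2/9, 1/9]

rng = np.random.default_rng(0)
y = rng.normal(size=100)
two_passes = np.convolve(np.convolve(y, w, mode="same"), w, mode="same")
one_pass = np.convolve(y, np.convolve(w, w), mode="same")
print(np.allclose(two_passes[2:-2], one_pass[2:-2]))  # True away from the ends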
The disadvantage of multipassing is that the equivalent filter width for n passes of an m-point function is n(m − 1) + 1, so multipassing is subject to greater end-effects. Nevertheless, multipassing has been used to great advantage. For instance, some 40–80 passes on data with a signal-to-noise ratio of only 5 gave useful results. The noise reduction formulae given above do not apply because correlation between calculated data points increases with each pass.
Frequency characteristics of convolution filters
Convolution maps to multiplication in the Fourier co-domain. The discrete Fourier transform of a convolution filter is a real-valued function which can be represented as

FT(θ) = Σ_{j=−(m−1)/2}^{(m−1)/2} C_j cos(jθ).
θ runs from 0 to 180 degrees, after which the function merely repeats itself. The plot for a 9-point quadratic/cubic smoothing function is typical. At very low angle, the plot is almost flat, meaning that low-frequency components of the data will be virtually unchanged by the smoothing operation. As the angle increases the value decreases so that higher frequency components are more and more attenuated. This shows that the convolution filter can be described as a low-pass filter: the noise that is removed is primarily high-frequency noise and low-frequency noise passes through the filter. Some high-frequency noise components are attenuated more than others, as shown by undulations in the Fourier transform at large angles. This can give rise to small oscillations in the smoothed data and phase reversal, i.e., high-frequency oscillations in the data get inverted by Savitzky–Golay filtering.
Convolution and correlation
Convolution affects the correlation between errors in the data. The effect of convolution can be expressed as a linear transformation, Y = C y.
By the law of error propagation, the variance-covariance matrix of the data, A, will be transformed into B according to

B = C A Cᵀ.
To see how this applies in practice, consider the effect of a 3-point moving average on the first three calculated points, Y₂, Y₃ and Y₄, assuming that the data points have equal variance and that there is no correlation between them. A will be an identity matrix multiplied by a constant, σ², the variance at each point.
In this case the correlation coefficients,

r_ij = B_ij / (B_ii B_jj)^(1/2),

between calculated points i and j will be

r_{i,i+1} = 2/3,  r_{i,i+2} = 1/3.
In general, the calculated values are correlated even when the observed values are not correlated. The correlation extends over 2m − 1 calculated points at a time.
Multipass filters
To illustrate the effect of multipassing on the noise and correlation of a set of data, consider the effects of a second pass of a 3-point moving average filter. The second pass is equivalent to a single pass of the 5-point filter derived above, with coefficients (1/9, 2/9, 3/9, 2/9, 1/9).
After two passes, the standard deviation of the central point has decreased to 0.48σ, compared to 0.58σ for one pass. The noise reduction is a little less than would be obtained with one pass of a 5-point moving average, which, under the same conditions, would result in the smoothed points having the smaller standard deviation of 0.45σ.
Correlation now extends over a span of 4 sequential points, with correlation coefficients

r_{i,i+1} = 16/19,  r_{i,i+2} = 10/19,  r_{i,i+3} = 4/19,  r_{i,i+4} = 1/19.
The advantage obtained by performing two passes with the narrower smoothing function is that it introduces less distortion into the calculated data.
Comparison with other filters and alternatives
Compared with other smoothing filters, e.g. convolution with a Gaussian or multi-pass moving-average filtering, Savitzky–Golay filters have an initially flatter response and sharper cutoff in the frequency domain, especially for high orders of the fit polynomial (see frequency characteristics). For data with limited signal bandwidth, this means that Savitzky–Golay filtering can provide better signal-to-noise ratio than many other filters; e.g., peak heights of spectra are better preserved than for other filters with similar noise suppression. Disadvantages of the Savitzky–Golay filters are comparably poor suppression of some high frequencies (poor stopband suppression) and artifacts when using polynomial fits for the first and last points.
Alternative smoothing methods that share the advantages of Savitzky–Golay filters and mitigate at least some of their disadvantages are Savitzky–Golay filters with properly chosen alternative fitting weights, Whittaker–Henderson smoothing and Hodrick–Prescott filter (equivalent methods closely related to smoothing splines), and convolution with a windowed sinc function.
Implementations in Programming Language(s)
MATLAB
sgolayfilt from Signal Processing Toolbox. Available since before version R2006b.
Python
flatten from module Lightkurve. Lightkurve is the official library for analysis of Kepler & TESS Telescope data.
scipy.signal.savgol_filter from module SciPy. SciPy is a robust library widely used for scientific computing in the academic community; a usage sketch is given below.
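A typical call smooths the signal and, with the deriv and delta arguments, differentiates it; the window length, polynomial order, and test signal below are arbitrary illustrative choices:

import numpy as np
from scipy.signal import savgol_filter

x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x) + np.random.default_rng(1).normal(scale=0.1, size=x.size)

y_smooth = savgol_filter(y, window_length=21, polyorder=3)
# First derivative; delta is the sample spacing h
dy_dx = savgol_filter(y, window_length=21, polyorder=3, deriv=1, delta=x[1] - x[0])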
See also
Kernel smoother – Different terminology for many of the same processes, used in statistics
Local regression — the LOESS and LOWESS methods
Numerical differentiation – Application to differentiation of functions
Smoothing spline
Stencil (numerical analysis) – Application to the solution of differential equations
Hodrick–Prescott filter
Kalman filter
Appendix
Tables of selected convolution coefficients
Consider a set of data points (x_j, y_j). The Savitzky–Golay tables refer to the case that the step x_{j+1} − x_j is constant, h. Examples of the use of the so-called convolution coefficients, with a cubic polynomial and a window size, m, of 5 points, are as follows.
Smoothing: Y_j = (1/35)(−3y_{j−2} + 12y_{j−1} + 17y_j + 12y_{j+1} − 3y_{j+2});
1st derivative: Y′_j = (1/12h)(y_{j−2} − 8y_{j−1} + 8y_{j+1} − y_{j+2});
2nd derivative: Y″_j = (1/7h²)(2y_{j−2} − y_{j−1} − 2y_j − y_{j+1} + 2y_{j+2}).
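Assuming SciPy is available, these values can be reproduced with scipy.signal.savgol_coeffs; use='dot' keeps the window order y_{j−2} ... y_{j+2}:

from scipy.signal import savgol_coeffs

print(savgol_coeffs(5, 3, use='dot') * 35)           # [-3. 12. 17. 12. -3.]
print(savgol_coeffs(5, 3, deriv=1, use='dot') * 12)  # [ 1. -8.  0.  8. -1.]
print(savgol_coeffs(5, 3, deriv=2, use='dot') * 7)   # [ 2. -1. -2. -1.  2.]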
Selected values of the convolution coefficients for polynomials of degree 1, 2, 3, 4 and 5 are given in the following tables. The values were calculated using the PASCAL code provided in Gorry.
Notes
References
External links
Advanced Convolution Coefficient Calculator (ACCC) for multidimensional least-squares filters
Savitzky–Golay filter in Fundamentals of Statistics
A wider range of coefficients for a range of data set sizes, orders of fit, and offsets from the centre point
Filter theory
Signal estimation | Savitzky–Golay filter | Engineering | 5,448 |