id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
16,781,789 | https://en.wikipedia.org/wiki/HD%20221287%20b | HD 221287 b, also known as Pipitea, is an exoplanet that orbits HD 221287, approximately 173 light years away in the constellation of Tucana. This planet has a mass of >3.12 MJ (>992 M🜨) and orbits in the habitable zone at 1.25 AU (6.06 μpc) from the star, taking 1.25 years to complete one orbit at 29.9 km/s. Dominique Naef discovered this planet in early 2007 using the HARPS spectrograph in Chile.
Based on a probable 10⁻⁴ fraction of the planet's mass being held in satellites, the planet could have a Mars-sized moon with a habitable surface. Alternatively, this mass could be distributed among many small satellites.
It was named "Pipitea" by representatives of the Cook Islands in the IAU's 2019 NameExoWorlds contest, with the comment "Pipitea is a small, white and gold pearl found in Penrhyn lagoon in the northern group of the Cook Islands."
Insolation data for HD 221287 b
From the stellar luminosity and the orbital distance, the irradiance at the planet can be calculated:
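As a minimal sketch of that calculation (the inverse-square law S = L / (4πd²)), assuming a Sun-like luminosity as a stand-in, since the star's measured luminosity is not quoted in this text:

```python
import math

# Inverse-square law: irradiance S = L / (4 * pi * d^2)
L_STAR = 3.828e26        # watts -- solar luminosity used as an illustrative stand-in
AU = 1.495978707e11      # metres per astronomical unit
d = 1.25 * AU            # orbital distance quoted above

irradiance = L_STAR / (4 * math.pi * d ** 2)
print(f"{irradiance:.0f} W/m^2")   # ~871 W/m^2 for a Sun-like star at 1.25 AU
```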
See also
HD 100777 b
HD 190647 b
Notes
References
External links
Giant planets
Tucana
Exoplanets discovered in 2007
Exoplanets detected by radial velocity
Giant planets in the habitable zone
Exoplanets with proper names
| HD 221287 b | Astronomy | 308 |
1,861,998 | https://en.wikipedia.org/wiki/Leo%20A | Leo A (also known as Leo III) is an irregular galaxy that is part of the Local Group. It lies 2.6 million light-years from Earth, and was discovered by Fritz Zwicky in 1942. The estimated mass of this galaxy is solar masses, with at least 80% consisting of dark matter. It is one of the most isolated galaxies in the Local Group and shows no indications of an interaction or merger for several billion years. However, Leo A is nearly unique among irregular galaxies in that more than 90% of its stars formed more recently than 8 billion years ago, suggesting a rather unusual evolutionary history. The presence of RR Lyrae variables shows that the galaxy has an old stellar population that is up to 10 billion years in age.
The neutral hydrogen in this galaxy occupies a volume similar to its optical extent, and is distributed in a squashed, uneven ring. The galaxy is not rotating and the hydrogen is moving about in random clumps. The proportion of elements with higher atomic numbers than helium is only about 1–2% of the ratio in the Sun. This indicates a much less complete conversion of gas into stars than in the Milky Way galaxy. The Leo A galaxy shows signs of increased star formation some time within the last 1–4 billion years, although the current level is low. There are four H II regions powered by short-lived, O-class stars.
References
External links
Dwarf galaxies
Irregular galaxies
Leo (constellation)
Local Group
05364
28868
Astronomical objects discovered in 1942 | Leo A | Astronomy | 309 |
57,100,265 | https://en.wikipedia.org/wiki/Luteuthis%20shuishi | Luteuthis shuishi is a species of octopus that lives in the South China Sea; it is known only from a single female specimen collected at a depth of 767 meters. It has short arms and is quite gelatinous. The octopus's total length is about 300 millimeters.
References
Octopuses
Cephalopods described in 2002
Marine molluscs of Asia
Species known from a single specimen | Luteuthis shuishi | Biology | 84 |
490,020 | https://en.wikipedia.org/wiki/Furfural | Furfural is an organic compound with the formula C4H3OCHO. It is a colorless liquid, although commercial samples are often brown. It has an aldehyde group attached to the 2-position of furan. It is a product of the dehydration of sugars, as occurs in a variety of agricultural byproducts, including corncobs, oat, wheat bran, and sawdust. The name furfural comes from the Latin word furfur, meaning bran, referring to its usual source. Furfural is only derived from dried biomass. In addition to ethanol, acetic acid, and sugar, furfural is one of the oldest organic chemicals readily purified from natural precursors.
History
Furfural was first isolated in 1821 (published in 1832) by the German chemist Johann Wolfgang Döbereiner, who produced a small sample as a byproduct of formic acid synthesis. In 1840, the Scottish chemist John Stenhouse found that the same chemical could be produced by distilling a wide variety of crop materials, including corn, oats, bran, and sawdust, with aqueous sulfuric acid; he also determined furfural's empirical formula (C5H4O2). George Fownes named this oil "furfurol" in 1845 (from furfur (bran), and oleum (oil)). In 1848, the French chemist Auguste Cahours determined that furfural was an aldehyde. Determining the structure of furfural required some time: the furfural molecule contains a cyclic ether (furan), which tends to break open when it is treated with harsh reagents. In 1870, German chemist Adolf von Baeyer speculated about the structure of the chemically similar compounds furan and 2-furoic acid. Additional research by German chemist Heinrich Limpricht supported this idea. In work published in 1877, Baeyer confirmed his earlier proposal for the structure of furfural. By 1886, furfurol was being called "furfural" (short for "furfuraldehyde") and the correct chemical structure for furfural was being proposed. By 1887, the German chemist Willy Marckwald had inferred that some derivatives of furfural contained a furan nucleus. In 1901, the German chemist Carl Harries determined furan's structure through work with succindialdehyde and 2-methylfuran, thereby also confirming furfural's proposed structure.
Furfural remained relatively obscure until 1922, when the Quaker Oats Company began mass-producing it from oat hulls. Today, furfural is still produced from agricultural byproducts like sugarcane bagasse and corn cobs. The main countries producing furfural today are the Dominican Republic, South Africa and China.
Properties
Furfural dissolves readily in most polar organic solvents, but it is only slightly soluble in either water or alkanes.
Furfural participates in the same kinds of reactions as other aldehydes and other aromatic compounds. It exhibits less aromatic character than benzene, as can be seen from the fact that furfural is readily hydrogenated to tetrahydrofurfuryl alcohol. When heated in the presence of acids, furfural irreversibly polymerizes, acting as a thermosetting polymer.
Production
Furfural may be obtained by the acid catalyzed dehydration of 5-carbon sugars (pentoses), particularly xylose.
C5H10O5 → C4H3OCHO + 3 H2O
These sugars may be obtained from pentosans obtained from hemicellulose present in lignocellulosic biomass.
Between 3% and 10% of the mass of crop residue feedstocks can be recovered as furfural, depending on the type of feedstock. Furfural and water evaporate together from the reaction mixture, and separate upon condensation. The global production capacity is about 800,000 tons as of 2012. China is the biggest supplier of furfural, and accounts for the greater part of global capacity. The other two major commercial producers are Illovo Sugar in South Africa and Central Romana in the Dominican Republic.
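As a rough check on those recovery figures, the stoichiometry of the dehydration above sets an upper bound on yield. The sketch below is illustrative only; the 30% xylose-equivalent content of the feedstock is an assumed, not sourced, figure.

```python
# Theoretical furfural yield from the xylose fraction of a feedstock.
# Reaction: C5H10O5 (xylose) -> C4H3OCHO (furfural) + 3 H2O
M_XYLOSE = 150.13      # g/mol
M_FURFURAL = 96.08     # g/mol

def max_furfural_kg(feedstock_kg, xylose_equiv_fraction):
    """Upper bound assuming every xylose unit is dehydrated to furfural;
    real industrial recoveries are far lower (3-10% of feedstock mass)."""
    xylose_kg = feedstock_kg * xylose_equiv_fraction
    return xylose_kg * M_FURFURAL / M_XYLOSE

# Illustrative: 1000 kg of corncobs with an assumed 30% xylose-equivalent content
print(round(max_furfural_kg(1000, 0.30)))   # ~192 kg theoretical maximum
```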
In the laboratory, furfural can be synthesized from plant material by heating with sulfuric acid or other acids. To avoid toxic effluents, efforts to replace sulfuric acid with easily separable and reusable solid acid catalysts have been studied around the world. The formation and extraction of xylose, and subsequently furfural, can be favored over the extraction of other sugars by varying conditions such as acid concentration, temperature, and time.
In industrial production, some lignocellulosic residue remains after the removal of the furfural. This residue is dried and burned to provide steam for the operation of the furfural plant. Newer and more energy efficient plants have excess residue, which is or can be used for co-generation of electricity, cattle feed, activated carbon, mulch/fertiliser, etc.
Uses and occurrence
Furfural is commonly found in many cooked or heated foods such as coffee (55–255 mg/kg) and whole grain bread (26 mg/kg).
In the petrochemical industry, furfural is used as a specialized solvent for diene extraction.
Furfural is an important renewable, non-petroleum-based chemical feedstock which can be converted into solvents, polymers, fuels and other useful chemicals by a range of catalytic reductions.
Hydrogenation of furfural provides furfuryl alcohol (FA), which is used to produce furan resins, which are exploited in thermoset polymer matrix composites, cements, adhesives, casting resins and coatings. Further hydrogenation of furfuryl alcohol leads to tetrahydrofurfuryl alcohol (THFA), which is used as a solvent in agricultural formulations and as an adjuvant to help herbicides penetrate the leaf structure.
Furan is manufactured industrially by palladium-catalyzed decarbonylation of furfural.
Another important solvent made from furfural is methyltetrahydrofuran. Furfural is used to make other furan derivatives, such as furoic acid, via oxidation, and furan itself via palladium catalyzed vapor phase decarbonylation.
There is a good market for value added chemicals that can be obtained from furfural.
Safety
Furfural is carcinogenic in lab animals and mutagenic in single cell organisms, but there is no data on human subjects. It is classified in IARC Group 3 due to the lack of data on humans and too few tests on animals to satisfy Group 2A/2B criteria. It is hepatotoxic.
The median lethal dose is high, 650–900 mg/kg (oral, dogs), consistent with its pervasiveness in foods.
The Occupational Safety and Health Administration has set a permissible exposure limit for furfural of 5 ppm over an eight-hour time-weighted average (TWA), and also designates furfural as a risk for skin absorption.
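The 8-hour time-weighted average in that limit is simply the exposure-weighted mean concentration over the shift; a small sketch with invented sampling intervals:

```python
# 8-hour time-weighted average (TWA) exposure; sample values are hypothetical.
samples = [(8.0, 2.0), (3.0, 4.0), (0.0, 2.0)]   # (concentration in ppm, hours)

twa = sum(c * t for c, t in samples) / 8.0        # averaged over the full 8-hour shift
print(f"TWA = {twa:.2f} ppm")                     # 3.50 ppm, below the 5 ppm limit
```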
See also
Aniline acetate test
Bial's test
Molisch's test
Tollens' reagent
References
External links
Conjugated dienes
Monomers
Flavors
Solvents
Fuel dyes
Resins
2-Furyl compounds
Substances discovered in the 19th century | Furfural | Physics,Chemistry,Materials_science | 1,541 |
32,255,439 | https://en.wikipedia.org/wiki/Neottiella%20rutilans | Neottiella rutilans is a species of apothecial fungus belonging to the family Pyronemataceae. This European species appears in autumn as bright yellowish-orange discs among Polytrichum and related mosses.
Description
This cup fungus has a shallow, somewhat uneven cup and a short stem. The upper surface is yellow, often tinged with reddish-orange, and the underside is covered by a dense felting of white hairs.
Ecology
This fungus tends to grow among mosses, particularly Polytrichum species, on sandy soils on heaths and drier moorland, appearing in the autumn and winter.
Like other cup fungi, the upper surface is the spore-producing surface and as it faces upwards, the spores cannot fall out. Instead, the spores are ejected when the fungus is disturbed; if the cup is given a sharp tap when it is mature, a cloud of spores rises in a thin mist.
References
External links
Neottiella rutilans at Species Fungorum
Pezizales
Fungi described in 1822
Taxa named by Elias Magnus Fries
Fungus species | Neottiella rutilans | Biology | 218 |
22,133,845 | https://en.wikipedia.org/wiki/Fabric%20computing | Fabric computing or unified computing involves constructing a computing fabric consisting of interconnected nodes that look like a weave or a fabric when seen collectively from a distance.
Usually the phrase refers to a consolidated high-performance computing system consisting of loosely coupled storage, networking and parallel processing functions linked by high bandwidth interconnects (such as 10 Gigabit Ethernet and InfiniBand) but the term has also been used to describe platforms such as the Azure Services Platform and grid computing in general (where the common theme is interconnected nodes that appear as a single logical unit).
The fundamental components of fabrics are "nodes" (processor(s), memory, and/or peripherals) and "links" (functional connections between nodes). While the term "fabric" has also been used in association with storage area networks and with switched fabric networking, the introduction of compute resources provides a complete "unified" computing system. Other terms used to describe such fabrics include "unified fabric", "data center fabric" and "unified data center fabric".
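A toy model of the node/link view described above, in which the fabric's loosely coupled members present their resources as one logical pool; all names and capacities here are invented for illustration and do not represent any vendor's product.

```python
# Minimal illustrative model of a computing fabric: nodes plus functional links.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cores: int
    memory_gb: int

nodes = [Node("n1", 32, 256), Node("n2", 32, 256), Node("n3", 64, 512)]
links = [("n1", "n2"), ("n2", "n3"), ("n1", "n3")]   # functional connections

# Seen from outside, the fabric behaves as a single logical unit:
pool = {"cores": sum(n.cores for n in nodes),
        "memory_gb": sum(n.memory_gb for n in nodes)}
print(pool)   # {'cores': 128, 'memory_gb': 1024}
```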
Ian Foster, director of the Computation Institute at the Argonne National Laboratory and the University of Chicago, suggested in 2007 that grid computing "fabrics" were "poised to become the underpinning for next-generation enterprise IT architectures and be used by a much greater part of many organizations".
History
While the term has been in use since the mid-to-late 1990s, the growth of cloud computing and Cisco's evangelism of unified data center fabrics, followed by unified computing (an evolutionary data center architecture whereby blade servers are integrated or unified with supporting network and storage infrastructure) starting in March 2009, has renewed interest in the technology.
There have been mixed reactions to Cisco's architecture, particularly from rivals who claim that these proprietary systems will lock out other vendors. Analysts claim that this "ambitious new direction" is "a big risk" as companies such as IBM and HP who have previously partnered with Cisco on data center projects (accounting for $2–3bn of Cisco's annual revenue) are now competing with them.
In 2007, Wombat Financial Software launched the "Wombat Data Fabric", the first commercial off-the-shelf software platform providing high-performance, low-latency RDMA-based messaging across an InfiniBand switch.
Key characteristics
The main advantages of fabrics are that massive concurrent processing combined with a huge, tightly coupled address space makes it possible to solve huge computing problems (such as those presented by delivery of cloud computing services); and that they are both scalable and able to be dynamically reconfigured.
Challenges include a non-linearly degrading performance curve, whereby adding resources does not increase performance proportionally (a common problem with parallel computing), and maintaining security.
Companies
Companies offering unified or fabric computing systems include Avaya, Brocade, Cisco, Dell, Egenera, HPE, IBM, Liquid Computing Corporation, TIBCO, Unisys, and Xsigo Systems.
See also
Cloud computing
Converged infrastructure
Grid computing
Omni-Path
Parallel computing
Massively parallel
Massively parallel processor array (MPPA)
References
External links
Cisco Unified Computing and Servers
Flexible HPE Converged Systems
What is a Switch Fabric
Cloud computing
Distributed computing architecture
Computer networking | Fabric computing | Technology,Engineering | 655 |
2,114,155 | https://en.wikipedia.org/wiki/Duoprism | In geometry of 4 dimensions or higher, a double prism or duoprism is a polytope resulting from the Cartesian product of two polytopes, each of two dimensions or higher. The Cartesian product of an n-polytope and an m-polytope is an (n+m)-polytope, where n and m are dimensions of 2 (polygon) or higher.
The lowest-dimensional duoprisms exist in 4-dimensional space as 4-polytopes, being the Cartesian product of two polygons in 2-dimensional Euclidean space. More precisely, it is the set of points:
{(x, y, z, w) | (x, y) ∈ P1, (z, w) ∈ P2}
where P1 and P2 are the sets of the points contained in the respective polygons. Such a duoprism is convex if both bases are convex, and is bounded by prismatic cells.
Nomenclature
Four-dimensional duoprisms are considered to be prismatic 4-polytopes. A duoprism constructed from two regular polygons of the same edge length is a uniform duoprism.
A duoprism made of n-polygons and m-polygons is named by prefixing 'duoprism' with the names of the base polygons, for example: a triangular-pentagonal duoprism is the Cartesian product of a triangle and a pentagon.
An alternative, more concise way of specifying a particular duoprism is by prefixing with numbers denoting the base polygons, for example: 3,5-duoprism for the triangular-pentagonal duoprism.
Other alternative names:
q-gonal-p-gonal prism
q-gonal-p-gonal double prism
q-gonal-p-gonal hyperprism
The term duoprism was coined by George Olshevsky, shortened from double prism. John Horton Conway proposed a similar name proprism for product prism, a Cartesian product of two or more polytopes of dimension at least two. The duoprisms are proprisms formed from exactly two polytopes.
Example 16-16 duoprism
Geometry of 4-dimensional duoprisms
A 4-dimensional uniform duoprism is created by the product of a regular n-sided polygon and a regular m-sided polygon with the same edge length. It is bounded by n m-gonal prisms and m n-gonal prisms. For example, the Cartesian product of a triangle and a hexagon is a duoprism bounded by 6 triangular prisms and 3 hexagonal prisms.
When m and n are identical, the resulting duoprism is bounded by 2n identical n-gonal prisms. For example, the Cartesian product of two triangles is a duoprism bounded by 6 triangular prisms.
When m and n are identically 4, the resulting duoprism is bounded by 8 square prisms (cubes), and is identical to the tesseract.
The m-gonal prisms are attached to each other via their m-gonal faces, and form a closed loop. Similarly, the n-gonal prisms are attached to each other via their n-gonal faces, and form a second loop perpendicular to the first. These two loops are attached to each other via their square faces, and are mutually perpendicular.
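The construction just described can be made concrete with a short sketch (a minimal illustration, not from the source): the vertex set of a uniform p-q duoprism is the Cartesian product of the vertex sets of two regular polygons with equal edge length, placed in orthogonal planes.

```python
import math
from itertools import product

def polygon(n, edge=1.0):
    """Vertices of a regular n-gon with the given edge length."""
    r = edge / (2 * math.sin(math.pi / n))   # circumradius giving that edge length
    return [(r * math.cos(2 * math.pi * k / n),
             r * math.sin(2 * math.pi * k / n)) for k in range(n)]

def duoprism_vertices(p, q, edge=1.0):
    """Cartesian product of two regular polygons: the p-q duoprism's vertices."""
    return [(x, y, z, w)
            for (x, y), (z, w) in product(polygon(p, edge), polygon(q, edge))]

verts = duoprism_vertices(3, 6)
print(len(verts))   # 18 vertices; the cells are 6 triangular and 3 hexagonal prisms
```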
As m and n approach infinity, the corresponding duoprisms approach the duocylinder. As such, duoprisms are useful as non-quadric approximations of the duocylinder.
Nets
Perspective projections
A cell-centered perspective projection makes a duoprism look like a torus, with two sets of orthogonal cells, p-gonal and q-gonal prisms.
The p-q duoprisms are identical to the q-p duoprisms, but look different in these projections because they are projected in the center of different cells.
Orthogonal projections
Vertex-centered orthogonal projections of p-p duoprisms project into [2n] symmetry for odd degrees, and [n] for even degrees. There are n vertices projected into the center. For 4,4, it represents the A3 Coxeter plane of the tesseract. The 5,5 projection is identical to the 3D rhombic triacontahedron.
Related polytopes
The regular skew polyhedron, {4,4|n}, exists in 4-space as the n² square faces of an n-n duoprism, using all 2n² edges and n² vertices. The 2n n-gonal faces can be seen as removed. (Skew polyhedra can be constructed in the same way from an n-m duoprism, but these are not regular.)
Duoantiprism
Like the antiprisms as alternated prisms, there is a set of 4-dimensional duoantiprisms: 4-polytopes that can be created by an alternation operation applied to a duoprism. The alternated vertices create nonregular tetrahedral cells, except for the special case, the 4-4 duoprism (tesseract) which creates the uniform (and regular) 16-cell. The 16-cell is the only convex uniform duoantiprism.
The duoprisms t0,1,2,3{p,2,q} can be alternated into ht0,1,2,3{p,2,q}, the "duoantiprisms", which cannot be made uniform in general. The only convex uniform solution is the trivial case of p = q = 2, which is a lower symmetry construction of the tesseract t0,1,2,3{2,2,2}, with its alternation as the 16-cell, s{2}s{2}.
The only nonconvex uniform solution is p = 5, q = 5/3, ht0,1,2,3{5,2,5/3}, constructed from 10 pentagonal antiprisms, 10 pentagrammic crossed-antiprisms, and 50 tetrahedra, known as the great duoantiprism (gudap).
Ditetragoltriates
Also related are the ditetragoltriates or octagoltriates, formed by taking the octagon (considered to be a ditetragon or a truncated square) to a p-gon. The octagon of a p-gon can be clearly defined if one assumes that the octagon is the convex hull of two perpendicular rectangles; then the p-gonal ditetragoltriate is the convex hull of two p-p duoprisms (where the p-gons are similar but not congruent, having different sizes) in perpendicular orientations. The resulting polychoron is isogonal and has 2p p-gonal prisms and p² rectangular trapezoprisms (a cube with D2d symmetry) but cannot be made uniform. The vertex figure is a triangular bipyramid.
Double antiprismoids
Like the duoantiprisms as alternated duoprisms, there is a set of p-gonal double antiprismoids created by alternating the 2p-gonal ditetragoltriates, creating p-gonal antiprisms and tetrahedra while reinterpreting the non-corealmic triangular bipyramidal spaces as two tetrahedra. The resulting figure is generally not uniform except for two cases: the grand antiprism and its conjugate, the pentagrammic double antiprismoid (with p = 5 and 5/3 respectively), represented as the alternation of a decagonal or decagrammic ditetragoltriate. The vertex figure is a variant of the sphenocorona.
k_22 polytopes
The 3-3 duoprism, −122, is first in a dimensional series of uniform polytopes, expressed by Coxeter as the k22 series. The 3-3 duoprism is the vertex figure for the second, the birectified 5-simplex. The fourth figure is a Euclidean honeycomb, 222, and the final is a paracompact hyperbolic honeycomb, 322, with Coxeter group [32,2,3]. Each progressive uniform polytope is constructed from the previous as its vertex figure.
See also
Polytope and 4-polytope
Convex regular 4-polytope
Duocylinder
Tesseract
Notes
References
Regular Polytopes, H. S. M. Coxeter, Dover Publications, Inc., 1973, New York, p. 124.
Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999, (Chapter 5: Regular Skew Polyhedra in three and four dimensions and their topological analogues)
Coxeter, H. S. M. Regular Skew Polyhedra in Three and Four Dimensions. Proc. London Math. Soc. 43, 33-62, 1937.
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26)
N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966
Uniform 4-polytopes | Duoprism | Physics | 1,939 |
39,670,586 | https://en.wikipedia.org/wiki/Balloon%20phobia | Balloon phobia or globophobia is a fear of balloons. The most common source of fear is the sound of balloons popping, but individuals can also be triggered by their texture and smell.
Generally, people with globophobia will refuse to touch, feel, smell, or go near a balloon for fear it will burst.
Globophobia originates from the Latin word globus, meaning sphere, and the Greek word phobos, which translates to fear.
This is a form of phonophobia.
Signs and symptoms
Indications that someone suffers from Globophobia include:
Feelings of intense fear and anxiety from balloons
A fear of balloons that lasts a minimum of six months
Engaging in avoidance behavior when in the presence of balloons
A fear of balloons that interferes with day-to-day life
Globophobia has numerous symptoms, and most of them overlap with anxiety. Some symptoms of globophobia are:
Rapid or shallow breathing
Palpitations
Shaking, trembling, sweating, and chills
Gastrointestinal distress, including nausea, vomiting, or stomach pain
Feeling dizzy or light-headed
Difficulties swallowing or feeling like something is stuck in one's throat
A prickling sensation, similar to pins and needles
A dry or sticky mouth
Feeling confused or disoriented
Muscle tension
Unusual or severe headaches
Unusual flushing or paleness, particularly in one's face
Feeling extremely hot or cold
Fatigue or tiredness
A lack of appetite
Insomnia
Causes
Globophobia can be the result of a negative or traumatic experience with balloons, negative depictions of balloons, or a traumatic event somehow connected to balloons. For example, a loud noise could sound similar to a balloon popping. These negative experiences usually occur during childhood, and globophobia is most prevalent among young children.
Other factors that can increase the likelihood of someone developing Globophobia include:
Having a sensory processing disorder, like autism
Having another related phobia, such as phonophobia or coulrophobia
Having a history of anxiety, depression, or panic attacks
Being a naturally more anxious or fearful person
Having heightened stress levels
Treatment
Response prevention therapy
Response prevention is a type of exposure therapy. When dealing with patients with globophobia, a doctor roughly handles a barely inflated balloon in the presence of a patient. The patient will eventually hold the balloon themself to understand that it is not full enough to pop. The balloon will then gradually become more inflated, and once it is filled enough to pop, squeaky noises should be intentionally produced by the balloon. The patients are expected to be frightened by this action, so they should stand a great distance from the balloon and gradually move closer once they feel more comfortable. The same process of patients moving closer to the balloon should be followed except the balloon will pop this time. This practice aims to assure people with globophobia that the noises balloons make are not harmful. Patients are expected to not be as tense and apprehensive around balloons and the sounds they produce following exposure therapy.
In vivo flooding
This form of exposure therapy was performed on a college-aged student with globophobia. Before the experiment, the unnamed male reports that he tries to avoid balloons at all costs due to the great amount of distress they place on him. He claims that he cannot be any less than four feet away from a balloon without feeling intense fear. The experiment is conducted over the course of three days and involves the subject being surrounded by hundreds of balloons that are simultaneously popping. The researchers found no clear signs of emotional distress in the man but noted him attempting to avoid the popping balloons. Following the experiment's conclusion, the subject states that he does not attempt to avoid situations that may involve balloons anymore. He has also reported that no additional balloon-related problems have interfered with his daily life.
Cognitive behavioral therapy
Cognitive behavioral therapy or CBT is a common practice used to treat phobias. It works "by deconstructing negative thought patterns surrounding balloons into smaller parts which will be focused on one at a time".
Clinical hypnotherapy
Hypnotherapy involves relaxation techniques that assist in reducing stress, fear, and anxiety responses. The objective of hypnotherapy sessions is to alter negative thoughts and memories surrounding balloons to generate a less fearful perception on them.
Neuro linguistic memory manipulations
Neuro linguistic memory manipulations or NLP manipulations entail "seeing yourself and your fears as if you are a third party" to detach yourself from the fear and to minimize the severity of distress balloons might produce.
Medication
Potential medications to use to treat globophobia include beta blockers, selective serotonin reuptake inhibitors (SSRIs), sedatives, and anti-anxiety relievers.
Diagnosis
The Diagnostic and Statistical Manual, 5th edition (DSM-5) does not include every single phobia, so globophobia is not mentioned. Mental health professionals can instead diagnose patients with a "specific phobia", an umbrella term that describes any phobia of a specific object or situation, such as globophobia.
Notable cases
Oprah Winfrey, American talk show host
References
Phobias
Balloons | Balloon phobia | Chemistry | 1,055 |
35,539,273 | https://en.wikipedia.org/wiki/Robenacoxib | Robenacoxib, sold under the brand name Onsior, is a nonsteroidal anti-inflammatory drug (NSAID) used in veterinary medicine for the relief of pain and inflammation in cats and dogs. It is a COX-2 inhibitor (coxib).
References
COX-2 inhibitors
Nonsteroidal anti-inflammatory drugs
Fluoroarenes
Anilines
Carboxylic acids
Veterinary drugs | Robenacoxib | Chemistry | 85 |
73,780,505 | https://en.wikipedia.org/wiki/Paurocotylis%20pila | Paurocotylis pila, commonly known as the scarlet berry truffle, is an ascomycete fungus in the genus Paurocotylis. It was first described by Miles Joseph Berkeley in 1855.
This species is native to New Zealand and Australia and is naturalized in the United Kingdom. It often appears in forests under podocarp trees such as totara; however, it also occurs in gardens, forest tracks, and parks.
Taxonomy
First described in 1855 by Miles Joseph Berkeley in Joseph Dalton Hooker's The Botany of the Antarctic Voyage II, Flora Novae-Zealandiae, the type specimen was found 'on the ground' and was collected by William Colenso in Te Hāwera, South Taranaki in the North Island of New Zealand.
Paurocotylis pila is the only species from the genus Paurocotylis found in New Zealand.
Etymology
Greek, pauro means few and cotylis means cavity, possibly referring to the observed interior of the type specimen. Latin, pila means sphere, presumably referring to the shape of the fruit body.
Description
This truffle-like fungus produces a spherical to tuber-shaped fruit body (ascoma) with a smooth surface, which can be lobed or wrinkled. Paurocotylis pila's fruiting body is ball shaped, with a thin, matte red-orange outer rind, and has no stalk. Often the rind is creased, but occasionally it is smooth. Varying in size, it ranges from 10–30 mm across, and is found half buried in soil, or under leaf litter. The fruit body is made of yellow-brown tissue, with multiple hollow chambers. Inside the chambers, the asci break up to leave round, cream or yellow ascospores. Once collected and dried, the rind's colour changes to a dull red-brown. P. pila fruit bodies vary in diameter, although some in the UK are up to 60 mm. The fruit body does not have a stipe. There is no odour noted and it is regarded as non-edible.
Range
DNA barcode (internal transcribed spacer) sequences in the National Center for Biotechnology Information database indicate a distribution in New Zealand, Australia and the United Kingdom.
Natural global range
This species is native to New Zealand; however, it has been introduced to England, where it has spread to Nottingham, Yorkshire, Sheffield, and elsewhere. Paurocotylis pila is also native to Tasmania, and has been found in Australia.
New Zealand range
Paurocotylis pila is found all across New Zealand; often appearing in forests under podocarp trees such as totara. However, it also occurs in gardens, forest tracks, and parks.
Habitat
This species is found in leaf litter and soil in forests, parks and gardens. Paurocotylis pila prefers disturbed forests, and is often found in soil near tracks. It has even been found in abandoned gravel pits. In England, it has been found fruiting in garden soil. Paurocotylis pila has been found near tracks in forest parks, under Podocarpus. Disturbed soil may make the fruit bodies easier to spot, or they may simply be recorded more often in such areas because that is where observers are. It is thought that, due to their berry-like shape and striking colour, birds play a role in their dispersal.
Experts have suggested that some members of this genus and related genera of fungi may change between being saprobic and endophytic throughout their lives. This is unlikely for this species since it is found under various tree species.
Ecology
Life cycle/Phenology
Paurocotylis pila is a saprobic species that grows underground. The fruiting bodies emerge after warm rain, mainly in autumn. After emerging from underground, Paurocotylis pila often remains partially covered by soil or leaf litter. From there, it is presumed to be dispersed by ground-foraging birds looking for fallen fruit. Fruiting in autumn, Paurocotylis pila coincides with podocarp trees fruiting in the forest. As its colour resembles the fruit, it attracts birds. Bird dispersal has likely assisted its spread throughout England, with specimens found there showing damage from birds pecking.
Predators, Parasites, and Diseases
Birds eat this species, which likely aids in its dispersal. Supporting evidence for bird dispersal is peck marks, often seen on Paurocotylis pila. It is unknown if any other predators, diseases, or parasites live on this species. Evidence of Ascomycota fungi being eaten by moa was found in moa coprolites. This shows that this species may have been eaten and dispersed by moa, but it is unknown which bird species are continuing to spread it today. Given that the species is spreading in the UK, some introduced birds may be spreading it alongside native species.
References
Fungi described in 1855
Pyronemataceae
Fungus species
Fungi of New Zealand
Fungi of Australia
Fungi of the United Kingdom
Inedible fungi | Paurocotylis pila | Biology | 1,047 |
45,596,218 | https://en.wikipedia.org/wiki/Penicillium%20glycyrrhizacola | Penicillium glycyrrhizacola is a species of fungus in the genus Penicillium.
References
glycyrrhizacola
Fungi described in 2013
Fungus species | Penicillium glycyrrhizacola | Biology | 39 |
75,982 | https://en.wikipedia.org/wiki/Defecation | Defecation (or defaecation) follows digestion, and is a necessary process by which organisms eliminate a solid, semisolid, or liquid waste material known as feces from the digestive tract via the anus or cloaca. The act has a variety of names ranging from the common, like pooping or crapping, to the technical, e.g. bowel movement, to the obscene (shitting), to the euphemistic ("doing number two", "dropping a deuce" or "taking a dump"), to the juvenile ("making doo-doo"). The topic, usually avoided in polite company, can become the basis for some potty humor.
Humans expel feces with a frequency varying from a few times daily to a few times weekly. Waves of muscular contraction (known as peristalsis) in the walls of the colon move fecal matter through the digestive tract towards the rectum. Undigested food may also be expelled within the feces, in a process called egestion. When birds defecate, they also expel urine and urates in the same mass, whereas other animals may also urinate at the same time, but spatially separated. Defecation may also accompany childbirth and death. Babies defecate a unique substance called meconium prior to eating external foods.
There are a number of medical conditions associated with defecation, such as diarrhea and constipation, some of which can be serious. The feces expelled can carry diseases, most often through the contamination of food. E. coli is a particular concern.
Before toilet training, human feces are most often collected into a diaper. Thereafter, in many societies people commonly defecate into a toilet. However, open defecation, the practice of defecating outside without using a toilet of any kind, is still widespread in some developing countries. Some people defecate into the ocean. First world countries use sewage treatment plants and/or on-site treatment.
Description
Physiology
The rectal ampulla stores fecal waste (also called stool) before it is excreted. As the waste fills the rectum and expands the rectal walls, stretch receptors in the rectal walls stimulate the desire to defecate. This urge to defecate arises from the reflex contraction of rectal muscles, relaxation of the internal anal sphincter, and an initial contraction of the skeletal muscle of the external anal sphincter. If the urge is not acted upon, the material in the rectum is often returned to the colon by reverse peristalsis, where more water is absorbed and the feces are stored until the next mass peristaltic movement of the transverse and descending colon.
When the rectum is full, an increase in pressure within the rectum forces apart the walls of the anal canal, allowing the fecal matter to enter the canal. The rectum shortens as material is forced into the anal canal and peristaltic waves push the feces out of the rectum. The internal and external anal sphincters along with the puborectalis muscle allow the feces to be passed by muscles pulling the anus up over the exiting feces.
Voluntary and involuntary control
The external anal sphincter is under voluntary control whereas the internal anal sphincter is involuntary. In infants, the defecation occurs by reflex action without the voluntary control of the external anal sphincter. Defecation is voluntary in adults. Young children learn voluntary control through the process of toilet training. Once trained, loss of control, called fecal incontinence, may be caused by physical injury, nerve injury, prior surgeries (such as an episiotomy), constipation, diarrhea, loss of storage capacity in the rectum, intense fright, inflammatory bowel disease, psychological or neurological factors, childbirth, or death.
Sometimes, due to the inability to control one's bowel movement or due to excessive fear, defecation (usually accompanied by urination) occurs involuntarily, soiling a person's undergarments. This may cause significant embarrassment to the person if this occurs in the presence of other people or a public place.
Posture
The positions and modalities of defecation are culture-dependent. Squat toilets are used by the vast majority of the world, including most of Africa, Asia, and the Middle East. The use of sit-down toilets in the Western world is a relatively recent development, beginning in the 19th century with the advent of indoor plumbing.
Disease
Regular bowel movements reflect the functionality and health of the alimentary tract in the human body. Defecation is the most common regular bowel movement and eliminates waste from the human body. The frequency of defecation is hard to pin down, as it can vary from daily to weekly depending on individual bowel habits, environmental influences, and genetics. If defecation is delayed for a prolonged period the fecal matter may harden, resulting in constipation. If defecation occurs too fast, before excess liquid is absorbed, diarrhea may occur. Other associated symptoms can include abdominal bloating, abdominal pain, and abdominal distention. Disorders of the bowel can seriously impact quality of life and daily activities. The causes of functional bowel disorder are multifactorial, and dietary habits such as food intolerance and a low fiber diet are considered to be the primary factors.
Constipation
Constipation, also known as defecatory dysfunction, is difficulty experienced when passing stools. It is one of the most notable alimentary disorders, affecting different age groups in the population. Common constipation is associated with abdominal distention, pain or bloating. Research has revealed that chronic constipation is associated with a higher risk of cardiovascular events such as coronary heart disease and ischemic stroke, as well as an increased risk of mortality. Besides dietary factors, psychological traumas and pelvic floor disorders can also cause chronic constipation and defecatory disorder respectively. Multiple interventions, including physical activity, a high-fibre diet, probiotics and drug therapies, can be widely and efficiently used to treat constipation and defecatory disorder.
Inflammatory bowel diseases
Inflammatory bowel disease is characterized as long-lasting, chronic inflammation throughout the gastrointestinal tract. Crohn's disease (CD) and ulcerative colitis (UC) are two widespread types of inflammatory bowel disease that have been studied for over a century. They are closely related to different environmental risk factors, family genetics, and lifestyle choices such as smoking. Crohn's disease in particular has been found to be related to immune disorders. Different levels of cumulative intestinal injury can cause different complications, such as fistulae, damage to bowel function, symptom recurrence, and disability. Patients can be children or adults. Recent research shows that immunodeficiency and monogenic disorders are the causes in young patients with inflammatory bowel diseases.
Common symptoms of inflammatory bowel diseases differ with the level of inflammation, but may include severe abdominal pain, diarrhea, fatigue, and unexpected weight loss. Crohn's disease can lead to inflammation of any part of the digestive tract, from the ileum to the anus. Internal manifestations include diarrhea, abdominal pain, fever, and chronic anaemia. External manifestations include effects on the skin, joints, eyes, and liver. Significantly reduced microbiota diversity inside the gastrointestinal tract can also be observed. Ulcerative colitis mainly affects the function of the large bowel, and its incidence rate is three times greater than that of Crohn's disease. In terms of clinical features, over 90 percent of patients exhibit constant diarrhea, rectal bleeding, softer stool, mucus in the stool, tenesmus, and abdominal pain. The symptoms may continue for around 6 weeks or even longer.
Inflammatory bowel diseases can be effectively treated with pharmacotherapies that relieve and control symptoms, reflected in mucosal healing and the elimination of symptoms. However, an optimal therapy for curing both inflammatory diseases is still under research due to the heterogeneity in clinical features. Although UC and CD share similar symptoms, their medical treatments are distinctly different. Dietary treatment can benefit CD by increasing dietary zinc and fish intake, which is related to mucosal healing of the bowel. Treatments vary from drug treatment to surgery based on the activity level of the CD. UC can also be relieved by using immunosuppressive therapy for mild to moderate disease and the application of biological agents for severe cases.
Irritable bowel syndrome
Irritable bowel syndrome is diagnosed as an intestinal disorder with chronic abdominal pain and inconsistency in stool form, and is a common bowel disease that can be readily diagnosed in modern society. The variation in incidence rate can be explained by different diagnostic criteria in different countries, with the 18–34 age group recognized as having the highest incidence. The definite cause of irritable bowel syndrome remains a mystery; however, it has been found to relate to multiple factors, such as alterations in mood and stress, sleep disorders, food triggers, changes in dysbiosis and even sexual dysfunction. One third of irritable bowel syndrome patients have a family history of the disease, suggesting that genetic predisposition could be a significant cause of irritable bowel syndrome.
Patients with irritable bowel syndrome commonly experience abdominal pain, changes to stool form, recurrent abdominal bloating and gas, co-morbid disorders and alterations in bowel habits that cause diarrhea or constipation. Anxiety and tension can also be detected, even though patients with irritable bowel syndrome may appear healthy. Apart from these typical symptoms, rectal bleeding, unexpected weight loss and increased inflammatory markers require further medical examination and investigation.
Treatment for irritable bowel syndrome is multimodal. Dietary intervention and pharmacotherapies can both relieve the symptoms to a certain degree. Avoiding allergenic food groups can be beneficial by reducing fermentation in the digestive tract and gas production, hence effectively alleviating abdominal pain and bloating. Drug interventions, such as laxatives, loperamide, and lubiprostone, are applied to relieve intense symptoms including diarrhea, abdominal pain and constipation. Psychological treatment, dietary supplements and gut-focused hypnotherapy are recommended for targeting depression, mood disorders and sleep disturbance.
Bowel obstruction
Bowel obstruction is a blockage that can occur in both the small intestine and the large intestine. Increased contractions can relieve blockages; however, continuous contractions with decreasing functionality may end the motility of the small intestine, which then forms the obstruction. At the same time, the lack of contractility encourages the accumulation of liquid and gas, and electrolyte disturbances. Small bowel obstruction can result in severe renal damage and hypovolemia, while evolving into mucosal ischemia and perforation. Patients with small bowel obstruction were found to experience constipation, strangulation, abdominal pain and vomiting. Surgical intervention is primarily used to treat severe small bowel obstruction. Nonoperative therapy, including nasogastric tube decompression, water-soluble contrast medium, or symptomatic management, can be applied to treat less severe symptoms.
According to research, large bowel obstruction is less common than small bowel obstruction, but is still associated with a high mortality rate. Large bowel obstruction, also known as colonic obstruction, includes acute colonic obstruction, where a blockage is formed in the colon. Colonic obstructions frequently occur in the elderly population, often accompanied by significant comorbidities. Although colonic malignancy is the major cause of colonic obstruction, volvulus has also been found to be a common secondary cause around the world. In addition, lower mobility, poor mental health and restricted living environments are also listed as risk factors. Surgery and colonic stent placement are widely applied for treating colonic obstructions.
Other
Attempting forced expiration of breath against a closed airway (the Valsalva maneuver) is sometimes practiced to induce defecation while on a toilet. This contraction of expiratory chest muscles, diaphragm, abdominal wall muscles, and pelvic diaphragm exerts pressure on the digestive tract. Ventilation at this point temporarily ceases as the lungs push the chest diaphragm down to exert the pressure. Cardiac arrest and other cardiovascular complications can in rare cases occur due to attempting to defecate using the Valsalva maneuver. Valsalva retinopathy is another pathological syndrome associated with the Valsalva maneuver. Thoracic blood pressure rises and as a reflex response the amount of blood pumped by the heart decreases. Death has been known to occur in cases where defecation causes the blood pressure to rise enough to cause the rupture of an aneurysm or to dislodge blood clots (see thrombosis). Also, in releasing the Valsalva maneuver blood pressure falls; this, coupled with standing up quickly to leave the toilet, can result in a blackout.
Society and culture
Open defecation
Open defecation is the human practice of defecating outside (in the open environment) rather than into a toilet. People may choose fields, bushes, forests, ditches, streets, canals or other open space for defecation. They do so because either they do not have a toilet readily accessible or due to traditional cultural practices. The practice is common where sanitation infrastructure and services are not available. Even if toilets are available, behavior change efforts may still be needed to promote the use of toilets.
Open defecation can pollute the environment and cause health problems. High levels of open defecation are linked to high child mortality, poor nutrition, poverty, and large disparities between rich and poor.
Ending open defecation is an indicator being used to measure progress towards the Sustainable Development Goal Number 6. Extreme poverty and lack of sanitation are statistically linked. Therefore, eliminating open defecation is thought to be an important part of the effort to eliminate poverty.
Anal cleansing after defecation
The anus and buttocks may be cleansed after defecation with toilet paper, similar paper products, or other absorbent material. In many cultures, such as Hindu and Muslim, water is used for anal cleansing after defecation, either in addition to using toilet paper or exclusively. When water is used for anal cleansing after defecation, toilet paper may be used for drying the area afterwards. Some doctors and people who work in the science and hygiene fields have stated that switching to using a bidet as a form of anal cleansing after defecation is both more hygienic and more environmentally friendly.
Mythology and tradition
Some peoples have culturally significant stories in which defecation plays a role. For example:
In an Alune and Wemale legend from the island of Seram, Maluku Province, Indonesia, the mythical girl Hainuwele defecates valuable objects.
One of the traditions of Catalonia (Spain) relates to the caganer, a figurine depicting the act of defecation which appears in nativity scenes in Catalonia and neighbouring areas with Catalan culture. The exact origin of the caganer is lost, but the tradition has existed since at least the 18th century.
Psychology
Some aspects of psychology surround the act of defecation. There is an inherent desire for privacy among humans. Freud stipulated a second stage of development, the Anal Stage, which centers around the release of waste from the bladder and bowels. He categorized two types: anal retentive and anal expulsive.
See also
Artist's Shit
Ecological sanitation
Hemorrhoid
Human waste
Improved sanitation
Rectal tenesmus - a feeling of incomplete defecation
Reuse of human excreta
Shit
Sustainable sanitation
Urination
References
Further reading
Eric P. Widmaier; Hershel Raff; Kevin T. Strang (2006). Vander's Human Physiology: The Mechanisms of Body Function. Chapter 15. 10th ed. McGraw Hill.
Excretion
Digestive system
Medical signs
Feces
Symptoms and signs: Digestive system and abdomen | Defecation | Biology | 3,438 |
8,197,423 | https://en.wikipedia.org/wiki/Waitomo%20Glowworm%20Caves | The Waitomo Glowworm Caves attraction is a cave at Waitomo on the North Island of New Zealand. It is known for its population of Arachnocampa luminosa, a glowworm species found exclusively in New Zealand. This cave is part of the Waitomo streamway system that includes the Ruakuri Cave, Lucky Strike, and Tumutumu Cave.
The attraction has a modern visitor centre at the entrance, largely designed in wood. There are organized tours that include a boat ride under the glowworms.
History
The name "Waitomo" comes from the Māori words wai, water and tomo, hole or shaft. The local Māori people had known about the caves for about a century before a local Māori, originally from Kawhia, Tane Tinorau, and English surveyors, Laurence Cussen and Fred Mace, were shown the entrance in 1884 and Tane and Fred did extensive explorations in 1887 and 1888. Their exploration was conducted with candlelight on a raft going into the cave where the stream goes underground (now the cave's tourist exit.) As they began their journey, they came across the Glowworm Grotto and were amazed by the twinkling glow coming from the ceiling. As they travelled further into the cave by poling themselves towards an embankment, they were also astounded by the limestone formations. These formations surrounded them in all shapes and sizes.
They returned many times after, and Chief Tane independently discovered the upper level entrance to the cave, which is now the current entrance. Visitor access improved when the railway was extended to Ōtorohanga in 1887. By 1889 Tane Tinorau and his wife Huti had opened the cave to visitors and were leading groups for a small fee. Thomas Humphries (Commissioner of Crown Lands and Chief Surveyor of Auckland 1889–1891) did a full study later the same year, noting that graffiti had already been inscribed on the 'most delicate portions' of the cave, though observing "The natives are now taking great care of the caves", but recommending that the government take over the cave to provide more visitor facilities. About 500 tourists visited the cave in the first two years. After years of attempts to buy the caves, the government used the Scenery Preservation Act 1903 and the Public Works Act 1905 to take them over for £625. In 1906, after an escalation in vandalism, the administration of the cave was taken over by the government. In 1910 the Waitomo Caves Hotel was built to house the many visitors.
Tourist Hotel Corporation, a state-owned business, took over in 1957. The hotel was sold to Southern Pacific Hotels Corporation in 1990 and, in 1994, they agreed a licence for the caves with DOC and the Māori owners, selling it to Tourism Holdings Limited in 1996.
In 1989, the land and cave were returned to the descendants of Chief Tane Tinorau and Huti who comprise many of the employees of the caves today. The descendants receive a percentage of the cave’s revenue and are involved in its management and development under the 1990 Waitomo Deed of Settlement.
Geology
Geological and volcanic activity has created around 300 known limestone caves in the Waitomo region over the last 30 million years.
The limestone formation in the Waitomo Glowworm Caves occurred when the region was still under the ocean about 30 million years ago. The limestone is composed of fossilized corals, seashells, fish skeletons, and many small marine organisms on the sea beds. Over millions of years, these fossilized rocks have been layered upon each other and compressed to create limestone and within the Waitomo region the limestone can be over 200 m thick.
The caves began to form when earth movement caused the hard limestone to bend and buckle under the ocean and rise above the sea floor. As the rock was exposed to air, it separated and created cracks and weaknesses that allowed for water to flow through them dissolving the limestone and over millions of years large caves were formed.
The stalactites, stalagmites, and other cave formations grew from water dripping from the ceiling or flowing over the walls and leaving behind limestone deposits. The stalagmites form upward from the floor while the stalactites form downward from the ceiling. When these formations connect they are called pillars or columns, and if they twist around each other they are called helictites. These cave decorations take millions of years to form, given that the average stalactite grows one cubic centimetre every 100 years.
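At the growth rate quoted above, a formation's age scales directly with its volume; a quick illustrative calculation (the example volume is made up):

```python
# Average growth rate quoted above: about 1 cubic centimetre per 100 years.
GROWTH_CM3_PER_YEAR = 1 / 100

def years_to_form(volume_cm3):
    return volume_cm3 / GROWTH_CM3_PER_YEAR

print(years_to_form(500))   # a modest 500 cm^3 formation -> 50,000 years
```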
Biology
The most common animals in the caves are insects. This includes albino cave ants and giant crickets but the most renowned is the glowworm Arachnocampa luminosa. The adults are around the size of an average mosquito. However, there are several small underground lakes that were created by freshwater creeks or brooks which are home to New Zealand longfin eels.
The walls of the caves are covered with a variety of fungi including the cave flower (a distant relation to the genus Pleurotus) that is actually a mushroom-like fungus.
Cave monitoring
The glowworms of the Waitomo Glowworm Caves are closely guarded by a Scientific Advisory Group. This group has automated equipment that continually monitors the air quality, especially the carbon dioxide levels, as well as rock and air temperature, and humidity. Data from this equipment is carefully analyzed by specialist staff. The advisory group uses the information to establish how the cave should be managed. They determine if and when air flow patterns should be changed and how many people are allowed to visit the caves each day.
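A heavily simplified sketch of what automated threshold monitoring of this kind could look like; the quantities and threshold values below are entirely hypothetical and are not the advisory group's actual criteria.

```python
# Hypothetical limits for illustration only -- not the cave's real management thresholds.
THRESHOLDS = {"co2_ppm": 2400, "air_temp_c": 18.0, "humidity_pct": 100.0}

def exceedances(reading):
    """Return the monitored quantities that exceed their (hypothetical) limits."""
    return [k for k, limit in THRESHOLDS.items() if reading.get(k, 0) > limit]

reading = {"co2_ppm": 2600, "air_temp_c": 15.2, "humidity_pct": 96.5}
print(exceedances(reading))   # ['co2_ppm'] -> e.g. adjust airflow or visitor numbers
```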
Guided tours
The guided tour through the Waitomo Glowworm Caves brings the visitor through three different levels and begins at the top level of the cave and the Catacombs. The levels are linked by the Tomo, which is a 16 m vertical shaft made of limestone. The second level is called the Banquet Chamber. This level is where early visitors stopped to eat and there is evidence of this in the smoke on the ceiling of the chamber. From here it may be possible to link back to the upper level to see the largest formation called the Pipe Organ but on busy days this area is closed to the public because the build-up of carbon dioxide may be hazardous.
The third and final level goes down into the Cathedral, demonstration platform, and the jetty. The Cathedral is an enclosed area with rough surfaces, now paved, and is about 18 m high, giving it good acoustics. A number of famous singers and choirs have performed here including Dame Kiri Te Kanawa.
The tour concludes with a boat ride through the Glowworm Grotto. The boat takes the visitor onto the underground Waitomo River where the only light comes from the tiny glowworms creating a sky of living lights.
Location
The Waitomo Glowworm Caves are located in the Northern King Country region of the North Island of New Zealand, 12 km northwest of Te Kūiti. This cave is about 2 hours south of Auckland, 1 hour south of Hamilton, and 2 hours west of Rotorua by car. The directions to the Caves are to exit State Highway 3 onto Waitomo Caves Road and to continue on the road for about 8 km.
See also
Waitomo Caves
References
External links
Waitomo Caves Discovery Centre
1889 visitor description
1889 establishments in New Zealand
Caves of Waikato
Limestone caves
Show caves in New Zealand
Waitomo District
Places with bioluminescence
Tourist attractions in Waikato | Waitomo Glowworm Caves | Chemistry,Biology | 1,487 |
11,460,108 | https://en.wikipedia.org/wiki/Cochliobolus%20lunatus | Cochliobolus lunatus is a fungal plant pathogen that can cause disease in humans and other animals. The anamorph of this fungus is known as Curvularia lunata, while C. lunatus denotes the teleomorph or sexual stage. They are, however, the same biological entity. C. lunatus is the most commonly reported species in clinical cases of reported Cochliobolus infection.
Morphology
Macroscopic features of C. lunatus include a brown to black colour; a hairy, velvety or woolly texture; and loosely arranged, rapidly growing colonies on potato dextrose agar medium. Microscopically, the conidiophores are septate. There is great variety in the arrangement of the conidiophores, as they can be isolated or in groups, straight or bent, show a simple or geniculate growth pattern, and vary in colour from pale to dark brown. Conidiophores can reach 650 μm in length and are often 5-9 μm wide, with swollen bases ranging from 10-15 μm in diameter. Conidia develop at the tips and sides of the conidiophores and have a smooth texture. C. lunatus is differentiated from other Cochliobolus species by its conidia having 3 septa and 4 cells, with the first and last cell usually a paler shade of brown than those in the middle. Conidia range from 9-15 μm in diameter and have a curved appearance.
Phylogeny
The order Pleosporales includes many plant pathogens of economic importance. C. lunatus belongs to Clade-II in the family Pleosporaceae, which is the largest family in its order. The MAPK gene in C. lunatus is homologous to MAPK genes of other fungal pathogens, such as Chk1, which are highly conserved in eukaryotic lineages. There are over 80 species in the genus.
Ecology
Cochliobolus lunatus has a widespread distribution, though it is especially prevalent in the tropics and subtropics. Infection is caused by airborne conidia and ascospores, however, sclerotioid C. lunatus can also survive in the soil. The optimal temperature for in vitro growth and infection ranges from while death results from exposure at for a 1 minute duration, or for a 5 minute duration. Successful plant host infection requires the host surface to be wet for 13 hours. The majority of clinical cases have been reported in India, the United States, Brazil, Japan and Australia.
Pathogenicity and therapy
Plant diseases
Cochliobolus lunatus is best known as the causative agent of seedling blight and seed germination failure in monocotyledon crops such as sugarcane, rice, millet and maize (corn). C. lunatus also causes leaf spot on a wide variety of angiosperm hosts, where each lesion contains a sporulating mass of fungi at its center. The Clk1 gene plays an important role in fungal growth during the infection process, specifically conidiation, which is vital to the process of foliar infection. Fungicides, in particular those with organo-mercurial compounds, have been associated with effective eradication of this pathogen.
Human diseases
Phaeohyphomycoses
Cochliobolus lunatus is one of the main causative agents of phaeohyphomycosis. Initial infection via breaks to the epidermal barrier or the inhalation of spores can lead to disseminated infections, which are often associated with a poor prognosis. C. lunatus is an opportunistic pathogen, infecting immunocompromised patients and those on rigorous steroid drug regimens such as solid organ transplant recipients, advanced AIDS patients and cancer patients. Dematiaceous fungi such as C. lunatus can facilitate foreign body infections of catheters, heart valves and pacemakers, for example.
With regards to treatment, surgical excision using a method similar to Mohs surgery is preferred if the mycosis is accessible, especially for abscesses in the brain. Administration of antifungals is commonly indicated as secondary management therapy, though the specific best regimen depends on the nature and location of the phaeohyphomycosis. When treating immunocompromised patients, it is critical that the underlying disease is controlled, and immune modulators such as granulocyte-macrophage colony-stimulating factor and gamma interferon can be indicated when surgery or antifungals are not feasible alternatives.
Allergy
Allergic fungal manifestations include asthma, rhinitis, sinusitis and bronchopulmonary mycoses caused by a variety of etiological fungal agents including C. lunatus. These agents provoke humoral immune responses, characterized by type I (immediate) and type III (immune complex mediated) hypersensitivity reactions. The prevalence of these diseases is 20-30% among the atopic population and 6% in the general population. Allergic rhinitis, more commonly known as hay fever, is encountered less frequently in the clinic than allergic fungal sinusitis. Differential diagnosis of allergic bronchopulmonary mycosis is difficult, and it is often misdiagnosed as tuberculosis, pneumonia, bronchiectasis, lung abscess or bronchial asthma.
Several serological tests can be performed to assess total IgE and allergen specific IgE and IgG: ELISA, MAST, HIA, and CAP RAST. However, more conventional allergy testing such as skin-prick tests can provide rapid results and are easy to conduct and inexpensive, though they may indicate false-positive or false-negative results. Current research has shown that there is an association between allergic fungal sinusitis and MHC II alleles, suggesting a genetic component to this chronic inflammatory respiratory tract disorder. Treatment for allergic fungal sinusitis includes post-operative corticosteroid and aggressive anti-allergic inflammatory regimen including itraconazole or amphotericin B, while treatment for bronchopulmonary mycosis usually does not include surgery.
Eye infection
Mycotic keratitis and conjunctivitis are more commonly reported in tropical climates. Environmental factors such as wind, temperature, rainfall and humidity have been found to influence the ecology of filamentous fungi. In the Gulf of Mexico for example, increased numbers of airborne spores of C. lunatus during hot, humid months has been linked to increased clinical reports of keratitis. C. lunatus commonly infects the cornea, and orbit of the eye, and infection can result from trauma, surgery or dissemination from paranasal sinuses. Endophthalmitis can result from deep fungal keratitis caused by C. lunatus, where the Descemet's membrane is penetrated and compromised.
In immunocompetent atopic individuals, 17% of those affected with allergic fungal sinusitis can develop orbital mycotic symptoms, where the fungus acts as an allergen causing allergic mucin. Pre-existing allergic fungal sinusitis, allergic conjunctivitis and use of soft contact lenses are risk factors for development of ophthalmomycosis. Typical therapy includes administration of natamycin and azoles such as itraconazole, fluconazole, posaconazole and voriconazole.
References
Fungal plant pathogens and diseases
Cochliobolus
Fungi described in 1898
Fungus species | Cochliobolus lunatus | Biology | 1,561 |
18,951,627 | https://en.wikipedia.org/wiki/Mini-ykkC%20RNA%20motif | The mini-ykkC RNA motif (later renamed Guanidine-II riboswitch) was discovered as a putative RNA structure that is conserved in bacteria. The motif consists of two conserved stem-loops whose terminal loops contain the RNA sequence ACGR, where R represents either A or G. Mini-ykkC RNAs are widespread in Pseudomonadota, but some are predicted in other phyla of bacteria. It was expected that the RNAs are cis-regulatory elements, because they are typically located upstream of protein-coding genes.
The genes that are apparently controlled by mini-ykkC RNAs bear a resemblance to the genes controlled by ykkC-yxkD leader (guanidine-I), and nine gene families are common to both. Therefore, it was proposed that these two RNA classes have the same function. The complex structure and many conserved nucleotides found in the ykkC-yxkD leader are not present in the mini-ykkC RNA motif. Despite this, it was shown that each of the mini-ykkC two stem-loop structures directly binds free guanidine. Therefore, mini-ykkC RNA motif represents a distinct class of guanidine-sensing RNAs called Guanidine-II riboswitch. Its crystal structure was also determined.
References
External links
Cis-regulatory RNA elements | Mini-ykkC RNA motif | Chemistry | 287 |
37,017,219 | https://en.wikipedia.org/wiki/Mercury%28II%29%20hydride | Mercury(II) hydride (systematically named mercurane(2) and dihydridomercury) is an inorganic compound with the chemical formula HgH2. It is both thermodynamically and kinetically unstable at ambient temperature, and as such, little is known about its bulk properties. However, it can also be obtained as a white, crystalline solid that is kinetically stable at temperatures below −125 °C, and it was synthesized in this form for the first time in 1951.
Mercury(II) hydride is the second simplest mercury hydride (after the significantly less stable mercury(I) hydride). Due to its instability, it has no practical industrial uses. However, in analytical chemistry, mercury(II) hydride is fundamental to certain forms of spectrometric techniques used to determine mercury content. In addition, it is investigated for its effect on high sensitivity isotope-ratio mass spectrometry methods that involve mercury, such as MC-ICP-MS, when used to compare thallium to mercury.
Properties
Structure
In solid mercury(II) hydride, the HgH2 molecules are connected by mercurophilic bonds. Trimers and a lesser proportion of dimers are detected in the vapour. Unlike solid zinc(II), and cadmium(II) hydride, which are network solids, solid mercury(II) hydride is a covalently bound molecular solid. This is due to relativistic effects, which also accounts for the relatively low decomposition temperature of -125 °C.
The HgH2 molecule is linear and symmetric, in the form H-Hg-H. The bond length is 1.646543 Å. The antisymmetric stretching frequency ν3 of the bond is 1912.8 cm−1 (57.34473 THz) for the isotopes 202Hg and 1H. The energy needed to break the first Hg-H bond in HgH2 is 70 kcal/mol. The second bond, in the resulting HgH, is much weaker, needing only 8.6 kcal/mol to break. Forming H2 from two hydrogen atoms releases 103.3 kcal/mol, and so HgH2 formation from molecular hydrogen and Hg gas is endothermic by about 24.2 kcal/mol.
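These figures imply a simple energy bookkeeping for the overall gas-phase formation (a rough sum using only the bond energies quoted above, which is why it differs slightly from the cited 24.2 kcal/mol value):
ΔH(Hg + H2 → HgH2) ≈ 103.3 − (70 + 8.6) ≈ +24.7 kcal/mol
In other words, the H-H bond that must be broken costs more energy than the two relatively weak Hg-H bonds return, which is why formation from Hg and H2 is uphill.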
Biochemistry
Alireza Shayesteh et al. conjectured that bacteria containing the flavoprotein mercuric reductase, such as Escherichia coli, could in theory reduce soluble mercury compounds to volatile HgH2, which should have a transient existence in nature.
Production
Mercury(II) chloride reduction
Mercury(II) hydride may be prepared by the reduction of mercury(II) chloride. In this process, mercury(II) chloride and a hydride salt equivalent react to produce mercury(II) hydride according to the following equations, which depend on the stoichiometry of the reaction:
2 HgCl2 + LiAlH4 → 2 HgH2 + LiAlCl4
HgCl2 + 2 LiAlH4 → HgH2 + 2 LiAlH3Cl
Variations of this method exist in which mercury(II) chloride is replaced by its heavier halide homologues.
Direct synthesis
Mercury(II) hydride can also be generated by direct synthesis from the elements in the gas phase or in cryogenic inert gas matrices:
Hg → Hg*
Hg* + H2 → [HgH2]*
[HgH2]* → HgH2
This requires excitation of the mercury atom to the 1P or 3P state, as ground-state atomic mercury does not insert into the dihydrogen bond. Excitation is accomplished by means of an ultraviolet laser or an electric discharge. The initial yield is high; however, because the product is formed in an excited state, a significant amount dissociates rapidly into mercury(I) hydride and then back into the initial reagents:
2 [HgH2]* → 2 HgH + H2
2 HgH → Hg2H2
Hg2H2 → 2 Hg + H2
This is the preferred method for matrix isolation research. Besides mercury(II) hydride, it also produces other mercury hydrides in lesser quantities, such as the mercury(I) hydrides (HgH and Hg2H2).
Reactions
Upon treatment with a Lewis base, mercury(II) hydride converts to an adduct. Upon treatment with a standard acid, mercury(II) hydride and its adducts convert either to a mercury salt or to a mercuran(2)yl derivative and elemental hydrogen. Oxidation of mercury(II) hydride gives elemental mercury. Unless cooled below −125 °C, mercury(II) hydride decomposes to produce elemental mercury and hydrogen:
HgH2 → Hg + H2
History
Mercury(II) hydride was successfully synthesized and identified in 1951 by Egon Wiberg and Walter Henle, by the reaction of mercury(II) iodide and lithium tetrahydroaluminate in a mixture of petroleum ether and tetrahydrofuran. In 1993, Legay-Sommaire announced HgH2 production in cryogenic argon and krypton matrices with a KrF laser. In 2004, solid HgH2 was definitively synthesized and subsequently analysed by Xuefeng Wang and Lester Andrews, by the direct matrix-isolation reaction of excited mercury with molecular hydrogen.
In 2005, gaseous HgH2 was synthesized by Alireza Shayesteh et al. by the direct gas-phase reaction of excited mercury with molecular hydrogen at standard temperature, and Xuefeng Wang and Lester Andrews determined solid HgH2 to be a molecular solid.
References
Mercury(II) compounds
Metal hydrides | Mercury(II) hydride | Chemistry | 1,164 |
78,357,360 | https://en.wikipedia.org/wiki/HD%209289 | HD 9289 is a white-hued variable star in the constellation of Cetus. It has the variable-star designation BW Ceti (abbreviated to BW Cet). With an apparent magnitude of 9.38, it is too faint to be observed by the naked eye from Earth. It is located at a distance of approximately according to Gaia EDR3 parallax measurements, and is moving away from the Solar System at a heliocentric radial velocity of 11.352 km/s.
Stellar properties
HD 9289 is an A-type main-sequence star with the spectral type A3 SrEuCr. The suffix indicates that the star shows strong spectral lines of strontium, europium, and chromium, characteristic of an Ap star. The star radiates roughly 8.7 times the luminosity of the Sun from its photosphere. It possesses a magnetic field with a strength of 2.0 kG, which is 3,000–9,000 times stronger than Earth's magnetic field (0.22–0.67 G).
The star was first classified as a rapidly oscillating Ap star (roAp) in 1993 by Kurtz et al. when it was found to pulsate at multiple periods, all clustered around 10.5 minutes (1585.06 μHz). Additional observations confirmed the presence of rotational amplitude modulation, similar to that of the well-studied roAp star HR 1217. In 2011, a new set of pulsation frequencies were discovered, the strongest of them being at 1585.936 μHz with an amplitude of 0.63 mmag. Few of them were consistent with the initial reports, however, which was explained by the fact that the measurements by Kurtz et al. were affected by aliasing that caused misidentifications, though an innate shift in the star's pulsation behavior could not be ruled out.
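The period and frequency values quoted above are two expressions of the same quantity, related by P = 1/ν; as a check (a simple unit conversion, not an additional measurement):
P = 1/ν = 1/(1585.936 × 10−6 Hz) ≈ 630.5 s ≈ 10.5 minutes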
In 2012, the rotational period of HD 9289 was constrained to through differential photometry observations. This was revised slightly upward in 2021 to .
Binary companion
In 2012, a previously undetected visual companion was discovered at a separation of 0.441 arcseconds to the east-northeast of HD 9289. The probability that the two stars are unrelated and aligned by chance is very low (2.16%), so the pair is almost certainly a wide binary system. The secondary star is about 1.70 magnitudes fainter than the primary when observed in the K band.
References
Rapidly oscillating Ap stars
A-type main-sequence stars
009289
BD−11 00286
Cetus
J01311648-1107078
Ceti, BW | HD 9289 | Astronomy | 569 |
1,697,920 | https://en.wikipedia.org/wiki/Astronomy%20Day | Astronomy Day is an annual event in various countries, intended to provide a means of interaction between the general public and various astronomy enthusiasts, groups and professionals.
History
This event was started in 1973 by Doug Berger, the president of the Astronomical Association of Northern California. His intent was to set up various telescopes in busy urban locations so that passersby could enjoy views of the heavens. Since then the event has expanded and is now sponsored by a number of organizations associated with astronomy.
Originally, Astronomy Day occurred on a Saturday between mid-April and mid-May, and was scheduled so as to occur at or close to the first quarter Moon. In 2007, an autumn Astronomy Day was added. It was scheduled to occur on a Saturday between mid-September and mid-October so as to be on or close to the first quarter Moon.
Future events
The lunar influence on the schedule means that the events fall on different dates each year, rather than on set calendar dates.
Past events
The Astronomical League canceled the in-person event in 2020 due to the COVID-19 pandemic. Some organizations, such as the Lowell Observatory, hosted virtual events to continue the tradition.
See also
Events
Earth Hour
Earth Day/Earth Week
Earth Week
100 Hours of Astronomy (100HA)
National Dark-Sky Week (NDSW)
National Astronomy Week (NAW)
World Space Week (WSW)
White House Astronomy Night
References
External links
Astronomical League information page on Astronomy Day
Royal Astronomical Society of Canada - Astronomy Day
Sky And Telescope
A Csillagászat Napja 2018-ban (Astronomy Day in 2018, in Hungarian)
Astronomy education events
April observances
May observances
Unofficial observances
September observances
October observances
Observances about science
Observances held on the first quarter moon | Astronomy Day | Astronomy | 367 |
2,783,744 | https://en.wikipedia.org/wiki/Randall%20C.%20Kennedy | Randall C. Kennedy is director of research and cofounder of Competitive Systems Analysis, an IT consulting company. He is a former systems analyst for Giga Information Group. Kennedy was a contributor for InfoWorld, focusing on Windows, Microsoft and other topics, until his dismissal in February 2010. In his announcement of the dismissal, InfoWorld editor-in-chief Eric Knorr stated that Kennedy had been dismissed for violating InfoWorld's policies of "integrity and honesty", and for "breach of trust".
Kennedy discovered an undocumented change in the protocol used by the Microsoft SQL Server Net-Lib component, from named pipes to TCP/IP, in Microsoft Data Access Components 2.6 (which was fixed in the subsequent version 2.7). He also saw curious benchmark results when comparing the performance of SQL Server on Windows NT 4 versus Windows 2000, but was prevented from publishing them in Network World once Microsoft threatened legal action for his violation of the SQL Server software licence agreement.
InfoWorld dismissal
Kennedy was dismissed from InfoWorld on 19 February 2010 for 'misrepresenting himself to other media organisations as Craig Barth CTO of Devil Mountain Software (aka exo.performance.network) in interviews for a number of stories regarding Windows and other Microsoft software topics' as Eric Knorr of InfoWorld explained 21 February. Knorr also explained that Devil Mountain Software 'is a Randall Kennedy business that specialises in the analysis of Windows performance data. There is no Craig Barth and Kennedy has stated this fabrication was a misguided effort to separate himself (or more accurately his InfoWorld blogger persona) from his Devil Mountain Software business'.
Kennedy now insists that he was not sacked, that InfoWorld was trying to save the situation, that he decided to resign of his own accord, and that he is enjoying life at his island home on Mauritius.
References
Further reading
Foster, Ed (2001) Is it OK for Microsoft and others to forbid disclosure of benchmark results?
Fontana, Joe (5 March 2001) Microsoft gets tough with independent testers Network World.
Kennedy, Randall C (21 November 2001) It's not a bug, it's a feature InfoWorld.
Knorr, Eric (21 February 2010) "An unfortunate ending" InfoWorld.
Why we don't trust Devil Mountain Software (and neither should you) ZDNet
Insane blogger fools reporter, gets fired
if( Randall C. Kennedy == Craig Barth ){ Scandal }
Living people
Year of birth missing (living people) | Randall C. Kennedy | Technology | 513 |
59,209,120 | https://en.wikipedia.org/wiki/Janssen%20revolver | The Janssen revolver () was invented by the French astronomer Pierre Jules César Janssen in 1874. It was the instrument that originated chronophotography, a branch of photography based on capturing movement from a sequence of images. To create the apparatus Pierre Janssen was inspired by the revolving cylinder of Samuel Colt's revolver.
Usage
The revolver used two discs and a sensitive plate: the first disc, which acted as the shutter, had twelve holes, while the second, fixed disc had a single window and sat over the plate. The shutter disc made a full turn every eighteen seconds, so that each time one of its windows passed in front of the window of the fixed disc, the corresponding portion of the sensitive plate was exposed, creating an image. So that the images would not overlap, the sensitive plate rotated at a quarter of the shutter disc's speed. The shutter speed was one and a half seconds. A mirror on the outside of the apparatus reflected the movement of the object towards the lens located in the barrel of this photographic revolver. When the revolver was in operation it was capable of taking forty-eight images in seventy-two seconds.
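The timing figures above are mutually consistent. The short sketch below (plain arithmetic in Python, assuming the one-and-a-half-second figure is the interval between successive exposures) shows how the forty-eight images and the seventy-two-second run follow from the disc geometry:

```python
# Timing of Janssen's photographic revolver, using the figures quoted above.
shutter_turn_s = 18        # the shutter disc makes one full turn every 18 seconds
windows = 12               # twelve holes in the shutter disc
plate_speed_ratio = 0.25   # the sensitive plate turns at a quarter of the shutter speed

exposure_interval_s = shutter_turn_s / windows          # 1.5 s between successive images
plate_turn_s = shutter_turn_s / plate_speed_ratio       # 72 s for one full turn of the plate
images_per_plate = plate_turn_s / exposure_interval_s   # 48 images around the plate

print(exposure_interval_s, plate_turn_s, images_per_plate)  # 1.5 72.0 48.0
```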
History
In the mid-nineteenth century, one of the great scientific challenges was to determine, as accurately as possible, the distance between the Earth and the Sun, the so-called astronomical unit, which sets the scale of the Solar System. At that time, the only way to measure it was through the transit of Venus: the passage of Venus in front of the Sun, which required simultaneous observations from widely separated latitudes and measurement of the total duration of the event. With these data, and by applying Kepler's laws, which describe the behaviour of planetary orbits, the distances to the rest of the planets of the Solar System could be obtained.
The method had two drawbacks: the rarity of the phenomenon and the technical difficulty of timing the start and end of the transit. The Venus transit of 1874 was therefore a unique opportunity, which was why more than sixty co-ordinated expeditions from up to ten different countries were dispatched to locations in China, Vietnam, New Caledonia, some Pacific islands and Japan. The distortion caused by the terrestrial atmosphere, the diffraction of the telescopes, the subjectivity of the observer and the "black drop effect" (an optical effect that distorts the silhouette of Venus just as it enters and leaves the solar disk) meant the attempt faced huge technical challenges, which had previously proved insurmountable.
Janssen designed his photographic revolver in an attempt to overcome these difficulties.
Application
Janssen tested the device with the support of the French government in Nagasaki (Japan).
As the exact moment at which the transit of Venus would begin was impossible to predict, he added a clockwork mechanism set to trigger a sequence of images. The revolver recorded 48 photographs in 72 seconds on a daguerreotype, a material that was no longer in common use but was well suited to the bright sunlight involved, since it could capture light over a long exposure and give clearer results.
The British expeditions photographed the transit from different geographic points using apparatuses inspired by Janssen's revolver. Unfortunately, the quality of the images resulting from the expeditions was not sufficient to calculate the astronomical unit accurately, and naked-eye observations proved more reliable. Even so, Janssen presented his revolver to the Société Française de Photographie in 1875 and to the Académie des Sciences in 1876, suggesting the possibility of using his apparatus for the study of animal movement, especially that of birds, because of the rapidity of their wing movements.
Legacy
In 1882, the physiologist Étienne-Jules Marey concluded that a galloping horse has all four legs in the air at a certain moment. Four years previously, Eadweard Muybridge had been the first to record the movement of living beings, in The Horse in Motion, with 12 serialized cameras that allowed him to play back and even project those photographs in sequence. The action was not reconstructed from the point of view of a single observer, but from cameras that accompanied the subject, as in a tracking shot, so that each photograph showed the action from a different viewpoint. Marey, building on Janssen's invention, managed to solve these problems with his 1882 photographic gun, which captured 12 small photographs at regular intervals on a circular plate. This improvement allowed the images to be captured on a fragile glass plate instead of the impractical daguerreotype, thus reducing the exposure time.
It was, therefore, a forerunner of the motion-picture camera, although it still differed in conception from later cameras: on the one hand, the images obtained were intended for the decomposition of movement for study, not for projection; on the other, because they were recorded on a glass disc, the duration of the action that could be captured was necessarily very short.
Both inventions were a first step in the development of film cameras, but they cannot be considered true film cameras because their main objective was not the projection of films but the study of movement through its decomposition.
References
1874 introductions
1870s in film
History of film
History of astronomy | Janssen revolver | Astronomy | 1,068 |
27,325,068 | https://en.wikipedia.org/wiki/Project%20SUNSHINE | Project SUNSHINE was a series of research studies that began in 1953 to ascertain the impact of radioactive fallout on the world's population. The project was initially kept secret, and only became known publicly in 1956. Commissioned jointly by the United States Atomic Energy Commission and USAF Project Rand, SUNSHINE sought to examine the long-term effects of nuclear radiation on the biosphere due to repeated nuclear detonations of increasing yield. With the conclusion from Project GABRIEL that the radioactive isotope strontium-90 (Sr-90) represented the most serious threat to human health from nuclear fallout, Project SUNSHINE sought to measure the global dispersion of Sr-90 by measuring its concentration in the tissues and bones of the dead. Of particular interest was tissue from the young, whose developing bones have the highest propensity to accumulate Sr-90 and thus the highest susceptibility to radiation damage. SUNSHINE elicited a great deal of controversy when it was revealed, many years later, that many of the remains sampled had been used without the prior permission of relatives of the dead.
History
On January 18, 1955, then-AEC commissioner Dr. Willard Libby said that there was insufficient data regarding the effects of fallout due to a lack of human samples – especially samples taken from children – to analyze. Libby was quoted saying, "I don't know how to get them, but I do say that it is a matter of prime importance to get them, and particularly in the young age group. So, human samples are often of prime importance, and if anybody knows how to do a good job of body snatching, they will really be serving their country." This led to over 1,500 samples being gathered, of which only 500 were analyzed. Many of the 1,500 sample cadavers were babies and young children, and were taken from countries from Australia to Europe, often without their parents' consent or knowledge. According to the investigation launched after a British newspaper reported that British scientists had obtained children’s bodies from various hospitals and shipped their body parts to the United States, a British mother had said that her stillborn baby's legs were removed by British doctors, and to prevent her from finding out what had happened, she was not allowed to dress the baby for the funeral.
Notable studies
In 1958, research for project SUNSHINE was brought to Belgium. Scientists started doing tests that were slightly different than those done previously in the United States and Europe by analyzing soils in agricultural regions instead of human bones. They headed in two main directions: environmental surveys and experimental research in natural and in controlled conditions. Their goal was to see the effect of Strontium-90 in the soils as well as to see how it transferred to the grass and grazing animals such as cows and sheep, the animals from which humans consume milk and meat. Researchers also looked for direct influences of strontium-90 by observing how well the contaminated grass and crops grew.
In a 1957 article, Dr. Whitlock, director of Health Education in the National Dairy Council, Chicago, Illinois, discussed the impact of strontium-90 in cow's milk consumed by humans, concluding that the effects of Sr-90 would not be detectably harmful to the general populace of the US: "From the foregoing information, it would seem we have a long way to go before the presence of Strontium-90 in milk and other foods can catch up with the amounts of radioactivity to which we have long been exposed through natural resources." He was referring specifically to the natural radioactivity to which one is exposed from potassium-40.
See also
Project GABRIEL
Strontium-90
References
Radiation health effects research
United States Atomic Energy Commission
Nuclear fallout | Project SUNSHINE | Chemistry,Technology | 751 |
50,006,201 | https://en.wikipedia.org/wiki/Field%20%28mineral%20deposit%29 | A field is a mineral deposit containing a metal or other valuable resources in a cost-competitive concentration. It is usually used in the context of a mineral deposit from which it is convenient to extract its metallic component. The deposits are exploited by mining in the case of solid mineral deposits (such as iron or coal) and extraction wells in case of fluids (such as oil, gas or brines).
Description
In geology and related fields a deposit is a layer of rock or soil with uniform internal features that distinguish it from adjacent layers. Each layer is generally one of a series of parallel layers laid one above the other by natural forces. They may extend over hundreds of thousands of square kilometers of the Earth's surface. Deposits are usually seen as groups of material of a different color or structure exposed in cliffs, canyons, caves and river banks. Individual layers may vary in thickness from a few millimeters up to a kilometer or more. Each represents a specific type of deposit: river flint, sea sand, swamp coal, sand dunes, lava beds, etc.
A deposit can consist of layers of sediment, usually of marine origin, or of differentiations of certain minerals during the cooling of magma or during metamorphism of pre-existing rock. The ore minerals are generally oxides, silicates, sulfides or sulfates of metals that are not otherwise concentrated in the Earth's crust. The deposits must be processed to separate the metals in question from the waste rock and other minerals of the deposit. Deposits are formed by a variety of geological processes. The richness of a field determines the direct costs associated with mining the deposit and hence the cost of the extracted metal.
Important minerals in ore fields
Argentite: Ag2S
Baryte: BaSO4
Beryl: Be3Al2Si6O18
Bornite: Cu5FeS4
Cassiterite: SnO2
Chalcocite: Cu2S
Chalcopyrite: CuFeS2
Chromite: FeCr2O4
Cinnabar: HgS
Cobaltite: CoAsS
Coltan: (Fe,Mn)(Nb,Ta)2O6
Galena: PbS
Gold: Au
Hematite: Fe2O3
Ilmenite: FeTiO3
Magnetite: Fe3O4
Molybdenite: MoS2
Pentlandite: (Fe,Ni)9S8
Pyrite: FeS2
Scheelite: CaWO4
Sphalerite: ZnS
Uraninite: UO2
Wolframite: (Fe,Mn)WO4
See also
Petroleum reservoir
References
Civil engineering
Ore deposits | Field (mineral deposit) | Engineering | 501 |
18,678,110 | https://en.wikipedia.org/wiki/ELMO%20%28protein%29 | ELMO (Engulfment and Cell Motility) is a family of related proteins (~82 kDa) involved in intracellular signalling networks. These proteins have no intrinsic catalytic activity and instead function as adaptors which can regulate the activity of other proteins through their ability to mediate protein-protein interactions.
This family contains members in all animals. In humans there are three paralogous isoforms:
ELMO1
ELMO2
ELMO3
The ELMO domain was first characterized in the CED-12 proteins of Caenorhabditis elegans and Drosophila melanogaster, which are homologs of the ELMO proteins found in mammals. These proteins are involved in Rac-GTPase activation, apoptotic cell phagocytosis, cell migration, and cytoskeletal rearrangements.
Structure and function of ELMO proteins
The ELMO family are evolutionarily conserved orthologs of the C. elegans protein CED-12. All isoforms contain a series of armadillo repeats, which begin at the N-terminus and extend around two thirds of the way along the protein, as well as a C-terminal proline-rich motif and a central PH domain. They function as part of a protein complex with Dock180-related proteins to form a bipartite guanine nucleotide exchange factor for Rac (a member of the Rho family of small G proteins). The Dock180-ELMO interaction requires the ELMO PH domain and also involves binding of the ELMO proline-rich motif to the Dock180 SH3 domain.
References
Protein families | ELMO (protein) | Chemistry,Biology | 343 |
47,733,167 | https://en.wikipedia.org/wiki/Pyxis%20globular%20cluster | The Pyxis globular cluster is a globular cluster in the constellation Pyxis. It lies around 130,000 light-years distant from Earth and around 133,000 light-years distant from the centre of the Milky Way—a distance not previously thought to contain globular clusters. It is around 13.3 ± 1.3 billion years old. Discovered in 1995 by astronomer Ronald Weinberger while he was looking for planetary nebulae, it is in the Galactic halo. Irwin and colleagues noted that it appears to lie on the same plane as the Large Magellanic Cloud and raised the possibility that it might be an escaped object from that galaxy.
References
Pyxis globular cluster
Pyxis | Pyxis globular cluster | Astronomy | 148 |
5,571,005 | https://en.wikipedia.org/wiki/Compatibility%20%28geochemistry%29 | Compatibility is a term used by geochemists to describe how elements partition themselves in the solid and melt within Earth's mantle. In geochemistry, compatibility is a measure of how readily a particular trace element substitutes for a major element within a mineral.
Compatibility of an ion is controlled by two things: its valence and its ionic radius. Both must approximate those of the major element for the trace element to be compatible in the mineral. For instance, olivine (an abundant mineral in the upper mantle) has the chemical formula (Mg,Fe)2SiO4. Nickel, with very similar chemical behaviour to iron and magnesium, substitutes readily for them and hence is very compatible in the mantle.
Compatibility controls the partitioning of different elements during melting. The compatibility of an element in a rock is a weighted average of its compatibility in each of the minerals present. By contrast, an incompatible element is one that is least stable within the host mineral's crystal structure. If an element is incompatible in a rock, it partitions into a melt as soon as melting begins. In general, when an element is referred to as being “compatible” without mentioning what rock it is compatible in, the mantle is implied. Thus incompatible elements are those that are enriched in the continental crust and depleted in the mantle. Examples include rubidium, barium, uranium, and lanthanum. Compatible elements are depleted in the crust and enriched in the mantle, with examples including nickel and titanium.
Compatibility is commonly described by an element's distribution coefficient. A distribution coefficient describes how the solid and liquid phases of an element will distribute themselves in a mineral. Current studies of Earth's rare trace elements seek to quantify and examine the chemical composition of elements in the Earth's crust. There are still uncertainties in the understanding of the lower crust and upper mantle region of Earth's interior. In addition, numerous studies have focused on looking at the partition coefficients of certain elements in the basaltic magma to characterize the composition of oceanic crust. By having a way to measure the composition of elements in the crust and mantle given a mineral sample, compatibility allows relative concentrations of a particular trace element to be determined. From a petrological point of view, the understanding of how major and rare trace elements differentiate in the melt provides deeper understanding of Earth's chemical evolution over the geologic time scale.
Quantifying compatibility
Distribution (Partition) coefficient
In a mineral, nearly all elements distribute unevenly between the solid and liquid phases. This phenomenon, known as chemical fractionation, can be described by an equilibrium constant, which sets a fixed distribution of an element between any two phases at equilibrium. A distribution constant is used to define the relationship between the solid and liquid phases of a reaction. This value is essentially the ratio of the concentrations of an element between two phases, typically the solid and the liquid in this context. This constant is often referred to as Kd when dealing with trace elements, where
Kd = C(solid) / C(liquid)
for trace elements, with C(solid) and C(liquid) the concentrations of the trace element in the solid and in the melt.
The equilibrium constant is an empirically determined value. These values depend on temperature, pressure, and the composition of the mineral melt. Kd values differ considerably between major elements and trace elements. By definition, incompatible trace elements have an equilibrium constant of less than one because they have higher concentrations in the melt than in the solid. This means that compatible elements have a value of Kd greater than one. Thus, incompatible elements are concentrated in the melt, whereas compatible elements tend to be concentrated in the solid. Compatible elements with Kd much greater than one are strongly fractionated and have very low concentrations in the liquid phase.
Bulk distribution coefficient
The bulk distribution coefficient is used to calculate the elemental composition for any element that makes up a mineral in a rock. The bulk distribution coefficient for element i, Di, is defined as
Di = Σ WA Kd(i,A)
where i is the element of interest, WA is the weight fraction of mineral A in the rock, and Kd(i,A) is the distribution coefficient for element i in mineral A, with the sum taken over all minerals present. This constant can be used to describe how individual elements are concentrated between the two phases. During chemical fractionation, certain elements may become more or less concentrated, which allows geochemists to quantify the different stages of magma differentiation. Ultimately, these measurements can be used to provide further understanding of elemental behavior in different geologic settings.
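As an illustration of this weighted average, a minimal sketch is shown below; the modal proportions and partition coefficients are invented placeholder values, not measurements for any real rock:

```python
# Bulk distribution coefficient: D_i = sum over minerals A of W_A * Kd(i, A),
# where W_A is the weight fraction of mineral A in the rock and Kd(i, A) is the
# mineral/melt partition coefficient of element i in that mineral.

# Hypothetical modal composition of a rock (weight fractions sum to 1)
modes = {"olivine": 0.60, "orthopyroxene": 0.25, "clinopyroxene": 0.15}

# Hypothetical partition coefficients for one trace element in each mineral
kd = {"olivine": 0.01, "orthopyroxene": 0.04, "clinopyroxene": 0.30}

bulk_D = sum(weight * kd[mineral] for mineral, weight in modes.items())
print(f"bulk D = {bulk_D:.3f}")  # well below 1, so this element is incompatible in the rock
```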
Applications
One of the main sources of information about the Earth's composition comes from understanding the relationship between peridotite and basalt melting. Peridotite makes up most of Earth's mantle. Basalt, which makes up most of the Earth's oceanic crust, is formed when magma reaches the Earth's surface and cools at a very fast rate. When magma cools, different minerals crystallize at different times depending on their crystallization temperatures. This progressively changes the chemical composition of the melt as different minerals begin to crystallize. Fractional crystallization of elements in basaltic liquids has also been studied to constrain the composition of melts derived from the upper mantle. Scientists can apply this concept to gain insight into the evolution of Earth's mantle and how concentrations of lithophile trace elements have varied over the last 3.5 billion years.
Understanding the Earth's interior
Previous studies have used compatibility of trace elements to see the effect it would have on the melt structure of the peridotite solidus. In such studies, partition coefficients of specific elements were examined and the magnitude of these values gave researchers some indication about the degree of polymerization of the melt. A study conducted in East China in 1998 looked at the chemical composition of various elements found in the crust in China. One of the parameters used to characterize and describe the crustal structure in this region was compatibility of various element pairs. Essentially, studies like this showed how compatibility of certain elements can change and be affected by the chemical compositions and conditions of Earth's interior.
Oceanic volcanism is another topic that commonly incorporates the use of compatibility. Geochemists began studying the structure of Earth's mantle in the 1960s. The oceanic crust, which is rich in basalts from volcanic activity, shows distinct components that provide information about the evolution of the Earth's interior over the geologic timescale. Incompatible trace elements become depleted in the mantle when it melts and become enriched in oceanic or continental crust through volcanic activity. At other times, volcanism can deliver enriched mantle melt to the crust. These phenomena can be quantified by looking at radioactive decay records of isotopes in these basalts, which is a valuable tool for mantle geochemists. More specifically, the geochemistry of serpentinites along the ocean floor, particularly at subduction zones, can be examined using the compatibility of specific trace elements. The compatibility of lead (Pb) in zircons under different environments is also relevant, since the level of non-radiogenic lead incorporated into zircons affects their usefulness for radiometric dating.
References
Geochemistry
Geology | Compatibility (geochemistry) | Chemistry | 1,383 |
37,403,337 | https://en.wikipedia.org/wiki/Hydrogen-bond%20catalysis | Hydrogen-bond catalysis is a type of organocatalysis that relies on use of hydrogen bonding interactions to accelerate and control organic reactions. In biological systems, hydrogen bonding plays a key role in many enzymatic reactions, both in orienting the substrate molecules and lowering barriers to reaction. The field is relatively undeveloped compared to research in Lewis acid catalysis.
Hydrogen-bond donors can catalyze reactions through a variety of mechanisms. Hydrogen bonding can stabilize anionic intermediates. They sequester anions, enabling the formation of reactive electrophilic cations. More acidic donors can act as general or specific acids, which activate electrophiles by protonation. A powerful approach is the simultaneous activation of both partners in a reaction, e.g. nucleophile and electrophile, termed "bifunctional catalysis". In all cases, the close association of the catalyst molecule to substrate also makes hydrogen-bond catalysis a powerful method of inducing enantioselectivity.
Hydrogen-bonding catalysts are often simple to make, relatively robust, and can be synthesized in high enantiomeric purity. New reactions catalyzed by hydrogen-bond donors are being discovered at an increasing pace, including asymmetric variants of common organic reactions, such as aldol additions, Diels-Alder cycloadditions and Mannich reactions.
Catalytic strategies
Stabilization of tetrahedral intermediates
Many organic reactions involve the formation of tetrahedral intermediates through nucleophilic attack of functional groups such as aldehydes, amides or imines. In these cases, catalysis with hydrogen-bond donors is an attractive strategy since the anionic tetrahedral intermediates are better hydrogen-bond acceptors than the starting compound. This means that relative to the initial catalyst-substrate complex, the transition state, bearing more negative charge, is stabilized.
For example, in a typical acyl substitution reaction, the starting carbonyl compound is coordinated to the catalyst through one, two or possibly more hydrogen bonds. During the attack of the nucleophile, negative charge builds on the oxygen until the tetrahedral intermediate is reached. Therefore, the formally negative oxygen engages in a much stronger hydrogen bond than the starting carbonyl oxygen because of its increased negative charge. Energetically, this has the effect of lowering the energies of the intermediate and the transition state, thus accelerating the reaction.
This mode of catalysis is found in the active sites of many enzymes, such as the serine proteases. In this example, the amide carbonyl is coordinated to two N–H donors. These sites of multiple coordination designed to promote carbonyl reactions in biology are termed "oxyanion holes". Delivery of serine nucleophile forms a tetrahedral intermediate, which is stabilized by the increase hydrogen bonding to the oxyanion hole.
Many synthetic catalysts employ this strategy to activate a variety of electrophiles. Using a chiral BINOL catalyst, for instance, the Morita-Baylis-Hillman reaction involving the addition of enones to aldehydes can be effected with high enantioselectivity. The nucleophile is an enolate-type species generated from the conjugate addition of PEt3 to the enone, and adds enantioselectively to the aldehyde coordinated to catalyst.
In addition to carbonyls, other electrophiles such as imines can be used. For example, using a simple chiral thiourea catalyst, the asymmetric Mannich reaction of aromatic imines with silyl ketene acetals can be catalyzed with high ee in near quantitative conversion. The mechanism of this reaction is not fully resolved and the reaction is very substrate-specific, only effective on certain aromatic electrophiles.
The scope of this mode of activation includes combinations of electrophiles, nucleophiles and catalyst structures. Furthermore, analogous reactions involving oxyanion intermediates such as enolate addition to nitroso compounds or opening of epoxides have also been catalyzed with this strategy.
Stabilization of anionic fragments
Another strategy that has been explored is the stabilization of reactions that develop partial negative charges in the transition state. Examples of applications are most commonly reactions that are approximated concerted and pericyclic in nature. During the course of the reaction, one fragment develops partial negative character and the transition state can be stabilized by accepting hydrogen bond(s).
A demonstrative example is the catalysis of Claisen rearrangements of ester-substituted allyl vinyl ethers reported by the Jacobsen research group. A chiral guanidinium catalyst was found to promote the reaction near room temperature with high enantioselectivity. In the transition state, the fragment coordinated to the guanidinium catalyst develops partial anionic character due to the electronegativity of the oxygen and the electron-withdrawing ester group. This increases the strength of hydrogen bonding and lowers the transition state energy, thus accelerating the reaction.
Similarly, negative charge can develop in cycloaddition reactions such as the Diels-Alder reaction, when the partners are appropriately substituted. As a representative example, Rawal and coworkers developed a chiral catalyst based on α,α,α',α'-tetraaryl-2,2-disubstituted 1,3-dioxolane-4,5-dimethanol (TADDOL) that could catalyze Diels-Alder reactions. In one example, the reaction of a highly electron-rich diene with an electron-poor dienophile is thought to develop significant negative charge on the enal fragment, and the transition state is stabilized by increased hydrogen bonding to the TADDOL (Ar = 1-naphthyl).
Anion binding
Hydrogen-bond catalysts can also accelerate reactions by assisting in the formation of electrophilic species through abstracting and coordinating an anion such as a halide. Urea and thiourea catalysts are the most common donors in anion-binding catalysis, and their ability to bind halides and other anions has been well established in the literature. The use of chiral anion-binding catalysts can create an asymmetric ion pair and induce remarkable stereoselectivity.
One of the first reactions proposed to proceed through anion-binding catalysis is the Pictet-Spengler-type cyclization of hydroxy lactams with TMSCl under thiourea catalysis. In the proposed mechanism, after initial substitution of the hydroxyl group with chloride, the key ion pair is formed. The activated iminium ion is closely associated with the chiral thiourea-bound chloride, and intramolecular cyclization proceeds with high stereoselectivity.
Asymmetric ion pairs can also be attacked in intermolecular reactions. In an interesting example, asymmetric addition of enol silane nucleophiles to oxocarbenium ions can be effected by catalytically forming the oxocarbenium through anion binding. Starting from an acetal, the chloro ether is generated with boron trichloride and reacted with the enol silane and catalyst. The mechanism of formation of the oxocarbenium-thiourea-chloride complex is not fully resolved. It is thought that under the reaction conditions, the chloro ether can epimerize and thiourea can stereoselectively bind chloride to form a closely associated ion pair. This asymmetric ion pair is then attacked by the silane to generate alkylated product.
One example of the anion-binding mechanism is the hydrocyanation of imines catalyzed by Jacobsen's amido-thiourea catalyst. This reaction is also one of the most extensively studied through computational, spectroscopic, labeling and kinetic experiments. While direct addition of cyanide to a catalyst-bound imine was considered, an alternative mechanism involving formation of an iminium-cyanide ion pair controlled by catalyst was calculated to have a barrier that is lower by 20 kcal/mol. The proposed most likely mechanism begins with binding of the catalyst to HNC, which exists in equilibrium with HCN. This complex then protonates a molecule of imine, forming an iminium-cyanide ion pair with the catalyst binding and stabilizing the cyanide anion. The iminium is thought to also interact with the amide carbonyl on the catalyst molecule (see bifunctional catalysis below). The bound cyanide anion then rotates and attacks the iminium through carbon. The investigators conclude that though imine-urea binding was observed through spectroscopy and was supported by early kinetic experiments, imine binding is off-cycle and all evidence points toward this mechanism involving thiourea-bound cyanide.
Protonation
It is often difficult to distinguish between hydrogen-bond catalysis and general acid catalysis. Hydrogen-bond donors can have varying acidity, from mild to essentially strong Brønsted acids like phosphoric acids. Looking at the extent of proton transfer over the course of the reaction is challenging and has not been investigated thoroughly in most reactions. Nevertheless, strong acid catalysts are often grouped with hydrogen-bond catalysts as they represent an extreme on this continuum and their catalytic behaviors share similarities. The mechanism of activation for these reactions involves initial protonation of the electrophilic partner. This has the effect of rendering the substrate more electrophilic and creating an ion pair, through which it is possible to transfer stereochemical information.
Asymmetric catalysis involving nearly complete protonation of substrate has been effective in Mannich reactions of aromatic aldimines with carbon nucleophiles. In addition, aza-Friedel-Crafts reactions of furans, amidoalkylations of diazocarbonyl compounds, asymmetric hydrophosphonylation of aldimines and transfer hydrogenations have also been reported. Chiral Brønsted acids are often easily prepared from chiral alcohols such as BINOLs, and many are already present in the literature due to their established utility in molecular recognition research.
Multifunctional strategies
One of the main advantages of hydrogen-bond catalysis is the ability to construct catalysts that engage in multiple non-covalent interactions to promote the reaction. In addition to using hydrogen-bond donors to activate or stabilize a reactive center during the reaction, it is possible to introduce other functional groups, such as Lewis bases, arenes, or addition hydrogen-bonding sites to lend additional stabilization or to influence the other reactive partner.
For instance, the natural enzyme chorismate mutase, which catalyzes the Claisen rearrangement of chorismate, features many other interactions in addition to the hydrogen bonds involved in stabilizing the enolate-like fragment, which is an example of the anionic fragment stabilization strategy discussed above. A key interaction is the stabilization of the other cationic allyl fragment through a cation-pi interaction in the transition state. The use of many additional hydrogen bonds has several putative purposes. The stabilization of multiple hydrogen bonds to the enzyme helps overcome the entropic cost of binding. Additionally, the interactions help hold the substrate in a reactive conformation, and the enzyme-catalyzed reaction has near-zero entropy of activation, while typical Claisen rearrangements in solution have very negative entropies of activation.
The use of cation-pi interactions has also been implemented in reactions with synthetic catalysts. A combination of anion-binding and cation-pi strategies can be used to effect enantioselective cationic polycyclizations. In the transition state, it is proposed that the thiourea group binds chloride, while the aromatic system stabilizes the associated polyene cation. In support of this, increasing the size of the aromatic ring leads to improvements both in yield and stereoselectivity. The enantioselectivity correlates well with both the polarizability and the quadrupole moment of the aryl group.
Since such a large number of catalysts and reactions involve binding to electrophiles to stabilize the transition state, many bifunctional catalysts also present a Lewis-basic, hydrogen-bond acceptor site. As a representative example, Deng and coworkers have developed a thiourea-amine catalyst capable of promoting stereoselective Michael reactions. In the proposed transition state, one of the thiourea N–H donors is coordinated to the Michael acceptor and will stabilize the negative charge buildup. The basic nitrogen lone pair acts as a hydrogen-bond acceptor to coordinate the nucleophile, but in the transition state acts as a general base to promote the nucleophilic enolate addition.
This motif of engaging both the nucleophilic and electrophilic partners in a reaction and stabilizing them in the transition state is very common in bifunctional catalysis and many more examples can be found in the article on thiourea organocatalysis.
A relatively new strategy of using synthetic oligopeptides to perform catalysis has yielded many examples of catalytic methods. Peptides feature multiple potential sites for hydrogen bonding, and it is generally not understood how these engage the substrate or how they promote reaction. Peptides have the advantage of being extremely modular, and often these catalysts are screened in large arrays. Highly enantioselective reactions, such as aldol reactions, have been discovered in this manner.
Other transformations catalyzed by synthetic peptides include hydrocyanation, acylation, conjugate additions, aldehyde-imine couplings, aldol reaction and bromination. Although the nature of the transition states is unclear, in many examples small changes in the catalyst structure have dramatic effects on reactivity. It is hypothesized that a large number of hydrogen bonds both within the peptide and between catalyst and substrate must cooperate to meet the geometrical requirements for catalysis. Beyond this, understanding of catalyst design and mechanism has not yet progressed beyond requiring the testing of libraries of peptides.
Catalyst design
Privileged structures
The types of hydrogen-bond donors used in catalysis vary widely from reaction to reaction, even among similar catalytic strategies. While specific systems are often studied and optimized extensively, a general understanding of the optimal donor for a reaction or the relationship between catalyst structure and reactivity is greatly lacking. It is not yet practical to rationally design structures to promote a desired reaction with the desired selectivity. However, contemporary hydrogen-bond catalysis is primarily focused on a few types of systems that experimentally seem to be effective in a variety of situations. These are termed "privileged structures". However, it is worth noting that other structural scaffolds and motifs have also shown promising results, such as metal-coordinated hydrogen-bond donors.
Ureas and thioureas are by far the most common structures and can stabilize a variety of negatively charged intermediates, as well as engage in anion-binding catalysis. Bifunctional urea and thiourea catalysis are abundant in the literature. Thioureas are often found to be stronger hydrogen-bond donors (i.e., more acidic) than ureas because their amino groups are more positively charged. Quantum chemical analyses revealed that this counterintuitive phenomenon, which is not explainable by the relative electronegativities of O and S, results from the effective steric size of the chalcogen atoms.
Guanidinium and amidinium ions are structural relatives of ureas and thioureas and can catalyze similar reactions but, by virtue of their positive charge, are stronger donors and much more acidic. The mechanism of guanidinium and amidinium catalysis is thought to often involve partial protonation of substrate.
Diol catalysts are thought to engage substrate with a single hydrogen bond, with the other hydroxyl participating in an internal hydrogen bond. These are some of the earliest hydrogen-bond catalysts investigated. They are most commonly used in stabilizing partial anionic charge in transition states, for example coordinating to aldehyde dienophiles in hetero-Diels-Alder reactions.
Phosphoric acid catalysts are the most common strong acid catalysts and work by formation of chiral ion pairs with basic substrates such as imines.
Squaramide catalysts are easily prepared from starting materials such as methyl squarate and possess high activities at low catalyst loadings. Squaramide catalysis can replace thiourea organocatalysis in some scenarios. Squaramides have a higher affinity for halide ions than thioureas.
Catalyst tuning
In general, the acidity of donor sites correlates well with the strength of the donor. For example, it is a common strategy to add electron-withdrawing aryl substituents on a thiourea catalyst, which can increase its acidity and thus the strength of its hydrogen bonding. However, it is still unclear how donor strength correlates with desired reactivity. Importantly, more acidic catalysts are not necessarily more effective. For instance, ureas are less acidic than thioureas by roughly 6 pKa units, but it is not generally true that ureas are significantly worse at catalyzing reactions.
Furthermore, the effect of varying substituents on the catalyst is rarely well understood. Small substituent changes can completely change reactivity or selectivity. An example of this was in the optimization studies of a bifunctional Strecker reaction catalyst, one of the first well-studied thiourea catalysts. Specifically, varying the X substituent on the salicylaldimine substituent, it was found that typical electron-withdrawing or electron-donating substituents had little effect on the rate, but ester substituents such as acetate or pivaloate seemed to cause noticeable rate acceleration. This observation is difficult to rationalize given that the X group is far from the reactive center during the course of the reaction and electronics do not seem to be the cause.
Synthetic applications
Natural product synthesis
To date, there have been few examples of hydrogen-bond catalysis in the synthesis of natural products, despite the large number of reactions being discovered. Generally, the high catalyst loadings required and the often narrow substrate scope have so far limited the usefulness of hydrogen-bond catalysis in this setting.
In the Jacobsen synthesis of (+)-yohimbine, an indole alkaloid, an early enantioselective Pictet-Spengler reaction using a pyrrole-substituted thiourea catalyst produced gram-scale quantities of product in 94% ee and 81% yield. The remainder of the synthesis was short, using a reductive amination and an intramolecular Diels-Alder reaction.
In 2008, Takemoto disclosed a concise synthesis of (−)-epibatidine that relied on a Michael cascade, catalyzed by a bifunctional catalyst. After initial asymmetric Michael addition to the β-nitrostyrene, intramolecular Michael addition furnishes the cyclic ketoester product in 75% ee. Standard functional group manipulations and an intramolecular cyclization yields the natural product.
Scalable synthesis of building blocks
Aside from total synthesis, hydrogen-bond catalysis has been applied to the bulk synthesis of difficult-to-access chiral small molecules. An example is the gram-scale Strecker synthesis of unnatural amino acids using thiourea catalysis, reported in the journal Nature in 2009. The catalyst, whether polymer-bound or homogeneous, is derived from natural tert-leucine and can catalyze (4 mol% catalyst loading) the formation of the Strecker product from benzhydryl amines and aqueous HCN. Hydrolysis of the nitrile and deprotections produces pure unnatural tert-leucine in 84% overall yield and 99% ee.
See also
Organocatalysis
Thiourea organocatalysis
Hydrogen bond
Further reading
Hydrogen Bond Catalysis. Evans Group Meeting Presentation by Peter H. Fuller.
Asymmetric Hydrogen Bond Catalysis. MacMillan Group Meeting Presentation by Anthony Mastracchio.
Hydrogen Bonding in Asymmetric Catalysis. Leighton Group Meeting Presentation by Uttam Tambar.
Asymmetric Catalysis by Chiral Hydrogen-Bond Donors. Wipf Group Meeting Presentation by Zhenglai Fang.
Enantioselective Organocatalysis. Ed. Peter I. Dalko, Wiley-VCH: Weinheim, 2007.
References
Catalysis
Organic chemistry | Hydrogen-bond catalysis | Chemistry | 4,349 |
8,731,428 | https://en.wikipedia.org/wiki/Kile%20%28unit%29 | The kile () was an Ottoman unit of volume similar to a bushel, like other dry measures also often defined as a specific weight of a particular commodity. Its value varied widely by location, period, and commodity, from 8 to 132 oka. The 'standard' kile was 36 litres or 20 oka.
References
Diran Kélékian, Dictionnaire Turc-Français, Constantinople: Imprimerie Mihran, 1911.
A.D. Alderson and Fahir İz, The Concise Oxford Turkish Dictionary, 1959.
Halil İnalcık, Donald Quataert, An Economic and Social History of the Ottoman Empire, 1300-1914, Cambridge University Press, 1997. . Has extensive tables of values of the kile at various times and places.
Obsolete units of measurement
Units of mass
Units of volume
Turkish words and phrases
Ottoman units of measurement | Kile (unit) | Physics,Mathematics | 182 |
58,179,215 | https://en.wikipedia.org/wiki/Hot%20Particulate%20Ingestion%20Rig | The Hot Particulate Ingestion Rig (HPIR) is a gas burner that can shoot sand into a hot gas flow and onto a target material to test how that material's thermal barrier coating is impacted by the molten sand. It was developed by the U.S. Army Research Laboratory (ARL) to experiment with new coating materials for gas turbine engines used in military aircraft.
Mechanism
The HPIR uses standard military fuel and dry compressed air to produce combusted gas flows that can range from 400 °C to 1650 °C and travel as fast as 1060 meters per second, or Mach 0.8. A LabVIEW interface is used to monitor and control all HPIR parameters and the pneumatic table. Monitoring is also performed by Williamson PRO series single/dual wavelength pyrometers, S-type thermocouples, and a FLIR SC6700 mid-wave infrared (IR) camera in order to determine the emissivity of each sample.
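The quoted relationship between flow velocity, gas temperature and Mach number can be illustrated with a short calculation. The following Python sketch assumes ideal-gas air properties (a ratio of specific heats of about 1.4 and a specific gas constant of about 287 J/(kg·K)); these constants and the example temperatures are illustrative assumptions, not published rig parameters:

import math

GAMMA = 1.4    # ratio of specific heats for air (assumed)
R_AIR = 287.0  # specific gas constant for air, J/(kg*K) (assumed)

def speed_of_sound(temp_c: float) -> float:
    """Ideal-gas speed of sound a = sqrt(gamma * R * T), with temp_c in Celsius."""
    t_kelvin = temp_c + 273.15
    return math.sqrt(GAMMA * R_AIR * t_kelvin)

# Speed of sound, and the flow velocity corresponding to Mach 0.8, at several temperatures
for temp in (400.0, 815.0, 1650.0):
    a = speed_of_sound(temp)
    print(f"{temp:6.0f} C: speed of sound = {a:5.0f} m/s, Mach 0.8 = {0.8 * a:5.0f} m/s")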
Samples are placed in a steel holder in front of the rig at a 10 degree incident angle so that the flame heats the surface uniformly. A pneumatic table moves the sample into the flame, and an S-type thermocouple is used to monitor the flame's temperature. During testing, the sample is initially exposed to a hot gas flow at Mach 0.28 and a flame temperature of 815 °C until the pyrometer detects that the surface temperature of the target has reached 540 °C. Then, the sample goes through several cycles of heating and cooling as an initial survivability check before it can be exposed to even higher temperatures. Short-term durability testing consists of three of these cycles, with the heating stage reaching engine-relevant temperatures and the cooling stage set at ambient conditions.
In 2016, the HPIR was modified to ingest sand and salt into the combustion chamber at 1 to 200 grams per minute.
Sandphobic coating technology
In 2015, researchers at ARL were tasked with finding a way to prevent flying, micron-sized sand and dust particles from entering the gas turbine engines of military aircraft and damaging the internal machinery.
While modern engines have particle separators that can filter out large particles, fine, powder-like sand particles that are smaller than 100 micrometers in size have consistently managed to pass through the engine's combustors and attach to the blades and vanes. As the rotor blades experienced cycles of heating and cooling during operation, the particles melted due to the extreme temperatures and then subsequently hardened onto the turbine blades. As a result, the micron-sized sand particles have frequently destroyed the engine's internal coating, which has led to severe sand glazing, blade tip wear, calcia-magnesia-alumina-silicate (CMAS) attack, oxidation, plugged cooling holes, and, ultimately, engine loss. This problem has recently worsened because state-of-the-art turbine engines operate at much higher temperatures than past generation turbomachinery, ranging from 1400 °C to 1500 °C.
According to ARL scientists, the damage caused by these tiny sand particles have reduced the lifespan of a typical T-700 engine from 6000 hours to 400 hours, and replacing the rotors can cost more than $30,000. They estimate that one third of fielded engines used by the military have been affected by this sand ingestion problem.
As part of a collaborative research effort with the Aviation and Missile Research, Development, and Engineering Center (AMRDEC), the U.S. Navy Naval Air Systems Command (NAVAIR) and the National Aeronautics and Space Administration (NASA), ARL modified the HPIR so that it can model how sand particles adhere, melt, and glassify on thermal barrier coatings.
According to ARL researchers, the HPIR is the first system to confirm how the sand particles damage the turbine blades at temperatures similar to that of a turbine engine out on the field. Using high-speed imaging technology, ARL scientists were able to film how sand particles experience a phase change from solid to liquid before being deposited onto turbine blade material targets and vaporizing. In 2018, the team used the HPIR to test different coating materials and develop what they call “sandphobic coatings,” which will be designed so that the sand particles flake off the rotor blades instead of attaching to them.
References
Military technology
Gas turbines
Turbines
Test equipment
Sand | Hot Particulate Ingestion Rig | Chemistry,Technology | 912 |
2,051,812 | https://en.wikipedia.org/wiki/Proportional%20control | Proportional control, in engineering and process control, is a type of linear feedback control system in which a correction is applied to the controlled variable, and the size of the correction is proportional to the difference between the desired value (setpoint, SP) and the measured value (process variable, PV). Two classic mechanical examples are the toilet bowl float proportioning valve and the fly-ball governor.
The proportional control concept is more complex than an on–off control system such as a bi-metallic domestic thermostat, but simpler than a proportional–integral–derivative (PID) control system used in something like an automobile cruise control. On–off control will work where the overall system has a relatively long response time, but can result in instability if the system being controlled has a rapid response time. Proportional control overcomes this by modulating the output to the controlling device, such as a control valve at a level which avoids instability, but applies correction as fast as practicable by applying the optimum quantity of proportional gain.
A drawback of proportional control is that it cannot eliminate the residual SP − PV error in processes with compensation e.g. temperature control, as it requires an error to generate a proportional output. To overcome this the PI controller was devised, which uses a proportional term (P) to remove the gross error, and an integral term (I) to eliminate the residual offset error by integrating the error over time to produce an "I" component for the controller output.
Theory
In the proportional control algorithm, the controller output is proportional to the error signal, which is the difference between the setpoint and the process variable. In other words, the output of a proportional controller is the multiplication product of the error signal and the proportional gain.
This can be mathematically expressed as

$P_{\mathrm{out}} = K_p \, e(t) + p_0$

where
$p_0$: Controller output with zero error.
$P_{\mathrm{out}}$: Output of the proportional controller
$K_p$: Proportional gain
$e(t) = SP - PV$: Instantaneous process error at time t.
$SP$: Set point
$PV$: Process variable
Constraints: In a real plant, actuators have physical limitations that can be expressed as constraints on $P_{\mathrm{out}}$. For example, $P_{\mathrm{out}}$ may be bounded between −1 and +1 if those are the maximum output limits.
Qualifications: It is preferable to express $K_p$ as a unitless number. To do this, we can express $e(t)$ as a ratio with the span of the instrument. This span is in the same units as the error (e.g. degrees Celsius), so the ratio has no units.
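As a concrete illustration of the expression and constraints above, the following Python sketch (not part of the original article) computes the controller output from the setpoint and process variable and clamps it to actuator limits; the gain, bias and limit values are arbitrary examples:

def proportional_output(setpoint, process_variable, k_p, p0=0.0,
                        out_min=-1.0, out_max=1.0):
    """P_out = K_p * e(t) + p0, with e(t) = SP - PV, clamped to actuator limits."""
    error = setpoint - process_variable
    p_out = k_p * error + p0
    return max(out_min, min(out_max, p_out))

# Example: setpoint 50.0, measurement 47.5, gain 0.2 -> output 0.5
print(proportional_output(50.0, 47.5, k_p=0.2))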
Development of control block diagrams
Proportional control dictates $g_c(s) = K_c$. From the block diagram shown, assume that $r$, the setpoint, is the flowrate into a tank and $e$ is the error, which is the difference between setpoint and measured process output. $g_p(s)$ is the process transfer function; the input into the block is the flow rate and the output is the tank level.
The output as a function of the setpoint, $r$, is known as the closed-loop transfer function: $g_{CL} = \dfrac{g_c\, g_p}{1 + g_c\, g_p}$.
If the poles of $g_{CL}$ are stable, then the closed-loop system is stable.
First-order process
For a first-order process, a general transfer function is $g_p = \dfrac{k_p}{\tau_p s + 1}$. Combining this with the closed-loop transfer function above returns $g_{CL} = \dfrac{k_c k_p}{\tau_p s + 1 + k_c k_p}$. Simplifying this equation results in $g_{CL} = \dfrac{k}{\tau s + 1}$, where $k = \dfrac{k_c k_p}{1 + k_c k_p}$ and $\tau = \dfrac{\tau_p}{1 + k_c k_p}$. For stability in this system, $\tau > 0$; therefore, $1 + k_c k_p$ must be a positive number, and $k_c k_p > -1$ (standard practice is to make sure that $k_c k_p > 0$).
Introducing a step change $\Delta r$ to the system gives the output response $Y(s) = \dfrac{k}{\tau s + 1} \cdot \dfrac{\Delta r}{s}$.
Using the final-value theorem, $\lim_{t \to \infty} y(t) = \lim_{s \to 0} s\,Y(s) = k\,\Delta r = \dfrac{k_c k_p}{1 + k_c k_p}\,\Delta r < \Delta r$, which shows that there will always be an offset in the system.
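The persistent offset predicted by the final-value theorem can also be seen in a simple time-domain simulation. The sketch below integrates a first-order process under proportional control with Euler steps; all numerical values (gains, time constant, step size) are arbitrary illustrations:

def simulate_p_control(k_c=2.0, k_p=1.0, tau_p=5.0, setpoint=1.0,
                       dt=0.01, t_end=60.0):
    """Euler simulation of a first-order process dy/dt = (k_p*u - y)/tau_p under P control."""
    y = 0.0                      # process variable, starting at rest
    for _ in range(int(t_end / dt)):
        error = setpoint - y
        u = k_c * error          # proportional controller output
        y += (k_p * u - y) / tau_p * dt
    return y

final_value = simulate_p_control()
predicted = (2.0 * 1.0) / (1.0 + 2.0 * 1.0)   # k_c*k_p/(1 + k_c*k_p) * setpoint
print(final_value, predicted)  # both are about 0.667, leaving an offset of about 0.333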
Integrating process
For an integrating process, a general transfer function is $g_p = \dfrac{k_p}{s}$, which, when combined with the closed-loop transfer function, becomes $g_{CL} = \dfrac{k_c k_p}{s + k_c k_p}$.
Introducing a step change $\Delta r$ to the system gives the output response $Y(s) = \dfrac{k_c k_p}{s + k_c k_p} \cdot \dfrac{\Delta r}{s}$.
Using the final-value theorem, $\lim_{t \to \infty} y(t) = \lim_{s \to 0} s\,Y(s) = \Delta r$, meaning there is no offset in this system. This is the only process that will not have any offset when using a proportional controller.
Offset error
Offset error is the difference between the desired value and the actual value, error. Over a range of operating conditions, proportional control alone is unable to eliminate offset error, as it requires an error to generate an output adjustment. While a proportional controller may be tuned (via adjustment, if possible) to eliminate offset error for expected conditions, when a disturbance (deviation from existing state or setpoint adjustment) occurs in the process, corrective control action, based purely on proportional control, will result in an offset error.
Consider an object suspended by a spring as a simple proportional control. The spring will attempt to maintain the object in a certain location despite disturbances that may temporarily displace it. Hooke's law tells us that the spring applies a corrective force that is proportional to the object's displacement. While this will tend to hold the object in a particular location, the absolute resting location of the object will vary if its mass is changed. This difference in resting location is the offset error.
Proportional band
The proportional band is the band of controller output over which the final control element (a control valve, for instance) will move from one extreme to another. Mathematically, it can be expressed as:

$PB = \dfrac{100\%}{K_p}$

So if $K_p$, the proportional gain, is very high, the proportional band is very small, which means that the band of controller output over which the final control element will go from minimum to maximum (or vice versa) is very small. This is the case with on–off controllers, where $K_p$ is very high and hence, for even a small error, the controller output is driven from one extreme to the other.
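The inverse relationship between proportional gain and proportional band can be shown directly; the gain values below are arbitrary examples:

def proportional_band_percent(k_p: float) -> float:
    """Proportional band (in %) corresponding to a unitless proportional gain K_p."""
    return 100.0 / k_p

for gain in (0.5, 1.0, 2.0, 10.0, 100.0):
    print(f"K_p = {gain:6.1f} -> PB = {proportional_band_percent(gain):6.1f} %")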
Advantages
The clear advantage of proportional over on–off control can be demonstrated by car speed control. An analogy to on–off control is driving a car by applying either full power or no power and varying the duty cycle, to control speed. The power would be on until the target speed is reached, and then the power would be removed, so the car reduces speed. When the speed falls below the target, with a certain hysteresis, full power would again be applied. It can be seen that this would obviously result in poor control and large variations in speed. The more powerful the engine, the greater the instability; the heavier the car, the greater the stability. Stability may be expressed as correlating to the power-to-weight ratio of the vehicle.
In proportional control, the power output is always proportional to the (actual versus target speed) error. If the car is at target speed and the speed increases slightly due to a falling gradient, the power is reduced slightly, or in proportion to the change in error, so that the car reduces speed gradually and reaches the new target point with very little, if any, "overshoot", which is much smoother control than on–off control. In practice, PID controllers are used for this and the large number of other control processes that require more responsive control than using proportional alone.
References
External links
Proportional control compared to on–off or bang–bang control
Classical control theory
Control devices
Control engineering | Proportional control | Engineering | 1,389 |
184,383 | https://en.wikipedia.org/wiki/Benedict%27s%20reagent | Benedict's reagent (often called Benedict's qualitative solution or Benedict's solution) is a chemical reagent and complex mixture of sodium carbonate, sodium citrate, and copper(II) sulfate pentahydrate. It is often used in place of Fehling's solution to detect the presence of reducing sugars and other reducing substances. Tests that use this reagent are called Benedict's tests. A positive result of Benedict's test is indicated by a color change from clear blue to brick-red with a precipitate.
Generally, Benedict's test detects the presence of aldehyde groups, alpha-hydroxy-ketones, and hemiacetals, including those that occur in certain ketoses. For example, although the ketose fructose is not strictly a reducing sugar, it is an alpha-hydroxy-ketone and gives a positive test because the basic component of Benedict's reagent converts it into the aldoses glucose and mannose. Oxidation of the reducing sugar by the cupric (Cu2+) complex of the reagent produces cuprous ions (Cu+), which precipitate as insoluble red copper(I) oxide (Cu2O).
The test is named after American chemist Stanley Rossiter Benedict.
Composition and preparation
Benedict's reagent is a deep-blue aqueous solution. Each litre contains:
17.3 g copper sulfate
173 g sodium citrate
100 g anhydrous sodium carbonate or, equivalently, 270 g sodium carbonate decahydrate
Separate solutions of the reagents are made. The sodium carbonate and sodium citrate are mixed first, and then the copper sulfate is added slowly with constant stirring.
Sodium citrate acts as a complexing agent which keeps Cu2+ in solution, since it would otherwise precipitate. Sodium carbonate serves to keep the solution alkaline. In the presence of mild reducing agents, the copper(II) ion is reduced to copper(I), which precipitates in the alkaline conditions as very conspicuous red copper(I) oxide.
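Because the composition above is specified per litre of reagent, other batch sizes follow by simple scaling. The sketch below scales the listed quantities to an arbitrary volume; it uses the anhydrous sodium carbonate figure and is an arithmetic illustration only, not laboratory guidance:

# Grams of each component per litre of Benedict's reagent (from the list above)
PER_LITRE = {
    "copper(II) sulfate pentahydrate": 17.3,
    "sodium citrate": 173.0,
    "anhydrous sodium carbonate": 100.0,
}

def scale_recipe(volume_litres: float) -> dict:
    """Return grams of each component needed for the requested batch volume."""
    return {name: grams * volume_litres for name, grams in PER_LITRE.items()}

print(scale_recipe(0.25))  # quantities for a 250 mL batch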
Organic analysis
To test for the presence of monosaccharides and reducing disaccharide sugars in food, the food sample is dissolved in water and a small amount of Benedict's reagent is added. During a water bath, which is usually 4–10 minutes, the solution should progress through the colors of blue (with no reducing sugar present), orange, yellow, green, red, and then brick red precipitate or brown (if a high concentration of reducing sugar is present). A color change would signify the presence of a reducing sugar.
The common disaccharides lactose and maltose are directly detected by Benedict's reagent because each contains a glucose with a free reducing aldehyde moiety after isomerization.
Sucrose (table sugar) contains two sugars (fructose and glucose) joined by their glycosidic bond in such a way as to prevent the glucose undergoing isomerization to an aldehyde, or fructose to alpha-hydroxy-ketone form. Sucrose is thus a non-reducing sugar which does not react with Benedict's reagent. However, sucrose indirectly produces a positive result with Benedict's reagent if heated with dilute hydrochloric acid prior to the test, although it is modified during this treatment as the acidic conditions and heat break the glycosidic bond in sucrose through hydrolysis. The products of sucrose decomposition are glucose and fructose, both of which can be detected by Benedict's reagent as described above.
Starches do not react, or react very poorly, with Benedict's reagent because of the relatively small number of reducing-sugar groups, which occur only at the ends of the carbohydrate chains. Other carbohydrates which produce a negative result include inositol.
Benedict's reagent can also be used to test for the presence of glucose in urine, elevated levels of which are known as glucosuria. Glucosuria can be indicative of diabetes mellitus, but Benedict's test is not recommended or used for diagnosis of that condition, because other reducing substances such as ascorbic acid, drugs (levodopa, contrast agents used in radiological procedures) and homogentisic acid (alkaptonuria) can create a false positive.
As the color of the obtained precipitate can be used to infer the quantity of sugar present in the solution, the test is semi-quantitative. A greenish precipitate indicates about 0.5 g% concentration; a yellow precipitate indicates 1 g%; orange indicates 1.5 g%; and red indicates 2 g% or higher.
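The semi-quantitative color scale just described can be captured in a small lookup table; the mapping below merely restates the approximate figures above, and the function name is illustrative:

# Approximate reducing-sugar concentration (g%) suggested by the precipitate color
COLOR_TO_CONCENTRATION = {
    "blue":   0.0,   # no reducing sugar detected
    "green":  0.5,
    "yellow": 1.0,
    "orange": 1.5,
    "red":    2.0,   # 2 g% or higher
}

def estimate_concentration(color: str) -> float:
    """Return the approximate g% concentration implied by the observed color."""
    try:
        return COLOR_TO_CONCENTRATION[color.lower()]
    except KeyError:
        raise ValueError(f"Unrecognized color: {color!r}")

print(estimate_concentration("orange"))  # 1.5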
Quantitative reagent
Benedict's quantitative reagent contains potassium thiocyanate and is used to quantitatively determine the concentration of reducing sugars. This solution forms a copper thiocyanate precipitate which is white and can be used in titration. The titration should be repeated with 1% glucose solution instead of the sample for calibration.
Net reaction
The net reaction between an aldehyde (or an alpha-hydroxy-ketone) and the copper(II) ions in Benedict's solution may be written as:

RCHO + 2 Cu^2+ + 5 OH^− → RCOO^− + Cu2O↓ + 3 H2O.

The hydroxide ions in the equation form when sodium carbonate dissolves in water. With the citrate included, the reaction becomes:

RCHO + 2 Cu(C6H5O7)^− + 5 OH^− → RCOO^− + Cu2O↓ + 2 C6H5O7^3− + 3 H2O.
See also
Dextrose equivalent
Other oxidizing reagents
Fehling's solution
Tollens' reagent
Other reducing reagents
Jones reductor
Walden reductor
References
Analytical reagents
Biochemistry detection methods
Carbohydrate methods
Chemical tests
Coordination complexes
Copper compounds
Oxidizing agents | Benedict's reagent | Chemistry,Biology | 1,235 |
11,648,371 | https://en.wikipedia.org/wiki/Fibulin | Fibulin (FY-beau-lin) (now known as Fibulin-1 FBLN1) is the prototypic member of a multigene family, currently with seven members. Fibulin-1 is a calcium-binding glycoprotein. In vertebrates, fibulin-1 is found in blood and extracellular matrices. In the extracellular matrix, fibulin-1 associates with basement membranes and elastic fibers. The association with these matrix structures is mediated by its ability to interact with numerous extracellular matrix constituents including fibronectin, proteoglycans, laminins and tropoelastin. In blood, fibulin-1 binds to fibrinogen and incorporates into clots.
Fibulins are secreted glycoproteins that become incorporated into a fibrillar extracellular matrix when expressed by cultured cells or added exogenously to cell monolayers. The five known members of the family share an elongated structure and many calcium-binding sites, owing to the presence of tandem arrays of epidermal growth factor-like domains. They have overlapping binding sites for several basement-membrane proteins, tropoelastin, fibrillin, fibronectin and proteoglycans, and they participate in diverse supramolecular structures. The amino-terminal domain I of fibulin consists of three anaphylatoxin-like (AT) modules, each approximately 40 residues long and containing four or six cysteines. The structure of an AT module was determined for the complement-derived anaphylatoxin C3a, and was found to be a compact alpha-helical fold that is stabilized by three disulphide bridges in the pattern Cys14, Cys25 and Cys36 (where Cys is cysteine). The bulk of the remaining portion of the fibulin molecule is a series of nine EGF-like repeats.
Genes
FBLN1, FBLN2, FBLN3, FBLN4, FBLN5, FBLN7 and HMCN1 is also known as "fibulin-6".
References
External links
Protein domains
Blood proteins | Fibulin | Chemistry,Biology | 464 |
70,102,727 | https://en.wikipedia.org/wiki/Syntrophales | The Syntrophales are an order of gram-negative Thermodesulfobacteriota. It is the only order in the monotypic class Syntrophia. Members of the Syntrophales convert acetate into acetyl-CoA, which can be used as a source of carbon and energy. Because genes involved in fermentation appear to be missing, this acetyl-CoA might then be channeled into gluconeogenesis.
See also
List of bacterial orders
List of bacteria genera
References
Langwig, M.V., De Anda, V., Dombrowski, N. et al. Large-scale protein level comparison of Deltaproteobacteria reveals cohesive metabolic groups. ISME J 16, 307–320 (2022). https://doi.org/10.1038/s41396-021-01057-y
Thermodesulfobacteriota
Bacteria orders | Syntrophales | Biology | 195 |
22,526,294 | https://en.wikipedia.org/wiki/LabVantage | LabVantage Solutions, Inc. is a laboratory information management system (LIMS) provider based in Somerset, New Jersey. Founded in 1981, LabVantage is the third largest LIMS provider in the world.
Laboratory MicroSystems was founded by Mark Chudzicki and Michael Boskin in 1981. Chudzicki started the company when he was attending graduate school at Rensselaer Polytechnic Institute. The company began making money several years after it was founded but needed a $100,000 loan from New York state's Corporation for Innovation Development. Laboratory MicroSystems received financing from 100 Capital District shareholders in 1985. The company in 1986 was based in Hendrik Hudson Hotel, a downtown Troy hotel that was refurbished into an office building, employed 15 people, and had annual sales of $1 million.
It was named to the Inc. 500 in 1987 after a 532% increase in sales in its first five years in business. The company in 1988 primarily served Fortune 500 companies including General Electric, Dow Corning, Pennzoil, DuPont, Monsanto, and Exxon. It set up software that cost between $10,000 and $70,000 in 1990. Chudzicki and Boskin in 1990 sold the company, which had 12 employees at the time, to Instron. The offer was for about $2.5 million, half to be paid in Instron stock and half to be paid in cash. The acquisition was finalized at $2.42 million to be distributed among Laboratory MicroSystems' 100 shareholders.
In 1997, Instron sold Laboratory MicroSystems to Axiom Systems, a subsidiary of Purnendu Chatterjee's The Chatterjee Group, which renamed the company LabVantage. Strategic Directions International said in a 2000 report that "by the late 1990s, the company appeared to have lost its way" because the numerous products LabVantage had to maintain caused it to be "overwhelmed", which hurt client satisfaction. The report further noted that the company's sales had failed to increase meaningfully in the preceding few years.
In 2005, about half of the company's employees worked in India, while 60 employees were in North America and 20 were in Europe.
LabVantage's customers in the United States include Aventis, Pfizer, and Unilever's Best Foods (now called Hellmann's and Best Foods). In India, LabVantage provides services for GAIL, Indian Oil Corporation, and Reliance Industries.
References
External links
Official website
Companies established in 1981
Companies based in Somerset County, New Jersey
Information systems
Information technology companies of the United States | LabVantage | Technology | 531 |
6,814,674 | https://en.wikipedia.org/wiki/Malliavin%20derivative | In mathematics, the Malliavin derivative is a notion of derivative in the Malliavin calculus. Intuitively, it is the notion of derivative appropriate to paths in classical Wiener space, which are "usually" not differentiable in the usual sense.
Definition
Let $H$ denote the Cameron–Martin space, and let $C_0$ denote classical Wiener space:

$H := \{ h \in W^{1,2}([0,T]; \mathbb{R}^n) \mid h(0) = 0 \}$, the space of absolutely continuous paths starting at zero with square-integrable derivative;

$C_0 := C_0([0,T]; \mathbb{R}^n)$, the space of continuous paths starting at zero.

By the Sobolev embedding theorem, $H \subset C_0$. Let $i : H \to C_0$ denote the inclusion map.

Suppose that $F : C_0 \to \mathbb{R}$ is Fréchet differentiable. Then the Fréchet derivative is a map

$\mathrm{D}F : C_0 \to \mathrm{Lin}(C_0; \mathbb{R})$;

i.e., for paths $\sigma \in C_0$, $\mathrm{D}F(\sigma)$ is an element of $C_0^{*}$, the dual space to $C_0$. Denote by $\mathrm{D}_H F(\sigma)$ the continuous linear map $H \to \mathbb{R}$ defined by

$\mathrm{D}_H F(\sigma) := \mathrm{D}F(\sigma) \circ i : H \to \mathbb{R}$,

sometimes known as the H-derivative. Now define $\nabla_H F : C_0 \to H$ to be the adjoint of $\mathrm{D}_H F$ in the sense that

$\langle \nabla_H F(\sigma), h \rangle_H = (\mathrm{D}_H F)(\sigma)(h)$ for all $h \in H$.

Then the Malliavin derivative $\mathrm{D}_t$ is defined by

$(\mathrm{D}_t F)(\sigma) := \frac{\partial}{\partial t} \big[ (\nabla_H F)(\sigma) \big](t).$

The domain of $\mathrm{D}_t$ is the set of all Fréchet differentiable real-valued functions on $C_0$; the codomain is $L^2([0,T]; \mathbb{R}^n)$.

The Skorokhod integral $\delta$ is defined to be the adjoint of the Malliavin derivative:

$\delta := (\mathrm{D}_t)^{*}.$
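These definitions can be checked numerically in a simple case. For the evaluation functional F(σ) = σ(T), the Malliavin derivative is identically 1 on [0, T], so the directional derivative of F along a Cameron–Martin direction h equals the pairing ∫ (D_t F)(t) h′(t) dt = h(T). The following sketch (an illustration added here, not part of the original article) verifies this on a discretized path:

import numpy as np

T, n = 1.0, 1000
t = np.linspace(0.0, T, n + 1)
dt = t[1] - t[0]

rng = np.random.default_rng(0)
# A discretized Wiener path starting at zero, and a Cameron-Martin direction h with h(0) = 0
sigma = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))))
h = np.sin(np.pi * t / (2.0 * T))

def F(path):
    # Evaluation functional F(sigma) = sigma(T); its Malliavin derivative is 1 on [0, T]
    return path[-1]

eps = 1e-6
directional = (F(sigma + eps * h) - F(sigma)) / eps   # derivative of F along the direction h

malliavin = np.ones(n + 1)                            # (D_t F)(sigma) = 1 for this F
pairing = float(np.sum(malliavin[:-1] * np.diff(h)))  # approximates the integral of (D_t F) * h'(t)

print(directional, pairing, h[-1])                    # all approximately 1.0 = h(T)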
See also
H-derivative
References
Generalizations of the derivative
Stochastic calculus
Malliavin calculus | Malliavin derivative | Mathematics | 224 |
30,265,224 | https://en.wikipedia.org/wiki/Enzo%20Martinelli | Enzo Martinelli (11 November 1911 – 27 August 1999) was an Italian mathematician, working in the theory of functions of several complex variables: he is best known for his work on the theory of integral representations for holomorphic functions of several variables, notably for discovering the Bochner–Martinelli formula in 1938, and for his work in the theory of multi-dimensional residues.
Biography
Life
He was born in Pescia on 11 November 1911, where his father was the director of the local agricultural school. His family later moved to Rome, where his father ended his working career as Director-general of the Italian Ministry of Public Education. Enzo Martinelli lived in Rome for almost all of his life: the only exception was a period of nearly eight years, from 1947 to 1954, when he was in Genova, working at the local university. In 1946 he married Luigia Panella in Rome, herself a mathematician, who later became an associate professor at the Faculty of Engineering of the Sapienza University of Rome and who was his loving companion for the rest of his life. They had a son, Roberto, and a daughter, Maria Renata, who later followed in her parents' footsteps and also became a mathematician: four grandchildren completed their family.
Academic career
In 1933 he earned his laurea from the Sapienza University of Rome: the title of his thesis was "Sulle funzioni poligene di una e di due variabili complesse" ("On polygenic functions of one and of two complex variables"), and his thesis supervisor was Francesco Severi. From 1934 to 1946 he worked as an assistant professor, first to the chair of mathematical analysis held by Francesco Severi and then to the chair of geometry held by Enrico Bompiani. In 1939 he became "Libero Docente" (free professor) of mathematical analysis: he also taught courses on analytic geometry, algebraic geometry and topology as an associate professor. In 1946 he won a competitive examination, judged by a commission, for the chair of "Geometria analitica con elementi di Geometria Proiettiva e Geometria Descrittiva con Disegno" awarded by the University of Genova: second and third place went respectively to Giovanni Dantoni and Guido Zappa. Martinelli held that chair from 1946 to 1954, also teaching mathematical analysis, function theory, differential geometry and algebraic analysis as an associate professor. In 1954 he returned to Rome to the chair of geometry at the university, holding that chair until his retirement in 1982: he also taught courses on topology, higher mathematics and higher geometry by appointment. In the years 1968–1969, during a very difficult period for the Sapienza University of Rome, he served the university as director of the Guido Castelnuovo Institute of Mathematics.
He attended various conferences and meetings. In 1943 and in 1946 he was invited to Zurich by Rudolf Fueter to present his research; later, throughout his career, he lectured at almost all Italian universities and at many foreign ones.
He was also a member of the UMI Scientific Commission (from 1967 to 1972), of the editorial boards of the Rendiconti di Matematica e delle sue Applicazioni (from 1955 to 1992) and of the Annali di Matematica Pura ed Applicata (from 1965 to 1999).
Honors
According to , Enzo's talent for mathematics was already evident when he was only a lyceum student. While still attending the university, he won the Cotronei Foundation prize, and after earning his laurea, the Beltrami Foundation prize, the Fubini and Torelli prizes, and the Prize for Mathematical Sciences of the Ministry of National Education: this last one was awarded him in 1943, and the judging commission consisted of Francesco Severi (as the president of the commission), Ugo Amaldi and Antonio Signorini (as the supervisor of the commission).
In 1948 he was elected Corresponding Member of the Accademia Ligure di Scienze e Lettere: in 1961 and in 1977 he was elected respectively Corresponding and Full Member of the Accademia dei Lincei, and from 1982 to 1985 he was "Professore Linceo". Finally, in 1980 he was elected Corresponding Member of the Accademia delle Scienze di Torino and then, in 1994, Full Member. Also, in 1986, the Sapienza University of Rome, to which Enzo Martinelli was particularly tied for all his life, awarded him the title of professor emeritus.
Personality traits
He is unanimously remembered as a true gentleman, gifted with caring attention, politeness, generosity and the rare ability to listen to colleagues and students alike: colleagues remember long conversations with him on various mathematical research topics, and his readiness to give help and advice to whoever asked for it. In particular, Rizza recalls the time when he was Martinelli's doctoral student at the University of Genova: they met every Sunday afternoon at Martinelli's home, since Martinelli was not able to meet him during the week. During one of their meetings, lasting a little more than two hours, Martinelli taught him Élie Cartan's theory of exterior differential forms, and Rizza successfully used this tool in his first research works. Another episode illustrating this aspect of Martinelli's personality is recalled by Gaetano Fichera. When Fichera returned to Rome in 1945, at the end of World War II, he described to Martinelli a theory identical to the theory of differential forms, which he had developed while a prisoner of the Nazis in Teramo during the war. Martinelli, very tactfully, told him that the idea had already been developed by Élie Cartan and Georges de Rham.
An excellent teacher himself, capable of arousing curiosity and enthusiasm with his lessons, he greatly admired and respected his own teachers: however, this was quite common for Italian scientists of his and the preceding generations, who were advised in the early days of their scientific careers by some of the best Italian scientists ever. His doctoral advisor was Francesco Severi: other great Italian mathematicians were among his teachers. Guido Castelnuovo, Federigo Enriques, Enrico Bompiani, Tullio Levi-Civita, Mauro Picone and Antonio Signorini were all working at the Sapienza University of Rome when Enzo Martinelli was a student there and followed their lessons: he described the activity of the institute of mathematics during that period as extremely stimulating.
Another central aspect of his personality was a deep sense of justice and legality: Martinelli was very careful in performing his duties as a citizen and university professor, and he was also ready to fight for his own rights and for the needs of higher education. Concerned by the growing interference of bureaucracy in university education, already in the 1950s he was heard complaining that "In Italia mancano le menti semplificatrici" ("In Italy, simplifying minds are lacking"). Martinelli was also free from every kind of authoritarianism, to the point that when, during the protests of 1968 in Italy, many newspapers accused the Italian university scientific community of being authoritarian, all of Martinelli's assistant professors and students (and perhaps Martinelli himself) were perplexed. In the same period, while serving as director of the Guido Castelnuovo Institute of Mathematics at the Sapienza University of Rome, his rare intellectual honesty and rigorous rationality, according to Rizza, caused him trouble when dealing with many who "believed in everything except the cold light of reason".
Work
Research activity
He is the author of more than 50 research works, the first of which was published when Martinelli was still an undergraduate student: precisely, his research production consists of 47 papers and some 30 treatises, textbooks and various other publications. His research personality has been described by two words, "enthusiasm" and "dissatisfaction": enthusiasm meaning his steady interest in mathematics at all levels, and dissatisfaction meaning the desire to go deeper into every mathematical problem investigated, without stopping at the first success, and to express all results in a simple, elegant and essential form.
Teaching activity
The aspects of his personality described above and his deep professional commitment also made him a great teacher: at least fifteen textbooks on geometry, topology and complex analysis testify to his teaching activity. Those books are models of clarity and mathematical rigour, and also offer the clever student insights into more advanced theories and problems: indeed, one of Martinelli's concerns was to teach mathematics by showing its lively development and the attractiveness of the interesting, difficult problems it offers, so that no gifted student would abandon the idea of doing mathematical research.
Selected publications
. The first paper where the now called Bochner-Martinelli formula is introduced and proved.
. In this paper, Martinelli proves an earlier result of Luigi Amoroso on the boundary values of pluriharmonic function by using tensor calculus.
. Available at the SEALS Portal . In this paper Martinelli gives a proof of Hartogs' extension theorem by using the Bochner-Martinelli formula.
. Available at the SEALS Portal .
. Available at the SEALS Portal . In this work, Martinelli goes further in its analysis of integral representations of holomorphic functions of complex variables whose domain of integration is a set whose dimension (as a subset of the –dimensional euclidean space) assumes all integer values between and .
. The concluding work of Martinelli on the theory of integral representations of holomorphic functions of complex variables.
. This paper contains Martinelli's improvement of the solution of the Dirichlet problem for holomorphic functions of several complex variables given by few years before: Martinelli relaxes the smoothness condition on the boundary of the given domain, requiring it to be only of class . However, the boundary value is required to be of the same class, smoother than class data allowed by Gaetano Fichera.
.
. The notes form a course, published by the Accademia Nazionale dei Lincei, held by Martinelli during his stay at the Accademia as "Professore Linceo".
. In this article, Martinelli gives another form of the Martinelli–Bochner formula.
See also
Almost complex manifold
Bochner–Martinelli formula
Complex manifold
Kähler manifold
Pluriharmonic function
Residue theorem
Several complex variables
Notes
References
Biographical and general references
.
, freely available from the Ministero per i Beni Culturali e Ambientali – Dipartimento per i Beni Archivistici e Librari – Direzione Generale per gli Archivi. The complete inventory of the Reale Accademia d'Italia, which incorporated the Accademia Nazionale dei Lincei between 1939 and 1944.
, available from the Accademia delle Scienze di Torino. The relation on the activity of the "Accademia" during the years 1998–1999 read by the president of the Turin Academy of Sciences.
. The story of the life of Gaetano Fichera written by his wife, Matelda Colautti Fichera. The first phrase of the title is the last verse (and title) of a famous poem of Salvatore Quasimodo, and was the concluding phrase of the last lesson of Fichera, in the occasion of his retirement from university teaching in 1992, published in . There is also a free electronic edition with a different title: .
. The Last Lesson of the course of higher analysis by Gaetano Fichera, before his retirement from university teaching in 1992.
.
. The biographical and bibliographical entry (updated up to 1976) on Luigi Amerio, published under the auspices of the Accademia dei Lincei in a book collecting many profiles of its living members up to 1976.
. A celebration article written by Giovanni Battista Rizza, his first former doctoral student, published in the proceedings of the conference .
. An obituary written Giovanni Battista Rizza, by his first doctoral student.
. The commemoration of Enzo Martinelli written by his first doctoral student.
. This is a monographic fascicle published on the "Bollettino dell'Unione Matematica Italiana", describing the history of the Istituto Nazionale di Alta Matematica Francesco Severi from its foundation in 1939 to 2003. It was written by Gino Roghi and includes a presentation by Salvatore Coen and a preface by Corrado De Concini. It is almost exclusively based on sources from the institute archives: the wealth and variety of materials included, jointly with its appendices and indexes, make this monograph a useful reference not only for the history of the institute itself, but also for the history of many mathematicians who taught or followed the institute courses or simply worked there.
. The personal reminiscences about his geometry teacher Enzo Martinelli, by Giuseppe Tomassini.
. This work describes the research activity at the Sapienza University of Rome and at the (at that time newly created) "Istituto Nazionale di Alta Matematica Francesco Severi" from the end of the 1930s to the early 1940s.
Scientific references
. An epoch-making paper in the theory of CR-functions, where the Dirichlet problem for analytic functions of several complex variables is solved for general data. An English translation of the title reads as:-"Characterization of the trace, on the boundary of a domain, of an analytic function of several complex variables".
, (in Italian). Notes from a course held by Francesco Severi at the Istituto Nazionale di Alta Matematica (which at present bears his name), containing appendices of Enzo Martinelli, Giovanni Battista Rizza and Mario Benedicty. An English translation of the title reads as:-"Lectures on analytic functions of several complex variables – Lectured in 1956–57 at the Istituto Nazionale di Alta Matematica in Rome".
Proceedings of conferences dedicated to Enzo Martinelli
. The proceedings of the "International Meeting in honour of ENZO MARTINELLI – Rome, 30 May – 1 June 1983", an international conference in his honour organized by M. Bruni, G. Fichera, S. Marchiafava, G. B. Rizza e F. Succi, published in the "Rivista di Matematica della Università di Parma" journal: the papers and are taken from them.
. The electronic proceedings of a conference on topics belonging to or related to André Lichnerowicz and Enzo Martinelli fields of research.
External links
. The biographical entry about Enzo Martinelli the Enciclopedia Treccani.
1911 births
1999 deaths
People from Pescia
20th-century Italian mathematicians
Geometers
Complex analysts
Italian mathematical analysts
Members of the Lincean Academy
Academic staff of the Sapienza University of Rome | Enzo Martinelli | Mathematics | 3,020 |
5,648,302 | https://en.wikipedia.org/wiki/Expansion%20joint | An expansion joint, or movement joint, is an assembly designed to hold parts together while safely absorbing temperature-induced expansion and contraction of building materials. They are commonly found between sections of buildings, bridges, sidewalks, railway tracks, piping systems, ships, and other structures.
Building faces, concrete slabs, and pipelines expand and contract due to warming and cooling from seasonal variation, or due to other heat sources. Before expansion joint gaps were built into these structures, they would crack under the stress induced.
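The amount of movement a joint must absorb follows from the linear thermal expansion relation ΔL = α L ΔT. The sketch below evaluates it for a few materials; the expansion coefficients are approximate textbook values, and the span and temperature swing are arbitrary examples:

# Typical linear expansion coefficients, in 1/K (approximate textbook values)
ALPHA = {
    "concrete":  12e-6,
    "steel":     12e-6,
    "aluminium": 23e-6,
}

def expansion_mm(material: str, length_m: float, delta_t_k: float) -> float:
    """Change in length (mm) of a member of the given length for a temperature swing delta_t_k."""
    return ALPHA[material] * length_m * delta_t_k * 1000.0

# A 30 m section subjected to a 40 K seasonal temperature swing
for material in ALPHA:
    print(f"{material:9s}: {expansion_mm(material, 30.0, 40.0):5.1f} mm")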
Bridge expansion joints
Bridge expansion joints are designed to allow for continuous traffic between structures while accommodating movement, shrinkage, and temperature variations on reinforced and prestressed concrete, composite, and steel structures. They stop the bridge from bending out of place in extreme conditions, and also allow enough vertical movement to permit bearing replacement without the need to dismantle the bridge expansion joint. There are various types, covering a range of movement capacities, including joints for small movement (EMSEAL BEJS, XJS, JEP, WR, WOSd, and Granor AC-AR), medium movement (ETIC EJ, Wd), and large movement (WP, ETIC EJF/Granor SFEJ).
Modular expansion joints are used when the movements of a bridge exceed the capacity of a single gap joint or a finger type joint. Modular multiple-gap expansion joints can accommodate movements in all directions and rotations about every axis. They can be used for longitudinal movements of as little as 160mm, or for very large movements of over 3000 mm. The total movement of the bridge deck is divided among a number of individual gaps which are created by horizontal surface beams. The individual gaps are sealed by watertight elastomeric profiles, and surface beam movements are regulated by an elastic control system. The drainage of the joint is via the drainage system of the bridge deck. Certain joints feature so-called “sinus plates” on their surface, which reduce noise from over-passing traffic by up to 80%.
Masonry control joints are also sometimes used in bridge slabs.
Masonry
Clay bricks expand as they absorb heat and moisture. This places compression stress on the bricks and mortar, encouraging bulging or flaking. A joint replacing mortar with elastomeric sealant will absorb the compressive forces without damage. Concrete decking (most typically in sidewalks) can suffer similar horizontal issues, which is usually relieved by adding a wooden spacer between the slabs. The wooden expansion joint compresses as the concrete expands. Dry, rot-resistant cedar is typically used, with a row of nails protruding out that will embed into the concrete and hold the spacer in place.
Comparison to control joints
Control joints, or contraction joints, are sometimes confused with expansion joints, but have a different purpose and function. Concrete and asphalt have relatively weak tensile strength, and typically form random cracks as they age, shrink, and are exposed to environmental stresses (including stresses of thermal expansion and contraction). Control joints attempt to attenuate cracking by designating lines for stress relief. They are cut into pavement at regular intervals. Cracks tend to form along the cuts, rather than in random fashion elsewhere. This is primarily an aesthetic issue; the appearance of even, regular cracking, which may be hidden in the joint’s crevice, is often preferred over random cracking.
Thus, expansion joints reduce cracks, including in the overall structure, while control joints manage cracks, primarily along the visual surface.
Roadway control joints may be sealed with hot tar, cold sealant (such as silicone), or compression sealant (such as rubber or polymers based crossed linked foams). Mortar with a breakaway bond may be used to fill some control joints.
Control joints must have adequate depth and not exceed maximum spacing for them to be effective. Typical specifications for a four-inch-thick slab are listed below (a calculation sketch applying these rules follows the list):
25% depth of material
spacing at 24× to 36× of slab depth (some specification call for a maximum of 30×)
special care for inside corners
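The sketch below applies these rules of thumb to an arbitrary slab thickness; it only restates the figures listed here and is not a substitute for a project specification:

def control_joint_rules(slab_depth_in: float, max_multiple: float = 30.0):
    """Apply the rules of thumb above: cut depth = 25% of slab depth,
    spacing between 24x and 36x the slab depth, capped at max_multiple."""
    cut_depth = 0.25 * slab_depth_in
    spacing_min = 24.0 * slab_depth_in
    spacing_max = min(36.0, max_multiple) * slab_depth_in
    return cut_depth, spacing_min, spacing_max

depth, s_min, s_max = control_joint_rules(4.0)
print(f"cut depth: {depth:.1f} in, spacing: {s_min/12:.0f} to {s_max/12:.0f} ft")
# -> cut depth: 1.0 in, spacing: 8 to 10 ft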
Tile and stone flooring movement joints
Movement joints are designed to absorb the movement of the subfloor and the tiles themselves due to thermal expansion and contraction, moisture variations, and structural shifts. These joints are essentially gaps, typically filled with a flexible material like silicone or rubber, that separate tiles and allow for movement without causing the tiles to crack, buckle, or become disjointed.
Railway expansion joints
If a railway track runs over a bridge which has expansion joints that move more than a few millimeters, the track must be able to compensate this longer expansion or contraction. On the other hand, the track must always provide a continuous surface for the wheels traveling over it. These conflicting requirements are served by special expansion joints, where two rails glide along with each other at a very acute angle during expansion or contraction. They are typically seen near one or both ends of large steel bridges. Such an expansion joint looks somewhat like the tongue of a railroad switch, but with a different purpose and operation.
Ducted air systems
Expansion joints are required in large ducted air systems to allow fixed pieces of piping to be largely free of stress as thermal expansion occurs. Bends in elbows also can accommodate this. Expansion joints also isolate pieces of equipment such as fans from the rigid ductwork, thereby reducing vibration to the ductwork as well as allowing the fan to “grow” as it comes up to the operating air system temperature without placing stress on the fan or the fixed portions of ductwork.
An expansion joint is designed to allow axial (compressive), lateral (shear), or angular (bending) deflections. Expansion joints can be non-metallic or metallic (often called bellows type). Non-metallic joints can be a single ply of rubberized material or a composite made of multiple layers of heat- and erosion-resistant flexible material. Typical layers are: an outer cover to act as a gas seal, a corrosion-resistant material such as Teflon, a layer of fiberglass to act as an insulator and to add durability, several layers of insulation to ensure that the heat transfer from the flue gas is reduced to the required temperature, and an inside layer.
A bellows is made up of a series of one or more convolutions of metal to allow the axial, lateral, or angular deflection.
Pipe expansion joints
Pipe expansion joints are necessary in systems that convey high temperature substances such as steam or exhaust gases, or to absorb movement and vibration. A typical joint is a bellows of metal (most commonly stainless steel), plastic (such as PTFE), fabric (such as glass fibre) or an elastomer such as rubber.
A bellows is made up of a series of convolutions, with the shape of the convolution designed to withstand the internal pressures of the pipe, but flexible enough to accept axial, lateral, and angular deflections. Expansion joints are also designed for other criteria, such as noise absorption, anti-vibration, earthquake movement, and building settlement. Metal expansion joints have to be designed according to rules laid out by EJMA, for fabric expansion joints there are guidelines and a state-of-the-art description by the Quality Association for Fabric Expansion Joints. Pipe expansion joints are also known as "compensators", as they compensate for the thermal movement.
Pressure balanced expansion joints
Expansion joints are often included in industrial piping systems to accommodate movement due to thermal and mechanical changes in the system. When the process requires large changes in temperature, metal components change size. Expansion joints with metal bellows are designed to accommodate certain movements while minimizing the transfer of forces to sensitive components in the system.
Pressure created by pumps or gravity is used to move fluids through the piping system. Fluids under pressure occupy the volume of their container. The unique concept of pressure balanced expansion joints is that they are designed to maintain a constant volume by having balancing bellows compensate for volume changes in the line bellows, which is moved by the pipe. An early name for these devices was “pressure-volumetric compensator”.
Manufacturing of rubber expansion joints
Wrapping fabric reinforced rubber sheets
Rubber expansion joints are mainly manufactured by manual wrapping of rubber sheets and fabric reinforced rubber sheets around a bellows-shaped product mandrel. Besides rubber and fabric, reinforced rubber and/or steel wires or metal rings are added for additional reinforcement. After the entire product is built up on the mandrel, it is covered with a winding of (nylon) peel ply to pressurize all layers together. Because of the labor-intensive production process, a large part of the production has moved to eastern Europe and Asian countries.
Molded rubber expansion joints
Some types of rubber expansion joints are made with a molding process. Typical joints that are molded are medium-sized expansion joints with bead rings, which are produced in large quantities. These rubber expansion joints are manufactured on a cylindrical mandrel, which is wrapped with bias cut fabric ply. At the end the bead rings are positioned and the end sections are folded inwards over the bead rings. This part is finally placed in a mold and molded into shape and vulcanized. This is a highly automated solution for large quantities of the same type of joint.
Automated winding of rubber expansion joints
New technology has been developed to wind rubber and reinforcement layers on the (cylindrical or bellows-shaped) mandrel automatically using industrial robots instead of manual wrapping. This is fast and accurate and provides repeatable high quality. Another aspect of using industrial robots for the production of rubber expansion joints is the possibility to apply an individual reinforcement layer instead of using pre-woven fabric. The fabric reinforcement is pre-woven and cut at the preferred bias angle. With individual reinforcement it is possible to add more or less fiber material at different sections of the product by changing the fiber angles over the length of the product.
Expansion joint accessories
Liners
Internal liners can be used to either protect the metallic bellows from erosion or reduce turbulence across the bellows. They must be used when purge connectors are included in the design. In order to provide enough clearance in the liner design, appropriate lateral and angular movements must be specified by the designer. When designing an expansion joint with combination ends, flow direction must be specified as well.
Covers
External covers or shrouds should be used to protect the internal bellows from being damaged. They also serve a purpose as insulation of the bellows. Covers can either be designed as removable or permanent accessories.
Particulate barriers/purge connectors
In systems that have a media with significant particulate content (i.e. flash or catalyst), a barrier of ceramic fiber can be utilized to prevent corrosion and restricted bellows flexibility resulting from the accumulation of the particulate. Purge connectors may also be utilized to perform this same function. Internal liners must also be included in the design if the expansion joint includes purge connectors or particulate barriers.
Limit rods
Limit rods may be used in an expansion joint design to limit the axial compression or expansion. They allow the expansion joint to move over a range according to where the nut stops are placed along the rods. Limit rods are used to prevent bellows over-extension while restraining the full pressure thrust of the system.
Failure modes
Expansion joint failure can occur for various reasons, but experience shows that failures fall into several distinct categories. This list includes, but is not limited to: shipping and handling damage, improper installation/insufficient protection, during/after installation, improper anchoring, guiding, and supporting of the system, anchor failure in service, corrosion, system over-pressure, excessive bellows deflection, torsion, bellows erosion, and particulate matter in bellows convolutions restricting proper movement.
There are various actions that can be taken to prevent and minimize expansion joint failure. During installation, prevent any damage to the bellows by carefully following the instructions furnished by the manufacturer. After installation, carefully inspect the entire piping system to see if any damage occurred during installation, if the expansion joint is in the proper location, and if the expansion joint flow direction and positioning is correct. Also, periodically inspect the expansion joint throughout the operating life of the system in order to check for external corrosion, loosening of threaded fasteners and deterioration of anchors, guides, and other hardware.
Other expansion joint types
Other types of expansion joints include: fabric expansion joints, metal expansion joints (pressure-balanced expansion joints are a type of metal expansion joint), toroidal expansion joints, gimbal expansion joints, universal expansion joints, in-line expansion joints, refractory-lined expansion joints, hinged expansion joints, reinforced expansion joints and more.
Copper expansion joints are designed to accommodate the movement of building components due to temperature, loads, and settlement. Copper is easy to form and lasts a long time. Details regarding roof conditions, roof edges, and floors are available.
See also
Breather switch
Copper expansion joints for buildings
Expansion Joint Manufacturers Association
Metal expansion joint
Reinforced rubber
Slide plate
Toroidal expansion joint
References
External links
Quality Association for Fabric Expansion Joints
Structural engineering
Road hazards
Piping
Heating, ventilation, and air conditioning
Mechanical engineering
de:Dehnfuge | Expansion joint | Physics,Chemistry,Technology,Engineering | 2,690 |
1,555,681 | https://en.wikipedia.org/wiki/Finger%20%28unit%29 | A finger (sometimes fingerbreadth or finger's breadth) is any of several units of measurement that are approximately the width of an adult human finger. [Exactly which part of the finger should be used is not defined; the width at the base of the fingernail (#6 in the sketch) is typically less than that at the knuckle (#5).]
The digit, also known as digitus or digitus transversus (Latin), dactyl (Greek) or dactylus, or finger's breadth, is 3/4 of an inch or 1/16 of a foot (about 2 cm).
In medicine and related disciplines (anatomy, radiology, etc.) the fingerbreadth (literally the width of a finger) is an informal but widely used unit of measure.
In the measurement of distilled spirits, a finger of whiskey refers to the amount of whiskey that would fill a glass to the level of one finger wrapped around the glass at the bottom.
Another definition (from Noah Webster): "nearly an inch."
Finger is also the name of a longer unit of length, used historically in cloth measurement, to mean one eighth of a yard or 4½ inches (114.3 mm). Again, which finger and whose finger is not defined.
These units have no legal status but remain in use for 'rough and ready' comparisons.
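The conversions implied by these definitions are simple to check directly. The following is a minimal sketch (the unit values are the nominal ones quoted above; the script and its names are illustrative only, since these are informal units with no exact standard):

```python
# Minimal sketch: nominal metric equivalents of the finger-based units
# described above. The constants follow the definitions quoted in the text;
# these are informal units, so the values are approximate by nature.

MM_PER_INCH = 25.4

UNITS_IN_INCHES = {
    "digit (3/4 in)": 0.75,        # digitus / fingerbreadth
    "cloth finger (1/8 yd)": 4.5,  # one eighth of a yard
}

for name, inches in UNITS_IN_INCHES.items():
    print(f"{name} = {inches * MM_PER_INCH:.1f} mm")
```

Running the sketch gives roughly 19 mm for the digit and 114.3 mm for the cloth finger, matching the figures above.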
See also
('6' in the diagram above)
(before 1826)
(from 1826)
References
Units of length
Human-based units of measurement | Finger (unit) | Mathematics | 307 |
46,704,027 | https://en.wikipedia.org/wiki/Penicillium%20melanoconidium | Penicillium melanoconidium is a species in the genus Penicillium which produces xanthomegnin, verrucosidin, roquefortine C and penitrem A. Penicillium melanoconidium occurs in grain.
References
Further reading
melanoconidium
Fungi described in 2004
Fungus species | Penicillium melanoconidium | Biology | 67 |
73,124,554 | https://en.wikipedia.org/wiki/Linked-read%20sequencing | Linked-read sequencing, a type of DNA sequencing technology, uses a specialized technique that tags DNA molecules with unique barcodes before fragmenting them. In traditional sequencing, DNA is broken into small fragments that are sequenced individually, producing short reads from which it is difficult to accurately reconstruct the original DNA sequence; the unique barcodes of linked-read sequencing allow scientists to link together DNA fragments that come from the same DNA molecule. A pivotal benefit of this technology lies in the small quantities of DNA required for large genome information output, effectively combining the advantages of long-read and short-read technologies.
History
This sequencing method was originally developed by 10x Genomics in 2015, and was launched under the name 'GemCode' or 'Chromium'. GemCode employed a method of gel bead-based barcoding to amalgamate short DNA fragments. The longer fragments produced by this could then be sequenced using validated technology such as Illumina next-generation sequencing. An updated version of linked-read sequencing was introduced by the same company in 2018, termed 'Linked-Reads V2'. While GemCode uses a single barcode for tagging of both the gel bead and the DNA fragment, Linked-Reads V2 uses separate barcodes for improved detection of genetic variants.
The group that developed the linked-read sequencing technology published their first paper on it in 2016. The authors initially developed the technology to sequence the genomes of both healthy individuals and cancer patients in order to determine somatic mutations, copy number variations, and structural variations in cancer genomes. Later that year, another research group combined linked-read sequencing with long-read sequencing technology to assemble a human genome. Both studies demonstrated the utility of linked-read sequencing in comprehensive genome analysis and in understanding genetic diseases. However, in 2019, a lawsuit relating to patent infringement resulted in 10x Genomics discontinuing their line of linked-read products.
Method
Overview
Linked-read sequencing is microfluidic-based and needs only nanograms of input DNA. One nanogram of DNA can be distributed across more than 100,000 droplet partitions, where DNA fragments are barcoded and subjected to polymerase chain reactions (PCR). As a result, DNA fragments (or reads) that share the same barcode can be grouped as coming from one single long input DNA sequence, and long-range information can be assembled from short reads.
Steps of Linked-read sequencing:
Sample Preparation: DNA is extracted from a sample (e.g., blood) and cut into fragments 50 to 200 kilobase pairs long.
Barcode Sequencing: each DNA fragment is labelled with a unique barcode through a process known as "Gel Bead-In Emulsion" (GEM).
Library Preparation: barcoded DNA fragments are amplified with PCR to generate sequencing libraries.
Sequencing: with Illumina next-generation sequencing technology, generate millions to billions of short sequence reads that represent fragments of the original DNA molecules.
Barcode Processing: group short reads into longer fragments based on their shared barcodes (a minimal sketch of this step follows the list).
Downstream Analysis: processed reads are aligned to a reference genome, or used for de novo assembly of complex genomes, haplotype phasing, or identification of structural variations.
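As an illustration of the barcode-processing step above, the following minimal sketch groups short reads by their barcode and estimates the span of the original long fragment. The barcodes, coordinates and read tuples are invented for illustration and do not correspond to any real pipeline, file format or vendor software:

```python
# Minimal sketch of barcode processing, assuming each short read is
# represented as (barcode, chromosome, start_position). Values are illustrative.
from collections import defaultdict

def group_reads_by_barcode(reads):
    """Group short reads that share a barcode; each group approximates
    one long input DNA molecule."""
    groups = defaultdict(list)
    for barcode, chrom, pos in reads:
        groups[barcode].append((chrom, pos))
    return groups

def inferred_fragment_span(positions):
    """Estimate the span of the original long fragment from its reads."""
    starts = [pos for _, pos in positions]
    return min(starts), max(starts)

reads = [
    ("ACGT01", "chr1", 100_000), ("ACGT01", "chr1", 135_000),
    ("ACGT01", "chr1", 148_000), ("TTAG07", "chr2", 55_000),
    ("TTAG07", "chr2", 61_500),
]

for barcode, positions in group_reads_by_barcode(reads).items():
    lo, hi = inferred_fragment_span(positions)
    print(f"{barcode}: {len(positions)} reads, inferred span {lo}-{hi}")
```

In a real workflow the grouped reads would then be passed to alignment or de novo assembly tools, as described in the Processing section below.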
Barcode Sequencing
During barcode sequencing, high molecular weight DNA samples that contain the targeted DNA sequence, ranging from fifty to several hundred kilobases in size, are combined with gel beads containing unique barcodes, enzymes, and sequencing reagents. Microfluidic device can partition input DNA molecules into individual nanoliter-sized droplets of water-in-oil emulsion, called GEMs. Each GEM contains gel beads coated with the same barcode and primers, and a small amount of DNA. The primers are complementary to specific regions of the DNA molecule, allowing for amplification of the DNA in the droplets through PCR. The barcodes enable the identification and grouping of sequencing reads that originate from the same long fragment, which is crucial for downstream analysis.
Library Preparation and Sequencing
The barcoded DNA fragments are amplified using PCR to create a library of DNA fragments with identical barcodes. All the fragments derived from a given DNA molecule are tagged with the same barcode. This step increases the quantity of DNA for sequencing and reduces the chances of losing unique DNA fragments during sequencing. Droplets (or GEM) are later collected in a tube, and the emulsion is broken, releasing the amplified, barcoded DNA sequences.
Standard Illumina next-generation sequencing technology can be used to sequence libraries. During sequencing, the barcodes are read along with the DNA sequences, allowing researchers and scientists to group together DNA fragments that originate from the same DNA molecule. Even though each DNA fragment is typically not fully sequenced, the information from many overlapping fragments in the same genomic region can be combined to reconstruct the long stretches of the genome. Therefore, a genome can be easily assembled from scratch without any prior reference.
Processing
The raw sequencing data is then processed through bioinformatics (e.g., the GemCode analysis software developed by 10x Genomics) to remove low-quality reads and to assign reads to their respective barcodes. Reads can be aligned to a reference genome or assembled de novo to generate long-range contigs. The read alignment step is important for determining the order and orientation of the long DNA fragments, and for identifying genomic variations, such as insertions or deletions.
Applications
De Novo Genome Assembly
Linked-read sequencing can facilitate de novo genome assembly, which involves reconstructing a genome from scratch without any prior reference. Linked-read sequencing enables assembly of large genomic regions, and helps improve the completeness and contiguity of the resulting genome. This can be particularly useful for studying organisms that lack a high-quality reference genome, such as non-model organisms or organisms with complex genomes. Many scientists have been using linked-read sequencing technology for de novo genome assembly recently in a variety of organisms, including humans, plants, and animals. For example, Dr. Evan Eichler and his research group used linked-read sequencing to assemble genome of orangutan, which had previously been difficult to study due to its complex genome. The resulting genome assembly helped scientists to study new insights into the evolutionary history of primates and the genetic basis of human diseases. Also, the aligned or assembled reads can be used for other genetic investigations or downstream analysis, such as haplotype phasing.
Haplotype Phasing
Haplotype refers to a group of genetic variants inherited together on a chromosome from one parent due to their genetic linkage. Haplotype phasing (also called haplotype estimation) refers to the process of reconstructing individual haplotypes, important for determining the genetic basis of diseases. Linked-read sequencing allows consistent coverage of genes related to different diseases, helping scientists to obtain all the regions carrying mutations from targeted genes. For example, in 2018, a group of researchers used linked-read sequencing technology to sequence genetic information from a pregnant woman who was a carrier of Duchenne muscular dystrophy (DMD) mutation. Linked-read sequencing allows them to identify the maternal haplotypes and determine the presence of the mutant alleles in the foetal DNA. This non-invasive prenatal diagnosis of DMD demonstrates the clinical applicability of linked-read sequencing.
Structural Variation Analysis
Structural variations, such as deletions, duplications, inversions, translocations, and other rearrangements, are common in human genomes. These variations can have significant impacts on genome functions, and have been implicated in many diseases. Linked-read sequencing technology labels all reads that originate from the same long DNA fragment with the same barcode, so it enables the detection of a large number of structural variants. Complexity of structural variants can be resolved with linked-read sequencing, and provide a complete picture of the genomic landscape. Many scientists have already been using linked-read sequencing to identify and characterise structural variants in diverse populations, including people with genetic disorders or cancers
Transcriptome Analysis
Transcriptome analysis is the study of all the RNA transcripts that are produced by the genome of an organism. Linked-read sequencing has been used by researchers to assemble transcript isoforms and alternative splicing events. Information regarding alternative splicing events can provide insights into the regulation of gene expression in human transcriptome
Epigenetic Analysis
Epigenetics refers to the study of heritable changes in genetic activities that are distinct from changes in DNA sequences. Epigenetic analysis involves studying DNA-protein interactions, histone modifications, and DNA methylation. Linked-read sequencing has been used for studying DNA methylation patterns by many studies. For example, in 2021, a study investigated the DNA methylation differences in peripheral blood cells between twins, in which one twin had Alzheimer’s Disease and the other was cognitively normal. Linked-read sequencing technology allowed researchers to identify more than 3000 differentially methylated regions between these twins discordant for Alzheimer’s Disease, and investigation of these differentially methylated regions eventually led to identification of genes enriched in neurodevelopmental processes, neuronal signalling, and immune system functions
Use
Advantages
Wide range of genomic applications and scientific questions, including de novo genome assembly, haplotype phasing, structural variant analysis, and transcriptome and epigenetic analysis.
Accuracy and scalability.
Method requires small quantities of input DNA, which can be beneficial for small samples or single cell studies.
More cost effective per sample in comparison with long-read technologies such as Oxford Nanopore sequencing.
Libraries produced by linked-read can be processed using Illumina short read sequencing, increasing accessibility.
Limitations
Complexity of library construction - this technology requires high molecular DNA preparation in order to produce long enough DNA molecules for sequencing.
Limitations in read length may result in limited haplotype resolution, which could reduce the efficacy of this technology in highly complex genomic regions.
Controversy
In 2018, Bio-Rad Laboratories filed a lawsuit against 10x Genomics stating that their linked-read technology infringed on three patents that Bio-Rad had licensed from the University of Chicago. Bio-Rad was awarded a sum of $23,930,716 by a jury. 10x Genomics filed a motion for judgement as a matter of law (JMOL) but were denied in 2019, and the court proceedings concluded in 2020. Following this lawsuit, 10x Genomics discontinued their linked-read assay. An exception was made for linked-read products which had already been sold by the company prior to the lawsuit, allowing 10x Genomics to continue to provide those researchers with services such as support and warranty maintenance for this technology.
References
Molecular biology
Biotechnology | Linked-read sequencing | Chemistry,Biology | 2,224 |
63,867,828 | https://en.wikipedia.org/wiki/Sodium%20hydroselenide | Sodium hydroselenide is an inorganic compound with the chemical formula NaHSe. It is a salt of hydrogen selenide. It consists of sodium cations Na+ and hydroselenide anions HSe−. Each unit consists of one sodium, one selenium, and one hydrogen atom. Sodium hydroselenide is a selenium analog of sodium hydroxide, NaOH.
Production
Sodium hydroselenide can be made by reducing selenium with sodium borohydride:
4 NaBH4 + 2 Se + 7 H2O → 2 NaHSe + Na2B4O7 + 14 H2
Alternatively, it can be made by exposing sodium ethoxide to hydrogen selenide:
C2H5ONa + H2Se → NaHSe + C2H5OH
Sodium hydroselenide is not made for storage; instead it is used immediately after production, in a fume hood, because of the appalling odour of hydrogen selenide.
Properties
Sodium hydroselenide dissolves in water or ethanol. In humid air sodium hydroselenide is changed to sodium polyselenide and elemental selenium.
Sodium hydroselenide is slightly reducing.
Use
In organic synthesis, sodium hydroselenide is a nucleophilic agent for the insertion of selenium.
References
Sodium compounds
Selenides | Sodium hydroselenide | Chemistry | 228 |
70,151,472 | https://en.wikipedia.org/wiki/Hank%20the%20Tank | Hank the Tank (also known as Henrietta) is a five-hundred-pound female American black bear that gained attention for repeated interactions with humans in the Lake Tahoe area, resulting in her eventual capture and relocation to Colorado. Known by wildlife officials as Bear 64F, Hank became a symbol of human-wildlife conflict and sparked broader discussions about bear management in populated areas.
Background
Hank, a bear local to the Tahoe Keys region of California, was observed frequenting urban areas in 2021. The California Department of Fish and Wildlife (CDFW) observed that Hank exhibited behavior not considered normal for bears in the wild, most notably a decreased fear of humans. This behavior was attributed to continuous access to food sources provided by humans, such as open or unsecured garbage, which drew her to residential areas in search of food.
The Lake Tahoe region has had a significant increase in the population of bears in recent years. According to the Tahoe Interagency Bear Team (TIBT), bears have been increasing their presence in the area due to overpopulation, competition for resources, and unsecured trash and bird feeders within residential zones. In the early 2000s the black bear population in Lake Tahoe was estimated to be approximately 120 bears for every 100 square kilometers, the second-highest density recorded in the US.
Bears that often have access to human food may not hibernate during winter, as is observed in around twenty percent of bears in the Lake Tahoe area. Research has shown that wildland–urban interfaces are critical areas for human-bear interactions and potential conflicts.
Incidents
Between 2021 and early 2022, Hank was associated with a series of break-ins and incidents of property damage, amounting to around 20 cases in the Tahoe Keys neighborhood. The media started to cover her activities as concerns about safety grew among residents. At one point, Hank was suspected of breaking into as many as thirty homes. However, DNA profiling revealed that multiple bears, including Hank, were involved in the break-ins.
In 2022, there were a total of 902 conflict calls, 235 home invasions, and 31 permits issued on the Tahoe Basin within the borders of California. The following year, 2023, saw 660 conflict calls, 217 home invasions, and 38 permits issued. This data indicates a significant level of human-bear interaction in the region.
Relocation and management
Due to the high-profile nature of the incidents, the CDFW initially considered euthanizing Hank as a last-resort measure to address public safety concerns. However, DNA evidence indicating that multiple bears were involved prompted the agency to implement a program of tagging and monitoring bears in the region instead. No bears would be killed under this initiative, which relied on DNA sampling to track and identify individual animals associated with break-ins. On August 7, 2023, after over a year of tracking, CDFW officials successfully captured Hank and her three cubs. Hank was transported to The Wild Animal Sanctuary near Springfield, Colorado, while her cubs were placed in Sonoma County Wildlife Rescue to undergo rehabilitation. The aim was to "retrain" the cubs to rely on natural food sources and avoid human environments.
Public response
Hank's association with repeated incidents and a discussion around the possibility of euthanizing Hank generated significant public and media interest. Three wildlife sanctuaries offered to rehome Hank and saw support from the BEAR League, a local bear advocacy organization, who pledged to cover all expenses related to her relocation. Ann Bryant, executive director of the BEAR League, made comments highlighting a strong local opposition to euthanization as a solution. She underlined how Lake Tahoe residents adapted to coexisting with local wildlife and gave names to bears as a form of established community identity.
Hank was presented as a "conflict bear" in the media due to her reliance on human food sources and frequent encounters with Tahoe locals. Her notoriety contributed to broader discussions about human-wildlife interactions, specifically giving rise to debates regarding how best to manage bear populations in urban and semi-urban areas.
Human-wildlife conflict
Hank's case drew attention to the root causes of human–wildlife conflict, seen especially in regions like Lake Tahoe where wildlife and urban communities coexist. In recent years, bears have become increasingly "habituated" to human presence, often due to unsecured garbage and intentional or accidental feeding. Over time this habituation erodes their natural fear of people and can lead to increased incidents of property damage and public safety risks.
CDFW describes "conflict bears" as those that have grown accustomed to humans and tend to approach them, very often requiring intervention through relocation or euthanasia. Experts have emphasized that public education on securing food sources and bear-proofing properties is crucial to reducing conflicts and to protecting humans, bear populations, and the natural dynamic of human-bear interaction. Conservationists have argued that the case of Hank and her cubs is a call for more effective waste management systems and awareness campaigns in areas with many bears. Authorities hope to reduce human-bear conflicts and allow bears to retain their natural avoidance of humans by promoting responsible practices such as using bear boxes and kodiak cans.
See also
List of individual bears
Human–wildlife conflict
Wildlife management
References
Individual bears
Lake Tahoe
Wildlife conservation
Animals in the United States | Hank the Tank | Biology | 1,093 |
11,353,293 | https://en.wikipedia.org/wiki/Attribute-based%20access%20control | Attribute-based access control (ABAC), also known as policy-based access control for IAM, defines an access control paradigm whereby a subject's authorization to perform a set of operations is determined by evaluating attributes associated with the subject, object, requested operations, and, in some cases, environment attributes.
ABAC is a method of implementing access control policies that is highly adaptable and can be customized using a wide range of attributes, making it suitable for use in distributed or rapidly changing environments. The only limitations on the policies that can be implemented with ABAC are the capabilities of the computational language and the availability of relevant attributes. ABAC policy rules are generated as Boolean functions of the subject's attributes, the object's attributes, and the environment attributes.
Unlike role-based access control (RBAC), which defines roles that carry a specific set of privileges and to which subjects are assigned, ABAC can express complex rule sets that evaluate many different attributes. By defining consistent subject and object attributes in security policies, ABAC eliminates the need for the explicit authorizations of individual subjects required in non-ABAC access methods, reducing the complexity of managing access lists and groups.
Attribute values can be set-valued or atomic-valued. Set-valued attributes contain more than one atomic value. Examples are role and project. Atomic-valued attributes contain only one atomic value. Examples are clearance and sensitivity. Attributes can be compared to static values or to one another, thus enabling relation-based access control.
Although the concept itself has existed for many years, ABAC is considered a "next generation" authorization model because it provides dynamic, context-aware and risk-intelligent access control to resources. It allows access control policies that include specific attributes from many different information systems to be defined to resolve an authorization and achieve efficient regulatory compliance, while allowing enterprises flexibility in their implementations based on their existing infrastructures.
Attribute-based access control is sometimes referred to as policy-based access control (PBAC) or claims-based access control (CBAC), which is a Microsoft-specific term. The key standards that implement ABAC are XACML and ALFA (XACML).
Dimensions of attribute-based access control
ABAC can be seen as:
Externalized authorization management
Dynamic authorization management
Policy-based access control
Fine-grained authorization
Components
Architecture
ABAC comes with a recommended architecture which is as follows:
The PEP or Policy Enforcement Point: it is responsible for protecting the apps & data you want to apply ABAC to. The PEP inspects the request and generates an authorization request from which it sends to the PDP.
The PDP or Policy Decision Point is the brain of the architecture. This is the piece which evaluates incoming requests against policies it has been configured with. The PDP returns a Permit/Deny decision. The PDP may also use PIPs to retrieve missing metadata
The PIP or Policy Information Point bridges the PDP to external sources of attributes e.g. LDAP or databases.
Attributes
Attributes can be about anything and anyone. They tend to fall into 4 different categories:
Subject attributes: attributes that describe the user attempting the access e.g. age, clearance, department, role, job title
Action attributes: attributes that describe the action being attempted e.g. read, delete, view, approve
Object attributes: attributes that describe the object (or resource) being accessed e.g. the object type (medical record, bank account), the department, the classification or sensitivity, the location
Contextual (environment) attributes: attributes that deal with time, location or dynamic aspects of the access control scenario
Policies
Policies are statements that bring together attributes to express what is allowed and what is not allowed. Policies in ABAC can be granting or denying policies. Policies can also be local or global and can be written in a way that they override other policies. Examples include:
A user can view a document if the document is in the same department as the user
A user can edit a document if they are the owner and if the document is in draft mode
Deny access before 9 AM
With ABAC you can have an unlimited number of policies that cater to many different scenarios and technologies.
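As an illustration of how such policies reduce to Boolean functions of subject, object and environment attributes, the following is a minimal sketch. The attribute names, rule functions and combining logic are illustrative assumptions, not part of any ABAC standard such as XACML or ALFA:

```python
# Minimal sketch: ABAC rules expressed as Boolean functions of subject, object
# and environment attributes. Attribute names and the example request are
# illustrative only; real deployments would use a policy engine (e.g. XACML/ALFA).

def same_department(subject, obj, env):
    return subject.get("department") == obj.get("department")

def owner_and_draft(subject, obj, env):
    return subject.get("id") == obj.get("owner") and obj.get("status") == "draft"

def not_before_nine(subject, obj, env):
    return env.get("hour", 0) >= 9

POLICIES = {
    "view": [same_department, not_before_nine],
    "edit": [owner_and_draft, not_before_nine],
}

def decide(action, subject, obj, env):
    rules = POLICIES.get(action, [])
    return "Permit" if rules and all(rule(subject, obj, env) for rule in rules) else "Deny"

subject = {"id": "alice", "department": "finance"}
document = {"owner": "alice", "department": "finance", "status": "draft"}
print(decide("edit", subject, document, {"hour": 10}))  # Permit
print(decide("edit", subject, document, {"hour": 8}))   # Deny (before 9 AM)
```

The point of the sketch is that adding a new constraint only requires adding another attribute test, not restructuring roles or access lists.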
Other models
Historically, access control models have included mandatory access control (MAC), discretionary access control (DAC), and more recently role-based access control (RBAC). These access control models are user-centric and do not take into account additional parameters such as resource information, the relationship between the user (the requesting entity) and the resource, and dynamic information, e.g. time of the day or user IP.
ABAC tries to address this by defining access control based on attributes which describe the requesting entity (the user), the targeted object or resource, the desired action (view, edit, delete), and environmental or contextual information. This is why access control is said to be attribute-based.
Implementations
There are three main implementations of ABAC:
OASIS XACML
Abbreviated Language for Authorization (ALFA).
NIST's Next-generation Access Control (NGAC)
XACML, the eXtensible Access Control Markup Language, defines an architecture (shared with ALFA and NGAC), a policy language, and a request/response scheme. It does not handle attribute management (user attribute assignment, object attribute assignment, environment attribute assignment) which is left to traditional IAM tools, databases, and directories.
Companies, including every branch in the United States military, have started using ABAC. At a basic level, ABAC protects data with 'IF/THEN/AND' rules rather than assign data to users. The US Department of Commerce has made this a mandatory practice and the adoption is spreading throughout several governmental and military agencies.
Applications
The concept of ABAC can be applied at any level of the technology stack and an enterprise infrastructure. For example, ABAC can be used at the firewall, server, application, database, and data layer. The use of attributes bring additional context to evaluate the legitimacy of any request for access and inform the decision to grant or deny access.
An important consideration when evaluating ABAC solutions is to understand its potential overhead on performance and its impact on the user experience. It is expected that the more granular the controls, the higher the overhead.
API and microservices security
ABAC can be used to apply attribute-based, fine-grained authorization to the API methods or functions. For instance, a banking API may expose an approveTransaction(transId) method. ABAC can be used to secure the call. With ABAC, a policy author can write the following:
Policy: managers can approve transactions up to their approval limit
Attributes used: role, action ID, object type, amount, approval limit.
The flow would be as follows (a minimal sketch of this flow appears after the list):
The user, Alice, calls the API method approveTransaction(123)
The API receives the call and authenticates the user.
An interceptor in the API calls out to the authorization engine (typically called a Policy Decision Point or PDP) and asks: Can Alice approve transaction 123?
The PDP retrieves the ABAC policy and necessary attributes.
The PDP reaches a decision e.g. Permit or Deny and returns it to the API interceptor
If the decision is Permit, the underlying API business logic is called. Otherwise the API returns an error or access denied.
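A minimal sketch of this interceptor flow is shown below. The function names, the in-memory "PDP", and the attribute values are illustrative assumptions; in a real deployment the PEP would call an external PDP service (for example via XACML's request/response scheme) and the PIP would fetch attributes from directories or databases rather than local dictionaries:

```python
# Minimal sketch of the interceptor flow above: the API method defers to a
# Policy Decision Point (PDP) before running business logic. All data and
# names are illustrative.

APPROVAL_LIMITS = {"manager": 10_000, "clerk": 1_000}   # stand-in PIP data
TRANSACTIONS = {123: {"amount": 8_000}}                  # stand-in object data

def pdp_decide(subject, action, resource):
    """Policy: managers can approve transactions up to their approval limit."""
    if action != "approveTransaction":
        return "Deny"
    limit = APPROVAL_LIMITS.get(subject.get("role"), 0)
    return "Permit" if resource["amount"] <= limit else "Deny"

def approve_transaction(user, trans_id):
    transaction = TRANSACTIONS[trans_id]
    decision = pdp_decide(user, "approveTransaction", transaction)  # PEP -> PDP
    if decision != "Permit":
        raise PermissionError("access denied")
    return f"transaction {trans_id} approved by {user['name']}"

print(approve_transaction({"name": "Alice", "role": "manager"}, 123))
```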
Application security
One of the key benefits to ABAC is that the authorization policies and attributes can be defined in a technology neutral way. This means policies defined for APIs or databases can be reused in the application space. Common applications that can benefit from ABAC are:
Content Management Systems
ERPs
Home-grown Applications
Web Applications
The same process and flow as the one described in the API section applies here too.
Database security
Security for databases has long been specific to the database vendors: Oracle VPD, IBM FGAC, and Microsoft RLS are all means to achieve fine-grained ABAC-like security.
An example would be:
Policy: managers can view transactions in their region
Reworked policy in a data-centric way: users with role = manager can do the action SELECT on the table transactions if user.region = transaction.region
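A rough sketch of how this data-centric policy could be turned into a row filter is shown below. The table and column names are assumptions, and the string-built SQL is for illustration only; real implementations such as Oracle VPD or Microsoft RLS generate the predicate inside the database engine and use proper parameterization:

```python
# Illustrative sketch: derive a row-filter predicate from the policy
# "managers can view transactions in their region". Schema names are assumed.

def transactions_query(user):
    base = "SELECT * FROM transactions"
    if user.get("role") == "manager":
        # Mirrors the condition: user.region = transaction.region
        return f"{base} WHERE region = '{user['region']}'"
    return f"{base} WHERE 1 = 0"  # deny by default

print(transactions_query({"role": "manager", "region": "EMEA"}))
```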
Data security
Data security typically goes one step further than database security and applies control directly to the data element. This is often referred to as data-centric security. On traditional relational databases, ABAC policies can control access to data at the table, column, field, cell and sub-cell level using logical controls with filtering conditions and masking based on attributes. Attributes can be data, user, session or tools based to deliver the greatest level of flexibility in dynamically granting/denying access to a specific data element. On big data and distributed file systems such as Hadoop, ABAC applied at the data layer controls access to folders, sub-folders, files, sub-files and other granular elements.
Big data security
Attribute-based access control can also be applied to Big Data systems like Hadoop. Policies similar to those used previously can be applied when retrieving data from data lakes.
File server security
As of Windows Server 2012, Microsoft has implemented an ABAC approach to controlling access to files and folders. This is achieved through dynamic access control (DAC) and Security Descriptor Definition Language (SDDL). SDDL can be seen as an ABAC language as it uses metadata of the user (claims) and of the file/ folder to control access.
See also
References
External links
ATTRIBUTE BASED ACCESS CONTROL (ABAC) - OVERVIEW
Unified Attribute Based Access Control Model (ABAC) covering DAC, MAC and RBAC
Attribute Based Access Control Models (ABAC) and Implementation in Cloud Infrastructure as a Service
Access control
Computer access control | Attribute-based access control | Engineering | 1,974 |
22,713,707 | https://en.wikipedia.org/wiki/Integrated%20Operations%20in%20the%20High%20North | Integrated Operations in the High North (IOHN, IO High North or IO in the High North) is a collaboration project that, during a four-year period starting in May 2008, is working on designing, implementing and testing a Digital Platform for what the upstream oil and gas industry calls the next or second generation of Integrated Operations.
The work on the Digital platform is focussed on capture, transfer and integration of real-time data from the remote production installations to the decision makers. A risk evaluation across the whole chain is also included. The platform is based on open standards and enables a higher degree of interoperability. Requirements for the digital platform come from use cases defined within the Drilling and Completion, Reservoir and Production and Operations and Maintenance domains. The platform will subsequently be demonstrated through pilots within these three domains.
The project was a sidecar initiative for Statoil’s Global Operations Data Integration Project. This was part of a very ambitious Master Plan IT (MapIT), which also included the Real Time Visualization (RTV) tender. The RTV tender aimed to be an ontology-aware information workspace for a wide range of disciplines, as per the IO Capability Stack. Additionally, the sidecar project aimed to increase the semantic web knowledge among suppliers in the industry.
This new platform is considered an important enabler for safe and sustainable operations in remote, vulnerable and hazardous areas such as the High North, but the technology is clearly also applicable in more general applications.
The IOHN project consortium consists of 23 participants, including operators, service providers, software vendors, technology providers, research institutions and universities. In addition, the Norwegian Defence Force is working with the project to resolve common infrastructural and interoperability challenges.
The project is managed by Det Norske Veritas (DNV). Nils Sandsmark was the project manager during the initiation and start-up phase. Frédéric Verhelst took over as project manager from the beginning of 2009.
Financing comes from the participants and the Research Council of Norway (RCN) for parts of the project (GOICT and AutoConRig).
Participants
The consortium consists of the following 22 participants (in alphabetical order):
See also
Integrated Operations
Semantic Web
ISO 15926 aka Oil and Gas Ontology, an enabler for the next or second generation of Integrated Operations by integrating data across disciplines and business domains.
Petroleum exploration in the Arctic
POSC Caesar Association, the custodian of ISO 15926, the Oil and Gas Ontology.
References
External links
Integrated Operations in the High North website
W3C workshop on Semantic Web in Oil and Gas industry, Houston, December 9–10, 2008. Position papers from several participants in IOHN.
Semantic Days 2009 conference, Stavanger, May 18–20, 2009. One session is devoted to IOHN.
IO 09 Science and Practice conference, Trondheim, September 29–30, 2009. One session is devoted to IOHN.
Integrated Operations in the High North—mid term report, Oil IT Journal, March 2010.
Petroleum organizations
Petroleum engineering
Semantic Web
Knowledge engineering
Information science
Ontology (information science)
Knowledge representation | Integrated Operations in the High North | Chemistry,Engineering | 637 |
75,985,917 | https://en.wikipedia.org/wiki/Avionics%20bay | An avionics bay, also known as an E&E bay or electronic equipment bay in aerospace engineering, is a compartment in an aircraft that houses the avionics and other electronic equipment, such as flight control computers, navigation systems, and communication systems, essential for operation. It is designed to be modular, with individual components that can be easily removed and replaced in case of failure, and to be highly reliable and fault-tolerant, with various backup systems.
In larger commercial airplanes, the main avionics compartment is typically located in the forward section of the aircraft under the cockpit. This location provides easy access to the avionics and other electronic equipment for maintenance and repair.
For example, on larger aircraft such as the Boeing 747-400, the avionics bays are divided into 3 parts - the main equipment center (MEC), the center equipment center (CEC) and the aft equipment center (AEC).
Components
Typically, the avionics bay contains plug-in modules for:
Flight Control Computer (FCC)
Autopilot
Automatic flight director system (AFDS)
Autothrottle system (A/T)
Mode control panel (MCP)
Flight management computer (FMC)
Primary flight computers (PFC)
Actuator control electronics (ACE)
Flight data recorder
Cockpit voice recorder
Battery and battery charger
The avionics bay also contains the oxygen tanks for the pilots in case of a cabin depressurization.
Thermal management in spacecraft
In spacecraft, smoke detection is not practical for avionics bays as there is no forced airflow in the compartment. Suppressants, such as Halon, operate by either chemically interrupting the combustion process or by reducing the oxygen concentration within the bay's atmosphere.
In popular culture
The avionics bay of a 747-200 was used as a way to deploy a military team into the aircraft in the movie Executive Decision.
References
Avionics
Aircraft instruments
Aircraft systems
Aerospace engineering
Aircraft | Avionics bay | Technology,Engineering | 397 |
28,547 | https://en.wikipedia.org/wiki/Sokal%20affair | The Sokal affair, also known as the Sokal hoax, was a demonstrative scholarly hoax performed by Alan Sokal, a physics professor at New York University and University College London. In 1996, Sokal submitted an article to Social Text, an academic journal of cultural studies. The submission was an experiment to test the journal's intellectual rigor, specifically to investigate whether "a leading North American journal of cultural studies—whose editorial collective includes such luminaries as Fredric Jameson and Andrew Ross—[would] publish an article liberally salted with nonsense if (a) it sounded good and (b) it flattered the editors' ideological preconceptions."
The article, "Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity", was published in the journal's Spring/Summer 1996 "Science Wars" issue. It proposed that quantum gravity is a social and linguistic construct. The journal did not practice academic peer review at the time, so it did not submit the article for outside expert review by a physicist. Three weeks after its publication in May 1996, Sokal revealed in the magazine Lingua Franca that the article was a hoax.
The hoax caused controversy about the scholarly merit of commentary on the physical sciences by those in the humanities; the influence of postmodern philosophy on social disciplines in general; and academic ethics, including whether Sokal was wrong to deceive the editors or readers of Social Text; and whether Social Text had abided by proper scientific ethics.
In 2008, Sokal published Beyond the Hoax, which revisited the history of the hoax and discussed its lasting implications.
Background
In an interview on the U.S. radio program All Things Considered, Sokal said he was inspired to submit the bogus article after reading Higher Superstition (1994), in which authors Paul R. Gross and Norman Levitt claim that some humanities journals will publish anything as long as it has "the proper leftist thought" and quoted (or was written by) well-known leftist thinkers.
Gross and Levitt had been defenders of the philosophy of scientific realism, opposing postmodernist academics who questioned scientific objectivity. They asserted that anti-intellectual sentiment in liberal arts departments (especially English departments) caused the increase of deconstructionist thought, which eventually resulted in a deconstructionist critique of science. They saw the critique as a "repertoire of rationalizations" for avoiding the study of science.
Article
Sokal reasoned that if the presumption of editorial laziness was correct, the nonsensical content of his article would be irrelevant to whether the editors would publish it. What would matter would be ideological obsequiousness, fawning references to deconstructionist writers, and sufficient quantities of the appropriate jargon. After the article was published and the hoax revealed, he wrote:
The results of my little experiment demonstrate, at the very least, that some fashionable sectors of the American academic Left have been getting intellectually lazy. The editors of Social Text liked my article because they liked its conclusion: that "the content and methodology of postmodern science provide powerful intellectual support for the progressive political project" [sec. 6]. They apparently felt no need to analyze the quality of the evidence, the cogency of the arguments, or even the relevance of the arguments to the purported conclusion.
Content of the article
"Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity" proposed that quantum gravity has progressive political implications, and that the "morphogenetic field" could be a valid theory of quantum gravity. (A morphogenetic field is a concept adapted by Rupert Sheldrake in a way that Sokal characterized in the affair's aftermath as "a bizarre New Age idea".) Sokal wrote that the concept of "an external world whose properties are independent of any individual human being" was "dogma imposed by the long post-Enlightenment hegemony over the Western intellectual outlook".
After referring skeptically to the "so-called scientific method", the article declared that "it is becoming increasingly apparent that physical 'reality'" is fundamentally "a social and linguistic construct". It went on to state that because scientific research is "inherently theory-laden and self-referential", it "cannot assert a privileged epistemological status with respect to counterhegemonic narratives emanating from dissident or marginalized communities", and that therefore a "liberatory science" and an "emancipatory mathematics", spurning "the elite caste canon of 'high science'", needed to be established for a "postmodern science [that] provide[s] powerful intellectual support for the progressive political project."
Moreover, the article's footnotes conflate academic terms with sociopolitical rhetoric, e.g.:
Publication
Sokal submitted the article to Social Text, whose editors were collecting articles for the "Science Wars" issue. "Transgressing the Boundaries" was notable as an article by a natural scientist; biologist Ruth Hubbard also had an article in the issue. Later, after Sokal revealed the hoax in Lingua Franca, Social Text editors wrote that they had requested editorial changes that Sokal refused to make, and had had concerns about the quality of the writing: "We requested him (a) to excise a good deal of the philosophical speculation and (b) to excise most of his footnotes." Still, despite calling Sokal a "difficult, uncooperative author", and noting that such writers were "well known to journal editors", based on Sokal's credentials Social Text published the article in the May 1996 Spring/Summer "Science Wars" issue. The editors did not seek peer review of the article by physicists or otherwise; they later defended this decision on the basis that Social Text was a journal of open intellectual inquiry and the article was not offered as a contribution to physics.
Responses
Follow-up between Sokal and the editors
In the article "A Physicist Experiments With Cultural Studies" in the May 1996 issue of Lingua Franca, Sokal revealed that "Transgressing the Boundaries" was a hoax and concluded that Social Text "felt comfortable publishing an article on quantum physics without bothering to consult anyone knowledgeable in the subject" because of its ideological proclivities and editorial bias.
In their defense, Social Text editors said they believed that Sokal's essay "was the earnest attempt of a professional scientist to seek some kind of affirmation from postmodern philosophy for developments in his field" and that "its status as parody does not alter, substantially, our interest in the piece, itself, as a symptomatic document." Besides criticizing his writing style, Social Text editors accused Sokal of behaving unethically in deceiving them.
Sokal said the editors' response demonstrated the problem that he sought to identify. Social Text, as an academic journal, published the article not because it was faithful, true, and accurate to its subject, but because an "academic authority" had written it and because of the appearance of the obscure writing. The editors said they considered it poorly written but published it because they felt Sokal was an academic seeking their intellectual affirmation. Sokal remarked:
The Social Text response revealed that none of the editors had suspected Sokal's piece was a parody. Instead, they speculated Sokal's admission "represented a change of heart, or a folding of his intellectual resolve". Sokal found further humor in the idea that the article's absurdity was hard to spot:
Book by Sokal and Bricmont
In 1997, Sokal and Jean Bricmont co-wrote Impostures intellectuelles (published in the US as Fashionable Nonsense: Postmodern Intellectuals' Abuse of Science and in the UK as Intellectual Impostures, 1998). The book featured analysis of extracts from established intellectuals' writings that Sokal and Bricmont claimed misused scientific terminology. It closed with a critical summary of postmodernism and criticism of the strong programme of social constructionism in the sociology of scientific knowledge.
In 2008, Sokal published a followup book, Beyond the Hoax, which revisited the history of the hoax and discussed its lasting implications.
Jacques Derrida
The French philosopher Jacques Derrida, whose 1966 statement about Einstein's theory of relativity was quoted in Sokal's paper, was singled out for criticism, particularly in U.S. newspaper coverage of the hoax. One weekly magazine used two images of him, a photo and a caricature, to illustrate a "dossier" on Sokal's paper. Arkady Plotnitsky commented: Even given Derrida's status as an icon of intellectual controversy on the Anglo-American cultural scene, it is remarkable that out of thousands of pages of Derrida's published works, a single extemporaneous remark on relativity made in 1966 (before Derrida was "the Derrida" and, in a certain sense, even before "deconstruction") ... is made to stand for nearly all of deconstructive or even postmodernist (not a term easily, if at all, applicable to Derrida) treatments of science.
Derrida later responded to the hoax in "Sokal et Bricmont ne sont pas sérieux" ("Sokal and Bricmont Aren't Serious"), first published on November 20, 1997, in Le Monde. He called Sokal's action "sad" for having trivialized Sokal's mathematical work and "ruining the chance to carefully examine controversies" about scientific objectivity. Derrida then faulted him and Bricmont for what he considered "an act of intellectual bad faith" in their follow-up book, Impostures intellectuelles: they had published two articles almost simultaneously, one in English in The Times Literary Supplement on October 17, 1997, and one in French in Le Monde on October 18–19, 1997, but while the two articles were almost identical, they differed in how they treated Derrida.
The English-language article had a list of French intellectuals who were not included in Sokal's and Bricmont's book: "Such well-known thinkers as Althusser, Barthes, and Foucault—who, as readers of the TLS will be well aware, have always had their supporters and detractors on both sides of the Channel—appear in our book only in a minor role, as cheerleaders for the texts we criticize." The French-language list, however, included Derrida ("Famous thinkers such as Althusser, Barthes, Derrida and Foucault are essentially absent from our book").
According to Brian Reilly, Derrida may also have been sensitive to another difference between the French and English versions of Impostures intellectuelles. In the French, his citation from the original hoax article is said to be an "isolated" instance of abuse, whereas the English text adds a parenthetical remark that Derrida's work contained "no systematic misuse (or indeed attention to) science". Sokal and Bricmont insisted that the difference between the articles was "banal". Nevertheless, Derrida concluded that Sokal was not serious in his method, but had used the spectacle of a "quick practical joke" to displace the scholarship Derrida believed the public deserved.
Criticism of social sciences
Sociologist Stephen Hilgartner, chairman of Cornell University's science and technology studies department, wrote "The Sokal Affair in Context" (1997), comparing Sokal's hoax to "Confirmational Response: Bias Among Social Work Journals" (1990), an article by William M. Epstein published in Science, Technology, & Human Values. Epstein used a similar method to Sokal's, submitting fictitious articles to real academic journals to measure their response. Though much more systematic than Sokal's work, it received scant media attention. Hilgartner argued that the "asymmetric" effect of the successful Sokal hoax compared with Epstein's experiment cannot be attributed to its quality, but that "[t]hrough a mechanism that resembles confirmatory bias, audiences may apply less stringent standards of evidence and ethics to attacks on targets that they are predisposed to regard unfavorably." As a result, according to Hilgartner, though competent in terms of method, Epstein's experiment was largely muted by the more socially accepted social work discipline he critiqued, while Sokal's attack on cultural studies, despite lacking experimental rigor, was accepted. Hilgartner also argued that Sokal's hoax reinforced the views of well-known pundits such as George Will and Rush Limbaugh, so that his opinions were amplified by media outlets predisposed to agree with his argument.
The Sokal Affair extended from academia to the public press. Anthropologist Bruno Latour, who was criticized in Fashionable Nonsense, described the scandal as a "tempest in a teacup". Retired Northeastern University mathematician-turned social scientist Gabriel Stolzenberg wrote essays criticizing the statements of Sokal and his allies, arguing that they insufficiently grasped the philosophy they criticized, rendering their criticism meaningless. In Social Studies of Science, Bricmont and Sokal responded to Stolzenberg, denouncing his representations of their work and criticizing his commentary about the "strong programme" of the sociology of science. Stolzenberg replied in the same issue that their critique and allegations of misrepresentation were based on misreadings. He advised readers to slowly and skeptically examine the arguments of each party, bearing in mind that "the obvious is sometimes the enemy of the true".
Influence
Sociological follow-up study
In 2009, Cornell sociologist Robb Willer performed an experiment in which undergraduate students read Sokal's paper and were told either that it was written by another student or that it was by a famous academic. He found that students who believed the paper's author was a high-status intellectual rated it better in quality and intelligibility.
Sokal III
In October 2021, the scholarly journal Higher Education Quarterly published a bogus article "authored" by "Sage Owens" and "Kal Avers-Lynde III". The initials stand for "Sokal III". The Quarterly retracted the article.
See also
Academese
Bogdanov affair, sometimes referred to as a reverse Sokal hoax
Chip Morningstar, a software developer known for his early hoax involving postmodern deconstruction at the 2nd International Conference on Cyberspace in 1991
Dr. Fox effect, in which an actor gave a lecture to a group of experts with almost no content but was praised
The Ern Malley affair, Australia's most infamous literary hoax
Logology (science)
Postmodernism Generator, a program that produces imitations of postmodernist writing
Grievance studies affair, also called "Sokal Squared"
References
Footnotes
Citations
Bibliography
Further reading
External links
Alan Sokal Articles on the Social Text Affair Alan Sokal's own page with very extensive links; includes the original article
Original hoax article (HTML)
1996 hoaxes
1996 in science
Academic journal articles
Academic scandals
Criticism of postmodernism
Duke University
Hoaxes in science
Hoaxes in the United States
Literary forgeries
Philosophy papers
Science and technology studies
Scientific misconduct incidents
Sociology of scientific knowledge | Sokal affair | Technology | 3,153 |
76,111,276 | https://en.wikipedia.org/wiki/Xanthomonas%20campestris%20pv.%20raphani | Xanthomonas campestris pv. raphani is a gram-negative, obligate aerobic bacterium that, like many other Xanthomonas species, is found in association with plants. This organism is closely related to Xanthomonas campestris pv. campestris, but causes a non-vascular leaf spot disease that is clearly distinct from black rot of brassicas.
Leaf spot diseases of brassicas were associated with X. campestris pv. armoraciae (McCulloch) Dye or X. campestris pv. raphani (White) Dye. The leaf spot isolates most commonly found in brassicas have been identified as X. campestris pv. raphani.
Hosts and symptoms
The host range of X. campestris pv. raphani is wider than X. campestris pv. campestris and includes Brassica spp., radish, ornamental crucifers like wallflowers and tomato.
Symptoms include circular dark spots that later become light brown or gray, sometimes surrounded by a water-soaked halo. In severe infections, the spots can coalesce and become irregular, but are not limited by the veins. Symptoms also include black, sunken, elongated lesions on the middle vein, petiole, and/or stem.
Significance
This pathovar causes a minor disease in brassica crops; it can be occasionally isolated from seeds.
References
campestris pv. raphani
Bacterial plant pathogens and diseases
Pathovars | Xanthomonas campestris pv. raphani | Biology | 311 |
54,637,408 | https://en.wikipedia.org/wiki/TEDAX | Technician Specialist in Deactivation of Explosive Artifacts (), commonly known by its abbreviation TEDAX, is the Spanish name for bomb disposal units.
Many TEDAX groups exist in Spain, most of them in the police corps; similar units also exist in the Armed Forces, but those changed their name in 2001. Since 2001, the Armed Forces units are no longer named TEDAX because they adopted the international EOD (Explosive Ordnance Disposal) standards following Spain's entry into NATO. Another reason for the change of name was that these groups are also specialized in unexploded ordnance.
The TEDAX of the law enforcement agencies and the EODs of the Armed Forces have become a key element in the fight against terrorism, each in its area of competence. To perform their function they rely on purpose-designed high technology, such as specialized robots and highly protective bomb disposal suits.
In Spain there are TEDAX units in the Civil Guard, in the National Police Corps and in some Autonomous Police (like Mossos d'Esquadra or Ertzaintza), and there are EOD Units in the Army, in the Air Force and in the Navy.
The TEDAX units were created in the 1970s and were fundamental in the fight against the terrorist group ETA and in the response to the 2004 Madrid train bombings. Outside the national territory, EOD units have become essential parts of the international operations carried out by the Spanish Armed Forces around the world, in areas where the threat of explosive devices and ammunition is very high. These units are also specialized in CBRN defense.
The first victim of the TEDAX police units, Rafael Valdenebro Sotelo, died in 1978 when he tried to deactivate an explosive device attributed to the Canary Islands Independence Movement. Several other members of the police units were killed trying to defuse ETA bombs. In the Armed Forces, the first victim was Captain Fernando Álvarez Rodríguez, who died in 1993 in Bosnia and Herzegovina.
References
Bomb disposal
Emergency services
National law enforcement agencies of Spain | TEDAX | Chemistry | 413 |
46,373,107 | https://en.wikipedia.org/wiki/Adsorption/bio-oxidation%20process | The adsorption/bio-oxidation process (AB process) is a two-stage modification of the activated sludge process used for wastewater treatment. It consists of a high-loaded A-stage and low-loaded B-stage. The process is operated without a primary clarifier, with the A-stage being an open dynamic biological system. Both stages have separate settling tanks and sludge recycling lines, thus maintaining unique microbial communities in both reactors.
History
Adsorption/bio-oxidation process was invented in the mid-1970s by Botho Böhnke, a professor of the RWTH Aachen University. It was based on the finding made by the German engineer Karl Imhoff in the 1950s. Imhoff stated that the treatment efficiency of 60–80 percent could be achieved in highly loaded activated sludge basins.
In 1977 Böhnke published his first article on the adsorption/bio-oxidation process, and the same year the patent was issued. Extensive research in the following years, conducted by Böhnke together with Bernd and Andreas Diering, culminated in 1985 with the establishment of the company Dr.-Ing. Bernd Diering GmbH. The same year, the AB process was applied at full scale for the first time, at the Krefeld sewage treatment plant in Germany (800 000 P.E.). In 1990, 19 full-scale installations existed in Western Germany alone. Further application of the process in Europe was hindered by the tightening of effluent discharge requirements with respect to nitrogen and phosphorus. The process came into notice again in 2000 due to the increased interest in energy recovery from wastewater.
Principle of operation
The A-stage, or adsorption stage, is the most innovative component of the process. It is not preceded by primary treatment. Influent organic matter is removed in the A-stage mainly by flocculation and sorption to sludge, owing to the high loading rates (2–10 g BOD • g VSS−1 • d−1) and low sludge age (typically 4–10 h). Hydrolysis of complex organic molecules occurs, improving the biodegradability of the influent to the B-stage. High loading rates and low sludge age favour the development of a dynamic biocoenosis with a large fraction of microorganisms in the exponential growth phase. A diverse sludge biocoenosis increases the variety of organic compounds that can be degraded in the A-stage and makes the process more stable towards shock loads. Altogether, up to 80% of the influent organic matter can be removed in the A-stage. The required reactor volume and oxygen supply are lower than for removal in the conventional activated sludge process.
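As an illustration of how these loading figures translate into tank size, the following is a minimal sizing sketch in Python; the BOD load, the chosen loading rate and the mixed-liquor VSS concentration in the example are assumptions for illustration, not values from this article.

def a_stage_volume(bod_load_kg_per_d, loading_rate_g_bod_per_g_vss_d, mlvss_g_per_l):
    """Return the required A-stage reactor volume in cubic metres.

    Required biomass (kg VSS) = BOD load / loading rate;
    volume (m^3) = biomass / MLVSS concentration (g/L is numerically kg/m^3).
    """
    biomass_kg = bod_load_kg_per_d / loading_rate_g_bod_per_g_vss_d
    return biomass_kg / mlvss_g_per_l

# Example: 6,000 kg BOD/d at a loading rate of 4 g BOD per g VSS per day
# and a mixed-liquor concentration of 3 g VSS/L
print(a_stage_volume(6000, 4, 3))  # -> 500.0 (m^3)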
The B-stage, or bio-oxidation stage, is a typical low-loaded activated sludge process, where biodegradation of the remaining organic material occurs. The B-stage can be designed for nitrogen and/or phosphorus removal by alternating aerobic, anoxic and anaerobic zones in the reactor.
Typical operational conditions of the adsorption/bio-oxidation process
Advantages of the process
Lower aeration requirements decrease energy consumption and aeration costs by 20 percent compared with a conventional single-stage activated sludge plant.
The volumes of the aeration tanks are 40% lower than in a conventional single-stage activated sludge plant.
Increased sludge production in the A-stage results in increased biogas production in the digester (for plants with anaerobic digestion of excess sludge).
Stability towards the shock loads (pH, chemical oxygen demand (COD), toxic substances) explained by the wide-ranging biochemical potential, high mutation capacity and adaptability of sludge in the A-stage.
A-stage can receive higher organic loads than conventional activated sludge systems.
Effluent concentrations are more stable because of the two-stage process configuration employed.
Heavy metals are mainly removed with the A-stage sludge. Therefore, B-stage sludge has lower concentrations of heavy metals than sludge from conventional activated sludge process and may comply with the agricultural standards.
Drawbacks of the process
Incomplete denitrification is often observed in the B-stage if the influent C/N ratio is low. Direct by-pass of a part of A-stage influent with high organic matter content to the B-stage is used to increase the C/N ratio.
High sludge production in the A-stage is a drawback for WWTPs that are not equipped with anaerobic digestion of sludge because it increases sludge treatment costs.
Sludge from A-stage has poor settling properties.
High retention times cause an increased need for additional reactors to maintain throughput, increasing equipment costs.
Nutrient removal
Nitrogen removal in the A-stage can reach 30–40%, as nitrogen bound in organic compounds is incorporated into the upflow anaerobic sludge blanket (UASB) reactor sludge.
The sludge age of the B-stage is typically between 8 and 20 days, promoting the growth of nitrifiers. Therefore, complete nitrification is usually achieved in the B-stage. Complete denitrification is difficult to achieve because of the low C:N ratio in the influent of the B-stage. Insufficient supply of carbon source to the B-stage occurs due to the high efficiency of organic matter removal in the A-stage. The problem can be solved by decreasing organic matter removal in the A-stage, supplying an external carbon source, intermittent aeration or decreased HRT of the A-stage, and/or on-line control of certain operational parameters. To achieve biological nitrogen and phosphorus removal, anaerobic and anoxic compartments are introduced before the aerated zone of the B-stage.
Phosphorus removal from the secondary effluent of the B-stage can be achieved by coagulation with ferric and aluminium salts, e.g. FeCl3 or Al2(SO4)3.
Applications for municipal wastewater treatment
The adsorption/bio-oxidation process was applied at the Krefeld plant (800 000 P.E.) in 1985 for the first time. The plant was expanded and modified and currently treats municipal and industrial wastewater of 1 200 000 P.E.
Currently adsorption/bio-oxidation process is applied at the municipal treatment plants in Germany, the Netherlands (WWTP Dokhaven (Rotterdam), WWTP Utrecht, WWTP Garmerwolde (Groningen) etc.), Austria (WWTP Salzburg, WWTP Strass etc.), Spain, US, China etc.
Adsorption/bio-oxidation process is a part of innovative wastewater treatment concept WaterSchoon, realized in the Netherlands. 250 apartments in the new district Noorderhoek (Sneek, the Netherlands) are equipped with separate collection systems for toilet wastewater and the rest of the household wastewater (or so-called greywater). Both streams are treated separately in order to maximize recovery of resources from wastewater. Adsorption/bio-oxidation process is used for grey water treatment to increase sludge production. Sludge, produced in both stages of the process, is digested together with toilet wastewater in the UASB reactor to maximize energy recovery.
Applications for industrial wastewater treatment
The adsorption/bio-oxidation process is used for treatment of industrial wastewater with high COD, including wastewater from:
Pulp and paper industry
Textile industry
Food industry, including dairy industry
Pharmaceutical industry
Leather tanning industry
The C/N and C/P ratios of industrial wastewater are often too high for complete aerobic biodegradation of the influent organic matter, even after the adsorption stage. Addition of nutrients prior to the bio-oxidation stage is required in these cases.
See also
Biosorption
List of waste-water treatment technologies
References
Environmental engineering
Pollution control technologies
Treatment
Water pollution
Sanitation | Adsorption/bio-oxidation process | Chemistry,Engineering,Environmental_science | 1,619 |
79,836 | https://en.wikipedia.org/wiki/Charles%20Wheatstone | Sir Charles Wheatstone (; 6 February 1802 – 19 October 1875) was an English physicist and inventor best known for his contributions to the development of the Wheatstone bridge, originally invented by Samuel Hunter Christie, which is used to measure an unknown electrical resistance, and as a major figure in the development of telegraphy. His other contributions include the English concertina, the stereoscope (a device for displaying three-dimensional images) and the Playfair cipher (an encryption technique).
Life
Charles Wheatstone was born in Barnwood, Gloucestershire. His father, W. Wheatstone, was a music-seller in the town, who moved to 128 Pall Mall, London, four years later, becoming a teacher of the flute. Charles, the second son, went to a village school, near Gloucester, and afterwards to several institutions in London. One of them was in Kennington, and kept by a Mrs. Castlemaine, who was astonished at his rapid progress. From another he ran away, but was captured at Windsor, not far from the theatre of his practical telegraph. As a boy he was very shy and sensitive, liking to retreat into an attic, without any other company than his own thoughts.
When he was about fourteen years old he was apprenticed to his uncle and namesake, a maker and seller of musical instruments at 436 Strand, London; but he showed little taste for handicraft or business, and loved better to study books. His father encouraged him in this, and finally took him out of the uncle's charge.
At the age of fifteen, Wheatstone translated French poetry, and wrote two songs, one of which was given to his uncle, who published it without knowing it as his nephew's composition. Some lines of his on the lyre became the motto of an engraving by Bartolozzi. He often visited an old book-stall in the vicinity of Pall Mall, which was then a dilapidated and unpaved thoroughfare. Most of his pocket-money was spent in purchasing the books which had taken his fancy, whether fairy tales, history, or science.
One day, to the surprise of the bookseller, he coveted a volume on the discoveries of Volta in electricity, but not having the price, he saved his pennies and secured the volume. It was written in French, and so he was obliged to save again, until he could buy a dictionary. Then he began to read the volume, and, with the help of his elder brother, William, to repeat the experiments described in it, with a home-made battery, in the scullery behind his father's house. In constructing the battery, the boy philosophers ran short of money to procure the requisite copper-plates. They had only a few copper coins left. A happy thought occurred to Charles, who was the leading spirit in these researches, 'We must use the pennies themselves,' said he, and the battery was soon complete.
At Christchurch, Marylebone, on 12 February 1847, Wheatstone was married to Emma West. She was the daughter of a Taunton tradesman, and of handsome appearance. She died in 1866, leaving a family of five young children to his care. His domestic life was quiet and uneventful.
Though silent and reserved in public, Wheatstone was a clear and voluble talker in private, if taken on his favourite studies, and his small but active person, his plain but intelligent countenance, was full of animation. Sir Henry Taylor tells us that he once observed Wheatstone at an evening party in Oxford earnestly holding forth to Lord Palmerston on the capabilities of his telegraph. 'You don't say so!' exclaimed the statesman. 'I must get you to tell that to the Lord Chancellor.' And so saying, he fastened the electrician on Lord Westbury, and effected his escape. A reminiscence of this interview may have prompted Palmerston to remark that a time was coming when a minister might be asked in Parliament if war had broken out in India, and would reply, 'Wait a minute; I'll just telegraph to the Governor-General, and let you know.'
Wheatstone was knighted in 1868, after his completion of the automatic telegraph. He had previously been made a Chevalier of the Legion of Honour. Some thirty-four distinctions and diplomas of home or foreign societies bore witness to his scientific reputation. Since 1836 he had been a Fellow of the Royal Society, and in 1859 he was elected a foreign member of the Royal Swedish Academy of Sciences, and in 1873 a Foreign Associate of the French Academy of Sciences. The same year he was awarded the Ampere Medal by the French Society for the Encouragement of National Industry. In 1875 he was created an honorary member of the Institution of Civil Engineers. He was a D.C.L. of Oxford and an LL.D. of Cambridge.
While on a visit to Paris during the autumn of 1875, and engaged in perfecting his receiving instrument for submarine cables, he caught a cold, which produced inflammation of the lungs, an illness from which he died in Paris, on 19 October 1875 aged 73. A memorial service was held in the Anglican Chapel, Paris, and attended by a deputation of the academy. His remains were taken to his home in Park Crescent, London, (marked by a blue plaque today) and buried in Kensal Green Cemetery.
Music instruments and acoustics
In September 1821, Wheatstone brought himself into public notice by exhibiting the 'Enchanted Lyre,' or 'Acoucryptophone,' at a music shop at Pall Mall and in the Adelaide Gallery. It consisted of a mimic lyre hung from the ceiling by a cord, and emitting the strains of several instruments – the piano, harp, and dulcimer. In reality it was a mere sounding box, and the cord was a steel rod that conveyed the vibrations of the music from the several instruments which were played out of sight and ear-shot. At this period Wheatstone made numerous experiments on sound and its transmission. Some of his results are preserved in Thomson's Annals of Philosophy for 1823.
He recognised that sound is propagated by waves or oscillations of the atmosphere, as light was then believed to be by undulations of the luminiferous ether. Water, and solid bodies, such as glass, or metal, or sonorous wood, convey the modulations with high velocity, and he conceived the plan of transmitting sound-signals, music, or speech to long distances by this means. He estimated that sound would travel through solid rods, and proposed to telegraph from London to Edinburgh in this way. He even called his arrangement a 'telephone.' (Robert Hooke, in his Micrographia, published in 1667, writes: 'I can assure the reader that I have, by the help of a distended wire, propagated the sound to a very considerable distance in an instant, or with as seemingly quick a motion as that of light.' Nor was it essential the wire should be straight; it might be bent into angles. This property is the basis of the mechanical or lover's telephone, said to have been known to the Chinese many centuries ago. Hooke also considered the possibility of finding a way to quicken our powers of hearing.)
A writer in the Repository of Arts for 1 September 1821, in referring to the 'Enchanted Lyre,' beholds the prospect of an opera being performed at the King's Theatre, and enjoyed at the Hanover Square Rooms, or even at the Horns Tavern, Kennington. The vibrations are to travel through underground conductors, like to gas in pipes.
And if music be capable of being thus conducted,' he observes, 'perhaps the words of speech may be susceptible of the same means of propagation. The eloquence of counsel, the debates of Parliament, instead of being read the next day only, – But we shall lose ourselves in the pursuit of this curious subject.
Besides transmitting sounds to a distance, Wheatstone devised a simple instrument for augmenting feeble sounds, to which he gave the name of 'Microphone.' It consisted of two slender rods, which conveyed the mechanical vibrations to both ears, and is quite different from the electrical microphone of Professor Hughes.
In 1823, his uncle, the musical instrument maker, died, and Wheatstone, with his elder brother, William, took over the business. Charles had no great liking for the commercial part, but his ingenuity found a vent in making improvements on the existing instruments, and in devising philosophical toys. He also invented instruments of his own. One of the most famous was the Wheatstone concertina, a six-sided instrument with 64 keys, logically arranged for simple chromatic fingerings. The English concertina became increasingly famous throughout his lifetime; however, it did not reach its peak of popularity until the early 20th century.
In 1827, Wheatstone introduced his 'kaleidophone', a device for rendering the vibrations of a sounding body apparent to the eye. It consists of a metal rod, carrying at its end a silvered bead, which reflects a 'spot' of light. As the rod vibrates the spot is seen to describe complicated figures in the air, like a spark whirled about in the darkness. His photometer was probably suggested by this appliance. It enables two lights to be compared by the relative brightness of their reflections in a silvered bead, which describes a narrow ellipse, so as to draw the spots into parallel lines.
In 1828, Wheatstone improved the German wind instrument, called the Mundharmonika, creating the symphonium (or symphonion), a mouth-blown free-reed instrument with a logical layout of button keys, patented on 19 December 1829, prefiguring the bellows-blown English concertina. The portable harmonium is another of his inventions, which gained a prize medal at the Great Exhibition of 1851. He also improved the speaking machine of De Kempelen, and endorsed the opinion of Sir David Brewster, that before the end of this century a singing and talking apparatus would be among the conquests of science.
In 1834, Wheatstone, who had won a name for himself, was appointed to the Chair of Experimental Physics in King's College London. His first course of lectures on sound were a complete failure, due to his abhorrence of public speaking. In the rostrum he was tongue-tied and incapable, sometimes turning his back on the audience and mumbling to the diagrams on the wall. In the laboratory he felt himself at home, and ever after confined his duties mostly to demonstration.
Velocity of electricity
He achieved renown by a great experiment made in 1834 – the measurement of the velocity of electricity in a wire. He cut the wire at the middle, to form a gap which a spark might leap across, and connected its ends to the poles of a Leyden jar filled with electricity. Three sparks were thus produced, one at each end of the wire, and another at the middle. He mounted a tiny mirror on the works of a watch, so that it revolved at a high velocity, and observed the reflections of his three sparks in it. The points of the wire were so arranged that if the sparks were instantaneous, their reflections would appear in one straight line; but the middle one was seen to lag behind the others, because it was an instant later. The electricity had taken a certain time to travel from the ends of the wire to the middle. This time was found by measuring the amount of lag, and comparing it with the known velocity of the mirror. Having got the time, he had only to compare that with the length of half the wire, and he could find the velocity of electricity. His results gave a calculated velocity of 288,000 miles per second, i.e. faster than what we now know to be the speed of light (about 186,000 miles per second), but were nonetheless an interesting approximation.
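In modern notation the reduction of the measurement is straightforward; the relation below is an illustrative reconstruction, not Wheatstone's own working, where f is the mirror's rotation rate, ω = 2πf its angular speed, θ the angular displacement of the middle spark's image (the reflected beam turns at twice the mirror's angular speed), and L the total length of the wire:

\Delta t = \frac{\theta}{2\omega} = \frac{\theta}{4\pi f}, \qquad v = \frac{L/2}{\Delta t} = \frac{2\pi f L}{\theta}.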
It was already appreciated by some scientists that the "velocity" of electricity was dependent on the properties of the conductor and its surroundings. Francis Ronalds had observed signal retardation in his buried electric telegraph cable (but not his airborne line) in 1816 and outlined its cause to be induction. Wheatstone witnessed these experiments as a youth, which were apparently a stimulus for his own research in telegraphy. Decades later, after the telegraph had been commercialised, Michael Faraday described how the velocity of an electric field in a submarine wire, coated with insulator and surrounded with water, is greatly reduced.
Wheatstone's device of the revolving mirror was afterwards employed by Léon Foucault and Hippolyte Fizeau to measure the relative speeds of light in air versus water, and later to measure the speed of light.
Spectroscopy
Wheatstone and others also contributed to early spectroscopy through the discovery and exploitation of spectral emission lines.
As John Munro wrote in 1891, "In 1835, at the Dublin meeting of the British Association, Wheatstone showed that when metals were volatilised in the electric spark, their light, examined through a prism, revealed certain rays which were characteristic of them. Thus the kind of metals which formed the sparking points could be determined by analysing the light of the spark. This suggestion has been of great service in spectrum analysis, and as applied by Robert Bunsen, Gustav Robert Kirchhoff, and others, has led to the discovery of several new elements, such as rubidium and thallium, as well as increasing our knowledge of the heavenly bodies."
Telegraph
Wheatstone abandoned his idea of transmitting intelligence by the mechanical vibration of rods, and took up the electric telegraph. In 1835 he lectured on the system of Baron Schilling, and declared that the means were already known by which an electric telegraph could be made of great service to the world. He made experiments with a plan of his own, and not only proposed to lay an experimental line across the Thames, but to establish it on the London and Birmingham Railway. Before these plans were carried out, however, he received a visit from William Cooke at his house in Conduit Street on 27 February 1837, which had an important influence on his future.
Cooperation with Cooke
Cooke was an officer in the Madras Army, who, being home on leave, was attending some lectures on anatomy at the University of Heidelberg, where, on 6 March 1836, he witnessed a demonstration with the telegraph of professor Georg Munke, and was so impressed with its importance, that he forsook his medical studies and devoted all his efforts to the work of introducing the telegraph. He returned to London soon after, and was able to exhibit a telegraph with three needles in January 1837. Feeling his want of scientific knowledge, he consulted Michael Faraday and Peter Roget (then secretary of the Royal Society): Roget sent him to Wheatstone.
At a second interview, Cooke told Wheatstone of his intention to bring out a working telegraph, and explained his method. Wheatstone, according to his own statement, remarked to Cooke that the method would not act, and produced his own experimental telegraph. Finally, Cooke proposed that they should enter into a partnership, but Wheatstone was at first reluctant to comply. He was a well-known man of science, and had meant to publish his results without seeking to make capital of them. Cooke, on the other hand, declared that his sole object was to make a fortune from the scheme. In May they agreed to join their forces, Wheatstone contributing the scientific, and Cooke the administrative talent. The deed of partnership was dated 19 November 1837. A joint patent was taken out for their inventions, including the five-needle telegraph of Wheatstone, and an alarm worked by a relay, in which the current, by dipping a needle into mercury, completed a local circuit, and released the detent of a clockwork.
The five-needle telegraph, which was mainly, if not entirely, due to Wheatstone, was similar to that of Schilling, and based on the principle enunciated by Ampère – that is to say, the current was sent into the line by completing the circuit of the battery with a make and break key, and at the other end it passed through a coil of wire surrounding a magnetic needle free to turn round its centre. According as one pole of the battery or the other was applied to the line by means of the key, the current deflected the needle to one side or the other. There were five separate circuits actuating five different needles. The latter were pivoted in rows across the middle of a dial shaped like a diamond, and having the letters of the alphabet arranged upon it in such a way that a letter was literally pointed out by the current deflecting two of the needles towards it.
Early installations
An experimental line, with a sixth return wire, was run between the Euston terminus and Camden Town station of the London and North Western Railway on 25 July 1837. The actual distance was only one and a half-miles (2.4 km), but spare wire had been inserted in the circuit to increase its length. It was late in the evening before the trial took place. Cooke was in charge at Camden Town, while Robert Stephenson and other gentlemen looked on; and Wheatstone sat at his instrument in a dingy little room, lit by a tallow candle, near the booking-office at Euston. Wheatstone sent the first message, to which Cooke replied: and "never" said Wheatstone, "did I feel such a tumultuous sensation before, as when, all alone in the still room, I heard the needles click, and as I spelled the words, I felt all the magnitude of the invention pronounced to be practicable beyond cavil or dispute."
In spite of this trial, however, the directors of the railway treated the 'new-fangled' invention with indifference, and requested its removal. In July 1839, however, it was favoured by the Great Western Railway, and a line erected from the Paddington station terminus to West Drayton railway station, a distance of . Part of the wire was laid underground at first, but subsequently all of it was raised on posts along the line. Their circuit was eventually extended to in 1841, and was publicly exhibited at Paddington as a marvel of science, which could transmit fifty signals a distance of 280,000 miles per minute (7,500 km/s). The price of admission was a shilling (£0.05), and in 1844 one fascinated observer recorded the following:
It is perfect from the terminus of the Great Western as far as Slough – that is, eighteen miles; the wires being in some places underground in tubes, and in others high up in the air, which last, he says, is by far the best plan. We asked if the weather did not affect the wires, but he said not; a violent thunderstorm might ring a bell, but no more. We were taken into a small room (we being Mrs Drummond, Miss Philips, Harry Codrington and myself – and afterwards the Milmans and Mr Rich) where were several wooden cases containing different sorts of telegraphs. In one sort every word was spelt, and as each letter was placed in turn in a particular position, the machinery caused the electric fluid to run down the line, where it made the letter show itself at Slough, by what machinery he could not undertake to explain. After each word came a sign from Slough, signifying "I understand", coming certainly in less than one second from the end of the word......Another prints the messages it brings, so that if no-one attended to the bell,....the message would not be lost. This is effected by the electrical fluid causing a little hammer to strike the letter which presents itself, the letter which is raised hits some manifold writing paper (a new invention, black paper which, if pressed, leaves an indelible black mark), by which means the impression is left on white paper beneath. This was the most ingenious of all, and apparently Mr. Wheatstone's favourite; he was very good-natured in explaining but understands it so well himself that he cannot feel how little we know about it, and goes too fast for such ignorant folk to follow him in everything. Mrs Drummond told me he is wonderful for the rapidity with which he thinks and his power of invention; he invents so many things that he cannot put half his ideas into execution, but leaves them to be picked up and used by others, who get the credit of them.
Public attention and success
The public took to the new invention after the capture of the murderer John Tawell, who in 1845, had become the first person to be arrested as the result of telecommunications technology. In the same year, Wheatstone introduced two improved forms of the apparatus, namely, the 'single' and the 'double' needle instruments, in which the signals were made by the successive deflections of the needles. Of these, the single-needle instrument, requiring only one wire, is still in use.
The development of the telegraph may be gathered from two facts. In 1855, the death of the Emperor Nicholas at St. Petersburg, about one o'clock in the afternoon, was announced in the House of Lords a few hours later. The result of The Oaks of 1890 was received in New York fifteen seconds after the horses passed the winning-post.
Differences with Cooke
In 1841 a difference arose between Cooke and Wheatstone as to the share of each in the honour of inventing the telegraph. The question was submitted to the arbitration of the famous engineer, Marc Isambard Brunel, on behalf of Cooke, and Professor Daniell, of King's College, the inventor of the Daniell cell, on the part of Wheatstone. They awarded to Cooke the credit of having introduced the telegraph as a useful undertaking which promised to be of national importance, and to Wheatstone that of having by his researches prepared the public to receive it. They concluded with the words: 'It is to the united labours of two gentlemen so well qualified for mutual assistance that we must attribute the rapid progress which this important invention has made during five years since they have been associated.' The decision, however vague, pronounces the needle telegraph a joint production. If it had mainly been invented by Wheatstone, it was chiefly introduced by Cooke. Their respective shares in the undertaking might be compared to that of an author and his publisher, but for the fact that Cooke himself had a share in the actual work of invention.
Further work on telegraphs
From 1836 to 1837 Wheatstone had thought a good deal about submarine telegraphs, and in 1840 he gave evidence before the Railway Committee of the House of Commons on the feasibility of the proposed line from Dover to Calais. He had even designed the machinery for making and laying the cable. In the autumn of 1844, with the assistance of J. D. Llewellyn, he submerged a length of insulated wire in Swansea Bay, and signalled through it from a boat to the Mumbles Lighthouse. Next year he suggested the use of gutta-percha for the coating of the intended wire across the English Channel.
In 1840 Wheatstone had patented an alphabetical telegraph, or, 'Wheatstone A B C instrument,' which moved with a step-by-step motion, and showed the letters of the message upon a dial. The same principle was used in his type-printing telegraph, patented in 1841. This was the first apparatus which printed a telegram in type. It was worked by two circuits, and as the type revolved a hammer, actuated by the current, pressed the required letter on the paper.
The introduction of the telegraph had so far advanced that, on 2 September 1845, the Electric Telegraph Company was registered, and Wheatstone, by his deed of partnership with Cooke, received a sum of £33,000 for the use of their joint inventions.
In 1859 Wheatstone was appointed by the Board of Trade to report on the subject of the Atlantic cables, and in 1864 he was one of the experts who advised the Atlantic Telegraph Company on the construction of the successful lines of 1865 and 1866.
In 1870 the electric telegraph lines of the United Kingdom, worked by different companies, were transferred to the Post Office, and placed under Government control.
Wheatstone further invented the automatic transmitter, in which the signals of the message are first punched out on a strip of paper (punched tape), which is then passed through the sending-key, and controls the signal currents. By substituting a mechanism for the hand in sending the message, he was able to telegraph about 100 words a minute, or five times the ordinary rate. In the Postal Telegraph service this apparatus is employed for sending Press telegrams, and it has recently been so much improved, that messages are now sent from London to Bristol at a speed of 600 words a minute, and even of 400 words a minute between London and Aberdeen. On the night of 8 April 1886, when Gladstone introduced his Bill for Home Rule in Ireland, no fewer than 1,500,000 words were dispatched from the central station at St. Martin's-le-Grand by 100 Wheatstone transmitters. The plan of sending messages by a running strip of paper which actuates the key was originally patented by Alexander Bain in 1846; but Wheatstone, aided by Augustus Stroh, an accomplished mechanician, and an able experimenter, was the first to bring the idea into successful operation. This system is often referred to as the Wheatstone Perforator and is the forerunner of the stock market ticker tape.
Optics
Stereopsis was first described by Wheatstone in 1838. In 1840 he was awarded the Royal Medal of the Royal Society for his explanation of binocular vision, a research which led him to make stereoscopic drawings and construct the stereoscope. He showed that our impression of solidity is gained by the combination in the mind of two separate pictures of an object taken by both of our eyes from different points of view. Thus, in the stereoscope, an arrangement of lenses or mirrors, two photographs of the same object taken from different points are so combined as to make the object stand out with a solid aspect. Sir David Brewster improved the stereoscope by dispensing with the mirrors, and bringing it into its existing form with lenses.
The 'pseudoscope' (Wheatstone coined the term from the Greek ψευδίς σκοπειν) was introduced in 1852, and is in some sort the reverse of the stereoscope, since it causes a solid object to seem hollow, and a nearer one to be farther off; thus, a bust appears to be a mask, and a tree growing outside of a window looks as if it were growing inside the room. Its purpose was to test his theory of stereo vision and for investigations into what would now be called experimental psychology.
Measuring time
In 1840, Wheatstone introduced his chronoscope, for measuring minute intervals of time, which was used in determining the speed of a bullet or the passage of a star. In this apparatus an electric current actuated an electro-magnet, which noted the instant of an occurrence by means of a pencil on a moving paper. It is said to have been capable of distinguishing 1/7300 part of a second (137 microseconds), and of measuring the time a body took to fall from a height of one inch (25 mm).
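For scale, the fall time through one inch follows from elementary kinematics (a worked check under the usual assumptions, not a figure from the source):

t = \sqrt{\tfrac{2h}{g}} \approx \sqrt{\tfrac{2 \times 0.0254}{9.81}} \approx 0.072\ \text{s},

about 72 milliseconds, which the instrument's quoted 1/7300 s (≈137 µs) resolution could subdivide several hundred times over.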
On 26 November 1840, he exhibited his electro-magnetic clock in the library of the Royal Society, and propounded a plan for distributing the correct time from a standard clock to a number of local timepieces. The circuits of these were to be electrified by a key or contact-maker actuated by the arbour of the standard, and their hands corrected by electro-magnetism. The following January Alexander Bain took out a patent for an electro-magnetic clock, and he subsequently charged Wheatstone with appropriating his ideas. It appears that Bain worked as a mechanist to Wheatstone from August to December 1840, and he asserted that he had communicated the idea of an electric clock to Wheatstone during that period; but Wheatstone maintained that he had experimented in that direction during May. Bain further accused Wheatstone of stealing his idea of the electro-magnetic printing telegraph; but Wheatstone showed that the instrument was only a modification of his own electro-magnetic telegraph.
In 1840, Alexander Bain mentioned to the Mechanics Magazine editor his financial problems. The editor introduced him to Sir Charles Wheatstone. Bain demonstrated his models to Wheatstone, who, when asked for his opinion, said "Oh, I shouldn't bother to develop these things any further! There's no future in them." Three months later Wheatstone demonstrated an electric clock to the Royal Society, claiming it was his own invention. However, Bain had already applied for a patent for it. Wheatstone tried to block Bain's patents, but failed. When Wheatstone organised an Act of Parliament to set up the Electric Telegraph Company, the House of Lords summoned Bain to give evidence, and eventually compelled the company to pay Bain £10,000 and give him a job as manager, causing Wheatstone to resign.
Polar clock
One of Wheatstone's most ingenious devices was the 'Polar clock,' exhibited at the meeting of the British Association in 1848. It is based on the fact discovered by Sir David Brewster, that the light of the sky is polarised in a plane at an angle of ninety degrees from the position of the sun. It follows that by discovering that plane of polarisation, and measuring its azimuth with respect to the north, the position of the sun, although beneath the horizon, could be determined, and the apparent solar time obtained.
The clock consisted of a spyglass, having a Nicol (double-image) prism for an eyepiece, and a thin plate of selenite for an object-glass. When the tube was directed to the North Pole—that is, parallel to the Earth's axis—and the prism of the eyepiece turned until no colour was seen, the angle of turning, as shown by an index moving with the prism over a graduated limb, gave the hour of day. The device is of little service in a country where watches are reliable; but it formed part of the equipment of the 1875–1876 North Polar expedition commanded by Captain Nares.
Wheatstone bridge
In 1843 Wheatstone communicated an important paper to the Royal Society, entitled 'An Account of Several New Processes for Determining the Constants of a Voltaic Circuit.' It contained an exposition of the well known balance for measuring the electrical resistance of a conductor, which still goes by the name of Wheatstone's Bridge or balance, although it was first devised by Samuel Hunter Christie, of the Royal Military Academy, Woolwich, who published it in the Philosophical Transactions for 1833. The method was neglected until Wheatstone brought it into notice.
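The balance condition that makes the bridge useful can be stated compactly; the labelling of the arms below follows modern convention rather than Wheatstone's paper. With ratio arms R_1 and R_2, a known adjustable resistance R_3 and the unknown R_x, no current flows through the galvanometer when

\frac{R_1}{R_2} = \frac{R_3}{R_x} \quad\Longrightarrow\quad R_x = R_3\,\frac{R_2}{R_1},

so the unknown resistance is read off from the known ones at the null point.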
His paper abounds with simple and practical formulae for the calculation of currents and resistances by the law of Ohm. He introduced a unit of resistance, namely, a foot of copper wire weighing one hundred grains (6.5 g), and showed how it might be applied to measure the length of wire by its resistance. He was awarded a medal for his paper by the Society. The same year he invented an apparatus which enabled the reading of a thermometer or a barometer to be registered at a distance by means of an electric contact made by the mercury. A sound telegraph, in which the signals were given by the strokes of a bell, was also patented by Cooke and Wheatstone in May of that year.
Cryptography
Wheatstone's remarkable ingenuity was also displayed in the invention of ciphers. He was responsible for the then unusual Playfair cipher, named after his friend Lord Playfair. It was used by the militaries of several nations through at least World War I, and is known to have been used during World War II by British intelligence services.
It was initially resistant to cryptanalysis, but methods were eventually developed to break it. He also became involved in the interpretation of cipher manuscripts in the British Museum. He devised a cryptograph or machine for turning a message into cipher which could only be interpreted by putting the cipher into a corresponding machine adjusted to decrypt it.
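To illustrate the digraph substitution at the heart of the cipher, here is a minimal Python sketch of Playfair encryption; the keyword, the helper names and the I/J-merging and X-padding conventions are common modern choices, assumed for illustration rather than taken from Wheatstone's own description.

def build_square(key):
    # 5x5 letter square from a keyword, with J merged into I
    square = []
    for ch in (key + "ABCDEFGHIKLMNOPQRSTUVWXYZ").upper():
        ch = "I" if ch == "J" else ch
        if ch.isalpha() and ch not in square:
            square.append(ch)
    return square  # 25 letters, row-major

def digraphs(text):
    # Keep letters only, merge J into I, split into pairs,
    # padding doubled letters and an odd final letter with X
    letters = [("I" if c == "J" else c) for c in text.upper() if c.isalpha()]
    pairs, i = [], 0
    while i < len(letters):
        a = letters[i]
        b = letters[i + 1] if i + 1 < len(letters) else "X"
        if a == b:
            pairs.append(a + "X")
            i += 1
        else:
            pairs.append(a + b)
            i += 2
    return pairs

def encrypt(plaintext, key):
    square = build_square(key)
    pos = {ch: divmod(idx, 5) for idx, ch in enumerate(square)}
    out = []
    for a, b in digraphs(plaintext):
        (ra, ca), (rb, cb) = pos[a], pos[b]
        if ra == rb:        # same row: take the letter to the right of each
            out.append(square[ra * 5 + (ca + 1) % 5] + square[rb * 5 + (cb + 1) % 5])
        elif ca == cb:      # same column: take the letter below each
            out.append(square[(ra + 1) % 5 * 5 + ca] + square[(rb + 1) % 5 * 5 + cb])
        else:               # rectangle: keep rows, swap columns
            out.append(square[ra * 5 + cb] + square[rb * 5 + ca])
    return " ".join(out)

print(encrypt("Hide the gold in the tree stump", "PLAYFAIR"))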
As an amateur mathematician, Wheatstone published a mathematical proof in 1854 (see Cube (algebra)).
Electrical generators
In 1840, Wheatstone brought out his magneto-electric machine for generating continuous currents.
On 4 February 1867, he published the principle of reaction in the dynamo-electric machine by a paper to the Royal Society; but Mr. C. W. Siemens had communicated the identical discovery ten days earlier, and both papers were read on the same day.
It afterwards appeared that Werner von Siemens, Samuel Alfred Varley, and Wheatstone had independently arrived at the principle within a few months of each other. Varley patented it on 24 December 1866; Siemens called attention to it on 17 January 1867; and Wheatstone exhibited it in action at the Royal Society on the above date.
Disputes over invention
Wheatstone was involved in various disputes with other scientists throughout his life regarding his role in different technologies and appeared at times to take more credit than he was due. As well as William Fothergill Cooke, Alexander Bain and David Brewster, mentioned above, these also included Francis Ronalds at the Kew Observatory. Wheatstone was erroneously believed by many to have created the atmospheric electricity observing apparatus that Ronalds invented and developed at the observatory in the 1840s and also to have installed the first automatic recording meteorological instruments there (see for example, Howarth, p158).
Personal life
Wheatstone married Emma West, spinster, a daughter of John Hooke West, deceased, at Christ Church, Marylebone, on 12 February 1847. The marriage was by licence.
See also
William Fothergill Cooke
Oliver Heaviside
Notes
References
Further reading
The Scientific Papers of Sir Charles Wheatstone (1879)
This article incorporates text from Heroes of the Telegraph by John Munro (1849–1930), published in 1891 and now in the public domain.
Jeans, W. T., The Lives of Electricians: Professors Tyndall, Wheatstone, and Morse (1887, Whittaker & Co.). Available at https://www.gutenberg.org/ebooks/73641
External links
Biographical material at Pandora Web Archive
Biographical sketch at Institute for Learning Technologies
Gravesite in Kensal Green, London
Charles Wheatstone at Cyber Philately
Charles Wheatstone at Open Library
English electrical engineers
English physicists
Optical physicists
English inventors
Concertina makers
People associated with electricity
19th-century cryptographers
Academics of King's College London
Fellows of the Royal Society
Members of the Royal Swedish Academy of Sciences
Recipients of the Pour le Mérite (civil class)
Recipients of the Copley Medal
People from Gloucester
1802 births
1875 deaths
British cryptographers
Royal Medal winners
Telegraph engineers and inventors
Knights of the Legion of Honour
Spectroscopists
Knights Bachelor
19th-century British physicists
19th-century English engineers | Charles Wheatstone | Physics,Chemistry | 7,207 |
75,115,484 | https://en.wikipedia.org/wiki/ASASSN-21qj | ASASSN-21qj, also known as 2MASS J08152329-3859234, is a Sun-like main sequence star with a rotating disk of circumstellar dust and gas which are leftovers from its stellar formation around 300 million years ago. The star is located 1,850 light years (567.2 parsecs) from Earth in the constellation of Puppis.
Planetary collision event
In 2021 the All-Sky Automated Survey for Supernovae reported that this star was rapidly fading, and the published Astronomer's Telegram asked for follow-up observations. On Twitter the astronomers Dr. Matthew Kenworthy and Dr. Eric Mamajek speculated about this object, and amateur astronomer Arttu Sainio made his own investigation, discovered a brightening in NEOWISE data, and then joined the discussion on social media. The star had brightened 2.5 years before the dimming event. More contributions came from amateur and professional astronomers, such as spectroscopic follow-up by amateur astronomers Hamish Barker, Sean Curry and the amateur Southern Spectroscopic project Observatory Team (2SPOT) members Stéphane Charbonnel, Pascal Le Dû, Olivier Garde, Lionel Mulato and Thomas Petit. Dr. Franz-Josef Hambsch observed this object with his remote observatory ROAD and submitted his observations to AAVSO. Other observations came from professional facilities, including ATLAS, ALMA, LCOGT and TESS.
In 2023, a scientific paper reported observations consistent with two ice-giant-type exoplanets of several to tens of Earth masses having undergone a planetary collision. The collision occurred at a distance of 2–16 AU (astronomical units) from the star. The infrared brightening is thought to be the result of dust produced by the disruption being heated by the collision, reaching a temperature of 1000 K (727°C; 1340°F), after which the dust slowly cooled and the cloud expanded in size. Together with the newly formed planet, the dust cloud orbited the star, and 1000 days later the dust moved in front of the star, causing a dimming event. Because the dust cloud had by then reached a large size, the dimming event lasted for 600 days. The newly formed planet itself did not cause a transit.
Another work also studied the event in detail and concluded that the event was produced by the breakup of exocomets. This paper was later mentioned in an author correction of the first work. The system has been observed with JWST, with the data being studied by researchers.
A few other planetary collisions were discovered in the past, such as around NGC 2547–ID8, HD 166191 and V488 Persei.
See also
List of extrasolar planetary collisions
BD+20°307
References
Puppis
J08152329-3859234 | ASASSN-21qj | Astronomy | 577 |
8,276,257 | https://en.wikipedia.org/wiki/Transformational%20Satellite%20Communications%20System | The Transformational Satellite Communications System (TSAT) program was a United States Department of Defense (DOD) program sponsored by the U.S. Air Force for a secure, high-capacity global communications network serving the Department of Defense, NASA and the United States Intelligence Community (IC). It was intended as an enabler of net-centric warfare that would facilitate defense and intelligence professionals making rapid decisions based on integrated, comprehensive information. In 2003, the project costs for the period up to 2015 were estimated at US$12 billion. In October 2008, the DoD announced that it was postponing a decision on choosing a contractor to build the system until 2010. In April 2009 Secretary of Defense Robert M. Gates asked that the project be canceled in its entirety.
Scope
The Transformational Satellite Communications System (TSAT) aimed to provide the Department of Defense (DoD) with high data rate Military Satellite Communications (MILSATCOM) and Internet-like services as defined in the Transformational Communications Architecture (TCA). TSAT would have supported global net-centric operations. As the spaceborne element of the Global Information Grid (GIG), TSAT would extend the GIG to users without terrestrial connections providing improved connectivity and data transfer capability, vastly improving satellite communications for the warfighter. TSAT's Internet Protocol (IP) routing would connect thousands of users through networks rather than limited point-to-point connections. TSAT would have enabled high data rate connections to Space and Airborne Intelligence, Surveillance, and Reconnaissance (SISR, AISR) platforms.
Capabilities and services
The TSAT program was planned as a five satellite constellation (a sixth satellite was planned as a spare to ensure mission availability), TSAT satellite operations centers (TSOC) for on-orbit control, TSAT Mission Operations Systems (TMOS) to provide network management, and ground gateways. The TMOS single contract was awarded in January 2006.
TSAT planned radio frequency (RF) and laser communications links to meet defense and intelligence community requirements for high data rate, protected communications. The space segment aimed to make use of key technology advancements that have proven mature by independent testing of integrated subsystem brassboards to achieve a transformational leap in SATCOM capabilities. These technologies include but are not limited to: single and multi-access laser communications (to include wide field-of-view technology), Internet protocol based packet switching, bulk and packet encryption/decryption, battle command-on-the-move antennas, dynamic bandwidth and resource allocation techniques, and protected bandwidth efficient modulation.
Chronology
An Interim Program Review was held 22 October 2004; the Milestone Decision Authority (MDA) directed the TSAT program to continue as planned to achieve the delivery, launch, and on-orbit checkout of the first TSAT satellite.
In June 2003, the acquisition strategy for TSAT was approved, as stated in the FY05 PB justification.
On 20 January 2004, the TSAT program entered Phase B, Risk Reduction and Design Development. Phase B space segment contracts (Cost Plus, Fixed Fee) were awarded to Lockheed Martin and Boeing in late Jan 04. A $300M FY05 Congressional reduction resulted in a first launch delay from FY12 to FY13. In response to the Congressional reduction, the Air Force adjusted the FY06/07 budget.
On January 27, 2006 TSAT Mission Operations System (TMOS) segment development contract, worth US$2+ Billion was awarded to Lockheed Martin.
In July 2007, Lockheed Martin and Northrop Grumman announced a plan to develop an IPv6-based networking system with Juniper Networks for the TSAT project. Boeing also engaged in development related to the program.
The results of the competition to select the final space segment development contractor were originally to be announced in October 2007. However, the Air Force deferred this announcement until second-quarter 2008.
FY07 efforts aimed to verify, through subsystem hardware testing in a space-like environment, that the technologies were mature. If a technology failed to mature, less-capable technology off-ramps existed and could be used to preserve the schedule. Even the technology off-ramps would significantly enhance warfighter capabilities, and the advanced technology could be 'spiraled' into a later spacecraft. First launch was scheduled for 2QFY13.
In October 2008, the DoD announced that it was deferring until 2010 a decision on choosing a contractor to build the system. The DoD did not announce whether it would continue to fund further development of the system in the interim.
In December 2008, the US Air Force released a new request for proposal (RFP) to Lockheed Martin and Boeing. The new proposal calls for five satellites and ground stations providing message and data routing for US Army units, including vehicles in the new Future Combat Systems, with the launch of the first satellite projected for 2019. The RFP requests that the new system use the specifications developed under the less-costly, designed backup system.
Program termination
On 6 April 2009, U.S. Secretary of Defense Gates announced the department's recommendations for the FY2010 budget. Among these recommendations was the plan to cancel the TSAT program. High cost, technological risk, and development delays were given as primary reasons, though some have argued that funding instability within the DoD was a primary cause of the protracted development timeline. As an interim replacement strategy, Secretary Gates recommended the fielding of two additional AEHF satellites.
References
Notes
External links
U.S. Air Force TSAT
Global Security TSAT
IPv6 TSAT announcement
See also
Department of Defense Architecture Framework
DoD Joint Technical Framework version 6.0
DoD Business Enterprise Architecture
Global Information Grid-Enterprise Services initiative
Department Of Defense Directive (DoDD) 8100.01 "Global Information Grid - Overarching Policy", September 2002
JTF-Global Network Operations
Global Information Grid
Communications satellite constellations
Applications of distributed computing
Military communications
IPv6 | Transformational Satellite Communications System | Engineering | 1,214 |
70,694,306 | https://en.wikipedia.org/wiki/Reduction-sensitive%20nanoparticles | Reduction-sensitive nanoparticles (RSNP) consist of nanocarriers that are chemically responsive to reduction. Drug delivery systems using RSNP can be loaded with different drugs that are designed to be released within a concentrated reducing environment, such as the tumor-targeted microenvironment. Reduction-Sensitive Nanoparticles provide an efficient method of targeted drug delivery for the improved controlled release of medication within localized areas of the body.
Redox sensitive nanoparticles vs. reduction sensitive nanoparticles
Nanoparticles are small in size with a maximized surface area and enhanced solubility; these properties result in improved bioavailability. Reduction-sensitive nanoparticles are nanoparticles that respond to reducing signaling environments. Redox-sensitive nanoparticles can be responsive to signaling through either reductive or oxidative activation; degradation of their chemical bonds can therefore be triggered by either oxidants or reductants in the localized area. The cleavage/degradation of these chemical bonds enables the drugs loaded within the nanoparticle to be released into the body. Redox-sensitive nanoparticles are thus classified as reduction-sensitive nanoparticles when the chemical activation mechanism is reduction.
Nanoparticle drug loading
Nanoparticle drug loading depends on the mass ratio of the loaded drug to the drug-loaded nanoparticle. Variables that must be considered include the pore volume, surface area, shape, and charge of the nanoparticle. The mode of drug loading depends on the type of drug being administered, which varies with the illness being treated.
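Expressed as the figures of merit commonly used for nanoparticle formulations (standard definitions assumed here rather than taken from a specific source):

\text{Drug loading (\%)} = \frac{m_{\text{drug encapsulated}}}{m_{\text{drug-loaded nanoparticle}}} \times 100, \qquad \text{Encapsulation efficiency (\%)} = \frac{m_{\text{drug encapsulated}}}{m_{\text{drug added}}} \times 100.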
Drug Release
One of the limitations of nanoparticles for drug delivery is insufficient or slow drug release. The rate of release is a critical element, since slowed drug release can prevent the proper treatment concentration from being reached. If the drug is not administered at sufficiently high concentrations, the result can be undertreatment of tumor cells with little to no effect; concentration thresholds must be met to initiate cell death among tumor cells. However, uncontrolled release of the treatment can also produce adverse side effects. RSNPs have improved rates of drug release, which improves the medication concentrations that can be delivered to a specific area.
RSNPs consist of reduction or redox-sensitive bonds. After administration in the body, the RSNP will eventually come into contact with the tumor microenvironment (TME). Nanoparticles can be synthesized to activate when exposed to selective characteristics of the tumor microenvironments. TMEs depict unique characteristics that create a differing microenvironment in comparison to healthy tissue. Thus, nanoparticles can be designed to react to the unique elements of TMEs such as the formation of a reducing environment. The reducing abilities of the TMEs are due to the expression of reducing agents. RSNPs are formulated to express reduction-sensitive bonds that are cleaved when exposed to reducing agents. After the reduction occurs the degradation of the nanoparticles commences and the loaded drugs begin to release.
Physicochemical characterization
RSNPs
The physicochemical characteristics of nanoparticles include size, shape, chemical composition, stability, topography, surface charge, and surface area. These characteristics can vary with the classification of the nanoparticle; for example, an RSNP can be classified as polymeric, micellar, or a lipid-polymer hybrid. The reduction sensitivity of nanoparticles relies on the reduction-responsive chemical structures incorporated into the nanoparticle. Reduction occurs when the number of electrons in a chemical species increases. Reduction-sensitive nanoparticles exhibit high plasma stability and quick responsiveness/activation. The reducing environment of tumor cells is greatly influenced by the oxidation and reduction states of NADPH/NADP+ and glutathione.
Tumor microenvironment
For the effective application of RSNPs, the physicochemical characteristics of the tumor microenvironment must also be considered. Characteristics of the TME include tumor hypoxia, angiogenesis, altered metabolism, acidosis, and reactive oxygen species (ROS), among others. These elements of the tumor microenvironment affect the reduction-inducing environment. Tumor cells abnormally regulate redox homeostasis, leading to shifts in the redox balance and increases in ROS levels. Research has shown that increased levels of ROS are correlated with high levels of antioxidant activity, such as intracellular GSH.
Reducing agents
Glutathione (GSH) or γ-glutamyl-cysteinyl-glycine is a critical biological reducing agent for drug delivery applications; it creates an effective reducing environment in the cytosol and nucleus of a cell. Glutathione is an antioxidant that is naturally produced in the liver and takes part in tissue building, tissue repair, immune responses, chemical production, and protein production. GSH is also a significant signaler of cell differentiation, proliferation, apoptosis, and ferroptosis. Furthermore, the glutathione concentration in the tumor microenvironment is reportedly at least four times higher compared to regular tissue. This is due to the high metabolic needs of tumor cells; for example, the rapid proliferation rates of tumor cells.
The over-expression of nicotinamide adenine dinucleotide phosphate (NADPH) can lead to higher ROS levels. NADPH is present at a lower concentration than GSH in the reducing environment. NADPH is an electron donor that exists in all organisms; it is used as a source of reducing power to drive anabolic reactions and maintain redox balance. The reduction and oxidation states of NADPH/NADP+ influence the reducing responsiveness of the environment. Cancer cells express a unique NADPH homeostasis due to adaptive alterations of signaling pathways and metabolic enzymes.
Subtypes
Reduction sensitive bonds
Disulfide bonds
Redox-sensitive nanoparticles with disulfide bonds are commonly reported in medical research. RSNPs can contain disulfide bonds that are cleaved when introduced to a reducing condition; the reduction of disulfide bonds by glutathione results in the formation of sulfhydryl groups. At high concentrations of GSH, the disulfide bonds can be cleaved. Following this activation, degradation of the drug carrier results in drug release. These linkages are commonly used between hydrophilic and hydrophobic segments in copolymers; moreover, an RSNP's hydrophilic shell will degrade in response to the reducing environment. The disulfide bonds serve as linkers and cross-linking agents, and can be expressed attached to side chains, the backbone, on the surface, or as linkages between moieties.
Disulfide bonds can also act as cross-linking agents in micelle nanoparticles. Micelles lack structural stability as nanocarriers for drug delivery; this can result in loss of the drug after administration and before it reaches the target site, and such improper release of medication can cause adverse side effects. Disulfide cross-links can be introduced to increase the structural stability of micelle nanocarriers; in general, these cross-links are located in the shell or the core of the micelle.
Diselenide bonds
Redox-sensitive nanoparticles with diselenide bonds show reduction responsiveness comparable to that of disulfide bonds. A diselenide consists of two selenium atoms bonded to an additional element or radical. Diselenide bonds are dynamic covalent bonds that can be exchanged between molecules. Diselenide bonds have an estimated bond energy of 172 kJ/mol, whereas disulfide bonds have an estimated bond energy of 268 kJ/mol; the lower bond energy offers greater potential for designing more sensitive redox-responsive delivery systems. Diselenide bonds have been attached to the hydrophobic blocks of amphiphilic triblock or hyperbranched copolymers to create micelles.
Succinimide-thioether bonds
Succinimide-thioether linkages express sensitivity to reducing environments and can be cleaved as a result. Succinimide-thioether bonds show slower rates of release in comparison to disulfide bonds; however, succinimide-thioether nanoparticles are still sensitive to the reducing environment and are cleaved by GSH for fast intracellular release.
Trimethyl benzoquinone bonds
Nanoparticles with trimethyl benzoquinone (TMBQ) have demonstrated responsiveness to reducing environments. The experiments conducted so far on TMBQ are limited and do not capture the full scope of TMBQ nanoparticles in delivery systems.
Development/Synthesis
The synthesis of reduction-sensitive nanoparticles depends on the mechanism subtype of the nanoparticle, and can vary within a subtype class depending on how the reduction-sensitive bonds are expressed: the bonds may be attached to the backbone, to side chains, on the surface, and so on. Reduction-sensitivity mechanisms have been studied in polymeric, lipid–polymer hybrid, and micelle nanoparticles, and the production method depends on the delivery design chosen for the nanoparticle. Polymeric nanoparticles are synthesized by adding an electrolyte-saturated or nonelectrolyte-saturated solution to a water-miscible solvent under constant stirring. Lipid micelles are formed by the self-assembly of amphiphilic molecules. Lipid–polymer hybrids can be synthesized by several methods, including the single-step method, the two-step method, nanoprecipitation, emulsification-solvent evaporation, and a non-conventional two-step method.
Advantages
Reduction-sensitive nanoparticles provide a mode of localized drug delivery by targeting elements of the tumor microenvironment. RSNPs offer high stability against hydrolytic degradation, fast responsiveness to the intracellular reducing environment, and drug release in the cytosol and cell nucleus. Release in the cytosol and nucleus has shown potential for the effective administration of more potent and poorly soluble anticancer drugs. The rapid release of RSNPs may also offer an effective treatment for multidrug-resistant tumors, addressing an important limitation of nanoparticles: nanoparticle drug delivery often exhibits slow drug release, which can leave the nanomedicine at concentrations too low to kill tumor cells. Polymeric RSNPs, for example those based on carbohydrate polymers, have shown improved solubility, stability and biocompatibility, and decreased drug toxicity.
Limitations
The effectiveness of reduction-sensitive nanoparticles depends on the responsiveness of the RSNP throughout the body. The tumor and inflammatory microenvironments contain higher concentrations of reducing agents than healthy cells, but healthy cells still express GSH and NADPH, so RSNPs must be designed to respond only to the higher concentrations in order to distinguish cancer cells from healthy cells. Other limitations depend on the type of nanoparticle: micelle nanoparticles, for example, have lower physical stability, which can lead to drug loss and release at unwanted locations, while polymeric nanoparticles cannot effectively target the tumor and often release the drug too early.
Applications
Tumor/cancer treatments
Reduction-sensitive nanoparticles are used as nanomedicines for drug delivery. As nanocarriers, RSNPs can be loaded with drugs for disease therapeutics, most commonly in tumor and cancer treatments. Cancer cells create the reducing environments used for RSNP activation, and RSNPs can also increase the penetration of the treatment into cancer cells. Specific applications include, but are not limited to, breast cancer, liver cancer (hepatoma), melanoma, lung cancer, malignant glioma, ovarian cancer, cervical cancer, subcutaneous EAT, pancreatic cancer, colon cancer, and prostate cancer.
Inflammatory diseases
The development of RSNPs for inflammatory diseases has been explored to a lesser extent, although in recent years reduction- and redox-sensitive nanoparticles have gained momentum in this area. Research has been conducted to evaluate the potential of RSNPs as a therapeutic for inflammatory bowel disease, using an activation mechanism based on combined pH and redox sensitivity. The outcomes demonstrated higher selectivity for the reducing potential, establishing the promising potential of RSNPs for the treatment of inflammatory bowel disease. Other studies have demonstrated potential applications as activatable magnetic resonance contrast agents; these proposed agents would help detect and monitor the treatment of inflammatory diseases by exploiting redox dysregulation.
References
Nanoparticles by physical property
Drug delivery devices | Reduction-sensitive nanoparticles | Chemistry | 2,756 |
3,362,363 | https://en.wikipedia.org/wiki/Nicotiana%20benthamiana | Nicotiana benthamiana, colloquially known as benth or benthi, is a species of Nicotiana indigenous to Australia. It is a close relative of tobacco.
A synonym for this species is Nicotiana suaveolens var. cordifolia, a description given by George Bentham in Flora Australiensis in 1868. This was transferred to Nicotiana benthamiana by Karel Domin in Bibliotheca Botanica (1929), honoring the original author in the specific epithet.
History
The plant was used by people of Australia as a stimulant, containing nicotine and other alkaloids, before the introduction of commercial tobacco (N. tabacum and N. rustica). Indigenous names for it include tjuntiwari and muntju. It was first collected on the north coast of Australia by Benjamin Bynoe on a voyage of HMS Beagle in 1837.
Description
The herbaceous plant is found amongst rocks on hills and cliffs throughout the northern regions of Australia. The species is variable in height and habit, and may be erect or low and sprawling. The flowers are white.
Research uses
N. benthamiana has been used as a model organism in plant research. For example, the leaves are rather frail and can be injured in experiments to study ethylene synthesis. Ethylene is a plant hormone which is secreted, among other situations, after injuries. Using gas chromatography, the quantity of ethylene emitted can be measured. Due to the large number of plant pathogens able to infect it, N. benthamiana is widely used in the field of plant virology. It is also an excellent target plant for agroinfiltration.
N. benthamiana has a number of wild strains across Australia, and the laboratory strain is an extremophile originating from a population that has retained a loss-of-function mutation in Rdr1 (RNA-dependent RNA polymerase 1), rendering it hypersusceptible to viruses.
Biotechnology
N. benthamiana is also a common plant used for "pharming" of monoclonal antibodies and other recombinant proteins; for example, the drug ZMapp was produced using these plants.
GMO
Cocaine
In 2022, a genetically engineered N. benthamiana was developed that was able to produce 25% of the amount of cocaine found in a coca plant.
COVID-19 vaccine development
The Quebec City-based biotechnology company, Medicago Inc., uses N. benthamiana as a "factory" to produce virus-like particles over short incubation periods (days) and in high volume, enabling rapid manufacturing capability for a potential COVID-19 vaccine.
In February 2022, Health Canada authorised use of the COVID-19 vaccine called CoVLP (brand name Covifenz) developed from N. benthamiana for preventing infection in adults 18 to 64 years old.
References
benthamiana
Plant models
Tobacco
Tobacco in Australia
Cocaine | Nicotiana benthamiana | Biology | 615 |
417,014 | https://en.wikipedia.org/wiki/Passive%20transport | Passive transport is a type of membrane transport that does not require energy to move substances across cell membranes. Instead of using cellular energy, like active transport, passive transport relies on the second law of thermodynamics to drive the movement of substances across cell membranes. Fundamentally, substances follow Fick's first law, and move from an area of high concentration to an area of low concentration because this movement increases the entropy of the overall system. The rate of passive transport depends on the permeability of the cell membrane, which, in turn, depends on the organization and characteristics of the membrane lipids and proteins. The four main kinds of passive transport are simple diffusion, facilitated diffusion, filtration, and osmosis.
Passive transport follows Fick's first law.
Diffusion
Diffusion is the net movement of material from an area of high concentration to an area with lower concentration. The difference of concentration between the two areas is often termed as the concentration gradient, and diffusion will continue until this gradient has been eliminated. Since diffusion moves materials from an area of higher concentration to an area of lower concentration, it is described as moving solutes "down the concentration gradient" (compared with active transport, which often moves material from area of low concentration to area of higher concentration, and therefore referred to as moving the material "against the concentration gradient").
However, in many cases (e.g. passive drug transport) the driving force of passive transport cannot be reduced simply to the concentration gradient. If the solutions on the two sides of the membrane have different equilibrium solubilities for the drug, the difference in the degree of saturation is the driving force of passive membrane transport. This also holds for supersaturated solutions, which are increasingly important owing to the spread of amorphous solid dispersions for drug bioavailability enhancement.
Simple diffusion and osmosis are in some ways similar. Simple diffusion is the passive movement of solute from a high concentration to a lower concentration until the concentration of the solute is uniform throughout and reaches equilibrium. Osmosis is much like simple diffusion but it specifically describes the movement of water (not the solute) across a selectively permeable membrane until there is an equal concentration of water and solute on both sides of the membrane. Simple diffusion and osmosis are both forms of passive transport and require none of the cell's ATP energy.
Speed of diffusion
For passive diffusion, the law of diffusion states that the mean squared displacement is ⟨x²⟩ = 2dDt, with d being the number of dimensions and D the diffusion coefficient. So to diffuse a distance of about x takes a time on the order of t ≈ x²/(2dD), and the "average speed" is x/t ≈ 2dD/x. This means that in the same physical environment, diffusion is fast when the distance is small, but slow when the distance is large.
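As a rough numerical illustration of this scaling, the following Python sketch estimates diffusion times from ⟨x²⟩ = 2dDt; the diffusion coefficient (roughly that of a small molecule in water) and the example distances are assumed values chosen only for illustration.

import math

D = 1e-9   # assumed diffusion coefficient of a small molecule in water, m^2/s
d = 3      # number of spatial dimensions

def diffusion_time(x):
    """Time to cover a root-mean-square distance x, from <x^2> = 2*d*D*t."""
    return x**2 / (2 * d * D)

# 1 um (bacterium), 10 um (eukaryotic cell), 1 mm (macroscopic distance)
for x in [1e-6, 10e-6, 1e-3]:
    t = diffusion_time(x)
    print(f"x = {x*1e6:8.1f} um -> t = {t:10.3e} s, average speed = {x/t:.3e} m/s")

The printed times grow with the square of the distance, which is why diffusion alone suffices inside a small prokaryotic cell but not over millimetre scales.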
This can be seen in material transport within the cell. Prokaryotes typically have small bodies, allowing diffusion to suffice for material transport within the cell. Larger cells like eukaryotes would either have very low metabolic rate to accommodate the slowness of diffusion, or invest in complex cellular machinery to allow active transport within the cell, such as kinesin walking along microtubules.
Example of diffusion: gas exchange
A biological example of diffusion is the gas exchange that occurs during respiration within the human body. Upon inhalation, oxygen is brought into the lungs and quickly diffuses across the membrane of alveoli and enters the circulatory system by diffusing across the membrane of the pulmonary capillaries. Simultaneously, carbon dioxide moves in the opposite direction, diffusing across the membrane of the capillaries and entering into the alveoli, where it can be exhaled. The process of moving oxygen into the cells, and carbon dioxide out, occurs because of the concentration gradient of these substances, each moving away from their respective areas of higher concentration toward areas of lower concentration. Cellular respiration is the cause of the low concentration of oxygen and high concentration of carbon dioxide within the blood which creates the concentration gradient. Because the gases are small and uncharged, they are able to pass directly through the cell membrane without any special membrane proteins. No energy is required because the movement of the gases follows Fick's first law and the second law of thermodynamics.
Facilitated diffusion
Facilitated diffusion, also called carrier-mediated osmosis, is the movement of molecules across the cell membrane via special transport proteins that are embedded in the plasma membrane. Through facilitated diffusion, energy is not required for molecules to pass through the cell membrane. Active transport of protons by H+ ATPases alters the membrane potential, allowing facilitated passive transport of particular ions, such as potassium, down their charge gradient through high-affinity transporters and channels.
Example of facilitated diffusion: GLUT2
An example of facilitated diffusion is when glucose is absorbed into cells through Glucose transporter 2 (GLUT2) in the human body. There are many other types of glucose transport proteins, some that do require energy, and are therefore not examples of passive transport. Since glucose is a large molecule, it requires a specific channel to facilitate its entry across plasma membranes and into cells. When diffusing into a cell through GLUT2, the driving force that moves glucose into the cell is the concentration gradient. The main difference between simple diffusion and facilitated diffusion is that facilitated diffusion requires a transport protein to 'facilitate' or assist the substance through the membrane. After a meal, the cell is signaled to move GLUT2 into membranes of the cells lining the intestines called enterocytes. With GLUT2 in place after a meal and the relative high concentration of glucose outside of these cells as compared to within them, the concentration gradient drives glucose across the cell membrane through GLUT2.
Filtration
Filtration is movement of water and solute molecules across the cell membrane due to hydrostatic pressure generated by the cardiovascular system. Depending on the size of the membrane pores, only solutes of a certain size may pass through it. For example, the membrane pores of the Bowman's capsule in the kidneys are very small, and only albumins, the smallest of the proteins, have any chance of being filtered through. On the other hand, the membrane pores of liver cells are extremely large, allowing a variety of solutes to pass through and be metabolized.
Osmosis
Osmosis is the net movement of water molecules across a selectively permeable membrane from an area of high water potential to an area of low water potential. A cell with a more negative water potential will draw in water, although this also depends on other factors such as solute potential (due to the solute molecules inside the cell) and pressure potential (external pressure, e.g. from the cell wall). There are three types of osmotic solution: isotonic, hypotonic, and hypertonic. An isotonic solution is one in which the extracellular solute concentration is balanced with the concentration inside the cell. In an isotonic solution, water molecules still move between the solutions, but the rates are the same in both directions, so the water movement is balanced between the inside and the outside of the cell. A hypotonic solution has a solute concentration outside the cell that is lower than the concentration inside the cell. In hypotonic solutions, water moves into the cell, down its concentration gradient (from higher to lower water concentration), which can cause the cell to swell; cells that lack a cell wall, such as animal cells, can burst in this solution. A hypertonic solution has a solute concentration higher than the concentration inside the cell. In a hypertonic solution, water moves out of the cell, causing it to shrink.
See also
Active transport
Transport phenomena
References
Transport phenomena
Cellular processes
Membrane biology
Physiology
Cell biology | Passive transport | Physics,Chemistry,Engineering,Biology | 1,646 |
16,647,678 | https://en.wikipedia.org/wiki/Segmented%20spindle | A segmented spindle, also known by the trademark Kataka, is a specialized mechanical linear actuator conceived by the Danish mechanical engineer Jens Joerren Soerensen during the mid-1990s. The actuator forms a telescoping tubular column, or spindle, from linked segments resembling curved parallelograms. The telescoping linear actuator has a lifting capacity up to 200 kg (~440 pounds) for a travel of 400 mm (~15.75 inches).
A short elongated housing forms the base of the actuator and includes an electrical gear drive and a storage magazine for the spindle segments. The drive spins a helically grooved wheel that engages the similarly grooved inside face of the spindle segments. As the wheel spins, it simultaneously pulls the segments from their horizontal arrangement in the magazine and stacks them along the vertical path of a helix into a rigid tubular column. The reverse process lowers the column.
See also
Helical band actuator
Rigid belt actuator
Rigid chain actuator
References
External links
Kataka web site
Actuators
Hardware (mechanical)
Gears | Segmented spindle | Physics,Technology,Engineering | 230 |
11,422,187 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20Z185 | In molecular biology, Small nucleolar RNA Z185 is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA.
snoRNA Z185 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs.
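As a rough illustration of how the conserved box motifs can be searched for in a candidate RNA sequence, a minimal Python sketch follows; the example sequence is invented, and real box C/D snoRNA identification relies on additional features such as the terminal stem and the antisense guide element.

# Hypothetical example sequence; real snoRNA annotation uses more evidence than motif hits.
seq = "GGAUGAUGAUCCUUAGCAAUGCGGAAUUCGACUGAGG"

def find_motif(sequence, motif):
    """Return 0-based start positions of every occurrence of motif."""
    return [i for i in range(len(sequence) - len(motif) + 1)
            if sequence[i:i + len(motif)] == motif]

c_boxes = find_motif(seq, "UGAUGA")   # C box, usually near the 5' end
d_boxes = find_motif(seq, "CUGA")     # D box, usually near the 3' end
print("C box positions:", c_boxes)
print("D box positions:", d_boxes)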
Plant snoRNA Z185 was identified in a screen of Oryza sativa.
References
External links
Small nuclear RNA | Small nucleolar RNA Z185 | Chemistry | 197 |
50,601,254 | https://en.wikipedia.org/wiki/ULTRA%20%28machine%20translation%20system%29 | ULTRA is a machine translation system created for five languages (Japanese, Chinese, Spanish, English, and German) in the Computing Research Laboratory in 1991.
ULTRA (Universal Language Translator) is a machine translation system developed at the Computing Research Laboratory which can translate between five languages (Japanese, Chinese, Spanish, English and German). It uses artificial intelligence as well as linguistic and logic programming methods. The main goal of the system is to be robust, to cover general language and to be simple to use. It uses bidirectional parsers/generators.
The system has a language-independent system of intermediate representation, meaning that it takes into account the needs of expression (expression being one of the main elements of language), and it uses relaxation techniques to provide the best translation. It used an X Window user interface.
ULTRA's databases
ULTRA has vocabularies based on about 10,000 word senses in each of its five languages.
It represents expressions.
It has access to many dictionary databases.
Operation
Users paste a sentence into the "source" window. They choose a target language and press Translate. The tool translates the source text, taking into consideration what is said, how it is said and why it is said.
Lexical entries in the system have two parts:
a language-specific entry corresponding to the graphic form, which represents some kind of information or sense, and
an intermediate representation giving the proper forms that represent the sense of the expression.
ULTRA works with an intermediate representation of language between the systems, so no transfer takes place. Each language has its own analysis and generation system, and these are independent of one another. This independence gives an extra benefit: adding another language does not disrupt the translations of the existing languages.
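A minimal Python sketch of the interlingua idea described above: each language contributes only its own analyser and generator around one shared intermediate representation, so adding a language leaves the others untouched. The toy functions, the single "greeting" concept and the example sentences are invented for illustration and are not ULTRA's actual representation.

# Toy interlingua pipeline: every language maps to and from one shared
# intermediate representation (IR), so there are no language-pair transfer modules.
def en_analyse(text):
    return {"act": "greeting"} if "hello" in text.lower() else {"act": "unknown"}

def en_generate(ir):
    return "Hello!" if ir["act"] == "greeting" else "(untranslatable)"

def es_analyse(text):
    return {"act": "greeting"} if "hola" in text.lower() else {"act": "unknown"}

def es_generate(ir):
    return "¡Hola!" if ir["act"] == "greeting" else "(intraducible)"

LANGUAGES = {
    "English": (en_analyse, en_generate),
    "Spanish": (es_analyse, es_generate),
}

def translate(text, source, target):
    analyse, _ = LANGUAGES[source]
    _, generate = LANGUAGES[target]
    ir = analyse(text)          # language-specific analysis into the shared IR
    return generate(ir)         # language-specific generation out of the IR

print(translate("Hello there", "English", "Spanish"))   # prints: ¡Hola!
# Adding German would mean registering only a German analyser and generator;
# the English and Spanish entries are untouched.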
Intermediate representation
Developers David Farwell and Yorick Wilks created IR (interlingual representation). It was a base for analyzing and generating expressions.
They analyzed many different types of communications (business letters, documents, emails) to compare communication styles. ULTRA looks for the best words for a given kind of information, and for good forms and equivalents of an expression in the target language.
References
External
Austermuhl Frank, ″Electronic tools for translations″, Manchester 2001
Wilks Yorick, ″Machine Translation. Its Scope and Limits.″, Springer Science+Business Media LLC 2009
Farwell David, Wilks Yorick, "ULTRA: A multilingual machine translator", Washington 1991
Computational linguistics | ULTRA (machine translation system) | Technology | 480 |
74,367,386 | https://en.wikipedia.org/wiki/Motorized%20potentiometer | A motorized potentiometer combines a potentiometer with an electric motor.
Uses
Motorized potentiometers can be found in audio/video equipment, specifically mixing consoles. In this application, they are called motorized faders. Mixing consoles with motorized faders are typically used to save and restore settings on the same console and sometimes to transfer settings to a different console. Save and restore also makes it possible to control more channels than there are sliders, by switching which tracks are controlled. While historically the faders were literal motorized potentiometers, nowadays faders may directly digitize the fader position and apply the value digitally in the digital signal processing.
Motorized potentiometers are used in industrial controls.
Motorized potentiometers may be used for remote control applications.
Motorized potentiometers can be used to build electrical/electronic analog computers. The motorized potentiometer can act as a computing element, but also as a way to convert a physical into an electrical value.
Radio control servo motors use a potentiometer as feedback for the servo position.
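As an illustration of the feedback principle shared by motorized faders and RC servos, here is a minimal Python sketch of a proportional position loop in which the potentiometer reading is compared with the requested position and the motor is driven until they match; the gain, time step, tolerance and idealised noise-free motor model are assumptions chosen only for illustration.

# Proportional position control of a motorized potentiometer (idealised model).
def move_to(target, position=0.0, gain=4.0, dt=0.01, tolerance=0.002):
    """Drive the wiper toward `target` (both in normalised 0..1 travel)."""
    for step in range(1000):
        error = target - position                 # potentiometer reading vs. request
        if abs(error) < tolerance:
            return position, step
        drive = max(-1.0, min(1.0, gain * error)) # clamp motor drive to +/- full speed
        position += drive * dt                    # idealised motor: speed proportional to drive
    return position, step

pos, steps = move_to(0.75)
print(f"settled at {pos:.3f} after {steps} control steps")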
Features
Some motorized potentiometers allow both manual and motorized operation.
Motorized potentiometers can be slide or rotary potentiometers. There also exist multiple turn motorized potentiometers.
The end of travel may be detected using limit switches, a peak in motor current as the mechanism stalls, or a separate resistive element used for position feedback.
History
Given that the history of the motorized potentiometer is linked to electronic analog computers, and electronic analog computers to military use, record keeping and publication were limited, which also means that parallel invention was highly likely. The M9 Gun Director had a potentiometer controlled by op amps. The Bomben-Abwurfrechner BT-9 had a motor-driven potentiometer to convert a pressure into a potentiometer setting.
In 1968 a patent was filed describing a motor-potentiometer combination where the motor only engages when energized, allowing manual operation.
In 1970 a patent was filed describing a motor-potentiometer with overload clutch and interchangeable gear ratio.
Manufacturers
Manufacturers include, for example, Alps Electric.
References
Resistive components
Analog computers | Motorized potentiometer | Physics | 445 |
22,829,996 | https://en.wikipedia.org/wiki/Extremal%20orders%20of%20an%20arithmetic%20function | In mathematics, specifically in number theory, the extremal orders of an arithmetic function are best possible bounds of the given arithmetic function. Specifically, if f(n) is an arithmetic function and m(n) is a non-decreasing function that is ultimately positive and
lim inf_{n→∞} f(n)/m(n) = 1,
we say that m is a minimal order for f. Similarly, if M(n) is a non-decreasing function that is ultimately positive and
lim sup_{n→∞} f(n)/M(n) = 1,
we say that M is a maximal order for f. Here, lim inf and lim sup denote the limit inferior and limit superior, respectively.
The subject was first studied systematically by Ramanujan starting in 1915.
Examples
For the sum-of-divisors function σ(n) we have the trivial result lim inf_{n→∞} σ(n)/n = 1, because always σ(n) ≥ n and for primes σ(p) = p + 1. We also have lim sup_{n→∞} σ(n)/(n ln ln n) = e^γ, proved by Gronwall in 1913. Therefore n is a minimal order and e^γ n ln ln n is a maximal order for σ(n).
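A quick numerical illustration of Gronwall's maximal order, as a Python sketch: the ratio σ(n)/(n ln ln n) is evaluated at a few hand-picked highly divisible values of n and compared with e^γ ≈ 1.781. This is only a sanity check of the statement above, not a proof, and the sample values are arbitrary.

import math

def sigma(n):
    """Sum of divisors of n by trial division."""
    total = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

# sigma(n) / (n ln ln n) hovers near e^gamma for highly divisible n
# (small n can overshoot it; the lim sup is exactly e^gamma).
for n in [60, 5040, 55440, 720720, 2162160]:
    print(n, round(sigma(n) / (n * math.log(math.log(n))), 4))
print("e^gamma =", round(math.exp(0.5772156649), 4))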
For the Euler totient φ(n) we have the trivial result lim sup_{n→∞} φ(n)/n = 1, because always φ(n) ≤ n and for primes φ(p) = p − 1. We also have lim inf_{n→∞} φ(n) ln ln n / n = e^{−γ}, proven by Landau in 1903.
For the number of divisors function d(n) we have the trivial lower bound 2 ≤ d(n), in which equality occurs when n is prime, so 2 is a minimal order. For ln d(n) we have a maximal order (ln 2) ln n / ln ln n, proved by Wigert in 1907.
For the number of distinct prime factors ω(n) we have a trivial lower bound 1 ≤ ω(n), in which equality occurs when n is a prime power. A maximal order for ω(n) is ln n / ln ln n.
For the number of prime factors counted with multiplicity Ω(n) we have a trivial lower bound 1 ≤ Ω(n), in which equality occurs when n is prime. A maximal order for Ω(n) is ln n / ln 2.
It is conjectured that the Mertens function, or summatory function of the Möbius function, satisfies lim sup_{x→∞} |M(x)|/√x = +∞, though to date this limit superior has only been shown to be larger than a small constant. This statement is compared with the disproof of the Mertens conjecture given by Odlyzko and te Riele in their several decades old breakthrough paper Disproof of the Mertens Conjecture. In contrast, we note that while extensive computational evidence suggests that the above conjecture is true, i.e., along some increasing sequence of x tending to infinity |M(x)|/√x grows unbounded, the Riemann hypothesis is equivalent to the limit lim_{x→∞} M(x)/x^{1/2+ε} = 0 being true for all (sufficiently small) ε > 0.
See also
Average order of an arithmetic function
Normal order of an arithmetic function
Notes
Further reading
A survey of extremal orders, with an extensive bibliography.
Arithmetic functions | Extremal orders of an arithmetic function | Mathematics | 561 |
3,569,061 | https://en.wikipedia.org/wiki/Ciena | Ciena Corporation is an American networking systems and software company based in Hanover, Maryland. The company has been described by The Baltimore Sun as the "world's biggest player in optical connectivity". The company reported revenues of $4 billion and more than 8,500 employees, . Gary Smith serves as president and chief executive officer (CEO).
Customers include AT&T, Deutsche Telekom, KT Corporation and Verizon Communications.
History
Early history and initial public offering
Ciena was founded in 1992 under the name HydraLite by electrical engineer David R. Huber. Huber served as chief executive officer, while his former employer, Optelecom, an optical networking company, provided "management assistance and production facilities," and co-founder Kevin Kimberlin "provided initial equity capital during the formation of the Company". Huber engaged William K. Woodruff & Co. to raise $3 million in venture funding in September 1993. Woodruff presented the idea to Jon Bayless at Sevin Rosen in November 1993, which resulted in Sevin Rosen investing $3 million on April 10, 1994. William K. Woodruff & Co. was a co-manager of Ciena's IPO in February 1997. The company subsequently received funding from Sevin Rosen Funds as a result of a demonstration at its laboratory attended by Jon Bayless, a partner at the firm, who saw the value in applying HydraLite's fiber-optic technology to cable television. Sevin Rosen offered funding immediately, investing $1.25 million in April 1994.
Ciena received $40 million in venture capital financing, including $3.3 million from Sevin Rosen Funds. Other early investors in the company included Charles River Ventures, Japan Associated Finance Co., Star Venture, and Vanguard Venture Partners. Bayless also recruited physicist Patrick Nettles, a former colleague at the telecommunications company Optilink, to serve as Ciena's first CEO, and Lawrence P. Huang, another former colleague, to accept the sales chief role. Huber and Nettles, who changed the company's name to Ciena, began working from an office in Dallas in February 1994; Huber would remain with Ciena until 1995.
The name of the company was changed to Ciena in 1994. Its first products were introduced in May 1996, with Sprint Corporation as the first customer. At $195 million, the company's first-year sales were the highest ever recorded by a startup at the time, with $54.8 million from Sprint alone by November 1996. WorldCom also became an early customer. As of early 1997, Sprint and WorldCom accounted for 97 percent of Ciena's revenue. Ciena began diversifying its clientele and acquiring smaller contracts in 1997.
Ciena went public on NASDAQ in February 1997 in what was then the largest initial public offering by a startup company to date, with a valuation of $3.4 billion. The company's headquarters were relocated to Maryland in March 1997. Ciena earned approximately $370 million in revenue and profits of $110 million for the fiscal year ending in October 1997. Customers at the time included AT&T, Bell Atlantic, and Digital Teleport.
In March 1998, Nettles and Michael Birck of Tellabs began discussing a possible merger. Tellabs announced the purchase of Ciena for $7.1 billion in June. Revenue surpassed $700 million by August 1998, and Ciena had approximately 1,300 employees at the time. The merger was called off in September 1998, with financial performance and shareholder disapproval cited in the media as reasons.
Since 2000s
During the telecoms crash, Ciena's annual sales decreased from $1.6 billion to approximately $300 million. To address the company's challenges this presented, Gary Smith replaced Nettles as the company's CEO in 2001, and Nettles became executive chairman. The company raised $1.52 billion by selling 11 million shares of stock and $600 million in convertible bonds in 2001. Ciena was the second largest fiber optic networking equipment producer in the U.S. at the time.
While many telecommunications companies experienced downturns during the early 2000s, Ciena's cash influx provided flexibility and allowed the company to expand its product portfolio to include a broader range of advanced networking solutions and other technologies. Ciena also completed a series of strategic acquisitions, buying 11 companies between 1997 and early 2004, spending more than $2 billion to purchase five networking technology companies during 2001 to 2004.
AT&T, which previously tested select Ciena equipment, signed a supply agreement in 2001. In 2002, Ciena reported $361.1 million in sales and a loss of $1.59 billion, and had approximately 3,500 employees. The company was the fourth largest producer of fiber optic equipment in the U.S. by 2003.
In 2003, a federal court jury determined that Corvis Corporation, another fiber optic telecommunications equipment provider established by Huber in 1997, infringed a patent owned by Ciena.
In 2008, Ciena earned $902 million and reported a profit of $39 million. The company earned $653 million and reported a loss of $580 million in 2009; Ciena was generating approximately two-thirds of its revenue in the U.S. at the time. Ciena had net losses until 2015, when the company earned $2.4 billion in sales and posted a $12 million profit. Ciena's global workforce increased from 4,300 in 2011 to 5,345 by October 2015. The company's research and development budget for its Ottawa facilities was approximately $180 million per year, as of 2015.
Ciena earned $2.8 billion in revenue in 2017, and reported annual sales of approximately $3.09 billion in 2018; it crossed the $4 billion mark by 2024. The company ranked number 770 and number 744 on the Fortune 1000 in 2017 and 2018, respectively, and ranked number 699 in 2024.
Acquisitions
Ciena acquired the telecommunications company AstraCom Inc. in 1997 for $13.1 million. Fourteen of AstraCom's engineers signed four-year contracts with Ciena, and joined the company's new research and development team in Alpharetta, Georgia. In early 1998, the company acquired Norcross, Georgia–based ATI Telecom International Ltd. and its subsidiary Alta Telecom in a transaction worth $52.5 million. Alta's engineering and installation products were used by service providers for switching, transport, and wireless communications; the company continued to operate as a subsidiary of Ciena. Ciena purchased Terabit Technology Inc., a producer of detectors for data transmission based in Santa Barbara, California, for $11.7 million in April 1998. The company acquired Cupertino, California–based Lightera Networks Inc. and Marlborough, Massachusetts–based Omnia Communications Inc. for $980 million in stock in 1999.
The company purchased Cyras Corp. of Fremont, California, during 2000 to 2001 for $2 billion in stock. ONI Systems, a San Jose, California–based producer of phone and computer data equipment, was acquired by Ciena for $900 million in stock in June 2002. The acquisitions of Cyras, which produced optical switch systems, and ONI, which made transport equipment for data transfer, allowed Ciena to focus on networks in metropolitan areas.
Ciena purchased WaveSmith Networks Inc., an optical-networking equipment manufacturer based in Acton, Massachusetts, for $158 million in stock in 2003. Ciena acquired the Ottawa-based data storage networking company Akara Corp. for $45 million in 2003. Akara expanded Ciena's product line and storage networking capabilities, and continued to operate as a subsidiary. Catena Networks and New Jersey–based Internet Photonics were purchased by Ciena in 2004. The stock transactions were valued at $486.7 million and $150 million, respectively. Catena had approximately 220 employees at the time, and the purchase of Internet Photonics marked Ciena's entrance into the cable industry.
In 2008, Ciena acquired World Wide Packets Inc. (WWP), a Spokane Valley, Washington–based producer of switches and software for Ethernet services, for approximately $296 million. WWP offered the LightningEdge operating system and network management tools, and had more than 100 customers in 25 countries at the time. WWP became a wholly owned subsidiary, and Ciena continued to use the company's office and 65 employees in Spokane, Washington until mid-2018.
Ciena acquired Nortel's optical technology and Carrier Ethernet division for approximately $770 million during 2009 to 2010. Nortel's Metro Ethernet Networks business developed next-generation optical-transmission equipment and had more than 1,000 customers in 65 countries at the time. The business had approximately 1,400 employees in Canada, including 1,125 in Ottawa and 250 in Montreal. In 2017, Ciena's 1,600 Ottawa personnel were relocated to a new campus in Kanata, Ontario, along with employees of Catena. These 1,600, many of whom worked for Nortel, comprise less than 30 percent of Ciena's workforce, but represent the company's largest operational hub and complete half of its research and development work.
Ciena acquired Cyan, which offers platforms and software systems for network operators, for approximately $400 million in 2015. The assets of TeraXion Inc., a network management system company based in Quebec City, were purchased for $32 million in 2016. Ciena acquired Packet Design, an Austin-based network performance management software company specializing in network optimization, route analytics, and topology, in 2016. In 2018, Ciena purchased software and services company DonRiver for an undisclosed amount.
Operations in India
Ciena opened a campus in Gurgaon, India, in 2006. The campus focuses on research and development, and was further expanded in 2018 to begin manufacturing products for local markets. There were approximately 1,500 employees on site, representing 20 percent of the company's global workforce, as of May 2018.
Ciena and Sify partnered in mid 2018 to increase the information and communications technology company's network capacity from 100G to 400G. Ciena's converged packet optical products support big data analysis, cloud computing, and the Internet of things across 40 of Sify's data centers in India. In 2019, Bharti Airtel used Ciena equipment to build a 130,000 km photonic control plane network, connecting more than 4,000 locations in India. Ciena provides converged packet optical and Ethernet services to Bharti Airtel, Jio, and Vodafone Idea Limited, and supplies equipment to the Government of India, as of mid 2019.
Rajesh Nambiar served as chairman and president of Ciena India from mid-2019 until October 2020.
Products
Ciena develops and markets equipment, software and services, primarily for the telecommunications industry and large cloud service firms. Their products and services support the transport and management of voice and data traffic on communications networks.
Network infrastructure
Ciena's network equipment includes optical network switches and routing platforms to manage data load on telecommunications networks. The company launched its WaveLogic 5 modem platform in 2019. The platform provides network capacity up to 800G. Ciena also provides technology and equipment for undersea cable networks.
Software and analytics
The company's Blue Planet software platform is used by telecoms companies for programming communications networks, including for network automation. It includes a service that uses machine learning algorithms that analyze anomalies in a network to predict issues, and identify actions for the network operators to take in order to prevent network outages and further disruptions.
See also
Ciena Optical Multiservice Edge 6500
References
Companies listed on the New York Stock Exchange
Companies formerly listed on the Nasdaq
Companies based in Anne Arundel County, Maryland
Networking companies of the United States
Networking hardware companies
Telecommunications equipment vendors
American companies established in 1992
Telecommunications companies established in 1992
1992 establishments in Maryland
1997 initial public offerings
Computer companies of the United States
Computer hardware companies
Software companies of the United States
Companies in the S&P 400 | Ciena | Technology | 2,470 |
65,001,360 | https://en.wikipedia.org/wiki/Ofqual%20exam%20results%20algorithm | In 2020, Ofqual, the regulator of qualifications, exams and tests in England, produced a grades standardisation algorithm to combat grade inflation and moderate the teacher-predicted grades for A level and GCSE qualifications in that year, after examinations were cancelled as part of the response to the COVID-19 pandemic.
History
In late March 2020, Gavin Williamson, the secretary of state for education in Boris Johnson's Conservative government, instructed the head of Ofqual, Sally Collier, to "ensure, as far as is possible, that qualification standards are maintained and the distribution of grades follows a similar profile to that in previous years". On 31 March, he issued a ministerial direction under the Apprenticeships, Skills, Children and Learning Act 2009.
Then, in August, 82% of 'A level' grades were computed using an algorithm devised by Ofqual. More than 4.6 million GCSEs in England – about 97% of the total – were assigned solely by the algorithm. Teacher rankings were taken into consideration, but not the teacher-assessed grades submitted by schools and colleges.
On 25 August, Collier, who had overseen the development of the grading algorithm commissioned by Williamson, resigned from the post of chief regulator of Ofqual following mounting pressure.
Vocational qualifications
The algorithm was not applied to vocational and technical qualifications (VTQs), such as BTECs, which are assessed on coursework or as short modules are completed, and in some cases adapted assessments were held. Nevertheless, because of the high level of grade inflation resulting from Ofqual's decision not to apply the algorithm to A levels and GCSEs, Pearson Edexcel, the BTEC examiner, decided to cancel the release of BTEC results on 19 August, the day before they were due to be released, to allow them to be re-moderated in line with Ofqual's grade inflation.
The algorithm
Ofqual's Direct Centre Performance model is based on the record of each centre (school or college) in the subject being assessed. Details of the algorithm were not released until after the results of its first use in August 2020, and then only in part.
Synopsis
The examination centre provided a list of teacher-predicted grades, called 'centre assessed grades' (CAGs).
The students were listed in rank order with no ties.
For large cohorts (over 15)
For exams with a large cohort, the previous results of the centre are consulted. For each of the three previous years, the number of students getting each grade (A* to U) is noted, and a percentage average is taken.
This distribution is then applied to the current year's students, irrespective of their individual CAGs.
A further standardisation adjustment could be made on the basis of previous personal historic data: at A level this could be a GCSE result, at GCSE this could be a Key Stage 2 SAT.
For small cohorts and minority-interest exams (under 15)
The individual CAG is used unchanged
The formulas
P_kj = (1 - r_j) C_kj + r_j (C_kj + q_kj - p_kj)   for large schools, with n over 15
P_kj = CAG_kj (the centre assessed grades used unchanged)   for small schools, with n under 15
(A worked sketch of this calculation follows the variable definitions below.)
The variables
n is the number of pupils in the subject being assessed
k is a specific grade
j indicates the school
C_kj is the historical grade distribution of grade k at the school (centre) j over the last three years, 2017–19.
q_kj is the predicted grade distribution based on the class's prior attainment at GCSEs. A class with mostly 9s (the top grade) at GCSE will get a lot of predicted A*s; a class with mostly 1s at GCSEs will get a lot of predicted Us.
p_kj is the predicted grade distribution of the previous years, based on their GCSEs. You need to know that because, if previous years were predicted to do poorly and did well, then this year might do the same.
r_j is the fraction of pupils in the class where historical data is available. If you can perfectly track down every GCSE result, then it is 1; if you cannot track down any, it is 0.
CAG is the centre assessed grade.
P_kj is the result, which is the grade distribution for each grade k at each school j.
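A minimal Python sketch of the standardisation step described above, applied to a single large-cohort subject at one centre. The grade ladder, the toy distributions and the rounding-by-rank step are simplified assumptions for illustration; the real model handled ties, small-cohort blending and prior-attainment matching in more detail.

# Toy version of the Direct Centre Performance standardisation for one centre.
GRADES = ["A*", "A", "B", "C", "D", "E", "U"]

def standardise(C_kj, q_kj, p_kj, r_j):
    """P_kj = (1 - r_j)*C_kj + r_j*(C_kj + q_kj - p_kj), per grade k."""
    return {k: (1 - r_j) * C_kj[k] + r_j * (C_kj[k] + q_kj[k] - p_kj[k])
            for k in GRADES}

def assign_by_rank(ranked_pupils, P_kj):
    """Hand out grades down the teacher rank order to match the target distribution."""
    n = len(ranked_pupils)
    results, cursor = {}, 0
    for grade in GRADES:
        quota = round(P_kj[grade] * n)
        for pupil in ranked_pupils[cursor:cursor + quota]:
            results[pupil] = grade
        cursor += quota
    for pupil in ranked_pupils[cursor:]:   # anyone left over after rounding
        results[pupil] = GRADES[-1]
    return results

# Invented example numbers: 20 pupils, centre history slightly weaker than this cohort.
C = {"A*": 0.05, "A": 0.15, "B": 0.30, "C": 0.30, "D": 0.15, "E": 0.05, "U": 0.00}
q = {"A*": 0.10, "A": 0.20, "B": 0.30, "C": 0.25, "D": 0.10, "E": 0.05, "U": 0.00}
p = {"A*": 0.05, "A": 0.15, "B": 0.30, "C": 0.30, "D": 0.15, "E": 0.05, "U": 0.00}
pupils = [f"pupil_{i:02d}" for i in range(1, 21)]      # already in teacher rank order

P = standardise(C, q, p, r_j=0.8)
print(assign_by_rank(pupils, P))

The sketch makes visible why individual CAGs played no role for large cohorts: a pupil's final grade depends only on the centre-level distributions and the pupil's place in the rank order.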
Schools were not only asked to make a fair and objective judgement of the grade they believed a student would have achieved, but also to rank the students within each grade. This was because the statistical standardisation process required more granular information than the grade alone. Some examining boards issued guidance on the process of forming the judgement to be used within centres, where several teachers taught a subject. This was to be submitted 29 May 2020.
For A-level students, their school had already included a predicted grade as part of the UCAS university application reference. This was submitted by 15 January (15 October 2019 for Oxbridge and medicine) and had been shared with the students. This UCAS predicted grade is not the same as the Ofqual predicted grade.
The normal way to test a predictive algorithm is to run it against the previous year's data: this was not possible as the teacher rank order was not collected in previous years. Instead, tests used the rank order that had emerged from the 2019 final results.
Effects of the algorithm
The A-level grades were announced in England, Wales and Northern Ireland on 13 August 2020. Nearly 36% were lower than teachers' assessments (the CAG) and 3% were down two grades.
Side-effects of the algorithm
Students at small schools or taking minority subjects, such as are offered at small private schools (which are also more likely to have fewer students even in popular subjects), could see their grades being higher than their teacher predictions, especially when falling into the small class/minority interest bracket. Such students traditionally have a narrower range of marks, the weaker students having been invited to leave. Students at large state schools, sixth-form colleges and FE colleges who have open access policies and historically have educated BAME students or vulnerable students saw their results plummet, in order to fit the historic distribution curve.
Students found the system unfair, and pressure was applied on Williamson to explain the results and to reverse his decision to use the algorithm that he had commissioned and Ofqual had implemented. On 12 August Williamson announced 'a triple lock' that let students appeal the result using an undefined valid mock result. But on 15 August, the advice was published with eight conditions set which differed from the minister's statement. Hours after the announcement, Ofqual suspended the system. On 17 August, Ofqual accepted that students should be awarded the CAG grade, instead of the grade predicted by the algorithm.
UCAS said on 19 August that 15,000 pupils were rejected by their first-choice university on the algorithm-generated grades. After the Ofqual decision to use unmoderated teacher predictions, many affected students had grades to meet their offer, and reapplied. 90% of them said they aimed to study at top-tier universities. The effect was that top-tier universities appeared to have a capacity problem.
The Royal Statistical Society said they had offered to help with the construction of the algorithm, but withdrew that offer when they saw the nature of the non-disclosure agreement they would have been required to sign. Ofqual was not prepared to discuss it and delayed replying by 55 days.
Legal opinion
Lord Falconer, a former attorney general, opined that three laws had been broken, and gave an example of where Ofqual had ignored a direct instruction of the Secretary of State for Education.
Falconer said the formula for standardising grades was in breach of the overarching objectives under which Ofqual was established by the Apprenticeships, Skills, Children and Learning Act 2009. The objectives require that the grading system gives a reliable indication of the knowledge, skills and understanding of the student, and that it allows for reliable comparisons to be made with students taking exams graded by other boards and to be made with students who took comparable exams in previous years.
The Labour Party suggested that the process was unlawful in that the students were given no appeal mechanism, stating: "There will be a mass of discriminatory impacts by operating the process on the basis of reflecting the previous years' results from their institutions", and "It is bound to disadvantage a whole range of groups with protected characteristics, in breach of a range of anti-discrimination legislation."
See also
2020 United Kingdom school exam grading controversy
References
External links
Requirements for the calculation of results in summer 2020 – Ofqual, 7 July 2020, updated 20 August
Student guide to post-16 qualification results: summer 2020 – Ofqual, 27 July 2020, updated 19 August
Taking exams during the coronavirus (COVID-19) outbreak – guidance from the Department for Education, published 20 March 2020, updated 27 August
Higher Education Policy Institute algorithm discussion, May 2020
Education Committee Oral evidence: The Impact of Covid-19 on education and children’s services, HC 254 Wednesday 2 September 2020
2020 in England
School examinations
Government by algorithm | Ofqual exam results algorithm | Engineering | 1,823 |
5,926,021 | https://en.wikipedia.org/wiki/14th%20FAI%20World%20Rally%20Flying%20Championship | The 14th FAI World Rally Flying Championship took place between July 14 and July 20, 2004 in Herning, Denmark, together with the 16th FAI World Precision Flying Championship (July 19–24).
There were 50 crews from Czech Republic, Poland, France, South Africa, Denmark, Russia, Germany, United Kingdom, Austria, Spain, Chile, Slovakia, Italy, Lithuania and Cyprus.
The most numerous airplane was the Cessna 172 (28), followed by the Cessna 152 (10) and the Cessna 150 (6). The others (PZL Wilga 2000, 3Xtrim 3X55 Trener, HB-23, Glastar and Piper PA-28) were single examples.
Contest
On July 14, 2004 there was an opening ceremony, and on the next day an opening briefing and official practice.
On July 16 there was the first navigation competition, on July 17 the second competition, and on July 18 the third competition, an observation test. On July 19 there was an awards ceremony and the closing ceremony (as well as the opening ceremony of the 16th FAI World Precision Flying Championship, in which many of the competitors participated).
Results
Individual: (pilot / navigator)
1. Jiří Filip / Michal Filip (Czech Republic) - Cessna 152 (OK-IKF) (120 penalty points)
2. František Cihlář / Milos Fiala (Czech Republic) - Cessna 152 (OK-IKC) (226 pts)
3. Krzysztof Wieczorek / Krzysztof Skrętowicz (Poland) - 3Xtrim 3X55 Trener (SP-YEX) (298 pts)
4. Nigel Hopkins / Dale de Klerk (South Africa) - Cessna 172 (OY-BIK) (318 pts)
5. Petr Opat / Tomas Rajdl (Czech Republic) - Cessna 152 (OK-NAV) (330 pts)
6. Janusz Darocha / Zbigniew Chrząszcz (Poland) - Cessna 152 (SP-FZY) (342 pts)
7. Philippe Odeon / Philippe Muller (France) - Cessna 152 (F-GBQD) (434 pts)
8. Joël Tremblet / Jose Bertanier (France) - Cessna 152 (F-GBFB) (474 pts)
9. Michel Frere / Frédérick Saquet (France) - Cessna 152 (F-GBQD) (510 pts)
10. Claes Johanssen / Nathalie Strube (FAI) - Cessna 172 (SE-CXD) (522 pts)
11. Wacław Wieczorek / Michał Wieczorek (Poland) - PZL Wilga 2000 (SP-AHV) (568 pts)
Team (penalty points):
Czech Republic - 346
Poland - 640
France - 908
South Africa - 1504
Austria - 2051
Spain - 2320
Denmark - 2973
United Kingdom - 3073
Germany - 3493
Chile - 4210
Italy - 4568
Russia - 6159
Cyprus - 9648
External links
14th FAI World Rally Flying Championship
Rally Flying 14
Fédération Aéronautique Internationale
July 2004 events in Europe
2004 in Denmark
Aviation history of Denmark | 14th FAI World Rally Flying Championship | Engineering | 655 |
18,055,330 | https://en.wikipedia.org/wiki/Victimisation | Victimisation (or victimization) is the state or process of being victimised or becoming a victim. The field that studies the process, rates, incidence, effects, and prevalence of victimisation is called victimology.
Peer victimisation
Peer victimisation is the experience among children of being a target of the aggressive behaviour of other children, who are not siblings and not necessarily age-mates. Peer victimisation is correlated with an increased risk of depression and decreased well-being in adulthood.
Secondary victimisation
Secondary victimization (also known as post crime victimization or double victimization) refers to further victim-blaming from criminal justice authorities following a report of an original victimization.
Revictimisation
The term revictimisation refers to a pattern wherein the victim of abuse and/or crime has a statistically higher tendency to be victimised again, either shortly thereafter or much later in adulthood in the case of abuse as a child. This latter pattern is particularly notable in cases of sexual abuse. While an exact percentage is almost impossible to obtain, samples from many studies suggest the rate of revictimisation for people with histories of sexual abuse is very high. The vulnerability to victimisation experienced as an adult is also not limited to sexual assault, and may include physical abuse as well.
Reasons as to why revictimisation occurs vary by event type, and some mechanisms are unknown. Revictimisation in the short term is often the result of risk factors that were already present, which were not changed or mitigated after the first victimisation; sometimes the victim cannot control these factors. Examples of these risk factors include living or working in dangerous areas, chaotic familial relations, having an aggressive temperament, drug or alcohol usage and unemployment. Revictimisation may be "facilitated, tolerated, and even produced by particular institutional contexts, illustrating how the risk of revictimization is not a characteristic of the individual, nor is it destiny."
Revictimisation of adults who were previously sexually abused as children is more complex. Multiple theories exist as to how this functions. Some scientists propose a maladaptive form of learning; the initial abuse teaches inappropriate beliefs and behaviours that persist into adulthood. The victim believes that abusive behaviour is "normal" and comes to expect, or feel they deserve it from others in the context of relationships, and thus may unconsciously seek out abusive partners or cling to abusive relationships. Another theory draws on the principle of learned helplessness. As children, they are put in situations that they have little to no hope of escaping, especially when the abuse comes from a caregiver. One theory goes that this state of being unable to fight back or flee the danger leaves the last primitive option: freeze, an offshoot of death-feigning.
Revictimization has also been characterized as a phenomenon whereby the children depicted in child pornography have a feeling of the depicted event reoccurring every single time the image is viewed. Each time the image is viewed, the children relive the experience as if it were happening all over again. As the images are viewed over and over again, the children are left feeling as if they were being raped all over again.
Offenders choosing pre-traumatized victims
In adulthood, the freeze response can remain, and some professionals have noted that victimisers sometimes seem to pick up subtle clues of this when choosing a victim. This behaviour can make the victim an easier target, as they sometimes make less effort to fight back or vocalise. Afterwards, they often make excuses and minimise what happened to them, sometimes never reporting the assault to the authorities.
Self-victimisation
Self-victimisation (or victim playing) is the fabrication of victimhood for a variety of reasons, such as to justify real or perceived abuse of others, to manipulate others, as a coping strategy, or for attention seeking. In a political context, self-victimisation can also be seen as an important political tool within post-conflict, nation-building societies. While it fails to produce any affirmative values, the fetishistic lack of a future is masked by an excess of confirmation of one's own status of victimhood, as noted by the Bosnian political theoretician Jasmin Hasanović, who sees it in the post-Yugoslav context as a form of auto-colonialism: reproducing the narrative of victimhood corresponds with balkanization stereotypes, the very narrative of the colonizer in which the permanence of war is the contemporaneity of fear, affirming the theses of eternal hatred and thus strengthening ethnonationalism even more.
Self-image of victimisation (victim mentality)
Victims of abuse and manipulation sometimes get trapped into a self-image of victimisation. The psychological profile of victimisation includes a pervasive sense of helplessness, passivity, loss of control, pessimism, negative thinking, strong feelings of guilt, shame, self-blame and depression. This way of thinking can lead to hopelessness and despair.
Victimisation in Kazakhstan
At the end of 2012, a first-ever victimisation survey of 219,500 households (356,000 respondents) was conducted by the State Statistics Agency at the request of Marat Tazhin, the head of the Security Council and a sociologist by training. According to the survey, 3.5% of respondents reported being a victim of crime in the previous 12 months, and only half of those said that they had reported the crime to the police. The presidential administration chose not to release any further details from this survey to the public.
In May–June 2018, the first International Crime Victims Survey (ICVS) of nationally representative sample of 4,000 persons was conducted in Kazakhstan. It showed low levels of victimisation. The overall violent crime victimization rate among the population in a one-year period was 3.7%. Rates of violent victimization by strangers were somewhat higher among females (2.1%) than among males (1.8%). The rates of violence by persons known to them were as much as three times higher for women than for men (2.8% for females and 0.8% for males). In a one-year period, the highest rates of victimisation were consumer fraud (13.5% of respondents), theft from the car and personal theft (6.3% of respondents), and official bribe-seeking (5.2% of respondents). In almost half of bribe-seeking cases the bribe-seeker was a police officer. Taking only the adult population of Kazakhstan into account, the ICVS police bribery figures suggest around 400,000 incidents of police bribery every year in Kazakhstan. These calculations are most likely very conservative in that they only capture when a bribe has been solicited and exclude instances of citizen-initiated bribery. The ICVS revealed extremely low levels of reporting crime to the police. Only one in five crimes were reported to the police in Kazakhstan, down from the 46% reporting rate recorded in the government-conducted 2012 survey.
Rates of victimisation in United States
Levels of criminal activity are measured through three major data sources: the Uniform Crime Reports (UCR), self-report surveys of criminal offenders, and the National Crime Victimization Survey (NCVS). However, the UCR and self-report surveys generally report details regarding the offender and the criminal offense; information on the victim is only included so far as his/her relationship to the offender, and perhaps a superficial overview of his/her injuries. The NCVS is a tool used to measure the existence of actual, rather than only those reported, crimes—the victimisation rate—by asking individuals about incidents in which they may have been victimised. The National Crime Victimization Survey is the United States' primary source of information on crime victimisation.
Each year, data is obtained from a nationally representative sample of 77,200 households comprising nearly 134,000 persons on the frequency, characteristics and consequences of criminal victimisation in the United States. This survey enables the government to estimate the likelihood of victimisation by rape (more valid estimates were calculated after the survey's redesign in 1992, which better captured instances of sexual assault, particularly date rape), robbery, assault, theft, household burglary, and motor vehicle theft for the population as a whole, as well as for segments of the population such as women, the elderly, members of various racial groups, city dwellers, or other groups. According to the Bureau of Justice Statistics (BJS), the NCVS reveals that violent crime rates declined from 1994 to 2005, reaching the lowest levels ever recorded. Property crimes continue to decline.
In 2010, the National Institute of Justice reported that American adolescents were the age group most likely to be victims of violent crime, that American men were more likely than American women to be victims of violent crime, and that Black Americans were more likely than Americans of other races to be victims of violent crime, with Black men victimized most often.
See also
References
Further reading
General
Catalano, Shannan, Intimate Partner Violence: Attributes of Victimization, 1993–2011 (2013)
Elias, Robert, The Politics of Victimization: Victims, Victimology, and Human Rights (1986)
Finkelhor, David Childhood Victimization: Violence, Crime, and Abuse in the Lives of Young People (Interpersonal Violence) (2008)
Harris, Monica J. Bullying, Rejection, & Peer Victimization: A Social Cognitive Neuroscience Perspective (2009)
Hazler, Richard J. Breaking The Cycle of Violence: Interventions For Bullying And Victimization (1996)
Maher, Charles A & Zins, Joseph & Elias, Maurice Bullying, Victimization, And Peer Harassment: A Handbook of Prevention And Intervention (2006)
Meadows, Robert J. Understanding Violence and Victimization (5th Edition) (2009)
Mullings, Janet & Marquart, James & Hartley, Deborah The Victimization of Children: Emerging Issues (2004)
Westervelt, Saundra Davis Shifting The Blame: How Victimization Became a Criminal Defense (1998)
Revictimisation
Carlton, Jean Victim No More: Your Guide to Overcome Revictimization (1995)
Schiller, Ulene Addressing re-victimization of the sexually abused child: Training programme for state prosecutors working with sexually abused children during forensic procedures (2009)
External links
"Fear of Crime and Perceived Risk." Oxford Bibliographies Online: Criminology.
NCVS Victimization Analysis Tool (NVAT) Bureau of Justice Statistics
Abuse
Harassment and bullying
Victimology | Victimisation | Biology | 2,174 |
67,841,351 | https://en.wikipedia.org/wiki/Dibutylmagnesium | Dibutylmagnesium is an organomagnesium (organometallic) compound with the chemical formula Mg(C4H9)2. The pure substance is a waxy solid. Commercially, it is marketed as a solution in heptane.
Synthesis
Dibutylmagnesium can be obtained by reaction of butyllithium with magnesium butylchloride and subsequent addition of magnesium 2-ethylhexanoate. The compound can also be prepared by hydrogenation of magnesium, followed by reaction with 1-butene. It is also possible to prepare dibutylmagnesium using 2-chlorobutane, magnesium powder, and n-butyllithium.
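Read as a simple salt metathesis, the butyllithium route plausibly corresponds to the overall equation below; this is an illustrative reading, and the role of the magnesium 2-ethylhexanoate additive is not specified in the text:

C4H9Li + C4H9MgCl → (C4H9)2Mg + LiCl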
Use
Dibutylmagnesium is used as a convenient reagent for the preparation of organomagnesium compounds.
References
Magnesium compounds
Organomagnesium compounds
Butyl compounds
Pyrophoric materials | Dibutylmagnesium | Chemistry,Technology | 208 |
2,250,414 | https://en.wikipedia.org/wiki/Railroad%20Commission%20of%20Texas | The Railroad Commission of Texas (RRC; also sometimes called the Texas Railroad Commission, TRC) is the state agency that regulates the oil and gas industry, gas utilities, pipeline safety, safety in the liquefied petroleum gas industry, and surface coal and uranium mining. Despite its name, it ceased regulating railroads in 2005, when the last of the rail functions were transferred to the Texas Department of Transportation.
Established by the Texas Legislature in 1891, it is the state's oldest regulatory agency, and began as part of the Efficiency Movement of the Progressive Era. From the 1930s to the 1960s, it largely set world oil prices, but was displaced by OPEC (Organization of Petroleum Exporting Countries) after 1973. In 1984, the federal government took over transportation regulation for railroads, trucking, and buses, but the Railroad Commission kept its name. With an annual budget of $79 million, it now focuses entirely on oil, gas, mining, propane, and pipelines, setting allocations for production each month.
The three-member commission was initially appointed by the governor, but an amendment to the state's constitution in 1894 established the commissioners as officials elected statewide to overlapping six-year terms, staggered like those of the U.S. Senate. No specific seat is designated as chairman; the commissioners choose the chairman from among themselves. Normally, the commissioner who faces reelection serves as chairman for the preceding two years. The current commissioners are: Jim Wright since January 4, 2021; Wayne Christian since January 9, 2017; and Christi Craddick since December 17, 2012.
Origins
Attempts to establish a railroad commission in Texas began in 1876. After five legislative failures, an amendment to the state constitution that provided for a railroad commission was submitted to voters in 1890. The amendment's ratification and the 1890 election of Governor James S. Hogg, a Democrat, permitted the legislature in 1891 to pass legislation that constitutionally created the Railroad Commission of Texas, and gave it jurisdiction over the operations of railroads, terminals, wharves, and express companies. It could set rates, issue rules on how to classify freight, require adequate railroad reports, and prohibit and punish discrimination and extortion by corporations. George Clark, running as an independent “Jeffersonian Democratic” candidate for governor in 1892, denounced the TRC as being “Wrong in principle, undemocratic, and unrepublican.” Clark opined that the TRC and similar “Commissions do no good. They do harm. Their only function is to harass. I regard it as essentially foolish and essentially vicious.” Clark lost the 1892 election to Hogg, but federal judge Andrew Phelps McCormick granted an injunction preventing the TRC from enforcing compliance and seeking to prosecute or recover penalties from railroad companies the same year; the decision was overruled by the United States Supreme Court in 1894. The governor appointed the first members; the first elections to the commission were held in 1893, with three commissioners serving six-year, overlapping terms. The TRC did not have jurisdiction over interstate rates, but Texas was so large that the in-state traffic it regulated was of dominant importance.
The agency did not have the legal authority to set rates, nor did it have the resources to spend much of its time in court battles. The carrot was far more important than the stick. Freight rates continued to decline dramatically. In 1891, a typical rate was 1.403 cents per ton mile. By 1907, the rate was 1.039 cents—a decline of 25%. However, the railroads did not have rates high enough for them to upgrade their equipment and lower costs in the face of competition from pipelines, cars, and trucks, and the Texas railway system began a slow decline.
Members of the First Railroad Commission of Texas
John H. Reagan (1818–1903), the first chairman of the TRC (1891–1903), had been the most outspoken advocate in Congress of bills to regulate railroads in the 1880s. He feared the corruption caused by railroad monopolies, and considered their control a moral challenge. As chairman of the TRC, Reagan changed his views when he became acquainted with the realities of the complex forces affecting railroad management. Reagan turned to the Efficiency Movement for ideas, and established a pattern of regulatory practice that the TRC used for decades. He believed that the agency should pursue two main goals: to protect consumers from unfair railway practices and excessive rates, and to support the state's overall economic growth. To find the optimal rates that met these goals, he focused the TRC on the collection of data, direct negotiation with railway executives, and compromises with the parties involved.
Lafayette L. Foster (1851–1901) was a commissioner of the first TRC (1891–1895) appointed by Governor Hogg. He resigned in 1895, and became the vice president and general manager of the Velasco Terminal Railway. He was succeeded as commissioner by Nathan Alexander Stedman.
William P. McLean (1836–1925) was a commissioner of the first TRC (1891–1894) appointed by Governor Hogg. He was a judge before his appointment to the commission. He was re-elected in 1893, but resigned his position in 1894 to practice law in Fort Worth. He was succeeded as commissioner by Leonidas Jefferson Storey, who later became chairman of the TRC in 1903, following Reagan's death.
Segregation
From the 1890s through the 1960s, the Texas Railroad Commission found it difficult to fully enforce Jim Crow segregation legislation. Because of the expense involved, Texas railroads often allowed wealthier blacks to mix with whites, rather than provide separate cars, dining facilities, and even depots. In addition, West Texas authorities often refused to enforce Jim Crow laws because few African Americans resided there. In the 1940s, the railroad commission's enforcement of segregation laws began collapsing further, in part because of the great number of African American soldiers that were transported during World War II. The trains were integrated in the early 1960s.
Expansion to oil
The agency's reach expanded as it took over responsibility for regulating oil pipelines (in 1917), oil and gas production (1919), natural gas delivery systems (1920), bus lines (1927), and trucking (1929). It grew from 12 employees in 1916 to 69 in 1930 and 566 in 1939. It does not have jurisdiction over investor-owned electric utility companies; that falls under the jurisdiction of the Public Utility Commission of Texas.
A crisis for the petroleum industry was created by the East Texas oil boom of the 1930s, as prices plunged to 25¢ a barrel. The traditional TRC policy of negotiating compromises failed; the governor was forced to call in the state militia to enforce order. Texas oilmen decided they preferred state to federal regulation, and wanted the TRC to give out quotas so that every producer would get higher prices and profits. Pure Oil Company opposed the first statewide oil prorationing order, which was issued by the TRC in August 1930. The order, which was intended to conserve oil resources by limiting the number of barrels drilled per day, was seen by small producers, like Pure Oil, as a conspiracy between government and major companies to drive them out of business, and ultimately foster monopoly in the oil industry.
Ernest O. Thompson (1892–1966), head of the TRC from 1932 to 1965, took charge of the agency, and indeed the oil industry, by appealing to an ideal of Texas's role in the global oil order—the civil religion of Texas oil. He cajoled, harangued, and browbeat recalcitrant producers into compliance with the TRC's prorationing orders. The New Deal allowed the TRC to set national oil policy. As late as the 1950s, the TRC controlled over 40% of United States’ crude production, and approximately half of estimated national proved reserves. It served as a model in the creation of OPEC. Gordon M. Griffin, chief engineer of the TRC during World War II, developed the formula for prorationing to keep production flowing for the military.
Because the TRC needed access to the Texas headquarters of the various oil companies, it became a long term tenant at the Milam Building.
Operations
Regulation was a practical rather than ideological affair. The TRC typically worked with the regulated industries to improve operations, share best practices, and address consumer complaints. Radical activities—like heated court battles or rate-setting to favor shippers, producers, or consumers—were the exception rather than the rule.
Within the oil and gas industry, it took into account production in other states, in effect bringing total available supply (including imports, which were small) within the principle of prorationing to market demand. Allowable oilfield production was calculated as follows: estimated market demand, minus uncontrolled additions to supply, gave the Texas total; this was then prorated among fields and wells in a manner calculated to preserve equity among producers, and to prevent any well from producing beyond its maximum efficient rate (MER). Scheduled allowables are expressed in numbers of calendar days of permitted production per month at MER. In the spring of 2013, new hydraulic fracturing water recycling rules were adopted in the state of Texas by the Railroad Commission of Texas. The Water Recycling Rules are intended to encourage Texas hydraulic fracturing operators to conserve water used in the hydraulic fracturing process for oil and gas wells.
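The allocation arithmetic described above can be sketched as follows. This is a simplified illustration in Python with hypothetical well names and invented numbers; the proportional split by MER is one possible equitable rule assumed here for illustration, not the text of any actual TRC order:

def prorate_allowables(market_demand, uncontrolled_supply, max_efficient_rates):
    """Hypothetical sketch of market-demand prorationing: split the statewide
    allowable among wells in proportion to each well's maximum efficient rate
    (MER), without letting any well exceed its MER."""
    texas_total = max(market_demand - uncontrolled_supply, 0.0)
    total_mer = sum(max_efficient_rates.values())
    if total_mer == 0:
        return {well: 0.0 for well in max_efficient_rates}
    return {well: min(texas_total * mer / total_mer, mer)
            for well, mer in max_efficient_rates.items()}

# Illustrative numbers only (barrels per day).
wells = {"Field A No. 1": 500.0, "Field A No. 2": 300.0, "Field B No. 1": 200.0}
print(prorate_allowables(market_demand=1200.0, uncontrolled_supply=400.0,
                         max_efficient_rates=wells))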
Recent history
As of March 2022, the commission members are Wayne Christian (chairman), Christi Craddick, and Jim Wright. All three members are Republicans. Christian was elected in 2016 as a commissioner, and was selected as chairman in 2019. Craddick was elected in 2012, and reelected in 2018. Wright was elected in 2020.
Effective October 1, 2005, as a result of House Bill 2702, the rail oversight functions of the Railroad Commission were transferred to the Texas Department of Transportation. The traditional name of the commission was not changed despite the loss of its titular regulatory duties.
Court cases involving the commission
The Shreveport Rate Case, also known as Houston E. & W. Ry. Co. v. United States, 234 U.S. 342 (1914) arose from the Railroad Commission's setting railroad freight rates unequally. Because of the low intrastate rates, shippers in eastern Texas tended to ship their wares to Dallas (in Texas), rather than to Shreveport, Louisiana, although Shreveport was considerably closer to much of eastern Texas. The Railroad Commission's (and the railroad's) position was that only the state could regulate commerce within a state, and that the federal government had no power so to do. The Supreme Court ruled that the federal government's ability to regulate interstate commerce necessarily included the ability to regulate intrastate “operations in all matters having a close and substantial relation to interstate traffic,” and to ensure that “interstate commerce may be conducted upon fair terms.”
The Railroad Commission has also figured prominently in two major U.S. Supreme Court cases on the doctrine of abstention:
Railroad Commission v. Pullman Co., a 1941 case in which the U.S. Supreme Court ruled that it was appropriate for federal courts to abstain from hearing a case to allow state courts to decide substantial constitutional issues that touch upon sensitive areas of state social policy, specifically the race of railroad employees.
Burford v. Sun Oil Co., a 1943 case in which the U.S. Supreme Court ruled that a federal court sitting in diversity jurisdiction may abstain from hearing the case where the state courts likely have greater expertise in a particularly complex and unclear area of state law which is of special significance to the state, where there is comprehensive state administrative/regulatory procedure, and where the federal issues cannot be decided without delving into state law.
Commissioners
The commissioners are elected in statewide partisan elections for six-year terms, with one commission seat up for election every two years. The commission selects a chairperson from among their members every year.
Offices and districts
The agency is headquartered in the William B. Travis State Office Building at 1701 North Congress Avenue in Austin. In addition, the Texas Railroad Commission has twelve oil and gas district offices located throughout the state. The district offices facilitate communication between industry representatives and the Commission.
See also
Oil and gas law in the United States
History of Texas
Bibliography
Childs, William R. The Texas Railroad Commission: Understanding Regulation in America to the Mid-Twentieth Century. (2005). 323 pp. the standard history; online review
Childs, William R. "Origins of the Texas Railroad Commission's Power to Control Production of Petroleum: Regulatory Strategies in the 1920s." Journal of Policy History 1990 2(4): 353–387.
De Chazeau, Melvin G., and Alfred E. Kahn. Integration and Competition in the Petroleum Industry (1959) online edition
Green, George N. "Thompson, Ernest Othmer," The Handbook of Texas Online (2008)
Norvell, James R. "The Railroad Commission of Texas: its Origin and History." Southwestern Historical Quarterly 1965 68(4): 465–480. online edition
Prindle, David F. Petroleum Politics and the Texas Railroad Commission. (1981). 230 pp., focuses on relations with independent oilmen
David F. Prindle, "Railroad Commission," Handbook of Texas Online (2008)
Procter, Ben H. Not Without Honor: The Life of John H. Reagan (1962).
Procter, Ben H. Reagan, John Henninger, Handbook of Texas Online (2008)
Splawn, W. M. W. "Valuation and Rate Regulation by the Railroad Commission of Texas," Journal of Political Economy Vol. 31, No. 5 (Oct., 1923), pp. 675–707 in JSTOR
References
External links
"Hazardous Business: Industry, Regulation, and the Texas Railroad Commission" from Texas State Library and Archives Commission
Conversion of EBCDIC files
State agencies of Texas
Petroleum in Texas
History of the petroleum industry in the United States
Oil and gas law
Government agencies established in 1891
1891 establishments in Texas
Petroleum politics
United States railroad regulation
Rail transportation in Texas | Railroad Commission of Texas | Chemistry | 2,880 |
24,891,124 | https://en.wikipedia.org/wiki/Across%20the%20Universe%20%28message%29 | Across the Universe is an interstellar radio message (IRM) consisting of the song "Across the Universe" by the Beatles that was transmitted on 4 February 2008, at 00:00 UTC by NASA in the direction of the star Polaris. This transmission was made using a 70-meter "DSS-63" dish in the NASA Deep Space Network's (DSN) Madrid Deep Space Communication Complex, located in Robledo, near Madrid, Spain. The transmission ran in the 4.2-cm band (around 7.14 GHz, C band) at a power of 18 kilowatts. The format was digital, transmitted at a rate of 128 kbps, lasting 3.6 minutes – the normal speed and data rate for a digital recording on Earth.
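From the stated figures, a simple back-of-the-envelope check of the amount of data transmitted (an illustrative calculation, not an official figure) gives

$128\ \text{kbit/s} \times 3.6\ \text{min} \approx 128{,}000\ \text{bit/s} \times 216\ \text{s} \approx 2.8\times 10^{7}\ \text{bits} \approx 3.5\ \text{megabytes}.$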
This action was done in order to celebrate the 40th anniversary of the song's recording, the 45th anniversary of the DSN, and the 50th anniversary of NASA. The idea was hatched by Beatles historian Martin Lewis, who encouraged all Beatles fans to play the track as it was beamed towards the distant star. The event marked the third time a song had ever been intentionally transmitted into deep space (the first being Russia's Teen Age Message in 2001, and the second being the 2003 Cosmic Call 2 message which included "Starman" by David Bowie and music from the Hungarian band KFT), and was approved by Paul McCartney, Yoko Ono, and Apple Records.
A. L. Zaitsev, part of the Teen Age Message project, argues that the NASA project is only a publicity stunt. The compressed digital format makes the data more fragile to errors than TAM's analogue approach, and an alien recipient would have no knowledge of human audio compression algorithms. The transmission's data rate is also too high for a remote radio station to receive faithfully; a data rate 300,000 times lower would be required. Finally, the choice of Polaris also makes the message unlikely to reach any alien lifeform, should one exist.
See also
List of interstellar radio messages
References
Search for extraterrestrial intelligence
Interstellar messages
Time capsules
Technology in society
Musical tributes to the Beatles
2008 in Spain
2008 in science
2008 in music | Across the Universe (message) | Astronomy | 448 |
50,984,744 | https://en.wikipedia.org/wiki/Prior-independent%20mechanism | A Prior-independent mechanism (PIM) is a mechanism in which the designer knows that the agents' valuations are drawn from some probability distribution, but does not know the distribution.
A typical application is a seller who wants to sell some items to potential buyers. The seller wants to price the items in a way that will maximize his profit. The optimal prices depend on the amount that each buyer is willing to pay for each item. The seller does not know these values, but he assumes that the values are random variables with some unknown probability distribution.
A PIM usually involves a random sampling process. The seller samples some valuations from the unknown distribution and, based on the samples, constructs an auction that yields approximately-optimal profits. The major research question in PIM design is: what is the sample complexity of the mechanism? That is, how many agents does it need to sample in order to attain a reasonable approximation of the optimal welfare? A simple illustration of the sampling idea is sketched below.
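The following sketch is a simplified single-item example in Python; the function name, the input format and the tie-breaking are assumptions made for the example, not the mechanism of any particular paper. It uses one randomly sampled bidder's value as the reserve price of a second-price auction among the remaining bidders:

import random

def single_sample_auction(values):
    """Prior-independent single-item auction sketch: one bidder is sampled only to
    set the reserve price; the others compete in a second-price auction with it."""
    sampled = random.choice(list(values))      # this bidder is excluded from the sale
    reserve = values[sampled]
    rest = {b: v for b, v in values.items() if b != sampled}
    eligible = {b: v for b, v in rest.items() if v >= reserve}
    if not eligible:
        return None, 0.0                       # item unsold, zero revenue
    winner = max(eligible, key=eligible.get)
    lower_bids = sorted(rest.values(), reverse=True)[1:]
    price = max(reserve, lower_bids[0]) if lower_bids else reserve
    return winner, price

random.seed(0)
print(single_sample_auction({"alice": 7.0, "bob": 4.0, "carol": 9.0}))

Because the sampled bidder does not compete for the item, the reserve price is statistically independent of the competing bids, which is the feature that makes this style of mechanism amenable to analysis.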
Single-item auctions
The results in imply several bounds on the sample-complexity of revenue-maximization of single-item auctions:
For a -approximation of the optimal expected revenue, the sample-complexity is 1; a single sample suffices. This is true even when the bidders are not i.i.d.
For a -approximation of the optimal expected revenue, when the bidders are i.i.d OR when there is an unlimited supply of items (digital goods), the sample-complexity is when the agents' distributions have monotone hazard rate, and when the agents' distributions are regular but do not have monotone-hazard-rate.
The situation becomes more complicated when the agents are not i.i.d (each agent's value is drawn from a different regular distribution) and the goods have limited supply. When the agents come from different distributions, the sample complexity of -approximation of the optimal expected revenue in single-item auctions is:
at most - using a variant of the empirical Myerson auction.
at least (for monotone-hazard-rate regular valuations) and at least (for arbitrary regular valuations).
Single-parametric agents
discuss arbitrary auctions with single-parameter utility agents (not only single-item auctions), and arbitrary auction-mechanisms (not only specific auctions). Based on known results about sample complexity, they show that the number of samples required to approximate the maximum-revenue auction from a given class of auctions is:
where:
the agents' valuations are bounded in ,
the pseudo-VC dimension of the class of auctions is at most ,
the required approximation factor is ,
the required success probability is .
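The displayed bound itself is missing from this copy of the text. Purely as a hedged point of reference, standard uniform-convergence arguments for a class of pseudo-VC dimension at most d, with payoffs bounded by H, yield sample-complexity bounds of the general shape

$m = O\!\left(\left(\tfrac{H}{\varepsilon}\right)^{2}\left(d\,\ln\tfrac{H}{\varepsilon} + \ln\tfrac{1}{\delta}\right)\right),$

consistent with the parameters listed above; the exact statement proved in the cited work may differ in constants and logarithmic factors.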
In particular, they consider a class of simple auctions called -level auctions: auctions with reserve prices (a Vickrey auction with a single reserve price is a 1-level auction). They prove that the pseudo-VC-dimension of this class is , which immediately translates to a bound on their generalization error and sample-complexity. They also prove bounds on the representation error of this class of auctions.
Multi-parametric agents
Devanur et al. study a market with different item types and unit-demand agents.
Chawla et al. study PIMs for the makespan minimization problem.
Hsu et al. study a market with different item types. The supplies are fixed. The buyers can buy bundles of items and have different valuations on bundles. They prove that if buyers are sampled independently from some unknown distribution, an optimal price-vector is calculated, and this price-vector is then applied to a fresh sample of buyers, then the social welfare is approximately optimal. The competitive ratio implied by their Theorem 6.3 is, with probability , at least
.
Alternatives
Prior-independent mechanisms (PIM) should be contrasted with two other mechanism types:
Bayesian-optimal mechanisms (BOM) assume that the agents' valuations are drawn from a known probability distribution. The mechanism is tailored to the parameters of this distribution (e.g., its median or mean value).
Prior-free mechanisms (PFM) do not assume that the agents' valuations are drawn from any probability distribution (known or unknown). The seller's goal is to design an auction that will produce a reasonable profit even in worst-case scenarios.
From the point-of-view of the designer, BOM is the easiest, then PIM, then PFM. The approximation guarantees of BOM and PIM are in expectation, while those of PFM are in worst-case.
See also
Market research
Algorithmic pricing
References
Mechanism design
Sampling (statistics)
Market research | Prior-independent mechanism | Mathematics | 940 |
683,116 | https://en.wikipedia.org/wiki/SO%288%29 | In mathematics, SO(8) is the special orthogonal group acting on eight-dimensional Euclidean space. It may be considered as either a real or a complex simple Lie group, of rank 4 and dimension 28.
Spin(8)
Like all special orthogonal groups SO(n) with n ≥ 2, SO(8) is not simply connected. And, like all SO(n) with n > 2, the fundamental group of SO(8) is isomorphic to Z2. The universal cover of SO(8) is the spin group Spin(8).
Center
The center of SO(8) is Z2, consisting of the diagonal matrices {±I} (as for all SO(2n) with 2n ≥ 4), while the center of Spin(8) is Z2×Z2 (as for all Spin(4n), 4n ≥ 4).
Triality
SO(8) is unique among the simple Lie groups in that its Dynkin diagram (D4 under the Dynkin classification) possesses a three-fold symmetry. This gives rise to a peculiar feature of Spin(8) known as triality. Related to this is the fact that the two spinor representations, as well as the fundamental vector representation, of Spin(8) are all eight-dimensional (for all other spin groups the spinor representation is either smaller or larger than the vector representation). The triality automorphism of Spin(8) lives in the outer automorphism group of Spin(8), which is isomorphic to the symmetric group S3 that permutes these three representations. The automorphism group acts on the center Z2 × Z2 (which also has automorphism group isomorphic to S3, which may also be considered as the general linear group over the finite field with two elements, S3 ≅ GL(2,2)). When one quotients Spin(8) by one central Z2, breaking this symmetry and obtaining SO(8), the remaining outer automorphism group is only Z2. The triality symmetry acts again on the further quotient SO(8)/Z2.
Sometimes Spin(8) appears naturally in an "enlarged" form, as the automorphism group of Spin(8), which breaks up as a semidirect product: Aut(Spin(8)) ≅ PSO(8) ⋊ S3.
Unit octonions
Elements of SO(8) can be described with unit octonions, analogously to how elements of SO(2) can be described with unit complex numbers and elements of SO(4) can be described with unit quaternions. However the relationship is more complicated, partly due to the non-associativity of the octonions. A general element in SO(8) can be described as the product of 7 left-multiplications, 7 right-multiplications and also 7 bimultiplications by unit octonions (a bimultiplication is the composition of a left-multiplication and a right-multiplication by the same octonion, and is unambiguously defined because octonions obey the Moufang identities).
It can be shown that an element of SO(8) can be constructed with bimultiplications, by first showing that pairs of reflections through the origin in 8-dimensional space correspond to pairs of bimultiplications by unit octonions. The triality automorphism of Spin(8) described below provides similar constructions with left multiplications and right multiplications.
Octonions and triality
If and , it can be shown that this is equivalent to , meaning that without ambiguity. A triple of maps that preserve this identity, so that is called an isotopy. If the three maps of an isotopy are in , the isotopy is called an orthogonal isotopy. If , then following the above can be described as the product of bimultiplications of unit octonions, say . Let be the corresponding products of left and right multiplications by the conjugates (i.e., the multiplicative inverses) of the same unit octonions, so , . A simple calculation shows that is an isotopy. As a result of the non-associativity of the octonions, the only other orthogonal isotopy for is . As the set of orthogonal isotopies produce a 2-to-1 cover of , they must in fact be .
Multiplicative inverses of octonions are two-sided, which means that is equivalent to . This means that a given isotopy can be permuted cyclically to give two further isotopies and . This produces an order 3 outer automorphism of . This "triality" automorphism is exceptional among spin groups. There is no triality automorphism of , as for a given the corresponding maps are only uniquely determined up to sign.
Root system
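In a standard realization (a conventional choice of coordinates, supplied here for reference), the root system of D4 consists of the 24 vectors

$\pm e_i \pm e_j, \qquad 1 \le i < j \le 4,$

where e1, ..., e4 is an orthonormal basis of R4; all 24 roots have the same length.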
Weyl group
Its Weyl/Coxeter group has 4! × 8 = 192 elements.
Cartan matrix
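With the simple roots labelled so that node 2 is the central node of the D4 Dynkin diagram (one common convention; other labelings permute the rows and columns), the Cartan matrix is

$\begin{pmatrix} 2 & -1 & 0 & 0 \\ -1 & 2 & -1 & -1 \\ 0 & -1 & 2 & 0 \\ 0 & -1 & 0 & 2 \end{pmatrix}.$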
See also
Octonions
Clifford algebra
G2
References
(originally published in 1954 by Columbia University Press)
Lie groups
Octonions | SO(8) | Mathematics | 1,068 |
10,110,554 | https://en.wikipedia.org/wiki/Composite%20transposon | A composite transposon is similar in function to simple transposons and insertion sequence (IS) elements in that it has protein coding DNA segments flanked by inverted, repeated sequences that can be recognized by transposase enzymes. A composite transposon, however, is flanked by two separate IS elements which may or may not be exact replicas. Instead of each IS element moving separately, the entire length of DNA spanning from one IS element to the other is transposed as one complete unit. Composite transposons will also often carry one or more genes conferring antibiotic resistance.
Flanked by SINEs in mammalian genomes
Two SINEs may act in concert to flank and mobilize an intervening single copy DNA sequence. This was reported for a 710 bp DNA sequence upstream of the bovine beta globin gene. The DNA arrangement forms a composite transposon whose presence has been confirmed by the complete bovine genomic sequence where the mobilized sequence may be found on bovine chromosome 15 in contig NW_001493315.1 nucleotides #1085432–1086142 and the originating sequence may be found on bovine chromosome 2 in contig NW_001501789.2 nucleotides #1096679–1097389. It is likely that similar composite transposons exist in other bovine genomic regions and other mammalian genomes. They could be detected with suitable algorithms.
See also
Tn10
References
Mobile genetic elements | Composite transposon | Biology | 306 |
4,769,321 | https://en.wikipedia.org/wiki/Timeline%20of%20fundamental%20physics%20discoveries | This timeline lists significant discoveries in physics and the laws of nature, including experimental discoveries, theoretical proposals that were confirmed experimentally, and theories that have significantly influenced current thinking in modern physics. Such discoveries are often a multi-step, multi-person process. Multiple discovery sometimes occurs when multiple research groups discover the same phenomenon at about the same time, and scientific priority is often disputed. The listings below include some of the most significant people and ideas by date of publication or experiment.
Antiquity
624–546 BCE – Thales of Miletus: Introduced natural philosophy
610–546 BCE – Anaximander: Concept of Earth floating in space
460–370 BCE – Democritus: Atomism via thought experiment
384–322 BCE – Aristotle: Aristotelian physics, earliest effective theory of physics
c. 300 BCE – Euclid: Euclidean geometry
c. 250 BCE – Archimedes: Archimedes' principle
310–230 BCE – Aristarchos: Proposed heliocentrism
276–194 BCE – Eratosthenes: Circumference of the Earth measured
190–150 BCE – Seleucus: Support of heliocentrism based on reasoning
220–150 BCE – Apollonius and Hipparchus: Invention of the astrolabe
205–86 BCE – Hipparchus or unknown: Antikythera mechanism an analog computer of planetary motions
129 BCE – Hipparchus: Hipparchus star catalog of the entire sky and precession of the equinoxes
60 CE – Hero of Alexandria: Catoptrics: Hero's principle of the shortest path of light
c. 150 CE – Ptolemy: Ptolemaic model standardized geocentrism
Middle Ages
500 CE – John Philoponus: Theory of impetus
984 CE – Ibn Sahl: Law of refraction
1010 – Ibn al-Haytham (Alhazen): Optics, finite speed of light
c. 1030 – Ibn Sina (Avicenna): Concept of force
c. 1050 – al-Biruni: Speed of light is much larger than speed of sound
c. 1100 – Al-Baghdadi: Theory of motion with distinction between velocity and acceleration
16th century
1514 – Nicolaus Copernicus: Heliocentrism
1586 – Simon Stevin: Delft tower experiment
17th century
1608 – Earliest known telescopes
1609, 1619 – Kepler: Kepler's laws of planetary motion
1610 – Galileo Galilei: discovered the Galilean moons of Jupiter
1613 – Galileo Galilei: Inertia
1621 – Willebrord Snellius: Snell's law
1632 – Galileo Galilei: The Galilean principle (the laws of motion are the same in all inertial frames)
1660 – Blaise Pascal: Pascal's law
1660 – Robert Hooke: Hooke's law
1662 – Robert Boyle: Boyle's law
1663 – Otto von Guericke: first electrostatic generator
1676 – Ole Rømer: Rømer's determination of the speed of light from timing observations of the moons of Jupiter.
1678 – Christiaan Huygens mathematical wave theory of light, published in his Treatise on Light
1687 – Isaac Newton: Newton's laws of motion, and Newton's law of universal gravitation
18th century
1738 – Daniel Bernoulli: First model of the kinetic theory of gases
1745–46 – Ewald Georg von Kleist and Pieter van Musschenbroek: discovery of the Leyden jar
1752 – Benjamin Franklin: kite experiment
1760 – Joseph-Louis Lagrange: Lagrangian mechanics
1782 – Antoine Lavoisier: conservation of mass
1785 – Charles-Augustin de Coulomb: Coulomb's inverse-square law for electric charges confirmed
1800 – Alessandro Volta: discovery of voltaic pile
19th century
1800 - William Herschel: Infrared light
1801 – Thomas Young: Wave theory of light
1801 - Johann Wilhelm Ritter: Ultraviolet light
1803 – John Dalton: Atomic theory of matter
1806 – Thomas Young: Kinetic energy
1814 – Augustin-Jean Fresnel: Wave theory of light, optical interference
1820 – André-Marie Ampère, Jean-Baptiste Biot, and Félix Savart: Evidence for electromagnetic interactions (Biot–Savart law)
1822 – Joseph Fourier: Heat equation
1824 – Nicolas Léonard Sadi Carnot: Ideal gas cycle analysis (Carnot cycle), internal combustion engine
1826 – Ampère's circuital law
1827 – Georg Ohm: Electrical resistance
1831 – Michael Faraday: Faraday's law of induction
1833 – William Rowan Hamilton: Hamiltonian mechanics
1838 – Michael Faraday: Lines of force
1838 – Wilhelm Eduard Weber and Carl Friedrich Gauss: Earth's magnetic field
1842–43 – William Thomson, 1st Baron Kelvin and Julius von Mayer: Conservation of energy
1842 – Christian Doppler: Doppler effect
1845 – Michael Faraday: Faraday rotation (interaction of light and magnetic field)
1847 – Hermann von Helmholtz & James Prescott Joule: Conservation of Energy 2
1850–51 – William Thomson, 1st Baron Kelvin & Rudolf Clausius: Second law of thermodynamics
1857 – Rudolf Clausius: Introduced translational, rotational, and vibrational molecular motions
1857 – Rudolf Clausius: Introduced the concept of mean free path
1860 – James Clerk Maxwell: Introduced statistical mechanics with the Maxwell distribution
1861 – Gustav Kirchhoff: Black body
1861–62 – Maxwell's equations
1863 – Rudolf Clausius: Entropy
1864 – James Clerk Maxwell: A Dynamical Theory of the Electromagnetic Field (electromagnetic radiation)
1867 – James Clerk Maxwell: On the Dynamical Theory of Gases (kinetic theory of gases)
1871–89 – Ludwig Boltzmann & Josiah Willard Gibbs: Statistical mechanics (Boltzmann equation, 1872)
1873 – Maxwell: A Treatise on Electricity and Magnetism
1884 – Boltzmann derives Stefan radiation law
1887 – Michelson–Morley experiment
1887 – Heinrich Rudolf Hertz: Electromagnetic waves
1888 – Johannes Rydberg: Rydberg formula
1889, 1892 – Lorentz-FitzGerald contraction
1893 – Wilhelm Wien: Wien's displacement law for black-body radiation
1895 – Wilhelm Röntgen: X-rays
1896 – Henri Becquerel: Radioactivity
1896 – Pieter Zeeman: Zeeman effect
1897 – J. J. Thomson: Electron discovered
1900 – Max Planck: Formula for black-body radiation – the quantum solution to the ultraviolet catastrophe
1900 - Paul Villard: Gamma rays
20th century
1904 – J. J. Thomson's plum pudding model of the atom
1905 – Albert Einstein: Special relativity, proposes light quantum (later named photon) to explain the photoelectric effect, Brownian motion, Mass–energy equivalence
1908 – Hermann Minkowski: Minkowski space
1911 – Ernest Rutherford: Discovery of the atomic nucleus (Rutherford model)
1911 – Kamerlingh Onnes: Superconductivity
1912 - Victor Francis Hess: Cosmic rays
1913 – Niels Bohr: Bohr model of the atom
1915 – Albert Einstein: General relativity
1915 – Emmy Noether: Noether's theorem relates symmetries to conservation laws.
1916 – Schwarzschild metric modeling gravity outside a large sphere
1917 - Ernest Rutherford: Proton proved
1919 – Arthur Eddington: Light bending confirmed – evidence for general relativity
1919–1926 – Kaluza–Klein theory proposing unification of gravity and electromagnetism
1922 – Alexander Friedmann proposes expanding universe
1922–37 – Friedmann–Lemaître–Robertson–Walker metric cosmological model
1923 – Stern–Gerlach experiment
1923 – Edwin Hubble: Galaxies discovered
1923 – Arthur Compton: Particle nature of photons confirmed by observation of photon momentum
1924 – Bose–Einstein statistics
1924 – Louis de Broglie: De Broglie wave
1925 – Werner Heisenberg: Matrix mechanics
1925–27 – Niels Bohr & Max Planck: Quantum mechanics
1925 – Stellar structure understood
1926 – Fermi-Dirac Statistics
1926 – Erwin Schrödinger: Schrödinger Equation
1927 – Werner Heisenberg: Uncertainty principle
1927 – Georges Lemaître: Big Bang
1927 – Paul Dirac: Dirac equation
1927 – Max Born: Born rule
1928 – Paul Dirac proposes the antiparticle
1929 – Edwin Hubble: Expansion of the universe confirmed
1932 – Carl David Anderson: Antimatter (positrons) discovered
1932 – James Chadwick: Neutron discovered
1933 – Ernst Ruska: Invention of the electron microscope
1935 – Subrahmanyan Chandrasekhar: Chandrasekhar limit for black hole collapse
1937 - Majorana particle, hypothesized as a fermion that is its own antiparticle.
1937 – Muon discovered by Carl David Anderson and Seth Neddermeyer
1938 – Pyotr Kapitsa: Superfluidity discovered
1938 – Otto Hahn, Lise Meitner and Fritz Strassmann Nuclear fission discovered
1938–39 – Stellar fusion explains energy production in stars
1939 – Uranium fission discovered
1941 – Feynman path integral
1944 – Theory of magnetism in 2D: Ising model
1947 – C.F. Powell, Giuseppe Occhialini, César Lattes: Pion discovered
1948 – Richard Feynman, Shinichiro Tomonaga, Julian Schwinger, Freeman Dyson: Quantum electrodynamics
1948 – Invention of the maser and laser by Charles Townes
1948 – Feynman diagrams
1955 - Emilio Segrè and Owen Chamberlain: Antiproton discovered
1956 – Bruce Cork: Antineutron discovered
1956 – Electron neutrino discovered
1956–57 – Parity violation proved by Chien-Shiung Wu
1957 - Many-worlds, also called the relative state formulation or the Everett interpretation.
1957 – BCS theory explaining superconductivity
1959–60 – Role of topology in quantum physics predicted and confirmed
1962 – SU(3) theory of strong interactions
1962 – Muon neutrino discovered
1963 – Chien-Shiung Wu confirms the conserved vector current theory for weak interactions
1963 – Murray Gell-Mann and George Zweig: Quarks predicted
1964 – Bell's Theorem initiates quantitative study of quantum entanglement
1964 - First black hole, Cygnus X-1, discovered
1964 – CP violation discovered by James Cronin and Val Fitch.
1965 – Arno Penzias and Robert Wilson: Cosmic Microwave Background (CMB) discovered
1967 – Unification of weak interaction and electromagnetism (electroweak theory)
1967 – Solar neutrino problem found
1967 – Pulsars (rotating neutron stars) discovered
1968 – Experimental evidence for quarks found
1968 – Vera Rubin: Dark matter theories
1970–73 – Standard Model of elementary particles invented
1971 – Helium 3 superfluidity
1971–75 – Michael Fisher, Kenneth G. Wilson, and Leo Kadanoff: Renormalization group
1972 – Black Hole Entropy
1974 – Black hole radiation (Hawking radiation) predicted
1974 – Charmed quark discovered
1975 – Tau lepton found
1975 – Abraham Pais and Sam Treiman: Introduction of the Standard Model of particle physics term
1977 – Bottom quark found
1977 – Anderson localization recognised (Nobel prize in 1977, Philip W. Anderson, Mott, Van Vleck)
1980 – Strangeness as a signature of quark-gluon plasma predicted
1980 – Richard Feynman proposes quantum computing
1980 – Quantum Hall effect
1981 – Alan Guth: Theory of cosmic inflation proposed
1982 – Aspect experiment confirms violations of Bell's inequalities
1981 – Fractional quantum Hall effect discovered
1983 – Simulated annealing
1984 – W and Z bosons directly observed
1984 – First laboratory implementation of quantum cryptography
1987 – High-temperature superconductivity discovered in 1986, awarded Nobel prize in 1987 (J. Georg Bednorz and K. Alexander Müller)
1989–98 – Quantum annealing
1993 – Quantum teleportation of unknown states proposed
1994 – Shor's algorithm discovered, initiating the serious study of quantum computation
1994–97 – Matrix models/M-theory
1995 – Wolfgang Ketterle: Bose–Einstein condensate observed
1995 – Top quark discovered
1995–2000 – Econophysics and Kinetic exchange models of markets
1997 – Juan Maldacena proposed the AdS/CFT correspondence
1998 – Accelerating expansion of the universe discovered by the Supernova Cosmology Project and the High-Z Supernova Search Team
1998 – Atmospheric neutrino oscillation established
1999 – Lene Vestergaard Hau: Slow light experimentally demonstrated
2000 – Quark-gluon plasma found
2000 – Tau neutrino found
21st century
2001 – Solar neutrino oscillation observed, resolving the solar neutrino problem
2003 – WMAP observations of cosmic microwave background
2004 – Exceptional properties of graphene discovered
2007 – Giant magnetoresistance recognized (Nobel prize, Albert Fert and Peter Grünberg)
2008 – First artificial production of antimatter (positrons), by the LLNL
2008 – 16-year study of stellar orbits around Sagittarius A* provides strong evidence for a supermassive black hole at the centre of the Milky Way galaxy
2009 – Planck begins observations of cosmic microwave background
2012 – Higgs boson found by the Compact Muon Solenoid and ATLAS experiments at the Large Hadron Collider
2015 – Gravitational waves are observed
2016 – Topological order – topological phase transitions and order – recognized (Nobel prize, David J. Thouless, F. Duncan M. Haldane and J. Michael Kosterlitz)
2019 – First image of a black hole
2023 – Experimental evidence of stochastic Gravitational wave background
2023 – First "image" of the Milky Way in neutrinos instead of light
See also
Physics
List of timelines
List of unsolved problems in physics
References
Theoretical physics
History of science
Fundamental Discoveries | Timeline of fundamental physics discoveries | Physics,Technology | 2,817 |
33,288,508 | https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2056 | In molecular biology, glycoside hydrolase family 56 is a family of glycoside hydrolases.
Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes.
Glycoside hydrolase family 56 (CAZy GH56) includes enzymes with hyaluronidase activity. The venom of Apis mellifera (Honeybee) contains several biologically-active peptides and two enzymes, one of which is a hyaluronidase. The amino acid sequence of bee venom hyaluronidase contains 349 amino acids, and includes four cysteines and a number of potential glycosylation sites. The sequence shows a high degree of similarity to PH-20, a membrane protein of mammalian sperm involved in sperm-egg adhesion, supporting the view that hyaluronidases play a role in fertilisation.
PH-20 is required for sperm adhesion to the egg zona pellucida; it is located on both the sperm plasma membrane and acrosomal membrane. The amino acid sequence of the mature protein contains 468 amino acids, and includes six potential N-linked glycosylation sites and twelve cysteines, eight of which are tightly clustered near the C-terminus.
References
EC 3.2.1
Glycoside hydrolase families
Protein families | Glycoside hydrolase family 56 | Biology | 376 |
15,141,674 | https://en.wikipedia.org/wiki/SNPlex | SNPlex is a platform for SNP genotyping sold by Applied Biosystems (ABI). It is based on capillary electrophoresis to separate varying fragments of DNA, which allows the assay to be performed on ABI's 3730xl DNA analyzers. Currently, up to 48 SNPs can be genotyped in a single reaction.
References
External links
SNPlex Genotyping System
Molecular biology
DNA
Biotechnology | SNPlex | Chemistry,Biology | 99 |
978,650 | https://en.wikipedia.org/wiki/Triple%20product | In geometry and algebra, the triple product is a product of three 3-dimensional vectors, usually Euclidean vectors. The name "triple product" is used for two different products, the scalar-valued scalar triple product and, less often, the vector-valued vector triple product.
Scalar triple product
The scalar triple product (also called the mixed product, box product, or triple scalar product) is defined as the dot product of one of the vectors with the cross product of the other two: a · (b × c).
Geometric interpretation
Geometrically, the scalar triple product a · (b × c) is the (signed) volume of the parallelepiped defined by the three vectors given.
Properties
The scalar triple product is unchanged under a circular shift of its three operands (a, b, c): a · (b × c) = b · (c × a) = c · (a × b)
Swapping the positions of the operators without re-ordering the operands leaves the triple product unchanged. This follows from the preceding property and the commutative property of the dot product: a · (b × c) = (a × b) · c
Swapping any two of the three operands negates the triple product. This follows from the circular-shift property and the anticommutativity of the cross product: a · (b × c) = −a · (c × b) = −b · (a × c) = −c · (b × a)
The scalar triple product can also be understood as the determinant of the 3 × 3 matrix that has the three vectors either as its rows or its columns (a matrix has the same determinant as its transpose): a · (b × c) = det [a b c]; a worked numerical example is given after this list.
If the scalar triple product is equal to zero, then the three vectors a, b, and c are coplanar, since the parallelepiped defined by them would be flat and have no volume.
If any two vectors in the scalar triple product are equal, then its value is zero: a · (a × b) = a · (b × a) = b · (a × a) = 0
Also: (a · (b × c)) a = (a × b) × (a × c)
The simple product of two triple products (or the square of a triple product) may be expanded in terms of dot products: ((a × b) · c) ((d × e) · f) equals the determinant of the 3 × 3 matrix with rows (a · d, a · e, a · f), (b · d, b · e, b · f), (c · d, c · e, c · f). This restates in vector notation that the product of the determinants of two 3×3 matrices equals the determinant of their matrix product. As a special case, the square of a triple product is a Gram determinant.
The ratio of the triple product and the product of the three vector norms is known as a polar sine: psin(a, b, c) = (a · (b × c)) / (|a| |b| |c|), which ranges between −1 and 1.
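As a worked illustration of the determinant formula above, with arbitrarily chosen vectors a = (1, 2, 3), b = (4, 5, 6) and c = (7, 8, 0):

$\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}) = \det\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 0 \end{pmatrix} = 1(5\cdot 0 - 6\cdot 8) - 2(4\cdot 0 - 6\cdot 7) + 3(4\cdot 8 - 5\cdot 7) = -48 + 84 - 9 = 27,$

so the parallelepiped spanned by a, b and c has signed volume 27.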
Scalar or pseudoscalar
Although the scalar triple product gives the volume of the parallelepiped, it is the signed volume, the sign depending on the orientation of the frame or the parity of the permutation of the vectors. This means the product is negated if the orientation is reversed, for example by a parity transformation, and so is more properly described as a pseudoscalar if the orientation can change.
This also relates to the handedness of the cross product; the cross product transforms as a pseudovector under parity transformations and so is properly described as a pseudovector. The dot product of two vectors is a scalar but the dot product of a pseudovector and a vector is a pseudoscalar, so the scalar triple product (of vectors) must be pseudoscalar-valued.
If T is a proper rotation then
(Ta) · ((Tb) × (Tc)) = a · (b × c),
but if T is an improper rotation then
(Ta) · ((Tb) × (Tc)) = −a · (b × c).
Scalar or scalar density
Strictly speaking, a scalar does not change at all under a coordinate transformation. (For example, the factor of 2 used for doubling a vector does not change if the vector is in spherical vs. rectangular coordinates.) However, if each vector is transformed by a matrix then the triple product ends up being multiplied by the determinant of the transformation matrix, which could be quite arbitrary for a non-rotation. That is, the triple product is more properly described as a scalar density.
As an exterior product
In exterior algebra and geometric algebra the exterior product of two vectors is a bivector, while the exterior product of three vectors is a trivector. A bivector is an oriented plane element and a trivector is an oriented volume element, in the same way that a vector is an oriented line element.
Given vectors a, b and c, the product
a ∧ b ∧ c
is a trivector with magnitude equal to the scalar triple product, i.e.
|a ∧ b ∧ c| = |a · (b × c)|,
and is the Hodge dual of the scalar triple product. As the exterior product is associative, brackets are not needed, as it does not matter which of (a ∧ b) ∧ c or a ∧ (b ∧ c) is calculated first, though the order of the vectors in the product does matter. Geometrically the trivector a ∧ b ∧ c corresponds to the parallelepiped spanned by a, b, and c, with bivectors a ∧ b, b ∧ c and a ∧ c matching the parallelogram faces of the parallelepiped.
As a trilinear function
The triple product is identical to the volume form of the Euclidean 3-space applied to the vectors via interior product. It also can be expressed as a contraction of vectors with a rank-3 tensor equivalent to the form (or a pseudotensor equivalent to the volume pseudoform); see below.
Vector triple product
The vector triple product is defined as the cross product of one vector with the cross product of the other two. The following relationship holds:
a × (b × c) = (a · c) b − (a · b) c.
This is known as triple product expansion, or Lagrange's formula, although the latter name is also used for several other formulas. Its right hand side can be remembered by using the mnemonic "ACB − ABC", provided one keeps in mind which vectors are dotted together. A proof is provided below. Some textbooks write the identity as a × (b × c) = b (a · c) − c (a · b), such that a more familiar mnemonic "BAC − CAB" is obtained, as in "back of the cab".
Since the cross product is anticommutative, this formula may also be written (up to permutation of the letters) as:
(a × b) × c = −c × (a × b) = −(c · b) a + (c · a) b
From Lagrange's formula it follows that the vector triple product satisfies:
a × (b × c) + b × (c × a) + c × (a × b) = 0,
which is the Jacobi identity for the cross product. Another useful formula follows:
(a × b) × c = a × (b × c) − b × (a × c)
These formulas are very useful in simplifying vector calculations in physics. A related identity regarding gradients and useful in vector calculus is Lagrange's formula of the vector cross-product identity:
∇ × (∇ × f) = ∇(∇ · f) − (∇ · ∇) f
This can also be regarded as a special case of the more general Laplace–de Rham operator Δ = dδ + δd.
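As a check of the expansion above with the same illustrative vectors used earlier, a = (1, 2, 3), b = (4, 5, 6), c = (7, 8, 0): here a · c = 23 and a · b = 32, so

$\mathbf{a}\times(\mathbf{b}\times\mathbf{c}) = (\mathbf{a}\cdot\mathbf{c})\,\mathbf{b} - (\mathbf{a}\cdot\mathbf{b})\,\mathbf{c} = 23\,(4, 5, 6) - 32\,(7, 8, 0) = (-132, -141, 138),$

which agrees with first computing b × c = (−48, 42, −3) and then crossing with a directly.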
Proof
The x component of a × (b × c) is given by:
(a × (b × c))_x = a_y (b × c)_z − a_z (b × c)_y
= a_y (b_x c_y − b_y c_x) − a_z (b_z c_x − b_x c_z)
= b_x (a_y c_y + a_z c_z) − c_x (a_y b_y + a_z b_z)
= b_x (a_x c_x + a_y c_y + a_z c_z) − c_x (a_x b_x + a_y b_y + a_z b_z)   (adding and subtracting a_x b_x c_x)
= (a · c) b_x − (a · b) c_x
Similarly, the y and z components of a × (b × c) are given by:
(a × (b × c))_y = (a · c) b_y − (a · b) c_y and (a × (b × c))_z = (a · c) b_z − (a · b) c_z
By combining these three components we obtain:
a × (b × c) = (a · c) b − (a · b) c
Using geometric algebra
If geometric algebra is used the cross product b × c of vectors is expressed as their exterior product b ∧ c, a bivector. The second cross product cannot be expressed as an exterior product, otherwise the scalar triple product would result. Instead a left contraction can be used, so the formula becomes
a × (b × c) = −a ⌟ (b ∧ c)
The proof follows from the properties of the contraction. The result is the same vector as calculated using a × (b × c).
Interpretations
Tensor calculus
In tensor notation, the triple product is expressed using the Levi-Civita symbol:
a · (b × c) = ε_ijk a_i b_j c_k
and
(a × (b × c))_i = ε_ijk a_j (b × c)_k = ε_ijk ε_klm a_j b_l c_m,
referring to the i-th component of the resulting vector. This can be simplified by performing a contraction on the Levi-Civita symbols,
ε_ijk ε_klm = δ_il δ_jm − δ_im δ_jl,
where δ_ij is the Kronecker delta function (δ_ij = 1 when i = j and δ_ij = 0 when i ≠ j) and the right-hand side is the corresponding generalized Kronecker delta function. We can reason out this identity by recognizing that the index k will be summed out, leaving only i, j, l and m. In the first term, we fix i = l and thus j = m. Likewise, in the second term, we fix i = m and thus j = l.
Returning to the triple cross product,
(a × (b × c))_i = (δ_il δ_jm − δ_im δ_jl) a_j b_l c_m = a_j b_i c_j − a_j b_j c_i = (a · c) b_i − (a · b) c_i.
Vector calculus
Consider the flux integral of the vector field F across the parametrically-defined surface S: r = r(u, v). The unit normal vector to the surface is given by n = (r_u × r_v) / |r_u × r_v|, so the integrand F · (r_u × r_v), with r_u and r_v the partial derivatives of r, is a scalar triple product.
See also
Quadruple product
Vector algebra relations
Notes
References
External links
Khan Academy video of the proof of the triple product expansion
Articles containing proofs
Mathematical identities
Multilinear algebra
Operations on vectors
Ternary operations | Triple product | Mathematics | 1,562 |
67,834,566 | https://en.wikipedia.org/wiki/Rurouni%20Kenshin%20%281996%20TV%20series%29 | , sometimes called Samurai X, is a Japanese anime television series, based on Nobuhiro Watsuki's manga series Rurouni Kenshin. It was directed by Kazuhiro Furuhashi, produced by SPE Visual Works and Fuji Television, and animated by Studio Gallop (episodes 1–66) and Studio Deen (episodes 67–95). It was broadcast on Fuji TV from January 1996 to September 1998. Besides an animated feature film, three series of original video animations (OVAs) were also produced; the first adapts stories from the manga that were not featured in the anime series; the second is both a retelling and a sequel to the anime series; and the third was a reimagining of the second story arc of the series.
Sony Pictures Television International produced its own English dub of the series, releasing it as Samurai X in Southeast Asia. Media Blasters later licensed the series in North America and released it on home video from 2000 to 2002. The series was aired in the United States on Cartoon Network's Toonami programming block in 2003, only broadcasting the first 62 episodes.
Rurouni Kenshin has ranked among the 100 most-watched series in Japan multiple times.
A second anime television series adaptation by Liden Films premiered in 2023 on Fuji TV's Noitamina programming block.
Plot
In the 11th year of the Meiji era (1878), the former Ishin Shishi Himura Kenshin wanders around Japan until he reaches Tokyo. There, he is attacked by a young woman named Kamiya Kaoru, who believes him to be the Hitokiri Battōsai, but she forgets about him upon the appearance of a man claiming to be the Hitokiri Battōsai, who is tarnishing the name of the swordsmanship school that she teaches. Kenshin decides to help her and defeats the fake Battōsai, revealing himself as the actual former manslayer who has become a pacifist.
Kaoru invites Kenshin to stay at her dojo, claiming she is not interested in his past. Although Kenshin accepts the invitation, his fame causes him to accidentally attract other warriors who wish him dead. However, Kenshin also meets new friends including the young Myōjin Yahiko who wishes to reach his strength but ends up becoming Kaoru's student, the fighter-for-hire Sagara Sanosuke from the Sekihō Army who realizes the current Kenshin is different from the Ishin Shishi he detested for killing his leader Sagara Sōzō, and the doctor Takani Megumi who wishes to atone for her sins as a drug dealer, inspired by Kenshin's devotion to his past.
Production
In a manga volume prior to the release of the anime, Watsuki said that while some fans might object to the adaptation of the series into anime, he looked forward to the adaptation and felt it would work since the manga was already "anime-esque." He had some worries about the series because its creation was sudden and the production schedule was "tight." In another note in the same volume, Watsuki added that he had little input in the series, as he was too busy with publishing; in addition, his schedule did not match that of the anime production staff. Watsuki said that it would be impossible to make the anime and manga exactly the same, so he would feel fine with the anime adaptation as long as it took advantage of the strengths of the anime format.
After the anime began production, Watsuki said that the final product was "better than imagined" and that it was created with the "pride and soul of professionals." Watsuki criticized the timing, the "off-the-wall, embarrassing subtitles," and the condensing of the stories; for instance, he felt the Jin-e storyline would not sufficiently fit two episodes. Watsuki said that he consulted a director and that he felt the anime would improve after that point. The fact that the CD book voice actors, especially Megumi Ogata and Tomokazu Seki, who portrayed Kenshin and Sanosuke in the CD books, respectively, did not get their corresponding roles in the anime disappointed Watsuki. Watsuki reported receiving some letters of protest against the voice actor change and letters requesting that Ogata portray Seta Sōjirō; Watsuki said that he wanted Ogata to play Misao and that Ogata would likely find "stubborn girl" roles more challenging than the "pretty boy" roles she usually gets, though Watsuki felt Ogata would have "no problem" portraying a "stubborn girl." Watsuki said that the new voice actor arrangement "works out" and that he hoped that the CD book voice actors would find roles in the anime. Watsuki said that the reason why the CD book voice actors did not get the corresponding roles in the anime was due to the fact that many more companies were involved in the production of the anime than the production of the CD books, and therefore the "industry power-structure" affected the series.
The second season of the anime television series had some original stories, not in the manga. Watsuki said that some people disliked "TV originals," but to him, the concept was "exciting." Watsuki said that because the first half of the original storyline that existed by the time of the production of the tenth volume was "jammed" into the first season, he looked forward to a "more entertaining" second season. Watsuki added that it was obvious that the staff of the first season "put their hearts and souls" into the work, but that the second series will be "a much better stage for their talents."
In producing the English dub version of the series, Media Blasters considered following the Japanese version's example of casting a woman as Kenshin, with Mona Marshall a finalist for the role. Richard Hayworth was eventually selected instead, giving Kenshin's character a more masculine voice in the English adaptation. Marshall was also selected to voice the younger Kenshin during flashback scenes. Clark Cheng, the Media Blasters dub script writer, said that localizing Kenshin's unusual speech was a difficult process. His use of de gozaru and oro were not only character trademarks that indicated his state of mind, but important elements of the story. However, neither is directly translatable into English, and in the end the company chose to replace de gozaru with "that I did," "that I am," or "that I do." Kenshin's signature oro was replaced with "huah" to simulate a "funny sound" that had no real meaning. Lex Lang is Sanosuke's voice actor. When writing Sanosuke's dialogue, Cheng noted that the character was smarter than he would have liked in the first few episodes, so he tried slowly to change the character's dialogue to make Sanosuke seem less intelligent, so that he would be more similar to his equivalent in the Japanese version of the series.
Release
Directed by Kazuhiro Furuhashi, Rurouni Kenshin was broadcast for 94 episodes on Fuji TV from January 10, 1996, to September 8, 1998. It was produced by SPE Visual Works and Fuji TV, and animated by Studio Gallop (episodes 1–66) and Studio Deen (episode 67 onwards). The anime only adapts the manga up until the fight with Shishio, from then on it features original material not included in the manga. The unaired final episode was released on VHS on December 2, 1998. The episodes were collected on 26 VHS sets, released from September 21, 1997, to June 2, 1999; they were later collected on 26 DVD sets, released from June 19, 1999, to March 23, 2000. Three DVD box sets were released from September 5, 2001, to March 20, 2002.
Sony Pictures Television International produced its own English dub of the series, and released it under the name Samurai X in Southeast Asia. Sony attempted and failed to market Samurai X via an existing company in the United States. In October 1999, Media Blasters announced that it had licensed the series, later confirming that it would be released on home video. Media Blasters produced an English dub at Bang Zoom!, and 22 DVDs were released from July 25, 2000, to September 24, 2002. The series later aired in the United States on Cartoon Network, as a part of the Toonami programming block, starting on March 17, 2003, but ended with the 62nd episode, aired on October 18 of that same year. The series was heavily edited for content during its broadcast on Toonami. Media Blasters later split the series in three seasons and released each one as three premium DVD box sets from November 18, 2003, to July 27, 2004; they were re-released as "Economy" box sets from November 15, 2005, to February 15, 2006. The series, with both the original Japanese audio and the Media Blasters dub, was available on Netflix from 2016 to 2020.
Soundtracks
The music for the series was composed by Noriyuki Asakura. The first soundtrack album was released on April 1, 1996, containing 23 tracks. The second one, Rurouni Kenshin OST 2 – Departure was released on October 21, 1996, containing 15 tracks. The third one, Rurouni Kenshin OST 3 – Journey to Kyoto, was released on April 21, 1997, containing 13 tracks. The fourth one, Rurouni Kenshin OST 4 – Let it Burn was released on February 1, 1998, containing 12 tracks.
Several compilations of the songs were also released in collection CDs. 30 were selected and joined in a CD called Rurouni Kenshin – The Director's Collection, released on July 21, 1997. Rurouni Kenshin: Best Theme Collection, containing ten tracks, was released on March 21, 1998. All opening and ending themes were also collected in a CD, titled Rurouni Kenshin – Theme Song Collection, on December 6, 2000. Two Songs albums, containing tracks performed by the Japanese voice actors, were released on July 21, 1996, and July 18, 1998. All soundtrack albums, including OVAs and films, tracks were collected in Rurouni Kenshin Complete CD-Box, released on September 19, 2002. It contains the four TV OSTs, the two OVA OSTs, the movie OST, the two game OSTs, an opening and closing theme collection, and the two Character Songs albums. On July 27, 2011, Rurouni Kenshin Complete Collection, which includes all the opening and ending themes and the theme song of the animated film, was released.
Related media
Anime film
An anime film, Rurouni Kenshin: The Motion Picture, premiered on December 20, 1997.
Original video animations
A four-episode original video animation (OVA), titled Rurouni Kenshin: Trust & Betrayal, which served as a prequel to the series, was released in 1999.
A two-episode OVA, titled Rurouni Kenshin: Reflection, which served as a sequel to the series, was released from 2001 to 2002.
A two-episode OVA, Rurouni Kenshin: New Kyoto Arc, which remade the series' Kyoto arc, was released from 2011 to 2012.
Reception
On TV Asahi's top 100 most popular anime television series poll, Rurouni Kenshin ranked 66th. They also conducted an online web poll, in which the series ranked 62nd. Nearly a year later, TV Asahi once again conducted an online poll for the top one hundred anime, and the Rurouni Kenshin anime advanced in rank, coming in twenty-sixth place. It also ranked tenth in the Web's Most Wanted 2005, in the animation category. The fourth DVD of the anime was also Anime Castle's best-selling DVD in October 2001. Rurouni Kenshin was also a finalist in the American Anime Awards in the category "Long Series" but lost against Fullmetal Alchemist. In 2010, Mania.com's Briana Lawrence listed Rurouni Kenshin at number three on the website's list of "10 Anime Series That Need a Reboot".
The anime has also been reviewed by Chris Shepard from Anime News Network (ANN), who noted a well-crafted plot and good action scenes. However, he also criticized that during the first episodes the fights never get quite interesting, as it becomes a bit predictable that Kenshin is going to win and the victory music is repeated many times. Lynzee Loveridge from ANN highlighted it as the best-known series to use the Meiji period and saw the Kyoto arc as one of its best.
However, Mark A. Grey from the same site mentioned that all those negative points disappear during the Kyoto arc due to amazing fights and a great soundtrack. Tasha Robinson from SciFi.com remarked that "Kenshin's schizoid personal conflict between his ruthless-killer side and his country-bumpkin side" was a perfect way to develop good stories, which was one of the factors that made the series popular. Anime News Network acclaimed Shishio's characterization with regard to what he represents of Kenshin's past: "a merciless killer who believes his sword to be the only justice in the land." Similarly, Chris Beveridge of Mania Entertainment praised the build-up of the anime's Kyoto arc: after so much anticipation, Shishio's fights deliver skills that would amaze viewers despite the major wounds he suffers in the process. Beveridge reflected that while Shishio's death was caused by his old wounds rather than by an attack from Kenshin, the series' protagonist was also pushed to his limits in the story arc by fighting Sojiro and Shinomori before Shishio. Nevertheless, the writer concluded that the build-up still paid off, despite assumptions that Shishio's death might initially come across as a cop-out.
Although Carlos Ross from THEM Anime Reviews also liked the action scenes and storyline, he added that the number of childish and violent scenes make the show a bit unbalanced, saying it is not recommended for younger children. Daryl Surat of Otaku USA approved of the anime series, stating that while half of the first-season episodes consisted of filler, the situation "clicks" upon the introduction of Saitō Hajime and that he disagreed with people who disliked the television series compared to the OVAs. Surat said that while the Media Blasters anime dub is "well-cast," the English dub does not sound natural since the producers were too preoccupied with making the voice performances mimic the Japanese performances. Surat said that while he "didn't mind" the first filler arc with the Christianity sect, he could not stomach the final two filler arcs, and Japanese audiences disapproved of the final two filler arcs. Robin Brenner from Library Journal noted that despite its pacifist messages, Rurouni Kenshin was too violent, recommending it to older audiences.
In the making of the 2019 anime series Dororo, Kazuhiro Furuhashi was selected as its director mainly due to his experience directing Rurouni Kenshin.
Notes
References
Further reading
External links
Rurouni Kenshin
Adventure anime and manga
Anime series based on manga
Anime Works
Aniplex
Fiction set in 1878
Fuji Television original programming
Gallop (studio)
Historical anime and manga
Madman Entertainment anime
Martial arts anime and manga
Meiji era in fiction
Romance anime and manga
Samurai in anime and manga
Studio Deen
Television series by Sony Pictures Television
Television series set in the 1870s
Works about atonement | Rurouni Kenshin (1996 TV series) | Biology | 3,226 |
20,354,163 | https://en.wikipedia.org/wiki/Diamphotoxin | Diamphotoxin is a toxin produced by larvae and pupae of the beetle genus Diamphidia. Diamphotoxin is a hemolytic, cardiotoxic, and highly labile single-chain polypeptide bound to a protein that protects it from deactivation.
Diamphotoxin increases the permeability of cell membranes of red blood cells. Although this does not affect the normal flow of ions between cells, it allows all small ions to pass through cell membranes easily, which fatally disrupts the cells' ion levels. Although diamphotoxin has no neurotoxic effect, its hemolytic effect is lethal, and may reduce hemoglobin levels by as much as 75%.
The San people of Southern Africa use diamphotoxin as an arrow poison for hunting game. The toxin paralyses muscles gradually. Large mammals hunted in this way die slowly from a small injection of the poison.
Several leaf beetles species of genus Leptinotarsa produce a similar toxin, leptinotarsin.
See also
Palytoxin
Arrow poison
References
Further reading
External links
Diamphotoxin at PubChem. Retrieved 4 July 2013.
Insect toxins
Peptides | Diamphotoxin | Chemistry | 254 |
46,861,405 | https://en.wikipedia.org/wiki/Marine%20microbial%20symbiosis | Microbial symbiosis in marine animals was not discovered until 1981. In the time following, symbiotic relationships between marine invertebrates and chemoautotrophic bacteria have been found in a variety of ecosystems, ranging from shallow coastal waters to deep-sea hydrothermal vents. Symbiosis is a way for marine organisms to find creative ways to survive in a very dynamic environment. Symbiotic relationships differ in how dependent the organisms are on each other and in how they are associated, and symbiosis is also considered a selective force behind evolution. Symbiotic relationships can change the behavior, morphology, and metabolic pathways of the organisms involved. With increased recognition and research, new terminology has also arisen, such as holobiont, which describes a host and its symbionts as one grouping. Many scientists study the hologenome, which is the combined genetic information of the host and its symbionts. These terms are more commonly used to describe microbial symbionts.
The types of marine animals involved vary greatly; for example, sponges, sea squirts, corals, worms, and algae all host a variety of unique symbionts. Each symbiotic relationship occupies a unique ecological niche, which in turn can lead to entirely new species of host and symbiont.
It is particularly interesting that it took so long to discover marine microbial symbiosis, because nearly every surface submerged in the oceans becomes covered with biofilm, including a large number of living organisms. Many marine organisms display symbiotic relationships with microbes. Epibiotic bacteria have been found to live on crustacean larvae and protect them from fungal infections. Other microbes in deep-sea vents have been found to prevent the settlement of barnacle and tunicate larvae.
Mechanisms of symbiosis
Various mechanisms are utilized in order to facilitate symbiotic relationships and to help these associates evolve alongside one another. Through horizontal gene transfer, certain genetic elements are able to pass from one organism to another. In non-mating species, this helps with genetic differentiation and adaptive evolution. An example of this is the sponge Astrosclera willeyana, which has a gene of bacterial origin that is used in expressing spherulite-forming cells. Another example is the starlet sea anemone, Nematostella vectensis, which has genes from bacteria that have a role in producing UV radiation protection in the form of shikimic acid. Another way for symbiotic relationships to co-evolve is through genome erosion. This is a process in which genes that are typically used during free-living periods are no longer necessary because of the symbiosis. Without those genes, the organism is able to decrease the energy necessary for cell maintenance and replication.
Types of symbiotic relationships
There are a variety of symbiotic relationships:
Mutualism is a relationship in which both partners benefit.
Commensalism is a relationship where one partner receives a benefit while the other is not affected.
Parasitism is where one partner benefits at the expense of the host.
Amensalism is a less common type of relationship in which one organism receives no benefit while the other is negatively affected.
The symbiont can be either an ectosymbiont, which survives attached to the surface of the host (including inner surfaces such as the gut cavity or the ducts of exocrine glands), or an endosymbiont, which lives within its host and can be known as an intracellular symbiont.
Symbionts are further classified by their dependence on the host: a facultative symbiont can exist in a free-living condition and is not dependent on its host, while an obligate symbiont has adapted in such a way that it cannot exist without the benefit it receives from its host. An example of an obligate symbiosis is the relationship between microalgae and corals, in which the microalgae provide a large part of the coral's diet.
Some symbiotic relationships
Coral reef symbiosis
The most notable display of marine symbiotic relationships is coral. Coral reefs are home to a variety of dinoflagellate symbionts; these symbionts give coral its bright coloring and are vital for the survival of the reef. The symbionts provide the coral with food in exchange for protection. If the waters warm or become too acidic, the symbionts are expelled, the coral bleaches, and if conditions persist the coral will die. This in turn leads to the collapse of the entire reef ecosystem.
Bone eating worm symbiosis
Osedax, also called the bone-eating worm, is a genus of siboglinid polychaete worms. It was discovered in 2002 in a whale-fall community, on the surface of bones in the axis of Monterey Canyon, California. Osedax lacks a mouth, a functional gut, and a trophosome, but female Osedax have a vascularized root system originating from their ovisac which contains a heterotrophic endosymbiotic bacterial community dominated by the γ-proteobacteria clade. They use the vascularized root system to access the whale bones, and the endosymbionts help the host utilize nutrients from the bones.
Hawaiian squid and Vibrio fischeri symbiosis
The Hawaiian sepiolid squid Euprymna scolopes and the bacterium Vibrio fischeri also show symbiosis. In this symbiosis, the symbiont not only serves the host in defense but also shapes the host's morphology. Bioluminescent V. fischeri can be found in the epithelium-lined crypts of the light organ of the host. The symbiosis begins as soon as a newly hatched squid finds and houses V. fischeri bacteria.
The symbiosis process begins when peptidoglycan shed by seawater bacteria comes into contact with the ciliated epithelial cells of the light organ. It induces mucus production in the cells, and the mucus entraps bacterial cells. Antimicrobial peptides, nitric oxide, and sialylated mucins in the mucus then selectively allow only V. fischeri strains encoding the gene rscS to adhere and outcompete gram-positive and other gram-negative bacteria. The symbiotic bacteria are then guided up to the light organ via chemotaxis. After successful colonization, the symbionts induce the loss of the mucus and of the ciliated sites to prevent further attachment of bacterial cells, via MAMP (microbe-associated molecular pattern) signalling. They also induce changes in protein expression in the host's symbiotic tissues and modify both the physiology and the morphology of the light organ. After the bacterial cells divide and increase in population, they begin expressing the enzyme luciferase as a result of quorum sensing. Luciferase produces bioluminescence, which the squid can then emit from the light organ. Because Euprymna scolopes emerges only at night, this helps it avoid predation: the bioluminescence allows the squid to blend in with the moonlight and starlight entering the ocean and so avoid predators.
Pompeii worm
Alvinella pompejana, the Pompeii worm, is a polychaete found in the far depths of the sea, typically near hydrothermal vents. It was originally discovered by French researchers in the early 1980s. These worms can grow as large as 5 inches long and are normally described as having pale gray coloring with red "tentacle-like" gills protruding from their heads. Their tails most likely sit in temperatures as high as 176 degrees Fahrenheit, while their heads, which stick out from the tubes they live in, are only exposed to temperatures as high as 72 degrees Fahrenheit. The worm's ability to survive the temperatures of hydrothermal vents lies in its symbiotic relationship with the bacteria that reside on its back, where they form a "fleece-like" protective covering. Mucus is secreted from glands on the back of the Pompeii worm in order to provide nutrients for the bacteria. Further study of the bacteria led to the discovery that they are chemolithotrophic.
Hawaiian sea slug
Elysia rufescens grazes on Bryopsis sp., an alga that defends itself from predators by using peptide toxins with fatty acids, called kahalalides. A bacterial obligate symbiont produces many defensive molecules, including kahalalides, in order to protect the alga; the bacterium is able to use substrates derived from the host to synthesize the toxins. The Hawaiian sea slug grazes on the alga and accumulates kahalalide. This uptake of the toxin, to which the slug is immune, allows it to become toxic to predators as well. This shared ability, originating in both cases from the bacteria, provides protection within the marine ecosystem.
Marine sponges
Besides one-to-one symbiotic relationships, it is possible for a host to become symbiotic with a microbial consortium. In the case of the sponges (phylum Porifera), they are able to host a wide range of microbial communities that can also be very specific. The microbial communities that form a symbiotic relationship with a sponge can comprise up to 35% of the biomass of the host. The term for this specific symbiotic relationship, where a microbial consortium pairs with a host, is a holobiotic relationship. The sponge as well as its associated microbial community produce a large range of secondary metabolites that help protect the sponge against predators through mechanisms such as chemical defense. Some of these relationships include endosymbionts within bacteriocyte cells, and cyanobacteria or microalgae found below the pinacoderm cell layer, where they are able to receive the highest amount of light, used for phototrophy. Sponges can host approximately 52 different microbial phyla and candidate phyla, including Alphaproteobacteria, Gammaproteobacteria, Actinobacteria, Chloroflexi, Nitrospirae, Cyanobacteria, the candidate phylum Poribacteria, and Thaumarchaea.
Endozoicomonas
This type of bacterium was first described in 2007. It is able to form symbiotic relationships with a wide range of hosts in the marine environment, such as cnidarians, poriferans, molluscs, annelids, tunicates, and fish. Endozoicomonas are distributed through various marine zones, from extreme depths to warm photic zones. Endozoicomonas is thought to acquire nutrients through nitrogen/carbon recycling, methane/sulfur recycling, and the synthesis of amino acids and various other molecules necessary for life. It has also been found to be correlated with photosymbionts, which provide carbon and sulfur to the bacteria from dimethylsulfoniopropionate (DMSP). Endozoicomonas is also suspected to help regulate bacterial colonization of the host by using bioactive secondary metabolites or probiotic mechanisms, such as limiting pathogenic bacteria by means of competitive exclusion. When Endozoicomonas is removed from the host, there are often signs of lesions and disease in corals.
Chemosynthetic symbioses in ocean
The marine environment contains a large number of chemosynthetic symbioses in different regions of the ocean: shallow-water coastal sediments, continental slope sediments, whale and wood falls, cold seeps, and deep-sea hydrothermal vents. Organisms from seven phyla (Ciliophora, Porifera, Platyhelminthes, Nematoda, Mollusca, Annelida, and Arthropoda) are so far known to have chemosynthetic symbioses. These include nematodes, tube worms, clams, sponges, hydrothermal vent shrimp, molluscs, mussels, and others. The symbionts can be ectosymbionts or endosymbionts. Some ectosymbionts are the symbionts of the polychaete worm Alvinella, which occur on its dorsal surface, and the symbionts occurring on the mouthparts and in the gill chamber of the vent shrimp Rimicaris. Endosymbionts include the symbionts of gastropod snails, which occur in their gill tissues. In the siboglinid tube worms of the groups Monilifera, Frenulata, and Vestimentifera, symbionts are found in an interior organ called the trophosome.
Most of the animals in deep-sea hydrothermal vents exist in a symbiotic relationship with chemosynthetic bacteria. These chemosynthetic bacteria are found to be methane or sulphur oxidizers.
Microbial biotechnology
Marine invertebrates are the hosts of a wide spectrum of bioactive metabolites, which have vast potential as drugs and research tools. In many cases, microbes aid in or are responsible for the production of marine invertebrates' natural products. Certain marine microbes can provide insight into the biosynthesis mechanisms of natural products, which in turn could help overcome the current limitations on marine drug development.
References
microbial symbiosis
Marine symbiosis, microbial | Marine microbial symbiosis | Biology | 2,811 |
11,976,532 | https://en.wikipedia.org/wiki/TOP500 | The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world. The project was started in 1993 and publishes an updated list of the supercomputers twice a year. The first of these updates always coincides with the International Supercomputing Conference in June, and the second is presented at the ACM/IEEE Supercomputing Conference in November. The project aims to provide a reliable basis for tracking and detecting trends in high-performance computing and bases rankings on HPL benchmarks, a portable implementation of the high-performance LINPACK benchmark written in Fortran for distributed-memory computers.
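As a rough, single-node illustration of what a LINPACK-style measurement involves (this sketch is not the actual HPL code; the problem size and the leading-order operation count are simplifying assumptions), one can time a dense solve with NumPy and convert it to a FLOP/s estimate:

```python
import time
import numpy as np

n = 2000  # toy problem size; real HPL runs use matrices that fill most of a machine's memory
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

t0 = time.perf_counter()
x = np.linalg.solve(A, b)           # LU factorization plus triangular solves
elapsed = time.perf_counter() - t0

flops = (2.0 / 3.0) * n ** 3 / elapsed   # leading-order operation count of LU
print(f"~{flops / 1e9:.1f} GFLOP/s on this toy problem")
```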
The most recent edition of TOP500 was published in November 2024 as the 64th edition of TOP500, while the next edition of TOP500 will be published in June 2025 as the 65th edition of TOP500. As of November 2024, the United States' El Capitan is the most powerful supercomputer in the TOP500, reaching 1742 petaFlops (1.742 exaFlops) on the LINPACK benchmarks. As of 2018, the United States has by far the highest share of total computing power on the list (nearly 50%). As of 2024, the United States has the highest number of systems with 173 supercomputers; China is in second place with 63, and Germany is third at 40.
The 59th edition of TOP500, published in June 2022, was the first edition of TOP500 to feature only 64-bit supercomputers; as of June 2022, 32-bit supercomputers are no longer listed. The TOP500 list is compiled by Jack Dongarra of the University of Tennessee, Knoxville, Erich Strohmaier and Horst Simon of the National Energy Research Scientific Computing Center (NERSC) and Lawrence Berkeley National Laboratory (LBNL), and, until his death in 2014, Hans Meuer of the University of Mannheim, Germany. The TOP500 project also includes lists such as Green500 (measuring energy efficiency) and HPCG (measuring I/O bandwidth).
History
In the early 1990s, a new definition of supercomputer was needed to produce meaningful statistics. After experimenting with metrics based on processor count in 1992, the idea arose at the University of Mannheim to use a detailed listing of installed systems as the basis. In early 1993, Jack Dongarra was persuaded to join the project with his LINPACK benchmarks. A first test version was produced in May 1993, partly based on data available on the Internet, including the following sources:
"List of the World's Most Powerful Computing Sites" maintained by Gunter Ahrendt
David Kahaner, the director of the Asian Technology Information Program (ATIP), published a report in 1992, titled "Kahaner Report on Supercomputer in Japan", which had an immense amount of data.
The information from those sources was used for the first two lists. Since June 1993, the TOP500 has been produced bi-annually based on site and vendor submissions only. Since 1993, performance of the top-ranked position has grown steadily in accordance with Moore's law, doubling roughly every 14 months. In June 2018, Summit was fastest with an Rpeak of 187.6593 PFLOPS. For comparison, this is over 1,432,513 times faster than the Connection Machine CM-5/1024 (1,024 cores), which was the fastest system in November 1993 (twenty-five years prior) with an Rpeak of 131.0 GFLOPS.
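The comparison quoted above can be reproduced directly from the two Rpeak figures already given in the text (a trivial check, for illustration only):

```python
summit_rpeak = 187.6593e15  # FLOP/s (187.6593 PFLOPS, June 2018)
cm5_rpeak = 131.0e9         # FLOP/s (131.0 GFLOPS, November 1993)
print(summit_rpeak / cm5_rpeak)  # ~1.43 million, i.e. over 1,432,513 times faster

# Doubling every ~14 months over 25 years gives 2 ** (300 / 14) ~ 2.8 million,
# the same order of magnitude as the measured ratio.
```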
Architecture and operating systems
As of the most recent list, all supercomputers on the TOP500 are 64-bit supercomputers, mostly based on CPUs with the x86-64 instruction set architecture: 384 are Intel EM64T-based and 101 are AMD AMD64-based, with the latter including the top eight supercomputers. The 15 other supercomputers are all based on RISC architectures, including six based on ARM64 and seven based on the Power ISA used by IBM Power microprocessors.
In recent years, heterogeneous computing has dominated the TOP500, mostly using Nvidia's graphics processing units (GPUs) or Intel's x86-based Xeon Phi as coprocessors. This is because of better performance-per-watt ratios and higher absolute performance. AMD GPUs have taken the top spot and displaced Nvidia in part of the top 10. The recent exceptions include the aforementioned Fugaku, Sunway TaihuLight, and K computer. Tianhe-2A is also an interesting exception, as US sanctions prevented the use of Xeon Phi; instead, it was upgraded to use the Chinese-designed Matrix-2000 accelerators.
Two computers which first appeared on the list in 2018 were based on architectures new to the TOP500. One was a new x86-64 microarchitecture from Chinese manufacturer Sugon, using Hygon Dhyana CPUs (these resulted from a collaboration with AMD, and are a minor variant of Zen-based AMD EPYC) and was ranked 38th, now 117th, and the other was the first ARM-based computer on the list using Cavium ThunderX2 CPUs. Before the ascendancy of 32-bit x86 and later 64-bit x86-64 in the early 2000s, a variety of RISC processor families made up most TOP500 supercomputers, including SPARC, MIPS, PA-RISC, and Alpha.
All the fastest supercomputers since the Earth Simulator supercomputer have used operating systems based on Linux. Currently, all the listed supercomputers use an operating system based on the Linux kernel.
Since November 2015, no computer on the list runs Windows (while Microsoft reappeared on the list in 2021 with Ubuntu based on Linux). In November 2014, Windows Azure cloud computer was no longer on the list of fastest supercomputers (its best rank was 165th in 2012), leaving the Shanghai Supercomputer Center's Magic Cube as the only Windows-based supercomputer on the list, until it also dropped off the list. It was ranked 436th in its last appearance on the list released in June 2015, while its best rank was 11th in 2008. There are no longer any Mac OS computers on the list. It had at most five such systems at a time, one more than the Windows systems that came later, while the total performance share for Windows was higher. Their relative performance share of the whole list was however similar, and never high for either. In 2004, the System X supercomputer based on Mac OS X (Xserve, with 2,200 PowerPC 970 processors) once ranked 7th place.
It has been well over a decade since MIPS systems dropped entirely off the list, though the Gyoukou supercomputer that jumped to 4th place in November 2017 had a MIPS-based design as a small part of its coprocessors. The use of 2,048-core coprocessors (plus eight 6-core MIPS processors for each, so that they "no longer require to rely on an external Intel Xeon E5 host processor") made the supercomputer much more energy efficient than the rest of the top 10 (it was 5th on the Green500, and other such ZettaScaler-2.2-based systems took the first three spots). At 19.86 million cores, it was by far the largest system by core count, with almost double that of the then-best manycore system, the Chinese Sunway TaihuLight.
TOP500
As of November 2024, the number one supercomputer is El Capitan, and the leader on the Green500 is JEDI, a Bull Sequana XH3000 system using the Nvidia Grace Hopper GH200 Superchip. In June 2022, the top 4 systems of the Graph500 used both AMD CPUs and AMD accelerators. After an upgrade for the 56th TOP500 in November 2020,
Summit, previously the fastest supercomputer, is currently the highest-ranked IBM-made supercomputer, with IBM POWER9 CPUs. Sequoia became the last IBM Blue Gene/Q model to drop completely off the list; it had been ranked 10th on the 52nd list (and 1st on the June 2012, 41st list, after an upgrade).
Microsoft is back on the TOP500 list with six Microsoft Azure instances (benchmarked with Ubuntu, so all the listed supercomputers are still Linux-based), with CPUs and GPUs from the same vendors as other systems; the fastest one is currently ranked 11th, and an older, slower one previously reached 10th. Amazon appears with one AWS instance, currently ranked 64th (it was previously ranked 40th). The number of Arm-based supercomputers is six; currently, all Arm-based supercomputers use the same Fujitsu CPU as in the number 2 system, with the next-highest one previously ranked 13th, now 25th.
Legend:
Rank: Position within the TOP500 ranking. In the TOP500 list table, the computers are ordered first by their Rmax value. In the case of equal performances (Rmax value) for different computers, the order is by Rpeak. For sites that have the same computer, the order is by memory size and then alphabetically (a minimal sorting sketch follows this legend).
Rmax: The highest score measured using the LINPACK benchmarks suite. This is the number that is used to rank the computers. Measured in quadrillions of 64-bit floating point operations per second, i.e., petaFLOPS.
Rpeak: The theoretical peak performance of the system. Computed in petaFLOPS.
Name: Some supercomputers are unique, at least in their location, and are thus named by their owner.
Model: The computing platform as it is marketed.
Processor: The instruction set architecture or processor microarchitecture, alongside GPU and accelerators when available.
Interconnect: The interconnect between computing nodes. InfiniBand is most used (38%) by performance share, while Gigabit Ethernet is most used (54%) by number of computers.
Manufacturer: The manufacturer of the platform and hardware.
Site: The name of the facility operating the supercomputer.
Country: The country in which the computer is located.
Year: The year of installation or last major update.
Operating system: The operating system that the computer uses.
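A minimal sketch of that ordering rule is shown below (the records and field names are hypothetical and do not reflect the actual TOP500 data schema; the sort only approximates the tie-breaking described in the legend):

```python
# Hypothetical entries; rmax/rpeak in petaFLOPS, memory in arbitrary units.
systems = [
    {"name": "System B", "rmax": 1206.0, "rpeak": 1714.8, "memory": 10.6},
    {"name": "System A", "rmax": 1206.0, "rpeak": 1714.8, "memory": 10.6},
    {"name": "System C", "rmax": 442.0,  "rpeak": 537.2,  "memory": 4.9},
]

# Order by Rmax, then Rpeak, then memory size (all descending), then name.
ranked = sorted(systems, key=lambda s: (-s["rmax"], -s["rpeak"], -s["memory"], s["name"]))
for rank, s in enumerate(ranked, start=1):
    print(rank, s["name"])
```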
Other rankings
Top countries
Numbers below represent the number of computers in the TOP500 that are in each of the listed countries or territories. As of 2024, United States has the most supercomputers on the list, with 173 machines. The United States has the highest aggregate computational power at 6,324 Petaflops Rmax with Japan second (919 Pflop/s) and Germany third (396 Pflop/s).
Fastest supercomputer in TOP500 by country
(As of November 2023)
Systems ranked
HPE Cray El Capitan (Lawrence Livermore National Laboratory, November 2024 – present)
HPE Cray Frontier (Oak Ridge National Laboratory, June 2022 – November 2024)
Supercomputer Fugaku (Riken Center for Computational Science, June 2020 – June 2022)
IBM Summit (Oak Ridge National Laboratory, June 2018 – June 2020)
NRCPC Sunway TaihuLight (National Supercomputing Center in Wuxi, June 2016 – November 2017)
NUDT Tianhe-2A (National Supercomputing Center of Guangzhou, June 2013 – June 2016)
Cray Titan (Oak Ridge National Laboratory, November 2012 – June 2013)
IBM Sequoia Blue Gene/Q (Lawrence Livermore National Laboratory, June 2012 – November 2012)
Fujitsu K computer (Riken Advanced Institute for Computational Science, June 2011 – June 2012)
NUDT Tianhe-1A (National Supercomputing Center of Tianjin, November 2010 – June 2011)
Cray Jaguar (Oak Ridge National Laboratory, November 2009 – November 2010)
IBM Roadrunner (Los Alamos National Laboratory, June 2008 – November 2009)
IBM Blue Gene/L (Lawrence Livermore National Laboratory, November 2004 – June 2008)
NEC Earth Simulator (Earth Simulator Center, June 2002 – November 2004)
IBM ASCI White (Lawrence Livermore National Laboratory, November 2000 – June 2002)
Intel ASCI Red (Sandia National Laboratories, June 1997 – November 2000)
Hitachi CP-PACS (University of Tsukuba, November 1996 – June 1997)
Hitachi SR2201 (University of Tokyo, June 1996 – November 1996)
Fujitsu Numerical Wind Tunnel (National Aerospace Laboratory of Japan, November 1994 – June 1996)
Intel Paragon XP/S140 (Sandia National Laboratories, June 1994 – November 1994)
Fujitsu Numerical Wind Tunnel (National Aerospace Laboratory of Japan, November 1993 – June 1994)
TMC CM-5 (Los Alamos National Laboratory, June 1993 – November 1993)
Additional statistics
By number of systems:
Note: All operating systems of the TOP500 systems are Linux-family based, but Linux above is generic Linux.
Sunway TaihuLight is the system with the most CPU cores (10,649,600). Tianhe-2 has the most GPU/accelerator cores (4,554,752). Aurora is the system with the greatest power consumption with 38,698 kilowatts.
New developments in supercomputing
In November 2014, it was announced that the United States was developing two new supercomputers to exceed China's Tianhe-2 in its place as the world's fastest supercomputer. The two computers, Sierra and Summit, will each exceed Tianhe-2's 55 peak petaflops. Summit, the more powerful of the two, will deliver 150–300 peak petaflops. On 10 April 2015, US government agencies banned the sale of chips from Nvidia to supercomputing centers in China as "acting contrary to the national security ... interests of the United States", and banned Intel Corporation from providing Xeon chips to China due to their use, according to the US, in researching nuclear weapons, research to which US export control law bans US companies from contributing: "The Department of Commerce refused, saying it was concerned about nuclear research being done with the machine."
On 29 July 2015, President Obama signed an executive order creating a National Strategic Computing Initiative calling for the accelerated development of an exascale (1000 petaflop) system and funding research into post-semiconductor computing.
In June 2016, Japanese firm Fujitsu announced at the International Supercomputing Conference that its future exascale supercomputer will feature processors of its own design that implement the ARMv8 architecture. The Flagship2020 program, by Fujitsu for RIKEN, plans to break the exaflops barrier by 2020 through the Fugaku supercomputer (and "it looks like China and France have a chance to do so and that the United States is content – for the moment at least – to wait until 2023 to break through the exaflops barrier."). These processors will also implement extensions to the ARMv8 architecture equivalent to HPC-ACE2 that Fujitsu is developing with Arm.
In June 2016, Sunway TaihuLight became the No. 1 system with 93 petaflop/s (PFLOP/s) on the Linpack benchmark.
In November 2016, Piz Daint was upgraded, moving it from 8th to 3rd, leaving the US with no systems under the TOP3 for the 2nd time.
Inspur, based out of Jinan, China, is one of the largest HPC system manufacturers. Inspur became the third manufacturer to have manufactured a 64-way system, a record that had previously been held by IBM and HP. The company has registered over $10B in revenue and has provided a number of systems to countries such as Sudan, Zimbabwe, Saudi Arabia, and Venezuela. Inspur was also a major technology partner behind both the Tianhe-2 and Taihu supercomputers, which occupied the top 2 positions of the TOP500 list up until November 2017. Inspur and Supermicro released a few platforms aimed at HPC using GPUs, such as SR-AI and AGX-2, in May 2017.
In June 2018, Summit, an IBM-built system at the Oak Ridge National Laboratory (ORNL) in Tennessee, US, took the No. 1 spot with a performance of 122.3 petaflop/s (PFLOP/s), and Sierra, a very similar system at the Lawrence Livermore National Laboratory, CA, US took #3. These systems also took the first two spots on the HPCG benchmark. Due to Summit and Sierra, the US took back the lead as consumer of HPC performance with 38.2% of the overall installed performance while China was second with 29.1% of the overall installed performance. For the first time ever, the leading HPC manufacturer was not a US company. Lenovo took the lead with 23.8% of systems installed. It is followed by HPE with 15.8%, Inspur with 13.6%, Cray with 11.2%, and Sugon with 11%.
On 18 March 2019, the United States Department of Energy and Intel announced the first exaFLOP supercomputer would be operational at Argonne National Laboratory by the end of 2021. The computer, named Aurora, was delivered to Argonne by Intel and Cray.
On 7 May 2019, The U.S. Department of Energy announced a contract with Cray to build the "Frontier" supercomputer at Oak Ridge National Laboratory. Frontier is anticipated to be operational in 2021 and, with a performance of greater than 1.5 exaflops, should then be the world's most powerful computer.
Since June 2019, all TOP500 systems deliver a petaflop or more on the High Performance Linpack (HPL) benchmark, with the entry level to the list now at 1.022 petaflops.
In May 2022, the Frontier supercomputer broke the exascale barrier, completing more than a quintillion 64-bit floating point arithmetic calculations per second. Frontier clocked in at approximately 1.1 exaflops, beating out the previous record-holder, Fugaku.
Large machines not on the list
Some major systems are not on the list. A prominent example is the NCSA's Blue Waters, whose operators publicly announced the decision not to participate in the list because they do not feel it accurately indicates the ability of any system to do useful work. Other organizations decide not to list systems for security and/or commercial competitiveness reasons. One such example is the National Supercomputing Center at Qingdao's OceanLight supercomputer, completed in March 2021, which was submitted for, and won, the Gordon Bell Prize. The computer is an exaflop computer, but was not submitted to the TOP500 list; the first exaflop machine submitted to the TOP500 list was Frontier. Analysts suspected that the reason the NSCQ did not submit what would otherwise have been the world's first exascale supercomputer was to avoid inflaming political sentiments and fears within the United States, in the context of the United States – China trade war. Additional purpose-built machines that are not capable of running the benchmark, or that do not run it, were not included, such as the RIKEN MDGRAPE-3 and MDGRAPE-4.
A Google Tensor Processing Unit v4 pod is capable of 1.1 exaflops of peak performance, while TPU v5p claims over 4 exaflops in the bfloat16 floating-point format; however, these units are highly specialized to run machine learning workloads, and the TOP500 measures a specific benchmark algorithm using a specific numeric precision.
In March 2024, Meta AI disclosed the operation of two data centers, each with 24,576 H100 GPUs, almost twice as many as in the Microsoft Azure Eagle (#3 as of September 2024), which could have placed them 3rd and 4th in the TOP500, but neither has been benchmarked. During the company's Q3 2024 earnings call in October, M. Zuckerberg disclosed the use of a cluster with over 100,000 H100s.
The xAI Memphis Supercluster (also known as "Colossus") allegedly features 100,000 of the same H100 GPUs, which could have put it in first place, but it is reportedly not in full operation due to power shortages.
Computers and architectures that have dropped off the list
IBM Roadrunner is no longer on the list (nor is any other using the Cell coprocessor, or PowerXCell).
Although Itanium-based systems reached second rank in 2004, none now remain.
Similarly (non-SIMD-style) vector processors (NEC-based such as the Earth simulator that was fastest in 2002) have also fallen off the list. Also the Sun Starfire computers that occupied many spots in the past now no longer appear.
The last non-Linux computers on the list, the two AIX ones running on POWER7 (in July 2017 ranked 494th and 495th, originally 86th and 85th), dropped off the list in November 2017.
Notes
The first edition of TOP500 to feature only 64-bit supercomputers was the 59th edition of TOP500, which was published in June 2022.
As of June 2022, TOP500 features only 64-bit supercomputers.
The world’s most powerful supercomputers are from the United States and Japan.
See also
Computer science
Computing
Graph500
Green500
HPC Challenge Benchmark
Instructions per second
LINPACK benchmarks
List of fastest computers
References
External links
LINPACK benchmarks at TOP500
Supercomputer benchmarks
Supercomputer sites
Top lists | TOP500 | Technology | 4,572 |
41,720,459 | https://en.wikipedia.org/wiki/Tesla%20STEM%20High%20School | Tesla STEM High School (officially Nikola Tesla Science, Technology, Engineering & Math High School, formerly STEM High School) is a magnet high school in Redmond, Washington operated by the Lake Washington School District. It serves as a lottery-selected choice program and offers a STEM-based curriculum.
History and facilities
In February 2011, facing substantial sustained and projected future enrollment growth, the Lake Washington School District issued a levy measure to raise $65,400,000 in property taxes from King County residents to fund the construction of expanded facilities at Redmond High School and Eastlake High School as well as the construction of the new STEM High School. The ballot measure passed, and preparations on these population expansion projects began immediately. In December 2011, the Absher Construction Company won the lowest bid for the STEM High School project at $24,080,000 and began construction work in February 2012.
The school's faculty and programs began accepting ninth and tenth grade students in September 2012 for the 2012-2013 school year, but the students were located in the facilities of Eastlake High School until the new dedicated STEM High School was completed in January 2013. The school began admitting eleventh and twelfth grade students in September 2013 and September 2014, respectively.
In 2014, STEM High School formally changed its name to Nikola Tesla Science, Technology, Engineering & Math High School, shortened to Tesla STEM High School.
The two-story school occupies a 21-acre campus. Modular building techniques were used to construct the school due to permitting and time restrictions. The majority of the building was fabricated offsite, with four sections, including the common area, built on site.
Students are admitted from across the district on a lottery basis with 150 students per grade for a total enrollment of approximately 600 students.
Awards
In 2017, two Tesla STEM students were awarded the President’s Environmental Youth Award from the United States Environmental Protection Agency.
In 2021, students 3D printed thousands of masks for local hospitals.
In 2021 and 2022, Tesla STEM won the Washington Sea Grant's Orca bowl, a marine science competition.
In 2022, U.S. News & World Report ranked Tesla STEM High School first in its annual "Best Washington High Schools" list and twelfth in its "Best U.S. High Schools" list.
In 2023, four students won awards in the Regeneron Pharmaceuticals International Science and Engineering Fair, and in 2024, two students and a teacher were honored with Regeneron Science Talent Search awards. Tesla STEM also won first place in the regional National Science Bowl.
In 2024, the school was rated the third best high school in the nation.
Academics
The school's course offerings and overall academic approach are focused on emphasizing the four STEM fields: science, technology, engineering, and mathematics. Students are required to take courses in science and math, as well as engineering and technology via indirect integration in other courses, through twelfth grade. Key tenets of the school's curriculum include leveraging problem-based learning, a professional learning community, integrated curricula, scientific inquiry, and constructivist learning.
In principle, the school's curriculum is designed such that ninth and tenth grade focus on foundation: building skills such as understanding and applying the engineering design process, collaboratively working in a Problem-Based Learning (PBL) environment, as well as other critical thinking aspects. Eleventh and twelfth grade students focus on application: selecting a concentration of study and conducting independent research.
In the 2014-2015 academic year, the school began four “Signature Programs," open to students from all comprehensive high schools in the Lake Washington School District: Eastlake High School, Juanita High School, Lake Washington High School, and Redmond High School. Eleventh graders are given the choice between selecting two signature labs: Environmental Engineering and Sustainable Design, or AP Psychology and Forensics. Twelfth graders may either take the Advanced Physics Lab (AP Physics C: Electricity and Magnetism and AP Physics C: Mechanics) or Biomedical Engineering alongside Anatomy and Physiology.
In 2023, the school had 100% participation in AP coursework.
Athletics and clubs
Athletics and sports programs are not offered at the school. Students who wish to participate in such programs must do so at one of the four aforementioned comprehensive high schools whose boundaries within which they reside.
The school has nearly 40 clubs. The TED-Ed club hosted TEDx independent events from 2016-2019.
Courses and pathways
The course offerings described below follow Tesla STEM's course catalog for the 2021-2022 school year.
A total of 12 AP courses are offered, including AP Computer Science Principles (9), AP Biology (10, optional), AP Environmental Science (10), AP Language and Composition (11), AP Physics C: Electricity and Magnetism and AP Physics C: Mechanics (12, optional senior lab), AP Calculus AB (varies, 11th most common), AP Calculus BC (varies and optional, 12th most common), AP Computer Science A (elective), AP Chemistry (elective), and AP Statistics (elective).
Typical learning pathways at Tesla STEM include the computer science pathway (AP CS Principles (9), AP CSA (10), Data Structures (11), Advanced Projects in Java (12)), the engineering pathway (Engineering 1, Engineering 2, Engineering 3), and the life sciences pathway (AP Biology (10), AP Psychology (11), Biomedical Engineering and Anatomy/Physiology (12)).
Four signature labs are offered at STEM, with students being given the option to choose two of them. In 11th grade, students have the choice between Environmental Engineering / Sustainable Design and AP Psychology / Forensics, whereas in 12th grade students choose between the Advanced Physics Lab and Biomedical Engineering / Anatomy and Physiology.
The school is partnered with nearby businesses to offer junior-year internships.
References
External links
LWSD School Web Page
OSPI School report card for "Unnamed STEM School Under Construction" 2013
Grand Challenges for Engineering, The National Academy of Engineering
M SPACE Holdings
Public high schools in Washington (state)
High schools in King County, Washington
Schools in Redmond, Washington
2012 establishments in Washington (state)
Prefabricated buildings | Tesla STEM High School | Engineering | 1,245 |
364,328 | https://en.wikipedia.org/wiki/Big%20Bounce | The Big Bounce hypothesis is a cosmological model for the origin of the known universe. It was originally suggested as a phase of the cyclic model or oscillatory universe interpretation of the Big Bang, where the first cosmological event was the result of the collapse of a previous universe. It receded from serious consideration in the early 1980s after inflation theory emerged as a solution to the horizon problem, which had arisen from advances in observations revealing the large-scale structure of the universe.
Inflation was found to be inevitably eternal, creating an infinity of different universes with typically different properties, suggesting that the properties of the observable universe are a matter of chance. An alternative concept that included a Big Bounce was conceived as a predictive and falsifiable possible solution to the horizon problem. Investigation continued as of 2022.
Expansion and contraction
The concept of the Big Bounce envisions the Big Bang as the beginning of a period of expansion that followed a period of contraction. In this view, one could talk of a "Big Crunch" followed by a "Big Bang" or, more simply, a "Big Bounce". This concept suggests that we could exist at any point in an infinite sequence of universes, or conversely, the current universe could be the very first iteration. However, if the condition of the interval phase "between bounces"—considered the "hypothesis of the primeval atom"—is taken into full contingency, such enumeration may be meaningless because that condition could represent a singularity in time at each instance if such perpetual repeats (cycles) were absolute and undifferentiated.
The main idea behind the quantum theory of a Big Bounce is that, as density approaches infinity, the behavior of quantum foam changes. All the so-called fundamental physical constants, including the speed of light in vacuum, need not remain constant during a Big Crunch, especially in time intervals shorter than the smallest measurable one (one unit of Planck time, roughly 10⁻⁴³ seconds) spanning or bracketing the point of inflection.
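For orientation, the Planck time quoted above follows from the fundamental constants as t_P = sqrt(ħG/c⁵); the sketch below (illustrative only, with constant values assumed here rather than taken from the article) evaluates it:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s (assumed value)
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2 (assumed value)
c = 2.99792458e8        # speed of light in vacuum, m/s

t_planck = math.sqrt(hbar * G / c ** 5)
print(t_planck)  # ~5.4e-44 s, of order 10^-43 s as quoted above
```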
History
Big Bounce models were endorsed on largely aesthetic grounds by cosmologists including Willem de Sitter, Carl Friedrich von Weizsäcker, George McVittie, and George Gamow (who stressed that "from the physical point of view we must forget entirely about the precollapse period").
By the early 1980s, the advancing precision and scope of observational cosmology had revealed that the large-scale structure of the universe is flat, homogeneous, and isotropic, a finding later accepted as the cosmological principle to apply at scales beyond roughly 300 million light-years. This led cosmologists to seek an explanation to the horizon problem, which questioned how distant regions of the universe could have identical properties without ever being in light-like communication. A solution was proposed to be a period of exponential expansion of space in the early universe, which formed the basis of what became known as inflation theory. Following the brief inflationary period, the universe continues to expand at a slower rate.
Various formulations of inflation theory and their detailed implications became the subject of intense theoretical study. Without a compelling alternative, inflation became the leading solution to the horizon problem.
The phrase "Big Bounce" appeared in scientific literature in 1987, when it was first used in the title of a pair of articles (in German) in Stern und Weltraum by Wolfgang Priester and Hans-Joachim Blome. It reappeared in 1988 in Iosif Rozental's Big Bang, Big Bounce, a revised English-language translation of a Russian-language book (by a different title), and in a 1991 English-language article by Priester and Blome in Astronomy and Astrophysics. The phrase originated as the title of a novel by Elmore Leonard in 1969, shortly after increased public awareness of the Big Bang model with of the discovery of the cosmic microwave background by Penzias and Wilson in 1965.
The idea of the existence of a big bounce in the very early universe has found diverse support in works based on loop quantum gravity. In loop quantum cosmology, a branch of loop quantum gravity, the big bounce was first discovered in February 2006 for isotropic and homogeneous models by Abhay Ashtekar, Tomasz Pawlowski, and Parampreet Singh at Pennsylvania State University. This result has been generalized to various other models by different groups, and includes the case of spatial curvature, cosmological constant, anisotropies, and Fock quantized inhomogeneities.
Martin Bojowald, an assistant professor of physics at Pennsylvania State University, published a study in July 2007 detailing work related to loop quantum gravity that claimed to mathematically solve the time before the Big Bang, which would give new weight to the oscillatory universe and Big Bounce theories.
One of the main problems with the Big Bang theory is that there is a singularity of zero volume and infinite energy at the moment of the Big Bang. This is normally interpreted as a breakdown of physics as we know it; in this case, of the theory of general relativity. This is why one expects quantum effects to become important and avoid a singularity.
However, research in loop quantum cosmology purported to show that a previously existing universe collapses not to a singularity, but to a point where the quantum effects of gravity become so strongly repulsive that the universe rebounds back out, forming a new branch. Throughout this collapse and bounce, the evolution is unitary.
Bojowald also claimed that some properties of the universe that collapsed to form ours can be determined; however, other properties are not determinable due to some uncertainty principle. This result has been disputed by different groups, which show that due to restrictions on fluctuations stemming from the uncertainty principle, there are strong constraints on the change in relative fluctuations across the bounce.
While the existence of the Big Bounce has still to be demonstrated from loop quantum gravity, the robustness of its main features has been confirmed using exact results and several studies involving numerical simulations using high performance computing in loop quantum cosmology.
In 2006, it was proposed that the application of loop quantum gravity techniques to Big Bang cosmology can lead to a bounce that need not be cyclic.
In 2010, Roger Penrose advanced a general relativity-based theory which he called the "conformal cyclic cosmology". The theory explains that the universe will expand until all matter decays and ultimately turns to light. Since nothing in the universe would have any time or distance scale associated with it, the universe becomes identical with the Big Bang, resulting in a type of Big Crunch that becomes the next Big Bang, thus perpetuating the next cycle.
In 2011, Nikodem Popławski showed that a nonsingular Big Bounce appears naturally in the Einstein–Cartan–Sciama–Kibble theory of gravity. This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. The minimal coupling between torsion and Dirac spinors generates a spin-spin interaction which is significant in fermionic matter at extremely high densities. Such an interaction avoids the unphysical Big Bang singularity, replacing it with a cusp-like bounce at a finite minimum scale factor, before which the universe was contracting. This scenario also explains why the present Universe at the largest scales appears spatially flat, homogeneous, and isotropic, providing a physical alternative to cosmic inflation.
In 2012, a new theory of a nonsingular Big Bounce was constructed within the frame of standard Einstein gravity. This theory combines the benefits of matter bounce and ekpyrotic cosmology. In particular, the BKL instability (the instability of the homogeneous and isotropic background cosmological solution to the growth of anisotropic stress) is resolved in this theory. Moreover, curvature perturbations seeded in matter contraction can form a nearly scale-invariant primordial power spectrum and thus provide a consistent mechanism to explain the cosmic microwave background (CMB) observations.
A few sources argue that distant supermassive black holes whose large size is hard to explain so soon after the Big Bang, such as ULAS J1342+0928, may be evidence for a Big Bounce, with these supermassive black holes being formed before the Big Bounce.
Critics
According to a study published in Physical Review Letters in May 2023, the Big Bounce should have left marks in the primordial light known as the cosmic microwave background (CMB), but a comparison of observations from the Planck satellite with a simulated CMB for the case in which the Universe bounced on itself only once did not find that particular bounce signature.
See also
References
Further reading
Angha, Nader (2001). Expansion & Contraction Within Being (Dahm). Riverside, California: M.T.O Shahmaghsoudi Publications. .
Taiebyzadeh, Payam (2017). String Theory; A unified theory and inner dimension of elementary particles (BazDahm). Riverside, Iran: Shamloo Publications Center. .
External links
Penn State Researchers Look Beyond The Birth Of The Universe (Penn State) May 12, 2006
What Happened Before the Big Bang? (Penn State) July 1, 2007
From big bang to big bounce (Penn State) NewScientist December 13, 2008
Physical cosmology
Ultimate fate of the universe | Big Bounce | Physics,Astronomy | 1,932 |
34,180,753 | https://en.wikipedia.org/wiki/Variable-mass%20system | In mechanics, a variable-mass system is a collection of matter whose mass varies with time. It can be confusing to try to apply Newton's second law of motion directly to such a system. Instead, the time dependence of the mass m can be calculated by rearranging Newton's second law and adding a term to account for the momentum carried by mass entering or leaving the system. The general equation of variable-mass motion is written as
where Fext is the net external force on the body, vrel is the relative velocity of the escaping or incoming mass with respect to the center of mass of the body, and v is the velocity of the body. In astrodynamics, which deals with the mechanics of rockets, the term vrel is often called the effective exhaust velocity and denoted ve.
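The displayed equation itself is missing above; the standard form, written with the symbols just defined (a reconstruction rather than a quotation of the original markup), is

\[ \mathbf{F}_\text{ext} + \mathbf{v}_\text{rel}\,\frac{dm}{dt} = m\,\frac{d\mathbf{v}}{dt}, \]

so that the term involving vrel acts like an extra force on the body whenever mass is being gained or lost.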
Derivation
There are different derivations for the variable-mass system motion equation, depending on whether the mass is entering or leaving a body (in other words, whether the moving body's mass is increasing or decreasing, respectively). To simplify calculations, all bodies are considered as particles. It is also assumed that the mass is unable to apply external forces on the body outside of accretion/ablation events.
Mass accretion
The following derivation is for a body that is gaining mass (accretion). A body of time-varying mass m moves at a velocity v at an initial time t. In the same instant, a particle of mass dm moves with velocity u with respect to ground. The initial momentum can be written as
Now at a time t + dt, let both the main body and the particle accrete into a body of velocity v + dv. Thus the new momentum of the system can be written as
Since dm dv is the product of two small quantities, it can be ignored, meaning that during dt the momentum of the system changes by
Therefore, by Newton's second law
Noting that u - v is the velocity of dm relative to m, symbolized as vrel, this final equation can be arranged as
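Because the individual displayed steps above did not survive formatting, the accretion bookkeeping is restated here in standard notation; this is a reconstruction consistent with the text, not the article's original markup.

\[ p(t) = m\mathbf{v} + \mathbf{u}\,dm, \qquad p(t+dt) = (m+dm)(\mathbf{v}+d\mathbf{v}) \]
\[ dp = m\,d\mathbf{v} - (\mathbf{u}-\mathbf{v})\,dm \quad \text{(dropping the second-order term } dm\,d\mathbf{v}\text{)} \]
\[ \mathbf{F}_\text{ext} = \frac{dp}{dt} = m\frac{d\mathbf{v}}{dt} - \mathbf{v}_\text{rel}\frac{dm}{dt}, \qquad \mathbf{v}_\text{rel} := \mathbf{u}-\mathbf{v}, \]

which rearranges to the final form

\[ m\,\frac{d\mathbf{v}}{dt} = \mathbf{F}_\text{ext} + \mathbf{v}_\text{rel}\,\frac{dm}{dt}. \]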
Mass ablation/ejection
In a system where mass is being ejected or ablated from a main body, the derivation is slightly different. At time t, let a mass m travel at a velocity v, meaning the initial momentum of the system is
Assuming u to be the velocity of the ablated mass dm with respect to the ground, at a time t + dt the momentum of the system becomes
where u is the velocity of the ejected mass with respect to the ground, and is negative because the ablated mass moves in the opposite direction to the body. Thus during dt the momentum of the system changes by
Relative velocity vrel of the ablated mass with respect to the mass m is written as
Therefore, change in momentum can be written as
Therefore, by Newton's second law
Therefore, the final equation can be arranged as
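The displayed result is missing; restated compactly (again a reconstruction, not the original markup), the ejection case gives

\[ m\,\frac{d\mathbf{v}}{dt} = \mathbf{F}_\text{ext} + \mathbf{v}_\text{rel}\,\frac{dm}{dt}, \]

where dm/dt < 0 is the rate of change of the body's mass and vrel is the velocity of the ejected mass relative to the body, so that expelling mass backwards (vrel opposite to v) pushes the body forward.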
Forms
By the definition of acceleration, a = dv/dt, so the variable-mass system motion equation can be written as
In bodies that are not treated as particles a must be replaced by acm, the acceleration of the center of mass of the system, meaning
Often the force due to thrust is defined as so that
This form shows that a body can have acceleration due to thrust even if no external forces act on it (Fext = 0). Note finally that if one lets Fnet be the sum of Fext and Fthrust then the equation regains the usual form of Newton's second law:
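The missing displayed definitions referred to in the last two sentences can be written (a reconstruction consistent with them) as

\[ \mathbf{F}_\text{thrust} := \mathbf{v}_\text{rel}\,\frac{dm}{dt}, \qquad m\,\mathbf{a} = \mathbf{F}_\text{ext} + \mathbf{F}_\text{thrust} = \mathbf{F}_\text{net}. \]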
Ideal rocket equation
The ideal rocket equation, or the Tsiolkovsky rocket equation, can be used to study the motion of vehicles that behave like a rocket (where a body accelerates itself by ejecting part of its mass, a propellant, with high speed). It can be derived from the general equation of motion for variable-mass systems as follows: when no external forces act on a body (Fext = 0) the variable-mass system motion equation reduces to
If the velocity of the ejected propellant, vrel, is assumed to have the opposite direction to the rocket's acceleration, dv/dt, the scalar equivalent of this equation can be written as
from which dt can be canceled out to give
Integration by separation of variables gives
By rearranging and letting Δv = v1 - v0, one arrives at the standard form of the ideal rocket equation:
where m0 is the initial total mass, including propellant, m1 is the final total mass, vrel is the effective exhaust velocity (often denoted as ve), and Δv is the maximum change of speed of the vehicle (when no external forces are acting).
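For reference, the standard form being described (a reconstruction using the symbols just defined) is

\[ \Delta v = v_\text{rel} \ln\frac{m_0}{m_1}. \]

As a quick check of the logarithmic dependence: a stage with vrel = 3 km/s and mass ratio m0/m1 = e ≈ 2.718 delivers Δv = 3 km/s, while squaring the mass ratio (m0/m1 = e²) doubles Δv to 6 km/s.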
References
Classical mechanics
Mechanics | Variable-mass system | Physics,Engineering | 948 |
1,686,492 | https://en.wikipedia.org/wiki/Gold%20extraction | Gold extraction is the extraction of gold from dilute ores using a combination of chemical processes. Gold mining produces about 3600 tons annually, and another 300 tons is produced from recycling.
Since the 20th century, gold has been principally extracted in a cyanide process by leaching the ore with cyanide solution. The gold may then be further refined by gold parting, which removes other metals (principally silver) by blowing chlorine gas through the molten metal. Historically, small particles of gold were amalgamated with mercury, and then concentrated by boiling away the mercury. The mercury method is still used in some small operations.
Types of ore
Gold occurs principally as a native metal, i.e., gold itself. Sometimes it is alloyed to a greater or lesser extent with silver, which is called electrum. Native gold can occur as sizeable nuggets, as fine grains or flakes in alluvial deposits, or as grains or microscopic particles (known as colour) embedded in rock minerals. Other forms of gold are the minerals calaverite (AuTe2), aurostibite (AuSb2), and maldonite (Au2Bi). These latter three, although rarer than native gold, can be slow to react with cyanide and thus difficult to process. Still other gold-containing ores include various tellurides (sylvanite, nagyagite, petzite, and krennerite).
Certain contaminants in ores can interfere with the extractability of gold by cyanide. These interfering agents are called "preg-robbing ores". For example, gold can bind tightly to carbon, resisting normal cyanide extraction. Gold cyanides bind also to some clays.
Concentration
While the romantic picture of gold mining focuses on nuggets, the reality is that gold is typically recovered from ores containing >10 ppm of the metal. Thus, the main challenge is concentrating this trace amount.
Cyanidation (and thiosulfate)
The principal technology is the cyanide process, in which gold is leached from the ore by treatment with a solution of cyanide. The first step is comminution (grinding) to increase surface area and expose the gold to the extracting solution. The extraction is conducted by dump leaching or heap leaching processes. Sodium cyanide is produced on a scale of hundreds of thousands of tons per year, mainly for this purpose. "Black cyanide", a carbon-contaminated form of calcium cyanide (Ca(CN)2), is often used because it is cheap. The crude ore is washed, in the presence of air, with a c. 0.3% cyanide solution, often repeatedly, and the aqueous extract is collected and refined further. Recovery from solution typically involves adsorption on activated carbon (the carbon-in-pulp process).
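The dissolution is usually summarized by the overall stoichiometry known as Elsner's equation; the form shown below is the commonly quoted textbook version, restated here rather than taken from this article:

\[ 4\,\text{Au} + 8\,\text{NaCN} + \text{O}_2 + 2\,\text{H}_2\text{O} \longrightarrow 4\,\text{Na[Au(CN)}_2\text{]} + 4\,\text{NaOH} \]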
Thiosulfate leaching has been proven to be effective on ores with high soluble copper values or ores which experience preg-robbing.
Leaching through bulk leach extractable gold, or BLEG, is also a process that is used to test an area for gold concentrations where gold may not be immediately visible.
Mercury amalgamation
Amalgamation with mercury can be used to recover very small gold particles, and mercury is still widely used in small-scale artisanal mining across the world. Mercury forms a mercury-gold amalgam with smaller gold particles, and then the gold is concentrated by boiling away the mercury from the amalgam. This is effective in extracting very small gold particles, but the process is hazardous due to the toxicity of mercury vapour. Large-scale use of mercury stopped in the 1960s. However, mercury is still used in artisanal and small-scale gold mining (ASGM). One mechanism by which mercury is employed in hydraulic mining is as an "undercurrent", in which the flow of smaller grains is diverted over mercury-coated copper plates. High flow velocities associated with hydraulic mining cause flouring of mercury, the wearing down of mercury particles that contributes to mercury loss into the environment.
Over of mercury contaminated the environment in California as a result of placer mining in the late nineteenth and early twentieth centuries. Stamp mill mining contributed an additional of mercury contamination. Mercury contamination in California waterways is a major contemporary environmental issue, as is groundwater pollution, mostly by inorganic mercury.
Refractory gold processes
A "refractory" gold ore is an ore that has ultra-fine gold particles disseminated throughout its gold occluded minerals. These ores are naturally resistant to recovery by standard cyanidation and carbon adsorption processes. These refractory ores require pre-treatment in order for cyanidation to be effective in recovery of the gold. A refractory ore generally contains sulphide minerals, organic carbon, or both. Sulphide minerals are impermeable minerals that occlude gold particles, making it difficult for the leach solution to form a complex with the gold. Organic carbon present in gold ore may adsorb dissolved gold-cyanide complexes in much the same way as activated carbon. This so-called "preg-robbing" carbon is washed away because it is significantly finer than the carbon recovery screens typically used to recover activated carbon.
Pre-treatment options for refractory ores include:
Roasting
Bio-oxidation, such as bacterial oxidation
Pressure oxidation
Albion process
The refractory ore treatment processes may be preceded by concentration (usually sulphide flotation). Roasting is used to oxidize both the sulphur and organic carbon at high temperatures using air and/or oxygen. Bio-oxidation involves the use of bacteria that promote oxidation reactions in an aqueous environment. Pressure oxidation is an aqueous process for sulphur removal carried out in a continuous autoclave, operating at high pressures and somewhat elevated temperatures. The Albion process utilises a combination of ultrafine grinding and atmospheric, auto-thermal, oxidative leaching.
Gold refining and parting
Parting is a process by which gold is purified to a commercially-tradeable standard, typically ≥99.5%. Removal of silver is of particular interest since the two metals often co-purify. The standard procedure is based on the Miller process. The separation is achieved by passing chlorine gas into a molten alloy. The technique is practiced on a large scale (e.g. 500 kg). The principle of the method exploits the nobility of gold, such that at high temperatures, gold does not react with chlorine, but virtually all contaminating metals do. Thus, at c. 500 °C, as the chlorine gas is passed through the molten mixture (again, mainly gold), a low-density slag forms on top, which can be decanted from the liquid gold. Silver chloride and other precious metals can be recovered from this slag. The slag layer is often diluted with a flux like borax to facilitate the separation.
Alternative methods exist for parting gold. Silver can be dissolved selectively by boiling the mixture with 30% nitric acid, a process sometimes called inquartation. Affination is a largely obsolete process of removing silver from gold using concentrated sulfuric acid. Electrolysis using the Wohlwill process is yet another approach.
History
The smelting of gold began sometime around 6000 – 3000 BC. According to one source the technique began to be in use in Mesopotamia or Syria. In ancient Greece, Heraclitus wrote on the subject.
According to de Lacerda and Salomons (1997), mercury was first used for gold extraction in about 1000 BC; according to Meech and others (1998), mercury was used in obtaining gold until the latter period of the first millennium.
A technique known to Pliny the Elder was extraction by way of crushing, washing, and then applying heat, with the resultant material powdered.
Industrial era
Like all metals, gold is insoluble in water. Gold does, however, exhibit the distinctive property that, in the presence of cyanide ions and oxygen (or air), it dissolves. This transformation was reported in 1783 by Carl Wilhelm Scheele, but it was not until the late 19th century that the reaction was exploited commercially. The expansion of gold mining in the Rand of South Africa began to slow down in the 1880s, as the new deposits being found tended to be pyritic ore. The gold was difficult to extract from such ores.
A process known as chlorination was once used to treat pyritic gold ore. Typically, the ore was roasted and then treated with chlorine gas. The residue was extracted to give an aqueous solution of gold chloride. It was used, notably at the Mount Morgan mine, where it remained in use until 1911. The chloride process became obsolete with the development of the cyanide process.
In 1887, John Stewart MacArthur, working in collaboration with brothers Dr Robert and Dr William Forrest for the Tennant Company in Glasgow, Scotland, developed the MacArthur-Forrest Process for the extraction of gold from ores. By suspending the crushed ore in a cyanide solution, up to 96 percent of the gold was extracted.
The process was first used on a large scale at the Witwatersrand in 1890, leading to a boom of investment as larger gold mines were opened up. In 1896, Bodländer confirmed that oxygen was necessary for the process, something that had been doubted by MacArthur, and discovered that hydrogen peroxide was formed as an intermediate.
The method known as heap leaching was first proposed in 1969 by the United States Bureau of Mines, and was in use by the 1970s.
See also
Digger gold
Ore genesis
References
Gold
Metallurgical processes
de:Gold#Gewinnung | Gold extraction | Chemistry,Materials_science | 2,015 |
19,425,495 | https://en.wikipedia.org/wiki/Microtrabeculae | In cell biology, microtrabeculae were a hypothesised fourth element of the cytoskeleton (the other three being microfilaments, microtubules and intermediate filaments), proposed by Keith Porter based on images obtained from high-voltage electron microscopy of whole cells in the 1970s. The images showed short, filamentous structures of unknown molecular composition associated with known cytoplasmic structures. It is now generally accepted that microtrabeculae are nothing more than an artifact of certain types of fixation treatment, although the complexity of the cell's cytoskeleton is not yet fully understood.
References
Cell biology
Cytoskeleton | Microtrabeculae | Biology | 140 |
21,921,858 | https://en.wikipedia.org/wiki/Basic%20sediment%20and%20water | Basic sediment and water (BS&W) is both a technical specification of certain impurities in crude oil and the method for measuring it. When extracted from an oil reservoir, the crude oil will contain some amount of water and suspended solids from the reservoir formation. The particulate matter is known as sediment or mud. The water content can vary greatly from field to field. It may be present in large quantities for older fields, or if oil extraction is enhanced using water injection technology. The bulk of the water and sediment is usually separated at the field to minimize the quantity that needs to be transported further. The residual content of these unwanted impurities is measured as BS&W. Oil refineries may either buy crude to a certain BS&W specification or may alternatively have initial crude oil dehydration and desalting process units that reduce the BS&W to acceptable limits, or a combination thereof.
There are several ways to reduce the amount of water and sediment in crude oil. Gravity settling over several days allows water and solids to settle out. Heating crude oil reduces its viscosity, aiding further separation of these components. Certain chemicals added to crude oil can act to aid separation. Surfactants help water to separate from the oil. Paraffin thinners allow heavier fractions in the oil to flow more easily. Demulsifiers break down the oil/water emulsions that may have formed and thereby help to separate different elements of the crude oil.
Testing
ASTM method D4007 or API Manual of Petroleum Measurement Standards chapter 10.4 are commonly used to measure BS&W. These methods both consist of mixing equal volumes of solvent and crude oil then centrifuging in order to separate any solids, free water, or suspended particles.
More precise methods beyond BS&W are available to independently measure water or solids present in
a sample of crude oil.
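As an illustration of how a centrifuge reading is turned into a BS&W figure, the short Python sketch below computes the volume percentage. The tube reading and sample size are made-up example numbers, not values taken from ASTM D4007 or the API standard.

def bsw_percent(sediment_and_water_ml: float, sample_ml: float) -> float:
    """Return basic sediment and water as a volume percentage of the sample."""
    if sample_ml <= 0:
        raise ValueError("sample volume must be positive")
    return 100.0 * sediment_and_water_ml / sample_ml

# Hypothetical reading: 0.15 mL of sediment and water in a 50 mL crude oil sample.
print(f"BS&W = {bsw_percent(0.15, 50.0):.2f} %")  # prints: BS&W = 0.30 %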
BS&W and Free Water in practice
All unrefined crude oil has some water entrained within it. During transportation by ship, separation occurs naturally and water collects at the base of the tank below the oil; this is known as free water (FW).
Sales contracts for crude oil will typically specify the BS&W and FW to ensure the cargo meets quality standards. In one case in 2020 the quality documents required a BS&W of 0.2% and did not provide for any free water. Upon loading a Very Large Crude Carrier (VLCC) in Porto do Acu in Brazil, 4827 barrels of FW was measured. By the time the ship had reached its destination in the Far East, free water had settled and was measured at 8767 barrels. The BS&W parameter was thereby significantly exceeded.
References
Sources
Industrial processes
Chemical process engineering
Oil refining | Basic sediment and water | Chemistry,Engineering | 556 |
14,873,776 | https://en.wikipedia.org/wiki/3C%20401 | 3C 401 is a powerful radio galaxy located in the constellation Draco. It is near the center of a rich cluster of galaxies and dominates the cluster. That is, it is the type-cD galaxy of its cluster. It has a double nucleus, indicating that it is merging with another galaxy.
3C 401 is classified as a Fanaroff and Riley class II radio source (FR II), but has characteristics of both types of sources. FR II radio sources are brightest at the ends of their radio lobes while FR I sources are brightest toward their centers. 3C 401 has hot spots at the ends of its two extended radio lobes, but also has a bright one-sided jet like a FR I source. The spectrum of this jet is also intermediate between the spectra of jets in the two types of sources.
References
Draco (constellation)
401
2605547
60.29
1939+605
Radio galaxies | 3C 401 | Astronomy | 185 |
7,619,130 | https://en.wikipedia.org/wiki/Habit%20reversal%20training | Habit reversal training (HRT) is a "multicomponent behavioral treatment package originally developed to address a wide variety of repetitive behavior disorders".
Behavioral disorders treated with HRT include tics, trichotillomania, nail biting, thumb sucking, skin picking, temporomandibular disorder (TMJ), lip-cheek biting and stuttering. It consists of five components: awareness training, competing response training, contingency management, relaxation training, and generalization training.
Research on the efficacy of HRT for behavioral disorders has produced consistent, large effect sizes (approximately 0.80 across the disorders). It has met the standard of a well-established treatment for stuttering, thumb sucking, nail biting, and TMJ disorders. According to a meta-analysis from 2012, decoupling, a self-help variant of HRT, also shows efficacy.
For tic disorders
In case of a tic, these components are intended to increase tic awareness, develop a competing response to the tic, and build treatment motivation and compliance. HRT is based on the presence of a premonitory urge, or sensation occurring before a tic. HRT involves replacing a tic with a competing response—a more comfortable or acceptable movement or sound—when a patient feels a premonitory urge building.
Controlled trials have demonstrated that HRT is an acceptable, tolerable, effective and durable treatment for tics; HRT reduces the severity of vocal tics, and results in enduring improvement of tics when compared with supportive therapy. HRT has been shown to be more effective than supportive therapy and, in some studies, medication. HRT is not yet proven or widely accepted, but large-scale trials are ongoing and should provide better information about its efficacy in treating Tourette syndrome. Studies through 2006 are "characterized by a number of design limitations, including relatively small sample sizes, limited characterization of study participants, limited data on children and adolescents, lack of attention to the assessment of treatment integrity and adherence, and limited attention to the identification of potential clinical and neurocognitive mechanisms and predictors of treatment response". Additional controlled studies of HRT are needed to address whether HRT, medication, or a combination of both is most effective, but in the interim, "HRT either alone or in combination with medication should be considered as a viable treatment" for tic disorders.
Comprehensive Behavioral Intervention for Tics
Comprehensive Behavioral Intervention for Tics (CBIT), based on HRT, is a first-line treatment for Tourette syndrome and tic disorders. With a high level of confidence, CBIT has been shown to be more likely to lead to a reduction in tics than other supportive therapies or psychoeducation. Some limitations are: children younger than ten may not understand the treatment, people with severe tics or ADHD may not be able to suppress their tics or sustain the required focus to benefit from behavioral treatments, there is a lack of therapists trained in behavioral interventions, finding practitioners outside of specialty clinics can be difficult, and costs may limit accessibility. Whether increased awareness of tics through HRT/CBIT (as opposed to moving attention away from them) leads to further increases in tics later in life is a subject of discussion among TS experts.
See also
Cognitive behavioral therapy
Operant conditioning
Behaviour therapy
Decoupling
References and notes
Behavior therapy
Behavior modification | Habit reversal training | Biology | 697 |
9,891,751 | https://en.wikipedia.org/wiki/Standard%20Assessment%20Procedure | The Standard Assessment Procedure (SAP) is the UK government's recommended method system for measuring the energy rating of residential dwellings. The methodology is owned by the Department for Business, Energy & Industrial Strategy, and produced under licence by BRE. The first version was published in 1995, and was replaced by newer versions in 1998, 2001, 2005, 2009, 2012, and 2021. It calculates the typical annual energy costs for space and water heating, and, from 2005, lighting. The CO2 emissions are also calculated. The SAP runs from 1 to 100+, with dwellings that have SAP>100 being net exporters of energy.
SAP 2012 has been used as the basis for checking new dwellings for compliance with building regulations in the United Kingdom requiring the conservation of fuel and power since 6 April 2014 in England or 31 July 2014 in Wales.
A reduced data version of SAP, RDSAP, is used for existing dwellings. SAP or RDSAP was used to produce the energy report and Energy Performance Certificate in Home Information Packs (HIPs). A document was published by the UK government in 2007, looking towards SAP and energy standards in the future.
A number of comparisons have indicated that SAP does not provide an accurate model for low-energy buildings.
The Standard Assessment Procedure evolved from the National Home Energy Rating scheme, which was based upon the Milton Keynes Energy Cost Index created for the Energy World demonstration buildings in the 1980s.
References
External links
Department for Communities . . . Building Regulations: Energy efficiency requirements for new dwellings - A forward look at what standards may be in 2010 and 2013 .
UK Building Regulations - Full text of the regulations in .pdf format - UK Government Planning Portal site
Housing in the United Kingdom
Building engineering
Construction industry of the United Kingdom | Standard Assessment Procedure | Engineering | 351 |
8,364,986 | https://en.wikipedia.org/wiki/Chhajja | A chhajja is an overhanging eave or roof covering found in Indian architecture. It is characterised with large support brackets with different artistic designs. Variation is also seen in its size depending on the importance of the building on which it features or the choice of the designer.
Its function is similar to that of overhangs or eaves; it adorns and protects entrances, arches, and windows from the outside elements, and provides shade from radiation. Chhajjas also aid in the facade-making in Rajasthani architecture. Some styles of roof can be considered large chhajja as well.
History
Although there is no conclusive agreement on when the chhajja emerged as an architectural element, it can be traced back to before the rise of the Mughal Empire in India. However, much of its popular use seems to be during this time.
The original inspiration of the chhajja and much of the other Indian architectural elements with which it is commonly seen can be traced back to building design from older periods, such as that of bamboo and thatch village huts that can still be found today. The elements of these buildings may have simply been built in stone and made to have a more dignified look that can be seen in many buildings today. This works especially well to counteract the specific climate of the region, as many older architectural designs have been honed to deal with it. Simply adapting a tried and tested design with stronger materials may have been the best course of action.
Curved chhajja became popular in Mughal architecture particularly during and after the reign of Shah Jahan.
By the time that buildings like the Jahangiri Mahal at Agra and the palace complex at Fathpur Sikri were built, it emerged as a popular and important architectural element of Mughal architecture.
Later in the Mughal rule, buildings like the Zafar Mahal also illustrated a use for the chhajja for both practical and ornamental means.
Usage
Although chhajja are generally constructed in a manner which can be seen as aesthetically artistic, they have many usage cases in many different types of buildings.
Usage in Mughal Architecture
Despite initial Mughal built mosques not featuring chhajja, the Baburi (Babur style) mosque built in Ayodhya features eaves in the form of chhajja. After this, chhajja were not a rare sight in mosque architecture within the Indian subcontinent, such as those within the mosques of Sirhind, where many of the arches are adorned with chhajja.
Although Mughal architecture is the dominant user of chhajja, lesser-known constructions undertaken by the Maratha empire in occupied territory also feature them, such as the architectural remains in Bahadurgarh, formerly known as Saydabad. In contrast to the usual aesthetically elaborate chhajja constructions seen in Indian architecture, a more practical utilitarian version is used in forts, as found in the remains at Bahadurgarh.
What appears to be chhajja also appears on fortifications in Mughal Sarai such as the one found at Doraha for both a practical and decorative purpose. Here, chhajja are seen in an elegant semi-hexagonal configuration. There is speculation that here there were more chhajja that have since crumbled.
Mahals and palaces were frequently built with extravagant artistic chhajja. This is seen in buildings such as the Zafar Mahal constructed during the late Mughal rule. This features a chhajja formed with multi-foliated arches resting on four baluster columns, creating an extravagant appearance.
Chhajja and other architectural elements that supplement it appear mostly in buildings such as residential, administrative and formal buildings and pavilions. As in other areas of life, imperial builders possibly wished to show a sincere desire to establish emotional rapport with the local people by identifying with local architectural elements. This suggests chhajja have been in use for longer than currently standing structures would indicate.
Modern Usage
Chhajja are seen in contemporary architecture where there have been attempts to replicate a traditional middle eastern or Indian subcontinent architectural style.
The common usage of chhajja is portrayed in the first two stanzas of Ashwini Magotra's 2004 poem "Lohri".
References
Architecture in India
Mughal architecture elements
Architectural elements
Roofs
Islamic architectural elements | Chhajja | Technology,Engineering | 882 |
56,596,473 | https://en.wikipedia.org/wiki/Vlastimil%20Dlab | Vlastimil Dlab (born 5 August 1932) is a Czech-born Canadian mathematician who has worked in Czechoslovakia, Sudan, Australia and especially Canada where he founded and led an influential department of modern mathematics.
Biography
Dlab was born on August 5, 1932, in Bzí, Czechoslovakia, a historical village whose territory currently belongs to Železný Brod. He studied at Charles University in Prague, and worked at the Czechoslovak Academy of Sciences for a while in 1956. At Charles University, he was gradually promoted to associate professor. However, between 1954 and 1964 he did university research in Khartoum, Sudan. Between 1964 and 1965 he returned to Prague, before the Institute of Advanced Studies in Canberra, Australia, attracted him from 1965 to 1968.
After the 1968 Warsaw Pact invasion of Czechoslovakia, he was not exactly embraced with open arms on his return, so in 1971 he left for Ottawa, Canada, where he founded and led a department of modern mathematics at Carleton University that has significantly influenced the fields of algebra, probability, and statistics.
Because his father was ill in the early 1980s, Dlab – as an alien – was allowed to visit Czechoslovakia and he restored his relationship with Charles University. In the late 1980s, he taught some courses again there, and he regained full professorship in 1992.
Academic ancestry and collaborators
Dlab was a postdoctoral student of renowned Czech mathematician Eduard Čech.
While in Canada, Dlab worked as the editor-in-chief of mathematical journals and chairman of assorted organizations and institutions. In 1977, he was elected a fellow of the Royal Society of Canada.
Claus Michael Ringel was the co-author of some of the most famous academic works by Dlab, such as the 1976 book Indecomposable representations of graphs and algebras. Dlab helped to educate numerous students of mathematics who became successful by themselves.
Teaching of mathematics
In recent years, Dlab has been very active in efforts to improve mathematics education. In the Czech Republic, he has often been quoted as an authority who is skeptical of modern teaching methods, e.g. the method of Milan Hejný. He emphasizes the key role played by the quality of teachers.
See also
Eduard Čech
References
External links
Personal web page
1932 births
Czech mathematicians
20th-century Czech mathematicians
Algebraists
Charles University alumni
Institute for Advanced Study visiting scholars
Fellows of the Royal Society of Canada
Academic staff of Charles University
Living people
Czech exiles
Canadian mathematicians | Vlastimil Dlab | Mathematics | 489 |
49,642,374 | https://en.wikipedia.org/wiki/Moir%C3%A9%20Phase%20Tracking | Moiré Phase Tracking (MPT) is 3D tracking technology developed by Metria Innovation based on optical moiré patterns.
Moiré phase tracking is an approach to deriving three-dimensional spatial and rotational information about tracked objects. By affixing to an object a special marker with designs that result in moiré patterns, a camera can infer the pose of the marker and of the object to which it is attached. Metria's software converts the resultant moiré patterns into position and orientation data. This data has been used in 3D animation software and ergonomics programs.
Limited studies authored by the company founder suggest the approach has greater accuracy than some traditional methods.
References
Tracking | Moiré Phase Tracking | Technology | 132 |
64,243,012 | https://en.wikipedia.org/wiki/Samsung%20Galaxy%20Note%2020 | The Samsung Galaxy Note 20 and Galaxy Note 20 Ultra (stylized and marketed as Samsung Galaxy Note20 and Galaxy Note20 Ultra) are a series of high-end Android-based smartphones developed, produced, and marketed by Samsung Electronics as part of their Samsung Galaxy Note series, succeeding the Samsung Galaxy Note 10 series. The devices were announced on 5 August 2020 alongside the Samsung Galaxy Z Fold 2, Galaxy Watch 3, Galaxy Buds Live and Samsung Galaxy Tab S7 during Samsung's Unpacked Event. It was the final model in the Galaxy Note series, with Samsung beginning to integrate the functionality from the Note series into its S series "Ultra" models, starting with the Galaxy S22 Ultra released in February 2022.
Due to restrictions of the COVID-19 pandemic on public and social gatherings, Note 20 devices were unveiled virtually at Samsung's newsroom in Suwon, South Korea. At the event, Samsung announced that the smartphones include support for 5G connectivity, which allows for higher-bandwidth and lower-latency mobile connections where 5G network coverage is available. The Note 20's S-Pen has up to 4× better latency than that of previous generations. Mystic Green, Mystic Bronze, and Mystic Grey are colour options for the Note 20; Mystic Bronze, Mystic Black and Mystic White are colour options for the Note 20 Ultra. Unlike its predecessor, the Note 20 range does not feature a "+" model.
The Galaxy Note 20 series also includes a number of new software features, including performance optimization for mobile gaming, wireless sync with desktop and laptop PCs, and improved DeX features for connecting remotely to compatible devices.
Design
The Galaxy Note 20 series maintains a similar design with the Galaxy Note 10 and Galaxy S20, with an Infinity-O display (first introduced on the Galaxy S10) containing a circular punch hole in the top center for the frontal selfie camera. The rear camera array is located in the corner with a rectangular protrusion like the Galaxy S20, housing three cameras.
Unlike its predecessors, the Note 20 Ultra is the first Samsung phone to use stainless steel as the frame material, while the regular Note 20 sticks to the more classic anodized aluminum. The Note 20 uses Gorilla Glass 5 for the screen; the back panel is reinforced polycarbonate, which has not been seen on a Note series phone since the Note 4 and Note Edge. The Note 20 Ultra has Gorilla Glass Victus for the screen. Global color options are Mystic Bronze, Mystic Grey, Mystic Green, Mystic Black and Mystic White. The Mystic Green, Mystic Bronze and Mystic Grey options on the Note 20 have a matte finish, whereas only Mystic Bronze on the Note 20 Ultra has a matte finish. Mystic Bronze is available on both models, whereas Mystic Grey and Mystic Green are limited to the Note 20; Mystic Black and Mystic White are limited to the Note 20 Ultra. For the Note 20, Aura Red is exclusive to SK Telecom with 256 GB of storage, replacing Mystic Green in South Korea; Prism Blue will be sold in India.
Specifications
Hardware
Chipsets
The Galaxy Note 20 line comprises two models with various hardware specifications; international models of the Note 20 utilize the Exynos 990 system-on-chip, while the United States, Korean and Chinese models utilize the Qualcomm Snapdragon 865+. Both of the SoCs are based on a 7 nm+ processing technology node. The Exynos chipset comes with the Mali-G77 MP11 GPU, whereas the Snapdragon chipset comes with the Adreno 650 GPU.
Display
The Galaxy Note 20 does not feature a curved display like the one found on the Note 20 Ultra. The Note 20 and Note 20 Ultra feature a 6.7-inch 1080p and 6.9-inch 1440p display, respectively. Both use an AMOLED with HDR10+ support and "dynamic tone mapping" technology, marketed as Super AMOLED Plus for the Note 20 and Dynamic AMOLED 2X for the Note 20 Ultra. The Note 20 has a fixed 60 Hz refresh rate, however, the Note 20 Ultra offers a variable 120 Hz refresh rate. The settings have two options, 60 Hz and Adaptive, the latter of which uses a variable refresh rate that can adjust based on the content being displayed, enabled by a more energy efficient LTPO backplane. Unlike the S20 series, the display will remain at 120 Hz regardless of the device's battery level, and can handle slightly higher temperatures before switching to 60 Hz. Adaptive mode is limited to a FHD resolution, requiring users to switch to 60 Hz mode to enable QHD resolution. Both models utilize an ultrasonic in-screen fingerprint sensor.
Storage
The base amount of RAM is 8 GB, paired with 128 or 256 GB of internal storage standard. The Note 20 Ultra has 12 GB RAM and 512 GB UFS options, and has up to 1 TB of expandable storage via the microSD card slot.
Batteries
The Note 20 and Note 20 Ultra use non-removable Li-Ion batteries, rated at 4300 mAh and 4500 mAh respectively.
Qi inductive charging is supported as well as the ability to charge other Qi-compatible device from the Note 20's own battery power, which is branded as "Samsung PowerShare"; wired charging is supported over USB-C at up to 25 W.
Connectivity
The two come with 5G standard connectivity, though some regions may have special LTE or sub-6 GHz only variants, and both omit the audio jack.
It has NFC, eSIM, and Ultra-wideband technology.
On April 14, 2021, the Galaxy Note20 Ultra 5G T-Mobile updated software to support eSIM and dual SIM (DSDS). Other carriers still do not enable the two features even though the Galaxy Note20 Ultra already supports eSIM out of the box.
Cameras
The Note 20 features similar camera specifications to that of the Samsung Galaxy S20, which include a 12 MP wide sensor with 1.8 aperture, a 64 MP telephoto sensor with 2.0 aperture, and a 12 MP ultrawide sensor with 12 mm equivalent focal length. The telephoto camera supports 3× hybrid optical zoom and 10× digital zoom, which combined enables 30× hybrid zoom.
The Note 20 Ultra has a more advanced camera setup than its counterpart, including a 108 MP wide sensor, a 12 MP "periscope" telephoto sensor, and a 12 MP ultrawide sensor. The telephoto camera has a focal length of 120 mm (35mm equivalent), which equals 5× optical zoom, and allows for 50× hybrid zoom (assisted by digital zoom). Laser autofocus is used in favor of the S20 Ultra's time-of-flight camera.
The Note 20's telephoto sensor and the Note 20 Ultra's wide sensor use pixel binning to output higher quality images at a standard resolution, with the wide-angle sensor using Nonacell technology which groups 3x3 pixels to capture more light.
The front camera uses a 10 MP sensor, and can record 4K video.
Single Take, introduced on the S20 series, allows users to capture photos or videos simultaneously with different sensors automatically. Both models can record 8K video at 24fps. On the Note 20, this is enabled by the 64 MP telephoto sensor, whereas the Note 20 Ultra's 108 MP wide sensor natively supports 8K video.
S-Pen
The S-Pen has better latency at 26ms on the Note 20 and 9ms on the Note 20 Ultra, reduced from 42ms on the Note 10 and Note 10+. Additionally, it gains five new Air gestures that work across the UI by utilizing the accelerometers and gyroscope, as well as 'AI-based point prediction'. Battery life has also been improved from 10 hours to 24 hours.
Accessories
Earbuds are included in some countries such as the UK, but are not bundled in others such as the US.
Software
The devices were shipped with Android 10 and One UI 2.5. A beta test for Android 11 was released later on in the year. Android 11 with One UI 3.0 was sent OTA (over-the-air) to the majority of Note 20 and Note 20 Ultra devices by January 2021. Both the Note 20 and Note 20 Ultra received the Android 12 update with One UI 4.0 by January 2022. Android 13 with One UI 5.1, issued by December 2022, was the last major OS update for the Note 20 and Note 20 Ultra.
Software support
On August 18, 2020, the Note 20 series along with a selection of other Samsung Galaxy devices, were announced to receive three generations of Android software update support.
Xbox Game Pass
Samsung has partnered with Xbox to offer Xbox games on the Note 20. In certain markets, the Galaxy Note 20 has been offered with three months of free Xbox Game Pass along with an Xbox gamepad; Xbox games will be playable from the phone to the TV. More than 90 Xbox games are playable on the Note 20.
Reception
The Note 20 received mixed reviews. Reviews from various technology websites, such as TechRadar and The Verge, praised the Note 20 series for its redesigned S-Pen and camera performance. However, the baseline Note 20 was criticized for its lower quality display and plastic back panel despite the high starting price point. Writing for TechRadar, James Peckham said in his verdict, "the Galaxy Note 20 is Samsung's new entry-level stylus-included smartphone for 2020, but it's one that doesn't seem particularly exciting for the usual Note-loving crowd. It highlights some more affordable features compared to its more exciting Ultra sibling but it may well be just as good for those who don't want to spend top dollar." There was a pronounced difference in the performance of the two processors available, which caused concern that Exynos models were an inferior product, as the differentials were not as large in previous models. The cooling system introduced in the Galaxy Note 10 was also removed in the Snapdragon Variants of the Note 20 series.
See also
Samsung Galaxy S20
Samsung Galaxy Z Fold 2
Samsung Galaxy Note series
References
External links
Samsung smartphones
Samsung mobile phones
Samsung Galaxy
20
Mobile phones with stylus
Mobile phones with 4K video recording
Mobile phones with 8K video recording
Mobile phones with multiple rear cameras
Mobile phones introduced in 2020
Discontinued flagship smartphones
Discontinued Samsung Galaxy smartphones | Samsung Galaxy Note 20 | Technology | 2,162 |
41,134,793 | https://en.wikipedia.org/wiki/Labrador%20Sea%20Water | Labrador Sea Water is an intermediate water mass characterized by cold water, relatively low salinity compared to other intermediate water masses, and high concentrations of both oxygen and anthropogenic tracers. It is formed by convective processes in the Labrador Sea located between Greenland and the northeast coast of the Labrador Peninsula. Deep convection in the Labrador Sea allows colder water to sink forming this water mass, which is a contributor to the upper layer of North Atlantic Deep Water. North Atlantic Deep Water flowing southward is integral to the Atlantic Meridional Overturning Circulation. The Labrador Sea experiences a net heat loss to the atmosphere annually.
Formation
Convection in the Labrador Sea is the result of a combination of cyclonic oceanographic circulation of the sea currents and cyclonic atmospheric forcing. At the southern tip of Greenland, water enters the West Greenland Current from the East Greenland Current, continues to flow northwest around the Baffin Bay, and then southeast into the Baffin Island Current continuing in the same direction in the Labrador Current. Sea ice in the winter months inhibits surface flow into Baffin Bay. The Labrador Current and the Western Greenland Current flow in opposite directions resulting in a cyclonic eddy. During winter months low pressure dominates in this region, and in years with a positive North Atlantic Oscillation deeper convection is observed.
Spreading
Labrador Sea Water spreads through the North Atlantic Ocean by three routes: northeast directly into the Irminger Sea, into the eastern North Atlantic by means of the deep North Atlantic current, and meridionally via the Deep Western Boundary Current. Oceanographer Robert Pickart, in a paper published in 2002, presented data suggesting that the Labrador Sea is not the only formation site for Labrador Sea Water. The study observed similar convective processes in the Irminger Sea and noted that transit times for Labrador Sea Water into the Irminger Sea were unusually fast, suggesting that there is another source in the Irminger Sea.
Variability
Labrador Sea Water properties experience seasonal and interannual variations. In late spring and summer, large amounts of cold freshwater accumulate from melting ice and are mixed downward during convection. The source for heat in the Labrador Sea is modified North Atlantic Current water after circulating the subpolar gyre. In winter the sea becomes more saline as freshwater freezes to form sea ice. The greatest seasonal variability is largely confined to the surface waters, however an annual cycle of convective mixing and re-stratification is observed throughout the water column. Warming and increased salinity in the lower level and freshening at the surface is associated with re-stratification (May–December), whereas a convective mixing period (January–April) leads to cooling and a decrease in salt content in intermediate and deep waters and an increase in salt content at the surface.
Interannual variations in the intermediate Labrador Sea Water are due largely to changes in convection throughout these periods. Weak convective periods are associated with more heat in the water column and deep convective periods are characterized by cold water. In the early 1990s, several consecutive severe winters contributed towards deep convection in the Labrador Sea. These winters were also associated with strong positive fluctuations in the North Atlantic Oscillation. Labrador Sea Water became very cold, fresh, and dense during this period, and the layer extended to depths of 2300m in the spring of 1994. Due to weakened convection, Labrador Sea Water began warming significantly and increased in salinity over the following decade. This trend continued through 2010 and 2011 when weak convection was observed in relation with negative North Atlantic Oscillation. Deep convection was observed again in 2012 with the Labrador Sea Water reaching 1400m, corresponding with a positive North Atlantic Oscillation similar to those seen in the early 1990s.
See also
Labrador Sea
North Atlantic Deep Water
Atlantic Ocean
North Atlantic Oscillation
References
Oceanography | Labrador Sea Water | Physics,Environmental_science | 774 |
26,452 | https://en.wikipedia.org/wiki/Riesz%20representation%20theorem | The Riesz representation theorem, sometimes called the Riesz–Fréchet representation theorem after Frigyes Riesz and Maurice René Fréchet, establishes an important connection between a Hilbert space and its continuous dual space. If the underlying field is the real numbers, the two are isometrically isomorphic; if the underlying field is the complex numbers, the two are isometrically anti-isomorphic. The (anti-) isomorphism is a particular natural isomorphism.
Preliminaries and notation
Let H be a Hilbert space over a field 𝔽, where 𝔽 is either the real numbers ℝ or the complex numbers ℂ. If 𝔽 = ℝ (resp. if 𝔽 = ℂ) then H is called a real Hilbert space (resp. a complex Hilbert space). Every real Hilbert space can be extended to be a dense subset of a unique (up to bijective isometry) complex Hilbert space, called its complexification, which is why Hilbert spaces are often automatically assumed to be complex. Real and complex Hilbert spaces have in common many, but by no means all, properties and results/theorems.
This article is intended for both mathematicians and physicists and will describe the theorem for both.
In both mathematics and physics, if a Hilbert space is assumed to be real (that is, if 𝔽 = ℝ) then this will usually be made clear. Often in mathematics, and especially in physics, unless indicated otherwise, "Hilbert space" is usually automatically assumed to mean "complex Hilbert space." Depending on the author, in mathematics, "Hilbert space" usually means either (1) a complex Hilbert space, or (2) a real or complex Hilbert space.
Linear and antilinear maps
By definition, an antilinear map (also called a conjugate-linear map) f : H → Y is a map between vector spaces that is additive:
f(x + y) = f(x) + f(y) for all x, y ∈ H, and antilinearly homogeneous (also called conjugate-homogeneous):
f(cx) = c̄ f(x) for all x ∈ H and all scalars c, where c̄ is the conjugate of the complex number c = a + bi, given by c̄ = a − bi.
In contrast, a map f : H → Y is linear if it is additive and homogeneous: f(cx) = c f(x) for all x ∈ H and all scalars c.
A constant map equal to 0 is both linear and antilinear. If 𝔽 = ℝ then the definitions of linear maps and antilinear maps are completely identical. A linear map from a Hilbert space into a Banach space (or more generally, from any Banach space into any topological vector space) is continuous if and only if it is bounded; the same is true of antilinear maps. The inverse of any antilinear (resp. linear) bijection is again an antilinear (resp. linear) bijection. The composition of two antilinear maps is a linear map.
Continuous dual and anti-dual spaces
A functional on H is a function H → 𝔽 whose codomain is the underlying scalar field 𝔽.
Denote by H* (resp. by H̄*) the set of all continuous linear (resp. continuous antilinear) functionals on H, which is called the (continuous) dual space (resp. the (continuous) anti-dual space) of H.
If 𝔽 = ℝ then linear functionals on H are the same as antilinear functionals and consequently, the same is true for such continuous maps: that is, H* = H̄*.
One-to-one correspondence between linear and antilinear functionals
Given any functional f : H → 𝔽, the conjugate of f is the functional f̄ : H → 𝔽 whose value at x is the complex conjugate of f(x).
This assignment is most useful when 𝔽 = ℝ, because if 𝔽 = ℝ then f = f̄ and the assignment f ↦ f̄ reduces down to the identity map.
The assignment f ↦ f̄ defines an antilinear bijective correspondence from the set of
all functionals (resp. all linear functionals, all continuous linear functionals H*) on H
onto the set of
all functionals (resp. all antilinear functionals, all continuous antilinear functionals H̄*) on H.
Mathematics vs. physics notations and definitions of inner product
The Hilbert space H has an associated inner product ⟨·, ·⟩ valued in H's underlying scalar field 𝔽 that is linear in one coordinate and antilinear in the other (as specified below).
If H is a complex Hilbert space (𝔽 = ℂ), then there is a crucial difference between the notations prevailing in mathematics versus physics, regarding which of the two variables is linear.
However, for real Hilbert spaces (𝔽 = ℝ), the inner product is a symmetric map that is linear in each coordinate (bilinear), so there can be no such confusion.
In mathematics, the inner product on a Hilbert space H is often denoted by ⟨x, y⟩ or ⟨x, y⟩_H, while in physics the bra–ket notation ⟨y | x⟩ or ⟨y | x⟩_H is typically used. In this article, these two notations will be related by the equality:
These have the following properties:The map is linear in its first coordinate; equivalently, the map is linear in its second coordinate. That is, for fixed the map
with
is a linear functional on This linear functional is continuous, so
The map is antilinear in its coordinate; equivalently, the map is antilinear in its coordinate. That is, for fixed the map
with
is an antilinear functional on This antilinear functional is continuous, so
In computations, one must consistently use either the mathematics notation ⟨·, ·⟩, which is (linear, antilinear); or the physics notation ⟨· | ·⟩, which is (antilinear | linear).
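The defining equality referred to above did not survive formatting. One common convention, consistent with the linearity properties just listed and stated here as an assumption rather than quoted from the article, is

\[ \langle x, y \rangle := \langle y \mid x \rangle, \]

so that ⟨x, y⟩ is linear in x (the mathematics convention) exactly when ⟨y | x⟩ is linear in the ket variable x (the physics convention).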
Canonical norm and inner product on the dual space and anti-dual space
If x ∈ H then ⟨x, x⟩ is a non-negative real number, and the map
‖x‖ := √⟨x, x⟩ defines a canonical norm on H that makes H into a normed space.
As with all normed spaces, the (continuous) dual space carries a canonical norm, called the , that is defined by
The canonical norm on the (continuous) anti-dual space denoted by is defined by using this same equation:
This canonical norm on satisfies the parallelogram law, which means that the polarization identity can be used to define a which this article will denote by the notations
where this inner product turns into a Hilbert space. There are now two ways of defining a norm on the norm induced by this inner product (that is, the norm defined by ) and the usual dual norm (defined as the supremum over the closed unit ball). These norms are the same; explicitly, this means that the following holds for every
As will be described later, the Riesz representation theorem can be used to give an equivalent definition of the canonical norm and the canonical inner product on
The same equations that were used above can also be used to define a norm and inner product on 's anti-dual space
Canonical isometry between the dual and antidual
The complex conjugate of a functional which was defined above, satisfies
for every and every
This says exactly that the canonical antilinear bijection defined by
as well as its inverse are antilinear isometries and consequently also homeomorphisms.
The inner products on the dual space and the anti-dual space denoted respectively by and are related by
and
If then and this canonical map reduces down to the identity map.
Riesz representation theorem
Two vectors $f$ and $g$ are orthogonal if $\langle f, g \rangle = 0,$ which happens if and only if $\|g\| \leq \|g + s f\|$ for all scalars $s.$ The orthogonal complement of a subset $X \subseteq H$ is
$X^{\bot} := \{ \, y \in H : \langle y, x \rangle = 0 \text{ for all } x \in X \, \},$
which is always a closed vector subspace of $H.$
The Hilbert projection theorem guarantees that for any nonempty closed convex subset $C$ of a Hilbert space there exists a unique vector $m \in C$ such that $\|m\| = \inf_{c \in C} \|c\|;$ that is, $m$ is the (unique) global minimum point of the function $C \to [0, \infty)$ defined by $c \mapsto \|c\|.$
Statement
Historically, the theorem is often attributed simultaneously to Riesz and Fréchet in 1907 (see references). In its standard form it states: for every continuous linear functional $\varphi \in H^{*}$ there exists a unique vector $f_{\varphi} \in H$ such that $\varphi(x) = \langle x, f_{\varphi} \rangle$ for all $x \in H,$ and moreover $\|\varphi\| = \|f_{\varphi}\|.$
Let $\mathbb{F}$ denote the underlying scalar field of $H.$
Proof of the norm formula:
Fix $y \in H.$
Define $\varphi : H \to \mathbb{F}$ by $\varphi(x) := \langle x, y \rangle,$ which is a linear functional on $H$ since $x$ is in the linear argument.
By the Cauchy–Schwarz inequality,
$|\varphi(x)| = |\langle x, y \rangle| \leq \|x\| \, \|y\|,$
which shows that $\varphi$ is bounded (equivalently, continuous) and that $\|\varphi\| \leq \|y\|.$
It remains to show that $\|y\| \leq \|\varphi\|.$
By using $y$ in place of $x,$ it follows that
$\|y\|^{2} = \langle y, y \rangle = \varphi(y) = |\varphi(y)| \leq \|\varphi\| \, \|y\|$
(the equality holds because $\varphi(y) = \langle y, y \rangle$ is real and non-negative).
Thus $\|\varphi\| = \|y\|.$
The proof above did not use the fact that $H$ is complete, which shows that the formula for the norm $\|\varphi\| = \|y\|$ holds more generally for all inner product spaces.
Proof that a representing vector is unique: suppose $f, g \in H$ are such that $\varphi(x) = \langle x, f \rangle$ and $\varphi(x) = \langle x, g \rangle$ for all $x \in H.$
Then $\langle x, f - g \rangle = \langle x, f \rangle - \langle x, g \rangle = 0$ for all $x \in H,$
which shows that $\langle \, \cdot \,, f - g \rangle$ is the constant $0$ linear functional.
Consequently $0 = \langle f - g, f - g \rangle = \|f - g\|^{2},$ which implies that $f = g.$
Let
If (or equivalently, if ) then taking completes the proof so assume that and
The continuity of implies that is a closed subspace of (because and is a closed subset of ).
Let
denote the orthogonal complement of in
Because is closed and is a Hilbert space, can be written as the direct sum (a proof of this is given in the article on the Hilbert projection theorem).
Because there exists some non-zero
For any
which shows that where now implies
Solving for shows that
which proves that the vector satisfies
Applying the norm formula that was proved above with shows that
Also, the vector has norm and satisfies
It can now be deduced that is -dimensional when
Let be any non-zero vector. Replacing with in the proof above shows that the vector satisfies for every The uniqueness of the (non-zero) vector representing implies that which in turn implies that and Thus every vector in is a scalar multiple of
The formulas for the inner products follow from the polarization identity.
Observations
If $\varphi \in H^{*}$ then $\varphi(f_{\varphi}) = \langle f_{\varphi}, f_{\varphi} \rangle = \|f_{\varphi}\|^{2} = \|\varphi\|^{2}.$
So in particular, $\varphi(f_{\varphi}) \geq 0$ is always real, and furthermore $\varphi(f_{\varphi}) = 0$ if and only if $f_{\varphi} = 0$ if and only if $\varphi = 0.$
Linear functionals as affine hyperplanes
A non-trivial continuous linear functional is often interpreted geometrically by identifying it with the affine hyperplane (the kernel is also often visualized alongside although knowing is enough to reconstruct because if then and otherwise ). In particular, the norm of should somehow be interpretable as the "norm of the hyperplane ". When then the Riesz representation theorem provides such an interpretation of in terms of the affine hyperplane
as follows: using the notation from the theorem's statement, from it follows that and so implies and thus
This can also be seen by applying the Hilbert projection theorem to and concluding that the global minimum point of the map defined by is
The formulas
provide the promised interpretation of the linear functional's norm entirely in terms of its associated affine hyperplane (because with this formula, knowing only the is enough to describe the norm of its associated linear ). Defining the infimum formula
will also hold when
When the supremum is taken in (as is typically assumed), then the supremum of the empty set is but if the supremum is taken in the non-negative reals (which is the image/range of the norm when ) then this supremum is instead in which case the supremum formula will also hold when (although the atypical equality is usually unexpected and so risks causing confusion).
Constructions of the representing vector
Using the notation from the theorem above, several ways of constructing from are now described.
If then ; in other words,
This special case of is henceforth assumed to be known, which is why some of the constructions given below start by assuming
Orthogonal complement of kernel
If then for any
If is a unit vector (meaning ) then
(this is true even if because in this case ).
If is a unit vector satisfying the above condition then the same is true of which is also a unit vector in However, so both these vectors result in the same
Orthogonal projection onto kernel
If is such that and if is the orthogonal projection of onto then
Orthonormal basis
Given an orthonormal basis of and a continuous linear functional the vector can be constructed uniquely by
where all but at most countably many will be equal to and where the value of does not actually depend on choice of orthonormal basis (that is, using any other orthonormal basis for will result in the same vector).
If is written as then
and
If the orthonormal basis is a sequence then this becomes
and if is written as then
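As a small numerical check of this construction (an illustrative sketch assuming Python/NumPy and the mathematics convention $\langle x, y \rangle$ that is linear in the first argument; the basis and functional below are arbitrary choices made for the example, not data from the article), the vector $\sum_i \overline{\varphi(e_i)}\, e_i$ can be computed for two different orthonormal bases of $\mathbb{C}^3$ and shown to coincide and to represent $\varphi$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
inner = lambda x, y: np.vdot(y, x)        # mathematics convention: linear in x, antilinear in y

c = rng.normal(size=n) + 1j * rng.normal(size=n)
phi = lambda x: c @ x                      # an arbitrary continuous linear functional on C^3

def representing_vector(basis):
    # f_phi = sum_i conj(phi(e_i)) e_i, where the e_i are the columns of `basis`
    return sum(np.conj(phi(basis[:, i])) * basis[:, i] for i in range(n))

# Two different orthonormal bases, obtained as the unitary factors of QR decompositions.
Q1, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
Q2, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

f1, f2 = representing_vector(Q1), representing_vector(Q2)
assert np.allclose(f1, f2)                 # the vector does not depend on the basis chosen

x = rng.normal(size=n) + 1j * rng.normal(size=n)
assert np.isclose(phi(x), inner(x, f1))    # phi(x) = <x, f_phi> for every x
```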
Example in finite dimensions using matrix transformations
Consider the special case of (where is an integer) with the standard inner product
where are represented as column matrices and with respect to the standard orthonormal basis on (here, is at its th coordinate and everywhere else; as usual, will now be associated with the dual basis) and where denotes the conjugate transpose of
Let be any linear functional and let be the unique scalars such that
where it can be shown that for all
Then the Riesz representation of is the vector
To see why, identify every vector in with the column matrix
so that is identified with
As usual, also identify the linear functional with its transformation matrix, which is the row matrix so that and the function is the assignment where the right hand side is matrix multiplication. Then for all
which shows that satisfies the defining condition of the Riesz representation of
The bijective antilinear isometry defined in the corollary to the Riesz representation theorem is the assignment that sends to the linear functional on defined by
where under the identification of vectors in with column matrices and vector in with row matrices, is just the assignment
As described in the corollary, 's inverse is the antilinear isometry which was just shown above to be:
where in terms of matrices, is the assignment
Thus in terms of matrices, each of this map and its inverse is just the operation of conjugate transposition (although between different spaces of matrices: if $H$ is identified with the space of all column (respectively, row) matrices, then $H^{*}$ is identified with the space of all row (respectively, column) matrices).
This example used the standard inner product, which is the map but if a different inner product is used, such as where is any Hermitian positive-definite matrix, or if a different orthonormal basis is used then the transformation matrices, and thus also the above formulas, will be different.
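A hedged numerical sketch of this finite-dimensional example (Python/NumPy assumed; the matrices below are arbitrary illustrations): for the standard inner product the representing vector is the conjugate transpose of the functional's row matrix, and for a weighted inner product $\langle x, y\rangle_M = \overline{y}^{\mathrm{T}} M x$ with $M$ Hermitian positive-definite the representing vector changes (the formula $M^{-1}\overline{A}^{\mathrm{T}}$ used below is derived for this sketch, not taken from the article), which illustrates the remark that a different inner product leads to different formulas.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
inner = lambda x, y: np.vdot(y, x)            # standard inner product, mathematics convention

A = rng.normal(size=n) + 1j * rng.normal(size=n)   # row matrix of the functional phi(x) = A x
phi = lambda x: A @ x

z = np.conj(A)                                 # conjugate transpose of the 1-by-n row matrix
x = rng.normal(size=n) + 1j * rng.normal(size=n)
assert np.isclose(phi(x), inner(x, z))         # phi is represented by z

# With a weighted inner product <x, y>_M = conj(y)^T M x (M Hermitian positive definite),
# the representing vector becomes M^{-1} conj(A)^T instead.
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
M = B.conj().T @ B + n * np.eye(n)             # Hermitian positive definite
inner_M = lambda x, y: np.conj(y) @ (M @ x)
z_M = np.linalg.solve(M, np.conj(A))
assert np.isclose(phi(x), inner_M(x, z_M))
```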
Relationship with the associated real Hilbert space
Assume that is a complex Hilbert space with inner product
When the Hilbert space is reinterpreted as a real Hilbert space then it will be denoted by where the (real) inner-product on is the real part of 's inner product; that is:
The norm on induced by is equal to the original norm on and the continuous dual space of is the set of all -valued bounded -linear functionals on (see the article about the polarization identity for additional details about this relationship).
Let and denote the real and imaginary parts of a linear functional so that
The formula expressing a linear functional in terms of its real part is
where for all
It follows that and that if and only if
It can also be shown that where and are the usual operator norms.
In particular, a linear functional is bounded if and only if its real part is bounded.
Representing a functional and its real part
The Riesz representation of a continuous linear function on a complex Hilbert space is equal to the Riesz representation of its real part on its associated real Hilbert space.
Explicitly, let and as above, let be the Riesz representation of obtained in so it is the unique vector that satisfies for all
The real part of is a continuous real linear functional on and so the Riesz representation theorem may be applied to and the associated real Hilbert space to produce its Riesz representation, which will be denoted by
That is, is the unique vector in that satisfies for all
The conclusion is
This follows from the main theorem because and if then
and consequently, if then which shows that
Moreover, being a real number implies that
In other words, in the theorem and constructions above, if the complex Hilbert space is replaced with its real Hilbert space counterpart and if $\varphi$ is replaced with its real part, then the representing vector is unchanged. This means that the vector obtained by using the real Hilbert space and the real linear functional is equal to the vector obtained by using the original complex Hilbert space and the original complex linear functional (with identical norm values as well).
Furthermore, if $\varphi \neq 0$ then $f_{\varphi}$ is perpendicular to the kernel of $\operatorname{Re} \varphi$ with respect to the real inner product, where the kernel of $\varphi$ is a proper subspace of the kernel of its real part. Assume now that $\varphi \neq 0.$
Then $\ker \varphi$ is a proper subset of $\ker (\operatorname{Re} \varphi).$ The vector subspace $\ker \varphi$ has real codimension $1$ in $\ker (\operatorname{Re} \varphi),$ while $\ker (\operatorname{Re} \varphi)$ has real codimension $1$ in the real Hilbert space. That is, $f_{\varphi}$ is perpendicular to $\ker (\operatorname{Re} \varphi)$ with respect to the real inner product.
Canonical injections into the dual and anti-dual
Induced linear map into anti-dual
The map defined by placing into the coordinate of the inner product and letting the variable vary over the coordinate results in an functional:
This map is an element of which is the continuous anti-dual space of
The is the operator
which is also an injective isometry.
The Fundamental theorem of Hilbert spaces, which is related to Riesz representation theorem, states that this map is surjective (and thus bijective). Consequently, every antilinear functional on can be written (uniquely) in this form.
If is the canonical linear bijective isometry that was defined above, then the following equality holds:
Extending the bra–ket notation to bras and kets
Let be a Hilbert space and as before, let
Let
which is a bijective antilinear isometry that satisfies
Bras
Given a vector let denote the continuous linear functional ; that is,
so that this functional is defined by This map was denoted by earlier in this article.
The assignment is just the isometric antilinear isomorphism which is why holds for all and all scalars
The result of plugging some given into the functional is the scalar which may be denoted by
Bra of a linear functional
Given a continuous linear functional let denote the vector ; that is,
The assignment is just the isometric antilinear isomorphism which is why holds for all and all scalars
The defining condition of the vector is the technically correct but unsightly equality
which is why the notation is used in place of With this notation, the defining condition becomes
Kets
For any given vector the notation is used to denote ; that is,
The assignment is just the identity map which is why holds for all and all scalars
The notation and is used in place of and respectively. As expected, and really is just the scalar
Adjoints and transposes
Let be a continuous linear operator between Hilbert spaces and As before, let and
Denote by
the usual bijective antilinear isometries that satisfy:
Definition of the adjoint
For every the scalar-valued map on defined by
is a continuous linear functional on and so by the Riesz representation theorem, there exists a unique vector in denoted by such that or equivalently, such that
The assignment thus induces a function called the of whose defining condition is
The adjoint is necessarily a continuous (equivalently, a bounded) linear operator.
If is finite dimensional with the standard inner product and if is the transformation matrix of with respect to the standard orthonormal basis then 's conjugate transpose is the transformation matrix of the adjoint
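A brief numerical illustration of this statement (Python/NumPy assumed; the matrices and dimensions are arbitrary examples): the conjugate transpose satisfies the defining condition of the adjoint for the standard inner products.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 5
inner = lambda x, y: np.vdot(y, x)            # mathematics convention, as above

A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))   # a linear map C^n -> C^m
A_star = A.conj().T                                           # its conjugate transpose, C^m -> C^n

x = rng.normal(size=n) + 1j * rng.normal(size=n)
y = rng.normal(size=m) + 1j * rng.normal(size=m)

# Defining condition of the adjoint: <A x, y> = <x, A* y>.
assert np.isclose(inner(A @ x, y), inner(x, A_star @ y))
```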
Adjoints are transposes
It is also possible to define the transpose or algebraic adjoint of $A,$ which is the map ${}^{t}A : Z^{*} \to H^{*}$ defined by sending a continuous linear functional $\psi \in Z^{*}$ to ${}^{t}A(\psi) := \psi \circ A,$
where the composition $\psi \circ A$ is always a continuous linear functional on $H,$ and it satisfies $\|A\| = \|{}^{t}A\|$ (this is true more generally, when $H$ and $Z$ are merely normed spaces).
So for example, if then sends the continuous linear functional (defined on by ) to the continuous linear functional (defined on by );
using bra-ket notation, this can be written as where the juxtaposition of with on the right hand side denotes function composition:
The adjoint is actually just to the transpose when the Riesz representation theorem is used to identify with and with
Explicitly, the relationship between the adjoint and transpose is:
which can be rewritten as:
Alternatively, the value of the left and right hand sides of () at any given can be rewritten in terms of the inner products as:
so that holds if and only if holds; but the equality on the right holds by definition of
The defining condition of can also be written
if bra-ket notation is used.
Descriptions of self-adjoint, normal, and unitary operators
Assume and let
Let be a continuous (that is, bounded) linear operator.
Whether or not is self-adjoint, normal, or unitary depends entirely on whether or not satisfies certain defining conditions related to its adjoint, which was shown by () to essentially be just the transpose
Because the transpose of is a map between continuous linear functionals, these defining conditions can consequently be re-expressed entirely in terms of linear functionals, as the remainder of this subsection will now describe in detail.
The linear functionals that are involved are the simplest possible continuous linear functionals on that can be defined entirely in terms of the inner product on and some given vector
Specifically, these are and where
Self-adjoint operators
A continuous linear operator is called self-adjoint if it is equal to its own adjoint; that is, if Using (), this happens if and only if:
where this equality can be rewritten in the following two equivalent forms:
Unraveling notation and definitions produces the following characterization of self-adjoint operators in terms of the aforementioned continuous linear functionals: is self-adjoint if and only if for all the linear functional is equal to the linear functional ; that is, if and only if
where if bra-ket notation is used, this is
Normal operators
A continuous linear operator is called normal if which happens if and only if for all
Using () and unraveling notation and definitions produces the following characterization of normal operators in terms of inner products of continuous linear functionals: is a normal operator if and only if
where the left hand side is also equal to
The left hand side of this characterization involves only linear functionals of the first form, while the right hand side involves only linear functionals of the second form (defined as above).
So in plain English, the characterization says that an operator is normal when the inner product of any two linear functionals of the first form is equal to the inner product of the corresponding two functionals of the second form (using the same vectors for both forms).
In other words, if it happens to be the case (and when is injective or self-adjoint, it is) that the assignment of linear functionals is well-defined (or alternatively, if is well-defined) where ranges over then is a normal operator if and only if this assignment preserves the inner product on
The fact that every self-adjoint bounded linear operator is normal follows readily by direct substitution of into either side of
This same fact also follows immediately from the direct substitution of the equalities () into either side of ().
Alternatively, for a complex Hilbert space, the continuous linear operator is a normal operator if and only if for every which happens if and only if
Unitary operators
An invertible bounded linear operator is said to be unitary if its inverse is its adjoint:
By using (), this is seen to be equivalent to
Unraveling notation and definitions, it follows that is unitary if and only if
The fact that a bounded invertible linear operator is unitary if and only if (or equivalently, ) produces another (well-known) characterization: an invertible bounded linear map is unitary if and only if
Because is invertible (and so in particular a bijection), this is also true of the transpose This fact also allows the vector in the above characterizations to be replaced with or thereby producing many more equalities. Similarly, can be replaced with or
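These characterizations are straightforward to check numerically. A minimal sketch (Python/NumPy assumed; the unitary matrix is generated only for illustration, as the Q factor of a QR decomposition):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
inner = lambda x, y: np.vdot(y, x)

# A unitary matrix on C^4.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

x = rng.normal(size=n) + 1j * rng.normal(size=n)
y = rng.normal(size=n) + 1j * rng.normal(size=n)

assert np.allclose(Q.conj().T @ Q, np.eye(n))                  # the adjoint is the inverse
assert np.isclose(inner(Q @ x, Q @ y), inner(x, y))            # inner products are preserved
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))    # norms are preserved
```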
See also
Citations
Notes
Proofs
Bibliography
P. Halmos Measure Theory, D. van Nostrand and Co., 1950.
P. Halmos, A Hilbert Space Problem Book, Springer, New York 1982 (problem 3 contains version for vector spaces with coordinate systems).
Walter Rudin, Real and Complex Analysis, McGraw-Hill, 1966, .
Articles containing proofs
Duality theories
Hilbert spaces
Integral representations
Linear functionals
Theorems in functional analysis | Riesz representation theorem | Physics,Mathematics | 4,726 |
704,266 | https://en.wikipedia.org/wiki/Dicofol | Dicofol is an insecticide, an organochlorine that is chemically related to DDT. Dicofol is a miticide that is very effective against spider mite. Its production and use is banned internationally under the Stockholm Convention.
One of the intermediates used in its production is DDT. This has caused criticism by many environmentalists; however, the World Health Organization classifies dicofol as a Level II, "moderately hazardous" pesticide. It is known to be harmful to aquatic animals, and can cause eggshell thinning in various species of birds.
Difference between dicofol and DDT
Dicofol is structurally similar to DDT. It differs from DDT by the replacement of the hydrogen (H) on C-1 by a hydroxyl (OH) functional group. One of the intermediates used in its production is DDT.
Chemistry
Dicofol is usually synthesized from technical DDT. During the synthesis, DDT is first chlorinated to an intermediate, Cl-DDT, followed by hydrolyzing to dicofol. After the synthesis reaction, DDT and Cl-DDT may remain in the dicofol product as impurities.
Formula: C14H9Cl5O
Chemical names: 2,2,2-Trichloro-1,1-bis(4-chlorophenyl)ethanol
Appearance: Pure dicofol is a white crystalline solid. Technical dicofol is a red-brown or amber viscous liquid with an odor like fresh-cut hay.
Solubility: It is stable under cool and dry conditions, is practically insoluble in water but soluble in organic solvents. Solubility: 0.8 mg/L (25 °C) in water.
Melting Point: 78.5 - 79.5 °C for pure dicofol, 50 °C for technical dicofol
Vapor Pressure: Negligible at room temperature
Molecular Weight: 370.49 g/mol
Partition Coefficient: 4.2788
Adsorption Coefficient: 5000 (estimated)
Impurities
Manufacturing-use dicofol products contain a number of DDT analogs as manufacturing impurities. These include the o,p' and p,p' isomers of DDT, DDE, DDD, and a substance called extra-chlorine DDT or Cl-DDT
Use and formulations
Foliar spray on agricultural crops and ornamentals, and in or around agricultural and domestic buildings for mite control. It is formulated as emulsifiable concentrates, wettable powders, dusts, ready-to-use liquids, and aerosol sprays. In many countries, dicofol is also used in combination with other pesticides such as the organophosphates, methyl parathion, and dimethoate.
Producers
Dicofol first appeared in the scientific literature in 1956, and was introduced onto the market by the US-based multinational company Rohm & Haas in 1957. Other current manufacturers include Hindustan Insecticides Limited (India), Lainco (Spain), and ADAMA Agricultural Solutions (Formerly Makhteshim-Agan) (Israel). It is sold under a number of trade names, including Hilfol, Kelthane and Acarin.
In 1986, the US Environmental Protection Agency (EPA) temporarily canceled the use of dicofol because relatively high levels of DDT contamination were ending up in the final product. Modern processes can produce technical grade dicofol that contains less than 0.1% DDT.
Estimated usage as a pesticide
The Pesticide Survey, USA 1987 through 1996, reports that the total annual domestic agricultural usage of dicofol averaged about 860,000 pounds active ingredient (a.i.) for about treated. Most of the area is treated with 2 pounds a.i. or less per application, and the average acre is treated with about 1.2 pounds a.i. per year (1.3 kg/(ha·yr)). Fruits tend to have the highest application rates.
The largest markets for dicofol in terms of total pounds active ingredient are cotton (over 50%) and citrus (almost 30%). Although only about 4% of the cotton acres grown are treated with dicofol, over 60% of all crop acres treated with dicofol are cotton acres. The remaining usage is primarily on other fruits and vegetables. Most of the US usage is in California and Florida.
Effects
The California Department of Food and Agriculture has one of the world's most extensive incident reporting systems. Between 1982 and 1992, 38 incidents involving dicofol alone were reported: systemic 19 (50%); skin 10 (26%); eye 8 (21%); and eye/skin 1 (3%). The number of incidents per 1,000 applications for all illnesses ranged from 0.11 to 0.21.
The US National Pesticides Telecommunications Network database collected reports from 1984 to 1991 showing 91 human, 9 animal and 31 other poisoning incidents for a total of 131 incidents involving dicofol from 571 phone calls made to the hotline.
An assessment of dicofol by the UK Pesticides Safety Directorate in 1996 found that residues in apples, pears, blackcurrants and strawberries were higher than expected.
There is no established US maximum contaminant level (MCL) or health advisory levels for residues of dicofol in drinking water. In the European Union, the maximum level is the same for all active ingredients 0.1 mg/L.
In 1990, the use of dicofol was suspended in Sweden for environmental reasons. In Switzerland its use is permitted for research purposes only. Throughout the European Union, dicofol containing more than 1 g/kg (0.1%) of DDT or DDT-related compounds cannot be used.
The 1998 US EPA review of dicofol recommended a number of changes in order to protect the environment and wildlife. Dicofol applications are limited to no more than one per year. In the UK, the maximum number of treatments permitted is two per year for apples and hops, and two per crop for strawberries, protected crops and tomatoes.
In 1980, an accident at the US Tower Chemical Company led to a release of dicofol into Lake Apopka in Florida. Ten years later Dr Guillette of Florida University linked this incident to a subsequent decline in the fertility of alligators in the lake. The US EPA is still not clear whether dicofol is involved in the reproductive failure of the alligator population following the accidental spill.
Toxicity
It is classified by the World Health Organisation as a Class II, 'moderately hazardous' pesticide.
The acute oral LD50 of dicofol in rats is 587 mg/kg.
Dicofol is a nerve poison. The exact mode of action is not known, although in mammals it causes hyperstimulation of nerve transmission along nerve axons (cells). This effect is thought to be related to the inhibition of certain enzymes in the central nervous system.
Symptoms of ingestion and/or respiratory exposure include nausea, dizziness, weakness and vomiting; dermal exposure may cause skin irritation or a rash; and eye contact may cause conjunctivitis. Poisoning may affect the liver, kidneys or the central nervous system. Very severe cases may result in convulsions, coma, or death from respiratory failure.
Dicofol can be stored in fatty tissue. Intense activity or starvation may mobilize the chemical, resulting in the reappearance of toxic symptoms long after actual exposure.
Chronic effects
Tests on laboratory animals show that the primary effects after long term exposure to dicofol include increases in liver weight and enzyme induction in the rat, mouse and dog.
There are also effects relating to altered adrenocorticoid metabolism (part of the hormonal system). In the rat hormonal changes were accompanied by the histological observation of vacuolation (empty cavities) of the cells of the adrenal cortex.
Carcinogenicity
The US EPA has classified dicofol as a Group C, possible human carcinogen. There is limited evidence that it may cause cancer in laboratory animals, but there is no evidence that it causes cancer in humans. This classification was based on animal test data that showed an increase in the incidence of liver adenomas (benign tumour) and combined liver adenomas and carcinomas in male mice.
Reproductive effects
Reproductive effects in rat offspring have been observed only at doses high enough to also cause toxic effects on the livers, ovaries, and feeding behavior of the parents. Rats fed diets containing dicofol through two generations exhibited adverse effects on the survival and/or growth of newborns at 6.25 and 12.5 mg/kg/day
Teratogenic effects: No teratogenic effects are observed when rats were given up to 25 mg/kg/day on days 6 through 15 of pregnancy
Mutagenic effects: Laboratory tests have shown that dicofol is not mutagenic
Endocrine disruption: Evidence for dicofol to cause endocrine disruption is suggestive, but not definitive
A 2007 study by the California Department of Public Health found that women in the first eight weeks of pregnancy who live near farm fields sprayed with dicofol and the related organochloride pesticide endosulfan are several times more likely to give birth to children with autism. These results are highly preliminary due to the small number of women and children involved and lack of evidence from other studies.
Metabolism
Dicofol is converted in rats to the metabolites 4,4'-dichlorobenzophenone and 4,4'-dichlorodicofol.
Studies of the metabolism of dicofol in rats, mice, and rabbits have shown that ingested dicofol is rapidly absorbed, distributed primarily to fat, and readily eliminated in feces. When mice were given a single oral dose of 25 mg/kg dicofol, approximately 60% of the dose was eliminated within 96 hours, 20% in the urine, and 40% in the feces. Concentrations in body tissues peaked between 24 and 48 hours following dosing, with 10% of the dose found in fat, followed by the liver and other tissues. Levels in tissues other than fat declined sharply after the peak.
Ecological effects
Effects on birds: Dicofol is slightly toxic to birds. The 8-day dietary LC50 is 3010 ppm in bobwhite quail, 1418 ppm in Japanese quail, and 2126 ppm in ring-necked pheasant. Eggshell thinning and reduced offspring survival were noted in the mallard duck, American kestrel, ring dove, and screech owl.
Effects on aquatic organisms: Dicofol is highly toxic to fish, aquatic invertebrates, and algae. The LC50 is 0.12 mg/L in rainbow trout, 0.37 mg/L in sheepshead minnow, 0.06 mg/L in mysid shrimp, 0.015 mg/L in shell oysters, and 0.075 mg/L in algae.
Effects on other organisms: Dicofol is not toxic to bees.
Degradation
Breakdown in soil and groundwater: Dicofol is moderately persistent in soil, with a half-life of 60 days. Dicofol is susceptible to chemical breakdown in moist soils. It is also subject to degradation by UV light. In a silty loam soil, its photodegradation half-life was 30 days. Under anaerobic soil conditions, the half-life for dicofol was 15.9 days.
Dicofol is practically insoluble in water and adsorbs very strongly to soil particles. It is therefore nearly immobile in soils and unlikely to infiltrate groundwater. Even in sandy soil, dicofol was not detected below the top in standard soil column tests. It is possible for dicofol to enter surface waters when soil erosion occurs.
Breakdown in water: Dicofol degrades in water or when exposed to UV light at pH levels above 7. Its half-life in solution at pH 5 is 47 to 85 days. Because of its very high absorption coefficient (Koc), dicofol is expected to adsorb to sediment when released into open waters.
Breakdown in vegetation: In a number of studies, dicofol residues on treated plant tissues have been shown to remain unchanged for up to 2 years.
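For orientation only, the half-lives reported above can be converted into an expected remaining fraction under the usual assumption of first-order decay; this simple model is an illustrative sketch (Python assumed), not a statement from the cited studies.

```python
def fraction_remaining(days, half_life_days):
    # First-order decay: N(t) / N(0) = (1/2) ** (t / half-life)
    return 0.5 ** (days / half_life_days)

# With the 60-day soil half-life, about 1.5% of the applied dicofol would remain after one year.
print(round(fraction_remaining(365, 60), 4))   # 0.0147
```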
See also
DDT
Methoxychlor
References
External links
Chemical Information about Dicofol
Organochloride insecticides
Endocrine disruptors
Acaricides
Trichloromethyl compounds
Halohydrins
Persistent organic pollutants under the Stockholm Convention
4-Chlorophenyl compounds | Dicofol | Chemistry | 2,687 |
380,338 | https://en.wikipedia.org/wiki/Polysome | A polyribosome (or polysome or ergosome) is a group of ribosomes bound to an mRNA molecule like “beads” on a “thread”. It consists of a complex of an mRNA molecule and two or more ribosomes that act to translate mRNA instructions into polypeptides. Originally coined "ergosomes" in 1963, they were further characterized by Jonathan Warner, Paul M. Knopf, and Alex Rich.
Polysomes are formed during the elongation phase when ribosomes and elongation factors synthesize the encoded polypeptide. Multiple ribosomes move along the coding region of mRNA, creating a polysome. The ability of multiple ribosomes to function on an mRNA molecule explains the limited abundance of mRNA in the cell. Polyribosome structure differs between prokaryotic polysomes, eukaryotic polysomes, and membrane bound polysomes. Polysome activity can be used to measure the level of gene expression through a technique called polysomal profiling.
Structure
Electron microscopy technologies such as staining, metal shadowing, and ultra-thin cell sections were the original methods to determine polysome structure. The development of cryo-electron microscopy techniques has allowed for increased resolution of the image, leading to a more precise method to determine structure. Different structural configurations of polyribosomes could reflect a variety in translation of mRNAs. An investigation of the ratio of polyribosomal shape elucidated that a high number of circular and zigzag polysomes were found after several rounds of translation. A longer period of translation caused the formation of densely packed 3-D helical polysomes. Different cells produce different structures of polysomes.
Prokaryotic
Bacterial polysomes have been found to form double-row structures. In this conformation, the ribosomes are contacting each other through smaller subunits. These double row structures generally have a “sinusoidal” (zigzag) or 3-D helical path. In the “sinusoidal” path, there are two types of contact between the small subunits- “top-to-top” or “top-to-bottom”. In the 3-D helical path, only “top-to-top” contact is observed.
Polysomes are present in archaea, but not much is known about the structure.
Eukaryotic
In cells
In situ (in-cell) studies have shown that eukaryotic polysomes exhibit linear configurations. Densely packed 3-D helices and planar double-row polysomes were found with variable packing including “top-to-top” contacts similar to prokaryotic polysomes. Eukaryotic 3-D polyribosomes are similar to prokaryotic 3-D polyribosomes in that they are “densely packed left-handed helices with four ribosomes per turn”. This dense packing can determine their function as regulators of translation, with 3-D polyribosomes being found in sarcoma cells using fluorescence microscopy.
Cell free
Atomic force microscopy used in in vitro studies have shown that circular eukaryotic polysomes can be formed by free polyadenylated mRNA in the presence of initiation factor eIF4E bound to the 5’ cap and PABP bound to the 3’-poly(A) tail. However, this interaction between cap and the poly(A)-tail mediated by a protein complex is not a unique way of circularizing polysomal mRNA. It has been found that topologically circular polyribosomes can be successfully formed in the translational system with mRNA with no cap and no poly(A) tail as well as a capped mRNA without a 3’-poly(A) tail.
Membrane-bound
Polyribosomes bound to membranes are restricted by a 2 dimensional space given by the membrane surface. The restriction of inter-ribosomal contacts causes a round-shape configuration that arranges ribosomes along the mRNA so that the entry and exit sites form a smooth pathway. Each ribosome is turned relative to the previous one, resembling a planar spiral.
Profiling
Polysomal profiling is a technique that uses cycloheximide to arrest translation and a sucrose gradient to separate the resulting cell extract by centrifugation. Ribosome-associated mRNAs migrate faster than free mRNAs and polysome associated mRNAs migrate faster than ribosome associated mRNAs. Several peaks corresponding to mRNA are revealed by the measurement of total protein across the gradient. The corresponding mRNA is associated with increasing numbers of ribosomes as polysomes. The presence of mRNA across the gradient reveals the translation of the mRNA. Polysomal profiling is optimally applied to cultured cells and tissues to track the translational status of an identified mRNA as well as measure ribosome density. This technique has been used to compare the translational status of mRNAs in different cell types.
For example, polysomal profiling was used in a study to investigate the effect of vesicular stomatitis virus (VSV) in mammalian cells. The data from polysomal profiling showed that host mRNAs are outcompeted by viral mRNAs for polysomes, therefore decreasing the translation of host mRNA and increasing the translation of viral mRNA.
References
External links
Theoretical and experimental structure of polysome
Protein biosynthesis | Polysome | Chemistry | 1,136 |
47,486,581 | https://en.wikipedia.org/wiki/PT%20Puppis | PT Puppis (PT Pup) is a star in the constellation Puppis. Anamarija Stankov confirmed this star as a Beta Cephei variable. Analysis of its spectrum and allowing for extinction gives a mass 7.94 times that of the Sun, a surface temperature of 19,400 K and luminosity of 6405 Suns.
The star was discovered to be variable by Janet Rountree Lesh and P. R. Wesselius in 1979. It was given its variable star designation in 1981.
References
Puppis
Puppis, PT
Beta Cephei variables
2928
037036
061068
BD-19 1967
B-type bright giants | PT Puppis | Astronomy | 137 |
7,148,302 | https://en.wikipedia.org/wiki/Ptolemy%27s%20inequality | In Euclidean geometry, Ptolemy's inequality relates the six distances determined by four points in the plane or in a higher-dimensional space. It states that, for any four points , , , and , the following inequality holds:
It is named after the Greek astronomer and mathematician Ptolemy.
The four points can be ordered in any of three distinct ways (counting reversals as not distinct) to form three different quadrilaterals, for each of which the sum of the products of opposite sides is at least as large as the product of the diagonals. Thus, the three product terms in the inequality can be additively permuted to put any one of them on the right side of the inequality, so the three products of opposite sides or of diagonals of any one of the quadrilaterals must obey the triangle inequality.
As a special case, Ptolemy's theorem states that the inequality becomes an equality when the four points lie in cyclic order on a circle.
The other case of equality occurs when the four points are collinear in order. The inequality does not generalize from Euclidean spaces to arbitrary metric spaces. The spaces where it remains valid are called the Ptolemaic spaces; they include the inner product spaces, Hadamard spaces, and shortest path distances on Ptolemaic graphs.
Assumptions and derivation
Ptolemy's inequality is often stated for a special case, in which the four points are the vertices of a convex quadrilateral, given in cyclic order. However, the theorem applies more generally to any four points; it is not required that the quadrilateral they form be convex, simple, or even planar.
For points in the plane, Ptolemy's inequality can be derived from the triangle inequality by an inversion centered at one of the four points. Alternatively, it can be derived by interpreting the four points as complex numbers, using the complex number identity
$(A - B)(C - D) + (A - D)(B - C) = (A - C)(B - D)$
to construct a triangle whose side lengths are the products of sides of the given quadrilateral, and applying the triangle inequality to this triangle. One can also view the points as belonging to the complex projective line, express the inequality in the form that the absolute values of two cross-ratios of the points sum to at least one, and deduce this from the fact that the cross-ratios themselves add to exactly one.
A proof of the inequality for points in three-dimensional space can be reduced to the planar case, by observing that for any non-planar quadrilateral, it is possible to rotate one of the points around the diagonal until the quadrilateral becomes planar, increasing the other diagonal's length and keeping the other five distances constant. In spaces of higher dimension than three, any four points lie in a three-dimensional subspace, and the same three-dimensional proof can be used.
Four concyclic points
For four points in order around a circle, Ptolemy's inequality becomes an equality, known as Ptolemy's theorem:
$\overline{AB} \cdot \overline{CD} + \overline{BC} \cdot \overline{DA} = \overline{AC} \cdot \overline{BD}.$
In the inversion-based proof of Ptolemy's inequality, transforming four co-circular points by an inversion centered at one of them causes the other three to become collinear, so the triangle equality for these three points (from which Ptolemy's inequality may be derived) also becomes an equality. For any other four points, Ptolemy's inequality is strict.
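A small numerical check of both statements (an illustrative sketch assuming Python/NumPy; the points are arbitrary): for generic points the inequality is strict, while for four points in cyclic order on a circle it becomes an equality.

```python
import numpy as np

rng = np.random.default_rng(0)
dist = lambda p, q: np.linalg.norm(p - q)

def ptolemy_slack(A, B, C, D):
    # AB*CD + BC*DA - AC*BD, which Ptolemy's inequality asserts is non-negative
    return dist(A, B) * dist(C, D) + dist(B, C) * dist(D, A) - dist(A, C) * dist(B, D)

# Four random points in the plane: the inequality is strict (positive slack) almost surely.
assert ptolemy_slack(*rng.normal(size=(4, 2))) > 0

# Four points in cyclic order on the unit circle: equality, i.e. Ptolemy's theorem.
angles = np.sort(rng.uniform(0, 2 * np.pi, size=4))
on_circle = np.column_stack([np.cos(angles), np.sin(angles)])
assert abs(ptolemy_slack(*on_circle)) < 1e-9
```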
In three dimensions
Four non-coplanar points $A$, $B$, $C$, and $D$ in 3D form a tetrahedron. In this case, the strict inequality holds:
$\overline{AB} \cdot \overline{CD} + \overline{BC} \cdot \overline{DA} > \overline{AC} \cdot \overline{BD}.$
In general metric spaces
Ptolemy's inequality holds more generally in any inner product space, and whenever it is true for a real normed vector space, that space must be an inner product space.
For other types of metric space, the inequality may or may not be valid. A space in which it holds is called Ptolemaic. For instance, consider the four-vertex cycle graph, shown in the figure, with all edge lengths equal to 1. The sum of the products of opposite sides is 2. However, diagonally opposite vertices are at distance 2 from each other, so the product of the diagonals is 4, bigger than the sum of products of sides. Therefore, the shortest path distances in this graph are not Ptolemaic. The graphs in which the distances obey Ptolemy's inequality are called the Ptolemaic graphs and have a restricted structure compared to arbitrary graphs; in particular, they disallow induced cycles of length greater than three, such as the one shown.
The Ptolemaic spaces include all CAT(0) spaces and in particular all Hadamard spaces. If a complete Riemannian manifold is Ptolemaic, it is necessarily a Hadamard space.
Inner product spaces
Suppose that $\| \cdot \|$ is a norm on a vector space $X.$ Then this norm satisfies Ptolemy's inequality:
$\|x - y\| \, \|z\| + \|y - z\| \, \|x\| \geq \|x - z\| \, \|y\|$ for all vectors $x, y, z,$
if and only if there exists an inner product $\langle \cdot, \cdot \rangle$ on $X$ such that $\|x\|^{2} = \langle x, x \rangle$ for all vectors $x.$ Another necessary and sufficient condition for there to exist such an inner product is for the norm to satisfy the parallelogram law:
$\|x + y\|^{2} + \|x - y\|^{2} = 2 \|x\|^{2} + 2 \|y\|^{2}.$
If this is the case then this inner product will be unique and it can be defined in terms of the norm by using the polarization identity.
See also
References
Geometric inequalities
Ptolemy | Ptolemy's inequality | Mathematics | 1,053 |
12,742,172 | https://en.wikipedia.org/wiki/Strassmann%27s%20theorem | In mathematics, Strassmann's theorem is a result in field theory. It states that, for suitable fields, suitable formal power series with coefficients in the valuation ring of the field have only finitely many zeroes.
History
It was introduced by .
Statement of the theorem
Let K be a field with a non-Archimedean absolute value | · | and let R be the valuation ring of K. Let f(x) be a formal power series with coefficients in R other than the zero series, with coefficients a_n converging to zero with respect to | · |. Then f(x) has only finitely many zeroes in R. More precisely, the number of zeros is at most N, where N is the largest index with |a_N| = max_n |a_n|.
As a corollary, there is no analogue of Euler's identity, e^(2πi) = 1, in C_p, the field of p-adic complex numbers.
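A small sketch of how the bound N can be computed (Python assumed; the prime and the finitely many coefficients below are illustrative only, standing in for a series whose remaining coefficients all have strictly smaller p-adic absolute value):

```python
def vp(a, p):
    """p-adic valuation of a nonzero integer a."""
    v = 0
    while a % p == 0:
        a //= p
        v += 1
    return v

def p_adic_abs(a, p):
    """p-adic absolute value |a|_p = p^(-v_p(a)) of an integer a."""
    return 0.0 if a == 0 else float(p) ** (-vp(a, p))

def strassmann_bound(coeffs, p):
    """Largest index N with |a_N|_p = max_n |a_n|_p, over the coefficients supplied."""
    abs_vals = [p_adic_abs(a, p) for a in coeffs]
    biggest = max(abs_vals)
    return max(i for i, v in enumerate(abs_vals) if v == biggest)

# f(x) = 3 + 9x + x^2 + 5x^3 + 25x^4 + ...  viewed over the 5-adic integers
print(strassmann_bound([3, 9, 1, 5, 25], p=5))   # 2, so f has at most 2 zeros in Z_5
```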
See also
p-adic exponential function
References
External links
Field (mathematics)
Theorems in abstract algebra | Strassmann's theorem | Mathematics | 222 |
72,703,400 | https://en.wikipedia.org/wiki/Amanita%20galactica | Amanita galactica is a species of agaric fungus in the family Amanitaceae, first described by Giuliana Furci and Bryn Dentinger in 2020. The species was discovered in the Andes of southern Chile, living at the base of trees such as Nothofagus and Araucaria araucana. The epithet galactica was given by Furci, and was inspired by the bright white spots on the black cap that reminded her of a galaxy dotted with stars.
References
External links
galactica
Fungi of South America
Fungi described in 2020
Fungus species | Amanita galactica | Biology | 116 |
54,072,128 | https://en.wikipedia.org/wiki/Gallery%20%28theatre%29 | The gallery of a theatre or church is a form of balcony, an elevated platform generally supported by columns or brackets, which projects from an interior wall, in order to accommodate additional audience.
It may specifically refer to the highest such platform, and carries the cheapest seats in theatres.
References
See also
Peanut gallery
Parts of a theatre | Gallery (theatre) | Technology | 67 |
8,284,364 | https://en.wikipedia.org/wiki/Colitose | Colitose is a mannose-derived 3,6-dideoxysugar produced by certain bacteria. It is a constituent of the lipopolysaccharide. It is the enantiomer of abequose.
Biological role
Colitose is found in the O-antigen of certain Gram-negative bacteria such as Escherichia coli, Yersinia pseudotuberculosis, Salmonella enterica, Vibrio cholerae, and in marine bacteria such as Pseudoalteromonas sp. The sugar was first isolated in 1958, and subsequently was enzymatically synthesized in 1962.
Biosynthesis
The biosynthesis of colitose begins with ColE, a mannose-1-phosphate guanylyltransferase that catalyzes the addition of a GMP moiety to mannose, yielding GDP-mannose. In the next step, ColB, an NADP-dependent short-chain dehydrogenase-reductase enzyme, catalyzes the oxidation at C-4 and the removal of the hydroxyl group at C-6. The resulting product, GDP-4-keto-6-deoxymannose, then reacts with the PLP-dependent enzyme GDP-4-keto-6-deoxymannose-3-dehydratase (ColD), which removes the hydroxyl at C-3 in a manner similar to that of serine dehydratase. In the final step, the product of ColD, GDP-4-keto-3,6-dideoxymannose, reacts with ColC, which reduces the ketone functionality at C-4 back to an alcohol and inverts the configuration about C-5.
The resulting product, GDP-L-colitose, is then incorporated into the O-antigen by glycosyltransferases and O-antigen processing proteins. Further reactions join the O-antigen to the core polysaccharide to form the full lipopolysaccharide.
GDP-4-keto-6-deoxymannose-3-dehydratase (ColD)
ColD is a PLP-dependent enzyme responsible for the removal of the C-3' hydroxyl group during the biosynthesis of GDP-colitose. It is a product of the Wbdk or ColD genes in Escherichia coli O55 or Salmonella enterica, respectively, and is commonly referred to as ColD.
Usage in biotechnology
Although the sugar is relatively rare, recent work with glycosyltransferases suggests that obscure sugars such as colitose can be incorporated into existing natural-product scaffolds, thereby constructing novel and potentially therapeutic compounds.
References
Deoxy sugars
Aldohexoses | Colitose | Chemistry | 581 |