id (int64: 580–79M) | url (string, lengths 31–175) | text (string, lengths 9–245k) | source (string, lengths 1–109) | categories (string, 160 classes) | token_count (int64: 3–51.8k) |
|---|---|---|---|---|---|
1,194,130 | https://en.wikipedia.org/wiki/Picrotoxin | Picrotoxin, also known as cocculin, is a poisonous crystalline plant compound. It was first isolated by the French pharmacist and chemist Pierre François Guillaume Boullay (1777–1869) in 1812. The name "picrotoxin" is a combination of the Greek words "picros" (bitter) and "toxicon" (poison). A mixture of two different compounds, picrotoxin occurs naturally in the fruit of the Anamirta cocculus plant, although it can also be synthesized chemically.
Due to its interactions with the inhibitory neurotransmitter GABA, picrotoxin acts as a stimulant and convulsant. It mainly impacts the central nervous system, causing seizures and respiratory paralysis in high enough doses.
Chemical structure and synthesis
Picrotoxin is an equimolar mixture of two compounds, picrotoxinin (C15H16O6; CAS# 17617-45-7) and picrotin (C15H18O7; CAS# 21416-53-5). Of the two compounds, picrotin is less active.
Picrotoxin occurs naturally in the fruit of the Anamirta cocculus, a climbing plant from India and other parts of Southeast Asia. The plant is known for its large stems of white wood and sweetly-scented flowers. It produces small stone fruits, Cocculus indicus, which are typically dried.
Currently, there are as many as five total syntheses of picrotoxinin — one of which was published as recently as June 2020. Most syntheses use carvone as a stereochemical template.
In 1988, researchers from Tohoku University in Japan completed a stereoselective total synthesis of both (−)-picrotoxinin and (−)-picrotin, beginning with (+)-5β-hydroxycarvone. In this synthesis, eight asymmetric centers were stereoselectively prepared on a cis-fused hydrindane ring system using several different reactions: a Claisen rearrangement to introduce the quaternary center, an organoselenium-mediated reduction of an epoxy ketone, and a stereospecific construction of a glycidic ester.
The June 2020 synthesis instead employed the quick formation of the polycyclic core, followed by the manipulation of oxidation states of key carbon atoms in order to produce the target molecule.
Some research suggests that picrotoxin can be made by the cyclofunctionalization of cycloalkenyl systems. Under kinetically controlled conditions, this process generally results in exo cyclization and forms bridged ring systems like those found in picrotoxin.
Several techniques have been developed to isolate picrotoxinin and picrotin individually. Reaction with the nearby cis alcohol is the key obstruction, and can be inhibited by pretreatment (protection) with trifluoroacetic anhydride in pyridine:
Picrotoxin has also been used as a starting material in several synthetic processes, including the creation of dl-picrotoxadiene, which retains certain features of the picrotoxin skeleton.
Mechanism of action
Some crustacean muscle fibers have excitatory and inhibitory innervation. Picrotoxin blocks inhibition. Two different but related theories have been proposed for the mechanism by which picrotoxin acts on synapses. One theory is that it acts as a non-competitive channel blocker for GABAA receptor chloride channels, specifically the gamma-aminobutyric acid-activated chloride ionophore. A 2006 study found that, while not structurally similar to GABA, picrotoxin prevents ion flow through the chloride channels activated by GABA. It likely acts within the ion channels themselves, rather than at GABA recognition sites. Because it inhibits channels activated by GABA, GABA-enhancing drugs like barbiturates and benzodiazepines can be used as an antidote.
Other research suggests that the toxin acts instead as a non-competitive antagonist, or inhibitor, for GABA receptors. A study by Newland and Cull-Candy found that, in high enough concentrations, picrotoxin reduced the amplitude of GABA currents. Their data indicated that it was unlikely that picrotoxin acted simply as a voltage-gated channel blocker, although it did reduce the frequency of channel openings. Rather, they found that picrotoxin “binds preferentially to an agonist bound form of the receptor.” This means that, even in the presence of low concentrations of picrotoxin, the response of neurons to GABA is reduced.
Toxicity
Picrotoxin acts as a central nervous system and respiratory stimulant. It is extremely toxic to fish and humans, as well as rodents and other mammals. According to the Register of Toxic Effects of Chemical Substances, the LDLo, or lowest reported lethal dose, is 0.357 mg/kg. Symptoms of picrotoxin poisoning include coughing, difficulty breathing, headache, dizziness, confusion, gastro-intestinal distress, nausea or vomiting, and changes in heart rate and blood pressure. Although especially dangerous if swallowed, systemic effects can also result from inhalation or absorption into the blood stream through lesions in the skin. Picrotoxin also acts as a convulsant. In larger doses, it has been found to induce clonic seizures or cardiac dysrhythmias, with especially high doses ultimately proving fatal, typically due to respiratory paralysis.
Clinical applications and other uses
Due to its toxicity, picrotoxin is now most commonly used as a research tool. However, due to its antagonist effect on GABA receptors, it has been used as a central nervous system stimulant. It was also previously used as an antidote for poisoning by CNS depressants, especially barbiturates.
Although not commonly used, picrotoxin is effective as both a pesticide and a pediculicide. In the 19th century, it was used in the preparation of hard multum, which was added to beer to make it more intoxicating. This preparation has since been outlawed.
Despite its potential toxicity to mammals in large enough doses, picrotoxin is also sometimes used as a performance enhancer in horses. It is classified as an illegal "Class I substance" by the American Quarter Horse Association. Substances that are classified as “Class I” are likely to affect performance and have no therapeutic use in equine medicine. In 2010, quarter horse trainer Robert Dimitt was suspended after his horse, Stoli Signature, tested positive for the substance. As with humans, it is used to counteract barbiturate poisoning.
See also
GABAA receptor negative allosteric modulator
GABAA receptor § Ligands
References
Further reading
GABAA receptor negative allosteric modulators
GABAA-rho receptor negative allosteric modulators
Glycine receptor antagonists
Convulsants
Lactones
Epoxides
Chloride channel blockers
Neurotoxins
Plant toxins | Picrotoxin | Chemistry | 1,471 |
57,024,626 | https://en.wikipedia.org/wiki/Belgrade%20Design%20Week | Belgrade Design Week is a one-week design festival held once a year in Belgrade, Serbia. First held in 2005, the festival is organized every spring and is the largest design initiative in South-Eastern Europe. The festival covers architecture, design, fashion, publishing, and new media, as well as related fields like communications, marketing, advertising, and arts management. The event includes design labs, competitions, and presentations from international speakers like Karim Rashid and Daniel Libeskind.
During the rest of the year, the organization runs related projects such as promoting local designers, holding a branding competition for the Serbian Center for the Promotion of Science, and participating as a partner in the Human Cities project. The organization also spearheaded the Belgrade 2020 project, promoting the city as a candidate for the European Capital of Culture.
History
First held in 2005, the Belgrade Design Week festival was founded by architect and brand consultant Jovan Jelovac. The conference costs about half a million euros to produce and is funded mostly through commercial and media sponsorships. The event was created to educate and inspire the country's citizens through design. In 2014, the president of Serbia, Tomislav Nikolić, was the patron for the event and noted how design can contribute to the country's economy.
In 2006, designer Karim Rashid was the ambassador for the second Belgrade Design Week and spoke at the conference. Subsequently, he headed several design projects in Serbia's capital city. Rashid stated that he is fascinated by Belgrade and Eastern Europe in general, seeing it "as the next upcoming place - everyone is psyched and enthusiastic about the rebuilding of these poetic, romantic, artistic, and very intellectual places."
After attending the design festival in 2008, architect Daniel Libeskind was awarded a billion dollar contract to redevelop the Belgrade waterfront.
Festival events
The principal site of the festival changes annually with various run-down or abandoned buildings being renovated for the event. Past locations have included a palace, the bombed-out hotel Jugoslavija, an abandoned department store, the closed contemporary art museum, and an old factory. Every year, the conference also focuses on a common thread, for example, in 2013 the theme was the square shape.
Approximately thirty international speakers present each year and cover a wide range of subjects. Designers and artists who have spoken at the event include Konstantin Grcic, Aylin Langreuter, Christophe de la Fontaine, Hella Jongerius, Daan Roosegaarde, Ross Lovegrove, Christophe Pillet, Sacha Lakic, and Patricia Urquiola.
The events at the festival include presentations, workshops, design competitions and exhibits from designers (such as Israeli Eilon Armon and Swiss architect duo Lang/Baumann).
See also
Belgrade Fashion Week
The Applied Artists and Designers Association of Serbia
References
External links
Official website
http://www.gaf.ni.ac.rs/_news/_info/conf11/BDW2011_Brochure.pdf
https://eastwest.eu/attachments/article/1137/169_173_lucicINGL.pdf
https://issuu.com/advserb/docs/dizajnpark_magazine_2011
https://www.designboom.com/art/nikola-bozovic-car-parts-phantasms-and-phalluses-belgrade-design-week-10-17-2014/
https://www.designboom.com/design/tom-strala-belgrade-design-week-2014-11-17-2014/
Fashion festivals
International conferences
Fashion events in Serbia
Events in Belgrade
Industrial design awards
Festivals in Serbia
Design events
June
Spring (season) events
Annual events in Serbia
Spring (season) events in Serbia
Serbian fashion | Belgrade Design Week | Engineering | 781 |
47,602,799 | https://en.wikipedia.org/wiki/1-Hydroxypyrene | 1-Hydroxypyrene is a human metabolite. It can be found in urine of outdoor workers exposed to air pollution.
Biochemistry
Experiments in pigs show that urinary 1-hydroxypyrene is a metabolite of pyrene when pyrene is given orally.
A Mycobacterium sp. strain isolated from mangrove sediments produced 1-hydroxypyrene during the degradation of pyrene.
Relationship with smoking
Highly significant differences and dose-response relationships with regard to cigarettes smoked per day were found for 2-, 3- and 4-hydroxyphenanthrene and 1-hydroxypyrene, but not for 1-hydroxyphenanthrene.
References
Human metabolites
Air pollution
Recreational drug metabolites
Smoking
Pyrenes
Hydroxyarenes | 1-Hydroxypyrene | Chemistry | 168 |
7,820,748 | https://en.wikipedia.org/wiki/Jugaad | or (in Hindustani: जुगाड़ / جگاڑ) is a non-conventional, frugal innovation, in Indian subcontinent. It also includes innovative fixes or a simple workarounds, solutions that bend the rules, or resources that can be used in such a way. It is considered creative to make existing things work and create new things with meager resources.
Jugaad is increasingly accepted as a management technique and is recognized all over the world as a form of frugal innovation. Companies in Southeast Asia are adopting jugaad as a practice to reduce research and development costs. Jugaad also applies to any kind of creative and out-of-the-box thinking or life hacks that maximize resources for a company and its stakeholders.
According to author and professor Jaideep Prabhu, jugaad is an "important way out of the current economic crisis in developed economies and also holds important lessons for emerging economies".
Improvised vehicles
Jugaad can also refer to a homemade or locally made vehicle in India, Pakistan and Bangladesh. These are made by local mechanics using wooden planks, metal sheets and parts taken from different machines and vehicles.
One type of jugaad is a quadricycle, a vehicle made of wooden planks and old SUV parts, variously known as kuddukka and jugaad in Northern India. However, jugaad is also used as a term for any low-cost vehicle which typically costs around Rs 50,000 (US$). Jugaads may be powered by a diesel engine originally intended to power agricultural irrigation pumps. They are known for poor brakes, and cannot go faster than about 60 km/h (37 mph). The vehicle often carries more than 20 people at a time in remote locations and poor road conditions.
Though no statistical data is available, it is reported that there are a number of instances of failing brakes, requiring a passenger to jump off and manually apply a wooden block as a brake. As part of research for his 2013 book, Innovation and a Global Knowledge Economy in India, Thomas Birtchnell, a lecturer in Sustainable Communities at the University of Wollongong, Australia, found that of 2,139 cases of road traffic casualties over 72 hours at J N Medical College hospital in Aligarh, 13.88% of pedestrian casualties were due to jugaads. Minister of Road Transport and Highways Pon Radhakrishnan has stated that jugaads do not conform to the specifications of a motor vehicle under the Motor Vehicles Act, 1988. These vehicles hence do not have any vehicle registration plate and are not registered with the Regional Transport Office (RTO). As a result, no road tax is paid on them, nor does any official count of such vehicles exist.
Jugaad vehicles are not officially recognized as road-worthy, and despite a few proposals to regulate them, vote-bank politics have trumped safety concerns. The improvised vehicles have become rather popular as a means to transport all manner of burdens, from lumber to steel rods to school children. For safety reasons, the Government of India has officially banned jugaad vehicles.
Another type of jugaad, called a bike-jugaad or motorcycle-jugaad (a motorcycle, moped or scooter modified into a motorized trike), is used in the northern states of India, especially Punjab.
Another type of jugaad, called the jugaad rickshaw, consists of WWII-era Harley-Davidson motorcycles modified into motorized trikes, which were earlier used in New Delhi.
A variant of the jugaad vehicle in the Tamil Nadu state of Southern India has a name that roughly translates to 'fish bed vehicle', because it originated among local fishermen who needed a quick and cheap system to transport fish. It is a motorized tri-wheeler (derived from the non-motorized variant) with a heavy-duty suspension and a motorcycle engine, typically recycled from Czech Yezdi or Enfield Bullet vehicles. Its origins are typical of other jugaad innovations: dead fish are typically considered unhygienic, and vehicles that carry them cannot typically be used to carry anything else. Similar vehicles can be found throughout much of Southeast Asia.
Another variant of the jugaad, a rickshaw made by modifying a motorcycle into a tri-wheeler with truck wheels in the rear, is used in the Gujarat state of India.
A jugaad variant in Pakistan is a motorcycle made into a motorized trike, known by a name meaning "moon vehicle" or named after the Chinese company Jinan Qingqi, which first introduced these to the market.
Today, the jugaad is one of the most cost-effective transportation solutions for rural Indians, Pakistanis, and Bangladeshis.
See also
Transport in India
Transport in Pakistan
Transport in Bangladesh
Kludge
Notes
Further reading
Hindi words and phrases
Indian slang | Jugaad | Engineering | 919 |
701,207 | https://en.wikipedia.org/wiki/Radix | In a positional numeral system, the radix (:radices) or base is the number of unique digits, including the digit zero, used to represent numbers. For example, for the decimal system (the most common system in use today) the radix is ten, because it uses the ten digits from 0 through 9.
In any standard positional numeral system, a number is conventionally written as (x)y with x as the string of digits and y as its base, although for base ten the subscript is usually assumed (and omitted, together with the pair of parentheses), as it is the most common way to express value. For example, (100)10 is equivalent to 100 (the decimal system is implied in the latter) and represents the number one hundred, while (100)2 (in the binary system with base 2) represents the number four.
Etymology
Radix is a Latin word for "root". Root can be considered a synonym for base, in the arithmetical sense.
In numeral systems
Generally, in a system with radix b (b > 1), a string of digits d1d2...dn denotes the number d1·b^(n−1) + d2·b^(n−2) + ... + dn·b^0, where 0 ≤ di < b. In contrast to decimal, or radix 10, which has a ones' place, tens' place, hundreds' place, and so on, radix b would have a ones' place, then a b^1s' place, a b^2s' place, etc.
For example, if b = 12, a string of digits such as 59A (where the letter "A" represents the value of ten) would represent the value 5 × 12^2 + 9 × 12 + 10 = 838 in base 10.
Commonly used numeral systems include:
The octal and hexadecimal systems are often used in computing because of their ease as shorthand for binary. Every hexadecimal digit corresponds to a sequence of four binary digits, since sixteen is the fourth power of two; for example, hexadecimal 78 is binary 1111000. Similarly, every octal digit corresponds to a unique sequence of three binary digits, since eight is the cube of two.
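The positional rule above can be checked directly in Python. The helper `to_decimal` below is an illustrative sketch of Horner's rule, not a standard function; Python's built-in int() also accepts digit strings in bases 2 through 36.

```python
def to_decimal(digits: str, base: int) -> int:
    """Evaluate a digit string by Horner's rule: value = ((d1*b + d2)*b + d3)..."""
    value = 0
    for ch in digits:
        d = int(ch, 36)  # maps '0'-'9' and 'A'-'Z' to 0-35
        assert d < base, f"digit {ch!r} out of range for base {base}"
        value = value * base + d
    return value

# The base-12 example from the text: "59A" with A = ten.
assert to_decimal("59A", 12) == 5 * 12**2 + 9 * 12 + 10 == 838
# The built-in parser agrees:
assert int("59A", 12) == 838
# One hex digit corresponds to four binary digits: 78 (hex) = 1111000 (binary) = 120.
assert to_decimal("78", 16) == to_decimal("1111000", 2) == 120
```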
This representation is unique. Let b be a positive integer greater than 1. Then every positive integer a can be expressed uniquely in the form
a = rm·b^m + rm−1·b^(m−1) + ... + r1·b + r0,
where m is a nonnegative integer and the r's are integers such that
0 < rm < b and 0 ≤ ri < b for i = 0, 1, ... , m − 1.
Radices are usually natural numbers. However, other positional systems are possible, for example, golden ratio base (whose radix is a non-integer algebraic number), and negative base (whose radix is negative).
A negative base allows the representation of negative numbers without the use of a minus sign. For example, let b = −10. Then a string of digits such as 19 denotes the (decimal) number 1 × (−10) + 9 = −1.
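A minimal sketch of negative-base conversion, using repeated division while keeping every remainder nonnegative (the helper `to_negabase` is hypothetical, not a standard function):

```python
def to_negabase(n: int, base: int) -> str:
    """Digit string of n in a negative base (base <= -2); no minus sign needed."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, base)
        if r < 0:        # force the remainder into the range 0..|base|-1
            r -= base    # i.e. r += |base|
            n += 1       # compensate in the quotient
        digits.append(str(r))
    return "".join(reversed(digits))

# The example from the text: "19" in base -10 denotes 1*(-10) + 9 = -1.
assert to_negabase(-1, -10) == "19"
# Negabinary: 100 in base -2 denotes 1*4 + 0 + 0 = 4.
assert to_negabase(4, -2) == "100"
```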
See also
Base (exponentiation)
Mixed radix
Polynomial
Radix economy
Radix sort
Non-standard positional numeral systems
List of numeral systems
Notes
References
External links
MathWorld entry on base
Elementary mathematics
Numeral systems | Radix | Mathematics | 644 |
4,611,642 | https://en.wikipedia.org/wiki/Nonene | Nonene is an alkene with the molecular formula C9H18. Many structural isomers are possible, depending on the location of the C=C double bond and the branching of the other parts of the molecule. Industrially, the most important nonenes are trimers of propene: Tripropylene. This mixture of branched nonenes is used in the alkylation of phenol to produce nonylphenol, a precursor to detergents, which are also controversial pollutants.
Linear nonenes
References
Alkenes | Nonene | Chemistry | 115 |
37,302,456 | https://en.wikipedia.org/wiki/Remote%20Skylights | Remote Skylights are optical systems capable of providing natural light to unlit locations. An arrangement of parabolic reflectors and optical fiber cables, transport natural sunlight to areas that would otherwise be dark or be lit artificially.
Remote skylights are composed chiefly of a solar collection dish, a "heliotube" and a distribution dish. The collection and distribution dishes are both parabolic reflectors. The collection dish is connected to a heliostat, a mechanism which tracks the transit of the sun across the sky, so as to maximize the intensity of light falling upon it. The heliotube is a fiber light tube, a bundle of optical fibers that channel the collected sunlight from the collection dish to the distribution dish. Unlike a typical skylight, the heliotube allows the two dishes to be in different places.
Remote Skylights were invented by RAAD studio in order to provide natural illumination to the proposed Lowline underground park.
Benefits
Remote Skylights provide two key advantages over artificial illumination:
The transported light contains the frequencies necessary for photosynthesis. (Though it is reported that harmful UV rays are filtered out.)
No power is required to sustain the illumination. This means that (after construction) no harmful greenhouse gases are produced.
See also
Light tube
References
Fiber optics
Lighting
Solar architecture
Energy-saving lighting
Sustainable building
Windows | Remote Skylights | Engineering | 271 |
24,962,428 | https://en.wikipedia.org/wiki/Rhamnogalacturonan-II | Rhamnogalacturonan-II (RG-II) is a complex polysaccharide component of pectin that is found in the primary cell walls of dicotyledonous and monocotyledonous plants and gymnosperms. It is supposed to be crucial for the plant cell wall integrity. RG-II is also likely to be present in the walls of some lower plants (ferns, horsetails, and lycopods). Its global structure is conserved across vascular plants, albeit a number of variations within the RGII side chains have been observed between different plants. RG-II is composed of 12 different glycosyl residues including D-rhamnose, D-apiose, D-galactose, L-galactose, Kdo, D-galacturonic acid, L-arabinose, D-xylose, and L-aceric acid, linked together by at least 21 distinct glycosidic linkages. Some resides are further modified via methylation and acetylation. It moreover supports borate mediated cross-linking between different RGII side-chain apiosyl residues. The backbone consists of a linear polymer of alpha-1,4-linked D-galactopyranosiduronic acid. RG-II can be isolated from different sources, such as apple juice and red wine.
The gut bacterium Bacteroides thetaiotaomicron has a polysaccharide utilization locus that contains enzymes that allows deconstruction of rhamnogalacturonan-II, cleaving all but 1 of its 21 distinct glycosidic linkages.
See also
Pectin
References
Polysaccharides
Wine chemistry | Rhamnogalacturonan-II | Chemistry | 364 |
12,237,604 | https://en.wikipedia.org/wiki/Ufer%20ground | The Ufer ground is an electrical earth grounding method developed during World War II. It uses a concrete-encased electrode to improve grounding in dry areas. The technique is used in construction of concrete foundations.
History
During World War II, the U.S. Army required a grounding system for bomb storage vaults near Tucson and Flagstaff, Arizona. Conventional grounding systems did not work well in this location since the desert terrain had no water table and very little rainfall. The extremely dry soil conditions would have required hundreds of feet of rods to be driven into the earth to create a low impedance ground to protect the buildings from lightning strikes.
In 1942, Herbert G. Ufer was a consultant working for the U.S. Army. Ufer was given the task of finding a lower cost and more practical alternative to traditional copper rod grounds for these dry locations. Ufer discovered that concrete had better conductivity than most types of soil. Ufer then developed a grounding scheme based on encasing the grounding conductors in concrete. This method proved to be very effective, and was implemented throughout the Arizona test site.
After the war, Ufer continued to test his grounding method, and his results were published in a paper presented at the IEEE Western Appliance Technical Conference in 1963. The use of concrete enclosed grounding conductors was added to the U.S. National Electrical Code (NEC) in 1968. It was not required to be used if a water pipe or other grounding electrode was present. In 1978, the NEC allowed 1/2 inch rebar to be used as a grounding electrode [NEC 250.52(A)(3)]. The NEC refers to this type of ground as a "Concrete Encased Electrode" (CEE) instead of using the name Ufer ground.
Over the years, the term "Ufer ground" has become synonymous with the use of any type of concrete enclosed grounding conductor, whether it conforms to Ufer's original grounding scheme or not.
Construction
Concrete is naturally basic (it has a high pH). Ufer observed that this gives it a ready supply of ions, so it provides a better electrical ground than almost any type of soil. Ufer also found that the soil around the concrete became "doped", and its subsequent rise in pH caused the overall impedance of the soil itself to be reduced. The concrete enclosure also increases the surface area of the connection between the grounding conductor and the surrounding soil, which further helps to reduce the overall impedance of the connection.
Ufer's original grounding scheme used copper encased in concrete. However, the high pH of concrete often causes the copper to chip and flake. For this reason, steel is often used instead of copper.
When homes are built on concrete slabs, it is common practice to bring one end of the rebar up out of the concrete at a convenient location to make an easy connection point for the grounding electrode.
Ufer grounds, when present, are preferred over the use of grounding rods. In some areas (like Des Moines, Iowa) Ufer grounds are required for all residential and commercial buildings. The conductivity of the soil usually determines if Ufer grounds are required in any particular area.
An Ufer ground of specified minimum dimensions is recognized by the U.S. National Electrical Code as a grounding electrode. The grounding conductors must have sufficient cover by the concrete to prevent damage when dissipating high-current lightning strikes.
A disadvantage of Ufer grounds is that the moisture in the concrete can flash into steam during a lightning strike or similar high energy fault condition. This can crack the surrounding concrete and damage the building foundation.
References
External links
A new look at the Ufer ground system
Electrical safety
Electrical wiring
Foundations (buildings and structures) | Ufer ground | Physics,Engineering | 775 |
36,969,542 | https://en.wikipedia.org/wiki/36%20G.%20Doradus | 36 G. Doradus (HD 40409) is a suspected astrometric binary star system in the southern constellation of Dorado. It is a faint system but visible to the naked eye with an apparent visual magnitude of 4.65. Based upon an annual parallax shift of , it is located 89 light years away from the Sun. It is moving further away with a heliocentric radial velocity of +25 km/s. The system has a relatively high proper motion, traversing the celestial sphere at the rate of per year along a position angle of 14.51°.
Based on the stellar classification of K2 III assigned by Gray et al. (2006), the visible component is a K-type giant star. In contrast, Keenan and McNeil (1989) gave it a somewhat less evolved classification of K2 III–IV. It is about eight billion years old with 28% more mass than the Sun, and has expanded to 4.76 times the Sun's radius. The star is radiating 10 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,807 K.
References
K-type giants
Astrometric binaries
Dorado
Durchmusterung objects
Gliese and GJ objects
Doradus, 36
040409
027890
2102 | 36 G. Doradus | Astronomy | 272 |
1,895,162 | https://en.wikipedia.org/wiki/SARA%20%28computer%29 | SARA (SAABs räkneautomat, SAAB's calculating machine) was developed by SAAB when the capacity of BESK was insufficient for their needs. The project was started the fall of 1955 and became operational in 1956. SARA was built using the drawings of BESK that SAAB had bought for a symbolic sum and with the help of people who had worked with BESK, but didn't stay when Matematikmaskinnämnden decided that there would be no second generation. SARA wasn't used much, but it became the start of DataSAAB and the development of CK37 and D2.
References
IAS architecture computers
Science and technology in Sweden | SARA (computer) | Technology | 140 |
21,042,117 | https://en.wikipedia.org/wiki/Brooks%27%20theorem | In graph theory, Brooks' theorem states a relationship between the maximum degree of a graph and its chromatic number. According to the theorem, in a connected graph in which every vertex has at most Δ neighbors, the vertices can be colored with only Δ colors, except for two cases, complete graphs and cycle graphs of odd length, which require Δ + 1 colors.
The theorem is named after R. Leonard Brooks, who published a proof of it in 1941. A coloring with the number of colors described by Brooks' theorem is sometimes called a Brooks coloring or a Δ-coloring.
Formal statement
For any connected undirected graph G with maximum degree Δ,
the chromatic number of G is at most Δ, unless G is a complete graph or an odd cycle, in which case the chromatic number is Δ + 1.
Proof
László Lovász gives a simplified proof of Brooks' theorem. If the graph is not biconnected, its biconnected components may be colored separately and then the colorings combined. If the graph has a vertex v with degree less than Δ, then a greedy coloring algorithm that colors vertices farther from v before closer ones uses at most Δ colors. This is because at the time that each vertex other than v is colored, at least one of its neighbors (the one on a shortest path to v) is uncolored, so it has fewer than Δ colored neighbors and has a free color. When the algorithm reaches v, its small number of neighbors allows it to be colored. Therefore, the most difficult case of the proof concerns biconnected Δ-regular graphs with Δ ≥ 3. In this case, Lovász shows that one can find a spanning tree such that two nonadjacent neighbors u and w of the root v are leaves in the tree. A greedy coloring starting from u and w and processing the remaining vertices of the spanning tree in bottom-up order, ending at v, uses at most Δ colors. For, when every vertex other than v is colored, it has an uncolored parent, so its already-colored neighbors cannot use up all the free colors, while at v the two neighbors u and w have equal colors so again a free color remains for v itself.
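The greedy, farthest-from-v-first step of this argument can be sketched in Python. This illustrates only the distance-ordering part of the proof, not the spanning-tree trick for regular graphs; `brooks_order` and the 3-cube example are illustrative assumptions, not from the source.

```python
from collections import deque

def greedy_coloring(adj: dict, order: list) -> dict:
    """Color vertices in the given order with the smallest color unused by neighbors."""
    color = {}
    for u in order:
        used = {color[w] for w in adj[u] if w in color}
        c = 0
        while c in used:
            c += 1
        color[u] = c
    return color

def brooks_order(adj: dict, v) -> list:
    """BFS distances from v, then vertices sorted farthest-first, so every vertex
    except v still has an uncolored neighbor (toward v) when it is colored."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return sorted(dist, key=dist.get, reverse=True)

# A connected 3-regular graph that is neither complete nor an odd cycle: the 3-cube.
cube = {i: [i ^ 1, i ^ 2, i ^ 4] for i in range(8)}
coloring = greedy_coloring(cube, brooks_order(cube, 0))
assert all(coloring[u] != coloring[w] for u in cube for w in cube[u])  # proper coloring
assert max(coloring.values()) + 1 <= 3                                 # at most Δ colors
```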
Extensions
A more general version of the theorem applies to list coloring: given any connected undirected graph with maximum degree Δ that is neither a clique nor an odd cycle, and a list of Δ colors for each vertex, it is possible to choose a color for each vertex from its list so that no two adjacent vertices have the same color. In other words, the list chromatic number of a connected undirected graph G never exceeds Δ, unless G is a clique or an odd cycle.
For certain graphs, even fewer than Δ colors may be needed. Δ − 1 colors suffice if and only if the given graph has no Δ-clique, provided Δ is large enough. For triangle-free graphs, or more generally graphs in which the neighborhood of every vertex is sufficiently sparse, O(Δ/log Δ) colors suffice.
The degree of a graph also appears in upper bounds for other types of coloring; for edge coloring, the result that the chromatic index is at most Δ + 1 is Vizing's theorem. An extension of Brooks' theorem to total coloring, stating that the total chromatic number is at most Δ + 2, has been conjectured by Mehdi Behzad and Vizing. The Hajnal–Szemerédi theorem on equitable coloring states that any graph has a (Δ + 1)-coloring in which the sizes of any two color classes differ by at most one.
Algorithms
A Δ-coloring, or even a Δ-list-coloring, of a degree-Δ graph may be found in linear time. Efficient algorithms are also known for finding Brooks colorings in parallel and distributed models of computation.
Notes
References
External links
Graph coloring
Theorems in graph theory | Brooks' theorem | Mathematics | 810 |
7,155,145 | https://en.wikipedia.org/wiki/Dagger%20compact%20category | In category theory, a branch of mathematics, dagger compact categories (or dagger compact closed categories) first appeared in 1989 in the work of Sergio Doplicher and John E. Roberts on the reconstruction of compact topological groups from their category of finite-dimensional continuous unitary representations (that is, Tannakian categories). They also appeared in the work of John Baez and James Dolan as an instance of semistrict k-tuply monoidal n-categories, which describe general topological quantum field theories, for n = 1 and k = 3. They are a fundamental structure in Samson Abramsky and Bob Coecke's categorical quantum mechanics.
Overview
Dagger compact categories can be used to express and verify some fundamental quantum information protocols, namely teleportation, logic gate teleportation and entanglement swapping. Standard notions such as unitarity, inner product, trace, Choi–Jamiolkowski duality, complete positivity, Bell states and many others are captured by the language of dagger compact categories. All this follows from the completeness theorem below. Categorical quantum mechanics takes dagger compact categories as a background structure relative to which other quantum mechanical notions, such as quantum observables and their complementarity, can be abstractly defined. This forms the basis for a high-level approach to quantum information processing.
Formal definition
A dagger compact category is a dagger symmetric monoidal category which is also compact closed, together with a compatibility condition tying the dagger structure to the compact structure. Specifically, the dagger is used to connect the unit to the counit, so that, for all in , the following diagram commutes:
To summarize all of these points:
A category is closed if it has an internal hom functor; that is, if the hom-set of morphisms between two objects of the category is an object of the category itself (rather than of Set).
A category is monoidal if it is equipped with a bifunctor that is associative, natural and has left and right identities obeying certain coherence conditions.
A monoidal category is symmetric monoidal, if, for every pair A, B of objects in C, there is an isomorphism that is natural in both A and B, and, again, obeys certain coherence conditions (see symmetric monoidal category for details).
A monoidal category is compact closed, if every object has a dual object . Categories with dual objects are equipped with two morphisms, the unit and the counit , which satisfy certain coherence or yanking conditions.
A category is a dagger category if it is equipped with an involutive functor that is the identity on objects, but maps morphisms to their adjoints.
A monoidal category is dagger symmetric if it is a dagger category and is symmetric, and has coherence conditions that make the various functors natural.
A dagger compact category is then a category that is each of the above, and, in addition, has a condition to relate the dagger structure to the compact structure. This is done by relating the unit to the counit via the dagger:
shown in the commuting diagram above. In the category FdHilb of finite-dimensional Hilbert spaces, this last condition can be understood as defining the dagger (the Hermitian conjugate) as the transpose of the complex conjugate.
Examples
The following categories are dagger compact.
The category FdHilb of finite-dimensional Hilbert spaces, with linear maps as morphisms. The monoidal product is the usual tensor product, and the dagger is the Hermitian conjugate (conjugate transpose).
The category Rel of sets and relations. The product is the Cartesian product, and the dagger here is the converse (opposite) relation.
The category of finitely generated projective modules over a commutative ring. The dagger here is just the matrix transpose.
The category nCob of cobordisms. Here, the n-dimensional cobordisms are the morphisms, the disjoint union is the tensor, and the reversal of the objects (closed manifolds) is the dagger. A topological quantum field theory can be defined as a functor from nCob into FdHilb.
The category Span(C) of spans for any category C with finite limits.
Infinite-dimensional Hilbert spaces are not dagger compact, and are described by dagger symmetric monoidal categories.
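The first two examples can be checked concretely. In the sketch below (illustrative code assuming NumPy, not from the source), the dagger in FdHilb is the conjugate transpose and the dagger in Rel is the converse relation; in both cases the dagger is involutive and reverses composition:

```python
import numpy as np

# FdHilb: the dagger of a linear map is its conjugate transpose.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))  # A : C^2 -> C^3
B = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))  # B : C^3 -> C^4
dagger_fd = lambda M: M.conj().T

assert np.allclose(dagger_fd(dagger_fd(A)), A)                     # involutive
assert np.allclose(dagger_fd(B @ A), dagger_fd(A) @ dagger_fd(B))  # reverses composition

# Rel: the dagger of a relation is its converse.
def compose(R, S):
    """(x, z) is in the composite iff some y links (x, y) in R to (y, z) in S."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def converse(R):
    return {(y, x) for (x, y) in R}

R = {(1, 'a'), (1, 'b'), (2, 'b')}
S = {('a', 'X'), ('b', 'Y')}

assert converse(converse(R)) == R
assert converse(compose(R, S)) == compose(converse(S), converse(R))
```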
Structural theorems
Selinger showed that dagger compact categories admit a Joyal–Street-style diagrammatic language and proved that dagger compact categories are complete with respect to finite-dimensional Hilbert spaces; i.e., an equational statement in the language of dagger compact categories holds if and only if it can be derived in the concrete category of finite-dimensional Hilbert spaces and linear maps. There is no analogous completeness result for Rel or nCob.
This completeness result implies that various theorems from Hilbert spaces extend to this category. For example, the no-cloning theorem implies that there is no universal cloning morphism. Completeness also implies more mundane features: dagger compact categories can be given a basis in the same way that a Hilbert space can have a basis. Operators can be decomposed in the basis; operators can have eigenvectors, etc. This is reviewed in the next section.
Basis
The completeness theorem implies that basic notions from Hilbert spaces carry over to any dagger compact category, although the typical language employed changes. The notion of a basis is given in terms of a coalgebra. Given an object A from a dagger compact category, a basis is a comonoid object . The two operations are a copying or comultiplication morphism δ: A → A ⊗ A that is cocommutative and coassociative, and a deleting operation or counit morphism ε: A → I. Together, these obey five axioms:
Comultiplicativity:
Coassociativity:
Cocommutativity:
Isometry:
Frobenius law:
To see that these relations define a basis of a vector space in the traditional sense, write the comultiplication and counit using bra–ket notation, and understanding that these are now linear operators acting on vectors in a Hilbert space H:
and
The only vectors that can satisfy the above five axioms must be orthogonal to one another; the counit then uniquely specifies the basis. The suggestive names copying and deleting for the comultiplication and counit operators come from the idea that the no-cloning theorem and no-deleting theorem state that the only vectors it is possible to copy or delete are orthogonal basis vectors.
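The "copying" intuition can be illustrated with a small NumPy sketch (mine, not from the source): the comultiplication that duplicates the standard basis of C² is a perfectly good linear map, but it copies only the basis vectors, not superpositions:

```python
import numpy as np

# Copy map delta on C^2 relative to the standard basis: delta|i> = |i> (x) |i>.
delta = np.zeros((4, 2))
delta[0, 0] = 1.0   # |0> -> |00>
delta[3, 1] = 1.0   # |1> -> |11>

e0 = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)  # a superposition, not a basis vector

# Basis vectors are genuinely copied ...
assert np.allclose(delta @ e0, np.kron(e0, e0))
# ... but a superposition is not copied: delta|+> is not |+> (x) |+>.
assert not np.allclose(delta @ plus, np.kron(plus, plus))
```

Applied to |+⟩ the map produces the Bell-like state (|00⟩ + |11⟩)/√2 rather than |+⟩⊗|+⟩, which is exactly the restriction the no-cloning theorem expresses.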
General results
Given the above definition of a basis, a number of results for Hilbert spaces can be stated for compact dagger categories. We list some of these below, taken from unless otherwise noted.
A basis can also be understood to correspond to an observable, in that a given observable factors on (orthogonal) basis vectors. That is, an observable is represented by an object A together with the two morphisms that define a basis: .
An eigenstate of the observable is any object for which
Eigenstates are orthogonal to one another.
An object is complementary to the observable if
(In quantum mechanics, a state vector is said to be complementary to an observable if any measurement result is equiprobable, viz. a spin eigenstate of Sx is equiprobable when measured in the basis Sz, and momentum eigenstates are equiprobable when measured in the position basis.)
Two observables and are complementary if
Complementary objects generate unitary transformations. That is,
is unitary if and only if is complementary to the observable
References
Monoidal categories
Dagger categories | Dagger compact category | Mathematics | 1,639 |
46,521,179 | https://en.wikipedia.org/wiki/Spoofing%20%28finance%29 | Spoofing is a disruptive algorithmic trading activity employed by traders to outpace other market participants and to manipulate markets. Spoofers feign interest in trading futures, stocks, and other products in financial markets creating an illusion of the demand and supply of the traded asset. In an order driven market, spoofers post a relatively large number of limit orders on one side of the limit order book to make other market participants believe that there is pressure to sell (limit orders are posted on the offer side of the book) or to buy (limit orders are posted on the bid side of the book) the asset.
Spoofing may cause prices to change because the market interprets the one-sided pressure in the limit order book as a shift in the balance of the number of investors who wish to purchase or sell the asset, which causes prices to increase (more buyers than sellers) or prices to decline (more sellers than buyers). Spoofers bid or offer with intent to cancel before the orders are filled. The flurry of activity around the buy or sell orders is intended to attract other traders to induce a particular market reaction. Spoofing can be a factor in the rise and fall of the price of shares and can be very profitable to the spoofer who can time buying and selling based on this manipulation.
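The mechanism can be made concrete with a toy order-book calculation (illustrative numbers and code, not from the source): a handful of large, cancel-intended bids is enough to swing the visible volume imbalance that other participants read as buying pressure:

```python
def book_imbalance(bids, asks):
    """Fraction of visible volume on the bid side of a limit order book.

    bids/asks are lists of (price, size) tuples. Values above 0.5 read
    as buying pressure; spoofing manufactures that signal with orders
    the spoofer intends to cancel before execution.
    """
    bid_vol = sum(size for _, size in bids)
    ask_vol = sum(size for _, size in asks)
    return bid_vol / (bid_vol + ask_vol)

bids = [(99.9, 10), (99.8, 12)]
asks = [(100.1, 11), (100.2, 9)]
baseline = book_imbalance(bids, asks)         # roughly balanced book

spoof_bids = bids + [(99.7, 50), (99.6, 50)]  # large bids meant to be cancelled
spoofed = book_imbalance(spoof_bids, asks)

assert abs(baseline - 22 / 42) < 1e-12        # about 0.52
assert spoofed > 0.8                          # 122 / 142, about 0.86
```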
Under the 2010 Dodd–Frank Act, spoofing is defined as "the illegal practice of bidding or offering with intent to cancel before execution." Spoofing can be used with layering algorithms and front-running, activities which are also illegal.
High-frequency trading, the primary form of algorithmic trading used in financial markets, is very profitable as it deals in high volumes of transactions. The five-year delay in arresting the lone spoofer Navinder Singh Sarao, accused of exacerbating the 2010 Flash Crash (one of the most turbulent periods in the history of financial markets), has placed regulatory and self-regulatory bodies such as the Commodity Futures Trading Commission (CFTC) and the Chicago Mercantile Exchange & Chicago Board of Trade under scrutiny. The CME was described as being in a "massively conflicted" position, as it makes huge profits from HFT and algorithmic trading.
Definition
In Australia, layering and spoofing in 2014 referred to the act of "submitting a genuine order on one side of the book and multiple orders at different prices on the other side of the book to give the impression of substantial supply/demand, with a view to sucking in other orders to hit the genuine order. After the genuine order trades, the multiple orders on the other side are rapidly withdrawn."
In a 2012 report Finansinspektionen (FI), the Swedish Financial Supervisory Authority defined spoofing/layering as "a strategy of placing orders that is intended to manipulate the price of an instrument, for example through a combination of buy and sell orders."
In the U.S. Department of Justice's April 21, 2015 complaint of market manipulation and fraud laid against Navinder Singh Sarao, dubbed the Hounslow day-trader, Sarao appeared "to have used this 188-and-289-lot spoofing technique in certain instances to intensify the manipulative effects of his dynamic layering technique...The purpose of these bogus orders is to trick other market participants and manipulate the product's market price." He employed the technique of dynamic layering, a form of market manipulation in which traders "place large sell orders for contracts" tied to the Standard & Poor's 500 Index. Sarao used his customized computer-trading program from 2009 onwards.
Milestone case against spoofing
In July 2013 the US Commodity Futures Trading Commission (CFTC) and Britain's Financial Conduct Authority (FCA) brought a milestone case against spoofing which represents the first Dodd-Frank Act application. A federal grand jury in Chicago indicted Panther Energy Trading and Michael Coscia, a high-frequency trader. In 2011 Coscia placed spoofed orders through CME Group Inc. and European futures markets with profits of almost $1.6 million. Coscia was charged with six counts of spoofing with each count carrying a maximum sentence of ten years in prison and a maximum fine of one million dollars. The illegal activity undertaken by Coscia and his firm took place in a six-week period from "August 8, 2011 through October 18, 2011 on CME Group’s Globex trading platform." They used a "computer algorithm that was designed to unlawfully place and quickly cancel orders in exchange-traded futures contracts." They placed a "relatively small order to sell futures that they did want to execute, which they quickly followed with several large buy orders at successively higher prices that they intended to cancel. By placing the large buy orders, Mr. Coscia and Panther sought to give the market the impression that there was significant buying interest, which suggested that prices would soon rise, raising the likelihood that other market participants would buy from the small order Coscia and Panther were then offering to sell."
Britain's FCA also fined Coscia and his firm approximately $900,000 for "taking advantage of the price movements generated by his layering strategy" relating to his market abuse activities on the ICE Futures Europe exchange. They earned US$279,920 in profits over the six-week period "at the expense of other market participants – primarily other High Frequency Traders or traders using algorithmic and/or automated systems."
Providence vs Wall Street
On 18 April 2014, Robbins Geller Rudman & Dowd LLP filed a class-action lawsuit on behalf of the city of Providence, Rhode Island in Federal Court in the Southern District of New York. The complaint in the high-frequency matter named "every major stock exchange in the U.S.", including the New York Stock Exchange, Nasdaq, Better Alternative Trading System (Bats), an electronic communication network (ECN), and Direct Edge, among others. The suit also names major Wall Street firms including but not limited to Goldman Sachs, Citigroup, JPMorgan and the Bank of America. High-frequency trading firms and hedge funds are also named in the lawsuit. The lawsuit claimed that, "For at least the last five years, the Defendants routinely engaged in at least the following manipulative, self-dealing and deceptive conduct," which included "spoofing – where the HFT Defendants send out orders with corresponding cancellations, often at the opening or closing of the stock market, in order to manipulate the market price of a security and/or induce a particular market reaction."
Dodd–Frank Wall Street Reform and Consumer Protection Act
CFTC's Enforcement Director, David Meister, explained the difference between legal and illegal use of algorithmic trading,
It is "against the law to spoof, or post requests to buy or sell futures, stocks and other products in financial markets without intending to actually follow through on those orders." Anti-spoofing statute is part of the 2010 Dodd-Frank Wall Street Reform and Consumer Protection Act passed on July 21, 2010. The Dodd-Frank brought significant changes to financial regulation in the United States. It made changes in the American financial regulatory environment that affect all federal financial regulatory agencies and almost every part of the nation's financial services industry.
Eric Moncada, another trader, is accused of spoofing in wheat futures markets and faces CFTC fines of $1.56 million.
2010 Flash Crash and the lone Hounslow day-trader
On April 21, 2015, five years after the incident, the U.S. Department of Justice laid "22 criminal counts, including fraud and market manipulation" against Navinder Singh Sarao, who became known as the Hounslow day-trader. Among the charges was the use of spoofing algorithms: just prior to the Flash Crash, he placed thousands of E-mini S&P 500 stock index futures contract orders. These orders, amounting to about "$200 million worth of bets that the market would fall", were "replaced or modified 19,000 times" before they were cancelled that afternoon. Spoofing, layering and front-running are now banned. The CFTC concluded that Sarao "was at least significantly responsible for the order imbalances" in the derivatives market which affected stock markets and exacerbated the flash crash. Sarao began his alleged market manipulation in 2009 with commercially available trading software whose code he modified "so he could rapidly place and cancel orders automatically." Sarao, a 36-year-old small-time trader, worked from his parents' modest semi-detached stucco house in Hounslow in suburban west London. Traders Magazine correspondent John Bates argued that by April 2015, traders could still manipulate and impact markets in spite of regulators' and banks' new, improved monitoring of automated trade systems. For years, Sarao denounced high-frequency traders, some of them billion-dollar organisations, who mass-manipulate the market by generating and retracting numerous buy and sell orders every millisecond ("quote stuffing"), which he witnessed when placing trades at the Chicago Mercantile Exchange (CME). Sarao claimed that he made his choices to buy and sell based on opportunity and intuition and did not consider himself to be one of the HFTs.
The 2010 Flash Crash was a United States trillion-dollar stock market crash, in which the "S&P 500, the Nasdaq 100, and the Russell 2000 collapsed and rebounded with extraordinary velocity." Dow Jones Industrial Average "experienced the biggest intraday point decline in its entire history," plunging 998.5 points (about 9%), most within minutes, only to recover a large part of the loss. A CFTC 2014 report described it as one of the most turbulent periods in the history of financial markets.
In 2011 the chief economist of the Bank of England — Andrew Haldane — delivered a famous speech entitled the "Race to Zero" at the International Economic Association Sixteenth World Congress in which he described how "equity prices of some of the world’s biggest companies were in freefall. They appeared to be in a race to zero. Peak to trough." At the time of the speech Haldane acknowledged that there were many theories about the cause of the Flash Crash but that academics, governments and financial experts remained "agog."
References
See also
Algorithmic trading
Complex event processing
Computational finance
Dark liquidity
Data mining
Erlang (programming language) used by Goldman Sachs
Flash trading
Front running
Hedge fund
Hot money
IEX
Market maker
Mathematical finance
Offshore fund
Pump and dump
Quantitative trading
Short (finance)
Statistical arbitrage
Derivatives (finance)
Futures exchanges
Financial markets
Electronic trading systems
Stock market
Mathematical finance
Securities (finance)
Financial crimes | Spoofing (finance) | Mathematics | 2,202 |
45,383,325 | https://en.wikipedia.org/wiki/Tabersonine | Tabersonine is a terpene indole alkaloid found in the medicinal plant Catharanthus roseus and also in the genus Voacanga (both taxa belonging to the alkaloid-rich family Apocynaceae). Tabersonine is hydroxylated at the 16 position by the enzyme tabersonine 16-hydroxylase (T16H) to form 16-hydroxytabersonine. The enzyme leading to its formation is currently unknown. Tabersonine is the first intermediate leading to the formation of vindoline one of the two precursors required for vinblastine biosynthesis.
See also
Conopharyngine
Tabernanthine
Vinblastine
References
Indole alkaloids | Tabersonine | Chemistry | 154 |
37,565,754 | https://en.wikipedia.org/wiki/Stream%20metabolism | Stream metabolism, often referred to as aquatic ecosystem metabolism in both freshwater (lakes, rivers, wetlands, streams, reservoirs) and marine ecosystems, includes gross primary productivity (GPP) and ecosystem respiration (ER) and can be expressed as net ecosystem production (NEP = GPP - ER). Analogous to metabolism within an individual organism, stream metabolism represents how energy is created (primary production) and used (respiration) within an aquatic ecosystem. In heterotrophic ecosystems, GPP:ER is <1 (ecosystem using more energy than it is creating); in autotrophic ecosystems it is >1 (ecosystem creating more energy than it is using). Most streams are heterotrophic. A heterotrophic ecosystem often means that allochthonous (coming from outside the ecosystem) inputs of organic matter, such as leaves or debris fuel ecosystem respiration rates, resulting in respiration greater than production within the ecosystem. However, autochthonous (coming from within the ecosystem) pathways also remain important to metabolism in heterotrophic ecosystems. In an autotrophic ecosystem, conversely, primary production (by algae, macrophytes) exceeds respiration, meaning that ecosystem is producing more organic carbon than it is respiring.
Stream metabolism can be influenced by a variety of factors, including physical characteristics of the stream (slope, width, depth, and speed/volume of flow), biotic characteristics of the stream (abundance and diversity of organisms ranging from bacteria to fish), light and nutrient availability to fuel primary production, organic matter to fuel respiration, water chemistry and temperature, and natural or human-caused disturbance, such as dams, removal of riparian vegetation, nutrient pollution, wildfire or flooding.
Measuring stream metabolic state is important to understand how disturbance may change the available primary productivity, and whether and how that increase or decrease in NEP influences foodweb dynamics, allochthonous/autochthonous pathways, and trophic interactions. Metabolism (encompassing both ER and GPP) must be measured rather than primary productivity alone, because simply measuring primary productivity does not indicate excess production available for higher trophic levels. One commonly used method for determining metabolic state in an aquatic system is daily changes in oxygen concentration, from which GPP, ER, and net daily metabolism can be estimated.
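The bookkeeping in the opening paragraph reduces to a few lines of Python (illustrative values; the GPP:ER = 1 classification threshold is from the text):

```python
def net_ecosystem_production(gpp, er):
    """NEP = GPP - ER, with a trophic-state label from the GPP:ER ratio.

    gpp and er are in the same units, e.g. g O2 per m^2 per day; er is
    given as a positive magnitude.
    """
    nep = gpp - er
    state = "autotrophic" if gpp / er > 1 else "heterotrophic"
    return nep, state

# A shaded forest stream: respiration fueled by allochthonous leaf
# litter outpaces in-stream primary production (illustrative numbers).
nep, state = net_ecosystem_production(gpp=2.0, er=5.0)
assert nep == -3.0 and state == "heterotrophic"

# An open-canopy, algae-rich reach (illustrative numbers).
nep, state = net_ecosystem_production(gpp=6.0, er=4.0)
assert nep == 2.0 and state == "autotrophic"
```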
Disturbances can affect trophic relationships in a variety of ways, such as simplifying foodwebs, causing trophic cascades, and shifting carbon sources and major pathways of energy flow (Power et al. 1985, Power et al. 2008). Part of understanding how disturbance will impact trophic dynamics lies in understanding disturbance impacts to stream metabolism (Holtgrieve et al. 2010). For example, in Alaska streams, disturbance of the benthos by spawning salmon caused distinct changes in stream metabolism; autotrophic streams became net heterotrophic during the spawning run, then reverted to autotrophy after the spawning season (Holtgrieve and Schindler 2011). There is evidence that this seasonal disturbance impacts trophic dynamics of benthic invertebrates and in turn their vertebrate predators (Holtgrieve and Schindler 2011, Moore and Schindler 2008). Wildfire disturbance may have similar metabolic and trophic impacts in streams.
See also
Overflow metabolism
Lake metabolism
Apparent oxygen utilisation
References
Odum, Howard T., "Primary production in flowing waters", Limnology and Oceanography, vol. 1, no. 2, pp. 102–117, April 1956.
Power, M. E.; Matthews, W. J.; Stewart, A. J., "Grazing minnow, piscivorous bass, and stream algae: dynamics of a strong interaction", Ecology, vol. 66, pp. 1448–1456.
Holtgrieve, Gordon W.; Schindler, Daniel E.; Branch, Trevor A.; A’mar, Z. Teresa, "Simultaneous quantification of aquatic ecosystem metabolism and reaeration using a Bayesian statistical model of oxygen dynamics", Limnology and Oceanography, vol. 55, no. 3, pp. 1047–1063, 2010.
Holtgrieve, Gordon W.; Schindler, Daniel E., "Marine-derived nutrients, bioturbation, and ecosystem metabolism: reconsidering the role of salmon in streams", Ecology, vol. 92, pp. 373–385.
Moore, Jonathan W.; Schindler, Daniel E., "Biotic disturbance and benthic community dynamics in salmon-bearing streams", Journal of Animal Ecology, vol. 77, iss. 2, pp. 275–284, March 2008.
Aquatic ecology
Ecosystems
Metabolism
Water streams | Stream metabolism | Chemistry,Biology | 980 |
26,469,223 | https://en.wikipedia.org/wiki/Ocean%20Traveler | Ocean Traveler was a drilling platform built in the United States and used in the Gulf of Mexico. In 1966, it was transferred to Esso for the first exploration wells on the Norwegian continental shelf in the North Sea, following the Dutch discovery of Groningen gas field in 1959. On 16 July 1966, the platform did a limited discovery in the North Sea, in a block that much later became the discovery place of the Balder Field.
Ocean Traveler had serious problems with the weather conditions in the North Sea, which are much tougher than those off the southern coast of the United States. These experiences laid the foundation for the sister platforms, especially the Norwegian-built Ocean Viking, which had a significantly strengthened structure.
References
Ocean Traveler - first drilling platform on the Norwegian continental shelf
Odyssey Of The Ocean Traveler
History of the petroleum industry in Norway
Oil platforms | Ocean Traveler | Chemistry,Engineering | 177 |
38,347,573 | https://en.wikipedia.org/wiki/19%20Puppis | 19 Puppis is a binary star system in the southern constellation of Puppis, near the northern border with Hydra and Monoceros. It is visible to the naked eye as a faint, yellow-hued star with an apparent visual magnitude of 4.72. The system is located approximately 177 light years away from the system based on parallax. It is receding from the Earth with a heliocentric radial velocity of +36 km/s, having come to within some 1.4 million years ago.
The primary, component A, is an aging giant star with a stellar classification of G9III-IIIb. It is a red clump giant, which indicates it is on the horizontal branch and is generating energy through helium fusion at its core. The star is about one billion years old with 1.05 times the mass of the Sun and 8.9 times the Sun's radius. It is radiating 43 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 4,750 K.
The secondary member, component B, is a magnitude 11.2 star at an angular separation of from the primary. Four visual companions have been reported. These are component C, at magnitude 13.2 and separation 30.7", D, at magnitude 8.9 and separation 57.8", E, at magnitude 9.37 and separation 70.1", and F, at magnitude 10.74 and separation 114.1".
References
G-type giants
Horizontal-branch stars
Binary stars
Puppis
BD-12 2385
Puppis, 19
068290
040084
3211
2542 | 19 Puppis | Astronomy | 338 |
48,222,573 | https://en.wikipedia.org/wiki/Chloride%20channel%20blocker | A chloride channel blocker is a type of drug which inhibits the transmission of ions (Cl−) through chloride channels.
Niflumic acid is a chloride channel blocker that has been used in experimental scientific research. Another example is anthracene-9-carboxylic acid, a potent blocker of the CLCN1-type chloride channel found in skeletal muscle, which is used to study animal models of myotonia congenita.
Some antagonists of glycine receptors and GABAA receptors also act as chloride channel blockers.
See also
Chloride channel opener
References
Further reading
Drugs by mechanism of action
Ion channel blockers | Chloride channel blocker | Chemistry | 134 |
9,111,697 | https://en.wikipedia.org/wiki/Missing%20Link%20%28puzzle%29 | Missing Link is a mechanical puzzle invented in 1981 by Steven P. Hanson and Jeffrey D. Breslow.
The puzzle has four sides, each depicting a chain of a different color. Each side contains four tiles, except one which contains three tiles and a gap. The top and bottom rows can be rotated, and tiles can slide up or down into the gap. The objective is to scramble the tiles and then restore them to their original configuration.
The two middle rows cannot be rotated. To move tiles in these rows, tiles must be looped from one row to another, up and down through the gap.
There are 15 tiles and a gap, giving a maximum of 16! arrangements. However, the middle tiles of each four-tile chain are identical, and each position is equivalent to seven other positions obtained by rotating the entire puzzle (about its axis or upside-down), reducing the number of arrangements to 16! / 8 / 8 = 326,918,592,000. If the three long chains are also considered interchangeable, then the number of arrangements is further reduced to 16! / 8 / 8 / 6 = 54,486,432,000.
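The counting argument above can be verified directly (Python sketch):

```python
from math import factorial

total = factorial(16)   # 15 tiles plus the gap occupy 16 positions

# Divide by 2!**3 = 8 for the identical middle tiles of the three
# four-tile chains, and by 8 for the symmetries of the whole puzzle.
distinct = total // 8 // 8
assert distinct == 326_918_592_000

# Treating the three long chains as interchangeable divides by 3! = 6.
interchangeable = distinct // 6
assert interchangeable == 54_486_432_000
```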
Reception
Games magazine included Missing Link in their "Top 100 Games of 1981", expecting it to be "the torture of the season" but instead finding it was "solvable by ordinary human beings and will not provoke a rash of how-to books".
See also
Combination puzzles
Mechanical puzzles
References
External links
Picture and solution
1980s toys
1981 introductions
1981 works
Combination puzzles
Mechanical puzzles | Missing Link (puzzle) | Mathematics | 313 |
64,287,800 | https://en.wikipedia.org/wiki/Scandium%20perchlorate | Scandium perchlorate is an inorganic compound with the chemical formula Sc(ClO4)3.
Production
Scandium perchlorate can be prepared by dissolving scandium oxide in perchloric acid:

Sc2O3 + 6 HClO4 → 2 Sc(ClO4)3 + 3 H2O
References
Scandium compounds
Perchlorates | Scandium perchlorate | Chemistry | 52 |
3,573,834 | https://en.wikipedia.org/wiki/Branching%20quantifier | In logic a branching quantifier, also called a Henkin quantifier, finite partially ordered quantifier or even nonlinear quantifier, is a partial ordering
of quantifiers for Q ∈ {∀,∃}. It is a special case of generalized quantifier. In classical logic, quantifier prefixes are linearly ordered such that the value of a variable ym bound by a quantifier Qm depends on the value of the variables
y1, ..., ym−1
bound by quantifiers
Qy1, ..., Qym−1
preceding Qm. In a logic with (finite) partially ordered quantification this is not in general the case.
Branching quantification first appeared in a 1959 conference paper of Leon Henkin. Systems of partially ordered quantification are intermediate in strength between first-order logic and second-order logic. They are used as a basis for Hintikka's and Gabriel Sandu's independence-friendly logic.
Definition and properties
The simplest Henkin quantifier is
It (in fact every formula with a Henkin prefix, not just the simplest one) is equivalent to its second-order Skolemization, i.e.
It is also powerful enough to define the quantifier (i.e. "there are infinitely many") defined as
Several things follow from this, including the nonaxiomatizability of first-order logic with (first observed by Ehrenfeucht), and its equivalence to the -fragment of second-order logic (existential second-order logic)—the latter result published independently in 1970 by Herbert Enderton and W. Walkoe.
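The elided displays above can be restated in standard notation (my rendering, not reproduced from the source): the simplest Henkin quantifier and its second-order Skolemization are

```latex
\left(\begin{matrix}\forall x\,\exists y\\ \forall u\,\exists v\end{matrix}\right)\,
\varphi(x,y,u,v)
\;\equiv\;
\exists f\,\exists g\,\forall x\,\forall u\;\varphi\bigl(x, f(x), u, g(u)\bigr),
```

where the key point is that the Skolem function f depends only on x and g only on u, reflecting the independence of the two quantifier rows.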
The following quantifiers are also definable by .
Rescher: "The number of φs is less than or equal to the number of ψs"
Härtig: "The φs are equinumerous with the ψs"
Chang: "The number of φs is equinumerous with the domain of the model"
The Henkin quantifier can itself be expressed as a type (4) Lindström quantifier.
Relation to natural languages
Hintikka in a 1973 paper advanced the hypothesis that some sentences in natural languages are best understood in terms of branching quantifiers, for example: "some relative of each villager and some relative of each townsman hate each other" is supposed to be interpreted, according to Hintikka, as:
which is known to have no first-order logic equivalent.
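In standard notation (my rendering; the predicate symbols V, T, R and H for villager, townsman, relative-of and hate are illustrative, not from the source), the branching reading of Hintikka's sentence is usually written:

```latex
\left(\begin{matrix}\forall x\,\exists y\\ \forall z\,\exists w\end{matrix}\right)
\Bigl[\bigl(V(x)\wedge T(z)\bigr)\rightarrow
\bigl(R(x,y)\wedge R(z,w)\wedge H(y,w)\bigr)\Bigr]
```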
The idea of branching is not necessarily restricted to using the classical quantifiers as leaves. In a 1979 paper, Jon Barwise proposed variations of Hintikka sentences (as the above is sometimes called) in which the inner quantifiers are themselves generalized quantifiers, for example: "Most villagers and most townsmen hate each other." Observing that is not closed under negation, Barwise also proposed a practical test to determine whether natural language sentences really involve branching quantifiers, namely to test whether their natural-language negation involves universal quantification over a set variable (a sentence).
Hintikka's proposal was met with skepticism by a number of logicians because some first-order sentences like the one below appear to capture well enough the natural language Hintikka sentence.
where
denotes
Although much purely theoretical debate followed, it wasn't until 2009 that some empirical tests with students trained in logic found that they are more likely to assign models matching the "bidirectional" first-order sentence rather than branching-quantifier sentence to several natural-language constructs derived from the Hintikka sentence. For instance students were shown undirected bipartite graphs—with squares and circles as vertices—and asked to say whether sentences like "more than 3 circles and more than 3 squares are connected by lines" were correctly describing the diagrams.
See also
Game semantics
Dependence logic
Independence-friendly logic (IF logic)
Mostowski quantifier
Lindström quantifier
Nonfirstorderizability
References
External links
Game-theoretical quantifier at PlanetMath.
Quantifier (logic)
AdvancedTCA Extensions for Instrumentation and Test (AXIe) is a modular instrumentation standard created by Aeroflex, Keysight Technologies, and Test Evolution Corporation. (In October 2008, Aeroflex had purchased a 40% shareholding in Test Evolution.)
AXIe was targeted for general-purpose instrumentation and semiconductor test. AXIe is based on standards from AdvancedTCA (ATCA), PXI, LAN eXtensions for Instrumentation (LXI), and Interchangeable Virtual Instruments (IVI). AXIe was formally launched on November 10, 2009.
Additional members joining the AXIe Consortium were: Viavi Solutions, Guzik Technical Enterprises (December 2009), Giga-tronics (January 2010), ADLINK Technology, Conduant (2019), Elma Electronic, Samtec, Informtest, Power Value Technologies, Synopsys, and Modular Methods.
In October 2017, the AXIe Consortium announced a new specification, Optical Data Interface (ODI), suitable for high-speed instrumentation systems addressing challenging applications in 5G communications, mil/aero, and advanced communication research. The new standard enables a 20 GB/s (160 Gbit/s) data transfer connection between instruments and/or data recorders using multi-mode optical fibers.
References
External links
An overview presentation of AXIe
An overview of ODI (Optical Data Interface)
Electronics standards
Ethernet standards
Networking standards
The Cray X1 is a non-uniform memory access, vector processor supercomputer manufactured and sold by Cray Inc. since 2003. The X1 is often described as the unification of the Cray T90, Cray SV1, and Cray T3E architectures into a single machine. The X1 shares the multistreaming processors, vector caches, and CMOS design of the SV1, the highly scalable distributed memory design of the T3E, and the high memory bandwidth and liquid cooling of the T90.
The X1 uses a 1.2 ns (800 MHz) clock cycle, and eight-wide vector pipes in MSP mode, offering a peak speed of 12.8 gigaflops per processor. Air-cooled models are available with up to 64 processors. Liquid-cooled systems scale to a theoretical maximum of 4096 processors, comprising 1024 shared-memory nodes connected in a two-dimensional torus network, in 32 frames. Such a system would supply a peak speed of 50 teraflops. The largest unclassified X1 system was the 512-processor system at Oak Ridge National Laboratory, though this has since been upgraded to an X1E system.
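The quoted per-processor figure is consistent with simple arithmetic: eight vector pipes, each completing a multiply-add (two floating-point operations per cycle, an assumption chosen to match the quoted figure rather than a detail stated above), at an 800 MHz clock. A quick check:

```python
# Peak per-processor rate = clock rate x vector pipes x flops per pipe per cycle.
# The value of 2 flops/pipe/cycle (fused multiply-add) is an assumption made to
# reproduce the quoted 12.8 Gflops; it is not stated in the text above.
clock_hz = 800e6
vector_pipes = 8
flops_per_pipe_per_cycle = 2

peak_flops = clock_hz * vector_pipes * flops_per_pipe_per_cycle
print(peak_flops / 1e9)  # 12.8 (Gflops per processor in MSP mode)
```

The same arithmetic scaled to 4096 processors gives the roughly 50-teraflop system peak mentioned for the largest liquid-cooled configuration.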
The X1 can be programmed either with widely used message-passing software such as MPI and PVM, or with shared-memory languages such as Unified Parallel C or Co-array Fortran. The X1 runs an operating system called UNICOS/mp, which shares more with the SGI IRIX operating system than with the UNICOS found on prior-generation Cray machines.
In 2005, Cray released the X1E upgrade, which uses dual-core processors, allowing two quad-processor nodes to fit on a node board. The processors are also upgraded to 1150 MHz. This upgrade almost triples the peak performance per board, but reduces the per-processor memory and interconnect bandwidth. X1 and X1E boards can be combined within the same system.
The X1 is notable for its development being partly funded by the United States Government's National Security Agency (under the code name SV2). The X1 was not a financially successful product, and it seems doubtful that it or its successors would have been produced without this support.
References
External links
ORNL X1 evaluation
Cray Legacy Products
Cray X1E at top500.org
X1
Vector supercomputers
Computer-related introductions in 2003
Pentyl is a five-carbon alkyl group or substituent with chemical formula -C5H11. It is the substituent form of the alkane pentane.
In older literature, the common non-systematic name amyl was often used for the pentyl group. Conversely, the name pentyl was used for several five-carbon branched alkyl groups, distinguished by various prefixes. The nomenclature has now reversed, with "amyl" being more often used to refer to the terminally branched group also called isopentyl, as in amobarbital.
A cyclopentyl group is a ring with the formula -C5H9.
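As a quick arithmetic check on these formulas, the group weights follow directly from standard atomic weights. This is a minimal sketch; the conventional values 12.011 for carbon and 1.008 for hydrogen are assumed.

```python
# Conventional IUPAC atomic weights for carbon and hydrogen.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008}

def group_weight(counts):
    """Sum atomic weights for a substituent given as {element: count}."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in counts.items())

pentyl = {"C": 5, "H": 11}       # -C5H11
cyclopentyl = {"C": 5, "H": 9}   # -C5H9, two fewer hydrogens because of ring closure

print(round(group_weight(pentyl), 3))       # 71.143
print(round(group_weight(cyclopentyl), 3))  # 69.127
```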
The name is also used for the pentyl radical, a pentyl group as an isolated molecule. This free radical is only observed in extreme conditions. Its formula is often written "C5H11•" or "•C5H11" to indicate that it has one unsatisfied valence bond. Radicals like pentyl are reactive; they react with neighboring atoms or molecules (such as oxygen or water).
Older "pentyl" groups
The following names are still sometimes used:
Pentyl radical
The free radical pentyl was studied by J. Pacansky and A. Gutierrez in 1983. The radical was obtained by exposing bishexanoyl peroxide trapped in frozen argon to ultraviolet light, which caused its decomposition into two carbon dioxide (CO2) molecules and two pentyl radicals.
Examples
Pentanol
Pentyl pentanoate
Amylamine
Amyl acetate
Amyl alcohol
Amylmetacresol
Isoamyl acetate
Isoamyl alcohol
References
Alkyl groups
This is a list of red seaweeds recorded from the oceans bordering the Cape Peninsula in South Africa, from Melkbosstrand on the West Coast to Cape Hangklip on the South Coast.
This list comprises locally used common names, scientific names with author citation and recorded ranges. Ranges specified may not be the entire known range for the species, but should include the known range within the waters surrounding the Republic of South Africa.
Red seaweed refers to thousands of species of macroscopic, multicellular, marine algae in the taxon Rhodophyta.
The marine ecology is unusually varied for an area of this size, as a result of the meeting of two major oceanic water masses near Cape Point, and the park extends into two coastal marine bioregions. The ecology of the west or "Atlantic Seaboard" side of the Cape Peninsula is noticeably different in character and biodiversity to that of the east, or "False Bay" side. Both sides are classified as temperate waters, but there is a significant difference in average temperature, with the Atlantic side being noticeably colder on average.
List ordering and taxonomy complies where possible with the current usage in Algaebase, and may differ from the cited source, as listed citations are primarily for range or existence of records for the region.
Sub-taxa within any given taxon are arranged alphabetically as a general rule.
Details of each species may be available through the relevant internal links. Synonyms may be listed where useful.
Class: Bangiophyceae
Order: Bangiales
Family Bangiaceae
Bangia atropurpurea (Mertens ex Roth) C.Agardh 1824, syn. Conferva atropurpurea Mertens ex Roth 1806, Oscillatoria atropurpurea (Roth) C.Agardh 1817, Bangia fuscopurpurea var. atropurpurea (Roth) Lyngbye 1819, Bangiella atropurpurea (Roth) Gaillon 1833, Bangiadulcis atropurpurea (Roth) W.A.Nelson 2007, (Cosmopolitan)
Purple laver, Porphyra capensis Kützing 1843, (Abundant on whole of west coast extending into Namibia and along south coast of Western and Eastern Cape. Endemic)
Pyropia gardneri (G.M.Smith & Hollenberg) S.C.Lindstrom in Sutherland et al. 2011, syn. Porphyrella gardneri G.M.Smith & Hollenberg 1943, Porphyra gardneri (G.M.Smith & Hollenberg) M.W.Hawkes 1977, (Cape of Good Hope to Brandfontein)
Pyropia saldanhae (Stegenga, J.J.Bolton & R.J.Anderson) J.E.Sutherland in Sutherland et al. 2011, syn. Porphyra saldanhae Stegenga, Bolton & R.J.Anderson 1997, (Hondeklip Bay and Olifantsbos, endemic)
Order: Porphyridiales
Family Phragmonemataceae
Neevea cf. repens Batters 1900, (Hout Bay)
Class: Compsopogonophyceae
Order: Erythropeltidales
Family Erythrotrichiaceae
Erythrocladia cf. polystromatica P.J.L.Dangeard 1932, (St James, False Bay and Cape Hangklip)
Erythrotrichia carnea (Dillwyn) J.Agardh 1883, syn. Erythrocladia carnea, Conferva carnea Dillwyn 1807, Bangia ciliaris subsp. pulchella (Harvey) De Toni 1897, (Probably fairly common, but South African distribution uncertain)
Erythrotrichia welwitschii (Ruprecht) Batters 1902, syn. Cruoria welwitschii Ruprecht 1850, (Cape of Good Hope and False Bay extending eastwards at least as far as Port Elizabeth)
Membranella africana Stegenga, Bolton & Anderson 1997, (Cape of Good Hope at least as far as Port Alfred)
Porphyrostromium boryanum (Montagne) P.C.Silva in Silva, Basson & Moe 1996, Porphyra boryana Montagne 1846, Erythrotrichia boryana (Montagne) Berthold 1882, Phyllona boryana (Montagne) Kuntze 1891, Erythrotrichopeltis boryana (Montagne) Kornmann 1984, Porphyrostromium boryanum (Montagne) M.J.Wynne 1986, (Yzerfontein to Oatlands Point, False Bay)
Sahlingia subintegra (Rosenvinge) Kornmann 1989, syn. Erythrocladia subintegra Rosenvinge 1909, Erythrocladia irregularis f. subintegra (Rosenvinge) Garbary, Hansen & Scagel 1981, Erythropeltis subintegra (Rosenvinge) Kornmann & Sahling 1985, Erythrotrichopeltis subintegra (Rosenvinge) Kornmann & Sahling 1985, (Worldwide – probably widely distributed in SA)
Class: Florideophyceae
Order: Acrochaetiales
Family Acrochaetiaceae
Acrochaetium brebneri (Batters) G.Hamel 1928, syn. Rhodochorton brebneri Batters 1897, Chantransia brebneri (Batters) Rosenvinge 1909, Audouinella brebneri (Batters) P.S.Dixon 1976, (False Bay side of the Cape Peninsula)
Acrochaetium balliae (Stegenga), nom. illeg. syn. Audouinella balliae Stegenga 1985, (Port Nolloth to Hout Bay)
Acrochaetium catenulatum M.A.Howe 1914, (Namibia to Eastern Cape)
Acrochaetium endozoicum (Darbishire) Batters 1902, syn. Chantransia endozoica Darbishire 1899, Rhodochorton endozoicum (Darbishire) Drew 1928, Audouinella endozoica (Darbishire) P.S.Dixon 1976, (Cape Peninsula)
Acrochaetium moniliforme (Rosenvinge) Børgesen 1915, Chantransia moniliformis Rosenvinge 1909, Rhodochorton moniliforme (Rosenvinge) Drew 1928, Kylinia moniliformis (Rosenvinge) Kylin 1944, Chromastrum moniliforme (Rosenvinge) Papenfuss 1945, Audouinella moniliformis (Rosenvinge) Garbary 1979, (False Bay eastward at least as far as Transkei)
Acrochaetium plumosum (K.M.Drew) G.M.Smith 1944, syn. Colaconema plumosum (Drew) Woelkerling 1971, Rhodochorton plumosum Drew 1928, (Hondeklip Bay to Betty's Bay)
Acrochaetium reductum (Rosenvinge) G.Hamel 1927, syn. Chantransia reducta Rosenvinge 1909, (Between False Bay and Plettenberg Bay)
Acrochaetium secundatum (Lyngbye) Nägeli 1858, syn. Callithamnion daviesii var. secundatum Lyngbye 1819, (Namibia to False Bay)
Audouinella occulta H.Stegenga 1985, (Hout Bay)
Audouinella monorhiza (Stegenga) Garbary 1987, syn. Colaconema monorhiza Stegenga 1985, (Noordhoek and Olifantsbos, Cape Peninsula)
Audouinella pectinata (Kylin) Papenfuss 1945, syn. Chantransia pectinata Kylin, 1906, (Doring Bay to Olifantsbos)
Audouinella spongicola (Weber-van Bosse) Stegenga 1985, syn. Acrochaetium spongicola Weber-van Bosse 1921, (Hout Bay to Bird Island, Eastern Province)
Order Balliales
Family Balliaceae
Ballia callitricha (C.Agardh) Kützing 1843, syn. Sphacelaria callitricha C.Agardh 1824, (West side of Cape Peninsula to Cape Agulhas)
Ballia sertularioides (Suhr) Papenfuss 1940, syn. Callithamnion sertularioides Suhr 1840, (Lüderitz in Namibia to Hout Bay, Southern African endemic)
Order Bonnemaisoniales
Family Bonnemaisoniaceae
Asparagopsis armata Harvey 1855, syn. Falkenbergia rufolanosa (Harvey) F.Schmitz in Engler & Prantl 1897, (Platbank, Cape Peninsula eastwards)
Bonnemaisonia hamifera Hariot 1891, (Known from one collection at Strandfontein, False Bay)
Delisea flaccida (Suhr) Papenfuss 1940, syn. Sphaerococcus flaccidus Suhr 1834, (Olifantsbos on the Cape Peninsula eastwards)
Order Ceramiales
Family Callithamniaceae
Aglaothamnion hookeri (Dillwyn) Maggs & Hommersand 1993, syn. Conferva hookeri Dillwyn 1809, Callithamnion hookeri (Dillwyn) S.F.Gray 1821, (Namibia to East London)
Callithamnion stuposum Suhr 1840, syn. Phlebothamnion stuposum (Suhr) Kützing 1843, Spongoclonium stuposum (Suhr) De Toni 1903, (Swartklip to KwaZulu-Natal)
Callithamnion sp. indet. (Cape Peninsula to East London)
Heteroptilon pappeanum (Kützing) Hommersand in Hommersand, D.W. Freshwater, J. López-Bautista, & S. Fredericq 2006, syn. Euptilota pappeana Kützing 1849, (Hondeklipbaai to Cape Agulhas, endemic)
Family Ceramiaceae
Antithamnion diminuatum var. polyglandulum Stegenga 1986, (Olifantsbos in the southern Cape Peninsula eastward to KwaZulu-Natal)
Antithamnion pseudoarmatum Stegenga 1986, (Olifantsbos and Brandfontein, endemic)
Antithamnionella tasmanica Wollaston 1968, (Kalk Bay to Kowie River)
Antithamnionella tormentosa Stegenga 1986, (Cape Peninsula from Three Anchor Bay to Muizenberg, endemic)
Antithamnionella verticillata (Suhr) Lyle 1922, syn. Callithamnion verticillatum Suhr 1840, Antithamnion verticillatum (Suhr) De Toni 1903, (Swartklip in False Bay to Transkei)
Bornetia repens Stegenga 1985, (Swartklip in False Bay to Transkei, possibly KwaZulu-Natal)
Callithamniella capensis Simons 1970, (Muizenberg to East London, endemic)
Callithamnion stuposum Suhr 1840, syn. Phlebothamnion stuposum (Suhr) Kützing 1843, Spongoclonium stuposum (Suhr) De Toni 1903, (Rare on west coast, common on south coast and KwaZulu-Natal at least as far north as Mabibi)
Flaccid kelp-weed, Carpoblepharis flaccida (J.V.Lamouroux) Kützing 1849, syn. Ptilota flaccida (J.V.Lamouroux) C.Agardh 1822, Delesseria flaccida J.V.Lamouroux 1813, (Namibia to the Kei river, Southern African endemic)
Carpoblepharis minima E.S.Barton 1893, (Möwe Bay in Namibia to Buffels Bay on the Cape Peninsula)
Centroceras clavulatum (C.Agardh) Montagne 1846, syn. Ceramium clavulatum C.Agardh 1822, Spyridia clavulata (C.Agardh) J.Agardh 1842, (Whole southern African coast)
Centroceras distichum Okamura 1934, (Cape Hangklip)
Curl-claw, Centroceras spp.
Beaded ceramium, Ceramium arenarium Simons 1966, (Swakopmund in Namibia to East London, Southern African endemic)
Black-red ceramium, Ceramium atrorubescens Kylin, 1938.
Ceramium aff. callipterum Mazoyer 1938, (West side of southern Cape Peninsula)
Ceramium camouii E.Y.Dawson 1944, (Cape Point eastwards along south coast)
Cape ceramium, Ceramium capense Kützing 1841, (Lüderitz to Kommetjie, endemic)
Ceramium centroceratiforme Simons 1966, (Cape Hangklip to Kei River, endemic)
Ceramium dawsonii A.B.Joly 1957, (False Bay eastward along entire Cape south coast)
Ceramium glanduliferum Kylin 1938, (Sea Point on Cape Peninsula eastward into KwaZulu-Natal, Southern African endemic)
Coarse ceramium, Ceramium obsoletum C.Agardh 1828, (From Namibia south and eastwards along the whole Cape South coast, Southern African endemic)
Ceramium papenfussianum Simons 1966, (Primarily a West Coast species, endemic)
Flat-fern ceramium, Ceramium planum Kützing 1849, (Swakopmund in Namibia to False Bay. Southern African endemic)
Ceramium tenerrimum (G.Martens) Okamura 1921, syn. Hormoceras tenerrimum G.Martens 1866, (Whole of Cape west coast and east to Knysna)
Compsothamnionella sciadophila Stegenga 1990, (Found once at Muizenberg, endemic)
Crouania attenuata (C.Agardh) J.Agardh 1842, syn. Mesogloia attenuata C.Agardh 1824, (Kalk Bay eastward into tropical East Africa)
Crouania francescoi Cormaci, G.Furnari & Scammacca 1978, (False Bay eastwards to northern KwaZulu-Natal)
Laurenciophila minima Stegenga 1986, (Clovelly in False Bay to Kowie river, endemic)
Microcladia gloria-spei Stegenga 1986, (Port Nolloth to Southern Cape Peninsula)
Platythamnion capense Stegenga 1986, (Only known from Platboombaai, endemic)
Pterothamnion recurvatum (Wollaston) Athanasiadis & Kraft 1994, syn. Platythamnion recurvatum E.M.Wollaston 1972, (Paternoster to Olifantsbos, Port Alfred)
Family Callithamniaceae
Aristocratic plume-weed, Callithamnion collabens (Rudolphi) L.McIvor & Maggs 2002, syn. Asperocaulon collabens Rudolphi 1831, Aristothamnion collabens (Rudolphi) Papenfuss 1968, (Namibia to Port Alfred, Southern African endemic)
Iridescent plume-weed, Callithamnion stuposum Suhr, 1840
Family Dasyaceae
Dasya echinata Stegenga, Bolton & R.J.Anderson 1997, (Brandfontein, Strandfontein in False Bay, and Waterloo Bay in Eastern Cape, endemic)
Bottlebrush, Dasya scoparia Harvey in J. Agardh 1841, (Lamberts Bay to Mabibi in northern KwaZulu-Natal)
Heterosiphonia arenaria Kylin 1938, (Swartklip, Brandfontein, and Port Elizabeth to East London, endemic)
Heterosiphonia crispa (Suhr) Falkenberg 1901, syn. Dasya crispa Suhr 1840, (Lamberts Bay to KwaZulu-Natal, endemic)
Heterosiphonia dubia (Suhr) Falkenberg 1901, syn. Dasya dubia Suhr 1840, (Paternoster to KwaZulu-Natal, Southern African endemic)
Heterosiphonia pellucida (Harvey) Falkenberg 1901, syn. Dasya pellucida Harvey 1849, (Lamberts Bay to Brandfontein, endemic)
Family Delesseriaceae
Acrosorium acrospermum (J.Agardh) Kylin 1938, Plain acrosorium, syn. Nitophyllum acrospermum J.Agardh 1852, (False Bay to Eastern Cape, endemic)
Acrosorium maculatum (Sonder ex Kützing) Papenfuss 1940, syn. Aglaophyllum maculatum Sonder ex Kützing 1866, Nitophyllum uncinatum var. maculatum (Sonder ex Kützing) De Toni 1900, (Southern Cape Peninsula to KwaZulu-Natal)
Acrosorium ciliolatum (Harvey) Kylin 1924, Curled acrosorium, syn. Nitophyllum ciliolatum Harvey 1855, Aglaophyllum ciliolatum (Harvey) Kützing 1869, Nitophyllum venulosum Zanardini 1866, Acrosorium venulosum (Zanardini) Kylin 1924, (as A. venulosum, Kommetjie to KwaZulu-Natal) (as A. ciliolatum, Kommetjie eastward extending into KwaZulu-Natal at least as far as Sodwana Bay)
Apoglossum ruscifolium (Turner) J.Agardh 1898, syn. Fucus ruscifolius Turner 1802, Delesseria ruscifolia (Turner) J.V.Lamouroux 1813, (Oudekraal to Brandfontein)
Bartoniella crenata (J.Agardh ex Mazza) Kylin 1924, syn. Phitymophora crenata J.Agardh ex Mazza 1908, (Muizenberg and Cape Hangklip at least as far as Mission Rocks, endemic)
Black spot, Botryocarpa prolifera Greville 1830, (Namibia to southern Cape Peninsula)
Botryoglossum, Botryoglossum platycarpum (Turner) Kützing 1843, syn. Fucus platycarpus Turner 1809, Delesseria platycarpa (Turner) J.V.Lamouroux 1813, Phyllophora platycarpa (Turner) Greville ex Krauss 1846, Nitophyllum platycarpum (Turner) J.Agardh 1876, (Namibia to Cape of Good Hope. Southern African endemic)
Erythroglossum sp. indet. (Glencairn to Hluleka, endemic)
Gonimophyllum africanum M.T.Martin & M.A.Pocock 1953, (Table Bay to Kei River)
Haraldiophyllum bonnemaisonii (Kylin) A.D.Zinova 1981, syn. Myriogramme bonnemaisonii Kylin 1924, Nitophyllum bonnemaisonii (Kylin) Kylin 1934, (Kommetjie to Muizenberg on the Cape Peninsula)
Veined oil-weed, Hymenena venosa (Linnaeus) Krauss 1846, syn. Fucus venosus Linnaeus 1771, Delesseria venosa (Turner) J.V.Lamouroux 1813, (Namibia to southern Cape Peninsula)
Martensia elegans Hering 1841, syn. Capraella elegans (Harvey) J.De Toni 1936, Mesotrema elegans (Hering) Papenfuss 1942, (Common south coast species, extending into KwaZulu-Natal at least as far as Sodwana Bay)
Myriogramme eckloniae Stegenga, Bolton & R.J.Anderson 1997, (Drift material at Muizenberg, endemic)
Myriogramme livida (J.D.Hooker & Harvey) Kylin 1924, syn. Nitophyllum lividum J.D.Hooker & Harvey 1845, Cryptopleura livida (J.D.Hooker & Harvey) Kützing 1868, (Swakopmund to Kommetjie)
Veined tongues, Neuroglossum binderianum Kützing 1843, (Namibia to southern Cape Peninsula)
Papenfussia laciniata (Harvey) M.D. Guiry 2005?, syn. Pollexfenia laciniata Harvey 1844, (Both sides of the Cape Peninsula, endemic)
Frilly broekies, Paraglossum papenfussii (M.J.Wynne) S.-M.Lin, Fredericq & Hommersand, 2012, also recorded as syn. Delesseria papenfussii M.J.Wynne 1984, (Port Nolloth to Brandfontein. Southern African endemic)
Platyclinia sp. (Olifantsbos)
Platysiphonia intermedia (Grunow) M.J.Wynne 1983, syn. Sarcomenia intermedia Grunow 1867, (Port Nolloth to Cape Agulhas)
Delesseriaceae vel. aff. (Oudekraal, endemic)
Family Rhodomelaceae
Aiolocolax pulchellus M.A.Pocock 1956, (Blaauwberg eastwards)
Bostrychia intricata (Bory de Saint-Vincent) Montagne 1852, syn. Scytonema intricatum Bory de Saint-Vincent 1828, Stictosiphonia intricata (Bory de Saint-Vincent) P.C.Silva 1996, (Saldanha Bay, Kommetjie on Cape Peninsula eastward along whole of south coast)
Kelp fern, Carradoriella virgata (C.Agardh) P.C.Silva, 1996, recorded as syn. Polysiphonia virgata (C.Agardh) Sprengel 1827, syn. Hutchinsia virgata C.Agardh 1824, Carradoria virgata (C.Agardh) Kylin 1956, (Namibia to Brandfontein)
Cape chondria, Chondria capensis (Harvey) Askenasy 1888, syn. Laurencia capensis Harvey 1849, Chondriopsis capensis (Harvey) J.Agardh 1863, (Namibia to just east of Cape Agulhas. Southern African endemic.)
Falkenbergiella capensis Kylin 1938, (St James, Muizenberg and Swartklip in False Bay. Cape south coast. Endemic)
Herposiphonia didymosporangia Stegenga & Kemperman 1987, (St James, Brandfontein and coast of De Hoop nature reserve, Southern African endemic)
Herposiphonia heringii (Harvey) Falkenberg 1901, syn. Polysiphonia heringii Harvey 1847, (Between Hondeklipbaai and St James, endemic)
Herposiphonia secunda (C.Agardh) Ambronn 1880, syn. Hutchinsia secunda C.Agardh 1824, Polysiphonia secunda (C.Agardh) Zanardini 1840, Herposiphonia tenella f. secunda (C.Agardh) Hollenberg 1968, (Muizenberg, Cape Agulhas eastward to the tropics)
Flexuose laurencia, Laurencia flexuosa Kützing 1849, (False Bay to KwaZulu-Natal at least as far north as Mabibi, endemic)
Grape laurencia, Laurencia glomerata (Kützing) Kützing 1849, syn. Chondria glomerata Kützing 1847, (Port Nolloth?, Melkbosstrand?, Cape Peninsula eastward)
Laurencia peninsularis Stegenga, Bolton & R.J.Anderson 1987, (False Bay to East London, endemic)
Ophidocladus simpliciusculus (P.L.Crouan & H.M.Crouan) Falkenberg in Schmitz & Falkenberg 1897, syn. Polysiphonia simpliciuscula P.L.Crouan & H.M.Crouan 1852, (Hondeklipbaai?, Platboombaai on Cape Peninsula to Mozambique)
Pachychaeta cryptoclada Falkenberg 1901, (Swartklip, Brandfontein, more common in Eastern Cape, endemic)
Placophora binderi (J.Agardh) J.Agardh 1863, syn. Amansia binderi J.Agardh 1841, Micramansia binderi (J.Agardh) Kützing 1865, (Kalk Bay on the Cape Peninsula extending along south and east coast to southern Mozambique)
Placophora monocarpa (Montagne) Papenfuss 1956, syn. Polysiphonia monocarpa Montagne 1842, (Melkbosstrand to Strandfontein in False Bay, possibly further east, endemic)
Polysiphonia incompta Harvey 1847, (Namibia, the entire South African coast into Mozambique)
Polysiphonia namibiensis Stegenga & Engeldow in Stegenga, Bolton & Anderson 1997, (Olifantsbos, Cape Agulhas, Eastern Cape. Southern African endemic)
Polysiphonia scopulorum Harvey 1855, syn. Vertebrata scopulorum (Harvey) Kuntze 1891, Lophosiphonia scopulorum (Harvey) Womersley 1950, (Muizenberg and Clovelly in False Bay)
Polysiphonia urbana Harvey 1847, (Port Nolloth to Cape Agulhas. Southern African endemic)
Polysiphonia sp.1 (Muizenberg, endemic)
Red feather-weed, Pterosiphonia cloiophylla (C.Agardh) Falkenberg in Schmitz & Falkenberg 1897, syn. Rytiphlaea cloiophylla (C.Agardh) J.Agardh, Rhodomela cloiophylla C.Agardh 1822, Polysiphonia cloiophylla (C.Agardh) J.Agardh 1863, (Namibia, Cape west coast and Cape south coast)
Pterosiphonia stangeri (J.Agardh) Falkenberg 1901, syn. Polysiphonia stangeri J.Agardh 1863, Vertebrata stangeri (J.Agardh) Kuntze 1891, (Swartklip in False Bay, Cape south coast and KwaZulu-Natal. Southern African endemic)
Streblocladia camptoclada (Montagne) Falkenberg 1901, syn. Polysiphonia camptoclada Montagne 1837, (Yzerfontein to Clovelly in False Bay)
Streblocladia corymbifera (C.Agardh) Kylin 1938, syn. Hutchinsia corymbifera C.Agardh 1828, Polysiphonia corymbifera (C.Agardh) Endlicher 1843, (Saldanha to St. James in False Bay)
Stromatocarpus parasiticus Falkenberg in Schmitz & Falkenberg 1897, (Blaauwberg to Cape Hangklip, endemic)
Tayloriella tenebrosa (Harvey) Kylin 1938, Polysiphonia tenebrosa Harvey 1847, (Doring Bay, Muizenberg and Glencairn in False Bay eastward, Southern African endemic)
Family Spyridiaceae
Spyridia filamentosa (Wulfen) Harvey in Hooker 1833, syn. Fucus filamentosus Wulfen 1803, Hutchinsia filamentosa (Wulfen) C.Agardh 1824, Polysiphonia filamentosa (Wulfen) Sprengel 1827, Ceramium filamentosum (Wulfen) C.Agardh 1828, (Rare in Western Cape. False Bay, Eastern Cape to tropical East Africa)
Spyridia plumosa F.Schmitz ex J.Agardh 1897, (Camps Bay, Kowie area, extending into KwaZulu-Natal as far as Shelly Beach, endemic)
Family Wrangeliaceae
Anotrichium furcellatum (J.Agardh) Baldock 1976, syn. Griffithsia furcellata J.Agardh 1842, Neomonospora furcellata (J.Agardh) Feldmann-Mazoyer & Meslin 1939, Corynospora furcellata (J.Agardh) Levring 1974, (False Bay, Kowie)
Anotrichium tenue (C.Agardh) Nägeli 1862, syn. Griffithsia tenuis C.Agardh 1828, (Doring Bay to Cape Agulhas and further east to KwaZulu-Natal)
Griffithsia confervoides Suhr 1840, (Namibia to KwaZulu-Natal, Southern African endemic)
Gymnothamnion elegans (Schousboe ex C.Agardh) J.Agardh 1892, syn. Callithamnion elegans Schousboe ex C.Agardh 1828, (Bakoven on Cape Peninsula to KwaZulu-Natal)
Gymnothamnion elegans var. bisporum Stegenga 1986, (Hout Bay to East London, endemic)
Hommersandiella humilis (Kützing) Alongi, Cormaci & G.Furnari 2007, syn. Callithamnion humile Kützing 1849, Lomathamnion humile (Kützing) Stegenga 1989, (Namibia to Cape Hangklip, Southern African endemic)
Lomathamnion capense Stegenga 1984, (Cape Point to Arniston, endemic)
Pleonosporium filicinum (Harvey ex J.Agardh) De Toni 1903, syn. Halothamnion filicinum Harvey ex J.Agardh 1876, (Swartklip in False Bay to Natal, Southern African endemic)
Pleonosporium harveyanum (J.Agardh) De Toni 1903, syn. Halothamnion harveyanum J.Agardh 1876, (Namibia to East London, Southern African endemic)
Pleonosporium paternoster Stegenga 1986, (Paternoster and Oudekraal, endemic)
Pleonosporium ramulosum (J.Agardh) De Toni 1903, Corynospora ramulosa J.Agardh 1851, (Port Nolloth to southern Cape Peninsula, endemic)
Ptilothamnion polysporum Gordon-Mills & Wollaston in Wollaston 1984, (Swartklip in False Bay to Mozambique)
Spongoclonium caribaeum (Børgesen) M.J.Wynne 2005, syn. Mesothamnion caribaeum Børgesen 1917, Pleonosporium caribaeum (Børgesen) R.E.Norris 1985, (Clovelly in False Bay to KwaZulu-Natal, Widespread in tropical regions.)
Tiffaniella cymodoceae (Børgesen) E.M.Gordon 1972, syn. Spermothamnion cymodoceae Børgesen 1952, (Platbank on Cape Peninsula to Mozambique)
Tiffaniella schmitziana (E.S.Barton) Bolton & Stegenga 1987, syn. Spermothamnion schmitzianum E.S.Barton 1893, (Kraalbaai, Strandfontein, Port Elizabeth to Hluleka, endemic)
Wrangelia purpurifera J.Agardh 1863, (Paternoster to Kowie River, endemic)
Order Colaconematales
Family Colaconemataceae
Colaconema caespitosum (J.Agardh) Jackelman, Stegenga & J.J.Bolton 1991, (Kommetjie eastward entire south coast and Eastern Cape)
Colaconema daviesii (Dillwyn) Stegenga 1985, (Hondeklipbaai to Transkei)
Colaconema interpositum (Heydrich) H.Stegenga, J.J.Bolton & R.J.Anderson 1997, (Platbank, Cape Peninsula)
Colaconema nemalionis (De Notaris ex L.Dufour) Stegenga 1985, (Hondeklip Bay to East London)
Colaconema panduripodium H.Stegenga, J.J.Bolton & R.J.Anderson 1997, (Hondeklip Bay and Oudekraal, endemic)
Order Corallinales
Family Corallinaceae
Amphiroa capensis Areschoug 1852, (Llandudno on Cape Peninsula to Kowie River, endemic)
Horsetail coralline, Amphiroa ephedraea,
Arthrocardia corymbosa (Lamarck) Decaisne 1842, syn. Corallina corymbosa Lamarck 1815, Amphiroa corymbosa (Lamarck) Decaisne 1842, Cheilosporum corymbosum (Lamarck) Decaisne 1842, (Southern Cape Peninsula eastward)
Arthrocardia flabellata (Kützing) Manza 1940, syn. Corallina flabellata Kützing 1858, (Probably along the entire South African coast, extending into Mozambique)
Arthrocardia filicula (Lamarck) Johansen 1984, syn. Corallina filicula Lamarck 1815, Cheilosporum palmatum var. filicula (Lamarck) Yendo 1902, (Namibia and west coast)
Feather coralline, Corallina officinalis Linnaeus 1758, (Oudekraal eastward to Mission Rocks)
Hydrolithon samoënse (Foslie) Keats & Y.M.Chamberlain 1994, syn. Lithophyllum samoënse Foslie 1906, Pseudolithophyllum samoënse (Foslie) Adey 1970, (Yzerfontein, Western Cape to Sodwana Bay, KwaZulu-Natal.)
Finely forked coralline, Jania adhaerens J.V.Lamouroux, 1816 (TMNPMPA)
Jania crassa J.V.Lamouroux 1821, (St. James in False Bay, Eastern Cape and KwaZulu-Natal)
Arrowhead coralline, Jania cultrata (Harvey) J.H.Kim, Guiry & H.-G.Choi 2007, syn. Amphiroa cultrata Harvey 1849, Cheilosporum cultratum (Harvey) Areschoug 1852, (as Cheilosporum cultratum, Platboombaai on Cape Peninsula to Mozambique)
Jania sagittata (J.V.Lamouroux) Blainville 1834, syn. Corallina sagittata J.V.Lamouroux 1824, Amphiroa sagittata (J.V.Lamouroux) Decaisne 1842, Arthrocardia sagittata (J.V.Lamouroux) Decaisne 1842, Cheilosporum sagittatum (J.V.Lamouroux) Areschoug 1852, (as Cheilosporum sagittatum, Melkbosstrand to northern Mabibi in KwaZulu-Natal.)
Jania verrucosa J.V.Lamouroux 1816, syn. Corallina verrucosa (Lamouroux) Kützing 1858, (False Bay eastward)
Lithophyllum corallinae (P.L.Crouan & H.M.Crouan) Heydrich 1897, syn. Melobesia corallinae P.L.Crouan & H.M.Crouan 1867, Dermatolithon corallinae (P.L.Crouan & H.M.Crouan) Foslie 1902, Lithophyllum pustulatum var. corallinae (P.L.Crouan & H.M.Crouan) Foslie 1905, Lithophyllum macrocarpum f. corallinae (P.L.Crouan & H.M.Crouan) Foslie 1909, Tenarea corallinae (P.L.Crouan & H.M.Crouan) Notoya 1974, Titanoderma corallinae (P.L.Crouan & H.M.Crouan) Woelkerling, Y.M.Chamberlain & P.C.Silva 1985, (Kommetjie (Western Cape) to KwaZulu-Natal)
Lithophyllum neoatalayense Masaki 1968, (as Titanoderma neoatalayense, Groenriviermond (Northern Cape) to Cape Agulhas (Western Cape))
Lithophyllum polycephalum Foslie, syn. Titanoderma polycephalum (Foslie) Woelkerling, Y.M.Chamberlain & P.C.Silva 1985, (as Titanoderma polycephalum, False Bay to Cape Agulhas (Western Cape))
Lithophyllum pustulatum (J.V.Lamouroux) Foslie 1904, syn. Melobesia pustulata J.V.Lamouroux 1816, Titanoderma pustulatum (J.V.Lamouroux) Nägeli 1858, Dermatolithon pustulatum (J.V.Lamouroux) Foslie 1898, Epilithon pustulatum (J.V.Lamouroux) M.Lemoine 1921, Tenarea pustulata (J.V.Lamouroux) Shameel 1983, (as Titanoderma pustulatum, Occasional throughout the west coast and increasing in abundance toward KwaZulu-Natal where it is particularly abundant.)
Pneophyllum coronatum (Rosanoff) Penrose in Chamberlain 1994, syn. Melobesia coronata Rosanoff 1866, (Oudekraal, western Cape Peninsula, Western Cape.)
Pneophyllum fragile Kützing 1843, (Widespread along the west coast.)
Pneophyllum keatsii Y.M.Chamberlain 1994, (Oudekraal, western Cape Peninsula, Western Cape, to Cape Agulhas, Western Cape.)
Spongites discoidea (Foslie) D.Penrose & Woelkerling 1988, syn. Lithophyllum discoideum Foslie 1900, Hydrolithon discoideum (Foslie) M.L.Mendoza & J.Cabioch 1985, (Port Nolloth, Northern Cape, to Cape Agulhas, Western Cape.)
Scrolled coralline crust, Spongites impar (Foslie) Y.M.Chamberlain 1994, syn. Lithophyllum impar Foslie 1909, (Cape St. Martin just south of St. Helena Bay, Western Cape, to Oudekraal, western Cape Peninsula, Western Cape.)
Cochlear coralline crust, Spongites yendoi (Foslie) Y.M.Chamberlain 1993, syn. Lithophyllum yendoi (Foslie) Foslie 1900, Goniolithon yendoi Foslie 1900, Lithothamnion yendoi (Foslie) Lemoine 1965, Pseudolithophyllum yendoi (Foslie) Adey 1970, (Throughout South Africa (Namibia to the Mozambican border). Most abundant along the southern west and south coasts, becoming less common toward the east.)
Order Gelidiales
Family Gelidiaceae
Gelidium abbottiorum R.E.Norris, 1990 (TMNPMPA)
Gelidium applanatum Stegenga, Bolton & R.J.Anderson 1997, (Vulcan Rock off Hout Bay and Muizenberg)
Cape jelly-weed, Gelidium capense (S.G.Gmelin) P.C.Silva in P.C.Silva, E.G.Meñez, & Moe 1987, (Melkbosstrand to Kenton on Sea Eastern Cape. Endemic?)
Gelidium micropterum Kützing 1868, (Cape Peninsula to Knysna)
Saw-edged jelly-weed, Gelidium pristoides (Turner) Kützing 1843, (Sea Point and False Bay eastwards)
Fern-leafed jelly-weed, Gelidium pteridifolium R.E.Norris, Hommersand & Fredericq 1987, (Glencairn, Cape Hangklip, Eastern Cape and southern KwaZulu-Natal up to Tinley Manor just north of Durban)
Turf jelly-weed, Gelidium reptans (Suhr) Kylin 1938, syn. Phyllophora reptans Suhr 1841, (Cape Peninsula and False Bay to KwaZulu-Natal and Mozambique)
Red ribbons, Gelidium vittatum (Linnaeus) Kützing 1843, syn. Fucus vittatus Linnaeus 1767, Suhria vittata (Linnaeus) Endlicher 1843, Chaetangium vittatum (Linnaeus) P.G.Parkinson 1981, (Möwe Bay, Namibia to Brandfontein, drift specimens to Port Elizabeth)
Order Gigartinales
Family Caulacanthaceae
Spiky turf-weed, Caulacanthus ustulatus (Mertens ex Turner) Kützing 1843, syn. Fucus acicularis var. ustulatus Mertens ex Turner 1808, Sphaerococcus ustulatus (Mertens ex Turner) C.Agardh 1828, Gigartina ustulata (Mertens ex Turner) Greville 1830, Hypnea ustulata (Mertens ex Turner) Montagne 1840, Gelidium ustulatum (Mertens ex Turner) J.Agardh 1842, Olivia ustulata (Mertens ex Turner) Montagne 1846, (Whole South African coast)
Heringia mirabilis (C.Agardh) J.Agardh 1846, syn. Sphaerococcus mirabilis C.Agardh 1820, (Namibia to East London, Southern African endemic)
Family Cystocloniaceae
Straight-tipped hypnea, Hypnea ecklonii Suhr 1836, (Pearly Beach to Namibia, Southern African endemic)
Hypnea rosea Papenfuss 1947, (Strand in False Bay and Die Walle, just west of Cape Agulhas, and south and east coasts, endemic)
Green tips, Hypnea spicifera (Suhr) Harvey in J. Agardh 1847, syn. Gracilaria spicifera Suhr 1834, Hypnophycus spicifera (Suhr) Kützing 1843, (virtually the entire South African coast, Southern African endemic)
Fine hypnea, Hypnea tenuis Kylin 1938, (Mainly south and east coast, as far west as Swartklip in False Bay)
Roseleaf, Rhodophyllis reptans (Suhr) Papenfuss 1956, syn. Halymenia reptans Suhr 1834, Euhymenia reptans (Suhr) Kützing 1849, Kallymenia reptans (Suhr) E.S.Barton 1893, (Hondeklipbaai to KwaZulu-Natal, Southern African endemic)
Family Gigartinaceae
Red tongue-weed, Gigartina bracteata (S.G.Gmelin) Setchell & N.L.Gardner 1933, syn. Fucus bracteatus S.G.Gmelin 1768, (Namibia to Cape of Good Hope, drift material from Muizenberg, Southern African endemic)
Gigartina insignis (Endlicher & Diesing) F.Schmitz in E.S.Barton 1896, syn. Iridaea insignis Endlicher & Diesing 1845, (Muizenberg, Cape Hangklip to Kowie River, Southern African endemic)
Gigartina pistillata (S.G.Gmelin) Stackhouse 1809, syn. Fucus pistillatus S.G.Gmelin 1768, (Smitswinkel Bay and Swartklip east to the Kowie area)
Tongue-weed, Gigartina polycarpa (Kützing) Setchell & N.L.Gardner, 1933 (TMNPMPA)
Gigartina tysonii Reinbold in Tyson 1912, (Three Anchor Bay to Camps Bay, drift specimens from Platboombaai and Olifantsbos, endemic)
Gigartina scabiosa (Kützing) Papenfuss (date not specified) (TMNPMPA)
Iridaea convoluta (Areschoug ex J Agardh) Hewitt 1960, syn. Gigartina convoluta Areschoug ex J.Agardh 1899, (Table Bay to Cape of Good Hope, endemic)
Spotted mazzaella, Mazzaella capensis (J.Agardh) Fredericq in Hommersand et al. 1993, Iridaea capensis J.Agardh 1848, Iridophycus capensis (J.Agardh) Setchell & N.L.Gardner 1936, Gigartina capensis (J.Agardh) D.H.Kim 1976, (Port Nolloth to Cape Agulhas, extending into Namibia, Southern African endemic)
Convoluted mazzaella Mazzaella convoluta (Areschoug ex J.Agardh) Hommersand, 1994, (TMNPMPA)
Rhodoglossum alcicorne Stegenga, Bolton & R.J.Anderson 1997, (Hout Bay, endemic)
Sarcothalia radula (Esper) Edyvane & Womersley 1994, syn. Fucus radula Esper 1802, Sphaerococcus radula (Esper) C.Agardh 1822, Iridaea radula (Esper) Bory de Saint-Vincent 1828, Gigartina radula (Esper) J.Agardh 1851, (Port Nolloth to Cape Agulhas, rare at De Hoop, extending into Namibia)
Forked gigartina, Sarcothalia scutellata (Hering) Leister 1993, syn. Sphaerococcus scutellatus Hering 1841, Dicurella scutellata (Hering) Papenfuss 1940, Gigartina scutellata (Hering) Simons 1983, (Namibia to Cape Hangklip)
Twisted tongue-weed, Sarcothalia stiriata (Turner) Leister in Hommersand, Guiry, Fredericq & Leister 1993, syn. Fucus stiriata Turner 1807, Sphaerococcus stiriatus (Turner) C.Agardh 1817, Sphaerococcus radula var. stiriatus (Turner) Rudolphi 1831, Mastocarpus stiriatus (Turner) Kützing 1843, Gigartina stiriata (Turner) J.Agardh 1851, (Namibia and Port Nolloth to Cape Agulhas) as Gigartina stiriata in TMNPMPA.
Family Kallymeniaceae
Kallymenia agardhii R.E.Norris 1964, (Namibia to Cape Agulhas, Southern African endemic)
Kallymenia schizophylla J.Agardh 1848, (Namibia to southern Cape Peninsula and Cape Hangklip. Southern African endemic)
Pugetia harveyana (J.Agardh) R.E.Norris 1964, syn. Kallymenia harveyana J.Agardh 1844, (Namibia to southern Cape Peninsula, Drift material from Muizenberg)
Split disc-weed, Thamnophyllis discigera (J.Agardh) R.E.Norris 1964, syn. Rhodymenia discigera J.Agardh 1841, Callophyllis discigera (J.Agardh) J.Agardh 1847, (Port Nolloth to Cape Agulhas)
Thamnophyllis pocockiae R.E.Norris 1964, (St Helena Bay to East London)
Family Phyllophoraceae
Ahnfeltiopsis complicata (Kützing) P.C.Silva & DeCew 1992, complicated gymnogongrus, syn. Chondrus complicatus Kützing 1849, Gymnogongrus complicatus (Kützing) Papenfuss 1943, (Namibia to False Bay, Southern African endemic)
Ahnfeltiopsis glomerata (J.Agardh) P.C.Silva & DeCew 1992, clustered gymnogongrus, syn. Gymnogongrus glomeratus J.Agardh 1849, (Namibia to Cape Agulhas, Southern African endemic)
Ahnfeltiopsis intermedia (Kylin) Stegenga, Bolton & R.J.Anderson 1997, gymnogongrus, syn. Gymnogongrus intermedius Kylin 1938, (Kalk Bay, Sea Point and possibly Keurboomstrand in Plettenberg Bay)
Ahnfeltiopsis polyclada (Kützing) P.C.Silva & DeCew 1992, fine gymnogongrus, syn. Chondrus polycladus Kützing 1849, Gymnogongrus polycladus (Kützing) J.Agardh 1851, (False Bay to Brandfontein, possibly Melkbosstrand and Postberg)
Ahnfeltiopsis vermicularis (C.Agardh) P.C.Silva & DeCew 1992, fine gymnogongrus, syn. Sphaerococcus vermicularis C.Agardh 1817, Gymnogongrus vermicularis (C.Agardh) J.Agardh 1851, (Hondeklipbaai to False Bay, South African endemic)
Dilated gymnogongrus, Gymnogongrus dilatatus (Turner) J.Agardh 1851, syn. Fucus dilatatus Turner 1811, Sphaerococcus dilatatus (Turner) C.Agardh 1817, Pachycarpus dilatatus (Turner) Kützing 1843, (Namibia to southern Cape Peninsula, drift material from Muizenberg)
Family Rhizophyllidaceae
Portieria hornemannii (Lyngbye) P.C.Silva in P.C. Silva, Meñez & Moe 1987, syn. Desmia hornemannii Lyngbye 1819, Chondrococcus hornemannii (Lyngbye) F.Schmitz 1895, (Table Bay, False Bay, south and east coast, extending into Mozambique)
Order Gracilariales
Family Gracilariaceae
Agar-weed, Gracilaria gracilis (Stackhouse) Steentoft, L.M.Irvine & Farnham, 1995 (TMNPMPA)
Gracilaria verrucosa (Hudson) Papenfuss 1950, syn. Fucus verrucosus Hudson 1762, (recorded from: St Helena Bay, Velddrif, Saldanha Bay, Langebaan Lagoon, Table Bay, False Bay, Swartkops River)
Gracilariopsis lemaneiformis (Bory de Saint-Vincent) E.Y.Dawson, Acleto & Foldvik 1964, syn. Gigartina lemaneiformis Bory de Saint-Vincent 1828, Gracilaria lemaneiformis (Bory de Saint-Vincent) Greville 1830, Cordylecladia lemanaeformis (Bory de Saint-Vincent) M.A.Howe 1914, (Simon's Town in False Bay)
Family Pterocladiophilaceae
Gelidiocolax suhriae (M.T.Martin & M.A.Pocock) K.-C.Fan & Papenfuss 1959, syn. Choreocolax suhriae M.T.Martin & M.A.Pocock 1953, (Blaauwberg to Strandfontein, endemic)
Order Halymeniales
Family Grateloupiaceae
Grateloupia doryphora (Montagne) M.A.Howe 1914, syn. Halymenia doryphora Montagne 1839, (Port Nolloth to Cape Agulhas)
Grateloupia filicina (J.V.Lamouroux) C.Agardh 1822, syn. Delesseria filicina J.V.Lamouroux 1813, (Whole west coast and south coast to Eastern Cape as far as the Kowie area)
Grateloupia longifolia Kylin, 1938, (TMNPMPA),
Family Halymeniaceae
Red rubber-weed, Pachymenia carnosa (J.Agardh) J.Agardh 1876, syn. Platymenia carnosa J.Agardh 1848. Iridaea carnosa (J.Agardh) Kützing 1849, Schizymenia carnosa (J.Agardh) J.Agardh 1851, (Whole west coast into Namibia, eastward to Brandtfontein)
Pachymenia cornea (Kützing) Chiang 1970, syn. Iridaea cornea Kützing 1867, Cyrtymenia cornea (Kützing) F.Schmitz 1897, Phyllymenia cornea (Kützing) Setchell & Gardner 1936, (Doring Bay to East London)
Pachymenia orbitosa (Suhr) L.K.Russell in L.K. Russell et al. 2009, slippery orbits, syn. Iridaea orbitosa Suhr 1840, Aeodes orbitosa (Suhr) F.Schmitz 1894, (Whole Cape west coast, extending into Namibia, and eastward at least as far as Cape Agulhas, endemic)
Corrugated red algae, Phyllymenia belangeri (Bory de Saint-Vincent) Setchell & N.L.Gardner, 1936 recorded as syn. Grateloupia belangeri (Bory de Saint-Vincent) De Clerck, Gavio, Fredericq, Cocquyt & Coppejans, 2005, syn. Iridaea belangeri Bory de Saint-Vincent 1834, (Whole west coast extending into Namibia. Southernmost record from Platboombaai, endemic) (TMNPMPA)
Tattered rag-weed, Phyllymenia capensis (O.De Clerck) Gargiulo, M.Morabito & Manghisi, 2013, (TMNPMPA) recorded as syn. Grateloupia capensis O.De Clerck, 2005.
Constricted polyopes, Polyopes constrictus (Turner) J.Agardh 1851, syn. Fucus constrictus Turner 1809, Sphaerococcus constrictus (Turner) C.Agardh 1822, Gelidium constrictum (Turner) Kützing 1849, (Doring Bay to Kei River mouth)
Family Tsengiaceae
Lance-weed, Tsengia lanceolata (J.Agardh) Saunders & Kraft 2002, syn. Nemastoma lanceolatum J.Agardh 1847, (Hondeklipbaai to Cape Hangklip)
Tsengia pulchra (Baardseth) Masuda & Guiry 1994, syn. Nemastoma pulchrum Baardseth 1941, (found only once at the Cape of Good Hope)
Order Hapalidiales
Family Hapalidiaceae
Thin coralline crust, Phymatolithon foveatum (Y.M.Chamberlain & Keats) Maneveldt & E.Van der Merwe 2014, (TMNPMPA), syn. Leptophytum foveatum Y.M.Chamberlain & D.W.Keats, 1994.
Order Hildenbrandiales
Family Hildenbrandiaceae
Tar crust, Hildenbrandia lecannellieri Hariot 1887, (Entire west coast and east coast as far as Port Elizabeth)
Hildenbrandia rubra (Sommerfelt) Meneghini 1841, (Probably the whole of the west coast)
Order Nemaliales
Family Liagoraceae
Helminthocladia papenfussii Kylin 1938, (Oudekraal eastward at least as far as Cape Morgan)
Helminthora furcellata (Reinbold ex Tyson) M.T.Martin 1947, (Endemic, Three Anchor Bay to Cape Hangklip)
Family Scinaiaceae
Hedgehog seaweed, Nothogenia erinacea (Turner) P.G.Parkinson 1983, (Cape Fria, Namibia to East London)
Balloon weed, Nothogenia ovalis (Suhr) P.G.Parkinson 1983, syn. Dumontia ovalis Suhr 1840, (Endemic, Möwe Bay, Namibia to Cape Agulhas)
Scinaia capensis (Setchell) Huisman 1985, syn. Gloiophloea capensis Setchell 1914, (Endemic, Melkbosstrand to Kowie area of Eastern Cape)
Ramrod weed, Scinaia salicornioides (Kützing) J.Agardh 1851, syn. Ginnania salicornioides Kützing, (Endemic, Muizenberg to east coast)
Order Nemastomatales
Family Schizymeniaceae
Schizymenia apoda (J.Agardh) J.Agardh 1851, syn. Platymenia apoda J.Agardh 1848, Platymenia undulata var. obovata J.Agardh 1848, Schizymenia obovata (J.Agardh) J.Agardh 1851, (Port Nolloth to Cape Agulhas)
Order Palmariales
Family Meiodiscaceae
Meiodiscus concrescens (K.M.Drew) P.W.Gabrielson in Gabrielsen et al. 2000, syn. Audouinella concrescens (K.M.Drew) P.S.Dixon 1976, Rhodochorton concrescens, K.M. Drew 1928, (Hout Bay)
Family Rhodophysemataceae
Rhodophysema feldmannii Cabioch 1975, (Hout Bay to Platbank on Cape Peninsula)
Family Rhodothamniellaceae
Rhodothamniella floridula (Dillwyn) Feldmann in T.Christensen 1978, (Lambert's Bay to Hluleka, Transkei)
Order Peyssonneliales
Family Peyssonneliaceae
Peyssonnelia atropurpurea P.L.Crouan & H.M.Crouan 1867, (Yzerfontein to Brandfontein)
Red Fan-weed, Sonderophycus capensis (Montagne) M.J.Wynne 2011, Peyssonnelia capensis Montagne 1847, Pterigospermum capense (Montagne) Kuntze 1891, Sonderopelta capensis (Montagne) A.D.Krayesky 2009, (as Peyssonnelia capensis, Hout Bay on Cape Peninsula eastwards extending into Mozambique)
Order Plocamiales
Family Plocamiaceae
Plocamiocolax papenfussianus M.T.Martin & M.A.Pocock 1953, (Melkbosstrand to East London, endemic) (Arniston north to Rabbit Rock in KwaZulu-Natal)
Plocamium beckeri F.Schmitz ex Simons 1964, (Collected at Muizenberg, Eastern Cape and KwaZulu-Natal)
Coral plocamium Plocamium corallorhiza (Turner) J.D.Hooker & Harvey 1845, syn. Fucus corallorhiza Turner 1808, Thamnophora corallorhiza (Turner) C.Agardh 1822, (Yzerfontein to KwaZulu-Natal extending into southern Mozambique)
Horny plocamium, Plocamium cornutum (Turner) Harvey 1849, syn. Fucus cornutus Turner 1819, Thamnophora cornuta (Turner) Greville 1830, Thamnocarpus cornutus (Turner) Kützing 1843, (entire coastline of the Western Cape to Namibia, rarer in the Eastern Cape, Southern African endemic)
Plocamium glomeratum J.Agardh 1851, (Namibia to Still Bay, Southern African endemic)
Plocamium maxillosum (Poiret) J.V.Lamouroux 1813, syn. Fucus maxillosus Poiret 1808, (Hondeklipbaai to Cape Agulhas, endemic)
Rigid plocamium, Plocamium rigidum Bory de Saint-Vincent in Bélanger & Bory de Saint-Vincent 1834, syn. Nereidea rigida (Bory de Saint-Vincent) Kuntze 1891, (Namibia to Eastern Cape, Southern African endemic)
Plocamium sp. indet. (False Bay coast, endemic?)
Family Sarcodiaceae
Comb-fan weed, Trematocarpus flabellatus (J.Agardh) De Toni 1900, syn. Phyllotylus flabellatus J.Agardh 1847, Dicurella flabellata (J.Agardh) J.Agardh 1852, (Lüderitz to Port Elizabeth, Southern African endemic)
Trematocarpus fragilis (C.Agardh) De Toni 1900, syn. Sphaerococcus fragilis C.Agardh 1822, Chondrus fragilis (C.Agardh) Greville 1830, Dicurella fragilis (C.Agardh) J.Agardh 1852, (Port Nolloth to Brandfontein, Southern African endemic)
Order Rhodymeniales
Family Champiaceae
Compressed champia, Champia compressa Harvey 1838, (False Bay eastward to northern KwaZulu-Natal and extending into Mozambique. Rarer on west side of Cape Peninsula and also found at Kraalbaai and Paternoster)
Earthworm champia, Champia lumbricalis (Linnaeus) Desvaux, 1809.
Family Lomentariaceae
Lomentaria diffusa Stegenga, Bolton & R.J.Anderson 1997, (Saldanha Bay and Kraalbaai to Brandfontein, endemic)
Family Rhodymeniaceae
Botryocladia paucivesicaria Stegenga, Bolton & R.J.Anderson 1997, (Known from drift specimens collected on the west side of the Cape Peninsula at Noordhoek Beach and Olifantsbos, endemic)
Cape wine-weed, Rhodymenia capensis J.Agardh 1894, syn. Epymenia capensis (J.Agardh) Papenfuss 1940,
Rhodymenia holmesii Ardissone 1893, (drift material from Olifantsbos) (Southern half of the Cape Peninsula, endemic)
Stalked roseweed, Rhodymenia natalensis Kylin 1938, (From Namibia along the whole of the South African coast extending into southern Mozambique)
Broad wine weed, Rhodymenia obtusa (Greville) Womersley 1996, syn. Phyllophora obtusa Greville 1831, Epymenia obtusa (Greville) Kützing 1849, (Muizenberg and the southern Cape Peninsula to Namibia)
Palmate roseweed, Rhodymenia pseudopalmata (J.V.Lamouroux) P.C.Silva 1952, syn. Fucus pseudopalmatus J.V.Lamouroux 1805, Delesseria pseudopalmata (J.V.Lamouroux) J.V.Lamouroux 1813, (From drift at Strandfontein)
Order Sporolithales
Family Sporolithaceae
Velvety coralline crust, Heydrichia woelkerlingii R.A.Townsend, Y.M.Chamberlain & Keats, 1994 (TMNPMPA) (Guiry, M.D. & Guiry, G.M. (2023). AlgaeBase. World-wide electronic publication, National University of Ireland, Galway (taxonomic information republished from AlgaeBase with permission of M.D. Guiry). Heydrichia woelkerlingii R.A.Townsend, Y.M.Chamberlain & Keats, 1994. Accessed through: World Register of Marine Species at: https://www.marinespecies.org/aphia.php?p=taxdetails&id=213933 on 2023-10-13)
Class Rhodophyta incertae sedis
Order Rhodophycophyta incertae sedis
Family Rhodophycophyta incertae sedis
Callophycus densus (Sonder) Kraft 1984, syn. Thysanocladia densa Sonder 1871, (Olifantsbos to southern KwaZulu-Natal)
Class: Stylonematophyceae
Order: Stylonematales
Family Stylonemataceae
Stylonema alsidii (Zanardini, 1840) K.M.Drew 1956, (Saldanha Bay southward, and south coast of Western Cape, Eastern Cape to Kwa-Zulu Natal)
Neevea cf. repens Batters 1900, (Hout Bay)
Geographical position of places mentioned in species ranges
Algoa Bay, Eastern Cape,
Aliwal shoal, KwaZulu-Natal,
Arniston (Waenhuiskrans), Western Cape,
Betty's Bay, Western Cape,
Bhanga Neck, KwaZulu-Natal,
Bird Island, Eastern Cape,
Blaauwberg, Western Cape,
Black Rock, Northern KwaZulu-Natal,
Brandfontein, Western Cape,
Buffelsbaai (Cape Peninsula), Western Cape,
Buffelsbaai (west coast), Western Cape,
Buffelsbaai (south coast), Western Cape,
Cape Agulhas, Western Cape,
Cape Columbine, Western Cape,
Cape Frio, Namibia,
Cape of Good Hope, Western Cape, (sometimes used historically to refer to the Cape Province, or South Africa)
Cape Peninsula, Western Cape
Cape Hangklip, Western Cape,
Cape Infanta, Western Cape,
Clovelly, False Bay, Western Cape,
Dalebrook, False Bay, Western Cape,
Danger Point, Western Cape,
De Hoop, Western Cape, (just west of Cape Infanta)
De Walle, (Die Walle), (Just west of Agulhas)
Die Dam (Quoin Point), Western Cape,
Doring Bay (Doringbaai), Western Cape,
Durban, KwaZulu-Natal,
Dwesa, Eastern Cape,
East London, Eastern Cape,
False Bay, Western Cape,
Glencairn, False Bay, Western Cape,
Groenrivier (Groen River),
Groot Bergrivier estuary (Berg River, Velddrif), Western Cape,
Haga Haga, Eastern Cape (N of E.London)
The Haven, Eastern Cape, 150 km west of Port St. Johns,
Hermanus, Western Cape,
Hluleka, Eastern Cape,
Hondeklipbaai, Northern Cape,
Hout Bay, Cape Peninsula, Western Cape,
Isipingo, KwaZulu-Natal,
Island Rock, KwaZulu-Natal,
Kalk Bay, False Bay, Western Cape,
Kei River, Eastern Cape,
Kenton-on-Sea, Eastern Cape,
Keurboomstrand, Plettenberg Bay, Western Cape,
Knysna, Western Cape,
Kommetjie, Western Cape,
Koppie Alleen, De Hoop, Western Cape,
Kosi Bay, Kwa-Zulu-Natal,
Kowie River, Eastern Cape,
Kraalbaai, Langebaan lagoon, Western Cape,
Lala Nek, KwaZulu-Natal,
Lamberts Bay, Western Cape,
Leadsman shoal, KwaZulu-Natal,
Langebaan Lagoon, Western Cape,
Llandudno, Cape Peninsula, Western Cape,
Lüderitz, Namibia,
Mabibi, Kwa-Zulu-Natal,
Mapelane, Maphelana, KwaZulu-Natal, near St. Lucia,
Melkbosstrand, Western Cape,
Mission Rocks, KwaZulu-Natal,
Mkambati, KwaZulu-Natal,
Morgan's Bay, Eastern Cape, (Near Kei mouth)
Möwe Bay, Namibia, (Möwe Point lighthouse)
Mtwalume river, KwaZulu-Natal,
Noordhoek, Cape Peninsula, Western Cape,
Muizenberg, False Bay, Western Cape,
Oatlands Point, False Bay, Western Cape,
Oudekraal, Cape Peninsula, Western Cape,
Olifantsbos, Cape Peninsula, Western Cape,
Palm Beach, South Africa,
Park Rynie, KwaZulu-Natal,
Paternoster, Western Cape,
Papenkuilsfontein, Western Cape, 10 km west of Agulhas
Pearly Beach, Western Cape,
Platbank, Cape Peninsula, Western Cape,
Platboombaai,
Plettenberg Bay, Western Cape,
Ponta do Ouro, Mozambique border,
Port Alfred, Eastern Cape,
Port Edward, KwaZulu-Natal
Port Elizabeth, Eastern Cape,
Port Nolloth, Northern Cape,
Port St. Johns, KwaZulu-Natal,
Postberg, Western Cape,
Protea Banks, KwaZulu-Natal,
Rabbit Rock, KwaZulu-Natal,
Robberg, Western Cape,
Rocky Point, Namibia,
Saldanha Bay, Western Cape,
Saxon Reef, KwaZulu-Natal, (near Mozambique border),
Scarborough, Cape Peninsula, Western Cape,
Scottburgh, KwaZulu-Natal,
Sea Point, Cape Peninsula, Western Cape,
Shelly Beach, KwaZulu-Natal,
Simon's Town, Western Cape,
Smitswinkel Bay, False Bay, Western Cape,
Sodwana Bay, KwaZulu-Natal,
Soetwater,
Stilbaai (Still Bay), Western Cape,
St Helena Bay, Western Cape,
St. James, False Bay, Western Cape,
St Lucia, KwaZulu-Natal,
Strand, Western Cape,
Strandfontein, False Bay, Western Cape,
Strandfontein, Western Cape,
Swakopmund, Namibia,
Swartklip, False Bay, Western Cape,
Swartkops River,
Table Bay, Western Cape,
Three Anchor Bay, Cape Peninsula, Western Cape,
Three Sisters (Eastern Cape), Riet River, 10 km west of Port Alfred, Eastern Cape,
Trafalgar, KwaZulu-Natal,
Tsitsikamma, Eastern Cape,
Umhlali, KwaZulu-Natal, (mHlali river mouth)
Umpangazi, KwaZulu-Natal, (Cape Vidal?)
Uvongo, KwaZulu-Natal,
Waterloo Bay, Eastern Cape,
Yzerfontein, Western Cape,
See also
References
South Africa
Biology-related lists
Lists of biota of South Africa
Marine biota of South Africa | List of red seaweeds of the Cape Peninsula and False Bay | Biology | 15,347 |
22,058,786 | https://en.wikipedia.org/wiki/Drug-induced%20aseptic%20meningitis | Drug-Induced Aseptic Meningitis (DIAM) is a type of aseptic meningitis related to the use of medications such as nonsteroidal anti-inflammatory drugs (NSAIDs) or biologic drugs such as intravenous immunoglobulin (IVIG). Additionally, this condition generally shows clinical improvement after cessation of the medication, as well as a tendency to relapse with resumption of the medication.
Signs and Symptoms
The signs and symptoms of DIAM are similar to infectious meningitis including but not limited to headache, fever, neck stiffness, altered mental status and other neurological deficits such as numbness, paresthesias, seizure or weakness. Notably, the patient will have had recent exposure to one of the causative medications.
Causes
The following is a list of medications associated with DIAM.
Nonsteroidal anti-inflammatory drugs (NSAIDs)
Biologic drugs such as intravenous immunoglobulin (IVIG).
Antibiotics such as sulfonamides, isoniazid, ciprofloxacin, penicillin
Antiepileptic drugs such as Carbamazepine and Lamotrigine
Phenazopyridine
Monoclonal antibodies such as Infliximab, Adalimumab, Etanercept, Efalizumab, Cetuximab, and OKT3 antibodies.
Chemotherapeutic drugs such as Pemetrexed and Cytarabine.
Zimelidine (SSRI that is no longer available)
Azathioprine
Methotrexate
Allopurinol
Ranitidine
Pathophysiology
Meningitis, whether acute or chronic, is by definition an inflammation of the meninges. This can be due to infectious or non-infectious causes. DIAM is a noninfectious meningitis associated with the use of certain medications listed above. The pathogenesis of DIAM is poorly understood and may be related to autoimmune hypersensitivity reactions, although it may vary depending on the inciting medication. For instance, DIAM caused by OKT3 antibodies may be mediated by cytokine release rather than a hypersensitivity reaction. There is an association with certain underlying conditions such as Systemic Lupus Erythematosus (SLE); other underlying conditions present in a small number of patients include Sjögren syndrome, idiopathic thrombocytopenic purpura, rheumatoid arthritis, HIV, and Crohn's disease.
Diagnosis
Historically, the process of diagnosis involved attempting to identify any infectious causes, as these may be treatable with antibiotics or other medications. Lumbar puncture would be performed to collect cerebrospinal fluid (CSF) to culture for bacterial growth. Growth indicated a bacterial meningitis, while no growth indicated another cause, denoted "aseptic" meningitis. The most common form of this is viral meningitis. Recent medical advances allow rapid polymerase chain reaction (PCR) testing that analyzes the CSF for DNA or RNA. This can quickly determine whether bacterial or viral species are present in the CSF. If these are ruled out, as well as other causes such as parasitic or fungal infections, then the cause of the meningitis is likely noninfectious in nature. DIAM is among these noninfectious causes of aseptic meningitis.
Once infectious causes are ruled out, noninfectious causes should be investigated. These include a history of chemical irritation from recent surgery or from chemicals injected into the subarachnoid space such as spinal anesthesia, other inflammatory or vascular conditions such as sarcoidosis or vasculitis, as well as neoplastic conditions such as lymphoma.
CSF analysis tends to show inflammatory changes in DIAM, such as elevated white blood cell counts and elevated protein levels, while glucose is typically normal or low. MRI and CT imaging of the brain have shown changes consistent with blood–brain barrier disruption or cerebral edema, including T2-weighted changes that normalized after resolution of the condition.
In patients with SLE, DIAM should be distinguished from lupus aseptic meningitis (LAM). This can be done by CSF analysis (DIAM has a neutrophilic predominance while LAM has a lymphocytic predominance), as well as assessment of relevant labs such as complement levels and signs of a lupus flare-up.
Treatment
Immediate cessation of offending medications.
Prognosis
Generally, excellent prognosis with complete recovery if the offending medication is ceased.
Epidemiology
Drug-induced aseptic meningitis is a subgroup of aseptic meningitis in general, which occurs in approximately 20 per 100,000 people; the most common cause of aseptic meningitis overall is viral infection.
References
Meningitis
Drug-induced diseases | Drug-induced aseptic meningitis | Chemistry | 1,032 |
62,017,767 | https://en.wikipedia.org/wiki/Superplan | Superplan was a high-level programming language developed between 1949 and 1951 by Heinz Rutishauser, the name being a reference to "" (i.e. computation plan), in Konrad Zuse's terminology designating a single program.
The language was described in Rutishauser's 1951 publication Über automatische Rechenplanfertigung bei programmgesteuerten Rechenmaschinen (i.e. Automatically created Computation Plans for Program-Controlled Computing Machines).
Superplan introduced the keyword Für as its for loop, which had the following form (a_i being an array item):
Für i=base(increment)limit: a_i + addend ⇒ a_i
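In modern terms, the construct steps a loop variable from base to limit in steps of increment, applying an update to an array item on each pass. The following sketch is illustrative only (the name fuer and the array contents are not from Rutishauser's publication); it mirrors that semantics in Python:

```python
def fuer(base, increment, limit):
    """Generate the index sequence of a Superplan-style loop,
    i.e. 'Fuer i = base(increment)limit'."""
    i = base
    while i <= limit:
        yield i
        i += increment

# Illustrative use: accumulate a constant into selected array items,
# mirroring the body "a_i + addend => a_i".
a = [0.0] * 6
addend = 2.5
for i in fuer(1, 2, 5):   # i takes the values 1, 3, 5
    a[i] = a[i] + addend

print(a)  # [0.0, 2.5, 0.0, 2.5, 0.0, 2.5]
```

Note that the bound is inclusive, matching the "limit" of the original notation rather than Python's half-open range convention.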
See also
Compiler
Translator
References
Further reading
(77 pages)
Programming languages created in 1949
Procedural programming languages
Non-English-based programming languages
Swiss inventions
Heinz Rutishauser
Konrad Zuse
1940s establishments in Switzerland | Superplan | Technology | 162 |
537,026 | https://en.wikipedia.org/wiki/List%20of%20mathematical%20topics%20in%20quantum%20theory | This is a list of mathematical topics in quantum theory, by Wikipedia page. See also list of functional analysis topics, list of Lie group topics, list of quantum-mechanical systems with analytical solutions.
Mathematical formulation of quantum mechanics
bra–ket notation
canonical commutation relation
complete set of commuting observables
Heisenberg picture
Hilbert space
Interaction picture
Measurement in quantum mechanics
quantum field theory
quantum logic
quantum operation
Schrödinger picture
semiclassical
statistical ensemble
wavefunction
wave–particle duality
Wightman axioms
WKB approximation
Schrödinger equation
quantum mechanics, matrix mechanics, Hamiltonian (quantum mechanics)
particle in a box
particle in a ring
particle in a spherically symmetric potential
quantum harmonic oscillator
hydrogen atom
ring wave guide
particle in a one-dimensional lattice (periodic potential)
Fock symmetry in theory of hydrogen
Symmetry
identical particles
angular momentum
angular momentum operator
rotational invariance
rotational symmetry
rotation operator
translational symmetry
Lorentz symmetry
Parity transformation
Noether's theorem
Noether charge
Spin (physics)
isospin
Pauli matrices
scale invariance
spontaneous symmetry breaking
supersymmetry breaking
Quantum states
quantum number
Pauli exclusion principle
quantum indeterminacy
uncertainty principle
wavefunction collapse
zero-point energy
bound state
coherent state
squeezed coherent state
density state
Fock state, Fock space
vacuum state
quasinormal mode
no-cloning theorem
quantum entanglement
Dirac equation
spinor, spinor group, spinor bundle
Dirac sea
Spin foam
Poincaré group
gamma matrices
Dirac adjoint
Wigner's classification
anyon
Interpretations of quantum mechanics
Copenhagen interpretation
locality principle
Bell's theorem
Bell test loopholes
CHSH inequality
hidden variable theory
path integral formulation, quantum action
Bohm interpretation
many-worlds interpretation
Tsirelson's bound
Quantum field theory
Feynman diagram
One-loop Feynman diagram
Schwinger's quantum action principle
Propagator
Annihilation operator
S-matrix
Standard Model
Local quantum physics
Nonlocal
Effective field theory
Correlation function (quantum field theory)
Renormalizable
Cutoff
Infrared divergence, infrared fixed point
Ultraviolet divergence
Fermi's interaction
Path-ordering
Landau pole
Higgs mechanism
Wilson line
Wilson loop
Tadpole (physics)
Lattice gauge theory
BRST charge
Anomaly (physics)
Chiral anomaly
Braid statistics
Plekton
Computation
quantum computing
qubit
qutrit
pure qubit state
quantum dot
Kane quantum computer
quantum cryptography
quantum decoherence
quantum circuit
universal quantum computer
measurement based Quantum Computing
timeline of quantum computing
Supersymmetry
Lie superalgebra
supergroup (physics)
supercharge
supermultiplet
supergravity
Quantum gravity
theory of everything
loop quantum gravity
spin network
black hole thermodynamics
Non-commutative geometry
Quantum group
Hopf algebra
Noncommutative quantum field theory
String theory
See list of string theory topics
Matrix model
Quantum theory
Mathematics | List of mathematical topics in quantum theory | Physics | 578 |
6,443,993 | https://en.wikipedia.org/wiki/Thorpe%E2%80%93Ingold%20effect | The Thorpe–Ingold effect, gem-dimethyl effect, or angle compression is an effect observed in chemistry where increasing steric hindrance favours ring closure and intramolecular reactions. The effect was first reported by Beesley, Thorpe, and Ingold in 1915 as part of a study of cyclization reactions. It has since been generalized to many areas of chemistry.
The comparative rates of lactone formation (lactonization) of various 2-hydroxybenzenepropionic acids illustrate the effect. The placement of an increasing number of methyl groups accelerates the cyclization process.
One application of this effect is addition of a quaternary carbon (e.g., a gem-dimethyl group) in an alkyl chain to increase the reaction rate and/or equilibrium constant of cyclization reactions; an example of this is an olefin metathesis reaction. In the field of peptide foldamers, amino acid residues containing quaternary carbons such as 2-aminoisobutyric acid are used to promote formation of certain types of helices.
One proposed explanation for this effect is that the increased size of the substituents increases the angle between them. As a result, the angle between the other two substituents decreases. By moving them closer together, reactions between them are accelerated. It is thus a kinetic effect.
The effect also has some thermodynamic contribution, as the in silico strain energy decreases on going from cyclobutane to 1-methylcyclobutane and 1,1-dimethylcyclobutane by values between 1.5 and 8 kcal/mol.
A noteworthy example of the Thorpe-Ingold effect in supramolecular catalysis is given by diphenylmethane derivatives provided with guanidinium groups. These compounds are active in the cleavage of the RNA model compound HPNP. Substitution of the methylene group of the parent diphenylmethane spacer with cyclohexylidene and adamantylidene moieties enhances catalytic efficiency, with gem dialkyl effect accelerations of 4.5 and 9.1, respectively.
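As a back-of-the-envelope illustration (not part of the cited study), a rate acceleration k_rel can be converted into the implied lowering of the activation free energy via the standard transition-state relation ΔΔG‡ = −RT ln(k_rel). The sketch below applies this to the accelerations of 4.5 and 9.1 quoted above, assuming a temperature of 298.15 K:

```python
import math

R = 8.314462618  # gas constant, J/(mol*K)
T = 298.15       # assumed temperature, K (not stated in the source)

def ddg_kJ_per_mol(k_rel):
    """Activation free-energy lowering implied by a rate acceleration,
    via ddG = -RT ln(k_rel); negative values mean a lower barrier."""
    return -R * T * math.log(k_rel) / 1000.0

for k_rel in (4.5, 9.1):
    print(f"k_rel = {k_rel}: ddG = {ddg_kJ_per_mol(k_rel):.2f} kJ/mol")
```

The quoted gem-dialkyl accelerations thus correspond to barrier lowerings of only a few kJ/mol, consistent with the modest angle-compression picture described above.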
See also
Chelate effect
Flippin-Lodge angle
References
Physical organic chemistry
Chemical kinetics
Stereochemistry | Thorpe–Ingold effect | Physics,Chemistry | 480 |
8,371,384 | https://en.wikipedia.org/wiki/3-j%20symbol | In quantum mechanics, the Wigner's 3-j symbols, also called 3-jm symbols, are an alternative to Clebsch–Gordan coefficients for the purpose of adding angular momenta. While the two approaches address exactly the same physical problem, the 3-j symbols do so more symmetrically.
Mathematical relation to Clebsch–Gordan coefficients
The 3-j symbols are given in terms of the Clebsch–Gordan coefficients by
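In the widely used Condon–Shortley phase convention, this relation is commonly written as:

```latex
\begin{pmatrix} j_1 & j_2 & j_3 \\ m_1 & m_2 & m_3 \end{pmatrix}
\equiv \frac{(-1)^{\,j_1 - j_2 - m_3}}{\sqrt{2 j_3 + 1}}
\,\langle j_1\, m_1\, j_2\, m_2 \,|\, j_3\, ({-m_3}) \rangle .
```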
The j and m components are angular-momentum quantum numbers, i.e., every j (and every corresponding m) is either a nonnegative integer or a half-odd integer. The exponent of the sign factor is always an integer, so it remains the same when transposed to the left, and the inverse relation follows upon making the substitution m3 → −m3.
Explicit expression
where δ denotes the Kronecker delta.
The summation is performed over those integer values for which the argument of each factorial in the denominator is non-negative; the lower and upper summation limits are chosen accordingly. Factorials of negative numbers are conventionally taken equal to zero, so that values of the 3-j symbol outside the allowed domain are automatically set to zero.
Definitional relation to Clebsch–Gordan coefficients
The CG coefficients are defined so as to express the addition of two angular momenta in terms of a third:
The 3-j symbols, on the other hand, are the coefficients with which three angular momenta must be added so that the resultant is zero:
Here, |0 0⟩ is the zero-angular-momentum state (j = m = 0). It is apparent that the 3-j symbol treats all three angular momenta involved in the addition problem on an equal footing and is therefore more symmetrical than the CG coefficient.
Since the state |0 0⟩ is unchanged by rotation, one also says that the contraction of the product of three rotational states with a 3-j symbol is invariant under rotations.
Selection rules
The Wigner 3-j symbol is zero unless all these conditions are satisfied:
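Stated explicitly, the standard selection rules are:

```latex
\begin{aligned}
& m_i \in \{-j_i,\, -j_i + 1,\, \ldots,\, j_i\} \quad (i = 1, 2, 3), \\
& m_1 + m_2 + m_3 = 0, \\
& |j_1 - j_2| \le j_3 \le j_1 + j_2 \quad \text{(triangle condition)}, \\
& j_1 + j_2 + j_3 \ \text{is an integer (and even, if } m_1 = m_2 = m_3 = 0\text{)}.
\end{aligned}
```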
Symmetry properties
A 3-j symbol is invariant under an even permutation of its columns:
An odd permutation of the columns gives a phase factor:
Changing the sign of the quantum numbers (time reversal) also gives a phase:
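Explicitly, these three symmetry statements read:

```latex
\begin{pmatrix} j_1 & j_2 & j_3 \\ m_1 & m_2 & m_3 \end{pmatrix}
= \begin{pmatrix} j_2 & j_3 & j_1 \\ m_2 & m_3 & m_1 \end{pmatrix}
= \begin{pmatrix} j_3 & j_1 & j_2 \\ m_3 & m_1 & m_2 \end{pmatrix},
```

```latex
\begin{pmatrix} j_2 & j_1 & j_3 \\ m_2 & m_1 & m_3 \end{pmatrix}
= (-1)^{j_1 + j_2 + j_3}
\begin{pmatrix} j_1 & j_2 & j_3 \\ m_1 & m_2 & m_3 \end{pmatrix},
\qquad
\begin{pmatrix} j_1 & j_2 & j_3 \\ -m_1 & -m_2 & -m_3 \end{pmatrix}
= (-1)^{j_1 + j_2 + j_3}
\begin{pmatrix} j_1 & j_2 & j_3 \\ m_1 & m_2 & m_3 \end{pmatrix}.
```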
The 3-j symbols also have so-called Regge symmetries, which are not due to permutations or time reversal. These symmetries are:
With the Regge symmetries, the 3-j symbol has a total of 72 symmetries. These are best displayed by defining the Regge symbol, which is in one-to-one correspondence with the 3-j symbol and has the properties of a semi-magic square:
whereby the 72 symmetries now correspond to 3! row and 3! column interchanges plus a transposition of the matrix. These facts can be used to devise an effective storage scheme.
Orthogonality relations
A system of two angular momenta with magnitudes j1 and j2 can be described either in terms of the uncoupled basis states (labeled by the quantum numbers m1 and m2), or the coupled basis states (labeled by j3 and m3). The 3-j symbols constitute a unitary transformation between these two bases, and this unitarity implies the orthogonality relations
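In standard notation, with {j1 j2 j3} denoting the triangular delta described below, the two relations are:

```latex
(2j + 1) \sum_{m_1, m_2}
\begin{pmatrix} j_1 & j_2 & j \\ m_1 & m_2 & m \end{pmatrix}
\begin{pmatrix} j_1 & j_2 & j' \\ m_1 & m_2 & m' \end{pmatrix}
= \delta_{j j'}\, \delta_{m m'}\, \{j_1\, j_2\, j\},
```

```latex
\sum_{j, m} (2j + 1)
\begin{pmatrix} j_1 & j_2 & j \\ m_1 & m_2 & m \end{pmatrix}
\begin{pmatrix} j_1 & j_2 & j \\ m_1' & m_2' & m \end{pmatrix}
= \delta_{m_1 m_1'}\, \delta_{m_2 m_2'}.
```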
The triangular delta is equal to 1 when the triad (j1, j2, j3) satisfies the triangle conditions, and is zero otherwise. The triangular delta itself is sometimes confusingly called a "3-j symbol" (without the m) in analogy to 6-j and 9-j symbols, all of which are irreducible summations of 3-jm symbols where no variables remain.
Relation to spherical harmonics; Gaunt coefficients
The 3-jm symbols give the integral of the products of three spherical harmonics
with l1, l2, l3 and m1, m2, m3 integers. These integrals are called Gaunt coefficients.
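Explicitly, the standard form of this integral is:

```latex
\int Y_{l_1 m_1}(\theta, \varphi)\, Y_{l_2 m_2}(\theta, \varphi)\, Y_{l_3 m_3}(\theta, \varphi)\, \mathrm{d}\Omega
= \sqrt{\frac{(2 l_1 + 1)(2 l_2 + 1)(2 l_3 + 1)}{4\pi}}
\begin{pmatrix} l_1 & l_2 & l_3 \\ 0 & 0 & 0 \end{pmatrix}
\begin{pmatrix} l_1 & l_2 & l_3 \\ m_1 & m_2 & m_3 \end{pmatrix}.
```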
Relation to integrals of spin-weighted spherical harmonics
Similar relations exist for the spin-weighted spherical harmonics if s1 + s2 + s3 = 0:
Recursion relations
Asymptotic expressions
In the limit of large quantum numbers, a non-zero 3-j symbol is approximately given in terms of a Wigner d-function. A generally better approximation, obeying the Regge symmetry, is also available.
Metric tensor
The following quantity acts as a metric tensor in angular-momentum theory and is also known as a Wigner 1-jm symbol:
It can be used to perform time reversal on angular momenta.
Special cases and other properties
From equation (3.7.9) in Edmonds (1960),
where the P are Legendre polynomials.
Relation to Racah V-coefficients
Wigner 3-j symbols are related to Racah V-coefficients by a simple phase:
Relation to group theory
This section essentially recasts the definitional relation in the language of group theory.
A group representation of a group is a homomorphism of the group into a group of linear transformations over some vector space. The linear transformations can be given by a group of matrices with respect to some basis of the vector space.
The group of transformations leaving angular momenta invariant is the three-dimensional rotation group SO(3). When "spin" angular momenta are included, the group is its double covering group, SU(2).
A reducible representation is one where a change of basis can be applied to bring all the matrices into block-diagonal form. A representation is irreducible (an irrep) if no such transformation exists.
For each value of j, the 2j + 1 kets |j m⟩ form a basis for an irreducible representation (irrep) of SO(3)/SU(2) over the complex numbers. Given two irreps, the tensor direct product can be reduced to a sum of irreps, giving rise to the Clebsch–Gordan coefficients; alternatively, reduction of the triple product of three irreps to the trivial irrep gives rise to the 3-j symbols.
3j symbols for other groups
The symbol has been most intensely studied in the context of the coupling of angular momentum. For this, it is strongly related to the group representation theory of the groups SU(2) and SO(3), as discussed above. However, many other groups are of importance in physics and chemistry, and there has been much work on the symbol for these other groups. In this section, some of that work is considered.
Simply reducible groups
The original paper by Wigner was not restricted to SO(3)/SU(2) but instead focussed on simply reducible (SR) groups. These are groups in which:
all classes are ambivalent, i.e. if an element is a member of a class, then so is its inverse;
the Kronecker product of two irreps is multiplicity-free, i.e. does not contain any irrep more than once.
For SR groups, every irrep is equivalent to its complex conjugate, and under permutations of the columns the absolute value of the symbol is invariant; the phase of each can be chosen so that the symbols at most change sign under odd permutations and remain unchanged under even permutations.
General compact groups
Compact groups form a wide class of groups with topological structure. They include the finite groups with added discrete topology and many of the Lie groups. General compact groups will be neither ambivalent nor multiplicity-free. Derome and Sharp, and later Derome, examined the symbol for the general case using the relation to the Clebsch–Gordan coefficients of
where is the dimension of the representation space of and is the complex conjugate representation to .
By examining permutations of columns of the symbol, they showed three cases:
if all of are inequivalent, then the symbol may be chosen to be invariant under any permutation of its columns;
if exactly two are equivalent, then transpositions of its columns may be chosen so that some symbols are invariant while others change sign. An approach using a wreath product of the group with showed that these correspond to the representations or of the symmetric group . Cyclic permutations leave the symbol invariant;
if all three are equivalent, the behaviour depends on the representations of the symmetric group. Wreath-group representations corresponding to are invariant under transpositions of the columns, those corresponding to change sign under transpositions, while a pair corresponding to the two-dimensional representation transforms accordingly.
Further research into symbols for compact groups has been performed based on these principles.
SU(n)
The special unitary group SU(n) is the Lie group of n × n unitary matrices with determinant 1. The group SU(3) is important in particle theory. There are many papers dealing with the symbol, or an equivalent, for SU(3). The symbol for the group SU(4) has been studied, and there is also work on the general SU(n) groups.
Crystallographic point groups
There are many papers dealing with the symbols or Clebsch–Gordan coefficients for the finite crystallographic point groups and the double point groups. The book by Butler references these and details the theory along with tables.
Magnetic groups
Magnetic groups include antilinear operators as well as linear operators. They need to be dealt with using Wigner's theory of corepresentations of unitary and antiunitary groups. A significant departure from standard representation theory is that the multiplicity of the irreducible corepresentation in the direct product of the irreducible corepresentations is generally smaller than the multiplicity of the trivial corepresentation in the triple product, leading to significant differences between the Clebsch–Gordan coefficients and the symbol. The symbols have been examined for the grey groups and for the magnetic point groups.
See also
Clebsch–Gordan coefficients
Spherical harmonics
6-j symbol
9-j symbol
Representations of classical Lie groups
References
L. C. Biedenharn and J. D. Louck, Angular Momentum in Quantum Physics, volume 8 of Encyclopedia of Mathematics, Addison-Wesley, Reading, 1981.
D. M. Brink and G. R. Satchler, Angular Momentum, 3rd edition, Clarendon, Oxford, 1993.
A. R. Edmonds, Angular Momentum in Quantum Mechanics, 2nd edition, Princeton University Press, Princeton, 1960.
External links
(Numerical)
369j-symbol calculator at the Plasma Laboratory of Weizmann Institute of Science (Numerical)
Frederik J Simons: Matlab software archive, the code THREEJ.M
Sage (mathematics software) Gives exact answer for any value of j, m
(accurate; C, fortran, python)
(fast lookup, accurate; C, fortran)
Rotational symmetry
Representation theory of Lie groups
Quantum mechanics
Conocybe rickenii

Conocybe rickenii is a mushroom from the genus Conocybe. Its edibility is disputed, and it has the appearance of a typical little brown mushroom with a small, conical cap and a long, thin stem. In colour, it is generally cream-brown, lighter on the stem, and it has a thin layer of flesh with no distinct smell or taste. It is a coprophilous fungus, feeding off dung, and it is most common on very rich soil or growing directly from dung. It can be found in Europe, Australia and Pacific islands.
Taxonomy
Conocybe rickenii was first described in 1930 by German mycologist Julius Schäffer and named Galera rickenii. It was reclassified by Robert Kühner, who placed it in the genus Conocybe.
Description
Conocybe rickenii has a conical cap of across, which is an ochre-brown, sometimes becoming a little more grey at the centre. The stem is typically in height, by in thickness, and is whitish cream, darkening to a dirty brown with age. The thin layer of flesh is grey-brown in the cap, while lighter in the stem. It has ochre-cream (later darkening to rusty-ochre) gills, which are adnate, leaving a brown spore print. The spores themselves are elliptic to oval, measuring between 10–20 μm by 6–12 μm. It has two-spored basidia, and a cellular cap cuticle.
It is generally a little larger than the slightly more common coprophilous C. pubescens, while it can be differentiated from other dung-loving Conocybe by its two-spored basidia, large spores and the fact it does not have lecythiform (flask-shaped) caulocystidia.
Edibility
British mycologist Roger Phillips lists the edibility as unknown, while David Pegler considers it inedible. The flesh has no distinct smell or taste.
Distribution and habitat
Conocybe rickenii grows on extremely rich soil, especially on dung and compost heaps. It can be found in very large numbers in gardens where horse manure has been used to enrich the soil. It can be found in Europe, Australia, Pacific islands, and the United States.
References
Bolbitiaceae
Fungi described in 1930
Fungi of Europe
Fungi of Oceania
Fungi of North America
Fungi without expected TNC conservation status
Fungus species
Comparison of programming languages (associative array)

This comparison of programming languages (associative arrays) compares the features of associative array data structures or array-lookup processing for over 40 computer programming languages.
Language support
The following is a comparison of associative arrays (also "mapping", "hash", and "dictionary") in various programming languages.
AWK
AWK has built-in, language-level support for associative arrays.
For example:
phonebook["Sally Smart"] = "555-9999"
phonebook["John Doe"] = "555-1212"
phonebook["J. Random Hacker"] = "555-1337"
The following code loops through an associative array and prints its contents:
for (name in phonebook) {
print name, " ", phonebook[name]
}
The user can search for elements in an associative array, and delete elements from the array.
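For instance, membership testing uses AWK's `in` operator and removal uses the `delete` statement (a minimal sketch, run here from the shell):

```shell
awk 'BEGIN {
    phonebook["Sally Smart"] = "555-9999"
    if ("Sally Smart" in phonebook)          # key-existence test
        print "found " phonebook["Sally Smart"]
    delete phonebook["Sally Smart"]          # remove the entry
    if (!("Sally Smart" in phonebook))
        print "deleted"
}'
```

Note that the `in` test does not create the key, whereas a plain reference such as `phonebook[name]` would.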
The following shows how multi-dimensional associative arrays can be simulated in standard AWK using concatenation and the built-in string-separator variable SUBSEP:
{ # for every input line
multi[$1 SUBSEP $2]++;
}
#
END {
for (x in multi) {
split(x, arr, SUBSEP);
print arr[1], arr[2], multi[x];
}
}
C
There is no standard implementation of associative arrays in C, but a 3rd-party library, C Hash Table, with BSD license, is available.
Another 3rd-party library, uthash, also creates associative arrays from C structures. A structure represents a value, and one of the structure fields serves as the key.
Finally, the GLib library also supports associative arrays, along with many other advanced data types and is the recommended implementation of the GNU Project.
Similar to GLib, Apple's cross-platform Core Foundation framework provides several basic data types. In particular, there are reference-counted CFDictionary and CFMutableDictionary.
C#
C# uses the collection classes provided by the .NET Framework. The most commonly used associative array type is System.Collections.Generic.Dictionary<TKey, TValue>, which is implemented as a mutable hash table. The relatively new System.Collections.Immutable package, available in .NET Framework versions 4.5 and above, and in all versions of .NET Core, also includes the System.Collections.Immutable.Dictionary<TKey, TValue> type, which is implemented using an AVL tree. The methods that would normally mutate the object in-place instead return a new object that represents the state of the original object after mutation.
Creation
The following demonstrates three means of populating a mutable dictionary:
the Add method, which adds a key and value and throws an exception if the key already exists in the dictionary;
assigning to the indexer, which overwrites any existing value, if present; and
assigning to the backing property of the indexer, for which the indexer is syntactic sugar (not applicable to C#, see F# or VB.NET examples).
var dictionary = new Dictionary<string, string>();
dictionary.Add("Sally Smart", "555-9999");
dictionary["John Doe"] = "555-1212";
// Not allowed in C#.
// dictionary.Item("J. Random Hacker") = "553-1337";
dictionary["J. Random Hacker"] = "553-1337";
The dictionary can also be initialized during construction using a "collection initializer", which compiles to repeated calls to Add.
var dictionary = new Dictionary<string, string> {
{ "Sally Smart", "555-9999" },
{ "John Doe", "555-1212" },
{ "J. Random Hacker", "553-1337" }
};
Access by key
Values are primarily retrieved using the indexer (which throws an exception if the key does not exist) and the TryGetValue method, which has an output parameter for the sought value and a Boolean return-value indicating whether the key was found.
var sallyNumber = dictionary["Sally Smart"];
var sallyNumber = dictionary.TryGetValue("Sally Smart", out var result) ? result : "n/a";
In this example, the sallyNumber value will now contain the string "555-9999".
Enumeration
A dictionary can be viewed as a sequence of keys, sequence of values, or sequence of pairs of keys and values represented by instances of the KeyValuePair<TKey, TValue> type, although there is no guarantee of order. For a sorted dictionary, the programmer could choose to use a SortedDictionary<TKey, TValue> or use the OrderBy LINQ extension method when enumerating.
The following demonstrates enumeration using a foreach loop:
// loop through the collection and display each entry.
foreach (KeyValuePair<string,string> kvp in dictionary)
{
Console.WriteLine("Phone number for {0} is {1}", kvp.Key, kvp.Value);
}
C++
C++ has a form of associative array called std::map (see Standard Template Library#Containers). One could create a phone-book map with the following code in C++:
#include <map>
#include <string>
#include <utility>
int main() {
std::map<std::string, std::string> phone_book;
phone_book.insert(std::make_pair("Sally Smart", "555-9999"));
phone_book.insert(std::make_pair("John Doe", "555-1212"));
phone_book.insert(std::make_pair("J. Random Hacker", "553-1337"));
}
Or less efficiently, as this creates temporary std::string values:
#include <map>
#include <string>
int main() {
std::map<std::string, std::string> phone_book;
phone_book["Sally Smart"] = "555-9999";
phone_book["John Doe"] = "555-1212";
phone_book["J. Random Hacker"] = "553-1337";
}
With the extension of initialization lists in C++11, entries can be added during a map's construction as shown below:
#include <map>
#include <string>
int main() {
std::map<std::string, std::string> phone_book {
{"Sally Smart", "555-9999"},
{"John Doe", "555-1212"},
{"J. Random Hacker", "553-1337"}
};
}
You can iterate through the list with the following code (C++03):
std::map<std::string, std::string>::iterator curr, end;
for(curr = phone_book.begin(), end = phone_book.end(); curr != end; ++curr)
std::cout << curr->first << " = " << curr->second << std::endl;
The same task in C++11:
for(const auto& curr : phone_book)
std::cout << curr.first << " = " << curr.second << std::endl;
Using the structured binding available in C++17:
for (const auto& [name, number] : phone_book) {
std::cout << name << " = " << number << std::endl;
}
In C++, the std::map class is templated which allows the data types of keys and values to be different for different map instances. For a given instance of the map class the keys must be of the same base type. The same must be true for all of the values. Although std::map is typically implemented using a self-balancing binary search tree, C++11 defines a second map called std::unordered_map, which has the algorithmic characteristics of a hash table. This is a common vendor extension to the Standard Template Library (STL) as well, usually called hash_map, available from such implementations as SGI and STLPort.
Cobra
Initializing an empty dictionary and adding items in Cobra:
Alternatively, a dictionary can be initialized with all items during construction:
The dictionary can be enumerated by a for-loop, but there is no guaranteed order:
ColdFusion Markup Language
A structure in ColdFusion Markup Language (CFML) is equivalent to an associative array:
dynamicKeyName = "John Doe";
phoneBook = {
"Sally Smart" = "555-9999",
"#dynamicKeyName#" = "555-4321",
"J. Random Hacker" = "555-1337",
UnknownComic = "???"
};
writeOutput(phoneBook.UnknownComic); // ???
writeDump(phoneBook); // entire struct
D
D offers direct support for associative arrays in the core language; such arrays are implemented as a chaining hash table with binary trees. The equivalent example would be:
int main() {
string[ string ] phone_book;
phone_book["Sally Smart"] = "555-9999";
phone_book["John Doe"] = "555-1212";
phone_book["J. Random Hacker"] = "553-1337";
return 0;
}
Keys and values can be any types, but all the keys in an associative array must be of the same type, and the same goes for dependent values.
Looping through all properties and associated values, and printing them, can be coded as follows:
foreach (key, value; phone_book) {
writeln("Number for " ~ key ~ ": " ~ value );
}
A property can be removed as follows:
phone_book.remove("Sally Smart");
Delphi
Delphi supports several standard containers, including TDictionary<T>:
uses
SysUtils,
Generics.Collections;
var
PhoneBook: TDictionary<string, string>;
Entry: TPair<string, string>;
begin
PhoneBook := TDictionary<string, string>.Create;
PhoneBook.Add('Sally Smart', '555-9999');
PhoneBook.Add('John Doe', '555-1212');
PhoneBook.Add('J. Random Hacker', '553-1337');
for Entry in PhoneBook do
Writeln(Format('Number for %s: %s',[Entry.Key, Entry.Value]));
end.
Pre-2009 Delphi versions do not support associative arrays directly. Such arrays can be simulated using the TStrings class:
procedure TForm1.Button1Click(Sender: TObject);
var
DataField: TStrings;
i: Integer;
begin
DataField := TStringList.Create;
DataField.Values['Sally Smart'] := '555-9999';
DataField.Values['John Doe'] := '555-1212';
DataField.Values['J. Random Hacker'] := '553-1337';
// access an entry and display it in a message box
ShowMessage(DataField.Values['Sally Smart']);
// loop through the associative array
for i := 0 to DataField.Count - 1 do
begin
ShowMessage('Number for ' + DataField.Names[i] + ': ' + DataField.ValueFromIndex[i]);
end;
DataField.Free;
end;
Erlang
Erlang offers many ways to represent mappings; three of the most common in the standard library are keylists, dictionaries, and maps.
Keylists
Keylists are lists of tuples, where the first element of each tuple is a key, and the second is a value. Functions for operating on keylists are provided in the lists module.
PhoneBook = [{"Sally Smith", "555-9999"},
{"John Doe", "555-1212"},
{"J. Random Hacker", "553-1337"}].
Accessing an element of the keylist can be done with the lists:keyfind/3 function:
{_, Phone} = lists:keyfind("Sally Smith", 1, PhoneBook),
io:format("Phone number: ~s~n", [Phone]).
Dictionaries
Dictionaries are implemented in the dict module of the standard library. A new dictionary is created using the dict:new/0 function and new key/value pairs are stored using the dict:store/3 function:
PhoneBook1 = dict:new(),
PhoneBook2 = dict:store("Sally Smith", "555-9999", PhoneBook1),
PhoneBook3 = dict:store("John Doe", "555-1212", PhoneBook2),
PhoneBook = dict:store("J. Random Hacker", "553-1337", PhoneBook3).
Such a serial initialization would be more idiomatically represented in Erlang with the appropriate function:
PhoneBook = dict:from_list([{"Sally Smith", "555-9999"},
{"John Doe", "555-1212"},
{"J. Random Hacker", "553-1337"}]).
The dictionary can be accessed using the dict:find/2 function:
{ok, Phone} = dict:find("Sally Smith", PhoneBook),
io:format("Phone: ~s~n", [Phone]).
In both cases, any Erlang term can be used as the key. Variations include the orddict module, implementing ordered dictionaries, and gb_trees, implementing general balanced trees.
Maps
Maps were introduced in OTP 17.0, and combine the strengths of keylists and dictionaries. A map is defined using the syntax #{ K1 => V1, ... Kn => Vn }:
PhoneBook = #{"Sally Smith" => "555-9999",
"John Doe" => "555-1212",
"J. Random Hacker" => "553-1337"}.
Basic functions to interact with maps are available from the maps module. For example, the maps:find/2 function returns the value associated with a key:
{ok, Phone} = maps:find("Sally Smith", PhoneBook),
io:format("Phone: ~s~n", [Phone]).
Unlike dictionaries, maps can be pattern matched upon:
#{"Sally Smith" := Phone} = PhoneBook,
io:format("Phone: ~s~n", [Phone]).
Erlang also provides syntax sugar for functional updates—creating a new map based on an existing one, but with modified values or additional keys:
PhoneBook2 = PhoneBook#{
% the `:=` operator updates the value associated with an existing key
"J. Random Hacker" := "355-7331",
% the `=>` operator adds a new key-value pair, potentially replacing an existing one
"Alice Wonderland" => "555-1865"
}
F#
Map<'Key,'Value>
At runtime, F# provides the Collections.Map<'Key,'Value> type, which is an immutable AVL tree.
Creation
The following example calls the Map constructor, which operates on a list (a semicolon delimited sequence of elements enclosed in square brackets) of tuples (which in F# are comma-delimited sequences of elements).
let numbers =
[
"Sally Smart", "555-9999";
"John Doe", "555-1212";
"J. Random Hacker", "555-1337"
] |> Map
Access by key
Values can be looked up via one of the Map members, such as its indexer or Item property (which throw an exception if the key does not exist) or the TryFind function, which returns an option type with a value of Some <result>, for a successful lookup, or None, for an unsuccessful one. Pattern matching can then be used to extract the raw value from the result, or a default value can be set.
let sallyNumber = numbers.["Sally Smart"]
// or
let sallyNumber = numbers.Item("Sally Smart")
let sallyNumber =
match numbers.TryFind("Sally Smart") with
| Some(number) -> number
| None -> "n/a"
In both examples above, the sallyNumber value would contain the string "555-9999".
Dictionary<'TKey,'TValue>
Because F# is a .NET language, it also has access to features of the .NET Framework, including the Dictionary<'TKey,'TValue> type (which is implemented as a hash table), which is the primary associative array type used in C# and Visual Basic. This type may be preferred when writing code that is intended to operate with other languages on the .NET Framework, or when the performance characteristics of a hash table are preferred over those of an AVL tree.
Creation
The dict function provides a means of conveniently creating a .NET dictionary that is not intended to be mutated; it accepts a sequence of tuples and returns an immutable object that implements IDictionary<'TKey,'TValue>.
let numbers =
[
"Sally Smart", "555-9999";
"John Doe", "555-1212";
"J. Random Hacker", "555-1337"
] |> dict
When a mutable dictionary is needed, the constructor of Dictionary<'TKey,'TValue> can be called directly. See the C# example on this page for additional information.
let numbers = System.Collections.Generic.Dictionary<string, string>()
numbers.Add("Sally Smart", "555-9999")
numbers.["John Doe"] <- "555-1212"
numbers.Item("J. Random Hacker") <- "555-1337"
Access by key
IDictionary instances have an indexer that is used in the same way as Map, although the equivalent to TryFind is TryGetValue, which has an output parameter for the sought value and a Boolean return value indicating whether the key was found.
let sallyNumber =
let mutable result = ""
if numbers.TryGetValue("Sally Smart", &result) then result else "n/a"
F# also allows the function to be called as if it had no output parameter and instead returned a tuple containing its regular return value and the value assigned to the output parameter:
let sallyNumber =
match numbers.TryGetValue("Sally Smart") with
| true, number -> number
| _ -> "n/a"
Enumeration
A dictionary or map can be enumerated using Seq.iter.
// loop through the collection and display each entry.
numbers |> Seq.iter (fun kvp -> printfn "Phone number for %O is %O" kvp.Key kvp.Value)
FoxPro
Visual FoxPro implements mapping with the Collection Class.
mapping = NEWOBJECT("Collection")
mapping.Add("Daffodils", "flower2") && Add(object, key) – key must be character
index = mapping.GetKey("flower2") && returns the index value 1
object = mapping("flower2") && returns "Daffodils" (retrieve by key)
object = mapping(1) && returns "Daffodils" (retrieve by index)
GetKey returns 0 if the key is not found.
Go
Go has built-in, language-level support for associative arrays, called "maps". A map's key type may only be a boolean, numeric, string, array, struct, pointer, interface, or channel type.
A map type is written: map[keytype]valuetype
Adding elements one at a time:
phone_book := make(map[string] string) // make an empty map
phone_book["Sally Smart"] = "555-9999"
phone_book["John Doe"] = "555-1212"
phone_book["J. Random Hacker"] = "553-1337"
A map literal:
phone_book := map[string] string {
"Sally Smart": "555-9999",
"John Doe": "555-1212",
"J. Random Hacker": "553-1337",
}
Iterating through a map:
// over both keys and values
for key, value := range phone_book {
fmt.Printf("Number for %s: %s\n", key, value)
}
// over just keys
for key := range phone_book {
fmt.Printf("Name: %s\n", key)
}
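Two further built-in operations, not shown above, are the two-value ("comma ok") lookup, which reports whether a key is present, and delete:

```go
package main

import "fmt"

func main() {
	phoneBook := map[string]string{"Sally Smart": "555-9999"}

	// The two-value form distinguishes a missing key from a stored zero value.
	number, ok := phoneBook["Sally Smart"]
	fmt.Println(number, ok) // 555-9999 true

	// delete removes a key; deleting an absent key is a no-op.
	delete(phoneBook, "Sally Smart")
	_, ok = phoneBook["Sally Smart"]
	fmt.Println(ok) // false
}
```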
Haskell
The Haskell programming language provides only one kind of associative container – a list of pairs:
m = [("Sally Smart", "555-9999"), ("John Doe", "555-1212"), ("J. Random Hacker", "553-1337")]
main = print (lookup "John Doe" m)
output:
Just "555-1212"
Note that the lookup function returns a "Maybe" value, which is "Nothing" if not found, or "Just result" when found.
The Glasgow Haskell Compiler (GHC), the most commonly used implementation of Haskell, provides two more types of associative containers. Other implementations may also provide these.
One is polymorphic functional maps (represented as immutable balanced binary trees):
import qualified Data.Map as M
m = M.insert "Sally Smart" "555-9999" M.empty
m' = M.insert "John Doe" "555-1212" m
m'' = M.insert "J. Random Hacker" "553-1337" m'
main = print (M.lookup "John Doe" m'' :: Maybe String)
output:
Just "555-1212"
A specialized version for integer keys also exists as Data.IntMap.
Finally, a polymorphic hash table:
import qualified Data.HashTable as H
main = do m <- H.new (==) H.hashString
H.insert m "Sally Smart" "555-9999"
H.insert m "John Doe" "555-1212"
H.insert m "J. Random Hacker" "553-1337"
foo <- H.lookup m "John Doe"
print foo
output:
Just "555-1212"
Lists of pairs and functional maps both provide a purely functional interface, which is more idiomatic in Haskell. In contrast, hash tables provide an imperative interface in the IO monad.
Java
In Java associative arrays are implemented as "maps", which are part of the Java collections framework. Since J2SE 5.0 and the introduction of generics into Java, collections can have a type specified; for example, an associative array that maps strings to strings might be specified as follows:
Map<String, String> phoneBook = new HashMap<String, String>();
phoneBook.put("Sally Smart", "555-9999");
phoneBook.put("John Doe", "555-1212");
phoneBook.put("J. Random Hacker", "555-1337");
The get method is used to access a key; for example, the value of the expression phoneBook.get("Sally Smart") is "555-9999". This code uses a hash map to store the associative array, by calling the constructor of the HashMap class. However, since the code only uses methods common to the Map interface, a self-balancing binary tree could be used by calling the constructor of the TreeMap class (which implements the SortedMap subinterface), without changing the definition of the phoneBook variable or the rest of the code, or by using other underlying data structures that implement the Map interface.
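A minimal sketch of that substitution (the class name is arbitrary): only the constructor call changes, and a TreeMap additionally enumerates its entries in sorted key order.

```java
import java.util.Map;
import java.util.TreeMap;

public class PhoneBookDemo {
    public static void main(String[] args) {
        // Only this constructor call differs from the HashMap version above.
        Map<String, String> phoneBook = new TreeMap<>();
        phoneBook.put("Sally Smart", "555-9999");
        phoneBook.put("John Doe", "555-1212");
        phoneBook.put("J. Random Hacker", "555-1337");
        // A TreeMap iterates in sorted key order; a HashMap makes no such guarantee.
        for (Map.Entry<String, String> entry : phoneBook.entrySet()) {
            System.out.println(entry.getKey() + " = " + entry.getValue());
        }
    }
}
```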
The hash function in Java, used by HashMap and HashSet, is provided by the hashCode() method. Since every class in Java inherits from Object, every object has a hash function. A class can override the default implementation of hashCode() to provide a custom hash function more in accordance with the properties of the object.
The Object class also contains the equals(Object) method, which tests an object for equality with another object. Hashed data structures in Java rely on objects maintaining the following contract between their hashCode() and equals() methods:
For two objects a and b,
a.equals(b) == b.equals(a)
if a.equals(b), then a.hashCode() == b.hashCode()
In order to maintain this contract, a class that overrides equals() must also override hashCode(), and vice versa, so that hashCode() is based on the same properties (or a subset of the properties) as equals().
A further contract that a hashed data structure has with the object is that the results of the hashCode() and equals() methods will not change once the object has been inserted into the map. For this reason, it is generally a good practice to base the hash function on immutable properties of the object.
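A sketch of a key type honoring this contract (the Name class here is illustrative, not from any library): equals() and hashCode() are overridden together, and both are derived only from the object's immutable fields.

```java
import java.util.Objects;

// Illustrative immutable key type: equals() and hashCode() are
// overridden together and based on the same final fields.
final class Name {
    private final String first;
    private final String last;

    Name(String first, String last) {
        this.first = first;
        this.last = last;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Name)) return false;
        Name other = (Name) o;
        return first.equals(other.first) && last.equals(other.last);
    }

    @Override
    public int hashCode() {
        return Objects.hash(first, last);
    }
}
```

Because the fields are final, the hash code cannot change while the object is stored in a map.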
Analogously, TreeMap, and other sorted data structures, require that an ordering be defined on the data type. Either the data type must already have defined its own ordering, by implementing the Comparable interface; or a custom Comparator must be provided at the time the map is constructed. As with HashMap above, the relative ordering of keys in a TreeMap should not change once they have been inserted into the map.
JavaScript
JavaScript (and its standardized version, ECMAScript) is a prototype-based object-oriented language.
Map and WeakMap
Modern JavaScript handles associative arrays, using the Map and WeakMap classes. A map does not contain any keys by default; it only contains what is explicitly put into it. The keys and values can be any type (including functions, objects, or any primitive).
Creation
A map can be initialized with all items during construction:
const phoneBook = new Map([
["Sally Smart", "555-9999"],
["John Doe", "555-1212"],
["J. Random Hacker", "553-1337"],
]);
Alternatively, you can initialize an empty map and then add items:
const phoneBook = new Map();
phoneBook.set("Sally Smart", "555-9999");
phoneBook.set("John Doe", "555-1212");
phoneBook.set("J. Random Hacker", "553-1337");
Access by key
Accessing an element of the map can be done with the get method:
const sallyNumber = phoneBook.get("Sally Smart");
In this example, the value sallyNumber will now contain the string "555-9999".
Enumeration
The keys in a map are ordered. Thus, when iterating through it, a map object returns keys in order of insertion. The following demonstrates enumeration using a for-loop:
// loop through the collection and display each entry.
for (const [name, number] of phoneBook) {
console.log(`Phone number for ${name} is ${number}`);
}
A key can be removed as follows:
phoneBook.delete("Sally Smart");
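The related WeakMap requires object keys and allows its entries to be garbage-collected once a key object becomes unreachable; a brief sketch (variable names are illustrative):

```javascript
// A WeakMap maps objects to values without preventing the key
// objects from being garbage-collected. Keys must be objects.
const listeners = new WeakMap();

const button = { label: "OK" };
listeners.set(button, () => console.log("clicked"));

console.log(listeners.has(button)); // true
// When `button` becomes unreachable, its entry is eligible
// for garbage collection automatically.
```

Because entries cannot be enumerated, a WeakMap suits attaching private metadata to objects rather than general-purpose dictionaries.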
Object
An object is similar to a map—both let you set keys to values, retrieve those values, delete keys, and detect whether a value is stored at a key. For this reason (and because there were no built-in alternatives), objects historically have been used as maps.
However, there are important differences that make a map preferable in certain cases. In JavaScript an object is a mapping from property names to values—that is, an associative array with one caveat: the keys of an object must be strings or symbols (native objects and primitives are implicitly converted to string keys). Objects also include one feature unrelated to associative arrays: an object has a prototype, so it contains default keys that could conflict with user-defined keys. Thus, a lookup for a property falls through to the prototype's definition if the object itself does not define the property.
An object literal is written as { property1: value1, property2: value2, ... }. For example:
const myObject = {
"Sally Smart": "555-9999",
"John Doe": "555-1212",
"J. Random Hacker": "553-1337",
};
To prevent the lookup from using the prototype's properties, you can use the Object.setPrototypeOf function:
Object.setPrototypeOf(myObject, null);
As of ECMAScript 5 (ES5), the prototype can also be bypassed by using Object.create(null):
const myObject = Object.create(null);
Object.assign(myObject, {
"Sally Smart": "555-9999",
"John Doe": "555-1212",
"J. Random Hacker": "553-1337",
});
If the property name is a valid identifier, the quotes can be omitted, e.g.:
const myOtherObject = { foo: 42, bar: false };
Lookup is written using property-access notation, either square brackets, which always work, or dot notation, which only works for identifier keys:
myObject["John Doe"]
myOtherObject.foo
You can also loop through all enumerable properties and associated values as follows (a for-in loop):
for (const property in myObject) {
const value = myObject[property];
console.log(`myObject[${property}] = ${value}`);
}
Or (a for-of loop):
for (const [property, value] of Object.entries(myObject)) {
console.log(`${property} = ${value}`);
}
A property can be removed as follows:
delete myObject["Sally Smart"];
As mentioned before, properties are strings and symbols. Since every native object and primitive can be implicitly converted to a string, you can do:
myObject[1] // key is "1"; note that myObject[1] == myObject["1"]
myObject[["a", "b"]] // key is "a,b"
myObject[{ toString() { return "hello world"; } }] // key is "hello world"
In modern JavaScript it's considered bad form to use the Array type as an associative array. Consensus is that the Object type and Map/WeakMap classes are best for this purpose. The reasoning behind this is that if Array is extended via prototype and Object is kept pristine, for and for-in loops will work as expected on associative 'arrays'. This issue has been brought to the fore by the popularity of JavaScript frameworks that make heavy and sometimes indiscriminate use of prototypes to extend JavaScript's inbuilt types.
See JavaScript Array And Object Prototype Awareness Day for more information on the issue.
Julia
In Julia, the following operations manage associative arrays.
Declare dictionary:
phonebook = Dict( "Sally Smart" => "555-9999", "John Doe" => "555-1212", "J. Random Hacker" => "555-1337" )
Access element:
phonebook["Sally Smart"]
Add element:
phonebook["New Contact"] = "555-2222"
Delete element:
delete!(phonebook, "Sally Smart")
Get keys and values as iterables:
keys(phonebook)
values(phonebook)
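Iterating over key–value pairs can be sketched with an ordinary for loop, using the phonebook dictionary declared above:

```julia
# Iterate over (key, value) pairs of the dictionary.
for (name, number) in phonebook
    println("Number for $name is $number")
end
```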
KornShell 93, and compliant shells
In KornShell 93, and compliant shells (ksh93, bash4...), the following operations can be used with associative arrays.
Definition:
typeset -A phonebook; # ksh93; in bash4+, "typeset" is a synonym of the more preferred "declare", which works identically in this case
phonebook=(["Sally Smart"]="555-9999" ["John Doe"]="555-1212" ["J. Random Hacker"]="555-1337");
Dereference:
${phonebook["John Doe"]};
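Iteration over the keys can be sketched with the "${!array[@]}" key-list expansion (note that the traversal order is unspecified); a minimal bash 4 / ksh93 example:

```shell
# "${!phonebook[@]}" expands to the list of keys of the
# associative array; order of traversal is unspecified.
declare -A phonebook
phonebook=(["Sally Smart"]="555-9999" ["John Doe"]="555-1212" ["J. Random Hacker"]="555-1337")
for name in "${!phonebook[@]}"; do
    printf '%s: %s\n' "$name" "${phonebook[$name]}"
done
```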
Lisp
Lisp was originally conceived as a "LISt Processing" language, and one of its most important data types is the linked list, which can be treated as an association list ("alist").
'(("Sally Smart" . "555-9999")
("John Doe" . "555-1212")
("J. Random Hacker" . "553-1337"))
The syntax (x . y) is used to indicate a consed pair. Keys and values need not be the same type within an alist. Lisp and Scheme provide operators such as assoc to manipulate alists in ways similar to associative arrays.
A set of operations specific to the handling of association lists exists for Common Lisp, each of these working non-destructively.
To add an entry the acons function is employed, creating and returning a new association list. An association list in Common Lisp mimics a stack, that is, adheres to the last-in-first-out (LIFO) principle, and hence prepends new entries to the list head.
(let ((phone-book NIL))
(setf phone-book (acons "Sally Smart" "555-9999" phone-book))
(setf phone-book (acons "John Doe" "555-1212" phone-book))
(setf phone-book (acons "J. Random Hacker" "555-1337" phone-book)))
This function can be construed as an abbreviation for nested cons operations.
;; The effect of
;; (cons (cons KEY VALUE) ALIST)
;; is equivalent to
;; (acons KEY VALUE ALIST)
(let ((phone-book '(("Sally Smart" . "555-9999") ("John Doe" . "555-1212"))))
(cons (cons "J. Random Hacker" "555-1337") phone-book))
Of course, the destructive push operation also allows inserting entries into an association list, an entry having to constitute a key-value cons in order to retain the mapping's validity.
(push (cons "Dummy" "123-4567") phone-book)
Searching for an entry by its key is performed via assoc, which may be configured with a test predicate and a direction, for instance searching the association list from its end to its front. On success, it returns the entire entry cons, not only its value; failure to find a matching key leads to a return of NIL.
(assoc "John Doe" phone-book :test #'string=)
Two generalizations of assoc exist: assoc-if expects a predicate function that tests each entry's key, returning the first entry for which the predicate produces a non-NIL value upon invocation. assoc-if-not inverts the logic, accepting the same arguments, but returning the first entry generating NIL.
;; Find the first entry whose key equals "John Doe".
(assoc-if
#'(lambda (key)
(string= key "John Doe"))
phone-book)
;; Finds the first entry whose key is neither "Sally Smart" nor "John Doe"
(assoc-if-not
#'(lambda (key)
(member key '("Sally Smart" "John Doe") :test #'string=))
phone-book)
The inverse process, the detection of an entry by its value, utilizes rassoc.
;; Find the first entry with a value of "555-9999".
;; We test the entry string values with the "string=" predicate.
(rassoc "555-9999" phone-book :test #'string=)
The corresponding generalizations rassoc-if and rassoc-if-not exist.
;; Finds the first entry whose value is "555-9999".
(rassoc-if
#'(lambda (value)
(string= value "555-9999"))
phone-book)
;; Finds the first entry whose value is not "555-9999".
(rassoc-if-not
#'(lambda (value)
(string= value "555-9999"))
phone-book)
All of the previous entry search functions can be replaced by general list-centric variants, such as find, find-if, and find-if-not, as well as pertinent functions like position and its derivatives.
;; Find an entry with the key "John Doe" and the value "555-1212".
(find (cons "John Doe" "555-1212") phone-book :test #'equal)
Deletion, lacking a specific counterpart, is based upon the list facilities, including destructive ones.
;; Create and return an alist without any entry whose key equals "John Doe".
(remove-if
#'(lambda (entry)
(string= (car entry) "John Doe"))
phone-book)
Iteration is accomplished with the aid of any function that expects a list.
;; Iterate via "map".
(map NIL
#'(lambda (entry)
(destructuring-bind (key . value) entry
(format T "~&~s => ~s" key value)))
phone-book)
;; Iterate via "dolist".
(dolist (entry phone-book)
(destructuring-bind (key . value) entry
(format T "~&~s => ~s" key value)))
These being structured lists, processing and transformation operations can be applied without constraints.
;; Return a vector of the "phone-book" values.
(map 'vector #'cdr phone-book)
;; Destructively modify the "phone-book" via "map-into".
(map-into phone-book
#'(lambda (entry)
(destructuring-bind (key . value) entry
(cons (reverse key) (reverse value))))
phone-book)
Because of their linear nature, alists are used for relatively small sets of data. Common Lisp also supports a hash table data type, and for Scheme they are implemented in SRFI 69. Hash tables have greater overhead than alists, but provide much faster access when there are many elements. A further characteristic is the fact that Common Lisp hash tables do not, as opposed to association lists, maintain the order of entry insertion.
Common Lisp hash tables are constructed via the make-hash-table function, whose arguments encompass, among other configurations, a predicate to test the entry key. While tolerating arbitrary objects, even heterogeneity within a single hash table instance, the specification of this key :test function is confined to distinguishable entities: the Common Lisp standard mandates support only for eq, eql, equal, and equalp, though concrete implementations may permit additional or custom operations.
(let ((phone-book (make-hash-table :test #'equal)))
(setf (gethash "Sally Smart" phone-book) "555-9999")
(setf (gethash "John Doe" phone-book) "555-1212")
(setf (gethash "J. Random Hacker" phone-book) "553-1337"))
The gethash function permits obtaining the value associated with a key.
(gethash "John Doe" phone-book)
Additionally, a default value for the case of an absent key may be specified.
(gethash "Incognito" phone-book 'no-such-key)
An invocation of gethash actually returns two values: the value or substitute value for the key and a boolean indicator, returning T if the hash table contains the key and NIL to signal its absence.
(multiple-value-bind (value contains-key) (gethash "Sally Smart" phone-book)
(if contains-key
(format T "~&The associated value is: ~s" value)
(format T "~&The key could not be found.")))
Use remhash for deleting the entry associated with a key.
(remhash "J. Random Hacker" phone-book)
clrhash completely empties the hash table.
(clrhash phone-book)
The dedicated maphash function specializes in iterating hash tables.
(maphash
#'(lambda (key value)
(format T "~&~s => ~s" key value))
phone-book)
Alternatively, the loop construct makes provisions for iterations, through keys, values, or conjunctions of both.
;; Iterate the keys and values of the hash table.
(loop
for key being the hash-keys of phone-book
using (hash-value value)
do (format T "~&~s => ~s" key value))
;; Iterate the values of the hash table.
(loop
for value being the hash-values of phone-book
do (print value))
A further option invokes with-hash-table-iterator, an iterator-creating macro, the processing of which is intended to be driven by the caller.
(with-hash-table-iterator (entry-generator phone-book)
(loop do
(multiple-value-bind (has-entry key value) (entry-generator)
(if has-entry
(format T "~&~s => ~s" key value)
(loop-finish)))))
It is easy to construct composite abstract data types in Lisp, using structures or object-oriented programming features, in conjunction with lists, arrays, and hash tables.
LPC
LPC implements associative arrays as a fundamental type known as either "map" or "mapping", depending on the driver. The keys and values can be of any type. A mapping literal is written as ([ key_1 : value_1, key_2 : value_2 ]). Procedural code looks like:
mapping phone_book = ([]);
phone_book["Sally Smart"] = "555-9999";
phone_book["John Doe"] = "555-1212";
phone_book["J. Random Hacker"] = "555-1337";
Mappings are accessed for reading using the indexing operator in the same way as they are for writing, as shown above. So phone_book["Sally Smart"] would return the string "555-9999", and phone_book["John Smith"] would return 0. Testing for presence is done using the function member(), e.g. if(member(phone_book, "John Smith")) write("John Smith is listed.\n");
Deletion is accomplished using a function called either m_delete() or map_delete(), depending on the driver: m_delete(phone_book, "Sally Smart");
LPC drivers of the Amylaar family implement multivalued mappings using a secondary, numeric index (other drivers of the MudOS family do not support multivalued mappings). Example syntax:
mapping phone_book = ([:2]);
phone_book["Sally Smart", 0] = "555-9999";
phone_book["Sally Smart", 1] = "99 Sharp Way";
phone_book["John Doe", 0] = "555-1212";
phone_book["John Doe", 1] = "3 Nigma Drive";
phone_book["J. Random Hacker", 0] = "555-1337";
phone_book["J. Random Hacker", 1] = "77 Massachusetts Avenue";
LPC drivers modern enough to support a foreach() construct use it to iterate through their mapping types.
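A hedged sketch of such iteration, using LDMud-style syntax (the exact form varies between drivers):

```lpc
// Iterate over key-value pairs of a mapping (LDMud-style).
foreach (string name, string number : phone_book) {
    write(name + ": " + number + "\n");
}
```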
Lua
In Lua, "table" is a fundamental type that can be used either as an array (numerical index, fast) or as an associative array.
The keys and values can be of any type, except nil. The following focuses on non-numerical indexes.
A table literal is written as { value, key = value, [index] = value, ["non id string"] = value }. For example:
phone_book = {
["Sally Smart"] = "555-9999",
["John Doe"] = "555-1212",
["J. Random Hacker"] = "553-1337", -- Trailing comma is OK
}
aTable = {
-- Table as value
subTable = { 5, 7.5, k = true }, -- key is "subTable"
-- Function as value
['John Doe'] = function (age) if age < 18 then return "Young" else return "Old!" end end,
-- Table and function (and other types) can also be used as keys
}
If the key is a valid identifier (not a reserved word), the quotes can be omitted. Identifiers are case sensitive.
Lookup is written using either square brackets, which always works, or dot notation, which only works for identifier keys:
print(aTable["John Doe"](45))
x = aTable.subTable.k
You can also loop through all keys and associated values with iterators or for-loops:
simple = { [true] = 1, [false] = 0, [3.14] = math.pi, x = 'x', ["!"] = 42 }
function FormatElement(key, value)
return "[" .. tostring(key) .. "] = " .. value .. ", "
end
-- Iterate on all keys
table.foreach(simple, function (k, v) io.write(FormatElement(k, v)) end)
print""
for k, v in pairs(simple) do io.write(FormatElement(k, v)) end
print""
k= nil
repeat
k, v = next(simple, k)
if k ~= nil then io.write(FormatElement(k, v)) end
until k == nil
print""
An entry can be removed by setting it to nil:
simple.x = nil
Likewise, you can overwrite values or add them:
simple['%'] = "percent"
simple['!'] = 111
Mathematica and Wolfram Language
Mathematica and Wolfram Language use the Association expression to represent associative arrays.
phonebook = <| "Sally Smart" -> "555-9999",
"John Doe" -> "555-1212",
"J. Random Hacker" -> "553-1337" |>;
To access:
phonebook[[Key["Sally Smart"]]]
If the keys are strings, the Key keyword is not necessary, so:
phonebook[["Sally Smart"]]
To list keys and values:
Keys[phonebook]
Values[phonebook]
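Entries can be added or removed in place with AssociateTo and KeyDropFrom; a brief sketch using the phonebook association above:

```wolfram
(* Add or update an entry in place, then remove one. *)
AssociateTo[phonebook, "New Contact" -> "555-2222"];
KeyDropFrom[phonebook, "Sally Smart"];
```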
MUMPS
In MUMPS every array is an associative array. The built-in, language-level, direct support for associative arrays
applies to private, process-specific arrays stored in memory called "locals" as well as to the permanent, shared, global arrays stored on disk which are available concurrently to multiple jobs. The name for globals is preceded by the circumflex "^" to distinguish them from local variables.
SET ^phonebook("Sally Smart")="555-9999" ;; storing permanent data
SET phonebook("John Doe")="555-1212" ;; storing temporary data
SET phonebook("J. Random Hacker")="553-1337" ;; storing temporary data
MERGE ^phonebook=phonebook ;; copying temporary data into permanent data
Accessing the value of an element simply requires using the name with the subscript:
WRITE "Phone Number :",^phonebook("Sally Smart"),!
You can also loop through an associative array as follows:
SET NAME=""
FOR S NAME=$ORDER(^phonebook(NAME)) QUIT:NAME="" WRITE NAME," Phone Number :",^phonebook(NAME),!
Objective-C (Cocoa/GNUstep)
Cocoa and GNUstep, written in Objective-C, handle associative arrays using the NSMutableDictionary (a mutable version of NSDictionary) class cluster. This class allows assignments between any two objects. A copy of the key object is made before it is inserted into the NSMutableDictionary, so the keys must conform to the NSCopying protocol. When inserted into a dictionary, the value object receives a retain message to increase its reference count. The value object receives a release message when it is deleted from the dictionary (either explicitly or by adding a different object with the same key).
NSMutableDictionary *aDictionary = [[NSMutableDictionary alloc] init];
[aDictionary setObject:@"555-9999" forKey:@"Sally Smart"];
[aDictionary setObject:@"555-1212" forKey:@"John Doe"];
[aDictionary setObject:@"553-1337" forKey:@"Random Hacker"];
To access assigned objects, this command may be used:
id anObject = [aDictionary objectForKey:@"Sally Smart"];
All keys or values can be enumerated using NSEnumerator:
NSEnumerator *keyEnumerator = [aDictionary keyEnumerator];
id key;
while ((key = [keyEnumerator nextObject]))
{
// ... process it here ...
}
In Mac OS X 10.5+ and iPhone OS, dictionary keys can be enumerated more concisely using the NSFastEnumeration construct:
for (id key in aDictionary) {
// ... process it here ...
}
Even more practically, structured data graphs can be created easily using Cocoa, especially NSDictionary (NSMutableDictionary). This can be illustrated with this compact example:
NSDictionary *aDictionary =
[NSDictionary dictionaryWithObjectsAndKeys:
[NSDictionary dictionaryWithObjectsAndKeys:
@"555-9999", @"Sally Smart",
@"555-1212", @"John Doe",
nil], @"students",
[NSDictionary dictionaryWithObjectsAndKeys:
@"553-1337", @"Random Hacker",
nil], @"hackers",
nil];
Relevant fields can be quickly accessed using key paths:
id anObject = [aDictionary valueForKeyPath:@"students.Sally Smart"];
OCaml
The OCaml programming language provides three different associative containers. The simplest is a list of pairs:
# let m = [
"Sally Smart", "555-9999";
"John Doe", "555-1212";
"J. Random Hacker", "553-1337"];;
val m : (string * string) list = [
("Sally Smart", "555-9999");
("John Doe", "555-1212");
("J. Random Hacker", "553-1337")
]
# List.assoc "John Doe" m;;
- : string = "555-1212"
The second is a polymorphic hash table:
# let m = Hashtbl.create 3;;
val m : ('_a, '_b) Hashtbl.t = <abstr>
# Hashtbl.add m "Sally Smart" "555-9999";
Hashtbl.add m "John Doe" "555-1212";
Hashtbl.add m "J. Random Hacker" "553-1337";;
- : unit = ()
# Hashtbl.find m "John Doe";;
- : string = "555-1212"
The code above uses OCaml's default hash function Hashtbl.hash, which is defined automatically for all types. To use a modified hash function, use the functor interface Hashtbl.Make to create a module, such as with Map.
Finally, functional maps (represented as immutable balanced binary trees):
# module StringMap = Map.Make(String);;
...
# let m = StringMap.add "Sally Smart" "555-9999" StringMap.empty
let m = StringMap.add "John Doe" "555-1212" m
let m = StringMap.add "J. Random Hacker" "553-1337" m;;
val m : string StringMap.t = <abstr>
# StringMap.find "John Doe" m;;
- : string = "555-1212"
Note that in order to use Map, you have to provide the functor Map.Make with a module which defines the key type and the comparison function. The third-party library ExtLib provides a polymorphic version of functional maps, called PMap, which is given a comparison function upon creation.
Lists of pairs and functional maps both provide a purely functional interface. By contrast, hash tables provide an imperative interface. For many operations, hash tables are significantly faster than lists of pairs and functional maps.
OptimJ
The OptimJ programming language is an extension of Java 5. Like Java, OptimJ provides maps; but OptimJ also provides true associative arrays. Java arrays are indexed with non-negative integers; associative arrays are indexed with any type of key.
String[String] phoneBook = {
"Sally Smart" -> "555-9999",
"John Doe" -> "555-1212",
"J. Random Hacker" -> "553-1337"
};
// String[String] is not a java type but an optimj type:
// associative array of strings indexed by strings.
// iterate over the values
for (String number : phoneBook) {
System.out.println(number);
}
// The previous statement prints: "555-9999" "555-1212" "553-1337"
// iterate over the keys
for (String name : phoneBook.keys) {
System.out.println(name + " -> " + phoneBook[name]);
}
// phoneBook[name] access a value by a key (it looks like java array access)
// i.e. phoneBook["John Doe"] returns "555-1212"
Of course, it is possible to define multi-dimensional arrays, to mix Java arrays and associative arrays, and to mix maps and associative arrays.
int[String][][double] a;
java.util.Map<String[Object], Integer> b;
Perl 5
Perl 5 has built-in, language-level support for associative arrays. Modern Perl refers to associative arrays as hashes; the term associative array is found in older documentation but is considered somewhat archaic. Perl 5 hashes are flat: keys are strings and values are scalars. However, values may be references to arrays or other hashes, and the standard Perl 5 module Tie::RefHash enables hashes to be used with reference keys.
A hash variable is marked by a % sigil, to distinguish it from scalar, array, and other data types. A hash literal is a key-value list, with the preferred form using Perl's => token, which is semantically mostly identical to the comma and makes the key-value association clearer:
my %phone_book = (
'Sally Smart' => '555-9999',
'John Doe' => '555-1212',
'J. Random Hacker' => '553-1337',
);
Accessing a hash element uses the syntax $hash_name{$key} – the key is surrounded by curly braces and the hash name is prefixed by a $, indicating that the hash element itself is a scalar value, even though it is part of a hash. The value of $phone_book{'John Doe'} is '555-1212'. The % sigil is only used when referring to the hash as a whole, such as when asking for keys %phone_book.
The list of keys and values can be extracted using the built-in functions keys and values, respectively. So, for example, to print all the keys of a hash:
foreach $name (keys %phone_book) {
print $name, "\n";
}
One can iterate through (key, value) pairs using the each function:
while (($name, $number) = each %phone_book) {
print 'Number for ', $name, ': ', $number, "\n";
}
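An entry can be tested for and removed with the built-ins exists and delete, for example:

```perl
# Check for the presence of a key, then remove its entry.
if (exists $phone_book{'Sally Smart'}) {
    delete $phone_book{'Sally Smart'};
}
```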
A hash "reference", which is a scalar value that points to a hash, is specified in literal form using curly braces as delimiters, with syntax otherwise similar to specifying a hash literal:
my $phone_book = {
'Sally Smart' => '555-9999',
'John Doe' => '555-1212',
'J. Random Hacker' => '553-1337',
};
Values in a hash reference are accessed using the dereferencing operator:
print $phone_book->{'Sally Smart'};
When the hash contained in the hash reference needs to be referred to as a whole, as with the keys function, the syntax is as follows:
foreach $name (keys %{$phone_book}) {
print 'Number for ', $name, ': ', $phone_book->{$name}, "\n";
}
Perl 6 (Raku)
Perl 6, renamed as "Raku", also has built-in, language-level support for associative arrays, which are referred to as hashes or as objects performing the "associative" role. As in Perl 5, Perl 6 default hashes are flat: keys are strings and values are scalars. One can define a hash to not coerce all keys to strings automatically: these are referred to as "object hashes", because the keys of such hashes remain the original object rather than a stringification thereof.
A hash variable is typically marked by a % sigil, to visually distinguish it from scalar, array, and other data types, and to define its behaviour towards iteration. A hash literal is a key-value list, with the preferred form using Perl's => token, which makes the key-value association clearer:
my %phone-book =
'Sally Smart' => '555-9999',
'John Doe' => '555-1212',
'J. Random Hacker' => '553-1337',
;
Accessing a hash element uses the syntax %hash_name{$key} – the key is surrounded by curly braces and the hash name keeps its % sigil (note that, unlike in Perl 5, the sigil does not change). The value of %phone-book{'John Doe'} is '555-1212'.
The list of keys and values can be extracted using the built-in functions keys and values, respectively. So, for example, to print all the keys of a hash:
for %phone-book.keys -> $name {
say $name;
}
By default, when iterating through a hash, one gets key–value pairs.
for %phone-book -> $entry {
say "Number for $entry.key(): $entry.value()"; # using extended interpolation features
}
It is also possible to get the keys and values as an alternating list by using the kv method:
for %phone-book.kv -> $name, $number {
say "Number for $name: $number";
}
Raku doesn't have references. Hashes can be passed as single parameters that are not flattened. To ensure that a subroutine accepts only hashes, use the % sigil in the signature.
sub list-phone-book(%pb) {
for %pb.kv -> $name, $number {
say "Number for $name: $number";
}
}
list-phone-book(%phone-book);
In compliance with gradual typing, hashes may be subjected to type constraints, confining a set of valid keys to a certain type.
# Define a hash whose keys may only be integer numbers ("Int" type).
my %numbersWithNames{Int};
# Keys must be integer numbers, as in this case.
%numbersWithNames.push(1 => "one");
# This will cause an error, as strings as keys are invalid.
%numbersWithNames.push("key" => "two");
PHP
PHP's built-in array type is, in reality, an associative array. Even when using numerical indexes, PHP internally stores arrays as associative arrays. So, PHP can have non-consecutively numerically indexed arrays. The keys have to be of integer (floating point numbers are truncated to integer) or string type, while values can be of arbitrary types, including other arrays and objects. The arrays are heterogeneous: a single array can have keys of different types. PHP's associative arrays can be used to represent trees, lists, stacks, queues, and other common data structures not built into PHP.
An associative array can be declared using the following syntax:
$phonebook = array();
$phonebook['Sally Smart'] = '555-9999';
$phonebook['John Doe'] = '555-1212';
$phonebook['J. Random Hacker'] = '555-1337';
// or
$phonebook = array(
'Sally Smart' => '555-9999',
'John Doe' => '555-1212',
'J. Random Hacker' => '555-1337',
);
// or, as of PHP 5.4
$phonebook = [
'Sally Smart' => '555-9999',
'John Doe' => '555-1212',
'J. Random Hacker' => '555-1337',
];
// or
$phonebook['contacts']['Sally Smart']['number'] = '555-9999';
$phonebook['contacts']['John Doe']['number'] = '555-1212';
$phonebook['contacts']['J. Random Hacker']['number'] = '555-1337';
PHP can loop through an associative array as follows:
foreach ($phonebook as $name => $number) {
echo 'Number for ', $name, ': ', $number, "\n";
}
// For the last array example it is used like this
foreach ($phonebook['contacts'] as $name => $num) {
echo 'Name: ', $name, ', number: ', $num['number'], "\n";
}
PHP has an extensive set of functions to operate on arrays.
Associative arrays that can use objects as keys, instead of strings and integers, can be implemented with the SplObjectStorage class from the Standard PHP Library (SPL).
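A minimal sketch of SplObjectStorage (variable names are illustrative): the class implements ArrayAccess, so arbitrary objects can be used directly as keys:

```php
<?php
// SplObjectStorage maps objects to data; here a plain
// stdClass instance serves as the key.
$storage = new SplObjectStorage();
$sally = new stdClass();
$storage[$sally] = '555-9999';
echo $storage[$sally]; // prints "555-9999"
```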
Pike
Pike has built-in support for associative arrays, which are referred to as mappings. Mappings are created as follows:
mapping(string:string) phonebook = ([
"Sally Smart":"555-9999",
"John Doe":"555-1212",
"J. Random Hacker":"555-1337"
]);
Accessing and testing for presence in mappings is done using the indexing operator. So phonebook["Sally Smart"] would return the string "555-9999", and phonebook["John Smith"] would return 0.
Iterating through a mapping can be done using foreach:
foreach(phonebook; string key; string value) {
write("%s:%s\n", key, value);
}
Or using an iterator object:
Mapping.Iterator i = get_iterator(phonebook);
while (i->index()) {
write("%s:%s\n", i->index(), i->value());
i->next();
}
Elements of a mapping can be removed using m_delete, which returns the value of the removed index:
string sallys_number = m_delete(phonebook, "Sally Smart");
PostScript
In PostScript, associative arrays are called dictionaries. In Level 1 PostScript they must be created explicitly, but Level 2 introduced direct declaration using a double-angled-bracket syntax:
% Level 1 declaration
3 dict dup begin
/red (rouge) def
/green (vert) def
/blue (bleu) def
end
% Level 2 declaration
<<
/red (rot)
/green (gruen)
/blue (blau)
>>
% Both methods leave the dictionary on the operand stack
Dictionaries can be accessed directly, using get, or implicitly, by placing the dictionary on the dictionary stack using begin:
% With the previous two dictionaries still on the operand stack
/red get print % outputs 'rot'
begin
green print % outputs 'vert'
end
Dictionary contents can be iterated through using forall, though not in any particular order:
% Level 2 example
<<
/This 1
/That 2
/Other 3
>> {exch =print ( is ) print ==} forall
Which may output:
That is 2
This is 1
Other is 3
Dictionaries can be augmented (up to their defined size only in Level 1) or altered using put, and entries can be removed using undef:
% define a dictionary for easy reuse:
/MyDict <<
/rouge (red)
/vert (gruen)
>> def
% add to it
MyDict /bleu (blue) put
% change it
MyDict /vert (green) put
% remove something
MyDict /rouge undef
Prolog
Some versions of Prolog include dictionary ("dict") utilities.
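For example, SWI-Prolog provides a non-standard dict syntax together with accessors such as get_dict/3; a brief sketch (names are illustrative):

```prolog
% SWI-Prolog-specific: dicts are not part of standard Prolog.
?- Phonebook = _{sally_smart: "555-9999", john_doe: "555-1212"},
   get_dict(sally_smart, Phonebook, Number).
Number = "555-9999".
```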
Python
In Python, associative arrays are called "dictionaries". Dictionary literals are delimited by curly braces:
phonebook = {
"Sally Smart": "555-9999",
"John Doe": "555-1212",
"J. Random Hacker": "553-1337",
}
Dictionary items can be accessed using the array indexing operator:
>>> phonebook["Sally Smart"]
'555-9999'
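Indexing a missing key raises a KeyError; the get method returns a default value instead. A short sketch (the default string here is illustrative):

```python
phonebook = {
    "Sally Smart": "555-9999",
    "John Doe": "555-1212",
    "J. Random Hacker": "553-1337",
}

# get() returns the stored value, or the given default
# (None if unspecified) when the key is absent.
print(phonebook.get("Sally Smart"))           # 555-9999
print(phonebook.get("Jane Roe", "unlisted"))  # unlisted
```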
Loop iterating through all the keys of the dictionary:
>>> for key in phonebook:
... print(key, phonebook[key])
Sally Smart 555-9999
J. Random Hacker 553-1337
John Doe 555-1212
Iterating through (key, value) tuples:
>>> for key, value in phonebook.items():
... print(key, value)
Sally Smart 555-9999
J. Random Hacker 553-1337
John Doe 555-1212
Dictionary keys can be individually deleted using the del statement. The corresponding value can be returned before the key-value pair is deleted using the "pop" method of "dict" type:
>>> del phonebook["John Doe"]
>>> val = phonebook.pop("Sally Smart")
>>> phonebook.keys() # Only one key left
dict_keys(['J. Random Hacker'])
Python 2.7 and 3.x also support dict comprehensions (similar to list comprehensions), a compact syntax for generating a dictionary from any iterable:
>>> square_dict = {i: i*i for i in range(5)}
>>> square_dict
{0: 0, 1: 1, 2: 4, 3: 9, 4: 16}
>>> {key: value for key, value in phonebook.items() if "J" in key}
{'John Doe': '555-1212', 'J. Random Hacker': '553-1337'}
Strictly speaking, a dictionary is a superset of an associative array, since neither the keys nor the values are limited to a single data type. One could think of a dictionary as an "associative list" using the nomenclature of Python. For example, the following is also legitimate:
phonebook = {
"Sally Smart": "555-9999",
"John Doe": None,
"J. Random Hacker": -3.32,
14: "555-3322",
}
Dictionary keys must be of a hashable type, which in practice means an immutable type: strings, numbers, and tuples of immutable values all qualify, while mutable containers such as lists do not. Strings are suitable keys because they are immutable in Python.
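The hashability requirement can be demonstrated directly (the entries here are illustrative):

```python
phonebook = {}
phonebook[("Sally", "Smart")] = "555-9999"   # a tuple of strings is hashable, so it works as a key

try:
    phonebook[["John", "Doe"]] = "555-1212"  # a list is mutable, hence unhashable
except TypeError as error:
    print(error)                             # unhashable type: 'list'
```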
Red
In Red the built-in map! datatype provides an associative array that maps values of word, string, and scalar key types to values of any type. A hash table is used internally for lookup.
A map can be written as a literal, such as #(key1 value1 key2 value2 ...), or can be created using make map! [key1 value1 key2 value2 ...]:
Red [Title:"My map"]
my-map: make map! [
"Sally Smart" "555-9999"
"John Doe" "555-1212"
"J. Random Hacker" "553-1337"
]
; Red preserves case for both keys and values, however lookups are case insensitive by default; it is possible to force case sensitivity using the /case refinement for select and put.
; It is of course possible to use word! values as keys, in which case it is generally preferred to use set-word! values when creating the map, but any word type can be used for lookup or creation.
my-other-map: make map! [foo: 42 bar: false]
; Notice that the block is not reduced or evaluated in any way, therefore in the above example the key bar is associated with the word! false rather than the logic! value false; literal syntax can be used if the latter is desired:
my-other-map: make map! [foo: 42 bar: #[false]]
; or keys can be added after creation:
my-other-map: make map! [foo: 42]
my-other-map/bar: false
; Lookup can be written using path! notation or using the select action:
select my-map "Sally Smart"
my-other-map/foo
; You can also loop through all keys and values with foreach:
foreach [key value] my-map [
print [key "is associated to" value]
]
; A key can be removed using remove/key:
remove/key my-map "Sally Smart"
REXX
In REXX, associative arrays are called "stem variables" or "Compound variables".
KEY = 'Sally Smart'
PHONEBOOK.KEY = '555-9999'
KEY = 'John Doe'
PHONEBOOK.KEY = '555-1212'
KEY = 'J. Random Hacker'
PHONEBOOK.KEY = '553-1337'
Stem variables with numeric keys typically start at 1 and go up from there. The 0-key stem variable
by convention contains the total number of items in the stem:
NAME.1 = 'Sally Smart'
NAME.2 = 'John Doe'
NAME.3 = 'J. Random Hacker'
NAME.0 = 3
REXX has no easy way of automatically accessing the keys of a stem variable; typically the keys are stored in a separate associative array with numeric keys.
Ruby
In Ruby a hash table is used as follows:
phonebook = {
'Sally Smart' => '555-9999',
'John Doe' => '555-1212',
'J. Random Hacker' => '553-1337'
}
phonebook['John Doe']
Ruby supports hash looping and iteration with the following syntax:
irb(main):007:0> ### iterate over keys and values
irb(main):008:0* phonebook.each {|key, value| puts key + " => " + value}
Sally Smart => 555-9999
John Doe => 555-1212
J. Random Hacker => 553-1337
=> {"Sally Smart"=>"555-9999", "John Doe"=>"555-1212", "J. Random Hacker"=>"553-1337"}
irb(main):009:0> ### iterate keys only
irb(main):010:0* phonebook.each_key {|key| puts key}
Sally Smart
John Doe
J. Random Hacker
=> {"Sally Smart"=>"555-9999", "John Doe"=>"555-1212", "J. Random Hacker"=>"553-1337"}
irb(main):011:0> ### iterate values only
irb(main):012:0* phonebook.each_value {|value| puts value}
555-9999
555-1212
553-1337
=> {"Sally Smart"=>"555-9999", "John Doe"=>"555-1212", "J. Random Hacker"=>"553-1337"}
Ruby also supports many other useful operations on hashes, such as merging hashes, selecting or rejecting elements that meet some criteria, inverting (swapping the keys and values), and flattening a hash into an array.
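For instance, with hypothetical values (not the phone book above), those operations look like this:

```ruby
a = { "x" => 1, "y" => 2 }
b = { "y" => 20, "z" => 30 }

merged  = a.merge(b)                          # values from b win on key collisions
evens   = merged.select { |_key, v| v.even? } # keep entries whose value is even
odds    = merged.reject { |_key, v| v.even? } # drop entries whose value is even
swapped = a.invert                            # keys become values and vice versa
flat    = a.flatten                           # hash flattened into an array
```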
Rust
The Rust standard library provides a hash map (std::collections::HashMap) and a B-tree map (std::collections::BTreeMap). They share several methods with the same names but place different requirements on key types: HashMap requires keys to implement the Eq (equivalence relation) and Hash (hashability) traits and stores entries in an unspecified order, while BTreeMap requires keys to implement the Ord (total order) trait and stores entries in an order defined by the key type. These orderings are reflected by the default iterators.
use std::collections::HashMap;
let mut phone_book = HashMap::new();
phone_book.insert("Sally Smart", "555-9999");
phone_book.insert("John Doe", "555-1212");
phone_book.insert("J. Random Hacker", "555-1337");
The default iterators visit all entries as tuples. The HashMap iterators visit entries in an unspecified order and the BTreeMap iterator visits entries in the order defined by the key type.
for (name, number) in &phone_book {
println!("{} {}", name, number);
}
There is also an iterator for keys:
for name in phone_book.keys() {
println!("{}", name);
}
S-Lang
S-Lang has an associative array type:
phonebook = Assoc_Type[];
phonebook["Sally Smart"] = "555-9999"
phonebook["John Doe"] = "555-1212"
phonebook["J. Random Hacker"] = "555-1337"
You can also loop through an associative array in a number of ways:
foreach name (phonebook) {
vmessage ("%s %s", name, phonebook[name]);
}
To print a sorted list, it is better to take advantage of S-Lang's strong support for standard arrays:
keys = assoc_get_keys(phonebook);
i = array_sort(keys);
vals = assoc_get_values(phonebook);
array_map (Void_Type, &vmessage, "%s %s", keys[i], vals[i]);
Scala
Scala provides an immutable Map class as part of the scala.collection framework:
val phonebook = Map("Sally Smart" -> "555-9999",
"John Doe" -> "555-1212",
"J. Random Hacker" -> "553-1337")
Scala's type inference will decide that this is a Map[String, String]. To access the array:
phonebook.get("Sally Smart")
This returns an Option type, Scala's equivalent of the Maybe monad in Haskell.
Smalltalk
In Smalltalk a Dictionary is used:
phonebook := Dictionary new.
phonebook at: 'Sally Smart' put: '555-9999'.
phonebook at: 'John Doe' put: '555-1212'.
phonebook at: 'J. Random Hacker' put: '553-1337'.
To access an entry the message #at: is sent to the dictionary object:
phonebook at: 'Sally Smart'
Which gives:
'555-9999'
A dictionary hashes, or compares, based on equality and marks both key and value as strong references. Variants exist which hash and compare on identity (IdentityDictionary) or keep weak references (WeakKeyDictionary / WeakValueDictionary). Because every object implements #hash, any object can be used as a key (and, of course, also as a value).
SNOBOL
SNOBOL is one of the first (if not the first) programming languages to use associative arrays. Associative arrays in SNOBOL are called Tables.
PHONEBOOK = TABLE()
PHONEBOOK['Sally Smart'] = '555-9999'
PHONEBOOK['John Doe'] = '555-1212'
PHONEBOOK['J. Random Hacker'] = '553-1337'
Standard ML
The SML'97 standard of the Standard ML programming language does not provide any associative containers. However, various implementations of Standard ML do provide associative containers.
The library of the popular Standard ML of New Jersey (SML/NJ) implementation provides a signature (somewhat like an "interface"), ORD_MAP, which defines a common interface for ordered functional (immutable) associative arrays. There are several general functors—BinaryMapFn, ListMapFn, RedBlackMapFn, and SplayMapFn—that allow you to create the corresponding type of ordered map (the types are a self-balancing binary search tree, sorted association list, red–black tree, and splay tree, respectively) using a user-provided structure to describe the key type and comparator. The functor returns a structure in accordance with the ORD_MAP interface. In addition, there are two pre-defined modules for associative arrays that employ integer keys: IntBinaryMap and IntListMap.
- structure StringMap = BinaryMapFn (struct
type ord_key = string
val compare = String.compare
end);
structure StringMap : ORD_MAP
- val m = StringMap.insert (StringMap.empty, "Sally Smart", "555-9999")
val m = StringMap.insert (m, "John Doe", "555-1212")
val m = StringMap.insert (m, "J. Random Hacker", "553-1337");
val m =
T
{cnt=3,key="John Doe",
left=T {cnt=1,key="J. Random Hacker",left=E,right=E,value="553-1337"},
right=T {cnt=1,key="Sally Smart",left=E,right=E,value="555-9999"},
value="555-1212"} : string StringMap.map
- StringMap.find (m, "John Doe");
val it = SOME "555-1212" : string option
SML/NJ also provides a polymorphic hash table:
- exception NotFound;
exception NotFound
- val m : (string, string) HashTable.hash_table = HashTable.mkTable (HashString.hashString, op=) (3, NotFound);
val m =
HT
{eq_pred=fn,hash_fn=fn,n_items=ref 0,not_found=NotFound(-),
table=ref [|NIL,NIL,NIL,NIL,NIL,NIL,NIL,NIL,NIL,NIL,NIL,NIL,...|]}
: (string,string) HashTable.hash_table
- HashTable.insert m ("Sally Smart", "555-9999");
val it = () : unit
- HashTable.insert m ("John Doe", "555-1212");
val it = () : unit
- HashTable.insert m ("J. Random Hacker", "553-1337");
val it = () : unit
- HashTable.find m "John Doe"; (* returns NONE if not found *)
val it = SOME "555-1212" : string option
- HashTable.lookup m "John Doe"; (* raises the exception if not found *)
val it = "555-1212" : string
Monomorphic hash tables are also supported, using the HashTableFn functor.
Another Standard ML implementation, Moscow ML, also provides some associative containers. First, it provides polymorphic hash tables in the Polyhash structure. Also, some functional maps from the SML/NJ library above are available as Binarymap, Splaymap, and Intmap structures.
Tcl
There are two Tcl facilities that support associative-array semantics. An "array" is a collection of variables. A "dict" is a full implementation of associative arrays.
array
set {phonebook(Sally Smart)} 555-9999
set john {John Doe}
set phonebook($john) 555-1212
set {phonebook(J. Random Hacker)} 553-1337
If there is a space character in the variable name, the name must be grouped using either curly brackets (no substitution performed) or double quotes (substitution is performed).
Alternatively, several array elements can be set by a single command, by presenting their mappings as a list (words containing whitespace are braced):
array set phonebook [list {Sally Smart} 555-9999 {John Doe} 555-1212 {J. Random Hacker} 553-1337]
To access one array entry and put it to standard output:
puts $phonebook(Sally\ Smart)
Which returns this result:
555-9999
To retrieve the entire array as a dictionary:
array get phonebook
The result can be (order of keys is unspecified, not because the dictionary is unordered, but because the array is):
{Sally Smart} 555-9999 {J. Random Hacker} 553-1337 {John Doe} 555-1212
dict
set phonebook [dict create {Sally Smart} 555-9999 {John Doe} 555-1212 {J. Random Hacker} 553-1337]
To look up an item:
dict get $phonebook {John Doe}
To iterate through a dict:
foreach {name number} $phonebook {
puts "name: $name\nnumber: $number"
}
Visual Basic
Visual Basic can use the Dictionary class from the Microsoft Scripting Runtime (which is shipped with Visual Basic 6). There is no standard implementation common to all versions:
' Requires a reference to SCRRUN.DLL in Project Properties
Dim phoneBook As New Dictionary
phoneBook.Add "Sally Smart", "555-9999"
phoneBook.Item("John Doe") = "555-1212"
phoneBook("J. Random Hacker") = "553-1337"
For Each name In phoneBook
MsgBox name & " = " & phoneBook(name)
Next
Visual Basic .NET
Visual Basic .NET uses the collection classes provided by the .NET Framework.
Creation
The following code demonstrates the creation and population of a dictionary (see the C# example on this page for additional information):
Dim dic As New System.Collections.Generic.Dictionary(Of String, String)
dic.Add("Sally Smart", "555-9999")
dic("John Doe") = "555-1212"
dic.Item("J. Random Hacker") = "553-1337"
An alternate syntax would be to use a collection initializer, which compiles down to individual calls to Add:
Dim dic As New System.Collections.Generic.Dictionary(Of String, String) From {
{"Sally Smart", "555-9999"},
{"John Doe", "555-1212"},
{"J. Random Hacker", "553-1337"}
}
Access by key
Example demonstrating access (see C# access):
Dim sallyNumber = dic("Sally Smart")
' or
Dim sallyNumber = dic.Item("Sally Smart")
Dim result As String = Nothing
Dim sallyNumber = If(dic.TryGetValue("Sally Smart", result), result, "n/a")
Enumeration
Example demonstrating enumeration (see #C# enumeration):
' loop through the collection and display each entry.
For Each kvp As KeyValuePair(Of String, String) In dic
Console.WriteLine("Phone number for {0} is {1}", kvp.Key, kvp.Value)
Next
Windows PowerShell
Unlike many other command line interpreters, Windows PowerShell has built-in, language-level support for defining associative arrays:
$phonebook = @{
'Sally Smart' = '555-9999';
'John Doe' = '555-1212';
'J. Random Hacker' = '553-1337'
}
As in JavaScript, if the property name is a valid identifier, the quotes can be omitted:
$myOtherObject = @{ foo = 42; bar = $false }
Entries can be separated by either a semicolon or a newline:
$myOtherObject = @{ foo = 42
bar = $false ;
zaz = 3
}
Keys and values can be any .NET object type:
$now = [DateTime]::Now
$tomorrow = $now.AddDays(1)
$ProcessDeletionSchedule = @{
(Get-Process notepad) = $now
(Get-Process calc) = $tomorrow
}
It is also possible to create an empty associative array and add single entries, or even other associative arrays, to it later on:
$phonebook = @{}
$phonebook += @{ 'Sally Smart' = '555-9999' }
$phonebook += @{ 'John Doe' = '555-1212'; 'J. Random Hacker' = '553-1337' }
New entries can also be added by using the array index operator, the property operator, or the Add() method of the underlying .NET object:
$phonebook = @{}
$phonebook['Sally Smart'] = '555-9999'
$phonebook.'John Doe' = '555-1212'
$phonebook.Add('J. Random Hacker', '553-1337')
To dereference assigned objects, the array index operator, the property operator, or the parameterized property Item() of the .NET object can be used:
$phonebook['Sally Smart']
$phonebook.'John Doe'
$phonebook.Item('J. Random Hacker')
You can loop through an associative array as follows:
$phonebook.Keys | foreach { "Number for {0}: {1}" -f $_,$phonebook.$_ }
An entry can be removed using the Remove() method of the underlying .NET object:
$phonebook.Remove('Sally Smart')
Hash tables can be merged using the addition operator:
$hash1 = @{ a=1; b=2 }
$hash2 = @{ c=3; d=4 }
$hash3 = $hash1 + $hash2
Data serialization formats support
Many data serialization formats also support associative arrays (see this table)
JSON
In JSON, associative arrays are also referred to as objects. Keys can only be strings.
{
"Sally Smart": "555-9999",
"John Doe": "555-1212",
"J. Random Hacker": "555-1337"
}
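Because a JSON object maps directly onto the associative-array types above, parsing one yields a native map. In Python, for example, the standard json module does this:

```python
import json

document = """
{
    "Sally Smart": "555-9999",
    "John Doe": "555-1212",
    "J. Random Hacker": "555-1337"
}
"""

phonebook = json.loads(document)  # a JSON object becomes a Python dict
print(phonebook["Sally Smart"])   # 555-9999
```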
YAML
YAML associative arrays are also called map elements or key-value pairs. YAML places no restrictions on the types of keys; in particular, they are not restricted to being scalar or string values.
Sally Smart: 555-9999
John Doe: 555-1212
J. Random Hacker: 555-1337
References
Programming language comparison
Mapping
Articles with example Julia code
Lotus Dev. Corp. v. Borland Int'l, Inc.
Lotus Dev. Corp. v. Borland Int'l, Inc., 516 U.S. 233 (1996), is a United States Supreme Court case that tested the extent of software copyright. The lower court had held that copyright does not extend to the user interface of a computer program, such as the text and layout of menus. Due to the recusal of one justice, the Supreme Court decided the case with an eight-member bench split evenly, leaving the lower court's decision affirmed but setting no national precedent.
Background information
Borland released a spreadsheet product, Quattro Pro, with a compatibility mode in which its menu imitated Lotus 1-2-3, a competing product. None of the source code or machine code that generated the menus was copied, but the names of the commands and the organization of those commands into a hierarchy were virtually identical.
Quattro Pro also contained a "Key Reader" feature, which allowed it to execute Lotus 1-2-3 keyboard macros. To support this feature, Quattro Pro's code contained a copy of Lotus's menu hierarchy in which each command was represented by its first letter instead of its entire name.
Borland CEO Philippe Kahn took the case to the software development community arguing that Lotus's position would stifle innovation and damage the future of software development. The vast majority of the software development community supported Borland's position.
District Court case
Lotus filed suit in the United States District Court for the District of Massachusetts on July 2, 1990, claiming that the structure of the menus was copyrighted by Lotus. The district court ruled that Borland had infringed Lotus's copyright. The ruling was based in part on the fact that an alternative satisfactory menu structure could be designed. For example, the "Quit" command could be changed to "Exit".
Borland immediately removed the Lotus-based menu system from Quattro Pro, but retained support for its "Key Reader" feature, and Lotus filed a supplemental claim against this feature. A district court held that this also constituted copyright infringement.
Circuit Court case
Borland appealed the decision of the district court arguing that the menu hierarchy is a "method of operation", which is not copyrightable according to 17 U.S.C. § 102(b).
The United States Court of Appeals for the First Circuit reversed the district court's decision, agreeing with Borland's legal theory that considered the menu hierarchy a "method of operation". The court agreed with the district court that an alternative menu hierarchy could be devised, but argued that despite this, the menu hierarchy is an uncopyrightable "method of operation":

We hold that the Lotus menu command hierarchy is an uncopyrightable “method of operation.” The Lotus menu command hierarchy provides the means by which users control and operate Lotus 1–2–3. If users wish to copy material, for example, they use the “Copy” command. If users wish to print material, they use the “Print” command. Users must use the command terms to tell the computer what to do. Without the menu command hierarchy, users would not be able to access and control, or indeed make use of, Lotus 1–2–3's functional capabilities.

The court made an analogy between the menu hierarchy and the arrangement of buttons on a VCR. The buttons are used to control the playback of a video tape, just as the menu commands are used to control the operations of Lotus 1-2-3. Since the buttons are essential to operating the VCR, their layout cannot be copyrighted. Likewise, the menu commands, including the textual labels and the hierarchical layout, are essential to operating Lotus 1-2-3.
The court also considered the impact of their decision on users of software. If menu hierarchies were copyrightable, users would be required to learn how to perform the same operation in a different way for every program, which the court finds "absurd". Additionally, all macros would have to be re-written for each different program, which places an undue burden on users.
Concurring opinion
Judge Michael Boudin wrote a concurring opinion for this case. In this opinion, he discusses the costs and benefits of copyright protection, as well as the potential similarity of software copyright protection to patent protection. He argues that software is different from creative works, which makes it difficult to apply copyright law to software.
His opinion also considers the theory that Borland's use of the Lotus menu is "privileged". That is, because Borland copied the menu for a legitimate purpose of compatibility, its use should be allowed. This decision, if issued by the majority of the court, would have been narrower in scope than the "method of operations" decision. Copying a menu hierarchy would be allowed in some circumstances, and disallowed in others.
Supreme Court case
Lotus petitioned the United States Supreme Court for a writ of certiorari. In a per curiam opinion, the Supreme Court affirmed the circuit court's judgment due to an evenly divided court, with Justice Stevens recusing. Because the Court split evenly, it affirmed the First Circuit's decision without discussion and did not establish any national precedent on the copyright issue. Lotus's petition for a rehearing by the full court was denied. By the time the lawsuit ended, Borland had sold Quattro Pro to Novell, and Microsoft's Excel spreadsheet had emerged as the main challenger to Lotus 1-2-3.
Impact
The Lotus decision establishes a distinction in copyright law between the interface of a software product and its implementation. The implementation is subject to copyright. The public interface may also be subject to copyright to the extent that it contains expression (for example, the appearance of an icon). However, the set of available operations and the mechanics of how they are activated are not copyrightable. This standard allows software developers to create competing versions of copyrighted software products without infringing the copyright. See software clone for infringement and compliance cases.
Lotus v. Borland has been used as a lens through which to view the controversial case of Oracle America, Inc. v. Google, Inc., which deals with the copyrightability of software application programming interfaces (APIs) and the interoperability of software. APIs are designed to let developers ensure compatibility between programs. Were APIs found to be copyrightable, the threat of litigation over building interoperability (a core feature of computing as it has developed over decades of worldwide use) could drastically affect software development: it would exert a chilling effect, encourage walled gardens around islands of mutually incompatible software ecosystems, cost millions of man-hours in re-implementing and re-testing the same software across multiple concurrent systems, and lead to divergent development paths with a drastically increased attack surface for illicit exploitation.
See also
List of United States Supreme Court cases, volume 516
List of United States Supreme Court cases
Lists of United States Supreme Court cases by volume
List of United States Supreme Court cases by the Rehnquist Court
References
External links
17 U.S.C. § 102(b)
Perspective: Lotus Development Corp. v. Borland International, Massachusetts Lawyers Weekly, April 1995
United States Supreme Court cases
United States copyright case law
1996 in United States case law
United States computer case law
Borland
IBM
Spreadsheet software
United States Supreme Court cases of the Rehnquist Court
Tie votes of the United States Supreme Court
Copyrightability case law
Evacuation simulation
Evacuation simulation is a method to determine evacuation times for areas, buildings, or vessels. It is based on the simulation of crowd dynamics and pedestrian motion. The number of evacuation software packages has increased dramatically in the last 25 years, and a similar trend has been observed in the number of scientific papers published on the subject. One of the latest surveys indicates the existence of over 70 pedestrian evacuation models. Today there are two conferences dedicated to this subject: "Pedestrian Evacuation Dynamics" and "Human Behavior in Fire".
The distinction between buildings, ships, and vessels on the one hand and settlements and areas on the other hand is important for the simulation of evacuation processes. In the case of the evacuation of a whole district, the transport phase (see emergency evacuation) is usually covered by queueing models (see below).
Pedestrian evacuation simulations are popular in the fire safety design of buildings when a performance-based approach is used. Simulations are not primarily methods for optimization. To optimize the geometry of a building or the procedure with respect to evacuation time, a target function has to be specified and minimized. Accordingly, one or several variables must be identified which are subject to variation.
Classification of models
Modelling approaches in the field of evacuation simulation:
Cellular automaton: discrete, microscopic models, where the pedestrian is represented by a cell state. In this case both static and dynamic floor fields (i.e., distance maps) are used to navigate agents toward exits, moving from a cell to adjacent cells, which can have different shapes. There exist models for ship evacuation processes, bi-directional pedestrian flows, and general models with bionics aspects
Agent-based models: microscopic models, where the pedestrian is represented by an agent. The agents can have human attributes besides their coordinates, and their behavior can include stochastic elements. There exist general models with spatial aspects of pedestrian steps
Social Force Model: continuous, microscopic model, based on equations from physics
Queuing models: macroscopic models which are based on the graphical representation of the geometry. The movement of the persons is represented as a flow on this graph.
Particle swarm optimization models: microscopic model, based on a fitness function which minimizes some properties of the evacuation (distance between pedestrians, distance between pedestrians and exits)
Fluid-dynamic models: continuous, macroscopic models, where large crowds are modeled with coupled, nonlinear, partial differential equations
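As a concrete sketch of the social-force idea above, a toy two-term model can be written with a driving force that relaxes each pedestrian's velocity toward the exit plus an exponential inter-pedestrian repulsion. All parameter values here are illustrative, not calibrated; real models add wall forces, anisotropy, and empirically fitted constants:

```python
import numpy as np

def social_force_step(pos, vel, exit_pos, dt=0.1, tau=0.5, v0=1.3, A=2.0, B=0.3):
    """Advance pedestrian positions (n x 2 arrays) by one time step.

    Driving term relaxes each velocity toward the exit at desired speed v0
    over relaxation time tau; an exponential repulsion (strength A, range B)
    keeps pedestrians apart.
    """
    n = len(pos)
    force = np.zeros_like(pos)
    for i in range(n):
        # driving term: relax toward the desired velocity
        to_exit = exit_pos - pos[i]
        desired = v0 * to_exit / np.linalg.norm(to_exit)
        force[i] += (desired - vel[i]) / tau
        # pairwise repulsion from all other pedestrians
        for j in range(n):
            if i != j:
                d = pos[i] - pos[j]
                dist = np.linalg.norm(d)
                force[i] += A * np.exp(-dist / B) * d / dist
    vel = vel + dt * force
    pos = pos + dt * vel
    return pos, vel
```

Iterating this step moves the simulated pedestrians toward the exit while the repulsion term prevents them from overlapping.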
Simulation of evacuations
Buildings (train stations, sports stadia), ships, aircraft, tunnels, and trains are similar concerning their evacuation: the persons are walking towards a safe area. In addition, persons might use slides or similar evacuation systems and for ships the lowering of life-boats.
Tunnels
Tunnels are unique environments with specific characteristics (underground spaces, unfamiliarity to users, lack of natural light, etc.) which affect different aspects of evacuees' behaviour, such as pre-evacuation times (e.g. occupants' reluctance to leave their vehicles), occupant–occupant and occupant–environment interactions, herding behaviour, and exit selection.
Ships
Four aspects are particular for ship evacuation:
Ratio of number of crew to number of passengers,
Ship motion,
Floating position
The evacuation system (e.g., slides, life-boats).
Ship motion and/or abnormal floating position may decrease the ability to move. This influence has been investigated experimentally and can be taken into account by reduction factors.
The evacuation of a ship is divided into two separate phases: assembly phase and embarkation phase.
Aircraft
The American Federal Aviation Administration requires that aircraft be able to be evacuated within 90 seconds. This criterion has to be checked before approval of the aircraft.
The 90-second rule requires the demonstration that all passengers and crew members can safely abandon the aircraft cabin in less than 90 seconds, with half of the usable exits blocked, with the minimum illumination provided by floor proximity lighting, and a certain age-gender mix in the simulated occupants.
The rule was established in 1965 with 120 seconds, and has been evolving over the years to encompass the improvements in escape equipment, changes in cabin and seat material, and more complete and appropriate crew training.
References
Literature
A. Schadschneider, W. Klingsch, H. Klüpfel, T. Kretz, C. Rogsch, and A. Seyfried. Evacuation Dynamics: Empirical Results, Modeling and Applications. In R.A. Meyers, editor, Encyclopedia of Complexity and System Science. Springer, Berlin Heidelberg New York, 2009. (to be published in April 2009, available at arXiv:0802.1620v1).
Lord J, Meacham B, Moore A, Fahy R, Proulx G (2005). Guide for evaluating the predictive capabilities of computer egress models, NIST Report GCR 06-886. http://www.fire.nist.gov/bfrlpubs/fire05/PDF/f05156.pdf
E. Ronchi, P. Colonna, J. Capote, D. Alvear, N. Berloco, A. Cuesta. The evaluation of different evacuation models for road tunnel safety analyses. Tunnelling and Underground Space Technology Vol. 30, July 2012, pp74–84.
Kuligowski ED, Peacock RD, Hoskins, BL (2010). A Review of Building Evacuation Models NIST, Fire Research Division. 2nd edition. Technical Note 1680 Washington, US.
International Maritime Organization (2007). Guidelines for Evacuation Analyses for New and Existing Passenger Ships, MSC/Circ.1238, International Maritime Organization, London, UK.
R. Lovreglio, E. Ronchi, M. J. Kinsey (2019). An online survey of pedestrian evacuation model usage and users. Fire Technology. https://doi.org/10.1007/s10694-019-00923-8
Emergency simulation
Stochastic simulation
Social physics
Pupil function
The pupil function or aperture function describes how a light wave is affected upon transmission through an optical imaging system such as a camera, microscope, or the human eye. More specifically, it is a complex function of the position in the pupil or aperture (often an iris) that indicates the relative change in amplitude and phase of the light wave. Sometimes this function is referred to as the generalized pupil function, in which case pupil function only indicates whether light is transmitted or not. Imperfections in the optics typically have a direct effect on the pupil function; it is therefore an important tool to study optical imaging systems and their performance.
Relationship with other functions in optics
The complex pupil function can be written in polar form using two real functions:

P(x, y) = A(x, y) exp(i θ(x, y)),

where θ(x, y) is the phase change (in radians) introduced by the optics, or the surrounding medium. It captures all optical aberrations that occur between the image plane and the focal plane in the scene or sample. The light may also be attenuated differently at different positions in the pupil, sometimes deliberately for the purpose of apodization. Such change in amplitude of the light wave is described by the real, non-negative factor A(x, y).
The pupil function is also directly related to the point spread function by its Fourier transform. As such, the effect of aberrations on the point spread function can be described mathematically using the concept of the pupil function.
Since the (incoherent) point spread function is also related to the optical transfer function via a Fourier transform, a direct relationship exists between the pupil function and the optical transfer function. In the case of an incoherent optical imaging system, the optical transfer function is the auto correlation of the pupil function.
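These Fourier relationships can be sketched numerically. The following is an illustrative sketch with an arbitrary grid size and pupil radius, using the discrete Fourier transform in place of the continuous one:

```python
import numpy as np

# Sample an ideal circular pupil on a grid (grid size and radius are arbitrary choices).
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = (x**2 + y**2 <= (n // 8) ** 2).astype(complex)  # P = 1 inside the pupil, 0 outside

# The coherent PSF amplitude is the Fourier transform of the pupil function;
# the incoherent PSF is its squared magnitude.
amplitude = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
psf = np.abs(amplitude) ** 2

# The OTF is the Fourier transform of the incoherent PSF,
# equivalently the autocorrelation of the pupil function.
otf = np.fft.fft2(np.fft.ifftshift(psf))
otf /= otf[0, 0]  # normalize to unity at zero spatial frequency
```

For this unaberrated pupil, the PSF peaks at the center of the grid (the Airy pattern) and the OTF is real and equal to one at zero spatial frequency.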
Examples
In focus
In a homogeneous medium, a point source emits light with spherical wave fronts. A lens that is focused onto the point source will have optics that change the spherical wave front into a planar wave before it passes through the pupil or aperture stop. Often, additional lens elements refocus the light onto a sensor or photographic film, by converting the planar wave front to a spherical wave front, centered onto the image plane. The pupil function of such an ideal system is equal to one at every point within the pupil, and zero outside it. In case of a circular pupil, this can be written mathematically as:

P(x, y) = 1 for x² + y² ≤ R², and P(x, y) = 0 otherwise,

where R is the pupil radius.
Out of focus
When the point source is out of focus, the spherical wave will not be completely made planar by the optics, but will have an approximately parabolic wave front, with an optical path difference that grows quadratically with the radial pupil coordinate: ΔW(ρ) ∝ ρ². Such a variation in optical path length corresponds to a radial variation in the complex argument of the pupil function:
P(ρ) = e^{icρ²} for ρ ≤ a, and P(ρ) = 0 otherwise,
where c is a constant determined by the amount of defocus.
It is thus possible to deduce the point-spread function of the out of focus point source as the Fourier transform of the pupil function.
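A quadratic (defocus) phase across the pupil, as in the out-of-focus case just described, lowers and broadens the PSF peak. In this sketch the grid size, pupil radius, and defocus coefficient are illustrative assumptions.

```python
import numpy as np

N = 256
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
rho2 = X**2 + Y**2
inside = rho2 <= 32**2            # circular pupil support

def psf_from_pupil(pupil):
    """PSF as the normalized squared magnitude of the pupil's Fourier transform."""
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    p = np.abs(field) ** 2
    return p / p.sum()

# In focus: unit pupil function inside the aperture.
psf_focused = psf_from_pupil(inside.astype(complex))

# Out of focus: quadratic phase over the pupil, zero outside it.
pupil_defocused = np.exp(1j * 2e-3 * rho2) * inside
psf_defocused = psf_from_pupil(pupil_defocused)

# Same total energy, but the defocused peak is lower (and the spot broader).
print(psf_defocused.max() < psf_focused.max())
```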
Aberrated optics
The spherical wave could also be deformed by imperfect optics to an approximately cylindrical wave front, with an optical path difference that varies quadratically along a single pupil coordinate: ΔW(x, y) ∝ x². The corresponding pupil function is:
P(x, y) = e^{icx²} for x² + y² ≤ a², and P(x, y) = 0 otherwise.
Such a variation in optical path length will create an image that is blurred only in one dimension as is typical of systems with astigmatism.
See also
Fourier optics
Point spread function
Optical transfer function
References
Optics | Pupil function | Physics,Chemistry | 630 |
1,069,856 | https://en.wikipedia.org/wiki/Pistia | Pistia is a genus of aquatic plants in the arum family, Araceae. It is the sole genus in the tribe Pistieae which reflects its systematic isolation within the family. The single species it comprises, Pistia stratiotes, is often called water cabbage, water lettuce, Nile cabbage, or shellflower. Its native distribution is uncertain but is probably pantropical; it was first scientifically described from plants found on the Nile near Lake Victoria in Africa. It is now present, either naturally or through human introduction, in nearly all tropical and subtropical fresh waterways and is considered an invasive species as well as a mosquito breeding habitat. The specific epithet is derived from a Greek word, στρατιώτης, meaning "soldier", which references the sword-shaped leaves of some plants in the Stratiotes genus.
Description
Pistia stratiotes is a perennial monocotyledon with thick, soft leaves that form a rosette. It floats on the surface of the water, its roots hanging submersed beneath floating leaves. The leaves can measure 2 – 15 cm long and are light green, with parallel venations and wavy margins. The surface of the leaves is covered in short, white hairs which form basket-like structures that can trap air bubbles and increase the plant's buoyancy. The spongy parenchyma with large intercellular spaces in the leaves also aids the plant in floating. The flowers are dioecious, lack petals, and are hidden in the middle of the plant amongst the leaves. Pistia stratiotes has a spadix inflorescence, containing one pistillate flower with one ovary and 2–8 staminate flowers with two stamens. The pistillate and carpellate flowers are separated by folds in the spathe, where the male flowers are located above the female flowers. Oval, green berries with ovoid seeds form after successful fertilization. The plant undergoes asexual reproduction by propagating through stolons, yet evidence of sexual reproduction has also been observed in the ponds of Southern Brazil.
Pistia stratiotes is found in slow-moving rivers, lakes, and ponds. The species displays optimal growth in the temperature range of 22–30 °C, but can endure extreme temperatures up to 35 °C. As a result, Pistia stratiotes does not grow in colder climates beyond the tropics of Cancer and Capricorn. The species also requires slightly acidic water in the pH range of 6.5–7.2 for optimal growth.
Invasion
Water lettuce is among the world's most productive freshwater aquatic plants and is considered an invasive species. The species can be introduced to new areas by water dispersal, fragmentation, and hitchhiking on marine transportation or fishing equipment. The invasion of Pistia stratiotes in an ecosystem can lead to environmental and socio-economic ramifications for the communities it affects. In waters with high nutrient content, particularly those that have been contaminated with human loading of sewage or fertilizers, water lettuce can exhibit weedy overgrowth. It may also become invasive in hydrologically altered systems such as flood control canals and reservoirs. The severe overgrowth of water lettuce can block gas exchange at the water surface, creating hypoxic conditions and eliminating or disrupting various native aquatic organisms. By blocking access to sunlight, large mats of water lettuce can shade native submerged plants and alter communities relying on these native plants as a source of food. The mats can also get tangled in boat propellers and create challenges for boaters and recreational fishermen.
Pistia stratiotes feature in the life cycles of certain insect vectors for malaria and filariasis. Mosquitoes of the genus Mansonia can lay their eggs under the leaves of aquatic plants, such as Pistia stratiotes. Twenty-four hours later, the emerging larvae attach to the plant's roots using its siphon tube for respiration. Within a week, larvae can develop into adult mosquitos, making Pistia stratiotes a potential breeding ground for vectors of infectious disease. The moth Samea multiplicalis also uses Pistia stratiotes as its primary host plant. Eggs are laid among leaves and stems of the host plant and larvae hatch and feed intensively as they develop.
Control
Chemical control: Herbicides have been effective in controlling Pistia stratiotes: diquat, glyphosate, terbutryn, and 2,4-D, among many others. Yet the use of herbicides must be critically assessed to prevent negative environmental impacts and possible toxic effects on aquatic life and human health.
Physical control: Pistia stratiotes can be controlled with mechanical harvesters that remove the water lettuce from the infested waters and transport it to disposal onshore. Larger infestations can be removed with the aid of hydraulic excavators and tractors. To prevent the re-growth of Pistia stratiotes colonies, a long-term maintenance program should be implemented.
Biological control: Two species of insects are also being used as biological controls. Adults and larvae of the South American weevil Neohydronomus affinis feed on Pistia leaves, as do the larvae of the moth Spodoptera pectinicornis from Thailand. Both are proving to be useful tools in the management of Pistia stratiotes, with experiments showing recovery of benthic communities previously affected by hypoxic conditions.
The species is set to be banned in the EU from August 2024 to prevent its spread.
Range
The center of origin of Pistia stratiotes has long been a source of debate. Nativity to northern Africa is indicated by Egyptian hieroglyphics and reports of plants meeting the description of Pistia by Greek botanists, Dioscorides and Theophrastus in the Nile River. In addition, the co-evolution of Pistia stratiotes with various insects native to Brazil and Argentina, such as the water lettuce weevil, indicates a long-term native tenure in South America. Fossil specimens dating back to the late Pleistocene (~12,000 BP) and early Holocene (~3,500 BP) period are reported from Florida, indicating a native presence in the southeastern United States. Recent genetic evidence also suggests that Pistia is not actually a monotypic genus, as had been long assumed. Instead, Pistia appears to be composed of at least three genetically distinct, but morphologically and ecologically similar, species at a global scale.
Temperate occurrences
Though Pistia stratiotes is intolerant of cold temperatures, it has been recorded growing at least temporarily in temperate areas of North America and Europe. In the United States north of the Gulf of Mexico it has been found growing in Colorado, Connecticut, Delaware, Illinois, Kansas, Maryland,
Michigan, Minnesota, Missouri, New York, North Carolina, Ohio, Rhode Island, South Carolina,
and Wisconsin. One of these occurrences, in Idaho, survives in an area of a river fed by a hot spring. The rest are thought either to be completely eradicated by cold weather or possibly to survive by seed production.
Fossil record
Pistia-like plants appear in the fossil record during the Late Cretaceous epoch in rock strata from the western interior of North America. They were first described as †Pistia corrugata by Leo Lesquereux in 1876 based on specimens from the Almond Formation of Wyoming (late Campanian age). However, based on more complete specimens from the Campanian Dinosaur Park Formation of southern Alberta, Canada, and other areas, they were redescribed as a separate genus, †Cobbania, primarily due to differences in leaf morphology. Younger fossils attributed to Pistia stratiotes have been described from Eocene strata in the southeastern United States, and 350 fossil seeds of †Pistia sibirica have been described from middle Miocene strata of the Fasterholt area near Silkeborg in Central Jutland, Denmark. Fossils of this species have also been described from the Oligocene and Miocene of Western Siberia and from the Miocene of Germany.
A specimen of Pistia from the Florida peninsula dating from at least 3,550 years Before Present and a report of Holocene Pistia fossils from a lake in south central Florida are consistent with genetic evidence indicating that some varieties of Pistia stratiotes are native to the southeastern United States.
Uses
Consumption
While considered edible, Pistia stratiotes is not palatable as it is rich in calcium oxalate crystals that are bitter in taste. Nevertheless, there are records of the plant being utilized as famine food in India during the Great Famine of 1876–1878.
The Hausa people of Nigeria used the ash of the plant as a substitute for salt due to its high concentration of potassium chloride, a mineral salt. This salt substitute, also called zakankau, was of high importance, especially when imported salt was unavailable.
Caution is advised when consuming Pistia stratiotes, as the plant is a hyperaccumulator, and can absorb and accumulate toxic heavy metals present in its environment. The presence of high concentrations of calcium oxalate crystals can induce various health concerns, such as inhibited mineral absorption and kidney stones.
In Singapore and Southern China, Pistia stratiotes is commonly grown or collected as animal feed for ducks and pigs. Water lettuce is also considered an alternative for poultry feed in Indonesia due to its high content of crude protein.
Medical treatment
There are various medical uses of Pistia stratiotes throughout regions in Asia and Africa. In Nigeria, the dried leaves are prepared into a powder form and are applied to wounds and sores for disinfection. A similar use is present in Indian traditional medicine, where the powdered leaf is applied to syphilitic eruptions and skin infections. In Nigeria and Gambia, the leaf is infused in water to create an eyewash to treat allergic conjunctivitis. The eyewash is known to have a cooling and analgesic effect. Therefore, the plant is commonly called 'eye-pity' in Africa. In addition, the leaves of Pistia stratiotes can be burned into ash, and in Indian and Nigerian traditional medicine, the ash is used in treating ringworm infections of the scalp.
Medicinal properties
Anti-inflammatory properties: Extracts of the leaves of P. stratiotes reduce mast cell infiltration and degranulation in allergic reactions and exhibit anti-inflammatory properties. The ethanolic extracts have also been positively correlated with a reduction in inflammatory disorders, such as arthritis and fevers.
Antifungal properties: With the popular use of Pistia stratiotes as a traditional treatment for ringworm, researchers have tested P. stratiotes methanolic extracts on dermatophyte fungi. The studies showed significant fungicidal activity against T. rubrum, T. mentagrophytes, and E. floccosum.
Environmental remediation
The high sorption property of water lettuce makes it a great candidate for biodegradable oil sorbents in marine oil spills. In particular, the leaves of Pistia stratiotes can efficiently absorb significant amounts of hydrocarbons due to their large surface area and hydrophobicity.
As a hyper-accumulator, Pistia stratiotes has been studied as a potential candidate for wastewater treatment plants. The roots and leaves of the plant have been found to absorb excess nutrients and heavy metals, such as zinc, chromium, and cadmium in contaminated waters.
Pistia stratiotes can be grown in water gardens to reduce harmful algal blooms and eutrophic conditions. The plant is able to control the growth of algae by restricting light penetration in the water column and competing for nutrients, with significant uptake of phosphorus and ammonia nitrogen.
See also
Phytoremediation plants
Hyperaccumulators table – 3
References
"Biogeography of the Pistia clade (Araceae): Based on chloroplast and mitochondrial DNA sequences and Bayesian divergence time inference"
External links
Centre for Aquatic and Invasive Plants
Pistia stratiotes information from the Hawaiian Ecosystems at Risk project (HEAR)
Species Profile- Water Lettuce (Pistia stratiotes), National Invasive Species Information Center, United States National Agricultural Library. Lists general information and resources for Water Lettuce.
Aroideae
Monotypic Araceae genera
Taxa named by Carl Linnaeus
Pantropical flora
Aquatic plants
Phytoremediation plants
Invasive plant species in Sri Lanka
Cretaceous plants | Pistia | Biology | 2,641 |
77,538,830 | https://en.wikipedia.org/wiki/NanoTritium%20batteries | NanoTritium batteries are ultra-low-power, long-life betavoltaic devices developed by City Labs, Inc. These nanowatt-to-microwatt batteries utilize the natural decay of tritium, a radioactive isotope of hydrogen, to generate continuous power for over 20 years.
History
The first NanoTritium battery prototypes were developed in 2008 for encryption security memory backup power by City Labs, Inc., a regulatory-licensed R&D and manufacturing facility located in Miami, Florida. The company originated at Florida International University in 2003 as part of the Office of Entrepreneurial Science founded by current City Labs CEO, Peter Cabauy. The company was eventually joined by Larry C. Olsen, founder of Betacel, who served as Director of Research.
NanoTritium batteries were released commercially in 2012. This marked the first time tritium batteries could be purchased without requiring a radiation license. To date, this is the only General License granted to the betavoltaic industry.
Technology
NanoTritium batteries employ principles of betavoltaic conversion and radioactive beta decay rather than conventional electrochemical cells to generate power, harnessing electrons released as the contained tritium naturally decays into helium-3, a non-radioactive isotope. Current models are capable of producing an output voltage of 0.8 to 1.1 V with a current density of 150 nA/cm2. Tritium's 12.32-year half-life and the relatively low amount of radiation emitted allow these batteries to safely output electrical power for decades.
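Because the beta flux tracks the amount of tritium remaining, the battery's output falls off with tritium's 12.32-year half-life according to the usual exponential decay law. A sketch, assuming an illustrative 1.0 µW initial output (not a published City Labs specification):

```python
import math

# Tritium half-life from the text; the initial output is an illustrative assumption.
HALF_LIFE_YEARS = 12.32

def remaining_fraction(years: float) -> float:
    """Fraction of the tritium (and hence of the beta flux) left after `years`."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

p0_uw = 1.0  # hypothetical initial output in microwatts
for t in (0.0, 12.32, 20.0, 24.64):
    print(f"after {t:5.2f} years: {p0_uw * remaining_fraction(t):.3f} uW")
```

After the 20-plus-year service life quoted above, roughly a third of the original beta flux is still available, which is why these cells suit ultra-low-power loads with long deployment times.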
Testing performed by Lockheed Martin during an industry-wide survey found NanoTritium batteries to be resistant to vibration, altitude, and temperatures ranging from -55°C to +150°C. Repeated temperature cycling has been shown to have no effect on the performance of the batteries.
While current P100 series NanoTritium batteries are limited to powering low-power microelectronic devices, future batteries are expected to produce a larger power output to expand use cases for higher-power devices.
Applications
NanoTritium batteries have been employed for various applications where accessibility is limited and long-term power is beneficial, including powering components on COMSEC devices, satellites, unattended sensors, and implantable medical devices. Despite containing radioactive materials, the batteries are considered safe for implants due to their engineering and inherently low radiation levels, which prevent an individual from receiving a dose higher than the set 15 rem whole body limit even in the event of catastrophic failure.
City Labs is also designing tritium-powered devices for NASA applications, including autonomous sensors for the Moon.
References
Nuclear technology
Battery types | NanoTritium batteries | Physics | 531 |
46,206,971 | https://en.wikipedia.org/wiki/BAT99-98 | BAT99-98 is a Wolf–Rayet star located in the Large Magellanic Cloud, in NGC 2070 near the R136 cluster in the Tarantula Nebula (30 Doradus). It is one of the most massive stars known and close to one of the most luminous stars currently known.
Observations
A 1978 survey carried out by Jorge Melnick covered the 30 Doradus region and found six new Wolf–Rayet (WR) stars, all belonging to the WN sequence. The survey observed stars that were above apparent magnitude 14 and within 2 arcminutes of the centre of the 30 Doradus nebula, and the star now known as BAT99-98 was labelled as star J. It was found to have a magnitude of 13.5 and a spectral type of WN5.
The following year, thirteen new WR stars in the Large Magellanic Cloud were reported, one of which was Mel J. It was numbered 12, and referred to as AB12, or LMC AB12 to distinguish it from the better-known AB stars in the Small Magellanic Cloud.
Melnick conducted another study of stars in NGC 2070 and gave BAT99-98 the number 49, reclassifying its spectral type as WN7.
Neither the AB12 nor the Mel J designation is in common use, although "Melnick 49" is sometimes seen. More commonly, LMC Wolf–Rayet stars are referred to by R (Radcliffe Observatory) numbers, Brey (Breysacher catalogue numbers), or BAT99 numbers.
Characteristics
BAT99-98 is located near the R136 cluster and has similar mass–luminosity properties to the massive stars in the cluster itself. It is estimated that the star held at its birth and has since lost . It sheds a large amount of mass through a stellar wind that moves at . The star has a surface temperature of and a luminosity of . Although the star is very luminous due to its high temperature, much of that light is ultraviolet and invisible to humans – making it 141,000 times brighter than the Sun visually. It is now classified as a WN6 star, and models suggest that it is 7.5 million years old.
Fate
The future of BAT99-98 depends on its mass loss. It is thought that stars this massive can never lose enough mass to avoid a catastrophic end. The result is likely to be a supernova, hypernova, gamma-ray burst, or perhaps almost no visible explosion, leaving behind a black hole or neutron star. The exact details depend heavily on the timing and amount of the mass loss, with current models not fully reproducing observed stars, but the majority of massive stars in the Local Group are expected to produce Type Ib or Ic supernovae, sometimes with a gamma-ray burst, and leave behind a black hole. However, for some stars of exceptionally high mass, the supernova event is triggered by pair instability and leaves behind no remnant at all.
See also
List of most massive stars
List of most luminous stars
Lynx Arc
References
Stars in the Large Magellanic Cloud
Tarantula Nebula
Extragalactic stars
Wolf–Rayet stars
Dorado
J05383914-6906211
Large Magellanic Cloud | BAT99-98 | Astronomy | 671 |
31,319,138 | https://en.wikipedia.org/wiki/Severinghaus%20electrode | The Severinghaus electrode is an electrode that measures carbon dioxide (CO2). It was developed by Dr. John W. Severinghaus and his technician A. Freeman Bradley in 1958.
It utilizes a CO2-sensitive glass electrode in a surrounding film of bicarbonate solution, covered by a thin plastic membrane that is permeable to carbon dioxide but impermeable to water and electrolytic solutes. The carbon dioxide pressure of a sample gas or liquid equilibrates through the membrane, and the glass electrode measures the resulting pH of the bicarbonate solution.
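The pH of the bicarbonate film maps back to pCO2 through the Henderson–Hasselbalch relation, which is effectively what the electrode's calibration inverts. A minimal sketch, assuming the standard textbook values pK′ = 6.1 and CO2 solubility 0.0307 mmol/(L·mmHg); the bicarbonate concentration of 24 mmol/L is an illustrative assumption:

```python
import math

PK = 6.1     # apparent pK' of carbonic acid in plasma-like solution
S = 0.0307   # CO2 solubility, mmol per L per mmHg

def ph_from_pco2(pco2_mmhg: float, hco3_mmol_l: float = 24.0) -> float:
    """Henderson-Hasselbalch: pH set by the bicarbonate/dissolved-CO2 ratio."""
    return PK + math.log10(hco3_mmol_l / (S * pco2_mmhg))

def pco2_from_ph(ph: float, hco3_mmol_l: float = 24.0) -> float:
    """Invert the relation: recover pCO2 from the measured pH."""
    return hco3_mmol_l / (S * 10 ** (ph - PK))

# A normal arterial pCO2 of 40 mmHg gives a pH near 7.4, and inverting recovers it.
ph = ph_from_pco2(40.0)
print(round(ph, 2), round(pco2_from_ph(ph), 1))
```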
The Clark, galvanic, and paramagnetic electrodes measure oxygen. The Severinghaus electrode measures CO2 (as the partial pressure pCO2). The Sanz electrode measures pH.
References
External links
Electrodes
Gas sensors | Severinghaus electrode | Chemistry | 156 |
1,745,546 | https://en.wikipedia.org/wiki/Monfrag%C3%BCe | Monfragüe (Spanish: Parque Nacional de Monfragüe, or simply Monfragüe) is a Spanish national park noted for its bird-life. It is situated in the center of a triangle formed by Plasencia, Trujillo and the city of Cáceres within the province of Cáceres. Monfragüe is also a comarca (county, with no administrative role) of Extremadura, western Spain.
Location
Monfragüe is a comarca in Spain, i.e. a county, with no administrative role in Extremadura, western Spain.
Monfragüe is famous for its national park by the same name, which is noted for its bird-life. It is situated in the center of a triangle formed by Plasencia, Trujillo and the city of Cáceres within the province of Cáceres. The park runs from east to west along the valley of the River Tagus (Tajo), which cuts through a long mountainous ridge and has created a rock face, the Peña Falcon ("falcon rock"), on the western side. On the eastern side is the Castle of Monfragüe. The River Tietar enters the park from the north-east and joins the Tagus just to the east of Peña Falcon. The only village in the park is Villareal de San Carlos (population 28).
The park occupies an area of 18,118 hectares.
History
The name of the area and of the Park comes from the Latin Monsfragorum, "monte fragoso" in Spanish, which means "lush mountain".
Prehistoric period
The mountains of Monfragüe house a great number of caves with prehistoric paintings from the Copper Age, Bronze Age and Iron Age for example the "Cueva del Castillo", located on the south face of the Sierra de las Corchuelas.
Around the Park are remains of pre-Roman times. In Miravete remnants of an old castle exist, and in Malpartida de Plasencia there is an estate known as "El Calamoco". A warrior stele found in Torrejón el Rubio and the Treasury of Serradilla are evidence of a highly hierarchical agricultural society inhabiting this area.
Roman period
Remains of Roman roads, bridges, fountains and gravestones can be found, since the park is close to the Ruta de la Plata (Silver Route). A section of the route, which goes down to the bridge of the Cardinal from Villarreal, can be considered as a vestige of Roman road. As in almost all Spanish geography, valleys provide the layout for the road. Remains of watchtowers exist, in Cerro Gimio for example.
9th–19th century
During the ninth century, the castle of Monfragüe was built with five towers and two perimeters of walls. What is visible today are remnants of multiple restorations after military orders conquered it for King Alfonso VIII, with a round tower from the twelfth century and a pentagonal one from the fifteenth century.
In 1450, Juan de Carvajal ordered the Cardinal's Bridge to be built entirely from granite ashlars; it facilitated communications between Plasencia and Trujillo. Since the bridge was practically the only one crossing the Tagus in the Extremadura, it gave rise to pillage, turning the area into a "paradise" of bandits and robbers hidden in its steep and impenetrable mountain ranges.
At the beginning of the eighteenth century, the Spanish War of Succession seriously affected the area: The village of Monfragüe disappeared, inhabitants took refuge in the nearby village of Corchuelas, and the village of Piñuela at the other end of the mountain range was seriously damaged. Carlos III de España founded a village halfway between the "port of La Serrana" and the Puente del Cardenal, called Villarreal of San Carlos. It had a church, a fountain and barracks, but in spite of the privileges granted to its inhabitants, it never became more than a small village linked to Serradilla due to the danger and poverty of the area.
The Spanish War of Independence destroyed the Castle of Monfragüe, the Bridge of the Cardinal and Castillejo del Pico in Miravete and Corchuelas, whose inhabitants fled to Torrejón the Rubio, Serradilla and Malpartida de Plasencia.
20th century
During the Spanish Civil War in the 1930s, the Extremadura was taken over rapidly. Rather than the conflict itself, the worst aspects were the hunger and poverty which followed. The impenetrable mountains of the region, with their maquis shrubland, were important to the highlander groups commanded by famous guerrillas like "Quincoces", "Chaquetalarga" (Joaquín Ventas Cintas) and "the French" (Pedro Díaz Monje).
In 1966, construction of the dam at Torrejón el Rubio, and the Alcántara Dam in 1969 altered the landscape irreversibly, as it submerged the wild beauty of the Tagus riverbanks along with its ecological and ethnological wealth.
In 1968, Jesus Garzón arrived in the area, enamored of the beauty of Monfragüe and dedicated himself to nature conservation. He battled with the administration, the owners of neighboring estates, politicians and mayors of the area, but his commitment, supported by scientists and nature lovers resulted in the 4 April 1979 declaration of Monfragüe as a natural park, a lower level of protection than a national park.
In 1991, Monfragüe was declared a Special Protection Area for birds.
During the following years, the conservationist mentality, the infrastructure in Villarreal and publication efforts about the riches of the Park were strengthened. Since 2003, it has been recognised by UNESCO as a Biosphere reserve. In May 2004, it was enlarged to the actual ZEPA "Monfragüe y Dehesas del entorno", which covers 116,160 hectares.
After twenty-five years Monfragüe became a national park by law on 2 March 2007.
At the end of 2016, the area also received recognition as a dark-sky preserve.
Biodiversity
Habitats in the park include extensive dense scrub, small oak woodlands, and numerous cliffs and rock faces.
The land is mainly used for traditional, low-intensity farming. However, there were two major changes in the years 1960–70: the river Tagus was dammed, affecting its course through the park, and in 1970 aggressive reforestation with non-indigenous eucalyptus and pine began. For a paper industry planned, but never built, in Navalmoral de la Mata, many hectares of the Park were desolated and irreversibly altered by terraces built with heavy machinery. The Sierra de Miravete and the ravines of the streams Malvecino and Barbaón received a hard blow, and important thickets of the Mediterranean forest disappeared.
The non-indigenous species are being eradicated. Commercial forestry is prohibited in Spanish national parks.
Birds
In 1988 the European Union designated Monfrague a Special Protection Area (SPA) for bird-life. The SPA (or ZEPA, the equivalent acronym in Spanish) extends beyond the park, where the nesting sites are concentrated, into the surrounding dehesas, which provide food for the birds.
Monfrague is an outstanding site for raptors, with more than 15 regular breeding species, including the world's largest colony of Eurasian black vulture (over 600 pairs). It has the world's highest concentration of imperial eagles (more than 10 pairs), a large population of griffon vulture (over 600 pairs), and several pairs of Egyptian vulture, golden eagle and Bonelli's eagle. The crags and cliffs on the north side of the river midway through the park draw photographers from all over Europe and the Americas. The government has built observation blinds throughout the course of the river.
Other breeding birds for which the park is important are black stork and Eurasian eagle owl and there is a high density of azure-winged magpie. It is also one of the few locations in Europe where white-rumped swift breed.
Other wildlife
Iberian lynx survived for a long time before numbers decreased. They were reintroduced and have since been doing well.
Deer and wild boar live in the park.
Gallery
References
External links
Official site
Official site Ambiente, Gobierno de Extremadura
Magazine about Monfragüe Reddeparquesnacionales.com
Website about Monfrague National Park
National parks of Spain
Protected areas of Extremadura
Biosphere reserves of Spain
Birdwatching sites in Spain
Dark-sky preserves in Spain
Important Bird Areas of Spain
Special Protection Areas of Extremadura
Protected areas established in 1979 | Monfragüe | Astronomy | 1,778 |
125,293 | https://en.wikipedia.org/wiki/Copper | Copper is a chemical element. It has the symbol Cu (), and the atomic number 29. It is a soft, malleable, and ductile metal with very high thermal and electrical conductivity. A freshly exposed surface of pure copper has a pinkish-orange color. Copper is used as a conductor of heat and electricity, as a building material, and as a constituent of various metal alloys, such as sterling silver used in jewelry, cupronickel used to make marine hardware and coins, and constantan used in strain gauges and thermocouples for temperature measurement.
Copper is one of the few metals that can occur in nature in a directly usable metallic form. This means that copper is a native metal. This led to very early human use in several regions, from c. 8000 BC. Thousands of years later, it was the first metal to be smelted from sulfide ores, c. 5000 BC; the first metal to be cast into a shape in a mold, c. 4000 BC; and the first metal to be purposely alloyed with another metal, tin, to create bronze, c. 3500 BC.
Commonly encountered compounds are copper(II) salts, which often impart blue or green colors to such minerals as azurite, malachite, and turquoise, and have been used widely and historically as pigments.
Copper used in buildings, usually for roofing, oxidizes to form a green patina of compounds called verdigris. Copper is sometimes used in decorative art, both in its elemental metal form and in compounds as pigments. Copper compounds are used as bacteriostatic agents, fungicides, and wood preservatives.
Copper is essential to all living organisms as a trace dietary mineral because it is a key constituent of the respiratory enzyme complex cytochrome c oxidase. In molluscs and crustaceans, copper is a constituent of the blood pigment hemocyanin, replaced by the iron-complexed hemoglobin in fish and other vertebrates. In humans, copper is found mainly in the liver, muscle, and bone. The adult body contains between 1.4 and 2.1 mg of copper per kilogram of body weight.
Etymology
In the Roman era, copper was mined principally on Cyprus, the origin of the name of the metal, from aes cyprium (metal of Cyprus), later corrupted to cuprum (Latin). Coper (Old English) and copper were derived from this, the later spelling first used around 1530.
Characteristics
Physical
Copper, silver, and gold are in group 11 of the periodic table; these three metals have one s-orbital electron on top of a filled d-electron shell and are characterized by high ductility, and electrical and thermal conductivity. The filled d-shells in these elements contribute little to interatomic interactions, which are dominated by the s-electrons through metallic bonds. Unlike metals with incomplete d-shells, metallic bonds in copper are lacking a covalent character and are relatively weak. This observation explains the low hardness and high ductility of single crystals of copper. At the macroscopic scale, introduction of extended defects to the crystal lattice, such as grain boundaries, hinders flow of the material under applied stress, thereby increasing its hardness. For this reason, copper is usually supplied in a fine-grained polycrystalline form, which has greater strength than monocrystalline forms.
The softness of copper partly explains its high electrical conductivity (59.6×10^6 S/m) and high thermal conductivity, the second highest (after silver) among pure metals at room temperature. This is because the resistivity to electron transport in metals at room temperature originates primarily from scattering of electrons on thermal vibrations of the lattice, which are relatively weak in a soft metal. The maximum possible current density of copper in open air is approximately 3.1×10^6 A/m², above which it begins to heat excessively.
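As an illustration of how copper's conductivity translates into practice, the sketch below computes the room-temperature resistance of a wire from the standard conductivity of annealed copper, σ ≈ 5.96×10^7 S/m; the wire dimensions (10 m of 1.5 mm² cross-section, a common mains conductor size) are illustrative assumptions.

```python
# R = L / (sigma * A): resistance of a uniform conductor.
SIGMA_CU = 5.96e7          # S/m, standard conductivity of annealed copper
length_m = 10.0            # illustrative wire length
area_m2 = 1.5e-6           # illustrative cross-section: 1.5 mm^2

resistance = length_m / (SIGMA_CU * area_m2)
print(f"{resistance * 1000:.1f} mOhm")   # about 112 milliohms
```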
Copper is one of a few metallic elements with a natural color other than gray or silver. Pure copper is orange-red and acquires a reddish tarnish when exposed to air. This is due to the low plasma frequency of the metal, which lies in the red part of the visible spectrum, causing it to absorb the higher-frequency green and blue colors.
As with other metals, if copper is put in contact with another metal in the presence of an electrolyte, galvanic corrosion will occur.
Chemical
Copper does not react with water, but it does slowly react with atmospheric oxygen to form a layer of brown-black copper oxide which, unlike the rust that forms on iron in moist air, protects the underlying metal from further corrosion (passivation). A green layer of verdigris (copper carbonate) can often be seen on old copper structures, such as the roofing of many older buildings and the Statue of Liberty. Copper tarnishes when exposed to some sulfur compounds, with which it reacts to form various copper sulfides.
Isotopes
There are 29 isotopes of copper. ⁶³Cu and ⁶⁵Cu are stable, with ⁶³Cu comprising approximately 69% of naturally occurring copper; both have a spin of 3⁄2. The other isotopes are radioactive, with the most stable being ⁶⁷Cu with a half-life of 61.83 hours. Seven metastable isomers have been characterized; ⁶⁸ᵐCu is the longest-lived with a half-life of 3.8 minutes. Isotopes with a mass number above 64 decay by β−, whereas those with a mass number below 64 decay by β+. ⁶⁴Cu, which has a half-life of 12.7 hours, decays both ways.
⁶²Cu and ⁶⁴Cu have significant applications. ⁶²Cu is used in ⁶²Cu-PTSM as a radioactive tracer for positron emission tomography.
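Half-life figures like these translate directly into remaining activity via N(t) = N₀ · 2^(−t/t½). A minimal sketch using the 61.83-hour half-life of ⁶⁷Cu and the 12.7-hour half-life of ⁶⁴Cu:

```python
def fraction_remaining(t_hours, half_life_hours):
    """Fraction of a radioisotope remaining after t_hours of decay."""
    return 2 ** (-t_hours / half_life_hours)

# 67Cu, half-life 61.83 h
print(fraction_remaining(61.83, 61.83))      # 0.5 after one half-life
print(fraction_remaining(2 * 61.83, 61.83))  # 0.25 after two half-lives

# 64Cu (12.7 h) decays far faster: roughly 27% left after a day
print(round(fraction_remaining(24, 12.7), 2))
```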
Occurrence
Copper is produced in massive stars and is present in the Earth's crust in a proportion of about 50 parts per million (ppm). In nature, copper occurs in a variety of minerals, including native copper, copper sulfides such as chalcopyrite, bornite, digenite, covellite, and chalcocite, copper sulfosalts such as tetrahedite-tennantite, and enargite, copper carbonates such as azurite and malachite, and as copper(I) or copper(II) oxides such as cuprite and tenorite, respectively. The largest mass of elemental copper discovered weighed 420 tonnes and was found in 1857 on the Keweenaw Peninsula in Michigan, US. Native copper is a polycrystal, with the largest single crystal ever described measuring 4.4 × 3.2 × 3.2 cm. Copper is the 26th most abundant element in Earth's crust, representing 50 ppm compared with 75 ppm for zinc, and 14 ppm for lead.
Typical background concentrations of copper are low, with trace levels in the atmosphere, soil, vegetation, and seawater, and around 2 μg/L in freshwater.
Production
Most copper is mined or extracted as copper sulfides from large open pit mines in porphyry copper deposits that contain 0.4 to 1.0% copper. Sites include Chuquicamata, in Chile, Bingham Canyon Mine, in Utah, United States, and El Chino Mine, in New Mexico, United States. According to the British Geological Survey, in 2005, Chile was the top producer of copper with at least one-third of the world share, followed by the United States, Indonesia and Peru. Copper can also be recovered through the in-situ leach process. Several sites in the state of Arizona are considered prime candidates for this method. The amount of copper in use is increasing and the quantity available is barely sufficient to allow all countries to reach developed world levels of usage. An alternative source of copper currently being researched is polymetallic nodules, which lie on the floor of the Pacific Ocean at depths of approximately 3,000–6,500 meters. These nodules also contain other valuable metals such as cobalt and nickel.
Reserves and prices
Copper has been in use for at least 10,000 years, but more than 95% of all copper ever mined and smelted has been extracted since 1900. As with many natural resources, the total amount of copper on Earth is vast, with around 10¹⁴ tons in the top kilometer of Earth's crust, which is about 5 million years' worth at the current rate of extraction. However, only a tiny fraction of these reserves is economically viable with present-day prices and technologies. Estimates of copper reserves available for mining vary from 25 to 60 years, depending on core assumptions such as the growth rate. Recycling is a major source of copper in the modern world.
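The "5 million years' worth" figure is simple division; a quick sketch, assuming an extraction rate of roughly 2×10⁷ tonnes per year (an approximation of the current rate, not a figure from the text):

```python
crustal_copper_t = 1e14    # tonnes in the top kilometre of crust (estimate)
annual_extraction_t = 2e7  # tonnes mined per year (assumed ~20 Mt/yr)

years = crustal_copper_t / annual_extraction_t
print(f"{years:.0f} years of supply at the assumed rate")  # 5000000
```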
The price of copper is volatile. After a peak in 2022 the price unexpectedly fell.
The global market for copper is one of the most commodified and financialized of the commodity markets, and has been so for decades.
Extraction
The great majority of copper ores are sulfides. Common ores are the sulfides chalcopyrite (CuFeS2), bornite (Cu5FeS4) and, to a lesser extent, covellite (CuS) and chalcocite (Cu2S). These ores occur at the level of <1% Cu. Concentration of the ore is required, which begins with comminution followed by froth flotation. The remaining concentrate is smelted, which can be described with two simplified equations:
Cuprous sulfide is oxidized to cuprous oxide:
2 Cu2S + 3 O2 → 2 Cu2O + 2 SO2
Cuprous oxide reacts with cuprous sulfide to convert to blister copper upon heating:
2 Cu2O + Cu2S → 6 Cu + SO2
This smelting gives copper matte, roughly 50% Cu by weight, which is converted to blister copper and then purified by electrolysis. Depending on the ore, other metals including platinum and gold are sometimes obtained during the electrolysis.
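A balanced reaction must conserve every element across both sides, and this can be checked mechanically. The sketch below encodes the two smelting steps (with one SO2 per Cu2S in the second step, the balanced form) as coefficient/formula pairs — the encoding itself is illustrative, not a chemistry library API:

```python
from collections import Counter

def count_atoms(side):
    """Total atoms of each element on one side of a reaction,
    given a list of (coefficient, {element: count}) terms."""
    total = Counter()
    for coeff, formula in side:
        for elem, n in formula.items():
            total[elem] += coeff * n
    return total

# 2 Cu2S + 3 O2 -> 2 Cu2O + 2 SO2
ox_lhs = [(2, {"Cu": 2, "S": 1}), (3, {"O": 2})]
ox_rhs = [(2, {"Cu": 2, "O": 1}), (2, {"S": 1, "O": 2})]

# 2 Cu2O + Cu2S -> 6 Cu + SO2
red_lhs = [(2, {"Cu": 2, "O": 1}), (1, {"Cu": 2, "S": 1})]
red_rhs = [(6, {"Cu": 1}), (1, {"S": 1, "O": 2})]

assert count_atoms(ox_lhs) == count_atoms(ox_rhs)
assert count_atoms(red_lhs) == count_atoms(red_rhs)
print("both smelting equations balance")
```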
Aside from sulfides, another family of ores are oxides. Approximately 15% of the world's copper supply derives from these oxides. The beneficiation process for oxides involves extraction with sulfuric acid solutions followed by electrolysis. In parallel with the above method for "concentrated" sulfide and oxide ores, copper is recovered from mine tailings and heaps. A variety of methods are used including leaching with sulfuric acid, ammonia, ferric chloride. Biological methods are also used.
A potential source of copper is polymetallic nodules, which have an estimated copper concentration of 1.3%.
Recycling
According to the International Resource Panel's Metal Stocks in Society report, the global per capita stock of copper in use in society is 35–55 kg. Much of this is in more-developed countries (140–300 kg per capita) rather than less-developed countries (30–40 kg per capita). In 2001, a typical automobile contained 20–30 kg of copper.
Like aluminium, copper is recyclable without any loss of quality, both from raw state and from manufactured products. An estimated 80% of all copper ever mined is still in use today. In volume, copper is the third most recycled metal after iron and aluminium. Recycled copper supplies about one-third of global demand.
The process of recycling copper is roughly the same as is used to extract copper but requires fewer steps. High-purity scrap copper is melted in a furnace and then reduced and cast into billets and ingots. Lower-purity scrap is melted to form black copper (70–90% pure, containing impurities such as iron, zinc, tin, and nickel), followed by oxidation of impurities in a converter to form blister copper (96–98% pure), which is then refined as before.
Environmental impacts
The environmental cost of copper mining was estimated at 3.7 kg CO2-eq per kg of copper in 2019. Codelco, a major producer in Chile, reported that in 2020 the company emitted 2.8 t CO2-eq per ton (2.8 kg CO2-eq per kg) of fine copper. Greenhouse gas emissions primarily arise from electricity consumed by the company, especially when sourced from fossil fuels, and from engines required for copper extraction and refinement. Mining companies often mismanage waste, rendering the surrounding area sterile for life; nearby rivers and forests are also negatively impacted. The Philippines is an example of a region where land is overexploited by mining companies.
Copper mining waste in Valea Şesei, Romania, has significantly altered nearby water properties. The water in the affected areas is highly acidic, with a pH range of 2.1–4.9, and shows elevated electrical conductivity levels between 280 and 1561 mS/cm. These changes in water chemistry make the environment inhospitable for fish, essentially rendering the water uninhabitable for aquatic life.
Alloys
Numerous copper alloys have been formulated, many with important uses. Brass is an alloy of copper and zinc. Bronze usually refers to copper-tin alloys, but can refer to any alloy of copper such as aluminium bronze. Copper is one of the most important constituents of silver and karat gold solders used in the jewelry industry, modifying the color, hardness and melting point of the resulting alloys. Some lead-free solders consist of tin alloyed with a small proportion of copper and other metals.
The alloy of copper and nickel, called cupronickel, is used in low-denomination coins, often for the outer cladding. The US five-cent coin (currently called a nickel) consists of 75% copper and 25% nickel in homogeneous composition. Prior to the introduction of cupronickel, which was widely adopted by countries in the latter half of the 20th century, alloys of copper and silver were also used, with the United States using an alloy of 90% silver and 10% copper until 1965, when circulating silver was removed from all coins with the exception of the half dollar—these were debased to an alloy of 40% silver and 60% copper between 1965 and 1970. The alloy of 90% copper and 10% nickel, remarkable for its resistance to corrosion, is used for various objects exposed to seawater, though it is vulnerable to the sulfides sometimes found in polluted harbors and estuaries. Alloys of copper with aluminium (about 7%) have a golden color and are used in decorations. Shakudō is a Japanese decorative alloy of copper containing a low percentage of gold, typically 4–10%, that can be patinated to a dark blue or black color.
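Alloy compositions like the cupronickel figures above translate directly into metal content. A minimal sketch, assuming the standard 5.000 g mass of a US five-cent coin (the mass is an assumption added here, not stated in the text):

```python
def component_mass(coin_mass_g, mass_fraction):
    """Mass of one alloy component in a coin of given total mass."""
    return coin_mass_g * mass_fraction

# US five-cent coin: 75% Cu / 25% Ni, assumed total mass 5.000 g
nickel_g = 5.000
print(component_mass(nickel_g, 0.75))  # copper content: 3.75 g
print(component_mass(nickel_g, 0.25))  # nickel content: 1.25 g
```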
Compounds
Copper forms a rich variety of compounds, usually with oxidation states +1 and +2, which are often called cuprous and cupric, respectively. Copper compounds promote or catalyse numerous chemical and biological processes.
Binary compounds
As with other elements, the simplest compounds of copper are binary compounds, i.e. those containing only two elements, the principal examples being oxides, sulfides, and halides. Both cuprous and cupric oxides are known. Among the numerous copper sulfides, important examples include copper(I) sulfide (Cu2S) and copper monosulfide (CuS).
Cuprous halides with fluorine, chlorine, bromine, and iodine are known, as are cupric halides with fluorine, chlorine, and bromine. Attempts to prepare copper(II) iodide yield only copper(I) iodide and iodine.
2 Cu2+ + 4 I− → 2 CuI + I2
Coordination chemistry
Copper forms coordination complexes with ligands. In aqueous solution, copper(II) exists as the hexaaqua complex [Cu(H2O)6]2+. This complex exhibits the fastest water exchange rate (speed of water ligands attaching and detaching) for any transition metal aquo complex. Adding aqueous sodium hydroxide causes the precipitation of light blue solid copper(II) hydroxide. A simplified equation is:
Cu2+ + 2 OH− → Cu(OH)2
Aqueous ammonia results in the same precipitate. Upon adding excess ammonia, the precipitate dissolves, forming tetraamminecopper(II):
Cu(H2O)4(OH)2 + 4 NH3 → [Cu(H2O)2(NH3)4]2+ + 2 H2O + 2 OH−
Many other oxyanions form complexes; these include copper(II) acetate, copper(II) nitrate, and copper(II) carbonate. Copper(II) sulfate forms a blue crystalline pentahydrate, the most familiar copper compound in the laboratory. It is used in a fungicide called the Bordeaux mixture.
Polyols, compounds containing more than one alcohol functional group, generally interact with cupric salts. For example, copper salts are used to test for reducing sugars. Specifically, using Benedict's reagent and Fehling's solution the presence of the sugar is signaled by a color change from blue Cu(II) to reddish copper(I) oxide. Schweizer's reagent and related complexes with ethylenediamine and other amines dissolve cellulose. Amino acids such as cystine form very stable chelate complexes with copper(II) including in the form of metal-organic biohybrids (MOBs). Many wet-chemical tests for copper ions exist, one involving potassium ferricyanide, which gives a red-brown precipitate with copper(II) salts.
Organocopper chemistry
Compounds that contain a carbon-copper bond are known as organocopper compounds. They are very reactive towards oxygen to form copper(I) oxide and have many uses in chemistry. They are synthesized by treating copper(I) compounds with Grignard reagents, terminal alkynes or organolithium reagents; in particular, the last reaction described produces a Gilman reagent. These can undergo substitution with alkyl halides to form coupling products; as such, they are important in the field of organic synthesis. Copper(I) acetylide is highly shock-sensitive but is an intermediate in reactions such as the Cadiot–Chodkiewicz coupling and the Sonogashira coupling. Conjugate addition to enones and carbocupration of alkynes can also be achieved with organocopper compounds. Copper(I) forms a variety of weak complexes with alkenes and carbon monoxide, especially in the presence of amine ligands.
Copper(III) and copper(IV)
Copper(III) is most often found in oxides. A simple example is potassium cuprate, KCuO2, a blue-black solid. The most extensively studied copper(III) compounds are the cuprate superconductors. Yttrium barium copper oxide (YBa2Cu3O7) consists of both Cu(II) and Cu(III) centres. Like oxide, fluoride is a highly basic anion and is known to stabilize metal ions in high oxidation states. Both copper(III) and even copper(IV) fluorides are known, K3CuF6 and Cs2CuF6, respectively.
Some copper proteins form oxo complexes, which, in extensively studied synthetic analog systems, feature copper(III). With tetrapeptides, purple-colored copper(III) complexes are stabilized by the deprotonated amide ligands.
Complexes of copper(III) are also found as intermediates in reactions of organocopper compounds, for example in the Kharasch–Sosnovsky reaction.
History
A timeline of copper illustrates how this metal has advanced human civilization for the past 11,000 years.
Prehistoric
Copper Age
Copper occurs naturally as native metallic copper and was known to some of the oldest civilizations on record. The history of copper use dates to 9000 BC in the Middle East; a copper pendant was found in northern Iraq that dates to 8700 BC. Evidence suggests that gold and meteoric iron (but not smelted iron) were the only metals used by humans before copper. The history of copper metallurgy is thought to follow this sequence: first, cold working of native copper, then annealing, smelting, and, finally, lost-wax casting. In southeastern Anatolia, all four of these techniques appear more or less simultaneously at the beginning of the Neolithic, c. 7500 BC.
Copper smelting was independently invented in different places. The earliest evidence of lost-wax casting of copper comes from an amulet found in Mehrgarh, Pakistan, dated to 4000 BC. Investment casting was invented in 4500–4000 BC in Southeast Asia. Smelting was probably discovered in China before 2800 BC, in Central America around 600 AD, and in West Africa about the 9th or 10th century AD. Carbon dating has established mining at Alderley Edge in Cheshire, UK, at 2280 to 1890 BC.
Ötzi the Iceman, a male dated from 3300 to 3200 BC, was found with an axe with a copper head 99.7% pure; high levels of arsenic in his hair suggest an involvement in copper smelting. Experience with copper has assisted the development of other metals; in particular, copper smelting likely led to the discovery of iron smelting.
Production in the Old Copper Complex in Michigan and Wisconsin is dated between 6500 and 3000 BC. A copper spearpoint found in Wisconsin has been dated to 6500 BC, and copper usage by the indigenous peoples of the Old Copper Complex in the Great Lakes region has been radiometrically dated to as far back as 7500 BC, making it one of the oldest known examples of copper extraction in the world. Prehistoric lead pollution recorded in Michigan lake sediments also points to early copper mining in the region. Evidence suggests that utilitarian copper objects fell increasingly out of use in the Old Copper Complex during the Bronze Age, with a shift toward increased production of ornamental copper objects.
Bronze Age
Natural bronze, a type of copper made from ores rich in silicon, arsenic, and (rarely) tin, came into general use in the Balkans around 5500 BC. Alloying copper with tin to make bronze was first practiced about 4000 years after the discovery of copper smelting, and about 2000 years after "natural bronze" had come into general use. Bronze artifacts from the Vinča culture date to 4500 BC. Sumerian and Egyptian artifacts of copper and bronze alloys date to 3000 BC. Egyptian Blue, or cuprorivaite (calcium copper silicate) is a synthetic pigment that contains copper and started being used in ancient Egypt around 3250 BC. The manufacturing process of Egyptian blue was known to the Romans, but by the fourth century AD the pigment fell out of use and the secret to its manufacturing process became lost. The Romans said the blue pigment was made from copper, silica, lime and natron and was known to them as caeruleum.
The Bronze Age began in Southeastern Europe around 3700–3300 BC, in Northwestern Europe about 2500 BC. It ended with the beginning of the Iron Age, 2000–1000 BC in the Near East, and 600 BC in Northern Europe. The transition between the Neolithic period and the Bronze Age was formerly termed the Chalcolithic period (copper-stone), when copper tools were used with stone tools. The term has gradually fallen out of favor because in some parts of the world, the Chalcolithic and Neolithic are coterminous at both ends. Brass, an alloy of copper and zinc, is of much more recent origin. It was known to the Greeks, but became a significant supplement to bronze during the Roman Empire.
Ancient and post-classical
In Greece, copper was known by the name chalkos (χαλκός). It was an important resource for the Romans, Greeks and other ancient peoples. In Roman times, it was known as aes Cyprium, aes being the generic Latin term for copper alloys and Cyprium from Cyprus, where much copper was mined. The phrase was simplified to cuprum, hence the English copper. Aphrodite (Venus in Rome) represented copper in mythology and alchemy because of its lustrous beauty and its ancient use in producing mirrors; Cyprus, the source of copper, was sacred to the goddess. The seven heavenly bodies known to the ancients were associated with the seven metals known in antiquity, and Venus was assigned to copper, both because of the connection to the goddess and because Venus was the brightest heavenly body after the Sun and Moon and so corresponded to the most lustrous and desirable metal after gold and silver.
Copper was first mined in ancient Britain as early as 2100 BC. Mining at the largest of these mines, the Great Orme, continued into the late Bronze Age. Mining seems to have been largely restricted to supergene ores, which were easier to smelt. The rich copper deposits of Cornwall seem to have been largely untouched, in spite of extensive tin mining in the region, for reasons likely social and political rather than technological.
In North America, native copper is known to have been extracted from sites on Isle Royale with primitive stone tools between 800 and 1600 AD. Copper annealing was being performed in the North American city of Cahokia around 1000–1300 AD. Several exquisite copper plates, known as the Mississippian copper plates, have been found in the area around Cahokia dating from this period. The plates are thought to have been manufactured at Cahokia before ending up elsewhere in the Midwest and southeastern United States, as in the Wulfing cache and the Etowah plates.
In South America, a copper mask dated to 1000 BC found in the Argentinian Andes is the oldest known copper artifact discovered in the Andes. Peru has been considered the origin of early copper metallurgy in pre-Columbian America, but the copper mask from Argentina suggests that the Cajón del Maipo of the southern Andes was another important center for early copper working in South America. Copper metallurgy was flourishing in South America, particularly in Peru, around 1000 AD. Copper burial ornaments from the 15th century have been uncovered, but the metal's commercial production did not start until the early 20th century.
The cultural role of copper has been important, particularly in currency. Romans in the 6th through 3rd centuries BC used copper lumps as money. At first, the copper itself was valued, but gradually the shape and look of the copper became more important. Julius Caesar had his own coins made from brass, while Octavianus Augustus Caesar's coins were made from Cu-Pb-Sn alloys. With an estimated annual output of around 15,000 t, Roman copper mining and smelting activities reached a scale unsurpassed until the time of the Industrial Revolution; the provinces most intensely mined were those of Hispania, Cyprus, and Central Europe.
The gates of the Temple of Jerusalem used Corinthian bronze treated with depletion gilding. The process was most prevalent in Alexandria, where alchemy is thought to have begun. In ancient India, copper was used in the holistic medical science Ayurveda for surgical instruments and other medical equipment. Ancient Egyptians (~2400 BC) used copper for sterilizing wounds and drinking water, and later to treat headaches, burns, and itching.
Modern
The Great Copper Mountain was a mine in Falun, Sweden, that operated from the 10th century to 1992. It satisfied two-thirds of Europe's copper consumption in the 17th century and helped fund many of Sweden's wars during that time. It was referred to as the nation's treasury; Sweden had a copper-backed currency.
Copper has been used in roofing, currency, and the photographic technology known as the daguerreotype. Copper was used in Renaissance sculpture and in constructing the Statue of Liberty, and it continues to be used in construction of various types. Copper plating and copper sheathing were widely used to protect the under-water hulls of ships, a technique pioneered by the British Admiralty in the 18th century. The Norddeutsche Affinerie in Hamburg was the first modern electroplating plant, starting its production in 1876. The German scientist Gottfried Osann invented powder metallurgy in 1830 while determining the metal's atomic mass; around then it was discovered that the amount and type of alloying element (e.g., tin) added to copper affects bell tones.
During the rise in demand for copper for the Age of Electricity, from the 1880s until the Great Depression of the 1930s, the United States produced one third to half the world's newly mined copper. Major districts included the Keweenaw district of northern Michigan, primarily native copper deposits, which was eclipsed by the vast sulfide deposits of Butte, Montana, in the late 1880s, which itself was eclipsed by porphyry deposits of the Southwest United States, especially at Bingham Canyon, Utah, and Morenci, Arizona. Introduction of open pit steam shovel mining and innovations in smelting, refining, flotation concentration and other processing steps led to mass production. Early in the twentieth century, Arizona ranked first, followed by Montana, then Utah and Michigan.
Flash smelting was developed by Outokumpu in Finland and first applied at Harjavalta in 1949; the energy-efficient process accounts for 50% of the world's primary copper production.
The Intergovernmental Council of Copper Exporting Countries, formed in 1967 by Chile, Peru, Zaire and Zambia, operated in the copper market as OPEC does in oil, though it never achieved the same influence, particularly because the second-largest producer, the United States, was never a member; it was dissolved in 1988.
In 2008, China became the world's largest importer of copper and has continued to be as of at least 2023.
Applications
The major applications of copper are electrical wire (60%), roofing and plumbing (20%), and industrial machinery (15%). Copper is used mostly as a pure metal, but when greater hardness is required, it is put into such alloys as brass and bronze (5% of total use). For more than two centuries, copper paint has been used on boat hulls to control the growth of plants and shellfish. A small part of the copper supply is used for nutritional supplements and fungicides in agriculture. Machining of copper is possible, although alloys are preferred for good machinability in creating intricate parts.
Wire and cable
Despite competition from other materials, copper remains the preferred electrical conductor in nearly all categories of electrical wiring except overhead electric power transmission where aluminium is often preferred. Copper wire is used in power generation, power transmission, power distribution, telecommunications, electronics circuitry, and countless types of electrical equipment. Electrical wiring is the most important market for the copper industry. This includes structural power wiring, power distribution cable, appliance wire, communications cable, automotive wire and cable, and magnet wire. Roughly half of all copper mined is used for electrical wire and cable conductors. Many electrical devices rely on copper wiring because of its multitude of inherent beneficial properties, such as its high electrical conductivity, tensile strength, ductility, creep (deformation) resistance, corrosion resistance, low thermal expansion, high thermal conductivity, ease of soldering, malleability, and ease of installation.
For a short period from the late 1960s to the late 1970s, copper wiring was replaced by aluminium wiring in many housing construction projects in America. The new wiring was implicated in a number of house fires and the industry returned to copper.
Electronics and related devices
Integrated circuits and printed circuit boards increasingly feature copper in place of aluminium because of its superior electrical conductivity; heat sinks and heat exchangers use copper because of its superior heat dissipation properties. Electromagnets, vacuum tubes, cathode-ray tubes, and magnetrons in microwave ovens use copper, as do waveguides for microwave radiation.
Electric motors
Copper's superior conductivity enhances the efficiency of electrical motors. This is important because motors and motor-driven systems account for 43–46% of all global electricity consumption and 69% of all electricity used by industry. Increasing the mass and cross section of copper in a coil increases the efficiency of the motor. Copper motor rotors, a new technology designed for motor applications where energy savings are prime design objectives, are enabling general-purpose induction motors to meet and exceed National Electrical Manufacturers Association (NEMA) premium efficiency standards.
Renewable energy production
Architecture
Copper has been used since ancient times as a durable, corrosion resistant, and weatherproof architectural material. Roofs, flashings, rain gutters, downspouts, domes, spires, vaults, and doors have been made from copper for hundreds or thousands of years. Copper's architectural use has been expanded in modern times to include interior and exterior wall cladding, building expansion joints, radio frequency shielding, and antimicrobial and decorative indoor products such as attractive handrails, bathroom fixtures, and counter tops. Some of copper's other important benefits as an architectural material include low thermal movement, light weight, lightning protection, and recyclability.
The metal's distinctive natural green patina has long been coveted by architects and designers. The final patina is a particularly durable layer that is highly resistant to atmospheric corrosion, thereby protecting the underlying metal against further weathering. It can be a mixture of carbonate and sulfate compounds in various amounts, depending upon environmental conditions such as sulfur-containing acid rain. Architectural copper and its alloys can also be 'finished' to take on a particular look, feel, or color. Finishes include mechanical surface treatments, chemical coloring, and coatings.
Copper has excellent brazing and soldering properties and can be welded; the best results are obtained with gas metal arc welding.
Antibiofouling
Copper is biostatic, meaning bacteria and many other forms of life will not grow on it. For this reason it has long been used to line parts of ships to protect against barnacles and mussels. It was originally used pure, but has since been superseded by Muntz metal and copper-based paint. Similarly, as discussed in copper alloys in aquaculture, copper alloys have become important netting materials in the aquaculture industry because they are antimicrobial, prevent biofouling even in extreme conditions, and have strong structural and corrosion-resistant properties in marine environments.
Antimicrobial
Copper-alloy touch surfaces have natural properties that destroy a wide range of microorganisms (e.g., E. coli O157:H7, methicillin-resistant Staphylococcus aureus (MRSA), Staphylococcus, Clostridium difficile, influenza A virus, adenovirus, SARS-CoV-2, and fungi). In India, copper vessels have been used for storing water since ancient times, long before the metal's antimicrobial properties were scientifically established. Some copper alloys were proven to kill more than 99.9% of disease-causing bacteria within just two hours when cleaned regularly. The United States Environmental Protection Agency (EPA) has approved the registrations of these copper alloys as "antimicrobial materials with public health benefits"; that approval allows manufacturers to make legal claims to the public health benefits of products made of registered alloys. In addition, the EPA has approved a long list of antimicrobial copper products made from these alloys, such as bedrails, handrails, over-bed tables, sinks, faucets, door knobs, toilet hardware, computer keyboards, health club equipment, and shopping cart handles. Copper doorknobs are used by hospitals to reduce the transfer of disease, and Legionnaires' disease is suppressed by copper tubing in plumbing systems. Antimicrobial copper alloy products are now being installed in healthcare facilities in the U.K., Ireland, Japan, Korea, France, Denmark, and Brazil, as well as being called for in the US, and in the subway transit system in Santiago, Chile, where copper–zinc alloy handrails were installed in some 30 stations between 2011 and 2014.
Textile fibers can be blended with copper to create antimicrobial protective fabrics.
Copper demand
Total world production in 2023 is expected to be almost 23 million metric tons. Copper demand is increasing due to the ongoing energy transition to electricity. China accounts for over half the demand.
For some purposes, other metals can substitute; aluminium wire was substituted in many applications, but improper design resulted in fire hazards. The safety issues have since been addressed by use of larger sizes of aluminium wire (#8 AWG and up), and properly designed aluminium wiring is still being installed in place of copper. For example, the Airbus A380 uses aluminium wire in place of copper wire for electrical power transmission.
Speculative investing
Copper may be used as a speculative investment due to the predicted increase in use from worldwide infrastructure growth and its important role in producing wind turbines, solar panels, and other renewable energy sources. Demand is also predicted to increase because electric cars contain on average 3.6 times as much copper as conventional cars, although the effect of electric cars on copper demand is debated. Some people invest in copper through copper mining stocks, ETFs, and futures. Others store physical copper in the form of copper bars or rounds, although these tend to carry a higher premium than precious metals. Those who want to avoid the premiums of copper bullion may instead store old copper wire, copper tubing, or American pennies made before 1982.
Folk medicine
Copper is commonly used in jewelry, and according to some folklore, copper bracelets relieve arthritis symptoms. In one trial for osteoarthritis and one trial for rheumatoid arthritis, no differences were found between copper bracelet and control (non-copper) bracelet. No evidence shows that copper can be absorbed through the skin. If it were, it might lead to copper poisoning.
Degradation
Chromobacterium violaceum and Pseudomonas fluorescens can both mobilize solid copper as a cyanide compound. The ericoid mycorrhizal fungi associated with Calluna, Erica and Vaccinium can grow in metalliferous soils containing copper. The ectomycorrhizal fungus Suillus luteus protects young pine trees from copper toxicity. A sample of the fungus Aspergillus niger was found growing from gold mining solution and was found to contain cyano complexes of such metals as gold, silver, copper, iron, and zinc. The fungus also plays a role in the solubilization of heavy metal sulfides.
Biological role
Biochemistry
Copper proteins have diverse roles in biological electron transport and oxygen transportation, processes that exploit the easy interconversion of Cu(I) and Cu(II). Copper is essential in the aerobic respiration of all eukaryotes. In mitochondria, it is found in cytochrome c oxidase, which is the last protein in oxidative phosphorylation. Cytochrome c oxidase is the protein that binds O2 between a copper and an iron; the protein transfers 4 electrons to the O2 molecule to reduce it to two molecules of water. Copper is also found in many superoxide dismutases, proteins that catalyze the decomposition of superoxides by converting them (by disproportionation) to oxygen and hydrogen peroxide:
Cu2+-SOD + O2− → Cu+-SOD + O2 (reduction of copper; oxidation of superoxide)
Cu+-SOD + O2− + 2H+ → Cu2+-SOD + H2O2 (oxidation of copper; reduction of superoxide)
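Summing the two half-reactions above (the enzyme-bound copper cancels) gives the net disproportionation carried out by superoxide dismutase:

```latex
2\,\mathrm{O_2^-} + 2\,\mathrm{H^+} \longrightarrow \mathrm{O_2} + \mathrm{H_2O_2}
```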
The protein hemocyanin is the oxygen carrier in most mollusks and some arthropods such as the horseshoe crab (Limulus polyphemus). Because hemocyanin is blue, these organisms have blue blood rather than the red blood of iron-based hemoglobin. Structurally related to hemocyanin are the laccases and tyrosinases. Instead of reversibly binding oxygen, these proteins hydroxylate substrates, illustrated by their role in the formation of lacquers. The biological role for copper commenced with the appearance of oxygen in Earth's atmosphere. Several copper proteins, such as the "blue copper proteins", do not interact directly with substrates; hence they are not enzymes. These proteins relay electrons by the process called electron transfer.
A unique tetranuclear copper center has been found in nitrous-oxide reductase.
Chemical compounds which were developed for treatment of Wilson's disease have been investigated for use in cancer therapy.
Nutrition
Copper is an essential trace element in plants and animals, but not all microorganisms. The human body contains copper at a level of about 1.4 to 2.1 mg per kg of body mass.
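As an illustration of the cited 1.4 to 2.1 mg/kg range, a minimal sketch; the 70 kg body mass is an assumed example, not from the text:

```python
# Estimate total body copper from the quoted concentration range (mg per kg).
def body_copper_mg(mass_kg, low=1.4, high=2.1):
    """Return (min, max) total copper in mg for a given body mass."""
    return mass_kg * low, mass_kg * high

lo, hi = body_copper_mg(70)  # roughly 98-147 mg for a 70 kg adult
```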
Absorption
Copper is absorbed in the gut, then transported to the liver bound to albumin. After processing in the liver, copper is distributed to other tissues in a second phase, which involves the protein ceruloplasmin, carrying the majority of copper in blood. Ceruloplasmin also carries the copper that is excreted in milk; this copper is particularly well absorbed as a copper source. Copper in the body normally undergoes enterohepatic circulation (about 5 mg a day, vs. about 1 mg per day absorbed in the diet and excreted from the body), and the body is able to excrete some excess copper, if needed, via bile, which carries some copper out of the liver that is not then reabsorbed by the intestine.
Dietary recommendations
The U.S. Institute of Medicine (IOM) updated the estimated average requirements (EARs) and recommended dietary allowances (RDAs) for copper in 2001. If there is not sufficient information to establish EARs and RDAs, an estimate designated Adequate Intake (AI) is used instead. The AIs for copper are: 200 μg of copper for 0–6-month-old males and females, and 220 μg of copper for 7–12-month-old males and females. For both sexes, the RDAs for copper are: 340 μg of copper for 1–3 years old, 440 μg of copper for 4–8 years old, 700 μg of copper for 9–13 years old, 890 μg of copper for 14–18 years old and 900 μg of copper for ages 19 years and older. For pregnancy, 1,000 μg. For lactation, 1,300 μg. As for safety, the IOM also sets tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of copper, the UL is set at 10 mg/day. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes.
The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women and men ages 18 and older, the AIs are set at 1.3 and 1.6 mg/day, respectively. The AI for pregnancy and lactation is 1.5 mg/day. For children ages 1–17 years, the AIs increase with age from 0.7 to 1.3 mg/day. These AIs are higher than the U.S. RDAs. The European Food Safety Authority reviewed the same safety question and set its UL at 5 mg/day, which is half the U.S. value.
For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For copper labeling purposes, 100% of the Daily Value was 2.0 mg, but it was later revised to 0.9 mg to bring it into agreement with the RDA. A table of the old and new adult daily values is provided at Reference Daily Intake.
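The label arithmetic here is straightforward; a sketch using the revised 0.9 mg Daily Value (the 0.3 mg serving amount is a hypothetical example):

```python
COPPER_DV_MG = 0.9  # revised US Daily Value for copper

def percent_dv(serving_mg):
    """Percent Daily Value as shown on a US nutrition label."""
    return round(serving_mg / COPPER_DV_MG * 100)

percent_dv(0.3)  # a 0.3 mg serving is labeled 33 %DV
```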
Deficiency
Because of its role in facilitating iron uptake, copper deficiency can produce anemia-like symptoms, neutropenia, bone abnormalities, hypopigmentation, impaired growth, increased incidence of infections, osteoporosis, hyperthyroidism, and abnormalities in glucose and cholesterol metabolism. Conversely, Wilson's disease causes an accumulation of copper in body tissues.
Severe deficiency can be found by testing for low plasma or serum copper levels, low ceruloplasmin, and low red blood cell superoxide dismutase levels; these are not sensitive to marginal copper status. The "cytochrome c oxidase activity of leucocytes and platelets" has been suggested as another indicator of deficiency, but the results have not been confirmed by replication.
Toxicity
Gram quantities of various copper salts have been taken in suicide attempts and produced acute copper toxicity in humans, possibly due to redox cycling and the generation of reactive oxygen species that damage DNA. Corresponding amounts of copper salts (30 mg/kg) are toxic in animals. A minimum dietary value for healthy growth in rabbits has been reported to be at least 3 ppm in the diet. However, higher concentrations of copper (100 ppm, 200 ppm, or 500 ppm) in the diet of rabbits may favorably influence feed conversion efficiency, growth rates, and carcass dressing percentages.
Chronic copper toxicity does not normally occur in humans because of transport systems that regulate absorption and excretion. Autosomal recessive mutations in copper transport proteins can disable these systems, leading to Wilson's disease with copper accumulation and cirrhosis of the liver in persons who have inherited two defective genes.
Elevated copper levels have also been linked to worsening symptoms of Alzheimer's disease.
Human exposure
In the US, the Occupational Safety and Health Administration (OSHA) has designated a permissible exposure limit (PEL) for copper dust and fumes in the workplace as a time-weighted average (TWA) of 1 mg/m3. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 1 mg/m3, time-weighted average. The IDLH (immediately dangerous to life and health) value is 100 mg/m3.
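A time-weighted average is the duration-weighted mean concentration over the work shift; a minimal sketch, where the sampled concentrations and durations are assumed examples:

```python
PEL_MG_M3 = 1.0  # OSHA permissible exposure limit for copper dust and fumes

def twa(samples, shift_hours=8.0):
    """samples: (concentration in mg/m3, duration in hours) pairs."""
    return sum(c * t for c, t in samples) / shift_hours

exposure = twa([(0.5, 4), (2.0, 2), (0.0, 2)])  # 0.75 mg/m3, under the PEL
```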
Copper is a constituent of tobacco smoke. The tobacco plant readily absorbs and accumulates heavy metals, such as copper from the surrounding soil into its leaves. These are readily absorbed into the user's body following smoke inhalation. The health implications are not clear.
See also
Copper in renewable energy
Copper nanoparticle
Erosion corrosion of copper water tubes
Cold water pitting of copper tube
List of countries by copper production
Metal theft
Operation Tremor
Anaconda Copper
Antofagasta PLC
Codelco
El Boleo mine
Grasberg mine
Copper foil
References
Notes
Further reading
Metals, Toxicity and Oxidative Stress, Current Medicinal Chemistry, Volume 12, Number 10, May 2005, pp. 1161–1208
Material: Copper (Cu), bulk, MEMS and Nanotechnology Clearinghouse.
External links
Copper at The Periodic Table of Videos (University of Nottingham)
Copper and compounds fact sheet from the National Pollutant Inventory of Australia
International Copper Association and the Copper Alliance, a business interest group
Copper.org – official website of the Copper Development Association, a North American industry association with an extensive site of properties and uses of copper
Price history of LME Copper, according to the IMF
Chemical elements
Transition metals
Dietary minerals
Electrical conductors
Cubic minerals
Crystals in space group 225
Native element minerals
Symbols of Arizona
Chemical elements with face-centered cubic structure
Coinage metals and alloys
Kirlian photography is a collection of photographic techniques used to capture the phenomenon of electrical coronal discharges. It is named after Soviet scientist Semyon Kirlian, who, in 1939, accidentally discovered that if an object on a photographic plate is connected to a high-voltage source, an image is produced on the photographic plate.
The technique has been variously known as
"electrography",
"electrophotography",
"corona discharge photography" (CDP),
"bioelectrography",
"gas discharge visualization (GDV)",
"electrophotonic imaging (EPI)", and, in Russian literature, "Kirlianography".
Kirlian photography has been the subject of scientific research, parapsychology research, and art. Paranormal claims have been made about Kirlian photography, but these claims are rejected by the scientific community. To a large extent, it has been used in alternative medicine research.
History
In 1889, a Czech experimenter coined the word "electrography". Seven years later, in 1896, a French experimenter, Hippolyte Baraduc, created electrographs of hands and leaves.
In 1898, Polish-Belarusian engineer Jakub Jodko-Narkiewicz demonstrated electrography at the fifth exhibition of the Russian Technical Society.
In 1939, two Czechs, S. Pratt and J. Schlemmer, published photographs showing a glow around leaves. The same year, Russian electrical engineer Semyon Kirlian and his wife Valentina developed Kirlian photography after observing a patient in Krasnodar Hospital who was receiving medical treatment from a high-frequency electrical generator. They had noticed that when the electrodes were brought near the patient's skin, there was a glow similar to that of a neon discharge tube.
The Kirlians conducted experiments in which photographic film was placed on top of a conducting plate, and another conductor was attached to a hand, a leaf or other plant material. The conductors were energized by a high-frequency high-voltage power source, producing photographic images typically showing a silhouette of the object surrounded by an aura of light.
In 1958, the Kirlians reported the results of their experiments for the first time. Their work was virtually unknown until 1970, when two Americans, Lynn Schroeder and Sheila Ostrander, published a book, Psychic Discoveries Behind the Iron Curtain. High-voltage electrophotography soon became known to the general public as Kirlian photography. Although little interest was generated among western scientists, Russians held a conference on the subject in 1972 at Kazakh State University.
Kirlian photography was used in the former Eastern Bloc in the 1970s. The corona discharge glow at the surface of an object subjected to a high-voltage electrical field was referred to as a "Kirlian aura" in Russia and Eastern Europe. In 1975, Soviet scientist Victor Adamenko wrote a dissertation titled Research of the structure of high-frequency electric discharge (Kirlian effect) images. Scientific study of what the researchers called the Kirlian effect was conducted by Victor Inyushin at Kazakh State University.
Early in the 1970s, Thelma Moss and Kendall Johnson at the Center for Health Sciences at UCLA conducted extensive research into Kirlian photography. Moss led an independent and unsupported parapsychology laboratory that was shut down by the university in 1979.
Overview
Kirlian photography is a technique for creating contact print photographs using high voltage. The process entails placing sheet photographic film on top of a metal discharge plate. The object to be photographed is then placed directly on top of the film. High voltage current is momentarily applied to the object, thus creating an exposure. The corona discharge between the object and the plate due to high-voltage is captured by the film. The developed film results in a Kirlian photograph of the object.
Color photographic film is calibrated to produce faithful colors when exposed to normal light. Corona discharges can interact with minute variations in the different layers of dye used in the film, resulting in a wide variety of colors depending on the local intensity of the discharge. Film and digital imaging techniques also record light produced by photons emitted during corona discharge (see Mechanism of corona discharge).
Photographs of inanimate objects such as coins, keys and leaves can be made more effectively by grounding the object to the earth, a cold water pipe or to the opposite (polarity) side of the high-voltage source. Grounding the object creates a stronger corona discharge.
Kirlian photography does not require the use of a camera or a lens because it is a contact print process. It is possible to use a transparent electrode in place of the high-voltage discharge plate, for capturing the resulting corona discharge with a standard photo or video camera.
Visual artists such as Robert Buelteman, Ted Hiebert, and Dick Lane have used Kirlian photography to produce artistic images of a variety of subjects.
Research
Kirlian photography has been a subject of scientific research, parapsychology research and pseudoscientific claims.
Scientific research
Results of scientific experiments published in 1976 involving Kirlian photography of living tissue (human finger tips) showed that most of the variations in corona discharge streamer length, density, curvature, and color can be accounted for by the moisture content on the surface of and within the living tissue.
Konstantin Korotkov developed a technique similar to Kirlian photography called "gas discharge visualization" (GDV). Korotkov's GDV camera system consists of hardware and software to directly record, process and interpret GDV images with a computer. Korotkov promotes the device and research in a medical context. Izabela Ciesielska at the Institute of Architecture of Textiles in Poland used Korotkov's GDV camera to evaluate the effects of human contact with various textiles on biological factors such as heart rate and blood pressure, as well as corona discharge images. The experiments captured corona discharge images of subjects' fingertips while the subjects wore sleeves of various natural and synthetic materials on their forearms. The results failed to establish a relationship between human contact with the textiles and the corona discharge images and were considered inconclusive.
Parapsychology research
In 1968, Thelma Moss, a psychology professor, headed University of California, Los Angeles (UCLA)'s Neuropsychiatric Institute (NPI), which was later renamed the Semel Institute. The NPI had a laboratory dedicated to parapsychology research and staffed mostly with volunteers. The lab was unfunded, unsanctioned and eventually shut down by the university. Toward the end of her tenure at UCLA, Moss became interested in Kirlian photography, a technique that supposedly measured the "auras" of a living being. According to Kerry Gaynor, one of her former research assistants, "many felt Kirlian photography's effects were just a natural occurrence."
Paranormal claims of Kirlian photography have not been observed or replicated in experiments by the scientific community. The physiologist Gordon Stein has written that Kirlian photography is a hoax that has "nothing to do with health, vitality, or mood of a subject photographed."
Claims
Kirlian believed that images created by Kirlian photography might depict a conjectural energy field, or aura, thought, by some, to surround living things. Kirlian and his wife were convinced that their images showed a life force or energy field that reflected the physical and emotional states of their living subjects. They thought that these images could be used to diagnose illnesses. In 1961, they published their first article on the subject in the Russian Journal of Scientific and Applied Photography. Kirlian's claims were embraced by energy treatments practitioners.
Torn leaf experiment
A typical demonstration used as evidence for the existence of these energy fields involved taking Kirlian photographs of a picked leaf at set intervals. The gradual withering of the leaf was thought to correspond with a decline in the strength of the aura. In some experiments, if a section of a leaf was torn away after the first photograph, a faint image of the missing section sometimes remained when a second photograph was taken. However, if the imaging surface is cleaned of contaminants and residual moisture before the second image is taken, then no image of the missing section will appear.
The living aura theory is at least partially repudiated by demonstrating that leaf moisture content has a pronounced effect on the electric discharge coronas; more moisture creates larger corona discharges. As the leaf dehydrates, the coronas will naturally decrease in variability and intensity. As a result, the changing water content of the leaf can affect the so-called Kirlian aura. Kirlian's experiments did not provide evidence for an energy field other than the electric fields produced by chemical processes and the streaming process of coronal discharges.
The coronal discharges identified as Kirlian auras are the result of stochastic electric ionization processes and are greatly affected by many factors, including the voltage and frequency of the stimulus, the pressure with which a person or object touches the imaging surface, the local humidity around the object being imaged, how well grounded the person or object is, and other local factors affecting the conductivity of the person or object being imaged. Oils, sweat, bacteria, and other ionizing contaminants found on living tissues can also affect the resulting images.
Qi
Scientists such as Beverly Rubik have explored the idea of a human biofield using Kirlian photography research, attempting to explain the Chinese discipline of Qigong. Qigong teaches that there is a vitalistic energy called qi (or chi) that permeates all living things.
Rubik's experiments relied on Konstantin Korotkov's GDV device to produce images, which were thought to visualize these qi biofields in chronically ill patients. Rubik acknowledges that the small sample size in her experiments "was too small to permit a meaningful statistical analysis". Claims that these energies can be captured by special photographic equipment are criticized by skeptics.
In popular culture
Kirlian photography has appeared as a fictional element in numerous books, films, television series, and media productions, including the 1975 film The Kirlian Force, re-released under the more sensational title Psychic Killer. Kirlian photographs have been used as visual components in various media, such as the sleeve of George Harrison's 1973 album Living in the Material World, which features Kirlian photographs of his hand holding a Hindu medallion on the front sleeve and American coins on the back, shot at Thelma Moss's UCLA parapsychology laboratory.
The artwork of David Bowie's 1997 album Earthling has reproductions of Kirlian photographs taken by Bowie. The photographs, which show a crucifix Bowie wore around his neck and the imprint of his "forefinger" tip, date to April 1975 when Bowie was living in Los Angeles and fascinated with the paranormal. The photographs were taken before consuming cocaine and 30 minutes afterwards. The after photograph apparently shows a substantial increase in the "aura" around the crucifix and forefinger.
The Cluster novels by science fiction author Piers Anthony use the concept of the Kirlian aura as a way to transfer a person's personality into another body, even an alien body, across light years. The book The Anarchistic Colossus (1977) by A. E. van Vogt involves an anarchistic society controlled by "Kirlian computers".
The opening credits during the first seven seasons of the television series The X-Files shows a Kirlian image of a left human hand. The image appears as the 11th clip in the introductory video montage and is formed by a bluish coronal discharge as the primary outline, with only the proximal phalange of the index finger shown cryptically in red. A human silhouette, in white, seemingly falls towards the hand.
The Italian electronic darkwave band Kirlian Camera was named after the device used for Kirlian photography.
British industrial band Cabaret Voltaire's first album Mix-Up features a track called Kirlian Photograph.
See also
Bioelectromagnetism
L-field
List of topics characterized as pseudoscience
Magnetic particle inspection (Magnaflux)
Thoughtography
Timeline of Russian innovation
Notes
References
Further reading
External links
Kirlian Photography and the "Aura", Dr. Rory Coker, Professor of Physics at the University of Texas at Austin
Victor J. Stenger, University of Hawaii at Manoa
Electrical breakdown
Electrical phenomena
Paranormal
Parapsychology
Photographic techniques
Photography by genre
Photography in the Soviet Union
Pseudoscience
Russian inventions
Soviet inventions
Chrysanthemum stone, sometimes called "flower stone", is a stone containing a "flower" pattern produced millions of years ago by geological movement and natural formation in the rock. The stone's pattern resembles the chrysanthemum flower. The flower is milky white and its grain is clear.
Chrysanthemum stone is generally dark-gray or black, and does not contain radioactive elements, so it has a high collection value. Although the composition of chrysanthemum stone itself is not very rare, the formation is uncommon, so the stone is listed as a gem.
Vancouver Island in British Columbia, Canada, is a well-known place to find flower stone, mainly on the east coast. It was once commercially mined on Texada Island, which is off the east coast of Vancouver Island, but there is now a moratorium on mining there, although rockhounds may still hand-pick it. Walking on the beach at low tide on either island, flower stone can be found, along with dallasite and red jasper.
The basic features
The main component of chrysanthemum stone is andalusite, so the basic composition of the rock is very similar to that of andalusite. The mineral's formula is Al2SiO5, and it typically forms an orthorhombic crystal system with columnar crystals. The cross section is close to a regular quadrilateral, and twinned crystals are rare.
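From the formula Al2SiO5, the molar mass of andalusite can be checked against standard atomic weights; the values below are rounded IUPAC weights:

```python
ATOMIC_MASS = {"Al": 26.982, "Si": 28.086, "O": 15.999}  # g/mol, rounded
ANDALUSITE = {"Al": 2, "Si": 1, "O": 5}                  # Al2SiO5

molar_mass = sum(ATOMIC_MASS[el] * n for el, n in ANDALUSITE.items())
# about 162.0 g/mol
```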
Chrysanthemum stone has a hard texture. The stone is dark-gray to black in color with naturally formed white chrysanthemum-shaped crystals. The "chrysanthemum" part of the "flower" is a collection of crystalline minerals. Each "petal" is a rhombohedral crystal form. The mineral composition varies according to the variety. Chrysanthemum stone from Liuyang City, Hunan Province is mainly calcite and chalcedony (quartz), some with celestite (a strontium mineral) and lapis lazuli.
Chrysanthemum stone carving is a unique handicraft in Liuyang City. Work is created from stone formed approximately 270 million years ago.
The physical and chemical analysis
Assays have shown that chrysanthemum stone does not contain radioactive elements, so it is harmless to the human body. The content of beneficial elements such as Fe, Zn, Ca and Se is high.
According to archaeological findings, about 200 million years ago, the home region of the chrysanthemum stone was still a vast ocean. Later, due to geological changes, Liuyang in Hunan entered a period of recession, and the seawater that accumulated in low-lying places on the surface continued to evaporate. When the concentration of strontium sulfate in the seawater increased to a certain extent, crystals formed and gradually attached to cores of flint.
Color classification
Colored chrysanthemum stone
Colored chrysanthemum stone is a natural geological flower. It is only produced in Liuyang, Hunan province. The shape of the petals is lifelike and rich in layers; the texture is hard and fine, and the jade-like stone is crystal clear, like an autumn chrysanthemum in full bloom against the frost. It is rich in color, tough and fine in texture, and contains strontium, selenium and other trace elements said to be beneficial to the human body, which highlights the rarity of chrysanthemum stone.
Brown chrysanthemum stone
The brown chrysanthemum stone is mainly produced in Liuyang, Hunan province. It is light gray-brown and off-white, and its matrix color is either brown, light gray-black or light gray-brown. Generally, it needs coloring. Brown river stone with a smaller flower shape is harder, has fewer flower layers, and is easily polished. Brown river stone with a larger flower shape has moderate hardness and is easy to sculpt.
Black chrysanthemum stone
The black chrysanthemum stone is mainly produced in Xuan'en, Hubei province. Hubei chrysanthemum stone flowers take the shapes of irises, claw flowers, and cylinders. The cylindrical flower-shaped stamens are obvious and three-dimensional, shaped into a rod with a certain bending. The iris-shaped ones are more common; they are not conspicuous. Petals generally range from 10 to 40, their size is not uniform, and branching is compound, with interpenetration and other phenomena present.
Authenticity identification
Intact chrysanthemum stone from naturally exposed environments has become effectively extinct; in general, the stone is fragile and easily damaged after being altered by chemical methods. Using chemical or physical methods to consolidate loose stone material will only strengthen the surface.
The crystal in the heart of the chrysanthemum stone is its soul.
In addition, it can also be judged from the chrysanthemum stone carving as a whole. The crystal in the center of the real chrysanthemum stone is basically the same color as the petal, and the petal is radially diffused around. And theoretically, a complete three-dimensional chrysanthemum can be obtained by grinding along the central axis of the petals in any direction. Some chrysanthemum stone carvings are combined with real chrysanthemum stone flowers through ordinary stone bodies. The value of this chrysanthemum stone is much lower than that of the real chrysanthemum stone, and such a finished product is also easy to find and identify with the naked eye.
The difference with peony stone
People often confuse chrysanthemum stone with peony stone. Peony stone is also found in the Luoyang area and is also a kind of natural stone; the material is black with white or green flowers. The stone's distribution is like a peony in full bloom: peony flower petals are fuller and more even in size, different from the strip-shaped petals of chrysanthemum stone.
Chrysanthemum stone and peony stone are both known as strange stones. Peony stone, like chrysanthemum stone, is a natural mineral and cannot be regenerated, so it also has a high collection value. Worldwide, peony stone is likewise recognized as rare, with collection significance and ornamental value. Peony stone originated in Luoyang, China, and its composition belongs to neutral salt rock.
Although chrysanthemum stone is as rare as peony stone and the two are often regarded as the same material, they are completely different. First of all, their compositions differ: peony stone is said to be more delicate, with more obvious color. However, the flower pattern of chrysanthemum stone is more three-dimensional and lifelike, so some collectors are more inclined to collect chrysanthemum stone. In addition, when ground, chrysanthemum stone takes shape faster and is not easily destroyed.
The meaning of culture
It is said that in ancient times, there was a pair of immortals in heaven who fell in love with each other. They sprinkled chrysanthemums which fell in the Liuyang river and, over time, turned into today's chrysanthemum stone. There is another saying that a pair of lovers fell in love; one of them turned into a stone, the other into a chrysanthemum. They loved each other and did not wish to part even in death, so they finally became today's chrysanthemum stone.
As Hunan's golden card, chrysanthemum stone carving technology came into being in 1740 and has a history of about 270 years. Because chrysanthemum stone is a non-renewable resource, and Liuyang is the only concentrated origin in the world, it has the title of "the first stone in the world". In 2008, Liuyang chrysanthemum stone carving technology, with its exquisite craftsmanship, ingenious conception and unique natural character, became part of the second batch of national intangible cultural heritage projects.
The value and significance of chrysanthemum stone collection lies in its absolute naturalness.
Historical development
The earliest chrysanthemum stone found in China was from the underlying rocks of the Liuyang river. According to the records in the Liuyang county annals, during the reign of the Qianlong Emperor of the Qing dynasty, a man called Ouxifan accidentally found chrysanthemum stone.
Chrysanthemum stone is collected and exhibited in the state guesthouse, the China Art Museum, the Hunan Art Museum, and elsewhere. Chairman Mao, the revolutionary martyr Tan Sitong and others also used and favored chrysanthemum stone objects; these items are now displayed in memorial halls.
In 1915, at the Panama World Expo, the exhibition of chrysanthemum stone carvings surprised the world; the "stones that can bloom" won a gold award for rare treasures and have been preserved in the United Nations Museum. In 1959, for the tenth anniversary of the founding of the People's Republic of China, the people of Liuyang presented a huge three-dimensional sculpture, "Shi Jusen Mountain", to the Great Hall of the People in Beijing for viewing by the people of all ethnic groups. From 1997 to 1999, as the whole country rejoiced in celebrating the return of Hong Kong and Macao, the people of Liuyang specially created two commemorative chrysanthemum stone carvings dedicated to the Hong Kong SAR and Macao SAR governments.
Because the formation of chrysanthemum stone requires specific physical and chemical conditions, and time, the number of chrysanthemum stone is very small in nature and rare in the world, so the related industry of chrysanthemum stone belongs to a typical resource-constrained industry.
References
Stones
David G. Cory is a Professor of Chemistry at the University of Waterloo where he holds the Canada Excellence Research Chair in Quantum Information Processing. He works at the Institute for Quantum Computing, and is also associated with the Waterloo Institute for Nanotechnology.
Education and career
Cory was educated at Case Western Reserve University, earning a bachelor's degree there in 1981 and a Ph.D. in chemistry in 1987. He carried out postdoctoral research at Radboud University Nijmegen in the Netherlands and at Naval Research Laboratory in Washington, D.C. He was a Professor of Nuclear Engineering at Massachusetts Institute of Technology prior to his 2010 appointment at Waterloo. At MIT, he worked on NMR, including his work on NMR quantum computation. Together with Amr Fahmy and Timothy Havel he developed the concept of pseudo-pure states and performed the first experimental demonstrations of NMR quantum computing.
Cory's research also concerns the realization and application of quantum control in various physical systems and devices. In 2015, he and teams from University of Waterloo, National Institute of Standards and Technology and Boston University demonstrated the generation and control of orbital angular momentum of neutron beams using a fork-dislocation grating, extending the existing work in optical and electron beams to neutrons. They subsequently demonstrated the control of both the spin and orbital angular momentum degrees of freedom of neutron beams.
See also
NMR quantum computer
Randomized benchmarking
List of University of Waterloo people
References
External links
Living people
Year of birth missing (living people)
Case Western Reserve University alumni
MIT School of Engineering faculty
Academic staff of the University of Waterloo
21st-century American chemists
Quantum physicists
Canadian chemical engineers
21st-century chemists
Quantum information scientists
Canadian physicists
Physical chemists
Fellows of the American Physical Society | David G. Cory | Physics,Chemistry | 356 |
55,538,688 | https://en.wikipedia.org/wiki/MRI%20pulse%20sequence | An MRI pulse sequence in magnetic resonance imaging (MRI) is a particular setting of pulse sequences and pulsed field gradients, resulting in a particular image appearance.
A multiparametric MRI is a combination of two or more sequences, and/or including other specialized MRI configurations such as spectroscopy.
Spin echo
T1 and T2
Each tissue returns to its equilibrium state after excitation by the independent relaxation processes of T1 (spin-lattice; that is, magnetization in the same direction as the static magnetic field) and T2 (spin-spin; transverse to the static magnetic field).
To create a T1-weighted image, magnetization is allowed to recover before measuring the MR signal by changing the repetition time (TR). This image weighting is useful for assessing the cerebral cortex, identifying fatty tissue, characterizing focal liver lesions, and in general, obtaining morphological information, as well as for post-contrast imaging.
To create a T2-weighted image, magnetization is allowed to decay before measuring the MR signal by changing the echo time (TE). This image weighting is useful for detecting edema and inflammation, revealing white matter lesions, and assessing zonal anatomy in the prostate and uterus.
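The effect of TR and TE on tissue contrast can be sketched with the idealized spin-echo signal equation, S = PD × (1 − e^(−TR/T1)) × e^(−TE/T2). The relaxation times and proton densities below are rough approximations of 1.5 T textbook values, used only for illustration:

```python
import math

def spin_echo_signal(pd, t1, t2, tr, te):
    """Idealized spin-echo signal: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    return pd * (1.0 - math.exp(-tr / t1)) * math.exp(-te / t2)

# Approximate relaxation times (ms) at ~1.5 T -- illustrative assumptions only.
white_matter = dict(pd=0.7, t1=600.0, t2=80.0)
csf = dict(pd=1.0, t1=4000.0, t2=2000.0)

# Short TR and short TE -> T1 weighting: contrast driven by T1 recovery,
# so white matter (short T1) is brighter than CSF (long T1).
t1w_wm = spin_echo_signal(tr=500, te=15, **white_matter)
t1w_csf = spin_echo_signal(tr=500, te=15, **csf)

# Long TR and long TE -> T2 weighting: contrast driven by T2 decay,
# so CSF (long T2) is brighter than white matter (short T2).
t2w_wm = spin_echo_signal(tr=4000, te=100, **white_matter)
t2w_csf = spin_echo_signal(tr=4000, te=100, **csf)
```

Running the sketch reproduces the qualitative behaviour described above: white matter outshines CSF with short TR/TE, and the ordering reverses with long TR/TE.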
The standard display of MRI represents fluid characteristics in black-and-white images, in which different tissues show characteristic intensities: on T1-weighted images, fluid such as cerebrospinal fluid appears dark and fat appears bright, while on T2-weighted images fluid appears bright.
Proton density
Proton density (PD)-weighted images are created by having a long repetition time (TR) and a short echo time (TE). On images of the brain, this sequence has a more pronounced distinction between grey matter (bright) and white matter (darker grey), but with little contrast between brain and CSF. It is very useful for the detection of arthropathy and injury.
Gradient echo
A gradient echo sequence does not use a 180-degree RF pulse to make the spins of particles coherent. Instead, it uses magnetic field gradients to manipulate the spins, allowing them to dephase and rephase when required. After an excitation pulse, the spins are dephased and no signal is produced, because the spins are not coherent. When the spins are rephased, they become coherent, and a signal (or "echo") is generated to form images. Unlike spin echo, gradient echo does not need to wait for the transverse magnetisation to decay completely before initiating another sequence, so it allows very short repetition times (TR) and therefore fast image acquisition. After the echo is formed, some transverse magnetisation remains; manipulating gradients during this time produces images with different contrast. There are three main methods of manipulating contrast at this stage: steady-state free precession (SSFP), which does not spoil the remaining transverse magnetisation but attempts to recover it (producing T2-weighted images); a spoiler gradient, which averages out the transverse magnetisation (producing mixed T1- and T2-weighted images); and RF spoiling, which varies the phase of the RF pulse to eliminate the transverse magnetisation, producing purely T1-weighted images.
For comparison purposes, the repetition time of a gradient echo sequence is of the order of 3 milliseconds, versus about 30 ms of a spin echo sequence.
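With such short repetition times, the excitation flip angle that maximizes the steady-state spoiled gradient-echo signal is given by the Ernst angle, cos θ_E = e^(−TR/T1). A minimal sketch, using an assumed white-matter-like T1 of 600 ms:

```python
import math

def ernst_angle_deg(tr_ms, t1_ms):
    """Flip angle (degrees) maximizing spoiled gradient-echo signal: cos(theta) = exp(-TR/T1)."""
    return math.degrees(math.acos(math.exp(-tr_ms / t1_ms)))

def spgr_signal(flip_deg, tr_ms, t1_ms):
    """Relative steady-state spoiled gradient-echo signal (TE/T2* decay omitted)."""
    a = math.radians(flip_deg)
    e1 = math.exp(-tr_ms / t1_ms)
    return math.sin(a) * (1.0 - e1) / (1.0 - e1 * math.cos(a))

# TR = 5 ms, T1 = 600 ms (assumed values) -> a small optimal flip angle of a few degrees
theta = ernst_angle_deg(5.0, 600.0)
```

The signal at the Ernst angle exceeds the signal at nearby flip angles, which is why short-TR gradient-echo protocols use small flip angles rather than the 90-degree excitation typical of spin echo.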
Inversion recovery
Inversion recovery is an MRI sequence that provides high contrast between tissue and lesion. It can be used to provide high T1 weighted image, high T2 weighted image, and to suppress the signals from fat, blood, or cerebrospinal fluid (CSF).
Diffusion weighted
Diffusion MRI measures the diffusion of water molecules in biological tissues. Clinically, diffusion MRI is useful for the diagnoses of conditions (e.g., stroke) or neurological disorders (e.g., multiple sclerosis), and helps better understand the connectivity of white matter axons in the central nervous system. In an isotropic medium (inside a glass of water for example), water molecules naturally move randomly according to turbulence and Brownian motion. In biological tissues however, where the Reynolds number is low enough for laminar flow, the diffusion may be anisotropic. For example, a molecule inside the axon of a neuron has a low probability of crossing the myelin membrane. Therefore, the molecule moves principally along the axis of the neural fiber. If it is known that molecules in a particular voxel diffuse principally in one direction, the assumption can be made that the majority of the fibers in this area are parallel to that direction.
The recent development of diffusion tensor imaging (DTI) enables diffusion to be measured in multiple directions, and the fractional anisotropy in each direction to be calculated for each voxel. This enables researchers to make brain maps of fiber directions to examine the connectivity of different regions in the brain (using tractography) or to examine areas of neural degeneration and demyelination in diseases like multiple sclerosis.
Another application of diffusion MRI is diffusion-weighted imaging (DWI). Following an ischemic stroke, DWI is highly sensitive to the changes occurring in the lesion. It is speculated that increases in restriction (barriers) to water diffusion, as a result of cytotoxic edema (cellular swelling), is responsible for the increase in signal on a DWI scan. The DWI enhancement appears within 5–10 minutes of the onset of stroke symptoms (as compared to computed tomography, which often does not detect changes of acute infarct for up to 4–6 hours) and remains for up to two weeks. Coupled with imaging of cerebral perfusion, researchers can highlight regions of "perfusion/diffusion mismatch" that may indicate regions capable of salvage by reperfusion therapy.
Like many other specialized applications, this technique is usually coupled with a fast image acquisition sequence, such as echo planar imaging sequence.
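The degree of diffusion weighting is summarized by the b-value, and in the simplest monoexponential model the signal decays as S(b) = S0 × e^(−b·ADC), from which the apparent diffusion coefficient (ADC) can be estimated from two acquisitions. The signal values below are hypothetical:

```python
import math

def adc_from_two_b(s0, s_b, b):
    """Apparent diffusion coefficient from the monoexponential model S(b) = S0 * exp(-b * ADC)."""
    return math.log(s0 / s_b) / b

# Hypothetical signals at b = 0 and b = 1000 s/mm^2 in normal-appearing brain tissue.
s0, b = 1000.0, 1000.0
s_b = s0 * math.exp(-b * 0.8e-3)   # simulate a tissue with ADC = 0.8e-3 mm^2/s
adc = adc_from_two_b(s0, s_b, b)   # recovers the simulated ADC
```

In acute ischemic stroke, restricted diffusion lowers the ADC, so the lesion appears bright on the diffusion-weighted image and dark on the ADC map.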
Perfusion weighted
Perfusion-weighted imaging (PWI) is performed by 3 main techniques:
Dynamic susceptibility contrast (DSC): Gadolinium contrast is injected, and rapid repeated imaging (generally gradient-echo echo-planar T2 weighted) quantifies susceptibility-induced signal loss.
Dynamic contrast enhanced (DCE): Measuring shortening of the spin–lattice relaxation (T1) induced by a gadolinium contrast bolus.
Arterial spin labelling (ASL): Magnetic labeling of arterial blood below the imaging slab, without the need of gadolinium contrast.
The acquired data is then postprocessed to obtain perfusion maps with different parameters, such as BV (blood volume), BF (blood flow), MTT (mean transit time) and TTP (time to peak).
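These parameters are not independent: the central volume principle relates them as MTT = BV / BF. A minimal sketch, using assumed typical grey-matter values (roughly CBV ≈ 4 mL/100 g and CBF ≈ 50 mL/100 g/min):

```python
def mean_transit_time_s(cbv_ml_per_100g, cbf_ml_per_100g_per_min):
    """Central volume principle: MTT = CBV / CBF, converted from minutes to seconds."""
    return cbv_ml_per_100g / cbf_ml_per_100g_per_min * 60.0

# Assumed typical grey-matter values -> MTT of a few seconds
mtt = mean_transit_time_s(4.0, 50.0)
```

A prolonged MTT with preserved blood volume is one pattern used to flag hypoperfused but potentially salvageable tissue.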
In cerebral infarction, the penumbra has decreased perfusion. Another MRI sequence, diffusion-weighted MRI, estimates the amount of tissue that is already necrotic, and the combination of those sequences can therefore be used to estimate the amount of brain tissue that is salvageable by thrombolysis and/or thrombectomy.
Functional MRI
Functional MRI (fMRI) measures signal changes in the brain that are due to changing neural activity. It is used to understand how different parts of the brain respond to external stimuli or passive activity in a resting state, and has applications in behavioral and cognitive research, and in planning neurosurgery of eloquent brain areas. Researchers use statistical methods to construct a 3-D parametric map of the brain indicating the regions of the cortex that demonstrate a significant change in activity in response to the task. Compared to anatomical T1W imaging, the brain is scanned at lower spatial resolution but at a higher temporal resolution (typically once every 2–3 seconds). Increases in neural activity cause changes in the MR signal via T2* changes; this mechanism is referred to as the BOLD (blood-oxygen-level dependent) effect. Increased neural activity causes an increased demand for oxygen, and the vascular system actually overcompensates for this, increasing the amount of oxygenated hemoglobin relative to deoxygenated hemoglobin. Because deoxygenated hemoglobin attenuates the MR signal, the vascular response leads to a signal increase that is related to the neural activity. The precise nature of the relationship between neural activity and the BOLD signal is a subject of current research. The BOLD effect also allows for the generation of high resolution 3D maps of the venous vasculature within neural tissue.
While BOLD signal analysis is the most common method employed for neuroscience studies in human subjects, the flexible nature of MR imaging provides means to sensitize the signal to other aspects of the blood supply. Alternative techniques employ arterial spin labeling (ASL) or weighting the MRI signal by cerebral blood flow (CBF) and cerebral blood volume (CBV). The CBV method requires injection of a class of MRI contrast agents that are now in human clinical trials. Because this method has been shown to be far more sensitive than the BOLD technique in preclinical studies, it may potentially expand the role of fMRI in clinical applications. The CBF method provides more quantitative information than the BOLD signal, albeit at a significant loss of detection sensitivity.
Magnetic resonance angiography
Magnetic resonance angiography (MRA) is a group of techniques used to image blood vessels. Magnetic resonance angiography is used to generate images of arteries (and less commonly veins) in order to evaluate them for stenosis (abnormal narrowing), occlusions, aneurysms (vessel wall dilatations, at risk of rupture) or other abnormalities. MRA is often used to evaluate the arteries of the neck and brain, the thoracic and abdominal aorta, the renal arteries, and the legs (the latter exam is often referred to as a "run-off").
Phase contrast
Phase contrast MRI (PC-MRI) is used to measure flow velocities in the body. It is used mainly to measure blood flow in the heart and throughout the body. PC-MRI may be considered a method of magnetic resonance velocimetry. Since modern PC-MRI typically is time-resolved, it also may be referred to as 4-D imaging (three spatial dimensions plus time).
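In phase-contrast imaging, velocity along the flow-encoding direction maps linearly onto the voxel phase, with the operator-chosen velocity-encoding parameter (VENC) corresponding to a phase shift of ±π. A minimal sketch, with hypothetical phase and VENC values:

```python
import math

def pc_velocity(delta_phase_rad, venc_cm_s):
    """Velocity from phase-contrast MRI: v = VENC * delta_phi / pi (phase aliasing ignored)."""
    return venc_cm_s * delta_phase_rad / math.pi

# A phase shift of pi/2 with VENC = 150 cm/s (assumed) maps to half the encoding range.
v = pc_velocity(math.pi / 2, 150.0)
```

If true velocities exceed the VENC, the phase wraps past ±π and aliases, which is why the VENC must be chosen above the fastest expected flow.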
Susceptibility weighted imaging
Susceptibility-weighted imaging (SWI) is a new type of contrast in MRI different from spin density, T1, or T2 imaging. This method exploits the susceptibility differences between tissues and uses a fully velocity-compensated, three-dimensional, RF-spoiled, high-resolution, 3D-gradient echo scan. This special data acquisition and image processing produces an enhanced contrast magnitude image very sensitive to venous blood, hemorrhage and iron storage. It is used to enhance the detection and diagnosis of tumors, vascular and neurovascular diseases (stroke and hemorrhage), multiple sclerosis, Alzheimer's, and also detects traumatic brain injuries that may not be diagnosed using other methods.
Magnetization transfer
Magnetization transfer (MT) is a technique to enhance image contrast in certain applications of MRI.
Bound protons are associated with proteins and as they have a very short T2 decay they do not normally contribute to image contrast. However, because these protons have a broad resonance peak they can be excited by a radiofrequency pulse that has no effect on free protons. Their excitation increases image contrast by transfer of saturated spins from the bound pool into the free pool, thereby reducing the signal of free water. This homonuclear magnetization transfer provides an indirect measurement of macromolecular content in tissue. Implementation of homonuclear magnetization transfer involves choosing suitable frequency offsets and pulse shapes to saturate the bound spins sufficiently strongly, within the safety limits of specific absorption rate for MRI.
The most common use of this technique is for suppression of background signal in time of flight MR angiography. There are also applications in neuroimaging particularly in the characterization of white matter lesions in multiple sclerosis.
Fat suppression
Fat suppression is useful for example to distinguish active inflammation in the intestines from fat deposition such as can be caused by long-standing (but possibly inactive) inflammatory bowel disease, but also obesity, chemotherapy and celiac disease. Without fat suppression techniques, fat and fluid will have similar signal intensities on fast spin-echo sequences.
Techniques to suppress fat on MRI mainly include:
Identifying fat by the chemical shift of its atoms, causing different time-dependent phase shifts compared to water.
Frequency-selective saturation of the spectral peak of fat by a "fat sat" pulse before imaging.
Short tau inversion recovery (STIR), a T1-dependent method
Spectral presaturation with inversion recovery (SPIR)
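The chemical-shift approach (the first item above) underlies the two-point Dixon method: one echo is acquired with water and fat signals in phase and another with them opposed, and the two images are combined to separate the species. A minimal sketch under ideal field homogeneity, with a hypothetical pixel value:

```python
def dixon_two_point(in_phase, out_phase):
    """Two-point Dixon water/fat separation: W = (IP + OP)/2, F = (IP - OP)/2."""
    water = (in_phase + out_phase) / 2.0
    fat = (in_phase - out_phase) / 2.0
    return water, fat

# Hypothetical pixel containing 70% water and 30% fat:
# in-phase signal = W + F = 1.0, opposed-phase signal = W - F = 0.4
water, fat = dixon_two_point(in_phase=1.0, out_phase=0.4)
```

In practice field inhomogeneity perturbs the phases, which is why clinical implementations use three-point or multi-echo variants with field-map correction.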
Neuromelanin imaging
This method exploits the paramagnetic properties of neuromelanin and can be used to visualize the substantia nigra and the locus coeruleus. It is used to detect the atrophy of these nuclei in Parkinson's disease and other parkinsonisms, and also detects signal intensity changes in major depressive disorder and schizophrenia.
Uncommon and experimental sequences
The following sequences are not commonly used clinically, and/or are at an experimental stage.
T1 rho (T1ρ)
T1 rho (T1ρ) is an experimental MRI sequence that may be used in musculoskeletal imaging. It does not yet have widespread use.
Molecules have a kinetic energy that is a function of the temperature and is expressed as translational and rotational motions, and by collisions between molecules. The moving dipoles disturb the magnetic field but are often extremely rapid so that the average effect over a long time-scale may be zero. However, depending on the time-scale, the interactions between the dipoles do not always average away. At the slowest extreme the interaction time is effectively infinite and occurs where there are large, stationary field disturbances (e.g., a metallic implant). In this case the loss of coherence is described as a "static dephasing". T2* is a measure of the loss of coherence in an ensemble of spins that includes all interactions (including static dephasing). T2 is a measure of the loss of coherence that excludes static dephasing, using an RF pulse to reverse the slowest types of dipolar interaction. There is in fact a continuum of interaction time-scales in a given biological sample, and the properties of the refocusing RF pulse can be tuned to refocus more than just static dephasing. In general, the rate of decay of an ensemble of spins is a function of the interaction times and also the power of the RF pulse. This type of decay, occurring under the influence of RF, is known as T1ρ. It is similar to T2 decay but with some slower dipolar interactions refocused, as well as static interactions, hence T1ρ≥T2.
Others
Saturation recovery sequences are rarely used, but can measure spin-lattice relaxation time (T1) more quickly than an inversion recovery pulse sequence.
Double-oscillating-diffusion-encoding (DODE) and double diffusion encoding (DDE) imaging are specific forms of MRI diffusion imaging, which can be used to measure diameters and lengths of axon pores.
References
Magnetic resonance imaging
Nuclear magnetic resonance | MRI pulse sequence | Physics,Chemistry | 3,160 |
12,049,745 | https://en.wikipedia.org/wiki/Mater%20Dei%20Hospital | Mater Dei Hospital (MDH; ), also known simply as Mater Dei, is an acute general and teaching hospital in Msida, Malta. It was opened in 2007, replacing St. Luke's Hospital. It is a public hospital affiliated to the University of Malta, offering general and specialist services.
History
The hospital opened on 29 June 2007, replacing St. Luke's Hospital as the main public general hospital. The 250,000 square metre complex includes 825 beds and 25 operating theaters. It was built by Skanska Malta JV, a subsidiary of the Swedish construction firm Skanska. The project was planned to cost Lm 50,000,000 (around €116,000,000), but the cost skyrocketed to more than Lm 250,000,000 (around €582,000,000).
Skanska was entrusted with the building of a new general hospital in Malta, and the "state-of-the-art" Mater Dei Hospital cost over €700,000,000. Later, however, it was discovered that Skanska had used lower-quality cement of the kind generally used to build pavements. As a result, additional floors could not be built on the hospital, nor a helipad on the roof.
University of Malta affiliation
The hospital is located adjacent to the University of Malta, and contains the faculties of Health Sciences, Medicine and Surgery, and Dental Surgery in a purpose built Medical School wing. Prior to the COVID-19 pandemic, the hospital used to house the Health Sciences Library, which is a branch library of the University of Malta Library.
Sir Anthony Mamo Oncology Centre
The Sir Anthony Mamo Oncology Centre welcomed its first 50 outpatients on 22 December 2014. Excavation began in 2010 and construction in 2012. The centre is named after Sir Anthony Mamo, the first President of Malta. It cost €52 million, and an estimated €8 million a year is required to run it. The centre offers more advanced radiotherapy with two machines commissioned from the Leeds Spencer Centre, where they were introduced in 2013. The machines enable more precise radiotherapy and stronger doses, reducing the length and frequency of sessions. The Maltese Government has also considered expanding radiotherapy services to include autologous transplants, as well as developing a clinical trials unit through which Maltese patients would be able to benefit from new medicines not yet on the market. Beds at the new centre increased from the 78 at Boffa Hospital to 113, and the outpatient clinics from two to 12. The type of chemotherapy provided is more advanced, and palliative care beds were increased from the 10 at Boffa to 16. A new MRI machine will help reduce waiting lists. Patients and their families are followed before, during and after treatment, and more training is provided for staff. A total of 47 new professionals had been recruited by its opening day.
Further reading
See also
List of hospitals in Malta
References
Hospital buildings completed in 2007
Hospitals in Malta
Hospitals established in 2007
Msida
Architectural controversies
Controversies in Malta
21st-century controversies
2007 establishments in Malta | Mater Dei Hospital | Engineering | 637 |
30,001 | https://en.wikipedia.org/wiki/Theory%20of%20relativity | The theory of relativity usually encompasses two interrelated physics theories by Albert Einstein: special relativity and general relativity, proposed and published in 1905 and 1915, respectively. Special relativity applies to all physical phenomena in the absence of gravity. General relativity explains the law of gravitation and its relation to the forces of nature. It applies to the cosmological and astrophysical realm, including astronomy.
The theory transformed theoretical physics and astronomy during the 20th century, superseding a 200-year-old theory of mechanics created primarily by Isaac Newton. It introduced concepts including 4-dimensional spacetime as a unified entity of space and time, relativity of simultaneity, kinematic and gravitational time dilation, and length contraction. In the field of physics, relativity improved the science of elementary particles and their fundamental interactions, along with ushering in the nuclear age. With relativity, cosmology and astrophysics predicted extraordinary astronomical phenomena such as neutron stars, black holes, and gravitational waves.
Development and acceptance
Albert Einstein published the theory of special relativity in 1905, building on many theoretical results and empirical findings obtained by Albert A. Michelson, Hendrik Lorentz, Henri Poincaré and others. Max Planck, Hermann Minkowski and others did subsequent work.
Einstein developed general relativity between 1907 and 1915, with contributions by many others after 1915. The final form of general relativity was published in 1916.
The term "theory of relativity" was based on the expression "relative theory" used in 1906 by Planck, who emphasized how the theory uses the principle of relativity. In the discussion section of the same paper, Alfred Bucherer used the expression "theory of relativity" for the first time.
By the 1920s, the physics community understood and accepted special relativity. It rapidly became a significant and necessary tool for theorists and experimentalists in the new fields of atomic physics, nuclear physics, and quantum mechanics.
By comparison, general relativity did not appear to be as useful, beyond making minor corrections to predictions of Newtonian gravitation theory. It seemed to offer little potential for experimental test, as most of its assertions were on an astronomical scale. Its mathematics seemed difficult and fully understandable only by a small number of people. Around 1960, general relativity became central to physics and astronomy. New mathematical techniques to apply to general relativity streamlined calculations and made its concepts more easily visualized. As astronomical phenomena were discovered, such as quasars (1963), the 3-kelvin microwave background radiation (1965), pulsars (1967), and the first black hole candidates (1981), the theory explained their attributes, and measurement of them further confirmed the theory.
Special relativity
Special relativity is a theory of the structure of spacetime. It was introduced in Einstein's 1905 paper "On the Electrodynamics of Moving Bodies" (for the contributions of many other physicists and mathematicians, see History of special relativity). Special relativity is based on two postulates which are contradictory in classical mechanics:
The laws of physics are the same for all observers in any inertial frame of reference relative to one another (principle of relativity).
The speed of light in vacuum is the same for all observers, regardless of their relative motion or of the motion of the light source.
The resultant theory copes with experiment better than classical mechanics. For instance, postulate 2 explains the results of the Michelson–Morley experiment. Moreover, the theory has many surprising and counterintuitive consequences. Some of these are:
Relativity of simultaneity: Two events, simultaneous for one observer, may not be simultaneous for another observer if the observers are in relative motion.
Time dilation: Moving clocks are measured to tick more slowly than an observer's "stationary" clock.
Length contraction: Objects are measured to be shortened in the direction that they are moving with respect to the observer.
Maximum speed is finite: No physical object, message or field line can travel faster than the speed of light in vacuum.
The effect of gravity can only travel through space at the speed of light, not faster or instantaneously.
Mass–energy equivalence: E = mc², energy and mass are equivalent and transmutable.
Relativistic mass, an idea used by some researchers.
The defining feature of special relativity is the replacement of the Galilean transformations of classical mechanics by the Lorentz transformations. (See Maxwell's equations of electromagnetism.)
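The consequences listed above all follow from the Lorentz factor, γ = 1/√(1 − v²/c²). A small illustrative sketch (the speeds are chosen arbitrarily):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def gamma(v):
    """Lorentz factor for speed v."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def dilated_time(proper_time, v):
    """Time dilation: a moving clock's interval is stretched by gamma."""
    return proper_time * gamma(v)

def contracted_length(proper_length, v):
    """Length contraction: a moving object is shortened by gamma along its motion."""
    return proper_length / gamma(v)

def add_velocities(u, v):
    """Relativistic velocity addition: the result never exceeds c."""
    return (u + v) / (1.0 + u * v / C ** 2)

g = gamma(0.8 * C)                    # exactly 5/3 at 80% of light speed
w = add_velocities(0.8 * C, 0.8 * C)  # stays below c, unlike the Galilean 1.6c
```

At 80% of light speed, one second of proper time stretches to 5/3 seconds and a 10-metre rod contracts to 6 metres, illustrating why no composition of sub-light velocities can reach c.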
General relativity
General relativity is a theory of gravitation developed by Einstein in the years 1907–1915. The development of general relativity began with the equivalence principle, under which the states of accelerated motion and being at rest in a gravitational field (for example, when standing on the surface of the Earth) are physically identical. The upshot of this is that free fall is inertial motion: an object in free fall is falling because that is how objects move when there is no force being exerted on them, instead of this being due to the force of gravity as is the case in classical mechanics. This is incompatible with classical mechanics and special relativity because in those theories inertially moving objects cannot accelerate with respect to each other, but objects in free fall do so. To resolve this difficulty Einstein first proposed that spacetime is curved. Einstein discussed his idea with mathematician Marcel Grossmann and they concluded that general relativity could be formulated in the context of Riemannian geometry which had been developed in the 1800s.
In 1915, he devised the Einstein field equations which relate the curvature of spacetime with the mass, energy, and any momentum within it.
Some of the consequences of general relativity are:
Gravitational time dilation: Clocks run slower in deeper gravitational wells.
Precession: Orbits precess in a way unexpected in Newton's theory of gravity. (This has been observed in the orbit of Mercury and in binary pulsars).
Light deflection: Rays of light bend in the presence of a gravitational field.
Frame-dragging: Rotating masses "drag along" the spacetime around them.
Expansion of the universe: The universe is expanding, and certain components within the universe can accelerate the expansion.
Technically, general relativity is a theory of gravitation whose defining feature is its use of the Einstein field equations. The solutions of the field equations are metric tensors which define the topology of the spacetime and how objects move inertially.
Experimental evidence
Einstein stated that the theory of relativity belongs to a class of "principle-theories". As such, it employs an analytic method, which means that the elements of this theory are not based on hypothesis but on empirical discovery. By observing natural processes, we understand their general characteristics, devise mathematical models to describe what we observed, and by analytical means we deduce the necessary conditions that have to be satisfied. Measurement of separate events must satisfy these conditions and match the theory's conclusions.
Tests of special relativity
Relativity is a falsifiable theory: It makes predictions that can be tested by experiment. In the case of special relativity, these include the principle of relativity, the constancy of the speed of light, and time dilation. The predictions of special relativity have been confirmed in numerous tests since Einstein published his paper in 1905, but three experiments conducted between 1881 and 1938 were critical to its validation. These are the Michelson–Morley experiment, the Kennedy–Thorndike experiment, and the Ives–Stilwell experiment. Einstein derived the Lorentz transformations from first principles in 1905, but these three experiments allow the transformations to be induced from experimental evidence.
Maxwell's equations—the foundation of classical electromagnetism—describe light as a wave that moves with a characteristic velocity. The modern view is that light needs no medium of transmission, but Maxwell and his contemporaries were convinced that light waves were propagated in a medium, analogous to sound propagating in air, and ripples propagating on the surface of a pond. This hypothetical medium was called the luminiferous aether, at rest relative to the "fixed stars" and through which the Earth moves. Fresnel's partial ether dragging hypothesis ruled out the measurement of first-order (v/c) effects, and although observations of second-order effects (v2/c2) were possible in principle, Maxwell thought they were too small to be detected with then-current technology.
The Michelson–Morley experiment was designed to detect second-order effects of the "aether wind"—the motion of the aether relative to the Earth. Michelson designed an instrument called the Michelson interferometer to accomplish this. The apparatus was sufficiently accurate to detect the expected effects, but he obtained a null result when the first experiment was conducted in 1881, and again in 1887. Although the failure to detect an aether wind was a disappointment, the results were accepted by the scientific community. In an attempt to salvage the aether paradigm, FitzGerald and Lorentz independently created an ad hoc hypothesis in which the length of material bodies changes according to their motion through the aether. This was the origin of FitzGerald–Lorentz contraction, though their hypothesis had no theoretical basis. The interpretation of the null result of the Michelson–Morley experiment is that the round-trip travel time for light is isotropic (independent of direction), but the result alone is not enough to discount the theory of the aether or validate the predictions of special relativity.
While the Michelson–Morley experiment showed that the velocity of light is isotropic, it said nothing about how the magnitude of the velocity changed (if at all) in different inertial frames. The Kennedy–Thorndike experiment was designed to do that, and was first performed in 1932 by Roy Kennedy and Edward Thorndike. They obtained a null result, and concluded that "there is no effect ... unless the velocity of the solar system in space is no more than about half that of the earth in its orbit". That possibility was thought to be too coincidental to provide an acceptable explanation, so from the null result of their experiment it was concluded that the round-trip time for light is the same in all inertial reference frames.
The Ives–Stilwell experiment was carried out by Herbert Ives and G.R. Stilwell first in 1938 and with better accuracy in 1941. It was designed to test the transverse Doppler effect, the redshift of light from a moving source in a direction perpendicular to its velocity, which had been predicted by Einstein in 1905. The strategy was to compare observed Doppler shifts with what was predicted by classical theory, and look for a Lorentz factor correction. Such a correction was observed, from which it was concluded that the frequency of a moving atomic clock is altered according to special relativity.
Those classic experiments have been repeated many times with increased precision. Other experiments include, for instance, relativistic energy and momentum increase at high velocities, experimental testing of time dilation, and modern searches for Lorentz violations.
Tests of general relativity
General relativity has also been confirmed many times, the classic experiments being the perihelion precession of Mercury's orbit, the deflection of light by the Sun, and the gravitational redshift of light. Other tests confirmed the equivalence principle and frame dragging.
Modern applications
Far from being simply of theoretical interest, relativistic effects are important practical engineering concerns. Satellite-based measurement needs to take into account relativistic effects, as each satellite is in motion relative to an Earth-bound user, and is thus in a different frame of reference under the theory of relativity. Global positioning systems such as GPS, GLONASS, and Galileo, must account for all of the relativistic effects in order to work with precision, such as the consequences of the Earth's gravitational field. This is also the case in the high-precision measurement of time. Instruments ranging from electron microscopes to particle accelerators would not work if relativistic considerations were omitted.
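As a concrete illustration of the GPS case, the two competing clock effects can be estimated with a short back-of-the-envelope sketch in Python. The orbital radius and the weak-field approximations used here are assumptions of this sketch, not figures from the article; the point is only that the two effects have opposite signs and a non-negligible net size.

```python
import math

# Rough estimate of the net relativistic clock drift for a GPS satellite.
# Approximate parameters; a back-of-the-envelope sketch, not a GPS error budget.
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
c = 299_792_458.0     # speed of light, m/s
R_earth = 6.371e6     # mean Earth radius, m
r_orbit = 2.6561e7    # assumed GPS orbital radius (~20,200 km altitude), m
day = 86_400.0        # seconds per day

# Special relativity: the moving satellite clock runs SLOW by ~v^2 / (2 c^2).
v = math.sqrt(GM / r_orbit)            # circular orbital speed, ~3.87 km/s
sr_loss = v**2 / (2 * c**2) * day      # seconds lost per day

# General relativity: the weaker gravitational potential at orbit makes the
# satellite clock run FAST by ~GM/c^2 * (1/R_earth - 1/r_orbit).
gr_gain = GM / c**2 * (1 / R_earth - 1 / r_orbit) * day

net = gr_gain - sr_loss
print(f"SR slowdown : -{sr_loss * 1e6:.1f} us/day")
print(f"GR speedup  : +{gr_gain * 1e6:.1f} us/day")
print(f"net drift   : +{net * 1e6:.1f} us/day")
```

Left uncorrected, a clock drift of tens of microseconds per day would translate into kilometres of position error, which is why the satellite clocks are deliberately adjusted before launch.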
See also
Doubly special relativity
Galilean invariance
List of textbooks on relativity
References
Further reading
The Meaning of Relativity Albert Einstein: Four lectures delivered at Princeton University, May 1921
How I created the theory of relativity Albert Einstein, 14 December 1922; Physics Today August 1982
Relativity Sidney Perkowitz Encyclopædia Britannica
External links
Albert Einstein
Articles containing video clips
Theoretical physics | Theory of relativity | Physics | 2,515 |
16,957,984 | https://en.wikipedia.org/wiki/Navini%20Networks | Navini Networks was a company that developed an Internet access system based on WiMAX wireless communication standards. This access system was subsequently acquired by Cisco Systems in October, 2007.
Company
In January 2000, Wu-Fu Chen and Guanghan Xu formed Navini Networks and developed a wireless Internet access system.
The company was based in Richardson, Texas and was privately funded by several investment-funds.
In 2001 it was awarded the 'Start-Up of the Year' award by KPMG and in 2002 it won some national and regional prizes.
Between the formation and early 2003 it attracted $66.5 million from private investors and employed 130 employees.
When it was sold in October 2007 for $330 million to Cisco Systems, Navini had 70 customers.
A Navini customer would be an Internet service provider providing wireless Internet access, mainly in areas where there are only limited wired alternatives available (such as Docsis access via a cable-TV network or DSL via the telephone network).
Products
Navini developed a WiMAX wireless internet-access infrastructure consisting of two main parts: the central headend system with the special antennas and the RipWave modems or customer premises equipment.
The Navini products offered a non line-of-sight wireless access system. The popular Wi-Fi systems require an unobstructed view between the antenna of the transmitter and the receiver for a good reception of the signals: when the view is obstructed the signal strength decreases and the reach of the signal is very small.
By using a technique called spot beaming, normally used in satellite communications, it was possible to use radio-signals on frequencies that would normally require an unobstructed path between the transmitter and receiver or high-power transmitters.
A Navini system consists of one management-system, one or more base-systems and the user-modems or customer premises equipment.
Ripwave EMS
At the heart of a Navini-based internet access system is the EMS or Element Management System. The EMS is a network management server application that manages one or more base-systems and the end-user equipment. The Navini EMS is a Java-based IP-network management system and could run on a Windows or Sun server platform using SNMP.
Base System
The base system is the head-end equipment to which users within reach connect. A base-system can be compared to a base station or GSM mast in a cellular telephone network.
The central system consisted of an indoor unit and an outdoor eight-element antenna system.
A single BTS could support up to 1000 connected end users. An end-user could connect to different base-systems, depending on which station gave the best connection at that time, but it wasn't possible to 'hop' from one BTS to another without losing the connection: the system wasn't designed for mobile communication. The Ripwave system is based on the TD-SCDMA technology, and one of the founders of the company, Dr. Xu, wrote the initial drafts for this standard.
The RipWave system was one of the first land-based systems for private use that uses spot-beaming to realise the non-line of sight connection between the CPE and the BTS. Spot-beaming is used in satellite communications to aim a signal from a satellite to a specific area and so increase the signal-strength in that area.
Originally the base-station was sold as the RipWave MX8 system but after the acquisition of the company by Cisco the base-systems were sold as the Cisco BWX 8300 series until it was marked as End of Life in 2008. The MX8 used a Navini proprietary protocol. It was followed by the WiMAX-certified BWX 2300 systems.
Customer premises equipment
To get access to a Navini WiMAX base-system the customer uses a special radio-transceiver: the customer premises equipment or CPE.
The Navini CPEs or modems introduced since September 2007 are based on the IEEE 802.16 standard. The old modems, sold as BWX100 systems, are EOL from 18 September 2009.
A CPE consists of a modem, which is in reality a radio transceiver, and has a built-on antenna. To improve signal-quality it is possible to connect an external antenna to the modem. The Ripwave CPE uses an active antenna.
Although the Ripwave technology doesn't support the active handover of a call from one base-station to another (such as in cellular networks), it does support nomadic use: a CPE isn't fixed to a specific base-station. If the provider allows it, a CPE can connect to any base-station in the network, and a network can even accept connections from another ISP's modems.
High costs
Worldwide there were 70 deployments. One relatively early example in Europe was the Dutch ISP Introweb, which planned to offer wireless broadband internet access in rural areas in The Netherlands. The Dutch incumbent telco KPN had announced that they wouldn't roll out DSL in these rural areas, and cable companies like UPC and Ziggo had stopped upgrading their cable-TV networks to offer Docsis after the dot-com collapse of 2001. To offer 'always on' broadband internet this ISP was going to deploy the Navini product range on a large scale.
While the network was being built, KPN changed their plans and upgraded their entire network so they could offer DSL in the whole country (including the rural areas Introweb was targeting with the Navini systems), and the cable TV operators also continued expanding their Docsis coverage. The cost of a Navini-based connection was much higher than a DSL or Docsis connection, and Introweb could not compete with DSL or Docsis on either price or speed. Introweb subsequently went bankrupt.
References
2000 establishments in Texas
2007 disestablishments in Texas
American companies established in 2000
American companies disestablished in 2007
Broadband
Cisco products
Cisco Systems acquisitions
Computer companies established in 2000
Computer companies disestablished in 2007
Defunct computer companies of the United States
Defunct computer hardware companies
Metropolitan area networks
Network access
Wireless networking | Navini Networks | Technology,Engineering | 1,282 |
77,421,630 | https://en.wikipedia.org/wiki/International%20Drug%20Users%20Remembrance%20Day | International Drug Users Remembrance Day is a health awareness day observed on 21 July each year. It is a day where friends and family can meet together to memorialise and remember loved ones whose lives were cut short due to drug use and the criminalisation and stigmatisation of people who use drugs.
It is also a day to remember everyone who has worked to advance the health and human rights of people who use drugs, many of whom have provided services borne out of civil disobedience such as needle and syringe programs and medically supervised injecting centres which have saved many lives.
When talking about what it meant to him, as a young drug user, Matthew Bonn said:
Bonn also talks about specific friends that he had lost to drug use and tells some of their stories.
It is similar to International Overdose Awareness Day (31 August) and International Drug Users Day (1 November).
References
Drug culture
Drug overdose
Drug policy
Drug-related deaths
Drug safety
Health awareness days
Harm reduction
Public health
Substance intoxication
Substance abuse
July observances
Health observances | International Drug Users Remembrance Day | Chemistry | 217 |
63,467,615 | https://en.wikipedia.org/wiki/Iron%28tetraphenylporphyrinato%29%20chloride | Iron(tetraporphyrinato) chloride is the coordination complex with the formula Fe(TPP)Cl where TPP is the dianion [C44H28N4]2-. The compound forms blue microcrystals that dissolve in chlorinated solvent to give brown solutions. In terms of structure, the complex is five-coordinate with idealized C4v point group symmetry. It is one of more common transition metal porphyrin complexes.
Synthesis and reactions
Fe(TPP)Cl is prepared by the reaction of tetraphenylporphyrin (H2TPP) and ferrous chloride in the presence of air:
H2TPP + FeCl2 + 1/4 O2 → Fe(TPP)Cl + HCl + 1/2 H2O
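The stoichiometry of this equation can be verified mechanically by counting atoms on each side. The short Python sketch below does exactly that, writing each species as an explicit atom count (with H2TPP taken as C44H30N4); the `side` helper is just an illustrative name for summing the counts.

```python
from collections import Counter

# Element balance check for: H2TPP + FeCl2 + 1/4 O2 -> Fe(TPP)Cl + HCl + 1/2 H2O
# Each species is written as an explicit atom count (H2TPP = C44H30N4).
def side(*terms):
    """Sum atom counts over (coefficient, composition) pairs."""
    total = Counter()
    for coeff, atoms in terms:
        for element, n in atoms.items():
            total[element] += coeff * n
    return total

H2TPP   = {"C": 44, "H": 30, "N": 4}
FeCl2   = {"Fe": 1, "Cl": 2}
O2      = {"O": 2}
FeTPPCl = {"Fe": 1, "C": 44, "H": 28, "N": 4, "Cl": 1}
HCl     = {"H": 1, "Cl": 1}
H2O     = {"H": 2, "O": 1}

left = side((1, H2TPP), (1, FeCl2), (0.25, O2))
right = side((1, FeTPPCl), (1, HCl), (0.5, H2O))
print(left == right)  # True: every element balances
```

The two hydrogens lost by the free-base porphyrin (30 on the left, 28 in the complex) reappear in the HCl and water by-products.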
The chloride can be replaced with other halides and pseudohalides. Base gives the "mu-oxo dimer":
2 Fe(TPP)Cl + 2 NaOH → [Fe(TPP)]2O + 2 NaCl + H2O
Most relevant to catalysis, the complex is easily reduced to give ferrous derivatives (L = pyridine, imidazole):
Fe(TPP)Cl + e− + 2 L → Fe(TPP)L2 + Cl−
The complex is widely studied as a catalyst.
References
Chelating agents
Tetrapyrroles
Macrocycles
Phenyl compounds | Iron(tetraphenylporphyrinato) chloride | Chemistry | 304 |
64,244,683 | https://en.wikipedia.org/wiki/Leibniz%20Institute%20of%20Plant%20Biochemistry | The Leibniz Institute of Plant Biochemistry (German: Leibniz-Institut für Pflanzenbiochemie, abbreviated: IPB) is a non-university, public research institute located in Halle (Saale), Germany. It carries out basic and applied plant research on model, cultivated and wild plants. Research activities at the institute include natural product chemistry, metabolism and protein biochemistry, cell and plant biology, as well as synthetic biology and biotechnology. The institute is a foundation under public law of the State of Saxony-Anhalt and is a member of the Leibniz Association.
History
The institute was founded in 1958 under Kurt Mothes as "Arbeitsstelle Biochemie der Pflanzen". Shortly after it was named "Institute for Biochemistry of Plants" (IBP) and became a member institution of the German Academy of Sciences of then East Germany. From 1968 to 1989, Klaus Schreiber served as director of the institute, followed by Klaus Müntz (1989-1990) and Benno Parthier (1990-1997).
After the German reunification, the institute was refounded in 1992 as "Leibniz Institute of Plant Biochemistry" (IPB) and became part of the Leibniz Association of research institutions publicly funded by the federal government and the federal state.
Activities and structure
IPB research focuses on plant-related small molecules. It aims to explore the chemical diversity, biosynthesis, biological roles and mechanisms of action of plant and fungal natural products including specialized metabolites and chemical mediators. Interdisciplinary approaches at the interface between chemistry and biology encompass:
natural product chemistry,
synthetic chemistry,
plant metabolism and protein biochemistry,
cell biology and plant physiology,
synthetic biology and biotechnology.
Research at the IPB is conducted in four scientific departments:
Bioorganic Chemistry,
Molecular Signal Processing,
Cell and Metabolic Biology,
Biochemistry of Plant Interactions,
as well as two Junior Research Groups and one Program Center for Plant Metabolomics and Computational Biochemistry (MetaCom).
The IPB employs ca. 200 people, 100 of whom are scientists.
Cooperation
The IPB and the Martin Luther University of Halle-Wittenberg (MLU) entertain close collaborations in research and teaching. The IPB's four department chairs and one junior research group leader are university professors jointly appointed by the MLU and the IPB. In addition, close collaborative ties exist with the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK) in Gatersleben. The IPB is a founding member of the ScienceCampus Halle - Plant-based Bioeconomy and a member of the German Centre for Integrative Biodiversity Research (iDiv) consortium.
References
External links
www.ipb-halle.de/en
Leibniz Association
Halle (Saale)
Research institutes established in 1958
Biochemistry research institutes
1958 establishments in East Germany
Scientific organisations based in East Germany | Leibniz Institute of Plant Biochemistry | Chemistry | 587 |
64,252,681 | https://en.wikipedia.org/wiki/Samsung%20Galaxy%20Z%20Fold%202 | The Samsung Galaxy Z Fold 2 (stylized as Samsung Galaxy Z Fold2, sold as Samsung Galaxy Fold 2 in certain territories) is an Android-based foldable smartphone developed by Samsung Electronics for its Samsung Galaxy Z series, succeeding the Samsung Galaxy Fold. It was announced on 5 August 2020 alongside the Samsung Galaxy Note 20, the Samsung Galaxy Tab S7, the Galaxy Buds Live, and the Galaxy Watch 3. Samsung later revealed pricing and availability details on 1 September.
On October 9, 2024, Samsung stopped security updates to the phone.
Specifications
Design
Unlike the original Fold which had an entirely plastic screen, the screen is protected by -thick "ultra-thin glass" with a plastic layer like the Z Flip, manufactured by Samsung with materials from Schott AG; conventional Gorilla Glass is used for the back panels with an aluminum frame. The hinge mechanism is also borrowed from the Z Flip, using nylon fibers designed to keep dust out; it is self-supporting from 75 to 115 degrees. The power button is embedded in the frame and doubles as the fingerprint sensor, with the volume rocker located above. The device comes in two colors, Mystic Bronze and Mystic Black, as well as a Limited Edition Thom Browne model. In select regions, users are able to customize the hinge color when ordering the phone from Samsung's website.
Hardware
The Galaxy Z Fold 2 contains two screens: its front cover uses a 6.2-inch display in the center with minimal bezels, significantly larger than its predecessor's 4.6-inch display, and the device can fold open to expose a 7.6-inch display, with a circular cutout in the top center right replacing the notch along with a thinner border. Both displays support HDR10+; the internal display benefits from an adaptive 120 Hz refresh rate like the S20 series and Note 20 Ultra.
The device has 12 GB of LPDDR5 RAM, and either 256 or 512 GB of non-expandable UFS 3.1 storage. Storage availability varies by country; the 512 GB version is by far the scarcest. The Z Fold 2 is powered by the Qualcomm Snapdragon 865+, which is used in all regions (unlike other flagship Samsung phones that have been split between Snapdragon and Samsung's in-house Exynos chips depending on the market). It uses two batteries split between the two halves, totaling a slightly larger 4500 mAh capacity; fast charging is supported over USB-C at up to 25 W or wirelessly via Qi at up to 11 W. The Z Fold 2 contains 5 cameras, including three rear-facing camera lenses (12-megapixel, 12-megapixel telephoto, and 12-megapixel ultra wide-angle), as well as a 10-megapixel front-facing camera on the cover, and a second 10-megapixel front-facing camera on the inside screen.
Software
The Galaxy Z Fold 2 shipped with Android 10 and Samsung's One UI software; by means of an improved Multi Window mode, up to three supported apps can be placed on-screen at once. Apps open on the smaller screen can expand into their larger, tablet-oriented layouts when the user unfolds the device. Additionally, supported apps will now automatically get a split-screen view with a sidebar and main app pane. New to the Z Fold 2 is split-screen functionality, called "Flex Mode", which is compatible with certain apps like YouTube and Google Duo along with native Samsung apps.
Luxury model
In November 2020, Samsung unveiled the Samsung W21 5G, a luxury version of the Z Fold2, exclusively available to the Chinese market. The phone is identical to its counterparts, in terms of its design and specifications, with the exception of a slightly taller build and two SIM card slots. The phone features an exclusive "Glitter Gold" color, which consists of a seven-layer nano-level optical film attached to the glass back that has vertical ridges for added texture.
Gallery
See also
Samsung Galaxy Z series
Samsung Galaxy Z Flip
References
External links
Samsung Galaxy
Foldable smartphones
Mobile phones introduced in 2020
Mobile phones with multiple rear cameras
Mobile phones with 4K video recording
Discontinued flagship smartphones
Discontinued Samsung Galaxy smartphones
Samsung smartphones | Samsung Galaxy Z Fold 2 | Technology | 882 |
35,892,084 | https://en.wikipedia.org/wiki/C16H18N2O2 | The molecular formula C16H18N2O2 (molar mass: 270.33 g/mol, exact mass: 270.1368 u) may refer to:
Ciproxifan
Domoxin
Ethonam
Penniclavine
Molecular formulas | C16H18N2O2 | Physics,Chemistry | 55 |
326,182 | https://en.wikipedia.org/wiki/Isoperimetric%20inequality | In mathematics, the isoperimetric inequality is a geometric inequality involving the square of the circumference of a closed curve in the plane and the area of a plane region it encloses, as well as its various generalizations. Isoperimetric literally means "having the same perimeter". Specifically, the isoperimetric inequality states, for the length L of a closed curve and the area A of the planar region that it encloses, that
and that equality holds if and only if the curve is a circle.
The isoperimetric problem is to determine a plane figure of the largest possible area whose boundary has a specified length. The closely related Dido's problem asks for a region of the maximal area bounded by a straight line and a curvilinear arc whose endpoints belong to that line. It is named after Dido, the legendary founder and first queen of Carthage. The solution to the isoperimetric problem is given by a circle and was known already in Ancient Greece. However, the first mathematically rigorous proof of this fact was obtained only in the 19th century. Since then, many other proofs have been found.
The isoperimetric problem has been extended in multiple ways, for example, to curves on surfaces and to regions in higher-dimensional spaces. Perhaps the most familiar physical manifestation of the 3-dimensional isoperimetric inequality is the shape of a drop of water. Namely, a drop will typically assume a symmetric round shape. Since the amount of water in a drop is fixed, surface tension forces the drop into a shape which minimizes the surface area of the drop, namely a round sphere.
The isoperimetric problem in the plane
The classical isoperimetric problem dates back to antiquity. The problem can be stated as follows: Among all closed curves in the plane of fixed perimeter, which curve (if any) maximizes the area of its enclosed region? This question can be shown to be equivalent to the following problem: Among all closed curves in the plane enclosing a fixed area, which curve (if any) minimizes the perimeter?
This problem is conceptually related to the principle of least action in physics, in that it can be restated: what is the principle of action which encloses the greatest area, with the greatest economy of effort? The 15th-century philosopher and scientist, Cardinal Nicholas of Cusa, considered rotational action, the process by which a circle is generated, to be the most direct reflection, in the realm of sensory impressions, of the process by which the universe is created. German astronomer and astrologer Johannes Kepler invoked the isoperimetric principle in discussing the morphology of the solar system, in Mysterium Cosmographicum (The Sacred Mystery of the Cosmos, 1596).
Although the circle appears to be an obvious solution to the problem, proving this fact is rather difficult. The first progress toward the solution was made by Swiss geometer Jakob Steiner in 1838, using a geometric method later named Steiner symmetrisation. Steiner showed that if a solution existed, then it must be the circle. Steiner's proof was completed later by several other mathematicians.
Steiner begins with some geometric constructions which are easily understood; for example, it can be shown that any closed curve enclosing a region that is not fully convex can be modified to enclose more area, by "flipping" the concave areas so that they become convex. It can further be shown that any closed curve which is not fully symmetrical can be "tilted" so that it encloses more area. The one shape that is perfectly convex and symmetrical is the circle, although this, in itself, does not represent a rigorous proof of the isoperimetric theorem (see external links).
On a plane
The solution to the isoperimetric problem is usually expressed in the form of an inequality that relates the length L of a closed curve and the area A of the planar region that it encloses. The isoperimetric inequality states that

L² ≥ 4πA,

and that the equality holds if and only if the curve is a circle. The area of a disk of radius R is πR² and the circumference of the circle is 2πR, so both sides of the inequality are equal to 4π²R² in this case.
Dozens of proofs of the isoperimetric inequality have been found. In 1902, Hurwitz published a short proof using the Fourier series that applies to arbitrary rectifiable curves (not assumed to be smooth). An elegant direct proof based on comparison of a smooth simple closed curve with an appropriate circle was given by E. Schmidt in 1938. It uses only the arc length formula, expression for the area of a plane region from Green's theorem, and the Cauchy–Schwarz inequality.
For a given closed curve, the isoperimetric quotient is defined as the ratio of its area to that of the circle having the same perimeter. This is equal to

Q = 4πA / L²,

and the isoperimetric inequality says that Q ≤ 1. Equivalently, the isoperimetric ratio L²/A is at least 4π for every curve.
The isoperimetric quotient of a regular n-gon is

Q_n = π / (n tan(π/n)).
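Assuming the closed form Q_n = π/(n tan(π/n)) for the regular n-gon, the quotient is easy to check numerically against the definition Q = 4πA/L². The Python sketch below uses side length 1, which is arbitrary since Q is scale-invariant:

```python
import math

# Isoperimetric quotient Q = 4*pi*A / L^2 of the regular n-gon, computed
# from its area and perimeter, then compared with the closed form
# Q_n = pi / (n * tan(pi / n)).
def quotient_ngon(n: int) -> float:
    """Q for a regular n-gon with side length 1: A = n / (4 tan(pi/n)), L = n."""
    area = n / (4 * math.tan(math.pi / n))
    perimeter = n
    return 4 * math.pi * area / perimeter**2

for n in (3, 4, 6, 12, 96):
    q = quotient_ngon(n)
    closed_form = math.pi / (n * math.tan(math.pi / n))
    assert abs(q - closed_form) < 1e-12
    print(f"n = {n:3d}  Q = {q:.6f}")
# Q stays strictly below 1 and increases toward 1 as n grows: among these
# shapes the circle (the n -> infinity limit) is optimal.
```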
Let γ be a smooth regular convex closed curve. Then the improved isoperimetric inequality states the following

L² ≥ 4πA + 8π|Ã|,

where L, A and à denote the length of γ, the area of the region bounded by γ and the oriented area of the Wigner caustic of γ, respectively, and the equality holds if and only if γ is a curve of constant width.
On a sphere
Let C be a simple closed curve on a sphere of radius 1. Denote by L the length of C and by A the area enclosed by C. The spherical isoperimetric inequality states that

L² ≥ A(4π − A),

and that the equality holds if and only if the curve is a circle. There are, in fact, two ways to measure the spherical area enclosed by a simple closed curve, but the inequality is symmetric with respect to taking the complement.
This inequality was discovered by Paul Lévy (1919) who also extended it to higher dimensions and general surfaces.
In the more general case of arbitrary radius R, it is known that

L² ≥ 4πA − A²/R².
In Euclidean space
The isoperimetric inequality states that a sphere has the smallest surface area per given volume. Given a bounded open set S ⊂ Rⁿ with boundary, having surface area per(S) and volume vol(S), the isoperimetric inequality states

per(S) ≥ n vol(S)^((n−1)/n) vol(B₁)^(1/n),

where B₁ ⊂ Rⁿ is a unit ball. The equality holds when S is a ball in Rⁿ. Under additional restrictions on the set (such as convexity, regularity, smooth boundary), the equality holds for a ball only. But in full generality the situation is more complicated. The relevant result is clarified as follows. An extremal set consists of a ball and a "corona" that contributes neither to the volume nor to the surface area. That is, the equality holds for a compact set S if and only if S contains a closed ball B such that vol(B) = vol(S) and per(B) = per(S). For example, the "corona" may be a curve.
The proof of the inequality follows directly from the Brunn–Minkowski inequality between a set A and a ball with radius ε, i.e. B_ε = εB₁. By taking the Brunn–Minkowski inequality to the power n, subtracting μ(A) from both sides, dividing them by ε, and taking the limit as ε → 0⁺, the inequality follows.
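Spelled out in symbols, the limiting argument sketched above reads as follows (a sketch, with |·| denoting Lebesgue measure, B₁ the unit ball and ω_n = |B₁|):

```latex
% Brunn–Minkowski for A and \varepsilon B_1:
|A + \varepsilon B_1|^{1/n} \;\ge\; |A|^{1/n} + \varepsilon\,|B_1|^{1/n}
% Raise both sides to the n-th power and expand in \varepsilon:
|A + \varepsilon B_1| \;\ge\; |A| + n\,\varepsilon\,|A|^{(n-1)/n}\,\omega_n^{1/n}
  + O(\varepsilon^2)
% Subtract |A|, divide by \varepsilon, let \varepsilon \to 0^+
% (the Minkowski content of the boundary is the perimeter):
\operatorname{per}(A)
  \;=\; \lim_{\varepsilon \to 0^+} \frac{|A + \varepsilon B_1| - |A|}{\varepsilon}
  \;\ge\; n\,\omega_n^{1/n}\,|A|^{(n-1)/n}
```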
In full generality, the isoperimetric inequality states that for any set S whose closure has finite Lebesgue measure

n ω_n^(1/n) L^n(cl S)^((n−1)/n) ≤ M^(n−1)(∂S),

where M^(n−1) is the (n−1)-dimensional Minkowski content, L^n is the n-dimensional Lebesgue measure, and ω_n is the volume of the unit ball in Rⁿ. If the boundary of S is rectifiable, then the Minkowski content is the (n−1)-dimensional Hausdorff measure.
The n-dimensional isoperimetric inequality is equivalent (for sufficiently smooth domains) to the Sobolev inequality on Rⁿ with optimal constant:

( ∫ |u|^(n/(n−1)) )^((n−1)/n) ≤ n⁻¹ ω_n^(−1/n) ∫ |∇u|

for all u ∈ W^(1,1)(Rⁿ).
In Hadamard manifolds
Hadamard manifolds are complete simply connected manifolds with nonpositive curvature. Thus they generalize the Euclidean space Rⁿ, which is a Hadamard manifold with curvature zero. In the 1970s and early 1980s, Thierry Aubin, Misha Gromov, Yuri Burago, and Viktor Zalgaller conjectured that the Euclidean isoperimetric inequality
holds for bounded sets in Hadamard manifolds, which has become known as the Cartan–Hadamard conjecture.
In dimension 2 this had already been established in 1926 by André Weil, who was a student of Hadamard at the time.
In dimensions 3 and 4 the conjecture was proved by Bruce Kleiner in 1992, and Chris Croke in 1984 respectively.
In a metric measure space
Most of the work on the isoperimetric problem has been done in the context of smooth regions in Euclidean spaces, or more generally, in Riemannian manifolds. However, the isoperimetric problem can be formulated in much greater generality, using the notion of Minkowski content. Let (X, μ) be a metric measure space: X is a metric space with metric d, and μ is a Borel measure on X. The boundary measure, or Minkowski content, of a measurable subset A of X is defined as the lim inf

μ⁺(A) = lim inf_{ε → 0⁺} (μ(A_ε) − μ(A)) / ε,

where

A_ε = { x ∈ X : d(x, A) ≤ ε }

is the ε-extension of A.
The isoperimetric problem in X asks how small μ⁺(A) can be for a given μ(A). If X is the Euclidean plane with the usual distance and the Lebesgue measure then this question generalizes the classical isoperimetric problem to planar regions whose boundary is not necessarily smooth, although the answer turns out to be the same.
The function

I(a) = inf { μ⁺(A) : μ(A) = a }

is called the isoperimetric profile of the metric measure space (X, μ). Isoperimetric profiles have been studied for Cayley graphs of discrete groups and for special classes of Riemannian manifolds (where usually only regions A with regular boundary are considered).
For graphs
In graph theory, isoperimetric inequalities are at the heart of the study of expander graphs, which are sparse graphs that have strong connectivity properties. Expander constructions have spawned research in pure and applied mathematics, with several applications to complexity theory, design of robust computer networks, and the theory of error-correcting codes.
Isoperimetric inequalities for graphs relate the size of vertex subsets to the size of their boundary, which is usually measured by the number of edges leaving the subset (edge expansion) or by the number of neighbouring vertices (vertex expansion). For a graph and a number , the following are two standard isoperimetric parameters for graphs.
The edge isoperimetric parameter:

Φ_E(G, k) = min { |∂(S)| : S ⊆ V, |S| = k }

The vertex isoperimetric parameter:

Φ_V(G, k) = min { |∂_v(S)| : S ⊆ V, |S| = k }

Here ∂(S) denotes the set of edges leaving S and ∂_v(S) denotes the set of vertices outside S that have a neighbour in S. The isoperimetric problem consists of understanding how the parameters Φ_E and Φ_V behave for natural families of graphs.
Example: Isoperimetric inequalities for hypercubes
The n-dimensional hypercube Q_n is the graph whose vertices are all Boolean vectors of length n, that is, the set {0, 1}ⁿ. Two such vectors are connected by an edge in Q_n if they are equal up to a single bit flip, that is, their Hamming distance is exactly one.
The following are the isoperimetric inequalities for the Boolean hypercube.
Edge isoperimetric inequality
The edge isoperimetric inequality of the hypercube is Φ_E(Q_n, k) ≥ k (n − log₂ k). This bound is tight, as is witnessed by each set S that is the set of vertices of any subcube of Q_n.
Vertex isoperimetric inequality
Harper's theorem says that Hamming balls have the smallest vertex boundary among all sets of a given size. Hamming balls are sets that contain all points of Hamming weight at most r and no points of Hamming weight larger than r + 1 for some integer r. This theorem implies that any set S with

|S| ≥ C(n,0) + C(n,1) + ... + C(n,r)

satisfies

|S ∪ ∂_v(S)| ≥ C(n,0) + C(n,1) + ... + C(n,r+1),

where C(n,i) denotes the binomial coefficient.
As a special case, consider set sizes of the form

k = C(n,0) + C(n,1) + ... + C(n,r)

for some integer r. Then the above implies that the exact vertex isoperimetric parameter is

Φ_V(Q_n, k) = C(n, r+1).
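The edge bound |∂(S)| ≥ |S| (n − log₂ |S|), which is tight on subcubes, can be sanity-checked by brute force on a small cube. The Python sketch below enumerates every vertex subset of Q_3:

```python
from itertools import combinations
import math

# Brute-force check of the hypercube edge isoperimetric inequality
# |edge boundary(S)| >= |S| * (n - log2 |S|) on the small cube Q_3.
n = 3
vertices = range(2**n)  # vertices encoded as n-bit integers

def edge_boundary(S):
    """Count edges with exactly one endpoint in S (neighbours differ in one bit)."""
    S = set(S)
    return sum(1 for v in S for i in range(n) if (v ^ (1 << i)) not in S)

for k in range(1, 2**n + 1):
    best = min(edge_boundary(S) for S in combinations(vertices, k))
    bound = k * (n - math.log2(k))
    assert best >= bound - 1e-9, (k, best, bound)
    print(f"k = {k}: min |edge boundary| = {best}, bound = {bound:.3f}")
```

Subcube sizes (k = 1, 2, 4, 8) meet the bound exactly, matching the tightness claim above.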
Isoperimetric inequality for triangles
The isoperimetric inequality for triangles in terms of perimeter p and area T states that

p² ≥ 12√3 · T,

with equality for the equilateral triangle. This is implied, via the AM–GM inequality, by a stronger inequality which has also been called the isoperimetric inequality for triangles:

T ≤ (√3/4) (abc)^(2/3),

where a, b and c are the side lengths of the triangle.
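The inequality p² ≥ 12√3·T can be checked numerically with a short Python sketch that computes areas via Heron's formula; the random sampling scheme used here is just one way to generate valid side lengths, not part of the theorem.

```python
import math
import random

# Numeric check of the triangle isoperimetric inequality p^2 >= 12*sqrt(3)*T,
# with T computed from the side lengths by Heron's formula.
def area(a, b, c):
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# Equality case: the equilateral triangle with side 1 has p^2 = 9 = 12*sqrt(3)*T.
assert abs(3**2 - 12 * math.sqrt(3) * area(1, 1, 1)) < 1e-12

random.seed(0)
for _ in range(10_000):
    a, b = random.uniform(0.1, 1), random.uniform(0.1, 1)
    c = random.uniform(abs(a - b) + 1e-6, a + b - 1e-6)  # triangle inequality
    p = a + b + c
    assert p**2 >= 12 * math.sqrt(3) * area(a, b, c) - 1e-9
print("p^2 >= 12*sqrt(3)*T held on all samples")
```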
See also
Blaschke–Lebesgue theorem
Chaplygin problem: isoperimetric problem is a zero wind speed case of Chaplygin problem
Curve-shortening flow
Expander graph
Gaussian isoperimetric inequality
Isoperimetric dimension
Isoperimetric point
List of triangle inequalities
Planar separator theorem
Mixed volume
Notes
References
Blaschke and Leichtweiß, Elementare Differentialgeometrie (in German), 5th edition, completely revised by K. Leichtweiß. Die Grundlehren der mathematischen Wissenschaften, Band 1. Springer-Verlag, New York Heidelberg Berlin, 1973
Gromov, M.: "Paul Levy's isoperimetric inequality". Appendix C in Metric structures for Riemannian and non-Riemannian spaces. Based on the 1981 French original. With appendices by M. Katz, P. Pansu and S. Semmes. Translated from the French by Sean Michael Bates. Progress in Mathematics, 152. Birkhäuser Boston, Inc., Boston, Massachusetts, 1999.
External links
History of the Isoperimetric Problem at Convergence
Treiberg: Several proofs of the isoperimetric inequality
Isoperimetric Theorem at cut-the-knot
Analytic geometry
Calculus of variations
Geometric inequalities
Multivariable calculus
Theorems in measure theory | Isoperimetric inequality | Mathematics | 2,804 |
72,460,383 | https://en.wikipedia.org/wiki/Turkey%20illusion | Turkey illusion is a cognitive bias describing the surprise resulting from a break in a trend, if one does not know the causes or the framework conditions for this trend. The concept was first introduced by Bertrand Russell to illustrate a problem with inductive reasoning.
Relevant disciplines for uncovering such biases include psychology and behavioral economics.
The story
In a variation from Russell's original, a turkey designated for Thanksgiving is fed and cared for every day until it is slaughtered. With each feeding, its certainty or confidence that nothing will happen to it increases, based on past experience. From the turkey's point of view, the certainty that it will be fed and cared for again the next day is greatest on the night before it dies, of all days. Nevertheless, it is slaughtered that day, by the very person who cared for it.
The story appears in Bertrand Russell's 1912 The Problems of Philosophy as relating to a chicken:
Interpretation
The slaughter comes as a complete surprise to the turkey, who - in anthropomorphic formulation - "only extrapolates a trend" and "does not recognize the impending trend break". To recognize this trend break, the turkey would have had to find out the causes of the trend. By doing so, it would have known about the motivational state of the human who feeds it every day. In order to "think outside the box" and leave known or familiar thought patterns, creativity and the ability to change perspectives are necessary. This was not possible for the turkey due to insufficient information.
References
Cognitive biases
Illusions
Behavioral economics | Turkey illusion | Biology | 315 |
513,815 | https://en.wikipedia.org/wiki/Shroud | Shroud usually refers to an item, such as a cloth, that covers or protects some other object. The term is most often used in reference to burial sheets, mound shroud, grave clothes, winding-cloths or winding-sheets, such as the Jewish tachrichim or Muslim kaffan, that the body is wrapped in for burial. A famous example of this is the Shroud of Turin.
A traditional Jewish shroud consists of a tunic; a hood; pants that are extra-long and sewn shut at the bottom, so that separate foot coverings are not required; and a belt, which is tied in a knot shaped like the Hebrew letter shin, mnemonic of one of God's names, Shaddai. Traditionally, mound shrouds are made of white cotton, wool or linen, though any material can be used so long as it is made of natural fibre. Intermixture of two or more such fibres is forbidden, due to the prohibition of Shaatnez. A pious Jewish man may next be enwrapped in either his kittel or his tallit, one tassel of which is defaced to render the garment ritually unfit, symbolizing the fact that the decedent is free from the stringent requirements of the 613 mitzvot (commandments). The shrouded body is wrapped in a winding sheet, termed a sovev in Hebrew (a cognate of svivon, the spinning Hanukkah toy that is familiar under its Yiddish name, dreidel), before being placed directly in the earth (or in a plain coffin of soft wood where it is required by governing health codes).
The Early Christian Church also strongly encouraged the use of winding-sheets, except for monarchs and bishops. The rich were wrapped in cerecloths, which are fine fabrics soaked or painted in wax to hold the fabric close to the flesh. Early Christian shrouds incorporated a cloth, the sudarium, that covered the face, as depicted in traditional artistic representations of the entombed Jesus or his friend, Lazarus (John 11, q.v.). An account of the opening of the coffin of Edward I says that the "innermost covering seems to have been a very fine linen cerecloth, dressed close to every part of the body". The use of burial shrouds was general until at least the Renaissance – for much of history, a new set of clothing was an expensive purchase, so preparing the deceased in this manner ensured that a good set of clothes could be retained for further use by the family.
In Europe in the Middle Ages, coarse linen shrouds were used to bury most poor without a coffin. In poetry shrouds have been described as of sable, and they were later embroidered in black, becoming more elaborate and cut like shirts or shifts.
Orthodox Christians still use a burial shroud, usually decorated with a cross and the Trisagion. The special shroud that is used during the Orthodox Holy Week services is called an Epitaphios. Some Christians also use the burial shroud, particularly the Catholics (Roman/Eastern), among others.
Muslims as well use burial shrouds that are made of white cotton or linen. The Burying in Woollen Acts 1666–80 in England were meant to support the production of woollen cloth.
See also
Sudarium of Oviedo
Islamic funeral
References
External links
Eastern Christian liturgical objects
Catholic liturgy
Death customs
Religious practices | Shroud | Biology | 716 |
3,091,815 | https://en.wikipedia.org/wiki/Strecker%20amino%20acid%20synthesis | The Strecker amino acid synthesis, also known simply as the Strecker synthesis, is a method for the synthesis of amino acids by the reaction of an aldehyde with cyanide in the presence of ammonia. The condensation reaction yields an α-aminonitrile, which is subsequently hydrolyzed to give the desired amino acid. The method is used for the commercial production of racemic methionine from methional.
Primary and secondary amines also give N-substituted amino acids. Likewise, the usage of ketones, instead of aldehydes, gives α,α-disubstituted amino acids.
Reaction mechanism
In the first part of the reaction process, the carbonyl is converted to an iminium ion, to which a cyanide ion adds. First, the carbonyl oxygen of an aldehyde is protonated, followed by a nucleophilic attack of ammonia to the carbonyl carbon. After subsequent proton exchange, water is cleaved to form the iminium ion intermediate. A cyanide ion then attacks the iminium carbon yielding an aminonitrile.
In the second part of the reaction process, the nitrile is hydrolyzed. First, the nitrile nitrogen of the aminonitrile is protonated, and the nitrile carbon is attacked by a water molecule. A 1,2-diamino-diol is then formed after proton exchange and a nucleophilic attack of water to the former nitrile carbon. Ammonia is subsequently eliminated after the protonation of the amino group, and finally the deprotonation of a hydroxyl group produces an amino acid.
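The overall atom balance of the two stages can be checked programmatically. The following minimal sketch uses the substrates of the original Strecker reaction (acetaldehyde, ammonia, HCN, giving alanine after hydrolysis); the parser and function names are assumptions for illustration:

```python
import re
from collections import Counter

def parse(formula: str) -> Counter:
    """Count atoms in a simple formula such as 'C3H6N2' (no parentheses)."""
    counts = Counter()
    for element, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(n) if n else 1
    return counts

def balanced(reactants, products) -> bool:
    """True if both sides of the equation contain the same atoms."""
    return sum((parse(f) for f in reactants), Counter()) == \
           sum((parse(f) for f in products), Counter())

# Condensation: acetaldehyde + ammonia + HCN -> alpha-aminonitrile + water
print(balanced(["C2H4O", "NH3", "HCN"], ["C3H6N2", "H2O"]))      # True
# Hydrolysis: alpha-aminonitrile + 2 water -> alanine + ammonia
print(balanced(["C3H6N2", "H2O", "H2O"], ["C3H7NO2", "NH3"]))    # True
```

Both stages balance, consistent with the condensation releasing one water and the hydrolysis consuming two while expelling ammonia.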
Asymmetric Strecker reactions
One example of the Strecker synthesis is a multikilogram-scale synthesis of an L-valine derivative starting from methyl isopropyl ketone:
The initial reaction product of 3-methyl-2-butanone with sodium cyanide and ammonia is resolved by application of L-tartaric acid. In contrast, asymmetric Strecker reactions require no resolving agent. When ammonia is replaced with (S)-alpha-phenylethylamine as a chiral auxiliary, the ultimate reaction product is chiral alanine.
Catalytic asymmetric Strecker reaction can be effected using thiourea-derived catalysts. In 2012, a BINOL-derived catalyst was employed to generate chiral cyanide anion (see figure).
History
The German chemist Adolph Strecker discovered the series of chemical reactions that produce an amino acid from an aldehyde or ketone. Using ammonia or ammonium salts in this reaction gives unsubstituted amino acids. In the original Strecker reaction, acetaldehyde, ammonia, and hydrogen cyanide combined to form, after hydrolysis, alanine. Using primary and secondary amines in place of ammonia was shown to yield N-substituted amino acids.
The classical Strecker synthesis gives racemic mixtures of α-amino acids as products, but several alternative procedures using asymmetric auxiliaries or asymmetric catalysts have been developed.
The asymmetric Strecker reaction was reported by Harada in 1963. The first reported asymmetric synthesis via a chiral catalyst was published in 1996. However, this was retracted in 2023.
Commercial syntheses of amino acids
Several methods exist to synthesize amino acids aside from the Strecker synthesis.
The commercial production of amino acids, however, usually relies on mutant bacteria that overproduce individual amino acids using glucose as a carbon source. Otherwise amino acids are produced by enzymatic conversions of synthetic intermediates. 2-Aminothiazoline-4-carboxylic acid is an intermediate in one industrial synthesis of L-cysteine. Aspartic acid is produced by the addition of ammonia to fumarate using a lyase.
References
See also
Bucherer–Bergs reaction
Multiple component reactions
Substitution reactions
Name reactions
Chemical synthesis of amino acids | Strecker amino acid synthesis | Chemistry | 833 |
1,288,948 | https://en.wikipedia.org/wiki/Capability%20Maturity%20Model%20Integration | Capability Maturity Model Integration (CMMI) is a process level improvement training and appraisal program. Administered by the CMMI Institute, a subsidiary of ISACA, it was developed at Carnegie Mellon University (CMU). It is required by many U.S. Government contracts, especially in software development. CMU claims CMMI can be used to guide process improvement across a project, division, or an entire organization.
CMMI defines the following five maturity levels (1 to 5) for processes: Initial, Managed, Defined, Quantitatively Managed, and Optimizing. CMMI Version 3.0 was published in 2023; Version 2.0 was published in 2018; Version 1.3 was published in 2010, and is the reference model for the rest of the information in this article. CMMI is registered in the U.S. Patent and Trademark Office by CMU.
Overview
Originally, CMMI addressed three areas of interest:
Product and service development – CMMI for Development (CMMI-DEV),
Service establishment and management – CMMI for Services (CMMI-SVC), and
Product and service acquisition – CMMI for Acquisition (CMMI-ACQ).
In version 2.0 these three areas (that previously had a separate model each) were merged into a single model.
CMMI was developed by a group from industry, government, and the Software Engineering Institute (SEI) at CMU. CMMI models provide guidance for developing or improving processes that meet the business goals of an organization. A CMMI model may also be used as a framework for appraising the process maturity of the organization. By January 2013, the entire CMMI product suite was transferred from the SEI to the CMMI Institute, a newly created organization at Carnegie Mellon.
History
CMMI was developed by the CMMI project, which aimed to improve the usability of maturity models by integrating many different models into one framework. The project consisted of members of industry, government and the Carnegie Mellon Software Engineering Institute (SEI). The main sponsors included the Office of the Secretary of Defense (OSD) and the National Defense Industrial Association.
CMMI is the successor of the capability maturity model (CMM) or Software CMM. The CMM was developed from 1987 until 1997. In 2002, version 1.1 was released, version 1.2 followed in August 2006, and version 1.3 in November 2010. Some major changes in CMMI V1.3 are the support of agile software development, improvements to high maturity practices and alignment of the representation (staged and continuous).
According to the Software Engineering Institute (SEI, 2008), CMMI helps "integrate traditionally separate organizational functions, set process improvement goals and priorities, provide guidance for quality processes, and provide a point of reference for appraising current processes."
Mary Beth Chrissis, Mike Konrad, and Sandy Shrum were the authorship team for the hard-copy publication of CMMI for Development Version 1.2 and 1.3. The Addison-Wesley publication of Version 1.3 was dedicated to the memory of Watts Humphrey. Eileen C. Forrester, Brandon L. Buteau, and Sandy Shrum were the authorship team for the hard-copy publication of CMMI for Services Version 1.3. Rawdon "Rusty" Young was the chief architect for the development of CMMI version 2.0. He was previously the CMMI Product Owner and the SCAMPI Quality Lead for the Software Engineering Institute.
In March 2016, the CMMI Institute was acquired by ISACA.
In April 2023, the CMMI V3.0 was released.
Topics
Representation
In version 1.3 CMMI existed in two representations: continuous and staged. The continuous representation is designed to allow the user to focus on the specific processes that are considered important for the organization's immediate business objectives, or those to which the organization assigns a high degree of risks. The staged representation is designed to provide a standard sequence of improvements, and can serve as a basis for comparing the maturity of different projects and organizations. The staged representation also provides for an easy migration from the SW-CMM to CMMI.
In version 2.0 the above representation separation was cancelled and there is now only one cohesive model.
Model framework (v1.3)
Depending on the areas of interest (acquisition, services, development) used, the process areas it contains will vary. Process areas are the areas that will be covered by the organization's processes. The table below lists the seventeen CMMI core process areas that are present for all CMMI areas of interest in version 1.3.
Maturity levels for services
The process areas below and their maturity levels are listed for the CMMI for services model:
Maturity Level 2 – Managed
CM – Configuration Management
MA – Measurement and Analysis
PPQA – Process and Quality Assurance
REQM – Requirements Management
SAM – Supplier Agreement Management
SD – Service Delivery
WMC – Work Monitoring and Control
WP – Work Planning
Maturity Level 3 – Defined
CAM – Capacity and Availability Management
DAR – Decision Analysis and Resolution
IRP – Incident Resolution and Prevention
IWM – Integrated Work Management
OPD – Organizational Process Definition
OPF – Organizational Process Focus
OT – Organizational Training
RSKM – Risk Management
SCON – Service Continuity
SSD – Service System Development
SST – Service System Transition
STSM – Strategic Service Management
Maturity Level 4 – Quantitatively Managed
OPP – Organizational Process Performance
QWM – Quantitative Work Management
Maturity Level 5 – Optimizing
CAR – Causal Analysis and Resolution
OPM – Organizational Performance Management
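The staged structure above lends itself to a simple lookup table: an organization at a given maturity level must satisfy the process areas of that level and all levels below it. The acronyms come from the list above; the data structure and function name are assumptions for illustration:

```python
# CMMI-SVC v1.3 maturity levels mapped to their process-area acronyms.
CMMI_SVC_LEVELS = {
    2: ["CM", "MA", "PPQA", "REQM", "SAM", "SD", "WMC", "WP"],
    3: ["CAM", "DAR", "IRP", "IWM", "OPD", "OPF", "OT", "RSKM",
        "SCON", "SSD", "SST", "STSM"],
    4: ["OPP", "QWM"],
    5: ["CAR", "OPM"],
}

def areas_required_for(level: int) -> list:
    """Process areas an organization must satisfy at a staged maturity level,
    i.e. its own level plus every level beneath it."""
    return sorted(pa for lvl, pas in CMMI_SVC_LEVELS.items()
                  if lvl <= level for pa in pas)

print(len(areas_required_for(3)))  # 20 process areas by Level 3
print(len(areas_required_for(5)))  # 24 process areas by Level 5
```

The cumulative behavior mirrors the staged representation: each level builds on the full set of practices below it.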
Models (v1.3)
CMMI best practices are published in documents called models, each of which addresses a different area of interest. Version 1.3 provides models for three areas of interest: development, acquisition, and services.
CMMI for Development (CMMI-DEV), v1.3 was released in November 2010. It addresses product and service development processes.
CMMI for Acquisition (CMMI-ACQ), v1.3 was released in November 2010. It addresses supply chain management, acquisition, and outsourcing processes in government and industry.
CMMI for Services (CMMI-SVC), v1.3 was released in November 2010. It addresses guidance for delivering services within an organization and to external customers.
Model (v2.0)
In version 2.0 DEV, ACQ and SVC were merged into a single model where each process area potentially has a specific reference to one or more of these three aspects. Trying to keep up with the industry the model also has explicit reference to agile aspects in some process areas.
Some key differences between v1.3 and v2.0 models are given below:
"Process Areas" have been replaced with "Practice Areas (PA's)". The latter is arranged by levels, not "Specific Goals".
Each PA is composed of a "core" (i.e., a generic and terminology-free description) and a "context-specific" (i.e., a description from the perspective of Agile/Scrum, development, services, etc.) section.
Since compliance with all practices is now mandatory, the "Expected" section has been removed.
"Generic Practices" have been put under a new area called "Governance and Implementation Infrastructure", while "Specific practices" have been omitted.
Emphasis on ensuring implementation of PA's and that these are practised continuously until they become a "habit".
All maturity levels focus on the keyword "performance".
Optional PAs from the "Safety" and "Security" purview (two and five, respectively) have been included.
PCMM process areas have been merged.
Appraisal
An organization cannot be certified in CMMI; instead, an organization is appraised. Depending on the type of appraisal, the organization can be awarded a maturity level rating (1–5) or a capability level achievement profile.
Many organizations find value in measuring their progress by conducting an appraisal. Appraisals are typically conducted for one or more of the following reasons:
To determine how well the organization's processes compare to CMMI best practices, and to identify areas where improvement can be made
To inform external customers and suppliers of how well the organization's processes compare to CMMI best practices
To meet the contractual requirements of one or more customers
Appraisals of organizations using a CMMI model must conform to the requirements defined in the Appraisal Requirements for CMMI (ARC) document. There are three classes of appraisals, A, B and C, which focus on identifying improvement opportunities and comparing the organization's processes to CMMI best practices. Of these, class A appraisal is the most formal and is the only one that can result in a level rating. Appraisal teams use a CMMI model and ARC-conformant appraisal method to guide their evaluation of the organization and their reporting of conclusions. The appraisal results can then be used (e.g., by a process group) to plan improvements for the organization.
The Standard CMMI Appraisal Method for Process Improvement (SCAMPI) is an appraisal method that meets all of the ARC requirements. Results of a SCAMPI appraisal may be published (if the appraised organization approves) on the CMMI Web site of the SEI: Published SCAMPI Appraisal Results. SCAMPI also supports the conduct of ISO/IEC 15504, also known as SPICE (Software Process Improvement and Capability Determination), assessments etc.
This approach promotes that members of the EPG and PATs be trained in the CMMI, that an informal (SCAMPI C) appraisal be performed, and that process areas be prioritized for improvement. More modern approaches, which involve the deployment of commercially available, CMMI-compliant processes, can significantly reduce the time to achieve compliance. SEI has maintained statistics on the "time to move up" for organizations adopting the earlier Software CMM as well as CMMI. These statistics indicate that, since 1987, the median time to move from Level 1 to Level 2 has been 23 months, and from Level 2 to Level 3 an additional 20 months. Since the release of CMMI, the median time to move from Level 1 to Level 2 is 5 months, with median movement to Level 3 taking another 21 months. These statistics are updated and published every six months in a maturity profile.
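Treating the reported medians as roughly additive (they are not strictly so, since a median of sums is not the sum of medians), the figures suggest a shorter path to Level 3 under CMMI than under the earlier Software CMM:

```python
# Reported median "time to move up" figures, in months.
# Summing medians is only a rough, illustrative comparison.
sw_cmm = {"1->2": 23, "2->3": 20}   # earlier Software CMM, since 1987
cmmi   = {"1->2": 5,  "2->3": 21}   # since the release of CMMI

print(sum(sw_cmm.values()))  # 43 months from Level 1 to Level 3
print(sum(cmmi.values()))    # 26 months from Level 1 to Level 3
```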
The Software Engineering Institute's (SEI) team software process methodology and the use of CMMI models can be used to raise the maturity level. A new product called Accelerated Improvement Method (AIM) combines the use of CMMI and the TSP.
Security
To address user security concerns, two unofficial security guides are available. Considering the Case for Security Content in CMMI for Services has one process area, Security Management. Security by Design with CMMI for Development, Version 1.3 has the following process areas:
OPSD – Organizational Preparedness for Secure Development
SMP – Secure Management in Projects
SRTS – Security Requirements and Technical Solution
SVV – Security Verification and Validation
While they do not affect maturity or capability levels, these process areas can be reported in appraisal results.
Applications
The SEI published a study reporting that 60 organizations measured increases in performance in the categories of cost, schedule, productivity, quality, and customer satisfaction. The median increase in performance varied between 14% (customer satisfaction) and 62% (productivity). However, the CMMI model mostly addresses what processes should be implemented, and not so much how they can be implemented. These results do not guarantee that applying CMMI will increase performance in every organization. A small company with few resources may be less likely to benefit from CMMI; this view is supported by the process maturity profile (page 10). Of the small organizations (<25 employees), 70.5% are assessed at level 2: Managed, while 52.8% of the organizations with 1,001–2,000 employees are rated at the highest level (5: Optimizing).
Turner & Jain (2002) argue that although it is obvious there are large differences between CMMI and agile software development, both approaches have much in common. They believe neither way is the 'right' way to develop software, but that there are phases in a project where one of the two is better suited. They suggest one should combine the different fragments of the methods into a new hybrid method. Sutherland et al. (2007) assert that a combination of Scrum and CMMI brings more adaptability and predictability than either one alone. David J. Anderson (2005) gives hints on how to interpret CMMI in an agile manner.
CMMI Roadmaps, which are a goal-driven approach to selecting and deploying relevant process areas from the CMMI-DEV model, can provide guidance and focus for effective CMMI adoption. There are several CMMI roadmaps for the continuous representation, each with a specific set of improvement goals. Examples are the CMMI Project Roadmap, CMMI Product and Product Integration Roadmaps and the CMMI Process and Measurements Roadmaps. These roadmaps combine the strengths of both the staged and the continuous representations.
The combination of the project management technique earned value management (EVM) with CMMI has been described. To conclude with a similar use of CMMI, Extreme Programming (XP), a software engineering method, has been evaluated with CMM/CMMI (Nawrocki et al., 2002). For example, the XP requirements management approach, which relies on oral communication, was evaluated as not compliant with CMMI.
CMMI can be appraised using two different approaches: staged and continuous. The staged approach yields appraisal results as one of five maturity levels. The continuous approach yields one of four capability levels. The differences in these approaches are felt only in the appraisal; the best practices are equivalent resulting in equivalent process improvement results.
See also
Capability Immaturity Model
Capability Maturity Model
Enterprise Architecture Assessment Framework
LeanCMMI
People Capability Maturity Model
Software Engineering Process Group
References
External links
Maturity models
Software development process
Standards
Systems engineering
Carnegie Mellon University software | Capability Maturity Model Integration | Engineering | 2,889 |
1,930,923 | https://en.wikipedia.org/wiki/Arsenide | In chemistry, an arsenide is a compound of arsenic with a less electronegative element or elements. Many metals form binary compounds containing arsenic, and these are called arsenides. They exist with many stoichiometries, and in this respect arsenides are similar to phosphides.
Alkali metal and alkaline earth arsenides
The group 1 alkali metals and the group 2 alkaline earth metals form arsenides with isolated arsenic atoms. For example, heating arsenic powder with excess sodium gives sodium arsenide (Na3As). The structure of Na3As is complex, with unusually short Na–Na distances of 328–330 pm, shorter than in sodium metal. This short distance indicates complex bonding in these simple phases; i.e., they are not simply salts of the As3− anion. The compound LiAs has a metallic lustre and electrical conductivity, indicating some metallic bonding. These compounds are mainly of academic interest. For example, "sodium arsenide" is a structural motif adopted by many compounds with the A3B stoichiometry.
Indicative of their salt-like properties, hydrolysis of alkali metal arsenides gives arsine:
Na3As + 3 H2O → AsH3 + 3 NaOH
III–V compounds
Many arsenides of the group 13 elements (group III) are valuable semiconductors. Gallium arsenide (GaAs) features isolated arsenic centers with a zincblende structure (the wurtzite structure can also form in nanostructures) and predominantly covalent bonding – it is a III–V semiconductor.
II–V compounds
Arsenides of the group 12 elements (group II) are also noteworthy. Cadmium arsenide (Cd3As2) was shown to be a three-dimensional (3D) topological Dirac semimetal analogous to graphene. Cd3As2, Zn3As2 and other compounds of the Zn-Cd-P-As quaternary system have very similar crystalline structures, which can be considered distorted mixtures of the zincblende and antifluorite crystalline structures.
Polyarsenides
Transition metal arsenides
Arsenide anions are known to catenate, that is, form chains, rings, and cages. The mineral skutterudite (CoAs3) features rings that are usually described as [As4]4−. Assigning formal oxidation numbers is difficult because these materials are highly covalent and often are best described with band theory. Sperrylite (PtAs2) is usually described as Pt4+([As2]4−). The arsenides of the transition metals are mainly of interest because they contaminate sulfidic ores of commercial interest. The extraction of the metals – nickel, iron, cobalt, copper – entails chemical processes such as smelting that pose environmental risks. In the mineral, arsenic is immobile and poses no environmental risk. Released from the mineral, arsenic is poisonous and mobile.
Zintl phases
Partial reduction of arsenic with alkali metals (and related electropositive elements) affords polyarsenic compounds, which are members of the Zintl phases.
See also
See :Category:Arsenides for a list.
References
Anions
Arsenic(−III) compounds | Arsenide | Physics,Chemistry | 678 |
306,543 | https://en.wikipedia.org/wiki/Structural%20analysis | Structural analysis is a branch of solid mechanics which uses simplified models for solids like bars, beams and shells for engineering decision making. Its main objective is to determine the effect of loads on physical structures and their components. In contrast to theory of elasticity, the models used in structural analysis are often differential equations in one spatial variable. Structures subject to this type of analysis include all that must withstand loads, such as buildings, bridges, aircraft and ships. Structural analysis uses ideas from applied mechanics, materials science and applied mathematics to compute a structure's deformations, internal forces, stresses, support reactions, velocity, accelerations, and stability. The results of the analysis are used to verify a structure's fitness for use, often precluding physical tests. Structural analysis is thus a key part of the engineering design of structures.
Structures and loads
In the context of structural analysis, a structure refers to a body or system of connected parts used to support a load. Important examples related to civil engineering include buildings, bridges, and towers; in other branches of engineering, ship and aircraft frames, tanks, pressure vessels, mechanical systems, and electrical supporting structures are important. To design a structure, an engineer must account for its safety, aesthetics, and serviceability while considering economic and environmental constraints. Other branches of engineering work on a wide variety of non-building structures.
Classification of structures
A structural system is the combination of structural elements and their materials. It is important for a structural engineer to be able to classify a structure by either its form or its function, by recognizing the various elements composing that structure.
The structural elements that guide the systemic forces through the materials include not only the connecting rod, truss, beam, and column, but also the cable, arch, cavity or channel, and even the angle, surface structure, and frame.
Loads
Once the dimensional requirements for a structure have been defined, it becomes necessary to determine the loads the structure must support. Structural design therefore begins with specifying the loads that act on the structure. The design loading for a structure is often specified in building codes. There are two types of codes: general building codes and design codes; engineers must satisfy all of the code's requirements in order for the structure to remain reliable.
There are two types of loads that structural engineers must account for in design. The first type, dead loads, consists of the weights of the various structural members and of any objects permanently attached to the structure: for example, columns, beams, girders, the floor slab, roofing, walls, windows, plumbing, electrical fixtures, and other miscellaneous attachments. The second type, live loads, varies in magnitude and location. There are many different kinds of live loads: building loads, highway bridge loads, railroad bridge loads, impact loads, wind loads, snow loads, earthquake loads, and other natural loads.
Analytical methods
To perform an accurate analysis a structural engineer must determine information such as structural loads, geometry, support conditions, and material properties. The results of such an analysis typically include support reactions, stresses and displacements. This information is then compared to criteria that indicate the conditions of failure. Advanced structural analysis may examine dynamic response, stability and non-linear behavior.
There are three approaches to the analysis: the mechanics of materials approach (also known as strength of materials), the elasticity theory approach (which is actually a special case of the more general field of continuum mechanics), and the finite element approach. The first two make use of analytical formulations which apply mostly simple linear elastic models, leading to closed-form solutions, and can often be solved by hand. The finite element approach is actually a numerical method for solving differential equations generated by theories of mechanics such as elasticity theory and strength of materials. However, the finite-element method depends heavily on the processing power of computers and is more applicable to structures of arbitrary size and complexity.
Regardless of approach, the formulation is based on the same three fundamental relations: equilibrium, constitutive, and compatibility. The solutions are approximate when any of these relations are only approximately satisfied, or only an approximation of reality.
Limitations
Each method has noteworthy limitations. The method of mechanics of materials is limited to very simple structural elements under relatively simple loading conditions. The structural elements and loading conditions allowed, however, are sufficient to solve many useful engineering problems. The theory of elasticity allows the solution of structural elements of general geometry under general loading conditions, in principle. Analytical solution, however, is limited to relatively simple cases. The solution of elasticity problems also requires the solution of a system of partial differential equations, which is considerably more mathematically demanding than the solution of mechanics of materials problems, which require at most the solution of an ordinary differential equation. The finite element method is perhaps the most restrictive and most useful at the same time. This method itself relies upon other structural theories (such as the other two discussed here) for equations to solve. It does, however, make it generally possible to solve these equations, even with highly complex geometry and loading conditions, with the restriction that there is always some numerical error. Effective and reliable use of this method requires a solid understanding of its limitations.
Strength of materials methods (classical methods)
The simplest of the three methods here discussed, the mechanics of materials method is available for simple structural members subject to specific loadings such as axially loaded bars, prismatic beams in a state of pure bending, and circular shafts subject to torsion. The solutions can under certain conditions be superimposed using the superposition principle to analyze a member undergoing combined loading. Solutions for special cases exist for common structures such as thin-walled pressure vessels.
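The closed-form results for the cases named above (axially loaded bars, prismatic beams in pure bending, circular shafts in torsion) can be sketched as simple formulas. The numerical values below are illustrative assumptions, not figures from the text:

```python
import math

def axial_stress(force_N: float, area_m2: float) -> float:
    """sigma = F / A for an axially loaded bar."""
    return force_N / area_m2

def bending_stress(moment_Nm: float, c_m: float, inertia_m4: float) -> float:
    """sigma = M c / I for a prismatic beam in pure bending,
    where c is the distance from the neutral axis to the extreme fiber."""
    return moment_Nm * c_m / inertia_m4

def torsion_shear(torque_Nm: float, radius_m: float) -> float:
    """tau = T r / J for a solid circular shaft, with J = pi r^4 / 2."""
    J = math.pi * radius_m ** 4 / 2
    return torque_Nm * radius_m / J

# A 10 kN force on a 10 cm^2 cross-section gives 10 MPa of axial stress.
print(axial_stress(10_000, 0.001))  # 10000000.0 Pa
```

Each formula assumes the linear elastic, small-deformation conditions listed in the following paragraphs; outside that regime the closed forms no longer apply.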
For the analysis of entire systems, this approach can be used in conjunction with statics, giving rise to the method of sections and method of joints for truss analysis, moment distribution method for small rigid frames, and portal frame and cantilever method for large rigid frames. Except for moment distribution, which came into use in the 1930s, these methods were developed in their current forms in the second half of the nineteenth century. They are still used for small structures and for preliminary design of large structures.
The solutions are based on linear isotropic infinitesimal elasticity and Euler–Bernoulli beam theory. In other words, they contain the assumptions (among others) that the materials in question are elastic, that stress is related linearly to strain, that the material (but not the structure) behaves identically regardless of direction of the applied load, that all deformations are small, and that beams are long relative to their depth. As with any simplifying assumption in engineering, the more the model strays from reality, the less useful (and more dangerous) the result.
Example
There are two commonly used methods to find the truss element forces, namely the method of joints and the method of sections. Below is an example that is solved using both of these methods. The first diagram below presents the problem for which the truss element forces have to be found. The second diagram is the loading diagram and contains the reaction forces from the joints.
Since there is a pin joint at A, it will have two reaction forces: one in the x direction and one in the y direction. At point B there is a roller joint and hence only one reaction force, in the y direction. Assume these forces act in their respective positive directions; if a computed value turns out negative, the force acts opposite to the assumed direction.
Since the system is in static equilibrium, the sum of forces in any direction is zero and the sum of moments about any point is zero.
Therefore, the magnitude and direction of the reaction forces can be calculated.
Method of joints
This type of method uses the force balance in the x and y directions at each of the joints in the truss structure.
At A,
At D,
At C,
Although the forces in each of the truss elements are found, it is a good practice to verify the results by completing the remaining force balances.
At B,
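Since the joint-balance equations themselves are not reproduced above, the following minimal sketch shows what a method-of-joints calculation looks like. The joint is hypothetical (a two-member joint carrying a 10 kN load), not the truss in the figure:

```python
import math

# Method-of-joints sketch for a hypothetical pin joint: member 1 rises at
# 45 degrees, member 2 is horizontal, and the joint carries a 10 kN downward
# load. Taking tension as positive, joint equilibrium gives two equations:
#   sum Fx = 0:  F1*cos(45) + F2 = 0
#   sum Fy = 0:  F1*sin(45) - P = 0
P = 10.0                      # applied load, kN (made-up value)
theta = math.radians(45)
F1 = P / math.sin(theta)      # about 14.14 kN (tension)
F2 = -F1 * math.cos(theta)    # about -10.0 kN (compression)
print(round(F1, 2), round(F2, 2))
```

The same two-equation pattern is repeated at each joint until every member force is known.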
Method of sections
This method can be used when the truss element forces of only a few members are to be found. It works by introducing a single straight line cutting through the members whose forces have to be calculated. However, the cutting line can pass through a maximum of only three members of the truss structure, because the method uses the force balances in the x and y directions and the moment balance, which give at most three equations for at most three unknown truss element forces through which the cut is made. As an example, find the forces FAB, FBD and FCD in the truss above.
Method 1: Ignore the right side
Method 2: Ignore the left side
The truss element forces in the remaining members can be found by using the above method with a section passing through the remaining members.
Elasticity methods
Elasticity methods are available generally for an elastic solid of any shape. Individual members such as beams, columns, shafts, plates and shells may be modeled. The solutions are derived from the equations of linear elasticity. The equations of elasticity are a system of 15 partial differential equations. Due to the nature of the mathematics involved, analytical solutions may only be produced for relatively simple geometries. For complex geometries, a numerical solution method such as the finite element method is necessary.
Methods using numerical approximation
It is common practice to use approximate solutions of differential equations as the basis for structural analysis. This is usually done using numerical approximation techniques. The most commonly used numerical approximation in structural analysis is the Finite Element Method.
The finite element method approximates a structure as an assembly of elements or components with various forms of connection between them, each element of which has an associated stiffness. Thus, a continuous system such as a plate or shell is modeled as a discrete system with a finite number of elements interconnected at a finite number of nodes, and the overall stiffness is the result of the addition of the stiffnesses of the various elements. The behaviour of individual elements is characterized by the element's stiffness (or flexibility) relation. The assemblage of the various stiffnesses into a master stiffness matrix that represents the entire structure leads to the system's stiffness or flexibility relation. To establish the stiffness (or flexibility) of a particular element, we can use the mechanics of materials approach for simple one-dimensional bar elements, and the elasticity approach for more complex two- and three-dimensional elements. The analytical and computational development are best effected throughout by means of matrix algebra, solving partial differential equations.
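The assembly step described above can be sketched for the simplest case, one-dimensional bar elements. The stiffness values and node layout below are illustrative assumptions, not a production FEM code; each element of stiffness k contributes the 2×2 block k·[[1,−1],[−1,1]] at its two node indices:

```python
# Assemble a master stiffness matrix from 1D bar elements.
# elements: list of (node_i, node_j, k) with k the element stiffness.
def assemble(n_nodes, elements):
    K = [[0.0] * n_nodes for _ in range(n_nodes)]
    for i, j, k in elements:
        K[i][i] += k
        K[j][j] += k
        K[i][j] -= k
        K[j][i] -= k
    return K

# Two bars in series (nodes 0-1-2), each with stiffness k = 100:
K = assemble(3, [(0, 1, 100.0), (1, 2, 100.0)])
print(K[1][1])  # shared node receives contributions from both elements
```

The diagonal entry at the shared node is the sum of the two element stiffnesses, which is exactly the "addition of the stiffnesses" the text refers to.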
Early applications of matrix methods were applied to articulated frameworks with truss, beam and column elements; later and more advanced matrix methods, referred to as "finite element analysis", model an entire structure with one-, two-, and three-dimensional elements and can be used for articulated systems together with continuous systems such as a pressure vessel, plates, shells, and three-dimensional solids. Commercial computer software for structural analysis typically uses matrix finite-element analysis, which can be further classified into two main approaches: the displacement or stiffness method and the force or flexibility method. The stiffness method is the most popular by far thanks to its ease of implementation as well as of formulation for advanced applications. The finite-element technology is now sophisticated enough to handle just about any system as long as sufficient computing power is available. Its applicability includes, but is not limited to, linear and non-linear analysis, solid and fluid interactions, materials that are isotropic, orthotropic, or anisotropic, and external effects that are static, dynamic, and environmental factors. This, however, does not imply that the computed solution will automatically be reliable because much depends on the model and the reliability of the data input.
Timeline
1452–1519: Leonardo da Vinci made many contributions
1638: Galileo Galilei published the book "Two New Sciences" in which he examined the failure of simple structures
1660: Hooke's law by Robert Hooke
1687: Isaac Newton published "Philosophiae Naturalis Principia Mathematica" which contains the Newton's laws of motion
1750: Euler–Bernoulli beam equation
1700–1782: Daniel Bernoulli introduced the principle of virtual work
1707–1783: Leonhard Euler developed the theory of buckling of columns
1826: Claude-Louis Navier published a treatise on the elastic behaviors of structures
1873: Carlo Alberto Castigliano presented his dissertation "Intorno ai sistemi elastici", which contains his theorem for computing displacement as partial derivative of the strain energy. This theorem includes the method of 'least work' as a special case
1878–1972: Stephen Timoshenko, the father of modern applied mechanics, whose work includes the Timoshenko–Ehrenfest beam theory
1936: Hardy Cross published the moment distribution method, which was later recognized as a form of the relaxation method applicable to the problem of flow in pipe networks
1941: Alexander Hrennikoff submitted his D.Sc. thesis at MIT on the discretization of plane elasticity problems using a lattice framework
1942: R. Courant divided a domain into finite subregions
1956: M. J. Turner, R. W. Clough, H. C. Martin, and L. J. Topp's paper on the "Stiffness and Deflection of Complex Structures" introduces the name "finite-element method" and is widely recognized as the first comprehensive treatment of the method as it is known today
See also
Geometrically and materially nonlinear analysis with imperfections included
Limit state design
Structural engineering theory
Structural integrity and failure
Stress–strain analysis
von Mises yield criterion
Probabilistic Assessment of Structures
Structural testing
References
Nitride

In chemistry, a nitride is a chemical compound of nitrogen. Nitrides can be inorganic or organic, ionic or covalent. The nitride anion, the N3− ion, is very elusive, but nitride compounds are numerous, although rarely naturally occurring. Some nitrides have found applications, such as wear-resistant coatings (e.g., titanium nitride, TiN), hard ceramic materials (e.g., silicon nitride, Si3N4), and semiconductors (e.g., gallium nitride, GaN). The development of GaN-based light-emitting diodes was recognized by the 2014 Nobel Prize in Physics. Metal nitrido complexes are also common.
Synthesis of inorganic metal nitrides is challenging because nitrogen gas (N2) is not very reactive at low temperatures, but it becomes more reactive at higher temperatures. Therefore, a balance must be achieved between the low reactivity of nitrogen gas at low temperatures and the entropy driven formation of N2 at high temperatures. However, synthetic methods for nitrides are growing more sophisticated and the materials are of increasing technological relevance.
Uses of nitrides
Like carbides, nitrides are often refractory materials owing to their high lattice energy, which reflects the strong bonding of "N3−" to metal cation(s). Thus, cubic boron nitride, titanium nitride, and silicon nitride are used as cutting materials and hard coatings. Hexagonal boron nitride, which adopts a layered structure, is a useful high-temperature lubricant akin to molybdenum disulfide. Nitride compounds often have large band gaps, thus nitrides are usually insulators or wide-bandgap semiconductors; examples include boron nitride and silicon nitride. The wide-band gap material gallium nitride is prized for emitting blue light in LEDs. Like some oxides, nitrides can absorb hydrogen and have been discussed in the context of hydrogen storage, e.g. lithium nitride.
Examples
Classification of such a varied group of compounds is somewhat arbitrary. Compounds where nitrogen is not assigned −3 oxidation state are not included, such as nitrogen trichloride where the oxidation state is +3; nor are ammonia and its many organic derivatives.
Nitrides of the s-block elements
Only one alkali metal nitride is stable, the purple-reddish lithium nitride (Li3N), which forms when lithium burns in an atmosphere of N2. Sodium nitride and potassium nitride have been generated, but remain laboratory curiosities. The nitrides of the alkaline earth metals, which have the formula M3N2, are however numerous. Examples include beryllium nitride (Be3N2), magnesium nitride (Mg3N2), calcium nitride (Ca3N2), and strontium nitride (Sr3N2). The nitrides of electropositive metals (including Li, Zn, and the alkaline earth metals) readily hydrolyze upon contact with water, including the moisture in the air; for lithium nitride:

Li3N + 3 H2O → 3 LiOH + NH3
Nitrides of the p-block elements
Boron nitride exists as several forms (polymorphs). Nitrides of silicon and phosphorus are also known, but only the former is commercially important. The nitrides of aluminium, gallium, and indium adopt the hexagonal wurtzite structure in which each atom occupies tetrahedral sites. For example, in aluminium nitride, each aluminium atom has four neighboring nitrogen atoms at the corners of a tetrahedron and similarly each nitrogen atom has four neighboring aluminium atoms at the corners of a tetrahedron. This structure is like hexagonal diamond (lonsdaleite) where every carbon atom occupies a tetrahedral site (however wurtzite differs from sphalerite and diamond in the relative orientation of tetrahedra). Thallium(I) nitride () is known, but thallium(III) nitride (TlN) is not.
Transition metal nitrides
Most metal-rich transition metal nitrides adopt a relatively ordered face-centered cubic or hexagonal close-packed crystal structure, with octahedral coordination. Sometimes these materials are called "interstitial nitrides". They are essential for industrial metallurgy, because they are typically much harder and less ductile than their parent metal, and resist air-oxidation. For the group 3 metals, ScN and YN are both known. Group 4, 5, and 6 transition metals (the titanium, vanadium and chromium groups) all form chemically stable, refractory nitrides with high melting point. Thin films of titanium nitride, zirconium nitride, and tantalum nitride protect many industrial surfaces.
Nitrides of the group 7 and 8 transition metals tend to be nitrogen-poor, and decompose readily at elevated temperatures. For example, iron nitride decomposes at 200 °C. Platinum nitride and osmium nitride may contain N2 units, and as such should not be called nitrides.
Nitrides of the heavier members of groups 11 and 12 are less stable than copper nitride (Cu3N) and zinc nitride (Zn3N2): dry silver nitride (Ag3N) is a contact explosive which may detonate from the slightest touch, even a falling water droplet.
Nitrides of the lanthanides and actinides
Nitride-containing species of the lanthanides and actinides are of scientific interest because they can provide a useful handle for determining the covalency of bonding. Nuclear magnetic resonance (NMR) spectroscopy along with quantum chemical analysis has often been used to determine the degree to which metal nitride bonds are ionic or covalent in character. One example, a uranium nitride, has the highest known nitrogen-15 chemical shift.
Molecular nitrides
Many metals form molecular nitrido complexes, as discussed in the specialized article. The main group elements also form some molecular nitrides. Cyanogen ((CN)2) and tetrasulfur tetranitride (S4N4) are rare examples of molecular binary nitrides (containing only one element aside from nitrogen). They dissolve in nonpolar solvents, and both undergo polymerization. S4N4 is also unstable with respect to the elements, though less so than its isostructural analogue. Heating S4N4 gives a polymer, and a variety of molecular sulfur nitride anions and cations are also known.
Related to but distinct from nitride are the diatomic pernitride anion (N22−) and the triatomic azide anion (N3−).
References
Anions
Nitrides
Jitterlyzer

The FS5000 Jitterlyzer performs physical layer serial bus jitter evaluation. It can inject controlled jitter and measure the characteristics of incoming jitter. When teamed with a logic analyzer or protocol analyzer, it can correlate these measurements with protocol analysis. Physical-layer tests can be performed while the system under test is processing live bus traffic.
Jitter measurements
The FS5000 measures jitter in two categories:
Timing
There are four different timing measurements:
Bathtub Plot - A bathtub curve is obtained by drawing a horizontal line across the waveform under test and computing the probability distribution function for signal transitions (zero crossings) from a high voltage to a low voltage or from a low voltage to a high voltage. The bathtub curve provides considerable insight into the BER performance of a link under test: apart from estimating BER, it also indicates how much margin is in the system. When coupled with protocol testing on the Jitterlyzer, a high margin enables engineers to quickly rule out the physical layer as a potential cause of certain protocol errors.
Statistics - The Jitterlyzer's measurement routines uncover total jitter and BER directly, without requiring mathematical extrapolation. A routine for separating random jitter (RJ) from deterministic jitter (DJ) is included for completeness, and RJ and DJ figures are reported for real-life traffic.
Bus View - Shows channel to channel skew of four channels simultaneously.
Jitter Histogram - This routine is performed on real-life traffic. It selects the zero-crossing voltage in the incoming data. It then counts the number of transitions of the high-speed serial signal as a function of phase positions. A histogram of number of hits versus delay is then plotted.
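The jitter-histogram idea described above can be illustrated with a short sketch. The data below is synthetic (Gaussian timing noise on ideal edge positions), standing in for measured zero crossings rather than real FS5000 output, and the unit interval and noise figures are made-up values:

```python
import random

# Synthetic zero-crossing times: ideal edges every unit interval (UI),
# perturbed by Gaussian timing jitter.
random.seed(1)
ui = 1000.0                                   # unit interval, ps (assumed)
crossings = [k * ui + random.gauss(0, 15) for k in range(10000)]

# Bin each crossing's deviation from its ideal position: 10 ps bins
# spanning -205 ps .. +205 ps (41 bins).
bins = [0] * 41
for k, t in enumerate(crossings):
    offset = t - k * ui                       # deviation from ideal edge, ps
    idx = int((offset + 205) // 10)
    if 0 <= idx < len(bins):
        bins[idx] += 1

peak = bins.index(max(bins))
print(peak)  # the histogram peaks near zero offset (the center bin)
```

Plotting `bins` against the bin centers gives the hits-versus-delay histogram the text describes.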
Eye
There are three different eye measurements
Eye diagram - This routine is performed on real-life traffic. It provides much more information than just a vertical or horizontal eye opening; it gives a first indication of parameters such as dispersion in the signal path, rise-time issues, etc. An indication of noise is given throughout the whole eye, as opposed to, for example, only at the center.
Oscilloscope - This measurement allows the user to see an oscilloscope trace in persistent mode, allowing direct inspection of the eye. Color represents the frequency with which the measured trace passes through each point, with red representing the most frequent, and black representing the least. Note that frequency is normalized to a maximum value of 1.
Voltage Histogram - This measurement is similar to the jitter histogram except that it is done in the vertical domain. It shows the amount of voltage noise that exists on the high-speed serial signal and is similar in principle to the vertical eye height.
Jitter generation
Data pattern
LIVE traffic
Preprogrammed Compliance patterns
Jitter profile
Frequency range 19.07 kHz - 19.99 MHz
Amplitude range 40 ps – 1200 ps
Differential swing
400–1600 mV
External links
Jitter analysis tool from FuturePlus Systems.
References
Digital electronics
Electronic test equipment
Suchi Saria

Suchi Saria is an Associate Professor of Machine Learning and Healthcare at Johns Hopkins University, where she uses big data to improve patient outcomes. She is a World Economic Forum Young Global Leader. From 2022 to 2023, she was an investment partner at AIX Ventures, a venture capital fund that invests in artificial intelligence startups.
Early life and education
Saria is from Darjeeling. She earned her bachelor's degree at Mount Holyoke College. She was awarded a full scholarship from Microsoft. In 2004 she joined Stanford University as a Rambus Corporation Fellow. She earned her Master of Science and Doctor of Philosophy degrees at Stanford University, supervised by Daphne Koller and advised by Anna Asher Penn and Sebastian Thrun. At Stanford University, Saria developed a statistical model that could predict premature baby outcomes with 90% accuracy. The model used data from monitors, birth weight and length of time spent in the womb to predict whether a preemie would develop an illness. She worked at the startup Aster Data Systems.
Career and research
Saria believes that big data can be used to personalise healthcare. She is considered an expert in computational statistics and their applications to the real world. She uses Bayesian and probabilistic modelling. In 2014 Saria was funded by a $1.5 million Gordon and Betty Moore Foundation project that looked to make intensive care units safer. The project used data collected at patients' bedsides along with noninvasive 3D sensors that monitor care in patient's hospital rooms. The sensors collect information on steps that might have been missed by doctors; like washing hands.
Saria uses big data to manage chronic diseases. She is part of a National Science Foundation (NSF) award that looks at scleroderma. She uses machine learning to analyse medical records and identify similar patterns of disease progression. The system works out which treatments have been used effectively for various symptoms, to aid doctors in choosing treatment plans for specific patients. She has developed another algorithm that can be used to predict and treat septic shock. The algorithm used data from 16,000 patient health records and generates a targeted real-time warning (TREWS) score. She collaborated with David N. Hager to use the algorithm in clinics, and it was correct 86% of the time. Saria modified the algorithm to avoid missing high-risk patients, for example those who have previously suffered septic shock and sought successful treatment. She was described by XRDS magazine as a pioneer in transforming healthcare. In 2016 Saria spoke about using machine learning for medicine at TEDxBoston. The talk has been viewed over 100,170 times.
Awards and honours
Her awards and honors include:
2018 Sloan Research Fellowship
2018 World Economic Forum Young Global Leader
2017 MIT Technology Review 35 Innovators Under 35
2017 Defense Advanced Projects Research Agency (DARPA) Young Faculty Fellowship
2016 Brilliant 10 award by Popular Science
2015 IEEE Intelligent Systems Young Star in Artificial Intelligence
2015 Johns Hopkins Discovery Award
2014 National Science Foundation (NSF) Smart and Connected Health Research Grant
2014 Google Research Award
2014 Society of Critical Care Medicine Annual Scientific Award
2013 Gordon and Betty Moore Foundation Research Award
References
American academics of Indian descent
1980s births
Living people
Mount Holyoke College alumni
Stanford University alumni
People from Darjeeling
Johns Hopkins University faculty
Women data scientists
Data scientists
Bioinformaticians
Sunway TaihuLight

The Sunway TaihuLight (Shénwēi·tàihú zhī guāng) is a Chinese supercomputer which is ranked 11th in the TOP500 list, with a LINPACK benchmark rating of 93 petaflops. The name is translated as "divine power, the light of Taihu Lake". This is nearly three times as fast as the previous Tianhe-2, which ran at 34 petaflops. It is ranked as the 16th most energy-efficient supercomputer in the Green500, with an efficiency of 6.1 GFlops/watt. It was designed by the National Research Center of Parallel Computer Engineering & Technology (NRCPC) and is located at the National Supercomputing Center in Wuxi, Jiangsu province, China.
The Sunway TaihuLight was the world's fastest supercomputer for two years, from June 2016 to June 2018, according to the TOP500 lists. The record was surpassed in June 2018 by IBM's Summit.
Architecture
The Sunway TaihuLight utilizes domestically developed semiconductors, including a total of 40,960 Chinese-designed SW26010 manycore 64-bit RISC processors based on the Sunway architecture. Each processor chip contains 256 processing cores, and an additional four auxiliary cores for system management (also RISC cores, just more fully featured) for a total of 10,649,600 CPU cores across the entire system.
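The core count quoted above follows directly from the chip and core figures in the text, as a quick arithmetic check shows:

```python
# Check the total core count: 40,960 SW26010 chips, each with 256 processing
# cores plus 4 auxiliary management cores, i.e. 260 cores per chip.
chips = 40_960
cores_per_chip = 256 + 4
total = chips * cores_per_chip
print(total)  # 10,649,600 — matches the figure in the text
```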
The processing cores feature 64 KB of scratchpad memory for data (and 16 KB for instructions) and communicate via a network on a chip, instead of having a traditional cache hierarchy.
Software
The system runs on its own operating system, Sunway RaiseOS 2.0.5, which is based on Linux. The system has its own customized implementation of OpenACC 2.0 to aid the parallelization of code.
Future development
China's first exascale supercomputer was scheduled to enter service by 2020, according to the head of the school of computing at the National University of Defense Technology (NUDT). According to the national plan for the next generation of high performance computers, the country would develop an exascale computer during the 13th Five-Year-Plan period (2016–2020). The government of Tianjin Binhai New Area, NUDT and the National Supercomputing Center of Tianjin are working on the project. The investment is likely to hit 3 billion yuan ($470.6 million).
See also
Sunway BlueLight
Manycore processor
Massively parallel processor array
Supercomputing in China
Summit (supercomputer)
References
External links
Top500 list entry for the Sunway TaihuLight
CCTV video news story on Sunway TaihuLight
Hardware of Sunway TaihuLight
- BBC 5-minute video
2016 in technology
Petascale computers
Supercomputers
Supercomputing in China
64-bit computers
Theta Ursae Majoris

Theta Ursae Majoris (Theta UMa, θ Ursae Majoris, θ UMa) is a suspected spectroscopic binary star system in the northern circumpolar constellation of Ursa Major. It has an apparent visual magnitude of 3.17, placing it among the brighter members of this constellation. The distance to this star has been measured directly using the parallax method.
In 1976, this was reported as a spectroscopic binary system by Helmut A. Abt and Saul G. Levy, giving it an orbital period of 371 days. However, this was brought into question by Christopher L. Morbey and Roger F. Griffin in 1987, who suggested that the data could be explained by random chance. Further observations in 2009 with the Bok Telescope in Arizona did show changes of 180 m/s in radial velocity, although there was not sufficient evidence to support a Keplerian orbit. There is a 14th-magnitude common proper motion companion to Theta Ursae Majoris at an angular separation of 4.1 arcseconds, so this may potentially be a triple star system.
The primary component of this putative system has a published stellar classification of F6 IV, indicating it is a subgiant star that is evolving away from the main sequence. In 2009, Helmut A. Abt listed it with a stellar classification of F7 V, suggesting that it is still on the main sequence. It is larger than the Sun with 141% of the Sun's mass and 241% of the Sun's radius. Consequently, it is shining brighter and evolving more rapidly than the Sun, with a luminosity nearly eight times the Sun's at an age of 2.2 billion years. This energy is being radiated from the star's outer atmosphere at an effective temperature of 6,256 K. At this heat, the star glows with the yellow-white hue of an F-type star.
The McDonald Observatory team has set limits to the hypothetical presence of one or more planets around the primary with masses between 0.24 and 4.6 Jupiter masses and average separations spanning between 0.05 and 5.2 AU.
Naming and etymology
With τ, h, υ, φ, e, and f, it composes the Arabic asterism Sarīr Banāt al-Na'sh, the Throne of the Daughters of Na'sh, and Al-Haud, the Pond. According to the catalogue of stars in the Technical Memorandum 33-507 - A Reduced Star Catalog Containing 537 Named Stars, Al-Haud was the title for seven stars: f as Alhaud I, τ as Alhaud II, e as Alhaud III, h as Alhaud IV, this star (θ) as Alhaud V, υ as Alhaud VI and φ as Alhaud VII.
In Chinese, (), meaning Administrative Center, refers to an asterism consisting of θ Ursae Majoris, φ Ursae Majoris, υ Ursae Majoris, 15 Ursae Majoris and 18 Ursae Majoris. Consequently, the Chinese name for θ Ursae Majoris itself derives from this asterism.
References
Ursae Majoris, Theta
Ursa Major
Binary stars
F-type subgiants
Alhaud V
Ursae Majoris, 25
046853
3775
082328
Durchmusterung objects
Atmospheric methane

Atmospheric methane is the methane present in Earth's atmosphere. The concentration of atmospheric methane is increasing due to methane emissions, and is causing climate change. Methane is one of the most potent greenhouse gases. Methane's radiative forcing (RF) of climate is direct, and it is the second largest contributor to human-caused climate forcing in the historical period. Methane is a major source of water vapour in the stratosphere through oxidation, and water vapour adds about 15% to methane's radiative forcing effect. The global warming potential (GWP) for methane is about 84 in terms of its impact over a 20-year timeframe, and 28 in terms of its impact over a 100-year timeframe.
Since the beginning of the Industrial Revolution (around 1750), the methane concentration in the atmosphere has increased by about 160%, and human activities almost entirely caused this increase. Since 1750 methane has contributed 3% of greenhouse gas (GHG) emissions in terms of mass but is responsible for approximately 23% of radiative or climate forcing. By 2019, global methane concentrations had risen from 722 parts per billion (ppb) in pre-industrial times to 1866 ppb. This is an increase by a factor of 2.6 and the highest value in at least 800,000 years.
Methane increases the amount of ozone (O3) in the troposphere ( to from the Earth's surface) and also in the stratosphere (from the troposphere to above the Earth's surface). Both water vapour and ozone are GHGs, which in turn add to climate warming.
Role in climate change
Methane (CH4) in the Earth's atmosphere is a powerful greenhouse gas with a global warming potential (GWP) 84 times greater than CO2 over a 20-year time frame. Methane is not as persistent as CO2, and tails off to about 28 times greater than CO2 over a 100-year time frame.
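The GWP figures above can be used to convert a methane emission into its CO2-equivalent warming impact over a chosen time horizon. The sketch below uses the GWP values and concentration figures quoted in the text; the one-tonne emission is a made-up input for illustration:

```python
# GWP of methane over the two horizons given in the text.
GWP_CH4 = {20: 84, 100: 28}

def co2_equivalent(tonnes_ch4, horizon_years):
    """Mass of CO2 (tonnes) with the same warming impact as tonnes_ch4."""
    return tonnes_ch4 * GWP_CH4[horizon_years]

print(co2_equivalent(1.0, 20))    # 84.0 t CO2-eq over 20 years
print(co2_equivalent(1.0, 100))   # 28.0 t CO2-eq over 100 years

# Concentration growth quoted in the article: 722 ppb -> 1866 ppb
print(round(1866 / 722, 1))       # a factor of ~2.6, i.e. roughly +160%
```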
Radiative or climate forcing is the scientific concept used to measure the human impact on the environment in watts per square meter (W/m2). It refers to the "difference between solar irradiance absorbed by the Earth and energy radiated back to space". The direct radiative greenhouse gas forcing effect of methane was estimated to be an increase of 0.5 W/m2 relative to the year 1750 (estimate in 2007).
In their 2021 "Global Methane Assessment" report, the UNEP and CCAC said that their "understanding of methane's effect on radiative forcing" improved with research by teams led by M. Etminan in 2016, and William Collins in 2018. This resulted in an "upward revision" since the 2014 IPCC Fifth Assessment Report (AR5). The "improved understanding" says that prior estimates of the "overall societal impact of methane emissions" were likely underestimated.
Etminan et al. published their new calculations for methane's radiative forcing (RF) in a 2016 Geophysical Research Letters journal article which incorporated the shortwave bands of CH4 in measuring forcing, not used in previous, simpler IPCC methods. Their new RF calculations which significantly revised those cited in earlier, successive IPCC reports for well mixed greenhouse gases (WMGHG) forcings by including the shortwave forcing component due to CH4, resulted in estimates that were approximately 20–25% higher. Collins et al. said that CH4 mitigation that reduces atmospheric methane by the end of the century, could "make a substantial difference to the feasibility of achieving the Paris climate targets", and would provide us with more "allowable carbon emissions to 2100".
In addition to the direct heating effect and the normal feedbacks, the methane breaks down to carbon dioxide and water. This water is often above the tropopause, where little water usually reaches. Ramanathan (1998) notes that both water and ice clouds, when formed at cold lower stratospheric temperatures, are extremely efficient in enhancing the atmospheric greenhouse effect. He also notes that there is a distinct possibility that large increases in methane in future may lead to a surface warming that increases nonlinearly with the methane concentration.
Mitigation efforts to reduce short-lived climate pollutants like methane and black carbon would help combat "near-term climate change" and would support Sustainable Development Goals.
Sources
Any process that results in the production of methane and its release into the atmosphere can be considered a "source". The known sources of methane are predominantly located near the Earth's surface. Two main processes are responsible for methane production: the anaerobic conversion of organic compounds into methane by microorganisms (methanogenesis), which is widespread in aquatic ecosystems, and digestion in ruminant animals.
Methane is also released in the Arctic for example from thawing permafrost.
Measurement techniques
Methane was typically measured using gas chromatography. Gas chromatography is a type of chromatography used for separating or analyzing chemical compounds. It is less expensive in general, compared to more advanced methods, but it is more time and labor-intensive.
Spectroscopic methods were the preferred method for atmospheric gas measurements due to its sensitivity and precision. Also, spectroscopic methods are the only way of remotely sensing the atmospheric gases. Infrared spectroscopy covers a large spectrum of techniques, one of which detects gases based on absorption spectroscopy. There are various methods for spectroscopic methods, including Differential optical absorption spectroscopy, Laser-induced fluorescence, and Fourier Transform Infrared.
In 2011, cavity ring-down spectroscopy was the most widely used IR absorption technique of detecting methane. It is a form of laser absorption spectroscopy which determines the mole fraction to the order of parts per trillion.
Global monitoring
CH4 has been measured directly in the environment since the 1970s. The Earth's atmospheric methane concentration has increased 160% since preindustrial levels in the mid-18th century.
Long-term atmospheric measurements of methane by NOAA show that the build-up of methane nearly tripled from pre-industrial times (1750). In 1991 and 1998 there was sudden growth in methane, representing a doubling of the growth rates of previous years. The June 15, 1991 eruption of Mount Pinatubo, measuring VEI-6, was the second-largest terrestrial eruption of the 20th century. In 2007 it was reported that unprecedented warm temperatures in 1998, the warmest year since surface records began, could have induced elevated methane emissions, along with an increase in wetland and rice field emissions and the amount of biomass burning.
Data from 2007 suggested methane concentrations were beginning to rise again. This was confirmed in 2010 when a study showed methane levels were on the rise for the 3 years 2007 to 2009. After a decade of near-zero growth in methane levels, "globally averaged atmospheric methane increased by [approximately] 7 nmol/mol per year during 2007 and 2008. During the first half of 2009, globally averaged atmospheric CH4 was [approximately] 7 nmol/mol greater than it was in 2008, suggesting that the increase will continue in 2009." From 2015 to 2019 sharp rises in levels of atmospheric methane have been recorded.
In 2010, methane levels in the Arctic were measured at 1850 nmol/mol, over twice as high as at any time in the last 400,000 years. According to the IPCC AR5, concentrations continued to increase after 2011. The increase accelerated after 2014, reaching 1,850 parts per billion (ppb) by 2017. The annual average for methane (CH4) was 1866 ppb in 2019, and scientists reported with "very high confidence" that concentrations of CH4 were higher than at any time in at least 800,000 years. The largest annual increase occurred in 2021, with current concentrations reaching a record 260% of the pre-industrial level, the overwhelming share of which is caused by human activity.
In 2013, IPCC scientists said with "very high confidence" that concentrations of atmospheric methane CH4 "exceeded the pre-industrial levels by about 150%", which represented "levels unprecedented in at least the last 800,000 years". The globally averaged concentration of methane in Earth's atmosphere increased by about 150% from 722 ± 25 ppb in 1750 to 1803.1 ± 0.6 ppb in 2011. As of 2016, methane contributed a radiative forcing of 0.62 W/m2 (± 14%), or about 20% of the total radiative forcing from all of the long-lived and globally mixed greenhouse gases. The atmospheric methane concentration has continued to increase since 2011, to an average global concentration of 1911.8 ± 0.6 ppb as of 2022. The May 2021 peak was 1891.6 ppb, while the April 2022 peak was 1909.4 ppb, a 0.9% increase. The Global Carbon Project consortium produces the Global Methane Budget. Working with over fifty international research institutions and 100 stations globally, it updates the methane budget every few years.
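The percentage figures quoted above can be reproduced from the stated concentrations; the following snippet is only an arithmetic check, using the values given in the text:

```python
# Arithmetic check of the percentage changes quoted above,
# using the concentrations stated in the text (ppb).
pre_industrial = 722.0   # 1750
year_2011 = 1803.1
peak_may_2021 = 1891.6
peak_apr_2022 = 1909.4

increase_since_1750 = (year_2011 - pre_industrial) / pre_industrial * 100
peak_to_peak = (peak_apr_2022 - peak_may_2021) / peak_may_2021 * 100

print(f"increase since 1750: {increase_since_1750:.0f}%")  # ~150%
print(f"May 2021 to April 2022: {peak_to_peak:.1f}%")      # ~0.9%
```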
In 2013, the balance between sources and sinks of methane was not yet fully understood. Scientists were unable to explain why the atmospheric concentration of methane had temporarily ceased to increase.
The focus on the role of methane in anthropogenic climate change has become more relevant since the mid-2010s.
Natural sinks or removal of atmospheric methane
The amount of methane in the atmosphere is the result of a balance between the production of methane at the Earth's surface (its source) and the destruction or removal of methane, mainly by an atmospheric chemical process (its sink).
Another major natural sink is through oxidation by methanotrophic or methane-consuming bacteria in Earth's soils.
These 2005 NASA computer model simulations (calculated based on data available at that time) illustrate how methane is destroyed as it rises.
As air rises in the tropics, methane is carried upwards through the troposphere (the lowest portion of Earth's atmosphere), into the lower stratosphere (the ozone layer), and then into the upper portion of the stratosphere.
This atmospheric chemical process is the most effective methane sink, as it removes 90% of atmospheric methane. This global destruction of atmospheric methane mainly occurs in the troposphere.
Methane molecules react with hydroxyl radicals (OH), the "major chemical scavenger in the troposphere" that "controls the atmospheric lifetime of most gases in the troposphere". The initial step of this oxidation is:

CH4 + OH → CH3 + H2O

Through this CH4 oxidation process, atmospheric methane is destroyed, and water vapor and carbon dioxide are produced.
While this decreases the concentration of methane in the atmosphere, it is unclear whether it leads to a net reduction in radiative forcing, because both water vapor and carbon dioxide are themselves greenhouse gases that affect the warming of Earth.
The additional water vapor in the stratosphere produced by CH4 oxidation adds approximately 15% to methane's radiative forcing effect.
By the 1980s, the understanding of the global warming problem had been transformed by the inclusion of methane and other non-CO2 trace gases (CFCs, N2O, and O3), instead of focusing primarily on carbon dioxide. Both water and ice clouds, when formed at cold lower stratospheric temperatures, have a significant impact by increasing the atmospheric greenhouse effect. Large increases in future methane could lead to a surface warming that increases nonlinearly with the methane concentration.
Methane also affects the degradation of the ozone layer, the lowest layer of the stratosphere, just above the troposphere. In 2001, NASA researchers said that this process was enhanced by global warming: because warmer air holds more water vapor than colder air, the amount of water vapor in the atmosphere increases as it is warmed by the greenhouse effect. Their climate models, based on data available at that time, indicated that carbon dioxide and methane enhanced the transport of water into the stratosphere.
Atmospheric methane can last about 120 years in the stratosphere until it is eventually destroyed through oxidation by hydroxyl radicals.
Mean lifespan
There are different ways to quantify the period of time that methane impacts the atmosphere. The average time that a physical methane molecule is in the atmosphere is estimated to be around 9.6 years. However, the average time that the atmosphere will be affected by the emission of that molecule before reaching equilibrium – known as its 'perturbation lifetime' – is approximately twelve years.
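As an illustration of what a perturbation lifetime of roughly twelve years implies, the following sketch assumes a simple single-exponential decay of an emission pulse's effect (a simplification of the actual atmospheric chemistry):

```python
import math

# Illustrative exponential-decay model: the "perturbation lifetime"
# of ~12 years (from the text) is treated as a single e-folding time.
PERTURBATION_LIFETIME = 12.0  # years

def fraction_remaining(years, lifetime=PERTURBATION_LIFETIME):
    """Fraction of an emission pulse's effect remaining after `years`."""
    return math.exp(-years / lifetime)

for t in (12, 24, 50):
    print(f"after {t:2d} years: {fraction_remaining(t):.2f} remaining")
```

After one lifetime about 37% of the effect remains, after two about 14%, which is why a single pulse of methane fades far faster than one of CO2.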
The reaction of methane and chlorine atoms acts as a primary sink of Cl atoms and is a primary source of hydrochloric acid (HCl) in the stratosphere.
CH4 + Cl → CH3 + HCl
The HCl produced in this reaction leads to catalytic ozone destruction in the stratosphere.
Methanotrophs in soils and sediments
Soils act as a major sink for atmospheric methane through the methanotrophic bacteria that reside within them. This occurs with two different types of bacteria. "High capacity-low affinity" methanotrophic bacteria grow in areas of high methane concentration, such as waterlogged soils in wetlands and other moist environments. In areas of low methane concentration, "low capacity-high affinity" methanotrophic bacteria make use of the methane in the atmosphere to grow, rather than relying on methane in their immediate environment. Methane oxidation allows methanotrophic bacteria to use methane as a source of energy, reacting it with oxygen to produce carbon dioxide and water.
CH4 + 2O2 → CO2 + 2H2O
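The stoichiometry of this oxidation can be checked by a simple mass balance; the snippet below uses rounded integer molar masses purely for illustration:

```python
# Mass-balance check of CH4 + 2 O2 -> CO2 + 2 H2O, using rounded
# integer molar masses (g/mol).
M = {"C": 12, "H": 1, "O": 16}

ch4 = M["C"] + 4 * M["H"]   # 16 g/mol
o2 = 2 * M["O"]             # 32 g/mol
co2 = M["C"] + 2 * M["O"]   # 44 g/mol
h2o = 2 * M["H"] + M["O"]   # 18 g/mol

reactants = ch4 + 2 * o2    # 80 g per mole of CH4
products = co2 + 2 * h2o    # 80 g
assert reactants == products
print(f"{reactants} g of reactants -> {products} g of products per mole of CH4")
```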
Forest soils act as good sinks for atmospheric methane because soils are optimally moist for methanotroph activity, and the movement of gases between soil and atmosphere (soil diffusivity) is high. With a lower water table, any methane in the soil has to make it past the methanotrophic bacteria before it can reach the atmosphere. Wetland soils, however, are often sources of atmospheric methane rather than sinks because the water table is much higher, and the methane can be diffused fairly easily into the air without having to compete with the soil's methanotrophs.
Methanotrophic bacteria also occur in the underwater sediments. Their presence can often efficiently limit emissions from sources such as the underwater permafrost in areas like the Laptev Sea.
Removal technologies
Methane concentrations in the geologic past
From 1996 to 2004, researchers in the European Project for Ice Coring in Antarctica (EPICA) were able to drill and analyze gases trapped in ice cores in Antarctica to reconstruct GHG concentrations in the atmosphere over the past 800,000 years. They found that prior to approximately 900,000 years ago, the cycle of ice ages followed by relatively short warm periods lasted about 40,000 years, but by 800,000 years ago the interval changed dramatically to cycles that lasted 100,000 years. There were low values of GHGs during ice ages and high values during the warm periods.
This 2016 EPA illustration above is a compilation of paleoclimatology showing methane concentrations over time based on analysis of gas bubbles from EPICA Dome C, Antarctica (approximately 797,446 BCE to 1937 CE); Law Dome, Antarctica (approximately 1008 CE to 1980 CE); Cape Grim, Australia (1985 CE to 2015 CE); Mauna Loa, Hawaii (1984 CE to 2015 CE); and the Shetland Islands, Scotland (1993 CE to 2001 CE).
The massive and rapid release of large volumes of methane gas from such sediments into the atmosphere has been suggested as a possible cause for rapid global warming events in the Earth's distant past, such as the Paleocene–Eocene Thermal Maximum, and the Great Dying.
In 2001, scientists from NASA's Goddard Institute for Space Studies and Columbia University's Center for Climate Systems Research confirmed that greenhouse gases other than carbon dioxide were important factors in climate change, in research presented at the annual meeting of the American Geophysical Union (AGU). They offered a theory on the 100,000-year-long Paleocene–Eocene Thermal Maximum, which occurred approximately 55 million years ago. They posited that there was a vast release of methane that had previously been kept stable by "cold temperatures and high pressure ... beneath the ocean floor". This release of methane into the atmosphere resulted in the warming of the earth. A 2009 journal article in Science confirmed NASA research that the contribution of methane to global warming had previously been underestimated.
Early in the Earth's history carbon dioxide and methane likely produced a greenhouse effect. The carbon dioxide would have been produced by volcanoes and the methane by early microbes. During this time, Earth's earliest life appeared. According to a 2003 article in the journal Geology, these first, ancient bacteria added to the methane concentration by converting hydrogen and carbon dioxide into methane and water. Oxygen did not become a major part of the atmosphere until photosynthetic organisms evolved later in Earth's history. With no oxygen, methane stayed in the atmosphere longer and at higher concentrations than it does today.
References
Methane
Atmosphere
Greenhouse gases | Atmospheric methane | Chemistry,Environmental_science | 3,475 |
70,196,516 | https://en.wikipedia.org/wiki/Fertility%20fraud | Fertility fraud is the failure on the part of a fertility doctor to obtain consent from a patient before inseminating her with his own sperm. This normally occurs in the context of people using assisted reproductive technology (ART) to address fertility issues.
The term is also used in cases where donor eggs are used without consent and more broadly, in instances where doctors and other medical professionals exploit opportunities that arise when people use assisted reproductive technology to address fertility issues. This may give rise to a number of different types of fraud involving insurance, unnecessary procedures, theft of eggs, and other issues related to fertility treatment.
Types
The main sense of fertility fraud is non-consensual insemination of a patient by her doctor, but there are other types as well.
Egg theft
The first "test tube baby" was facilitated by Robert Edwards in 1978, and he allegedly used eggs without the consent of the women involved.
One of the earliest cases involving egg theft occurred in 1987 in Garden Grove, California, in a clinic run by doctor Ricardo Asch, and his partners doctors Sergio Stone and Jose Balmaceda. Asch took eggs from women undergoing diagnostic procedures and used them in fertility procedures in other women.
Asch and his two partners were accused of taking eggs and embryos from patients without their consent, using them to cause pregnancies in other women, and defrauding insurance companies. The eggs of at least 20 women were used, and at least fifteen live births resulted. Thirty-five patients filed legal actions against Asch. An estimated 67 women were victims of egg or embryo theft. Asch and Balmaceda left the country and avoided trial. Stone faced trial in the case and was sentenced to three years probation for mail fraud. He was fined $50,000 by the judge in the case, required to repay more than $14,000 in restitution to insurance companies, and had to wear an electronic monitoring device.
In the "Egg Affair" in Israel in 2000, police investigated two doctors who were accused of intentionally creating extra eggs in patients needing fertility procedures, and then without their patients' knowledge harvesting and selling the eggs to other fertility patients.
In Italy in 2016, famed Italian gynecologist Severino Antinori, known as the "grandmothers' obstetrician" because of his reputation for helping women over 60 to bear children, was arrested on suspicion of stealing eggs by removing them from a patient's ovaries without her consent under the guise of performing a procedure on her to remove an ovarian cyst. Antinori had recently hired a Spanish nurse at his clinic, and then diagnosed her with an ovarian cyst for the sole purpose of harvesting her eggs without her knowledge. Antinori was arrested at a Rome airport, charged with aggravated robbery and causing personal injury, and placed under house arrest.
Insemination fraud
There have been numerous cases of a healthcare provider fraudulently substituting their own sperm for donor sperm, resulting in pregnancy and birth.
Quincy Fortier, a fertility specialist in Las Vegas, Nevada, beginning in the early 1960s impregnated female patients with his own sperm, leading to 26 children during his 40-year practice. He died in 2006, aged 94, and the story was uncovered only in 2018 when a woman used a home DNA test to celebrate her retirement. The HBO documentary Baby God, aired in 2020, was based on the story of Fortier and his decades-long fertility fraud scheme.
Cecil Jacobson, a fertility doctor in the 1980s in Virginia, was originally found to be the biological father of at least seven of his patients' children, including one patient who was supposed to have been inseminated with sperm provided by her husband. DNA tests have since linked Jacobson to at least 15 such children, and it has been suspected that he fathered as many as 75 children by impregnating patients with his own sperm.
In 2018, a woman in Washington State filed suit in U.S. District Court in Idaho against Gerald Mortimer, who was her mother's fertility doctor when her parents resided in Idaho Falls. After having difficulty becoming pregnant, her mother sought help from Mortimer and eventually became pregnant in 1980. The connection to Mortimer was hidden for 37 years until it was finally revealed when the now adult daughter used a DNA kit which returned the connection to Mortimer as her biological father, who had used his own sperm rather than an anonymous donor as agreed.
Donald Cline used his own sperm in his fertility practice in Indianapolis between 1974 and 1987 to covertly father at least 94 offspring. This came to light in 2014, when home DNA test kits were proliferating, and led to the discovery of Cline having used his own sperm to fertilize his patients' eggs. Because there was no law concerning the practice in Indiana, he was charged with obstruction of justice, false advertising, and immoral conduct, and lost his license to practice medicine. Cline pleaded guilty to two Level 6 felony counts of obstruction of justice and received a one-year suspended sentence. The first law in the United States came into effect in 2019 in the state of Indiana as a result of this case. As of May 2022, Cline has paid over $1.35 million to settle three lawsuits, with three more pending. Similar cases were found in other states.
John Boyd Coates III, a Vermont fertility doctor, has had two lawsuits filed against him and has been charged with using his own sperm in cases going back 40 years. His license has since been revoked and a $5.25 million judgment in damages was awarded to the first plaintiff.
Jos Beek, a gynecologist in the Netherlands, conceived 21 children, and potentially dozens more, using his own sperm after prospective parents turned to him for fertility treatment, an investigation discovered. He worked at Elisabeth hospital in Leiderdorp, now part of Alrijne hospital, between 1973 and 1998. He died in 2019.
In September 2020, a San Diego woman sued Dr. Phillip M. Milgram for having used his own sperm to inseminate her three decades earlier, instead of anonymous donor sperm. The deception was discovered when her adult son found that Milgram was his biological father after using a home DNA test kit from 23andMe.
In November 2020, a northern California woman sued her former fertility doctor Michael Kiken for having falsely inseminated her with his own sperm forty years prior. She bore two children, but only learned in 2019 from a DNA test kit that her daughter had received as a gift showed that her former fertility doctor is her children's biological father. In addition, her children may have inherited a genetic disease passed on by Kiken.
Jan Karbaat, a fertility doctor in the Netherlands, fathered 90 confirmed children and may have as many as 200 children. He died in 2017.
In 2021, Norman Barwin, an Ottawa fertility doctor, paid out a settlement of $13.375 million to his seventeen children conceived in his clinic in Canada in the 1980s. A total of 244 former patients and their children, including the seventeen conceived using his own sperm, are among the claimants.
In April 2022, a Colorado jury awarded $8.75 million to the families of a dozen women who became pregnant while being treated for infertility using artificial insemination techniques by doctor Paul Brennan Jones of Grand Junction who used his own sperm while the women were his patients in the 1980s. The jury found Jones liable for negligence, fraud, and other claims.
Fertility doctor Burton Caldwell created at least 22 children using his own sperm. Two of his offspring, both the result of insemination fraud, dated in high school, the first verified case of accidental half-sibling incest occurring as a consequence of insemination fraud. It has long been theorized that a large number of people with unrecognized very close genetic relationships living in the same community could result in accidental incest.
Other
There are many other types of fertility fraud, and they may take place at various stages of the process:
Competing for patients via misleading information about success rates, either in advertising or during personal interviews
Performing an assisted reproductive technology procedure not covered by insurance, and then billing for a different procedure
Performing unnecessary or futile procedures on patients who are misinformed or poorly informed
False claims of pregnancy, followed by assertions of fetal death
Misuse of sperm, eggs, and embryos, in particular, a health care person substituting their own sperm for donor sperm
Inadequate screening of donors
Embezzlement from sperm banks, theft of human eggs ("egg-snatching") or embryos, or use of eggs without consent
Legal status
Hundreds of children have been fathered by non-consensual insemination worldwide by their physicians, including in the United States, Canada, and the Netherlands, but without specific laws outlawing it, the legal consequences are unclear. Sometimes other laws related to fertility fraud are used against the physician, such as mail, travel, or wire fraud, while others face civil suits. Some physicians have faced ethics charges by the governing bodies of their profession and lost their license to practice medicine.
United States
In the United States, medical students in the 1960s and 1970s donated sperm, and later while trying to develop their practice as a physician, may have gone on to use their own sperm in order to establish a track record of success. There were no laws on the books at the time prohibiting such activity.
Activists have pushed for legislation that would make fertility fraud a crime, and as of February 2022, seven U.S. states have passed laws, and seven others were considering it.
Scope
In the United States, over fifty fertility doctors have been accused of fraud in connection with donating sperm according to a February 2022 news report.
Media adaptations
In 2020, Somethin' Else and Sony Music Entertainment released a podcast telling the story of Jan Karbaat and his children called "The Immaculate Deception".
In 2020, HBO released the documentary Baby God chronicling the life of Quincy Fortier.
In 2021, the Dutch three-part miniseries Seeds of Deceit told the story of Dutch fertility doctor Jan Karbaat, who inseminated his patients with his own sperm.
In 2022, Netflix released the documentary Our Father by Jason Blum in the true crime genre about the Donald Cline case in the 1970s and 1980s, to mixed reviews.
See also
References
Works cited
Further reading
External links
Centers for Disease Control and Prevention (CDC), Assisted Reproductive Technology
Assisted reproductive technology
Applied genetics
Biotechnology
Bioethics
Fertility medicine
Genetic engineering
Human reproduction
Medical crime
Medical ethics
Obstetrical procedures
Reproductive rights
Health fraud | Fertility fraud | Chemistry,Technology,Engineering,Biology | 2,157 |
37,280,130 | https://en.wikipedia.org/wiki/Triamiphos | Triamiphos (chemical formula: C12H19N6OP) is an organophosphate used as a pesticide and fungicide. It is used to control powdery mildews on apples and ornamentals. It was discontinued by the US manufacturer in 1998.
History
The phosphoramide Triamiphos is thought to be the first commercially available systemic fungicide. Despite its prominent use in the years following its discovery, no long-term toxicity studies were undertaken until 1974, and it has since been replaced by other pesticides. The WHO recommended classification of pesticides by hazard considers Triamiphos to be discontinued for use as a pesticide.
Structure and Reactivity
It is classified as an organophosphorus compound O=P(R)3 and more specifically as a phosphoramide O=P(NR2)3. The bis(dimethylamido)phosphoryl group (Me2N)2-P(O)- is present in triamiphos and also a number of other fungicides.
It contains two chemical groups used in pesticide synthesis (triazole, phosphoryl). The most relevant distinct subparts of the molecule are the oxon centre (O=P) and the leaving group (the triazole aromatic moiety). Triamiphos technically is not an organophosphate O=P(OR)3, a subclass of organophosphorus O=PR3 compounds. However, the distinction is not always consistent throughout literature where organophosphorus compounds without the alkoxy sidechains or even with a O=S group instead of a O=P group are still classified as organophosphate pesticides (OPs).
Schradan, another organophosphorus pesticide, can be seen as analogous to triamiphos, differing only in the leaving group. As both have comparable toxic properties, it can be concluded that the phenylaminotriazole moiety of triamiphos does not appear to be vital for its anticholinesterase property.
Synthesis
Triamiphos was first synthesised by Van den Bos et al. (1960) by adding the salt of 3-amino-5-phenyl-1,2,4-triazole to a solution of phosphoryl chloride. Subsequently, gaseous dimethylamine is introduced into the reaction mixture to yield triamiphos.
Biotransformation
No studies have exactly determined the biotransformation route of Triamiphos or the structure of its active metabolite.
Mechanism of Action and Toxicity in Animal Studies
The toxic effect of Triamiphos ties back to the acetylcholinesterase inhibition ability of its active metabolite. This inhibitory effect is observed for absorption routes through the skin, respiratory or digestive tract.
The National Institute of Public Health in The Netherlands reported a dose-dependent effect of Triamiphos from a short-term study in rats. They found inhibition of acetylcholinesterase activity at a concentration of 1 ppm during the feeding period. After a recovery period the enzyme activity returned back to normal. A long-term feeding and a three-generation reproduction study performed by Verschuuren et al. (1974), however, found inhibitory effects at an even lower concentration of 0.5 ppm. At this concentration, cholinesterase activity was inhibited in the P-, but not in the F1, F2 or F3 generations. Inhibition in all generations was observed at a concentration of 2.5 ppm, in which the subsequent generations were already exposed to the toxicant from the moment of conception.
A no-effect level of 0.1 ppm was reported by both studies.
Furthermore, a greater inhibitory effect on erythrocyte cholinesterase compared to plasma or brain cholinesterase activity was reported. Therefore, the active metabolite does not appear to readily enter the brain, and primarily muscarinic and nicotinic effects are observed. The LD50 (i.p. route) was determined to be between 15 and 18 mg/kg in rats and 10–30 mg/kg in mice. Animals receiving the lethal dose were reported to survive upon administration of atropine as an antidote. An important factor responsible for the acute toxicity of Triamiphos is the rate of cholinesterase inhibition: if the activity is reduced by 70% within a few minutes, death, primarily due to paralysis of the respiratory muscles, was reported in rats. The inhibited enzyme is not reactivated, and the above-mentioned recovery of the animals was only possible due to its resynthesis. A 1976 study suggested an increased cholesterol content in rat aorta and changes in lipid metabolism as further effects of Triamiphos, which could however not be confirmed by another, more elaborate study. An overview of the effects of Triamiphos at different concentrations can be found in the table below.
Indications
Triamiphos is suspected to exert the same toxic side effects in humans as other organophosphorus pesticides, though no human data specifically on Triamiphos exposure appear to be available.
References
Fungicides
Triazoles
Phosphoramides | Triamiphos | Biology | 1,104 |
54,698,311 | https://en.wikipedia.org/wiki/Video%20line%20selector | A video line selector is an electronic circuit or device for picking a line from an analog video signal. The input of the circuit is connected to an analog video source, the output triggers an oscilloscope, so display the selected line on the oscilloscope or similar device.
Properties
Video line selectors are either circuits or units built into other devices, fitted to the needs of that device, or separate instruments for use in workshops, production, and laboratories. They contain analog and digital circuits and an internal or external DC power supply. There is a video signal input and sometimes a loop-through output, to prevent reflections of the video signal which would cause shadows in the video picture, as well as a trigger output. There is also an input or adjustment for the line number(s) to be picked out and, as an option, an automatic or manual setting to accommodate other video standards and non-interlaced video. Video line selectors do not need the whole picture signal; only the synchronisation signals are needed. Sometimes only inputs for H- and V-sync were installed.
Setup and References
The video signal input is 75 Ω terminated or connected to the video output for a monitor. The amplified video signal is connected to the inputs of the H- and V-sync detector circuits. The H-sync detector outputs the horizontal synchronisation pulse filtered from the video signal; this is the line synchronisation, which makes the lines line up vertically. The V-sync detector filters out the vertical synchronisation, which makes the picture appear in the same position on the screen as the previous one.
Both synchronisation pulses are fed to a digital synchronous counter. The V-sync resets the counter; the H-sync pulses are counted. At every frame, the counter is reset and the lines are counted. Most often interlaced video was used, splitting up a picture into the odd-numbered lines followed by the even-numbered lines, one half-picture (field) each (→ deinterlacing).
Interlaced video requires a V-sync detector which can distinguish the first field of the interlaced frame from the second.
Some designs reset the counter and toggle an interlace bit; others ignore the sync pulse after the odd-numbered lines and continue counting.
Broadcast television systems all over the world were based on a nearly identical monochrome video signal with only minor differences, so the number of lines can be covered by a 10-bit counter (2^9 < lines < 2^10, i.e. 512 < 576 < 1024). The digital comparator, fed by the preset line number and the counter, detects the logical equivalence (match) of the two binary numbers; this match is the output pulse of the video line selector. When fed to the trigger input of an oscilloscope, the signal of the selected video line is displayed on the oscilloscope when the test probe carries the video signal. A precision timer can additionally trigger on a single pixel or dot of the line.
In order to simplify the digital part of the circuit, it is possible to load the preset line number into the counter and have it count down. When the counter reaches zero, the trigger output is set. A 10-input NOR gate is simpler than a 10-bit digital comparator, but evaluating several lines per picture is no longer possible. By decreasing the line number by one, the carry (borrow) bit of the counter can be used as the trigger output, replacing the 10-input NOR gate.
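The two counting strategies described above (free-running up-counter plus comparator versus preloaded down-counter) can be sketched in software; the field length and line numbers below are illustrative assumptions, not tied to any particular video standard:

```python
# Software sketch of the two trigger strategies described above:
# an up-counter with a digital comparator, and a preloaded
# down-counter whose zero state (NOR gate / borrow) fires the trigger.
LINES_PER_FIELD = 312  # e.g. one field of a 625-line interlaced system

def trigger_with_comparator(preset):
    """Counter reset by V-sync, incremented by H-sync;
    the comparator fires whenever counter == preset."""
    return [line for line in range(LINES_PER_FIELD) if line == preset]

def trigger_with_down_counter(preset):
    """Counter preloaded with the preset at V-sync, decremented by
    H-sync; the all-zeros state serves as the trigger output."""
    count = preset
    for line in range(LINES_PER_FIELD):
        if count == 0:
            return [line]   # only one trigger per field
        count -= 1
    return []

# Both strategies fire on the same line, which is why the
# hardware simplification works:
assert trigger_with_comparator(23) == trigger_with_down_counter(23) == [23]
```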
Applications
Video line selecting was used in laboratory, production, and workshop: (selection only)
focussing CCD-Sensors in cameras (all areas of the picture)
analyzing a television signal on quality and troubleshooting video devices
Monitoring television picture content:
decoding teletext
decoding Channel Videodat, a former television service in Germany which broadcast software and data over television
restoring data, as with "ArVid", which used a videocassette recorder for data storage
For modifying television signals:
teletext output to screen
merge on-screen displays, logos or text into the television picture
As a precise optical sensor:
use a camera as optical sensor for analyzing a taken picture in automation,
use a camera as a line sensor,
use a camera as a vertically selective line sensor.
See also
component video, HD-MAC, back porch
References
External links
Video Line Selector Circuit and documentation at elm-chan.org 12 April 2002
(German) PAL Line-Selector at controller-designs.de
Electronic circuits
Electronic test equipment
Television technology
Signal processing | Video line selector | Technology,Engineering | 906 |
1,394,307 | https://en.wikipedia.org/wiki/Photophosphorylation | In the process of photosynthesis, the phosphorylation of ADP to form ATP using the energy of sunlight is called photophosphorylation. Cyclic photophosphorylation occurs in both aerobic and anaerobic conditions, driven by the main primary source of energy available to living organisms, which is sunlight. All organisms produce a phosphate compound, ATP, which is the universal energy currency of life. In photophosphorylation, light energy is used to pump protons across a biological membrane, mediated by flow of electrons through an electron transport chain. This stores energy in a proton gradient. As the protons flow back through an enzyme called ATP synthase, ATP is generated from ADP and inorganic phosphate. ATP is essential in the Calvin cycle to assist in the synthesis of carbohydrates from carbon dioxide and NADPH.
ATP and reactions
Both the structure of ATP synthase and its underlying gene are remarkably similar in all known forms of life. ATP synthase is powered by a transmembrane electrochemical potential gradient, usually in the form of a proton gradient. In all living organisms, a series of redox reactions is used to produce a transmembrane electrochemical potential gradient, or a so-called proton motive force (pmf).
Redox reactions are chemical reactions in which electrons are transferred from a donor molecule to an acceptor molecule. The underlying force driving these reactions is the Gibbs free energy of the reactants relative to the products. If donor and acceptor (the reactants) are of higher free energy than the reaction products, the electron transfer may occur spontaneously. The Gibbs free energy is the energy available ("free") to do work. Any reaction that decreases the overall Gibbs free energy of a system will proceed spontaneously (given that the system is isobaric and also at constant temperature), although the reaction may proceed slowly if it is kinetically inhibited.
The fact that a reaction is thermodynamically possible does not mean that it will actually occur. A mixture of hydrogen gas and oxygen gas does not spontaneously ignite. It is necessary either to supply an activation energy or to lower the intrinsic activation energy of the system, in order to make most biochemical reactions proceed at a useful rate. Living systems use complex macromolecular structures to lower the activation energies of biochemical reactions.
It is possible to couple a thermodynamically favorable reaction (a transition from a high-energy state to a lower-energy state) to a thermodynamically unfavorable reaction (such as a separation of charges, or the creation of an osmotic gradient), in such a way that the overall free energy of the system decreases (making it thermodynamically possible), while useful work is done at the same time. The principle that biological macromolecules catalyze a thermodynamically unfavorable reaction if and only if a thermodynamically favorable reaction occurs simultaneously, underlies all known forms of life.
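This coupling principle can be sketched in back-of-the-envelope form (all numbers below are assumed, textbook-scale values, not measurements from this article): ATP synthesis is unfavorable on its own, but because Gibbs free energies of coupled reactions add, summing it with protons flowing down their gradient gives an overall decrease.

```python
F_kJ_per_mol_V = 96.485    # Faraday constant, kJ/(mol·V)
pmf_V = -0.20              # proton motive force, ~200 mV favorable (assumed)
dG_per_proton = F_kJ_per_mol_V * pmf_V   # ≈ -19.3 kJ/mol per proton moved
protons_per_ATP = 4        # stoichiometry assumed for illustration
dG_ATP = 30.5              # kJ/mol to form ATP from ADP + Pi (standard value)

# Free energies of coupled reactions add; the coupled process proceeds
# only if the overall change is negative.
dG_coupled = dG_ATP + protons_per_ATP * dG_per_proton
print(round(dG_coupled, 1))  # -46.7 kJ/mol: favorable overall
```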
The transfer of electrons from a donor molecule to an acceptor molecule can be spatially separated into a series of intermediate redox reactions. This is an electron transport chain (ETC). Electron transport chains often produce energy in the form of a transmembrane electrochemical potential gradient. The gradient can be used to transport molecules across membranes. Its energy can be used to produce ATP or to do useful work, for instance mechanical work of a rotating bacterial flagella.
Cyclic photophosphorylation
This form of photophosphorylation occurs on the stroma lamella, or fret channels. In cyclic photophosphorylation, the high-energy electron released from P700, a pigment in a complex called photosystem I, flows in a cyclic pathway. The electron starts in photosystem I, passes from the primary electron acceptor to ferredoxin and then to plastoquinone, next to cytochrome b₆f (a complex similar to that found in mitochondria), and finally to plastocyanin before returning to photosystem I. This transport chain produces a proton-motive force, pumping H⁺ ions across the membrane and producing a concentration gradient that can be used to power ATP synthase during chemiosmosis. This pathway is known as cyclic photophosphorylation, and it produces neither O₂ nor NADPH. Unlike non-cyclic photophosphorylation, NADP⁺ does not accept the electrons; they are instead sent back to the cytochrome b₆f complex.
In bacterial photosynthesis, a single photosystem is used, and it is therefore involved in cyclic photophosphorylation.
It is favored in anaerobic conditions and in conditions of high irradiance and CO₂ compensation points.
Non-cyclic photophosphorylation
The other pathway, non-cyclic photophosphorylation, is a two-stage process involving two different chlorophyll photosystems in the thylakoid membrane. First, a photon is absorbed by chlorophyll pigments surrounding the reaction center core of photosystem II. The light excites an electron in the pigment P680 at the core of photosystem II, which is transferred to the primary electron acceptor, pheophytin, leaving behind P680⁺. The energy of P680⁺ is used in two steps to split a water molecule into 2H⁺ + ½O₂ + 2e⁻ (photolysis, or light-splitting). An electron from the water molecule reduces P680⁺ back to P680, while the H⁺ and oxygen are released. The electron transfers from pheophytin to plastoquinone (PQ), which takes 2e⁻ (in two steps) from pheophytin, and two H⁺ ions from the stroma, to form PQH₂. This plastoquinol is later oxidized back to PQ, releasing the 2e⁻ to the cytochrome b₆f complex and the two H⁺ ions into the thylakoid lumen. The electrons then pass through Cyt b₆ and Cyt f to plastocyanin, pumping hydrogen ions (H⁺) into the thylakoid space. This creates an H⁺ gradient, making H⁺ ions flow back into the stroma of the chloroplast and providing the energy for the (re)generation of ATP.
The photosystem II complex replaces its lost electrons from H₂O, so electrons are not returned to photosystem II as they would be in the analogous cyclic pathway. Instead, they are transferred to the photosystem I complex, which boosts their energy to a higher level using a second solar photon. The excited electrons are transferred to a series of acceptor molecules, but this time are passed on to an enzyme called ferredoxin-NADP⁺ reductase, which uses them to catalyze the reaction
NADP⁺ + 2H⁺ + 2e⁻ → NADPH + H⁺
This consumes the H⁺ ions produced by the splitting of water, leading to a net production of ½O₂, ATP, and NADPH + H⁺ with the consumption of solar photons and water.
The concentration of NADPH in the chloroplast may help regulate which pathway electrons take through the light reactions. When the chloroplast runs low on ATP for the Calvin cycle, NADPH will accumulate and the plant may shift from noncyclic to cyclic electron flow.
Early history of research
In 1950, the first experimental evidence for the existence of photophosphorylation in vivo was presented by Otto Kandler, using intact Chlorella cells and interpreting his findings as light-dependent ATP formation.
In 1954, Daniel I. Arnon et al. discovered photophosphorylation in vitro in isolated chloroplasts with the help of ³²P.
His first review on the early research of photophosphorylation was published in 1956.
References
Fenchel T, King GM, Blackburn TH. Bacterial Biogeochemistry: The Ecophysiology of Mineral Cycling. 2nd ed. Elsevier; 1998.
Lengeler JW, Drews G, Schlegel HG, editors. Biology of the Prokaryotes. Blackwell Sci; 1999.
Nelson DL, Cox MM. Lehninger Principles of Biochemistry. 4th ed. Freeman; 2005.
Stumm W, Morgan JJ. Aquatic Chemistry. 3rd ed. Wiley; 1996.
Thauer RK, Jungermann K, Decker K. Energy Conservation in Chemotrophic Anaerobic Bacteria. Bacteriol. Rev. 41:100–180; 1977.
White D. The Physiology and Biochemistry of Prokaryotes. 2nd ed. Oxford University Press; 2000.
Voet D, Voet JG. Biochemistry. 3rd ed. Wiley; 2004.
Photosynthesis
Light reactions | Photophosphorylation | Chemistry,Biology | 1,849 |
3,633,535 | https://en.wikipedia.org/wiki/Magnitizdat | Magnitizdat was the process of copying and distributing audio tape recordings that were not commercially available in the Soviet Union. It is analogous to samizdat, the method of disseminating written works that could not be officially published under Soviet political censorship. It is technically similar to bootleg recordings, except that it has a political dimension not usually present in the latter term.
Terminology
The term magnitizdat comes from the Russian words magnitofon (tape recorder) and izdatel'stvo (publishing house).
Technology
Magnetic tape recorders were rare in the Soviet Union before the 1960s. During the 1960s, the Soviet Union mass-produced reel-to-reel tape recorders for the consumer market. In addition, Western and Japanese tape recorders were sold through secondhand shops and the black market.
According to Alexei Yurchak, in contrast to samizdat, “magnitizdat managed to elude state control by virtue of its technological availability and privacy.” While the state controlled the ownership of printing presses, Soviet citizens were allowed to own reel-to-reel tape recorders. Making more than six typewritten copies of a document to distribute was forbidden, but there was no legal limit on copying tapes. In addition, only the performer on the recording was considered responsible for the content.
Bard songs
Live recordings of bard songs performed at informal gatherings were the first works to be distributed as magnitizdat. Bulat Okudzhava, Alexander Galich, Vladimir Vysotsky, and Yuli Kim were among the bards whose music was distributed as magnitizdat. Their lyrics dealt with political themes and contained criticisms of Stalin, labor camps, and contemporary Soviet life.
The recordings were copied and recopied in private and distributed through networks of friends and acquaintances throughout the Soviet Union. Recordings of bard songs were also brought to the West by tourists and emigres and then broadcast on Radio Liberty.
Rock music
In rock music circles, magnitizdat was initially used for recording short-wave radio broadcasts and copying vinyl records of Western rock music. Reel-to-reel reproductions of Western rock were sold on the black market. Recordings of Western artists such as The Beatles, Led Zeppelin, Deep Purple, and Donna Summer were distributed throughout the Soviet Union as magnitizdat.
By the late 1970s, magnitizdat was used to distribute Soviet rock music as well. Soviet rock groups began recording albums, also known as magnitoal'bomy, as opposed to live concert recordings.
Andrei Tropillo was the first to set up a studio to record Russian rock bands on a regular basis. The AnTrop logo appeared on recordings from Tropillo's studio. Tropillo’s distribution method usually consisted of handing ten master copies on reel-to-reel tapes to recording cooperatives, which then re-copied and distributed the tapes to other cooperatives and cities.
In 1986, Red Wave, a compilation album featuring tracks from several bands associated with the Leningrad Rock Club, was released in the U.S. by Big Time Records. The album contained tracks from magnitoal’bomy originally recorded in Tropillo’s studio and brought out of the Soviet Union by Joanna Stingray.
Punk
The first punk recording in the Soviet Union has been attributed to the band Avtomaticheskie Udovletvoriteli. One of their performances in Moscow was recorded with a single microphone and released as magnitizdat in 1981.
The Siberian punk group Grazhdanskaya Oborona recorded songs on minimal equipment in Egor Letov's home studio. Letov would then send his albums to acquaintances across the country, who made further copies of the tapes. Other Siberian punk bands followed Letov's example by limiting their live performances to apartment concerts and making recordings with reel-to-reel tape recorders and microphones.
See also
Samizdat
Roentgenizdat
Notes
References
Bibliography
Underground culture
Smuggling
Culture of the Soviet Union
Music industry
Tape recording | Magnitizdat | Technology | 824 |
24,370,288 | https://en.wikipedia.org/wiki/Pop-up%20satellite%20archival%20tag | Pop-up satellite archival tags (PSATs) are used to track movements of (usually large, migratory) marine animals. A PSAT (also commonly referred to as a PAT tag) is an archival tag (or data logger) that is equipped with a means to transmit the collected data via the Argos satellite system. Though the data are physically stored on the tag, its major advantage is that it does not have to be physically retrieved like an archival tag for the data to be available, making it a viable, fishery-independent tool for animal behavior and migration studies. They have been used to track movements of ocean sunfish, marlin, blue sharks, bluefin tuna, swordfish and sea turtles, to name a few species. Location, depth, temperature, oxygen levels, and body movement data are used to answer questions about migratory patterns, seasonal feeding movements, daily habits, and survival after catch and release, for example.
A satellite tag is generally constructed of several components: a data-logging section, a release section, a float, and an antenna. The release section may be either an energetically ejected ("popped off") mechanism or a corrosive pin that is actively corroded on a preset date or after a specified period of time. Some limitations of satellite tags are their depth limitations (2000 m), their costs ($499–$4000+), their vulnerability to loss through environmental issues (biofouling), and premature release through ingestion by a predator.
There are two methods of underwater geolocation that PSATs employ. The first is light-based geolocation, which uses the length of the day and a noon-time calculation to estimate the tag's location while underwater. This method has a functional depth limitation set by light penetration, which can be as shallow as a few meters or upwards of hundreds of meters. Geolocation estimates based on light are usually coupled with additional satellite data such as sea surface temperature or other available inputs such as bathymetry, land avoidance, and the physical limitations of the tagged animal. The other method is to measure ambient light and the Earth's magnetic field. This method has a functional depth limitation equal to the tag's maximum depth limitation, generally 1800 m. Magnetic-based geolocation is generally not coupled with additional satellite data or other inputs, and relies on the Earth's magnetic field for latitude estimations and light (noon time) for longitude estimations.
General information
Pop-up satellite tags range in length from about and weigh 36–108 grams in air. A tag must be small compared to the size of the animal, anywhere from 3–5% of the total fish weight, so that it does not interfere with normal behavior.
These tags record information such as temperature, magnetics, acceleration, light level, oxygen levels and pressure at set intervals of a few seconds to several hours. Data are often collected for several weeks or months, but with new advances in memory technology (microSD cards), tags can store data for centuries. PSATs record data in non-volatile memory so that data are retained even if the power source fails.
When the PSAT releases from the animal to which it was attached, it floats to the surface and begins to transmit data to the Argos satellites at a frequency of 401.65 MHz. Therefore, the tag does not have to be physically recovered for the data to be obtained. Summarized data illustrating where the fish's migration started and ended are usually recovered from the tag within about seven days; however, tags can transmit significant amounts of oceanic data for months after they release from the fish.
Limitations of PSAT technology are that it is subject to loss by malfunction of the power source, environmental effects such as biofouling, ingestion by a predator, its depth limitation and cost. Most PSATS have internal software designed to detect damaging or sub-optimal conditions that will trigger an early release and transmission of data. For example, PSATs can withstand pressures to depths of depending on the model. If data indicate no change in pressure (depth) for a period of time, this could trigger an early release due to premature release (a tag pulling out of the fish early) or death of the animal to which it was attached. Such internal checks can alert researchers to unexpected or undesirable events. Ingestion by a predator is more difficult to detect in the sense of forcing a tag to report; however, in data processing it is indicated by an immediate loss of light and an increase in temperature that stabilizes while it is inside the predator.
Types
Using light level
The most popular method of determining an animal's location underwater requires the tag to acquire light levels throughout the day. Observing the length of the day, from when the tag observed the first light until the last light, the tag can determine its latitudinal location (with accuracy exceeding 1 degree). From the length of day the tag computes the noon time which is converted to a longitude location (with accuracy averaging about 0.5 degree or 30–50 nautical miles). This method of geolocation is suitable for animals that inhabit clear waters near the surface. At depths or in turbid waters, light based geolocation does not work as well due to light attenuation. It also does not work well during the equinoxes when the length of day is globally uniform. Manufacturers of this technology include Wildlife Computers, Microwave Telemetry, and Lotek Wireless. Star-Oddi is in the development phase of a pop-up satellite tag as well.
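The day-length/noon logic can be sketched as follows (a simplified illustration using the standard sunrise equation and a rough solar-declination formula; this is not the proprietary algorithm of any tag manufacturer):

```python
import math

def geolocate_from_light(sunrise_utc_h, sunset_utc_h, day_of_year):
    """Estimate (latitude, longitude) in degrees from first/last-light
    times in hours UTC. As noted above, this breaks down near the
    equinoxes, when day length is globally uniform (tan(decl) -> 0)."""
    # Longitude: local solar noon falls midway between sunrise and sunset,
    # and the Earth rotates 15 degrees of longitude per hour.
    noon_utc = (sunrise_utc_h + sunset_utc_h) / 2.0
    longitude = (12.0 - noon_utc) * 15.0
    # Latitude: invert the sunrise equation cos(H) = -tan(lat)*tan(decl),
    # where H is half the day length expressed as an hour angle.
    H = math.radians((sunset_utc_h - sunrise_utc_h) / 2.0 * 15.0)
    decl = math.radians(
        -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10))))
    latitude = math.degrees(math.atan(-math.cos(H) / math.tan(decl)))
    return latitude, longitude

# A 12-hour day centered on 12:00 UTC places the tag near (0 N, 0 E).
print(geolocate_from_light(6.0, 18.0, 1))
```

In practice the raw light curve must first be cleaned of depth and turbidity effects before sunrise and sunset can be picked out, which is one reason accuracy degrades for deep-diving animals.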
Using Earth’s magnetic field and light level
Another approach to geolocation couples light and magnetics. This method measures the total Earth's magnetic field for latitude estimations while using light based noon time detection for longitude. These tags measure the Earth's magnetic field on their built-in magnetometers throughout the day and then take the average value as the tag's daily location. Average accuracy of this method is approximately 35 nautical miles. Manufacturers of this technology include Desert Star Systems.
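The idea can be illustrated with the first-order dipole approximation of the geomagnetic field, B = B₀·√(1 + 3·sin²λ), where λ is magnetic latitude (real tags rely on more detailed field models, and the equatorial field strength below is an assumed round number):

```python
import math

def dipole_magnetic_latitude(total_field_nT, equatorial_field_nT=30000.0):
    """Estimate magnetic latitude (degrees) from total field intensity by
    inverting the dipole model B = B0 * sqrt(1 + 3*sin^2(lat)).
    A sketch only: ignores non-dipole terms and secular variation."""
    ratio_sq = (total_field_nT / equatorial_field_nT) ** 2
    s = math.sqrt(max(0.0, (ratio_sq - 1.0) / 3.0))
    return math.degrees(math.asin(min(1.0, s)))

print(round(dipole_magnetic_latitude(30000.0)))  # 0  (equatorial field strength)
print(round(dipole_magnetic_latitude(60000.0)))  # 90 (the field doubles at the pole)
```

Because field intensity varies smoothly with latitude, small measurement errors translate into position errors of tens of nautical miles, consistent with the ~35-nautical-mile accuracy quoted above.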
See also
Acoustic tag
Argos system
Animal migration tracking
Data storage tag
GIS and aquatic science
References
External links
Variety of PSAT products offered by Desert Star Systems
Description and specifications of a PSAT offered by Microwave Telemetry, Inc.
Description and specifications of a smaller size PSAT offered by Microwave Telemetry, Inc.
Specification of a PSAT product offered by Wildlife Computers
Description of a smaller "miniPAT" offered by Wildlife Computers
Specifications and description of PSAT made by Lotek.
Data collection
Marine biology
Articles containing video clips | Pop-up satellite archival tag | Technology,Biology | 1,331 |
29,823 | https://en.wikipedia.org/wiki/Thomas%20Hobbes | Thomas Hobbes ( ; 5 April 1588 – 4 December 1679) was an English philosopher, best known for his 1651 book Leviathan, in which he expounds an influential formulation of social contract theory. He is considered to be one of the founders of modern political philosophy.
In his early life, overshadowed by his father's departure following a fight, he was taken under the care of his wealthy uncle. Hobbes's academic journey began in Westport, leading him to Oxford University, where he was exposed to classical literature and mathematics. He then graduated from the University of Cambridge in 1608. He became a tutor to the Cavendish family, which connected him to intellectual circles and initiated his extensive travels across Europe. These experiences, including meetings with figures like Galileo, shaped his intellectual development.
After returning to England from France in 1637, Hobbes witnessed the destruction and brutality of the English Civil War from 1642 to 1651 between Parliamentarians and Royalists, which heavily influenced his advocacy for governance by an absolute sovereign in Leviathan, as the solution to human conflict and societal breakdown. Aside from social contract theory, Leviathan also popularized ideas such as the state of nature ("war of all against all") and laws of nature. His other major works include the trilogy De Cive (1642), De Corpore (1655), and De Homine (1658) as well as the posthumous work Behemoth (1681).
Hobbes contributed to a diverse array of fields, including history, jurisprudence, geometry, optics, theology, classical translations, ethics, as well as philosophy in general, marking him as a polymath. Despite controversies and challenges, including accusations of atheism and contentious debates with contemporaries, Hobbes's work profoundly influenced the understanding of political structure and human nature.
Biography
Early life
Thomas Hobbes was born on 5 April 1588 (Old Style), in Westport, now part of Malmesbury in Wiltshire, England. Having been born prematurely when his mother heard of the coming invasion of the Spanish Armada, Hobbes later reported that "my mother gave birth to twins: myself and fear." Hobbes had a brother, Edmund, about two years older, as well as a sister, Anne.
Although Thomas Hobbes's childhood is unknown to a large extent, as is his mother's name, it is known that Hobbes's father, Thomas Sr., was the vicar of both Charlton and Westport. Hobbes's father was uneducated, according to John Aubrey, Hobbes's biographer, and he "disesteemed learning." Thomas Sr. was involved in a fight with the local clergy outside his church, forcing him to leave London. As a result, the family was left in the care of Thomas Sr.'s older brother, Francis, a wealthy glove manufacturer with no family of his own.
Education
Hobbes was educated at Westport church from age four, went to the Malmesbury school, and then to a private school kept by a young man named Robert Latimer, a graduate of the University of Oxford. Hobbes was a good pupil, and between 1601 and 1602 he went to Magdalen Hall, the predecessor to Hertford College, Oxford, where he was taught scholastic logic and mathematics. The principal, John Wilkinson, was a Puritan and had some influence on Hobbes. Before going up to Oxford, Hobbes translated Euripides' Medea from Greek into Latin verse.
At university, Thomas Hobbes appears to have followed his own curriculum as he was little attracted by the scholastic learning. Leaving Oxford, Hobbes completed his B.A. degree by incorporation at St John's College, Cambridge, in 1608. He was recommended by Sir James Hussey, his master at Magdalen, as tutor to William, the son of William Cavendish, Baron of Hardwick (and later Earl of Devonshire), and began a lifelong connection with that family. William Cavendish was elevated to the peerage on his father's death in 1626, holding it for two years before his death in 1628. His son, also William, likewise became the 3rd Earl of Devonshire. Hobbes served as a tutor and secretary to both men. The 1st Earl's younger brother, Charles Cavendish, had two sons who were patrons of Hobbes. The elder son, William Cavendish, later 1st Duke of Newcastle, was a leading supporter of Charles I during the Civil War in which he personally financed an army for the king, having been governor to the Prince of Wales, Charles James, Duke of Cornwall. It was to this William Cavendish that Hobbes dedicated his Elements of Law.
Hobbes became a companion to the younger William Cavendish and they both took part in a grand tour of Europe between 1610 and 1615. Hobbes was exposed to European scientific and critical methods during the tour, in contrast to the scholastic philosophy that he had learned in Oxford. In Venice, Hobbes made the acquaintance of Fulgenzio Micanzio, an associate of Paolo Sarpi, a Venetian scholar and statesman.
His scholarly efforts at the time were aimed at a careful study of classical Greek and Latin authors, the outcome of which was, in 1628, his edition of Thucydides' History of the Peloponnesian War, the first translation of that work into English directly from a Greek manuscript. Hobbes professed a deep admiration for Thucydides, praising him as "the most politic historiographer that ever writ," and one scholar has suggested that "Hobbes' reading of Thucydides confirmed, or perhaps crystallized, the broad outlines and many of the details of [Hobbes'] own thought." It has been argued that three of the discourses in the 1620 publication known as Horae Subsecivae: Observations and Discourses also represent the work of Hobbes from this period.
Although he did associate with literary figures like Ben Jonson and briefly worked as Francis Bacon's amanuensis, translating several of his Essays into Latin, he did not extend his efforts into philosophy until after 1629. In June 1628, his employer Cavendish, then the Earl of Devonshire, died of the plague, and his widow, the countess Christian, dismissed Hobbes.
In Paris (1629–1637)
Hobbes soon (in 1629) found work as a tutor to Gervase Clifton, the son of Sir Gervase Clifton, 1st Baronet, and continued in this role until November 1630. He spent most of this time in Paris. Thereafter, he again found work with the Cavendish family, tutoring William Cavendish, 3rd Earl of Devonshire, the eldest son of his previous pupil. Over the next seven years, as well as tutoring, he expanded his own knowledge of philosophy, which awakened in him a curiosity over key philosophic debates. In 1636 he visited Galileo Galilei in Florence, while Galileo was under house arrest following his condemnation, and was later a regular debater in philosophic groups in Paris, held together by Marin Mersenne.
Hobbes's first area of study was an interest in the physical doctrine of motion and physical momentum. Despite his interest in this phenomenon, he disdained experimental work in physics. He went on to conceive the system of thought to whose elaboration he would devote his life. His scheme was first to work out, in a separate treatise, a systematic doctrine of body, showing how physical phenomena were universally explicable in terms of motion, at least as motion or mechanical action was then understood. He then singled out Man from the realm of Nature and plants. Then, in another treatise, he showed what specific bodily motions were involved in the production of the peculiar phenomena of sensation, knowledge, affections and passions whereby Man came into relation with Man. Finally, he considered, in his crowning treatise, how Men were moved to enter into society, and argued how this must be regulated if people were not to fall back into "brutishness and misery". Thus he proposed to unite the separate phenomena of Body, Man, and the State.
In England (1637–1641)
Hobbes came back home from Paris, in 1637, to a country riven with discontent, which disrupted him from the orderly execution of his philosophic plan. However, by the end of the Short Parliament in 1640, he had written a short treatise called The Elements of Law, Natural and Politic. It was not published and only circulated as a manuscript among his acquaintances. A pirated version, however, was published about ten years later. Although it seems that much of The Elements of Law was composed before the sitting of the Short Parliament, there are polemical pieces of the work that clearly mark the influences of the rising political crisis. Nevertheless, many (though not all) elements of Hobbes's political thought were unchanged between The Elements of Law and Leviathan, which demonstrates that the events of the English Civil War had little effect on his contractarian methodology. However, the arguments in Leviathan were modified from The Elements of Law when it came to the necessity of consent in creating political obligation: Hobbes wrote in The Elements of Law that patrimonial kingdoms were not necessarily formed by the consent of the governed, while in Leviathan he argued that they were. This was perhaps a reflection either of Hobbes's thoughts about the engagement controversy or of his reaction to treatises published by Patriarchalists, such as Sir Robert Filmer, between 1640 and 1651.
When in November 1640 the Long Parliament succeeded the Short, Hobbes felt that he was in disfavour due to the circulation of his treatise and fled to Paris. He did not return for 11 years. In Paris, he rejoined the coterie around Mersenne and wrote a critique of the Meditations on First Philosophy of René Descartes, which was printed as third among the sets of "Objections" appended, with "Replies" from Descartes, in 1641. A different set of remarks on other works by Descartes succeeded only in ending all correspondence between the two.
Hobbes also extended his own works in a way, working on the third section, De Cive, which was finished in November 1641. Although it was initially only circulated privately, it was well received, and included lines of argumentation that were repeated a decade later in Leviathan. He then returned to hard work on the first two sections of his work and published little except a short treatise on optics (Tractatus opticus), included in the collection of scientific tracts published by Mersenne as Cogitata physico-mathematica in 1644. He built a good reputation in philosophic circles and in 1645 was chosen with Descartes, Gilles de Roberval and others to referee the controversy between John Pell and Longomontanus over the problem of squaring the circle.
Civil War Period (1642–1651)
The English Civil War began in 1642, and when the royalist cause began to decline in mid-1644, many royalists came to Paris and were known to Hobbes. This revitalised Hobbes's political interests, and the De Cive was republished and more widely distributed. The printing began in 1646 by Samuel de Sorbiere through the Elsevier press in Amsterdam with a new preface and some new notes in reply to objections.
In 1647, Hobbes took up a position as mathematical instructor to the young Charles, Prince of Wales, who had come to Paris from Jersey around July. This engagement lasted until 1648 when Charles went to Holland.
The company of the exiled royalists led Hobbes to produce Leviathan, which set forth his theory of civil government in relation to the political crisis resulting from the war. Hobbes compared the State to a monster (leviathan) composed of men, created under pressure of human needs and dissolved by civil strife due to human passions. The work closed with a general "Review and Conclusion", in response to the war, which answered the question: Does a subject have the right to change allegiance when a former sovereign's power to protect is irrevocably lost?
During the years of composing Leviathan, Hobbes remained in or near Paris. In 1647, he suffered a near-fatal illness that disabled him for six months. On recovering, he resumed his literary task and completed it by 1650. Meanwhile, a translation of De Cive was being produced; scholars disagree about whether it was Hobbes who translated it.
In 1650, a pirated edition of The Elements of Law, Natural and Politic was published. It was divided into two small volumes: Human Nature, or the Fundamental Elements of Policie; and De corpore politico, or the Elements of Law, Moral and Politick.
In 1651, the translation of De Cive was published under the title Philosophical Rudiments concerning Government and Society. Also, the printing of the greater work proceeded, and finally appeared in mid-1651, titled Leviathan, or the Matter, Forme, and Power of a Common Wealth, Ecclesiastical and Civil. It had a famous title-page engraving depicting a crowned giant above the waist towering above hills overlooking a landscape, holding a sword and a crozier and made up of tiny human figures. The work had immediate impact. Soon, Hobbes was more lauded and decried than any other thinker of his time. The first effect of its publication was to sever his link with the exiled royalists, who might well have killed him. The secularist spirit of his book greatly angered both Anglicans and French Catholics. Hobbes appealed to the revolutionary English government for protection and fled back to London in winter 1651. After his submission to the Council of State, he was allowed to subside into private life in Fetter Lane.
Later life
In 1658, Hobbes published the final section of his philosophical system, completing the scheme he had planned more than 19 years before. De Homine consisted for the most part of an elaborate theory of vision. The remainder of the treatise dealt partially with some of the topics more fully treated in the Human Nature and the Leviathan. In addition to publishing some controversial writings on mathematics, including disciplines like geometry, Hobbes also continued to produce philosophical works.
From the time of the Restoration, he acquired a new prominence; "Hobbism" became a byword for all that respectable society ought to denounce. The young king, Hobbes's former pupil, now Charles II, remembered Hobbes and called him to the court to grant him a pension of £100.
The king was important in protecting Hobbes when, in 1666, the House of Commons introduced a bill against atheism and profaneness. That same year, on 17 October 1666, it was ordered that the committee to which the bill was referred "should be empowered to receive information touching such books as tend to atheism, blasphemy and profaneness... in particular... the book of Mr. Hobbes called the Leviathan." Hobbes was terrified at the prospect of being labelled a heretic, and proceeded to burn some of his compromising papers. At the same time, he examined the actual state of the law of heresy. The results of his investigation were first announced in three short Dialogues added as an Appendix to his Latin translation of Leviathan, published in Amsterdam in 1668. In this appendix, Hobbes aimed to show that, since the High Court of Commission had been put down, there remained no court of heresy at all to which he was amenable, and that nothing could be heresy except opposing the Nicene Creed, which, he maintained, Leviathan did not do.
The only consequence that came of the bill was that Hobbes could never thereafter publish anything in England on subjects relating to human conduct. The 1668 edition of his works was printed in Amsterdam because he could not obtain the censor's licence for its publication in England. Other writings were not made public until after his death, including Behemoth: the History of the Causes of the Civil Wars of England and of the Counsels and Artifices by which they were carried on from the year 1640 to the year 1662. For some time, Hobbes was not even allowed to respond to any attacks by his enemies. Despite this, his reputation abroad was formidable.
Hobbes spent the last four or five years of his life with his patron, William Cavendish, 1st Duke of Devonshire, at the family's Chatsworth House estate. He had been a friend of the family since 1608 when he first tutored an earlier William Cavendish. After Hobbes's death, many of his manuscripts would be found at Chatsworth House.
His final works were an autobiography in Latin verse in 1672 and, in 1673, a translation of four books of the Odyssey into "rugged" English rhymes, which led to a complete translation of both the Iliad and Odyssey in 1675.
Death
In October 1679 Hobbes suffered a bladder disorder, and then a paralytic stroke, from which he died on 4 December 1679, aged 91, at Hardwick Hall, owned by the Cavendish family.
His last words were said to have been "A great leap in the dark", uttered in his final conscious moments. His body was interred in St John the Baptist's Church, Ault Hucknall, in Derbyshire.
Political theory
Hobbes, influenced by contemporary scientific ideas, intended his political theory to be a quasi-geometrical system, in which the conclusions followed inevitably from the premises. The main practical conclusion of Hobbes's political theory is that no state or society can be secure unless an absolute sovereign is at its disposal. From this follows the view that no individual can hold rights of property against the sovereign, and that the sovereign may therefore take the goods of its subjects without their consent. This view owes its significance to its having first been developed in the 1630s, when Charles I sought to raise revenues without the consent of Parliament, and therefore of his subjects. Hobbes rejected one of the most famous theses of Aristotle's politics, namely that human beings are naturally suited to life in a polis and do not fully realize their natures until they exercise the role of citizen. It is perhaps also important to note that Hobbes extrapolated his mechanistic understanding of nature into the social and political realm, making him a progenitor of the term 'social structure.'
Leviathan
In Leviathan, Hobbes set out his doctrine of the foundation of states and legitimate governments, and of creating an objective science of morality. Much of the book is occupied with demonstrating the necessity of a strong central authority to avoid the evil of discord and civil war.
Beginning from a mechanistic understanding of human beings and their passions, Hobbes postulates what life would be like without government, a condition which he calls the state of nature. In that state, each person would have a right, or license, to everything in the world. This, Hobbes argues, would lead to a "war of all against all" (bellum omnium contra omnes). The description contains what has been called one of the best-known passages in English philosophy, describing the natural state humankind would be in were it not for political community: a condition in which the life of man is "solitary, poor, nasty, brutish, and short".
In such a state, people fear death and lack both the things necessary to comfortable living and the hope of being able to obtain them. So, in order to avoid that condition, people accede to a social contract and establish a civil society. According to Hobbes, society is a population and a sovereign authority, to whom all individuals in that society cede some rights for the sake of protection. Power exercised by this authority cannot be resisted, because the protector's sovereign power derives from individuals' surrendering their own sovereign power for protection. The individuals are thereby the authors of all decisions made by the sovereign: "he that complaineth of injury from his sovereign complaineth that whereof he himself is the author, and therefore ought not to accuse any man but himself, no nor himself of injury because to do injury to one's self is impossible". There is no doctrine of separation of powers in Hobbes's discussion. He argues that any division of authority would lead to internal strife, jeopardizing the stability provided by an absolute sovereign. According to Hobbes, the sovereign must control civil, military, judicial and ecclesiastical powers, and even the meanings of words.
Opposition
John Bramhall
In 1654 a small treatise by Hobbes, Of Liberty and Necessity, addressed to Bishop John Bramhall, was published. Bramhall, a strong Arminian, had met and debated with Hobbes, and had afterwards written down his views and sent them privately to be answered in this form by Hobbes. Hobbes duly replied, but not for publication. However, a French acquaintance took a copy of the reply and published it with "an extravagantly laudatory epistle". Bramhall countered in 1655, when he printed everything that had passed between them (under the title of A Defence of the True Liberty of Human Actions from Antecedent or Extrinsic Necessity).
In 1656, Hobbes was ready with The Questions Concerning Liberty, Necessity and Chance, in which he replied "with astonishing force" to the bishop. As perhaps the first clear exposition of the psychological doctrine of determinism, Hobbes's own two pieces were important in the history of the free will controversy. The bishop returned to the charge in 1658 with Castigations of Mr Hobbes's Animadversions, and also included a bulky appendix entitled The Catching of Leviathan the Great Whale.
John Wallis
Hobbes opposed the existing academic arrangements, and assailed the system of the original universities in Leviathan. He went on to publish De Corpore, which contained not only tendentious views on mathematics but also an erroneous proof of the squaring of the circle. This all led mathematicians to target him for polemics and sparked John Wallis to become one of his most persistent opponents. From 1655, the publishing date of De Corpore, Hobbes and Wallis continued name-calling and bickering for nearly a quarter of a century, with Hobbes failing to admit his error to the end of his life. After years of debate, the spat over proving the squaring of the circle gained such notoriety that it has become one of the most infamous feuds in mathematical history.
Religious views
The religious opinions of Hobbes remain controversial as many positions have been attributed to him and range from atheism to orthodox Christianity. In The Elements of Law, Hobbes provided a cosmological argument for the existence of God, saying that God is "the first cause of all causes".
Hobbes was accused of atheism by several contemporaries; Bramhall accused him of teachings that could lead to atheism. This was an important accusation, and Hobbes himself wrote, in his answer to Bramhall's The Catching of Leviathan, that "atheism, impiety, and the like are words of the greatest defamation possible". Hobbes always defended himself from such accusations. In more recent times also, much has been made of his religious views by scholars such as Richard Tuck and J. G. A. Pocock, but there is still widespread disagreement about the exact significance of Hobbes's unusual views on religion.
As Martinich has pointed out, in Hobbes's time the term "atheist" was often applied to people who believed in God but not in divine providence, or to people who believed in God but also maintained other beliefs that were considered to be inconsistent with such belief or judged incompatible with orthodox Christianity. He says that this "sort of discrepancy has led to many errors in determining who was an atheist in the early modern period". In this extended early modern sense of atheism, Hobbes did take positions that strongly disagreed with church teachings of his time. For example, he argued repeatedly that there are no incorporeal substances, and that all things, including human thoughts, and even God, heaven, and hell are corporeal, matter in motion. He argued that "though Scripture acknowledge spirits, yet doth it nowhere say, that they are incorporeal, meaning thereby without dimensions and quantity". (In this view, Hobbes claimed to be following Tertullian.) Like John Locke, he also stated that true revelation can never disagree with human reason and experience, although he also argued that people should accept revelation and its interpretations for the same reason that they should accept the commands of their sovereign: in order to avoid war.
While in Venice on tour, Hobbes made the acquaintance of Fulgenzio Micanzio, a close associate of Paolo Sarpi, who had written against the pretensions of the papacy to temporal power in response to the Interdict of Pope Paul V against Venice, which refused to recognise papal prerogatives. James I had invited both men to England in 1612. Micanzio and Sarpi had argued that God willed human nature, and that human nature indicated the autonomy of the state in temporal affairs. When he returned to England in 1615, William Cavendish maintained correspondence with Micanzio and Sarpi, and Hobbes translated the latter's letters from Italian, which were circulated among the Duke's circle.
Works
1602. Latin translation of Euripides' Medea (lost).
1620. "A Discourse of Tacitus", "A Discourse of Rome", and "A Discourse of Laws." In The Horae Subsecivae: Observation and Discourses.
1626. "De Mirabilis Pecci, Being the Wonders of the Peak in Darby-shire" (publ. 1636) – a poem on the Seven Wonders of the Peak
1629. Eight Bookes of the Peloponnesian Warre, translation with an Introduction of Thucydides, History of the Peloponnesian War
1630. A Short Tract on First Principles.
Authorship doubtful, as this work is attributed by important critics to Robert Payne.
1637. A Briefe of the Art of Rhetorique
Molesworth edition title: The Whole Art of Rhetoric.
Authorship probable: Schuhmann (1998) firmly rejects the attribution of this work to Hobbes, but a preponderance of scholarship disagrees with his assessment. Schuhmann's rejection was initially disputed by historian Quentin Skinner, who would later come to agree with Schuhmann.
1639. Tractatus opticus II (also known as Latin Optical Manuscript)
1640. Elements of Law, Natural and Politic
Initially circulated only in handwritten copies; without Hobbes's permission, the first printed edition would be in 1650.
1641. Objectiones ad Cartesii Meditationes de Prima Philosophia – 3rd series of Objections
1642. Elementorum Philosophiae Sectio Tertia de Cive (Latin, 1st limited ed.).
1643. De Motu, Loco et Tempore
First edition (1973) with the title: Thomas White's De Mundo Examined
1644. Part of the "Praefatio to Mersenni Ballistica." In F. Marini Mersenni minimi Cogitata physico-mathematica. In quibus tam naturae quàm artis effectus admirandi certissimis demonstrationibus explicantur.
1644. "Opticae, liber septimus" (also known as Tractatus opticus I written in 1640). In Universae geometriae mixtaeque mathematicae synopsis, edited by Marin Mersenne.
Molesworth edition (OL V, pp. 215–248) title: "Tractatus Opticus"
1646. A Minute or First Draught of the Optiques (Harley MS 3360)
Molesworth published only the dedication to Cavendish and the conclusion in EW VII, pp. 467–471.
1646. Of Liberty and Necessity (publ. 1654)
Published without the permission of Hobbes
1647. Elementa Philosophica de Cive
Second expanded edition with a new Preface to the Reader
1650. Answer to Sir William Davenant's Preface before Gondibert
1650. Human Nature: or The fundamental Elements of Policie
Includes first thirteen chapters of The Elements of Law, Natural and Politic
Published without Hobbes's authorisation
1650. The Elements of Law, Natural and Politic (pirated ed.)
Repackaged to include two parts:
"Human Nature, or the Fundamental Elements of Policie," ch. 1–13 of Elements, Part One (1640)
"De Corpore Politico", ch. 14–19 of Elements, Part One, together with Part Two (1640)
1651. Philosophicall Rudiments concerning Government and Society – English translation of De Cive
1651. Leviathan, or the Matter, Forme, and Power of a Commonwealth, Ecclesiasticall and Civil
1654. Of Libertie and Necessitie, a Treatise
1655. De Corpore (in Latin)
1656. Elements of Philosophy, The First Section, Concerning Body – anonymous English translation of De Corpore
1656. Six Lessons to the Professor of Mathematics
1656. The Questions concerning Liberty, Necessity and Chance – reprint of Of Libertie and Necessitie, a Treatise, with the addition of Bramhall's reply and Hobbes's reply to Bramhall's reply.
1657. Stigmai, or Marks of the Absurd Geometry, Rural Language, Scottish Church Politics, and Barbarisms of John Wallis
1658. Elementorum Philosophiae Sectio Secunda De Homine
1660. Examinatio et emendatio mathematicae hodiernae qualis explicatur in libris Johannis Wallisii
1661. Dialogus physicus, sive De natura aeris
1662. Problematica Physica
English translation titled: Seven Philosophical Problems (1682)
1662. Seven Philosophical Problems, and Two Propositions of Geometry – published posthumously
1662. Mr. Hobbes Considered in his Loyalty, Religion, Reputation, and Manners. By way of Letter to Dr. Wallis – English autobiography
1666. De Principiis & Ratiocinatione Geometrarum
1666. A Dialogue between a Philosopher and a Student of the Common Laws of England (publ. 1681)
1668. Leviathan – Latin translation
1668. An answer to a book published by Dr. Bramhall, late bishop of Derry; called the Catching of the leviathan. Together with an historical narration concerning heresie, and the punishment thereof (publ. 1682)
1671. Three Papers Presented to the Royal Society Against Dr. Wallis. Together with Considerations on Dr. Wallis his Answer to them
1671. Rosetum Geometricum, sive Propositiones Aliquot Frustra antehac tentatae. Cum Censura brevi Doctrinae Wallisianae de Motu
1672. Lux Mathematica. Excussa Collisionibus Johannis Wallisii
1673. English translation of Homer's Iliad and Odyssey
1674. Principia et Problemata Aliquot Geometrica Antè Desperata, Nunc breviter Explicata & Demonstrata
1678. Decameron Physiologicum: Or, Ten Dialogues of Natural Philosophy
1679. Thomae Hobbessii Malmesburiensis Vita. Authore seipso – Latin autobiography
Translated into English in 1680
Posthumous works
1680. An Historical Narration concerning Heresie, And the Punishment thereof
1681. Behemoth, or The Long Parliament
Written in 1668, it was unpublished at the request of the King
First pirated edition: 1679
1682. Seven Philosophical Problems (English translation of Problematica Physica, 1662)
1682. A Garden of Geometrical Roses (English translation of Rosetum Geometricum, 1671)
1682. Some Principles and Problems in Geometry (English translation of Principia et Problemata, 1674)
1688. Historia Ecclesiastica Carmine Elegiaco Concinnata
Complete editions
Molesworth editions
Editions compiled by William Molesworth.
Posthumous works not included in the Molesworth editions
Translations in modern English
De Corpore, Part I. Computatio Sive Logica. Edited with an Introductory Essay by I. C. Hungerland and G. R. Vick. Translation and Commentary by A. Martinich. New York: Abaris Books, 1981.
Thomas White's De mundo Examined, translation by H. W. Jones, Bradford: Bradford University Press, 1976 (the appendixes of the Latin edition (1973) are not enclosed).
New critical editions of Hobbes's works
Clarendon Edition of the Works of Thomas Hobbes, Oxford: Clarendon Press (10 volumes published of 27 planned).
Traduction des œuvres latines de Hobbes, under the direction of Yves Charles Zarka, Paris: Vrin (5 volumes published of 17 planned).
See also
Joseph Butler
Hobbesian trap
Hobbes's moral and political philosophy
Leviathan and the Air-Pump
Social physics
References
Citations
Sources
Further reading
General resources
MacDonald, Hugh & Hargreaves, Mary. Thomas Hobbes, a Bibliography, London: The Bibliographical Society, 1952.
Hinnant, Charles H. (1980). Thomas Hobbes: A Reference Guide, Boston: G. K. Hall & Co.
Garcia, Alfred (1986). Thomas Hobbes: bibliographie internationale de 1620 à 1986 , Caen: Centre de Philosophie politique et juridique Université de Caen.
Critical studies
Brandt, Frithiof (1928). Thomas Hobbes' Mechanical Conception of Nature, Copenhagen: Levin & Munksgaard.
Jesseph, Douglas M. (1999). Squaring the Circle. The War Between Hobbes and Wallis, Chicago: University of Chicago Press.
Leijenhorst, Cees (2002). The Mechanisation of Aristotelianism. The Late Aristotelian Setting of Thomas Hobbes' Natural Philosophy, Leiden: Brill.
Lemetti, Juhana (2011). Historical Dictionary of Hobbes's Philosophy, Lanham: Scarecrow Press.
Macpherson, C. B. (1962). The Political Theory of Possessive Individualism: Hobbes to Locke, Oxford: Oxford University Press.
Malcolm, Noel (2002). Aspects of Hobbes, New York: Oxford University Press.
MacKay-Pritchard, Noah (2019). "Origins of the State of Nature", London
Malcolm, Noel (2007). Reason of State, Propaganda, and the Thirty Years' War: An Unknown Translation by Thomas Hobbes, New York: Oxford University Press.
Manent, Pierre (1996). An Intellectual History of Liberalism, Princeton: Princeton University Press.
Martinich, A. P. (2003) "Thomas Hobbes" in The Dictionary of Literary Biography, Volume 281: British Rhetoricians and Logicians, 1500–1660, Second Series, Detroit: Gale, pp. 130–144.
Martinich, A. P. (1995). A Hobbes Dictionary, Cambridge: Blackwell.
Martinich, A. P. (1997). Thomas Hobbes, New York: St. Martin's Press.
Martinich, A. P. (1992). The Two Gods of Leviathan: Thomas Hobbes on Religion and Politics, Cambridge: Cambridge University Press.
Martinich, A. P. (1999). Hobbes: A Biography, Cambridge: Cambridge University Press.
Oakeshott, Michael (1975). Hobbes on Civil Association, Oxford: Basil Blackwell.
Parkin, Jon (2007). Taming the Leviathan: The Reception of the Political and Religious Ideas of Thomas Hobbes in England 1640–1700, Cambridge: Cambridge University Press.
Pettit, Philip (2008). Made with Words. Hobbes on Language, Mind, and Politics, Princeton: Princeton University Press.
Robinson, Dave and Groves, Judy (2003). Introducing Political Philosophy, Icon Books.
Ross, George MacDonald (2009). Starting with Hobbes, London: Continuum.
Shapin, Steven and Schaffer, Simon (1995). Leviathan and the Air-Pump. Princeton: Princeton University Press.
Skinner, Quentin (1996). Reason and Rhetoric in the Philosophy of Hobbes, Cambridge: Cambridge University Press.
Skinner, Quentin (2002). Visions of Politics. Vol. III: Hobbes and Civil Science, Cambridge: Cambridge University Press
Skinner, Quentin (2008). Hobbes and Republican Liberty, Cambridge: Cambridge University Press.
Skinner, Quentin (2018). From Humanism to Hobbes: Studies in Rhetoric and Politics, Cambridge: Cambridge University Press.
Stauffer, Devin (2018). Hobbes's Kingdom of Light: A Study of the Foundations of Modern Political Philosophy, Chicago: The University of Chicago Press.
Slomp, Gabriella (ed.) (2008). Thomas Hobbes, Aldershot: Ashgate.
Strauss, Leo (1936). The Political Philosophy of Hobbes; Its Basis and Its Genesis, Oxford: Clarendon Press.
Strauss, Leo (1959). "On the Basis of Hobbes's Political Philosophy" in What Is Political Philosophy?, Glencoe, IL: Free Press, chap. 7.
Tönnies, Ferdinand (1925). Hobbes. Leben und Lehre, Stuttgart: Frommann, 3rd ed.
Tuck, Richard (1993). Philosophy and Government, 1572–1651, Cambridge: Cambridge University Press.
Vélez, Fabio (2014). La palabra y la espada: a vueltas con Hobbes, Madrid: Maia.
Vieira, Monica Brito (2009). The Elements of Representation in Hobbes, Leiden: Brill Publishers.
Zagorin, Perez (2009). Hobbes and the Law of Nature, Princeton NJ: Princeton University Press.
External links
Digital collections
Hobbes Texts – English translations by George MacDonald Ross
Contains Leviathan, lightly edited for easier reading, earlymoderntexts.com
Clarendon Edition of the Works of Thomas Hobbes
Physical collections
Philosophy encyclopedia entries
Thomas Hobbes at the Stanford Encyclopedia of Philosophy
Hobbes's Moral and Political Philosophy at the Stanford Encyclopedia of Philosophy
Hobbes: Methodology at the Internet Encyclopedia of Philosophy
Hobbes: Moral and Political Philosophy at the Internet Encyclopedia of Philosophy
Biographical information
A Brief Life of Thomas Hobbes, 1588–1679 by John Aubrey
A short biography of Thomas Hobbes, atheisme.free.fr
Hobbes biography, Philosophypages.com
Other links
Thomas Hobbes nominated by Steven Pinker for the BBC Radio 4 programme Great Lives.
Richard A. Talaska (ed.), The Hardwick Library and Hobbes's Early Intellectual Development
Hobbes studies Online edition
1588 births
1679 deaths
17th-century English writers
17th-century writers in Latin
17th-century English male writers
17th-century English philosophers
Alumni of Magdalen Hall, Oxford
Atomists
British critics of Christianity
British critics of religions
Critics of the Catholic Church
Empiricists
English expatriates in France
English mathematicians
English physicists
English political philosophers
English theologians
Epistemologists
Materialists
Metaphysicians
Ontologists
Writers from Malmesbury
Philosophers of culture
British philosophers of education
Philosophers of history
Philosophers of language
Philosophers of law
Philosophers of mathematics
Philosophers of mind
Philosophers of religion
Philosophers of science
Political realists
Rhetoric theorists
Social philosophers
Theorists on Western civilization
Alumni of St John's College, Cambridge
Natural law ethicists | Thomas Hobbes | Physics,Mathematics | 8,251 |
Concolic testing (a portmanteau of concrete and symbolic, also known as dynamic symbolic execution) is a hybrid software verification technique that performs symbolic execution, a classical technique that treats program variables as symbolic variables, along a concrete execution (testing on particular inputs) path. Symbolic execution is used in conjunction with an automated theorem prover or constraint solver based on constraint logic programming to generate new concrete inputs (test cases) with the aim of maximizing code coverage. Its main focus is finding bugs in real-world software, rather than demonstrating program correctness.
A description and discussion of the concept was introduced in "DART: Directed Automated Random Testing" by Patrice Godefroid, Nils Klarlund, and Koushik Sen. The paper "CUTE: A concolic unit testing engine for C", by Koushik Sen, Darko Marinov, and Gul Agha, further extended the idea to data structures, and first coined the term concolic testing. Another tool, called EGT (renamed to EXE and later improved and renamed to KLEE), based on similar ideas was independently developed by Cristian Cadar and Dawson Engler in 2005, and published in 2005 and 2006. PathCrawler first proposed to perform symbolic execution along a concrete execution path, but unlike concolic testing PathCrawler does not simplify complex symbolic constraints using concrete values. These tools (DART and CUTE, EXE) applied concolic testing to unit testing of C programs, and concolic testing was originally conceived as a white box improvement upon established random testing methodologies. The technique was later generalized to testing multithreaded Java programs with jCUTE, and to unit testing programs from their executable codes (tool OSMOSE). It was also combined with fuzz testing and extended to detect exploitable security issues in large-scale x86 binaries by Microsoft Research's SAGE.
The concolic approach is also applicable to model checking. In a concolic model checker, the model checker traverses states of the model representing the software being checked, while storing both a concrete state and a symbolic state. The symbolic state is used for checking properties on the software, while the concrete state is used to avoid reaching unreachable states. One such tool is ExpliSAT, by Sharon Barner, Cindy Eisner, Ziv Glazberg, Daniel Kroening and Ishai Rabinovitz.
Birth of concolic testing
Implementation of traditional symbolic execution based testing requires the implementation of a full-fledged symbolic interpreter for a programming language. Concolic testing implementors noticed that implementation of full-fledged symbolic execution can be avoided if symbolic execution can be piggy-backed with the normal execution of a program through instrumentation. This idea of simplifying implementation of symbolic execution gave birth to concolic testing.
Development of SMT solvers
An important reason for the rise of concolic testing (and more generally, symbolic-execution based analysis of programs) in the decade since it was introduced in 2005 is the dramatic improvement in the efficiency and expressive power of SMT solvers. The key technical developments that led to the rapid development of SMT solvers include combination of theories, lazy solving, DPLL(T) and the huge improvements in the speed of SAT solvers. SMT solvers that are particularly tuned for concolic testing include Z3, STP, Z3str2, and Boolector.
Example
Consider the following simple example, written in C:
#include <assert.h>

void f(int x, int y) {
    int z = 2*y;
    if (x == 100000) {
        if (x < z) {
            assert(0); /* error */
        }
    }
}
Simple random testing, trying random values of x and y, would require an impractically large number of tests to reproduce the failure.
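To put a rough number on that claim (a back-of-the-envelope sketch, assuming uniformly random 32-bit inputs and ignoring the second branch, which only worsens the odds):

```python
# Expected number of uniformly random 32-bit trials before x happens to
# hit the single value 100000 and the first branch is even entered.
p_first_branch = 1 / 2**32            # chance that a random x equals 100000
expected_trials = 1 / p_first_branch
print(f"about {expected_trials:.1e} runs")  # about 4.3e+09 runs
```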
We begin with an arbitrary choice for x and y, for example x = y = 1. In the concrete execution, line 2 sets z to 2, and the test in line 3 fails since 1 ≠ 100000. Concurrently, the symbolic execution follows the same path but treats x and y as symbolic variables. It sets z to the expression 2y and notes that, because the test in line 3 failed, x ≠ 100000. This inequality is called a path condition and must be true for all executions following the same execution path as the current one.
Since we'd like the program to follow a different execution path on the next run, we take the last path condition encountered, x ≠ 100000, and negate it, giving x = 100000. An automated theorem prover is then invoked to find values for the input variables x and y given the complete set of symbolic variable values and path conditions constructed during symbolic execution. In this case, a valid response from the theorem prover might be x = 100000, y = 0.
Running the program on this input allows it to reach the inner branch on line 4, which is not taken since 100000 (x) is not less than 0 (z = 2y). The path conditions are x = 100000 and x ≥ z. The latter is negated, giving x < z. The theorem prover then looks for x, y satisfying x = 100000, x < z, and z = 2y; for example, x = 100000, y = 50001. This input reaches the error.
Algorithm
Essentially, a concolic testing algorithm operates as follows:
Classify a particular set of variables as input variables. These variables will be treated as symbolic variables during symbolic execution. All other variables will be treated as concrete values.
Instrument the program so that each operation which may affect a symbolic variable value or a path condition is logged to a trace file, as well as any error that occurs.
Choose an arbitrary input to begin with.
Execute the program.
Symbolically re-execute the program on the trace, generating a set of symbolic constraints (including path conditions).
Negate the last path condition not already negated in order to visit a new execution path. If there is no such path condition, the algorithm terminates.
Invoke an automated satisfiability solver on the new set of path conditions to generate a new input. If there is no input satisfying the constraints, return to step 6 to try the next execution path.
Return to step 4.
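The numbered steps can be sketched as a small loop over the example function f from the previous section. This is a toy illustration, not any tool's actual implementation: the function and constraint names are invented, solve() is a hand-rolled stand-in for a real SMT solver that only understands the two constraint shapes arising here, and the loop naively negates the last condition without the bookkeeping and backtracking of steps 6–7.

```python
def f_instrumented(x, y, trace):
    """Concretely execute f while logging each branch outcome (steps 4-5)."""
    z = 2 * y
    trace.append(("x_eq_100000", x == 100000))  # branch on line 3
    if x == 100000:
        trace.append(("x_lt_2y", x < z))        # branch on line 4
        if x < z:
            return "ERROR"                      # the assert(0) is reached
    return "ok"

def solve(conds):
    """Toy 'solver': pick x, y satisfying each (condition, truth) pair."""
    x, y = 0, 0
    for name, want in conds:
        if name == "x_eq_100000":
            x = 100000 if want else 0
        elif name == "x_lt_2y":
            y = (x // 2 + 1) if want else 0     # force 2*y > x, or 2*y <= x
    return x, y

x, y = 1, 1                                     # step 3: arbitrary start
result = "ok"
while True:
    trace = []
    result = f_instrumented(x, y, trace)        # steps 4-5
    if result == "ERROR":
        break
    name, taken = trace[-1]                     # step 6: negate last condition
    x, y = solve(trace[:-1] + [(name, not taken)])  # step 7: new input

print(x, y, result)
```

Starting from x = y = 1, the loop discovers x = 100000, y = 0 on the second run and x = 100000, y = 50001 on the third, reaching the error exactly as in the walkthrough above.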
There are a few complications to the above procedure:
The algorithm performs a depth-first search over an implicit tree of possible execution paths. In practice programs may have very large or infinite path trees – a common example is testing data structures that have an unbounded size or length. To prevent spending too much time on one small area of the program, the search may be depth-limited (bounded).
Symbolic execution and automated theorem provers have limitations on the classes of constraints they can represent and solve. For example, a theorem prover based on linear arithmetic will be unable to cope with the nonlinear path condition xy = 6. Any time that such constraints arise, the symbolic execution may substitute the current concrete value of one of the variables to simplify the problem. An important part of the design of a concolic testing system is selecting a symbolic representation precise enough to represent the constraints of interest.
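As a hypothetical sketch of that substitution (the function name and constants are invented for illustration): given the nonlinear condition x·y = 6, the engine can freeze x at its current concrete value, leaving a constraint that is linear in y alone, at the price of completeness.

```python
def solve_xy_eq_6_with_concretization(concrete_x):
    """Handle the nonlinear path condition x*y == 6 by substituting the
    current concrete value of x; the residue concrete_x * y == 6 is
    linear in y and trivially solvable."""
    if concrete_x != 0 and 6 % concrete_x == 0:
        return {"x": concrete_x, "y": 6 // concrete_x}
    return None  # completeness is lost: other values of x might still work

print(solve_xy_eq_6_with_concretization(2))  # {'x': 2, 'y': 3}
print(solve_xy_eq_6_with_concretization(4))  # None, though x=2, y=3 exists
```

Note that the second call fails even though the original constraint is satisfiable: concretization trades completeness for solvability.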
Commercial success
Symbolic-execution based analysis and testing, in general, has witnessed a significant level of interest from industry. Perhaps the most famous commercial tool that uses dynamic symbolic execution (aka concolic testing) is the SAGE tool from Microsoft. The KLEE and S2E tools (both of which are open-source tools, and use the STP constraint solver) are widely used in many companies including Micro Focus Fortify, NVIDIA, and IBM. Increasingly these technologies are being used by many security companies and hackers alike to find security vulnerabilities.
Limitations
Concolic testing has a number of limitations:
If the program exhibits nondeterministic behavior, it may follow a different path than the intended one. This can lead to nontermination of the search and poor coverage.
Even in a deterministic program, a number of factors may lead to poor coverage, including imprecise symbolic representations, incomplete theorem proving, and failure to search the most fruitful portion of a large or infinite path tree.
Programs which thoroughly mix the state of their variables, such as cryptographic primitives, generate very large symbolic representations that cannot be solved in practice. For example, the condition if(sha256_hash(input) == 0x12345678) { ... } requires the theorem prover to invert SHA256, which is an open problem.
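To make the last point concrete, here is such a branch written out in Python (the 12345678 digest prefix is just the arbitrary constant from the text). Running it on any concrete input is trivial, but producing an input that takes the true branch would require the solver to invert SHA-256, so in practice a concolic engine only ever explores the false side.

```python
import hashlib

def check(data: bytes) -> bool:
    # Easy to execute concretely, practically impossible to solve for
    # symbolically: "find data making this True" means inverting SHA-256.
    return hashlib.sha256(data).digest()[:4] == bytes.fromhex("12345678")

print(check(b""))  # False (the empty string hashes to e3b0c442...)
```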
Tools
pathcrawler-online.com is a restricted version of the current PathCrawler tool which is publicly available as an online test-case server for evaluation and education purposes.
jCUTE, for Java, is available as a binary under a research-use-only license from the University of Illinois at Urbana-Champaign.
CREST is an open-source solution for C that replaced CUTE (modified BSD license).
KLEE is an open source solution built on-top of the LLVM infrastructure (UIUC license).
CATG is an open-source solution for Java (BSD license).
Jalangi is an open-source concolic testing and symbolic execution tool for JavaScript. Jalangi supports integers and strings.
Microsoft Pex, developed at Microsoft Research, is publicly available as a Microsoft Visual Studio 2010 Power Tool for the .NET Framework.
Triton is an open-source concolic execution library for binary code.
CutEr is an open-source concolic testing tool for the Erlang functional programming language.
Many tools, notably DART and SAGE, have not been made available to the public at large. Note however that for instance SAGE is "used daily" for internal security testing at Microsoft.
References
Automated theorem proving
Software testing
The irresistible force paradox (also unstoppable force paradox or shield and spear paradox) is a classic paradox formulated as "What happens when an unstoppable force meets an immovable object?" The immovable object and the unstoppable force are both implicitly assumed to be indestructible, or else the question would have a trivial resolution. Furthermore, it is assumed that they are two distinct entities.
The paradox arises because it rests on two incompatible premises—that there can exist simultaneously such things as unstoppable forces and immovable objects.
Origins
An example of this paradox in Eastern thought can be found in the origin of the Chinese word for contradiction (矛盾, máodùn, literally "spear-shield"). This term originates from a story in the 3rd century BC philosophical book Han Feizi. In the story, a man trying to sell a spear and a shield claimed that his spear could pierce any shield, and then claimed that his shield was unpierceable. When asked what would happen if he were to take his spear to strike his shield, the seller could not answer. This led to the idiom of "zìxiāng máodùn" (自相矛盾, "from each-other spear shield"), or "self-contradictory".
Another ancient and mythological example illustrating this theme can be found in the story of the Teumessian fox, which can never be caught, and the hound Laelaps, which never misses what it hunts. Realizing the paradox, Zeus, Lord of the Sky, turns both creatures into static constellations.
Applications
The problems associated with this paradox can be applied to any other conflict between two abstractly defined extremes that are opposite.
One of the answers generated by seeming paradoxes like these is that there is no contradiction – that there is not a false dilemma. Christopher Kaczor suggested that the need to change indicates a lack of power rather than the possession thereof, and as such a person who was omniscient would never need to change their mind – not changing the future would be consistent with omniscience rather than contradicting it.
Cultural references
In the graphic novel All-Star Superman by Grant Morrison, Superman is posed this question by the Ultra-Sphinx. His response to the paradox is "They surrender."
In the original Knight Rider television series from 1982, KITT brings up the irresistible force paradox when Michael asks him how they could defeat K.A.R.R., who is built the same as KITT. This is in the ninth episode of season 1, named "Trust Doesn't Rust", first aired November 19, 1982.
In Iain Banks's novel, Walking on Glass, a solution to the paradox is given.
The 2005 video game Phoenix Wright: Ace Attorney retells the story of the spear and shield from Han Feizi during its fifth case. A "King of Prosecutors" trophy that homages the story, depicting a cracked shield and a broken halberd, becomes an important piece of evidence during the case's events.
In the MOBA game League of Legends, the champion Xin Zhao wields a spear and has the line "Find me an immovable object, and I'll put this question to rest." His ultimate move, Crescent Guard, creates a circle around him and makes him impervious to damage dealt by champions outside that circle, implying he is an unstoppable force.
In Pokémon Sword and Shield, Zacian and Zamazenta, the games' mascot legendary Pokémon, are hinted to be an unstoppable sword and impenetrable shield. The two have the same total base stats, one having a higher attack stat and the other having a better defense stat, but due to their different typing, Zacian (the sword) has better resistances, making it better defensively as well.
In The Rising of the Shield Hero, during the event in which the Shield Hero and the Spear Hero duel for the first time, both of them comment on "the tale about the all-penetrating Spear and the unbreakable Shield".
The 2023 UEFA Europa League Final was described by online news outlets and commentators as "an example of an unstoppable force taking on an immovable object". Prior to the match, Sevilla FC, nicknamed "King of the Europa League", had won the competition 6 times out of 6 finals, twice the amount of any other club. On the other side, José Mourinho (managing AS Roma) had never lost a European final, and the team had won the Conference League the year prior. Both teams scored one goal in an exceptionally long game, lasting 150 minutes with added time, and were only separated through penalties, where Sevilla scored 4 and Roma 1.
See also
Newton's Flaming Laser Sword
Omnipotence paradox
On Contradiction
References
Paradoxes
Force
Infinity

Capitanian mass extinction event

The Capitanian mass extinction event, also known as the end-Guadalupian extinction event, the Guadalupian-Lopingian boundary mass extinction, the pre-Lopingian crisis, or the Middle Permian extinction, was an extinction event that predated the infamous end-Permian extinction. The mass extinction occurred during a period of decreased species richness and increased extinction rates near the end of the Middle Permian, also known as the Guadalupian epoch. It is often called the end-Guadalupian extinction event because of its initial recognition between the Guadalupian and Lopingian series; however, more refined stratigraphic study suggests that extinction peaks in many taxonomic groups occurred within the Guadalupian, in the latter half of the Capitanian age. The extinction event has been argued to have begun around 262 million years ago with the Late Guadalupian crisis, though its most intense pulse occurred 259 million years ago in what is known as the Guadalupian-Lopingian boundary event.
Having historically been considered as part of the end-Permian extinction event, and only viewed as separate relatively recently, this mass extinction is believed to be the third largest of the Phanerozoic in terms of the percentage of species lost, after the end-Permian and Late Ordovician mass extinctions, respectively, while being the fifth worst in terms of ecological severity. The global nature of the Capitanian mass extinction has been called into question by some palaeontologists as a result of some analyses finding it to have affected only low-latitude taxa in the Northern Hemisphere.
Magnitude
In the aftermath of Olson's Extinction, global diversity rose during the Capitanian. This was probably the result of disaster taxa replacing extinct guilds. The Capitanian mass extinction greatly reduced disparity (the range of different guilds); eight guilds were lost. It impacted the diversity within individual communities more severely than the Permian–Triassic extinction event. Although faunas began recovery immediately after the Capitanian extinction event, rebuilding complex trophic structures and refilling guilds, diversity and disparity fell further until the boundary.
Marine ecosystems
The impact of the Capitanian extinction event on marine ecosystems is still heavily debated by palaeontologists. Early estimates indicated a loss of marine invertebrate genera between 35 and 47%, while an estimate published in 2016 suggested a loss of 33–35% of marine genera when corrected for background extinction, the Signor–Lipps effect and clustering of extinctions in certain taxa. The loss of marine invertebrates during the Capitanian mass extinction was comparable in magnitude to the Cretaceous–Paleogene extinction event. Some studies have considered it the third or fourth greatest mass extinction in terms of the proportion of marine invertebrate genera lost; a different study found the Capitanian extinction event to be only the ninth worst in terms of taxonomic severity (number of genera lost) but found it to be the fifth worst with regard to its ecological impact (i.e., the degree of taxonomic restructuring within ecosystems or the loss of ecological niches or even entire ecosystems themselves).
Terrestrial ecosystems
Few published estimates for the impact on terrestrial ecosystems exist for the Capitanian mass extinction. Among vertebrates, Day and colleagues suggested a 74–80% loss of generic richness in tetrapods of the Karoo Basin in South Africa, including the extinction of the dinocephalians. In land plants, Stevens and colleagues found an extinction of 56% of plant species recorded in the mid-Upper Shihhotse Formation in North China, which was approximately mid-Capitanian in age. 24% of plant species in South China went extinct.
Timing
Although it is known that the Capitanian mass extinction occurred after Olson's Extinction and before the Permian–Triassic extinction event, the exact age of the Capitanian mass extinction remains controversial. This is partly due to the somewhat circumstantial age of the Capitanian–Wuchiapingian boundary itself, which is currently estimated to be approximately 259.1 million years old, but is subject to change by the Subcommission on Permian Stratigraphy of the International Commission on Stratigraphy. Additionally, there is a dispute regarding the severity of the extinction and whether the extinction in China happened at the same time as the extinction in Spitsbergen. According to one study, the Capitanian mass extinction was not one discrete event but a continuous decline in diversity that began at the end of the Wordian. Another study examining fossiliferous facies in Svalbard found no evidence for a sudden mass extinction, instead attributing local biotic changes during the Capitanian to the southward migration of many taxa through the Zechstein Sea. Carbonate platform deposits in Hungary and Hydra show no sign of an extinction event at the end of the Capitanian; the extinction event there is recorded in the middle Capitanian.
The volcanics of the Emeishan Traps, which are interbedded with tropical carbonate platforms of the Maokou Formation, are unique for preserving a mass extinction and the cause of that mass extinction. Large phreatomagmatic eruptions occurred when the Emeishan Traps first started to erupt, leading to the extinction of fusulinacean foraminifera and calcareous algae.
In the absence of radiometric ages directly constraining the extinction horizons themselves in the marine sections, most recent studies refrain from placing a number on its age, but based on extrapolations from the Permian timescale an age of approximately 260–262 Ma has been estimated; this fits broadly with radiometric ages from the terrestrial realm, assuming the two events are contemporaneous. Plant losses occurred either at the same time as the marine extinction or after it.
Marine realm
The extinction of fusulinacean foraminifera in Southwest China was originally dated to the end of the Guadalupian, but studies published in 2009 and 2010 dated the extinction of these fusulinaceans to the mid-Capitanian. Brachiopod and coral losses occurred in the middle of the Capitanian stage. The extinction suffered by the ammonoids may have occurred in the early Wuchiapingian.
Terrestrial realm
The existence of change in tetrapod faunas in the mid-Permian has long been known in South Africa and Russia. In Russia, it corresponded to the boundary between what became known as the Titanophoneus Superzone and the Scutosaurus Superzone and later the Dinocephalian Superassemblage and the Theriodontian Superassemblage, respectively. In South Africa, this corresponded to the boundary between the variously named Pareiasaurus, Dinocephalian or Tapinocephalus Assemblage Zone and the overlying assemblages. In both Russia and South Africa, this transition was associated with the extinction of the previously dominant group of therapsid amniotes, the dinocephalians, which led to its later designation as the dinocephalian extinction. Post-extinction origination rates remained low through the Pristerognathus Assemblage Zone for at least 1 million years, which suggests that there was a delayed recovery of Karoo Basin ecosystems.
After the recognition of a separate marine mass extinction at the end of the Guadalupian, the dinocephalian extinction was seen to represent its terrestrial correlate. It was subsequently suggested that, because the Russian Ischeevo fauna, considered the youngest dinocephalian fauna in that region, was constrained to below the Illawarra magnetic reversal, the dinocephalian extinction had to have occurred in the Wordian stage, well before the end of the Guadalupian; however, this constraint applied to the type locality only. The recognition of a younger dinocephalian fauna in Russia (the Sundyr Tetrapod Assemblage) and the retrieval of biostratigraphically constrained radiometric ages via uranium–lead dating of a tuff from the Tapinocephalus Assemblage Zone of the Karoo Basin demonstrated that the dinocephalian extinction did occur in the late Capitanian, around 260 million years ago.
Effects on life
Marine life
In the oceans, the Capitanian extinction event led to high extinction rates among ammonoids, corals and calcareous algal reef-building organisms, foraminifera, bryozoans, and brachiopods. It was more severe in restricted marine basins than in the open oceans. It appears to have been particularly selective against shallow-water taxa that relied on photosynthesis or a photosymbiotic relationship; many species with poorly buffered respiratory physiologies also became extinct. The extinction event led to a collapse of the reef carbonate factory in the shallow seas surrounding South China.
The ammonoids, which had been in a long-term decline for a 30 million year period since the Roadian, suffered a selective extinction pulse at the end of the Capitanian. 75.6% of coral families, 77.8% of coral genera and 82.2% of coral species that were in Permian China were lost during the Capitanian mass extinction. The Verbeekinidae, a family of large fusuline foraminifera, went extinct.
87% of brachiopod species found at the Kapp Starostin Formation on Spitsbergen disappeared over a period of tens of thousands of years; though new brachiopod and bivalve species emerged after the extinction, the dominant position of the brachiopods was taken over by the bivalves. Approximately 70% of other species found at the Kapp Starostin Formation also vanished. The fossil record of East Greenland is similar to that of Spitsbergen; the faunal losses in Canada's Sverdrup Basin are comparable to the extinctions in Spitsbergen and East Greenland, but the post-extinction recovery that happened in Spitsbergen and East Greenland did not occur in the Sverdrup Basin. Whereas rhynchonelliform brachiopods made up 99.1% of the individuals found in tropical carbonates in the Western United States, South China and Greece prior to the extinction, molluscs made up 61.2% of the individuals found in similar environments after the extinction. 87% of brachiopod species and 82% of fusulinacean foraminifer species in South China were lost. Although severe for brachiopods, the Capitanian extinction's impact on their diversity was nowhere near as strong as that of the later end-Permian extinction.
Biomarker evidence indicates red algae and photoautotrophic bacteria dominated marine microbial communities. Significant turnovers in microbial ecosystems occurred during the Capitanian mass extinction, though they were smaller in magnitude than those associated with the end-Permian extinction.
Most of the marine victims of the extinction were either endemic species of epicontinental seas around Pangaea that died when the seas closed, or were dominant species of the Paleotethys Ocean. Evidence from marine deposits in Japan and Primorye suggests that mid-latitude marine life became affected earlier by the extinction event than marine organisms of the tropics.
Whether and to what degree latitude affected the likelihood of taxa to go extinct remains disputed amongst palaeontologists. Whereas some studies conclude that the extinction event was a regional one limited to tropical areas, others suggest that there was little latitudinal variation in extinction patterns. A study examining foraminiferal extinctions in particular found that the Central and Western Palaeotethys experienced taxonomic losses of a lower magnitude than the Northern and Eastern Palaeotethys, which had the highest extinction magnitude. The same study found that Panthalassa's overall extinction magnitude was similar to that of the Central and Western Palaeotethys, but that it had a high magnitude of extinction of endemic taxa.
This mass extinction marked the beginning of the transition between the Palaeozoic and Modern evolutionary faunas. The brachiopod-mollusc transition that characterised the broader shift from the Palaeozoic to Modern evolutionary faunas has been suggested to have had its roots in the Capitanian mass extinction event, although other research has concluded that this may be an illusion created by taphonomic bias in silicified fossil assemblages, with the transition beginning only in the aftermath of the more cataclysmic end-Permian extinction. After the Capitanian mass extinction, disaster taxa such as Earlandia and Diplosphaerina became abundant in what is now South China. The initial recovery of reefs consisted of non-metazoan reefs: algal bioherms and algal-sponge reef buildups. This initial recovery interval was followed by an interval of Tubiphytes-dominated reefs, which in turn was followed by a return of metazoan, sponge-dominated reefs. Overall, reef recovery took approximately 2.5 million years.
Terrestrial life
Among terrestrial vertebrates, the main victims were dinocephalian therapsids, which were one of the most common elements of the tetrapod fauna of the Guadalupian; only one dinocephalian genus survived the Capitanian extinction event. The diversity of the anomodonts that lived during the late Guadalupian was cut in half by the Capitanian mass extinction. Terrestrial survivors of the Capitanian extinction event were commonly found in burrows.
Causes
Emeishan Traps
Volcanic emissions
It is believed that the extinction, which coincided with the beginning of a major negative δ13C excursion signifying a severe disturbance of the carbon cycle, was triggered by eruptions of the Emeishan Traps large igneous province, basalt piles from which currently cover an area of 250,000 to 500,000 km2, although the original volume of the basalts may have been anywhere from 500,000 km3 to over 1,000,000 km3. The age of the extinction event and the deposition of the Emeishan basalts are in good alignment. Reefs and other marine sediments interbedded among basalt piles indicate Emeishan volcanism initially developed underwater; terrestrial outflows of lava occurred only later in the large igneous province's period of activity. These eruptions would have released high doses of toxic mercury; increased mercury concentrations are coincident with the negative carbon isotope excursion, indicating a common volcanic cause. Coronene enrichment at the Guadalupian-Lopingian boundary further confirms the existence of massive volcanic activity; coronene can only form at extremely high temperatures created either by extraterrestrial impacts or massive volcanism, with the former being ruled out because of an absence of iridium anomalies coeval with mercury and coronene anomalies. A large amount of carbon dioxide and sulphur dioxide is believed to have been discharged into the stratosphere of the Northern and Southern Hemispheres due to the equatorial location of the Emeishan Traps, leading to sudden global cooling followed by global warming. The Emeishan Traps discharged between 130 and 188 teratonnes of carbon dioxide in total, doing so at a rate of between 0.08 and 0.25 gigatonnes of carbon dioxide per year, making them responsible for an increase in atmospheric carbon dioxide that was both one of the largest and one of the most precipitous in the entire geological history of the Earth.
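As a rough consistency check, the totals and rates quoted above imply an outgassing duration on the order of hundreds of thousands to a few million years (a back-of-the-envelope illustration, not a published estimate):

```python
# Figures quoted above: 130-188 teratonnes of CO2 emitted at 0.08-0.25 Gt/yr.
TOTAL_TT = (130, 188)          # total emissions, teratonnes of CO2
RATE_GT_PER_YR = (0.08, 0.25)  # emission rate, gigatonnes of CO2 per year

GT_PER_TT = 1000  # 1 teratonne = 1000 gigatonnes

# Shortest duration: smallest total at the fastest rate; longest: the reverse.
shortest_yr = TOTAL_TT[0] * GT_PER_TT / RATE_GT_PER_YR[1]  # ~520,000 years
longest_yr = TOTAL_TT[1] * GT_PER_TT / RATE_GT_PER_YR[0]   # ~2.35 million years
```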
The rate of carbon dioxide emissions during the Capitanian mass extinction, though extremely abrupt, was nonetheless significantly slower than that during the end-Permian extinction, during which carbon dioxide levels rose five times faster according to one study. Significant quantities of methane released by dikes and sills intruding into coal-rich deposits has been implicated as an additional driver of warming, though this idea has been challenged by studies that instead conclude that the extinction was precipitated directly by the Emeishan Traps or by their interaction with platform carbonates. The emissions of the Emeishan Traps may also have contributed to the downfall of the ozone shield, exposing the Earth's surface to a vastly increased flux of high-frequency solar radiation.
Anoxia and euxinia
Global warming resulting from the large igneous province's activity has been implicated as a cause of marine anoxia. Two anoxic events, the middle Capitanian OAE-C1 and the end-Capitanian OAE-C2, occurred thanks to Emeishan volcanic activity. Volcanic greenhouse gas release and global warming increased continental weathering and mineral erosion, which in turn has been propounded as a factor enhancing oceanic euxinia. Euxinia may have been exacerbated even further by the increasing sluggishness of ocean circulation resulting from volcanically driven warming. The initial hydrothermal nature of the Emeishan Traps meant that local marine life around South China would have been especially jeopardised by anoxia due to hyaloclastite development in restricted, fault-bounded basins. Expansion of oceanic anoxia has been posited to have occurred slightly before the Capitanian extinction event itself by some studies, though it is probable that upwelling of anoxic waters prior to the mass extinction was a local phenomenon specific to South China.
Hypercapnia and acidification
Because the ocean acts as a carbon sink absorbing atmospheric carbon dioxide, it is likely that the excessive volcanic emissions of carbon dioxide resulted in marine hypercapnia, which would have acted in conjunction with other killing mechanisms to further increase the severity of the biotic crisis. The dissolution of volcanically emitted carbon dioxide in the oceans triggered ocean acidification, which probably contributed to the demise of various calcareous marine organisms, particularly giant alatoconchid bivalves. By virtue of the greater solubility of carbon dioxide in colder waters, ocean acidification was especially lethal in high latitude waters. Furthermore, acid rain would have arisen as yet another biocidal consequence of the intense sulphur emissions produced by Emeishan Traps volcanism. This resulted in soil acidification and a decline of terrestrial infaunal invertebrates. Some researchers have cast doubt on whether significant acidification took place globally, concluding that the carbon cycle perturbation was too small to have caused a major worldwide drop in pH.
Criticism of the volcanic cause hypothesis
Not all studies, however, have supported the volcanic warming hypothesis; analysis of δ13C and δ18O values from the tooth apatite of Diictodon feliceps specimens from the Karoo Supergroup shows a positive δ13C excursion and concludes that the end of the Capitanian was marked by massive aridification in the region, although the temperature remained largely the same, suggesting that global climate change did not account for the extinction event. Analysis of vertebrate extinction rates in the Karoo Basin, specifically the upper Abrahamskraal Formation and lower Teekloof Formation, show that the large scale decrease in terrestrial vertebrate diversity coincided with volcanism in the Emeishan Traps, although robust evidence for a causal relationship between these two events remains elusive. A 2015 study called into question whether the Capitanian mass extinction event was global in nature at all or merely a regional biotic crisis limited to South China and a few other areas, finding no evidence for terrestrial or marine extinctions in eastern Australia linked to the Emeishan Traps or to any proposed extinction triggers invoked to explain the biodiversity drop in low-latitudes of the Northern Hemisphere.
Sea level fall
The Capitanian mass extinction has been attributed to sea level fall, with the widespread demise of reefs in particular being linked to this marine regression. The Guadalupian-Lopingian boundary coincided with one of the most prominent first-order marine regressions of the Phanerozoic. Evidence for abrupt sea level fall at the terminus of the Guadalupian comes from evaporites and terrestrial facies overlying marine carbonate deposits across the Guadalupian-Lopingian transition. Additionally, a tremendous unconformity is associated with the Guadalupian-Lopingian boundary in many strata across the world. The closure of the Sino-Mongolian Seaway at the end of the Capitanian has been invoked as a potential driver of Palaeotethyan biodiversity loss.
Other hypotheses
Global drying, plate tectonics, and biological competition may have also played a role in the extinction. Potential drivers of extinction proposed as causes of end-Guadalupian reef decline include fluctuations in salinity and tectonic collisions of microcontinents.
See also
Olson's Extinction
Permian–Triassic extinction event
References
Extinction events
Guadalupian

Pearlite

Pearlite is a two-phased, lamellar (or layered) structure composed of alternating layers of ferrite (87.5 wt%) and cementite (12.5 wt%) that occurs in some steels and cast irons. During slow cooling of an iron-carbon alloy, pearlite forms by a eutectoid reaction as austenite cools below 727 °C (the eutectoid temperature). Pearlite is a microstructure occurring in many common grades of steels.
Composition
The eutectoid composition of austenite is approximately 0.8% carbon; steel with less carbon content (hypoeutectoid steel) will contain a corresponding proportion of relatively pure ferrite crystallites that do not participate in the eutectoid reaction and cannot transform into pearlite. Likewise steels with higher carbon content (hypereutectoid steels) will form cementite before reaching the eutectoid point. The proportion of ferrite and cementite forming above the eutectoid point can be calculated from the iron/iron—carbide equilibrium phase diagram using the lever rule.
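The lever-rule calculation mentioned above can be sketched as follows; the compositions used here are typical textbook values for the iron/iron-carbide diagram (assumed for illustration, not taken from this article):

```python
# Lever-rule estimate of the phase fractions in pearlite.
C_EUTECTOID = 0.76   # wt% C in austenite at the eutectoid point (assumed)
C_FERRITE = 0.022    # wt% C dissolved in ferrite (assumed)
C_CEMENTITE = 6.70   # wt% C in cementite, Fe3C (assumed)

# Weight fraction of ferrite = opposite lever arm / total tie-line length.
w_ferrite = (C_CEMENTITE - C_EUTECTOID) / (C_CEMENTITE - C_FERRITE)
w_cementite = 1.0 - w_ferrite
# w_ferrite ≈ 0.89 and w_cementite ≈ 0.11, close to the ~87.5/12.5 wt%
# split quoted above; the exact figures depend on the diagram values used.
```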
Steels with pearlitic (eutectoid composition) or near-pearlitic microstructure (near-eutectoid composition) can be drawn into thin wires. Such wires, often bundled into ropes, are commercially used as piano wires, as ropes for suspension bridges, and as steel cord for tire reinforcement. High degrees of wire drawing (logarithmic strain above 3) lead to pearlitic wires with yield strengths of several gigapascals, making pearlite one of the strongest structural bulk materials on earth.
Some hypereutectoid pearlitic steel wires, when cold drawn to true (logarithmic) strains above 5, can show even higher maximal tensile strengths. Although pearlite is used in many engineering applications, the origin of its extreme strength is not well understood. It has recently been shown that cold wire drawing not only strengthens pearlite by refining the lamellar structure, but also simultaneously causes partial chemical decomposition of cementite, associated with an increased carbon content of the ferrite phase, deformation-induced lattice defects in the ferrite lamellae, and even a structural transition from crystalline to amorphous cementite. The deformation-induced decomposition and microstructural change of cementite are closely related to several other phenomena, such as a strong redistribution of carbon and other alloying elements like silicon and manganese in both the cementite and the ferrite phase; a variation of the deformation accommodation at the phase interfaces due to a change in the carbon concentration gradient at the interfaces; and mechanical alloying.
Pearlite was first identified by Henry Clifton Sorby and initially named sorbite; however, the similarity of the microstructure to nacre, and especially the optical effect caused by the scale of the structure, made the alternative name more popular.
Pearlite forms as a result of the cooperative growth of ferrite and cementite during the decomposition of austenite. The morphology of pearlite is significantly affected by the cooling rate and coiling temperature. At lower coiling temperatures, pearlite forms with finer lamellar spacing, resulting in enhanced mechanical properties due to the finer distribution of ferrite and cementite layers. Conversely, at higher coiling temperatures, pearlite forms with coarser lamellae, and a smaller amount of pearlite is observed as coarse cementite particles tend to dominate the structure. The carbon diffusion during the formation of pearlite, just ahead of the growth front, is critical in determining the thickness of the lamellae and, consequently, the strength of the steel.
Bainite is a similar structure with lamellae much smaller than the wavelength of visible light and thus lacks this pearlescent appearance. It is prepared by more rapid cooling. Unlike pearlite, whose formation involves the diffusion of all atoms, bainite grows by a displacive transformation mechanism.
The transformation of pearlite to austenite takes place at the lower critical temperature of 727 °C. At this temperature, pearlite changes to austenite through a nucleation process.
Eutectoid steel
Eutectoid steel can in principle be transformed completely into pearlite; hypoeutectoid steels can also be completely pearlitic if transformed at a temperature below the normal eutectoid. Pearlite can be hard and strong but is not particularly tough. It can be wear-resistant because of a strong lamellar network of ferrite and cementite. Examples of applications include cutting tools, high strength wires, knives, chisels, and nails.
References
Further reading
Comprehensive information on pearlite
Introduction to Physical Metallurgy by Sidney H. Avner, second edition, McGraw-Hill.
Steels: Processing, Structure, and Performance, Chapter 15 High-Carbon Steels: Fully Pearlitic Microstructures and Applications by George Krauss, 2005 Edition, ASM International.
External links
Metallurgy
Steel
Iron

Patrick Michael Grundy

Patrick Michael Grundy (16 November 1917, Yarmouth, Isle of Wight – 4 November 1959) was an English mathematician and statistician. He was one of the eponymous co-discoverers of the Sprague–Grundy function and its application to the analysis of a wide class of combinatorial games.
Biography
Grundy received his secondary education from Malvern College, to which he had obtained a Major Scholarship in 1931, and from which he graduated in 1935. While there, he demonstrated his aptitude for mathematics by winning three prizes in that subject. After leaving school he entered Clare College, Cambridge, on a Foundation Scholarship, where he read for the Mathematical Tripos from 1936 to 1939, earning first class honours in Part II and a distinction in Part III.
The work for which he is best known appeared in his first paper, Mathematics and Games, first published in the Cambridge University Mathematical Society's magazine, Eureka in 1939, and reprinted by the same magazine in 1964. The main results of this paper were discovered independently by Grundy and by Roland Sprague, and had already been published by the latter in 1935. The key idea is that of a function that assigns a non-negative integer to each position of a class of combinatorial games, now called impartial games, and which greatly assists in the identification of winning and losing positions, and of the winning moves from the former. The number assigned to a position by this function is called its Grundy value (or Grundy number), and the function itself is called the Sprague–Grundy function, in honour of its co-discoverers. The procedures developed by Sprague and Grundy for using their function to analyse impartial games are collectively called Sprague–Grundy theory, and at least two different theorems concerning these procedures have been called Sprague–Grundy theorems. The maximum number of colors used by a greedy coloring algorithm is called the Grundy number, also after this work on games, as its definition has some formal similarities with the Sprague–Grundy theory.
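As an illustration of the Sprague–Grundy function, here is a minimal sketch for a simple subtraction game; the particular move set is chosen for illustration and does not come from Grundy's paper:

```python
from functools import lru_cache

MOVES = (1, 2, 3)  # a move removes 1, 2, or 3 tokens from the heap

@lru_cache(maxsize=None)
def grundy(n):
    """Sprague–Grundy value of a heap of n tokens in this subtraction game."""
    reachable = {grundy(n - m) for m in MOVES if m <= n}
    # mex: the minimum excludant, i.e. the smallest non-negative integer
    # not among the Grundy values reachable in one move.
    g = 0
    while g in reachable:
        g += 1
    return g

# A position is a loss for the player to move exactly when its Grundy value
# is 0; for this game the values cycle as n mod 4.
```

For sums of independent impartial games, the Grundy value of the combined position is the bitwise XOR (nim-sum) of the components' values, which is what makes the function so effective at identifying winning moves.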
In 1939 Grundy began research in algebraic geometry as a research student at the University of Cambridge, eventually specialising in the theory of ideals. In 1941 he won a Smith's Prize for an essay entitled On the theory of R-modules, and his first research paper in the area, A generalisation of additive ideal theory, was published in the following year. In 1943 he was appointed to an assistant lectureship at the University College of Hull, which he left in 1944. He was awarded a Ph.D. from the University of Cambridge in 1945.
Shortly after the end of World War II, Grundy moved away from the field of algebra to take up work in statistics. In 1947 he began formal training in the latter discipline at the Rothamsted Experimental Station under a Ministry of Agriculture scholarship, completing it in 1949, whereupon he joined the permanent staff of Rothamsted as an Experimental Officer. In 1951 he was promoted to Senior Experimental Officer. During his time at Rothamsted he performed most of his published statistical research, which included investigations of problems in the design and analysis of experiments, sampling, the composition of animal populations, and the fitting of truncated distributions.
From 1954 to 1958 Grundy worked as a statistician at the National Institute for Educational Research. During this period, he collaborated with Michael Healy and D.H. Rees to extend Frank Yates's work on cost–benefit analysis of experimentation. The results of this collaboration were reported in an influential paper, Economic choice of the amount of experimentation, published in series B of the Journal of the Royal Statistical Society in 1956. In 1958 Grundy moved to a position in the Biometry Unit at Oxford. However, he retired from this position after only one term, due to ill health.
Early in 1959 Grundy married Hilary Taylor, a former colleague from the National Institute for Educational Research. Although his health greatly improved throughout 1959, he was killed in an accident in November of that year.
List of Grundy's papers
With the exception of the final item, this list is taken from Smith's obituary (1960). The first item is missing from Goddard's (1960) list, which is otherwise the same as Smith's.
(with R.S. Scorer and C.A.B. Smith)
(with M.J.R. Healy)
(with F. Yates)
(with F. Leech)
(with D.H. Rees and M.J.R. Healy)
(with D.H. Rees and M.J.R. Healy)
(with C.A.B. Smith)
. Reprint of Grundy (1939).
Notes
References
1917 births
1959 deaths
20th-century English mathematicians
Algebraists
Combinatorial game theorists
Recreational mathematicians
English statisticians
Alumni of Clare College, Cambridge | Patrick Michael Grundy | Mathematics | 1,007 |
33,675,095 | https://en.wikipedia.org/wiki/Qsort | qsort is a C standard library function that implements a sorting algorithm for arrays of arbitrary objects according to a user-provided comparison function. It is named after the "quicker sort" algorithm (a quicksort variant due to R. S. Scowen), which was originally used to implement it in the Unix C library, although the C standard does not require it to implement quicksort.
The ability to operate on different kinds of data (polymorphism) is achieved by taking a function pointer to a three-way comparison function, as well as a parameter that specifies the size of its individual input objects. The C standard requires the comparison function to implement a total order on the items in the input array.
History
A qsort function appears in Version 2 Unix in 1972 as a library assembly language subroutine. Its interface is unlike the modern version, in that it can be pseudo-prototyped as qsort(void * start, void * end, unsigned length) – sorting contiguously-stored length-long byte strings from the range [start, end). This, and the lack of a replaceable comparison function, makes it unsuitable to properly sort the system's little-endian integers, or any other data structures.
In Version 3 Unix, the interface is extended by calling compar(III), with an interface identical to modern-day memcmp. This function may be overridden by the user's program to implement any kind of ordering, in an equivalent fashion to the compar argument to standard qsort (though program-global, of course).
Version 4 Unix adds a C implementation, with an interface equivalent to the standard.
The assembly implementation is removed in Version 6 Unix.
The function was rewritten in 1983 for the Berkeley Software Distribution and standardized in ANSI C (1989).
In 1991, Bell Labs employees observed that the AT&T and BSD versions of qsort would consume quadratic time for some simple inputs. In response, Jon Bentley and Douglas McIlroy engineered a new, faster and more robust implementation. McIlroy would later produce a more complex quadratic-time input, termed AntiQuicksort, in 1998; this function constructs adversarial data on-the-fly.
Example
The following piece of C code shows how to sort a list of integers using qsort.
#include <stdlib.h>

/* Comparison function. Receives two generic (void) pointers to the items
   under comparison. */
int compare_ints(const void *p, const void *q) {
    int x = *(const int *)p;
    int y = *(const int *)q;

    /* Avoid return x - y, which can cause undefined behaviour
       because of signed integer overflow. */
    if (x < y)
        return -1;  /* Return -1 for ascending, 1 for descending order. */
    else if (x > y)
        return 1;   /* Return 1 for ascending, -1 for descending order. */
    return 0;

    /* The same logic is often written more concisely as:
       return (x > y) - (x < y); */
}

/* Sort an array of n integers, pointed to by a. */
void sort_ints(int *a, size_t n) {
    qsort(a, n, sizeof(*a), compare_ints);
}
Extensions
Since the comparison function of the original qsort accepts only two pointers, passing in additional parameters (e.g. making the comparison depend on each value's difference from some third value) must be done using global variables. The issue was solved by the BSD and GNU Unix-like systems by introducing a qsort_r function, which allows an additional parameter to be passed to the comparison function. The two versions of qsort_r have different argument orders. C11 Annex K defines a qsort_s essentially identical to GNU's qsort_r. The macOS and FreeBSD libcs also contain qsort_b, a variant that uses blocks, an analogue to closures, as an alternate solution to the same problem.
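The global-variable workaround can be sketched as follows using only standard qsort (the names target, by_distance and sort_by_distance are hypothetical, chosen for this illustration). The extra comparison parameter — here, a target value that the integers are sorted by distance from — cannot be passed to the comparison function directly, so it is smuggled in through a file-scope variable; this is not thread-safe, which is precisely the limitation qsort_r and qsort_s remove.

```c
#include <stdlib.h>

/* Hypothetical illustration: sort integers by their distance from a
   target value.  Plain qsort offers no way to pass the target to the
   comparison function, so it must live in a global variable. */
static int target;

static int by_distance(const void *p, const void *q) {
    int dx = abs(*(const int *)p - target);
    int dy = abs(*(const int *)q - target);
    return (dx > dy) - (dx < dy);
}

void sort_by_distance(int *a, size_t n, int t) {
    target = t;  /* not thread-safe: the limitation qsort_r addresses */
    qsort(a, n, sizeof *a, by_distance);
}
```

With GNU's qsort_r (or C11 Annex K's qsort_s), the target would instead be passed as the extra argument and received as a third parameter by the comparison function, eliminating the global.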
References
C standard library
Sorting algorithms | Qsort | Mathematics | 882 |
36,479,925 | https://en.wikipedia.org/wiki/NGC%206304 | NGC 6304 is a globular cluster in the constellation Ophiuchus. William Herschel discovered this star cluster using an f/13 speculum reflector telescope in 1786. It is about 19,000 light-years away, near the Milky Way's central bulge.
See also
NGC object
List of NGC objects
List of NGC objects (6001–7000)
References
External links
NED – NGC 6304
SEDS – NGC 6304
SIMBAD – NGC 6304
VizieR – NGC 6304
Globular clusters
Ophiuchus
6304 | NGC 6304 | Astronomy | 114 |
18,348,855 | https://en.wikipedia.org/wiki/Harmful%20algal%20bloom | A harmful algal bloom (HAB), or excessive algae growth, is an algal bloom that causes negative impacts to other organisms by production of natural algae-produced toxins, water deoxygenation, mechanical damage to other organisms, or by other means. HABs are sometimes defined as only those algal blooms that produce toxins, and sometimes as any algal bloom that can result in severely lower oxygen levels in natural waters, killing organisms in marine or fresh waters. Blooms can last from a few days to many months. After the bloom dies, the microbes that decompose the dead algae use up more of the oxygen, generating a "dead zone" which can cause fish die-offs. When these zones cover a large area for an extended period of time, neither fish nor plants are able to survive. Harmful algal blooms in marine environments are often called "red tides".
It is sometimes unclear what causes specific HABs as their occurrence in some locations appears to be entirely natural, while in others they appear to be a result of human activities. In certain locations there are links to particular drivers like nutrients, but HABs have also been occurring since before humans started to affect the environment. HABs are induced by eutrophication, which is an overabundance of nutrients in the water. The two most common nutrients are fixed nitrogen (nitrates, ammonia, and urea) and phosphate. The excess nutrients are emitted by agriculture, industrial pollution, excessive fertilizer use in urban/suburban areas, and associated urban runoff. Higher water temperature and low circulation also contribute.
HABs can cause significant harm to animals, the environment and economies. They have been increasing in size and frequency worldwide, a fact that many experts attribute to global climate change. The U.S. National Oceanic and Atmospheric Administration (NOAA) predicts more harmful blooms in the Pacific Ocean. Potential remedies include chemical treatment, additional reservoirs, sensors and monitoring devices, reducing nutrient runoff, research and management as well as monitoring and reporting.
Terrestrial runoff, containing fertilizer, sewage and livestock wastes, transports abundant nutrients to the seawater and stimulates bloom events. Natural causes, such as river floods or upwelling of nutrients from the sea floor, often following massive storms, provide nutrients and trigger bloom events as well. Increasing coastal developments and aquaculture also contribute to the occurrence of coastal HABs. Effects of HABs can worsen locally due to wind-driven Langmuir circulation and their biological effects.
Description and identification
HABs from cyanobacteria (blue-green algae) can appear as a foam, scum, or mat on or just below the surface of water and can take on various colors depending on their pigments. Cyanobacteria blooms in freshwater lakes or rivers may appear bright green, often with surface streaks that look like floating paint. Cyanobacterial blooms are a global problem.
Most blooms occur in warm waters with excessive nutrients. The harmful effects of such blooms are due to the toxins they produce or to their using up oxygen in the water, which can lead to fish die-offs. Not all algal blooms produce toxins, however; some only discolor water, produce a smelly odor, or add a bad taste to the water. It is not possible to tell whether a bloom is harmful from appearance alone, since sampling and microscopic examination are required. In many cases microscopy is not sufficient to tell the difference between toxic and non-toxic populations. In these cases, tools can be employed to measure the toxin level or to determine whether the toxin-production genes are present.
Terminology
In a narrow definition, harmful algal blooms are only those blooms that release toxins that affect other species. On the other hand, any algal bloom can cause dead zones due to low oxygen levels, and could therefore be called "harmful" in that sense. The usage of the term "harmful algal blooms" in the media and scientific literature is varied. In a broader definition, all "organisms and events are considered to be HABs if they negatively impact human health or socioeconomic interests or are detrimental to aquatic systems". A harmful algal bloom is "a societal concept rather than a scientific definition".
A similarly broad definition of HABs was adopted by the US Environmental Protection Agency in 2008 who stated that HABs include "potentially toxic (auxotrophic, heterotrophic) species and high-biomass producers that can cause hypoxia and anoxia and indiscriminate mortalities of marine life after reaching dense concentrations, whether or not toxins are produced".
Red tide
Harmful algal blooms in coastal areas are also often referred to as "red tides". The term "red tide" is derived from blooms of any of several species of dinoflagellate, such as Karenia brevis. However, the term is misleading since algal blooms can vary widely in color, and growth of algae is unrelated to the tides. Not all red tides are produced by dinoflagellates. The mixotrophic ciliate Mesodinium rubrum produces non-toxic blooms colored deep red by chloroplasts it obtains from the algae it eats.
As a technical term, it is being replaced in favor of more precise terminology, including the generic term "harmful algal bloom" for harmful species, and "algal bloom" for benign species.
Types
There are three main types of phytoplankton which can form into harmful algal blooms: cyanobacteria, dinoflagellates, and diatoms. All three are made up of microscopic floating organisms which, like plants, can create their own food from sunlight by means of photosynthesis. That ability makes the majority of them an essential part of the food web for small fish and other organisms.
Cyanobacteria
Harmful algal blooms in freshwater lakes and rivers, or at estuaries, where rivers flow into the ocean, are caused by cyanobacteria, which are commonly referred to as "blue-green algae" but are in fact prokaryotic bacteria, as opposed to algae, which are eukaryotes. Some cyanobacteria, including the widespread genus Microcystis, can produce hazardous cyanotoxins such as microcystins, which are hepatotoxins that harm the liver of mammals. Other types of cyanobacteria can also produce hepatotoxins, as well as neurotoxins, cytotoxins, and endotoxins. Water purification plants may be unable to remove these toxins, leading to increasingly common localised advisories against drinking tap water, as happened in Toledo, Ohio in August 2014.
In August 2021, there were 47 lakes confirmed to have algal blooms in New York State alone. In September 2021, Spokane County's Environmental Programs issued a HAB alert for Newman Lake following tests showing potentially harmful toxicity levels for cyanobacteria, while in the same month record-high levels of microcystins were reported leading to an extended 'Do Not Drink' advisory for 280 households at Clear Lake, California's second-largest freshwater lake. Water conditions in Florida, meanwhile, continue to deteriorate under increasing nutrient inflows, causing severe HAB events in both freshwater and marine areas.
HABs also cause harm by blocking the sunlight used by plants and algae to photosynthesise, or by depleting the dissolved oxygen needed by fish and other aquatic animals, which can lead to fish die-offs. When such oxygen-depleted water covers a large area for an extended period of time, it can become hypoxic or even anoxic; these areas are commonly called dead zones. Dead zones can result from numerous factors, ranging from natural phenomena to deliberate human intervention, and are not limited to large bodies of fresh water such as the Great Lakes; bodies of salt water are prone to them as well.
Dual-stage life systems of algal species
Many of the species that form harmful algal blooms have a dual-stage life cycle, alternating between a benthic resting stage and a pelagic vegetative stage. In the benthic resting stage, cells rest near the ocean floor, waiting for optimal conditions before moving towards the surface. They then transition into the pelagic vegetative stage, in which they are more active, are found near the surface of the water body, and are able to grow and multiply. It is in the pelagic vegetative stage that a bloom can occur, as the cells rapidly reproduce and take over the upper regions of the body of water. The transition between the two life stages can have multiple effects on a bloom, such as its rapid termination as cells convert from the pelagic back to the benthic stage. Many of the algal species with this dual-stage life cycle are capable of rapid vertical migration, which is required for movement from the benthic to the pelagic zone; it demands immense amounts of energy as the cells pass through the thermoclines, haloclines, and pycnoclines of the bodies of water in which they live.
Diatoms and dinoflagellates (in marine coastal areas)
The other types of algae are diatoms and dinoflagellates, found primarily in marine environments, such as ocean coastlines or bays, where they can also form algal blooms. Coastal HABs are a natural phenomenon, although in many instances, particularly when they form close to coastlines or in estuaries, it has been shown that they are exacerbated by human-induced eutrophication and/or climate change. They can occur when water temperature, salinity, and nutrients reach certain levels, which then stimulates growth. Most HAB algae are dinoflagellates. They are visible in water at a concentration of 1,000 cells/ml, while in dense blooms concentrations can exceed 200,000 cells/ml.
Diatoms produce domoic acid, another neurotoxin, which can cause seizures in higher vertebrates and birds as it concentrates up the food chain. Domoic acid readily accumulates in the bodies of shellfish, sardines, and anchovies, which if then eaten by sea lions, otters, cetaceans, birds or people, can affect the nervous system causing serious injury or death. In the summer of 2015, the state governments closed important shellfish fisheries in Washington, Oregon, and California because of high concentrations of domoic acid in shellfish.
In the marine environment, single-celled, microscopic, plant-like organisms naturally occur in the well-lit surface layer of any body of water. These organisms, referred to as phytoplankton or microalgae, form the base of the food web upon which nearly all other marine organisms depend. Of the 5000+ species of marine phytoplankton that exist worldwide, about 2% are known to be harmful or toxic. Blooms of harmful algae can have large and varied impacts on marine ecosystems, depending on the species involved, the environment where they are found, and the mechanism by which they exert negative effects.
List of common HAB genera
Gonyaulax
Karenia
Gymnodinium
Dinophysis
Noctiluca
Chattonella
Ceratium
Amoebophyra
Alexandrium
Cochlodinium
Causes
It is sometimes unclear what causes specific HABs as their occurrence in some locations appears to be entirely natural, while in others they appear to be a result of human activities. Furthermore, there are many different species of algae that can form HABs, each with different environmental requirements for optimal growth. The frequency and severity of HABs in some parts of the world have been linked to increased nutrient loading from human activities. In other areas, HABs are a predictable seasonal occurrence resulting from coastal upwelling, a natural result of the movement of certain ocean currents.
The growth of marine phytoplankton (both non-toxic and toxic) is generally limited by the availability of nitrates and phosphates, which can be abundant in coastal upwelling zones as well as in agricultural run-off. The type of nitrates and phosphates available in the system are also a factor, since phytoplankton can grow at different rates depending on the relative abundance of these substances (e.g. ammonia, urea, nitrate ion).
A variety of other nutrient sources can also play an important role in affecting algal bloom formation, including iron, silica or carbon. Coastal water pollution produced by humans (including iron fertilization) and systematic increase in sea water temperature have also been suggested as possible contributing factors in HABs.
Among the causes of algal blooms are:
Excess nutrients—phosphorus and nitrates—from fertilizers or sewage that are discharged to water bodies (also called nutrient pollution)
climate change
thermal pollution from power plants and factories
low water levels in inland waterways and lakes, which reduces water flow and increases water temperatures
invasive filter feeders—especially Zebra mussels, Dreissena polymorpha—which preferentially eat non-toxic algae, competitors to harmful algae
Nutrients
Nutrients enter freshwater or marine environments as surface runoff from agricultural pollution and urban runoff from fertilized lawns, golf courses and other landscaped properties; and from sewage treatment plants that lack nutrient control systems. Additional nutrients are introduced from atmospheric pollution. Coastal areas worldwide, especially wetlands and estuaries, coral reefs and swamps, are prone to being overloaded with those nutrients. Most of the large cities along the Mediterranean Sea, for example, discharge all of their sewage into the sea untreated. The same is true for most coastal developing countries, while in parts of the developing world, as much as 70% of wastewater from large cities may re-enter water systems without being treated.
Residual nutrients in treated wastewater can also accumulate in downstream source water areas and fuel eutrophication, which leads progressively to a cyanobacteria-dominated system characterized by seasonal HABs. As more wastewater treatment infrastructure is built, more treated wastewater is returned to the natural water system, leading to a significant increase in these residual nutrients.
Residual nutrients combine with nutrients from other sources to increase the sediment nutrient stockpile that is the driving force behind phase shifts to entrenched eutrophic conditions.
This contributes to the ongoing degradation of dams, lakes, rivers, and reservoirs - source water areas that are starting to become known as ecological infrastructure, placing increasing pressure on wastewater treatment works and water purification plants. Such pressures, in turn, intensify seasonal HABs.
Climate change
Climate change contributes to warmer waters, which makes conditions more favorable for algae growth in more regions and farther north. In general, still, warm, shallow water, combined with high-nutrient conditions in lakes or rivers, increases the risk of harmful algal blooms. Warming of summer surface temperatures of lakes, which rose by 0.34 °C per decade between 1985 and 2009 due to global warming, will also likely increase algal blooming by 20% over the next century.
Although the drivers of harmful algal blooms are poorly understood, they do appear to have increased in range and frequency in coastal areas since the 1980s. This increase is thought to be the result of human-induced factors such as increased nutrient inputs (nutrient pollution) and climate change (in particular the warming of water temperatures). The parameters that affect the formation of HABs are ocean warming, marine heatwaves, oxygen loss, eutrophication and water pollution.
Causes or contributing factors of coastal HABs
HABs contain dense concentrations of organisms and appear as discolored water, often reddish-brown in color. It is a natural phenomenon, but the exact cause or combination of factors that result in a HAB event are not necessarily known. However, three key natural factors are thought to play an important role in a bloom - salinity, temperature, and wind. HABs cause economic harm, so outbreaks are carefully monitored. For example, the Florida Fish and Wildlife Conservation Commission provides an up-to-date status report on HABs in Florida. The Texas Parks and Wildlife Department also provides a status report. While no particular cause of HABs has been found, many different factors can contribute to their presence. These factors can include water pollution, which originates from sources such as human sewage and agricultural runoff.
The occurrence of HABs in some locations appears to be entirely natural (algal blooms are a seasonal occurrence resulting from coastal upwelling, a natural result of the movement of certain ocean currents) while in others they appear to be a result of increased nutrient pollution from human activities. The growth of marine phytoplankton is generally limited by the availability of nitrates and phosphates, which can be abundant in agricultural run-off as well as coastal upwelling zones. Other factors such as iron-rich dust influx from large desert areas such as the Sahara Desert are thought to play a major role in causing HAB events. Some algal blooms on the Pacific Coast have also been linked to occurrences of large-scale climatic oscillations such as El Niño events.
Other causes
HABs are also linked to heavy rainfall. Although HABs in the Gulf of Mexico were witnessed as early as the 1500s by the explorer Cabeza de Vaca, it is unclear what initiates these blooms and how large a role anthropogenic and natural factors play in their development.
Number and sizes
The number of reported harmful algal blooms (cyanobacterial) has been increasing throughout the world. It is unclear whether the apparent increase in frequency and severity of HABs in various parts of the world is in fact a real increase or is due to increased observation effort and advances in species identification technology.
In 2008, the U.S. government prepared a report on the problem, "Harmful Algal Bloom Management and Response: Assessment and Plan". The report recognized the seriousness of the problem:
Researchers have reported the growth of HABs in Europe, Africa and Australia. Those have included blooms on some of the African Great Lakes, such as Lake Victoria, the second largest freshwater lake in the world. India has been reporting an increase in the number of blooms each year. In 1977 Hong Kong reported its first coastal HAB. By 1987 they were getting an average of 35 per year. Additionally, there have been reports of harmful algal blooms throughout popular Canadian lakes such as Beaver Lake and Quamichan Lake. These blooms were responsible for the deaths of a few animals and led to swimming advisories.
Global warming and pollution are causing algal blooms to form in places previously considered "impossible" or rare for them to exist, such as under the ice sheets in the Arctic, in Antarctica, the Himalayan Mountains, the Rocky Mountains, and in the Sierra Nevada Mountains.
In the U.S., every coastal state has had harmful algal blooms over the last decade, and new species have emerged in locations not previously known to have problems. Inland, major rivers have seen an increase in bloom size and frequency. In 2015 the Ohio River had a bloom which stretched an unprecedented distance into adjoining states and tested positive for toxins, which created drinking water and recreation problems. A portion of Utah's Jordan River was closed due to a toxic algal bloom in 2016.
Off the west coast of South Africa, HABs caused by Alexandrium catanella occur every spring. These blooms of organisms cause severe disruptions in fisheries of these waters as the toxins in the phytoplankton cause filter-feeding shellfish in affected waters to become poisonous for human consumption.
Harmful effects
As algal blooms grow, they deplete the oxygen in the water and block sunlight from reaching fish and plants. Such blooms can last from a few days to many months. With less light, plants beneath the bloom can die and fish can starve. Furthermore, the dense population of a bloom reduces oxygen saturation during the night by respiration. And when the algae eventually die off, the microbes which decompose the dead algae use up even more oxygen, which in turn causes more fish to die or leave the area. When oxygen continues to be depleted by blooms it can lead to hypoxic dead zones, where neither fish nor plants are able to survive. These dead zones in the case of the Chesapeake Bay, where they are a normal occurrence, are also suspected of being a major source of methane.
Scientists have found that HABs were a prominent feature of previous mass extinction events, including the End-Permian Extinction.
Human health
Tests have shown some toxins near blooms can be in the air and thereby be inhaled, which could affect health.
Food
Eating fish or shellfish from lakes with a bloom nearby is not recommended. Potent toxins are accumulated in shellfish that feed on the algae. If the shellfish are consumed, various types of poisoning may result. These include amnesic shellfish poisoning (ASP), diarrhetic shellfish poisoning, neurotoxic shellfish poisoning, and paralytic shellfish poisoning. A 2002 study has shown that algal toxins may be the cause for as many as 60,000 intoxication cases in the world each year.
In 1987 a new illness emerged: amnesic shellfish poisoning (ASP). People who had eaten mussels from Prince Edward Island were found to have ASP. The illness was caused by domoic acid, produced by a diatom found in the area where the mussels were cultivated. A 2013 study found that toxic paralytic shellfish poisoning in the Philippines during HABs has caused at least 120 deaths over a few decades. After a 2014 HAB incident in Monterey Bay, California, health officials warned people not to eat certain parts of anchovy, sardines, or crab caught in the bay. In 2015 most shellfish fisheries in Washington, Oregon and California were shut down because of high concentrations of toxic domoic acid in shellfish. People have been warned that inhaling vapors from waves or wind during a HAB event may cause asthma attacks or lead to other respiratory ailments.
In 2018 agricultural officials in Utah worried that even crops could become contaminated if irrigated with toxic water, although they admit that they can't measure contamination accurately because of so many variables in farming. They issued warnings to residents, however, out of caution.
Drinking water
Persons are generally warned not to enter or drink water from algal blooms, or let their pets swim in the water since many pets have died from algal blooms. In at least one case, people began getting sick before warnings were issued. There is no treatment available for animals, including livestock cattle, if they drink from algal blooms where such toxins are present. Pets are advised to be kept away from algal blooms to avoid contact.
In some locations visitors have been warned not to even touch the water. Boaters have been told that toxins in the water can be inhaled from the spray from wind or waves. Ocean beaches, lakes and rivers have been closed due to algal blooms. After a dog died in 2015 from swimming in a bloom in California's Russian River, officials likewise posted warnings for parts of the river. Boiling the water at home before drinking does not remove the toxins.
In August 2014 the city of Toledo, Ohio advised its 500,000 residents to not drink tap water as the high toxin level from an algal bloom in western Lake Erie had affected their water treatment plant's ability to treat the water to a safe level. The emergency required using bottled water for all normal uses except showering, which seriously affected public services and commercial businesses. The bloom returned in 2015 and was forecast again for the summer of 2016.
In 2004, Kisumu Bay, the drinking water source for 500,000 people in Kisumu, Kenya, suffered similar water contamination from a bloom. In China, water was cut off to residents in 2007 due to an algal bloom in the country's third largest lake, which forced 2 million people to use bottled water. A smaller water shut-down in China affected 15,000 residents two years later at a different location. In 2016 Australia also had to cut off water to farmers.
Alan Steinman of Grand Valley State University has explained that algal blooms in general, and in Lake Erie specifically, occur largely because blue-green algae thrive on high nutrient levels together with warm, calm water. Lake Erie is more prone to blooms because it has a high nutrient level and is shallow, which causes it to warm up more quickly during the summer.
Symptoms from drinking toxic water can appear within a few hours of exposure. They include nausea, vomiting, and diarrhea, and the toxins can also trigger headaches and gastrointestinal problems. Although rare, liver toxicity can cause death. These symptoms can in turn lead to dehydration, another major concern. In high concentrations, merely touching affected water can cause skin rashes and irritate the eyes, nose, mouth, or throat. People with suspected symptoms are told to call a doctor if symptoms persist or if they cannot hold down fluids after 24 hours.
In population-level studies, bloom coverage has been significantly associated with the risk of death from non-alcoholic liver disease.
Neurological disorders
Toxic algae blooms are thought to play a role in humans developing degenerative neurological disorders such as amyotrophic lateral sclerosis and Parkinson's disease.
Less than one percent of algal blooms produce hazardous toxins, such as microcystins. Although blue-green or other algae do not usually pose a direct threat to health themselves, the toxins (poisons) they produce are considered dangerous to humans, land animals, marine mammals, birds, and fish when ingested. These include neurotoxins that destroy nerve tissue, affecting the nervous system, brain, and liver, and exposure can lead to death.
Effects on humans from harmful algal blooms in marine environments
Humans are affected by HAB species by ingesting improperly harvested shellfish, breathing in aerosolized brevetoxins (i.e. PbTx, or Ptychodiscus toxins), and in some cases through skin contact. Brevetoxins bind to voltage-gated sodium channels, important structures in cell membranes. Binding results in persistent activation of nerve cells, which interferes with neural transmission and leads to health problems. These toxins are created within the unicellular organism or as a metabolic product. The two major types of brevetoxin compounds have similar but distinct backbone structures. PbTx-2 is the primary intracellular brevetoxin produced by K. brevis blooms, although over time it can be converted to PbTx-3 through metabolic changes.
In the U.S., the seafood consumed by humans is tested regularly for toxins by the USDA to ensure safe consumption. Such testing is common in other nations. However, improper harvesting of shellfish can cause paralytic shellfish poisoning and neurotoxic shellfish poisoning in humans. Some symptoms include drowsiness, diarrhea, nausea, loss of motor control, tingling, numbing or aching of extremities, incoherence, and respiratory paralysis. Reports of skin irritation after swimming in the ocean during a HAB are common.
When the HAB cells rupture, they release extracellular brevetoxins into the environment. Some of those stay in the ocean, while other particles get aerosolized. During onshore winds, brevetoxins can become aerosolized by bubble-mediated transport, causing respiratory irritation, bronchoconstriction, coughing, and wheezing, among other symptoms.
It is recommended to avoid contact with wind-blown aerosolized toxin. Some individuals report a decrease in respiratory function after only 1 hour of exposure to a K. brevis red-tide beach and these symptoms may last for days. People with severe or persistent respiratory conditions (such as chronic lung disease or asthma) may experience stronger adverse reactions.
The National Oceanic and Atmospheric Administration's National Ocean Service provides a public conditions report identifying possible respiratory irritation impacts in areas affected by HABs.
Economic impact
Recreation and tourism
The hazards which accompany harmful algal blooms have hindered visitors' enjoyment of beaches and lakes in places in the U.S. such as Florida, California, Vermont, and Utah. Persons hoping to enjoy their vacations or days off have been kept away to the detriment of local economies. Lakes and rivers in North Dakota, Minnesota, Utah, California and Ohio have had signs posted warning about the potential of health risk.
Similar blooms have become more common in Europe, with France among the countries reporting them. In the summer of 2009, beaches in northern Brittany became covered by tonnes of potentially lethal rotting green algae. A horse being ridden along the beach collapsed and died from fumes given off by the rotting algae.
The economic damage resulting from lost business has become a serious concern. According to one 2016 report, the four main economic impacts from harmful algal blooms are damage to human health, losses to fisheries, losses to tourism and recreation, and the cost of monitoring and managing areas where blooms appear. The EPA estimates that algal blooms affect 65 percent of the country's major estuaries, at an annual cost of $2.2 billion. In the U.S. there are an estimated 166 coastal dead zones. Because data collection from sources outside the U.S. has been more difficult and limited, most estimates as of 2016 have been primarily for the U.S.
In port cities in the Shandong Province of eastern China, residents are no longer surprised when massive algal blooms arrive each year and inundate beaches. Prior to the Beijing Olympics in 2008, over 10,000 people worked to clear 20,000 tons of dead algae from beaches. In 2013 another bloom in China, thought to be its largest ever, covered an area of 7,500 square miles, and was followed by another in 2015 which blanketed an even greater 13,500 square miles. The blooms in China are thought to be caused by pollution from untreated agricultural and industrial discharges into rivers leading to the ocean.
Fisheries industry
As early as 1976 a short-term, relatively small, dead zone off the coasts of New York and New Jersey cost commercial and recreational fisheries over $500 million. In 1998 a HAB in Hong Kong killed over $10 million in high-value fish.
In 2009, the economic impact for the state of Washington's coastal counties dependent on its fishing industry was estimated to be $22 million. In 2016, the U.S. seafood industry expected future lost revenue could amount to $900 million annually.
NOAA has provided cost estimates for several blooms in recent years: $10.3 million in 2011 from a HAB's impact on Texas oyster landings; $2.4 million in lost tribal commerce income from 2015 fishery closures in the Pacific Northwest; and $40 million in lost tourism in Washington state from the same fishery closures.
Along with damage to businesses, the toll from human sickness includes lost wages and damaged health. The costs of medical treatment, investigation by health agencies through water sampling and testing, and the posting of warning signs at affected locations are also substantial.
Closures of areas where blooms occur have a large negative impact on the fishing industry, compounded by the high fish mortality that follows, higher prices due to the shortage of available fish, and reduced demand for seafood driven by fear of toxin contamination. Together these cause a significant economic loss for the industry.
Economic costs are estimated to rise. In June 2015, for instance, the largest known toxic HAB forced the shutdown of the west coast shellfish industry, the first time that has ever happened. One Seattle NOAA expert commented, "This is unprecedented in terms of the extent and magnitude of this harmful algal bloom and the warm water conditions we're seeing offshore...." The bloom covered a range from Santa Barbara, California northward to Alaska.
The negative impact on fish can be even more severe when they are confined to pens, as they are in fish farms. In 2007 a fish farm in British Columbia lost 260 tons of salmon as a result of blooms, and in 2016 a farm in Chile lost 23 million salmon after an algal bloom.
Environmental impact
Dead zones
The presence of harmful algal blooms can lead to hypoxia or anoxia in a body of water, and this oxygen depletion can create a dead zone: an area that has become unsuitable for organisms to survive. HABs cause dead zones by consuming oxygen, leaving little available to other marine organisms. When a bloom dies, the cells sink to the bottom and are decomposed by bacteria, and it is this bacterial decay that consumes the oxygen. Once oxygen levels fall low enough, the water becomes hypoxic, and marine organisms must seek out better-suited locations to survive.
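The oxygen drawdown that creates a dead zone can be sketched with a simple first-order decay model (a simplification of the classic Streeter–Phelps oxygen-sag formulation, ignoring re-aeration). All the numbers below are illustrative, not measurements:

```python
import math

def dissolved_oxygen(do_initial, bod_ultimate, k_decay, t_days):
    """Remaining dissolved oxygen (mg/L) after t_days of bacterial decay,
    ignoring re-aeration: DO(t) = DO0 - L0 * (1 - e^(-k*t))."""
    consumed = bod_ultimate * (1.0 - math.exp(-k_decay * t_days))
    return max(do_initial - consumed, 0.0)

# Hypothetical values: 8 mg/L starting oxygen, 10 mg/L ultimate oxygen
# demand from the decaying bloom, decay constant 0.2 per day.
for t in (0, 2, 5, 10):
    print(t, round(dissolved_oxygen(8.0, 10.0, 0.2, t), 2))
# prints: 0 8.0 / 2 4.7 / 5 1.68 / 10 0.0
```

Even in this toy setting, the water drops below the roughly 2 mg/L threshold commonly used to define hypoxia within about five days.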
Blooms can harm the environment even without producing toxins by depleting oxygen from the water while growing and while decaying after they die. Blooms can also block sunlight to organisms living beneath them. A record-breaking number and size of blooms have formed along the Pacific coast, in Lake Erie, in the Chesapeake Bay, and in the Gulf of Mexico, where a number of dead zones have been created as a result. In the 1960s the number of dead zones worldwide was 49; the number rose to over 400 by 2008.
Among the largest dead zones are those in northern Europe's Baltic Sea and in the Gulf of Mexico, the latter affecting a $2.8 billion U.S. fishing industry. Dead zones rarely recover and usually grow in size. One of the few ever to recover was in the Black Sea, which returned to normal fairly quickly after the collapse of the Soviet Union in the 1990s and the resulting reduction in fertilizer use.
Fish die-offs
Massive fish die-offs have been caused by HABs. In 2016, 23 million salmon which were being farmed in Chile died from a toxic algae bloom. To get rid of the dead fish, the ones fit for consumption were made into fishmeal and the rest were dumped 60 miles offshore to avoid risks to human health. The economic cost of that die-off is estimated to have been $800 million. Environmental expert Lester Brown has written that the farming of salmon and shrimp in offshore ponds concentrates waste, which contributes to eutrophication and the creation of dead zones.
Other countries have reported similar impacts, with cities such as Rio de Janeiro, Brazil seeing major fish die-offs from blooms becoming a common occurrence. In early 2015, Rio collected an estimated 50 tons of dead fish from the lagoon where water events in the 2016 Olympics were planned to take place.
The Monterey Bay has suffered from harmful algal blooms, most recently in 2015: "Periodic blooms of toxin-producing Pseudo-nitzschia diatoms have been documented for over 25 years in Monterey Bay and elsewhere along the U.S. west coast. During large blooms, the toxin accumulates in shellfish and small fish such as anchovies and sardines that feed on algae, forcing the closure of some fisheries and poisoning marine mammals and birds that feed on contaminated fish." Similar fish die-offs from toxic algae or lack of oxygen have been seen in Russia, Colombia, Vietnam, China, Canada, Turkey, Indonesia, and France.
Land animal deaths
Land animals, including livestock and pets, have been affected. Dogs have died from the toxins after swimming in algal blooms. Warnings have come from government agencies in the state of Ohio, which noted that many dog and livestock deaths have resulted from HAB exposure in the U.S. and other countries. A 2003 report noted that over the previous 30 years harmful algal blooms had become more frequent and longer-lasting; that year, 50 countries and 27 U.S. states reported human and animal illnesses linked to algal toxins. In Australia, the department of agriculture warned farmers that toxins from a HAB had the "potential to kill large numbers of livestock very quickly."
Marine mammals have also been seriously harmed, as over 50 percent of unusual marine mammal deaths are caused by harmful algal blooms. In 1999, over 65 bottlenose dolphins died during a coastal HAB in Florida. In 2013, a HAB in southwest Florida killed a record number of manatees. Whales have also died in large numbers. From 2005 to 2014, Argentina reported an average of 65 baby whale deaths per year, which experts have linked to algal blooms; a whale expert there expects the whale population to be reduced significantly. In 2003, off Cape Cod in the North Atlantic, at least 12 humpback whales died from toxic algae from a HAB. In 2015, Alaska and British Columbia reported that many humpback whales had likely died from HAB toxins, with 30 having washed ashore in Alaska. "Our leading theory at this point is that the harmful algal bloom has contributed to the deaths," said a NOAA spokesperson.
Birds have died after eating dead fish contaminated with toxic algae. Rotting and decaying fish are eaten by birds such as pelicans, seagulls, and cormorants, and possibly by marine or land mammals, which then become poisoned. Examination of dead birds showed that their nervous systems had failed from the toxin's effects. On the Oregon and Washington coast, a thousand scoters, or sea ducks, were killed in 2009. "This is huge," said a university professor. As dying or dead birds washed up on the shore, wildlife agencies went into "an emergency crisis mode."
It has even been suggested that harmful algal blooms are responsible for the deaths of animals found in fossil troves, such as the dozens of cetacean skeletons found at Cerro Ballena.
Effects on marine ecosystems
Harmful algal blooms in marine ecosystems have been observed to cause adverse effects in a wide variety of aquatic organisms, most notably marine mammals, sea turtles, seabirds, and finfish. The impacts of HAB toxins on these groups can include harmful changes to their developmental, immunological, neurological, or reproductive capacities. The most conspicuous effects of HABs on marine wildlife are large-scale mortality events associated with toxin-producing blooms. For example, a mass mortality event of 107 bottlenose dolphins occurred along the Florida panhandle in the spring of 2004 due to ingestion of menhaden contaminated with high levels of brevetoxin. Manatee mortalities have also been attributed to brevetoxin, but unlike in dolphins, the main toxin vector was an endemic seagrass species (Thalassia testudinum) in which high concentrations of brevetoxins were detected and subsequently found as a main component of the stomach contents of manatees.
Additional marine mammal species, like the highly endangered North Atlantic right whale, have been exposed to neurotoxins by preying on highly contaminated zooplankton. With the summertime habitat of this species overlapping with seasonal blooms of the toxic dinoflagellate Alexandrium fundyense, and subsequent copepod grazing, foraging right whales will ingest large concentrations of these contaminated copepods. Ingestion of such contaminated prey can affect respiratory capabilities, feeding behavior, and ultimately the reproductive condition of the population.
Immune system responses to brevetoxin exposure have also been observed in another critically endangered species, the loggerhead sea turtle. Exposure, whether from inhaling aerosolized toxins or ingesting contaminated prey, can produce clinical signs of lethargy and muscle weakness, causing these animals to wash ashore in a depressed metabolic state; blood analysis of stranded turtles shows elevated immune responses.
Examples of common harmful effects of HABs include:
the production of neurotoxins which cause mass mortalities in fish, seabirds, sea turtles, and marine mammals
human illness or death from consumption of seafood contaminated by toxic algae
mechanical damage to other organisms, such as disruption of epithelial gill tissues in fish, resulting in asphyxiation
oxygen depletion of the water column (hypoxia or anoxia) from cellular respiration and bacterial degradation
Marine life exposure
HABs occur naturally off coasts all over the world. Marine dinoflagellates produce ichthyotoxins. Where HABs occur, dead fish wash up on shore for up to two weeks after a HAB has been through the area. In addition to killing fish, the toxic algae contaminate shellfish. Some mollusks are not susceptible to the toxin, and store it in their fatty tissues. By consuming the organisms responsible for HABs, shellfish can accumulate and retain saxitoxin produced by these organisms. Saxitoxin blocks sodium channels and ingestion can cause paralysis within 30 minutes.
In addition to directly harming marine animals and vegetation loss, harmful algal blooms can also lead to ocean acidification, which occurs when the amount of carbon dioxide in the water is increased to unnatural levels. Ocean acidification slows the growth of certain species of fish and shellfish, and even prevents shell formation in certain species of mollusks. These subtle, small changes can add up over time to cause chain reactions and devastating effects on whole marine ecosystems.
Other animals that eat exposed shellfish are susceptible to the neurotoxin, which may lead to neurotoxic shellfish poisoning and sometimes even death. Most mollusks and clams filter feed, which results in higher concentrations of the toxin than just drinking the water. Scaup, for example, are diving ducks whose diet mainly consists of mollusks. When scaup eat the filter-feeding shellfish that have accumulated high levels of the HAB toxin, their population becomes a prime target for poisoning. However, even birds that do not eat mollusks can be affected by simply eating dead fish on the beach or drinking the water.
The toxins released by the blooms can kill marine animals including dolphins, sea turtles, birds, and manatees. The Florida manatee, a subspecies of the West Indian manatee, is often impacted by red tide blooms, with exposure to the toxins occurring through either consumption or inhalation. Many small barnacles, crustaceans, and other epiphytes grow on the blades of seagrass; these tiny creatures filter particles from the surrounding water as their main food source. During red tide blooms they also filter toxic red tide cells from the water, and the toxins become concentrated inside them. Although the toxins do not harm the epiphytes themselves, they are extremely poisonous to marine creatures that consume (or accidentally consume) the exposed epiphytes, such as manatees. When manatees unknowingly consume exposed epiphytes while grazing on seagrass, the toxins are released from the epiphytes and ingested. In addition to consumption, manatees may also be exposed to airborne brevetoxins released from harmful red-tide cells when passing through algal blooms.
Manatees also mount an immune response to HABs and their toxins that can make them even more susceptible to other stressors; as a result, they can die from either the immediate effects or the after-effects of a HAB. In addition to causing mortalities, red-tide exposure causes severe sublethal health problems among Florida manatee populations. Studies have shown that red-tide exposure among free-ranging Florida manatees negatively impacts immune functioning by causing increased inflammation, reduced lymphocyte proliferation responses, and oxidative stress.
In one experiment, fish such as Atlantic herring, American pollock, winter flounder, Atlantic salmon, and cod were dosed orally with these toxins; within minutes the subjects began to lose equilibrium and swim in an irregular, jerking pattern, followed by paralysis and shallow, arrhythmic breathing, and eventually death after about an hour. HABs have also been shown to negatively affect memory function in sea lions.
Potential remedies
Reducing nutrient runoff
Since many algal blooms are caused by a major influx of nutrient-rich runoff into a body of water, programs to treat wastewater, reduce the overuse of agricultural fertilizers, and reduce the bulk flow of runoff can be effective in reducing severe algal blooms at river mouths, estuaries, and the ocean directly offshore of a river's mouth.
The nitrates and phosphorus in fertilizers cause algal blooms when they run off into lakes and rivers after heavy rains. Modifications to farming methods have been suggested, such as applying fertilizer in a targeted way, at the appropriate time and exactly where it can do the most good for crops, to reduce potential runoff. One method used successfully is drip irrigation, which instead of dispersing fertilizer widely across fields delivers it to plant roots through a network of tubes and emitters, leaving no fertilizer to be washed away. Drip irrigation also prevents the formation of algal blooms in drinking-water reservoirs while saving up to 50% of the water typically used by agriculture.
There have also been proposals to create buffer zones of foliage and wetlands to help filter out the phosphorus before it reaches water. Other experts have suggested using conservation tillage, changing crop rotations, and restoring wetlands. It is possible for some dead zones to shrink within a year under proper management.
There have been a few success stories in controlling chemicals. After Norway's lobster fishery collapsed in 1986 due to low oxygen levels, for instance, the government in neighboring Denmark took action and reduced phosphorus output by 80 percent which brought oxygen levels closer to normal. Similarly, dead zones in the Black Sea and along the Danube River recovered after phosphorus applications by farmers were reduced by 60%.
Nutrients can be permanently removed from wetlands by harvesting wetland plants, reducing nutrient influx into surrounding bodies of water. Research is ongoing to determine the efficacy of floating mats of cattails in removing nutrients from surface waters too deep to sustain the growth of wetland plants.
In the U.S., surface runoff is the largest source of nutrients added to rivers and lakes, but is mostly unregulated under the federal Clean Water Act. Locally developed initiatives to reduce nutrient pollution are underway in various areas of the country, such as the Great Lakes region and the Chesapeake Bay. To help reduce algal blooms in Lake Erie, the State of Ohio presented a plan in 2016 to reduce phosphorus runoff.
Chemical treatment
Although a number of algaecides have been effective in killing algae, they have been used mostly in small bodies of water. For large algal blooms, however, adding algaecides such as silver nitrate or copper sulfate can have worse effects, such as killing fish outright and harming other wildlife. Cyanobacteria can also develop resistance to copper-containing algaecides, requiring a larger quantity of the chemical to be effective for HAB management, but introducing a greater risk to other species in the region. The negative effects can therefore be worse than letting the algae die off naturally.
In 2019, Chippewa Lake in Northeast Ohio became the first lake in the U.S. to successfully test a new chemical treatment. The chemical formula killed all of the toxic algae in the lake within a single day. The formula has already been used in China, South Africa and Israel.
In February 2020, Roodeplaat Dam in Gauteng Province, South Africa was treated with a new algicide formulation against a severe bloom of Microcystis sp. This formulation allows the granular product to float and slowly release its active ingredient, sodium percarbonate, which releases hydrogen peroxide (H2O2) at the water surface. Consequently, the effective concentrations are limited vertically to the surface of the water and spatially to areas where cyanobacteria are abundant. This provides aquatic organisms a "safe haven" in untreated areas and avoids the adverse effects associated with standard algicides.
Bioactive compounds isolated from terrestrial and aquatic plants, particularly seaweeds, have shown promise as a more environmentally friendly control for HABs. Molecules found in seaweeds such as Corallina, Sargassum, and Saccharina japonica have been shown to inhibit some bloom-forming microalgae. In addition to their anti-microalgal effects, the bioactive molecules found in these seaweeds also have antibacterial, antifungal, and antioxidant properties.
Removal of HABs using aluminum-modified clay
Other chemicals are being tested for their efficacy in removing cyanobacteria during blooms. Modified clays, such as aluminum chloride modified clay (AC-MC), aluminum sulfate modified clay (AS-MC), and polyaluminum chloride modified clay (PAC-MC), have shown positive results in vitro for the removal of Aureococcus, trapping the microalgae in the clay sediment and removing them from the top layer of water where harmful blooms occur.
Many efforts have been made to control HABs so that the harm they cause is kept to a minimum, and studies suggest that clay treatment may be an effective way to reduce their negative effects. Adding aluminum chloride, aluminum sulfate, or polyaluminum chloride to clay modifies the clay surface and increases its efficiency in removing HABs from a body of water. The aluminum-containing compounds give the clay particles a positive charge, and the particles then flocculate with the harmful algal cells: the cells group together and settle as sediment instead of remaining in suspension. Flocculation limits bloom growth and reduces the impact a bloom can have on an area.
In the Netherlands, algae and phosphate have been successfully removed from surface water by pumping affected water through a hydrodynamic separator. The treated water is then free of algae and contains significantly less phosphate, since the removed algal cells contain a great deal of phosphate; it also has lower turbidity. Future projects will study the positive effects on ecology and marine life, as plant life is expected to be restored and a reduction in bottom-dwelling fish should further reduce the turbidity of the cleaned water. The removed algae and phosphate need not become waste; they can serve as feedstock for biodigesters.
Additional reservoirs
Other experts have proposed building reservoirs to prevent the movement of algae downstream. However, that can lead to the growth of algae within the reservoir, which becomes a sediment trap with a resulting buildup of nutrients. Some researchers have found that intensive blooms in reservoirs were the primary source of toxic algae observed downstream, but the downstream movement of algae has so far been less studied, even though reservoirs are considered a likely source of transported algae.
Restoring shellfish populations
The decline of filter-feeding shellfish populations, such as oysters, likely contributes to HAB occurrence. As such, numerous research projects are assessing the potential of restored shellfish populations to reduce these blooms.
Improved monitoring
Other remedies include using improved monitoring methods, trying to improve predictability, and testing new potential methods of controlling HABs. Some countries surrounding the Baltic Sea, which has the world's largest dead zone, have considered using massive geoengineering options, such as forcing air into bottom layers to aerate them.
Mathematical models are useful for predicting future algal blooms.
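As a minimal illustration of the kind of model involved, the sketch below couples phytoplankton growth to a finite nutrient pool via Monod kinetics, integrated with a simple Euler step. All rates, units, and initial values are hypothetical, chosen only to show the characteristic bloom-then-plateau shape:

```python
def simulate_bloom(p0, n0, growth_rate, yield_coeff, half_sat, days, dt=0.1):
    """Euler integration of a minimal phytoplankton-nutrient model.
    Growth follows Monod kinetics and stalls as nutrients are exhausted.
    Returns one biomass sample per simulated day."""
    p, n = p0, n0
    history = []
    steps_per_day = int(round(1 / dt))
    for step in range(int(days * steps_per_day)):
        mu = growth_rate * n / (half_sat + n)   # Monod nutrient limitation
        dp = mu * p * dt                        # biomass added this step
        p += dp
        n = max(n - dp / yield_coeff, 0.0)      # nutrients consumed by growth
        if step % steps_per_day == 0:
            history.append(round(p, 2))
    return history

# Hypothetical run: 1 unit of biomass, 50 units of nutrients, 30 days.
curve = simulate_bloom(1.0, 50.0, growth_rate=0.5, yield_coeff=1.0,
                       half_sat=5.0, days=30)
print(curve[0], curve[-1])  # small initial biomass, plateau near p0 + n0
```

With a yield coefficient of 1, biomass plus nutrients is conserved, so the bloom plateaus near 51 units once the nutrient pool is spent; richer models add grazing, light limitation, and mixing terms.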
Sensors and monitoring devices
A growing number of scientists agree that there is an urgent need to protect the public by being able to forecast harmful algal blooms. One way they hope to do that is with sophisticated sensors which can help warn about potential blooms. The same types of sensors can also be used by water treatment facilities to help them prepare for higher toxic levels.
The only sensors now in use are located in the Gulf of Mexico. In 2008 similar sensors in the Gulf forewarned of an increased level of toxins that led to a shutdown of shellfish harvesting in Texas along with a recall of mussels, clams, and oysters, possibly saving many lives. With an increase in the size and frequency of HABs, experts state the need for significantly more sensors located around the country. The same kinds of sensors can also be used to detect threats to drinking water from intentional contamination.
Satellite and remote sensing technologies are growing in importance for monitoring, tracking, and detecting HABs. Four U.S. federal agencies—EPA, the National Aeronautics and Space Administration (NASA), NOAA, and the U.S. Geological Survey (USGS)—are working on ways to detect and measure cyanobacteria blooms using satellite data. The data may help develop early-warning indicators of cyanobacteria blooms by monitoring both local and national coverage. In 2016 automated early-warning monitoring systems were successfully tested, and for the first time proven to identify the rapid growth of algae and the subsequent depletion of oxygen in the water.
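A crude version of such an early-warning rule can be sketched as a trailing-baseline threshold test on a chlorophyll time series. The readings, the 7-day window, and the 2x trigger factor below are all hypothetical choices for illustration, not an operational algorithm:

```python
def bloom_alert(readings, window=7, factor=2.0):
    """Return indices of days where the reading exceeds `factor` times
    the mean of the previous `window` days -- a crude early-warning rule."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if readings[i] > factor * baseline:
            alerts.append(i)
    return alerts

# Hypothetical daily chlorophyll-a readings (ug/L): stable, then a spike.
series = [2.1, 2.0, 2.3, 1.9, 2.2, 2.0, 2.1, 2.2, 2.0, 6.5, 9.0, 14.0]
print(bloom_alert(series))  # -> [9, 10, 11]
```

Real systems refine this idea with satellite-derived cell-count estimates, seasonal baselines, and statistical or machine-learned anomaly detection, but the underlying logic is the same: flag growth that outpaces the recent norm.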
Examples
Notable occurrences
1530: The first alleged case off the Florida Gulf Coast is without foundation. According to the Marine Lab at the University of Miami, the first possible red tide in Florida was in 1844. Earlier "signs" came from boats sorting fish on their way to home port and dumping trash fish overboard; reports of dead fish along the coast were thus not red tide.
1793: The first recorded case in British Columbia, Canada.
1840: No deaths of humans have been attributed to Florida red tide, but people may experience respiratory irritation (coughing, sneezing, and tearing) when the red tide organism (Karenia brevis) is present along a coast and winds blow its aerosolized toxins. Swimming is usually safe, but skin irritation and burning is possible in areas of high concentration of red tide.
1844: First possible case off the Florida Gulf Coast, according to the Marine Lab at the University of Miami; probably observed by ships offshore, with no known coastal inhabitants reporting it.
1901: Lingulodinium polyedrum produces brilliant displays of bioluminescence in warm coastal waters. Seen in Southern California regularly since at least 1901.
1916: Massive fish kill along the SW Florida coast. The noxious air was thought at the time to be from a seismic underwater explosion releasing chlorine gas.
1947: Southwest Florida: A massive bloom lasting close to a year nearly destroyed the commercial fishing industry and sponge beds. The resulting poisoned surf forced the evacuation of beaches.
1972: A red tide was caused in New England by the toxic dinoflagellate Alexandrium (Gonyaulax) tamarense. Red tides caused by Gonyaulax are serious because the organism produces saxitoxin and gonyautoxins, which accumulate in shellfish and, if ingested, may lead to paralytic shellfish poisoning (PSP) and death.
1972 and 1973: Red tides killed two villagers west of Port Moresby. In March 1973 a red tide invaded Port Moresby Harbour and destroyed a Japanese pearl farm.
1976: The first PSP case in Sabah, Malaysian Borneo, with 202 reported victims and 7 deaths.
1987: A red algae bloom in Prince Edward Island caused over a million dollars in losses.
1991: The largest algal bloom on record was the 1991 Darling River cyanobacterial bloom in Australia, largely of Anabaena circinalis, which stretched along the Barwon and Darling Rivers between October and December 1991.
2005: The Canadian red tide was discovered by the research vessel (R/V) Oceanus to have come further south than in prior years, closing shellfish beds in Maine and Massachusetts and alerting authorities as far south as Montauk (Long Island, NY) to check their beds. Experts who discovered reproductive cysts in the seabed warned of a possible future spread to Long Island, which would halt the area's fishing and shellfish industry and threaten the tourist trade, a significant portion of the island's economy.
In 2008, large blooms of the alga Cochlodinium polykrikoides were found along the Chesapeake Bay and nearby tributaries such as the James River, causing millions of dollars in damage and numerous beach closures.
In 2009, Brittany, France experienced recurring macroalgal blooms caused by the large amount of fertilizer discharged into the sea by intensive pig farming, producing lethal gas emissions that led to one case of human unconsciousness and three animal deaths.
In 2010, dissolved iron in the ash from the Eyjafjallajökull volcano triggered a plankton bloom in the North Atlantic.
2011: Northern California
2011: Gulf of Mexico
In 2013, an algal bloom was caused in Qingdao, China, by sea lettuce.
2013: In January, a red tide occurred again on the West Coast Sea of Sabah in the Malaysian Borneo. Two human fatalities were reported after they consumed shellfish contaminated with the red tide toxin.
2013: In January, a red tide bloom appeared at Sarasota beach – mainly Siesta Key, Florida causing a fish kill that had a negative impact on tourists, and caused respiratory issues for beach-goers.
In 2014, Myrionecta rubra (previously known as Mesodinium rubrum), a ciliate protist that ingests cryptomonad algae, caused a bloom off the southeastern coast of Brazil.
In 2014, blue green algae caused a bloom in the western basin of Lake Erie, poisoning the Toledo, Ohio water system connected to 500,000 people.
2014: In August, a massive 'Florida red tide' bloom occurred.
2015: In June, 12 people were hospitalized in the Philippine province of Bohol for red tide poisoning.
2015: August, several beaches in the Netherlands between Katwijk and Scheveningen were plagued. Government institutions dissuaded swimmers from entering the water.
2015: September, a red tide bloom occurred in the Gulf of Mexico, affecting Padre Island National Seashore along North Padre Island and South Padre Island in Texas.
2017 and 2018: A toxic K. brevis red tide bloom in Southwest Florida brought warnings not to swim and a declared state of emergency; dead dolphins and manatees were reported, and the bloom was worsened by discharges from the Caloosahatchee River. It peaked in the summer of 2018. A rare harmful algal bloom along Florida's east coast, in Palm Beach County, occurred the weekend of September 30, 2018.
In 2019, blue-green algae (cyanobacteria) blooms were again problematic on Lake Erie. In early August 2019, satellite images depicted a bloom stretching up to 1,300 square kilometers, with the epicentre near Toledo, Ohio. The largest Lake Erie bloom to date occurred in 2015, reaching 10.5 on the severity index; the 2011 bloom reached 10. "A large bloom does not necessarily mean the cyanobacteria ... will produce toxins," said Michael McKay of the University of Windsor. Water quality testing was underway in August.
In 2019, a bloom of Noctiluca algae caused bioluminescent glow off the coast of Chennai, India. Similar blooms have been reported annually in the northern Arabian Sea since the early 2000s.
2021: In July, a large red tide occurred on the Gulf Coast of Florida in and around Tampa Bay. The event has caused the death of millions of pounds of fish, and led to the National Weather Service declaring a Beach Hazard.
2021: In October, mass deaths of shellfish (specifically crabs and lobsters) on the beaches of Northern England led the UK Government to blame an algal bloom. However, those who work in the fishing industry in the area, and some academics, have stated that pyridine poisoning was the cause.
2023: A blue-green algae bloom occurred in Lough Neagh, Northern Ireland, the largest freshwater lake in the UK and Ireland, which supplies 40% of Northern Ireland's tap water. The bloom was attributed to Northern Ireland experiencing both its wettest and hottest summer on record, making conditions ideal for blue-green algae; poor management of the Lough has also been blamed. The bloom has killed dogs and wildlife, including swans.
United States
In July 2016 Florida declared a state of emergency for four counties as a result of blooms. They were said to be "destroying" a number of businesses and affecting local economies, with many needing to shut down entirely. Some beaches were closed, and hotels and restaurants suffered a drop in business. Tourist sporting activities such as fishing and boating were also affected.
In 2019, the biggest Sargassum bloom ever seen created a crisis in the tourism industry in North America. This event was likely caused by climate change and nutrient pollution from fertilizers. Several Caribbean countries considered declaring a state of emergency due to the impact on tourism as a result of environmental damage and potentially toxic and harmful health effects.
On the U.S. coasts
The Gulf of Maine frequently experiences blooms of the dinoflagellate Alexandrium fundyense, an organism that produces saxitoxin, the neurotoxin responsible for paralytic shellfish poisoning. The well-known "Florida red tide" that occurs in the Gulf of Mexico is a HAB caused by Karenia brevis, another dinoflagellate which produces brevetoxin, the neurotoxin responsible for neurotoxic shellfish poisoning. California coastal waters also experience seasonal blooms of Pseudo-nitzschia, a diatom known to produce domoic acid, the neurotoxin responsible for amnesic shellfish poisoning.
The term red tide is most often used in the US to refer to Karenia brevis blooms in the eastern Gulf of Mexico, also called the Florida red tide. K. brevis is one of many different species of the genus Karenia found in the world's oceans.
Major advances have occurred in the study of dinoflagellates and their genomics. These include the identification of toxin-producing genes (PKS genes), exploration of the effects that environmental changes (temperature, light/dark cycles, etc.) have on gene expression, and an appreciation of the complexity of the Karenia genome. These blooms have been documented since the 1800s, and occur almost annually along Florida's coasts.
There was increased research activity on harmful algal blooms (HABs) in the 1980s and 1990s, driven primarily by media attention following the discovery of new HAB organisms and the potential adverse health effects of exposure on animals and humans. The Florida red tides have been observed to spread as far as the eastern coast of Mexico. The density of these organisms during a bloom can exceed tens of millions of cells per litre of seawater, and often discolors the water a deep reddish-brown hue.
Red tide is also sometimes used to describe harmful algal blooms on the northeast coast of the United States, particularly in the Gulf of Maine. This type of bloom is caused by another species of dinoflagellate known as Alexandrium fundyense. These blooms cause severe disruptions in the fisheries of these waters, as the saxitoxin produced by the organisms causes filter-feeding shellfish in affected waters to become poisonous for human consumption.
The related Alexandrium monilatum is found in subtropical or tropical shallow seas and estuaries in the western Atlantic Ocean, the Caribbean Sea, the Gulf of Mexico, and the eastern Pacific Ocean.
Texas
Natural water reservoirs in Texas have been threatened by anthropogenic activities from large petroleum refineries and oil wells (i.e. emissions and wastewater discharge), massive agricultural activities (i.e. pesticide release) and mining extractions (i.e. toxic wastewater), as well as by natural phenomena involving frequent HAB events. In 1985, the state of Texas documented the presence of a P. parvum (golden alga) bloom along the Pecos River for the first time. This phenomenon has affected 33 reservoirs in Texas along major river systems, including the Brazos, Canadian, Rio Grande, Colorado, and Red River, and has resulted in the death of more than 27 million fish and caused tens of millions of dollars in damage.
Chesapeake Bay
The Chesapeake Bay, the largest estuary in the U.S., has suffered from repeated large algal blooms for decades due to chemical runoff from multiple sources, including 9 large rivers and 141 smaller streams and creeks in parts of six states. In addition, the water is quite shallow and only 1% of the waste entering it gets flushed into the ocean.
By weight, 60% of the phosphates entering the bay in 2003 were from sewage treatment plants, while 60% of its nitrates came from fertilizer runoff, farm animal waste, and the atmosphere. About 300 million pounds (140 Gg) of nitrates are added to the bay each year. The population increase in the bay watershed, from 3.7 million people in 1940 to 18 million in 2015 is also a major factor, as economic growth leads to the increased use of fertilizers and rising emissions of industrial waste.
As of 2015, the six states and the local governments in the Chesapeake watershed have upgraded their sewage treatment plants to control nutrient discharges. The U.S. Environmental Protection Agency (EPA) estimates that sewage treatment plant improvements in the Chesapeake region between 1985 and 2015 have prevented the discharge of 900 million pounds (410 Gg) of nutrients, with nitrogen discharges reduced by 57% and phosphorus by 75%. Agricultural and urban runoff pollution continue to be major sources of nutrients in the bay, and efforts to manage those problems are continuing throughout the watershed.
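The pound-to-gigagram figures quoted above can be sanity-checked with a short sketch (the conversion factor is the exact international pound, 0.45359237 kg):

```python
# Sanity-check the pound-to-gigagram conversions quoted above.
LB_TO_KG = 0.45359237  # exact definition of the international pound

def pounds_to_gigagrams(lb):
    """Convert a mass in pounds to gigagrams (1 Gg = 10**6 kg)."""
    return lb * LB_TO_KG / 1e6

# 900 million pounds of nutrients prevented (1985-2015):
print(round(pounds_to_gigagrams(900e6)))  # 408, consistent with the quoted 410 Gg
# 300 million pounds of nitrates added annually:
print(round(pounds_to_gigagrams(300e6)))  # 136, consistent with the quoted 140 Gg
```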
Lake Erie
Recent algae blooms in Lake Erie have been fed primarily by agricultural runoff and have led to warnings for some people in Canada and Ohio not to drink their water. The International Joint Commission has called on United States and Canada to drastically reduce phosphorus loads into Lake Erie to address the threat.
Green Bay
Green Bay has a dead zone caused by phosphorus pollution that appears to be getting worse.
Okeechobee Waterway
Lake Okeechobee is an ideal habitat for cyanobacteria because it is shallow, sunny, and laden with nutrients from Florida's agriculture. The Okeechobee Waterway connects the lake to the Atlantic Ocean and the Gulf of Mexico through the St. Lucie River and the Caloosahatchee River, respectively. This means that harmful algal blooms are carried down the estuaries as water is released during the wet summer months. In July 2018 up to 90% of Lake Okeechobee was covered in algae. Water draining from the lake filled the region with a noxious odor and caused respiratory problems in some humans during the following month. To make matters worse, harmful red tide blooms are historically common on Florida's coasts during these same summer months. Cyanobacteria in the rivers die as they reach saltwater, but their nitrogen fixation feeds the red tide on the coast. Areas at the mouths of the estuaries such as Cape Coral and Port St. Lucie therefore experience the compounded effects of both types of harmful algal bloom. Cleanup crews hired by authorities in Lee County - where the Caloosahatchee meets the Gulf of Mexico - removed more than 1700 tons of dead marine life in August 2018.
Baltic Sea
In 2020, a large harmful algal bloom closed beaches in Poland and Finland, brought on by a combination of fertilizer runoff and extreme heat, posing a risk to flounder and mussel beds. This is seen by the Baltic Sea Action Group as a threat to biodiversity and regional fishing stocks.
Coastal seas of Bangladesh, India, and Pakistan
Open defecation is common in south Asia, but human waste is an often overlooked source of nutrient pollution in marine pollution modeling. When nitrogen (N) and phosphorus (P) contributed by human waste was included in models for Bangladesh, India, and Pakistan, the estimated N and P inputs to bodies of water increased one to two orders of magnitude compared to previous models. River export of nutrients to coastal seas increases coastal eutrophication potential (ICEP). The ICEP of the Godavari River is three times higher when N and P inputs from human waste are included.
See also
Brevetoxin
Ciguatera
Cyanobacterial bloom
Cyanotoxin
GEOHAB - an international research programme on the Global Ecology and Oceanography of Harmful algal blooms
Milky seas effect – a phenomenon in which disturbed dinoflagellates make the water glow blue at night
Pfiesteria
Thin layers (oceanography)
Water quality
Water security
References
External links
International Society for the Study of Harmful Algae (ISSHA)
FAQ about Harmful Algal Blooms (NOAA)
Harmful Algal Blooms Observing System (NOAA/HAB-OFS)
GEOHAB: The International IOC-SCOR Research Programme on the Global Ecology and Oceanography of Harmful Algal Blooms
Biological oceanography
Aquatic ecology
Fishing industry
Water quality indicators
Human impact on the environment
Agriculture and the environment
Climate change and the environment
Water pollution
Algal blooms
Red tide
Dinoflagellate biology
Fisheries science | Harmful algal bloom | Chemistry,Biology,Environmental_science | 14,449 |
1,585,274 | https://en.wikipedia.org/wiki/Heawood%20conjecture | In graph theory, the Heawood conjecture or Ringel–Youngs theorem gives a lower bound for the number of colors that are necessary for graph coloring on a surface of a given genus. For surfaces of genus 0, 1, 2, 3, 4, 5, 6, 7, ..., the required number of colors is 4, 7, 8, 9, 10, 11, 12, 12, ..., known as the chromatic number or Heawood number of the surface.
The conjecture was formulated in 1890 by P.J. Heawood and proven in 1968 by Gerhard Ringel and J.W.T. Youngs. One case, the non-orientable Klein bottle, proved an exception to the general formula. An entirely different approach was needed for the much older problem of finding the number of colors needed for the plane or sphere, solved in 1976 as the four color theorem by Haken and Appel. On the sphere the lower bound is easy, whereas for higher genera the upper bound is easy and was proved in Heawood's original short paper that contained the conjecture. In other words, Ringel, Youngs, and others had to construct extreme examples for every genus g = 1, 2, 3, …. If g = 12s + k, the genera fall into twelve cases according to k = 0, 1, 2, …, 11. To simplify, say that case k has been established if only a finite number of genera of the form 12s + k remain in doubt. Then the years in which the twelve cases were settled, and by whom, are the following:
1954, Ringel: case 5
1961, Ringel: cases 3, 7, 10
1963, Terry, Welch, Youngs: cases 0, 4
1964, Gustin, Youngs: case 1
1965, Gustin: case 9
1966, Youngs: case 6
1967, Ringel, Youngs: cases 2, 8, 11
The last seven sporadic exceptions were settled as follows:
1967, Mayer: cases 18, 20, 23
1968, Ringel, Youngs: cases 30, 35, 47, 59, and the conjecture was proved.
Formal statement
Percy John Heawood conjectured in 1890 that for a given genus g > 0, the minimum number of colors necessary to color all graphs drawn on an orientable surface of that genus (or equivalently, to color the regions of any partition of the surface into simply connected regions) is given by

\gamma(g) = \left\lfloor \frac{7 + \sqrt{1 + 48g}}{2} \right\rfloor,

where \lfloor\cdot\rfloor is the floor function.
Replacing the genus by the Euler characteristic \chi, we obtain a formula that covers both the orientable and non-orientable cases,

p(\chi) = \left\lfloor \frac{7 + \sqrt{49 - 24\chi}}{2} \right\rfloor.
This relation holds, as Ringel and Youngs showed, for all surfaces except for the Klein bottle. Philip Franklin (1930) proved that the Klein bottle requires at most 6 colors, rather than 7 as predicted by the formula. The Franklin graph can be drawn on the Klein bottle in a way that forms six mutually-adjacent regions, showing that this bound is tight.
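The Heawood bound can be evaluated directly. A minimal sketch of the formula \lfloor(7 + \sqrt{1 + 48g})/2\rfloor for orientable genus g reproduces the sequence 4, 7, 8, … quoted in the lead:

```python
from math import sqrt, floor

def heawood_number(g):
    """Chromatic number of an orientable surface of genus g >= 0,
    per the Heawood formula floor((7 + sqrt(1 + 48*g)) / 2)."""
    return floor((7 + sqrt(1 + 48 * g)) / 2)

print([heawood_number(g) for g in range(8)])
# -> [4, 7, 8, 9, 10, 11, 12, 12], matching the sequence in the lead
```

Note that g = 0 gives 4 (the four color theorem) and g = 1 gives 7, the seven colors needed on the torus.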
The upper bound, proved in Heawood's original short paper, is based on a greedy coloring algorithm. By manipulating the Euler characteristic, one can show that every graph embedded in the given surface must have at least one vertex of degree less than the given bound. If one removes this vertex, and colors the rest of the graph, the small number of edges incident to the removed vertex ensures that it can be added back to the graph and colored without increasing the needed number of colors beyond the bound. In the other direction, the proof is more difficult, and involves showing that in each case (except the Klein bottle) a complete graph with a number of vertices equal to the given number of colors can be embedded on the surface.
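The peeling argument behind the upper bound can be turned into a simple coloring procedure. The sketch below is an illustrative implementation of the idea (repeatedly remove a minimum-degree vertex, then reinsert and color), not Heawood's original presentation:

```python
def greedy_color_by_peeling(adj):
    """Color a graph by the peeling argument sketched above: repeatedly
    remove a minimum-degree vertex, then put the vertices back in reverse
    order, giving each the smallest color not used by its neighbours.
    `adj` maps each vertex to the set of its neighbours."""
    original = {v: set(nbrs) for v, nbrs in adj.items()}
    work = {v: set(nbrs) for v, nbrs in adj.items()}
    order = []
    while work:
        v = min(work, key=lambda u: len(work[u]))   # minimum-degree vertex
        order.append(v)
        for u in work[v]:
            work[u].discard(v)
        del work[v]
    color = {}
    for v in reversed(order):                       # reinsert and color
        used = {color[u] for u in original[v] if u in color}
        color[v] = next(c for c in range(len(original) + 1) if c not in used)
    return color

# Seven mutually adjacent regions (K7, as embedded on the torus) need all 7 colors:
k7 = {v: {u for u in range(7) if u != v} for v in range(7)}
print(len(set(greedy_color_by_peeling(k7).values())))  # 7
```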
Example
The torus has g = 1, so χ = 0. Therefore, as the formula states, any subdivision of the torus into regions can be colored using at most seven colors. The illustration shows a subdivision of the torus in which each of seven regions are adjacent to each other region; this subdivision shows that the bound of seven on the number of colors is tight for this case. The boundary of this subdivision forms an embedding of the Heawood graph onto the torus.
References
External links
Conjectures that have been proved
Graph coloring
Topological graph theory
Theorems in graph theory | Heawood conjecture | Mathematics | 884 |
9,258,361 | https://en.wikipedia.org/wiki/Ruppeiner%20geometry | Ruppeiner geometry is thermodynamic geometry (a type of information geometry) using the language of Riemannian geometry to study thermodynamics. George Ruppeiner proposed it in 1979. He claimed that thermodynamic systems can be represented by Riemannian geometry, and that statistical properties can be derived from the model.
This geometrical model is based on the inclusion of the theory of fluctuations into the axioms of equilibrium thermodynamics, namely, there exist equilibrium states which can be represented by points on a two-dimensional surface (manifold) and the distance between these equilibrium states is related to the fluctuation between them. This concept is associated to probabilities, i.e. the less probable a fluctuation between states, the further apart they are. This can be recognized if one considers the metric tensor gij in the distance formula (line element) between the two equilibrium states

ds^2 = g_{ij} \, dx^i \, dx^j,
where the matrix of coefficients gij is the symmetric metric tensor called the Ruppeiner metric, defined as the negative Hessian of the entropy function,

g^R_{ij} = -\frac{\partial^2 S}{\partial x^i \, \partial x^j}, \qquad x = (U, N^a),
where U is the internal energy (mass) of the system and Na refers to the extensive parameters of the system. Mathematically, the Ruppeiner geometry is one particular type of information geometry and it is similar to the Fisher–Rao metric used in mathematical statistics.
The Ruppeiner metric can be understood as the thermodynamic limit (large systems limit) of the more general Fisher information metric. For small systems (systems where fluctuations are large), the Ruppeiner metric may not exist, as second derivatives of the entropy are not guaranteed to be non-negative.
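As a concrete illustration of the definition, consider a monatomic ideal gas at fixed particle number, with entropy S(U, V) = Nk(ln V + (3/2) ln U) up to additive constants (a simplified form of the Sackur–Tetrode entropy, assumed here for illustration). The Ruppeiner metric is then the negative Hessian of S, which is diagonal with positive entries, reflecting thermodynamic stability; the second derivatives below are hard-coded analytically:

```python
# Ruppeiner metric of a monatomic ideal gas at fixed particle number,
# from S(U, V) = N*k*(ln V + 1.5*ln U) (additive constants dropped).
N, k = 1.0, 1.0  # illustrative units

def ruppeiner_metric(U, V):
    """Negative Hessian of S with respect to (U, V)."""
    g_UU = 1.5 * N * k / U**2   # -d2S/dU2
    g_VV = N * k / V**2         # -d2S/dV2
    g_UV = 0.0                  # the mixed derivative vanishes for this S
    return [[g_UU, g_UV], [g_UV, g_VV]]

print(ruppeiner_metric(2.0, 5.0))  # [[0.375, 0.0], [0.0, 0.04]]
```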
The Ruppeiner metric is conformally related to the Weinhold metric via

ds^2_R = \frac{1}{T} \, ds^2_W,

where T is the temperature of the system under consideration. Proof of the conformal relation can be easily done when one writes down the first law of thermodynamics (dU = TdS + ...) in differential form with a few manipulations. The Weinhold geometry is also considered a thermodynamic geometry. It is defined as the Hessian of the internal energy with respect to entropy and other extensive parameters.
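The conformal relation can be checked explicitly in a one-dimensional toy system with a single extensive variable U (an illustrative sketch; the entropy S(U) = aU^(3/4) is chosen arbitrarily). Here the Ruppeiner metric is −S″(U), the Weinhold metric is U″(S), and pulling the latter back to the U coordinate via dS = S′(U) dU reproduces the 1/T factor:

```python
# 1-D illustration of ds_R^2 = ds_W^2 / T for an arbitrary entropy S(U) = a*U**(3/4).
a   = 2.0                                  # arbitrary positive constant
S   = lambda U: a * U**0.75
dS  = lambda U: 0.75 * a * U**-0.25        # S'(U) = 1/T
d2S = lambda U: -0.1875 * a * U**-1.25     # S''(U)
# U(S) = (S/a)**(4/3), so U''(S) analytically:
d2U = lambda s: (4.0 / 9.0) / a**2 * (s / a)**(-2.0 / 3.0)

U0  = 3.0
T0  = 1.0 / dS(U0)                         # temperature, dU/dS
g_R = -d2S(U0)                             # 1-D Ruppeiner metric (in U)
g_W = d2U(S(U0))                           # 1-D Weinhold metric (in S)

# ds_R^2 = ds_W^2 / T, with ds_W^2 pulled back via dS = S'(U) dU:
assert abs(g_R - g_W * dS(U0)**2 / T0) < 1e-12
print("conformal relation holds at U0 =", U0)
```

The identity holds for any smooth S(U), since U″(S) = −S″/(S′)³ and 1/T = S′(U).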
It has long been observed that the Ruppeiner metric is flat for systems with noninteracting underlying statistical mechanics such as the ideal gas. Curvature singularities signal critical behaviors. In addition, it has been applied to a number of statistical systems including Van der Waals gas. Recently the anyon gas has been studied using this approach.
Application to black hole systems
This geometry has been applied to black hole thermodynamics, with some physically relevant results. The most physically significant case is for the Kerr black hole in higher dimensions, where the curvature singularity signals thermodynamic instability, as found earlier by conventional methods.
The entropy of a black hole is given by the well-known Bekenstein–Hawking formula

S_{BH} = \frac{k_B c^3 A}{4 G \hbar},

where k_B is the Boltzmann constant, c is the speed of light, G is the Newtonian constant of gravitation, \hbar is the reduced Planck constant, and A is the area of the event horizon of the black hole. Calculating the Ruppeiner geometry of the black hole's entropy is, in principle, straightforward, but it is important that the entropy should be written in terms of extensive parameters,

S = S(M, N^a),
where M is the ADM mass of the black hole and N^a are the conserved charges, with a running from 1 to n. The signature of the metric reflects the sign of the hole's specific heat. For a Reissner–Nordström black hole, the Ruppeiner metric has a Lorentzian signature, which corresponds to the negative heat capacity it possesses, while for the BTZ black hole, we have a Euclidean signature. This calculation cannot be done for the Schwarzschild black hole, because its entropy is a function of the mass alone,

S = S(M),

which renders the metric degenerate.
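A numerical sketch of the Bekenstein–Hawking entropy S = k_B c³A/(4Għ) for a Schwarzschild hole (using A = 4πr_s² with r_s = 2GM/c², and CODATA values for the constants) makes the degeneracy plain: the entropy depends on the mass alone, scaling as M²:

```python
from math import pi

# Physical constants (SI, CODATA values, rounded)
k_B   = 1.380649e-23     # J/K
c     = 2.99792458e8     # m/s
G     = 6.67430e-11      # m^3 kg^-1 s^-2
hbar  = 1.054571817e-34  # J s
M_sun = 1.989e30         # kg, approximate solar mass

def bh_entropy(M):
    """Bekenstein-Hawking entropy of a Schwarzschild black hole of mass M,
    with horizon area A = 4*pi*r_s**2 and r_s = 2*G*M/c**2."""
    r_s = 2 * G * M / c**2
    A = 4 * pi * r_s**2
    return k_B * c**3 * A / (4 * G * hbar)

S1 = bh_entropy(M_sun)
print(S1 > 0)                                        # True
print(abs(bh_entropy(2 * M_sun) / S1 - 4) < 1e-12)   # True: S scales as M**2
```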
References
Riemannian geometry
Thermodynamics
New College of Florida faculty
Mathematical physics | Ruppeiner geometry | Physics,Chemistry,Mathematics | 795 |
76,152,796 | https://en.wikipedia.org/wiki/Connorstictic%20acid | Connorstictic acid is an organic compound in the structural class of chemicals known as depsidones. It occurs as a secondary metabolite in many lichen species in several genera.
History
Connorstictic acid was first identified and named in 1971 by Chicita Culberson and William Culberson, from chemical analysis of Diploschistes lichens. They described it as "probably a β-orcinol depsidone", and noted that it commonly co-occurred in lichens with norstictic acid. Its structure was published in 1980 following spectral and elemental analysis of the compound purified from the lichen Pertusaria pseudocorallina. The following year, John Elix and Labunmi Lajide corroborated the structure by synthesising it in several steps from the precursor norstictic acid. They also showed that connorstictic acid could be obtained by the direct reduction of norstictic acid by the addition of sodium triacetoxyborohydride, or by catalytic reduction. In 1981, Chicita Culberson and colleagues reported on the difficulties of isolating connorstictic acid using standard thin-layer chromatography protocols, due to its co-eluting with related substances such as constictic acid and cryptostictic acid, depending on the solvent system used. They suggested that connorstictic acid could be a common or even constant satellite compound in chemistries with stictic and norstictic acids, and that many prior reports of connorstictic acid may have been misidentifications with cryptostictic acid.
Properties
Connorstictic acid is a member of the class of chemical compounds called depsidones. Its IUPAC name is 5,13,17-trihydroxy-4-(hydroxymethyl)-7,12-dimethyl-2,10,16-trioxatetracyclo[9.7.0.03,8.014,18]octadeca-1(11),3(8),4,6,12,14(18)-hexaene-9,15-dione. The absorbance maxima (λmax) in the infrared spectrum occur at 1250, 1292, 1445, 1610, 1710, 1745, and 3400 cm−1. Connorstictic acid's molecular formula is C18H14O9; it has a molecular mass of 374.29 grams per mole. In its purified crystalline form, its predicted melting point is .
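The quoted molecular mass can be reproduced from standard atomic weights as a back-of-envelope consistency check, assuming the composition C18H14O9 (i.e. norstictic acid, C18H12O9, plus the two hydrogens added on reduction, as described in the History section):

```python
# Standard atomic weights (IUPAC, rounded)
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(composition):
    """Molar mass in g/mol from an {element: count} composition."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in composition.items())

connorstictic = {"C": 18, "H": 14, "O": 9}
print(round(molar_mass(connorstictic), 2))  # 374.3, matching the quoted value
```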
Occurrence
Lichen genera from which connorstictic acid has been isolated include Bryoria, Buellia, Cladonia, Cratiria, Diorygma, Graphis, Paraparmelia, Parmotrema, Pertusaria, Usnea, and Xanthoparmelia.
References
Benzaldehydes
Heterocyclic compounds with 4 rings
Lactones
Lichen products
Methoxy compounds
Hydroxyarenes
Dioxepines | Connorstictic acid | Chemistry | 624 |
3,095,831 | https://en.wikipedia.org/wiki/Axiality%20and%20rhombicity | In physics and mathematics, axiality and rhombicity are two characteristics of a symmetric second-rank tensor in three-dimensional Euclidean space, describing its directional asymmetry.
Let A denote a second-rank tensor in R3, which can be represented by a 3-by-3 matrix. We assume that A is symmetric. This implies that A has three real eigenvalues, which we denote by \lambda_1, \lambda_2 and \lambda_3. We assume that they are ordered such that

\lambda_1 \le \lambda_2 \le \lambda_3.

The axiality of A is defined by

\Delta = 2\lambda_3 - (\lambda_1 + \lambda_2).

The rhombicity is the difference between the smallest and the second-smallest eigenvalue:

\delta = \lambda_2 - \lambda_1.
Other definitions of axiality and rhombicity differ from the ones given above by constant factors which depend on the context. For example, when using them as parameters in the irreducible spherical tensor expansion, it is most convenient to divide the above definitions of axiality and rhombicity by constant factors.
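A small numerical sketch, using one common convention (eigenvalues sorted ascending, axiality 2λ₃ − (λ₁ + λ₂), rhombicity λ₂ − λ₁; numpy's `eigvalsh` handles the symmetric eigenproblem):

```python
import numpy as np

def axiality_rhombicity(A):
    """Axiality and rhombicity of a symmetric 3x3 tensor, with eigenvalues
    sorted ascending (l1 <= l2 <= l3):
        axiality   = 2*l3 - (l1 + l2)
        rhombicity = l2 - l1
    """
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(A))
    return float(2 * l3 - (l1 + l2)), float(l2 - l1)

# An axially symmetric tensor has zero rhombicity:
A_ax = np.diag([1.0, 1.0, 4.0])
print(axiality_rhombicity(A_ax))  # (6.0, 0.0)

# A rhombic example:
A_rh = np.diag([1.0, 2.0, 6.0])
print(axiality_rhombicity(A_rh))  # (9.0, 1.0)
```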
Applications
The description of physical interactions in terms of axiality and rhombicity is frequently encountered in spin dynamics and, in particular, in spin relaxation theory, where many traceless bilinear interaction Hamiltonians, having the (eigenframe) form

\hat H = \lambda_1 \hat S_x \hat L_x + \lambda_2 \hat S_y \hat L_y + \lambda_3 \hat S_z \hat L_z

(hats denote spin projection operators), may be conveniently rotated using rank 2 irreducible spherical tensor operators:
where \mathfrak{D}^{(2)}_{km}(\alpha,\beta,\gamma) are Wigner functions, (\alpha,\beta,\gamma) are Euler angles, and the expressions for the rank 2 irreducible spherical tensor operators are:

\hat T_{2,0} = \frac{1}{\sqrt{6}}\left(2\hat S_z \hat L_z - \hat S_x \hat L_x - \hat S_y \hat L_y\right)

\hat T_{2,\pm 1} = \mp\frac{1}{2}\left(\hat S_z \hat L_{\pm} + \hat S_{\pm} \hat L_z\right)

\hat T_{2,\pm 2} = \frac{1}{2}\hat S_{\pm} \hat L_{\pm}
Defining Hamiltonian rotations in this way (axiality, rhombicity, three angles) significantly simplifies calculations, since the properties of Wigner functions are well understood.
References
D.M. Brink and G.R. Satchler, Angular momentum, 3rd edition, 1993, Oxford: Clarendon Press.
D.A. Varshalovich, A.N. Moskalev, V.K. Khersonski, Quantum theory of angular momentum: irreducible tensors, spherical harmonics, vector coupling coefficients, 3nj symbols, 1988, Singapore: World Scientific Publications.
I. Kuprov, N. Wagner-Rundell, P.J. Hore, J. Magn. Reson., 2007 (184) 196-206. Article
Tensors | Axiality and rhombicity | Engineering | 454 |
72,249,115 | https://en.wikipedia.org/wiki/Data%20decolonization | Data decolonization is the process of divesting from colonial, hegemonic models and epistemological frameworks that guide the collection, usage, and dissemination of data related to Indigenous peoples and nations, instead prioritising and centering Indigenous paradigms, frameworks, values, and data practices. Data decolonization is guided by the belief that data pertaining to Indigenous people should be owned and controlled by Indigenous people, a concept that is closely linked to data sovereignty, as well as the decolonization of knowledge.
Data decolonization is linked to the decolonization movement that emerged in the mid-20th century.
History
In various colonial states, data was used to identify Indigenous peoples using Western classification systems, leading to erasure of Indigenous identities, and the origin of narratives that focus on disadvantages in Indigenous communities.
Indigenous knowledge systems were replaced with Western values and systems, devaluing Indigenous ways of knowing in the process. Indigenous data practices tend to be more holistic, value diverse personal opinions, and centre on the person and community for their own benefit, whereas Western practices are closely linked to categorising people as products, replicating colonial structures. Traditions such as oral history and the use of traditional knowledge, deemed "unscientific", were devalued and replaced with Western ways of knowing that were presented as universal and objective. Tools such as the census were used to control narratives about Indigenous peoples, counting them as they were viewed by the Canadian government rather than how they viewed themselves.
Data decolonization seeks to counter the negative narratives that are reinforced by the colonial data practices that persist in a post-colonial era.
Principles
Self-identification
Indigenous peoples value the right to self-identify themselves and define their own identities in data collection. Indigenous peoples value the diversity in their communities and wish to see this diversity accounted for in data.
Self-determination
Indigenous peoples value the right to make decisions about their data. They value the right to control how data is collected about them, how their data is stored, who gets to own the data, and how the data is used.
In practice
Policies
United Nations Declaration on the Rights of Indigenous Peoples
The United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP) was first introduced to the General Assembly in 2007. UNDRIP outlines the comprehensive rights of Indigenous peoples, and serves as a guideline for countries seeking reconciliation with their Indigenous populations. Article 18 especially outlines Indigenous rights to have decision-making power in matters that affect their rights, and this affects their data rights as well. Four countries voted against UNDRIP when it was first proposed: Canada, United States, New Zealand, and Australia, although all four would later agree with the declaration.
Canada
The Canadian government began to endorse UNDRIP in 2010, and they began to fully implement it in 2021. In 2015, the Truth and Reconciliation Commission urged all levels of the Canadian government to adopt the UNDRIP.
United States
The United States supports the declaration, but does not support the UNDRIP. In 2016, the Organization of American States ratified the American Declaration on the Rights of Indigenous Peoples, which is similar to the UNDRIP.
New Zealand
New Zealand announced its support for the UNDRIP in 2010, and is currently working with the Māori Development to design and implement their Declaration plan.
Healthcare
Decolonizing data in healthcare involves reforming healthcare infrastructure and policies to prioritise Indigenous peoples. Current healthcare data structures collect, store, and use data about Indigenous peoples without necessarily consulting Indigenous peoples themselves, recreating power dynamics that have previously led to their harm. Decolonizing such structures would put control over healthcare-related data and the use of that data into the hands of Indigenous peoples.
A Palestinian public health scholar outlined some principles to guide the creation of decolonized healthcare data systems:
Centering the community: Centering the concerns and opinions of Indigenous peoples at all levels.
Diversity: Ensuring that opinions, and decision-making are sourced from various Indigenous communities, rather than a few tokens.
Transparency: Building complete awareness in Indigenous communities of how their data is collected and aggregated.
Consent: Prioritising the informed consent of Indigenous peoples, promptly and accurately informing them of all actions that are taken with their data.
Concrete action: Focusing on action that produces real-world results for Indigenous peoples, rather than discourse for researchers.
See also
Indigenous decolonization
References
Decolonization
Indigenous peoples
Human rights | Data decolonization | Technology | 907 |
38,648,693 | https://en.wikipedia.org/wiki/Nanoscale%20%28journal%29 | Nanoscale is a peer-reviewed scientific journal covering experimental and theoretical research in all areas of nanotechnology and nanoscience. It is published by the Royal Society of Chemistry. According to the Journal Citation Reports, the journal has a 2021 impact factor of 8.307.
References
External links
Royal Society of Chemistry academic journals
Biweekly journals
Academic journals established in 2009
Nanotechnology journals
English-language journals | Nanoscale (journal) | Materials_science | 84 |
39,058,868 | https://en.wikipedia.org/wiki/Bioresorbable%20metal | Bioresorbable (also called biodegradable or bioabsorbable) metals are metals or their alloys that degrade safely within the body. The primary metals in this category are magnesium-based and iron-based alloys, although recently zinc has also been investigated. Currently, the primary uses of bioresorbable metals are as stents for blood vessels (for example bioresorbable stents) and other internal ducts.
Background
Although bioabsorbable polymers and other materials have come into widespread use in recent years, degradable metals have not yet had the same success in the medical industry.
Driving force for development
The driving force behind the development of bioresorbable metals is primarily due to their ability to provide metal-like mechanical properties while degrading safely in the body. This is especially relevant in orthopaedic applications, where although many surgeries only require implants to provide temporary support (allowing the surrounding tissue to heal), the majority of current bio-metals are permanent (e.g. stainless steel, titanium). Degradation of the implant means that intervention or secondary surgery will not be necessary to remove the material at the end of its functional life, providing significant savings in both cost and time for the patient and health care system. In addition, the corrosion products of current bio-metals (which will still corrode in the body to some degree) can generally not be considered biocompatible.
Potential applications
There are a number of applications for biodegradable metals, including cardiovascular implants (i.e. stents) and orthopedics. It is in this latter category where these materials offer the greatest potential. Bioresorbable metals are able to withstand loads that would destroy any currently available polymers, and offer much greater plasticity than bioceramics, which are brittle and prone to fracture. A well-designed implant could provide the exact mechanical support needed for different areas (through alloying and metal working), and load would be transferred to the surrounding tissue over time, letting it heal and reducing the effects of stress shielding. A summary of the primary benefits and drawbacks of magnesium biomaterials has been provided by Kirkland.
Considerations and issues facing bioresorbable metal development
Changing shape over time
The very property that gives bioresorbable metals their advantage over current non-degradable materials, their biodegradability, also poses the greatest challenge to their development and wider use. Because any such implant degrades, its shape, and thus its mechanical properties, will change over its lifetime. This means that lifecycle analysis must be performed on any implant, especially one designed for orthopedic applications, where failure could result in death.
Lack of standards
Current standards for corrosion of metals have been found to not apply well to bioresorbable metals during in vitro testing. This is a significant problem as the majority of tests performed in the research community are a mix of other standards from both the biomedical and the engineering (e.g. corrosion) communities, often making comparison between results difficult.
Corrosion product toxicity
Even though all elements in a bioresorbable metal may themselves be considered biocompatible, the morphology and elemental makeup (or combination of elements) of the degradation products may cause adverse reactions in the body. In addition, the rapid evolution of hydrogen gas that accompanies Mg-alloy degradation may cause additional problems in vivo. It is therefore crucial to understand in detail the corrosion of each implant and the products that are released, in light of their toxicity and the likelihood of inflammation. The majority of studies in the literature have focused on elements that are known to be biocompatible or abundant in the body, such as calcium and zinc.
Potential bioresorbable metal candidates
Although all metals will degrade and eventually disappear inside the body through the processes of corrosion and wear, true bioresorbable metals must have an appreciable degradation rate to allow the implant to be absorbed in a practical amount of time in reference to their application. Also, any degradation product would have to be safely metabolized or excreted by the body to avoid toxicity and inflammation.
Magnesium
Perhaps the most widely investigated material in this category, magnesium was originally investigated as a potential biomaterial in 1878, when physician Edward C. Huse used it in wire form as a ligature to stop bleeding. Development continued into the 1920s, after which Mg-based biomaterials fell out of general investigation due to their poor performance (likely due to impurities in the alloys drastically increasing corrosion). It was not until the late 1990s that interest started to pick up again. Mg has a density close to that of bone and is absorbed by the body. Mg is of interest for orthopedic applications due to its relatively low cost, high specific strength, and near-bone elastic modulus, which avoids stress shielding and allows uniform distribution of tissue stress.
Currently, most research on Mg is focused on reducing and controlling the rate of degradation, with many alloys corroding too rapidly (in vitro) for any practical application.
Iron
The majority of iron-based alloy research has been focused on cardiovascular applications, such as stents. However, this area receives much less interest in the research community than Mg-based alloys.
Zinc
To date little work has been published on the use of a primarily zinc-based biomaterial; reported corrosion rates are very low and zinc remains within a tolerable toxicity range. However, pure Zn has poor mechanical behavior, with a tensile strength of around 100–150 MPa and an elongation of 0.3–2%, far below the requirements for an orthopedic implant material (tensile strength above 300 MPa, elongation above 15%). Alloy and composite fabrication have proven to be effective ways to improve the mechanical performance of Zn.
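The shortfall described above can be expressed as a simple threshold check. This is a toy sketch using only the figures quoted in the text (the property names are illustrative, not from any materials database):

```python
# Does pure Zn meet the stated orthopedic-implant thresholds?
# Thresholds and Zn values are the figures quoted above.
REQUIRED = {"tensile_MPa": 300, "elongation_pct": 15}
pure_zn = {"tensile_MPa": 150, "elongation_pct": 2}  # upper ends of the quoted ranges

def meets_requirements(props):
    # A candidate passes only if every property reaches its threshold.
    return all(props[k] >= REQUIRED[k] for k in REQUIRED)

print(meets_requirements(pure_zn))  # -> False
```

Alloying or composite fabrication would have to roughly double the tensile strength and raise elongation several-fold before such a check passes.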
Biodegradable bulk metallic glasses
Although strictly speaking a side-category, a related, relatively new area of interest has been the investigation of bioabsorbable metallic glass, with a group at UNSW currently investigating these novel materials.
References
Biomaterials | Bioresorbable metal | Physics,Biology | 1,268 |
1,658,979 | https://en.wikipedia.org/wiki/FreeS/WAN | FreeS/WAN, for Free Secure Wide-Area Networking, was a free software project which implemented a reference version of the IPsec network security layer for Linux. The project goal of ubiquitous opportunistic encryption of Internet traffic was not realized, although it did contribute to general Internet encryption.
The project was founded by John Gilmore, and administered for most of its duration by Hugh Daniel. John Ioannidis and Angelos Keromytis started the codebase while outside the United States prior to autumn 1997. Technical lead for the project was Henry Spencer, and later Michael Richardson. The IKE keying daemon (pluto) was maintained by D. Hugh Redelmeier while the IPsec kernel module (KLIPS) was maintained by Richard Guy Briggs. Sandy Harris was the main documentation person for most of the project, later Claudia Schmeing.
The final FreeS/WAN version 2.06 was released on 22 April 2004. The earlier version 2.04 was forked to form two projects, Openswan and strongSwan. Openswan has since (2012) been forked to Libreswan.
External links
Project website
Documentation
Free security software
History of software
IPsec | FreeS/WAN | Technology | 245 |
77,881,388 | https://en.wikipedia.org/wiki/Upsweep | Upsweep is an unidentified sound detected by the U.S. National Oceanic and Atmospheric Administration's (NOAA) equatorial autonomous hydrophone arrays. The sound was recorded in August, 1991, using the Pacific Marine Environmental Laboratory's underwater sound surveillance system, SOSUS. Loud enough to be detected throughout the entire Pacific Ocean, Upsweep remains one of the only detected sounds to have an unresolved origin. By 1996, early speculations that the sound originated from a biological source was dismissed. The sound consists of a long train of narrow-band upsweeping sounds that occur in intervals of several seconds each. Upsweep occurs and changes seasonally, and is therefore speculated by NOAA scientists to originate from areas of underwater volcanic activity.
Sound profile
The sound's source is roughly located at , a remote point in the Pacific Ocean between New Zealand and South America, approximately 2,500 miles due west of the southern tip of South America. The sound varies seasonally, usually reaching peaks around spring and fall, but it is unclear whether this is due to changes in the source or seasonal propagation changes in the sound's environment. The sound consists of a long sequence of repeating vertical "sweeps" from low to high frequency, each lasting roughly three seconds, and was loud enough to be heard across the entire Equatorial Pacific Ocean autonomous hydrophone array system. Upsweep is characterized by its anomalous reverberating tone, similar to that of an ambulance or siren.
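The waveform described above, a tone whose frequency rises over a few seconds, can be illustrated by synthesizing a linear "upsweep" from its instantaneous phase. This is a toy sketch; the start and end frequencies are arbitrary illustrative choices, not measured values of the actual signal:

```python
import math

def upsweep(f_start, f_end, duration, rate=8000):
    """Synthesize one linear upsweep: frequency rising from f_start to
    f_end Hz over `duration` seconds (illustrative values only)."""
    n = int(duration * rate)
    samples = []
    for i in range(n):
        t = i / rate
        # Phase of a linear chirp: 2*pi*(f_start*t + (f_end-f_start)*t^2 / (2*duration))
        phase = 2 * math.pi * (f_start * t + (f_end - f_start) * t * t / (2 * duration))
        samples.append(math.sin(phase))
    return samples

def zero_crossings(chunk):
    # Count positive-going zero crossings; this rate tracks frequency.
    return sum(1 for a, b in zip(chunk, chunk[1:]) if a < 0 <= b)

sig = upsweep(20.0, 100.0, 3.0)        # one ~3-second sweep, as described above
first, last = sig[:8000], sig[-8000:]  # first and last second of the sweep
print(zero_crossings(last) > zero_crossings(first))  # -> True: pitch rises
```

Repeating such sweeps at intervals of several seconds would reproduce the "long train" character of the recorded signal.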
The sound was heard by a system of hydrophones operated by the NOAA's Sound Surveillance System (SOSUS) program for monitoring the northeast Pacific Ocean for low-level seismic activity and detection of volcanic activity along the northeast Pacific spreading centers. Researchers initially attributed the sound to fin whales; however, this theory was dismissed after it was argued that there was not enough variation in the tone for the sound to be biological.
Scientists have traced the source's origins near the location of inferred volcanic seismicity. Since 1991, the Upsweep's level of sound (volume) has been declining, but it can still be detected on NOAA's hydrophone arrays.
Volcanic origin
A leading theory behind the origins of Upsweep are attributed to underwater volcanic and seismic activity. Submarine volcanic eruptions are characteristic of the formation of rift zones found in all of the Earth's major ocean basins. These are also known as seafloor spreading centers, where the SOSUS program was established by the NOAA to monitor seafloor earthquake and volcanic activity. The Monterey Bay Aquarium Research Institute described the acoustic characteristics of these phenomena as:
The source's approximate location has led scientists to infer that it lies near an area of underwater volcanic seismicity; however, the sound's exact location is unknown.
See also
List of unidentified sounds
Notes
References
External links
Acoustics Monitoring Program - Upsweep
1991 in science
Pacific Ocean
Oceanography
Unidentified sounds
Underwater | Upsweep | Physics,Environmental_science | 589 |
25,275,248 | https://en.wikipedia.org/wiki/C4H8S |
The molecular formula C4H8S (molar mass: 88.17 g/mol, exact mass: 88.0347 u) may refer to:
Allyl methyl sulfide
Tetrahydrothiophene, also known as thiophane, thiolane, or THT
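As a quick arithmetic check, the quoted molar mass of 88.17 g/mol follows directly from standard atomic weights (a minimal sketch):

```python
# Molar mass of C4H8S from standard atomic weights (g/mol).
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "S": 32.06}

def molar_mass(counts):
    # Sum each element's atomic weight times its count in the formula.
    return sum(ATOMIC_WEIGHT[el] * n for el, n in counts.items())

mass = molar_mass({"C": 4, "H": 8, "S": 1})
print(round(mass, 2))  # -> 88.17
```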
Molecular formulas | C4H8S | Physics,Chemistry | 80 |
145,380 | https://en.wikipedia.org/wiki/Pilaster | In architecture, a pilaster is both a load-bearing section of thickened wall or column integrated into a wall, and a purely decorative element in classical architecture which gives the appearance of a supporting column and articulates an extent of wall. As an ornament it consists of a flat surface raised from the main wall surface, usually treated as though it were a column, with a capital at the top, plinth (base) at the bottom, and the various other column elements. In contrast to a Classical pilaster, an engaged column or buttress can support the structure of a wall and roof above.
In human anatomy, a pilaster is a ridge that extends vertically across the femur, which is unique to modern humans. Its structural function is unclear.
Definition
A pilaster is foremost a load-bearing architectural element used widely throughout the world and its history where a structural load is carried by a thickened section of wall or column integrated into a wall.
It is also a purely ornamental element used in Classical architecture. As such it may be defined as "a flattened column which has lost its three-dimensional and tactile value."
In Classical architecture
In discussing Leon Battista Alberti's use of pilasters, which Alberti reintroduced into wall-architecture, Rudolf Wittkower wrote: "The pilaster is the logical transformation of the column for the decoration of a wall."
A pilaster appears with a capital and entablature, also in "low-relief" or flattened against the wall. Generally, a pilaster repeats all parts and proportions of an order column; unlike a column, however, a pilaster is usually devoid of entasis.
Pilasters often appear on the sides of a door frame or window opening on the facade of a building, and are sometimes paired with columns or pillars set directly in front of them at some distance away from the wall, which support a roof structure above, such as a portico. These vertical elements can also be used to support a recessed archivolt around a doorway. The pilaster can be replaced by ornamental brackets supporting the entablature or a balcony over a doorway.
When a pilaster appears at the corner intersection of two walls it is known as a canton.
As with a column, a pilaster can have a plain or fluted surface to its profile and can be represented in the mode of numerous architectural styles. During the Renaissance and Baroque periods, architects used a range of pilaster forms. In the giant order, pilasters appear two storeys tall, linking floors in a single unit.
The fashion of using this decorative element from ancient Greek and Roman architecture was adopted in the Italian Renaissance, gained wide popularity with Greek Revival architecture, and continues to be seen in some modern architecture.
Gallery
See also
Glossary of architecture
Classical order
Lesene
Post and lintel
Notes
References
Lewis, Philippa, and Gillian Darley (1986). Dictionary of Ornament. New York: Pantheon.
External links
Architectural elements
Columns and entablature | Pilaster | Technology,Engineering | 622 |
33,876,162 | https://en.wikipedia.org/wiki/Kick%20the%20Fossil%20Fuel%20Habit | Kick The Fossil Fuel Habit: 10 Clean Technologies to Save Our World is a 2010 book by Tom Rand (venture capitalist). The book is about making an energy transition from fossil fuels to clean technologies, by changing to 100% renewable energy. It includes detailed descriptions of the technologies required – solar energy, wind power, geothermal energy and more. Author Tom Rand says we will "need to deploy resources on a scale not seen since World War II, generate international co-operation, and develop rules to put a price on carbon."
Rand says that there are many reasons to kick the fossil fuel habit: "energy security; the moral cost of supporting undemocratic regimes that sit on the oil we use; the military cost, both in blood and cash, to keep the supply lines open; and getting a leg up on the competition in the next industrial revolution. Each of these is reason enough to kick the habit".
Rand stresses that we need to act quickly and, equally important, collectively. That means "this generation of government, businesses and individuals all need to act together to save the world for the next".
See also
The Third Industrial Revolution
The Clean Tech Revolution
List of books about renewable energy
Mark Z. Jacobson
References
2010 non-fiction books
2010 in the environment
Books about energy issues
Energy economics
Renewable energy commercialization
Sustainability books
Climate change books | Kick the Fossil Fuel Habit | Environmental_science | 273 |
19,372,852 | https://en.wikipedia.org/wiki/MecA | mecA is a gene found in bacterial cells which allows them to be resistant to antibiotics such as methicillin, penicillin and other penicillin-like antibiotics.
The bacteria strain most commonly known to carry mecA is methicillin-resistant Staphylococcus aureus (MRSA). In Staphylococcus species, mecA is spread through the staphylococcal chromosome cassette SCCmec genetic element. Resistant strains cause many hospital-acquired infections.
mecA encodes the protein PBP2A (penicillin-binding protein 2A), a transpeptidase that helps form the bacterial cell wall. PBP2A has a lower affinity for beta-lactam antibiotics such as methicillin and penicillin than DD-transpeptidase does, so it does not bind to the ringlike structure of penicillin-like antibiotics. This enables transpeptidase activity in the presence of beta-lactams, preventing them from inhibiting cell wall synthesis. The bacteria can then replicate as normal.
History
Methicillin resistance first emerged in hospitals in Staphylococcus aureus that was more aggressive and failed to respond to methicillin treatment. The prevalence of this strain, MRSA, continued to increase, reaching up to 60% of British hospitals, and has spread throughout the world and beyond hospital settings. Researchers traced the source of this resistance to the mecA gene acquired through a mobile genetic element, staphylococcal cassette chromosome mec, present in all known MRSA strains. On February 27, 2017, the World Health Organization (WHO) put MRSA on their list of priority bacterial resistant pathogens and made it a high priority target for further research and treatment development.
Detection
Successful treatment of MRSA begins with the detection of mecA, usually through polymerase chain reaction (PCR). Alternative methods include enzymatic detection PCR, which labels the PCR products with enzymes detectable by immunosorbent assays. This takes less time and does not need gel electrophoresis, which can be costly, tedious, and unpredictable. Cefoxitin disc diffusion uses phenotypic resistance to test not only for methicillin-resistant strains but also for strains with low resistance. The presence of mecA alone does not determine resistant strains; further phenotypic assays of mecA-positive strains can determine how resistant the strain is to methicillin. These phenotypic assays cannot rely on the accumulation of PBP2a, the protein product of mecA, as a test for methicillin resistance, as no connection between protein amount and resistance exists.
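The PCR-based detection described above can be caricatured in silico: a sample is "positive" only if both primer sites occur in the correct orientation and within a plausible amplicon size. The primer and template sequences below are invented purely for illustration; they are not real mecA primers:

```python
def revcomp(seq):
    # Reverse complement of a DNA string (A<->T, C<->G, then reverse).
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def detects(template, fwd, rev, max_amplicon=2000):
    # Forward primer must match the given strand; the reverse primer
    # binds the opposite strand, so we search for its reverse complement.
    i = template.find(fwd)
    j = template.find(revcomp(rev))
    return i != -1 and j != -1 and 0 < (j + len(rev)) - i <= max_amplicon

# Hypothetical template carrying both (invented) primer sites:
template = ("AAAA" + "ATGAAAAAGATAAAA" + "C" * 50
            + revcomp("GGCTATCGTGTCACAA") + "TTTT")
print(detects(template, "ATGAAAAAGATAAAA", "GGCTATCGTGTCACAA"))  # -> True
```

Real assays of course add negative controls and amplicon-size verification, which is exactly what gel electrophoresis or enzymatic labeling provides.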
Structure
mecA is on staphylococcal cassette chromosome mec, a mobile gene element from which the gene can undergo horizontal gene transfer and insert itself into the host species, which can be any species in the Staphylococcus genus. This cassette is a 52 kilobase piece of DNA that contains mecA and two recombinase genes, ccrA and ccrB. Proper insertion of the mecA complex into the host genome requires the recombinases. Researchers have isolated multiple genetic variants from resistant strains of S. aureus, but all variants function similarly and have the same insertion site, near the host DNA origin of replication. mecA also forms a complex with two regulatory units, mecI and mecR1. These two genes can repress mecA; deletions or knock-outs in these genes increase resistance of S. aureus to methicillin. The S. aureus strains isolated from humans either lack these regulatory elements or contain mutations in these genes that cause a loss of function of the protein products that inhibit mecA. This in turn, causes constitutive transcription of mecA. This cassette chromosome can move between species. Two other Staphylococci species, S.epidermidis and S.haemolyticus, show conservation in this insertion site, not only for mecA but also for other non-essential genes the cassette chromosome can carry.
Mechanism of resistance
Penicillin, its derivatives such as methicillin, and other beta-lactam antibiotics inhibit the activity of the cell-wall-forming penicillin-binding protein family (PBP 1, 2, 3 and 4). This disrupts the cell wall structure, causing the cytoplasm to leak and the cell to die. However, mecA codes for PBP2a, which has a lower affinity for beta-lactams and therefore maintains the structural integrity of the cell wall, preventing cell death. Bacterial cell wall synthesis in S. aureus depends on transglycosylation, which forms a linear polymer of sugar monomers, and transpeptidation, which forms interlinking peptides to strengthen the newly developed cell wall. PBPs have a transpeptidase domain, and although scientists once thought only monofunctional enzymes catalyze transglycosylation, PBP2 has domains to perform both essential processes. When antibiotics enter the medium, they bind to the transpeptidation domain and inhibit PBPs from cross-linking muropeptides, thereby preventing the formation of a stable cell wall. With cooperative action, PBP2a lacks the proper receptor for the antibiotics and continues transpeptidation, preventing cell wall breakdown. The functionality of PBP2a depends on two structural factors of the S. aureus cell wall. First, for PBP2a to fit properly onto the cell wall and continue transpeptidation, it needs the proper amino acid residues, specifically a pentaglycine residue and an amidated glutamate residue. Second, PBP2a has effective transpeptidase activity but lacks the transglycosylation domain of PBP2, which builds the backbone of the cell wall from polysaccharide monomers, so PBP2a must rely on PBP2 to continue this process. The latter forms a therapeutic target to improve the ability of beta-lactams to prevent cell wall synthesis in resistant S. aureus. Identifying inhibitors of glycosylases involved in cell wall synthesis and modulating their expression can resensitize these previously resistant bacteria to beta-lactam treatment.
For example, epicatechin gallate, a compound found in green tea, has shown signs of lowering the resistance to beta-lactams, to the point where oxacillin, which acts on PBP2 and PBP2a, effectively inhibits cell wall formation.
Interactions with other genes decrease resistance to beta-lactams in resistant strains of S. aureus. These gene networks are mainly involved in cell division, and cell wall synthesis and function, where there PBP2a localizes. Furthermore, other PBP proteins also affect the resistance of S. aureus to antibiotics. Oxacillin resistance decreased in S. aureus strains when expression of PBP4 was inhibited but PBP2a was not.
Evolutionary history
mecA is acquired and transmitted through a mobile genetic element that inserts itself into the host genome. Its structure is conserved between the mecA gene product and a homologous mecA gene product in Staphylococcus sciuri. As of 2007, the function of the mecA homologue in S. sciuri remains unknown, but it may be a precursor of the mecA gene found in S. aureus. The structure of the protein product of this homologue is so similar that the protein can function in S. aureus. When the mecA homologue of beta-lactam-resistant S. sciuri is inserted into antibiotic-sensitive S. aureus, antibiotic resistance increases. Even though the muropeptides (peptidoglycan precursors) that both species use are the same, the protein product of the mecA gene of S. sciuri can continue cell wall synthesis when a beta-lactam inhibits the PBP protein family.
To further understand the origin of mecA, specifically the mecA complex found on the Staphylococcal cassette chromosome, researchers used the mecA gene from S. sciuri in comparison to other Staphylococci species. Nucleotide analysis shows the sequence of mecA is almost identical to the mecA homologue found in Staphylococcus fleurettii, the most significant candidate for the origin of the mecA gene on the staphylococcal cassette chromosome. Since the genome of the S. fleurettii contains this gene, the cassette chromosome must originate from another species.
References
External links
at HUGO Gene Nomenclature Committee
Cell biology
Infectious diseases
Prokaryote genes | MecA | Biology | 1,818 |
74,274,754 | https://en.wikipedia.org/wiki/AI-assisted%20reverse%20engineering | AI-assisted reverse engineering (AIARE) is a branch of computer science that leverages artificial intelligence (AI), notably machine learning (ML) strategies, to augment and automate the process of reverse engineering. The latter involves breaking down a product, system, or process to comprehend its structure, design, and functionality. AIARE was primarily introduced in the early years of the 21st century, witnessing substantial advancements from the mid-2010s onwards.
Overview
Conventionally, reverse engineering is conducted by specialists who dismantle a system to grasp its working principles, often for the purposes of reproduction, modification, enhancement of compatibility, or forensic examination. This method, while effective, can be laborious and time-intensive, particularly when dealing with intricate software or hardware systems.
AIARE integrates machine learning algorithms to either partially automate or augment this process. It is capable of detecting patterns, relationships, structures, and potential vulnerabilities within the analyzed system, frequently surpassing human experts in speed and accuracy. This has rendered AIARE a critical tool in numerous fields, including cybersecurity, software development, and hardware design and analysis.
Techniques
AIARE encompasses several AI methodologies:
Supervised learning
Supervised learning employs tagged data to train models to recognize system components, their operations, and their interconnections. This method is particularly helpful in software analysis to discover vulnerabilities or enhance compatibility.
Unsupervised learning
Unsupervised learning is utilized to detect concealed patterns and structures in untagged data. It proves beneficial in comprehending complex systems where there's no evident labeling or mapping of components.
Reinforcement learning
Reinforcement learning is employed to build models that progressively refine their system understanding through a process of trial and error. This method is often implemented when deciphering a system's functionality under various circumstances or configurations.
Deep learning
Deep learning is employed for analysis of high-dimensional data. For instance, deep learning techniques can aid in examining the layout and connections of integrated circuits (ICs), substantially reducing the manual effort required for reverse engineering.
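The unsupervised approach described above can be sketched with a toy example: comparing untagged code blobs by byte-frequency similarity, with no labels or prior mapping of components. The byte sequences are invented stand-ins for extracted binary functions, not a real reverse-engineering pipeline:

```python
from collections import Counter
import math

def histogram(blob):
    # Normalized byte-frequency feature vector for a binary blob.
    counts = Counter(blob)
    total = len(blob)
    return {b: c / total for b, c in counts.items()}

def cosine(h1, h2):
    # Cosine similarity between two sparse frequency vectors.
    keys = set(h1) | set(h2)
    dot = sum(h1.get(k, 0) * h2.get(k, 0) for k in keys)
    n1 = math.sqrt(sum(v * v for v in h1.values()))
    n2 = math.sqrt(sum(v * v for v in h2.values()))
    return dot / (n1 * n2)

a = histogram(b"\x55\x48\x89\xe5" * 20)                  # invented blob A
b_ = histogram(b"\x55\x48\x89\xe5" * 18 + b"\xc3\xc3")   # near-duplicate of A
c = histogram(bytes(range(256)))                         # uniform, dissimilar blob
print(cosine(a, b_) > cosine(a, c))  # -> True: similar blobs cluster together
```

Clustering such similarity scores is one crude way concealed structure can surface from untagged data; production AIARE systems use far richer learned features.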
References
Applications of artificial intelligence
Reverse engineering | AI-assisted reverse engineering | Engineering | 428 |
73,627,447 | https://en.wikipedia.org/wiki/Introduction%20to%20Elementary%20Particles%20%28book%29 | Introduction to Elementary Particles, by David Griffiths, is an introductory textbook that describes an accessible "coherent and unified theoretical structure" of particle physics, appropriate for advanced undergraduate physics students. It was originally published in 1987, and the second revised and enlarged edition was published 2008.
Content (2nd edition)
Table of contents
History and Overview
Chapter 1: Historical Introduction to the Elementary Particles
Chapter 2: Elementary Particle Dynamics
Chapter 3: Relativistic Kinematics
Chapter 4: Symmetries
Chapter 5: Bound States
Quantitative Formulation of Particle Dynamics
Chapter 6: The Feynman Calculus
Chapter 7: Quantum Electrodynamics
Chapter 8: Electrodynamics of Quarks and Hadrons
Chapter 9: Quantum Chromodynamics
Chapter 10: Weak Interactions
Chapter 11: Gauge Theories
Appendices
Appendix A: The Dirac Delta Function
Appendix B: Decay Rates and Cross Sections
Appendix C: Pauli and Dirac Matrices
Appendix D: Feynman Rules
New content in the second edition includes "neutrino oscillations and prospects for physics beyond the Standard Model".
Reception
The first edition, reviewed by Gerald Intermann, earned praise for its "good use of examples as a means of discussing in detail useful problem-solving techniques that other texts leave for the student to discover."
Acknowledging it as "a well-established textbook", an IAEA review said the second edition "...strikes a balance between quantitative rigor and intuitive understanding, using a lively, informal style... The first chapter provides a detailed historical introduction to the subject, while subsequent chapters offer a quantitative presentation of the Standard Model. A simplified introduction to the Feynman rules, based on a 'toy' model, helps readers learn the calculational techniques without the complications of spin. It is followed by accessible treatments of quantum electrodynamics, the strong and weak interactions, and gauge theories."
The Times Higher Education review said, "The first edition of this textbook was notable for providing a clear and logical overview of particle physics that was at the right level for advanced undergraduates... The contents of this revised edition are largely similar to those contained in the first edition and changes reflect the development of the subject in the intervening 20 years. As a result, some discussions have now been tightened or removed, and chapters describing neutrino oscillations and contemporary theoretical developments have been added." The review concluded, "Reading any section will always yield insights, and you can't go wrong with Griffiths as a guide. Who is it for? Advanced undergraduates, postgraduates, lecturers and anyone in the field of experimental particle physics."
Publication history
References
External links
, Preface | Physics Audio Books, video (4:11 minutes)
Physics textbooks
Quantum mechanics
1987 non-fiction books
2005 non-fiction books
2004 non-fiction books
Undergraduate education
Wiley (publisher) | Introduction to Elementary Particles (book) | Physics | 575 |
25,275,248 | https://en.wikipedia.org/wiki/Barbertonite | Barbertonite is a magnesium chromium carbonate mineral with formula of . It is polymorphous with the mineral stichtite and, along with stichtite, is an alteration product of chromite in serpentinite. Barbertonite has a close association with stichtite, chromite, and antigorite (Taylor, 1973). Mills et al. (2011) presented evidence that barbertonite is a polytype of stichtite and should be discredited as a mineral species.
Barbertonite family group
Barbertonite is a member of the hexagonal sjogrenite group along with manasseite and sjogrenite (Palache et al., 1944).
The rhombohedral hydrotalcite group consists of the three minerals:
– stichtite with 3 units of Mg6Cr2(OH)16CO3·4H2O;
– hydrotalcite with 3 units of Mg6Al2(OH)16CO3·4H2O, and;
– pyroaurite with 3 units of Mg6Fe2(OH)16CO3·4H2O.
These two isostructural groups are polymorphous in relation to each other (Palache et al., 1944).
Structure
The structure of barbertonite has brucite-like layers alternating with interlayers. Neighboring brucite layers are stacked so that the hydroxyl ions () are directly above one another (Taylor, 1973). In between brucite layers are interlayers containing ions and molecules (Taylor, 1973). Oxygen atoms are accommodated in a single set of sites distributed close to the axes that pass through the hydroxyl ions of adjacent brucite layers (Taylor, 1973).
Geologic occurrence
Barbertonite was first found in the Barberton district in Transvaal, South Africa. It can also be found in the Ag-Pb mine in Dumas, Tasmania, Australia (Anthony et al., 2003). Read and Dixon (1933) stated that the mineral found in Cunningsburgh, Shetland Islands, was stichtite, but it is now thought to be barbertonite because the two minerals have very similar indices (Frondel et al., 1941). Barbertonite frequently occurs admixed with its rhombohedral analogue and as an alteration product of chromite in serpentinite (Anthony et al., 2003).
References
Further reading
Mondel, S. K., Baidya, T.K. (1996). Stichtite [Mg6Cr2(OH)16CO3·4H2O] in Nausahi ultramafites, Orissa, India – Its transformation at elevated temperatures. Mineralogical Magazine, 60, 836–840.
Palache, C., Berman H., and Frondel C. (1944). Dana's System of Mineralogy, (7th Edition), v. 1, 659.
Magnesium minerals
Chromium minerals
Carbonate minerals
Hexagonal minerals
Minerals in space group 194
Polymorphism (materials science) | Barbertonite | Materials_science,Engineering | 599 |
67,346,535 | https://en.wikipedia.org/wiki/Batelapine | Batelapine (developmental code name CGS-13429) is a structural analogue of clozapine which was investigated as a potential antipsychotic.
References
External links
Batelapine - AdisInsight
Abandoned drugs
Antipsychotics
4-Methylpiperazin-1-yl compounds
Triazolobenzodiazepines
Tricyclic compounds | Batelapine | Chemistry | 76 |
13,413,355 | https://en.wikipedia.org/wiki/Neutral%20fat | Neutral fats, also known as true fats, are simple lipids that are produced by the dehydration synthesis of one or more fatty acids with an alcohol like glycerol. Neutral fats are also known as triacylglycerols, these lipids are dense as well as hydrophobic due to their long carbon chain and are there main function is to store energy. Neutral fats can be made from the compact packing of fatty acids. Triacylglycerols can also serve to part of lipid membranes, which serve to provide flexibility to the membranes, they can also serve as parts for signaling molecules. Many types of neutral fats are possible both because of the number and variety of fatty acids that could form part of it and because of the different bonding locations for the fatty acids. An example is a monoglyceride, which has one fatty acid combined with glycerol, a diglyceride, which has two fatty acids combined with glycerol, or a triglyceride, which has three fatty acids combined with glycerol.
Triglycerides
Triglycerides are formed from the esterification of 3 molecules of fatty acids with one molecule of trihydric alcohol, glycerol (glycerine or trihydroxy propane). In the process, 3 molecules of water are eliminated. The word "triglyceride" refers to the number of fatty acids esterified to one molecule of glycerol.
In triglycerides, the three fatty acids are rarely identical. When all three are the same, the triglyceride is called a pure fat, for example tripalmitin or tristearin.
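The esterification described above conserves mass: glycerol plus three fatty acids minus three waters gives the triglyceride. A minimal arithmetic sketch for tristearin (glycerol + 3 stearic acid), using standard atomic weights:

```python
# Mass balance for triglyceride formation: glycerol + 3 stearic acid
# -> tristearin + 3 H2O. Values in g/mol, standard atomic weights.
W = {"C": 12.011, "H": 1.008, "O": 15.999}

def mass(formula):
    # formula given as {element: count}
    return sum(W[el] * n for el, n in formula.items())

glycerol = mass({"C": 3, "H": 8, "O": 3})    # C3H8O3
stearic = mass({"C": 18, "H": 36, "O": 2})   # C18H36O2
water = mass({"H": 2, "O": 1})
tristearin = glycerol + 3 * stearic - 3 * water  # dehydration synthesis
print(round(tristearin, 1))  # -> 891.5 g/mol, i.e. C57H110O6
```

The three eliminated water molecules account for the difference between the reactant masses and the product mass.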
References
Lipids | Neutral fat | Chemistry | 350 |