Antiviral drug
https://en.wikipedia.org/wiki/Antiviral%20drug
Antiviral drugs are a class of medication used for treating viral infections. Most antivirals target specific viruses, while a broad-spectrum antiviral is effective against a wide range of viruses. Antiviral drugs are one class of antimicrobials, a larger group that also includes antibiotic (also termed antibacterial), antifungal, and antiparasitic drugs, as well as antiviral drugs based on monoclonal antibodies. Most antivirals are considered relatively harmless to the host and can therefore be used to treat infections. They should be distinguished from virucides, which are not medication but deactivate or destroy virus particles, either inside or outside the body. Natural virucides are produced by some plants, such as eucalyptus and Australian tea trees.
Medical uses
Most of the antiviral drugs now available are designed to help deal with HIV, herpes viruses, the hepatitis B and C viruses, and influenza A and B viruses.
Viruses use the host's cells to replicate and this makes it difficult to find targets for the drug that would interfere with the virus without also harming the host organism's cells. Moreover, the major difficulty in developing vaccines and antiviral drugs is due to viral variation.
The emergence of antivirals is the product of a greatly expanded knowledge of the genetic and molecular function of organisms, allowing biomedical researchers to understand the structure and function of viruses, major advances in the techniques for finding new drugs, and the pressure placed on the medical profession to deal with the human immunodeficiency virus (HIV), the cause of acquired immunodeficiency syndrome (AIDS).
The first experimental antivirals were developed in the 1960s, mostly to deal with herpes viruses, and were found using traditional trial-and-error drug discovery methods. Researchers grew cultures of cells and infected them with the target virus. They then introduced into the cultures chemicals which they thought might inhibit viral activity and observed whether the level of virus in the cultures rose or fell. Chemicals that seemed to have an effect were selected for closer study.
This was a very time-consuming, hit-or-miss procedure, and in the absence of a good knowledge of how the target virus worked, it was not efficient in discovering effective antivirals which had few side effects. Only in the 1980s, when the full genetic sequences of viruses began to be unraveled, did researchers begin to learn how viruses worked in detail, and exactly what chemicals were needed to thwart their reproductive cycle.
Antiviral drug design
Antiviral targeting
The general idea behind modern antiviral drug design is to identify viral proteins, or parts of proteins, that can be disabled. These "targets" should generally be as unlike any proteins or parts of proteins in humans as possible, to reduce the likelihood of side effects and toxicity. The targets should also be common across many strains of a virus, or even among different species of virus in the same family, so a single drug will have broad effectiveness. For example, a researcher might target a critical enzyme synthesized by the virus, but not by the patient, that is common across strains, and see what can be done to interfere with its operation.
Once targets are identified, candidate drugs can be selected, either from drugs already known to have appropriate effects or by actually designing the candidate at the molecular level with a computer-aided design program.
The target proteins can be manufactured in the lab for testing with candidate treatments by inserting the gene that synthesizes the target protein into bacteria or other kinds of cells. The cells are then cultured for mass production of the protein, which can then be exposed to various treatment candidates and evaluated with "rapid screening" technologies.
Approaches by virus life cycle stage
Viruses consist of a genome and sometimes a few enzymes stored in a capsule made of protein (called a capsid), and sometimes covered with a lipid layer (sometimes called an 'envelope'). Viruses cannot reproduce on their own and instead propagate by subjugating a host cell to produce copies of themselves, thus producing the next generation.
Researchers working on such "rational drug design" strategies for developing antivirals have tried to attack viruses at every stage of their life cycles. Some species of mushrooms have been found to contain multiple antiviral chemicals with similar synergistic effects.
Compounds isolated from fruiting bodies and filtrates of various mushrooms have broad-spectrum antiviral activities, but successful production and availability of such compounds as frontline antivirals is still a long way off.
Viral life cycles vary in their precise details depending on the type of virus, but they all share a general pattern:
Attachment to a host cell.
Release of viral genes and possibly enzymes into the host cell.
Replication of viral components using host-cell machinery.
Assembly of viral components into complete viral particles.
Release of viral particles to infect new host cells.
Before cell entry
One antiviral strategy is to interfere with the ability of a virus to infiltrate a target cell. The virus must go through a sequence of steps to do this, beginning with binding to a specific "receptor" molecule on the surface of the host cell and ending with the virus "uncoating" inside the cell and releasing its contents. Viruses that have a lipid envelope must also fuse their envelope with the target cell, or with a vesicle that transports them into the cell before they can uncoat.
This stage of viral replication can be inhibited in two ways:
Using agents which mimic the virus-associated protein (VAP) and bind to the cellular receptors. This may include VAP anti-idiotypic antibodies, natural ligands of the receptor and anti-receptor antibodies.
Using agents which mimic the cellular receptor and bind to the VAP. This includes anti-VAP antibodies, receptor anti-idiotypic antibodies, extraneous receptor and synthetic receptor mimics.
This strategy of designing drugs can be very expensive, and since the process of generating anti-idiotypic antibodies is partly trial and error, it can be a relatively slow process until an adequate molecule is produced.
Entry inhibitor
A very early stage of viral infection is viral entry, when the virus attaches to and enters the host cell. A number of "entry-inhibiting" or "entry-blocking" drugs are being developed to fight HIV. HIV most heavily targets a specific type of lymphocyte known as "helper T cells", and identifies these target cells through T-cell surface receptors designated "CD4" and "CCR5". Attempts to interfere with the binding of HIV with the CD4 receptor have failed to stop HIV from infecting helper T cells, but research continues on trying to interfere with the binding of HIV to the CCR5 receptor in hopes that it will be more effective.
HIV infects a cell through fusion with the cell membrane, which requires two different cellular molecular participants, CD4 and a chemokine receptor (differing depending on the cell type). Approaches to blocking this virus/cell fusion have shown some promise in preventing entry of the virus into a cell. At least one of these entry inhibitors—a biomimetic peptide called Enfuvirtide, or the brand name Fuzeon—has received FDA approval and has been in use for some time. One of the potential benefits of an effective entry-blocking or entry-inhibiting agent is that it may not only prevent the spread of the virus within an infected individual but also the spread from an infected to an uninfected individual.
One possible advantage of the therapeutic approach of blocking viral entry (as opposed to the currently dominant approach of viral enzyme inhibition) is that it may prove more difficult for the virus to develop resistance to this therapy than for the virus to mutate or evolve its enzymatic protocols.
Uncoating inhibitors
Inhibitors of uncoating have also been investigated.
Amantadine and rimantadine have been introduced to combat influenza. These agents act on penetration and uncoating.
Pleconaril works against rhinoviruses, which cause the common cold, by blocking a pocket on the surface of the virus that controls the uncoating process. This pocket is similar in most strains of rhinoviruses and enteroviruses, which can cause diarrhea, meningitis, conjunctivitis, and encephalitis.
Some scientists are making the case that a vaccine against rhinoviruses, the predominant cause of the common cold, is achievable.
Vaccines that combine dozens of varieties of rhinovirus at once are effective in stimulating antiviral antibodies in mice and monkeys, researchers reported in Nature Communications in 2016.
Rhinoviruses are the most common cause of the common cold; other viruses such as respiratory syncytial virus, parainfluenza virus, and adenoviruses can also cause colds. Rhinoviruses also exacerbate asthma attacks. Although rhinoviruses come in many varieties, they do not drift to the same degree that influenza viruses do. A mixture of 50 inactivated rhinovirus types should be able to stimulate neutralizing antibodies against all of them to some degree.
During viral synthesis
A second approach is to target the processes that synthesize virus components after a virus invades a cell.
Reverse transcription
One way of doing this is to develop nucleotide or nucleoside analogues that look like the building blocks of RNA or DNA, but deactivate the enzymes that synthesize the RNA or DNA once the analogue is incorporated. This approach is more commonly associated with the inhibition of reverse transcriptase (RNA to DNA) than with "normal" transcriptase (DNA to RNA).
The first successful antiviral, aciclovir, is a nucleoside analogue, and is effective against herpesvirus infections. The first antiviral drug to be approved for treating HIV, zidovudine (AZT), is also a nucleoside analogue.
An improved knowledge of the action of reverse transcriptase has led to better nucleoside analogues to treat HIV infections. One of these drugs, lamivudine, has been approved to treat hepatitis B, which uses reverse transcriptase as part of its replication process. Researchers have gone further and developed inhibitors that do not look like nucleosides, but can still block reverse transcriptase.
Another target being considered for HIV antivirals is RNase H—a component of reverse transcriptase that splits the synthesized DNA from the original viral RNA.
Integrase
Another target is integrase, which integrates the synthesized DNA into the host cell genome. Examples of integrase inhibitors include raltegravir, elvitegravir, and dolutegravir.
Transcription
Once a virus genome becomes operational in a host cell, it then generates messenger RNA (mRNA) molecules that direct the synthesis of viral proteins. Production of mRNA is initiated by proteins known as transcription factors. Several antivirals are now being designed to block attachment of transcription factors to viral DNA.
Translation/antisense
Genomics has not only helped find targets for many antivirals, it has provided the basis for an entirely new type of drug, based on "antisense" molecules. These are segments of DNA or RNA designed as complementary molecules to critical sections of viral genomes; the binding of these antisense segments to their target sections blocks the operation of those genomes. A phosphorothioate antisense drug named fomivirsen has been introduced, used to treat opportunistic eye infections in AIDS patients caused by cytomegalovirus, and other antisense antivirals are in development. An antisense structural type that has proven especially valuable in research is morpholino antisense.
Morpholino oligos have been used to experimentally suppress many viral types:
caliciviruses
flaviviruses (including West Nile virus)
dengue
HCV
coronaviruses
Translation/ribozymes
Yet another antiviral technique inspired by genomics is a set of drugs based on ribozymes, which are enzymes that will cut apart viral RNA or DNA at selected sites. In their natural course, ribozymes are used as part of the viral manufacturing sequence, but these synthetic ribozymes are designed to cut RNA and DNA at sites that will disable them.
A ribozyme antiviral to deal with hepatitis C has been suggested, and ribozyme antivirals are being developed to deal with HIV. An interesting variation of this idea is the use of genetically modified cells that can produce custom-tailored ribozymes. This is part of a broader effort to create genetically modified cells that can be injected into a host to attack pathogens by generating specialized proteins that block viral replication at various phases of the viral life cycle.
Protein processing and targeting
Interference with post translational modifications or with targeting of viral proteins in the cell is also possible.
Protease inhibitors
Some viruses include an enzyme known as a protease that cuts viral protein chains apart so they can be assembled into their final configuration. HIV includes a protease, and so considerable research has been performed to find "protease inhibitors" to attack HIV at that phase of its life cycle. Protease inhibitors became available in the 1990s and have proven effective, though they can have unusual side effects, for example causing fat to build up in unusual places. Improved protease inhibitors are now in development.
Protease inhibitors have also been found in nature. A protease inhibitor was isolated from the shiitake mushroom (Lentinus edodes). Its presence may explain the shiitake mushroom's noted antiviral activity in vitro.
Long dsRNA helix targeting
Most viruses produce long dsRNA helices during transcription and replication. In contrast, uninfected mammalian cells generally produce dsRNA helices of fewer than 24 base pairs during transcription. DRACO (double-stranded RNA activated caspase oligomerizer) is a group of experimental antiviral drugs initially developed at the Massachusetts Institute of Technology. In cell culture, DRACO was reported to have broad-spectrum efficacy against many infectious viruses, including dengue flavivirus, Amapari and Tacaribe arenavirus, Guama bunyavirus, H1N1 influenza and rhinovirus, and was additionally found effective against influenza in vivo in weanling mice. It was reported to induce rapid apoptosis selectively in virus-infected mammalian cells, while leaving uninfected cells unharmed. DRACO effects cell death via one of the last steps in the apoptosis pathway in which complexes containing intracellular apoptosis signalling molecules simultaneously bind multiple procaspases. The procaspases transactivate via cleavage, activate additional caspases in the cascade, and cleave a variety of cellular proteins, thereby killing the cell.
Assembly
Rifampicin acts at the assembly phase.
Release phase
The final stage in the life cycle of a virus is the release of completed viruses from the host cell, and this step has also been targeted by antiviral drug developers. Two drugs named zanamivir (Relenza) and oseltamivir (Tamiflu) that have been recently introduced to treat influenza prevent the release of viral particles by blocking a molecule named neuraminidase that is found on the surface of flu viruses, and also seems to be constant across a wide range of flu strains.
Immune system stimulation
Rather than attacking viruses directly, a second category of tactics for fighting viruses involves encouraging the body's immune system to attack them. Some antivirals of this sort do not focus on a specific pathogen, instead stimulating the immune system to attack a range of pathogens.
One of the best-known of this class of drugs are interferons, which inhibit viral synthesis in infected cells. One form of human interferon named "interferon alpha" is well-established as part of the standard treatment for hepatitis B and C, and other interferons are also being investigated as treatments for various diseases.
A more specific approach is to synthesize antibodies, protein molecules that can bind to a pathogen and mark it for attack by other elements of the immune system. Once researchers identify a particular target on the pathogen, they can synthesize quantities of identical "monoclonal" antibodies to bind that target. A monoclonal drug is now being sold to help fight respiratory syncytial virus in babies, and antibodies purified from infected individuals are also used as a treatment for hepatitis B.
Antiviral drug resistance
Antiviral resistance can be defined as a decreased susceptibility to a drug caused by changes in viral genotypes. In cases of antiviral resistance, drugs have either diminished or no effectiveness against their target virus. The issue inevitably remains a major obstacle to antiviral therapy, as resistance has developed to almost all specific and effective antimicrobials, including antiviral agents.
The Centers for Disease Control and Prevention (CDC) recommends that everyone six months of age and older get a yearly vaccination to protect them from influenza A viruses (H1N1 and H3N2) and up to two influenza B viruses (depending on the vaccination). Comprehensive protection starts by ensuring vaccinations are current and complete. However, vaccines are preventative and are not generally used once a patient has been infected with a virus. Additionally, the availability of these vaccines can be limited for financial or locational reasons, which can undermine the effectiveness of herd immunity, making effective antivirals a necessity.
The three FDA-approved neuraminidase antiviral flu drugs available in the United States, recommended by the CDC, are oseltamivir (Tamiflu), zanamivir (Relenza), and peramivir (Rapivab). Influenza antiviral resistance often results from changes in the neuraminidase and hemagglutinin proteins on the viral surface. Currently, neuraminidase inhibitors (NAIs) are the most frequently prescribed antivirals because they are effective against both influenza A and B. However, antiviral resistance is known to develop if mutations to the neuraminidase proteins prevent NAI binding. This was seen in the His274Tyr mutation, which was responsible for oseltamivir resistance in H1N1 strains in 2009. The inability of NA inhibitors to bind to the virus allowed this strain of virus with the resistance mutation to spread by natural selection. Furthermore, a study published in 2009 in Nature Biotechnology emphasized the urgent need for augmentation of oseltamivir stockpiles with additional antiviral drugs, including zanamivir. This finding was based on a performance evaluation of these drugs supposing that the 2009 H1N1 'Swine Flu' neuraminidase (NA) were to acquire the oseltamivir-resistance (His274Tyr) mutation, which is currently widespread in seasonal H1N1 strains.
Origin of antiviral resistance
The genetic makeup of viruses is constantly changing, which can cause a virus to become resistant to currently available treatments. Viruses can become resistant through spontaneous or intermittent mechanisms throughout the course of an antiviral treatment. Immunocompromised patients hospitalized with pneumonia are, more often than immunocompetent patients, at the highest risk of developing oseltamivir resistance during treatment. Following exposure to someone with the flu, those who received oseltamivir for "post-exposure prophylaxis" are also at higher risk of resistance.
The mechanisms of antiviral resistance development depend on the type of virus in question. RNA viruses such as hepatitis C and influenza A have high error rates during genome replication because their RNA polymerases lack proofreading activity. RNA viruses also have small genomes, typically less than 30 kb, which allows them to sustain a high frequency of mutations. DNA viruses, such as HPV and herpesviruses, hijack host cell replication machinery, which gives them proofreading capabilities during replication. DNA viruses are therefore less error-prone, generally less diverse, and more slowly evolving than RNA viruses. In both cases, the likelihood of mutations is exacerbated by the speed with which viruses reproduce, which provides more opportunities for mutations to occur in successive replications. Billions of viruses are produced every day during the course of an infection, with each replication giving another chance for mutations that confer resistance to occur.
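A rough calculation shows the scale of the problem (a back-of-the-envelope sketch with assumed typical values, not data from the article): with a polymerase error rate of about 10^-4 per nucleotide per replication and an RNA genome of about 10^4 bases, each copied genome carries on average 10^-4 × 10^4 ≈ 1 new mutation; at billions of new virions per day, essentially every possible single-nucleotide change, including any that confers resistance, is generated repeatedly during an infection.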
Multiple strains of one virus can be present in the body at one time, and some of these strains may contain mutations that cause antiviral resistance. This effect, called the quasispecies model, results in immense variation in any given sample of virus, and gives the opportunity for natural selection to favor viral strains with the highest fitness every time the virus is spread to a new host. Recombination, the joining of two different viral variants, and reassortment, the swapping of viral gene segments among viruses in the same cell, also play a role in resistance, especially in influenza.
Antiviral resistance has been reported in antivirals for herpes, HIV, hepatitis B and C, and influenza, but antiviral resistance is a possibility for all viruses. Mechanisms of antiviral resistance vary between virus types.
Detection of antiviral resistance
National and international surveillance is performed by the CDC to determine the effectiveness of the current FDA-approved antiviral flu drugs. Public health officials use this information to make current recommendations about the use of flu antiviral medications. The WHO further recommends in-depth epidemiological investigations to control potential transmission of the resistant virus and prevent future progression. As novel treatments and techniques for detecting antiviral resistance improve, so can strategies for combating the inevitable emergence of antiviral resistance.
Treatment options for antiviral resistant pathogens
If a virus is not fully wiped out during a regimen of antivirals, treatment creates a bottleneck in the viral population that selects for resistance, and there is a chance that a resistant strain may repopulate the host. Viral treatment mechanisms must therefore account for the selection of resistant viruses.
The most commonly used method for treating resistant viruses is combination therapy, which uses multiple antivirals in one treatment regimen. This is thought to decrease the likelihood that one mutation could cause antiviral resistance, as the antivirals in the cocktail target different stages of the viral life cycle. This is frequently used in retroviruses like HIV, but a number of studies have demonstrated its effectiveness against influenza A, as well. Viruses can also be screened for resistance to drugs before treatment is started. This minimizes exposure to unnecessary antivirals and ensures that an effective medication is being used. This may improve patient outcomes and could help detect new resistance mutations during routine scanning for known mutants. However, this has not been consistently implemented in treatment facilities at this time.
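To see why combining antivirals helps (an illustrative calculation with assumed numbers, not figures from the article): if a mutation conferring resistance to one drug arises in roughly 1 in 10^5 genome replications, then a single genome acquiring resistance to three drugs with independent targets in one replication cycle occurs at roughly (10^-5)^3 = 10^-15, so even with billions of new genomes produced per day a triply resistant genome is not expected to arise, whereas resistance to any single drug used alone appears readily.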
Direct-acting antivirals
The term direct-acting antivirals (DAAs) has long been associated with the combination of antiviral drugs used to treat hepatitis C infections. These are more effective than older treatments such as ribavirin (partially indirect-acting) and interferon (indirect-acting). The DAA drugs against hepatitis C are taken orally, as tablets, for 8 to 12 weeks. The treatment depends on the type or types (genotypes) of hepatitis C virus that are causing the infection. Both during and at the end of treatment, blood tests are used to monitor the effectiveness of the treatment and confirm subsequent cure.
The DAA combination drugs used include:
Harvoni (sofosbuvir and ledipasvir)
Epclusa (sofosbuvir and velpatasvir)
Vosevi (sofosbuvir, velpatasvir, and voxilaprevir)
Zepatier (elbasvir and grazoprevir)
Mavyret (glecaprevir and pibrentasvir)
The United States Food and Drug Administration approved DAAs on the basis of a surrogate endpoint called sustained virological response (SVR). SVR is achieved in a patient when hepatitis C virus RNA remains undetectable 12–24 weeks after treatment ends. Whether through DAAs or older interferon-based regimens, SVR is associated with improved health outcomes and significantly decreased mortality. For those who already have advanced liver disease (including hepatocellular carcinoma), however, the benefits of achieving SVR may be less pronounced, though still substantial.
Despite its historical roots in hepatitis C research, the term "direct-acting antivirals" is becoming more broadly used to also include other anti-viral drugs with a direct viral target such as aciclovir (against herpes simplex virus), letermovir (against cytomegalovirus), or AZT (against human immunodeficiency virus). In this context it serves to distinguish these drugs from those with an indirect mechanism of action such as immune modulators like interferon alfa. This difference is of particular relevance for potential drug resistance mutation development.
Public policy
Use and distribution
Guidelines regarding viral diagnoses and treatments change frequently, which can limit quality care. Even when physicians diagnose older patients with influenza, use of antiviral treatment can be low. Provider knowledge of antiviral therapies can improve patient care, especially in geriatric medicine. Furthermore, in local health departments (LHDs) with access to antivirals, guidelines may be unclear, causing delays in treatment. With time-sensitive therapies, delays could lead to lack of treatment.
Overall, national guidelines, regarding infection control and management, standardize care and improve healthcare worker and patient safety. Guidelines, such as those provided by the Centers for Disease Control and Prevention (CDC) during the 2009 flu pandemic caused by the H1N1 virus, recommend, among other things, antiviral treatment regimens, clinical assessment algorithms for coordination of care, and antiviral chemoprophylaxis guidelines for exposed persons. Roles of pharmacists and pharmacies have also expanded to meet the needs of public during public health emergencies.
Stockpiling
Public Health Emergency Preparedness initiatives are managed by the CDC via the Office of Public Health Preparedness and Response. Funds aim to support communities in preparing for public health emergencies, including pandemic influenza. Also managed by the CDC, the Strategic National Stockpile (SNS) consists of bulk quantities of medicines and supplies for use during such emergencies. Antiviral stockpiles prepare for shortages of antiviral medications in cases of public health emergencies. During the H1N1 pandemic in 2009–2010, guidelines for SNS use by local health departments were unclear, revealing gaps in antiviral planning. For example, local health departments that received antivirals from the SNS did not have transparent guidance on the use of the treatments. The gap made it difficult to create plans and policies for their use and future availability, causing delays in treatment.
Age of Earth
https://en.wikipedia.org/wiki/Age%20of%20Earth
The age of Earth is estimated to be 4.54 ± 0.05 billion years. This age may represent the age of Earth's accretion, of core formation, or of the material from which Earth formed. This dating is based on evidence from radiometric age-dating of meteorite material and is consistent with the radiometric ages of the oldest-known terrestrial material and lunar samples.
Following the development of radiometric age-dating in the early 20th century, measurements of lead in uranium-rich minerals showed that some were in excess of a billion years old. The oldest such minerals analyzed to date—small crystals of zircon from the Jack Hills of Western Australia—are at least 4.404 billion years old. Calcium–aluminium-rich inclusions—the oldest known solid constituents within meteorites that are formed within the Solar System—are 4.567 billion years old, giving a lower limit for the age of the Solar System.
It is hypothesised that the accretion of Earth began soon after the formation of the calcium-aluminium-rich inclusions and the meteorites. Because the time this accretion process took is not yet known, and predictions from different accretion models range from a few million up to about 100 million years, the difference between the age of Earth and of the oldest rocks is difficult to determine. It is also difficult to determine the exact age of the oldest rocks on Earth, exposed at the surface, as they are aggregates of minerals of possibly different ages.
Development of modern geologic concepts
Studies of strata—the layering of rocks and soil—gave naturalists an appreciation that Earth may have been through many changes during its existence. These layers often contained fossilized remains of unknown creatures, leading some to interpret a progression of organisms from layer to layer.
Nicolas Steno in the 17th century was one of the first naturalists to appreciate the connection between fossil remains and strata. His observations led him to formulate important stratigraphic concepts (i.e., the "law of superposition" and the "principle of original horizontality"). In the 1790s, William Smith hypothesized that if two layers of rock at widely differing locations contained similar fossils, then it was very plausible that the layers were the same age. Smith's nephew and student, John Phillips, later calculated by such means that Earth was about 96 million years old.
In the mid-18th century, the naturalist Mikhail Lomonosov suggested that Earth had been created separately from, and several hundred thousand years before, the rest of the universe. Lomonosov's ideas were mostly speculative. In 1779 the Comte de Buffon tried to obtain a value for the age of Earth using an experiment: he created a small globe that resembled Earth in composition and then measured its rate of cooling. This led him to estimate that Earth was about 75,000 years old.
Other naturalists used these hypotheses to construct a history of Earth, though their timelines were inexact as they did not know how long it took to lay down stratigraphic layers. In 1830, geologist Charles Lyell, developing ideas found in James Hutton's works, popularized the concept that the features of Earth were in perpetual change, eroding and reforming continuously, and the rate of this change was roughly constant. This was a challenge to the traditional view, which saw the history of Earth as dominated by intermittent catastrophes. Many naturalists were influenced by Lyell to become "uniformitarians" who believed that changes were constant and uniform.
Early calculations
In 1862, the physicist William Thomson, 1st Baron Kelvin published calculations that fixed the age of Earth at between 20 million and 400 million years. He assumed that Earth had formed as a completely molten object, and determined the amount of time it would take for the near-surface temperature gradient to decrease to its present value. His calculations did not account for heat produced via radioactive decay (a then unknown process) or, more significantly, convection inside Earth, which allows the temperature in the upper mantle to remain high much longer, maintaining a high thermal gradient in the crust much longer. Even more constraining were Thomson's estimates of the age of the Sun, which were based on estimates of its thermal output and a theory that the Sun obtains its energy from gravitational collapse; Thomson estimated that the Sun is about 20 million years old.
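Kelvin's approach can be summarized with the one-dimensional conductive cooling model he used (a sketch with typical assumed parameter values, not figures quoted in the article): for a half-space cooling from a uniform initial temperature T0, the surface temperature gradient decays as G(t) = T0 / sqrt(π κ t), so the inferred age is t ≈ (T0/G)^2 / (π κ). Taking T0 ≈ 3900 °C, a present gradient G ≈ 36 °C per km, and a thermal diffusivity κ ≈ 1.2 × 10^-6 m^2/s gives t on the order of 10^15 s, roughly 100 million years, within Kelvin's quoted range; the radiogenic heating and mantle convection noted above are what invalidate this model.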
Geologists such as Lyell had difficulty accepting such a short age for Earth. For biologists, even 100 million years seemed much too short to be plausible. In Charles Darwin's theory of evolution, the process of random heritable variation with cumulative selection requires great durations of time, and Darwin stated that Thomson's estimates did not appear to provide enough time. According to modern biology, the total evolutionary history from the beginning of life to today spans 3.5 to 3.8 billion years, the amount of time that has passed since the last universal common ancestor of all living organisms, as shown by geological dating.
In a lecture in 1869, Darwin's great advocate, Thomas Henry Huxley, attacked Thomson's calculations, suggesting they appeared precise in themselves but were based on faulty assumptions. The physicist Hermann von Helmholtz (in 1856) and astronomer Simon Newcomb (in 1892) contributed their own calculations of 22 and 18 million years, respectively, to the debate: they independently calculated the amount of time it would take for the Sun to condense down to its current diameter and brightness from the nebula of gas and dust from which it was born. Their values were consistent with Thomson's calculations. However, they assumed that the Sun was only glowing from the heat of its gravitational contraction. The process of solar nuclear fusion was not yet known to science.
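These contraction estimates correspond to what is now called the Kelvin–Helmholtz timescale (a rough sketch using modern solar values, not the historical inputs): t ≈ GM^2/(RL). With M ≈ 2 × 10^30 kg, R ≈ 7 × 10^8 m, and L ≈ 3.8 × 10^26 W, this gives roughly 10^15 s, on the order of 30 million years, the same order as the 18 and 22 million year figures of Newcomb and Helmholtz.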
In 1892, Thomson was ennobled as Lord Kelvin in appreciation of his many scientific accomplishments. In 1895 John Perry challenged Kelvin's figure on the basis of his assumptions on conductivity, and Oliver Heaviside entered the dialogue, considering it "a vehicle to display the ability of his operator method to solve problems of astonishing complexity." Other scientists backed up Kelvin's figures. Darwin's son, the astronomer George H. Darwin, proposed that Earth and Moon had broken apart in their early days when they were both molten. He calculated the amount of time it would have taken for tidal friction to give Earth its current 24-hour day. His value of 56 million years was additional evidence that Thomson was on the right track. The last estimate Kelvin gave, in 1897, was: "that it was more than 20 and less than 40 million years old, and probably much nearer 20 than 40". In 1899 and 1900, John Joly calculated the rate at which the oceans should have accumulated salt from erosion processes and determined that the oceans were about 80 to 100 million years old.
Radiometric dating
Overview
By their chemical nature, rock minerals contain certain elements and not others; but in rocks containing radioactive isotopes, the process of radioactive decay generates exotic elements over time. By measuring the concentration of the stable end product of the decay, coupled with knowledge of the half life and initial concentration of the decaying element, the age of the rock can be calculated. Typical radioactive end products are argon from decay of potassium-40, and lead from decay of uranium and thorium. If the rock becomes molten, as happens in Earth's mantle, such nonradioactive end products typically escape or are redistributed. Thus the age of the oldest terrestrial rock gives a minimum for the age of Earth, assuming that no rock has been intact for longer than Earth itself.
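A worked illustration of the underlying arithmetic (a sketch with assumed example numbers, not values from the article, assuming no initial daughter and a closed system): for a parent isotope with decay constant λ = ln 2 / t_half, the number of daughter atoms D grows as D = P(e^(λt) − 1), where P is the parent remaining today, so t = (1/λ) ln(1 + D/P). For uranium-238 (t_half ≈ 4.47 billion years, so λ ≈ 0.155 per billion years), a measured lead-206 to uranium-238 ratio of D/P = 0.5 would give t = ln(1.5)/0.155 ≈ 2.6 billion years.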
Convective mantle and radioactivity
The discovery of radioactivity introduced another factor in the calculation. After Henri Becquerel's initial discovery in 1896, Marie and Pierre Curie discovered the radioactive elements polonium and radium in 1898; and in 1903, Pierre Curie and Albert Laborde announced that radium produces enough heat to melt its own weight in ice in less than an hour. Geologists quickly realized that this upset the assumptions underlying most calculations of the age of Earth. These had assumed that the original heat of Earth and the Sun had dissipated steadily into space, but radioactive decay meant that this heat had been continually replenished. George Darwin and John Joly were the first to point this out, in 1903.
Invention of radiometric dating
Radioactivity, which had overthrown the old calculations, yielded a bonus by providing a basis for new calculations, in the form of radiometric dating.
Ernest Rutherford and Frederick Soddy jointly had continued their work on radioactive materials and concluded that radioactivity was caused by a spontaneous transmutation of atomic elements. In radioactive decay, an element breaks down into another, lighter element, releasing alpha, beta, or gamma radiation in the process. They also determined that a particular isotope of a radioactive element decays into another element at a distinctive rate. This rate is given in terms of a "half-life", or the amount of time it takes half of a mass of that radioactive material to break down into its "decay product".
Some radioactive materials have short half-lives; some have long half-lives. Uranium and thorium have long half-lives and so persist in Earth's crust, but radioactive elements with short half-lives have generally disappeared. This suggested that it might be possible to measure the age of Earth by determining the relative proportions of radioactive materials in geological samples. In reality, radioactive elements do not always decay into nonradioactive ("stable") elements directly, instead, decaying into other radioactive elements that have their own half-lives and so on, until they reach a stable element. These "decay chains", such as the uranium-radium and thorium series, were known within a few years of the discovery of radioactivity and provided a basis for constructing techniques of radiometric dating.
The pioneers of radioactivity were chemist Bertram B. Boltwood and physicist Rutherford. Boltwood had conducted studies of radioactive materials as a consultant, and when Rutherford lectured at Yale in 1904, Boltwood was inspired to describe the relationships between elements in various decay series. Late in 1904, Rutherford took the first step toward radiometric dating by suggesting that the alpha particles released by radioactive decay could be trapped in a rocky material as helium atoms. At the time, Rutherford was only guessing at the relationship between alpha particles and helium atoms, but he would prove the connection four years later.
Soddy and Sir William Ramsay had just determined the rate at which radium produces alpha particles, and Rutherford proposed that he could determine the age of a rock sample by measuring its concentration of helium. He dated a rock in his possession to an age of 40 million years by this technique. Rutherford wrote of addressing a meeting of the Royal Institution in 1904.
Rutherford assumed that the rate of decay of radium as determined by Ramsay and Soddy was accurate and that helium did not escape from the sample over time. Rutherford's scheme was inaccurate, but it was a useful first step. Boltwood focused on the end products of decay series. In 1905, he suggested that lead was the final stable product of the decay of radium. It was already known that radium was an intermediate product of the decay of uranium. Rutherford joined in, outlining a decay process in which radium emitted five alpha particles through various intermediate products to end up with lead, and speculated that the radium–lead decay chain could be used to date rock samples. Boltwood did the legwork and by the end of 1905 had provided dates for 26 separate rock samples, ranging from 92 to 570 million years. He did not publish these results, which was fortunate because they were flawed by measurement errors and poor estimates of the half-life of radium. Boltwood refined his work and finally published the results in 1907.
Boltwood's paper pointed out that samples taken from comparable layers of strata had similar lead-to-uranium ratios, and that samples from older layers had a higher proportion of lead, except where there was evidence that lead had leached out of the sample. His studies were flawed by the fact that the decay series of thorium was not understood, which led to incorrect results for samples that contained both uranium and thorium. However, his calculations were far more accurate than any that had been performed to that time. Refinements in the technique would later give ages for Boltwood's 26 samples of 410 million to 2.2 billion years.
Arthur Holmes establishes radiometric dating
Although Boltwood published his paper in a prominent geological journal, the geological community had little interest in radioactivity. Boltwood gave up work on radiometric dating and went on to investigate other decay series. Rutherford remained mildly curious about the issue of the age of Earth but did little work on it.
Robert Strutt tinkered with Rutherford's helium method until 1910 and then ceased. However, Strutt's student Arthur Holmes became interested in radiometric dating and continued to work on it after everyone else had given up. Holmes focused on lead dating because he regarded the helium method as unpromising. He performed measurements on rock samples and concluded in 1911 that the oldest (a sample from Ceylon) was about 1.6 billion years old. These calculations were not particularly trustworthy. For example, he assumed that the samples had contained only uranium and no lead when they were formed.
More important research was published in 1913. It showed that elements generally exist in multiple variants with different masses, or "isotopes". In the 1930s, isotopes would be shown to have nuclei with differing numbers of the neutral particles known as "neutrons". In that same year, other research was published establishing the rules for radioactive decay, allowing more precise identification of decay series.
Many geologists felt these new discoveries made radiometric dating so complicated as to be worthless. Holmes felt that they gave him tools to improve his techniques, and he plodded ahead with his research, publishing before and after the First World War. His work was generally ignored until the 1920s, though in 1917 Joseph Barrell, a professor of geology at Yale, redrew geological history as it was understood at the time to conform to Holmes's findings in radiometric dating. Barrell's research determined that the layers of strata had not all been laid down at the same rate, and so current rates of geological change could not be used to provide accurate timelines of the history of Earth.
Holmes' persistence finally began to pay off in 1921, when the speakers at the yearly meeting of the British Association for the Advancement of Science came to a rough consensus that Earth was a few billion years old and that radiometric dating was credible. Holmes published The Age of the Earth, an Introduction to Geological Ideas in 1927 in which he presented a range of 1.6 to 3.0 billion years. No great push to embrace radiometric dating followed, however, and the die-hards in the geological community stubbornly resisted. They had never cared for attempts by physicists to intrude in their domain, and had successfully ignored them so far. The growing weight of evidence finally tilted the balance in 1931, when the National Research Council of the US National Academy of Sciences decided to resolve the question of the age of Earth by appointing a committee to investigate.
Holmes, being one of the few people who was trained in radiometric dating techniques, was a committee member and in fact wrote most of the final report. Thus, Holmes' report concluded that radioactive dating was the only reliable means of pinning down a geologic time scale. Questions of bias were deflected by the great and exacting detail of the report. It described the methods used, the care with which measurements were made, and their error bars and limitations.
Modern radiometric dating
Radiometric dating continues to be the predominant way scientists date geologic time scales. Techniques for radioactive dating have been tested and fine-tuned on an ongoing basis since the 1960s. Forty or so different dating techniques have been utilized to date, working on a wide variety of materials. Dates for the same sample using these different techniques are in very close agreement on the age of the material. Possible contamination problems do exist, but they have been studied and dealt with by careful investigation, leading to sample preparation procedures that minimize the chance of contamination.
Use of meteorites
An age of 4.55 ± 0.07 billion years, very close to today's accepted age, was determined by Clair Cameron Patterson using uranium–lead isotope dating (specifically lead–lead dating) on several meteorites including the Canyon Diablo meteorite and published in 1956. The quoted age of Earth is derived, in part, from the Canyon Diablo meteorite for several important reasons and is built upon a modern understanding of cosmochemistry built up over decades of research.
Most geological samples from Earth are unable to give a direct date of the formation of Earth from the solar nebula because Earth has undergone differentiation into the core, mantle, and crust, and this has then undergone a long history of mixing and unmixing of these sample reservoirs by plate tectonics, weathering and hydrothermal circulation.
All of these processes may adversely affect isotopic dating mechanisms because the sample cannot always be assumed to have remained as a closed system, by which it is meant that either the parent or daughter nuclide (a species of atom characterised by the number of neutrons and protons an atom contains) or an intermediate daughter nuclide may have been partially removed from the sample, which will skew the resulting isotopic date. To mitigate this effect it is usual to date several minerals in the same sample, to provide an isochron. Alternatively, more than one dating system may be used on a sample to check the date.
Some meteorites are furthermore considered to represent the primitive material from which the accreting solar disk was formed. Some have behaved as closed systems (for some isotopic systems) soon after the solar disk and the planets formed. To date, these assumptions are supported by much scientific observation and repeated isotopic dates, and it is certainly a more robust hypothesis than that which assumes a terrestrial rock has retained its original composition.
Nevertheless, ancient Archaean lead ores of galena have been used to date the formation of Earth as these represent the earliest formed lead-only minerals on the planet and record the earliest homogeneous lead–lead isotope systems on the planet. These have returned age dates of 4.54 billion years with a precision of as little as 1% margin for error.
Statistics for several meteorites that have undergone isochron dating are as follows:
Canyon Diablo meteorite
The Canyon Diablo meteorite was used because it is both large and representative of a particularly rare type of meteorite that contains sulfide minerals (particularly troilite, FeS), metallic nickel-iron alloys, plus silicate minerals. This is important because the presence of the three mineral phases allows investigation of isotopic dates using samples that provide a great separation in concentrations between parent and daughter nuclides. This is particularly true of uranium and lead. Lead is strongly chalcophilic and is found in the sulfide at a much greater concentration than in the silicate, whereas uranium is not. This segregation of parent and daughter nuclides during the formation of the meteorite allowed a much more precise date of the formation of the solar disk, and hence the planets, than ever before.
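The lead–lead method behind these dates can be written compactly (a sketch of the standard isochron relation, using commonly accepted decay constants rather than values from the article): for samples that evolved from a common initial lead composition, (207Pb/204Pb − initial) / (206Pb/204Pb − initial) = (1/137.88) × (e^(λ235·t) − 1) / (e^(λ238·t) − 1), with λ235 ≈ 9.85 × 10^-10 per year, λ238 ≈ 1.55 × 10^-10 per year, and 1/137.88 the present-day 235U/238U ratio. The left-hand side is the slope of the lead–lead isochron, so the age follows from lead isotope ratios alone, without needing uranium concentrations; phases with very different U/Pb ratios (troilite versus silicate) spread the data points and make the fitted slope, and hence the date, more precise.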
The age determined from the Canyon Diablo meteorite has been confirmed by hundreds of other age determinations, from both terrestrial samples and other meteorites. The meteorite samples, however, show a spread from 4.53 to 4.58 billion years ago. This is interpreted as the duration of formation of the solar nebula and its collapse into the solar disk to form the Sun and the planets. This 50 million year time span allows for accretion of the planets from the original solar dust and meteorites.
The Moon, as another extraterrestrial body that has not undergone plate tectonics and that has no atmosphere, provides quite precise age dates from the samples returned from the Apollo missions. Rocks returned from the Moon have been dated at a maximum of 4.51 billion years old. Martian meteorites that have landed upon Earth have also been dated to around 4.5 billion years old by lead–lead dating. Lunar samples, since they have not been disturbed by weathering, plate tectonics or material moved by organisms, can also provide dating by direct electron microscope examination of cosmic ray tracks. The accumulation of dislocations generated by high energy cosmic ray particle impacts provides another confirmation of the isotopic dates. Cosmic ray dating is only useful on material that has not been melted, since melting erases the crystalline structure of the material, and wipes away the tracks left by the particles.
Hydronium
https://en.wikipedia.org/wiki/Hydronium
In chemistry, hydronium (hydroxonium in traditional British English) is the cation H3O+, also written as OH3+, the type of oxonium ion produced by protonation of water. It is often viewed as the positive ion present when an Arrhenius acid is dissolved in water, as Arrhenius acid molecules in solution give up a proton (a positive hydrogen ion, H+) to the surrounding water molecules (H2O). In fact, acids must be surrounded by more than a single water molecule in order to ionize, yielding aqueous H+ and the conjugate base.
Three main structures for the aqueous proton have garnered experimental support:
the Eigen cation, which is a tetrahydrate, H3O+(H2O)3
the Zundel cation, which is a symmetric dihydrate, H+(H2O)2
and the Stoyanov cation, an expanded Zundel cation, which is a hexahydrate: H+(H2O)2(H2O)4
Spectroscopic evidence from well-defined IR spectra overwhelmingly supports the Stoyanov cation as the predominant form. For this reason, it has been suggested that wherever possible, the symbol H+(aq) should be used instead of the hydronium ion.
Relation to pH
The molar concentration of hydronium (H3O+) ions determines a solution's pH according to
pH = -log([H3O+]/M)
where M = mol/L. The concentration of hydroxide ions analogously determines a solution's pOH. The molecules in pure water auto-dissociate into aqueous protons and hydroxide ions in the following equilibrium:
2 H2O ⇌ H3O+ + OH−
In pure water, there is an equal number of hydroxide and hydronium ions, so it is a neutral solution. At 25 °C, pure water has a pH of 7 and a pOH of 7 (this varies when the temperature changes: see self-ionization of water). A pH value less than 7 indicates an acidic solution, and a pH value more than 7 indicates a basic solution.
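As a concrete illustration of the formula above, here is a minimal Python sketch (the concentration value is an arbitrary example, not taken from the article), assuming 25 °C so that pH + pOH = 14:

```python
# Minimal sketch (not from the article): pH and pOH from an assumed
# hydronium concentration, taking the temperature to be 25 °C.
import math

def ph_from_h3o(concentration_mol_per_l: float) -> float:
    """pH = -log10([H3O+]/M), with M = 1 mol/L."""
    return -math.log10(concentration_mol_per_l)

h3o = 2.5e-4                 # hypothetical example concentration, mol/L
ph = ph_from_h3o(h3o)
poh = 14.0 - ph              # valid at 25 °C, where pH + pOH = 14
print(f"pH = {ph:.2f}, pOH = {poh:.2f}")   # pH ≈ 3.60, pOH ≈ 10.40
```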
Nomenclature
According to IUPAC nomenclature of organic chemistry, the hydronium ion should be referred to as oxonium. Hydroxonium may also be used unambiguously to identify it.
An oxonium ion is any cation containing a trivalent oxygen atom.
Structure
Since O+ and N have the same number of electrons, H3O+ is isoelectronic with ammonia. H3O+ has a trigonal pyramidal molecular geometry with the oxygen atom at its apex. The H–O–H bond angle is approximately 113°, and the center of mass is very close to the oxygen atom. Because the base of the pyramid is made up of three identical hydrogen atoms, the molecule's symmetric top configuration is such that it belongs to the C3v point group. Because of this symmetry and the fact that it has a dipole moment, the rotational selection rules are ΔJ = ±1 and ΔK = 0. The transition dipole lies along the c-axis and, because the negative charge is localized near the oxygen atom, the dipole moment points to the apex, perpendicular to the base plane.
Acids and acidity
The hydrated proton is very acidic: at 25 °C, its pKa is approximately 0. The values commonly given for pKaaq(H3O+) are 0 or –1.74. The former uses the convention that the activity of the solvent in a dilute solution (in this case, water) is 1, while the latter uses the concentration of water in the pure liquid, 55.5 M. Silverstein has shown that the latter value is thermodynamically unsupportable. The disagreement comes from the ambiguity that, in order to define the pKa of H3O+ in water, H2O has to act simultaneously as a solute and the solvent. IUPAC has not given an official definition of pKa that would resolve this ambiguity. Burgot has argued that H3O+(aq) + H2O (l) ⇄ H2O (aq) + H3O+ (aq) is simply not a thermodynamically well-defined process. For an estimate of pKaaq(H3O+), Burgot suggests taking the measured value pKaEtOH(H3O+) = 0.3, the pKa of H3O+ in ethanol, and applying the correlation equation pKaaq = pKaEtOH – 1.0 (± 0.3) to convert the ethanol pKa to an aqueous value, giving pKaaq(H3O+) = –0.7 (± 0.3). On the other hand, Silverstein has shown that Ballinger and Long's experimental results support a pKa of 0.0 for the aqueous proton. Neils and Schaertel provide added arguments for a pKa of 0.0.
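The numerical gap between the two conventions is simply the water-concentration term (a worked check, not an additional claim from the article): log10(55.5) ≈ 1.74, so converting the activity-based value of 0 to the concentration-based convention gives 0 − 1.74 = −1.74.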
The aqueous proton is the most acidic species that can exist in water (assuming sufficient water for dissolution): any stronger acid will ionize and yield a hydrated proton. The acidity of H3O+(aq) is the implicit standard used to judge the strength of an acid in water: strong acids must be better proton donors than H3O+(aq), as otherwise a significant portion of the acid will exist in a non-ionized state (i.e., a weak acid). Unlike H3O+(aq) in neutral solutions that result from water's autodissociation, in acidic solutions H3O+(aq) is long-lasting and concentrated, in proportion to the strength of the dissolved acid.
pH was originally conceived as a measure of the hydrogen ion concentration of an aqueous solution. Virtually all such free protons are quickly hydrated; the acidity of an aqueous solution is therefore more accurately characterized by its concentration of H3O+(aq). In organic syntheses, such as acid-catalyzed reactions, the hydronium ion (H3O+) is used interchangeably with the H+ ion; choosing one over the other has no significant effect on the mechanism of reaction.
Solvation
Researchers have yet to fully characterize the solvation of the hydronium ion in water, in part because many different meanings of solvation exist. A freezing-point depression study determined that the mean hydration ion in cold water is approximately H3O+(H2O)6: on average, each hydronium ion is solvated by 6 water molecules which are unable to solvate other solute molecules.
Some hydration structures are quite large: the H3O+(H2O)20 magic number structure (called a magic number because of its increased stability with respect to hydration structures involving a comparable number of water molecules – this is a similar usage of the term magic number as in nuclear physics) might place the hydronium inside a dodecahedral cage. However, more recent ab initio molecular dynamics simulations have shown that, on average, the hydrated proton resides on the surface of the cluster. Further, several disparate features of these simulations agree with their experimental counterparts, suggesting an alternative interpretation of the experimental results.
Two other well-known structures are the Zundel cation and the Eigen cation. The Eigen solvation structure has the hydronium ion at the center of an H9O4+ complex in which the hydronium is strongly hydrogen-bonded to three neighbouring water molecules. In the Zundel complex the proton is shared equally by two water molecules in a symmetric hydrogen bond. Work published in 1999 indicates that both of these complexes represent ideal structures in a more general hydrogen bond network defect.
Isolation of the hydronium ion monomer in the liquid phase was achieved in a nonaqueous, low-nucleophilicity superacid solution. The ion was characterized by high-resolution nuclear magnetic resonance.
A 2007 calculation of the enthalpies and free energies of the various hydrogen bonds around the hydronium cation in liquid protonated water at room temperature and a study of the proton hopping mechanism using molecular dynamics showed that the hydrogen-bonds around the hydronium ion (formed with the three water ligands in the first solvation shell of the hydronium) are quite strong compared to those of bulk water.
A new model was proposed by Stoyanov based on infrared spectroscopy in which the proton exists as an H13O6+ ion. The positive charge is thus delocalized over 6 water molecules.
Solid hydronium salts
For many strong acids, it is possible to form crystals of their hydronium salt that are relatively stable. These salts are sometimes called acid monohydrates. As a rule, any acid with an ionization constant of or higher may do this. Acids whose ionization constants are below generally cannot form stable salts. For example, nitric acid has an ionization constant of , and mixtures with water at all proportions are liquid at room temperature. However, perchloric acid has an ionization constant of , and if liquid anhydrous perchloric acid and water are combined in a 1:1 molar ratio, they react to form solid hydronium perchlorate ([H3O+][ClO4−]).
The hydronium ion also forms stable compounds with the carborane superacid . X-ray crystallography shows C3v symmetry for the hydronium ion, with each proton interacting with a bromine atom from a different one of three carborane anions, 320 pm apart on average. The salt is also soluble in benzene. In crystals grown from a benzene solution the solvent co-crystallizes and the cation is completely separated from the anion. In the cation, three benzene molecules surround the hydronium, forming cation–π interactions with the hydrogen atoms. The closest (non-bonding) approach of the anion at chlorine to the cation at oxygen is 348 pm.
There are also many known examples of salts containing hydrated hydronium ions, such as the ion in , the and ions both found in .
Sulfuric acid is also known to form a hydronium salt at temperatures below .
Interstellar H3O+
Hydronium is an abundant molecular ion in the interstellar medium and is found in diffuse and dense molecular clouds as well as the plasma tails of comets. Interstellar sources of hydronium observations include the regions of Sagittarius B2, Orion OMC-1, Orion BN–IRc2, Orion KL, and the comet Hale–Bopp.
Interstellar hydronium is formed by a chain of reactions started by the ionization of H2 into H2+ by cosmic radiation. H3O+ can produce either OH or H2O through dissociative recombination reactions, which occur very quickly even at the low (≥10 K) temperatures of dense clouds. This leads to hydronium playing a very important role in interstellar ion-neutral chemistry.
Astronomers are especially interested in determining the abundance of water in various interstellar climates due to its key role in the cooling of dense molecular gases through radiative processes. However, H2O does not have many favorable transitions for ground-based observations. Although observations of HDO (the deuterated version of water) could potentially be used for estimating H2O abundances, the ratio of HDO to H2O is not known very accurately.
Hydronium, on the other hand, has several transitions that make it a superior candidate for detection and identification in a variety of situations. This information has been used in conjunction with laboratory measurements of the branching ratios of the various dissociative recombination reactions to provide what are believed to be relatively accurate OH and H2O abundances without requiring direct observation of these species.
Interstellar chemistry
As mentioned previously, H3O+ is found in both diffuse and dense molecular clouds. By applying the reaction rate constants (α, β, and γ) corresponding to all of the currently available characterized reactions involving H3O+, it is possible to calculate k(T) for each of these reactions. By multiplying these k(T) by the relative abundances of the reactants, the relative rates (in cm3/s) for each reaction at a given temperature can be determined. These relative rates can be converted into absolute rates by multiplying them by the number density of the region. By assuming representative temperatures for a dense cloud and for a diffuse cloud, the results indicate that the most dominant formation and destruction mechanisms are the same in both cases. It should be mentioned that the relative abundances used in these calculations correspond to TMC-1, a dense molecular cloud, and that the calculated relative rates are therefore expected to be more accurate for the dense-cloud conditions. The three fastest formation and destruction mechanisms are listed in the table below, along with their relative rates. Note that the rates of these six reactions are such that they make up approximately 99% of the hydronium ion's chemical interactions under these conditions. All three destruction mechanisms in the table below are classified as dissociative recombination reactions.
It is also worth noting that the relative rates for the formation reactions in the table above are the same for a given reaction at both temperatures. This is because the reaction rate constants for these reactions have β and γ constants of 0, so that k(T) = α (T/300 K)^β exp(−γ/T) reduces to k = α, which is independent of temperature.
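A sketch of how such rate coefficients are evaluated, assuming the standard astrochemical (modified-Arrhenius) parameterization k(T) = α (T/300 K)^β exp(−γ/T); the parameter values shown are placeholders rather than measured data:

import math

def rate_coefficient(alpha: float, beta: float, gamma: float, T: float) -> float:
    # Modified-Arrhenius form used by astrochemical reaction databases, in cm^3/s.
    return alpha * (T / 300.0) ** beta * math.exp(-gamma / T)

# With beta = gamma = 0 the coefficient is temperature independent, as noted above.
print(rate_coefficient(1.0e-9, 0.0, 0.0, 10.0))    # dense-cloud temperature
print(rate_coefficient(1.0e-9, 0.0, 0.0, 100.0))   # warmer gas, same value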
Since all three of these reactions produce either H2O or OH, these results reinforce the strong connection between their relative abundances and that of H3O+.
Astronomical detections
As early as 1973 and before the first interstellar detection, chemical models of the interstellar medium (the first corresponding to a dense cloud) predicted that hydronium was an abundant molecular ion and that it played an important role in ion-neutral chemistry. However, before an astronomical search could be underway there was still the matter of determining hydronium's spectroscopic features in the gas phase, which at this point were unknown. The first studies of these characteristics came in 1977, which was followed by other, higher resolution spectroscopy experiments. Once several lines had been identified in the laboratory, the first interstellar detection of H3O+ was made by two groups almost simultaneously in 1986. The first, published in June 1986, reported observation of the J = 1 − 2 transition at in OMC-1 and Sgr B2. The second, published in August, reported observation of the same transition toward the Orion-KL nebula.
These first detections have been followed by observations of a number of additional transitions. The first observations of each subsequent transition detection are given below in chronological order:
In 1991, the 3 − 2 transition at was observed in OMC-1 and Sgr B2. One year later, the 3 − 2 transition at was observed in several regions, the clearest of which was the W3 IRS 5 cloud.
The first detection of the far-IR 4 − 3 transition, at 69.524 μm (4.3121 THz), was made in 1996 near Orion BN-IRc2. In 2001, three additional transitions of H3O+ were observed in the far infrared in Sgr B2: the 2 − 1 transition at 100.577 μm (2.98073 THz), the 1 − 1 at 181.054 μm (1.65582 THz), and the 2 − 1 at 100.869 μm (2.9721 THz).
| Physical sciences | Concepts | Chemistry |
49295 | https://en.wikipedia.org/wiki/Fine-structure%20constant | Fine-structure constant | In physics, the fine-structure constant, also known as the Sommerfeld constant, commonly denoted by α (the Greek letter alpha), is a fundamental physical constant that quantifies the strength of the electromagnetic interaction between elementary charged particles.
It is a dimensionless quantity (dimensionless physical constant), independent of the system of units used, which is related to the strength of the coupling of an elementary charge e with the electromagnetic field, by the formula 4πε0ħcα = e2. Its numerical value is approximately 0.0072973, or roughly 1/137.036, with a relative uncertainty on the order of 10−10.
The constant was named by Arnold Sommerfeld, who introduced it in 1916 when extending the Bohr model of the atom. α quantified the gap in the fine structure of the spectral lines of the hydrogen atom, which had been measured precisely by Michelson and Morley in 1887.
Why the constant should have this value is not understood, but there are a number of ways to measure its value.
Definition
In terms of other physical constants, α may be defined as: α = e2 / (4πε0ħc)
where
e is the elementary charge (1.602176634×10−19 C);
h is the Planck constant (6.62607015×10−34 J⋅s);
ħ is the reduced Planck constant, h/2π (1.054571817...×10−34 J⋅s);
c is the speed of light (299792458 m/s);
ε0 is the electric constant (≈ 8.8541878128×10−12 F⋅m−1).
Since the 2019 revision of the SI, the only quantity in this list that does not have an exact value in SI units is the electric constant (vacuum permittivity).
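As a numerical check of this definition, α can be evaluated from the constants above (a minimal sketch using the exact post-2019 SI values of e, h, and c and an approximate value of the electric constant):

import math

e = 1.602176634e-19       # elementary charge, C (exact)
h = 6.62607015e-34        # Planck constant, J*s (exact)
c = 299792458.0           # speed of light, m/s (exact)
eps0 = 8.8541878128e-12   # electric constant, F/m (measured, approximate)

hbar = h / (2.0 * math.pi)
alpha = e**2 / (4.0 * math.pi * eps0 * hbar * c)
print(alpha, 1.0 / alpha)  # roughly 0.0072974 and 137.036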
Alternative systems of units
The electrostatic CGS system implicitly sets 4πε0 = 1, as commonly found in older physics literature, where the expression of the fine-structure constant becomes α = e2 / (ħc).
A nondimensionalised system commonly used in high energy physics sets ε0 = c = ħ = 1, where the expression for the fine-structure constant becomes α = e2 / 4π. As such, the fine-structure constant is chiefly a quantity determining (or determined by) the elementary charge: e = √(4πα) ≈ 0.30282212 in terms of such a natural unit of charge.
In the system of atomic units, which sets e = me = ħ = 4πε0 = 1, the expression for the fine-structure constant becomes α = 1/c.
Measurement
The CODATA recommended value of is
This has a relative standard uncertainty of
This value for gives , 0.8 times the standard uncertainty away from its old defined value, with the mean differing from the old value by only 0.13 parts per billion.
Historically the value of the reciprocal of the fine-structure constant is often given. The CODATA recommended value is
While the value of can be determined from estimates of the constants that appear in any of its definitions, the theory of quantum electrodynamics (QED) provides a way to measure directly using the quantum Hall effect or the anomalous magnetic moment of the electron. Other methods include the A.C. Josephson effect and photon recoil in atom interferometry.
There is general agreement for the value of α, as measured by these different methods. The preferred methods in 2019 are measurements of electron anomalous magnetic moments and of photon recoil in atom interferometry. The theory of QED predicts a relationship between the dimensionless magnetic moment of the electron and the fine-structure constant (the magnetic moment of the electron is also referred to as the electron g-factor). One of the most precise values of α obtained experimentally (as of 2023) is based on a measurement of the electron g-factor using a one-electron so-called "quantum cyclotron" apparatus, together with a calculation via the theory of QED that involved tenth-order Feynman diagrams:
This measurement of has a relative standard uncertainty of . This value and uncertainty are about the same as the latest experimental results.
Further refinement of the experimental value was published by the end of 2020, giving the value
with a relative accuracy of , which has a significant discrepancy from the previous experimental value.
Physical interpretations
The fine-structure constant, α, has several physical interpretations. α is:
When perturbation theory is applied to quantum electrodynamics, the resulting perturbative expansions for physical results are expressed as sets of power series in . Because is much less than one, higher powers of are soon unimportant, making the perturbation theory practical in this case. On the other hand, the large value of the corresponding factors in quantum chromodynamics makes calculations involving the strong nuclear force extremely difficult.
Variation with energy scale
In quantum electrodynamics, the more thorough quantum field theory underlying the electromagnetic coupling, the renormalization group dictates how the strength of the electromagnetic interaction grows logarithmically as the relevant energy scale increases. The value of the fine-structure constant α is linked to the observed value of this coupling associated with the energy scale of the electron mass: the electron's mass gives a lower bound for this energy scale, because it (and the positron) is the lightest charged object whose quantum loops can contribute to the running. Therefore, α ≈ 1/137.036 is the asymptotic value of the fine-structure constant at zero energy.
At higher energies, such as the scale of the Z boson, about 90 GeV, one instead measures an effective α ≈ 1/127.
As the energy scale increases, the strength of the electromagnetic interaction in the Standard Model approaches that of the other two fundamental interactions, a feature important for grand unification theories. If quantum electrodynamics were an exact theory, the fine-structure constant would actually diverge at an energy known as the Landau pole – this fact undermines the consistency of quantum electrodynamics beyond perturbative expansions.
History
Based on the precise measurement of the hydrogen atom spectrum by Michelson and Morley in 1887,
Arnold Sommerfeld extended the Bohr model to include elliptical orbits and relativistic dependence of mass on velocity. He introduced a term for the fine-structure constant in 1916.
The first physical interpretation of the fine-structure constant was as the ratio of the velocity of the electron in the first circular orbit of the relativistic Bohr atom to the speed of light in the vacuum.
Equivalently, it was the quotient between the minimum angular momentum allowed by relativity for a closed orbit, and the minimum angular momentum allowed for it by quantum mechanics. It appears naturally in Sommerfeld's analysis, and determines the size of the splitting or fine-structure of the hydrogenic spectral lines. This constant was not seen as significant until Paul Dirac's linear relativistic wave equation in 1928, which gave the exact fine structure formula.
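The first interpretation above can be made concrete with a short calculation (a sketch; it simply multiplies an approximate value of α by the speed of light):

alpha = 7.2973525693e-3   # approximate fine-structure constant
c = 299792458.0           # speed of light, m/s

v1 = alpha * c            # speed of the electron in the first Bohr orbit
print(v1)                 # about 2.19e6 m/s, roughly 0.7% of the speed of light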
With the development of quantum electrodynamics (QED) the significance of α has broadened from a spectroscopic phenomenon to a general coupling constant for the electromagnetic field, determining the strength of the interaction between electrons and photons. The term α/2π is engraved on the tombstone of one of the pioneers of QED, Julian Schwinger, referring to his calculation of the anomalous magnetic dipole moment.
History of measurements
The CODATA values in the above table are computed by averaging other measurements; they are not independent experiments.
Potential variation over time
Physicists have pondered whether the fine-structure constant is in fact constant, or whether its value differs by location and over time. A varying has been proposed as a way of solving problems in cosmology and astrophysics. String theory and other proposals for going beyond the Standard Model of particle physics have led to theoretical interest in whether the accepted physical constants (not just ) actually vary.
In the experiments below, Δα represents the change in α over time, which can be computed as αprev − αnow. If the fine-structure constant really is a constant, then any experiment should show that Δα = 0, or as close to zero as experiment can measure. Any value far away from zero would indicate that α does change over time. So far, most experimental data is consistent with α being constant.
Past rate of change
The first experimenters to test whether the fine-structure constant might actually vary examined the spectral lines of distant astronomical objects and the products of radioactive decay in the Oklo natural nuclear fission reactor. Their findings were consistent with no variation in the fine-structure constant between these two vastly separated locations and times.
Improved technology at the dawn of the 21st century made it possible to probe the value of at much larger distances and to a much greater accuracy. In 1999, a team led by John K. Webb of the University of New South Wales claimed the first detection of a variation in .
Using the Keck telescopes and a data set of 128 quasars at redshifts , Webb et al. found that their spectra were consistent with a slight increase in α over the last 10–12 billion years. Specifically, they found that
In other words, they measured the value to be somewhere between and . This is a very small value, but the error bars do not actually include zero. This result either indicates that is not constant or that there is experimental error unaccounted for.
In 2004, a smaller study of 23 absorption systems by Chand et al., using the Very Large Telescope, found no measurable variation:
However, in 2007 simple flaws were identified in the analysis method of Chand et al., discrediting those results.
King et al. have used Markov chain Monte Carlo methods to investigate the algorithm used by the UNSW group to determine Δα/α from the quasar spectra, and have found that the algorithm appears to produce correct uncertainties and maximum likelihood estimates for Δα/α for particular models. This suggests that the statistical uncertainties and best estimate for Δα/α stated by Webb et al. and Murphy et al. are robust.
Lamoreaux and Torgerson analyzed data from the Oklo natural nuclear fission reactor in 2004, and concluded that has changed in the past 2 billion years by 45 parts per billion. They claimed that this finding was "probably accurate to within 20%". Accuracy is dependent on estimates of impurities and temperature in the natural reactor. These conclusions have yet to be verified.
In 2007, Khatri and Wandelt of the University of Illinois at Urbana-Champaign realized that the 21 cm hyperfine transition in neutral hydrogen of the early universe leaves a unique absorption line imprint in the cosmic microwave background radiation.
They proposed using this effect to measure the value of α during the epoch before the formation of the first stars. In principle, this technique provides enough information to measure a variation of 1 part in 109 (4 orders of magnitude better than the current quasar constraints). However, the constraint which can be placed on α is strongly dependent upon effective integration time, going as t−1/2. The European LOFAR radio telescope would only be able to constrain Δα/α to about 0.3%. The collecting area required to constrain Δα/α to the current level of quasar constraints is on the order of 100 square kilometers, which is economically impracticable at present.
Present rate of change
In 2008, Rosenband et al.
used the frequency ratio of Al+ and Hg+ single-ion optical atomic clocks to place a very stringent constraint on the present-time temporal variation of α, namely a rate of change no larger than a few parts in 1017 per year. A present day null constraint on the time variation of alpha does not necessarily rule out time variation in the past. Indeed, some theories
that predict a variable fine-structure constant also predict that the value of the fine-structure constant should become practically fixed in its value once the universe enters its current dark energy-dominated epoch.
Spatial variation – Australian dipole
Researchers from Australia have said they had identified a variation of the fine-structure constant across the observable universe.
These results have not been replicated by other researchers. In September and October 2010, after Webb et al. released their research, physicists C. Orzel and S.M. Carroll separately suggested various ways in which Webb's observations may be wrong. Orzel argues that the study may contain wrong data due to subtle differences in the two telescopes, while Carroll takes a totally different approach; he looks at the fine-structure constant as a scalar field and claims that if the telescopes are correct and the fine-structure constant varies smoothly over the universe, then the scalar field must have a very small mass. However, previous research has shown that the mass is not likely to be extremely small. Both of these scientists' early criticisms point to the fact that different techniques are needed to confirm or contradict the results, a conclusion Webb et al. previously stated in their study.
Other research finds no meaningful variation in the fine structure constant.
Anthropic explanation
The anthropic principle is an argument about the reason the fine-structure constant has the value it does: stable matter, and therefore life and intelligent beings, could not exist if its value were very different. One example is that, if modern grand unified theories are correct, then α needs to be between around 1/180 and 1/85 for proton decay to be slow enough for life to be possible.
Numerological explanations
As a dimensionless constant which does not seem to be directly related to any mathematical constant, the fine-structure constant has long fascinated physicists.
Arthur Eddington argued that the value could be "obtained by pure deduction" and he related it to the Eddington number, his estimate of the number of protons in the universe.
This led him in 1929 to conjecture that the reciprocal of the fine-structure constant was not approximately but precisely the integer 137.
By the 1940s, experimental values for 1/α deviated sufficiently from 137 to refute Eddington's arguments.
Physicist Wolfgang Pauli commented on the appearance of certain numbers in physics, including the fine-structure constant, which he also noted approximates the prime number 137. This constant so intrigued him that he collaborated with psychoanalyst Carl Jung in a quest to understand its significance. Similarly, Max Born believed that if the value of differed, the universe would degenerate, and thus that = is a law of nature.
Richard Feynman, one of the originators and early developers of the theory of quantum electrodynamics (QED), referred to the fine-structure constant in these terms:
Conversely, statistician I. J. Good argued that a numerological explanation would only be acceptable if it could be based on a good theory that is not yet known but "exists" in the sense of a Platonic Ideal.
Attempts to find a mathematical basis for this dimensionless constant have continued up to the present time. However, no numerological explanation has ever been accepted by the physics community.
In the late 20th century, multiple physicists, including Stephen Hawking in his 1988 book A Brief History of Time, began exploring the idea of a multiverse, and the fine-structure constant was one of several universal constants that suggested the idea of a fine-tuned universe.
Quotes
| Physical sciences | Physical constants | Physics |
49338 | https://en.wikipedia.org/wiki/Bipolar%20junction%20transistor | Bipolar junction transistor | A bipolar junction transistor (BJT) is a type of transistor that uses both electrons and electron holes as charge carriers. In contrast, a unipolar transistor, such as a field-effect transistor (FET), uses only one kind of charge carrier. A bipolar transistor allows a small current injected at one of its terminals to control a much larger current between the remaining two terminals, making the device capable of amplification or switching.
BJTs use two p–n junctions between two semiconductor types, n-type and p-type, which are regions in a single crystal of material. The junctions can be made in several different ways, such as changing the doping of the semiconductor material as it is grown, by depositing metal pellets to form alloy junctions, or by such methods as diffusion of n-type and p-type doping substances into the crystal. The superior predictability and performance of junction transistors quickly displaced the original point-contact transistor. Diffused transistors, along with other components, are elements of integrated circuits for analog and digital functions. Hundreds of bipolar junction transistors can be made in one circuit at a very low cost.
Bipolar transistor integrated circuits were the main active devices of a generation of mainframe and minicomputers, but most computer systems now use Complementary metal–oxide–semiconductor (CMOS) integrated circuits relying on the field-effect transistor (FET). Bipolar transistors are still used for amplification of signals, switching, and in mixed-signal integrated circuits using BiCMOS. Specialized types are used for high voltage switches, for radio-frequency (RF) amplifiers, or for switching high currents.
Current direction conventions
By convention, the direction of current on diagrams is shown as the direction that a positive charge would move. This is called conventional current. However, current in metal conductors is generally due to the flow of electrons. Because electrons carry a negative charge, they move in the direction opposite to conventional current. On the other hand, inside a bipolar transistor, currents can be composed of both positively charged holes and negatively charged electrons. In this article, current arrows are shown in the conventional direction, but labels for the movement of holes and electrons show their actual direction inside the transistor.
Arrow direction
The arrow on the symbol for bipolar transistors indicates the p–n junction between base and emitter and points in the direction in which conventional current travels.
Function
BJTs exist as PNP and NPN types, based on the doping types of the three main terminal regions. An NPN transistor comprises two semiconductor junctions that share a thin p-doped region, and a PNP transistor comprises two semiconductor junctions that share a thin n-doped region. N-type means doped with impurities (such as phosphorus or arsenic) that provide mobile electrons, while p-type means doped with impurities (such as boron) that provide holes that readily accept electrons.
Charge flow in a BJT is due to diffusion of charge carriers (electrons and holes) across a junction between two regions of different charge carrier concentration. The regions of a BJT are called emitter, base, and collector. A discrete transistor has three leads for connection to these regions. Typically, the emitter region is heavily doped compared to the other two layers, and the collector is doped more lightly (typically ten times lighter) than the base. By design, most of the BJT collector current is due to the flow of charge carriers injected from a heavily doped emitter into the base where they are minority carriers (electrons in NPNs, holes in PNPs) that diffuse toward the collector, so BJTs are classified as minority-carrier devices.
In typical operation, the base–emitter junction is forward biased, which means that the p-doped side of the junction is at a more positive potential than the n-doped side, and the base–collector junction is reverse biased. When forward bias is applied to the base–emitter junction, the equilibrium between the thermally generated carriers and the repelling electric field of the emitter depletion region is disturbed. This allows thermally excited carriers (electrons in NPNs, holes in PNPs) to inject from the emitter into the base region. These carriers create a diffusion current through the base from the region of high concentration near the emitter toward the region of low concentration near the collector.
To minimize the fraction of carriers that recombine before reaching the collector–base junction, the transistor's base region must be thin enough that carriers can diffuse across it in much less time than the semiconductor's minority-carrier lifetime. Having a lightly doped base ensures recombination rates are low. In particular, the thickness of the base must be much less than the diffusion length of the carriers. The collector–base junction is reverse-biased, and so negligible carrier injection occurs from the collector to the base, but carriers that are injected into the base from the emitter, and diffuse to reach the collector–base depletion region, are swept into the collector by the electric field in the depletion region. The thin shared base and asymmetric collector–emitter doping are what differentiates a bipolar transistor from two separate diodes connected in series.
Voltage, current, and charge control
The collector–emitter current can be viewed as being controlled by the base–emitter current (current control), or by the base–emitter voltage (voltage control). These views are related by the current–voltage relation of the base–emitter junction, which is the usual exponential current–voltage curve of a p–n junction (diode).
The explanation for collector current is the concentration gradient of minority carriers in the base region. Due to low-level injection (in which there are many fewer excess carriers than normal majority carriers), the ambipolar transport rates (in which the excess majority and minority carriers flow at the same rate) are in effect determined by the excess minority carriers.
Detailed transistor models of transistor action, such as the Gummel–Poon model, account for the distribution of this charge explicitly to explain transistor behavior more exactly. The charge-control view easily handles phototransistors, where minority carriers in the base region are created by the absorption of photons, and handles the dynamics of turn-off, or recovery time, which depends on charge in the base region recombining. However, because base charge is not a signal that is visible at the terminals, the current- and voltage-control views are generally used in circuit design and analysis.
In analog circuit design, the current-control view is sometimes used because it is approximately linear. That is, the collector current is approximately times the base current. Some basic circuits can be designed by assuming that the base–emitter voltage is approximately constant and that collector current is β times the base current. However, to accurately and reliably design production BJT circuits, the voltage-control model (e.g. the Ebers–Moll model) is required. The voltage-control model requires an exponential function to be taken into account, but when it is linearized such that the transistor can be modeled as a transconductance, as in the Ebers–Moll model, design for circuits such as differential amplifiers again becomes a mostly linear problem, so the voltage-control view is often preferred. For translinear circuits, in which the exponential I–V curve is key to the operation, the transistors are usually modeled as voltage-controlled current sources whose transconductance is proportional to their collector current. In general, transistor-level circuit analysis is performed using SPICE or a comparable analog-circuit simulator, so mathematical model complexity is usually not of much concern to the designer, but a simplified view of the characteristics allows designs to be created following a logical process.
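A minimal sketch of the voltage-control view described above (the saturation current and bias values are illustrative assumptions, not device data): the collector current follows the exponential law of the base–emitter junction, and linearizing it gives a transconductance proportional to the bias current.

import math

V_T = 0.02585   # thermal voltage at about 300 K, volts
I_S = 1e-15     # assumed saturation current, amperes

def collector_current(v_be: float) -> float:
    # Exponential law of the base-emitter junction in forward-active mode.
    return I_S * math.exp(v_be / V_T)

v_be = 0.65                     # example bias point, volts
i_c = collector_current(v_be)
g_m = i_c / V_T                 # small-signal transconductance at this bias
print(i_c, g_m)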
Turn-on, turn-off, and storage delay
Bipolar transistors, and particularly power transistors, have long base-storage times when they are driven into saturation; the base storage limits turn-off time in switching applications. A Baker clamp can prevent the transistor from heavily saturating, which reduces the amount of charge stored in the base and thus improves switching time.
Transistor characteristics: alpha (α) and beta (β)
The proportion of carriers able to cross the base and reach the collector is a measure of the BJT efficiency. The heavy doping of the emitter region and light doping of the base region causes many more electrons to be injected from the emitter into the base than holes to be injected from the base into the emitter. A thin and lightly doped base region means that most of the minority carriers that are injected into the base will diffuse to the collector and not recombine.
Common-emitter current gain
The common-emitter current gain is represented by βF or the h-parameter hFE; it is approximately the ratio of the collector's direct current to the base's direct current in the forward-active region. (The F subscript is used to indicate the forward-active mode of operation.) It is typically greater than 50 for small-signal transistors, but can be smaller in transistors designed for high-power applications. Both injection efficiency and recombination in the base reduce the BJT gain.
Common-base current gain
Another useful characteristic is the common-base current gain, αF. The common-base current gain is approximately the gain of current from emitter to collector in the forward-active region. This ratio usually has a value close to unity, between 0.980 and 0.998. It is less than unity due to recombination of charge carriers as they cross the base region.
Alpha and beta are related by the following identities: αF = βF / (βF + 1) and βF = αF / (1 − αF).
Beta is a convenient figure of merit to describe the performance of a bipolar transistor, but is not a fundamental physical property of the device. Bipolar transistors can be considered voltage-controlled devices (fundamentally the collector current is controlled by the base–emitter voltage; the base current could be considered a defect and is controlled by the characteristics of the base–emitter junction and recombination in the base). In many designs beta is assumed high enough so that base current has a negligible effect on the circuit. In some circuits (generally switching circuits), sufficient base current is supplied so that even the lowest beta value a particular device may have will still allow the required collector current to flow.
Structure
A BJT consists of three differently doped semiconductor regions: the emitter region, the base region and the collector region. These regions are, respectively, p type, n type and p type in a PNP transistor, and n type, p type and n type in an NPN transistor. Each semiconductor region is connected to a terminal, appropriately labeled: emitter (E), base (B) and collector (C).
The base is physically located between the emitter and the collector and is made from lightly doped, high-resistivity material. The collector surrounds the emitter region, making it almost impossible for the electrons injected into the base region to escape without being collected, thus making the resulting value of α very close to unity, and so, giving the transistor a large β. A cross-section view of a BJT indicates that the collector–base junction has a much larger area than the emitter–base junction.
The bipolar junction transistor, unlike other transistors, is usually not a symmetrical device. This means that interchanging the collector and the emitter makes the transistor leave the forward active mode and start to operate in reverse mode. Because the transistor's internal structure is usually optimized for forward-mode operation, interchanging the collector and the emitter makes the values of α and β in reverse operation much smaller than those in forward operation; often the α of the reverse mode is lower than 0.5. The lack of symmetry is primarily due to the doping ratios of the emitter and the collector. The emitter is heavily doped, while the collector is lightly doped, allowing a large reverse bias voltage to be applied before the collector–base junction breaks down. The collector–base junction is reverse biased in normal operation. The reason the emitter is heavily doped is to increase the emitter injection efficiency: the ratio of carriers injected by the emitter to those injected by the base. For high current gain, most of the carriers injected into the emitter–base junction must come from the emitter.
The low-performance "lateral" bipolar transistors sometimes used in CMOS processes are sometimes designed symmetrically, that is, with no difference between forward and backward operation.
Small changes in the voltage applied across the base–emitter terminals cause the current between the emitter and the collector to change significantly. This effect can be used to amplify the input voltage or current. BJTs can be thought of as voltage-controlled current sources, but are more simply characterized as current-controlled current sources, or current amplifiers, due to the low impedance at the base.
Early transistors were made from germanium but most modern BJTs are made from silicon. A significant minority are also now made from gallium arsenide, especially for very high speed applications (see HBT, below).
The heterojunction bipolar transistor (HBT) is an improvement of the BJT that can handle signals of very high frequencies up to several hundred GHz. It is common in modern ultrafast circuits, mostly RF systems.
Two commonly used HBTs are silicon–germanium and aluminum gallium arsenide, though a wide variety of semiconductors may be used for the HBT structure. HBT structures are usually grown by epitaxy techniques like MOCVD and MBE.
Regions of operation
Bipolar transistors have four distinct regions of operation, defined by BJT junction biases:
Forward-active (or simply active) The base–emitter junction is forward biased and the base–collector junction is reverse biased. Most bipolar transistors are designed to afford the greatest common-emitter current gain, βF, in forward-active mode. If this is the case, the collector–emitter current is approximately proportional to the base current, but many times larger, for small base current variations.
Reverse-active (or inverse-active or inverted) By reversing the biasing conditions of the forward-active region, a bipolar transistor goes into reverse-active mode. In this mode, the emitter and collector regions switch roles. Because most BJTs are designed to maximize current gain in forward-active mode, the βF in inverted mode is several times smaller (2–3 times for the ordinary germanium transistor). This transistor mode is seldom used, usually being considered only for failsafe conditions and some types of bipolar logic. The reverse bias breakdown voltage to the base may be an order of magnitude lower in this region.
Saturation With both junctions forward biased, a BJT is in saturation mode and facilitates high current conduction from the emitter to the collector (or the other direction in the case of NPN, with negatively charged carriers flowing from emitter to collector). This mode corresponds to a logical "on", or a closed switch.
Cut-off In cut-off, biasing conditions opposite of saturation (both junctions reverse biased) are present. There is very little current, which corresponds to a logical "off", or an open switch.
Although these regions are well defined for sufficiently large applied voltage, they overlap somewhat for small (less than a few hundred millivolts) biases. For example, in the typical grounded-emitter configuration of an NPN BJT used as a pulldown switch in digital logic, the "off" state never involves a reverse-biased junction because the base voltage never goes below ground; nevertheless the forward bias is close enough to zero that essentially no current flows, so this end of the forward active region can be regarded as the cutoff region.
Active-mode transistors in circuits
The diagram shows a schematic representation of an NPN transistor connected to two voltage sources. (The same description applies to a PNP transistor with reversed directions of current flow and applied voltage.) This applied voltage causes the lower p–n junction to become forward biased, allowing a flow of electrons from the emitter into the base. In active mode, the electric field existing between base and collector (caused by VCE) will cause the majority of these electrons to cross the upper p–n junction into the collector to form the collector current IC. The remainder of the electrons recombine with holes, the majority carriers in the base, making a current through the base connection to form the base current, IB. As shown in the diagram, the emitter current, IE, is the total transistor current, which is the sum of the other terminal currents, (i.e. IE = IB + IC).
In the diagram, the arrows representing current point in the direction of conventional current – the flow of electrons is in the opposite direction of the arrows because electrons carry negative electric charge. In active mode, the ratio of the collector current to the base current is called the DC current gain. This gain is usually 100 or more, but robust circuit designs do not depend on the exact value (for example see op-amp). The value of this gain for DC signals is referred to as hFE, and the value of this gain for small signals is referred to as hfe. That is, when a small change in the currents occurs, and sufficient time has passed for the new condition to reach a steady state, hfe is the ratio of the change in collector current to the change in base current. The symbol β is used for both hFE and hfe.
The emitter current is related to VBE exponentially. At room temperature, an increase in VBE by approximately 60 mV increases the emitter current by a factor of 10. Because the base current is approximately proportional to the collector and emitter currents, they vary in the same way.
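The 60 mV figure can be checked directly (a sketch assuming a thermal voltage of about 26 mV at room temperature):

import math

V_T = 0.026       # thermal voltage at room temperature, volts
delta_v = 0.060   # 60 mV increase in V_BE

ratio = math.exp(delta_v / V_T)
print(ratio)      # about 10, i.e. one decade of current per ~60 mV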
History
The bipolar point-contact transistor was invented in December 1947 at the Bell Telephone Laboratories by John Bardeen and Walter Brattain under the direction of William Shockley. The junction version known as the bipolar junction transistor (BJT), invented by Shockley in 1948, was for three decades the device of choice in the design of discrete and integrated circuits. Nowadays, the use of the BJT has declined in favor of CMOS technology in the design of digital integrated circuits. The incidental low performance BJTs inherent in CMOS ICs, however, are often utilized as bandgap voltage reference, silicon bandgap temperature sensor and to handle electrostatic discharge.
Germanium transistors
The germanium transistor was more common in the 1950s and 1960s but has a greater tendency to exhibit thermal runaway. Since germanium p-n junctions have a lower forward bias than silicon, germanium transistors turn on at lower voltage.
Early manufacturing techniques
Various methods of manufacturing bipolar transistors were developed.
Point-contact transistor – first transistor ever constructed (December 1947), a bipolar transistor, limited commercial use due to high cost and noise.
Tetrode point-contact transistor – Point-contact transistor having two emitters. It became obsolete in the middle 1950s.
Junction transistors
Grown-junction transistor – first bipolar junction transistor made. Invented by William Shockley at Bell Labs on June 23, 1948. Patent filed on June 26, 1948.
Alloy-junction transistor – emitter and collector alloy beads fused to base. Developed at General Electric and RCA in 1951.
Micro-alloy transistor (MAT) – high-speed type of alloy junction transistor. Developed at Philco.
Micro-alloy diffused transistor (MADT) – high-speed type of alloy junction transistor, speedier than MAT, a diffused-base transistor. Developed at Philco.
Post-alloy diffused transistor (PADT) – high-speed type of alloy junction transistor, speedier than MAT, a diffused-base transistor. Developed at Philips.
Tetrode transistor – high-speed variant of grown-junction transistor or alloy junction transistor with two connections to base.
Surface-barrier transistor – high-speed metal-barrier junction transistor. Developed at Philco in 1953.
Drift-field transistor – high-speed bipolar junction transistor. Invented by Herbert Kroemer at the Central Bureau of Telecommunications Technology of the German Postal Service, in 1953.
Spacistor – around 1957.
Diffusion transistor – modern type bipolar junction transistor. Prototypes developed at Bell Labs in 1954.
Diffused-base transistor – first implementation of diffusion transistor.
Mesa transistor – developed at Texas Instruments in 1957.
Planar transistor – the bipolar junction transistor that made mass-produced monolithic integrated circuits possible. Developed by Jean Hoerni at Fairchild in 1959.
Epitaxial transistor – a bipolar junction transistor made using vapor-phase deposition. See Epitaxy. Allows very precise control of doping levels and gradients.
Theory and modeling
BJTs can be thought of as two diodes (p–n junctions) sharing a common region that minority carriers can move through. A PNP BJT will function like two diodes that share an N-type cathode region, and the NPN like two diodes sharing a P-type anode region. Connecting two diodes with wires will not make a BJT, since minority carriers will not be able to get from one p–n junction to the other through the wire.
Both types of BJT function by letting a small current input to the base control an amplified output from the collector. The result is that the BJT makes a good switch that is controlled by its base input. The BJT also makes a good amplifier, since it can multiply a weak input signal to about 100 times its original strength. Networks of BJTs are used to make powerful amplifiers with many different applications.
In the discussion below, focus is on the NPN BJT. In what is called active mode, the base–emitter voltage and collector–base voltage are positive, forward biasing the emitter–base junction and reverse-biasing the collector–base junction. In this mode, electrons are injected from the forward biased n-type emitter region into the p-type base where they diffuse as minority carriers to the reverse-biased n-type collector and are swept away by the electric field in the reverse-biased collector–base junction.
For an illustration of forward and reverse bias, see semiconductor diodes.
Large-signal models
In 1954, Jewell James Ebers and John L. Moll introduced their mathematical model of transistor currents:
Ebers–Moll model
The DC emitter and collector currents in active mode are well modeled by an approximation to the Ebers–Moll model: IE = IES (exp(VBE / VT) − 1), with IC = αF IE and IB = (1 − αF) IE.
The base internal current is mainly by diffusion (see Fick's law) and
where
VT is the thermal voltage (approximately 26 mV at 300 K ≈ room temperature).
IE is the emitter current
IC is the collector current
αF is the common base forward short-circuit current gain (0.98 to 0.998)
IES is the reverse saturation current of the base–emitter diode (on the order of 10−15 to 10−12 amperes)
VBE is the base–emitter voltage
Dn is the diffusion constant for electrons in the p-type base
W is the base width
The α and β forward parameters are as described previously. A reverse β is sometimes included in the model.
The unapproximated Ebers–Moll equations used to describe the three currents in any operating region are given below. These equations are based on the transport model for a bipolar junction transistor (a code sketch of the same relations follows the list of symbols below):
iC = IS [ (exp(VBE/VT) − exp(VBC/VT)) − (1/βR)(exp(VBC/VT) − 1) ]
iB = IS [ (1/βF)(exp(VBE/VT) − 1) + (1/βR)(exp(VBC/VT) − 1) ]
iE = IS [ (exp(VBE/VT) − exp(VBC/VT)) + (1/βF)(exp(VBE/VT) − 1) ]
where
iC is the collector current
iB is the base current
iE is the emitter current
βF is the forward common emitter current gain (20 to 500)
βR is the reverse common emitter current gain (0 to 20)
IS is the reverse saturation current (on the order of 10−15 to 10−12 amperes)
VT is the thermal voltage (approximately 26 mV at 300 K ≈ room temperature).
VBE is the base–emitter voltage
VBC is the base–collector voltage
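A hedged sketch of the transport-model relations above, written as code (the parameter values are illustrative assumptions; a real device model would use parameters extracted for a specific transistor):

import math

V_T = 0.02585    # thermal voltage, volts
I_S = 1e-15      # saturation current, amperes (assumed)
beta_F = 100.0   # forward common-emitter current gain (assumed)
beta_R = 2.0     # reverse common-emitter current gain (assumed)

def bjt_currents(v_be: float, v_bc: float):
    # Return (i_C, i_B, i_E) for an NPN transistor using the transport model.
    x_be = math.exp(v_be / V_T) - 1.0
    x_bc = math.exp(v_bc / V_T) - 1.0
    i_t = I_S * (x_be - x_bc)            # current transported from emitter to collector
    i_c = i_t - I_S * x_bc / beta_R
    i_b = I_S * (x_be / beta_F + x_bc / beta_R)
    i_e = i_c + i_b
    return i_c, i_b, i_e

print(bjt_currents(0.65, -5.0))          # forward-active example bias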
Base-width modulation
As the collector–base voltage () varies, the collector–base depletion region varies in size. An increase in the collector–base voltage, for example, causes a greater reverse bias across the collector–base junction, increasing the collector–base depletion region width, and decreasing the width of the base. This variation in base width often is called the Early effect after its discoverer James M. Early.
Narrowing of the base width has two consequences:
There is a lesser chance for recombination within the "smaller" base region.
The charge gradient is increased across the base, and consequently, the current of minority carriers injected across the emitter junction increases.
Both factors increase the collector or "output" current of the transistor in response to an increase in the collector–base voltage.
Punchthrough
When the base–collector voltage reaches a certain (device-specific) value, the base–collector depletion region boundary meets the base–emitter depletion region boundary. When in this state the transistor effectively has no base. The device thus loses all gain when in this state.
Gummel–Poon charge-control model
The Gummel–Poon model is a detailed charge-controlled model of BJT dynamics, which has been adopted and elaborated by others to explain transistor dynamics in greater detail than the terminal-based models typically do. This model also includes the dependence of transistor β-values upon the direct current levels in the transistor, which are assumed current-independent in the Ebers–Moll model.
Small-signal models
Hybrid-pi model
The hybrid-pi model is a popular circuit model used for analyzing the small signal and AC behavior of bipolar junction and field effect transistors. Sometimes it is also called Giacoletto model because it was introduced by L.J. Giacoletto in 1969. The model can be quite accurate for low-frequency circuits and can easily be adapted for higher-frequency circuits with the addition of appropriate inter-electrode capacitances and other parasitic elements.
h-parameter model
Another model commonly used to analyze BJT circuits is the h-parameter model, also known as the hybrid equivalent model, closely related to the hybrid-pi model and the y-parameter two-port, but using input current and output voltage as independent variables, rather than input and output voltages. This two-port network is particularly suited to BJTs as it lends itself easily to the analysis of circuit behavior, and may be used to develop further accurate models. As shown, the term x in the model represents a different BJT lead depending on the topology used. For common-emitter mode the various symbols take on the specific values as:
Terminal 1, base
Terminal 2, collector
Terminal 3 (common), emitter; giving x to be e
ii, base current (ib)
io, collector current (ic)
Vin, base-to-emitter voltage (VBE)
Vo, collector-to-emitter voltage (VCE)
and the h-parameters are given by:
hix = hie for the common-emitter configuration, the input impedance of the transistor (corresponding to the base resistance rpi).
hrx = hre, a reverse transfer relationship, it represents the dependence of the transistor's (input) IB–VBE curve on the value of (output) VCE. It is usually very small and is often neglected (assumed to be zero) at DC.
hfx = hfe, the "forward" current-gain of the transistor, sometimes written h21. This parameter, with lower case "fe" to imply small signal (AC) gain, or more often with capital letters for "FE" (specified as hFE) to mean the "large signal" or DC current-gain (βDC or often simply β), is one of the main parameters in datasheets, and may be given for a typical collector current and voltage or plotted as a function of collector current. See below.
hox = 1/hoe, the output impedance of transistor. The parameter hoe usually corresponds to the output admittance of the bipolar transistor and has to be inverted to convert it to an impedance.
As shown, the h-parameters have lower-case subscripts and hence signify AC conditions or analyses. For DC conditions they are specified in upper-case. For the CE topology, an approximate h-parameter model is commonly used which further simplifies the circuit analysis. For this the hoe and hre parameters are neglected (that is, they are set to infinity and zero, respectively). The h-parameter model as shown is suited to low-frequency, small-signal analysis. For high-frequency analyses the inter-electrode capacitances that are important at high frequencies must be added.
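A short sketch of how the common-emitter h-parameter relations, vbe = hie·ib + hre·vce and ic = hfe·ib + hoe·vce, are used in practice (the parameter values below are typical-looking assumptions, not taken from any datasheet):

h_ie = 2.0e3    # input impedance, ohms (assumed)
h_re = 1.0e-4   # reverse voltage transfer ratio (assumed; often neglected)
h_fe = 150.0    # small-signal forward current gain (assumed)
h_oe = 2.0e-5   # output admittance, siemens (assumed)

def ce_two_port(i_b: float, v_ce: float):
    # Small-signal common-emitter h-parameter two-port relations.
    v_be = h_ie * i_b + h_re * v_ce
    i_c = h_fe * i_b + h_oe * v_ce
    return v_be, i_c

print(ce_two_port(10e-6, 1.0))   # 10 uA base-current swing, 1 V collector swing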
Etymology of hFE
The h refers to its being an h-parameter, a set of parameters named for their origin in a hybrid equivalent circuit model (see above). As with all h parameters, the choice of lower case or capitals for the letters that follow the "h" is significant; lower-case signifies "small signal" parameters, that is, the slope of the particular relationship; upper-case letters imply "large signal" or DC values, the ratio of the voltages or currents. In the case of the very often used hFE:
F is from Forward current amplification also called the current gain.
E refers to the transistor operating in a common Emitter (CE) configuration.
So hFE refers to the (total; DC) collector current divided by the base current, and is dimensionless. It is a parameter that varies somewhat with collector current, but is often approximated as a constant; it is normally specified at a typical collector current and voltage, or graphed as a function of collector current.
Had capital letters not been used in the subscript, i.e. if it were written hfe, the parameter would indicate small-signal (AC) current gain, i.e. the slope of the collector current versus base current graph at a given point, which is often close to the hFE value unless the test frequency is high.
Industry models
The Gummel–Poon SPICE model is often used, but it suffers from several limitations. For instance, reverse breakdown of the base–emitter diode is not captured by the SGP (SPICE Gummel–Poon) model, neither are thermal effects (self-heating) or quasi-saturation. These have been addressed in various more advanced models which either focus on specific cases of application (Mextram, HICUM, Modella) or are designed for universal usage (VBIC).
Applications
The BJT remains a device that excels in some applications, such as discrete circuit design, due to the very wide selection of BJT types available, and because of its high transconductance and output resistance compared to MOSFETs.
The BJT is also the choice for demanding analog circuits, especially for very-high-frequency applications, such as radio-frequency circuits for wireless systems.
High-speed digital logic
Emitter-coupled logic (ECL) uses BJTs.
Bipolar transistors can be combined with MOSFETs in an integrated circuit by using a BiCMOS process of wafer fabrication to create circuits that take advantage of the application strengths of both types of transistor.
Amplifiers
The transistor parameters α and β characterize the current gain of the BJT. It is this gain that allows BJTs to be used as the building blocks of electronic amplifiers. The three main BJT amplifier topologies are:
Common emitter
Common base
Common collector
Temperature sensors
Because of the known temperature and current dependence of the forward-biased base–emitter junction voltage, the BJT can be used to measure temperature by subtracting two voltages at two different bias currents in a known ratio.
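A minimal sketch of this measurement principle (assuming ideal junction behavior; the current ratio and measured voltage are illustrative): the difference between two base–emitter voltages taken at bias currents in a known ratio N is ΔVBE = (kT/q)·ln(N), so the absolute temperature follows directly.

import math

k = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19   # elementary charge, C

def temperature_from_delta_vbe(delta_vbe: float, current_ratio: float) -> float:
    # Absolute temperature in kelvin implied by a measured Delta-V_BE.
    return delta_vbe * q / (k * math.log(current_ratio))

# Example: a difference of about 59.6 mV at a 10:1 bias-current ratio gives ~300 K.
print(temperature_from_delta_vbe(0.0596, 10.0))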
Logarithmic converters
Because base–emitter voltage varies as the logarithm of the base–emitter and collector–emitter currents, a BJT can also be used to compute logarithms and anti-logarithms. A diode can also perform these nonlinear functions but the transistor provides more circuit flexibility.
Avalanche pulse generators
Transistors may be deliberately made with a lower collector-to-emitter breakdown voltage than the collector-to-base breakdown voltage. If the emitter–base junction is reverse biased, the collector–emitter voltage may be maintained at a voltage just below breakdown. As soon as the base voltage is allowed to rise and current flows, avalanche occurs, and impact ionization in the collector–base depletion region rapidly floods the base with carriers and turns the transistor fully on. So long as the pulses are short enough and infrequent enough that the device is not damaged, this effect can be used to create very sharp falling edges.
Special avalanche transistor devices are made for this application.
| Technology | Semiconductors | null |
49373 | https://en.wikipedia.org/wiki/Grid%20computing | Grid computing | Grid computing is the use of widely distributed computer resources to reach a common goal. A computing grid can be thought of as a distributed system with non-interactive workloads that involve many files. Grid computing is distinguished from conventional high-performance computing systems such as cluster computing in that grid computers have each node set to perform a different task/application. Grid computers also tend to be more heterogeneous and geographically dispersed (thus not physically coupled) than cluster computers. Although a single grid can be dedicated to a particular application, commonly a grid is used for a variety of purposes. Grids are often constructed with general-purpose grid middleware software libraries. Grid sizes can be quite large.
Grids are a form of distributed computing composed of many networked loosely coupled computers acting together to perform large tasks. For certain applications, distributed or grid computing can be seen as a special type of parallel computing that relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, etc.) connected to a computer network (private or public) by a conventional network interface, such as Ethernet. This is in contrast to the traditional notion of a supercomputer, which has many processors connected by a local high-speed computer bus. This technology has been applied to computationally intensive scientific, mathematical, and academic problems through volunteer computing, and it is used in commercial enterprises for such diverse applications as drug discovery, economic forecasting, seismic analysis, and back office data processing in support for e-commerce and Web services.
Grid computing combines computers from multiple administrative domains to reach a common goal, to solve a single task, and may then disappear just as quickly. The size of a grid may vary from small—confined to a network of computer workstations within a corporation, for example—to large, public collaborations across many companies and networks. "The notion of a confined grid may also be known as an intra-nodes cooperation whereas the notion of a larger, wider grid may thus refer to an inter-nodes cooperation".
Coordinating applications on Grids can be a complex task, especially when coordinating the flow of information across distributed computing resources. Grid workflow systems have been developed as a specialized form of a workflow management system designed specifically to compose and execute a series of computational or data manipulation steps, or a workflow, in the grid context.
Comparison of grids and conventional supercomputers
“Distributed” or “grid” computing in general is a special type of parallel computing that relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, etc.) connected to a network (private, public or the Internet) by a conventional network interface. The primary advantage is that each node can be built from commodity hardware, in contrast to the lower efficiency of designing and constructing a small number of custom supercomputers. The primary performance disadvantage is that the various processors and local storage areas do not have high-speed connections. This arrangement is thus well-suited to applications in which multiple parallel computations can take place independently, without the need to communicate intermediate results between processors. The high-end scalability of geographically dispersed grids is generally favorable, due to the low need for connectivity between nodes relative to the capacity of the public Internet.
There are also some differences between programming for a supercomputer and programming for a grid computing system. It can be costly and difficult to write programs that can run in the environment of a supercomputer, which may have a custom operating system, or require the program to address concurrency issues. If a problem can be adequately parallelized, a “thin” layer of “grid” infrastructure can allow conventional, standalone programs, given a different part of the same problem, to run on multiple machines. This makes it possible to write and debug on a single conventional machine and eliminates complications due to multiple instances of the same program running in the same shared memory and storage space at the same time.
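As a rough sketch of that idea, the example below (illustrative only) splits an independent problem into work units and hands each to a separate worker process, standing in for separate machines; no intermediate results need to be exchanged between workers.

```python
# Illustrative only: split an independent ("embarrassingly parallel") problem
# into work units and process them with separate worker processes, the way a
# thin grid layer hands different parts of the same problem to many machines.
from concurrent.futures import ProcessPoolExecutor

def count_primes(rng):
    """Standalone task: count primes in a half-open range (deliberately naive)."""
    lo, hi = rng
    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(is_prime(n) for n in range(lo, hi))

if __name__ == "__main__":
    # Each work unit is self-contained, so workers never need to communicate
    # intermediate results with one another.
    work_units = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(count_primes, work_units))
    print("primes below 100000:", total)
```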
Design considerations and variations
One feature of distributed grids is that they can be formed from computing resources belonging to one or multiple individuals or organizations (known as multiple administrative domains). This can facilitate commercial transactions, as in utility computing, or make it easier to assemble volunteer computing networks.
One disadvantage of this feature is that the computers which are actually performing the calculations might not be entirely trustworthy. The designers of the system must thus introduce measures to prevent malfunctions or malicious participants from producing false, misleading, or erroneous results, and from using the system as an attack vector. This often involves assigning work randomly to different nodes (presumably with different owners) and checking that at least two different nodes report the same answer for a given work unit. Discrepancies would identify malfunctioning and malicious nodes. However, due to the lack of central control over the hardware, there is no way to guarantee that nodes will not drop out of the network at random times. Some nodes (like laptops or dial-up Internet customers) may also be available for computation but not network communications for unpredictable periods. These variations can be accommodated by assigning large work units (thus reducing the need for continuous network connectivity) and reassigning work units when a given node fails to report its results in the expected time.
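That redundancy check can be sketched as follows; this is an illustration of the general idea, not the scheme of any particular volunteer-computing platform, and the node names and dispatch function are hypothetical.

```python
# Illustrative sketch of result verification by redundancy: each work unit is
# assigned to several randomly chosen nodes, and a result is accepted only when
# a quorum of nodes report the same answer.
import random
from collections import Counter

def verify(work_unit, nodes, run_on_node, quorum=2, replicas=3):
    """Send one work unit to several randomly chosen nodes and compare results."""
    chosen = random.sample(nodes, replicas)
    results = Counter(run_on_node(node, work_unit) for node in chosen)
    answer, votes = results.most_common(1)[0]
    if votes >= quorum:
        return answer                      # consistent answers: accept the result
    # a real system would reassign the unit and flag the disagreeing nodes
    raise RuntimeError(f"no quorum for {work_unit!r}: {dict(results)}")

# Hypothetical usage: `run_on_node` stands in for dispatching work over the network.
nodes = ["node-a", "node-b", "node-c", "node-d"]
honest = lambda node, unit: sum(unit)            # every node computes correctly here
print(verify([1, 2, 3], nodes, honest))          # -> 6
```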
Another set of what could be termed social compatibility issues in the early days of grid computing related to the goals of grid developers to carry their innovation beyond the original field of high-performance computing and across disciplinary boundaries into new fields, like that of high-energy physics.
The impacts of trust and availability on performance and development difficulty can influence the choice of whether to deploy onto a dedicated cluster, to idle machines internal to the developing organization, or to an open external network of volunteers or contractors. In many cases, the participating nodes must trust the central system not to abuse the access that is being granted, by interfering with the operation of other programs, mangling stored information, transmitting private data, or creating new security holes. Other systems employ measures to reduce the amount of trust “client” nodes must place in the central system, such as placing applications in virtual machines.
Public systems or those crossing administrative domains (including different departments in the same organization) often result in the need to run on heterogeneous systems, using different operating systems and hardware architectures. With many languages, there is a trade-off between investment in software development and the number of platforms that can be supported (and thus the size of the resulting network). Cross-platform languages can reduce the need to make this tradeoff, though potentially at the expense of high performance on any given node (due to run-time interpretation or lack of optimization for the particular platform). Various middleware projects have created generic infrastructure to allow diverse scientific and commercial projects to harness a particular associated grid or for the purpose of setting up new grids. BOINC is a common one for various academic projects seeking public volunteers; more are listed at the end of the article.
In fact, the middleware can be seen as a layer between the hardware and the software. On top of the middleware, a number of technical areas have to be considered, and these may or may not be middleware independent. Example areas include SLA management, trust and security, virtual organization management, license management, portals, and data management. These technical areas may be taken care of in a commercial solution, though the cutting edge of each area is often found within specific research projects examining the field.
Market segmentation of the grid computing market
For the segmentation of the grid computing market, two perspectives need to be considered: the provider side and the user side.
The provider side
The overall grid market comprises several specific markets. These are the grid middleware market, the market for grid-enabled applications, the utility computing market, and the software-as-a-service (SaaS) market.
Grid middleware is a specific software product that enables the sharing of heterogeneous resources and the formation of virtual organizations. It is installed and integrated into the existing infrastructure of the involved company or companies and provides a special layer placed between the heterogeneous infrastructure and the specific user applications. Major grid middleware packages include the Globus Toolkit, gLite, and UNICORE.
Utility computing refers to the provision of grid computing and applications as a service, either as an open grid utility or as a hosting solution for one organization or a virtual organization (VO). Major players in the utility computing market are Sun Microsystems, IBM, and HP.
Grid-enabled applications are specific software applications that can utilize grid infrastructure. This is made possible by the use of grid middleware, as pointed out above.
Software as a service (SaaS) is “software that is owned, delivered and managed remotely by one or more providers.” (Gartner 2007) Additionally, SaaS applications are based on a single set of common code and data definitions. They are consumed in a one-to-many model, and SaaS uses a pay-as-you-go (PAYG) model or a subscription model that is based on usage. Providers of SaaS do not necessarily own the computing resources required to run their SaaS, and may therefore draw upon the utility computing market, which provides computing resources for SaaS providers.
The user side
For companies on the demand or user side of the grid computing market, the different segments have significant implications for their IT deployment strategy. The IT deployment strategy, as well as the type of IT investments made, is a relevant consideration for potential grid users and plays an important role in grid adoption.
CPU scavenging
CPU-scavenging, cycle-scavenging, or shared computing creates a “grid” from the idle resources in a network of participants (whether worldwide or internal to an organization). Typically, this technique exploits the 'spare' instruction cycles resulting from the intermittent inactivity that typically occurs at night, during lunch breaks, or even during the (comparatively minuscule, though numerous) moments of idle waiting that modern desktop CPUs experience throughout the day (when the computer is waiting on IO from the user, network, or storage). In practice, participating computers also donate some disk storage space, RAM, and network bandwidth in addition to raw CPU power.
Many volunteer computing projects, such as BOINC, use the CPU scavenging model. Since nodes are likely to go "offline" from time to time, as their owners use their resources for their primary purpose, this model must be designed to handle such contingencies.
Creating an opportunistic environment is another implementation of CPU scavenging, in which a specialized workload management system harvests idle desktop computers for compute-intensive jobs; this is also referred to as an Enterprise Desktop Grid (EDG). For instance, HTCondor, an open-source high-throughput computing software framework for coarse-grained distributed parallelization of computationally intensive tasks, can be configured to use only desktop machines where the keyboard and mouse are idle, effectively harnessing wasted CPU power from otherwise idle desktop workstations. Like other full-featured batch systems, HTCondor provides a job queueing mechanism, scheduling policy, priority scheme, resource monitoring, and resource management. It can be used to manage workload on a dedicated cluster of computers as well, or it can seamlessly integrate both dedicated resources (rack-mounted clusters) and non-dedicated desktop machines (cycle scavenging) into one computing environment.
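The underlying dispatch policy (run grid work only on machines whose keyboard and mouse have been idle long enough, and give the machine back when its owner returns) can be sketched as follows; this is a simplified illustration, not HTCondor's actual configuration language or API, and the thresholds are assumptions.

```python
# Simplified illustration of a cycle-scavenging dispatch policy: jobs are only
# started on machines whose keyboard/mouse have been idle for a while, and are
# evicted when the owner returns. The thresholds below are assumed values for
# demonstration, not any system's defaults.
from dataclasses import dataclass

IDLE_THRESHOLD_S = 15 * 60   # assumed threshold: 15 minutes without input

@dataclass
class Machine:
    name: str
    keyboard_idle_s: float   # seconds since last keyboard/mouse activity
    load_avg: float          # current CPU load from the owner's own work

def can_start_job(m: Machine) -> bool:
    """Start grid work only if the desktop looks unused."""
    return m.keyboard_idle_s > IDLE_THRESHOLD_S and m.load_avg < 0.3

def should_evict_job(m: Machine) -> bool:
    """Give the machine back to its owner as soon as they return."""
    return m.keyboard_idle_s < 60

pool = [Machine("desk-01", 1800, 0.05), Machine("desk-02", 20, 0.70)]
print([m.name for m in pool if can_start_job(m)])   # only the idle machine qualifies
```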
History
The term grid computing originated in the early 1990s as a metaphor for making computer power as easy to access as an electric power grid. The power grid metaphor for accessible computing quickly became canonical when Ian Foster and Carl Kesselman published their seminal work, "The Grid: Blueprint for a new computing infrastructure" (1999). This was preceded by decades by the metaphor of utility computing (1961): computing as a public utility, analogous to the phone system.
CPU scavenging and volunteer computing were popularized beginning in 1997 by distributed.net and later in 1999 by SETI@home to harness the power of networked PCs worldwide, in order to solve CPU-intensive research problems.
The ideas of the grid (including those from distributed computing, object-oriented programming, and Web services) were brought together by Ian Foster and Steve Tuecke of the University of Chicago, and Carl Kesselman of the University of Southern California's Information Sciences Institute. The trio, who led the effort to create the Globus Toolkit, is widely regarded as the "fathers of the grid". The toolkit incorporates not just computation management but also storage management, security provisioning, data movement, monitoring, and a toolkit for developing additional services based on the same infrastructure, including agreement negotiation, notification mechanisms, trigger services, and information aggregation. While the Globus Toolkit remains the de facto standard for building grid solutions, a number of other tools have been built that answer some subset of services needed to create an enterprise or global grid.
In 2007 the term cloud computing came into popularity, which is conceptually similar to the canonical Foster definition of grid computing (in terms of computing resources being consumed as electricity is from the power grid) and earlier utility computing.
Progress
In November 2006, Edward Seidel received the Sidney Fernbach Award at the Supercomputing Conference in Tampa, Florida, "for outstanding contributions to the development of software for HPC and Grid computing to enable the collaborative numerical investigation of complex problems in physics; in particular, modeling black hole collisions." This award, one of the highest honors in computing, recognized his achievements in numerical relativity.
Fastest virtual supercomputers
As of March 2020, Folding@home – 1.1 exaFLOPS.
As of April 7, 2020, BOINC – 29.8 PFLOPS.
As of November 2019, IceCube via OSG – 350 fp32 PFLOPS.
As of February 2018, Einstein@Home – 3.489 PFLOPS.
As of April 7, 2020, SETI@Home – 1.11 PFLOPS.
As of April 7, 2020, MilkyWay@Home – 1.465 PFLOPS.
As of March 2019, GIMPS – 0.558 PFLOPS.
Also, as of March 2019, the Bitcoin Network had a measured computing power equivalent to over 80,000 exaFLOPS (Floating-point Operations Per Second). This measurement reflects the number of FLOPS required to equal the hash output of the Bitcoin network rather than its capacity for general floating-point arithmetic operations, since the elements of the Bitcoin network (Bitcoin mining ASICs) perform only the specific cryptographic hash computation required by the Bitcoin protocol.
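The conversion behind such figures is simple arithmetic: the network's hash rate is multiplied by an assumed number of floating-point operations that a general-purpose computer would need to match one hash. The sketch below uses round invented numbers purely to show the calculation, not the values behind the figure above.

```python
# Illustrative arithmetic only: how a hash rate is converted to a "FLOPS
# equivalent". Both inputs are round, invented numbers for demonstration,
# not the values behind the March 2019 estimate quoted in the text.
hash_rate_hs = 1e18          # assumed network hash rate: 1 exahash per second
flops_per_hash = 10_000      # assumed floating-point ops needed to match one hash

flops_equivalent = hash_rate_hs * flops_per_hash              # 1e22 FLOPS
print(f"{flops_equivalent / 1e18:,.0f} exaFLOPS-equivalent")  # -> 10,000
```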
Projects and applications
Grid computing offers a way to solve Grand Challenge problems such as protein folding, financial modeling, earthquake simulation, and climate/weather modeling, and was integral in enabling the Large Hadron Collider at CERN. Grids offer a way of using information technology resources optimally inside an organization. They also provide a means for offering information technology as a utility for commercial and noncommercial clients, with those clients paying only for what they use, as with electricity or water.
As of October 2016, over 4 million machines running the open-source Berkeley Open Infrastructure for Network Computing (BOINC) platform are members of the World Community Grid. One of the projects using BOINC is SETI@home, which was using more than 400,000 computers to achieve 0.828 TFLOPS as of October 2016. As of October 2016 Folding@home, which is not part of BOINC, achieved more than 101 x86-equivalent petaflops on over 110,000 machines.
The European Union funded projects through the framework programmes of the European Commission. BEinGRID (Business Experiments in Grid) was a research project funded by the European Commission as an Integrated Project under the Sixth Framework Programme (FP6) sponsorship program. Started on June 1, 2006, the project ran 42 months, until November 2009. The project was coordinated by Atos Origin. According to the project fact sheet, its mission was “to establish effective routes to foster the adoption of grid computing across the EU and to stimulate research into innovative business models using Grid technologies”. To extract best practice and common themes from the experimental implementations, two groups of consultants, one technical and one business, analyzed a series of pilots. The project is significant not only for its long duration but also for its budget, which, at 24.8 million euros, was the largest of any FP6 integrated project. Of this, 15.7 million was provided by the European Commission and the remainder by its 98 contributing partner companies. Since the end of the project, the results of BEinGRID have been taken up and carried forward by IT-Tude.com.
The Enabling Grids for E-sciencE project, based in the European Union and including sites in Asia and the United States, was a follow-up project to the European DataGrid (EDG) and evolved into the European Grid Infrastructure. This, along with the Worldwide LHC Computing Grid (WLCG), was developed to support experiments using the CERN Large Hadron Collider. A list of active sites participating within WLCG can be found online, as can real-time monitoring of the EGEE infrastructure. The relevant software and documentation are also publicly accessible. There is speculation that dedicated fiber-optic links, such as those installed by CERN to address the WLCG's data-intensive needs, may one day be available to home users, thereby providing Internet services at speeds up to 10,000 times faster than a traditional broadband connection. The European Grid Infrastructure has also been used for other research activities and experiments such as the simulation of oncological clinical trials.
The distributed.net project was started in 1997.
The NASA Advanced Supercomputing facility (NAS) ran genetic algorithms using the Condor cycle scavenger running on about 350 Sun Microsystems and SGI workstations.
In 2001, United Devices operated the United Devices Cancer Research Project based on its Grid MP product, which cycle-scavenges on volunteer PCs connected to the Internet. The project ran on about 3.1 million machines before its close in 2007.
Definitions
Today there are many definitions of grid computing:
In his article “What is the Grid? A Three Point Checklist”, Ian Foster lists these primary attributes:
Computing resources are not administered centrally.
Open standards are used.
Nontrivial quality of service is achieved.
Plaszczak/Wellner define grid technology as "the technology that enables resource virtualization, on-demand provisioning, and service (resource) sharing between organizations."
IBM defines grid computing as “the ability, using a set of open standards and protocols, to gain access to applications and data, processing power, storage capacity and a vast array of other computing resources over the Internet. A grid is a type of parallel and distributed system that enables the sharing, selection, and aggregation of resources distributed across ‘multiple’ administrative domains based on their (resources) availability, capacity, performance, cost and users' quality-of-service requirements”.
An earlier example of the notion of computing as a utility was given in 1965 by MIT's Fernando Corbató. Corbató and the other designers of the Multics operating system envisioned a computer facility operating “like a power company or water company”.
Buyya/Venugopal define grid as "a type of parallel and distributed system that enables the sharing, selection, and aggregation of geographically distributed autonomous resources dynamically at runtime depending on their availability, capability, performance, cost, and users' quality-of-service requirements".
| Technology | Computer architecture concepts | null |
49375 | https://en.wikipedia.org/wiki/Larynx | Larynx | The larynx, commonly called the voice box, is an organ in the top of the neck involved in breathing, producing sound, and protecting the trachea against food aspiration. The opening of the larynx into the pharynx, known as the laryngeal inlet, is about 4–5 centimeters in diameter. The larynx houses the vocal cords and manipulates pitch and volume, which is essential for phonation. It is situated just below where the tract of the pharynx splits into the trachea and the esophagus. The word 'larynx' (plural: larynges) comes from the Ancient Greek word lárunx, meaning 'larynx, gullet, throat'.
Structure
The triangle-shaped larynx consists largely of cartilages that are attached to one another, and to surrounding structures, by muscles or by fibrous and elastic tissue components. The larynx is lined by a ciliated columnar epithelium except for the vocal folds. The cavity of the larynx extends from its triangle-shaped inlet, to the epiglottis, and to the circular outlet at the lower border of the cricoid cartilage, where it is continuous with the lumen of the trachea. The mucous membrane lining the larynx forms two pairs of lateral folds that project inward into its cavity. The upper folds are called the vestibular folds. They are also sometimes called the false vocal cords for the rather obvious reason that they play no part in vocalization. The Kargyraa style of Tuvan throat singing makes use of these folds to sing an octave lower, and they are used in Umngqokolo, a type of Xhosa throat singing. The lower pair of folds are known as the vocal cords, which produce sounds needed for speech and other vocalizations. The slit-like space between the left and right vocal cords, called the rima glottidis, is the narrowest part of the larynx. The vocal cords and the rima glottidis are together designated as the glottis. The laryngeal cavity above the vestibular folds is called the vestibule. The very middle portion of the cavity between the vestibular folds and the vocal cords is the ventricle of the larynx, or laryngeal ventricle. The infraglottic cavity is the open space below the glottis.
Location
In adult humans, the larynx is found in the anterior neck at the level of the cervical vertebrae C3–C6. It connects the inferior part of the pharynx (hypopharynx) with the trachea. The laryngeal skeleton consists of nine cartilages: three single (epiglottic, thyroid and cricoid) and three paired (arytenoid, corniculate, and cuneiform). The hyoid bone is not part of the larynx, though the larynx is suspended from the hyoid. The larynx extends vertically from the tip of the epiglottis to the inferior border of the cricoid cartilage. Its interior can be divided into the supraglottis, glottis, and subglottis.
Cartilages
There are nine cartilages, three unpaired and three paired (six in total), that support the mammalian larynx and form its skeleton.
Unpaired cartilages:
Thyroid cartilage: This forms the Adam's apple (also called the laryngeal prominence). It is usually larger in males than in females. The thyrohyoid membrane is a ligament associated with the thyroid cartilage that connects it with the hyoid bone. It supports the front portion of the larynx.
Cricoid cartilage: A ring of hyaline cartilage that forms the inferior wall of the larynx. It is attached to the top of the trachea. The median cricothyroid ligament connects the cricoid cartilage to the thyroid cartilage.
Epiglottis: A large, spoon-shaped piece of elastic cartilage. During swallowing, the pharynx and larynx rise. Elevation of the pharynx widens it to receive food and drink; elevation of the larynx causes the epiglottis to move down and form a lid over the glottis, closing it off.
Paired cartilages:
Arytenoid cartilages: Of the paired cartilages, the arytenoid cartilages are the most important because they influence the position and tension of the vocal cords. These are triangular pieces of mostly hyaline cartilage located at the posterosuperior border of the cricoid cartilage.
Corniculate cartilages: Horn-shaped pieces of elastic cartilage located at the apex of each arytenoid cartilage.
Cuneiform cartilages: Club-shaped pieces of elastic cartilage located anterior to the corniculate cartilages.
Muscles
The muscles of the larynx are divided into intrinsic and extrinsic muscles. The extrinsic muscles act on the region and pass between the larynx and parts around it but have their origin elsewhere; the intrinsic muscles are confined entirely within the larynx and have their origin and insertion there.
The intrinsic muscles are divided into respiratory and the phonatory muscles (the muscles of phonation). The respiratory muscles move the vocal cords apart and serve breathing. The phonatory muscles move the vocal cords together and serve the production of voice. The main respiratory muscles are the posterior cricoarytenoid muscles. The phonatory muscles are divided into adductors (lateral cricoarytenoid muscles, arytenoid muscles) and tensors (cricothyroid muscles, thyroarytenoid muscles).
Intrinsic
The intrinsic laryngeal muscles are responsible for controlling sound production.
Cricothyroid muscle lengthens and tenses the vocal cords.
Posterior cricoarytenoid muscles abduct and externally rotate the arytenoid cartilages, resulting in abducted vocal cords.
Lateral cricoarytenoid muscles adduct and internally rotate the arytenoid cartilages, increasing medial compression.
Transverse arytenoid muscle adducts the arytenoid cartilages, resulting in adducted vocal cords.
Oblique arytenoid muscles narrow the laryngeal inlet by constricting the distance between the arytenoid cartilages.
Thyroarytenoid muscles narrow the laryngeal inlet, shorten the vocal cords, and lower voice pitch. The internal thyroarytenoid is the portion of the thyroarytenoid that vibrates to produce sound.
Notably, the only muscle capable of separating the vocal cords for normal breathing is the posterior cricoarytenoid. If this muscle is incapacitated on both sides, the inability to pull the vocal cords apart (abduct) will cause difficulty breathing. Bilateral injury to the recurrent laryngeal nerve would cause this condition. It is also worth noting that all muscles are innervated by the recurrent laryngeal branch of the vagus except the cricothyroid muscle, which is innervated by the external laryngeal branch of the superior laryngeal nerve (a branch of the vagus).
Additionally, intrinsic laryngeal muscles present a constitutive Ca2+-buffering profile that predicts their better ability to handle calcium changes in comparison to other muscles. This profile is in agreement with their function as very fast muscles with a well-developed capacity for prolonged work. Studies suggest that mechanisms involved in the prompt sequestering of Ca2+ (sarcoplasmic reticulum Ca2+-reuptake proteins, plasma membrane pumps, and cytosolic Ca2+-buffering proteins) are particularly elevated in laryngeal muscles, indicating their importance for myofiber function and protection against diseases such as Duchenne muscular dystrophy. Furthermore, the different levels of Orai1 in rat intrinsic laryngeal muscles and extraocular muscles compared with limb muscle suggest a role for store-operated calcium entry channels in those muscles' functional properties and signaling mechanisms.
Extrinsic
The extrinsic laryngeal muscles support and position the larynx within the mid-cervical region.
Sternothyroid muscles depress the larynx. (Innervated by ansa cervicalis)
Omohyoid muscles depress the larynx. (Ansa cervicalis)
Sternohyoid muscles depress the larynx. (Ansa cervicalis)
Inferior constrictor muscles. (CN X)
Thyrohyoid muscles elevate the larynx. (C1)
Digastric elevates the larynx. (CN V3, CN VII)
Stylohyoid elevates the larynx. (CN VII)
Mylohyoid elevates the larynx. (CN V3)
Geniohyoid elevates the larynx. (C1)
Hyoglossus elevates the larynx. (CN XII)
Genioglossus elevates the larynx. (CN XII)
Nerve supply
The larynx is innervated by branches of the vagus nerve on each side. Sensory innervation to the glottis and laryngeal vestibule is by the internal branch of the superior laryngeal nerve. The external branch of the superior laryngeal nerve innervates the cricothyroid muscle. Motor innervation to all other muscles of the larynx and sensory innervation to the subglottis is by the recurrent laryngeal nerve. While the sensory input described above is (general) visceral sensation (diffuse, poorly localized), the vocal cords also receive general somatic sensory innervation (proprioceptive and touch) from the superior laryngeal nerve.
Injury to the external branch of the superior laryngeal nerve causes weakened phonation because the vocal cords cannot be tightened. Injury to one of the recurrent laryngeal nerves produces hoarseness; if both are damaged, the voice may or may not be preserved, but breathing becomes difficult.
Development
In newborn infants, the larynx is initially at the level of the C2–C3 vertebrae, and is further forward and higher relative to its position in the adult body. The larynx descends as the child grows.
Laryngeal cavity
The laryngeal cavity (cavity of the larynx) extends from the laryngeal inlet downwards to the lower border of the cricoid cartilage where it is continuous with that of the trachea.
It is divided into two parts by the projection of the vocal folds, between which is a narrow triangular opening, the rima glottidis.
The portion of the cavity of the larynx above the vocal folds is called the laryngeal vestibule; it is wide and triangular in shape, and its base, or anterior wall, presents near its center the backward projection of the tubercle of the epiglottis.
It contains the vestibular folds, and between these and the vocal folds are the laryngeal ventricles.
The portion below the vocal folds is called the infraglottic cavity. It is at first of an elliptical form, but lower down it widens out, assumes a circular form, and is continuous with the tube of the trachea.
Function
Sound generation
Sound is generated in the larynx, and that is where pitch and volume are manipulated. The strength of expiration from the lungs also contributes to loudness.
Manipulation of the larynx is used to generate a source sound with a particular fundamental frequency, or pitch. This source sound is altered as it travels through the vocal tract, configured differently based on the position of the tongue, lips, mouth, and pharynx. The process of altering a source sound as it passes through the filter of the vocal tract creates the many different vowel and consonant sounds of the world's languages as well as tone, certain realizations of stress and other types of linguistic prosody. The larynx also has a similar function to the lungs in creating pressure differences required for sound production; a constricted larynx can be raised or lowered affecting the volume of the oral cavity as necessary in glottalic consonants.
The vocal cords can be held close together (by adducting the arytenoid cartilages) so that they vibrate (see phonation). The muscles attached to the arytenoid cartilages control the degree of opening. Vocal cord length and tension can be controlled by rocking the thyroid cartilage forward and backward on the cricoid cartilage (either directly by contracting the cricothyroids or indirectly by changing the vertical position of the larynx), by manipulating the tension of the muscles within the vocal cords, and by moving the arytenoids forward or backward. This causes the pitch produced during phonation to rise or fall. In most males the vocal cords are longer and have a greater mass than most females' vocal cords, producing a lower pitch.
The vocal apparatus consists of two pairs of folds, the vestibular folds (false vocal cords) and the true vocal cords. The vestibular folds are covered by respiratory epithelium, while the vocal cords are covered by stratified squamous epithelium. The vestibular folds are not responsible for sound production, but rather for resonance. The exceptions to this are found in Tibetan chanting and Kargyraa, a style of Tuvan throat singing. Both make use of the vestibular folds to create an undertone. These false vocal cords do not contain muscle, while the true vocal cords do have skeletal muscle.
Other
The most important role of the larynx is its protective function, the prevention of foreign objects from entering the lungs by coughing and other reflexive actions. A cough is initiated by a deep inhalation through the vocal cords, followed by the elevation of the larynx and the tight adduction (closing) of the vocal cords. The forced expiration that follows, assisted by tissue recoil and the muscles of expiration, blows the vocal cords apart, and the high pressure expels the irritating object out of the throat. Throat clearing is less violent than coughing, but is a similar increased respiratory effort countered by the tightening of the laryngeal musculature. Both coughing and throat clearing are predictable and necessary actions because they clear the respiratory passageway, but both place the vocal cords under significant strain.
Another important role of the larynx is abdominal fixation, a kind of Valsalva maneuver in which the lungs are filled with air in order to stiffen the thorax so that forces applied for lifting can be translated down to the legs. This is achieved by a deep inhalation followed by the adduction of the vocal cords. Grunting while lifting heavy objects is the result of some air escaping through the adducted vocal cords ready for phonation.
Abduction of the vocal cords is important during physical exertion. The vocal cords are separated by about during normal respiration, but this width is doubled during forced respiration.
During swallowing, elevation of the posterior portion of the tongue levers (inverts) the epiglottis over the glottis' opening to prevent swallowed material from entering the larynx which leads to the lungs, and provides a path for a food or liquid bolus to "slide" into the esophagus; the hyo-laryngeal complex is also pulled upwards to assist this process. Stimulation of the larynx by aspirated food or liquid produces a strong cough reflex to protect the lungs.
In addition, the fact that intrinsic laryngeal muscles are spared in some muscle-wasting disorders, such as Duchenne muscular dystrophy, may facilitate the development of novel strategies for the prevention and treatment of muscle wasting in a variety of clinical scenarios. ILM have a calcium regulation system profile that suggests a better ability to handle calcium changes in comparison to other muscles, and this may provide a mechanistic insight into their unique pathophysiological properties.
Clinical significance
Disorders
Several disorders can cause the larynx to function improperly. Symptoms include hoarseness, loss of voice, pain in the throat or ears, and breathing difficulties.
Acute laryngitis is the sudden inflammation and swelling of the larynx. It is caused by the common cold or by excessive shouting. It is not serious.
Chronic laryngitis is caused by smoking, dust, frequent yelling, or prolonged exposure to polluted air. It is much more serious than acute laryngitis.
Presbylarynx is a condition in which age-related atrophy of the soft tissues of the larynx results in a weak voice and restricted vocal range and stamina. Bowing of the anterior portion of the vocal cords is found on laryngoscopy.
Ulcers may be caused by the prolonged presence of an endotracheal tube.
Polyps and vocal cord nodules are small bumps caused by prolonged exposure to tobacco smoke and vocal misuse, respectively.
Two related types of cancer of the larynx, namely squamous cell carcinoma and verrucous carcinoma, are strongly associated with repeated exposure to cigarette smoke and alcohol.
Vocal cord paresis is weakness of one or both vocal cords that can greatly impact daily life.
Idiopathic laryngeal spasm.
Laryngopharyngeal reflux is a condition in which acid from the stomach irritates and burns the larynx. Similar damage can occur with gastroesophageal reflux disease (GERD).
Laryngomalacia is a very common condition of infancy, in which the soft, immature cartilage of the upper larynx collapses inward during inhalation, causing airway obstruction.
Laryngeal perichondritis, the inflammation of the perichondrium of laryngeal cartilages, causing airway obstruction.
Laryngeal paralysis is a condition seen in some mammals (including dogs) in which the larynx no longer opens as wide as required for the passage of air, and impedes respiration. In mild cases it can lead to exaggerated or "raspy" breathing or panting, and in serious cases can pose a considerable need for treatment.
In Duchenne muscular dystrophy, intrinsic laryngeal muscles (ILM) are spared despite the lack of dystrophin and may serve as a useful model to study the mechanisms of muscle sparing in neuromuscular diseases. Dystrophic ILM presented a significant increase in the expression of calcium-binding proteins. This increase may permit better maintenance of calcium homeostasis, with the consequent absence of myonecrosis. The results further support the concept that abnormal calcium buffering is involved in these neuromuscular diseases.
Treatments
Patients who have lost the use of their larynx are typically prescribed the use of an electrolarynx device. Larynx transplants are a rare procedure. The world's first successful operation took place in 1998 at the Cleveland Clinic, and the second took place in October 2010 at the University of California Davis Medical Center in Sacramento.
Other animals
Pioneering work on the structure and evolution of the larynx was carried out in the 1920s by the British comparative anatomist Victor Negus, culminating in his monumental work The Mechanism of the Larynx (1929). Negus, however, pointed out that the descent of the larynx reflected the reshaping and descent of the human tongue into the pharynx. This process is not complete until age six to eight years. Some researchers, such as Philip Lieberman, Dennis Klatt, Bart de Boer and Kenneth Stevens, using computer-modeling techniques, have suggested that the species-specific human tongue allows the vocal tract (the airway above the larynx) to assume the shapes necessary to produce speech sounds that enhance the robustness of human speech. Sounds such as the vowels [i] and [u] (in phonetic notation) have been shown to be less subject to confusion in classic studies such as the 1950 Peterson and Barney investigation of the possibilities for computerized speech recognition.
In contrast, though other species have low larynges, their tongues remain anchored in their mouths and their vocal tracts cannot produce the range of speech sounds of humans. The ability to lower the larynx transiently in some species extends the length of their vocal tract, which as Fitch showed creates the acoustic illusion that they are larger. Research at Haskins Laboratories in the 1960s showed that speech allows humans to achieve a vocal communication rate that exceeds the fusion frequency of the auditory system by fusing sounds together into syllables and words. The additional speech sounds that the human tongue enables us to produce, particularly [i], allow humans to unconsciously infer the length of the vocal tract of the person who is talking, a critical element in recovering the phonemes that make up a word.
Non-mammals
Most tetrapod species possess a larynx, but its structure is typically simpler than that found in mammals. The cartilages surrounding the larynx are apparently a remnant of the original gill arches in fish and are a common feature, but not all are always present. For example, the thyroid cartilage is found only in mammals. Similarly, only mammals possess a true epiglottis, although a flap of non-cartilaginous mucosa is found in a similar position in many other groups. In modern amphibians, the laryngeal skeleton is considerably reduced; frogs have only the cricoid and arytenoid cartilages, while salamanders possess only the arytenoids.
An example of a frog that possesses a larynx is the túngara frog. While the larynx is the main sound-producing organ in túngara frogs, it is also significant for its contribution to the mating call, which consists of two components: 'whine' and 'chuck'. While the 'whine' induces female phonotaxis and allows species recognition, the 'chuck' increases mating attractiveness. In particular, the túngara frog produces the 'chuck' by vibrating the fibrous mass attached to the larynx.
Vocal folds are found only in mammals, and a few lizards. As a result, many reptiles and amphibians are essentially voiceless; frogs use ridges in the trachea to modulate sound, while birds have a separate sound-producing organ, the syrinx.
History
The ancient Greek physician Galen first described the larynx, calling it the "first and supremely most important instrument of the voice".
Additional images
| Biology and health sciences | Respiratory system | Biology |
49387 | https://en.wikipedia.org/wiki/Deep%20Blue%20%28chess%20computer%29 | Deep Blue (chess computer) | Deep Blue was a chess-playing expert system run on a unique purpose-built IBM supercomputer. It was the first computer to win a game, and the first to win a match, against a reigning world champion under regular time controls. Development began in 1985 at Carnegie Mellon University under the name ChipTest. It then moved to IBM, where it was first renamed Deep Thought, then again in 1989 to Deep Blue. It first played world champion Garry Kasparov in a six-game match in 1996, where it won one, drew two, and lost three games. It was upgraded in 1997 and in a six-game re-match, it defeated Kasparov by winning two games and drawing three. Deep Blue's victory is considered a milestone in the history of artificial intelligence and has been the subject of several books and films.
History
While a doctoral student at Carnegie Mellon University, Feng-hsiung Hsu began development of a chess-playing supercomputer under the name ChipTest. The machine won the North American Computer Chess Championship in 1987 and Hsu and his team followed up with a successor, Deep Thought, in 1988. After receiving his doctorate in 1989, Hsu and Murray Campbell joined IBM Research to continue their project to build a machine that could defeat a world chess champion. Their colleague Thomas Anantharaman briefly joined them at IBM before leaving for the finance industry and being replaced by programmer Arthur Joseph Hoane. Jerry Brody, a long-time employee of IBM Research, subsequently joined the team in 1990.
After Deep Thought's two-game 1989 loss to Kasparov, IBM held a contest to rename the chess machine: the winning name, "Deep Blue", submitted by Peter Fitzhugh Brown, was a play on IBM's nickname, "Big Blue". After a scaled-down version of Deep Blue played Grandmaster Joel Benjamin, Hsu and Campbell decided that Benjamin was the expert they were looking for to help develop Deep Blue's opening book, and hired him to assist with the preparations for Deep Blue's matches against Garry Kasparov. In 1995, a Deep Blue prototype played in the eighth World Computer Chess Championship, playing Wchess to a draw before ultimately losing to Fritz in round five, despite playing as White.
Today, one of the two racks that made up Deep Blue is held by the National Museum of American History, having previously been displayed in an exhibit about the Information Age, while the other rack was acquired by the Computer History Museum in 1997, and is displayed in the Revolution exhibit's "Artificial Intelligence and Robotics" gallery. Several books were written about Deep Blue, among them Behind Deep Blue: Building the Computer that Defeated the World Chess Champion by Deep Blue developer Feng-hsiung Hsu.
Deep Blue versus Kasparov
Subsequent to its predecessor Deep Thought's 1989 loss to Garry Kasparov, Deep Blue played Kasparov twice more. In the first game of the first match, which took place from 10 to 17 February 1996, Deep Blue became the first machine to win a chess game against a reigning world champion under regular time controls. However, Kasparov won three and drew two of the following five games, beating Deep Blue by 4–2 at the close of the match.
Deep Blue's hardware was subsequently upgraded, doubling its speed before it faced Kasparov again in May 1997, when it won the six-game rematch 3½–2½. Deep Blue won the deciding game after Kasparov failed to secure his position in the opening, thereby becoming the first computer system to defeat a reigning world champion in a match under standard chess tournament time controls. The version of Deep Blue that defeated Kasparov in 1997 typically searched to a depth of six to eight moves, and twenty or more moves in some situations. David Levy and Monty Newborn estimate that each additional ply (half-move) of lookahead increases the playing strength by between 50 and 70 Elo points.
In the 44th move of the first game of their second match, unknown to Kasparov, a bug in Deep Blue's code led it to enter an unintentional loop, which it exited by taking a randomly selected valid move. Kasparov did not take this possibility into account, and misattributed the seemingly pointless move to "superior intelligence". Subsequently, Kasparov experienced a decline in performance in the following game, though he denies this was due to anxiety in the wake of Deep Blue's inscrutable move.
After his loss, Kasparov said that he sometimes saw unusual creativity in the machine's moves, suggesting that during the second game, human chess players had intervened on behalf of the machine. IBM denied this, saying the only human intervention occurred between games. Kasparov demanded a rematch, but IBM had dismantled Deep Blue after its victory and refused the rematch. The rules allowed the developers to modify the program between games, an opportunity they said they used to shore up weaknesses in the computer's play that were revealed during the course of the match. Kasparov requested printouts of the machine's log files, but IBM refused, although the company later published the logs on the Internet.
The 1997 tournament awarded a $700,000 first prize to the Deep Blue team and a $400,000 second prize to Kasparov. Carnegie Mellon University awarded an additional $100,000 to the Deep Blue team, a prize created by computer science professor Edward Fredkin in 1980 for the first computer program to beat a reigning world chess champion.
Aftermath
Chess
Kasparov initially called Deep Blue an "alien opponent" but later belittled it, stating that it was "as intelligent as your alarm clock". According to Martin Amis, two grandmasters who played Deep Blue agreed that it was "like a wall coming at you". Hsu had the rights to use the Deep Blue design independently of IBM, but also independently declined Kasparov's rematch offer. In 2003, the documentary film Game Over: Kasparov and the Machine investigated Kasparov's claims that IBM had cheated. In the film, some interviewees describe IBM's investment in Deep Blue as an effort to boost its stock value.
Other games
Following Deep Blue's victory, AI specialist Omar Syed designed a new game, Arimaa, which was intended to be very simple for humans but very difficult for computers to master; however, in 2015, computers proved capable of defeating strong Arimaa players. Since Deep Blue's victory, computer scientists have developed software for other complex board games with competitive communities. The AlphaGo series (AlphaGo, AlphaGo Zero, AlphaZero) defeated top Go players in 2016–2017.
Computer science
Computer scientists such as Deep Blue developer Campbell believed that playing chess was a good measurement for the effectiveness of artificial intelligence, and by beating a world champion chess player, IBM showed that they had made significant progress. Deep Blue is also responsible for the popularity of using games as a display medium for artificial intelligence, as in the cases of IBM Watson or AlphaGo.
While Deep Blue, with its capability of evaluating 200 million positions per second, was the first computer to face a world chess champion in a formal match, it was a then-state-of-the-art expert system, relying upon rules and variables defined and fine-tuned by chess masters and computer scientists. In contrast, current chess engines such as Leela Chess Zero typically use reinforcement machine learning systems that train a neural network to play, developing its own internal logic rather than relying upon rules defined by human experts.
In a November 2006 match between Deep Fritz and world chess champion Vladimir Kramnik, the program ran on a computer system containing a dual-core Intel Xeon 5160 CPU, capable of evaluating only 8 million positions per second, but searching to an average depth of 17 to 18 plies (half-moves) in the middlegame thanks to heuristics; it won 4–2.
Design
Software
Deep Blue's evaluation function was initially written in a generalized form, with many to-be-determined parameters (e.g., how important is a safe king position compared to a space advantage in the center, etc.). Values for these parameters were determined by analyzing thousands of master games. The evaluation function was then split into 8,000 parts, many of them designed for special positions. The opening book encapsulated more than 4,000 positions and 700,000 grandmaster games, while the endgame database contained many six-piece endgames and all endgames with five or fewer pieces. An additional database named the "extended book" summarizes entire games played by Grandmasters. The system combines its searching ability of 200 million chess positions per second with summary information in the extended book to select opening moves.
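To illustrate what a parameterized evaluation function looks like in general, the following minimal sketch uses invented features and weights; it is far simpler than Deep Blue's function and is not taken from it.

```python
# Purely illustrative sketch of a parameterized chess evaluation function.
# The features and weights are invented for demonstration; Deep Blue's real
# function had thousands of parts tuned against master games.
FEATURES = ("material", "king_safety", "center_control", "mobility")

def evaluate(position_features, weights):
    """Linear combination of position features; higher is better for the side to move."""
    return sum(weights[f] * position_features[f] for f in FEATURES)

# Hand-set starting weights (hypothetical); tuning would adjust these so that
# the evaluation agrees with the outcomes of a large set of master games.
weights = {"material": 1.0, "king_safety": 0.4, "center_control": 0.2, "mobility": 0.1}

example_position = {"material": 1.0, "king_safety": -0.5, "center_control": 2.0, "mobility": 5.0}
print(evaluate(example_position, weights))   # a single scalar score
```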
Before the second match, the program's rules were fine-tuned by grandmaster Joel Benjamin. The opening library was provided by grandmasters Miguel Illescas, John Fedorowicz, and Nick de Firmian. When Kasparov requested that he be allowed to study other games that Deep Blue had played so as to better understand his opponent, IBM refused, leading Kasparov to study many popular PC chess games to familiarize himself with computer gameplay.
Hardware
Deep Blue used custom VLSI chips to parallelize the alpha–beta search algorithm, an example of symbolic AI. The system derived its playing strength mainly from brute force computing power. It was a massively parallel IBM RS/6000 SP Supercomputer with 30 PowerPC 604e processors and 480 custom 600 nm CMOS VLSI "chess chips" designed to execute the chess-playing expert system, as well as FPGAs intended to allow patching of the VLSIs (which ultimately went unused), all housed in two cabinets. The chess chip has four parts: the move generator, the smart-move stack, the evaluation function, and the search control. The move generator is an 8×8 combinational logic circuit, a chess board in miniature.
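For reference, the alpha–beta pruning idea that the chess chips implemented in hardware can be written in generic software form roughly as follows; this is a textbook sketch with placeholder helper functions, not Deep Blue's code.

```python
# Textbook sketch of alpha-beta search in negamax form, the algorithm Deep
# Blue's chess chips implemented in custom hardware. `legal_moves`, `apply`,
# and `evaluate` are placeholders for a real game representation; `evaluate`
# must score the position from the side to move's perspective.
def alphabeta(position, depth, alpha, beta, legal_moves, apply, evaluate):
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)
    best = float("-inf")
    for move in moves:
        # Score the child from the opponent's point of view, negated (negamax).
        score = -alphabeta(apply(position, move), depth - 1,
                           -beta, -alpha, legal_moves, apply, evaluate)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:        # cutoff: the opponent will avoid this line anyway
            break
    return best
```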
Its chess playing program was written in C and ran under the AIX operating system. It was capable of evaluating 200 million positions per second, twice as fast as the 1996 version.
In 1997, Deep Blue was upgraded again to become the 259th most powerful supercomputer according to the TOP500 list, achieving 11.38 GFLOPS on the parallel high performance LINPACK benchmark.
| Technology | Specific hardware | null |
49400 | https://en.wikipedia.org/wiki/Window | Window | A window is an opening in a wall, door, roof, or vehicle that allows the exchange of light and may also allow the passage of sound and sometimes air. Modern windows are usually glazed or covered in some other transparent or translucent material, a sash set in a frame in the opening; the sash and frame are also referred to as a window. Many glazed windows may be opened, to allow ventilation, or closed to exclude inclement weather. Windows may have a latch or similar mechanism to lock the window shut or to hold it open by various amounts.
Types include the eyebrow window, fixed windows, hexagonal windows, single-hung, and double-hung sash windows, horizontal sliding sash windows, casement windows, awning windows, hopper windows, tilt, and slide windows (often door-sized), tilt and turn windows, transom windows, sidelight windows, jalousie or louvered windows, clerestory windows, lancet windows, skylights, roof windows, roof lanterns, bay windows, oriel windows, thermal, or Diocletian, windows, picture windows, rose windows, emergency exit windows, stained glass windows, French windows, panel windows, double/triple-paned windows, and witch windows.
Etymology
The English-language word window originates from the Old Norse vindauga, from vindr 'wind' and auga 'eye'. In Norwegian Nynorsk and Icelandic the Old Norse form has survived to this day (in Icelandic only as a less used word for a type of small open "window", not strictly a synonym for gluggi, the Icelandic word for 'window'). In Swedish, a cognate survives as a term for a hole through the roof of a hut, and in Danish vindue and Norwegian vindu, the direct link to 'eye' is lost, just as it is for window. The Danish (but not the Norwegian) word is pronounced fairly similarly to window.
Window is first recorded in the early 13th century, and originally referred to an unglazed hole in a roof. Window replaced the Old English terms that literally meant 'eye-hole' and 'eye-door'. Many Germanic languages, however, adopted the Latin word fenestra to describe a window with glass, such as standard Swedish fönster or German Fenster. The use of window in English is probably because of the Scandinavian influence on the English language by means of loanwords during the Viking Age. In English, the word fenester was used as a parallel until the mid-18th century. Fenestration is still used to describe the arrangement of windows within a façade, as is defenestration, meaning 'the act of throwing out of a window'.
History
The Romans were the first known to use glass for windows, a technology likely first produced in Roman Egypt, in Alexandria around 100 AD. Depictions of windows can be seen in ancient Egyptian wall art and sculptures from Assyria. Paper windows were economical and widely used in ancient China, Korea, and Japan. In England, glass became common in the windows of ordinary homes only in the early 17th century, whereas windows made up of panes of flattened animal horn were used as early as the 14th century. In the 19th-century American west, greased paper windows came to be used by pioneering settlers. Modern-style floor-to-ceiling windows became possible only after the industrial plate glass making processes were fully perfected.
Technologies
In the 13th century BC, the earliest windows were unglazed openings in a roof to admit light during the day. Later, windows were covered with animal hide, cloth, or wood. Shutters that could be opened and closed came next. Over time, windows were built that both protected the inhabitants from the elements and transmitted light, using multiple small pieces of translucent material, such as flattened pieces of translucent animal horn, paper sheets, thin slices of marble (such as fengite), or pieces of glass, set in frameworks of wood, iron or lead. In the Far East, paper was used to fill windows.
The Romans were the first known users of glass for windows, exploiting a technology likely first developed in Roman Egypt. Specifically, in Alexandria around 100 CE, cast-glass windows, albeit with poor optical properties, began to appear, but these were small, thick productions, little more than blown-glass jars (cylindrical shapes) flattened out into sheets with circular striation patterns throughout. It would be over a millennium before window glass became transparent enough to see through clearly, as we expect now. In 1154, Al-Idrisi described glass windows as a feature of the palace belonging to the king of the Ghana Empire.
Over the centuries techniques were developed to shear through one side of a blown glass cylinder and produce thinner rectangular window panes from the same amount of glass material. This gave rise to tall narrow windows, usually separated by a vertical support called a mullion. Mullioned glass windows were the windows of choice among the European well-to-do, whereas paper windows were economical and widely used in ancient China, Korea, and Japan. In England, glass became common in the windows of ordinary homes only in the early-17th century, whereas windows made up of panes of flattened animal horn were used as early as the 14th century.
Modern-style floor-to-ceiling windows became possible only after the industrial plate glass-making processes were perfected in the late 19th century. Modern windows are usually filled with glass, although transparent plastic is also used.
Fashions and trends
The introduction of lancet windows into Western European church architecture from the 12th century CE built on a tradition of arched windows inserted between columns, and led not only to tracery and elaborate stained-glass windows but also to a long-standing motif of pointed or rounded window-shapes in ecclesiastical buildings, still seen in many churches today.
Peter Smith discusses overall trends in early-modern rural Welsh window architecture:
Up to about 1680 windows tended to be horizontal in proportion, a shape suitable for lighting the low-ceilinged rooms that had resulted from the insertion of the upper floor into the hall-house. After that date vertically proportioned windows came into fashion, partly at least as a response to the Renaissance taste for the high ceiling. Since 1914 the wheel has come full circle and a horizontally proportioned window is again favoured.
The spread of plate-glass technology made possible the introduction of picture windows (in Levittown, Pennsylvania, founded 1951–1952).
Many modern-day windows may have a window screen or mesh, often made of aluminum or fibreglass, to keep insects out when the window is opened. Windows are primarily designed to facilitate a vital connection with the outdoors, offering those within the building visual access to the ever-changing events occurring outside. This connection serves as an important safeguard for the health and well-being of occupants, who would otherwise experience the detrimental effects of enclosed buildings devoid of windows. Among the many criteria for the design of windows, several pivotal criteria have emerged in daylight standards: location, time, weather, nature, and people. Of these, views of nature are considered by people to be the most important.
Types
Cross
A cross-window is a rectangular window usually divided into four lights by a mullion and transom that form a Latin cross.
Eyebrow
The term eyebrow window is used in two ways: a curved top window in a wall or an eyebrow dormer; and a row of small windows usually under the front eaves such as the James-Lorah House in Pennsylvania.
Fixed
A fixed window is a window that cannot be opened, whose function is limited to allowing light to enter (unlike an unfixed window, which can open and close). Clerestory windows in church architecture are often fixed. Transom windows may be fixed or operable. This type of window is used in situations where light or vision alone is needed as no ventilation is possible in such windows without the use of trickle vents or overglass vents.
Single-hung sash
A single-hung sash window is a window that has one sash that is movable (usually the bottom one) and the other fixed. This is the earlier form of sliding sash window and is also cheaper.
Double-hung sash
A sash window is the traditional style of window in the United Kingdom, and many other places that were formerly colonized by the UK, with two parts (sashes) that overlap slightly and slide up and down inside the frame. The two parts are not necessarily the same size; where the upper sash is smaller (shorter) it is termed a cottage window. Currently, most new double-hung sash windows use spring balances to support the sashes, but traditionally, counterweights held in boxes on either side of the window were used. These were and are attached to the sashes using pulleys of either braided cord or, later, purpose-made chain. The three types of spring balance are the tape (or clock spring) balance, the channel (or block-and-tackle) balance, and the spiral (or tube) balance.
Double-hung sash windows were traditionally often fitted with shutters. Sash windows can be fitted with simplex hinges that let the window be locked into hinges on one side, while the rope on the other side is detached—so the window can be opened for fire escape or cleaning.
Foldup
A foldup has two equal sashes similar to a standard double-hung but folds upward allowing air to pass through nearly the full-frame opening. The window is balanced using either springs or counterbalances, similar to a double-hung. The sashes can be either offset to simulate a double-hung, or in-line. The inline versions can be made to fold inward or outward. The inward swinging foldup windows can have fixed screens, while the outward swinging ones require movable screens. The windows are typically used for screen rooms, kitchen pass-throughs, or egress.
Horizontal sliding sash
A horizontal sliding sash window has two or more sashes that overlap slightly but slide horizontally within the frame. In the UK, these are sometimes called Yorkshire sash windows, presumably because of their traditional use in that county.
Casement
A casement window is a window with a hinged sash that swings in or out like a door comprising either a side-hung, top-hung (also called "awning window"; see below), or occasionally bottom-hung sash or a combination of these types, sometimes with fixed panels on one or more sides of the sash. In the US, these are usually opened using a crank, but in parts of Europe, they tend to use projection friction stays and espagnolette locking. Formerly, plain hinges were used with a casement stay. Handing applies to casement windows to determine direction of swing; a casement window may be left-handed, right-handed, or double. The casement window is the dominant type now found in modern buildings in the UK and many other parts of Europe.
Awning
An awning window is a casement window that is hung horizontally, hinged on top, so that it swings outward like an awning. In addition to being used independently, they can be stacked, several in one opening, or combined with fixed glass. They are particularly useful for ventilation.
Hopper
A hopper window is a bottom-pivoting casement window that opens by tilting vertically, typically to the inside, resembling a hopper chute.
Pivot
A pivot window is a window hung on one hinge on each of two opposite sides which allows the window to revolve when opened. The hinges may be mounted top and bottom (Vertically Pivoted) or at each jamb (Horizontally Pivoted). The window will usually open initially to a restricted position for ventilation and, once released, fully reverse and lock again for safe cleaning from inside. Modern pivot hinges incorporate a friction device to hold the window open against its weight and may have restriction and reversed locking built-in. In the UK, where this type of window is most common, they were extensively installed in high-rise social housing.
Tilt and slide
A tilt and slide window is a window (more usually a door-sized window) where the sash tilts inwards at the top similar to a hopper window and then slides horizontally behind the fixed pane.
Tilt and turn
A tilt and turn window can both tilt inwards at the top or open inwards from hinges at the side. This is the most common type of window in Germany, its country of origin. It is also widespread in many other European countries. In Europe, it is usual for these to be of the "turn first" type, i.e. when the handle is turned to 90 degrees the window opens in the side-hung mode, and with the handle turned to 180 degrees the window opens in the bottom-hung mode. Most usually in the UK, the windows will be "tilt first", i.e. bottom-hung at 90 degrees for ventilation and side-hung at 180 degrees for cleaning the outer face of the glass from inside the building.
Transom
A transom window is a window above a door. In an exterior door the transom window is often fixed; in an interior door, it can open either by hinges at top or bottom, or rotate on hinges. It provided ventilation before forced-air heating and cooling. A fan-shaped transom is known as a fanlight, especially in the British Isles.
Side light
Windows beside a door or window are called side lights, wing lights, margin lights, or flanking windows.
Jalousie window
Also known as a louvered window, the jalousie window consists of parallel slats of glass or acrylic that open and close like a Venetian blind, usually using a crank or a lever. They are used extensively in tropical architecture. A jalousie door is a door with a jalousie window.
Clerestory
A clerestory window is a window set in a roof structure or high in a wall, used for daylighting.
Skylight
A skylight is a window built into a roof structure. This type of window allows for natural daylight and moonlight.
Roof
A roof window is a sloped window used for daylighting, built into a roof structure. It is one of the few windows that could be used as an exit. Larger roof windows meet building codes for emergency evacuation.
Roof lantern
A roof lantern is a multi-paned glass structure, resembling a small building, built on a roof for daylight or moonlight. It sometimes includes an additional clerestory and may also be called a cupola.
Bay
A bay window is a multi-panel window, with at least three panels set at different angles to create a protrusion from the wall line.
Oriel
An oriel window is a form of bay window. This form most often appears in Tudor-style houses and monasteries. It projects from the wall and does not extend to the ground. Originally a form of porch, they are often supported by brackets or corbels.
Thermal
Thermal, or Diocletian, windows are large semicircular windows (or niches) which are usually divided into three lights (window compartments) by two mullions. The central light is often wider than the two side lights flanking it.
Picture
A picture window is a large fixed window in a wall, typically without glazing bars, or glazed with only perfunctory glazing bars (muntins) near the edge of the window. Picture windows provide an unimpeded view, as if framing a picture.
Multi-lite
A multi-lite window is a window glazed with small panes of glass separated by wooden or lead glazing bars, or muntins, arranged in a decorative glazing pattern often dictated by the building's architectural style. Due to the historic unavailability of large panes of glass, the multi-lite (or lattice) window was the most common window style until the beginning of the 20th century, and is still used in traditional architecture.
Emergency exit/egress
An emergency exit window is a window big enough and low enough that occupants can escape through the opening in an emergency, such as a fire. In many countries, exact specifications for emergency windows in bedrooms are given in building codes. Specifications for such windows may also allow for the entrance of emergency rescuers. Vehicles, such as buses, aircraft, and trains, frequently have emergency exit windows as well.
Stained glass
A stained glass window is a window composed of pieces of colored glass, transparent, translucent or opaque, frequently portraying persons or scenes. Typically the glass in these windows is separated by lead glazing bars. Stained glass windows were popular in Victorian houses and some Wrightian houses, and are especially common in churches.
French
A French door has two rows of upright rectangular glass panes (lights) extending its full length. A pair of such doors on an exterior wall, without a mullion separating them, that open outward with opposing hinges to a terrace or porch is referred to as a French window. Sometimes these are set in pairs or multiples thereof along the exterior wall of a very large room, but often one French window is placed centrally in a typically sized room, perhaps among other fixed windows flanking the feature. French windows are known as porte-fenêtre in France and portafinestra in Italy, and are frequently used in modern houses.
Double-paned
Double-paned windows have two parallel panes (slabs of glass) with a separation of typically about 1 cm; this space is permanently sealed and filled at the time of manufacture with dry air or other dry nonreactive gas. Such windows provide a marked improvement in thermal insulation (and usually in acoustic insulation as well) and are resistant to fogging and frosting caused by temperature differential. They are widely used for residential and commercial construction in intemperate climates. In the UK, double-paned and triple-paned are referred to as double-glazing and triple-glazing. Triple-paned windows are now a common type of glazing in central to northern Europe. Quadruple glazing is now being introduced in Scandinavia.
Hexagonal window
A hexagonal window is a hexagon-shaped window, resembling a bee cell or the crystal lattice of graphite. The window can be vertically or horizontally oriented, openable or fixed ("dead"). It can also be regular or elongated in shape and can have a separator (mullion). Typically, such a window is used for an attic or as a decorative feature, but it can also be a major architectural element providing natural lighting inside buildings.
Guillotine window
A guillotine window is a window that opens vertically. Guillotine windows have more than one sliding frame, and open from bottom to top or top to bottom.
Terms
EN 12519 is the European standard that describes window terms officially used in EU member states. The main terms are:
Light, or Lite, is the area between the outer parts of a window (transom, sill and jambs), usually filled with a glass pane. Multiple panes are divided by mullions when load-bearing, muntins when not.
Lattice light is a compound window pane made up of small pieces of glass held together in a lattice.
Fixed window is a unit of one non-moving lite. The terms single-light, double-light, etc., refer to the number of these glass panes in a window.
Sash unit is a window consisting of at least one sliding glass component, typically composed of two lites (known as a double-light).
Replacement window in the United States means a framed window designed to slip inside the original window frame from the inside after the old sashes are removed. In Europe, it usually means a complete window including a replacement outer frame.
New construction window, in the US, means a window with a nailing fin that is inserted into a rough opening from the outside before applying siding and inside trim. A nailing fin is a projection on the outer frame of the window in the same plane as the glazing, which overlaps the prepared opening, and can thus be 'nailed' into place. In the UK and mainland Europe, windows in new-build houses are usually fixed with long screws into expanding plastic plugs in the brickwork. A gap of up to 13 mm is left around all four sides, and filled with expanding polyurethane foam. This makes the window fixing weatherproof but allows for expansion due to heat.
Lintel is a beam over the top of a window, also known as a transom.
Window sill is the bottom piece in a window frame. Window sills slant outward to drain water away from the inside of the building.
Secondary glazing is an additional frame applied to the inside of an existing frame, usually used on protected or listed buildings to achieve higher levels of thermal and sound insulation without compromising the look of the building.
Decorative millwork is the moulding, cornices and lintels often decorating the surrounding edges of the window.
Labeling
The United States NFRC Window Label lists the following terms:
Thermal transmittance (U-factor), best values are around U-0.15 (equal to about 0.8 W/(m²·K); see the worked conversion after this list)
Solar heat gain coefficient (SHGC), ratio of solar heat (infrared) passing through the glass to incident solar heat
Visible transmittance (VT), ratio of transmitted visible light divided by incident visible light
Air leakage (AL), measured in cubic feet per minute per linear foot of crack between sash and frame
Condensation resistance (CR), measured between 1 and 100 (the higher the number, the higher the resistance to the formation of condensation)
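As a rough worked check of the U-factor figure above (assuming, as on NFRC labels, that U-0.15 is expressed in Btu/(h·ft²·°F)), the conversion to SI units is

\[
U \approx 0.15\,\frac{\mathrm{Btu}}{\mathrm{h\cdot ft^{2}\cdot {}^{\circ}F}} \times 5.678\,\frac{\mathrm{W/(m^{2}\cdot K)}}{\mathrm{Btu/(h\cdot ft^{2}\cdot {}^{\circ}F)}} \approx 0.85\,\frac{\mathrm{W}}{\mathrm{m^{2}\cdot K}},
\]

consistent with the rounded value of about 0.8 W/(m²·K) given above.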
The European harmonised standard hEN 14351–1, which deals with doors and windows, defines 23 characteristics (divided into essential and non-essential). Two other preliminary European Norms are under development: one deals with internal pedestrian doors (prEN 14351–2), the other with smoke- and fire-resisting doors and openable windows (prEN 16034).
Construction
Windows can be a significant source of heat transfer. Therefore, insulated glazing units consist of two or more panes to reduce the transfer of heat.
Grids or muntins
These are the pieces of framing that separate a larger window into smaller panes. In older windows, large panes of glass were quite expensive, so muntins let smaller panes fill a larger space. In modern windows, light-colored muntins still provide a useful function by reflecting some of the light going through the window, making the window itself a source of diffuse light (instead of just the surfaces and objects illuminated within the room). By increasing the indirect illumination of surfaces near the window, muntins tend to brighten the area immediately around a window and reduce the contrast of shadows within the room.
Frame and sash construction
Frames and sashes can be made of the following materials:
Composites (also known as hybrid windows) first appeared around 1998 and combine materials such as aluminium and PVC or wood, to obtain the aesthetics of one material with the functional benefits of another.
A special class of PVC window frames, uPVC window frames, has become widespread since the late 20th century, particularly in Europe: 83.5 million had been installed by 1998, with numbers still growing as of 2012.
Glazing and filling
Low-emissivity coated panes reduce heat transfer by radiation, which, depending on which surface is coated, helps prevent heat loss (in cold climates) or heat gains (in warm climates).
High thermal resistance can be obtained by evacuating or filling the insulated glazing units with gases such as argon or krypton, which reduces conductive heat transfer due to their low thermal conductivity. Performance of such units depends on good window seals and meticulous frame construction to prevent entry of air and loss of efficiency.
Modern double-pane and triple-pane windows often include one or more low-e coatings to reduce the window's U-factor (its insulation value, specifically its rate of heat loss). In general, soft-coat low-e coatings tend to result in a lower solar heat gain coefficient (SHGC) than hard-coat low-e coatings.
Modern windows are usually glazed with one large sheet of glass per sash, while windows in the past were glazed with multiple panes separated by glazing bars, or muntins, due to the unavailability of large sheets of glass. Today, glazing bars tend to be decorative, separating windows into small panes of glass even though larger panes of glass are available, generally in a pattern dictated by the architectural style in use. Glazing bars are typically wooden, but occasionally lead glazing bars soldered in place are used for more intricate glazing patterns.
Other construction details
Many windows have movable window coverings such as blinds or curtains to keep out light, provide additional insulation, or ensure privacy.
Windows allow natural light to enter, but too much can have negative effects such as glare and heat gain. Additionally, while windows let the user see outside, there must be a way to maintain privacy on the inside. Window coverings are practical accommodations for these issues.
Impact of the sun
Sun incidence angle
Historically, windows are designed with surfaces parallel to vertical building walls. Such a design allows considerable solar light and heat penetration due to the most commonly occurring incidence of sun angles. In passive solar building design, an extended eave is typically used to control the amount of solar light and heat entering the window(s).
An alternative method is to calculate an optimum window mounting angle that minimizes summer sun load, taking into account the actual latitude of the building. This process has been implemented, for example, in the Dakin Building in Brisbane, California, in which most of the fenestration is designed to reflect summer heat load and help prevent summer interior over-illumination and glare, by canting windows to nearly a 45-degree angle.
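As a rough geometric sketch of this idea (not the building's actual design calculation: the latitude of about 37.7° N for Brisbane, California, the noon-only geometry, and the assumption that the glass is canted with its top leaning outward are all illustrative assumptions), one can compare the angle at which the noon sun strikes vertical versus canted glazing at the solstices; the larger the incidence angle measured from the glass normal, the more of the direct beam is reflected rather than transmitted:

    # Hypothetical sketch: noon solar geometry for a sun-facing window.
    def noon_solar_altitude(latitude_deg, declination_deg):
        # Altitude of the sun above the horizon at solar noon.
        return 90.0 - abs(latitude_deg - declination_deg)

    def incidence_angle(sun_altitude_deg, cant_deg=0.0):
        # Angle between the sun's rays and the glass normal for a sun-facing
        # window whose top leans outward by cant_deg (0 = vertical glazing).
        # Angles near or beyond 90 degrees mean the direct beam is almost
        # entirely reflected or no longer strikes the outer face at all.
        return sun_altitude_deg + cant_deg

    LATITUDE = 37.7                                  # assumed for Brisbane, CA
    summer = noon_solar_altitude(LATITUDE, +23.44)   # about 75.7 degrees
    winter = noon_solar_altitude(LATITUDE, -23.44)   # about 28.9 degrees

    print(incidence_angle(summer, 0), incidence_angle(summer, 45))  # ~75.7 vs ~120.7
    print(incidence_angle(winter, 0), incidence_angle(winter, 45))  # ~28.9 vs ~73.9

Under these assumptions the canted glass turns the high summer sun into a grazing or self-shaded incidence, while the lower winter sun still reaches the glazing, which is the qualitative effect described above.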
Solar window
Photovoltaic windows not only provide a clear view and illuminate rooms, but also convert sunlight to electricity for the building. In most cases, translucent photovoltaic cells are used.
Passive solar
Passive solar windows allow light and solar energy into a building while minimizing air leakage and heat loss. Properly positioning these windows in relation to sun, wind, and landscape—while properly shading them to limit excess heat gain in summer and shoulder seasons, and providing thermal mass to absorb energy during the day and release it when temperatures cool at night—increases comfort and energy efficiency. Properly designed in climates with adequate solar gain, these can even be a building's primary heating system.
Coverings
A window covering is a shade or screen that provides multiple functions. Some coverings, such as drapes and blinds, provide occupants with privacy. Some window coverings control solar heat gain and glare. There are external shading devices and internal shading devices. Low-e window film is a low-cost alternative to window replacement for transforming existing poorly insulating windows into energy-efficient windows. For high-rise buildings, smart glass can provide an alternative.
Gallery
| Technology | Architectural elements | null |
49414 | https://en.wikipedia.org/wiki/Sex-determination%20system | Sex-determination system | A sex-determination system is a biological system that determines the development of sexual characteristics in an organism. Most organisms that create their offspring using sexual reproduction have two common sexes, males and females, and in other species, there are hermaphrodites, organisms that can function reproductively as either female or male, or both.
There are also some species in which only one sex is present, temporarily or permanently. This can be due to parthenogenesis, the act of a female reproducing without fertilization. In some plants or algae the gametophyte stage may reproduce itself, thus producing more individuals of the same sex as the parent.
In some species, sex determination is genetic: males and females have different alleles or even different genes that specify their sexual morphology. In animals this is often accompanied by chromosomal differences, generally through combinations of XY, ZW, XO, ZO chromosomes, or haplodiploidy. The sexual differentiation is generally triggered by a main gene (a "sex locus"), with a multitude of other genes following in a domino effect.
In other cases, the sex of a fetus is determined by environmental variables (such as temperature). The details of some sex-determination systems are not yet fully understood. It is hoped that future analysis of fetal biological systems will identify reproductive-system signals that can be measured during pregnancy to determine the sex of a fetus more accurately. Such analysis could also indicate whether the fetus is hermaphroditic, possessing all or part of both male and female reproductive organs.
Some species such as various plants and fish do not have a fixed sex, and instead go through life cycles and change sex based on genetic cues during corresponding life stages of their type. This could be due to environmental factors such as seasons and temperature. In some gonochoric species, a few individuals may have conditions that cause a mix of different sex characteristics.
Discovery
Sex determination was discovered in the mealworm by the American geneticist Nettie Stevens in 1903.
In 1694, J.R. Camerarius conducted early experiments on pollination and reported the existence of male and female characteristics in plants (maize).
In 1866, Gregor Mendel published on inheritance of genetic traits. This is known as Mendelian inheritance and it eventually established the modern understanding of inheritance from two gametes.
In 1902, C.E. McClung identified sex chromosomes in bugs.
In 1917, C.E. Allen discovered sex determination mechanisms in plants.
In 1922, C.B. Bridges put forth the Genic Balance Theory of sex determination.
Chromosomal systems
Among animals, the most common chromosomal sex determination systems are XY, XO, ZW, ZO, but with numerous exceptions.
The Tree of Sex database (as of 2023) catalogues the known sex-determination systems across taxa, ranging from the chromosomal systems described below (including complex and homomorphic sex chromosomes) to haplodiploid, polygenic, and environmental systems.
XX/XY sex chromosomes
The XX/XY sex-determination system is the most familiar, as it is found in humans. The XX/XY system is found in most other mammals, as well as some insects. In this system, females have two of the same kind of sex chromosome (XX), while males have two distinct sex chromosomes (XY). The X and Y sex chromosomes are different in shape and size from each other, unlike the rest of the chromosomes (autosomes), and are sometimes called allosomes. In some species, such as humans, organisms remain sex indifferent for a time during development (embryogenesis); in others, however, such as fruit flies, sexual differentiation occurs as soon as the egg is fertilized.
Y-centered sex determination
Some species (including humans) have a gene SRY on the Y chromosome that determines maleness. Members of SRY-reliant species can have uncommon XY chromosomal combinations such as XXY and still live.
Human sex is determined by the presence or absence of a Y chromosome with a functional SRY gene. Once the SRY gene is activated, cells create testosterone and anti-müllerian hormone which typically ensures the development of a single, male reproductive system. In typical XX embryos, cells secrete estrogen, which drives the body toward the female pathway.
In Y-centered sex determination, the SRY gene is the main gene determining male characteristics, but multiple genes are required to develop testes. In XY mice, lack of the gene DAX1 on the X chromosome results in sterility, but in humans it causes adrenal hypoplasia congenita. However, when an extra DAX1 gene is placed on the X chromosome, the result is a female, despite the existence of SRY, since it overrides the effects of SRY. Even when there are normal sex chromosomes in XX females, duplication or expression of SOX9 causes testes to develop. Gradual sex reversal in fully developed mice can also occur when the gene FOXL2 is removed from females. Even though the gene DMRT1 is used by birds as their sex locus, species that have XY chromosomes also rely upon DMRT1, located on chromosome 9, for sexual differentiation at some point in their development.
X-centered sex determination
Some species, such as fruit flies, use the presence of two X chromosomes to determine femaleness. Species that use the number of Xs to determine sex are nonviable with an extra X chromosome.
Other variants of XX/XY sex determination
Some fish have variants of the XY sex-determination system, as well as the regular system. For example, while having an XY format, Xiphophorus nezahualcoyotl and X. milleri also have a second Y chromosome, known as Y', that creates XY' females and YY' males.
At least one monotreme, the platypus, presents a particular sex determination scheme that in some ways resembles that of the ZW sex chromosomes of birds, and lacks the SRY gene. The platypus has ten sex chromosomes: males have five X and five Y chromosomes (X1Y1X2Y2X3Y3X4Y4X5Y5), while females have five pairs of X chromosomes. During meiosis, the five X chromosomes form one chain and the five Y chromosomes form another. Thus they behave effectively as a typical XY chromosomal system, except that each of X and Y is broken into five parts, with the effect that recombination occurs very frequently at four particular points. One of the X chromosomes is homologous to the human X chromosome, and another is homologous to the bird Z chromosome.
Although it is an XY system, the platypus' sex chromosomes share no homologues with eutherian sex chromosomes. Instead, homologues with eutherian sex chromosomes lie on the platypus chromosome 6, which means that the eutherian sex chromosomes were autosomes at the time that the monotremes diverged from the therian mammals (marsupials and eutherian mammals). However, homologues to the avian DMRT1 gene on platypus sex chromosomes X3 and X5 suggest that it is possible the sex-determining gene for the platypus is the same one that is involved in bird sex-determination. More research must be conducted in order to determine the exact sex determining gene of the platypus.
XX/X0 sex chromosomes
In this variant of the XY system, females have two copies of the sex chromosome (XX) but males have only one (X0). The 0 denotes the absence of a second sex chromosome. Generally in this method, the sex is determined by amount of genes expressed across the two chromosomes. This system is observed in a number of insects, including the grasshoppers and crickets of order Orthoptera and in cockroaches (order Blattodea). A small number of mammals also lack a Y chromosome. These include the Amami spiny rat (Tokudaia osimensis) and the Tokunoshima spiny rat (Tokudaia tokunoshimensis) and Sorex araneus, a shrew species. Transcaucasian mole voles (Ellobius lutescens) also have a form of XO determination, in which both sexes lack a second sex chromosome. The mechanism of sex determination is not yet understood.
The nematode C. elegans is male with one sex chromosome (X0); with a pair of chromosomes (XX) it is a hermaphrodite. Its main sex gene is XOL, which encodes XOL-1 and also controls the expression of the genes TRA-2 and HER-1. These genes reduce male gene activation and increase it, respectively.
ZW/ZZ sex chromosomes
The ZW sex-determination system is found in birds, some reptiles, and some insects and other organisms. The ZW sex-determination system is reversed compared to the XY system: females have two different kinds of chromosomes (ZW), and males have two of the same kind of chromosomes (ZZ). In the chicken, this was found to be dependent on the expression of DMRT1. In birds, the genes FET1 and ASW are found on the W chromosome for females, similar to how the Y chromosome contains SRY. However, not all species depend upon the W for their sex. For example, there are moths and butterflies that are ZW, but some have been found female with ZO, as well as female with ZZW. Also, while mammals deactivate one of their extra X chromosomes when female, it appears that in the case of Lepidoptera, the males produce double the normal amount of enzymes, due to having two Z's. Because the use of ZW sex determination is varied, it is still unknown how exactly most species determine their sex. However, reportedly, the silkworm Bombyx mori uses a single female-specific piRNA as the primary determiner of sex. Despite the similarities between the ZW and XY systems, these sex chromosomes evolved separately. In the case of the chicken, their Z chromosome is more similar to humans' autosome 9. The chicken's Z chromosome also seems to be related to the X chromosome of the platypus. When a ZW species, such as the Komodo dragon, reproduces parthenogenetically, usually only males are produced. This is due to the fact that the haploid eggs double their chromosomes, resulting in ZZ or WW. The ZZ become males, but the WW are not viable and are not brought to term.
In both XY and ZW sex determination systems, the sex chromosome carrying the critical factors is often significantly smaller, carrying little more than the genes necessary for triggering the development of a given sex.
ZZ/Z0 sex chromosomes
The ZZ/Z0 sex-determination system is found in some moths. In these insects there is one sex chromosome, Z. Males have two Z chromosomes, whereas females have one Z. Males are ZZ, while females are Z0.
UV sex chromosomes
In some bryophyte and some algae species, the gametophyte stage of the life cycle, rather than being hermaphrodite, occurs as separate male or female individuals that produce male and female gametes respectively. When meiosis occurs in the sporophyte generation of the life cycle, the sex chromosomes known as U and V assort in spores that carry either the U chromosome and give rise to female gametophytes, or the V chromosome and give rise to male gametophytes.
Mating types
The mating type in microorganisms is analogous to sex in multi-cellular organisms, and is sometimes described using those terms, though mating types are not necessarily correlated with physical body structures. Some species have more than two mating types; Tetrahymena, a type of ciliate, has seven.
Mating types are extensively studied in fungi. Among fungi, mating type is determined by chromosomal regions called mating-type loci. Furthermore, it is not as simple as "two different mating types can mate"; rather, it is a matter of combinatorics. As a simple example, most basidiomycetes have a "tetrapolar heterothallism" mating system: there are two loci, and mating between two individuals is possible if the alleles at both loci differ. For example, if there are 3 alleles per locus, then there would be 9 mating types, each of which can mate with 4 other mating types. By multiplicative combination, this generates a vast number of mating types; Schizophyllum commune, a type of fungus, has more than 23,000 mating types.
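As a worked restatement of the combinatorics just described (the allele counts are the example's own, not measured values): with $a$ alleles at one mating-type locus and $b$ at the other in a tetrapolar system,

\[
\text{number of mating types} = a \times b, \qquad \text{compatible partners per type} = (a-1)(b-1),
\]

so for $a = b = 3$ there are $3 \times 3 = 9$ mating types, each able to mate with $2 \times 2 = 4$ others, as stated above.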
Haplodiploidy
Haplodiploidy is found in insects belonging to Hymenoptera, such as ants and bees. Sex determination is controlled by the zygosity of a complementary sex determiner (csd) locus. Unfertilized eggs develop into haploid individuals, which have a single, hemizygous copy of the csd locus and are therefore males. Fertilized eggs develop into diploid individuals which, due to high variability at the csd locus, are generally heterozygous females. In rare instances diploid individuals may be homozygous; these develop into sterile males.
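A minimal illustrative sketch of the csd logic described above (the allele names, and the reduction of sex determination to ploidy plus csd zygosity, are simplifying assumptions for illustration only):

    # Hypothetical sketch of haplodiploid sex determination via the csd locus.
    def hymenopteran_sex(csd_alleles):
        """csd_alleles: the csd alleles an individual carries.
        One allele  -> haploid (from an unfertilized egg) -> male.
        Two alleles -> diploid (from a fertilized egg):
            different alleles (heterozygous) -> female,
            identical alleles (homozygous)  -> sterile diploid male."""
        if len(csd_alleles) == 1:
            return "haploid male"
        first, second = csd_alleles
        return "female" if first != second else "sterile diploid male"

    print(hymenopteran_sex(("csd1",)))          # haploid male
    print(hymenopteran_sex(("csd1", "csd7")))   # female
    print(hymenopteran_sex(("csd3", "csd3")))   # sterile diploid male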
The gene acting as a csd locus has been identified in the honeybee and several candidate genes have been proposed as a csd locus for other Hymenopterans.
Most females in the Hymenoptera order can decide the sex of their offspring by holding received sperm in their spermatheca and either releasing it into their oviduct or not. This allows them to create more workers, depending on the status of the colony.
Polygenic sex determination
Polygenic sex determination is when the sex is primarily determined by genes that occur on multiple non-homologous chromosomes. The environment may have a limited, minor influence on sex determination. Examples include African cichlid fish (Metriaclima spp.), lemmings (Myopus schisticolor), green swordtail, medaka, etc. In such systems, there is typically a dominance hierarchy, where one system is dominant over another if in conflict. For example, in some species of cichlid fish from Lake Malawi, if an individual has both the XY locus (on one chromosome pair) and the WZ locus (on another chromosome pair), then the W is dominant and the individual has a female phenotype.
The sex-determination system of zebrafish is polygenic. Juvenile zebrafish (0–30 days after hatching) have gonadal tissue ranging from ovary-like to testis-like. They then develop into male or female adults, with the determination based on a complex interaction of genes on multiple chromosomes, but not affected by environmental variation.
Other chromosomal systems
In systems with two sex chromosomes, they can be heteromorphic or homomorphic. Homomorphic sex chromosomes are almost identical in size and gene content. The two familiar kinds of sex chromosome pairs (XY and ZW) are heteromorphic. Homomorphic sex chromosomes exist among pufferfish, ratite birds, pythons, and European tree frogs. Some are quite old, meaning that there is some evolutionary force that resists their differentiation. For example, three species of European tree frogs have homologous, homomorphic sex chromosomes, and this homomorphism was maintained for at least 5.4 million years by occasional recombination.
The Nematocera, particularly the Simuliids and Chironomus, have sex determination regions that are labile, meaning that one species may have the sex determination region in one chromosome, but a closely related species might have the same region moved to a different non-homologous chromosome. Some species even have the sex determination region different among individuals within the same species (intraspecific variation). In some species, some populations have homomorphic sex chromosomes while other populations have heteromorphic sex chromosomes.
The New Zealand frog, Leiopelma hochstetteri, uses a supernumerary sex chromosome. With zero of that chromosome, the frog develops into a male. With one or more, the frog develops into a female. One female had as many as 16 of that chromosome.
Different populations of the Japanese frog Rana rugosa use different systems. Two use homomorphic male heterogamety, one uses XX/XY, and one uses ZZ/ZW. Remarkably, the X and Z chromosomes are homologous, as are the Y and W. Dmrt1 is on autosome 1 and is not sex-linked. This means that an XX female individual is genetically similar to a ZZ male individual, and an XY male individual to a ZW female individual. The mechanism behind this is as yet unclear, but it is hypothesized that during its recent evolution the XY-to-ZW transition occurred twice.
Clarias gariepinus uses both XX/XY and ZW/ZZ systems within the species, with some populations using homomorphic XX/XY while others use heteromorphic ZW/ZZ. A population in Thailand appears to use both systems simultaneously, possibly because C. gariepinus is not native to Thailand and was introduced from different source populations, resulting in a mixture.
Multiple sex chromosomes like those of the platypus also occur in bony fish, and some moths and butterflies likewise carry multiple sex chromosomes.
The Southern platyfish has a complex sex determination system involving 3 sex chromosomes and 4 autosomal alleles.
Gastrotheca pseustes has C-banding heteromorphism, meaning that both males and females have XY chromosomes, but their Y chromosomes are different on one or more C-bands. Eleutherodactylus maussi has a system.
Evolution
Origin of sex chromosomes
Sexual chromosome pairs can arise from an autosomal pair that, for various reasons, stopped recombination, allowing for their divergence. The rate at which recombination is suppressed, and therefore the rate of sex chromosome divergence, is very different across clades.
In analogy with geological strata, historical events in the evolution of sex chromosomes are called evolutionary strata. The human Y chromosome has had about 5 strata since the origin of the X and Y chromosomes about 300 Mya from a pair of autosomes. Each stratum was formed when a pseudoautosomal region (PAR) of the Y chromosome was inverted, stopping it from recombining with the X chromosome. Over time, each inverted region decays, possibly due to Muller's ratchet. Primate Y-chromosome evolution was rapid, with multiple inversions and shifts of the boundary of the PAR.
Among many species of salamanders, the two sex chromosomes are distinguished only by a pericentric inversion, so that the banding pattern of the X chromosome is the same as that of the Y, but with a region near the centromere reversed. In some species the X is pericentrically inverted and the Y is ancestral; in other species it is the opposite.
The gene content of the X chromosome is almost identical among placental mammals. This is hypothesized to be because the X inactivation means any change would cause serious disruption, thus subjecting it to strong purifying selection. Similarly, birds have highly conserved Z chromosomes.
Neo-sex chromosomes
Neo-sex chromosomes are currently existing sex chromosomes that formed when an autosome pair fused to the previously existing sex chromosome pair. Following this fusion, the autosomal portion undergoes recombination suppression, allowing them to differentiate. Such systems have been observed in insects, reptiles, birds, and mammals. They are useful to the study of the evolution of Y chromosome degeneration and dosage compensation.
Sex-chromosome turnover
Sex-chromosome turnover is an evolutionary phenomenon in which sex chromosomes disappear or become autosomal, and autosomes become sex chromosomes, repeatedly over evolutionary time. Some lineages have extensive turnover, while others do not. Generally, in an XY system, if the Y chromosome is degenerate, mostly different from the X chromosome, and has X dosage compensation, then turnover is unlikely. In particular, this applies to humans.
The ZW and XY systems can evolve into each other due to sexual conflict.
Homomorphism and the fountain of youth
It is an evolutionary puzzle why certain sex chromosomes remain homomorphic over millions of years, especially among lineages of fishes, amphibians, and nonavian reptiles. The fountain-of-youth model states that heteromorphy results from recombination suppression, and recombination suppression results from the male phenotype, not the sex chromosomes themselves. Therefore, if some XY sex-reversed females are fertile and adaptive under some circumstances, then the X and Y chromosomes would recombine in these individuals, preventing Y chromosome decay and maintaining long-term homomorphism.
Sex reversal denotes a situation where the phenotypic sex is different from the genotypic sex. While sex-reversed humans (such as those with XX male syndrome) are often infertile, sex-reversed individuals of some species are fertile under some conditions. For example, some XY individuals in populations of Chinook salmon in the Columbia River became fertile females, producing YY sons. Since Chinook salmon have homomorphic sex chromosomes, such YY sons are healthy. When YY males mate with XX females, all their progeny would be XY males if grown under normal conditions.
Support for the hypothesis is found in the common frog, in which XX males and XY males both suppress sex chromosome recombination, while XX and XY females both recombine at the same rate.
Environmental systems
Temperature-dependent
Many other sex-determination systems exist. In some species of reptiles, including alligators, some turtles, and the tuatara, sex is determined by the temperature at which the egg is incubated during a temperature-sensitive period. There are no examples of temperature-dependent sex determination (TSD) in birds. Megapodes had formerly been thought to exhibit this phenomenon, but were found to actually have different temperature-dependent embryo mortality rates for each sex. For some species with TSD, exposure to hotter temperatures results in offspring of one sex and exposure to cooler temperatures results in the other; this type of TSD is called Pattern I. For other species with TSD, exposure to temperatures at both extremes results in offspring of one sex, and exposure to moderate temperatures results in offspring of the opposite sex; this is called Pattern II TSD. The specific temperatures required to produce each sex are known as the female-promoting temperature and the male-promoting temperature. When the temperature stays near the threshold during the temperature-sensitive period, the sex ratio varies between the two sexes. For some species, the temperature thresholds depend on when a particular enzyme is produced. Species that rely upon temperature for their sex determination do not have the SRY gene, but have other genes such as DAX1, DMRT1, and SOX9 that are expressed or not expressed depending on the temperature. The sex of some species, such as the Nile tilapia, Australian skink lizard, and Australian dragon lizard, has an initial bias set by chromosomes, but can later be changed by the temperature of incubation.
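The two patterns can be summarized schematically as follows (a sketch only; the threshold temperatures and the labels "sex A"/"sex B" are placeholders, since which sex is produced at which temperature varies by species):

    # Hypothetical sketch of the two TSD response patterns described above.
    def tsd_pattern_I(incubation_temp_c, threshold=30.0):
        # Pattern I: one sex above the threshold, the other sex below it.
        return "sex A" if incubation_temp_c >= threshold else "sex B"

    def tsd_pattern_II(incubation_temp_c, low=26.0, high=32.0):
        # Pattern II: one sex at both temperature extremes,
        # the opposite sex at moderate temperatures.
        if incubation_temp_c <= low or incubation_temp_c >= high:
            return "sex A"
        return "sex B"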
It is unknown how exactly temperature-dependent sex determination evolved. It could have evolved through certain sexes being more suited to certain areas that fit the temperature requirements. For example, a warmer area could be more suitable for nesting, so more females are produced to increase the amount that nest next season.
In amniotes, environmental sex determination preceded the genetically determined systems of birds and mammals; it is thought that a temperature-dependent amniote was the common ancestor of amniotes with sex chromosomes.
Other environmental systems
There are other environmental sex determination systems including location-dependent determination systems as seen in the marine worm Bonellia viridis – larvae become males if they make physical contact with a female, and females if they end up on the bare sea floor. This is triggered by the presence of a chemical produced by the females, bonellin. Some species, such as some snails, practice sex change: adults start out male, then become female. In tropical clownfish, the dominant individual in a group becomes female while the other ones are male, and bluehead wrasses (Thalassoma bifasciatum) are the reverse.
Clownfish live in colonies of several small undifferentiated fish and two large fish (male and female). The male and female are the only sexually mature fish to reproduce. Clownfish are protandrous hermaphrodites, which means after they mature into males, they eventually can transform into females. They develop undifferentiated until they are needed to fill a certain role in their environment, i.e., if they receive the social and environmental cues to do so.
Some species, however, have no sex-determination system. Hermaphrodite species include the common earthworm and certain species of snails. A few species of fish, reptiles, and insects reproduce by parthenogenesis and consist entirely of females. There are some reptiles, such as the boa constrictor and Komodo dragon, that can reproduce both sexually and asexually, depending on whether a mate is available.
Others
There are exceptional sex-determination systems, neither genetic nor environmental.
The Wolbachia genus of parasitic bacteria lives inside the cytoplasm of its host, and is vertically transmitted from parents to children. They primarily infect arthropods and nematodes. Different Wolbachia can determine the sex of its host by a variety of means.
In some species, there is paternal genome elimination, where sons lose the entire genome from the father.
Mitochondrial male sterility: In many flowering plants, the mitochondria can cause hermaphrodite individuals to be unable to father offspring, effectively turning them into exclusive females. This is a form of mother's curse. It is an evolutionarily adaptive strategy for mitochondria, as mitochondrial inheritance is exclusively from mother to child. The first published case of mitochondrial male sterility among metazoans was reported in 2022 in the hermaphroditic snail Physa acuta.
In some flies and crustaceans, all offspring of a particular individual female are either exclusively male or exclusively female (monogeny).
Evolution
Sex determination systems may have evolved from mating type, which is a feature of microorganisms.
Chromosomal sex determination may have evolved early in the history of eukaryotes. But in plants it has been suggested to have evolved recently.
The accepted hypothesis of XY and ZW sex chromosome evolution in amniotes is that they evolved at the same time, in two different branches.
No genes are shared between the avian ZW and mammal XY chromosomes and the chicken Z chromosome is similar to the human autosomal chromosome 9, rather than X or Y. This suggests not that the ZW and XY sex-determination systems share an origin but that the sex chromosomes are derived from autosomal chromosomes of the common ancestor of birds and mammals. In the platypus, a monotreme, the X1 chromosome shares homology with therian mammals, while the X5 chromosome contains an avian sex-determination gene, further suggesting an evolutionary link.
However, there is some evidence to suggest that there could have been transitions between ZW and XY, such as in Xiphophorus maculatus, which has both ZW and XY systems in the same population, despite the fact that ZW and XY have different gene locations. A recent theoretical model raises the possibility of both transitions between the XY/XX and ZZ/ZW systems and environmental sex determination. The platypus' genes also back up the possible evolutionary link between XY and ZW, because they have the DMRT1 gene possessed by birds on their X chromosomes. Regardless, XY and ZW follow a similar route. All sex chromosomes started out as an original autosome of an original amniote that relied upon temperature to determine the sex of offspring. After the mammals separated, the reptile branch further split into Lepidosauria and Archosauromorpha. These two groups both evolved the ZW system separately, as evidenced by the existence of different sex chromosomal locations. In mammals, one of the autosome pair, now Y, mutated its SOX3 gene into the SRY gene, causing that chromosome to designate sex. After this mutation, the SRY-containing chromosome inverted and was no longer completely homologous with its partner. The regions of the X and Y chromosomes that are still homologous to one another are known as the pseudoautosomal region. Once it inverted, the Y chromosome became unable to remedy deleterious mutations, and thus degenerated. There is some concern that the Y chromosome will shrink further and stop functioning within ten million years, but the Y chromosome has been strictly conserved after its initial rapid gene loss.
There are some vertebrate species, such as the medaka fish, that evolved sex chromosomes separately; their Y chromosome never inverted and can still swap genes with the X. These species' sex chromosomes are relatively primitive and unspecialized. Because the Y does not have male-specific genes and can interact with the X, XY and YY females can be formed as well as XX males. Non-inverted Y chromosomes with long histories are found in pythons and emus, each system being more than 120 million years old, suggesting that inversions are not necessarily an eventuality. XO sex determination can evolve from XY sex determination within about 2 million years.
| Biology and health sciences | Genetics | Biology |
49417 | https://en.wikipedia.org/wiki/Extinction | Extinction | Extinction is the termination of a taxon by the death of its last member. A taxon may become functionally extinct before the death of its last member if it loses the capacity to reproduce and recover. Because a species' potential range may be very large, determining this moment is difficult, and is usually done retrospectively. This difficulty leads to phenomena such as Lazarus taxa, where a species presumed extinct abruptly "reappears" (typically in the fossil record) after a period of apparent absence.
More than 99% of all species that ever lived on Earth, amounting to over five billion species, are estimated to have died out. It is estimated that there are currently around 8.7 million species of eukaryotes globally, and possibly many times more if microorganisms, such as bacteria, are included. Notable extinct animal species include non-avian dinosaurs, palaeotheres, saber-toothed cats, dodos, mammoths, ground sloths, thylacines, trilobites, golden toads, and passenger pigeons.
Through evolution, species arise through the process of speciation—where new varieties of organisms arise and thrive when they are able to find and exploit an ecological niche—and species become extinct when they are no longer able to survive in changing conditions or against superior competition. The relationship between animals and their ecological niches has been firmly established. A typical species becomes extinct within 10 million years of its first appearance, although some species, called living fossils, survive with little to no morphological change for hundreds of millions of years.
Mass extinctions are relatively rare events; however, isolated extinctions of species and clades are quite common, and are a natural part of the evolutionary process. Only recently have extinctions been recorded and scientists have become alarmed at the current high rate of extinctions. Most species that become extinct are never scientifically documented. Some scientists estimate that up to half of presently existing plant and animal species may become extinct by 2100. A 2018 report indicated that the phylogenetic diversity of 300 mammalian species erased during the human era since the Late Pleistocene would require 5 to 7 million years to recover.
According to the 2019 Global Assessment Report on Biodiversity and Ecosystem Services by IPBES, the biomass of wild mammals has fallen by 82%, natural ecosystems have lost about half their area and a million species are at risk of extinction—all largely as a result of human actions. Twenty-five percent of plant and animal species are threatened with extinction. In a subsequent report, IPBES listed unsustainable fishing, hunting and logging as being some of the primary drivers of the global extinction crisis.
In June 2019, one million species of plants and animals were at risk of extinction. At least 571 plant species have been lost since 1750, but likely many more. The main cause of the extinctions is the destruction of natural habitats by human activities, such as cutting down forests and converting land into fields for farming.
A dagger symbol (†) placed next to the name of a species or other taxon normally indicates its status as extinct.
Examples
Examples of species and subspecies that are extinct include:
Steller's sea cow (the last known member died circa 1768)
Dodo (the last confirmed sighting was in 1662)
Chinese paddlefish (last seen in 2003; declared extinct in 2022)
Great auk (last confirmed pair was killed in the 1840s)
Thylacine (the last thylacine killed in the wild was shot in 1930; the last captive tiger lived in Hobart Zoo until 1936)
Kauai O'o (last known member was heard in 1987; the entire Mohoidae family became extinct with it)
Spectacled cormorant (last known members were said to live in the 1850s)
Carolina parakeet (last known member named Incas died in captivity in 1918; declared extinct in 1939)
Passenger pigeon (last known member named Martha died in captivity in 1914)
Tasmanian emu (the last claimed sighting of the emu was in 1839)
Japanese sea lion (the last confirmed record was a juvenile specimen captured in 1974)
Schomburgk's deer (became extinct in the wild in 1932; the last captive deer was killed in 1938)
Quagga (hunted to extinction in the late 19th century; the last captive quagga died in Natura Artis Magistra in 1883)
Definition
A species is extinct when the last existing member dies. Extinction therefore becomes a certainty when there are no surviving individuals that can reproduce and create a new generation. A species may become functionally extinct when only a handful of individuals survive, which cannot reproduce due to poor health, age, sparse distribution over a large range, a lack of individuals of both sexes (in sexually reproducing species), or other reasons.
Pinpointing the extinction (or pseudoextinction) of a species requires a clear definition of that species. If it is to be declared extinct, the species in question must be uniquely distinguishable from any ancestor or daughter species, and from any other closely related species. Extinction of a species (or replacement by a daughter species) plays a key role in the punctuated equilibrium hypothesis of Stephen Jay Gould and Niles Eldredge.
In ecology, extinction is sometimes used informally to refer to local extinction, in which a species ceases to exist in the chosen area of study, despite still existing elsewhere. Local extinctions may be made good by the reintroduction of individuals of that species taken from other locations; wolf reintroduction is an example of this. Species that are not globally extinct are termed extant. Those species that are extant, yet are threatened with extinction, are referred to as threatened or endangered species.
Currently, an important aspect of extinction is human attempts to preserve critically endangered species. These are reflected by the creation of the conservation status "extinct in the wild" (EW). Species listed under this status by the International Union for Conservation of Nature (IUCN) are not known to have any living specimens in the wild and are maintained only in zoos or other artificial environments. Some of these species are functionally extinct, as they are no longer part of their natural habitat and it is unlikely the species will ever be restored to the wild. When possible, modern zoological institutions try to maintain a viable population for species preservation and possible future reintroduction to the wild, through use of carefully planned breeding programs.
The extinction of one species' wild population can have knock-on effects, causing further extinctions. These are also called "chains of extinction". This is especially common with extinction of keystone species.
A 2018 study indicated that, after the sixth mass extinction that started in the Late Pleistocene, it could take up to 5 to 7 million years for mammal diversity to be restored to what it was before the human era.
Pseudoextinction
Extinction of a parent species where daughter species or subspecies are still extant is called pseudoextinction or phyletic extinction. Effectively, the old taxon vanishes, transformed (anagenesis) into a successor, or split into more than one (cladogenesis).
Pseudoextinction is difficult to demonstrate unless one has a strong chain of evidence linking a living species to members of a pre-existing species. For example, it is sometimes claimed that the extinct Hyracotherium, which was an early horse that shares a common ancestor with the modern horse, is pseudoextinct, rather than extinct, because there are several extant species of Equus, including zebra and donkey; however, as fossil species typically leave no genetic material behind, one cannot say whether Hyracotherium evolved into more modern horse species or merely evolved from a common ancestor with modern horses. Pseudoextinction is much easier to demonstrate for larger taxonomic groups.
Lazarus taxa
A Lazarus taxon or Lazarus species refers to instances where a species or taxon was thought to be extinct, but was later rediscovered. It can also refer to instances where large gaps in the fossil record of a taxon result in fossils reappearing much later, although the taxon may have ultimately become extinct at a later point.
The coelacanth, a fish related to lungfish and tetrapods, is an example of a Lazarus taxon that was known only from the fossil record and was considered to have been extinct since the end of the Cretaceous Period. In 1938, however, a living specimen was found off the Chalumna River (now Tyolomnqa) on the east coast of South Africa. Calliostoma bullatum, a species of deepwater sea snail originally described from fossils in 1844 proved to be a Lazarus species when extant individuals were described in 2019.
Attenborough's long-beaked echidna (Zaglossus attenboroughi) is an example of a Lazarus species from Papua New Guinea that had last been sighted in 1962 and believed to be possibly extinct, until it was recorded again in November 2023.
Some species currently thought to be extinct have had continued speculation that they may still exist, and in the event of rediscovery would be considered Lazarus species. Examples include the thylacine, or Tasmanian tiger (Thylacinus cynocephalus), the last known example of which died in Hobart Zoo in Tasmania in 1936; the Japanese wolf (Canis lupus hodophilax), last sighted over 100 years ago; the American ivory-billed woodpecker (Campephilus principalis), with the last universally accepted sighting in 1944; and the slender-billed curlew (Numenius tenuirostris), not seen since 2007.
Causes
As long as species have been evolving, species have been going extinct. It is estimated that over 99.9% of all species that ever lived are extinct. The average lifespan of a species is 1–10 million years, although this varies widely between taxa.
A variety of causes can contribute directly or indirectly to the extinction of a species or group of species. "Just as each species is unique", write Beverly and Stephen C. Stearns, "so is each extinction ... the causes for each are varied—some subtle and complex, others obvious and simple". Most simply, any species that cannot survive and reproduce in its environment and cannot move to a new environment where it can do so, dies out and becomes extinct. Extinction of a species may come suddenly when an otherwise healthy species is wiped out completely, as when toxic pollution renders its entire habitat unliveable; or may occur gradually over thousands or millions of years, such as when a species gradually loses out in competition for food to better adapted competitors. Extinction may occur a long time after the events that set it in motion, a phenomenon known as extinction debt.
Assessing the relative importance of genetic factors compared to environmental ones as the causes of extinction has been compared to the debate on nature and nurture. The question of whether more extinctions in the fossil record have been caused by evolution or by competition or by predation or by disease or by catastrophe is a subject of discussion; Mark Newman, the author of Modeling Extinction, argues for a mathematical model that falls between these positions. By contrast, conservation biology uses the extinction vortex model to classify extinctions by cause. When concerns about human extinction have been raised, for example in Sir Martin Rees' 2003 book Our Final Hour, those concerns lie with the effects of climate change or technological disaster.
Human-driven extinction started as humans migrated out of Africa more than 60,000 years ago. Currently, environmental groups and some governments are concerned with the extinction of species caused by humanity, and they try to prevent further extinctions through a variety of conservation programs. Humans can cause extinction of a species through overharvesting, pollution, habitat destruction, introduction of invasive species (such as new predators and food competitors), overhunting, and other influences. Explosive, unsustainable human population growth and increasing per capita consumption are essential drivers of the extinction crisis. According to the International Union for Conservation of Nature (IUCN), 784 extinctions have been recorded since the year 1500, the arbitrary date selected to define "recent" extinctions, up to the year 2004; with many more likely to have gone unnoticed. Several species have also been listed as extinct since 2004.
Genetics and demographic phenomena
If adaptation increasing population fitness is slower than environmental degradation plus the accumulation of slightly deleterious mutations, then a population will go extinct. Smaller populations have fewer beneficial mutations entering the population each generation, slowing adaptation. It is also easier for slightly deleterious mutations to fix in small populations; the resulting positive feedback loop between small population size and low fitness can cause mutational meltdown.
Limited geographic range is the most important determinant of genus extinction at background rates but becomes increasingly irrelevant as mass extinction arises. Limited geographic range is a cause both of small population size and of greater vulnerability to local environmental catastrophes.
Extinction rates can be affected not just by population size, but by any factor that affects evolvability, including balancing selection, cryptic genetic variation, phenotypic plasticity, and robustness. A diverse or deep gene pool gives a population a higher chance in the short term of surviving an adverse change in conditions. Effects that cause or reward a loss in genetic diversity can increase the chances of extinction of a species. Population bottlenecks can dramatically reduce genetic diversity by severely limiting the number of reproducing individuals and make inbreeding more frequent.
Genetic pollution
Extinction sometimes results for species evolved to specific ecologies that are subjected to genetic pollution—i.e., uncontrolled hybridization, introgression and genetic swamping that lead to homogenization or out-competition from the introduced (or hybrid) species. Endemic populations can face such extinctions when new populations are imported or selectively bred by people, or when habitat modification brings previously isolated species into contact. Extinction is likeliest for rare species coming into contact with more abundant ones; interbreeding can swamp the rarer gene pool and create hybrids, depleting the purebred gene pool (for example, the endangered wild water buffalo is most threatened with extinction by genetic pollution from the abundant domestic water buffalo). Such extinctions are not always apparent from morphological (non-genetic) observations. Some degree of gene flow is a normal evolutionary process; nevertheless, hybridization (with or without introgression) threatens rare species' existence.
The gene pool of a species or a population is the variety of genetic information in its living members. A large gene pool (extensive genetic diversity) is associated with robust populations that can survive bouts of intense selection. Meanwhile, low genetic diversity (see inbreeding and population bottlenecks) reduces the range of adaptations possible. Replacing native with alien genes narrows genetic diversity within the original population, thereby increasing the chance of extinction.
Habitat degradation
Habitat degradation is currently the main anthropogenic cause of species extinctions. The main cause of habitat degradation worldwide is agriculture, with urban sprawl, logging, mining, and some fishing practices close behind. The degradation of a species' habitat may alter the fitness landscape to such an extent that the species is no longer able to survive and becomes extinct. This may occur by direct effects, such as the environment becoming toxic, or indirectly, by limiting a species' ability to compete effectively for diminished resources or against new competitor species.
Habitat destruction, particularly the removal of vegetation that stabilizes soil, enhances erosion and diminishes nutrient availability in terrestrial ecosystems. This degradation can lead to a reduction in agricultural productivity. Furthermore, increased erosion contributes to poorer water quality by elevating the levels of sediment and pollutants in rivers and streams.
Habitat degradation through toxicity can kill off a species very rapidly, by killing all living members through contamination or sterilizing them. It can also occur over longer periods at lower toxicity levels by affecting life span, reproductive capacity, or competitiveness.
Habitat degradation can also take the form of a physical destruction of niche habitats. The widespread destruction of tropical rainforests and replacement with open pastureland is widely cited as an example of this; elimination of the dense forest eliminated the infrastructure needed by many species to survive. For example, a fern that depends on dense shade for protection from direct sunlight can no longer survive without forest to shelter it. Another example is the destruction of ocean floors by bottom trawling.
Diminished resources or introduction of new competitor species also often accompany habitat degradation. Global warming has allowed some species to expand their range, bringing competition to other species that previously occupied that area. Sometimes these new competitors are predators and directly affect prey species, while at other times they may merely outcompete vulnerable species for limited resources. Vital resources including water and food can also be limited during habitat degradation, leading to extinction.
Predation, competition, and disease
In the natural course of events, species become extinct for a number of reasons, including but not limited to: extinction of a necessary host, prey or pollinator, interspecific competition, inability to deal with evolving diseases and changing environmental conditions (particularly sudden changes) which can act to introduce novel predators, or to remove prey. Recently in geological time, humans have become an additional cause of extinction of some species, either as a new mega-predator or by transporting animals and plants from one part of the world to another. Such introductions have been occurring for thousands of years, sometimes intentionally (e.g. livestock released by sailors on islands as a future source of food) and sometimes accidentally (e.g. rats escaping from boats). In most cases, the introductions are unsuccessful, but when an invasive alien species does become established, the consequences can be catastrophic. Invasive alien species can affect native species directly by eating them, competing with them, and introducing pathogens or parasites that sicken or kill them; or indirectly by destroying or degrading their habitat. Human populations may themselves act as invasive predators. According to the "overkill hypothesis", the swift extinction of the megafauna in areas such as Australia (40,000 years before present), North and South America (12,000 years before present), Madagascar, Hawaii (AD 300–1000), and New Zealand (AD 1300–1500), resulted from the sudden introduction of human beings to environments full of animals that had never seen them before and were therefore completely unadapted to their predation techniques.
Coextinction
Coextinction refers to the loss of a species due to the extinction of another; for example, the extinction of parasitic insects following the loss of their hosts. Coextinction can also occur when a species loses its pollinator, or to predators in a food chain who lose their prey. "Species coextinction is a manifestation of the interconnectedness of organisms in complex ecosystems ... While coextinction may not be the most important cause of species extinctions, it is certainly an insidious one." Coextinction is especially common when a keystone species goes extinct. Models suggest that coextinction is the most common form of biodiversity loss. There may be a cascade of coextinction across the trophic levels. Such effects are most severe in mutualistic and parasitic relationships. An example of coextinction is the Haast's eagle and the moa: the Haast's eagle was a predator that became extinct because its food source became extinct. The moa were several species of flightless birds that were a food source for the Haast's eagle.
Climate change
Extinction as a result of climate change has been confirmed by fossil studies, particularly the extinction of amphibians during the Carboniferous Rainforest Collapse, 305 million years ago. A 2003 review across 14 biodiversity research centers predicted that, because of climate change, 15–37% of land species would be "committed to extinction" by 2050. The ecologically rich areas that would potentially suffer the heaviest losses include the Cape Floristic Region and the Caribbean Basin. These areas might see a doubling of present carbon dioxide levels and rising temperatures that could eliminate 56,000 plant and 3,700 animal species. Climate change has also been found to be a factor in habitat loss and desertification.
Sexual selection and male investment
Studies of fossils that follow species from the time they evolved to their extinction show that species with high sexual dimorphism, especially with male characteristics used to compete for mating, are at a higher risk of extinction and die out faster than less sexually dimorphic species: the least sexually dimorphic species survive for millions of years, while the most sexually dimorphic species die out within mere thousands of years. Earlier studies based on counting the number of currently living species in modern taxa found a higher number of species in more sexually dimorphic taxa, which has been interpreted as higher survival in taxa with more sexual selection; however, such studies of modern species only measure indirect effects of extinction and are subject to error sources, such as dying and doomed taxa speciating more as their habitat ranges split into many small isolated groups during the habitat retreat of taxa approaching extinction. Possible causes of the higher extinction risk in species with more sexual selection, shown by the comprehensive fossil studies that rule out such error sources, include expensive sexually selected ornaments reducing the ability to survive natural selection, as well as sexual selection removing a diversity of genes that are neutral for natural selection under current ecological conditions but some of which may be important for surviving climate change.
Mass extinctions
There have been at least five mass extinctions in the history of life on earth, and four in the last 350 million years in which many species have disappeared in a relatively short period of geological time. A massive eruptive event that released large quantities of tephra particles into the atmosphere is considered to be one likely cause of the "Permian–Triassic extinction event" about 250 million years ago, which is estimated to have killed 90% of species then existing. There is also evidence to suggest that this event was preceded by another mass extinction, known as Olson's Extinction. The Cretaceous–Paleogene extinction event (K–Pg) occurred 66 million years ago, at the end of the Cretaceous period; it is best known for having wiped out non-avian dinosaurs, among many other species.
Modern extinctions
According to a 1998 survey of 400 biologists conducted by New York's American Museum of Natural History, nearly 70% believed that the Earth is currently in the early stages of a human-caused mass extinction, known as the Holocene extinction. In that survey, the same proportion of respondents agreed with the prediction that up to 20% of all living populations could become extinct within 30 years (by 2028). A 2014 special edition of Science declared there is widespread consensus on the issue of human-driven mass species extinctions. A 2020 study published in PNAS stated that the contemporary extinction crisis "may be the most serious environmental threat to the persistence of civilization, because it is irreversible."
Biologist E. O. Wilson estimated in 2002 that if current rates of human destruction of the biosphere continue, one-half of all plant and animal species of life on earth will be extinct in 100 years. More significantly, the current rate of global species extinctions is estimated as 100 to 1,000 times "background" rates (the average extinction rates in the evolutionary time scale of planet Earth), faster than at any other time in human history, while future rates are likely 10,000 times higher. However, some groups are going extinct much faster. Biologists Paul R. Ehrlich and Stuart Pimm, among others, contend that human population growth and overconsumption are the main drivers of the modern extinction crisis.
In January 2020, the UN's Convention on Biological Diversity drafted a plan to mitigate the contemporary extinction crisis by establishing a deadline of 2030 to protect 30% of the Earth's land and oceans and reduce pollution by 50%, with the goal of allowing for the restoration of ecosystems by 2050. The 2020 United Nations' Global Biodiversity Outlook report stated that of the 20 biodiversity goals laid out by the Aichi Biodiversity Targets in 2010, only 6 were "partially achieved" by the deadline of 2020. The report warned that biodiversity will continue to decline if the status quo is not changed, in particular the "currently unsustainable patterns of production and consumption, population growth and technological developments". In a 2021 report published in the journal Frontiers in Conservation Science, some top scientists asserted that even if the Aichi Biodiversity Targets set for 2020 had been achieved, it would not have resulted in a significant mitigation of biodiversity loss. They added that failure of the global community to reach these targets is hardly surprising given that biodiversity loss is "nowhere close to the top of any country's priorities, trailing far behind other concerns such as employment, healthcare, economic growth, or currency stability."
History of scientific understanding
For much of history, the modern understanding of extinction as the end of a species was incompatible with the prevailing worldview. Prior to the 19th century, much of Western society adhered to the belief that the world was created by God and as such was complete and perfect. This concept reached its heyday in the 1700s with the peak popularity of a theological concept called the great chain of being, in which all life on earth, from the tiniest microorganism to God, is linked in a continuous chain. The extinction of a species was impossible under this model, as it would create gaps or missing links in the chain and destroy the natural order. Thomas Jefferson was a firm supporter of the great chain of being and an opponent of extinction, famously denying the extinction of the woolly mammoth on the grounds that nature never allows a race of animals to become extinct.
A series of fossils were discovered in the late 17th century that appeared unlike any living species. As a result, the scientific community embarked on a voyage of creative rationalization, seeking to understand what had happened to these species within a framework that did not account for total extinction. In October 1686, Robert Hooke presented an impression of a nautilus to the Royal Society that was more than two feet in diameter, and morphologically distinct from any known living species. Hooke theorized that this was simply because the species lived in the deep ocean and no one had discovered them yet. While he contended that it was possible a species could be "lost", he thought this highly unlikely. Similarly, in 1695, Sir Thomas Molyneux published an account of enormous antlers found in Ireland that did not belong to any extant taxa in that area. Molyneux reasoned that they came from the North American moose and that the animal had once been common on the British Isles. Rather than suggest that this indicated the possibility of species going extinct, he argued that although organisms could become locally extinct, they could never be entirely lost and would continue to exist in some unknown region of the globe. The antlers were later confirmed to be from the extinct deer Megaloceros. Hooke and Molyneux's line of thinking was difficult to disprove. When parts of the world had not been thoroughly examined and charted, scientists could not rule out that animals found only in the fossil record were not simply "hiding" in unexplored regions of the Earth.
Georges Cuvier is credited with establishing the modern conception of extinction in a 1796 lecture to the French Institute, though he would spend most of his career trying to convince the wider scientific community of his theory. Cuvier was a well-regarded geologist, lauded for his ability to reconstruct the anatomy of an unknown species from a few fragments of bone. His primary evidence for extinction came from mammoth skulls found near Paris. Cuvier recognized them as distinct from any known living species of elephant, and argued that it was highly unlikely such an enormous animal would go undiscovered. In 1798, he studied a fossil from the Paris Basin that was first observed by Robert de Lamanon in 1782, first hypothesizing that it belonged to a canine but then deciding that it instead belonged to an animal that was unlike living ones. His study paved the way to his naming of the extinct mammal genus Palaeotherium in 1804 based on the skull and additional fossil material along with another extinct contemporary mammal genus Anoplotherium. In both genera, he noticed that their fossils shared some similarities with other mammals like ruminants and rhinoceroses but still had distinct differences. In 1812, Cuvier, along with Alexandre Brongniart and Geoffroy Saint-Hilaire, mapped the strata of the Paris basin. They saw alternating saltwater and freshwater deposits, as well as patterns of the appearance and disappearance of fossils throughout the record. From these patterns, Cuvier inferred historic cycles of catastrophic flooding, extinction, and repopulation of the earth with new species.
Cuvier's fossil evidence showed that very different life forms existed in the past than those that exist today, a fact that was accepted by most scientists. The primary debate focused on whether this turnover caused by extinction was gradual or abrupt in nature. Cuvier understood extinction to be the result of cataclysmic events that wipe out huge numbers of species, as opposed to the gradual decline of a species over time. His catastrophic view of the nature of extinction garnered him many opponents in the newly emerging school of uniformitarianism.
Jean-Baptiste Lamarck, a gradualist and colleague of Cuvier, saw the fossils of different life forms as evidence of the mutable character of species. While Lamarck did not deny the possibility of extinction, he believed that it was exceptional and rare and that most of the change in species over time was due to gradual change. Unlike Cuvier, Lamarck was skeptical that catastrophic events of a scale large enough to cause total extinction were possible. In his geological history of the earth titled Hydrogeologie, Lamarck instead argued that the surface of the earth was shaped by gradual erosion and deposition by water, and that species changed over time in response to the changing environment.
Charles Lyell, a noted geologist and founder of uniformitarianism, believed that past processes should be understood using present day processes. Like Lamarck, Lyell acknowledged that extinction could occur, noting the total extinction of the dodo and the extirpation of indigenous horses to the British Isles. He similarly argued against mass extinctions, believing that any extinction must be a gradual process. Lyell also showed that Cuvier's original interpretation of the Parisian strata was incorrect. Instead of the catastrophic floods inferred by Cuvier, Lyell demonstrated that patterns of saltwater and freshwater deposits, like those seen in the Paris basin, could be formed by a slow rise and fall of sea levels.
The concept of extinction was integral to Charles Darwin's On the Origin of Species, with less fit lineages disappearing over time. For Darwin, extinction was a constant side effect of competition. Because of the wide reach of On the Origin of Species, it was widely accepted that extinction occurred gradually and evenly (a concept now referred to as background extinction). It was not until 1982, when David Raup and Jack Sepkoski published their seminal paper on mass extinctions, that Cuvier was vindicated and catastrophic extinction was accepted as an important mechanism. The current understanding of extinction is a synthesis of the cataclysmic extinction events proposed by Cuvier, and the background extinction events proposed by Lyell and Darwin.
Human attitudes and interests
Extinction is an important research topic in the field of zoology, and biology in general, and has also become an area of concern outside the scientific community. A number of organizations, such as the Worldwide Fund for Nature, have been created with the goal of preserving species from extinction. Governments have attempted, through enacting laws, to avoid habitat destruction, agricultural over-harvesting, and pollution. While many human-caused extinctions have been accidental, humans have also engaged in the deliberate destruction of some species, such as dangerous viruses, and the total destruction of other problematic species has been suggested. Other species were deliberately driven to extinction, or nearly so, due to poaching or because they were "undesirable", or to push for other human agendas. One example was the near extinction of the American bison, which was nearly wiped out by mass hunts sanctioned by the United States government, to force the removal of Native Americans, many of whom relied on the bison for food.
Biologist Bruce Walsh states three reasons for scientific interest in the preservation of species: genetic resources, ecosystem stability, and ethics; and today the scientific community "stress[es] the importance" of maintaining biodiversity.
In modern times, commercial and industrial interests often have to contend with the effects of production on plant and animal life. However, some technologies with minimal, or no, proven harmful effects on Homo sapiens can be devastating to wildlife (for example, DDT). Biogeographer Jared Diamond notes that while big business may label environmental concerns as "exaggerated", and often cause "devastating damage", some corporations find it in their interest to adopt good conservation practices, and even engage in preservation efforts that surpass those taken by national parks.
Governments sometimes see the loss of native species as a loss to ecotourism, and can enact laws with severe punishment against the trade in native species in an effort to prevent extinction in the wild. Nature preserves are created by governments as a means to provide continuing habitats to species crowded by human expansion. The 1992 Convention on Biological Diversity has resulted in international Biodiversity Action Plan programmes, which attempt to provide comprehensive guidelines for government biodiversity conservation. Advocacy groups, such as The Wildlands Project and the Alliance for Zero Extinctions, work to educate the public and pressure governments into action.
People who live close to nature can be dependent on the survival of all the species in their environment, leaving them highly exposed to extinction risks. However, people prioritize day-to-day survival over species conservation; with human overpopulation in tropical developing countries, there has been enormous pressure on forests due to subsistence agriculture, including slash-and-burn agricultural techniques that can reduce endangered species' habitats.
Antinatalist philosopher David Benatar concludes that any popular concern about non-human species extinction usually arises out of concern about how the loss of a species will impact human wants and needs, that "we shall live in a world impoverished by the loss of one aspect of faunal diversity, that we shall no longer be able to behold or use that species of animal." He notes that typical concerns about possible human extinction, such as the loss of individual members, are not considered in regards to non-human species extinction. Anthropologist Jason Hickel speculates that the reason humanity seems largely indifferent to anthropogenic mass species extinction is that we see ourselves as separate from the natural world and the organisms within it. He says that this is due in part to the logic of capitalism: "that the world is not really alive, and it is certainly not our kin, but rather just stuff to be extracted and discarded – and that includes most of the human beings living here too."
Planned extinction
Completed
The smallpox virus is now extinct in the wild, although samples are retained in laboratory settings.
The rinderpest virus, which infected domestic cattle, is now extinct in the wild.
Proposed
Disease agents
The poliovirus is now confined to small parts of the world due to extermination efforts.
Dracunculus medinensis, or Guinea worm, a parasitic worm which causes the disease dracunculiasis, is now close to eradication thanks to efforts led by the Carter Center.
Treponema pallidum pertenue, a bacterium which causes the disease yaws, is in the process of being eradicated.
Disease vectors
Biologist Olivia Judson has advocated the deliberate extinction of certain disease-carrying mosquito species. In a September 25, 2003 article in The New York Times, she advocated "specicide" of thirty mosquito species by introducing a genetic element that can insert itself into another crucial gene, to create recessive "knockout genes". She says that the Anopheles mosquitoes (which spread malaria) and Aedes mosquitoes (which spread dengue fever, yellow fever, elephantiasis, and other diseases) represent only 30 of around 3,500 mosquito species; eradicating these would save at least one million human lives per year, at a cost of reducing the genetic diversity of the family Culicidae by only 1%. She further argues that since species become extinct "all the time" the disappearance of a few more will not destroy the ecosystem: "We're not left with a wasteland every time a species vanishes. Removing one species sometimes causes shifts in the populations of other species—but different need not mean worse." In addition, anti-malarial and mosquito control programs offer little realistic hope to the 300 million people in developing nations who will be infected with acute illnesses this year. Although trials are ongoing, she writes that if they fail "we should consider the ultimate swatting."
Biologist E. O. Wilson has advocated the eradication of several species of mosquito, including malaria vector Anopheles gambiae. Wilson stated, "I'm talking about a very small number of species that have co-evolved with us and are preying on humans, so it would certainly be acceptable to remove them. I believe it's just common sense."
There have been many campaigns – some successful – to locally eradicate tsetse flies and their trypanosomes in areas, countries, and islands of Africa (including Príncipe). There are currently serious efforts to do away with them all across Africa, and this is generally, though not universally, viewed as beneficial and morally necessary.
Cloning
Some, such as Harvard geneticist George M. Church, believe that ongoing technological advances will let us "bring back to life" an extinct species by cloning, using DNA from the remains of that species. Proposed targets for cloning include the mammoth, the thylacine, and the Pyrenean ibex. For this to succeed, enough individuals would have to be cloned, from the DNA of different individuals (in the case of sexually reproducing organisms) to create a viable population. Though bioethical and philosophical objections have been raised, the cloning of extinct creatures seems theoretically possible.
In 2003, scientists tried to clone the extinct Pyrenean ibex (C. p. pyrenaica). This attempt failed: of the 285 embryos reconstructed, 54 were transferred to 12 Spanish ibexes and ibex–domestic goat hybrids, but only two survived the initial two months of gestation before they, too, died. In 2009, a second attempt was made to clone the Pyrenean ibex: one clone was born alive, but died seven minutes later, due to physical defects in the lungs.
| Biology and health sciences | Biology | null |
49420 | https://en.wikipedia.org/wiki/CMOS | CMOS | Complementary metal–oxide–semiconductor (CMOS, pronounced "sea-moss") is a type of metal–oxide–semiconductor field-effect transistor (MOSFET) fabrication process that uses complementary and symmetrical pairs of p-type and n-type MOSFETs for logic functions. CMOS technology is used for constructing integrated circuit (IC) chips, including microprocessors, microcontrollers, memory chips (including CMOS BIOS), and other digital logic circuits. CMOS technology is also used for analog circuits such as image sensors (CMOS sensors), data converters, RF circuits (RF CMOS), and highly integrated transceivers for many types of communication.
In 1948, Bardeen and Brattain patented an insulated-gate transistor (IGFET) with an inversion layer. Bardeen's concept forms the basis of CMOS technology today. The CMOS process was presented by Fairchild Semiconductor's Frank Wanlass and Chih-Tang Sah at the International Solid-State Circuits Conference in 1963. Wanlass later filed US patent 3,356,858 for CMOS circuitry and it was granted in 1967. RCA commercialized the technology with the trademark "COS-MOS" in the late 1960s, forcing other manufacturers to find another name, leading to "CMOS" becoming the standard name for the technology by the early 1970s. CMOS overtook NMOS logic as the dominant MOSFET fabrication process for very large-scale integration (VLSI) chips in the 1980s, also replacing earlier transistor–transistor logic (TTL) technology. CMOS has since remained the standard fabrication process for MOSFET semiconductor devices in VLSI chips. In recent years, 99% of IC chips, including most digital, analog and mixed-signal ICs, have been fabricated using CMOS technology.
Two important characteristics of CMOS devices are high noise immunity and low static power consumption.
Since one transistor of the MOSFET pair is always off, the series combination draws significant power only momentarily during switching between on and off states. Consequently, CMOS devices do not produce as much waste heat as other forms of logic, like NMOS logic or transistor–transistor logic (TTL), which normally have some standing current even when not changing state. These characteristics allow CMOS to integrate a high density of logic functions on a chip. It was primarily for this reason that CMOS became the most widely used technology to be implemented in VLSI chips.
The phrase "metal–oxide–semiconductor" is a reference to the physical structure of MOS field-effect transistors, having a metal gate electrode placed on top of an oxide insulator, which in turn is on top of a semiconductor material. Aluminium was once used but now the material is polysilicon. Other metal gates have made a comeback with the advent of high-κ dielectric materials in the CMOS process, as announced by IBM and Intel for the 45 nanometer node and smaller sizes.
History
The principle of complementary symmetry was first introduced by George Sziklai in 1953, who then discussed several complementary bipolar circuits. Paul Weimer, also at RCA, invented in 1962 thin-film transistor (TFT) complementary circuits, a close relative of CMOS. He invented complementary flip-flop and inverter circuits, but did no work on more complex complementary logic. He was the first person able to put p-channel and n-channel TFTs in a circuit on the same substrate. Three years earlier, John T. Wallmark and Sanford M. Marcus published a variety of complex logic functions implemented as integrated circuits using JFETs, including complementary memory circuits. Frank Wanlass was familiar with work done by Weimer at RCA.
In 1955, Carl Frosch and Lincoln Derick accidentally grew a layer of silicon dioxide over the silicon wafer, for which they observed surface passivation effects. By 1957, Frosch and Derick, using masking and predeposition, were able to manufacture silicon dioxide transistors and showed that silicon dioxide insulated and protected silicon wafers and prevented dopants from diffusing into the wafer. J.R. Ligenza and W.G. Spitzer studied the mechanism of thermally grown oxides and fabricated a high quality Si/SiO2 stack in 1960.
Following this research, Mohamed Atalla and Dawon Kahng proposed a silicon MOS transistor in 1959 and successfully demonstrated a working MOS device with their Bell Labs team in 1960. Their team included E. E. LaBate and E. I. Povilonis, who fabricated the device; M. O. Thurston, L. A. D'Asaro, and J. R. Ligenza, who developed the diffusion processes; and H. K. Gummel and R. Lindner, who characterized the device. There were originally two types of MOSFET logic, PMOS (p-type MOS) and NMOS (n-type MOS). Both types were developed by Frosch and Derick in 1957 at Bell Labs.
In 1948, Bardeen and Brattain patented the progenitor of MOSFET, an insulated-gate FET (IGFET) with an inversion layer. Bardeen's patent, and the concept of an inversion layer, forms the basis of CMOS technology today. A new type of MOSFET logic combining both the PMOS and NMOS processes was developed, called complementary MOS (CMOS), by Chih-Tang Sah and Frank Wanlass at Fairchild. In February 1963, they published the invention in a research paper. In both the research paper and the patent filed by Wanlass, the fabrication of CMOS devices was outlined, on the basis of thermal oxidation of a silicon substrate to yield a layer of silicon dioxide located between the drain contact and the source contact.
CMOS was commercialised by RCA in the late 1960s. RCA adopted CMOS for the design of integrated circuits (ICs), developing CMOS circuits for an Air Force computer in 1965 and then a 288-bit CMOS SRAM memory chip in 1968. RCA also used CMOS for its 4000-series integrated circuits in 1968, starting with a 20 μm semiconductor manufacturing process before gradually scaling to a 10 μm process over the next several years.
CMOS technology was initially overlooked by the American semiconductor industry in favour of NMOS, which was more powerful at the time. However, CMOS was quickly adopted and further advanced by Japanese semiconductor manufacturers due to its low power consumption, leading to the rise of the Japanese semiconductor industry. Toshiba developed C2MOS (Clocked CMOS), a circuit technology with lower power consumption and faster operating speed than ordinary CMOS, in 1969. Toshiba used its C2MOS technology to develop a large-scale integration (LSI) chip for Sharp's Elsi Mini LED pocket calculator, developed in 1971 and released in 1972. Suwa Seikosha (now Seiko Epson) began developing a CMOS IC chip for a Seiko quartz watch in 1969, and began mass-production with the launch of the Seiko Analog Quartz 38SQW watch in 1971. The first mass-produced CMOS consumer electronic product was the Hamilton Pulsar "Wrist Computer" digital watch, released in 1970. Due to low power consumption, CMOS logic has been widely used for calculators and watches since the 1970s.
The earliest microprocessors in the early 1970s were PMOS processors, which initially dominated the early microprocessor industry. By the late 1970s, NMOS microprocessors had overtaken PMOS processors. CMOS microprocessors were introduced in 1975 with the Intersil 6100 and the RCA CDP 1801. However, CMOS processors did not become dominant until the 1980s.
CMOS was initially slower than NMOS logic, thus NMOS was more widely used for computers in the 1970s. The Intel 5101 (1 kb SRAM) CMOS memory chip (1974) had an access time of 800 ns, whereas the fastest NMOS chip at the time, the Intel 2147 (4 kb SRAM) HMOS memory chip (1976), had an access time of 55/70 ns. In 1978, a Hitachi research team led by Toshiaki Masuhara introduced the twin-well Hi-CMOS process, with its HM6147 (4 kb SRAM) memory chip, manufactured with a 3 μm process. The Hitachi HM6147 chip was able to match the performance (55/70 ns access) of the Intel 2147 HMOS chip, while the HM6147 also consumed significantly less power (15 mA) than the 2147 (110 mA). With comparable performance and much less power consumption, the twin-well CMOS process eventually overtook NMOS as the most common semiconductor manufacturing process for computers in the 1980s.
In the 1980s, CMOS microprocessors overtook NMOS microprocessors. NASA's Galileo spacecraft, sent to orbit Jupiter in 1989, used the RCA 1802 CMOS microprocessor due to low power consumption.
Intel introduced a 1.5 μm process for CMOS semiconductor device fabrication in 1983. In the mid-1980s, Bijan Davari of IBM developed high-performance, low-voltage, deep sub-micron CMOS technology, which enabled the development of faster computers as well as portable computers and battery-powered handheld electronics. In 1988, Davari led an IBM team that demonstrated a high-performance 250 nanometer CMOS process.
Fujitsu commercialized a 700 nm CMOS process in 1987, and then Hitachi, Mitsubishi Electric, NEC and Toshiba commercialized 500 nm CMOS in 1989. In 1993, Sony commercialized a 350 nm CMOS process, while Hitachi and NEC commercialized 250 nm CMOS. Hitachi introduced a 160 nm CMOS process in 1995, then Mitsubishi introduced 150 nm CMOS in 1996, and then Samsung Electronics introduced 140 nm in 1999.
In 2000, Gurtej Singh Sandhu and Trung T. Doan at Micron Technology invented atomic layer deposition of high-κ dielectric films, leading to the development of a cost-effective 90 nm CMOS process. Toshiba and Sony developed a 65 nm CMOS process in 2002, and then TSMC initiated the development of 45 nm CMOS logic in 2004. The development of pitch double patterning by Gurtej Singh Sandhu at Micron Technology led to the development of 30 nm class CMOS in the 2000s.
CMOS is used in most modern LSI and VLSI devices. As of 2010, the CPUs with the best performance per watt each year had been built with CMOS static logic since 1976. As of 2019, planar CMOS technology is still the most common form of semiconductor device fabrication, but is gradually being replaced by non-planar FinFET technology, which is capable of manufacturing semiconductor nodes smaller than 20 nm.
Technical details
"CMOS" refers to both a particular style of digital circuitry design and the family of processes used to implement that circuitry on integrated circuits (chips). CMOS circuitry dissipates less power than logic families with resistive loads. Since this advantage has increased and grown more important, CMOS processes and variants have come to dominate, thus the vast majority of modern integrated circuit manufacturing is on CMOS processes. CMOS logic consumes around one seventh the power of NMOS logic, and about 10 million times less power than bipolar transistor-transistor logic (TTL).
CMOS circuits use a combination of p-type and n-type metal–oxide–semiconductor field-effect transistor (MOSFETs) to implement logic gates and other digital circuits. Although CMOS logic can be implemented with discrete devices for demonstrations, commercial CMOS products are integrated circuits composed of up to billions of transistors of both types, on a rectangular piece of silicon of often between 10 and 400 mm2.
CMOS always uses all enhancement-mode MOSFETs (in other words, a zero gate-to-source voltage turns the transistor off).
Inversion
CMOS circuits are constructed in such a way that all P-type metal–oxide–semiconductor (PMOS) transistors must have either an input from the voltage source or from another PMOS transistor. Similarly, all NMOS transistors must have either an input from ground or from another NMOS transistor. The composition of a PMOS transistor creates low resistance between its source and drain contacts when a low gate voltage is applied and high resistance when a high gate voltage is applied. On the other hand, the composition of an NMOS transistor creates high resistance between source and drain when a low gate voltage is applied and low resistance when a high gate voltage is applied. CMOS accomplishes current reduction by complementing every nMOSFET with a pMOSFET and connecting both gates and both drains together. A high voltage on the gates will cause the nMOSFET to conduct and the pMOSFET not to conduct, while a low voltage on the gates causes the reverse. This arrangement greatly reduces power consumption and heat generation. However, during the switching time, both pMOS and nMOS MOSFETs conduct briefly as the gate voltage transitions from one state to another. This induces a brief spike in power consumption and becomes a serious issue at high frequencies.
The adjacent image shows what happens when an input is connected to both a PMOS transistor (top of diagram) and an NMOS transistor (bottom of diagram). Vdd is some positive voltage connected to a power supply and Vss is ground. A is the input and Q is the output.
When the voltage of A is low (i.e. close to Vss), the NMOS transistor's channel is in a high resistance state, disconnecting Vss from Q. The PMOS transistor's channel is in a low resistance state, connecting Vdd to Q. Q, therefore, registers Vdd.
On the other hand, when the voltage of A is high (i.e. close to Vdd), the PMOS transistor is in a high resistance state, disconnecting Vdd from Q. The NMOS transistor is in a low resistance state, connecting Vss to Q. Now, Q registers Vss.
In short, the outputs of the PMOS and NMOS transistors are complementary such that when the input is low, the output is high, and when the input is high, the output is low. No matter what the input is, the output is never left floating (charge is never stored due to wire capacitance and lack of electrical drain/ground). Because of this behavior of input and output, the CMOS circuit's output is the inverse of the input.
The transistors' resistances are never exactly equal to zero or infinity, so Q will never exactly equal Vss or Vdd, but Q will always be closer to Vss than A was to Vdd (or vice versa if A were close to Vss). Without this amplification, there would be a very low limit to the number of logic gates that could be chained together in series, and CMOS logic with billions of transistors would be impossible.
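The inverter's behavior can be illustrated with a minimal switch-level model. The following Python sketch is only an idealized illustration (the function name and the Boolean encoding of Vdd and Vss are choices made for this example, not part of any standard library); it treats each transistor as a perfect switch and shows that exactly one device drives the output for either input, so the output is never left floating.

# Minimal switch-level model of a CMOS inverter (idealized illustration only).
def cmos_inverter(a: bool) -> bool:
    """Return output Q for input A, with True = Vdd and False = Vss."""
    pmos_on = not a   # the PMOS channel conducts when the gate is low
    nmos_on = a       # the NMOS channel conducts when the gate is high
    assert pmos_on != nmos_on, "exactly one transistor drives the output"
    return pmos_on    # PMOS on pulls Q to Vdd; NMOS on pulls Q to Vss

for a in (False, True):
    print(f"A={a}  Q={cmos_inverter(a)}")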
Power supply pins
The power supply pins for CMOS are called VDD and VSS, or VCC and Ground (GND), depending on the manufacturer. VDD and VSS are carryovers from conventional MOS circuits and stand for the drain and source supplies. These do not apply directly to CMOS, since both supplies are really source supplies. VCC and Ground are carryovers from TTL logic and that nomenclature has been retained with the introduction of the 54C/74C line of CMOS.
Duality
An important characteristic of a CMOS circuit is the duality that exists between its PMOS transistors and NMOS transistors. A CMOS circuit is created to allow a path always to exist from the output to either the power source or ground. To accomplish this, the set of all paths to the voltage source must be the complement of the set of all paths to ground. This can be easily accomplished by defining one in terms of the NOT of the other. Due to the logic based on De Morgan's laws, the PMOS transistors in parallel have corresponding NMOS transistors in series while the PMOS transistors in series have corresponding NMOS transistors in parallel.
Logic
More complex logic functions such as those involving AND and OR gates require manipulating the paths between gates to represent the logic. When a path consists of two transistors in series, both transistors must have low resistance to the corresponding supply voltage, modelling an AND. When a path consists of two transistors in parallel, either one or both of the transistors must have low resistance to connect the supply voltage to the output, modelling an OR.
Shown on the right is a circuit diagram of a NAND gate in CMOS logic. If both of the A and B inputs are high, then both the NMOS transistors (bottom half of the diagram) will conduct, neither of the PMOS transistors (top half) will conduct, and a conductive path will be established between the output and Vss (ground), bringing the output low. If both of the A and B inputs are low, then neither of the NMOS transistors will conduct, while both of the PMOS transistors will conduct, establishing a conductive path between the output and Vdd (voltage source), bringing the output high. If either of the A or B inputs is low, one of the NMOS transistors will not conduct, one of the PMOS transistors will, and a conductive path will be established between the output and Vdd (voltage source), bringing the output high. As the only configuration of the two inputs that results in a low output is when both are high, this circuit implements a NAND (NOT AND) logic gate.
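The series/parallel reasoning above can be checked with a small switch-level sketch. The Python fragment below is an illustrative model only (the helper name and Boolean conventions are assumptions for this example); it evaluates the pull-down network (two NMOS in series to ground) and the pull-up network (two PMOS in parallel to Vdd) and confirms that exactly one of them conducts for every input combination, producing the NAND truth table.

# Switch-level sketch of the CMOS NAND gate described above (illustrative only).
def cmos_nand(a: bool, b: bool) -> bool:
    pull_down = a and b             # two NMOS transistors in series to Vss
    pull_up = (not a) or (not b)    # two PMOS transistors in parallel to Vdd
    assert pull_down != pull_up, "the output must be driven by exactly one network"
    return pull_up                  # True means the output is pulled to Vdd

for a in (False, True):
    for b in (False, True):
        print(f"A={a} B={b}  out={cmos_nand(a, b)}")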
An advantage of CMOS over NMOS logic is that both low-to-high and high-to-low output transitions are fast since the (PMOS) pull-up transistors have low resistance when switched on, unlike the load resistors in NMOS logic. In addition, the output signal swings the full voltage between the low and high rails. This strong, more nearly symmetric response also makes CMOS more resistant to noise.
See Logical effort for a method of calculating delay in a CMOS circuit.
Example: NAND gate in physical layout
This example shows a NAND logic device drawn as a physical representation as it would be manufactured. The physical layout perspective is a "bird's eye view" of a stack of layers. The circuit is constructed on a P-type substrate. The polysilicon, diffusion, and n-well are referred to as "base layers" and are actually inserted into trenches of the P-type substrate. (See steps 1 to 6 in the process diagram below right) The contacts penetrate an insulating layer between the base layers and the first layer of metal (metal1) making a connection.
The inputs to the NAND (illustrated in green color) are in polysilicon. The transistors (devices) are formed by the intersection of the polysilicon and diffusion; N diffusion for the N device & P diffusion for the P device (illustrated in salmon and yellow coloring respectively). The output ("out") is connected together in metal (illustrated in cyan coloring). Connections between metal and polysilicon or diffusion are made through contacts (illustrated as black squares). The physical layout example matches the NAND logic circuit given in the previous example.
The N device is manufactured on a P-type substrate while the P device is manufactured in an N-type well (n-well). A P-type substrate "tap" is connected to VSS and an N-type n-well tap is connected to VDD to prevent latchup.
Power: switching and leakage
CMOS logic dissipates less power than NMOS logic circuits because CMOS dissipates power only when switching ("dynamic power"). On a typical ASIC in a modern 90 nanometer process, switching the output might take 120 picoseconds, and happens once every ten nanoseconds. NMOS logic dissipates power whenever the transistor is on, because there is a current path from Vdd to Vss through the load resistor and the n-type network.
Static CMOS gates are very power efficient because they dissipate nearly zero power when idle. Earlier, the power consumption of CMOS devices was not the major concern while designing chips; factors like speed and area dominated the design parameters. As CMOS technology moved below sub-micron levels, however, the power consumption per unit area of the chip rose tremendously.
Broadly, power dissipation in CMOS circuits arises from two components, static and dynamic:
Static dissipation
Both NMOS and PMOS transistors have a gate–source threshold voltage (Vth), below which the current (called the subthreshold current) through the device drops exponentially. Historically, CMOS circuits operated at supply voltages much larger than their threshold voltages (Vdd might have been 5 V, and Vth for both NMOS and PMOS might have been 700 mV). A special type of transistor used in some CMOS circuits is the native transistor, with near zero threshold voltage.
SiO2 is a good insulator, but at very small thickness levels electrons can tunnel across the very thin insulation; the probability drops off exponentially with oxide thickness. Tunnelling current becomes very important for transistors below 130 nm technology with gate oxides of 20 Å or thinner.
Small reverse leakage currents form due to the reverse bias between diffusion regions and wells (e.g., p-type diffusion vs. n-well) and between wells and substrate (e.g., n-well vs. p-substrate). In modern processes, diode leakage is very small compared to subthreshold and tunnelling currents, so it may be neglected during power calculations.
If the PMOS and NMOS transistors are not matched in their sizing ratios, their currents can differ; the resulting imbalance causes the CMOS circuit to heat up and dissipate power unnecessarily. Furthermore, recent studies have shown that leakage power reduces due to aging effects as a trade-off for devices to become slower.
To speed up designs, manufacturers have switched to constructions that have lower voltage thresholds but because of this a modern NMOS transistor with a Vth of 200 mV has a significant subthreshold leakage current. Designs (e.g. desktop processors) which include vast numbers of circuits which are not actively switching still consume power because of this leakage current. Leakage power is a significant portion of the total power consumed by such designs. Multi-threshold CMOS (MTCMOS), now available from foundries, is one approach to managing leakage power. With MTCMOS, high Vth transistors are used when switching speed is not critical, while low Vth transistors are used in speed sensitive paths. Further technology advances that use even thinner gate dielectrics have an additional leakage component because of current tunnelling through the extremely thin gate dielectric. Using high-κ dielectrics instead of silicon dioxide that is the conventional gate dielectric allows similar device performance, but with a thicker gate insulator, thus avoiding this current. Leakage power reduction using new material and system designs is critical to sustaining scaling of CMOS.
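To make the exponential dependence concrete, the short Python sketch below uses the standard first-order subthreshold model, in which off-state current scales roughly as exp(−Vth / (n·kT/q)); the slope factor and threshold voltages used here are illustrative assumptions, not data for any particular process.

# Rough illustration of how lowering Vth raises subthreshold leakage.
# The simple exponential model below uses assumed, illustrative values and
# ignores second-order effects such as DIBL and temperature variation.
import math

KT_OVER_Q = 0.026   # thermal voltage at room temperature, in volts
N_SLOPE = 1.5       # assumed subthreshold slope factor

def relative_leakage(vth: float) -> float:
    """Off-state leakage at Vgs = 0, relative to a hypothetical device with Vth = 0."""
    return math.exp(-vth / (N_SLOPE * KT_OVER_Q))

for vth in (0.7, 0.5, 0.35, 0.2):
    print(f"Vth = {vth:4.2f} V  relative leakage = {relative_leakage(vth):.2e}")
# Dropping Vth from 700 mV to 200 mV raises leakage by several orders of magnitude.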
Dynamic dissipation
Charging and discharging of load capacitances
CMOS circuits dissipate power by charging the various load capacitances (mostly gate and wire capacitance, but also drain and some source capacitances) whenever they are switched. In one complete cycle of CMOS logic, current flows from VDD to the load capacitance to charge it and then flows from the charged load capacitance (CL) to ground during discharge. Therefore, in one complete charge/discharge cycle, a total charge of Q = CL·VDD is transferred from VDD to ground. Multiply by the switching frequency on the load capacitances to get the current used, and multiply by the average voltage again to get the characteristic switching power dissipated by a CMOS device: P = CL·VDD²·f.
Since most gates do not operate/switch at every clock cycle, they are often accompanied by a factor α, called the activity factor. Now, the dynamic power dissipation may be re-written as P = α·CL·VDD²·f.
A clock in a system has an activity factor α=1, since it rises and falls every cycle. Most data has an activity factor of 0.1. If correct load capacitance is estimated on a node together with its activity factor, the dynamic power dissipation at that node can be calculated effectively.
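As a worked example of the formula above, the following short calculation uses illustrative, assumed values (not figures from any particular process) to estimate the dynamic power of a single node.

# Worked example of dynamic power P = alpha * C_L * Vdd^2 * f.
# All values below are illustrative assumptions, not measured data.
alpha = 0.1        # activity factor typical of data signals
c_load = 10e-15    # load capacitance: 10 femtofarads
vdd = 1.0          # supply voltage: 1.0 V
freq = 1e9         # switching (clock) frequency: 1 GHz

p_dynamic = alpha * c_load * vdd ** 2 * freq
print(f"Dynamic power of this node: {p_dynamic * 1e6:.2f} microwatts")  # 1.00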
Short-circuit power
Since there is a finite rise/fall time for both pMOS and nMOS, during transition, for example, from off to on, both the transistors will be on for a small period of time in which current will find a path directly from VDD to ground, hence creating a short-circuit current, sometimes called a crowbar current. Short-circuit power dissipation increases with the rise and fall time of the transistors.
This form of power consumption became significant in the 1990s as wires on chip became narrower and the long wires became more resistive. CMOS gates at the end of those resistive wires see slow input transitions. Careful design which avoids weakly driven long skinny wires reduces this effect, but crowbar power can be a substantial part of dynamic CMOS power.
Input protection
Parasitic transistors that are inherent in the CMOS structure may be turned on by input signals outside the normal operating range, e.g. electrostatic discharges or line reflections. The resulting latch-up may damage or destroy the CMOS device. Clamp diodes are included in CMOS circuits to deal with these signals. Manufacturers' data sheets specify the maximum permitted current that may flow through the diodes.
Analog CMOS
Besides digital applications, CMOS technology is also used in analog applications. For example, there are CMOS operational amplifier ICs available in the market. Transmission gates may be used as analog multiplexers instead of signal relays. CMOS technology is also widely used for RF circuits all the way to microwave frequencies, in mixed-signal (analog+digital) applications.
RF CMOS
RF CMOS refers to RF circuits (radio frequency circuits) which are based on mixed-signal CMOS integrated circuit technology. They are widely used in wireless telecommunication technology. RF CMOS was developed by Asad Abidi while working at UCLA in the late 1980s. This changed the way in which RF circuits were designed, leading to the replacement of discrete bipolar transistors with CMOS integrated circuits in radio transceivers. It enabled sophisticated, low-cost and portable end-user terminals, and gave rise to small, low-cost, low-power and portable units for a wide range of wireless communication systems. This enabled "anytime, anywhere" communication and helped bring about the wireless revolution, leading to the rapid growth of the wireless industry.
The baseband processors and radio transceivers in all modern wireless networking devices and mobile phones are mass-produced using RF CMOS devices. RF CMOS circuits are widely used to transmit and receive wireless signals, in a variety of applications, such as satellite technology (such as GPS), bluetooth, Wi-Fi, near-field communication (NFC), mobile networks (such as 3G and 4G), terrestrial broadcast, and automotive radar applications, among other uses.
Examples of commercial RF CMOS chips include Intel's DECT cordless phone, and 802.11 (Wi-Fi) chips created by Atheros and other companies. Commercial RF CMOS products are also used for Bluetooth and Wireless LAN (WLAN) networks. RF CMOS is also used in the radio transceivers for wireless standards such as GSM, Wi-Fi, and Bluetooth, transceivers for mobile networks such as 3G, and remote units in wireless sensor networks (WSN).
RF CMOS technology is crucial to modern wireless communications, including wireless networks and mobile communication devices. One of the companies that commercialized RF CMOS technology was Infineon. Its bulk CMOS RF switches sell over 1 billion units annually and have reached cumulative shipments of more than 5 billion units.
Temperature range
Conventional CMOS devices work over a range of −55 °C to +125 °C.
There were theoretical indications as early as August 2008 that silicon CMOS would work down to −233 °C (40 K). Functioning temperatures near 40 K have since been achieved using overclocked AMD Phenom II processors with a combination of liquid nitrogen and liquid helium cooling.
Silicon carbide CMOS devices have been tested for a year at 500 °C.
Single-electron MOS transistors
Ultra small (L = 20 nm, W = 20 nm) MOSFETs achieve the single-electron limit when operated at cryogenic temperature over a range of −269 °C (4 K) to about −258 °C (15 K). The transistor displays Coulomb blockade due to progressive charging of electrons one by one. The number of electrons confined in the channel is driven by the gate voltage, starting from an occupation of zero electrons, and it can be set to one or many.
| Technology | Semiconductors | null |
49492 | https://en.wikipedia.org/wiki/Divisor | Divisor | In mathematics, a divisor of an integer n, also called a factor of n, is an integer m that may be multiplied by some integer to produce n. In this case, one also says that n is a multiple of m. An integer n is divisible or evenly divisible by another integer m if m is a divisor of n; this implies dividing n by m leaves no remainder.
Definition
An integer n is divisible by a nonzero integer m if there exists an integer k such that n = km. This is written as m ∣ n.
This may be read as: m divides n, m is a divisor of n, m is a factor of n, or n is a multiple of m. If m does not divide n, then the notation is m ∤ n.
There are two conventions, distinguished by whether m is permitted to be zero:
With the convention without an additional constraint on m, m ∣ 0 for every integer m.
With the convention that m be nonzero, m ∣ 0 for every nonzero integer m.
General
Divisors can be negative as well as positive, although often the term is restricted to positive divisors. For example, there are six divisors of 4; they are 1, 2, 4, −1, −2, and −4, but only the positive ones (1, 2, and 4) would usually be mentioned.
1 and −1 divide (are divisors of) every integer. Every integer (and its negation) is a divisor of itself. Integers divisible by 2 are called even, and integers not divisible by 2 are called odd.
1, −1, and are known as the trivial divisors of A divisor of that is not a trivial divisor is known as a non-trivial divisor (or strict divisor). A nonzero integer with at least one non-trivial divisor is known as a composite number, while the units −1 and 1 and prime numbers have no non-trivial divisors.
There are divisibility rules that allow one to recognize certain divisors of a number from the number's digits.
Examples
7 is a divisor of 42 because 7 × 6 = 42, so we can say 7 ∣ 42. It can also be said that 42 is divisible by 7, 42 is a multiple of 7, 7 divides 42, or 7 is a factor of 42.
The non-trivial divisors of 6 are 2, −2, 3, −3.
The positive divisors of 42 are 1, 2, 3, 6, 7, 14, 21, 42.
The set of all positive divisors of 60, partially ordered by divisibility, forms a lattice whose Hasse diagram has 1 at the bottom and 60 at the top.
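As an aside (not part of the original article), a minimal Python sketch that lists the positive divisors of a number by trial division and reproduces the examples above:

def positive_divisors(n):
    # Collect divisors in pairs (d, n // d) up to the square root of n.
    small, large = [], []
    d = 1
    while d * d <= n:
        if n % d == 0:
            small.append(d)
            if d != n // d:
                large.append(n // d)
        d += 1
    return small + large[::-1]

print(positive_divisors(42))  # [1, 2, 3, 6, 7, 14, 21, 42]
print(positive_divisors(60))  # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]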
Further notions and facts
There are some elementary rules:
If a ∣ b and b ∣ c, then a ∣ c; that is, divisibility is a transitive relation.
If a ∣ b and b ∣ a, then a = b or a = −b. (That is, a and b are associates.)
If a ∣ b and a ∣ c, then a ∣ (b + c) holds, as does a ∣ (b − c). However, if a ∣ b and c ∣ b, then (a + c) ∣ b does not always hold (for example, 2 ∣ 6 and 3 ∣ 6, but 5 does not divide 6).
for nonzero . This follows immediately from writing .
If a ∣ bc and gcd(a, b) = 1, then a ∣ c. This is called Euclid's lemma.
If p is a prime number and p ∣ ab, then p ∣ a or p ∣ b.
A positive divisor of n that is different from n is called a proper divisor or an aliquot part of n (for example, the proper divisors of 6 are 1, 2, and 3). A number that does not evenly divide n but leaves a remainder is sometimes called an aliquant part of n.
An integer whose only proper divisor is 1 is called a prime number. Equivalently, a prime number is a positive integer that has exactly two positive factors: 1 and itself.
Any positive divisor of is a product of prime divisors of raised to some power. This is a consequence of the fundamental theorem of arithmetic.
A number n is said to be perfect if it equals the sum of its proper divisors, deficient if the sum of its proper divisors is less than n, and abundant if this sum exceeds n.
The total number of positive divisors of n is a multiplicative function d(n), meaning that when two numbers m and n are relatively prime, then d(mn) = d(m) × d(n). For instance, d(42) = 8 = 2 × 2 × 2 = d(2) × d(3) × d(7); the eight divisors of 42 are 1, 2, 3, 6, 7, 14, 21 and 42. However, the number of positive divisors is not a totally multiplicative function: if the two numbers m and n share a common divisor, then it might not be true that d(mn) = d(m) × d(n). The sum of the positive divisors of n is another multiplicative function σ(n) (for example, σ(42) = 96 = 3 × 4 × 8 = σ(2) × σ(3) × σ(7)). Both of these functions are examples of divisor functions.
If the prime factorization of n is given by n = p1^a1 · p2^a2 ⋯ pk^ak,
then the number of positive divisors of n is d(n) = (a1 + 1)(a2 + 1) ⋯ (ak + 1),
and each of the divisors has the form p1^b1 · p2^b2 ⋯ pk^bk,
where 0 ≤ bi ≤ ai for each 1 ≤ i ≤ k.
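As a check on this formula, the following Python sketch (illustrative only, using naive trial-division factorization) computes the number of positive divisors from the prime exponents:

def prime_factorization(n):
    # Return {prime: exponent} for n > 1 by trial division.
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def count_divisors(n):
    # Product of (exponent + 1) over all prime factors.
    result = 1
    for exponent in prime_factorization(n).values():
        result *= exponent + 1
    return result

print(prime_factorization(60))  # {2: 2, 3: 1, 5: 1}
print(count_divisors(60))       # (2+1)(1+1)(1+1) = 12
print(count_divisors(42))       # (1+1)(1+1)(1+1) = 8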
For every natural number n, the number of divisors d(n) is less than 2√n.
Also, the divisor function satisfies d(1) + d(2) + ⋯ + d(N) ≈ N ln N + (2γ − 1)N,
where γ is the Euler–Mascheroni constant.
One interpretation of this result is that a randomly chosen positive integer n has an average
number of divisors of about ln n. However, this average is skewed upwards by the contributions of numbers with "abnormally many" divisors.
In abstract algebra
Ring theory
Division lattice
In definitions that allow the divisor to be 0, the relation of divisibility turns the set of non-negative integers into a partially ordered set that is a complete distributive lattice. The largest element of this lattice is 0 and the smallest is 1. The meet operation ∧ is given by the greatest common divisor and the join operation ∨ by the least common multiple. This lattice is isomorphic to the dual of the lattice of subgroups of the infinite cyclic group Z.
| Mathematics | Basics | null |
49497 | https://en.wikipedia.org/wiki/Pascal%27s%20triangle | Pascal's triangle | In mathematics, Pascal's triangle is an infinite triangular array of the binomial coefficients which play a crucial role in probability theory, combinatorics, and algebra. In much of the Western world, it is named after the French mathematician Blaise Pascal, although other mathematicians studied it centuries before him in Persia, India, China, Germany, and Italy.
The rows of Pascal's triangle are conventionally enumerated starting with row at the top (the 0th row). The entries in each row are numbered from the left beginning with and are usually staggered relative to the numbers in the adjacent rows. The triangle may be constructed in the following manner: In row 0 (the topmost row), there is a unique nonzero entry 1. Each entry of each subsequent row is constructed by adding the number above and to the left with the number above and to the right, treating blank entries as 0. For example, the initial number of row 1 (or any other row) is 1 (the sum of 0 and 1), whereas the numbers 1 and 3 in row 3 are added to produce the number 4 in row 4.
Formula
In the nth row of Pascal's triangle, the kth entry is denoted C(n, k), pronounced "n choose k". For example, the topmost entry is C(0, 0) = 1. With this notation, the construction of the previous paragraph may be written as C(n, k) = C(n − 1, k − 1) + C(n − 1, k)
for any positive integer n and any integer k between 0 and n, inclusive. This recurrence for the binomial coefficients is known as Pascal's rule.
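A minimal Python sketch (not from the article) that builds successive rows directly from Pascal's rule, treating missing neighbours as 0:

def pascal_rows(num_rows):
    # Yield rows 0 .. num_rows - 1, each built from the previous row.
    row = [1]
    for _ in range(num_rows):
        yield row
        # Each new entry is the sum of the entries above-left and above-right.
        row = [a + b for a, b in zip([0] + row, row + [0])]

for r in pascal_rows(5):
    print(r)
# prints [1], [1, 1], [1, 2, 1], [1, 3, 3, 1], [1, 4, 6, 4, 1], one row per line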
History
The pattern of numbers that forms Pascal's triangle was known well before Pascal's time. The Persian mathematician Al-Karaji (953–1029) wrote a now-lost book which contained the first description of Pascal's triangle. In India, the Chandaḥśāstra by the Indian lyricist Piṅgala (3rd or 2nd century BC) somewhat cryptically describes a method of arranging two types of syllables to form metres of various lengths and counting them; as interpreted and elaborated by Piṅgala's 10th-century commentator Halāyudha, his "method of pyramidal expansion" (meru-prastāra) for counting metres is equivalent to Pascal's triangle. It was later repeated by Omar Khayyám (1048–1131), another Persian mathematician; thus the triangle is also referred to as Khayyam's triangle in Iran. Several theorems related to the triangle were known, including the binomial theorem. Khayyam used a method of finding nth roots based on the binomial expansion, and therefore on the binomial coefficients.
Pascal's triangle was known in China during the 11th century through the work of the Chinese mathematician Jia Xian (1010–1070). During the 13th century, Yang Hui (1238–1298) defined the triangle, and it is known as Yang Hui's triangle () in China.
In Europe, Pascal's triangle appeared for the first time in the Arithmetic of Jordanus de Nemore (13th century).
The binomial coefficients were calculated by Gersonides during the early 14th century, using the multiplicative formula for them. Petrus Apianus (1495–1552) published the full triangle on the frontispiece of his book on business calculations in 1527. Michael Stifel published a portion of the triangle (from the second to the middle column in each row) in 1544, describing it as a table of figurate numbers. In Italy, Pascal's triangle is referred to as Tartaglia's triangle, named for the Italian algebraist Tartaglia (1500–1577), who published six rows of the triangle in 1556. Gerolamo Cardano also published the triangle as well as the additive and multiplicative rules for constructing it in 1570.
Pascal's (Treatise on Arithmetical Triangle) was published posthumously in 1665. In this, Pascal collected several results then known about the triangle, and employed them to solve problems in probability theory. The triangle was later named for Pascal by Pierre Raymond de Montmort (1708) who called it (French: Mr. Pascal's table for combinations) and Abraham de Moivre (1730) who called it (Latin: Pascal's Arithmetic Triangle), which became the basis of the modern Western name.
Binomial expansions
Pascal's triangle determines the coefficients which arise in binomial expansions. For example, in the expansion
(x + y)^2 = x^2 + 2xy + y^2,
the coefficients are the entries in the second row of Pascal's triangle: 1, 2, 1.
In general, the binomial theorem states that when a binomial like x + y is raised to a positive integer power n, the expression expands as
(x + y)^n = a_0 x^n + a_1 x^(n−1) y + a_2 x^(n−2) y^2 + ... + a_(n−1) x y^(n−1) + a_n y^n,
where the coefficients a_k are precisely the numbers in row n of Pascal's triangle: a_k = C(n, k).
The entire left diagonal of Pascal's triangle corresponds to the coefficient of x^n in these binomial expansions, while the next left diagonal corresponds to the coefficient of x^(n−1) y, and so on.
To see how the binomial theorem relates to the simple construction of Pascal's triangle, consider the problem of calculating the coefficients of the expansion of (x + 1)^(n+1) in terms of the corresponding coefficients of (x + 1)^n, where we set y = 1 for simplicity. Suppose then that
(x + 1)^n = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n.
Now
(x + 1)^(n+1) = (x + 1)(x + 1)^n = x(x + 1)^n + (x + 1)^n = (a_0 x + a_1 x^2 + ... + a_n x^(n+1)) + (a_0 + a_1 x + ... + a_n x^n).
The two summations can be reindexed (shifting the index of the first sum by one) and combined to yield
(x + 1)^(n+1) = a_0 + (a_0 + a_1) x + (a_1 + a_2) x^2 + ... + (a_(n−1) + a_n) x^n + a_n x^(n+1).
Thus the extreme left and right coefficients remain as 1, and for any given 0 < k < n + 1, the coefficient of the x^k term in the polynomial (x + 1)^(n+1) is equal to a_(k−1) + a_k, the sum of the x^(k−1) and x^k coefficients in the previous power (x + 1)^n. This is indeed the downward-addition rule for constructing Pascal's triangle.
It is not difficult to turn this argument into a proof (by mathematical induction) of the binomial theorem.
Since (x + y)^n = y^n (x/y + 1)^n, the coefficients are identical in the expansion of the general case.
An interesting consequence of the binomial theorem is obtained by setting both variables x = y = 1, so that
C(n, 0) + C(n, 1) + ... + C(n, n) = (1 + 1)^n = 2^n.
In other words, the sum of the entries in the nth row of Pascal's triangle is the nth power of 2. This is equivalent to the statement that the number of subsets of an n-element set is 2^n, as can be seen by observing that each of the n elements may be independently included or excluded from a given subset.
Combinations
A second useful application of Pascal's triangle is in the calculation of combinations. The number of combinations of n items taken k at a time, i.e. the number of k-element subsets from among n elements, can be found by the equation
C(n, k) = n! / (k!(n − k)!).
This is equal to entry k in row n of Pascal's triangle. Rather than performing the multiplicative calculation, one can simply look up the appropriate entry in the triangle (constructed by additions). For example, suppose 3 workers need to be hired from among 7 candidates; then the number of possible hiring choices is 7 choose 3, entry 3 in row 7 of the triangle (taking into consideration that the first row is the 0th row), which is 35.
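The hiring example can be checked both ways; a small Python sketch (illustrative only) compares the factorial formula against a lookup in an additively built triangle:

from math import factorial

n, k = 7, 3
by_formula = factorial(n) // (factorial(k) * factorial(n - k))

# Build row 7 by repeated addition, then read off entry 3.
row = [1]
for _ in range(n):
    row = [a + b for a, b in zip([0] + row, row + [0])]
by_lookup = row[k]

print(by_formula, by_lookup)  # 35 35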
Relation to binomial distribution and convolutions
When divided by 2^n, the nth row of Pascal's triangle becomes the binomial distribution in the symmetric case where p = 1/2. By the central limit theorem, this distribution approaches the normal distribution as n increases. This can also be seen by applying Stirling's formula to the factorials involved in the formula for combinations.
This is related to the operation of discrete convolution in two ways. First, polynomial multiplication corresponds exactly to discrete convolution, so that repeatedly convolving the sequence {…, 0, 0, 1, 1, 0, 0, …} with itself corresponds to taking powers of x + 1, and hence to generating the rows of the triangle. Second, repeatedly convolving the distribution function for a random variable with itself corresponds to calculating the distribution function for a sum of n independent copies of that variable; this is exactly the situation to which the central limit theorem applies, and hence results in the normal distribution in the limit. (The operation of repeatedly taking a convolution of something with itself is called the convolution power.)
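The first of these two connections can be made concrete with a short Python sketch (not from the article): repeatedly convolving the sequence (1, 1) with itself, i.e. repeatedly multiplying by the polynomial x + 1, generates the rows of the triangle.

def convolve(a, b):
    # Discrete convolution, identical to multiplying two polynomials.
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

row = [1]
for n in range(1, 6):
    row = convolve(row, [1, 1])
    print(n, row)
# 1 [1, 1]
# 2 [1, 2, 1]
# 3 [1, 3, 3, 1]
# 4 [1, 4, 6, 4, 1]
# 5 [1, 5, 10, 10, 5, 1]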
Patterns and properties
Pascal's triangle has many properties and contains many patterns of numbers.
Rows
The sum of the elements of a single row is twice the sum of the row preceding it. For example, row 0 (the topmost row) has a value of 1, row 1 has a value of 2, row 2 has a value of 4, and so forth. This is because every item in a row produces two items in the next row: one left and one right. The sum of the elements of row n equals 2^n.
Taking the product of the elements in each row, the sequence of products is related to the base of the natural logarithm, e. Specifically, define the sequence s_n for all n ≥ 0 as the product of the entries in row n. Then the ratio of successive row products is s_(n+1)/s_n = (n + 1)^n / n!, and the ratio of these ratios is (s_(n+1) s_(n−1)) / s_n^2 = ((n + 1)/n)^n. The right-hand side of the above equation takes the form of the limit definition of e.
The value of π can be found in Pascal's triangle by use of the Nilakantha infinite series.
Some of the numbers in Pascal's triangle correlate to numbers in Lozanić's triangle.
The sum of the squares of the elements of row n equals the middle element of row 2n. For example, 1^2 + 4^2 + 6^2 + 4^2 + 1^2 = 70 = C(8, 4). In general form, C(n, 0)^2 + C(n, 1)^2 + ... + C(n, n)^2 = C(2n, n).
In any even row n, the middle term minus the term two spots to the left equals a Catalan number, specifically the (n/2 + 1)th Catalan number. For example, in row 4, which is 1, 4, 6, 4, 1, we get the 3rd Catalan number 6 − 1 = 5.
In a row p, where p is a prime number, all the terms in that row except the 1s are divisible by p. This can be proven easily from the multiplicative formula C(p, k) = p!/(k!(p − k)!): since the denominator k!(p − k)! can have no prime factor equal to p, the factor p remains in the numerator after integer division, making the entire entry a multiple of p.
Parity: To count odd terms in row n, convert n to binary. Let x be the number of 1s in the binary representation. Then the number of odd terms will be 2^x. These numbers are the values in Gould's sequence. (Several of the row properties in this list are checked in the short sketch that follows the list.)
Every entry in row 2^n − 1, n ≥ 0, is odd.
Polarity: When the elements of a row of Pascal's triangle are alternately added and subtracted together, the result is 0. For example, row 6 is 1, 6, 15, 20, 15, 6, 1, so the formula is 1 − 6 + 15 − 20 + 15 − 6 + 1 = 0.
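A short Python sketch (illustrative only, using Python's math.comb) that verifies several of the row properties above for a sample row:

from math import comb

def row(n):
    return [comb(n, k) for k in range(n + 1)]

n = 6
r = row(n)
assert sum(r) == 2 ** n                                    # row sum is 2^n
assert sum((-1) ** k * x for k, x in enumerate(r)) == 0    # alternating sum is 0
assert sum(x * x for x in r) == comb(2 * n, n)             # sum of squares = middle of row 2n
assert all(x % 7 == 0 for x in row(7)[1:-1])               # prime row: interior entries divisible by 7
assert sum(x % 2 for x in r) == 2 ** bin(n).count("1")     # number of odd entries (Gould's sequence)
print("row properties hold for n =", n)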
Diagonals
The diagonals of Pascal's triangle contain the figurate numbers of simplices:
The diagonals going along the left and right edges contain only 1's.
The diagonals next to the edge diagonals contain the natural numbers in order. The 1-dimensional simplex numbers increment by 1 as the line segments extend to the next whole number along the number line.
Moving inwards, the next pair of diagonals contain the triangular numbers in order.
The next pair of diagonals contain the tetrahedral numbers in order, and the next pair give pentatope numbers.
The symmetry of the triangle implies that the nth d-dimensional number is equal to the dth n-dimensional number.
An alternative formula that does not involve recursion is
P_d(n) = C(n + d − 1, d) = n^(d) / d!,
where n^(d) = n(n + 1)(n + 2) ⋯ (n + d − 1) is the rising factorial.
The geometric meaning of a function Pd is: Pd(1) = 1 for all d. Construct a d-dimensional triangle (a 3-dimensional triangle is a tetrahedron) by placing additional dots below an initial dot, corresponding to Pd(1) = 1. Place these dots in a manner analogous to the placement of numbers in Pascal's triangle. To find Pd(x), have a total of x dots composing the target shape. Pd(x) then equals the total number of dots in the shape. A 0-dimensional triangle is a point and a 1-dimensional triangle is simply a line, and therefore P0(x) = 1 and P1(x) = x, which is the sequence of natural numbers. The number of dots in each layer corresponds to Pd − 1(x).
Calculating a row or diagonal by itself
There are simple algorithms to compute all the elements in a row or diagonal without computing other elements or factorials.
To compute row n with the elements C(n, 0), C(n, 1), ..., C(n, n), begin with C(n, 0) = 1. For each subsequent element, the value is determined by multiplying the previous value by a fraction with slowly changing numerator and denominator:
C(n, k) = C(n, k − 1) × (n − k + 1) / k.
For example, to calculate row 5, the fractions are 5/1, 4/2, 3/3, 2/4 and 1/5, and hence the elements are C(5, 0) = 1, C(5, 1) = 1 × 5/1 = 5, C(5, 2) = 5 × 4/2 = 10, etc. (The remaining elements are most easily obtained by symmetry.)
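A minimal Python sketch of this multiplicative procedure (not part of the article); integer division is exact at every step because each partial product is itself a binomial coefficient:

def pascal_row(n):
    # C(n, k) = C(n, k - 1) * (n - k + 1) / k, starting from C(n, 0) = 1.
    entries = [1]
    for k in range(1, n + 1):
        entries.append(entries[-1] * (n - k + 1) // k)
    return entries

print(pascal_row(5))  # [1, 5, 10, 10, 5, 1]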
To compute the diagonal containing the elements C(n, 0), C(n + 1, 1), C(n + 2, 2), ..., begin again with C(n, 0) = 1 and obtain subsequent elements by multiplication by certain fractions:
C(n + k, k) = C(n + k − 1, k − 1) × (n + k) / k.
For example, to calculate the diagonal beginning at C(5, 0), the fractions are 6/1, 7/2, 8/3, ..., and the elements are C(5, 0) = 1, C(6, 1) = 1 × 6/1 = 6, C(7, 2) = 6 × 7/2 = 21, etc. By symmetry, these elements are equal to C(6, 5), C(7, 5), C(8, 5), etc.
Overall patterns and properties
The pattern obtained by coloring only the odd numbers in Pascal's triangle closely resembles the fractal known as the Sierpinski triangle. This resemblance becomes increasingly accurate as more rows are considered; in the limit, as the number of rows approaches infinity, the resulting pattern is the Sierpinski triangle, assuming a fixed perimeter. More generally, numbers could be colored differently according to whether or not they are multiples of 3, 4, etc.; this results in other similar patterns.
As the proportion of black numbers tends to zero with increasing n, a corollary is that the proportion of odd binomial coefficients tends to zero as n tends to infinity.
Pascal's triangle overlaid on a grid gives the number of distinct paths to each square, assuming only rightward and downward steps to an adjacent square are considered.
In a triangular portion of a grid (as in the images below), the number of shortest grid paths from a given node to the top node of the triangle is the corresponding entry in Pascal's triangle. On a Plinko game board shaped like a triangle, this distribution should give the probabilities of winning the various prizes.
If the rows of Pascal's triangle are left-justified, the diagonal bands sum to the Fibonacci numbers: 1, 1, 1 + 1 = 2, 1 + 2 = 3, 1 + 3 + 1 = 5, 1 + 4 + 3 = 8, and so on.
1
1   1
1   2   1
1   3   3   1
1   4   6   4   1
1   5  10  10   5   1
1   6  15  20  15   6   1
1   7  21  35  35  21   7   1
Construction as matrix exponential
Due to its simple construction by factorials, a very basic representation of Pascal's triangle in terms of the matrix exponential can be given: Pascal's triangle is the exponential of the matrix which has the sequence 1, 2, 3, 4, ... on its sub-diagonal and zero everywhere else.
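A small numerical check (illustrative only, assuming NumPy and SciPy are available) exponentiates such a matrix and recovers the lower-triangular Pascal matrix:

import numpy as np
from scipy.linalg import expm

size = 5
sub = np.zeros((size, size))
for i in range(1, size):
    sub[i, i - 1] = i  # 1, 2, 3, 4 on the sub-diagonal, zero elsewhere

print(np.rint(expm(sub)).astype(int))
# [[1 0 0 0 0]
#  [1 1 0 0 0]
#  [1 2 1 0 0]
#  [1 3 3 1 0]
#  [1 4 6 4 1]]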
Construction of Clifford algebra using simplices
Labelling the elements of each n-simplex matches the basis elements of Clifford algebra used as forms in Geometric Algebra rather than matrices. Recognising the geometric operations, such as rotations, allows the algebra operations to be discovered. Just as each row, n, starting at 0, of Pascal's triangle corresponds to an (n − 1)-simplex, as described below, it also defines the number of named basis forms in n-dimensional Geometric algebra. The binomial theorem can be used to prove the geometric relationship provided by Pascal's triangle. This same proof could be applied to simplices except that the first column of all 1's must be ignored, whereas in the algebra these correspond to the real numbers, ℝ, with basis 1.
Relation to geometry of polytopes
Pascal's triangle can be used as a lookup table for the number of elements (such as edges and corners) within a polytope (such as a triangle, a tetrahedron, a square, or a cube).
Number of elements of simplices
Let's begin by considering the 3rd line of Pascal's triangle, with values 1, 3, 3, 1. A 2-dimensional triangle has one 2-dimensional element (itself), three 1-dimensional elements (lines, or edges), and three 0-dimensional elements (vertices, or corners). The meaning of the final number (1) is more difficult to explain (but see below). Continuing with our example, a tetrahedron has one 3-dimensional element (itself), four 2-dimensional elements (faces), six 1-dimensional elements (edges), and four 0-dimensional elements (vertices). Adding the final 1 again, these values correspond to the 4th row of the triangle (1, 4, 6, 4, 1). Line 1 corresponds to a point, and Line 2 corresponds to a line segment (dyad). This pattern continues to arbitrarily high-dimensioned hyper-tetrahedrons (known as simplices).
To understand why this pattern exists, one must first understand that the process of building an n-simplex from an -simplex consists of simply adding a new vertex to the latter, positioned such that this new vertex lies outside of the space of the original simplex, and connecting it to all original vertices. As an example, consider the case of building a tetrahedron from a triangle, the latter of whose elements are enumerated by row 3 of Pascal's triangle: 1 face, 3 edges, and 3 vertices. To build a tetrahedron from a triangle, position a new vertex above the plane of the triangle and connect this vertex to all three vertices of the original triangle.
The number of a given dimensional element in the tetrahedron is now the sum of two numbers: first the number of that element found in the original triangle, plus the number of new elements, each of which is built upon elements of one fewer dimension from the original triangle. Thus, in the tetrahedron, the number of cells (polyhedral elements) is 0 + 1 = 1; the number of faces is 1 + 3 = 4; the number of edges is 3 + 3 = 6; the number of vertices is 3 + 1 = 4. This process of summing the number of elements of a given dimension to those of one fewer dimension to arrive at the number of the former found in the next higher simplex is equivalent to the process of summing two adjacent numbers in a row of Pascal's triangle to yield the number below. Thus, the meaning of the final number (1) in a row of Pascal's triangle becomes understood as representing the new vertex that is to be added to the simplex represented by that row to yield the next higher simplex represented by the next row. This new vertex is joined to every element in the original simplex to yield a new element of one higher dimension in the new simplex, and this is the origin of the pattern found to be identical to that seen in Pascal's triangle.
Number of elements of hypercubes
A similar pattern is observed relating to squares, as opposed to triangles. To find the pattern, one must construct an analog to Pascal's triangle, whose entries are the coefficients of (x + 2)^(row number), instead of (x + 1)^(row number). There are a couple ways to do this. The simpler is to begin with row 0 = 1 and row 1 = 1, 2. Proceed to construct the analog triangles according to the following rule: each entry is twice the entry above and to the left plus the entry above and to the right.
That is, choose a pair of numbers according to the rules of Pascal's triangle, but double the one on the left before adding. This results in the rows 1; 1, 2; 1, 4, 4; 1, 6, 12, 8; 1, 8, 24, 32, 16; and so on.
The other way of producing this triangle is to start with Pascal's triangle and multiply each entry by 2^k, where k is the position in the row of the given number. For example, the 2nd value in row 4 of Pascal's triangle is 6 (the slope of 1s corresponds to the zeroth entry in each row). To get the value that resides in the corresponding position in the analog triangle, multiply 6 by 2^2 = 4, obtaining 6 × 4 = 24. Now that the analog triangle has been constructed, the number of elements of any dimension that compose an arbitrarily dimensioned cube (called a hypercube) can be read from the table in a way analogous to Pascal's triangle. For example, the number of 2-dimensional elements in a 2-dimensional cube (a square) is one, the number of 1-dimensional elements (sides, or lines) is 4, and the number of 0-dimensional elements (points, or vertices) is 4. This matches the 2nd row of the table (1, 4, 4). A cube has 1 cube, 6 faces, 12 edges, and 8 vertices, which corresponds to the next line of the analog triangle (1, 6, 12, 8). This pattern continues indefinitely.
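A short Python sketch (not from the article) that builds this analog triangle with the "double the left neighbour before adding" rule; row 3 reproduces the element counts of a cube:

def hypercube_triangle(num_rows):
    # Entries of row n are the coefficients of (x + 2)^n.
    rows = [[1]]
    for _ in range(num_rows - 1):
        prev = rows[-1]
        rows.append([2 * a + b for a, b in zip([0] + prev, prev + [0])])
    return rows

for r in hypercube_triangle(5):
    print(r)
# prints [1], [1, 2], [1, 4, 4], [1, 6, 12, 8], [1, 8, 24, 32, 16], one row per line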
To understand why this pattern exists, first recognize that the construction of an n-cube from an -cube is done by simply duplicating the original figure and displacing it some distance (for a regular n-cube, the edge length) orthogonal to the space of the original figure, then connecting each vertex of the new figure to its corresponding vertex of the original. This initial duplication process is the reason why, to enumerate the dimensional elements of an n-cube, one must double the first of a pair of numbers in a row of this analog of Pascal's triangle before summing to yield the number below. The initial doubling thus yields the number of "original" elements to be found in the next higher n-cube and, as before, new elements are built upon those of one fewer dimension (edges upon vertices, faces upon edges, etc.). Again, the last number of a row represents the number of new vertices to be added to generate the next higher n-cube.
In this triangle, the sum of the elements of row m is equal to 3^m. Again, to use the elements of row 4 as an example: 1 + 8 + 24 + 32 + 16 = 81, which is equal to 3^4 = 81.
Counting vertices in a cube by distance
Each row of Pascal's triangle gives the number of vertices at each distance from a fixed vertex in an n-dimensional cube. For example, in three dimensions, the third row (1 3 3 1) corresponds to the usual three-dimensional cube: fixing a vertex V, there is one vertex at distance 0 from V (that is, V itself), three vertices at distance 1, three vertices at distance √2 and one vertex at distance √3 (the vertex opposite V). The second row corresponds to a square, while larger-numbered rows correspond to hypercubes in each dimension.
Fourier transform of sin(x)^(n+1)/x
As stated previously, the coefficients of (x + 1)^n are the nth row of the triangle. Now the coefficients of (x − 1)^n are the same, except that the sign alternates from +1 to −1 and back again. After suitable normalization, the same pattern of numbers occurs in the Fourier transform of sin(x)^(n+1)/x. More precisely: if n is even, take the real part of the transform, and if n is odd, take the imaginary part. Then the result is a step function, whose values (suitably normalized) are given by the nth row of the triangle with alternating signs. For example, the values of the step function that results from the real part of the Fourier transform of sin(x)^5/x
compose the 4th row of the triangle, with alternating signs. This is a generalization of the following basic result (often used in electrical engineering): the (suitably normalized) Fourier transform of sin(x)/x
is the boxcar function. The corresponding row of the triangle is row 0, which consists of just the number 1.
If n is congruent to 2 or to 3 mod 4, then the signs start with −1. In fact, the sequence of the (normalized) first terms corresponds to the powers of i, which cycle around the intersection of the axes with the unit circle in the complex plane:
Extensions
Pascal's triangle may be extended upwards, above the 1 at the apex, preserving the additive property, but there is more than one way to do so.
To higher dimensions
Pascal's triangle has higher dimensional generalizations. The three-dimensional version is known as Pascal's pyramid or Pascal's tetrahedron, while the general versions are known as Pascal's simplices.
To complex numbers
When the factorial function is defined as z! = Γ(z + 1), Pascal's triangle can be extended beyond the integers to the complex numbers ℂ, since Γ(z + 1) is meromorphic on the entire complex plane.
To arbitrary bases
Isaac Newton once observed that the first five rows of Pascal's triangle, when read as the digits of an integer, are the corresponding powers of eleven. He claimed without proof that subsequent rows also generate powers of eleven. In 1964, Robert L. Morton presented the more generalized argument that each row can be read as a radix numeral, where is the hypothetical terminal row, or limit, of the triangle, and the rows are its partial products. He proved the entries of row , when interpreted directly as a place-value numeral, correspond to the binomial expansion of . More rigorous proofs have since been developed. To better understand the principle behind this interpretation, here are some things to recall about binomials:
A radix numeral in positional notation (e.g. ) is a univariate polynomial in the variable , where the degree of the variable of the th term (starting with ) is . For example, .
A row corresponds to the binomial expansion of . The variable can be eliminated from the expansion by setting . The expansion now typifies the expanded form of a radix numeral, as demonstrated above. Thus, when the entries of the row are concatenated and read in radix they form the numerical equivalent of . If for , then the theorem holds for with odd values of yielding negative row products.
By setting the row's radix (the variable ) equal to one and ten, row becomes the product and , respectively. To illustrate, consider , which yields the row product . The numeric representation of is formed by concatenating the entries of row . The twelfth row denotes the product:
with compound digits (delimited by ":") in radix twelve. The digits from through are compound because these row entries compute to values greater than or equal to twelve. To normalize the numeral, simply carry the first compound entry's prefix, that is, remove the prefix of the coefficient from its leftmost digit up to, but excluding, its rightmost digit, and use radix-twelve arithmetic to sum the removed prefix with the entry on its immediate left, then repeat this process, proceeding leftward, until the leftmost entry is reached. In this particular example, the normalized string ends with for all . The leftmost digit is for , which is obtained by carrying the of at entry . It follows that the length of the normalized value of is equal to the row length, . The integral part of contains exactly one digit because (the number of places to the left the decimal has moved) is one less than the row length. Below is the normalized value of . Compound digits remain in the value because they are radix residues represented in radix ten:
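For rows whose entries are all smaller than the chosen radix, the observation about powers of eleven generalizes directly; a minimal Python sketch (illustrative only, with no carrying handled):

from math import comb

def row_as_numeral(n, base):
    # Read row n as a base-`base` numeral; valid only while every entry is < base.
    value = 0
    for k in range(n + 1):
        entry = comb(n, k)
        assert entry < base, "entry would need carrying in this base"
        value = value * base + entry
    return value

for n in range(5):
    assert row_as_numeral(n, 10) == 11 ** n   # 1, 11, 121, 1331, 14641
    assert row_as_numeral(n, 12) == 13 ** n   # the same rows read in radix twelve
print("rows 0-4 read in radix b equal (b + 1)^n")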
| Mathematics | Combinatorics | null |
49547 | https://en.wikipedia.org/wiki/Hyperlink | Hyperlink | In computing, a hyperlink, or simply a link, is a digital reference to data that the user can follow or be guided to by clicking or tapping. A hyperlink points to a whole document or to a specific element within a document. Hypertext is text with hyperlinks. The text that is linked from is known as anchor text. A software system that is used for viewing and creating hypertext is a hypertext system, and to create a hyperlink is to hyperlink (or simply to link). A user following hyperlinks is said to navigate or browse the hypertext.
The document containing a hyperlink is known as its source document. For example, in content from Wikipedia or Google Search, many words and terms in the text are hyperlinked to definitions of those terms. Hyperlinks are often used to implement reference mechanisms such as tables of contents, footnotes, bibliographies, indexes, and glossaries.
In some hypertext, hyperlinks can be bidirectional: they can be followed in two directions, so both ends act as anchors and as targets. More complex arrangements exist, such as many-to-many links.
The effect of following a hyperlink may vary with the hypertext system and may sometimes depend on the link itself; for instance, on the World Wide Web most hyperlinks cause the target document to replace the document being displayed, but some are marked to cause the target document to open in a new window (or, perhaps, in a new tab). Another possibility is transclusion, for which the link target is a document fragment that replaces the link anchor within the source document. Not only persons browsing the document may follow hyperlinks. These hyperlinks may also be followed automatically by programs. A program that traverses the hypertext, following each hyperlink and gathering all the retrieved documents is known as a Web spider or crawler.
Links
Inline links
An inline link displays remote content without the need for embedding the content. The remote content may be accessed with or without the user following the link.
An inline link may display a modified version of the content; for instance, instead of an image, a thumbnail, low resolution preview, cropped section, or magnified section may be shown. The full content is then usually available on demand, as is the case with print publishing software e.g., with an external link. This allows for smaller file sizes and quicker response to changes when the full linked content is not needed, as is the case when rearranging a page layout.
Anchor links
An anchor hyperlink (anchor link) is a link bound to a portion of a document, which is often called a fragment. The fragment is generally a portion of text or a heading, though not necessarily. For instance, it may also be a hot area in an image (image map in HTML), a designated, often irregular part of an image.
Fragments are marked with anchors (in any of various ways), which is why a link to a fragment is called an anchor link (that is, a link to an anchor). For example, in XML, an element can provide anchoring capability (as long as the DTD or schema defines it), and in wiki markup, {{anchor|name}} is a typical example of implementing it. In word processor apps, anchors can be inserted where desired and may be called bookmarks. In URLs, the hash character (#) precedes the name of the anchor for the fragment.
One way to define a hot area in an image is by a list of coordinates that indicate its boundaries. For example, a political map of Africa may have each country hyperlinked to further information about that country. A separate invisible hot area interface allows for swapping skins or labels within the linked hot areas without repetitive embedding of links in the various skin elements.
Text hyperlink. A hyperlink embedded in a word or a phrase, making that text clickable.
Image hyperlink. A hyperlink embedded in an image, making that image clickable.
Bookmark hyperlink. A hyperlink embedded in text or an image that takes visitors to another part of a web page.
E-mail hyperlink. A hyperlink embedded in an e-mail address that allows visitors to send an e-mail message to that address.
Fat links
A fat link (also known as a "one-to-many" link, an "extended link" or a "multi-tailed link") is a hyperlink which leads to multiple endpoints; the link is a set-valued function.
Uses in various technologies
HTML
Tim Berners-Lee saw the possibility of using hyperlinks to link any information to any other information over the Internet. Hyperlinks were therefore integral to the creation of the World Wide Web. Web pages are written in the hypertext mark-up language HTML.
This is what a hyperlink to the home page of the W3C organization could look like in HTML code:
<a href="https://www.w3.org/">W3C organization website</a>
This HTML code consists of several tags:
The hyperlink starts with an anchor opening tag <a, and includes a hyperlink reference href="https://www.w3.org/" to the URL for the page. (The URL is enclosed in quotes.)
The URL is followed by >, marking the end of the anchor opening tag.
The words that follow identify what is being linked; this is the only part of the code that is ordinarily visible on the screen when the page is rendered, but when the cursor hovers over the link, many browsers display the target URL somewhere on the screen, such as in the lower left-hand corner.
Typically these words are underlined and colored (for example, blue for a link that has not yet been visited and purple for a link already visited).
The anchor closing tag (</a>) terminates the hyperlink code.
The <a> tag can also consist of various attributes such as the "rel" attribute which specifies the relationship between the current document and linked document.
The webgraph is a graph formed from web pages as vertices and hyperlinks as directed edges.
XLink
The W3C recommendation called XLink describes hyperlinks that offer a far greater degree of functionality than those offered in HTML. These extended links can be multidirectional, linking from, within, and between XML documents. XLink can also describe simple links, which are unidirectional and therefore offer no more functionality than hyperlinks in HTML.
Permalinks
Permalinks are URLs that are intended to remain unchanged for many years into the future, yielding hyperlinks that are less susceptible to link rot. Permalinks are often rendered simply, that is, as friendly URLs, so as to be easy for people to type and remember. Permalinks are used in order to point and redirect readers to the same Web page, blog post or any online digital media.
The scientific literature is a place where link persistence is crucial to public knowledge. A 2013 study in BMC Bioinformatics analyzed 15,000 links in abstracts from Thomson Reuters' Web of Science citation index, finding that the median lifespan of Web pages was 9.3 years, and that just 62% were archived. The median lifespan of a Web page is highly variable, but its order of magnitude is usually some months.
How hyperlinks work in HTML
A link from one domain to another is said to be outbound from its source anchor and inbound to its target.
The most common destination anchor is a URL used in the World Wide Web. This can refer to a document, e.g. a webpage, or other resource, or to a position in a webpage. The latter is achieved by means of an HTML element with a "name" or "id" attribute at that position of the HTML document. The URL of the position is the URL of the webpage with a fragment identifier "#id attribute" appended.
When linking to PDF documents from an HTML page the "id attribute" can be replaced with syntax that references a page number or another element of the PDF, for example, "#page=386".
Link behavior in web browsers
A web browser usually displays a hyperlink in some distinguishing way, e.g. in a different color, font or style, or with certain symbols following to visualize link target or document types. This is also called link decoration. The behavior and style of links can be specified using the Cascading Style Sheets (CSS) language.
In a graphical user interface, the appearance of a mouse cursor may change into a hand motif to indicate a link. In most graphical web browsers, links are displayed in underlined blue text when they have not been visited, but underlined purple text when they have. When the user activates the link (e.g., by clicking on it with the mouse) the browser displays the link's target. If the target is not an HTML file, depending on the file type and on the browser and its plugins, another program may be activated to open the file.
The HTML code contains some or all of the five main characteristics of a link:
link destination ("href" pointing to a URL)
link label
link title
link target
link class or link id
It uses the HTML element "a" with the attribute "href" (HREF is an abbreviation for "Hypertext REFerence") and optionally also the attributes "title", "target", and "class" or "id":
<a href="URL" title="link title" target="link target" class="link class">link label</a>
To embed a link into a web page, blogpost, or comment, it may take this form:
<a href="https://example.com/">Example</a>
In a typical web browser, this would display as the underlined word "Example" in blue, which when clicked would take the user to the example.com website. This contributes to a clean, easy to read text or document.
By default, browsers will usually display hyperlinks as such:
An unvisited link is usually blue and underlined
A visited link is usually purple and underlined
An active link is usually red and underlined
When the cursor hovers over a link, depending on the browser and graphical user interface, some informative text about the link can be shown, popping up, not in a regular window, but in a special hover box, which disappears when the cursor is moved away (sometimes it disappears anyway after a few seconds, and reappears when the cursor is moved away and back). Mozilla Firefox, IE, Opera, and many other web browsers all show the URL. In addition, the URL is commonly shown in the status bar.
Normally, a link opens in the current frame or window, but sites that use frames and multiple windows for navigation can add a special "target" attribute to specify where the link loads. If no window exists with that name, a new window is created with the ID, which can be used to refer to the window later in the browsing session.
Creation of new windows is probably the most common use of the "target" attribute. To prevent accidental reuse of a window, the special window names "_blank" and "_new" are usually available, and always cause a new window to be created. It is especially common to see this type of link when one large website links to an external page. The intention in that case is to ensure that the person browsing is aware that there is no endorsement of the site being linked to by the site that was linked from. However, the attribute is sometimes overused and can sometimes cause many windows to be created even while browsing a single site.
Another special page name is "_top", which causes any frames in the current window to be cleared away so that browsing can continue in the full window.
History
The term "link" was coined in 1965 (or possibly 1964) by Ted Nelson at the start of Project Xanadu. Nelson had been inspired by "As We May Think", a popular 1945 essay by Vannevar Bush. In the essay, Bush described a microfilm-based machine (the Memex) in which one could link any two pages of information into a "trail" of related information, and then scroll back and forth among pages in a trail as if they were on a single microfilm reel.
In a series of books and articles published from 1964 through 1980, Nelson transposed Bush's concept of automated cross-referencing into the computer context, made it applicable to specific text strings rather than whole pages, generalized it from a local desk-sized machine to a theoretical proprietary worldwide computer network, and advocated the creation of such a network. Though Nelson's Xanadu Corporation was eventually funded by Autodesk in the 1980s, it never created this proprietary public-access network. Meanwhile, working independently, a team led by Douglas Engelbart (with Jeff Rulifson as chief programmer) was the first to implement the hyperlink concept for scrolling within a single document (1966), and soon after for connecting between paragraphs within separate documents (1968), with NLS. Ben Shneiderman working with graduate student Dan Ostroff designed and implemented the highlighted link in the HyperTIES system in 1983. HyperTIES was used to produce the world's first electronic journal, the July 1988 Communications of the ACM, which was cited as the source for the link concept in Tim Berners-Lee's Spring 1989 manifesto for the Web. In 1988, Ben Shneiderman and Greg Kearsley used HyperTIES to publish "Hypertext Hands-On!", the world's first electronic book.
Released in 1987 for the Apple Macintosh, the database program HyperCard allowed for hyperlinking between various pages within a document, as well as to other documents and separate applications on the same computer. In 1990, Windows Help, which was introduced with Microsoft Windows 3.0, had widespread use of hyperlinks to link different pages in a single help file together; in addition, it had a visually different kind of hyperlink that caused a popup help message to appear when clicked, usually to give definitions of terms introduced on the help page. The first widely used open protocol that included hyperlinks from any Internet site to any other Internet site was the Gopher protocol from 1991. It was soon eclipsed by HTML after the 1993 release of the Mosaic browser (which could handle Gopher links as well as HTML links). HTML's advantage was the ability to mix graphics, text, and hyperlinks, unlike Gopher, which just had menu-structured text and hyperlinks.
Legal issues
While hyperlinking among webpages is an intrinsic feature of the web, some websites object to being linked by other websites; some have claimed that linking to them is not allowed without permission.
Contentious in particular are deep links, which do not point to a site's home page or other entry point designated by the site owner, but to content elsewhere, allowing the user to bypass the site's own designated flow, and inline links, which incorporate the content in question into the pages of the linking site, making it seem part of the linking site's own content unless an explicit attribution is added.
In certain jurisdictions, it is or has been held that hyperlinks are not merely references or citations, but are devices for copying web pages. In the Netherlands, Karin Spaink was initially convicted in this way of copyright infringement by linking, although this ruling was overturned in 2003. The courts that advocate this view see the mere publication of a hyperlink that connects to illegal material to be an illegal act in itself, regardless of whether referencing illegal material is illegal. In 2004, Josephine Ho was acquitted of 'hyperlinks that corrupt traditional values' in Taiwan.
In 2000, British Telecom sued Prodigy, claiming that Prodigy infringed its patent on web hyperlinks. After litigation, a court found for Prodigy, ruling that British Telecom's patent did not cover web hyperlinks.
In United States jurisprudence, there is a distinction between the mere act of linking to someone else's website, and linking to content that is illegal (e.g., gambling illegal in the US) or infringing (e.g., illegal MP3 copies). Several courts have found that merely linking to someone else's website, even if by bypassing commercial advertising, is not copyright or trademark infringement, regardless of how much someone else might object. Linking to illegal or infringing content can be sufficiently problematic to give rise to legal liability. For a summary of the current status of US copyright law as to hyperlinking, see the discussion regarding the Arriba Soft and Perfect 10 cases.
Somewhat controversially, Vuestar Technologies has tried to enforce patents applied for by its owner, Ronald Neville Langford, around the world relating to search techniques using hyperlinked images to other websites or web pages.
| Technology | Internet | null |
49555 | https://en.wikipedia.org/wiki/Batholith | Batholith | A batholith is a large mass of intrusive igneous rock (also called plutonic rock), larger than 100 square kilometres (40 sq mi) in area, that forms from cooled magma deep in the Earth's crust. Batholiths are almost always made mostly of felsic or intermediate rock types, such as granite, quartz monzonite, or diorite (see also granite dome).
Formation
Although they may appear uniform, batholiths are in fact structures with complex histories and compositions. They are composed of multiple masses, or plutons, bodies of igneous rock of irregular dimensions (typically at least several kilometers) that can be distinguished from adjacent igneous rock by some combination of criteria including age, composition, texture, or mappable structures. Individual plutons are solidified from magma that traveled toward the surface from a zone of partial melting near the base of the Earth's crust.
Traditionally, these plutons have been considered to form by ascent of relatively buoyant magma in large masses called plutonic diapirs. Because the diapirs are liquefied and very hot, they tend to rise through the surrounding native country rock, pushing it aside and partially melting it. Most diapirs do not reach the surface to form volcanoes, but instead they slow down, cool, and usually solidify 5 to 30 kilometers underground as plutons (hence the use of the word pluton, in reference to Pluto, the Roman god of the underworld). An alternate view is that plutons are formed by aggregation of smaller volumes of magma that ascend as dikes.
A batholith is formed when many plutons converge to form a huge expanse of granitic rock. Some batholiths are mammoth, paralleling past and present subduction zones and other heat sources for hundreds of kilometers in continental crust. One such batholith is the Sierra Nevada Batholith, which is a continuous granitic formation that makes up much of the Sierra Nevada in California. An even larger batholith, the Coast Plutonic Complex, is found predominantly in the Coast Mountains of western Canada; it extends for 1,800 kilometers and reaches into southeastern Alaska.
Surface expression and erosion
A batholith is an exposed area of (mostly) continuous plutonic rock that covers an area larger than 100 square kilometers (40 square miles). Areas smaller than 100 square kilometers are called stocks. However, the majority of batholiths visible at the surface (via outcroppings) have areas far greater than 100 square kilometers. These areas are exposed to the surface through the process of erosion accelerated by continental uplift acting over many tens of millions to hundreds of millions of years. This process has removed several kilometers of overlying rock in many areas, exposing the once deeply buried batholiths.
Batholiths exposed at the surface are subjected to huge pressure differences between their former location deep in the earth and their new location at or near the surface. As a result, their crystal structure expands slightly over time. This manifests itself by a form of mass wasting called exfoliation. This form of weathering causes convex and relatively thin sheets of rock to slough off the exposed surfaces of batholiths (a process accelerated by frost wedging). The result is fairly clean and rounded rock faces. A well-known result of this process is Half Dome in Yosemite Valley.
Examples
Africa
Aswan Granite Batholith
Cape Coast Batholith, Ghana
Heerenveen Batholith, South Africa
Paarl Rock, South Africa
Darling Batholith, South Africa
Hook granite massif, Zambia
Mubende Batholith, Uganda
Antarctica
Antarctic Peninsula Batholith
Queen Maud Batholith
Asia
Angara-Vitim batholith, Siberia
Bhongir Fort Batholith, Telangana, India
Chibagalakh batholith, Siberia
Mount Abu, India
Gangdese batholith, Himalaya
Trans-Himalayan Batholith, Himalaya
Kalba-Narym batholith, Kazakhstan
Karakorum Batholith, Himalaya
Tak batholith, Thailand
Tien Shan batholith, Central Asia
Ranchi batholith, India
Europe
Bindal Batholith, Norway
Cornubian batholith, England
Corsica-Sardinia Batholith
Donegal batholith, Ireland
Leinster Batholith, Ireland
Mancellian batholith, France
North Pennine Batholith, England
Ljusdal Batholith, Sweden
Mt-Louis-Andorra Batholith
Riga Batholith, Latvia
Salmi Batholith, Republic of Karelia, Russia
Sunnhordaland Batholith, Norway
Transscandinavian Igneous Belt, Sweden and Norway
Revsund Massif
Rätan Batholith
Småland–Värmland Belt
Vitosha - Plana, Sofia, Bulgaria
North America
Bald Rock Batholith
Enchanted Rock, Texas
Boulder Batholith
British Virgin Islands
Chambers-Strathy Batholith
Chilliwack batholith
Golden Horn Batholith
Idaho Batholith
Ilimaussaq Batholith, Greenland
Kenosha Batholith
Mount Stuart Batholith, Washington
Wallowa Batholith, Oregon
Peninsular Ranges, Baja and Southern California
Pike's Peak Granite Batholith
Ruby Mountains
Rio Verde Batholith, Mexico
San Lorenzo Batholith, Puerto Rico
Sierra Nevada Batholith
South Mountain Batholith, Nova Scotia
Town Mountain Granite batholith, Texas
Wyoming batholith
Oceania
Cullen Batholith, Australia
Kosciuszko Batholith, Australia
Moruya Batholith, Australia
Scottsdale Batholith, Australia
Median Batholith, New Zealand
New England Batholith, Australia
South America
Achala Batholith, Argentina
Antioquia Batholith, Colombia
Guanambi Batholith, Bahia, Brazil
Parguaza rapakivi granite Batholith, Venezuela and Colombia
Cerro Aspero Batholith, Argentina
Coastal Batholith of Peru
Colangüil Batholith, Argentina
Cordillera Blanca Batholith, Peru
Vicuña Mackenna Batholith, Chile
Elqui-Limarí Batholith, Chile and Argentina
Futrono-Riñihue Batholith, Chile
Illescas Batholith, Uruguay
Coastal Batholith of central Chile
Panguipulli Batholith, Chile
Patagonian Batholith, Chile and Argentina
North Patagonian Batholith
South Patagonian Batholith
| Physical sciences | Igneous rocks | Earth science |
49557 | https://en.wikipedia.org/wiki/Castle | Castle | A castle is a type of fortified structure built during the Middle Ages predominantly by the nobility or royalty and by military orders. Scholars usually consider a castle to be the private fortified residence of a lord or noble. This is distinct from a mansion, palace, or villa, whose main purpose was pleasance rather than defence and which, though they may be fortified, are not primarily fortresses. Use of the term has varied over time and, sometimes, has also been applied to structures such as hill forts and 19th- and 20th-century homes built to resemble castles. Over the Middle Ages, when genuine castles were built, they took on a great many forms with many different features, although some, such as curtain walls, arrowslits, and portcullises, were commonplace.
European-style castles originated in the 9th and 10th centuries after the fall of the Carolingian Empire, which resulted in its territory being divided among individual lords and princes. These nobles built castles to control the area immediately surrounding them and they were both offensive and defensive structures: they provided a base from which raids could be launched as well as offering protection from enemies. Although their military origins are often emphasised in castle studies, the structures also served as centres of administration and symbols of power. Urban castles were used to control the local populace and important travel routes, and rural castles were often situated near features that were integral to life in the community, such as mills, fertile land, or a water source.
Many northern European castles were originally built from earth and timber but had their defences replaced later by stone. Early castles often exploited natural defences, lacking features such as towers and arrowslits and relying on a central keep. In the late 12th and early 13th centuries, a scientific approach to castle defence emerged. This led to the proliferation of towers, with an emphasis on flanking fire. Many new castles were polygonal or relied on concentric defence – several stages of defence within each other that could all function at the same time to maximise the castle's firepower. These changes in defence have been attributed to a mixture of castle technology from the Crusades, such as concentric fortification, and inspiration from earlier defences, such as Roman forts. Not all the elements of castle architecture were military in nature, so that devices such as moats evolved from their original purpose of defence into symbols of power. Some grand castles had long winding approaches intended to impress and dominate their landscape.
Although gunpowder was introduced to Europe in the 14th century, it did not significantly affect castle building until the 15th century, when artillery became powerful enough to break through stone walls. While castles continued to be built well into the 16th century, new techniques to deal with improved cannon fire made them uncomfortable and undesirable places to live. As a result, true castles went into a decline and were replaced by artillery star forts with no role in civil administration, and château or country houses that were indefensible. From the 18th century onwards, there was a renewed interest in castles with the construction of mock castles, part of a Romantic revival of Gothic architecture, but they had no military purpose.
Definition
Etymology
The word castle is derived from the Latin word castellum, which is a diminutive of the word castrum, meaning "fortified place". The Old English castel, Occitan castel or chastel, French château, Spanish castillo, Portuguese castelo, Italian castello, and a number of words in other languages also derive from castellum. The word castle was introduced into English shortly before the Norman Conquest of 1066 to denote this type of building, which was then new to England.
Defining characteristics
In its simplest terms, the definition of a castle accepted amongst academics is "a private fortified residence". This contrasts with earlier fortifications, such as Anglo-Saxon burhs and walled cities such as Constantinople and Antioch in the Middle East; castles were not communal defences but were built and owned by the local feudal lords, either for themselves or for their monarch. Feudalism was the link between a lord and his vassal where, in return for military service and the expectation of loyalty, the lord would grant the vassal land. In the late 20th century, there was a trend to refine the definition of a castle by including the criterion of feudal ownership, thus tying castles to the medieval period; however, this does not necessarily reflect the terminology used in the medieval period. During the First Crusade (1096–1099), the Frankish armies encountered walled settlements and forts that they indiscriminately referred to as castles, but which would not be considered as such under the modern definition.
Castles served a range of purposes, the most important of which were military, administrative, and domestic. As well as defensive structures, castles were also offensive tools which could be used as a base of operations in enemy territory. Castles were established by Norman invaders of England for both defensive purposes and to pacify the country's inhabitants. As William the Conqueror advanced through England, he fortified key positions to secure the land he had taken. Between 1066 and 1087, he established 36 castles such as Warwick Castle, which he used to guard against rebellion in the English Midlands.
Towards the end of the Middle Ages, castles tended to lose their military significance due to the advent of powerful cannons and permanent artillery fortifications; as a result, castles became more important as residences and statements of power. A castle could act as a stronghold and prison but was also a place where a knight or lord could entertain his peers. Over time the aesthetics of the design became more important, as the castle's appearance and size began to reflect the prestige and power of its occupant. Comfortable homes were often fashioned within their fortified walls. Although castles still provided protection from low levels of violence in later periods, eventually they were succeeded by country houses as high-status residences.
Terminology
Castle is sometimes used as a catch-all term for all kinds of fortifications, and as a result has been misapplied in the technical sense. An example of this is Maiden Castle which, despite the name, is an Iron Age hill fort which had a very different origin and purpose.
Although castle has not become a generic term for a manor house (like château in French and Schloss in German), many manor houses contain castle in their name while having few if any of the architectural characteristics, usually as their owners liked to maintain a link to the past and felt the term castle was a masculine expression of their power. In scholarship the castle, as defined above, is generally accepted as a coherent concept, originating in Europe and later spreading to parts of the Middle East, where they were introduced by European Crusaders. This coherent group shared a common origin, dealt with a particular mode of warfare, and exchanged influences.
In different areas of the world, analogous structures shared features of fortification and other defining characteristics associated with the concept of a castle, though they originated in different periods and circumstances and experienced differing evolutions and influences. For example, shiro in Japan, described as castles by historian Stephen Turnbull, underwent "a completely different developmental history, were built in a completely different way and were designed to withstand attacks of a completely different nature". While European castles built from the late 12th and early 13th century onwards were generally stone, shiro were predominantly timber buildings into the 16th century.
By the 16th century, when Japanese and European cultures met, fortification in Europe had moved beyond castles and relied on innovations such as the Italian trace italienne and star forts.
Common features
Motte
A motte was an earthen mound with a flat top. It was often artificial, although sometimes it incorporated a pre-existing feature of the landscape. The excavation of earth to make the mound left a ditch around the motte, called a moat (which could be either wet or dry). Although the motte is commonly associated with the bailey to form a motte-and-bailey castle, this was not always the case and there are instances where a motte existed on its own.
"Motte" refers to the mound alone, but it was often surmounted by a fortified structure, such as a keep, and the flat top would be surrounded by a palisade. It was common for the motte to be reached over a flying bridge (a bridge over the ditch from the counterscarp of the ditch to the edge of the top of the mound), as shown in the Bayeux Tapestry's depiction of Château de Dinan. Sometimes a motte covered an older castle or hall, whose rooms became underground storage areas and prisons beneath a new keep.
Bailey and enceinte
A bailey, also called a ward, was a fortified enclosure. It was a common feature of castles, and most had at least one. The keep on top of the motte was the domicile of the lord in charge of the castle and a bastion of last defence, while the bailey was the home of the rest of the lord's household and gave them protection. The barracks for the garrison, stables, workshops, and storage facilities were often found in the bailey. Water was supplied by a well or cistern. Over time the focus of high status accommodation shifted from the keep to the bailey; this resulted in the creation of another bailey that separated the high status buildings – such as the lord's chambers and the chapel – from the everyday structures such as the workshops and barracks.
From the late 12th century there was a trend for knights to move out of the small houses they had previously occupied within the bailey to live in fortified houses in the countryside. Although often associated with the motte-and-bailey type of castle, baileys could also be found as independent defensive structures. These simple fortifications were called ringworks. The enceinte was the castle's main defensive enclosure, and the terms "bailey" and "enceinte" are linked. A castle could have several baileys but only one enceinte. Castles with no keep, which relied on their outer defences for protection, are sometimes called enceinte castles; these were the earliest form of castles, before the keep was introduced in the 10th century.
Keep
A keep was a great tower or other building that served as the main living quarters of the castle and usually the most strongly defended point of a castle before the introduction of concentric defence. "Keep" was not a term used in the medieval period – the term was applied from the 16th century onwards – instead "donjon" was used to refer to great towers, or turris in Latin. In motte-and-bailey castles, the keep was on top of the motte. "Dungeon" is a corrupted form of "donjon" and means a dark, unwelcoming prison. Although often the strongest part of a castle and a last place of refuge if the outer defences fell, the keep was not left empty in case of attack but was used as a residence by the lord who owned the castle, or his guests or representatives.
At first, this was usual only in England, when after the Norman Conquest of 1066 the "conquerors lived for a long time in a constant state of alert"; elsewhere the lord's wife presided over a separate residence (domus, aula or mansio in Latin) close to the keep, and the donjon was a barracks and headquarters. Gradually, the two functions merged into the same building, and the highest residential storeys had large windows; as a result for many structures, it is difficult to find an appropriate term. The massive internal spaces seen in many surviving donjons can be misleading; they would have been divided into several rooms by light partitions, as in a modern office building. Even in some large castles the great hall was separated only by a partition from the lord's chamber, his bedroom and to some extent his office.
Curtain wall
Curtain walls were defensive walls enclosing a bailey. They had to be high enough to make scaling the walls with ladders difficult and thick enough to withstand bombardment from siege engines which, from the 15th century onwards, included gunpowder artillery. A typical wall could be thick and tall, although sizes varied greatly between castles. To protect them from undermining, curtain walls were sometimes given a stone skirt around their bases. Walkways along the tops of the curtain walls allowed defenders to rain missiles on enemies below, and battlements gave them further protection. Curtain walls were studded with towers to allow enfilading fire along the wall. Arrowslits in the walls did not become common in Europe until the 13th century, for fear that they might compromise the wall's strength.
Gatehouse
The entrance was often the weakest part in a circuit of defences. To overcome this, the gatehouse was developed, allowing those inside the castle to control the flow of traffic. In earth and timber castles, the gateway was usually the first feature to be rebuilt in stone. The front of the gateway was a blind spot and to overcome this, projecting towers were added on each side of the gate in a style similar to that developed by the Romans. The gatehouse contained a series of defences to make a direct assault more difficult than battering down a simple gate. Typically, there were one or more portcullises – a wooden grille reinforced with metal to block a passage – and arrowslits to allow defenders to harry the enemy. The passage through the gatehouse was lengthened to increase the amount of time an assailant had to spend under fire in a confined space and unable to retaliate.
It is a popular myth that murder holes – openings in the ceiling of the gateway passage – were used to pour boiling oil or molten lead on attackers; the price of oil and lead and the distance of the gatehouse from fires meant that this was impractical. This method was, however, a common practice in Middle Eastern and Mediterranean castles and fortifications, where such resources were abundant. They were most likely used to drop objects on attackers, or to allow water to be poured on fires to extinguish them. Provision was made in the upper storey of the gatehouse for accommodation so the gate was never left undefended, although this arrangement later evolved to become more comfortable at the expense of defence.
During the 13th and 14th centuries the barbican was developed. This consisted of a rampart, ditch, and possibly a tower, in front of the gatehouse which could be used to further protect the entrance. The purpose of a barbican was not just to provide another line of defence but also to dictate the only approach to the gate.
Moat
A moat is a ditch surrounding a castle – or dividing one part of a castle from another – and could be either dry or filled with water. It often served a defensive purpose, preventing siege towers from reaching the walls and making mining harder, but it could also be ornamental. Water moats were found in low-lying areas and were usually crossed by a drawbridge, although these were often replaced by stone bridges. The site of the 13th-century Caerphilly Castle in Wales covers a large area, and the water defences, created by flooding the valley to the south of the castle, are some of the largest in Western Europe.
Battlements
Battlements were most often found surmounting curtain walls and the tops of gatehouses, and comprised several elements: crenellations, hoardings, machicolations, and loopholes. Crenellation is the collective name for alternating crenels and merlons: gaps and solid blocks on top of a wall. Hoardings were wooden constructs that projected beyond the wall, allowing defenders to shoot at, or drop objects on, attackers at the base of the wall without having to lean perilously over the crenellations, thereby exposing themselves to retaliatory fire. Machicolations were stone projections on top of a wall with openings that allowed objects to be dropped on an enemy at the base of the wall in a similar fashion to hoardings.
Arrowslits
Arrowslits, also commonly called loopholes, were narrow vertical openings in defensive walls which allowed arrows or crossbow bolts to be fired on attackers. The narrow slits were intended to protect the defender by providing a very small target, but the size of the opening could also impede the defender if it was too small. A smaller horizontal opening could be added to give an archer a better view for aiming. Sometimes a sally port was included; this could allow the garrison to leave the castle and engage besieging forces. It was usual for the latrines to empty down the external walls of a castle and into the surrounding ditch.
Postern
A postern is a secondary door or gate in a concealed location, usually in a fortification such as a city wall.
Great hall
The great hall was a large, decorated room where a lord received his guests. The hall represented the prestige, authority, and richness of the lord. Events such as feasts, banquets, social or ceremonial gatherings, meetings of the military council, and judicial trials were held in the great hall. Sometimes the great hall existed as a separate building; in that case, it was called a hall-house.
History
Antecedents
Historian Charles Coulson states that the accumulation of wealth and resources, such as food, led to the need for defensive structures. The earliest fortifications originated in the Fertile Crescent, the Indus Valley, Europe, Egypt, and China where settlements were protected by large walls. In Northern Europe, hill forts were first developed in the Bronze Age, which then proliferated across Europe in the Iron Age. Hillforts in Britain typically used earthworks rather than stone as a building material.
Many earthworks survive today, along with evidence of palisades to accompany the ditches. In central and western Europe, oppida emerged in the 2nd century BC; these were densely inhabited fortified settlements, such as the oppidum of Manching. Some oppida walls were built on a massive scale, utilising stone, wood, iron and earth in their construction. The Romans encountered fortified settlements such as hill forts and oppida when expanding their territory into northern Europe. Their defences were often effective, and were only overcome by the extensive use of siege engines and other siege warfare techniques, such as at the Battle of Alesia. The Romans' own fortifications (castra) varied from simple temporary earthworks thrown up by armies on the move, to elaborate permanent stone constructions, notably the milecastles of Hadrian's Wall. Roman forts were generally rectangular with rounded corners – a "playing-card shape".
In the medieval period, castles were influenced by earlier forms of elite architecture, contributing to regional variations. Importantly, while castles had military aspects, they contained a recognisable household structure within their walls, reflecting the multi-functional use of these buildings.
Origins (9th and 10th centuries)
The subject of the emergence of castles in Europe is a complex matter which has led to considerable debate. Discussions have typically attributed the rise of the castle to a reaction to attacks by Magyars, Muslims, and Vikings and a need for private defence. The breakdown of the Carolingian Empire led to the privatisation of government, and local lords assumed responsibility for the economy and justice. However, while castles proliferated in the 9th and 10th centuries, the link between periods of insecurity and the building of fortifications is not always straightforward. Some high concentrations of castles occur in secure places, while some border regions had relatively few castles.
It is likely that the castle evolved from the practice of fortifying a lordly home. The greatest threat to a lord's home or hall was fire as it was usually a wooden structure. To protect against this, and keep other threats at bay, there were several courses of action available: create encircling earthworks to keep an enemy at a distance; build the hall in stone; or raise it up on an artificial mound, known as a motte, to present an obstacle to attackers. While the concept of ditches, ramparts, and stone walls as defensive measures is ancient, raising a motte is a medieval innovation.
A bank and ditch enclosure was a simple form of defence, and when found without an associated motte is called a ringwork; when the site was in use for a prolonged period, it was sometimes replaced by a more complex structure or enhanced by the addition of a stone curtain wall. Building the hall in stone did not necessarily make it immune to fire as it still had windows and a wooden door. This led to the elevation of windows to the second storey – to make it harder to throw objects in – and to the moving of the entrance from ground level to the second storey. These features are seen in many surviving castle keeps, which were a more sophisticated version of halls. Castles were not just defensive sites but also enhanced a lord's control over his lands. They allowed the garrison to control the surrounding area, and formed a centre of administration, providing the lord with a place to hold court.
Building a castle sometimes required the permission of the king or other high authority. In 864 the King of West Francia, Charles the Bald, prohibited the construction of castella without his permission and ordered them all to be destroyed. This is perhaps the earliest reference to castles, though military historian R. Allen Brown points out that the word castella may have applied to any fortification at the time.
In some countries the monarch had little control over lords, or required the construction of new castles to aid in securing the land, and so was unconcerned about granting permission – as was the case in England in the aftermath of the Norman Conquest and in the Holy Land during the Crusades. Switzerland is an extreme case of there being no state control over who built castles, and as a result there were 4,000 in the country. There are very few castles dated with certainty from the mid-9th century. Converted into a donjon around 950, Château de Doué-la-Fontaine in France is the oldest standing castle in Europe.
11th century
From 1000 onwards, references to castles in texts such as charters increased greatly. Historians have interpreted this as evidence of a sudden increase in the number of castles in Europe around this time; this has been supported by archaeological investigation which has dated the construction of castle sites through the examination of ceramics. The increase in Italy began in the 950s, with numbers of castles increasing by a factor of three to five every 50 years, whereas in other parts of Europe such as France and Spain the growth was slower. In 950, Provence was home to 12 castles; by 1000, this figure had risen to 30, and by 1030 it was over 100. Although the increase was slower in Spain, the 1020s saw a particular growth in the number of castles in the region, particularly in contested border areas between Christian and Muslim lands.
Despite the common period in which castles rose to prominence in Europe, their form and design varied from region to region. In the early 11th century, the motte and keep – an artificial mound with a palisade and tower on top – was the most common form of castle in Europe, everywhere except Scandinavia. While Britain, France, and Italy shared a tradition of timber construction that was continued in castle architecture, Spain more commonly used stone or mud-brick as the main building material.
The Muslim invasion of the Iberian Peninsula in the 8th century introduced a style of building developed in North Africa reliant on tapial, pebbles in cement, where timber was in short supply. Although stone construction would later become common elsewhere, from the 11th century onwards it was the primary building material for Christian castles in Spain, while at the same time timber was still the dominant building material in north-west Europe.
Historians have interpreted the widespread presence of castles across Europe in the 11th and 12th centuries as evidence that warfare was common, and usually between local lords. Castles were introduced into England shortly before the Norman Conquest in 1066. Before the 12th century castles were as uncommon in Denmark as they had been in England before the Norman Conquest. The introduction of castles to Denmark was a reaction to attacks from Wendish pirates, and they were usually intended as coastal defences. The motte and bailey remained the dominant form of castle in England, Wales, and Ireland well into the 12th century. At the same time, castle architecture in mainland Europe became more sophisticated.
The donjon was at the centre of this change in castle architecture in the 12th century. Central towers proliferated, and typically had a square plan with thick walls. Their decoration emulated Romanesque architecture, and sometimes incorporated double windows similar to those found in church bell towers. Donjons, which were the residence of the lord of the castle, evolved to become more spacious. The design emphasis of donjons changed to reflect a shift from functional to decorative requirements, imposing a symbol of lordly power upon the landscape. This sometimes led to compromising defence for the sake of display.
Innovation and scientific design (12th century)
| Technology | Fortification | null |
49569 | https://en.wikipedia.org/wiki/Bayes%27%20theorem | Bayes' theorem | Bayes' theorem (alternatively Bayes' law or Bayes' rule, after Thomas Bayes) gives a mathematical rule for inverting conditional probabilities, allowing one to find the probability of a cause given its effect. For example, if the risk of developing health problems is known to increase with age, Bayes' theorem allows the risk to someone of a known age to be assessed more accurately by conditioning it relative to their age, rather than assuming that the person is typical of the population as a whole. Based on Bayes' law, both the prevalence of a disease in a given population and the error rate of an infectious disease test must be taken into account to evaluate the meaning of a positive test result and avoid the base-rate fallacy.
One of Bayes' theorem's many applications is Bayesian inference, an approach to statistical inference, where it is used to invert the probability of observations given a model configuration (i.e., the likelihood function) to obtain the probability of the model configuration given the observations (i.e., the posterior probability).
History
Bayes' theorem is named after Thomas Bayes, a minister, statistician, and philosopher. Bayes used conditional probability to provide an algorithm (his Proposition 9) that uses evidence to calculate limits on an unknown parameter. His work was published in 1763 as An Essay Towards Solving a Problem in the Doctrine of Chances. Bayes studied how to compute a distribution for the probability parameter of a binomial distribution (in modern terminology). After Bayes's death, his family gave his papers to a friend, the minister, philosopher, and mathematician Richard Price.
Price significantly edited the unpublished manuscript for two years before sending it to a friend who read it aloud at the Royal Society on 23 December 1763. Price edited Bayes's major work "An Essay Towards Solving a Problem in the Doctrine of Chances" (1763), which appeared in Philosophical Transactions, and contains Bayes' theorem. Price wrote an introduction to the paper that provides some of the philosophical basis of Bayesian statistics and chose one of the two solutions Bayes offered. In 1765, Price was elected a Fellow of the Royal Society in recognition of his work on Bayes's legacy. On 27 April, a letter sent to his friend Benjamin Franklin was read out at the Royal Society, and later published, in which Price applies this work to population and computing 'life-annuities'.
Independently of Bayes, Pierre-Simon Laplace used conditional probability to formulate the relation of an updated posterior probability from a prior probability, given evidence. He reproduced and extended Bayes's results in 1774, apparently unaware of Bayes's work, and summarized them in Théorie analytique des probabilités (1812). The Bayesian interpretation of probability was developed mainly by Laplace.
About 200 years later, Sir Harold Jeffreys put Bayes's algorithm and Laplace's formulation on an axiomatic basis, writing in a 1973 book that Bayes' theorem "is to the theory of probability what the Pythagorean theorem is to geometry".
Stephen Stigler used a Bayesian argument to conclude that Bayes' theorem was discovered by Nicholas Saunderson, a blind English mathematician, some time before Bayes, but that is disputed. Martyn Hooper and Sharon McGrayne have argued that Richard Price's contribution was substantial.
Statement of theorem
Bayes' theorem is stated mathematically as the following equation:

P(A | B) = P(B | A) P(A) / P(B)

where A and B are events and P(B) ≠ 0.

P(A | B) is a conditional probability: the probability of event A occurring given that B is true. It is also called the posterior probability of A given B.

P(B | A) is also a conditional probability: the probability of event B occurring given that A is true. It can also be interpreted as the likelihood of A given a fixed B, because P(B | A) = L(A | B).

P(A) and P(B) are the probabilities of observing A and B respectively without any given conditions; they are known as the prior probability and marginal probability.
Proof
For events
Bayes' theorem may be derived from the definition of conditional probability:

P(A | B) = P(A ∩ B) / P(B), if P(B) ≠ 0,

where P(A ∩ B) is the probability of both A and B being true. Similarly,

P(B | A) = P(A ∩ B) / P(A), if P(A) ≠ 0.

Solving for P(A ∩ B) and substituting into the above expression for P(A | B) yields Bayes' theorem:

P(A | B) = P(B | A) P(A) / P(B).
For continuous random variables
For two continuous random variables X and Y, Bayes' theorem may be analogously derived from the definition of conditional density:

f(x | y) = f(x, y) / f(y)  and  f(y | x) = f(x, y) / f(x).

Therefore,

f(x | y) = f(y | x) f(x) / f(y).
General case
Let be the conditional distribution of given and let be the distribution of . The joint distribution is then . The conditional distribution of given is then determined by
Existence and uniqueness of the needed conditional expectation is a consequence of the Radon–Nikodym theorem. This was formulated by Kolmogorov in 1933. Kolmogorov underlines the importance of conditional probability, writing, "I wish to call attention to ... the theory of conditional probabilities and conditional expectations". Bayes' theorem determines the posterior distribution from the prior distribution. Uniqueness requires continuity assumptions. Bayes' theorem can be generalized to include improper prior distributions such as the uniform distribution on the real line. Modern Markov chain Monte Carlo methods have boosted the importance of Bayes' theorem, including in cases with improper priors.
Examples
Recreational mathematics
Bayes' rule and computing conditional probabilities provide a method to solve a number of popular puzzles, such as the Three Prisoners problem, the Monty Hall problem, the Two Child problem, and the Two Envelopes problem.
Drug testing
Suppose a particular test for whether someone has been using cannabis is 90% sensitive, meaning the true positive rate (TPR) = 0.90. Therefore, it leads to 90% true positive results (correct identification of drug use) for cannabis users.
The test is also 80% specific, meaning true negative rate (TNR) = 0.80. Therefore, the test correctly identifies 80% of non-use for non-users, but also generates 20% false positives, or false positive rate (FPR) = 0.20, for non-users.
Assuming 0.05 prevalence, meaning 5% of people use cannabis, what is the probability that a random person who tests positive is really a cannabis user?
The Positive predictive value (PPV) of a test is the proportion of persons who are actually positive out of all those testing positive, and can be calculated from a sample as:
PPV = True positive / Tested positive
If sensitivity, specificity, and prevalence are known, PPV can be calculated using Bayes' theorem. Let P(User | Positive) mean "the probability that someone is a cannabis user given that they test positive", which is what PPV means. We can write:

P(User | Positive) = P(Positive | User) P(User) / P(Positive) = (0.90 × 0.05) / (0.90 × 0.05 + 0.20 × 0.95) = 0.045 / 0.235 ≈ 19%
The denominator is a direct application of the Law of Total Probability. In this case, it says that the probability that someone tests positive is the probability that a user tests positive times the probability of being a user, plus the probability that a non-user tests positive, times the probability of being a non-user. This is true because the classifications user and non-user form a partition of a set, namely the set of people who take the drug test. This combined with the definition of conditional probability results in the above statement.
In other words, if someone tests positive, the probability that they are a cannabis user is only 19%—because in this group, only 5% of people are users, and most positives are false positives coming from the remaining 95%.
If 1,000 people were tested:
950 are non-users and 190 of them give false positive (0.20 × 950)
50 of them are users and 45 of them give true positive (0.90 × 50)
The 1,000 people thus have 235 positive tests, of which only 45 are genuine, about 19%.
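The arithmetic above can be reproduced with a short Python sketch (not part of the original example; the function and variable names are illustrative only):

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(user | positive) via Bayes' theorem.

    The denominator P(positive) is expanded with the law of total
    probability over the partition {user, non-user}.
    """
    p_pos_given_user = sensitivity            # true positive rate
    p_pos_given_nonuser = 1.0 - specificity   # false positive rate
    p_pos = (p_pos_given_user * prevalence
             + p_pos_given_nonuser * (1.0 - prevalence))
    return p_pos_given_user * prevalence / p_pos

ppv = positive_predictive_value(sensitivity=0.90, specificity=0.80, prevalence=0.05)
print(f"P(user | positive) = {ppv:.3f}")   # ≈ 0.191, i.e. about 19%
```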
Sensitivity or specificity
The importance of specificity can be seen by showing that even if sensitivity is raised to 100% and specificity remains at 80%, the probability that someone who tests positive is a cannabis user rises only from 19% to 21%, but if the sensitivity is held at 90% and the specificity is increased to 95%, the probability rises to 49%.
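These two variants can be checked with the same formula; the following self-contained snippet (illustrative only) recomputes it for the altered sensitivity and specificity values:

```python
def ppv(sens, spec, prev):
    # Bayes' theorem: P(user | +) = sens*prev / (sens*prev + (1 - spec)*(1 - prev))
    return sens * prev / (sens * prev + (1.0 - spec) * (1.0 - prev))

print(f"90% sensitivity, 80% specificity:  {ppv(0.90, 0.80, 0.05):.2f}")  # ≈ 0.19
print(f"100% sensitivity, 80% specificity: {ppv(1.00, 0.80, 0.05):.2f}")  # ≈ 0.21
print(f"90% sensitivity, 95% specificity:  {ppv(0.90, 0.95, 0.05):.2f}")  # ≈ 0.49
```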
Cancer rate
If all patients with pancreatic cancer have a certain symptom, it does not follow that anyone who has that symptom has a 100% chance of getting pancreatic cancer. Assuming the incidence rate of pancreatic cancer is 1/100000, while 10/99999 healthy individuals have the same symptoms worldwide, the probability of having pancreatic cancer given the symptoms is 9.1%, and the other 90.9% could be "false positives" (that is, falsely said to have cancer; "positive" is a confusing term when, as here, the test gives bad news).
Based on the incidence rate, the following table presents the corresponding numbers per 100,000 people:

              Cancer    No cancer      Total
Symptoms           1           10         11
No symptoms        0       99,989     99,989
Total              1       99,999    100,000

Which can then be used to calculate the probability of having cancer when you have the symptoms:

P(Cancer | Symptoms) = P(Symptoms | Cancer) P(Cancer) / P(Symptoms) = 1 / 11 ≈ 9.1%
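A count-based check of the 9.1% figure, as a small Python sketch (the variable names are illustrative; the rates are those assumed above):

```python
population = 100_000
cancer_incidence = 1 / 100_000        # assumed incidence rate
symptom_rate_healthy = 10 / 99_999    # healthy individuals with the symptom

with_cancer = population * cancer_incidence                                # 1 person, symptomatic
healthy_with_symptom = (population - with_cancer) * symptom_rate_healthy   # ≈ 10 people

p_cancer_given_symptom = with_cancer / (with_cancer + healthy_with_symptom)
print(f"P(cancer | symptom) ≈ {p_cancer_given_symptom:.3f}")   # ≈ 0.091, i.e. about 9.1%
```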
Defective item rate
A factory produces items using three machines—A, B, and C—which account for 20%, 30%, and 50% of its output, respectively. Of the items produced by machine A, 5% are defective, while 3% of B's items and 1% of C's are defective. If a randomly selected item is defective, what is the probability it was produced by machine C?
Once again, the answer can be reached without using the formula by applying the conditions to a hypothetical number of cases. For example, if the factory produces 1,000 items, 200 will be produced by A, 300 by B, and 500 by C. Machine A will produce 5% × 200 = 10 defective items, B 3% × 300 = 9, and C 1% × 500 = 5, for a total of 24. Thus 24/1000 (2.4%) of the total output will be defective and the likelihood that a randomly selected defective item was produced by machine C is 5/24 (~20.83%).
This problem can also be solved using Bayes' theorem: Let Xi denote the event that a randomly chosen item was made by the i th machine (for i = A,B,C). Let Y denote the event that a randomly chosen item is defective. Then, we are given the following information:
If the item was made by the first machine, then the probability that it is defective is 0.05; that is, P(Y | XA) = 0.05. Overall, we have

P(XA) = 0.2, P(XB) = 0.3, P(XC) = 0.5, and
P(Y | XA) = 0.05, P(Y | XB) = 0.03, P(Y | XC) = 0.01.
To answer the original question, we first find P(Y). That can be done in the following way:

P(Y) = Σi P(Y | Xi) P(Xi) = (0.05 × 0.20) + (0.03 × 0.30) + (0.01 × 0.50) = 0.024.

Hence, 2.4% of the total output is defective.

We are given that Y has occurred and we want to calculate the conditional probability of XC. By Bayes' theorem,

P(XC | Y) = P(Y | XC) P(XC) / P(Y) = (0.01 × 0.50) / 0.024 = 5/24.
Given that the item is defective, the probability that it was made by machine C is 5/24. C produces half of the total output but a much smaller fraction of the defective items. Hence the knowledge that the item selected was defective enables us to replace the prior probability P(XC) = 1/2 by the smaller posterior probability P(XC | Y) = 5/24.
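The same answer can be reproduced with a short Python sketch of the extended form of Bayes' theorem over the three machines (the dictionary keys and names are illustrative):

```python
# Prior probabilities P(X_i): each machine's share of total output.
prior = {"A": 0.20, "B": 0.30, "C": 0.50}
# Likelihoods P(Y | X_i): each machine's defect rate.
defect_rate = {"A": 0.05, "B": 0.03, "C": 0.01}

# Law of total probability: P(Y) = sum_i P(Y | X_i) P(X_i)
p_defective = sum(defect_rate[m] * prior[m] for m in prior)

# Bayes' theorem for each machine: P(X_i | Y) = P(Y | X_i) P(X_i) / P(Y)
posterior = {m: defect_rate[m] * prior[m] / p_defective for m in prior}

print(f"P(Y) = {p_defective:.3f}")           # 0.024
print(f"P(X_C | Y) = {posterior['C']:.4f}")  # 5/24 ≈ 0.2083
```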
Interpretations
The interpretation of Bayes' rule depends on the interpretation of probability ascribed to the terms. The two predominant interpretations are described below.
Bayesian interpretation
In the Bayesian (or epistemological) interpretation, probability measures a "degree of belief". Bayes' theorem links the degree of belief in a proposition before and after accounting for evidence. For example, suppose it is believed with 50% certainty that a coin is twice as likely to land heads as tails. If the coin is flipped a number of times and the outcomes observed, that degree of belief will probably rise or fall, but might remain the same, depending on the results. For proposition A and evidence B,
P (A), the prior, is the initial degree of belief in A.
P (A | B), the posterior, is the degree of belief after incorporating news that B is true.
the quotient P(B | A) / P(B) represents the support B provides for A.
For more on the application of Bayes' theorem under the Bayesian interpretation of probability, see Bayesian inference.
Frequentist interpretation
In the frequentist interpretation, probability measures a "proportion of outcomes". For example, suppose an experiment is performed many times. P(A) is the proportion of outcomes with property A (the prior) and P(B) is the proportion with property B. P(B | A) is the proportion of outcomes with property B out of outcomes with property A, and P(A | B) is the proportion of those with A out of those with B (the posterior).
The role of Bayes' theorem can be shown with tree diagrams. The two diagrams partition the same outcomes by A and B in opposite orders, to obtain the inverse probabilities. Bayes' theorem links the different partitionings.
Example
An entomologist spots what might, due to the pattern on its back, be a rare subspecies of beetle. A full 98% of the members of the rare subspecies have the pattern, so P(Pattern | Rare) = 98%. Only 5% of members of the common subspecies have the pattern. The rare subspecies is 0.1% of the total population. How likely is the beetle having the pattern to be rare: what is P(Rare | Pattern)?
From the extended form of Bayes' theorem (since any beetle is either rare or common),

P(Rare | Pattern) = P(Pattern | Rare) P(Rare) / [P(Pattern | Rare) P(Rare) + P(Pattern | Common) P(Common)] = (0.98 × 0.001) / (0.98 × 0.001 + 0.05 × 0.999) ≈ 1.9%
Forms
Events
Simple form
For events A and B, provided that P(B) ≠ 0,

P(A | B) = P(B | A) P(A) / P(B).

In many applications, for instance in Bayesian inference, the event B is fixed in the discussion and we wish to consider the effect of its having been observed on our belief in various possible events A. In such situations the denominator of the last expression, the probability of the given evidence B, is fixed; what we want to vary is A. Bayes' theorem shows that the posterior probabilities are proportional to the numerator, so the last equation becomes:

P(A | B) ∝ P(A) · P(B | A).
In words, the posterior is proportional to the prior times the likelihood.
If events A1, A2, ..., are mutually exclusive and exhaustive, i.e., one of them is certain to occur but no two can occur together, we can determine the proportionality constant by using the fact that their probabilities must add up to one. For instance, for a given event A, the event A itself and its complement ¬A are exclusive and exhaustive. Denoting the constant of proportionality by c, we have:

P(A | B) = c · P(A) · P(B | A)  and  P(¬A | B) = c · P(¬A) · P(B | ¬A).

Adding these two formulas we deduce that:

1 = c · (P(A) · P(B | A) + P(¬A) · P(B | ¬A)),

or

c = 1 / (P(A) · P(B | A) + P(¬A) · P(B | ¬A)).
Alternative form
Another form of Bayes' theorem for two competing statements or hypotheses is:

P(A | B) = P(B | A) P(A) / [P(B | A) P(A) + P(B | ¬A) P(¬A)]
For an epistemological interpretation:
For proposition A and evidence or background B,
P(A) is the prior probability, the initial degree of belief in A.
P(¬A) is the corresponding initial degree of belief in not-A, that A is false, where P(¬A) = 1 − P(A).
P(B | A) is the conditional probability or likelihood, the degree of belief in B given that A is true.
P(B | ¬A) is the conditional probability or likelihood, the degree of belief in B given that A is false.
P(A | B) is the posterior probability, the probability of A after taking into account B.
Extended form
Often, for some partition {Aj} of the sample space, the event space is given in terms of P(Aj) and P(B | Aj). It is then useful to compute P(B) using the law of total probability:

P(B) = Σj P(B ∩ Aj),

Or (using the multiplication rule for conditional probability),

P(B) = Σj P(B | Aj) P(Aj).

In the special case where A is a binary variable:

P(A | B) = P(B | A) P(A) / [P(B | A) P(A) + P(B | ¬A) P(¬A)]
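As a generic illustration of the extended form (a sketch only; the function name and inputs are assumptions, not part of the article), the posterior over any finite partition can be computed by normalising prior × likelihood:

```python
def posterior_over_partition(prior, likelihood):
    """Return P(A_j | B) for a finite partition {A_j}.

    prior[j] = P(A_j), likelihood[j] = P(B | A_j); the denominator is
    the law of total probability, P(B) = sum_j P(B | A_j) P(A_j).
    """
    p_b = sum(likelihood[a] * prior[a] for a in prior)
    return {a: likelihood[a] * prior[a] / p_b for a in prior}

# Binary special case, using the rare-beetle numbers from the earlier example.
print(posterior_over_partition(prior={"rare": 0.001, "common": 0.999},
                               likelihood={"rare": 0.98, "common": 0.05}))
# {'rare': ≈ 0.0192, 'common': ≈ 0.9808}
```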
Random variables
Consider a sample space Ω generated by two random variables X and Y with known probability distributions. In principle, Bayes' theorem applies to the events A = {X = x} and B = {Y = y}.
However, for continuous random variables the individual events {X = x} and {Y = y} each have probability zero, so Bayes' theorem stated in terms of events breaks down at such points. To remain useful, Bayes' theorem can be formulated in terms of the relevant densities (see Derivation).
Simple form
If X is continuous and Y is discrete,

f(x | Y = y) = P(Y = y | X = x) f(x) / P(Y = y),

where each f is a density function.

If X is discrete and Y is continuous,

P(X = x | Y = y) = f(y | X = x) P(X = x) / f(y).

If both X and Y are continuous,

f(x | y) = f(y | x) f(x) / f(y).
Extended form
A continuous event space is often conceptualized in terms of the numerator terms. It is then useful to eliminate the denominator using the law of total probability. For f(y), this becomes an integral:

f(y) = ∫ f(y | x) f(x) dx
Bayes' rule in odds form
Bayes' theorem in odds form is:

O(A1 : A2 | B) = O(A1 : A2) · Λ(A1 : A2 | B)

where

Λ(A1 : A2 | B) = P(B | A1) / P(B | A2)

is called the Bayes factor or likelihood ratio. The odds between two events is simply the ratio of the probabilities of the two events. Thus:

O(A1 : A2) = P(A1) / P(A2),  and  O(A1 : A2 | B) = P(A1 | B) / P(A2 | B).
Thus the rule says that the posterior odds are the prior odds times the Bayes factor; in other words, the posterior is proportional to the prior times the likelihood.
In the special case that A2 = ¬A1 and P(¬A1) = 1 − P(A1), one writes O(A1) for the odds O(A1 : ¬A1), and uses a similar abbreviation Λ(A1 | B) for the Bayes factor and O(A1 | B) for the conditional odds. The odds on A1 is by definition the odds for and against A1. Bayes' rule can then be written in the abbreviated form

O(A1 | B) = O(A1) · Λ(A1 | B),

or, in words, the posterior odds on A1 equals the prior odds on A1 times the likelihood ratio for A1 given information B. In short, posterior odds equals prior odds times likelihood ratio.
For example, if a medical test has a sensitivity of 90% and a specificity of 91%, then the positive Bayes factor is 90%/(100%-91%) = 10. Now, if the prevalence of this disease is 9.09%, and if we take that as the prior probability, then the prior odds is about 1:10. So after receiving a positive test result, the posterior odds of having the disease becomes 1:1, which means that the posterior probability of having the disease is 50%. If a second test is performed in serial testing, and that also turns out to be positive, then the posterior odds of having the disease becomes 10:1, which means a posterior probability of about 90.91%. The negative Bayes factor can be calculated to be 91%/(100%-90%)=9.1, so if the second test turns out to be negative, then the posterior odds of having the disease is 1:9.1, which means a posterior probability of about 9.9%.
The example above can also be understood with more solid numbers: assume the patient taking the test is from a group of 1,000 people, 91 of whom have the disease (prevalence of 9.1%). If all 1,000 take the test, 82 of those with the disease will get a true positive result (sensitivity of 90.1%), 9 of those with the disease will get a false negative result (false negative rate of 9.9%), 827 of those without the disease will get a true negative result (specificity of 91.0%), and 82 of those without the disease will get a false positive result (false positive rate of 9.0%). Before taking any test, the patient's odds for having the disease is 91:909. After receiving a positive result, the patient's odds for having the disease is 91 × 90.1% : 909 × 9.0% = 82 : 82, i.e. 1:1,
which is consistent with the fact that there are 82 true positives and 82 false positives in the group of 1,000.
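The serial-testing arithmetic can be reproduced with a brief Python sketch of Bayes' rule in odds form (illustrative names; the sensitivity, specificity, and prevalence are those of the example):

```python
def to_odds(p):
    return p / (1.0 - p)

def to_prob(odds):
    return odds / (1.0 + odds)

sensitivity, specificity = 0.90, 0.91
bf_positive = sensitivity / (1.0 - specificity)       # positive Bayes factor = 10
bf_negative = (1.0 - sensitivity) / specificity       # likelihood ratio for a negative result
                                                      # (reciprocal of the 9.1 quoted above)

prior_odds = to_odds(0.0909)                          # prevalence 9.09%, i.e. odds of about 1:10

odds_one_positive = prior_odds * bf_positive          # posterior odds = prior odds x Bayes factor
odds_two_positives = odds_one_positive * bf_positive  # a second positive test
odds_pos_then_neg = odds_one_positive * bf_negative   # a positive test followed by a negative test

print(f"after one positive test:  {to_prob(odds_one_positive):.2f}")   # ≈ 0.50
print(f"after two positive tests: {to_prob(odds_two_positives):.3f}")  # ≈ 0.909
print(f"positive then negative:   {to_prob(odds_pos_then_neg):.3f}")   # ≈ 0.099
```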
Correspondence to other mathematical frameworks
Propositional logic
Where the conditional probability is defined, it can be seen to capture the implication . The probabilistic calculus then mirrors or even generalizes various logical inference rules. Beyond, for example, assigning binary truth values, here one assigns probability values to statements. The assertion is captured by the assertion , i.e. that the conditional probability take the extremal probability value . Likewise, the assertion of a negation of an implication is captured by the assignment of . So, for example, if , then (if it is defined) , which entails , the implication introduction in logic.
Similarly, as the product of two probabilities equaling necessitates that both factors are also , one finds that Bayes' theorem
entails , which now also includes modus ponens.
For positive values , if it equals , then the two conditional probabilities are equal as well, and vice versa. Note that this mirrors the generally valid .
On the other hand, reasoning about either of the probabilities equalling classically entails the following contrapositive form of the above: .
Bayes' theorem with negated gives
.
Ruling out the extremal case (i.e. ), one has and in particular
.
Ruling out also the extremal case , one finds they attain the maximum simultaneously:
which (at least when having ruled out explosive antecedents) captures the classical contraposition principle
.
Subjective logic
Bayes' theorem represents a special case of deriving inverted conditional opinions in subjective logic expressed as:
where denotes the operator for inverting conditional opinions. The argument denotes a pair of binomial conditional opinions given by source , and the argument denotes the prior probability (aka. the base rate) of . The pair of derivative inverted conditional opinions is denoted . The conditional opinion generalizes the probabilistic conditional , i.e. in addition to assigning a probability the source can assign any subjective opinion to the conditional statement . A binomial subjective opinion is the belief in the truth of statement with degrees of epistemic uncertainty, as expressed by source . Every subjective opinion has a corresponding projected probability . The application of Bayes' theorem to projected probabilities of opinions is a homomorphism, meaning that Bayes' theorem can be expressed in terms of projected probabilities of opinions:
Hence, the subjective Bayes' theorem represents a generalization of Bayes' theorem.
Generalizations
Bayes theorem for 3 events
A version of Bayes' theorem for 3 events results from the addition of a third event C, with P(C) > 0, on which all probabilities are conditioned:

P(A | B, C) = P(B | A, C) P(A | C) / P(B | C)
Derivation
Using the chain rule,

P(A, B, C) = P(A | B, C) P(B | C) P(C).

And, on the other hand,

P(A, B, C) = P(B | A, C) P(A | C) P(C).

The desired result is obtained by identifying both expressions and solving for P(A | B, C).
Use in genetics
In genetics, Bayes' rule can be used to estimate the probability that someone has a specific genotype. Many people seek to assess their chances of being affected by a genetic disease or their likelihood of being a carrier for a recessive gene of interest. A Bayesian analysis can be done based on family history or genetic testing to predict whether someone will develop a disease or pass one on to their children. Genetic testing and prediction is common among couples who plan to have children but are concerned that they may both be carriers for a disease, especially in communities with low genetic variance.
Using pedigree to calculate probabilities
Example of a Bayesian analysis table for a female's risk for a disease based on the knowledge that the disease is present in her siblings but not in her parents or any of her four children. Based solely on the status of the subject's siblings and parents, she is equally likely to be a carrier as to be a non-carrier (this likelihood is denoted by the Prior Hypothesis). The probability that the subject's four sons would all be unaffected is 1/16 (1/2 × 1/2 × 1/2 × 1/2) if she is a carrier and about 1 if she is a non-carrier (this is the Conditional Probability). The Joint Probability reconciles these two predictions by multiplying them together. The last line (the Posterior Probability) is calculated by dividing the Joint Probability for each hypothesis by the sum of both joint probabilities.
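A minimal Python sketch of the table just described (hypothetical variable names); it combines the even prior with the conditional probability of observing four unaffected sons under each hypothesis:

```python
# Prior: equally likely to be a carrier or a non-carrier.
prior = {"carrier": 0.5, "non-carrier": 0.5}

# Conditional probability of four unaffected sons under each hypothesis.
conditional = {"carrier": (1 / 2) ** 4,   # 1/16
               "non-carrier": 1.0}        # approximately 1

joint = {h: prior[h] * conditional[h] for h in prior}
total = sum(joint.values())
posterior = {h: joint[h] / total for h in joint}

print(posterior)   # carrier ≈ 0.059, non-carrier ≈ 0.941
```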
Using genetic test results
Parental genetic testing can detect around 90% of known disease alleles in parents that can lead to carrier or affected status in their children. Cystic fibrosis is a heritable disease caused by an autosomal recessive mutation on the CFTR gene, located on the q arm of chromosome 7.
Here is a Bayesian analysis of a female patient with a family history of cystic fibrosis (CF) who has tested negative for CF, demonstrating how the method was used to determine her risk of having a child born with CF: because the patient is unaffected, she is either homozygous for the wild-type allele, or heterozygous. To establish prior probabilities, a Punnett square is used, based on the knowledge that neither parent was affected by the disease but both could have been carriers:
Given that the patient is unaffected, there are only three possibilities. Within these three, there are two scenarios in which the patient carries the mutant allele. Thus the prior probabilities are 2/3 (carrier) and 1/3 (non-carrier).
Next, the patient undergoes genetic testing and tests negative for cystic fibrosis. This test has a 90% detection rate, so the conditional probabilities of a negative test are 1/10 and 1. Finally, the joint and posterior probabilities are calculated as before.
After carrying out the same analysis on the patient's male partner (with a negative test result), the chance that their child is affected is the product of the parents' respective posterior probabilities for being carriers times the chance that two carriers will produce an affected offspring (1/4).
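The couple's calculation can be sketched in Python as follows (illustrative only; it assumes the 2/3 prior and the 90% detection rate given above, and that the partner's analysis is identical):

```python
def carrier_posterior(prior_carrier=2/3, detection_rate=0.90):
    """Posterior probability of being a CF carrier after a negative test."""
    p_neg_given_carrier = 1.0 - detection_rate   # 1/10: the test misses the allele
    p_neg_given_noncarrier = 1.0                 # non-carriers always test negative

    joint_carrier = prior_carrier * p_neg_given_carrier
    joint_noncarrier = (1.0 - prior_carrier) * p_neg_given_noncarrier
    return joint_carrier / (joint_carrier + joint_noncarrier)

mother = carrier_posterior()                 # ≈ 1/6
father = carrier_posterior()                 # same assumptions for the partner
p_affected_child = mother * father * 0.25    # two carriers have a 1/4 chance of an affected child
print(f"P(child affected) ≈ {p_affected_child:.4f}")
```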
Genetic testing done in parallel with other risk factor identification
Bayesian analysis can be done using phenotypic information associated with a genetic condition. When combined with genetic testing, this analysis becomes much more complicated. Cystic fibrosis, for example, can be identified in a fetus with an ultrasound looking for an echogenic bowel, one that appears brighter than normal on a scan. This is not a foolproof test, as an echogenic bowel can be present in a perfectly healthy fetus. Parental genetic testing is very influential in this case, where a phenotypic facet can be overly influential in probability calculation. In the case of a fetus with an echogenic bowel, with a mother who has been tested and is known to be a CF carrier, the posterior probability that the fetus has the disease is very high (0.64). But once the father has tested negative for CF, the posterior probability drops significantly (to 0.16).
Risk factor calculation is a powerful tool in genetic counseling and reproductive planning but cannot be treated as the only important factor. As above, incomplete testing can yield falsely high probability of carrier status, and testing can be financially inaccessible or unfeasible when a parent is not present.
| Mathematics | Statistics and probability | null |
49571 | https://en.wikipedia.org/wiki/Bayesian%20inference | Bayesian inference | Bayesian inference is a method of statistical inference in which Bayes' theorem is used to calculate a probability of a hypothesis, given prior evidence, and update it as more information becomes available. Fundamentally, Bayesian inference uses a prior distribution to estimate posterior probabilities. Bayesian inference is an important technique in statistics, and especially in mathematical statistics. Bayesian updating is particularly important in the dynamic analysis of a sequence of data. Bayesian inference has found application in a wide range of activities, including science, engineering, philosophy, medicine, sport, and law. In the philosophy of decision theory, Bayesian inference is closely related to subjective probability, often called "Bayesian probability".
Introduction to Bayes' rule
Formal explanation
Bayesian inference derives the posterior probability as a consequence of two antecedents: a prior probability and a "likelihood function" derived from a statistical model for the observed data. Bayesian inference computes the posterior probability according to Bayes' theorem:

P(H | E) = P(E | H) P(H) / P(E)

where

H stands for any hypothesis whose probability may be affected by data (called evidence below). Often there are competing hypotheses, and the task is to determine which is the most probable.

P(H), the prior probability, is the estimate of the probability of the hypothesis H before the data E, the current evidence, is observed.

E, the evidence, corresponds to new data that were not used in computing the prior probability.

P(H | E), the posterior probability, is the probability of H given E, i.e., after E is observed. This is what we want to know: the probability of a hypothesis given the observed evidence.

P(E | H) is the probability of observing E given H and is called the likelihood. As a function of E with H fixed, it indicates the compatibility of the evidence with the given hypothesis. The likelihood function is a function of the evidence, E, while the posterior probability is a function of the hypothesis, H.

P(E) is sometimes termed the marginal likelihood or "model evidence". This factor is the same for all possible hypotheses being considered (as is evident from the fact that the hypothesis H does not appear anywhere in the symbol, unlike for all the other factors) and hence does not factor into determining the relative probabilities of different hypotheses.
(Else one has .)
For different values of H, only the factors P(H) and P(E | H), both in the numerator, affect the value of P(H | E): the posterior probability of a hypothesis is proportional to its prior probability (its inherent likeliness) and the newly acquired likelihood (its compatibility with the new observed evidence).
In cases where ("not "), the logical negation of , is a valid likelihood, Bayes' rule can be rewritten as follows:
because
and
This focuses attention on the term If that term is approximately 1, then the probability of the hypothesis given the evidence, , is about , about 50% likely - equally likely or not likely. If that term is very small, close to zero, then the probability of the hypothesis, given the evidence, is close to 1 or the conditional hypothesis is quite likely. If that term is very large, much larger than 1, then the hypothesis, given the evidence, is quite unlikely. If the hypothesis (without consideration of evidence) is unlikely, then is small (but not necessarily astronomically small) and is much larger than 1 and this term can be approximated as and relevant probabilities can be compared directly to each other.
One quick and easy way to remember the equation would be to use the rule of multiplication:

P(E ∩ H) = P(E | H) P(H) = P(H | E) P(E)
Alternatives to Bayesian updating
Bayesian updating is widely used and computationally convenient. However, it is not the only updating rule that might be considered rational.
Ian Hacking noted that traditional "Dutch book" arguments did not specify Bayesian updating: they left open the possibility that non-Bayesian updating rules could avoid Dutch books. Hacking wrote: "And neither the Dutch book argument nor any other in the personalist arsenal of proofs of the probability axioms entails the dynamic assumption. Not one entails Bayesianism. So the personalist requires the dynamic assumption to be Bayesian. It is true that in consistency a personalist could abandon the Bayesian model of learning from experience. Salt could lose its savour."
Indeed, there are non-Bayesian updating rules that also avoid Dutch books (as discussed in the literature on "probability kinematics") following the publication of Richard C. Jeffrey's rule, which applies Bayes' rule to the case where the evidence itself is assigned a probability. The additional hypotheses needed to uniquely require Bayesian updating have been deemed to be substantial, complicated, and unsatisfactory.
Inference over exclusive and exhaustive possibilities
If evidence is simultaneously used to update belief over a set of exclusive and exhaustive propositions, Bayesian inference may be thought of as acting on this belief distribution as a whole.
General formulation
Suppose a process is generating independent and identically distributed events En, n = 1, 2, 3, ..., but the probability distribution is unknown. Let the event space Ω represent the current state of belief for this process. Each model is represented by event Mm. The conditional probabilities P(En | Mm) are specified to define the models. P(Mm) is the degree of belief in Mm. Before the first inference step, {P(Mm)} is a set of initial prior probabilities. These must sum to 1, but are otherwise arbitrary.

Suppose that the process is observed to generate E ∈ {En}. For each M ∈ {Mm}, the prior P(M) is updated to the posterior P(M | E). From Bayes' theorem:

P(M | E) = P(E | M) P(M) / Σm P(E | Mm) P(Mm)
Upon observation of further evidence, this procedure may be repeated.
Multiple observations
For a sequence of independent and identically distributed observations E = (e1, ..., en), it can be shown by induction that repeated application of the above is equivalent to

P(M | E) = P(E | M) P(M) / Σm P(E | Mm) P(Mm),

where

P(E | M) = Πk P(ek | M).
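A small Python sketch of this sequential updating (the models and observations are illustrative, not from the article): belief over a finite set of candidate models is renormalised after each independent observation, which is equivalent to a single batch update with the product of the likelihoods.

```python
def update(belief, models, observation):
    """One Bayesian update of P(M) over a finite set of models."""
    unnormalised = {m: belief[m] * models[m](observation) for m in belief}
    total = sum(unnormalised.values())
    return {m: v / total for m, v in unnormalised.items()}

# Two candidate models of a coin: fair, or biased 80% towards heads.
models = {
    "fair":   lambda outcome: 0.5,
    "biased": lambda outcome: 0.8 if outcome == "H" else 0.2,
}
belief = {"fair": 0.5, "biased": 0.5}   # initial prior probabilities

for outcome in ["H", "H", "T", "H", "H"]:
    belief = update(belief, models, outcome)

print(belief)   # belief has shifted towards the "biased" model (≈ 0.72)
```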
Parametric formulation: motivating the formal description
By parameterizing the space of models, the belief in all models may be updated in a single step. The distribution of belief over the model space may then be thought of as a distribution of belief over the parameter space. The distributions in this section are expressed as continuous, represented by probability densities, as this is the usual situation. The technique is, however, equally applicable to discrete distributions.
Let the vector θ span the parameter space. Let the initial prior distribution over θ be p(θ | α), where α is a set of parameters to the prior itself, or hyperparameters. Let E = (e1, ..., en) be a sequence of independent and identically distributed event observations, where all ei are distributed as p(e | θ) for some θ. Bayes' theorem is applied to find the posterior distribution over θ:

p(θ | E, α) = p(E | θ) p(θ | α) / p(E | α) = p(E | θ) p(θ | α) / ∫ p(E | θ) p(θ | α) dθ,

where

p(E | θ) = Πk p(ek | θ).
Formal description of Bayesian inference
Definitions
x, a data point in general. This may in fact be a vector of values.
θ, the parameter of the data point's distribution, i.e., x ~ p(x | θ). This may be a vector of parameters.
α, the hyperparameter of the parameter distribution, i.e., θ ~ p(θ | α). This may be a vector of hyperparameters.
X is the sample, a set of observed data points, i.e., x1, ..., xn.
x̃, a new data point whose distribution is to be predicted.
Bayesian inference
The prior distribution is the distribution of the parameter(s) before any data is observed, i.e. p(θ | α). The prior distribution might not be easily determined; in such a case, one possibility may be to use the Jeffreys prior to obtain a prior distribution before updating it with newer observations.
The sampling distribution is the distribution of the observed data conditional on its parameters, i.e. p(X | θ). This is also termed the likelihood, especially when viewed as a function of the parameter(s), sometimes written L(θ | X) = p(X | θ).
The marginal likelihood (sometimes also termed the evidence) is the distribution of the observed data marginalized over the parameter(s), i.e. p(X | α) = ∫ p(X | θ) p(θ | α) dθ. It quantifies the agreement between data and expert opinion, in a geometric sense that can be made precise. If the marginal likelihood is 0 then there is no agreement between the data and expert opinion and Bayes' rule cannot be applied.
The posterior distribution is the distribution of the parameter(s) after taking into account the observed data. This is determined by Bayes' rule, which forms the heart of Bayesian inference:

p(θ | X, α) = p(X | θ) p(θ | α) / p(X | α) ∝ p(X | θ) p(θ | α).

This is expressed in words as "posterior is proportional to likelihood times prior", or sometimes as "posterior = likelihood times prior, over evidence".
In practice, for almost all complex Bayesian models used in machine learning, the posterior distribution is not obtained in a closed form distribution, mainly because the parameter space for θ can be very high-dimensional, or because the Bayesian model retains certain hierarchical structure formulated from the observations X and parameter θ. In such situations, we need to resort to approximation techniques.
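As one very simple illustration of such an approximation (a sketch only, using a grid approximation with assumed toy data rather than the methods usually employed in practice): prior × likelihood is evaluated on a discretised parameter grid and normalised numerically.

```python
import math

successes, trials = 7, 10                 # assumed toy data: 7 successes in 10 Bernoulli trials
grid = [i / 200 for i in range(1, 200)]   # discretised values of the parameter theta in (0, 1)

def log_likelihood(theta):
    return successes * math.log(theta) + (trials - successes) * math.log(1 - theta)

prior = [1.0 for _ in grid]                                      # flat prior
unnormalised = [p * math.exp(log_likelihood(t)) for p, t in zip(prior, grid)]
total = sum(unnormalised)
posterior = [u / total for u in unnormalised]                    # P(theta_i | data) on the grid

posterior_mean = sum(t * p for t, p in zip(grid, posterior))
print(f"approximate posterior mean of theta ≈ {posterior_mean:.3f}")   # ≈ 0.667
```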
General case: Let be the conditional distribution of given and let be the distribution of . The joint distribution is then . The conditional distribution of given is then determined by
Existence and uniqueness of the needed conditional expectation is a consequence of the Radon–Nikodym theorem. This was formulated by Kolmogorov in his famous book from 1933. Kolmogorov underlines the importance of conditional probability by writing "I wish to call attention to ... and especially the theory of conditional probabilities and conditional expectations ..." in the Preface. The Bayes theorem determines the posterior distribution from the prior distribution. Uniqueness requires continuity assumptions. Bayes' theorem can be generalized to include improper prior distributions such as the uniform distribution on the real line. Modern Markov chain Monte Carlo methods have boosted the importance of Bayes' theorem including cases with improper priors.
Bayesian prediction
The posterior predictive distribution is the distribution of a new data point, marginalized over the posterior:
The prior predictive distribution is the distribution of a new data point, marginalized over the prior:
Bayesian theory calls for the use of the posterior predictive distribution to do predictive inference, i.e., to predict the distribution of a new, unobserved data point. That is, instead of a fixed point as a prediction, a distribution over possible points is returned. Only this way is the entire posterior distribution of the parameter(s) used. By comparison, prediction in frequentist statistics often involves finding an optimum point estimate of the parameter(s)—e.g., by maximum likelihood or maximum a posteriori estimation (MAP)—and then plugging this estimate into the formula for the distribution of a data point. This has the disadvantage that it does not account for any uncertainty in the value of the parameter, and hence will underestimate the variance of the predictive distribution.
In some instances, frequentist statistics can work around this problem. For example, confidence intervals and prediction intervals in frequentist statistics when constructed from a normal distribution with unknown mean and variance are constructed using a Student's t-distribution. This correctly estimates the variance, due to the facts that (1) the average of normally distributed random variables is also normally distributed, and (2) the predictive distribution of a normally distributed data point with unknown mean and variance, using conjugate or uninformative priors, has a Student's t-distribution. In Bayesian statistics, however, the posterior predictive distribution can always be determined exactly—or at least to an arbitrary level of precision when numerical methods are used.
Both types of predictive distributions have the form of a compound probability distribution (as does the marginal likelihood). In fact, if the prior distribution is a conjugate prior, such that the prior and posterior distributions come from the same family, it can be seen that both prior and posterior predictive distributions also come from the same family of compound distributions. The only difference is that the posterior predictive distribution uses the updated values of the hyperparameters (applying the Bayesian update rules given in the conjugate prior article), while the prior predictive distribution uses the values of the hyperparameters that appear in the prior distribution.
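A minimal numeric sketch of this, assuming a Beta(α, β) prior on the success probability of Bernoulli observations (the hyperparameter values and data below are invented for illustration):

```python
# Beta-Bernoulli example: prior predictive vs. posterior predictive for the
# probability that the next observation is a success (1).
alpha, beta = 2.0, 2.0          # hypothetical prior hyperparameters
data = [1, 0, 1, 1, 1, 0, 1]    # hypothetical observations

successes = sum(data)
failures = len(data) - successes

# Prior predictive P(x_new = 1): mean of the Beta(alpha, beta) prior.
prior_predictive = alpha / (alpha + beta)

# Posterior is Beta(alpha + successes, beta + failures) by conjugacy;
# the posterior predictive uses the updated hyperparameters.
posterior_predictive = (alpha + successes) / (alpha + beta + len(data))

print(prior_predictive)      # 0.5
print(posterior_predictive)  # (2 + 5) / (4 + 7) = 7/11 ≈ 0.636
```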
Mathematical properties
Interpretation of factor
If P(E | M) > P(E), then P(M | E) > P(M), and the belief in M increases upon observing E. That is, if the model were true, the evidence would be more likely than is predicted by the current state of belief. The reverse applies for a decrease in belief. If the belief does not change, P(E | M) = P(E). That is, the evidence is independent of the model. If the model were true, the evidence would be exactly as likely as predicted by the current state of belief.
Cromwell's rule
If P(M) = 0, then P(M | E) = 0. If P(M) = 1 and P(E) > 0, then P(M | E) = 1. This can be interpreted to mean that hard convictions are insensitive to counter-evidence.
The former follows directly from Bayes' theorem. The latter can be derived by applying the first rule to the event "not M" in place of "M", yielding "if P(not M) = 0, then P(not M | E) = 0", from which the result immediately follows.
Asymptotic behaviour of posterior
Consider the behaviour of a belief distribution as it is updated a large number of times with independent and identically distributed trials. For sufficiently nice prior probabilities, the Bernstein-von Mises theorem gives that in the limit of infinite trials, the posterior converges to a Gaussian distribution independent of the initial prior under some conditions first outlined and rigorously proven by Joseph L. Doob in 1948, namely if the random variable in consideration has a finite probability space. The more general results were obtained later by the statistician David A. Freedman, who showed in two seminal research papers, in 1963 and 1965, when and under what circumstances the asymptotic behaviour of the posterior is guaranteed. His 1963 paper treats, like Doob (1949), the finite case and comes to a satisfactory conclusion. However, if the random variable has an infinite but countable probability space (i.e., corresponding to a die with infinitely many faces), the 1965 paper demonstrates that for a dense subset of priors the Bernstein-von Mises theorem is not applicable. In this case there is almost surely no asymptotic convergence. Later in the 1980s and 1990s, Freedman and Persi Diaconis continued to work on the case of infinite countable probability spaces. To summarise, there may be insufficient trials to suppress the effects of the initial choice, and especially for large (but finite) systems the convergence might be very slow.
Conjugate priors
In parameterized form, the prior distribution is often assumed to come from a family of distributions called conjugate priors. The usefulness of a conjugate prior is that the corresponding posterior distribution will be in the same family, and the calculation may be expressed in closed form.
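For example, a Gamma prior on the rate of a Poisson likelihood is conjugate, so the posterior is again a Gamma distribution obtained by a closed-form update of the hyperparameters (the shape, rate, and count values below are illustrative):

```python
# Conjugate update: Gamma(shape, rate) prior on a Poisson rate parameter.
shape, rate = 3.0, 1.0          # hypothetical prior hyperparameters
counts = [2, 4, 3, 5, 1]        # hypothetical Poisson observations

# The posterior is again a Gamma distribution, with updated hyperparameters:
post_shape = shape + sum(counts)   # shape + total count
post_rate = rate + len(counts)     # rate + number of observations

print(post_shape, post_rate)       # 18.0 6.0
print(post_shape / post_rate)      # posterior mean of the rate = 3.0
```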
Estimates of parameters and predictions
It is often desired to use a posterior distribution to estimate a parameter or variable. Several methods of Bayesian estimation select measurements of central tendency from the posterior distribution.
For one-dimensional problems, a unique median exists for practical continuous problems. The posterior median is attractive as a robust estimator.
If there exists a finite mean for the posterior distribution, then the posterior mean is a method of estimation.
Taking a value with the greatest probability defines maximum a posteriori (MAP) estimates:
There are examples where no maximum is attained, in which case the set of MAP estimates is empty.
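A minimal sketch of reading the posterior mean, median, and MAP estimate off a grid-approximated posterior, assuming a flat prior and 7 successes in 10 Bernoulli trials (all numbers are invented for illustration):

```python
# Posterior mean, median and MAP from a grid approximation.
# Model: unknown success probability p with a flat prior, 7 successes in 10 trials.
n_grid = 1001
grid = [i / (n_grid - 1) for i in range(n_grid)]
successes, trials = 7, 10

unnorm = [p**successes * (1 - p)**(trials - successes) for p in grid]
total = sum(unnorm)
posterior = [w / total for w in unnorm]

post_mean = sum(p * w for p, w in zip(grid, posterior))

# Median: smallest grid point where the cumulative probability reaches 0.5.
cum = 0.0
for p, w in zip(grid, posterior):
    cum += w
    if cum >= 0.5:
        post_median = p
        break

# MAP: grid point with the largest posterior weight.
post_map = max(zip(posterior, grid))[1]

print(round(post_mean, 3), round(post_median, 3), round(post_map, 3))
# roughly 0.667, 0.676, 0.700 for this example
```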
There are other methods of estimation that minimize the posterior risk (expected-posterior loss) with respect to a loss function, and these are of interest to statistical decision theory using the sampling distribution ("frequentist statistics").
The posterior predictive distribution of a new observation (that is independent of previous observations) is determined by
Examples
Probability of a hypothesis
Suppose there are two full bowls of cookies. Bowl #1 has 10 chocolate chip and 30 plain cookies, while bowl #2 has 20 of each. Our friend Fred picks a bowl at random, and then picks a cookie at random. We may assume there is no reason to believe Fred treats one bowl differently from another, likewise for the cookies. The cookie turns out to be a plain one. How probable is it that Fred picked it out of bowl #1?
Intuitively, it seems clear that the answer should be more than a half, since there are more plain cookies in bowl #1. The precise answer is given by Bayes' theorem. Let H1 correspond to bowl #1, and H2 to bowl #2.
It is given that the bowls are identical from Fred's point of view, thus P(H1) = P(H2), and the two must add up to 1, so both are equal to 0.5.
The event E is the observation of a plain cookie. From the contents of the bowls, we know that P(E | H1) = 30/40 = 0.75 and P(E | H2) = 20/40 = 0.5. Bayes' formula then yields P(H1 | E) = P(E | H1) P(H1) / [P(E | H1) P(H1) + P(E | H2) P(H2)] = (0.75 × 0.5) / (0.75 × 0.5 + 0.5 × 0.5) = 0.6.
Before we observed the cookie, the probability we assigned for Fred having chosen bowl #1 was the prior probability, P(H1), which was 0.5. After observing the cookie, we must revise the probability to P(H1 | E), which is 0.6.
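The same calculation as a minimal Python check (a direct transcription of the numbers above; the variable names are ours):

```python
# Bowl #1: 10 chocolate chip + 30 plain; bowl #2: 20 of each.
prior_bowl1, prior_bowl2 = 0.5, 0.5
p_plain_given_bowl1 = 30 / 40   # 0.75
p_plain_given_bowl2 = 20 / 40   # 0.5

evidence = (p_plain_given_bowl1 * prior_bowl1
            + p_plain_given_bowl2 * prior_bowl2)   # P(plain) = 0.625
posterior_bowl1 = p_plain_given_bowl1 * prior_bowl1 / evidence

print(posterior_bowl1)  # 0.6
```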
Making a prediction
An archaeologist is working at a site thought to be from the medieval period, between the 11th and the 16th centuries. However, it is uncertain exactly when in this period the site was inhabited. Fragments of pottery are found, some of which are glazed and some of which are decorated. It is expected that if the site were inhabited during the early medieval period, then 1% of the pottery would be glazed and 50% of its area decorated, whereas if it had been inhabited in the late medieval period then 81% would be glazed and 5% of its area decorated. How confident can the archaeologist be in the date of inhabitation as fragments are unearthed?
The degree of belief in the continuous variable (century) is to be calculated, with the discrete set of events as evidence. Assuming linear variation of glaze and decoration with time, and that these variables are independent,
Assume a uniform prior of , and that trials are independent and identically distributed. When a new fragment of type is discovered, Bayes' theorem is applied to update the degree of belief for each :
A computer simulation of the changing belief as 50 fragments are unearthed is shown on the graph. In the simulation, the site was inhabited around 1420, or . By calculating the area under the relevant portion of the graph for 50 trials, the archaeologist can say that there is practically no chance the site was inhabited in the 11th and 12th centuries, about 1% chance that it was inhabited during the 13th century, 63% chance during the 14th century and 36% during the 15th century. The Bernstein-von Mises theorem asserts here the asymptotic convergence to the "true" distribution because the probability space corresponding to the discrete set of events is finite (see above section on asymptotic behaviour of the posterior).
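A minimal simulation in the same spirit; the linear interpolation of the glaze and decoration percentages across the period, the 10-year parameter grid, and the random seed are our assumptions rather than details given in the text:

```python
import random

random.seed(0)

# Hypothetical reconstruction of the archaeology example: linear interpolation
# between the stated early-period and late-period percentages is assumed.
def p_glazed(year):
    t = (year - 1000) / 600          # 0 at year 1000, 1 at year 1600
    return 0.01 + t * (0.81 - 0.01)

def p_decorated(year):
    t = (year - 1000) / 600
    return 0.50 + t * (0.05 - 0.50)

def p_fragment(year, glazed, decorated):
    pg = p_glazed(year) if glazed else 1 - p_glazed(year)
    pd = p_decorated(year) if decorated else 1 - p_decorated(year)
    return pg * pd                    # glaze and decoration assumed independent

years = list(range(1000, 1601, 10))      # discretised parameter grid
belief = [1 / len(years)] * len(years)   # uniform prior

true_year = 1420
for _ in range(50):                      # 50 unearthed fragments
    glazed = random.random() < p_glazed(true_year)
    decorated = random.random() < p_decorated(true_year)
    belief = [b * p_fragment(y, glazed, decorated) for y, b in zip(years, belief)]
    total = sum(belief)
    belief = [b / total for b in belief]

# Posterior probability that the site dates from the 15th century (1400-1499).
print(sum(b for y, b in zip(years, belief) if 1400 <= y < 1500))
```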
In frequentist statistics and decision theory
A decision-theoretic justification of the use of Bayesian inference was given by Abraham Wald, who proved that every unique Bayesian procedure is admissible. Conversely, every admissible statistical procedure is either a Bayesian procedure or a limit of Bayesian procedures.
Wald characterized admissible procedures as Bayesian procedures (and limits of Bayesian procedures), making the Bayesian formalism a central technique in such areas of frequentist inference as parameter estimation, hypothesis testing, and computing confidence intervals. For example:
"Under some conditions, all admissible procedures are either Bayes procedures or limits of Bayes procedures (in various senses). These remarkable results, at least in their original form, are due essentially to Wald. They are useful because the property of being Bayes is easier to analyze than admissibility."
"In decision theory, a quite general method for proving admissibility consists in exhibiting a procedure as a unique Bayes solution."
"In the first chapters of this work, prior distributions with finite support and the corresponding Bayes procedures were used to establish some of the main theorems relating to the comparison of experiments. Bayes procedures with respect to more general prior distributions have played a very important role in the development of statistics, including its asymptotic theory." "There are many problems where a glance at posterior distributions, for suitable priors, yields immediately interesting information. Also, this technique can hardly be avoided in sequential analysis."
"A useful fact is that any Bayes decision rule obtained by taking a proper prior over the whole parameter space must be admissible"
"An important area of investigation in the development of admissibility ideas has been that of conventional sampling-theory procedures, and many interesting results have been obtained."
Model selection
Bayesian methodology also plays a role in model selection, where the aim is to select one model from a set of competing models that represents most closely the underlying process that generated the observed data. In Bayesian model comparison, the model with the highest posterior probability given the data is selected. The posterior probability of a model depends on the evidence, or marginal likelihood, which reflects the probability that the data is generated by the model, and on the prior belief in the model. When two competing models are a priori considered to be equiprobable, the ratio of their posterior probabilities corresponds to the Bayes factor. Since Bayesian model comparison is aimed at selecting the model with the highest posterior probability, this methodology is also referred to as the maximum a posteriori (MAP) selection rule or the MAP probability rule.
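A small sketch of this MAP model-selection rule for two hypothetical models of coin-flip data — a fair-coin model and a model with a uniform prior on the bias; the data and both models are purely illustrative:

```python
from math import comb

# Data: 8 heads in 10 flips (illustrative).
n, s = 10, 8

# Model 1: fair coin. Marginal likelihood of the exact sequence = 0.5**n.
evidence_m1 = 0.5 ** n

# Model 2: unknown bias with a uniform prior on [0, 1].
# Marginal likelihood of the sequence = Beta integral = 1 / ((n + 1) * C(n, s)).
evidence_m2 = 1 / ((n + 1) * comb(n, s))

bayes_factor = evidence_m2 / evidence_m1
print(bayes_factor)   # ≈ 2.07, favouring the unknown-bias model

# With equal prior model probabilities, the MAP rule picks the model
# with the larger evidence — here model 2.
prior_m1 = prior_m2 = 0.5
post_m1 = evidence_m1 * prior_m1 / (evidence_m1 * prior_m1 + evidence_m2 * prior_m2)
print(post_m1, 1 - post_m1)
```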
Probabilistic programming
While conceptually simple, Bayesian methods can be mathematically and numerically challenging. Probabilistic programming languages (PPLs) implement functions to easily build Bayesian models together with efficient automatic inference methods. This helps separate the model building from the inference, allowing practitioners to focus on their specific problems and leaving PPLs to handle the computational details for them.
Applications
Statistical data analysis
See the separate Wikipedia entry on Bayesian statistics, specifically the statistical modeling section in that page.
Computer applications
Bayesian inference has applications in artificial intelligence and expert systems. Bayesian inference techniques have been a fundamental part of computerized pattern recognition techniques since the late 1950s. There is also an ever-growing connection between Bayesian methods and simulation-based Monte Carlo techniques since complex models cannot be processed in closed form by a Bayesian analysis, while a graphical model structure may allow for efficient simulation algorithms like the Gibbs sampling and other Metropolis–Hastings algorithm schemes. Recently Bayesian inference has gained popularity among the phylogenetics community for these reasons; a number of applications allow many demographic and evolutionary parameters to be estimated simultaneously.
As applied to statistical classification, Bayesian inference has been used to develop algorithms for identifying e-mail spam. Applications which make use of Bayesian inference for spam filtering include CRM114, DSPAM, Bogofilter, SpamAssassin, SpamBayes, Mozilla, XEAMS, and others. Spam classification is treated in more detail in the article on the naïve Bayes classifier.
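A toy sketch of the naive Bayes idea behind such filters; the training documents, word lists, and add-one smoothing choice below are invented for illustration, and real filters such as those listed use far richer models:

```python
# Toy naive Bayes spam filter with Laplace (add-one) smoothing.
spam_docs = [["win", "money", "now"], ["cheap", "money"], ["win", "prize"]]
ham_docs = [["meeting", "tomorrow"], ["project", "update", "tomorrow"]]

vocab = {w for doc in spam_docs + ham_docs for w in doc}

def word_probs(docs):
    counts = {w: 1 for w in vocab}           # add-one smoothing
    for doc in docs:
        for w in doc:
            counts[w] += 1
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

p_word_spam = word_probs(spam_docs)
p_word_ham = word_probs(ham_docs)
p_spam = len(spam_docs) / (len(spam_docs) + len(ham_docs))

def posterior_spam(message):
    ps, ph = p_spam, 1 - p_spam
    for w in message:
        if w in vocab:                       # ignore unseen words
            ps *= p_word_spam[w]
            ph *= p_word_ham[w]
    return ps / (ps + ph)

print(posterior_spam(["win", "money", "tomorrow"]))
```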
Solomonoff's Inductive inference is the theory of prediction based on observations; for example, predicting the next symbol based upon a given series of symbols. The only assumption is that the environment follows some unknown but computable probability distribution. It is a formal inductive framework that combines two well-studied principles of inductive inference: Bayesian statistics and Occam's Razor. Solomonoff's universal prior probability of any prefix p of a computable sequence x is the sum of the probabilities of all programs (for a universal computer) that compute something starting with p. Given some p and any computable but unknown probability distribution from which x is sampled, the universal prior and Bayes' theorem can be used to predict the yet unseen parts of x in optimal fashion.
Bioinformatics and healthcare applications
Bayesian inference has been applied in various bioinformatics applications, including differential gene expression analysis. Bayesian inference is also used in a general cancer risk model, called CIRI (Continuous Individualized Risk Index), where serial measurements are incorporated to update a Bayesian model which is primarily built from prior knowledge.
In the courtroom
Bayesian inference can be used by jurors to coherently accumulate the evidence for and against a defendant, and to see whether, in totality, it meets their personal threshold for "beyond a reasonable doubt". Bayes' theorem is applied successively to all evidence presented, with the posterior from one stage becoming the prior for the next. The benefit of a Bayesian approach is that it gives the juror an unbiased, rational mechanism for combining evidence. It may be appropriate to explain Bayes' theorem to jurors in odds form, as betting odds are more widely understood than probabilities. Alternatively, a logarithmic approach, replacing multiplication with addition, might be easier for a jury to handle.
If the existence of the crime is not in doubt, only the identity of the culprit, it has been suggested that the prior should be uniform over the qualifying population. For example, if 1,000 people could have committed the crime, the prior probability of guilt would be 1/1000.
The use of Bayes' theorem by jurors is controversial. In the United Kingdom, a defence expert witness explained Bayes' theorem to the jury in R v Adams. The jury convicted, but the case went to appeal on the basis that no means of accumulating evidence had been provided for jurors who did not wish to use Bayes' theorem. The Court of Appeal upheld the conviction, but it also gave the opinion that "To introduce Bayes' Theorem, or any similar method, into a criminal trial plunges the jury into inappropriate and unnecessary realms of theory and complexity, deflecting them from their proper task."
Gardner-Medwin argues that the criterion on which a verdict in a criminal trial should be based is not the probability of guilt, but rather the probability of the evidence, given that the defendant is innocent (akin to a frequentist p-value). He argues that if the posterior probability of guilt is to be computed by Bayes' theorem, the prior probability of guilt must be known. This will depend on the incidence of the crime, which is an unusual piece of evidence to consider in a criminal trial. Consider the following three propositions:
A – the known facts and testimony could have arisen if the defendant is guilty.
B – the known facts and testimony could have arisen if the defendant is innocent.
C – the defendant is guilty.
Gardner-Medwin argues that the jury should believe both A and not-B in order to convict. A and not-B implies the truth of C, but the reverse is not true. It is possible that B and C are both true, but in this case he argues that a jury should acquit, even though they know that they will be letting some guilty people go free. | Mathematics | Statistics and probability | null |
7189344 | https://en.wikipedia.org/wiki/Populus%20trichocarpa | Populus trichocarpa | Populus trichocarpa, the black cottonwood, western balsam-poplar or California poplar, is a deciduous broadleaf tree species native to western North America. It is used for timber, and is notable as a model organism in plant biology.
Description
It is a large tree, growing to a height of and a trunk diameter over . It ranks 3rd in poplar species in the American Forests Champion Tree Registry. It is normally fairly short-lived, but some trees may live up to 400 years. A cottonwood in Willamette Mission State Park near Salem, Oregon, holds the national and world records. Last measured in April 2008, this black cottonwood was found to be standing at tall, around, with 527 points.
The bark is grey and covered with lenticels, becoming thick and deeply fissured on old trees. The bark can become hard enough to cause sparks when cut with a chainsaw. The stem is grey in the older parts and light brown in younger parts. The crown is usually roughly conical and quite dense. In large trees, the lower branches droop downwards. Spur shoots are common. The wood has a light coloring and a straight grain.
The leaves are usually long with a glossy, dark green upper side and glaucous, light grey-green underside; larger leaves may be up to long and may be produced on stump sprouts and very vigorous young trees. The leaves are alternate, elliptical with a crenate margin and an acute tip, and reticulate venation. The petiole is reddish. The buds are conical, long, narrow, and sticky, with a strong balsam scent in spring when they open.
P. trichocarpa has an extensive and aggressive root system, which can invade and damage drainage systems. Sometimes, the roots can even damage the foundations of buildings by drying out the soil.
In 2016, the first direct evidence was published indicating that wild P. trichocarpa fixes nitrogen.
Reproduction
Flowering and fruiting
P. trichocarpa is normally dioecious; male and female catkins are borne on separate trees. The species reaches flowering age around 10 years. Flowers may appear in early March to late May in Washington and Oregon, and sometimes as late as mid-June in northern and interior British Columbia, Idaho, and Montana. Staminate catkins contain 30 to 60 stamens, elongated to 2 to 3 cm, and are deciduous. The pollen can be an allergen. Pistillate catkins at maturity are 8 to 20 cm long with rotund-ovate, tricarpellate subsessile fruits 5 to 8 mm long. Each capsule contains many minute seeds with long, white, cottony hairs.
Seed production and dissemination
The seed ripens and is disseminated by late May to late June in Oregon and Washington, but frequently not until mid-July in Idaho and Montana. Abundant seed crops are usually produced every year. Attached to its cotton, the seed is light and buoyant and can be transported long distances by wind and water. Although highly viable, longevity of P. trichocarpa seed under natural conditions may be as short as two weeks to a month. This can be increased with cold storage.
Seedling development
Moist seedbeds are essential for high germination, and seedling survival depends on continuously favorable conditions during the first month. Wet bottomlands of rivers and major streams frequently provide such conditions, particularly where bare soil has been exposed or new soil laid down. Germination is epigeal (above ground). P. trichocarpa seedlings do not usually become established in abundance after logging unless special measures are taken to prepare the bare, moist seedbeds required for initial establishment. Where seedlings become established in great numbers, they thin out naturally by age five because the weaker seedlings of this shade-intolerant species are suppressed.
Vegetative reproduction
Due to its high levels of rooting hormones, P. trichocarpa sprouts readily. After logging operations, it sometimes regenerates naturally from rooting of partially buried fragments of branches or from stumps. Sprouting from roots also occurs. The species also has the ability to abscise shoots complete with green leaves. These shoots drop to the ground and may root where they fall or may be dispersed by water transport. In some situations, abscission may be one means of colonizing exposed sandbars.
Taxonomy
"Trichocarpa" is Greek for "hairy fruits". These scientific names are now considered synonymous with P. trichocarpa:
Distribution and habitat
The native range of P. trichocarpa covers large sections of western North America. It extends from Southeast Alaska's Kodiak Island and Cook Inlet to latitude 62° 30′ N., through British Columbia and the forested areas of Washington and Oregon, to the mountains in southern California and northern Baja California (31°N). It is also found inland, generally on the west side of the Rocky Mountains, in British Columbia, southwestern Alberta, western Montana, and north-to-central Idaho. Scattered small populations have been noted in southeastern Alberta, eastern Montana, western North Dakota, western Wyoming, Utah, and Nevada.
Black cottonwood grows on alluvial sites, riparian habitats, and moist woods on mountain slopes, from sea level to elevations of . It often forms extensive stands on bottomlands of major streams and rivers at low elevations along the Pacific Coast, west of the Cascade Range. In eastern Washington and other dry areas, it is restricted to protected valleys and canyon bottoms, along streambanks, and edges of ponds and meadows. It grows on a variety of soils from moist silts, gravels, and sands to rich humus, loams, and occasionally clays. Black cottonwood is a pioneer species that grows best in full sunlight and commonly establishes on recently disturbed alluvium. Seeds are numerous and widely dispersed because of their cottony tufts, enabling the species to colonize even burn sites, if conditions for establishment are met. Seral communities dominated or codominated by cottonwood are maintained by periodic flooding or other types of soil disturbance. Black cottonwood has low drought tolerance; it is flood-tolerant but cannot tolerate brackish water or stagnant pools.
P. trichocarpa has been one of the most successful introductions of trees to the otherwise almost treeless Faroe Islands.
The species was imported from Alaska to Iceland in 1944 and has since become one of the most widespread trees in the country.
Ecology
Although the most populous cottonwood of the Pacific Northwest, it hybridizes with the region's three other species: balsam poplar, plains cottonwood, and narrowleaf cottonwood; all four have similar appearances and provide habitats for various animals.
Cottonwoods are shade intolerant. Black cottonwood thrives by colonizing disturbed sites, but can be replaced by conifers. The wood is relatively weak and waterlogged, often splitting during freezes. It is susceptible to rot as well. Woodpeckers create cavities which various animals can use for nests. Larger birds nest in the large upper branches. Beavers use the trees as food and dam-building material.
Cultivation
It is grown as an ornamental tree, valued for its fast growth and scented foliage in spring, detectable from over 100 m distance. The roots are however invasive, and it can damage the foundations of buildings on shrinkable clay soils if planted nearby (Mitchel 1996).
Branches can be added to potted plants to stimulate rooting.
Uses
Traditional
The tree was and is significant for many Native American tribes of the Western United States. Some Native Americans consumed cottonwood inner bark and sap, feeding their horses the inner bark and foliage. The wood, roots and bark have been used for firewood, canoe making, rope, fish traps, baskets and structures. The gum-like sap was used as a glue or as waterproofing. The Quinault used it for post wood. The Cowlitz made the base (hearth board) of their fire-making tool, a bow drill, with its wood. The Squaxin cut young branches for building sweat lodges.
Medicinal
The tree had medicinal value as well. The Squaxin used the bark for sore throats and for the treatment of tuberculosis, and used water and the bruised leaves as an antiseptic mixture. The Klallam used the buds for an eye treatment. The Quinault extracted gum from the burls and applied it to cuts on the skin.
Modern
Commercial extracts are produced from the fragrant buds for use as a perfume in cosmetics.
Lumber
P. trichocarpa wood is light-weight and although not particularly strong, is strong for its weight. The wood material has short, fine cellulose fibres that are used in pulp for high-quality book and magazine paper. The wood is also excellent for production of plywood. Living trees are used as windbreaks.
This species grows very quickly; trees in plantations in Great Britain have reached tall in 11 years, and tall in 28 years. It can reach suitable size for pulp production in 10–15 years and about 25 years for timber production.
As a model species
Populus trichocarpa has several qualities that make it a good model species for trees:
Modest genome size (although significantly larger than that of the other model plant, Arabidopsis thaliana)
Rapid growth (for a tree)
Reaches reproductive maturity in 4–6 years
Economically important
It represents a phenotypically diverse genus
For these reasons, the species has been extensively studied. Its genome sequence was published in 2006. More than 121,000 expressed sequence tags have been sequenced from it. The wide range of topics studied by using P. trichocarpa include the effects of ethylene, lignin biosynthesis, drought tolerance, and wood formation.
Cultural significance
The Chehalis believed that the tree was intelligent and had a form of special physical agency, moving on its own without the need of wind. Due to this belief, they refused to use it for firewood.
Genome
The sequence of P. trichocarpa is that of an individual female specimen "Nisqually-1", named after the Nisqually River in Washington, where the specimen was collected. The sequencing was performed at the Joint Genome Institute using the shotgun method. The depth of the sequencing was about 7.5 x (meaning that each base pair was sequenced on average 7.5 times). Genome annotation was done primarily by the Joint Genome Institute, the Oak Ridge National Laboratory, the Umeå Plant Science Centre, and Genome Canada.
Prior to the publication of the P. trichocarpa genome, the only available plant genomes were those of thale cress and rice, both of which are herbaceous. P. trichocarpa was the first woody plant to have its genome sequenced. Considering the economic importance of wood and wood products, the availability of a tree genome was an important step. The sequence also allows evolutionary comparisons and the elucidation of basic molecular differences between herbaceous and woody plants.
Characteristics
Size: 485 million base pairs (human genome: 3 billion base pairs)
Proportion of heterochromatin to euchromatin: 3:7
Number of chromosomes: 19
Number of putative genes: 45,555, the largest number of genes ever recorded (estimate in September 2008)
Mitochondrial genome: 803,000 base pairs, 52 genes
Chloroplast genome: 157,000 base pairs, 101 genes
Somatic mosaicism
Genome-wide analysis of 11 clumps of P. trichocarpa trees reveals significant genetic differences between the roots and the leaves and branches of the same tree. The variation within a specimen is as much as found between unrelated trees. These results may be important in resolving debate in evolutionary biology regarding somatic mutation (that evolution can occur within individuals, not solely among populations), with a variety of implications.
| Biology and health sciences | Malpighiales | Plants |
2181360 | https://en.wikipedia.org/wiki/Tarski%27s%20axioms | Tarski's axioms | Tarski's axioms are an axiom system for Euclidean geometry, specifically for that portion of Euclidean geometry that is formulable in first-order logic with identity (i.e. is formulable as an elementary theory). As such, it does not require an underlying set theory. The only primitive objects of the system are "points" and the only primitive predicates are "betweenness" (expressing the fact that a point lies on a line segment between two other points) and "congruence" (expressing the fact that the distance between two points equals the distance between two other points). The system contains infinitely many axioms.
The axiom system is due to Alfred Tarski, who first presented it in 1926. Other modern axiomatizations of Euclidean geometry are Hilbert's axioms (1899) and Birkhoff's axioms (1932).
Using his axiom system, Tarski was able to show that the first-order theory of Euclidean geometry is consistent, complete and decidable: every sentence in its language is either provable or disprovable from the axioms, and we have an algorithm which decides for any given sentence whether it is provable or not.
Overview
Early in his career Tarski taught geometry and researched set theory. His coworker Steven Givant (1999) explained Tarski's take-off point:
From Enriques, Tarski learned of the work of Mario Pieri, an Italian geometer who was strongly influenced by Peano. Tarski preferred Pieri's system [of his Point and Sphere memoir], where the logical structure and the complexity of the axioms were more transparent.
Givant then says that "with typical thoroughness" Tarski devised his system:
What was different about Tarski's approach to geometry? First of all, the axiom system was much simpler than any of the axiom systems that existed up to that time. In fact the length of all of Tarski's axioms together is not much more than just one of Pieri's 24 axioms. It was the first system of Euclidean geometry that was simple enough for all axioms to be expressed in terms of the primitive notions only, without the help of defined notions. Of even greater importance, for the first time a clear distinction was made between full geometry and its elementary — that is, its first order — part.
Like other modern axiomatizations of Euclidean geometry, Tarski's employs a formal system consisting of symbol strings, called sentences, whose construction respects formal syntactical rules, and rules of proof that determine the allowed manipulations of the sentences. Unlike some other modern axiomatizations, such as Birkhoff's and Hilbert's, Tarski's axiomatization has no primitive objects other than points, so a variable or constant cannot refer to a line or an angle. Because points are the only primitive objects, and because Tarski's system is a first-order theory, it is not even possible to define lines as sets of points. The only primitive relations (predicates) are "betweenness" and "congruence" among points.
Tarski's axiomatization is shorter than its rivals, in a sense Tarski and Givant (1999) make explicit. It is more concise than Pieri's because Pieri had only two primitive notions while Tarski introduced three: point, betweenness, and congruence. Such economy of primitive and defined notions means that Tarski's system is not very convenient for doing Euclidean geometry. Rather, Tarski designed his system to facilitate its analysis via the tools of mathematical logic, i.e., to facilitate deriving its metamathematical properties. Tarski's system has the unusual property that all sentences can be written in universal-existential form, a special case of the prenex normal form in which all universal quantifiers precede any existential quantifiers, so that every sentence can be recast in that shape (see the sketch below). This fact allowed Tarski to prove that Euclidean geometry is decidable: there exists an algorithm which can determine the truth or falsity of any sentence. Tarski's axiomatization is also complete. This does not contradict Gödel's first incompleteness theorem, because Tarski's theory lacks the expressive power needed to interpret Robinson arithmetic.
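Assuming the standard presentation of this universal–existential form (the exact formula is not preserved in the text above, and φ here stands for any quantifier-free formula), such sentences have the shape:

```latex
\forall u_1 \cdots \forall u_k \; \exists v_1 \cdots \exists v_m \;\; \varphi(u_1, \ldots, u_k, v_1, \ldots, v_m)
```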
The axioms
Alfred Tarski worked on the axiomatization and metamathematics of Euclidean geometry intermittently from 1926 until his death in 1983, with Tarski (1959) heralding his mature interest in the subject. The work of Tarski and his students on Euclidean geometry culminated in the monograph Schwabhäuser, Szmielew, and Tarski (1983), which set out the 10 axioms and one axiom schema shown below, the associated metamathematics, and a fair bit of the subject. Gupta (1965) made important contributions, and Tarski and Givant (1999) discuss the history.
Fundamental relations
These axioms are a more elegant version of a set Tarski devised in the 1920s as part of his investigation of the metamathematical properties of Euclidean plane geometry. This objective required reformulating that geometry as a first-order theory. Tarski did so by positing a universe of points, with lower case letters denoting variables ranging over that universe. Equality is provided by the underlying logic (see First-order logic#Equality and its axioms). Tarski then posited two primitive relations:
Betweenness, a triadic relation. The atomic sentence Bxyz denotes that the point y is "between" the points x and z, in other words, that y is a point on the line segment xz. (This relation is interpreted inclusively, so that Bxyz is trivially true whenever x=y or y=z).
Congruence (or "equidistance"), a tetradic relation. The atomic sentence Cwxyz or commonly wx ≡ yz can be interpreted as wx is congruent to yz, in other words, that the length of the line segment wx is equal to the length of the line segment yz.
Betweenness captures the affine aspect (such as the parallelism of lines) of Euclidean geometry; congruence, its metric aspect (such as angles and distances). The background logic includes identity, a binary relation denoted by =.
The axioms below are grouped by the types of relation they invoke, then sorted, first by the number of existential quantifiers, then by the number of atomic sentences. The axioms should be read as universal closures; hence any free variables should be taken as tacitly universally quantified.
Congruence axioms
Reflexivity of Congruence
Identity of Congruence
Transitivity of Congruence
Commentary
While the congruence relation is, formally, a 4-way relation among points, it may also be thought of, informally, as a binary relation between two line segments and . The "Reflexivity" and "Transitivity" axioms above, combined, prove both:
that this binary relation is in fact an equivalence relation
it is reflexive: .
it is symmetric .
it is transitive .
and that the order in which the points of a line segment are specified is irrelevant.
.
.
.
The "transitivity" axiom asserts that congruence is Euclidean, in that it respects the first of Euclid's "common notions".
The "Identity of Congruence" axiom states, intuitively, that if xy is congruent with a segment that begins and ends at the same point, x and y are the same point. This is closely related to the notion of reflexivity for binary relations.
Betweenness axioms
Identity of Betweenness
The only point on the line segment xx is x itself; formally, Bxyx implies x = y.
Axiom of Pasch
Axiom schema of Continuity
Let φ(x) and ψ(y) be first-order formulae containing no free instances of either a or b. Let there also be no free instances of x in ψ(y) or of y in φ(x). Then all instances of the following schema are axioms:
Let r be a ray with endpoint a. Let the first order formulae φ and ψ define subsets X and Y of r, such that every point in Y is to the right of every point of X (with respect to a). Then there exists a point b in r lying between X and Y. This is essentially the Dedekind cut construction, carried out in a way that avoids quantification over sets.
Note that the formulae φ(x) and ψ(y) may contain parameters, i.e. free variables different from a, b, x, y. And indeed, each instance of the axiom scheme that does not contain parameters can be proven from the other axioms.
Lower Dimension
There exist three noncollinear points. Without this axiom, the theory could be modeled by the one-dimensional real line, a single point, or even the empty set.
Congruence and betweenness
Upper Dimension
Three points equidistant from two distinct points form a line. Without this axiom, the theory could be modeled by three-dimensional or higher-dimensional space.
Axiom of Euclid
Three variants of this axiom can be given, labeled A, B and C below. They are equivalent to each other given the remaining Tarski's axioms, and indeed equivalent to Euclid's parallel postulate.
A:
Let a line segment join the midpoint of two sides of a given triangle. That line segment will be half as long as the third side. This is equivalent to the interior angles of any triangle summing to two right angles.
B:
Given any triangle, there exists a circle that includes all of its vertices.
C:
Given any angle and any point v in its interior, there exists a line segment including v, with an endpoint on each side of the angle.
Each variant has an advantage over the others:
A dispenses with existential quantifiers;
B has the fewest variables and atomic sentences;
C requires but one primitive notion, betweenness. This variant is the usual one given in the literature.
Five Segment
Begin with two triangles, xuz and x'u'z'. Draw the line segments yu and y'u', connecting a vertex of each triangle to a point on the side opposite to the vertex. The result is two divided triangles, each made up of five segments. If four segments of one triangle are each congruent to a segment in the other triangle, then the fifth segments in both triangles must be congruent.
This is equivalent to the side-angle-side rule for determining that two triangles are congruent; if the angles uxz and u'x'z' are congruent (there exist congruent triangles xuz and x'u'z'), and the two pairs of incident sides are congruent (xu ≡ x'u' and xz ≡ x'z'), then the remaining pair of sides is also congruent (uz ≡ u'z').
Segment Construction
For any point y, it is possible to draw in any direction (determined by x) a line congruent to any segment ab.
Discussion
According to Tarski and Givant (1999: 192-93), none of the above axioms are fundamentally new. The first four axioms establish some elementary properties of the two primitive relations. For instance, Reflexivity and Transitivity of Congruence establish that congruence is an equivalence relation over line segments. The Identity of Congruence and of Betweenness govern the trivial case when those relations are applied to nondistinct points. The theorem xy≡zz ↔ x=y ↔ Bxyx extends these Identity axioms.
A number of other properties of Betweenness are derivable as theorems including:
Reflexivity: Bxxy ;
Symmetry: Bxyz → Bzyx ;
Transitivity: (Bxyw ∧ Byzw) → Bxyz ;
Connectivity: (Bxyw ∧ Bxzw) → (Bxyz ∨ Bxzy).
The last two properties totally order the points making up a line segment.
The Upper and Lower Dimension axioms together require that any model of these axioms have dimension 2, i.e. that we are axiomatizing the Euclidean plane. Suitable changes in these axioms yield axiom sets for Euclidean geometry for dimensions 0, 1, and greater than 2 (Tarski and Givant 1999: Axioms 8(1), 8(n), 9(0), 9(1), 9(n) ). Note that solid geometry requires no new axioms, unlike the case with Hilbert's axioms. Moreover, Lower Dimension for n dimensions is simply the negation of Upper Dimension for n - 1 dimensions.
When the number of dimensions is greater than 1, Betweenness can be defined in terms of congruence (Tarski and Givant, 1999). First define the relation "≤" (where is interpreted "the length of line segment is less than or equal to the length of line segment "):
In the case of two dimensions, the intuition is as follows: For any line segment xy, consider the possible range of lengths of xv, where v is any point on the perpendicular bisector of xy. It is apparent that while there is no upper bound to the length of xv, there is a lower bound, which occurs when v is the midpoint of xy. So if xy is shorter than or equal to zu, then the range of possible lengths of xv will be a superset of the range of possible lengths of zw, where w is any point on the perpendicular bisector of zu.
Betweenness can then be defined by using the intuition that the shortest distance between any two points is a straight line:
The Axiom Schema of Continuity assures that the ordering of points on a line is complete (with respect to first-order definable properties). As was pointed out by Tarski, this first-order axiom schema may be replaced by a more powerful second-order Axiom of Continuity if one allows for variables to refer to arbitrary sets of points. The resulting second-order system is equivalent to Hilbert's set of axioms. (Tarski and Givant 1999)
The Axioms of Pasch and Euclid are well known. The Segment Construction axiom makes measurement and the Cartesian coordinate system possible—simply assign the length 1 to some arbitrary non-empty line segment. Indeed, it is shown in (Schwabhäuser 1983) that by specifying two distinguished points on a line, called 0 and 1, we can define an addition, multiplication and ordering, turning the set of points on that line into a real-closed ordered field. We can then introduce coordinates from this field, showing that every model of Tarski's axioms is isomorphic to the two-dimensional plane over some real-closed ordered field.
The standard geometric notions of parallelism and intersection of lines (where lines are represented by two distinct points on them), right angles, congruence of angles, similarity of triangles, tangency of lines and circles (represented by a center point and a radius) can all be defined in Tarski's system.
Let wff stand for a well-formed formula (or syntactically correct first-order formula) in Tarski's system. Tarski and Givant (1999: 175) proved that Tarski's system is:
Consistent: There is no wff such that it and its negation can both be proven from the axioms;
Complete: Every wff or its negation is a theorem provable from the axioms;
Decidable: There exists an algorithm that decides for every wff whether it is provable or disprovable from the axioms. This follows from Tarski's:
Decision procedure for the real closed field, which he found by quantifier elimination (the Tarski–Seidenberg theorem);
Axioms admitting the above-mentioned representation as a two-dimensional plane over a real closed field.
This has the consequence that every statement of (second-order, general) Euclidean geometry which can be formulated as a first-order sentence in Tarski's system is true if and only if it is provable in Tarski's system, and this provability can be automatically checked with Tarski's algorithm. This, for instance, applies to all theorems in Euclid's Elements, Book I. An example of a theorem of Euclidean geometry which cannot be so formulated is the Archimedean property: to any two positive-length line segments S1 and S2 there exists a natural number n such that nS1 is longer than S2. (This is a consequence of the fact that there are real-closed fields that contain infinitesimals.) Other notions that cannot be expressed in Tarski's system are constructibility with straightedge and compass and statements that talk about "all polygons", etc.
Gupta (1965) proved Tarski's axioms independent, excepting Pasch and Reflexivity of Congruence.
Negating the Axiom of Euclid yields hyperbolic geometry, while eliminating it outright yields absolute geometry. Full (as opposed to elementary) Euclidean geometry requires giving up a first order axiomatization: replace φ(x) and ψ(y) in the axiom schema of Continuity with x ∈ A and y ∈ B, where A and B are universally quantified variables ranging over sets of points.
Comparison with Hilbert's system
Hilbert's axioms for plane geometry number 16, and include Transitivity of Congruence and a variant of the Axiom of Pasch. The only notion from intuitive geometry invoked in the remarks to Tarski's axioms is triangle. (Versions B and C of the Axiom of Euclid refer to "circle" and "angle," respectively.) Hilbert's axioms also require "ray," "angle," and the notion of a triangle "including" an angle. In addition to betweenness and congruence, Hilbert's axioms require a primitive binary relation "on," linking a point and a line.
Hilbert uses two axioms of Continuity, and they require second-order logic. By contrast, Tarski's Axiom schema of Continuity consists of infinitely many first-order axioms. Such a schema is indispensable; Euclidean geometry in Tarski's (or equivalent) language cannot be finitely axiomatized as a first-order theory.
Hilbert's system is therefore considerably stronger: every model is isomorphic to the real plane (using the standard notions of points and lines). By contrast, Tarski's system has many non-isomorphic models: for every real-closed field F, the plane F2 provides one such model (where betweenness and congruence are defined in the obvious way).
The first four groups of axioms of Hilbert's axioms for plane geometry are bi-interpretable with Tarski's axioms minus continuity.
| Mathematics | Axiomatic systems | null |
2183839 | https://en.wikipedia.org/wiki/Macrobrachium%20rosenbergii | Macrobrachium rosenbergii | Macrobrachium rosenbergii, also known as the giant river prawn or giant freshwater prawn, is a commercially important species of palaemonid freshwater prawn. It is found throughout the tropical and subtropical areas of the Indo-Pacific region, from India to Southeast Asia and Northern Australia. The giant freshwater prawn has also been introduced to parts of Africa, Thailand, China, Japan, New Zealand, the Americas, and the Caribbean. It is one of the biggest freshwater prawns in the world, and is widely cultivated in several countries for food. While M. rosenbergii is considered a freshwater species, the larval stage of the animal depends on estuarine brackish water. Once the individual shrimp has grown beyond the planktonic stage and becomes a juvenile, it migrates from the estuary and lives entirely in fresh water.
It is also known as the Malaysian prawn, freshwater scampi (India), or cherabin (Australia). Locally, it is known as golda chingri () in Bangladesh and India, udang galah in Indonesia and Malaysia, uwáng or uláng in the Philippines, Thailand prawn in Southern China and Taiwan (Chinese: Tàiguó xiā 泰國蝦), and (กุ้งแม่น้ำ) or (กุ้งก้ามกราม) in Thailand.
Description
M. rosenbergii can grow to a length over . They are predominantly brownish in colour, but can vary. Smaller individuals may be greenish and display faint vertical stripes. The rostrum is very prominent and contains 11 to 14 dorsal teeth and 8 to 11 ventral teeth. The first pair of walking legs (pereiopods) is elongated and very thin, ending in delicate claws (chelipeds), which are used as feeding appendages. The second pair of walking legs is much larger and more powerful, especially in males. The movable claws of the second pair of walking legs are distinctively covered in dense bristles (setae) that give them a velvety appearance. The colour of the claws in males varies according to their social dominance.
Females can be distinguished from males by their wider abdomens and smaller second pereiopods. The genital openings are found on the body segments containing the fifth pereiopods and the third pereiopods in males and females, respectively.
This sexual dimorphism is driven by the IAG physiological sexual switch, discovered by Prof. Amir Sagi and his research group, and monosex biotechnologies were established for all-male and all-female culture. The all-male technology includes the first application of temporal RNA interference (RNAi) in the field of aquaculture. Also, all-female culture technology was established. Crustacean monosex technologies are applied in Vietnam, Thailand, China, Malaysia and Israel.
Morphotypes
Three different morphotypes of males exist. The first stage is called "small male" (SM); this smallest stage has short, nearly translucent claws. If conditions allow, small males grow and metamorphose into "orange claws" (OC), which have large orange claws on their second chelipeds, which may have a length of 0.8 to 1.4 times their body size. OC males later may transform into the third and final stage, the "blue claw" (BC) males. These have blue claws, and their second chelipeds may become twice as long as their bodies.
Males of M. rosenbergii have a strict hierarchy; the territorial BC males dominate the OCs, which in turn dominate the SMs. The presence of BC males inhibits the growth of SMs and delays the metamorphosis of OCs into BCs; an OC keeps growing until it is larger than the largest BC male in its neighbourhood before transforming. All three male stages are sexually active, and females that have undergone their premating moult cooperate with any male to reproduce. BC males protect the females until their shells have hardened; OCs and SMs show no such behaviour.
Life cycle
In mating, the male deposits spermatophores on the underside of the female's thorax, between the walking legs. The female then extrudes eggs, which pass through the spermatophores. The female carries the fertilised eggs with her until they hatch; the time may vary, but is generally less than 3 weeks. Females lay 10,000–50,000 eggs up to five times per year.
From these eggs hatch zoeae, the first larval stage of crustaceans. They go through several larval stages in brackish water before metamorphosing into postlarvae, at which stage they are long and resemble adults. This metamorphosis usually takes place about 32 to 35 days after hatching. These postlarvae then migrate back into fresh water.
| Biology and health sciences | Shrimps and prawns | Animals |
2184383 | https://en.wikipedia.org/wiki/Boiling-point%20elevation | Boiling-point elevation | Boiling-point elevation is the phenomenon whereby the boiling point of a liquid (a solvent) will be higher when another compound is added, meaning that a solution has a higher boiling point than a pure solvent. This happens whenever a non-volatile solute, such as a salt, is added to a pure solvent, such as water. The boiling point can be measured accurately using an ebullioscope.
Explanation
The boiling point elevation is a colligative property, which means that boiling point elevation is dependent on the number of dissolved particles but not their identity. It is an effect of the dilution of the solvent in the presence of a solute. It is a phenomenon that happens for all solutes in all solutions, even in ideal solutions, and does not depend on any specific solute–solvent interactions. The boiling point elevation happens both when the solute is an electrolyte, such as various salts, and a nonelectrolyte. In thermodynamic terms, the origin of the boiling point elevation is entropic and can be explained in terms of the vapor pressure or chemical potential of the solvent. In both cases, the explanation depends on the fact that many solutes are only present in the liquid phase and do not enter into the gas phase (except at extremely high temperatures).
The effect on the solvent's vapor pressure is described by Raoult's law, while the free energy change and chemical potential are described through the Gibbs free energy. Most solutes remain in the liquid phase and do not enter the gas phase, except at very high temperatures.
In terms of vapor pressure, a liquid boils when its vapor pressure equals the surrounding pressure. A nonvolatile solute lowers the solvent’s vapor pressure, meaning a higher temperature is needed for the vapor pressure to equalize the surrounding pressure, causing the boiling point to elevate.
In terms of chemical potential, at the boiling point, the liquid and gas phases have the same chemical potential. Adding a nonvolatile solute lowers the solvent’s chemical potential in the liquid phase, but the gas phase remains unaffected. This shifts the equilibrium between phases to a higher temperature, elevating the boiling point.
Relationship with freezing-point depression
Freezing-point depression is analogous to boiling point elevation, though the magnitude of freezing-point depression is higher for the same solvent and solute concentration. These phenomena extend the liquid range of a solvent in the presence of a solute.
Related equations for calculating the boiling-point elevation
The extent of boiling-point elevation can be calculated by applying the Clausius–Clapeyron relation and Raoult's law together with the assumption of the non-volatility of the solute. The result is that in dilute ideal solutions, the extent of boiling-point elevation is directly proportional to the molal concentration (amount of substance per mass of solvent) of the solution according to the equation:
ΔTb = Kb · bc
where the boiling point elevation ΔTb is defined as Tb (solution) − Tb (pure solvent).
Kb, the ebullioscopic constant, which is dependent on the properties of the solvent. It can be calculated as Kb = RTb²M/ΔHv, where R is the gas constant, Tb is the boiling temperature of the pure solvent [in K], M is the molar mass of the solvent, and ΔHv is the heat of vaporization per mole of the solvent.
bc is the colligative molality, calculated by taking dissociation into account since the boiling point elevation is a colligative property, dependent on the number of particles in solution. This is most easily done by using the van 't Hoff factor i as bc = bsolute · i, where bsolute is the molality of the solution. The factor i accounts for the number of individual particles (typically ions) formed by a compound in solution. Examples:
i = 1 for sugar in water
i = 1.9 for sodium chloride in water, due to the near full dissociation of NaCl into Na+ and Cl− (often simplified as 2)
i = 2.3 for calcium chloride in water, due to nearly full dissociation of CaCl2 into Ca2+ and 2Cl− (often simplified as 3)
Non-integer i factors result from ion pairs in solution, which lower the effective number of particles in the solution.
Equation after including the van 't Hoff factor
ΔTb = Kb · bsolute · i
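A worked sketch of both formulas for sodium chloride in water; the numerical constants (gas constant, water's boiling point, molar mass, and heat of vaporization) are standard textbook values rather than values given in this article:

```python
# Ebullioscopic constant of water from Kb = R * Tb**2 * M / ΔHv,
# then the boiling-point elevation for a 0.5 mol/kg NaCl solution.
R = 8.314          # gas constant, J/(mol*K)
Tb = 373.15        # boiling point of pure water, K
M = 0.01802        # molar mass of water, kg/mol
dHv = 40660.0      # heat of vaporization of water, J/mol

Kb = R * Tb**2 * M / dHv
print(round(Kb, 3))            # ≈ 0.513 K*kg/mol

b_solute = 0.5                 # molality of NaCl, mol/kg
i = 1.9                        # van 't Hoff factor for NaCl (as above)
delta_Tb = Kb * b_solute * i
print(round(delta_Tb, 3))      # ≈ 0.49 K above the pure-water boiling point
```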
The above formula becomes less accurate at high concentrations, owing to the nonideality of the solution. If the solute is volatile, one of the key assumptions used in deriving the formula does not hold, since the derivation applies to solutions of non-volatile solutes in a volatile solvent. For volatile solutes it is more accurate to treat the system as a mixture of volatile compounds, and the effect of the solute on the boiling point must be determined from the phase diagram of the mixture. In such cases, the mixture can sometimes have a lower boiling point than either of the pure components; a mixture with a minimum boiling point is a type of azeotrope.
Ebullioscopic constants
Values of the ebullioscopic constants Kb for selected solvents:
Uses
Together with the formula above, the boiling-point elevation can be used to measure the degree of dissociation or the molar mass of the solute. This kind of measurement is called ebullioscopy (Latin-Greek for "boiling-viewing"). However, superheating affects the precision of the measurement and is difficult to avoid, so ΔTb is hard to measure exactly, although the problem can be partially overcome with a Beckmann thermometer. In practice, cryoscopy is used more often because the freezing point is usually easier to measure with precision.
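A minimal sketch of how ebullioscopy can yield a molar mass, assuming a non-dissociating solute (i = 1) in water; the measurement values below are made up for illustration:

# Hypothetical ebullioscopy measurement: estimate the molar mass of a
# non-dissociating solute from the observed boiling-point elevation.
Kb_water = 0.512          # K*kg/mol (assumed value for water)
mass_solute_g = 10.0      # illustrative sample mass
mass_solvent_kg = 0.250   # illustrative mass of water
delta_Tb = 0.112          # illustrative measured elevation, K

molality = delta_Tb / Kb_water          # mol of solute per kg of solvent
moles = molality * mass_solvent_kg
molar_mass = mass_solute_g / moles
print(round(molar_mass))                # ~183 g/mol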
| Physical sciences | Thermodynamics | Chemistry |
2184516 | https://en.wikipedia.org/wiki/Common%20ringtail%20possum | Common ringtail possum | The common ringtail possum (Pseudocheirus peregrinus, Greek for "false hand" and Latin for "pilgrim" or "alien") is an Australian marsupial.
It lives in a variety of habitats and eats a variety of leaves of both native and introduced plants, as well as flowers, fruits and sap. This possum also consumes caecotropes, which is material fermented in the caecum and expelled during the daytime when it is resting in a nest. This behaviour is called caecotrophy and is similar to that seen in rabbits.
Taxonomy
The common ringtail possum is currently classified as one of the two living species in the genus Pseudocheirus; the species of Pseudochirulus and other ringtail genera were formerly also classified in Pseudocheirus. Several subspecies have been described:
Pseudocheirus peregrinus peregrinus, the type subspecies based on a collection made at Endeavour River
Pseudocheirus peregrinus convolutor, (Eastern ringtail possum or Southeastern ringtail possum)
Pseudocheirus peregrinus pulcher, (Rufous ringtail possum)
Pseudocheirus occidentalis (Ngwayir, or the Western ringtail possum), found in the south west of Australia, used to be considered a subspecies of Pseudocheirus peregrinus; however, it is now formally considered a separate species.
Description
The common ringtail possum weighs between and is approximately cm long when grown (excluding the tail, which is roughly the same length again). It has grey or black fur with white patches behind the eyes and usually a cream-coloured belly. It has a long prehensile tail which normally displays a distinctive white tip over 25% of its length. The back feet are syndactyl, which helps it to climb. The ringtail possum's molars have sharp and pointed cusps.
Distribution and habitat
The common ringtail possum ranges on the east coast of Australia, as well as Tasmania and a part of southwestern Australia. They generally live in temperate and tropical environments and are rare in drier environments. Ringtail possums prefer forests of dense brush, particularly eucalyptus forests. The common ringtail possum and its relatives occupy a range of niches similar to those of lemurs, monkeys, squirrels, and bushbabies in similar forests on other continents. It is less prolific and less widespread than the common brushtail possum.
Behaviour
The common ringtail possum is nocturnal and well adapted to arboreal life, relying on its prehensile tail when climbing; it only occasionally descends to the ground. Ringtail possums communicate with soft, high-pitched, twittering calls.
Diet and foraging
The common ringtail possum feeds on a wide variety of plants in the family Myrtaceae including the foliage, flowers and fruits from shrubs and lower canopy. Some populations are also known to feed on the leaves of cypress pine (Callitris), wattles (Acacia spp.) and plant gum or resins.
When foraging, ringtail possums prefer young leaves over old ones. One study found the emergence of young possums from their pouches corresponds to the flowering and fruiting of the tea-tree, Leptospermum and the peak of fresh plant growth. Young eucalypt leaves are richer in nitrogen and have less dense cell walls than older leaves; however, the protein gained from them is less available due to higher amounts of tannins. When feeding, the possum's molars slice through the leaves, slitting them into pieces. The possum's gastrointestinal tract sends the fine particles to the caecum and the coarse ones to the colon. These particles stay in the caecum for up to 70 hours where the cell walls and tanned cytoplasts are partially digested.
What distinguishes the digestive system of the common ringtail possum from that of the koala and the greater glider is that the caecal contents are expelled as caecotropes, reingested and passed into the stomach. Because of this, the ringtail possum is able to gain more protein. This is also done by lagomorphs (rabbits, hares and pikas). Hard faeces are produced during the night while the animal feeds and are not eaten, while caecotropes are produced during the day while it rests and are eaten.
Metabolism
The re-ingestion of caecotropes also serves to maintain the ringtail possum's energy balance. Ringtail possums gain much of their gross energy from reingestion. The common ringtail possum has a daily maintenance nitrogen requirement (MNR) of 290 mg N/kg0.75. Common ringtail possums gain much of their MNR from consuming their nitrogen-rich caecotropes. They would have to gain 620 mg N/kg0.75 otherwise. The ringtail possum recycles 96% of its liver's urea, which is then transferred into the caecum and made into bacterial protein. Only re-ingestion makes this effective and the bacterial protein must be digested in the stomach and the amino acids subsequently absorbed in the small intestine. This recycling also allows the possum to conserve water and urinate less. Reingestion allows the possum to live on low nitrogen eucalyptus leaves which is particularly important during late lactation. It has been found that at higher temperatures, the common ringtail possum consumes less food due to a limited ability to metabolize toxins found in their diet. Because 55% of their water intake comes from the leaves and foliage they consume, their metabolic rate must remain low and stable while facing water loss. In response to this challenge, common ringtail possums can control their body temperature and conserve water by using facultative hyperthermia to temporarily raise their internal body temperature, ranging from .
Nesting
Common ringtail possums live a gregarious lifestyle which centres on their communal nests, also called dreys. Ringtail possums build nests from tree branches and occasionally use tree hollows. A communal nest is made up of an adult female and an adult male, their dependent offspring and immature offspring of the previous year. A group of ringtail possums may build several dreys at different sites. Ringtail possums are territorial and will drive away strange conspecifics from their nests. A group has a strong attachment to its site; in one experiment in which a group was removed from its territory, the territory remained uncolonised for the following two years. Ringtail possum nests tend to be more common in low scrub and less common in heavily timbered areas with little understorey. Dreys contribute to the survival of the young when they are no longer carried on their mother's back.
Reproduction and growth
The common ringtail possum carries its young in a pouch, where they develop. Depending on the area, the mating season can take place anywhere between April and December, with the majority of the young born between May and July. The oestrous cycle of the ringtail possum lasts 28 days, and the species is both polyoestrous and polyovular. If a female prematurely loses her litter, she can return to oestrus and produce a second litter in October as a replacement if conditions are right. The average litter is two, although there are very occasionally triplets. Common ringtail possum young tend to grow relatively slowly because the milk provided to them is dilute and low in lipids. As with other marsupials, the common ringtail possum's milk changes through lactation. During the second phase of lactation, more solid foods are eaten, especially when the young first emerge from the pouch. During this time, the concentration of carbohydrates falls, while those of proteins and lipids reach their highest. The long lactation of the ringtail possums may give the young more time to learn skills in the communal nest as well as to climb and forage in the trees.
The young are first able to vocalise and open their eyes between 90 and 106 days of age. They leave their mother's pouch at 120–130 days. However, lactation usually continues until 180–220 days after birth but sometimes ends by 145 days. Both sexes become sexually mature in the first mating season after their birth.
Status
Common ringtail possum populations severely declined during the 1950s. However, populations seem to have recovered in recent times. Because they are largely arboreal, common ringtail possums are particularly affected by deforestation in Australia. They are also heavily preyed upon by the introduced red fox. They are also hit by cars, or killed by snakes, cats and dogs in suburban areas.
| Biology and health sciences | Diprotodontia | Animals |
13352174 | https://en.wikipedia.org/wiki/Quadrant%20%28instrument%29 | Quadrant (instrument) | A quadrant is an instrument used to measure angles up to 90°. Different versions of this instrument could be used to calculate various readings, such as longitude, latitude, and time of day. Its earliest recorded usage was in ancient India in Rigvedic times by Rishi Atri to observe a solar eclipse. It was then proposed by Ptolemy as a better kind of astrolabe. Several different variations of the instrument were later produced by medieval Muslim astronomers. Mural quadrants were important astronomical instruments in 18th-century European observatories, establishing a use for positional astronomy.
Etymology
The term quadrant, meaning one fourth, refers to the fact that early versions of the instrument were derived from astrolabes. The quadrant condensed the workings of the astrolabe into an area one fourth the size of the astrolabe face; it was essentially a quarter of an astrolabe.
History
During Rigvedic times in ancient India, quadrants called 'Tureeyams' were used to measure the extent of a great solar eclipse. The use of a Tureeyam for observing a solar eclipse by Rishi Atri is described in the fifth mandala of the Rigveda, most likely composed between c. 1500 and 1000 BCE.
Early accounts of a quadrant also come from Ptolemy's Almagest around AD 150. He described a "plinth" that could measure the altitude of the noon sun by projecting the shadow of a peg on a graduated arc of 90 degrees. This quadrant was unlike later versions of the instrument; it was larger and consisted of several moving parts. Ptolemy's version was a derivative of the astrolabe and the purpose of this rudimentary device was to measure the meridian angle of the sun.
Islamic astronomers in the Middle Ages improved upon these ideas and constructed quadrants throughout the Middle East, in observatories such as Marageh, Rey and Samarkand. At first these quadrants were usually very large and stationary, and could be rotated to any bearing to give both the altitude and azimuth for any celestial body. As Islamic astronomers made advancements in astronomical theory and observational accuracy they are credited with developing four different types of quadrants during the Middle Ages and beyond. The first of these, the sine quadrant, was invented by Muhammad ibn Musa al-Khwarizmi in the 9th century at the House of Wisdom in Baghdad. The other types were the universal quadrant, the horary quadrant and the astrolabe quadrant.
During the Middle Ages the knowledge of these instruments spread to Europe. In the 13th century Jewish astronomer Jacob ben Machir ibn Tibbon was crucial in further developing the quadrant. He was a skilled astronomer and wrote several volumes on the topic, including an influential book detailing how to build and use an improved version of the quadrant. The quadrant that he invented came to be known as the novus quadrans, or new quadrant. This device was revolutionary because it was the first quadrant to be built that did not involve several moving parts and thus could be much smaller and more portable.
Tibbon's Hebrew manuscripts were translated into Latin and improved upon by Danish scholar Peter Nightingale several years later. Because of the translation, Tibbon, or Prophatius Judaeus as he was known in Latin, became an influential name in astronomy. His new quadrant was based upon the idea that the stereographic projection that defines a planispheric astrolabe can still work if the astrolabe parts are folded into a single quadrant. The result was a device that was far cheaper, easier to use and more portable than a standard astrolabe. Tibbon's work had a far reach and influenced Copernicus, Christopher Clavius and Erasmus Reinhold; and his manuscript was referenced in Dante's Divine Comedy.
As the quadrant became smaller and thus more portable, its value for navigation was soon realized. The first documented use of the quadrant to navigate at sea is in 1461, by Diogo Gomes. Sailors began by measuring the height of Polaris to ascertain their latitude. This application of quadrants is generally attributed to Arab sailors who traded along the east coast of Africa and often travelled out of sight of land. It soon became more common to take the height of the sun at a given time due to the fact that Polaris is not visible south of the equator.
In 1618, the English mathematician Edmund Gunter further adapted the quadrant with an invention that came to be known as the Gunter quadrant. This pocket-sized quadrant was revolutionary because it was inscribed with projections of the tropics, the equator, the horizon and the ecliptic. With the correct tables one could use the quadrant to find the time, the date, the length of the day or night, the time of sunrise and sunset and the meridian. The Gunter quadrant was extremely useful but it had its drawbacks; the scales only applied to a certain latitude, so the instrument's use was limited at sea.
Types
There are several types of quadrants:
Mural quadrants, used for determining the time by measuring the altitudes of astronomical objects. Tycho Brahe created one of the largest mural quadrants. In order to tell time he would place two clocks next to the quadrant so that he could identify the minutes and seconds in relation to the measurements on the side of the instrument.
Large frame-based instruments used for measuring angular distances between astronomical objects.
Geometric quadrant, used by surveyors and navigators.
Davis quadrant, a compact, framed instrument used by navigators for measuring the altitude of an astronomical object.
They can also be classified as:
Altitude – The plain quadrant with plumb line, used to take the altitude of an object.
Gunner's – A type of clinometer used by an artillerist to measure the elevation or depression angle of a gun barrel of a cannon or mortar, both to verify proper firing elevation, and to verify the correct alignment of the weapon-mounted fire control devices.
Gunter's – A quadrant used for time determination as well as the length of day, when the sun had risen and set, the date, and the meridian using scales and curves of the quadrant along with related tables. It was invented by Edmund Gunter in 1623. Gunter's quadrant was fairly simple which allowed for its widespread and long-lasting use in the 17th and 18th centuries. Gunter expanded the basic features of other quadrants to create a convenient and comprehensive instrument. Its distinguishable feature included projections of the tropics, equator, ecliptic, and the horizon.
Islamic – King identified four types of quadrants that were produced by Muslim astronomers.
The sine quadrant (Arabic: Rubul Mujayyab) – also known as the Sinecal Quadrant – was used for solving trigonometric problems and taking astronomical observations. It was developed by al-Khwarizmi in 9th century Baghdad and prevalent until the nineteenth century. Its defining feature is a graph-paper like grid on one side that is divided into sixty equal intervals on each axis and is also bounded by a 90 degree graduated arc. A cord was attached to the apex of the quadrant with a bead, for calculation, and a plumb bob. They were also sometimes drawn on the back of astrolabes.
The universal (shakkāzīya) quadrant – used for solving astronomical problems for any latitude: These quadrants had either one or two sets of shakkāzīya grids and were developed in the fourteenth century in Syria. Some astrolabes are also printed on the back with the universal quadrant like an astrolabe created by Ibn al-Sarrāj.
The horary quadrant – used for finding the time with the sun: The horary quadrant could be used to find the time either in equal or unequal (length of the day divided by twelve) hours. Different sets of markings were created for either equal or unequal hours. For measuring the time in equal hours, the horary quadrant could only be used for one specific latitude while a quadrant for unequal hours could be used anywhere based on an approximate formula. One edge of the quadrant had to be aligned with the sun, and once aligned, a bead on the plumbline attached to the centre of the quadrant showed the time of the day. A British version dated 1311 was listed by Christie's in December 2023, with the claim of being "the earliest dated English scientific instrument" without showing any provenance. A further example exists dated 1396, from European sources (Richard II of England). The oldest horary quadrant was found during an excavation in 2013 in the Hanseatic town of Zutphen (Netherlands), is dated ca. 1300, and is in the local Stedelijk Museum in Zutphen.
The astrolabe/almucantar quadrant – a quadrant developed from the astrolabe: This quadrant was marked with one half of a typical astrolabe plate as astrolabe plates are symmetrical. A cord attached from the centre of the quadrant with a bead at the other end was moved to represent the position of a celestial body (sun or a star). The ecliptic and star positions were marked on the quadrant for the above. It is not known where and when the astrolabe quadrant was invented, existent astrolabe quadrants are either of Ottoman or Mamluk origin, while there have been discovered twelfth century Egyptian and fourteenth century Syrian treatises on the astrolabe quadrant. These quadrants proved to be very popular alternatives to astrolabes.
Geometric quadrant
The geometric quadrant is a quarter-circle panel usually of wood or brass. Markings on the surface might be printed on paper and pasted to the wood or painted directly on the surface. Brass instruments had their markings scribed directly into the brass.
For marine navigation, the earliest examples were found around 1460. They were not graduated in degrees but rather had the latitudes of the most common destinations directly scribed on the limb. When in use, the navigator would sail north or south until the quadrant indicated he was at the destination's latitude, turn in the direction of the destination and sail to the destination maintaining a course of constant latitude. After 1480, more of the instruments were made with limbs graduated in degrees.
Along one edge there were two sights forming an alidade. A plumb bob was suspended by a line from the centre of the arc at the top.
In order to measure the altitude of a star, the observer would view the star through the sights and hold the quadrant so that the plane of the instrument was vertical. The plumb bob was allowed to hang vertically, and the line indicated the reading on the arc's graduations. It was not uncommon for a second person to take the reading while the first concentrated on observing and holding the instrument in proper position.
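As a loose illustration of this procedure (not drawn from the article, and with the graduation convention assumed), a quadrant reading translates directly into an altitude, and for Polaris the altitude approximates the observer's latitude, which is what made the instrument useful for latitude sailing:

# Hypothetical sketch: converting a quadrant arc reading into an altitude.
# Depending on how the limb is graduated, the plumb line marks either the
# altitude directly or its complement (the zenith distance).
def altitude_from_reading(reading_deg, zero_at_horizontal=True):
    return reading_deg if zero_at_horizontal else 90.0 - reading_deg

reading = 41.5                                  # degrees, illustrative observation
alt = altitude_from_reading(reading)
print(f"Altitude of Polaris ~ {alt:.1f} deg, so latitude ~ {alt:.1f} deg N")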
The accuracy of the instrument was limited by its size and by the effect the wind or observer's motion would have on the plumb bob. For navigators on the deck of a moving ship, these limitations could be difficult to overcome.
Solar observations
In order to avoid staring into the sun to measure its altitude, navigators could hold the instrument in front of them with the sun to their side. By having the sunward sighting vane cast its shadow on the lower sighting vane, it was possible to align the instrument to the sun. Care would have to be taken to ensure that the altitude of the centre of the sun was determined. This could be done by averaging the elevations of the upper and lower umbra in the shadow.
Back observation quadrant
In order to perform measurements of the altitude of the sun, a back observation quadrant was developed.
With such a quadrant, the observer viewed the horizon from a sight vane (C in the figure on the right) through a slit in the horizon vane (B). This ensured the instrument was level. The observer moved the shadow vane (A) to a position on the graduated scale so as to cause its shadow to appear coincident with the level of the horizon on the horizon vane. This angle was the elevation of the sun.
Framed quadrant
Large frame quadrants were used for astronomical measurements, notably determining the altitude of celestial objects. They could be permanent installations, such as mural quadrants. Smaller quadrants could be moved. Like the similar astronomical sextants, they could be used in a vertical plane or made adjustable for any plane.
When set on a pedestal or other mount, they could be used to measure the angular distance between any two celestial objects.
The details on their construction and use are essentially the same as those of the astronomical sextants; refer to that article for details.
Navy: Used to gauge the elevation of a ship's cannon, the quadrant was placed on each gun's trunnion after loading in order to judge range. The reading was taken at the top of the ship's roll, the gun adjusted and checked again at the top of the roll, and the gunner moved on to the next gun until all the guns that were to be fired were ready. The ship's gunner then informed the captain, who gave the order to fire when ready, and at the next high roll the cannon would be fired.
In more modern applications, the quadrant is attached to the trunnion ring of a large naval gun to align it to benchmarks welded to the ship's deck. This is done to ensure that firing of the gun has not "warped the deck." A flat surface on the mount gunhouse or turret is also checked against benchmarks, to ensure that large bearings and/or bearing races have not changed, in effect to "calibrate" the gun.
Customization
During the Middle Ages, makers often added customization to impress the person for whom the quadrant was intended. In large, unused spaces on the instrument, a sigil or badge would often be added to denote the ownership by an important person or the allegiance of the owner.
| Technology | Navigation | null |
7032057 | https://en.wikipedia.org/wiki/Speech%E2%80%93language%20pathology | Speech–language pathology | Speech–language pathology (a.k.a. speech and language pathology or logopedics) is a healthcare and academic discipline concerning the evaluation, treatment, and prevention of communication disorders, including expressive and mixed receptive-expressive language disorders, voice disorders, speech sound disorders, speech disfluency, pragmatic language impairments, and social communication difficulties, as well as swallowing disorders across the lifespan. It is an allied health profession regulated by professional bodies including the American Speech-Language-Hearing Association (ASHA) and Speech Pathology Australia. The field of speech-language pathology is practiced by a clinician known as a speech-language pathologist (SLP) or a speech and language therapist (SLT). SLPs also play an important role in the screening, diagnosis, and treatment of autism spectrum disorder (ASD), often in collaboration with pediatricians and psychologists.
History
The development of speech-language pathology into a profession took different paths in the various regions of the world. Three identifiable trends influenced the evolution of speech-language pathology in the United States during the late 19th century to early 20th century: the elocution movement, scientific revolution, and the rise of professionalism. Groups of "speech correctionists" formed in the early 1900s. The American Academy of Speech Correction was founded in 1925, which became ASHA in 1978.
Profession
Speech-language pathologists (SLPs) provide a wide range of services, mainly on an individual basis, but also as support for families, support groups, and providing information for the general public. SLPs work to assess levels of communication needs, make diagnoses based on the assessments, and then treat the diagnoses or address the needs. Speech/language services begin with initial screening for communication and/or swallowing disorders and continue with assessment and diagnosis, consultation for the provision of advice regarding management, intervention, and treatment, and providing counseling and other followup services for these disorders. Services are provided in the following areas:
Developmental language and early feeding neurodevelopment and prevention;
Cognitive aspects of communication (e.g., attention, memory, problem-solving, executive functions);
Speech (phonation, articulation, fluency, resonance, and voice including aeromechanical components of respiration);
Language (phonology, morphology, syntax, semantics, and pragmatic/social aspects of communication) including comprehension and expression in oral, written, graphic, and manual modalities; language processing; preliteracy and language-based literacy skills, phonological awareness;
Augmentative and alternative communication (AAC) for individuals with severe language and communication impairments;
Swallowing or other upper aerodigestive functions such as infant feeding and aeromechanical events (evaluation of esophageal function is for the purpose of referral to medical professionals);
Voice (hoarseness, dysphonia), poor vocal volume (hypophonia), abnormal (e.g., rough, breathy, strained) vocal quality. Research demonstrates voice therapy to be especially helpful with certain patient populations; individuals with Parkinson's Disease often develop voice issues as a result of their disease.
Sensory awareness related to communication, swallowing, or other upper aerodigestive functions.
Speech, language, and swallowing disorders result from a variety of causes, such as a stroke, brain injury, hearing loss, developmental delay, a cleft palate, cerebral palsy, or emotional issues.
A common misconception is that speech–language pathology is restricted to the treatment of articulation disorders (e.g., helping English-speaking individuals enunciate the traditionally difficult r) and/or the treatment of individuals who stutter but, in fact, speech–language pathology is concerned with a broad scope of speech, language, literacy, swallowing, and voice issues involved in communication, some of which include:
Word-finding and other semantic issues, either as a result of a specific language impairment (SLI) such as a language delay or as a secondary characteristic of a more general issue such as dementia.
Social communication difficulties involving how people communicate or interact with others (pragmatics).
Language impairments, including difficulties creating sentences that are grammatical (syntax) and modifying word meaning (morphology).
Literacy impairments (reading and writing) related to the letter-to-sound relationship (phonics), the word-to-meaning relationship (semantics), and understanding the ideas presented in a text (reading comprehension).
Voice difficulties, such as a raspy voice, a voice that is too soft, or other voice difficulties that negatively impact a person's social or professional performance.
Cognitive impairments (e.g. attention, memory, executive function) to the extent that they interfere with communication.
Parent, caregiver, and other communication partner coaching.
Primary pediatric speech and language disorders include: receptive and expressive language disorders, speech sound disorders, childhood apraxia of speech (CAS), stuttering, and language-based learning disabilities. Speech-language pathologists (SLPs) work with people of all ages.
Swallowing disorders include difficulties in any phase of the swallowing process (i.e., oral, pharyngeal, esophageal), as well as functional dysphagia and feeding disorders. Swallowing disorders can occur at any age and can stem from multiple causes.
Multi-discipline collaboration
SLPs collaborate with other health care professionals, often working as part of a multidisciplinary team. They can provide information and referrals to audiologists, physicians, dentists, nurses, nurse practitioners, occupational therapists, rehabilitation psychologists, dietitians, educators, behavior consultants (applied behavior analysis), and parents as dictated by the individual client's needs. For example, the treatment for patients with cleft lip and palate often requires multidisciplinary collaboration. Speech–language pathologists can be very beneficial in helping resolve speech problems associated with cleft lip and palate. Research has indicated that children who receive early language intervention are less likely to develop compensatory error patterns later in life, although speech therapy outcomes are usually better when surgical treatment is performed earlier. Another area of collaboration relates to auditory processing disorders, where SLPs can collaborate in assessments and provide intervention where there is evidence of speech, language, and/or other cognitive-communication disorders.
Working environments
SLPs work in a variety of clinical and educational settings. SLPs work in public and private hospitals, private practices, skilled nursing facilities (SNFs), long-term acute care (LTAC) facilities, hospice, and home healthcare. SLPs may also work as part of the support structure in the education system, working in both public and private schools, colleges, and universities. Some SLPs also work in community health, providing services at prisons and young offenders' institutions or providing expert testimony in applicable court cases.
Following ASHA's 2005 approval of the delivery of speech/language services via video conference or telepractice, SLPs in the United States have begun to use this service model.
Children with speech, language, and communication needs (SLCN) are particularly at risk of not being heard because of their communication challenges. Although it is advised that children with SLCN can and should be actively involved as equal partners in decision-making about their communication needs, SLPs often need to explain the significance of supporting communication as a tool for the child to shape and influence the choices available to them in their lives. Building these skills is especially crucial for SLPs working in settings related to traditional education.
Research
SLPs conduct research related to communication sciences and disorders, swallowing disorders, or other upper aerodigestive functions.
Experimental, empirical, and scientific methodologies that build on hypothesis testing and logical, deductive reasoning have dominated research in speech-language pathology. This work is complemented by qualitative research.
Education and training
United States
In the United States, speech–language pathologists must hold a master's degree from an ASHA-accredited program. Following graduation and passing a nation-wide board exam, SLPs typically begin their Clinical Fellowship Year, during which they are granted a provisional license and receive guidance from their supervisor. At the end of this process, SLPs may choose to apply for ASHA's Certificate of Clinical Competence and apply for full state licensure. SLPs may additionally choose to earn advanced degrees such as a clinical doctorate in speech–language pathology, PhD, or EdD.
Methods of assessment
Many approaches exist to assess language, communication, speech and swallowing. Two main aspects of assessment are to determine the extent of breakdown (impairment level) and how communication can be supported (functional level). When evaluating the impairment-based level of breakdown, therapists are trained to use a cognitive neuropsychological approach to assessment, to determine precisely which aspect of communication is impaired. Some therapists use assessments based on historical anatomical models of language, which have since been shown to be unreliable. These tools are often preferred by therapists working within a medical model, where medics request a 'type' of impairment and a 'severity' rating. The broad range of tools available allows clinicians to select precisely the aspect of communication that they wish to assess.
Because school-based speech therapy is run under state guidelines and funds, the process of assessment and qualification is more strict. To qualify for in-school speech therapy, students must meet the state's criteria on language testing and speech standardization. Due to such requirements, some students may not be assessed in an efficient time frame or their needs may be undermined by criteria. For a private clinic, students are more likely to qualify for therapy because it is a paid service with more availability.
Clients and patients
Speech–language pathologists work with clients and patients who may present with a wide range of issues.
Infants and children
Premature infants are at higher risk of feeding difficulties and later language needs, and SLTs work with this cohort to prevent developmental difficulties and support neonatal care
Infants with injuries due to complications at birth, feeding and swallowing difficulties, including dysphagia
Children with mild, moderate or severe:
Genetic disorders that adversely affect speech, language and/or cognitive development including cleft palate, Down syndrome, DiGeorge syndrome
Attention deficit hyperactivity disorder
Autism spectrum disorders, including Asperger syndrome
Developmental delay
Feeding disorders, including oral motor deficits
Cranial nerve damage
Hearing loss
Craniofacial anomalies that adversely affect speech, language and/or cognitive development
Language delay
Specific language impairment
Specific difficulties in producing sounds, called articulation disorders, (including vocalic /r/ and lisps)
Pediatric traumatic brain injury
Developmental verbal dyspraxia
Cleft palate
United States
In the US, some children are eligible to receive speech therapy services, including assessment and lessons through the public school system. If not, private therapy is readily available through personal lessons with a qualified speech–language pathologist or the growing field of telepractice. Teleconferencing tools such as Skype are being used more commonly as a means to access remote locations in private therapy practice, such as in the geographically diverse south island of New Zealand. More at-home or combination treatments have become readily available to address specific types of articulation disorders. The use of mobile applications in speech therapy is also growing as an avenue to bring treatment into the home.
United Kingdom
In the UK, children are entitled to an assessment by local NHS speech- and language-therapy teams, usually after referral by health visitors or education settings, but parents are also entitled to request an assessment directly. If treatment is appropriate, an educational plan will be drawn up. Speech therapists often play a role in multi-disciplinary teams when a child has speech delay or disorder as part of a wider health condition. The Children's Commissioner for England reported in June 2019 that there was a postcode lottery: £291.65 a year per head was spent on services in some areas, while the budget in others was £30.94 or less. In 2018, 193,971 children in English primary schools were on the special educational needs register as needing speech-therapy services.
Speech and language therapists work in acute settings and are often integrated into the multidisciplinary team (MDT) across multiple areas of speciality in neonatal, children's and adult services. These areas include, but are not limited to, neonatal care, respiratory, ENT, gastrointestinal, stroke, neurology, ICU, oncology and geriatric care.
Children and adults
Puberphonia
Neonatal care
Respiratory
ENT
Cerebral palsy
Head injury (Traumatic brain injury)
Hearing loss and impairments
Learning difficulties including
Dyslexia
Specific language impairment (SLI)
Auditory processing disorder
Physical disabilities
Speech disorders (such as oral dyspraxia)
Stammering, stuttering (disfluency)
Stroke
Voice disorders (dysphonia)
Language delay
Motor speech disorders (dysarthria or developmental verbal dyspraxia)
Naming difficulties (anomia)
Dysgraphia, agraphia
Cognitive communication disorders
Pragmatics
Laryngectomies
Tracheostomies
Oncology (ear, nose or throat cancer)
Adults
Adults with aphasia
Adults with mild, moderate, or severe eating, feeding and swallowing difficulties, including dysphagia
Adults recovering from significant tumors in the bronchus, lung, oropharynx, breast, and brain
Adults with mild, moderate, or severe language difficulties as a result of:
Motor neuron diseases,
Alzheimer's disease,
Dementia,
Huntington's disease,
Hearing loss
Multiple sclerosis,
Parkinson's disease,
Traumatic brain injury,
Mental health issues
Stroke
Progressive neurological conditions
Cancer of the head, neck and throat (including laryngectomy)
Aphasic
Adults seeking transgender-specific voice training, including voice feminization and voice masculinization
| Biology and health sciences | Disabilities | Health |
3016568 | https://en.wikipedia.org/wiki/Human%20fertilization | Human fertilization | Human fertilization is the union of an egg and sperm, occurring primarily in the ampulla of the fallopian tube. The result of this union leads to the production of a fertilized egg called a zygote, initiating embryonic development. Scientists discovered the dynamics of human fertilization in the 19th century.
The process of fertilization involves a sperm fusing with an ovum. The most common sequence begins with ejaculation during copulation, follows with ovulation, and finishes with fertilization. Various exceptions to this sequence are possible, including artificial insemination, in vitro fertilization, external ejaculation without copulation, or copulation shortly after ovulation. Upon encountering the secondary oocyte, the acrosome of the sperm produces enzymes which allow it to burrow through the egg's outer shell, called the zona pellucida. The sperm's plasma membrane then fuses with the egg's plasma membrane and their nuclei fuse, triggering the sperm head to disconnect from its flagellum as the egg travels down the fallopian tube to reach the uterus.
In vitro fertilization (IVF) is a process by which egg cells are fertilized by sperm outside the womb, in vitro.
History
Fertilization was not understood in antiquity. Hippocrates believed that the embryo was the product of male semen and a female factor. Aristotle held that only male semen gave rise to an embryo, while the female only provided a place for the embryo to develop, a concept he acquired from the preformationist Pythagoras. Aristotle argued that form and function emerge gradually, in a mode he called epigenetic. In 1651 William Harvey refuted Aristotle's idea that menstrual blood could be involved in the formation of a fetus, asserting that eggs from the female were somehow caused to become a fetus as a result of sexual intercourse. Sperm cells were discovered in 1677 by Antonie van Leeuwenhoek, who believed that Aristotle had been proven correct. Some observers believed they could see an entirely pre-formed little human body in the head of a sperm. The human ovum was first observed in 1827 by Karl Ernst von Baer. Only in 1876 did Oscar Hertwig prove that fertilization is due to the fusion of an egg and sperm cell.
Sperm and oocyte meet
Ampulla
Fertilization occurs in the ampulla of the fallopian tube, the section that curves around the ovary. Capacitated sperm are attracted to progesterone, which is secreted from the cumulus cells surrounding the oocyte. Progesterone binds to the CatSper receptor on the sperm membrane and increases intracellular calcium levels, causing hyperactive motility. The sperm will continue to swim towards higher concentrations of progesterone, effectively guiding it to the oocyte. Around 200 out of 200 million spermatozoa reach the ampulla.
Sperm preparation
At the beginning of the process, the sperm undergoes a series of changes, as freshly ejaculated sperm is unable or poorly able to fertilize. The sperm must undergo capacitation in the female's reproductive tract, which increases its motility and hyperpolarizes its membrane, preparing it for the acrosome reaction, the enzymatic penetration of the egg's tough membrane, the zona pellucida, which surrounds the oocyte.
Corona radiata
The sperm binds through the corona radiata, a layer of follicle cells on the outside of the secondary oocyte. The corona radiata sends out chemicals that attract the sperm in the fallopian tube to the oocyte. It lies above the zona pellucida, a membrane of glycoproteins that surrounds the oocyte.
Cone of attraction and perivitelline membrane
Where the spermatozoan is about to pierce, the yolk (ooplasm) is drawn out into a conical elevation, termed the cone of attraction or reception cone. Once the spermatozoon has entered, the peripheral portion of the yolk changes into a membrane, the perivitelline membrane, which prevents the passage of additional spermatozoa.
Zona pellucida and acrosome reaction
After binding to the corona radiata the sperm reaches the zona pellucida, which is an extracellular matrix of glycoproteins. A ZP3 glycoprotein on the zona pellucida binds to a receptor on the cell surface of the sperm head. This binding triggers the acrosome to burst, releasing acrosomal enzymes that help the sperm penetrate through the thick zona pellucida layer surrounding the oocyte, ultimately gaining access to the egg's cell membrane.
Some sperm cells consume their acrosome prematurely on the surface of the egg cell, facilitating the penetration by other sperm cells. As a population, mature haploid sperm cells have on average 50% genome similarity, so the premature acrosomal reactions aid fertilization by a member of the same cohort. It may be regarded as a mechanism of kin selection.
Recent studies have shown that the egg is not passive during this process; it too appears to undergo changes that help facilitate this interaction.
Fusion
Cortical reaction
After the sperm enters the cytoplasm of the oocyte, the tail and the outer coating of the sperm disintegrate. The fusion of sperm and oocyte membranes causes cortical reaction to occur. Cortical granules inside the secondary oocyte fuse with the plasma membrane of the cell, causing enzymes inside these granules to be expelled by exocytosis to the zona pellucida. This in turn causes the glycoproteins in the zona pellucida to cross-link with each other — i.e. the enzymes cause the ZP2 to hydrolyse into ZP2f — making the whole matrix hard and impermeable to sperm. This prevents fertilization of an egg by more than one sperm.
Fusion of genetic material
Preparation
In preparation for the fusion of their genetic material both the oocyte and the sperm undergo transformations as a reaction to the fusion of cell membranes.
The oocyte completes its second meiotic division. This results in a mature haploid ovum and the release of a polar body. The nucleus of the oocyte is called a pronucleus in this process, to distinguish it from the nuclei that are the result of fertilization.
The sperm's tail and mitochondria degenerate with the formation of the male pronucleus. This is why all mitochondria in humans are of maternal origin. Still, a considerable amount of RNA from the sperm is delivered to the resulting embryo and likely influences embryo development and the phenotype of the offspring.
Fusion
The sperm nucleus then fuses with the ovum, enabling fusion of their genetic material.
Blocks of polyspermy
When the sperm enters the perivitelline space, a sperm-specific protein Izumo on the head binds to Juno receptors on the oocyte membrane. Once it is bound, two blocks to polyspermy then occur. After approximately 40 minutes, the other Juno receptors on the oocyte are lost from the membrane, causing it to no longer be fusogenic. Additionally, the cortical reaction will happen which is caused by ovastacin binding and cleaving ZP2 receptors on the zona pellucida. These two blocks of polyspermy are what prevent the zygote from having too much DNA.
Replication
The pronuclei migrate toward the center of the oocyte, rapidly replicating their DNA as they do so to prepare the zygote for its first mitotic division.
Mitosis
Usually 23 chromosomes from the spermatozoon and 23 chromosomes from the egg cell fuse (approximately half of spermatozoa carry an X chromosome and the other half a Y chromosome). Their membranes dissolve, leaving no barriers between the male and female chromosomes. During this dissolution, a mitotic spindle forms between them. The spindle captures the chromosomes before they disperse in the egg cytoplasm. Upon subsequently undergoing mitosis (which includes pulling of chromatids towards centrioles in anaphase), the cell gathers genetic material from the male and female together. Thus, the first mitosis of the union of sperm and oocyte is the actual fusion of their chromosomes.
Each of the two daughter cells resulting from that mitosis has one replica of each chromatid that was replicated in the previous stage. Thus, they are genetically identical.
Fertilization age
Fertilization is the event most commonly used to mark the beginning point of life, in descriptions of prenatal development of the embryo or fetus. The resultant age is known as fertilization age, fertilizational age, conceptional age, embryonic age, fetal age or (intrauterine) developmental (IUD) age.
Gestational age, in contrast, takes the beginning of the last menstrual period (LMP) as the start point. By convention, gestational age is calculated by adding 14 days to fertilization age, and vice versa. Fertilization, though, usually occurs within a day of ovulation, which in turn occurs on average 14.6 days after the beginning of the preceding menstruation (LMP). There is also considerable variability in this interval, with a 95% prediction interval for ovulation of 9 to 20 days after menstruation even for an average woman who has a mean LMP-to-ovulation time of 14.6 days. In a reference group representing all women, the 95% prediction interval for the LMP-to-ovulation interval is 8.2 to 20.5 days.
The average time to birth has been estimated to be 268 days (38 weeks and two days) from ovulation, with a standard deviation of 10 days or coefficient of variation of 3.7%.
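These conventions lend themselves to simple date arithmetic. The Python sketch below is only a hedged illustration: the 14-day offset and the 268-day mean are the figures quoted in this section, and the example dates are made up.

from datetime import date, timedelta

def gestational_from_fertilization(fertilization_age_days):
    # By convention, gestational age = fertilization age + 14 days.
    return fertilization_age_days + 14

def estimated_due_date(ovulation_date):
    # Mean ovulation-to-birth interval quoted above: 268 days (SD ~10 days).
    return ovulation_date + timedelta(days=268)

print(gestational_from_fertilization(56))    # 70 days, i.e. 10 weeks gestational age
print(estimated_due_date(date(2024, 3, 1)))  # 2024-11-24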
Fertilization age is sometimes used postnatally (after birth) as well to estimate various risk factors. For example, it is a better predictor than postnatal age for risk of intraventricular hemorrhage in premature babies treated with extracorporeal membrane oxygenation.
Diseases affecting human fertility
Various disorders can arise from defects in the fertilization process, whether in the contact between the sperm and egg or in the state of health of the biological parent carrying the zygote. The following are a few of the diseases and conditions that can affect the process.
Polyspermy results from multiple sperm fertilizing an egg, leading to an abnormal number of chromosomes within the embryo. Polyspermy, while physiologically possible in some species of vertebrates and invertebrates, is a lethal condition for the human zygote.
Polycystic ovary syndrome is a condition in which the woman does not produce enough follicle-stimulating hormone and produces androgens in excess. This can result in ovulation being delayed or absent.
Autoimmune disorders can lead to complications in implantation of the egg in the uterus, which may be the immune system's attack response to an established embryo on the uterine wall.
Cancer can severely damage the reproductive organs, which affects fertility and may lead to birth defects or miscarriages.
Endocrine system disorders affect human fertility by decreasing the body's ability to produce the level of hormones needed to successfully carry a zygote. Examples of these disorders include diabetes, adrenal disorders, and thyroid disorders.
Endometriosis is a condition that affects women in which the tissue normally produced in the uterus proceeds to grow outside of the uterus. This leads to extreme amounts of pain and discomfort and may result in an irregular menstrual cycle.
| Biology and health sciences | Human reproduction | Biology |
3017092 | https://en.wikipedia.org/wiki/Spin-1/2 | Spin-1/2 | In quantum mechanics, spin is an intrinsic property of all elementary particles. All known fermions, the particles that constitute ordinary matter, have a spin of 1/2. The spin number describes how many symmetrical facets a particle has in one full rotation; a spin of 1/2 means that the particle must be rotated by two full turns (through 720°) before it has the same configuration as when it started.
Particles having net spin 1/2 include the proton, neutron, electron, neutrino, and quarks. The dynamics of spin-1/2 objects cannot be accurately described using classical physics; they are among the simplest systems which require quantum mechanics to describe them. As such, the study of the behavior of spin-1/2 systems forms a central part of quantum mechanics.
Stern–Gerlach experiment
The necessity of introducing half-integer spin goes back experimentally to the results of the Stern–Gerlach experiment. A beam of atoms is run through a strong heterogeneous magnetic field, which then splits it into N parts depending on the intrinsic angular momentum of the atoms. It was found that for silver atoms the beam was split in two; the ground state therefore could not have integer angular momentum, because even if the intrinsic angular momentum of the atoms were the smallest (non-zero) integer possible, 1, the beam would be split into 3 parts, corresponding to atoms with Lz = −1, 0, and +1. The existence of such an intermediate state would necessitate a third beam, which is not observed in the experiment. The conclusion was that silver atoms had a net intrinsic angular momentum of 1/2.
General properties
Spin-1/2 objects are all fermions (a fact explained by the spin–statistics theorem) and satisfy the Pauli exclusion principle. Spin-1/2 particles can have a permanent magnetic moment along the direction of their spin, and this magnetic moment gives rise to electromagnetic interactions that depend on the spin. One such effect that was important in the discovery of spin is the Zeeman effect, the splitting of a spectral line into several components in the presence of a static magnetic field.
Unlike in more complicated quantum mechanical systems, the spin of a spin-1/2 particle can be expressed as a linear combination of just two eigenstates, or eigenspinors. These are traditionally labeled spin up and spin down. Because of this, the quantum-mechanical spin operators can be represented as simple 2 × 2 matrices. These matrices are called the Pauli matrices.
Creation and annihilation operators can be constructed for spin-1/2 objects; these obey the same commutation relations as other angular momentum operators.
Connection to the uncertainty principle
One consequence of the generalized uncertainty principle is that the spin projection operators (which measure the spin along a given direction like x, y, or z) cannot be measured simultaneously. Physically, this means that the axis about which a particle is spinning is ill-defined. A measurement of the z-component of spin destroys any information about the x- and y-components that might previously have been obtained.
Mathematical description
A spin-1/2 particle is characterized by an angular momentum quantum number for spin s of 1/2. In solutions of the Schrödinger equation, angular momentum is quantized according to this number, so that the total spin angular momentum is S = √(s(s+1)) ħ = (√3/2) ħ.
However, the observed fine structure when the electron is observed along one axis, such as the z-axis, is quantized in terms of a magnetic quantum number, which can be viewed as a quantization of a vector component of this total angular momentum; this component can have only the values ±ħ/2.
Note that these values for angular momentum are functions only of the reduced Planck constant (the angular momentum of any photon), with no dependence on mass or charge.
Complex phase
Mathematically, quantum mechanical spin is not described by a vector as in classical angular momentum. It is described by a complex-valued vector with two components called a spinor. There are subtle differences between the behavior of spinors and vectors under coordinate rotations, stemming from the behavior of a vector space over a complex field.
When a spinor is rotated by 360° (one full turn), it transforms to its negative, and then after a further rotation of 360° it transforms back to its initial value again. This is because in quantum theory the state of a particle or system is represented by a complex probability amplitude (wavefunction) ψ, and when the system is measured, the probability of finding the system in the state ψ equals |ψ|², the absolute square (square of the absolute value) of the amplitude. In mathematical terms, the quantum Hilbert space carries a projective representation of the rotation group SO(3).
Suppose a detector that can be rotated measures a particle in which the probabilities of detecting some state are affected by the rotation of the detector. When the system is rotated through 360°, the observed output and physics are the same as initially, but the amplitudes are changed for a spin-1/2 particle by a factor of −1, a phase shift of half of 360°. When the probabilities are calculated, the −1 is squared, (−1)² = 1, so the predicted physics is the same as in the starting position. Also, in a spin-1/2 particle there are only two spin states and the amplitudes for both change by the same −1 factor, so the interference effects are identical, unlike the case for higher spins. The complex probability amplitudes are something of a theoretical construct which cannot be directly observed.
If the probability amplitudes rotated by the same amount as the detector, then they would have changed by a factor of −1 when the equipment was rotated by 180°, which when squared would predict the same output as at the start, but experiments show this to be wrong. If the detector is rotated by 180°, the result with spin-1/2 particles can be different from what it would be if not rotated, hence the factor of a half is necessary to make the predictions of the theory match the experiments.
In terms of more direct evidence, physical effects of the difference between the rotation of a spin-1/2 particle by 360° as compared with 720° have been experimentally observed in classic experiments in neutron interferometry. In particular, if a beam of spin-oriented spin-1/2 particles is split, and just one of the beams is rotated about the axis of its direction of motion and then recombined with the original beam, different interference effects are observed depending on the angle of rotation. In the case of rotation by 360°, cancellation effects are observed, whereas in the case of rotation by 720°, the beams are mutually reinforcing.
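This phase behavior is easy to check numerically. The Python sketch below (an illustration using numpy and scipy, not part of the article) applies the rotation operator about z, exp(−iθσz/2), and shows that a 360° rotation multiplies the spinor by −1 while a 720° rotation restores it, with the measured probabilities unchanged in both cases:

import numpy as np
from scipy.linalg import expm

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def rotate_about_z(theta):
    # U(theta) = exp(-i * theta * Sz / hbar) = exp(-i * theta * sigma_z / 2)
    return expm(-1j * theta * sigma_z / 2)

spin_up = np.array([1, 0], dtype=complex)
print(rotate_about_z(2 * np.pi) @ spin_up)               # ~[-1, 0]: picks up a factor of -1
print(rotate_about_z(4 * np.pi) @ spin_up)               # ~[ 1, 0]: back to the start
print(np.abs(rotate_about_z(2 * np.pi) @ spin_up) ** 2)  # probabilities unchanged: [1, 0]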
Non-relativistic quantum mechanics
The quantum state of a spin-1/2 particle can be described by a two-component complex-valued vector called a spinor. Observable states of the particle are then found by the spin operators Sx, Sy, and Sz, and the total spin operator S.
Observables
When spinors are used to describe the quantum states, the three spin operators (Sx, Sy, Sz) can be described by 2 × 2 matrices proportional to the Pauli matrices; the eigenvalues of each spin operator are ±ħ/2.
For example, the spin projection operator Sz affects a measurement of the spin in the z direction.
The two eigenvalues of Sz, ±ħ/2, then correspond to the following eigenspinors: the spin-up state (1, 0) and the spin-down state (0, 1).
These vectors form a complete basis for the Hilbert space describing the spin-1/2 particle. Thus, linear combinations of these two states can represent all possible states of the spin, including in the x- and y-directions.
The ladder operators are:

$$S_+ = \hbar\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad S_- = \hbar\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.$$

Since $S_\pm = S_x \pm i S_y$, it follows that $S_x = \tfrac{1}{2}(S_+ + S_-)$ and $S_y = \tfrac{1}{2i}(S_+ - S_-)$. Thus:

$$S_x = \frac{\hbar}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad S_y = \frac{\hbar}{2}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}.$$
Their normalized eigenspinors can be found in the usual way. For Sx, they are:

$$\chi^{(x)}_+ = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \chi^{(x)}_- = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$
For Sy, they are:

$$\chi^{(y)}_+ = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ i \end{pmatrix}, \qquad \chi^{(y)}_- = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -i \end{pmatrix}.$$
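As a quick consistency check (standard algebra, not part of the original text), applying the matrix form of $S_x$ reconstructed above to its first eigenspinor returns the eigenvalue $+\hbar/2$:

$$S_x\,\chi^{(x)}_+ = \frac{\hbar}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \frac{\hbar}{2}\cdot\frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = +\frac{\hbar}{2}\,\chi^{(x)}_+ .$$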
Relativistic quantum mechanics
While non-relativistic quantum mechanics describes spin using a two-dimensional Hilbert space, with dynamics described in three-dimensional space and time, relativistic quantum mechanics describes spin using a four-dimensional Hilbert space, with dynamics described in four-dimensional space-time.
Observables
As a consequence of the four-dimensional nature of space-time in relativity, relativistic quantum mechanics uses 4×4 matrices to describe spin operators and observables.
History
When physicist Paul Dirac tried to modify the Schrödinger equation so that it was consistent with Einstein's theory of relativity, he found it was only possible by including matrices in the resulting Dirac equation, implying the wave must have multiple components leading to spin.
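For reference, one common compact form of the resulting equation (written here in natural units with ħ = c = 1; the $\gamma^\mu$ are 4 × 4 matrices and $\psi$ a four-component spinor) is

$$\left(i\gamma^\mu\partial_\mu - m\right)\psi = 0 .$$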
The 4π spinor rotation was experimentally verified using neutron interferometry in 1974, by Helmut Rauch and collaborators, after being suggested by Yakir Aharonov and Leonard Susskind in 1967.
| Physical sciences | Quantum numbers | Physics |
3019076 | https://en.wikipedia.org/wiki/Cartwheel%20Galaxy | Cartwheel Galaxy | The Cartwheel Galaxy (also known as ESO 350-40 or PGC 2248) is a lenticular ring galaxy about 500 million light-years away in the constellation Sculptor. It has a D25 isophotal diameter of , and a mass of about solar masses; its outer ring has a circular velocity of .
It was discovered by Fritz Zwicky in 1941. Zwicky considered his discovery "one of the most complicated structures awaiting its explanation on the basis of stellar dynamics."
The Third Reference Catalogue of Bright Galaxies (RC3) measured a D25 isophotal diameter for the Cartwheel Galaxy at about 60.9 arcseconds, giving it a diameter of based on a redshift-derived distance of .
This diameter is slightly smaller than that of the Andromeda Galaxy.
The large Cartwheel Galaxy is the dominant member of the Cartwheel Galaxy group, consisting of four physically associated spiral galaxies. The three companions are referred to in several studies as G1, the smaller irregular blue Magellanic spiral; G2, the yellow compact spiral with a tidal tail; and G3, a more distant spiral often seen in wide field images.
One supernova has been observed in the Cartwheel Galaxy. SN 2021afdx (type II, mag. 18.8) was discovered by ATLAS on 23 November 2021.
Structures
The structure of the Cartwheel Galaxy is noted to be highly complicated and heavily disturbed. The Cartwheel consists of two rings: the outer ring, the site of massive ongoing star formation due to gas and dust compression; and the inner ring that surrounds the galactic center. A ring of dark absorbing dust is also present in the inner, nuclear ring. Several optical arms or "spokes" are seen connecting the outer ring to the inner. Observations show the presence of both non-thermal radio continuum and optical spokes, but the two do not seem to overlap.
Evolution
The galaxy was once a normal spiral galaxy that apparently underwent a head-on "bullseye" style collision with a smaller companion approximately 200–300 million years before the epoch at which we now observe the system. When the nearby galaxy passed through the Cartwheel Galaxy, the force of the collision sent a powerful gravitational shock wave expanding through the galaxy. Moving at high speed, the shock wave swept up and compressed gas and dust, creating a starburst ring around the galaxy's central portion, which itself went largely unscathed, as the wave expanded outwards. This explains the bluish ring around the center, which is the brighter portion. The galaxy appears to be retaking the form of a normal spiral galaxy, with arms spreading out from a central core. These arms are often referred to as the cartwheel's “spokes”.
Alternatively, a model based on the gravitational Jeans instability of both axisymmetric (radial) and nonaxisymmetric (spiral) small-amplitude gravity perturbations allows an association between growing clumps of matter and the gravitationally unstable axisymmetric and nonaxisymmetric waves which take on the appearance of a ring and spokes. Based on observational data, however, this theory of ring galaxy evolution does not appear to apply to this specific galaxy.
While most images of the Cartwheel display three galaxies close together, a fourth physically associated companion (known as G3) is linked to the group through an HI (neutral hydrogen) tail that connects G3 to the Cartwheel. Because of this HI tail, it is widely believed that G3, not G1 or G2, is the "bullet" galaxy that plunged through the disk of the Cartwheel and created its current shape. This hypothesis is consistent with the size and predicted age of the current structure (~300 million years old, as mentioned before). Given how close G1 and G2 still are to the Cartwheel, it is much more widely believed that G3, roughly 88 kpc (~287,000 light years) distant, is the intruding galaxy.
HI tail mapping is extremely useful in determining “culprit” galaxies in similar cases where the solution is relatively unclear. Hydrogen gas, being the lightest and most abundant gas in galaxies, is easily torn away from parent galaxies through gravitational forces. Evidence of this can be seen in the Jellyfish Galaxy and the Comet Galaxy, which are undergoing a type of gravitational effect called ram pressure stripping, and in other galaxies with tidal tails and star-forming stellar streams associated with collisions and mergers. Ram pressure stripping will almost always cause trailing-dominant tails of HI gas as a galaxy falls into a galaxy cluster, while mergers and collisions like the one involving the Cartwheel Galaxy often create leading-dominant tails as the culprit galaxy’s gravity attracts and pulls on the victim galaxy’s gas in the direction of the culprit's motion.
The existing structure of the cartwheel is expected to disintegrate over the next few hundred million years as the remaining gas, dust and stars that haven’t escaped the galaxy begin to infall back towards the center. It is likely that the galaxy will regain a spiral shape after the infall process completes and spiral density waves have a chance to reform. This is only possible if companions G1, G2 and G3 remain distant and do not undergo an additional collision with the cartwheel.
X-ray sources
The unusual shape of the Cartwheel Galaxy may be due to a collision with a smaller galaxy such as one of those in the lower left of the image. The most recent starburst has lit up the Cartwheel rim, which has a diameter larger than that of the Milky Way. Star formation via starburst galaxies, such as the Cartwheel Galaxy, results in the formation of large and extremely luminous stars. When massive stars explode as supernovas, they leave behind neutron stars and black holes. Some of these neutron stars and black holes have nearby companion stars, and become powerful sources of X-rays as they pull matter off their companions (also known as ultra and hyperluminous X-ray sources). The brightest X-ray sources are likely black holes with companion stars, appearing as the white dots that lie along the rim of the X-ray image. The Cartwheel contains an exceptionally large number of these black hole binary X-ray sources, because many massive stars formed in the ring.
| Physical sciences | Notable galaxies | Astronomy |
3021875 | https://en.wikipedia.org/wiki/Entropy%20of%20mixing | Entropy of mixing | In thermodynamics, the entropy of mixing is the increase in the total entropy when several initially separate systems of different composition, each in a thermodynamic state of internal equilibrium, are mixed without chemical reaction by the thermodynamic operation of removal of impermeable partition(s) between them, followed by a time for establishment of a new thermodynamic state of internal equilibrium in the new unpartitioned closed system.
In general, the mixing may be constrained to occur under various prescribed conditions. In the customarily prescribed conditions, the materials are each initially at a common temperature and pressure, and the new system may change its volume, while being maintained at that same constant temperature, pressure, and chemical component masses. The volume available for each material to explore is increased, from that of its initially separate compartment, to the total common final volume. The final volume need not be the sum of the initially separate volumes, so that work can be done on or by the new closed system during the process of mixing, as well as heat being transferred to or from the surroundings, because of the maintenance of constant pressure and temperature.
The internal energy of the new closed system is equal to the sum of the internal energies of the initially separate systems. The reference values for the internal energies should be specified in a way that is constrained to make this so, maintaining also that the internal energies are respectively proportional to the masses of the systems.
For concision in this article, the term 'ideal material' is used to refer to either an ideal gas (mixture) or an ideal solution.
In the special case of mixing ideal materials, the final common volume is in fact the sum of the initial separate compartment volumes. There is no heat transfer and no work is done. The entropy of mixing is entirely accounted for by the diffusive expansion of each material into a final volume not initially accessible to it.
In the general case of mixing non-ideal materials, however, the total final common volume may be different from the sum of the separate initial volumes, and there may occur transfer of work or heat, to or from the surroundings; also there may be a departure of the entropy of mixing from that of the corresponding ideal case. That departure is the main reason for interest in entropy of mixing. These energy and entropy variables and their temperature dependences provide valuable information about the properties of the materials.
On a molecular level, the entropy of mixing is of interest because it is a macroscopic variable that provides information about constitutive molecular properties. In ideal materials, intermolecular forces are the same between every pair of molecular kinds, so that a molecule feels no difference between other molecules of its own kind and of those of the other kind. In non-ideal materials, there may be differences of intermolecular forces or specific molecular effects between different species, even though they are chemically non-reacting. The entropy of mixing provides information about constitutive differences of intermolecular forces or specific molecular effects in the materials.
The statistical concept of randomness is used for statistical mechanical explanation of the entropy of mixing. Mixing of ideal materials is regarded as random at a molecular level, and, correspondingly, mixing of non-ideal materials may be non-random.
Mixing of ideal species at constant temperature and pressure
In ideal species, intermolecular forces are the same between every pair of molecular kinds, so that a molecule "feels" no difference between itself and its molecular neighbors. This is the reference case for examining corresponding mixing of non-ideal species.
For example, two ideal gases, at the same temperature and pressure, are initially separated by a dividing partition.
Upon removal of the dividing partition, they expand into a final common volume (the sum of the two initial volumes), and the entropy of mixing
is given by

$$\Delta S_{\text{mix}} = -nR\left(x_1\ln x_1 + x_2\ln x_2\right)$$

where $R$ is the gas constant, $n$ the total number of moles and $x_i$ the mole fraction of component $i$, which initially occupies volume $V_i$. After the removal of the partition, the $n_i$ moles of component $i$ may explore the combined volume $V$, which causes an entropy increase equal to $n_i R\ln(V/V_i)$ for each component gas.
In this case, the increase in entropy is entirely due to the irreversible processes of expansion of the two gases, and involves no heat or work flow between the system and its surroundings.
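As a worked example (assuming one mole each of two different ideal gases at the same temperature and pressure, so that $n = 2$ mol and $x_1 = x_2 = \tfrac12$),

$$\Delta S_{\text{mix}} = -nR\left(\tfrac12\ln\tfrac12 + \tfrac12\ln\tfrac12\right) = nR\ln 2 \approx (2\ \text{mol})(8.314\ \text{J K}^{-1}\,\text{mol}^{-1})(0.693) \approx 11.5\ \text{J K}^{-1},$$

or about 5.76 J K⁻¹ per mole of mixture.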
Gibbs free energy of mixing
The Gibbs free energy change determines whether mixing at constant (absolute) temperature and pressure is a spontaneous process. This quantity combines two physical effects—the enthalpy of mixing, which is a measure of the energy change, and the entropy of mixing considered here.
For an ideal gas mixture or an ideal solution, there is no enthalpy of mixing ($\Delta H_{\text{mix}} = 0$), so that the Gibbs free energy of mixing is given by the entropy term only:

$$\Delta G_{\text{mix}} = -T\,\Delta S_{\text{mix}} = nRT\sum_i x_i\ln x_i$$
For an ideal solution, the Gibbs free energy of mixing is always negative, meaning that mixing of ideal solutions is always spontaneous. The lowest value is when the mole fraction is 0.5 for a mixture of two components, or 1/n for a mixture of n components.
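For a concrete illustration (hypothetical conditions: an equimolar two-component ideal mixture at $T = 298$ K), the minimum value per mole of mixture is

$$\Delta G_{\text{mix}} = RT\left(\tfrac12\ln\tfrac12 + \tfrac12\ln\tfrac12\right) = -RT\ln 2 \approx -(8.314)(298)(0.693)\ \text{J mol}^{-1} \approx -1.7\ \text{kJ mol}^{-1}.$$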
Solutions and temperature dependence of miscibility
Ideal and regular solutions
The above equation for the entropy of mixing of ideal gases is valid also for certain liquid (or solid) solutions—those formed by completely random mixing so that the components move independently in the total volume. Such random mixing of solutions occurs if the interaction energies between unlike molecules are similar to the average interaction energies between like molecules. The value of the entropy corresponds exactly to random mixing for ideal solutions and for regular solutions, and approximately so for many real solutions.
For binary mixtures the entropy of random mixing can be considered as a function of the mole fraction of one component.
For all possible mixtures, $0 < x_1, x_2 < 1$, so that $\ln x_1$ and $\ln x_2$ are both negative and the entropy of mixing is positive and favors mixing of the pure components.

The curvature of $\Delta S_{\text{mix}}$ as a function of $x_1$ is given by the second derivative

$$\frac{\partial^2\,\Delta S_{\text{mix}}}{\partial x_1^2} = -\frac{nR}{x_1 x_2}$$

This curvature is negative for all possible mixtures ($0 < x_1 < 1$), so that mixing two solutions to form a solution of intermediate composition also increases the entropy of the system. Random mixing therefore always favors miscibility and opposes phase separation.
For ideal solutions, the enthalpy of mixing is zero so that the components are miscible in all proportions. For regular solutions a positive enthalpy of mixing may cause incomplete miscibility (phase separation for some compositions) at temperatures below the upper critical solution temperature (UCST). This is the minimum temperature at which the $-T\,\Delta S_{\text{mix}}$ term in the Gibbs energy of mixing is sufficient to produce miscibility in all proportions.
Systems with a lower critical solution temperature
Nonrandom mixing with a lower entropy of mixing can occur when the attractive interactions between unlike molecules are significantly stronger (or weaker) than the mean interactions between like molecules. For some systems this can lead to a lower critical solution temperature (LCST) or lower limiting temperature for phase separation.
For example, triethylamine and water are miscible in all proportions below 19 °C, but above this critical temperature, solutions of certain compositions separate into two phases at equilibrium with each other. This means that $\Delta G_{\text{mix}}$ is negative for mixing of the two phases below 19 °C and positive above this temperature. Therefore, $\Delta S_{\text{mix}}$ is negative for mixing of these two equilibrium phases. This is due to the formation of attractive hydrogen bonds between the two components that prevent random mixing. Triethylamine molecules cannot form hydrogen bonds with each other but only with water molecules, so in solution they remain associated to water molecules with loss of entropy. The mixing that occurs below 19 °C is due not to entropy but to the enthalpy of formation of the hydrogen bonds.
Lower critical solution temperatures also occur in many polymer-solvent mixtures. For polar systems such as polyacrylic acid in 1,4-dioxane, this is often due to the formation of hydrogen bonds between polymer and solvent. For nonpolar systems such as polystyrene in cyclohexane, phase separation has been observed in sealed tubes (at high pressure) at temperatures approaching the liquid-vapor critical point of the solvent. At such temperatures the solvent expands much more rapidly than the polymer, whose segments are covalently linked. Mixing therefore requires contraction of the solvent for compatibility of the polymer, resulting in a loss of entropy.
Statistical thermodynamical explanation of the entropy of mixing of ideal gases
Since thermodynamic entropy can be related to statistical mechanics or to information theory, it is possible to calculate the entropy of mixing using these two approaches. Here we consider the simple case of mixing ideal gases.
Proof from statistical mechanics
Assume that the molecules of two different substances are approximately the same size, and regard space as subdivided into a square lattice whose cells are the size of the molecules. (In fact, any lattice would do, including close packing.) This is a crystal-like conceptual model to identify the molecular centers of mass. If the two phases are liquids, there is no spatial uncertainty in each one individually. (This is, of course, an approximation. Liquids have a "free volume". This is why they are (usually) less dense than solids.) Everywhere we look in component 1, there is a molecule present, and likewise for component 2. After the two different substances are intermingled (assuming they are miscible), the liquid is still dense with molecules, but now there is uncertainty about what kind of molecule is in which location. Of course, any idea of identifying molecules in given locations is a thought experiment, not something one could do, but the calculation of the uncertainty is well-defined.
We can use Boltzmann's equation for the entropy change as applied to the mixing process

$$\Delta S_{\text{mix}} = k_B \ln W$$

where $k_B$ is the Boltzmann constant. We then calculate the number of ways $W$ of arranging $N_1$ molecules of component 1 and $N_2$ molecules of component 2 on a lattice, where

$$N = N_1 + N_2$$

is the total number of molecules, and therefore the number of lattice sites.
Calculating the number of permutations of $N$ objects, correcting for the fact that $N_1$ of them are identical to one another, and likewise for $N_2$,

$$W = \frac{N!}{N_1!\,N_2!}$$

After applying Stirling's approximation for the factorial of a large integer $m$,

$$\ln m! \approx m\ln m - m,$$

the result is

$$\Delta S_{\text{mix}} = -k_B\left(N_1\ln\frac{N_1}{N} + N_2\ln\frac{N_2}{N}\right) = -k_B N\left(x_1\ln x_1 + x_2\ln x_2\right)$$
where we have introduced the mole fractions $x_1 = N_1/N$ and $x_2 = N_2/N$, which are also the probabilities of finding any particular component in a given lattice site.

Since the Boltzmann constant $k_B = R/N_A$, where $N_A$ is the Avogadro constant, and the number of molecules $N = n N_A$, we recover the thermodynamic expression for the mixing of two ideal gases,

$$\Delta S_{\text{mix}} = -nR\left(x_1\ln x_1 + x_2\ln x_2\right)$$

This expression can be generalized to a mixture of $r$ components, $i = 1, 2, \ldots, r$, with

$$\Delta S_{\text{mix}} = -nR\sum_{i=1}^{r} x_i\ln x_i$$
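A minimal numerical check of the counting argument above (a sketch in Python using only the standard library; the particle numbers are arbitrary illustrative values) compares the exact $\ln W$ from the factorial expression with the Stirling, mole-fraction form:

```python
import math

def ln_W_exact(n1, n2):
    # ln of N! / (n1! n2!), the number of distinguishable lattice arrangements
    n = n1 + n2
    return math.lgamma(n + 1) - math.lgamma(n1 + 1) - math.lgamma(n2 + 1)

def ln_W_stirling(n1, n2):
    # Stirling-approximated form used in the text: -N (x1 ln x1 + x2 ln x2)
    n = n1 + n2
    x1, x2 = n1 / n, n2 / n
    return -n * (x1 * math.log(x1) + x2 * math.log(x2))

n1, n2 = 1_000_000, 2_000_000      # illustrative molecule counts
print(ln_W_exact(n1, n2))          # exact combinatorial count (~1.91e6)
print(ln_W_stirling(n1, n2))       # nearly identical; Stirling error is negligible at this N
```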
The Flory–Huggins solution theory is an example of a more detailed model along these lines.
Relationship to information theory
The entropy of mixing is also proportional to the Shannon entropy or compositional uncertainty of information theory, which is defined without requiring Stirling's approximation. Claude Shannon introduced this expression for use in information theory, but similar formulas can be found as far back as the work of Ludwig Boltzmann and J. Willard Gibbs. The Shannon uncertainty is not the same as the Heisenberg uncertainty principle in quantum mechanics, which is based on variance. The Shannon entropy is defined as:

$$H = -\sum_{i=1}^{r} p_i \ln p_i$$
where $p_i$ is the probability that an information source will produce the $i$th symbol from an $r$-symbol alphabet and is independent of previous symbols (thus $i$ runs from 1 to $r$). $H$ is then a measure of the expected amount of information ($-\ln p_i$) missing before the symbol is known or measured, or, alternatively, the expected amount of information supplied when the symbol becomes known. The set of messages of length $N$ symbols from the source will then have an entropy of $NH$.
The thermodynamic entropy is only due to positional uncertainty, so we may take the "alphabet" to be any of the $r$ different species in the gas, and, at equilibrium, the probability that a given particle is of type $i$ is simply the mole fraction $x_i$ for that particle. Since we are dealing with ideal gases, the identity of nearby particles is irrelevant. Multiplying by the number of particles $N$ yields the change in entropy of the entire system from the unmixed case in which all of the $p_i$ were either 1 or 0. We again obtain the entropy of mixing on multiplying by the Boltzmann constant $k_B$.
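As a worked example tying the two pictures together (equimolar binary case, probabilities $p_1 = p_2 = \tfrac12$), the Shannon uncertainty per particle is

$$H = -\left(\tfrac12\ln\tfrac12 + \tfrac12\ln\tfrac12\right) = \ln 2 \ \text{nats} \;(= 1\ \text{bit per particle}), \qquad \Delta S_{\text{mix}} = N k_B H = N k_B \ln 2 .$$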
So thermodynamic entropy with r chemical species with a total of N particles has a parallel to an information source that has r distinct symbols with messages that are N symbols long.
Application to gases
In gases there is a lot more spatial uncertainty because most of their volume is merely empty space. We can regard the mixing process as allowing the contents of the two originally separate containers to expand into the combined volume of the two conjoined containers. The two lattices that allow us to conceptually localize molecular centers of mass also join. The total number of empty cells is the sum of the numbers of empty cells in the two components prior to mixing. Consequently, that part of the spatial uncertainty concerning whether any molecule is present in a lattice cell is the sum of the initial values, and does not increase upon "mixing".
Almost everywhere we look, we find empty lattice cells. Nevertheless, we do find molecules in a few occupied cells. When there is real mixing, for each of those few occupied cells, there is a contingent uncertainty about which kind of molecule it is. When there is no real mixing because the two substances are identical, there is no uncertainty about which kind of molecule it is. Using conditional probabilities, it turns out that the analytical problem for the small subset of occupied cells is exactly the same as for mixed liquids, and the increase in the entropy, or spatial uncertainty, has exactly the same form as obtained previously. Obviously the subset of occupied cells is not the same at different times. But only when there is real mixing and an occupied cell is found do we ask which kind of molecule is there.
| Physical sciences | Thermodynamics | Physics |
8700358 | https://en.wikipedia.org/wiki/Javelin | Javelin | A javelin is a light spear designed primarily to be thrown, historically as a ranged weapon. Today, the javelin is predominantly used for sporting purposes such as the javelin throw. The javelin is nearly always thrown by hand, unlike the sling, bow, and crossbow, which launch projectiles with the aid of a hand-held mechanism. However, devices do exist to assist the javelin thrower in achieving greater distances, such as spear-throwers or the amentum.
A warrior or soldier armed primarily with one or more javelins is a javelineer.
The word javelin comes from Middle English and it derives from Old French javelin, a diminutive of javelot, which meant spear. The word javelot probably originated from one of the Celtic languages.
Prehistory
There is archaeological evidence that javelins and throwing sticks were already in use by the last phase of the Lower Paleolithic. Seven spear-like objects were found in a coal mine in the city of Schöningen, Germany. Stratigraphic dating indicates that the weapons are about 400,000 years old. The excavated items were made of spruce (Picea) trunk and were between long. They were manufactured with the maximum thickness and weight situated at the front end of the wooden shaft. The frontal centre of gravity suggests that these weapons were used as javelins. A fossilized horse shoulder blade with a projectile wound, dated to 500,000 years ago, was revealed in a gravel quarry in the village of Boxgrove, England. Studies suggested that the wound was probably caused by a javelin.
Classical age
Ancient Egypt
In History of Ancient Egypt: Volume 1 (1882), George Rawlinson depicts the javelin as an offensive weapon used by the Ancient Egyptian military. It was lighter in weight than that used by other nations. He describes the Ancient Egyptian javelin's features:
It consisted of a long thin shaft, sometimes merely pointed, but generally armed with a head, which was either leaf-shaped, or like the head of a spear, or else four-sided, and attached to the shaft by projections at the angles.
A strap or tasseled head was situated at the lower end of the javelin: it allowed the javelin thrower to recover his javelin after throwing it.
Egyptian military trained from a young age in special military schools. Focusing on gymnastics to gain strength, hardiness, and endurance in childhood, they learned to throw the javelin – along with practicing archery and the battle-axe – when they grew older, before entering a specific regiment.
Javelins were carried by Egyptian light infantry, as a main weapon, and as an alternative to a bow or spear, generally along with a shield. They also carried a curved sword, club, or hatchet as a sidearm. An important part in battles is often assigned to javelin-men, "whose weapons seem to inflict death at every blow".
Multiple javelins were also sometimes carried by Egyptian war-chariots, in a quiver and/or bow case.
Beyond its military purpose, the javelin was likely also a hunting instrument, for food and sport.
Ancient Greece
The peltasts, usually serving as skirmishers, were armed with several javelins, often with throwing straps to increase stand-off power. The peltasts hurled their javelins at the enemy's heavier troops, the hoplite phalanx, in order to break their lines so that their own army's hoplites could destroy the weakened enemy formation. In the battle of Lechaeum, the Athenian general Iphicrates took advantage of the fact that a Spartan hoplite phalanx operating near Corinth was moving in the open field without the protection of any missile-throwing troops. He decided to ambush it with his force of peltasts. By launching repeated hit-and-run attacks against the Spartan formation, Iphicrates and his men were able to wear the Spartans down, eventually routing them and killing just under half. This marked the first recorded occasion in ancient Greek military history in which a force entirely made up of peltasts had defeated a force of hoplites.
The thureophoroi and thorakitai, who gradually replaced the peltasts, carried javelins in addition to a long thrusting spear and a short sword.
Javelins were often used as an effective hunting weapon, the strap adding enough power to take down large game. Javelins were also used in the Ancient Olympics and other Panhellenic games. They were hurled in a certain direction and whoever hurled it the farthest, as long as it hit tip-first, won that game.
Ancient Rome
Republic and early empire
In 387 BC, the Gauls invaded Italy, inflicted a crushing defeat on the Roman Republican army, and sacked Rome. After this defeat, the Romans undertook a comprehensive reform of their army and changed the basic tactical formation from the Greek-style phalanx armed with the hasta spear and the clipeus round shield to a more flexible three-line formation. The hastati stood in the first line, the principes in the second line and the triarii in the third line. While the triarii were still armed with hastae, the hastati and the principes were rearmed with short swords and heavy javelins. Each soldier from the hastati and principes lines carried two javelins. This heavy javelin, known as a pilum (plural pila), was about two metres long overall, consisting of an iron shank, about 7 mm in diameter and 60 cm long, with pyramidal head, secured to a wooden shaft. The iron shank was either socketed or, more usually, widened to a flat tang. A pilum usually weighed between , with the versions produced during the empire being somewhat lighter. Pictorial evidence suggests that some versions of the weapon were weighted with a lead ball at the base of the shank in order to increase penetrative power, but no archaeological specimens have been found. Recent experiments have shown pila to have a range of about , although the effective range is only . Pila were sometimes referred to as "javelins", but the archaic term for the javelin was "verutum".
From the third century BC, the Roman legion added a skirmisher type of soldier to its tactical formation. The velites were light infantry armed with short swords (the gladius or pugio), small round shields, and several small javelins. These javelins were called "veruta" (singular verutum). The velites typically drew near the enemy, hurled javelins against their formation, and then retreated behind the legion's heavier infantry. The velites were considered highly effective in turning back war elephants, on account of discharging a hail of javelins at some range and not presenting a "block" that could be trampled on or otherwise smashed – unlike the close-order infantry behind them. At the Battle of Zama in 202 BC, the javelin-throwing velites proved their worth and were no doubt critical in helping to herd Hannibal's war elephants through the formation to be slaughtered. The velites would slowly have been either disbanded or re-equipped as more-heavily armed legionaries from the time when Gaius Marius and other Roman generals reorganised the army in the late second and early first centuries BC. Their role would most likely have been taken by irregular auxiliary troops as the republic expanded overseas. The verutum was a cheaper missile weapon than the pilum. The verutum was a short-range weapon, with a simply made head of soft iron.
Legionaries of the late republic and early empire often carried two pila, with one sometimes being lighter than the other. Standard tactics called for a Roman soldier to throw his pilum (both if there was time) at the enemy just before charging to engage with his gladius. Some pila had small hand-guards, to protect the wielder if he intended to use it as a melee weapon, but it does not appear that this was common.
Late Empire
In the late Roman Empire, the Roman infantry came to use a differently-shaped javelin from the earlier pilum. This javelin was lighter and had a greater range. Called a plumbata, it resembled a thick stocky arrow, fletched with leather vanes to provide stability and rotation in flight (which increased accuracy). To overcome its comparatively small mass, the plumbata was fitted with an oval-shaped lead weight socketed around the shaft just forward of the center of balance, giving the weapon its name. Even so, plumbatae were much lighter than pila, and would not have had the armour penetration or shield transfixing capabilities of their earlier counterparts.
Two or three plumbatae were typically clipped to a small wooden bracket on the inside of the large oval or round shields used at the time. Massed troops would unclip and hurl plumbatae as the enemy neared, hopefully stalling their movement and morale by making them clump together and huddle under their shields. With the enemy deprived of rapid movement and their visibility impaired by their own raised shields, the Roman troops were then better placed to exploit the tactical situation. It is unlikely plumbatae were viewed by the Romans as the killing blow, but more as a means of stalling the enemy at ranges greater than previously provided by the heavier and shorter ranged pilum.
Gaul
The Gallic cavalry used to hurl several javelin volleys to soften the enemy before a frontal attack. The Gallic cavalry used their javelins in a tactic similar to that of horse archers' Parthian shot. The Gauls knew how to turn on horseback to throw javelins backwards while appearing to retreat.
Iberia
The Hispanic cavalry was a light cavalry armed with falcatas and several light javelins. The Cantabri tribes invented a military tactic to maximize the advantages of the combination between horse and javelin. In this tactic the horsemen rode around in circles, toward and away from the enemy, continually hurling javelins. The tactic was usually employed against heavy infantry. The constant movement of the horsemen gave them an advantage against slow infantry and made them hard to target. The maneuver was designed to harass and taunt the enemy forces, disrupting close formations. This was commonly used against enemy infantry, especially the heavily armed and slow moving legions of the Romans. This tactic came to be known as the Cantabrian circle. In the late Republic various auxiliary cavalry completely replaced the Italian cavalry contingents and the Hispanic auxiliary cavalry was considered the best.
Numidia
The Numidians were indigenous tribes of northwest Africa. The Numidian cavalry was a light cavalry usually operating as skirmishers. The Numidian horseman was armed with a small shield and several javelins. The Numidians had a reputation as swift horsemen, cunning soldiers and excellent javelin throwers. It is said that Jugurtha, the Numidian king "...took part in the national pursuits of riding, javelin throwing and competed with other young men in running." [Sallust The Jugurthine War: 6]. The Numidian Cavalry served as mercenaries in the Carthaginian Army and played a key role in assisting both Hannibal and Scipio during the Second Punic War.
Middle ages
Norse
There is some literary and archeological evidence that the Norse were familiar with and used the javelin for hunting and warfare, but they commonly used a spear designed for both throwing and thrusting. The Old Norse word for javelin was frakka.
Anglo-Saxons
The Anglo-Saxon term for javelin was france. In Anglo-Saxon warfare, soldiers usually formed a shield wall and used heavy weapons like Danish axes, swords and spears. Javelins, including barbed angons, were used as an offensive weapon from behind the shield wall or by warriors who left the protective formation and attacked the enemy as skirmishers. Designed to be difficult to remove from either flesh or wood, the Angon javelin used by Anglo-Saxon warriors was an effective means of disabling an opponent or his shield, thus having the potential to disrupt opposing shield-walls.
Iberia
The Almogavars were a class of Aragonese infantrymen armed with a short sword, a shield and two heavy javelins, known as azcona. The equipment resembled that of a Roman legionary and the use of the heavy javelins was much the same.
The Jinetes were Arabic light horsemen armed with several javelins, a sword, and a shield. They were proficient at skirmishing and rapid maneuver, and played an important role in Arabic mounted warfare throughout the Reconquista until the sixteenth century. These units were widespread among the Italian infantrymen of the fifteenth century.
Wales
The Welsh, particularly those of North Wales, used the javelin as one of their main weapons. During the Norman and later English invasions, the primary Welsh tactic was to rain javelins on the tired, hungry, and heavily armoured English troops and then retreat into the mountains or woods before the English troops could pursue and attack them. This tactic was very successful, since it demoralized and damaged the English armies while the Welsh ranks suffered little.
Ireland
The kern of Ireland used javelins as their main weapon as they accompanied the more heavily armoured galloglass.
Chinese
Various kingdoms and dynasties in China have used javelins, such as the iron-headed javelin of the Qing dynasty.
Qi Jiguang's anti-pirate army included javelin throwers with shields.
Modern age
Africa
Many African kingdoms have used the javelin as their main weapon since ancient times. Typical African warfare was based on ritualized stand-off encounters involving throwing javelins without advancing for close combat. In the flag of Eswatini there is a shield and two javelins, which symbolize the protection from the country's enemies.
Zulu
The Zulu warriors used a long version of the assegai javelin as their primary weapon. The legendary Zulu leader Shaka initiated military reforms in which a short stabbing spear with a long, swordlike spearhead, named the iklwa, became the Zulu warrior's main weapon and was used as a mêlée weapon. The assegai was not discarded, but was used for an initial missile assault. With the larger shields introduced by Shaka to the Zulu army, the short spears used as stabbing swords, and the opening phase of javelin attack, the Zulu regiments were quite similar to the Roman legion with its Scutum, Gladius and Pilum tactical combination.
Mythology
Norse mythology
In Norse mythology, Odin, the chief god, carried a javelin or spear called Gungnir. It was created by a group of dwarves known as the Sons of Ivaldi who also fashioned the ship of Freyr called Skidbladnir and the golden hair of Sif. It had the property of always finding its mark ("the spear never stopped in its thrust"). During the final conflict of Ragnarok between the gods and giants, Odin will use Gungnir to attack the wolf Fenrir before being devoured by him.
During the war (and subsequent alliance) between the Aesir and Vanir at the dawn of time, Odin hurled a javelin over the enemy host which, according to custom, was thought to bring good fortune or victory to the thrower. Odin also wounded himself with a spear while hanging from Yggdrasil, the World Tree, in his ritual quest for knowledge but in neither case is the weapon referred to specifically as Gungnir.
When the god Baldr began to have prophetic dreams of his own death, his mother Frigg extracted an oath from all things in nature not to harm him. However, she neglected the mistletoe, thinking it was too young to make, let alone respect, such a solemn vow. When Loki learned of this weakness, he had a javelin or dart made from one of its branches and tricked Hod, the blind god, into hurling it at Baldr and causing his death.
Lusitanian mythology
The god Runesocesius is identified as a "god of the javelin".
| Technology | Ranged weapons | null |
8702779 | https://en.wikipedia.org/wiki/Nanofiltration | Nanofiltration | Nanofiltration is a membrane filtration process that uses nanometer-sized pores through which particles smaller than about 1–10 nanometers pass. Nanofiltration membranes have pore sizes of about 1–10 nanometers, smaller than those used in microfiltration and ultrafiltration, but slightly bigger than those in reverse osmosis. Membranes used are predominantly polymer thin films. It is used to soften, disinfect, and remove impurities from water, and to purify or separate chemicals such as pharmaceuticals.
Membranes
Membrane materials that are commonly used are polymer thin films such as polyethylene terephthalate or metals such as aluminium. Pore dimensions are controlled by pH, temperature and time during development, with pore densities ranging from 1 to 10⁶ pores per cm².
Membranes made from polyethylene terephthalate (PET) and other similar materials, are referred to as "track-etch" membranes, named after the way the pores on the membranes are made. "Tracking" involves bombarding the polymer thin film with high energy particles. This results in making tracks that are chemically developed into the membrane, or "etched" into the membrane, which are the pores.
Membranes created from metal such as alumina membranes, are made by electrochemically growing a thin layer of aluminum oxide from aluminum in an acidic medium.
Range of applications
Historically, nanofiltration and other membrane technology used for molecular separation was applied entirely to aqueous systems. The original uses for nanofiltration were water treatment and in particular water softening. Nanofilters "soften" water by retaining scale-forming divalent ions (e.g. Ca2+, Mg2+).
Nanofiltration has been extended into other industries such as milk and juice production as well as pharmaceuticals, fine chemicals, and flavour and fragrance industries.
Advantages and disadvantages
One of the main advantages of nanofiltration as a method of softening water is that it retains calcium and magnesium ions while passing smaller hydrated monovalent ions, so filtration is performed without adding extra sodium ions, as ion exchangers do. Many separation processes do not operate at room temperature (e.g. distillation), which greatly increases the cost of the process when continuous heating or cooling is applied. Nanofiltration also performs gentle molecular separation, which is often not possible with other forms of separation process (e.g. centrifugation). These are two of the main benefits associated with nanofiltration.
Nanofiltration has the very favorable benefit of being able to process large volumes and continuously produce streams of products. Still, nanofiltration is the least used method of membrane filtration in industry, as the membrane pore sizes are limited to only a few nanometers. For anything smaller, reverse osmosis is used, and for anything larger, ultrafiltration. Ultrafiltration can also be used in cases where nanofiltration can be used, because it is more conventional.
A main disadvantage associated with nanofiltration, as with all membrane filter technology, is the cost and maintenance of the membranes used. Nanofiltration membranes are an expensive part of the process. Repairs and replacement of membranes depend on total dissolved solids, flow rate and components of the feed. With nanofiltration being used across various industries, only an estimate of replacement frequency can be given. This causes nanofilters to be replaced a short time before or after their prime usage is complete.
Design and operation
Industrial applications of membranes require hundreds to thousands of square meters of membranes and therefore an efficient way to reduce the footprint by packing them is required. Membranes first became commercially viable when low cost methods of housing in 'modules' were achieved.
Membranes are not self-supporting. They need to be held by a porous support that can withstand the pressures required to operate the NF membrane without hindering its performance. To do this effectively, the module needs to provide a channel to remove the permeate and to provide appropriate flow conditions that reduce the phenomenon of concentration polarisation. A good design minimises pressure losses on both the feed side and the permeate side, and thus energy requirements.
Concentration polarisation
Concentration polarization describes the accumulation of the species being retained close to the surface of the membrane which reduces separation capabilities. It occurs because the particles are convected towards the membrane with the solvent and its magnitude is the balance between this convection caused by solvent flux and the particle transport away from the membrane due to the concentration gradient (predominantly caused by diffusion.) Although concentration polarization is easily reversible, it can lead to fouling of the membrane.
Spiral wound module
Spiral wound modules are the most commonly used style of module and are of a 'standardized' design, available in a range of standard diameters (2.5", 4" and 8") to fit standard pressure vessels that can hold several modules in series connected by O-rings. The module uses flat sheets wrapped around a central tube. The membranes are glued along three edges over a permeate spacer to form 'leaves'. The permeate spacer supports the membrane and conducts the permeate to the central permeate tube. Between each leaf, a mesh-like feed spacer is inserted. The purpose of the mesh-like spacer is to provide a hydrodynamic environment near the surface of the membrane that discourages concentration polarisation. Once the leaves have been wound around the central tube, the module is wrapped in a casing layer and caps are placed on the ends of the cylinder to prevent 'telescoping' that can occur in high flow rate and pressure conditions.
Tubular module
Tubular modules look similar to shell and tube heat exchangers, with bundles of tubes whose inner surface carries the active membrane layer. Flow through the tubes is normally turbulent, ensuring low concentration polarisation but also increasing energy costs. The tubes can either be self-supporting or supported by insertion into perforated metal tubes. For nanofiltration, this module design is limited by the pressure the tubes can withstand before bursting, which limits the maximum flux possible. Due to both the high energy operating costs of turbulent flow and the limiting burst pressure, tubular modules are more suited to 'dirty' applications where feeds contain particulates, such as filtering raw water to gain potable water in the Fyne process. The membranes can be easily cleaned through a 'pigging' technique in which foam balls are squeezed through the tubes, scouring off the caked deposits.
Flux enhancing strategies
These strategies work to reduce the magnitude of concentration polarisation and fouling. There is a range of techniques available however the most common is feed channel spacers as described in spiral wound modules. All of the strategies work by increasing eddies and generating a high shear in the flow near the membrane surface. Some of these strategies include vibrating the membrane, rotating the membrane, having a rotor disk above the membrane, pulsing the feed flow rate and introducing gas bubbling close to the surface of the membrane.
Characterisation
Performance parameters
Retention of both charged and uncharged solutes, together with permeation measurements, can be categorised as performance parameters, since the performance of a membrane under natural conditions is based on the ratio of solute retained to solute permeated through the membrane.
For charged solutes, the ionic distribution of salts near the membrane-solution interface plays an important role in determining the retention characteristic of a membrane. If the charge of the membrane and the composition and concentration of the solution to be filtered is known, the distribution of various salts can be found. This in turn can be combined with the known charge of the membrane and the Gibbs–Donnan effect to predict the retention characteristics for that membrane.
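One common way to write that ionic distribution (a standard Donnan-equilibrium sketch; activity, steric and dielectric corrections are neglected here) relates the concentration of ion $i$ just inside the membrane, $\bar c_i$, to its bulk concentration $c_i$ through the Donnan potential $\Delta\varphi_D$:

$$\frac{\bar c_i}{c_i} = \exp\!\left(-\frac{z_i F\,\Delta\varphi_D}{RT}\right),$$

where $z_i$ is the ion valence, $F$ Faraday's constant, $R$ the gas constant and $T$ the absolute temperature.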
Uncharged solutes cannot be characterised simply by molecular weight cut-off (MWCO), although in general an increase in molecular weight or solute size leads to an increase in retention. The charge and structure of the solute, and the pH of the solution, also influence the retention characteristics.
Morphology parameters
The morphology of a membrane is usually established by microscopy. Atomic force microscopy (AFM) is one method used to characterise the surface roughness of a membrane, by passing a small sharp tip (<100 Å) across the surface of a membrane and measuring the resulting Van der Waals force between the atoms in the end of the tip and the surface. This is useful because a direct correlation between surface roughness and colloidal fouling has been developed. Correlations also exist between fouling and other morphology parameters, such as hydrophobicity, showing that the more hydrophobic a membrane is, the less prone to fouling it is. See membrane fouling for more information.
Methods to determine the porosity of porous membranes have also been found via permporometry, which makes use of differing vapour pressures to characterise the pore size and pore size distribution within the membrane. Initially all pores in the membrane are completely filled with a liquid and as such no permeation of gas occurs, but after reducing the relative vapour pressure some gaps start to form within the pores, as dictated by the Kelvin equation. Polymeric (non-porous) membranes cannot be subjected to this methodology, as the condensable vapour should have a negligible interaction with the membrane.
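For reference, the Kelvin relation used in permporometry is commonly written (assuming cylindrical pores, a condensable vapour of molar volume $V_m$, surface tension $\gamma$ and contact angle $\theta$) as

$$\ln\frac{p}{p_0} = -\frac{2\gamma V_m\cos\theta}{r_K\,R\,T},$$

so that lowering the relative vapour pressure $p/p_0$ empties pores of progressively smaller Kelvin radius $r_K$.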
Solute transport and rejection
Unlike membranes with larger and smaller pore sizes, passage of solutes through nanofiltration is significantly more complex.
Because of the pore sizes, there are three modes of transport of solutes through the membrane. These include 1) diffusion (molecule travel due to concentration potential gradients, as seen through reverse osmosis membranes), 2) convection (travel with flow, like in larger pore size filtration such as microfiltration), and 3) electromigration (attraction or repulsion from charges within and near the membrane).
Additionally, the exclusion mechanisms in nanofiltration are more complex than in other forms of filtration. Most filtration systems operate solely by size (steric) exclusion, but at the small length scales seen in nanofiltration, important effects also include surface charge and hydration (the solvation shell). The exclusion due to hydration is referred to as dielectric exclusion, a reference to the dielectric constants (energies) associated with a particle's presence in solution versus within a membrane substrate. Solution pH strongly impacts surface charge, providing a method to understand and better control rejection.
The transport and exclusion mechanisms are heavily influenced by membrane pore size, solvent viscosity, membrane thickness, solute diffusivity, solution temperature, solution pH, and membrane dielectric constant. The pore size distribution is also important. Modeling rejection accurately for NF is very challenging. It can be done with applications of the Nernst–Planck equation, although a heavy reliance on fitting parameters to experimental data is usually required.
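A commonly used starting point for such models (a sketch of the extended Nernst–Planck equation; the symbols here are generic and not tied to any particular membrane) writes the flux $j_i$ of solute $i$ inside a pore as the sum of the three transport modes listed earlier:

$$j_i = \underbrace{-D_{i,p}\frac{dc_i}{dx}}_{\text{diffusion}} \;+\; \underbrace{K_{i,c}\,c_i\,J_v}_{\text{convection}} \;-\; \underbrace{\frac{z_i c_i D_{i,p} F}{RT}\frac{d\psi}{dx}}_{\text{electromigration}},$$

with $D_{i,p}$ the hindered pore diffusivity, $K_{i,c}$ a convective hindrance factor, $J_v$ the volumetric flux, $z_i$ the valence and $\psi$ the electric potential.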
In general, charged solutes are much more effectively rejected in NF than uncharged solutes, and multivalent solutes such as sulfate (valence of 2) experience very high rejection.
Typical figures for industrial applications
Keeping in mind that NF is usually part of a composite system for purification, a single unit is chosen based on the design specifications for the NF unit. For drinking water purification many commercial membranes exist, coming from chemical families having diverse structures, chemical tolerances and salt rejections.
NF units in drinking water purification range from extremely low salt rejection (<5% in 1001A membranes) to almost complete rejection (99% in 8040-TS80-TSA membranes). Flow rates range from 25 to 60 m³/day for each unit, so commercial filtration requires multiple NF units in parallel to process large quantities of feed water. The pressures required in these units are generally between 4.5 and 7.5 bar.
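As a purely illustrative sizing calculation (hypothetical feed of 1,000 m³/day and units rated at 40 m³/day each; these numbers are not from the text), the number of parallel units would be

$$N_{\text{units}} = \left\lceil \frac{1000\ \text{m}^3/\text{day}}{40\ \text{m}^3/\text{day}} \right\rceil = 25 .$$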
For seawater desalination using a NF-RO system a typical process is shown below.
Because NF permeate is rarely clean enough to be used as the final product for drinking water and other water purification, it is commonly used as a pre-treatment step for reverse osmosis (RO), as is shown above.
Post-treatment
As with other membrane based separations such as ultrafiltration, microfiltration and reverse osmosis, post-treatment of either permeate or retentate flow streams (depending on the application) – is a necessary stage in industrial NF separation prior to commercial distribution of the product. The choice and order of unit operations employed in post-treatment is dependent on water quality regulations and the design of the NF system. Typical NF water purification post-treatment stages include aeration and disinfection & stabilisation.
Aeration
A Polyvinyl chloride (PVC) or fibre-reinforced plastic (FRP) degasifier is used to remove dissolved gases such as carbon dioxide and hydrogen sulfide from the permeate stream. This is achieved by blowing air in a countercurrent direction to the water falling through packing material in the degasifier. The air effectively strips the unwanted gases from the water.
Disinfection and stabilisation
The permeate water from an NF separation is demineralised and may be prone to large changes in pH, thus posing a substantial risk of corrosion in piping and other equipment components. To increase the stability of the water, chemical addition of alkaline solutions such as lime and caustic soda is employed. Furthermore, disinfectants such as chlorine or chloramine are added to the permeate, as well as phosphate or fluoride corrosion inhibitors in some cases.
Research trends
Challenges in nanofiltration (NF) technology include minimising membrane fouling and reducing energy requirements. Thin film composite membranes (TFC), which consist of a number of extremely thin selective layers interfacially polymerized over a microporous substrate, have had commercial success in industrial membrane applications. Electrospun nanofibrous membrane layers (ENMs) enhance permeate flux.
Energy-efficient alternatives to the commonly used spiral wound arrangement are hollow fibre membranes, which require less pre-treatment. Titanium dioxide nanoparticles have been used to minimize membrane fouling.
| Physical sciences | Other separations | Chemistry |
5515027 | https://en.wikipedia.org/wiki/Directory%20%28computing%29 | Directory (computing) | In computing, a directory is a file system cataloging structure which contains references to other computer files, and possibly other directories. On many computers, directories are known as folders, or drawers, analogous to a workbench or the traditional office filing cabinet. The name derives from books like a telephone directory that lists the phone numbers of all the people living in a certain area.
Files are organized by storing related files in the same directory. In a hierarchical file system (that is, one in which files and directories are organized in a manner that resembles a tree), a directory contained inside another directory is called a subdirectory. The terms parent and child are often used to describe the relationship between a subdirectory and the directory in which it is cataloged, the latter being the parent. The top-most directory in such a filesystem, which does not have a parent of its own, is called the root directory.
The freedesktop.org media type for directories within many Unix-like systems – including but not limited to systems using GNOME, KDE Plasma 5, or ROX Desktop as the desktop environment – is "inode/directory". This is not an IANA registered media type.
Overview
Historically, and even on some modern embedded systems, the file systems either had no support for directories at all or had only a "flat" directory structure, meaning subdirectories were not supported; there were only a group of top-level directories, each containing files. In modern systems, a directory can contain a mix of files and subdirectories.
A reference to a location in a directory system is called a path.
In many operating systems, programs have an associated working directory in which they execute. Typically, file names accessed by the program are assumed to reside within this directory if the file names are not specified with an explicit directory name.
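A small illustration of this behaviour (a Python sketch using only the standard library; the relative name data/notes.txt is hypothetical) shows how a relative file name is interpreted against the program's working directory:

```python
from pathlib import Path

print(Path.cwd())                 # the process's current working directory
p = Path("data") / "notes.txt"    # a relative file name (hypothetical)
print(p.resolve())                # interpreted relative to the working directory
```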
Some operating systems restrict a user's access only to their home directory or project directory, thus isolating their activities from all other users. In early versions of Unix the root directory was the home directory of the root user, but modern Unix usually uses another directory such as for this purpose.
In keeping with Unix philosophy, Unix systems treat directories as a type of file. Caveats include not being able to write to a directory file except indirectly by creating, renaming and removing file system objects in the directory and only being able to read from a directory file using directory-specific library routines and system calls that return records, not a byte-stream.
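A brief sketch (Python, standard library only) of reading a directory through directory-specific calls that return records rather than a byte stream:

```python
import os

# Directory contents are read as records (one entry per file system object),
# not as a raw byte stream.
with os.scandir(".") as entries:   # "." is the current working directory
    for entry in entries:
        kind = "directory" if entry.is_dir() else "file"
        print(kind, entry.name)
```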
Folder metaphor
The name folder, presenting an analogy to the file folder used in offices, and used in a hierarchical file system design for the Electronic Recording Machine, Accounting (ERMA) Mark 1 published in 1958 as well as by Xerox Star, is used in almost all modern operating systems' desktop environments. Folders are often depicted with icons which visually resemble physical file folders.
There is a difference between a directory, which is a file system concept, and the graphical user interface metaphor that is used to represent it (a folder). For example, Microsoft Windows uses the concept of special folders to help present the contents of the computer to the user in a fairly consistent way that frees the user from having to deal with absolute directory paths, which can vary between versions of Windows, and between individual installations. Many operating systems also have the concept of "smart folders" or virtual folders that reflect the results of a file system search or other operation. These folders do not represent a directory in the file hierarchy. Many email clients allow the creation of folders to organize email. These folders have no corresponding representation in the filesystem structure.
If one is referring to a container of documents, the term folder is more appropriate. The term directory refers to the way a structured list of document files and folders are stored on the computer. The distinction can be due to the way a directory is accessed; on Unix systems, is usually referred to as a directory when viewed in a command line console, but if accessed through a graphical file manager, users may sometimes call it a folder.
Lookup cache
Operating systems that support hierarchical filesystems (practically all modern ones) implement a form of caching to RAM of recent path lookups. In the Unix world, this is usually called Directory Name Lookup Cache (DNLC), although it is called dcache on Linux.
For local filesystems, DNLC entries normally expire only under pressure from other more recent entries. For network file systems a coherence mechanism is necessary to ensure that entries have not been invalidated by other clients.
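The following is only a toy sketch of the caching idea (real DNLC/dcache implementations live inside the kernel and also handle invalidation, negative entries, and coherence for network file systems): path components are cached as (parent, name) pairs, and older entries expire under pressure from newer ones.

```python
from collections import OrderedDict

class NameLookupCache:
    """Toy directory name lookup cache: (parent_id, name) -> child_id."""

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.entries = OrderedDict()

    def lookup(self, parent_id, name):
        key = (parent_id, name)
        child = self.entries.get(key)
        if child is not None:
            self.entries.move_to_end(key)   # recently used entries survive longer
        return child

    def insert(self, parent_id, name, child_id):
        key = (parent_id, name)
        self.entries[key] = child_id
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # expire under pressure from newer entries
```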
| Technology | Data storage and memory | null |
585682 | https://en.wikipedia.org/wiki/Scops%20owl | Scops owl | Scops owls are typical owls in family Strigidae belonging to the genus Otus and are restricted to the Old World. Otus is the largest genus of owls with 59 species. Scops owls are colored in various brownish hues, sometimes with a lighter underside and/or face, which helps to camouflage them against the bark of trees. Some are polymorphic, occurring in a greyish- and a reddish-brown morph. They are small and agile, with both sexes being compact in size and shape. Female scops owls are usually larger than males.
For most of the 20th century, this genus included the American screech owls, which are now again separated in Megascops based on a range of behavioral, biogeographical, morphological and DNA sequence data.
Taxonomy
The genus Otus was introduced in 1769 by the Welsh naturalist Thomas Pennant for the Indian scops owl (O. bakkamoena). The name is derived from the Latin otus and the Greek ōtos, both meaning a horned or eared owl (cf. οὖς, ὠτός, "ear"). The generic name Scops, proposed by Marie Jules César Savigny in 1809, is a junior synonym; it is derived from the Greek skōps, meaning a small kind of owl, i.e. the Eurasian scops owl (Otus scops).
By the mid-19th century, it was becoming clear that Otus encompassed more than one genus. First, in 1848, the screech owls were split off as Megascops. The white-faced owls of Africa, with their huge eyes and striking facial coloration, were separated in Ptilopsis in 1851. In 1854, the highly apomorphic white-throated screech owl of the Andes was placed in the monotypic genus Macabra. Gymnasio was established in the same year for the Puerto Rican owl, and the bare-legged owl (or "Cuban screech owl") was separated in Gymnoglaux the following year; the latter genus was sometimes merged with Gymnasio by subsequent authors. The Palau scops owl, described only in 1872 and little-known to this day, was eventually separated in Pyrroglaux by Yoshimaro Yamashina in 1938.
In the early 20th century, the lumping-together of taxa had come to be preferred. The 3rd edition of the AOU checklist in 1910 placed the screech owls back in Otus. Although this move was never unequivocally accepted, it was the dominant treatment throughout most of the 20th century. In 1988, an attempt was made to resolve this by re-establishing, at subgenus rank inside Otus, all those genera that had been split off some 140 years earlier. Still, the diversity and distinctness of the group failed to come together in a good evolutionary and phylogenetic picture, and it was not until the availability of DNA sequence data that this could be resolved. In 1999, a preliminary study of mtDNA cytochrome b across a wide range of owls found that even the treatment as subgenera was probably unsustainable and suggested that most of the genera proposed around 1850 should be accepted. Though there was some debate about the reliability of these findings at first, they have been confirmed by subsequent studies. In 2003, the AOU formally re-accepted the genus Megascops.
Species
The genus Otus contains 59 species (including 3 extinct species):
Giant scops owl, Otus gurneyi
White-fronted scops owl, Otus sagittatus
Reddish scops owl, Otus rufescens
Serendib scops owl, Otus thilohoffmanni
Sandy scops owl, Otus icterorhynchus
Sokoke scops owl, Otus ireneae
Andaman scops owl, Otus balli
Flores scops owl, Otus alfredi
Mountain scops owl, Otus spilocephalus
Javan scops owl, Otus angelinae
Mindanao scops owl, Otus mirus
Luzon scops owl, Otus longicornis
Mindoro scops owl, Otus mindorensis
São Tomé scops owl, Otus hartlaubi
Torotoroka scops owl, Otus madagascariensis – formerly included in O. rutilus
Rainforest scops owl, Otus rutilus
Mayotte scops owl, Otus mayottensis – formerly included in O. rutilus
Karthala scops owl, Otus pauliani
Anjouan scops owl, Otus capnodes
Moheli scops owl, Otus moheliensis
† Réunion scops owl, Otus grucheti – extinct, formerly placed in the genus Mascarenotus
† Mauritius scops owl, Otus sauzieri – extinct, formerly placed in the genus Mascarenotus
† Rodrigues scops owl, Otus murivorus – extinct, formerly placed in the genus Mascarenotus
Pemba scops owl, Otus pembaensis
Eurasian scops owl, Otus scops
Cyprus scops owl, Otus cyprius – formerly included in O. scops
Pallid scops owl, Otus brucei
Arabian scops owl, Otus pamelae
African scops owl, Otus senegalensis
Annobón scops owl, Otus feae – formerly included in O. senegalensis
Socotra scops owl, Otus socotranus
Oriental scops owl, Otus sunia
Ryūkyū scops owl, Otus elegans
Moluccan scops owl, Otus magicus
Wetar scops owl, Otus tempestatis
Sula scops owl, Otus sulaensis
Biak scops owl, Otus beccarii
Sulawesi scops owl, Otus manadensis
Banggai scops owl, Otus mendeni
Siau scops owl, Otus siaoensis
Sangihe scops owl, Otus collari
Mantanani scops owl, Otus mantananensis
Seychelles scops owl, Otus insularis
Nicobar scops owl, Otus alius
Simeulue scops owl, Otus umbra
Enggano scops owl, Otus enganensis
Mentawai scops owl, Otus mentawi
Rajah scops owl, Otus brookii
Indian scops owl, Otus bakkamoena
Collared scops owl, Otus lettia – formerly included in O. bakkamoena
Japanese scops owl, Otus semitorques – formerly included in O. bakkamoena
Sunda scops owl, Otus lempiji – formerly included in O. bakkamoena
Philippine scops owl, Otus megalotis
Negros scops owl, Otus nigrorum – formerly included in O. megalotis
Everett's scops owl, Otus everetti – formerly included in O. megalotis
Palawan scops owl, Otus fuliginosus
Wallace's scops owl, Otus silvicola
Rinjani scops owl, Otus jolandae
Palau scops owl, Otus podarginus – formerly placed in the monotypic genus Pyrroglaux
Principe scops owl, Otus bikegila
Two extinct species are sometimes placed in the genus:
† Madeiran scops owl, Otus mauli (extinct, c. 15th century)
† São Miguel scops owl, Otus frutuosoi (extinct, c. 15th century)
An apparent Otus owl was heard calling at about 1,000 meters ASL south of the summit of Camiguin in the Philippines on May 14, 1994. No scops owls had previously been known from this island, and given that new species of Otus are occasionally discovered, it may have been an undescribed taxon.
In July 2016, an unknown Otus species was photographed on Príncipe. The image was published on Ornithomedia. Dubbed Otus bikegila, it was formally described in 2022.
Formerly placed here
As noted above, the fossil record of scops owls gives an incomplete picture of their evolution at present. While older sources cite many supposedly extinct species of Otus (or "Scops"), these are now placed in entirely different genera:
"Otus" henrici was a barn owl of the genus Selenornis
"Otus" providentiae was a burrowing owl, probably a paleosubspecies
"Otus" wintershofensis may be close to extant genus Ninox and some material assigned to it belongs into Intutula
"Scops" commersoni is a junior synonym of the recently extinct Mauritius owl, referring to pictures and descriptions which mention ear tufts; the subfossil material of this species had been erroneously assigned to tuftless owls.
Evolution
The evolutionary relationships of the scops and screech owls are not entirely clear. What is certain is that they are very closely related; they may be considered sister lineages which fill essentially the same ecological niche in their allopatric ranges. A screech-owl fossil from the Late Pliocene of Kansas – which is almost identical to eastern and western screech owls – indicates a long-standing presence of these birds in the Americas, while coeval scops owl fossils very similar to the Eurasian scops-owl have been found at S'Onix on the Spanish island Majorca. The scops and screech owl lineage probably evolved at some time during the Miocene (like most other genera of typical owls), and the three (see below) modern lineages separated perhaps roughly 5 million years ago. Note that there is no reliable estimate of divergence time, as Otus and Megascops are osteologically very similar, as is to be expected from a group that has apparently conserved its ecomorphology since before its evolutionary radiation. Like almost all scops and screech owls today, their common ancestor was in all probability already a small owl, with ear tufts and at least the upper tarsus ("leg") feathered.
However that may be, the hypothesis that the group evolved from Old World stock is tentatively supported by cytochrome b sequence data.
Ecology and behaviour
Late 19th-century ornithologists knew little of the variation in these cryptic birds, which often live in far-off places, but as each new taxon was described, the differences between the Old World and New World "scops" owls became more and more prominent. Namely, the scops owls give a whistling call or a row of high-pitched hoots with fewer than four individual hoots per second. This call is given in social interaction or when the owl tries to scare away other animals. The screech owls, on the other hand, are named for their piercing trills of more than four individual notes per second. They also have a kind of song, which is a short sequence of varying calls given by the males when they try to attract females to their nests, or between members of a pair. There are a few other differences, such as the screech owls almost never being brown below, which is common in scops owls, but the difference in vocalizations is the most striking.
Scops owls hunt from perches in semi-open landscapes. They prefer areas which contain old trees with hollows; these are home to their prey which includes insects, reptiles, small mammals such as bats and mice and other small birds. The owls will also eat earthworms, amphibians and aquatic invertebrates. Scops owls have a good sense of hearing which helps them locate their prey in any habitat. They also possess well-developed raptorial claws and a curved bill, both of which are used for tearing their prey into pieces small enough to swallow easily.
Scops owls are primarily solitary birds. Most species lay and incubate their eggs in a cavity nest that was originally made by another animal. During the incubation period, the male will feed the female. These birds are monogamous, with biparental care, and only fledge one young per year. The young of most scops owls are altricial to semialtricial.
As opposed to screech owls, scops owls have only a single type of call. This consists of a series of whistles or high-pitched hoots, given with a frequency of 4 calls per second or less, or of a single, drawn-out whistle. Calls differ widely between species in type and pitch, and in the field are often the first indication of these birds' presence, as well as the most reliable means to distinguish between species. Some, like the recently described Serendib scops owl (Otus thilohoffmanni), were discovered because their vocalizations were unfamiliar to experts in birdcalls.
| Biology and health sciences | Strigiformes | Animals |
585732 | https://en.wikipedia.org/wiki/Radius%20%28bone%29 | Radius (bone) | The radius or radial bone (plural: radii or radiuses) is one of the two large bones of the forearm, the other being the ulna. It extends from the lateral side of the elbow to the thumb side of the wrist and runs parallel to the ulna. The ulna is longer than the radius, but the radius is thicker. The radius is a long bone, prism-shaped and slightly curved longitudinally.
The radius is part of two joints: the elbow and the wrist. At the elbow, it joins with the capitulum of the humerus, and in a separate region, with the ulna at the radial notch. At the wrist, the radius forms a joint with the ulna bone.
The corresponding bone in the lower leg is the tibia.
Structure
The long narrow medullary cavity is enclosed in a strong wall of compact bone. It is thickest along the interosseous border and thinnest at the extremities, except over the cup-shaped articular surface (fovea) of the head.
The trabeculae of the spongy tissue are somewhat arched at the upper end and pass upward from the compact layer of the shaft to the fovea capituli (the cup-shaped depression on the head of the radius that receives the capitulum of the humerus); they are crossed by others parallel to the surface of the fovea. The arrangement at the lower end is somewhat similar. The radius is missing in radial aplasia.
The radius has a body and two extremities. The upper extremity of the radius consists of a somewhat cylindrical head articulating with the ulna and the humerus, a neck, and a radial tuberosity. The body of the radius is self-explanatory, and the lower extremity of the radius is roughly quadrilateral in shape, with articular surfaces for the ulna, scaphoid and lunate bones. The distal end of the radius forms two palpable points, radially the styloid process and Lister's tubercle on the ulnar side. Along with the proximal and distal radioulnar articulations, an interosseous membrane originates medially along the length of the body of the radius to attach the radius to the ulna.
Near the wrist
The distal end of the radius is large and of quadrilateral form.
Joint surfaces
It is provided with two articular surfaces – one below, for the carpus, and another at the medial side, for the ulna.
The carpal articular surface is triangular, concave, smooth, and divided by a slight antero-posterior ridge into two parts. Of these, the lateral, triangular, articulates with the scaphoid bone; the medial, quadrilateral, with the lunate bone.
The articular surface for the ulna is called the ulnar notch (sigmoid cavity) of the radius; it is narrow, concave, smooth, and articulates with the head of the ulna.
These two articular surfaces are separated by a prominent ridge, to which the base of the triangular articular disk is attached; this disk separates the wrist-joint from the distal radioulnar articulation.
Other surfaces
This end of the bone has three non-articular surfaces – volar, dorsal, and lateral.
The volar surface, rough and irregular, affords attachment to the volar radiocarpal ligament.
The dorsal surface is convex, affords attachment to the dorsal radiocarpal ligament, and is marked by three grooves. Enumerated from the lateral side:
The first groove is broad but shallow, and subdivided into two by a slight ridge: the lateral of these two transmits the tendon of the extensor carpi radialis longus muscle; the medial, the tendon of the extensor carpi radialis brevis muscle.
The second is deep but narrow, and bounded laterally by a sharply defined ridge; it is directed obliquely from above downward and lateralward, and transmits the tendon of the extensor pollicis longus muscle.
The third is broad, for the passage of the tendons of the extensor indicis proprius and extensor digitorum communis.
The lateral surface is prolonged obliquely downward into a strong, conical projection, the styloid process, which gives attachment by its base to the tendon of the brachioradialis, and by its apex to the radial collateral ligament of wrist joint. The lateral surface of this process is marked by a flat groove, for the tendons of the abductor pollicis longus muscle and extensor pollicis brevis muscle.
Body
The body of the radius (or shaft of radius) is prismoid in form, narrower above than below, and slightly curved, so as to be convex lateralward. It presents three borders and three surfaces.
Borders
The volar border (margo volaris; anterior border; palmar;) extends from the lower part of the tuberosity above to the anterior part of the base of the styloid process below, and separates the volar from the lateral surface. Its upper third is prominent, and from its oblique direction has received the name of the oblique line of the radius; it gives origin to the flexor digitorum superficialis muscle (also flexor digitorum sublimis) and flexor pollicis longus muscle; the surface above the line gives insertion to part of the supinator muscle. The middle third of the volar border is indistinct and rounded. The lower fourth is prominent, and gives insertion to the pronator quadratus muscle, and attachment to the dorsal carpal ligament; it ends in a small tubercle, into which the tendon of the brachioradialis muscle is inserted.
The dorsal border (margo dorsalis; posterior border) begins above at the back of the neck, and ends below at the posterior part of the base of the styloid process; it separates the posterior from the lateral surface. It is indistinct above and below, but well-marked in the middle third of the bone.
The interosseous border (internal border; crista interossea; interosseous crest;) begins above, at the back part of the tuberosity, and its upper part is rounded and indistinct; it becomes sharp and prominent as it descends, and at its lower part divides into two ridges which are continued to the anterior and posterior margins of the ulnar notch. To the posterior of the two ridges the lower part of the interosseous membrane is attached, while the triangular surface between the ridges gives insertion to part of the pronator quadratus muscle. This crest separates the volar from the dorsal surface, and gives attachment to the interosseous membrane. The connection between the two bones is actually a joint referred to as a syndesmosis joint.
Surfaces
The volar surface (facies volaris; anterior surface) is concave in its upper three-fourths, and gives origin to the flexor pollicis longus muscle; it is broad and flat in its lower fourth, and affords insertion to the Pronator quadratus. A prominent ridge limits the insertion of the Pronator quadratus below, and between this and the inferior border is a triangular rough surface for the attachment of the volar radiocarpal ligament. At the junction of the upper and middle thirds of the volar surface is the nutrient foramen, which is directed obliquely upward.
The dorsal surface (facies dorsalis; posterior surface) is convex, and smooth in the upper third of its extent, and covered by the Supinator. Its middle third is broad, slightly concave, and gives origin to the Abductor pollicis longus above, and the extensor pollicis brevis muscle below. Its lower third is broad, convex, and covered by the tendons of the muscles which subsequently run in the grooves on the lower end of the bone.
The lateral surface (facies lateralis; external surface) is convex throughout its entire extent and is known as the convexity of the radius, curving outwards to be convex at the side. Its upper third gives insertion to the supinator muscle. About its center is a rough ridge, for the insertion of the pronator teres muscle. Its lower part is narrow, and covered by the tendons of the abductor pollicis longus muscle and extensor pollicis brevis muscle.
Near the elbow
The upper extremity of the radius (or proximal extremity) presents a head, neck, and tuberosity.
The radial head has a cylindrical form, and on its upper surface is a shallow cup or fovea for articulation with the capitulum (or capitellum) of the humerus. The circumference of the head is smooth; it is broad medially where it articulates with the radial notch of the ulna, narrow in the rest of its extent, which is embraced by the annular ligament. The deepest point in the fovea is not axi-symmetric with the long axis of the radius, creating a cam effect during pronation and supination.
The head is supported on a round, smooth, and constricted portion called the neck, on the back of which is a slight ridge for the insertion of part of the supinator muscle.
Beneath the neck, on the medial side, is an eminence, the radial tuberosity; its surface is divided into a posterior, rough portion, for the insertion of the tendon of the biceps brachii muscle, and an anterior, smooth portion, on which a bursa is interposed between the tendon and the bone.
Development
The radius is ossified from three centers: one for the body, and one for each extremity. That for the body makes its appearance near the center of the bone, during the eighth week of fetal life.
Ossification commences in the lower end between 9 and 26 months of age. The ossification center for the upper end appears by the fifth year.
The upper epiphysis fuses with the body at the age of seventeen or eighteen years, the lower about the age of twenty.
An additional center sometimes found in the radial tuberosity, appears about the fourteenth or fifteenth year.
Function
Muscle attachments
The biceps muscle inserts on the radial tuberosity of the upper extremity of the bone. The upper third of the body of the bone attaches to the supinator, the flexor digitorum superficialis, and the flexor pollicis longus muscles.
The middle third of the body attaches to the extensor ossis metacarpi pollicis, extensor primi internodii pollicis, and the pronator teres muscles.
The lower quarter of the body attaches to the pronator quadratus muscle and the tendon of the supinator longus.
Clinical significance
Radial aplasia refers to the congenital absence or shortness of the radius.
Fracture
Specific fracture types of the radius include:
Proximal radius fracture. A fracture within the capsule of the elbow joint results in the fat pad sign or "sail sign" which is a displacement of the fat pad at the elbow.
Essex-Lopresti fracture – a fracture of the radial head with concomitant dislocation of the distal radio-ulnar joint with disruption of the interosseous membrane.
Radial shaft fracture
Distal radius fracture
Galeazzi fracture – a fracture of the radius with dislocation of the distal radioulnar joint
Colles' fracture – a distal fracture of the radius with dorsal (posterior) displacement of the wrist and hand
Smith's fracture – a distal fracture of the radius with volar (ventral) displacement of the wrist and hand
Barton's fracture – an intra-articular fracture of the distal radius with dislocation of the radiocarpal joint.
History
The word radius is Latin for "ray". In the context of the radius bone, a ray can be thought of as rotating around an axis line extending diagonally from the center of the capitulum to the center of the distal ulna. While the ulna is the major contributor to the elbow joint, the radius primarily contributes to the wrist joint.
The radius is so named because it acts like the radius of a circle: it rotates around the ulna, and its far end (where it joins the bones of the hand), the styloid process of the radius, lies at the distance of the circle's radius from the ulna (the circle's center). The ulna acts as the center point of the circle because it does not move when the arm is rotated.
Other animals
In four-legged animals, the radius is the main load-bearing bone of the lower forelimb. Its structure is similar in most terrestrial tetrapods, but it may be fused with the ulna in some mammals (such as horses) and reduced or modified in animals with flippers or vestigial forelimbs.
Gallery
| Biology and health sciences | Skeletal system | Biology |
585842 | https://en.wikipedia.org/wiki/Sabot%20%28firearms%29 | Sabot (firearms) | A sabot is a supportive device used in firearm and artillery ammunition to fit or patch around a projectile, such as a bullet/slug or a flechette-like projectile (such as a kinetic energy penetrator), and keep it aligned in the center of the barrel when fired. It allows a narrower projectile with high sectional density to be fired through a barrel of much larger bore diameter with maximal accelerative transfer of kinetic energy. After leaving the muzzle, the sabot typically separates from the projectile in flight, diverting only a very small portion of the overall kinetic energy.
The sabot component in projectile design is the relatively thin, tough and deformable seal known as a driving band or obturation ring needed to trap propellant gases behind a projectile, and also keep the projectile centered in the barrel, when the outer shell of the projectile is only slightly smaller in diameter than the caliber of the barrel. Driving bands and obturators are used to seal these full-bore projectiles in the barrel because of manufacturing tolerances; there always exists some gap between the projectile outer diameter and the barrel inner diameter, usually a few thousandths of an inch; enough of a gap for high pressure gasses to slip by during firing. Driving bands and obturator rings are made from material that will deform and seal the barrel as the projectile is forced from the chamber into the barrel.
Sabots use driving bands and obturators because the same manufacturing tolerance issues exist when sealing the saboted projectile in the barrel, but the sabot itself is a more substantial structural component of the in-bore projectile configuration. In armor-piercing fin-stabilized discarding sabot (APFSDS) ammunition, for example, the sabot is a substantial body of material filling the bore diameter around the sub-caliber arrow-type flight projectile, in contrast to the very small gap sealed by a driving band or obturator to mitigate what is classically known as windage.
Design
The function of a sabot is to provide a larger bulkhead structure that fills the entire bore area between an intentionally designed sub-caliber flight projectile and the barrel, giving a larger surface area for propellant gasses to act upon than just the base of the smaller flight projectile. Efficient aerodynamic design of a flight projectile does not always accommodate efficient interior ballistic design to achieve high muzzle velocity. This is especially true for arrow-type projectiles, which are long and thin for low drag efficiency, but too thin to shoot from a gun barrel of equal diameter to achieve high muzzle velocity. The physics of interior ballistics demonstrates why the use of a sabot is advantageous to achieve higher muzzle velocity with an arrow-type projectile. Propellant gasses generate high pressure, and the larger the base area that pressure acts upon the greater the net force on that surface. Force (pressure times area) provides an acceleration to the mass of the projectile. Therefore, for a given pressure and barrel diameter, a lighter projectile can be driven from a barrel to a higher muzzle velocity than a heavier projectile. However, a lighter projectile may not fit in the barrel, because it is too thin. To make up this difference in diameter, a properly designed sabot provides less parasitic mass than if the flight projectile were made full-bore, in particular providing dramatic improvement in muzzle velocity for APDS (Armor-piercing discarding sabot) and APFSDS (Armor-piercing fin-stabilized discarding sabot) ammunition.
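As a rough numeric illustration of the pressure-times-area argument above (all figures in the sketch below are assumed for illustration only and describe no real gun), the same chamber pressure acting on the same bore area gives the lighter saboted package a much higher peak acceleration than a heavy full-bore projectile:

```python
import math

# Assumed, purely illustrative values.
pressure = 400e6            # Pa, assumed peak chamber pressure
bore_diameter = 0.120       # m, assumed 120 mm bore
area = math.pi * (bore_diameter / 2) ** 2   # bore cross-sectional area, m^2
force = pressure * area                     # N, force on the projectile base or sabot bulkhead

full_bore_mass = 20.0       # kg, assumed heavy full-bore projectile
saboted_mass = 4.0 + 3.0    # kg, assumed sub-caliber flight projectile plus parasitic sabot

print(force / full_bore_mass)   # peak acceleration, m/s^2, full-bore case
print(force / saboted_mass)     # roughly 3x higher for the lighter saboted package
```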
Seminal research on two important sabot configurations for long rod penetrators used in APFSDS ammunition, namely the "saddle-back" and "double-ramp" sabot, was performed by the US Army Ballistics Research Laboratory during the development and improvement of modern 105mm and 120mm kinetic energy APFSDS penetrators and published in 1978. This work was made possible by the significant advancement of computerized finite element methods in structural mechanics at that time, and it now represents the existing fielded technology standard (see, for example, the development of the M829 series of anti-tank projectiles, beginning with the base model M829 in the early 1980s and continuing to the 2016 M829A4 model, employing ever longer "double-ramp" sabots). Upon muzzle exit, the sabot is discarded, and the smaller flight projectile flies to the target with less drag resistance than a full-bore projectile. In this manner, very high velocity and slender, low drag projectiles can be fired more efficiently (see external ballistics and terminal ballistics). Nevertheless, the weight of the sabot represents parasitic mass that must also be accelerated to muzzle velocity, but does not contribute to the terminal ballistics of the flight projectile. For this reason, great emphasis is placed on selecting strong yet lightweight structural materials for the sabot, and configuring the sabot geometry to efficiently employ these parasitic materials at minimum weight penalty.
Sabots are made of lightweight material: usually high-strength plastic in small-caliber rifle ammunition (see SLAP, saboted light armor penetrator), shotgun and muzzleloader ammunition; aluminium, steel, and carbon fiber reinforced plastic in modern anti-tank kinetic energy ammunition; and, in classic times, wood or papier-mâché in muzzle-loading cannons. The sabot usually consists of several longitudinal pieces held in place by the cartridge case, an obturator or driving band. When the projectile is fired, the sabot blocks the gas, provides significant structural support against launch acceleration, and carries the projectile down the barrel. When the sabot reaches the end of the barrel, the shock of hitting still air pulls the parts of the sabot away from the projectile, allowing the projectile to continue in flight. Modern sabots are made from high strength aluminum and graphite fiber reinforced epoxy. They are used primarily to fire long rods of very dense materials, such as tungsten heavy alloy and depleted uranium (see for example the M829 series of anti-tank projectiles).
Sabot-type shotgun slugs were marketed in the United States from about 1985, and became legal for hunting in most U.S. states. When used with a rifled slug barrel, they are very much more accurate than normal shotgun slugs.
Types
Cup sabot
A cup sabot supports the base and rear end of a projectile, and the cup material alone can provide both structural support and barrel obturation. When the sabot and projectile exit the muzzle of the gun, air pressure alone on the sabot forces the sabot to release the projectile. Cup sabots are found typically in small arms ammunition, smooth-bore shotgun and smooth-bore muzzleloader projectiles.
Expanding cup sabot
Used typically in rifled small arms (SLAP, shotguns, and muzzleloaders), an expanding cup sabot has a one piece sabot surrounding the base and sides of a projectile, providing both structural support and obturation. Upon firing, when the sabot and projectile leave the muzzle of the gun, centrifugal force from the rotation of the projectile, due to barrel rifling, opens up the segments surrounding the projectile, rapidly presenting more surface area to air pressure, quickly releasing it.
Although the use of cup sabots of varying complexity is popular with rifle ammunition hand-loaders, in order to achieve significantly higher muzzle velocity with a lower drag, smaller diameter and lighter bullet, successful saboted projectile design has to include the resulting bullet stability characteristics. For example, simply inserting a commercially available 5.56mm (.224) bullet into a sabot that will fire it from a commercially available 7.62mm (.300) barrel may result in that 5.56mm bullet failing to achieve sufficient gyroscopic stability to fly accurately without tumbling. Achieving gyroscopic stability with longer bullets of smaller diameter requires faster rifling. Therefore, if a bullet requires at least a 1 turn in 7 inch twist (1:7 rifling) in 5.56mm, it will also require at least 1:7 rifling when saboted in 7.62mm. However, larger caliber commercial rifles generally don't need such fast twist rates; 1:10 is a readily available standard in 7.62mm. As a result, the twist rate of the larger barrel will dictate which smaller bullets can be fired with sufficient stability out of a sabot. In this example, using 1:10 rifling in 7.62mm restricts saboting to 5.56mm bullets that require 1:10 twist or slower, and this requirement will tend to restrict saboting to the shorter (and lighter) 5.56mm bullets.
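One way to make this twist-rate reasoning concrete is the Miller twist rule, a rough stability approximation popular with hand-loaders; the sketch below uses assumed dimensions for a long .224 bullet and is illustrative only, not load data. A stability factor above roughly 1.4-1.5 is usually treated as fully stable.

```python
def miller_stability(mass_grains, diameter_in, length_in, twist_in_per_turn):
    """Approximate gyroscopic stability factor via the Miller twist rule."""
    l = length_in / diameter_in              # bullet length in calibers
    t = twist_in_per_turn / diameter_in      # twist in calibers per turn
    return 30.0 * mass_grains / (t ** 2 * diameter_in ** 3 * l * (1.0 + l ** 2))

# Assumed long, heavy .224 (5.56 mm) match-style bullet.
mass, dia, length = 77.0, 0.224, 1.00        # grains, inches, inches

print(miller_stability(mass, dia, length, 7.0))    # native 1:7 barrel: comfortably stable
print(miller_stability(mass, dia, length, 10.0))   # saboted in a 1:10 barrel: marginal at best
```

Because the stability factor falls with the square of the twist length, the same bullet that is well stabilized by 1:7 rifling drops below the usual stability threshold in a 1:10 barrel.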
Base sabot
A base sabot has a one piece base which supports the bottom of the projectile, and separate pieces that surround the sides of the projectile and center it. The base sabot can have better and cleaner sabot/projectile separation than cup or expanding cup sabots for small arms ammunition, but may be more expensive to manufacture and assemble.
In larger caliber APDS ammunition, based on the cup, expanding cup, and base sabot concepts, significantly more complex assemblies are required.
Spindle sabot
A spindle sabot uses a set of at least two and upwards of four matched longitudinal rings or "petals" which have a center section in contact with a long arrow-type projectile; a front section or "bore-rider" which centers that projectile in the barrel and provides an air scoop to assist in sabot separation upon muzzle exit; and a rear section which centers the projectile, provides a structural "bulkhead", and seals propellant gases with an obturator ring around the outside diameter. Spindle sabots are the standard type used in modern large caliber armor-piercing ammunition; three-petal spindle-type sabots are a typical configuration. The "double-ramp" and "saddle-back" sabots used on modern APFSDS ammunition are a form of spindle sabot.
Shotgun slugs often use a cast plastic sabot similar to the spindle sabot. Shotgun sabots in general extend the full length of the projectile and are designed to be used more effectively in rifled barrels.
Ring sabot
A ring sabot uses the rear fins on a long rod projectile to help center the projectile and ride the bore, and the multi-petal sabot forms only a single bulkhead ring around the projectile near the front, with an obturator sealing gases from escaping past it, and centering the front of the projectile. The former Soviet Union favored armor-piercing sabot projectiles using ring sabots, which performed acceptably for that era, manufactured from high strength steel for both the long rod penetrator and ring sabot. The strength of the steel ring was sufficient to withstand launch accelerations without the need for sabot ramps to also support the steel flight projectile.
| Technology | Ammunition | null |
586353 | https://en.wikipedia.org/wiki/Tailed%20frog | Tailed frog | The tailed frogs are two species of frogs in the genus Ascaphus, the only genus in the family Ascaphidae. The "tail" in the name is actually an extension of the male cloaca. The tail is one of two distinctive anatomical features adapting the species to life in fast-flowing streams. These are the only North American frog species that reproduce by internal fertilization. Ascaphidae is among the most primitive known families of frogs.
Its scientific name means 'without a spade', from the privative prefix a- and the Ancient Greek word for 'spade, shovel', referring to the metatarsal spade, which these frogs do not have.
Taxonomy
Until 2001, the genus was believed to be monotypic, the single species being the tailed frog (Ascaphus truei Stejneger, 1899). However, in that year, Nielson, Lohman, and Sullivan published evidence that promoted the Rocky Mountain tailed frog (Ascaphus montanus) from a subspecies to its own species. Since then, the former species has been formally called the coastal tailed frog.
The genus "Ascaphus", through mtDNA comparisons, has been grouped into a clade with the genus "Leiopelma" creating a sister taxon to all modern anurans.
General morphology
The existence of the visible "tail" appendage makes this frog family distinct from all other frogs. It is usually classified in the ancient frog suborder Archaeobatrachia and further organized into a basal clade with Leiopelma that is considered a sister taxon to all other frogs.
The "tail" is found only in males, and is actually part of the cloaca, used to insert sperm into the female during mating. This anatomical feature improves breeding success by minimizing loss of sperm in the turbulent, fast-flowing streams inhabited by this species. Thus, the tailed frogs exhibit internal fertilisation, rather than the external fertilisation found in other frogs.
Ascaphidae and Leiopelmatidae are more primitive than almost all other frogs in having nine amphicoelous vertebrae and a caudalipuboischiotibialis tail-wagging muscle in adults. Amphicoelous vertebrae are a type of vertebra seen mostly in fish and in early terrestrial tetrapod fossils (such as fossil salamanders and fossil frogs). The joints in amphicoelous vertebrae allow for significant lateral movement of the vertebral column, seen most clearly when fish use their tail to generate propulsive force. An additional plesiomorphy is the presence of free ribs in adults, a characteristic only present in the basal group Archaeobatrachia.
Ascaphids lack the ability to vocalise and are small frogs, found in steep, fast-flowing streams in Montana, Idaho, Washington, Oregon, and northern California in the northwest United States, and in southeastern British Columbia (Rocky Mountain tailed frog) and coastal British Columbia (coastal tailed frog).
Unique to the tailed frogs is the ability to secrete a series of antimicrobial peptides called ascaphins. These peptides share minimal genetic characteristics with other peptides secreted by frogs, yet show some similarities with antibacterial peptides found in African scorpions Pandinus imperator and Opistophthalmus carinatus. The ascaphin peptides are secreted through the skin and imperative in fighting bacteria such as E. coli and S. aureus.
The tailed frogs share certain characteristics with Leiopelma, a genus of primitive frogs native to New Zealand, with which they form a phylogenetic sister taxon to all other anurans.
Mating practices
When attempting to mate, a male will lunge at the female, wrapping a forelimb around her to secure her, initially in an inguinal amplexus formation (the male wraps his digits around the female anterior to the pelvic region, placing his head on her back close to her rear) and then in a ventral amplexus formation (the female is flipped over so that the male and female venters face each other). From here, the male inserts the "tail" into the female and squeezes her to gain leverage before thrusting. During this process the female is relatively still, occasionally kicking during the insertion process.
In some situations there is male-male competition for the female. Both males then compete to enter the amplexus formation until one establishes a better hold on the female and expels the other male from the breeding process. Usually the larger male is more likely to succeed.
General habitat
The habitat of the tailed frog is cold, fast-moving streams with cobblestone bottoms. They are mostly aquatic, but adults may emerge during cool, wet conditions to forage terrestrially. Breeding season lasts from May through September, and females deposit their eggs in strings under rocks in fast-moving streams. Larvae take one to four years to metamorphose in the cool, fast-moving mountain streams. The amount of cobbles and fines (sand and similarly sized fine particles) in streams have been shown to be good indicators of tadpole abundance, with tadpole abundance being inversely proportional to concentration of fines and proportional to concentration of cobbles.
The thermal tolerance range in adults is exceptionally narrow relative to other North American anurans, with eggs rarely found above 20 °C and adults and larvae regularly migrating along microhabitats to reach temperatures below 20 °C whenever possible. It would appear that they prefer temperatures of 16 °C and below. Eggs develop best at temperatures between 5 and 13.5 °C.
Because of this very narrow thermal tolerance, adults may exhibit philopatry where temperatures are stable and low. However, it has also been hypothesized that they may migrate to colder waters in autumn. Movements and migrational habits in Ascaphus have not been well documented, preventing any conclusive statements on migratory behavior or philopatry from being made with confidence.
Adults forage primarily terrestrially along stream banks, but also occasionally feed underwater. A wide variety of food items is taken, including both aquatic and terrestrial larval and adult insects, other arthropods (especially spiders), and snails. Tadpoles consume small quantities of filamentous green algae and desmids. Large quantities of conifer pollen are consumed seasonally by tadpoles.
During the day, adults seek cover under submerged substrates in the stream, or occasionally under similar surface objects close to the stream. Individuals have also been found in crevices in spray-drenched cliff walls near waterfalls. During winter, individuals are less active, especially inland, and appear to retreat beneath large logs and boulders. Tadpoles require cool streams with large, smooth-surfaced stones. Tadpoles probably spend most of their time attached to such substrates by a large oral sucker. The large, sucker-like mouth parts of the tadpoles are a second distinctive feature of the species, enabling survival in turbulent water unsuitable for other frogs. They prefer turbulent water to smooth, swiftly flowing water.
| Biology and health sciences | Frogs and toads | Animals |
586357 | https://en.wikipedia.org/wiki/Artificial%20general%20intelligence | Artificial general intelligence | Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.
Creating AGI is a primary goal of AI research and of companies such as OpenAI and Meta. A 2020 survey identified 72 active AGI research and development projects across 37 countries.
The timeline for achieving AGI remains a subject of ongoing debate among researchers and experts. As of 2023, some argue that it may be possible in years or decades; others maintain it might take a century or longer; a minority believe it may never be achieved; and another minority claims that it is already here. Notable AI researcher Geoffrey Hinton has expressed concerns about the rapid progress towards AGI, suggesting it could be achieved sooner than many expect.
There is debate on the exact definition of AGI and regarding whether modern large language models (LLMs) such as GPT-4 are early forms of AGI. AGI is a common topic in science fiction and futures studies.
Contention exists over whether AGI represents an existential risk. Many experts on AI have stated that mitigating the risk of human extinction posed by AGI should be a global priority. Others find the development of AGI to be too remote to present such a risk.
Terminology
AGI is also known as strong AI, full AI, human-level AI, human-level intelligent AI, or general intelligent action.
Some academic sources reserve the term "strong AI" for computer programs that experience sentience or consciousness. In contrast, weak AI (or narrow AI) is able to solve one specific problem but lacks general cognitive abilities. Some academic sources use "weak AI" to refer more broadly to any programs that neither experience consciousness nor have a mind in the same sense as humans.
Related concepts include artificial superintelligence and transformative AI. An artificial superintelligence (ASI) is a hypothetical type of AGI that is much more generally intelligent than humans, while the notion of transformative AI relates to AI having a large impact on society, for example, similar to the agricultural or industrial revolution.
A framework for classifying AGI in levels was proposed in 2023 by Google DeepMind researchers. They define five levels of AGI: emerging, competent, expert, virtuoso, and superhuman. For example, a competent AGI is defined as an AI that outperforms 50% of skilled adults in a wide range of non-physical tasks, and a superhuman AGI (i.e. an artificial superintelligence) is similarly defined but with a threshold of 100%. They consider large language models like ChatGPT or LLaMA 2 to be instances of emerging AGI.
Characteristics
Various popular definitions of intelligence have been proposed. One of the leading proposals is the Turing test. However, there are other well-known definitions, and some researchers disagree with the more popular approaches.
Intelligence traits
Researchers generally hold that intelligence is required to do all of the following:
reason, use strategy, solve puzzles, and make judgments under uncertainty
represent knowledge, including common sense knowledge
plan
learn
communicate in natural language
if necessary, integrate these skills in completion of any given goal
Many interdisciplinary approaches (e.g. cognitive science, computational intelligence, and decision making) consider additional traits such as imagination (the ability to form novel mental images and concepts) and autonomy.
Computer-based systems that exhibit many of these capabilities exist (e.g. see computational creativity, automated reasoning, decision support system, robot, evolutionary computation, intelligent agent). There is debate about whether modern AI systems possess them to an adequate degree.
Physical traits
Other capabilities are considered desirable in intelligent systems, as they may affect intelligence or aid in its expression. These include:
the ability to sense (e.g. see, hear, etc.), and
the ability to act (e.g. move and manipulate objects, change location to explore, etc.)
This includes the ability to detect and respond to hazards.
Although the ability to sense (e.g. see, hear, etc.) and the ability to act (e.g. move and manipulate objects, change location to explore, etc.) can be desirable for some intelligent systems, these physical capabilities are not strictly required for an entity to qualify as AGI—particularly under the thesis that large language models (LLMs) may already be or become AGI. Even from a less optimistic perspective on LLMs, there is no firm requirement for an AGI to have a human-like form; being a silicon-based computational system is sufficient, provided it can process input (language) from the external world in place of human senses. This interpretation aligns with the understanding that AGI has never been prescribed a particular physical embodiment and thus does not demand a capacity for locomotion or traditional "eyes and ears".
Tests for human-level AGI
Several tests meant to confirm human-level AGI have been considered, including:
The Turing Test (Turing)
Proposed by Alan Turing in his 1950 paper "Computing Machinery and Intelligence," this test involves a human judge engaging in natural language conversations with both a human and a machine designed to generate human-like responses. The machine passes the test if it can convince the judge it is human a significant fraction of the time. Turing proposed this as a practical measure of machine intelligence, focusing on the ability to produce human-like responses rather than on the internal workings of the machine.
Turing originally framed the test in terms of an "imitation game" between an interrogator and two unseen respondents.
In 2014, a chatbot named Eugene Goostman, designed to imitate a 13-year-old Ukrainian boy, reportedly passed a Turing Test event by convincing 33% of judges that it was human. However, this claim was met with significant skepticism from the AI research community, who questioned the test's implementation and its relevance to AGI.
More recently, a 2024 study suggested that GPT-4 was identified as human 54% of the time in a randomized, controlled version of the Turing Test—surpassing older chatbots like ELIZA while still falling behind actual humans (67%).
The Robot College Student Test (Goertzel)
A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree. LLMs can now pass university degree-level exams without even attending the classes.
The Employment Test (Nilsson)
A machine performs an economically important job at least as well as humans in the same job. AIs are now replacing humans in many roles as varied as fast food and marketing.
The Ikea test (Marcus)
Also known as the Flat Pack Furniture Test. An AI views the parts and instructions of an Ikea flat-pack product, then controls a robot to assemble the furniture correctly.
The Coffee Test (Wozniak)
A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons. This has not yet been completed.
The Modern Turing Test (Suleyman)
An AI model is given $100,000 and has to obtain $1 million.
AI-complete problems
A problem is informally called "AI-complete" or "AI-hard" if it is believed that in order to solve it, one would need to implement AGI, because the solution is beyond the capabilities of a purpose-specific algorithm.
There are many problems that have been conjectured to require general intelligence to solve as well as humans. Examples include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem. Even a specific task like translation requires a machine to read and write in both languages, follow the author's argument (reason), understand the context (knowledge), and faithfully reproduce the author's original intent (social intelligence). All of these problems need to be solved simultaneously in order to reach human-level machine performance.
However, many of these tasks can now be performed by modern large language models. According to Stanford University's 2024 AI index, AI has reached human-level performance on many benchmarks for reading comprehension and visual reasoning.
History
Classical AI
Modern AI research began in the mid-1950s. The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."
Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000, who embodied what AI researchers believed they could create by the year 2001. AI pioneer Marvin Minsky was a consultant on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time. He said in 1967, "Within a generation... the problem of creating 'artificial intelligence' will substantially be solved".
Several classical AI projects, such as Doug Lenat's Cyc project (that began in 1984), and Allen Newell's Soar project, were directed at AGI.
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI". In the early 1980s, Japan's Fifth Generation Computer Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation". In response to this and the success of expert systems, both industry and government pumped money into the field. However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled. For the second time in 20 years, AI researchers who predicted the imminent achievement of AGI had been mistaken. By the 1990s, AI researchers had a reputation for making vain promises. They became reluctant to make predictions at all and avoided mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]".
Narrow AI research
In the 1990s and early 21st century, mainstream AI achieved commercial success and academic respectability by focusing on specific sub-problems where AI can produce verifiable results and commercial applications, such as speech recognition and recommendation algorithms. These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is heavily funded in both academia and industry. Development in this field was considered an emerging trend, and a mature stage was expected to be reached in more than 10 years.
At the turn of the century, many mainstream AI researchers hoped that strong AI could be developed by combining programs that solve various sub-problems. Hans Moravec wrote in 1988: I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real-world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts.
However, even at the time, this was disputed. For example, Stevan Harnad of Princeton University concluded his 1990 paper on the symbol grounding hypothesis by stating: The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer).
Modern artificial general intelligence research
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud in a discussion of the implications of fully automated military production and operations. A mathematical formalism of AGI was proposed by Marcus Hutter in 2000. Named AIXI, the proposed AGI agent maximises "the ability to satisfy goals in a wide range of environments". This type of AGI, characterized by the ability to maximise a mathematical definition of intelligence rather than exhibit human-like behaviour, was also called universal artificial intelligence.
The term AGI was re-introduced and popularized by Shane Legg and Ben Goertzel around 2002. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009 by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010 and 2011 at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course on AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers.
A small number of computer scientists are active in AGI research, and many contribute to a series of AGI conferences. However, increasingly many researchers are interested in open-ended learning, which is the idea of allowing AI to continuously learn and innovate as humans do.
Feasibility
As of 2023, the development and potential achievement of AGI remains a subject of intense debate within the AI community. While traditional consensus held that AGI was a distant goal, recent advancements have led some researchers and industry figures to claim that early forms of AGI may already exist. AI pioneer Herbert A. Simon speculated in 1965 that "machines will be capable, within twenty years, of doing any work a man can do". This prediction failed to come true. Microsoft co-founder Paul Allen believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition". Writing in The Guardian, roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.
A further challenge is the lack of clarity in defining what intelligence entails. Does it require consciousness? Must it display the ability to set goals as well as pursue them? Is it purely a matter of scale such that if model sizes increase sufficiently, intelligence will emerge? Are facilities such as planning, reasoning, and causal understanding required? Does intelligence require explicitly replicating the brain and its specific faculties? Does it require emotions?
Most AI researchers believe strong AI can be achieved in the future, but some thinkers, like Hubert Dreyfus and Roger Penrose, deny the possibility of achieving strong AI. John McCarthy is among those who believe human-level AI will be accomplished, but that the present level of progress is such that a date cannot accurately be predicted. AI experts' views on the feasibility of AGI wax and wane. Four polls conducted in 2012 and 2013 suggested that the median estimate among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead. Further considerations of current AGI progress can be found in the section Tests for confirming human-level AGI above.
A report by Stuart Armstrong and Kaj Sotala of the Machine Intelligence Research Institute found that "over [a] 60-year time frame there is a strong bias towards predicting the arrival of human-level AI as between 15 and 25 years from the time the prediction was made". They analyzed 95 predictions made between 1950 and 2012 on when human-level AI will come about.
In 2023, Microsoft researchers published a detailed evaluation of GPT-4. They concluded: "Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system." Another study in 2023 reported that GPT-4 outperforms 99% of humans on the Torrance tests of creative thinking.
Blaise Agüera y Arcas and Peter Norvig wrote in 2023 that a significant level of general intelligence has already been achieved with frontier models. They wrote that reluctance to accept this view stems from four main sources: a "healthy skepticism about metrics for AGI", an "ideological commitment to alternative AI theories or techniques", a "devotion to human (or biological) exceptionalism", or a "concern about the economic implications of AGI".
2023 also marked the emergence of large multimodal models (large language models capable of processing or generating multiple modalities such as text, audio, and images).
In 2024, OpenAI released o1-preview, the first of a series of models that "spend more time thinking before they respond". According to Mira Murati, this ability to think before responding represents a new, additional paradigm. It improves model outputs by spending more computing power when generating the answer, whereas the model scaling paradigm improves outputs by increasing the model size, training data and training compute power.
An OpenAI employee, Vahid Kazemi, claimed in 2024 that the company had achieved AGI, stating, "In my opinion, we have already achieved AGI and it’s even more clear with O1." Kazemi clarified that while the AI is not yet "better than any human at any task", it is "better than most humans at most tasks." He also addressed criticisms that large language models (LLMs) merely follow predefined patterns, comparing their learning process to the scientific method of observing, hypothesizing, and verifying. These statements have sparked debate, as they rely on a broad and unconventional definition of AGI—traditionally understood as AI that matches human intelligence across all domains. Critics argue that, while OpenAI's models demonstrate remarkable versatility, they may not fully meet this standard. Notably, Kazemi's comments came shortly after OpenAI removed "AGI" from the terms of its partnership with Microsoft, prompting speculation about the company’s strategic intentions.
Timescales
Progress in artificial intelligence has historically gone through periods of rapid advance separated by periods when progress appeared to stop. Each hiatus ended with fundamental advances in hardware, software, or both that created space for further progress. For example, the computer hardware available in the twentieth century was not sufficient to implement deep learning, which requires large numbers of GPUs.
In the introduction to his 2006 book, Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century. At the time, the consensus in the AGI research community seemed to be that the timeline discussed by Ray Kurzweil in 2005 in The Singularity is Near (i.e. between 2015 and 2045) was plausible. Mainstream AI researchers have given a wide range of opinions on whether progress will be this rapid. A 2012 meta-analysis of 95 such opinions found a bias towards predicting that the onset of AGI would occur within 16–26 years for modern and historical predictions alike. That paper has been criticized for how it categorized opinions as expert or non-expert.
In 2012, Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton developed a neural network called AlexNet, which won the ImageNet competition with a top-5 test error rate of 15.3%, significantly better than the second-best entry's rate of 26.3% (the traditional approach used a weighted sum of scores from different pre-defined classifiers). AlexNet was regarded as the initial ground-breaker of the current deep learning wave.
In 2017, researchers Feng Liu, Yong Shi, and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI, Apple's Siri, and others. At the maximum, these AIs reached an IQ value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests were carried out in 2014, with the IQ score reaching a maximum value of 27.
In 2020, OpenAI developed GPT-3, a language model capable of performing many diverse tasks without specific training. According to Gary Grossman in a VentureBeat article, while there is consensus that GPT-3 is not an example of AGI, it is considered by some to be too advanced to be classified as a narrow AI system.
In the same year, Jason Rohrer used his GPT-3 account to develop a chatbot, and provided a chatbot-developing platform called "Project December". OpenAI asked for changes to the chatbot to comply with their safety guidelines; Rohrer disconnected Project December from the GPT-3 API.
In 2022, DeepMind developed Gato, a "general-purpose" system capable of performing more than 600 different tasks.
In 2023, Microsoft Research published a study on an early version of OpenAI's GPT-4, contending that it exhibited more general intelligence than previous AI models and demonstrated human-level performance in tasks spanning multiple domains, such as mathematics, coding, and law. This research sparked a debate on whether GPT-4 could be considered an early, incomplete version of artificial general intelligence, emphasizing the need for further exploration and evaluation of such systems.
In 2023, the AI researcher Geoffrey Hinton stated that recent progress had been much faster than he expected, leading him to sharply shorten his estimate of when machines might surpass human intelligence, which he had previously put at 30 to 50 years or more away.
In May 2023, Demis Hassabis similarly said that "The progress in the last few years has been pretty incredible", and that he sees no reason why it would slow down, expecting AGI within a decade or even a few years. In March 2024, Nvidia's CEO, Jensen Huang, stated his expectation that within five years, AI would be capable of passing any test at least as well as humans. In June 2024, the AI researcher Leopold Aschenbrenner, a former OpenAI employee, estimated AGI by 2027 to be "strikingly plausible".
Whole brain emulation
While the development of transformer models like those used in ChatGPT is considered the most promising path to AGI, whole brain emulation can serve as an alternative approach. With whole brain simulation, a brain model is built by scanning and mapping a biological brain in detail, and then copying and simulating it on a computer system or another computational device. The simulation model must be sufficiently faithful to the original, so that it behaves in practically the same way as the original brain. Whole brain emulation is a type of brain simulation that is discussed in computational neuroscience and neuroinformatics, and for medical research purposes. It has been discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the computing power required to emulate it.
Early estimates
For low-level brain simulation, a very powerful cluster of computers or GPUs would be required, given the enormous quantity of synapses within the human brain. Each of the 10^11 (one hundred billion) neurons has on average 7,000 synaptic connections (synapses) to other neurons. The brain of a three-year-old child has about 10^15 synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10^14 to 5×10^14 synapses (100 to 500 trillion). An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10^14 (100 trillion) synaptic updates per second (SUPS).
In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10^16 computations per second (cps). (For comparison, if a "computation" was equivalent to one "floating-point operation" – a measure used to rate current supercomputers – then 10^16 "computations" would be equivalent to 10 petaFLOPS, achieved in 2011, while 10^18 was achieved in 2022.) He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.
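The arithmetic behind these estimates is simple enough to check directly. The sketch below is a rough back-of-the-envelope calculation using only the figures quoted above (it is illustrative, not a model of the brain):

```python
# Back-of-the-envelope check of the figures quoted above.
neurons = 1e11              # ~10^11 neurons
synapses_per_neuron = 7e3   # ~7,000 synapses each, on average

total_synapses = neurons * synapses_per_neuron
print(f"estimated synapses: {total_synapses:.1e}")   # ~7e14, same order as the 1e14-5e14 adult range

# Kurzweil's 1997 hardware figure, re-expressed in supercomputer units,
# assuming one "computation" corresponds to one floating-point operation.
cps = 1e16
print(f"{cps:.0e} cps is about {cps / 1e15:.0f} petaFLOPS")
```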
Current research
The Human Brain Project, an EU-funded initiative active from 2013 to 2023, has developed a particularly detailed and publicly accessible atlas of the human brain. In 2023, researchers from Duke University performed a high-resolution scan of a mouse brain.
Criticisms of simulation-based approaches
The artificial neuron model assumed by Kurzweil and used in many current artificial neural network implementations is simple compared with biological neurons. A brain simulation would likely have to capture the detailed cellular behaviour of biological neurons, presently understood only in broad outline. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition, the estimates do not account for glial cells, which are known to play a role in cognitive processes.
A fundamental criticism of the simulated brain approach derives from embodied cognition theory, which asserts that human embodiment is an essential aspect of human intelligence and is necessary to ground meaning. If this theory is correct, any fully functional brain model will need to encompass more than just the neurons (e.g., a robotic body). Goertzel proposes virtual embodiment (such as in metaverses like Second Life) as an option, but it is unknown whether this would be sufficient.
Philosophical perspective
"Strong AI" as defined in philosophy
In 1980, philosopher John Searle coined the term "strong AI" as part of his Chinese room argument. He proposed a distinction between two hypotheses about artificial intelligence:
Strong AI hypothesis: An artificial intelligence system can have "a mind" and "consciousness".
Weak AI hypothesis: An artificial intelligence system can (only) act like it thinks and has a mind and consciousness.
The first one he called "strong" because it makes a stronger statement: it assumes something special has happened to the machine that goes beyond those abilities that we can test. The behaviour of a "weak AI" machine would be precisely identical to a "strong AI" machine, but the latter would also have subjective conscious experience. This usage is also common in academic AI research and textbooks.
In contrast to Searle and mainstream AI, some futurists such as Ray Kurzweil use the term "strong AI" to mean "human level artificial general intelligence". This is not the same as Searle's strong AI, unless it is assumed that consciousness is necessary for human-level AGI. Academic philosophers such as Searle do not believe that is the case, and to most artificial intelligence researchers the question is out-of-scope.
Mainstream AI is most interested in how a program behaves. According to Russell and Norvig, "as long as the program works, they don't care if you call it real or a simulation." If the program can behave as if it has a mind, then there is no need to know if it actually has a mind – indeed, there would be no way to tell. For AI research, Searle's "weak AI hypothesis" is equivalent to the statement "artificial general intelligence is possible". Thus, according to Russell and Norvig, "most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis." Thus, for academic AI research, "Strong AI" and "AGI" are two different things.
Consciousness
Consciousness can have various meanings, and some aspects play significant roles in science fiction and the ethics of artificial intelligence:
Sentience (or "phenomenal consciousness"): The ability to "feel" perceptions or emotions subjectively, as opposed to the ability to reason about perceptions. Some philosophers, such as David Chalmers, use the term "consciousness" to refer exclusively to phenomenal consciousness, which is roughly equivalent to sentience. Determining why and how subjective experience arises is known as the hard problem of consciousness. Thomas Nagel explained in 1974 that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e., has consciousness) but a toaster does not. In 2022, a Google engineer claimed that the company's AI chatbot, LaMDA, had achieved sentience, though this claim was widely disputed by other experts.
Self-awareness: To have conscious awareness of oneself as a separate individual, especially to be consciously aware of one's own thoughts. This is opposed to simply being the "subject of one's thought"—an operating system or debugger is able to be "aware of itself" (that is, to represent itself in the same way it represents everything else)—but this is not what people typically mean when they use the term "self-awareness".
These traits have a moral dimension. AI sentience would give rise to concerns of welfare and legal protection, similarly to animals. Other aspects of consciousness related to cognitive capabilities are also relevant to the concept of AI rights. Figuring out how to integrate advanced AI with existing legal and social frameworks is an emergent issue.
Benefits
AGI could have a wide variety of applications. If oriented towards such goals, AGI could help mitigate various problems in the world such as hunger, poverty and health problems.
AGI could improve productivity and efficiency in most jobs. For example, in public health, AGI could accelerate medical research, notably against cancer. It could take care of the elderly, and democratize access to rapid, high-quality medical diagnostics. It could offer fun, cheap and personalized education. The need to work to subsist could become obsolete if the wealth produced is properly redistributed. This also raises the question of the place of humans in a radically automated society.
AGI could also help to make rational decisions, and to anticipate and prevent disasters. It could also help to reap the benefits of potentially catastrophic technologies such as nanotechnology or climate engineering, while avoiding the associated risks. If an AGI's primary goal is to prevent existential catastrophes such as human extinction (which could be difficult if the Vulnerable World Hypothesis turns out to be true), it could take measures to drastically reduce the risks while minimizing the impact of these measures on our quality of life.
Risks
Existential risks
AGI may represent multiple types of existential risk, which are risks that threaten "the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development". The risk of human extinction from AGI has been the topic of many debates, but there is also the possibility that the development of AGI would lead to a permanently flawed future. Notably, it could be used to spread and preserve the set of values of whoever develops it. If humanity still has moral blind spots similar to slavery in the past, AGI might irreversibly entrench them, preventing moral progress. Furthermore, AGI could facilitate mass surveillance and indoctrination, which could be used to create a stable repressive worldwide totalitarian regime. There is also a risk for the machines themselves. If machines that are sentient or otherwise worthy of moral consideration are mass created in the future, engaging in a civilizational path that indefinitely neglects their welfare and interests could be an existential catastrophe. Considering how much AGI could improve humanity's future and help reduce other existential risks, Toby Ord calls these existential risks "an argument for proceeding with due caution", not for "abandoning AI".
Risk of loss of control and human extinction
The thesis that AI poses an existential risk for humans, and that this risk needs more attention, is controversial but has been endorsed in 2023 by many public figures, AI researchers and CEOs of AI companies such as Elon Musk, Bill Gates, Geoffrey Hinton, Yoshua Bengio, Demis Hassabis and Sam Altman.
In 2014, Stephen Hawking criticized what he saw as widespread indifference to the risks posed by superintelligent AI.
The potential fate of humanity has sometimes been compared to the fate of gorillas threatened by human activities. The comparison states that greater intelligence allowed humanity to dominate gorillas, which are now vulnerable in ways that they could not have anticipated. As a result, the gorilla has become an endangered species, not out of malice, but simply as collateral damage from human activities.
The skeptic Yann LeCun considers that AGIs will have no desire to dominate humanity and that we should be careful not to anthropomorphize them and interpret their intents as we would for humans. He said that people won't be "smart enough to design super-intelligent machines, yet ridiculously stupid to the point of giving it moronic objectives with no safeguards". On the other side, the concept of instrumental convergence suggests that, almost whatever their goals, intelligent agents will have reasons to try to survive and acquire more power as intermediary steps to achieving these goals, and that this does not require having emotions.
Many scholars who are concerned about existential risk advocate for more research into solving the "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximise the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence? Solving the control problem is complicated by the AI arms race (which could lead to a race to the bottom of safety precautions in order to release products before competitors), and the use of AI in weapon systems.
The thesis that AI can pose existential risk also has detractors. Skeptics usually say that AGI is unlikely in the short-term, or that concerns about AGI distract from other issues related to current AI. Former Google fraud czar Shuman Ghosemajumder considers that for many people outside of the technology industry, existing chatbots and LLMs are already perceived as though they were AGI, leading to further misunderstanding and fear.
Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God. Some researchers believe that the communication campaigns on AI existential risk by certain AI groups (such as OpenAI, Anthropic, DeepMind, and Conjecture) may be an attempt at regulatory capture and to inflate interest in their products.
In 2023, the CEOs of Google DeepMind, OpenAI and Anthropic, along with other industry leaders and researchers, issued a joint statement asserting that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Mass unemployment
Researchers from OpenAI estimated that "80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while around 19% of workers may see at least 50% of their tasks impacted". They consider office workers to be the most exposed, for example mathematicians, accountants or web designers. Compared with current systems, AGI could have greater autonomy and a better ability to make decisions, to interface with other computer tools, and to control robotic bodies.
According to Stephen Hawking, the effect of automation on quality of life will depend on how the resulting wealth is redistributed.
Elon Musk considers that the automation of society will require governments to adopt a universal basic income.
| Technology | Artificial intelligence concepts | null |
586727 | https://en.wikipedia.org/wiki/Alytidae | Alytidae | The Alytidae are a family of primitive frogs. Their common name is painted frogs or midwife toads. Most are endemic to Europe, but three species occur in northwest Africa, and a species formerly thought to be extinct is found in Israel.
This family is also known as Discoglossidae, but the older name Alytidae has priority and is now recognized by major reference works. Some researchers, though, suggest that Alytes and Discoglossus are different enough to be treated as belonging to separate families, implying resurrection of the Discoglossidae. The term "discoglossid" has also been used to refer to many primitive fossil frogs that share plesiomorphic (ancestral) similarities to alytids, but that are probably not closely related.
Genera and species
The family contains three extant genera, Alytes, Discoglossus, and Latonia. The first is somewhat toad-like and can often be found on land. The second is smoother and more frog-like, preferring the water. The third genus was until recently considered extinct, and is represented by the recently rediscovered Hula painted frog. All of the species have pond-dwelling tadpoles.
The genera Bombina and Barbourula also used to be under this family, but have now been moved to the Bombinatoridae.
Extant genera
Extinct genera
Family Alytidae
Genus †Enneabatrachus (prehistoric)
†Enneabatrachus hechti
Genus †Aralobatrachus (prehistoric)
†Aralobatrachus robustus
Genus †Callobatrachus (prehistoric)
†Callobatrachus sanyanensis
Genus †Bakonybatrachus (prehistoric)
†Bakonybatrachus fedori
Genus †Eodiscoglossus (prehistoric)
†Eodiscoglossus oxoniensis
†Eodiscoglossus santonjae
| Biology and health sciences | Frogs and toads | Animals |
588886 | https://en.wikipedia.org/wiki/Chlorate | Chlorate | Chlorate is the common name of the ClO3− anion, whose chlorine atom is in the +5 oxidation state. The term can also refer to chemical compounds containing this anion, with chlorates being the salts of chloric acid. Other oxyanions of chlorine can be named "chlorate" followed by a Roman numeral in parentheses denoting the oxidation state of chlorine: e.g., the ion commonly called perchlorate can also be called chlorate(VII).
As predicted by valence shell electron pair repulsion theory, chlorate anions have trigonal pyramidal structures.
Chlorates are powerful oxidizers and should be kept away from organics or easily oxidized materials. Mixtures of chlorate salts with virtually any combustible material (sugar, sawdust, charcoal, organic solvents, metals, etc.) will readily deflagrate. Chlorates were once widely used in pyrotechnics for this reason, though their use has fallen due to their instability. Most pyrotechnic applications that formerly used chlorates now use the more stable perchlorates instead.
Structure and bonding
The chlorate ion cannot be satisfactorily represented by just one Lewis structure, since all the Cl–O bonds are the same length (1.49 Å in potassium chlorate), and the chlorine atom is hypervalent. Instead, it is often thought of as a hybrid of multiple resonance structures:
Preparation
Laboratory
Metal chlorates can be prepared by adding chlorine to hot metal hydroxides like KOH:
3 Cl2 + 6 KOH → 5 KCl + KClO3 + 3 H2O
In this reaction, chlorine undergoes disproportionation, both reduction and oxidation. Chlorine, oxidation number 0, forms chloride Cl− (oxidation number −1) and chlorate(V) (oxidation number +5). The reaction of cold aqueous metal hydroxides with chlorine produces the chloride and hypochlorite (oxidation number +1) instead.
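A quick way to see that the equation above is balanced as a disproportionation is to do the oxidation-number bookkeeping explicitly. The sketch below is illustrative only (hard-coded counts for this one reaction, not a general equation balancer):

```python
# Disproportionation of chlorine in hot KOH: 3 Cl2 + 6 KOH -> 5 KCl + KClO3 + 3 H2O
cl_atoms_in = 3 * 2        # six Cl atoms enter at oxidation number 0
cl_as_chloride = 5         # Cl at -1 in KCl (reduced)
cl_as_chlorate = 1         # Cl at +5 in KClO3 (oxidized)

electrons_gained = cl_as_chloride * (0 - (-1))   # 5 electrons gained on reduction
electrons_lost = cl_as_chlorate * (5 - 0)        # 5 electrons lost on oxidation

assert cl_atoms_in == cl_as_chloride + cl_as_chlorate   # chlorine is conserved
assert electrons_gained == electrons_lost               # redox electrons balance
print(electrons_gained, electrons_lost)                 # 5 5
```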
Industrial
The industrial-scale synthesis for sodium chlorate starts from an aqueous sodium chloride solution (brine) rather than chlorine gas. If the electrolysis equipment allows for the mixing of the chlorine and the sodium hydroxide, then the disproportionation reaction described above occurs. The heating of the reactants to 50–70 °C is performed by the electrical power used for electrolysis.
Natural occurrence
A 2010 study discovered the presence of natural chlorate deposits around the world, with relatively high concentrations found in arid and hyper-arid regions. Chlorate was also measured in rainfall samples, in amounts similar to perchlorate. It is suspected that chlorate and perchlorate may share a common natural formation mechanism and could be part of the chlorine biogeochemical cycle. From a microbial standpoint, the presence of natural chlorate could also explain why a variety of microorganisms are capable of reducing chlorate to chloride. Further, the evolution of chlorate reduction may be an ancient phenomenon, as all perchlorate-reducing bacteria described to date also utilize chlorate as a terminal electron acceptor. It should be clearly stated that no chlorate-dominant minerals are currently known; the chlorate anion exists only as a substitution in known mineral species or, possibly, in pore-filling solutions.
In 2011, a study by the Georgia Institute of Technology revealed the presence of magnesium chlorate on the planet Mars.
Compounds (salts)
Examples of chlorates include
potassium chlorate, KClO3
sodium chlorate, NaClO3
magnesium chlorate, Mg(ClO3)2
Other oxyanions
If a Roman numeral in brackets follows the word "chlorate", this indicates the oxyanion contains chlorine in the indicated oxidation state, namely:
chlorate(I): hypochlorite, ClO− (chlorine oxidation number +1)
chlorate(III): chlorite, ClO2− (+3)
chlorate(V): chlorate, ClO3− (+5)
chlorate(VII): perchlorate, ClO4− (+7)
Using this convention, "chlorate" means any chlorine oxyanion. Usually, "chlorate" refers only to chlorine in the +5 oxidation state.
Toxicity
Chlorates are relatively toxic, though they form generally harmless chlorides on reduction.
| Physical sciences | Halide oxyanions | Chemistry |
589286 | https://en.wikipedia.org/wiki/Pi%20bond | Pi bond | In chemistry, pi bonds (π bonds) are covalent chemical bonds, in each of which two lobes of an orbital on one atom overlap with two lobes of an orbital on another atom, and in which this overlap occurs laterally. Each of these atomic orbitals has an electron density of zero at a shared nodal plane that passes through the two bonded nuclei. This plane also is a nodal plane for the molecular orbital of the pi bond. Pi bonds can form in double and triple bonds but do not form in single bonds in most cases.
The Greek letter π in their name refers to p orbitals, since the orbital symmetry of the pi bond is the same as that of the p orbital when seen down the bond axis. One common form of this sort of bonding involves p orbitals themselves, though d orbitals also engage in pi bonding. This latter mode forms part of the basis for metal-metal multiple bonding.
Properties
Pi bonds are usually weaker than sigma bonds. The C-C double bond, composed of one sigma and one pi bond, has a bond energy less than twice that of a C-C single bond, indicating that the stability added by the pi bond is less than the stability of a sigma bond. From the perspective of quantum mechanics, this bond's weakness is explained by significantly less overlap between the component p-orbitals due to their parallel orientation. This is contrasted by sigma bonds which form bonding orbitals directly between the nuclei of the bonding atoms, resulting in greater overlap and a strong sigma bond.
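To make the energy comparison concrete, here is a small arithmetic sketch using approximate, typical textbook average bond energies (these numbers are assumed for illustration and are not taken from this article):

```python
# Approximate average bond energies in kJ/mol (typical textbook values, assumed).
E_CC_single = 347.0   # one sigma bond
E_CC_double = 614.0   # one sigma bond plus one pi bond

pi_contribution = E_CC_double - E_CC_single
print(pi_contribution)                  # ~267 kJ/mol attributable to the pi bond
print(pi_contribution < E_CC_single)    # True: the pi bond adds less than a full sigma bond
```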
Pi bonds result from overlap of atomic orbitals that are in contact through two areas of overlap. Orbital overlaps that occur side-on, away from the internuclear axis (for example, the parallel overlap of two px or two py orbitals, a geometry that has no s-orbital analogue), generally produce pi bonds. Pi bonds are more diffuse bonds than the sigma bonds. Electrons in pi bonds are sometimes referred to as pi electrons. Molecular fragments joined by a pi bond cannot rotate about that bond without breaking the pi bond, because rotation involves destroying the parallel orientation of the constituent p orbitals.
For homonuclear diatomic molecules, bonding π molecular orbitals have only the one nodal plane passing through the bonded atoms, and no nodal planes between the bonded atoms. The corresponding antibonding, or π* ("pi-star") molecular orbital, is defined by the presence of an additional nodal plane between these two bonded atoms.
Multiple bonds
A typical double bond consists of one sigma bond and one pi bond; for example, the C=C double bond in ethylene (H2C=CH2). A typical triple bond, for example in acetylene (HC≡CH), consists of one sigma bond and two pi bonds in two mutually perpendicular planes containing the bond axis. Two pi bonds are the maximum that can exist between a given pair of atoms. Quadruple bonds are extremely rare and can be formed only between transition metal atoms, and consist of one sigma bond, two pi bonds and one delta bond.
A pi bond is weaker than a sigma bond, but the combination of pi and sigma bond is stronger than either bond by itself. The enhanced strength of a multiple bond versus a single (sigma bond) is indicated in many ways, but most obviously by a contraction in bond lengths. For example, in organic chemistry, carbon–carbon bond lengths are about 154 pm in ethane, 134 pm in ethylene and 120 pm in acetylene. More bonds make the total bond length shorter and the bond becomes stronger.
Special cases
A pi bond can exist between two atoms that do not have a net sigma-bonding effect between them.
In certain metal complexes, pi interactions between a metal atom and alkyne and alkene pi antibonding orbitals form pi-bonds.
In some cases of multiple bonds between two atoms, there is no net sigma-bonding at all, only pi bonds. Examples include diiron hexacarbonyl (Fe2(CO)6), dicarbon (C2), and diborane(2) (B2H2). In these compounds the central bond consists only of pi bonding because of a sigma antibond accompanying the sigma bond itself. These compounds have been used as computational models for analysis of pi bonding itself, revealing that in order to achieve maximum orbital overlap the bond distances are much shorter than expected.
| Physical sciences | Bond structure | Chemistry |
589303 | https://en.wikipedia.org/wiki/Molecular%20orbital%20theory | Molecular orbital theory | In chemistry, molecular orbital theory (MO theory or MOT) is a method for describing the electronic structure of molecules using quantum mechanics. It was proposed early in the 20th century. The MOT explains the paramagnetic nature of O2, which valence bond theory cannot explain.
In molecular orbital theory, electrons in a molecule are not assigned to individual chemical bonds between atoms, but are treated as moving under the influence of the atomic nuclei in the whole molecule. Quantum mechanics describes the spatial and energetic properties of electrons as molecular orbitals that surround two or more atoms in a molecule and contain valence electrons between atoms.
Molecular orbital theory revolutionized the study of chemical bonding by approximating the states of bonded electrons – the molecular orbitals – as linear combinations of atomic orbitals (LCAO). These approximations are made by applying the density functional theory (DFT) or Hartree–Fock (HF) models to the Schrödinger equation.
Molecular orbital theory and valence bond theory are the foundational theories of quantum chemistry.
Linear combination of atomic orbitals (LCAO) method
In the LCAO method, each molecule has a set of molecular orbitals. It is assumed that the molecular orbital wave function ψj can be written as a simple weighted sum of the n constituent atomic orbitals χi, according to the following equation: ψj = Σi cij χi, where the sum runs over the n atomic orbitals (i = 1, ..., n).
One may determine cij coefficients numerically by substituting this equation into the Schrödinger equation and applying the variational principle. The variational principle is a mathematical technique used in quantum mechanics to build up the coefficients of each atomic orbital basis. A larger coefficient means that the orbital basis is composed more of that particular contributing atomic orbital – hence, the molecular orbital is best characterized by that type. This method of quantifying orbital contribution as a linear combination of atomic orbitals is used in computational chemistry. An additional unitary transformation can be applied on the system to accelerate the convergence in some computational schemes. Molecular orbital theory was seen as a competitor to valence bond theory in the 1930s, before it was realized that the two methods are closely related and that when extended they become equivalent.
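As a minimal numerical illustration of how the variational LCAO procedure works in practice, the sketch below solves the generalized eigenvalue problem H c = E S c for two identical atomic orbitals. The parameter values are made up for illustration (a Hückel-like toy model, not data for any particular molecule):

```python
import numpy as np
from scipy.linalg import eigh

# Two identical atomic orbitals: on-site energy alpha, coupling (resonance) beta,
# and overlap s. Values are illustrative only.
alpha, beta, s = -13.6, -6.0, 0.25

H = np.array([[alpha, beta],
              [beta, alpha]])   # Hamiltonian matrix in the atomic-orbital basis
S = np.array([[1.0, s],
              [s, 1.0]])        # overlap matrix

# Generalized eigenvalue problem H c = E S c: energies and LCAO coefficients.
energies, coeffs = eigh(H, S)
print(energies)      # [(alpha+beta)/(1+s), (alpha-beta)/(1-s)]: bonding below antibonding
print(coeffs[:, 0])  # equal-weight symmetric combination -> the bonding molecular orbital
```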
There are three main requirements for atomic orbital combinations to be suitable as approximate molecular orbitals.
The atomic orbital combination must have the correct symmetry, which means that it must belong to the correct irreducible representation of the molecular symmetry group. Using symmetry adapted linear combinations, or SALCs, molecular orbitals of the correct symmetry can be formed.
Atomic orbitals must also overlap within space. They cannot combine to form molecular orbitals if they are too far away from one another.
Atomic orbitals must be at similar energy levels to combine into molecular orbitals. If the energy difference between them is great, the change in energy when the molecular orbitals form is small, and consequently there is not enough reduction in the energy of the electrons to produce significant bonding.
History
Molecular orbital theory was developed in the years after valence bond theory had been established (1927), primarily through the efforts of Friedrich Hund, Robert Mulliken, John C. Slater, and John Lennard-Jones. MO theory was originally called the Hund-Mulliken theory. According to physicist and physical chemist Erich Hückel, the first quantitative use of molecular orbital theory was the 1929 paper of Lennard-Jones. This paper predicted a triplet ground state for the dioxygen molecule, which explained its paramagnetism before valence bond theory, which came up with its own explanation in 1931. The word orbital was introduced by Mulliken in 1932. By 1933, the molecular orbital theory had been accepted as a valid and useful theory.
Erich Hückel applied molecular orbital theory to unsaturated hydrocarbon molecules starting in 1931 with his Hückel molecular orbital (HMO) method for the determination of MO energies for pi electrons, which he applied to conjugated and aromatic hydrocarbons. This method provided an explanation of the stability of molecules with six pi-electrons such as benzene.
The first accurate calculation of a molecular orbital wavefunction was that made by Charles Coulson in 1938 on the hydrogen molecule. By 1950, molecular orbitals were completely defined as eigenfunctions (wave functions) of the self-consistent field Hamiltonian and it was at this point that molecular orbital theory became fully rigorous and consistent. This rigorous approach is known as the Hartree–Fock method for molecules although it had its origins in calculations on atoms. In calculations on molecules, the molecular orbitals are expanded in terms of an atomic orbital basis set, leading to the Roothaan equations. This led to the development of many ab initio quantum chemistry methods. In parallel, molecular orbital theory was applied in a more approximate manner using some empirically derived parameters in methods now known as semi-empirical quantum chemistry methods.
The success of Molecular Orbital Theory also spawned ligand field theory, which was developed during the 1930s and 1940s as an alternative to crystal field theory.
Types of orbitals
Molecular orbital (MO) theory uses a linear combination of atomic orbitals (LCAO) to represent molecular orbitals resulting from bonds between atoms. These are often divided into three types, bonding, antibonding, and non-bonding. A bonding orbital concentrates electron density in the region between a given pair of atoms, so that its electron density will tend to attract each of the two nuclei toward the other and hold the two atoms together. An anti-bonding orbital concentrates electron density "behind" each nucleus (i.e. on the side of each atom which is farthest from the other atom), and so tends to pull each of the two nuclei away from the other and actually weaken the bond between the two nuclei. Electrons in non-bonding orbitals tend to be associated with atomic orbitals that do not interact positively or negatively with one another, and electrons in these orbitals neither contribute to nor detract from bond strength.
Molecular orbitals are further divided according to the types of atomic orbitals they are formed from. Chemical substances will form bonding interactions if their orbitals become lower in energy when they interact with each other. Different bonding orbitals are distinguished that differ by electron configuration (electron cloud shape) and by energy levels.
The molecular orbitals of a molecule can be illustrated in molecular orbital diagrams.
Common bonding orbitals are sigma (σ) orbitals which are symmetric about the bond axis and pi (π) orbitals with a nodal plane along the bond axis. Less common are delta (δ) orbitals and phi (φ) orbitals with two and three nodal planes respectively along the bond axis. Antibonding orbitals are signified by the addition of an asterisk. For example, an antibonding pi orbital may be shown as π*.
Bond order
Bond order is the number of chemical bonds between a pair of atoms. The bond order of a molecule can be calculated by subtracting the number of electrons in anti-bonding orbitals from the number of electrons in bonding orbitals and dividing the result by two. A molecule is expected to be stable if it has a bond order larger than zero. It is adequate to consider only the valence electrons when determining the bond order, because for molecular orbitals derived from core atomic orbitals (for example the 1s orbitals when the principal quantum number n > 1), the numbers of electrons in the bonding and anti-bonding molecular orbitals are equal, so they make no net contribution to the bond order.
From the bond order, one can predict whether a bond between two atoms will form. Consider, for example, the He2 molecule: from the molecular orbital diagram, the bond order is 0, meaning that no net bond forms between two He atoms, which is what is observed experimentally. The helium dimer can nevertheless be detected in a molecular beam at very low temperature and pressure, with a binding energy of approximately 0.001 J/mol; it is a van der Waals molecule.
Besides, the strength of a bond can also be realized from bond order (BO). For example:
For H2: Bond order is 1; bond energy is 436 kJ/mol.
For H2+: Bond order is 1/2; bond energy is 171 kJ/mol.
As the bond order of H2+ is smaller than that of H2, it should be less stable; this is observed experimentally and can be seen from the bond energies.
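The bond-order bookkeeping described above amounts to a one-line calculation. The sketch below simply reproduces the H2, H2+ and He2 values quoted, using the valence sigma(1s)/sigma*(1s) occupations:

```python
def bond_order(n_bonding: int, n_antibonding: int) -> float:
    """Bond order = (bonding electrons - antibonding electrons) / 2."""
    return (n_bonding - n_antibonding) / 2

# Valence-electron occupations of the sigma(1s) and sigma*(1s) molecular orbitals:
print(bond_order(2, 0))   # H2   -> 1.0
print(bond_order(1, 0))   # H2+  -> 0.5
print(bond_order(2, 2))   # He2  -> 0.0 (no net bond)
```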
Overview
MOT provides a global, delocalized perspective on chemical bonding. In MO theory, any electron in a molecule may be found anywhere in the molecule, since quantum conditions allow electrons to travel under the influence of an arbitrarily large number of nuclei, as long as they are in eigenstates permitted by certain quantum rules. Thus, when excited with the requisite amount of energy through high-frequency light or other means, electrons can transition to higher-energy molecular orbitals. For instance, in the simple case of a hydrogen diatomic molecule, promotion of a single electron from a bonding orbital to an antibonding orbital can occur under UV radiation. This promotion weakens the bond between the two hydrogen atoms and can lead to photodissociation, the breaking of a chemical bond due to the absorption of light.
Molecular orbital theory is used to interpret ultraviolet–visible spectroscopy (UV–VIS). Changes to the electronic structure of molecules can be seen by the absorbance of light at specific wavelengths. Assignments can be made to these signals indicated by the transition of electrons moving from one orbital at a lower energy to a higher energy orbital. The molecular orbital diagram for the final state describes the electronic nature of the molecule in an excited state.
Although in MO theory some molecular orbitals may hold electrons that are more localized between specific pairs of molecular atoms, other orbitals may hold electrons that are spread more uniformly over the molecule. Thus, overall, bonding is far more delocalized in MO theory, which makes it more applicable to resonant molecules that have equivalent non-integer bond orders than valence bond theory. This makes MO theory more useful for the description of extended systems.
Robert S. Mulliken, who actively participated in the advent of molecular orbital theory, considers each molecule to be a self-sufficient unit. He asserts in his article: "...Attempts to regard a molecule as consisting of specific atomic or ionic units held together by discrete numbers of bonding electrons or electron-pairs are considered as more or less meaningless, except as an approximation in special cases, or as a method of calculation […]. A molecule is here regarded as a set of nuclei, around each of which is grouped an electron configuration closely similar to that of a free atom in an external field, except that the outer parts of the electron configurations surrounding each nucleus usually belong, in part, jointly to two or more nuclei..." An example is the MO description of benzene, C6H6, which is an aromatic hexagonal ring of six carbon atoms and three double bonds. In this molecule, 24 of the 30 total valence bonding electrons – 24 coming from carbon atoms and 6 coming from hydrogen atoms – are located in 12 σ (sigma) bonding orbitals, which are located mostly between pairs of atoms (C–C or C–H), similarly to the electrons in the valence bond description. However, in benzene the remaining six bonding electrons are located in three π (pi) molecular bonding orbitals that are delocalized around the ring. Two of these electrons are in an MO that has equal orbital contributions from all six atoms. The other four electrons are in orbitals with vertical nodes at right angles to each other. As in the VB theory, all of these six delocalized π electrons reside in a larger space that exists above and below the ring plane. All carbon–carbon bonds in benzene are chemically equivalent. In MO theory this is a direct consequence of the fact that the three molecular π orbitals combine and evenly spread the extra six electrons over six carbon atoms.
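A quick tally of the electron bookkeeping in that benzene description (simple arithmetic using the counts given above, with the usual 4 valence electrons per carbon and 1 per hydrogen):

```python
# Valence-electron bookkeeping for benzene, C6H6.
valence_electrons = 6 * 4 + 6 * 1   # 24 from carbon + 6 from hydrogen = 30
sigma_electrons = 12 * 2            # 12 sigma bonding MOs (C-C and C-H), 2 electrons each
pi_electrons = 3 * 2                # 3 delocalized pi MOs, 2 electrons each

assert sigma_electrons + pi_electrons == valence_electrons   # 24 + 6 == 30
print(valence_electrons, sigma_electrons, pi_electrons)
```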
In molecules such as methane, CH4, the eight valence electrons are found in four MOs that are spread out over all five atoms. It is possible to transform the MOs into four localized sp3 orbitals. Linus Pauling, in 1931, hybridized the carbon 2s and 2p orbitals so that they pointed directly at the hydrogen 1s basis functions and featured maximal overlap. However, the delocalized MO description is more appropriate for predicting ionization energies and the positions of spectral absorption bands. When methane is ionized, a single electron is taken from the valence MOs, which can come from the s bonding or the triply degenerate p bonding levels, yielding two ionization energies. In comparison, the explanation in valence bond theory is more complicated. When one electron is removed from an sp3 orbital, resonance is invoked between four valence bond structures, each of which has a single one-electron bond and three two-electron bonds. Triply degenerate T2 and A1 ionized states (CH4+) are produced from different linear combinations of these four structures. The difference in energy between the ionized and ground state gives the two ionization energies.
As in benzene, in substances such as beta carotene, chlorophyll, or heme, some electrons in the π orbitals are spread out in molecular orbitals over long distances in a molecule, resulting in light absorption in lower energies (the visible spectrum), which accounts for the characteristic colours of these substances. This and other spectroscopic data for molecules are well explained in MO theory, with an emphasis on electronic states associated with multicenter orbitals, including mixing of orbitals premised on principles of orbital symmetry matching. The same MO principles also naturally explain some electrical phenomena, such as high electrical conductivity in the planar direction of the hexagonal atomic sheets that exist in graphite. This results from continuous band overlap of half-filled p orbitals and explains electrical conduction. MO theory recognizes that some electrons in the graphite atomic sheets are completely delocalized over arbitrary distances, and reside in very large molecular orbitals that cover an entire graphite sheet, and some electrons are thus as free to move and therefore conduct electricity in the sheet plane, as if they resided in a metal.
| Physical sciences | Molecular physics | null |
1539814 | https://en.wikipedia.org/wiki/Acanthostega | Acanthostega | Acanthostega (meaning "spiny roof") is an extinct genus of stem-tetrapod, among the first vertebrate animals to have recognizable limbs. It appeared in the late Devonian period (Famennian age) about 365 million years ago, and was anatomically intermediate between lobe-finned fishes and those that were fully capable of coming onto land.
Discovery
The fossilized remains are generally well preserved. The famous fossil that revealed the significance of this species was found by Jennifer A. Clack in East Greenland in 1987, though fragments of the skull had already been discovered in 1933 by Gunnar Säve-Söderbergh and Erik Jarvik.
Description
Acanthostega had eight digits on each hand (the number of digits on the feet is unclear) linked by webbing; it lacked wrists and was generally poorly adapted for walking on land. It also had a remarkably fish-like shoulder and forelimb. The front limbs of Acanthostega could not bend forward at the elbow and therefore could not be brought into a weight-bearing position, appearing to be more suitable for paddling or for holding on to aquatic plants. Acanthostega is the earliest stem-tetrapod to show the shift in locomotory dominance from the pectoral girdle to the pelvic girdle.
There are many morphological changes that allowed the pelvic girdle of Acanthostega to become a weight-bearing structure. In more ancestral states the two sides of the girdle were not attached. In Acanthostega there is contact between the two sides and fusion of the girdle with the sacral rib of the vertebral column. These fusions would have made the pelvic region more powerful and equipped to counter the force of gravity when not supported by the buoyancy of an aquatic environment. It had internal gills that were covered like those of fish. It also had lungs, but its ribs were too short to support its chest cavity out of water.
Classification
Acanthostega is seen as part of a widespread evolutionary radiation in the late Devonian period, starting with purely aquatic finned tetrapodomorphs, with their successors showing increased air-breathing capability and related adaptations of the jaws and gills, as well as a more muscular neck allowing freer movement of the head than fish have, and use of the fins to raise the body. These features are displayed by the earlier Tiktaalik, which, like Ichthyostega, showed signs of greater ability to move around on land but is thought to have been primarily aquatic.
In Late Devonian vertebrate speciation, descendants of pelagic lobe-finned fish –like Eusthenopteron– exhibited a sequence of adaptations: Panderichthys, suited to muddy shallows; Tiktaalik with limb-like fins that could take it onto land; stem-tetrapods in weed-filled swamps, such as Acanthostega, which had eight-digited feet; and Ichthyostega, with full limbs. Their descendants also included pelagic lobe-finned fish such as coelacanth species.
Paleobiology
Hunting strategy
It has been inferred that Acanthostega probably lived in shallow, weed-choked swamps, its legs apparently being adapted for these specific ecosystems. Apart from the presence of limbs, it was not adapted in any way for walking on land. Jennifer A. Clack interprets this as showing that Acanthostega was primarily an aquatic animal descended from fish that never left the sea, and that the specializations of the tetrapod lineage were exaptations: features which would later be useful for terrestrial life, even if they originated for a different purpose. At that period, deciduous plants were flourishing and annually shedding leaves into the water, attracting small prey into warm oxygen-poor shallows that were difficult for larger fish to swim in; Clack remarks on how the lower jaw of Acanthostega shows a change from those of fish that have two rows of teeth, with a large number of small teeth in the outer row, and two large fangs and some smaller teeth in the inner row. This difference likely corresponds to a shift in stem-tetrapods from feeding exclusively in the water to feeding with the head above water or on land.
Research based on analysis of the suture morphology in the skull of Acanthostega indicates that the species was able to bite prey at or near the water's edge. Markey and Marshall compared the skull with the skulls of fish, which use suction feeding as the primary method of prey capture, and creatures known to have used the direct biting on prey typical of terrestrial animals. Their results indicate that Acanthostega was adapted for what they call terrestrial-style feeding, strongly supporting the hypothesis that the terrestrial mode of feeding first emerged in aquatic animals. If correct, this shows an animal specialized for hunting and living in shallow waters in the line between land and water.
Lifestyle
While normally considered more basal than Ichthyostega, it is possible that Acanthostega was actually more derived. Since Acanthostega resembles juvenile Ichthyostega and shows far fewer differences between juveniles and adults than the latter, it has been suggested that Acanthostega might be descended from a neotenic lineage. Although it appears to have spent its whole life in water, its humerus also exhibits traits that resemble those of later, fully terrestrial stem-tetrapods (the humerus in Ichthyostega being somewhat derived from, and homologous with, the pectoral and pelvic fin bones of earlier fishes). This could indicate that vertebrates evolved terrestrial traits earlier than previously assumed, and numerous times independently of one another. Muscle scars on the forelimbs of Acanthostega were similar to those of crown-tetrapods, suggesting that it evolved from an ancestor that had more terrestrial adaptations than itself.
Development
A histological study of Acanthostega humeri, assisted by synchrotron scans, indicates that the animal matured slowly. Some individuals reached sexual maturity (based on a fully ossified humerus) at more than six years of age, and adult fossils are much rarer than juveniles. Late ossification of the humerus supports a fully aquatic lifestyle for Acanthostega. There is barely any correlation between humerus size and maturity, suggesting that there was significant size variation among individuals of the same age. This may be due to competitive pressures, differing adaptive strategies, or even sexual dimorphism. However, the small sample size prevents recognition of a bimodal distribution which could confirm the latter hypothesis.
| Biology and health sciences | Prehistoric amphibians | Animals |
1540096 | https://en.wikipedia.org/wiki/Stenopodidea | Stenopodidea | The Stenopodidea or boxer shrimps are a small group of decapod crustaceans. Often confused with Caridea shrimp or Dendrobranchiata prawns, they are neither, belonging to their own group.
Anatomy
They can be differentiated from the Dendrobranchiata prawns by their lack of branching gills, and by the fact that they brood their eggs instead of directly releasing them into the water. They differ from the Caridea shrimp by their greatly enlarged third pair of legs.
Taxonomy
Stenopodidea belongs to the order Decapoda, and is most closely related to the Caridea and Procarididea infraorders of shrimp. The cladogram below shows Stenopodidea's relationships to other relatives within Decapoda, from analysis by Wolfe et al., 2019.
There are 71 extant species currently recognized within Stenopodidea, divided into 12 genera. Three fossil species are also recognized, each belonging to a separate genus. The earliest fossil assigned to the Stenopodidea is Devonostenopus pennsylvaniensis from the Devonian. Until D. pennsylvaniensis was discovered, the oldest known member of the group was Jilinicaris chinensis from the Late Cretaceous.
The cladogram below shows Stenopodidea's internal relationships:
Stenopodidea comprises the following families and genera:
†Dubiostenopus Alencar et al. 2023
Macromaxillocarididae Alvarez, Iliffe & Villalobos, 2006
Macromaxillocaris Alvarez, Iliffe & Villalobos, 2006
Spongicolidae Schram, 1986
Engystenopus Alcock & Anderson, 1894
Globospongicola Komai & Saito, 2006
†Jilinicaris Schram, Shen, Vonk & Taylor, 2000
Microprosthema Stimpson, 1860
Paraspongicola De Saint Laurent & Cléva, 1981
Spongicola De Haan, 1844
Spongicoloides Hansen, 1908
Spongiocaris Bruce & Baba, 1973
Stenopodidae Claus, 1872
†Devonostenopus Jones et al., 2014
Juxtastenopus Goy, 2010
Odontozona Holthuis, 1946
†Phoenice Garassino, 2001
Richardina A. Milne-Edwards, 1881
Stenopus Latreille, 1819
| Biology and health sciences | Decapoda | Animals |
1541115 | https://en.wikipedia.org/wiki/Pistonless%20rotary%20engine | Pistonless rotary engine | A pistonless rotary engine is an internal combustion engine that does not use pistons in the way a reciprocating engine does. Designs vary widely but typically involve one or more rotors, sometimes called rotary pistons. Although many different designs have been constructed, only the Wankel engine has achieved widespread adoption.
The term rotary combustion engine has been used as a name for these engines to distinguish them from early (generally up to the early 1920s) aircraft engines and motorcycle engines also known as rotary engines. However, both continue to be called rotary engines and only the context determines which type is meant, whereas the "pistonless" prefix is less ambiguous.
Pistonless rotary engines
A pistonless rotary engine replaces the linear reciprocating motion of a piston with more complex compression/expansion motions, with the objective of improving some aspect of the engine's operation, such as: higher efficiency thermodynamic cycles, lower mechanical stress, lower vibration, higher compression, or less mechanical complexity. The Wankel engine is the only successful pistonless rotary engine, but many similar concepts have been proposed and are at various stages of development. Examples of rotary engines include:
Production stage
Wankel engine
LiquidPiston engine
Beauchamp Tower's nineteenth century spherical steam engine (in actual use as a steam engine, but theoretically adaptable to use internal combustion)
Development stage
Engineair engine
Hamilton Walker engines
Libralato rotary Atkinson cycle engine
Nutating disc engine
Quasiturbine
RKM engine
Sarich orbital engine
Swing-piston engine, Trochilic
Wave disk engine
Conceptual stage
Gerotor engine
Internally Radiating Impulse Structure: IRIS engine
| Technology | Engines | null |
1541301 | https://en.wikipedia.org/wiki/Lugol%27s%20iodine | Lugol's iodine | Lugol's iodine, also known as aqueous iodine and strong iodine solution, is a solution of potassium iodide with iodine in water. It is a medication and disinfectant used for a number of purposes. Taken by mouth it is used to treat thyrotoxicosis until surgery can be carried out, protect the thyroid gland from radioactive iodine, and to treat iodine deficiency. When applied to the cervix it is used to help in screening for cervical cancer. As a disinfectant it may be applied to small wounds such as a needle stick injury. A small amount may also be used for emergency disinfection of drinking water.
Side effects may include allergic reactions, headache, vomiting, and conjunctivitis. Long term use may result in trouble sleeping and depression. It should not typically be used during pregnancy or breastfeeding. Lugol's iodine is a liquid made up of two parts potassium iodide for every one part elemental iodine in water.
Lugol's iodine was first made in 1829 by the French physician Jean Lugol. It is on the World Health Organization's List of Essential Medicines. Lugol's iodine is available as a generic medication and over the counter. Lugol's solution is available in different strengths of iodine. Large volumes of concentrations more than 2.2% may be subject to regulation.
Uses
Medical uses
Preoperative administration of Lugol's solution decreases intraoperative blood loss during thyroidectomy in patients with Graves' disease. However, it appears ineffective in patients who are already euthyroid on anti-thyroid drugs and levothyroxine.
During colposcopy, Lugol's iodine is applied to the vagina and cervix. Normal vaginal tissue stains brown due to its high glycogen content, while tissue suspicious for cancer does not stain, and thus appears pale compared to the surrounding tissue. Biopsy of suspicious tissue can then be performed. This is called a Schiller's test.
Patients at high risk of oesophageal squamous cell carcinoma are usually followed using a combination of Lugol's chromoendoscopy and narrow-band imaging. With Lugol's iodine, low-grade dysplasia appears as an unstained or weakly stained area; high-grade dysplasia is consistently unstained.
Lugol's iodine may also be used to better visualize the mucogingival junction in the mouth. Similar to the method of staining mentioned above regarding a colposcopy, alveolar mucosa has a high glycogen content that gives a positive iodine reaction vs. the keratinized gingiva.
Lugol's iodine may also be used as an oxidizing germicide; however, its use is somewhat undesirable in that it may lead to scarring and temporarily discolors the skin. One way to avoid this problem is to wash off the iodine afterwards with a solution of 70% ethanol.
Lugol's iodine was distributed in the Polish People's Republic after the Chernobyl catastrophe because the government, not having been informed of how severe the event was, overestimated the radiation released, and because iodine tablets were unavailable.
Science
Lugol's iodine is used as a mordant when performing a Gram stain. It is applied for one minute after staining with crystal violet, but before ethanol, to ensure that the peptidoglycan of Gram-positive organisms remains stained, making them easy to identify as Gram-positive under the microscope.
This solution is used as an indicator test for the presence of starches in organic compounds, with which it reacts by turning a dark-blue/black. Elemental iodine solutions like Lugol's will stain starches due to iodine's interaction with the coil structure of the polysaccharide. Starches include the plant starches amylose and amylopectin and glycogen in animal cells. Lugol's solution will not detect simple sugars such as glucose or fructose. In the pathologic condition amyloidosis, amyloid deposits (i.e., deposits that stain like starch, but are not) can be so abundant that affected organs will also stain grossly positive for the Lugol reaction for starch.
It can be used as a cell stain, making the cell nuclei more visible and for preserving phytoplankton samples.
Lugol's solution can also be used in various experiments to observe how a cell membrane uses osmosis and diffusion.
Lugol's solution is also used in the marine aquarium industry. Lugol's solution provides a strong source of free iodine and iodide to reef inhabitants and macroalgae. Although the solution is thought to be effective when used with stony corals, systems containing xenia and soft corals are assumed to be particularly benefited by the use of Lugol's solution. Used as a dip for stony and soft or leather corals, Lugol's may help rid the animals of unwanted parasites and harmful bacteria. The solution is thought to foster improved coloration and possibly prevent bleaching of corals due to changes in light intensity, and to enhance coral polyp expansion. The blue colors of Acropora spp. are thought to be intensified by the use of potassium iodide. Specially packaged supplements of the product intended for aquarium use can be purchased at specialty stores and online.
Outdated uses
Until the early 1970s, it was often recommended for use in victims of rape in order to avoid pregnancy. The idea stemmed from the fact that, in the laboratory, Lugol's iodine appeared to kill sperm cells even at dilutions as great as 1:32. Thus it was thought that an intrauterine application of Lugol's iodine, immediately after the event, would help avoid pregnancy.
Side effects
Because it contains free iodine, Lugol's solution at 2% or 5% concentration without dilution is irritating and destructive to mucosa, such as the lining of the esophagus and stomach. Doses of 10 mL of undiluted 5% solution have been reported to cause gastric lesions when used in endoscopy. The LD50 for 5% Iodine is 14,000 mg/kg (14 g/kg) in rats, and 22,000 mg/kg (22 g/kg) in mice.
The World Health Organization classifies substances taken orally with an LD50 of 5–50 mg/kg as the second highest toxicity class, Class Ib (Highly Hazardous). The Global Harmonized System of Classification and Labeling of Chemicals categorizes this as Category 2 with a hazard statement "Fatal if swallowed". Potassium iodide is not considered hazardous.
Mechanism of action
The above uses and effects are consequences of the fact that the solution is a source of effectively free elemental iodine, which is readily generated from the equilibrium between elemental iodine molecules and polyiodide ions in the solution.
History
It was historically used as a first-line treatment for hyperthyroidism, as the administration of pharmacologic amounts of iodine leads to temporary inhibition of iodine organification in the thyroid gland, caused by phenomena including the Wolff–Chaikoff effect and the Plummer effect. However, it is not used to treat certain autoimmune causes of thyroid disease, as iodine-induced blockade of iodine organification may result in hypothyroidism. It is not considered a first-line therapy because of the possible induction of resistant hyperthyroidism, but may be considered as an adjuvant therapy when used together with other hyperthyroidism medications.
Lugol's iodine has been used traditionally to replenish iodine deficiency. Because of its wide availability as a drinking-water decontaminant and its high content of potassium iodide, emergency use of it was at first recommended to the Polish government in 1986, after the Chernobyl disaster, to replace and block any intake of radioactive iodine, even though it was known to be a non-optimal agent due to its somewhat toxic free-iodine content. Other sources state that pure potassium iodide solution in water (SSKI) was eventually used for most of the thyroid protection after this accident. There is "strong scientific evidence" for potassium iodide thyroid protection to help prevent thyroid cancer. Potassium iodide does not provide immediate protection but can be a component of a general strategy in a radiation emergency.
Historically, Lugol's iodine solution has been widely available and used for a number of health problems with some precautions. Lugol's is sometimes prescribed in a variety of alternative medical treatments. Only since the end of the Cold War has the compound become subject to national regulation in the English-speaking world.
Society and culture
Regulation
Until 2007, in the United States, Lugol's solution was unregulated and available over the counter as a general reagent, an antiseptic, a preservative, or as a medicament for human or veterinary application.
Since 1 August 2007, the DEA regulates all iodine solutions containing greater than 2.2% elemental iodine as a List I precursor because they may potentially be used in the illicit production of methamphetamine. Transactions of up to one fluid ounce (30 ml) of Lugol's solution are exempt from this regulation.
Formula and manufacture
Lugol's solution is commonly available in different potencies of (nominal) 1%, 2%, 5% or 10%. Iodine concentrations greater than 2.2% are subject to US regulations. If the US regulations are taken literally, their 2.2% maximum iodine concentration limits a Lugol's solution to maximum (nominal) 0.87%.
The most commonly used (nominal) 5% solution consists of 5% (wt/v) iodine () and 10% (wt/v) potassium iodide (KI) mixed in distilled water and has a total iodine content of 126.4 mg/mL. The (nominal) 5% solution thus has a total iodine content of 6.32 mg per drop of 0.05 mL; the (nominal) 2% solution has 2.53 mg total iodine content per drop.
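The figures quoted above can be reproduced with simple stoichiometry. The Python sketch below is an illustrative calculation only, not a dosing or regulatory reference; the 0.05 mL drop volume and the reading of the 2.2% limit as total iodine (elemental iodine plus the iodine held in the iodide) follow the text above, while the molar masses are standard values.

# Illustrative sketch: total iodine content of Lugol's solution.
# Assumes the composition stated above (nominal X% = X g I2 + 2X g KI per 100 mL)
# and a drop volume of 0.05 mL; not a dosing or regulatory reference.
M_I = 126.904    # molar mass of atomic iodine, g/mol
M_KI = 166.003   # molar mass of potassium iodide, g/mol

def total_iodine_mg_per_ml(nominal_percent):
    i2 = nominal_percent * 10.0        # mg of elemental I2 per mL (w/v)
    ki = 2 * nominal_percent * 10.0    # mg of KI per mL (w/v)
    return i2 + ki * (M_I / M_KI)      # iodine from I2 plus iodine bound in KI

for pct in (5, 2):
    per_ml = total_iodine_mg_per_ml(pct)
    print(f"{pct}% solution: {per_ml:.1f} mg/mL total iodine, "
          f"{per_ml * 0.05:.2f} mg per 0.05 mL drop")
# 5% solution: 126.4 mg/mL total iodine, 6.32 mg per 0.05 mL drop
# 2% solution: 50.6 mg/mL total iodine, 2.53 mg per 0.05 mL drop

# Nominal strength whose total iodine just reaches the 2.2% (22 mg/mL) limit:
print(f"{22.0 / total_iodine_mg_per_ml(1.0):.2f}%")   # about 0.87%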
Potassium iodide renders the elementary iodine soluble in water through the formation of the triiodide () ion. It is not to be confused with tincture of iodine solutions, which consist of elemental iodine, and iodide salts dissolved in water and alcohol. Lugol's solution contains no alcohol.
Other names for Lugol's solution are (iodine-potassium iodide); Markodine, Strong solution (Systemic); and Aqueous Iodine Solution BP.
Economics
In the United Kingdom, in 2015, the NHS paid £9.57 per 500 ml of solution.
| Physical sciences | Halide salts | Chemistry |
1543303 | https://en.wikipedia.org/wiki/Handcar | Handcar | A handcar (also known as a pump trolley, pump car, rail push trolley, push-trolley, jigger, Kalamazoo, velocipede, or draisine) is a railroad car powered by its passengers, or by people pushing the car from behind. It is mostly used as a railway maintenance of way or mining car, but it was also used for passenger service in some cases.
Use
A typical design consists of an arm, called the walking beam, that pivots seesaw-like on a base, which the passengers alternately push down and pull up to move the car.
An even simpler design is pushed by two or four people (called trolleymen), with hand brakes to stop the trolley. When the trolley slows down, two trolleymen jump off and push it until it picks up speed, then jump back on, and the cycle continues. The trolleymen take turns pushing the trolley to maintain speed and avoid fatigue. Four people are also required to safely lift the trolley off the rails when a train approaches.
Rail tracks have a tendency to develop various defects, including cracks and loose packing, which may lead to accidents. The first rail inspections were done visually, and push trolley inspections formed a very important part of these visual inspections.
Modern usage
Handcars were normally used by railway service personnel (the latter also known as gandy dancers) for railroad inspection and maintenance. Because of their low weight and small size, they can be put on and taken off the rails at any place, allowing trains to pass. Handcars have since been replaced by self-propelled vehicles that do not require the use of manual power, instead relying on internal combustion engines or electricity to move the vehicle.
Handcars are nowadays used by handcar enthusiasts at vintage railroad events and for races between handcars driven by five-person teams (one to push the car from a halt, four to pump the lever). One such race, the Handcar Regatta, was held in Santa Rosa, California, from 2008 to 2011, and other races are held in Australia; see the section on racing below. Aside from handcars built for racing, new handcars are being built with modern roller bearings and milled axles and crankshafts.
Tourist usage
For some decades, especially in Europe, the handcar has also been used for tourist and recreational purposes. In this case, handcar is usually called a draisine or railbike. Thanks to draisines it is possible to make use of sections of abandoned railway lines, allowing visitors to discover beautiful natural landscapes that would be otherwise inaccessible. The use of handcars is growing thanks to increasing attention, throughout the Western world, to sustainable tourism.
The European country in which the draisine is most prevalent is probably France (under the name vélorail), where in 2021 there were 56 active routes. Many of these have been united, since 2004, in the Federation of Vélorail of France.
The usage of draisines in Europe has also spread to many northern countries, such as Sweden and Finland, as well as Belgium, Luxembourg, and Germany. Even in Italy the practice is starting to spread, with a few projects under consideration.
By country
United States
It is not clear who invented the handcar, also written as hand car or hand-car. One of the first was the track velocipede invented by George S. Sheffield of Three Rivers in 1877. It is likely that machinists in individual railroad shops began to build them to their own design. Many of the earliest ones operated by turning large cranks. It is likely that the pump handcar, with a reciprocating walking beam, came later. While there are hundreds of US patents pertaining to details of handcars, probably the primary designs of mechanisms for powering handcars were in such common use that they were not patentable when companies started to manufacture handcars for sale to the railroads.
Handcars were absolutely essential to the operation of railroads during a time when railroads were the primary form of public transportation for people and goods in America, from about 1850 to 1910. There may have been handcars as early as the late 1840s but they were quite common during the American Civil War. They were a very important tool in the construction of the Transcontinental Railroad. There were many thousands of them built. They were commonly assigned to a "section" of track, the section being between about 6 to 10 miles long, depending upon the traffic weight and locomotive speed experienced on the section. Each section would have a section crew that would maintain that piece of track. Each section usually had a section house which was used to store tools and the section's handcar. Roughly 130,000 miles of track had been constructed in America by 1900. Thus, considering there was a handcar assigned to at least every ten miles of that track, there would have been a minimum of 13,000 handcars operating in the United States. This number is obviously a gross underestimate because many sections were shorter than 10 miles and railroads also had spare handcars for use in unusual circumstances. Telegraph company Western Union and other rail-users had their own handcars, adding to the overall handcar population.
The first handcars, built in the railroad shops, were probably made of whatever parts the shops had around or could easily make. These cars were probably quite heavy. Heavy handcars need more people to propel them. More people add more power, but at some point the benefits are offset by the weight of the people: their own weight would not be compensated by any extra power they can produce. Many companies made handcars in the years following the American Civil War, as evidenced by the number of advertisements in contemporary publications such as The Car Builder's Dictionary. By the mid-1880s, the Sheffield Velocipede Car Company, the Kalamazoo Velocipede Company and the Buda Foundry and Manufacturing Company were the three large companies that were the primary builders of handcars. Sheffield was almost immediately acquired by the industrial giant Fairbanks Morse. All three companies changed their names over the years, but for most of the years that they produced handcars they were still identified as Sheffield, Kalamazoo and Buda. Hand cars continued to be available through the first half of the 20th century. Fairbanks Morse was still offering a handcar in its catalog as late as 1950, and Kalamazoo sold them until at least 1955.
While depictions on TV and in movies might suggest that being a member of a handcar crew is a joyride, in fact pumping a traditional handcar with bronze bearings rather than modern roller bearings can be very hard work. The disagreeable nature of this experience must have been heightened by the dead weight of typical section crew supplies such as railroad spikes, track nuts and bolts, shovels, pry bars of various sorts and other iron and steel equipment.
Motor section cars began to appear in the very early 1900s, or a few years earlier. They quickly replaced most of the handcars. Those handcars that remained in use through World War I were probably scrapped during World War II. The number of handcars that survived is unclear. They can be found in railroad museums and some are in private hands.
Australia
In Australia, hand cars or pump carts are commonly referred to as Kalamazoos after the Kalamazoo Manufacturing Company, which provided many examples to the Australian railway market. Many Kalamazoos are preserved in Australia, some even being used for races.
Guatemala
There is a push car service along the railroad tracks between Anguiatú in Guatemala and rural towns across the Salvadoran border. Sometimes it is pulled by a horse.
Indian Railways
Although many railways in the world have switched to other methods of inspection, the push trolley is still widely used on Indian Railways, in addition to other techniques, especially for inspecting railway track and assets such as bridges situated between stations. The push trolley carries one or more officials inspecting the track and the railside equipment. The official carries instruments to measure and check the condition of the tracks and monitors the work being done by the trackmen, keymen, gatemen, etc. who maintain, patrol and man the track and installations. The push trolley is also used by officials inspecting signalling installations in some parts of India. On routes carrying high volumes of traffic, such as the suburban section in Mumbai, push trolleys cannot be used and inspections are instead carried out on foot.
Japan
In Japan, dozens of commercially operated handcar railway lines existed in the early 20th century. These lines were built purely for passenger and freight service, and "drivers" pushed the small cars along the whole route. The first line, the Fujieda-Yaizu Tramway, opened in 1891, and most of the others opened before 1910. Most lines were very short, with lengths of less than 10 km; the rail gauges used were either or .
As the human-powered system was fairly inefficient, many handcar tramways soon changed their power source to either horses or gasoline. The system could not withstand competition from other modes of transport, such as trucks, horses, buses, or other railways. The Taishaku Handcar Tramway ceased operation as early as 1912, and almost all of the lines had closed before 1945.
List of handcar tramways in Japan
Hokkaidō
Ebetsu Town Handcar Tramway 江別町営人車軌道
Akita
Nakanishi Tokugorō Operated Tramway 中西徳五郎経営軌道
Yamagata
Akayu Handcar Tramway 赤湯人車軌道
Iwate
Waga Light Tramway 和賀軽便軌道
Miyagi
Matsuyama Handcar Tramway 松山人車軌道
Tochigi
Iwafune Handcar Tramway 岩舟人車鉄道
Kitsuregawa Handcar Tramway 喜連川人車鉄道
Nabeyama Handcar Tramway 鍋山人車軌道
Nasu Handcar Tramway 那須人車軌道
Otome Handcar Tramway 乙女人車軌道
Utsunomiya Stone Tramway 宇都宮石材軌道
Ibaraki
Haguro Tramway 羽黒軌道
Inada Tramway 稲田軌道
Iwama Tramway 岩間軌道
Kabaho Kōgyō Tramway 樺穂興業軌道
Kasama Handcar Tramway 笠間人車軌道
Chiba
Mobara-Chōnan Handcar Tramway 茂原・長南間人車軌道
Noda Handcar Tramway 野田人車鉄道 (linemap)
Ōhara-Ōtaki Handcar Tramway 大原・大多喜間人車軌道
Tōkatsu Handcar Tramway 東葛人車鉄道
Tokyo
Taishaku Handcar Tramway 帝釈人車軌道
The current Keisei Kanamachi Line.
Gunma
Satomi Tramway 里見軌道
Yabuzuka Stone Tramway 藪塚石材軌道
The part of the current Tōbu Kiryū Line.
Kanagawa
Zusō Handcar Tramway 豆相人車鉄道
Also in Shizuoka.
Shizuoka
Fujieda-Yaizu Tramway 藤枝焼津間軌道
Nakaizumi Tramway 中泉軌道
Shimada Tramway 島田軌道
Fukui
Hongō Tramway 本郷軌道
Okinawa
Okinawa Handcar Tramway 沖縄人車軌道
Philippines
Hand-built trolleys are illegally used on suburban railway tracks as an unauthorised commuter service in Manila, Philippines.
Taiwan
In Taiwan, commercially operated handcars were called either light railway line (Traditional Chinese: 輕便線; Hanyu Pinyin: qīngbiàn-xiàn), hand-pushed light railway line (手押輕便線; shǒuyā qīngbiàn-xiàn), hand-pushed tramway (手押軌道; shǒuyā guǐdào), or, most commonly, hand-pushed wagon (手押臺車; shǒuyā táichē). The first line was built in the 1870s, and the network developed later under Japanese rule. At its peak in 1933, there were more than 50 lines on the island, forming a network of 1,292 km and transporting local passengers, coal, factory products, sugar, salt, bananas, tea leaves, and other goods. Most lines, excluding those in mines and on isolated islands, disappeared following the end of Japanese rule, although a few survived well into the 1970s. Currently, only the sightseeing line in Wūlái still exists, and it is no longer human-powered.
In popular culture
Handcars are a recurring railway-themed plot device in twentieth- and twenty-first-century film, including comedy, drama and animation.
The opening scene of Blazing Saddles, set at a railroad construction site, features a handcar.
In the movie Mad Max Beyond Thunderdome, the culminating chase scene takes place along a railway, with one of the pursuers chasing the heroes down the tracks on a handcar.
In the Dad's Army episode "The Royal Train", the Walmington-on-Sea Home Guard platoon find themselves stuck on a runaway train. Warden Hodges, the vicar, the verger and the town mayor chase them using a handcar.
In the movie O Brother, Where Art Thou?, the three main characters encounter an old blind man on a handcar after escaping from prison and in the conclusion of the movie.
In the film Gallowwalkers there is a handcar used in the opening scenes.
In the movie The Great St Trinian's Train Robbery, two St Trinian's schoolgirls use one to move between distant points levers.
In the Wile E. Coyote and Road Runner episode "Rushing Roulette" (1965) Wile E. Coyote attempts to catch the Road Runner using a handcar.
In 1998, Sega manufactured the handcar-themed arcade game Magical Truck Adventure which the player controls by pumping a large handle.
Buster Keaton uses a handcar during a chase scene in the film The General; he also uses a powered draisine in The Railrodder.
In the Simpsons episode "500 Keys", Marge chases a toy handcar called the "Pooter Toot Express". The two figures pumping the car pass gas every time they pump.
In Reds (1981), John Reed, played by Warren Beatty, attempts to leave Russia via a velocipede but is detained by Finnish troops at the border.
In Thomas and Friends, Old Bailey uses a handcar in the episode "Haunted Henry" (Series 5, Episode 11). This handcar can also be seen in Series 6, 14, 15, 16, 22 and 23. A real-life handcar can also be seen in the "10 Years of Thomas and Friends" VHS on the Strasburg Railroad during a Day Out with Thomas event.
In Postman Pat, Pat, Jess and Ted use a handcar in the Special Delivery Service episode "A Wobbly Piano" so they could get to Greendale and deliver Lizzy's piano.
In a Dr. Seuss movie, Green Eggs and Ham, the Grumpy Guy escapes on the handcar in the rain.
In the Help! It's the Hair Bear Bunch episode Raffle Ruckus, the animals and keepers of the Wonderland Zoo use handcars when leaving the train they were on.
In the TV show Petticoat Junction, a handcar is shown in many episodes, whenever the Cannonball is not available to take the Hooterville Valley residents where they need to be.
In the Mr. Men Show episode "Trains and Planes", Mr. Bump and Miss Helpful use a handcar to deliver sleepers for the railway. Later at the end, Mr. Grumpy jumps on board their handcar, but it gets destroyed by Miss Whoops.
In the Lego Loco game intro, two minifigs are riding on a handcar, before becoming chased by a speeding train.
In The Good Place episode "Tinker, Tailor, Demon, Spy", the characters Michael and Jason begin a journey from The Good Place to The Bad Place on a handcar. In the following episode "Employee of the Bearimy" they complete the journey and later return to The Good Place on the handcar with Janet.
In Hell on Wheels episode Range War, the main character Cullen is approached on the railroad tracks by a man operating a handcar who brings with him a scalped head.
In Last of the Dogmen a handcar is used in a scene where a young, Native American boy is captured.
Handcars are featured in the western adventure game Red Dead Redemption 2. In one mission, protagonists Arthur Morgan and John Marston use a handcar to carry some dynamite onto a railway bridge they need to blow up. After planting the dynamite, they use the handcar again to escape an approaching train.
Racing
The Canadian Championship Handcar Races are held annually at the Palmerston Railway Heritage Museum (formerly the old Palmerston CNR station) in Palmerston, Ontario, Canada each June. These races began in 1992 and have been running since.
An annual handcar race, Dr. E. P. Kitty's Wunderkammer, featuring the Great Sonoma County Handcar Races (formerly known as The Hand-car Regatta), is held in the rail-yard in old downtown Santa Rosa, California. A multi-faceted festival, it centers around races of numerous widely varying human-powered vehicles operating on railroad tracks, including traditional hand-powered carts and others powered by pedals or pushing.
A similar race occurred in the nearby Northern California town of Willits, California, on Sept. 8 and 9, 2012.
Other races are held in Australia, some using preserved old handcars.
Advantages
Push trolleys have a major advantage over motorised trolleys as they do not require any traffic block and the inspecting officials can carry out inspections at their leisure.
Disadvantages
The push trolleys are a potential safety hazard as they occupy track (albeit temporarily) and, if the trolley is not removed from the track in time, it can collide with a train and cause an accident. Therefore, on sections with gradients or poor visibility, push trolleys are not allowed without traffic blocks.
Additional images
| Technology | Human-powered transport | null |
1545228 | https://en.wikipedia.org/wiki/Zeta%20Ophiuchi | Zeta Ophiuchi | Zeta Ophiuchi (ζ Oph, ζ Ophiuchi) is a single star located in the constellation of Ophiuchus. It has an apparent visual magnitude of 2.6, making it the third-brightest star in the constellation. Parallax measurements give an estimated distance of roughly from the Earth. It is surrounded by the Sh2-27 "Cobold" nebula, the star's bow shock as it ploughs through dense dust clouds near the Rho Ophiuchi cloud complex.
In April 2010, ζ Ophiuchi was occulted by asteroid 824 Anastasia.
Properties
ζ Ophiuchi is an enormous star with more than 20 times the Sun's mass and eight times its radius. The stellar classification of this star is O9.5 V, with the luminosity class of V indicating that it is generating energy in its core by the nuclear fusion of hydrogen. From Earth, the star's apparent effective temperature is 34,300 K, giving it the blue hue of an O-type star. However, since the star is rapidly rotating, the surface temperature varies across the star, from as high as 39,000 K at the poles to as low as 30,700 K at the equator. The projected rotational velocity may be as high as and it may be rotating at a rate of once per day, close to the velocity at which it would begin to break up.
This is a young star, with an age of only three million years. Its luminosity varies in a periodic manner similar to that of a Beta Cephei variable, with a dozen or more frequencies ranging from 1 to 10 cycles per day. In 1979, examination of the spectrum of this star found "moving bumps" in its helium line profiles. This feature has since been found in other stars, which have come to be called ζ Oph stars. These spectral properties are likely the result of non-radial pulsations.
This star is roughly halfway through the initial phase of its stellar evolution and will, within the next few million years, expand into a red supergiant star wider than the orbit of Jupiter before ending its life in a supernova explosion, leaving behind a neutron star or pulsar. From the Earth, a significant fraction of the light from this star is absorbed by interstellar dust, particularly at the blue end of the spectrum. In fact, were it not for this dust, ζ Ophiuchi would shine several times brighter and be among the very brightest stars visible. If the star's luminosity were not obscured, it would shine at magnitude 1.54, becoming the twenty-third brightest star in the night sky.
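For illustration, the magnitude figures above can be turned into a brightness ratio with the standard relation between magnitude difference and flux ratio; the short sketch below assumes only the two magnitudes quoted in this article.

# Minimal sketch: how many times brighter ζ Ophiuchi would appear without dust,
# using the standard relation flux_ratio = 10 ** (0.4 * magnitude_difference).
observed_mag = 2.6      # apparent visual magnitude as seen from Earth (quoted above)
unobscured_mag = 1.54   # magnitude it would have without interstellar extinction

flux_ratio = 10 ** (0.4 * (observed_mag - unobscured_mag))
print(f"Without dust the star would appear about {flux_ratio:.1f} times brighter")
# about 2.7 times brighter, consistent with "several times brighter"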
X-ray emissions have been detected from Zeta Ophiuchi that vary periodically. The net X-ray flux is estimated at . In the energy range of 0.5–10 keV, this flux varies by about 20% over a period of 0.77 days. This behavior may be the result of a magnetic field in the star. The measured average strength of the longitudinal field is about .
Bow shock
ζ Ophiuchi is moving through space with a peculiar velocity of 30 km s−1. Based upon the age and direction of motion of this star, it is a member of the Upper Scorpius sub-group of the Scorpius–Centaurus association of stars that share a common origin and space velocity. Such runaway stars may be ejected by dynamic interactions between three or four stars. However, in this case the star may be a former component of a binary star system in which the more massive primary was destroyed in a type II supernova explosion. It is possible that ζ Ophiuchi accreted mass from its companion before it was ejected. The pulsar PSR B1929+10 may be the leftover remnant of this supernova, as it too was ejected from the association with a velocity vector that fits the scenario.
Due to the high space velocity of Zeta Ophiuchi, in combination with high intrinsic brightness and its current location in a dust-rich area of the galaxy, the star is creating a bow-shock in the direction of motion. This shock has been made visible via NASA's Wide-field Infrared Survey Explorer. The formation of this bow shock can be explained by a mass loss rate of about times the mass of the Sun per year, which equals the mass of the Sun every nine million years.
Traditional names
ζ Ophiuchi was a member of indigenous Arabic asterism al-Nasaq al-Yamānī, "the Southern Line" of al-Nasaqān "the Two Lines", along with α Serpentis (Unukalhai), δ Ser (Qin, Tsin), ε Ser (Ba, Pa), δ Ophiuchi (Yed Prior), ε Oph (Yed Posterior) and γ Oph (Tsung Ching).
According to the catalogue of stars in the Technical Memorandum 33-507 – A Reduced Star Catalog Containing 537 Named Stars, al-Nasaq al-Yamānī or Nasak Yamani was the title for two stars: δ Serpentis as Nasak Yamani I and ε Ser as Nasak Yamani II (excluding this star, α Ser, δ Ophiuchi, ε Oph and γ Oph).
In Chinese, (), meaning Right Wall of Heavenly Market Enclosure, refers to an asterism representing eleven old states in China and marking the right borderline of the enclosure, consisting of ζ Ophiuchi, β Herculis, γ Herculis, κ Herculis, γ Serpentis, β Serpentis, α Serpentis, δ Serpentis, ε Serpentis, δ Ophiuchi and ε Ophiuchi. Consequently, the Chinese name for ζ Ophiuchi itself is (, ), representing the state Han (韓), together with 35 Capricorni in Twelve States (asterism).
| Physical sciences | Notable stars | Astronomy |
1545608 | https://en.wikipedia.org/wiki/Anaerobic%20digestion | Anaerobic digestion | Anaerobic digestion is a sequence of processes by which microorganisms break down biodegradable material in the absence of oxygen. The process is used for industrial or domestic purposes to manage waste or to produce fuels. Much of the fermentation used industrially to produce food and drink products, as well as home fermentation, uses anaerobic digestion.
Anaerobic digestion occurs naturally in some soils and in lake and oceanic basin sediments, where it is usually referred to as "anaerobic activity". This is the source of marsh gas methane as discovered by Alessandro Volta in 1776.
Anaerobic digestion comprises four stages:
Hydrolysis
Acidogenesis
Acetogenesis
Methanogenesis
The digestion process begins with bacterial hydrolysis of the input materials. Insoluble organic polymers, such as carbohydrates, are broken down to soluble derivatives that become available for other bacteria. Acidogenic bacteria then convert the sugars and amino acids into carbon dioxide, hydrogen, ammonia, and organic acids. In acetogenesis, bacteria convert these resulting organic acids into acetic acid, along with additional ammonia, hydrogen, and carbon dioxide amongst other compounds. Finally, methanogens convert these products to methane and carbon dioxide. The methanogenic archaea populations play an indispensable role in anaerobic wastewater treatments.
Anaerobic digestion is used as part of the process to treat biodegradable waste and sewage sludge. As part of an integrated waste management system, anaerobic digestion reduces the emission of landfill gas into the atmosphere. Anaerobic digesters can also be fed with purpose-grown energy crops, such as maize.
Anaerobic digestion is widely used as a source of renewable energy. The process produces a biogas, consisting of methane, carbon dioxide, and traces of other 'contaminant' gases. This biogas can be used directly as fuel, in combined heat and power gas engines or upgraded to natural gas-quality biomethane. The nutrient-rich digestate also produced can be used as fertilizer.
With the re-use of waste as a resource and new technological approaches that have lowered capital costs, anaerobic digestion has in recent years received increased attention among governments in a number of countries, among these the United Kingdom (2011), Germany, Denmark (2011), and the United States.
Process
Many microorganisms affect anaerobic digestion, including acetic acid-forming bacteria (acetogens) and methane-forming archaea (methanogens). These organisms promote a number of chemical processes in converting the biomass to biogas.
Gaseous oxygen is excluded from the reactions by physical containment. Anaerobes utilize electron acceptors from sources other than oxygen gas. These acceptors can be the organic material itself or may be supplied by inorganic oxides from within the input material. When the oxygen source in an anaerobic system is derived from the organic material itself, the 'intermediate' end products are primarily alcohols, aldehydes, and organic acids, plus carbon dioxide. In the presence of specialised methanogens, the intermediates are converted to the 'final' end products of methane, carbon dioxide, and trace levels of hydrogen sulfide. In an anaerobic system, the majority of the chemical energy contained within the starting material is released by methanogenic archaea as methane.
Populations of anaerobic microorganisms typically take a significant period of time to establish themselves to be fully effective. Therefore, common practice is to introduce anaerobic microorganisms from materials with existing populations, a process known as "seeding" the digesters, typically accomplished with the addition of sewage sludge or cattle slurry.
Process stages
The four key stages of anaerobic digestion involve hydrolysis, acidogenesis, acetogenesis and methanogenesis.
The overall process can be described by the chemical reaction, where organic material such as glucose is biochemically digested into carbon dioxide (CO2) and methane (CH4) by the anaerobic microorganisms.
C6H12O6 → 3CO2 + 3CH4
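As a worked illustration of this stoichiometry (an added sketch, not part of the source), the theoretical methane yield from glucose follows from the molar masses and the ideal molar gas volume:

# Illustrative calculation: theoretical methane yield from glucose
# via C6H12O6 -> 3 CO2 + 3 CH4, assuming ideal-gas volumes at 0 °C and 1 atm.
M_GLUCOSE = 180.16      # g/mol
MOLAR_VOLUME = 22.414   # L/mol at standard temperature and pressure

ch4_per_g = 3 * MOLAR_VOLUME / M_GLUCOSE
print(f"Theoretical CH4 yield: {ch4_per_g:.3f} L per gram of glucose")  # ~0.373
print("Biogas from glucose: 50% CH4 and 50% CO2 by moles")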
Hydrolysis
In most cases, biomass is made up of large organic polymers. For the bacteria in anaerobic digesters to access the energy potential of the material, these chains must first be broken down into their smaller constituent parts. These constituent parts, or monomers, such as sugars, are readily available to other bacteria. The process of breaking these chains and dissolving the smaller molecules into solution is called hydrolysis. Therefore, hydrolysis of these high-molecular-weight polymeric components is the necessary first step in anaerobic digestion. Through hydrolysis the complex organic molecules are broken down into simple sugars, amino acids, and fatty acids.
Acetate and hydrogen produced in the first stages can be used directly by methanogens. Other molecules, such as volatile fatty acids (VFAs) with a chain length greater than that of acetate must first be catabolised into compounds that can be directly used by methanogens.
Acidogenesis
The biological process of acidogenesis results in further breakdown of the remaining components by acidogenic (fermentative) bacteria. Here, VFAs are created, along with ammonia, carbon dioxide, and hydrogen sulfide, as well as other byproducts. The process of acidogenesis is similar to the way milk sours.
Acetogenesis
The third stage of anaerobic digestion is acetogenesis. Here, simple molecules created through the acidogenesis phase are further digested by acetogens to produce largely acetic acid, as well as carbon dioxide and hydrogen.
Methanogenesis
The terminal stage of anaerobic digestion is the biological process of methanogenesis. Here, methanogens use the intermediate products of the preceding stages and convert them into methane, carbon dioxide, and water. These components make up the majority of the biogas emitted from the system. Methanogenesis is sensitive to both high and low pHs and occurs between pH 6.5 and pH 8. The remaining, indigestible material the microbes cannot use and any dead bacterial remains constitute the digestate.
Configuration
Anaerobic digesters can be designed and engineered to operate using a number of different configurations and can be categorized into batch vs. continuous process mode, mesophilic vs. thermophilic temperature conditions, high vs. low portion of solids, and single-stage vs. multistage processes. A continuous process requires a more complex design, but it may still be more economical than a batch process, because a batch process requires more initial building money and a larger volume of digesters (spread across several batches) to handle the same amount of waste as a continuous process digester. More heat energy is required in a thermophilic system than in a mesophilic system, but the thermophilic system requires much less time and has a larger gas output capacity and higher methane content, so this trade-off has to be considered carefully. Regarding solids content, low-solids systems handle up to 15% solids; above this level is considered high solids content, also known as dry digestion. In a single-stage process, one reactor houses the four anaerobic digestion steps. A multistage process utilizes two or more reactors for digestion to separate the methanogenesis and hydrolysis phases.
Batch or continuous
Anaerobic digestion can be performed as a batch process or a continuous process. In a batch system, biomass is added to the reactor at the start of the process. The reactor is then sealed for the duration of the process. In its simplest form, batch processing needs inoculation with already processed material to start the anaerobic digestion. In a typical scenario, biogas production follows a normal distribution pattern over time. Operators can use this fact to determine when they believe the digestion of the organic matter has completed. There can be severe odour issues if a batch reactor is opened and emptied before the process is well completed. A more advanced type of batch approach has limited the odour issues by integrating anaerobic digestion with in-vessel composting. In this approach, inoculation takes place through the use of recirculated degasified percolate. After anaerobic digestion has completed, the biomass is kept in the reactor, which is then used for in-vessel composting before it is opened. As batch digestion is simple and requires less equipment and lower levels of design work, it is typically a cheaper form of digestion. Using more than one batch reactor at a plant can ensure constant production of biogas.
In continuous digestion processes, organic matter is constantly added (continuous complete mixed) or added in stages to the reactor (continuous plug flow; first in – first out). Here, the end products are constantly or periodically removed, resulting in constant production of biogas. A single or multiple digesters in sequence may be used. Examples of this form of anaerobic digestion include continuous stirred-tank reactors, upflow anaerobic sludge blankets, expanded granular sludge beds, and internal circulation reactors.
Temperature
The two conventional operational temperature levels for anaerobic digesters determine the species of methanogens in the digesters:
Mesophilic digestion takes place optimally around 30 to 38 °C, or at ambient temperatures between 20 and 45 °C, where mesophiles are the primary microorganisms present.
Thermophilic digestion takes place optimally around 49 to 57 °C, or at elevated temperatures up to 70 °C, where thermophiles are the primary microorganisms present.
A limit case has been reached in Bolivia, with anaerobic digestion operating at working temperatures of less than 10 °C. At such temperatures the anaerobic process is very slow, taking more than three times as long as the normal mesophilic process. In experimental work at the University of Alaska Fairbanks, a 1,000-litre digester using psychrophiles harvested from "mud from a frozen lake in Alaska" has produced 200–300 litres of methane per day, about 20 to 30% of the output from digesters in warmer climates. Mesophilic species outnumber thermophiles, and they are also more tolerant of changes in environmental conditions than thermophiles. Mesophilic systems are therefore considered to be more stable than thermophilic digestion systems. In contrast, while thermophilic digestion systems are considered less stable, their energy input is higher, with more biogas being removed from the organic matter in an equal amount of time. The increased temperatures facilitate faster reaction rates, and thus faster gas yields. Operation at higher temperatures also facilitates greater pathogen reduction of the digestate. In countries where legislation, such as the Animal By-Products Regulations in the European Union, requires digestate to meet certain levels of pathogen reduction, there may be a benefit to using thermophilic rather than mesophilic temperatures.
Additional pre-treatment can be used to reduce the necessary retention time to produce biogas. For example, certain processes shred the substrates to increase the surface area or use a thermal pretreatment stage (such as pasteurisation) to significantly enhance the biogas output. The pasteurisation process can also be used to reduce the pathogenic concentration in the digestate, leaving the anaerobic digester. Pasteurisation may be achieved by heat treatment combined with maceration of the solids.
Solids content
In a typical scenario, three different operational parameters are associated with the solids content of the feedstock to the digesters:
High solids (dry—stackable substrate)
High solids (wet—pumpable substrate)
Low solids (wet—pumpable substrate)
High solids (dry) digesters are designed to process materials with a solids content between 25 and 40%. Unlike wet digesters that process pumpable slurries, high solids (dry – stackable substrate) digesters are designed to process solid substrates without the addition of water. The primary styles of dry digesters are continuous vertical plug flow and batch tunnel horizontal digesters. Continuous vertical plug flow digesters are upright, cylindrical tanks where feedstock is continuously fed into the top of the digester, and flows downward by gravity during digestion. In batch tunnel digesters, the feedstock is deposited in tunnel-like chambers with a gas-tight door. Neither approach has mixing inside the digester. The amount of pretreatment, such as contaminant removal, depends both upon the nature of the waste streams being processed and the desired quality of the digestate. Size reduction (grinding) is beneficial in continuous vertical systems, as it accelerates digestion, while batch systems avoid grinding and instead require structure (e.g. yard waste) to reduce compaction of the stacked pile. Continuous vertical dry digesters have a smaller footprint due to the shorter effective retention time and vertical design. Wet digesters can be designed to operate in either a high-solids content, with a total suspended solids (TSS) concentration greater than ~20%, or a low-solids concentration less than ~15%.
High solids (wet) digesters process a thick slurry that requires more energy input to move and process the feedstock. The thickness of the material may also lead to associated problems with abrasion. High solids digesters will typically have a lower land requirement due to the lower volumes associated with the moisture. High solids digesters also require correction of conventional performance calculations (e.g. gas production, retention time, kinetics, etc.) originally based on very dilute sewage digestion concepts, since larger fractions of the feedstock mass are potentially convertible to biogas.
Low solids (wet) digesters can transport material through the system using standard pumps that require significantly lower energy input. Low solids digesters require a larger amount of land than high solids due to the increased volumes associated with the increased liquid-to-feedstock ratio of the digesters. There are benefits associated with operation in a liquid environment, as it enables more thorough circulation of materials and contact between the bacteria and their food. This enables the bacteria to more readily access the substances on which they are feeding, and increases the rate of gas production.
Complexity
Digestion systems can be configured with different levels of complexity. In a single-stage digestion system (one-stage), all of the biological reactions occur within a single, sealed reactor or holding tank. Using a single stage reduces construction costs, but results in less control of the reactions occurring within the system. Acidogenic bacteria, through the production of acids, reduce the pH of the tank. Methanogenic archaea, as outlined earlier, operate in a strictly defined pH range. Therefore, the biological reactions of the different species in a single-stage reactor can be in direct competition with each other. Another one-stage reaction system is an anaerobic lagoon. These lagoons are pond-like, earthen basins used for the treatment and long-term storage of manures. Here the anaerobic reactions are contained within the natural anaerobic sludge contained in the pool.
In a two-stage digestion system (multistage), different digestion vessels are optimised to bring maximum control over the bacterial communities living within the digesters. Acidogenic bacteria produce organic acids and more quickly grow and reproduce than methanogenic archaea. Methanogenic archaea require stable pH and temperature to optimise their performance.
Under typical circumstances, hydrolysis, acetogenesis, and acidogenesis occur within the first reaction vessel. The organic material is then heated to the required operational temperature (either mesophilic or thermophilic) prior to being pumped into a methanogenic reactor. The initial hydrolysis or acidogenesis tanks prior to the methanogenic reactor can provide a buffer to the rate at which feedstock is added. Some European countries require a degree of elevated heat treatment to kill harmful bacteria in the input waste. In this instance, there may be a pasteurisation or sterilisation stage prior to digestion or between the two digestion tanks. Notably, it is not possible to completely isolate the different reaction phases, and often some biogas is produced in the hydrolysis or acidogenesis tanks.
Residence time
The residence time in a digester varies with the amount and type of feed material and with the configuration of the digestion system. In a typical two-stage mesophilic digestion, residence time varies between 15 and 40 days, while for a single-stage thermophilic digestion, the residence time is normally shorter, at around 14 days. The plug-flow nature of some of these systems means the full degradation of the material may not have been realised in this timescale. In this event, digestate exiting the system will be darker in colour and will typically have more odour.
In the case of an upflow anaerobic sludge blanket digestion (UASB), hydraulic residence times can be as short as 1 hour to 1 day, and solid retention times can be up to 90 days. In this manner, a UASB system is able to separate solids and hydraulic retention times with the use of a sludge blanket. Continuous digesters have mechanical or hydraulic devices, depending on the level of solids in the material, to mix the contents, enabling the bacteria and the food to be in contact. They also allow excess material to be continuously extracted to maintain a reasonably constant volume within the digestion tanks.
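The separation of hydraulic and solids retention times mentioned above can be made concrete with two simple definitions: the hydraulic retention time is the reactor volume divided by the liquid flow rate, while the solids retention time is the solids inventory divided by the rate at which solids are removed. The sketch below uses purely hypothetical example values for illustration.

# Hypothetical example: hydraulic vs. solids retention time in a UASB-style reactor.
# All figures are illustrative assumptions, not design values from the source.
reactor_volume_m3 = 500.0         # working volume of the reactor
flow_m3_per_day = 2000.0          # wastewater flow through the reactor
sludge_inventory_kg = 15000.0     # solids held in the sludge blanket
sludge_wasted_kg_per_day = 200.0  # solids leaving with effluent and deliberate wasting

hrt_days = reactor_volume_m3 / flow_m3_per_day
srt_days = sludge_inventory_kg / sludge_wasted_kg_per_day
print(f"HRT = {hrt_days * 24:.1f} hours, SRT = {srt_days:.0f} days")
# HRT = 6.0 hours, SRT = 75 days: the solids stay far longer than the liquid.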
Pressure
A recent development in anaerobic reactor design is high-pressure anaerobic digestion (HPAD), also referred to as autogenerative high-pressure digestion (AHPD). This technique produces a biogas with an elevated methane content. Under pressure, the carbon dioxide produced dissolves into the water phase to a greater extent than methane does, so the biogas produced is richer in methane. Research at the University of Groningen demonstrated that the composition of the bacterial community changes under the influence of pressure. Individual bacterial species have optimum conditions in which they grow and replicate fastest. Commonly considered parameters are pH, temperature, salinity and the like, but pressure is also one of them. Some species have adapted to life in the deep oceans, where pressure is much higher than at sea level. This makes it possible, in a similar vein to other process parameters such as temperature, retention time and pH, to influence the anaerobic digestion process through pressure.
Inhibition
The anaerobic digestion process can be inhibited by several compounds, affecting one or more of the bacterial groups responsible for the different organic matter degradation steps. The degree of the inhibition depends, among other factors, on the concentration of the inhibitor in the digester. Potential inhibitors are ammonia, sulfide, light metal ions (Na, K, Mg, Ca, Al), heavy metals, some organics (chlorophenols, halogenated aliphatics, N-substituted aromatics, long chain fatty acids), etc.
Total ammonia nitrogen (TAN) has been shown to inhibit the production of methane. Furthermore, it destabilises the microbial community, impacting the synthesis of acetic acid, one of the driving forces in methane production. At TAN concentrations in excess of 5000 mg/L, pH adjustment is needed to keep the reaction stable. A TAN concentration above 1700–1800 mg/L inhibits methane production, and yield decreases further at greater TAN concentrations. High TAN concentrations cause the reaction to turn acidic and lead to a domino effect of inhibition. Total ammonia nitrogen is the combination of free ammonia and ionized ammonia. TAN is produced through the degradation of material high in nitrogen, typically proteins, and will naturally build up during anaerobic digestion, depending on the organic feedstock fed to the system. In typical wastewater treatment practice, TAN reduction is achieved via nitrification, an aerobic process in which TAN is consumed by aerobic heterotrophic bacteria. These bacteria release nitrate and nitrite, which are later converted to nitrogen gas through the denitrification process. Hydrolysis and acidogenesis can also be affected by the TAN concentration. In mesophilic conditions, inhibition of hydrolysis was found to occur at 5500 mg/L TAN, while acidogenesis inhibition occurs at 6500 mg/L TAN.
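A minimal screening sketch based on the thresholds quoted above is shown below; the cut-off values are simply those reported in this section, apply to mesophilic conditions, and should not be read as an operational guideline.

# Screening sketch using the TAN thresholds quoted above (mesophilic conditions).
# Not an operational guideline; thresholds vary with pH, temperature and acclimation.
def tan_screening(tan_mg_per_l: float) -> str:
    if tan_mg_per_l >= 6500:
        return "acidogenesis inhibition likely"
    if tan_mg_per_l >= 5500:
        return "hydrolysis inhibition likely"
    if tan_mg_per_l >= 5000:
        return "pH adjustment needed to keep the reaction stable"
    if tan_mg_per_l >= 1700:
        return "methane production inhibited; yield decreases"
    return "below the reported inhibition thresholds"

print(tan_screening(1200))   # below the reported inhibition thresholds
print(tan_screening(2500))   # methane production inhibited; yield decreases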
Feedstocks
The most important initial issue when considering the application of anaerobic digestion systems is the feedstock to the process. Almost any organic material can be processed with anaerobic digestion; however, if biogas production is the aim, the level of putrescibility is the key factor in its successful application. The more putrescible (digestible) the material, the higher the gas yields possible from the system.
Feedstocks can include biodegradable waste materials, such as waste paper, grass clippings, leftover food, sewage, and animal waste. Woody wastes are the exception, because they are largely unaffected by digestion, as most anaerobes are unable to degrade lignin. Xylophagous anaerobes (lignin consumers) or high temperature pretreatment, such as pyrolysis, can be used to break lignin down. Anaerobic digesters can also be fed with specially grown energy crops, such as silage, for dedicated biogas production. In Germany and continental Europe, these facilities are referred to as "biogas" plants. A codigestion or cofermentation plant is typically an agricultural anaerobic digester that accepts two or more input materials for simultaneous digestion.
The length of time required for anaerobic digestion depends on the chemical complexity of the material. Material rich in easily digestible sugars breaks down quickly, whereas intact lignocellulosic material rich in cellulose and hemicellulose polymers can take much longer to break down. Anaerobic microorganisms are generally unable to break down lignin, the recalcitrant aromatic component of biomass.
Anaerobic digesters were originally designed for operation using sewage sludge and manures. Sewage and manure are not, however, the material with the most potential for anaerobic digestion, as the biodegradable material has already had much of the energy content taken out by the animals that produced it. Therefore, many digesters operate with codigestion of two or more types of feedstock. For example, in a farm-based digester that uses dairy manure as the primary feedstock, the gas production may be significantly increased by adding a second feedstock, e.g., grass and corn (typical on-farm feedstock), or various organic byproducts, such as slaughterhouse waste, fats, oils and grease from restaurants, organic household waste, etc. (typical off-site feedstock).
Digesters processing dedicated energy crops can achieve high levels of degradation and biogas production. Slurry-only systems are generally cheaper, but generate far less energy than those using crops, such as maize and grass silage; by using a modest amount of crop material (30%), an anaerobic digestion plant can increase energy output tenfold for only three times the capital cost, relative to a slurry-only system.
Moisture content
A second consideration related to the feedstock is moisture content. Drier, stackable substrates, such as food and yard waste, are suitable for digestion in tunnel-like chambers. Tunnel-style systems typically have near-zero wastewater discharge, as well, so this style of system has advantages where the discharge of digester liquids are a liability. The wetter the material, the more suitable it will be to handling with standard pumps instead of energy-intensive concrete pumps and physical means of movement. Also, the wetter the material, the more volume and area it takes up relative to the levels of gas produced. The moisture content of the target feedstock will also affect what type of system is applied to its treatment. To use a high-solids anaerobic digester for dilute feedstocks, bulking agents, such as compost, should be applied to increase the solids content of the input material. Another key consideration is the carbon:nitrogen ratio of the input material. This ratio is the balance of food a microbe requires to grow; the optimal C:N ratio is 20–30:1. Excess N can lead to ammonia inhibition of digestion.
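To illustrate how the carbon-to-nitrogen balance of a blended feedstock can be estimated, the sketch below divides the total carbon in the mix by the total nitrogen; the per-feedstock carbon and nitrogen contents are hypothetical figures chosen for the example, not values from the source.

# Hypothetical example: C:N ratio of a feedstock blend.
# The per-feedstock carbon and nitrogen fractions below are illustrative assumptions.
feedstocks = [
    # (name, mass in kg, carbon fraction of dry matter, nitrogen fraction of dry matter)
    ("dairy slurry", 1000.0, 0.35, 0.025),   # C:N of roughly 14
    ("maize silage", 800.0, 0.45, 0.012),    # C:N of roughly 38
]

total_c = sum(mass * c for _, mass, c, n in feedstocks)
total_n = sum(mass * n for _, mass, c, n in feedstocks)
print(f"Blended C:N ratio = {total_c / total_n:.1f}:1")
# about 20.5:1, near the lower end of the 20-30:1 optimum quoted above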
Contamination
The level of contamination of the feedstock material is a key consideration when using wet digestion or plug-flow digestion.
If the feedstock to the digesters has significant levels of physical contaminants, such as plastic, glass, or metals, then processing to remove the contaminants will be required for the material to be used. If they are not removed, the digesters can be blocked and will not function efficiently. This contamination issue does not occur with dry digestion or solid-state anaerobic digestion (SSAD) plants, since SSAD handles dry, stackable biomass with a high percentage of solids (40–60%) in gas-tight chambers called fermenter boxes. It is with this understanding that mechanical biological treatment plants are designed. The higher the level of pretreatment a feedstock requires, the more processing machinery will be required, and, hence, the project will have higher capital costs.
After sorting or screening to remove any physical contaminants from the feedstock, the material is often shredded, minced, and mechanically or hydraulically pulped to increase the surface area available to microbes in the digesters and, hence, increase the speed of digestion. The maceration of solids can be achieved by using a chopper pump to transfer the feedstock material into the airtight digester, where anaerobic treatment takes place.
Substrate composition
Substrate composition is a major factor in determining the methane yield and methane production rates from the digestion of biomass. Techniques to determine the compositional characteristics of the feedstock are available, and analyses such as total solids, elemental composition, and organic content are important for digester design and operation. Methane yield can be estimated from the elemental composition of the substrate along with an estimate of its degradability (the fraction of the substrate that is converted to biogas in a reactor). In order to predict biogas composition (the relative fractions of methane and carbon dioxide) it is necessary to estimate carbon dioxide partitioning between the aqueous and gas phases, which requires additional information (reactor temperature, pH, and substrate composition) and a chemical speciation model. Direct measurements of biomethanation potential are also made using gas evolution or more recent gravimetric assays.
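A common way to make such an estimate is Buswell-type stoichiometry. The relation below is the widely cited Buswell–Boyle equation, quoted here as general background rather than from this article; it converts an elemental formula C_aH_bO_cN_d into theoretical methane and carbon dioxide yields, which are then scaled by the degradable fraction of the substrate:

$$\mathrm{C}_a\mathrm{H}_b\mathrm{O}_c\mathrm{N}_d + \left(a - \tfrac{b}{4} - \tfrac{c}{2} + \tfrac{3d}{4}\right)\mathrm{H_2O} \rightarrow \left(\tfrac{a}{2} + \tfrac{b}{8} - \tfrac{c}{4} - \tfrac{3d}{8}\right)\mathrm{CH_4} + \left(\tfrac{a}{2} - \tfrac{b}{8} + \tfrac{c}{4} + \tfrac{3d}{8}\right)\mathrm{CO_2} + d\,\mathrm{NH_3}$$

The corresponding theoretical (upper-bound) methane potential per gram of volatile solids is

$$B_{th} = \frac{22{,}400\left(\tfrac{a}{2} + \tfrac{b}{8} - \tfrac{c}{4} - \tfrac{3d}{8}\right)}{12a + b + 16c + 14d}\ \ \mathrm{mL\ CH_4\ g^{-1}\ VS},$$

with real digesters yielding only the degradable fraction of this value.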
Applications
Using anaerobic digestion technologies can help to reduce the emission of greenhouse gases in a number of key ways:
Replacement of fossil fuels
Reducing or eliminating the energy footprint of waste treatment plants
Reducing methane emission from landfills
Displacing industrially produced chemical fertilizers
Reducing vehicle movements
Reducing electrical grid transportation losses
Reducing usage of LP Gas for cooking
Serving as an important component of Zero Waste initiatives.
Waste and wastewater treatment
Anaerobic digestion is particularly suited to organic material, and is commonly used for industrial effluent, wastewater and sewage sludge treatment. Anaerobic digestion, a simple process, can greatly reduce the amount of organic matter which might otherwise be destined to be dumped at sea, dumped in landfills, or burnt in incinerators.
Pressure from environmentally related legislation on solid waste disposal methods in developed countries has increased the application of anaerobic digestion as a process for reducing waste volumes and generating useful byproducts. It may either be used to process the source-separated fraction of municipal waste or alternatively combined with mechanical sorting systems, to process residual mixed municipal waste. These facilities are called mechanical biological treatment plants.
If the putrescible waste processed in anaerobic digesters were disposed of in a landfill, it would break down naturally and often anaerobically. In this case, the gas will eventually escape into the atmosphere. As methane is about 20 times more potent as a greenhouse gas than carbon dioxide, this has significant negative environmental effects.
In countries that collect household waste, the use of local anaerobic digestion facilities can help to reduce the amount of waste that requires transportation to centralized landfill sites or incineration facilities. This reduced burden on transportation reduces carbon emissions from the collection vehicles. If localized anaerobic digestion facilities are embedded within an electrical distribution network, they can help reduce the electrical losses associated with transporting electricity over a national grid.
Anaerobic digestion can be used for the remediation of sewage sludge polluted with PFAS. A 2024 study showed that anaerobic digestion, combined with adsorption on activated carbon and the application of a voltage, can remove up to 61% of PFAS from sewage sludge.
Power generation
In developing countries, simple home and farm-based anaerobic digestion systems offer the potential for low-cost energy for cooking and lighting.
From 1975, China and India have both had large, government-backed schemes for the adoption of small biogas plants for household cooking and lighting. At present, projects for anaerobic digestion in the developing world can gain financial support through the United Nations Clean Development Mechanism if they are able to show they provide reduced carbon emissions.
Methane and power produced in anaerobic digestion facilities can be used to replace energy derived from fossil fuels, and hence reduce emissions of greenhouse gases, because the carbon in biodegradable material is part of a carbon cycle. The carbon released into the atmosphere from the combustion of biogas has been removed by plants for them to grow in the recent past, usually within the last decade, but more typically within the last growing season. If the plants are regrown, taking the carbon out of the atmosphere once more, the system will be carbon neutral. In contrast, carbon in fossil fuels has been sequestered in the earth for many millions of years, the combustion of which increases the overall levels of carbon dioxide in the atmosphere. Power generation through anaerobic digesters is best suited to large-scale operations, rather than small farms, as large operations have the volume of manure that is able to make the systems financially viable.
Biogas from sewage sludge treatment is sometimes used to run a gas engine to produce electrical power, some or all of which can be used to run the sewage works. Some waste heat from the engine is then used to heat the digester. The waste heat is, in general, enough to heat the digester to the required temperatures. The power potential from sewage works is limited – in the UK, there are about 80 MW total of such generation, with the potential to increase to 150 MW, which is insignificant compared to the average power demand in the UK of about 35,000 MW. The scope for biogas generation from nonsewage waste biological matter – energy crops, food waste, abattoir waste, etc. - is much higher, estimated to be capable of about 3,000 MW. Farm biogas plants using animal waste and energy crops are expected to contribute to reducing CO2 emissions and strengthen the grid, while providing UK farmers with additional revenues.
Some countries offer incentives in the form of, for example, feed-in tariffs for feeding electricity onto the power grid to subsidize green energy production.
In Oakland, California at the East Bay Municipal Utility District's main wastewater treatment plant (EBMUD), food waste is currently codigested with primary and secondary municipal wastewater solids and other high-strength wastes. Compared to municipal wastewater solids digestion alone, food waste codigestion has many benefits. Anaerobic digestion of food waste pulp from the EBMUD food waste process provides a higher normalized energy benefit, compared to municipal wastewater solids: 730 to 1,300 kWh per dry ton of food waste applied compared to 560 to 940 kWh per dry ton of municipal wastewater solids applied.
Grid injection
Biogas grid-injection is the injection of biogas into the natural gas grid. The raw biogas must first be upgraded to biomethane. This upgrading involves the removal of contaminants such as hydrogen sulphide or siloxanes, as well as the carbon dioxide. Several technologies are available for this purpose, the most widely implemented being pressure swing adsorption (PSA), water or amine scrubbing (absorption processes) and, in recent years, membrane separation. As an alternative, the electricity and the heat can be used for on-site generation, resulting in a reduction of losses in the transportation of energy. Typical energy losses in natural gas transmission systems range from 1–2%, whereas the current energy losses on a large electrical system range from 5–8%.
In October 2010, Didcot Sewage Works became the first in the UK to produce biomethane gas supplied to the national grid, for use in up to 200 homes in Oxfordshire. By 2017, the UK electricity firm Ecotricity planned to have a digester fed by locally sourced grass fuelling 6,000 homes.
Vehicle fuel
After upgrading with the above-mentioned technologies, the biogas (transformed into biomethane) can be used as vehicle fuel in adapted vehicles. This use is very extensive in Sweden, where over 38,600 gas vehicles exist, and 60% of the vehicle gas is biomethane generated in anaerobic digestion plants.
Fertiliser and soil conditioner
The solid, fibrous component of the digested material can be used as a soil conditioner to increase the organic content of soils. Digester liquor can be used as a fertiliser to supply vital nutrients to soils instead of chemical fertilisers that require large amounts of energy to produce and transport. The use of manufactured fertilisers is, therefore, more carbon-intensive than the use of anaerobic digester liquor fertiliser. In countries such as Spain, where many soils are organically depleted, the markets for the digested solids can be equally as important as the biogas.
Cooking gas
Cooking gas can be generated using a bio-digester, which hosts the microorganisms required for decomposition. Organic waste such as fallen leaves, kitchen waste, and food waste is fed into a crusher unit, where it is mixed with a small amount of water. The mixture is then fed into the bio-digester, where microorganisms, including methanogenic archaea, decompose it to produce cooking gas. This gas is piped to the kitchen stove. A 2 cubic meter bio-digester can produce 2 cubic meters of cooking gas, which is roughly equivalent to 1 kg of LPG. A further advantage of using a bio-digester is the residual sludge, which is a rich organic manure.
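As a rough plausibility check on the figures above, the following sketch compares the energy in 2 cubic meters of biogas with that of 1 kg of LPG, using an assumed methane fraction of 60% and approximate heating values (about 10 kWh per cubic meter of methane and about 12.8 kWh per kilogram of LPG); these are assumptions for illustration, not values taken from this article.

```python
# Rough energy-equivalence check for household biogas vs. LPG.
# All figures below are approximate, assumed values for illustration.
biogas_volume_m3 = 2.0          # daily biogas output quoted in the text
methane_fraction = 0.60         # assumed CH4 content of raw biogas
lhv_methane_kwh_per_m3 = 9.97   # approximate lower heating value of methane
lhv_lpg_kwh_per_kg = 12.8       # approximate lower heating value of LPG

biogas_energy_kwh = biogas_volume_m3 * methane_fraction * lhv_methane_kwh_per_m3
lpg_equivalent_kg = biogas_energy_kwh / lhv_lpg_kwh_per_kg

print(f"Biogas energy:  {biogas_energy_kwh:.1f} kWh")   # about 12 kWh
print(f"LPG equivalent: {lpg_equivalent_kg:.2f} kg")    # roughly 1 kg, as stated
```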
Products
The three principal products of anaerobic digestion are biogas, digestate, and water.
Biogas
Biogas is the ultimate waste product of the microbes feeding off the input biodegradable feedstock (the methanogenesis stage of anaerobic digestion is performed by archaea, micro-organisms on a distinctly different branch of the phylogenetic tree of life to bacteria), and is mostly methane and carbon dioxide, with a small amount of hydrogen and trace hydrogen sulfide. (As produced, biogas also contains water vapor, with the fractional water vapor volume a function of biogas temperature.) Most of the biogas is produced during the middle of the digestion, after the bacterial population has grown, and tapers off as the putrescible material is exhausted. The gas is normally stored on top of the digester in an inflatable gas bubble or extracted and stored next to the facility in a gas holder.
The methane in biogas can be burned to produce both heat and electricity, usually with a reciprocating engine or microturbine often in a cogeneration arrangement where the electricity and waste heat generated are used to warm the digesters or to heat buildings. Excess electricity can be sold to suppliers or put into the local grid. Electricity produced by anaerobic digesters is considered to be renewable energy and may attract subsidies. Biogas does not contribute to increasing atmospheric carbon dioxide concentrations because the gas is not released directly into the atmosphere and the carbon dioxide comes from an organic source with a short carbon cycle.
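To make the cogeneration arithmetic concrete, the sketch below converts a raw biogas flow into electrical and recoverable heat output. The flow rate, methane fraction, heating value, and efficiencies are assumptions chosen for illustration (typical published gas-engine electrical efficiencies are in the region of 35–40%), not figures from this article.

```python
# Sketch: electrical and heat output of a biogas CHP (cogeneration) engine.
# Input values are assumptions chosen for illustration only.
biogas_flow_m3_per_h = 250      # raw biogas flow to the engine
methane_fraction = 0.55         # assumed CH4 content of the biogas
lhv_methane_kwh_per_m3 = 9.97   # approximate lower heating value of methane
eta_electrical = 0.38           # assumed gas-engine electrical efficiency
eta_thermal = 0.45              # assumed recoverable heat fraction

fuel_power_kw = biogas_flow_m3_per_h * methane_fraction * lhv_methane_kwh_per_m3
print(f"Fuel input:        {fuel_power_kw:.0f} kW")
print(f"Electrical output: {fuel_power_kw * eta_electrical:.0f} kWe")
print(f"Recoverable heat:  {fuel_power_kw * eta_thermal:.0f} kWth (e.g. digester heating)")
```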
Biogas may require treatment or 'scrubbing' to refine it for use as a fuel. Hydrogen sulfide, a toxic product formed from sulfates in the feedstock, is released as a trace component of the biogas. National environmental enforcement agencies, such as the U.S. Environmental Protection Agency or the English and Welsh Environment Agency, put strict limits on the levels of gases containing hydrogen sulfide, and, if the levels of hydrogen sulfide in the gas are high, gas scrubbing and cleaning equipment (such as amine gas treating) will be needed to process the biogas to within regionally accepted levels. Alternatively, the addition of ferrous chloride FeCl2 to the digestion tanks inhibits hydrogen sulfide production.
Volatile siloxanes can also contaminate the biogas; such compounds are frequently found in household waste and wastewater. In digestion facilities accepting these materials as a component of the feedstock, low-molecular-weight siloxanes volatilise into biogas. When this gas is combusted in a gas engine, turbine, or boiler, siloxanes are converted into silicon dioxide (SiO2), which deposits internally in the machine, increasing wear and tear. Practical and cost-effective technologies to remove siloxanes and other biogas contaminants are available at the present time. In certain applications, in situ treatment can be used to increase the methane purity by reducing the offgas carbon dioxide content, purging the majority of it in a secondary reactor.
In countries such as Switzerland, Germany, and Sweden, the methane in the biogas may be compressed for use as a vehicle transportation fuel or injected directly into the gas mains. In countries where the driver for the use of anaerobic digestion is renewable electricity subsidies, this route of treatment is less likely, as energy is required in this processing stage, reducing the overall amount available to sell.
Digestate
Digestate is the solid remnants of the original input material to the digesters that the microbes cannot use. It also consists of the mineralised remains of the dead bacteria from within the digesters. Digestate can come in three forms: fibrous, liquor, or a sludge-based combination of the two fractions. In two-stage systems, different forms of digestate come from different digestion tanks. In single-stage digestion systems, the two fractions will be combined and, if desired, separated by further processing.
The second byproduct (acidogenic digestate) is a stable, organic material consisting largely of lignin and cellulose, but also of a variety of mineral components in a matrix of dead bacterial cells; some plastic may be present. The material resembles domestic compost and can be used as such or to make low-grade building products, such as fibreboard.
The solid digestate can also be used as feedstock for ethanol production.
The third byproduct is a liquid (methanogenic digestate) rich in nutrients, which can be used as a fertiliser, depending on the quality of the material being digested. Levels of potentially toxic elements (PTEs) should be chemically assessed. This will depend upon the quality of the original feedstock. In the case of most clean and source-separated biodegradable waste streams, the levels of PTEs will be low. In the case of wastes originating from industry, the levels of PTEs may be higher and will need to be taken into consideration when determining a suitable end use for the material.
Digestate typically contains elements, such as lignin, that cannot be broken down by the anaerobic microorganisms. Also, the digestate may contain ammonia that is phytotoxic, and may hamper the growth of plants if it is used as a soil-improving material. For these two reasons, a maturation or composting stage may be employed after digestion. Lignin and other materials are available for degradation by aerobic microorganisms, such as fungi, helping reduce the overall volume of the material for transport. During this maturation, the ammonia will be oxidized into nitrates, improving the fertility of the material and making it more suitable as a soil improver. Large composting stages are typically used by dry anaerobic digestion technologies.
Wastewater
The final output from anaerobic digestion systems is water, which originates both from the moisture content of the original waste that was treated and water produced during the microbial reactions in the digestion systems. This water may be released from the dewatering of the digestate or may be implicitly separate from the digestate.
The wastewater exiting the anaerobic digestion facility will typically have elevated levels of biochemical oxygen demand (BOD) and chemical oxygen demand (COD). These measures of the reactivity of the effluent indicate an ability to pollute. Some of this material is termed 'hard COD', meaning it cannot be accessed by the anaerobic bacteria for conversion into biogas. If this effluent were put directly into watercourses, it would negatively affect them by causing eutrophication. As such, further treatment of the wastewater is often required. This treatment will typically be an oxidation stage wherein air is passed through the water in sequencing batch reactors or a reverse osmosis unit.
History
Reported scientific interest in the manufacturing of gas produced by the natural decomposition of organic matter dates from the 17th century, when Robert Boyle (1627-1691) and Stephen Hales (1677-1761) noted that disturbing the sediment of streams and lakes released flammable gas. In 1778, the Italian physicist Alessandro Volta (1745-1827), the father of electrochemistry, scientifically identified that gas as methane.
In 1808 Sir Humphry Davy proved the presence of methane in the gases produced by cattle manure. The first known anaerobic digester was built in 1859 at a leper colony in Bombay in India. In 1895, the technology was developed in Exeter, England, where a septic tank was used to generate gas for the sewer gas destructor lamp, a type of gas lighting. Also in England, in 1904, the first dual-purpose tank for both sedimentation and sludge treatment was installed in Hampton, London.
By the early 20th century, anaerobic digestion systems began to resemble the technology as it appears today. In 1906, Karl Imhoff created the Imhoff tank, an early form of anaerobic digester and a model wastewater treatment system throughout the early 20th century. After 1920, closed tank systems began to replace the previously common use of anaerobic lagoons – covered earthen basins used to treat volatile solids. Research on anaerobic digestion began in earnest in the 1930s.
Around the time of World War I, production from biofuels slowed as petroleum production increased and its uses were identified. While fuel shortages during World War II re-popularized anaerobic digestion, interest in the technology decreased again after the war ended. Similarly, the 1970s energy crisis sparked interest in anaerobic digestion. In addition to high energy prices, factors affecting the adoption of anaerobic digestion systems include receptivity to innovation, pollution penalties, policy incentives, and the availability of subsidies and funding opportunities.
Modern geographical distribution
Today, anaerobic digesters are commonly found alongside farms to reduce nitrogen run-off from manure, or wastewater treatment facilities to reduce the costs of sludge disposal. Agricultural anaerobic digestion for energy production has become most popular in Germany, where there were 8,625 digesters in 2014. In the United Kingdom, there were 259 facilities by 2014, and 500 projects planned to become operational by 2019. In the United States, there were 191 operational plants across 34 states in 2012. Policy may explain why adoption rates are so different across these countries.
Feed-in tariffs in Germany were enacted in 1991, also known as FIT, providing long-term contracts compensating investments in renewable energy generation. Consequently, between 1991 and 1998 the number of anaerobic digester plants in Germany grew from 20 to 517. In the late 1990s, energy prices in Germany varied and investors became unsure of the market's potential. The German government responded by amending FIT four times between 2000 and 2011, increasing tariffs and improving the profitability of anaerobic digestion, and resulting in reliable returns for biogas production and continued high adoption rates across the country.
Incidents involving digesters
Anaerobic digesters have caused fish kills (e.g. on the River Mole in Devon, the River Teifi, and the Afon Llynfi) and loss of human life (e.g. the Avonmouth explosion).
There have been explosions of anaerobic digesters in the US, including at Pixelle Specialty Solutions' Androscoggin Mill in Jay, Maine; a Kamyr digester explosion at Pensacola (Cantonment) on 22 January 2017; an EPDM membrane failure at Aumsville, Oregon, in March 2013; an incident in Pennsylvania on February 6, 1987, in which two workers re-draining a sewage digester at a wastewater treatment plant were killed instantly when an explosion lifted the 30-ton floating cover; and at the Southwest Wastewater Treatment Plant in Springfield, Missouri. In the UK there have been incidents at, for example, Avonmouth and Harper Adams College, Newport, Shropshire. In Europe, there were about 800 accidents at biogas plants between 2005 and 2015, e.g. in France (Saint-Fargeau), though few of them were 'serious' with direct consequences for the human population; according to one source, 'less than a dozen of them had consequences on humans' – for example, the incident at Rhadereistedt, Germany (4 dead).
Safety analyses have included a 2016 study that compiled a database of 169 accidents involving anaerobic digesters.
| Technology | Biotechnology | null |
1545779 | https://en.wikipedia.org/wiki/Milan%20Metro | Milan Metro | The Milan Metro () is the rapid transit system serving Milan, Italy, operated by Azienda Trasporti Milanesi. The network consists of five lines with a total network length of , and a total of 125 stations (+2 in construction), mostly underground. It has a daily ridership of about 1.4 million on weekdays.
The Milan Metro is the largest rapid transit system in Italy in terms of length, number of stations and ridership; it is the fifth longest in the European Union and the eighth longest in Europe.
The first line, Line 1, opened in 1964; Line 2 opened 5 years later in 1969, Line 3 in 1990, Line 5 in 2013, and Line 4 in 2022. There are also several extensions planned and under construction. The architectural project of the Milan Metro, created by Franco Albini and Franca Helg, and the signs, designed by Bob Noorda, received the Compasso d'Oro award in 1964.
History
The first projects for a subway line in Milan were drawn up in 1914 and 1925, following the examples of underground transport networks in other European cities such as London and Paris. Planning proceeded in 1938 for the construction of a system of 7 lines, but this too was halted by the start of World War II and a lack of funds.
On 3 July 1952, the city administration voted for a project of a metro system and on 6 October 1955, a new company, Metropolitana Milanese, was created to manage the construction of the new infrastructure. The project was funded with ₤ 500 million from the municipality and the rest from a loan. The construction site of the first line was opened in viale Monte Rosa on 4 May 1957. Stations on the new line were designed by Franco Albini and Franca Helg architecture studio, while Bob Noorda designed the signage. For this project both Albini-Helg and Noorda won the Compasso d'Oro prize.
The first section from Lotto to Sesto Marelli (21 stations) was opened on 1 November 1964 after 7 years of construction works. Two trains adorned with Italian flags left at 10.41 a.m. and arrived at the Sesto Marelli terminus at 11.15 a.m., greeted by the notes of the national anthem and the triumphal march of Giuseppe Verdi's opera "Aida". The track was long, and the mean distance between the stations was . In the same year, in April, works on the second line started.
Passengers on the network grew constantly through the first years of service, passing from 37,092,315 in 1965 to 61,937,192 in 1969.
The green line from Caiazzo to Cascina Gobba (7 stations) opened five years later. During the 1960s and 1970s the network of 2 lines was completed, and both lines had 2 different spurs. In 1978, the lines were already and long respectively, with 28 and 22 stations.
The first section of the third line (yellow), with 5 stations, was opened on 3 May 1990 after almost 9 years of construction works. The line opened just before the 1990 FIFA World Cup. The other 9 stations on Line 3 opened to the southeast in 1991, and northwest to Maciachini Station in 2004.
In March 2005, the Line 2 Abbiategrasso station (south branch from Famagosta) and the Line 1 Rho Fiera station opened. The intermediate station of Pero, on line 1, opened in December 2005. A north extension of Line 3 to Comasina (4 stations) and a new south branch on the Line 2 to Assago (2 stations) opened in early 2011.
The first stage of the Line 5 (the first automated line of the network), covering the from Bignami to Zara, in the northern part of the municipality, opened on 10 February 2013. The second stage, from Zara to Garibaldi FS, opened on 1 March 2014. The third stage, from Garibaldi FS to San Siro Stadio, in the west of the city, opened on 29 April 2015, with some intermediate stations not in service at that time; as of November 2015, all the stations have been opened.
The metro replaced several interurban tram routes of the original Società Trazione Elettrica Lombarda (STEL) network, in particular along the route of Line 2 to Gessate.
In November 2022, the first six stations of the automated line 4 were opened, from Linate airport to Dateo; it was the first metro line to be inaugurated without any connection to the rest of the system, instead relying on a connection to the suburban railway network at Dateo railway station. The line was extended in 2023 to San Babila, linking it to line 1, and in 2024 to San Cristoforo FS, another railway station, in the west of the city.
Timeline
Infrastructure
Lines
The system comprises 5 lines. All the lines run underground except for the northern part of Line 2 and the Line 2 Assago branch.
There are 9 interchange stations, each with 2 lines: Loreto (Lines 1 and 2); Cadorna (Lines 1 and 2), also the terminus of the Ferrovienord railway network; Centrale (Lines 2 and 3), also Milan's main train station; Duomo (Lines 1 and 3), considered the center of the city; Zara (Lines 3 and 5); Garibaldi (Lines 2 and 5), also a major railway station; Lotto (Lines 1 and 5); San Babila (Lines 1 and 4); and Sant'Ambrogio (Lines 2 and 4).
Lines run within the Milan municipality for 80% of the total length (92 stations). Besides Milan, 13 other neighbouring municipalities are served: Assago, Bussero, Cassina de' Pecchi, Cernusco sul Naviglio, Cologno Monzese, Gessate, Gorgonzola, Pero, Rho, San Donato Milanese, Segrate, Sesto San Giovanni, Vimodrone. The network covers about 20% of Milan's total area.
The metro network is also linked with the suburban rail service, with 14 interchange stations: Affori FN, Cadorna FN, Dateo, Domodossola, Stazione Forlanini, Garibaldi FS, Lambrate FS, San Cristoforo FS, Lodi T.I.B.B. (with the nearby Porta Romana station), Porta Venezia, Repubblica, Rho Fiera, Rogoredo FS, Romolo and Sesto 1º Maggio.
The track gauge for all lines is the .
Platform screen doors are present in all stations on Line 4 and Line 5 and on some stations on Line 1.
Network Map
Power supply
Lines 2 and 3 use overhead lines to supply the electric current to the train and are electrified at 1500 V DC. Line 1, electrified at 750 V DC, uses a fourth rail system, although the same line also supports overhead lines in some stretches and depots; this allows Line 2 and 3 trains to use Line 1 tracks to reach a depot placed on the line. Line 4 and Line 5 trains are supplied by a third rail system at 750 V DC.
Signalling
Passenger information
All the stations are provided with LED screens showing the destination and waiting time of coming trains. In every station, a recorded voice announces the direction of every approaching train and, at the platform, the name of the station. While older trains have no on-train information, the new Meneghino and Leonardo trains and the driverless trains on Line 5 are equipped with displays and recorded announcements in Italian and in English.
Mobile phone coverage
Since December 2009 all stations and trains of the Milan metro have full UMTS and HSDPA connectivity. Mobile operators TIM and Vodafone also provide LTE connectivity in all lines.
Rolling stock
The first 3 lines are heavy rapid-transit lines, with 6-car trains about 105 m in length. Line 4 and Line 5 are light metro lines, with 4-car trains about 50 m long.
Line 4 and Line 5 are equipped with the same driverless trains made by Hitachi, though they have different interior configurations (M4 trains have a seating arrangement similar to that of the first 3 lines).
Service
Tickets
A standard ticket costs €2.20 and is valid for 90 minutes from validation on metro, tram, bus, trolleybus and suburban lines within Milan and 21 bordering municipalities. Other tickets are available as well, such as daily, weekly, monthly, annual, student and senior passes. Additional fares are required to travel outside Milan and the 21 bordering municipalities.
Paper tickets can be replaced by contactless bank card payments, provided the trip starts in the metro, by tapping in at the orange gates installed in every metro station. This payment method is not available on suburban lines; it was expected to be implemented on trams and buses by the end of 2019, but was eventually introduced in December 2020 on three urban bus lines, with plans for coverage of the whole network by 2023.
Between 2004 and 2007, ATM introduced the Itinero smartcard, a proximity card that can be loaded with season tickets, replacing paper for this type of ticket. At the beginning of 2010, a new smartcard, RicaricaMi, was introduced. The new card can be topped up with credit and can be used for travel in place of magnetic paper tickets, on the model of London's Oyster card.
Milan metro lines can be accessed also with the regional integrated ticket "Io viaggio ovunque in Lombardia", as 1 to 7 days tickets or longer subscriptions using the smartcard "Io Viaggio".
Opening hours
The service starts at about 5:40 am and ends at about 0:30.
On Sundays and holidays, service usually starts later and ends a little later, depending on the occasion. M5 stations Segesta and San Siro Ippodromo typically close after events at the nearby Meazza Stadium to avoid passenger congestion.
Headways at peak hours vary from two minutes on the main branches of M1 and M2 to three minutes on M3, with headways on secondary branches doubling to around four minutes. The driverless rolling stock on M4 and M5 allows for more frequent service, with headways as low as ninety seconds during peak hours.
Night service
A night service has operated since 2015, with buses covering lines M1, M2 and M3 and, from 2022, M4.
The bus service follows roughly the same route and stops at the same stations of the metro for most of the central part.
The entire length of lines 1, 3 and 4 and the urban section of line 2 (Abbiategrasso–Cascina Gobba) are covered by the service. For M1, the night bus service is divided into 3 routes and continues to Baggio, well beyond the metro route.
The future network
The metro system is currently expanding.
An extension of Line 1 from Sesto 1º Maggio to Cinisello/Monza, towards the city of Cinisello Balsamo, is currently under construction.
The track will be long with an intermediate station at Sesto Restellone.
The completion has been delayed several times, and is now scheduled for 2027.
There is a project for a further 3 km extension of Line 1 to the west into Baggio, a neighbourhood on the western border of the municipality.
An extension of Line 2 from Cologno Nord to Vimercate is planned.
The section will be long with 6 stations (Brugherio, Carugate, Agrate Colleoni, Concorezzo, Vimercate Torri Bianche, Vimercate).
The track will be mostly underground (83%).
Line 3 is planned to be extended in some form (by metro or some less expensive means) to the south-east from San Donato to Paullo, with intermediate stations in the city of San Donato, Peschiera Borromeo, Mediglia, Caleppio Cerca, Paullo and Paullo East, the first 3 being underground and the others on the surface. The project is currently on hold.
The last phase of line 4, from the city centre in San Babila to San Cristoforo railway station in the south-west, near the municipal border with Buccinasco and Corsico, opened in October 2024. Further extensions to Segrate train station in the east and to Buccinasco are planned, though not yet in construction.
Line 5 is planned to be extended to Cinisello/Monza, where it will intersect with line 1 a second time at , and then to Monza city centre and west side.
| Technology | Italy | null |
13359661 | https://en.wikipedia.org/wiki/Mesonychidae | Mesonychidae | Mesonychidae (meaning "middle claws") is an extinct family of small to large-sized omnivorous-carnivorous mammals. They were endemic to North America and Eurasia during the Early Paleocene to the Early Oligocene, and were the earliest group of large carnivorous mammals in Asia. Once considered a sister-taxon to artiodactyls, recent evidence now suggests no close connection to any living mammal. Mesonychid taxonomy has long been disputed and they have captured popular imagination as "wolves on hooves", animals that combine features of both ungulates and carnivores. Skulls and teeth have similar features to early whales, and the family was long thought to be the ancestors of cetaceans. Recent fossil discoveries have overturned this idea; the consensus is that whales are highly derived artiodactyls. Some researchers now consider the family a sister group either to whales or to artiodactyls, close relatives rather than direct ancestors. Other studies define Mesonychia as basal to all ungulates, occupying a position between Perissodactyla and Ferae. In this case, the resemblances to early whales would be due to convergent evolution among ungulate-like herbivores that developed adaptations related to hunting or eating meat.
Description
The mesonychids were an unusual group of condylarths with a specialized dentition featuring tri-cuspid upper molars and high-crowned lower molars with shearing surfaces. They had large heads with relatively long necks. Over time, the family evolved foot and leg adaptations for faster running, and jaw adaptations for greater bite force. Like the Paleocene family Arctocyonidae, mesonychids were once viewed as primitive carnivorans, and the diet of most genera probably included meat or fish. Various genera and species coexisted in some locations, as hunters and omnivores or scavengers. In contrast to arctocyonids, the mesonychids had only four digits furnished with hooves supported by narrow fissured end phalanges.
Evolutionary history
They first appeared in the Early Paleocene, undergoing numerous speciation events during the Paleocene, and Eocene. Mesonychids fared very poorly at the close of the Eocene epoch, with only one genus, Mongolestes, surviving into the Early Oligocene epoch.
Mesonychids probably originated in Asia, where the most primitive mesonychid, Yantanglestes, is known from the early Paleocene. They were also most diverse in Asia where they occur in all major Paleocene faunas. Since other carnivores such as the creodonts and Carnivora were either rare or absent in these animal communities, mesonychids most likely dominated the large predator niche in the Paleocene of Asia. Throughout the Paleocene and Eocene, several genera, including Dissacus, Pachyaena and Mesonyx would radiate out from their ancestral home in Asia and into Europe and North America, where they would give rise to new mesonychid genera. These animals would have migrated to North America via the Bering land bridge.
Taxonomy
Mesonychidae was named by Cope (1880). Its type genus is Mesonyx. It was assigned to Creodonta by Cope (1880, 1889); to Carnivora by Peterson (1919); to Cete by Archibald (1998); and to Mesonychia by Carroll (1988), Zhou et al. (1995), Geisler and McKenna (2007), and Spaulding et al. (2009).
Classification
Family Mesonychidae
| Biology and health sciences | Mammals: General | Animals |
13364000 | https://en.wikipedia.org/wiki/Soil%20health | Soil health | Soil health is a state of a soil meeting its range of ecosystem functions as appropriate to its environment. In more colloquial terms, the health of soil arises from favorable interactions of all soil components (living and non-living) that belong together, as in microbiota, plants and animals. It is possible that a soil can be healthy in terms of ecosystem functioning but not necessarily serve crop production or human nutrition directly, hence the scientific debate on terms and measurements.
Soil health testing is pursued as an assessment of this status but tends to be confined largely to agronomic objectives. Soil health depends on soil biodiversity (with a robust soil biota), and it can be improved via soil management, especially by care to keep protective living covers on the soil and by natural (carbon-containing) soil amendments. Inorganic fertilizers do not necessarily damage soil health if they are not used in excess, and if they bring about a general improvement of overall plant growth which contributes more carbon-containing residues to the soil.
Aspects
The term soil health is used to describe the state of a soil in:
Sustaining plant and animal productivity (agronomic focus);
Enhancing biodiversity (Soil biodiversity) (ecological focus);
Maintaining or enhancing water and air quality (environmental/climate focus);
Supporting human health and habitation;
Sequestering carbon.
The phrase "soil health" has largely replaced the older "soil quality". The primary difference between the two expressions is that soil quality was focused on individual traits within a functional group, as in "quality of soil for maize production" or "quality of soil for roadbed preparation" and so on. The addition of the word "health" shifted the perception to be integrative, holistic and systematic. The two expressions still overlap considerably. Soil health as an expression derives from organic or "biological farming" movements in Europe, however, well before soil quality was first applied as a discipline around 1990. In 1978, Swiss soil biologist Dr Otto Buess wrote an essay "The Health of Soil and Plants" which largely defines the field even today.
The underlying principle in the use of the term "soil health" is that soil is not just an inert, lifeless growing medium, which modern intensive farming tends to represent, rather it is a living, dynamic and ever-so-subtly changing whole environment. It turns out that soils highly fertile from the point of view of crop productivity are also lively from a biological point of view. It is now commonly recognized that soil microbial biomass is large: in temperate grassland soil the bacterial and fungal biomass have been documented to be 1–/hectare and 2–/ha, respectively.
Some microbiologists now believe that 80% of soil nutrient functions are essentially controlled by microbes.
Using the human health analogy, a healthy soil can be categorized as one:
In a state of composite well-being in terms of biological, chemical and physical properties;
Not diseased or infirmed (i.e. not degraded, nor degrading), nor causing negative off-site impacts;
With each of its qualities cooperatively functioning such that the soil reaches its full potential and resists degradation;
Providing a full range of functions (especially nutrient, carbon and water cycling) and in such a way that it maintains this capacity into the future.
Conceptualisation
Soil health is the condition of the soil in a defined space and at a defined scale relative to a set of benchmarks that encompass healthy functioning. It would not be appropriate to refer to soil health for soil-roadbed preparation, as in the analogy of soil quality in a functional class.
The definition of soil health may vary between users of the term as alternative users may place differing priorities upon the multiple functions of a soil.
Therefore, the term soil health can only be understood within the context of the user of the term, and their aspirations of a soil, as well as by the boundary definition of the soil at issue. Finally, intrinsic to the discussion on soil health are many potentially conflicting interpretations, especially ecological landscape assessment vs agronomic objectives, each claiming to have soil health criteria.
Interpretation
Different soils will have different benchmarks of health depending on the "inherited" qualities, and on the geographic circumstance of the soil.
The generic aspects defining a healthy soil can be considered as follows:
"Productive" options are broad;
Life diversity is broad;
Absorbency, storing, recycling and processing is high in relation to limits set by climate;
Water runoff quality is of high standard;
Low entropy; and
No damage to or loss of the fundamental components.
This translates to:
A comprehensive cover of vegetation;
Carbon levels relatively close to the limits set by soil type and climate;
Little leakage of nutrients from the ecosystem;
Biological and agricultural productivity relatively close to the limits set by the soil environment and climate;
Only geological rates of erosion;
No accumulation of contaminants.
An unhealthy soil is thus the simple converse of the above.
Measurement
On the basis of the above, soil health will be measured in terms of individual ecosystem services provided relative to the benchmark. Specific benchmarks used to evaluate soil health include CO2 release, humus levels, microbial activity, and available calcium.
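A minimal sketch of what "measured relative to the benchmark" could look like in practice is given below; the indicator names echo the benchmarks listed above, but the numerical values, benchmark figures, and the simple capped-average scoring are assumptions for illustration only, not a published scoring method.

```python
# Sketch: score soil health indicators against site-appropriate benchmarks.
# Indicator values and benchmarks are hypothetical illustration data.
measurements = {"co2_respiration": 45.0, "humus_pct": 2.1,
                "microbial_activity": 0.7, "available_ca_ppm": 1500.0}
benchmarks = {"co2_respiration": 60.0, "humus_pct": 3.5,
              "microbial_activity": 1.0, "available_ca_ppm": 2000.0}

# Express each indicator as a fraction of its benchmark, capped at 1.0,
# then take the unweighted mean as an overall score.
scores = {k: min(measurements[k] / benchmarks[k], 1.0) for k in benchmarks}
overall = sum(scores.values()) / len(scores)

for name, s in sorted(scores.items()):
    print(f"{name:20s} {s:.2f}")
print(f"overall score (0-1):  {overall:.2f}")
```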
Soil health testing is spreading in the United States, Australia and South Africa.
Cornell University, a land-grant college in NY State, has had a Soil Health Test since 2006. Woods End Laboratories, a private soil lab founded in Maine in 1975, has offered a soil quality package since 1985. Both these services combine physical (aggregate stability), chemical (mineral balance), and biological (CO2 respiration) analyses, which today are considered hallmarks of soil health testing. The approach of other soil labs also entering the soil health field is to add into common chemical nutrient testing a biological set of factors not normally included in routine soil testing. The best example is adding biological soil respiration ("CO2-Burst") as a test procedure; this has already been adapted to modern commercial labs in the period since 2006.
There is however resistance among soil testing labs and university scientists to add new biological tests, primarily because the established metric of soil fertility is largely based on models constructed from "crop response" studies, which match crop yield to specific chemical nutrient concentrations, and no similar models appear to exist for soil health tests. Critics of novel soil health tests argue that they may be insensitive to management changes.
Soil test methods have evolved slowly over the past 40 years. However, in this same time USA soils have also lost up to 75% of their carbon (humus), causing biological fertility and ecosystem functioning to decline; how much is debatable. Many critics of the conventional system say the loss of soil quality is sufficient evidence that the old soil testing models have failed us, and need to be replaced with new approaches. These older models have stressed "maximum yield" and "yield calibration" to such an extent that related factors have been overlooked. Thus, surface and groundwater pollution with excess nutrients (nitrates and phosphates) has grown enormously, and in the early 2000s was reported (in the United States) to be the worst it had been since the 1970s, before the advent of environmental consciousness.
Regenerative Agriculture & Soil Health
Regenerative agriculture (RA) is a holistic approach to farming that emphasizes soil conservation, biodiversity, and sustainable land management. Utilizing various soil health practices, regenerative agriculture "integrates local and indigenous knowledge of landscapes, as well as their management, with established scientific knowledge" while aiming to improve the socioeconomic well-being of a community. Central to RA is the principle that healthy soil is foundational to sustainable agriculture, essentially focusing on feeding the soil rather than feeding each plant. RA serves as an opportunity to directly apply soil health practices to produce crops sustainably. Research highlights that regenerative agriculture enhances nutrient cycling while supporting biodiversity and ecosystem services, which are vital for maintaining soil health. Practices such as cover cropping, crop rotation, no-till farming, integrated pest management, permaculture, and composting support self-sustaining soil ecosystems, further enriching soil fertility while reducing dependence on chemical fertilizers and pesticides; research on cover crops, for example, shows that they not only reduce erosion but also improve nutrient cycling.
Among RA's primary contributions to soil health is the enhancement of organic matter and microbial activity. A myriad of practices can be used to increase soil organic content, such as cover cropping, composting, and crop rotation, improving soil fertility, water retention, and resistance to soil erosion. Research supports that soil microbial diversity is critical for maintaining fertility and resilience against the changing climate, and regenerative practices have been shown to enhance and support this biodiversity. Cover crops act as a protective blanket during the winter months, preventing compaction and erosion, while their roots maintain soil structure and nurture microbial diversity. Crop rotation further enriches soil microbiomes by diversifying nutrient and microbial inputs, disrupting pest cycles, and decreasing reliance on chemical inputs. Similarly, no-till farming minimizes physical disturbance of the soil, preserving its structure and improving water infiltration while conserving organic matter and keeping carbon in the soil rather than in the atmosphere. Permaculture is a design philosophy often incorporated into RA due to its focus on sustainable, ecosystem-based farming practices. Permaculture supports soil health by fostering natural nutrient cycles through techniques like companion planting, mulching, and perennial cropping. It emphasizes the creation of agricultural systems that model and mimic natural ecosystems, promoting biodiversity, more efficient resource use, and long-term soil health. These practices minimize soil erosion, enhance organic matter, and encourage beneficial microbial activity.
Regenerative agriculture offers significant economic and community benefits as well, nurturing resilient farming systems that enhance local economies and promote social well-being. Economically, RA reduces input costs by minimizing reliance on chemical fertilizers and pesticides, leading to lower operational expenses and increased profitability for farmers. Enhanced soil health from practices such as cover cropping and composting improves crop yields and market quality, which can provide greater productivity and financial stability. Although, the lack of heavy machinery increases the amount of necessary labor and steepens dependence on workers. Additionally, RA is designed to support community health by improving access to fresh local produce and working to alleviate food insecurity. Through RA, Community Supported Agriculture (CSA) systems can be established to bridge the divides between farmers and consumers, strengthen community ties, and facilitate a direct-market relationship. These practices not only sustain farmers but benefit surrounding communities by promoting sustainable livelihoods and resilience to environmental changes.
RA also addresses climate challenges by promoting carbon sequestration through practices like composting and no-till farming. These methods not only mitigate climate change by lowering atmospheric CO2 levels but also improve soil health, boosting soil productivity and resilience (Mishra et al. 295-309). The addition of organic material increases soil organic carbon, which has measurable effects on reducing atmospheric CO2 levels while enhancing soil fertility, functionality, and productivity.
These practices collectively cultivate a resilient soil ecosystem that supports plant growth, enhances pest and disease resistance, and mitigates greenhouse gas emissions through carbon storage. However, despite its many benefits, RA faces challenges in assessment and widespread adoption. Biological indicators of soil health are often underrepresented in current evaluations because they require context-specific ecological knowledge and are not universally standardized. Addressing these gaps and advancing research into RA's ecological and socioeconomic impacts will be crucial for its broader implementation and success.
Soil health gap
The importance of soil for global food security, agro-ecosystems, the environment, and human life has markedly shifted research trends toward soil health. However, the lack of a site- or region-specific benchmark has limited research into the effect of different agronomic management practices on soil health. In 2020, Maharjan and his team introduced a new term and concept, the "Soil Health Gap", describing how native land in a particular region can be used to establish a benchmark against which the efficacy of different management practices can be compared, and which at the same time can be used to quantify differences in soil health status.
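One way to formalize this concept, as an illustrative reading of the description above rather than the authors' exact notation, is as the difference between a soil health indicator measured under native, undisturbed land (the benchmark) and the same indicator measured under current management:

$$\text{Soil Health Gap} = SH_{\text{native benchmark}} - SH_{\text{current management}}$$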
| Physical sciences | Soil science | Earth science |
161507 | https://en.wikipedia.org/wiki/Marlin | Marlin | Marlins are fish from the family Istiophoridae, which includes 11 species.
Name
The family's common name is thought to derive from their resemblance to a sailor's marlinspike.
Taxonomy
The family name Istiophoridae comes from the genus Istiophorus, in which the species Istiophorus platypterus, first described by George Kearsley Shaw in 1792, is placed; the genus name derives from the Greek word istion, meaning "sail", which describes the shape of the species's dorsal fin.
Family description
Marlins have elongated bodies, a spear-like snout or bill, and a long, rigid dorsal fin which extends forward to form a crest.
Marlins are among the fastest marine swimmers. However, greatly exaggerated speeds are often claimed in popular literature, based on unreliable or outdated reports.
The larger species include the Atlantic blue marlin, Makaira nigricans, which can reach in length and in weight and the black marlin, Istiompax indica, which can reach in excess of in length and in weight. They are popular sporting fish in tropical areas. The Atlantic blue marlin and the white marlin are endangered due to overfishing.
Marlins can change colour, lighting up their stripes just before attacking prey.
Classification
The marlins are istiophoriform fish, most closely related to the swordfish (which itself is the sole member of the family Xiphiidae). The Carangiformes are believed to be the second-closest clade to marlins. Although marlins were previously thought to be closely related to Scombridae, genetic analysis shows only a slight relationship.
Extant genera
{| class="wikitable"
|+
|-
! Image !! Genus !! Living species !! Common name
|-
| || Istiompax || Istiompax indica || black marlin
|-
|rowspan=2| ||rowspan=2| Istiophorus || I. albicans || Atlantic sailfish
|-
| I. platypterus || Indo-Pacific sailfish
|-
|rowspan=2| ||rowspan=2| Makaira || Makaira nigricans || Atlantic blue marlin
|-
| Makaira mazara || Indo-Pacific blue marlin
|-
|rowspan=2| ||rowspan=2| Kajikia || Kajikia albida || white marlin
|-
| Kajikia audax || striped marlin
|-
|rowspan=4| ||rowspan=4| Tetrapturus || Tetrapturus angustirostris || shortbill spearfish
|-
| Tetrapturus belone || Mediterranean spearfish
|-
| Tetrapturus georgii || roundscale spearfish
|-
| Tetrapturus pfluegeri || longbill spearfish
|}
Fossil genera
Marlins have a continuous fossil record from the Miocene onwards, with the oldest uncontroversial fossil dated to 22 million years ago. It is thought that they probably evolved in the Paratethys Sea.
The following fossil genera are known:
†Morgula Gracia et al., 2022
†Pizzikoskerma Gracia, Villalobos-Segura, Ballen, Carnevale & Kriwet, 2024
†Prototetrapturus Gracia et al., 2022
†Sicophasma Gracia, Villalobos-Segura, Ballen, Carnevale & Kriwet, 2024
†Spathochoira Gracia et al., 2022
Popular culture
In the Nobel Prize-winning author Ernest Hemingway's 1952 novel The Old Man and the Sea, the central character of the work is an aged Cuban fisherman who, after 84 days without success on the water, heads out to sea to break his run of bad luck. On the 85th day, Santiago, the old fisherman, hooks a resolute marlin; what follows is a great struggle between man, sea creature, and the elements.
Frederick Forsyth's story "The Emperor", in the collection No Comebacks, tells of a bank manager named Murgatroyd, who catches a marlin and is acknowledged by the islanders of Mauritius as a master fisherman.
A marlin features prominently in the last chapter and climactic scenes of Christina Stead's The Man Who Loved Children. Sam's friend Saul gives Sam a marlin, and Sam makes his children help him render the fish's fat.
The Miami Marlins, a professional baseball team based in Miami, Florida, is named after the fish.
| Biology and health sciences | Acanthomorpha | null |
161804 | https://en.wikipedia.org/wiki/Africanized%20bee | Africanized bee | The Africanized bee, also known as the Africanized honey bee (AHB) and colloquially as the "killer bee", is a hybrid of the western honey bee (Apis mellifera), produced originally by crossbreeding of the East African lowland honey bee (A. m. scutellata) with various European honey bee subspecies such as the Italian honey bee (A. m. ligustica) and the Iberian honey bee (A. m. iberiensis).
The East African lowland honey bee was first introduced to Brazil in 1956 in an effort to increase honey production, but 26 swarms escaped quarantine in 1957. Since then, the hybrid has spread throughout South America and arrived in North America in 1985. Hives were found in south Texas in the United States in 1990.
Africanized honey bees are typically much more defensive, react to disturbances faster, and chase people further () than other varieties of honey bees. They have killed some 1,000 humans, with victims receiving 10 times more stings than from European honey bees. They have also killed horses and other animals.
History
There are 29 recognized subspecies of Apis mellifera based largely on geographic variations. All subspecies are cross-fertile. Geographic isolation led to numerous local adaptations. These adaptations include brood cycles synchronized with the bloom period of local flora, forming a winter cluster in colder climates, migratory swarming in Africa, enhanced (long-distance) foraging behavior in desert areas, and numerous other inherited traits.
The Africanized honey bees in the Western Hemisphere are descended from hives operated by biologist Warwick E. Kerr, who had interbred honey bees from Europe and southern Africa. Kerr was attempting to breed a strain of bees that would produce more honey in tropical conditions than the European strain of honey bee then in use throughout North, Central and South America. The hives containing this particular African subspecies were housed at an apiary near Rio Claro, São Paulo, in the southeast of Brazil, and were noted to be especially defensive. These hives had been fitted with special excluder screens (called queen excluders) to prevent the larger queen bees and drones from getting out and mating with the local population of European bees. According to Kerr, in October 1957 a visiting beekeeper, noticing that the queen excluders were interfering with the worker bees' movement, removed them, resulting in the accidental release of 26 Tanganyikan swarms of A. m. scutellata. Following this accidental release, the Africanized honey bee swarms spread out and crossbred with local European honey bee colonies.
The descendants of these colonies have since spread throughout the Americas, moving through the Amazon basin in the 1970s, crossing into Central America in 1982, and reaching Mexico in 1985. Because their movement through these regions was rapid and largely unassisted by humans, Africanized honey bees have earned the reputation of being a notorious invasive species. The prospect of killer bees arriving in the United States caused a media sensation in the late 1970s, inspired several horror movies, and sparked debate about the wisdom of humans altering entire ecosystems.
The first Africanized honey bees in the U.S. were discovered in 1985 at an oil field in the San Joaquin Valley of California. Bee experts theorized the colony had not traveled overland but instead "arrived hidden in a load of oil-drilling pipe shipped from South America." The first permanent colonies arrived in Texas from Mexico in 1990. In the Tucson region of Arizona, a study of trapped swarms in 1994 found that only 15 percent had been Africanized; this number had grown to 90 percent by 1997.
Characteristics
Africanized honey bees display certain behavioral traits, foremost excessive defensiveness and swarming, that make them less than desirable for commercial beekeeping. Even so, they have become the dominant type of honey bee for beekeeping in Central and South America, owing to their genetic dominance and their ability to out-compete their European counterpart, and some beekeepers assert that they are superior honey producers and pollinators.
Africanized honey bees, as opposed to other Western bee types:
Tend to swarm more frequently and go farther than other types of honey bees.
Are more likely to migrate as part of a seasonal response to lowered food supply.
Are more likely to "abscond"—the entire colony leaves the hive and relocates—in response to stress.
Have greater defensiveness when in a resting swarm, compared to other honey bee types.
Live more often in ground cavities than the European types.
Guard the hive aggressively, with a larger alarm zone around the hive.
Have a higher proportion of "guard" bees within the hive.
Deploy in greater numbers for defense and pursue perceived threats over much longer distances from the hive.
Cannot survive extended periods of forage deprivation, preventing introduction into areas with harsh winters or extremely dry late summers.
Live in dramatically higher population densities.
North American distribution
Africanized honey bees are considered an invasive species in the Americas. As of 2002, the Africanized honey bees had spread from Brazil south to northern Argentina and north to Central America, Trinidad (the West Indies), Mexico, Texas, Arizona, Nevada, New Mexico, Florida, and southern California. In June 2005, it was discovered that the bees had spread into southwest Arkansas. Their expansion stopped for a time at eastern Texas, possibly due to the large population of European honey bee hives in the area. However, discoveries of the Africanized honey bees in southern Louisiana show that they have gotten past this barrier, or have come as a swarm aboard a ship.
On 11 September 2007, Commissioner Bob Odom of the Louisiana Department of Agriculture and Forestry said that Africanized honey bees had established themselves in the New Orleans area. In February 2009, Africanized honey bees were found in southern Utah. The bees had spread into eight counties in Utah, as far north as Grand and Emery Counties by May 2017.
In October 2010, a 73-year-old man was killed by a swarm of Africanized honey bees while clearing brush on his south Georgia property, as determined by Georgia's Department of Agriculture. In 2012, Tennessee state officials reported that a colony was found for the first time in a beekeeper's colony in Monroe County in the eastern part of the state. In June 2013, 62-year-old Larry Goodwin of Moody, Texas, was killed by a swarm of Africanized honey bees.
In May 2014, Colorado State University confirmed that bees from a swarm which had aggressively attacked an orchardist near Palisade, in west-central Colorado, were from an Africanized honey bee hive. The hive was subsequently destroyed.
In tropical climates they effectively out-compete European honey bees and, at their peak rate of expansion, they spread north at almost two kilometers (about 1¼ mile) a day. There were discussions about slowing the spread by placing large numbers of docile European-strain hives in strategic locations, particularly at the Isthmus of Panama, but various national and international agricultural departments could not prevent the bees' expansion. Current knowledge of the genetics of these bees suggests that such a strategy, had it been tried, would not have been successful.
As the Africanized honey bee migrates further north, colonies continue to interbreed with European honey bees. In a study conducted in Arizona in 2004 it was observed that swarms of Africanized honey bees could take over weakened European honey bee hives by invading the hive, then killing the European queen and establishing their own queen. There are now relatively stable geographic zones in which either Africanized honey bees dominate, a mix of Africanized and European honey bees is present, or only non-Africanized honey bees are found, as in the southern portions of South America or northern North America.
African honey bees abscond (abandon the hive and any food store to start over in a new location) more readily than European honeybees. This is not necessarily a severe loss in tropical climates where plants bloom all year, but in more temperate climates it can leave the colony with not enough stores to survive the winter. Thus Africanized honey bees are expected to be a hazard mostly in the southern states of the United States, reaching as far north as the Chesapeake Bay in the east. The cold-weather limits of the Africanized honey bee have driven some professional bee breeders from Southern California into the harsher wintering locales of the northern Sierra Nevada and southern Cascade Range. These areas make it more difficult to prepare bees for early pollination placement, such as is required for almond production. The reduced available winter forage in northern California means that bees must be fed for early spring buildup.
The arrival of the Africanized honey bee in Central America is threatening the traditional craft of keeping Melipona stingless bees in log gums, although they do not interbreed or directly compete with each other. The honey production from an individual hive of Africanized honey bees greatly exceeds the much smaller yield of the various Melipona stingless bee species. Thus economic pressures are forcing beekeepers to switch from the traditional stingless bees to the new reality of the Africanized honey bee. Whether this will lead to the extinction of the former is unknown, but they are well adapted to exist in the wild, and there are a number of indigenous plants that the Africanized honey bees do not visit, so the fate of the Melipona bees remains to be seen.
Foraging behavior
Africanized honey bees begin foraging at young ages and harvest a greater quantity of pollen compared to their European counterparts (Apis mellifera ligustica). This may be linked to the high reproductive rate of the Africanized honey bee, which requires pollen to feed its greater number of larvae. Africanized honey bees are also sensitive to sucrose at lower concentrations. This adaptation causes foragers to harvest resources with low concentrations of sucrose that include water, pollen, and unconcentrated nectar. A study comparing A. m. scutellata and A. m. ligustica published by Fewell and Bertram in 2002 suggests that the differential evolution of this suite of behaviors is due to the different environmental pressures experienced by African and European subspecies.
Proboscis extension responses
Honey bee sensitivity to different concentrations of sucrose is determined by a reflex known as the proboscis extension response (PER). Different species of honey bees that employ different foraging behaviors will vary in the concentration of sucrose that elicits their proboscis extension response.
For example, European honey bees (Apis mellifera ligustica) forage at older ages and harvest less pollen and more concentrated nectar. The differences in resources collected during harvesting are a result of the European honey bee's sensitivity to sucrose at higher concentrations.
Evolution
The differences in a variety of behaviors between different species of honey bees are the result of directional selection that acts upon several foraging behavior traits as a common entity. Selection in natural populations of honey bees shows that positive selection of sensitivity to low concentrations of sucrose is linked to foraging at younger ages and collecting resources low in sucrose. Positive selection of sensitivity to high concentrations of sucrose is linked to foraging at older ages and collecting resources higher in sucrose. Additionally of interest, "change in one component of a suite of behaviors appear[s] to direct change in the entire suite."
When resource density is low in Africanized honey bee habitats, it is necessary for the bees to harvest a greater variety of resources because they cannot afford to be selective. Honey bees that are genetically inclined towards resources high in sucrose, such as concentrated nectar, will not be able to sustain themselves in harsher environments. The noted sensitivity to low sucrose concentrations in Africanized honey bees may be a result of selective pressure in times of scarcity, when their survival depends on their attraction to low-quality resources.
Morphology and genetics
The popular term "killer bee" has only limited scientific meaning today because there is no generally accepted fraction of genetic contribution used to establish a cut-off between a "killer" honey bee and an ordinary honey bee. Government and scientific documents prefer "Africanized honey bee" as an accepted scientific taxon.
Morphological tests
Although the native East African lowland honey bees (Apis mellifera scutellata) are smaller and build smaller comb cells than the European honey bees, their hybrids are not smaller. Africanized honey bees have slightly shorter wings, which can only be recognized reliably by performing a statistical analysis on micro-measurements of a substantial sample.
One of the problems with this test is that there are other subspecies, such as A. m. iberiensis, which also have shortened wings. This trait is hypothesized to derive from ancient hybrid haplotypes thought to have links to evolutionary lineages from Africa. Some belong to A. m. intermissa, but others have an indeterminate origin; the Egyptian honeybee (Apis mellifera lamarckii), present in small numbers in the southeastern U.S., has the same morphology.
DNA tests
Testing techniques have now moved away from external measurements to DNA analysis, but this means the test can only be done by a sophisticated laboratory. Molecular diagnostics using the mitochondrial DNA (mtDNA) cytochrome b gene can differentiate A. m. scutellata from other A. mellifera lineages, though mtDNA only allows one to detect Africanized colonies that have Africanized queens and not colonies where a European queen has mated with Africanized drones. A test based on single nucleotide polymorphisms was created in 2015 to detect Africanized bees based on the proportion of African and European ancestry.
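The 2015 test mentioned above is a published assay whose details are not reproduced here. Purely as a hypothetical illustration of the underlying idea (estimating the proportion of African versus European ancestry from a panel of ancestry-informative markers), the Python sketch below averages how often a sampled bee carries the allele typical of A. m. scutellata reference populations. The marker names, genotypes, and classification threshold are invented for illustration and do not describe the actual test.

def ancestry_proportion(genotypes):
    """Naive ancestry estimate: mean fraction of 'African-informative' alleles.
    Genotypes are coded 0, 1, or 2 copies of the allele common in African
    reference populations. Illustrative only; not the published 2015 assay."""
    return sum(count / 2 for count in genotypes.values()) / len(genotypes)

# Hypothetical genotypes for one sampled bee at five invented marker names.
sample = {"snp_01": 2, "snp_02": 1, "snp_03": 2, "snp_04": 2, "snp_05": 1}
p_african = ancestry_proportion(sample)

# In practice the cut-off for calling a colony "Africanized" would be derived
# from reference populations; 0.5 here is an arbitrary placeholder.
label = "Africanized" if p_african > 0.5 else "European-derived"
print(f"Estimated African ancestry: {p_african:.2f} -> classified as {label}")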
Western variants
The western honey bee is native to the continents of Europe, Asia, and Africa. It was introduced to North America in the early 1600s, with subsequent introductions of other European subspecies 200 years later. Since then, they have spread throughout the Americas. The 29 subspecies can be assigned to one of four major branches based on work by Ruttner and subsequently confirmed by analysis of mitochondrial DNA: African subspecies are assigned to branch A, northwestern European subspecies to branch M, southwestern European subspecies to branch C, and Mideast subspecies to branch O. There are still regions with localized variations that may become identified subspecies in the near future, such as A. m. pomonella from the Tian Shan Mountains, which would be included in the Mideast subspecies branch.
The western honey bee is the third insect whose genome has been mapped, and is unusual in having very few transposons. According to the scientists who analyzed its genetic code, the western honey bee originated in Africa and spread to Eurasia in two ancient migrations. They also discovered that the genes related to smell in the honey bee outnumber those related to taste. The genome sequence revealed that several groups of genes, particularly those related to circadian rhythms, were closer to those of vertebrates than of other insects. Genes related to enzymes that control other genes were also vertebrate-like.
African variants
There are two lineages of the East African lowland subspecies (Apis mellifera scutellata) in the Americas: actual matrilineal descendants of the original escaped queens and a much smaller number that are Africanized through hybridization. The matrilineal descendants carry African mtDNA, but partially European nuclear DNA, while the honey bees that are Africanized through hybridization carry European mtDNA, and partially African nuclear DNA. The matrilineal descendants are in the vast majority. This is supported by DNA analyses performed on the bees as they spread northwards; those that were at the "vanguard" were over 90% African mtDNA, indicating an unbroken matriline, but after several years in residence in an area interbreeding with the local European strains, as in Brazil, the overall representation of African mtDNA drops to some degree. However, these latter hybrid lines (with European mtDNA) do not appear to propagate themselves well or persist. Population genetics analysis of Africanized honey bees in the United States, using a maternally inherited genetic marker, found 12 distinct mitotypes, and the amount of genetic variation observed supports the idea that there have been multiple introductions of AHB into the United States.
A newer publication shows the genetic admixture of the Africanized honey bees in Brazil. The small number of honey bees with African ancestry introduced to Brazil in 1956, which dispersed and hybridized with existing managed populations of European origin and quickly spread across much of the Americas, is an example of a massive biological invasion, as described earlier in this article. The authors analysed whole-genome sequences of 32 Africanized honey bees sampled from throughout Brazil to study the effect of this process on genome diversity. By comparison with ancestral populations from Europe and Africa, they infer that these samples had 84% African ancestry, with the remainder from western European populations. However, this proportion varied across the genome, and they identified signals of positive selection in regions with high European ancestry proportions. These observations are largely driven by one large gene-rich 1.4 Mbp segment on chromosome 11, where European haplotypes are present at a significantly elevated frequency and likely confer an adaptive advantage in the Africanized honey bee population.
Consequences of selection
The chief difference between the European subspecies of honey bees kept by beekeepers and the African ones is attributable to both selective breeding and natural selection. By selecting only the most gentle, non-defensive subspecies, beekeepers have, over centuries, eliminated the more defensive ones and created a number of subspecies suitable for apiculture.
In Central and southern Africa there was formerly no tradition of beekeeping, and the hive was destroyed in order to harvest the honey, pollen and larvae. The bees adapted to the climate of Sub-Saharan Africa, including prolonged droughts. Having to defend themselves against aggressive insects such as ants and wasps, as well as voracious animals like the honey badger, African honey bees evolved as a subspecies group of highly defensive bees unsuitable by a number of metrics for domestic use.
As Africanized honey bees migrate into new regions, hives with an old or absent queen can become hybridized through crossbreeding. The aggressive Africanized drones out-compete European drones for a newly developed queen of such a hive, ultimately resulting in hybridization of the existing colony. Requeening, the practice of replacing the older existing queen with a new, already fertilized one, can avoid hybridization in apiaries. As a prophylactic measure, the majority of beekeepers in North America tend to requeen their hives annually, maintaining strong colonies and avoiding hybridization.
Defensiveness
Africanized honey bees exhibit far greater defensiveness than European honey bees and are more likely to deal with a perceived threat by attacking in large swarms. These hybrids have been known to pursue a perceived threat for a distance of well over 500 meters (1,640 ft).
The venom of an Africanized honey bee is the same as that of a European honey bee, but since the former tends to sting in far greater numbers, deaths from them are naturally more numerous than from European honey bees. While allergies to the European honey bee may cause death, complications from Africanized honey bee stings are usually not caused by allergies to their venom. Humans stung many times by Africanized honey bees can exhibit serious side effects such as inflammation of the skin, dizziness, headaches, weakness, edema, nausea, diarrhea, and vomiting. Some cases even progress to affect multiple body systems, causing increased heart rate, respiratory distress, and even renal failure. Africanized honey bee sting incidents can become very serious, but they remain relatively rare and often result from the accidental disturbance of colonies in highly populated areas.
Impact on humans
Fear factor
The Africanized honey bee is widely feared by the public, a reaction that has been amplified by sensationalist movies (such as The Swarm) and some of the media reports. Stings from Africanized honey bees kill on average two or three people per year.
As the Africanized honey bee spreads through Florida, a densely populated state, officials worry that public fear may force misguided efforts to combat them.
Misconceptions
"Killer bee" is a term frequently used in media such as movies that portray aggressive behavior or actively seeking to attack humans. "Africanized honey bee" is considered a more descriptive term in part because their behavior is increased defensiveness compared to European honey bees that can exhibit similar defensive behaviors when disturbed.
The sting of the Africanized honey bee is no more potent than any other variety of honey bee, and although they are similar in appearance to European honey bees, they tend to be slightly smaller and darker in color. Although Africanized honey bees do not actively search for humans to attack, they are more dangerous because they are more easily provoked, quicker to attack in greater numbers, and then pursue the perceived threat farther, for as much as a quarter of a mile (400 metres).
While studies have shown that Africanized honey bees can infiltrate European honey bee colonies and then kill and replace their queen (thus usurping the hive), this is less common than other methods. Wild and managed colonies will sometimes be seen to fight over honey stores during the dearth (periods when plants are not flowering), but this behavior should not be confused with the aforementioned activity. The most common way that a European honey bee hive will become Africanized is through crossbreeding during a new queen's mating flight. Studies have consistently shown that Africanized drones are more numerous, stronger and faster than their European cousins and are therefore able to out-compete them during these mating flights. The result of mating between Africanized drones and European queens is almost always Africanized offspring.
Impact on apiculture
In areas of suitable temperate climate, the survival traits of Africanized honey bee colonies help them outperform European honey bee colonies. They also return to the hive later in the day and work under conditions that often keep European honey bees hive-bound. For these reasons they have gained a reputation as superior honey producers, and beekeepers who have learned to adapt their management techniques now often seem to prefer them to their European counterparts. Studies show that in areas of Florida that contain Africanized honey bees, honey production is higher than in areas in which they do not live. It is also becoming apparent that Africanized honey bees have another advantage over European honey bees in that they seem to show a higher resistance to several health issues, including parasites such as Varroa destructor, some fungal diseases like chalkbrood, and even the mysterious colony collapse disorder that was plaguing beekeepers in the early 2000s. Despite all its negative factors, it is possible that the Africanized honey bee might actually end up being a boon to apiculture.
Queen management
In areas where Africanized honey bees are well established, bought and pre-fertilized (i.e. mated) European queens can be used to maintain a hive's European genetics and behavior. However, this practice can be expensive, since these queens must be bought and shipped from breeder apiaries in areas completely free of Africanized honey bees, such as the northern U.S. states or Hawaii. As such, this is generally not practical for most commercial beekeepers outside the U.S., and it is one of the main reasons why Central and South American beekeepers have had to learn to manage and work with the existing Africanized honey bee. Any effort to crossbreed virgin European queens with Africanized drones will result in the offspring exhibiting Africanized traits; only 26 swarms escaped in 1957, and nearly 60 years later there does not appear to be a noticeable lessening of the typical Africanized characteristics.
Gentleness
Not all Africanized honey bee hives display the typical hyper-defensive behavior, which may give bee breeders a starting point for breeding a gentler stock (gAHBs). Work has been done in Brazil towards this end, but in order to maintain these traits it is necessary to develop a queen breeding and mating facility, in order to requeen colonies and to prevent reintroduction of unwanted genes or characteristics through unintended crossbreeding with feral colonies. In Puerto Rico, some bee colonies are already beginning to show more gentle behavior. This is believed to be because the more gentle bees contain genetic material that is more similar to the European honey bee, although they also contain Africanized honey bee material. This degree of aggressiveness is surprisingly almost unrelated to individual genetics – instead it is almost entirely determined by the proportion of aggression-related genetics in the hive as a whole.
Safety
While bee incidents are much less common than they were during the first wave of Africanized honey bee colonization, this can be largely attributed to modified and improved bee management techniques. Prominent among these are locating bee-yards much farther away from human habitation, creating barriers to keep livestock at enough of a distance to prevent interaction, and education of the general public to teach them how to properly react when feral colonies are encountered and what resources to contact. The Africanized honey bee is now considered the honey bee of choice for beekeeping in Brazil.
Impact on pets and livestock
Africanized honey bees are a threat to outdoor pets, especially mammals. The most detailed information available pertains to dogs.
Less is known about livestock as victims. There is a widespread consensus that cattle suffer occasional Africanized honey bee attacks in Brazil, but there is little relevant documentation. It appears that cows sustain hundreds of stings if they are attacked, but can survive such injuries.
| Biology and health sciences | Hymenoptera | null |
161856 | https://en.wikipedia.org/wiki/Ovulation | Ovulation | Ovulation is an important part of the menstrual cycle in female vertebrates, in which egg cells are released from the ovaries as part of the ovarian cycle. In female humans ovulation typically occurs near the midpoint of the menstrual cycle, after the follicular phase. Ovulation is stimulated by an increase in luteinizing hormone (LH). The ovarian follicle ruptures and releases the secondary oocyte.
After ovulation, during the luteal phase, the egg will be available to be fertilized by sperm. If it is not, it will break down in less than a day. Meanwhile, the uterine lining (endometrium) continues to thicken to be able to receive a fertilized egg. If no conception occurs, the uterine lining will eventually break down and be shed from the body via the vagina during menstruation.
Some people choose to track ovulation in order to improve or aid becoming pregnant by timing intercourse with their ovulation. The signs of ovulation may include cervical mucus changes, mild cramping in the abdominal area, and a small rise in basal body temperature. Medication is also sometimes required by those experiencing infertility to induce ovulation.
Process
Ovulation occurs about midway through the menstrual cycle, after the follicular phase. The days in which a woman is most fertile can be calculated based on the date of the last menstrual period and the length of a typical menstrual cycle. The few days surrounding ovulation (from approximately days 10 to 18 of a 28-day cycle) constitute the most fertile phase. The time from the beginning of the last menstrual period (LMP) until ovulation is, on average, 14.6 days, but with substantial variation among females and between cycles in any single female, with an overall 95% prediction interval of 8.2 to 20.5 days.
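As a rough illustration of the arithmetic described above, the following Python sketch estimates an ovulation date and fertile window from a last-menstrual-period date and a typical cycle length. It uses the 14.6-day average and the 8.2 to 20.5-day prediction interval quoted in this article; the adjustment for cycle length and the five-days-before/one-day-after window (cited later in the article) are simplifying assumptions, and the sketch is illustrative only, not a medical tool.

from datetime import date, timedelta

def estimate_fertile_window(lmp, cycle_length=28):
    """Rough estimate of ovulation timing from the last menstrual period (LMP).
    Illustrative only; real cycles vary substantially, as noted in the text."""
    # Average LMP-to-ovulation interval is ~14.6 days for a 28-day cycle;
    # shifting it by the difference in cycle length is a simplifying assumption.
    ovulation = lmp + timedelta(days=round(14.6 + (cycle_length - 28)))
    # Fertile window: roughly 5 days before to 1 day after ovulation.
    window = (ovulation - timedelta(days=5), ovulation + timedelta(days=1))
    # 95% prediction interval for ovulation, per the article: 8.2 to 20.5 days after LMP.
    interval = (lmp + timedelta(days=8), lmp + timedelta(days=21))
    return ovulation, window, interval

ovulation, window, interval = estimate_fertile_window(date(2024, 3, 1))
print("Estimated ovulation:", ovulation)
print("Approximate fertile window:", window[0], "to", window[1])
print("95% prediction interval for ovulation:", interval[0], "to", interval[1])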
The process of ovulation is controlled by the hypothalamus of the brain and through the release of hormones secreted in the anterior lobe of the pituitary gland, luteinizing hormone (LH) and follicle-stimulating hormone (FSH). In the preovulatory phase of the menstrual cycle, the ovarian follicle will undergo a series of transformations called cumulus expansion, which is stimulated by FSH. After this is done, a hole called the stigma will form in the follicle, and the secondary oocyte will leave the follicle through this hole. Ovulation is triggered by a spike in the amount of FSH and LH released from the pituitary gland. During the luteal (post-ovulatory) phase, the secondary oocyte will travel through the fallopian tubes toward the uterus. If fertilized by a sperm, the fertilized secondary oocyte or ovum may implant there 6–12 days later.
Follicular phase
The follicular phase (or proliferative phase) is the phase of the menstrual cycle during which the ovarian follicles mature. The follicular phase lasts from the beginning of menstruation to the start of ovulation.
For ovulation to be successful, the ovum must be supported by the corona radiata and cumulus oophorous granulosa cells. The latter undergo a period of proliferation and mucification known as cumulus expansion. Mucification is the secretion of a hyaluronic acid-rich cocktail that disperses and gathers the cumulus cell network in a sticky matrix around the ovum. This network stays with the ovum after ovulation and has been shown to be necessary for fertilization.
Ovulation
Estrogen levels peak towards the end of the follicular phase, over roughly 12 to 24 hours. This, by positive feedback, causes a surge in levels of luteinizing hormone (LH) and follicle-stimulating hormone (FSH). The surge lasts from 24 to 36 hours, and results in the rupture of the ovarian follicle, causing the oocyte to be released from the ovary.
Through a signal transduction cascade initiated by LH, which activates the pro-inflammatory genes through cAMP secondary messenger, proteolytic enzymes are secreted by the follicle that degrade the follicular tissue at the site of the blister, forming a hole called the stigma. The secondary oocyte leaves the ruptured follicle and moves out into the peritoneal cavity through the stigma, where it is caught by the fimbriae at the end of the fallopian tube. After entering the fallopian tube, the oocyte is pushed along by cilia, beginning its journey toward the uterus.
By this time, the oocyte has completed meiosis I, yielding two cells: the larger secondary oocyte that contains all of the cytoplasmic material and a smaller, inactive first polar body. Meiosis II follows at once but will be arrested in the metaphase and will so remain until fertilization. The spindle apparatus of the second meiotic division appears at the time of ovulation. If no fertilization occurs, the oocyte will degenerate between 12 and 24 hours after ovulation. Approximately 1–2% of ovulations release more than one oocyte. This tendency increases with maternal age. Fertilization of two different oocytes by two different spermatozoa results in fraternal twins.
The precise moment of ovulation was captured on film for the first time in 2008, coincidentally, during a routine hysterectomy procedure. According to the attending gynecologist, the ovum's emergence and subsequent release from the ovarian follicle occurred within a 15-minute timeframe.
Luteal phase
The follicle proper has met the end of its lifespan. Without the oocyte, the follicle folds inward on itself, transforming into the corpus luteum (pl. corpora lutea), a steroidogenic cluster of cells that produces estrogen and progesterone. These hormones induce the endometrial glands to begin production of the proliferative endometrium and later into secretory endometrium, the site of embryonic growth if implantation occurs. The action of progesterone increases basal body temperature by one-quarter to one-half degree Celsius (one-half to one degree Fahrenheit). The corpus luteum continues this paracrine action for the remainder of the menstrual cycle, maintaining the endometrium, before disintegrating into scar tissue during menses.
Clinical presentation
The start of ovulation may be detected by signs that are not readily discernible other than to the ovulating female herself, thus humans are said to have a concealed ovulation. In many animal species there are distinctive signals indicating the period when the female is fertile. Several explanations have been proposed to explain concealed ovulation in humans.
Females near ovulation experience changes in the cervical mucus, and in basal body temperature. Furthermore, many females experience secondary fertility signs including Mittelschmerz (pain associated with ovulation) and a heightened sense of smell, and can sense the precise moment of ovulation. However, midcycle pain may also not be due to Mittelschmerz, but due to other factors such as cysts, endometriosis, sexually transmitted infections, or an ectopic pregnancy. Other possible signs of ovulation include tender breasts, bloating, and cramps, although these symptoms are not a guarantee that ovulation is taking place.
Many females experience heightened sexual desire in the several days immediately before ovulation. One study concluded that females subtly improve their facial attractiveness during ovulation.
Symptoms related to the onset of ovulation, the moment of ovulation and the body's process of beginning and ending the menstrual cycle vary in intensity with each female but are fundamentally the same. The charting of such symptoms — primarily basal body temperature, mittelschmerz and cervical position — is referred to as the sympto-thermal method of fertility awareness, which allows a female to self-diagnose her state of ovulation. Once training has been given by a suitable authority, fertility charts can be completed on a cycle-by-cycle basis to show ovulation. This gives the possibility of using the data to predict fertility for natural contraception and pregnancy planning.
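As a toy illustration of how basal-body-temperature charting can reveal the post-ovulatory temperature rise mentioned above, the sketch below applies a common rule of thumb, sometimes called the "three-over-six" rule, in which three consecutive readings above the highest of the previous six mark the shift. The rule and the sample readings are assumptions added for illustration; they are not taken from this article and are not a substitute for proper fertility-awareness training.

def detect_thermal_shift(temps):
    """Return the index where a sustained temperature rise begins, using a
    simple 'three-over-six' rule: three consecutive readings all above the
    maximum of the six readings before them. Returns None if no shift found."""
    for i in range(6, len(temps) - 2):
        baseline = max(temps[i - 6:i])
        if all(t > baseline for t in temps[i:i + 3]):
            return i
    return None

# Hypothetical daily basal body temperatures in degrees Celsius.
readings = [36.4, 36.5, 36.4, 36.5, 36.4, 36.5, 36.8, 36.9, 36.9, 36.8]
shift = detect_thermal_shift(readings)
if shift is not None:
    print("Sustained temperature rise detected starting on day", shift + 1)
else:
    print("No sustained temperature rise detected")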
A urine level of the hormone pregnanediol 3-glucuronide of over 5 μg/mL has been used to confirm ovulation; in a study of 107 women, this test showed 100% specificity.
Disorders
Disorders of ovulation, also known as ovulatory disorders are classified as menstrual disorders and include oligoovulation (infrequent or irregular ovulation) and anovulation (absence of ovulation):
Oligoovulation is infrequent or irregular ovulation (usually defined as cycles of greater than 36 days or fewer than 8 cycles a year)
Anovulation is absence of ovulation when it would be normally expected (in a post-menarchal, premenopausal female). Anovulation usually manifests itself as irregularity of menstrual periods, that is, unpredictable variability of intervals, duration, or bleeding. Anovulation can also cause cessation of periods (secondary amenorrhea) or excessive bleeding (dysfunctional uterine bleeding).
The World Health Organization (WHO) has developed the following classification of ovulatory disorders:
WHO group I: Hypothalamic–pituitary–gonadal axis failure
WHO group II: Hypothalamic–pituitary–gonadal axis dysfunction. WHO group II is the most common cause of ovulatory disorders, and the most common causative member is polycystic ovary syndrome (PCOS).
WHO group III: Ovarian failure
WHO group IV: Hyperprolactinemia
Menstrual disorders can often indicate ovulatory disorder.
Ovulation induction
Ovulation induction is a promising assisted reproductive technology for patients with conditions such as polycystic ovary syndrome (PCOS) and oligomenorrhea. It is also used in in vitro fertilization to make the follicles mature before egg retrieval. Usually, ovarian stimulation is used in conjunction with ovulation induction to stimulate the formation of multiple oocytes. Some sources include ovulation induction in the definition of ovarian stimulation.
A low dose of human chorionic gonadotropin (HCG) may be injected after completed ovarian stimulation. Ovulation will occur between 24 and 36 hours after the HCG injection.
By contrast, ovulation in some animal species is naturally induced: in these species, ovulation can be stimulated by coitus.
Ovulation suppression
Combined hormonal contraceptives inhibit follicular development and prevent ovulation as a primary mechanism of action. The ovulation-inhibiting dose (OID) of an estrogen or progestogen refers to the dose required to consistently inhibit ovulation in women. Ovulation inhibition is an antigonadotropic effect and is mediated by inhibition of the secretion of the gonadotropins, LH and FSH, from the pituitary gland.
In assisted reproductive technology, including in vitro fertilization, cycles where a transvaginal oocyte retrieval is planned generally necessitate ovulation suppression, because it is not practically feasible to collect oocytes after ovulation. For this purpose, ovulation can be suppressed by either a GnRH agonist or a GnRH antagonist, with different protocols depending on which substance is used.
Fertility and timing of ovulation
Most women who are able to conceive are fertile for an estimated five days before ovulation and one day after ovulation. There is some evidence that for couples who have been trying to conceive a child for less than 12 months, and where the female is under 40 years old, practicing timed intercourse (timing intercourse with ovulation using urine tests that predict ovulation) may help improve the rate of pregnancy and live births. The role that stress plays in ovulation and fertility, the biological basis of stress-induced anovulation, and the role of cortisol are not entirely clear.
| Biology and health sciences | Human reproduction | Biology |
162017 | https://en.wikipedia.org/wiki/Rayon | Rayon | Rayon, also called viscose and commercialised in some countries as sabra silk or cactus silk, is a semi-synthetic fiber made from natural sources of regenerated cellulose, such as wood and related agricultural products. It has the same molecular structure as cellulose. Many types and grades of viscose fibers and films exist. Some imitate the feel and texture of natural fibers such as silk, wool, cotton, and linen. The types that resemble silk are often called artificial silk. It can be woven or knit to make textiles for clothing and other purposes.
Rayon production involves solubilizing cellulose so that it can be re-formed into fibers. Three common solubilization methods are:
The cuprammonium process (not in use today), using ammoniacal solutions of copper salts
The viscose process, the most common today, using alkali and carbon disulfide
The Lyocell process, using amine oxide, which avoids producing neurotoxic carbon disulfide but is more expensive
History
French scientist and industrialist Hilaire de Chardonnet (1838–1924) invented the first artificial textile fiber, artificial silk.
Swiss chemist Matthias Eduard Schweizer (1818–1860) discovered that cellulose dissolved in tetraamminecopper dihydroxide. Max Fremery and Johann Urban developed a method to produce carbon fibers for use in light bulbs in 1897. Production of cuprammonium rayon for textiles started in 1899 in the Vereinigte Glanzstoff Fabriken AG in Oberbruch (near Aachen). Improvement by J. P. Bemberg AG in 1904 made the artificial silk a product comparable to real silk.
English chemist Charles Frederick Cross and his collaborators, Edward John Bevan and Clayton Beadle, patented their artificial silk in 1894. They named it "viscose" because its production involved the intermediacy of a highly viscous solution. Cross and Bevan took out British Patent No. 8,700, "Improvements in Dissolving Cellulose and Allied Compounds" in May, 1892. In 1893, they formed the Viscose Syndicate to grant licences and, in 1896, formed the British Viscoid Co. Ltd.
The first commercial viscose rayon was produced by the UK company Courtaulds Fibres in November 1905. Courtaulds formed an American division, American Viscose (later known as Avtex Fibers), to produce their formulation in the US in 1910. The name "rayon" was adopted in 1924, with "viscose" being used for the viscous organic liquid used to make both rayon and cellophane. In Europe, though, the fabric itself became known as "viscose", which has been ruled an acceptable alternative term for rayon by the US Federal Trade Commission (FTC).
Rayon was produced only as a filament fiber until the 1930s, when methods were developed to utilize "broken waste rayon" as staple fiber.
Manufacturers' search for a less environmentally harmful process for making rayon led to the development of the lyocell method. The lyocell process was developed in 1972 by a team at the now defunct American Enka fibers facility at Enka, North Carolina. In 2003, the American Association of Textile Chemists and Colorists (AATCC) awarded Neal E. Franks their Henry E. Millson Award for Invention for lyocell. In 1966–1968, D. L. Johnson of Eastman Kodak Inc. studied NMMO solutions. In the decade 1969 to 1979, American Enka tried unsuccessfully to commercialize the process. The operating name for the fibre inside the Enka organization was "Newcell", and the development was carried through pilot plant scale before the work was stopped. The basic process of dissolving cellulose in NMMO was first described in a 1981 patent by Mcorsley for Akzona Incorporated (the holding company of Akzo). In the 1980s the patent was licensed by Akzo to Courtaulds and Lenzing. The fibre was developed by Courtaulds Fibres under the brand name "Tencel" in the 1980s. In 1982, a 100 kg/week pilot plant was built in Coventry, UK, and production was increased tenfold (to a ton/week) in 1984. In 1988, a 25 ton/week semi-commercial production line opened at the Grimsby, UK, pilot plant. The process was first commercialized at Courtaulds' rayon factories at Mobile, Alabama (1990), and at the Grimsby plant (1998). In January 1993, the Mobile Tencel plant reached full production levels of 20,000 tons per year, by which time Courtaulds had spent £100 million and 10 years on Tencel development. Tencel revenues for 1993 were estimated as likely to be £50 million. A second plant in Mobile was planned. By 2004, production had quadrupled to 80,000 tons.
Lenzing began a pilot plant in 1990, and commercial production in 1997, with 12 metric tonnes/year made in a plant in Heiligenkreuz im Lafnitztal, Austria. When an explosion hit the plant in 2003 it was producing 20,000 tonnes/year, and planning to double capacity by the end of the year. In 2004 Lenzing was producing 40,000 tons [sic, probably metric tonnes]. In 1998, Lenzing and Courtaulds reached a patent dispute settlement.
In 1998 Courtaulds was acquired by competitor Akzo Nobel, which combined the Tencel division with other fibre divisions under the Accordis banner, then sold them to private equity firm CVC Partners. In 2000, CVC sold the Tencel division to Lenzing AG, which combined it with their "Lenzing Lyocell" business, but maintained the brand name Tencel. It took over the plants in Mobile and Grimsby, and by 2015 were the largest lyocell producer at 130,000 tonnes/year.
Process
Rayon is produced by dissolving cellulose, then converting this solution back to insoluble fibrous cellulose. Various processes have been developed for this regeneration. The most common methods for creating rayon are the cuprammonium method, the viscose method, and the lyocell process. The first two methods have been practiced for more than a century.
Cuprammonium methods
Cuprammonium rayon has properties similar to viscose; however, during its production, the cellulose is combined with copper and ammonia (Schweizer's reagent). Due to the detrimental environmental effects of this production method, cuprammonium rayon is no longer being produced in the United States. The process has been described as obsolete, but cuprammonium rayon is still made by one company in Japan.
Tetraamminecopper(II) sulfate is also used as a solvent.
Viscose method
The viscose process builds on the reaction of cellulose with a strong base, followed by treatment of that solution with carbon disulfide to give a xanthate derivative. The xanthate is then converted back to a cellulose fiber in a subsequent step.
The viscose method can use wood as a source of cellulose, whereas other routes to rayon require lignin-free cellulose as a starting material. The use of woody sources of cellulose makes viscose cheaper, so it was traditionally used on a larger scale than the other methods. On the other hand, the original viscose process generates large amounts of contaminated wastewater. Newer technologies use less water and have improved the quality of the wastewater.
The raw material for viscose is primarily wood pulp (sometimes bamboo pulp), which is chemically converted into a soluble compound. It is then dissolved and forced through a spinneret to produce filaments, which are chemically solidified, resulting in fibers of nearly pure cellulose. Unless the chemicals are handled carefully, workers can be seriously harmed by the carbon disulfide used to manufacture most rayon.
To prepare viscose, pulp is treated with aqueous sodium hydroxide (typically 16–19% by mass) to form "alkali cellulose", which has the approximate formula [C6H9O4−ONa]. This material is allowed to depolymerize to an extent. The rate of depolymerization (ripening or maturing) depends on temperature and is affected by the presence of various inorganic additives, such as metal oxides and hydroxides. Air also affects the ripening process, since oxygen causes depolymerization. The alkali cellulose is then treated with carbon disulfide to form sodium cellulose xanthate:
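The equation that originally followed is not reproduced in this copy. As an illustration, an idealized per-anhydroglucose-unit form of the xanthation step (a standard textbook simplification, not necessarily the exact formula given in the source) can be written as:

\[ [\mathrm{C_6H_9O_4\text{-}ONa}]_n \;+\; n\,\mathrm{CS_2} \;\longrightarrow\; [\mathrm{C_6H_9O_4\text{-}O\text{-}CS\text{-}SNa}]_n \]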
Rayon fiber is produced from the ripened solutions by treatment with a mineral acid, such as sulfuric acid. In this step, the xanthate groups are hydrolyzed to regenerate cellulose and carbon disulfide:
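Again, the original equation is not reproduced here; an idealized overall form of the regeneration step with sulfuric acid, ignoring the side reactions described in the next paragraph, is commonly given as:

\[ 2\,[\mathrm{C_6H_9O_4\text{-}O\text{-}CS\text{-}SNa}]_n \;+\; n\,\mathrm{H_2SO_4} \;\longrightarrow\; 2\,[\mathrm{C_6H_{10}O_5}]_n \;+\; 2n\,\mathrm{CS_2} \;+\; n\,\mathrm{Na_2SO_4} \]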
Aside from regenerated cellulose, acidification gives hydrogen sulfide (H2S), sulfur, and carbon disulfide. The thread made from the regenerated cellulose is washed to remove residual acid. The sulfur is then removed by the addition of sodium sulfide solution, and impurities are oxidized by bleaching with sodium hypochlorite solution or hydrogen peroxide solution.
Production begins with processed cellulose obtained from wood pulp and plant fibers. The cellulose content in the pulp should be around 87–97%.
The steps:
Immersion: The cellulose is treated with caustic soda.
Pressing: The treated cellulose is then pressed between rollers to remove excess liquid.
The pressed sheets are crumbled or shredded to produce what is known as "white crumb".
The "white crumb" is aged through exposure to oxygen. This is a depolymerization step and is avoided in the case of polynosics.
The aged "white crumb" is mixed in vats with carbon disulfide to form the xanthate. This step produces "orange-yellow crumb".
The "yellow crumb" is dissolved in a caustic solution to form viscose. The viscose is set to stand for a period of time, allowing it to "ripen". During this stage the molecular weight of the polymer changes.
After ripening, the viscose is filtered, degassed, and then extruded through a spinneret into a bath of sulfuric acid, resulting in the formation of rayon filaments. The acid is used as a regenerating agent. It converts cellulose xanthate back to cellulose. The regeneration step is rapid, which does not allow proper orientation of cellulose molecules. So to delay the process of regeneration, zinc sulfate is used in the bath, which converts cellulose xanthate to zinc cellulose xanthate, thus providing time for proper orientation to take place before regeneration.
Spinning: The spinning of viscose rayon fiber is done using a wet-spinning process. The filaments pass through a coagulation bath after extrusion from the spinneret holes, where two-way mass transfer takes place.
Drawing: The rayon filaments are stretched, in a procedure known as drawing, to straighten out the fibers.
Washing: The fibers are then washed to remove any residual chemicals.
Cutting: If filament fibers are desired, the process ends here; the filaments are cut down when producing staple fibers.
Lyocell method
The lyocell process relies on dissolution of cellulose products in a solvent, N-methyl morpholine N-oxide (NMMO).
The process starts with cellulose and involves dry jet-wet spinning. It was developed at the now defunct American Enka Company and Courtaulds Fibres. Lenzing's Tencel is an example of a lyocell fiber. Unlike the viscose process, the lycocell process does not use highly toxic carbon disulfide. "Lyocell" has become a genericized trademark, used to refer to the lyocell process for making cellulose fibers.
The lyocell process is not widely used, because it is still more expensive than the viscose process.
Properties
Rayon is a versatile fiber and is widely claimed to have the same comfort properties as natural fibers, although the drape and slipperiness of rayon textiles are often more like nylon. It can imitate the feel and texture of silk, wool, cotton, and linen. The fibers are easily dyed in a wide range of colors. Rayon fabrics are soft, smooth, cool, comfortable, and highly absorbent, but they do not always insulate body heat, making them ideal for use in hot and humid climates, although also making their "hand" (feel) cool and sometimes almost slimy to the touch.
The durability and appearance retention of regular viscose rayons are low, especially when wet; also, rayon has the lowest elastic recovery of any fiber. However, HWM rayon (high-wet-modulus rayon) is much stronger and exhibits higher durability and appearance retention. Recommended care for regular viscose rayon is dry-cleaning only. HWM rayon can be machine-washed.
Regular rayon has lengthwise lines called striations and its cross-section is an indented circular shape. The cross-sections of HWM and cupra rayon are rounder. Filament rayon yarns vary from 80 to 980 filaments per yarn and vary in size from 40 to 5000 denier. Staple fibers range from 1.5 to 15 denier and are mechanically or chemically crimped. Rayon fibers are naturally very bright, but the addition of delustering pigments cuts down on this natural brightness.
Structural modification
The physical properties of rayon remained unchanged until the development of high-tenacity rayon in the 1940s. Further research and development led to high-wet-modulus rayon (HWM rayon) in the 1950s. Research in the UK was centred on the government-funded British Rayon Research Association.
High-tenacity rayon is another modified version of viscose that has almost twice the strength of HWM. This type of rayon is typically used for industrial purposes such as tire cord.
Industrial applications of rayon emerged around 1935. Substituting cotton fiber in tires and belts, industrial types of rayon developed a totally different set of properties, amongst which tensile strength and elastic modulus were paramount.
Modal is a genericized trademark of Lenzing AG, used for (viscose) rayon which is stretched as it is made, aligning the molecules along the fibers. Two forms are available: "polynosics" and "high wet modulus" (HWM). High-wet-modulus rayon is a modified version of viscose that is stronger when wet. It can be mercerized like cotton. HWM rayons are also known as "polynosic". Polynosic fibers are dimensionally stable and do not shrink or get pulled out of shape when wet like many rayons. They are also wear-resistant and strong while maintaining a soft, silky feel. They are sometimes identified by the trade name Modal. Modal is used alone or with other fibers (often cotton or spandex) in clothing and household items like pajamas, underwear, bathrobes, towels, and bedsheets. Modal can be tumble-dried without damage. The fabric has been known to pill less than cotton due to fiber properties and lower surface friction. The trademarked Modal is made by spinning beech-tree cellulose and is considered a more eco-friendly alternative to cotton, as the production process uses on average 10–20 times less water.
Producers and brand names
In 2018, viscose fiber production in the world was approximately 5.8 million tons, and China was the largest producer with about 65% of total global production. Trade names are used within the rayon industry to label the type of rayon in the product. Viscose rayon was first produced in Coventry, England in 1905 by Courtaulds.
Bemberg is a trade name for cuprammonium rayon developed by J. P. Bemberg. Bemberg performs much like viscose but has a smaller diameter and comes closest to silk in feel. Bemberg is now only produced in Japan. The fibers are finer than viscose rayon.
Modal and Tencel are widely used forms of rayon produced by Lenzing AG. Tencel, generic name lyocell, is made by a slightly different solvent recovery process, and is considered a different fiber by the US FTC. Tencel lyocell was first produced commercially by Courtaulds' Grimsby plant in England. The process, which dissolves cellulose without a chemical reaction, was developed by Courtaulds Research.
Birla Cellulose is also a volume manufacturer of rayon. They have plants located in India, Indonesia and China.
Accordis was a major manufacturer of cellulose-based fibers and yarns. Production facilities can be found throughout Europe, the U.S. and Brazil.
Visil rayon and HOPE FR are flame retardant forms of viscose that have silica embedded in the fiber during manufacturing.
North American Rayon Corporation of Tennessee produced viscose rayon until its closure in the year 2000.
Indonesia is one of the largest producers of rayon in the world, and Asia Pacific Rayon (APR) of the country has an annual production capacity of 0.24 million tons.
Environmental impact
The biodegradability of various fibers in soil burial and sewage sludge was evaluated by Korean researchers. Rayon was found to be more biodegradable than cotton, and cotton more than acetate. The more water-repellent the rayon-based fabric, the more slowly it will decompose. Subsequent experiments have shown that wood-based fibres, like Lyocell, biodegrade much more readily than polyester. Silverfish—like the firebrat—can eat rayon, but damage was found to be minor, potentially due to the heavy, slick texture of the tested rayon. Another study states that "artificial silk [...] [was] readily eaten" by the grey silverfish.
A 2014 ocean survey found that rayon contributed to 56.9% of the total fibers found in deep ocean areas, the rest being polyester, polyamides, acetate and acrylic. A 2016 study found a discrepancy in the ability to identify natural fibers in a marine environment via Fourier transform infrared spectroscopy. Later research of oceanic microfibers instead found cotton being the most frequent match (50% of all fibers), followed by other cellulosic fibers at 29.5% (e.g., rayon/viscose, linen, jute, kenaf, hemp, etc.). Further analysis of the specific contribution of rayon to ocean fibers was not performed due to the difficulty in distinguishing between natural and man-made cellulosic fibers using FTIR spectra.
For several years, there have been concerns about links between rayon manufacturers and deforestation. As a result of these concerns, FSC and PEFC joined CanopyPlanet on a common platform to focus on these issues. CanopyPlanet subsequently started publishing a yearly Hot Button report, which places all of the world's man-made cellulosics manufacturers on the same scoring platform. The 2020 report scores all such manufacturers on a 35-point scale, with the highest scores achieved by Birla Cellulose (33) and Lenzing (30.5).
Carbon disulfide toxicity
Carbon disulfide is highly toxic. It is well documented to have seriously harmed the health of rayon workers in developed countries, and emissions may also harm the health of people living near rayon plants and their livestock. Rates of disability in modern factories (mainly in China, Indonesia, and India) are unknown. This has raised ethical concerns over viscose rayon production. Production facilities located in developing countries generally do not provide environmental or worker safety data.
Most global carbon disulfide emissions come from rayon production, as of 2008. About 250 g of carbon disulfide is emitted per kilogram of rayon produced.
Control technologies have enabled improved collection and reuse of carbon disulfide, resulting in lower emissions. However, these controls have not always been implemented where they were neither legally required nor profitable.
Carbon disulfide is volatile and is lost before the rayon gets to the consumer; the rayon itself is basically pure cellulose.
Studies from the 1930s show that 30% of American rayon workers experienced significant health impacts due to carbon disulfide exposure. Courtaulds worked hard to prevent this information being published in Britain.
During the Second World War, political prisoners in Nazi Germany were made to work in appalling conditions at the Phrix rayon factory in Krefeld. Nazis used forced labour to produce rayon across occupied Europe.
In the 1990s, viscose rayon producers faced lawsuits for negligent environmental pollution. Emissions abatement technologies had not been used consistently. Carbon-bed recovery, for instance, which reduces emissions by about 90%, was used by Courtaulds in Europe, but not in the US. Pollution control and worker safety started to become cost-limiting factors in production.
Japan has reduced carbon disulfide emissions per kilogram of viscose rayon produced (by about 16% per year), but in other rayon-producing countries, including China, emissions are uncontrolled. Rayon production is steady or decreasing except in China, where it is increasing.
Rayon production has largely moved to the developing world, especially China, Indonesia and India. Rates of disability in these factories are unknown, and concerns for worker safety continue.
Controversy
Studies have found the production of rayon can be harmful to the health of factory workers. Workers in factories utilizing the viscose process may be exposed to high levels of carbon disulfide, which can cause coronary heart disease, retinal damage, behavioral changes, impaired motor function, and various fertility and hormonal effects.
Related materials
Related materials are not regenerated cellulose, but esters of cellulose.
Nitrocellulose is a derivative of cellulose that is soluble in organic solvents. It is mainly used as an explosive or as a lacquer. Many early plastics, including celluloid, were made from nitrocellulose.
Cellulose acetate shares many traits with viscose rayon and was formerly considered the same textile. However, rayon resists heat, while acetate is prone to melting. Acetate must be laundered with care either by hand-washing or dry cleaning, and acetate garments disintegrate when heated in a tumble dryer. The two fabrics are now required to be listed distinctly on USA garment labels.
Cellophane is generally made by the viscose process, but dried into sheets instead of fibers.
| Technology | Fabrics and fibers | null |
162197 | https://en.wikipedia.org/wiki/Duralumin | Duralumin | Duralumin (also called duraluminum, duraluminium, duralum, dural(l)ium, or dural) is a trade name for one of the earliest types of age-hardenable aluminium–copper alloys. The term is a combination of Düren and aluminium. Its use as a trade name is obsolete. Today the term mainly refers to aluminium-copper alloys, designated as the 2000 series by the international alloy designation system (IADS), as with 2014 and 2024 alloys used in airframe fabrication.
Duralumin was developed in 1909 in Germany.
Duralumin is known for its strength and hardness, making it suitable for various applications, especially in the aviation and aerospace industry. However, it is susceptible to corrosion, which can be mitigated by using alclad duralumin materials.
History
Duralumin was developed by the German metallurgist Alfred Wilm at a private military-industrial laboratory (the Center for Scientific-Technical Research) in Neubabelsberg. In 1903, Wilm discovered that after quenching, an aluminium alloy containing 4% copper would harden when left at room temperature for several days. Further improvements led to the introduction of duralumin in 1909. The name, originally a trademark of Dürener Metallwerke AG, which acquired Wilm's patents and commercialized the material, is now mainly used in popular science to describe all alloys of the Al-Cu system, the '2000' series designated through the international alloy designation system originally created in 1970 by the Aluminum Association.
Composition
In addition to aluminium, the main alloying elements in duralumin are copper, manganese and magnesium. For instance, duralumin 2024 consists of 91-95% aluminium, 3.8-4.9% copper, 1.2-1.8% magnesium, 0.3-0.9% manganese, <0.5% iron, <0.5% silicon, <0.25% zinc, <0.15% titanium, <0.10% chromium and no more than 0.15% of other elements together. Although the addition of copper improves strength, it also makes these alloys susceptible to corrosion. Corrosion resistance can be greatly enhanced by metallurgically bonding a high-purity aluminium surface layer, a product referred to as alclad duralumin. Alclad materials are commonly used in the aircraft industry to this day.
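Because the section quotes explicit composition ranges, a simple range check illustrates how such a specification is read. The sketch below is illustrative only: the ranges are copied from the figures above, and the sample analysis is hypothetical.

```python
# Illustrative range check for a 2024-type composition (weight percent).
# The ranges are the figures quoted above; the sample analysis is hypothetical.

SPEC_2024 = {
    "Cu": (3.8, 4.9),
    "Mg": (1.2, 1.8),
    "Mn": (0.3, 0.9),
    "Fe": (0.0, 0.5),
    "Si": (0.0, 0.5),
    "Zn": (0.0, 0.25),
    "Ti": (0.0, 0.15),
    "Cr": (0.0, 0.10),
}

def out_of_spec(analysis):
    """Return the elements whose measured fraction falls outside the quoted range."""
    failures = []
    for element, (low, high) in SPEC_2024.items():
        value = analysis.get(element, 0.0)
        if not (low <= value <= high):
            failures.append(f"{element}: {value}% (allowed {low}-{high}%)")
    return failures

# Hypothetical melt analysis; the balance (roughly 93%) is aluminium.
sample = {"Cu": 4.4, "Mg": 1.5, "Mn": 0.6, "Fe": 0.3,
          "Si": 0.2, "Zn": 0.1, "Ti": 0.05, "Cr": 0.02}
print(out_of_spec(sample) or "within the quoted ranges")
```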
Microstructure
Duralumin's remarkable strength and durability stem from its unique microstructure, which is significantly influenced by heat treatment processes.
Initial Microstructure
Solid Solution: After initial solidification, duralumin exists as a single-phase solid solution, primarily composed of aluminium atoms with dispersed copper, magnesium, and other alloying elements. This initial state is relatively soft and ductile.
Heat Treatment and Microstructural Changes
Solution Annealing: Duralumin undergoes solution annealing, a high-temperature heat treatment process that dissolves the alloying elements into the aluminium matrix, creating a homogeneous solid solution.
Quenching: Rapid cooling (quenching) after solution annealing freezes the high-temperature solid solution, preventing the precipitation of strengthening phases.
Aging (Precipitation Hardening): During aging, the supersaturated solid solution becomes unstable. Fine precipitates, such as CuAl2 and Mg2Si, form within the aluminum matrix. These precipitates act as obstacles to dislocation movement, significantly increasing the alloy's strength and hardness.
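To make the three-stage sequence concrete, the sketch below lays it out as a simple schedule. The temperatures and hold times are representative textbook values for 2024-type alloys and are assumptions, not figures given in this article.

```python
# A minimal sketch of the solution-anneal / quench / age sequence described above.
# Temperatures and hold times are representative assumptions, not values from this article.

from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    temperature_c: float  # approximate process temperature
    duration_h: float     # approximate hold time
    purpose: str

AGE_HARDENING_SEQUENCE = [
    Stage("solution annealing", 495.0, 1.0,
          "dissolve Cu and Mg into a homogeneous solid solution"),
    Stage("quenching", 25.0, 0.01,
          "freeze the supersaturated solid solution and suppress coarse precipitation"),
    Stage("aging", 25.0, 96.0,
          "form fine CuAl2 and Mg2Si precipitates that obstruct dislocation motion"),
]

for stage in AGE_HARDENING_SEQUENCE:
    print(f"{stage.name:20s} ~{stage.temperature_c:5.0f} degC  {stage.duration_h:6.2f} h  {stage.purpose}")
```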
Final Microstructure
The final microstructure of duralumin consists of a predominantly aluminium matrix with finely dispersed precipitates (CuAl2, Mg2Si) and grain boundaries. The size, distribution, and type of precipitates play a crucial role in determining the mechanical properties of duralumin. Optimal aging conditions lead to the formation of finely dispersed precipitates, resulting in peak strength and hardness.
Applications
Aluminium alloys with copper (Al-Cu alloys), which can be precipitation hardened, are designated by the International Alloy Designation System as the 2000 series. Typical uses for wrought Al-Cu alloys include:
2011: Wire, rod, and bar for screw machine products. Applications where good machinability and good strength are required.
2014: Heavy-duty forgings, plate, and extrusions for aircraft fittings, wheels, and major structural components, space booster tankage and structure, truck frame and suspension components. Applications requiring high strength and hardness including service at elevated temperatures.
2017 or Avional (France): Around 1% Si. Good machinability. Acceptable resistance to corrosion in air and mechanical properties. Also called AU4G in France. Used for aircraft applications between the wars in France and Italy. Also saw some use in motor-racing applications from the 1960s, as it is a tolerant alloy that could be press-formed with relatively unsophisticated equipment.
2024: Aircraft structures, rivets, hardware, truck wheels, screw machine products, and other structural applications.
2036: Sheet for auto body panels
2048: Sheet and plate in structural components for aerospace application and military equipment
Aviation
German scientific literature openly published information about duralumin, its composition and heat treatment, before the outbreak of World War I in 1914. Despite this, use of the alloy outside Germany did not occur until after fighting ended in 1918. Reports of German use during World War I, even in technical journals such as Flight, could still mis-identify its key alloying component as magnesium rather than copper. Engineers in the UK showed little interest in duralumin until after the war.
The earliest known attempt to use duralumin for a heavier-than-air aircraft structure occurred in 1916, when Hugo Junkers first introduced it in the airframe of the Junkers J 3, a single-engined monoplane "technology demonstrator" that marked the first use of the Junkers trademark duralumin corrugated skinning. The Junkers company completed only the covered wings and tubular fuselage framework of the J 3 before abandoning its development. The slightly later, solely IdFlieg-designated Junkers J.I armoured sesquiplane of 1917, known to the factory as the Junkers J 4, had its all-metal wings and horizontal stabilizer made in the same manner as the J 3's wings, as did the experimental and airworthy all-duralumin Junkers J 7 single-seat fighter. The J 7 led to the Junkers D.I low-wing monoplane fighter, which introduced all-duralumin aircraft structural technology to German military aviation in 1918.
Its first use in aerostatic airframes came in rigid airship frames, eventually including all those of the "Great Airship" era of the 1920s and 1930s: the British-built R100, the German passenger Zeppelins LZ 127 Graf Zeppelin, LZ 129 Hindenburg, LZ 130 Graf Zeppelin II, and the U.S. Navy airships USS Los Angeles (ZR-3, ex-LZ 126), USS Akron (ZRS-4) and USS Macon (ZRS-5).
Bicycles
Duralumin was used to manufacture bicycle components and framesets from the 1930s to 1990s. Several companies in Saint-Étienne, France stood out for their early, innovative adoption of duralumin: in 1932, Verot et Perrin developed the first light alloy crank arms; in 1934, Haubtmann released a complete crankset; from 1935 on, Duralumin freewheels, derailleurs, pedals, brakes and handlebars were manufactured by several companies.
Complete framesets followed quickly, including those manufactured by Mercier (and Aviac and other licensees) with their popular Meca Dural family of models, the Pelissier brothers with their race-worthy La Perle models, and Nicolas Barra with his exquisite mid-twentieth-century “Barralumin” creations. Other names include Pierre Caminade, with his Caminargent creations and their exotic octagonal tubing, and Gnome et Rhône, an aircraft engine manufacturer that also diversified into motorcycles, velomotors and bicycles after the Second World War.
Mitsubishi Heavy Industries, which was prohibited from producing aircraft during the American occupation of Japan, manufactured the “cross” bicycle out of surplus wartime duralumin in 1946. The “cross” was designed by Kiro Honjo, a former aircraft designer responsible for the Mitsubishi G4M.
Duralumin use in bicycle manufacturing faded in the 1970s and 1980s. Vitus nonetheless released the venerable “979” frameset in 1979, a “Duralinox” model that became an instant classic among cyclists. The Vitus 979 was the first production aluminium frameset whose thin-wall 5083/5086 tubing was slip-fit and then glued together using a dry heat-activated epoxy. The result was an extremely lightweight but very durable frameset. Production of the Vitus 979 continued until 1992.
Automotive
In 2011, BBS Automotive made the RI-D, the world's first production automobile wheel made of duralumin. The company has since made other wheels of duralumin also, such as the RZ-D.
| Physical sciences | Specific alloys | Chemistry |
162264 | https://en.wikipedia.org/wiki/Penicillium | Penicillium | Penicillium () is a genus of ascomycetous fungi that is part of the mycobiome of many species and is of major importance in the natural environment, in food spoilage, and in food and drug production.
Some members of the genus produce penicillin, a molecule that is used as an antibiotic, which kills or stops the growth of certain kinds of bacteria. Other species are used in cheesemaking. According to the Dictionary of the Fungi (10th edition, 2008), the widespread genus contains over 300 species.
Taxonomy
The genus was first described in the scientific literature by Johann Heinrich Friedrich Link in his 1809 work, in which he characterized it using a term meaning "having tufts of fine hair". Link included three species—P. candidum, P. expansum, and P. glaucum—all of which produced a brush-like conidiophore (asexual spore-producing structure). The common apple rot fungus P. expansum was later selected as the type species.
In his 1979 monograph, John I. Pitt divided Penicillium into four subgenera based on conidiophore morphology and branching pattern: Aspergilloides, Biverticillium, Furcatum, and Penicillium. Species included in subgenus Biverticillium were later merged into Talaromyces.
Species
Selected species include:
Penicillium albocoremium
Penicillium aurantiogriseum, a grain contaminant
Penicillium bilaiae, an agricultural inoculant
Penicillium camemberti, used in the production of Camembert, Brie and Cambozola cheeses
Penicillium candidum, which is used in making Brie and Camembert. It has been reduced to synonymy with Penicillium camemberti
Penicillium chrysogenum (previously known as Penicillium notatum), which produces the antibiotic penicillin
Penicillium claviforme
Penicillium commune
Penicillium crustosum
Penicillium digitatum, a Citrus pathogen
Penicillium echinulatum, which produces mycophenolic acid
Penicillium expansum, a pathogen of apples and other fruit, produces patulin
Penicillium glabrum
Penicillium glaucum, a mold that is used in the making of some types of blue cheese, including Bleu de Gex, Rochebaron, and some varieties of Bleu d'Auvergne and Gorgonzola.
Penicillium imranianum
Penicillium italicum, a Citrus pathogen
Penicillium lacussarmientei
Penicillium lusitanum, isolated from marine habitat
Penicillium purpurogenum
Penicillium roqueforti, used in making Roquefort, Danish Blue cheese, English Blue Stilton cheese, Gorgonzola cheese, and Cambozola
Penicillium stoloniferum
Penicillium ulaiense, a Citrus pathogen in Asia
Penicillium verrucosum, a grain contaminant which produces ochratoxin A
Penicillium viridicatum
Etymology
The genus name is derived from the Latin root penicillum, meaning "painter's brush", and refers to the chains of conidia that resemble a broom.
Characteristics
The thallus (mycelium) consists of highly branched networks of multinucleated, usually colourless hyphae, with each pair of cells separated by a septum. Conidiophores are at the end of each branch accompanied by green spherical constricted units called conidia. These propagules play a significant role in reproduction; conidia are the main dispersal strategy of these fungi.
Sexual reproduction involves the production of ascospores, commencing with the fusion of an archegonium and an antheridium, with sharing of nuclei. The irregularly distributed asci contain eight unicellular ascospores each.
Ecology
Species of Penicillium are ubiquitous soil fungi preferring cool and moderate climates, commonly present wherever organic material is available. Saprophytic species of Penicillium and Aspergillus are among the best-known representatives of the Eurotiales and live mainly on organic biodegradable substances. Commonly known in America as molds, they are among the main causes of food spoilage, especially species of subgenus Penicillium. Many species produce highly toxic mycotoxins. The ability of these Penicillium species to grow on seeds and other stored foods depends on their propensity to thrive in low humidity and to colonize rapidly by aerial dispersion while the seeds are sufficiently moist. Some species have a blue color, commonly growing on old bread and giving it a blue fuzzy texture.
Some Penicillium species affect the fruits and bulbs of plants, including P. expansum, apples and pears; P. digitatum, citrus fruits; and P. allii, garlic. Some species are known to be pathogenic to animals; P. corylophilum, P. fellutanum, P. implicatum, P. janthinellum, P. viridicatum, and P. waksmanii are potential pathogens of mosquitoes.
Penicillium species are present in the air and dust of indoor environments, such as homes and public buildings. The fungus can be readily transported from the outdoors, and grow indoors using building material or accumulated soil to obtain nutrients for growth. Penicillium growth can still occur indoors even if the relative humidity is low, as long as there is sufficient moisture available on a given surface. A British study determined that Aspergillus- and Penicillium-type spores were the most prevalent in the indoor air of residential properties, and exceeded outdoor levels. Even ceiling tiles can support the growth of Penicillium—as one study demonstrated—if the relative humidity is 85% and the moisture content of the tiles is greater than 2.2%.
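The ceiling-tile finding amounts to a simple pair of thresholds. The sketch below encodes them directly from the study cited above (85% relative humidity, tile moisture content above 2.2%); the sample readings are hypothetical.

```python
# Threshold check based on the ceiling-tile study cited above:
# growth was supported at 85% relative humidity with tile moisture content above 2.2%.
# The sample readings are hypothetical.

def tile_supports_penicillium(relative_humidity_pct, moisture_content_pct):
    """True only if both thresholds from the cited study are met."""
    return relative_humidity_pct >= 85.0 and moisture_content_pct > 2.2

for rh, mc in [(80.0, 2.5), (85.0, 2.1), (88.0, 3.0)]:
    print(rh, mc, tile_supports_penicillium(rh, mc))  # only the last pair meets both thresholds
```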
Some Penicillium species cause damage to machinery and the combustible materials and lubricants used to run and maintain them. For example, P. chrysogenum (formerly P. notatum), P. steckii, P. cyclopium, and P. nalgiovensis affect fuels; P. chrysogenum, P. rubrum, and P. verrucosum cause damage to oils and lubricants; P. regulosum damages optical and protective glass.
Economic value
Several species of the genus Penicillium play a central role in the production of cheese and of various meat products. To be specific, Penicillium molds are found in blue cheese. Penicillium camemberti and Penicillium roqueforti are the molds on Camembert, Brie, Roquefort, and many other cheeses. Penicillium nalgiovense is used in soft mold-ripened cheeses, such as Nalžovy (ellischau) cheese, and to improve the taste of sausages and hams, and to prevent colonization by other molds and bacteria.
In addition to their importance in the food industry, species of Penicillium and Aspergillus serve in the production of a number of biotechnologically produced enzymes and other macromolecules, such as gluconic, citric, and tartaric acids, as well as several pectinases, lipase, amylases, cellulases, and proteases. Some Penicillium species have shown potential for use in bioremediation, more specifically mycoremediation, because of their ability to break down a variety of xenobiotic compounds.
The genus includes a wide variety of mold species, several of which are the sources of major antibiotics. Penicillin, a drug produced by P. chrysogenum (formerly P. notatum), was accidentally discovered by Alexander Fleming in 1929, and found to inhibit the growth of Gram-positive bacteria (see beta-lactams). Its potential as an antibiotic was realized in the late 1930s, and Howard Florey and Ernst Chain purified and concentrated the compound. The drug's success in saving soldiers in World War II who had been dying from infected wounds resulted in Fleming, Florey and Chain jointly winning the Nobel Prize in Medicine in 1945.
Griseofulvin is an antifungal drug and a potential chemotherapeutic agent that was discovered in P. griseofulvum. Additional species that produce compounds capable of inhibiting the growth of tumor cells in vitro include: P. pinophilum, P. canescens, and P. glabrum.
Reproduction
Although many eukaryotes are able to reproduce sexually, as much as 20% of fungal species had been thought to reproduce exclusively by asexual means. However recent studies have revealed that sex occurs even in some of the supposedly asexual species. For example, sexual capability was recently shown for the fungus Penicillium roqueforti, used as a starter for blue cheese production. This finding was based, in part, on evidence for functional mating type (MAT) genes that are involved in fungal sexual compatibility, and the presence in the sequenced genome of most of the important genes known to be involved in meiosis. Penicillium chrysogenum is of major medical and historical importance as the original and present-day industrial source of the antibiotic penicillin. The species was considered asexual for more than 100 years despite concerted efforts to induce sexual reproduction. However, in 2013, Bohm et al. finally demonstrated sexual reproduction in P. chrysogenum.
These findings with Penicillium species are consistent with accumulating evidence from studies of other eukaryotic species that sex was likely present in the common ancestor of all eukaryotes. Furthermore, these recent results suggest that sex can be maintained even when very little genetic variability is produced.
Prior to 2013, when the "one fungus, one name" nomenclature change came into effect, Penicillium was used as the genus for anamorph (clonal forms) of fungi and Talaromyces was used for the teleomorph (sexual forms) of fungi. After 2013 however, fungi were reclassified based on their genetic relatedness to each other and now the genera Penicillium and Talaromyces both contain some species capable of only clonal reproduction and others that can reproduce sexually.
| Biology and health sciences | Basics | Plants |
162269 | https://en.wikipedia.org/wiki/Magnesium%20oxide | Magnesium oxide | Magnesium oxide (MgO), or magnesia, is a white hygroscopic solid mineral that occurs naturally as periclase and is a source of magnesium (see also oxide). It has an empirical formula of MgO and consists of a lattice of Mg2+ ions and O2− ions held together by ionic bonding. Magnesium hydroxide forms in the presence of water (MgO + H2O → Mg(OH)2), but it can be reversed by heating it to remove moisture.
Magnesium oxide was historically known as magnesia alba (literally, the white mineral from Magnesia), to differentiate it from magnesia nigra, a black mineral containing what is now known as manganese.
Related oxides
While "magnesium oxide" normally refers to MgO, the compound magnesium peroxide MgO2 is also known. According to evolutionary crystal structure prediction, MgO2 is thermodynamically stable at pressures above 116 GPa (gigapascals), and a semiconducting suboxide Mg3O2 is thermodynamically stable above 500 GPa. Because of its stability, MgO is used as a model system for investigating vibrational properties of crystals.
Electric properties
Pure MgO is not conductive and has a high resistance to electric current at room temperature. Pure MgO powder has a relative permittivity between 3.2 and 9.9, with an approximate dielectric loss of tan(δ) > 2.16×10⁻³ at 1 kHz.
Production
Magnesium oxide is produced by the calcination of magnesium carbonate or magnesium hydroxide. The latter is obtained by the treatment of magnesium chloride solutions, typically seawater, with limewater or milk of lime.
Mg2+ + Ca(OH)2 → Mg(OH)2 + Ca2+
Calcining at different temperatures produces magnesium oxide of different reactivity. High temperatures (1500–2000 °C) diminish the available surface area and produce dead-burned (often called dead burnt) magnesia, an unreactive form used as a refractory. Calcining at 1000–1500 °C produces hard-burned magnesia, which has limited reactivity, and calcining at lower temperatures (700–1000 °C) produces light-burned magnesia, a reactive form also known as caustic calcined magnesia. Although some decomposition of the carbonate to oxide occurs at temperatures below 700 °C, the resulting materials appear to reabsorb carbon dioxide from the air.
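The temperature ranges above map directly onto the three commercial grades, as in the short sketch below; the handling of the exact boundary temperatures (1000 °C and 1500 °C) is a simplifying assumption.

```python
# Mapping calcination temperature to the magnesia grades described above.
# Boundary values are those quoted in the text; treatment of exact boundaries is a simplification.

def magnesia_grade(calcination_temp_c):
    if 700 <= calcination_temp_c < 1000:
        return "light-burned (caustic calcined) magnesia - reactive"
    if 1000 <= calcination_temp_c < 1500:
        return "hard-burned magnesia - limited reactivity"
    if 1500 <= calcination_temp_c <= 2000:
        return "dead-burned magnesia - unreactive, refractory grade"
    return "outside the ranges quoted above"

for temp in (800, 1200, 1800):
    print(temp, "degC ->", magnesia_grade(temp))
```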
Applications
Refractory insulator
MgO is prized as a refractory material, i.e. a solid that is physically and chemically stable at high temperatures. According to a 2006 reference book, it has the useful attributes of high thermal conductivity and low electrical conductivity. It is used as a refractory material for crucibles and as an insulator in heat-resistant electrical cable.
Biomedical
Among metal oxide nanoparticles, magnesium oxide nanoparticles (MgO NPs) have distinct physicochemical and biological properties, including biocompatibility, biodegradability, high bioactivity, significant antibacterial activity, and good mechanical properties, which make them a good choice as a reinforcement in composites.
Heating elements
It is used extensively as an electrical insulator in tubular construction heating elements, as in electric stove and cooktop heating elements. Several mesh sizes are available; the most commonly used are 40 and 80 mesh, per the American Foundry Society. This extensive use is due to its high dielectric strength and average thermal conductivity. MgO is usually crushed and compacted with minimal air gaps or voids.
Cement
MgO is one of the components in Portland cement in dry process plants.
Sorel cement uses MgO as the main component in combination with MgCl2 and water.
Fertilizer
MgO has an important place as a commercial plant fertilizer and as animal feed.
Fireproofing
It is a principal fireproofing ingredient in construction materials. As a construction material, magnesium oxide wallboards have several attractive characteristics: fire resistance, termite resistance, moisture resistance, mold and mildew resistance, and strength. They also have a severe downside: the material attracts moisture and can cause moisture damage to surrounding materials.
Medical
Magnesium oxide is used for relief of heartburn and indigestion, as an antacid, as a magnesium supplement, and as a short-term laxative. Side effects of magnesium oxide may include nausea and cramping. With long-term use in quantities sufficient to obtain a laxative effect, enteroliths may rarely form, resulting in bowel obstruction.
Waste treatment
Magnesium oxide is used extensively in the soil and groundwater remediation, wastewater treatment, drinking water treatment, air emissions treatment, and waste treatment industries for its acid buffering capacity and related effectiveness in stabilizing dissolved heavy metal species.
Many heavy metal species, such as lead and cadmium, are least soluble in water under mildly basic conditions (pH in the range 8–11). Solubility of metals increases their undesired bioavailability and mobility in soil and groundwater. Granular MgO is often blended into metal-contaminated soil or waste material, which is also commonly of low pH (acidic), in order to drive the pH into the 8–10 range. Metal-hydroxide complexes tend to precipitate out of aqueous solution in the pH range of 8–10.
MgO is packed in bags around transuranic waste in the disposal cells (panels) at the Waste Isolation Pilot Plant, as a getter to minimize the complexation of uranium and other actinides by carbonate ions and so to limit the solubility of radionuclides. The use of MgO is preferred over CaO since the resulting hydration product, Mg(OH)2, is less soluble and releases less hydration heat. Another advantage is that it imposes a lower pH value (about 10.5) in case of accidental water ingress into the dry salt layers, in contrast to the more soluble Ca(OH)2, which would create a higher pH of 12.5 (strongly alkaline conditions). Because Mg2+ is the second most abundant cation in seawater and in rock salt, the potential release of magnesium ions dissolving in brines intruding the deep geological repository is also expected to minimize geochemical disruption.
Niche uses
As a food additive, it is used as an anticaking agent, and it is known to the US Food and Drug Administration as an anticaking agent for cacao products, canned peas, and frozen dessert. It has an E number of E530.
It serves as a reagent in the installation of the carboxybenzyl (Cbz) group, using benzyl chloroformate in EtOAc, for the N-protection of amines and amides.
Doping MgO (about 1–5% by weight) into hydroxyapatite, a bioceramic mineral, increases the fracture toughness by migrating to grain boundaries, where it reduces grain size and changes the fracture mode from intergranular to transgranular.
Pressed MgO is used as an optical material. It is transparent from 0.3 to 7 μm. The refractive index is 1.72 at 1 μm and the Abbe number is 53.58. It is sometimes known by the Eastman Kodak trademarked name Irtran-5, although this designation is obsolete. Crystalline pure MgO is available commercially and has a small use in infrared optics.
An aerosolized solution of MgO is used in library science and collections management for the deacidification of at-risk paper items. In this process, the alkalinity of MgO (and similar compounds) neutralizes the relatively high acidity characteristic of low-quality paper, thus slowing the rate of deterioration.
Magnesium oxide is used as an oxide barrier in spin-tunneling devices. Owing to the crystalline structure of its thin films, which can be deposited by magnetron sputtering, for example, it shows characteristics superior to those of the commonly used amorphous Al2O3. In particular, spin polarization of about 85% has been achieved with MgO versus 40–60 % with aluminium oxide. The value of tunnel magnetoresistance is also significantly higher for MgO (600% at room temperature and 1,100 % at 4.2 K) than Al2O3 (ca. 70% at room temperature).
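The spin-polarization and magnetoresistance figures quoted above are roughly consistent with Julliere's simple two-electrode model, shown here only as an illustrative cross-check rather than as an analysis from the cited work. For a tunnel junction with electrode spin polarizations P1 and P2, the model gives

$$\mathrm{TMR} = \frac{2\,P_1 P_2}{1 - P_1 P_2}.$$

With identical electrodes and P ≈ 0.85 (MgO barrier), this yields TMR ≈ 2(0.85)²/(1 − 0.85²) ≈ 5.2, i.e. about 520%, of the same order as the ~600% room-temperature value quoted; with P ≈ 0.5 (aluminium oxide), it yields about 67%, close to the ~70% quoted.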
MgO is a common pressure transmitting medium used in high pressure apparatuses like the multi-anvil press.
Brake lining
Magnesia is used in brake linings for its heat conductivity and intermediate hardness. It helps dissipate heat from friction surfaces, preventing overheating, while minimizing wear on metal components. Its stability under high temperatures ensures reliable and durable braking performance in automotive and industrial applications.
Thin film transistors
In thin film transistors (TFTs), MgO is often used as a dielectric material or an insulator due to its high thermal stability, excellent insulating properties, and wide bandgap. Optimized IGZO/MgO TFTs have demonstrated an electron mobility of 1.63 cm²/Vs, an on/off current ratio of 10⁶, and a subthreshold swing of 0.50 V/decade at −0.11 V. These TFTs are of interest for low-power applications, wearable devices, and radiation-hardened electronics.
Historical uses
It was historically used as a reference white color in colorimetry, owing to its good diffusing and reflectivity properties. It may be smoked onto the surface of an opaque material to form an integrating sphere.
Early gas mantle designs for lighting, such as the Clamond basket, consisted mainly of magnesium oxide.
Precautions
Inhalation of magnesium oxide fumes can cause metal fume fever.
| Physical sciences | Alkali oxide salts | Chemistry |
162275 | https://en.wikipedia.org/wiki/Chlamydomonas | Chlamydomonas | Chlamydomonas ( ) is a genus of green algae consisting of about 150 species of unicellular flagellates, found in stagnant water and on damp soil, in freshwater, seawater, and even in snow as "snow algae". Chlamydomonas is used as a model organism for molecular biology, especially studies of flagellar motility and chloroplast dynamics, biogenesis, and genetics. One of the many striking features of Chlamydomonas is that it contains ion channels (channelrhodopsins) that are directly activated by light. Some regulatory systems of Chlamydomonas are more complex than their homologs in Gymnosperms, with evolutionarily related regulatory proteins being larger and containing additional domains.
Molecular phylogeny studies indicated that the traditional genus Chlamydomonas as defined using morphological data, was polyphyletic within Volvocales. Many species were subsequently reclassified (e.g., Oogamochlamys, Lobochlamys), and many other "Chlamydomonas" s.l. lineages are still to be reclassified.
Etymology
The name Chlamydomonas comes from the Greek roots chlamys, meaning cloak or mantle, and monas, meaning solitary, now used conventionally for unicellular flagellates.
Description
Morphology
All Chlamydomonas are motile, unicellular organisms. Cells are generally spherical to cylindrical in shape, but may be elongately spindle-shaped, and a papilla may be present or absent. Chloroplasts are green and usually cup-shaped. A key feature of the genus is its two anterior flagella, each as long as the other. The flagellar microtubules may each be disassembled by the cell to provide spare material to rebuild the other's microtubules if they are damaged.
The cell wall is made up of a glycoprotein and non-cellulosic polysaccharides rather than cellulose.
Two anteriorly inserted whiplash flagella. Each flagellum originates from a basal granule in the anterior papillate or non-papillate region of the cytoplasm. Each flagellum shows a typical 9+2 arrangement of the component fibrils.
Contractile vacuoles are near the bases of flagella.
Prominent cup or bowl-shaped chloroplast is present. The chloroplast contains bands composed of a variable number of the photosynthetic thylakoids which are not organised into grana-like structures.
The nucleus is enclosed in a cup-shaped chloroplast, which has a single large pyrenoid where starch is formed from photosynthetic products. Pyrenoid with starch sheath is present in the posterior end of the chloroplast.
Eye spot present in the anterior portion of the chloroplast. It consists of two or three, more or less parallel rows of linearly arranged fat droplets.
Species
About 500 species of Chlamydomonas have been described.
Chlamydomonas acidophila
Chlamydomonas caudata
Chlamydomonas ehrenbergii
Chlamydomonas elegans
Chlamydomonas moewusii
Chlamydomonas muriella
Chlamydomonas nivalis
Chlamydomonas ovoidae
Chlamydomonas priscuii
Chlamydomonas smithii
Chlamydomonas reinhardtii
Ecology
Chlamydomonas is widely distributed in freshwater or damp soil. It is generally found in a habitat rich in ammonium salt. It possesses red eye spots for photosensitivity and reproduces both asexually and sexually.
Chlamydomonas's asexual reproduction occurs by zoospores, aplanospores, hypnospores, or a palmella stage, while its sexual reproduction is through isogamy, anisogamy or oogamy.
Nutrition
Most species are obligate phototrophs but C. reinhardtii and C. dysostosis are facultative heterotrophs that can grow in the dark in the presence of acetate as a carbon source.
Uses
Some Chlamydomonas are edible.
| Biology and health sciences | Green algae | Plants |
162296 | https://en.wikipedia.org/wiki/Aphid | Aphid | Aphids are small sap-sucking insects and members of the superfamily Aphidoidea. Common names include greenfly and blackfly, although individuals within a species can vary widely in color. The group includes the fluffy white woolly aphids. A typical life cycle involves flightless females giving live birth to female nymphs—who may also be already pregnant, an adaptation scientists call telescoping generations—without the involvement of males. Maturing rapidly, females breed profusely so that the number of these insects multiplies quickly. Winged females may develop later in the season, allowing the insects to colonize new plants. In temperate regions, a phase of sexual reproduction occurs in the autumn, with the insects often overwintering as eggs.
The life cycle of some species involves an alternation between two species of host plants, for example between an annual crop and a woody plant. Some species feed on only one type of plant, while others are generalists, colonizing many plant groups. About 5,000 species of aphid have been described, all included in the family Aphididae. Around 400 of these are found on food and fiber crops, and many are serious pests of agriculture and forestry, as well as an annoyance for gardeners. So-called dairying ants have a mutualistic relationship with aphids, tending them for their honeydew and protecting them from predators.
Aphids are among the most destructive insect pests on cultivated plants in temperate regions. In addition to weakening the plant by sucking sap, they act as vectors for plant viruses and disfigure ornamental plants with deposits of honeydew and the subsequent growth of sooty moulds. Because of their ability to rapidly increase in numbers by asexual reproduction and telescopic development, they are a highly successful group of organisms from an ecological standpoint.
Large-scale control of aphids is not easy. Insecticides do not always produce reliable results, because of resistance to several classes of insecticide, and because aphids often feed on the undersides of leaves, and are thus shielded. On a small scale, water jets and soap sprays are quite effective. Natural enemies include predatory ladybugs, hoverfly larvae, parasitic wasps, aphid midge larvae, crab spiders, lacewing larvae, and entomopathogenic fungi. An integrated pest management strategy using biological pest control can work, but is difficult to achieve except in enclosed environments such as greenhouses.
Etymology
The name aphid is from Carl Linnaeus's modern Latin, most likely from misreading the Middle Greek κόρῐς, koris, 'bug' as αφῐς, aphis.
Distribution
Aphids are distributed worldwide, but are most common in temperate zones. In contrast to many taxa, aphid species diversity is much lower in the tropics than in the temperate zones. They can migrate great distances, mainly through passive dispersal by winds. Winged aphids may also rise up in the day as high as 600 m where they are transported by strong winds. For example, the currant-lettuce aphid, Nasonovia ribisnigri, is believed to have spread from New Zealand to Tasmania around 2004 through easterly winds. Aphids have also been spread by human transportation of infested plant materials, making some species nearly cosmopolitan in their distribution.
Evolution
Fossil history
Aphids, and the closely related adelgids and phylloxerans, probably evolved from a common ancestor in the Early Permian period. They probably fed on plants like Cordaitales or Cycadophyta. With their soft bodies, aphids do not fossilize well, and the oldest known fossil is of the species Triassoaphis cubitus from the Triassic. They do, however, sometimes get stuck in plant exudates which solidify into amber. By 1967, when Professor Ole Heie wrote his monograph Studies on Fossil Aphids, about sixty species had been described from the Triassic, Jurassic, Cretaceous and mostly the Tertiary periods, with Baltic amber contributing another forty species. The total number of species was small, but increased considerably with the appearance of the angiosperms, as this allowed aphids to specialise, the speciation of aphids going hand-in-hand with the diversification of flowering plants. The earliest aphids were probably polyphagous, with monophagy developing later. It has been hypothesized that the ancestors of the Adelgidae lived on conifers while those of the Aphididae fed on the sap of Podocarpaceae or Araucariaceae that survived extinctions in the late Cretaceous. Organs like the cornicles did not appear until the Cretaceous period. One study alternatively suggests that ancestral aphids may have lived on angiosperm bark and that feeding on leaves may be a derived trait. The Lachninae have long mouthparts that are suitable for living on bark, and it has been suggested that the mid-Cretaceous ancestor fed on the bark of angiosperm trees, switching to leaves of conifer hosts in the late Cretaceous. The Phylloxeridae may well be the oldest family still extant, but their fossil record is limited to the Lower Miocene Palaeophylloxera.
Taxonomy
Late 20th-century reclassification within the Hemiptera reduced the old taxon "Homoptera" to two suborders: Sternorrhyncha (aphids, whiteflies, scales, psyllids, etc.) and Auchenorrhyncha (cicadas, leafhoppers, treehoppers, planthoppers, etc.) with the suborder Heteroptera containing a large group of insects known as the true bugs. The infraorder Aphidomorpha within the Sternorrhyncha varies with circumscription with several fossil groups being especially difficult to place but includes the Adelgoidea, the Aphidoidea and the Phylloxeroidea. Some authors use a single superfamily Aphidoidea within which the Phylloxeridae and Adelgidae are also included while others have Aphidoidea with a sister superfamily Phylloxeroidea within which the Adelgidae and Phylloxeridae are placed. Early 21st-century reclassifications substantially rearranged the families within Aphidoidea: some old families were reduced to subfamily rank (e.g., Eriosomatidae), and many old subfamilies were elevated to family rank. The most recent authoritative classifications have three superfamilies Adelgoidea, Phylloxeroidea and Aphidoidea. The Aphidoidea includes a single large family Aphididae that includes all the ~5000 extant species.
Phylogeny
External
Aphids, adelgids, and phylloxerids are very closely related within the suborder Sternorrhyncha, the plant-sucking bugs. They are either placed in the insect superfamily Aphidoidea or into the superfamily Phylloxeroidea which contains the family Adelgidae and the family Phylloxeridae. Like aphids, phylloxera feed on the roots, leaves, and shoots of grape plants, but unlike aphids, do not produce honeydew or cornicle secretions. Phylloxera (Daktulosphaira vitifoliae) are insects which caused the Great French Wine Blight that devastated European viticulture in the 19th century. Similarly, adelgids or woolly conifer aphids, also feed on plant phloem and are sometimes described as aphids, but are more properly classified as aphid-like insects, because they have no cauda or cornicles.
The treatment of the groups especially concerning fossil groups varies greatly due to difficulties in resolving relationships. Most modern treatments include the three superfamilies, the Adelogidea, the Aphidoidea, and the Phylloxeroidea within the infraorder Aphidomorpha along with several fossil groups.
Internal
The phylogenetic tree, based on Papasotiropoulos 2013 and Kim 2011, with additions from Ortiz-Rivas and Martinez-Torres 2009, shows the internal phylogeny of the Aphididae.
It has been suggested that the phylogeny of the aphid groups might be revealed by examining the phylogeny of their bacterial endosymbionts, especially the obligate endosymbiont Buchnera. The results depend on the assumption that the symbionts are strictly transmitted vertically through the generations. This assumption is well supported by the evidence, and several phylogenetic relationships have been suggested on the basis of endosymbiont studies.
Anatomy
Most aphids have soft bodies, which may be green, black, brown, pink, or almost colorless. Aphids have antennae with two short, broad basal segments and up to four slender terminal segments. They have a pair of compound eyes, with an ocular tubercle behind and above each eye, made up of three lenses (called triommatidia). They feed on sap using sucking mouthparts called stylets, enclosed in a sheath called a rostrum, which is formed from modifications of the mandible and maxilla of the insect mouthparts.
They have long, thin legs with two-jointed, two-clawed tarsi. The majority of aphids are wingless, but winged forms are produced at certain times of year in many species. Most aphids have a pair of cornicles (siphunculi), abdominal tubes on the dorsal surface of their fifth abdominal segment, through which they exude droplets of a quick-hardening defensive fluid containing triacylglycerols, called cornicle wax. Other defensive compounds can also be produced by some species. Aphids have a tail-like protrusion called a cauda above their rectal apertures. They have lost their Malpighian tubules.
When host plant quality becomes poor or conditions become crowded, some aphid species produce winged offspring (alates) that can disperse to other food sources. The mouthparts or eyes can be small or missing in some species and forms.
Diet
Many aphid species are monophagous (that is, they feed on only one plant species). Others, like the green peach aphid, feed on hundreds of plant species across many families. About 10% of species feed on different plants at different times of the year.
A new host plant is chosen by a winged adult using visual cues, followed by olfaction using the antennae; if the plant smells right, the next action is probing the surface upon landing. The stylet is inserted and saliva secreted, the sap is sampled, the xylem may be tasted and finally the phloem is tested. Aphid saliva may inhibit phloem-sealing mechanisms and has pectinases that ease penetration. Non-host plants can be rejected at any stage of the probe, but the transfer of viruses occurs early in the investigation process, at the time of the introduction of the saliva, so non-host plants can become infected.
Aphids usually feed passively on sap of phloem vessels in plants, as do many other hemipterans such as scale insects and cicadas. Once a phloem vessel is punctured, the sap, which is under pressure, is forced into the aphid's food canal. Occasionally, aphids also ingest xylem sap, which is a more dilute diet than phloem sap, as the concentrations of sugars and amino acids are 1% of those in the phloem. Xylem sap is under negative hydrostatic pressure and requires active sucking, suggesting an important role in aphid physiology. As xylem sap ingestion has been observed following a dehydration period, aphids are thought to consume xylem sap to replenish their water balance, the consumption of the dilute xylem sap permitting aphids to rehydrate. However, recent data showed that aphids consume more xylem sap than expected, and they notably do so when they are not dehydrated and when their fecundity decreases. This suggests that aphids, and potentially all the phloem-sap feeding species of the order Hemiptera, consume xylem sap for reasons other than replenishing water balance. Although aphids passively take in phloem sap, which is under pressure, they can also draw fluid at negative or atmospheric pressure using the cibarial-pharyngeal pump mechanism present in their head.
Xylem sap consumption may be related to osmoregulation. High osmotic pressure in the stomach, caused by high sucrose concentration, can lead to water transfer from the hemolymph to the stomach, thus resulting in hyperosmotic stress and eventually to the death of the insect. Aphids avoid this fate by osmoregulating through several processes. Sucrose concentration is directly reduced by assimilating sucrose toward metabolism and by synthesizing oligosaccharides from several sucrose molecules, thus reducing the solute concentration and consequently the osmotic pressure. Oligosaccharides are then excreted through honeydew, explaining its high sugar concentrations, which can then be used by other animals such as ants. Furthermore, water is transferred from the hindgut, where osmotic pressure has already been reduced, to the stomach to dilute stomach content. Eventually, aphids consume xylem sap to dilute the stomach osmotic pressure. All these processes function synergetically, and enable aphids to feed on high-sucrose-concentration plant sap, as well as to adapt to varying sucrose concentrations.
Plant sap is an unbalanced diet for aphids, as it lacks essential amino acids, which aphids, like all animals, cannot synthesise, and possesses a high osmotic pressure due to its high sucrose concentration. Essential amino acids are provided to aphids by bacterial endosymbionts, harboured in special cells, bacteriocytes. These symbionts recycle glutamate, a metabolic waste of their host, into essential amino acids.
Carotenoids and photoheterotrophy
Some species of aphids have acquired the ability to synthesise red carotenoids by horizontal gene transfer from fungi. They are the only animals other than two-spotted spider mites and the oriental hornet with this capability. Using their carotenoids, aphids may well be able to absorb solar energy and convert it to a form that their cells can use, ATP. This is the only known example of photoheterotrophy in animals. The carotene pigments in aphids form a layer close to the surface of the cuticle, ideally placed to absorb sunlight. The excited carotenoids seem to reduce NAD to NADH which is oxidized in the mitochondria for energy.
Reproduction
The simplest reproductive strategy is for an aphid to have a single host all year round. On this it may alternate between sexual and asexual generations (holocyclic) or alternatively, all young may be produced by parthenogenesis, eggs never being laid (anholocyclic). Some species can have both holocyclic and anholocyclic populations under different circumstances but no known aphid species reproduce solely by sexual means. The alternation of sexual and asexual generations may have evolved repeatedly.
However, aphid reproduction is often more complex than this and involves migration between different host plants. In about 10% of species, there is an alternation between woody (primary hosts) on which the aphids overwinter and herbaceous (secondary) host plants, where they reproduce abundantly in the summer. A few species can produce a soldier caste, other species show extensive polyphenism under different environmental conditions and some can control the sex ratio of their offspring depending on external factors.
When a typical sophisticated reproductive strategy is used, only females are present in the population at the beginning of the seasonal cycle (although a few species of aphids have been found to have both male and female sexes at this time). The overwintering eggs that hatch in the spring result in females, called fundatrices (stem mothers). Reproduction typically does not involve males (parthenogenesis) and results in a live birth (viviparity). The live young are produced by pseudoplacental viviparity, which is the development of eggs, deficient in the yolk, the embryos fed by a tissue acting as a placenta. The young emerge from the mother soon after hatching.
Eggs are parthenogenetically produced without meiosis and the offspring are clonal to their mother, so they are all female (thelytoky). The embryos develop within the mothers' ovarioles, which then give birth to live (already hatched) first-instar female nymphs. As the eggs begin to develop immediately after ovulation, an adult female can house developing female nymphs which already have parthenogenetically developing embryos inside them (i.e. they are born pregnant). This telescoping of generations enables aphids to increase in number with great rapidity. The offspring resemble their parent in every way except size. Thus, a female's diet can affect the body size and birth rate of more than two generations (daughters and granddaughters).
This process repeats itself throughout the summer, producing multiple generations that typically live 20 to 40 days. For example, some species of cabbage aphids (like Brevicoryne brassicae) can produce up to 41 generations of females in a season. Thus, one female hatched in spring can theoretically produce billions of descendants, were they all to survive.
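The "billions of descendants" figure follows from simple compounding across clonal generations. The sketch below takes the 41 generations quoted above; the number of surviving daughters per female is a hypothetical assumption, since the article does not give one.

```python
# Theoretical clonal growth over parthenogenetic generations, as described above.
# The per-generation fecundity (surviving daughters per female) is a hypothetical assumption;
# the article quotes only the number of generations (up to 41).

def theoretical_descendants(generations, daughters_per_female):
    """Total descendants of one spring foundress if every daughter survives and reproduces."""
    total = 0.0
    cohort = 1.0  # the single overwintered foundress
    for _ in range(generations):
        cohort *= daughters_per_female
        total += cohort
    return total

print(f"{theoretical_descendants(41, 2):.2e}")  # ~4.4e12 descendants with 2 daughters per female
print(f"{theoretical_descendants(41, 3):.2e}")  # ~5.5e19 descendants with 3 daughters per female
```

Even very modest assumed fecundities reach billions well before generation 41, which is why the theoretical figure is quoted with the caveat "were they all to survive".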
In autumn, aphids reproduce sexually and lay eggs. Environmental factors such as a change in photoperiod and temperature, or perhaps a lower food quantity or quality, causes females to parthenogenetically produce sexual females and males. The males are genetically identical to their mothers except that, with the aphids' X0 sex-determination system, they have one fewer sex chromosome. These sexual aphids may lack wings or even mouthparts. Sexual females and males mate, and females lay eggs that develop outside the mother. The eggs survive the winter and hatch into winged (alate) or wingless females the following spring. This occurs in, for example, the life cycle of the rose aphid (Macrosiphum rosae), which may be considered typical of the family. However, in warm environments, such as in the tropics or a greenhouse, aphids may go on reproducing asexually for many years.
Aphids reproducing asexually by parthenogenesis can have genetically identical winged and non-winged female progeny. Control is complex; some aphids alternate during their life-cycles between genetic control (polymorphism) and environmental control (polyphenism) of production of winged or wingless forms. Winged progeny tend to be produced more abundantly under unfavorable or stressful conditions. Some species produce winged progeny in response to low food quality or quantity, e.g. when a host plant is starting to senesce. The winged females migrate to start new colonies on a new host plant. For example, the apple aphid (Aphis pomi), after producing many generations of wingless females, gives rise to winged forms that fly to other branches or trees of its typical food plant. Aphids that are attacked by ladybugs, lacewings, parasitoid wasps, or other predators can change the dynamics of their progeny production. When aphids are attacked by these predators, alarm pheromones, in particular beta-farnesene, are released from the cornicles. These alarm pheromones cause several behavioral modifications that, depending on the aphid species, can include walking away and dropping off the host plant. Additionally, alarm pheromone perception can induce the aphids to produce winged progeny that can leave the host plant in search of a safer feeding site. Viral infections, which can be extremely harmful to aphids, can also lead to the production of winged offspring. For example, Densovirus infection has a negative impact on rosy apple aphid (Dysaphis plantaginea) reproduction, but contributes to the development of aphids with wings, which can transmit the virus more easily to new host plants. Additionally, symbiotic bacteria that live inside the aphids can also alter aphid reproductive strategies based on exposure to environmental stressors.
In the autumn, host-alternating (heteroecious) aphid species produce a special winged generation that flies to different host plants for the sexual part of the life cycle. Flightless female and male sexual forms are produced and lay eggs. Some species such as Aphis fabae (black bean aphid), Metopolophium dirhodum (rose-grain aphid), Myzus persicae (peach-potato aphid), and Rhopalosiphum padi (bird cherry-oat aphid) are serious pests. They overwinter on primary hosts on trees or bushes; in summer, they migrate to their secondary host on a herbaceous plant, often a crop, then the gynoparae return to the tree in autumn. Another example is the soybean aphid (Aphis glycines). As fall approaches, the soybean plants begin to senesce from the bottom upwards. The aphids are forced upwards and start to produce winged forms, first females and later males, which fly off to the primary host, buckthorn. Here they mate and overwinter as eggs.
Ecology
Ant mutualism
Some species of ants farm aphids, protecting them on the plants where they are feeding, and consuming the honeydew the aphids release from the terminations of their alimentary canals. This is a mutualistic relationship, with these dairying ants milking the aphids by stroking them with their antennae. Although mutualistic, the feeding behaviour of aphids is altered by ant attendance. Aphids attended by ants tend to increase the production of honeydew in smaller drops with a greater concentration of amino acids.
Some farming ant species gather and store the aphid eggs in their nests over the winter. In the spring, the ants carry the newly hatched aphids back to the plants. Some species of dairying ants (such as the European yellow meadow ant, Lasius flavus) manage large herds of aphids that feed on roots of plants in the ant colony. Queens leaving to start a new colony take an aphid egg to found a new herd of underground aphids in the new colony. These farming ants protect the aphids by fighting off aphid predators. Some bees in coniferous forests collect aphid honeydew to make forest honey.
An interesting variation in ant–aphid relationships involves lycaenid butterflies and Myrmica ants. For example, Niphanda fusca butterflies lay eggs on plants where ants tend herds of aphids. The eggs hatch as caterpillars which feed on the aphids. The ants do not defend the aphids from the caterpillars, since the caterpillars produce a pheromone which deceives the ants into treating them like ants, and carrying the caterpillars into their nest. Once there, the ants feed the caterpillars, which in return produce honeydew for the ants. When the caterpillars reach full size, they crawl to the colony entrance and form cocoons. After two weeks, the adult butterflies emerge and take flight. At this point, the ants attack the butterflies, but the butterflies have a sticky wool-like substance on their wings that disables the ants' jaws, allowing the butterflies to fly away without being harmed.
Another ant-mimicking gall aphid, Paracletus cimiciformis (Eriosomatinae), has evolved a complex double strategy involving two morphs of the same clone and Tetramorium ants. Aphids of the round morph cause the ants to farm them, as with many other aphids. The flat morph aphids are aggressive mimics with a "wolf in sheep's clothing" strategy: they have hydrocarbons in their cuticle that mimic those of the ants, and the ants carry them into the brood chamber of the ants' nest and raise them like ant larvae. Once there, the flat morph aphids behave like predators, drinking the body fluids of ant larvae.
Bacterial endosymbiosis
Endosymbiosis with micro-organisms is common in insects, with more than 10% of insect species relying on intracellular bacteria for their development and survival. Aphids harbour a vertically transmitted (from parent to its offspring) obligate symbiosis with Buchnera aphidicola, the primary symbiont, inside specialized cells, the bacteriocytes. Five of the bacteria genes have been transferred to the aphid nucleus. The original association is estimated to have occurred in a common ancestor and enabled aphids to exploit a new ecological niche, feeding on phloem-sap of vascular plants. B. aphidicola provides its host with essential amino acids, which are present in low concentrations in plant sap. The metabolites from endosymbionts are also excreted in honeydew. The stable intracellular conditions, as well as the bottleneck effect experienced during the transmission of a few bacteria from the mother to each nymph, increase the probability of transmission of mutations and gene deletions. As a result, the size of the B. aphidicola genome is greatly reduced, compared to its putative ancestor. Despite the apparent loss of transcription factors in the reduced genome, gene expression is highly regulated, as shown by the ten-fold variation in expression levels between different genes under normal conditions. Buchnera aphidicola gene transcription, although not well understood, is thought to be regulated by a small number of global transcriptional regulators and/or through nutrient supplies from the aphid host.
Some aphid colonies also harbour secondary or facultative (optional extra) bacterial symbionts. These are vertically transmitted, and sometimes also horizontally (from one lineage to another and possibly from one species to another). So far, the role of only some of the secondary symbionts has been described; Regiella insecticola plays a role in defining the host-plant range, Hamiltonella defensa provides resistance to parasitoids but only when it is in turn infected by the bacteriophage APSE, and Serratia symbiotica prevents the deleterious effects of heat.
Predators
Aphids are eaten by many bird and insect predators. In a study on a farm in North Carolina, six species of passerine bird consumed nearly a million aphids per day between them, the top predators being the American goldfinch, with aphids forming 83% of its diet, and the vesper sparrow. Insects that attack aphids include the adults and larvae of predatory ladybirds, hoverfly larvae, parasitic wasps, aphid midge larvae, "aphid lions" (the larvae of green lacewings), and arachnids such as spiders. Among ladybirds, Myzia oblongoguttata is a dietary specialist which only feeds on conifer aphids, whereas Adalia bipunctata and Coccinella septempunctata are generalists, feeding on large numbers of species. The eggs are laid in batches, each female laying several hundred. Female hoverflies lay several thousand eggs. The adults feed on pollen and nectar but the larvae feed voraciously on aphids; Eupeodes corollae adjusts the number of eggs laid to the size of the aphid colony.
Aphids are often infected by bacteria, viruses, and fungi. They are also affected by the weather, such as precipitation, temperature, and wind. Fungi that attack aphids include Neozygites fresenii, Entomophthora, Beauveria bassiana, Metarhizium anisopliae, and entomopathogenic fungi such as Lecanicillium lecanii. Aphids brush against the microscopic spores, which stick to the aphid, germinate, and penetrate the aphid's skin. The fungus grows in the aphid's hemolymph. After about three days, the aphid dies and the fungus releases more spores into the air. Infected aphids are covered with a woolly mass that progressively grows thicker until the aphid is obscured. Often, the visible fungus is not the one that killed the aphid, but a secondary infection.
Aphids can be easily killed by unfavourable weather, such as late spring freezes. Excessive heat kills the symbiotic bacteria that some aphids depend on, which makes the aphids infertile. Rain prevents winged aphids from dispersing, and knocks aphids off plants and thus kills them from the impact or by starvation, but cannot be relied on for aphid control.
Anti-predator defences
Most aphids have little protection from predators. Some species interact with plant tissues forming a gall, an abnormal swelling of plant tissue. Aphids can live inside the gall, which provides protection from predators and the elements. A number of galling aphid species are known to produce specialised "soldier" forms, sterile nymphs with defensive features which defend the gall from invasion. For example, Alexander's horned aphids are a type of soldier aphid that has a hard exoskeleton and pincer-like mouthparts. A woolly aphid, Colophina clematis, has first instar "soldier" nymphs that protect the aphid colony, killing larvae of ladybirds, hoverflies and the flower bug Anthocoris nemoralis by climbing on them and inserting their stylets.
Although aphids cannot fly for most of their life cycle, they can escape predators and accidental ingestion by herbivores by dropping off the plant onto the ground. Other species use the soil as permanent protection, feeding on the vascular systems of roots and remaining underground all their lives. They are often attended by ants for the honeydew they produce, and are carried from plant to plant by the ants through their tunnels.
Some species of aphid, known as "woolly aphids" (Eriosomatinae), excrete a "fluffy wax coating" for protection. The cabbage aphid, Brevicoryne brassicae, sequesters secondary metabolites from its host, stores them and releases chemicals that produce a violent chemical reaction and strong mustard oil smell to repel predators. Peptides produced by aphids, Thaumatins, are thought to provide them with resistance to some fungi.
It was common at one time to suggest that the cornicles were the source of the honeydew, and this was even included in the Shorter Oxford English Dictionary and the 2008 edition of the World Book Encyclopedia. In fact, honeydew secretions are produced from the anus of the aphid, while cornicles mostly produce defensive chemicals such as waxes. There also is evidence of cornicle wax attracting aphid predators in some cases.
Some clones of Aphis craccivora are sufficiently toxic to the invasive and dominant predatory ladybird Harmonia axyridis to suppress it locally, favouring other ladybird species; the toxicity is in this case narrowly specific to the dominant predator species.
Parasitoids
Aphids are abundant and widespread, and serve as hosts to a large number of parasitoids, many of them very small parasitoid wasps.
One species, Aphis ruborum, for example, is host to at least 12 species of parasitoid wasps. Parasitoids have been investigated intensively as biological control agents, and many are used commercially for this purpose.
Plant-aphid interactions
Plants mount local and systemic defenses against aphid attack. Young leaves in some plants contain chemicals that discourage attack, whereas the older leaves have lost this resistance; in other plant species, resistance is acquired by older tissues and the young shoots are vulnerable. Volatile products from interplanted onions have been shown to prevent aphid attack on adjacent potato plants by encouraging the production of terpenoids, a benefit exploited in the traditional practice of companion planting; plants neighboring infested plants have also shown increased root growth at the expense of the extension of aerial parts. The wild potato, Solanum berthaultii, produces an aphid alarm pheromone, (E)-β-farnesene, as an allomone, a pheromone to ward off attack; it effectively repels the aphid Myzus persicae at a range of up to 3 millimetres. S. berthaultii and other wild potato species have a further anti-aphid defence in the form of glandular hairs which, when broken by aphids, discharge a sticky liquid that can immobilise some 30% of the aphids infesting a plant.
Plants exhibiting aphid damage can have a variety of symptoms, such as decreased growth rates, mottled leaves, yellowing, stunted growth, curled leaves, browning, wilting, low yields, and death. The removal of sap creates a lack of vigor in the plant, and aphid saliva is toxic to plants. Aphids frequently transmit plant viruses to their hosts, such as to potatoes, cereals, sugarbeets, and citrus plants.
There are two types of virus transmission in plant-aphid interactions: non-circulative transmission and circulative transmission. In non-circulative transmission, the virus attaches to the aphid's mouthparts and is released when the aphid feeds on a different plant; such viruses favour rapid dispersal by the aphid vector. In circulative transmission, the virus is ingested and passes through the gut lining to enter the hemolymph, where it is circulated throughout the entire body. After reaching the salivary glands, the virus is released in the saliva at the aphid's feeding sites on the plant. Circulatively transmitted viruses allow for long-term feeding by the aphid, which increases the chance that the aphid becomes infected with the virus.
The green peach aphid, Myzus persicae, is a vector for more than 110 plant viruses. Cotton aphids (Aphis gossypii) often infect sugarcane, papaya and peanuts with viruses. In plants which produce the phytoestrogen coumestrol, such as alfalfa, damage by aphids is linked with higher concentrations of coumestrol.
The coating of plants with honeydew can contribute to the spread of fungi which can damage plants. Honeydew produced by aphids has been observed to reduce the effectiveness of fungicides as well.
A hypothesis that insect feeding may improve plant fitness was floated in the mid-1970s by Owen and Wiegert. It was felt that the excess honeydew would nourish soil micro-organisms, including nitrogen fixers. In a nitrogen-poor environment, this could provide an advantage to an infested plant over an uninfested plant. However, this does not appear to be supported by observational evidence.
Sociality
Some aphids show some of the traits of eusociality, joining insects such as ants, bees, and termites. However, there are differences between these sexual social insects and the clonal aphids, which are all descended from a single female parthenogenetically and share an identical genome. About fifty species of aphid, scattered among the closely related, host-alternating lineages Eriosomatinae and Hormaphidinae, have some type of defensive morph. These are gall-creating species, with the colony living and feeding inside a gall that they form in the host's tissues. Among the clonal population of these aphids, there may be several distinct morphs and this lays the foundation for a possible specialization of function, in this case, a defensive caste. The soldier morphs are mostly first and second instars, with the third instar also involved in Eriosoma moriokense; only in Smynthurodes betae are adult soldiers known. The hind legs of soldiers are clawed and heavily sclerotized, and the stylets are robust, making it possible to rupture and crush small predators. The larval soldiers are altruistic individuals, unable to advance to breeding adults but acting permanently in the interests of the colony. Another requirement for the development of sociality is provided by the gall, a colonial home to be defended by the soldiers.
The soldiers of gall-forming aphids also carry out the job of cleaning the gall. The honeydew secreted by the aphids is coated in a powdery wax to form "liquid marbles" that the soldiers roll out of the gall through small orifices. Aphids that form closed galls use the plant's vascular system for their plumbing: the inner surfaces of the galls are highly absorbent and wastes are absorbed and carried away by the plant.
Interactions with humans
Pest status
About 5000 species of aphid have been described and of these, some 450 species have colonized food and fiber crops. As direct feeders on plant sap, they damage crops and reduce yields, but they have a greater impact by being vectors of plant viruses. The transmission of these viruses depends on the movements of aphids between different parts of a plant, between nearby plants, and further afield. In this respect, the probing behavior of an aphid tasting a host is more damaging than lengthy aphid feeding and reproduction by stay-put individuals. The movement of aphids influences the timing of virus epidemics. They are major pests of greenhouse crops; species often encountered in greenhouses include the green peach aphid (Myzus persicae), cotton or melon aphid (Aphis gossypii), potato aphid (Macrosiphum euphorbiae), foxglove aphid (Aulacorthum solani), and chrysanthemum aphid (Macrosiphoniella sanborni), which cause leaf yellowing, distorted leaves, and plant stunting. The excreted honeydew is a growing medium for a number of fungal pathogens, including black sooty molds from the genera Capnodium, Fumago, and Scorias, which then infect leaves and inhibit growth by reducing photosynthesis.
Aphids, especially during large outbreaks, have been known to trigger allergic inhalant reactions in sensitive humans.
Dispersal can be by walking or flight, appetitive dispersal, or by migration. Winged aphids are weak fliers, lose their wings after a few days and only fly by day. Dispersal by flight is affected by the impact of air currents, gravity, precipitation, and other factors, or dispersal may be accidental, caused by the movement of plant materials, animals, farm machinery, vehicles, or aircraft.
Control
Insecticide control of aphids is difficult, as they breed rapidly, so even small areas missed may enable the population to recover promptly. Aphids may occupy the undersides of leaves where spray misses them, while systemic insecticides do not move satisfactorily into flower petals. Finally, some aphid species are resistant to common insecticide classes including carbamates, organophosphates, and pyrethroids.
For small backyard infestations, spraying plants thoroughly with a strong water jet every few days may be sufficient protection. An insecticidal soap solution can be an effective household remedy to control aphids, but it only kills aphids on contact and has no residual effect. Soap spray may damage plants, especially at higher concentrations or in hot weather; some plant species are sensitive to soap sprays.
Aphid populations can be sampled using yellow-pan or Moericke traps. These are yellow containers with water that attract aphids. Aphids respond positively to green and their attraction to yellow may not be a true colour preference but related to brightness. Their visual receptors peak in sensitivity from 440 to 480 nm and are insensitive in the red region. Moericke found that aphids avoided landing on white coverings placed on soil and were repelled even more by shiny aluminium surfaces. Integrated pest management of various species of aphids can be achieved using biological insecticides based on fungi such as Lecanicillium lecanii, Beauveria bassiana or Isaria fumosorosea. Fungi are the main pathogens of aphids; Entomophthorales can quickly cut aphid numbers in nature.
Aphids may also be controlled by the release of natural enemies, in particular lady beetles and parasitoid wasps. However, since adult lady beetles tend to fly away within 48 hours after release, without laying eggs, repeated applications of large numbers of lady beetles are needed to be effective. For example, one large, heavily infested rose bush may take two applications of 1500 beetles each.
The ability to produce allomones such as farnesene to repel and disperse aphids and to attract their predators has been experimentally transferred to transgenic Arabidopsis thaliana plants using an Eβf synthase gene, in the hope that the approach could protect transgenic crops. Eβ farnesene has, however, been found to be ineffective in crop situations, although more stable synthetic forms help improve the effectiveness of control using fungal spores and insecticides, through increased uptake caused by the movements of aphids.
In human culture
Aphids are familiar to farmers and gardeners, mainly as pests. Peter Marren and Richard Mabey record that Gilbert White described an invading "army" of black aphids that arrived in his village of Selborne, Hampshire, England, in August 1774 in "great clouds", covering every plant, while in the unusually hot summer of 1783, White found that honeydew was so abundant as to "deface and destroy the beauties of my garden", though he thought the aphids were consuming rather than producing it.
Infestation of the Chinese sumac (Rhus chinensis) by Chinese sumac aphids (Schlechtendalia chinensis) can create "Chinese galls" which are valued as a commercial product. As "Galla Chinensis", they are used in traditional Chinese medicine to treat coughs, diarrhea, night sweats, dysentery and to stop intestinal and uterine bleeding. Chinese galls are also an important source of tannins.
| Biology and health sciences | Hemiptera (true bugs) | null |
162312 | https://en.wikipedia.org/wiki/Mechanical%20wave | Mechanical wave | In physics, a mechanical wave is a wave that is an oscillation of matter, and therefore transfers energy through a material medium.
(Vacuum is, from the classical perspective, a non-material medium through which electromagnetic waves propagate.)
While waves can move over long distances, the movement of the medium of transmission—the material—is limited. Therefore, the oscillating material does not move far from its initial equilibrium position. Mechanical waves can be produced only in media which possess elasticity and inertia. There are three types of mechanical waves: transverse waves, longitudinal waves, and surface waves. Some of the most common examples of mechanical waves are water waves, sound waves, and seismic waves.
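How elasticity and inertia together set the wave speed can be illustrated with two standard textbook formulas: v = sqrt(T/μ) for a transverse wave on a stretched string (tension T, linear mass density μ) and v = sqrt(E/ρ) for a longitudinal wave in a thin solid rod (Young's modulus E, density ρ). The short Python sketch below uses illustrative values that are assumptions for the example, not figures from this article.

```python
from math import sqrt

def string_wave_speed(tension_n: float, linear_density_kg_per_m: float) -> float:
    """Transverse wave speed on a stretched string: v = sqrt(T / mu)."""
    return sqrt(tension_n / linear_density_kg_per_m)

def rod_wave_speed(youngs_modulus_pa: float, density_kg_per_m3: float) -> float:
    """Longitudinal wave speed in a thin solid rod: v = sqrt(E / rho)."""
    return sqrt(youngs_modulus_pa / density_kg_per_m3)

# Illustrative values (assumptions, not from the article): a string under
# 100 N of tension with 5 g/m of mass, and a steel rod (E ~ 200 GPa,
# density ~ 7850 kg/m^3).
print(f"string:    {string_wave_speed(100.0, 0.005):.0f} m/s")  # ~141 m/s
print(f"steel rod: {rod_wave_speed(200e9, 7850):.0f} m/s")      # ~5050 m/s
```

The same pattern, a restoring (elastic) property divided by an inertial property under a square root, recurs for sound in fluids and for seismic waves, which is why stiffer and lighter media generally carry mechanical waves faster.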
Like all waves, mechanical waves transport energy. This energy propagates in the same direction as the wave. A wave requires an initial energy input; once this initial energy is added, the wave travels through the medium until all its energy is transferred. In contrast, electromagnetic waves require no medium, but can still travel through one.
Transverse wave
A transverse wave is the form of a wave in which particles of medium vibrate about their mean position perpendicular to the direction of the motion of the wave.
To see an example, move one end of a Slinky (whose other end is fixed) from side to side, rather than to and fro along its length. Light also has properties of a transverse wave, although it is an electromagnetic wave.
Longitudinal wave
Longitudinal waves cause the medium to vibrate parallel to the direction of the wave. They consist of alternating compressions and rarefactions: in a rarefaction the particles of the medium are farthest apart, while in a compression they are closest together. A longitudinal wave travels faster in a medium whose particles are more closely packed, since each compression is passed on more readily; sound, for example, travels faster in solids and liquids than in gases. Sound is a longitudinal wave.
Surface waves
This type of wave travels along the surface or interface between two media. An example of a surface wave would be waves in a pool, or in an ocean, lake, or any other type of water body. There are two types of surface waves, namely Rayleigh waves and Love waves.
Rayleigh waves, also known as ground roll, are waves that travel as ripples with motion similar to those of waves on the surface of water. Such waves are much slower than body waves, at roughly 90% of the velocity of S-waves for a typical homogeneous elastic medium. Rayleigh waves have energy losses only in two dimensions and are hence more destructive in earthquakes than conventional bulk waves, such as P-waves and S-waves, which lose energy in all three directions.
A Love wave is a surface wave with horizontal motion that is shear, or transverse, to the direction of propagation. Love waves usually travel slightly faster than Rayleigh waves, at about 90% of the body wave velocity, and have the largest amplitude.
Examples
Seismic waves
Sound waves
Wind waves on seas and lakes
Vibration
| Physical sciences | Waves | Physics |
162393 | https://en.wikipedia.org/wiki/Rin%20Tin%20Tin | Rin Tin Tin | Rin Tin Tin or Rin-Tin-Tin (September 10, 1918 – August 10, 1932) was a male German Shepherd born in Flirey, France, who became an international star in motion pictures. He was rescued from a World War I battlefield by an American soldier, Lee Duncan, who nicknamed him "Rinty". Duncan trained Rin Tin Tin and obtained silent film work for the dog. Rin Tin Tin was an immediate box-office success and went on to appear in 27 Hollywood films, gaining worldwide fame. Along with the earlier canine film star Strongheart, Rin Tin Tin was responsible for greatly increasing the popularity of German Shepherd dogs as family pets. The immense profitability of his films contributed to the success of Warner Bros. studios and helped advance the career of Darryl F. Zanuck from screenwriter to producer and studio executive.
After the dog's only appearance in color (the 1929 musical revue The Show of Shows, in which he barks an introduction to a musical pageant), Warner Bros. dispensed with the services of both Rin Tin Tin and Lee Duncan. The studio was intent on promoting its "all-talking" stars, and silent-film personality Rin Tin Tin obviously couldn't speak. Undaunted, Duncan sought further film work and signed with independent producer Nat Levine, who starred Rin Tin Tin in serials and feature films.
After Rin Tin Tin died in 1932, the name was given to several related German Shepherd dogs featured in fictional stories on film, radio, and television. Rin Tin Tin Jr. appeared in some serialized films, but was not as talented as his father. Rin Tin Tin III, said to be Rin Tin Tin's grandson, but probably only distantly related, helped promote the military use of dogs during World War II. Rin Tin Tin III also appeared in a film with child actor Robert Blake in 1947.
Duncan groomed Rin Tin Tin IV for the 1950s television series The Adventures of Rin Tin Tin, produced by Bert Leonard. However, the dog performed poorly in a screen test and was replaced in the TV show by trainer Frank Barnes's dogs, primarily one named Flame Jr., called JR, with the public led to believe otherwise. Instead of shooting episodes, Rin Tin Tin IV stayed at home in Riverside, California. The TV show Rin Tin Tin was nominated for a PATSY Award in both 1958 and 1959 but did not win.
After Duncan died in 1960, the screen property of Rin Tin Tin passed to his business partner Bert Leonard, who worked on further adaptations such as the 1988–1993 Canadian-made TV show Katts and Dog, which was called Rin Tin Tin: K-9 Cop in the US and Rintintin Junior in France. Following Leonard's death in 2006, his lawyer James Tierney made the 2007 children's film Finding Rin Tin Tin, an American–Bulgarian production based on Duncan's discovery of the dog in France. Meanwhile, a Rin Tin Tin memorabilia collection was being amassed by Texas resident Jannettia Propps Brodsgaard, who had purchased several direct descendant dogs from Duncan beginning with Rinty Tin Tin Brodsgaard in 1957. Brodsgaard bred the dogs to keep the bloodline. Brodsgaard's granddaughter, Daphne Hereford, continued to build on the tradition and bloodline of Rin Tin Tin from 1988 to 2011; she was the first to trademark the name Rin Tin Tin, in 1993, and she bought the domain names rintintin.com and rintintin.net to establish a website. Hereford opened a short-lived Rin Tin Tin museum in Latexo, Texas and passed the tradition to her daughter, Dorothy Yanchak, in 2011. The dog Rin Tin Tin XII, owned by Yanchak, takes part in public events to represent the Rin Tin Tin legacy.
Origins
Following advances made by American forces during the Battle of Saint-Mihiel, Corporal Lee Duncan, an armourer of the U.S. Army Air Service, was sent forward on September 15, 1918, to the small French village of Flirey to see if it would make a suitable flying field for his unit, the 135th Aero Squadron. The area had been subjected to aerial bombing and artillery fire, and Duncan found a severely damaged kennel which had once supplied the Imperial German Army with German Shepherd dogs. The only dogs left alive in the kennel were a starving mother with a litter of five nursing puppies, their eyes still shut because they were less than a week old. Duncan rescued the dogs and brought them back to his unit.
When the puppies were weaned, he gave the mother to an officer and three of the litter to other soldiers, but he kept one puppy of each sex. He felt that these two dogs were symbols of his good luck. He dubbed them Rin Tin Tin and Nanette after a pair of good luck charms called Rintintin and Nénette that French children often gave to the American soldiers (the soldiers were usually told that Rintintin and Nénette were lucky lovers who had survived a bombing attack, but the original dolls had been designed by Francisque Poulbot before the war in late 1913 to look like Paris street urchins. Contrary to linguistic clues and popular usage, Poulbot said that Rintintin was the girl doll.). Duncan sensed that Nanette was the more intelligent of the two puppies.
In July 1919, Duncan sneaked the dogs aboard a ship taking him back to the US at the end of the war. When he got to Long Island, New York, for re-entry processing, he put his dogs in the care of a Hempstead breeder named Mrs. Leo Wanner, who trained police dogs. Nanette was diagnosed with pneumonia; as a replacement, the breeder gave Duncan another female German Shepherd puppy. Duncan travelled to California by rail with his dogs. While Duncan was travelling by train, Nanette died in Hempstead. As a memorial, Duncan named his new puppy Nanette II, but he called her Nanette. Duncan, Rin Tin Tin, and Nanette II settled at his home in Los Angeles. Rin Tin Tin was a dark sable color and had very dark eyes. Nanette II was much lighter in color.
An athletic silent film actor named Eugene Pallette was one of Duncan's friends. The two men enjoyed the outdoors; they took the dogs to the Sierras, where Pallette liked to hunt, while Duncan taught Rin Tin Tin various tricks. Duncan thought that his dog might win a few awards at dog shows and thus be a valuable source of puppies bred with Nanette for sale. In 1922, Duncan was a founding member of the Shepherd Dog Club of California, based in Los Angeles. At the club's first show, Rin Tin Tin showed his agility but also demonstrated an aggressive temper, growling, barking, and snapping. It was a very poor performance, but the worst moment came afterward when Duncan was walking home. A heavy bundle of newspapers was thrown from a delivery truck and landed on the dog, breaking his left front leg. Duncan had the injured limb set in plaster and he nursed the dog back to health for nine months.
Ten months after the fracture, the leg was healed and Rin Tin Tin was entered in a show for German Shepherd dogs in Los Angeles. Rin Tin Tin had learned to leap great heights. At the dog show while making a winning leap, he was filmed by Duncan's acquaintance Charley Jones, who had just developed a slow-motion camera. Seeing his dog being filmed, Duncan became convinced Rin Tin Tin could become the next Strongheart, a successful German Shepherd film dog that lived in his own full-sized stucco bungalow with its own street address in the Hollywood Hills, separate from the mansion of his owners, who lived a street away next to Roy Rogers. Duncan later wrote, "I was so excited over the film idea that I found myself thinking of it night and day."
Career
Duncan walked his dog up and down Poverty Row, talking to anyone in a position to put Rin Tin Tin in film, however modest the role. The dog's first break came when he was asked to replace a camera-shy wolf in The Man from Hell's River (1922) featuring Wallace Beery. The wolf was not performing properly for the director, but under the guidance of Duncan's voice commands, Rin Tin Tin was very easy to work with. When the film was completed, the dog was billed as "Rin Tan". Rin Tin Tin would be cast as a wolf or wolf-hybrid many times in his career because it was much more convenient for filmmakers to work with a trained dog. In another 1922 film titled My Dad, Rin Tin Tin picked up a small part as a household dog. The credits read: "Rin Tin Tin – Played by himself".
Rin Tin Tin's first starring role was in Where the North Begins (1923), in which he played alongside silent screen actress Claire Adams. This film was a huge success and has often been credited with saving Warner Bros. from bankruptcy. It was followed by 24 more screen appearances by Rin Tin Tin. Each of these films was very popular, making such a profit for Warner Bros. that Rin Tin Tin was called "the mortgage lifter" by studio insiders. A young screenwriter named Darryl F. Zanuck was involved in creating stories for Rin Tin Tin; the success of the films raised him to the position of film producer. In New York City, Mayor Jimmy Walker gave Rin Tin Tin a key to the city.
Rin Tin Tin was much sought after and was signed for endorsement deals. Dog food makers Ken-L Ration, Ken-L-Biskit, and Pup-E-Crumbles all featured him in their advertisements. Warner Bros. fielded fan letters by the thousands, sending back a glossy portrait signed with a paw print and a message written by Duncan: "Most faithfully, Rin Tin Tin." In the 1920s, Rin Tin Tin's success for Warner Bros. inspired several imitations from other studios looking to cash in on his popularity, notably RKO's Ace the Wonder Dog, also a German Shepherd. Around the world, Rin Tin Tin was extremely popular because as a dog he was equally well understood by all viewers. At the time, silent films were easily adapted for various countries by simply changing the language of the intertitles. Rin Tin Tin's films were widely distributed. Film historian Jan-Christopher Horak wrote that by 1927, Rin Tin Tin was the most popular actor with the very sophisticated film audience in Berlin. "He is a human dog," one fan wrote, "human in the real big sense of the word."
A Hollywood legend holds it that at the first Academy Awards in 1929, Rin Tin Tin was voted Best Actor, but that the Academy of Motion Picture Arts and Sciences, wishing to appear more serious and thus determined to have a human actor win the award, removed Rin Tin Tin as a choice and re-ran the vote, leading to German actor Emil Jannings winning the award. Author Susan Orlean stated this story as fact in her 2011 book Rin Tin Tin: The Life and the Legend. However, former Academy head Bruce Davis has described the story as an urban legend, attributing its origins to a joke ballot circulated the previous year by Zanuck, who wanted to mock the concept of the Academy Awards. Davis consulted the original 1928 ballots, which are kept in storage at the Academy's Margaret Herrick Library, and confirmed that no one voted for Rin Tin Tin (although Jack L. Warner did, as a joke, include the dog on his nomination ballot); as well, since the ballots that year were signed rather than secret, Davis ascertained that Zanuck did not even vote for that year's awards.
Although primarily a star of silent films, Rin Tin Tin did appear in four sound features, including the 12-part Mascot Studios chapter-play The Lightning Warrior (1931), co-starring with Frankie Darro. In these films, vocal commands would have been picked up by the microphones, so Duncan likely guided Rin Tin Tin by hand signals. Rin Tin Tin and the rest of the crew filmed much of the outdoor action footage for The Lightning Warrior on the Iverson Movie Ranch in Chatsworth, Los Angeles, known for its huge sandstone boulders and widely recognized as the most heavily filmed outdoor shooting location in the history of the movies.
Rin Tin Tin and Nanette II produced at least 48 puppies; Duncan kept two of them, selling the rest or giving them as gifts. Greta Garbo, W.K. Kellogg, and Jean Harlow each owned one of Rin Tin Tin's descendants.
Death and accolades
On August 10, 1932, Rin Tin Tin died at Duncan's home on Club View Drive in Los Angeles. Duncan wrote about the death in his unpublished memoir. He heard Rin Tin Tin bark in a peculiar fashion so he went to see what was wrong. He found the dog lying on the ground, moments away from death. Newspapers across the nation carried obituaries. Magazine articles were written about his life, and a special Movietone News feature was shown to movie audiences. In the press, aspects of the death were fabricated in various ways, such as Rin Tin Tin dying on the set of the film Pride of the Legion (where Rin Tin Tin Jr. was working), dying at night, or dying at home on the front lawn in the arms of actress Jean Harlow, who lived on the same street. In a private ceremony, Duncan buried Rin Tin Tin in a bronze casket in his own backyard with a plain wooden cross to mark the location. Duncan was suffering the financial effects of the Great Depression and could not afford a finer burial, nor even his own expensive house. He sold his house and quietly arranged to have the dog's body returned to his country of birth for reburial in the Cimetière des Chiens et Autres Animaux Domestiques, the pet cemetery in the Parisian suburb of Asnières-sur-Seine.
In the United States, his death set off a national response. Regular programming was interrupted by a news bulletin. An hour-long program about Rin Tin Tin played the next day. In a ceremony on February 8, 1960, Rin Tin Tin was honored with a star on the Hollywood Walk of Fame at 1627 Vine Street.
Filmography 1922–1931
Successors
Rin Tin Tin Jr.
Rin Tin Tin Jr. was sired by Rin Tin Tin, and his mother was Champion Asta of Linwood, also owned by Lee Duncan. Junior appeared in several films in the 1930s. He starred with Rex the Wild Horse in the Mascot Pictures serials The Law of the Wild (1934) and The Adventures of Rex and Rinty (1935). He voiced the part of Rinty in the radio shows produced during that era as well.
Rin Tin Tin Jr. died in December 1941 of pneumonia.
Filmography 1932–1939
Rin Tin Tin III
Rin Tin Tin III starred alongside a young Robert Blake in 1947's The Return of Rin Tin Tin but is primarily credited with assisting Duncan in the training of more than 5,000 dogs for the World War II war effort at Camp Hahn, California.
Filmography 1947
Radio
Between 1930 and 1955, Rin Tin Tin was cast in three different radio series, beginning April 5, 1930, with The Wonder Dog, in which the original Rin Tin Tin performed some of the sound effects until his death in 1932. (Most of the dog noises were performed live on radio by a young Bob Barker.) This 15-minute program was broadcast Saturdays on the Blue Network at 8:15 pm until March 1931, when it moved to Thursdays. Storylines were often highly unlikely, with Rin Tin Tin saving a group of space-exploring scientists from giant Martians in one episode.
In September 1930, the title changed from The Wonder Dog to Rin Tin Tin. Don Ameche and Junior McLain starred in the series, which ended June 8, 1933. With Ken-L Ration as a sponsor, the series continued on CBS from October 5, 1933, until May 20, 1934, airing Sundays at 7:45 pm.
The final radio series was broadcast on Mutual from January 2, 1955, to December 25, 1955, a 30-minute program heard Sunday evenings. Sponsored by Shredded Wheat and Milk-Bone for The National Biscuit Company, the series featured Rin Tin Tin's adventures with the 101st Cavalry in the same manner as the concurrent TV show, The Adventures of Rin Tin Tin. The radio show also starred Lee Aaker (1943–2021) as Rusty, James Brown (1920–1992) as Lieutenant Ripley "Rip" Masters, and Joe Sawyer (1906–1982) as Sergeant Biff O'Hara.
Television
The Adventures of Rin Tin Tin, an ABC television series, ran from October 1954 to May 1959. Duncan's Rin Tin Tin IV was nominally the lead dog, but nearly all of the screen work was performed by a dog named Flame Jr., nicknamed J.R., owned by trainer Frank Barnes. Other dogs that sometimes played TV's Rin Tin Tin included Barnes's dog Blaze and Duncan's dog Hey You from the Rin Tin Tin bloodline. Hey You had suffered an eye injury during his youth; he was used as a stunt dog and for fight scenes. TV's Rin Tin Tin was far lighter in color than the original sable-colored dog of silent film.
Legacy
Lee Duncan died on September 20, 1960, without ever having trademarked the name "Rin Tin Tin". The tradition continued in Texas with Jannettia Brodsgaard Propps, who had purchased several direct descendant dogs from Duncan. Her granddaughter, Daphne Hereford, continued the lineage and the legacy of Rin Tin Tin following her grandmother's death on December 17, 1988. Hereford passed the tradition to her daughter, Dorothy Yanchak, in July 2011. The current Rin Tin Tin is twelfth in line from the original silent film star and makes personal appearances across the country to promote responsible pet ownership. Rin Tin Tin was the recipient of the 2011 American Humane Association Legacy award, accepted by a twelfth-generation Rin Tin Tin legacy dog in October 2011 at the first annual Hero Dog Awards in Beverly Hills. Mickey Rooney narrated a memorial tribute film about Rin Tin Tin. The next year, Rin Tin Tin was honored by the Academy of Arts and Sciences in a special program, Hollywood Dogs: From Rin Tin Tin to Uggie, on June 6, 2012, at the Samuel Goldwyn Theatre. The career of contemporary film dog Uggie (2002–2015) was compared to Rin Tin Tin's silent-era career.
Cultural references
In 1976, a film loosely based on Rin Tin Tin's debut was produced: Won Ton Ton, the Dog Who Saved Hollywood. Producer David V. Picker offered a fee to Herbert B. Leonard, but Leonard objected to the premise of a film ridiculing the famous dog. Leonard sued the filmmakers for infringement on the Rin Tin Tin legacy and lost.
Originally co-produced by Leonard, the 1988–93 Canadian TV series Katts and Dog featured the adventures of a police officer and his canine partner. The series was titled Rin Tin Tin: K9 Cop for its American showings; in France it was presented as Rintintin Junior. Leonard was funded by the Christian Broadcasting Network, whose founder, televangelist Pat Robertson, had been enthusiastic for the idea. Leonard was criticized by his fellow producers for staying with his new wife in Los Angeles rather than helping with the show on location in Canada. Partway through the first season, Robertson said that some of his viewers were deeply concerned that the plot involved a widowed mother who was living unmarried in the same house with the brother of her late husband. Robertson recommended the mother character be killed off to stop the complaints, but Leonard protested such a change. After Leonard quit the show, the problematic character was killed off. Though separated from the show, Leonard continued to receive a fee for the screen rights to Rin Tin Tin.
In 2007, a children's film was produced—Finding Rin Tin Tin—based on the story of Lee Duncan finding Rin Tin Tin on a battlefield in France and making a star of him in Hollywood. The film was the subject of a lawsuit brought in October 2008 by Daphne Hereford, who asked a federal court in Houston, Texas, to protect her rights to the Rin Tin Tin trademark. The judge ruled in favor of the filmmakers, declaring the use of the name in the film to be fair use.
A fictionalized account of Lee Duncan finding and raising Rin Tin Tin is a major part of the novel Sunnyside by Glen David Gold.
Rin Tin Tin has been featured as a character in many works of fiction, including a children's book in which Rin Tin Tin and the other animal characters are able to talk to one another but are unable to talk to humans.
Rin Tin Tin finds mention in Anne Frank's diary in her second entry on June 14, 1942. Frank wishes she had a dog like Rin Tin Tin. She also wrote about the 1924 Rin Tin Tin silent film The Lighthouse by the Sea, which she and her school friends watched together in her house for her birthday party. According to her, the movie was a big hit with her friends.
The Hank Williams Jr song "Attitude Adjustment" includes the line, "Now some sticks to the head, and some kicks to the shin, and several bites by Rin Tin Tin, and I couldn't wait to get into that jail."
The Clash's 1981 song "The Magnificent Seven" referenced the dog - "Plato the Greek or Rin Tin Tin/ who's more famous to the Billion Millions?".
In The Simpsons season 14 episode "Old Yeller-Belly", Rin Tin Tin is referred to as the "first openly gay dog in Hollywood."
The Finnish pop rock band Leevi and the Leavings has a song called "Rin Tin Tin" on their album Häntä koipien välissä (1988).
| Biology and health sciences | Individual animals | Animals |
162404 | https://en.wikipedia.org/wiki/Stratovolcano | Stratovolcano | A stratovolcano, also known as a composite volcano, is a typically conical volcano built up by many alternating layers (strata) of hardened lava and tephra. Unlike shield volcanoes, stratovolcanoes are characterized by a steep profile with a summit crater and explosive eruptions. Some have collapsed summit craters called calderas. The lava flowing from stratovolcanoes typically cools and solidifies before spreading far, due to high viscosity. The magma forming this lava is often felsic, having high to intermediate levels of silica (as in rhyolite, dacite, or andesite), with lesser amounts of less viscous mafic magma. Extensive felsic lava flows are uncommon, but can travel as far as 8 km (5 mi).
The term composite volcano is used because strata are usually mixed and uneven instead of neat layers. They are among the most common types of volcanoes; more than 700 stratovolcanoes have erupted lava during the Holocene Epoch (the last 11,700 years), and many older, now extinct, stratovolcanoes erupted lava as far back as Archean times. Stratovolcanoes are typically found in subduction zones, but they also occur in other geological settings. Two examples of stratovolcanoes famous for catastrophic eruptions are Krakatoa in Indonesia (which erupted in 1883, claiming 36,000 lives) and Mount Vesuvius in Italy (which erupted in 79 AD, killing an estimated 2,000 people). In modern times, Mount St. Helens (1980) in Washington State, US, and Mount Pinatubo (1991) in the Philippines have erupted catastrophically, but with fewer deaths.
Distribution
Stratovolcanoes are common at subduction zones, forming chains and clusters along plate tectonic boundaries where an oceanic crust plate is drawn under a continental crust plate (continental arc volcanism, e.g. Cascade Range, Andes, Campania) or another oceanic crust plate (island arc volcanism, e.g. Japan, Philippines, Aleutian Islands).
Stratovolcanoes also occur in some other geological settings, for example as a result of intraplate volcanism on oceanic islands far from plate boundaries. Examples are Teide in the Canary Islands, and Pico do Fogo in Cape Verde.
Stratovolcanoes have formed in continental rifts. Examples in the East African Rift are Ol Doinyo Lengai in Tanzania, and Longonot in Kenya.
Formation
Subduction zone volcanoes form when hydrous minerals are pulled down into the mantle on the slab. These hydrous minerals, such as chlorite and serpentine, release their water into the mantle which decreases its melting point by 60 to 100 °C. The release of water from hydrated minerals is termed "dewatering", and occurs at specific pressures and temperatures for each mineral, as the plate descends to greater depths. This allows the mantle to partially melt and generate magma. This is called flux melting. The magma then rises through the crust, incorporating silica-rich crustal rock, leading to a final intermediate composition. When the magma nears the top surface, it pools in a magma chamber within the crust below the stratovolcano.
The processes that trigger the final eruption remain a question for further research. Possible mechanisms include:
Magma differentiation, in which the lightest, most silica-rich magma and volatiles such as water, halogens, and sulfur dioxide accumulate in the uppermost part of the magma chamber. This can dramatically increase pressures.
Fractional crystallization of the magma. When anhydrous minerals such as feldspar crystallize out of the magma, this concentrates volatiles in the remaining liquid, which can lead to a second boiling that causes a gas phase (carbon dioxide or water) to separate from the liquid magma and raise magma chamber pressures.
Injection of fresh magma into the magma chamber, which mixes and heats the cooler magma already present. This could force volatiles out of solution and lower the density of the cooler magma, both of which increase pressure. There is considerable evidence for magma mixing just before many eruptions, including magnesium-rich olivine crystals in freshly erupted silicic lava that show no reaction rim. This is possible only if the lava erupted immediately after mixing since olivine rapidly reacts with silicic magma to form a rim of pyroxene.
Progressive melting of the surrounding country rock.
These internal triggers may be modified by external triggers such as sector collapse, earthquakes, or interactions with groundwater. Some of these triggers operate only under limited conditions. For example, sector collapse (where part of the flank of a volcano collapses in a massive landslide) can only trigger the eruption of a very shallow magma chamber. Magma differentiation and thermal expansion also are ineffective as triggers for eruptions from deep magma chambers.
Hazards
In recorded history, explosive eruptions at subduction zone (convergent-boundary) volcanoes have posed the greatest hazard to civilizations. Subduction-zone stratovolcanoes, such as Mount St. Helens, Mount Etna and Mount Pinatubo, typically erupt with explosive force because the magma is too viscous to allow easy escape of volcanic gases. As a consequence, the tremendous internal pressures of the trapped volcanic gases remain and intermingle in the pasty magma. Following the breaching of the vent and the opening of the crater, the magma degasses explosively. The magma and gases blast out with high speed and full force.
Since 1600 CE, nearly 300,000 people have been killed by volcanic eruptions. Most deaths were caused by pyroclastic flows and lahars, deadly hazards that often accompany explosive eruptions of subduction-zone stratovolcanoes. Pyroclastic flows are swift, avalanche-like, ground-sweeping, incandescent mixtures of hot volcanic debris, fine ash, fragmented lava, and superheated gases that can travel at speeds of over 100 km/h (60 mph). Around 30,000 people were killed by pyroclastic flows during the 1902 eruption of Mount Pelée on the island of Martinique in the Caribbean. During March and April 1982, El Chichón in the State of Chiapas in southeastern Mexico erupted three times, causing the worst volcanic disaster in that country's history and killing more than 2,000 people in pyroclastic flows.
Two Decade Volcanoes that erupted in 1991 provide examples of stratovolcano hazards. On 15 June, Mount Pinatubo erupted, sending an ash cloud 40 km (25 mi) into the air and producing large pyroclastic surges and lahar floods that caused extensive damage to the surrounding area. Mount Pinatubo, located in Central Luzon just 90 km (56 mi) west-northwest of Manila, had been dormant for six centuries before the 1991 eruption, which ranks as the second-largest volcanic eruption of the 20th century. It produced a large volcanic ash cloud that affected global temperatures, lowering them in some areas by as much as 0.5 °C. The cloud contained 22 million tons of SO2, which combined with water droplets to create sulfuric acid. In 1991 Japan's Mount Unzen, located on the island of Kyushu about 40 km (25 mi) east of Nagasaki, also erupted after 200 years of inactivity. Beginning in June, a newly formed lava dome repeatedly collapsed, generating pyroclastic flows that swept down the mountain's slopes at speeds as high as 200 km/h (120 mph). Mount Unzen had earlier been the site of one of the worst volcanic disasters in Japan's history, when an eruption in 1792 killed more than 15,000 people.
The eruption of Mount Vesuvius in 79 AD is the most famous example of a hazardous stratovolcano eruption. It completely smothered the nearby ancient cities of Pompeii and Herculaneum with thick deposits of pyroclastic surge material and pumice, 6–7 meters deep. Pompeii had 10,000–20,000 inhabitants at the time of the eruption. Mount Vesuvius is recognized as one of the most dangerous of the world's volcanoes, due to its capacity for powerful explosive eruptions coupled with the high population density of the surrounding Metropolitan Naples area (totaling about 3.6 million inhabitants).
Ash
In addition to potentially affecting the climate, volcanic ash clouds from explosive eruptions pose a serious hazard to aviation. Volcanic ash clouds consist of silt- to sand-sized pieces of rock, mineral, and volcanic glass. Volcanic ash grains are jagged, abrasive, and do not dissolve in water. For example, during the 1982 eruption of Galunggung in Java, British Airways Flight 9 flew into the ash cloud and sustained temporary engine failure and structural damage. Although no crashes have happened due to ash, more than 60 aircraft, mostly commercial airliners, have been damaged. Some of these incidents resulted in emergency landings. Ashfalls are a threat to health when inhaled and are also a threat to property. A square yard of a volcanic ash layer 4 inches thick can weigh 120–200 pounds, and can become twice as heavy when wet. Wet ash also poses a risk to electronics due to its conductive nature. Dense clouds of hot volcanic ash can be expelled by the collapse of an eruptive column, or laterally by the partial collapse of a volcanic edifice or lava dome during explosive eruptions. These clouds are known as pyroclastic surges and, in addition to volcanic ash, they contain hot lava, pumice, rock, and volcanic gas. Pyroclastic surges flow at speeds over 50 mph and at temperatures between 200 °C and 700 °C. These surges can cause major damage to property and people in their path.
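The ash-loading figures quoted above (120–200 pounds per square yard for a 4-inch layer, doubling when wet) follow from simple volume-times-density arithmetic. The Python sketch below is a rough illustration; the dry bulk densities of roughly 640–1,070 kg/m3 are assumptions chosen to reproduce the quoted range, since real ash densities vary widely.

```python
# A minimal sketch of the ash-load arithmetic: mass = thickness * area * density.
LB_PER_KG = 2.20462
M_PER_IN = 0.0254
M2_PER_SQ_YD = 0.9144 ** 2

def ash_load_lb(thickness_in: float, dry_density_kg_m3: float, wet: bool = False) -> float:
    """Weight of ash on one square yard, optionally doubled when waterlogged."""
    volume_m3 = thickness_in * M_PER_IN * M2_PER_SQ_YD
    mass_kg = volume_m3 * dry_density_kg_m3
    if wet:
        mass_kg *= 2  # the article notes wet ash can be twice as heavy
    return mass_kg * LB_PER_KG

# Assumed dry bulk densities bracketing the quoted 120-200 lb figure.
for rho in (640, 1070):
    print(f"4 in dry @ {rho} kg/m3: {ash_load_lb(4, rho):.0f} lb")
    print(f"4 in wet @ {rho} kg/m3: {ash_load_lb(4, rho, wet=True):.0f} lb")
```

Spread over a whole roof, loads of this order explain why even a few inches of wet ash can cause structural collapse.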
Lava
Lava flows from stratovolcanoes are generally not a significant threat to humans or animals because the highly viscous lava moves slowly enough for everyone to evacuate. Most deaths attributed to lava are due to related causes such as explosions and asphyxiation from toxic gas. Lava flows can bury homes and farms in thick volcanic rock, which greatly reduces property value. However, not all stratovolcanoes erupt viscous and sticky lava. Nyiragongo, near Lake Kivu in central Africa, is very dangerous because its magma has an unusually low silica content, making it much less viscous than that of other stratovolcanoes. Low-viscosity lava can generate massive lava fountains, while more viscous lava can solidify within the vent, creating a volcanic plug. Volcanic plugs can trap volcanic gas and allow pressure to build in the magma chamber, resulting in violent eruptions. Lava is typically between 700 and 1,200 °C (1,300–2,200 °F).
Volcanic bombs
Volcanic bombs are masses of unconsolidated rock and lava that are ejected during an eruption. Volcanic bombs are classified as fragments larger than 64 mm (2.5 inches); anything from 2 to 64 mm is classified as lapilli. When ejected, volcanic bombs are still molten and partially cool and solidify during their descent. They can form ribbon or oval shapes and can flatten on impact with the ground. Volcanic bombs are associated with Strombolian and Vulcanian eruptions and basaltic lava. Ejection velocities ranging from 200 to 400 m/s have been recorded, making volcanic bombs highly destructive.
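To put the recorded ejection velocities in perspective, a drag-free ballistic estimate gives an upper bound on how far a bomb could travel. Real volcanic bombs experience strong air resistance and land much closer to the vent, so the sketch below is only an illustrative bound under that stated assumption, not a model of actual bomb trajectories.

```python
from math import sin, radians

G = 9.81  # gravitational acceleration, m/s^2

def vacuum_range_m(speed_m_s: float, launch_angle_deg: float) -> float:
    """Ballistic range on flat ground, ignoring air resistance: R = v^2 sin(2*theta) / g."""
    return speed_m_s ** 2 * sin(radians(2 * launch_angle_deg)) / G

# Illustrative only: drag on a real volcanic bomb is large, so actual travel
# distances are far shorter than this drag-free upper bound.
for v in (200, 300, 400):
    print(f"{v} m/s at 45 deg: {vacuum_range_m(v, 45) / 1000:.1f} km (no-drag upper bound)")
```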
Lahar
Lahars (from a Javanese term for volcanic mudflows) are a mixture of volcanic debris and water. Lahars can result from heavy rainfall during or before an eruption, or from interaction with ice and snow, when meltwater mixes with volcanic debris to produce a fast-moving mudflow. Lahars are typically about 60% sediment and 40% water. Depending on the abundance of volcanic debris, a lahar can be fluid or thick like concrete. Lahars have the strength and speed to flatten structures and cause great bodily harm, reaching speeds of dozens of kilometers per hour. In the 1985 eruption of Nevado del Ruiz in Colombia, pyroclastic surges melted snow and ice atop the 5,321 m (17,457 ft) high Andean volcano. The ensuing lahar killed 25,000 people and flooded the city of Armero and nearby settlements.
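The 60% sediment and 40% water figure helps explain why lahars behave like wet concrete: a rough mixing calculation gives a bulk density around twice that of water. The sketch below treats the percentages as volume fractions and assumes a sediment grain density of about 2,650 kg/m3, both of which are assumptions for illustration rather than values from the text.

```python
# A rough sketch of lahar bulk density from a sediment/water volume mixture.
WATER_DENSITY = 1000.0     # kg/m^3
SEDIMENT_DENSITY = 2650.0  # kg/m^3, assumed grain density (quartz-like)

def lahar_bulk_density(sediment_volume_fraction: float) -> float:
    """Bulk density of a sediment-water mixture, in kg/m^3."""
    water_fraction = 1.0 - sediment_volume_fraction
    return sediment_volume_fraction * SEDIMENT_DENSITY + water_fraction * WATER_DENSITY

print(f"{lahar_bulk_density(0.6):.0f} kg/m^3")  # ~1990 kg/m^3, roughly twice water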
Volcanic gas
As a volcano forms, several different gases mix with the magma in the magma chamber. During an eruption these gases are released into the atmosphere, which can lead to toxic exposure in humans. The most abundant of these gases is H2O (water vapour), followed by CO2 (carbon dioxide), SO2 (sulfur dioxide), H2S (hydrogen sulfide), and HF (hydrogen fluoride). At concentrations of more than 3% in air, inhaled CO2 can cause dizziness and difficulty breathing; at more than 15%, it causes death. CO2 can settle into depressions in the land, creating deadly, odorless pockets of gas. SO2 is classified as a respiratory, skin, and eye irritant; it has a pungent smell, plays a role in ozone depletion, and can cause acid rain downwind of an eruption. H2S has an even stronger odor than SO2 and is even more toxic: exposure for less than an hour at concentrations of over 500 ppm causes death. HF and similar species can coat ash particles and, once deposited, can poison soil and water. Gases are also emitted during volcanic degassing, a passive release of gas during periods of dormancy.
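The exposure thresholds above mix percent-by-volume and parts-per-million units; the conversion is simply 1% = 10,000 ppm. The small sketch below restates the quoted CO2 and H2S thresholds in a common unit.

```python
# Simple unit bookkeeping for the exposure thresholds quoted above.
PPM_PER_PERCENT = 10_000  # 1% by volume = 10,000 ppm

def percent_to_ppm(percent: float) -> float:
    return percent * PPM_PER_PERCENT

def ppm_to_percent(ppm: float) -> float:
    return ppm / PPM_PER_PERCENT

print(percent_to_ppm(3))    # 30,000 ppm CO2: dizziness threshold in the text
print(percent_to_ppm(15))   # 150,000 ppm CO2: lethal threshold in the text
print(ppm_to_percent(500))  # 0.05 % H2S: lethal within an hour per the text
```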
Eruptions that affected global climate
As the above examples show, while eruptions like that of Mount Unzen have caused deaths and local damage, the impact of the June 1991 eruption of Mount Pinatubo was felt globally. The eruptive columns reached heights of 40 km and injected 17 megatons of SO2 into the lower stratosphere. The aerosols that formed from the sulfur dioxide (SO2), carbon dioxide (CO2), and other gases dispersed around the world. The SO2 in this cloud combined with water (both of volcanic and atmospheric origin) and formed sulfuric acid, blocking a portion of the sunlight from reaching the troposphere. This caused the global temperature to decrease by about 0.4 °C (0.72 °F) from 1992 to 1993. These aerosols also caused the ozone layer to reach the lowest concentrations recorded up to that time. An eruption the size of Mount Pinatubo's affects the weather for a few years, with warmer winters and cooler summers observed.
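The scale of the sulfate aerosol formed from the 17 megatons of SO2 can be estimated with basic stoichiometry: each mole of SO2 can yield at most one mole of H2SO4. The sketch below assumes complete conversion, so the result is an upper bound rather than a measured value.

```python
# Back-of-the-envelope sulfate mass from the 17 Mt of SO2 quoted above,
# assuming idealised complete conversion: SO2 + 1/2 O2 + H2O -> H2SO4.
M_SO2 = 64.07    # g/mol
M_H2SO4 = 98.08  # g/mol

def sulfuric_acid_mass_mt(so2_mass_mt: float) -> float:
    """Megatons of H2SO4 formed from a given mass of SO2, one-to-one molar conversion."""
    return so2_mass_mt * M_H2SO4 / M_SO2

print(f"{sulfuric_acid_mass_mt(17):.0f} Mt")  # ~26 Mt of sulfuric acid, as an upper bound
```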
A similar phenomenon occurred in April 1815 with the eruption of Mount Tambora on Sumbawa island in Indonesia. This eruption is recognized as the most powerful in recorded history. Its eruption cloud lowered global temperatures by as much as 0.4 to 0.7 °C. In the year following the eruption, most of the Northern Hemisphere experienced cooler temperatures during the summer, and 1816 became known as the "Year Without a Summer". The eruption caused crop failures, food shortages, and floods that killed over 100,000 people across Europe, Asia, and North America.
List
| Physical sciences | Volcanology | Earth science |
162498 | https://en.wikipedia.org/wiki/Nuclear%20meltdown | Nuclear meltdown | A nuclear meltdown (core meltdown, core melt accident, meltdown or partial core melt) is a severe nuclear reactor accident that results in core damage from overheating. The term nuclear meltdown is not officially defined by the International Atomic Energy Agency or by the United States Nuclear Regulatory Commission. It has, however, been defined to mean the accidental melting of the core of a nuclear reactor, and in common usage it refers to the core's complete or partial collapse.
A core meltdown accident occurs when the heat generated by a nuclear reactor exceeds the heat removed by the cooling systems to the point where at least one nuclear fuel element exceeds its melting point. This differs from a fuel element failure, which is not caused by high temperatures. A meltdown may be caused by a loss of coolant, loss of coolant pressure, or low coolant flow rate or be the result of a criticality excursion in which the reactor is operated at a power level that exceeds its design limits.
Once the fuel elements of a reactor begin to melt, the fuel cladding has been breached, and the nuclear fuel (such as uranium, plutonium, or thorium) and fission products (such as caesium-137, krypton-85, or iodine-131) within the fuel elements can leach out into the coolant. Subsequent failures can permit these radioisotopes to breach further layers of containment. Superheated steam and hot metal inside the core can lead to fuel–coolant interactions, hydrogen explosions, or steam hammer, any of which could destroy parts of the containment. A meltdown is considered very serious because of the potential for radioactive materials to breach all containment and escape (or be released) into the environment, resulting in radioactive contamination and fallout, and potentially leading to radiation poisoning of people and animals nearby.
Causes
Nuclear power plants generate electricity by heating fluid via a nuclear reaction to run a generator. If the heat from that reaction is not removed adequately, the fuel assemblies in a reactor core can melt. A core damage incident can occur even after a reactor is shut down because the fuel continues to produce decay heat.
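The magnitude of the decay heat that must be removed after shutdown can be estimated with the Way-Wigner approximation, a common textbook formula. In the sketch below, the coefficient and exponent are the usual empirical values, and the prior operating period of one year is an assumption for illustration; none of these figures come from this article.

```python
# A minimal sketch of why cooling matters after shutdown: the Way-Wigner
# approximation for decay heat as a fraction of full power. The formula is
# empirical and only roughly accurate; the operating time is an assumed value.
def decay_heat_fraction(t_after_shutdown_s: float, operating_time_s: float) -> float:
    """Decay power as a fraction of full power (Way-Wigner approximation)."""
    return 0.0622 * (t_after_shutdown_s ** -0.2
                     - (t_after_shutdown_s + operating_time_s) ** -0.2)

one_year = 365 * 24 * 3600.0  # assumed prior operating period, in seconds
for label, t in [("10 s", 10.0), ("1 hour", 3600.0), ("1 day", 86400.0)]:
    frac = decay_heat_fraction(t, one_year)
    print(f"{label:>7} after shutdown: {frac * 100:.2f}% of full power")
```

Even fractions of a percent of full power amount to megawatts in a large reactor, which is why cooling must continue long after the chain reaction has stopped.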
A core damage accident is caused by the loss of sufficient cooling for the nuclear fuel within the reactor core. The reason may be one of several factors, including a loss-of-pressure-control accident, a loss-of-coolant accident (LOCA), or an uncontrolled power excursion. Failures in control systems may cause a series of events resulting in loss of cooling. Contemporary safety principles of defense in depth ensure that multiple layers of safety systems are always present to make such accidents unlikely.
The containment building is the last of several safeguards that prevent the release of radioactivity to the environment. Many commercial reactors are contained within a thick pre-stressed, steel-reinforced, air-tight concrete structure that can withstand hurricane-force winds and severe earthquakes.
In a loss-of-coolant accident, either the physical loss of coolant (which is typically deionized water, an inert gas, NaK, or liquid sodium) or the loss of a method to ensure a sufficient flow rate of the coolant occurs. A loss-of-coolant accident and a loss-of-pressure-control accident are closely related in some reactors. In a pressurized water reactor, a LOCA can also cause a "steam bubble" to form in the core due to excessive heating of stalled coolant or by the subsequent loss-of-pressure-control accident caused by a rapid loss of coolant. In a loss-of-forced-circulation accident, a gas cooled reactor's circulators (generally motor or steam driven turbines) fail to circulate the gas coolant within the core, and heat transfer is impeded by this loss of forced circulation, though natural circulation through convection will keep the fuel cool as long as the reactor is not depressurized.
In a loss-of-pressure-control accident, the pressure of the confined coolant falls below specification without the means to restore it. In some cases, this may reduce the heat transfer efficiency (when using an inert gas as a coolant), and in others may form an insulating "bubble" of steam surrounding the fuel assemblies (for pressurized water reactors). In the latter case, due to localized heating of the "steam bubble" due to decay heat, the pressure required to collapse the "steam bubble" may exceed reactor design specifications until the reactor has had time to cool down. (This event is less likely to occur in boiling water reactors, where the core may be deliberately depressurized so that the emergency core cooling system may be turned on). In a depressurization fault, a gas-cooled reactor loses gas pressure within the core, reducing heat transfer efficiency and posing a challenge to the cooling of fuel; as long as at least one gas circulator is available, however, the fuel will be kept cool.
Light-water reactors (LWRs)
Before the core of a light-water nuclear reactor can be damaged, two precursor events must have already occurred:
A limiting fault (or a set of compounded emergency conditions) that leads to the failure of heat removal within the core (the loss of cooling). Low water level uncovers the core, allowing it to heat up.
Failure of the emergency core cooling system (ECCS). The ECCS is designed to rapidly cool the core and make it safe in the event of the maximum fault (the design basis accident) that nuclear regulators and plant engineers could imagine. There are at least two copies of the ECCS built for every reactor. Each division (copy) of the ECCS is capable, by itself, of responding to the design basis accident. The latest reactors have as many as four divisions of the ECCS. This is the principle of redundancy, or duplication. As long as at least one ECCS division functions, no core damage can occur. Each of the several divisions of the ECCS has several internal "trains" of components. Thus the ECCS divisions themselves have internal redundancy – and can withstand failures of components within them.
The Three Mile Island accident was a compounded group of emergencies that led to core damage. The cause was an erroneous decision by operators to shut down the ECCS during an emergency condition, based on gauge readings that were either incorrect or misinterpreted; this caused another emergency condition that, several hours later, led to core exposure and a core damage incident. If the ECCS had been allowed to function, it would have prevented both core exposure and core damage. During the Fukushima incident, the emergency cooling system was likewise shut down manually several minutes after it started.
If such a limiting fault were to occur together with a complete failure of all ECCS divisions, both Kuan et al. and Haskin et al. describe six stages between the start of the limiting fault (the loss of cooling) and the potential escape of molten corium into the containment (a so-called "full meltdown"):
Uncovering of the Core – In the event of a transient, upset, emergency, or limiting fault, LWRs are designed to automatically SCRAM (a SCRAM being the immediate and full insertion of all control rods) and spin up the ECCS. This greatly reduces reactor thermal power (but does not remove it completely); this delays core becoming uncovered, which is defined as the point when the fuel rods are no longer covered by coolant and can begin to heat up. As Kuan states: "In a small-break LOCA with no emergency core coolant injection, core uncovery [sic] generally begins approximately an hour after the initiation of the break. If the reactor coolant pumps are not running, the upper part of the core will be exposed to a steam environment and heatup of the core will begin. However, if the coolant pumps are running, the core will be cooled by a two-phase mixture of steam and water, and heatup of the fuel rods will be delayed until almost all of the water in the two-phase mixture is vaporized. The TMI-2 accident showed that operation of reactor coolant pumps may be sustained for up to approximately two hours to deliver a two phase mixture that can prevent core heatup."
Pre-damage heat up – "In the absence of a two-phase mixture going through the core or of water addition to the core to compensate water boiloff, the fuel rods in a steam environment will heat up at a rate between 0.3 °C/s (0.5 °F/s) and 1 °C/s (1.8 °F/s) (3)." (A rough timing estimate based on these rates is sketched after this list of stages.)
Fuel ballooning and bursting – "In less than half an hour, the peak core temperature would reach . At this temperature, the zircaloy cladding of the fuel rods may balloon and burst. This is the first stage of core damage. Cladding ballooning may block a substantial portion of the flow area of the core and restrict the flow of coolant. However, complete blockage of the core is unlikely because not all fuel rods balloon at the same axial location. In this case, sufficient water addition can cool the core and stop core damage progression."
Rapid oxidation – "The next stage of core damage, beginning at approximately , is the rapid oxidation of the Zircaloy by steam. In the oxidation process, hydrogen is produced and a large amount of heat is released. Above , the power from oxidation exceeds that from decay heat (4,5) unless the oxidation rate is limited by the supply of either zircaloy or steam."
Debris bed formation – "When the temperature in the core reaches about , molten control materials (1,6) will flow to and solidify in the space between the lower parts of the fuel rods where the temperature is comparatively low. Above , the core temperature may escalate in a few minutes to the melting point of zircaloy [] due to increased oxidation rate. When the oxidized cladding breaks, the molten zircaloy, along with dissolved UO2 (1,7) would flow downward and freeze in the cooler, lower region of the core. Together with solidified control materials from earlier down-flows, the relocated zircaloy and UO2 would form the lower crust of a developing cohesive debris bed."
(Corium) Relocation to the lower plenum – "In scenarios of small-break LOCAs, there is generally a pool of water in the lower plenum of the vessel at the time of core relocation. The release of molten core materials into the water always generates large amounts of steam. If the molten stream of core materials breaks up rapidly in water, there is also a possibility of a steam explosion. During relocation, any unoxidized zirconium in the molten material may also be oxidized by steam, and in the process hydrogen is produced. Recriticality also may be a concern if the control materials are left behind in the core and the relocated material breaks up in unborated water in the lower plenum."
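As a rough check on the timescales above, the sketch below divides an assumed cladding temperature rise by the heat-up rates quoted in the pre-damage stage; the target rise of a few hundred degrees is an assumption for illustration, not a design figure.

```python
# Back-of-envelope timing for uncovered fuel rods heating up in steam at
# the 0.3-1 degC/s rates quoted above. The assumed temperature rise needed
# before cladding damage is an illustrative figure, not a design value.

def heatup_time_s(delta_t_c: float, rate_c_per_s: float) -> float:
    """Seconds for the cladding to climb delta_t_c at a constant rate."""
    return delta_t_c / rate_c_per_s

if __name__ == "__main__":
    delta_t_c = 500.0   # assumed rise (degC) from operating temperature to damage range
    for rate in (0.3, 1.0):
        minutes = heatup_time_s(delta_t_c, rate) / 60.0
        print(f"at {rate:.1f} degC/s: about {minutes:.0f} minutes for a "
              f"{delta_t_c:.0f} degC rise")
```

With these assumptions the estimate spans roughly ten to thirty minutes, consistent with the "less than half an hour" quoted above for the onset of fuel ballooning.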
At the point at which the corium relocates to the lower plenum of the reactor pressure vessel ("RPV"), Haskin et al. relate that the possibility exists for an incident called a fuel–coolant interaction (FCI) to substantially stress or breach the primary pressure boundary.
This is because the lower plenum of the RPV may have a substantial quantity of water - the reactor coolant - in it, and, assuming the primary system has not been depressurized, the water will likely be in the liquid phase, and consequently dense, and at a vastly lower temperature than the corium. Since corium is a liquid metal-ceramic eutectic at temperatures of , its fall into liquid water at may cause an extremely rapid evolution of steam that could cause a sudden extreme overpressure and consequent gross structural failure of the primary system or RPV. Though most modern studies hold that it is physically infeasible, or at least extraordinarily unlikely, Haskin, et al state that there exists a remote possibility of an extremely violent FCI leading to something referred to as an alpha-mode failure, or the gross failure of the RPV itself, and subsequent ejection of the upper plenum of the RPV as a missile against the inside of the containment, which would likely lead to the failure of the containment and release of the fission products of the core to the outside environment without any substantial decay having taken place.
The American Nuclear Society has commented on the TMI-2 accident, that despite melting of about one-third of the fuel, the reactor vessel itself maintained its integrity and contained the damaged fuel.
Breach of the primary pressure boundary
There are several possibilities as to how the primary pressure boundary could be breached by corium.
Steam explosion
As previously described, an FCI could lead to an overpressure event that fails the RPV, and thus the primary pressure boundary. Haskin et al. report that in the event of a steam explosion, failure of the lower plenum is far more likely than ejection of the upper plenum in the alpha mode. In the event of lower plenum failure, debris at varied temperatures can be expected to be projected into the cavity below the core. The containment may be subject to overpressure, though this is not likely to fail the containment. The alpha-mode failure will lead to the consequences previously discussed.
Pressurized melt ejection (PME)
It is quite possible, especially in pressurized water reactors, that the primary loop will remain pressurized following corium relocation to the lower plenum. As such, pressure stresses on the RPV will be present in addition to the weight stress that the molten corium places on the lower plenum of the RPV; when the metal of the RPV weakens sufficiently due to the heat of the molten corium, it is likely that the liquid corium will be discharged under pressure out of the bottom of the RPV in a pressurized stream, together with entrained gases. This mode of corium ejection may lead to direct containment heating (DCH).
Severe accident ex-vessel interactions and challenges to containment
Haskin et al identify six modes by which the containment could be credibly challenged; some of these modes are not applicable to core melt accidents.
Overpressure
Dynamic pressure (shockwaves)
Internal missiles
External missiles (not applicable to core melt accidents)
Meltthrough
Bypass
Standard failure modes
If the melted core penetrates the pressure vessel, there are theories and speculations as to what may then occur.
In modern Russian plants, there is a "core catching device" in the bottom of the containment building. The melted core is supposed to hit a thick layer of a "sacrificial metal" that would melt, dilute the core and increase the heat conductivity, and finally the diluted core can be cooled down by water circulating in the floor. There has never been any full-scale testing of this device, however.
In Western plants there is an airtight containment building. Though radiation would be at a high level within the containment, doses outside of it would be lower. Containment buildings are designed for the orderly release of pressure without releasing radionuclides, through a pressure release valve and filters. Hydrogen/oxygen recombiners also are installed within the containment to prevent gas explosions.
In a melting event, one spot or area on the RPV will become hotter than other areas, and will eventually melt. When it melts, corium will pour into the cavity under the reactor. Though the cavity is designed to remain dry, several NUREG-class documents advise operators to flood the cavity in the event of a fuel melt incident. This water will become steam and pressurize the containment. Automatic water sprays will pump large quantities of water into the steamy environment to keep the pressure down. Catalytic recombiners will rapidly convert the hydrogen and oxygen back into water. One debated positive effect of the corium falling into water is that it is cooled and returns to a solid state.
Extensive water spray systems within the containment along with the ECCS, when it is reactivated, will allow operators to spray water within the containment to cool the core on the floor and reduce it to a low temperature.
These procedures are intended to prevent release of radioactivity. In the Three Mile Island event in 1979, a theoretical person standing at the plant property line during the entire event would have received a dose of approximately 2 millisieverts (200 millirem), between a chest X-ray's and a CT scan's worth of radiation. This was due to outgassing by an uncontrolled system that, today, would have been backfitted with activated carbon and HEPA filters to prevent radionuclide release.
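As a quick unit check on the dose figure above, 1 millisievert equals 100 millirem; the chest X-ray and CT comparison doses used below are typical textbook values assumed for illustration, not figures from the TMI analysis itself.

```python
# Quick unit check on the dose figure above: 1 millisievert = 100 millirem.
# The chest X-ray and CT doses are typical textbook values assumed here for
# comparison; they are not taken from the TMI analysis.

MREM_PER_MSV = 100.0

def msv_to_mrem(dose_msv: float) -> float:
    """Convert a dose from millisieverts to millirem."""
    return dose_msv * MREM_PER_MSV

if __name__ == "__main__":
    boundary_dose_msv = 2.0      # property-line dose quoted above
    chest_xray_msv = 0.1         # assumed typical effective dose
    chest_ct_msv = 7.0           # assumed typical effective dose
    print(f"{boundary_dose_msv} mSv = {msv_to_mrem(boundary_dose_msv):.0f} mrem")
    print(f"about {boundary_dose_msv / chest_xray_msv:.0f}x a chest X-ray, "
          f"{boundary_dose_msv / chest_ct_msv:.2f}x a chest CT")
```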
In the Fukushima incident, however, this design failed. Despite the efforts of the operators at the Fukushima Daiichi nuclear power plant to maintain control, the reactor cores in units 1–3 overheated, the nuclear fuel melted and the three containment vessels were breached. Hydrogen was released from the reactor pressure vessels, leading to explosions inside the reactor buildings in units 1, 3 and 4 that damaged structures and equipment and injured personnel. Radionuclides were released from the plant to the atmosphere and were deposited on land and on the ocean. There were also direct releases into the sea.
As the natural decay heat of the corium eventually reduces to an equilibrium with convection and conduction to the containment walls, it becomes cool enough for water spray systems to be shut down and the reactor to be put into safe storage. The containment can be sealed with release of extremely limited offsite radioactivity and release of pressure. After perhaps a decade for fission products to decay, the containment can be reopened for decontamination and demolition.
Another scenario sees a buildup of potentially explosive hydrogen, but passive autocatalytic recombiners inside the containment are designed to prevent this. In Fukushima, the containments were filled with inert nitrogen, which prevented hydrogen from burning; the hydrogen leaked from the containment to the reactor building, however, where it mixed with air and exploded. During the 1979 Three Mile Island accident, a hydrogen bubble formed in the pressure vessel dome. There were initial concerns that the hydrogen might ignite and damage the pressure vessel or even the containment building; but it was soon realized that lack of oxygen prevented burning or explosion.
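The hydrogen in these scenarios comes from the steam–zirconium reaction described in the rapid-oxidation stage above (Zr + 2 H2O → ZrO2 + 2 H2). A minimal stoichiometric sketch, assuming complete oxidation and ideal-gas volumes at standard conditions, gives a feel for the quantities involved; real cores oxidize only partially.

```python
# Stoichiometry of the steam-zirconium reaction, Zr + 2 H2O -> ZrO2 + 2 H2.
# A minimal sketch assuming complete oxidation and ideal-gas volumes at STP;
# real cores oxidize only partially and conditions are far from standard.

M_ZR_G_PER_MOL = 91.224
M_H2_G_PER_MOL = 2.016
MOLAR_VOLUME_L_STP = 22.4

def hydrogen_from_zirconium(mass_zr_kg: float) -> tuple:
    """Return (kg of H2, cubic metres of H2 at STP) from fully oxidizing
    mass_zr_kg of zirconium cladding."""
    mol_zr = mass_zr_kg * 1000.0 / M_ZR_G_PER_MOL
    mol_h2 = 2.0 * mol_zr                       # two moles of H2 per mole of Zr
    return mol_h2 * M_H2_G_PER_MOL / 1000.0, mol_h2 * MOLAR_VOLUME_L_STP / 1000.0

if __name__ == "__main__":
    for zr_kg in (1.0, 1000.0):                 # 1 kg and 1 tonne of cladding
        h2_kg, h2_m3 = hydrogen_from_zirconium(zr_kg)
        print(f"{zr_kg:7.0f} kg Zr -> {h2_kg:6.2f} kg H2 (~{h2_m3:7.1f} m^3 at STP)")
```

Each tonne of fully oxidized cladding yields on the order of tens of kilograms of hydrogen, which is why recombiners and inerting of the containment atmosphere matter in severe accidents.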
Speculative failure modes
One scenario consists of the reactor pressure vessel failing all at once, with the entire mass of corium dropping into a pool of water (for example, coolant or moderator) and causing extremely rapid generation of steam. The pressure rise within the containment could threaten integrity if rupture disks could not relieve the stress. Exposed flammable substances could burn, but there are few, if any, flammable substances within the containment.
Another theory, called an "alpha mode" failure by the 1975 Rasmussen (WASH-1400) study, asserted steam could produce enough pressure to blow the head off the reactor pressure vessel (RPV). The containment could be threatened if the RPV head collided with it. (The WASH-1400 report was replaced by better-based newer studies, and now the Nuclear Regulatory Commission has disavowed them all and is preparing the overarching State-of-the-Art Reactor Consequence Analyses [SOARCA] study - see the Disclaimer in NUREG-1150.)
By 1970, there were doubts about the ability of the emergency cooling systems of a nuclear reactor to prevent a loss-of-coolant accident and the consequent meltdown of the fuel core; the subject proved popular in the technical and the popular presses. In 1971, in the article Thoughts on Nuclear Plumbing, former Manhattan Project nuclear physicist Ralph Lapp used the term "China syndrome" to describe a possible burn through of the containment structures, and the subsequent escape of radioactive material(s) into the atmosphere and environment. The hypothesis derived from a 1967 report by a group of nuclear physicists, headed by W. K. Ergen. Some fear that a molten reactor core could penetrate the reactor pressure vessel and containment structure and burn downwards to the level of the groundwater.
It has not been determined to what extent a molten mass can melt through a structure (although that was tested in the loss-of-fluid-test reactor described in Test Area North's fact sheet). The Three Mile Island accident provided real-life experience with an actual molten core: the corium failed to melt through the reactor pressure vessel after over six hours of exposure due to dilution of the melt by the control rods and other reactor internals, validating the emphasis on defense in depth against core damage incidents.
Other reactor types
Other types of reactors have different capabilities and safety profiles than the LWR does. Advanced varieties of several of these reactors have the potential to be inherently safe.
CANDU reactors
CANDU reactors, a Canadian-invented deuterium–uranium design, are designed with at least one, and generally two, large low-temperature and low-pressure water reservoirs around their fuel/coolant channels. The first is the bulk heavy-water moderator (a separate system from the coolant), and the second is the light-water-filled shield tank (or calandria vault). These backup heat sinks are sufficient to prevent either the fuel meltdown in the first place (using the moderator heat sink), or the breaching of the core vessel should the moderator eventually boil off (using the shield-tank heat sink). In a CANDU, failure modes other than a fuel melt are more probable, such as deformation of the calandria into a non-critical configuration. All CANDU reactors are located within standard Western containments as well.
Gas-cooled reactors
One type of Western reactor, known as the advanced gas-cooled reactor (or AGR), built by the United Kingdom, is not very vulnerable to loss-of-cooling accidents or to core damage except in the most extreme of circumstances. By virtue of the relatively inert coolant (carbon dioxide), the large volume and high pressure of the coolant, and the relatively high heat transfer efficiency of the reactor, the time frame for core damage in the event of a limiting fault is measured in days. Restoration of some means of coolant flow will prevent core damage from occurring.
Lead and lead-bismuth-cooled reactors
Heavy liquid metals, such as lead or lead–bismuth, have recently been proposed as reactor coolants. Because the fuel and the heavy liquid metal (HLM) have similar densities, an inherent passive-safety self-removal feedback mechanism arises from buoyancy forces: once a certain temperature threshold is reached and a packed bed of fuel debris becomes lighter than the surrounding coolant, the bed is propelled away from the vessel wall. This prevents temperatures that could jeopardize the vessel's structural integrity and also reduces the recriticality potential by limiting the allowable bed depth.
Experimental or conceptual designs
Some design concepts for nuclear reactors emphasize resistance to meltdown and operating safety.
The PIUS (process inherent ultimate safety) designs, originally engineered by the Swedes in the late 1970s and early 1980s, are LWRs that by virtue of their design are resistant to core damage. No units have ever been built.
Power reactors, including the Deployable Electrical Energy Reactor (a larger-scale mobile version of the TRIGA for power generation in disaster areas and on military missions) and the TRIGA Power System (a small power plant and heat source for small and remote community use), have been put forward by interested engineers; they share the safety characteristics of the TRIGA because of the uranium zirconium hydride fuel used.
The Hydrogen Moderated Self-regulating Nuclear Power Module, a reactor that uses uranium hydride as a moderator and fuel, similar in chemistry and safety to the TRIGA, also possesses these extreme safety and stability characteristics, and has attracted a good deal of interest in recent times.
The liquid fluoride thorium reactor is designed to have its core in a molten state during normal operation, as a eutectic mixture of fluoride salts containing thorium. A molten core therefore reflects the normal and safe operating state of this reactor type. If the core overheats, a metal plug melts and the molten salt core drains into tanks, where it cools in a non-critical configuration. Since the core is already molten by design, it cannot be damaged by melting in the conventional sense.
Advanced liquid metal reactors, such as the U.S. Integral Fast Reactor and the Russian BN-350, BN-600, and BN-800, all have a coolant with very high heat capacity, sodium metal. As such, they can withstand a loss of cooling without SCRAM and a loss of heat sink without SCRAM, qualifying them as inherently safe.
Soviet Union–designed reactors
RBMKs
Soviet-designed RBMK reactors (Reaktor Bolshoy Moshchnosti Kanalnyy), found only in Russia and other post-Soviet states and now shut down everywhere except Russia, do not have containment buildings, are naturally unstable (tending to dangerous power fluctuations), and have emergency cooling systems (ECCS) considered grossly inadequate by Western safety standards.
RBMK emergency core cooling systems have only one division and little redundancy within that division. Though the large core of the RBMK is less energy-dense than the smaller Western LWR core, it is harder to cool. The RBMK is moderated by graphite. In the presence of steam and oxygen at high temperatures, graphite forms synthesis gas, and via the water-gas shift reaction the resulting hydrogen can burn explosively. If oxygen contacts hot graphite, it will burn. Control rods used to be tipped with graphite, a material that slows neutrons and thus speeds up the chain reaction. Water is used as a coolant, but not a moderator. If the water boils away, cooling is lost, but moderation continues. This is termed a positive void coefficient of reactivity.
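The sign of the void coefficient determines whether boiling adds or removes reactivity. The sketch below is a sign-only illustration; the coefficient magnitudes are invented for demonstration and are not RBMK or LWR design data.

```python
# Sign-only illustration of the void coefficient of reactivity. The
# coefficient magnitudes are invented for demonstration and are not
# RBMK or LWR design data.

def reactivity_change_pcm(alpha_pcm_per_pct_void: float, delta_void_pct: float) -> float:
    """Reactivity change (pcm) when the coolant void fraction rises by delta_void_pct."""
    return alpha_pcm_per_pct_void * delta_void_pct

if __name__ == "__main__":
    cases = {
        "positive void coefficient (RBMK-like)": +150.0,   # assumed magnitude
        "negative void coefficient (typical LWR)": -80.0,  # assumed magnitude
    }
    for label, alpha in cases.items():
        drho = reactivity_change_pcm(alpha, delta_void_pct=5.0)
        trend = "power tends to rise" if drho > 0 else "power tends to fall"
        print(f"{label}: {drho:+.0f} pcm for +5% void -> {trend}")
```

With a positive coefficient, boiling begets more power and thus more boiling, the unstable feedback loop that made the original RBMK design dangerous.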
The RBMK tends towards dangerous power fluctuations. Control rods can become stuck if the reactor suddenly heats up while they are moving. Xenon-135, a neutron-absorbing fission product, tends to build up in the core and burn off unpredictably during low-power operation. This can lead to inaccurate neutronic and thermal power ratings.
The RBMK does not have any containment above the core. The only substantial solid barrier above the fuel is the upper part of the core, called the upper biological shield, which is a piece of concrete interpenetrated with control rods and with access holes for refueling while online. Other parts of the RBMK were shielded better than the core itself. Rapid shutdown (SCRAM) takes 10 to 15 seconds. Western reactors take 1 - 2.5 seconds.
Western aid has been given to provide certain real-time safety monitoring capacities to the operating staff. Whether this extends to automatic initiation of emergency cooling is not known. Training has been provided in safety assessment from Western sources, and Russian reactors have evolved in response to the weaknesses that were in the RBMK. Nonetheless, numerous RBMKs still operate.
Though it might be possible to stop a loss-of-coolant event prior to core damage occurring, any core damage incidents will probably allow massive release of radioactive materials.
Upon entering the EU in 2004, Lithuania was required to phase out its two RBMKs at Ignalina NPP, deemed totally incompatible with European nuclear safety standards. The country planned to replace them with safer reactors at Visaginas Nuclear Power Plant.
MKER
The MKER is a modern Russian-engineered channel type reactor that is a distant descendant of the RBMK, designed to optimize the benefits and fix the serious flaws of the original.
Several unique features of the MKER's design make it a credible and interesting option. The reactor remains online during refueling, requiring outages only occasionally for maintenance, with uptime of up to 97–99%. The moderator design allows the use of less-enriched fuels with a high burnup rate. Neutronics characteristics have been optimized for civilian use, for superior fuel fertilization and recycling, and graphite moderation achieves better neutronics than is possible with light-water moderation. The lower power density of the core greatly enhances thermal regulation.
An array of improvements make the MKER's safety comparable to Western Generation III reactors: improved quality of parts, advanced computer controls, comprehensive passive emergency core cooling system, and very strong containment structure, along with a negative void coefficient and a fast-acting rapid shutdown system. The passive emergency cooling system uses reliable natural phenomena to cool the core, rather than depending on motor-driven pumps. The containment structure is designed to withstand severe stress and pressure. In the event of a pipe break of a cooling-water channel, the channel can be isolated from the water supply, preventing a general failure.
The greatly enhanced safety and unique benefits of the MKER design enhance its competitiveness in countries considering full fuel-cycle options for nuclear development.
VVER
The VVER is a pressurized light-water reactor that is far more stable and safe than the RBMK. This is because it uses light water as a moderator (rather than graphite), has well-understood operating characteristics, and has a negative void coefficient of reactivity. In addition, some have been built with more than marginal containments, some have quality ECCS systems, and some have been upgraded to international standards of control and instrumentation. Present generations of VVERs (starting from the VVER-1000) are built to Western-equivalent levels of instrumentation, control, and containment systems.
Even with these positive developments, however, certain older VVER models raise a high level of concern, especially the VVER-440 V230.
The VVER-440 V230 has no containment building, only a structure surrounding the RPV that is capable of confining steam. This is a volume of thin steel whose thickness is grossly insufficient by Western standards.
It has no ECCS and can survive at most the break of one small pipe (there are many pipes greater than that size within the design).
It has six steam generator loops, adding unnecessary complexity.
The steam generator loops can apparently be isolated, however, in the event that a break occurs in one of them; the plant can remain operating with one loop isolated—a feature found in few Western reactors.
The interior of the pressure vessel is plain alloy steel exposed to the coolant water, which can lead to rust. One point of distinction in which the VVER surpasses the West is the reactor water cleanup facility—built, no doubt, to deal with the enormous volume of rust within the primary coolant loop—the product of the slow corrosion of the RPV.
This model is viewed as having inadequate process control systems.
Bulgaria had a number of VVER-440 V230 units, but opted to shut them down upon joining the EU rather than backfit them, and is instead building new VVER-1000 units. Many non-EU states maintain V230 units, including Russia and other CIS countries. Many of these states, rather than abandoning the reactors entirely, have opted to install an ECCS, develop standard procedures, and install proper instrumentation and control systems. Though confinements cannot be transformed into containments, the risk of a limiting fault resulting in core damage can be greatly reduced.
The VVER-440 V213 model was built to the first set of Soviet nuclear safety standards. It possesses a modest containment building, and the ECCS systems, though not completely to Western standards, are reasonably comprehensive. Many VVER-440 V213 models operated by former Soviet bloc countries have been upgraded to fully automated Western-style instrumentation and control systems, improving safety to Western levels for accident prevention—but not for accident containment, which is of a modest level compared to Western plants. These reactors are regarded as "safe enough" by Western standards to continue operation without major modifications, though most owners have performed major modifications to bring them up to generally equivalent levels of nuclear safety.
During the 1970s, Finland built two VVER-440 V213 models to Western standards with a large-volume full containment and world-class instrumentation, control standards and an ECCS with multiple redundant and diversified components. In addition, passive safety features such as 900-tonne ice condensers have been installed, making these two units safety-wise the most advanced VVER-440s in the world.
The VVER-1000 type has an adequate Western-style containment, its ECCS is sufficient by Western standards, and its instrumentation and control have been markedly improved to Western 1970s-era levels.
Effects
The effects of a nuclear meltdown depend on the safety features designed into a reactor. A modern reactor is designed both to make a meltdown unlikely, and to contain one should it occur.
In a modern reactor, a nuclear meltdown, whether partial or total, should be contained inside the reactor's containment structure. Thus (assuming that no other major disasters occur) while the meltdown will severely damage the reactor itself, possibly contaminating the whole structure with highly radioactive material, a meltdown alone should not lead to significant radioactivity release or danger to the public.
Reactor design
Although pressurized water reactors are more susceptible to nuclear meltdown in the absence of active safety measures, this is not a universal feature of civilian nuclear reactors. Much of the research in civilian nuclear reactors is for designs with passive nuclear safety features that may be less susceptible to meltdown, even if all emergency systems failed. For example, pebble bed reactors are designed so that complete loss of coolant for an indefinite period does not result in the reactor overheating. The General Electric ESBWR and Westinghouse AP1000 have passively activated safety systems. The CANDU reactor has two low-temperature and low-pressure water systems surrounding the fuel (i.e. moderator and shield tank) that act as back-up heat sinks and preclude meltdowns and core-breaching scenarios. Liquid fueled reactors can be stopped by draining the fuel into tankage, which not only prevents further fission but draws decay heat away statically, and by drawing off the fission products (which are the source of post-shutdown heating) incrementally. The ideal is to have reactors that fail-safe through physics rather than through redundant safety systems or human intervention.
Certain fast breeder reactor designs may be more susceptible to meltdown than other reactor types, due to their larger quantity of fissile material and the higher neutron flux inside the reactor core. Other reactor designs, such as the Integral Fast Reactor prototype EBR-II, were explicitly engineered to be meltdown-immune. It was tested in April 1986, just before the Chernobyl failure, by switching off the power to the primary pumps to simulate a loss of coolant pumping power. As designed, it shut itself down in about 300 seconds, as soon as the temperature rose above the level required for proper operation. This was well below the boiling point of the unpressurised liquid metal coolant, which had entirely sufficient cooling ability to deal with the heat of fission-product radioactivity by simple convection.
The second test, deliberate shut-off of the secondary coolant loop that supplies the generators, caused the primary circuit to undergo the same safe shutdown. This test simulated the case of a water-cooled reactor losing its steam turbine circuit, perhaps by a leak.
United States
The Westinghouse TR-2 suffered partial core damage in 1960 when a likely fuel cladding defect caused one fuel element (out of over 200) to overheat and melt.
The reactor at EBR-I suffered a partial meltdown during a coolant flow test on 29 November 1955.
The Sodium Reactor Experiment in Santa Susana Field Laboratory was an experimental nuclear reactor that operated from 1957 to 1964 and was the first commercial power plant in the world to experience a core meltdown in July 1959.
The partial meltdown at the Fermi 1 experimental fast breeder reactor, in 1966, required the reactor to be repaired, though it never achieved full operation afterward.
The SNAP8DR reactor at the Santa Susana Field Laboratory experienced damage to approximately a third of its fuel in an accident in 1969.
The Three Mile Island accident, in 1979, referred to in the press as a "partial core melt", led to the total dismantlement and the permanent shutdown of reactor 2. Unit 1 continued to operate until 2019.
Soviet Union
A number of Soviet Navy nuclear submarines experienced nuclear meltdowns, including K-27, K-140, and K-431.
Reactor 4 of the Chernobyl Nuclear Power Plant experienced a full core meltdown after a catastrophic power excursion during a mishandled safety test.
Japan
During the Fukushima Daiichi nuclear disaster following the earthquake and tsunami in March 2011, three of the power plant's six reactors suffered meltdowns. Most of the fuel in reactor No. 1 melted.
Switzerland
The Lucens reactor, Switzerland, in 1969.
Canada
NRX (military), Ontario, Canada, in 1952
United Kingdom
Windscale (military), Sellafield, England, in 1957 (see Windscale fire)
Chapelcross nuclear power station (civilian), Scotland, in 1967
France
Saint-Laurent Nuclear Power Plant (civilian), France, in 1969
China syndrome
The China syndrome (loss-of-coolant accident) is a hypothetical nuclear reactor accident characterized by the severe meltdown of the core components of the reactor, which then burn through the containment vessel and the housing building, then (figuratively) through the crust and body of the Earth until reaching the opposite end, presumed to be in "China". While the antipodes of China do include Argentina, home to the Atucha Nuclear Power Plant, the phrasing is metaphorical; there is no way a core could penetrate the several-kilometer thickness of the Earth's crust, and even if it did melt to the center of the Earth, it would not travel back upwards against the pull of gravity. Moreover, any tunnel behind the material would be closed by immense lithostatic pressure.
History
The system design of the nuclear power plants built in the late 1960s raised the concern that a severe reactor accident could release large quantities of radioactive materials into the atmosphere and environment. By 1970, there were doubts about the ability of the emergency core cooling system to cope with the effects of a loss of coolant accident and the consequent meltdown of the fuel core. In 1971, in the article Thoughts on Nuclear Plumbing, former Manhattan Project (1942–1946) nuclear physicist Ralph Lapp used the term "China syndrome" to describe a possible burn-through, after a loss of coolant accident, of the nuclear fuel rods and core components melting the containment structures, and the subsequent escape of radioactive material(s) into the atmosphere and environment; the hypothesis derived from a 1967 report by a group of nuclear physicists, headed by W. K. Ergen. In the event, Lapp’s hypothetical nuclear accident was cinematically adapted as The China Syndrome (1979).
The real scare, however, came from a quote in the 1979 film The China Syndrome, which stated, "It melts right down through the bottom of the plant—theoretically to China, but of course, as soon as it hits ground water, it blasts into the atmosphere and sends out clouds of radioactivity. The number of people killed would depend on which way the wind was blowing, rendering an area the size of Pennsylvania permanently uninhabitable." The actual threat of this was coincidentally tested just 12 days after the release of the film when a meltdown at Pennsylvania's Three Mile Island Plant 2 (TMI-2) created a molten core that moved toward "China" before the core froze at the bottom of the reactor pressure vessel. Thus, the TMI-2 reactor fuel and fission products breached the fuel rods, but the melted core itself did not break the containment of the reactor vessel.
A similar concern arose during the Chernobyl disaster. After the reactor was destroyed, a liquid corium mass from the melting core began to breach the concrete floor of the reactor vessel, which was situated above the bubbler pool (a large water reservoir for emergency pumps, also intended to contain any steam pipe rupture). There was concern that a steam explosion would occur if the hot corium made contact with the water, releasing more radioactive material into the air. Because of damage from the accident, three station workers had to operate manually the valves necessary to drain this pool. However, the concern proved unfounded: unknown to anyone at the time, the corium had already contacted the reservoir before it could be drained, and instead of causing a steam explosion it cooled rapidly and harmlessly into a light-brown ceramic pumice that floated on the water.
| Technology | Power generation | null |
162540 | https://en.wikipedia.org/wiki/Bird-of-paradise | Bird-of-paradise | The birds-of-paradise are members of the family Paradisaeidae of the order Passeriformes. The majority of species are found in eastern Indonesia, Papua New Guinea, and eastern Australia. The family has 45 species in 17 genera. The members of this family are perhaps best known for the plumage of the males of the species, the majority of which are sexually dimorphic. The males of these species tend to have very long, elaborate feathers extending from the beak, wings, tail, or head. For the most part, they are confined to dense rainforest habitats. The diet of all species is dominated by fruit and to a lesser extent arthropods. The birds-of-paradise have a variety of breeding systems, ranging from monogamy to lek-type polygamy.
A number of species are threatened by hunting and habitat loss.
Taxonomy
The family Paradisaeidae was introduced (as Paradiseidae) in 1825, with Paradisaea as the type genus, by the English naturalist William Swainson. For many years the birds-of-paradise were treated as being closely related to the bowerbirds. Today, while both are treated as part of the Australasian lineage Corvida, the two are thought to be only distantly related. The closest evolutionary relatives of the birds-of-paradise are the crow and jay family Corvidae, the monarch flycatchers Monarchidae, and the Australian mudnesters Struthideidae.
A 2009 study examining the mitochondrial DNA of all species to examine the relationships within the family and to its nearest relatives estimated that the family emerged 24 million years ago, earlier than previous estimates. The study identified five clades within the family, and placed the split between the first clade, which contains the monogamous manucodes and paradise-crow, and all the other birds-of-paradise, to be 10 million years ago. The second clade includes the parotias and the King of Saxony bird-of-paradise. The third clade provisionally contains several genera, including Seleucidis, the Drepanornis sicklebills, Semioptera, Ptiloris, and Lophorina, although some of these are questionable. The fourth clade includes the Epimachus sicklebills, Paradigalla, and the astrapias. The final clade includes the Cicinnurus and the Paradisaea birds-of-paradise.
The exact limits of the family have been the subject of revision as well. The three species of satinbird (the genera Cnemophilus and Loboparadisea) were treated as a subfamily of the birds-of-paradise, Cnemophilinae. In spite of differences in the mouth, foot morphology, and nesting habits they remained in the family until a 2000 study moved them to a separate family closer to the berrypeckers and longbills (Melanocharitidae). The same study found that the Macgregor's bird-of-paradise was actually a member of the large Australasian honeyeater family. In addition to these three species, a number of systematically enigmatic species and genera have been considered potential members of this family. The two species in the genus Melampitta, also from New Guinea, have been linked with the birds-of-paradise, but their relationships remain uncertain, more recently being linked with the Australian mudnesters. The silktail of Fiji has been linked with the birds-of-paradise many times since its discovery, but never formally assigned to the family. Recent molecular evidence now places the species with the fantails.
Phylogeny
A genus level phylogeny of the family has been determined by Martin Irestedt and collaborators.
Species
genus: Lycocorax
Halmahera paradise-crow, Lycocorax pyrrhopterus
Obi paradise-crow, Lycocorax obiensis
genus: Manucodia
Glossy-mantled manucode, Manucodia ater
Tagula manucode, Manucodia alter
Jobi manucode, Manucodia jobiensis
Crinkle-collared manucode, Manucodia chalybatus
Curl-crested manucode, Manucodia comrii
genus: Phonygammus
Trumpet manucode, Phonygammus keraudrenii
genus: Paradigalla
Long-tailed paradigalla, Paradigalla carunculata
Short-tailed paradigalla, Paradigalla brevicauda
genus: Astrapia
Arfak astrapia, Astrapia nigra
Splendid astrapia, Astrapia splendidissima
Ribbon-tailed astrapia, Astrapia mayeri
Stephanie's astrapia, Astrapia stephaniae
Huon astrapia, Astrapia rothschildi
genus: Parotia
Western parotia, Parotia sefilata
Carola's parotia, Parotia carolae
Bronze parotia, Parotia berlepschi
Lawes's parotia, Parotia lawesii
Eastern parotia, Parotia helenae (Disputed)
Wahnes's parotia, Parotia wahnesi
genus: Pteridophora
King of Saxony bird-of-paradise, Pteridophora alberti
genus: Lophorina
Greater lophorina, Lophorina superba
Crescent-caped lophorina, Lophorina niedda
Lesser lophorina, Lophorina minor
genus: Ptiloris
Magnificent riflebird, Ptiloris magnificus
Growling riflebird, Ptiloris intercedens
Paradise riflebird, Ptiloris paradiseus
Victoria's riflebird, Ptiloris victoriae
genus: Epimachus
Black sicklebill, Epimachus fastosus
Brown sicklebill, Epimachus meyeri
genus: Drepanornis
Black-billed sicklebill, Drepanornis albertisi
Pale-billed sicklebill, Drepanornis bruijnii
genus: Cicinnurus
King bird-of-paradise, Cicinnurus regius
Magnificent bird-of-paradise, Cicinnurus magnificus/Diphyllodes magnificus
Wilson's bird-of-paradise, Cicinnurus respublica/Diphyllodes respublica
genus: Diphyllodes
Magnificent bird-of-paradise, Diphyllodes magnificus
Wilson's bird-of-paradise, Diphyllodes respublica
genus: Semioptera
Standardwing bird-of-paradise, Semioptera wallacii
genus: Seleucidis
Twelve-wired bird-of-paradise, Seleucidis melanoleucus
genus: Paradisaea
Lesser bird-of-paradise, Paradisaea minor
Greater bird-of-paradise, Paradisaea apoda
Raggiana bird-of-paradise, Paradisaea raggiana
Goldie's bird-of-paradise, Paradisaea decora
Red bird-of-paradise, Paradisaea rubra
Emperor bird-of-paradise, Paradisaea guilielmi
genus: Paradisornis
Blue bird-of-paradise, Paradisornis rudolphi
Hybrids
Hybrid birds-of-paradise may occur when individuals of different species, that look similar and have overlapping ranges, confuse each other for their own species and crossbreed.
When Erwin Stresemann realised that hybridisation among birds-of-paradise might explain why so many of the described species were so rare, he examined many controversial specimens and, during the 1920s and 1930s, published several papers on his hypothesis. Many of the species described in the late 19th and early 20th centuries are now generally considered to be hybrids, though some are still subject to dispute; their status is not likely to be settled definitively without genetic examination of museum specimens.
Description
Birds-of-paradise are closely related to the corvids. Birds-of-paradise range in size from the king bird-of-paradise at and to the curl-crested manucode at and . The male black sicklebill, with its long tail, is the longest species at . In most species, the tails of the males are larger and longer than those of the females, the differences ranging from slight to extreme. The wings are rounded and, in some species, structurally modified on the males in order to make sound. There is considerable variation in the family with regard to bill shape. Bills may be long and decurved, as in the sicklebills and riflebirds, or small and slim like the Astrapias. As with body size, bill size varies between the sexes, although species where the females have larger bills than the males are more common, particularly in the insect-eating species.
Plumage variation between the sexes is closely related to the breeding system. The manucodes and paradise-crow, which are socially monogamous, are sexually monomorphic. So are the two species of Paradigalla, which are polygamous. All these species have generally black plumage with varying amounts of green and blue iridescence. The female plumage of the dimorphic species is typically drab to blend in with their habitat, unlike the bright, attractive colours found on the males. Younger males of these species have female-like plumage, and sexual maturity takes a long time, with the full adult plumage not being obtained for up to seven years. The more subdued colours afford the younger males protection from predators and also reduce hostility from adult males.
Distribution and habitat
The centre of bird-of-paradise diversity is the large island of New Guinea; all but two genera are found in New Guinea. Those other two are the monotypic genera Lycocorax and Semioptera, both of which are endemic to the Maluku Islands, to the west of New Guinea. Of the riflebirds in the genus Ptiloris, two are endemic to the coastal forests of eastern Australia, one occurs in both Australia and New Guinea, and one is only found in New Guinea. The only other genus to have a species outside New Guinea is Phonygammus, one representative of which is found in the extreme north of Queensland. The remaining species are restricted to New Guinea and some of the surrounding islands. Many species have very small ranges, particularly those with restricted habitat types such as mid-montane forest (like the black sicklebill) or island endemics (like the Wilson's bird-of-paradise).
The majority of birds-of-paradise live in tropical forests, including rainforests, swamps, and moss forests, nearly all of them solitary tree dwellers. Several species have been recorded in coastal mangroves. The southernmost species, the paradise riflebird of Australia, lives in sub-tropical and temperate wet forests. As a group the manucodes are the most plastic in their habitat requirements; in particular, the glossy-mantled manucode, which inhabits both forest and open savanna woodland. Mid-montane habitats are the most commonly occupied habitat, with thirty of the forty species occurring in the 1000–2000 m altitudinal band.
Behaviour and ecology
Diet and feeding
The diet of the birds-of-paradise is dominated by fruit and arthropods, although small amounts of nectar and small vertebrates may also be taken. The ratio of the two food types varies by species, with fruit predominating in some species, and arthropods dominating the diet in others. The ratio of the two will affect other aspects of the behaviour of the species; for example, frugivorous species tend to feed in the forest canopy, whereas insectivores may feed lower down in the middle storey. Frugivores are more social than the insectivores, which are more solitary and territorial.
Even the birds-of-paradise that are primarily insect eaters will still take large amounts of fruit. The family is overall an important seed disperser for the forests of New Guinea, as they do not digest the seeds. Species that feed on fruit will range widely searching for fruit, and while they may join other fruit-eating species at a fruiting tree, they will not associate with them otherwise and will not stay with other species for long. Fruit is eaten while perched and not in the air, and birds-of-paradise are able to use their feet as tools to manipulate and hold their food, allowing them to extract certain capsular fruit. There is some niche differentiation in fruit choice by species and any one species will only consume a limited number of fruit types compared to the large choice available. For example, the trumpet manucode and crinkle-collared manucode will eat mostly figs, whereas the Lawes's parotia focuses mostly on berries and the greater lophorina and raggiana bird-of-paradise take mostly capsular fruit.
Breeding
Most species have elaborate mating rituals, with at least eight species exhibiting lek mating systems, including the genus Paradisaea. Others, such as the Cicinnurus and Parotia species, have highly ritualised mating dances. Across the family (Paradisaeidae), female preference is incredibly important in shaping the courtship behaviors of males and, in fact, drives the evolution of ornamental combinations of sound, color, and behavior. Males are polygamous in the sexually dimorphic species, but monogamous in at least some of the monomorphic species. Hybridisation is frequent in these birds, suggesting the polygamous species of bird of paradise are very closely related despite being in different genera. Many hybrids have been described as new species in the past, and doubt remains regarding whether some forms, such as Rothschild's lobe-billed bird-of-paradise, are valid.
Birds-of-paradise build their nests from soft materials, such as leaves, ferns, and vine tendrils, typically placed in a tree fork. The typical number of eggs in each clutch varies among the species and is not known for every species. For larger species, it is almost always just one egg, but smaller species may produce clutches of 2–3 eggs. Eggs hatch after 16–22 days, and the young leave the nest at between 16 and 30 days of age.
Relationship with humans
Societies of New Guinea often use bird-of-paradise plumes in their dress and rituals, and the plumes were popular in Europe in past centuries as adornment for ladies' millinery. Hunting for plumes and habitat destruction have reduced some species to endangered status; habitat destruction due to deforestation is now the predominant threat.
Best known are the members of the genus Paradisaea, including the type species, the greater bird-of-paradise, Paradisaea apoda. This species was described from specimens brought back to Europe from trading expeditions in the early sixteenth century. These specimens had been prepared by native traders by removing their wings and feet so that they could be used as decorations. This was not known to the explorers, and in the absence of information, many beliefs arose about them. They were briefly thought to be the mythical phoenix. The often footless and wingless condition of the skins led to the belief that the birds never landed but were kept permanently aloft by their plumes. The first Europeans to encounter their skins were the voyagers in Ferdinand Magellan's circumnavigation of the Earth. Antonio Pigafetta wrote that "The people told us that those birds came from the terrestrial paradise, and they call them bolon diuata, that is to say, 'birds of God'." This is the origin of both the name "bird of paradise" and the specific name apoda – without feet. An alternate account by Maximilianus Transylvanus used the term Mamuco Diata, a variant of Manucodiata, which was used as a synonym for birds-of-paradise up to the 19th century.
Birdwatching
In recent years the availability of pictures and videos about birds of paradise on the internet has raised the interest of birdwatchers around the world. Many of them fly to West Papua to watch various species of birds of paradise from Wilson's Bird of Paradise (Diphyllodes respublica) and Red Bird of Paradise (Paradisaea rubra) in Raja Ampat to Lesser Birds of Paradise (Paradisaea minor), Magnificent Riflebird (Ptiloris magnificus), King Bird of Paradise (Cicinnurus regius), crescent-caped lophorina (Lophorina niedda), and Magnificent Bird of Paradise (Diphyllodes magnificus) in Susnguakti forest.
This activity significantly reduces the number of local villagers who are involved in the hunting of paradise birds.
Hunting
Hunting of birds of paradise has occurred for a long time, possibly since the beginning of human settlement. It is a peculiarity that among the most frequently hunted species, males start mating opportunistically even before they grow their ornamental plumage. This may be an adaptation to maintaining population levels in the face of hunting pressures, which have probably been present for hundreds of years.
The naturalist, explorer, and author Alfred Russel Wallace spent six years in the region, which he chronicled in The Malay Archipelago (published in 1869). His expedition team shot, collected, and described many specimens of animals and birds, including the great, king, twelve-wired, superb, red, and six-shafted birds of paradise.
Hunting to provide plumes for the millinery trade was extensive in the late 19th and early 20th century, but today the birds have legal protection except for hunting at a sustainable level to fulfill the ceremonial needs of the local tribal population. In the case of Pteridophora plumes, scavenging from old bowerbird bowers is encouraged.
Other examples
The southern hemisphere constellation Apus represents a bird-of-paradise.
An adult-plumaged male bird-of-paradise is depicted on the flag of Papua New Guinea, designed by Susan Karike.
The various members of the family were profiled by David Attenborough in Attenborough in Paradise.
The Indonesian Army has a Military Area Command named after "Cenderawasih", the local name for the bird.
The plume from the bird of paradise was used in the Shripech, the royal crown worn by the King of Nepal before the establishment of a republic. The crown is now housed in the Narayanhiti Palace Museum.
| Biology and health sciences | Corvoidea | null |
162577 | https://en.wikipedia.org/wiki/Corvidae | Corvidae | Corvidae is a cosmopolitan family of oscine passerine birds that contains the crows, ravens, rooks, magpies, jackdaws, jays, treepies, choughs, and nutcrackers. In colloquial English, they are known as the crow family or corvids. Currently, 139 species are included in this family. The genus Corvus, containing 50 species, makes up over a third of the entire family. The ravens are the largest corvids and the largest of all passerines.
Corvids display remarkable intelligence for animals of their size, and are among the most intelligent birds thus far studied. Specifically, members of the family have demonstrated self-awareness in mirror tests (Eurasian magpies) and tool-making ability (e.g. crows and rooks), skills which until recently were thought to be possessed only by humans and a few other higher mammals. Their total brain-to-body mass ratio is equal to that of non-human great apes and cetaceans, and only slightly lower than that of humans.
They are medium to large in size, with strong feet and bills, rictal bristles, and a single moult each year (most passerines moult twice). Corvids are found worldwide, except for the southern tip of South America and the polar ice caps. The majority of the species are found in tropical South and Central America and in southern Asia, with fewer than 10 species each in Africa and Australasia. The genus Corvus has re-entered Australia in relatively recent geological prehistory, with five species and one subspecies there. Several species of raven have reached oceanic islands, and some of these species are now highly threatened with extinction, or have already become extinct.
Systematics, taxonomy, and evolution
The name Corvidae for the family was introduced by the English zoologist William Elford Leach in a guide to the contents of the British Museum published in 1820. Over the years, much disagreement has arisen on the exact evolutionary relationships of the corvid family and their relatives. What eventually seemed clear was that corvids are derived from Australasian ancestors, and spread throughout the world from there. Other lineages derived from these ancestors evolved into ecologically diverse, but often Australasian, groups. In the late 1970s and throughout the 1980s, Sibley and Ahlquist united the corvids with other taxa in the Corvida, based on DNA–DNA hybridization. The presumed corvid relatives included: currawongs, birds of paradise, whipbirds, quail-thrushes, whistlers, monarch flycatchers and drongos, shrikes, vireos, and vangas, but current research favors the theory that this grouping is partly artificial. The corvids constitute the core group of the Corvoidea, together with their closest relatives (the birds of paradise, Australian mud-nesters, and shrikes). They are also the core group of the Corvida, which includes the related groups, such as Old World orioles and vireos.
Clarification of the interrelationships of the corvids has been achieved based on cladistic analysis of several DNA sequences. The jays and magpies do not constitute monophyletic lineages, but rather seem to split into an American and an Old World lineage, and a Holarctic and an Oriental lineage, respectively. These are not closely related to each other. The position of the azure-winged magpie, whose affinities have long been obscure, is less clear than previously thought.
The crested jayshrike (Platylophus galericulatus) is traditionally included in the Corvidae, but is not a true member of this family, being closer to the helmetshrikes (Malaconotidae) or shrikes (Laniidae). Likewise, the Hume's ground "jay" (Pseudopodoces humilis) is, in fact, a member of the tit family, Paridae. The following tree showing the phylogeny of the crow family is based on a molecular study by Jenna McCullough and collaborators published in 2023.
Fossil record
The earliest corvid fossils date to mid-Miocene Europe, about 17 million years ago; Miocorvus and Miopica may be ancestral to crows and some of the magpie lineage, respectively, or similar to the living forms, due to convergent evolution. The known prehistoric corvid genera appear to be mainly of the New World and Old World jay and Holarctic magpie lineages:
Miocorvus (Middle Miocene of Sansan, Gers in southwestern France)
Miopica (Middle Miocene of SW Ukraine)
Miocitta (Pawnee Creek Late Miocene of Logan County, US)
Corvidae gen. et sp. indet. (Edson Early Pliocene of Sherman County, Kansas, US)
Protocitta (Early Pleistocene of Reddick, US)
Corvidae gen. et sp. indet. (Early/Middle Pleistocene of Sicily) – probably belongs in an extant genus
Henocitta (Arredondo Clay Middle Pleistocene of Williston, US)
In addition, there are numerous fossil species of extant genera since the Mio–Pliocene, mainly European Corvus.
Morphology
Corvids are large to very large passerines with a robust build and strong legs; all species, except the pinyon jay, have nostrils covered by bristle-like feathers. Many corvids of temperate zones have mainly black or blue coloured plumage; however, some are pied black and white, some have a blue-purple iridescence, and many tropical species are brightly coloured. The sexes are very similar in color and size. Corvids have strong, stout bills and large wingspans. The family includes the largest members of the passerine order.
The smallest corvid is the dwarf jay (Cyanolyca nanus). The largest corvids are the common raven (Corvus corax) and the thick-billed raven (Corvus crassirostris), which are also the largest passerines.
Species can be identified based on size, shape, and geography; however, some, especially the Australian crows, are best identified by their raucous calls.
Ecology
Corvids occur in most climatic zones. Most are sedentary, and do not migrate significantly. However, during a shortage of food, irruptive migration can occur. When species are migratory, they will form large flocks in the fall (around August in the Northern Hemisphere) and travel south.
One reason for the success of crows, compared to ravens, is their ability to overlap breeding territories. During the breeding season, crows have been shown to overlap breeding territories six times as much as ravens. This invasion of breeding ranges allowed a corresponding increase in local population density.
Since crows and magpies have benefited and even increased in numbers due to human development, it has been suggested that this might cause increased rates of nest predation of smaller bird species, leading to declines. Several studies have shown this concern to be unfounded. One study examined American crows, which had increased in numbers and were suspected of nest predation on threatened marbled murrelets. However, Steller's jays, which are successful independently of human development, are more efficient at plundering small birds' nests than American crows and common ravens. Therefore, the human relationship with crows and ravens did not significantly increase nest predation when compared to other factors, such as habitat destruction. Similarly, a study examining the decline of British songbirds found no link between Eurasian magpie numbers and population changes of 23 songbird species.
Behaviour
Some corvids have strong organization and community groups. Jackdaws, for example, have a strong social hierarchy, and are facultatively colonial during breeding. Providing mutual aid has also been recorded within many of the corvid species.
Young corvids have been known to play and take part in elaborate social games. Documented group games follow "king of the mountain" or "follow the leader" patterns. Other play involves the manipulation, passing, and balancing of sticks. Corvids also take part in other activities, such as sliding down smooth surfaces. These games are understood to play a large role in the adaptive and survival ability of the birds.
Mate selection is quite complex, and accompanied with much social play in the Corvidae. Youngsters of social corvid species undergo a series of tests, including aerobatic feats, before being accepted as a mate by the opposite sex.
Some corvids can be aggressive. Blue jays, for example, are well known to attack anything that threatens their nest. Crows have been known to attack dogs, cats, ravens, and birds of prey. Most of the time, these assaults take place as a distraction long enough to allow an opportunity for stealing food.
Food and feeding
The natural diet of many corvid species is omnivorous, consisting of invertebrates, nestlings, small mammals, berries, fruits, seeds, and carrion. However, some corvids, especially the crows, have adapted well to human conditions, and have come to rely on human food sources. In a US study of American crows, common ravens, and Steller's jays around campgrounds and human settlements, the crows appeared to have the most diverse diet of all, taking anthropogenic foods, such as: bread, spaghetti, fried potatoes, dog food, sandwiches, and livestock feed. The increase in available human food sources is contributing to population rises in some corvid species.
Some corvids are predators of other birds; however, some crows also eat many agricultural pests, including cutworms, wireworms, grasshoppers, and harmful weeds. During the wintering months, corvids typically form foraging flocks. Some corvids will eat carrion, and since they lack a specialized beak for tearing into flesh, they must wait until animals are opened, whether by other predators or as roadkill.
Reproduction
Many species of corvid are territorial, protecting territories throughout the year, or simply during the breeding season. In some cases, territories may only be guarded during the day, with the pair joining off-territory roosts at night. Some corvids are well-known communal roosters. Some groups of roosting corvids can be very large, with a roost of 65,000 rooks counted in Scotland. Some, including the rook and the jackdaw, are also communal nesters.
The partner bond in corvids is extremely strong, and even lifelong in some species. This monogamous lifestyle, however, can still contain extra-pair copulations. Males and females build large nests together in trees or on ledges; jackdaws are known to breed in buildings and in rabbit warrens. The male will also feed the female during incubation. The nests are constructed of a mass of bulky twigs lined with grass and bark. Corvids can lay between 3 and 10 eggs, typically ranging between 4 and 7. The eggs are usually greenish in colour with brown blotches. Once hatched, the young remain in the nests for up to 6–10 weeks depending on the species.
Corvids use several different forms of parental care, including bi-parental care and cooperative breeding. Cooperative breeding takes place when parents are helped in raising their offspring, usually by relatives, but also sometimes by non-related adults. Such helpers at the nest in most cooperatively-breeding birds are males, while females join other groups. White-throated magpie-jays are cooperatively-breeding corvids where the helpers are mostly female.
Intelligence
Jerison (1973) suggested that the degree of brain encephalization (the ratio of brain size to body size, EQ) may correlate with an animal's intelligence and cognitive skills. Corvids and psittacids have a higher EQ than other bird families, similar to that of the apes. Among the Corvidae, ravens possess the largest brain-to-body size ratio. In addition to their high EQ, corvids' intelligence is boosted by their living environment. Firstly, corvids are found in some of the harshest environments on Earth, where survival requires greater intelligence and better adaptations. Secondly, most corvids are omnivorous, meaning they are exposed to a wider variety of stimuli and environments. Furthermore, many corvid species live in large family groups and demonstrate high social complexity.
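To make the EQ definition concrete, here is a minimal Python sketch of a Jerison-style encephalization quotient. The allometric constants (0.12 and an exponent of 2/3) are the classic values Jerison used for mammals, and the brain and body masses in the example calls are hypothetical placeholders, so the output only illustrates how the ratio is computed rather than reporting measured corvid data.

```python
# A minimal sketch of a Jerison-style encephalization quotient (EQ).
# The constants below (c = 0.12, exponent r = 2/3) are the classic mammalian
# values; bird-specific allometric constants differ, and the masses in the
# examples are hypothetical placeholders, not measured corvid data.

def encephalization_quotient(brain_g: float, body_g: float,
                             c: float = 0.12, r: float = 2.0 / 3.0) -> float:
    """EQ = observed brain mass / expected brain mass,
    where expected brain mass = c * body_mass ** r (masses in grams)."""
    expected_brain_g = c * body_g ** r
    return brain_g / expected_brain_g

# Hypothetical masses, chosen only to show how the ratio behaves:
print(encephalization_quotient(brain_g=15.0, body_g=1200.0))  # raven-sized bird, EQ above 1
print(encephalization_quotient(brain_g=2.0, body_g=300.0))    # pigeon-sized bird, EQ below 1
```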
Their intelligence is boosted by the long growing period of the young. By remaining with the parents, the young have more opportunities to learn necessary skills.
When compared to dogs and cats in an experiment testing the ability to seek out food according to three-dimensional cues, corvids outperformed the mammals. A meta-analysis testing how often birds invented new ways to acquire food in the wild found corvids to be the most innovative birds. A 2004 review suggested that their cognitive abilities are on par with those of non-human great apes. Despite structural differences, the brains of corvids and great apes both evolved the ability to make geometrical measurements.
Empathy-consolation
Ravens have been found to show bystander affiliation, as well as solicited bystander affiliation, after aggressive conflicts. Bystanders that already share a valuable relationship with the victim are more likely to affiliate with it, alleviating the victim's distress ("consolation"), a behaviour regarded as a sign of empathy. Ravens are therefore believed to be sensitive to the emotions of others.
Empathy-emotional contagion
Emotional contagion refers to the matching of emotional states between individuals. Adriaense et al. (2018) used a bias paradigm to quantify emotional valence, which, along with emotional arousal, defines emotion. They manipulated positive and negative affective states in demonstrator ravens, which showed significantly different responses to the two states: pessimistic behaviour after the negative manipulation and optimistic behaviour after the positive one. The researchers then trained an observer raven to watch the demonstrator's responses before being presented with ambiguous stimuli. The results confirmed the existence of negative emotional contagion in ravens, while positive emotional contagion remained unclear. Ravens therefore appear capable of both discerning negative emotions in their conspecifics and showing signs of empathy.
Interspecific communications
Interspecific communication is evolutionarily beneficial for species living in the same environment. Facial expressions are the method humans most widely use to express emotions. Tate et al. (2006) explored whether non-human mammals process visual cues from faces to achieve interspecific communication with humans. Researchers have also examined avian species' capacity to interpret this non-verbal communication and their degree of sensitivity to human emotions. Observing American crows' behavioral responses to varying human gazes and facial expressions, Clucas et al. (2013) found that crows change their behavior in response to direct human gaze but do not respond differentially to human emotional facial expressions. They further suggested that the crows' high intelligence enables them to adapt well to human-dominated environments.
Personality conformity
It is difficult to study emotions in animals because humans cannot communicate with them directly. One way to identify animal personality traits is to observe the consistency of an individual's behavior over time and across circumstances. For group-living species, there are two opposing hypotheses regarding the assortment of personalities within a group: the social niche specialization hypothesis and the conformity hypothesis. To test these hypotheses, McCune et al. (2018) performed an experiment on the boldness of two species in Corvidae, the Mexican jay and the California scrub jay. Their results confirmed the conformity hypothesis, supported by significant differences in group effects.
Social construction
Individual personality is both determined by genetics and shaped by social context. Miller et al. (2016) examined the role of the developmental and social environment in personality formation in common ravens and carrion crows, which are highly social corvids. The researchers highlighted the correlation between social context and an individual's consistent behavior over time (personality), showing that the presence of conspecifics promoted behavioral similarities between individuals. They thereby demonstrated that social context has a significant impact on the development of ravens' and crows' personalities.
Social complexity
The social complexity hypothesis suggests that living in a social group enhances the cognitive abilities of animals. Corvid ingenuity is represented through their feeding skills, memorization abilities, use of tools, and group behaviour. Living in large social groups has long been connected with high cognitive ability. To live in a large group, a member must be able to recognize individuals, and track the social position and foraging of other members over time. Members must also be able to distinguish between sex, age, reproductive status, and dominance, and to update this information constantly. It might be that social complexity corresponds to their high cognition, as well as contributing to the spread of information between members of the group.
Consciousness, culture-rudiments, and neurology
A study published in 2008 suggested that the Eurasian magpie is the only non-mammal species known to be able to recognize itself in a mirror test, but later research could not replicate this finding. Studies using very similar setups could not find such behaviour in other corvids (e.g., Carrion crows). Magpies have been observed taking part in elaborate grieving rituals, which have been likened to human funerals, including laying grass wreaths. Marc Bekoff, at the University of Colorado, argues that it shows that they are capable of feeling complex emotions, including grief. Furthermore, carrion crows show a neuronal response that correlates with their perception of a stimulus, which some scientists have argued to be an empirical marker of (avian/corvid) sensory consciousness—the conscious perception of sensory input—in the crows which do not have a cerebral cortex. A related study shows that the birds' pallium's neuroarchitecture is reminiscent of the mammalian cortex.
Tool use, memory, and complex rational thought
There are also specific examples of corvid cleverness. One carrion crow was documented cracking nuts by placing them on a crosswalk, letting the passing cars crack the shell, waiting for the light to turn red, and then safely retrieving the contents. A group of crows in England took turns lifting garbage bin lids while their companions collected food.
Members of the corvid family have been known to watch other birds, remember where they hide their food, then return once the owner leaves. Corvids also move their food around between hiding places to avoid thievery—but only if they have previously been thieves themselves (that is, they remember previous relevant social contexts, use their own experience of having been a thief to predict the behavior of a pilferer, and can determine the safest course to protect their caches from being pilfered). Studies to assess similar cognitive abilities in apes have been inconclusive.
The ability to hide food requires highly accurate spatial memories. Corvids have been recorded to recall their food's hiding places up to nine months later. It is suggested that vertical landmarks (like trees) are used to remember locations. There has also been evidence that California scrub jays, which store perishable foods, not only remember where they stored their food, but for how long. This has been compared to episodic memory, previously thought unique to humans.
New Caledonian crows (Corvus moneduloides) are notable for their highly developed tool fabrication. They make angling tools of twigs and leaves trimmed into hooks, and then subsequently use the hooks to pull insect larvae from tree holes. Tools are engineered according to task, and apparently, also to learned preferences. Recent studies revealed abilities to solve complicated problems, which suggested high levels of innovation of a complex nature. Other corvids that have been observed using tools include: the American crow, blue jay, and green jay. Researchers have discovered that New Caledonian crows do not just use single objects as tools—they can also construct novel compound tools through assemblage of otherwise non-functional elements. Diversity in tool design among corvids suggests cultural variation. Again, great apes are the only other animals known to use tools in such a fashion.
Clark's nutcrackers and jackdaws were compared in a 2002 study based on geometric rule learning. The corvids, along with a domestic pigeon, had to locate a target between two landmarks, while distances and landmarks were altered. The nutcrackers were more accurate in their searches than the jackdaws and pigeons.
Implications and specific comparisons with other animals
The scarecrow is an archetypal scare tactic in the agricultural business. However, due to corvids' quick wit, scarecrows are soon ignored, and used as perches. Despite farmers' efforts to rid themselves of corvid pests, their attempts have only expanded corvid territories, and strengthened their numbers.
Contrary to earlier teleological classifications, in which they were seen as the "highest" songbirds due to their intelligence, current systematics might place corvids—based on their total number of physical characteristics, rather than just their brains (which are the most developed among birds)—in the lower middle of the passerine evolutionary tree, depending on which subgroup is chosen as the most derived.
The other major group of highly intelligent birds of the order Psittaciformes (which includes 'true' parrots, cockatoos, and New Zealand parrots) is not closely related to corvids.
A study found that four-month-old ravens can have physical and social cognitive skills similar to that of adult great apes, and concluded that the "dynamic of the different influences that, during ontogeny, contributes to adult cognition" is required for the study of cognition.
Disease
Corvids are reservoirs (carriers) for the West Nile virus in the United States. They are infected by mosquitoes (the vectors), primarily of the genus Culex. Crows and ravens are quickly killed by this disease, so their deaths serve as an early-warning system when West Nile virus arrives in an area (as do the deaths of horses and some other bird species). One of the first signs that West Nile virus had arrived in the US in 1999 was the death of crows in New York.
Relationship with humans
Several different corvids, particularly ravens, have occasionally served as pets, although they are not able to speak as readily as parrots, and are not suited to a caged environment.
It is illegal to own corvids, or any other migratory bird, without a permit in North America, due to the Migratory Bird Act.
Humans have been able to coexist with many members of the Corvidae family throughout history, most notably crows and ravens (see: "Role in myth and culture" section below). These positive interactions have extended into modern times.
Role in myth and culture
Folklore often represents corvids as clever, and even mystical, animals. Some Native Americans, such as the Haida, believed that a raven created the earth, and despite being a trickster spirit, ravens were popular on totems, credited with creating man, and considered responsible for placing the Sun in the sky.
Due to the corvids' carrion diet, the Celtic peoples strongly associated them with war, death, and the battlefield; their great intelligence meant that they were often considered messengers, or manifestations of the gods, such as Bendigeidfran (Welsh for "Blessed Crow") or the Irish Morrigan (Middle Irish for "Great Queen"), both of whom were underworld deities that may be related to the later Arthurian Fisher King. The Welsh Dream of Rhonabwy illustrates well the association of ravens with war. In many parts of Britain, gatherings of crows, or more often magpies, are counted using the divination rhyme: "one for sorrow, two for joy, three for a girl, four for a boy, five for silver, six for gold, seven for a secret never to be told." Another rhyme is: "one for sorrow, two for mirth, three for a funeral, four for a birth, five for heaven, six for hell, and seven for the Devil, his own sel." Cornish superstition holds that when a lone magpie is encountered, it must be loudly greeted with respect.
Various Germanic peoples highly revered the raven, and the raven was often depicted as a motif on shields or other war gear in Anglo-Saxon art, such as the Sutton Hoo burial, and Vendel period art. The major deity, Odin, was so commonly associated with ravens throughout history that he gained the kenning "Raven God," and the raven banner was the flag of various Viking Age Scandinavian chieftains. Odin was also attended by Hugin and Munin, two ravens who flew all over the world, and whispered information they acquired into his ears. The Valravn sometimes appears in modern Scandinavian folklore. On a shield and purse lid excavated among the Sutton Hoo treasures, imagery of stylised corvids with scrolled beaks is meticulously detailed in the decorative enamel work. The corvid symbolism reflected their common totemic status to the Anglo-Saxons, whose pre-Christian indigenous beliefs were of the same origin as those of the aforementioned Vikings.
The sixth-century BCE Greek fabulist Aesop featured corvids as intelligent antagonists in many fables. Later, in Western literature, the common raven became a symbol of the main character's descent into madness in the American poet Edgar Allan Poe's "The Raven".
The children's book Mrs. Frisby and the Rats of NIMH and its animated film adaptation features a protagonist crow named Jeremy.
Status and conservation
Unlike those of many other bird families, corvid fitness and reproduction, especially among crows, have increased due to human development. The survival and reproductive success of certain crows and ravens is assisted by their close relationship with humans.
Human development provides additional resources by clearing land, creating shrublands rich in berries and insects. When the cleared land naturally replenishes, jays and crows use the young dense trees for nesting sites. Ravens typically use larger trees in denser forest.
Most corvids are not threatened, and many species are even increasing in population due to human activity. However, a few species are in danger. For example, the destruction of the Southeast Asian rainforest is endangering mixed-species feeding flocks with members from the family Corvidae. Also, since its semiarid scrubland habitat is an endangered ecosystem, the Florida scrub jay has a small and declining population. A number of island species, which are more vulnerable to introduced species and habitat loss, have been driven to extinction, such as the New Zealand raven, or are threatened, like the Mariana crow.
The American crow population of the United States has grown over the years. It is possible that the American crow, due to humans increasing suitable habitat, will cause Northwestern crows and fish crows to decline.
Species
FAMILY CORVIDAE
Choughs
Genus Pyrrhocorax
Alpine chough, Pyrrhocorax graculus
Red-billed chough, Pyrrhocorax pyrrhocorax
Treepies
Genus Crypsirina
Hooded treepie, Crypsirina cucullata
Racket-tailed treepie, Crypsirina temia
Genus Dendrocitta
Andaman treepie, Dendrocitta bayleii
Bornean treepie, Dendrocitta cinerascens
Grey treepie, Dendrocitta formosae
Collared treepie, Dendrocitta frontalis
White-bellied treepie, Dendrocitta leucogastra
Sumatran treepie, Dendrocitta occipitalis
Rufous treepie, Dendrocitta vagabunda
Genus Platysmurus
Malayan black magpie, Platysmurus leucopterus
Bornean black magpie, Platysmurus aterrimus
Genus Temnurus
Ratchet-tailed treepie, Temnurus temnurus
Oriental magpies
Genus Cissa
Common green magpie, Cissa chinensis
Indochinese green magpie, Cissa hypoleuca
Javan green magpie, Cissa thalassina
Bornean green magpie, Cissa jefferyi
Genus Urocissa
Taiwan blue magpie, Urocissa caerulea
Red-billed blue magpie, Urocissa erythroryncha
Yellow-billed blue magpie, Urocissa flavirostris
Sri Lanka blue magpie, Urocissa ornata
White-winged magpie, Urocissa whiteheadi
Old World jays and close relatives
Genus Garrulus
Eurasian jay, Garrulus glandarius
Black-headed jay, Garrulus lanceolatus
Lidth's jay, Garrulus lidthi
Genus Podoces – ground jays
Xinjiang ground jay, Podoces biddulphi
Mongolian ground jay, Podoces hendersoni
Turkestan ground jay, Podoces panderi
Iranian ground jay, Podoces pleskei
Genus Ptilostomus
Piapiac, Ptilostomus afer
Genus Zavattariornis
Stresemann's bushcrow, Zavattariornis stresemanni
Nutcrackers
Genus Nucifraga
Northern nutcracker, Nucifraga caryocatactes
Southern nutcracker, Nucifraga hemispila
Kashmir nutcracker, Nucifraga multipunctata
Clark's nutcracker, Nucifraga columbiana
Holarctic magpies
Genus Pica
Black-billed magpie, Pica hudsonia
Yellow-billed magpie, Pica nuttalli
Maghreb magpie, Pica mauritanica
Eurasian magpie, Pica pica
Korean magpie, Pica (pica) serica
Genus Cyanopica
Azure-winged magpie, Cyanopica cyanus
Iberian magpie, Cyanopica cooki
True crows (crows, ravens, jackdaws and rooks)
Genus Corvus
Australian and Melanesian species
Little crow, Corvus bennetti
Australian raven, Corvus coronoides
Bismarck crow, Corvus insularis
Brown-headed crow, Corvus fuscicapillus
Bougainville crow, Corvus meeki
Little raven, Corvus mellori
New Caledonian crow, Corvus moneduloides
Torresian crow, Corvus orru
Forest raven, Corvus tasmanicus
Relict raven, Corvus (tasmanicus) boreus
Grey crow, Corvus tristis
Long-billed crow, Corvus validus
White-billed crow, Corvus woodfordi
Pacific island species
Alalā (Hawaiian crow), Corvus hawaiiensis (formerly Corvus tropicus) (extinct in the wild)
Mariana crow, Corvus kubaryi
Tropical Asian species
Daurian jackdaw, Coloeus dauuricus
Sunda crow, Corvus enca
Sulawesi crow, Corvus celebensis
Samar crow, Corvus samarensis
Sierra Madre crow, Corvus sierramadrensis
Palawan crow, Corvus pusillus
Flores crow, Corvus florensis
Large-billed crow, Corvus macrorhynchos
Eastern jungle crow, Corvus levaillantii
Indian jungle crow, Corvus culminatus
House crow, Corvus splendens
Collared crow, Corvus torquatus
Piping crow, Corvus typicus
Banggai crow, Corvus unicolor
Eurasian and North African species
Hooded crow, Corvus cornix
Mesopotamian crow, Corvus (cornix) capellanus
Carrion crow (western carrion crow), Corvus corone
Eastern carrion crow, Corvus (corone) orientalis
Rook, Corvus frugilegus
Western jackdaw, Coloeus monedula
Fan-tailed raven, Corvus rhipidurus
Brown-necked raven, Corvus ruficollis
Holarctic species
Common raven, Corvus corax (see also next section)
Pied raven, Corvus corax varius morpha leucophaeus (an extinct color variant)
North and Central American species
American crow, Corvus brachyrhynchos
Northwestern crow, Corvus brachyrhynchos caurinus
Chihuahuan raven, Corvus cryptoleucus
Tamaulipas crow, Corvus imparatus
Jamaican crow, Corvus jamaicensis
White-necked crow, Corvus leucognaphalus
Cuban crow, Corvus nasicus
Fish crow, Corvus ossifragus
Palm crow, Corvus palmarum
Sinaloa crow, Corvus sinaloae
Western raven, Corvus (corax) sinuatus
Tropical African species
White-necked raven, Corvus albicollis
Pied crow, Corvus albus
Cape crow, Corvus capensis
Thick-billed raven, Corvus crassirostris
Somali crow (dwarf raven), Corvus edithae
Boreal jays
Genus Perisoreus
Canada jay, Perisoreus canadensis
Siberian jay, Perisoreus infaustus
Sichuan jay, Perisoreus internigrans
New World jays
Genus Aphelocoma – scrub-jays
California scrub jay, Aphelocoma californica
Island scrub jay, Aphelocoma insularis
Woodhouse's scrub jay, Aphelocoma woodhouseii
Florida scrub jay, Aphelocoma coerulescens
Mexican jay, Aphelocoma wollweberi
Transvolcanic jay, Aphelocoma ultramarina
Unicolored jay, Aphelocoma unicolor
Genus Cyanocitta
Blue jay, Cyanocitta cristata
Steller's jay, Cyanocitta stelleri
Genus Cyanocorax
Black-throated magpie-jay, Cyanocorax colliei
White-throated magpie-jay, Cyanocorax formosa
Black-chested jay, Cyanocorax affinis
Purplish-backed jay, Cyanocorax beecheii
Azure jay, Cyanocorax caeruleus
Cayenne jay, Cyanocorax cayanus
Plush-crested jay, Cyanocorax chrysops
Curl-crested jay, Cyanocorax cristatellus
Purplish jay, Cyanocorax cyanomelas
White-naped jay, Cyanocorax cyanopogon
Tufted jay, Cyanocorax dickeyi
Azure-naped jay, Cyanocorax heilprini
Bushy-crested jay, Cyanocorax melanocyaneus
White-tailed jay, Cyanocorax mystacalis
San Blas jay, Cyanocorax sanblasianus
Violaceous jay, Cyanocorax violaceus
Green jay, Cyanocorax luxuosus
Inca jay, Cyanocorax yncas
Yucatan jay, Cyanocorax yucatanicus
Brown jay, Cyanocorax morio
Genus Cyanolyca
Silvery-throated jay, Cyanolyca argentigula
Black-collared jay, Cyanolyca armillata
Azure-hooded jay, Cyanolyca cucullata
White-throated jay, Cyanolyca mirabilis
Dwarf jay, Cyanolyca nanus
Beautiful jay, Cyanolyca pulchra
Black-throated jay, Cyanolyca pumilo
Turquoise jay, Cyanolyca turcosa
White-collared jay, Cyanolyca viridicyanus
Genus Gymnorhinus
Pinyon jay, Gymnorhinus cyanocephalus
Explanatory notes
| Biology and health sciences | Corvoidea | null |
162621 | https://en.wikipedia.org/wiki/Braided%20river | Braided river | A braided river (also called braided channel or braided stream) consists of a network of river channels separated by small, often temporary, islands called braid bars or, in British English usage, aits or eyots.
Braided streams tend to occur in rivers with high sediment loads or coarse grain sizes, and in rivers with steeper slopes than typical rivers with straight or meandering channel patterns. They are also associated with rivers with rapid and frequent variation in the amount of water they carry, i.e., with "flashy" rivers, and with rivers with weak banks.
Braided channels are found in a variety of environments all over the world, including gravelly mountain streams, sand bed rivers, on alluvial fans, on river deltas, and across depositional plains.
Description
A braided river consists of a network of multiple shallow channels that diverge and rejoin around ephemeral braid bars. This gives the river a fancied resemblance to the interwoven strands of a braid. The braid bars, also known as channel bars, branch islands, or accreting islands, are usually unstable and may be completely covered at times of high water. The channels and braid bars are usually highly mobile, with the river layout often changing significantly during flood events. When the islets separating channels are stabilized by vegetation, so that they are more permanent features, they are sometimes called aits or eyots.
A braided river differs from a meandering river, which has a single sinuous channel. It is also distinct from an anastomosing river, which consists of multiple interweaving semi-permanent channels separated by floodplain rather than by channel bars; these channels may themselves be braided.
Formation
The physical processes that determine whether a river will be braided or meandering are not fully understood. However, there is wide agreement that a river becomes braided when it carries an abundant supply of sediments.
Experiments with flumes suggest that a river becomes braided when a threshold level of sediment load or slope is reached. On timescales long enough for the river to evolve, a sustained increase in sediment load will increase the bed slope of the river, so that a variation of slope is equivalent to a variation in sediment load, provided the amount of water carried by the river is unchanged. A threshold slope was experimentally determined to be 0.016 (ft/ft) for a stream with poorly sorted coarse sand. Any slope over this threshold created a braided stream, while any slope under the threshold created a meandering stream or – for very low slopes – a straight channel. Also important to channel development is the proportion of suspended load sediment to bed load. An increase in suspended sediment allowed for the deposition of fine erosion-resistant material on the inside of a curve, which accentuated the curve and in some instances, caused a river to shift from a braided to a meandering profile.
These experimental results were expressed in formulas relating the critical slope for braiding to the discharge and grain size. The higher the discharge, the lower the critical slope, while larger grain size yields a higher critical slope. However, these give only an incomplete picture, and numerical simulations have become increasingly important for understanding braided rivers.
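As a rough illustration of such a threshold relation, the sketch below encodes a critical slope that falls with discharge and rises with grain size, and classifies a reach as braided when its slope exceeds that threshold. The constants K, A, and B and the example reaches are illustrative assumptions chosen only to reproduce the qualitative behaviour described above, not the fitted values of any particular study.

```python
# A minimal sketch of a braiding threshold of the form described above: the
# critical slope decreases with discharge and increases with grain size.
# K, A, and B are illustrative placeholders, not fitted values from any study.

def critical_slope(discharge_m3s: float, grain_size_mm: float,
                   K: float = 0.004, A: float = 0.44, B: float = 1.0) -> float:
    """Critical slope S_c = K * Q**(-A) * D**B."""
    return K * discharge_m3s ** (-A) * grain_size_mm ** B

def channel_pattern(slope: float, discharge_m3s: float, grain_size_mm: float) -> str:
    """Classify a reach as braided if its slope exceeds the critical slope."""
    s_c = critical_slope(discharge_m3s, grain_size_mm)
    return "braided" if slope > s_c else "meandering or straight"

# A steep, coarse-bedded reach vs. a gentle sand-bed reach at the same discharge:
print(channel_pattern(slope=0.03, discharge_m3s=50.0, grain_size_mm=30.0))   # braided
print(channel_pattern(slope=0.0002, discharge_m3s=50.0, grain_size_mm=0.5))  # meandering or straight
```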
Aggradation (net deposition of sediments) favors braided rivers, but is not essential. For example, the Rakaia and Waitaki Rivers of New Zealand are not aggrading, due to retreating shorelines, but are nonetheless braided rivers. Variable discharge has also been identified as important in braided rivers, but this may be primarily due to the tendency for frequent floods to reduce bank vegetation and destabilize the banks, rather than because variable discharge is an essential part of braided river formation.
Numerical models suggest that bedload transport (movement of sediment particles by rolling or bouncing along the river bottom) is essential to the formation of braided rivers, with net erosion of sediments at channel divergences and net deposition at convergences. Braiding is reliably reproduced in simulations whenever there is little lateral constraint on flow and there is significant bedload transport. Braiding is not observed in simulations of the extreme cases of pure scour (no deposition taking place), which produces a dendritic system, or of cohesive sediments with no bedload transport. Meanders fully develop only when the river banks are sufficiently stabilized to limit lateral flow. An increase in suspended sediment relative to bedload allows the deposition of fine erosion-resistant material on the inside of a curve, which accentuates the curve and, in some instances, causes a river to shift from a braided to a meandering profile. A stream with cohesive banks that are resistant to erosion will form narrow, deep, meandering channels, whereas a stream with highly erodible banks will form wide, shallow channels, preventing the helical flow of the water necessary for meandering and resulting in the formation of braided channels.
Occurrences
Braided rivers occur in many environments, but are most common in wide valleys associated with mountainous regions or their piedmonts or in areas of coarse-grained sediments and limited growth of vegetation near the river banks. They are also found on fluvial (stream-dominated) alluvial fans. Extensive braided river systems are found in Alaska, Canada, New Zealand's South Island, and the Himalayas, which all contain young, rapidly eroding mountains.
The enormous Brahmaputra river in Northeastern India is a classic example of a braided river.
A notable example of a large braided stream in the contiguous United States is the Platte River in central and western Nebraska. Platte-type braided rivers are characterized by abundant linguoid (tonguelike) bar and dune deposits.
The Scott River of southern Alaska is the type for braided glacial outwash rivers characterized by longitudinal gravel bars and by sand lenses deposited in scours from times of high water.
The Donjek River of the Yukon Basin is the type for braided rivers showing repeated cycles of deposition, with finer sediments towards the top of each cycle.
The Bijou Creek of Colorado is the type for braided rivers characterized by laminated sand deposits emplaced during floods.
A portion of the lower Yellow River takes a braided form.
The Sewanee Conglomerate, a Pennsylvanian coarse sandstone and conglomerate unit present on the Cumberland Plateau near the University of the South, may have been deposited by an ancient braided and meandering river that once existed in the eastern United States. Others have interpreted the depositional environment for this unit as a tidal delta.
The Tagliamento of Italy is an example of a gravel bed braided river.
The Piave, also in Italy, is an example of a river that is transitioning from braided to meandering due to human interventions.
The Waimakariri River of New Zealand is an example of a braided river with an extensive floodplain.
| Physical sciences | Hydrology | Earth science |
162714 | https://en.wikipedia.org/wiki/Dust | Dust | Dust is made of fine particles of solid matter. On Earth, it generally consists of particles in the atmosphere that come from various sources such as soil lifted by wind (an aeolian process), volcanic eruptions, and pollution.
Dust in homes is composed of about 20–50% dead skin cells. The rest of household dust, as well as dust in offices and other built environments, is composed of small amounts of plant pollen, human hairs, animal fur, textile fibers, paper fibers, minerals from outdoor soil, burnt meteorite particles, and many other materials which may be found in the local environment.
Atmospheric
Atmospheric or wind-borne fugitive dust, also known as aeolian dust, comes from dry regions where high-speed winds can remove mostly silt-sized material, abrading susceptible surfaces. This includes areas where grazing, ploughing, vehicle use, and other human behaviors have further destabilized the land, though not all source areas have been largely affected by anthropogenic impacts. Dust-producing surfaces cover one-third of the global land area. These are made up of hyper-arid regions like the Sahara, which covers 0.9 billion hectares, and drylands, which occupy 5.2 billion hectares.
Dust in the atmosphere is produced by saltation and abrasive sandblasting of sand-sized grains, and it is transported through the troposphere. This airborne dust is considered an aerosol, and once in the atmosphere, it can produce strong local radiative forcing. Saharan dust, in particular, can be transported and deposited as far as the Caribbean and the Amazon basin and may affect air temperature, cause ocean cooling, and alter rainfall amounts.
Middle East
Dust in the Middle East has been a historic phenomenon. Recently, because of climate change and the escalating process of desertification, the problem has worsened dramatically. Because it is a multi-factor phenomenon, there is not yet a clear consensus on the sources of the problem or its potential solutions.
Iran
The dust systems in Iraq and Iran are migratory, moving from west to east or from east to west in the spring, and reach their highest intensity, concentration, and extent by mid-summer. They are caused by low humidity, a dry environment, low rainfall, and annual droughts. Owing to decreased rainfall in areas such as Iraq and Syria, most of the dust in Iran also originates from the regions of Iraq, Syria, and Jordan.
In addition to these foreign sources, there are areas inside the country that have either become new dust sources in recent years or are older sources whose extent has increased. These include parts of southern Tehran and the south of Alborz province – which in the past were plains, riverbeds, seasonal lakes, and seasonal reservoirs – and the Gavkhoni wetland of Isfahan province, all of which have become dry and prone to producing dust. Other areas that have become dust centers include Qom province, particularly the Qom salt lake and its surroundings, and Lake Urmia, where strong winds and the drying and shrinking of the lake have exposed formerly submerged areas of its bed to wind erosion.
In Iran, the dust directly affects more than 5 million people and has recently become a serious issue for the government. In Khuzestan province, it has led to a severe increase in air pollution, with the amount of pollutants in the air surpassing 50 times the normal level several times in a year. Recently, initiatives such as Project-Dust have been established to study dust in the Middle East directly.
Continued drought has caused water scarcity and the drying up of some wetlands and lakes, such as Hamon and Lake Urmia, turning them into centers of dust.
The Director General of the Office of Desert Affairs of Iran's Natural Resources and Watershed Organization stated that, according to data from studies in 2018, 30 million hectares of land in the country are affected by wind erosion, and that 14 million hectares of this area are considered focal points of wind erosion, causing serious damage to infrastructure.
Roads
Dust kicked up by vehicles traveling on roads is a significant source of harmful air pollution. Road dust consists of deposits of vehicle and industrial exhaust gas, particles from tire and brake wear, dust from paved roads or potholes, and dust from construction sites. Road dust is a significant contributor to the generation and release of particulates into the atmosphere. Control of road dust is a significant challenge in urban areas, and also in other locations with high levels of vehicular traffic upon unsealed roads, such as mines and landfills.
Road dust may be suppressed by mechanical methods such as street sweepers and vehicles equipped with vacuum cleaners, or with vegetable oil sprays or water sprayers. Calcium chloride can also be used as a suppressant. Improvements in automotive engineering have reduced the amount of PM10s produced by road traffic; as a result, the proportion representing re-suspension of existing particulates has increased.
Coal
Coal dust is responsible for the respiratory disease known as pneumoconiosis, including coal worker's pneumoconiosis disease that occurs among coal miners. The danger of coal dust resulted in environmental law regulating workplace air quality in some jurisdictions. In addition, if enough coal dust is dispersed within the air in a given area, in very rare circumstances, it can cause a dust explosion. These circumstances are typically within confined spaces.
Control
Atmospheric
Most governmental environmental protection agencies, including the United States Environmental Protection Agency (EPA), mandate that facilities that generate fugitive dust minimize or mitigate the production of dust in their operations. The most frequent dust control violations occur at new residential housing developments in urban areas. United States federal law requires that construction sites obtain planning permission to conduct earth moving and clearing, so that plans to control dust emissions while the work is being carried out are specified. Control measures include such simple practices as spraying construction and demolition sites with water, and preventing the tracking of dust onto adjacent roads.
Some of the issues include:
Reducing dust related health risks that include allergic reactions, pneumonia and asthmatic attacks.
Improving visibility and road traffic safety.
Providing cleaner air, cleaner vehicles and cleaner homes and promoting better health.
Improving agricultural productivity.
Reducing vehicle maintenance costs by lowering the levels of dust that clog filters, bearings and machinery.
Reducing driver fatigue, maintenance on car suspension systems and improving fuel economy in automobiles.
Increasing cumulative effects—each new application builds on previous progress.
US federal laws require dust control on sources such as vacant lots, unpaved parking lots, and dirt roads. Dust in such places may be suppressed by mechanical methods, including paving or laying down gravel, or stabilizing the surface with water, vegetable oils or other dust suppressants, or by using water misters to suppress dust that is already airborne.
Domestic
Dust control is the suppression of solid particles with diameters less than 500 micrometers (i.e. half a millimeter). Dust poses a health risk to children, older people, and those with respiratory diseases.
House dust can become airborne easily. Care is required when removing dust to avoid causing the dust to become airborne. A feather duster tends to agitate the dust so it lands elsewhere.
Certified HEPA filters (tested to MIL-STD-282) can effectively trap 99.97% of dust particles at 0.3 micrometers. Not all HEPA filters can effectively stop dust; while vacuum cleaners with HEPA filters, water filtration, or cyclones may filter more effectively than those without, they may still exhaust millions of particles per cubic foot of air circulated. Central vacuum cleaners can be effective in removing dust, especially if they are exhausted directly to the outdoors.
Air filters differ greatly in their effectiveness. Laser particle counters are an effective way to measure filter effectiveness; medical-grade instruments can detect particles as small as 0.3 micrometers. Several options are available for testing dust in the air. Pre-weighed filters and matched-weight filters made from polyvinyl chloride or mixed cellulose ester are suitable for sampling respirable dust (less than 10 micrometers in diameter).
Dust resistant surfaces
A dust-resistant surface is one designed or treated, during manufacturing or through a repair process, to prevent dust contamination or damage. Reducing the tackiness of a synthetic layer or covering can protect surfaces and release small particles that could otherwise have remained attached. A panel, container, or enclosure with seams may feature strengthened structural rigidity or sealant applied to vulnerable edges and joins.
Outer space
Cosmic dust is widely present in outer space, where gas and dust clouds are the primary precursors for planetary systems. The zodiacal light, as seen in a dark night sky, is produced by sunlight reflected from particles of dust in orbit around the Sun. The tails of comets are produced by emissions of dust and ionized gas from the body of the comet. Dust also covers solid planetary bodies, and vast dust storms can occur on Mars which cover almost the entire planet. Interstellar dust is found between the stars, and high concentrations produce diffuse nebulae and reflection nebulae.
Dust is widely present in the galaxy. Ambient radiation heats dust and re-emits radiation into the microwave band, which may distort the cosmic microwave background power spectrum. Dust in this regime has a complicated emission spectrum and includes both thermal dust emission and spinning dust emission.
Dust samples returned from outer space have provided information about conditions of the early solar system. Several spacecraft have sought to gather samples of dust and other materials. Among these craft was Stardust, which flew past 81P/Wild in 2004, and returned a capsule of the comet's remains to Earth. In 2010 the Japanese Hayabusa spacecraft returned samples of dust from the surface of an asteroid.
Atmospheric gallery
Dust mites
House dust mites are present indoors wherever humans live. Positive tests for dust mite allergies are extremely common among people with asthma. Dust mites are microscopic arachnids whose primary food is dead human skin cells, but they do not live on living people. They and their feces and other allergens are major constituents of house dust, but because they are so heavy they are not suspended for long in the air. They are generally found on the floor and other surfaces until disturbed (by walking, for example). It could take between twenty minutes and two hours for dust mites to settle back out of the air.
Dust mites are a nesting species that prefer a dark, warm, and humid climate. They flourish in mattresses, bedding, upholstered furniture, and carpets. Their feces include enzymes that are released upon contact with a moist surface, which can happen when a person inhales, and these enzymes can kill cells within the human body. House dust mites did not become a problem until humans began to use textiles, such as western style blankets and clothing.
| Physical sciences | Storms | Earth science |
162717 | https://en.wikipedia.org/wiki/Embryology | Embryology | Embryology (from Greek ἔμβρυον, embryon, "the unborn, embryo"; and -λογία, -logia) is the branch of animal biology that studies the prenatal development of gametes (sex cells), fertilization, and development of embryos and fetuses. Additionally, embryology encompasses the study of congenital disorders that occur before birth, known as teratology.
An early theory of embryology, advanced by Marcello Malpighi, was preformationism: the idea that organisms develop from pre-existing miniature versions of themselves. Aristotle had proposed the theory that is now accepted, epigenesis: the idea that organisms develop from a seed or egg in a sequence of steps. Modern embryology developed from the work of Karl Ernst von Baer, though accurate observations had been made in Italy by anatomists such as Aldrovandi and Leonardo da Vinci in the Renaissance.
Comparative embryology
Preformationism and epigenesis
As recently as the 18th century, the prevailing notion in western human embryology was preformation: the idea that semen contains an embryo – a preformed, miniature infant, or homunculus – that simply becomes larger during development.
The competing explanation of embryonic development was epigenesis, originally proposed 2,000 years earlier by Aristotle. Much early embryology came from the work of the Italian anatomists Aldrovandi, Aranzio, Leonardo da Vinci, Marcello Malpighi, Gabriele Falloppio, Girolamo Cardano, Emilio Parisano, Fortunio Liceti, Stefano Lorenzini, Spallanzani, Enrico Sertoli, and Mauro Ruscóni. According to epigenesis, the form of an animal emerges gradually from a relatively formless egg. As microscopy improved during the 19th century, biologists could see that embryos took shape in a series of progressive steps, and epigenesis displaced preformation as the favored explanation among embryologists.
Cleavage
Cleavage comprises the very first steps in the development of an embryo. It refers to the many mitotic divisions that occur after the egg is fertilized by the sperm. The way in which the cells divide is specific to certain types of animals and may take many forms.
Holoblastic
Holoblastic cleavage is the complete division of cells. Holoblastic cleavage can be radial (see: Radial cleavage), spiral (see: Spiral cleavage), bilateral (see: Bilateral cleavage), or rotational (see: Rotational cleavage). In holoblastic cleavage, the entire egg will divide and become the embryo, whereas in meroblastic cleavage, some cells will become the embryo and others will be the yolk sac.
Meroblastic
Meroblastic cleavage is the incomplete division of cells. The division furrow does not extend into the yolky region, because the yolk impedes membrane formation, and this causes incomplete separation of the cells. Meroblastic cleavage can be bilateral (see: Bilateral cleavage), discoidal (see: Discoidal cleavage), or centrolecithal (see: Centrolecithal).
Basal phyla
Animals belonging to the basal phyla have holoblastic radial cleavage, which results in radial symmetry (see: Symmetry in biology). During cleavage, there is a central axis about which all divisions rotate. The basal phyla also have only one or two embryonic cell layers, compared to the three in bilateral animals (endoderm, mesoderm, and ectoderm).
Bilaterians
In bilateral animals, cleavage can be either holoblastic or meroblastic depending on the species. During gastrulation, the blastula develops in one of two ways that divide the whole animal kingdom into two halves (see: Embryological origins of the mouth and anus). If the first pore in the blastula, the blastopore, becomes the mouth of the animal, it is a protostome; if the blastopore becomes the anus, then it is a deuterostome. The protostomes include most invertebrate animals, such as insects, worms and molluscs, while the deuterostomes include a few invertebrates such as the echinoderms (starfish and relatives) and all the vertebrates. In due course, the blastula changes into a more differentiated structure called the gastrula. Soon after the gastrula is formed, three distinct layers of cells (the germ layers) develop, from which all the bodily organs and tissues then form.
Germ layers
The innermost layer, or endoderm, gives rise to the digestive organs, the gills, lungs or swim bladder if present, and kidneys or nephridia.
The middle layer, or mesoderm, gives rise to the muscles, skeleton if any, and blood system.
The outer layer of cells, or ectoderm, gives rise to the nervous system, including the brain, and skin or carapace and hair, bristles, or scales.
Drosophila melanogaster (fruit fly)
Drosophila have been used as a developmental model for many years. The studies that have been conducted have discovered many useful aspects of development that apply not only to fruit flies but to other species as well.
Outlined below is the process that leads to cell and tissue differentiation.
Maternal-effect genes help to define the anterior-posterior axis using Bicoid and Nanos.
Gap genes establish 3 broad segments of the embryo.
Pair-rule genes define 7 segments of the embryo within the confines of the second broad segment that was defined by the gap genes.
Segment-polarity genes define another 7 segments by dividing each of the pre-existing 7 segments into anterior and posterior halves using a gradient of Hedgehog and Wnt.
Homeotic (Hox) genes use the 14 segments as pinpoints for specific types of cell differentiation and the histological developments that correspond to each cell type.
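As a compact restatement of this hierarchy, the Python sketch below lists each level of the cascade alongside what it defines. The segment counts come from the outline above; the example genes other than Bicoid, Nanos, and Hedgehog (hunchback, Krüppel, even-skipped, fushi tarazu, Antennapedia, Ultrabithorax, and wingless standing in for the Drosophila Wnt) are standard examples added here purely for illustration.

```python
# Schematic summary of the Drosophila segmentation hierarchy outlined above.
# Segment counts follow the outline; gene examples beyond bicoid, nanos, and
# hedgehog are standard textbook examples added for illustration.

segmentation_cascade = [
    ("maternal-effect", ["bicoid", "nanos"], "anterior-posterior axis"),
    ("gap", ["hunchback", "Krüppel"], "3 broad regions of the embryo"),
    ("pair-rule", ["even-skipped", "fushi tarazu"], "7 segments"),
    ("segment-polarity", ["hedgehog", "wingless"], "14 segments (anterior and posterior halves)"),
    ("homeotic (Hox)", ["Antennapedia", "Ultrabithorax"], "identity of each of the 14 segments"),
]

# Print each level and how it refines the pattern laid down by the previous one.
for level, example_genes, defines in segmentation_cascade:
    print(f"{level:>18}: {defines} (e.g. {', '.join(example_genes)})")
```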
Humans
Humans are bilateral animals that have holoblastic rotational cleavage. Humans are also deuterostomes. In regard to humans, the term embryo refers to the ball of dividing cells from the moment the zygote implants itself in the uterus wall until the end of the eighth week after conception. Beyond the eighth week after conception (tenth week of pregnancy), the developing human is then called a fetus.
Evolutionary embryology
Evolutionary embryology is the expansion of comparative embryology by the ideas of Charles Darwin. Similarly to Karl Ernst von Baer's principles that explained why many species often appear similar to one another in early developmental stages, Darwin argued that the relationship between groups can be determined based upon common embryonic and larval structures.
Von Baer's principles
The general features appear earlier in development than do the specialized features.
More specialized characters develop from the more general ones.
The embryo of a given species never resembles the adult form of a lower one.
The embryo of a given species does resemble the embryonic form of a lower one.
Using Darwin's theory, evolutionary embryologists have since been able to distinguish between homologous and analogous structures in different species. Homologous structures are those whose similarities derive from a common ancestor, such as the human arm and the bat wing. Analogous structures are those that appear similar but have no common ancestral derivation.
Origins of modern embryology
Until the birth of modern embryology through the observation of the mammalian ovum by Karl Ernst von Baer in 1827, there was no clear scientific understanding of embryology, although, as discussed later in this article, some cultures had a fairly refined understanding of some of its principles. Only in the late 1950s, when ultrasound was first used for uterine scanning, did the true developmental chronology of the human fetus become available. Karl Ernst von Baer, along with Heinz Christian Pander, also proposed the germ layer theory of development, which helped to explain how the embryo develops in progressive steps. Part of this explanation used his four principles to explore why embryos of many species often appear similar to one another in early developmental stages.
Modern embryology research
Embryology is central to evolutionary developmental biology ("evo-devo"), which studies the genetic control of the development process (e.g. morphogens), its link to cell signalling, its roles in certain diseases and mutations, and its links to stem cell research. Embryology is also key to gestational surrogacy, in which the sperm of the intended father and the egg of the intended mother are fused in a laboratory to form an embryo. This embryo is then transferred to the surrogate, who carries the child to term.
Medical embryology
Medical embryology is used widely to detect abnormalities before birth. About 2-5% of babies are born with an observable abnormality, and medical embryology explores the different ways and stages in which these abnormalities appear. Genetically derived abnormalities are referred to as malformations. When there are multiple malformations, this is considered a syndrome. When abnormalities arise due to outside contributors, these are disruptions. The outside contributors causing disruptions are known as teratogens. Common teratogens are alcohol, retinoic acid, ionizing radiation, and hyperthermic stress.
Vertebrate and invertebrate embryology
Many principles of embryology apply to invertebrates as well as to vertebrates. Therefore, the study of invertebrate embryology has advanced the study of vertebrate embryology. However, there are many differences as well. For example, numerous invertebrate species release a larva before development is complete; at the end of the larval period, an animal for the first time comes to resemble an adult similar to its parent or parents. Although invertebrate embryology is similar in some ways for different invertebrate animals, there are also countless variations. For instance, while spiders proceed directly from egg to adult form, many insects develop through at least one larval stage.
For decades, a number of so-called normal staging tables were produced for the embryology of particular species, mainly focussing on external developmental characters. As variation in developmental progress makes comparison among species difficult, a character-based Standard Event System was developed, which documents these differences and allows for phylogenetic comparisons among species.
Birth of developmental biology
After the 1950s, with the elucidation of the helical structure of DNA and increasing knowledge in the field of molecular biology, developmental biology emerged as a field of study that attempts to correlate genes with morphological change, and so tries to determine which genes are responsible for each morphological change that takes place in an embryo, and how these genes are regulated.
As of today, human embryology is taught as a cornerstone subject in medical schools, as well as in biology and zoology programs at both an undergraduate and graduate level.
History
Ancient Egypt
Knowledge of the placenta goes back at least to ancient Egypt, where it was viewed as the seat of the soul. There was an Egyptian official with the title Opener of the King's Placenta. An Egyptian text from the time of Akhenaten said that a human originates from the egg that grows in the woman.
Ancient Asia
Various interpretations of embryology have existed in Asia throughout history. Included in the ancient Indian tradition of Ayurveda is garbhasharir or the study of embryology, which refers to conceptions of embryology from antiquity. Descriptions of the amniotic sac appear in the Bhagavad Gita, Bhagavata Purana, and the Sushruta Samhita. One of the Upanishads known as the Garbhopanisaḍ states that the embryo is "like water in the first night, in seven nights it is like a bubble, at the end of half a month it becomes a ball. At the end of a month it is hardened, in two months the head is formed". In Indian literature, the start of consciousness in an embryo is not clearly defined. Some scriptures state that it is active at conception, while others suggest that consciousness begins in the seventh to ninth month of fetal development. Many South Asian traditions, including some Tibetan traditions, believe that the fetus has conscious experiences towards the end of its development.
The development of the human embryo is mentioned in the ancient Buddhist text of Garbhāvakrāntisūtra (1st–4th century CE), which describes a human gestation period of 38 weeks. The text likens the embryo in the first three weeks to the liquid part of yogurt, and describes the differentiation of body parts such as the arms, legs, feet, and head in the third month.
Ancient Greece
Pre-Socratic philosophers
Many pre-Socratic philosophers are recorded as having opinions on different aspects of embryology, although there is some bias in how their views are described by later authors such as Aristotle. According to Empedocles (whose views are described by Plutarch in the 1st century AD), who lived in the 5th century BC, the embryo derives and receives its blood from four vessels in all: two arteries and two veins. He also held that sinews originate from equal mixtures of earth and air. He further said that males begin to form within the first month and are finished within fifty days. Asclepiades agreed that males are formed within fifty days, but he believed that females took a full two months to be fully knit. One observation, variously attributed to either Anaxagoras of Clazomenae or Alcmaeon of Croton, is that the milk produced by mammals is analogous to the white of a fowl's egg. Diogenes of Apollonia said that a mass of flesh forms first, followed only then by the development of bone and nerves. Diogenes recognized that the placenta was a nutritional source for the growing fetus. He also said that the development of males took four months, but that the development of females took five months; he did not think the embryo was alive. Alcmaeon also made some contributions, and he is the first person reported to have practiced dissection. One idea, first stated by Parmenides, was that there was a connection between the right side of the body and the male embryo, and between the left side of the body and the female embryo. According to Democritus and Epicurus, the fetus is nourished at the mouth inside the mother, and there are comparable teats within the mother's body that supply this nourishment to the fetus. Discussion of various views regarding how long it takes for specific parts of the embryo to form appears in an anonymous document known as the Nutriment.
Ancient Greeks discussed whether only the male had a seed which developed into the embryo within the female womb, or whether the male and the female each had a seed that made a contribution to the developing embryo. The difficulty that one-seed theorists confronted was explaining the maternal resemblance of the progeny; one issue that two-seed theorists confronted was why the female seed was needed if the male already had one. One common solution to the latter problem was to assert that the female seed was either inferior or inactive. Another question was the origin of the seed. The encephalomyelogenic theory stated that the seed originated from the brain and/or the bone marrow. Later came pangenesis, which asserted that the seed was drawn from the whole body, in order to explain the general bodily resemblance of the offspring. Later still, the hematogenous theory developed, which asserted that the seed was drawn from the blood. A third question was how, or in what form, the progeny existed in the seed prior to developing into an embryo and a fetus. According to preformationists, the body of the progeny already existed in a pre-existing but undeveloped form in the seed. Three variants of preformationism were homoiomerous, anhomoiomerous, and homuncular preformationism. According to the first, the homoiomerous parts of the body (e.g. humors, bone) already exist pre-formed in the seed. The second held that it was the anhomoiomerous parts that were pre-formed. Finally, the third view held that the whole organism already existed in the seed as a unified organic thing. Preformationism was not the only view: according to epigenesists, the parts of the embryo form successively after conception takes place.
Hippocrates
Some of the most well-known early ideas on embryology come from Hippocrates and the Hippocratic Corpus, where discussion of the embryo is usually given in the context of obstetrics (pregnancy and childbirth). Some of the most relevant Hippocratic texts on embryology include the Regimen on Acute Diseases, On Semen, and On the Development of the Child. Hippocrates claimed that the development of the embryo is put into motion by fire and that nourishment comes from food and breath introduced into the mother. An outer layer of the embryo solidifies, and the fire within consumes humidity, which makes way for the development of bone and nerve. The fire in the innermost part becomes the belly, and air channels develop in order to route nourishment to it. The enclosed fire also helps form veins and allows for circulation. In this description, Hippocrates aims at describing the causes of development rather than describing what develops. Hippocrates also developed views similar to preformationism, claiming that all parts of the embryo develop simultaneously. He further believed that maternal blood nourishes the embryo, flowing and coagulating to help form the flesh of the fetus; this idea derived from the observation that menstruation ceases during pregnancy, which Hippocrates took to imply that the blood was being redirected to fetal development. Hippocrates also claimed that the flesh differentiates into the different organs of the body, seeing as analogous an experiment in which a mixture of substances placed in water separates into different layers. Comparing the seed to the embryo, Hippocrates further compared the stalk to the umbilical cord.
Aristotle
Some embryological discussion appears in the writings of Aristotle's predecessor Plato, especially in his Timaeus. One of his views was that the bone marrow acted as the seedbed, and that the soul itself was the seed out of which the embryo developed, though he did not explain how this development proceeded. Scholars also continue to debate the views he held on various other aspects of embryology. However, a much more voluminous discussion of the subject comes from the writings of Aristotle, especially in his On the Generation of Animals. Some ideas related to embryology also appear in his History of Animals, On the Parts of Animals, On Respiration, and On the Motion of Animals. We know that Aristotle, and most likely his predecessors as well, studied embryology by examining developing embryos taken from animals, as well as aborted and miscarried human embryos. Aristotle believed that the female supplied the matter for the development of the embryo, formed from the menstrual blood, whereas the semen that comes from the male shapes that matter. Aristotle's belief that both the male and female made a contribution to the actual fetus goes against some prior beliefs. According to Aeschylus and some Egyptian traditions, the fetus develops solely from the male contribution, the female womb simply nourishing the growing fetus. On the other hand, the Melanesians held that the fetus is solely a product of the female contribution. Aristotle did not believe there were any external influences on the development of the embryo. Against Hippocrates, Aristotle believed that new parts of the body developed over time rather than all forming immediately and developing from then on. He also considered whether each new part derives from a previously formed part or develops independently of any previously formed part; on the basis that different parts of the body do not resemble each other, he decided in favor of the latter view. He also described the development of fetal parts in terms of mechanical and automatic processes. In terms of the development of the embryo, he says it begins in a liquid-like state as the material secreted by the female combines with the semen of the male, and then the surface begins to solidify as it interacts with processes of heating and cooling. The first part of the body to differentiate is the heart, which Aristotle and many of his contemporaries believed was the location of reason and thinking. Aristotle claimed that vessels join to the uterus in order to supply nourishment to the developing fetus. Some of the most solid parts of the fetus cool and, as they lose moisture to heat, turn into nails, horns, hoofs, beaks, and so on; internal heat dries away moisture to form sinews and bones, and the skin results from the drying of the flesh. Aristotle also describes the development of birds in eggs at length, and further described embryonic development in dolphins, some sharks, and many other animals. Aristotle wrote more on embryology than any other pre-modern author, and his influence on the subsequent discussion of the subject for many centuries was immense: he introduced forms of classification, a comparative method drawing on various animals, discussion of the development of sexual characteristics, and comparison of embryonic development to mechanistic processes, among other contributions.
Later Greek embryology
Reportedly, some Stoics claimed that most parts of the body formed at once during embryological development. Some Epicureans claimed that the fetus is nourished by either the amniotic fluid or the blood, and that both male and female supply material to the development of the fetus. According to the writings of Tertullian, Herophilus (late 4th to early 3rd century BC) described the ovaries and fallopian tubes (though not going beyond what Aristotle had already described) and also dissected some embryos. One advance Herophilus made, against the conceptions of other individuals such as Aristotle, was holding that the brain, rather than the heart, was the center of intellect. Though not a part of the Greek tradition, in Job 10 the formation of the embryo is likened to the curdling of milk into cheese, an analogy also described by Aristotle. Whereas Needham sees this statement in Job as part of the Aristotelian tradition, others see it as evidence that the milk analogy predates the Aristotelian Greek tradition and originates in Jewish circles. In addition, the Wisdom of Solomon (7:2) also has the embryo formed from menstrual blood. Soranus of Ephesus also wrote texts on embryology which remained in use for a long time. Some rabbinic texts discuss the embryological views of a female Greek writer named Cleopatra, a contemporary of Galen and Soranus, who was said to have claimed that the male fetus is complete in 41 days whereas the female fetus is complete in 81 days. Various other texts of less importance also appear and describe various aspects of embryology, though without making much progress beyond Aristotle. Plutarch has a chapter in one of his works titled "Whether the hen or the egg came first?" Discussion of the embryological tradition also appears in many Neoplatonic traditions.
Next to Aristotle, the most impactful and important Greek writer on biology was Galen of Pergamum, whose works were transmitted throughout the Middle Ages. Galen discusses his understanding of embryology in two of his texts, On the Natural Faculties and On the Formation of the Foetus. There is an additional text spuriously attributed to Galen known as On the Question of Whether the Embryo is an Animal. Galen described embryological development in four stages. In the first stage, the semen predominates. In the second stage, the embryo is filled with blood. In the third stage, the main outlines of the organs have developed but various other parts remain undeveloped. In the fourth stage, formation is complete and has reached a point where we can call it a child. Galen described processes that furthered the development of the embryo, such as warming, drying, cooling, and combinations thereof. As this development plays out, the form of life of the embryo also moves from that of a plant to that of an animal (where the analogy between the root and the umbilical cord is made). Galen claimed that the embryo forms from menstrual blood; his experimental analogy was that when the vein of an animal is cut and the blood allowed to flow into mildly heated water, a sort of coagulation can be observed. He gave detailed descriptions of the position of the umbilical cord relative to other veins.
Patristics
The question of embryology is discussed by a number of early Christian writers, largely in terms of theological questions such as whether the fetus has value and when it begins to have value. (A number of Christian authors, such as Jacob of Serugh, also continued the classical discussions on the description of the development of the embryo. Passing reference to the embryo also appears in the eighth hymn of Ephrem the Syrian's Paradise Hymns.) Many patristic treatments of embryology continued in the stream of the Greek tradition. The earlier Greek and Roman view that the fetus lacked such value was reversed, and all pre-natal infanticide was condemned. Tertullian held that the soul was present from the moment of conception. The Quinisext Council concluded that "we pay no attention to the subtle division as to whether the foetus is formed or unformed". In this period the Roman practice of child exposure, in which unwanted newborn children, usually females, were abandoned by their parents to die, also came to an end. Other more liberal traditions followed Augustine, who instead held that the animation of life began on the 40th day in males and the 80th day in females, but not before. Before the 40th day for males and the 80th day for females, the embryo was referred to as the embryo informatus; after this period was reached, it was referred to as the embryo formatus. The notion originating with the Greeks that the male embryo developed faster persisted in various authors until it was experimentally disproven by Andreas Ottomar Goelicke in 1723.
Patristic literature from Nestorian, Monophysite, and Chalcedonian backgrounds discusses and chooses among three different conceptions of the relation between the soul and the embryo. According to one view, the soul pre-exists and enters the embryo at the moment of conception (prohyparxis). According to a second view, the soul comes into existence at the moment of conception (synhyparxis). In a third view, the soul enters the body after it has been formed (methyparxis). The first option was proposed by Origen, but was increasingly rejected after the fourth century; the other two options were equally accepted after this point. The second position appears to have been proposed as a response to Origen's notion of a pre-existing soul. After the sixth century, the second position was also increasingly seen as Origenist and so rejected on those grounds. The writings of Origen were condemned during the Second Origenist Crisis in 553. Those defending prohyparxis usually appealed to the Platonic notion of an eternally moving soul. Those defending the second position also appealed to Plato but rejected his notion of the eternality of the soul. Finally, those holding the third position appealed both to Aristotle and to scripture. The Aristotelian notion was of a progression in the development of the soul, from an initial plant-like soul, to a sensitive soul, found in animals, which allows for movement and perception, and finally to a rational soul, found only in the fully formed human. Furthermore, some scriptural texts were seen as implying that the soul was formed temporally after the formation of the body (namely Genesis 2:7; Exodus 21:22-23; Zachariah 12:1). In the De hominis opificio of Gregory of Nyssa, Aristotle's tripartite notion of the soul was accepted. Gregory also held that the rational soul was present at conception. Theodoret argued on the basis of Genesis 2:7 and Exodus 21:22 that the embryo is only ensouled after the body is fully formed. Based on Exodus 21:22 and Zachariah 12:1, the Monophysite Philoxenus of Mabbug claimed that the soul was created in the body forty days after conception. In his De opificio mundi, the Christian philosopher John Philoponus claimed that the soul is formed after the body. Later still, the author Leontius held that the body and soul were created simultaneously, though it is also possible he held that the soul pre-existed the body.
Some Monophysites and Chalcedonians seem to have been compelled to accept synhyparxis in the case of Jesus because of their view that the incarnation of Christ resulted in both one hypostasis and one nature, whereas some Nestorians claimed that Christ, like us, must have had his soul formed after the formation of his body because, per Hebrews 4:15, Christ was like us in all ways but sin. (On the other hand, Leontius dismissed the relevance of Hebrews 4:15 on the basis that Christ differed from us not only in sinlessness but also in conception without semen, making synhyparxis another of Christ's supernatural feats.) The Nestorians felt comfortable holding this view under their belief that the human nature of Jesus was separate from the divine hypostasis. Some Nestorians still wondered, however, whether the body united with the soul at the moment the soul was created or only later. The Syriac author Babai argued for the former on the basis that the latter was hardly better than adoptionism. Maximus the Confessor ridiculed the Aristotelian notion of the development of the soul on the basis that it would make humans the parents of both plants and animals; he held to synhyparxis and regarded the other two positions as incorrect extremes. After the 7th century, Chalcedonian discussion of embryology is slight, and the few works that touch on the topic support synhyparxis, but debate among other groups remained lively, still divided along similar sectarian lines. The patriarch Timothy I argued that the Word first united with the body, and only later with the soul; he cited John 1:1, claiming on its basis that the Word became flesh first, not a human being first. Jacob of Edessa rejected prohyparxis because Origen had defended it, and methyparxis because he believed it made the soul ontologically inferior, as something made only for the body. Moses Bar Kepha then claimed, for Christological reasons as a Monophysite, that only synhyparxis was acceptable. He claimed that Genesis 2:7 has no temporal sequence and that Exodus 21:22 regards the formation of the body, not the soul, and so is not relevant. To argue against methyparxis, he reasoned that body and soul are both present at death and, because what is at the end must correspond to what is at the beginning, conception must also involve body and soul together.
Embryology in Jewish tradition
Many Jewish authors also discussed notions of embryology, especially as they appear in the Talmud. Much of the embryological data in the Talmud is part of discussions related to the impurity of the mother after childbirth. The embryo was described as the peri habbetten (fruit of the body) and it developed through various stages: (1) golem (formless and rolled-up) (2) shefir meruqqam (embroidered foetus) (3) ubbar (something carried) (4) walad (child) (5) walad shel qayama (viable child) (6) ben she-kallu khadashaw (child whose months have been completed).
Some mystical notions regarding embryology appear in the Sefer Yetzirah. The text in the Book of Job likening the formation of the fetus to the curdling of milk into cheese was cited in the Babylonian Talmud and, in even greater detail, in the Midrash: "When the womb of the woman is full of retained blood which then comes forth to the area of her menstruation, by the will of the Lord comes a drop of white-matter which falls into it: at once the embryo is created. [This can be] compared to milk being put in a vessel: if you add to it some lab-ferment [drug or herb], it coagulates and stands still; if not, the milk remains liquid." The Talmudic sages held that two seeds participated in the formation of the embryo, one from the male and one from the female, and that their relative proportions determine whether the embryo develops into a male or a female.
In the Tractate Nidda, the mother was said to provide a "red-seed" which allows for the development of skin, flesh, hair, and the black part of the eye (pupil), whereas the father provides the "white-seed" which forms the bones, nerves, brain, and the white part of the eye. And finally, God himself was thought to provide the spirit and soul, facial expressions, capacity for hearing and vision, movement, comprehension, and intelligence. Not all strands of Jewish tradition accepted that both the male and female contributed parts to the formation of the fetus.
The 13th-century medieval commentator Nachmanides, for example, rejected the female contribution. In Tractate Hullin of the Talmud, whether the organs of the child more closely resemble those of the mother or of the father is said to depend on which parent contributed more matter to the embryo. Rabbi Ishmael and other sages are said to have disagreed on one matter: they agreed that the male embryo was completed on the 41st day, but disagreed on whether this was also the case for the female embryo. Some believed that the female embryo was completed later, whereas others held that both were finished at the same time. The only ancient Jewish authors who associated abortion with homicide were Josephus and Philo of Alexandria in the 1st century. In the Talmud, a child is granted humanness at birth, while other rabbinical texts place it at the 13th postnatal day.
Some Talmudic texts discuss magical influences on the development of the embryo, such as one text which claims that a person who sleeps on a bed oriented north–south will have a male child. According to Nachmanides, a child born of a cold drop of semen will be foolish, one born from a warm drop will be passionate and irascible, and one born from a drop of medium temperature will be clever and level-headed. Some Talmudic discussions follow Hippocratic claims that a child born in the eighth month could not survive, whereas others follow Aristotle in claiming that such a child could sometimes survive. One text even says that survival is possible in the seventh month, but not the eighth. Talmudic embryology in various respects follows Greek discourses, especially those of Hippocrates and Aristotle, but in other areas makes novel statements on the subject.
Judaism allows assisted reproduction, such as IVF embryo transfer and maternal surrogacy, when the spermatozoon and oocyte originate from the respective husband and wife.
Embryology in the Islamic tradition
Passing reference to embryological notions also appears in the Qur'an (22:5), where the development of the embryo proceeds in four stages: from a drop, to a clinging clot, to a partially developed stage, to a fully developed child. The notion of clay turning into flesh is seen by some as analogous to a text by Theodoret that describes the same process. The four stages of development in the Qur'an are similar to the four stages of embryological development described by Galen. In the early 6th century, Sergius of Reshaina devoted himself to the translation of Greek medical texts into Syriac and became the most important figure in this process; his translations included the relevant embryological texts of Galen. Anushirvan founded a medical school in the southern Mesopotamian city of Gundeshapur, known as the Academy of Gondishapur, which also acted as a medium for the transmission, reception, and development of notions from Greek medicine. These factors helped Greek notions of embryology, such as those found in Galen, to enter the Arabian milieu. Very similar embryonic descriptions also appear in the Syriac writer Jacob of Serugh's letter to the Archdeacon Mar Julian.
Embryological discussions also appear in the Islamic legal tradition.
| Biology and health sciences | Animal ontogeny | Biology |
162780 | https://en.wikipedia.org/wiki/Megafauna | Megafauna | In zoology, megafauna (from Greek μέγας megas "large" and Neo-Latin fauna "animal life") are large animals. The precise definition of the term varies widely, though a common threshold is approximately 45 kg (100 lb), with other proposed thresholds both lower and higher. Large body size is generally associated with other traits, such as having a slow rate of reproduction and, in large herbivores, reduced or negligible adult mortality from being killed by predators.
Megafauna species have considerable effects on their local environment, including the suppression of the growth of woody vegetation and a consequent reduction in wildfire frequency. Megafauna also play a role in regulating and stabilizing the abundance of smaller animals.
During the Pleistocene, megafauna were diverse across the globe, with most continental ecosystems exhibiting similar or greater species richness in megafauna as compared to ecosystems in Africa today. During the Late Pleistocene, particularly from around 50,000 years ago onwards, most large mammal species became extinct, including around 80% of the largest mammal species, while small animals were largely unaffected. This pronouncedly size-biased extinction is otherwise unprecedented in the geological record. Humans and climatic change have been implicated by most authors as the likely causes, though the relative importance of either factor has been the subject of significant controversy.
History
One of the earliest occurrences of the term "megafauna" is in Alfred Russel Wallace's 1876 work The geographical distribution of animals, where he described such animals as "the hugest, and fiercest, and strangest forms". In the 20th and 21st centuries, the term usually refers to large animals. The thresholds used to define megafauna, either as a whole or for particular groups, vary. Much of the scientific literature adopts Paul S. Martin's proposed threshold of about 45 kg (100 lb) to classify animals as megafauna, though a lower threshold is generally preferred for freshwater species. Some scientists define herbivorous and carnivorous terrestrial megafauna by separate weight thresholds. Additionally, Owen-Smith coined the term megaherbivore for herbivores weighing over 1,000 kg, a usage that has seen some adoption by other researchers.
Among living animals, the term megafauna is most commonly used for the largest extant terrestrial mammals, which includes (but is not limited to) elephants, giraffes, hippopotamuses, rhinoceroses, and larger bovines. Of these five categories of large herbivores, only bovines are presently found outside of Africa and Asia, but all the others were formerly more wide-ranging, with their ranges and populations continually shrinking and decreasing over time. Wild equines are another example of megafauna, but their current ranges are largely restricted to the Old World, specifically in Africa and Asia. Megafaunal species may be categorized according to their dietary type: megaherbivores (e.g., elephants), megacarnivores (e.g., lions), and megaomnivores (e.g., bears).
Ecological strategy
Megafauna animals – in the sense of the largest mammals and birds – are generally K-strategists, with high longevity, slow population growth rates, low mortality rates, and (at least for the largest) few or no natural predators capable of killing adults. These characteristics, although not exclusive to such megafauna, make them vulnerable to human overexploitation, in part because of their slow population recovery rates.
Evolution of large body size
One observation that has been made about the evolution of larger body size is that rapid rates of increase that are often seen over relatively short time intervals are not sustainable over much longer time periods. In an examination of mammal body mass changes over time, the maximum increase possible in a given time interval was found to scale with the interval length raised to the 0.25 power. This is thought to reflect the emergence, during a trend of increasing maximum body size, of a series of anatomical, physiological, environmental, genetic and other constraints that must be overcome by evolutionary innovations before further size increases are possible. A strikingly faster rate of change was found for large decreases in body mass, such as may be associated with the phenomenon of insular dwarfism. When normalized to generation length, the maximum rate of body mass decrease was found to be over 30 times greater than the maximum rate of body mass increase for a ten-fold change.
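As a rough numerical illustration of the interval-length scaling described above, the following sketch assumes only that the maximum achievable body-mass increase grows with the interval length raised to the 0.25 power; the proportionality constant and the example intervals are hypothetical and chosen purely for illustration.

```python
# Sketch: maximum achievable body-mass increase as a function of interval length,
# assuming (as described above) that it scales with the interval length raised
# to the 0.25 power. K is a hypothetical constant in arbitrary units.

K = 1.0

def max_increase(interval_ma: float) -> float:
    """Maximum body-mass increase achievable over an interval of `interval_ma` Ma."""
    return K * interval_ma ** 0.25

for interval in (1, 10, 100):  # interval lengths in millions of years
    print(f"{interval:>3} Ma -> relative maximum increase {max_increase(interval):.2f}")

# Doubling the interval length raises the achievable increase by only a factor of
# 2**0.25 ≈ 1.19, which is why rapid short-term increases cannot be sustained
# over much longer spans.
```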
In terrestrial mammals
Subsequent to the Cretaceous–Paleogene extinction event that eliminated the non-avian dinosaurs about 66 Ma (million years) ago, terrestrial mammals underwent a nearly exponential increase in body size as they diversified to occupy the ecological niches left vacant. Starting from just a few kg before the event, maximum size rose rapidly within a few million years and continued to climb through the end of the Paleocene. This trend of increasing body mass appears to level off about 40 Ma ago (in the late Eocene), suggesting that physiological or ecological constraints had been reached, after an increase in body mass of over three orders of magnitude. However, when considered from the standpoint of rate of size increase per generation, the exponential increase is found to have continued until the appearance of Indricotherium 30 Ma ago. (Since generation time scales with body mass^0.259, increasing generation times with increasing size cause the log mass vs. time plot to curve downward from a linear fit.)
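The parenthetical point about generation time can be illustrated with a small calculation. The sketch below assumes a fixed, hypothetical per-generation rate of change and a generation time proportional to body mass raised to the 0.259 power (with a hypothetical constant of one year for a 1 kg animal); it shows how the same per-generation rate translates into progressively less change per million years as body mass grows.

```python
# Sketch: a constant per-generation rate of change looks slower in absolute time
# for larger animals, because generation time grows with body mass**0.259.
# The per-generation rate and the 1-year generation time at 1 kg are hypothetical;
# only the 0.259 exponent reflects the scaling described in the text.

PER_GEN_RATE = 1e-6  # hypothetical fractional mass change per generation

def generations_per_ma(mass_kg: float) -> float:
    gen_time_years = 1.0 * mass_kg ** 0.259  # hypothetical: 1 year at 1 kg
    return 1e6 / gen_time_years

for mass in (1, 10, 100, 1_000, 10_000):
    gens = generations_per_ma(mass)
    print(f"{mass:>6} kg: {gens:,.0f} generations/Ma, "
          f"approximate fractional change/Ma ≈ {PER_GEN_RATE * gens:.3f}")

# A 10,000 kg animal has roughly 10.9x the generation time of a 1 kg animal
# (10000**0.259 ≈ 10.9), so the same per-generation rate produces only about a
# tenth of the change per million years, bending the log-mass-vs-time curve downward.
```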
Megaherbivores eventually attained body masses of many tonnes. The largest of these, indricotheres and proboscids, have been hindgut fermenters, which are believed to have an advantage over foregut fermenters in their ability to accelerate gastrointestinal transit and so accommodate very large food intakes. A similar trend emerges when rates of increase of maximum body mass per generation for different mammalian clades are compared (using rates averaged over macroevolutionary time scales). Among terrestrial mammals, the fastest rates of increase of body mass^0.259 vs. time (in Ma) occurred in perissodactyls (a slope of 2.1), followed by rodents (1.2) and proboscids (1.1), all of which are hindgut fermenters. The rate of increase for artiodactyls (0.74) was about a third of that of the perissodactyls. The rate for carnivorans (0.65) was slightly lower still, while primates, perhaps constrained by their arboreal habits, had the lowest rate (0.39) among the mammalian groups studied.
Terrestrial mammalian carnivores from several eutherian groups (the artiodactyl Andrewsarchus – formerly considered a mesonychid – the oxyaenid Sarkastodon, and the carnivorans Amphicyon and Arctodus) all reached a similar maximum size (the carnivoran Arctotherium and the hyaenodontid Simbakubwa may have been somewhat larger). The largest known metatherian carnivore, Proborhyaena gigantea, apparently also approached this limit. A similar theoretical maximum size for mammalian carnivores has been predicted based on the metabolic rate of mammals, the energetic cost of obtaining prey, and the maximum estimated rate coefficient of prey intake. It has also been suggested that the maximum size of mammalian carnivores is constrained by the stress the humerus can withstand at top running speed.
Analysis of the variation of maximum body size over the last 40 Ma suggests that decreasing temperature and increasing continental land area are associated with increasing maximum body size. The former correlation would be consistent with Bergmann's rule, and might be related to the thermoregulatory advantage of large body mass in cool climates, better ability of larger organisms to cope with seasonality in food supply, or other factors; the latter correlation could be explained in terms of range and resource limitations. However, the two parameters are interrelated (due to sea level drops accompanying increased glaciation), making the driver of the trends in maximum size more difficult to identify.
In marine mammals
Since tetrapods (first reptiles, later mammals) returned to the sea in the Late Permian, they have dominated the top end of the marine body size range, due to the more efficient intake of oxygen possible using lungs. The ancestors of cetaceans are believed to have been the semiaquatic pakicetids, no larger than dogs, of about 53 million years (Ma) ago. By 40 Ma ago, cetaceans had attained great lengths in Basilosaurus, an elongated, serpentine whale that differed from modern whales in many respects and was not ancestral to them. Following this, the evolution of large body size in cetaceans appears to have come to a temporary halt and then to have backtracked, although the available fossil records are limited. However, in the period from 31 Ma ago (in the Oligocene) to the present, cetaceans underwent a significantly more rapid sustained increase in body mass (a rate of increase in body mass^0.259 of a factor of 3.2 per million years) than achieved by any group of terrestrial mammals. This trend led to the largest animal of all time, the modern blue whale. Several reasons for the more rapid evolution of large body size in cetaceans are possible. Fewer biomechanical constraints on increases in body size may be associated with suspension in water as opposed to standing against the force of gravity, and with swimming movements as opposed to terrestrial locomotion. Also, the greater heat capacity and thermal conductivity of water compared to air may increase the thermoregulatory advantage of large body size in marine endotherms, although diminishing returns apply.
Among the toothed whales, maximum body size appears to be limited by food availability. Larger size, as in sperm and beaked whales, facilitates deeper diving to access relatively easily-caught, large cephalopod prey in a less competitive environment. Compared to odontocetes, the efficiency of baleen whales' filter feeding scales more favorably with increasing size when planktonic food is dense, making larger sizes more advantageous. The lunge feeding technique of rorquals appears to be more energy efficient than the ram feeding of balaenid whales; the latter technique is used with less dense and patchy plankton. The cooling trend in Earth's recent history may have generated more localities of high plankton abundance via wind-driven upwellings, facilitating the evolution of gigantic whales.
Cetaceans are not the only marine mammals to reach tremendous sizes. The largest mammalian carnivorans of all time are the marine pinnipeds, the largest of which is the southern elephant seal. Other large pinnipeds include the northern elephant seal, the walrus, and the Steller sea lion. The sirenians are another group of marine mammals which adapted to fully aquatic life at around the same time as the cetaceans did; sirenians are closely related to elephants. The largest sirenian was Steller's sea cow, which was hunted to extinction in the 18th century.
In flightless birds
Because of the small initial size of all mammals following the extinction of the non-avian dinosaurs, nonmammalian vertebrates had a roughly ten-million-year-long window of opportunity (during the Paleocene) for evolution of gigantism without much competition. During this interval, apex predator niches were often occupied by reptiles, such as terrestrial crocodilians (e.g. Pristichampsus), large snakes (e.g. Titanoboa) or varanid lizards, or by flightless birds (e.g. Paleopsilopterus in South America). This is also the period when megafaunal flightless herbivorous gastornithid birds evolved in the Northern Hemisphere, while flightless paleognaths evolved to large size on Gondwanan land masses and Europe. Gastornithids and at least one lineage of flightless paleognath birds originated in Europe, both lineages dominating niches for large herbivores while mammals remained below (in contrast with other landmasses like North America and Asia, which saw the earlier evolution of larger mammals) and were the largest European tetrapods in the Paleocene.
Flightless paleognaths, termed ratites, have traditionally been viewed as representing a lineage separate from that of their small flighted relatives, the Neotropic tinamous. However, recent genetic studies have found that tinamous nest well within the ratite tree, and are the sister group of the extinct moa of New Zealand. Similarly, the small kiwi of New Zealand have been found to be the sister group of the extinct elephant birds of Madagascar. These findings indicate that flightlessness and gigantism arose independently multiple times among ratites via parallel evolution.
Predatory megafaunal flightless birds were often able to compete with mammals in the early Cenozoic. Later in the Cenozoic, however, they were displaced by advanced carnivorans and died out. In North America, the bathornithids Paracrax and Bathornis were apex predators but became extinct by the Early Miocene. In South America, the related phorusrhacids shared the dominant predatory niches with metatherian sparassodonts during most of the Cenozoic but declined and ultimately went extinct after eutherian predators arrived from North America (as part of the Great American Interchange) during the Pliocene. In contrast, large herbivorous flightless ratites have survived to the present.
However, none of the flightless birds of the Cenozoic, including the predatory Brontornis, the possibly omnivorous Dromornis stirtoni, and the herbivorous Aepyornis, ever attained the size of the largest mammalian carnivores, let alone that of the largest mammalian herbivores. It has been suggested that the increasing thickness of avian eggshells in proportion to egg mass with increasing egg size places an upper limit on the size of birds. The largest species of Dromornis, D. stirtoni, may have gone extinct after it attained the maximum avian body mass and was then outcompeted by marsupial diprotodonts that evolved to sizes several times larger.
In giant turtles
Giant tortoises were important components of late Cenozoic megafaunas, being present on every nonpolar continent until the arrival of homininans. The largest known terrestrial tortoise was Megalochelys atlas.
Some earlier aquatic Testudines, such as the marine Archelon of the Cretaceous and the freshwater Stupendemys of the Miocene, were considerably larger.
Megafaunal mass extinctions
Timing and possible causes
Numerous extinctions occurred during the latter half of the Last Glacial Period, when most large mammals went extinct in the Americas, Australia–New Guinea, and Eurasia, including over 80% of terrestrial animal species in the largest size classes. Small animals and other organisms such as plants were generally unaffected by the extinctions, a size selectivity that is unprecedented among extinctions of the last 30 million years.
Various theories have attributed the wave of extinctions to human hunting, climate change, disease, extraterrestrial impact, competition from other animals or other causes. However, this extinction near the end of the Pleistocene was just one of a series of megafaunal extinction pulses that have occurred during the last 50,000 years over much of the Earth's surface, with Africa and Asia (where the local megafauna had a chance to evolve alongside modern humans) being comparatively less affected. The latter areas did suffer gradual attrition of megafauna, particularly of the slower-moving species (a class of vulnerable megafauna epitomized by giant tortoises), over the last several million years.
Outside the mainland of Afro-Eurasia, these megafaunal extinctions followed a highly distinctive landmass-by-landmass pattern that closely parallels the spread of humans into previously uninhabited regions of the world, and which shows no overall correlation with climatic history (which can be visualized with plots over recent geological time periods of climate markers such as marine oxygen isotopes or atmospheric carbon dioxide levels). Australia and nearby islands (e.g., Flores) were struck first around 46,000 years ago, followed by Tasmania about 41,000 years ago (after formation of a land bridge to Australia about 43,000 years ago). The role of humans in the extinction of Australia and New Guinea's megafauna has been disputed, with multiple studies showing a decline in the number of species prior to the arrival of humans on the continent and the absence of any evidence of human predation; the impact of climate change has instead been cited for their decline. Similarly, Japan lost most of its megafauna apparently about 30,000 years ago, North America 13,000 years ago and South America about 500 years later, Cyprus 10,000 years ago, the Antilles 6,000 years ago, New Caledonia and nearby islands 3,000 years ago, Madagascar 2,000 years ago, New Zealand 700 years ago, the Mascarenes 400 years ago, and the Commander Islands 250 years ago. Nearly all of the world's isolated islands could furnish similar examples of extinctions occurring shortly after the arrival of humans, though most of these islands, such as the Hawaiian Islands, never had terrestrial megafauna, so their extinct fauna were smaller, but still displayed island gigantism.
An analysis of the timing of Holarctic megafaunal extinctions and extirpations over the last 56,000 years has revealed a tendency for such events to cluster within interstadials, periods of abrupt warming, but only when humans were also present. Humans may have impeded processes of migration and recolonization that would otherwise have allowed the megafaunal species to adapt to the climate shift. In at least some areas, interstadials were periods of expanding human populations.
An analysis of Sporormiella fungal spores (which derive mainly from the dung of megaherbivores) in swamp sediment cores spanning the last 130,000 years from Lynch's Crater in Queensland, Australia, showed that the megafauna of that region virtually disappeared about 41,000 years ago, at a time when climate changes were minimal; the change was accompanied by an increase in charcoal, and was followed by a transition from rainforest to fire-tolerant sclerophyll vegetation. The high-resolution chronology of the changes supports the hypothesis that human hunting alone eliminated the megafauna, and that the subsequent change in flora was most likely a consequence of the elimination of browsers and an increase in fire. The increase in fire lagged the disappearance of megafauna by about a century, and most likely resulted from accumulation of fuel once browsing stopped. Over the next several centuries grass increased; sclerophyll vegetation increased with a lag of another century, and a sclerophyll forest developed after about another thousand years. During two periods of climate change about 120,000 and 75,000 years ago, sclerophyll vegetation had also increased at the site in response to a shift to cooler, drier conditions; neither of these episodes had a significant impact on megafaunal abundance. Similar conclusions regarding the culpability of human hunters in the disappearance of Pleistocene megafauna were derived from high-resolution chronologies obtained via an analysis of a large collection of eggshell fragments of the flightless Australian bird Genyornis newtoni, from analysis of Sporormiella fungal spores from a lake in eastern North America and from study of deposits of Shasta ground sloth dung left in over half a dozen caves in the American Southwest.
Continuing human hunting and environmental disturbance has led to additional megafaunal extinctions in the recent past, and has created a serious danger of further extinctions in the near future (see examples below). Direct killing by humans, primarily for meat or other body parts, is the most significant factor in contemporary megafaunal decline.
A number of other mass extinctions occurred earlier in Earth's geologic history, in which some or all of the megafauna of the time also died out. Famously, in the Cretaceous–Paleogene extinction event, the non-avian dinosaurs and most other giant reptiles were eliminated. However, the earlier mass extinctions were more global and not so selective for megafauna; i.e., many species of other types, including plants, marine invertebrates and plankton, went extinct as well. Thus, the earlier events must have been caused by more generalized types of disturbances to the biosphere.
Consequences of depletion of megafauna
Depletion of herbivorous megafauna results in increased growth of woody vegetation, and a consequent increase in wildfire frequency. Megafauna may help to suppress the growth of invasive plants. Large herbivores and carnivores can suppress the abundance of smaller animals, resulting in their population increase when megafauna are removed.
Effect on nutrient transport
Megafauna play a significant role in the lateral transport of mineral nutrients in an ecosystem, tending to translocate them from areas of high to those of lower abundance. They do so by their movement between the time they consume the nutrient and the time they release it through elimination (or, to a much lesser extent, through decomposition after death). In South America's Amazon Basin, it is estimated that such lateral diffusion was reduced over 98% following the megafaunal extinctions that occurred roughly 12,500 years ago. Given that phosphorus availability is thought to limit productivity in much of the region, the decrease in its transport from the western part of the basin and from floodplains (both of which derive their supply from the uplift of the Andes) to other areas is thought to have significantly impacted the region's ecology, and the effects may not yet have reached their limits. In the sea, cetaceans and pinnipeds that feed at depth are thought to translocate nitrogen from deep to shallow water, enhancing ocean productivity, and counteracting the activity of zooplankton, which tend to do the opposite.
Effect on methane emissions
Large populations of megaherbivores have the potential to contribute greatly to the atmospheric concentration of methane, which is an important greenhouse gas. Modern ruminant herbivores produce methane as a byproduct of foregut fermentation in digestion and release it through belching or flatulence. Today, around 20% of annual methane emissions come from livestock methane release. In the Mesozoic, it has been estimated that sauropods could have emitted 520 million tons of methane to the atmosphere annually, contributing to the warmer climate of the time (up to 10 °C (18 °F) warmer than at present). This large emission follows from the enormous estimated biomass of sauropods, and because methane production of individual herbivores is believed to be almost proportional to their mass.
Recent studies have indicated that the extinction of megafaunal herbivores may have caused a reduction in atmospheric methane. This hypothesis is relatively new. One study examined the methane emissions from the bison that occupied the Great Plains of North America before contact with European settlers. The study estimated that the removal of the bison caused a decrease of as much as 2.2 million tons per year. Another study examined the change in the methane concentration in the atmosphere at the end of the Pleistocene epoch after the extinction of megafauna in the Americas. After early humans migrated to the Americas about 13,000 BP, their hunting and other associated ecological impacts led to the extinction of many megafaunal species there. Calculations suggest that this extinction decreased methane production by about 9.6 million tons per year. This suggests that the absence of megafaunal methane emissions may have contributed to the abrupt climatic cooling at the onset of the Younger Dryas. The decrease in atmospheric methane that occurred at that time, as recorded in ice cores, was 2 to 4 times more rapid than any other decrease in the last half million years, suggesting that an unusual mechanism was at work.
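Such estimates ultimately rest on the assumption, noted above, that an individual herbivore's methane output is roughly proportional to its body mass. The following back-of-the-envelope sketch makes that dependence explicit; the emission factor, population sizes, and mean body mass are hypothetical placeholders rather than the values used in the studies cited above.

```python
# Back-of-the-envelope estimate of megaherbivore methane output, assuming (as noted
# above) that an individual's emissions scale roughly with its body mass. All values
# below are hypothetical placeholders, not the inputs used in the cited studies.

EMISSION_KG_CH4_PER_KG_BODY_PER_YEAR = 0.05  # hypothetical per-kilogram emission factor

def annual_emissions_mt(population: float, mean_mass_kg: float) -> float:
    """Annual methane output, in million tonnes, for a population of herbivores."""
    kg_ch4 = population * mean_mass_kg * EMISSION_KG_CH4_PER_KG_BODY_PER_YEAR
    return kg_ch4 / 1e9  # kilograms -> million tonnes

# Hypothetical pre-extinction assemblage versus a small surviving remnant.
before = annual_emissions_mt(population=30e6, mean_mass_kg=800)
after = annual_emissions_mt(population=1e6, mean_mass_kg=800)
print(f"before: {before:.1f} Mt CH4/yr, after: {after:.2f} Mt CH4/yr, "
      f"reduction: {before - after:.2f} Mt CH4/yr")
```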
Gallery
Pleistocene extinct megafauna
Other extinct Cenozoic megafauna
Extant
| Biology and health sciences | General classifications | Animals |
162843 | https://en.wikipedia.org/wiki/Color%20television | Color television | Color television (American English) or colour television (Commonwealth English) is a television transmission technology that includes color information for the picture, so the video image can be displayed in color on the television set. It improves on the monochrome or black-and-white television technology, which displays the image in shades of gray (grayscale). Television broadcasting stations and networks in most parts of the world upgraded from black-and-white to color transmission between the 1960s and the 1980s. The invention of color television standards was an important part of the history and technology of television.
Transmission of color images using mechanical scanners had been conceived as early as the 1880s. A demonstration of mechanically scanned color television was given by John Logie Baird in 1928, but its limitations were apparent even then. Development of electronic scanning and display made a practical system possible. Monochrome transmission standards were developed prior to World War II, but civilian electronics development was frozen during much of the war. In August 1944, Baird gave the world's first demonstration of a practical fully electronic color television display. In the United States, competing color standards were developed, finally resulting in the NTSC color standard that was compatible with the prior monochrome system. Although the NTSC color standard was proclaimed in 1953, and limited programming soon became available, it was not until the early 1970s that color television in North America outsold black-and-white units. Color broadcasting in Europe did not standardize on the PAL or SECAM formats until the 1960s.
Broadcasters began to upgrade from analog color television technology to higher-resolution digital television; the exact year varies by country. While the changeover is complete in many countries, analog television remains in use in some countries.
Development
The human eye's detection system in the retina consists primarily of two types of light detectors: rod cells that capture light, dark, and shapes/figures, and the cone cells that detect color. A typical retina contains 120 million rods and 4.5 million to 6 million cones, which are divided into three types, each one with a characteristic profile of excitability by different wavelengths of the spectrum of visible light. This means that the eye has far more resolution in brightness, or "luminance", than in color. However, post-processing of the optic nerve and other portions of the human visual system combine the information from the rods and cones to re-create what appears to be a high-resolution color image.
The eye has limited bandwidth to the rest of the visual system, estimated at just under 8 Mbit/s. This manifests itself in a number of ways, but the most important in terms of producing moving images is the way that a series of still images displayed in quick succession will appear to be continuous smooth motion. This illusion starts to work at about 16 frame/s, and common motion pictures use 24 frame/s. Television, using power from the electrical grid, historically tuned its rate in order to avoid interference with the alternating current being supplied – in North America, some Central and South American countries, Taiwan, Korea, part of Japan, the Philippines, and a few other countries, this was 60 video fields per second to match the 60 Hz power, while in most other countries it was 50 fields per second to match the 50 Hz power. The NTSC color system changed from the black-and-white 60-fields-per-second standard to 59.94 fields per second to make the color circuitry simpler; the 1950s TV sets had matured enough that the power frequency/field rate mismatch was no longer important. Modern TV sets can display multiple field rates (50, 59.94, or 60, in either interlaced or progressive scan) while accepting power at various frequencies (often the operating range is specified as 48–62 Hz).
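As an aside on the numbers involved, the NTSC color field rate is conventionally derived from the original 60 Hz rate by a factor of 1000/1001; the short sketch below shows that arithmetic, with the frame rate following as half the field rate for interlaced scanning.

```python
# The NTSC color standard lowered the 60 Hz monochrome field rate by a factor of
# 1000/1001, giving the familiar 59.94 fields per second.
mono_field_rate = 60.0
color_field_rate = mono_field_rate * 1000 / 1001   # ≈ 59.9401 Hz
color_frame_rate = color_field_rate / 2             # two interlaced fields per frame ≈ 29.97 Hz

print(f"color field rate: {color_field_rate:.4f} Hz")
print(f"color frame rate: {color_frame_rate:.4f} Hz")
```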
In its most basic form, a color broadcast can be created by broadcasting three monochrome images, one each in the three colors of red, green, and blue (RGB). When displayed together or in rapid succession, these images will blend together to produce a full-color image as seen by the viewer. To do so without making the images flicker, the refresh time of all three images put together would have to be above the critical limit, and generally the same as a single black and white image. This would require three times the number of images to be sent in the same time, greatly increasing the amount of radio bandwidth required to send the complete signal and thus similarly increasing the required radio spectrum. Early plans for color television in the United States included a move from very high frequency (VHF) to ultra high frequency (UHF) to open up additional spectrum.
One of the great technical challenges of introducing color broadcast television was the desire to conserve bandwidth. In the United States, after considerable research, the National Television Systems Committee approved an all-electronic system developed by RCA that encoded the color information separately from the brightness information and greatly reduced the resolution of the color information in order to conserve bandwidth. The brightness image remained compatible with existing black-and-white television sets at slightly reduced resolution, while color-capable televisions could decode the extra information in the signal and produce a limited-resolution color display. The higher resolution black-and-white and lower resolution color images combine in the eye to produce a seemingly high-resolution color image. The NTSC standard represented a major technical achievement.
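The central idea of carrying brightness at full resolution and color at reduced resolution can be sketched digitally in a few lines. The snippet below uses the standard NTSC luma weights and an approximate YIQ-style separation, then simply holds the chroma channels constant over two-pixel blocks; it is a simplified illustration of the principle, not the actual analog encoding, which modulates the color components onto a subcarrier.

```python
# Minimal sketch of luma/chroma separation with reduced chroma resolution,
# in the spirit of NTSC's approach (a digital simplification, not the actual
# analog subcarrier encoding). RGB values are assumed to lie in [0, 1].
import numpy as np

def rgb_to_yiq(rgb):
    """Convert an (H, W, 3) RGB image to YIQ using the NTSC luma weights."""
    m = np.array([[0.299,  0.587,  0.114],    # Y: brightness, kept at full resolution
                  [0.596, -0.274, -0.322],    # I: orange-cyan chroma axis (approx.)
                  [0.211, -0.523,  0.312]])   # Q: purple-green chroma axis (approx.)
    return rgb @ m.T

def reduce_chroma(yiq, factor=2):
    """Subsample the chroma channels horizontally and hold them, keeping luma untouched."""
    out = yiq.copy()
    for ch in (1, 2):                               # I and Q channels only
        low = out[:, ::factor, ch]                  # subsample...
        out[:, :, ch] = np.repeat(low, factor, axis=1)[:, :out.shape[1]]  # ...and hold
    return out

# Tiny example: a 2x4 image with a sharp color change between its two halves.
img = np.zeros((2, 4, 3))
img[:, :2] = [0.5, 0.25, 0.25]   # reddish
img[:, 2:] = [0.25, 0.5, 0.25]   # greenish
coarse = reduce_chroma(rgb_to_yiq(img))
print(coarse[0, :, 0])   # luma remains defined for every pixel
print(coarse[0, :, 1])   # chroma is now held constant over 2-pixel blocks
```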
Early television
Experiments with facsimile image transmission systems that used radio broadcasts to transmit images date to the 19th century. It was not until the 20th century that advances in electronics and light detectors made television practical. A key problem was the need to convert a 2D image into a "1D" radio signal; some form of image scanning was needed to make this work. Early systems generally used a device known as a "Nipkow disk", which was a spinning disk with a series of holes punched in it that caused a spot to scan across and down the image. A single photodetector behind the disk captured the image brightness at any given spot, which was converted into a radio signal and broadcast. A similar disk was used at the receiver side, with a light source behind the disk instead of a detector.
A number of such mechanical television systems were being used experimentally in the 1920s. The best-known was John Logie Baird's, which was actually used for regular public broadcasting in Britain for several years. Indeed, Baird's system was demonstrated to members of the Royal Institution in London in 1926 in what is generally recognized as the first demonstration of a true, working television system. In spite of these early successes, all mechanical television systems shared a number of serious problems. Because the discs were mechanically driven, perfect synchronization of the sending and receiving discs was not easy to ensure, and irregularities could result in major image distortion. Another problem was that the image was scanned within a small, roughly rectangular area of the disk's surface, so that larger, higher-resolution displays required increasingly unwieldy disks and smaller holes that produced increasingly dim images. Rotating drums bearing small mirrors set at progressively greater angles proved more practical than Nipkow discs for high-resolution mechanical scanning, allowing images of 240 lines and more to be produced, but such delicate, high-precision optical components were not commercially practical for home receivers.
It was clear to a number of developers that a completely electronic scanning system would be superior, and that the scanning could be achieved in a vacuum tube via electrostatic or magnetic means. Converting this concept into a usable system took years of development and several independent advances. The two key advances were Philo Farnsworth's electronic scanning system, and Vladimir Zworykin's Iconoscope camera. The Iconoscope, based on Kálmán Tihanyi's early patents, superseded the Farnsworth system. With these systems, the BBC began regularly scheduled black-and-white television broadcasts in 1936, but these were shut down again with the start of World War II in 1939. By this time, thousands of television sets had been sold. The receivers developed for this program, notably those from Pye Ltd., played a key role in the development of radar.
By 22 March 1935, 180-line black-and-white television programs were being broadcast from the Paul Nipkow TV station in Berlin. In 1936, under the guidance of the Minister of Public Enlightenment and Propaganda, Joseph Goebbels, direct transmissions from fifteen mobile units at the Olympic Games in Berlin were transmitted to selected small television houses in Berlin and Hamburg.
In 1941, the first NTSC meetings produced a single standard for US broadcasts. US television broadcasts began in earnest in the immediate post-war era, and by 1950 there were 6 million televisions in the United States.
All-mechanical color
The basic idea of using three monochrome images to produce a color image had been experimented with almost as soon as black-and-white televisions had first been built.
Among the earliest published proposals for television was one by Maurice Le Blanc in 1880 for a color system, including the first mentions in television literature of line and frame scanning, although he gave no practical details. Polish inventor Jan Szczepanik patented a color television system in 1897, using a selenium photoelectric cell at the transmitter and an electromagnet controlling an oscillating mirror and a moving prism at the receiver. But his system contained no means of analyzing the spectrum of colors at the transmitting end, and could not have worked as he described it. An Armenian inventor, Hovannes Adamian, also experimented with color television as early as 1907. He is claimed to have created the first color television project, which was patented in Germany on 31 March 1908 (patent number 197183), then in Britain on 1 April 1908 (patent number 7219), in France (patent number 390326), and in Russia in 1910 (patent number 17912).
Shortly after his practical demonstration of black and white television, on 3 July 1928, Baird demonstrated the world's first color transmission. This used scanning discs at the transmitting and receiving ends with three spirals of apertures, each spiral with filters of a different primary color; and three light sources, controlled by the signal, at the receiving end, with a commutator to alternate their illumination. The demonstration was of a young girl wearing different colored hats. The girl, Noele Gordon, later became a TV actress in the soap opera Crossroads. Baird also made the world's first color over-the-air broadcast on 4 February 1938, sending a mechanically scanned 120-line image from Baird's Crystal Palace studios to a projection screen at London's Dominion Theatre.
Mechanically scanned color television was also demonstrated by Bell Laboratories in June 1929 using three complete systems of photoelectric cells, amplifiers, glow-tubes, and color filters, with a series of mirrors to superimpose the red, green, and blue images into one full-color image.
Hybrid systems
As was the case with black-and-white television, an electronic means of scanning would be superior to the mechanical systems like Baird's. The obvious solution on the broadcast end would be to use three conventional Iconoscopes with colored filters in front of them to produce an RGB signal. Using three separate tubes each looking at the same scene would produce slight differences in parallax between the frames, so in practice a single lens was used with a mirror or prism system to separate the colors for the separate tubes. Each tube captured a complete frame and the signal was converted into radio in a fashion essentially identical to the existing black-and-white systems.
The problem with this approach was that there was no simple way to recombine the three signals on the receiver end. If each image was sent at the same time on different frequencies, the images would have to be "stacked" somehow on the display, in real time. The simplest way to do this would be to reverse the system used in the camera: arrange three separate black-and-white displays behind colored filters and then optically combine their images using mirrors or prisms onto a suitable screen, like frosted glass. RCA built just such a system in order to present the first electronically scanned color television demonstration on 5 February 1940, privately shown to members of the US Federal Communications Commission at the RCA plant in Camden, New Jersey. This system, however, suffered from the twin problems of costing at least three times as much as a conventional black-and-white set, as well as having very dim pictures, the result of the fairly low illumination given off by tubes of the era. Projection systems of this sort would become common decades later, however, with improvements in technology.
Another solution would be to use a single screen, but break it up into a pattern of closely spaced colored phosphors instead of an even coating of white. Three receivers would be used, each sending its output to a separate electron gun, aimed at its colored phosphor. However, this solution was not practical. The electron guns used in monochrome televisions had limited resolution, and if one wanted to retain the resolution of existing monochrome displays, the guns would have to focus on individual dots three times smaller. This was beyond the state of the art of the technology at the time.
Instead, a number of hybrid solutions were developed that combined a conventional monochrome display with a colored disk or mirror. In these systems the three colored images were sent one after another, in either complete frames in the "field-sequential color system", or for each line in the "line-sequential" system. In both cases a colored filter was rotated in front of the display in sync with the broadcast. Since three separate images were being sent in sequence, if they used existing monochrome radio signaling standards they would have an effective refresh rate of only 20 fields, or 10 frames, a second, well into the region where flicker would become visible. In order to avoid this, these systems increased the frame rate considerably, making the signal incompatible with existing monochrome standards.
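Taking the 60-fields-per-second monochrome standard as the assumed starting point, the arithmetic behind that figure is:

$$\frac{60\ \text{fields/s}}{3\ \text{colors}} = 20\ \text{fields/s per color}, \qquad \frac{20\ \text{fields/s}}{2\ \text{fields per frame}} = 10\ \text{complete color frames/s}.$$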
The first practical example of this sort of system was again pioneered by John Logie Baird. In 1940 he publicly demonstrated a color television combining a traditional black-and-white display with a rotating colored disk. This device was very "deep", but was later improved with a mirror folding the light path into an entirely practical device resembling a large conventional console. However, Baird was not happy with the design, and as early as 1944 had commented to a British government committee that a fully electronic device would be better.
In 1939, Hungarian engineer Peter Carl Goldmark introduced an electro-mechanical system while at CBS, which contained an Iconoscope sensor. The CBS field-sequential color system was partly mechanical, with a disc made of red, blue, and green filters spinning inside the television camera at 1,200 rpm, and a similar disc spinning in synchronization in front of the cathode ray tube inside the receiver set. The system was first demonstrated to the Federal Communications Commission (FCC) on 29 August 1940, and shown to the press on 4 September.
CBS began experimental color field tests using film as early as 28 August 1940, and live cameras by 12 November. NBC (owned by RCA) made its first field test of color television on 20 February 1941. CBS began daily color field tests on 1 June 1941. These color systems were not compatible with existing black-and-white television sets, and as no color television sets were available to the public at this time, viewing of the color field tests was restricted to RCA and CBS engineers and the invited press. The War Production Board halted the manufacture of television and radio equipment for civilian use from 22 April 1942, to 20 August 1945, limiting any opportunity to introduce color television to the general public.
Fully electronic
As early as 1940, Baird had started work on a fully electronic system he called the "Telechrome". Early Telechrome devices used two electron guns aimed at either side of a phosphor plate. The phosphor was patterned so the electrons from the guns only fell on one side of the patterning or the other. Using cyan and magenta phosphors, a reasonable limited-color image could be obtained. Baird's demonstration on 16 August 1944, was the first example of a practical color television system. Work on the Telechrome continued and plans were made to introduce a three-gun version for full color. However, Baird's untimely death in 1946 ended the development of the Telechrome system.
Similar concepts were common through the 1940s and 1950s, differing primarily in the way they re-combined the colors generated by the three guns. The Geer tube was similar to Baird's concept, but used small pyramids with the phosphors deposited on their outside faces, instead of Baird's 3D patterning on a flat surface. The Penetron used three layers of phosphor on top of each other and increased the power of the beam to reach the upper layers when drawing those colors. The Chromatron used a set of focusing wires to select the colored phosphors arranged in vertical stripes on the tube.
FCC color
In the immediate post-war era, the Federal Communications Commission (FCC) was inundated with requests to set up new television stations. Worrying about congestion of the limited number of channels available, the FCC put a moratorium on all new licenses in 1948 while considering the problem. A solution was immediately forthcoming; rapid development of radio receiver electronics during the war had opened a wide band of higher frequencies to practical use, and the FCC set aside a large section of these new UHF bands for television broadcast. At the time, black-and-white television broadcasting was still in its infancy in the U.S., and the FCC started to look at ways of using this newly available bandwidth for color broadcasts. Since no existing television would be able to tune in these stations, they were free to pick an incompatible system and allow the older VHF channels to die off over time.
The FCC called for technical demonstrations of color systems in 1948, and the Joint Technical Advisory Committee (JTAC) was formed to study them. CBS displayed improved versions of its original design, now using a single 6 MHz channel (like the existing black-and-white signals) at 144 fields per second and 405 lines of resolution. Color Television Inc. (CTI) demonstrated its line-sequential system, while Philco demonstrated a dot-sequential system based on its beam-index "Apple" tube technology. Of the entrants, the CBS system was by far the best-developed, and won head-to-head testing every time.
While the meetings were taking place it was widely known within the industry that RCA was working on a dot-sequential system that was compatible with existing black-and-white broadcasts, but RCA declined to demonstrate it during the first series of meetings. Just before the JTAC presented its findings, on 25 August 1949, RCA broke its silence and introduced its system as well. The JTAC still recommended the CBS system, and after the resolution of an ensuing RCA lawsuit, color broadcasts using the CBS system started on 25 June 1951. By this point the market had changed dramatically; when color was first being considered in 1948 there were fewer than a million television sets in the U.S., but by 1951 there were well over 10 million. The idea that the VHF band could be allowed to "die" was no longer practical.
During its campaign for FCC approval, CBS gave the first demonstrations of color television to the general public, showing an hour of color programs daily Mondays through Saturdays, beginning 12 January 1950, and running for the remainder of the month, over WOIC in Washington, D.C., where the programs could be viewed on eight 16-inch color receivers in a public building. Due to high public demand, the broadcasts were resumed 13–21 February, with several evening programs added. CBS initiated a limited schedule of color broadcasts from its New York station WCBS-TV Mondays to Saturdays beginning 14 November 1950, making ten color receivers available for the viewing public. All were broadcast using the single color camera that CBS owned. The New York broadcasts were extended by coaxial cable to Philadelphia's WCAU-TV beginning 13 December, and to Chicago on 10 January, making them the first network color broadcasts.
After a series of hearings beginning in September 1949, the FCC found the RCA and CTI systems fraught with technical problems, inaccurate color reproduction, and expensive equipment, and so formally approved the CBS system as the U.S. color broadcasting standard on 11 October 1950. An unsuccessful lawsuit by RCA delayed the first commercial network broadcast in color until 25 June 1951, when a musical variety special titled simply Premiere was shown over a network of five East Coast CBS affiliates. Viewing was again restricted: the program could not be seen on black-and-white sets, and Variety estimated that only thirty prototype color receivers were available in the New York area. Regular color broadcasts began that same week with the daytime series The World Is Yours and Modern Homemakers.
While the CBS color broadcasting schedule gradually expanded to twelve hours per week (but never into prime time), and the color network expanded to eleven affiliates as far west as Chicago, its commercial success was doomed by the lack of color receivers necessary to watch the programs, the refusal of television manufacturers to create adapter mechanisms for their existing black-and-white sets, and the unwillingness of advertisers to sponsor broadcasts seen by almost no one. CBS had bought a television manufacturer in April, and in September 1951, production began on the only CBS-Columbia color television model, with the first color sets reaching retail stores on 28 September. However, it was too little, too late. Only 200 sets had been shipped, and only 100 sold, when CBS discontinued its color television system on 20 October 1951, ostensibly by request of the National Production Authority for the duration of the Korean War, and bought back all the CBS color sets it could to prevent lawsuits by disappointed customers. RCA chairman David Sarnoff later charged that the NPA's order had come "out of a situation artificially created by one company to solve its own perplexing problems" because CBS had been unsuccessful in its color venture.
Compatible color
While the FCC was holding its JTAC meetings, development was taking place on a number of systems allowing true simultaneous color broadcasts, "dot-sequential color systems". Unlike the hybrid systems, dot-sequential televisions used a signal very similar to existing black-and-white broadcasts, with the intensity of every dot on the screen being sent in succession.
In 1938 Georges Valensi demonstrated an encoding scheme that would allow color broadcasts to be encoded so they could be picked up on existing black-and-white sets as well. In his system the outputs of the three camera tubes were re-combined to produce a single "luminance" value that was very similar to a monochrome signal and could be broadcast on the existing VHF frequencies. The color information was encoded in a separate "chrominance" signal, consisting of two separate signals: the original blue signal minus the luminance (B'–Y'), and the original red signal minus the luminance (R'–Y'). These signals could then be broadcast separately on a different frequency; a monochrome set would tune in only the luminance signal on the VHF band, while color televisions would tune in both the luminance and chrominance on two different frequencies, and apply the reverse transforms to retrieve the original RGB signal. The downside to this approach is that it required a major boost in bandwidth use, something the FCC was interested in avoiding.
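A minimal sketch of this kind of encoding and its inverse is shown below. The luma weights used here are the familiar coefficients later standardized for NTSC, not necessarily the exact values Valensi used, and the function names are illustrative only.

```python
def encode_luma_chroma(r, g, b):
    """Combine R', G', B' into a luminance signal plus two color-difference signals.

    The 0.299/0.587/0.114 weights are the standard NTSC luma coefficients;
    everything else (names, normalization) is an illustrative assumption.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance, usable by monochrome sets
    b_y = b - y                              # blue color-difference (B' - Y')
    r_y = r - y                              # red color-difference  (R' - Y')
    return y, b_y, r_y

def decode_luma_chroma(y, b_y, r_y):
    """Reverse transform: recover R', G', B' from luminance and color differences."""
    r = r_y + y
    b = b_y + y
    g = (y - 0.299 * r - 0.114 * b) / 0.587  # green follows from the luma definition
    return r, g, b

# Round trip on an arbitrary color sample:
print(decode_luma_chroma(*encode_luma_chroma(0.25, 0.50, 0.75)))  # ~(0.25, 0.50, 0.75)
```

A black-and-white receiver simply uses y and ignores the two difference signals, which is what made the scheme backward compatible.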
RCA used Valensi's concept as the basis of all of its developments, believing it to be the only proper solution to the broadcast problem. However, RCA's early sets using mirrors and other projection systems all suffered from image and color quality problems, and were easily bested by CBS's hybrid system. But solutions to these problems were in the pipeline, and RCA in particular was investing massive sums (later estimated at $100 million) to develop a usable dot-sequential tube. RCA was beaten to the punch by the Geer tube, which used three B&W tubes aimed at different faces of colored pyramids to produce a color image. All-electronic systems included the Chromatron, Penetron and beam-index tube that were being developed by various companies. While investigating all of these, RCA's teams quickly started focusing on the shadow mask system.
In July 1938 the shadow mask color television was patented by Werner Flechsig (1900–1981) in Germany, and was demonstrated at the International radio exhibition Berlin in 1939. Most CRT color televisions used today are based on this technology. His solution to the problem of focusing the electron guns on the tiny colored dots was one of brute-force; a metal sheet with holes punched in it allowed the beams to reach the screen only when they were properly aligned over the dots. Three separate guns were aimed at the holes from slightly different angles, and when their beams passed through the holes the angles caused them to separate again and hit the individual spots a short distance away on the back of the screen. The downside to this approach was that the mask cut off the vast majority of the beam energy, allowing it to hit the screen only 15% of the time, requiring a massive increase in beam power to produce acceptable image brightness.
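In round numbers, taking the 15% figure at face value, matching the brightness of an unmasked tube would require roughly 1/0.15 ≈ 6.7 times the beam power, which illustrates why the increase had to be so large.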
The first publicly announced network demonstration of a program using a "compatible color" system was an episode of NBC's Kukla, Fran and Ollie on 10 October 1949, viewable in color only at the FCC. It did not receive FCC approval.
In spite of these problems in both the broadcast and display systems, RCA pressed ahead with development and was ready for a second assault on the standards by 1950.
Second NTSC
The possibility of a compatible color broadcast system was so compelling that the NTSC decided to re-form, and held a second series of meetings starting in January 1950. Having only recently selected the CBS system, the FCC heavily opposed the NTSC's efforts. One of the FCC Commissioners, R. F. Jones, went so far as to assert that the engineers testifying in favor of a compatible system were "in a conspiracy against the public interest".
Unlike the FCC approach, in which a standard was simply selected from the existing candidates, the NTSC formed a board that would take a considerably more proactive role in development.
Starting before CBS color even got on the air, the U.S. television industry, represented by the National Television System Committee, worked in 1950–1953 to develop a color system that was compatible with existing black-and-white sets and would pass FCC quality standards, with RCA developing the hardware elements. RCA first made publicly announced field tests of the dot sequential color system over its New York station WNBT in July 1951. When CBS testified before Congress in March 1953 that it had no further plans for its own color system, the National Production Authority dropped its ban on the manufacture of color television receivers, and the path was open for the NTSC to submit its petition for FCC approval in July 1953, which was granted on 17 December. The first publicly announced network demonstration of a program using the NTSC "compatible color" system was an episode of NBC's Kukla, Fran and Ollie on 30 August 1953, although it was viewable in color only at the network's headquarters. The first network broadcast to go out over the air in NTSC color was a performance of the opera Carmen on 31 October 1953.
Adoption
North America
Canada
Colour broadcasts from the United States were available to Canadian population centres near the border from the mid-1950s. At the time that NTSC colour broadcasting was officially introduced into Canada in 1966, less than one percent of Canadian households had a colour television set. Colour television in Canada was launched on the Canadian Broadcasting Corporation's (CBC) English language TV service on 1 September 1966. Private television broadcaster CTV also started colour broadcasts in early September 1966.
The CBC's French-language service, Radio-Canada, was broadcasting colour programming on its television network for 15 hours a week in 1968. Full-time colour transmissions started in 1974 on the CBC, with other private sector broadcasters in the country doing so by the end of the 1970s.
The following provinces and areas of Canada introduced colour television in the years stated:
Saskatchewan, Alberta, Manitoba, British Columbia, Ontario, Quebec (1966; Major networks only – private sector around 1968 to 1972)
Newfoundland and Labrador (1967)
Nova Scotia, New Brunswick (1968)
Prince Edward Island (1969)
Yukon (1971)
Northwest Territories (including Nunavut) (1972; Major networks in large centers, many remote areas in the far north did not get colour until at least 1977 and 1978)
Cuba
Cuba in 1958 became the second country in the world to introduce color television broadcasting, with Havana's Channel 12 using the American NTSC standard and technology patented by RCA. But the color transmissions ended when broadcasting stations were seized in the Cuban Revolution in 1959, and did not return until 1975, using equipment acquired from Japan's NEC Corporation, and SECAM equipment from the Soviet Union, adapted for the American NTSC standard.
Mexico
Guillermo González Camarena independently invented and developed a field-sequential tricolor disk system in Mexico in the late 1930s, for which he requested a patent in Mexico on 19 August 1940, and in the United States in 1941. González Camarena produced his color television system in his Gon-Cam laboratory for the Mexican market and exported it to the Columbia College of Chicago, which regarded it as the best system in the world. Goldmark had actually applied for a patent for the same field-sequential tricolor system in the US on 7 September 1940, while González Camarena had made his Mexican filing 19 days before, on 19 August.
On 31 August 1946, González Camarena sent his first color transmission from his lab in the offices of the Mexican League of Radio Experiments at Lucerna St. No. 1, in Mexico City. The video signal was transmitted at a frequency of 115 MHz and the audio in the 40-metre band. He obtained authorization to make the first publicly announced color broadcast in Mexico, on 8 February 1963, of the program Paraíso Infantil on Mexico City's XHGC-TV, using the NTSC system that had by now been adopted as the standard for color programming.
González Camarena also invented the "simplified Mexican color TV system" as a much simpler and cheaper alternative to the NTSC system. Due to its simplicity, NASA used a modified version of the system in its Voyager mission of 1979, to take pictures and video of Jupiter.
United States
Although all-electronic color was introduced in the US in 1953, high prices and the scarcity of color programming greatly slowed its acceptance in the marketplace. The first national color broadcast (the 1954 Tournament of Roses Parade) occurred on 1 January 1954, but over the next dozen years most network broadcasts, and nearly all local programming, continued to be in black-and-white. In 1956, NBC's The Perry Como Show became the first live network television series to present a majority of episodes in color. The CBS television production of Rodgers & Hammerstein's Cinderella was broadcast live in color on 31 March 1957. It was their only musical written directly for television, and had the highest one-night number of viewers to date at 107 million. CBS's The Big Record, starring pop vocalist Patti Page, in 1957–1958 became the first television show broadcast in color for an entire season. The production costs for these shows were greater than those of most movies at the time, not only because of all the stars featured in the musical and on the hour-long variety extravaganza, but also due to the extremely high-intensity lighting and electronics required for the new RCA TK-41 cameras, which were the first practical color television cameras.
It was not until the mid-1960s that color sets started selling in large numbers, due in part to the color transition of 1965 in which it was announced that over half of all network prime-time programming would be broadcast in color that autumn. The first all-color prime-time season came just one year later.
NBC's pioneering coast-to-coast color broadcast of the 1954 Tournament of Roses Parade was accompanied by public demonstrations given across the United States on prototype color receivers by manufacturers RCA, General Electric, Philco, Raytheon, Hallicrafters, Hoffman, Pacific Mercury, and others. Two days earlier, Admiral had demonstrated to its distributors the prototype of Admiral's first color television set planned for consumer sale using the NTSC standards, priced at $1,175. It is not known when actual commercial sales of this receiver began. Production was extremely limited, and no advertisements for it were published in New York newspapers, nor those in Washington, DC.
Admiral's color model C1617A became available in the Chicago area on 4 January 1954 and appeared in various stores throughout the country, including those in Maryland on 6 January 1954, San Francisco on 14 January 1954, Indianapolis on 17 January 1954, Pittsburgh on 25 January 1954, and Oakland on 26 January 1954, among other cities thereafter. Westinghouse's color model H840CK15 ($1,295) became available in the New York area on 28 February 1954; only 30 sets were sold in its first month. A less expensive color model from RCA (CT-100) reached dealers in April 1954. Television's first prime time network color series was The Marriage, a situation comedy broadcast live by NBC in the summer of 1954. NBC's anthology series Ford Theatre became the first network color-filmed series that October; however, due to the high cost of the first fifteen color episodes, Ford ordered that two black-and-white episodes be filmed for every color episode. The first series to be filmed entirely in color was NBC's Norby, a sitcom that lasted 13 weeks, from January to April 1955, and was replaced by repeats of Ford Theatre's color episodes.
Early color telecasts could be preserved only on the black-and-white kinescope process introduced in 1947. It was not until September 1956 that NBC began using color film to time-delay and preserve some of its live color telecasts. Ampex introduced a color videotape recorder in 1958, which NBC used to tape An Evening with Fred Astaire, the oldest surviving network color videotape. This system was also used to unveil a demonstration of color television for the press. On 22 May 1958, President Dwight D. Eisenhower visited the WRC-TV NBC studios in Washington, D.C., and gave a speech touting the new technology's merits. His speech was recorded in color, and a copy of this videotape was given to the Library of Congress for posterity.
The syndicated The Cisco Kid had been filmed in color since 1949 in anticipation of color broadcasting. Several other syndicated shows had episodes filmed in color during the 1950s, including The Lone Ranger, My Friend Flicka, and Adventures of Superman. The Cisco Kid was carried by some stations equipped for color telecasts well before NBC began its regular weekly color dramas in 1959, beginning with the Western series Bonanza.
NBC was at the forefront of color programming because its parent company RCA manufactured the most successful line of color sets in the 1950s and, at the end of August 1956, announced that in comparison with 1955–56 (when only three of its regularly scheduled programs were broadcast in color) the 1956–57 season would feature 17 series in color. By 1959 RCA was the only remaining major manufacturer of color sets, competitors having discontinued models that used RCA picture tubes because of poor sales, while working on their own improved tube designs. CBS and ABC, not affiliated with set manufacturers and not eager to promote their competitor's product, were much slower to broadcast in color. CBS broadcast color specials and sometimes aired its big weekly variety shows in color, but it offered no regularly scheduled color programming until the fall of 1965. At least one CBS show, The Lucy Show, was filmed in color beginning in 1963, but continued to be telecast in black and white through the end of the 1964–65 season. ABC delayed its first color programs until 1962, but these were initially only broadcasts of the cartoon shows The Flintstones, The Jetsons and Beany and Cecil. The DuMont network, although it did have a television-manufacturing parent company, was in financial decline by 1954 and was dissolved two years later. The only known original color programming broadcast over the DuMont network was a high school football Thanksgiving game from New Jersey in 1957, a year after the network had ceased regular operations.
The relatively small amount of network color programming, combined with the high cost of color television sets, meant that as late as 1964 only 3.1 percent of television households in the US had a color set. However, by the mid-1960s, the subject of color programming turned into a ratings war. A 1965 American Research Bureau (ARB) study that proposed an emerging trend in color television set sales convinced NBC that a full shift to color would gain a ratings advantage over its two competitors. As a result, NBC provided the catalyst for rapid color expansion by announcing that its prime time schedule for fall 1965 would be almost entirely in color. ABC and CBS followed suit and over half of their combined prime-time programming also moved to color that season, but they were still reluctant to telecast all their programming in color due to production costs. All three broadcast networks were airing full color prime time schedules by the 1966–67 broadcast season, and ABC aired its last new black-and-white daytime programming in December 1967. Public broadcasting networks like NET, however, did not use color for a majority of their programming until 1968. The number of color television sets sold in the US did not exceed black-and-white sales until 1972, which was also the first year that more than fifty percent of television households in the US had a color set. This was also the year that "in color" notices before color television programs ended, due to the rise in color television set sales, and color programming having become the norm.
In a display of foresight, Disney had filmed many of its earlier shows in color so they were able to be repeated on NBC, and since most of Disney's feature-length films were also made in color, they could now also be telecast in that format. To emphasize the new feature, the series was re-dubbed Walt Disney's Wonderful World of Color, which premiered in September 1961, and retained that moniker until 1969.
By the mid-1970s, the only stations broadcasting in black-and-white were a few high-numbered UHF stations in small markets, and a handful of low-power repeater stations in even smaller markets such as vacation spots. By 1979, even the last of these had converted to color and by the early 1980s, B&W sets had been pushed into niche markets, notably low-power uses, small portable sets, or use as video monitor screens in lower-cost consumer equipment. These black-and-white displays were still compatible with color signals and remained usable through the 1990s and the first decade of the 21st Century for uses that did not require a full color display. The digital television transition in the United States in 2009 rendered the remaining black-and-white television sets obsolete; all digital television receivers are capable of displaying full color.
Color broadcasting in Hawaii started on 5 May 1957. One of the last television stations in North America to convert to color, WQEX (now WINP-TV) in Pittsburgh, started broadcasting in color on 16 October 1986, after its black-and-white transmitter, which dated from the 1950s, broke down in February 1985 and the parts required to fix it were no longer available. The owner of WQEX, PBS member station WQED, used some of its pledge money to buy a color transmitter.
Early color sets were either floor-standing console models or tabletop versions nearly as bulky and heavy, so in practice, they remained firmly anchored in one place. The introduction of GE's relatively compact and lightweight Porta-Color set in the spring of 1966 made watching color television a more flexible and convenient proposition. In 1972, the year sales of color sets finally surpassed sales of black-and-white sets, the last holdout among daytime network programs converted to color, resulting in the first completely all-color network season.
Europe
The first color television broadcasts in Europe came from early tests in France (SECAM) between 1963 and 1966, officially launched in October 1967, and from the UK's BBC2, which began color broadcasts on 1 July 1967; West Germany's Das Erste and ZDF followed in August, both using the PAL system. They were followed by the Netherlands in September (PAL). On 1 October 1968, the first scheduled television program in color was broadcast in Switzerland. Denmark, Norway, Sweden, Finland, Austria, East Germany, Czechoslovakia, and Hungary all started regular color broadcasts around 1969–1970. Ireland's national TV station RTÉ began using color in 1968 for recorded programs; the first outside broadcast made in color for RTÉ Television was when Ireland hosted the Eurovision Song Contest in Dublin in 1971. The PAL system spread through most of Western Europe.
More European countries introduced color television using the PAL system in the 1970s and early 1980s; examples include Belgium (1971), Bulgaria (1971, but not fully implemented until 1972), SFR Yugoslavia (1971), Spain (1972, but not fully implemented until 1977), Iceland (1973, but not fully implemented until 1976), Portugal (1975, but not fully implemented until 1980), Albania (1981), Turkey (1981) and Romania (1983, but not fully implemented until 1985–1991). In Italy there were debates to adopt a national color television system, the ISA, developed by Indesit, but that idea was scrapped. As a result, and after a test during the 1972 Summer Olympics, Italy was one of the last European countries to officially adopt the PAL system in the 1976–1977 season.
France, Luxembourg, and most of the Eastern Bloc along with their overseas territories opted for SECAM. SECAM was a popular choice in countries with much hilly terrain, and countries with a very large installed base of older monochrome equipment, which could cope much better with the greater ruggedness of the SECAM signal. However, for many countries the decision was more down to politics than technical merit.
A drawback of SECAM for production is that, unlike PAL or NTSC, certain post-production operations of encoded SECAM signals are not really possible without a significant drop in quality. As an example, a simple fade to black is trivial in NTSC and PAL: one merely reduces the signal level until it is zero. However, in SECAM the color difference signals, which are frequency modulated, need first to be decoded to e.g. RGB, then the fade-to-black is applied, and finally the resulting signal is re-encoded into SECAM. Because of this, much SECAM video editing was actually done using PAL equipment, then the resultant signal was converted to SECAM. Another drawback of SECAM is that comb filtering, allowing better color separation, is of limited use in SECAM receivers. This was not, however, much of a drawback in the early days of SECAM as such filters were not readily available in high-end TV sets before the 1990s.
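A hedged sketch of why the fade differs is given below: the composite path treats a PAL/NTSC-style sample that can simply be scaled, while the SECAM path must round-trip through RGB. The decode/encode functions are placeholders standing in for real SECAM processing, not actual implementations.

```python
def fade_composite(sample, k):
    """PAL/NTSC-style fade: the composite signal can simply be scaled toward zero (0 <= k <= 1)."""
    return k * sample

def fade_secam(secam_signal, k, decode_to_rgb, encode_from_rgb):
    """SECAM-style fade: the frequency-modulated color cannot be scaled directly,
    so the signal is decoded to RGB, faded there, and re-encoded.
    decode_to_rgb / encode_from_rgb are placeholders for real SECAM codecs."""
    r, g, b = decode_to_rgb(secam_signal)
    return encode_from_rgb(k * r, k * g, k * b)
```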
The first regular color broadcasts in SECAM were started on 1 October 1967, on France's Second Channel (ORTF 2e chaîne).
In France and the UK color broadcasts were made on 625-line UHF frequencies, the VHF band being used for black and white, 405 lines in UK or 819 lines in France, until the beginning of the 1980s. Countries elsewhere that were already broadcasting 625-line monochrome on VHF and UHF, simply transmitted color programs on the same channels.
Some British television programs, particularly those made by or for ITC Entertainment, were shot on color film before the introduction of color television to the UK, for the purpose of sales to US networks. The first British show to be made in color was the drama series The Adventures of Sir Lancelot (1956–57), which was initially made in black and white but later shot in color for sale to the NBC network in the United States. Other British color television programs made before the introduction of color television in the UK include Stingray (1964–1965), which was claimed to be the first British TV show to be filmed entirely in color, although when this claim was made in the 1960s it was protested by Francis Coudrill, who said his series The Stoopendus Adventures of Hank had been shot entirely in color some years previously; Thunderbirds (1965–1966), The Baron (1966–1967), The Saint (from 1966 to 1969), The Avengers (from 1967 to 1969), Man in a Suitcase (1967–1968), The Prisoner (1967–1968) and Captain Scarlet and the Mysterons (1967–1968). However, most UK series predominantly made using videotape, such as Doctor Who (1963–89; 2005–present), did not begin color production until later, with the first color Doctor Who episodes not airing until 1970. (The first four, comprising the story Spearhead from Space, were shot on film owing to a technician's strike, with videotape being used thereafter.) A marginal number of UK viewers still use black-and-white TV sets: 212,000 black-and-white licences were issued in 2000, falling to 6,586 by 2019.
The last country in Europe to introduce color television was Romania in 1983.
Asia and the Pacific
In Japan, NHK and NTV introduced color television, using a variation of the NTSC system (called NTSC-J) on 10 September 1960, making it the first country in Asia to introduce color television. The Philippines (1966) and Taiwan (1969) also adopted the NTSC system.
Other countries in the region instead used the PAL system, starting with Australia (1967, originally scheduled for 1972, but not fully implemented until 1975–1978), and then Thailand (1967–69; this country converted from 525-line NTSC to 625-line PAL), Hong Kong (1967–70), the People's Republic of China (1970, but not fully implemented until 1984), New Zealand (1973), North Korea (1974), Singapore (1974), Indonesia (1974, but not fully implemented until 1979–82), Pakistan (1976, but not fully implemented until 1982), Kazakhstan (1977), Vietnam (1977), Malaysia (1978, but not fully implemented until 1980), India (1979, but not fully implemented until 1982–86), Burma (1980), and Bangladesh (1980). South Korea did not introduce color television (using NTSC) until 1980–1981, although it was already manufacturing color television sets for export. The last country in Asia and the world to introduce color television was Cambodia in 1986.
China
The People's Republic of China began plans and early testing for color TV as early as 1960, but these plans were quickly cancelled.
China started testing again in 1970 and adopted PAL the next year.
Regular full-time color broadcasts began on what is now CCTV-2 in October 1973, and full-time color transmissions on CCTV's then-two channels began in July 1977.
The following provinces and areas of China introduced color television by the years as stated:
Beijing (1973)
Shanghai (1974)
Jilin (1977)
Tibet and Inner Mongolia (1979)
Ningxia (1980)
Xinjiang (1982, peripheral in 1984)
Henan (1983)
Middle East
Nearly all of the countries in the Middle East use PAL. The first country in the Middle East to introduce color television was Lebanon in 1967. Jordan, Iraq, and Oman came next in the early 1970s. Saudi Arabia, the United Arab Emirates, Kuwait, Bahrain, and Qatar followed in the mid-1970s, but Israel and Cyprus continued to broadcast in black and white until the early 1980s. Israeli television even erased the color signals using a device called the mehikon.
Africa
The first color television service in Africa was introduced on the Tanzanian island of Zanzibar, in 1973, using PAL. In 1973 also, MBC of Mauritius broadcast the OCAMM Conference, in color, using SECAM. At the time, South Africa did not have a television service at all, owing to opposition from the apartheid regime, but in 1976, one was finally launched. Nigeria adopted PAL for color transmissions in 1974 in the Benue Plateau state in the north central region of the country, but countries such as Zimbabwe and Ghana continued with black and white until 1982 and 1985 respectively. The Sierra Leone Broadcasting Service (SLBS) started television broadcasting in 1963 as a cooperation between the SLBS and commercial interests; coverage was extended to all districts in 1978 when the service was also upgraded to color.
South America
Unlike most other countries in the Americas, which had adopted NTSC, Brazil began broadcasting in color using PAL-M, on 19 February 1972. Ecuador was the first South American country to broadcast in color using NTSC, on 5 November 1974. In 1978, Argentina started international broadcasting in color using PAL-B in connection with the country's hosting of the FIFA World Cup. However, domestic broadcasting remained black and white until 1 May 1980, when regular color broadcasting started using PAL-N, a variation of PAL-B specially suited for Argentina, Uruguay and Paraguay.
Also in April 1978, Chile officially adopted color television using the NTSC standard. This led to experimental broadcasts during the Viña del Mar Festival and the widespread use of color TV during the 1978 FIFA World Cup, followed by the charity event Teletón in December of the same year.
Some other countries in South America, including Bolivia, Paraguay, Peru, and Uruguay (1981), did not broadcast full-time color television until the early 1980s.
Cor Dillen, director and later CEO of the South American branch of Philips, was responsible for bringing color television to South America.
Color standards
There are three main analog broadcast television systems in use around the world: PAL (Phase Alternating Line), NTSC (National Television System Committee), and SECAM (Séquentiel Couleur à Mémoire, or Sequential Color with Memory).
The system used in the Americas and part of the Far East is NTSC. Most of Asia, Western Europe, Australia, Africa, and eastern South America use PAL (though Brazil uses the hybrid PAL-M system). Eastern Europe and France use SECAM. Generally, a device (such as a television) can only read or display video encoded to a standard that the device is designed to support; otherwise, the source must be converted (such as when European programs are broadcast in North America or vice versa).
This table illustrates the main differences, using the standard published figures for each system:

System | Lines | Fields per second | Color subcarrier
NTSC | 525 | 59.94 | 3.579545 MHz
PAL | 625 | 50 | 4.43361875 MHz
SECAM | 625 | 50 | 4.25000 / 4.40625 MHz [1]
[1] For SECAM the color sub-carrier alternates between 4.25000 MHz for the lines containing the Db color signal and 4.40625 MHz for the Dr signal (both are frequency modulated unlike both PAL and NTSC, which are phase modulated). The frequency of the sub-carrier is the only means that the decoder has of determining which color difference signal is actually being transmitted.
Digital television broadcasting standards, such as ATSC, DVB-T, DVB-T2, and ISDB, have superseded these analog transmission standards in many countries.
| Technology | Broadcasting | null |
163103 | https://en.wikipedia.org/wiki/Future | Future | The future is the time after the past and present. Its arrival is considered inevitable due to the existence of time and the laws of physics. Due to the apparent nature of reality and the unavoidability of the future, everything that currently exists and will exist can be categorized as either permanent, meaning that it will exist forever, or temporary, meaning that it will end. In the Occidental view, which uses a linear conception of time, the future is the portion of the projected timeline that is anticipated to occur. In special relativity, the future is considered absolute future, or the future light cone.
In the philosophy of time, presentism is the belief that only the present exists and the future and the past are unreal. Religions consider the future when they address issues such as karma, life after death, and eschatologies that study what the end of time and the end of the world will be. Religious figures such as prophets and diviners have claimed to see into the future.
Future studies, or futurology, is the science, art, and practice of postulating possible futures. Modern practitioners stress the importance of alternative and plural futures, rather than one monolithic future, and the limitations of prediction and probability, versus the creation of possible and preferable futures. Predeterminism is the belief that the past, present, and future have been already decided.
The concept of the future has been explored extensively in cultural production, including art movements and genres devoted entirely to its elucidation, such as the 20th-century movement futurism.
In physics
In physics, time is the fourth dimension. Physicists argue that spacetime can be understood as a sort of stretchy fabric that bends due to forces such as gravity. In classical physics the future is just a half of the timeline, which is the same for all observers. In special relativity the flow of time is relative to the observer's frame of reference. The faster an observer is traveling away from a reference object, the slower that object seems to move through time. Hence, the future is no longer an objective notion. A more modern notion is absolute future, or the future light cone. While a person can move backward or forward in the three spatial dimensions, many physicists argue that one can only move forward in time.
One of the outcomes of Special Relativity Theory is that a person can travel into the future (but never come back) by traveling at very high speeds. While this effect is negligible under ordinary conditions, space travel at very high speeds can change the flow of time considerably. As depicted in many science fiction stories and movies (e.g. Déjà Vu), a person traveling for even a short time at near light speed will return to an Earth that is many years in the future.
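The size of the effect follows from the standard time-dilation relation. As a worked illustration (the 0.99c figure is chosen purely for the example), where $\Delta t_{\text{traveler}}$ is the time elapsed for the traveler and $\Delta t_{\text{Earth}}$ the time elapsed on Earth:

$$\Delta t_{\text{Earth}} = \frac{\Delta t_{\text{traveler}}}{\sqrt{1 - v^{2}/c^{2}}}, \qquad v = 0.99c \;\Rightarrow\; \Delta t_{\text{Earth}} \approx 7.1\,\Delta t_{\text{traveler}},$$

so one year spent traveling at that speed corresponds to roughly seven years elapsed on Earth.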
Some physicists claim that by using a wormhole to connect two regions of spacetime a person could theoretically travel in time. Physicist Michio Kaku points out that to power this hypothetical time machine and "punch a hole into the fabric of space-time" would require the energy of a star. Another theory is that a person could travel in time with cosmic strings.
In philosophy
In the philosophy of time, presentism is the belief that only the present exists, and the future and past are unreal. Past and future "entities" are construed as logical constructions or fictions. The opposite of presentism is 'eternalism', which is the belief that things in the past and things yet to come exist eternally. Another view (not held by many philosophers) is sometimes called the 'growing block' theory of time—which postulates that the past and present exist, but the future does not.
Presentism is compatible with Galilean relativity, in which time is independent of space, but is probably incompatible with Lorentzian/Albert Einsteinian relativity in conjunction with certain other philosophical theses that many find uncontroversial. Saint Augustine proposed that the present is a knife edge between the past and the future and could not contain any extended period of time.
Contrary to Saint Augustine, some philosophers propose that conscious experience is extended in time. For instance, William James said that time is "...the short duration of which we are immediately and incessantly sensible." Augustine proposed that God is outside of time and present for all times, in eternity. Other early philosophers who were presentists include the Buddhists (in the tradition of Indian Buddhism). A leading scholar from the modern era on Buddhist philosophy is Stcherbatsky, who has written extensively on Buddhist presentism.
In psychology
Human behavior is known to encompass anticipation of the future. Anticipatory behavior can be the result of a psychological outlook toward the future, for example optimism, pessimism, and hope.
Optimism is an outlook on life such that one maintains a view of the world as a positive place. People would say that optimism is seeing the glass "half full" of water as opposed to half empty. It is the philosophical opposite of pessimism. Optimists generally believe that people and events are inherently good, so that most situations work out in the end for the best. Hope is a belief in a positive outcome related to events and circumstances in one's life. Hope implies a certain amount of despair, wanting, wishing, suffering or perseverance; i.e., believing that a better or positive outcome is possible even when there is some evidence to the contrary. "Hopefulness" is somewhat different from optimism in that hope is an emotional state, whereas some theories point to optimism as a conclusion reached through a deliberate thought pattern that leads to positive personal attitudes and, by extension, is linked to more philanthropic behaviours.
Pessimism, as stated before, is the opposite of optimism. It is the tendency to see, anticipate, or emphasize only bad or undesirable outcomes, results, or problems. The word derives from the Latin pessimus, meaning "worst" (the superlative of malus, "bad"), and has a link to misanthropic belief systems.
In religion
Religions consider the future when they address issues such as karma, life after death, and eschatologies which consider what the end of time and the end of the world will be like. In religion, major prophets are said to have the power to change the future. Common religious figures have claimed to see into the future, such as minor prophets and diviners.
The term "afterlife" refers to the continuation of existence of the soul, spirit or mind of a human (or animal) after physical death, typically in a spiritual or ghostlike afterworld. Deceased persons are usually believed to go to a specific region or plane of existence in this afterworld, often depending on the rightness of their actions during life.
Some believe the afterlife includes some form of preparation for the soul to transfer to another body (reincarnation). The major views on the afterlife derive from religion, esotericism and metaphysics. There are those who are skeptical of the existence of the afterlife, or who believe that it is absolutely impossible, such as the materialist-reductionists, who hold that the topic is supernatural and therefore either does not really exist or is unknowable. In metaphysical models, theists generally believe some sort of afterlife awaits people when they die. Atheists generally do not believe in a life after death. Members of some generally non-theistic religions, such as Buddhism, tend to believe in an afterlife such as reincarnation, but without reference to God.
Agnostics generally hold the position that like the existence of God, the existence of supernatural phenomena, such as souls or life after death, is unverifiable and therefore unknowable. Many religions, whether they believe in the soul's existence in another world like Christianity, Islam and many pagan belief systems, or in reincarnation like many forms of Hinduism and Buddhism, believe that one's status in the afterlife is a reward or punishment for their conduct during life, with the exception of Calvinistic variants of Protestant Christianity, which believe one's status in the afterlife is a gift from God and cannot be earned during life.
Eschatology is a part of theology and philosophy concerned with the final events in human history, or the ultimate destiny of humanity, commonly referred to as the end of the world. While in mysticism the phrase refers metaphorically to the end of ordinary reality and reunion with the Divine, in many traditional religions it is taught as an actual future event prophesied in sacred texts or folklore. More broadly, eschatology may encompass related concepts such as the Messiah or Messianic Age, the end time, and the end of days.
In grammar
In grammar, actions are classified according to one of the following twelve verb tenses: past (past, past continuous, past perfect, or past perfect continuous), present (present, present continuous, present perfect, or present perfect continuous), or future (future, future continuous, future perfect, or future perfect continuous). The future tense refers to actions that have not yet happened, but which are due, expected, or may occur in the future. For example, in the sentence, "She will walk home," the verb "will walk" is in the future tense because it refers to an action that is going to, or may, happen at a point in time beyond the present.
Verbs in the future continuous tense indicate actions that will happen beyond the present and will continue for a period of time. In the sentence, "She will be walking home," the verb phrase "will be walking" is in the future continuous tense because the action described is not happening now, but will happen sometime afterwards and is expected to continue happening for some time. Verbs in the future perfect tense indicate actions that will be completed at a particular point in the future. For example, the verb phrase, "will have walked," in the sentence, "She will have walked home," is in the future perfect tense because it refers to an action that is completed as of a specific time in the future. Finally, verbs in the future perfect continuous tense combine the features of the perfect and continuous tenses, describing the future status of actions that have been happening continually from now or the past through to a particular time in the future. In the sentence, "She will have been walking home," the verb phrase "will have been walking" is in the future perfect continuous tense because it refers to an action that the speaker anticipates will have been continuing up to a particular time in the future.
Another way to think of the various future tenses is that actions described by the future tense will be completed at an unspecified time in the future, actions described by the future continuous tense will keep happening in the future, actions described by the future perfect tense will be completed at a specific time in the future, and actions described by the future perfect continuous tense are expected to be continuing as of a specific time in the future.
Linear and cyclic culture
The linear view of time (common in Western thought) draws a stronger distinction between past and future than does the more common cyclic time of cultures such as India, where past and future can coalesce much more readily.
Futures studies
Futures studies or futurology is the science, art, and practice of postulating possible, probable, and preferable futures and the worldviews and myths that underlie them. Futures studies seeks to understand what is likely to continue, what is likely to change, and what is novel. Part of the discipline thus seeks to develop a systematic and pattern-based understanding of past and present and to determine the likelihood of future events and trends. A key part of this process is understanding the potential future impact of decisions made by individuals, organizations, and governments. Leaders use the results of such work to assist in decision-making.
Futures is an interdisciplinary field, studying yesterday's and today's changes and aggregating and analyzing both lay and professional strategies and opinions with respect to tomorrow. It includes analyzing the sources, patterns, and causes of change and stability in the attempt to develop foresight and to map possible futures. Modern practitioners stress the importance of alternative and plural futures, rather than one monolithic future, and the limitations of prediction and probability, versus the creation of possible and preferable futures.
Three factors usually distinguish futures studies from the research conducted by other disciplines (although all disciplines overlap, to differing degrees). First, futures studies often examines not only possible but also probable, preferable, and "wild card" futures. Second, futures studies typically attempts to gain a holistic or systemic view based on insights from a range of different disciplines. Third, futures studies challenges and unpacks the assumptions behind dominant and contending views of the future. The future thus is not empty but fraught with hidden assumptions.
Futures studies do not generally include the work of economists who forecast movements of interest rates over the next business cycle, or of managers or investors with short-term time horizons. Most strategic planning, which develops operational plans for preferred futures with time horizons of one to three years, is also not considered futures. But plans and strategies with longer time horizons that specifically attempt to anticipate and be robust to possible future events, are part of a major subdiscipline of futures studies called strategic foresight.
The futures field also excludes those who make future predictions through professed supernatural means. At the same time, it does seek to understand the models such groups use and the interpretations they give to these models.
Forecasting
Forecasting is the process of estimating outcomes in uncontrolled situations. Forecasting is applied in many areas, such as weather forecasting, earthquake prediction, transport planning, and labour market planning. Due to the element of the unknown, risk and uncertainty are central to forecasting.
Statistically based forecasting employs time series with cross-sectional or longitudinal data. Econometric forecasting methods use the assumption that it is possible to identify the underlying factors that might influence the variable that is being forecast. If the causes are understood, projections of the influencing variables can be made and used in the forecast. Judgmental forecasting methods incorporate intuitive judgments, opinions, and probability estimates, as in the case of the Delphi method, scenario building, and simulations.
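To make the statistical (time-series) approach above concrete, the following Python sketch applies simple exponential smoothing to a short invented series and returns a one-step-ahead forecast; the data and smoothing constant are hypothetical illustrations, not drawn from any real study.

```python
# Minimal sketch: simple exponential smoothing as a one-step-ahead forecast.
# The series and the smoothing constant alpha are hypothetical illustrations.

def exponential_smoothing_forecast(series, alpha=0.3):
    """Return the one-step-ahead forecast after smoothing the whole series."""
    level = series[0]                      # initialise with the first observation
    for observation in series[1:]:
        level = alpha * observation + (1 - alpha) * level
    return level                           # forecast for the next period

monthly_demand = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]
print(round(exponential_smoothing_forecast(monthly_demand), 1))
```

Judgmental methods such as the Delphi technique would instead aggregate expert opinion rather than extrapolate from past observations.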
Prediction is similar to forecasting but is used more generally, for instance, to also include baseless claims on the future. Organized efforts to predict the future began with practices like astrology, haruspicy, and augury. These are all considered to be pseudoscience today, evolving from the human desire to know the future in advance.
Modern efforts such as futures studies attempt to predict technological and societal trends, while more ancient practices, such as weather forecasting, have benefited from scientific and causal modelling. Despite the development of cognitive instruments for the comprehension of the future, the stochastic and chaotic nature of many natural and social processes has made precise forecasting of the future elusive.
In art and culture
Futurism
Futurism as an art movement originated in Italy at the beginning of the 20th century. It developed largely in Italy and in Russia, although it also had adherents in other countries—in England and Portugal for example. The Futurists explored every medium of art, including painting, sculpture, poetry, theatre, music, architecture, and even gastronomy. The Futurists had a passionate loathing of ideas from the past, especially political and artistic traditions. They also espoused a love of speed, technology, and violence. Futurists dubbed the love of the past passéisme. The car, the plane, and the industrial town were all legendary for the Futurists because they represented the technological triumph of people over nature. The Futurist Manifesto of 1909 declared: "We will glorify war—the world's only hygiene—militarism, patriotism, the destructive gesture of freedom-bringers, beautiful ideas worth dying for, and scorn for woman." Though it owed much of its character and some of its ideas to radical political movements, it had little involvement in politics until the autumn of 1913.
Futurism in Classical Music arose during this same time period. Closely identified with the central Italian Futurist movement were brother composers Luigi Russolo (1885–1947) and Antonio Russolo (1877–1942), who used instruments known as intonarumori—essentially sound boxes used to create music out of noise. Luigi Russolo's futurist manifesto, "The Art of Noises", is considered one of the most important and influential texts in 20th-century musical aesthetics. Other examples of futurist music include Arthur Honegger's "Pacific 231" (1923), which imitates the sound of a steam locomotive, Prokofiev's "The Steel Step" (1926), Alexander Mosolov's "Iron Foundry" (1927), and the experiments of Edgard Varèse.
Literary futurism made its debut with F.T. Marinetti's Manifesto of Futurism (1909). Futurist poetry used unexpected combinations of images and hyper-conciseness (not to be confused with the actual length of the poem). Futurist theater works have scenes a few sentences long, use nonsensical humor, and try to discredit the deep-rooted dramatic traditions with parody. Longer literature forms, such as novels, had no place in the Futurist aesthetic, which had an obsession with speed and compression.
Futurism expanded to encompass other artistic domains and ultimately included painting, sculpture, ceramics, graphic design, industrial design, interior design, theatre design, textiles, drama, literature, music and architecture. In architecture, it featured a distinctive thrust towards rationalism and modernism through the use of advanced building materials. The ideals of futurism remain as significant components of modern Western culture; the emphasis on youth, speed, power and technology finding expression in much of modern commercial cinema and commercial culture. Futurism has produced several reactions, including the 1980s-era literary genre of cyberpunk—which often treated technology with a critical eye.
Science fiction
More generally, one can regard science fiction as a broad genre of fiction that often involves speculations based on current or future science or technology. Science fiction is found in books, art, television, films, games, theater, and other media. Science fiction differs from fantasy in that, within the context of the story, its imaginary elements are largely possible within scientifically established or scientifically postulated laws of nature (though some elements in a story might still be pure imaginative speculation). Settings may include the future, or alternative time-lines, and stories may depict new or speculative scientific principles (such as time travel or psionics), or new technology (such as nanotechnology, faster-than-light travel or robots). Exploring the consequences of such differences is the traditional purpose of science fiction, making it a "literature of ideas".
Some science fiction authors construct a postulated history of the future called a "future history" that provides a common background for their fiction. Sometimes authors publish a timeline of events in their history, while other times the reader can reconstruct the order of the stories from information in the books. Some published works constitute "future history" in a more literal sense—i.e., stories or whole books written in the style of a history book but describing events in the future. Examples include H.G. Wells' The Shape of Things to Come (1933)—written in the form of a history book published in the year 2106 and in the manner of a real history book with numerous footnotes and references to the works of (mostly fictitious) prominent historians of the 20th and 21st centuries.
| Technology | Timekeeping | null |
163106 | https://en.wikipedia.org/wiki/Ethane | Ethane | Ethane is a naturally occurring organic chemical compound with the chemical formula C2H6. At standard temperature and pressure, ethane is a colorless, odorless gas. Like many hydrocarbons, ethane is isolated on an industrial scale from natural gas and as a petrochemical by-product of petroleum refining. Its chief use is as feedstock for ethylene production. The ethyl group is formally, although rarely practically, derived from ethane.
History
Ethane was first synthesised in 1834 by Michael Faraday, applying electrolysis of a potassium acetate solution. He mistook the hydrocarbon product of this reaction for methane and did not investigate it further. The process is now called Kolbe electrolysis:
CH3COO− → CH3• + CO2 + e−
CH3• + •CH3 → C2H6
During the period 1847–1849, in an effort to vindicate the radical theory of organic chemistry, Hermann Kolbe and Edward Frankland produced ethane by the reductions of propionitrile (ethyl cyanide) and ethyl iodide with potassium metal, and, as did Faraday, by the electrolysis of aqueous acetates. They mistook the product of these reactions for the methyl radical (), of which ethane () is a dimer.
This error was corrected in 1864 by Carl Schorlemmer, who showed that the product of all these reactions was in fact ethane. Ethane was discovered dissolved in Pennsylvanian light crude oil by Edmund Ronalds in 1864.
Properties
At standard temperature and pressure, ethane is a colorless, odorless gas. It has a boiling point of about −89 °C and a melting point of about −183 °C. Solid ethane exists in several modifications. On cooling under normal pressure, the first modification to appear is a plastic crystal, crystallizing in the cubic system. In this form, the positions of the hydrogen atoms are not fixed; the molecules may rotate freely around the long axis. Further cooling changes this phase to monoclinic metastable ethane II (space group P 21/n). Ethane is only very sparingly soluble in water.
The bond parameters of ethane have been measured to high precision by microwave spectroscopy and electron diffraction: rC−C = 1.528(3) Å, rC−H = 1.088(5) Å, and ∠CCH = 111.6(5)° by microwave and rC−C = 1.524(3) Å, rC−H = 1.089(5) Å, and ∠CCH = 111.9(5)° by electron diffraction (the numbers in parentheses represents the uncertainties in the final digits).
Rotating a molecular substructure about a twistable bond usually requires energy. The minimum energy to produce a 360° bond rotation is called the rotational barrier.
Ethane gives a classic, simple example of such a rotational barrier, sometimes called the "ethane barrier". Some of the earliest experimental evidence of this barrier was obtained by modelling the entropy of ethane. The three hydrogens at each end are free to pinwheel about the central carbon–carbon bond when provided with sufficient energy to overcome the barrier. The physical origin of the barrier is still not completely settled, although the overlap (exchange) repulsion between the hydrogen atoms on opposing ends of the molecule is perhaps the strongest candidate, with the stabilizing effect of hyperconjugation on the staggered conformation contributing to the phenomenon. Theoretical methods that use an appropriate starting point (orthogonal orbitals) find that hyperconjugation is the most important factor in the origin of the ethane rotation barrier.
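The barrier discussed above is often summarised with a one-term torsional potential; the expression below is a standard textbook approximation rather than something stated in this article, and the barrier height of roughly 12 kJ/mol is the commonly quoted value for ethane.

```latex
% One-term torsional potential commonly used to approximate the ethane barrier.
% \phi is the H-C-C-H dihedral angle (\phi = 0 eclipsed, \phi = 60^{\circ} staggered);
% V_3 \approx 12~\mathrm{kJ\,mol^{-1}} is the approximate barrier height.
V(\phi) = \frac{V_3}{2}\,\bigl(1 + \cos 3\phi\bigr)
```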
As far back as 1890–1891, chemists suggested that ethane molecules preferred the staggered conformation with the two ends of the molecule askew from each other.
Atmospheric and extraterrestrial
Ethane occurs as a trace gas in the Earth's atmosphere, currently having a concentration at sea level of 0.5 ppb. Global ethane quantities have varied over time, likely due to flaring at natural gas fields. Global ethane emission rates declined from 1984 to 2010, though increased shale gas production at the Bakken Formation in the U.S. has arrested the decline by half.
Although ethane is a greenhouse gas, it is much less abundant than methane, has a lifetime of only a few months compared to over a decade, and is also less efficient at absorbing radiation relative to mass. In fact, ethane's global warming potential largely results from its conversion in the atmosphere to methane. It has been detected as a trace component in the atmospheres of all four giant planets, and in the atmosphere of Saturn's moon Titan.
Atmospheric ethane results from the Sun's photochemical action on methane gas, also present in these atmospheres: ultraviolet photons of shorter wavelengths than 160 nm can photo-dissociate the methane molecule into a methyl radical and a hydrogen atom. When two methyl radicals recombine, the result is ethane:
CH4 → CH3• + •H
CH3• + •CH3 → C2H6
In Earth's atmosphere, hydroxyl radicals convert ethane to methanol vapor with a half-life of around three months.
It is suspected that ethane produced in this fashion on Titan rains back onto the moon's surface, and over time has accumulated into hydrocarbon seas covering much of the moon's polar regions. In mid-2005, the Cassini orbiter discovered Ontario Lacus in Titan's south polar regions. Further analysis of infrared spectroscopic data presented in July 2008 provided additional evidence for the presence of liquid ethane in Ontario Lacus. Several significantly larger hydrocarbon lakes, Ligeia Mare and Kraken Mare being the two largest, were discovered near Titan's north pole using radar data gathered by Cassini. These lakes are believed to be filled primarily by a mixture of liquid ethane and methane.
In 1996, ethane was detected in Comet Hyakutake, and it has since been detected in some other comets. The existence of ethane in these distant solar system bodies may implicate ethane as a primordial component of the solar nebula from which the sun and planets are believed to have formed.
In 2006, Dale Cruikshank of NASA/Ames Research Center (a New Horizons co-investigator) and his colleagues announced the spectroscopic discovery of ethane on Pluto's surface.
Chemistry
The chemistry of ethane involves chiefly free-radical reactions. Ethane can react with the halogens, especially chlorine and bromine, by free-radical halogenation. This reaction proceeds through the propagation of the ethyl radical:
Cl2 → 2 Cl•
Cl• + C2H6 → C2H5• + HCl
C2H5• + Cl2 → C2H5Cl + Cl•
The combustion of ethane releases 1559.7 kJ/mol, or 51.9 kJ/g, of heat, and produces carbon dioxide and water according to the chemical equation:
2 C2H6 + 7 O2 → 4 CO2 + 6 H2O + 3120 kJ
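As a quick arithmetic check on the per-mole and per-gram figures quoted above, the short Python sketch below converts the molar heat of combustion to a per-gram value using the molar mass of ethane (about 30.07 g/mol, computed from standard atomic masses):

```python
# Convert the molar heat of combustion of ethane to a per-gram value.
# The 1559.7 kJ/mol figure is the one quoted in the text above.
MOLAR_HEAT_KJ_PER_MOL = 1559.7
MOLAR_MASS_G_PER_MOL = 2 * 12.011 + 6 * 1.008   # C2H6, approximately 30.07 g/mol

heat_per_gram = MOLAR_HEAT_KJ_PER_MOL / MOLAR_MASS_G_PER_MOL
print(f"{heat_per_gram:.1f} kJ/g")              # about 51.9 kJ/g, matching the text
```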
Combustion may also occur without an excess of oxygen, yielding carbon monoxide, acetaldehyde, methane, methanol, and ethanol. At higher temperatures, ethylene becomes a significant product.
Such oxidative dehydrogenation reactions are relevant to the production of ethylene.
Production
After methane, ethane is the second-largest component of natural gas. Natural gas from different gas fields varies in ethane content from less than 1% to more than 6% by volume. Prior to the 1960s, ethane and larger molecules were typically not separated from the methane component of natural gas, but simply burnt along with the methane as a fuel. Today, ethane is an important petrochemical feedstock and is separated from the other components of natural gas in most well-developed gas fields. Ethane can also be separated from petroleum gas, a mixture of gaseous hydrocarbons produced as a byproduct of petroleum refining.
Ethane is most efficiently separated from methane by liquefying it at cryogenic temperatures. Various refrigeration strategies exist: the most economical process presently in wide use employs a turboexpander, and can recover more than 90% of the ethane in natural gas. In this process, chilled gas is expanded through a turbine, which reduces its temperature sharply. At this low temperature, gaseous methane can be separated from the liquefied ethane and heavier hydrocarbons by distillation. Further distillation then separates ethane from the propane and heavier hydrocarbons.
Usage
The chief use of ethane is the production of ethylene (ethene) by steam cracking. Steam cracking of ethane is fairly selective for ethylene, while the steam cracking of heavier hydrocarbons yields a product mixture poorer in ethylene and richer in heavier alkenes (olefins), such as propene (propylene) and butadiene, and in aromatic hydrocarbons.
Ethane has been investigated as a feedstock for other commodity chemicals. Oxidative chlorination of ethane has long appeared to be a potentially more economical route to vinyl chloride than ethylene chlorination. Many patents exist on this theme, but poor selectivity for vinyl chloride and corrosive reaction conditions have discouraged the commercialization of most of them. Presently, INEOS operates a 1000 t/a (tonnes per annum) ethane-to-vinyl chloride pilot plant at Wilhelmshaven in Germany.
SABIC operates a 34,000 t/a plant at Yanbu to produce acetic acid by ethane oxidation. The economic viability of this process may rely on the low cost of ethane near Saudi oil fields, and it may not be competitive with methanol carbonylation elsewhere in the world.
Ethane can be used as a refrigerant in cryogenic refrigeration systems.
In the laboratory
On a much smaller scale, in scientific research, liquid ethane is used to vitrify water-rich samples for cryo-electron microscopy. A thin film of water quickly immersed in liquid ethane at −150 °C or colder freezes too quickly for water to crystallize. Slower freezing methods can generate cubic ice crystals, which can disrupt soft structures by damaging the samples and reduce image quality by scattering the electron beam before it can reach the detector.
Health and safety
At room temperature, ethane is an extremely flammable gas. When mixed with air at 3.0%–12.5% by volume, it forms an explosive mixture.
Ethane is not a carcinogen.
| Physical sciences | Hydrocarbons | null |
163180 | https://en.wikipedia.org/wiki/Randomized%20controlled%20trial | Randomized controlled trial | A randomized controlled trial (or randomized control trial; RCT) is a form of scientific experiment used to control factors not under direct experimental control. Examples of RCTs are clinical trials that compare the effects of drugs, surgical techniques, medical devices, diagnostic procedures, diets or other medical treatments.
Participants who enroll in RCTs differ from one another in known and unknown ways that can influence study outcomes, and yet cannot be directly controlled. By randomly allocating participants among compared treatments, an RCT enables statistical control over these influences. Provided it is designed well, conducted properly, and enrolls enough participants, an RCT may achieve sufficient control over these confounding factors to deliver a useful comparison of the treatments studied.
Definition and examples
An RCT in clinical research typically compares a proposed new treatment against an existing standard of care; these are then termed the 'experimental' and 'control' treatments, respectively. When no such generally accepted treatment is available, a placebo may be used in the control group so that participants are blinded, or not given information, about their treatment allocations. This blinding principle is ideally also extended as much as possible to other parties including researchers, technicians, data analysts, and evaluators. Effective blinding experimentally isolates the physiological effects of treatments from various psychological sources of bias.
The randomness in the assignment of participants to treatments reduces selection bias and allocation bias, balancing both known and unknown prognostic factors, in the assignment of treatments. Blinding reduces other forms of experimenter and subject biases.
A well-blinded RCT is considered the gold standard for clinical trials. Blinded RCTs are commonly used to test the efficacy of medical interventions and may additionally provide information about adverse effects, such as drug reactions. A randomized controlled trial can provide compelling evidence that the study treatment causes an effect on human health.
The terms "RCT" and "randomized trial" are sometimes used synonymously, but the latter term omits mention of controls and can therefore describe studies that compare multiple treatment groups with each other in the absence of a control group. Similarly, the initialism is sometimes expanded as "randomized clinical trial" or "randomized comparative trial", leading to ambiguity in the scientific literature. Not all RCTs are randomized controlled trials (and some of them could never be, as in cases where controls would be impractical or unethical to use). The term randomized controlled clinical trial is an alternative term used in clinical research; however, RCTs are also employed in other research areas, including many of the social sciences.
History
The first reported clinical trial was conducted by James Lind in 1747 to identify a treatment for scurvy. The first blind experiment was conducted by the French Royal Commission on Animal Magnetism in 1784 to investigate the claims of mesmerism. An early essay advocating the blinding of researchers came from Claude Bernard in the latter half of the 19th century. Bernard recommended that the observer of an experiment should not have knowledge of the hypothesis being tested. This suggestion contrasted starkly with the prevalent Enlightenment-era attitude that scientific observation can only be objectively valid when undertaken by a well-educated, informed scientist. The first study recorded to have a blinded researcher was published in 1907 by W. H. R. Rivers and H. N. Webber to investigate the effects of caffeine.
Randomized experiments first appeared in psychology, where they were introduced by Charles Sanders Peirce and Joseph Jastrow in the 1880s, and in education. The earliest experiments comparing treatment and control groups were published by Robert Woodworth and Edward Thorndike in 1901, and by John E. Coover and Frank Angell in 1907.
In the early 20th century, randomized experiments appeared in agriculture, due to Jerzy Neyman and Ronald A. Fisher. Fisher's experimental research and his writings popularized randomized experiments.
The first published randomized controlled trial in medicine appeared in the 1948 paper entitled "Streptomycin treatment of pulmonary tuberculosis", which described a Medical Research Council investigation. One of the authors of that paper was Austin Bradford Hill, who is credited as having conceived the modern RCT.
Trial design was further influenced by the large-scale ISIS trials on heart attack treatments that were conducted in the 1980s.
By the late 20th century, RCTs were recognized as the standard method for "rational therapeutics" in medicine. As of 2004, more than 150,000 RCTs were in the Cochrane Library. To improve the reporting of RCTs in the medical literature, an international group of scientists and editors published Consolidated Standards of Reporting Trials (CONSORT) Statements in 1996, 2001 and 2010, and these have become widely accepted. Randomization is the process of assigning trial subjects to treatment or control groups using an element of chance to determine the assignments in order to reduce the bias.
Ethics
Although the principle of clinical equipoise ("genuine uncertainty within the expert medical community... about the preferred treatment") common to clinical trials has been applied to RCTs, the ethics of RCTs have special considerations. For one, it has been argued that equipoise itself is insufficient to justify RCTs. For another, "collective equipoise" can conflict with a lack of personal equipoise (e.g., a personal belief that an intervention is effective). Finally, Zelen's design, which has been used for some RCTs, randomizes subjects before they provide informed consent, which may be ethical for RCTs of screening and selected therapies, but is likely unethical "for most therapeutic trials."
Although subjects almost always provide informed consent for their participation in an RCT, studies since 1982 have documented that RCT subjects may believe that they are certain to receive treatment that is best for them personally; that is, they do not understand the difference between research and treatment. Further research is necessary to determine the prevalence of and ways to address this "therapeutic misconception".
The RCT method variations may also create cultural effects that have not been well understood. For example, patients with terminal illness may join trials in the hope of being cured, even when treatments are unlikely to be successful.
Trial registration
In 2004, the International Committee of Medical Journal Editors (ICMJE) announced that all trials starting enrolment after July 1, 2005, must be registered prior to consideration for publication in one of the 12 member journals of the committee. However, trial registration may still occur late or not at all.
Medical journals have been slow in adopting policies requiring mandatory clinical trial registration as a prerequisite for publication.
Classifications
By study design
One way to classify RCTs is by study design. From most to least common in the healthcare literature, the major categories of RCT study designs are:
Parallel-group – each participant is randomly assigned to a group, and all the participants in the group receive (or do not receive) an intervention.
Crossover – over time, each participant receives (or does not receive) an intervention in a random sequence.
Cluster – pre-existing groups of participants (e.g., villages, schools) are randomly selected to receive (or not receive) an intervention.
Factorial – each participant is randomly assigned to a group that receives a particular combination of interventions or non-interventions (e.g., group 1 receives vitamin X and vitamin Y, group 2 receives vitamin X and placebo Y, group 3 receives placebo X and vitamin Y, and group 4 receives placebo X and placebo Y).
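For the factorial design in the last item, the treatment groups are simply all combinations of the factors; the short Python sketch below enumerates the four groups of the hypothetical 2×2 vitamin example given above.

```python
# Enumerate the treatment groups of a 2x2 factorial RCT.
# "vitamin X"/"placebo X" and "vitamin Y"/"placebo Y" follow the example above.
from itertools import product

factor_x = ["vitamin X", "placebo X"]
factor_y = ["vitamin Y", "placebo Y"]

for group_number, combination in enumerate(product(factor_x, factor_y), start=1):
    print(f"group {group_number}: " + " + ".join(combination))
```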
An analysis of the 616 RCTs indexed in PubMed during December 2006 found that 78% were parallel-group trials, 16% were crossover, 2% were split-body, 2% were cluster, and 2% were factorial.
By outcome of interest (efficacy vs. effectiveness)
RCTs can be classified as "explanatory" or "pragmatic." Explanatory RCTs test efficacy in a research setting with highly selected participants and under highly controlled conditions. In contrast, pragmatic RCTs (pRCTs) test effectiveness in everyday practice with relatively unselected participants and under flexible conditions; in this way, pragmatic RCTs can "inform decisions about practice."
By hypothesis (superiority vs. noninferiority vs. equivalence)
Another classification of RCTs categorizes them as "superiority trials", "noninferiority trials", and "equivalence trials", which differ in methodology and reporting. Most RCTs are superiority trials, in which one intervention is hypothesized to be superior to another in a statistically significant way. Some RCTs are noninferiority trials "to determine whether a new treatment is no worse than a reference treatment." Other RCTs are equivalence trials in which the hypothesis is that two interventions are indistinguishable from each other.
Randomization
The advantages of proper randomization in RCTs include:
"It eliminates bias in treatment assignment," specifically selection bias and confounding.
"It facilitates blinding (masking) of the identity of treatments from investigators, participants, and assessors."
"It permits the use of probability theory to express the likelihood that any difference in outcome between treatment groups merely indicates chance."
There are two processes involved in randomizing patients to different interventions. First is choosing a randomization procedure to generate an unpredictable sequence of allocations; this may be a simple random assignment of patients to any of the groups at equal probabilities, may be "restricted", or may be "adaptive." A second and more practical issue is allocation concealment, which refers to the stringent precautions taken to ensure that the group assignment of patients is not revealed prior to definitively allocating them to their respective groups. Non-random "systematic" methods of group assignment, such as alternating subjects between one group and the other, can cause "limitless contamination possibilities" and can cause a breach of allocation concealment.
However, empirical evidence that adequate randomization changes outcomes relative to inadequate randomization has been difficult to find.
Procedures
The treatment allocation is the desired proportion of patients in each treatment arm.
An ideal randomization procedure would achieve the following goals:
Maximize statistical power, especially in subgroup analyses. Generally, equal group sizes maximize statistical power; however, unequal group sizes may be more powerful for some analyses (e.g., multiple comparisons of placebo versus several doses using Dunnett's procedure), and are sometimes desired for non-analytic reasons (e.g., patients may be more motivated to enroll if there is a higher chance of getting the test treatment, or regulatory agencies may require a minimum number of patients exposed to treatment).
Minimize selection bias. This may occur if investigators can consciously or unconsciously preferentially enroll patients between treatment arms. A good randomization procedure will be unpredictable so that investigators cannot guess the next subject's group assignment based on prior treatment assignments. The risk of selection bias is highest when previous treatment assignments are known (as in unblinded studies) or can be guessed (perhaps if a drug has distinctive side effects).
Minimize allocation bias (or confounding). This may occur when covariates that affect the outcome are not equally distributed between treatment groups, and the treatment effect is confounded with the effect of the covariates (i.e., an "accidental bias"). If the randomization procedure causes an imbalance in covariates related to the outcome across groups, estimates of effect may be biased if not adjusted for the covariates (which may be unmeasured and therefore impossible to adjust for).
However, no single randomization procedure meets those goals in every circumstance, so researchers must select a procedure for a given study based on its advantages and disadvantages.
Simple
This is a commonly used and intuitive procedure, similar to "repeated fair coin-tossing." Also known as "complete" or "unrestricted" randomization, it is robust against both selection and accidental biases. However, its main drawback is the possibility of imbalanced group sizes in small RCTs. It is therefore recommended only for RCTs with over 200 subjects.
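A minimal Python sketch of this "coin-tossing" procedure, and of the group-size imbalance it can produce in a small trial, might look like the following; the trial size and random seed are arbitrary.

```python
# Simple (unrestricted) randomization: each participant is assigned by an
# independent fair "coin toss", so group sizes are not guaranteed to balance.
import random

random.seed(42)                      # arbitrary seed, for reproducibility only
n_participants = 20                  # deliberately small to show possible imbalance

assignments = [random.choice(["treatment", "control"]) for _ in range(n_participants)]
print(assignments.count("treatment"), "treatment vs",
      assignments.count("control"), "control")
```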
Restricted
To balance group sizes in smaller RCTs, some form of "restricted" randomization is recommended. The major types of restricted randomization used in RCTs are:
Permuted-block randomization or blocked randomization: a "block size" and "allocation ratio" (number of subjects in one group versus the other group) are specified, and subjects are allocated randomly within each block. For example, a block size of 6 and an allocation ratio of 2:1 would lead to random assignment of 4 subjects to one group and 2 to the other (a minimal sketch of this blocked assignment appears after this list). This type of randomization can be combined with "stratified randomization", for example by center in a multicenter trial, to "ensure good balance of participant characteristics in each group." A special case of permuted-block randomization is random allocation, in which the entire sample is treated as one block. The major disadvantage of permuted-block randomization is that even if the block sizes are large and randomly varied, the procedure can lead to selection bias. Another disadvantage is that "proper" analysis of data from permuted-block-randomized RCTs requires stratification by blocks.
Adaptive biased-coin randomization methods (of which urn randomization is the most widely known type): In these relatively uncommon methods, the probability of being assigned to a group decreases if the group is overrepresented and increases if the group is underrepresented. The methods are thought to be less affected by selection bias than permuted-block randomization.
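A minimal Python sketch of the permuted-block scheme described above, using the block size of 6 and the 2:1 allocation ratio from that example, might look like the following; the number of blocks and the random seed are arbitrary.

```python
# Permuted-block randomization: within each block of 6, exactly 4 participants
# go to the treatment group and 2 to the control group (a 2:1 allocation ratio),
# so group sizes stay close to the target ratio throughout enrolment.
import random

random.seed(7)   # arbitrary seed, for reproducibility only

def permuted_block_sequence(n_blocks, block=("treatment",) * 4 + ("control",) * 2):
    sequence = []
    for _ in range(n_blocks):
        shuffled = list(block)
        random.shuffle(shuffled)     # random order within the block
        sequence.extend(shuffled)
    return sequence

allocations = permuted_block_sequence(n_blocks=3)   # 18 participants in total
print(allocations)
```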
Adaptive
At least two types of "adaptive" randomization procedures have been used in RCTs, but much less frequently than simple or restricted randomization:
Covariate-adaptive randomization, of which one type is minimization: The probability of being assigned to a group varies in order to minimize "covariate imbalance." Minimization is reported to have "supporters and detractors"; because only the first subject's group assignment is truly chosen at random, the method does not necessarily eliminate bias on unknown factors.
Response-adaptive randomization, also known as outcome-adaptive randomization: The probability of being assigned to a group increases if the responses of the prior patients in the group were favorable. Although arguments have been made that this approach is more ethical than other types of randomization when the probability that a treatment is effective or ineffective increases during the course of an RCT, ethicists have not yet studied the approach in detail.
Allocation concealment
"Allocation concealment" (defined as "the procedure for protecting the randomization process so that the treatment to be allocated is not known before the patient is entered into the study") is important in RCTs. In practice, clinical investigators in RCTs often find it difficult to maintain impartiality. Stories abound of investigators holding up sealed envelopes to lights or ransacking offices to determine group assignments in order to dictate the assignment of their next patient. Such practices introduce selection bias and confounders (both of which should be minimized by randomization), possibly distorting the results of the study. Adequate allocation concealment should defeat patients and investigators from discovering treatment allocation once a study is underway and after the study has concluded. Treatment related side-effects or adverse events may be specific enough to reveal allocation to investigators or patients thereby introducing bias or influencing any subjective parameters collected by investigators or requested from subjects.
Some standard methods of ensuring allocation concealment include sequentially numbered, opaque, sealed envelopes (SNOSE); sequentially numbered containers; pharmacy controlled randomization; and central randomization. It is recommended that allocation concealment methods be included in an RCT's protocol, and that the allocation concealment methods should be reported in detail in a publication of an RCT's results; however, a 2005 study determined that most RCTs have unclear allocation concealment in their protocols, in their publications, or both. On the other hand, a 2008 study of 146 meta-analyses concluded that the results of RCTs with inadequate or unclear allocation concealment tended to be biased toward beneficial effects only if the RCTs' outcomes were subjective as opposed to objective.
Sample size
The number of treatment units (subjects or groups of subjects) assigned to control and treatment groups affects an RCT's reliability. If the effect of the treatment is small, the number of treatment units in either group may be insufficient for rejecting the null hypothesis in the respective statistical test. The failure to reject the null hypothesis would imply that the treatment shows no statistically significant effect on the treated in a given test. But as the sample size increases, the same RCT may be able to demonstrate a significant effect of the treatment, even if this effect is small.
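To make the sample-size point concrete, the sketch below applies the standard normal-approximation formula for comparing two proportions; the assumed event rates, significance level, and power are illustrative values, not figures from any particular trial.

```python
# Approximate per-group sample size for comparing two proportions
# (normal-approximation formula). All inputs below are illustrative assumptions.
from math import ceil

p_control   = 0.20          # assumed event rate in the control group
p_treatment = 0.15          # assumed event rate in the treatment group
z_alpha     = 1.96          # two-sided 5% significance level
z_beta      = 0.8416        # 80% power

variance_sum = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
n_per_group = ceil((z_alpha + z_beta) ** 2 * variance_sum
                   / (p_control - p_treatment) ** 2)
print(n_per_group, "participants per group")   # roughly 900 under these assumptions
```

Halving the assumed difference between the event rates roughly quadruples the required sample size, which is the trade-off described above.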
Blinding
An RCT may be blinded (also called "masked") by "procedures that prevent study participants, caregivers, or outcome assessors from knowing which intervention was received." Unlike allocation concealment, blinding is sometimes inappropriate or impossible to perform in an RCT; for example, if an RCT involves a treatment in which active participation of the patient is necessary (e.g., physical therapy), participants cannot be blinded to the intervention.
Traditionally, blinded RCTs have been classified as "single-blind", "double-blind", or "triple-blind"; however, in 2001 and 2006 two studies showed that these terms have different meanings for different people. The 2010 CONSORT Statement specifies that authors and editors should not use the terms "single-blind", "double-blind", and "triple-blind"; instead, reports of blinded RCT should discuss "If done, who was blinded after assignment to interventions (for example, participants, care providers, those assessing outcomes) and how."
RCTs without blinding are referred to as "unblinded", "open", or (if the intervention is a medication) "open-label". In 2008 a study concluded that the results of unblinded RCTs tended to be biased toward beneficial effects only if the RCTs' outcomes were subjective as opposed to objective; for example, in an RCT of treatments for multiple sclerosis, unblinded neurologists (but not the blinded neurologists) felt that the treatments were beneficial. In pragmatic RCTs, although the participants and providers are often unblinded, it is "still desirable and often possible to blind the assessor or obtain an objective source of data for evaluation of outcomes."
Analysis of data
The types of statistical methods used in RCTs depend on the characteristics of the data and include:
For dichotomous (binary) outcome data, logistic regression (e.g., to predict sustained virological response after receipt of peginterferon alfa-2a for hepatitis C) and other methods can be used; a simplified sketch of a dichotomous-outcome comparison follows this list.
For continuous outcome data, analysis of covariance (e.g., for changes in blood lipid levels after receipt of atorvastatin after acute coronary syndrome) tests the effects of predictor variables.
For time-to-event outcome data that may be censored, survival analysis (e.g., Kaplan–Meier estimators and Cox proportional hazards models for time to coronary heart disease after receipt of hormone replacement therapy in menopause) is appropriate.
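As a simplified illustration of the dichotomous-outcome case referenced above (in place of a full logistic regression), the sketch below compares event proportions in two simulated arms with a two-proportion z-test; all counts are invented.

```python
# Two-proportion z-test on a simulated dichotomous RCT outcome.
# The event counts below are invented purely for illustration.
from math import sqrt, erf

events_treatment, n_treatment = 30, 100     # hypothetical treatment arm
events_control,   n_control   = 45, 100     # hypothetical control arm

p1, p2 = events_treatment / n_treatment, events_control / n_control
p_pool = (events_treatment + events_control) / (n_treatment + n_control)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_treatment + 1 / n_control))
z = (p1 - p2) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal tail

print(f"risk difference = {p1 - p2:+.2f}, z = {z:.2f}, p = {p_value:.4f}")
```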
Regardless of the statistical methods used, important considerations in the analysis of RCT data include:
Whether an RCT should be stopped early due to interim results. For example, RCTs may be stopped early if an intervention produces "larger than expected benefit or harm", or if "investigators find evidence of no important difference between experimental and control interventions."
The extent to which the groups can be analyzed exactly as they existed upon randomization (i.e., whether a so-called "intention-to-treat analysis" is used). A "pure" intention-to-treat analysis is "possible only when complete outcome data are available" for all randomized subjects; when some outcome data are missing, options include analyzing only cases with known outcomes and using imputed data. Nevertheless, the more that analyses can include all participants in the groups to which they were randomized, the less bias that an RCT will be subject to.
Whether subgroup analysis should be performed. These are "often discouraged" because multiple comparisons may produce false positive findings that cannot be confirmed by other studies.
Reporting of results
The CONSORT 2010 Statement is "an evidence-based, minimum set of recommendations for reporting RCTs." The CONSORT 2010 checklist contains 25 items (many with sub-items) focusing on "individually randomised, two group, parallel trials" which are the most common type of RCT.
For other RCT study designs, "CONSORT extensions" have been published, some examples are:
Consort 2010 Statement: Extension to Cluster Randomised Trials
Consort 2010 Statement: Non-Pharmacologic Treatment Interventions
"Reporting of surrogate endpoints in randomised controlled trial reports (CONSORT-Surrogate): extension checklist with explanation and elaboration"
Relative importance and observational studies
Two studies published in The New England Journal of Medicine in 2000 found that observational studies and RCTs overall produced similar results. The authors of the 2000 findings questioned the belief that "observational studies should not be used for defining evidence-based medical care" and that RCTs' results are "evidence of the highest grade." However, a 2001 study published in Journal of the American Medical Association concluded that "discrepancies beyond chance do occur and differences in estimated magnitude of treatment effect are very common" between observational studies and RCTs. According to a 2014 (updated in 2024) Cochrane review, there is little evidence for significant effect differences between observational studies and randomized controlled trials. To evaluate differences it is necessary to consider things other than design, such as heterogeneity, population, intervention or comparator.
Two other lines of reasoning question RCTs' contribution to scientific knowledge beyond other types of studies:
If study designs are ranked by their potential for new discoveries, then anecdotal evidence would be at the top of the list, followed by observational studies, followed by RCTs.
RCTs may be unnecessary for treatments that have dramatic and rapid effects relative to the expected stable or progressively worse natural course of the condition treated. One example is combination chemotherapy including cisplatin for metastatic testicular cancer, which increased the cure rate from 5% to 60% in a 1977 non-randomized study.
Interpretation of statistical results
Like all statistical methods, RCTs are subject to both type I ("false positive") and type II ("false negative") statistical errors. Regarding Type I errors, a typical RCT will use 0.05 (i.e., 1 in 20) as the probability that the RCT will falsely find two equally effective treatments significantly different. Regarding Type II errors, despite the publication of a 1978 paper noting that the sample sizes of many "negative" RCTs were too small to make definitive conclusions about the negative results, by 2005-2006 a sizeable proportion of RCTs still had inaccurate or incompletely reported sample size calculations.
Peer review
Peer review of results is an important part of the scientific method. Reviewers examine the study results for potential problems with design that could lead to unreliable results (for example by creating a systematic bias), evaluate the study in the context of related studies and other evidence, and evaluate whether the study can be reasonably considered to have proven its conclusions. To underscore the need for peer review and the danger of overgeneralizing conclusions, two Boston-area medical researchers performed a randomized controlled trial in which they randomly assigned either a parachute or an empty backpack to 23 volunteers who jumped from either a biplane or a helicopter. The study was able to accurately report that parachutes fail to reduce injury compared to empty backpacks. The key context that limited the general applicability of this conclusion was that the aircraft were parked on the ground, and participants had only jumped about two feet.
Advantages
RCTs are considered to be the most reliable form of scientific evidence in the hierarchy of evidence that influences healthcare policy and practice because RCTs reduce spurious causality and bias. Results of RCTs may be combined in systematic reviews which are increasingly being used in the conduct of evidence-based practice. Some examples of scientific organizations that consider RCTs or systematic reviews of RCTs to be the highest-quality evidence available are:
As of 1998, the National Health and Medical Research Council of Australia designated "Level I" evidence as that "obtained from a systematic review of all relevant randomised controlled trials" and "Level II" evidence as that "obtained from at least one properly designed randomised controlled trial."
Since at least 2001, in making clinical practice guideline recommendations the United States Preventive Services Task Force has considered both a study's design and its internal validity as indicators of its quality. It has recognized "evidence obtained from at least one properly randomized controlled trial" with good internal validity (i.e., a rating of "I-good") as the highest quality evidence available to it.
The GRADE Working Group concluded in 2008 that "randomised trials without important limitations constitute high quality evidence."
For issues involving "Therapy/Prevention, Aetiology/Harm", the Oxford Centre for Evidence-based Medicine as of 2011 defined "Level 1a" evidence as a systematic review of RCTs that are consistent with each other, and "Level 1b" evidence as an "individual RCT (with narrow Confidence Interval)."
Notable RCTs with unexpected results that contributed to changes in clinical practice include:
After Food and Drug Administration approval, the antiarrhythmic agents flecainide and encainide came to market in 1986 and 1987 respectively. The non-randomized studies concerning the drugs were characterized as "glowing", and their sales increased to a combined total of approximately 165,000 prescriptions per month in early 1989. In that year, however, a preliminary report of an RCT concluded that the two drugs increased mortality. Sales of the drugs then decreased.
Prior to 2002, based on observational studies, it was routine for physicians to prescribe hormone replacement therapy for post-menopausal women to prevent myocardial infarction. In 2002 and 2004, however, published RCTs from the Women's Health Initiative claimed that women taking hormone replacement therapy with estrogen plus progestin had a higher rate of myocardial infarctions than women on a placebo, and that estrogen-only hormone replacement therapy caused no reduction in the incidence of coronary heart disease. Possible explanations for the discrepancy between the observational studies and the RCTs involved differences in methodology, in the hormone regimens used, and in the populations studied. The use of hormone replacement therapy decreased after publication of the RCTs.
Disadvantages
Many papers discuss the disadvantages of RCTs. Among the most frequently cited drawbacks are:
Time and costs
RCTs can be expensive; one study found 28 Phase III RCTs funded by the National Institute of Neurological Disorders and Stroke prior to 2000 with a total cost of US$335 million, for a mean cost of US$12 million per RCT. Nevertheless, the return on investment of RCTs may be high, in that the same study projected that the 28 RCTs produced a "net benefit to society at 10-years" of 46 times the cost of the trials program, based on evaluating a quality-adjusted life year as equal to the prevailing mean per capita gross domestic product.
An RCT often takes several years to conduct and publish; thus, data may be withheld from the medical community for many years and may be of less relevance by the time of publication.
It is costly to maintain RCTs for the years or decades that would be ideal for evaluating some interventions.
Interventions to prevent events that occur only infrequently (e.g., sudden infant death syndrome) and uncommon adverse outcomes (e.g., a rare side effect of a drug) would require RCTs with extremely large sample sizes and may, therefore, best be assessed by observational studies.
Due to the costs of running RCTs, these usually only inspect one variable or very few variables, rarely reflecting the full picture of a complicated medical situation; whereas the case report, for example, can detail many aspects of the patient's medical situation (e.g. patient history, physical examination, diagnosis, psychosocial aspects, follow up).
Conflict of interest dangers
A 2011 study done to disclose possible conflicts of interests in underlying research studies used for medical meta-analyses reviewed 29 meta-analyses and found that conflicts of interests in the studies underlying the meta-analyses were rarely disclosed. The 29 meta-analyses included 11 from general medicine journals; 15 from specialty medicine journals, and 3 from the Cochrane Database of Systematic Reviews. The 29 meta-analyses reviewed an aggregate of 509 randomized controlled trials (RCTs). Of these, 318 RCTs reported funding sources with 219 (69%) industry funded. 132 of the 509 RCTs reported author conflict of interest disclosures, with 91 studies (69%) disclosing industry financial ties with one or more authors. The information was, however, seldom reflected in the meta-analyses. Only two (7%) reported RCT funding sources and none reported RCT author-industry ties. The authors concluded "without acknowledgment of COI due to industry funding or author industry financial ties from RCTs included in meta-analyses, readers' understanding and appraisal of the evidence from the meta-analysis may be compromised."
Some RCTs are fully or partly funded by the health care industry (e.g., the pharmaceutical industry) as opposed to government, nonprofit, or other sources. A systematic review published in 2003 found four 1986–2002 articles comparing industry-sponsored and nonindustry-sponsored RCTs, and in all the articles there was a correlation between industry sponsorship and positive study outcomes. A 2004 study of 1999–2001 RCTs published in leading medical and surgical journals determined that industry-funded RCTs "are more likely to be associated with statistically significant pro-industry findings." These results have been mirrored in trials in surgery, where, although industry funding did not affect the rate of trial discontinuation, it was associated with lower odds of publication for completed trials. One possible reason for the pro-industry results in industry-funded published RCTs is publication bias. Other authors have cited the differing goals of academic and industry sponsored research as contributing to the difference. Commercial sponsors may be more focused on performing trials of drugs that have already shown promise in early stage trials, and on replicating previous positive results to fulfill regulatory requirements for drug approval.
Ethics
If a disruptive innovation in medical technology is developed, it may be difficult to test this ethically in an RCT if it becomes "obvious" that the control subjects have poorer outcomes—either due to other foregoing testing, or within the initial phase of the RCT itself. Ethically it may be necessary to abort the RCT prematurely, and getting ethics approval (and patient agreement) to withhold the innovation from the control group in future RCTs may not be feasible.
Historical control trials (HCT) exploit the data of previous RCTs to reduce the sample size; however, these approaches are controversial in the scientific community and must be handled with care.
In social science
Due to the recent emergence of RCTs in social science, the use of RCTs in social sciences is a contested issue. Some writers from a medical or health background have argued that existing research in a range of social science disciplines lacks rigour, and should be improved by greater use of randomized control trials.
Transport science
Researchers in transport science argue that public spending on programmes such as school travel plans could not be justified unless their efficacy is demonstrated by randomized controlled trials. Graham-Rowe and colleagues reviewed 77 evaluations of transport interventions found in the literature, categorising them into 5 "quality levels". They concluded that most of the studies were of low quality and advocated the use of randomized controlled trials wherever possible in future transport research.
Dr. Steve Melia took issue with these conclusions, arguing that claims about the advantages of RCTs, in establishing causality and avoiding bias, have been exaggerated. He proposed the following eight criteria for the use of RCTs in contexts where interventions must change human behaviour to be effective:
The intervention:
Has not been applied to all members of a unique group of people (e.g. the population of a whole country, all employees of a unique organisation etc.)
Is applied in a context or setting similar to that which applies to the control group
Can be isolated from other activities—and the purpose of the study is to assess this isolated effect
Has a short timescale between its implementation and maturity of its effects
And the causal mechanisms:
Are either known to the researchers, or else all possible alternatives can be tested
Do not involve significant feedback mechanisms between the intervention group and external environments
Have a stable and predictable relationship to exogenous factors
Would act in the same way if the control group and intervention group were reversed
Criminology
A 2005 review found 83 randomized experiments in criminology published in 1982–2004, compared with only 35 published in 1957–1981. The authors classified the studies they found into five categories: "policing", "prevention", "corrections", "court", and "community". Focusing only on offending behavior programs, Hollin (2008) argued that RCTs may be difficult to implement (e.g., if an RCT required "passing sentences that would randomly assign offenders to programmes") and therefore that experiments with quasi-experimental design are still necessary.
Education
RCTs have been used in evaluating a number of educational interventions. Between 1980 and 2016, over 1,000 reports of RCTs have been published. For example, a 2009 study randomized 260 elementary school teachers' classrooms to receive or not receive a program of behavioral screening, classroom intervention, and parent training, and then measured the behavioral and academic performance of their students. Another 2009 study randomized classrooms for 678 first-grade children to receive a classroom-centered intervention, a parent-centered intervention, or no intervention, and then followed their academic outcomes through age 19.
Criticism
A 2018 review of the 10 most cited randomised controlled trials noted poor distribution of background traits, difficulties with blinding, and discussed other assumptions and biases inherent in randomised controlled trials. These include the "unique time period assessment bias", the "background traits remain constant assumption", the "average treatment effects limitation", the "simple treatment at the individual level limitation", the "all preconditions are fully met assumption", the "quantitative variable limitation" and the "placebo only or conventional treatment only limitation".
| Mathematics | Statistics and probability | null |
163389 | https://en.wikipedia.org/wiki/Diamond%20dust | Diamond dust | Diamond dust is a ground-level cloud composed of tiny ice crystals. This meteorological phenomenon is also referred to simply as ice crystals and is reported in the METAR code as IC. Diamond dust generally forms under otherwise clear or nearly clear skies, so it is sometimes referred to as clear-sky precipitation. Diamond dust is most commonly observed in Antarctica and the Arctic, but can occur anywhere with a temperature well below freezing. In the polar regions of Earth, diamond dust may persist for several days without interruption.
Characteristics
Diamond dust is similar to fog in that it is a cloud based at the surface; however, it differs from fog in two main ways. Generally fog refers to a cloud composed of liquid water (the term ice fog usually refers to a fog that formed as liquid water and then froze, and frequently seems to occur in valleys with airborne pollution such as Fairbanks, Alaska, while diamond dust forms directly as ice). Also, fog is a dense enough cloud to significantly reduce visibility, while diamond dust is usually very thin and may not have any effect on visibility (there are far fewer crystals in a volume of air than there are droplets in the same volume with fog). Because mist is often classified as being more transparent than fog, diamond dust has often been referred to as ice mist. However, diamond dust can still often reduce visibility, in some cases to under .
The depth of the diamond dust layer can vary substantially from as little as to . Because diamond dust does not always reduce visibility it is often first noticed by the brief flashes caused when the tiny crystals, tumbling through the air, reflect sunlight to the eye. This glittering effect gives the phenomenon its name since it looks like many tiny diamonds are flashing in the air.
Formation
These ice crystals usually form when a temperature inversion is present at the surface and the warmer air above the ground mixes with the colder air near the surface. Since warmer air frequently contains more water vapor than colder air, this mixing will usually also transport water vapor into the air near the surface, causing the relative humidity of the near-surface air to increase. If the relative humidity increase near the surface is large enough then ice crystals may form.
To form diamond dust the temperature must be below the freezing point of water, , or the ice cannot form or would melt. However, diamond dust is not often observed at temperatures near . At temperatures between and about increasing the relative humidity can cause either fog or diamond dust. This is because very small droplets of water can remain liquid well below the freezing point, a state known as supercooled water. In areas with a lot of small particles in the air, from human pollution or natural sources like dust, the water droplets are likely to be able to freeze at a temperature around , but in very clean areas, where there are no particles (ice nuclei) to help the droplets freeze, they can remain liquid to , at which point even very tiny, pure water droplets will freeze. In the interior of Antarctica diamond dust is fairly common at temperatures below about .
Artificial diamond dust can form from snow machines which blow ice crystals into the air. These are found at ski resorts. Diamond dust may also be observed immediately downwind from manufacturing facilities or chilled water plants that produce steam.
Optical properties
Diamond dust is often associated with halos, such as sun dogs, light pillars, etc. Like the ice crystals in cirrus or cirrostratus clouds, diamond dust crystals form directly as simple hexagonal ice crystals — as opposed to freezing drops — and generally form slowly. This combination results in crystals with well defined shapes - usually either hexagonal plates or columns - which, like a prism, can reflect and/or refract light in specific directions.
Climatology
While diamond dust can be seen in any area of the world that has cold winters, it is most frequent in the interior of Antarctica, where it is common year-round. Schwerdtfeger (1970) shows that diamond dust was observed on average 316 days a year at Plateau Station in Antarctica, and Radok and Lile (1977) estimate that over 70% of the precipitation that fell at Plateau Station in 1967 fell in the form of diamond dust. Once melted, the total precipitation for the year was only .
Weather reporting and interference
Diamond dust may sometimes cause a problem for automated airport weather stations. The ceilometer and visibility sensor do not always correctly interpret the falling diamond dust and report the visibility and ceiling as zero (overcast skies). However, a human observer would correctly notice clear skies and unrestricted visibility. The METAR identifier for diamond dust within international hourly weather reports is IC.
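As a rough illustration of how the IC group appears in practice, the following minimal Python sketch checks whether a METAR report contains the ice-crystals code; the sample report string is hypothetical, and a production parser would handle the full METAR grammar rather than simple token matching.

def reports_diamond_dust(metar: str) -> bool:
    # Present-weather groups appear as standalone tokens, e.g. "IC", "-SN", "FG".
    return any(token == "IC" for token in metar.split())

sample = "METAR NZSP 210350Z 06008KT 9999 IC FEW020 M55/ A2895"  # hypothetical report
print(reports_diamond_dust(sample))  # True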
| Physical sciences | Clouds | Earth science |
163395 | https://en.wikipedia.org/wiki/Traffic%20light | Traffic light | Traffic lights, traffic signals, or stoplights – also known as robots in South Africa, Zambia, and Namibia – are signaling devices positioned at road intersections, pedestrian crossings, and other locations in order to control the flow of traffic.
Traffic lights normally consist of three signals, transmitting meaningful information to road users through colours and symbols, including arrows and bicycles. The regular traffic light colours are red to stop traffic, amber for traffic change, and green for allowing the traffic, arranged vertically or horizontally in that order. Although this is internationally standardised, variations in traffic light sequences and laws exist on national and local scales.
Traffic lights were first introduced in December 1868 on Parliament Square in London to reduce the need for police officers to control traffic. Since then, electricity and computerised control have advanced traffic light technology and increased intersection capacity. The system is also used for other purposes, including the control of pedestrian movements, variable lane control (such as tidal flow systems or smart motorways), and railway level crossings.
History
The first system of traffic signals, which was a semaphore traffic signal, was installed as a way to replace police officer control of vehicular traffic outside the Houses of Parliament in London on 9 December 1868. This system exploded on 2 January 1869 and was thus taken down. But this early traffic signal led to other parts of the world implementing similar traffic signal systems. In the first two decades of the 20th century, semaphore traffic signals like the one in London were in use all over the United States. These traffic signals were controlled by a traffic officer who would change the commands on the signal to direct traffic.
In 1912, the first electric traffic light was developed by Lester Wire, a policeman in Salt Lake City, Utah. It was installed by the American Traffic Signal Company on the corner of East 105th Street and Euclid Avenue in Cleveland, Ohio, on August 5, 1914. The first four-way, three-colour traffic light was created by William Potts in Detroit, Michigan in 1920. His design was the first to include an amber 'caution' light along with red and green lights. Potts was Superintendent of Signals for the Police Department of Detroit. He installed automatic four-way, three-colour traffic lights in 15 towers across Detroit in 1921. By 1922, traffic towers were beginning to be controlled by automatic timers more widely. The main advantage of the use of the timer was that it saved cities money by replacing traffic officers. The city of New York was able to reassign all but 500 of its 6,000 officers working on the traffic squad, saving the city $12,500,000.
In 1923, Garrett Morgan patented a design of a manually operated three-way traffic light with moving arms.
The control of traffic lights made a big turn with the rise of computers in America in the 1950s. One of the best historical examples of computerized control of lights was in Denver in 1952. In 1967, the city of Toronto was the first to use more advanced computers that were better at vehicle detection. The computers maintained control over 159 signals in the cities through telephone lines.
Vehicular signals
A set of lights, known as a signal head, may have one, two, three, or more aspects. The most common signal type has three aspects facing the oncoming traffic: red on top, amber (yellow) below, and green below that. Additional aspects may be fitted to the signal, usually to indicate specific restrictions or filter movements.
Meanings of signals
The 1968 Vienna Convention on Road Signs and Signals Chapter III provides international standards for the setup of traffic signal operations. Not all states have ratified the convention. A three-colour signal head should have three non-flashing lights which are red, amber, and green, either arranged horizontally (on the side opposite to the direction of traffic) or vertically (with red on top). A two-colour signal head may be used in temporary operation and consists of red and green non-flashing lights. In both cases, all lights should be circular or arrow-shaped. Permissible signals for regulating vehicle traffic (other than public transport vehicles) are outlined in Article 23:
Green arrows are added to signals to indicate that drivers can travel in a particular direction, while the main lights for that approach are red, or that drivers can only travel in one particular direction. Alternatively, when combined with another green signal, they may indicate that turning traffic has priority over oncoming traffic (known as a "filter arrow"). Flashing amber arrows typically indicate that road users must give way (to other drivers and pedestrians) before making a movement in the direction of the arrow. These are used because they are safer, cause less delay, and are more flexible. Flashing amber arrows will normally be located below the solid amber.
Green arrows
Arrow aspects may be used to permit certain movements or convey other messages to road users. A green arrow may display to require drivers to turn in a particular direction only or to allow drivers to continue in a particular direction when the signal is red. Generally, a green arrow is illuminated at the beginning of the green phase (a "leading turn") or at the end of the green phase (a "lagging turn"). An 'indicative arrow' may be displayed alongside a green light. This indicates to drivers that oncoming traffic is stopped, such that they do not need to give way to that traffic when turning across it. As right-turning traffic (left-side drive) or left-turning traffic (right-side drive) does not normally have priority, this arrow is used to allow turning traffic to clear before the next phase begins.
Some variations of this setup exist. One version is a horizontal bar with five lights – the green and amber arrows are located between the standard green and amber lights. A vertical five-light bar holds the arrows underneath the standard green light (in this arrangement, the amber arrow is sometimes omitted, leaving only the green arrow below the steady green light, or possibly an LED-based device capable of showing both green and amber arrows within a single lamp housing).
A third type is known as a "doghouse" or "cluster head" – a vertical column with the two normal lights is on the right side of the signal, a vertical column with the two arrows is located on the left, and the normal red signal is in the middle above the two columns. Cluster signals in Australia and New Zealand use six signals, the sixth being a red arrow that can operate separately from the standard red light. In a fourth type, sometimes seen at intersections in Ontario and Quebec, Canada, there is no dedicated left-turn lamp per se. Instead, the normal green lamp flashes rapidly, indicating permission to go straight as well as make a left turn in front of opposing traffic, which is being held by a steady red lamp. (This "advance green", or flashing green can be somewhat startling and confusing to drivers not familiar with this system. This also can cause confusion amongst visitors to British Columbia, where a flashing green signal denotes a pedestrian-controlled crosswalk. For this reason, Ontario is phasing out the use of flashing green signals and instead replacing them with arrows.)
Countdown lights
Popular in Vietnam and China, countdown lights are additional lights installed next to (or above or below) the main signal lights. The countdown is displayed as a number whose colour (usually red, yellow, or green) matches the light that is currently on. When the count reaches "0" (or 1), the main light immediately changes. Some countdown displays show a leading zero in the tens place and some flash as they approach zero. Yellow lights can also have countdowns, but most do not. A countdown display usually has two digits, so if the duration of the main light (usually the red light, rarely the green light) is longer than 100 seconds, one of the following conventions may be used, depending on the type of light (see the sketch after this list):
The display does not count down until 99 seconds remain; during the standby time it may show "99", "00", "--" or nothing.
The display shows only the last two digits of the remaining time (for example 15 when 115 seconds remain; some types show "-9" or "9-" when 109 seconds remain).
The tens digit becomes a letter: A0 for 100 seconds, B0 for 110 seconds, and so forth.
The display shows only the last two digits but flashes to indicate that more than 100 seconds remain.
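These conventions amount to a small encoding rule applied to the remaining time. The following Python sketch is a hypothetical illustration of two of them (holding the display until 99 seconds remain, and turning the tens digit into a letter above 99 seconds); real countdown units are vendor-specific firmware.

def display_hold_at_99(seconds_left: int) -> str:
    # Show "--" until 99 seconds remain, then count down normally.
    return "--" if seconds_left > 99 else f"{seconds_left:02d}"

def display_letter_tens(seconds_left: int) -> str:
    # Replace the tens digit with a letter above 99 s: A0 = 100 s, B0 = 110 s, and so on.
    if seconds_left <= 99:
        return f"{seconds_left:02d}"
    letter = chr(ord("A") + (seconds_left - 100) // 10)
    return f"{letter}{seconds_left % 10}"

print(display_hold_at_99(115))   # --
print(display_letter_tens(115))  # B5
print(display_letter_tens(100))  # A0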
Issue about yellow light dilemma zone in South Korea
In South Korea, the yellow light dilemma zone is not legally recognized. In other words, when the yellow light is on, traffic may not pass the stop line or enter the intersection even if it cannot safely stop when the light changes.
This has been reaffirmed by the ruling of the Supreme Court of Korea in May 2024, in a case where the driver was travelling at 62 km/h on a street with a 40 km/h limit (55% above the allowed speed).
Critics in South Korea say that this is unrealistic and unreasonable. In addition, it can cause multiple collisions due to sudden braking.
In 2016, when the speed limit was up to 60 km/h, the only alternatives proposed to address this kind of collision were roundabouts, improved speed compliance, reduced driving speeds in practice, and elderly zones.
Yellow trap
Without an all-red phase, cross-turning traffic may be caught in a yellow trap. When the signal turns yellow, a turning driver may assume oncoming traffic will stop and a crash may result. For this reason, the US bans sequences that may cause a yellow trap. This can also happen when emergency vehicles or railroads preempt normal signal operation.
In the United States, signs reading "Oncoming traffic has extended green" or "Oncoming traffic may have extended green" must be posted at intersections where the "yellow trap" condition exists.
Variations
The United States is not party to the Vienna Convention; rather, the Manual on Uniform Traffic Control Devices (MUTCD) outlines correct operation in that country. In the US, a single signal head may have three, four, or five aspects (though a single aspect green arrow may be displayed to indicate a continuous movement). The signals must be arranged red, amber, and green vertically (top to bottom) or horizontally (left to right). In the US, a single-aspect flashing amber signal can be used to raise attention to a warning sign and a single-aspect flashing red signal can be used to raise attention to a "stop", "do not enter", or "wrong way" sign. Flashing red or amber lights, known as intersection control beacons, are used to reinforce stop signs at intersections. The MUTCD specifies the following vehicular signals:
In the Canadian province of Quebec and the Maritime provinces, lights are often arranged horizontally, but each aspect is a different shape: red is a square (larger than the normal circle) and usually in pairs at either end of the fixture, amber is a diamond, and green is a circle. In many southern and southwestern U.S. states, most traffic signals are similarly horizontal in order to ease wind resistance during storms and hurricanes. Japanese traffic signals mostly follow the same rule except that the green "go" signals are referred to as 青 (ao), typically translated as "blue", reflecting a historical change in the Japanese language. As a result, Japanese officials decreed in 1973 that the "go" light should be changed to the bluest possible shade of green, bringing the name more in line with the color without violating the international "green means go" rule.
In the UK, normal traffic lights follow this sequence:
Red – Stop, do not proceed
Red and Amber – Get ready to proceed, but do not proceed yet
Green – Proceed if the intersection or crossing is clear; vehicles are not allowed to block the intersection or crossing
Amber – Stop, unless it is unsafe to do so
A speed sign is a special traffic light, variable traffic sign, or variable-message sign giving drivers a recommended speed to approach the next traffic light in its green phase and avoid a stop due to reaching the intersection when lights are red.
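In effect, such a sign solves a small kinematics problem: it recommends a speed at which the driver arrives at the stop line while the signal is green, without exceeding the limit. The sketch below is a simplified, hypothetical Python illustration of that idea; real systems also account for queues, acceleration and coordination along the corridor.

def advisory_speed_kmh(distance_m, seconds_to_green, green_duration_s, limit_kmh):
    # Slowest useful speed: arrive just as the green window ends.
    min_speed = distance_m / (seconds_to_green + green_duration_s) * 3.6
    # Fastest allowed speed: the legal limit, or arrival exactly at the green onset.
    max_speed = min(limit_kmh, distance_m / seconds_to_green * 3.6) if seconds_to_green > 0 else limit_kmh
    return max_speed if min_speed <= max_speed else None  # None: no legal speed catches the green

# 400 m from the lights, green starts in 20 s and lasts 25 s, 50 km/h limit.
print(advisory_speed_kmh(400, 20, 25, 50))  # 50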
Pedestrian signals
Pedestrian signals are used to inform pedestrians when to cross a road. Most pedestrian signal heads will have two lights: a 'walk' light (normally a walking human figure, typically coloured green or white) and a 'don't walk' light (normally either a red or orange man figure or a hand), though other variations exist.
Where pedestrians need to cross the road between junctions, a signal-controlled crossing may be provided as an alternative to a zebra crossing or uncontrolled crossing. Traffic lights are normally used at crossings where vehicle speeds are high, where either vehicle or pedestrian flows are high, or near signalised junctions. In the UK, this type of crossing is called a pelican crossing, though more modern iterations are puffin and pedex crossings. In the UK, these crossings normally need at least four traffic signals, which are of a regular type (red, amber, and green), two facing in each direction. Furthermore, pedestrians will be provided with push buttons and pedestrian signals, consisting of a red and green man. Farside signals are located across the crossing, while nearside signals are located below the traffic lights, facing in the direction of oncoming traffic. A HAWK beacon is a special type of traffic signal used in the US at mid-block crossings. These consist of two red signals above a single amber signal. The beacon is unlit until a pedestrian pushes the cross button. Then an amber light will show, followed by both red lights, at which point the 'Walk' symbol will illuminate for pedestrians. At the end of the crossing phase, the 'Don't Walk' symbol will flash, as will the amber traffic light.
Pedestrians are usually incorporated into urban signalised junctions in one of four ways: no facilities, parallel walk, walk with traffic, or all-red stages. No facilities may be provided if pedestrian demand is low, in areas where pedestrians are not permitted, or if there is a subway or overpass. No provision of formal facilities means pedestrians will have to self-evaluate when it is safe to cross, which can be intimidating for pedestrians. With a "parallel walk" design, pedestrians walk alongside the traffic flow. A leading pedestrian interval may be provided, whereby pedestrians get a "walk" signal before the traffic gets a green light, allowing pedestrians to establish themselves on the crossing before vehicles begin to turn, to encourage drivers to give way. A 'walk with traffic' facility allows pedestrians to go at the same time as other traffic movements with no conflict between movements. This can work well on one-way roads, where turning movements are banned or where the straight-ahead movement runs in a different stage from the turning movement. A splitter island could also be provided. Traffic will pass on either side of the island and pedestrians can cross the road safely between the other flows.
An all-red stage, also known as a full pedestrian stage, a pedestrian scramble or a Barnes Dance, holds all vehicular traffic at the junction to allow pedestrians time to safely cross without conflict from vehicles. It also allows the use of diagonal crossings. This may require a longer cycle time and increase pedestrian wait periods, though the latter can be eased by providing two pedestrian stages.
Pedestrian countdown timers are becoming common at urban signal-controlled crossings. Where a pedestrian countdown is shown, it is normally used in conjunction with the flashing hand signal (in the US and Canada) or blackout period (UK), showing the amount of time remaining in seconds until the end of the flashing hand or blackout. Pedestrian countdown timers do not significantly increase or reduce the number of red- and amber-light running drivers. Some studies have found that pedestrian countdown timers significantly improve pedestrian compliance over traditional pedestrian signals, but results are mixed.
Smartphone Zombie ribbon
As 12 to 45% of pedestrian deaths caused by 'pedestrian distraction' have been linked to cell phone usage, some cities (including Sydney, Seoul, Augsburg, Bodegraven, Tel Aviv, and Singapore) have installed LED strips embedded in the sidewalk before crosswalks to warn distracted pedestrians of imminent pedestrian crossings. This additional signal, which is synchronized with conventional signals, aims to decrease injury rates by telling distracted pedestrians when it is safe to cross the road without them having to lift their heads.
Auditory and tactile signals
In some jurisdictions such as Australia, pedestrian lights are associated with a sound device, for the benefit of blind and visually impaired pedestrians. These make a slow beeping sound when the pedestrian lights are red and a continuous buzzing or fast beeping sound when the lights are green. In the Australian States of Queensland, New South Wales, Victoria, and Western Australia, the sound is produced in the same unit as the push buttons. In a circle above the button, the sound is produced and can be felt along with a raised arrow that points in the direction to walk. This system of assistive technology is also widely used at busy intersections in Canadian cities. In the United Kingdom, the Puffin crossings and their predecessor, the Pelican crossing, will make a fast beeping sound to indicate that it is safe to cross the road. The beeping sound is disabled during the nighttime so as not to disturb any nearby residents.
In some states in the United States, at some busy intersections, buttons will make a beeping sound for blind people. When the light changes, a speaker built into the button will play a recording to notify blind people that it is safe to cross. When the signal flashes red, the recording will start to count down with the countdown timer. In several countries such as New Zealand, technology also allows deaf and blind people to feel when lights have changed to allow safe crossing. A small pad, housed within an indentation in the base of the box housing the button mechanism, moves downwards when the lights change to allow crossing. This is designed to be felt by anyone waiting to cross who has limited ability to detect sight or sound. In Japan, a traffic light emits an electronic sound that mimics the sound of birdsong to help the visually impaired. Some traffic lights fix the order and type of sound so that they can tell which direction is a green light. In general, "Piyo" (peep) and "Piyo-piyo", which is a small bird call, and "Kakkō" and "Ka-kakkō", which is a cuckoo call, are associated with this system. Some pedestrian crossings in Lithuania make a slow beeping sound indicating that the traffic light is about to turn off.
Cycle signals
Where cycle lanes or cycle tracks exist on the approach to a signal-controlled junction, it must be considered how to incorporate cyclists safely into the junction to reduce conflict between motor vehicles and cyclists.
An advanced stop line can be placed after the stop line at traffic lights. This allows cyclists to position themselves in front of traffic at a red light and get a headstart.
In the US, design advice typically advises that the cycle lane should continue through the junction to the left of the right-turn lane; however, this creates conflict where motor vehicles wish to enter the right lane, as they must cross the cycle lane at a bad angle.
Under Dutch engineering principles, cyclists are instead kept to the right of the junction, with protected kerbs. This improves safety by putting cyclists into the eyeline of motor vehicles at the stop line, allowing cyclists a headstart over turning traffic. This design also allows cyclists to complete far-side turns without having to wait in the centre of the junction. UK engineers have innovated on this design through the Cycle Optimised Protected Signals (CYCLOPS) junction, e.g. in Manchester. This places the cycle track around the edge of the signal junction and gives cyclists and pedestrians a single all-red phase, entirely separate from motor traffic and shortens pedestrian crossing times.
Alternatively, cyclists can be considered pedestrians on approach to a junction, or where a cycle track crosses a road and combined pedestrian-cyclist traffic lights (known as Toucan crossings in the UK) can be provided.
Public transport signals
Traffic lights for public transport often use signals that are distinct from those for private traffic. They can be letters, arrows or bars of white or coloured light (an LED or 100-watt lamp is typical).
Transit signals in North America
MUTCD specifies a standard vertically oriented signal with either two or three lenses, displaying white lines on a black background.
Some systems use the letter B for buses and T for trams. The METRO light rail system in Minneapolis, Minnesota, the Valley Metro Rail in Phoenix, Arizona, and the RTA Streetcar System in New Orleans use a simplified variant of the Belgian/French system in the respective city's central business district where only the "go" and "stop" configurations are used. A third signal equal to amber is accomplished by flashing the "go" signal.
Public transport signals in Europe
In some European countries and Russia, dedicated traffic signals for public transport (tram, as well any that is using a dedicated lane) have four white lights that form the letter T. If the three top lamps are lit, this means "stop". If the bottom lamp and some lamps on the top row are lit, this means permission to go in a direction shown. In the case of a tram signal, if there are no tram junctions or turns at an intersection, a simpler system of one amber signal in the form of the letter T is used instead; the tram must proceed only when the signal is lit.
In North European countries, the tram signals feature white lights of different forms: "S" for "stop", "—" for "caution" and arrows to permit passage in a given direction. In Sweden, all signals use white lighting and special symbols ("S", "–" and an arrow) to distinguish them from regular signals.
The Netherlands uses a distinctive "negenoog" (nine-eyed) design shown on the top row of the diagram; bottom row signals are used in Belgium, Luxembourg, France, and Germany. The signals mean (from left to right): "go straight ahead", "go left", "go right", "go in any direction" (like the "green" of a normal traffic light), "stop, unless the emergency brake is needed" (equal to "amber"), and "stop" (equal to "red").
Public transport signals in the Asia-Pacific region
In Japan, tram signals are under the regular vehicle signal; however, the colour of the signal intended for trams is orange. The small light at the top tells the driver when the vehicle's transponder signal is received by the traffic light. In Hong Kong, an amber T-signal is used for trams, in place of the green signal. In addition, at any tramway junction, another set of signals is available to indicate the direction of the tracks. In Australia and New Zealand, a white "B" or "T" sometimes replaces the green light indicating that buses or trams (respectively) have right of way.
Preemption and priority
Some regions have signals that are interruptible, giving priority to special traffic, usually emergency vehicles such as firefighting apparatus, ambulances, and police cars. Most of the systems operate with small transmitters that send radio waves, infrared signals, or strobe light signals that are received by a sensor on or near the traffic lights. Some systems use audio detection, where a certain type of siren must be used and detected by a receiver on the traffic light structure.
Upon activation, the normal traffic light cycle is suspended and replaced by the "preemption sequence": the traffic lights to all approaches to the intersection are switched to "red" with the exception of the light for the vehicle that has triggered the preemption sequence. Sometimes, an additional signal light is placed nearby to indicate to the preempting vehicle that the preempting sequence has been activated and to warn other motorists of the approach of an emergency vehicle. The normal traffic light cycle resumes after the sensor has been passed by the vehicle that triggered the preemption.
In lieu of preemptive mechanisms, in most jurisdictions, emergency vehicles are not required to respect traffic lights. However, emergency vehicles must slow down, proceed cautiously and activate their emergency lights to alert oncoming drivers to the preemption when crossing an intersection against the light.
Unlike preemption, which immediately interrupts a signal's normal operation to serve the preempting vehicle and is usually reserved for emergency use, "priority" is a set of strategies intended to reduce delay for specific vehicles, especially mass transit vehicles such as buses. A variety of strategies exist to give priority to transit but they all generally work by detecting approaching transit vehicles and making small adjustments to the signal timing. These adjustments are designed to either decrease the likelihood that the transit vehicle will arrive during a red interval or decrease the length of the red interval for those vehicles that are stopped. Priority does not guarantee that transit vehicles always get a green light the instant they arrive as preemption does.
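As a rough sketch of the two most common tactics (extending the green slightly for a bus that will just miss it, or bringing the next green forward when it arrives on red), the hypothetical Python fragment below adjusts a stage by a bounded number of seconds; the names and the 10-second cap are illustrative, not any real controller's interface.

def adjust_for_bus(arrival_s, green_remaining_s, red_duration_s, max_adjust_s=10.0):
    if arrival_s <= green_remaining_s:
        return ("none", 0.0)                      # the bus makes the green anyway
    if arrival_s - green_remaining_s <= max_adjust_s:
        return ("extend_green", arrival_s - green_remaining_s)
    return ("early_green", min(max_adjust_s, red_duration_s))  # shorten the conflicting red

print(adjust_for_bus(arrival_s=6, green_remaining_s=4, red_duration_s=40))
# ('extend_green', 2)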
Operation
A variety of different control systems are used to operate signal cycles smoothly, ranging from simple clockwork mechanisms to sophisticated computerised control systems. Computerised systems are normally actuated, i.e. controlled by loop detectors or other sensors on junction approaches. Area-wide coordination can allow green wave systems to be set up for vehicles or cycle tracks. Smart traffic light systems combine traditional actuation, a wider array of sensors and artificial intelligence to further improve performance of signal systems. A traffic signal junction or crossing is typically controlled by a controller mounted inside a cabinet nearby.
"Phases" (or "signal groups" in Australia and New Zealand) are indications show simultaneously, e.g. multiple green lights which control the same traffic approach. A "movement" is any path through the junction which vehicles or pedestrians are permitted to take, which is "conflicting" if these paths cross one another. A stage (or "phase" in ANZ) is a group of non-conflicting phases which move at the same time. The stages are collectively known as a "cycle". The time between two conflicting green phases is called an "intergreen period", which is set at an appropriate length for the junction to safely clear, especially for turning traffic which may be waiting in the centre of the junction. This often results in an all red stage, when all approaches are shown a red light and no vehicle can proceed. This all red is sometimes extended to allow a pedestrian scramble, where pedestrians can cross the empty junction in any direction all at once. Some signals have no "all red" phase: the light turns green for cross traffic the instant the other light turns red.
Many traffic light installations are fitted with vehicle actuation, i.e. detection, to improve the flexibility of traffic systems to respond to varying traffic flows. Detectors come in the form of digital sensors fitted to the signal heads or induction loops within the road surface. Induction loops are beneficial due to their smaller chance of breakdown, but their simplicity can limit their ability to handle some situations, particularly involving lighter vehicles such as motorcycles or pedal cycles. This situation most often occurs at times of day when other traffic is sparse as well as when the small vehicle is coming from a direction that does not have a high volume of traffic.
Timing
The timing of the intergreen is usually based on the size of the intersection and can range from two to five seconds. Modelling programs include the ability to calculate intergreen times automatically. Intergreen periods are determined by considering every possible conflict point in the junction (including conflicts with pedestrians): for each point, the time it would take the last vehicle losing right of way to clear it is compared with the time it would take the first vehicle from the next stage to reach it. At actuated junctions, intergreens can be varied to account for traffic conditions.
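Put as a calculation: for each conflict point, the required separation is the clearing time of the movement losing right of way minus the entry time of the movement gaining it, and the intergreen must cover the worst case across all conflict points. A minimal Python sketch with hypothetical distances and speeds:

def intergreen_seconds(conflict_points, minimum_s=5.0):
    required = minimum_s
    for losing_dist_m, losing_speed_ms, gaining_dist_m, gaining_speed_ms in conflict_points:
        clearing_time = losing_dist_m / losing_speed_ms    # last vehicle clears the conflict point
        entry_time = gaining_dist_m / gaining_speed_ms     # first vehicle reaches the conflict point
        required = max(required, clearing_time - entry_time)
    return required

# Each tuple: (losing distance m, losing speed m/s, gaining distance m, gaining speed m/s).
points = [(35.0, 5.0, 12.0, 12.0), (28.0, 5.0, 20.0, 12.0)]
print(intergreen_seconds(points))  # 6.0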
Engineers also need to set the amber timings (and red-amber, where appropriate), which is normally standardised by a traffic authority. For example, in the UK, the amber time is fixed nationally at three seconds and the red-amber time at two seconds, which results in a minimum intergreen time of five seconds (plus any all-red time). The US also uses a minimum of three seconds, but local traffic authorities can make timings longer, especially on wider, suburban roads. This variation has resulted in controversy when municipalities with shorter amber times use red light cameras. Where pedestrian signals are used, the timing of the "invitation to cross" – the period where a steady walk signal shows – and clearance periods – the time when the walk signal flashes or no signal is shown – need to be calculated. This is normally set against a design speed, e.g. . Similarly, these can be made extendable using sensors, allowing slower-moving pedestrians more time to cross the street.
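The clearance period itself is essentially the crossing length divided by the design walking speed, rounded up to whole seconds. A hedged Python example, using 1.2 m/s purely as an illustrative design speed (not necessarily the value the text above refers to):

import math

def clearance_time_s(crossing_length_m, walking_speed_ms=1.2):
    # Time for the slowest intended pedestrian to clear the carriageway, in whole seconds.
    return math.ceil(crossing_length_m / walking_speed_ms)

print(clearance_time_s(9.0))   # 8 seconds for a 9 m wide carriageway
print(clearance_time_s(14.4))  # 12 seconds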
Design guidance
National or sub-national highway authorities often issue guidance documents on the specification of traffic signals and design of signalised intersections according to national or local regulations. For example, in the United States the Federal Highway Administration issues the Manual on Uniform Traffic Control Devices and the Signalized Intersections Information Guide, which is a synthesis of best practices and treatments to help practitioners make informed decisions.
Variable lane control
Variable lane control is a form of intelligent transportation systems which involve the use of lane-use control signals, typically on a gantry above a carriageway. These lights are used in tidal flow systems to allow or forbid traffic to use one or more of the available lanes by the use of green lights or arrows (to permit) or by red lights or crosses (to prohibit). Variable lane control may be in use at toll plazas to indicate open or closed booths; during heavy traffic to facilitate merging traffic from a slip road.
In the US, most notably the Southeastern, there often is a "continuous-flow" lane. This lane is protected by a single, constant-green arrow pointing down at the lane(s) permitting the continuous flow of traffic, without regard to the condition of signals for other lanes or cross streets. Continuous lanes are restricted in that vehicles turning from a side street may not cross over the double white line to enter the continuous lane, and no lane changes are permitted to the continuous lane from an adjacent lane or from the continuous lane to an adjacent lane until the double white line has been passed. Some continuous lanes are protected by a raised curb located between the continuous lane and a normal traffic lane, with white and/or amber reflective paint or tape, prohibiting turning or adjacent traffic from entering the lane.
Continuous-flow traffic lanes are found only at "T" intersections where there is no side street or driveway entrance on the right side of the main thoroughfare; additionally, no pedestrians are permitted to cross the main thoroughfare at intersections with a continuous-flow lane, although crossing at the side street may be permitted. Intersections with continuous-flow lanes will be posted with a white regulatory sign approximately before the intersection with the phrase, "right lane continuous traffic," or other, similar, wording. If the arrow is extinguished for any reason, whether by malfunction or design, traffic through the continuous lane will revert to the normal traffic pattern for adjacent lanes, except that turning or moving into or out of the restricted lane is still prohibited.
Waterways and railways
The three-aspect standard is also used at locks on the Upper Mississippi River. Red means that another vessel is passing through. Amber means that the lock chamber is being emptied or filled to match the level of the approaching vessel. After the gate opens, green means that the vessel may enter.
Railroad signals, for stopping trains in their own right of way, generally use the opposite positioning of the colours; that is, for signals above the driver's eyeline, green on top and red below is the standard placement of the signal colours on railroad tracks. There are three reasons for this variation: there is no risk that railway signals will be masked by a tall vehicle between the driver and the signal; train speeds in fog are much higher than for road vehicles, so it is important that the most restrictive signal is closest to the driver's eyeline; and with railway signals often in exposed rural locations, there is a risk of any signal other than the bottom one being masked by snow building up on the hood of the signal below.
Rules
Traffic lights control flows of traffic using social norms and legal rules. In most jurisdictions, it is against the law to disobey traffic signals and the police, or devices such as red light cameras, can issue fines or other penalties – and in some cases prosecute – drivers who break those laws. US-based studies have found that the majority of drivers think that it is dangerous to run a red light at speed and the most common reason for red light running include inattentive driving, following an oversized vehicle or during inclement weather.
The rules governing traffic light junctions for vehicles differ by jurisdiction. For example, it is common in North America that drivers can turn kerb-to-kerb (i.e. turning right at most junctions), even when a red light shows. On the other hand, this turn on red rule is uncommon in Europe, unless an arrow signal or traffic sign specifically permits it.
Design
Bulbs
Conventional traffic signal lighting, still common in some areas, uses a standard light bulb. The light then bounces off a mirrored glass or polished aluminium reflector bowl, and out through a polycarbonate plastic or glass signal lens. In some signals, these lenses were cut to include a specific refracting pattern. Traditionally, incandescent and halogen bulbs were used. Because of the low efficiency of light output and a single point of failure (filament burnout), some traffic authorities are choosing to retrofit traffic signals with LED arrays that consume less power, have increased light output, and last significantly longer. Moreover, in the event of an individual LED failure, the aspect will still operate albeit with a reduced light output. The light pattern of an LED array can be comparable to the pattern of an incandescent or halogen bulb fitted with a prismatic lens.
The low energy consumption of LED lights can pose a driving risk in some areas during winter. Unlike incandescent and halogen bulbs, which generally get hot enough to melt away any snow that may settle on individual lights, LED displays – using only a fraction of the energy – remain too cool for this to happen. As a response to the safety concerns, a heating element on the lens was developed.
Programmable visibility signals
Signals such as the 3M High Visibility Signal utilize light-diffusing optics and a Fresnel lens to create the signal indication. The light from a 150 W PAR46 sealed-beam lamp in these "programmable visibility" signals passes through a set of two glass lenses at the back of the signal. The first lens, a frosted glass diffusing lens, diffuses the light into a uniform ball of light around five inches in diameter. The light then passes through a nearly identical lens known as an optical limiter (3M's definition of the lens itself), also known as a "programming lens", also five inches in diameter.
Using a special aluminium foil-based adhesive tape, these signals are "masked" or programmed by the programming lens so that only certain lanes of traffic will view the indication. At the front of these programmable visibility signals is a 12" Fresnel lens, each lens tinted to meet United States Institute of Transportation Engineers (ITE) chromaticity and luminance standards. The Fresnel lens collimates the light output created by the lamp and creates a uniform display of light for the lane in which it is intended.
In addition to being positioned and mounted for desired visibility for their respective traffic, some traffic lights are also aimed, louvered, or shaded to minimize misinterpretation from other lanes. For example, a Fresnel lens on an adjacent through-lane signal may be aimed to prevent left-turning traffic from anticipating its own green arrow. Intelight Inc. manufactures a programmable traffic signal that uses a software-controlled LED array and electronics to steer the light beam toward the desired approach. The signal is programmed differently from the 3M and McCain models: it requires a connection to a laptop or smartphone with the manufacturer's software installed. Connections can be made directly with a direct-serial interface kit, or wirelessly with a radio kit over Wi-Fi to the signal. In addition to aiming, Fresnel lenses, and louvers, visors and back panels are also useful in areas where sunlight would diminish the contrast and visibility of a signal face. Typical applications for these signals were skewed intersections, specific multi-lane control, left-turn pocket signals, or other areas where complex traffic situations existed.
Size
In the United States, traffic lights are currently designed with lights approximately in diameter. Previously the standard had been ; however, those are slowly being phased out in favour of the larger and more visible 12 inch lights. Variations used have also included a hybrid design, which had one or more 12 inch lights along with one or more lights of the smaller size on the same signal head.
In the United Kingdom, 12-inch lights were implemented only with Mellor Design Signal heads designed by David Mellor. These were designed for symbolic optics to compensate for the light loss caused by the symbol. However, following a study sponsored by the UK Highways Agency and completed by Aston University, Birmingham, UK, an enhanced optical design was introduced in the mid-1990s. Criticism of sunlight washout (cannot see the illuminated signal due to sunlight falling on it), and sun-phantom (signal appearing to be illuminated even when not due to sunlight reflecting from the parabolic mirror at low sun angles), led to the design of a signal that used lenslets to focus light from a traditional incandescent bulb through apertures in a matt black front mask. This cured both problems in an easily manufactured solution. This design proved successful and was taken into production by a number of traffic signal manufacturers through the engineering designs of Dr. Mark Aston, working firstly at the SIRA Ltd in Kent, and latterly as an independent optical designer.
The manufacturers took a licence for the generic design from the Highways Agency, with Dr. Aston engineering a unique solution for each manufacturer. Producing both bulb and LED versions of the signal aspects, these signals are still the most common type of traffic light on UK roads. With the invention of anti-phantom, highly visible Aston lenses, lights of could be designed to give the same output as plain lenses, so a larger surface area was unnecessary. Consequently, lights of are no longer approved for use in the UK and all lights installed on new installations have to be in accordance with TSRGD (Traffic Signs Regulations and General Directions). Exemptions are made for temporary or replacement signals.
Mounting and placement
The MUTCD identifies five types of traffic light mounts. On pedestals, signal heads are mounted on a single pole (this is the normal installation method for the UK). On mast arms, signal heads are mounted on a rigid arm protruding from the pole over the road. On strained poles, signals are suspended over a roadway on a wire, attached to poles at opposite kerbs. This is the most common installation method in the United States. Unipoles are similar to strain poles, but use a single structure over the road, rather than two poles linked with wire. Finally, signals can be attached to existing structures such as an overpass. Dummy lights are traffic signals located in the centre of a junction, which operate on a fixed cycle. These have generally been decommissioned due to safety concerns; however, a number remain due to historic value.
Signals can either be placed nearside – between the stop line and the kerbline of the intersecting road – or farside – on the opposite side of the junction. In European countries, signals are often placed on the nearside. In the UK, at least two signal heads are required (known as the primary and secondary heads), one of which is normally nearside and the other of which could be nearside or farside. In the US, signals are normally located farside, though in some states, nearside signals are also used. Nearside signals can be beneficial to road safety, as drivers have more time to see a red light and are less likely to encroach on pedestrian crossings.
Effects
Drivers spend on average around 2% of journey time passing through signalised junctions. Traffic lights can increase the traffic capacity at intersections and reduce delay for side road traffic, but can also result in increased delay for main road traffic. Hans Monderman, the innovative Dutch traffic engineer, and pioneer of shared space schemes, was sceptical of their role, and is quoted as having said of them: "We only want traffic lights where they are useful and I haven't found anywhere where they are useful yet."
A World Economic Forum study found that signalised junctions are linked to higher rates of localised air pollution. Drivers accelerate and stop frequently at lights and as such peak particle concentration can be around 29 times higher than during free-flow conditions. The WEF recommends that traffic authorities synchronise traffic signals, consider alternative traffic management systems and consider placing traffic lights away from residential areas, schools, and hospitals.
The separation of conflicting streams of traffic in time can reduce the chances of right-angle collisions by turning traffic and cross traffic, but they can increase the frequency of rear-end crashes by up to 50%. Since right-angled and turn-against-traffic collisions are more likely to result in injuries, this is often an acceptable trade-off. They can also adversely affect the safety of bicycle and pedestrian traffic. Between 1979 and 1988, the city of Philadelphia, Pennsylvania, removed signals at 199 intersections that were not warranted. On average, the intersections had 24% fewer crashes after the unwarranted signals were removed. The traffic lights had been erected in the 1960s because of since-resolved protests over traffic. By 1992, over 800 traffic lights had been removed at 426 intersections, and the number of crashes at these intersections dropped by 60%.
Justification
Criteria have been developed to help ensure that new traffic lights are installed only where they will do more good than harm and to justify the removal of existing traffic lights where they are not warranted. They are most often placed on arterial roads at intersections with either another arterial road or a collector road, or on an expressway where an interchange is not warranted. In some situations, traffic signals can also be found on collector roads in busy settings.
The International Municipal Signal Association provides input as to standards concerning traffic signals and control devices. One example is the input the association provided for the Manual on Uniform Traffic Control Devices (MUTCD). The MUTCD is issued by the Federal Highway Administration (FHWA) of the United States Department of Transportation (USDOT).
In the United States, the criteria for installation of a traffic control signal are prescribed by the Manual on Uniform Traffic Control Devices (MUTCD), which defines the criteria in nine warrants:
Eight-hour vehicular volume. Traffic volume must exceed prescribed minima for eight hours of an average weekday.
Four-hour vehicular volume. Traffic volume must exceed prescribed minima for four hours of an average weekday.
Peak hour volume or delay. This is applied only in unusual cases, such as office parks, industrial complexes, and park and ride lots that attract or discharge large numbers of vehicles in a short time, and for a minimum of one hour of an average weekday. The side road traffic suffers undue delays when entering or crossing the major street.
Pedestrian volume. If the traffic volume on a major street is so heavy that pedestrians experience excessive delays in attempting to cross it.
School crossing. If the traffic density at school crossing times exceeds one per minute, which is considered to provide too few gaps in the traffic for children to safely cross the street.
Coordinated signal system. For places where adjacent traffic control signals do not keep traffic grouped together efficiently.
Crash experience. The volumes in the eight- and four-hour warrants may be reduced if five or more right-angle and cross traffic turn collisions have happened at the intersection in a twelve-month period.
Roadway network. Installing a traffic control signal at some intersections might be justified to encourage concentration and organization of traffic flow on a roadway network.
Intersection near a grade crossing. A traffic control signal is often justified at an intersection near a railroad crossing, in order to provide a preemption sequence to allow traffic queued up on the tracks an opportunity to clear the tracks before the train arrives.
In the US, an intersection is usually required to meet one or more of these warrants before a signal is installed. However, meeting one or more warrants does not require the installation of a traffic signal, it only suggests that they may be suitable. It could be that a roundabout would work better. There may be other unconsidered conditions that lead traffic engineers to conclude that a signal is undesirable. For example, it may be decided not to install a signal at an intersection if traffic stopped by it will back up and block another, more heavily trafficked intersection. Also, if a signal meets only the peak hour warrant, the advantages during that time may not outweigh the disadvantages during the rest of the day.
In other contexts
The symbolism of a traffic light, and the meanings of the three primary colours used in traffic lights, are frequently found in many other contexts. Since they are often used as single spots of colour without the context of vertical position, they may not be comprehensible to colour-blind viewers, who include up to one in ten males.
Traffic lights have also been used in computer software, such as the macOS user interface, and in pieces of artwork, particularly Traffic Light Tree in London, UK.
Racing
Automobile racing circuits can also use standard traffic signals to indicate to racing car drivers the status of racing. On an oval track, four sets may be used, two facing the straightaways and two facing the middle of the 180-degree turns between them. Green would indicate racing is underway, while amber would indicate to slow down, for example while following a pace car; red would indicate to stop, probably for emergency reasons.
Scuderia Ferrari, a Formula One racing team, formerly used a traffic light system during their pit stops to signal to their drivers when to leave the pits. The red light was on while the tires were being changed and fuel was being added, amber was on once the tires had been changed, and green was on when all work was completed. The system was usually completely automatic. However, it was withdrawn after the 2008 Singapore Grand Prix, after it heavily delayed Felipe Massa while he was leading the race. Heavy traffic in the pit lane had forced the team to operate the system manually, and a mechanic accidentally pressed the green light button while the fuel hose was still attached to the car, causing Massa to drive off, towing the fuel hose along. Additionally, Massa drove into the path of Adrian Sutil. He finally stopped at the end of the pit lane, forcing Ferrari's mechanics to sprint down the whole of the pit lane to remove the hose. As a result of this, and the penalty he incurred for impeding Sutil, Massa finished 13th. Ferrari decided to use a traditional "lollipop" for the remainder of the 2008 season.
Another type of traffic light that is used in racing is the Christmas Tree, which is used in drag racing. The Christmas Tree has six lights: a blue staging light, three amber lights, a green light, and a red light. The blue staging light is divided into two parts: pre-stage and stage. Sometimes, there are two sets of bulbs on top of each other to represent them. Once a driver is staged at the starting line, the starter will activate the light to commence racing, which can be done in two ways. If a Pro tree is used, the three amber lights flash at the same time. For the Sportsman tree, the amber lights flash from top to bottom. When the green light comes up, the race officially begins, but if a driver crosses the line before that happens, a red light comes up and that counts as a foul.
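The difference between the two start sequences is simply whether the three ambers flash together or in turn before the green. The Python sketch below is a hypothetical illustration; the 0.5-second step is an illustrative delay, not an official specification.

import time

def run_tree(pro_tree, step_s=0.5):
    print("staged")
    if pro_tree:
        print("all three ambers flash")      # Pro tree: ambers flash simultaneously
        time.sleep(step_s)
    else:
        for level in ("top amber", "middle amber", "bottom amber"):  # Sportsman tree
            print(level)
            time.sleep(step_s)
    print("green")

def start_result(leave_time_s, green_time_s):
    # Leaving before the green light is a foul, signalled by the red light.
    return "red light (foul)" if leave_time_s < green_time_s else "clean start"

run_tree(pro_tree=True)
print(start_result(leave_time_s=0.48, green_time_s=0.50))  # red light (foul)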
As a rating mechanism
The colours red, amber, and green are often used as a simple-to-understand rating system for products and processes. It may be extended by analogy to provide a greater range of intermediate colours, with red and green at the extremes.
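Extended to intermediate colours, such a rating is in effect a linear interpolation between red and green. A minimal Python sketch, with the endpoint colours chosen purely for illustration:

def rating_colour(score):
    # Map a score in 0..1 to an (R, G, B) colour from red (0) through amber to green (1).
    s = max(0.0, min(1.0, score))
    red, green = (255, 0, 0), (0, 160, 0)
    return tuple(round(r + (g - r) * s) for r, g in zip(red, green))

print(rating_colour(0.0))  # (255, 0, 0)  red
print(rating_colour(0.5))  # (128, 80, 0) amber-like
print(rating_colour(1.0))  # (0, 160, 0)  green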
In Unicode
In Unicode, the symbol U+1F6A5 is HORIZONTAL TRAFFIC LIGHT and U+1F6A6 is VERTICAL TRAFFIC LIGHT.
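For reference, the two characters can be produced directly from their Unicode names, for example in Python:

print("\N{HORIZONTAL TRAFFIC LIGHT}")  # U+1F6A5
print("\N{VERTICAL TRAFFIC LIGHT}")    # U+1F6A6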
| Technology | Road infrastructure | null |
163423 | https://en.wikipedia.org/wiki/Amazon%20parrot | Amazon parrot | Amazon parrots are parrots in the genus Amazona. They are medium-sized, short-tailed parrots native to the Americas, with their range extending from South America to Mexico and the Caribbean. Amazona is one of the 92 genera of parrots that make up the order Psittaciformes and is in the family Psittacidae, one of three families of true parrots. It contains about thirty species. Most amazons are predominantly green, with accenting colors that depend on the species, and they can be quite vivid. They feed primarily on seeds, nuts, and fruits, supplemented by leafy matter.
Many amazons have the ability to mimic human speech and other sounds. Partly because of this, they are popular as pets or companion parrots, and a small industry has developed in breeding parrots in captivity for this market. This popularity has led to many parrots being taken from the wild to the extent that some species have become threatened. The United States and the European Union have made the capture of wild parrots for the pet trade illegal in an attempt to help protect wild populations. Feral populations of amazons can be found in different parts of the world, including in South Africa, Europe, and major cities in the Americas.
Taxonomy
The genus Amazona was introduced by the French naturalist René Lesson in 1830. The type species was subsequently designated as the mealy amazon (Amazona farinosa) by the Italian zoologist Tommaso Salvadori in 1891. The genus name is a Latinized version of the name Amazone given to them in the 18th century by the Comte de Buffon, who believed they were native to Amazonian jungles.
Amazona contains about thirty species of parrots, such as the Cuban amazon, festive amazon, and red-necked amazon. The taxonomy of the yellow-crowned amazon (Amazona ochrocephala complex) is disputed, with some authorities only listing a single species (A. ochrocephala), while others split it into as many as three species (A. ochrocephala, A. auropalliata and A. oratrix). The split is primarily based on differences related to extension of yellow to the plumage and the colour of bill and legs. Phylogenetic analyses of mtDNA do not support the traditional split.
A 2017 study published by ornithologists Tony Silva, Antonio Guzmán, Adam D. Urantówka and Paweł Mackiewicz proposed a new species from the Yucatán Peninsula area in Mexico called the blue-winged amazon (Amazona gomezgarzai). However, subsequent studies question its validity, indicating that these organisms possibly had an artificial hybrid origin.
The yellow-faced parrot (Alipiopsitta xanthops) was traditionally placed within this genus, but recent research has shown that it is more closely related to the short-tailed parrot and species in the genus Pionus, resulting in it being transferred to the monotypic genus Alipiopsitta.
Extinct hypothetical species
Populations of amazon parrots that lived on the Caribbean islands of Martinique and Guadeloupe are now extinct. It is not known if they were distinct species or subspecies, or if they originated from parrots introduced to the islands by humans, so they are regarded as hypothetical extinct species. No evidence of them remains, and their taxonomy may never be established. Populations of several parrot species were described mainly in the unscientific writings of early travelers, and subsequently scientifically described by several naturalists (to have their names linked to the species that they were proposing) mainly in the 20th century, with no more evidence than the earlier observations and without specimens. An illustration of a specimen termed "George Edwards' parrot" has sometimes been considered a possibly distinct, extinct species, but it may also have been a yellow-billed or Cuban amazon with aberrant colouration.
Martinique amazon, Amazona martinica. A.H. Clark, 1905.
Guadeloupe amazon, Amazona violacea. Originally called Psittacus violaceus by J.F. Gmelin in 1789.
Description
Most amazon parrots are predominantly green, with contrasting colours on parts of the body such as the crown, face and flight feathers; these colours vary by species. They are medium- to large-sized parrots, measuring between long, and have short, rounded tails and wings. They are heavy-billed, with a distinct notch on the upper mandible and a prominent naked cere bearing setae. Male and female amazon parrots are roughly the same size, though males can sometimes be larger; most amazon parrots do not show sexual dimorphism, exceptions being the white-fronted amazon, the Yucatan amazon and the turquoise-fronted amazon, the last of which is sexually dimorphic only when viewed in the ultraviolet spectrum, invisible to humans. They can weigh from 190 g to more than 565 g. The average body temperature of an amazon parrot is 41.8 degrees Celsius (about 107.2 degrees Fahrenheit). Their heart rates range from 340 to 600 beats per minute, with 15-45 breaths per minute.
Distribution and habitat
Amazon parrots are native to the Neotropical Americas, ranging from South America to Mexico, and the Caribbean. Outside of their native habitats, more than 14 species of amazon parrots have been observed. In Italy, there are two reproductive populations of Amazona, dating back to their introduction in 1991 to the city of Genoa. The birds are present in Germany, but their status is unclear. They are also found in Spain, where the most common parrot present is the turquoise-fronted amazon. Portugal, California (where the birds were largely introduced during the 20th century), Puerto Rico, South Africa, and the Netherlands have also reported sightings of Amazona parrots. More than 12 species of amazon parrots can be found in the US state of Florida, mostly around the city of Miami. Feral populations are also present in São Paulo, Porto Alegre, Buenos Aires, and Río Cuarto within South America.
Amazon parrots mostly inhabit forests such as scrub forests, palm groves and rainforests, but some prefer drier areas such as savannas. Vinaceous-breasted amazons are thought to prefer parana pine trees, and have been shown to prefer forest fragments or isolated trees, while Tucumán amazons nest at higher elevations than other amazon parrots, mostly in Blepharocalyx trees, within the cloud-forest. Yellow-headed amazons nest in the canopy of tall trees, mostly in Astronium graveolens and Enterolobium cyclocarpum.
Behavior
Breeding
The exact breeding age of wild birds is not precisely known. For captive-bred birds, the average breeding age is around four years, with some larger groups such as yellow-crowned amazons requiring six years. Captive birds as old as 30 years have laid eggs. Amazon parrots average five weeks for nest initiation, and most successful nestings average 2.2 fledglings. Amazon parrots are seasonal breeders and mostly breed during late winter and spring. This may be due to seasonal food availability or a lower chance of flooding, as the period is generally dry. West Indian amazon parrots tend to breed earlier than Mexican amazon parrots: Mexican amazon parrots peak in March to April, while West Indian amazon parrots peak in March.
Captive birds tend to be less fertile. A variety of hypotheses have been proposed to explain this: Low (1995) suggests that it is because amazon parrots have shorter breeding seasons, while Hagen (1994) suggests that male and female parrots may not be ready for breeding at the same times.
Feeding
Amazon parrots feed primarily on seeds, nuts, fruits, berries, buds, nectar, and flowers, supplemented by leafy matter. Their beaks enable them to crack nut shells with ease, and they hold their food with a foot. In captivity, the birds enjoy vegetables such as squash, boiled potato, peas, beans, and carrots. Mainland amazon parrots forage and then feed their young twice a day (usually one hour after sunrise and one and a half hours before sunset), while West Indian amazon parrots do so 4-5 times. Hypotheses proposed for why this is include the nutritional value of food in the region as well as temperature stress. During the downtime before foraging expeditions in the afternoon, amazon parrots spend their time preening themselves and their mates.
Communication and sociality
Amazon parrots communicate mostly by voice. Species such as the orange-winged amazon have nine different recorded vocalizations used in different situations. Patterns of gestural communication have also been observed in these birds, thought to be used to avoid predators. In general, amazon parrots are very social in their foraging, roosting, and nesting. Most travel in large groups and have clumped nesting, but the four species in the Lesser Antilles are less social; one proposed explanation is a lack of predation risk. In captivity, amazon parrots are known for their ability to talk, learning to communicate by mimicking speech and other sounds of human origin. They also appear to have an affinity for human music and singing.
Extensive studies of vocal behavior in wild yellow-naped amazons show the presence of vocal dialects, in which the repertoire of calls that parrots vocalize change at discrete geographic boundaries, similar to how humans have different languages or dialects. Dialects are stable over long periods of time and are meaningful to the parrots; they are less responsive to calls that are not their own dialect.
Conservation status
As of June 2020, 58% (18 out of 31) of species were listed by the International Union for Conservation of Nature (IUCN) as threatened or extinct in the wild. The most common threats are habitat loss, persecution, the pet trade, and the introduction of other species. The Puerto Rican amazon is critically endangered. 15 species are on Appendix 1 of the Convention on International Trade in Endangered Species, while 16 are on Appendix 2. In the case of illegal smuggling of amazon parrots, some smugglers bleach the heads of green-headed parrots to make them look yellow and sell them off as young amazon parrots, which can cause dermatitis. The United States Fish and Wildlife Service and the United States Department of Agriculture sometimes confiscate and quarantine parrots for Newcastle disease and then auction them off.
The Puerto Rican parrot in particular, as a critically endangered species, has seen considerable conservation efforts, including but not limited to changes in land management, legal protection, research, and increasing nesting success. However, these efforts were significantly hindered by natural events such as Hurricane Hugo, which affected the Luquillo forest in which most Puerto Rican parrots were living.
Within the rest of the West Indies, the four species of amazon parrots in the Lesser Antilles have seen successful attempts at increasing their population. In the Greater Antilles, the population of amazon parrots has been stable. The Cuban amazon has seen greatly successful conservation efforts and as a result has experienced a large increase in its population.
Aviculture
Low (2005) describes adaptability and joyfulness as the special positive attributes of the genus from an avicultural perspective. The yellow-headed amazon, yellow-naped amazon, and turquoise-fronted amazon are among the species commonly kept as pets. They can live for 30 to 50 years, with one report of a yellow-crowned amazon living for 56 years in captivity. However, some amazons can show hormonally induced aggression and attack their owners, which has led owners to seek behavior modification for their parrots. By contrast, the lilacine amazon and mealy amazon are said to possess gentle, easy-going and affectionate temperaments, unlike many other amazon species. To maintain health and happiness, pet parrots require much more training than domesticated animals such as dogs or even cats. They require understanding, manipulative toys, and rewards for good pet-like behavior, or they can develop quite aggressive behaviors (particularly male birds), which can be clearly seen in the bird's body language: pinning the eyes, flaring the tail, raising the head and neck feathers and engaging in a "macho strut". They have a strong, innate need to chew and therefore require safe, destructible toys. One of the main problems amazon parrots face in captivity is obesity, which can be avoided with a correct diet and exercise. In captivity, it is recommended to feed amazon parrots a varied diet consisting mostly of pelleted food. Seeds should never make up the whole diet; they should form part of a balanced diet that also includes fresh fruit (except avocado, which is toxic to parrots) and vegetables, with nuts and seeds provided only in moderation. Amazon parrots should also be given opportunities to forage for food instead of simply being given it, as they are motivated to forage even when an easier alternative is available.
Amazon parrots should be given a dark and quiet sleeping area. It is recommended to give the bird either downtime and naps or to keep them in total darkness for 12 hours so they can rest. Parrots also need to be bathed or sprayed with water once every week to allow for bathing behaviors.
Trade
Amazon parrots are traded and exploited as pets. Archeological evidence shows that the parrot trade has existed in South America since pre-Columbian times, with mummified parrots (including amazon species) being found in the Atacama Desert region of Chile. The most traded species of amazons are blue-fronted amazons and yellow-crowned/yellow-headed amazons. A 1992 ban on wild bird trade by the US led to a sharp drop in the trade and a diversion of 66% of it to the European Union, and a further EU ban on the trade in 2005 led to another drop. Between 1980 and 2013, 372,988 amazon parrots were traded. Some illegal trade still occurs between Mexico and the United States.
| Biology and health sciences | Psittaciformes | Animals |
163617 | https://en.wikipedia.org/wiki/Chicory | Chicory | Common chicory (Cichorium intybus) is a somewhat woody, perennial herbaceous plant of the family Asteraceae, usually with bright blue flowers, rarely white or pink. Native to Europe, it has been introduced to the Americas and Australia.
Many varieties are cultivated for salad leaves, chicons (blanched buds), or roots (var. sativum), which are baked, ground, and used as a coffee substitute and food additive. In the 21st century, inulin, an extract from chicory root, has been used in food manufacturing as a sweetener and source of dietary fiber. Chicory is also grown as a forage crop for livestock.
Description
When flowering, chicory has a tough, grooved, and more or less hairy stem. It can grow to tall. The leaves are stalked, lanceolate and unlobed; they range from in length (smallest near the top) and wide. The flower heads are wide, and usually light blue or lavender; it has also rarely been described as white or pink. Of the two rows of involucral bracts, the inner is longer and erect, the outer is shorter and spreading. It flowers from March until October. The seed has small scales at the tip.
Chemistry
Substances which contribute to the plant's bitterness are primarily the two sesquiterpene lactones, lactucin and lactucopicrin. Other components are aesculetin, aesculin, cichoriin, umbelliferone, scopoletin, 6,7-dihydrocoumarin, and further sesquiterpene lactones and their glycosides. Around 1970, it was discovered that the root contains up to 20% inulin, a polysaccharide similar to starch.
Names
Common chicory is also known as blue daisy, blue dandelion, blue sailors, blue weed, bunk, coffeeweed, cornflower, hendibeh, horseweed, ragged sailors, succory, wild bachelor's buttons, and wild endive. ("Cornflower" is also commonly applied to Centaurea cyanus.) Common names for varieties of var. foliosum include endive, radicchio, radichetta, Belgian endive, French endive, red endive, sugarloaf, and witloof (or witlof).
Distribution and habitat
Chicory is native to western Asia, North Africa, and Europe. It lives as a wild plant on roadsides in Europe. The plant was brought to North America by early European colonists. It is also common in China and Australia, where it has become widely naturalized.
It is more common in areas with abundant rain.
Ecology
Chicory is both a cultivated crop and a weedy plant with a cosmopolitan distribution. Analysis of introduced weedy populations in North America has revealed that naturalized weedy chicory is partially descended from domesticated cultivars.
Chicory grows in roadsides, waste places, and other disturbed areas, and can survive in lawns due to its ability to resprout from its low basal rosette of leaves. It typically does not enter undisturbed natural areas. It most prefers limestone soils, but tolerates an array of conditions. Bees, butterflies, and flies feed upon it. Chicory is classified as a drought tolerant plant.
Uses
Culinary
The entire plant is edible.
Raw chicory leaves are 92% water, 5% carbohydrates, 2% protein, and contain negligible fat. In a 100-gram (3½ oz) reference amount, raw chicory leaves provide significant amounts (more than 20% of the Daily Value) of vitamin K, vitamin A, vitamin C, some B vitamins, and manganese. Vitamin E and calcium are present in moderate amounts. Raw endive is 94% water and has low nutrient content.
Root chicory
Root chicory (Cichorium intybus var. sativum) has long been cultivated in Europe as a coffee substitute. The roots are baked, roasted, ground, and used as an additive, especially in the Mediterranean region (where the plant is native). As a coffee additive, it is also mixed into Indian filter coffee and used in parts of Southeast Asia, South Africa, and the southern United States, particularly in New Orleans. In France, a mixture of 60% chicory and 40% coffee is sold under the trade name Ricoré. Chicory has been more widely used during economic crises such as the Great Depression in the 1930s and during World War II in continental Europe. Chicory, with sugar beet and rye, was used as an ingredient of the East German "mixed coffee" introduced during the "East German coffee crisis" of 1976–1979. It is also added to coffee in Spanish, Greek, Turkish, Syrian, Lebanese and Palestinian cuisines.
Some beer brewers use roasted chicory to add flavor to stouts, which are commonly expected to have a coffee-like flavor. Others have added it to strong blond Belgian-style ales to augment the hops, making a beer named after the Dutch name for the plant.
The roots can also be cooked like parsnips.
Leaf chicory
Wild
While edible raw, wild chicory leaves usually have a bitter taste, especially the older leaves. The flavor is appreciated in certain cuisines, such as those of the Ligurian and Apulian regions of Italy and of the southern part of India. In Ligurian cuisine, wild chicory leaves are an ingredient of preboggion, and in the Apulian region, wild chicory leaves are combined with fava bean puree in the traditional local dish fave e cicorie selvatiche. In Albania, the leaves are used as a spinach substitute, mainly served simmered and marinated in olive oil, or as an ingredient in fillings for byrek. In Greece, a variety of wild chicory found in Crete and known as stamnagathi (spiny chicory) is served as a salad with olive oil and lemon juice.
By cooking and discarding the water, the bitterness is reduced, after which the chicory leaves may be sautéed with garlic, anchovies, and other ingredients. In this form, the resulting greens might be combined with pasta or accompany meat dishes.
Cultivated
Chicory may be cultivated for its leaves, usually eaten raw as salad leaves. Cultivated chicory is generally divided into three types, of which there are many varieties:
Radicchio usually has variegated red or red and green leaves. Some only refer to the white-veined red-leaved type as radicchio, also known as red endive and red chicory. It has a bitter and spicy taste, which mellows when it is grilled or roasted. It can also be used to add color and zest to salads. It is largely used in Italy in different varieties, the most famous being the ones from Treviso (known as radicchio rosso di Treviso), from Verona (radicchio di Verona), and Chioggia (radicchio di Chioggia), which are classified as an IGP. It is also common in Greece, where it is known as radiki and mainly boiled in salads, and is used in pies.
Belgian endive is known in Dutch as witloof or witlof ("white leaf"), as chicory in the UK, as witlof in Australia, as endive in France and Canada, and as chicon in parts of northern France, in Wallonia and (in French) in Luxembourg; other local names are used in Italy and Spain. It has a small head of cream-colored, bitter leaves. The harvested root is allowed to sprout indoors in the absence of sunlight, which prevents the leaves from turning green and opening up (etiolation). It is often sold wrapped in blue paper to protect it from light, so as to preserve its pale color and delicate flavor. The smooth, creamy white leaves may be served stuffed, baked, boiled, cut, or cooked in a milk sauce, or simply cut raw. The tender leaves are slightly bitter; the whiter the leaf, the less bitter the taste. The harder inner part of the stem at the bottom of the head can be cut out before cooking to prevent bitterness. Belgium exports chicon/witloof to over 40 countries. The technique for growing these blanched endives was accidentally discovered in the 1850s at the Botanical Garden of Brussels in Saint-Josse-ten-Noode, Belgium. Today, France is the largest producer of endive.
Catalogna chicory (Cichorium intybus var. foliosum), also known as puntarelle, includes a whole subfamily (some varieties from Belgian endive and some from radicchio) of chicory and is used throughout Italy.
Although leaf chicory is often called "endive", true endive (Cichorium endivia) is a different species in the same genus, distinct from Belgian endive.
Chicory root and inulin
Inulin is mainly found in the plant family Asteraceae as a storage carbohydrate (e.g. Jerusalem artichoke, dahlia, and yacon). It is used as a sweetener in the food industry, with 10% of the sweetening power of sucrose and is sometimes added to yogurts as a 'prebiotic'. It is also a source of dietary fiber.
Fresh chicory root may contain 13–23% inulin as a percentage of its total carbohydrate content.
Traditional use
Chicory root contains essential oils similar to those found in plants in the related genus Tanacetum. In alternative medicine, chicory has been listed as one of the 38 plants used to prepare Bach flower remedies.
Forage
Chicory is highly digestible for ruminants and has a low fiber concentration. Chicory roots were once considered an "excellent substitute for oats" for horses due to their protein and fat content. Chicory contains a low quantity of reduced tannins that may increase protein utilization efficiency in ruminants.
Some tannins reduce intestinal parasites. Dietary chicory may be toxic to internal parasites: studies have found that farm animals ingesting chicory carry lower worm burdens, which has led to its use as a forage supplement. Although chicory might have originated in France, Italy and India, much development of chicory for use with livestock has been undertaken in New Zealand.
Forage varieties
'Puna' ('Grasslands Puna'): Developed in New Zealand, Grasslands Puna is well adapted to different climates, being grown from Alberta, Canada, through New Mexico and Florida, to Australia. It is resistant to bolting, which leads to high nutrient levels in the leaves in spring, and it recovers quickly after grazing.
'Forage Feast': A variety from France used for human consumption and also for wildlife plots, where animals such as deer might graze. It is resistant to bolting. It is very cold-hardy, and being lower in tannins than other forage varieties, is suitable for human consumption.
'Choice': bred for high winter and early-spring growth activity, and for lower amounts of lactucin and lactone, which are believed to taint milk. It is also used for seeding deer wildlife plots.
'Oasis': bred for increased lactone rates for the forage industry, and for higher resistance to fungal diseases such as Sclerotinia (mainly S. minor and S. sclerotiorum).
'Puna II': This variety is more winter-active than most others, which leads to greater persistence and longevity.
'Grouse': A New Zealand variety, it is used as a planting companion for forage brassicas. More prone to early flowering than other varieties, it has higher crowns more susceptible to overbrowsing.
'Six Point': A United States variety, winter hardy and resistant to bolting. It is very similar to Puna.
Other known varieties include 'Chico', 'Ceres Grouse', 'Good Hunt', 'El Nino' and 'Lacerta'.
History
The plant has a history reaching back to ancient Egypt. In ancient Rome, a dish called puntarelle was made with chicory sprouts. It was mentioned by Horace in reference to his own diet, which he describes as very simple: "As for me, olives, endives, and light mallows provide sustenance." Chicory was first described as a cultivated plant in the 17th century. When coffee was introduced to Europe, the Dutch thought that chicory made a lively addition to the bean drink.
In 1766, Frederick the Great banned the importation of coffee into Prussia, leading to the development of a coffee substitute by Brunswick innkeeper Christian Gottlieb Förster (died 1801), who gained a concession in 1769–70 to manufacture it in Brunswick and Berlin. By 1795, 22 to 24 factories of this type were operating in Brunswick. Lord Monboddo described the plant in 1779 as the "chicoree", which the French cultivated as a pot herb. In Napoleonic-era France, chicory frequently appeared as an adulterant in coffee, or as a coffee substitute. Chicory was also adopted as a coffee substitute by Confederate soldiers during the American Civil War, and it has become common in the U.S. It was also used in the UK during World War II, where Camp Coffee, a coffee and chicory essence, has been on sale since 1885.
In the U.S., chicory root has long been used as a coffee substitute in prisons. By the 1840s, the port of New Orleans was the second-largest importer of coffee (after New York). Louisianans began to add chicory root to their coffee when Union naval blockades during the American Civil War cut off the port of New Orleans, thereby creating a long-standing tradition.
In culture
Chicory is mentioned in certain ancient Chinese texts about silk production. Among the traditional recommendations was that the primary caretaker of the silkworms, the "silkworm mother", should not eat or even touch it.
The chicory flower is often seen as inspiration for the Romantic concept of the Blue Flower (e.g. in German language Blauwarte ≈ blue lookout by the wayside). Similar to the springwort and moonwort, it could open locked doors, according to European folklore. However, the plant must be gathered at noon or midnight on St. James's Day and cut with gold while being silent, or else one would die afterwards.
Chicory was also believed to grant its possessor invisibility.
| Biology and health sciences | Asterales | null |
163711 | https://en.wikipedia.org/wiki/Harrow%20%28tool%29 | Harrow (tool) | In agriculture, a harrow is a farm implement used for surface tillage. It is used after ploughing for breaking up and smoothing out the surface of the soil. The purpose of harrowing is to break up clods and to provide a soil structure, called tilth, that is suitable for planting seeds. Coarser harrowing may also be used to remove weeds and to cover seed after sowing.
Harrows differ from ploughs, which cut the upper 12 to 25 centimetres (5 to 10 in) of soil and leave furrows (parallel trenches). Harrows differ from cultivators in that they disturb the whole surface of the soil, while a cultivator disturbs only narrow tracks between the crop rows to kill weeds.
There are four general types of harrows: disc harrows, tine harrows (including spring-tooth harrows, drag harrows, and spike harrows), chain harrows, and chain-disk harrows. Harrows were originally drawn by draft animals, such as horses, mules, or oxen, or in some times and places by manual labourers. In modern practice they are almost always tractor-mounted implements, either trailed after the tractor by a drawbar or mounted on the three-point hitch.
A modern development of the traditional harrow is the rotary power harrow, often just called a power harrow.
Harrow action
In modern mechanized farming, generally a farmer will use two harrows, one after the other. The disk harrow is used first to slice up the large clods left by the mould-board plough, followed by the spring-tooth harrow. To save time and fuel they may be pulled by one tractor; the disk hitched to the tractor, and the spring-tooth hitched to, and directly behind, the disk. The result is a smooth field with powdery dirt at the surface.
Types
In cooler climates, the most common types are the disc harrow, the chain harrow, the tine harrow or spike harrow and the spring tine harrow. Chain harrows are often used for lighter work, such as leveling the tilth or covering the seed, while disc harrows are typically used for heavy work, such as following ploughing to break up the sod. In addition, there are various types of power harrow, in which the cultivators are power-driven from the tractor rather than depending on its forward motion.
Tine harrows are used to refine seed-bed conditions before planting, remove small weeds in growing crops, and loosen the inter-row soils to allow water to soak into the subsoil. The fourth general type is the chain-disk harrow, in which disks attached to chains are pulled at an angle over the ground. These harrows move rapidly across the surface; the chain and disks rotate to stay clean while breaking up the top surface to about deep. A smooth seedbed is prepared for planting with one pass.
Chain harrowing can be used on pasture land to spread dung and break up dead material (thatch) in the sward. Similarly, in sports-ground maintenance, light chain harrowing is often used to level off the ground after heavy use to remove and smooth out boot marks and indentations. Used on tilled land in combination with the other two types, chain harrowing rolls remaining larger soil clumps to the surface, where weather breaks them down and prevents interference with seed germination.
All four harrow types can be used in one pass to prepare soil for seeding. Using any combination of two harrows for various tilling processes is also common. Where harrowing provides a very fine tilth or the soil is very light so that it might easily be wind-blown, a roller is often added as the last of the set.
Harrows may be of several types and weights, depending on their purpose. They almost always consist of a rigid frame that holds discs, teeth, linked chains, or other means of moving soil—but tine and chain harrows are often only supported by a rigid towing bar at the front of the set.
In the southern hemisphere, so-called giant discs are a specialised kind of disc harrows that can stand in for a plough in rough country where a mouldboard plough cannot handle tree stumps and rocks, and a disc-plough is too slow (because of its limited number of discs). Giant scalloped-edged discs operate in a set, or frame, that is often weighted with concrete or steel blocks to improve penetration of the cutting edges. This cultivation is usually followed by broadcast fertilisation and seeding rather than drilled or row seeding.
A drag is a heavy harrow.
Power harrow
A rotary power harrow, or simply a power harrow, has multiple sets of vertical tines. Each set of tines is rotated on a vertical axis and tills the soil horizontally. The result is that, unlike a rotary tiller, soil layers are not turned over or inverted, which is useful in preventing dormant weed seeds from being brought to the surface, and there is no horizontal slicing of the subsurface soil that can lead to hardpan formation.
Historical reference
In Europe, harrows were used in antiquity and the Middle Ages. The oldest known illustration of a harrow is in Scene 10 of the eleventh-century Bayeux Tapestry. An Arabic reference to harrows is to be found in Abu Bakr Ibn Wahshiyya's Nabatean Agriculture (Kitab al-Filaha al-Nabatiyya), of the 10th century, but claiming knowledge from Babylonian sources.
| Technology | Agricultural tools | null |
163806 | https://en.wikipedia.org/wiki/Oil%20platform | Oil platform | An oil platform (also called an oil rig, offshore platform, oil production platform, etc.) is a large structure with facilities to extract and process petroleum and natural gas that lie in rock formations beneath the seabed. Many oil platforms will also have facilities to accommodate the workers, although it is also common to have a separate accommodation platform linked by bridge to the production platform. Most commonly, oil platforms engage in activities on the continental shelf, though they can also be used in lakes, inshore waters, and inland seas. Depending on the circumstances, the platform may be fixed to the ocean floor, consist of an artificial island, or float. In some arrangements the main facility may have storage facilities for the processed oil. Remote subsea wells may also be connected to a platform by flow lines and by umbilical connections. These sub-sea facilities may include one or more subsea wells or manifold centres for multiple wells.
Offshore drilling presents environmental challenges, both from the produced hydrocarbons and the materials used during the drilling operation. Controversies include the ongoing US offshore drilling debate.
There are many different types of facilities from which offshore drilling operations take place. These include bottom-founded drilling rigs (jackup barges and swamp barges), combined drilling and production facilities, either bottom-founded or floating platforms, and deepwater mobile offshore drilling units (MODU), including semi-submersibles and drillships. These are capable of operating in water depths up to . In shallower waters, the mobile units are anchored to the seabed. However, in deeper water (more than ), the semisubmersibles or drillships are maintained at the required drilling location using dynamic positioning.
History
Jan Józef Ignacy Łukasiewicz (Polish pronunciation: [iɡˈnatsɨ wukaˈɕɛvitʂ]; 8 March 1822 – 7 January 1882) was a Polish pharmacist, engineer, businessman, inventor, and philanthropist. He was one of the most prominent philanthropists in the Kingdom of Galicia and Lodomeria, a crown land of Austria-Hungary. He was a pioneer who in 1856 built the world's first modern oil refinery.
Around 1891, the first submerged oil wells were drilled from platforms built on piles in the fresh waters of the Grand Lake St. Marys (a.k.a. Mercer County Reservoir) in Ohio. The wide but shallow reservoir was built from 1837 to 1845 to provide water to the Miami and Erie Canal.
Around 1896, the first submerged oil wells in salt water were drilled in the portion of the Summerland field extending under the Santa Barbara Channel in California. The wells were drilled from piers extending from land out into the channel.
Other notable early submerged drilling activities occurred on the Canadian side of Lake Erie since 1913 and Caddo Lake in Louisiana in the 1910s. Shortly thereafter, wells were drilled in tidal zones along the Gulf Coast of Texas and Louisiana. The Goose Creek field near Baytown, Texas, is one such example. In the 1920s, drilling was done from concrete platforms in Lake Maracaibo, Venezuela.
The oldest offshore well recorded in Infield's offshore database is the Bibi Eibat well which came on stream in 1923 in Azerbaijan. Landfill was used to raise shallow portions of the Caspian Sea.
In the early 1930s, the Texas Company developed the first mobile steel barges for drilling in the brackish coastal areas of the gulf.
In 1937, Pure Oil Company (now Chevron Corporation) and its partner Superior Oil Company (now part of ExxonMobil Corporation) used a fixed platform to develop a field in of water, one mile (1.6 km) offshore of Calcasieu Parish, Louisiana.
In 1938, Humble Oil built a mile-long wooden trestle with railway tracks into the sea at McFadden Beach on the Gulf of Mexico, placing a derrick at its end – this was later destroyed by a hurricane.
In 1945, concern for American control of its offshore oil reserves caused President Harry Truman to issue an Executive Order unilaterally extending American territory to the edge of its continental shelf, an act that effectively ended the 3-mile limit "freedom of the seas" regime.
In 1946, Magnolia Petroleum (now ExxonMobil) drilled at a site off the coast, erecting a platform in of water off St. Mary Parish, Louisiana.
In early 1947, Superior Oil erected a drilling/production platform in of water some 18 miles off Vermilion Parish, Louisiana. But it was Kerr-McGee Oil Industries (now part of Occidental Petroleum), as operator for partners Phillips Petroleum (ConocoPhillips) and Stanolind Oil & Gas (BP), that completed its historic Ship Shoal Block 32 well in October 1947, months before Superior actually drilled a discovery from their Vermilion platform farther offshore. In any case, that made Kerr-McGee's well the first oil discovery drilled out of sight of land.
The British Maunsell Forts constructed during World War II are considered the direct predecessors of modern offshore platforms. Having been pre-constructed in a very short time, they were then floated to their location and placed on the shallow bottom of the Thames and the Mersey estuary.
In 1954, the first jackup oil rig was ordered by Zapata Oil. It was designed by R. G. LeTourneau and featured three electro-mechanically operated lattice-type legs. Built on the shores of the Mississippi River by the LeTourneau Company, it was launched in December 1955, and christened "Scorpion". The Scorpion was put into operation in May 1956 off Port Aransas, Texas. It was lost in 1969.
When offshore drilling moved into deeper waters, fixed platform rigs were built at first; as demand grew for drilling equipment able to work in the deeper parts of the Gulf of Mexico, the first jack-up rigs began appearing from specialized offshore drilling contractors such as the forerunners of ENSCO International.
The first semi-submersible resulted from an unexpected observation in 1961. Blue Water Drilling Company owned and operated the four-column submersible Blue Water Rig No.1 in the Gulf of Mexico for Shell Oil Company. As the pontoons were not sufficiently buoyant to support the weight of the rig and its consumables, it was towed between locations at a draught midway between the top of the pontoons and the underside of the deck. It was noticed that the motions at this draught were very small, and Blue Water Drilling and Shell jointly decided to try operating the rig in its floating mode. The concept of an anchored, stable floating deep-sea platform had been designed and tested back in the 1920s by Edward Robert Armstrong for the purpose of operating aircraft with an invention known as the "seadrome". The first purpose-built drilling semi-submersible Ocean Driller was launched in 1963. Since then, many semi-submersibles have been purpose-designed for the drilling industry mobile offshore fleet.
The first offshore drillship was the CUSS 1 developed for the Mohole project to drill into the Earth's crust.
As of June 2010, there were over 620 mobile offshore drilling rigs (jackups, semisubs, drillships, and barges) available for service in the competitive rig fleet.
One of the world's deepest hubs is currently the Perdido in the Gulf of Mexico, floating in 2,438 meters of water. It is operated by Shell plc and was built at a cost of $3 billion. The deepest operational platform is the Petrobras America Cascade FPSO in the Walker Ridge 249 field in 2,600 meters of water.
Main offshore basins
Notable offshore basins include:
the North Sea
the Gulf of Mexico (offshore Texas, Louisiana, Mississippi, Alabama and Florida)
California (in the Los Angeles Basin and Santa Barbara Channel, part of the Ventura Basin)
the Caspian Sea (notably some major fields offshore Azerbaijan)
the Campos and Santos Basins off the coasts of Brazil
Newfoundland and Nova Scotia (Atlantic Canada)
several fields off West Africa, south of Nigeria, and central Africa, west of Angola
offshore fields in South East Asia and Sakhalin, Russia
major offshore oil fields are located in the Persian Gulf such as Safaniya, Manifa and Marjan which belong to Saudi Arabia and are developed by Saudi Aramco
fields in India (Mumbai High, K G Basin-East Coast Of India, Tapti Field, Gujarat, India)
the Baltic Sea oil and gas fields
the Taranaki Basin in New Zealand
the Kara Sea north of Siberia
the Arctic Ocean off the coasts of Alaska and Canada's Northwest Territories
the offshore fields in the Adriatic Sea
Types
Larger lake- and sea-based offshore platforms and drilling rig for oil.
1) & 2) Conventional fixed platforms (deepest: Shell's Bullwinkle in 1991 at 412 m/1,353 ft GOM)
3) Compliant tower (deepest: ChevronTexaco's Petronius in 1998 at 534 m /1,754 ft GOM)
4) & 5) Vertically moored tension leg and mini-tension leg platform (deepest: ConocoPhillips's Magnolia in 2004 1,425 m/4,674 ft GOM)
6) Spar (deepest: Shell's Perdido in 2010, 2,450 m/8,000 ft GOM)
7) & 8) Semi-submersibles (deepest: Shell's NaKika in 2003, 1,920 m/6,300 ft GOM)
9) Floating production, storage, and offloading facility (deepest: 2005, 1,345 m/4,429 ft Brazil)
10) Sub-sea completion and tie-back to host facility (deepest: Shell's Coulomb tie to NaKika 2004, 2,307 m/ 7,570 ft)
(Numbered from left to right; all records from 2005 data)
Fixed platforms
These platforms are built on concrete or steel legs, or both, anchored directly onto the seabed, supporting the deck with space for drilling rigs, production facilities and crew quarters. Such platforms are, by virtue of their immobility, designed for very long-term use (for instance the Hibernia platform). Various types of structure are used: steel jacket, concrete caisson, floating steel, and even floating concrete. Steel jackets are structural sections made of tubular steel members, and are usually piled into the seabed.
Concrete caisson structures, pioneered by the Condeep concept, often have in-built oil storage in tanks below the sea surface and these tanks were often used as a flotation capability, allowing them to be built close to shore (Norwegian fjords and Scottish firths are popular because they are sheltered and deep enough) and then floated to their final position where they are sunk to the seabed. Fixed platforms are economically feasible for installation in water depths up to about .
Compliant towers
These platforms consist of slender, flexible towers and a pile foundation supporting a conventional deck for drilling and production operations. Compliant towers are designed to sustain significant lateral deflections and forces, and are typically used in water depths ranging from .
Semi-submersible platform
These platforms have hulls (columns and pontoons) of sufficient buoyancy to cause the structure to float, but of sufficient weight to keep the structure upright. Semi-submersible platforms can be moved from place to place and can be ballasted up or down by altering the amount of flooding in buoyancy tanks. They are generally anchored by combinations of chain, wire rope and polyester rope during drilling and/or production operations, though they can also be kept in place by the use of dynamic positioning. Semi-submersibles can be used in water depths from .
Jack-up drilling rigs
Jack-up Mobile Drilling Units (or jack-ups), as the name suggests, are rigs that can be jacked up above the sea using legs that can be lowered, much like jacks. These MODUs (Mobile Offshore Drilling Units) are typically used in water depths up to , although some designs can go to depth. They are designed to move from place to place, and then anchor themselves by deploying their legs to the ocean bottom using a rack and pinion gear system on each leg.
Drillships
A drillship is a maritime vessel that has been fitted with drilling apparatus. It is most often used for exploratory drilling of new oil or gas wells in deep water but can also be used for scientific drilling. Early versions were built on a modified tanker hull, but purpose-built designs are used today. Most drillships are outfitted with a dynamic positioning system to maintain position over the well. They can drill in water depths up to .
Floating production systems
The main type of floating production system is the FPSO (floating production, storage, and offloading system). FPSOs consist of large monohull structures, generally (but not always) ship-shaped, equipped with processing facilities. These platforms are moored to a location for extended periods, and do not actually drill for oil or gas. Some variants, called FSOs (floating storage and offloading systems) or FSUs (floating storage units), are used exclusively for storage purposes and host very little process equipment.
The world's first floating liquefied natural gas (FLNG) facility is in production. See the section on particularly large examples below.
Tension-leg platform
TLPs are floating platforms tethered to the seabed in a manner that eliminates most vertical movement of the structure. TLPs are used in water depths up to about . The "conventional" TLP is a 4-column design that looks similar to a semisubmersible. Proprietary versions include the Seastar and MOSES mini TLPs; they are relatively low cost, used in water depths between . Mini TLPs can also be used as utility, satellite or early production platforms for larger deepwater discoveries.
Gravity-based structure
A GBS can be either steel or concrete and is usually anchored directly onto the seabed. Steel GBSs are predominantly used when there is little or no availability of crane barges to install a conventional fixed offshore platform, for example in the Caspian Sea. There are several steel GBSs in the world today (e.g. in offshore Turkmenistan waters in the Caspian Sea and offshore New Zealand). Steel GBSs do not usually provide hydrocarbon storage capability. A GBS is mainly installed by pulling it off the yard, by wet-tow and/or dry-tow, and self-installing by controlled ballasting of the compartments with sea water. To position the GBS during installation, it may be connected to a transportation barge or any other barge (provided it is large enough to support the GBS) using strand jacks. The jacks are released gradually while the GBS is ballasted to ensure that it does not sway too far from the target location.
Spar platforms
Spars are moored to the seabed like TLPs, but whereas a TLP has vertical tension tethers, a spar has more conventional mooring lines. Spars have to-date been designed in three configurations: the "conventional" one-piece cylindrical hull; the "truss spar", in which the midsection is composed of truss elements connecting the upper buoyant hull (called a hard tank) with the bottom soft tank containing permanent ballast; and the "cell spar", which is built from multiple vertical cylinders. The spar has more inherent stability than a TLP since it has a large counterweight at the bottom and does not depend on the mooring to hold it upright. It also has the ability, by adjusting the mooring line tensions (using chain-jacks attached to the mooring lines), to move horizontally and to position itself over wells at some distance from the main platform location. The first production spar was Kerr-McGee's Neptune, anchored in in the Gulf of Mexico; however, spars (such as Brent Spar) were previously used as FSOs.
Eni's Devil's Tower, located in of water in the Gulf of Mexico, was the world's deepest spar until 2010. The world's deepest platform as of 2011 was the Perdido spar in the Gulf of Mexico, floating in 2,438 metres of water. It is operated by Royal Dutch Shell and was built at a cost of $3 billion.
The first truss spars were Kerr-McGee's Boomvang and Nansen.
The first (and, as of 2010, only) cell spar is Kerr-McGee's Red Hawk.
Normally unmanned installations (NUI)
These installations, sometimes called toadstools, are small platforms, consisting of little more than a well bay, helipad and emergency shelter. They are designed to be operated remotely under normal conditions, only to be visited occasionally for routine maintenance or well work.
Conductor support systems
These installations, also known as satellite platforms, are small unmanned platforms consisting of little more than a well bay and a small process plant. They are designed to operate in conjunction with a static production platform which is connected to the platform by flow lines or by umbilical cable, or both.
Particularly large examples
The Petronius Platform is a compliant tower in the Gulf of Mexico modeled after the Hess Baldpate platform, which stands above the ocean floor. It is one of the world's tallest structures.
The Hibernia platform in Canada is the world's heaviest offshore platform, located on the Jeanne D'Arc Basin, in the Atlantic Ocean off the coast of Newfoundland. This gravity base structure (GBS), which sits on the ocean floor, is high and has storage capacity for of crude oil in its high caisson. The platform acts as a small concrete island with serrated outer edges designed to withstand the impact of an iceberg. The GBS contains production storage tanks and the remainder of the void space is filled with ballast with the entire structure weighing in at 1.2 million tons.
Royal Dutch Shell has developed the first Floating Liquefied Natural Gas (FLNG) facility, which is situated approximately 200 km off the coast of Western Australia. It is the largest floating offshore facility. It is approximately 488 m long and 74 m wide, with a displacement of around 600,000 t when fully ballasted.
Maintenance and supply
A typical oil production platform is self-sufficient in energy and water needs, housing electrical generation, water desalinators and all of the equipment necessary to process oil and gas such that it can be either delivered directly onshore by pipeline or to a floating platform or tanker loading facility, or both. Elements in the oil/gas production process include wellhead, production manifold, production separator, glycol process to dry gas, gas compressors, water injection pumps, oil/gas export metering and main oil line pumps.
Larger platforms are assisted by smaller ESVs (emergency support vessels) like the British Iolair that are summoned when something has gone wrong, e.g. when a search and rescue operation is required. During normal operations, PSVs (platform supply vessels) keep the platforms provisioned and supplied, and AHTS vessels can also supply them, as well as tow them to location and serve as standby rescue and firefighting vessels.
Crew
Essential personnel
Not all of the following personnel are present on every platform. On smaller platforms, one worker can perform a number of different jobs. The following also are not names officially recognized in the industry:
OIM (offshore installation manager) who is the ultimate authority during his/her shift and makes the essential decisions regarding the operation of the platform;
Operations Team Leader (OTL);
Offshore Methods Engineer (OME) who defines the installation methodology of the platform;
Offshore Operations Engineer (OOE) who is the senior technical authority on the platform;
PSTL or operations coordinator for managing crew changes;
Dynamic positioning operator, navigation, ship or vessel maneuvering (MODU), station keeping, fire and gas systems operations in the event of incident;
Automation systems specialist, to configure, maintain and troubleshoot the process control systems (PCS), process safety systems, emergency support systems and vessel management systems;
Second mate to meet manning requirements of flag state, operates fast rescue craft, cargo operations, fire team leader;
Third mate to meet manning requirements of flag state, operate fast rescue craft, cargo operations, fire team leader;
Ballast control operator to operate fire and gas systems;
Crane operators to operate the cranes for lifting cargo around the platform and between boats;
Scaffolders to rig up scaffolding for when it is required for workers to work at height;
Coxswains to maintain the lifeboats and manning them if necessary;
Control room operators, especially FPSO or production platforms;
Catering crew, including people tasked with performing essential functions such as cooking, laundry and cleaning the accommodation;
Production techs to run the production plant;
Helicopter pilot(s) living on some platforms that have a helicopter based offshore and transporting workers to other platforms or to shore on crew changes;
Maintenance technicians (instrument, electrical or mechanical).
Fully qualified medic.
Radio operator to operate all radio communications.
Store Keeper, keeping the inventory well supplied
Technician to record the fluid levels in tanks
Incidental personnel
Drill crew will be on board if the installation is performing drilling operations. A drill crew will normally comprise:
Toolpusher
Driller
Roughnecks
Roustabouts
Company man
Mud engineer
Motorman (see Glossary of oilfield jargon)
Derrickhand
Geologist
Welders and Welder Helpers
Well services crew will be on board for well work. The crew will normally comprise:
Well services supervisor
Wireline or coiled tubing operators
Pump operator
Pump hanger and ranger
Drawbacks
Risks
The nature of their operation—extraction of volatile substances sometimes under extreme pressure in a hostile environment—means risk; accidents and tragedies occur regularly. The U.S. Minerals Management Service reported 69 offshore deaths, 1,349 injuries, and 858 fires and explosions on offshore rigs in the Gulf of Mexico from 2001 to 2010. On July 6, 1988, 167 people died when Occidental Petroleum's Piper Alpha offshore production platform, on the Piper field in the UK sector of the North Sea, exploded after a gas leak. The resulting investigation conducted by Lord Cullen and publicized in the first Cullen Report was highly critical of a number of areas, including, but not limited to, management within the company, the design of the structure, and the Permit to Work System. The report was commissioned in 1988, and was delivered in November 1990. The accident greatly accelerated the practice of providing living accommodations on separate platforms, away from those used for extraction.
The offshore can be in itself a hazardous environment. In March 1980, the 'flotel' (floating hotel) platform Alexander L. Kielland capsized in a storm in the North Sea with the loss of 123 lives.
In 2001, Petrobras 36 in Brazil exploded and sank five days later, killing 11 people.
Given the number of grievances and conspiracy theories that involve the oil business, and the importance of gas/oil platforms to the economy, platforms in the United States are believed to be potential terrorist targets. Agencies and military units responsible for maritime counter-terrorism in the US (Coast Guard, Navy SEALs, Marine Recon) often train for platform raids.
On April 21, 2010, the Deepwater Horizon platform, 52 miles off-shore of Venice, Louisiana, (property of Transocean and leased to BP) exploded, killing 11 people, and sank two days later. The resulting undersea gusher, conservatively estimated to exceed as of early June 2010, became the worst oil spill in US history, eclipsing the Exxon Valdez oil spill.
Ecological effects
In British waters, the cost of removing all platform rig structures entirely was estimated in 2013 at £30 billion.
Aquatic organisms invariably attach themselves to the undersea portions of oil platforms, turning them into artificial reefs. In the Gulf of Mexico and offshore California, the waters around oil platforms are popular destinations for sports and commercial fishermen, because of the greater numbers of fish near the platforms. The United States and Brunei have active Rigs-to-Reefs programs, in which former oil platforms are left in the sea, either in place or towed to new locations, as permanent artificial reefs. In the US Gulf of Mexico, as of September 2012, 420 former oil platforms, about 10 percent of decommissioned platforms, have been converted to permanent reefs.
On the US Pacific coast, marine biologist Milton Love has proposed that oil platforms off California be retained as artificial reefs, instead of being dismantled (at great cost), because he has found them to be havens for many of the species of fish which are otherwise declining in the region, in the course of 11 years of research. Love is funded mainly by government agencies, but also in small part by the California Artificial Reef Enhancement Program. Divers have been used to assess the fish populations surrounding the platforms.
Effects on the environment
Offshore oil production involves environmental risks, most notably oil spills from oil tankers or pipelines transporting oil from the platform to onshore facilities, and from leaks and accidents on the platform. Produced water is also generated, which is water brought to the surface along with the oil and gas; it is usually highly saline and may include dissolved or unseparated hydrocarbons.
Offshore rigs are shut down during hurricanes. Increased hurricane activity in the Gulf of Mexico has been linked to the growing number of oil platforms, which heat the surrounding air with methane emissions; it is estimated that U.S. Gulf of Mexico oil and gas facilities emit approximately 500,000 tons of methane each year, corresponding to a loss of 2.9 percent of the gas produced. The increasing number of oil rigs also increases the movement of oil tankers, which further raises emission levels that directly warm the water in the zone, and warm water is a key factor in hurricane formation.
To reduce the amount of carbon emissions otherwise released into the atmosphere, methane pyrolysis of natural gas pumped up by oil platforms is a possible alternative to flaring for consideration. Methane pyrolysis produces non-polluting hydrogen in high volume from this natural gas at low cost. This process operates at around 1000 °C and removes carbon in a solid form from the methane, producing hydrogen. The carbon can then be pumped underground and is not released into the atmosphere.
It is being evaluated in research laboratories such as the Karlsruhe Liquid-metal Laboratory (KALLA) and by the chemical engineering team at the University of California, Santa Barbara.
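As background (standard thermochemical values, not figures taken from this article), the overall pyrolysis reaction and its approximate enthalpy can be written as:

```latex
% Overall methane pyrolysis reaction. The enthalpy is the standard value implied by
% the enthalpy of formation of methane (about -74.8 kJ/mol), i.e. the reverse reaction;
% it is quoted here as general background, not as a figure from the text.
\mathrm{CH_4(g)} \;\longrightarrow\; \mathrm{C(s)} + 2\,\mathrm{H_2(g)},
\qquad \Delta H^{\circ} \approx +74.8\ \mathrm{kJ\,mol^{-1}}
```

On these standard values, roughly 37 kJ of heat input is needed per mole of hydrogen produced, which is the energy the roughly 1000 °C process heat supplies.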
Repurposing
If not decommissioned, old platforms can be repurposed to pump into rocks below the seabed. Others have been converted to launch rockets into space, and more are being redesigned for use with heavy-lift launch vehicles.
In Saudi Arabia, there are plans to repurpose decommissioned oil rigs into a theme park.
Challenges
Offshore oil and gas production is more challenging than land-based installations due to the remote and harsher environment. Much of the innovation in the offshore petroleum sector concerns overcoming these challenges, including the need to provide very large production facilities. Production and drilling facilities may be very large and represent a large investment, such as the Troll A platform, which stands in a water depth of 300 meters.
Another type of offshore platform may float with a mooring system to maintain it on location. While a floating system may be lower cost in deeper waters than a fixed platform, the dynamic nature of the platforms introduces many challenges for the drilling and production facilities.
The ocean can add several thousand meters or more to the fluid column. The addition increases the equivalent circulating density and downhole pressures in drilling wells, as well as the energy needed to lift produced fluids for separation on the platform.
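As a back-of-the-envelope illustration of the extra head the ocean contributes, the sketch below computes the hydrostatic pressure added by a seawater column; the water depth and density are assumed example values, not figures from the text.

# Extra hydrostatic pressure from the water column: P = rho * g * h
rho_seawater = 1025.0    # kg/m^3, assumed typical seawater density
g = 9.81                 # m/s^2
depth_m = 1500.0         # hypothetical water depth
pressure_pa = rho_seawater * g * depth_m
print(f"~{pressure_pa / 1e6:.1f} MPa (~{pressure_pa / 6894.76:.0f} psi) added by {depth_m:.0f} m of water")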
The trend today is to conduct more of the production operations subsea, by separating water from oil and re-injecting it rather than pumping it up to a platform, or by flowing the production to onshore facilities, with no installations visible above the sea. Subsea installations help to exploit resources at progressively deeper waters—locations that had been inaccessible—and overcome challenges posed by sea ice such as in the Barents Sea. One such challenge in shallower environments is seabed gouging by drifting ice features (means of protecting offshore installations against ice action include burial in the seabed).
Offshore manned facilities also present logistics and human resources challenges. An offshore oil platform is a small community in itself with cafeteria, sleeping quarters, management and other support functions. In the North Sea, staff members are transported by helicopter for a two-week shift. They usually receive higher salaries than onshore workers do. Supplies and waste are transported by ship, and the supply deliveries need to be carefully planned because storage space on the platform is limited. Today, much effort goes into relocating as many of the personnel as possible onshore, where management and technical experts are in touch with the platform by video conferencing. An onshore job is also more attractive for the aging workforce in the petroleum industry, at least in the western world. These efforts among others are contained in the established term integrated operations. The increased use of subsea facilities helps achieve the objective of keeping more workers onshore. Subsea facilities are also easier to expand, with new separators or different modules for different oil types, and are not limited by the fixed floor space of an above-water installation.
Deepest platforms
The world's deepest oil platform is the floating Perdido, which is a spar platform in the Gulf of Mexico in a water depth of about 2,450 meters (8,000 ft).
Non-floating compliant towers and fixed platforms, by water depth:
Petronius Platform,
Baldpate Platform,
Troll A Platform,
Bullwinkle Platform,
Pompano Platform,
Benguela-Belize Lobito-Tomboco Platform,
Gullfaks C Platform,
Tombua Landana Platform,
Harmony Platform,
| Technology | Fuel | null |
164040 | https://en.wikipedia.org/wiki/Formula | Formula | In science, a formula is a concise way of expressing information symbolically, as in a mathematical formula or a chemical formula. The informal use of the term formula in science refers to the general construct of a relationship between given quantities.
The plural of formula can be either formulas (from the most common English plural noun form) or, under the influence of scientific Latin, formulae (from the original Latin).
In mathematics
In mathematics, a formula generally refers to an equation or inequality relating one mathematical expression to another, with the most important ones being mathematical theorems. For example, determining the volume of a sphere requires a significant amount of integral calculus or its geometrical analogue, the method of exhaustion. However, having done this once in terms of some parameter (the radius for example), mathematicians have produced a formula to describe the volume of a sphere in terms of its radius:

V = (4/3)πr³
Having obtained this result, the volume of any sphere can be computed as long as its radius is known. Here, notice that the volume V and the radius r are expressed as single letters instead of words or phrases. This convention, while less important in a relatively simple formula, means that mathematicians can more quickly manipulate formulas which are larger and more complex. Mathematical formulas are often algebraic, analytical or in closed form.
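For instance, the formula can be applied directly once a radius is chosen; the radius below is arbitrary.

# Volume of a sphere from its radius: V = (4/3) * pi * r**3
import math
r = 2.0
V = (4.0 / 3.0) * math.pi * r**3
print(f"V = {V:.4f}")    # 33.5103 for r = 2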
In a general context, formulas often represent mathematical models of real world phenomena, and as such can be used to provide solutions (or approximate solutions) to real world problems, with some being more general than others. For example, the formula

F = ma
is an expression of Newton's second law, and is applicable to a wide range of physical situations. Other formulas, such as the use of the equation of a sine curve to model the movement of the tides in a bay, may be created to solve a particular problem. In all cases, however, formulas form the basis for calculations.
Expressions are distinct from formulas in the sense that they don't usually contain relations like equality (=) or inequality (<). Expressions denote a mathematical object, whereas formulas denote a statement about mathematical objects. This is analogous to natural language, where a noun phrase refers to an object, and a whole sentence refers to a fact. For example, 8x − 5 is an expression, while 8x − 5 ≥ 3 is a formula.
However, in some areas of mathematics, and in particular in computer algebra, formulas are viewed as expressions that can be evaluated to true or false, depending on the values that are given to the variables occurring in the expressions. For example, 8x − 5 ≥ 3 takes the value false if x is given a value less than 1, and the value true otherwise. (See Boolean expression)
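A minimal sketch of this computer-algebra view, treating the inequality above as a function that evaluates to True or False for a given value of x.

# A formula viewed as an expression that evaluates to True or False
def formula(x):
    return 8 * x - 5 >= 3    # False for x < 1, True otherwise

print(formula(0.5))    # False
print(formula(3))      # True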
In mathematical logic
In mathematical logic, a formula (often referred to as a well-formed formula) is an entity constructed using the symbols and formation rules of a given logical language. For example, in first-order logic,

∀x ∀y (P(f(x)) → ¬(P(x) → Q(f(y), x, z)))

is a formula, provided that f is a unary function symbol, P a unary predicate symbol, and Q a ternary predicate symbol.
Chemical formulas
In modern chemistry, a chemical formula is a way of expressing information about the proportions of atoms that constitute a particular chemical compound, using a single line of chemical element symbols, numbers, and sometimes other symbols, such as parentheses, brackets, and plus (+) and minus (−) signs. For example, H2O is the chemical formula for water, specifying that each molecule consists of two hydrogen (H) atoms and one oxygen (O) atom. Similarly, O3− denotes an ozone molecule consisting of three oxygen atoms and a net negative charge.
A chemical formula identifies each constituent element by its chemical symbol, and indicates the proportionate number of atoms of each element.
In empirical formulas, these proportions begin with a key element and then assign numbers of atoms of the other elements in the compound—as ratios to the key element. For molecular compounds, these ratio numbers can always be expressed as whole numbers. For example, the empirical formula of ethanol may be written C2H6O, because the molecules of ethanol all contain two carbon atoms, six hydrogen atoms, and one oxygen atom. Some types of ionic compounds, however, cannot be written as empirical formulas that contain only whole numbers. An example is boron carbide, whose formula CBn has a variable non-whole-number ratio, with n ranging from over 4 to more than 6.5.
When the chemical compound of the formula consists of simple molecules, chemical formulas often employ ways to suggest the structure of the molecule. There are several types of these formulas, including molecular formulas and condensed formulas. A molecular formula enumerates the number of atoms to reflect those in the molecule, so that the molecular formula for glucose is C6H12O6 rather than the glucose empirical formula, which is CH2O. Except for the very simple substances, molecular chemical formulas generally lack needed structural information, and might even be ambiguous on occasion.
A structural formula is a drawing that shows the location of each atom, and which atoms it binds to.
In computing
In computing, a formula typically describes a calculation, such as addition, to be performed on one or more variables. A formula is often implicitly provided in the form of a computer instruction such as:
Degrees Celsius = (5/9)*(Degrees Fahrenheit - 32)
In computer spreadsheet software, a formula indicating how to compute the value of a cell, say A3, could be written as
=A1+A2
where A1 and A2 refer to other cells (column A, row 1 or 2) within the spreadsheet. This is a shortcut for the "paper" form A3 = A1+A2, where A3 is, by convention, omitted because the result is always stored in the cell itself, making the stating of the name redundant.
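The same formulas can be written as ordinary code; this sketch mirrors the temperature instruction and the spreadsheet cell above, with A1 and A2 as plain variables.

# The temperature formula above, expressed as a function
def to_celsius(degrees_fahrenheit):
    return (5 / 9) * (degrees_fahrenheit - 32)

# The spreadsheet formula =A1+A2, with the result stored in A3
A1, A2 = 10, 32
A3 = A1 + A2
print(to_celsius(212), A3)    # 100.0 and 42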
Units
Formulas used in science almost always require a choice of units. Formulas are used to express relationships between various quantities, such as temperature, mass, or charge in physics; supply, profit, or demand in economics; or a wide range of other quantities in other disciplines.
An example of a formula used in science is Boltzmann's entropy formula. In statistical thermodynamics, it is a probability equation relating the entropy S of an ideal gas to the quantity W, which is the number of microstates corresponding to a given macrostate:

S = k ln W

where k is the Boltzmann constant, equal to 1.380649 × 10^−23 J/K, and W is the number of microstates consistent with the given macrostate.
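As a worked illustration of S = k ln W, the sketch below plugs in an arbitrary microstate count; the value of W is made up for the example.

# Boltzmann entropy: S = k * ln(W)
import math
k = 1.380649e-23     # Boltzmann constant, J/K
W = 1e25             # hypothetical number of microstates
S = k * math.log(W)
print(f"S = {S:.3e} J/K")    # about 7.95e-22 J/K for this W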
| Mathematics | Basics | null |
7195284 | https://en.wikipedia.org/wiki/Tropical%20cyclogenesis | Tropical cyclogenesis | Tropical cyclogenesis is the development and strengthening of a tropical cyclone in the atmosphere. The mechanisms through which tropical cyclogenesis occur are distinctly different from those through which temperate cyclogenesis occurs. Tropical cyclogenesis involves the development of a warm-core cyclone, due to significant convection in a favorable atmospheric environment.
Tropical cyclogenesis requires six main factors: sufficiently warm sea surface temperatures (at least 26.5 °C (79.7 °F)), atmospheric instability, high humidity in the lower to middle levels of the troposphere, enough Coriolis force to develop a low-pressure center, a pre-existing low-level focus or disturbance, and low vertical wind shear.
Tropical cyclones tend to develop during the summer, but have been noted in nearly every month in most basins. Climate cycles such as ENSO and the Madden–Julian oscillation modulate the timing and frequency of tropical cyclone development. The maximum potential intensity is a limit on tropical cyclone intensity which is strongly related to the water temperatures along its path.
An average of 86 tropical cyclones of tropical storm intensity form annually worldwide. Of those, 47 reach strength higher than 119 km/h (74 mph), and 20 become intense tropical cyclones (at least Category 3 intensity on the Saffir–Simpson scale).
Conditions for tropical cyclogenesis
There are six main requirements for tropical cyclogenesis: sufficiently warm sea surface temperatures, atmospheric instability, high humidity in the lower to middle levels of the troposphere, enough Coriolis force to sustain a low-pressure center, a preexisting low-level focus or disturbance, and low vertical wind shear. While these conditions are necessary for tropical cyclone formation, they do not guarantee that a tropical cyclone will form.
Warm waters, instability, and mid-level moisture
Normally, an ocean temperature of 26.5 °C (79.7 °F) spanning through at least a 50-metre depth is considered the minimum to maintain a tropical cyclone. These warm waters are needed to maintain the warm core that fuels tropical systems. This value is well above 16.1 °C (60.9 °F), the global average surface temperature of the oceans.
Tropical cyclones are known to form even when normal conditions are not met. For example, cooler air temperatures at a higher altitude (e.g., at the 500 hPa level, or 5.9 km) can lead to tropical cyclogenesis at lower water temperatures, as a certain lapse rate is required to force the atmosphere to be unstable enough for convection. In a moist atmosphere, this lapse rate is 6.5 °C/km, while in an atmosphere with less than 100% relative humidity, the required lapse rate is 9.8 °C/km.
At the 500 hPa level, the air temperature averages −7 °C (18 °F) within the tropics, but air in the tropics is normally dry at this level, giving the air room to wet-bulb, or cool as it moistens, to a more favorable temperature that can then support convection. A wet-bulb temperature at 500 hPa in a tropical atmosphere of −13.2 °C is required to initiate convection if the water temperature is 26.5 °C, and this temperature requirement increases or decreases proportionally by 1 °C in the sea surface temperature for each 1 °C change at 500 hPa.
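A minimal sketch of the one-to-one relationship just described, assuming the 500 hPa wet-bulb threshold shifts in the same direction as the sea surface temperature; the function name and sample inputs are illustrative only.

# Required 500 hPa wet-bulb temperature to initiate convection, per the 1-to-1 rule above
def required_wet_bulb_500hpa(sst_c):
    # -13.2 degC is the stated threshold at an SST of 26.5 degC;
    # the threshold is assumed to shift 1 degC per 1 degC change in SST
    return -13.2 + (sst_c - 26.5)

print(required_wet_bulb_500hpa(26.5))    # -13.2
print(required_wet_bulb_500hpa(28.5))    # -11.2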
Under a cold cyclone, 500 hPa temperatures can fall as low as −30 °C, which can initiate convection even in the driest atmospheres. This also explains why moisture in the mid-levels of the troposphere, roughly at the 500 hPa level, is normally a requirement for development. However, when dry air is found at the same height, temperatures at 500 hPa need to be even colder as dry atmospheres require a greater lapse rate for instability than moist atmospheres. At heights near the tropopause, the 30-year average temperature (as measured in the period encompassing 1961 through 1990) was −77 °C (−105 °F). A recent example of a tropical cyclone that maintained itself over cooler waters was Epsilon of the 2005 Atlantic hurricane season.
Role of Maximum Potential Intensity (MPI)
Kerry Emanuel created a mathematical model around 1988 to compute the upper limit of tropical cyclone intensity based on sea surface temperature and atmospheric profiles from the latest global model runs. Emanuel's model is called the maximum potential intensity, or MPI. Maps created from this equation show regions where tropical storm and hurricane formation is possible, based upon the thermodynamics of the atmosphere at the time of the last model run. This does not take into account vertical wind shear.
Coriolis force
A minimum distance of 500 km (310 mi) from the equator (about 4.5 degrees of latitude) is normally needed for tropical cyclogenesis. The Coriolis force imparts rotation on the flow and arises as winds begin to flow in toward the lower pressure created by the pre-existing disturbance. In areas with a very small or non-existent Coriolis force (e.g. near the Equator), the only significant atmospheric forces in play are the pressure gradient force (the pressure difference that causes winds to blow from high to low pressure) and a smaller friction force; these two alone would not cause the large-scale rotation required for tropical cyclogenesis. The existence of a significant Coriolis force allows the developing vortex to achieve gradient wind balance. This is a balance condition found in mature tropical cyclones that allows latent heat to concentrate near the storm core; this results in the maintenance or intensification of the vortex if other development factors are neutral.
Low level disturbance
Whether it be a depression in the Intertropical Convergence Zone (ITCZ), a tropical wave, a broad surface front, or an outflow boundary, a low-level feature with sufficient vorticity and convergence is required to begin tropical cyclogenesis. Even with perfect upper-level conditions and the required atmospheric instability, the lack of a surface focus will prevent the development of organized convection and a surface low. Tropical cyclones can form when smaller circulations within the Intertropical Convergence Zone come together and merge.
Weak vertical wind shear
Vertical wind shear of less than 10 m/s (20 kt, 22 mph) between the surface and the tropopause is favored for tropical cyclone development. Weaker vertical shear makes the storm grow faster vertically into the air, which helps the storm develop and become stronger. If the vertical shear is too strong, the storm cannot rise to its full potential and its energy becomes spread out over too large of an area for the storm to strengthen. Strong wind shear can "blow" the tropical cyclone apart, as it displaces the mid-level warm core from the surface circulation and dries out the mid-levels of the troposphere, halting development. In smaller systems, the development of a significant mesoscale convective complex in a sheared environment can send out a large enough outflow boundary to destroy the surface cyclone. Moderate wind shear can lead to the initial development of the convective complex and surface low similar to the mid-latitudes, but it must diminish to allow tropical cyclogenesis to continue.
Favorable trough interactions
Limited vertical wind shear can be positive for tropical cyclone formation. When an upper-level trough or upper-level low is roughly the same scale as the tropical disturbance, the system can be steered by the upper level system into an area with better diffluence aloft, which can cause further development. Weaker upper cyclones are better candidates for a favorable interaction. There is evidence that weakly sheared tropical cyclones initially develop more rapidly than non-sheared tropical cyclones, although this comes at the cost of a peak in intensity with much weaker wind speeds and higher minimum pressure. This process is also known as baroclinic initiation of a tropical cyclone. Trailing upper cyclones and upper troughs can cause additional outflow channels and aid in the intensification process. Developing tropical disturbances can help create or deepen upper troughs or upper lows in their wake due to the outflow jet emanating from the developing tropical disturbance/cyclone.
There are cases where large, mid-latitude troughs can help with tropical cyclogenesis when an upper-level jet stream passes to the northwest of the developing system, which will aid divergence aloft and inflow at the surface, spinning up the cyclone. This type of interaction is more often associated with disturbances already in the process of recurvature.
Times of formation
Worldwide, tropical cyclone activity peaks in late summer when water temperatures are warmest. Each basin, however, has its own seasonal patterns. On a worldwide scale, May is the least active month, while September is the most active.
In the North Atlantic, a distinct hurricane season occurs from June 1 through November 30, sharply peaking from late August through October. The statistical peak of the North Atlantic hurricane season is September 10. The Northeast Pacific has a broader period of activity, but in a similar time frame to the Atlantic. The Northwest Pacific sees tropical cyclones year-round, with a minimum in February and a peak in early September. In the North Indian basin, storms are most common from April to December, with peaks in May and November.
In the Southern Hemisphere, tropical cyclone activity generally occurs between early November and April 30. Southern Hemisphere activity peaks in mid-February to early March. Virtually all the Southern Hemisphere activity is seen from the southern African coast eastward, toward South America. Tropical cyclones are rare events across the south Atlantic Ocean and the far southeastern Pacific Ocean.
Unusual areas of formation
Middle latitudes
Areas farther than 30 degrees from the equator (except in the vicinity of a warm current) are not normally conducive to tropical cyclone formation or strengthening, and areas more than 40 degrees from the equator are often very hostile to such development. The primary limiting factor is water temperatures, although higher shear at increasing latitudes is also a factor. These areas are sometimes frequented by cyclones moving poleward from tropical latitudes. On rare occasions, such as Pablo in 2019, Alex in 2004, Alberto in 1988, and the 1975 Pacific Northwest hurricane, storms may form or strengthen in this region. Typically, tropical cyclones will undergo extratropical transition after recurving polewards, and typically become fully extratropical after reaching 45–50° of latitude. The majority of extratropical cyclones tend to restrengthen after completing the transition period.
Near the Equator
Areas within approximately ten degrees latitude of the equator do not experience a significant Coriolis force, a vital ingredient in tropical cyclone formation. However, a few tropical cyclones have been observed forming within five degrees of the equator.
South Atlantic
A combination of wind shear and a lack of tropical disturbances from the Intertropical Convergence Zone (ITCZ) makes it very difficult for the South Atlantic to support tropical activity. At least six tropical cyclones have been observed here, including a weak tropical storm in 1991 off the coast of Africa near Angola, Hurricane Catarina in March 2004, which made landfall in Brazil at Category 2 strength, Tropical Storm Anita in March 2010, Tropical Storm Iba in March 2019, Tropical Storm 01Q in February 2021, and Tropical Storm Akará in February 2024.
Mediterranean and Black Seas
Storms that appear similar to tropical cyclones in structure sometimes occur in the Mediterranean Sea. Notable examples of these "Mediterranean tropical cyclones" include an unnamed system in September 1969, Leucosia in 1982, Celeno in 1995, Cornelia in 1996, Querida in 2006, Rolf in 2011, Qendresa in 2014, Numa in 2017, Ianos in 2020, and Daniel in 2023. However, there is debate on whether these storms were tropical in nature.
The Black Sea has, on occasion, produced or fueled storms that begin cyclonic rotation, and that appear to be similar to tropical-like cyclones observed in the Mediterranean. Two of these storms reached tropical storm and subtropical storm intensity in August 2002 and September 2005 respectively.
Elsewhere
Tropical cyclogenesis is extremely rare in the far southeastern Pacific Ocean, due to the cold sea-surface temperatures generated by the Humboldt Current, and also due to unfavorable wind shear; as such, Cyclone Yaku in March 2023 is the only known instance of a tropical cyclone impacting western South America. Besides Yaku, there have been several other systems that have been observed developing in the region east of 120°W, which is the official eastern boundary of the South Pacific basin. On May 11, 1983, a tropical depression developed near 110°W, which was thought to be the easternmost forming South Pacific tropical cyclone ever observed in the satellite era. In early May 2015, a rare subtropical cyclone was identified near Chile, even further east than the 1983 tropical depression. This system was unofficially dubbed Katie by researchers. Another subtropical cyclone was identified at 77.8 degrees longitude west in May 2018, just off the coast of Chile. This system was unofficially named Lexi by researchers. A subtropical cyclone was spotted just off the Chilean coast in January 2022, named Humberto by researchers.
Vortices have been reported off the coast of Morocco in the past. However, it is debatable if they are truly tropical in character.
Tropical activity is also extremely rare in the Great Lakes. However, a storm system that appeared similar to a subtropical or tropical cyclone formed in September 1996 over Lake Huron. The system developed an eye-like structure in its center, and it may have briefly been a subtropical or tropical cyclone.
Inland intensification
Tropical cyclones typically begin to weaken immediately following and sometimes even prior to landfall, as they lose the sea-fueled heat engine and friction slows the winds. However, under some circumstances, tropical or subtropical cyclones may maintain or even increase their intensity for several hours in what is known as the brown ocean effect. This is most likely to occur with warm moist soils or marshy areas, with warm ground temperatures and flat terrain, and when upper level support remains conducive.
Influence of large-scale climate cycles
Influence of ENSO
El Niño–Southern Oscillation (ENSO) shifts the region in the Pacific and Atlantic where more storms form (through warmer water and changes in the locations of upwelling and downwelling, driven by winds), resulting in nearly constant accumulated cyclone energy (ACE) values in any one basin. An El Niño event typically decreases hurricane formation in the Atlantic and in the far western Pacific and Australian regions, but instead increases the odds in the central North and South Pacific and particularly in the western North Pacific typhoon region.
Tropical cyclones in the northeastern Pacific and north Atlantic basins are both generated in large part by tropical waves from the same wave train.
In the Northwestern Pacific, El Niño shifts the formation of tropical cyclones eastward. During El Niño episodes, tropical cyclones tend to form in the eastern part of the basin, between 150°E and the International Date Line (IDL). Coupled with an increase in activity in the North-Central Pacific (IDL to 140°W) and the South-Central Pacific (east of 160°E), there is a net increase in tropical cyclone development near the International Date Line on both sides of the equator. While there is no linear relationship between the strength of an El Niño and tropical cyclone formation in the Northwestern Pacific, typhoons forming during El Niño years tend to have a longer duration and higher intensities. Tropical cyclogenesis in the Northwestern Pacific is suppressed west of 150°E in the year following an El Niño event.
Influence of the MJO
In general, westerly wind increases associated with the Madden–Julian oscillation lead to increased tropical cyclogenesis in all basins. As the oscillation propagates from west to east, it leads to an eastward march in tropical cyclogenesis with time during that hemisphere's summer season. There is an inverse relationship between tropical cyclone activity in the western Pacific basin and the north Atlantic basin, however. When one basin is active, the other is normally quiet, and vice versa. The main cause appears to be the phase of the Madden–Julian oscillation, or MJO, which is normally in opposite modes between the two basins at any given time.
Influence of equatorial Rossby waves
Research has shown that trapped equatorial Rossby wave packets can increase the likelihood of tropical cyclogenesis in the Pacific Ocean, as they increase the low-level westerly winds within that region, which then leads to greater low-level vorticity. The individual waves can move at approximately 1.8 m/s (4 mph) each, though the group tends to remain stationary.
Seasonal forecasts
Since 1984, Colorado State University has been issuing seasonal tropical cyclone forecasts for the north Atlantic basin, with results that they claim are better than climatology. The university claims to have found several statistical relationships for this basin that appear to allow long range prediction of the number of tropical cyclones. Since then, numerous others have issued seasonal forecasts for worldwide basins. The predictors are related to regional oscillations in the global climate system: the Walker circulation which is related to the El Niño–Southern Oscillation; the North Atlantic oscillation (NAO); the Arctic oscillation (AO); and the Pacific North American pattern (PNA).
| Physical sciences | Storms | Earth science |
4140245 | https://en.wikipedia.org/wiki/Operation%20%28mathematics%29 | Operation (mathematics) | In mathematics, an operation is a function from a set to itself. For example, an operation on real numbers will take in real numbers and return a real number. An operation can take zero or more input values (also called "operands" or "arguments") to a well-defined output value. The number of operands is the arity of the operation.
The most commonly studied operations are binary operations (i.e., operations of arity 2), such as addition and multiplication, and unary operations (i.e., operations of arity 1), such as additive inverse and multiplicative inverse. An operation of arity zero, or nullary operation, is a constant. The mixed product is an example of an operation of arity 3, also called ternary operation.
Generally, the arity is taken to be finite. However, infinitary operations are sometimes considered, in which case the "usual" operations of finite arity are called finitary operations.
A partial operation is defined similarly to an operation, but with a partial function in place of a function.
Types of operation
There are two common types of operations: unary and binary. Unary operations involve only one value, such as negation and trigonometric functions. Binary operations, on the other hand, take two values, and include addition, subtraction, multiplication, division, and exponentiation.
Operations can involve mathematical objects other than numbers. The logical values true and false can be combined using logic operations, such as and, or, and not. Vectors can be added and subtracted. Rotations can be combined using the function composition operation, performing the first rotation and then the second. Operations on sets include the binary operations union and intersection and the unary operation of complementation. Operations on functions include composition and convolution.
Operations may not be defined for every possible value of its domain. For example, in the real numbers one cannot divide by zero or take square roots of negative numbers. The values for which an operation is defined form a set called its domain of definition or active domain. The set which contains the values produced is called the codomain, but the set of actual values attained by the operation is its codomain of definition, active codomain, image or range. For example, in the real numbers, the squaring operation only produces non-negative numbers; the codomain is the set of real numbers, but the range is the non-negative numbers.
Operations can involve dissimilar objects: a vector can be multiplied by a scalar to form another vector (an operation known as scalar multiplication), and the inner product operation on two vectors produces a quantity that is scalar. An operation may or may not have certain properties, for example it may be associative, commutative, anticommutative, idempotent, and so on.
The values combined are called operands, arguments, or inputs, and the value produced is called the value, result, or output. Operations can have fewer or more than two inputs (including the case of zero input and infinitely many inputs).
An operator is similar to an operation in that it refers to the symbol or the process used to denote the operation. Hence, their point of view is different. For instance, one often speaks of "the operation of addition" or "the addition operation," when focusing on the operands and result, but one switches to "addition operator" (rarely "operator of addition"), when focusing on the process, or from the more symbolic viewpoint, the function +: X × X → X (where X is a set such as the set of real numbers).
Definition
An n-ary operation ω on a set X is a function ω: X^n → X. The set X^n is called the domain of the operation, the output set X is called the codomain of the operation, and the fixed non-negative integer n (the number of operands) is called the arity of the operation. Thus a unary operation has arity one, and a binary operation has arity two. An operation of arity zero, called a nullary operation, is simply an element of the codomain. An n-ary operation can also be viewed as an (n + 1)-ary relation that is total on its n input domains and unique on its output domain.
An n-ary partial operation ω from X^n to X is a partial function ω: X^n → X. An n-ary partial operation can also be viewed as an (n + 1)-ary relation that is unique on its output domain.
The above describes what is usually called a finitary operation, referring to the finite number of operands (the value n). There are obvious extensions where the arity is taken to be an infinite ordinal or cardinal, or even an arbitrary set indexing the operands.
Often, the use of the term operation implies that the domain of the function includes a power of the codomain (i.e. the Cartesian product of one or more copies of the codomain), although this is by no means universal, as in the case of dot product, where vectors are multiplied and result in a scalar. An n-ary operation ω: X^n → X is called an internal operation. An n-ary operation ω: X^i × S × X^(n−i−1) → X, where 0 ≤ i < n, is called an external operation by the scalar set or operator set S. In particular for a binary operation, ω: S × X → X is called a left-external operation by S, and ω: X × S → X is called a right-external operation by S. An example of an internal operation is vector addition, where two vectors are added and result in a vector. An example of an external operation is scalar multiplication, where a vector is multiplied by a scalar and results in a vector.
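A minimal sketch of the distinction, taking vector addition on R^2 as an internal operation and scalar multiplication as a left-external operation by the scalar set R; the function names are illustrative.

# Internal operation: vector addition, R^2 x R^2 -> R^2
def vector_add(u, v):
    return (u[0] + v[0], u[1] + v[1])

# Left-external operation by the scalars R: scalar multiplication, R x R^2 -> R^2
def scalar_mul(s, v):
    return (s * v[0], s * v[1])

print(vector_add((1, 2), (3, 4)))    # (4, 6)
print(scalar_mul(3, (1, 2)))         # (3, 6)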
An n-ary multifunction or multioperation ω is a mapping from a Cartesian power of a set into the set of subsets of that set, formally ω: X^n → P(X).
| Mathematics | Basics | null |
4141488 | https://en.wikipedia.org/wiki/Triflic%20acid | Triflic acid | Triflic acid, the short name for trifluoromethanesulfonic acid, TFMS, TFSA, HOTf or TfOH, is a sulfonic acid with the chemical formula CF3SO3H. It is one of the strongest known acids. Triflic acid is mainly used in research as a catalyst for esterification. It is a hygroscopic, colorless, slightly viscous liquid and is soluble in polar solvents.
Synthesis
Trifluoromethanesulfonic acid is produced industrially by electrochemical fluorination (ECF) of methanesulfonic acid:
CH3SO3H + 4 HF -> CF3SO2F + H2O + 3 H2
The resulting CF3SO2F is hydrolyzed, and the resulting triflate salt is reprotonated. Alternatively, trifluoromethanesulfonic acid arises by oxidation of trifluoromethylsulfenyl chloride:
CF3SCl + 2 Cl2 + 3 H2O -> CF3SO3H + 5 HCl
Triflic acid is purified by distillation from triflic anhydride.
Historical
Trifluoromethanesulfonic acid was first synthesized in 1954 by Robert Haszeldine and Kidd by the following reaction:
Reactions
As an acid
In the laboratory, triflic acid is useful in protonations because the conjugate base of triflic acid is nonnucleophilic. It is also used as an acidic titrant in nonaqueous acid-base titration because it behaves as a strong acid in many solvents (acetonitrile, acetic acid, etc.) where common mineral acids (such as HCl or H2SO4) are only moderately strong.
With its very large acid dissociation constant (Ka) and strongly negative pKa, triflic acid qualifies as a superacid. It owes many of its useful properties to its great thermal and chemical stability. Both the acid and its conjugate base CF3SO3−, known as triflate, resist oxidation/reduction reactions, whereas many strong acids are oxidizing, such as perchloric or nitric acid. Further recommending its use, triflic acid does not sulfonate substrates, which can be a problem with sulfuric acid, fluorosulfuric acid, and chlorosulfonic acid. Below is a prototypical sulfonation, which triflic acid does not undergo:
C6H6 + H2SO4 -> C6H5(SO3H) + H2O (in the presence of SO3)
Triflic acid fumes in moist air and forms a stable solid monohydrate, CF3SO3H·H2O, melting point 34 °C.
Salt and complex formation
The triflate ligand is labile, reflecting its low basicity. Trifluoromethanesulfonic acid reacts exothermically with metal carbonates, hydroxides, and oxides. Illustrative is the synthesis of Cu(OTf)2.
Cu2CO3(OH)2 + 4 CF3SO3H -> 2 Cu(O3SCF3)2 + 3 H2O + CO2
Chloride ligands can be converted to the corresponding triflates:
3 CF3SO3H + [Co(NH3)5Cl]Cl2 -> [Co(NH3)5O3SCF3](O3SCF3)2 + 3 HCl
This conversion is conducted in neat HOTf at 100 °C, followed by precipitation of the salt upon the addition of ether.
Organic chemistry
Triflic acid reacts with acyl halides to give mixed triflate anhydrides, which are strong acylating agents, e.g. in Friedel–Crafts reactions.
CH3C(O)Cl + CF3SO3H -> CH3C(O)OSO2CF3 + HCl
CH3C(O)OSO2CF3 + C6H6 -> CH3C(O)C6H5 + CF3SO3H
Triflic acid catalyzes the reaction of aromatic compounds with sulfonyl chlorides, probably also through the intermediacy of a mixed anhydride of the sulfonic acid.
Triflic acid promotes other Friedel–Crafts-like reactions including the cracking of alkanes and alkylation of alkenes, which are very important to the petroleum industry. These triflic acid derivative catalysts are very effective in isomerizing straight chain or slightly branched hydrocarbons that can increase the octane rating of a particular petroleum-based fuel.
Triflic acid reacts exothermically with alcohols to produce ethers and olefins.
Dehydration gives the acid anhydride, trifluoromethanesulfonic anhydride, (CF3SO2)2O.
Safety
Triflic acid is one of the strongest acids. Contact with skin causes severe burns with delayed tissue destruction. On inhalation it causes fatal spasms, inflammation and edema.
Like sulfuric acid, triflic acid must be slowly added to polar solvents to prevent thermal runaway.
| Physical sciences | Specific acids | Chemistry |
4142438 | https://en.wikipedia.org/wiki/Sharovipteryx | Sharovipteryx | Sharovipteryx ("Sharov's wing", known until 1981 as Podopteryx, "foot wing") is a genus of early gliding reptiles containing the single species Sharovipteryx mirabilis. It is known from a single fossil and is the only glider with a membrane surrounding the pelvis instead of the pectoral girdle. This lizard-like reptile was found in 1965 in the Madygen Formation, Dzailauchou, on the southwest edge of the Fergana Valley in Kyrgyzstan, in what was then the Asian part of the U.S.S.R. dating to the middle-late Triassic period (about 225 million years ago). The Madygen horizon displays flora that put it in the Upper Triassic. An unusual reptile, Longisquama, was also found there.
S. mirabilis is known from a unique holotype specimen, which was first described by Aleksandr Grigorevich Sharov in 1971. Sharov named the species Podopteryx mirabilis, "foot wing", for the wing membranes on the hind limbs. However, that name had previously been used for a genus of damselfly, Podopteryx, so in 1981 Richard Cowen created the new genus name Sharovipteryx for the species.
Description
The skeleton is preserved in dorsal view and largely complete, with the bones still articulated and impressions of some of the integument. But part of the pectoral girdle is missing and part is still encased in stone.
In 1987, Gans et al. published a revised description: they found that the patagium did not extend to the forelimbs. Their experiments with models showed that the reptile could glide with its uropatagium and stabilize its glide by changing the angles of its forelimbs to provide an aeronautic canard or by bending its tail up or down to produce drag.
In 2006, Dyke et al. published a study on possible gliding techniques for Sharovipteryx. The authors found that the wing membrane, which stretched between its very long hind legs and tail, would have allowed it to glide as a delta wing aircraft does. If the tiny front limbs also supported a membrane, they could have acted as a very efficient means of controlling pitch stability, very much like an aeronautic canard. Without a forewing, the authors find, controlled gliding would have been very difficult. Together with the canards on the forelimbs, these anterior membranes may have formed excellent control surfaces for gliding. The area around the forelimbs was completely prepared away in the only known fossil, destroying any possible trace of a membrane there.
Classification
Sharovipteryx is generally agreed to belong to a group of early archosaur relatives known as the protorosaurs (or prolacertiformes). A possible close relative of Sharovipteryx, Ozimek volans was recovered as a member of the family Tanystropheidae in the phylogenetic analysis conducted by Pritchard & Sues (2019); Sharovipteryx itself was not included in this analysis, but the authors considered it possible that both Ozimek and Sharovipteryx were nested within Tanystropheidae.
| Biology and health sciences | Other prehistoric reptiles | Animals |
4144876 | https://en.wikipedia.org/wiki/Longisquama | Longisquama | Longisquama is a genus of extinct reptile. There is only one species, Longisquama insignis, known from a poorly preserved skeleton and several incomplete fossil impressions from the Middle to Late Triassic Madygen Formation in Kyrgyzstan. It is known from the type fossil specimen, slab and counterslab (PIN 2548/4 and PIN 2584/5) and five referred specimens of possible integumentary appendages (PIN 2584/7 through 9). All specimens are in the collection of the Paleontological Institute of the Russian Academy of Sciences in Moscow.
Longisquama means "long scales"; the specific name insignis refers to its small size. The Longisquama holotype is notable for a number of long structures that appear to grow from its skin. The current opinion is that Longisquama is an ambiguous diapsid and has no bearing on the origin of birds.
History
Interpretation
Researchers Haubold and Buffetaut believed that the structures were long, modified scales attached in pairs to the lateral walls of the body, like paired gliding membranes. They published a reconstruction of Longisquama with plumes in a pattern akin to gliding lizards like Draco species and Kuehneosaurus latus, allowing it to glide, or at least parachute. Though the reconstruction is now thought to have been inaccurate, versions of it are still often portrayed in modern paleoart.
Other researchers place the scales differently. Unwin and Benton interpreted them as a single, unpaired row of modified scales that run along the dorsal midline. Jones et al. interpreted them as two paired rows of structures that are anatomically very much like feathers, and which are in positions like those of birds' spinal feather tracts. Feather-development expert Richard Prum (and also Reisz and Sues) sees the structures as anatomically very different from feathers, and thinks they are elongate, ribbonlike scales.
Still other observers (e.g. Fraser in 2006) believe that the structures are not part of Longisquama at all, that they are simply plant fronds that were preserved along with the reptile and were misinterpreted. Buchwitz & Voigt (2012) argue that the structures of Longisquama are not plant remains, because all of the structures except for the last in the holotype PIN 2584/4 are arranged regularly, and that they are not preserved as carbon films, the usual mode of preservation for plants in the Madygen Formation. The only plant from Madygen with similarities to the Longisquama structures is Mesenteriophyllum kotschnevii, but its leaves do not have the distinct hockey-stick shape of the structures attributed to Longisquama.
Description
Integumentary structures
Longisquama is characterized by distinctive integumentary structures along its back. The holotype (specimen PIN 2584/4) is the only known fossil preserving these appendages projecting from the back of an associated skeleton. It has seven appendages radiating in a fan-like pattern, but the tips are not preserved. PIN 2584/9 preserves five complete appendages spaced close together. PIN 2584/6 preserves two long, curved appendage running side by side. Other specimens, such as PIN 2585/7 and FG 596/V/1, preserve only one appendage. These structures are long and narrow throughout most of their lengths, and angle backward near the tip to give the appearance of a hockey stick. The proximal straight section is divided into three longitudinal lobes: a smooth lobe on either side and a transversely ridged lobe running between them. The middle ridged lobe is made up of raised "rugae" and deep "interstices", which Sharov compared to rosary beads. The distal section is thought to be an extension of the middle and anterior lobes of the proximal section. While the anterior lobe widens in the distal section, the posterior lobe of the proximal section narrows until it ends at the base of the distal section. In addition, an "anterior flange" appears about two-thirds the way up the proximal section and continues to the tip of the distal section. Both lobes in the distal section are ridged and separated by a grooved axis. In some specimens, the rugae of either lobe in the distal section line up with each other, while in other specimens they do not. Some specimens have straight rugae projecting perpendicular to the axis, while others have rugae that curve in an S-shape. One specimen of Longisquama, PIN 2584/5, has small spines projecting from the axis of the distal section.
The holotype skeleton shows each structure attaching to a vertebral spine. These anchorage points are visible as raised knobs. The base of each appendage is slightly convex, unlike the flattened shape of the rest of the structure. The convex shape may be evidence that the base of each structure was tubular in life, anchoring like other integumentary structures such as mammalian hair or avian feathers into a follicle. Moreover, the proximity of each structure to its corresponding vertebra suggests that a thick layer of soft tissue, possibly including a follicle, surrounded each base.
Classification
Like the 'long scales', the skeletal features of Longisquama are equally difficult to diagnose. As a result, Longisquama has been related by scientists to many different sauropsid groups.
Sharov determined that it was a "pseudosuchian" (a "primitive" archosaur, but as an archosaur a relatively derived reptile) on the basis of two features: a mandibular fenestra and an antorbital fenestra.
Sharov's original description also includes an elongate scapula.
Jones et al. see Longisquama as an archosaur, adding to Sharov's two characters a furcula.
Olshevsky believes that Longisquama is an archosaur and, moreover, an early dinosaur.
Unwin & Benton did not think it was possible to diagnose the crucial fenestrae; the holes could simply be damage to the fossil.
They agreed with Sharov that Longisquama has acrodont teeth and an interclavicle, but instead of a furcula, they saw paired clavicles.
These features would be more typical of a member of Lepidosauromorpha, meaning that Longisquama is not an archosaur and thus not closely related to birds.
According to a cladistic study by Phil Senter in 2004, Longisquama would be an even more basal diapsid and a member of Avicephala, more closely related to Coelurosauravus.
A 2012 re-examination of the fossil found that the presence of fenestrae in the skull crucial to classification as an archosaur could not be confirmed; in fact, a section of the skull in one of the fossil slabs that had previously been used to justify the presence of antorbital fenestrae does not contain any actual bone. This study concluded that none of the proposed classifications of Longisquama could be confirmed or refuted using the available evidence. The authors of the study tentatively placed Longisquama among the Archosauromorpha as a result of their hypothesis of developmental "deep homology" between its plumes, bird feathers, crocodile scales and pterosaur pycnofibres.
Debate over bird origins
The questions relating to the reptilian classification of Longisquama and to the exact function of the 'long scales' relate to a dismissed proposal that birds are not dinosaurs, but rather descend from earlier archosaurs like Longisquama.
Background
A consensus of paleontologists agrees that birds evolved from theropod dinosaurs. The scenario for this hypothesis is that early theropod dinosaurs were endothermic, and evolved simple filamentous feathers for insulation. These feathers later increased in size and complexity and then adapted to aerodynamic uses. Ample evidence for this hypothesis has been found in the fossil record, specifically for such dinosaurs as Kulindadromeus, Sinosauropteryx, Caudipteryx, Microraptor and many others. Longisquama is thus regarded as a diapsid with strange scales, ambiguous skeletal features and no real significance to bird evolution.
An extreme minority of scientists posit the hypothesis that birds evolved from small, arboreal archosaurs like Longisquama. They see these as ectothermic animals that adapted to gliding by developing elongated scales and then pennaceous feathers. This hypothesis, however, is not supported by cladistic analysis.
| Biology and health sciences | Other prehistoric reptiles | Animals |
329549 | https://en.wikipedia.org/wiki/Surface%20of%20revolution | Surface of revolution | A surface of revolution is a surface in Euclidean space created by rotating a curve (the generatrix) one full revolution around an axis of rotation (normally not intersecting the generatrix, except at its endpoints).
The volume bounded by the surface created by this revolution is the solid of revolution.
Examples of surfaces of revolution generated by a straight line are cylindrical and conical surfaces depending on whether or not the line is parallel to the axis. A circle that is rotated around any diameter generates a sphere of which it is then a great circle, and if the circle is rotated around an axis that does not intersect the interior of a circle, then it generates a torus which does not intersect itself (a ring torus).
Properties
The sections of the surface of revolution made by planes through the axis are called meridional sections. Any meridional section can be considered to be the generatrix in the plane determined by it and the axis.
The sections of the surface of revolution made by planes that are perpendicular to the axis are circles.
Some special cases of hyperboloids (of either one or two sheets) and elliptic paraboloids are surfaces of revolution. These may be identified as those quadratic surfaces all of whose cross sections perpendicular to the axis are circular.
Area formula
If the curve is described by the parametric functions x(t), y(t), with t ranging over some interval [a, b], and the axis of revolution is the y-axis, then the surface area A_y is given by the integral

A_y = 2π ∫_a^b x(t) √((dx/dt)² + (dy/dt)²) dt,

provided that x(t) is never negative between the endpoints a and b. This formula is the calculus equivalent of Pappus's centroid theorem. The quantity

√((dx/dt)² + (dy/dt)²) dt

comes from the Pythagorean theorem and represents a small segment of the arc of the curve, as in the arc length formula. The quantity 2πx(t) is the path of (the centroid of) this small segment, as required by Pappus' theorem.

Likewise, when the axis of rotation is the x-axis and provided that y(t) is never negative, the area is given by

A_x = 2π ∫_a^b y(t) √((dx/dt)² + (dy/dt)²) dt.

If the continuous curve is described by the function y = f(x), a ≤ x ≤ b, then the integral becomes

A_x = 2π ∫_a^b y √(1 + (dy/dx)²) dx = 2π ∫_a^b f(x) √(1 + (f′(x))²) dx

for revolution around the x-axis, and

A_y = 2π ∫_a^b x √(1 + (dy/dx)²) dx

for revolution around the y-axis (provided a ≥ 0). These come from the above formula.
This can also be derived from multivariable integration. If a plane curve is given by ⟨x(t), y(t), 0⟩ then its corresponding surface of revolution when revolved around the x-axis has Cartesian coordinates given by r(t, θ) = ⟨x(t), y(t) cos(θ), y(t) sin(θ)⟩ with 0 ≤ θ ≤ 2π. Then the surface area is given by the surface integral

A_x = ∬_S dS = ∫_a^b ∫_0^2π ‖∂r/∂t × ∂r/∂θ‖ dθ dt.

Computing the partial derivatives yields

∂r/∂t = ⟨dx/dt, (dy/dt) cos(θ), (dy/dt) sin(θ)⟩,  ∂r/∂θ = ⟨0, −y(t) sin(θ), y(t) cos(θ)⟩,

and computing the cross product yields

∂r/∂t × ∂r/∂θ = ⟨y(t) (dy/dt), −y(t) (dx/dt) cos(θ), −y(t) (dx/dt) sin(θ)⟩

where the trigonometric identity sin²(θ) + cos²(θ) = 1 was used. With this cross product, we get

A_x = ∫_a^b ∫_0^2π ‖∂r/∂t × ∂r/∂θ‖ dθ dt = ∫_a^b ∫_0^2π y(t) √((dx/dt)² + (dy/dt)²) dθ dt = 2π ∫_a^b y(t) √((dx/dt)² + (dy/dt)²) dt,
where the same trigonometric identity was used again. The derivation for a surface obtained by revolving around the y-axis is similar.
For example, the spherical surface with unit radius is generated by the curve x(t) = cos(t), y(t) = sin(t), when t ranges over [0, π]. Its area is therefore

A = 2π ∫_0^π sin(t) √(cos²(t) + sin²(t)) dt = 2π ∫_0^π sin(t) dt = 4π.

For the case of the spherical curve with radius r, y(x) = √(r² − x²) rotated about the x-axis, the area is

A = 2π ∫_−r^r √(r² − x²) √(1 + x²/(r² − x²)) dx = 2π ∫_−r^r r dx = 4πr².
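A quick numerical check of the parametric area formula for the unit sphere as reconstructed above; the midpoint-rule step count is arbitrary, and the result should be close to 4π ≈ 12.566.

# Numerical check: area of the unit sphere via the surface-of-revolution formula
# A = 2*pi * integral of y(t) * sqrt(x'(t)**2 + y'(t)**2) dt, with x = cos t, y = sin t, t in [0, pi]
import math

n = 100000
h = math.pi / n
total = 0.0
for i in range(n):
    t = (i + 0.5) * h                     # midpoint rule
    y = math.sin(t)
    dx, dy = -math.sin(t), math.cos(t)    # derivatives of cos t and sin t
    total += y * math.hypot(dx, dy) * h
area = 2 * math.pi * total
print(area, 4 * math.pi)                  # both approximately 12.566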
A minimal surface of revolution is the surface of revolution of the curve between two given points which minimizes surface area. A basic problem in the calculus of variations is finding the curve between two points that produces this minimal surface of revolution.
There are only two minimal surfaces of revolution (surfaces of revolution which are also minimal surfaces): the plane and the catenoid.
Coordinate expressions
A surface of revolution given by rotating a curve described by y = f(x) around the x-axis may be most simply described by y² + z² = f(x)². This yields the parametrization in terms of x and θ as (x, f(x) cos(θ), f(x) sin(θ)). If instead we revolve the curve around the y-axis, then the curve is described by y = f(x), yielding the expression (x cos(θ), f(x), x sin(θ)) in terms of the parameters x and θ.
If x and y are defined in terms of a parameter t, then we obtain a parametrization in terms of t and θ. If x and y are functions of t, then the surface of revolution obtained by revolving the curve around the x-axis is described by (x(t), y(t) cos(θ), y(t) sin(θ)), and the surface of revolution obtained by revolving the curve around the y-axis is described by (x(t) cos(θ), y(t), x(t) sin(θ)).
Geodesics
Meridians are always geodesics on a surface of revolution. Other geodesics are governed by Clairaut's relation.
Toroids
A surface of revolution with a hole in it, where the axis of revolution does not intersect the surface, is called a toroid. For example, when a rectangle is rotated around an axis parallel to one of its edges, then a hollow square-section ring is produced. If the revolved figure is a circle, then the object is called a torus.
| Mathematics | Three-dimensional space | null |
329915 | https://en.wikipedia.org/wiki/Supercentenarian | Supercentenarian | A supercentenarian, sometimes hyphenated as super-centenarian, is a person who is 110 years or older. This age is achieved by about one in 1,000 centenarians. Supercentenarians typically live a life free of significant age-related diseases until shortly before the maximum human lifespan is reached.
Etymology
The term "supercentenarian" has been used since 1832 or earlier. Norris McWhirter, editor of The Guinness Book Of Records, used the term in association with age claims researcher A. Ross Eckler Jr. in 1976, and the term was further popularised in 1991 by William Strauss and Neil Howe in their book Generations.
The term "semisupercentenarian" has been used to describe someone aged 105–109. Originally the term "supercentenarian" was used to mean someone well over the age of 100, but 110 years and over became the cutoff point of accepted criteria for demographers.
Incidence
The Gerontology Research Group maintains a top 30–40 list of oldest verified living people. The researchers estimate, based on a 0.15% to 0.25% survival rate of centenarians until the age of 110, that there should be between 300 and 450 living supercentenarians in the world. A study conducted in 2010 by the Max Planck Institute for Demographic Research found 663 validated supercentenarians, living and dead, and showed that the countries with the highest total number (not frequency) of supercentenarians (in decreasing order) were the United States, Japan, England plus Wales, France, and Italy. The first verified supercentenarian in human history was Dutchman Geert Adriaans Boomgaard (1788–1899), and it was not until the 1980s that the oldest verified age surpassed 115.
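The quoted range follows from simple arithmetic on those survival rates; the sketch below back-computes the centenarian population implied by the 300–450 estimate, purely as an illustration of the calculation.

# Implied centenarian population behind the 300-450 supercentenarian estimate
low_rate, high_rate = 0.0015, 0.0025    # 0.15% and 0.25% survival to age 110
low_count, high_count = 300, 450
implied_at_low_rate = low_count / low_rate      # centenarians needed at the low rate
implied_at_high_rate = high_count / high_rate   # centenarians needed at the high rate
print(implied_at_low_rate, implied_at_high_rate)    # 200000.0 and 180000.0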
History
While claims of extreme age have persisted from the earliest times in history, the earliest supercentenarian accepted by Guinness World Records is Dutchman Thomas Peters (reportedly c. 1745–1857). However, Peters's age cannot be reliably verified due to an absence of any documents recording his early life. Other scholars, such as French demographer Jean-Marie Robine, consider Geert Adriaans Boomgaard, also of the Netherlands, who turned 110 in 1898, to be the first verifiable case, as the alleged evidence for Peters has apparently been lost. The evidence for the 112 years of Englishman William Hiseland (reportedly 1620–1732) does not meet the standards required by Guinness World Records.
Church of Norway records, the accuracy of which is subject to dispute, also show what appear to be several supercentenarians who lived in the south-central part of present-day Norway during the 16th and 17th centuries, including Johannes Torpe (1549–1664), and Knud Erlandson Etun (1659–1770), both residents of Valdres, Oppland.
In 1902, Margaret Ann Neve, born in 1792, became the first verified female supercentenarian.
Jeanne Calment of France, who died in 1997 aged 122 years, 164 days, had the longest human lifespan documented. The oldest man ever verified is Jiroemon Kimura of Japan, who died in 2013 aged 116 years and 54 days.
Inah Canabarro Lucas (born 8 June 1908) of Brazil is the world's oldest living person, aged . João Marinho Neto (born 5 October 1912) of Brazil is the world's oldest living man, aged .
Research into centenarians
Research into centenarians helps scientists understand how an ordinary person might live longer.
Organisations that research centenarians and supercentenarians include the GRG, LongeviQuest, and the Supercentenarian Research Foundation.
In May 2021, a whole genome sequencing analysis of 81 Italian semi-supercentenarians and supercentenarians was published, along with 36 controls from the same region who were simply of advanced age.
Morbidity
Research on the morbidity of supercentenarians has found that they remain free of major age-related diseases (e.g., stroke, cardiovascular disease, dementia, cancer, Parkinson's disease and diabetes) until the very end of life when they die of exhaustion of organ reserve, which is the ability to return organ function to homeostasis. About 10% of supercentenarians survive until the last three months of life without major age-related diseases, as compared to only 4% of semi-supercentenarians and 3% of centenarians.
By measuring the biological age of various tissues from supercentenarians, researchers may be able to identify the nature of those that are protected from ageing effects. A study of 30 different body parts from a 112-year-old female supercentenarian, along with younger controls, found that the cerebellum is protected from ageing according to an epigenetic biomarker of tissue age known as the epigenetic clock: its reading was about 15 years younger than expected in a centenarian. These findings could explain why the cerebellum exhibits fewer neuropathological hallmarks of age-related dementia as compared to other brain regions.
A 2021 genomic study identified genetic characteristics that protect against age-related diseases, particularly variants that improve DNA repair. Five variants were found to be significant, affecting STK17A (increased expression) and COA1 (reduced expression) genes. Supercentenarians also had an unexpectedly low level of somatic mutations.
| Biology and health sciences | Fields of medicine | Health |
330091 | https://en.wikipedia.org/wiki/Rhaphidophoridae | Rhaphidophoridae | The orthopteran family Rhaphidophoridae of the suborder Ensifera has a worldwide distribution. Common names for these insects include cave crickets, camel crickets, spider crickets (sometimes shortened to "criders" or "sprickets"), and sand treaders. Those occurring in New Zealand are typically referred to as jumping or cave wētā. Most are found in forest environments or within caves, animal burrows, cellars, under stones, or in wood or similar environments. All species are flightless and nocturnal, usually with long antennae and legs. More than 500 species of Rhaphidophoridae are described.
The well-known field crickets are from a different superfamily (Grylloidea) and only look vaguely similar, while members of the family Tettigoniidae may look superficially similar in body form.
Description
Most cave crickets have very large hind legs with "drumstick-shaped" femora and equally long, thin tibiae, and long, slender antennae. The antennae arise closely and next to each other on the head. They are brownish in color and rather humpbacked in appearance, always wingless, and up to long in body and for the legs. The bodies of early instars may appear translucent.
As their name suggests, cave crickets are commonly found in caves or old mines. Some inhabit other cool, damp environments such as rotten logs, stumps and hollow trees, and under damp leaves, stones, boards, and logs. Occasionally, they prove to be a nuisance in the basements of homes in suburban areas, drains, sewers, wells, and firewood stacks. Some reach into alpine areas and live close to permanent ice, such as the Mount Cook "flea" (Pharmacus montanus) and its relatives in New Zealand.
Subfamilies and genera
Aemodogryllinae
Genera include:
tribe Aemodogryllini Jacobson, 1905 – Asia (Korea, Indochina, Russia, China), Europe
Diestrammena Brunner von Wattenwyl, 1888
Tachycines Adelung, 1902
tribe Diestramimini Gorochov, 1998 – India, southern China, Indochina
Diestramima Storozhenko, 1990
Gigantettix Gorochov, 1998
Anoplophilinae
Genera include:
Alpinanoplophilus Ishikawa, 1993 – Japan
Anoplophilus Karny, 1931 – Japan and Korea
Ceuthophilinae
cave crickets, camel crickets and sand treaders: North America
Genera include:
tribe Argyrtini Saussure & Pictet, 1897
Anargyrtes Hubbell, 1972
Argyrtes Saussure & Pictet, 1897
Leptargyrtes Hubbell, 1972
tribe Ceuthophilini Tepper, 1892
Ceuthophilus Scudder, 1863
Macrobaenetes Tinkham, 1962
Rhachocnemis Caudell, 1916
Styracosceles Hubbell, 1936
Typhloceuthophilus Hubbell, 1940
Udeopsylla Scudder, 1863
Utabaenetes Tinkham, 1970
tribe Daihiniini Karny, 1930
Ammobaenetes Hubbell, 1936
Daihinia Haldeman, 1850
Daihinibaenetes Tinkham, 1962
Daihiniella Hubbell, 1936
Daihiniodes Hebard, 1929
Phrixocnemis Scudder, 1894
tribe Hadenoecini Ander, 1939 – North America
Euhadenoecus Hubbell, 1978
Hadenoecus Scudder, 1863
tribe Pristoceuthophilini Rehn, 1903
Exochodrilus Hubbell, 1972
Farallonophilus Rentz, 1972
Pristoceuthophilus Rehn, 1903
Salishella Hebard, 1939
Dolichopodainae
cave crickets: southern Europe, western Asia
Dolichopoda Bolivar, 1880
Gammarotettiginae
Auth. Karny, 1937 – North America
tribe Gammarotettigini Karny, 1937
Gammarotettix Brunner von Wattenwyl, 1888
Macropathinae
Gondwanan cave crickets
Genera include:
tribe Macropathini Karny, 1930 – Australia, New Zealand, South America, South Africa, the Falkland Islands
Australotettix Richards, 1964 – Australia (Queensland, New South Wales)
Cavernotettix Richards, 1966 – Australia (New South Wales, Victoria, Tasmania)
Crux Trewick, 2024 – New Zealand
Dendroplectron Richards, 1964 – New Zealand
Heteromallus Brunner von Wattenwyl, 1888 – South America
Insulanoplectron Richards, 1970 – New Zealand
Ischyroplectron Hutton, 1896 – New Zealand
Isoplectron Hutton, 1896 – New Zealand
Macropathus Walker, 1869 – New Zealand
Maotoweta Johns & Cook, 2014 – New Zealand
Micropathus Richards, 1964 – Australia (Tasmania)
Miotopus Hutton, 1898 – New Zealand
Neonetus Brunner von Wattenwyl, 1888 – New Zealand
Notoplectron Richards, 1964 – New Zealand
Novoplectron Richards, 1966 – New Zealand
Novotettix Richards, 1966 – Australia (South Australia)
Occultastella Trewick, 2024 – New Zealand
Pachyrhamma Brunner von Wattenwyl, 1888 – New Zealand
Pallidoplectron Richards, 1958 – New Zealand
Pallidotettix Richards, 1968 – Australia (South Australia, Western Australia)
Paraneonetus Salmon, 1958 – New Zealand
Parudenus Enderlein, 1910 – South America
Pharmacus Pictet & Saussure, 1893 – New Zealand
Pleioplectron Hutton, 1896 – New Zealand
Praecantrix Hegg, Morgan-Richards & Trewick 2024 – New Zealand
Spelaeiacris Peringuey, 1916 – South Africa
Speleotettix Chopard, 1944 – Australia (South Australia, Victoria)
Tasmanoplectron Richards, 1971 – Australia (Tasmania)
Udenus Brunner von Wattenwyl, 1900 – South America
tribe Talitropsini Gorochov, 1988
Talitropsis Bolivar, 1882 – New Zealand
† Protroglophilinae
† Prorhaphidophora Chopard, 1936
† Protroglophilus Gorochov, 1989
Rhaphidophorinae
Genera include:
tribe Rhaphidophorini Walker, 1869 – India, southern China, Japan, Indochina, Malaysia, Australasia
Eurhaphidophora Gorochov, 1999
Rhaphidophora Serville, 1838
Stonychophora Karny, 1934
Troglophilinae
cave crickets: the Mediterranean region
Troglophilus Krauss, 1879
Tropidischiinae
camel crickets: Canada
Tropidischia Scudder, 1869
An as-yet-unnamed genus was discovered within a cave in Grand Canyon–Parashant National Monument, on the Utah/Arizona border, in 2005. Its most distinctive characteristic is that it has functional grasping cerci on its posterior.
Ecology
Their distinctive limbs and antennae serve a double purpose. Typically living in a lightless environment, or active at night, they rely heavily on their sense of touch, which is limited by reach. While they have been known to take up residence in the basements of buildings, many cave crickets live out their entire lives deep inside caves. In those habitats, they sometimes face long spans of time with insufficient access to nutrients. Given their limited vision, cave crickets often jump to avoid predation. Those species of Rhaphidophoridae that have been studied are primarily scavengers, eating plant, animal, and fungi material. Although they look intimidating, they are completely harmless.
The group known as "sand treaders" is restricted to sand dunes, and are adapted to live in this environment. They are active only at night, and spend the day burrowed into the sand to minimize water loss. In the large sand dunes of California and Utah, they serve as food for scorpions and at least one specialized bird, LeConte's thrasher (Toxostoma lecontei). The thrasher roams the dunes looking for the tell-tale debris of the diurnal hiding place and excavates the sand treaders (the range of bird is in the Mojave and Colorado Deserts in the U.S.).
Interactions with humans
Cave and camel crickets are of little economic importance except as a nuisance in buildings and homes, especially basements. They are usually "accidental invaders" that wander in from adjacent areas. They may reproduce indoors, and are seen in dark, moist conditions such as a basement, shower, or laundry area, as well as in organic debris (e.g., compost heaps) that serve as food. They are fairly common invaders of homes in Hokkaido and other chilly regions in Japan. They are called kamado-uma or colloquially benjo korogi (便所コオロギ, literally, "toilet cricket").
A representation of a female from the Troglophilus genus has been found engraved on a bison bone in the Cave of the Trois-Frères, suggesting that they were likely already present around humans, perhaps as pets or pests, in caves inhabited by prehistoric populations during the Magdalenian.
| Biology and health sciences | Orthoptera | Animals |
330158 | https://en.wikipedia.org/wiki/Artificial%20island | Artificial island | An artificial island or man-made island is an island that has been constructed by humans rather than formed through natural processes. Some definitions describe artificial islands as land whose formation has been substantially shaped by human intervention, while others hold that artificial islands are created by expanding existing islets, constructing on existing reefs, or amalgamating several islets together. Although constructing artificial islands is not a modern phenomenon, there is no settled legal definition of the term. Artificial islands may vary in size from small islets reclaimed solely to support a single pillar of a building or structure to those that support entire communities and cities. Archaeologists argue that such islands were created as far back as the Neolithic era. Early artificial islands included floating structures in still waters or wooden or megalithic structures erected in shallow waters (e.g. crannógs and Nan Madol discussed below).
In modern times, artificial islands are usually formed by land reclamation, but some are formed by flooding of valleys resulting in the tops of former knolls getting isolated by water (e.g., Barro Colorado Island). There are several reasons for the construction of these islands, which include residential, industrial, commercial, structural (for bridge pylons) or strategic purposes. One of the world's largest artificial islands, René-Levasseur Island, was formed by the flooding of two adjacent reservoirs. Technological advancements have made it feasible to build artificial islands in waters as deep as 75 meters. The size of the waves and the structural integrity of the island play a crucial role in determining the maximum depth.
History
Despite a popular image of modernity, artificial islands actually have a long history in many parts of the world, dating back to the reclaimed islands of Ancient Egyptian civilization, the Stilt crannogs of prehistoric Wales, Scotland and Ireland, the ceremonial centers of Nan Madol in Micronesia and the still extant floating islands of Lake Titicaca. The city of Tenochtitlan, the Aztec predecessor of Mexico City that was home to 500,000 people when the Spaniards arrived, stood on a small natural island in Lake Texcoco that was surrounded by countless artificial chinamitl islands.
The people of Langa Langa Lagoon and Lau Lagoon in Malaita, Solomon Islands, built about 60 artificial islands on the reef including Funaafou, Sulufou, and Adaege. The people of Lau Lagoon build islands on the reef as this provided protection against attack from the people who lived in the centre of Malaita. These islands were formed literally one rock at a time. A family would take their canoe out to the reef which protects the lagoon and then dive for rocks, bring them to the surface and then return to the selected site and drop the rocks into the water. Living on the reef was also healthier as the mosquitoes, which infested the coastal swamps, were not found on the reef islands. The Lau people continue to live on the reef islands.
Many artificial islands have been built in urban harbors to provide either a site deliberately isolated from the city or just spare real estate otherwise unobtainable in a crowded metropolis. An example of the first case is Dejima (or Deshima), created in the bay of Nagasaki in Japan's Edo period as a contained center for European merchants. During the isolationist era, Dutch people were generally banned from Nagasaki and Japanese from Dejima. Similarly, Ellis Island, in Upper New York Bay beside New York City, a former tiny islet greatly expanded by land reclamation, served as an isolated immigration center for the United States in the late 19th and early 20th century, preventing an escape to the city of those refused entry for disease or other perceived flaws, who might otherwise be tempted toward illegal immigration. One of the most well-known artificial islands is the Île Notre-Dame in Montreal, built for Expo 67.
The Venetian Islands in Miami Beach, Florida, in Biscayne Bay added valuable new real estate during the Florida land boom of the 1920s. When the bubble that the developers were riding burst, the bay was left scarred with the remnants of their failed project. A boom town development company was building a sea wall for an island that was to be called Isola di Lolando but could not stay in business after the 1926 Miami Hurricane and the Great Depression, dooming the island-building project. The concrete pilings from the project still stand as another development boom roared around them, 80 years later.
Largest artificial islands according to their size (reclaimed lands)
Modern projects
Bahrain
Bahrain has several artificial islands including Northern City, Diyar Al Muharraq, and Durrat Al Bahrain. Named after the 'most perfect pearl' in the Persian Gulf, Durrat Al Bahrain is a US$6 billion joint development owned by the Bahrain Mumtalakat Holding Company and Kuwait Finance House Bahrain (KFH). The project is designed by the firm Atkins. It consists of a series of 15 large artificial islands covering an area of about 5 km2 (54,000,000 sq ft) and has six atolls, five fish-shaped islands, two crescent-shaped islands, and two more small islands related to the Marina area.
Netherlands
In 1969, the Flevopolder in the Netherlands was finished, as part of the Zuiderzee Works. It has a total land surface of 970 km2, which makes it by far the largest artificial island by land reclamation in the world. The island consists of two polders, Eastern Flevoland and Southern Flevoland. Together with the Noordoostpolder, which includes some small former islands like Urk, the polders form Flevoland, the 12th province of the Netherlands that almost entirely consists of reclaimed land.
An entire artificial archipelago, Marker Wadden, has been built as a conservation area for birds and other wildlife; the project started in 2016.
Maldives
The Maldives has been creating various artificial islands to promote economic development and to address the threat of rising sea levels. Hulhumalé island was reclaimed to establish a new land mass required to meet the existing and future housing, industrial and commercial development demands of the Malé region. The official settlement was inaugurated on May 12, 2004.
Qatar
The Pearl Island is in the north of the Qatari capital Doha and is home to a range of residential, commercial and tourism activities. Qanat Quartier is designed to be a 'Virtual Venice in the Middle East'. Other reclamation works include Lusail and large areas around Ras Laffan, Hamad International Airport and Hamad Port. The New Doha International Airport is the second-largest artificial island in the world, with an area of 22 km2. The Pearl-Qatar is the third-largest artificial island in the world, with an area of 13.9 km2; it was built in 2006, with DEME Group as the main contractor.
United Arab Emirates
The United Arab Emirates is home to several artificial island projects. They include the Yas Island, augmentations to Saadiyat Island, Khalifa Port, Al Reem Island, Al Lulu Island, Al Raha Creek, al Hudairiyat Island, The Universe and the Dubai Waterfront. Palm Islands (Palm Jumeirah, Palm Jebel Ali, and Deira Island) and the World Islands off Dubai are created for leisure and tourism purposes.
The Burj Al Arab is on its own artificial island. The Universe, Palm Jebel Ali, Dubai Waterfront, and Palm Deira are on hold.
China
China has conducted a land reclamation project that, by mid-2015, had built at least seven artificial islands totalling about 2,000 acres in the South China Sea off the coast of Palawan. One artificial island built on Fiery Cross Reef near the Spratly Islands is now the site of a military barracks, a lookout tower and a runway long enough to handle Chinese military aircraft.
A largely touristic and commercial project is the Ocean Flower Island project on Hainan island.
Indonesia
Pantai Indah Kapuk (PIK) in North Jakarta is an area featuring luxury residential and commercial developments. Two artificial islands, Golf Island and Ebony Island, were created to expand the PIK area. They offer facilities, recreational spaces, scenic waterfront views and residential areas.
Airports
Kansai International Airport, opened in 1994, was the first airport to be built completely on an artificial island; it was followed by Chūbu Centrair International Airport in 2005, both the New Kitakyushu Airport and Kobe Airport in 2006, Ordu Giresun Airport in 2016, and Rize–Artvin Airport in 2022.
When Hong Kong International Airport opened in 1998, 75% of the property was created using land reclamation upon the existing islands of Chek Lap Kok and Lam Chau. China is currently building several airports on artificial islands; they include new runways for Shanghai International Airport, Dalian Jinzhouwan International Airport on a 21-square-kilometer artificial island, Xiamen Xiang'an International Airport, and Sanya Hongtangwan International Airport, designed by Bentley Systems and being built on a 28-square-kilometer artificial island.
Environmental impact
Artificial islands negatively impact the marine environment. The large quantities of sand required to build these islands are acquired through dredging, which is harmful to coral reefs and disrupts marine life. The increased amount of sand, sediment, and fine particles creates turbid conditions, blocking necessary UV rays from reaching coral reefs, creating coral turbidity (where more organic material is taken in by coral) and increasing bacterial activity (more harmful bacteria are introduced into coral).
The construction of artificial islands also decreases the subaqueous area in surrounding waters, leading to habitat destruction or degradation for many species.
Political status
Under the United Nations Convention on the Law of the Sea treaty (UNCLOS), artificial islands are not considered harbor works (Article 11) and are under the jurisdiction of the nearest coastal state if within its exclusive economic zone (Article 56). Artificial islands are also not considered islands for purposes of having their own territorial waters or exclusive economic zones, and only the coastal state may authorize their construction (Article 60); however, on the high seas beyond national jurisdiction, any "state" may construct artificial islands (Article 87).
The unrecognised micronation known as the Principality of Sealand (often shortened to simply "Sealand") is entirely on a single artificial island.
Greyzone warfare strategies
Over time, after World War II, several countries have been reported to have built artificial islands for strategic and military purposes. For instance, the Philippines and China have been reported to have constructed artificial islands in the South China Sea, primarily to assert territorial claims over the disputed waters. Similarly, Russia has allegedly done so in the Arctic, both for strategic and military purposes. These reports are subject to ongoing political and diplomatic debates.
China
The island-building activities of China have been the subject of close examination by experts, who suggest that they are driven by strategic objectives. The issue at the heart of the matter revolves around China's claim that its historical entitlement justifies its actions in the area. This is opposed by the legal argument supported by the United Nations Convention on the Law of the Sea (UNCLOS). It is noteworthy that UNCLOS serves as the primary legal framework that governs the use and control of maritime zones. This convention establishes regulations on how coastal states can exercise their sovereignty over territorial waters, contiguous zones, exclusive economic zones (EEZs), and the continental shelf.
China's claim to the South China Sea dates back to the 1940s. At that time, China recovered islands in the name of the Cairo Declaration and the Potsdam Proclamation, and there was no reaction from Vietnam or any other state against it. In 1947, China drafted the eleven-dash line (also referred to as the nine-dash line) to outline the geographical scope of its authority over the South China Sea. China began building islands in the 1980s, initially creating a series of minor military garrisons. However, the reason why China faces criticism is because some of the reclaimed islands fall within the EEZs of other countries, which raises concerns about China's compliance with UNCLOS. Vietnam has also made a historical claim, pointing to its rule over the islands in the 17th century. The Philippines argues for its rights based on geographical proximity. Meanwhile, Malaysia and Brunei claim parts of the sea using EEZ as the basis of their claims. UNCLOS Article 60 stipulates that naturally formed islands can generate EEZs, while artificial islands cannot. Therefore, China's construction of artificial islands raises questions about whether they can legitimately claim an EEZ around those islands. UNCLOS also enshrines the freedom of navigation and overflight in the EEZ of coastal states, which implies that all countries have the right to sail, fly, and conduct military exercises in those waters. Nevertheless, China has repeatedly challenged this principle by constructing artificial islands, imposing restrictions on navigation, and militarising the area.
Legal status of artificial islands by China
The legal implications surrounding China's island construction efforts present complex challenges. A key issue revolves around determining the classification of land masses as either rocks or seabed, which holds significant importance in these disputed cases. Maritime law establishes a clear distinction between land masses eligible for expansion into new island groups and those that do not qualify. According to this legal framework, low-tide elevations are considered part of the seabed and do not generate a territorial sea, EEZ, or continental shelf. However, they serve as a reference point for measuring the entitlements of nearby rocks or islands. Rocks, unlike islands, lack the capacity to sustain human habitation or support economic activity. While they generate a territorial sea, they do not establish an EEZ or continental shelf. UNCLOS stipulates that both rocks and islands must be naturally formed and remain above water at high tide.
The Spratly Islands have been a subject of contention among multiple countries, including Taiwan, Vietnam, the Philippines, Malaysia, Brunei, and China. China's claim to the islands, despite entering the dispute relatively late, has been supported by arguments asserting historical presence and construction activities on the islands as a basis for their claim. In terms of international law, land reclamation itself is not explicitly prohibited. There is no specific rule within international law that prohibits any country from engaging in land reclamation at sea. The legality of such activities primarily depends on their location in relation to adjacent land territories. Within the 12 nautical mile territorial sea, a country holds the right to reclaim land as it falls under its sovereign authority. However, beyond this 12 nautical mile limit, the country must consider whether its actions conform to the rights and jurisdictions recognised by UNCLOS. Reclamation activities conducted between 12 and 200 nautical miles are considered part of the process of establishing and utilising artificial islands, installations, and structures, governed by specific provisions within UNCLOS. It is worth mentioning that artificial islands may include stationary oil rigs. Coastal states are permitted to undertake reclamation within designated areas as long as they fulfil their obligation to inform other countries and respect their rights, as outlined by UNCLOS rules. However, any artificial islands created through this process are restricted to maintaining a 500-meter safety zone around them and must not obstruct international navigation.
Hybrid warfare and China's greyzone tactics
Hybrid warfare is understood as a form of conflict that combines conventional and irregular tactics. Hybrid warfare may also be defined as a multifaceted strategy aimed at destabilising a functioning state and dividing its society. This comprehensive definition portrays hybrid strategy as a versatile and complex approach utilising a combination of conventional and unconventional means, overt and covert activities, involving military, paramilitary, irregular, and civilian actors across different domains of power. The ultimate objective of hybrid warfare is to exploit vulnerabilities and weaknesses in order to achieve geopolitical and strategic goals.
Some argue that China's greyzone tactics mainly aim to improve its geopolitical position in a peaceful manner. China's approach differs significantly from the greyzone tactics used by Russia in Crimea in 2014. One supporting argument is that the majority of the activities occur in uninhabited areas at sea, which contradicts a definition of hybrid warfare that suggests it is targeted at populations. Additionally, China's objective is not to destabilise other states, but rather to enhance its national security by gaining control over regional waters. Furthermore, China is not aiming to seize control from another power, but rather seeks to establish a dominant security and political position in the region. It is worth noting that China employs unarmed or lightly armed vessels deliberately, as they are unlikely to resort to deadly force.
However, others argue that China's greyzone tactics can be classified as hybrid warfare. Some viewpoints contend that China's establishment of military bases on artificial islands serves as a means to assert their territorial claims through the use of force. This approach is referred to as the Cabbage strategy, wherein a contested area is encircled by multiple layers of security to deny access to rival nations, ultimately solidifying their claim.
While there is no consensus on China's motives behind the creation of artificial islands, it is widely acknowledged that China aims to bolster its power and influence in the region. These actions contribute to the escalating tensions in the South China Sea.
| Physical sciences | Artificial landforms | null |
330206 | https://en.wikipedia.org/wiki/Differentiable%20function | Differentiable function | In mathematics, a differentiable function of one real variable is a function whose derivative exists at each point in its domain. In other words, the graph of a differentiable function has a non-vertical tangent line at each interior point in its domain. A differentiable function is smooth (the function is locally well approximated as a linear function at each interior point) and does not contain any break, angle, or cusp.
If $x_0$ is an interior point in the domain of a function $f$, then $f$ is said to be differentiable at $x_0$ if the derivative $f'(x_0)$ exists. In other words, the graph of $f$ has a non-vertical tangent line at the point $(x_0, f(x_0))$. $f$ is said to be differentiable on $U$ if it is differentiable at every point of $U$. $f$ is said to be continuously differentiable if its derivative is also a continuous function over the domain of the function $f$. Generally speaking, $f$ is said to be of class $C^k$ if its first $k$ derivatives $f'(x), f''(x), \ldots, f^{(k)}(x)$ exist and are continuous over the domain of the function $f$.
For a multivariable function, differentiability is something more complex than the mere existence of its partial derivatives, as discussed below.
Differentiability of real functions of one variable
A function $f$, defined on an open set $U$, is said to be differentiable at $a \in U$ if the derivative
$$f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}$$
exists. This implies that the function is continuous at $a$.
This function is said to be differentiable on $U$ if it is differentiable at every point of $U$. In this case, the derivative of $f$ is thus a function from $U$ into $\mathbb{R}$.
A continuous function is not necessarily differentiable, but a differentiable function is necessarily continuous (at every point where it is differentiable) as is shown below (in the section Differentiability and continuity). A function is said to be continuously differentiable if its derivative is also a continuous function; there exist functions that are differentiable but not continuously differentiable (an example is given in the section Differentiability classes).
Differentiability and continuity
If $f$ is differentiable at a point $x_0$, then $f$ must also be continuous at $x_0$. In particular, any differentiable function must be continuous at every point in its domain. The converse does not hold: a continuous function need not be differentiable. For example, a function with a bend, cusp, or vertical tangent may be continuous, but fails to be differentiable at the location of the anomaly.
Most functions that occur in practice have derivatives at all points or at almost every point. However, a result of Stefan Banach states that the set of functions that have a derivative at some point is a meagre set in the space of all continuous functions. Informally, this means that differentiable functions are very atypical among continuous functions. The first known example of a function that is continuous everywhere but differentiable nowhere is the Weierstrass function.
Differentiability classes
A function $f$ is said to be of class $C^1$ if the derivative $f'(x)$ exists and is itself a continuous function. Although the derivative of a differentiable function never has a jump discontinuity, it is possible for the derivative to have an essential discontinuity. For example, the function
$$f(x) = \begin{cases} x^2 \sin(1/x) & \text{if } x \neq 0 \\ 0 & \text{if } x = 0 \end{cases}$$
is differentiable at 0, since
$$f'(0) = \lim_{\varepsilon \to 0} \frac{\varepsilon^2 \sin(1/\varepsilon) - 0}{\varepsilon} = 0$$
exists. However, for $x \neq 0$, differentiation rules imply
$$f'(x) = 2x \sin(1/x) - \cos(1/x),$$
which has no limit as $x \to 0$. Thus, this example shows the existence of a function that is differentiable but not continuously differentiable (i.e., the derivative is not a continuous function). Nevertheless, Darboux's theorem implies that the derivative of any function satisfies the conclusion of the intermediate value theorem.
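A minimal numerical sketch of this example (using the function $x^2 \sin(1/x)$, with value 0 at 0, reconstructed above): the difference quotient at 0 tends to 0, while the derivative keeps oscillating near 0.

```python
import math

def f(x):
    # f(x) = x^2 * sin(1/x) for x != 0, and f(0) = 0
    return x**2 * math.sin(1.0 / x) if x != 0 else 0.0

# Difference quotient at 0: (f(h) - f(0)) / h = h * sin(1/h) -> 0, so f'(0) = 0.
for h in (1e-2, 1e-4, 1e-6):
    print(h, (f(h) - f(0)) / h)

def fprime(x):
    # For x != 0, f'(x) = 2x*sin(1/x) - cos(1/x)
    return 2 * x * math.sin(1.0 / x) - math.cos(1.0 / x)

# Near 0, f' keeps oscillating (roughly between -1 and 1), so it has no limit at 0.
for x in (1e-3, 2e-3, 3e-3):
    print(x, fprime(x))
```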
Similarly to how continuous functions are said to be of class $C^0$, continuously differentiable functions are sometimes said to be of class $C^1$. A function is of class $C^2$ if the first and second derivative of the function both exist and are continuous. More generally, a function is said to be of class $C^k$ if the first $k$ derivatives all exist and are continuous. If derivatives $f^{(n)}$ exist for all positive integers $n$, the function is smooth or, equivalently, of class $C^\infty$.
Differentiability in higher dimensions
A function of several real variables $f: \mathbb{R}^m \to \mathbb{R}^n$ is said to be differentiable at a point $x_0$ if there exists a linear map $J: \mathbb{R}^m \to \mathbb{R}^n$ such that
$$\lim_{h \to 0} \frac{\|f(x_0 + h) - f(x_0) - J(h)\|_{\mathbb{R}^n}}{\|h\|_{\mathbb{R}^m}} = 0.$$
If a function is differentiable at $x_0$, then all of the partial derivatives exist at $x_0$, and the linear map $J$ is given by the Jacobian matrix, an n × m matrix in this case. A similar formulation of the higher-dimensional derivative is provided by the fundamental increment lemma found in single-variable calculus.
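As an informal illustration of this definition, the sketch below compares a finite-difference approximation of the Jacobian with the analytic Jacobian for a simple map; the map itself is an arbitrary choice made for this example, not one taken from the article.

```python
import numpy as np

# Illustrative map f: R^2 -> R^2, chosen only for this sketch.
def f(v):
    x, y = v
    return np.array([x * y, np.sin(x) + y**2])

def jacobian_fd(func, v, eps=1e-6):
    """Central finite-difference approximation of the Jacobian of func at v."""
    v = np.asarray(v, dtype=float)
    out_dim = len(func(v))
    J = np.zeros((out_dim, len(v)))
    for j in range(len(v)):
        step = np.zeros_like(v)
        step[j] = eps
        J[:, j] = (func(v + step) - func(v - step)) / (2 * eps)
    return J

v0 = np.array([1.0, 2.0])
analytic = np.array([[v0[1], v0[0]],            # d(xy)/dx = y,      d(xy)/dy = x
                     [np.cos(v0[0]), 2 * v0[1]]])  # d(sin x + y^2)/dx, /dy
print(jacobian_fd(f, v0))
print(analytic)  # the two matrices agree to roughly 1e-6
```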
If all the partial derivatives of a function exist in a neighborhood of a point $x_0$ and are continuous at the point $x_0$, then the function is differentiable at that point $x_0$.
However, the existence of the partial derivatives (or even of all the directional derivatives) does not guarantee that a function is differentiable at a point. For example, the function $f: \mathbb{R}^2 \to \mathbb{R}$ defined by
$$f(x, y) = \begin{cases} x & \text{if } y \neq x^2 \\ 0 & \text{if } y = x^2 \end{cases}$$
is not differentiable at $(0, 0)$, but all of the partial derivatives and directional derivatives exist at this point. For a continuous example, the function
$$f(x, y) = \begin{cases} \dfrac{y^3}{x^2 + y^2} & \text{if } (x, y) \neq (0, 0) \\ 0 & \text{if } (x, y) = (0, 0) \end{cases}$$
is not differentiable at $(0, 0)$, but again all of the partial derivatives and directional derivatives exist.
Differentiability in complex analysis
In complex analysis, complex-differentiability is defined using the same definition as single-variable real functions. This is allowed by the possibility of dividing complex numbers. So, a function $f: \mathbb{C} \to \mathbb{C}$ is said to be differentiable at $z_0$ when
$$f'(z_0) = \lim_{h \to 0} \frac{f(z_0 + h) - f(z_0)}{h}$$
exists.
Although this definition looks similar to the differentiability of single-variable real functions, it is however a more restrictive condition. A function $f: \mathbb{C} \to \mathbb{C}$ that is complex-differentiable at a point $z_0$ is automatically differentiable at that point, when viewed as a function $f: \mathbb{R}^2 \to \mathbb{R}^2$. This is because the complex-differentiability implies that
$$\lim_{h \to 0} \frac{|f(z_0 + h) - f(z_0) - f'(z_0)h|}{|h|} = 0.$$
However, a function $f: \mathbb{C} \to \mathbb{C}$ can be differentiable as a multi-variable function, while not being complex-differentiable. For example, $f(z) = \bar{z}$ is differentiable at every point, viewed as the 2-variable real function $f(x, y) = (x, -y)$, but it is not complex-differentiable at any point because the limit does not exist (the limit depends on the angle of approach).
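A small numerical sketch of the conjugate example above: the difference quotient $(f(z_0 + h) - f(z_0))/h$ equals $\bar{h}/h$, so its value depends on the direction of $h$ and no complex derivative exists.

```python
import cmath

# For f(z) = conj(z), the difference quotient (f(z0 + h) - f(z0)) / h
# equals conj(h) / h = exp(-2i*theta), which varies with the angle theta of h.
f = lambda z: z.conjugate()
z0 = 1 + 1j

for angle_deg in (0, 45, 90, 180):
    h = 1e-8 * cmath.exp(1j * cmath.pi * angle_deg / 180)
    print(angle_deg, (f(z0 + h) - f(z0)) / h)
# Prints approximately 1, -1j, -1, 1: different values for different directions,
# so the limit as h -> 0 does not exist.
```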
Any function that is complex-differentiable in a neighborhood of a point is called holomorphic at that point. Such a function is necessarily infinitely differentiable, and in fact analytic.
Differentiable functions on manifolds
If M is a differentiable manifold, a real or complex-valued function f on M is said to be differentiable at a point p if it is differentiable with respect to some (or any) coordinate chart defined around p. If M and N are differentiable manifolds, a function f: M → N is said to be differentiable at a point p if it is differentiable with respect to some (or any) coordinate charts defined around p and f(p).
| Mathematics | Differential calculus | null |
330310 | https://en.wikipedia.org/wiki/Rank%E2%80%93nullity%20theorem | Rank–nullity theorem | The rank–nullity theorem is a theorem in linear algebra, which asserts:
the number of columns of a matrix $M$ is the sum of the rank of $M$ and the nullity of $M$; and
the dimension of the domain of a linear transformation $f$ is the sum of the rank of $f$ (the dimension of the image of $f$) and the nullity of $f$ (the dimension of the kernel of $f$).
It follows that for linear transformations of vector spaces of equal finite dimension, either injectivity or surjectivity implies bijectivity.
Stating the theorem
Linear transformations
Let $T: V \to W$ be a linear transformation between two vector spaces where $T$'s domain $V$ is finite dimensional. Then
$$\operatorname{rank}(T) + \operatorname{nullity}(T) = \dim V,$$
where $\operatorname{rank}(T)$ is the rank of $T$ (the dimension of its image) and $\operatorname{nullity}(T)$ is the nullity of $T$ (the dimension of its kernel). In other words,
$$\dim(\operatorname{Im} T) + \dim(\operatorname{Ker} T) = \dim(\operatorname{Domain}(T)).$$
This theorem can be refined via the splitting lemma to be a statement about an isomorphism of spaces, not just dimensions. Explicitly, since $T$ induces an isomorphism from $V / \operatorname{Ker}(T)$ to $\operatorname{Im}(T)$, the existence of a basis for $V$ that extends any given basis of $\operatorname{Ker}(T)$ implies, via the splitting lemma, that $\operatorname{Im}(T) \oplus \operatorname{Ker}(T) \cong V$. Taking dimensions, the rank–nullity theorem follows.
Matrices
Linear maps can be represented with matrices. More precisely, an $m \times n$ matrix $M$ represents a linear map $f: F^n \to F^m$, where $F$ is the underlying field. So, the dimension of the domain of $f$ is $n$, the number of columns of $M$, and the rank–nullity theorem for an $m \times n$ matrix $M$ is
$$\operatorname{rank}(M) + \operatorname{nullity}(M) = n.$$
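For a concrete check of this identity, here is a short numerical sketch using NumPy and SciPy; the random integer matrix is used only for illustration, since the identity holds for any matrix.

```python
import numpy as np
from scipy.linalg import null_space

# Verify rank(M) + nullity(M) = n (number of columns) on a sample matrix.
rng = np.random.default_rng(0)
M = rng.integers(-3, 4, size=(4, 6)).astype(float)

rank = np.linalg.matrix_rank(M)
nullity = null_space(M).shape[1]  # number of basis vectors of the kernel
n = M.shape[1]

print(rank, nullity, n)
assert rank + nullity == n
```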
Proofs
Here we provide two proofs. The first operates in the general case, using linear maps. The second proof looks at the homogeneous system $\mathbf{A}\mathbf{x} = \mathbf{0}$, where $\mathbf{A}$ is an $m \times n$ matrix with rank $r$, and shows explicitly that there exists a set of $n - r$ linearly independent solutions that span the null space of $\mathbf{A}$.
While the theorem requires that the domain of the linear map be finite-dimensional, there is no such assumption on the codomain. This means that there are linear maps not given by matrices for which the theorem applies. Despite this, the first proof is not actually more general than the second: since the image of the linear map is finite-dimensional, we can represent the map from its domain to its image by a matrix, prove the theorem for that matrix, then compose with the inclusion of the image into the full codomain.
First proof
Let $V, W$ be vector spaces over some field $F$ and $T$ defined as in the statement of the theorem with $\dim V = n$.
As $\operatorname{Ker} T \subset V$ is a subspace, there exists a basis for it. Suppose $\dim \operatorname{Ker} T = k$ and let
$$\mathcal{K} := \{v_1, \ldots, v_k\} \subset \operatorname{Ker}(T)$$
be such a basis.
We may now, by the Steinitz exchange lemma, extend $\mathcal{K}$ with $n - k$ linearly independent vectors $w_1, \ldots, w_{n-k}$ to form a full basis of $V$.
Let
$$\mathcal{S} := \{w_1, \ldots, w_{n-k}\} \subset V \setminus \operatorname{Ker}(T)$$
such that
$$\mathcal{B} := \mathcal{K} \cup \mathcal{S} = \{v_1, \ldots, v_k, w_1, \ldots, w_{n-k}\} \subset V$$
is a basis for $V$.
From this, we know that
$$\operatorname{Im} T = \operatorname{Span} T(\mathcal{B}) = \operatorname{Span}\{T(v_1), \ldots, T(v_k), T(w_1), \ldots, T(w_{n-k})\} = \operatorname{Span}\{T(w_1), \ldots, T(w_{n-k})\} = \operatorname{Span} T(\mathcal{S}).$$
We now claim that $T(\mathcal{S})$ is a basis for $\operatorname{Im} T$.
The above equality already states that $T(\mathcal{S})$ is a generating set for $\operatorname{Im} T$; it remains to be shown that it is also linearly independent to conclude that it is a basis.
Suppose $T(\mathcal{S})$ is not linearly independent, and let
$$\sum_{j=1}^{n-k} \alpha_j T(w_j) = 0_W$$
for some $\alpha_j \in F$, not all zero.
Thus, owing to the linearity of $T$, it follows that
$$T\left(\sum_{j=1}^{n-k} \alpha_j w_j\right) = 0_W \implies \sum_{j=1}^{n-k} \alpha_j w_j \in \operatorname{Ker} T = \operatorname{Span} \mathcal{K} \subset V.$$
This is a contradiction to $\mathcal{B}$ being a basis, unless all $\alpha_j$ are equal to zero. This shows that $T(\mathcal{S})$ is linearly independent, and more specifically that it is a basis for $\operatorname{Im} T$.
To summarize, we have $\mathcal{K}$, a basis for $\operatorname{Ker} T$, and $T(\mathcal{S})$, a basis for $\operatorname{Im} T$.
Finally we may state that
$$\operatorname{Rank}(T) + \operatorname{Nullity}(T) = \dim \operatorname{Im} T + \dim \operatorname{Ker} T = |T(\mathcal{S})| + |\mathcal{K}| = (n - k) + k = n = \dim V.$$
This concludes our proof.
Second proof
Let $\mathbf{A}$ be an $m \times n$ matrix with $r$ linearly independent columns (i.e. $\operatorname{Rank}(\mathbf{A}) = r$). We will show that:
$$\operatorname{Rank}(\mathbf{A}) + \operatorname{Nullity}(\mathbf{A}) = n.$$
To do this, we will produce an $n \times (n - r)$ matrix $\mathbf{X}$ whose columns form a basis of the null space of $\mathbf{A}$.
Without loss of generality, assume that the first $r$ columns of $\mathbf{A}$ are linearly independent. So, we can write
$$\mathbf{A} = \begin{pmatrix} \mathbf{A}_1 & \mathbf{A}_2 \end{pmatrix},$$
where
$\mathbf{A}_1$ is an $m \times r$ matrix with $r$ linearly independent column vectors, and
$\mathbf{A}_2$ is an $m \times (n - r)$ matrix such that each of its $n - r$ columns is a linear combination of the columns of $\mathbf{A}_1$.
This means that $\mathbf{A}_2 = \mathbf{A}_1 \mathbf{B}$ for some $r \times (n - r)$ matrix $\mathbf{B}$ (see rank factorization) and, hence,
$$\mathbf{A} = \begin{pmatrix} \mathbf{A}_1 & \mathbf{A}_1 \mathbf{B} \end{pmatrix}.$$
Let
$$\mathbf{X} = \begin{pmatrix} -\mathbf{B} \\ \mathbf{I}_{n-r} \end{pmatrix},$$
where $\mathbf{I}_{n-r}$ is the $(n - r) \times (n - r)$ identity matrix. So, $\mathbf{X}$ is an $n \times (n - r)$ matrix such that
$$\mathbf{A}\mathbf{X} = \begin{pmatrix} \mathbf{A}_1 & \mathbf{A}_1 \mathbf{B} \end{pmatrix} \begin{pmatrix} -\mathbf{B} \\ \mathbf{I}_{n-r} \end{pmatrix} = -\mathbf{A}_1 \mathbf{B} + \mathbf{A}_1 \mathbf{B} = \mathbf{0}_{m \times (n-r)}.$$
Therefore, each of the $n - r$ columns of $\mathbf{X}$ are particular solutions of $\mathbf{A}\mathbf{x} = \mathbf{0}$.
Furthermore, the $n - r$ columns of $\mathbf{X}$ are linearly independent because $\mathbf{X}\mathbf{u} = \mathbf{0}$ will imply $\mathbf{u} = \mathbf{0}$ for $\mathbf{u} \in F^{n-r}$:
$$\mathbf{X}\mathbf{u} = \mathbf{0} \implies \begin{pmatrix} -\mathbf{B}\mathbf{u} \\ \mathbf{u} \end{pmatrix} = \mathbf{0} \implies \mathbf{u} = \mathbf{0}.$$
Therefore, the column vectors of $\mathbf{X}$ constitute a set of $n - r$ linearly independent solutions for $\mathbf{A}\mathbf{x} = \mathbf{0}$.
We next prove that any solution of $\mathbf{A}\mathbf{x} = \mathbf{0}$ must be a linear combination of the columns of $\mathbf{X}$.
For this, let
$$\mathbf{u} = \begin{pmatrix} \mathbf{u}_1 \\ \mathbf{u}_2 \end{pmatrix}$$
be any vector such that $\mathbf{A}\mathbf{u} = \mathbf{0}$. Since the columns of $\mathbf{A}_1$ are linearly independent, $\mathbf{A}_1\mathbf{x} = \mathbf{0}$ implies $\mathbf{x} = \mathbf{0}$.
Therefore,
$$\mathbf{A}\mathbf{u} = \mathbf{0} \implies \mathbf{A}_1\mathbf{u}_1 + \mathbf{A}_1\mathbf{B}\mathbf{u}_2 = \mathbf{0} \implies \mathbf{A}_1(\mathbf{u}_1 + \mathbf{B}\mathbf{u}_2) = \mathbf{0} \implies \mathbf{u}_1 + \mathbf{B}\mathbf{u}_2 = \mathbf{0} \implies \mathbf{u}_1 = -\mathbf{B}\mathbf{u}_2,$$
so that
$$\mathbf{u} = \begin{pmatrix} -\mathbf{B}\mathbf{u}_2 \\ \mathbf{u}_2 \end{pmatrix} = \mathbf{X}\mathbf{u}_2.$$
This proves that any vector $\mathbf{u}$ that is a solution of $\mathbf{A}\mathbf{x} = \mathbf{0}$ must be a linear combination of the $n - r$ special solutions given by the columns of $\mathbf{X}$. And we have already seen that the columns of $\mathbf{X}$ are linearly independent. Hence, the columns of $\mathbf{X}$ constitute a basis for the null space of $\mathbf{A}$. Therefore, the nullity of $\mathbf{A}$ is $n - r$. Since $r$ equals the rank of $\mathbf{A}$, it follows that $\operatorname{Rank}(\mathbf{A}) + \operatorname{Nullity}(\mathbf{A}) = n$. This concludes our proof.
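The construction in this proof can be illustrated numerically. In the sketch below, the blocks A1 and B are arbitrary small matrices chosen for the example, and the block matrix X = [-B; I] is checked to solve AX = 0 with linearly independent columns.

```python
import numpy as np

# Illustrative instance of the construction: A = [A1, A1 @ B] with A1 of full column rank r.
A1 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 2.0]])          # m x r with r = 2 independent columns
B = np.array([[1.0, -1.0],
              [2.0,  0.5]])          # r x (n - r)
A = np.hstack([A1, A1 @ B])          # m x n with n = 4 and rank 2

# X stacks -B on top of the (n - r) x (n - r) identity.
n_minus_r = B.shape[1]
X = np.vstack([-B, np.eye(n_minus_r)])

print(np.allclose(A @ X, 0))                   # True: each column of X solves Ax = 0
print(np.linalg.matrix_rank(X) == n_minus_r)   # True: the columns of X are independent
print(np.linalg.matrix_rank(A), X.shape[1])    # rank r = 2 and nullity n - r = 2
```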
A third fundamental subspace
When $T: V \to W$ is a linear transformation between two finite-dimensional vector spaces, with $n = \dim(V)$ and $m = \dim(W)$ (so $T$ can be represented by an $m \times n$ matrix $M$), the rank–nullity theorem asserts that if $T$ has rank $r$, then $n - r$ is the dimension of the null space of $M$, which represents the kernel of $T$. In some texts, a third fundamental subspace associated to $T$ is considered alongside its image and kernel: the cokernel of $T$ is the quotient space $W / \operatorname{Im}(T)$, and its dimension is $m - r$. This dimension formula (which might also be rendered $\dim \operatorname{Im}(T) + \dim \operatorname{Coker}(T) = \dim(W)$) together with the rank–nullity theorem is sometimes called the fundamental theorem of linear algebra.
Reformulations and generalizations
This theorem is a statement of the first isomorphism theorem of algebra for the case of vector spaces; it generalizes to the splitting lemma.
In more modern language, the theorem can also be phrased as saying that each short exact sequence of vector spaces splits. Explicitly, given that
$$0 \rightarrow U \rightarrow V \xrightarrow{T} R \rightarrow 0$$
is a short exact sequence of vector spaces, then $U \oplus R \cong V$, hence
$$\dim(U) + \dim(R) = \dim(V).$$
Here $R$ plays the role of $\operatorname{Im} T$ and $U$ is $\operatorname{Ker} T$, i.e.
$$0 \rightarrow \operatorname{Ker} T \hookrightarrow V \xrightarrow{T} \operatorname{Im} T \rightarrow 0.$$
In the finite-dimensional case, this formulation is susceptible to a generalization: if
$$0 \rightarrow V_1 \rightarrow V_2 \rightarrow \cdots \rightarrow V_r \rightarrow 0$$
is an exact sequence of finite-dimensional vector spaces, then
$$\sum_{i=1}^{r} (-1)^i \dim(V_i) = 0.$$
The rank–nullity theorem for finite-dimensional vector spaces may also be formulated in terms of the index of a linear map. The index of a linear map $T$, where the domain $V$ and codomain $W$ are finite-dimensional, is defined by
$$\operatorname{index}(T) = \dim \operatorname{Ker}(T) - \dim \operatorname{Coker}(T).$$
Intuitively, $\dim \operatorname{Ker}(T)$ is the number of independent solutions $v$ of the equation $Tv = 0$, and $\dim \operatorname{Coker}(T)$ is the number of independent restrictions that have to be put on $w$ to make $Tv = w$ solvable. The rank–nullity theorem for finite-dimensional vector spaces is equivalent to the statement
$$\operatorname{index}(T) = \dim(V) - \dim(W).$$
We see that we can easily read off the index of the linear map $T$ from the involved spaces, without any need to analyze $T$ in detail. This effect also occurs in a much deeper result: the Atiyah–Singer index theorem states that the index of certain differential operators can be read off the geometry of the involved spaces.
| Mathematics | Linear algebra | null |
330439 | https://en.wikipedia.org/wiki/Scorpaenidae | Scorpaenidae | The Scorpaenidae (also known as scorpionfish) are a family of mostly marine fish that includes many of the world's most venomous species. As their name suggests, scorpionfish have a type of "sting" in the form of sharp spines coated with venomous mucus. The family is a large one, with hundreds of members. They are widespread in tropical and temperate seas but mostly found in the Indo-Pacific. They should not be confused with the cabezones, of the genus Scorpaenichthys, which belong to a separate, though related, family, Cottidae.
Taxonomy
Scorpaenidae was described as a family in 1826 by the French naturalist Antoine Risso. The family is included in the suborder Scorpaenoidei of the order Scorpaeniformes in the 5th Edition of Fishes of the World but other authorities place it in the Perciformes either in the suborder Scorpaenoidei or the superfamily Scorpaenoidea. The subfamilies of this family are treated as valid families by some authorities.
Subfamilies and tribes
Scorpaenidae is divided into the following subfamilies and tribes, containing a total of 65 genera with no fewer than 454 species:
Subfamily Sebastinae Kaup, 1873 (Rockfishes)
Tribe Sebastini Kaup, 1873
Tribe Sebastolobini Matsubara, 1943
Subfamily Setarchinae Matsubara, 1943
Subfamily Neosebastinae Matsubara, 1943
Subfamily Scorpaeninae Risso, 1826 (Scorpionfishes and lionfishes)
Tribe Scorpaenini Risso, 1826
Tribe Pteroini Kaup, 1873
Subfamily Caracanthinae Gill, 1885 (Orbicular velvetfishes or coral crouchers)
Subfamily Apistinae Gill, 1859
Subfamily Tetraroginae J.L.B. Smith, 1949 (Sailback scorpionfishes or wasp fishes)
Subfamily Synanceiinae Swainson, 1839 (Stonefishes)
Tribe Minoini Jordan & Starks, 1904
Tribe Choridactylini Kaup, 1859
Tribe Synanceiini Swainson 1839
Subfamily Plectrogeniinae Fowler, 1938
Characteristics
Scorpaenidae have a compressed body with the head typically having ridges and spines. There are 1–2 spines on the operculum, with 2 normally being divergent, and 3–5 on the preoperculum, normally 5. The suborbital stay is normally securely attached to the preoperculum, although in some species it may not be attached. If there are scales they are typically ctenoid. They normally have a single dorsal fin which is frequently incised. The dorsal fin contains between 11 and 17 spines and 8 and 17 soft rays while the anal fin usually has between 1 and 3 spines, normally 3, and 3 to 9 soft rays, typically 5, There is a single spine in the pelvic fin and between 2 and 5 soft rays, again typically 5, while the large pectoral fin contains 11–25 soft rays and sometimes has a few of the lower rays free of its membrane. The gill membranes are not attached to the isthmus. In some species, there is no swim bladder. There are venom glands in the spines of the dorsal, anal, and pelvic fins in some species. Most species utilise internal fertilisation, and some species are ovoviviparous while others lay their eggs in a gelatinous mass, with Scorpaena guttata being reported to create a gelatinous "egg balloon" as large as across. The largest species is the shortraker rockfish (Sebastes borealis) which attains a maximum total length of while many species have maximum total lengths of .
Distribution and habitat
Scorpaenidae species are mainly found in the Pacific and Indian Oceans, but some species are also found in the Atlantic Ocean. Some species such as the lionfishes in the genus Pterois are invasive non native species in areas such as the Caribbean and the eastern Mediterranean Sea. They are found in marine and brackish habitats. They typically inhabit reefs, but can also be found in estuaries, bays, and lagoons.
| Biology and health sciences | Acanthomorpha | Animals |
330981 | https://en.wikipedia.org/wiki/Tau%20%28particle%29 | Tau (particle) | The tau (τ−), also called the tau lepton, tau particle or tauon, is an elementary particle similar to the electron, with negative electric charge and a spin of 1/2. Like the electron, the muon, and the three neutrinos, the tau is a lepton, and like all elementary particles with half-integer spin, the tau has a corresponding antiparticle of opposite charge but equal mass and spin. In the tau's case, this is the "antitau" (also called the positive tau). Tau particles are denoted by the symbol τ− and the antitaus by τ+.
Tau leptons have a lifetime of about 2.9×10⁻¹³ s and a mass of 1776.9 MeV/c2 (compared to 105.7 MeV/c2 for muons and 0.511 MeV/c2 for electrons). Since their interactions are very similar to those of the electron, a tau can be thought of as a much heavier version of the electron. Because of their greater mass, tau particles do not emit as much bremsstrahlung (braking radiation) as electrons; consequently they are potentially much more highly penetrating than electrons.
Because of its short lifetime, the range of the tau is mainly set by its decay length, which is too small for bremsstrahlung to be noticeable. Its penetrating power appears only at ultra-high velocity and energy (above petaelectronvolt energies), when time dilation extends its otherwise very short path-length.
As with the case of the other charged leptons, the tau has an associated tau neutrino, denoted by ν_τ.
History
The search for tau started in 1960 at CERN by the Bologna-CERN-Frascati (BCF) group led by Antonino Zichichi. Zichichi came up with the idea of a new sequential heavy lepton, now called tau, and invented a method of search. He performed the experiment at the ADONE facility in 1969 once its accelerator became operational; however, the accelerator he used did not have enough energy to search for the tau particle.
The tau was independently anticipated in a 1971 article by Yung-su Tsai, which provided the theory for its discovery. The tau was then detected in a series of experiments between 1974 and 1977 by Martin Lewis Perl with his and Tsai's colleagues at the Stanford Linear Accelerator Center (SLAC) and Lawrence Berkeley National Laboratory (LBL) group. Their equipment consisted of SLAC's then-new electron–positron colliding ring, called SPEAR, and the LBL magnetic detector. They could detect and distinguish between leptons, hadrons, and photons. They did not detect the tau directly, but rather discovered anomalous events of the form
e+ + e− → e± + μ∓ + at least two undetected particles.
The need for at least two undetected particles was shown by the inability to conserve energy and momentum with only one. However, no other muons, electrons, photons, or hadrons were detected. It was proposed that this event was the production and subsequent decay of a new particle pair:
e+ + e− → τ+ + τ− → e± + μ∓ + 4ν
This was difficult to verify, because the energy to produce the pair is similar to the threshold for D meson production. The mass and spin of the tau were subsequently established by work done at DESY-Hamburg with the Double Arm Spectrometer (DASP), and at SLAC-Stanford with the SPEAR Direct Electron Counter (DELCO).
The symbol τ was derived from the Greek τρίτον (triton, meaning "third" in English), since it was the third charged lepton discovered.
Martin Lewis Perl shared the 1995 Nobel Prize in Physics with Frederick Reines. The latter was awarded his share of the prize for the experimental discovery of the neutrino.
Tau decay
The tau is the only lepton with enough mass to decay into hadrons. Like the leptonic decay modes of the tau, the hadronic decay is through the weak interaction.
The branching fractions of the dominant hadronic tau decays are:
25.49% for decay into a charged pion, a neutral pion, and a tau neutrino;
10.82% for decay into a charged pion and a tau neutrino;
9.26% for decay into a charged pion, two neutral pions, and a tau neutrino;
8.99% for decay into three charged pions (of which two have the same electrical charge) and a tau neutrino;
2.74% for decay into three charged pions (of which two have the same electrical charge), a neutral pion, and a tau neutrino;
1.04% for decay into three neutral pions, a charged pion, and a tau neutrino.
In total, the tau lepton will decay hadronically approximately 64.79% of the time.
The branching fractions of the common purely leptonic tau decays are:
17.82% for decay into a tau neutrino, electron and electron antineutrino;
17.39% for decay into a tau neutrino, muon, and muon antineutrino.
The similarity of values of the two branching fractions is a consequence of lepton universality.
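A quick arithmetic check of the figures quoted above (values copied from the lists; the remark about rarer channels simply follows from the difference between the totals):

```python
# Leptonic branching fractions quoted above (percent).
leptonic = [17.82, 17.39]
hadronic_total = 64.79  # quoted total hadronic fraction (percent)

print(sum(leptonic))                    # 35.21
print(sum(leptonic) + hadronic_total)   # 100.00: leptonic + hadronic modes cover ~all decays

# The six dominant hadronic modes listed above sum to ~58.3%, so the remaining
# ~6.5% of hadronic decays go to rarer channels not listed.
dominant_hadronic = [25.49, 10.82, 9.26, 8.99, 2.74, 1.04]
print(sum(dominant_hadronic))           # 58.34
```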
Exotic atoms
The tau lepton is predicted to form exotic atoms like other charged subatomic particles. One such atom consists of an antitau and an electron (τ+e−) and is called tauonium.
Another one is an onium atom called ditauonium or true tauonium (τ+τ−), which is a challenge to detect due to the difficulty of forming it from two (opposite-sign) short-lived tau leptons.
Its experimental detection would be an interesting test of quantum electrodynamics.
| Physical sciences | Fermions | null |
330994 | https://en.wikipedia.org/wiki/Squeeze%20theorem | Squeeze theorem | In calculus, the squeeze theorem (also known as the sandwich theorem, among other names) is a theorem regarding the limit of a function that is bounded between two other functions.
The squeeze theorem is used in calculus and mathematical analysis, typically to confirm the limit of a function via comparison with two other functions whose limits are known. It was first used geometrically by the mathematicians Archimedes and Eudoxus in an effort to compute π, and was formulated in modern terms by Carl Friedrich Gauss.
Statement
The squeeze theorem is formally stated as follows.

Let $I$ be an interval containing the point $a$. Let $g$, $f$, and $h$ be functions defined on $I$, except possibly at $a$ itself. Suppose that for every $x$ in $I$ not equal to $a$, we have
$$g(x) \leq f(x) \leq h(x)$$
and also suppose that
$$\lim_{x \to a} g(x) = \lim_{x \to a} h(x) = L.$$
Then $\lim_{x \to a} f(x) = L$.

The functions $g$ and $h$ are said to be lower and upper bounds (respectively) of $f$.
Here, $a$ is not required to lie in the interior of $I$. Indeed, if $a$ is an endpoint of $I$, then the above limits are left- or right-hand limits.
A similar statement holds for infinite intervals: for example, if $I = (0, \infty)$, then the conclusion holds, taking the limits as $x \to \infty$.
This theorem is also valid for sequences. Let $(a_n)$ and $(c_n)$ be two sequences converging to $\ell$, and $(b_n)$ a sequence. If for all $n$ beyond some index we have $a_n \leq b_n \leq c_n$, then $(b_n)$ also converges to $\ell$.
Proof
According to the above hypotheses we have, taking the limit inferior and superior:
$$L = \lim_{x \to a} g(x) \leq \liminf_{x \to a} f(x) \leq \limsup_{x \to a} f(x) \leq \lim_{x \to a} h(x) = L,$$
so all the inequalities are indeed equalities, and the thesis immediately follows.
A direct proof, using the $(\varepsilon, \delta)$-definition of limit, would be to prove that for all real $\varepsilon > 0$ there exists a real $\delta > 0$ such that for all $x$ with $|x - a| < \delta$ we have $|f(x) - L| < \varepsilon$. Symbolically,
$$\forall \varepsilon > 0, \exists \delta > 0 : \forall x, \ (|x - a| < \delta \implies |f(x) - L| < \varepsilon).$$
As
$$\lim_{x \to a} g(x) = L$$
means that
$$\forall \varepsilon > 0, \exists\, \delta_1 > 0 : \forall x\ (|x - a| < \delta_1 \implies |g(x) - L| < \varepsilon), \qquad (1)$$
and
$$\lim_{x \to a} h(x) = L$$
means that
$$\forall \varepsilon > 0, \exists\, \delta_2 > 0 : \forall x\ (|x - a| < \delta_2 \implies |h(x) - L| < \varepsilon), \qquad (2)$$
then we have
$$g(x) \leq f(x) \leq h(x) \implies g(x) - L \leq f(x) - L \leq h(x) - L.$$
We can choose $\delta := \min\{\delta_1, \delta_2\}$. Then, if $|x - a| < \delta$, combining (1) and (2), we have
$$-\varepsilon < g(x) - L \leq f(x) - L \leq h(x) - L < \varepsilon,$$
$$-\varepsilon < f(x) - L < \varepsilon,$$
which completes the proof. Q.E.D.
The proof for sequences is very similar, using the $\varepsilon$-definition of the limit of a sequence.
Examples
First example
The limit
$$\lim_{x \to 0} x^2 \sin\!\left(\tfrac{1}{x}\right)$$
cannot be determined through the limit law
$$\lim_{x \to a} \big(f(x) \cdot g(x)\big) = \lim_{x \to a} f(x) \cdot \lim_{x \to a} g(x),$$
because
$$\lim_{x \to 0} \sin\!\left(\tfrac{1}{x}\right)$$
does not exist.
However, by the definition of the sine function,
$$-1 \leq \sin\!\left(\tfrac{1}{x}\right) \leq 1.$$
It follows that
$$-x^2 \leq x^2 \sin\!\left(\tfrac{1}{x}\right) \leq x^2.$$
Since $\lim_{x \to 0} -x^2 = \lim_{x \to 0} x^2 = 0$, by the squeeze theorem, $\lim_{x \to 0} x^2 \sin\!\left(\tfrac{1}{x}\right)$ must also be 0.
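A brief numerical illustration of this squeeze, using the bounds stated above:

```python
import math

# Illustrate -x**2 <= x**2 * sin(1/x) <= x**2 for x near (but not equal to) 0.
g = lambda x: -x**2
f = lambda x: x**2 * math.sin(1.0 / x)
h = lambda x: x**2

for x in (0.1, 0.01, 0.001, 1e-6):
    assert g(x) <= f(x) <= h(x)
    print(x, g(x), f(x), h(x))
# All three values shrink to 0 together, consistent with the limit being 0.
```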
Second example
Probably the best-known examples of finding a limit by squeezing are the proofs of the equalities
$$\lim_{x \to 0} \frac{\sin x}{x} = 1, \qquad \lim_{x \to 0} \frac{1 - \cos x}{x} = 0.$$
The first limit follows by means of the squeeze theorem from the fact that
$$\cos x \leq \frac{\sin x}{x} \leq 1$$
for $x$ close enough to 0. The correctness of this for positive $x$ can be seen by simple geometric reasoning (see drawing) that can be extended to negative $x$ as well. The second limit follows from the squeeze theorem and the fact that
$$0 \leq \frac{1 - \cos x}{x} \leq x$$
for $x$ close enough to 0. This can be derived by replacing $\sin x$ in the earlier fact by $\sqrt{1 - \cos^2 x}$ and squaring the resulting inequality.
These two limits are used in proofs of the fact that the derivative of the sine function is the cosine function. That fact is relied on in other proofs of derivatives of trigonometric functions.
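A quick numerical check of the first inequality, $\cos x \leq \frac{\sin x}{x} \leq 1$, for a few values of $x$ near 0:

```python
import math

# Check cos(x) <= sin(x)/x <= 1 near 0 (both signs) and watch all three approach 1.
for x in (0.5, 0.1, 0.01, -0.01, -0.1):
    lower = math.cos(x)
    middle = math.sin(x) / x
    upper = 1.0
    assert lower <= middle <= upper
    print(x, lower, middle, upper)
```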
Third example
It is possible to show that
$$\frac{d}{d\theta} \tan\theta = \sec^2\theta$$
by squeezing, as follows.
In the illustration at right, the area of the smaller of the two shaded sectors of the circle is
$$\frac{\sec^2\theta \,\Delta\theta}{2},$$
since the radius is $\sec\theta$ and the arc on the unit circle has length $\Delta\theta$. Similarly, the area of the larger of the two shaded sectors is
$$\frac{\sec^2(\theta + \Delta\theta)\,\Delta\theta}{2}.$$
What is squeezed between them is the triangle whose base is the vertical segment whose endpoints are the two dots. The length of the base of the triangle is $\tan(\theta + \Delta\theta) - \tan\theta$, and the height is 1. The area of the triangle is therefore
$$\frac{\tan(\theta + \Delta\theta) - \tan\theta}{2}.$$
From the inequalities
$$\frac{\sec^2\theta\,\Delta\theta}{2} \leq \frac{\tan(\theta + \Delta\theta) - \tan\theta}{2} \leq \frac{\sec^2(\theta + \Delta\theta)\,\Delta\theta}{2}$$
we deduce that
$$\sec^2\theta \leq \frac{\tan(\theta + \Delta\theta) - \tan\theta}{\Delta\theta} \leq \sec^2(\theta + \Delta\theta),$$
provided $\Delta\theta > 0$, and the inequalities are reversed if $\Delta\theta < 0$. Since the first and third expressions approach $\sec^2\theta$ as $\Delta\theta \to 0$, and the middle expression approaches $\frac{d}{d\theta}\tan\theta$, the desired result follows.
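Assuming the sector-and-triangle squeeze sketched above, here is a short numerical check of the resulting inequality for a sample angle (the particular angle is an arbitrary choice for illustration):

```python
import math

# Check sec(t)^2 <= (tan(t + dt) - tan(t)) / dt <= sec(t + dt)^2 for small dt > 0,
# and watch both outer bounds close in on sec(t)^2 as dt shrinks.
t = 0.5
for dt in (0.1, 0.01, 0.001):
    lower = 1 / math.cos(t) ** 2
    middle = (math.tan(t + dt) - math.tan(t)) / dt
    upper = 1 / math.cos(t + dt) ** 2
    assert lower <= middle <= upper
    print(dt, lower, middle, upper)
```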
Fourth example
The squeeze theorem can still be used in multivariable calculus, but the lower (and upper) functions must be below (and above) the target function not just along a path but around the entire neighborhood of the point of interest, and it only works if the function really does have a limit there. It can, therefore, be used to prove that a function has a limit at a point, but it can never be used to prove that a function does not have a limit at a point.
$$\lim_{(x, y) \to (0, 0)} \frac{x^2 y}{x^2 + y^2}$$
cannot be found by taking any number of limits along paths that pass through the point, but since
$$-|y| \leq \frac{x^2 y}{x^2 + y^2} \leq |y| \quad\text{and}\quad \lim_{(x, y) \to (0, 0)} -|y| = \lim_{(x, y) \to (0, 0)} |y| = 0,$$
therefore, by the squeeze theorem,
$$\lim_{(x, y) \to (0, 0)} \frac{x^2 y}{x^2 + y^2} = 0.$$
| Mathematics | Real analysis | null |
331039 | https://en.wikipedia.org/wiki/Hong%20Kong%E2%80%93Zhuhai%E2%80%93Macau%20Bridge | Hong Kong–Zhuhai–Macau Bridge | The Hong Kong–Zhuhai–Macau Bridge (HZMB) is a bridge–tunnel system consisting of a series of three cable-stayed bridges, an undersea tunnel, and four artificial islands. It is both the longest sea crossing and the longest open-sea fixed link in the world. The HZMB spans the Lingding and Jiuzhou channels, connecting Hong Kong and Macau with Zhuhai—a major city on the Pearl River Delta in China.
The HZM Bridge was designed to last for 120 years and cost ¥127 billion (US$18.8 billion) to build. The cost of constructing the Main Bridge was estimated at ¥51.1 billion (US$7.56 billion) funded by bank loans and shared among the governments of mainland China, Hong Kong and Macau.
Originally set to be opened to traffic in late 2016, the structure was completed on 6 February 2018 and journalists were subsequently taken for a ride over the bridge. On 24 October 2018 the HZMB was opened to the public after its inauguration a day earlier by Chinese leader Xi Jinping.
Planning
Background
Hopewell Holdings founder and then-managing director Gordon Wu proposed the concept of a bridge-tunnel linking Mainland China, British Hong Kong and Portuguese Macau in the 1980s. Wu stated that he got the idea in 1983 from the Chesapeake Bay Bridge–Tunnel. In 1988 Wu pitched the concept to Guangdong and Beijing officials. He envisaged a link farther north than the current design, beginning at Black Point near Tuen Mun, Hong Kong and crossing the Pearl River estuary via Nei Lingding Island and Qi'ao Island. His proposed bridge would have ended at the Chinese village of Tangjia, and a new road would have continued south through Zhuhai before terminating at Macau. Discussions stalled after the Tiananmen Square protests in mid-1989 "unnerved" Wu and other foreign investors, and caused Hopewell's Hong Kong share prices to plunge.
The route proposed by Wu was promoted by the Zhuhai government under the name Lingdingyang Bridge. In the mid-1990s, Zhuhai built a bridge between the Zhuhai mainland and Qi'ao Island that was intended as the first phase of this route, though the full scheme had not been approved by either the Chinese or Hong Kong governments at the time. China's central government showed support for this project on 30 December 1997. The new Hong Kong government was reluctant, stating that it was still awaiting cross-border traffic study results, and Hong Kong media questioned the environmental impact of the project with regard to air pollution, traffic and marine life.
In December 2001 the Legislative Council of Hong Kong passed a motion urging the Administration to develop the logistics industry including the construction of a bridge connecting Hong Kong, Zhuhai and Macao. In September 2002, the China/Hong Kong Conference on Co-ordination of Major Infrastructure Projects agreed to a joint study on a transport link between Hong Kong and Pearl River West.
Preparation
To coordinate the project, the Advance Work Coordination Group of HZMB was set up in 2003. Officials from three sides solved issues such as landing points and alignments of the bridge, operation of the Border Crossing Facilities, and project financing.
In August 2008, China's central government and the governments of Guangdong, Hong Kong and Macau agreed to finance 42 percent of the total costs. The remaining 58% consisted of loans (approximately ¥22 billion or US$3.23 billion) from the Bank of China.
In March 2009, it was further reported that China's central government, Hong Kong and Macau agreed to finance 22 percent of the total costs. The remaining 78 percent consisted of loans (approximately ¥57.3 billion or US$8.4 billion) from a consortium of banks led by Bank of China.
Construction
Construction of the HZMB project began on 15 December 2009 on the Chinese side, with Politburo Standing Committee member and Vice Premier of China Li Keqiang holding a commencement ceremony. Construction of the Hong Kong section of the project began in December 2011 after a delay caused by a legal challenge regarding the environmental impact of the bridge.
The last bridge tower was erected on 2 June 2016. The last element of the straight section of the undersea tunnel was installed on 12 July 2016, and the final tunnel joint was installed on 2 May 2017. Construction of the Main Bridge, consisting of a viaduct and an undersea tunnel, was completed on 6 July 2017. The entire construction project was completed on 6 February 2018. During construction, 19 workers died.
Sections and elements
The HZMB consists of three main sections: the Main Bridge () in the middle of the Pearl River estuary, the Hong Kong Link Road () in the east, and the Zhuhai Link Road () in the west of the estuary.
Main Bridge
The Main Bridge, the largest part of the HZMB project, is a bridge-tunnel system constructed by the mainland Chinese authorities. It connects Zhuhai-Macao Port Artificial Island, an artificial island housing the Boundary Crossing Facilities (BCF) for both mainland China and Macau in the west, to the Hong Kong Link Road in the east.
This section includes a bridge and an immersed-tube undersea tunnel that runs between two artificial islands, the Blue Dolphin Island on the west and the White Dolphin Island on the east. The bridge portion crosses the Pearl River estuary with three cable-stayed bridges, allowing shipping traffic to pass underneath.
Hong Kong Link Road
Administered by the Highways Department of the HKSAR, the Hong Kong Link Road connects the main bridge–tunnel to an artificial island housing the Hong Kong Boundary Crossing Facilities (HKBCF). This section includes a bridge, the Scenic Hill Tunnel, and an at-grade road along the east coast of Chek Lap Kok.
Zhuhai Link Road
The Zhuhai Link Road starts from Zhuhai-Macao Port Artificial Island, passes through the developed area of Gongbei via a tunnel towards Zhuhai, and connects to three major expressways, namely, the Jing-Zhu Expressway, Guang-Zhu West Expressway, and Jiang-Zhu Expressway.
Macau Bridge Link
Opened in October 2024, the bridge provides alternative access to Taipa Island from the HZMB boundary crossing facilities.
Left- and right-hand traffic
Although the HZMB connects two left-hand traffic (LHT) areas, namely Hong Kong and Macau, the crossing itself is right-hand traffic (RHT), the same as in Zhuhai and other regions of mainland China (the bridge is technically in Zhuhai for most of its length). Thus, drivers from Hong Kong and Macau need to make use of crossing viaducts to switch to RHT upon entering the bridge, and back to LHT upon leaving the bridge when they are back to Hong Kong and Macau. Traffic between Zhuhai and the bridge requires no left-right conversion as they are both RHT.
Transport
Shuttle buses
The HZMBus shuttle bus service (colloquially referred to as the "golden buses") runs 24 hours a day, with departures as frequent as every five minutes. The journey across the HZMB takes about 40 minutes.
The HZMB Hong Kong Port can be reached from Hong Kong by taxi or various buses including Cityflyer airport routes A10, A11, A12, A17, A21, A22, A23, A25, A26, A28 and A29, Long Win Bus airport routes A30, A31, A32, A33X, A34, A36, A37, A38, A41, A41P, A42, A43, A46 and A47X, NLB airport route A35, Green Minibus route 901, the B4 shuttle bus from Hong Kong International Airport, the B5 shuttle bus from Sunny Bay MTR station, or the B6 bus from Tung Chung. In addition, all overnight airport buses (NA-prefixed routes) operated by Cityflyer or Long Win stop or terminate at the Hong Kong Port.
The HZMB Zhuhai Port can be reached from Zhuhai by taxis or the L1 bus which uses historic tourist vehicles, or Line-12, 23, 25 or 3 buses.
The HZMB Macau Port can be reached from Macau by taxis or various buses including the 101X bus and the 102X bus from St Paul's and Taipa, or the HZMB Integrated Resort Connection bus from Taipa Ferry Terminal or the Exterior Ferry terminal, connecting with free casino shuttle buses.
Private vehicles
Since the Hong Kong government imposes significant fees, taxes and administrative paperwork on private vehicle ownership and usage to deal with road congestion, driving a car on the HZMB would incur the same restrictions as current cross-border traffic. These include applying for separate driving licenses for both Hong Kong and mainland China, a Hong Kong Closed Road Permit for cross-boundary vehicles, and an Approval Notice from the Guangdong Public Security Bureau. Vehicle owners also need to ensure they have the appropriate insurance coverage for the regions they are travelling to.
By the end of 2017 only 10,000 permits for private vehicles to drive across the HZMB from Hong Kong to Zhuhai had been issued. In addition, the number of vehicles permitted to enter Hong Kong and Macau from other regions is subject to a daily quota.
In addition, to help the compact territory of Macau tackle its road congestion problems, drivers arriving from other regions are strongly encouraged to use a park-and-ride scheme, leaving their vehicles at a car park on the edge of Macau. A small quota of 300 vehicles is allowed to enter Macau directly.
In 2023 the permit and quota system was relaxed further, allowing more Hong Kong and Macau vehicles to enter Guangdong Province via the Northbound Travel scheme. A similar Southbound Travel scheme is planned for 2025.
Economic effects
The HZMB links three major cities—Hong Kong, Zhuhai, and Macau—which are geographically close but separated by water. With the bridge in place, travelling time between Zhuhai and Hong Kong was cut down from about 4 hours to 30 minutes on the road.
The HZMB project is part of a Beijing-driven strategy to create an economic hub and promote the economic development of the whole area of the Pearl River Delta, which is also known as Greater Bay Area. Hoping to leverage the bridge and create an economic zone linking the three cities, Zhuhai's Hengqin area was designated as a free trade zone in 2015.
Controversies
White elephant project
Some residents have complained that the bridge has been a waste of taxpayers' money because of the restrictive criteria and the large amount of administrative paperwork needed to use the bridge with a private vehicle. Consequently, an average of only 8,900 vehicles per day used the bridge in 2023. This contrasts sharply with the similar Shenzhen–Zhongshan Link, which carried over 3 million vehicles in its first month of operation alone.
Delays and budget overruns
The artificial island housing the Hong Kong Boundary Crossing Facilities (HKBCF) was reported to be drifting, a problem attributed to an unconventional land reclamation method, hitherto unused in Hong Kong, in which a row of circular steel cells was pile-driven into the mud and filled with inert material to form a seawall.
The drifting of parts of the reclaimed island allegedly caused a delay in the HZMB project. The Highways Department denied various reports about the extent of the movement but admitted that parts of the reclaimed land had moved "up to six or seven metres", claiming that some movement was expected and that safety had not been jeopardised.
Mainland contractors also reportedly had difficulty constructing immersed tubes for their section of the project, with the director of the Guangdong National Development and Reform Commission stating that 2020 would be a difficult target to meet.
By 2017, the Main Bridge of the HZMB project had experienced a cost overrun of about ¥10 billion, blamed on increased labour and material costs, as well as changes to the design and construction schemes.
Worker deaths and injuries
The number of deaths and injuries during the construction project came under scrutiny in Hong Kong. In addition to nine fatalities on the mainland side, more than ten deaths were reported on the Hong Kong side of the construction project, plus between 234 and 600 injuries, depending on the source. In April 2017, the Construction Site Workers General Union, the Labour Party and the Confederation of Trade Unions demonstrated at the Central Government Complex, demanding the government take action.
Lawmaker Fernando Cheung also expressed concern over the unknown death toll on the Chinese side of the project, speculating: "the project is known as the 'bridge of blood and tears' and we are only talking about the Hong Kong side. We don't even know what is happening in China. I suppose the situation could be 10 times worse than that in Hong Kong." He said that the Hong Kong Government had a responsibility to consider worker safety on the Chinese side.
Faked safety testing
In 2017, Hong Kong's Independent Commission Against Corruption (ICAC) arrested 21 employees (2 senior executives, 14 laboratory technicians, and 5 laboratory assistants) of Jacobs China Limited, a contractor of the Civil Engineering and Development Department, for falsifying concrete test results, thus potentially risking the safety of the bridge for public use. In December 2017, a lab technician pleaded guilty and was sentenced to eight months' imprisonment, while the others awaited sentencing. Hong Kong's Highways Department conducted tests again after the falsified results were exposed and found that all test results met safety standards.
Seawall integrity
In April 2018, the public and media raised questions over the integrity of the seawalls protecting the artificial islands at both ends of the undersea tunnel. In footage taken by drone users and mariners, the dolosse installed at the edges of the artificial islands appear to have dislodged. Some civil engineers suggested that there was an error in design. In dismissing the safety concerns, the HZMB Authority said the dolosse were designed to be submerged and the design was working as intended. Director of Highways Department Daniel Chung denied on 8 April 2018 that the breakwater components had been washed away by waves.
Subsequent aerial footage posted online showed a section of the dolosse breakwater completely underwater. Civil engineer So Yiu-kwan told Hong Kong media on 12 April 2018 that the water level, at the time the photos were taken, was about 1.74 mPD (metres above Principal Datum), but the maximum water level could reach 2.7 mPD. He said the dolosse would offer no wave protection if entirely submerged, and further alleged that they had been installed backwards.
Impact on wildlife
Conservationists at WWF Hong Kong blamed the construction of the HZMB for the falling number of white dolphins in the waters near the bridge. The dolphins found near waters of Lantau were worst hit with numbers dropping by 60 percent between April 2015 and March 2016.
St. Elmo's fire
St. Elmo's fire (also called witchfire or witch's fire) is a weather phenomenon in which luminous plasma is created by a corona discharge from a rod-like object such as a mast, spire, chimney, or animal horn in an atmospheric electric field. It has also been observed on the leading edges of aircraft, as in the case of British Airways Flight 009, and by US Air Force pilots.
The intensity of the effect, a blue or violet glow around the object, often accompanied by a hissing or buzzing sound, is proportional to the strength of the electric field and therefore noticeable primarily during thunderstorms or volcanic eruptions.
St. Elmo's fire is named after St. Erasmus of Formia (also known as St. Elmo), the patron saint of sailors. The phenomenon, which can warn of an imminent lightning strike, was regarded by sailors with awe and sometimes considered to be a good omen.
Cause
St. Elmo's fire is a reproducible and demonstrable form of plasma. The electric field around the affected object causes ionization of the air molecules, producing a faint glow easily visible in low-light conditions. Conditions that can generate St. Elmo's fire are present during thunderstorms, when high-voltage differentials are present between clouds and the ground underneath. A sufficiently strong local electric field is required to begin a discharge in moist air. The magnitude of the electric field depends greatly on the geometry (shape and size) of the object. Sharp points lower the necessary voltage because electric fields are more concentrated in areas of high curvature, so discharges preferentially occur and are more intense at the ends of pointed objects.
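To illustrate why curvature matters, the sketch below approximates a pointed conductor as a small sphere held at a potential V, for which the surface field is roughly V/r; the corona-onset threshold of about 100 kV/m for moist air and the example voltage and radii are assumed illustrative values rather than figures taken from this article.

# Rough sketch: field concentration at a pointed conductor, approximated as a small
# sphere of radius r held at potential V, for which the surface field is E = V / r.
# The onset threshold (~100 kV/m for moist air) and the example numbers are
# illustrative assumptions, not values quoted in the article.

ONSET_FIELD_V_PER_M = 1e5  # assumed corona-onset field in moist air, ~100 kV/m

def surface_field(potential_volts: float, tip_radius_m: float) -> float:
    """Surface electric field (V/m) of a sphere at the given potential."""
    return potential_volts / tip_radius_m

def reaches_corona(potential_volts: float, tip_radius_m: float) -> bool:
    """True if the local field at the tip exceeds the assumed onset threshold."""
    return surface_field(potential_volts, tip_radius_m) >= ONSET_FIELD_V_PER_M

if __name__ == "__main__":
    potential = 5e3  # 5 kV, an arbitrary illustrative potential
    for radius in (1e-3, 1e-2, 1e-1):  # 1 mm tip, 1 cm knob, 10 cm dome
        e = surface_field(potential, radius)
        print(f"tip radius {radius} m: E = {e:.0f} V/m, corona onset: {reaches_corona(potential, radius)}")

For the same applied potential, only the blunt 10 cm dome stays below the assumed onset field, which is why discharges appear first at masts, spires and other sharp projections.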
The nitrogen and oxygen in the Earth's atmosphere cause St. Elmo's fire to fluoresce with blue or violet light; this is similar to the mechanism that causes neon lights to glow, albeit at a different colour due to the different gas involved.
In 1751, Benjamin Franklin hypothesized that a pointed iron rod would light up at the tip during a lightning storm, similar in appearance to St. Elmo's fire.
In an August 2020 paper, researchers in MIT's Department of Aeronautics and Astronautics demonstrated that St. Elmo's fire behaves differently in airborne objects versus grounded structures. They show that electrically isolated structures accumulate charge more effectively in high wind, in contrast to the corona discharge observed in grounded structures.
Research
Vacuum ultraviolet light
Researchers at Rutgers University have devised a method of generating vacuum ultraviolet light by employing sharp conductive needles placed within a dense gas, such as xenon, contained in a cell. Applying a high negative voltage to the needles in the xenon-filled cell results in the efficient production of vacuum ultraviolet light. Because St. Elmo's fire is a similar form of discharge, they believe the approach could be used for lighting with a higher power source, increasing efficiency by over 50%.
In history and culture
In ancient Greece, a single instance of St. Elmo's fire was known by a name literally meaning "torch", while two instances were referred to as Castor and Pollux, the names of the mythological twin brothers of Helen.
After the medieval period, St. Elmo's fire was sometimes associated with the Greek element of fire, such as with one of Paracelsus's elementals, specifically the salamander, or, alternatively, with a similar creature referred to as an acthnici.
Welsh mariners referred to St. Elmo's fire by names meaning "candles of the Holy Ghost" or "candles of St. David".
Russian sailors also historically documented instances of St. Elmo's fire, known as "Saint Nicholas" or "Saint Peter's lights", also sometimes called St. Helen's or St. Hermes' fire, perhaps through linguistic confusion.
St. Elmo's fire is reported to have been seen during the Siege of Constantinople by the Ottoman Empire in 1453. It was reportedly seen emitting from the top of the Hippodrome. The Byzantines attributed it to a sign that the Christian God would soon come and destroy the conquering Muslim army. According to George Sphrantzes, it disappeared just days before Constantinople fell, ending the Byzantine Empire.
Accounts of Magellan's first circumnavigation of the globe refer to St. Elmo's fire (calling it the body of St. Anselm) being seen around the fleet's ships multiple times off the coast of South America. The sailors saw these as favourable omens.
En route to Nagasaki with the Fat Man atom bomb on 9 August 1945, the B-29 Bockscar experienced an uncanny luminous blue plasma forming around the spinning propellers, "as though we were riding the whirlwind through space on a chariot of blue fire."
St Elmo's fire was seen during the 1955 Great Plains tornado outbreak in Kansas and Oklahoma.
Among the phenomena experienced on British Airways Flight 9 on 24 June 1982, were glowing light flashes along the leading edges of the aircraft, including the wings and cockpit windscreen, which were seen by both passengers and crew. While the bright flashes of light shared similarities with St Elmo's fire, the glow experienced was from the impact of ash particles on the leading edges of the aircraft, similar to that seen by operators of sandblasting equipment.
St. Elmo's fire was observed and its optical spectrum recorded during a University of Alaska research flight over the Amazon in 1995 to study sprites.
Ill-fated Air France Flight 447 from Rio de Janeiro–Galeão International Airport to Paris Charles de Gaulle Airport in 2009 is understood to have experienced St. Elmo's fire 23 minutes prior to crashing into the Atlantic Ocean; however, the phenomenon was not a factor in the disaster.
Apoy ni San Elmo – commonly shortened to santelmo – is a bad omen or a flying spirit in Filipino folklore, although the description for santelmo is more similar to ball lightning than St. Elmo's fire. There are various indigenous names for santelmo which has existed before the term santelmo was coined. The term santelmo originated from Spanish colonial rule in the Philippines.
Notable observations
Classical texts
St. Elmo's fire is referenced in the works of Julius Caesar (De Bello Africo, 47), Pliny the Elder (Naturalis Historia, book 2, par. 101), and Alcaeus (fragment 34). Earlier, Xenophanes of Colophon had alluded to the phenomenon.
Zheng He
In 15th-century Ming China, Admiral Zheng He and his associates composed the Liujiagang and Changle inscriptions, the two epitaphs of the Ming treasure voyages, where they made a reference to St. Elmo's fire as a divine omen of Tianfei, the goddess of sailors and seafarers.
Accounts associated with Magellan and da Gama
Mention of St. Elmo's fire can be found in Antonio Pigafetta's journal of his voyage with Ferdinand Magellan. St. Elmo's fire, also known as "corposants" or "corpusants" from the Portuguese corpo santo ("holy body"), is also described in The Lusiads, the epic account of Vasco da Gama's voyages of discovery.
Robert Burton
Robert Burton wrote of St. Elmo's fire in his Anatomy of Melancholy (1621): "Radzivilius, the Lithuanian duke, calls this apparition Sancti Germani sidus; and saith moreover that he saw the same after in a storm, as he was sailing, 1582, from Alexandria to Rhodes". This refers to the voyage made by Mikołaj Krzysztof "the Orphan" Radziwiłł in 1582–1584.
John Davis
On 9 May 1605, while on the second voyage of John Davis commanded by Sir Edward Michelborne to the East Indies, an unknown writer aboard the Tiger describes the phenomenon: "In the extremity of our storm appeared to us in the night, upon our maine Top-mast head, a flame about the bigness of a great Candle, which the Portugals call Corpo Sancto, holding it a most divine token that when it appeareth the worst is past. As, thanked be God, we had better weather after it".
Pierre Testu-Brissy
Pierre Testu-Brissy was a pioneering French balloonist. On 18 June 1786, he flew for 11 hours and made the first electrical observations as he ascended into thunderclouds. He stated that he drew remarkable discharges from the clouds by means of an iron rod carried in the basket. He also experienced Saint Elmo's fire.
William Bligh
William Bligh recorded in his log on Sunday 4 May 1788, on board HMS Bounty of 'Mutiny On The Bounty' fame:
'Corpo-Sant. Some electrical Vapour seen about the Iron at the Yard Arms about the Size of the blaze of a Candle.'
The location of this event was in the South Atlantic sailing from Cape Horn, (having failed to round the cape in the winter months), en route to Cape of Good Hope and west of Tristan da Cunha. The log records the ship's location as: Latd. 42°:34'S, Longd (by the time keeper K2) as 34°:38'W.
Reference: Log of the Proceedings of His Majestys Ship Bounty in a Voyage to the South Seas, (to take the Breadfruit plant from the Society Islands to the West Indies,) under the Command of Lieutenant William Bligh, 1 December 1787 – 22 October 1788 Safe 1/46, Mitchell Library, State Library of NSW
William Noah
William Noah, a silversmith convicted in London of stealing 2,000 pounds of lead, while en route to Sydney, New South Wales on the convict transport ship , recorded two such observations in his detailed daily journal. The first was in the Southern Ocean midway between Cape Town and Sydney and the second was in the Tasman Sea, a day out of Port Jackson:
While the exact nature of these weather phenomena cannot be known with certainty, they appear to describe two observations of St. Elmo's fire, with perhaps some ball lightning and even a direct lightning strike to the ship thrown into the mix.
James Braid
On 20 February 1817, during a severe electrical storm, James Braid, surgeon at Lord Hopetoun's mines at Leadhills, Lanarkshire, had an extraordinary experience whilst on horseback:
Weeks earlier, reportedly on 17 January 1817, a luminous snowstorm occurred in Vermont and New Hampshire. Saint Elmo's fire appeared as static discharges on roof peaks, fence posts, and the hats and fingers of people. Thunderstorms prevailed over central New England.
Charles Darwin
Charles Darwin noted the effect while aboard the Beagle. He wrote of the episode in a letter to J. S. Henslow that one night when the Beagle was anchored in the estuary of the Río de la Plata:
He also describes the above night in his book The Voyage of the Beagle:
Richard Henry Dana
In Two Years Before the Mast, Richard Henry Dana Jr., (1815–1882) describes seeing a corposant in the horse latitudes of the northern Atlantic Ocean. However, he may have been talking about ball lightning; as mentioned earlier, it is often erroneously identified as St. Elmo's fire:
The observation by R. H. Dana of this phenomenon in Two Years Before the Mast is a straightforward description of an extraordinary experience apparently only known to mariners and airline pilots.
Nikola Tesla
Nikola Tesla created St. Elmo's fire in 1899 while testing a Tesla coil at his laboratory in Colorado Springs, Colorado, United States. St. Elmo's fire was seen around the coil and was said to have lit up the wings of butterflies with blue halos as they flew around.
Mark Heald
A minute before the crash of the Luftschiffbau Zeppelin's LZ 129 Hindenburg on 6 May 1937, Professor Mark Heald (1892–1971) of Princeton saw St. Elmo's Fire flickering along the airship's back. Standing outside the main gate to the Naval Air Station, he watched, together with his wife and son, as the airship approached the mast and dropped her bow lines. A minute thereafter, by Heald's estimation, he first noticed a dim "blue flame" flickering along the backbone girder about one-quarter the length abaft the bow to the tail. There was time for him to remark to his wife, "Oh, heavens, the thing is afire," for her to reply, "Where?" and for him to answer, "Up along the top ridge" – before there was a big burst of flaming hydrogen from a point he estimated to be about one-third the ship's length from the stern.
William L. Laurence
St. Elmo's fire was reported by The New York Times reporter William L. Laurence on 9 August 1945, as he was aboard a plane following Bockscar on the way to Nagasaki.
In popular culture
In literature
One of the earliest references to the phenomenon appears in Alcaeus's Fragment 34a about the Dioscuri, or Castor and Pollux. It is also referenced in Homeric Hymn 33 to the Dioscuri who were from Homeric times associated with it. Whether the Homeric Hymn antedates the Alcaeus fragment is unknown.
The phenomenon appears to be described first in the Gesta Herwardi, written around 1100 and concerning an event of the 1070s. However, one of the earliest direct references to St. Elmo's fire made in fiction can be found in Ludovico Ariosto's epic poem Orlando Furioso (1516). It is located in the 17th canto (19th in the revised edition of 1532) after a storm has punished the ship of Marfisa, Astolfo, Aquilant, Grifon, and others, for three straight days, and is positively associated with hope:
In William Shakespeare's The Tempest (c. 1623), Act I, Scene II, St. Elmo's fire acquires a more negative association, appearing as evidence of the tempest inflicted by Ariel according to the command of Prospero:
The fires are also mentioned as "death fires" in Samuel Taylor Coleridge's The Rime of the Ancient Mariner:
Later in the 18th and 19th centuries, literature associated St. Elmo's fire with a bad omen or divine judgment, coinciding with the growing conventions of Romanticism and the Gothic novel. For example, in Ann Radcliffe's The Mysteries of Udolpho (1794), during a thunderstorm above the ramparts of the castle:
In the 1864 novel Journey to the Centre of the Earth by Jules Verne, the author describes the fire occurring while sailing during a subterranean electrical storm (chapter 35, page 191):
In Herman Melville's novel Moby-Dick, Starbuck points out "corpusants" during a thunder storm in the Japanese sea in chapter 119, "The Candles".
St. Elmo's fire makes an appearance in The Adventures of Tintin comic, Tintin in Tibet, by Hergé. Tintin recognizes the phenomenon on Captain Haddock's ice-axe.
The phenomenon appears in the first stanza of Robert Hayden's poem "The Ballad of Nat Turner"; it is also referred to with the term "corposant" in the first section of his long poem "Middle Passage".
In Kurt Vonnegut's Slaughterhouse-Five, Billy Pilgrim sees the phenomenon on soldiers' helmets and on rooftops. Vonnegut's The Sirens of Titan also notes the phenomenon affecting Winston Niles Rumfoord's dog, Kazak, the Hound of Space, in conjunction with solar disturbances of the chrono-synclastic infundibulum.
In Robert Aickman's story "Niemandswasser" (1975), the protagonist, Prince Albrecht von Allendorf, is "known as Elmo to his associates, because of the fire which to them emanated from him". "There was an inspirational force in Elmo of which the sensitive soon became aware, and which had led to his Spottname or nickname."
In On the Banks of Plum Creek by Laura Ingalls Wilder, St. Elmo's fire is seen by the girls and Ma during one of the blizzards. It was described as coming down the stove pipe and rolling across the floor following Ma's knitting needles; it did not burn the floor (pages 309–310). The phenomenon as described, however, is more similar to ball lightning.
In Voyager, the third major novel in Diana Gabaldon's popular Outlander series, the primary characters experience St. Elmo's fire while lost at sea in a thunderstorm between Hispaniola and coastal Georgia.
St. Elmo's fire is also mentioned in the novel, Castaways of the Flying Dutchman by Brian Jacques.
It is referenced multiple times in the novel Pet Sematary by Stephen King.
It is referenced multiple times in the Urban-Fantasy series The Dresden Files by Jim Butcher, particularly when magical beings such as the protagonist's dog are exerting power, especially during conflict, or to describe the visual effects of magic being used.
In television
On the children's television series The Mysterious Cities of Gold (1982), episode four shows St. Elmo's fire affecting the ship as it sailed past the Strait of Magellan. The real-life footage at the end of the episode has snippets of an interview with Japanese sailor Fukunari Imada, whose comments were translated to: "Although I've never seen St. Elmo's fire, I'd certainly like to. It was often considered a bad omen, as it played havoc with compasses and equipment". The TV series also referred to St. Elmo's fire as being a bad omen during the cartoon. The footage was captured as part of his winning solo yacht race in 1981.
On the American television series Rawhide, in a 1959 episode titled "Incident of the Blue Fire", cattle drovers on a stormy night see St. Elmo's fire glowing on the horns of their steers, which the men regard as a deadly omen. St. Elmo's fire is also referenced in a 1965 episode of Bonanza in which religious pilgrims staying on the Cartwright property believe an experience with St. Elmo's fire is the work of Satan.
On The Waltons episode "The Grandchild" (1977), Mary Ellen witnesses St. Elmo's Fire while running through the woods.
On the American animated television series Futurama episode titled "Möbius Dick", Turanga Leela refers to the phenomenon as "Tickle me Elmo's Fire."
On the Netflix original Singaporean animated series Trese (2021), the Santelmo (St. Elmo's Fire) is one of the protagonist's, Alexandra Trese's, allies whom she contacts using her old Nokia phone, dialing the date of the Great Binondo fire, 0003231870.
In film
In Moby Dick (1956), St. Elmo's fire stops Captain Ahab from killing Starbuck.
In The Last Sunset (1961), outlaw/cowhand Brendan "Bren" O'Malley (Kirk Douglas) rides in from the herd and leads the recently widowed Belle Breckenridge (Dorothy Malone) to an overview of the cattle. As he takes the rifle from her, he proclaims, "Something out there, you could live five lifetimes, and never see again," the audience is then shown a shot of the cattle with a blue or violet glow coming from their horns. "Look. St. Elmo's fire. Never seen it except on ships," O'Malley says as Belle says, "I've never seen it anywhere. What is it?" Trying to win her back, he says, "Well, a star fell and smashed and scattered its glow all over the place."
In St. Elmo's Fire (1985), Rob Lowe's character Billy Hicks erroneously claims that the phenomenon is "not even a real thing."
In the Western miniseries Lonesome Dove (1989–1990), lightning strikes a herd of cattle during a storm, causing their horns to glow blue.
In The Hunt for Red October (1990), during a scene in which the USS Dallas, a Los Angeles-class submarine, is attempting to evade a torpedo, the crew discusses the presence of St. Elmo's fire on the sub's periscope.
In The Perfect Storm, based on the true story of the fishing vessel Andrea Gail, there is a scene where the crew encounters St. Elmo's fire at the height of a storm.
In Lars von Trier's 2011 film Melancholia, the phenomenon features in the opening sequence and later in the film as the rogue planet Melancholia approaches the Earth for an impact event.
In Robert Eggers's 2019 horror film The Lighthouse, it appears in reference to the mysterious salvation that lighthouse keeper Thomas Wake (Willem Dafoe) is hiding from Ephraim Winslow (Robert Pattinson) inside the Fresnel lens of the lantern.
In music
Brian Eno's third studio album Another Green World (1975) contains a song titled "St. Elmo's Fire" in which guesting King Crimson guitarist Robert Fripp (credited with playing "Wimshurst guitar" in the liner notes) improvises a lightning-fast solo that would imitate an electrical charge between two poles on a Wimshurst high-voltage generator.
"St. Elmo's Fire (Man in Motion)" is a song recorded by John Parr. It hit number one on the Billboard Hot 100 on 7 September 1985, remaining there for two weeks. It was the main theme for Joel Schumacher's 1985 film St. Elmo's Fire.
"St. Elmo's Fire" by Michael Franks.
The Sammarinese entry for the 2017 Eurovision Song Contest in Kyiv "Spirit of the Night" contains references to St. Elmo's Fire.
Gliese 581d
Gliese 581d (often shortened to Gl 581d or GJ 581d) is a doubtful, and frequently disputed, exoplanet candidate orbiting within the Gliese 581 system, approximately 20.4 light-years away in the Libra constellation. It was the third planet claimed in the system and the fourth (in a 4-planet model) or fifth (in a disproven 5- or 6-planet model) in order from the star. Multiple subsequent studies found that the planetary signal in fact originates from stellar activity, and thus the planet does not exist, but this remains disputed.
Though significantly more massive than Earth (at a minimum mass of 6.98 Earth masses), this super-Earth was the first exoplanet of relatively low mass regarded as orbiting within the habitable zone of its parent star. Assuming it exists, computer climate simulations have confirmed the possibility of surface water, and these factors combine to give a relatively high measure of planetary habitability.
History
Discovery
A team of astronomers led by Stéphane Udry of the Geneva Observatory used the HARPS instrument on the European Southern Observatory 3.6-meter telescope at La Silla, Chile, to discover the planet in 2007. Udry's team employed the radial velocity technique, in which the minimum mass of a planet is determined from the small wobble its gravity induces in the parent star's motion. This study estimated an orbital period of 83 days for the planet.
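As a rough illustration of how small this reflex motion is for a planet like Gliese 581d, the sketch below evaluates the standard radial-velocity semi-amplitude formula for a circular, edge-on orbit. The planet mass (about 7 Earth masses) and the 66.8-day period follow figures quoted in this article, while the stellar mass of roughly 0.31 solar masses is an assumed value not stated here.

import math

# Sketch of the radial-velocity reflex signal a planet induces on its star, for a
# circular orbit: K = (2*pi*G / P)**(1/3) * m_p / (M_star + m_p)**(2/3).
# The stellar mass (~0.31 solar masses for Gliese 581) is an assumed value not
# given in the text above; the planet mass and period follow the figures quoted there.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
M_EARTH = 5.972e24     # kg
DAY = 86400.0          # s

def rv_semi_amplitude(m_planet_kg, m_star_kg, period_s, eccentricity=0.0):
    """Stellar radial-velocity semi-amplitude K in m/s (edge-on orbit, sin i = 1)."""
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet_kg
            / ((m_star_kg + m_planet_kg) ** (2 / 3) * math.sqrt(1 - eccentricity ** 2)))

if __name__ == "__main__":
    k = rv_semi_amplitude(7 * M_EARTH, 0.31 * M_SUN, 66.8 * DAY)
    print(f"K ~ {k:.2f} m/s")   # roughly 2-3 m/s: a very small wobble

A signal of only a few metres per second is comparable to the velocity jitter a moderately active star can produce, which is one reason the detection later became contentious.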
In late April 2009, the original discovery team revised its original estimate of the planet's orbital parameters, finding that it orbits closer to its star than originally determined, with an orbital period of 66.8 days. They concluded that the planet is within the habitable zone where liquid water could exist. A 2010 study of aliasing in radial velocity data found that the true period of Gliese 581d remained unclear, with even a 1-day period being a possibility. Later models of the system that included planet d, published between 2010 and 2013, supported a 67-day period.
Disputed existence
In September 2012, Roman Baluev filtered out the "red noise" from the Keck data and concluded that this planet's existence is probable only to 2.2 standard deviations, and thus is uncertain. Earlier that same year, however, S. S. Vogt (USNO), together with R. P. Butler and N. Haghighipour, published a study that supported the existence of the planet with a much higher probability; they also pursued a dynamical analysis of the system.
Additional work on Gliese 581 as a four-planet system (thus, including planet d), demonstrating its long-term orbital stability, was given by Makarov and coauthors.
A study in 2014 concluded that Gliese 581d is "an artifact of stellar activity which, when incompletely corrected, causes the false detection of planet g." In 2015, a study by Guillem Anglada-Escudé and Mikko Tuomi questioned the 2014 work, claiming a significant shortcoming in the adopted statistical method; however, this study was published along with a rebuttal by the team that published the 2014 refutation. Another 2015 study added support to the conclusion that the radial velocity signal originates from stellar activity, and a 2016 study provided additional strong evidence for it.
In 2016, E. R. Newton and collaborators pointed out that for early M dwarfs, planets in their habitable zones may have orbital periods coinciding with the stellar rotation period (or in rare cases, such as Gliese 581d, half of it, if the standard value of 132 days is assumed); this aspect seriously complicates the verification of any such planets.
A 2022 paper confirmed the results of previous studies suggesting that the announcement of Gliese 581d stemmed from a false detection due to stellar activity. This work uses an updated technique correlating stellar activity with RV signals.
A 2024 research note argued that it is still possible that Gliese 581d might exist, on the basis of a new measurement of Gliese 581's rotation period that differs from the value adopted in the 2014 study refuting the planet. The new rotation period is not a multiple of the planet's proposed period. However, no reanalysis of the radial velocity data was done, so further research will be needed to draw a conclusion.
Orbital characteristics
Gliese 581d was thought to orbit Gliese 581 at 0.21847 AU, approximately a fifth of the distance at which the Earth orbits the Sun, though its orbital eccentricity has not been confirmed. There were two models for its orbit: a circular one like Earth's, and an eccentric one like Mercury's. These were based on the six-planet and four-planet models of the Gliese 581 system, respectively. Under the four-planet model, Gliese 581d would most probably be in a 2:1 spin-orbit resonance, rotating twice for each orbit of its parent star; the solar day on Gliese 581d would therefore be approximately 67 Earth days long.
The orbital distance places it at the outer limits of the habitable zone, the distance at which it is believed possible for water to exist on the surface of a planetary body. At the time of its discovery, the planet's orbit was originally thought to be farther out. However, in late April 2009 the original discovery team revised its original estimate of the planet's orbital parameters, finding that it orbits closer to its star than originally determined with an orbital period of 66.87 days. They concluded that the planet is within the habitable zone where liquid water could exist.
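The quoted distance and period can be cross-checked with Kepler's third law; in the sketch below the stellar mass of roughly 0.31 solar masses is an assumed value not given in the text, and the result comes out near the reported 67 days.

import math

# Kepler's third law sanity check: does a 0.21847 AU orbit around Gliese 581 give
# the ~67-day period quoted above?  The stellar mass (~0.31 solar masses) is an
# assumed value, not stated in the text.

G = 6.674e-11            # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
AU = 1.496e11            # m
DAY = 86400.0            # s

def orbital_period_days(a_au: float, m_star_solar: float) -> float:
    """Orbital period in days for a small body at semi-major axis a_au (circular orbit)."""
    a = a_au * AU
    return 2 * math.pi * math.sqrt(a ** 3 / (G * m_star_solar * M_SUN)) / DAY

if __name__ == "__main__":
    print(f"P ~ {orbital_period_days(0.21847, 0.31):.1f} days")  # ~67 days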
Physical characteristics
The motion of the parent star indicates a minimum mass for Gliese 581d of 5.6 Earth masses (earlier analyses gave higher values). Dynamical simulations of the Gliese 581 system assuming that the orbits of the three planets are coplanar show that the system becomes unstable if the masses of the planets exceed 1.6–2 times the minimum values. Using earlier minimum mass values for Gliese 581d, this implies an upper mass limit for Gliese 581d of 13.8 Earth masses. The composition of the planet, however, is not known.
Climate and habitability
As the planet is not known to transit from Earth and atmospheric conditions are not observable with current technology, no atmosphere for the planet has been confirmed to date. As such, all climate predictions for the planet are based on predicted orbits and computer modelling of theoretical atmospheric conditions.
Because Gliese 581d was believed to orbit outside the habitable zone of its star it was originally thought to be too cold for liquid water to be present. With the 2009 revised orbit, climate simulations conducted by researchers in France in 2011 indicated possible temperatures suitable for surface water at sufficient atmospheric pressure. According to Stéphane Udry, "It could be covered by a 'large and deep ocean'; it is the first serious ocean planet candidate."
On average, the light that Gliese 581d receives from its star has about 30% of the intensity of light the Earth receives from the Sun. By comparison, sunlight on Mars has about 40% of the intensity of that on Earth. That might seem to suggest that Gliese 581d is too cold to support liquid water and hence is inhospitable to life. However, an atmospheric greenhouse effect can significantly raise planetary temperatures. For example, Earth's own mean temperature would be about −18 °C without any greenhouse gases, ranging from around 100 °C on the day side to −150 °C at night, much like that found on the Moon. If the atmosphere of Gliese 581d produces a sufficiently large greenhouse effect, and the planet's geophysics stabilize the CO2 levels (as Earth's does via plate tectonics), then the surface temperature might permit a liquid water cycle, conceivably allowing the planet to support life. Calculations by Barnes et al. suggest, however, that tidal heating is too low to keep plate tectonics active on the planet, unless radiogenic heating is somewhat higher than expected.
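A minimal sketch of the underlying equilibrium-temperature estimate, T_eq = (S(1 − A)/(4σ))^(1/4), which ignores any greenhouse effect; the solar-constant and albedo values are illustrative assumptions, while the 30% insolation figure for Gliese 581d is the one quoted above.

# Equilibrium-temperature sketch: T_eq = (S * (1 - A) / (4 * sigma))**0.25, ignoring
# any greenhouse effect.  Earth's solar constant (~1361 W/m^2) and the albedo of 0.3
# are assumed illustrative values; the 30% insolation figure for Gliese 581d follows
# the text above.

SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
S_EARTH = 1361.0       # Earth's solar constant, W/m^2 (assumed)

def equilibrium_temp_k(insolation_w_m2: float, bond_albedo: float) -> float:
    """Blackbody equilibrium temperature (K) of a rapidly rotating planet."""
    return (insolation_w_m2 * (1.0 - bond_albedo) / (4.0 * SIGMA)) ** 0.25

if __name__ == "__main__":
    t_earth = equilibrium_temp_k(S_EARTH, 0.3)
    t_gl581d = equilibrium_temp_k(0.30 * S_EARTH, 0.3)
    print(f"Earth (no greenhouse):       {t_earth:.0f} K ({t_earth - 273.15:.0f} C)")   # ~255 K, about -18 C
    print(f"Gliese 581d (no greenhouse): {t_gl581d:.0f} K ({t_gl581d - 273.15:.0f} C)") # ~188 K

The Earth figure reproduces the roughly −18 °C greenhouse-free value mentioned above; the Gliese 581d figure shows why a substantial greenhouse effect would be needed for liquid surface water.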
Gliese 581d is probably too massive to be made only of rocky material. It may have originally formed on a more distant orbit as an icy planet that then migrated closer to its star.
If Gliese 581d exists, it would be the first super-Earth identified to be located in a habitable zone outside of the Solar System, according to work published in 2007.
Hello from Earth
As part of the 2009 National Science Week celebrations in Australia, Cosmos magazine launched a website called "Hello from Earth" to collect messages for transmission to Gliese 581d. The maximum length of the messages was 160 characters, and they were restricted to the English language. In total, 25,880 messages were collected from 195 countries around the world. The messages were transmitted from the DSS-43 70 m radio telescope at the Canberra Deep Space Communication Complex at Tidbinbilla, Australia, on 28 August 2009.
In popular culture
Gliese 581d is the setting for the Doctor Who episode "Smile".
It is also shown in Episode 3, The Story of Everything, of Into the Universe with Stephen Hawking, and in Episodes 3 and 8 of Season 2 of How the Universe Works.
Super-Earth
A Super-Earth or super-terran or super-tellurian is a type of exoplanet with a mass higher than Earth's, but substantially below those of the Solar System's ice giants, Uranus and Neptune, which are 14.5 and 17.1 times Earth's, respectively. The term "super-Earth" refers only to the mass of the planet, and so does not imply anything about the surface conditions or habitability. The alternative term "gas dwarfs" may be more accurate for those at the higher end of the mass scale, although "mini-Neptunes" is a more common term.
Definition
In general, super-Earths are defined by their masses. The term does not imply temperatures, compositions, orbital properties, habitability, or environments. While sources generally agree on an upper bound of 10 Earth masses (~69% of the mass of Uranus, which is the Solar System's giant planet with the least mass), the lower bound varies from 1 or 1.9 to 5, with various other definitions appearing in the popular media. The term "super-Earth" is also used by astronomers to refer to planets bigger than Earth-like planets (from 0.8 to 1.2 Earth-radius), but smaller than mini-Neptunes (from 2 to 4 Earth-radii).
This definition was made by the Kepler space telescope personnel.
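A small sketch of how such a radius-based classification might be applied; the exact cutoffs and the handling of the boundaries are illustrative assumptions, since the quoted definitions differ between sources.

# Sketch of the radius-based size classes described above.  Boundary handling
# (inclusive/exclusive) and the treatment of the gap between about 1.2 and 2 Earth
# radii vary between sources, so the cutoffs used here are illustrative assumptions.

def size_class(radius_earth: float) -> str:
    """Classify a planet by radius in Earth radii."""
    if radius_earth < 0.8:
        return "sub-Earth"
    if radius_earth <= 1.2:
        return "Earth-like"
    if radius_earth < 2.0:
        return "super-Earth"
    if radius_earth <= 4.0:
        return "mini-Neptune"
    return "giant"

if __name__ == "__main__":
    for r in (1.0, 1.6, 2.4, 11.2):
        print(f"{r} Earth radii -> {size_class(r)}")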
Some authors further suggest that the term Super-Earth might be limited to rocky planets without a significant atmosphere, or planets that have not just atmospheres but also solid surfaces or oceans with a sharp boundary between liquid and atmosphere, which the four giant planets in the Solar System do not have.
Planets above 10 Earth masses are termed massive solid planets, mega-Earths, or gas giant planets, depending on whether they are mostly made of rock and ice or mostly gas.
History and discoveries
First
The first super-Earths were discovered by Aleksander Wolszczan and Dale Frail around the pulsar PSR B1257+12 in 1992. The two outer planets (Poltergeist and Phobetor) of the system have masses approximately four times Earth—too small to be gas giants.
The first super-Earth around a main-sequence star was discovered by a team under Eugenio Rivera in 2005. It orbits Gliese 876 and received the designation Gliese 876 d (two Jupiter-sized gas giants had previously been discovered in that system). It has an estimated mass of 7.5 Earth masses and a very short orbital period of about 2 days. Due to the proximity of Gliese 876 d to its host star (a red dwarf), it may have a surface temperature of 430–650 kelvin and be too hot to support liquid water.
First in habitable zone
In April 2007, a team headed by Stéphane Udry based in Switzerland announced the discovery of two new super-Earths within the Gliese 581 planetary system, both on the edge of the habitable zone around the star where liquid water may be possible on the surface. With Gliese 581c having a mass of at least 5 Earth masses and a distance from Gliese 581 of 0.073 astronomical units (6.8 million mi, 11 million km), it is on the "warm" edge of the habitable zone around Gliese 581 with an estimated mean temperature (without considering effects from an atmosphere) of −3 degrees Celsius with an albedo comparable to Venus and 40 degrees Celsius with an albedo comparable to Earth. Subsequent research suggested Gliese 581c had likely suffered a runaway greenhouse effect like Venus.
Others by year
2006
Two further possible super-Earths were discovered in 2006: OGLE-2005-BLG-390Lb with a mass of 5.5 Earth masses, which was found by gravitational microlensing, and HD 69830 b with a mass of 10 Earth masses.
2008
The smallest super-Earth found as of 2008 was MOA-2007-BLG-192Lb. The planet was announced by astrophysicist David P. Bennett for the international MOA collaboration on June 2, 2008. This planet has approximately 3.3 Earth masses and orbits a brown dwarf. It was detected by gravitational microlensing.
In June 2008, European researchers announced the discovery of three super-Earths around the star HD 40307, a star only slightly less massive than the Sun. The planets have minimum masses of 4.2, 6.7, and 9.4 times Earth's. They were detected by the radial velocity method with the HARPS (High Accuracy Radial Velocity Planet Searcher) spectrograph in Chile.
In addition, the same European research team announced a planet 7.5 times the mass of Earth orbiting the star HD 181433. This star also has a Jupiter-like planet that orbits it every three years.
2009
Planet COROT-7b, with a mass estimated at 4.8 Earth masses and an orbital period of only 0.853 days, was announced on 3 February 2009. The density estimate obtained for COROT-7b points to a composition including rocky silicate minerals similar to that of the Solar System's four inner planets, a new and significant discovery. COROT-7b, discovered right after HD 7924 b, is the first super-Earth discovered that orbits a main sequence star that is G class or larger.
The discovery of Gliese 581e with a minimum mass of 1.9 Earth masses was announced on 21 April 2009. It was at the time the smallest extrasolar planet discovered around a normal star and the closest in mass to Earth. Being at an orbital distance of just 0.03 AU and orbiting its star in just 3.15 days, it is not in the habitable zone, and may have 100 times more tidal heating than Jupiter's volcanic satellite Io.
A planet found in December 2009, GJ 1214 b, is 2.7 times as large as Earth and orbits a star much smaller and less luminous than the Sun. "This planet probably does have liquid water," said David Charbonneau, a Harvard professor of astronomy and lead author of an article on the discovery. However, interior models of this planet suggest that under most conditions it does not have liquid water.
By November 2009, a total of 30 super-Earths had been discovered, 24 of which were first observed by HARPS.
2010
Discovered on 5 January 2010, the planet HD 156668 b, with a minimum mass of 4.15 Earth masses, is among the least massive planets detected by the radial velocity method; the only confirmed radial velocity planet smaller than it is Gliese 581e at 1.9 Earth masses (see above). On 24 August, astronomers using ESO's HARPS instrument announced the discovery of a planetary system with up to seven planets orbiting a Sun-like star, HD 10180, one of which, although not yet confirmed, has an estimated minimum mass of 1.35 ± 0.23 times that of Earth, which would be the lowest mass of any exoplanet found to date orbiting a main-sequence star. Although unconfirmed, there is a 98.6% probability that this planet does exist.
The National Science Foundation announced on 29 September the discovery of a fourth super-Earth (Gliese 581g) orbiting within the Gliese 581 planetary system. The planet has a minimum mass 3.1 times that of Earth and a nearly circular orbit at 0.146 AU with a period of 36.6 days, placing it in the middle of the habitable zone where liquid water could exist and midway between the planets c and d. It was discovered using the radial velocity method by scientists at the University of California at Santa Cruz and the Carnegie Institution of Washington. However, the existence of Gliese 581 g has been questioned by another team of astronomers, and it is currently listed as unconfirmed at The Extrasolar Planets Encyclopaedia.
2011
On 2 February, the Kepler Space Observatory mission team released a list of 1235 extrasolar planet candidates, including 68 candidates of approximately "Earth-size" (Rp < 1.25 Re) and 288 candidates of "super-Earth-size" (1.25 Re < Rp < 2 Re). In addition, 54 planet candidates were detected in the "habitable zone." Six candidates in this zone were less than twice the size of the Earth [namely: KOI 326.01 (Rp=0.85), KOI 701.03 (Rp=1.73), KOI 268.01 (Rp=1.75), KOI 1026.01 (Rp=1.77), KOI 854.01 (Rp=1.91), KOI 70.03 (Rp=1.96) – Table 6] A more recent study found that one of these candidates (KOI 326.01) is in fact much larger and hotter than first reported. Based on the latest Kepler findings, astronomer Seth Shostak estimates "within a thousand light-years of Earth" there are "at least 30,000 of these habitable worlds." Also based on the findings, the Kepler Team has estimated "at least 50 billion planets in the Milky Way" of which "at least 500 million" are in the habitable zone.
On 17 August, a potentially habitable super-Earth, HD 85512 b, was found using HARPS, along with a three-super-Earth system around 82 G. Eridani. HD 85512 b would be habitable if it exhibited more than 50% cloud cover. Less than a month later, a flood of 41 new exoplanets, including 10 super-Earths, was announced.
On 5 December 2011, the Kepler space telescope discovered its first planet within the habitable zone or "Goldilocks region" of its Sun-like star. Kepler-22b is 2.4 times the radius of the Earth and occupies an orbit 15% closer to its star than the Earth to the Sun. This is compensated for, however, as the star, with a spectral type G5V, is slightly dimmer than the Sun (G2V). Thus, surface temperatures would still allow liquid water on its surface.
On 5 December 2011, the Kepler team announced that they had discovered 2,326 planetary candidates, of which 207 are similar in size to Earth, 680 are super-Earth-size, 1,181 are Neptune-size, 203 are Jupiter-size and 55 are larger than Jupiter. Compared to the February 2011 figures, the number of Earth-size and super-Earth-size planets increased by 200% and 140% respectively. Moreover, 48 planet candidates were found in the habitable zones of surveyed stars, marking a decrease from the February figure; this was due to the more stringent criteria in use in the December data.
In 2011, the density of 55 Cancri e was calculated and found to be similar to Earth's. At about 2 Earth radii, it was, until 2014, the largest known planet determined to lack a significant hydrogen atmosphere.
On 20 December 2011, the Kepler team announced the discovery of the first Earth-size exoplanets, Kepler-20e and Kepler-20f, orbiting a Sun-like star, Kepler-20.
Planet Gliese 667 Cb (GJ 667 Cb) was announced by HARPS on 19 October 2009, together with 29 other planets, while Gliese 667 Cc (GJ 667 Cc) was included in a paper published on 21 November 2011. More detailed data on Gliese 667 Cc were published in early February 2012.
2012
In September 2012, the discovery of two planets orbiting Gliese 163 was announced. One of the planets, Gliese 163 c, about 6.9 times the mass of Earth and somewhat hotter, was considered to be within the habitable zone.
2013
On 7 January 2013, astronomers from the Kepler space observatory announced the discovery of Kepler-69c (formerly KOI-172.02), an Earth-like exoplanet candidate (1.5 times the radius of Earth) orbiting a star similar to the Sun in the habitable zone and possibly a "prime candidate to host alien life".
In April 2013, a team led by William Borucki of NASA's Ames Research Center, using observations from the Kepler mission, found five planets orbiting in the habitable zone of a Sun-like star, Kepler-62, 1,200 light-years from Earth. These new super-Earths have radii of 1.3, 1.4, 1.6, and 1.9 times that of Earth. Theoretical modelling of two of these super-Earths, Kepler-62e and Kepler-62f, suggests both could be solid, either rocky or rocky with frozen water.
On 25 June 2013, the European Southern Observatory announced that three "super-Earth" planets had been found orbiting a nearby star at a distance where life could, in theory, exist. They are part of a cluster of as many as seven planets that circle Gliese 667C, one of three stars located a relatively close 22 light-years from Earth in the constellation Scorpius. The planets orbit Gliese 667C in the so-called Goldilocks zone, a distance from the star at which the temperature is just right for water to exist in liquid form rather than being stripped away by stellar radiation or locked permanently in ice.
2014
In May 2014, the previously discovered Kepler-10c was determined to have a mass comparable to Neptune's (17 Earth masses). With a radius of 2.35 Earth radii, it was then the largest known planet likely to have a predominantly rocky composition. At 17 Earth masses it is well above the 10-Earth-mass upper limit that is commonly used for the term 'super-Earth', so the term mega-Earth has been proposed. However, in July 2017, a more careful analysis of HARPS-N and HIRES data showed that Kepler-10c is much less massive than originally thought, at around 7.37 Earth masses (6.18 to 8.69), with a mean density of 3.14 g/cm3. Instead of a primarily rocky composition, the more accurately determined mass of Kepler-10c suggests a world made almost entirely of volatiles, mainly water.
2015
On 6 January 2015, NASA announced the 1000th confirmed exoplanet discovered by the Kepler space telescope. Three of the newly confirmed exoplanets were found to orbit within habitable zones of their related stars: two of the three, Kepler-438b and Kepler-442b, are near-Earth-size and likely rocky; the third, Kepler-440b, is a super-Earth.
On 30 July 2015, a study in Astronomy & Astrophysics reported a planetary system with three super-Earths orbiting a bright dwarf star. The four-planet system, dubbed HD 219134, lies 21 light-years from Earth in the M-shaped northern constellation Cassiopeia, but it is not in the habitable zone of its star. The planet with the shortest orbit is HD 219134 b, which is Earth's closest known rocky, transiting exoplanet.
2016
In February 2016, it was announced that NASA Hubble Space Telescope had detected hydrogen and helium (and suggestions of hydrogen cyanide), but no water vapor, in the atmosphere of 55 Cancri e, the first time the atmosphere of a super-Earth exoplanet was analyzed successfully.
In August 2016, astronomers announced the detection of Proxima b, an Earth-sized exoplanet that is in the habitable zone of the red dwarf star Proxima Centauri, the closest star to the Sun. Due to its closeness to Earth, Proxima b may be a flyby destination for a fleet of interstellar StarChip spacecraft currently being developed by the Breakthrough Starshot project.
2018
In February 2018, K2-141b, a rocky ultra-short-period (USP) super-Earth with a period of 0.28 days orbiting the host star K2-141 (EPIC 246393474), was reported. Another super-Earth, K2-155d, was also discovered.
In July 2018, the discovery of 40 Eridani b was announced. At 16 light-years it is the closest super-Earth known, and its star is the second-brightest hosting a super-Earth.
2019
In July 2019, the discovery of GJ 357 d was announced. Thirty-one light-years from the Solar System, the planet is at least 6.1 Earth masses.
2021
In 2021, the exoplanet G 9-40 b was discovered.
2022
In 2022, the discovery of a super-Earth around the red dwarf star Ross 508 was reported. Part of the planet's elliptical orbit takes it within the habitable zone.
2024
On 31 January 2024 NASA reported the discovery of a super-Earth called TOI-715 b located in the habitable zone of a red dwarf star about 137 light-years away.
In the Solar System
The Solar System contains no known super-Earths, because Earth is the largest terrestrial planet in the Solar System, and all larger planets have both at least 14 times the mass of Earth and thick gaseous envelopes without well-defined rocky or watery surfaces; that is, they are either gas giants or ice giants, not terrestrial planets. In January 2016, the existence of a hypothetical super-Earth ninth planet in the Solar System, referred to as Planet Nine, was proposed as an explanation for the orbital behavior of six trans-Neptunian objects, but it is speculated to also be an ice giant like Uranus or Neptune. A refined model in 2019 constrains it to around five Earth masses; planets of this mass are probably mini-Neptunes.
The fact that there are barely any asteroids or planetesimals inside the orbit of Mercury has led some astronomers to believe that a super-Earth might have formed close to the Sun, cleared its neighborhood, and then been rapidly disrupted by the Sun.
Characteristics
Density and bulk composition
Due to their larger mass, the physical characteristics of super-Earths may differ from Earth's. Theoretical models provide four possible main compositions according to their density: low-density super-Earths are inferred to be composed mainly of hydrogen and helium (mini-Neptunes); super-Earths of intermediate density are inferred either to have water as a major constituent (ocean planets), or to have a denser core enshrouded in an extended gaseous envelope (gas dwarf or sub-Neptune). A high-density super-Earth is believed to be rocky and/or metallic, like Earth and the other terrestrial planets of the Solar System. A super-Earth's interior could be undifferentiated, partially differentiated, or completely differentiated into layers of different composition. Researchers at the Harvard Astronomy Department have developed user-friendly online tools to characterize the bulk composition of super-Earths. A study of Gliese 876 d by a team led by Diana Valencia showed that a planet's structural composition can be inferred from its mass together with a radius measured by the transit method of detecting planets. For Gliese 876 d, the calculated radius ranges from 9,200 km (1.4 Earth radii) for a rocky planet with a very large iron core to 12,500 km (2.0 Earth radii) for a water- and ice-rich planet. Within this range of radii, the super-Earth Gliese 876 d would have a surface gravity between 1.9g and 3.3g (19 and 32 m/s2). However, this planet is not known to transit its host star.
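The surface-gravity range quoted above follows from Newton's law, g = GM/R². The sketch below (not taken from the cited study) evaluates it at the two end-member radii; the adopted mass of about 6.8 Earth masses, the planet's published minimum mass, is an assumption, and the 1.9g–3.3g range above corresponds to the slightly different mass used in the original analysis.

```python
# Minimal sketch: surface gravity g = G*M / R^2 at the two end-member radii
# quoted for Gliese 876 d. The mass of ~6.8 Earth masses is an assumption
# (the planet's published minimum mass), so results are approximate.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # kg
G_EARTH = 9.81       # m/s^2, Earth's surface gravity

def surface_gravity(mass_earths, radius_km):
    """Surface gravity in m/s^2 for a planet of given mass (Earth masses) and radius (km)."""
    return G * mass_earths * M_EARTH / (radius_km * 1e3) ** 2

for radius_km, label in [(9200, "rocky with large iron core"),
                         (12500, "water- and ice-rich")]:
    g = surface_gravity(6.8, radius_km)
    print(f"{label}: R = {radius_km} km -> g = {g:.1f} m/s^2 ({g / G_EARTH:.1f} g)")
```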
The limit between rocky planets and planets with a thick gaseous envelope is calculated with theoretical models. Calculating the effect of the active XUV saturation phase of G-type stars on the loss of primitive nebula-captured hydrogen envelopes, these models find that planets with a core mass of more than 1.5 Earth masses (at most 1.15 Earth radii) most likely cannot shed their nebula-captured hydrogen envelopes during their whole lifetime. Other calculations put the limit between envelope-free rocky super-Earths and sub-Neptunes at around 1.75 Earth radii, with 2 Earth radii as the upper limit to be rocky (a planet with 2 Earth radii and 5 Earth masses with a mean Earth-like core composition would imply that 1/200 of its mass is in an H/He envelope, with an atmospheric pressure near to ). Whether or not the primitive nebula-captured H/He envelope of a super-Earth is entirely lost after formation also depends on the orbital distance. For example, formation and evolution calculations for the Kepler-11 planetary system show that the two innermost planets, Kepler-11b and c, whose calculated masses are ≈2 M🜨 and between ≈5 and 6 M🜨 respectively (which are within measurement errors), are extremely vulnerable to envelope loss. In particular, the complete removal of the primordial H/He envelope by energetic stellar photons appears almost inevitable in the case of Kepler-11b, regardless of its formation hypothesis.
If a super-Earth is detectable by both the radial-velocity and the transit methods, then both its mass and its radius can be determined, and thus its average bulk density can be calculated. Empirical observations give results similar to those of theoretical models: planets larger than approximately 1.6 Earth radii (more massive than approximately 6 Earth masses) are found to contain significant fractions of volatiles or H/He gas (such planets appear to have a diversity of compositions that is not well explained by the single mass-radius relation found for rocky planets). After measuring 65 super-Earths smaller than 4 Earth radii, the empirical data indicate that gas dwarfs are the most common composition: there is a trend in which planets with radii up to 1.5 Earth radii increase in density with increasing radius, but above 1.5 Earth radii the average planet density rapidly decreases with increasing radius, indicating that these planets have a large fraction of volatiles by volume overlying a rocky core. Another finding about exoplanet composition concerns the gap, or rarity, observed for planets between 1.5 and 2.0 Earth radii, which is explained by a bimodal formation of planets (rocky super-Earths below about 1.75 Earth radii and sub-Neptunes with thick gas envelopes above that radius).
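The bulk-density step itself is simple arithmetic, as the minimal sketch below shows; the example mass and radius values are purely illustrative, not measurements of any particular planet.

```python
# Minimal sketch: mean density from a radial-velocity mass and a transit radius.
import math

M_EARTH_G = 5.972e27   # grams
R_EARTH_CM = 6.371e8   # centimetres

def bulk_density(mass_earths, radius_earths):
    """Mean density in g/cm^3 for a planet of given mass and radius in Earth units."""
    mass = mass_earths * M_EARTH_G
    volume = 4.0 / 3.0 * math.pi * (radius_earths * R_EARTH_CM) ** 3
    return mass / volume

print(bulk_density(6.0, 1.6))   # ~8.1 g/cm^3: denser than Earth, consistent with rock
print(bulk_density(6.0, 2.5))   # ~2.1 g/cm^3: implies a large volatile fraction
```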
Additional studies, conducted with lasers at the Lawrence Livermore National Laboratory and the OMEGA laboratory at the University of Rochester, show that the magnesium-silicate interior of a super-Earth would undergo phase changes under its immense pressures and temperatures, and that the different phases of liquid magnesium silicate would separate into layers.
Geologic activity
Further theoretical work by Valencia and others suggests that super-Earths would be more geologically active than Earth, with more vigorous plate tectonics due to thinner plates under more stress. In fact, their models suggested that Earth was itself a "borderline" case, just barely large enough to sustain plate tectonics. These findings were corroborated by van Heck et al., who determined that plate tectonics may be more likely on super-Earths than on Earth itself, assuming similar composition. However, other studies determined that strong convection currents in the mantle acting on strong gravity would make the crust stronger and thus inhibit plate tectonics. The planet's surface would be too strong for the forces of magma to break the crust into plates.
Evolution
New research suggests that the rocky centres of super-Earths are unlikely to evolve into terrestrial rocky planets like the inner planets of the Solar System because they appear to hold on to their large atmospheres. Rather than evolving into a planet composed mainly of rock with a thin atmosphere, the small rocky core remains engulfed by its large hydrogen-rich envelope.
Theoretical models show that hot Jupiters and hot Neptunes can evolve by hydrodynamic loss of their atmospheres into mini-Neptunes (as may be the case for the super-Earth GJ 1214 b), or even into rocky planets known as chthonian planets (after migrating towards the proximity of their parent star). The amount of the outermost layers that is lost depends on the size and material of the planet and its distance from the star. In a typical system, a gas giant orbiting 0.02 AU from its parent star loses 5–7% of its mass during its lifetime, but orbiting closer than 0.015 AU can mean evaporation of the whole planet except for its core.
The low densities inferred from observations imply that a fraction of the super-Earth population has substantial H/He envelopes, which may have been even more massive soon after formation. Therefore, unlike the terrestrial planets of the Solar System, these super-Earths must have formed during the gas phase of their progenitor protoplanetary disk.
Temperatures
Since the atmospheres, albedos and greenhouse effects of super-Earths are unknown, their surface temperatures are unknown and generally only an equilibrium temperature is given. For example, the black-body temperature of the Earth is 255.3 K (−18 °C or 0 °F); it is the greenhouse gases that keep the Earth warmer. Venus has a black-body temperature of only 184.2 K (−89 °C or −128 °F) even though its true surface temperature is 737 K (464 °C or 867 °F). Though the atmosphere of Venus traps more heat than Earth's, NASA lists this black-body temperature for Venus because its extremely high albedo (Bond albedo 0.90, visual geometric albedo 0.67) gives it a lower black-body temperature than the more absorbent (lower-albedo) Earth.
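The quoted figures follow from the standard equilibrium-temperature balance, T_eq = T_star √(R_star / 2a) (1 − A)^1/4, assuming full heat redistribution and no greenhouse effect. The sketch below uses standard Sun, Earth, and Venus parameters; small differences from the quoted 255.3 K and 184.2 K come from the adopted Bond albedos.

```python
# Minimal sketch of the equilibrium (black-body) temperature:
#   T_eq = T_star * sqrt(R_star / (2*a)) * (1 - A)^(1/4)
# assuming full heat redistribution and no greenhouse effect.

T_SUN = 5772.0    # K, solar effective temperature
R_SUN = 6.957e8   # m, solar radius
AU = 1.496e11     # m

def equilibrium_temperature(t_star, r_star, a, bond_albedo):
    """Planetary equilibrium temperature in kelvin."""
    return t_star * (r_star / (2.0 * a)) ** 0.5 * (1.0 - bond_albedo) ** 0.25

print(equilibrium_temperature(T_SUN, R_SUN, 1.000 * AU, 0.306))  # Earth: ~254 K
print(equilibrium_temperature(T_SUN, R_SUN, 0.723 * AU, 0.90))   # Venus: ~184 K
```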
Magnetic field
Earth's magnetic field results from its flowing liquid metallic core, but in super-Earths the mass can produce high pressures with large viscosities and high melting temperatures, which could prevent the interiors from separating into different layers and so result in undifferentiated coreless mantles. Magnesium oxide, which is rocky on Earth, can be a liquid metal at the pressures and temperatures found in super-Earths and could generate a magnetic field in the mantles of super-Earths. That said, super-Earth magnetic fields are yet to be detected observationally.
Habitability
According to one hypothesis, super-Earths of about two Earth masses may be conducive to life. The higher surface gravity would lead to a thicker atmosphere, increased surface erosion and hence a flatter topography. The result could be an "archipelago planet" of shallow oceans dotted with island chains ideally suited for biodiversity. A more massive planet of two Earth masses would also retain more heat within its interior from its initial formation much longer, sustaining plate tectonics (which is vital for regulating the carbon cycle and hence the climate) for longer. The thicker atmosphere and stronger magnetic field would also shield life on the surface against harmful cosmic rays.