In particular, when |G| = p² for a prime p, then G is an abelian group, since any non-trivial group element is of order p or p². If some element a of G is of order p², then G is isomorphic to the cyclic group of order p², hence abelian. On the other hand, if every non-trivial element in G is of order p, then by the conclusion above |Z(G)| > 1 (from the class equation, p divides |Z(G)|), so |Z(G)| = p or p². We only need to consider the case when |Z(G)| = p: then there is an element b of G which is not in the center of G. Note that the centralizer C_G(b) contains b and the center Z(G), which does not contain b but has at least p elements. Hence the order of C_G(b) is strictly larger than p, so |C_G(b)| = p², so b is an element of the center of G, a contradiction. Hence G is abelian and in fact isomorphic to the direct product of two cyclic groups each of order p.
Conjugacy of subgroups and general subsets
More generally, given any subset S of G (S not necessarily a subgroup), define a subset T of G to be conjugate to S if there exists some g in G such that T = gSg⁻¹. Let Cl(S) be the set of all subsets T of G such that T is conjugate to S.
A frequently used theorem is that, given any subset S of G, the index of N(S) (the normalizer of S) in G equals the cardinality of Cl(S): |Cl(S)| = [G : N(S)].
This follows since, if g and h are in G, then gSg⁻¹ = hSh⁻¹ if and only if g⁻¹h is in N(S); in other words, if and only if g and h are in the same coset of N(S).
By using S = {a}, this formula generalizes the one given earlier for the number of elements in a conjugacy class.
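The counting formula can be checked directly on a small example. The following minimal Python sketch, added here purely as an illustration, models the symmetric group S3 as tuples and verifies |Cl(S)| = [G : N(S)] for the two-element subgroup S generated by a transposition:

```python
from itertools import permutations

# Model S3 as all permutations of (0, 1, 2); a permutation g sends i to g[i].
G = list(permutations(range(3)))

def compose(g, h):
    # (g h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))

def inverse(g):
    inv = [0] * len(g)
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

def conjugate_set(g, S):
    # g S g^-1 = { g s g^-1 : s in S }
    g_inv = inverse(g)
    return frozenset(compose(compose(g, s), g_inv) for s in S)

# S = {identity, transposition swapping 0 and 1}, a subgroup of order 2.
S = frozenset({(0, 1, 2), (1, 0, 2)})

Cl = {conjugate_set(g, S) for g in G}           # all conjugates of S
N = [g for g in G if conjugate_set(g, S) == S]  # the normalizer N(S)

# |Cl(S)| equals the index [G : N(S)] = |G| / |N(S)|.
print(len(Cl), len(G) // len(N))  # prints: 3 3
```

Here S has exactly three conjugates in S3 (one for each transposition), matching the index 6/2 = 3 of its normalizer.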
The above is particularly useful when talking about subgroups of G. The subgroups can thus be divided into conjugacy classes, with two subgroups belonging to the same class if and only if they are conjugate.
Conjugate subgroups are isomorphic, but isomorphic subgroups need not be conjugate. For example, an abelian group may have two different subgroups which are isomorphic, but they are never conjugate.
Geometric interpretation
Conjugacy classes in the fundamental group of a path-connected topological space can be thought of as equivalence classes of free loops under free homotopy.
Conjugacy classes and irreducible representations in finite groups
In any finite group, the number of nonisomorphic irreducible representations over the complex numbers is precisely the number of conjugacy classes.
Antiviral drugs are a class of medication used for treating viral infections. Most antivirals target specific viruses, while a broad-spectrum antiviral is effective against a wide range of viruses. Antiviral drugs are a class of antimicrobials, a larger group which also includes antibiotics (also termed antibacterials), antifungal and antiparasitic drugs, as well as antiviral drugs based on monoclonal antibodies. Most antivirals are considered relatively harmless to the host, and therefore can be used to treat infections. They should be distinguished from virucides, which are not medication but deactivate or destroy virus particles, either inside or outside the body. Natural virucides are produced by some plants such as eucalyptus and Australian tea trees.
Medical uses
Most of the antiviral drugs now available are designed to help deal with HIV, herpes viruses, the hepatitis B and C viruses, and influenza A and B viruses.
Viruses use the host's cells to replicate and this makes it difficult to find targets for the drug that would interfere with the virus without also harming the host organism's cells. Moreover, the major difficulty in developing vaccines and antiviral drugs is due to viral variation.
The emergence of antivirals is the product of a greatly expanded knowledge of the genetic and molecular function of organisms, allowing biomedical researchers to understand the structure and function of viruses, major advances in the techniques for finding new drugs, and the pressure placed on the medical profession to deal with the human immunodeficiency virus (HIV), the cause of acquired immunodeficiency syndrome (AIDS).
The first experimental antivirals were developed in the 1960s, mostly to deal with herpes viruses, and were found using traditional trial-and-error drug discovery methods. Researchers grew cultures of cells and infected them with the target virus. They then introduced into the cultures chemicals which they thought might inhibit viral activity and observed whether the level of virus in the cultures rose or fell. Chemicals that seemed to have an effect were selected for closer study.
This was a very time-consuming, hit-or-miss procedure, and in the absence of a good knowledge of how the target virus worked, it was not efficient in discovering effective antivirals which had few side effects. Only in the 1980s, when the full genetic sequences of viruses began to be unraveled, did researchers begin to learn how viruses worked in detail, and exactly what chemicals were needed to thwart their reproductive cycle.
Antiviral drug design
Antiviral targeting
The general idea behind modern antiviral drug design is to identify viral proteins, or parts of proteins, that can be disabled. These "targets" should generally be as unlike any proteins or parts of proteins in humans as possible, to reduce the likelihood of side effects and toxicity. The targets should also be common across many strains of a virus, or even among different species of virus in the same family, so a single drug will have broad effectiveness. For example, a researcher might target a critical enzyme synthesized by the virus, but not by the patient, that is common across strains, and see what can be done to interfere with its operation.
Once targets are identified, candidate drugs can be selected, either from drugs already known to have appropriate effects or by actually designing the candidate at the molecular level with a computer-aided design program.
The target proteins can be manufactured in the lab for testing with candidate treatments by inserting the gene that synthesizes the target protein into bacteria or other kinds of cells. The cells are then cultured for mass production of the protein, which can then be exposed to various treatment candidates and evaluated with "rapid screening" technologies.
Approaches by virus life cycle stage
Viruses consist of a genome and sometimes a few enzymes stored in a capsule made of protein (called a capsid), and sometimes covered with a lipid layer (sometimes called an 'envelope'). Viruses cannot reproduce on their own and instead propagate by subjugating a host cell to produce copies of themselves, thus producing the next generation.
Researchers working on such "rational drug design" strategies for developing antivirals have tried to attack viruses at every stage of their life cycles. Some species of mushrooms have been found to contain multiple antiviral chemicals with similar synergistic effects.
Compounds isolated from fruiting bodies and filtrates of various mushrooms have broad-spectrum antiviral activities, but successful production and availability of such compounds as frontline antivirals is a long way away.
Viral life cycles vary in their precise details depending on the type of virus, but they all share a general pattern:
Attachment to a host cell.
Release of viral genes and possibly enzymes into the host cell.
Replication of viral components using host-cell machinery.
Assembly of viral components into complete viral particles.
Release of viral particles to infect new host cells.
Before cell entry
One antiviral strategy is to interfere with the ability of a virus to infiltrate a target cell. The virus must go through a sequence of steps to do this, beginning with binding to a specific "receptor" molecule on the surface of the host cell and ending with the virus "uncoating" inside the cell and releasing its contents. Viruses that have a lipid envelope must also fuse their envelope with the target cell, or with a vesicle that transports them into the cell before they can uncoat.
This stage of viral replication can be inhibited in two ways:
Using agents which mimic the virus-associated protein (VAP) and bind to the cellular receptors. This may include VAP anti-idiotypic antibodies, natural ligands of the receptor and anti-receptor antibodies.
Using agents which mimic the cellular receptor and bind to the VAP. This includes anti-VAP antibodies, receptor anti-idiotypic antibodies, extraneous receptor and synthetic receptor mimics.
This strategy of designing drugs can be very expensive, and since the process of generating anti-idiotypic antibodies is partly trial and error, it can be a relatively slow process until an adequate molecule is produced.
Entry inhibitor
A very early stage of viral infection is viral entry, when the virus attaches to and enters the host cell. A number of "entry-inhibiting" or "entry-blocking" drugs are being developed to fight HIV. HIV most heavily targets a specific type of lymphocyte known as "helper T cells", and identifies these target cells through T-cell surface receptors designated "CD4" and "CCR5". Attempts to interfere with the binding of HIV with the CD4 receptor have failed to stop HIV from infecting helper T cells, but research continues on trying to interfere with the binding of HIV to the CCR5 receptor in hopes that it will be more effective.
HIV infects a cell through fusion with the cell membrane, which requires two different cellular molecular participants, CD4 and a chemokine receptor (differing depending on the cell type). Approaches to blocking this virus/cell fusion have shown some promise in preventing entry of the virus into a cell. At least one of these entry inhibitors, a biomimetic peptide called enfuvirtide (brand name Fuzeon), has received FDA approval and has been in use for some time. One of the benefits of an effective entry-blocking or entry-inhibiting agent is that it potentially may not only prevent the spread of the virus within an infected individual but also the spread from an infected to an uninfected individual.
One possible advantage of the therapeutic approach of blocking viral entry (as opposed to the currently dominant approach of viral enzyme inhibition) is that it may prove more difficult for the virus to develop resistance to this therapy than for the virus to mutate or evolve its enzymatic protocols.
Uncoating inhibitors
Inhibitors of uncoating have also been investigated.
Amantadine and rimantadine have been introduced to combat influenza. These agents act on penetration and uncoating.
Pleconaril works against rhinoviruses, which cause the common cold, by blocking a pocket on the surface of the virus that controls the uncoating process. This pocket is similar in most strains of rhinoviruses and enteroviruses, which can cause diarrhea, meningitis, conjunctivitis, and encephalitis.
Some scientists are making the case that a vaccine against rhinoviruses, the predominant cause of the common cold, is achievable.
Vaccines that combine dozens of varieties of rhinovirus at once are effective in stimulating antiviral antibodies in mice and monkeys, researchers reported in Nature Communications in 2016.
Rhinoviruses are the most common cause of the common cold; other viruses such as respiratory syncytial virus, parainfluenza virus and adenoviruses can cause colds too. Rhinoviruses also exacerbate asthma attacks. Although rhinoviruses come in many varieties, they do not drift to the same degree that influenza viruses do. A mixture of 50 inactivated rhinovirus types should be able to stimulate neutralizing antibodies against all of them to some degree.
During viral synthesis
A second approach is to target the processes that synthesize virus components after a virus invades a cell.
Reverse transcription
One way of doing this is to develop nucleotide or nucleoside analogues that look like the building blocks of RNA or DNA, but deactivate the enzymes that synthesize the RNA or DNA once the analogue is incorporated. This approach is more commonly associated with the inhibition of reverse transcriptase (RNA to DNA) than with "normal" transcriptase (DNA to RNA).
The first successful antiviral, aciclovir, is a nucleoside analogue, and is effective against herpesvirus infections. The first antiviral drug to be approved for treating HIV, zidovudine (AZT), is also a nucleoside analogue.
An improved knowledge of the action of reverse transcriptase has led to better nucleoside analogues to treat HIV infections. One of these drugs, lamivudine, has been approved to treat hepatitis B, which uses reverse transcriptase as part of its replication process. Researchers have gone further and developed inhibitors that do not look like nucleosides, but can still block reverse transcriptase.
Another target being considered for HIV antivirals is RNase H, a component of reverse transcriptase that splits the synthesized DNA from the original viral RNA.
Integrase
Another target is integrase, which integrates the synthesized DNA into the host cell genome. Examples of integrase inhibitors include raltegravir, elvitegravir, and dolutegravir.
Transcription
Once a virus genome becomes operational in a host cell, it then generates messenger RNA (mRNA) molecules that direct the synthesis of viral proteins. Production of mRNA is initiated by proteins known as transcription factors. Several antivirals are now being designed to block attachment of transcription factors to viral DNA.
Translation/antisense
Genomics has not only helped find targets for many antivirals, it has provided the basis for an entirely new type of drug, based on "antisense" molecules. These are segments of DNA or RNA that are designed as complementary molecules to critical sections of viral genomes, and the binding of these antisense segments to these target sections blocks the operation of those genomes. A phosphorothioate antisense drug named fomivirsen has been introduced, used to treat opportunistic eye infections in AIDS patients caused by cytomegalovirus, and other antisense antivirals are in development. An antisense structural type that has proven especially valuable in research is morpholino antisense.
Morpholino oligos have been used to experimentally suppress many viral types:
caliciviruses
flaviviruses (including West Nile virus)
dengue
HCV
coronaviruses
Translation/ribozymes
Yet another antiviral technique inspired by genomics is a set of drugs based on ribozymes, which are enzymes that will cut apart viral RNA or DNA at selected sites. In their natural course, ribozymes are used as part of the viral manufacturing sequence, but these synthetic ribozymes are designed to cut RNA and DNA at sites that will disable them.
A ribozyme antiviral to deal with hepatitis C has been suggested, and ribozyme antivirals are being developed to deal with HIV. An interesting variation of this idea is the use of genetically modified cells that can produce custom-tailored ribozymes. This is part of a broader effort to create genetically modified cells that can be injected into a host to attack pathogens by generating specialized proteins that block viral replication at various phases of the viral life cycle.
Protein processing and targeting
Interference with post-translational modifications or with targeting of viral proteins in the cell is also possible.
Protease inhibitors
Some viruses include an enzyme known as a protease that cuts viral protein chains apart so they can be assembled into their final configuration. HIV includes a protease, and so considerable research has been performed to find "protease inhibitors" to attack HIV at that phase of its life cycle. Protease inhibitors became available in the 1990s and have proven effective, though they can have unusual side effects, for example causing fat to build up in unusual places. Improved protease inhibitors are now in development.
Protease inhibitors have also been found in nature. A protease inhibitor was isolated from the shiitake mushroom (Lentinus edodes). Its presence may explain the shiitake mushroom's noted antiviral activity in vitro.
Long dsRNA helix targeting
Most viruses produce long dsRNA helices during transcription and replication. In contrast, uninfected mammalian cells generally produce dsRNA helices of fewer than 24 base pairs during transcription. DRACO (double-stranded RNA activated caspase oligomerizer) is a group of experimental antiviral drugs initially developed at the Massachusetts Institute of Technology. In cell culture, DRACO was reported to have broad-spectrum efficacy against many infectious viruses, including dengue flavivirus, Amapari and Tacaribe arenavirus, Guama bunyavirus, H1N1 influenza and rhinovirus, and was additionally found effective against influenza in vivo in weanling mice. It was reported to induce rapid apoptosis selectively in virus-infected mammalian cells, while leaving uninfected cells unharmed. DRACO effects cell death via one of the last steps in the apoptosis pathway in which complexes containing intracellular apoptosis signalling molecules simultaneously bind multiple procaspases. The procaspases transactivate via cleavage, activate additional caspases in the cascade, and cleave a variety of cellular proteins, thereby killing the cell.
Assembly
Rifampicin acts at the assembly phase.
Release phase
The final stage in the life cycle of a virus is the release of completed viruses from the host cell, and this step has also been targeted by antiviral drug developers. Two drugs named zanamivir (Relenza) and oseltamivir (Tamiflu) that have been recently introduced to treat influenza prevent the release of viral particles by blocking a molecule named neuraminidase that is found on the surface of flu viruses, and also seems to be constant across a wide range of flu strains.
Immune system stimulation
Rather than attacking viruses directly, a second category of tactics for fighting viruses involves encouraging the body's immune system to attack them. Some antivirals of this sort do not focus on a specific pathogen, instead stimulating the immune system to attack a range of pathogens.
Among the best-known of this class of drugs are the interferons, which inhibit viral synthesis in infected cells. One form of human interferon named "interferon alpha" is well-established as part of the standard treatment for hepatitis B and C, and other interferons are also being investigated as treatments for various diseases.
A more specific approach is to synthesize antibodies, protein molecules that can bind to a pathogen and mark it for attack by other elements of the immune system. Once researchers identify a particular target on the pathogen, they can synthesize quantities of identical "monoclonal" antibodies to bind that target. A monoclonal drug is now being sold to help fight respiratory syncytial virus in babies, and antibodies purified from infected individuals are also used as a treatment for hepatitis B.
Antiviral drug resistance
Antiviral resistance can be defined as a decreased susceptibility to a drug caused by changes in viral genotypes. In cases of antiviral resistance, drugs have either diminished or no effectiveness against their target virus. The issue inevitably remains a major obstacle to antiviral therapy, as resistance has developed to almost all specific and effective antimicrobials, including antiviral agents.
The Centers for Disease Control and Prevention (CDC) recommends that everyone six months and older get a yearly vaccination to protect them from influenza A viruses (H1N1 and H3N2) and up to two influenza B viruses (depending on the vaccination). Comprehensive protection starts by ensuring vaccinations are current and complete. However, vaccines are preventative and are not generally used once a patient has been infected with a virus. Additionally, the availability of these vaccines can be limited based on financial or locational reasons, which can undermine the effectiveness of herd immunity, making effective antivirals a necessity.
The three FDA-approved neuraminidase antiviral flu drugs available in the United States, recommended by the CDC, include: oseltamivir (Tamiflu), zanamivir (Relenza), and peramivir (Rapivab). Influenza antiviral resistance often results from changes occurring in neuraminidase and hemagglutinin proteins on the viral surface. Currently, neuraminidase inhibitors (NAIs) are the most frequently prescribed antivirals because they are effective against both influenza A and B. However, antiviral resistance is known to develop if mutations to the neuraminidase proteins prevent NAI binding. This was seen in the H275Y mutation, which was responsible for oseltamivir resistance in H1N1 strains in 2009. The inability of NA inhibitors to bind to the virus allowed this strain of virus with the resistance mutation to spread due to natural selection. Furthermore, a study published in 2009 in Nature Biotechnology emphasized the urgent need for augmentation of oseltamivir stockpiles with additional antiviral drugs, including zanamivir. This finding was based on a performance evaluation of these drugs supposing the 2009 H1N1 'Swine Flu' neuraminidase (NA) were to acquire the oseltamivir-resistance (His274Tyr) mutation, which is currently widespread in seasonal H1N1 strains.
Origin of antiviral resistance
The genetic makeup of viruses is constantly changing, which can cause a virus to become resistant to currently available treatments. Viruses can become resistant through spontaneous or intermittent mechanisms throughout the course of an antiviral treatment. Immunocompromised patients hospitalized with pneumonia, more often than immunocompetent patients, are at the highest risk of developing oseltamivir resistance during treatment. Subsequent to exposure to someone else with the flu, those who received oseltamivir for "post-exposure prophylaxis" are also at higher risk of resistance.
The mechanisms for antiviral resistance development depend on the type of virus in question. RNA viruses such as hepatitis C and influenza A have high error rates during genome replication because RNA polymerases lack proofreading activity. RNA viruses also have small genome sizes that are typically less than 30 kb, which allow them to sustain a high frequency of mutations. DNA viruses, such as HPV and herpesvirus, hijack host cell replication machinery, which gives them proofreading capabilities during replication. DNA viruses are therefore less error-prone, are generally less diverse, and evolve more slowly than RNA viruses. In both cases, the likelihood of mutations is exacerbated by the speed with which viruses reproduce, which provides more opportunities for mutations to occur in successive replications. Billions of viruses are produced every day during the course of an infection, with each replication giving another chance for mutations that encode resistance to occur.
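To put that scale in perspective, a back-of-the-envelope calculation helps. The Python sketch below uses purely illustrative numbers; the per-site mutation rate and the daily replication count are assumptions for the example, not figures from this article:

```python
# Toy model: probability that at least one replication event introduces a
# given point mutation, for an assumed error rate and replication count.
mutation_rate_per_site = 1e-4    # assumed order of magnitude for RNA viruses
replications_per_day = 10**9     # "billions of viruses ... every day"

p_none = (1 - mutation_rate_per_site) ** replications_per_day
print(1 - p_none)  # ~1.0: under these assumptions the mutation arises daily
```

Even with a much smaller assumed error rate, the enormous number of replications makes the appearance of any particular resistance mutation close to certain over the course of an infection.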
Multiple strains of one virus can be present in the body at one time, and some of these strains may contain mutations that cause antiviral resistance. This effect, called the quasispecies model, results in immense variation in any given sample of virus, and gives the opportunity for natural selection to favor viral strains with the highest fitness every time the virus is spread to a new host. Recombination, the joining of two different viral variants, and reassortment, the swapping of viral gene segments among viruses in the same cell, also play a role in resistance, especially in influenza.
Antiviral resistance has been reported in antivirals for herpes, HIV, hepatitis B and C, and influenza, but antiviral resistance is a possibility for all viruses. Mechanisms of antiviral resistance vary between virus types.
Detection of antiviral resistance
National and international surveillance is performed by the CDC to determine the effectiveness of the current FDA-approved antiviral flu drugs. Public health officials use this information to make current recommendations about the use of flu antiviral medications. The WHO further recommends in-depth epidemiological investigations to control potential transmission of the resistant virus and prevent future progression. As novel treatments and techniques for detecting antiviral resistance are enhanced, so too can strategies be established to combat the inevitable emergence of antiviral resistance.
Treatment options for antiviral resistant pathogens
If a virus is not fully wiped out during a regimen of antivirals, treatment creates a bottleneck in the viral population that selects for resistance, and there is a chance that a resistant strain may repopulate the host. Viral treatment mechanisms must therefore account for the selection of resistant viruses.
The most commonly used method for treating resistant viruses is combination therapy, which uses multiple antivirals in one treatment regimen. This is thought to decrease the likelihood that one mutation could cause antiviral resistance, as the antivirals in the cocktail target different stages of the viral life cycle. This is frequently used in retroviruses like HIV, but a number of studies have demonstrated its effectiveness against influenza A, as well. Viruses can also be screened for resistance to drugs before treatment is started. This minimizes exposure to unnecessary antivirals and ensures that an effective medication is being used. This may improve patient outcomes and could help detect new resistance mutations during routine scanning for known mutants. However, this has not been consistently implemented in treatment facilities at this time.
Direct-acting antivirals
The term direct-acting antivirals (DAAs) has long been associated with the combination of antiviral drugs used to treat hepatitis C infections. These are more effective than older treatments such as ribavirin (partially indirect-acting) and interferon (indirect-acting). The DAA drugs against hepatitis C are taken orally, as tablets, for 8 to 12 weeks. The treatment depends on the type or types (genotypes) of hepatitis C virus that are causing the infection. Both during and at the end of treatment, blood tests are used to monitor the effectiveness of the treatment and subsequent cure.
The DAA combination drugs used include:
Harvoni (sofosbuvir and ledipasvir)
Epclusa (sofosbuvir and velpatasvir)
Vosevi (sofosbuvir, velpatasvir, and voxilaprevir)
Zepatier (elbasvir and grazoprevir)
Mavyret (glecaprevir and pibrentasvir)
The United States Food and Drug Administration approved DAAs on the basis of a surrogate endpoint called sustained virological response (SVR). SVR is achieved in a patient when hepatitis C virus RNA remains undetectable 12–24 weeks after treatment ends. Whether through DAAs or older interferon-based regimens, SVR is associated with improved health outcomes and significantly decreased mortality. For those who already have advanced liver disease (including hepatocellular carcinoma), however, the benefits of achieving SVR may be less pronounced, though still substantial.
Despite its historical roots in hepatitis C research, the term "direct-acting antivirals" is becoming more broadly used to also include other anti-viral drugs with a direct viral target such as aciclovir (against herpes simplex virus), letermovir (against cytomegalovirus), or AZT (against human immunodeficiency virus). In this context it serves to distinguish these drugs from those with an indirect mechanism of action such as immune modulators like interferon alfa. This difference is of particular relevance for potential drug resistance mutation development.
Public policy
Use and distribution
Guidelines regarding viral diagnoses and treatments change frequently and limit quality care. Even when physicians diagnose older patients with influenza, use of antiviral treatment can be low. Provider knowledge of antiviral therapies can improve patient care, especially in geriatric medicine. Furthermore, in local health departments (LHDs) with access to antivirals, guidelines may be unclear, causing delays in treatment. With time-sensitive therapies, delays could lead to lack of treatment.
Overall, national guidelines, regarding infection control and management, standardize care and improve healthcare worker and patient safety. Guidelines, such as those provided by the Centers for Disease Control and Prevention (CDC) during the 2009 flu pandemic caused by the H1N1 virus, recommend, among other things, antiviral treatment regimens, clinical assessment algorithms for coordination of care, and antiviral chemoprophylaxis guidelines for exposed persons. Roles of pharmacists and pharmacies have also expanded to meet the needs of public during public health emergencies.
Stockpiling | Antiviral drug | Wikipedia | 463 | 49197 | https://en.wikipedia.org/wiki/Antiviral%20drug | Biology and health sciences | Antiviral drugs | Health |
Public Health Emergency Preparedness initiatives are managed by the CDC via the Office of Public Health Preparedness and Response. Funds aim to support communities in preparing for public health emergencies, including pandemic influenza. Also managed by the CDC, the Strategic National Stockpile (SNS) consists of bulk quantities of medicines and supplies for use during such emergencies. Antiviral stockpiles prepare for shortages of antiviral medications in cases of public health emergencies. During the H1N1 pandemic in 2009–2010, guidelines for SNS use by local health departments were unclear, revealing gaps in antiviral planning. For example, local health departments that received antivirals from the SNS did not have transparent guidance on the use of the treatments. The gap made it difficult to create plans and policies for their use and future availability, causing delays in treatment.
The age of Earth is estimated to be 4.54 ± 0.05 billion years. This age may represent the age of Earth's accretion, or core formation, or of the material from which Earth formed. This dating is based on evidence from radiometric age-dating of meteorite material and is consistent with the radiometric ages of the oldest-known terrestrial material and lunar samples.
Following the development of radiometric age-dating in the early 20th century, measurements of lead in uranium-rich minerals showed that some were in excess of a billion years old. The oldest such minerals analyzed to date—small crystals of zircon from the Jack Hills of Western Australia—are at least 4.404 billion years old. Calcium–aluminium-rich inclusions—the oldest known solid constituents within meteorites that are formed within the Solar System—are 4.567 billion years old, giving a lower limit for the age of the Solar System.
It is hypothesised that the accretion of Earth began soon after the formation of the calcium-aluminium-rich inclusions and the meteorites. Because the time this accretion process took is not yet known, and predictions from different accretion models range from a few million up to about 100 million years, the difference between the age of Earth and of the oldest rocks is difficult to determine. It is also difficult to determine the exact age of the oldest rocks on Earth, exposed at the surface, as they are aggregates of minerals of possibly different ages.
Development of modern geologic concepts
Studies of strata—the layering of rocks and soil—gave naturalists an appreciation that Earth may have been through many changes during its existence. These layers often contained fossilized remains of unknown creatures, leading some to interpret a progression of organisms from layer to layer.
Nicolas Steno in the 17th century was one of the first naturalists to appreciate the connection between fossil remains and strata. His observations led him to formulate important stratigraphic concepts (i.e., the "law of superposition" and the "principle of original horizontality"). In the 1790s, William Smith hypothesized that if two layers of rock at widely differing locations contained similar fossils, then it was very plausible that the layers were the same age. Smith's nephew and student, John Phillips, later calculated by such means that Earth was about 96 million years old.
In the mid-18th century, the naturalist Mikhail Lomonosov suggested that Earth had been created separately from, and several hundred thousand years before, the rest of the universe. Lomonosov's ideas were mostly speculative. In 1779 the Comte du Buffon tried to obtain a value for the age of Earth using an experiment: he created a small globe that resembled Earth in composition and then measured its rate of cooling. This led him to estimate that Earth was about 75,000 years old.
Other naturalists used these hypotheses to construct a history of Earth, though their timelines were inexact as they did not know how long it took to lay down stratigraphic layers. In 1830, geologist Charles Lyell, developing ideas found in James Hutton's works, popularized the concept that the features of Earth were in perpetual change, eroding and reforming continuously, and the rate of this change was roughly constant. This was a challenge to the traditional view, which saw the history of Earth as dominated by intermittent catastrophes. Many naturalists were influenced by Lyell to become "uniformitarians" who believed that changes were constant and uniform.
Early calculations
In 1862, the physicist William Thomson, 1st Baron Kelvin, published calculations that fixed the age of Earth at between 20 million and 400 million years. He assumed that Earth had formed as a completely molten object, and determined the amount of time it would take for the near-surface temperature gradient to decrease to its present value. His calculations did not account for heat produced via radioactive decay (a then unknown process) or, more significantly, convection inside Earth, which allows the temperature in the upper mantle to remain high much longer, maintaining a high thermal gradient in the crust much longer. Even more constraining were Thomson's estimates of the age of the Sun, which were based on estimates of its thermal output and a theory that the Sun obtains its energy from gravitational collapse; Thomson estimated that the Sun is about 20 million years old.
Geologists such as Lyell had difficulty accepting such a short age for Earth. For biologists, even 100 million years seemed much too short to be plausible. In Charles Darwin's theory of evolution, the process of random heritable variation with cumulative selection requires great durations of time, and Darwin stated that Thomson's estimates did not appear to provide enough time. According to modern biology, the total evolutionary history from the beginning of life to today has taken place over the 3.5 to 3.8 billion years since the last universal ancestor of all living organisms, as shown by geological dating.
In a lecture in 1869, Darwin's great advocate, Thomas Henry Huxley, attacked Thomson's calculations, suggesting they appeared precise in themselves but were based on faulty assumptions. The physicist Hermann von Helmholtz (in 1856) and astronomer Simon Newcomb (in 1892) contributed their own calculations of 22 and 18 million years, respectively, to the debate: they independently calculated the amount of time it would take for the Sun to condense down to its current diameter and brightness from the nebula of gas and dust from which it was born. Their values were consistent with Thomson's calculations. However, they assumed that the Sun was only glowing from the heat of its gravitational contraction. The process of solar nuclear fusion was not yet known to science.
In 1892, Thomson was ennobled as Lord Kelvin in appreciation of his many scientific accomplishments. In 1895 John Perry challenged Kelvin's figure on the basis of his assumptions on conductivity, and Oliver Heaviside entered the dialogue, considering it "a vehicle to display the ability of his operator method to solve problems of astonishing complexity." Other scientists backed up Kelvin's figures. Darwin's son, the astronomer George H. Darwin, proposed that Earth and Moon had broken apart in their early days when they were both molten. He calculated the amount of time it would have taken for tidal friction to give Earth its current 24-hour day. His value of 56 million years was additional evidence that Thomson was on the right track. The last estimate Kelvin gave, in 1897, was: "that it was more than 20 and less than 40 million years old, and probably much nearer 20 than 40". In 1899 and 1900, John Joly calculated the rate at which the oceans should have accumulated salt from erosion processes and determined that the oceans were about 80 to 100 million years old.
Radiometric dating
Overview
By their chemical nature, rock minerals contain certain elements and not others; but in rocks containing radioactive isotopes, the process of radioactive decay generates exotic elements over time. By measuring the concentration of the stable end product of the decay, coupled with knowledge of the half-life and initial concentration of the decaying element, the age of the rock can be calculated. Typical radioactive end products are argon from decay of potassium-40, and lead from decay of uranium and thorium. If the rock becomes molten, as happens in Earth's mantle, such nonradioactive end products typically escape or are redistributed. Thus the age of the oldest terrestrial rock gives a minimum for the age of Earth, assuming that no rock has been intact for longer than Earth itself.
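Under the usual simplifying assumptions (the sample started with none of the daughter isotope and has remained a closed system), the age follows from D/P = e^(λt) − 1 with decay constant λ = ln 2 / t_half, where D and P are the measured daughter and parent amounts. The following minimal Python sketch, added here as an illustration rather than a full geochronology workflow, carries out that calculation:

```python
import math

def radiometric_age(daughter_parent_ratio, half_life_years):
    """Age from the measured daughter-to-parent ratio D/P, assuming a
    closed system with no initial daughter: D/P = e^(lambda*t) - 1."""
    decay_constant = math.log(2) / half_life_years
    return math.log(1.0 + daughter_parent_ratio) / decay_constant

# Uranium-238 decays (through a chain) to lead-206 with a ~4.468-billion-year
# half-life; a sample with one daughter atom per parent atom (D/P = 1)
# is exactly one half-life old.
print(radiometric_age(1.0, 4.468e9))  # ~4.468e9 years
```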
Convective mantle and radioactivity
The discovery of radioactivity introduced another factor in the calculation. After Henri Becquerel's initial discovery in 1896, Marie and Pierre Curie discovered the radioactive elements polonium and radium in 1898; and in 1903, Pierre Curie and Albert Laborde announced that radium produces enough heat to melt its own weight in ice in less than an hour. Geologists quickly realized that this upset the assumptions underlying most calculations of the age of Earth. These had assumed that the original heat of Earth and the Sun had dissipated steadily into space, but radioactive decay meant that this heat had been continually replenished. George Darwin and John Joly were the first to point this out, in 1903.
Invention of radiometric dating
Radioactivity, which had overthrown the old calculations, yielded a bonus by providing a basis for new calculations, in the form of radiometric dating.
Ernest Rutherford and Frederick Soddy had jointly continued their work on radioactive materials and concluded that radioactivity was caused by a spontaneous transmutation of atomic elements. In radioactive decay, an element breaks down into another, lighter element, releasing alpha, beta, or gamma radiation in the process. They also determined that a particular isotope of a radioactive element decays into another element at a distinctive rate. This rate is given in terms of a "half-life", or the amount of time it takes half of a mass of that radioactive material to break down into its "decay product".
Some radioactive materials have short half-lives; some have long half-lives. Uranium and thorium have long half-lives and so persist in Earth's crust, but radioactive elements with short half-lives have generally disappeared. This suggested that it might be possible to measure the age of Earth by determining the relative proportions of radioactive materials in geological samples. In reality, radioactive elements do not always decay into nonradioactive ("stable") elements directly; instead, they decay into other radioactive elements that have their own half-lives, and so on, until they reach a stable element. These "decay chains", such as the uranium-radium and thorium series, were known within a few years of the discovery of radioactivity and provided a basis for constructing techniques of radiometric dating.
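The contrast between short and long half-lives is easy to quantify. This short Python sketch is an illustration added here (the half-lives of radium-226 and uranium-238 are the standard accepted values); it shows why only long-lived parent isotopes survive in Earth's crust:

```python
def fraction_remaining(elapsed_years, half_life_years):
    # After time t, a fraction (1/2)^(t / t_half) of the parent isotope remains.
    return 0.5 ** (elapsed_years / half_life_years)

# Over a million years, radium-226 (t_half ~ 1600 years) vanishes entirely,
# while uranium-238 (t_half ~ 4.468e9 years) is barely depleted.
print(fraction_remaining(1e6, 1600))      # ~1e-188, effectively zero
print(fraction_remaining(1e6, 4.468e9))   # ~0.99984
```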
The pioneers of radioactivity were chemist Bertram B. Boltwood and physicist Rutherford. Boltwood had conducted studies of radioactive materials as a consultant, and when Rutherford lectured at Yale in 1904, Boltwood was inspired to describe the relationships between elements in various decay series. Late in 1904, Rutherford took the first step toward radiometric dating by suggesting that the alpha particles released by radioactive decay could be trapped in a rocky material as helium atoms. At the time, Rutherford was only guessing at the relationship between alpha particles and helium atoms, but he would prove the connection four years later.
Soddy and Sir William Ramsay had just determined the rate at which radium produces alpha particles, and Rutherford proposed that he could determine the age of a rock sample by measuring its concentration of helium. He dated a rock in his possession to an age of 40 million years by this technique. Rutherford wrote of addressing a meeting of the Royal Institution in 1904.
Rutherford assumed that the rate of decay of radium as determined by Ramsay and Soddy was accurate and that helium did not escape from the sample over time. Rutherford's scheme was inaccurate, but it was a useful first step. Boltwood focused on the end products of decay series. In 1905, he suggested that lead was the final stable product of the decay of radium. It was already known that radium was an intermediate product of the decay of uranium. Rutherford joined in, outlining a decay process in which radium emitted five alpha particles through various intermediate products to end up with lead, and speculated that the radium–lead decay chain could be used to date rock samples. Boltwood did the legwork and by the end of 1905 had provided dates for 26 separate rock samples, ranging from 92 to 570 million years. He did not publish these results, which was fortunate because they were flawed by measurement errors and poor estimates of the half-life of radium. Boltwood refined his work and finally published the results in 1907.
Boltwood's paper pointed out that samples taken from comparable layers of strata had similar lead-to-uranium ratios, and that samples from older layers had a higher proportion of lead, except where there was evidence that lead had leached out of the sample. His studies were flawed by the fact that the decay series of thorium was not understood, which led to incorrect results for samples that contained both uranium and thorium. However, his calculations were far more accurate than any that had been performed to that time. Refinements in the technique would later give ages for Boltwood's 26 samples of 410 million to 2.2 billion years.
Arthur Holmes establishes radiometric dating
Although Boltwood published his paper in a prominent geological journal, the geological community had little interest in radioactivity. Boltwood gave up work on radiometric dating and went on to investigate other decay series. Rutherford remained mildly curious about the issue of the age of Earth but did little work on it.
Robert Strutt tinkered with Rutherford's helium method until 1910 and then ceased. However, Strutt's student Arthur Holmes became interested in radiometric dating and continued to work on it after everyone else had given up. Holmes focused on lead dating because he regarded the helium method as unpromising. He performed measurements on rock samples and concluded in 1911 that the oldest (a sample from Ceylon) was about 1.6 billion years old. These calculations were not particularly trustworthy. For example, he assumed that the samples had contained only uranium and no lead when they were formed.
More important research was published in 1913. It showed that elements generally exist in multiple variants with different masses, or "isotopes". In the 1930s, isotopes would be shown to have nuclei with differing numbers of the neutral particles known as "neutrons". In that same year, other research was published establishing the rules for radioactive decay, allowing more precise identification of decay series.
Many geologists felt these new discoveries made radiometric dating so complicated as to be worthless. Holmes felt that they gave him tools to improve his techniques, and he plodded ahead with his research, publishing before and after the First World War. His work was generally ignored until the 1920s, though in 1917 Joseph Barrell, a professor of geology at Yale, redrew geological history as it was understood at the time to conform to Holmes's findings in radiometric dating. Barrell's research determined that the layers of strata had not all been laid down at the same rate, and so current rates of geological change could not be used to provide accurate timelines of the history of Earth.
Holmes' persistence finally began to pay off in 1921, when the speakers at the yearly meeting of the British Association for the Advancement of Science came to a rough consensus that Earth was a few billion years old and that radiometric dating was credible. Holmes published The Age of the Earth, an Introduction to Geological Ideas in 1927 in which he presented a range of 1.6 to 3.0 billion years. No great push to embrace radiometric dating followed, however, and the die-hards in the geological community stubbornly resisted. They had never cared for attempts by physicists to intrude in their domain, and had successfully ignored them so far. The growing weight of evidence finally tilted the balance in 1931, when the National Research Council of the US National Academy of Sciences decided to resolve the question of the age of Earth by appointing a committee to investigate.
Holmes, being one of the few people who was trained in radiometric dating techniques, was a committee member and in fact wrote most of the final report. Thus, Holmes' report concluded that radioactive dating was the only reliable means of pinning down a geologic time scale. Questions of bias were deflected by the great and exacting detail of the report. It described the methods used, the care with which measurements were made, and their error bars and limitations.
Modern radiometric dating
Radiometric dating continues to be the predominant way scientists date geologic time scales. Techniques for radioactive dating have been tested and fine-tuned on an ongoing basis since the 1960s. Forty or so different dating techniques have been utilized to date, working on a wide variety of materials. Dates for the same sample using these different techniques are in very close agreement on the age of the material. Possible contamination problems do exist, but they have been studied and dealt with by careful investigation, leading to sample preparation procedures being minimized to limit the chance of contamination.
Use of meteorites
An age of 4.55 ± 0.07 billion years, very close to today's accepted age, was determined by Clair Cameron Patterson using uranium–lead isotope dating (specifically lead–lead dating) on several meteorites including the Canyon Diablo meteorite and published in 1956. The quoted age of Earth is derived, in part, from the Canyon Diablo meteorite for several important reasons and is built upon a modern understanding of cosmochemistry built up over decades of research.
Most geological samples from Earth are unable to give a direct date of the formation of Earth from the solar nebula because Earth has undergone differentiation into the core, mantle, and crust, and this has then undergone a long history of mixing and unmixing of these sample reservoirs by plate tectonics, weathering and hydrothermal circulation.
All of these processes may adversely affect isotopic dating mechanisms because the sample cannot always be assumed to have remained as a closed system, by which it is meant that either the parent or daughter nuclide (a species of atom characterised by the number of neutrons and protons an atom contains) or an intermediate daughter nuclide may have been partially removed from the sample, which will skew the resulting isotopic date. To mitigate this effect it is usual to date several minerals in the same sample, to provide an isochron. Alternatively, more than one dating system may be used on a sample to check the date.
Some meteorites are furthermore considered to represent the primitive material from which the accreting solar disk was formed. Some have behaved as closed systems (for some isotopic systems) soon after the solar disk and the planets formed. To date, these assumptions are supported by much scientific observation and repeated isotopic dates, and it is certainly a more robust hypothesis than that which assumes a terrestrial rock has retained its original composition.
Nevertheless, ancient Archaean lead ores of galena have been used to date the formation of Earth, as these represent the earliest-formed lead-only minerals on the planet and record the earliest homogeneous lead–lead isotope systems on the planet. These have returned age dates of 4.54 billion years, with a margin of error as small as 1%.
Canyon Diablo meteorite
The Canyon Diablo meteorite was used because it is both large and representative of a particularly rare type of meteorite that contains sulfide minerals (particularly troilite, FeS), metallic nickel-iron alloys, plus silicate minerals. This is important because the presence of the three mineral phases allows investigation of isotopic dates using samples that provide a great separation in concentrations between parent and daughter nuclides. This is particularly true of uranium and lead. Lead is strongly chalcophilic and is found in the sulfide at a much greater concentration than in the silicate, in contrast to uranium. This segregation of parent and daughter nuclides during the formation of the meteorite allowed a much more precise date of the formation of the solar disk, and hence the planets, than ever before.
The age determined from the Canyon Diablo meteorite has been confirmed by hundreds of other age determinations, from both terrestrial samples and other meteorites. The meteorite samples, however, show a spread from 4.53 to 4.58 billion years ago. This is interpreted as the duration of formation of the solar nebula and its collapse into the solar disk to form the Sun and the planets. This 50 million year time span allows for accretion of the planets from the original solar dust and meteorites. | Age of Earth | Wikipedia | 442 | 49256 | https://en.wikipedia.org/wiki/Age%20of%20Earth | Physical sciences | Basics | Earth science |
The Moon, as another extraterrestrial body that has not undergone plate tectonics and that has no atmosphere, provides quite precise age dates from the samples returned from the Apollo missions. Rocks returned from the Moon have been dated at a maximum of 4.51 billion years old. Martian meteorites that have landed upon Earth have also been dated to around 4.5 billion years old by lead–lead dating. Lunar samples, since they have not been disturbed by weathering, plate tectonics or material moved by organisms, can also provide dating by direct electron microscope examination of cosmic ray tracks. The accumulation of dislocations generated by high energy cosmic ray particle impacts provides another confirmation of the isotopic dates. Cosmic ray dating is only useful on material that has not been melted, since melting erases the crystalline structure of the material, and wipes away the tracks left by the particles.
In chemistry, hydronium (hydroxonium in traditional British English) is the cation H3O+, also written as [H3O]+, the type of oxonium ion produced by protonation of water. It is often viewed as the positive ion present when an Arrhenius acid is dissolved in water, as Arrhenius acid molecules in solution give up a proton (a positive hydrogen ion, H+) to the surrounding water molecules (H2O). In fact, acids must be surrounded by more than a single water molecule in order to ionize, yielding aqueous H+ and the conjugate base.
Three main structures for the aqueous proton have garnered experimental support:
the Eigen cation, which is a tetrahydrate, H3O+(H2O)3
the Zundel cation, which is a symmetric dihydrate, H+(H2O)2
and the Stoyanov cation, an expanded Zundel cation, which is a hexahydrate: H+(H2O)2(H2O)4
Spectroscopic evidence from well-defined IR spectra overwhelmingly supports the Stoyanov cation as the predominant form. For this reason, it has been suggested that wherever possible, the symbol H+(aq) should be used instead of the hydronium ion.
Relation to pH
The molar concentration of hydronium (H3O+) ions determines a solution's pH according to

pH = −log10([H3O+] / M)

where M = mol/L. The concentration of hydroxide ions analogously determines a solution's pOH. The molecules in pure water auto-dissociate into aqueous protons and hydroxide ions in the following equilibrium:

H2O ⇌ H+(aq) + OH−(aq)

In pure water, there is an equal number of hydroxide and H3O+ ions, so it is a neutral solution. At 25 °C, pure water has a pH of 7 and a pOH of 7 (this varies when the temperature changes: see self-ionization of water). A pH value less than 7 indicates an acidic solution, and a pH value more than 7 indicates a basic solution.
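As a quick numerical illustration of the definition above (a minimal sketch added here, not part of the standard treatment):

```python
import math

def pH(hydronium_molarity):
    # pH = -log10([H3O+] / M), with the concentration in mol/L
    return -math.log10(hydronium_molarity)

print(pH(1e-7))   # ~7.0  -> pure water at 25 degrees C, neutral
print(pH(1e-3))   # ~3.0  -> acidic solution
print(pH(1e-11))  # ~11.0 -> basic solution
```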
Nomenclature
According to IUPAC nomenclature of organic chemistry, the hydronium ion should be referred to as oxonium. Hydroxonium may also be used unambiguously to identify it.
An oxonium ion is any cation containing a trivalent oxygen atom.
Structure
Since O+ and N have the same number of electrons, H3O+ is isoelectronic with ammonia. H3O+ has a trigonal pyramidal molecular geometry with the oxygen atom at its apex. The H−O−H bond angle is approximately 113°, and the center of mass is very close to the oxygen atom. Because the base of the pyramid is made up of three identical hydrogen atoms, the molecule's symmetric top configuration is such that it belongs to the C3v point group. Because of this symmetry and the fact that it has a dipole moment, the rotational selection rules are ΔJ = ±1 and ΔK = 0. The transition dipole lies along the c-axis and, because the negative charge is localized near the oxygen atom, the dipole moment points to the apex, perpendicular to the base plane.
Acids and acidity
The hydrated proton is very acidic: at 25 °C, its pKa is approximately 0. The values commonly given for pKaaq(H3O+) are 0 or –1.74. The former uses the convention that the activity of the solvent in a dilute solution (in this case, water) is 1, while the latter uses the value of the concentration of water in the pure liquid of 55.5 M. Silverstein has shown that the latter value is thermodynamically unsupportable. The disagreement comes from the ambiguity that to define pKa of H3O+ in water, H2O has to act simultaneously as a solute and the solvent. The IUPAC has not given an official definition of pKa that would resolve this ambiguity. Burgot has argued that H3O+(aq) + H2O (l) ⇄ H2O (aq) + H3O+ (aq) is simply not a thermodynamically well-defined process. For an estimate of pKaaq(H3O+), Burgot suggests taking the measured value pKaEtOH(H3O+) = 0.3, the pKa of H3O+ in ethanol, and applying the correlation equation pKaaq = pKaEtOH – 1.0 (± 0.3) to convert the ethanol pKa to an aqueous value, to give a value of pKaaq(H3O+) = –0.7 (± 0.3). On the other hand, Silverstein has shown that Ballinger and Long's experimental results support a pKa of 0.0 for the aqueous proton. Neils and Schaertel provide added arguments for a pKa of 0.0 | Hydronium | Wikipedia | 380 | 49281 | https://en.wikipedia.org/wiki/Hydronium | Physical sciences | Concepts | Chemistry |
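The arithmetic behind Burgot's estimate is simple but worth making explicit; a one-line sketch (variable names are ours):

```python
pKa_EtOH = 0.3            # measured pKa of H3O+ in ethanol, per Burgot as cited above
pKa_aq = pKa_EtOH - 1.0   # correlation pKa_aq = pKa_EtOH - 1.0 (+/- 0.3)
print(pKa_aq)             # -0.7, carrying the +/- 0.3 correlation uncertainty
```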
The aqueous proton is the most acidic species that can exist in water (assuming sufficient water for dissolution): any stronger acid will ionize and yield a hydrated proton. The acidity of H3O+(aq) is the implicit standard used to judge the strength of an acid in water: strong acids must be better proton donors than H3O+(aq), as otherwise a significant portion of acid will exist in a non-ionized state (i.e.: a weak acid). Unlike H3O+(aq) in neutral solutions that result from water's autodissociation, in acidic solutions, H3O+(aq) is long-lasting and concentrated, in proportion to the strength of the dissolved acid.
pH was originally conceived to be a measure of the hydrogen ion concentration of aqueous solution. Virtually all such free protons are quickly hydrated; the acidity of an aqueous solution is therefore more accurately characterized by its concentration of H3O+(aq). In organic syntheses, such as acid catalyzed reactions, the hydronium ion (H3O+) is used interchangeably with the H+ ion; choosing one over the other has no significant effect on the mechanism of reaction.
Solvation
Researchers have yet to fully characterize the solvation of hydronium ion in water, in part because many different meanings of solvation exist. A freezing-point depression study determined that the mean hydration ion in cold water is approximately H3O+(H2O)6: on average, each hydronium ion is solvated by 6 water molecules which are unable to solvate other solute molecules.
Some hydration structures are quite large: the H3O+(H2O)20 magic ion number structure (called magic number because of its increased stability with respect to hydration structures involving a comparable number of water molecules – this is a similar usage of the term magic number as in nuclear physics) might place the hydronium inside a dodecahedral cage. However, more recent ab initio method molecular dynamics simulations have shown that, on average, the hydrated proton resides on the surface of the cluster. Further, several disparate features of these simulations agree with their experimental counterparts, suggesting an alternative interpretation of the experimental results.
Two other well-known structures are the Zundel cation and the Eigen cation. The Eigen solvation structure has the hydronium ion at the center of an H9O4+ complex in which the hydronium is strongly hydrogen-bonded to three neighbouring water molecules. In the Zundel H5O2+ complex the proton is shared equally by two water molecules in a symmetric hydrogen bond. A work in 1999 indicates that both of these complexes represent ideal structures in a more general hydrogen bond network defect.
Isolation of the hydronium ion monomer in liquid phase was achieved in a nonaqueous, low-nucleophilicity superacid solution (HF–SbF5–SO2). The ion was characterized by high-resolution nuclear magnetic resonance.
A 2007 calculation of the enthalpies and free energies of the various hydrogen bonds around the hydronium cation in liquid protonated water at room temperature and a study of the proton hopping mechanism using molecular dynamics showed that the hydrogen-bonds around the hydronium ion (formed with the three water ligands in the first solvation shell of the hydronium) are quite strong compared to those of bulk water.
A new model was proposed by Stoyanov based on infrared spectroscopy in which the proton exists as an H13O6+ ion. The positive charge is thus delocalized over 6 water molecules.
Solid hydronium salts
For many strong acids, it is possible to form crystals of their hydronium salt that are relatively stable. These salts are sometimes called acid monohydrates. As a rule, any acid with an ionization constant of 10^9 or higher may do this. Acids whose ionization constants are below 10^9 generally cannot form stable salts. For example, nitric acid has an ionization constant of 10^1.4, and mixtures with water at all proportions are liquid at room temperature. However, perchloric acid has an ionization constant of 10^10, and if liquid anhydrous perchloric acid and water are combined in a 1:1 molar ratio, they react to form solid hydronium perchlorate (H3O+·ClO4−).
The hydronium ion also forms stable compounds with the carborane superacid H(CB11H5Br6). X-ray crystallography shows a C3v symmetry for the hydronium ion, with each proton interacting with a bromine atom from each of three carborane anions 320 pm apart on average. The salt is also soluble in benzene. In crystals grown from a benzene solution the solvent co-crystallizes and a [H3O·(C6H6)3]+ cation is completely separated from the anion. In the cation three benzene molecules surround hydronium, forming pi-cation interactions with the hydrogen atoms. The closest (non-bonding) approach of the anion at chlorine to the cation at oxygen is 348 pm.
There are also many known examples of salts containing hydrated hydronium ions, such as the H5O2+ ion in HCl·2H2O, and the H7O3+ and H9O4+ ions, both found in HBr·4H2O.
Sulfuric acid is also known to form a hydronium salt at temperatures below .
Interstellar H3O+
Hydronium is an abundant molecular ion in the interstellar medium and is found in diffuse and dense molecular clouds as well as the plasma tails of comets. Interstellar sources of hydronium observations include the regions of Sagittarius B2, Orion OMC-1, Orion BN–IRc2, Orion KL, and the comet Hale–Bopp.
Interstellar hydronium is formed by a chain of reactions started by the ionization of H2 into H2+ by cosmic radiation. H3O+ can produce either H2O or OH through dissociative recombination reactions, which occur very quickly even at the low (≥10 K) temperatures of dense clouds. This leads to hydronium playing a very important role in interstellar ion-neutral chemistry.
Astronomers are especially interested in determining the abundance of water in various interstellar climates due to its key role in the cooling of dense molecular gases through radiative processes. However, H2O does not have many favorable transitions for ground-based observations. Although observations of HDO (the deuterated version of water) could potentially be used for estimating H2O abundances, the ratio of HDO to H2O is not known very accurately.
Hydronium, on the other hand, has several transitions that make it a superior candidate for detection and identification in a variety of situations. This information has been used in conjunction with laboratory measurements of the branching ratios of the various H3O+ dissociative recombination reactions to provide what are believed to be relatively accurate OH and H2O abundances without requiring direct observation of these species.
Interstellar chemistry
As mentioned previously, H3O+ is found in both diffuse and dense molecular clouds. By applying the reaction rate constants (α, β, and γ) corresponding to all of the currently available characterized reactions involving H3O+, it is possible to calculate k(T) for each of these reactions. By multiplying these k(T) by the relative abundances of the products, the relative rates (in cm3/s) for each reaction at a given temperature can be determined. These relative rates can be converted into absolute rates by multiplying them by the H2 abundance. By assuming T = 10 K for a dense cloud and T = 50 K for a diffuse cloud, the results indicate that the dominant formation and destruction mechanisms were the same for both cases. It should be mentioned that the relative abundances used in these calculations correspond to TMC-1, a dense molecular cloud, and that the calculated relative rates are therefore expected to be more accurate at T = 10 K. The three fastest formation and destruction mechanisms are listed in the table below, along with their relative rates. Note that the rates of these six reactions are such that they make up approximately 99% of hydronium ion's chemical interactions under these conditions. All three destruction mechanisms in the table below are classified as dissociative recombination reactions.
It is also worth noting that the relative rates for the formation reactions in the table above are the same for a given reaction at both temperatures. This is due to the reaction rate constants for these reactions having β and γ constants of 0, resulting in k = α, which is independent of temperature.
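A short sketch of this temperature dependence, assuming the standard modified-Arrhenius (Kooij) parametrization k(T) = α(T/300)^β·exp(−γ/T) used by astrochemical databases; the coefficient values below are illustrative placeholders, not measured rates:

```python
import math

def kooij_rate(alpha, beta, gamma, T):
    """Modified Arrhenius (Kooij) form: k(T) = alpha * (T/300)**beta * exp(-gamma/T)."""
    return alpha * (T / 300.0) ** beta * math.exp(-gamma / T)

a, b, g = 1.0e-9, 0.0, 0.0            # hypothetical coefficients with beta = gamma = 0
for T in (10.0, 50.0):                # dense-cloud and diffuse-cloud temperatures, K
    print(T, kooij_rate(a, b, g, T))  # identical rates: k = alpha, T-independent
```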
Since all three of these reactions produce either H2O or OH, these results reinforce the strong connection between their relative abundances and that of H3O+. | Hydronium | Wikipedia | 363 | 49281 | https://en.wikipedia.org/wiki/Hydronium | Physical sciences | Concepts | Chemistry |
Astronomical detections
As early as 1973 and before the first interstellar detection, chemical models of the interstellar medium (the first corresponding to a dense cloud) predicted that hydronium was an abundant molecular ion and that it played an important role in ion-neutral chemistry. However, before an astronomical search could be underway there was still the matter of determining hydronium's spectroscopic features in the gas phase, which at this point were unknown. The first studies of these characteristics came in 1977, which was followed by other, higher-resolution spectroscopy experiments. Once several lines had been identified in the laboratory, the first interstellar detection of H3O+ was made by two groups almost simultaneously in 1986. The first, published in June 1986, reported observation of the J = 1 − 2 transition in OMC-1 and Sgr B2. The second, published in August, reported observation of the same transition toward the Orion-KL nebula.
These first detections have been followed by observations of a number of additional transitions. The first observations of each subsequent transition detection are given below in chronological order:
In 1991, the 3 − 2 transition was observed in OMC-1 and Sgr B2. One year later, the 3 − 2 transition was observed in several regions, the clearest of which was the W3 IRS 5 cloud.
The first far-IR observation, of the 4 − 3 transition at 69.524 μm (4.3121 THz), was made in 1996 near Orion BN-IRc2. In 2001, three additional transitions of H3O+ were observed in the far infrared in Sgr B2: the 2 − 1 transition at 100.577 μm (2.98073 THz), the 1 − 1 at 181.054 μm (1.65582 THz), and the 2 − 1 at 100.869 μm (2.9721 THz). | Hydronium | Wikipedia | 389 | 49281 | https://en.wikipedia.org/wiki/Hydronium | Physical sciences | Concepts | Chemistry |
In physics, the fine-structure constant, also known as the Sommerfeld constant, commonly denoted by α (the Greek letter alpha), is a fundamental physical constant that quantifies the strength of the electromagnetic interaction between elementary charged particles.
It is a dimensionless quantity (dimensionless physical constant), independent of the system of units used, which is related to the strength of the coupling of an elementary charge e with the electromagnetic field by the formula 4πε0ħcα = e². Its numerical value is approximately 0.0072973525693 ≈ 1/137.036, with a relative uncertainty of about 1.5 × 10−10.
The constant was named by Arnold Sommerfeld, who introduced it in 1916 when extending the Bohr model of the atom. It quantified the gap in the fine structure of the spectral lines of the hydrogen atom, which had been measured precisely by Michelson and Morley in 1887.
Why the constant should have this value is not understood, but there are a number of ways to measure its value.
Definition
In terms of other physical constants, α may be defined as:
α = e²/(4πε0ħc)
where
e is the elementary charge (1.602176634 × 10−19 C, exact by definition in the SI);
h is the Planck constant (6.62607015 × 10−34 J⋅s, exact);
ħ is the reduced Planck constant, ħ = h/2π (≈ 1.054571817 × 10−34 J⋅s);
c is the speed of light (299792458 m/s, exact);
ε0 is the electric constant (≈ 8.8541878128 × 10−12 F⋅m−1).
Since the 2019 revision of the SI, the only quantity in this list that does not have an exact value in SI units is the electric constant (vacuum permittivity).
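As a check, the definition can be evaluated numerically from these constants; a small sketch (the CODATA 2018 value of ε0 is assumed):

```python
import math

e = 1.602176634e-19        # elementary charge, C (exact since the 2019 SI)
h = 6.62607015e-34         # Planck constant, J*s (exact)
hbar = h / (2 * math.pi)   # reduced Planck constant
c = 299792458.0            # speed of light, m/s (exact)
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m (CODATA 2018, not exact)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1 / alpha)    # ~0.0072973525693 and ~137.035999
```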
Alternative systems of units
The electrostatic CGS system implicitly sets 4πε0 = 1, as commonly found in older physics literature, where the expression of the fine-structure constant becomes
α = e²/(ħc)
A nondimensionalised system commonly used in high energy physics sets ε0 = c = ħ = 1, where the expression for the fine-structure constant becomes
α = e²/(4π)
As such, the fine-structure constant is chiefly a quantity determining (or determined by) the elementary charge: e = √(4πα) ≈ 0.30282212 in terms of such a natural unit of charge.
In the system of atomic units, which sets e = me = ħ = 1 and 4πε0 = 1, the expression for the fine-structure constant becomes
α = 1/c
Measurement
The CODATA recommended value of α is
α = 7.2973525693(11) × 10−3
This has a relative standard uncertainty of 1.5 × 10−10.
This value for α gives a value of the vacuum magnetic permeability µ0 that is 0.8 times the standard uncertainty away from its old defined value of 4π × 10−7 H/m, with the mean differing from the old value by only 0.13 parts per billion.
Historically the value of the reciprocal of the fine-structure constant is often given. The CODATA recommended value is
1/α = 137.035999084(21) | Fine-structure constant | Wikipedia | 450 | 49295 | https://en.wikipedia.org/wiki/Fine-structure%20constant | Physical sciences | Physical constants | Physics |
While the value of α can be determined from estimates of the constants that appear in any of its definitions, the theory of quantum electrodynamics (QED) provides a way to measure α directly using the quantum Hall effect or the anomalous magnetic moment of the electron. Other methods include the A.C. Josephson effect and photon recoil in atom interferometry.
There is general agreement for the value of α, as measured by these different methods. The preferred methods in 2019 are measurements of electron anomalous magnetic moments and of photon recoil in atom interferometry. The theory of QED predicts a relationship between the dimensionless magnetic moment of the electron and the fine-structure constant (the magnetic moment of the electron is also referred to as the electron g-factor ge). One of the most precise values of α obtained experimentally (as of 2023) is based on a measurement of ge using a one-electron so-called "quantum cyclotron" apparatus, together with a calculation via the theory of QED that involved tenth-order Feynman diagrams:
1/α = 137.035999166(15)
This measurement of α has a relative standard uncertainty of 1.1 × 10−10. This value and uncertainty are about the same as the latest experimental results.
Further refinement of the experimental value was published by the end of 2020, giving the value
1/α = 137.035999206(11)
with a relative accuracy of 8.1 × 10−11, which has a significant discrepancy from the previous experimental value.
Physical interpretations
The fine-structure constant, α, has several physical interpretations; for example, it is the ratio of the speed of the electron in the first circular orbit of the Bohr model of the atom to the speed of light.
When perturbation theory is applied to quantum electrodynamics, the resulting perturbative expansions for physical results are expressed as sets of power series in . Because is much less than one, higher powers of are soon unimportant, making the perturbation theory practical in this case. On the other hand, the large value of the corresponding factors in quantum chromodynamics makes calculations involving the strong nuclear force extremely difficult. | Fine-structure constant | Wikipedia | 388 | 49295 | https://en.wikipedia.org/wiki/Fine-structure%20constant | Physical sciences | Physical constants | Physics |
Variation with energy scale
In quantum electrodynamics, the more thorough quantum field theory underlying the electromagnetic coupling, the renormalization group dictates how the strength of the electromagnetic interaction grows logarithmically as the relevant energy scale increases. The value of the fine-structure constant α is linked to the observed value of this coupling associated with the energy scale of the electron mass: the electron's mass gives a lower bound for this energy scale, because it (and the positron) is the lightest charged object whose quantum loops can contribute to the running. Therefore, 1/137.036 is the asymptotic value of the fine-structure constant at zero energy.
At higher energies, such as the scale of the Z boson, about 90 GeV, one instead measures an effective α ≈ 1/127.
As the energy scale increases, the strength of the electromagnetic interaction in the Standard Model approaches that of the other two fundamental interactions, a feature important for grand unification theories. If quantum electrodynamics were an exact theory, the fine-structure constant would actually diverge at an energy known as the Landau pole – this fact undermines the consistency of quantum electrodynamics beyond perturbative expansions.
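A rough sketch of this logarithmic running at one loop, keeping only the electron's vacuum-polarization contribution (the formula and numbers below are a textbook approximation, not the full Standard Model running):

```python
import math

alpha0 = 1 / 137.035999      # fine-structure constant at zero energy
m_e = 0.000511               # electron mass scale, GeV
Q = 91.19                    # Z-boson energy scale, GeV

# One-loop QED running with a single charged lepton, valid for Q >> m_e:
# 1/alpha(Q) = 1/alpha(0) - (2 / (3*pi)) * ln(Q / m_e)
inv_alpha_Q = 1 / alpha0 - (2 / (3 * math.pi)) * math.log(Q / m_e)
print(inv_alpha_Q)           # ~134.5; including all charged leptons and quarks
                             # brings the effective value near 1/127, as stated above
```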
History
Based on the precise measurement of the hydrogen atom spectrum by Michelson and Morley in 1887,
Arnold Sommerfeld extended the Bohr model to include elliptical orbits and relativistic dependence of mass on velocity. He introduced a term for the fine-structure constant in 1916.
The first physical interpretation of the fine-structure constant was as the ratio of the velocity of the electron in the first circular orbit of the relativistic Bohr atom to the speed of light in the vacuum.
Equivalently, it was the quotient between the minimum angular momentum allowed by relativity for a closed orbit, and the minimum angular momentum allowed for it by quantum mechanics. It appears naturally in Sommerfeld's analysis, and determines the size of the splitting or fine-structure of the hydrogenic spectral lines. This constant was not seen as significant until Paul Dirac's linear relativistic wave equation in 1928, which gave the exact fine structure formula. | Fine-structure constant | Wikipedia | 438 | 49295 | https://en.wikipedia.org/wiki/Fine-structure%20constant | Physical sciences | Physical constants | Physics |
With the development of quantum electrodynamics (QED) the significance of α has broadened from a spectroscopic phenomenon to a general coupling constant for the electromagnetic field, determining the strength of the interaction between electrons and photons. The term α/2π is engraved on the tombstone of one of the pioneers of QED, Julian Schwinger, referring to his calculation of the anomalous magnetic dipole moment.
History of measurements
The CODATA values in the above table are computed by averaging other measurements; they are not independent experiments.
Potential variation over time
Physicists have pondered whether the fine-structure constant is in fact constant, or whether its value differs by location and over time. A varying α has been proposed as a way of solving problems in cosmology and astrophysics. String theory and other proposals for going beyond the Standard Model of particle physics have led to theoretical interest in whether the accepted physical constants (not just α) actually vary.
In the experiments below, Δα represents the change in α over time, which can be computed as Δα = αprev − αnow. If the fine-structure constant really is a constant, then any experiment should show that
Δα = 0,
or as close to zero as experiment can measure. Any value far away from zero would indicate that α does change over time. So far, most experimental data is consistent with α being constant.
Past rate of change
The first experimenters to test whether the fine-structure constant might actually vary examined the spectral lines of distant astronomical objects and the products of radioactive decay in the Oklo natural nuclear fission reactor. Their findings were consistent with no variation in the fine-structure constant between these two vastly separated locations and times.
Improved technology at the dawn of the 21st century made it possible to probe the value of α at much larger distances and to a much greater accuracy. In 1999, a team led by John K. Webb of the University of New South Wales claimed the first detection of a variation in α.
Using the Keck telescopes and a data set of 128 quasars at redshifts 0.2 < z < 4.2, Webb et al. found that their spectra were consistent with a slight increase in α over the last 10–12 billion years. Specifically, they found that
Δα/α = (−5.7 ± 1.0) × 10−6
In other words, they measured the value to be somewhere between −0.0000047 and −0.0000067. This is a very small value, but the error bars do not actually include zero. This result either indicates that α is not constant or that there is experimental error unaccounted for.
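The interval quoted above is just the one-sigma band around the reported central value; a quick sketch of that arithmetic:

```python
central, sigma = -5.7e-6, 1.0e-6     # Webb et al.'s reported delta-alpha/alpha
low, high = central - sigma, central + sigma
print(low, high)                     # -6.7e-06 and -4.7e-06: the band excludes zero
```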
In 2004, a smaller study of 23 absorption systems by Chand et al., using the Very Large Telescope, found no measurable variation:
Δα/α = (−0.6 ± 0.6) × 10−6 | Fine-structure constant | Wikipedia | 512 | 49295 | https://en.wikipedia.org/wiki/Fine-structure%20constant | Physical sciences | Physical constants | Physics |
However, in 2007 simple flaws were identified in the analysis method of Chand et al., discrediting those results.
King et al. have used Markov chain Monte Carlo methods to investigate the algorithm used by the UNSW group to determine Δα/α from the quasar spectra, and have found that the algorithm appears to produce correct uncertainties and maximum likelihood estimates for Δα/α for particular models. This suggests that the statistical uncertainties and best estimate for Δα/α stated by Webb et al. and Murphy et al. are robust.
Lamoreaux and Torgerson analyzed data from the Oklo natural nuclear fission reactor in 2004, and concluded that has changed in the past 2 billion years by 45 parts per billion. They claimed that this finding was "probably accurate to within 20%". Accuracy is dependent on estimates of impurities and temperature in the natural reactor. These conclusions have yet to be verified.
In 2007, Khatri and Wandelt of the University of Illinois at Urbana-Champaign realized that the 21 cm hyperfine transition in neutral hydrogen of the early universe leaves a unique absorption line imprint in the cosmic microwave background radiation.
They proposed using this effect to measure the value of α during the epoch before the formation of the first stars. In principle, this technique provides enough information to measure a variation of 1 part in 10^9 (4 orders of magnitude better than the current quasar constraints). However, the constraint which can be placed on α is strongly dependent upon effective integration time, going as 1/√t. The European LOFAR radio telescope would only be able to constrain Δα/α to about 0.3%. The collecting area required to constrain Δα/α to the current level of quasar constraints is on the order of 100 square kilometers, which is economically impracticable at present.
Present rate of change
In 2008, Rosenband et al. used the frequency ratio of Al+ and Hg+ in single-ion optical atomic clocks to place a very stringent constraint on the present-time temporal variation of α, namely Δα/α = (−1.6 ± 2.3) × 10−17 per year. A present-day null constraint on the time variation of alpha does not necessarily rule out time variation in the past. Indeed, some theories
that predict a variable fine-structure constant also predict that the value of the fine-structure constant should become practically fixed in its value once the universe enters its current dark energy-dominated epoch.
Spatial variation – Australian dipole
Researchers from Australia have said they had identified a variation of the fine-structure constant across the observable universe. | Fine-structure constant | Wikipedia | 491 | 49295 | https://en.wikipedia.org/wiki/Fine-structure%20constant | Physical sciences | Physical constants | Physics |
These results have not been replicated by other researchers. In September and October 2010, after Webb et al. released their research, physicists C. Orzel and S.M. Carroll separately suggested various approaches for how Webb's observations may be wrong. Orzel argues that the study may contain wrong data due to subtle differences in the two telescopes, while Carroll takes a totally different approach; he looks at the fine-structure constant as a scalar field and claims that if the telescopes are correct and the fine-structure constant varies smoothly over the universe, then the scalar field must have a very small mass. However, previous research has shown that the mass is not likely to be extremely small. Both of these scientists' early criticisms point to the fact that different techniques are needed to confirm or contradict the results, a conclusion Webb et al. previously stated in their study.
Other research finds no meaningful variation in the fine structure constant.
Anthropic explanation
The anthropic principle is an argument about the reason the fine-structure constant has the value it does: stable matter, and therefore life and intelligent beings, could not exist if its value were very different. One example is that, if modern grand unified theories are correct, then α needs to be between around 1/180 and 1/85 for proton decay to be slow enough for life to be possible.
Numerological explanations
As a dimensionless constant which does not seem to be directly related to any mathematical constant, the fine-structure constant has long fascinated physicists.
Arthur Eddington argued that the value could be "obtained by pure deduction" and he related it to the Eddington number, his estimate of the number of protons in the universe.
This led him in 1929 to conjecture that the reciprocal of the fine-structure constant was not approximately but precisely the integer 137.
By the 1940s experimental values for 1/α deviated sufficiently from 137 to refute Eddington's arguments.
Physicist Wolfgang Pauli commented on the appearance of certain numbers in physics, including the fine-structure constant, whose reciprocal he noted approximates the prime number 137. This constant so intrigued him that he collaborated with psychoanalyst Carl Jung in a quest to understand its significance. Similarly, Max Born believed that if the value of α differed, the universe would degenerate, and thus that α = 1/137 is a law of nature.
Richard Feynman, one of the originators and early developers of the theory of quantum electrodynamics (QED), referred to the fine-structure constant in these terms: | Fine-structure constant | Wikipedia | 512 | 49295 | https://en.wikipedia.org/wiki/Fine-structure%20constant | Physical sciences | Physical constants | Physics |
Conversely, statistician I. J. Good argued that a numerological explanation would only be acceptable if it could be based on a good theory that is not yet known but "exists" in the sense of a Platonic Ideal.
Attempts to find a mathematical basis for this dimensionless constant have continued up to the present time. However, no numerological explanation has ever been accepted by the physics community.
In the late 20th century, multiple physicists, including Stephen Hawking in his 1988 book A Brief History of Time, began exploring the idea of a multiverse, and the fine-structure constant was one of several universal constants that suggested the idea of a fine-tuned universe.
Quotes | Fine-structure constant | Wikipedia | 141 | 49295 | https://en.wikipedia.org/wiki/Fine-structure%20constant | Physical sciences | Physical constants | Physics |
A bipolar junction transistor (BJT) is a type of transistor that uses both electrons and electron holes as charge carriers. In contrast, a unipolar transistor, such as a field-effect transistor (FET), uses only one kind of charge carrier. A bipolar transistor allows a small current injected at one of its terminals to control a much larger current between the remaining two terminals, making the device capable of amplification or switching.
BJTs use two p–n junctions between two semiconductor types, n-type and p-type, which are regions in a single crystal of material. The junctions can be made in several different ways, such as changing the doping of the semiconductor material as it is grown, by depositing metal pellets to form alloy junctions, or by such methods as diffusion of n-type and p-type doping substances into the crystal. The superior predictability and performance of junction transistors quickly displaced the original point-contact transistor. Diffused transistors, along with other components, are elements of integrated circuits for analog and digital functions. Hundreds of bipolar junction transistors can be made in one circuit at a very low cost.
Bipolar transistor integrated circuits were the main active devices of a generation of mainframe and minicomputers, but most computer systems now use complementary metal–oxide–semiconductor (CMOS) integrated circuits relying on the field-effect transistor (FET). Bipolar transistors are still used for amplification of signals, switching, and in mixed-signal integrated circuits using BiCMOS. Specialized types are used for high voltage switches, for radio-frequency (RF) amplifiers, or for switching high currents.
Current direction conventions
By convention, the direction of current on diagrams is shown as the direction that a positive charge would move. This is called conventional current. However, current in metal conductors is generally due to the flow of electrons. Because electrons carry a negative charge, they move in the direction opposite to conventional current. On the other hand, inside a bipolar transistor, currents can be composed of both positively charged holes and negatively charged electrons. In this article, current arrows are shown in the conventional direction, but labels for the movement of holes and electrons show their actual direction inside the transistor.
Arrow direction
The arrow on the symbol for bipolar transistors indicates the p–n junction between base and emitter and points in the direction in which conventional current travels. | Bipolar junction transistor | Wikipedia | 512 | 49338 | https://en.wikipedia.org/wiki/Bipolar%20junction%20transistor | Technology | Semiconductors | null |
Function
BJTs exist as PNP and NPN types, based on the doping types of the three main terminal regions. An NPN transistor comprises two semiconductor junctions that share a thin p-doped region, and a PNP transistor comprises two semiconductor junctions that share a thin n-doped region. N-type means doped with impurities (such as phosphorus or arsenic) that provide mobile electrons, while p-type means doped with impurities (such as boron) that provide holes that readily accept electrons.
Charge flow in a BJT is due to diffusion of charge carriers (electrons and holes) across a junction between two regions of different charge carrier concentration. The regions of a BJT are called emitter, base, and collector. A discrete transistor has three leads for connection to these regions. Typically, the emitter region is heavily doped compared to the other two layers, and the collector is doped more lightly (typically ten times lighter) than the base. By design, most of the BJT collector current is due to the flow of charge carriers injected from a heavily doped emitter into the base where they are minority carriers (electrons in NPNs, holes in PNPs) that diffuse toward the collector, so BJTs are classified as minority-carrier devices.
In typical operation, the base–emitter junction is forward biased, which means that the p-doped side of the junction is at a more positive potential than the n-doped side, and the base–collector junction is reverse biased. When forward bias is applied to the base–emitter junction, the equilibrium between the thermally generated carriers and the repelling electric field of the emitter depletion region is disturbed. This allows thermally excited carriers (electrons in NPNs, holes in PNPs) to inject from the emitter into the base region. These carriers create a diffusion current through the base from the region of high concentration near the emitter toward the region of low concentration near the collector. | Bipolar junction transistor | Wikipedia | 420 | 49338 | https://en.wikipedia.org/wiki/Bipolar%20junction%20transistor | Technology | Semiconductors | null |
To minimize the fraction of carriers that recombine before reaching the collector–base junction, the transistor's base region must be thin enough that carriers can diffuse across it in much less time than the semiconductor's minority-carrier lifetime. Having a lightly doped base ensures recombination rates are low. In particular, the thickness of the base must be much less than the diffusion length of the carriers. The collector–base junction is reverse-biased, and so negligible carrier injection occurs from the collector to the base, but carriers that are injected into the base from the emitter, and diffuse to reach the collector–base depletion region, are swept into the collector by the electric field in the depletion region. The thin shared base and asymmetric collector–emitter doping are what differentiates a bipolar transistor from two separate diodes connected in series.
Voltage, current, and charge control
The collector–emitter current can be viewed as being controlled by the base–emitter current (current control), or by the base–emitter voltage (voltage control). These views are related by the current–voltage relation of the base–emitter junction, which is the usual exponential current–voltage curve of a p–n junction (diode).
The explanation for collector current is the concentration gradient of minority carriers in the base region. Due to low-level injection (in which there are many fewer excess carriers than normal majority carriers) the ambipolar transport rate (in which the excess majority and minority carriers flow at the same rate) is in effect determined by the excess minority carriers.
Detailed transistor models of transistor action, such as the Gummel–Poon model, account for the distribution of this charge explicitly to explain transistor behavior more exactly. The charge-control view easily handles phototransistors, where minority carriers in the base region are created by the absorption of photons, and handles the dynamics of turn-off, or recovery time, which depends on charge in the base region recombining. However, because base charge is not a signal that is visible at the terminals, the current- and voltage-control views are generally used in circuit design and analysis. | Bipolar junction transistor | Wikipedia | 459 | 49338 | https://en.wikipedia.org/wiki/Bipolar%20junction%20transistor | Technology | Semiconductors | null |
In analog circuit design, the current-control view is sometimes used because it is approximately linear. That is, the collector current is approximately β times the base current. Some basic circuits can be designed by assuming that the base–emitter voltage is approximately constant and that collector current is β times the base current. However, to accurately and reliably design production BJT circuits, the voltage-control model (e.g. the Ebers–Moll model) is required. The voltage-control model requires an exponential function to be taken into account, but when it is linearized such that the transistor can be modeled as a transconductance, as in the Ebers–Moll model, design for circuits such as differential amplifiers again becomes a mostly linear problem, so the voltage-control view is often preferred. For translinear circuits, in which the exponential I–V curve is key to the operation, the transistors are usually modeled as voltage-controlled current sources whose transconductance is proportional to their collector current. In general, transistor-level circuit analysis is performed using SPICE or a comparable analog-circuit simulator, so mathematical model complexity is usually not of much concern to the designer, but a simplified view of the characteristics allows designs to be created following a logical process.
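A small numerical sketch of the linearized voltage-control view; the bias values are illustrative, and the relation gm = IC/VT assumes the ideal exponential model:

```python
VT = 0.02585      # thermal voltage at ~300 K, volts
IC = 1.0e-3       # collector bias current, A (illustrative operating point)

gm = IC / VT      # transconductance of the linearized exponential law
print(gm)         # ~0.0387 A/V, i.e. ~38.7 mA/V

# A 1 mV small-signal wiggle on V_BE then produces about gm * 1 mV of
# collector-current change, which is why small-signal design is "mostly linear".
print(gm * 1e-3)  # ~38.7 microamps
```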
Turn-on, turn-off, and storage delay
Bipolar transistors, and particularly power transistors, have long base-storage times when they are driven into saturation; the base storage limits turn-off time in switching applications. A Baker clamp can prevent the transistor from heavily saturating, which reduces the amount of charge stored in the base and thus improves switching time.
Transistor characteristics: alpha (α) and beta (β)
The proportion of carriers able to cross the base and reach the collector is a measure of the BJT efficiency. The heavy doping of the emitter region and light doping of the base region causes many more electrons to be injected from the emitter into the base than holes to be injected from the base into the emitter. A thin and lightly doped base region means that most of the minority carriers that are injected into the base will diffuse to the collector and not recombine. | Bipolar junction transistor | Wikipedia | 460 | 49338 | https://en.wikipedia.org/wiki/Bipolar%20junction%20transistor | Technology | Semiconductors | null |
Common-emitter current gain
The common-emitter current gain is represented by βF or the h-parameter hFE; it is approximately the ratio of the collector's direct current to the base's direct current in the forward-active region. (The F subscript is used to indicate the forward-active mode of operation.) It is typically greater than 50 for small-signal transistors, but can be smaller in transistors designed for high-power applications. Both injection efficiency and recombination in the base reduce the BJT gain.
Common-base current gain
Another useful characteristic is the common-base current gain, αF. The common-base current gain is approximately the gain of current from emitter to collector in the forward-active region. This ratio usually has a value close to unity, between 0.980 and 0.998. It is less than unity due to recombination of charge carriers as they cross the base region.
Alpha and beta are related by the following identities:
αF = βF/(βF + 1),  βF = αF/(1 − αF)
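These identities translate directly into code; a trivial sketch (function names are ours):

```python
def alpha_from_beta(beta):
    """Common-base gain from common-emitter gain."""
    return beta / (beta + 1.0)

def beta_from_alpha(alpha):
    """Common-emitter gain from common-base gain."""
    return alpha / (1.0 - alpha)

print(alpha_from_beta(100.0))   # ~0.990, within the 0.980-0.998 range above
print(beta_from_alpha(0.995))   # 199.0: alpha near unity means large beta
```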
Beta is a convenient figure of merit to describe the performance of a bipolar transistor, but is not a fundamental physical property of the device. Bipolar transistors can be considered voltage-controlled devices (fundamentally the collector current is controlled by the base–emitter voltage; the base current could be considered a defect and is controlled by the characteristics of the base–emitter junction and recombination in the base). In many designs beta is assumed high enough so that base current has a negligible effect on the circuit. In some circuits (generally switching circuits), sufficient base current is supplied so that even the lowest beta value a particular device may have will still allow the required collector current to flow.
Structure
BJTs consist of three differently doped semiconductor regions: the emitter region, the base region and the collector region. These regions are, respectively, p type, n type and p type in a PNP transistor, and n type, p type and n type in an NPN transistor. Each semiconductor region is connected to a terminal, appropriately labeled: emitter (E), base (B) and collector (C). | Bipolar junction transistor | Wikipedia | 443 | 49338 | https://en.wikipedia.org/wiki/Bipolar%20junction%20transistor | Technology | Semiconductors | null |
The base is physically located between the emitter and the collector and is made from lightly doped, high-resistivity material. The collector surrounds the emitter region, making it almost impossible for the electrons injected into the base region to escape without being collected, thus making the resulting value of α very close to unity, and so, giving the transistor a large β. A cross-section view of a BJT indicates that the collector–base junction has a much larger area than the emitter–base junction.
The bipolar junction transistor, unlike other transistors, is usually not a symmetrical device. This means that interchanging the collector and the emitter makes the transistor leave the forward active mode and start to operate in reverse mode. Because the transistor's internal structure is usually optimized for forward-mode operation, interchanging the collector and the emitter makes the values of α and β in reverse operation much smaller than those in forward operation; often the α of the reverse mode is lower than 0.5. The lack of symmetry is primarily due to the doping ratios of the emitter and the collector. The emitter is heavily doped, while the collector is lightly doped, allowing a large reverse bias voltage to be applied before the collector–base junction breaks down. The collector–base junction is reverse biased in normal operation. The reason the emitter is heavily doped is to increase the emitter injection efficiency: the ratio of carriers injected by the emitter to those injected by the base. For high current gain, most of the carriers injected into the emitter–base junction must come from the emitter.
The low-performance "lateral" bipolar transistors sometimes used in CMOS processes are sometimes designed symmetrically, that is, with no difference between forward and backward operation.
Small changes in the voltage applied across the base–emitter terminals cause the current between the emitter and the collector to change significantly. This effect can be used to amplify the input voltage or current. BJTs can be thought of as voltage-controlled current sources, but are more simply characterized as current-controlled current sources, or current amplifiers, due to the low impedance at the base.
Early transistors were made from germanium but most modern BJTs are made from silicon. A significant minority are also now made from gallium arsenide, especially for very high speed applications (see HBT, below). | Bipolar junction transistor | Wikipedia | 505 | 49338 | https://en.wikipedia.org/wiki/Bipolar%20junction%20transistor | Technology | Semiconductors | null |
The heterojunction bipolar transistor (HBT) is an improvement of the BJT that can handle signals of very high frequencies up to several hundred GHz. It is common in modern ultrafast circuits, mostly RF systems.
Two commonly used HBTs are silicon–germanium and aluminum gallium arsenide, though a wide variety of semiconductors may be used for the HBT structure. HBT structures are usually grown by epitaxy techniques like MOCVD and MBE.
Regions of operation
Bipolar transistors have four distinct regions of operation, defined by BJT junction biases:
Forward-active (or simply active) The base–emitter junction is forward biased and the base–collector junction is reverse biased. Most bipolar transistors are designed to afford the greatest common-emitter current gain, βF, in forward-active mode. If this is the case, the collector–emitter current is approximately proportional to the base current, but many times larger, for small base current variations.
Reverse-active (or inverse-active or inverted) By reversing the biasing conditions of the forward-active region, a bipolar transistor goes into reverse-active mode. In this mode, the emitter and collector regions switch roles. Because most BJTs are designed to maximize current gain in forward-active mode, the βF in inverted mode is several times smaller (2–3 times for the ordinary germanium transistor). This transistor mode is seldom used, usually being considered only for failsafe conditions and some types of bipolar logic. The reverse bias breakdown voltage to the base may be an order of magnitude lower in this region.
Saturation With both junctions forward biased, a BJT is in saturation mode and facilitates high current conduction from the emitter to the collector (or the other direction in the case of NPN, with negatively charged carriers flowing from emitter to collector). This mode corresponds to a logical "on", or a closed switch.
Cut-off In cut-off, biasing conditions opposite of saturation (both junctions reverse biased) are present. There is very little current, which corresponds to a logical "off", or an open switch. | Bipolar junction transistor | Wikipedia | 459 | 49338 | https://en.wikipedia.org/wiki/Bipolar%20junction%20transistor | Technology | Semiconductors | null |
Although these regions are well defined for sufficiently large applied voltage, they overlap somewhat for small (less than a few hundred millivolts) biases. For example, in the typical grounded-emitter configuration of an NPN BJT used as a pulldown switch in digital logic, the "off" state never involves a reverse-biased junction because the base voltage never goes below ground; nevertheless the forward bias is close enough to zero that essentially no current flows, so this end of the forward active region can be regarded as the cutoff region.
Active-mode transistors in circuits
The diagram shows a schematic representation of an NPN transistor connected to two voltage sources. (The same description applies to a PNP transistor with reversed directions of current flow and applied voltage.) This applied voltage causes the lower p–n junction to become forward biased, allowing a flow of electrons from the emitter into the base. In active mode, the electric field existing between base and collector (caused by VCE) will cause the majority of these electrons to cross the upper p–n junction into the collector to form the collector current IC. The remainder of the electrons recombine with holes, the majority carriers in the base, making a current through the base connection to form the base current, IB. As shown in the diagram, the emitter current, IE, is the total transistor current, which is the sum of the other terminal currents, (i.e. IE = IB + IC).
In the diagram, the arrows representing current point in the direction of conventional current – the flow of electrons is in the opposite direction of the arrows because electrons carry negative electric charge. In active mode, the ratio of the collector current to the base current is called the DC current gain. This gain is usually 100 or more, but robust circuit designs do not depend on the exact value (for example see op-amp). The value of this gain for DC signals is referred to as hFE, and the value of this gain for small signals is referred to as hfe. That is, when a small change in the currents occurs, and sufficient time has passed for the new condition to reach a steady state, hfe is the ratio of the change in collector current to the change in base current. The symbol β is used for both hFE and hfe. | Bipolar junction transistor | Wikipedia | 471 | 49338 | https://en.wikipedia.org/wiki/Bipolar%20junction%20transistor | Technology | Semiconductors | null |
The emitter current is related to VBE exponentially. At room temperature, an increase in VBE by approximately 60 mV increases the emitter current by a factor of 10. Because the base current is approximately proportional to the collector and emitter currents, they vary in the same way.
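The "approximately 60 mV" figure follows from the exponential law; a one-line check (the VT value is assumed for ~300 K):

```python
import math

VT = 0.02585            # thermal voltage kT/q at ~300 K, volts
dV = VT * math.log(10)  # V_BE increase needed for a 10x current increase
print(dV * 1000)        # ~59.5 mV per decade of emitter current
```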
History
The bipolar point-contact transistor was invented in December 1947 at the Bell Telephone Laboratories by John Bardeen and Walter Brattain under the direction of William Shockley. The junction version known as the bipolar junction transistor (BJT), invented by Shockley in 1948, was for three decades the device of choice in the design of discrete and integrated circuits. Nowadays, the use of the BJT has declined in favor of CMOS technology in the design of digital integrated circuits. The incidental low-performance BJTs inherent in CMOS ICs, however, are often utilized as bandgap voltage references, silicon bandgap temperature sensors, and for handling electrostatic discharge.
Germanium transistors
The germanium transistor was more common in the 1950s and 1960s but has a greater tendency to exhibit thermal runaway. Since germanium p-n junctions have a lower forward bias than silicon, germanium transistors turn on at lower voltage. | Bipolar junction transistor | Wikipedia | 250 | 49338 | https://en.wikipedia.org/wiki/Bipolar%20junction%20transistor | Technology | Semiconductors | null |
Early manufacturing techniques
Various methods of manufacturing bipolar transistors were developed.
Point-contact transistor – first transistor ever constructed (December 1947), a bipolar transistor, limited commercial use due to high cost and noise.
Tetrode point-contact transistor – Point-contact transistor having two emitters. It became obsolete in the middle 1950s.
Junction transistors
Grown-junction transistor – first bipolar junction transistor made. Invented by William Shockley at Bell Labs on June 23, 1948. Patent filed on June 26, 1948.
Alloy-junction transistor – emitter and collector alloy beads fused to base. Developed at General Electric and RCA in 1951.
Micro-alloy transistor (MAT) – high-speed type of alloy junction transistor. Developed at Philco.
Micro-alloy diffused transistor (MADT) – high-speed type of alloy junction transistor, speedier than MAT, a diffused-base transistor. Developed at Philco.
Post-alloy diffused transistor (PADT) – high-speed type of alloy junction transistor, speedier than MAT, a diffused-base transistor. Developed at Philips.
Tetrode transistor – high-speed variant of grown-junction transistor or alloy junction transistor with two connections to base.
Surface-barrier transistor – high-speed metal-barrier junction transistor. Developed at Philco in 1953.
Drift-field transistor – high-speed bipolar junction transistor. Invented by Herbert Kroemer at the Central Bureau of Telecommunications Technology of the German Postal Service, in 1953.
Spacistor – around 1957.
Diffusion transistor – modern type of bipolar junction transistor. Prototypes developed at Bell Labs in 1954.
Diffused-base transistor – first implementation of diffusion transistor.
Mesa transistor – developed at Texas Instruments in 1957.
Planar transistor – the bipolar junction transistor that made mass-produced monolithic integrated circuits possible. Developed by Jean Hoerni at Fairchild in 1959.
Epitaxial transistor – a bipolar junction transistor made using vapor-phase deposition. See Epitaxy. Allows very precise control of doping levels and gradients.
Theory and modeling | Bipolar junction transistor | Wikipedia | 465 | 49338 | https://en.wikipedia.org/wiki/Bipolar%20junction%20transistor | Technology | Semiconductors | null |
BJTs can be thought of as two diodes (p–n junctions) sharing a common region that minority carriers can move through. A PNP BJT will function like two diodes that share an N-type cathode region, and the NPN like two diodes sharing a P-type anode region. Connecting two diodes with wires will not make a BJT, since minority carriers will not be able to get from one p–n junction to the other through the wire.
Both types of BJT function by letting a small current input to the base control an amplified output from the collector. The result is that the BJT makes a good switch that is controlled by its base input. The BJT also makes a good amplifier, since it can multiply a weak input signal to about 100 times its original strength. Networks of BJTs are used to make powerful amplifiers with many different applications.
In the discussion below, focus is on the NPN BJT. In what is called active mode, the base–emitter voltage and collector–base voltage are positive, forward biasing the emitter–base junction and reverse-biasing the collector–base junction. In this mode, electrons are injected from the forward biased n-type emitter region into the p-type base where they diffuse as minority carriers to the reverse-biased n-type collector and are swept away by the electric field in the reverse-biased collector–base junction.
For an illustration of forward and reverse bias, see semiconductor diodes.
Large-signal models
In 1954, Jewell James Ebers and John L. Moll introduced their mathematical model of transistor currents:
Ebers–Moll model
The DC emitter and collector currents in active mode are well modeled by an approximation to the Ebers–Moll model:
IE = IES (exp(VBE/VT) − 1)
IC = αF IE
IB = (1 − αF) IE
The base internal current is mainly by diffusion (see Fick's law) and
Jn(base) = (q Dn nbo / W) exp(VBE/VT)
where
VT is the thermal voltage (approximately 26 mV at 300 K ≈ room temperature)
IE is the emitter current
IC is the collector current
αF is the common base forward short-circuit current gain (0.98 to 0.998)
IES is the reverse saturation current of the base–emitter diode (on the order of 10−15 to 10−12 amperes)
VBE is the base–emitter voltage
Dn is the diffusion constant for electrons in the p-type base
W is the base width
q is the elementary charge and nbo is the equilibrium electron concentration in the base
The αF and IES forward parameters are as described previously. A reverse βR is sometimes included in the model. | Bipolar junction transistor | Wikipedia | 510 | 49338 | https://en.wikipedia.org/wiki/Bipolar%20junction%20transistor | Technology | Semiconductors | null |
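A runnable sketch of the active-mode approximation above; the parameter values are illustrative placeholders, not taken from any datasheet:

```python
import math

I_ES = 1.0e-14   # reverse saturation current of the B-E diode, A (illustrative)
alpha_F = 0.99   # common-base forward short-circuit current gain
VT = 0.026       # thermal voltage, V (~300 K)

def active_mode_currents(vbe):
    """Approximate Ebers-Moll currents in forward-active mode."""
    ie = I_ES * (math.exp(vbe / VT) - 1.0)   # emitter current
    ic = alpha_F * ie                        # collector current
    ib = ie - ic                             # base current (Kirchhoff)
    return ie, ic, ib

print(active_mode_currents(0.65))   # milliamp-scale currents for V_BE ~ 0.65 V
```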
The unapproximated Ebers–Moll equations used to describe the three currents in any operating region are given below. These equations are based on the transport model for a bipolar junction transistor.
iC = IS [(exp(VBE/VT) − exp(VBC/VT)) − (1/βR)(exp(VBC/VT) − 1)]
iB = IS [(1/βF)(exp(VBE/VT) − 1) + (1/βR)(exp(VBC/VT) − 1)]
iE = IS [(exp(VBE/VT) − exp(VBC/VT)) + (1/βF)(exp(VBE/VT) − 1)]
where
iC is the collector current
iB is the base current
iE is the emitter current
βF is the forward common emitter current gain (20 to 500)
βR is the reverse common emitter current gain (0 to 20)
IS is the reverse saturation current (on the order of 10−15 to 10−12 amperes)
VT is the thermal voltage (approximately 26 mV at 300 K ≈ room temperature)
VBE is the base–emitter voltage
VBC is the base–collector voltage
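The same transport-model equations in runnable form; a sketch under the stated assumptions, with illustrative parameter values:

```python
import math

VT = 0.026                     # thermal voltage, V
IS = 1.0e-14                   # reverse saturation current, A (illustrative)
beta_F, beta_R = 100.0, 2.0    # forward and reverse common-emitter gains

def npn_currents(vbe, vbc):
    """Transport-model (unapproximated Ebers-Moll) NPN terminal currents,
    valid in any operating region."""
    x = math.exp(vbe / VT)
    y = math.exp(vbc / VT)
    iC = IS * ((x - y) - (y - 1.0) / beta_R)
    iB = IS * ((x - 1.0) / beta_F + (y - 1.0) / beta_R)
    iE = IS * ((x - y) + (x - 1.0) / beta_F)   # note iE = iC + iB
    return iC, iB, iE

print(npn_currents(0.65, -5.0))   # forward-active: B-E forward, B-C reverse biased
print(npn_currents(0.70, 0.55))   # saturation: both junctions forward biased
```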
Base-width modulation
As the collector–base voltage () varies, the collector–base depletion region varies in size. An increase in the collector–base voltage, for example, causes a greater reverse bias across the collector–base junction, increasing the collector–base depletion region width, and decreasing the width of the base. This variation in base width often is called the Early effect after its discoverer James M. Early.
Narrowing of the base width has two consequences:
There is a lesser chance for recombination within the "smaller" base region.
The charge gradient is increased across the base, and consequently, the current of minority carriers injected across the emitter junction increases.
Both factors increase the collector or "output" current of the transistor in response to an increase in the collector–base voltage.
Punchthrough
When the base–collector voltage reaches a certain (device-specific) value, the base–collector depletion region boundary meets the base–emitter depletion region boundary. In this state the transistor effectively has no base, and the device thus loses all gain.
Gummel–Poon charge-control model
The Gummel–Poon model is a detailed charge-controlled model of BJT dynamics, which has been adopted and elaborated by others to explain transistor dynamics in greater detail than the terminal-based models typically do. This model also includes the dependence of transistor -values upon the direct current levels in the transistor, which are assumed current-independent in the Ebers–Moll model.
Small-signal models
Hybrid-pi model | Bipolar junction transistor | Wikipedia | 467 | 49338 | https://en.wikipedia.org/wiki/Bipolar%20junction%20transistor | Technology | Semiconductors | null |
The hybrid-pi model is a popular circuit model used for analyzing the small signal and AC behavior of bipolar junction and field effect transistors. Sometimes it is also called Giacoletto model because it was introduced by L.J. Giacoletto in 1969. The model can be quite accurate for low-frequency circuits and can easily be adapted for higher-frequency circuits with the addition of appropriate inter-electrode capacitances and other parasitic elements.
h-parameter model | Bipolar junction transistor | Wikipedia | 95 | 49338 | https://en.wikipedia.org/wiki/Bipolar%20junction%20transistor | Technology | Semiconductors | null |
Another model commonly used to analyze BJT circuits is the h-parameter model, also known as the hybrid equivalent model, closely related to the hybrid-pi model and the y-parameter two-port, but using input current and output voltage as independent variables, rather than input and output voltages. This two-port network is particularly suited to BJTs as it lends itself easily to the analysis of circuit behavior, and may be used to develop further accurate models. As shown, the term x in the model represents a different BJT lead depending on the topology used. For common-emitter mode the various symbols take on the specific values as:
Terminal 1, base
Terminal 2, collector
Terminal 3 (common), emitter; giving x to be e
ii, base current (ib)
io, collector current (ic)
Vin, base-to-emitter voltage (VBE)
Vo, collector-to-emitter voltage (VCE)
and the h-parameters are given by:
hix = hie for the common-emitter configuration, the input impedance of the transistor (corresponding to the base resistance rpi).
hrx = hre, a reverse transfer relationship, it represents the dependence of the transistor's (input) IB–VBE curve on the value of (output) VCE. It is usually very small and is often neglected (assumed to be zero) at DC.
hfx = hfe, the "forward" current-gain of the transistor, sometimes written h21. This parameter, with lower case "fe" to imply small signal (AC) gain, or more often with capital letters for "FE" (specified as hFE) to mean the "large signal" or DC current-gain (βDC or often simply β), is one of the main parameters in datasheets, and may be given for a typical collector current and voltage or plotted as a function of collector current. See below.
hox = 1/hoe, the output impedance of the transistor. The parameter hoe usually corresponds to the output admittance of the bipolar transistor and has to be inverted to convert it to an impedance.
As shown, the h-parameters have lower-case subscripts and hence signify AC conditions or analyses. For DC conditions they are specified in upper-case. For the CE topology, an approximate h-parameter model is commonly used which further simplifies the circuit analysis. For this the hoe and hre parameters are neglected (that is, both are set to zero; setting hoe to zero is equivalent to taking the output impedance 1/hoe as infinite). The h-parameter model as shown is suited to low-frequency, small-signal analysis. For high-frequency analyses the inter-electrode capacitances that are important at high frequencies must be added.
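As a sketch of how the full two-port equations are solved for a resistive load, the following uses invented but typical parameter values; it also prints the common approximation Av ≈ -hfe·RL/hie obtained by neglecting hre and hoe:

```python
def ce_small_signal(h_ie, h_re, h_fe, h_oe, r_load):
    """Solve the common-emitter h-parameter two-port driving a resistive load.

    Two-port equations:   v_be = h_ie*i_b + h_re*v_ce
                          i_c  = h_fe*i_b + h_oe*v_ce
    Load constraint:      v_ce = -i_c * r_load
    Returns (current gain, voltage gain, input impedance).
    """
    i_b = 1.0                                  # test stimulus; the results are ratios
    i_c = h_fe * i_b / (1.0 + h_oe * r_load)   # from substituting the load constraint
    v_ce = -i_c * r_load
    v_be = h_ie * i_b + h_re * v_ce
    return i_c / i_b, v_ce / v_be, v_be / i_b

ai, av, zin = ce_small_signal(h_ie=2.5e3, h_re=1e-4, h_fe=150, h_oe=25e-6, r_load=4.7e3)
print(f"Ai = {ai:.1f}, Av = {av:.1f}, Zin = {zin:.0f} ohms")
print(f"approximate Av (h_re, h_oe neglected) = {-150 * 4.7e3 / 2.5e3:.1f}")
```

With these example values the exact and approximate voltage gains differ by under ten percent, which is why the simplified model is usually acceptable at low frequencies.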
Etymology of hFE
The h refers to its being an h-parameter, a set of parameters named for their origin in a hybrid equivalent circuit model (see above). As with all h-parameters, the choice of lower case or capitals for the letters that follow the "h" is significant; lower-case signifies "small signal" parameters, that is, the slope of the particular relationship; upper-case letters imply "large signal" or DC values, the ratio of the voltages or currents. In the case of the very often used hFE:
F is from Forward current amplification also called the current gain.
E refers to the transistor operating in a common Emitter (CE) configuration.
So hFE refers to the (total; DC) collector current divided by the base current, and is dimensionless. It is a parameter that varies somewhat with collector current, but is often approximated as a constant; it is normally specified at a typical collector current and voltage, or graphed as a function of collector current.
Had capital letters not been used in the subscript, i.e. if it were written hfe, the parameter would indicate small-signal (AC) current gain, i.e. the slope of the collector current versus base current graph at a given point, which is often close to the hFE value unless the test frequency is high.
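A small numeric illustration of the distinction, using hypothetical measured operating points (the ratio at a point gives hFE; the local slope gives hfe):

```python
# Hypothetical DC operating points: (I_B in microamps, I_C in milliamps)
points = [(10, 1.9), (12, 2.4), (14, 2.9)]

ib_ua, ic_ma = points[1]
h_FE = (ic_ma * 1e-3) / (ib_ua * 1e-6)       # large-signal (DC) gain: the ratio I_C/I_B
print(f"h_FE = {h_FE:.0f}")                   # -> 200

d_ic = (points[2][1] - points[0][1]) * 1e-3   # delta I_C around the same point
d_ib = (points[2][0] - points[0][0]) * 1e-6   # delta I_B
h_fe = d_ic / d_ib                            # small-signal gain: the slope dI_C/dI_B
print(f"h_fe = {h_fe:.0f}")                   # -> 250
```

The two values are close but not equal, exactly as described above.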
Industry models
The Gummel–Poon SPICE model is often used, but it suffers from several limitations. For instance, reverse breakdown of the base–emitter diode is not captured by the SGP (SPICE Gummel–Poon) model, neither are thermal effects (self-heating) or quasi-saturation. These have been addressed in various more advanced models which either focus on specific cases of application (Mextram, HICUM, Modella) or are designed for universal usage (VBIC).
Applications
The BJT remains a device that excels in some applications, such as discrete circuit design, due to the very wide selection of BJT types available, and because of its high transconductance and output resistance compared to MOSFETs.
The BJT is also the choice for demanding analog circuits, especially for very-high-frequency applications, such as radio-frequency circuits for wireless systems.
High-speed digital logic
Emitter-coupled logic (ECL) uses BJTs.
Bipolar transistors can be combined with MOSFETs in an integrated circuit by using a BiCMOS process of wafer fabrication to create circuits that take advantage of the application strengths of both types of transistor.
Amplifiers
The transistor parameters α and β characterize the current gain of the BJT. It is this gain that allows BJTs to be used as the building blocks of electronic amplifiers. The three main BJT amplifier topologies are:
Common emitter
Common base
Common collector
Temperature sensors
Because of the known temperature and current dependence of the forward-biased base–emitter junction voltage, the BJT can be used to measure temperature by subtracting two voltages at two different bias currents in a known ratio.
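A sketch of the two-current measurement, assuming an ideal exponential junction so that the device-dependent saturation current cancels in the subtraction (the constants are the standard physical values; the example reading is illustrative):

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
Q_E = 1.602176634e-19    # elementary charge, C

def temperature_from_delta_vbe(delta_vbe, current_ratio):
    """ΔV_BE = (kT/q)·ln(N) for bias currents in a known ratio N, so
    T = q·ΔV_BE / (k·ln N). Subtracting two V_BE readings cancels the
    saturation-current term, which is why two bias currents are used."""
    return Q_E * delta_vbe / (K_B * math.log(current_ratio))

# Example: a 10:1 current ratio giving ΔV_BE = 59.5 mV implies roughly room temperature
print(f"T ≈ {temperature_from_delta_vbe(59.5e-3, 10):.1f} K")   # ≈ 299.9 K
```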
Logarithmic converters
Because base–emitter voltage varies as the logarithm of the base–emitter and collector–emitter currents, a BJT can also be used to compute logarithms and anti-logarithms. A diode can also perform these nonlinear functions but the transistor provides more circuit flexibility.
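A sketch of the transdiode logarithmic converter this describes, assuming an idealized exponential junction; the saturation current and thermal voltage values are illustrative:

```python
import math

V_T = 0.026    # thermal voltage at ~300 K, volts
I_S = 1e-14    # saturation current, amperes (device-dependent; illustrative)

def log_converter_output(i_in):
    """Idealized transdiode log amplifier: the op-amp forces the input current
    through the collector, so V_out = -V_BE = -V_T * ln(I_in / I_S)."""
    return -V_T * math.log(i_in / I_S)

for i_in in (1e-6, 1e-5, 1e-4):
    print(f"I_in = {i_in:.0e} A -> V_out = {log_converter_output(i_in) * 1e3:.1f} mV")
```

The output steps by about 60 mV (V_T·ln 10) per decade of input current, which is the logarithmic behavior referred to above.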
Avalanche pulse generators
Transistors may be deliberately made with a lower collector-to-emitter breakdown voltage than the collector-to-base breakdown voltage. If the emitter–base junction is reverse biased, the collector–emitter voltage may be maintained at a voltage just below breakdown. As soon as the base voltage is allowed to rise and current flows, avalanche occurs, and impact ionization in the collector–base depletion region rapidly floods the base with carriers and turns the transistor fully on. So long as the pulses are short enough and infrequent enough that the device is not damaged, this effect can be used to create very sharp falling edges.
Special avalanche transistor devices are made for this application.
Grid computing is the use of widely distributed computer resources to reach a common goal. A computing grid can be thought of as a distributed system with non-interactive workloads that involve many files. Grid computing is distinguished from conventional high-performance computing systems such as cluster computing in that grid computers have each node set to perform a different task/application. Grid computers also tend to be more heterogeneous and geographically dispersed (thus not physically coupled) than cluster computers. Although a single grid can be dedicated to a particular application, commonly a grid is used for a variety of purposes. Grids are often constructed with general-purpose grid middleware software libraries. Grid sizes can be quite large.
Grids are a form of distributed computing composed of many networked loosely coupled computers acting together to perform large tasks. For certain applications, distributed or grid computing can be seen as a special type of parallel computing that relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, etc.) connected to a computer network (private or public) by a conventional network interface, such as Ethernet. This is in contrast to the traditional notion of a supercomputer, which has many processors connected by a local high-speed computer bus. This technology has been applied to computationally intensive scientific, mathematical, and academic problems through volunteer computing, and it is used in commercial enterprises for such diverse applications as drug discovery, economic forecasting, seismic analysis, and back office data processing in support of e-commerce and Web services.
Grid computing combines computers from multiple administrative domains to reach a common goal, to solve a single task, and may then disappear just as quickly. The size of a grid may vary from small—confined to a network of computer workstations within a corporation, for example—to large, public collaborations across many companies and networks. "The notion of a confined grid may also be known as an intra-nodes cooperation whereas the notion of a larger, wider grid may thus refer to an inter-nodes cooperation".
Coordinating applications on Grids can be a complex task, especially when coordinating the flow of information across distributed computing resources. Grid workflow systems have been developed as a specialized form of a workflow management system designed specifically to compose and execute a series of computational or data manipulation steps, or a workflow, in the grid context.
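A toy sketch of the ordering problem a grid workflow system must solve, using Python's standard-library graphlib; the step names are invented, and real workflow engines add scheduling, data staging, and fault tolerance on top of this:

```python
from graphlib import TopologicalSorter

# Each workflow step maps to the set of steps it depends on.
workflow = {
    "fetch_data": set(),
    "clean_data": {"fetch_data"},
    "simulate_a": {"clean_data"},
    "simulate_b": {"clean_data"},
    "aggregate":  {"simulate_a", "simulate_b"},
}

def dispatch(step):
    print(f"dispatching {step} to an available grid node")

ts = TopologicalSorter(workflow)
ts.prepare()
while ts.is_active():
    for step in ts.get_ready():   # steps whose dependencies have all finished;
        dispatch(step)            # these could run in parallel on different nodes
        ts.done(step)
```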
Comparison of grids and conventional supercomputers
“Distributed” or “grid” computing in general is a special type of parallel computing that relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, etc.) connected to a network (private, public or the Internet) by a conventional network interface. The chief advantage is that the nodes are commodity hardware, which can be produced cheaply, in contrast to the lower cost-efficiency of designing and constructing a small number of custom supercomputers. The primary performance disadvantage is that the various processors and local storage areas do not have high-speed connections. This arrangement is thus well-suited to applications in which multiple parallel computations can take place independently, without the need to communicate intermediate results between processors. The high-end scalability of geographically dispersed grids is generally favorable, due to the low need for connectivity between nodes relative to the capacity of the public Internet.
There are also some differences between programming for a supercomputer and programming for a grid computing system. It can be costly and difficult to write programs that can run in the environment of a supercomputer, which may have a custom operating system, or require the program to address concurrency issues. If a problem can be adequately parallelized, a “thin” layer of “grid” infrastructure can allow conventional, standalone programs, given a different part of the same problem, to run on multiple machines. This makes it possible to write and debug on a single conventional machine and eliminates complications due to multiple instances of the same program running in the same shared memory and storage space at the same time.
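A minimal illustration of that "thin layer" idea, using local processes in place of grid machines: the worker is an ordinary standalone function, and the work units are fully independent (all names here are invented for the example):

```python
from multiprocessing import Pool

def count_primes(bounds):
    """Standalone worker: count primes in [lo, hi). No intermediate results
    cross between workers, the property that makes a problem grid-friendly."""
    lo, hi = bounds
    def is_prime(n):
        return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    # Split one large range into independent work units; on a grid each unit
    # would go to a separate machine rather than a local process.
    units = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]
    with Pool() as pool:
        print(sum(pool.map(count_primes, units)))   # -> 9592 primes below 100,000
```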
Design considerations and variations
One feature of distributed grids is that they can be formed from computing resources belonging to one or multiple individuals or organizations (known as multiple administrative domains). This can facilitate commercial transactions, as in utility computing, or make it easier to assemble volunteer computing networks.
One disadvantage of this feature is that the computers which are actually performing the calculations might not be entirely trustworthy. The designers of the system must thus introduce measures to prevent malfunctions or malicious participants from producing false, misleading, or erroneous results, and from using the system as an attack vector. This often involves assigning work randomly to different nodes (presumably with different owners) and checking that at least two different nodes report the same answer for a given work unit. Discrepancies would identify malfunctioning and malicious nodes. However, due to the lack of central control over the hardware, there is no way to guarantee that nodes will not drop out of the network at random times. Some nodes (like laptops or dial-up Internet customers) may also be available for computation but not network communications for unpredictable periods. These variations can be accommodated by assigning large work units (thus reducing the need for continuous network connectivity) and reassigning work units when a given node fails to report its results in the expected time.
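A sketch of the replicate-and-compare check described above; the function and node names are invented, and real systems such as BOINC implement considerably more elaborate validators:

```python
from collections import Counter

def validate_work_unit(result_by_node, quorum=2):
    """Accept a result only if at least `quorum` independent nodes agree;
    nodes that disagree with the accepted result are flagged as suspect.

    result_by_node: dict mapping node id -> reported result
    Returns (accepted_result_or_None, suspect_node_ids).
    """
    counts = Counter(result_by_node.values())
    result, votes = counts.most_common(1)[0]
    if votes >= quorum:
        suspects = [n for n, r in result_by_node.items() if r != result]
        return result, suspects
    return None, list(result_by_node)   # no quorum: reassign the whole unit

# Three replicas of the same work unit, assigned to different nodes:
accepted, suspects = validate_work_unit({"node-17": 42, "node-03": 42, "node-88": 41})
print(accepted, suspects)   # -> 42 ['node-88']
```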
Another set of what could be termed social compatibility issues in the early days of grid computing related to the goals of grid developers to carry their innovation beyond the original field of high-performance computing and across disciplinary boundaries into new fields, like that of high-energy physics.
The impacts of trust and availability on performance and development difficulty can influence the choice of whether to deploy onto a dedicated cluster, to idle machines internal to the developing organization, or to an open external network of volunteers or contractors. In many cases, the participating nodes must trust the central system not to abuse the access that is being granted, by interfering with the operation of other programs, mangling stored information, transmitting private data, or creating new security holes. Other systems employ measures to reduce the amount of trust “client” nodes must place in the central system such as placing applications in virtual machines.
Public systems or those crossing administrative domains (including different departments in the same organization) often result in the need to run on heterogeneous systems, using different operating systems and hardware architectures. With many languages, there is a trade-off between investment in software development and the number of platforms that can be supported (and thus the size of the resulting network). Cross-platform languages can reduce the need to make this tradeoff, though potentially at the expense of high performance on any given node (due to run-time interpretation or lack of optimization for the particular platform). Various middleware projects have created generic infrastructure to allow diverse scientific and commercial projects to harness a particular associated grid or for the purpose of setting up new grids. BOINC is a common one for various academic projects seeking public volunteers; more are listed at the end of the article.
In fact, the middleware can be seen as a layer between the hardware and the software. On top of the middleware, a number of technical areas have to be considered, and these may or may not be middleware independent. Example areas include SLA management, Trust, and Security, Virtual organization management, License Management, Portals and Data Management. These technical areas may be taken care of in a commercial solution, though the cutting edge of each area is often found within specific research projects examining the field.
Market segmentation of the grid computing market
For the segmentation of the grid computing market, two perspectives need to be considered: the provider side and the user side:
The provider side
The overall grid market comprises several specific markets. These are the grid middleware market, the market for grid-enabled applications, the utility computing market, and the software-as-a-service (SaaS) market.
Grid middleware is a specific software product, which enables the sharing of heterogeneous resources, and Virtual Organizations. It is installed and integrated into the existing infrastructure of the involved company or companies and provides a special layer placed between the heterogeneous infrastructure and the specific user applications. Major grid middlewares are Globus Toolkit, gLite, and UNICORE.
Utility computing is the provision of grid computing and applications as a service, either as an open grid utility or as a hosting solution for one organization or a VO. Major players in the utility computing market are Sun Microsystems, IBM, and HP.
Grid-enabled applications are specific software applications that can utilize grid infrastructure. This is made possible by the use of grid middleware, as pointed out above.
Software as a service (SaaS) is “software that is owned, delivered and managed remotely by one or more providers.” (Gartner 2007) Additionally, SaaS applications are based on a single set of common code and data definitions. They are consumed in a one-to-many model, and SaaS uses a Pay As You Go (PAYG) model or a subscription model that is based on usage. Providers of SaaS do not necessarily own the computing resources themselves, which are required to run their SaaS. Therefore, SaaS providers may draw upon the utility computing market. The utility computing market provides computing resources for SaaS providers.
The user side
For companies on the demand or user side of the grid computing market, the different segments have significant implications for their IT deployment strategy. The IT deployment strategy as well as the type of IT investments made are relevant aspects for potential grid users and play an important role for grid adoption.
CPU scavenging
CPU-scavenging, cycle-scavenging, or shared computing creates a “grid” from the idle resources in a network of participants (whether worldwide or internal to an organization). Typically, this technique exploits the 'spare' instruction cycles resulting from the intermittent inactivity that typically occurs at night, during lunch breaks, or even during the (comparatively minuscule, though numerous) moments of idle waiting that modern desktop CPUs experience throughout the day (when the computer is waiting on IO from the user, network, or storage). In practice, participating computers also donate some supporting amount of disk storage space, RAM, and network bandwidth, in addition to raw CPU power.
Many volunteer computing projects, such as BOINC, use the CPU scavenging model. Since nodes are likely to go "offline" from time to time, as their owners use their resources for their primary purpose, this model must be designed to handle such contingencies.
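A bare-bones sketch of a scavenging loop, assuming the third-party psutil package for load measurement; real clients such as BOINC or HTCondor also watch keyboard and mouse activity, checkpoint partial work, and report results back to a server:

```python
import time
import psutil   # third-party: pip install psutil

IDLE_THRESHOLD = 20.0   # percent CPU use below which the host counts as idle
BACKOFF_SECONDS = 5.0   # how long to wait when the owner is using the machine

def next_work_chunk():
    """Stand-in for fetching and running one work unit from a project server."""
    return sum(i * i for i in range(100_000))

def scavenge_cycles():
    """Run work only while the machine looks idle, yielding as soon as the
    owner's own use pushes the CPU load back up."""
    while True:
        if psutil.cpu_percent(interval=1.0) < IDLE_THRESHOLD:
            next_work_chunk()
        else:
            time.sleep(BACKOFF_SECONDS)   # back off; the owner needs the machine

# scavenge_cycles()   # runs forever by design, so it is left commented out here
```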
Creating an Opportunistic Environment is another implementation of CPU-scavenging, in which a special workload management system harvests idle desktop computers for compute-intensive jobs; this is also referred to as an Enterprise Desktop Grid (EDG). For instance, HTCondor (the open-source high-throughput computing software framework for coarse-grained distributed parallelization of computationally intensive tasks) can be configured to only use desktop machines where the keyboard and mouse are idle to effectively harness wasted CPU power from otherwise idle desktop workstations. Like other full-featured batch systems, HTCondor provides a job queueing mechanism, scheduling policy, priority scheme, resource monitoring, and resource management. It can be used to manage workload on a dedicated cluster of computers as well, or it can seamlessly integrate both dedicated resources (rack-mounted clusters) and non-dedicated desktop machines (cycle scavenging) into one computing environment.
History
The term grid computing originated in the early 1990s as a metaphor for making computer power as easy to access as an electric power grid. The power grid metaphor for accessible computing quickly became canonical when Ian Foster and Carl Kesselman published their seminal work, "The Grid: Blueprint for a new computing infrastructure" (1999). This was preceded by decades by the metaphor of utility computing (1961): computing as a public utility, analogous to the phone system.
CPU scavenging and volunteer computing were popularized beginning in 1997 by distributed.net and later in 1999 by SETI@home to harness the power of networked PCs worldwide, in order to solve CPU-intensive research problems.
The ideas of the grid (including those from distributed computing, object-oriented programming, and Web services) were brought together by Ian Foster and Steve Tuecke of the University of Chicago, and Carl Kesselman of the University of Southern California's Information Sciences Institute. The trio, who led the effort to create the Globus Toolkit, is widely regarded as the "fathers of the grid". The toolkit incorporates not just computation management but also storage management, security provisioning, data movement, monitoring, and a toolkit for developing additional services based on the same infrastructure, including agreement negotiation, notification mechanisms, trigger services, and information aggregation. While the Globus Toolkit remains the de facto standard for building grid solutions, a number of other tools have been built that answer some subset of services needed to create an enterprise or global grid.
In 2007 the term cloud computing came into popularity, which is conceptually similar to the canonical Foster definition of grid computing (in terms of computing resources being consumed as electricity is from the power grid) and earlier utility computing.
Progress
In November 2006, Edward Seidel received the Sidney Fernbach Award at the Supercomputing Conference in Tampa, Florida, "for outstanding contributions to the development of software for HPC and Grid computing to enable the collaborative numerical investigation of complex problems in physics; in particular, modeling black hole collisions." This award, which is one of the highest honors in computing, was awarded for his achievements in numerical relativity.
Fastest virtual supercomputers
As of March 2020, Folding@home – 1.1 exaFLOPS.
As of April 7, 2020, BOINC – 29.8 PFLOPS.
As of November 2019, IceCube via OSG – 350 fp32 PFLOPS.
As of February 2018, Einstein@Home – 3.489 PFLOPS.
As of April 7, 2020, SETI@Home – 1.11 PFLOPS.
As of April 7, 2020, MilkyWay@Home – 1.465 PFLOPS.
As of March 2019, GIMPS – 0.558 PFLOPS.
Also, as of March 2019, the Bitcoin Network had a measured computing power equivalent to over 80,000 exaFLOPS (Floating-point Operations Per Second). This measurement reflects the number of FLOPS required to equal the hash output of the Bitcoin network rather than its capacity for general floating-point arithmetic operations, since the elements of the Bitcoin network (Bitcoin mining ASICs) perform only the specific cryptographic hash computation required by the Bitcoin protocol.
Projects and applications
Grid computing offers a way to solve Grand Challenge problems such as protein folding, financial modeling, earthquake simulation, and climate/weather modeling, and was integral in enabling the Large Hadron Collider at CERN. Grids offer a way of using information technology resources optimally inside an organization. They also provide a means for offering information technology as a utility for commercial and noncommercial clients, with those clients paying only for what they use, as with electricity or water.
As of October 2016, over 4 million machines running the open-source Berkeley Open Infrastructure for Network Computing (BOINC) platform are members of the World Community Grid. One of the projects using BOINC is SETI@home, which was using more than 400,000 computers to achieve 0.828 petaFLOPS as of October 2016. As of October 2016 Folding@home, which is not part of BOINC, achieved more than 101 x86-equivalent petaFLOPS on over 110,000 machines.
The European Union funded projects through the framework programmes of the European Commission. BEinGRID (Business Experiments in Grid) was a research project funded by the European Commission as an Integrated Project under the Sixth Framework Programme (FP6) sponsorship program. Started on June 1, 2006, the project ran 42 months, until November 2009. The project was coordinated by Atos Origin. According to the project fact sheet, its mission was “to establish effective routes to foster the adoption of grid computing across the EU and to stimulate research into innovative business models using Grid technologies”. To extract best practice and common themes from the experimental implementations, two groups of consultants, one technical and one business, analyzed a series of pilots. The project is significant not only for its long duration but also for its budget, which at 24.8 million euros was the largest of any FP6 integrated project. Of this, 15.7 million was provided by the European Commission and the remainder by its 98 contributing partner companies. Since the end of the project, the results of BEinGRID have been taken up and carried forward by IT-Tude.com.
The Enabling Grids for E-sciencE project, based in the European Union and including sites in Asia and the United States, was a follow-up project to the European DataGrid (EDG) and evolved into the European Grid Infrastructure. This, along with the Worldwide LHC Computing Grid (WLCG), was developed to support experiments using the CERN Large Hadron Collider. A list of active sites participating within WLCG can be found online, as can real-time monitoring of the EGEE infrastructure. The relevant software and documentation are also publicly accessible. There is speculation that dedicated fiber-optic links, such as those installed by CERN to address the WLCG's data-intensive needs, may one day be available to home users, thereby providing internet services at speeds up to 10,000 times faster than a traditional broadband connection. The European Grid Infrastructure has also been used for other research activities and experiments such as the simulation of oncological clinical trials.
The distributed.net project was started in 1997.
The NASA Advanced Supercomputing facility (NAS) ran genetic algorithms using the Condor cycle scavenger running on about 350 Sun Microsystems and SGI workstations.
In 2001, United Devices operated the United Devices Cancer Research Project based on its Grid MP product, which cycle-scavenges on volunteer PCs connected to the Internet. The project ran on about 3.1 million machines before its close in 2007.
Definitions
Today there are many definitions of grid computing:
In his article “What is the Grid? A Three Point Checklist”, Ian Foster lists these primary attributes:
Computing resources are not administered centrally.
Open standards are used.
Nontrivial quality of service is achieved.
Plaszczak/Wellner define grid technology as "the technology that enables resource virtualization, on-demand provisioning, and service (resource) sharing between organizations."
IBM defines grid computing as “the ability, using a set of open standards and protocols, to gain access to applications and data, processing power, storage capacity and a vast array of other computing resources over the Internet. A grid is a type of parallel and distributed system that enables the sharing, selection, and aggregation of resources distributed across ‘multiple’ administrative domains based on their (resources) availability, capacity, performance, cost and users' quality-of-service requirements”.
An earlier example of the notion of computing as a utility was in 1965 by MIT's Fernando Corbató. Corbató and the other designers of the Multics operating system envisioned a computer facility operating “like a power company or water company”.
Buyya/Venugopal define grid as "a type of parallel and distributed system that enables the sharing, selection, and aggregation of geographically distributed autonomous resources dynamically at runtime depending on their availability, capability, performance, cost, and users' quality-of-service requirements".
The larynx, commonly called the voice box, is an organ in the top of the neck involved in breathing, producing sound and protecting the trachea against food aspiration. The opening of the larynx into the pharynx, known as the laryngeal inlet, is about 4–5 centimeters in diameter. The larynx houses the vocal cords, and manipulates pitch and volume, which is essential for phonation. It is situated just below where the tract of the pharynx splits into the trachea and the esophagus. The word 'larynx' (plural: larynges) comes from the Ancient Greek word lárunx ʻlarynx, gullet, throatʼ.
Structure
The triangle-shaped larynx consists largely of cartilages that are attached to one another, and to surrounding structures, by muscles or by fibrous and elastic tissue components. The larynx is lined by a ciliated columnar epithelium except for the vocal folds. The cavity of the larynx extends from its triangle-shaped inlet at the epiglottis to the circular outlet at the lower border of the cricoid cartilage, where it is continuous with the lumen of the trachea. The mucous membrane lining the larynx forms two pairs of lateral folds that project inward into its cavity. The upper folds are called the vestibular folds. They are also sometimes called the false vocal cords for the rather obvious reason that they play no part in vocalization. The Kargyraa style of Tuvan throat singing makes use of these folds to sing an octave lower, and they are used in Umngqokolo, a type of Xhosa throat singing. The lower pair of folds are known as the vocal cords, which produce sounds needed for speech and other vocalizations. The slit-like space between the left and right vocal cords, called the rima glottidis, is the narrowest part of the larynx. The vocal cords and the rima glottidis are together designated as the glottis. The laryngeal cavity above the vestibular folds is called the vestibule. The very middle portion of the cavity between the vestibular folds and the vocal cords is the ventricle of the larynx, or laryngeal ventricle. The infraglottic cavity is the open space below the glottis.
Location
In adult humans, the larynx is found in the anterior neck at the level of the cervical vertebrae C3–C6. It connects the inferior part of the pharynx (hypopharynx) with the trachea. The laryngeal skeleton consists of nine cartilages: three single (epiglottic, thyroid and cricoid) and three paired (arytenoid, corniculate, and cuneiform). The hyoid bone is not part of the larynx, though the larynx is suspended from the hyoid. The larynx extends vertically from the tip of the epiglottis to the inferior border of the cricoid cartilage. Its interior can be divided in supraglottis, glottis and subglottis.
Cartilages
There are nine cartilages, three unpaired and three paired (three pairs, making six), that support the mammalian larynx and form its skeleton.
Unpaired cartilages:
Thyroid cartilage: This forms the Adam's apple (also called the laryngeal prominence). It is usually larger in males than in females. The thyrohyoid membrane is a ligament associated with the thyroid cartilage that connects it with the hyoid bone. It supports the front portion of the larynx.
Cricoid cartilage: A ring of hyaline cartilage that forms the inferior wall of the larynx. It is attached to the top of the trachea. The median cricothyroid ligament connects the cricoid cartilage to the thyroid cartilage.
Epiglottis: A large, spoon-shaped piece of elastic cartilage. During swallowing, the pharynx and larynx rise. Elevation of the pharynx widens it to receive food and drink; elevation of the larynx causes the epiglottis to move down and form a lid over the glottis, closing it off.
Paired cartilages:
Arytenoid cartilages: Of the paired cartilages, the arytenoid cartilages are the most important because they influence the position and tension of the vocal cords. These are triangular pieces of mostly hyaline cartilage located at the posterosuperior border of the cricoid cartilage.
Corniculate cartilages: Horn-shaped pieces of elastic cartilage located at the apex of each arytenoid cartilage.
Cuneiform cartilages: Club-shaped pieces of elastic cartilage located anterior to the corniculate cartilages.
Muscles
The muscles of the larynx are divided into intrinsic and extrinsic muscles. The extrinsic muscles act on the region and pass between the larynx and parts around it but have their origin elsewhere; the intrinsic muscles are confined entirely within the larynx and have their origin and insertion there.
The intrinsic muscles are divided into respiratory and the phonatory muscles (the muscles of phonation). The respiratory muscles move the vocal cords apart and serve breathing. The phonatory muscles move the vocal cords together and serve the production of voice. The main respiratory muscles are the posterior cricoarytenoid muscles. The phonatory muscles are divided into adductors (lateral cricoarytenoid muscles, arytenoid muscles) and tensors (cricothyroid muscles, thyroarytenoid muscles).
Intrinsic
The intrinsic laryngeal muscles are responsible for controlling sound production.
Cricothyroid muscle lengthens and tenses the vocal cords.
Posterior cricoarytenoid muscles abduct and externally rotate the arytenoid cartilages, resulting in abducted vocal cords.
Lateral cricoarytenoid muscles adduct and internally rotate the arytenoid cartilages, increasing medial compression.
Transverse arytenoid muscle adducts the arytenoid cartilages, resulting in adducted vocal cords.
Oblique arytenoid muscles narrow the laryngeal inlet by constricting the distance between the arytenoid cartilages.
Thyroarytenoid muscles narrow the laryngeal inlet, shorten the vocal cords, and lower voice pitch. The internal thyroarytenoid is the portion of the thyroarytenoid that vibrates to produce sound.
Notably the only muscle capable of separating the vocal cords for normal breathing is the posterior cricoarytenoid. If this muscle is incapacitated on both sides, the inability to pull the vocal cords apart (abduct) will cause difficulty breathing. Bilateral injury to the recurrent laryngeal nerve would cause this condition. It is also worth noting that all muscles are innervated by the recurrent laryngeal branch of the vagus except the cricothyroid muscle, which is innervated by the external laryngeal branch of the superior laryngeal nerve (a branch of the vagus).
Additionally, intrinsic laryngeal muscles present a constitutive Ca2+-buffering profile that predicts their better ability to handle calcium changes in comparison to other muscles. This profile is in agreement with their function as very fast muscles with a well-developed capacity for prolonged work. Studies suggest that mechanisms involved in the prompt sequestering of Ca2+ (sarcoplasmic reticulum Ca2+-reuptake proteins, plasma membrane pumps, and cytosolic Ca2+-buffering proteins) are particularly elevated in laryngeal muscles, indicating their importance for myofiber function and protection against disease, such as Duchenne muscular dystrophy. Furthermore, the different levels of Orai1 in rat intrinsic laryngeal muscles and extraocular muscles relative to limb muscle suggest a role for store-operated calcium entry channels in those muscles' functional properties and signaling mechanisms.
Extrinsic
The extrinsic laryngeal muscles support and position the larynx within the mid-cervical region.
Sternothyroid muscles depress the larynx. (Innervated by ansa cervicalis)
Omohyoid muscles depress the larynx. (Ansa cervicalis)
Sternohyoid muscles depress the larynx. (Ansa cervicalis)
Inferior constrictor muscles. (CN X)
Thyrohyoid muscles elevate the larynx. (C1)
Digastric elevates the larynx. (CN V3, CN VII)
Stylohyoid elevates the larynx. (CN VII)
Mylohyoid elevates the larynx. (CN V3)
Geniohyoid elevates the larynx. (C1)
Hyoglossus elevates the larynx. (CN XII)
Genioglossus elevates the larynx. (CN XII)
Nerve supply
The larynx is innervated by branches of the vagus nerve on each side. Sensory innervation to the glottis and laryngeal vestibule is by the internal branch of the superior laryngeal nerve. The external branch of the superior laryngeal nerve innervates the cricothyroid muscle. Motor innervation to all other muscles of the larynx and sensory innervation to the subglottis is by the recurrent laryngeal nerve. While the sensory input described above is (general) visceral sensation (diffuse, poorly localized), the vocal cords also receive general somatic sensory innervation (proprioceptive and touch) by the superior laryngeal nerve.
Injury to the external branch of the superior laryngeal nerve causes weakened phonation because the vocal cords cannot be tightened. Injury to one of the recurrent laryngeal nerves produces hoarseness; if both are damaged, the voice may or may not be preserved, but breathing becomes difficult.
Development
In newborn infants, the larynx is initially at the level of the C2–C3 vertebrae, and is further forward and higher relative to its position in the adult body. The larynx descends as the child grows.
Laryngeal cavity
The laryngeal cavity (cavity of the larynx) extends from the laryngeal inlet downwards to the lower border of the cricoid cartilage where it is continuous with that of the trachea.
It is divided into two parts by the projection of the vocal folds, between which is a narrow triangular opening, the rima glottidis.
The portion of the cavity of the larynx above the vocal folds is called the laryngeal vestibule; it is wide and triangular in shape, its base or anterior wall presenting, however, about its center the backward projection of the tubercle of the epiglottis.
It contains the vestibular folds, and between these and the vocal folds are the laryngeal ventricles.
The portion below the vocal folds is called the infraglottic cavity. It is at first of an elliptical form, but lower down it widens out, assumes a circular form, and is continuous with the tube of the trachea.
Function
Sound generation
Sound is generated in the larynx, and that is where pitch and volume are manipulated. The strength of expiration from the lungs also contributes to loudness.
Manipulation of the larynx is used to generate a source sound with a particular fundamental frequency, or pitch. This source sound is altered as it travels through the vocal tract, configured differently based on the position of the tongue, lips, mouth, and pharynx. The process of altering a source sound as it passes through the filter of the vocal tract creates the many different vowel and consonant sounds of the world's languages as well as tone, certain realizations of stress and other types of linguistic prosody. The larynx also has a similar function to the lungs in creating pressure differences required for sound production; a constricted larynx can be raised or lowered affecting the volume of the oral cavity as necessary in glottalic consonants.
The vocal cords can be held close together (by adducting the arytenoid cartilages) so that they vibrate (see phonation). The muscles attached to the arytenoid cartilages control the degree of opening. Vocal cord length and tension can be controlled by rocking the thyroid cartilage forward and backward on the cricoid cartilage (either directly by contracting the cricothyroids or indirectly by changing the vertical position of the larynx), by manipulating the tension of the muscles within the vocal cords, and by moving the arytenoids forward or backward. This causes the pitch produced during phonation to rise or fall. In most males the vocal cords are longer and have a greater mass than most females' vocal cords, producing a lower pitch.
The vocal apparatus consists of two pairs of folds, the vestibular folds (false vocal cords) and the true vocal cords. The vestibular folds are covered by respiratory epithelium, while the vocal cords are covered by stratified squamous epithelium. The vestibular folds are not responsible for sound production, but rather for resonance. The exceptions to this are found in Tibetan chanting and Kargyraa, a style of Tuvan throat singing. Both make use of the vestibular folds to create an undertone. These false vocal cords do not contain muscle, while the true vocal cords do have skeletal muscle.
Other
The most important role of the larynx is its protective function, the prevention of foreign objects from entering the lungs by coughing and other reflexive actions. A cough is initiated by a deep inhalation through the vocal cords, followed by the elevation of the larynx and the tight adduction (closing) of the vocal cords. The forced expiration that follows, assisted by tissue recoil and the muscles of expiration, blows the vocal cords apart, and the high pressure expels the irritating object out of the throat. Throat clearing is less violent than coughing, but is a similar increased respiratory effort countered by the tightening of the laryngeal musculature. Both coughing and throat clearing are predictable and necessary actions because they clear the respiratory passageway, but both place the vocal cords under significant strain.
Another important role of the larynx is abdominal fixation, a kind of Valsalva maneuver in which the lungs are filled with air in order to stiffen the thorax so that forces applied for lifting can be translated down to the legs. This is achieved by a deep inhalation followed by the adduction of the vocal cords. Grunting while lifting heavy objects is the result of some air escaping through the adducted vocal cords ready for phonation.
Abduction of the vocal cords is important during physical exertion. The vocal cords are separated during normal respiration, and the gap between them is roughly doubled in width during forced respiration.
During swallowing, elevation of the posterior portion of the tongue levers (inverts) the epiglottis over the glottis' opening to prevent swallowed material from entering the larynx which leads to the lungs, and provides a path for a food or liquid bolus to "slide" into the esophagus; the hyo-laryngeal complex is also pulled upwards to assist this process. Stimulation of the larynx by aspirated food or liquid produces a strong cough reflex to protect the lungs.
In addition, the fact that intrinsic laryngeal muscles are spared in some muscle-wasting disorders, such as Duchenne muscular dystrophy, may facilitate the development of novel strategies for the prevention and treatment of muscle wasting in a variety of clinical scenarios. ILM have a calcium regulation system profile suggestive of a better ability to handle calcium changes in comparison to other muscles, and this may provide a mechanistic insight into their unique pathophysiological properties.
Clinical significance
Disorders
There are several things that can cause a larynx to not function properly. Some symptoms are hoarseness, loss of voice, pain in the throat or ears, and breathing difficulties.
Acute laryngitis is the sudden inflammation and swelling of the larynx. It is caused by the common cold or by excessive shouting. It is not serious.
Chronic laryngitis is caused by smoking, dust, frequent yelling, or prolonged exposure to polluted air. It is much more serious than acute laryngitis.
Presbylarynx is a condition in which age-related atrophy of the soft tissues of the larynx results in weak voice and restricted vocal range and stamina. Bowing of the anterior portion of the vocal cords is found on laryngoscopy.
Ulcers may be caused by the prolonged presence of an endotracheal tube.
Polyps and vocal cord nodules are small bumps caused by prolonged exposure to tobacco smoke and vocal misuse, respectively.
Two related types of cancer of the larynx, namely squamous cell carcinoma and verrucous carcinoma, are strongly associated with repeated exposure to cigarette smoke and alcohol.
Vocal cord paresis is weakness of one or both vocal cords that can greatly impact daily life.
Idiopathic laryngeal spasm.
Laryngopharyngeal reflux is a condition in which acid from the stomach irritates and burns the larynx. Similar damage can occur with gastroesophageal reflux disease (GERD).
Laryngomalacia is a very common condition of infancy, in which the soft, immature cartilage of the upper larynx collapses inward during inhalation, causing airway obstruction.
Laryngeal perichondritis, the inflammation of the perichondrium of laryngeal cartilages, causing airway obstruction.
Laryngeal paralysis is a condition seen in some mammals (including dogs) in which the larynx no longer opens as wide as required for the passage of air, and impedes respiration. In mild cases it can lead to exaggerated or "raspy" breathing or panting, and in serious cases can pose a considerable need for treatment.
In Duchenne muscular dystrophy, the intrinsic laryngeal muscles (ILM) are spared from the effects of the lack of dystrophin and may serve as a useful model to study the mechanisms of muscle sparing in neuromuscular diseases. Dystrophic ILM presented a significant increase in the expression of calcium-binding proteins. The increase of calcium-binding proteins in dystrophic ILM may permit better maintenance of calcium homeostasis, with the consequent absence of myonecrosis. The results further support the concept that abnormal calcium buffering is involved in these neuromuscular diseases.
Treatments
Patients who have lost the use of their larynx are typically prescribed the use of an electrolarynx device. Larynx transplants are a rare procedure. The world's first successful operation took place in 1998 at the Cleveland Clinic, and the second took place in October 2010 at the University of California Davis Medical Center in Sacramento.
Other animals
Pioneering work on the structure and evolution of the larynx was carried out in the 1920s by the British comparative anatomist Victor Negus, culminating in his monumental work The Mechanism of the Larynx (1929). Negus, however, pointed out that the descent of the larynx reflected the reshaping and descent of the human tongue into the pharynx. This process is not complete until age six to eight years. Some researchers, such as Philip Lieberman, Dennis Klatt, Bart de Boer and Kenneth Stevens, using computer-modeling techniques, have suggested that the species-specific human tongue allows the vocal tract (the airway above the larynx) to assume the shapes necessary to produce speech sounds that enhance the robustness of human speech. Vowel sounds such as [i] and [u] (in phonetic notation) have been shown to be less subject to confusion in classic studies such as the 1950 Peterson and Barney investigation of the possibilities for computerized speech recognition.
In contrast, though other species have low larynges, their tongues remain anchored in their mouths and their vocal tracts cannot produce the range of speech sounds of humans. The ability to lower the larynx transiently in some species extends the length of their vocal tract, which as Fitch showed creates the acoustic illusion that they are larger. Research at Haskins Laboratories in the 1960s showed that speech allows humans to achieve a vocal communication rate that exceeds the fusion frequency of the auditory system by fusing sounds together into syllables and words. The additional speech sounds that the human tongue enables us to produce, particularly [i], allow humans to unconsciously infer the length of the vocal tract of the person who is talking, a critical element in recovering the phonemes that make up a word.
Non-mammals
Most tetrapod species possess a larynx, but its structure is typically simpler than that found in mammals. The cartilages surrounding the larynx are apparently a remnant of the original gill arches in fish, and are a common feature, but not all are always present. For example, the thyroid cartilage is found only in mammals. Similarly, only mammals possess a true epiglottis, although a flap of non-cartilaginous mucosa is found in a similar position in many other groups. In modern amphibians, the laryngeal skeleton is considerably reduced; frogs have only the cricoid and arytenoid cartilages, while salamanders possess only the arytenoids.
An example of a frog that possesses a larynx is the túngara frog. While the larynx is the main sound-producing organ in túngara frogs, it holds additional significance due to its contribution to the mating call, which consists of two components: 'whine' and 'chuck'. While the 'whine' induces female phonotaxis and allows species recognition, the 'chuck' increases mating attractiveness. In particular, the túngara frog produces the 'chuck' by vibrating the fibrous mass attached to the larynx.
Vocal folds are found only in mammals, and a few lizards. As a result, many reptiles and amphibians are essentially voiceless; frogs use ridges in the trachea to modulate sound, while birds have a separate sound-producing organ, the syrinx.
History
The ancient Greek physician Galen first described the larynx, describing it as the "first and supremely most important instrument of the voice".
Deep Blue was a chess-playing expert system run on a unique purpose-built IBM supercomputer. It was the first computer to win a game, and the first to win a match, against a reigning world champion under regular time controls. Development began in 1985 at Carnegie Mellon University under the name ChipTest. It then moved to IBM, where it was first renamed Deep Thought, then again in 1989 to Deep Blue. It first played world champion Garry Kasparov in a six-game match in 1996, where it won one, drew two and lost three games. It was upgraded in 1997, and in a six-game rematch it defeated Kasparov by winning two games and drawing three. Deep Blue's victory is considered a milestone in the history of artificial intelligence and has been the subject of several books and films.
History
While a doctoral student at Carnegie Mellon University, Feng-hsiung Hsu began development of a chess-playing supercomputer under the name ChipTest. The machine won the North American Computer Chess Championship in 1987 and Hsu and his team followed up with a successor, Deep Thought, in 1988. After receiving his doctorate in 1989, Hsu and Murray Campbell joined IBM Research to continue their project to build a machine that could defeat a world chess champion. Their colleague Thomas Anantharaman briefly joined them at IBM before leaving for the finance industry and being replaced by programmer Arthur Joseph Hoane. Jerry Brody, a long-time employee of IBM Research, subsequently joined the team in 1990.
After Deep Thought's two-game 1989 loss to Kasparov, IBM held a contest to rename the chess machine: the winning name, "Deep Blue", submitted by Peter Fitzhugh Brown, was a play on IBM's nickname, "Big Blue". After a scaled-down version of Deep Blue played Grandmaster Joel Benjamin, Hsu and Campbell decided that Benjamin was the expert they were looking for to help develop Deep Blue's opening book, so they hired him to assist with the preparations for Deep Blue's matches against Garry Kasparov. In 1995, a Deep Blue prototype played in the eighth World Computer Chess Championship, playing Wchess to a draw before ultimately losing to Fritz in round five, despite playing as White.
Today, one of the two racks that made up Deep Blue is held by the National Museum of American History, having previously been displayed in an exhibit about the Information Age, while the other rack was acquired by the Computer History Museum in 1997, and is displayed in the Revolution exhibit's "Artificial Intelligence and Robotics" gallery. Several books were written about Deep Blue, among them Behind Deep Blue: Building the Computer that Defeated the World Chess Champion by Deep Blue developer Feng-hsiung Hsu.
Deep Blue versus Kasparov
Subsequent to its predecessor Deep Thought's 1989 loss to Garry Kasparov, Deep Blue played Kasparov twice more. In the first game of the first match, which took place from 10 to 17 February 1996, Deep Blue became the first machine to win a chess game against a reigning world champion under regular time controls. However, Kasparov won three and drew two of the following five games, beating Deep Blue by 4–2 at the close of the match.
Deep Blue's hardware was subsequently upgraded, doubling its speed before it faced Kasparov again in May 1997, when it won the six-game rematch 3½–2½. Deep Blue won the deciding game after Kasparov failed to secure his position in the opening, thereby becoming the first computer system to defeat a reigning world champion in a match under standard chess tournament time controls. The version of Deep Blue that defeated Kasparov in 1997 typically searched to a depth of six to eight moves, and twenty or more moves in some situations. David Levy and Monty Newborn estimate that each additional ply (half-move) of forward insight increases the playing strength between 50 and 70 Elo points.
In the 44th move of the first game of their second match, unknown to Kasparov, a bug in Deep Blue's code led it to enter an unintentional loop, which it exited by taking a randomly selected valid move. Kasparov did not take this possibility into account, and misattributed the seemingly pointless move to "superior intelligence". Subsequently, Kasparov experienced a decline in performance in the following game, though he denies this was due to anxiety in the wake of Deep Blue's inscrutable move.
After his loss, Kasparov said that he sometimes saw unusual creativity in the machine's moves, suggesting that during the second game, human chess players had intervened on behalf of the machine. IBM denied this, saying the only human intervention occurred between games. Kasparov demanded a rematch, but IBM had dismantled Deep Blue after its victory and refused the rematch. The rules allowed the developers to modify the program between games, an opportunity they said they used to shore up weaknesses in the computer's play that were revealed during the course of the match. Kasparov requested printouts of the machine's log files, but IBM refused, although the company later published the logs on the Internet.
The 1997 tournament awarded a $700,000 first prize to the Deep Blue team and a $400,000 second prize to Kasparov. Carnegie Mellon University awarded an additional $100,000 to the Deep Blue team, a prize created by computer science professor Edward Fredkin in 1980 for the first computer program to beat a reigning world chess champion.
Aftermath
Chess
Kasparov initially called Deep Blue an "alien opponent" but later belittled it, stating that it was "as intelligent as your alarm clock". According to Martin Amis, two grandmasters who played Deep Blue agreed that it was "like a wall coming at you". Hsu retained the rights to use the Deep Blue design independently of IBM, but he, too, declined Kasparov's rematch offer. In 2003, the documentary film Game Over: Kasparov and the Machine investigated Kasparov's claims that IBM had cheated. In the film, some interviewees describe IBM's investment in Deep Blue as an effort to boost its stock value.
Other games
Following Deep Blue's victory, AI specialist Omar Syed designed a new game, Arimaa, which was intended to be very simple for humans but very difficult for computers to master; however, in 2015, computers proved capable of defeating strong Arimaa players. Computer scientists have since developed software for other complex board games with competitive communities. The AlphaGo series (AlphaGo, AlphaGo Zero, AlphaZero) defeated top Go players in 2016–2017.
Computer science
Computer scientists such as Deep Blue developer Campbell believed that playing chess was a good measure of the effectiveness of artificial intelligence, and by beating a world champion chess player, IBM showed that it had made significant progress. Deep Blue also popularized the use of games as a showcase for artificial intelligence, as in the cases of IBM Watson or AlphaGo.
While Deep Blue, with its capability of evaluating 200 million positions per second, was the first computer to face a world chess champion in a formal match, it was a then-state-of-the-art expert system, relying upon rules and parameters defined and fine-tuned by chess masters and computer scientists. In contrast, current chess engines such as Leela Chess Zero typically use reinforcement learning to train a neural network to play, developing their own internal logic rather than relying upon rules defined by human experts.
In a November 2006 match between Deep Fritz and world chess champion Vladimir Kramnik, the program ran on a computer system containing a dual-core Intel Xeon 5160 CPU, capable of evaluating only 8 million positions per second, but searching to an average depth of 17 to 18 plies (half-moves) in the middlegame thanks to heuristics; it won 4–2.
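These depth figures hint at how aggressively heuristics prune the game tree. A rough estimate of the effective branching factor can be made from the numbers above (the 8 million positions per second and 17-ply depth come from the match description; the thinking time per move and the raw branching factor of about 35 are illustrative assumptions):

```python
# Illustrative estimate of the effective branching factor b implied by
# Deep Fritz's search: if N positions are examined to reach depth d,
# then roughly b**d = N, so b = N**(1/d).
# Assumption: ~180 s of thinking per move; chess's raw branching
# factor of ~35 is shown for contrast.

positions_per_second = 8_000_000   # from the match figures above
seconds_per_move = 180             # assumed average thinking time
depth = 17                         # average middlegame depth cited above

nodes = positions_per_second * seconds_per_move
effective_b = nodes ** (1 / depth)
print(f"~{nodes:.1e} nodes -> effective branching factor ~{effective_b:.1f}")
# ~1.4e+09 nodes -> effective branching factor ~3.5 (vs ~35 unpruned)
```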
Design
Software
Deep Blue's evaluation function was initially written in a generalized form, with many to-be-determined parameters (e.g., how important is a safe king position compared to a space advantage in the center?). Values for these parameters were determined by analyzing thousands of master games. The evaluation function was then split into 8,000 parts, many of them designed for special positions. The opening book encapsulated more than 4,000 positions and 700,000 grandmaster games, while the endgame database contained many six-piece endgames and all endgames of five or fewer pieces. An additional database named the "extended book" summarized entire games played by grandmasters. The system combined its ability to search 200 million chess positions per second with the summary information in the extended book to select opening moves.
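A minimal sketch of what such a parameterized evaluation looks like in principle (the feature set, weights, and `position` interface below are illustrative assumptions, not Deep Blue's actual 8,000-part function):

```python
# Toy parameterized evaluation function in the spirit described above.
# The features, weights, and `position` interface are hypothetical;
# in Deep Blue the parameter values were tuned against thousands of
# master games.

PIECE_VALUES = {"pawn": 100, "knight": 320, "bishop": 330,
                "rook": 500, "queen": 900}  # centipawns (assumed)

def evaluate(position, weights):
    """Score a position as material plus a weighted sum of features.

    `position` is assumed to expose material_balance(piece) and a few
    positional features, each from the side-to-move's point of view.
    """
    score = sum(v * position.material_balance(p)
                for p, v in PIECE_VALUES.items())
    score += weights["king_safety"] * position.king_safety()
    score += weights["center_space"] * position.center_space()
    score += weights["mobility"] * position.mobility()
    return score

# The tuning question in the text - how much is a safe king worth
# relative to central space? - is just the ratio of two weights:
weights = {"king_safety": 12.0, "center_space": 4.0, "mobility": 2.0}
```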
Before the second match, the program's rules were fine-tuned by grandmaster Joel Benjamin. The opening library was provided by grandmasters Miguel Illescas, John Fedorowicz, and Nick de Firmian. When Kasparov requested that he be allowed to study other games that Deep Blue had played so as to better understand his opponent, IBM refused, leading Kasparov to study many popular PC chess games to familiarize himself with computer gameplay.
Hardware
Deep Blue used custom VLSI chips to parallelize the alpha–beta search algorithm, an example of symbolic AI. The system derived its playing strength mainly from brute-force computing power. It was a massively parallel IBM RS/6000 SP supercomputer with 30 PowerPC 604e processors and 480 custom 600 nm CMOS VLSI "chess chips" designed to execute the chess-playing expert system, as well as FPGAs intended to allow patching of the VLSIs (which ultimately went unused), all housed in two cabinets. Each chess chip had four parts: the move generator, the smart-move stack, the evaluation function, and the search control. The move generator was an 8×8 combinational logic circuit, a chessboard in miniature.
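The algorithm those chips implemented in silicon is the classic alpha–beta search; a minimal serial sketch is shown below (the `evaluate` function and the node interface are hypothetical stand-ins for the chip's evaluation function and move generator):

```python
# Minimal serial alpha-beta search - the algorithm Deep Blue's custom
# chips parallelized in hardware. `evaluate` and the node interface
# (is_terminal, children) are hypothetical stand-ins for the chip's
# evaluation function and move generator.

def alphabeta(node, depth, alpha=float("-inf"), beta=float("inf"),
              maximizing=True):
    if depth == 0 or node.is_terminal():
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in node.children():
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:        # cutoff: the opponent avoids this line
                break
        return value
    else:
        value = float("inf")
        for child in node.children():
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:        # cutoff
                break
        return value
```

The pruning cutoffs are what make deep search feasible: branches the opponent would never allow are discarded without being explored.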
Its chess-playing program was written in C and ran under the AIX operating system. It was capable of evaluating 200 million positions per second, twice as fast as the 1996 version.
In 1997, Deep Blue was upgraded again to become the 259th most powerful supercomputer according to the TOP500 list, achieving 11.38 GFLOPS on the parallel High-Performance LINPACK benchmark.
A window is an opening in a wall, door, roof, or vehicle that allows the exchange of light and may also allow the passage of sound and sometimes air. Modern windows are usually glazed or covered in some other transparent or translucent material, a sash set in a frame in the opening; the sash and frame are also referred to as a window. Many glazed windows may be opened, to allow ventilation, or closed to exclude inclement weather. Windows may have a latch or similar mechanism to lock the window shut or to hold it open by various amounts.
Types include the eyebrow window, fixed windows, hexagonal windows, single-hung, and double-hung sash windows, horizontal sliding sash windows, casement windows, awning windows, hopper windows, tilt, and slide windows (often door-sized), tilt and turn windows, transom windows, sidelight windows, jalousie or louvered windows, clerestory windows, lancet windows, skylights, roof windows, roof lanterns, bay windows, oriel windows, thermal, or Diocletian, windows, picture windows, rose windows, emergency exit windows, stained glass windows, French windows, panel windows, double/triple-paned windows, and witch windows.
Etymology
The English-language word window originates from the Old Norse vindauga, from vindr 'wind' and auga 'eye'. In Norwegian Nynorsk and Icelandic, the Old Norse form has survived to this day (in Icelandic only as a less-used word for a type of small open "window", not strictly a synonym for gluggi, the Icelandic word for 'window'). In Swedish, the word vindöga remains as a term for a hole through the roof of a hut, and in Danish vindue and Norwegian Bokmål vindu, the direct link to 'eye' is lost, just as in window. The Danish (but not the Norwegian) word is pronounced fairly similarly to window.
Window is first recorded in the early 13th century, and originally referred to an unglazed hole in a roof. Window replaced the Old English eagþyrl, which literally means 'eye-hole', and eagduru, 'eye-door'. Many Germanic languages, however, adopted the Latin word fenestra to describe a window with glass, such as standard Swedish fönster or German Fenster. The use of window in English is probably due to the Scandinavian influence on the English language by means of loanwords during the Viking Age. In English, the word fenester was used as a parallel until the mid-18th century. Fenestration is still used to describe the arrangement of windows within a façade, as is defenestration, meaning 'to throw out of a window'.
History
The Romans were the first known to use glass for windows, a technology likely first produced in Roman Egypt, in Alexandria around 100 AD. Depictions of windows can be seen in ancient Egyptian wall art and sculptures from Assyria. Paper windows were economical and widely used in ancient China, Korea, and Japan. In England, glass became common in the windows of ordinary homes only in the early 17th century, whereas windows made up of panes of flattened animal horn were used as early as the 14th century. In the 19th-century American west, greased-paper windows came to be used by pioneering settlers. Modern-style floor-to-ceiling windows became possible only after the industrial plate-glass-making processes were fully perfected.
Technologies
In the 13th century BC, the earliest windows were unglazed openings in a roof to admit light during the day. Later, windows were covered with animal hide, cloth, or wood. Shutters that could be opened and closed came next. Over time, windows were built that both protected the inhabitants from the elements and transmitted light, using multiple small pieces of translucent material, such as flattened pieces of translucent animal horn, paper sheets, thin slices of marble (such as fengite), or pieces of glass, set in frameworks of wood, iron or lead. In the Far East, paper was used to fill windows.
The Romans were the first known users of glass for windows, exploiting a technology likely first developed in Roman Egypt. Specifically, in Alexandria around 100 CE, cast-glass windows, albeit with poor optical properties, began to appear, but these were small, thick productions, little more than blown-glass jars (cylindrical shapes) flattened out into sheets with circular striation patterns throughout. It would be over a millennium before window glass became transparent enough to see through clearly, as we now expect. In 1154, Al-Idrisi described glass windows as a feature of the palace belonging to the king of the Ghana Empire.
Over the centuries techniques were developed to shear through one side of a blown glass cylinder and produce thinner rectangular window panes from the same amount of glass material. This gave rise to tall narrow windows, usually separated by a vertical support called a mullion. Mullioned glass windows were the windows of choice among the European well-to-do, whereas paper windows were economical and widely used in ancient China, Korea, and Japan. In England, glass became common in the windows of ordinary homes only in the early-17th century, whereas windows made up of panes of flattened animal horn were used as early as the 14th century.
Modern-style floor-to-ceiling windows became possible only after the industrial plate-glass-making processes were perfected in the late 19th century. Modern windows are usually filled with glass, although transparent plastic is also used.
Fashions and trends
The introduction of lancet windows into Western European church architecture from the 12th century CE built on a tradition of arched windows inserted between columns, and led not only to tracery and elaborate stained-glass windows but also to a long-standing motif of pointed or rounded window-shapes in ecclesiastical buildings, still seen in many churches today.
Peter Smith discusses overall trends in early-modern rural Welsh window architecture:
Up to about 1680 windows tended to be horizontal in proportion, a shape suitable for lighting the low-ceilinged rooms that had resulted from the insertion of the upper floor into the hall-house. After that date vertically proportioned windows came into fashion, partly at least as a response to the Renaissance taste for the high ceiling. Since 1914 the wheel has come full circle and a horizontally proportioned window is again favoured.
The spread of plate-glass technology made possible the introduction of picture windows (in Levittown, Pennsylvania, founded 1951–1952).
Many modern-day windows have a window screen or mesh, often made of aluminum or fibreglass, to keep insects out when the window is opened. Windows are also designed to provide a vital connection with the outdoors, offering those within the building visual access to the ever-changing events occurring outside. This connection safeguards the health and well-being of a building's occupants, who would otherwise experience the detrimental effects of enclosed, windowless spaces. Several pivotal criteria for window design have emerged in daylighting standards: location, time, weather, nature, and people. Of these criteria, windows designed to provide views of nature are considered the most important by people.
Types
Cross
A cross-window is a rectangular window usually divided into four lights by a mullion and transom that form a Latin cross.
Eyebrow
The term eyebrow window is used in two ways: a curved top window in a wall or an eyebrow dormer; and a row of small windows usually under the front eaves such as the James-Lorah House in Pennsylvania.
Fixed
A fixed window is a window that cannot be opened, whose function is limited to allowing light to enter (unlike an unfixed window, which can open and close). Clerestory windows in church architecture are often fixed. Transom windows may be fixed or operable. This type of window is used where light or vision alone is needed, as no ventilation is possible without the use of trickle vents or overglass vents.
Single-hung sash
A single-hung sash window is a window that has one sash that is movable (usually the bottom one) and the other fixed. This is the earlier form of sliding sash window and is also cheaper.
Double-hung sash
A sash window is the traditional style of window in the United Kingdom, and many other places that were formerly colonized by the UK, with two parts (sashes) that overlap slightly and slide up and down inside the frame. The two parts are not necessarily the same size; where the upper sash is smaller (shorter) it is termed a cottage window. Currently, most new double-hung sash windows use spring balances to support the sashes, but traditionally, counterweights held in boxes on either side of the window were used. These were and are attached to the sashes using pulleys with either braided cord or, later, purpose-made chain. The three types of spring balance are the tape (or clock) spring balance, the channel (or block-and-tackle) balance, and the spiral (or tube) balance.
Double-hung sash windows were traditionally often fitted with shutters. Sash windows can be fitted with simplex hinges that let the window be locked into hinges on one side, while the rope on the other side is detached—so the window can be opened for fire escape or cleaning.
Foldup
A foldup has two equal sashes similar to a standard double-hung but folds upward allowing air to pass through nearly the full-frame opening. The window is balanced using either springs or counterbalances, similar to a double-hung. The sashes can be either offset to simulate a double-hung, or in-line. The inline versions can be made to fold inward or outward. The inward swinging foldup windows can have fixed screens, while the outward swinging ones require movable screens. The windows are typically used for screen rooms, kitchen pass-throughs, or egress.
Horizontal sliding sash
A horizontal sliding sash window has two or more sashes that overlap slightly but slide horizontally within the frame. In the UK, these are sometimes called Yorkshire sash windows, presumably because of their traditional use in that county.
Casement
A casement window is a window with a hinged sash that swings in or out like a door, comprising a side-hung, top-hung (also called an "awning window"; see below), or occasionally bottom-hung sash, or a combination of these types, sometimes with fixed panels on one or more sides of the sash. In the US, these are usually opened using a crank, but in parts of Europe, they tend to use projection friction stays and espagnolette locking. Formerly, plain hinges were used with a casement stay. Handing applies to casement windows to determine the direction of swing; a casement window may be left-handed, right-handed, or double. The casement window is the dominant type now found in modern buildings in the UK and many other parts of Europe.
Awning
An awning window is a casement window that is hung horizontally, hinged on top, so that it swings outward like an awning. In addition to being used independently, they can be stacked, several in one opening, or combined with fixed glass. They are particularly useful for ventilation.
Hopper
A hopper window is a bottom-pivoting casement window that opens by tilting vertically, typically to the inside, resembling a hopper chute.
Pivot
A pivot window is a window hung on one hinge on each of two opposite sides, which allows the window to revolve when opened. The hinges may be mounted top and bottom (vertically pivoted) or at each jamb (horizontally pivoted). The window will usually open initially to a restricted position for ventilation and, once released, fully reverse and lock again for safe cleaning from inside. Modern pivot hinges incorporate a friction device to hold the window open against its weight and may have restriction and reversed locking built in. In the UK, where this type of window is most common, they were extensively installed in high-rise social housing.
Tilt and slide
A tilt and slide window is a window (more usually a door-sized window) where the sash tilts inwards at the top similar to a hopper window and then slides horizontally behind the fixed pane.
Tilt and turn
A tilt and turn window can both tilt inwards at the top and open inwards from hinges at the side. This is the most common type of window in Germany, its country of origin, and it is also widespread in many other European countries. In continental Europe, it is usual for these to be of the "turn first" type: when the handle is turned to 90 degrees the window opens in side-hung mode, and with the handle turned to 180 degrees it opens in bottom-hung mode. In the UK, the windows are usually "tilt first": bottom-hung at 90 degrees for ventilation and side-hung at 180 degrees for cleaning the outer face of the glass from inside the building.
Transom
A transom window is a window above a door. In an exterior door, the transom window is often fixed; in an interior door, it can open either by hinges at the top or bottom, or rotate on hinges. It provided ventilation before forced-air heating and cooling. A fan-shaped transom is known as a fanlight, especially in the British Isles.
Side light
Windows beside a door or window are called side lights, wing lights, margin lights, or flanking windows.
Jalousie window
Also known as a louvered window, the jalousie window consists of parallel slats of glass or acrylic that open and close like a Venetian blind, usually using a crank or a lever. They are used extensively in tropical architecture. A jalousie door is a door with a jalousie window.
Clerestory
A clerestory window is a window set in a roof structure or high in a wall, used for daylighting.
Skylight
A skylight is a window built into a roof structure. This type of window allows for natural daylight and moonlight.
Roof
A roof window is a sloped window used for daylighting, built into a roof structure. It is one of the few windows that could be used as an exit. Larger roof windows meet building codes for emergency evacuation.
Roof lantern
A roof lantern is a multi-paned glass structure, resembling a small building, built on a roof for daylight or moonlight. It sometimes includes an additional clerestory and may also be called a cupola.
Bay
A bay window is a multi-panel window, with at least three panels set at different angles to create a protrusion from the wall line.
Oriel
An oriel window is a form of bay window. This form most often appears in Tudor-style houses and monasteries. It projects from the wall and does not extend to the ground. Originally a form of porch, they are often supported by brackets or corbels.
Thermal
Thermal, or Diocletian, windows are large semicircular windows (or niches) which are usually divided into three lights (window compartments) by two mullions. The central compartment is often wider than the two side lights on either side of it.
Picture
A picture window is a large fixed window in a wall, typically without glazing bars, or glazed with only perfunctory glazing bars (muntins) near the edge of the window. Picture windows provide an unimpeded view, as if framing a picture.
Multi-lite
A multi-lite window is a window glazed with small panes of glass separated by wooden or lead glazing bars, or muntins, arranged in a decorative glazing pattern often dictated by the building's architectural style. Due to the historic unavailability of large panes of glass, the multi-lite (or lattice) window was the most common window style until the beginning of the 20th century, and it is still used in traditional architecture.
Emergency exit/egress
An emergency exit window is a window big enough and low enough that occupants can escape through the opening in an emergency, such as a fire. In many countries, building codes give exact specifications for emergency windows in bedrooms. Specifications for such windows may also allow for the entrance of emergency rescuers. Vehicles such as buses, aircraft, and trains frequently have emergency exit windows as well.
Stained glass
A stained glass window is a window composed of pieces of colored glass, transparent, translucent or opaque, frequently portraying persons or scenes. Typically the glass in these windows is separated by lead glazing bars. Stained glass windows were popular in Victorian houses and some Wrightian houses, and are especially common in churches.
French
A French door has two rows of upright rectangular glass panes (lights) extending its full length; two such doors set in an exterior wall without a mullion separating them, opening outward on opposing hinges to a terrace or porch, are referred to as a French window. Sometimes these are set in pairs or multiples thereof along the exterior wall of a very large room, but often one French window is placed centrally in a typically sized room, perhaps among other fixed windows flanking the feature. French windows are known as porte-fenêtre in France and portafinestra in Italy, and they are frequently used in modern houses.
Double-paned
Double-paned windows have two parallel panes (slabs of glass) with a separation of typically about 1 cm; this space is permanently sealed and filled at the time of manufacture with dry air or other dry nonreactive gas. Such windows provide a marked improvement in thermal insulation (and usually in acoustic insulation as well) and are resistant to fogging and frosting caused by temperature differential. They are widely used for residential and commercial construction in intemperate climates. In the UK, double-paned and triple-paned are referred to as double-glazing and triple-glazing. Triple-paned windows are now a common type of glazing in central to northern Europe. Quadruple glazing is now being introduced in Scandinavia.
Hexagonal window
A hexagonal window is a hexagon-shaped window, resembling a honeycomb cell or the crystal lattice of graphite. The window can be vertically or horizontally oriented, openable or fixed. It can be regular or elongated in shape and can have a separator (mullion). Typically, the cellular window is used for an attic or as a decorative feature, but it can also be a major architectural element providing natural light inside a building.
Guillotine window
A guillotine window is a window that opens vertically. Guillotine windows have more than one sliding frame, and open from bottom to top or top to bottom.
Terms
EN 12519 is the European standard that describes window terms officially used in EU member states. The main terms are:
Light, or Lite, is the area between the outer parts of a window (transom, sill and jambs), usually filled with a glass pane. Multiple panes are divided by mullions when load-bearing, muntins when not.
Lattice light is a compound window pane made up of small pieces of glass held together in a lattice.
Fixed window is a unit of one non-moving lite. The terms single-light, double-light, etc., refer to the number of these glass panes in a window.
Sash unit is a window consisting of at least one sliding glass component, typically composed of two lites (known as a double-light).
Replacement window in the United States means a framed window designed to slip inside the original window frame from the inside after the old sashes are removed. In Europe, it usually means a complete window including a replacement outer frame.
New construction window, in the US, means a window with a nailing fin that is inserted into a rough opening from the outside before applying siding and inside trim. A nailing fin is a projection on the outer frame of the window in the same plane as the glazing, which overlaps the prepared opening, and can thus be 'nailed' into place. In the UK and mainland Europe, windows in new-build houses are usually fixed with long screws into expanding plastic plugs in the brickwork. A gap of up to 13 mm is left around all four sides, and filled with expanding polyurethane foam. This makes the window fixing weatherproof but allows for expansion due to heat.
Lintel is a beam over the top of a window, also known as a transom.
Window sill is the bottom piece in a window frame. Window sills slant outward to drain water away from the inside of the building.
Secondary glazing is an additional frame applied to the inside of an existing frame, usually used on protected or listed buildings to achieve higher levels of thermal and sound insulation without compromising the look of the building.
Decorative millwork is the moulding, cornices and lintels often decorating the surrounding edges of the window.
Labeling
The United States NFRC Window Label lists the following terms:
Thermal transmittance (U-factor), best values are around U-0.15 (equal to 0.8 W/(m²·K))
Solar heat gain coefficient (SHGC), ratio of solar heat (infrared) passing through the glass to incident solar heat
Visible transmittance (VT), ratio of transmitted visible light divided by incident visible light
Air leakage (AL), measured in cubic feet per minute per linear foot of crack between sash and frame
Condensation resistance (CR), measured between 1 and 100 (the higher the number, the higher the resistance of the formation of condensation)
The European harmonised standard hEN 14351-1, which deals with doors and windows, defines 23 characteristics (divided into essential and non-essential). Two other preliminary European norms under development deal with internal pedestrian doors (prEN 14351-2), smoke- and fire-resisting doors, and openable windows (prEN 16034).
Construction
Windows can be a significant source of heat transfer. Therefore, insulated glazing units consist of two or more panes to reduce the transfer of heat.
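To see why the number of panes matters, steady-state conductive loss through glazing follows Q = U × A × ΔT; a small sketch with assumed, typical U-values (illustrative figures only, not standardized ratings):

```python
# Steady-state conductive heat loss through a window: Q = U * A * dT.
# The U-values, window area, and temperature difference below are
# assumed typical figures for illustration.

def heat_loss_watts(u_value, area_m2, delta_t_k):
    """U in W/(m^2*K), area in m^2, indoor-outdoor difference in K."""
    return u_value * area_m2 * delta_t_k

AREA = 1.5       # m^2, roughly one casement-sized window (assumed)
DELTA_T = 20.0   # K, e.g. 20 C indoors vs 0 C outdoors

for label, u in [("single glazing", 5.0),
                 ("double glazing (air fill)", 2.8),
                 ("triple glazing, low-e, argon", 0.8)]:
    print(f"{label}: {heat_loss_watts(u, AREA, DELTA_T):.0f} W")
# single glazing: 150 W / double: 84 W / triple low-e argon: 24 W
```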
Grids or muntins
These are the pieces of framing that separate a larger window into smaller panes. In older windows, large panes of glass were quite expensive, so muntins let smaller panes fill a larger space. In modern windows, light-colored muntins still provide a useful function by reflecting some of the light going through the window, making the window itself a source of diffuse light (instead of just the surfaces and objects illuminated within the room). By increasing the indirect illumination of surfaces near the window, muntins tend to brighten the area immediately around a window and reduce the contrast of shadows within the room.
Frame and sash construction
Frames and sashes can be made of the following materials:
Composites (also known as hybrid windows) have been in use since about 1998 and combine materials such as aluminium and PVC or wood to obtain the aesthetics of one material with the functional benefits of another.
A special class of PVC window frames, uPVC window frames, became widespread from the late 20th century, particularly in Europe: 83.5 million had been installed by 1998, with numbers still growing as of 2012.
Glazing and filling
Low-emissivity coated panes reduce heat transfer by radiation, which, depending on which surface is coated, helps prevent heat loss (in cold climates) or heat gains (in warm climates).
High thermal resistance can be obtained by evacuating or filling the insulated glazing units with gases such as argon or krypton, which reduces conductive heat transfer due to their low thermal conductivity. Performance of such units depends on good window seals and meticulous frame construction to prevent entry of air and loss of efficiency.
Modern double-pane and triple-pane windows often include one or more low-e coatings to reduce the window's U-factor (its insulation value, specifically its rate of heat loss). In general, soft-coat low-e coatings tend to result in a lower solar heat gain coefficient (SHGC) than hard-coat low-e coatings.
Modern windows are usually glazed with one large sheet of glass per sash, while windows in the past were glazed with multiple panes separated by glazing bars, or muntins, due to the unavailability of large sheets of glass. Today, glazing bars tend to be decorative, separating windows into small panes of glass even though larger panes of glass are available, generally in a pattern dictated by the architectural style at use. Glazing bars are typically wooden, but occasionally lead glazing bars soldered in place are used for more intricate glazing patterns.
Other construction details
Many windows have movable window coverings such as blinds or curtains to keep out light, provide additional insulation, or ensure privacy.
Windows allow natural light to enter, but too much can have negative effects such as glare and heat gain. Additionally, while windows let the user see outside, there must be a way to maintain privacy on the inside. Window coverings are practical accommodations for these issues.
Impact of the sun
Sun incidence angle
Historically, windows have been designed with surfaces parallel to vertical building walls. Such a design allows considerable solar light and heat penetration due to the most commonly occurring angles of sun incidence. In passive solar building design, an extended eave is typically used to control the amount of solar light and heat entering the window(s).
An alternative method is to calculate an optimum window mounting angle that accounts for summer sun-load minimization, with consideration of the building's actual latitude. This process has been implemented, for example, in the Dakin Building in Brisbane, California, in which most of the fenestration is designed to reflect summer heat load and help prevent summer interior over-illumination and glare, by canting windows to nearly a 45-degree angle.
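A rough sketch of the geometry behind such canting (the solar-noon altitude formula is standard; the latitude value, the solstice declinations, and the cosine incidence model for a south-facing window are simplifying assumptions):

```python
import math

# Simplified solar-noon geometry for a south-facing window.
# Assumptions: declination +/-23.4 deg at the solstices; direct-beam
# interception modeled as cos(angle between sun ray and window normal).

def noon_solar_altitude(latitude_deg, declination_deg):
    """Approximate solar altitude at solar noon, in degrees."""
    return 90.0 - latitude_deg + declination_deg

def beam_factor(altitude_deg, cant_deg=0.0):
    """Fraction of direct beam intercepted by a window canted
    `cant_deg` from vertical (normal tilted downward); 0 = vertical."""
    return max(0.0, math.cos(math.radians(altitude_deg + cant_deg)))

LAT = 37.7  # approximate latitude of Brisbane, California (assumed)
for season, decl in [("summer solstice", 23.4), ("winter solstice", -23.4)]:
    alt = noon_solar_altitude(LAT, decl)
    print(f"{season}: altitude {alt:.0f} deg, "
          f"vertical {beam_factor(alt):.2f}, "
          f"45-deg cant {beam_factor(alt, 45.0):.2f}")
# Canting to ~45 degrees rejects the high summer beam almost entirely
# while still admitting much of the low winter sun.
```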
Solar window
Photovoltaic windows not only provide a clear view and illuminate rooms, but also convert sunlight to electricity for the building. In most cases, translucent photovoltaic cells are used.
Passive solar
Passive solar windows allow light and solar energy into a building while minimizing air leakage and heat loss. Properly positioning these windows in relation to sun, wind, and landscape—while properly shading them to limit excess heat gain in summer and shoulder seasons, and providing thermal mass to absorb energy during the day and release it when temperatures cool at night—increases comfort and energy efficiency. Properly designed in climates with adequate solar gain, these can even be a building's primary heating system.
Coverings
A window covering is a shade or screen that provides multiple functions. Some coverings, such as drapes and blinds provide occupants with privacy. Some window coverings control solar heat gain and glare. There are external shading devices and internal shading devices. Low-e window film is a low-cost alternative to window replacement to transform existing poorly-insulating windows into energy-efficient windows. For high-rise buildings, smart glass can provide an alternative.
A sex-determination system is a biological system that determines the development of sexual characteristics in an organism. Most organisms that create their offspring using sexual reproduction have two common sexes, male and female; in other species there are hermaphrodites, organisms that can function reproductively as female, male, or both.
There are also some species in which only one sex is present, temporarily or permanently. This can be due to parthenogenesis, the act of a female reproducing without fertilization. In some plants or algae the gametophyte stage may reproduce itself, thus producing more individuals of the same sex as the parent.
In some species, sex determination is genetic: males and females have different alleles or even different genes that specify their sexual morphology. In animals this is often accompanied by chromosomal differences, generally through combinations of XY, ZW, XO, ZO chromosomes, or haplodiploidy. The sexual differentiation is generally triggered by a main gene (a "sex locus"), with a multitude of other genes following in a domino effect.
In other cases, the sex of a fetus is determined by environmental variables (such as temperature). The details of some sex-determination systems are not yet fully understood. One hope for future research is to analyze signals from the fetus's developing reproductive system during pregnancy, so as to determine more accurately whether the fetus is male or female. Such analysis could also indicate whether the fetus is hermaphroditic, possessing all or part of both male and female reproductive organs.
Some species, such as various plants and fish, do not have a fixed sex and instead go through life cycles, changing sex in response to genetic cues during the corresponding life stages. This could be due to environmental factors such as seasons and temperature. In some gonochoric species, a few individuals may have conditions that cause a mix of different sex characteristics.
Discovery
Sex determination was discovered in the mealworm by the American geneticist Nettie Stevens in 1903.
In 1694, R. J. Camerarius conducted early experiments on pollination and reported the existence of male and female characteristics in plants (maize).
In 1866, Gregor Mendel published on inheritance of genetic traits. This is known as Mendelian inheritance and it eventually established the modern understanding of inheritance from two gametes.
In 1902, C.E. McClung identified sex chromosomes in insects.
In 1917, C.E. Allen discovered sex determination mechanisms in plants.
In 1922, C.B. Bridges put forth the Genic Balance Theory of sex determination.
Chromosomal systems
Among animals, the most common chromosomal sex-determination systems are XY, XO, ZW, and ZO, though there are numerous exceptions.
According to the Tree of Sex database (as of 2023), the known sex determination systems are:
1. complex sex chromosomes, homomorphic sex chromosomes, or others
XX/XY sex chromosomes
The XX/XY sex-determination system is the most familiar, as it is found in humans. The XX/XY system is found in most other mammals, as well as some insects. In this system, females have two of the same kind of sex chromosome (XX), while males have two distinct sex chromosomes (XY). The X and Y sex chromosomes are different in shape and size from each other, unlike the rest of the chromosomes (autosomes), and are sometimes called allosomes. In some species, such as humans, organisms remain sex indifferent for a time during development (embryogenesis); in others, however, such as fruit flies, sexual differentiation occurs as soon as the egg is fertilized.
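Because the mother contributes an X while the father contributes an X or a Y with roughly equal probability, this system produces an approximately 1:1 sex ratio; a toy simulation (illustrative only, ignoring real-world deviations from a perfect 50/50 chance):

```python
import random

# Toy illustration of XX/XY inheritance: the mother always passes an X,
# the father passes X or Y with equal probability, yielding roughly
# equal numbers of XX (female) and XY (male) offspring.

def offspring_karyotype(rng):
    return "X" + rng.choice("XY")

rng = random.Random(0)
counts = {"XX": 0, "XY": 0}
for _ in range(100_000):
    counts[offspring_karyotype(rng)] += 1

print(counts)  # both counts near 50,000: an approximately 1:1 sex ratio
```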
Y-centered sex determination
Some species (including humans) have a gene SRY on the Y chromosome that determines maleness. Members of SRY-reliant species can have uncommon XY chromosomal combinations such as XXY and still live.
Human sex is determined by the presence or absence of a Y chromosome with a functional SRY gene. Once the SRY gene is activated, cells produce testosterone and anti-Müllerian hormone, which typically ensures the development of a male reproductive system. In typical XX embryos, cells secrete estrogen, which drives the body toward the female pathway.
In Y-centered sex determination, the SRY gene is the main gene in determining male characteristics, but multiple genes are required to develop testes. In XY mice, lack of the gene DAX1 on the X chromosome results in sterility, but in humans it causes adrenal hypoplasia congenita. However, when an extra DAX1 gene is placed on the X chromosome, the result is a female, despite the existence of SRY, since it overrides the effects of SRY. Even when there are normal sex chromosomes in XX females, duplication or expression of SOX9 causes testes to develop. Gradual sex reversal in developed mice can also occur when the gene FOXL2 is removed from females. Even though the gene DMRT1 is used by birds as their sex locus, species who have XY chromosomes also rely upon DMRT1, contained on chromosome 9, for sexual differentiation at some point in their formation.
X-centered sex determination
Some species, such as fruit flies, use the presence of two X chromosomes to determine femaleness. Species that use the number of Xs to determine sex are nonviable with an extra X chromosome.
Other variants of XX/XY sex determination
Some fish have variants of the XY sex-determination system, as well as the regular system. For example, while having an XY format, Xiphophorus nezahualcoyotl and X. milleri also have a second Y chromosome, known as Y', that creates XY' females and YY' males.
At least one monotreme, the platypus, presents a particular sex-determination scheme that in some ways resembles that of the ZW sex chromosomes of birds and lacks the SRY gene. The platypus has ten sex chromosomes: males have five X and five Y chromosomes, while females have five pairs of X chromosomes. During meiosis in the male, the ten chromosomes link into a single alternating chain, from which the five X chromosomes segregate together into one set of gametes and the five Y chromosomes into the other. Thus, they behave effectively as a typical XY chromosomal system, except that each of X and Y is broken into five parts, with the effect that recombination occurs very frequently at four particular points. One of the X chromosomes is homologous to the human X chromosome, and another is homologous to the bird Z chromosome.
Although it is an XY system, the platypus' sex chromosomes share no homologues with eutherian sex chromosomes. Instead, homologues with eutherian sex chromosomes lie on the platypus chromosome 6, which means that the eutherian sex chromosomes were autosomes at the time that the monotremes diverged from the therian mammals (marsupials and eutherian mammals). However, homologues to the avian DMRT1 gene on platypus sex chromosomes X3 and X5 suggest that it is possible the sex-determining gene for the platypus is the same one that is involved in bird sex-determination. More research must be conducted in order to determine the exact sex determining gene of the platypus.
XX/X0 sex chromosomes
In this variant of the XY system, females have two copies of the sex chromosome (XX) but males have only one (X0). The 0 denotes the absence of a second sex chromosome. Generally in this method, sex is determined by the dosage of genes expressed across the sex chromosomes. This system is observed in a number of insects, including the grasshoppers and crickets of order Orthoptera and in cockroaches (order Blattodea). A small number of mammals also lack a Y chromosome. These include the Amami spiny rat (Tokudaia osimensis), the Tokunoshima spiny rat (Tokudaia tokunoshimensis), and Sorex araneus, a shrew species. Transcaucasian mole voles (Ellobius lutescens) also have a form of XO determination, in which both sexes lack a second sex chromosome. The mechanism of sex determination in these species is not yet understood.
The nematode C. elegans is male with one sex chromosome (X0); with a pair of chromosomes (XX) it is a hermaphrodite. Its main sex gene is XOL, which encodes XOL-1 and also controls the expression of the genes TRA-2 and HER-1. These genes reduce male gene activation and increase it, respectively.
ZW/ZZ sex chromosomes