id: int64 (39 to 79M)
url: string, lengths 31 to 227
text: string, lengths 6 to 334k
source: string, lengths 1 to 150
categories: list, lengths 1 to 6
token_count: int64 (3 to 71.8k)
subcategories: list, lengths 0 to 30
4,160,503
https://en.wikipedia.org/wiki/Float%20switch
A float switch is a type of level sensor, a device used to detect the level of liquid within a tank. The switch may be used to control a pump, as an indicator, an alarm, or to control other devices. One type of float switch uses a mercury switch inside a hinged float. Another common type is a float that raises a rod to actuate a microswitch. One pattern uses a reed switch mounted in a tube; a float, containing a magnet, surrounds the tube and is guided by it. When the float raises the magnet to the reed switch, it closes. Several reeds can be mounted in the tube for different level indications by one assembly. A very common application is in sump pumps and condensate pumps where the switch detects the rising level of liquid in the sump or tank and energizes an electrical pump which then pumps liquid out until the level of the liquid has been substantially reduced, at which point the pump is switched off again. Float switches are often adjustable and can include substantial hysteresis. That is, the switch's "turn on" point may be much higher than the "shut off" point. This minimizes the on-off cycling of the associated pump. Some float switches contain a two-stage switch. As liquid rises to the trigger point of the first stage, the associated pump is activated. If the liquid continues to rise (perhaps because the pump has failed or its discharge is blocked), the second stage will be triggered. This stage may switch off the source of the liquid being pumped, trigger an alarm, or both. Where level must be sensed inside a pressurized vessel, often a magnet is used to couple the motion of the float to a switch located outside the pressurized volume. In some cases, a rod through a stuffing box can be used to operate a switch, but this creates high drag and has a potential for leakage. Successful float switch installations minimize the opportunity for accumulation of dirt on the float that would impede its motion. Float switch materials are selected to resist the deleterious effects of corrosive process liquids. In some systems, a properly selected and sized float can be used to sense the interface level between two liquids of different density. See also Float (liquid level) Fuel gauge Level sensor Sight glass References Fluid dynamics Heating, ventilation, and air conditioning Sensors Mechanisms (engineering) Pumps Switches
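The switching behaviour described above (hysteresis between a high "turn on" point and a lower "shut off" point, plus an optional second stage) can be sketched as simple control logic. The following Python fragment is an illustrative sketch only; the level thresholds are assumed values, not taken from any particular device.

```python
# Sketch of the hysteresis and two-stage behaviour described above.
# All level thresholds are illustrative assumptions, not from a real device.

PUMP_ON_LEVEL = 80.0    # "turn on" point (cm) -- deliberately higher than...
PUMP_OFF_LEVEL = 20.0   # ..."shut off" point, to minimize on-off cycling
ALARM_LEVEL = 95.0      # second stage: alarm / shut off the liquid source

def update(level_cm, pump_running):
    """Return (pump_running, alarm) for the current liquid level."""
    if level_cm >= PUMP_ON_LEVEL:
        pump_running = True        # first stage trips: start pumping
    elif level_cm <= PUMP_OFF_LEVEL:
        pump_running = False       # level substantially reduced: stop
    # Between the two thresholds the pump keeps its previous state (hysteresis).
    alarm = level_cm >= ALARM_LEVEL  # second stage: pump failed or blocked
    return pump_running, alarm
```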
Float switch
[ "Physics", "Chemistry", "Technology", "Engineering" ]
488
[ "Pumps", "Turbomachinery", "Chemical engineering", "Measuring instruments", "Physical systems", "Hydraulics", "Mechanical engineering", "Piping", "Sensors", "Mechanisms (engineering)", "Fluid dynamics" ]
4,160,656
https://en.wikipedia.org/wiki/Human%20herpesvirus%206
Human herpesvirus 6 (HHV-6) is the common collective name for human betaherpesvirus 6A (HHV-6A) and human betaherpesvirus 6B (HHV-6B). These closely related viruses are two of the nine known herpesviruses that have humans as their primary host. HHV-6A and HHV-6B are double-stranded DNA viruses within the Betaherpesvirinae subfamily and of the genus Roseolovirus. HHV-6A and HHV-6B infect almost all of the human populations that have been tested. HHV-6A has been described as more neurovirulent, and as such is more frequently found in patients with neuroinflammatory diseases such as multiple sclerosis. HHV-6 (and HHV-7) levels in the brain are also elevated in people with Alzheimer's disease. HHV-6B primary infection is the cause of the common childhood illness exanthema subitum (also known as roseola infantum or sixth disease). It is passed on from child to child. It is uncommon for adults to contract this disease as most people have had it by kindergarten, and once contracted, immunity arises and prevents future reinfection. Additionally, HHV-6B reactivation is common in transplant recipients, which can cause several clinical manifestations such as encephalitis, bone marrow suppression, and pneumonitis. A variety of tests are used in the detection of HHV-6, some of which do not differentiate the two species. Both viruses can cause transplacental infection and be passed on to a newborn. HHV-6A and Infertility A 2016 study showed that 43% of women with unexplained infertility tested positive for HHV-6A compared to 0% in the fertile control group. HHV-6A was found present in endometrial epithelial cells from women with unexplained infertility but not in their blood. In the context of infertility, this discovery underscores the importance of targeted testing for HHV-6A within the uterine environment, as the virus was not detected in the bloodstream of the affected individuals. Effective diagnosis, therefore, requires tests that are capable of distinguishing between active and latent HHV-6A infections specifically in endometrial tissue, highlighting the need for tissue-specific viral detection methods in assessing and managing infertility associated with HHV-6A. A 2018 study found 37% of women experiencing recurrent implantation failure after IVF/ET had HHV-6A in their endometrial biopsies, compared to 0% in control groups. A 2019 study confirmed the presence of HHV-6A infection in 40% of women with idiopathic infertility. Identifying the effect of HHV-6A infection on endometrial immune status opens up new perspectives on fertility care. It is possible to choose antiviral therapies and non-hormonal approaches for women with unexplained infertility characterized by HHV-6A to increase their pregnancy rate. Testing for HHV-6 The table below presents a comprehensive overview of various diagnostic tests used to detect human herpesvirus 6 (HHV-6), detailing their ability to distinguish between active and latent infections. It also includes insights on the interpretation of test results, identifies providers that offer these tests, and indicates which methods are suitable for detecting HHV-6A in the endometrial lining—an important consideration for evaluating potential causes of infertility in women. The table serves as a guide for healthcare professionals to select appropriate diagnostic tests for HHV-6. History In 1986, Syed Zaki Salahuddin, Dharam Ablashi, and Robert Gallo cultivated peripheral blood mononuclear cells from patients with AIDS and lymphoproliferative illnesses. 
Short-lived, large, refractile cells that frequently contained intranuclear and/or intracytoplasmic inclusion bodies were documented. Electron microscopy revealed a novel virus that they named human B-lymphotropic virus (HBLV). Shortly after its discovery, Ablashi et al. described five cell lines that can be infected by the newly discovered HBLV. They published that HSB-2, a particular T-cell line, is highly susceptible to infection. Ablashi's pioneering research concluded by suggesting that the virus name be changed from HBLV to HHV-6, in accord with the published provisional classification of herpesviruses. Years later, HHV-6 was divided into subtypes. Early research (1992) described two very similar, yet unique variants: HHV-6A and HHV-6B. The distinction was warranted due to unique restriction endonuclease cleavages, monoclonal antibody reactions, and growth patterns. HHV-6A includes several adult-derived strains and its disease spectrum is not well defined, although it is thought by some to be more neurovirulent. HHV-6B is commonly detected in children with roseola infantum, as it is the etiologic agent for this condition. The two viruses share a sequence homology of 95%. In 2012, HHV-6A and HHV-6B were officially recognized as distinct species. Taxonomy HHV-6A and HHV-6B were recognized by the International Committee on Taxonomy of Viruses (ICTV) as distinct species in 2012. Human roseoloviruses include HHV-6A, HHV-6B and HHV-7. Herpesvirus was established as a genus in 1971 in the first report of the ICTV. This genus consisted of 23 viruses among 4 groups. In 1976, a second ICTV report was released in which this genus was elevated to the family level — the Herpetoviridae. Because of possible confusion with viruses derived from reptiles, the family name was changed in the third report (1979) to Herpesviridae. In this report, the family Herpesviridae was divided into 3 subfamilies (Alphaherpesvirinae, Betaherpesvirinae and Gammaherpesvirinae) and 5 unnamed genera; 21 viruses were recognized as members of the family. In 2009, the order Herpesvirales was created. This was necessitated by the discovery that the herpesviruses of fish and molluscs are only distantly related to those of birds and mammals. Order Herpesvirales contains three families, the Herpesviridae, which contains the long-recognized herpesviruses of mammals, birds, and reptiles, plus two new families — the family Alloherpesviridae which incorporates herpesviruses of bony fish and frogs, and the family Malacoherpesviridae which contains viruses of molluscs. As of 2012, this order comprised 3 families, 4 subfamilies (1 unassigned), 18 genera (4 unassigned) and 97 species. Structure The diameter of an HHV-6 virion is about 2000 angstroms. The virion's outer portion consists of a lipid bilayer membrane that contains viral glycoproteins and is derived from that of the host. Below this membrane envelope is a tegument which surrounds an icosahedral capsid, composed of 162 capsomeres. The protective capsid of HHV-6 contains double-stranded linear DNA. During maturation of HHV-6 virions, human cell membranes are used to form viral lipid envelopes (as is characteristic of all enveloped viruses). During this process HHV-6 utilizes lipid rafts, which are membranous microdomains enriched by cholesterol, sphingolipids, and glycosylphosphatidylinositol-anchored proteins. 
Early researchers suspected that HHV-6 virions mature in the nucleus; some even incorrectly published this, as they generalized and applied to HHV-6 what was known about other viruses. However, research published in 2009 suggests that the HHV-6 virus utilizes trans-Golgi-network-derived vesicles for assembly. Genome The genetic material of HHV-6 is composed of linear (circular during an active infection), double-stranded DNA which contains an origin of replication, two 8–10 kb left and right direct repeat termini, and a unique segment that is 143–145 kb. The origin of replication (often labeled as "oriLyt" in the literature) is where DNA replication begins. The direct repeat termini (DRL and DRR) possess a repeated TTAGGG sequence, identical to that of human telomeres. Variability in the number of telomeric repeats is observed in the range of 15–180. These termini also contain pac-1 and pac-2 cleavage and packaging signals that are conserved among herpesviruses. The unique segment contains seven major core gene blocks (U27–U37, U38–U40, U41–U46, U48–U53, U56–U57, U66EX2–U77, and U81–U82), an arrangement that is also characteristic of herpesviruses. These conserved genes code for proteins that are involved in replication, cleavage, and packaging of the viral genome into a mature virion. Additionally, they code for a number of immunomodulatory proteins. The unique segment also possesses a block of genes (U2–U19) that are conserved among HHV-6, HHV-7, and cytomegaloviruses (the betaherpesviruses). A number of the unique segment genes are associated with, for instance, the HCMV US22 family. The table below outlines some of their known properties. Genes Viral entry HHV-6 receptor When an extracellular HHV-6 virion comes across human cells, it encounters the human receptor protein cluster of differentiation 46 (CD46), which plays a role in regulating the complement system. The CD46 protein possesses a single variable region, as a result of alternative splicing. As such, at least fourteen isoforms of CD46 exist, all of which bind HHV-6A. The extracellular region of CD46 contains four short consensus repeats of about 60 amino acids that fold into a compact beta-barrel domain surrounded by flexible loops. As has been demonstrated for CD46 with other ligands, the CD46 protein structure linearizes upon binding HHV-6. While their precise interaction has not yet been determined, the second and third SCR domains have been demonstrated as required for HHV-6 receptor binding and cellular entry. HHV-6 receptor ligand Mori et al. first identified the gene product gQ1, a glycoprotein unique to HHV-6, and found that it forms a complex with gH and gL glycoproteins. They believed that this heterotrimer complex served as the viral ligand for CD46. Soon thereafter, another glycoprotein named gQ2 was identified and found to be part of the gH/gL/gQ1 ligand complex, forming a heterotetramer that was positively identified as the viral CD46 ligand. The exact process of entry is not yet well understood. Salivary glands The salivary glands have been described as an in vivo reservoir for HHV-6 infection. Leukocytes Studies have shown that T cells are highly susceptible to infection by HHV-6. Nervous system In 2011, researchers at the National Institutes of Health set out to elucidate the then-unknown route by which HHV-6A gains entry into the nervous system. To do so, they autopsied the brains of around 150 subjects. 
When various anatomical regions were assayed for their viral load, olfactory tissues were found to have the highest HHV-6 content. They concluded that these tissues are the entry point for HHV-6A. The results above are consistent with those of previous studies that involved HSV-1 (and a number of other viruses), which also disseminates into the CNS through olfactory tissue. Researchers also hypothesized that olfactory ensheathing cells (OECs), a group of specialized glial cells found in the nasal cavity, may have a role in HHV-6 infectivity. They suspected this association because OECs have properties similar to those of astrocytes, another type of glial cell that was previously identified as being susceptible to HHV-6 infection. Research continued by infecting OECs in vitro with both types of HHV-6. Ultimately, only the OECs infected with HHV-6A tested positive for signs of de novo viral synthesis, as is also characteristic of astrocytes. Cellular activity Once the virus is inside a cell, two outcomes have been described: active and inactive infection. Active infection Active infections involve the linear dsDNA genome circularizing by end-to-end covalent linkages. This process was first reported for the herpes simplex virus. Once circularized, HHV-6 begins to express what are known as "immediate early" genes. These gene products are believed to be transcription activators and may be regulated by the expression of viral micro RNAs. Subsequent expression of "early genes" then occurs and activates, for instance, viral DNA polymerases. Early genes are also involved in the rolling circle replication that follows. HHV-6's replication results in the formation of concatemers, which are long molecules that contain several repeats of a DNA sequence. These long concatemers are then cleaved between the pac-1 and pac-2 regions for packaging of the genome into individual virions. Inactive infection Not all newly infected cells begin rolling circle replication. Herpesviruses may enter a latent stage, inactively infecting their human host. Since its discovery in 1993, this phenomenon has been found among all of the betaherpesviruses. Other betaherpesviruses establish latency as a nuclear episome, which is a circular DNA molecule (analogous to a plasmid). For HHV-6, latency is believed to occur exclusively through the integration of viral telomeric repeats into human subtelomeric regions. Only one other virus, Marek's disease virus, is known to achieve latency in this fashion. This phenomenon is possible as a result of the telomeric repeats found within the direct repeat termini of HHV-6's genome. The right direct repeat terminus integrates within 5 to 41 human telomere repeats, and preferentially does so into the proximal end of chromosomes 9, 17, 18, 19, and 22, but has also occasionally been found in chromosomes 10 and 11. Nearly 70 million individuals are suspected to carry chromosomally integrated HHV-6. A number of genes expressed by HHV-6 are unique to its inactive latency stage. These genes involve maintaining the genome and avoiding destruction of the host cell. For instance, the U94 protein is believed to repress genes that are involved in cellular lysis (apoptosis) and also may aid in telomeric integration. Once stored in human telomeres, the virus is reactivated intermittently. Reactivation and transplantation The specific triggers for reactivation are not well understood. Some researchers have suggested that injury, physical or emotional stress, and hormonal imbalances could be involved. 
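As a small illustration of the telomeric repeats described above (the TTAGGG motif, repeated roughly 15–180 times at the direct repeat termini, through which the virus integrates into host telomeres), the following Python sketch counts consecutive TTAGGG repeats at the end of a sequence string; the input sequence is a made-up toy example, not a real HHV-6 terminus.

```python
# Count consecutive TTAGGG repeats at the end of a sequence string.
# The input below is a hypothetical toy sequence, not a real HHV-6 terminus.

def count_terminal_ttaggg(seq, motif="TTAGGG"):
    count = 0
    while seq.endswith(motif):
        seq = seq[: -len(motif)]  # strip one repeat and keep counting
        count += 1
    return count

toy_terminus = "GATTACA" + "TTAGGG" * 17   # hypothetical example
print(count_terminal_ttaggg(toy_terminus))  # -> 17
```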
In 2011, researchers discovered that reactivation can be triggered in vitro by histone deacetylase inhibitors. Once reactivation begins, the rolling circle process is initiated and concatemers are formed as described above. A study published in The Journal of Infectious Diseases in 2024 investigated the reactivation of inherited chromosomally integrated human herpesvirus 6 (iciHHV-6B) in a liver transplant recipient and its impact on the graft. The research, conducted by Hannolainen et al., used hybrid capture sequencing and various molecular techniques to analyze the viral sequences and host immune response. The findings demonstrated active replication of iciHHV-6B and significant immune activation, suggesting the pathological impact of viral reactivation on transplant outcomes. The study emphasizes the importance of monitoring iciHHV-6 reactivation in transplant patients. Interactions Human herpesvirus 6 lives primarily in humans and, while variants of the virus can cause mild to fatal illnesses, it can also live commensally in its host. It has been demonstrated that HHV-6 fosters the progression of HIV-1 upon coinfection in T cells. HHV-6 upregulates the expression of the primary HIV receptor CD4, thus expanding the range of HIV-susceptible cells. Several studies also have shown that HHV-6 infection increases production of inflammatory cytokines that enhance in vitro expression of HIV-1, such as TNF-alpha, IL-1 beta, and IL-8. A more recent in vivo study shows HHV-6A coinfection to dramatically accelerate the progression from HIV infection to AIDS in pigtailed macaques. HHV-6 has also been demonstrated to transactivate Epstein–Barr virus. Epidemiology Age Humans acquire the virus at an early age, some as early as less than one month of age. HHV-6 primary infections account for up to 20% of infant emergency room visits for fever in the United States and are associated with several more severe complications, such as encephalitis, lymphadenopathy, myocarditis and myelosuppression. The prevalence of the virus in the body increases with age (rates of infection are highest among infants between 6 and 12 months old), and it is hypothesized that this is due to the loss of the maternal antibodies that protect a child from infection. There are inconsistencies in the reported correlation between age and seropositivity: according to some reports seropositivity decreases with increasing age, while some indicate no significant decline, and others report an increased rate of seropositivity for individuals aged 62 and older. After primary infection, latency is established in salivary glands, hematopoietic stem cells, and other cells, and exists for the lifetime of the host. Geographical distribution The virus is known to be widespread around the world. An HHV-6 infection rate of 64–83% by age 13 months has been reported for countries including the United States, United Kingdom, Japan and Taiwan. Studies have found seroprevalence varying "from approximately 39 to 80% among ethnically diverse adult populations from Tanzania, Malaysia, Thailand, and Brazil." There are no significant differences among ethnic groups living in the same geographical location or between sexes. While HHV-6B is present in almost all of the world's populations, HHV-6A appears to be less frequent in Japan, North America, and Europe. Transmission Transmission is believed to occur most frequently through the shedding of viral particles into saliva. 
Both HHV-6B and HHV-7 are found in human saliva, the former at a lower frequency. Studies report varying rates of prevalence of HHV-6 in saliva (between 3–90%), and have also described the salivary glands as an in vivo reservoir for HHV-6. The virus infects the salivary glands, establishes latency, and periodically reactivates to spread infection to other hosts. Vertical transmission has also been described, and occurs in approximately 1% of births in the United States. This form is easily identifiable as the viral genome is contained within every cell of an infected individual. Diagnosis The diagnosis of HHV-6 infection is performed by both serologic and direct methods. The most prominent technique is the quantification of viral DNA in blood, other body fluids, and organs by means of real-time PCR. Clinical significance The classical presentation of primary HHV-6B infection is as exanthema subitum (ES) or "roseola", featuring a high temperature lasting 3 to 5 days followed by a rash on the torso, neck, or face and sometimes febrile convulsions; however, the symptoms are not always present together. One study (1997) indicated that a rash is not a distinguishing feature of HHV-6 infection, with rates similar to non-HHV-6 infections (10–20% of febrile children in both groups). HHV-6 infections more frequently present with high temperatures (over 40 °C), at a rate of around two thirds compared to less than half in the non-HHV-6 patients. Similarly significant differences were seen in malaise, irritability, and tympanic membrane inflammation. Primary infection in adults tends to be more severe. Diagnosis for the virus, particularly HHV-6B, is vital for the patient because of the infection's adverse effects. Symptoms that point to this infection, such as rashes, go unnoticed in patients that receive antibiotics because they can be misinterpreted as a side-effect of the medicine. In addition to exanthema subitum, HHV-6B is known to be associated with hepatitis, febrile convulsions, and encephalitis. The virus periodically re-activates from its latent state, with HHV-6 DNA being detectable in 20–25% of healthy adults in the United States. In the immunocompetent setting, these re-activations are often asymptomatic, but in immunosuppressed individuals there can be serious complications. HHV-6 re-activation causes severe disease in transplant recipients and can lead to graft rejection, often in concert with other betaherpesviruses. Likewise in HIV/AIDS, HHV-6 re-activations cause disseminated infections leading to end-organ disease and death. Although up to 100% of the population are exposed (seropositive) to HHV-6, most by 3 years of age, there are rare cases of primary infections in adults. In the United States, these have been linked more with HHV-6A, which is thought to be more pathogenic and more neurotropic and has been linked to several central nervous system-related disorders. HHV-6 has been reported in multiple sclerosis patients and has been implicated as a co-factor in several other diseases, including chronic fatigue syndrome, AIDS, and temporal lobe epilepsy. Multiple sclerosis Multiple sclerosis (MS) is an autoimmune and inflammatory disorder of the nervous system that results in demyelination of axons in the brain and spinal cord. The first study to specifically investigate HHV-6-related demyelination appeared in the literature during 1996, when a previously healthy 19-month-old child developed acute encephalopathy. 
Levels of myelin basic protein were elevated in his cerebrospinal fluid, suggesting that demyelination was occurring. This link was almost forgotten until four years later, when an MS-related study was published showing an HHV-6 prevalence of 90% among demyelinated brain tissues. In comparison, a mere 13% of disease-free brain tissues possessed the virus. The molecular mimicry hypothesis, in which T cells essentially confuse an HHV-6 viral protein with myelin basic protein, first appeared around this time. Early on in the development of this hypothesis (2002), Italian researchers used the HHV-6A variant along with bovine myelin basic protein to generate cross-reactive T cell lines. These were compared to the T cells of individuals with MS as well as those of controls, and no significant difference was found between the two. Their early research suggested that molecular mimicry may not be a mechanism that is involved in MS. Several similar studies followed. A study from October 2014 supported the role of long-term HHV-6 infection with demyelination in progressive neurological diseases. Chronic fatigue syndrome Chronic fatigue syndrome (CFS) is a debilitating illness, the cause of which is unknown. Patients with CFS have abnormal neurological, immunological, and metabolic findings. For many, but not all, patients who meet criteria for CFS, the illness begins with an acute, infectious-like syndrome. Cases of CFS can follow well-documented infections with several infectious agents. A study of 259 patients with a "CFS-like" illness published shortly after HHV-6 was discovered used primary lymphocyte cultures to identify people with active replication of HHV-6. Such active replication was found in 70% of the patients vs. 20% of the control subjects. The question raised but not answered by this study was whether the illness caused subtle immune deficiency that led to reactivation of HHV-6, or whether reactivation of HHV-6 led to the symptoms of the illness. Subsequent studies employing only serological techniques that do not distinguish active from latent infection have produced mixed results: most, but not all, have found an association between CFS and HHV-6 infection. Other studies have employed assays that can detect active infection: primary cell culture, PCR of serum or plasma, or IgM early antigen antibody assays. The majority of these studies have shown an association between CFS and active HHV-6 infection, although a few have not. In summary, active infection with HHV-6 is present in a substantial fraction of patients with CFS. Moreover, HHV-6 is known to infect cells of the nervous system and immune system, organ systems with demonstrable abnormalities in CFS. Despite this association, it remains unproven that reactivated HHV-6 infection is a cause of CFS. Hashimoto's thyroiditis Hashimoto's thyroiditis is the most common thyroid disease and is characterized by abundant lymphocyte infiltrate and thyroid impairment. Recent research suggests a potential role for HHV-6 (possibly variant A) in the development or triggering of Hashimoto's thyroiditis. Pregnancy The role of HHV-6 during pregnancy leading to inflammation in the amniotic cavity has been studied. Infertility HHV-6A DNA was found in the endometrium of almost half of a group of infertile women, but in none of the fertile control group. Natural killer cells specific for HHV-6A, and high uterine levels of certain cytokines, were also found in the endometrium of the infertile women positive for HHV-6A. 
The authors suggest that HHV-6A may prove to be an important factor in female infertility. Cancer Many human oncogenic viruses have been identified. For instance, HHV-8 is linked to Kaposi's sarcoma, the Epstein–Barr virus to Burkitt's lymphoma, and HPV to cervical cancer. In fact, the World Health Organization estimated (2002) that 17.8% of human cancers were caused by infection. The typical methods whereby viruses initiate oncogenesis involve suppressing the host's immune system, causing inflammation, or altering genes. HHV-6 has been detected in lymphomas, leukemias, cervical cancers, and brain tumors. Various medulloblastoma cell lines as well as the cells of other brain tumors have been demonstrated to express the CD46 receptor. Viral DNA has also been identified in many other non-pathological brain tissues, but the levels are lower. The human P53 protein functions as a tumor suppressor. Individuals who do not properly produce this protein experience a higher incidence of cancer, a phenomenon known as Li-Fraumeni syndrome. One of HHV-6's gene products, the U14 protein, binds P53 and incorporates it into virions. Another gene product, the ORF-1 protein, can also bind and inactivate P53. Cells expressing the ORF-1 gene have even been shown to produce fibrosarcomas when injected into mice. Another product of HHV-6, the immediate early protein U95, has been shown to bind nuclear factor-kappa B. Deregulation of this factor is associated with cancer. Optic neuritis HHV-6-induced ocular inflammation has been reported three times. All three cases were reported in elderly individuals, two in 2007 and one in 2011. The first two were reported in Japan and France; the most recent, in Japan. These were believed to have occurred as a result of a reactivation, as anti-HHV-6 IgM antibody levels were low. Temporal lobe epilepsy Epilepsy of the mesial temporal lobe is associated with HHV-6 infection. Within this region of the brain exist three structures: the amygdala, hippocampus, and parahippocampal gyrus. Mesial temporal lobe epilepsy (MTLE) is the most common form of chronic epilepsy and its underlying mechanism is not fully understood. Researchers consistently report having found HHV-6 DNA in tissues that were removed from patients with MTLE. Studies have demonstrated a tendency for HHV-6 to aggregate in the temporal lobe, with the highest concentrations in astrocytes of the hippocampus. However, one group of researchers ultimately concluded that HHV-6 may not be involved in MTLE related to mesial temporal sclerosis. Liver failure The virus is a common cause of liver dysfunction and acute liver failure in liver transplant recipients, and has recently been linked to periportal confluent necrosis. Furthermore, HHV-6 DNA is often detectable only in the biopsy tissues, as DNA levels fall below the level of detection in blood in persistent cases. Treatment There are no pharmaceuticals approved specifically for treating HHV-6 infection, although cytomegalovirus treatments (valganciclovir, ganciclovir, cidofovir, and foscarnet) have shown some success. These drugs are given with the intent of inhibiting proper DNA polymerization by competing with deoxynucleoside triphosphates or specifically inactivating viral DNA polymerases. Finding a treatment can be difficult when HHV-6 reactivation occurs following transplant surgery because transplant medications include immunosuppressants. References External links Betaherpesvirinae Unaccepted virus taxa
Human herpesvirus 6
[ "Biology" ]
6,497
[ "Biological hypotheses", "Unaccepted virus taxa", "Controversial taxa" ]
4,160,674
https://en.wikipedia.org/wiki/Sight%20glass
A sight glass or water gauge is a type of level sensor, a transparent tube through which the operator of a tank or boiler can observe the level of liquid contained within. Liquid in tanks Simple sight glasses may be just a plastic or glass tube connected to the bottom of the tank at one end and the top of the tank at the other. The level of liquid in the sight glass will be the same as the level of liquid in the tank. Today, however, sophisticated float switches have replaced sight glasses in many such applications. Steam boilers If the liquid is hazardous or under pressure, more sophisticated arrangements must be made. In the case of a boiler, the pressure of the water below and the steam above is equal, so any change in the water level will be seen in the gauge. The transparent tube (the “glass” itself) may be mostly enclosed within a metal or toughened glass shroud to prevent it from being damaged through scratching or impact, and to offer protection to the operators in case of breakage. This usually has a patterned backplate to make the magnifying effect of the water in the tube more obvious and so allow for easier reading. In some locomotives where the boiler is operated at very high pressures, the tube itself would be made of metal-reinforced toughened glass. It is important to keep the water at the specified level; otherwise the top of the firebox will be exposed, creating an overheat hazard and causing damage and possibly catastrophic failure. To check that the device is offering a correct reading and the connecting pipes to the boiler are not blocked by scale, the water level needs to be “bobbed” by quickly opening the taps in turn and allowing a brief spurt of water through the drain cock. The National Board of Boiler and Pressure Vessel Inspectors recommends a daily testing procedure described by the American National Standards Institute (chapter 2, part I-204.3, water level gauge). While not strictly required, this procedure is designed to allow an operator to safely verify that all parts of the sight glass are operating correctly and have the free-flowing connections to the boiler necessary for proper operation. Failure The gauge glass on a boiler needs to be inspected periodically and replaced if it is seen to have worn thin in the vicinity of the gland nuts, but a failure in service can still occur. Drivers are expected to carry two or three glass tubes, pre-cut to the required length, together with hemp or rubber seals, to replace the tubes on the road. Familiarity with this disquieting occurrence was considered so important that a glass would often be smashed deliberately while a trainee driver was on the footplate, to give him practice in fitting a new tube. Although automatic ball valves are fitted in the mounts to limit the release of steam and scalding water, these can fail through accumulation of limescale. It was standard procedure to hold the coal scoop in front of the face while the other hand, holding the cap for protection, reached to turn off the valves at both ends of the glass. Reflex gauges A reflex gauge is more complex in construction but can give a clearer distinction between gas (steam) and liquid (water). Instead of containing the media in a glass tube, the gauge consists of a vertically oriented slotted metal body with a strong glass plate mounted on the open side of the slot facing the operator. The rear of the glass, in contact with the media, has grooves moulded into its surface, running vertically. The grooves form a zig-zag pattern with 90° angles. 
Incident light entering the glass is refracted at the rear surface in contact with the media. In the region that is in contact with the gas, most of the light is reflected from the surface of one groove to the next and back towards the operator, appearing silvery white. In the region that is in contact with the liquid, most of the light is refracted into the liquid, causing this region to appear almost black to the operator. Well-known makes of reflex gauge are Clark-Reliance, IGEMA, TGI Ilmadur, Penberthy, Jerguson, Klinger, Cesare-Bonetti and Kenco. Due to the caustic nature of boiler anti-scaling treatments ("water softeners"), reflex gauges tend to become relatively rapidly etched by the water and lose their effectiveness at displaying the liquid level. Therefore, bi-colour gauges are recommended for certain types of boiler, particularly those operating at pressures above 60 bar. Bi-colour gauges A bi-colour gauge is generally preferred for caustic media in order to afford protection to the glass. The gauge consists of a vertically oriented slotted metal body with a strong plain glass to the front and the rear. The front and rear body surfaces are in non-parallel vertical planes. Behind the gauge body are light sources with two quite different wavelengths, typically red and green. Due to the different refraction of the red and green light, the liquid region appears green to the operator, while the gas region appears red. Unlike in the reflex gauge, the glass has a plane surface that does not need to be in direct contact with the media and can be protected with a layer of a caustic-resistant transparent material such as silica. Well-known manufacturers of high-quality bi-colour level gauges are Clark-Reliance, Klinger, FPS-Aquarian, IGEMA and Quest-Tec. Magnetic indicator In a magnetic indicator, a float on the surface of the liquid contains a permanent magnet. The liquid is contained in a chamber of strong, non-magnetic material, avoiding the use of glass. The level indicator consists of a number of pivoting magnetic vanes arranged one above the other and placed close to the chamber containing the float. The two faces of the vanes are differently coloured. As the magnet passes up and down behind the vanes it causes them to rotate, displaying one colour for the region containing the liquid and another for the region containing gas. Magnetic indicators are stated in various manufacturers' literature to be most suitable for very high pressures and/or temperatures and for aggressive liquids. History The first locomotive to be fitted with the device was built in 1829 by John Rastrick at his Stourbridge works. Modern industrial sight glass Industrial observational instruments have changed with industry itself. More structurally sophisticated than the water gauge, the contemporary sight glass — also called the sight window or sight port — can be found on the media vessel at chemical plants and in other industrial settings, including pharmaceutical, food, beverage and biogas plants. Sight glasses enable operators to visually observe processes inside tanks, pipes, reactors and vessels. The modern industrial sight glass is a glass disk held between two metal frames, which are secured by bolts and gaskets, or the glass disc is fused to the metal frame during manufacture. The glass used for this purpose is either soda lime glass or borosilicate glass, and the metal, usually a type of stainless steel, is chosen for desired properties of strength. 
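The contrast mechanism of the reflex gauge described above can be quantified with the critical angle for total internal reflection. The short Python sketch below uses typical textbook refractive indices (assumed values, not data for any particular gauge glass) to show why light striking a 45° groove face is reflected back from a gas-filled region but refracted away into a water-filled one.

```python
# Why a reflex gauge shows silvery gas and dark liquid, in numbers.
# Refractive indices are typical textbook values (assumptions):
import math

n_glass = 1.52                      # gauge glass (assumed)
media = {"steam/gas": 1.00, "water": 1.33}

incidence = 45.0                    # degrees; set by the 90-degree grooves

for name, n_medium in media.items():
    critical = math.degrees(math.asin(n_medium / n_glass))
    reflected = incidence > critical  # total internal reflection?
    print(f"{name}: critical angle {critical:.1f} deg ->",
          "reflected (bright)" if reflected else "refracted into media (dark)")
```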
Borosilicate glass is superior to other formulations in terms of chemical corrosion resistance and temperature tolerance, as well as transparency. Fused sight glasses are also called mechanically prestressed glass, because the glass is strengthened by compression of the metal ring. Heat is applied to a glass disc and its surrounding steel ring, causing a fusion of the materials. As the steel cools, it contracts, compressing the glass and making it resistant to tension. Because glass typically breaks under tension, mechanically prestressed glass is unlikely to break and endanger workers. The strongest sight glasses are made with borosilicate glass, because of the greater difference between its coefficient of thermal expansion and that of the steel ring. See also Fuel gauge Fusible plug References External links Reflex Gauge, Flat Glass or Transparent Gauge, and Ported Gauge, FPS-Aquarian Heating, ventilation, and air conditioning Measuring instruments Volumetric instruments Mechanical engineering Glass applications
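The expansion-mismatch argument above can be made concrete with an order-of-magnitude estimate. The Python sketch below uses typical handbook values for the expansion coefficients and glass modulus (assumptions), and treats the steel ring as perfectly rigid, so the resulting compressive prestress is an upper-bound illustration, not a design calculation.

```python
# Order-of-magnitude sketch of why a fused (prestressed) sight glass ends up
# in compression. All constants are typical textbook values (assumptions),
# and the steel ring is treated as perfectly rigid, so this is an upper bound.

alpha_steel = 12e-6     # /K, linear thermal expansion of stainless steel
alpha_boro  = 3.3e-6    # /K, borosilicate glass (lower -> bigger mismatch)
delta_T     = 500.0     # K of cooling after fusing (assumed)
E_glass     = 64e9      # Pa, Young's modulus of borosilicate (assumed)

mismatch_strain = (alpha_steel - alpha_boro) * delta_T
prestress = E_glass * mismatch_strain   # glass carried into compression
print(f"mismatch strain: {mismatch_strain:.2e}")
print(f"compressive prestress (rigid-ring bound): {prestress/1e6:.0f} MPa")
```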
Sight glass
[ "Physics", "Technology", "Engineering" ]
1,602
[ "Applied and interdisciplinary physics", "Volumetric instruments", "Mechanical engineering", "Measuring instruments" ]
4,160,772
https://en.wikipedia.org/wiki/Fuel%20gauge
In automotive and aerospace engineering, a fuel gauge is an instrument used to indicate the amount of fuel in a fuel tank. In electrical engineering, the term is also used for ICs that determine the current state of charge of accumulators. Motor vehicles As used in vehicles, the gauge consists of two parts: the sending unit, in the tank, and the indicator, on the dashboard. The sending unit usually uses a float connected to a potentiometer, typically a printed-ink resistive track in a modern automobile. As the tank empties, the float drops and slides a moving contact along the resistor, increasing its resistance. In addition, when the resistance reaches a certain point, it will also turn on a "low fuel" light on some vehicles. Meanwhile, the indicator unit (usually mounted on the dashboard) measures and displays the amount of electric current flowing through the sending unit. When the tank level is high and maximum current is flowing, the needle points to "F" indicating a full tank. When the tank is empty and the least current is flowing, the needle points to "E" indicating an empty tank; some vehicles use the indicators "1" (for full) and "0" (for empty) or "R" (for reserve) instead. The system can be fail-safe. If an electrical fault opens the circuit, the indicator shows the tank as being empty (theoretically provoking the driver to refill the tank) rather than full (which would allow the driver to run out of fuel with no prior notification). Corrosion or wear of the potentiometer will produce erroneous readings of fuel level. However, this system has a potential risk associated with it. An electric current is sent through the variable resistor to which a float is connected, so that the value of resistance depends on the fuel level. In most automotive fuel gauges such resistors are on the inward side of the gauge, i.e., inside the fuel tank. Sending current through such a resistor carries a fire hazard and an explosion risk. These resistive sensors also show an increased failure rate as alcohol is incrementally added to automotive gasoline. Alcohol increases the corrosion rate at the potentiometer, as it is capable of carrying current like water. Potentiometer applications for alcohol fuel use a pulse-and-hold methodology, with a periodic signal sent to determine the fuel level, decreasing the corrosion potential. A safer, non-contact method of sensing fuel level is therefore desired. Moylan arrow Since the early 1990s, many fuel gauges have included an icon with a fuel pump and an arrow, indicating the side of the vehicle on which the fuel filler is located. The use of the icon and arrow was invented in 1986 by Jim Moylan, a designer for Ford Motor Company. After he proposed the idea in April 1986, the 1989 Ford Escort and Mercury Tracer were the first vehicles to see it implemented. Other automotive companies noticed the addition and began to incorporate it into their own fuel gauges. Aircraft Magnetoresistance-type fuel level sensors, now becoming common in small aircraft applications, offer a potential alternative for automotive use. These fuel level sensors work similarly to the potentiometer example; however, a sealed detector at the float pivot determines the angular position of a magnet pair at the pivot end of the float arm. These are highly accurate, and the electronics are completely outside the fuel. 
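Returning to the resistive sender described earlier, the indicator is in effect reading one leg of a voltage divider. The sketch below shows one hedged way to convert a sensed voltage into a fill fraction; the supply voltage and the sender's resistance range are assumed illustrative values (senders vary widely by manufacturer).

```python
# Sketch of reading a float-type resistive sender as a voltage divider.
# Supply voltage and resistance range are illustrative assumptions:
V_SUPPLY = 5.0    # volts, regulated reference
R_FIXED  = 120.0  # ohms, pull-up resistor inside the indicator (assumed)
R_EMPTY  = 240.0  # ohms, sender resistance with the tank empty (assumed)
R_FULL   = 30.0   # ohms, sender resistance with the tank full (assumed)

def fuel_fraction(v_sense):
    """Estimate fill fraction from the voltage measured across the sender."""
    r_sender = R_FIXED * v_sense / (V_SUPPLY - v_sense)  # invert the divider
    frac = (R_EMPTY - r_sender) / (R_EMPTY - R_FULL)     # high R = empty tank
    return min(max(frac, 0.0), 1.0)                      # clamp to [0, 1]

print(fuel_fraction(2.0))  # example reading -> about 0.76 of a tank
```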
The non-contact nature of these magnetoresistive sensors addresses the fire and explosion hazard, as well as the issues related to fuel blends, gasoline additives and alcohol fuel mixtures. Magnetoresistive sensors are suitable for all fuel or fluid combinations, including LPG and LNG. The fuel level output for these senders can be a ratiometric voltage or, preferably, a digital CAN bus signal. These sensors are also fail-safe in that they either provide a valid level output or nothing at all. Systems that measure large fuel tanks (including underground storage tanks) may use the same electro-mechanical principle or may make use of a pressure sensor, sometimes connected to a mercury manometer. Many large transport aircraft use a different fuel gauge design principle. An aircraft may use a number (around 30 on an A320) of low-voltage tubular capacitor probes in which the fuel becomes the dielectric. At different fuel levels, different values of capacitance are measured and therefore the level of fuel can be determined. In early designs, the profiles and values of individual probes were chosen to compensate for fuel tank shape and aircraft pitch and roll attitudes. In more modern aircraft, the probes tend to be linear (capacitance proportional to fuel height) and the fuel computer works out how much fuel there is (implementations differ slightly between manufacturers). This has the advantage that a faulty probe may be identified and eliminated from the fuel calculations. In total this system can be more than 99% accurate. Since most commercial aircraft only take on board the fuel necessary for the intended flight (with appropriate safety margins), the system allows the fuel load to be preselected, causing the fuel delivery to be shut off when the intended load has been taken on board. Fuel Gauge ICs In electronics, various ICs are available that track the current state of charge of accumulators. These devices are also called "fuel gauges". See also Float switch List of auto parts List of vehicle instruments Sight glass External links Explanation of operation of double coil moving iron indicators Notes Fuel containers Fuel technology Vehicle parts
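For the linear capacitive probes described above (capacitance proportional to the wetted height of the probe), the level computation reduces to interpolating each probe's reading between its dry and fully immersed capacitance. The Python sketch below is illustrative only; the capacitance values are assumptions, not data for any real probe.

```python
# Sketch of the capacitive fuel probe principle: with a linear probe,
# capacitance grows in proportion to the wetted height because fuel (higher
# permittivity than air) becomes the dielectric. Values below are assumed.

C_DRY = 100.0   # pF, probe fully in air/vapour (assumed)
C_WET = 180.0   # pF, probe fully immersed in fuel (assumed)

def wetted_fraction(c_measured_pf):
    """Linear probe: fraction of the probe height covered by fuel."""
    return (c_measured_pf - C_DRY) / (C_WET - C_DRY)

# A fuel computer sums contributions from many probes (around 30 on an A320)
# and can discard any probe whose reading is implausible (fault rejection).
readings = [140.0, 141.5, 100.0]    # last probe sits above the fuel surface
print([round(wetted_fraction(c), 2) for c in readings])
```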
Fuel gauge
[ "Technology" ]
1,119
[ "Vehicle parts", "Components" ]
4,160,792
https://en.wikipedia.org/wiki/Research%20Defence%20Society
The Research Defence Society was a British scientific society and lobby group founded by Stephen Paget in 1908 to fight against the anti-vivisectionist "enemies of reason" at the beginning of the 20th century. At the end of 2008, after being active for 100 years, it merged with the communications group Coalition for Medical Progress to form the advocacy group Understanding Animal Research. The Research Defence Society's aim was to disseminate information about, and to defend the use of, research involving animals, including animal testing. It represented the interests of 5,000 researchers and institutions. Its sources of funding changed over the hundred years that the society was active, and included individuals, government, the pharmaceutical industry and universities. The organisation's literature stated that it was funded by its members, including medical scientists, doctors, veterinarians, pharmaceutical companies, research institutes, universities, and charities that support medical research. Its last executive director was Dr. Simon Festing, who became CEO of Understanding Animal Research. One campaign to demonstrate the support for animal research within the scientific and medical community was the co-signing of a petition in support of the use of animals in research called the Declaration on Animals in Medical Research. The declaration was signed in 1990, and a modified version in 2005. Over 700 scientists, of whom 500 were British, signed the declaration in the first month, including three Nobel laureates, 190 Fellows of the Royal Society and the Medical Royal Colleges, and over 250 academic professors. Notes References Understanding Animal Research website (the RDS site no longer exists) Woods, Richard & Ungoed-Thomas, Jonathan. "A campaigning hero", Sunday Times, February 26, 2006 Animal testing in the United Kingdom Vivisection Organizations established in 1908
Research Defence Society
[ "Chemistry" ]
350
[ "Vivisection" ]
4,160,992
https://en.wikipedia.org/wiki/Invariant-based%20programming
Invariant-based programming is a programming methodology where specifications and invariants are written before the actual program statements. Writing down the invariants during the programming process has a number of advantages: it requires the programmer to make their intentions about the program behavior explicit before actually implementing it, and invariants can be evaluated dynamically during execution to catch common programming errors. Furthermore, if strong enough, invariants can be used to prove the correctness of the program based on the formal semantics of program statements. A combined programming and specification language, connected to a powerful formal proof system, will generally be required for full verification of non-trivial programs. In this case a high degree of automation of proofs is also possible. In most existing programming languages the main organizing structures are control flow blocks such as for loops, while loops and if statements. Such languages may not be ideal for invariants-first programming, since they force the programmer to make decisions about control flow before writing the invariants. Furthermore, most programming languages do not have good support for writing specifications and invariants, since they lack quantifier operators and one typically cannot express higher-order properties. The idea of developing the program together with its proof originated from E.W. Dijkstra. Actually writing invariants before program statements has been considered in a number of different forms by M.H. van Emden, J.C. Reynolds and R-J Back. See also Eiffel (programming language) References Formal methods Programming paradigms
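A minimal sketch of the style, written here in Python (which lacks the quantifier support mentioned above, so the invariant is expressed as an executable assertion rather than a full specification): the invariant is stated before the loop body is written and is evaluated dynamically on every iteration, and together with the exit condition it yields the postcondition.

```python
# Invariants-first sketch: the invariant is written down (and checked
# dynamically) before the loop body that maintains it.

def total(xs):
    i, acc = 0, 0
    while True:
        # Invariant, stated before the loop body was written:
        #   0 <= i <= len(xs)  and  acc == sum(xs[:i])
        assert 0 <= i <= len(xs) and acc == sum(xs[:i])
        if i == len(xs):
            return acc   # invariant + exit condition give acc == sum(xs)
        acc += xs[i]     # loop body chosen to re-establish the invariant
        i += 1

print(total([3, 1, 4, 1, 5]))  # -> 14
```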
Invariant-based programming
[ "Engineering" ]
303
[ "Software engineering", "Formal methods" ]
16,079,692
https://en.wikipedia.org/wiki/Sewage%20treatment
Sewage treatment (or domestic wastewater treatment, municipal wastewater treatment) is a type of wastewater treatment which aims to remove contaminants from sewage to produce an effluent that is suitable for discharge to the surrounding environment or an intended reuse application, thereby preventing water pollution from raw sewage discharges. Sewage contains wastewater from households and businesses and possibly pre-treated industrial wastewater. There is a large number of sewage treatment processes to choose from. These can range from decentralized systems (including on-site treatment systems) to large centralized systems involving a network of pipes and pump stations (called sewerage) which convey the sewage to a treatment plant. For cities that have a combined sewer, the sewers will also carry urban runoff (stormwater) to the sewage treatment plant. Sewage treatment often involves two main stages, called primary and secondary treatment, while advanced treatment also incorporates a tertiary treatment stage with polishing processes and nutrient removal. Secondary treatment can reduce the organic matter (measured as biological oxygen demand) in sewage using aerobic or anaerobic biological processes. A so-called quaternary treatment step (sometimes referred to as advanced treatment) can also be added for the removal of organic micropollutants, such as pharmaceuticals. This has been implemented at full scale, for example, in Sweden. A large number of sewage treatment technologies have been developed, mostly using biological treatment processes. Design engineers and decision makers need to take into account the technical and economic criteria of each alternative when choosing a suitable technology. Often, the main criteria for selection are: desired effluent quality, expected construction and operating costs, availability of land, energy requirements and sustainability aspects. In developing countries and in rural areas with low population densities, sewage is often treated by various on-site sanitation systems and not conveyed in sewers. These systems include septic tanks connected to drain fields, on-site sewage systems (OSS), vermifilter systems and many more. On the other hand, advanced and relatively expensive sewage treatment plants may include tertiary treatment with disinfection and possibly even a fourth treatment stage to remove micropollutants. At the global level, an estimated 52% of sewage is treated. However, sewage treatment rates are highly unequal across countries. For example, while high-income countries treat approximately 74% of their sewage, developing countries treat an average of just 4.2%. The treatment of sewage is part of the field of sanitation. Sanitation also includes the management of human waste and solid waste as well as stormwater (drainage) management. The term sewage treatment plant is often used interchangeably with the term wastewater treatment plant. Terminology The term sewage treatment plant (STP) (or sewage treatment works) is nowadays often replaced with the term wastewater treatment plant (WWTP). Strictly speaking, the latter is a broader term that can also refer to industrial wastewater treatment. The terms water recycling center and water reclamation plant are also in use as synonyms. Purposes and overview The overall aim of treating sewage is to produce an effluent that can be discharged to the environment while causing as little water pollution as possible, or to produce an effluent that can be reused in a useful manner. 
This is achieved by removing contaminants from the sewage. It is a form of waste management. With regard to biological treatment of sewage, the treatment objectives can include various degrees of the following: to transform or remove organic matter, nutrients (nitrogen and phosphorus), pathogenic organisms, and specific trace organic constituents (micropollutants). Some types of sewage treatment produce sewage sludge which can be treated before safe disposal or reuse. Under certain circumstances, the treated sewage sludge might be termed biosolids and can be used as a fertilizer. Sewage characteristics Collection Types of treatment processes Sewage can be treated close to where the sewage is created, which may be called a decentralized system or even an on-site system (on-site sewage facility, septic tanks, etc.). Alternatively, sewage can be collected and transported by a network of pipes and pump stations to a municipal treatment plant. This is called a centralized system (see also sewerage and pipes and infrastructure). A large number of sewage treatment technologies have been developed, mostly using biological treatment processes (see list of wastewater treatment technologies). Very broadly, they can be grouped into high tech (high cost) versus low tech (low cost) options, although some technologies might fall into either category. Another grouping classification is intensive or mechanized systems (more compact, and frequently employing high tech options) versus extensive or natural or nature-based systems (usually using natural treatment processes and occupying larger areas). This classification may sometimes be oversimplified, because a treatment plant may involve a combination of processes, and the interpretation of the concepts of high tech and low tech, intensive and extensive, mechanized and natural processes may vary from place to place. Low tech, extensive or nature-based processes Examples of more low-tech, often less expensive sewage treatment systems are shown below. They often use little or no energy. Some of these systems do not provide a high level of treatment, or only treat part of the sewage (for example only the toilet wastewater), or they only provide pre-treatment, like septic tanks. On the other hand, some systems are capable of providing a good performance, satisfactory for several applications. Many of these systems are based on natural treatment processes, requiring large areas, while others are more compact. In most cases, they are used in rural areas or in small to medium-sized communities. For example, waste stabilization ponds are a low cost treatment option with practically no energy requirements but they require a lot of land. Due to their technical simplicity, most of the savings (compared with high tech systems) are in terms of operation and maintenance costs. Anaerobic digester types and anaerobic digestion, for example: Upflow anaerobic sludge blanket reactor Septic tank Imhoff tank Constructed wetland (see also biofilters) Decentralized wastewater system Nature-based solutions On-site sewage facility Sand filter Vermi filter Waste stabilization pond with sub-types: e.g. 
Facultative ponds, high rate ponds, maturation ponds Examples of systems that can provide full or partial treatment for toilet wastewater only: Composting toilet (see also dry toilets in general) Urine-diverting dry toilet Vermifilter toilet High tech, intensive or mechanized processes Examples of more high-tech, intensive or mechanized, often relatively expensive sewage treatment systems are listed below. Some of them are energy intensive as well. Many of them provide a very high level of treatment. For example, broadly speaking, the activated sludge process achieves a high effluent quality but is relatively expensive and energy intensive. Activated sludge systems Aerobic treatment system Enhanced biological phosphorus removal Expanded granular sludge bed digestion Filtration Membrane bioreactor Moving bed biofilm reactor Rotating biological contactor Trickling filter Ultraviolet disinfection Disposal or treatment options There are other process options which may be classified as disposal options, although they can also be understood as basic treatment options. These include: application of sludge, irrigation, soak pit, leach field, fish pond, floating plant pond, water disposal/groundwater recharge, surface disposal and storage. The application of sewage to land is both a type of treatment and a type of final disposal. It leads to groundwater recharge and/or to evapotranspiration. Land application includes slow-rate systems, rapid infiltration, subsurface infiltration, and overland flow. It is done by flooding, furrows, sprinklers and drip application. It is a treatment/disposal system that requires a large amount of land per person. Design aspects Population equivalent The per person organic matter load is a parameter used in the design of sewage treatment plants. This concept is known as population equivalent (PE). The base value used for PE can vary from one country to another. Commonly used definitions worldwide are: 1 PE equates to 60 grams of BOD per person per day, and it also equals 200 liters of sewage per day. For example, under these definitions an industrial discharge of 6 kg of BOD per day in 10 m3 of wastewater corresponds to 100 PE in terms of organic load (6,000 g ÷ 60 g) but only 50 PE in terms of hydraulic load (10,000 L ÷ 200 L). This concept is also used as a comparison parameter to express the strength of industrial wastewater compared to sewage. Process selection When choosing a suitable sewage treatment process, decision makers need to take into account technical and economic criteria. Therefore, each analysis is site-specific. A life cycle assessment (LCA) can be used, and criteria or weightings are attributed to the various aspects. This makes the final decision subjective to some extent. A range of publications exist to help with technology selection. In industrialized countries, the most important parameters in process selection are typically efficiency, reliability, and space requirements. In developing countries, they might be different and the focus might be more on construction and operating costs as well as process simplicity. Choosing the most suitable treatment process is complicated and requires expert inputs, often in the form of feasibility studies. This is because the important factors to be considered when evaluating and selecting sewage treatment processes are numerous. 
They include: process applicability, applicable flow, acceptable flow variation, influent characteristics, inhibiting or refractory compounds, climatic aspects, process kinetics and reactor hydraulics, performance, treatment residuals, sludge processing, environmental constraints, requirements for chemical products, energy and other resources; requirements for personnel, operating and maintenance; ancillary processes, reliability, complexity, compatibility, and area availability. With regard to the environmental impacts of sewage treatment plants, the following aspects are included in the selection process: Odors, vector attraction, sludge transportation, sanitary risks, air contamination, soil and subsoil contamination, surface water pollution or groundwater contamination, devaluation of nearby areas, and inconvenience to the nearby population. Odor control Odors emitted by sewage treatment are typically an indication of an anaerobic or septic condition. Early stages of processing will tend to produce foul-smelling gases, with hydrogen sulfide being most common in generating complaints. Large process plants in urban areas will often treat the odors with carbon reactors, a contact media with bio-slimes, small doses of chlorine, or circulating fluids to biologically capture and metabolize the noxious gases. Other methods of odor control exist, including addition of iron salts, hydrogen peroxide, calcium nitrate, etc., to manage hydrogen sulfide levels. Energy requirements The energy requirements vary with the type of treatment process as well as the sewage strength. For example, constructed wetlands and stabilization ponds have low energy requirements. In comparison, the activated sludge process has a high energy consumption because it includes an aeration step. Some sewage treatment plants produce biogas from their sewage sludge treatment process by using a process called anaerobic digestion. This process can produce enough energy to meet most of the energy needs of the sewage treatment plant itself. For activated sludge treatment plants in the United States, around 30 percent of the annual operating costs is usually required for energy. Most of this electricity is used for aeration, pumping systems and equipment for the dewatering and drying of sewage sludge. Advanced sewage treatment plants, e.g. for nutrient removal, require more energy than plants that only achieve primary or secondary treatment. Small rural plants using trickling filters may operate with no net energy requirements, the whole process being driven by gravitational flow, including tipping bucket flow distribution and the desludging of settlement tanks to drying beds. This is usually only practical in hilly terrain and in areas where the treatment plant is relatively remote from housing because of the difficulty in managing odors. Co-treatment of industrial effluent In highly regulated developed countries, industrial wastewater usually receives at least pretreatment if not full treatment at the factories themselves to reduce the pollutant load, before discharge to the sewer. The pretreatment has two main aims: firstly, to prevent toxic or inhibitory compounds from entering the biological stage of the sewage treatment plant and reducing its efficiency; and secondly, to prevent toxic compounds from accumulating in the produced sewage sludge, which would reduce its beneficial reuse options. Some industrial wastewater may contain pollutants which cannot be removed by sewage treatment plants.
Also, variable flow of industrial waste associated with production cycles may upset the population dynamics of biological treatment units. Design aspects of secondary treatment processes Non-sewered areas Urban residents in many parts of the world rely on on-site sanitation systems without sewers, such as septic tanks and pit latrines, and fecal sludge management in these cities is an enormous challenge. For sewage treatment the use of septic tanks and other on-site sewage facilities (OSSF) is widespread in some rural areas, for example serving up to 20 percent of the homes in the U.S. Available process steps Sewage treatment often involves two main stages, called primary and secondary treatment, while advanced treatment also incorporates a tertiary treatment stage with polishing processes. Different types of sewage treatment may utilize some or all of the process steps listed below. Preliminary treatment Preliminary treatment (sometimes called pretreatment) removes coarse materials that can be easily collected from the raw sewage before they damage or clog the pumps and sewage lines of primary treatment clarifiers. Screening The influent sewage passes through a bar screen to remove all large objects like cans, rags, sticks, plastic packets, etc. carried in the sewage stream. This is most commonly done with an automated mechanically raked bar screen in modern plants serving large populations, while in smaller or less modern plants, a manually cleaned screen may be used. The raking action of a mechanical bar screen is typically paced according to the accumulation on the bar screens and/or the flow rate. The solids are collected and later disposed of in a landfill, or incinerated. Bar screens or mesh screens of varying sizes may be used to optimize solids removal. If gross solids are not removed, they become entrained in pipes and moving parts of the treatment plant, and can cause substantial damage and inefficiency in the process. Grit removal Grit consists of sand, gravel, rocks, and other heavy materials. Preliminary treatment may include a sand or grit removal channel or chamber, where the velocity of the incoming sewage is reduced to allow the settlement of grit. Grit removal is necessary to (1) reduce the formation of deposits in primary sedimentation tanks, aeration tanks, anaerobic digesters, pipes, channels, etc.; (2) reduce the frequency of tank cleaning caused by excessive accumulation of grit; and (3) protect moving mechanical equipment from abrasion and accompanying abnormal wear. The removal of grit is essential for equipment with closely machined metal surfaces such as comminutors, fine screens, centrifuges, heat exchangers, and high pressure diaphragm pumps. Grit chambers come in three types: horizontal grit chambers, aerated grit chambers, and vortex grit chambers. Vortex grit chambers include mechanically induced vortex, hydraulically induced vortex, and multi-tray vortex separators. Given that, traditionally, grit removal systems have been designed to remove clean inorganic particles that are greater than , most of the finer grit passes through the grit removal stage under normal conditions. During periods of high flow, deposited grit is resuspended and the quantity of grit reaching the treatment plant increases substantially. Flow equalization Equalization basins can be used to achieve flow equalization. This is especially useful for combined sewer systems, which produce peak dry-weather flows or peak wet-weather flows that are much higher than the average flows.
Such basins can improve the performance of the biological treatment processes and the secondary clarifiers. Disadvantages include the basins' capital cost and space requirements. Basins can also provide a place to temporarily hold, dilute and distribute batch discharges of toxic or high-strength wastewater which might otherwise inhibit biological secondary treatment (such as wastewater from portable toilets or fecal sludge that is brought to the sewage treatment plant in vacuum trucks). Flow equalization basins require variable discharge control, typically include provisions for bypass and cleaning, and may also include aerators and odor control. Fat and grease removal In some larger plants, fat and grease are removed by passing the sewage through a small tank where skimmers collect the fat floating on the surface. Air blowers in the base of the tank may also be used to help recover the fat as a froth. Many plants, however, use primary clarifiers with mechanical surface skimmers for fat and grease removal. Primary treatment Primary treatment is the "removal of a portion of the suspended solids and organic matter from the sewage". It consists of allowing sewage to pass slowly through a basin where heavy solids can settle to the bottom while oil, grease and lighter solids float to the surface and are skimmed off. These basins are called primary sedimentation tanks or primary clarifiers and typically have a hydraulic retention time (HRT) of 1.5 to 2.5 hours. The settled and floating materials are removed and the remaining liquid may be discharged or subjected to secondary treatment. Primary settling tanks are usually equipped with mechanically driven scrapers that continually drive the collected sludge towards a hopper in the base of the tank, from where it is pumped to sludge treatment facilities. Sewage treatment plants that are connected to a combined sewer system sometimes have a bypass arrangement after the primary treatment unit. This means that during very heavy rainfall events, the secondary and tertiary treatment systems can be bypassed to protect them from hydraulic overloading, and the mixture of sewage and storm-water receives primary treatment only. Primary sedimentation tanks remove about 50–70% of the suspended solids, and 25–40% of the biological oxygen demand (BOD). Secondary treatment The main processes involved in secondary sewage treatment are designed to remove as much of the solid material as possible. They use biological processes to digest and remove the remaining soluble material, especially the organic fraction. This can be done with either suspended-growth or biofilm processes. The microorganisms that feed on the organic matter present in the sewage grow and multiply, constituting the biological solids, or biomass. These grow and group together in the form of flocs or biofilms and, in some specific processes, as granules. The biological floc or biofilm and remaining fine solids form a sludge which can be settled and separated. After separation, a liquid remains that is almost free of solids, and with a greatly reduced concentration of pollutants. Secondary treatment can reduce organic matter (measured as biological oxygen demand) from sewage, using aerobic or anaerobic processes. The organisms involved in these processes are sensitive to the presence of toxic materials, although these are not expected to be present at high concentrations in typical municipal sewage.
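These figures combine in a straightforward way. The following Python snippet is a minimal illustrative sketch (not a design calculation): it converts the population-equivalent figures given earlier (60 g of BOD and 200 liters of sewage per person per day) into an influent concentration and then applies stage-removal fractions. The 30% primary removal is the mid-range of the 25–40% quoted above; the 85% secondary removal is an assumed illustrative value, not a figure from this article.

```python
# Illustrative sketch of BOD through a primary + secondary treatment train.
# Assumed values are marked; real plants are designed against effluent limits.

PE_BOD_G_PER_DAY = 60.0    # 1 population equivalent = 60 g BOD/person/day (from the text)
PE_FLOW_L_PER_DAY = 200.0  # ... and 200 L of sewage/person/day (from the text)

def concentration_mg_per_L(load_g_per_day: float, flow_L_per_day: float) -> float:
    """Convert a per-person mass load into a concentration for a given flow."""
    return load_g_per_day * 1000.0 / flow_L_per_day

influent_bod = concentration_mg_per_L(PE_BOD_G_PER_DAY, PE_FLOW_L_PER_DAY)  # 300 mg/L

primary_removal = 0.30    # mid-range of the 25-40% primary BOD removal quoted above
secondary_removal = 0.85  # assumed secondary-stage removal, illustrative only

after_primary = influent_bod * (1 - primary_removal)
after_secondary = after_primary * (1 - secondary_removal)

print(f"raw sewage BOD:  {influent_bod:.0f} mg/L")
print(f"after primary:   {after_primary:.0f} mg/L")
print(f"after secondary: {after_secondary:.0f} mg/L")
```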
Tertiary treatment Advanced sewage treatment generally involves three main stages, called primary, secondary and tertiary treatment, but may also include intermediate stages and final polishing processes. The purpose of tertiary treatment (also called advanced treatment) is to provide a final treatment stage to further improve the effluent quality before it is discharged to the receiving water body or reused. More than one tertiary treatment process may be used at any treatment plant. If disinfection is practiced, it is always the final process. It is also called effluent polishing. Tertiary treatment may include biological nutrient removal (alternatively, this can be classified as secondary treatment), disinfection and partial removal of micropollutants, such as environmental persistent pharmaceutical pollutants. Tertiary treatment is sometimes defined as anything more than primary and secondary treatment in order to allow discharge into a highly sensitive or fragile ecosystem such as estuaries, low-flow rivers or coral reefs. Treated water is sometimes disinfected chemically or physically (for example, by lagoons and microfiltration) prior to discharge into a stream, river, bay, lagoon or wetland, or it can be used for the irrigation of a golf course, greenway or park. If it is sufficiently clean, it can also be used for groundwater recharge or agricultural purposes. Sand filtration removes much of the residual suspended matter. Filtration over activated carbon, also called carbon adsorption, removes residual toxins. Microfiltration or synthetic membranes are used in membrane bioreactors and can also remove pathogens. Settlement and further biological improvement of treated sewage may be achieved through storage in large human-made ponds or lagoons. These lagoons are highly aerobic, and colonization by native macrophytes, especially reeds, is often encouraged. Disinfection Disinfection of treated sewage aims to kill pathogens (disease-causing microorganisms) prior to disposal. It is increasingly effective after more elements of the foregoing treatment sequence have been completed. The purpose of disinfection in the treatment of sewage is to substantially reduce the number of pathogens in the water to be discharged back into the environment or to be reused. The target level of reduction of biological contaminants like pathogens is often regulated by the presiding governmental authority. The effectiveness of disinfection depends on the quality of the water being treated (e.g. turbidity, pH, etc.), the type of disinfection being used, the disinfectant dosage (concentration and time), and other environmental variables. Water with high turbidity will be treated less successfully, since solid matter can shield organisms, especially from ultraviolet light, or if contact times are low. Generally, short contact times, low doses and high flows all militate against effective disinfection. Common methods of disinfection include ozone, chlorine, ultraviolet light, or sodium hypochlorite. Monochloramine, which is used for drinking water, is not used in the treatment of sewage because of its persistence. Chlorination remains the most common form of treated sewage disinfection in many countries due to its low cost and long-term history of effectiveness. One disadvantage is that chlorination of residual organic material can generate chlorinated-organic compounds that may be carcinogenic or harmful to the environment.
Residual chlorine or chloramines may also be capable of chlorinating organic material in the natural aquatic environment. Further, because residual chlorine is toxic to aquatic species, the treated effluent must also be chemically dechlorinated, adding to the complexity and cost of treatment. Ultraviolet (UV) light can be used instead of chlorine, iodine, or other chemicals. Because no chemicals are used, the treated water has no adverse effect on organisms that later consume it, as may be the case with other methods. UV radiation causes damage to the genetic structure of bacteria, viruses, and other pathogens, making them incapable of reproduction. The key disadvantages of UV disinfection are the need for frequent lamp maintenance and replacement and the need for a highly treated effluent to ensure that the target microorganisms are not shielded from the UV radiation (i.e., any solids present in the treated effluent may protect microorganisms from the UV light). In many countries, UV light is becoming the most common means of disinfection because of the concerns about the impacts of chlorine in chlorinating residual organics in the treated sewage and in chlorinating organics in the receiving water. As with UV treatment, heat sterilization also does not add chemicals to the water being treated. However, unlike UV, heat can penetrate liquids that are not transparent. Heat disinfection can also penetrate solid materials within the wastewater, sterilizing their contents. Thermal effluent decontamination systems provide low-resource, low-maintenance effluent decontamination once installed. Ozone (O3) is generated by passing oxygen (O2) through a high-voltage potential, resulting in a third oxygen atom becoming attached and forming O3. Ozone is very unstable and reactive and oxidizes most organic material it comes in contact with, thereby destroying many pathogenic microorganisms. Ozone is considered to be safer than chlorine because, unlike chlorine, which has to be stored on site (highly poisonous in the event of an accidental release), ozone is generated on-site as needed from the oxygen in the ambient air. Ozonation also produces fewer disinfection by-products than chlorination. A disadvantage of ozone disinfection is the high cost of the ozone generation equipment and the requirements for special operators. Ozone sewage treatment requires the use of an ozone generator, which decontaminates the water as ozone bubbles percolate through the tank. Membranes can also be effective disinfectants, because they act as barriers, preventing the passage of microorganisms. As a result, the final effluent may be devoid of pathogenic organisms, depending on the type of membrane used. This principle is applied in membrane bioreactors. Biological nutrient removal Sewage may contain high levels of the nutrients nitrogen and phosphorus. Typical values for nutrient loads per person and nutrient concentrations in raw sewage in developing countries have been published as follows: 8 g/person/d for total nitrogen (45 mg/L), 4.5 g/person/d for ammonia-N (25 mg/L) and 1.0 g/person/d for total phosphorus (7 mg/L). The typical ranges for these values are: 6–10 g/person/d for total nitrogen (35–60 mg/L), 3.5–6 g/person/d for ammonia-N (20–35 mg/L) and 0.7–2.5 g/person/d for total phosphorus (4–15 mg/L). Excessive release to the environment can lead to nutrient pollution, which can manifest itself in eutrophication. This process can lead to algal blooms: a rapid growth, and later decay, of the algal population.
In addition to causing deoxygenation, some algal species produce toxins that contaminate drinking water supplies. Ammonia nitrogen, in the form of free ammonia (NH3), is toxic to fish. Ammonia nitrogen, when converted to nitrite and further to nitrate in a water body in the process of nitrification, is associated with the consumption of dissolved oxygen. Nitrite and nitrate may also have public health significance if concentrations are high in drinking water, because of a disease called methemoglobinemia. Phosphorus removal is important as phosphorus is a limiting nutrient for algae growth in many fresh water systems. Therefore, an excess of phosphorus can lead to eutrophication. It is also particularly important for water reuse systems where high phosphorus concentrations may lead to fouling of downstream equipment such as reverse osmosis. A range of treatment processes are available to remove nitrogen and phosphorus. Biological nutrient removal (BNR) is regarded by some as a type of secondary treatment process, and by others as a tertiary (or advanced) treatment process. Nitrogen removal Nitrogen is removed through the biological oxidation of nitrogen from ammonia to nitrate (nitrification), followed by denitrification, the reduction of nitrate to nitrogen gas. Nitrogen gas is released to the atmosphere and thus removed from the water. Nitrification itself is a two-step aerobic process, each step facilitated by a different type of bacteria. The oxidation of ammonia (NH4+) to nitrite (NO2−) is most often facilitated by bacteria such as Nitrosomonas spp. (nitroso refers to the formation of a nitroso functional group). Nitrite oxidation to nitrate (NO3−), though traditionally believed to be facilitated by Nitrobacter spp. (nitro referring to the formation of a nitro functional group), is now known to be facilitated in the environment predominantly by Nitrospira spp. Denitrification requires anoxic conditions to encourage the appropriate biological communities to form. Anoxic conditions refer to a situation where oxygen is absent but nitrate is present. Denitrification is facilitated by a wide diversity of bacteria. The activated sludge process, sand filters, waste stabilization ponds, constructed wetlands and other processes can all be used to reduce nitrogen. Since denitrification is the reduction of nitrate to dinitrogen (molecular nitrogen) gas, an electron donor is needed. This can be, depending on the wastewater, organic matter (from the sewage itself), sulfide, or an added donor like methanol. The sludge in the anoxic tanks (denitrification tanks) must be mixed well (a mixture of recirculated mixed liquor, return activated sludge, and raw influent), e.g. by using submersible mixers, in order to achieve the desired denitrification. Over time, different treatment configurations for activated sludge processes have evolved to achieve high levels of nitrogen removal. An initial scheme was called the Ludzack–Ettinger Process. It could not achieve a high level of denitrification. The Modified Ludzack–Ettinger Process (MLE) came later and was an improvement on the original concept. It recycles mixed liquor from the discharge end of the aeration tank to the head of the anoxic tank. This provides nitrate for the facultative bacteria. There are other process configurations, such as variations of the Bardenpho process. They might differ in the placement of anoxic tanks, e.g. before and after the aeration tanks.
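Because nitrification oxidizes ammonia with dissolved oxygen, its aeration burden can be estimated from stoichiometry. The sketch below is a rough, assumption-laden estimate: it uses the overall reaction NH4+ + 2 O2 → NO3− + H2O + 2 H+ (about 4.57 g of O2 per g of ammonia nitrogen) together with the per-person ammonia-N loads quoted above, and it ignores nitrogen assimilated into biomass and any oxygen credit recovered through denitrification.

```python
# Sketch of the oxygen demand of complete nitrification (NH4+ -> NO2- -> NO3-).
# Overall stoichiometry NH4+ + 2 O2 -> NO3- + H2O + 2 H+ implies roughly
# 2 * 32 / 14 = 4.57 g of O2 per g of ammonia nitrogen oxidised.

O2_PER_G_NH4_N = 2 * 32.0 / 14.0  # ~4.57 g O2 / g N

def nitrification_o2_demand(ammonia_n_g_per_day: float) -> float:
    """Daily oxygen demand (g O2/day) to fully nitrify an ammonia-N load."""
    return ammonia_n_g_per_day * O2_PER_G_NH4_N

# Typical ammonia-N loads quoted in the text: 3.5-6 g/person/day
for load in (3.5, 4.5, 6.0):
    print(f"{load} g N/person/day -> {nitrification_o2_demand(load):.1f} g O2/person/day")
```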
Phosphorus removal Studies of United States sewage in the late 1960s estimated mean per capita contributions of in urine and feces, in synthetic detergents, and lesser variable amounts used as corrosion and scale control chemicals in water supplies. Source control via alternative detergent formulations has subsequently reduced the largest contribution, but naturally the phosphorus content of urine and feces remained unchanged. Phosphorus can be removed biologically in a process called enhanced biological phosphorus removal. In this process, specific bacteria, called polyphosphate-accumulating organisms (PAOs), are selectively enriched and accumulate large quantities of phosphorus within their cells (up to 20 percent of their mass). Phosphorus removal can also be achieved by chemical precipitation, usually with salts of iron (e.g. ferric chloride) or aluminum (e.g. alum), or lime. This may lead to a higher sludge production as hydroxides precipitate, and the added chemicals can be expensive. Chemical phosphorus removal requires a significantly smaller equipment footprint than biological removal, is easier to operate and is often more reliable than biological phosphorus removal. Another method for phosphorus removal is to use granular laterite or zeolite. Some systems use both biological phosphorus removal and chemical phosphorus removal. The chemical phosphorus removal in those systems may be used as a backup system, for use when the biological phosphorus removal is not removing enough phosphorus, or may be used continuously. In either case, using both biological and chemical phosphorus removal has the advantage of not increasing sludge production as much as chemical phosphorus removal on its own, with the disadvantage of the increased initial cost associated with installing two different systems. Once removed, phosphorus, in the form of a phosphate-rich sewage sludge, may be sent to landfill or used as fertilizer in admixture with other digested sewage sludges. In the latter case, the treated sewage sludge is also sometimes referred to as biosolids. 22% of the world's phosphorus needs could be satisfied by recycling residential wastewater. Fourth treatment stage Micropollutants such as pharmaceuticals, ingredients of household chemicals, chemicals used in small businesses or industries, environmental persistent pharmaceutical pollutants (EPPP) or pesticides may not be eliminated in the commonly used sewage treatment processes (primary, secondary and tertiary treatment) and therefore lead to water pollution. Although concentrations of those substances and their decomposition products are quite low, there is still a chance of harming aquatic organisms. For pharmaceuticals, the following substances have been identified as toxicologically relevant: substances with endocrine disrupting effects, genotoxic substances and substances that enhance the development of bacterial resistance. They mainly belong to the group of EPPP. Techniques for the elimination of micropollutants via a fourth treatment stage during sewage treatment are implemented in Germany, Switzerland, Sweden and the Netherlands, and tests are ongoing in several other countries. In Switzerland it has been enshrined in law since 2016. Since 1 January 2025, there has been a recast of the Urban Waste Water Treatment Directive in the European Union.
Due to the large number of amendments that have now been made, the directive was rewritten on November 27, 2024 as Directive (EU) 2024/3019, published in the EU Official Journal on December 12, and entered into force on January 1, 2025. The member states now have 31 months, i.e. until July 31, 2027, to adapt their national legislation to the new directive ("implementation of the directive"). The amendment stipulates that, in addition to stricter discharge values for nitrogen and phosphorus, persistent trace substances must at least be partially separated. The target, similar to Switzerland's, is that at least 80% of 6 key substances out of 12 must be removed between discharge into the sewage treatment plant and discharge into the water body. At least 80% of the investments and operating costs for the fourth treatment stage will be passed on to the pharmaceutical and cosmetics industry according to the polluter pays principle, in order to relieve the population financially and provide an incentive for the development of more environmentally friendly products. In addition, the municipal wastewater treatment sector is to be energy neutral by 2045 and the emission of microplastics and PFAS is to be monitored. The implementation of the framework guidelines is staggered until 2045, depending on the size of the sewage treatment plant and its population equivalents (PE). Sewage treatment plants with over 150,000 PE have priority and should be adapted immediately, as a significant proportion of the pollution comes from them. The adjustments are staggered at national level in: 20% of the plants by 31 December 2033, 60% of the plants by 31 December 2039, 100% of the plants by 31 December 2045. Wastewater treatment plants with 10,000 to 150,000 PE that discharge into coastal waters or sensitive waters are staggered at national level in: 10% of the plants by 31 December 2033, 30% of the plants by 31 December 2036, 60% of the plants by 31 December 2039, 100% of the plants by 31 December 2045. The latter concerns waters with a low dilution ratio, waters from which drinking water is obtained, and those that are coastal waters, or those used as bathing waters or for mussel farming. Member States will be given the option not to apply the fourth treatment stage in these areas if a risk assessment shows that there is no potential risk from micropollutants to human health and/or the environment. Such process steps mainly consist of activated carbon filters that adsorb the micropollutants. The combination of advanced oxidation with ozone followed by granular activated carbon (GAC) has been suggested as a cost-effective treatment combination for pharmaceutical residues. For a full reduction of microplastics, the combination of ultrafiltration followed by GAC has been suggested. The use of enzymes such as laccase, secreted by fungi, is also under investigation. Microbial fuel cells are being investigated for their ability to treat organic matter in sewage. To reduce pharmaceuticals in water bodies, source control measures are also under investigation, such as innovations in drug development or more responsible handling of drugs. In the US, the National Take Back Initiative is a voluntary program with the general public, encouraging people to return excess or expired drugs, and to avoid flushing them into the sewage system.
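A simplified sketch of the kind of compliance check the recast directive implies is shown below. The pass/fail rule used here (at least 80% removal for at least 6 indicator substances between plant inlet and discharge) is a simplification of the directive's actual averaging rules, and the substance names and concentrations are hypothetical placeholders rather than the official indicator list.

```python
# Simplified sketch of a fourth-treatment-stage removal check, loosely
# modeled on the 80%-of-6-of-12-substances target described above.
# Substance names and monitoring data below are hypothetical placeholders.

def removal_fraction(inlet_ug_per_L: float, outlet_ug_per_L: float) -> float:
    """Fraction of a substance removed between inlet and outlet."""
    return 1.0 - outlet_ug_per_L / inlet_ug_per_L

def meets_fourth_stage_target(samples: dict[str, tuple[float, float]],
                              required: int = 6, threshold: float = 0.80) -> bool:
    """True if at least `required` substances reach `threshold` removal."""
    passing = [name for name, (inlet, outlet) in samples.items()
               if removal_fraction(inlet, outlet) >= threshold]
    return len(passing) >= required

# Hypothetical monitoring data: substance -> (inlet, outlet) in ug/L
data = {
    "diclofenac":     (1.20, 0.15),
    "carbamazepine":  (0.90, 0.30),
    "metoprolol":     (1.50, 0.20),
    "benzotriazole":  (4.00, 0.60),
    "clarithromycin": (0.40, 0.05),
    "candesartan":    (0.80, 0.12),
    "irbesartan":     (0.60, 0.10),
}
print(meets_fourth_stage_target(data))  # True: 6 of the 7 reach 80% removal
```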
Sludge treatment and disposal Environmental impacts Sewage treatment plants can have significant effects on the biotic status of receiving waters and can cause some water pollution, especially if the treatment process used is only basic. For example, for sewage treatment plants without nutrient removal, eutrophication of receiving water bodies can be a problem. In 2024, the Royal Academy of Engineering released a study into the effects of wastewater on public health in the United Kingdom. The study gained media attention, with comments from the UK's leading health professionals, including Sir Chris Whitty. It outlined 15 recommendations for various UK bodies to dramatically reduce public health risks by improving the water quality of the country's waterways, such as rivers and lakes. After the release of the report, The Guardian newspaper interviewed Whitty, who stated that improving water quality and sewage treatment should be of high importance and a "public health priority". He compared it to the eradication of cholera in the country in the 19th century following improvements to the sewage network. The study also identified that sewage concentrations in rivers were high during periods of low flow, as well as during flooding or heavy rainfall. While heavy rainfall had always been associated with sewage overflows into streams and rivers, the British media went as far as to warn parents of the dangers of paddling in shallow rivers during warm weather. Whitty's comments came after the study revealed that the UK was experiencing a growth in the number of people using coastal and inland waters recreationally. This could be connected to a growing interest in activities such as open water swimming or other water sports. Despite this growth in recreation, poor water quality meant some participants were becoming unwell during events. Most notably, the 2024 Paris Olympics had to delay numerous swimming-focused events like the triathlon due to high levels of sewage in the River Seine. Reuse Irrigation Increasingly, people use treated or even untreated sewage for irrigation to produce crops. Cities provide lucrative markets for fresh produce, so are attractive to farmers. Because agriculture has to compete for increasingly scarce water resources with industry and municipal users, there is often no alternative for farmers but to use water polluted with sewage directly to water their crops. There can be significant health hazards related to using water loaded with pathogens in this way. The World Health Organization developed guidelines for the safe use of wastewater in 2006. They advocate a 'multiple-barrier' approach to wastewater use, where farmers are encouraged to adopt various risk-reducing behaviors. These include ceasing irrigation a few days before harvesting to allow pathogens to die off in the sunlight, applying water carefully so it does not contaminate leaves likely to be eaten raw, cleaning vegetables with disinfectant, or allowing fecal sludge used in farming to dry before being used as manure. Reclaimed water Global situation Before the 20th century in Europe, sewers usually discharged into a body of water such as a river, lake, or ocean. There was no treatment, so the breakdown of the human waste was left to the ecosystem. This could lead to satisfactory results if the assimilative capacity of the ecosystem is sufficient, which nowadays is often not the case due to increasing population density.
Today, the situation in urban areas of industrialized countries is usually that sewers route their contents to a sewage treatment plant rather than directly to a body of water. In many developing countries, however, the bulk of municipal and industrial wastewater is discharged to rivers and the ocean without any treatment or after preliminary treatment or primary treatment only. Doing so can lead to water pollution. Few reliable figures exist on the share of the wastewater collected in sewers that is being treated worldwide. A global estimate by UNDP and UN-Habitat in 2010 was that 90% of all wastewater generated is released into the environment untreated. A more recent study in 2021 estimated that globally, about 52% of sewage is treated. However, sewage treatment rates are highly unequal for different countries around the world. For example, while high-income countries treat approximately 74% of their sewage, developing countries treat an average of just 4.2%. As of 2022, more than 80% of all wastewater generated globally is released into the environment without sufficient treatment. High-income nations treat, on average, 70% of the wastewater they produce, according to UN Water. Only 8% of wastewater produced in low-income nations receives any sort of treatment. The Joint Monitoring Programme (JMP) for Water Supply and Sanitation by WHO and UNICEF reported in 2021 that 82% of people with sewer connections are connected to sewage treatment plants providing at least secondary treatment. However, this value varies widely between regions. For example, in Europe, North America, Northern Africa and Western Asia, a total of 31 countries had universal (>99%) wastewater treatment. However, in Albania, Bermuda, North Macedonia and Serbia "less than 50% of sewered wastewater received secondary or better treatment", and in Algeria, Lebanon and Libya less than 20% of sewered wastewater was being treated. The report also found that "globally, 594 million people have sewer connections that don't receive sufficient treatment. Many more are connected to wastewater treatment plants that do not provide effective treatment or comply with effluent requirements." Global targets Sustainable Development Goal 6 has a Target 6.3 which is formulated as follows: "By 2030, improve water quality by reducing pollution, eliminating dumping and minimizing release of hazardous chemicals and materials, halving the proportion of untreated wastewater and substantially increasing recycling and safe reuse globally." The corresponding Indicator 6.3.1 is the "proportion of wastewater safely treated". It is anticipated that wastewater production will rise by 24% by 2030 and by 51% by 2050. Data in 2020 showed that there is still too much uncollected household wastewater: only 66% of all household wastewater flows were collected at treatment facilities in 2020 (this is determined from data from 128 countries). Based on data from 42 countries in 2015, the report stated that "32 per cent of all wastewater flows generated from point sources received at least some treatment". For sewage that has indeed been collected at centralized sewage treatment plants, about 79% went on to be safely treated in 2020.
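As a back-of-envelope illustration of how these indicator figures combine: if about 66% of household wastewater flows are collected at treatment facilities and about 79% of the collected flows are safely treated, the overall safely treated share is roughly their product, which is consistent with the ~52% global estimate quoted above. A one-line sketch:

```python
# Back-of-envelope combination of the 2020 figures quoted in the text.
collected = 0.66              # share of household wastewater collected
treated_of_collected = 0.79   # share of collected flows safely treated

overall_safely_treated = collected * treated_of_collected
print(f"{overall_safely_treated:.0%}")  # ~52%, in line with the 2021 global estimate
```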
History The history of sewage treatment saw the following developments: it began with land application (sewage farms) in the 1840s in England, followed by chemical treatment and sedimentation of sewage in tanks, then biological treatment in the late 19th century, which led to the development of the activated sludge process starting in 1912. Regulations In most countries, sewage collection and treatment are subject to local and national regulations and standards. Country Examples Overview Europe In the European Union, 0.8% of total energy consumption goes to wastewater treatment facilities. The European Union needs to make extra investments of €90 billion in the water and waste sector to meet its 2030 climate and energy goals. In October 2021, British Members of Parliament voted to continue allowing untreated sewage from combined sewer overflows to be released into waterways. Asia India The Delhi Jal Board (DJB) is currently overseeing the construction of the largest sewage treatment plant in India. It will be operational by the end of 2022 with an estimated capacity of 564 MLD. It is supposed to resolve the existing situation wherein untreated sewage water is being discharged directly into the Yamuna river. Japan Africa Libya Americas United States More information Decentralized wastewater system List of largest wastewater treatment plants List of water supply and sanitation by country Nutrient Recovery and Reuse: producing agricultural nutrients from sewage Organisms involved in water purification Sanitary engineering Waste disposal References External links Water Environment Federation – Professional association focusing on municipal wastewater treatment Environmental engineering Pollution control technologies Sanitation Treatment Sewerage infrastructure Water pollution
Sewage treatment
[ "Chemistry", "Engineering", "Environmental_science" ]
8,920
[ "Water treatment", "Chemical engineering", "Sewerage infrastructure", "Pollution control technologies", "Water pollution", "Sewerage", "Civil engineering", "Environmental engineering" ]
16,080,632
https://en.wikipedia.org/wiki/Bedford%20Research%20Foundation
Bedford Research Foundation is a non-profit institute that conducts stem cell research for diseases and conditions that currently have no known cure. The institute also created the Special Program of Assisted Reproduction (SPAR), a program that assists serodiscordant couples in successfully achieving pregnancy. Dr. Ann Kiessling, the founder of Bedford Stem Cell Research Foundation, is the Laboratory Director. Background Bedford Research Foundation was founded to satisfy the need for a research and development clinical laboratory that could facilitate technology transfer from basic science discoveries to clinical test applications. BRF was founded and incorporated in 1996 by Dr. Ann Kiessling, through the efforts of men and women whose lives were altered by blood products tainted with the AIDS virus (Human Immunodeficiency Virus, HIV) and the Hepatitis C virus. Faced with unprecedented disease obstacles, the men and women insisted that biomedical technology be developed to fight their infections and allow them to conceive children of their own. Research to ensure the safety of conception by assisted reproductive technologies in general was not funded by the National Institutes of Health because of the U.S. Congress decisions in 1996 and 1998 that research on fertilized human eggs "...is meritorious and should be done for society..., but will not be funded by taxpayer dollars." The Foundation conducts research within its own laboratories (Stem Cell, Prostate, Infectious disease) as well as in collaboration with other laboratories, and raises money to award research grants to qualified investigators seeking to improve the safety and success of assisted reproduction for mothers and babies. Much of the research supported by the Foundation cannot be funded by federal grants-in-aid because of the U.S. moratorium on funding research on human eggs activated either artificially or by sperm. For this reason, the men and women themselves raised the money to fund the Special Program of Assisted Reproduction (SPAR). Within two years, technology was developed to protect against virus transmission at conception. As a result, Baby Ryan was born in 1999 to a healthy mother and a father with hemophilia who had been infected with Hepatitis C and HIV by tainted blood factors. In conjunction with stem cell research, Foundation scientists also apply patented processes to help diagnose male reproductive tract disorders. Research done at the Foundation has led to the development of additional tests that may provide valuable information about overall men's health. A current focus is the detection of bacteria in semen by molecular biology methods instead of standard laboratory culture. Studies to date reveal that semen contains bacteria not previously identified. Such studies hold the promise of developing new tests for the health of semen-producing organs such as the prostate, which is a site of significant disease in men, including infection (prostatitis) and cancer. SARS2 (Coronavirus) Testing On April 10, 2020, it was reported that Bedford Research Foundation had expanded its operations to include SARS2 testing, making it one of 66 sites in the United States with a Food and Drug Administration-approved test for COVID-19. The lab began testing samples from Sturdy Hospital in Attleboro and Emerson in Concord. On April 21, 2020, Bedford Research Foundation piloted a program to expand its SARS2 (Coronavirus) testing to the public. The test was well-received and successful.
The foundation is currently making plans to expand the program. References External links Bedford Research Foundation Embryology Obstetrics and gynaecology organizations HIV/AIDS research organisations Non-profit organizations based in Massachusetts Stem cell research Medical and health organizations based in Massachusetts HIV/AIDS organizations in the United States
Bedford Research Foundation
[ "Chemistry", "Biology" ]
718
[ "Translational medicine", "Tissue engineering", "Stem cell research" ]
16,080,752
https://en.wikipedia.org/wiki/Certified%20wireless%20security%20professional
The Certified Wireless Security Professional (CWSP) is an advanced-level certification that measures the ability to secure any wireless network. A wide range of security topics focusing on 802.11 wireless LAN technology are covered in the coursework and exam, which is vendor neutral. Certification track The CWSP certification is awarded to candidates who pass the CWSP exam and who also hold the CWNA certification. The CWNA certification is a prerequisite to earning the CWSP certification. CWSP requirements This certification covers a wide range of security areas. These include detecting attacks, wireless analysis, policy, monitoring and solutions. Recertification The CWSP certification is valid for three years. The certification may be renewed by retaking the CWSP exam or by advancing to CWNE, which is also valid for three years. See also Professional certification (Computer technology) References External links Official CWNP Site Wireless networking Professional titles and certifications Information technology qualifications
Certified wireless security professional
[ "Technology", "Engineering" ]
188
[ "Wireless networking", "Computer occupations", "Computer networks engineering", "Information technology qualifications" ]
16,080,899
https://en.wikipedia.org/wiki/Ariel%205
Ariel 5 (or UK 5) was a joint British and American space telescope dedicated to observing the sky in the X-ray band. It was launched on 15 October 1974 from the San Marco platform in the Indian Ocean and operated until 1980. It was the penultimate satellite to be launched as part of the Ariel programme. Background Ariel 5 was the fifth and penultimate satellite of the joint British and American Ariel programme. It was the third satellite in the series built entirely in the UK. It was named UK 5 before launch and renamed Ariel 5 after the successful launch. Plans for Ariel 5 were first discussed between the UK and US in May 1967 at the Ariel 3 launch. The Science Research Council (SRC) advertised a request for proposals for experiments in June. Experiments were formally proposed to NASA in July 1968. Satellite design Development Marconi Space and Defence Systems (MSDS) in Portsmouth was selected as the prime contractor in 1969. SRC had them select MSDS Frimley for the attitude control system (ACS) and MSDS Stanmore for the core stores. A study was performed to see if the Scout rocket's heat shield could be enlarged to accommodate larger experiments for this mission. A larger heat shield was designed, which allowed for a US experiment and five British experiments. Operation Ariel 5 was spin-stabilized and improved on the attitude control of Ariel 4. It used liquid propane that was expanded through a reducing valve and heated at the bulk tank temperature. Power was derived from solar cells that were mounted on 7/8 of the circumference of the spacecraft. Energy was stored in a 3.0 Ah Ni-Cd battery. Sensors The all-sky monitor (ASM) consisted of two one-dimensional pinhole cameras that scanned most of the sky every spacecraft revolution. The angular resolution was 10° × 10°, with an effective area of , and a bandpass of 3–6 keV. The ASM was designed to fit a resource budget of , 1 bit per second, and 1 W. The sky survey instrument (SSI) had an angular resolution of 0.75 × 10.6°, with an effective area of , and a bandpass of 2–20 keV. Mission Launch Launch operations took six weeks, starting from the time the Guppy took off from Thorney Island. The satellite was launched on 15 October 1974 from the San Marco platform in the Indian Ocean off the coast of Kenya. Operations The satellite was controlled via a mission control centre at the Appleton Lab. It spun at over 10 revolutions per minute. Ariel 5 operated until 1980. Results Over 100 scientific papers were published within four years of the launch. Notes References Further reading Space telescopes Space programme of the United Kingdom 1974 in spaceflight Satellites formerly orbiting Earth Spacecraft launched in 1974
Ariel 5
[ "Astronomy" ]
560
[ "Space telescopes" ]
16,081,079
https://en.wikipedia.org/wiki/Population%20study
Population study is an interdisciplinary field of scientific study that uses various statistical methods and models to analyse, determine, address, and predict population challenges and trends, drawing on data collected through methods such as population censuses, registration systems, sampling, and other data sources. In the various fields of healthcare, a population study is a study of a group of individuals taken from the general population who share a common characteristic, such as age, sex, or health condition. This group may be studied for different reasons, such as their response to a drug or risk of getting a disease. See also Demography References External links Population study entry in the public domain NCI Dictionary of Cancer Terms Clinical research Epidemiology
Population study
[ "Environmental_science" ]
146
[ "Epidemiology", "Environmental social science" ]
16,081,202
https://en.wikipedia.org/wiki/String%20graph
In graph theory, a string graph is an intersection graph of curves in the plane; each curve is called a "string". Given a graph G, G is a string graph if and only if there exists a set of curves, or strings, such that the graph having a vertex for each curve and an edge for each intersecting pair of curves is isomorphic to G. Background Benzer (1959) described a concept similar to string graphs as it applied to genetic structures. In that context, he also posed the specific case of intersecting intervals on a line, namely the now classical family of interval graphs. Later, Sinden (1966) applied the same idea to electrical networks and printed circuits. The mathematical study of string graphs began with the paper of Ehrlich, Even, and Tarjan (1976) and through a collaboration between Sinden and Ronald Graham, where the characterization of string graphs eventually came to be posed as an open question at the 5th Hungarian Colloquium on Combinatorics in 1976. However, the recognition of string graphs was eventually proven to be NP-complete, implying that no simple characterization is likely to exist. Related graph classes Every planar graph is a string graph: one may form a string graph representation of an arbitrary plane-embedded graph by drawing a string for each vertex that loops around the vertex and around the midpoint of each adjacent edge, as shown in the figure. For any edge uv of the graph, the strings for u and v cross each other twice near the midpoint of uv, and there are no other crossings, so the pairs of strings that cross represent exactly the adjacent pairs of vertices of the original planar graph. Alternatively, by the circle packing theorem, any planar graph may be represented as a collection of circles, any two of which cross if and only if the corresponding vertices are adjacent; these circles (with a starting and ending point chosen to turn them into open curves) provide a string graph representation of the given planar graph. It has also been proved that every planar graph has a string representation in which each pair of strings has at most one crossing point, unlike the representations described above. Scheinerman's conjecture, now proven, is the even stronger statement that every planar graph may be represented by the intersection graph of straight line segments, a very special case of strings. If every edge of a given graph G is subdivided, the resulting graph is a string graph if and only if G is planar. In particular, the subdivision of the complete graph K5 shown in the illustration is not a string graph, because K5 is not planar. Every circle graph, as an intersection graph of line segments (the chords of a circle), is also a string graph. Every chordal graph may be represented as a string graph: chordal graphs are intersection graphs of subtrees of trees, and one may form a string representation of a chordal graph by forming a planar embedding of the corresponding tree and replacing each subtree by a string that traces around the subtree's edges. The complement graph of every comparability graph is also a string graph. Other results Computing the chromatic number of string graphs has been shown to be NP-hard. String graphs form an induced-minor-closed class, but not a minor-closed class of graphs. Every m-edge string graph can be partitioned into two subsets, each a constant fraction of the size of the whole graph, by the removal of O(m^(3/4) log^(1/2) m) vertices. It follows that the biclique-free string graphs, string graphs containing no Kt,t subgraph for some constant t, have O(n) edges and, more strongly, have polynomial expansion.
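The definition can be made concrete computationally. The following sketch (an illustration written for this article, not drawn from the references) represents each string as a polyline and builds the intersection graph using a standard orientation-based segment-crossing test; the example curves are chosen so that the resulting string graph is a path on three vertices.

```python
# Sketch: build the string graph (intersection graph) of curves given as polylines.

from itertools import combinations

def orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p): +1, 0, or -1."""
    v = (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])
    return (v > 0) - (v < 0)

def on_segment(p, q, r):
    """True if a point r collinear with p and q lies on segment pq."""
    return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0])
            and min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))

def segments_cross(a, b, c, d):
    """True if closed segments ab and cd share at least one point."""
    o1, o2 = orient(a, b, c), orient(a, b, d)
    o3, o4 = orient(c, d, a), orient(c, d, b)
    if o1 != o2 and o3 != o4:
        return True
    # Collinear / endpoint-touching special cases
    return ((o1 == 0 and on_segment(a, b, c)) or (o2 == 0 and on_segment(a, b, d))
            or (o3 == 0 and on_segment(c, d, a)) or (o4 == 0 and on_segment(c, d, b)))

def curves_intersect(curve1, curve2):
    """Curves are polylines: lists of 2D points."""
    return any(segments_cross(curve1[i], curve1[i+1], curve2[j], curve2[j+1])
               for i in range(len(curve1) - 1)
               for j in range(len(curve2) - 1))

def string_graph(curves):
    """Edge set of the intersection graph of the given polylines."""
    return {(i, j) for i, j in combinations(range(len(curves)), 2)
            if curves_intersect(curves[i], curves[j])}

# Three strings: 0 crosses 1, and 1 crosses 2, but 0 and 2 are disjoint,
# so the string graph is a path on three vertices.
strings = [
    [(0, 0), (2, 2)],
    [(0, 2), (2, 0), (4, 2)],
    [(5, 0), (3, 2)],
]
print(string_graph(strings))  # {(0, 1), (1, 2)}
```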
Notes References Topological graph theory Intersection classes of graphs NP-complete problems
String graph
[ "Mathematics" ]
755
[ "Graph theory", "Computational problems", "Topology", "Mathematical relations", "Mathematical problems", "Topological graph theory", "NP-complete problems" ]
16,081,289
https://en.wikipedia.org/wiki/Nonspecific%20immune%20cell
A non-specific immune cell is an immune cell (such as a macrophage, neutrophil, or dendritic cell) that responds to many antigens, not just one antigen. Non-specific immune cells function in the first line of defense against infection or injury. The innate immune system is always present at the site of infection and ready to fight the bacteria; it can also be referred to as the "natural" immune system. The cells of the innate immune system do not have specific responses and respond to each foreign invader using the same mechanism. The innate immune system There are two categories to which parts of the immune system are assigned: the non-specific, or innate, immune system and the adaptive immune system. The non-specific response is a generalized response to pathogen infections involving the use of several white blood cells and plasma proteins. Non-specific immunity, or innate immunity, is the immune system with which a person is born, made up of phagocytes and barriers. Phagocytosis, derived from the Greek phagein, meaning "to eat", kytos, meaning "cell", and -osis, meaning "process", was first described by Élie Metchnikoff, who won the Nobel Prize for this work in 1908. Phagocytosis involves the internalization of solids, such as bacteria, by an organism. Macrophages, neutrophils, and dendritic cells are all cells of the innate immune system that utilize phagocytosis and are equipped with Toll-like receptors (TLR). Toll-like receptors are present on each of these cells and recognize a variety of microbial products, resulting in the induction of more specific immune responses. When a phagocytic cell engulfs bacteria, a phagosome is formed around it and the entire complex is ultimately trafficked to the lysosome for degradation. These cells that participate in the non-specific immune system response do not differentiate between types of microorganisms but do have the ability to discern between what is self and what is non-self. The cells of this system are known as non-specific immune cells. Cells of the innate immune system Neutrophils are a type of phagocyte, abundant in blood, that phagocytize pathogens in acute inflammation. Neutrophils, along with eosinophils and basophils, make up the category of granulocytes. Macrophages, which develop from monocytes, will phagocytize a wide range of molecules. Dendritic cells are tree-like cells that bind antigens and alert the lymphocytes of infection, essentially directing T cells to make an immune response. Complement proteins are proteins that play a role in the non-specific immune responses alongside these non-specific immune cells to make up the first line of immune defense. The non-specific immune response is an immediate antigen-independent response; however, it is not antigen-specific. Non-specific immunity results in no immunologic memory. There are mechanical, chemical, and biological factors affecting the effectiveness and results of the non-specific immune response. These factors include the epithelial surfaces forming a physical barrier, fatty acids that inhibit the growth of bacteria, and the microflora of the gastrointestinal tract serving to prevent the colonization of pathogenic bacteria. The non-specific immune system involves cells to which antigens are not specific in regards to fighting infection. The non-specific immune cells mentioned above (macrophages, neutrophils, and dendritic cells) will be discussed regarding their immediate response to infection.
Macrophages Macrophages display a plasticity that allows them to respond to numerous types of infections, permitting them to change their physiology while serving as a common "janitorial cell" of the immune system. Macrophages are produced through the differentiation of monocytes and, after ingestion of bacteria, secrete enzymes to destroy the ingested particle. These cells reside in every tissue of the body and, upon infection, are recruited to the affected tissue. Once recruited, macrophages will differentiate into specific tissue macrophages. The receptors of macrophages have a broad specificity that allows them to discern between self and non-self in the non-specific recognition of foreign substances. There are type I and type II receptors present on macrophages, which are trimeric membrane glycoproteins each containing an NH2-terminal intracellular domain, an extracellular domain with a spacer region, and an alpha-helical domain. Unlike type II receptors, type I receptors have a cysteine-rich COOH-terminal domain. These characteristics of macrophage receptors confer the broad specificity that allows them to function as general non-specific immune cells. Neutrophils Neutrophils are among the first immune cells to travel to sites of infection, where they aid in fighting infection by ingesting microorganisms and providing the enzymes to kill them. This process characterizes neutrophils as a type of phagocyte. Neutrophils produce neutrophil extracellular traps (NETs), composed of granule and nuclear constituents, which play a role in trapping and killing bacteria that have invaded the body. NETs, released by activated neutrophils, are fragile structures consisting of smooth stretches and globular domains, as shown via high-resolution scanning electron microscopy. After stimulation of the neutrophil response, neutrophils lose their shape, allowing euchromatin and heterochromatin to homogenize, later resulting in the mixing of NET components. The formation of NETs happens once the nuclear envelope and granule membranes of the neutrophils disintegrate. The NETs are released as the cell membrane breaks, resulting in a unique process of cell death. These NET structures of neutrophils bind Gram-positive and Gram-negative bacteria, as well as fungi, which confers broad specificity on neutrophils, explaining their role in the first line of defense once microbes have invaded. Dendritic cells The classification of dendritic cells as another type of white blood cell occurred over thirty-five years ago through the work of Ralph Steinman and Zanvil A. Cohn and has provided an essential link in the innate immune system. Dendritic cells line airways and intestines, participate in a rich network making up part of the epidermal layer of the skin, and play a unique role in initiating a primary immune response. Dendritic cells are named after their structure, which resembles the dendrites of neurons, and they have two vital functions: they display antigens, which are recognized by T cells, and they alert lymphocytes to the presence of an injury or infection. When the body is introduced to infection or injury, dendritic cells migrate to immune or lymphoid tissues. These two types of tissues are rich in T cells, the cells whose actions are induced by dendritic cells. Dendritic cells will capture antigens and engulf them through the process of phagocytosis. Dendritic cells contain Toll-like receptors (TLR) that will recognize a broad variety of microorganisms in the case of invasion.
The activation of these receptors stimulates specific antigen responses and the development of antigen-specific adaptive immunity. A unique feature of dendritic cells is that they are able to open up the tight junctions between epithelial cells and sample invaders themselves, all while maintaining the integrity of the epithelial barrier through expression of their own tight-junction proteins. A real-life example of dendritic cell function is seen in the rejection of organ transplants. References External links Nonspecific immune cell entry in the public domain National Cancer Institute Dictionary of Cancer Terms Immune system
Nonspecific immune cell
[ "Biology" ]
1,621
[ "Immune system", "Organ systems" ]
16,081,338
https://en.wikipedia.org/wiki/Nicotine%20nasal%20spray
A nicotine nasal spray is a nasal spray that contains a small dose of nicotine, which enters the blood by being absorbed through the lining of the nose. This helps stop nicotine cravings and relieves symptoms that occur when a person is trying to quit smoking. A prescription is needed for nicotine nasal spray in many countries. In the United Kingdom, it can be purchased in a pharmacy as an over-the-counter drug. References Nicotine nasal spray entry in the public domain NCI Dictionary of Cancer Terms Smoking cessation Nasal sprays Smoking Stimulants
Nicotine nasal spray
[ "Chemistry" ]
118
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
16,081,433
https://en.wikipedia.org/wiki/Molecular%20risk%20assessment
Molecular risk assessment is a procedure in which biomarkers (for example, biological molecules or changes in tumor cell DNA) are used to estimate a person's risk for developing cancer. Specific biomarkers may be linked to particular types of cancer. Sources External links Molecular risk assessment entry in the public domain NCI Dictionary of Cancer Terms Biological techniques and tools Cancer screening
Molecular risk assessment
[ "Biology" ]
75
[ "nan" ]
16,081,458
https://en.wikipedia.org/wiki/Cosmic%20Radiation%20Satellite
The Cosmic Radiation Satellite (CORSA, also CORSA-A) was a Japanese space telescope. It was intended to be Japan's first X-ray astronomy satellite but was lost when its Mu-3 launch vehicle failed on 4 February 1976. A replacement satellite, Hakucho (CORSA-b), was later launched. Sources Space telescopes
Cosmic Radiation Satellite
[ "Astronomy" ]
73
[ "Space telescopes" ]
16,081,683
https://en.wikipedia.org/wiki/Imaginary%20element
In model theory, a branch of mathematics, an imaginary element of a structure is roughly a definable equivalence class. Imaginaries were introduced by Shelah (1978), and elimination of imaginaries was introduced by Poizat (1983).

Definitions

M is a model of some theory. x and y stand for n-tuples of variables, for some natural number n. An equivalence formula is a formula φ(x, y) that defines a symmetric and transitive relation. Its domain is the set of elements a of Mn such that φ(a, a); on its domain, it is an equivalence relation. An imaginary element a/φ of M is an equivalence formula φ together with an equivalence class a. M has elimination of imaginaries if for every imaginary element a/φ there is a formula θ(x, y) such that there is a unique tuple b for which the equivalence class of a consists of exactly the tuples x satisfying θ(x, b). A model has uniform elimination of imaginaries if the formula θ can be chosen independently of a. A theory has elimination of imaginaries if every model of that theory does (and similarly for uniform elimination).

Examples

ZFC set theory has elimination of imaginaries. Peano arithmetic has uniform elimination of imaginaries. A vector space of dimension at least 2 over a finite field with at least 3 elements does not have elimination of imaginaries. References Model theory
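As a concrete illustration (a standard worked example, not taken from the sources cited above), consider the "unordered sum" imaginary in a field K, with n = 2:

```latex
\varphi(x, y) \;:\equiv\; x_1 + x_2 = y_1 + y_2 ,
\qquad
\theta(x, b) \;:\equiv\; x_1 + x_2 = b ,
\qquad b := a_1 + a_2 .
```

Here φ is an equivalence formula on K², the class of a tuple a = (a1, a2) consists of all pairs with the same sum, and b = a1 + a2 is the unique parameter for which θ(x, b) defines that class, so this particular imaginary is eliminated.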
Imaginary element
[ "Mathematics" ]
295
[ "Mathematical logic", "Model theory" ]
16,081,959
https://en.wikipedia.org/wiki/Scott%20A.%20McLuckey
Scott A. McLuckey is an American chemist, the John A. Leighty Distinguished Professor of Chemistry at Purdue University. His research concerns the formation of ionized versions of large biomolecules, mass spectrometry of these ions, and ion-ion reactions. McLuckey did his undergraduate studies at Westminster College, Pennsylvania, earning a B.S. in 1978. He received his Ph.D. in 1982 from Purdue University. After a year of postdoctoral studies in Amsterdam, McLuckey joined the research staff of Oak Ridge National Laboratory, where he remained until 2000 when he moved to Purdue. He became the Leighty professor in 2008. Since 1998 he has been editor of the International Journal of Mass Spectrometry. From 2010 to 2012 he was president of the American Society for Mass Spectrometry. In 1997, he was the first recipient of the Biemann Medal awarded by the American Society for Mass Spectrometry for his contributions to mass spectrometry. He was named scientist of the year at Oak Ridge in 1999. In 2000, he received the Curt Brunnée Award of the International Mass Spectrometry Society, given annually to a researcher under the age of 45. He received the 2007 Award in Chemical Instrumentation of the American Chemical Society Division of Analytical Chemistry, and the Anachem Award in 2008 from the National Federation of Analytical Chemistry and Spectroscopy. He also received the 2008 Herbert Newby McCoy Award for outstanding contributions to science from Purdue. References External links Scott A. McLuckey (Purdue University Department of Chemistry) 21st-century American chemists Mass spectrometrists Westminster College (Pennsylvania) alumni Purdue University alumni Living people Purdue University faculty Year of birth missing (living people) Thomson Medal recipients
Scott A. McLuckey
[ "Physics", "Chemistry" ]
357
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
16,082,193
https://en.wikipedia.org/wiki/ClearMeeting
ClearMeeting is a web conferencing service developed and marketed by Audiocast Inc. The company provides database-driven streaming media products and corporate online communication systems. ClearMeeting is a tool for adding visual and interactive elements to traditional telephone conference calls. It is sold as an on-demand service, also called SaaS (Software as a Service). History ClearMeeting was developed in 2005 by Audiocast Inc. of Northfield, Illinois, United States, as a platform for giving and viewing slideshow presentations over the web. Because it operates as SaaS, no installation of the ClearMeeting application is required. In 2007, ClearMeeting became a certified application on the Salesforce.com AppExchange. References Teleconferencing Collaborative software Streaming 1997 software
ClearMeeting
[ "Technology" ]
171
[ "Multimedia", "Streaming" ]
16,082,982
https://en.wikipedia.org/wiki/Bed%20trick
The bed trick is a plot device in traditional literature and folklore; it involves a substitution of one partner in the sex act with a third person (in the words of Wendy Doniger, "going to bed with someone whom you mistake for someone else"). In the standard and most common form of the bed trick, a man goes to a sexual assignation with a certain woman, and without his knowledge that woman's place is taken by a substitute. In traditional literature Instances of the bed trick exist in the traditional literatures of many human cultures. It can be found in the Old Testament: in Genesis Chapter 29 Laban substitutes Leah for Rachel on Jacob's wedding night, as Jacob discovers the following morning. Other examples range throughout the Western canon (several occur in Arthurian romance, as well as in Chaucer's "The Reeve's Tale") and can be paralleled by instances in non-Western cultures (such as that of Indra and Ahalya in the ancient Indian epic Ramayana). Renaissance For modern readers and audiences, the bed trick is most immediately and most closely associated with English Renaissance drama, primarily due to the uses of the bed trick by Shakespeare in his two dark comedies, All's Well That Ends Well and Measure for Measure. In All's Well That Ends Well, Bertram thinks he is going to have sex with Diana, the woman he is trying to seduce; Helena, the protagonist, takes Diana's place in the darkened bedchamber, and so consummates their arranged marriage. In this case, the bed trick derives from Shakespeare's non-dramatic plot source, the ninth story of the third day in The Decameron by Boccaccio (which Shakespeare may have accessed through an English-language intermediary, the version in William Painter's Palace of Pleasure). In Measure for Measure, Angelo expects to have sex with Isabella, the heroine; but the Duke substitutes Mariana, the woman Angelo had engaged to marry but abandoned. In this case the bed trick was not present in Shakespeare's sources, but was added to the plot by the poet. (Related plot elements can be found in two other Shakespearean plays. In the final scene of Much Ado About Nothing, the bride at Claudio's wedding turns out to be Hero instead of her cousin, as expected; and in The Two Noble Kinsmen, the Wooer pretends to be Palamon to sleep with and marry the Jailer's Daughter.) The two uses of the bed trick by Shakespeare are the most famous in the drama of his era; they were emulated by more than forty other uses, however, and virtually every major successor of Shakespeare down to the closing of the theatres in 1642 employed the plot element at least once. The use of the bed trick in Middleton and Rowley's The Changeling, in which Diaphanta takes Beatrice-Joanna's place on the latter's wedding night, is probably the most famous instance outside of Shakespeare. Rowley also provides a gender-reversed instance of the bed trick in his All's Lost by Lust, in which it is the male rather than the female partner in the sexual pair who is substituted. (Male versions of the bed trick are rarer but not unprecedented; a classical instance occurs when Zeus disguises himself as Amphitryon to impregnate Alcmene with the future Hercules. Similarly in Arthurian legend, Uther Pendragon takes the place of Gorlois to impregnate Igraine with the future King Arthur.) Multiple uses of the bed trick occur in the works of Thomas Middleton, John Marston, John Fletcher, James Shirley, Richard Brome, and Thomas Heywood. 
Shakespeare employs the bed trick to yield plot resolutions that largely conform to traditional morality, as do some of his contemporaries; in the comic subplot to The Insatiate Countess (c. 1610), Marston constructs a double bed trick in which two would-be adulterers sleep with their own wives. Shakespeare's successors, however, tend to use the trick in more sensational and salacious ways. In Rowley's play cited above, it leads to the mistaken murder of the substituted man. Middleton's Hengist, King of Kent features an extreme version of the bed trick, in which a woman is kidnapped and raped in darkness, by a man she doesn't realise is her own husband. Post-Renaissance After theatres re-opened with the start of the Restoration era, the bed trick made sporadic appearances in plays by Elkanah Settle and Aphra Behn, and perhaps reached its culmination in Sir Francis Fane's Love in the Dark (1675); but in time it passed out of fashion in drama. Modern critics, readers, and audience members tend to find the bed trick highly artificial and lacking in credibility (though scholar Marliss Desens cites one alleged real-life instance of its employment in Shakespeare's era). In contemporary legal systems the bed trick may be considered a form of rape by deception. Some other bed-trick plays Blurt, Master Constable The English Moor The Family of Love A Game at Chess The Gamester Grim the Collier of Croydon The Lady of Pleasure Love's Last Shift The Marriage of Figaro A Mad Couple Well-Match'd The Novella The Parliament of Love The Queen of Corinth The Wedding The Widow's Tears The Witch The Wonder of Women True Lies In other media In Richard Strauss's 1932 opera Arabella, Zdenka/Zdenko, the daughter consigned to live as a boy because of family finances, contrives to pretend she is her sister Arabella to sleep with Matteo, with whom she is secretly in love. The bed trick can be seen in Eliza Haywood's novel Love in Excess. The bed trick is used in Roald Dahl's story The Great Switcheroo. A variation of the bed trick can also be seen in the movie Revenge of the Nerds. The Family Guy episode "Peter-assment" features a farcical and unwieldy variation, with Peter hiding Quagmire and Mort under his clothes to have sex with his boss Angela. The bed trick is also used twice in the film The Rocky Horror Picture Show. References William Shakespeare Narrative techniques Deception Sexuality in plays Sleep in fiction
Bed trick
[ "Biology" ]
1,303
[ "Behavior", "Sleep in fiction", "Sleep" ]
16,083,202
https://en.wikipedia.org/wiki/Cilofungin
Cilofungin (INN) is the first clinically applied member of the echinocandin family of antifungal drugs. It was derived from a fungus in the genus Aspergillus. It acts by interfering with an invading fungus's ability to synthesize its cell wall; specifically, it inhibits the synthesis of (1→3)-β-D-glucan. References Abandoned drugs Antifungals Echinocandins
Cilofungin
[ "Chemistry" ]
101
[ "Drug safety", "Abandoned drugs" ]
16,083,822
https://en.wikipedia.org/wiki/Baily%E2%80%93Borel%20compactification
In mathematics, the Baily–Borel compactification is a compactification of a quotient of a Hermitian symmetric space by an arithmetic group, introduced by Walter L. Baily and Armand Borel. Example If C is the quotient of the upper half plane by a congruence subgroup of SL2(Z), then the Baily–Borel compactification of C is formed by adding a finite number of cusps to it. See also L² cohomology References Algebraic geometry Compactification (mathematics)
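For concreteness, the modular-curve case above can be written as follows (standard notation; for the full modular group there is a single cusp, and the j-invariant identifies the result with the projective line):

```latex
X(\Gamma) \;=\; \Gamma \backslash \mathbb{H} \,\cup\, \{\text{cusps}\},
\qquad
X(\mathrm{SL}_2(\mathbb{Z})) \;=\; \mathrm{SL}_2(\mathbb{Z}) \backslash \mathbb{H} \,\cup\, \{\infty\} \;\cong\; \mathbb{P}^1 .
```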
Baily–Borel compactification
[ "Mathematics" ]
102
[ "Geometry stubs", "Compactification (mathematics)", "Fields of abstract algebra", "Topology", "Geometry", "Algebraic geometry" ]
16,083,989
https://en.wikipedia.org/wiki/IMViC
The IMViC tests are a group of individual tests used in microbiology lab testing to identify an organism in the coliform group. A coliform is a Gram-negative, aerobic or facultatively anaerobic rod that produces gas from lactose within 48 hours. The presence of some coliforms indicates fecal contamination. The term "IMViC" is an acronym for each of these tests: "I" is for indole test; "M" is for methyl red test; "V" is for Voges-Proskauer test; and "C" is for citrate test. The lower-case "i" is merely for "in", as the citrate test requires coliform samples to be placed "in citrate". These tests are useful in distinguishing members of the Enterobacteriaceae.

Indole test

In this test, the organism under consideration is grown in peptone water broth, which contains tryptophan. Under the action of the enzyme tryptophanase, tryptophan is converted to an indole molecule, pyruvate, and ammonium. The indole is then extracted from the broth by means of xylene. The broth is sterilized for 15 minutes at around 121 °C. To test the broth for indole production, Kovac's reagent is added after incubation. Kovac's reagent consists of amyl alcohol, para-dimethylaminobenzaldehyde, and concentrated hydrochloric acid; it detects the ability of an organism to split indole from the amino acid tryptophan. A positive result is indicated by a pink/red layer forming on top of the liquid.

Methyl red and Voges–Proskauer test

These tests both use the same broth for bacterial growth, called MR-VP broth. After growth, the broth is separated into two tubes, one for the methyl red (MR) test and one for the Voges-Proskauer (VP) test. The methyl red test detects the production of acids formed during metabolism via the mixed acid fermentation pathway, using pyruvate as a substrate. The pH indicator methyl red is added to one tube; a red color appearing at pH lower than 4.2 indicates a positive test, meaning mixed acid fermentation is used. If the solution remains yellow (pH 6.2 or above), the test is negative, meaning the butanediol fermentation pathway is used. The VP test uses alpha-naphthol and potassium hydroxide to test for the presence of acetylmethylcarbinol (acetoin), an intermediate of the 2,3-butanediol fermentation pathway. After both reagents are added, the tube is shaken vigorously and then allowed to sit for 5–10 minutes. A pinkish-red color indicates a positive test, meaning the 2,3-butanediol fermentation pathway is used.

Citrate test

In the 1930s, S. A. Koser conducted experiments on bacterial catabolism of organic acids. Koser found that citrate metabolism could serve as an indicator for bacteria found in natural environments: citrate could be used to distinguish coliforms found in soil and aquatic environments (members of the Enterobacteriaceae) from coliforms associated with fecal contamination. Coliforms without fecal association grew on citrate, while coliforms of fecal origin did not. The test uses Simmons' citrate agar to determine the ability of a microorganism to use citrate as its sole carbon and energy source. The agar contains citrate and ammonium ions (the nitrogen source) and bromothymol blue (BTB) as a pH indicator; bromothymol blue was adopted in order to reduce false positives. The citrate agar is green before inoculation and, through the action of BTB, turns blue as a positive test indicator, meaning citrate is utilized. The test is also prepared on a slant to maximize bacterial growth for an even clearer indication of the use of citrate.
Usage

These IMViC tests are useful for differentiating the family Enterobacteriaceae, especially when used alongside the urease test. For example, Escherichia coli is typically indole-positive, methyl red-positive, Voges–Proskauer-negative, and citrate-negative (+ + − −), whereas Enterobacter aerogenes typically shows the opposite pattern (− − + +). References Microbiology
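The pattern-matching logic above is simple enough to sketch in code. The following Python snippet is illustrative only: the lookup table covers just the two textbook patterns mentioned above, and the function and variable names are our own, not part of any standard protocol:

```python
# Illustrative sketch: map IMViC results to textbook coliform patterns.
# '+' = positive, '-' = negative; order is (Indole, Methyl red, VP, Citrate).
# The table covers only the two classic patterns discussed above.
IMVIC_PATTERNS = {
    ("+", "+", "-", "-"): "Escherichia coli (typical pattern)",
    ("-", "-", "+", "+"): "Enterobacter aerogenes (typical pattern)",
}

def identify(indole: str, methyl_red: str, vp: str, citrate: str) -> str:
    """Return the textbook organism matching an IMViC result tuple."""
    return IMVIC_PATTERNS.get(
        (indole, methyl_red, vp, citrate),
        "no match in this limited table; consult a full identification key",
    )

print(identify("+", "+", "-", "-"))  # -> Escherichia coli (typical pattern)
```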
IMViC
[ "Chemistry", "Biology" ]
924
[ "Microbiology", "Microscopy" ]
16,084,282
https://en.wikipedia.org/wiki/Ordinal%20analysis
In proof theory, ordinal analysis assigns ordinals (often large countable ordinals) to mathematical theories as a measure of their strength. If theories have the same proof-theoretic ordinal they are often equiconsistent, and if one theory has a larger proof-theoretic ordinal than another it can often prove the consistency of the second theory. In addition to obtaining the proof-theoretic ordinal of a theory, in practice ordinal analysis usually also yields various other pieces of information about the theory being analyzed, for example characterizations of the classes of provably recursive, hyperarithmetical, or Δ12 functions of the theory.

History

The field of ordinal analysis was formed when Gerhard Gentzen in 1934 used cut elimination to prove, in modern terms, that the proof-theoretic ordinal of Peano arithmetic is ε0. See Gentzen's consistency proof.

Definition

Ordinal analysis concerns true, effective (recursive) theories that can interpret a sufficient portion of arithmetic to make statements about ordinal notations. The proof-theoretic ordinal of such a theory T is the supremum of the order types of all ordinal notations (necessarily recursive, see next section) that the theory can prove are well founded: the supremum of all ordinals α for which there exists a notation o in Kleene's sense such that T proves that o is an ordinal notation. Equivalently, it is the supremum of all ordinals α such that there exists a recursive relation R on ω (the set of natural numbers) that well-orders it with ordinal α and such that T proves transfinite induction of arithmetical statements for R. (A compact restatement of this definition is given at the end of this article.)

Ordinal notations

Some theories, such as subsystems of second-order arithmetic, have no conceptualization of, or way to make arguments about, transfinite ordinals. For example, to formalize what it means for a subsystem T of Z2 to "prove α well-ordered", we instead construct an ordinal notation A with order type α. T can now work with various transfinite induction principles along A, which substitute for reasoning about set-theoretic ordinals. However, some pathological notation systems exist that are unexpectedly difficult to work with. For example, Rathjen gives a primitive recursive notation system that is well-founded iff PA is consistent, despite having order type ω; including such a notation in the ordinal analysis of PA would yield a false value for its proof-theoretic ordinal.

Upper bound

Since an ordinal notation must be recursive, the proof-theoretic ordinal of any theory is less than or equal to the Church–Kleene ordinal ω1CK. In particular, the proof-theoretic ordinal of an inconsistent theory is equal to ω1CK, because an inconsistent theory trivially proves that all ordinal notations are well-founded. For any theory that is both Σ11-axiomatizable and Π11-sound, the existence of a recursive ordering that the theory fails to prove well-ordered follows from the Σ11 bounding theorem, and the provably well-founded ordinal notations are in fact well-founded by Π11-soundness. Thus the proof-theoretic ordinal of a Π11-sound theory that has a Σ11 axiomatization will always be a (countable) recursive ordinal, that is, strictly less than ω1CK.

Examples

Theories with proof-theoretic ordinal ω

Q, Robinson arithmetic (although the definition of the proof-theoretic ordinal for such weak theories has to be tweaked).
PA–, the first-order theory of the nonnegative part of a discretely ordered ring.

Theories with proof-theoretic ordinal ω2

RFA, rudimentary function arithmetic.
IΔ0, arithmetic with induction on Δ0-predicates without any axiom asserting that exponentiation is total.
Theories with proof-theoretic ordinal ω3

EFA, elementary function arithmetic.
IΔ0 + exp, arithmetic with induction on Δ0-predicates augmented by an axiom asserting that exponentiation is total.
RCA0*, a second-order form of EFA sometimes used in reverse mathematics.
WKL0*, a second-order form of EFA sometimes used in reverse mathematics.

Friedman's grand conjecture suggests that much "ordinary" mathematics can be proved in weak systems having this as their proof-theoretic ordinal.

Theories with proof-theoretic ordinal ωn (for n = 2, 3, ... ω)

IΔ0 or EFA augmented by an axiom ensuring that each element of the n-th level of the Grzegorczyk hierarchy is total.

Theories with proof-theoretic ordinal ωω

RCA0, recursive comprehension.
WKL0, weak Kőnig's lemma.
PRA, primitive recursive arithmetic.
IΣ1, arithmetic with induction on Σ1-predicates.

Theories with proof-theoretic ordinal ε0

PA, Peano arithmetic (shown by Gentzen using cut elimination).
ACA0, arithmetical comprehension.

Theories with proof-theoretic ordinal the Feferman–Schütte ordinal Γ0

ATR0, arithmetical transfinite recursion.
Martin-Löf type theory with arbitrarily many finite level universes.

This ordinal is sometimes considered to be the upper limit for "predicative" theories.

Theories with proof-theoretic ordinal the Bachmann–Howard ordinal

ID1, the first theory of inductive definitions.
KP, Kripke–Platek set theory with the axiom of infinity.
CZF, Aczel's constructive Zermelo–Fraenkel set theory.
EON, a weak variant of Feferman's explicit mathematics system T0.

The Kripke-Platek and CZF set theories are weak set theories without axioms for the full powerset given as a set of all subsets. Instead, they tend either to have axioms of restricted separation and formation of new sets, or to grant the existence of certain function spaces (exponentiation) instead of carving them out from bigger relations.

Theories with larger proof-theoretic ordinals

Π11-CA0, Π11 comprehension, has a rather large proof-theoretic ordinal, which was described by Takeuti in terms of "ordinal diagrams" and which is bounded by ψ0(Ωω) in Buchholz's notation. It is also the ordinal of ID<ω, the theory of finitely iterated inductive definitions, and the ordinal of MLW, Martin-Löf type theory with indexed W-types.
IDω, the theory of ω-iterated inductive definitions. Its proof-theoretic ordinal is equal to the Takeuti-Feferman-Buchholz ordinal.
T0, Feferman's constructive system of explicit mathematics, has a larger proof-theoretic ordinal, which is also the proof-theoretic ordinal of KPi, Kripke–Platek set theory with iterated admissibles.
KPi, an extension of Kripke–Platek set theory based on a recursively inaccessible ordinal, has a very large proof-theoretic ordinal ψ(εI+1), described in a 1983 paper of Jäger and Pohlers, where I is the smallest inaccessible. This ordinal is also the proof-theoretic ordinal of Δ12-CA + BI.
KPM, an extension of Kripke–Platek set theory based on a recursively Mahlo ordinal, has a very large proof-theoretic ordinal, which was described by Rathjen (1990).
TTM, an extension of Martin-Löf type theory by one Mahlo universe, has an even larger proof-theoretic ordinal.
KP + Π3 – Ref has a proof-theoretic ordinal equal to Ψ(εK+1), where K refers to the first weakly compact cardinal, due to (Rathjen 1993).
KP + Πω – Ref has a proof-theoretic ordinal built from Ξ, the first Π20-indescribable cardinal, due to (Stegert 2010).
Stability, a related system treated in the same work, has a proof-theoretic ordinal expressed via a cardinal analogue of the least ordinal which is stable for all smaller levels, due to (Stegert 2010).
Most theories capable of describing the power set of the natural numbers have proof-theoretic ordinals that are so large that no explicit combinatorial description has yet been given. This includes Π12-CA0, full second-order arithmetic (Z2), and set theories with powersets, including ZF and ZFC. The strength of intuitionistic ZF (IZF) equals that of ZF.

Table of ordinal analyses

Key

This is a list of symbols used in this table:
ψ represents various ordinal collapsing functions as defined in their respective citations.
Ψ represents either Rathjen's or Stegert's Psi.
φ represents Veblen's function.
ω represents the first transfinite ordinal.
εα represents the epsilon numbers.
Γα represents the gamma numbers (Γ0 is the Feferman–Schütte ordinal).
Ωα represents the uncountable ordinals (Ω1, abbreviated Ω, is ω1). Countability is considered necessary for an ordinal to be regarded as proof-theoretic.
An ordinal term denoting a stable ordinal, together with one denoting the least admissible ordinal above a given ordinal.
An ordinal term subject to a defining condition; N is a variable that defines a series of ordinal analyses.

This is a list of the abbreviations used in this table:

First-order arithmetic
Q is Robinson arithmetic.
PA– is the first-order theory of the nonnegative part of a discretely ordered ring.
RFA is rudimentary function arithmetic.
IΔ0 is arithmetic with induction restricted to Δ0-predicates without any axiom asserting that exponentiation is total.
EFA is elementary function arithmetic.
IΔ0 + exp is arithmetic with induction restricted to Δ0-predicates, augmented by an axiom asserting that exponentiation is total.
EFAn is elementary function arithmetic augmented by an axiom ensuring that each element of the n-th level of the Grzegorczyk hierarchy is total.
IΔ0n is IΔ0 augmented by an axiom ensuring that each element of the n-th level of the Grzegorczyk hierarchy is total.
PRA is primitive recursive arithmetic.
IΣ1 is arithmetic with induction restricted to Σ1-predicates.
PA is Peano arithmetic.
A variant of PA has induction only for positive formulas.
One system extends PA by ν iterated fixed points of monotone operators.
Another, not exactly a first-order arithmetic system, captures what one can get by predicative reasoning based on the natural numbers.
Another is the autonomously iterated version (in other words, once an ordinal is defined, it can be used to index a new series of definitions).
IDν extends PA by ν iterated least fixed points of monotone operators.
Another, not exactly a first-order arithmetic system, captures what one can get by predicative reasoning based on ν-times iterated generalized inductive definitions.
Another is autonomously iterated ID.
One entry is transfinite induction of length α restricted to formulas of bounded complexity; it happens to be the representation of the ordinal notation when used in first-order arithmetic.

Second-order arithmetic
In general, a subscript 0 means that the induction scheme is restricted to a single set induction axiom.
RCA0* is a second-order form of EFA sometimes used in reverse mathematics.
WKL0* is a second-order form of EFA sometimes used in reverse mathematics.
RCA0 is recursive comprehension.
WKL0 is weak Kőnig's lemma.
ACA0 is arithmetical comprehension.
ACA is ACA0 plus the full second-order induction scheme.
ATR0 is arithmetical transfinite recursion.
ATR is ATR0 plus the full second-order induction scheme.
A further system is a base theory plus the assertion "every true sentence (of a specified complexity) with parameters holds in a (countable coded) β-model" of a specified subsystem.

Kripke-Platek set theory
KP is Kripke-Platek set theory with the axiom of infinity.
A variant is Kripke-Platek set theory whose universe is an admissible set containing ω.
A weakened version of the preceding theory based on W-types.
KPl asserts that the universe is a limit of admissible sets.
A weakened version of KPl based on W-types.
KPi asserts that the universe is an inaccessible set, i.e. an admissible limit of admissibles.
A stronger variant asserts that the universe is hyperinaccessible: an inaccessible set and a limit of inaccessible sets.
KPM asserts that the universe is a Mahlo set.
Another system is KP augmented by a certain first-order reflection scheme.
Another is KPi augmented by one additional axiom.
Another is KPI augmented by the assertion "at least one recursively Mahlo ordinal exists".
Another is KP with an axiom stating that "there exists a non-empty and transitive set M" satisfying a further condition.
A superscript zero indicates that ∈-induction is removed (making the theory significantly weaker).

Type theory
The Herbelin-Patey Calculus of Primitive Recursive Constructions.
Type theory without W-types and with universes.
Type theory without W-types and with finitely many universes.
Type theory with a next universe operator.
Type theory without W-types and with a superuniverse.
An automorphism on type theory without W-types.
Type theory with one universe and Aczel's type of iterative sets.
MLW, type theory with indexed W-types.
Type theory with W-types and one universe.
Type theory with W-types and finitely many universes.
An automorphism on type theory with W-types.
TTM, type theory with a Mahlo universe.
F is System F, also called the polymorphic lambda calculus or second-order lambda calculus.

Constructive set theory
CZF is Aczel's constructive set theory.
CZF plus the regular extension axiom.
The preceding theory plus the full second-order induction scheme.
CZF with a Mahlo universe.

Explicit mathematics
EM0 is basic explicit mathematics plus elementary comprehension.
EM0 plus the join rule.
EM0 plus join axioms.
EON is a weak variant of Feferman's T0.
T0 is EM0 plus join axioms plus inductive generation.
A strengthening of T0 by the full second-order induction scheme.

See also
Equiconsistency
Large cardinal property
Feferman–Schütte ordinal
Bachmann–Howard ordinal
Complexity class
Gentzen's consistency proof

Notes
1. For
2. The Veblen function with countably infinitely iterated least fixed points.
3. Can also be commonly written using Madore's ψ.
4. Uses Madore's ψ rather than Buchholz's ψ.
5. Can also be commonly written using Madore's ψ.
6. Represents the first recursively weakly compact ordinal. Uses Arai's ψ rather than Buchholz's ψ.
7. Also the proof-theoretic ordinal of the weakened system, as the amount of weakening given by the W-types is not enough.
8. I represents the first inaccessible cardinal. Uses Jäger's ψ rather than Buchholz's ψ.
9. Represents the limit of the iterated-inaccessible cardinals. Uses (presumably) Jäger's ψ.
10. Represents the limit of the iterated-inaccessible cardinals. Uses (presumably) Jäger's ψ.
11. M represents the first Mahlo cardinal. Uses Rathjen's ψ rather than Buchholz's ψ.
12. K represents the first weakly compact cardinal. Uses Rathjen's Ψ rather than Buchholz's ψ.
13. Ξ represents the first Π20-indescribable cardinal. Uses Stegert's Ψ rather than Buchholz's ψ.
14. The parameter is the smallest cardinal satisfying two indescribability conditions. Uses Stegert's Ψ rather than Buchholz's ψ.
15. M represents the first Mahlo cardinal. Uses (presumably) Rathjen's ψ.

Citations References Proof theory Ordinal numbers
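The definition given in the Definition section can be restated compactly. The following display is our paraphrase, writing |T| for the proof-theoretic ordinal, otype for order type, and TI(R) for transfinite induction of arithmetical statements along R:

```latex
|T| \;=\; \sup\bigl\{\, \operatorname{otype}(R) \;:\; R \text{ a recursive well-ordering of } \omega \text{ with } T \vdash \mathrm{TI}(R) \,\bigr\}.
```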
Ordinal analysis
[ "Mathematics" ]
3,417
[ "Ordinal numbers", "Proof theory", "Mathematical logic", "Mathematical objects", "Order theory", "Numbers" ]
16,084,368
https://en.wikipedia.org/wiki/Nonrecursive%20ordinal
In mathematics, particularly set theory, non-recursive ordinals are large countable ordinals greater than all the recursive ordinals, and therefore cannot be expressed using recursive ordinal notations.

The Church–Kleene ordinal and variants

The smallest non-recursive ordinal is the Church–Kleene ordinal, ω1CK, named after Alonzo Church and S. C. Kleene; as a set, it is the set of all recursive ordinals. Since the successor of a recursive ordinal is recursive, the Church–Kleene ordinal is a limit ordinal. It is also the smallest ordinal that is not hyperarithmetical, and the smallest admissible ordinal after ω (an ordinal α is called admissible if Lα is a model of Kripke–Platek set theory). The ω1CK-recursive subsets of ω are exactly the Δ11 subsets of ω. The notation ω1CK is in reference to ω1, the first uncountable ordinal, which is the set of all countable ordinals, analogously to how the Church–Kleene ordinal is the set of all recursive ordinals. Some old sources use ω1 to denote the Church–Kleene ordinal.

For a set x of natural numbers, a set is x-computable if it is computable from a Turing machine with an oracle state that queries x. The relativized Church–Kleene ordinal ω1x is the supremum of the order types of x-computable well-orderings. The Friedman–Jensen–Sacks theorem states that for every countable admissible ordinal α, there exists a set x such that α = ω1x.

An ordinal first defined by Stephen G. Simpson extends the Church–Kleene ordinal. It is the smallest limit of admissible ordinals, yet it is not itself admissible. Alternatively, it is the smallest α such that Lα is a model of Π11-comprehension.

Recursively large ordinals

The αth admissible ordinal is sometimes denoted by τα. Recursively "x" ordinals, where "x" typically represents a large cardinal property, are kinds of nonrecursive ordinals. Rathjen has called these ordinals the "recursively large counterparts" of x; however, the use of "recursively large" here is not to be confused with the notion of an ordinal being recursive.

An ordinal α is called recursively inaccessible if it is admissible and a limit of admissibles. Alternatively, α is recursively inaccessible iff α is the αth admissible ordinal, or iff Lα models KPi, an extension of Kripke–Platek set theory stating that each set is contained in a model of Kripke–Platek set theory. Under the condition that Lα models "every set is hereditarily countable", α is recursively inaccessible iff Lα is a model of Δ12-comprehension.

An ordinal α is called recursively hyperinaccessible if it is recursively inaccessible and a limit of recursively inaccessibles, or equivalently if α is the αth recursively inaccessible ordinal. Like "hyper-inaccessible cardinal", different authors conflict on this terminology.

An ordinal α is called recursively Mahlo if it is admissible and for any α-recursive function f from α to α there is an admissible β less than α that is closed under f. Mirroring the Mahloness hierarchy, α is recursively γ-Mahlo for an ordinal γ if it is admissible and for any α-recursive function f from α to α there is an admissible ordinal β less than α such that β is closed under f and β is recursively δ-Mahlo for all δ less than γ.

An ordinal α is called recursively weakly compact if it is Π3-reflecting, or equivalently, 2-admissible. These ordinals have strong recursive Mahloness properties: a Π3-reflecting ordinal is also recursively Mahlo and recursively hyper-Mahlo.

Weakenings of stable ordinals

An ordinal α is stable if Lα is a Σ1-elementary substructure of L, denoted Lα ≺Σ1 L. These are some of the largest named nonrecursive ordinals appearing in a model-theoretic context, larger, for instance, than the proof-theoretic ordinal of any computably axiomatizable theory. There are various weakenings of stable ordinals: A countable ordinal α is called (+1)-stable iff Lα ≺Σ1 Lα+1.
The smallest (+1)-stable ordinal is much larger than the smallest recursively weakly compact ordinal: it has been shown that the smallest (+1)-stable ordinal is Πn-reflecting for all finite n. In general, a countable ordinal α is called (+β)-stable iff Lα ≺Σ1 Lα+β. A countable ordinal α is called (+)-stable iff Lα ≺Σ1 Lα+, where α+ is the smallest admissible ordinal greater than α. The smallest (+)-stable ordinal is again much larger than the smallest (+β)-stable for any constant β. A countable ordinal α is called (++)-stable iff Lα ≺Σ1 Lα++, where α+ and α++ are the two smallest admissible ordinals greater than α. The smallest (++)-stable ordinal is larger still. A countable ordinal α is called inaccessibly-stable iff Lα ≺Σ1 Lι, where ι is the smallest recursively inaccessible ordinal greater than α. The smallest inaccessibly-stable ordinal is larger than the smallest (++)-stable. A countable ordinal α is called Mahlo-stable iff Lα ≺Σ1 Lμ, where μ is the smallest recursively Mahlo ordinal greater than α. The smallest Mahlo-stable ordinal is larger than the smallest inaccessibly-stable. A countable ordinal α is called doubly (+1)-stable iff there is a (+1)-stable ordinal β greater than α such that Lα ≺Σ1 Lβ. The smallest doubly (+1)-stable ordinal is larger than the smallest Mahlo-stable.

Larger nonrecursive ordinals

Even larger nonrecursive ordinals include:
The least ordinal α such that Lα ≺Σ1 Lβ where β is the smallest nonprojectible ordinal. An ordinal α is nonprojectible if the set of β less than α with Lβ ≺Σ1 Lα is unbounded in α; equivalently, α is a limit of such ordinals.
The ordinal of ramified analysis, often written as β0. This is the smallest β such that Lβ gives a model of full second-order comprehension, or equivalently of ZF minus the axiom of power set.
The least ordinal whose L-level satisfies a certain stronger condition; this ordinal has been characterized by Toshiyasu Arai.
The least ordinal whose L-level satisfies a further such condition.
The least stable ordinal. References Proof theory Ordinal numbers
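For reference, the two suprema mentioned above can be written compactly (standard notation, with otype denoting order type):

```latex
\omega_1^{\mathrm{CK}} = \sup\{\operatorname{otype}(\prec) : {\prec}\ \text{a recursive well-ordering of}\ \omega\},
\qquad
\omega_1^{x} = \sup\{\operatorname{otype}(\prec) : {\prec}\ \text{an}\ x\text{-computable well-ordering of}\ \omega\}.
```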
Nonrecursive ordinal
[ "Mathematics" ]
1,367
[ "Ordinal numbers", "Proof theory", "Mathematical logic", "Mathematical objects", "Order theory", "Numbers" ]
16,084,455
https://en.wikipedia.org/wiki/Feferman%E2%80%93Sch%C3%BCtte%20ordinal
In mathematics, the Feferman–Schütte ordinal (Γ0) is a large countable ordinal. It is the proof-theoretic ordinal of several mathematical theories, such as arithmetical transfinite recursion. It is named after Solomon Feferman and Kurt Schütte, the former of whom suggested the name Γ0. There is no standard notation for ordinals beyond the Feferman–Schütte ordinal. There are several ways of representing the Feferman–Schütte ordinal, some of which use ordinal collapsing functions: for example ψ(Ω^Ω), θ(Ω), or φ(1,0,0) in Veblen notation. Definition The Feferman–Schütte ordinal can be defined as the smallest ordinal that cannot be obtained by starting with 0 and using the operations of ordinal addition and the Veblen functions φα(β). That is, it is the smallest α such that φα(0) = α. Properties This ordinal is sometimes said to be the first impredicative ordinal, though this is controversial, partly because there is no generally accepted precise definition of "predicative". Sometimes an ordinal is said to be predicative if it is less than Γ0. Any recursive path ordering whose precedence on the function symbols is well-founded with order type less than Γ0 itself has order type less than Γ0. References Proof theory Ordinal numbers
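In terms of the Veblen hierarchy, the definition above can be restated as follows (a standard restatement, not an addition to it): φ0(β) = ω^β, each φα+1 enumerates the common fixed points of the earlier functions, and Γ0 is the first ordinal closed under the whole hierarchy:

```latex
\varphi_0(\beta) = \omega^{\beta}, \qquad
\varphi_{\alpha+1}\ \text{enumerates}\ \{\beta : \varphi_\alpha(\beta) = \beta\}, \qquad
\Gamma_0 = \min\{\alpha : \varphi_\alpha(0) = \alpha\}.
```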
Feferman–Schütte ordinal
[ "Mathematics" ]
301
[ "Ordinal numbers", "Proof theory", "Mathematical logic", "Mathematical objects", "Number stubs", "Order theory", "Numbers" ]
16,085,319
https://en.wikipedia.org/wiki/Rubber%20toughening
Rubber toughening is a process in which rubber nanoparticles are interspersed within a polymer matrix to increase the mechanical robustness, or toughness, of the material. "Toughening" a polymer means increasing the ability of the polymeric substance to absorb energy and plastically deform without fracture. Considering the significant advantages in mechanical properties that rubber toughening offers, most major thermoplastics are available in rubber-toughened versions; for many engineering applications, material toughness is a deciding factor in final material selection. The effects of dispersed rubber nanoparticles are complex and differ across amorphous and partly crystalline polymeric systems. Rubber particles toughen a system by a variety of mechanisms, for example by concentrating stress, which causes cavitation or initiates energy-dissipating crazes. However, the effects are not one-sided: excess rubber content or debonding between the rubber and polymer can reduce toughness. It is difficult to state the specific effects of a given particle size or interfacial adhesion parameter because of numerous other confounding variables. The presence of a given failure mechanism is determined by many factors: those intrinsic to the continuous polymer phase, and those that are extrinsic, pertaining to the stress, loading speed, and ambient conditions. The action of a given mechanism in a toughened polymer can be studied with microscopy. The addition of rubbery domains occurs via processes such as melt blending in a Rheomix mixer and atom-transfer radical-polymerization. Current research focuses on how optimizing the secondary phase composition and dispersion affects the mechanical properties of the blend. Questions of interest include those concerning fracture toughness, tensile strength, and glass transition temperature.

Toughening mechanisms

Different theories describe how a dispersed rubber phase toughens a polymeric substance; most invoke mechanisms that dissipate energy throughout the matrix. These theories include: microcrack theory, shear-yielding theory, multiple-crazing theory, shear band and crazing interaction theory, and, more recently, theories including the effects of critical ligament thickness, critical plastic area, voiding and cavitation, damage competition, and others.

Microcrack theory

In 1956, the microcrack theory became the first to explain the toughening effect of a dispersed rubber phase in a polymer. Two key observations that went into the initial theory and its subsequent expansion were as follows: (1) microcracks form voids over which styrene-butadiene copolymer fibrils form to prevent propagation, and (2) energy stored during elongation of toughened epoxies is released upon breaking of rubber particles. The theory concluded that the combined energy to initiate microcracks and the energy to break rubber particles could account for the increased energy absorption of toughened polymers. This theory was limited, accounting for only a small fraction of the observed increase in fracture energy.

Matrix crazing

The matrix crazing theory focuses on explaining the toughening effects of crazing. Crazes start at the particle equator, where principal strain is highest, propagate perpendicular to the stress, and end when they meet another particle. Crazes with perpendicular fibrils can eventually become a crack if the fibrils break.
The volume expansion associated with many small crazes distributed through a large volume, compared with the small volume of a few large cracks in untoughened polymer, accounts for a large fraction of the increase in fracture energy. Interaction between rubber particles and crazes puts elongation pressures onto the particles in the direction of stress. If this force overcomes the surface adhesion between the rubber and polymer, debonding will occur, thereby diminishing the toughening effect associated with crazing. If the particle is harder, it will be less able to deform, and thus debonding occurs under less stress. This is one reason why dispersed rubbers below their own glass transition temperature do not toughen plastics effectively.

Shear yielding

Shear yielding theory, like matrix crazing, can account for a large fraction of the increase in energy absorption of a toughened polymer. Evidence of shear yielding in a toughened polymer can be seen where there is "necking, drawing or orientation hardening." Shear yielding will result if rubber particles act as stress concentrators and initiate volume expansion through crazing, debonding and cavitation, halting the formation of cracks. Overlapping stress fields from one particle to its neighbor contribute to a growing shear-yielding region; the closer the particles are, the greater the overlap and the larger the shear-yielding region. Shear yielding is an energy-absorbing process in itself, and the initiation of shear bands also aids in craze arrest. The occurrence of cavitation is important to shear yielding theory because it acts to lower the yield stress. Cavitation precedes shear yielding; however, shear yielding accounts for a much larger increase in toughness than does the cavitation itself.

Cavitation

Cavitation is common in epoxy resins and other craze-resistant toughened polymers, and is a prerequisite to shearing in Izod impact strength testing. During the deformation and fracture of a toughened polymer, cavitation of the strained rubber particles occurs in crazing-prone and non-crazing-prone plastics, including ABS, PVC, nylon, high-impact polystyrene, and CTBN-toughened epoxies. Engineers use an energy-balance approach to model how particle size and rubber modulus influence material toughness. Both particle size and modulus show positive correlation with brittle-tough transition temperatures. They are both shown to affect the cavitation process occurring at the crack-tip process zone early in deformation, preceding large-scale crazing and shear yielding. For cavitation to increase toughness under strain, the stored volumetric strain energy must overcome the energy of void formation, as modeled by the equation

U(r) = (2/3)πR³Kr(εv − r³/R³)² + 4πr²Γ + 2πr³GrF(λf)

where Gr and Kr are the shear modulus and bulk modulus of the rubber, εv is the volume strain in the rubber particle, Γ is the surface energy of the rubber phase, and the function F(λf) is dependent on the failure strain of the rubber under biaxial stretching conditions; here R is the particle radius and r the radius of the void formed inside it. (A numerical sketch of this balance is given at the end of this article.) The energy-balancing model applies the physical properties of the whole material to describe the microscopic behavior during triaxial stress. The volume stress and particle radius conditions for cavitation can be calculated, giving the theoretical minimum particle radius for cavitation, useful for practical applications in rubber toughening. Typically cavitation will occur when the average stress on the rubber particles is between 10 and 20 megapascals. The volume strain on the particle is relieved and voiding occurs.
The energy absorption due to this increase in volume is theoretically negligible. Instead, it is the consequent shear band formation that accounts for increased toughness. Before debonding, as the strain increases, the rubber phase is forced to stretch further, strengthening the matrix. Debonding between the matrix and the rubber reduces the toughness, creating the need for strong adhesion between the polymer and rubber phases.

Damage competition theory

The damage competition theory models the relative contributions of shear yielding and craze failure when both are present. There are two main assumptions: crazing, microcracks, and cavitation dominate in brittle systems, and shearing dominates in ductile systems. Systems that are in between brittle and ductile will show a combination of these. The damage competition theory defines the brittle-ductile transition as the point at which the opposite mechanism (shear or craze damage) appears in a system dominated by the other mechanism.

Characterization of failure

The dominant failure mechanism can usually be observed directly using TEM, SEM and light microscopy. If cavitation or crazing is dominant, tensile dilatometry (see dilatometer) can be used to measure the extent of the mechanism by measuring volume strain. However, if multiple dilatational mechanisms are present, it is difficult to measure the separate contributions. Shear yielding is a constant-volume process and cannot be measured with tensile dilatometry. Voiding can be seen with optical microscopy; however, one of two methods, polarized light or low-angle light scattering, is necessary to observe the connection between cavitation and shear bands.

Characteristics of the continuous phase relevant to toughening theory

In order to gauge the toughening effects of a dispersed secondary phase, it is important to understand the relevant characteristics of the continuous polymer phase. The mechanical failure characteristics of the pure continuous phase strongly influence how a rubber-toughened polymer fails. When a polymer usually fails by crazing, rubber toughening particles act as craze initiators; when it fails by shear yielding, the rubber particles initiate shear bands. It is also possible for multiple mechanisms to come into play if the polymer is prone to failing by multiple stresses equally. Polystyrene and styrene-acrylonitrile are brittle materials prone to craze failure, while polycarbonate, polyamides, and polyethylene terephthalate (PET) are prone to shear yield failure.

Glass transition temperature

Amorphous plastics are used below their glass transition temperature (Tg). They are brittle and notch-sensitive but creep-resistant. Molecules are immobile, and the plastic responds to rapidly applied stress by fracturing. Partly crystalline thermoplastics are used in temperature conditions between Tg and Tm (the melting temperature). Partly crystalline thermoplastics are tough and creep-prone because the amorphous regions surrounding the rigid crystals are afforded some mobility. Often they are brittle at room temperature because they have high glass transition temperatures. Polyethylene is tough at room temperature because its Tg is lower than room temperature. Polyamide 66 and polyvinylchloride have secondary transitions below their Tg that allow for some energy-absorbing molecular mobility.

Chemical structure

There are some general guidelines to follow when trying to determine a plastic's toughness from its chemical structure.
Vinyl polymers like polystyrene and styrene-acrylonitrile tend to fail by crazing; they have low crack initiation and propagation energies. Polymers with aromatic backbones, such as polyethylene terephthalate and polycarbonate, tend to fail by shear yielding, with high crack initiation energy but low propagation energy. Other polymers, including poly(methyl methacrylate) and polyacetal (polyoxymethylene), are not as brittle as the "brittle polymers" and also not as ductile as the "ductile polymers".

Entanglement density and flexibility of unperturbed real chain

Two molecular parameters relate a plastic's chain structure to its fracture mechanics: the entanglement density νe, which can be estimated from the mass density ρa of the amorphous polymer and the average molecular weight per statistical unit Mv, and the characteristic ratio C∞, a measure of the flexibility of the unperturbed real chain. The crazing stress σc rises with the entanglement density, scaling as σc ∝ νe^1/2, while the normalized yield stress σy scales with the chain stiffness, σy ∝ A·C∞, where A is a constant. The ratio of the crazing stress to the normalized yield stress, σc/σy, is used to determine whether a polymer fails by crazing or by yield: when the ratio is higher, the matrix is prone to yielding; when the ratio is lower, the matrix is prone to failure by crazing. These relations form the basis of crazing theory, shear-yielding theory, and damage competition theory.

Relationship between the secondary phase properties and toughening effect

Rubber selection and miscibility with continuous phase

In material selection it is important to look at the interaction between the matrix and the secondary phase. For example, crosslinking within the rubber phase promotes high-strength fibril formation that toughens the rubber, preventing particle fracture. Carboxyl-terminated butadiene-acrylonitrile (CTBN) is often used to toughen epoxies, but using CTBN alone increases the toughness at the cost of stiffness and heat resistance. Amine-terminated butadiene acrylonitrile (ATBN) is also used. Using ultra-fine full-vulcanized powdered rubber (UFPR), researchers have been able to improve all three properties, toughness, stiffness, and heat resistance, simultaneously, resetting the stage for rubber toughening with particles smaller than previously thought to be effective. In applications where high optical transparency is necessary, examples being poly(methyl methacrylate) and polycarbonate, it is important to find a secondary phase that does not scatter light. To do so it is important to match the refractive indices of both phases. Traditional rubber particles do not offer this quality, and modifying the surface of nanoparticles with polymers of comparable refractive indices is an interest of current research.

Secondary phase concentration

Increasing the rubber concentration in a nanocomposite decreases the modulus and tensile strength. In one study of a PA6-EPDM blend, increasing the rubber concentration up to 30 percent showed a negative linear relationship with the brittle-tough transition temperature; beyond that concentration, the toughness decreased. This suggests that the toughening effect of adding rubber particles is limited to a critical concentration. This was examined further in a 1998 study on PMMA; using SAXS to analyze crazing density, it was found that crazing density increases and yield stress decreases until the critical point, when the relationship flips.

Rubber particle size

A material that is expected to fail by crazing is more likely to benefit from larger particles, whereas a shear-prone material would benefit from smaller particles.
In materials where crazing and yielding are comparable, a bimodal distribution of particle size may be useful for toughening. At fixed rubber concentrations, the optimal particle size is a function of the entanglement density of the polymer matrix. The neat polymer entanglement densities of PS, SAN, and PMMA are 0.056, 0.093, and 0.127, respectively. As entanglement density increases, the optimum particle size decreases linearly, ranging between 0.1 and 3 micrometers. The effect of particle size on toughening depends on the type of test performed, because the failure mechanism changes with test conditions. For impact strength testing on PMMA, where failure occurs by shear yielding, the optimum size of a PBA-core PMMA-shell filler particle was shown in one case to be 250 nm. In the three-point bend test, where failure is due to crazing, 2000 nm particles had the most significant toughening effect.

Temperature effects

Temperature has a direct effect on fracture mechanics. At low temperatures, below the glass transition temperature of the rubber, the dispersed phase behaves like a glass rather than like a rubber that toughens the polymer. As a result, the continuous phase fails by mechanisms characteristic of the pure polymer, as if the rubber were not present. As temperature increases past the glass transition temperature, the rubber phase increases the crack initiation energy. At this point the crack self-propagates due to the stored elastic energy in the material. As temperature rises further past the glass transition of the rubber phase, the impact strength of a rubber-polymer composite still dramatically increases, as crack propagation requires additional energy input.

Sample applications

Epoxy resins

Epoxy resins are a highly useful class of materials used in engineering applications, including adhesives, fiber-reinforced composites, and electronics coatings. Their rigidity and low crack propagation resistance make epoxies a candidate of interest for rubber toughening research aimed at fine-tuning the toughening processes. Some of the factors affecting the toughness of epoxy nanocomposites include the chemical identity of the epoxy curing agent, entanglement density, and interfacial adhesion. Curing epoxy 618 with piperidine, for example, produces tougher epoxies than when boron trifluoride-ethylamine is used. Low entanglement density increases the toughness: bisphenol A can be added to lower the crosslinking density of epoxy 618, thereby increasing the fracture toughness, and bisphenol A and a rubber filler increase toughness synergistically. In textbooks and literature before 2002 it was assumed that there is a lower limit for rubber-toughening particle diameter at 200 nm; it was then discovered that ultra-fine full-vulcanized powdered rubber particles with a diameter of 90 nm show significant toughening of epoxies. This finding underlines how the field is still developing and how more work can be done to better model the rubber toughening effect.

ABS

Acrylonitrile butadiene styrene (ABS) polymer is an application of rubber toughening. The properties of this polymer come mainly from rubber toughening: the polybutadiene rubber domains in the main styrene-acrylonitrile matrix act as a stop to crack propagation.
Optically transparent plastics

PMMA's high optical transparency, low cost, and compressibility make it a viable option for practical applications in architecture and car manufacturing, as a substitute for glass when high transparency is necessary. Incorporating a rubber filler phase increases the toughness; such fillers need to form strong interfacial bonds with the PMMA matrix. In applications where optical transparency is important, measures must be taken to limit light scattering. It is common in toughening PMMA, and other composites, to synthesize core-shell particles via atom-transfer radical-polymerization; these particles have an outer polymer layer with properties similar to those of the primary phase, which increases the particle's adhesion to the matrix. Developing PMMA-compatible core-shell particles with low glass transition temperature while maintaining optical transparency is of interest to architects and car companies. For optimal transparency, the dispersed rubber phase needs the following:

Small average particle radius
Narrow particle size distribution
Refractive index matching that of the matrix across a range of temperatures and wavelengths
Strong adhesion to the matrix
Similar viscosity to the matrix at processing temperature

Cyclic olefin copolymer, an optically transparent plastic with low moisture uptake and solvent resistance among other useful properties, can be toughened effectively with a styrene-butadiene-styrene rubber having the above properties. The notched Izod strength more than doubled, from 21 J/m to 57 J/m, with an optical haze of 5%.

Improving polystyrene

Polystyrene generally has stiffness, transparency, processability, and dielectric qualities that make it useful. However, its low impact resistance at low temperatures makes catastrophic fracture failure when cold more likely. The most widely used version of toughened polystyrene is called high-impact polystyrene, or HIPS. Being cheap and easy to thermoform (see thermoforming), it is utilized for many everyday uses. HIPS is made by polymerizing styrene in a polybutadiene rubber solution. After the polymerization reaction begins, the polystyrene and rubber phases separate. Once phase separation begins, the two phases compete for volume until phase inversion occurs and the rubber can distribute throughout the matrix. The alternative, emulsion polymerization with styrene-butadiene-styrene or styrene-butadiene copolymers, allows fine-tuned manipulation of the particle size distribution; this method makes use of the core-shell architecture. In order to study the fracture microstructure of HIPS in a transmission electron microscope, it is necessary to stain one of the phases with a heavy metal, for example osmium tetroxide, which produces substantially different electron density between phases. Given a constant particle size, it is the cross-linking density that determines the toughness of a HIPS material. This can be assessed by exploiting the negative relationship between the cis-polybutadiene content of the rubber and the crosslink density, which can be measured with the swelling index; lower crosslink density leads to increased toughness. The generation of vast quantities of waste rubber from car tires has sparked interest in finding uses for this discarded rubber. The rubber can be turned into a fine powder, which can then be used as a toughening agent for polystyrene. However, poor miscibility between the waste rubber and polystyrene weakens the material.
This problem requires the use of a compatibilizer (see compatibilization) to reduce interfacial tension and ultimately make rubber toughening of polystyrene with waste rubber effective. A polystyrene/styrene-butadiene copolymer acts to increase the adhesion between the dispersed and continuous phases. References Plastics Polymers Materials science
Rubber toughening
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
4,261
[ "Applied and interdisciplinary physics", "Unsolved problems in physics", "Materials science", "Polymer chemistry", "nan", "Polymers", "Amorphous solids", "Plastics" ]
16,085,364
https://en.wikipedia.org/wiki/Organisms%20involved%20in%20water%20purification
Most organisms involved in water purification originate from the waste, wastewater or water stream itself or arrive as resting spores of some form from the atmosphere. In a very few cases, mostly associated with constructed wetlands, specific organisms are planted to maximise the efficiency of the process. Role of biota Biota are an essential component of most sewage treatment processes and many water purification systems. Most of the organisms involved are derived from the waste, wastewater or water stream itself or from the atmosphere or soil water. However some processes, especially those involved in removing very low concentrations of contaminants, may use engineered ecosystems created by the introduction of specific plants and sometimes animals. Some full-scale sewage treatment plants also use constructed wetlands to provide treatment. Pollutants in wastewater Pathogens Parasites, bacteria and viruses may be injurious to the health of people or livestock ingesting the polluted water. These pathogens may have originated from sewage or from domestic or wild bird or mammal feces. Pathogens may be killed by ingestion by larger organisms, by oxidation, by infection by phages or by irradiation by ultraviolet sunlight, unless that sunlight is blocked by plants or suspended solids. Suspended solids Particles of soil or organic matter may be suspended in the water. Such materials may give the water a cloudy or turbid appearance. The anoxic decomposition of some organic materials may give rise to obnoxious or unpleasant smells as sulphur-containing compounds are released. Nutrients Compounds containing nitrogen, potassium or phosphorus may encourage the growth of aquatic plants and thus increase the available energy in the local food web. This can lead to increased concentrations of suspended organic material. In some cases specific micro-nutrients may be required to allow the available nutrients to be fully utilised by living organisms. In other cases, the presence of specific chemical species may produce toxic effects that limit the growth and abundance of living matter. Metals Many dissolved or suspended metal salts exert harmful effects in the environment, sometimes at very low concentrations. Some aquatic plants are able to remove very low metal concentrations, with the metals ending up bound to clay or other mineral particles. Organisms Saprophytic bacteria and fungi can convert organic matter into living cell mass, carbon dioxide, water and a range of metabolic by-products. These saprophytic organisms may then be preyed upon by protozoa, rotifers and, in cleaner waters, Bryozoa, which consume suspended organic particles including viruses and pathogenic bacteria. Clarity of the water may begin to improve as the protozoa are in turn consumed by rotifers and cladocera. Purifying bacteria, protozoa and rotifers must either be mixed throughout the water or have the water circulated past them to be effective. Sewage treatment plants mix these organisms as activated sludge or circulate water past organisms living on trickling filters or rotating biological contactors. Aquatic vegetation may provide similar surface habitat for purifying bacteria, protozoa and rotifers in a pond or marsh setting, although water circulation is often less effective. Plants and algae have the additional advantage of removing nutrients from the water; but some of those nutrients will be returned to the water when the plants die, unless the plants are removed from the water.
Because of the complex chemistry of phosphorus, much of this element is in an unavailable form unless decomposition creates the anoxic conditions which render the phosphorus available for re-uptake. Plants also provide shade, a refuge for fish, and oxygen for aerobic bacteria. In addition, fish can limit pests such as mosquitoes. Fish and waterfowl feces return waste to the water, and their feeding habits may increase turbidity. Cyanobacteria have the disadvantageous ability to add nutrients from the air to the water being purified and, in some cases, to generate toxins. The choice of organisms depends on the local climate, the species available and other factors. Indigenous species tend to be better adapted to the local environment. Macrophytes The choice of plants in engineered wetlands or managed lagoons depends on the purification requirements of the system, and this may involve plantings of varying plant species at a range of depths to achieve the required goal. Plants purify water by consuming excess nutrients and by providing surfaces upon which a wide range of other purifying organisms can live. They are also effective oxygenators in sunlight. They also have the ability to translocate chemicals between their submerged foliage and their root systems, which is of significance in engineered wetlands designed to detoxify waste waters. Plants that have been used in temperate climates include Nymphaea alba, Phragmites australis, Sparganium erectum, Iris pseudacorus, Schoenoplectus lacustris and Carex acutiformis. Where oxygenation is a critical requirement, Stratiotes aloides, Hydrocharis morsus-ranae, Acorus calamus, Myriophyllum species and Elodea have been used. Hydrocharis morsus-ranae and Nuphar lutea have been used where shade and cover are required. Fish Fish are frequently the top-level predators in a managed treatment ecosystem and in some cases may simply be a monoculture of herbivorous species. Multi-species fisheries require careful management and may involve a range of fish species, including bottom-feeders and predatory species, to limit the population growth of the herbivorous fish. Rotifers Rotifers are microscopic complex organisms and are filter feeders, removing fine particulate matter from water. They occur naturally in aerobic lagoons, activated sludge processes, trickling filters and final settlement tanks, and are a significant factor in removing suspended bacterial cells and algae from the water column. Annelids Annelid worms are essential to the effective operation of trickling filters, helping to remove excess biomass and enhancing natural sloughing of the biofilm. Supernumerary worms are very commonly found in the drainage troughs around trickling filters and in the final settlement sludge. Annelids also play a key role in lagoon treatment systems and in the effective working of engineered wetlands. In this environment, worms are a principal force in mixing the upper few centimetres of the sediment layer, exposing organic material to both oxidative and anoxic environments and aiding the complete breakdown of most organics. They are also a key component of the food chain, transferring energy upwards to fish and aquatic birds.
Protozoa The range of protozoan species found is very wide, but may include species of the following genera: Amoeba, Arcella, Blepharisma, Didinium, Euglena, Hypotrich, Paramecium, Suctoria, Stylonychia and Vorticella. Insects Insects found in these systems include Chironomidae (bloodworm) larvae, Podura aquatica (the water springtail) and Psychodidae (drain fly or filter fly) larvae. Bacteria Bacteria are probably the most significant group of organisms involved in water purification and are ubiquitous in all biological purification environments. Some, such as Sphaerotilus natans, are typically associated with grossly polluted waters, but even in such environments the bacteria are degrading the organic material present. See also Aquatic plant Water purification Treatment pond Detoxification Sources Fair, Gordon Maskew; Geyer, John Charles & Okun, Daniel Alexander. Water and Wastewater Engineering (Volume 2). John Wiley & Sons (1968). Hammer, Mark J. Water and Waste-Water Technology. John Wiley & Sons (1975). Metcalf & Eddy. Wastewater Engineering. McGraw-Hill (1972). Notes Anaerobic digestion Sewerage Water technology Water pollution Water treatment
Organisms involved in water purification
[ "Chemistry", "Engineering", "Environmental_science" ]
1,547
[ "Water treatment", "Water pollution", "Sewerage", "Anaerobic digestion", "Environmental engineering", "Water technology" ]
16,085,372
https://en.wikipedia.org/wiki/Processing%20amplifier
A processing amplifier, commonly called a ProcAmp, is used to alter, adjust or clean up video or audio signal components or parameters in real time. Form factor Broadcast professionals prefer to use hardware rack-mountable ProcAmps, which help them make video broadcast-safe by correcting video inconsistencies. ProcAmps may also be chip-based, as part of larger multi-purpose devices in professional environments. Software ProcAmps are also available as code embedded in media players such as Windows Media Player, VLC and KMPlayer, or in codecs such as ffdshow. Software ProcAmps can process media on either the CPU or the GPU. Video ProcAmp Video ProcAmps can be used for processing standard-definition 525/30 (NTSC) or 625/25 (PAL) signals, or high-definition video signals. ProcAmps can process signals ranging from analog composite to SDI video. Common ProcAmp controls: Brightness (luminance), Contrast (gain), Saturation (amplitude), Hue (phase). Common ProcAmp features: regenerating sync and color burst, adjusting sync amplitude, boosting low-light-level video, reducing video washout, and chroma clipping. See also Video processing Broadcast-safe External links More information about TV standards References Broadcast engineering Television technology ITU-R recommendations
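The four standard controls listed above map onto simple operations in Y'CbCr space: brightness is an offset on luma, contrast a gain on luma, saturation a gain on the chroma pair, and hue a rotation of the chroma phase. The following sketch assumes 8-bit studio-swing video; the function and its parameter names are illustrative, not taken from any particular ProcAmp product:

```python
import math

def clamp(x, lo, hi):
    """Clip a value into the legal ('broadcast safe') range."""
    return max(lo, min(hi, x))

def proc_amp(y, cb, cr, brightness=0.0, contrast=1.0, saturation=1.0, hue_deg=0.0):
    """Apply ProcAmp-style adjustments to one 8-bit studio-swing Y'CbCr pixel.

    y is luma in [16, 235]; cb and cr are chroma in [16, 240], with 128 = zero chroma.
    """
    # Contrast is a gain applied to luma about black level; brightness is a flat offset.
    y_out = (y - 16.0) * contrast + 16.0 + brightness

    # Treat the chroma pair as a vector centred on zero:
    # hue rotates its phase, saturation scales its amplitude.
    u, v = cb - 128.0, cr - 128.0
    h = math.radians(hue_deg)
    u_out = saturation * (u * math.cos(h) - v * math.sin(h))
    v_out = saturation * (u * math.sin(h) + v * math.cos(h))

    return (clamp(y_out, 16, 235),
            clamp(u_out + 128.0, 16, 240),
            clamp(v_out + 128.0, 16, 240))

# Example: brighten slightly, desaturate, and rotate hue by 10 degrees.
print(proc_amp(100, 90, 160, brightness=5, saturation=0.8, hue_deg=10.0))
```

The final clamp step is what makes the output "broadcast safe" in the sense used above: no sample leaves the legal studio-swing range.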
Processing amplifier
[ "Technology", "Engineering" ]
258
[ "Information and communications technology", "Broadcast engineering", "Electronic engineering", "Television technology" ]
16,085,921
https://en.wikipedia.org/wiki/National%20Environmental%20Engineering%20Research%20Institute
The National Environmental Engineering Research Institute (NEERI) in Nagpur was originally established in 1958 as the Central Public Health Engineering Research Institute (CPHERI). It has been described as the "premier and oldest institute in India." It is an institution listed on the Integrated Government Online Directory. It operates under the aegis of the Council of Scientific and Industrial Research (CSIR), based in New Delhi. Indira Gandhi, the Prime Minister of India at the time, renamed the Institute NEERI in 1974. The Institute primarily focused on human health issues related to water supply, sewage disposal, diseases, and industrial pollution. NEERI operates as a laboratory in the field of environmental science and engineering and is one of the constituent laboratories of the CSIR. In addition to its Nagpur headquarters, the institute has zonal laboratories in Chennai, Delhi, Hyderabad, Kolkata, and Mumbai. NEERI operates under the Ministry of Science and Technology of the Indian government. NEERI is a partner organization of India's POP National Implementation Plan (NIP). History In 1958, the Central Public Health Engineering Research Institute (CPHERI) was established. It was created by the Council of Scientific and Industrial Research (CSIR). In 1974, after participating in the United Nations Inter-Governmental Conference on Human Environment and with its renaming by Prime Minister Indira Gandhi, CPHERI became the National Environmental Engineering Research Institute (NEERI). NEERI has its headquarters in Nagpur and five zonal laboratories in Mumbai, Kolkata, Delhi, Chennai, and Hyderabad. The study for the location of a new municipal solid waste landfill site in Kolkata used the institute's 2005 guidelines. During the COVID-19 crisis, the institute developed a saline gargling sample method to trace the disease. Fields Environmental monitoring Since 1978, the institute has operated a nationwide air quality monitoring network, sponsored by the Central Pollution Control Board (CPCB) since 1990. Receptor modelling techniques are used. CSIR-NEERI is involved in the design and development of air pollution control systems. The institute has also developed a water purification system called 'NEERI ZAR'. In the 1960s and 1970s, the Institute developed guidelines for defluoridation techniques; these have sometimes formed a departure point for the development of other techniques. The Institute tests samples for research on defluoridation and the measurement of particulate matter in air. The institute has been entrusted by the courts with providing inspections within the current environmental and legal framework. Skill development The institute has set up a Centre for Skill Development, offering certificate courses in the areas of environmental impact and water quality assessment. Prof. V. Rajagopalan (a Vice President of the World Bank in 1993) had, in his time with the Institute (1955–65), created a national program for water industry professionals. Graduate programmes were established in Public Health Engineering at the Guindy Engineering College, Madras, Roorkee Engineering University, and VJTI in Mumbai. Assessment of research In 1989–2013, 1,236 publications of the National Environmental Engineering Research Institute were assessed. The institute's technique for enrichment of ilmenite with titanium dioxide has been evaluated externally.
Patent development The institute has national and international patents for a method to manufacture zeolite-A using fly ash instead of sodium silicate and aluminate. Selected publications Kumar, A., et al. "Sustainability in Environmental Engineering and Science." (2021): 253–262. Sharma, Abhinav. "Effect of ozone pretreatment on biodegradability enhancement and biogas production of biomethane distillery effluent." Sharma, Asheesh, et al. "NutriL-GIS: A Tool for Assessment of Agricultural Runoff and Nutrient Pollution in a Watershed." National Environmental Engineering Research Institute (NEERI), India (2010). Sinnarkar, S. N., and Rajesh Kumar Lohiya. "External user in an environmental research library." Annals of Library and Information Studies 55.4 (2008): 275–280. "Greywater Reuse in Rural Schools: Guidance Manual." National Environmental Engineering Research Institute (2007). Thawale, P. R., Asha A. Juwarkar, and S. K. Singh. "Resource conservation through land treatment of municipal wastewater." Current Science (2006): 704–711. Rao, Padma S., et al. "Performance evaluation of a green belt in a petroleum refinery: a case study." Ecological Engineering 23.2 (2004): 77–84. Murty, K. S. "Groundwater in India." Studies in Environmental Science. Vol. 17. Elsevier, 1981. 733–736. References Research institutes in Nagpur Council of Scientific and Industrial Research Environmental engineering Science and technology in Maharashtra Ministry of Science and Technology (India) Research institutes established in 1958 1958 establishments in Bombay State
National Environmental Engineering Research Institute
[ "Chemistry", "Engineering" ]
1,007
[ "Chemical engineering", "Civil engineering", "Environmental engineering" ]
16,086,660
https://en.wikipedia.org/wiki/Advisory%20Council%20for%20Aeronautics%20Research%20in%20Europe
The Advisory Council for Aeronautics Research in Europe (ACARE) is a European advisory body that aims to improve the competitiveness and sustainability of the European Union in the field of aeronautics. It is a public-private partnership between the Directorate-General for Transport and Energy of the European Commission and industry leaders. ACARE was launched at the Paris Airshow in June 2001 and has about 40 members. Overview In 2000 the Belgian European Commissioner for Research, Philippe Busquin, invited a number of aviation industry leaders to produce a strategy detailing how the European Commission could help Europe's aviation industry become more competitive. This "Group of Personalities" published "European Aeronautics: A Vision for 2020" in January 2001. The document recommended the creation of ACARE, which would define the content of a long-term strategy to create a coherent European aviation research network. This network would bring together industry leaders, government actors from the member states of the European Union, and the European Commission. Following the publication of ACARE's "Strategic Research Agenda", the Commission launched a number of aeronautical research bodies and initiatives as part of the 6th and 7th Framework Programmes for Research and Technological Development and the Horizon 2020 Research and Innovation Programme. Prominent examples of these include the Clean Sky Joint Undertaking, a public-private partnership coordinating and funding research projects that aim to mitigate the environmental impact of aviation by developing more fuel-efficient technologies, and the SESAR Joint Undertaking, which aims to improve the efficiency of the European air traffic management system. References External links ACARE Clean Sky SESAR Joint Undertaking Creating Innovative Air transport Technologies for Europe Aeronautics organizations European Union and science and technology International aviation organizations Pan-European trade and professional organizations Science and technology in Europe Transport and the European Union
Advisory Council for Aeronautics Research in Europe
[ "Engineering" ]
358
[ "Aeronautics organizations" ]
16,086,895
https://en.wikipedia.org/wiki/Independent%20electron%20approximation
In condensed matter physics, the independent electron approximation is a simplification used in complex systems consisting of many electrons, which approximates the electron–electron interaction in crystals as null. It is a requirement for both the free electron model and the nearly-free electron model, where it is used alongside Bloch's theorem. In quantum mechanics, this approximation is often used to simplify a quantum many-body problem into single-particle approximations. While this simplification holds for many systems, electron–electron interactions may be very important for certain properties in materials. For example, the theory covering much of superconductivity is BCS theory, in which the mechanism behind superconductivity is the mutual attraction of pairs of electrons, termed "Cooper pairs". One major effect of electron–electron interactions is that electrons distribute around the ions so that they screen the ions in the lattice from other electrons. Quantum treatment For an example of the independent electron approximation's usefulness in quantum mechanics, consider an N-atom crystal with one free electron per atom (each with atomic number Z). Neglecting spin, the Hamiltonian of the system takes the form

$$\hat{H} = \sum_{i=1}^{N} \left( -\frac{\hbar^2}{2 m_e} \nabla_i^2 - \sum_{I=1}^{N} \frac{Z e^2}{\left|\mathbf{r}_i - \mathbf{R}_I\right|} \right) + \frac{1}{2} \sum_{i \neq j} \frac{e^2}{\left|\mathbf{r}_i - \mathbf{r}_j\right|},$$

where $\hbar$ is the reduced Planck constant, $e$ is the elementary charge, $m_e$ is the electron rest mass, and $\nabla_i$ is the gradient operator for electron $i$. The capitalized $\mathbf{R}_I$ is the $I$th lattice location (the equilibrium position of the $I$th nucleus) and the lowercase $\mathbf{r}_i$ is the $i$th electron position. The first term in parentheses is called the kinetic energy operator, while the last two are simply the Coulomb interaction terms for electron–nucleus and electron–electron interactions, respectively. If the electron–electron term were negligible, the Hamiltonian could be decomposed into a set of N decoupled Hamiltonians (one for each electron), which greatly simplifies analysis. The electron–electron interaction term, however, prevents this decomposition by ensuring that the Hamiltonian for each electron includes terms for the position of every other electron in the system. If the electron–electron interaction term is sufficiently small, however, the Coulomb interaction terms can be approximated by an effective potential term, which neglects electron–electron interactions. This is known as the independent electron approximation. Bloch's theorem relies on this approximation by setting the effective potential term to a periodic potential of the form $U(\mathbf{r})$ that satisfies $U(\mathbf{r} + \mathbf{R}) = U(\mathbf{r})$, where $\mathbf{R}$ is any Bravais lattice vector (see Bloch's theorem). This approximation can be formalized using methods from the Hartree–Fock approximation or density functional theory. See also Strongly correlated material References Omar, M. Ali (1994). Elementary Solid State Physics, 4th ed. Addison Wesley. Electron
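To make the decomposition explicit, the effective-potential step described above replaces the two Coulomb sums with a one-electron potential, after which the Hamiltonian separates. The following is a standard textbook rendering of that step, not a formula quoted from this article's sources:

```latex
% Effective one-electron potential replaces the Coulomb sums,
% so the N-electron Hamiltonian decouples:
\hat{H} \;\approx\; \sum_{i=1}^{N} \hat{h}_i,
\qquad
\hat{h}_i \;=\; -\frac{\hbar^2}{2 m_e}\,\nabla_i^2 + U_{\mathrm{eff}}(\mathbf{r}_i),
\qquad
U_{\mathrm{eff}}(\mathbf{r} + \mathbf{R}) = U_{\mathrm{eff}}(\mathbf{r}).

% Each one-electron problem is then solved independently; with a
% periodic effective potential, the eigenfunctions take the Bloch form:
\hat{h}\,\psi_{n\mathbf{k}}(\mathbf{r}) = \varepsilon_{n\mathbf{k}}\,\psi_{n\mathbf{k}}(\mathbf{r}),
\qquad
\psi_{n\mathbf{k}}(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}}\, u_{n\mathbf{k}}(\mathbf{r}).
```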
Independent electron approximation
[ "Chemistry" ]
552
[ "Electron", "Molecular physics" ]
16,087,261
https://en.wikipedia.org/wiki/Hydrological%20code
A hydrological code or hydrologic unit code is a sequence of numbers or letters (a geocode) that identifies a hydrological unit or feature, such as a river, river reach, lake, or area like a drainage basin (also called watershed in North America) or catchment. One system, developed by Arthur Newell Strahler and known as the Strahler stream order, ranks streams based on a hierarchy of tributaries. Each segment of a stream or river within a river network is treated as a node in a tree, with the next segment downstream as its parent. When two first-order streams come together, they form a second-order stream. When two second-order streams come together, they form a third-order stream, and so on. Another example is the system of assigning IDs to watersheds devised by Otto Pfafstetter, known as the Pfafstetter Coding System or the Pfafstetter System. Drainage areas are delineated in a hierarchical fashion, with "level 1" watersheds at continental scales, subdivided into smaller level 2 watersheds, which are divided into level 3 watersheds, and so on. Each watershed is assigned a unique number, called a Pfafstetter Code, based on its location within the overall drainage system. Europe A comprehensive coding system is in use in Europe. This system codes from the ocean down to the so-called primary catchment. The system defines a set of oceans or endorheic systems, each identified by a letter. These systems are subdivided into a maximum of 9 seas, numbered 1 to 9. Seas lying far from the ocean, for example the Black Sea, receive a higher number. The seas are delimited using the definitions made by the International Hydrographic Organization in 1953. The coasts of these seas are defined clockwise, from north-west to south-east, starting from the strait where the sea connects to the ocean or to the other seas. Subsequently, every watershed along this coast is assigned a number using the Pfafstetter Coding System. This means that the four largest watersheds are selected and receive the numbers 2, 4, 6, and 8. The watersheds in between the large systems receive the numbers 3, 5, and 7. Numbers 1 and 9 are used for the small watersheds on the edges of the strait. The smaller systems can subsequently be numbered recursively or kept together for grouping purposes. Landmasses (continents and islands) are also numbered in a logical manner, clockwise along the sea. For Europe, which contains many inner seas, this feature helps to read the relative location of a hydrological object in the sea. United States See also Hydrologic Unit Modeling for the United States Water Resource Region References External links Hydrology Limnology Water and the environment Geocodes
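The Strahler ordering rule described above (the order increases only where two tributaries of equal, maximal order meet) is straightforward to compute over a stream network stored as a tree. A minimal sketch; the dictionary encoding of the network is an illustrative choice, not part of the standard:

```python
def strahler_order(child_orders):
    """Strahler order of a stream segment given its tributaries' orders.

    `child_orders` is a list of the Strahler orders of the segments that
    join to form this one; a headwater segment (no tributaries) has order 1.
    """
    if not child_orders:
        return 1
    top = max(child_orders)
    # The order increases only when two or more tributaries share the maximum.
    return top + 1 if child_orders.count(top) >= 2 else top

def order_of(node, tributaries):
    """Recursively compute the order of `node` in a network given as a
    mapping {segment: [upstream segments]}."""
    return strahler_order([order_of(t, tributaries) for t in tributaries.get(node, [])])

# Example: two headwaters join to form B; B and a third headwater join at the outlet.
network = {"outlet": ["B", "C"], "B": ["A1", "A2"]}
print(order_of("outlet", network))  # B has order 2, C order 1 -> the outlet stays order 2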
Hydrological code
[ "Chemistry", "Engineering", "Environmental_science" ]
559
[ "Hydrology", "Environmental engineering" ]
16,087,606
https://en.wikipedia.org/wiki/History%20of%20steam%20road%20vehicles
The history of steam road vehicles comprises the development of vehicles powered by a steam engine for use on land and independent of rails, whether for conventional road use, such as the steam car and steam waggon, or for agricultural or heavy haulage work, such as the traction engine. The first experimental vehicles were built in the 18th and 19th centuries, but it was not until after Richard Trevithick had developed the use of high-pressure steam, around 1800, that mobile steam engines became a practical proposition. The first half of the 19th century saw great progress in steam vehicle design, and by the 1850s it was viable to produce them on a commercial basis. This progress was dampened by legislation which limited or prohibited the use of steam-powered vehicles on roads. Nevertheless, the 1880s to the 1920s saw continuing improvements in vehicle technology and manufacturing techniques, and steam road vehicles were developed for many applications. In the 20th century, the rapid development of internal combustion engine technology led to the demise of the steam engine as a source of propulsion of vehicles on a commercial basis, with relatively few remaining in use beyond the Second World War. Many of these vehicles were acquired by enthusiasts for preservation, and numerous examples are still in existence. In the 1960s, the air pollution problems in California gave rise to a brief period of interest in developing and studying steam-powered vehicles as a possible means of reducing the pollution. Apart from interest by steam enthusiasts, occasional replica vehicles, and experimental technology, no steam vehicles are in production at present. Early steam-powered vehicles, which were uncommon but not rare, had considerable disadvantages as seen from a 21st-century viewpoint. They were slow to start, as water had to be boiled to generate the steam. They used a dirty fuel (coal) and put out dirty smoke. Fuel was bulky and had to be shoveled onto the vehicle and then into the firebox. As with a furnace, hot ash had to be removed and disposed of. The engine needed to be replenished with water in addition to fuel. Most vehicles had metal wheels and less than excellent traction. They were heavy. In most cases the user had to do their own maintenance. Top speed was low and acceleration was poor. Steam vehicle technology evolved over time. Later steam vehicles used a cleaner liquid fuel (kerosene), were fitted with rubber tyres and condensers to recover water, and were lighter overall. These improvements were not enough to keep pace with internal-combustion engines, however, which ultimately out-competed steam and remained dominant for the rest of the 20th century. Early pioneers Early research on the steam engine before 1700 was closely linked to the quest for self-propelled vehicles and ships; the first practical applications, from 1712, were stationary plants working at very low pressure, which entailed engines of very large dimensions. The size reduction necessary for road transport meant an increase in steam pressure, with all the attendant dangers due to the inadequate boiler technology of the period. A strong opponent of high-pressure steam was James Watt, who, along with Matthew Boulton, did all he could to dissuade William Murdoch from developing and patenting his steam carriage, built and operated in model form in 1784. In 1791 Murdoch built a larger steam carriage, which he had to abandon to do other work.
During the latter part of the 18th century, there were numerous attempts to produce self-propelled steerable vehicles. Many remained in the form of models. Progress was dogged by many problems inherent to road vehicles in general, such as adequate road surfaces, a suitable power plant giving steady rotative motion, tyres, vibration-resistant bodywork, braking, suspension and steering, among other issues. The extreme complexity of these issues can be said to have hampered progress over more than a hundred years, as much as hostile legislation did. Verbiest steam carriage Ferdinand Verbiest is suggested to have built what may have been the first steam carriage in about 1679, but very little concrete information on this is known to exist. It was not designed to carry a driver or goods, as it was a small-scale vehicle. It also seems that the Belgian vehicle served as an inspiration for its successors, the Italian Grimaldi (early 1700s) and the French Nolet (1748) steam carriages. Cugnot "Fardier à vapeur" Nicolas-Joseph Cugnot's "machine à feu pour le transport de wagons et surtout de l'artillerie" ("fire engine for transporting wagons and especially artillery") was built in two versions, one in 1769 and one in 1771, for use by the French Army. This was the first steam wagon that was not a toy and that was known to exist. Cugnot's fardier, a term usually applied to a massive two-wheeled cart for exceptionally heavy loads, was intended to be capable of transporting 4 tonnes (3.9 tons). The vehicle was of tricycle layout, with two rear wheels and a steerable front wheel controlled by a tiller. There is considerable evidence from the period that this vehicle actually ran, making it probably the first to do so; however, it remained a short-lived experiment due to inherent instability and the vehicle's failure to meet the Army's specified performance level. Symington steam carriage In 1786 William Symington built a steam carriage. Fourness and Ashworth steam car A British patent, No. 1674 of December 1788, was granted for a steam car by Fourness and Ashworth. Trevithick steam carriage In 1801 Richard Trevithick constructed an experimental steam-driven vehicle (Puffing Devil) which was equipped with a firebox enclosed within the boiler, with one vertical cylinder, the motion of the single piston being transmitted directly to the driving wheels by means of connecting rods. It was reported as weighing 1520 kg fully loaded. During its first trip it was left unattended and "self-destructed". Trevithick soon built the London Steam Carriage that ran successfully in London in 1803, but the venture failed to attract interest and soon folded. In the context of Trevithick's vehicle, an English writer by the name of "Mickleham" in 1822 coined the term steam engine: "It exhibits in construction the most beautiful simplicity of parts, the most sagacious selection of appropriate forms, the most convenient and effective arrangement and connexion uniting strength with elegance, the necessary solidity with the greatest portability, possessing unlimited power with a wonderful pliancy to accommodate it to the varying resistance: it may indeed be called The steam engine." Evans steam-powered amphibious craft In 1805 Oliver Evans built the Oruktor amphibolos (literally "amphibious digger"), a steam-powered, flat-bottomed dredger that he modified to be self-propelled on both water and land.
It is widely believed to be the first amphibious vehicle and the first steam-powered road vehicle to run in the United States. However, no designs for the machine survive, and the only accounts of its achievements come from Evans himself. Later analysis of Evans's descriptions suggests that the engine was unlikely to have been powerful enough to move the vehicle either on land or water, and that the chosen route for its demonstration would have had the benefit of gravity, river currents and tides to assist with the vehicle's progress. The dredger was not a success and, after a few years lying idle, was dismantled for parts. Summers and Ogle steam carriage In around 1830 or 1831, Summers and Ogle, based at the Iron Foundry, Millbrook, Southampton, made two three-wheeled steam carriages. In 1831 the firm's Nathaniel Ogle gave evidence on the steam carriage to the Select Committee of the House of Commons on Steam Carriages. In 1832 one of their steam carriages travelled via Oxford to Birmingham and Liverpool. A June 1833 newspaper report described a demonstration in London. Early steam carriage services More commercially successful for a time than Trevithick's carriage were the steam carriage services operated in England in the 1830s, principally by associates of Sir Goldsworthy Gurney and by Walter Hancock, among others, and in Scotland by John Scott Russell. However, the heavy road tolls imposed by the Turnpike Acts discouraged steam road vehicles and for a short time allowed the continued monopoly of horse traction, until railway trunk routes became established in the 1840s and 1850s. Sir James C. Anderson and his engineering partner Jasper Wheeler Rogers were the first to bring steam-propelled vehicles to Ireland. Rogers and Anderson created their versions of these devices in the 1830s and early 1840s, when they advocated an island-wide conveyance network that would use Ireland's mail coach roads. An 1838 Cork Southern Reporter article on Anderson's "steam drag, or carriage for common roads" recounts how Anderson and his father (both of Buttevant Castle) spent "a fortune in building twenty-nine unsuccessful carriages to succeed in the thirtieth." Jasper Rogers built his Irish steam-driven cars in a former flint-glass factory, Fort Chrystal, located on what is now known as Dublin's East Wall. Alongside their interest in improving the Irish conveyance of goods and people, Rogers and Anderson particularly advocated individual steam-propelled vehicles because the operators, road network staff, and work crews needed to maintain such a system were much more encompassing than those used by a railway system alone, at a time when the two men were trying to maximize Irish wage employment. They could see that their immediate competitor, the railway, would greatly diminish labor needs within Ireland's transportation infrastructures. Similarly, a national railway system would contract, rather than expand, inner-island travel destinations. Rogers' and Anderson's steam-vehicle system called for numerous way-stations for refueling and supplying fresh water, and at the same time, these stations could house a "road police" as well as telegraph depots. Essentially most Irish villages, no matter how remote, would participate in this grand steam-vehicle network. Locals would be able to earn extra money by carrying rocks to the fuel stations, rocks that would be used to build, repair, or maintain the roadways. In addition, every village would require a local road repair crew.
Military application of steam road vehicles During the Crimean War (1853–1856), a traction engine was used to pull multiple open trucks. In the 1870s many armies experimented with steam tractors pulling road trains of supply wagons. By 1898, steam traction engine trains with up to four wagons were employed in military manoeuvres in England. In 1900 John Fowler & Co. provided armoured road trains for use by the British forces in the Second Boer War. Victorian age of steam Although engineers developed ingenious steam-powered road vehicles, these did not enjoy the same level of acceptance and expansion as steam power at sea and on the railways in the middle and late 19th century of the "age of steam". In 1841 Ransomes built a portable steam engine, that is, a farm steam engine on wheels hauled from farm to farm by horses. The next year Ransomes automated it and had the engine drive itself to farms. Harsh legislation virtually eliminated mechanically propelled vehicles from the roads of Great Britain for 30 years: the Locomotive Act 1861 imposed restrictive speed limits on "road locomotives" both in towns and cities and in the country. The Locomotives Act 1865 (the famous Red Flag Act) further reduced the speed limits to 4 mph in the country and just 2 mph in towns and cities, additionally requiring a man bearing a red flag (or a red lantern during the hours of darkness) to precede every vehicle. At the same time, the act gave local authorities the power to specify the hours during which any such vehicle might use the roads. The sole exceptions were street trams, which from 1879 onwards were authorised under licence from the Board of Trade. In France the situation was radically different, to the extent that an 1861 ministerial ruling formally authorised the circulation of steam vehicles on ordinary roads. Whilst this led to considerable technological advances throughout the 1870s and 1880s, steam vehicles nevertheless remained a rarity. To an extent, competition from the successful railway network reduced the need for steam vehicles. From the 1860s onwards, attention turned more to the development of various forms of traction engine, which could be used either for stationary work such as sawing wood and threshing, or for transporting outsize loads too voluminous to go by rail. Steam trucks were also developed, but their use was generally confined to the local distribution of heavy materials such as coal and building materials from railway stations and ports. Rickett of Buckingham steam car In 1854 Thomas Rickett of Buckingham built the first of several steam cars, and in 1858 he built the second. Rather than a typical carriage, it resembled a small locomotive. It consisted of a steam engine mounted on three wheels: two large driven rear wheels and one smaller front wheel by which the vehicle was steered. The machine weighed 1.5 tonnes, somewhat lighter than his first car. The whole was driven by a chain drive, and a maximum speed of twelve miles per hour was reached. Two years later, in 1860, Rickett built a similar but heavier vehicle. This model incorporated spur-gear drive instead of chain. In his final design, resembling a railway locomotive, the cylinders were coupled directly to cranks outside the driving-axle. Roper steam car Sylvester H. Roper drove around Boston, Massachusetts on a steam car he invented in 1863. One of his 1863 steam cars went to the Henry Ford Museum, where in 1972 it was the oldest car in the collection.
Around 1867–1869 he built a steam velocipede, which may have been the first motorcycle. Roper died in 1896 of heart failure while testing a later version of his steam motorcycle. Manzetti steam car In 1864 the Italian inventor Innocenzo Manzetti built a road steamer. It had the boiler at the front and a single-cylinder engine. Holt Road steamer H. P. Holt constructed a small road steamer in 1866. Able to reach a speed of twenty miles per hour on level roads, it had a vertical boiler at the rear and two separate twin-cylinder engines, each of which drove one rear wheel by means of a chain and sprocket wheels. Taylor Steam buggy In 1867 the Canadian jeweller Henry Seth Taylor demonstrated his four-wheeled steam buggy at the Stanstead Fair in Stanstead, Quebec, and again the following year. The basis of the buggy, which he began building in 1865, was a high-wheeled carriage with bracing to support a two-cylinder steam engine mounted on the floor. Michaux-Perreaux Steam velocipede Around 1867–1869 in France, a Louis-Guillaume Perreaux commercial steam engine was attached to a Pierre Michaux metal-framed velocipede, creating the Michaux-Perreaux steam velocipede. Along with the Roper steam velocipede, it might have been the first motorcycle. The only Michaux-Perreaux steam velocipede made is in the Musée de l'Île-de-France, Sceaux, and was included in The Art of the Motorcycle exhibition in New York in 1998. Knight of Farnham steam carriage In 1868–1870 John Henry Knight of Farnham built a four-wheeled steam carriage which originally had only a single-cylinder engine. Catley and Ayres of York steam car In 1869 Catley and Ayres built a small three-wheeled vehicle propelled by a horizontal twin-cylinder engine which drove the rear axle by spur gearing; only one rear wheel was driven, the other turning freely on the axle. A vertical fire-tube boiler was mounted at the rear, with a polished copper casing over the firebox and chimney; the boiler was enclosed in a mahogany casing. The weight was only 19 cwt, and the front wheel was used for steering. Thomson of Edinburgh Road steamer In 1869 the road steamer built by Robert William Thomson of Edinburgh became famous because its wheels were shod with heavy solid rubber tyres. Thomson's first road steamers, manufactured in his own small workshop in Leith, were fitted with three wheels, the small single wheel at the front being directly below the steering wheel. The thick tyres were corrugated internally and adhered to the wheel by friction. He then turned to T. M. Tennant and Co of the Bowershall Iron and Engine Works, Leith, for their manufacture, but as they could not keep up with demand, in 1870 some of the production was moved to Robey & Co of Lincoln. Over the next two years Robeys built 32 of these vehicles, in two versions. A large proportion were exported. These included one to Italy (for an experiment in public transport in Bergamo), three to Austria (Vienna) and others to Turkey, Australia, New Zealand, India, Ireland, Chile, Russia (Moscow) and Greece. A further Thomson steam vehicle was built in 1877, but apart from traction engines, Robeys appear to have discontinued making road steam vehicles until 1904, when they started manufacturing steam road lorries. Kemna of East-Prussia Road steamer In 1871 Julius Kemna, a German industrialist, started selling English steam threshing systems. A couple of years later Kemna started producing various other steam-powered vehicles (such as road rollers), as well as high-quality steam ploughing engines and road steamers.
Randolph of Glasgow Steam bus In 1872 Charles Randolph of Glasgow built a steam coach which, despite its considerable weight and length, had only a modest maximum speed. Two vertical twin-cylinder engines were independent of one another, and each drove one of the rear wheels by spur gearing. The entire vehicle was enclosed and fitted with windows all around, carried six people, and even had two driving mirrors for observing traffic approaching from behind, the earliest recorded instance of such a device. Bollée Steam bus From 1873 to 1883, Amédée Bollée of Le Mans built a series of steam-powered passenger vehicles able to carry 6 to 12 people, with such names as La Rapide, La Nouvelle, La Marie-Anne, La Mancelle and L'Obéissante. In L'Obéissante, the boiler was mounted behind the passenger compartment, with the engine at the front of the vehicle, driving the differential through a shaft with chain drive to the rear wheels. The driver sat behind the engine and steered by means of a wheel mounted on a vertical shaft. The layout more closely resembled much later motor cars than other steam vehicles. Moreover, in 1873 it had independent suspension on all four corners. Grenville of Glastonbury steam car In 1875–1880 R. Neville Grenville of Glastonbury constructed a three-wheeled steam vehicle. This vehicle is still in existence, preserved for many years in the Bristol Museum & Art Gallery but since 2012 at the National Motor Museum, Beaulieu. Cederholm of Sweden steam car In 1892 the painter Joens Cederholm and his brother André, a blacksmith, designed their first car, a two-seater, introducing a condenser in 1894. It was not a success. Shearer of South Australia steam car Starting in 1894, David Shearer designed and built the first car in Australia, which ran on the streets of Adelaide, South Australia. The boiler was his own design, a horizontal boiler of the semi-flash type. Steering was by a tiller-type design, and a photograph of the vehicle shows it carrying eight passengers. A news article on the car includes a sectional drawing of the design. The car's first official road trial was in 1899. De Dion & Bouton Steam vehicles See steam tricycle. The development by Léon Serpollet of the flash steam boiler brought about the appearance of various diminutive steam tricycles and quadricycles during the late 1880s and early 1890s, notably by de Dion and Bouton. These successfully competed in long-distance races but soon met with stiff competition for public favour from the internal combustion engine cars being developed, notably by Peugeot, which quickly cornered most of the popular market. In the face of the flood of internal combustion cars, proponents of the steam car had to fight a long rearguard battle that was to last into modern times. Locomobile Company of America This American firm bought the patents from the Stanley brothers and built their steam buggies from 1898 to 1905. The Locomobile Company of America then went into building gasoline cars and lasted until the Depression. Stanley Motor Carriage Company In 1902 the twins Francis E. Stanley (1849–1918) and Freelan O. Stanley formed the Stanley Motor Carriage Company. They made famous models such as the 1906 Stanley Rocket, the 1908 Stanley K Raceabout and the 1923 Stanley Steam Car. Early to mid-20th century In 1906 the land speed record was broken by a Stanley steam car, piloted by Fred Marriott, at Ormond Beach, Florida. This annual week-long "Speed Week" was the forerunner of today's Daytona 500.
This record was not exceeded by any land vehicle until 1910, and stood as the steam-powered world speed record until 25 August 2009. Doble steam car Attempts were made to bring more advanced steam cars to the market, the most remarkable being the Doble steam car, which shortened start-up time very noticeably by incorporating a highly efficient monotube steam generator to heat a much smaller quantity of water, along with effective automation of burner and water-feed control. By 1923 Doble's steam cars could be started from cold with the turn of a key and driven off in 40 seconds or less. Paxton Phoenix steam car Abner Doble developed the Doble Ultimax engine for the Paxton Phoenix steam car, built by the Paxton Engineering Division of McCulloch Motors Corporation, Los Angeles. The project was eventually dropped in 1954. Decline of steam car development Steam cars became less popular after the adoption of the electric starter, which eliminated the need for risky hand cranking to start gasoline-powered cars. The introduction of assembly-line mass production by Henry Ford, which hugely reduced the cost of owning a conventional automobile, was also a strong factor in the steam car's demise, as the Model T was both cheap and reliable. Late 20th century See steam car#Air pollution, fuel crises, resurgence and enthusiasts. Renewed interest In 1968 renewed interest was shown, sometimes prompted by newly available techniques. Some of these designs used safer and more responsive water-tube boilers. A prototype car was built by Charles J. & Calvin E. Williams of Ambler, Pennsylvania. Other high-performance steam cars were built by Richard J. Smith of Midway City, California, and A. M. and E. Pritchard of Caulfield, Australia. Companies and organisations such as Controlled Steam Dynamics of Mesa, Arizona, General Motors, Thermo-Electron Corp. of Waltham, Massachusetts, and Kinetics Inc. of Sarasota, Florida, all built high-performance steam engines in the same period. Bill Lear also started work on a closed-circuit steam turbine to power cars and buses, and built a transit bus and converted a Chevrolet Monte Carlo sedan to use this turbine system. It used a proprietary working fluid dubbed Learium, possibly a chlorofluorocarbon similar to DuPont Freon. In 1970 a variant of the steam car was made by Wallace L. Minto, which used Ucon U-113 fluorocarbon as the working fluid instead of water. The car was called the Minto car. Land speed record On 25 August 2009 a team of British engineers from Hampshire ran their steam-powered car "Inspiration" at Edwards Air Force Base in the Mojave Desert, setting a new steam record averaged over two runs, driven by Charles Burnett III. The car, built from carbon fibre and aluminium, contained 12 boilers and an extensive run of steam tubing. Gallery See also Charles Dance (motorist) History of the automobile mainly covers later, internal combustion vehicles List of motorized trikes List of steam car makers Steam car Steam bus Steam engine Steam wagon The Steam House (Jules Verne novel) Timeline of motor vehicle brands References External links History of the Automobile – includes drawings of many early steam vehicles (Newton, Cugnot, Trevithick, Gurney, Hancock) including plan views Steamcar history – some early drawings, plus detail of Verbiest's toy and a related book title... Smithsonian library entry for a book about a model of Verbiest's 'toy', and Amazon entry for the same book by Horst O.
Hardenberg Road vehicles History of steam road vehicles Steam Steam road vehicles
History of steam road vehicles
[ "Physics" ]
4,980
[ "Power (physics)", "Steam power", "Physical quantities" ]
16,088,989
https://en.wikipedia.org/wiki/Nesprin
Nesprins (nuclear envelope spectrin repeat proteins) are a family of proteins found primarily in the outer nuclear membrane, as well as in other subcellular compartments. They contain a C-terminal KASH transmembrane domain and are part of the LINC complex (linker of nucleoskeleton and cytoskeleton), a protein network that connects the nuclear envelope (the membrane surrounding the nucleus) to the cytoskeleton outside the nucleus and to the nuclear lamina inside the nucleus. Nesprin-1 and -2 bind to actin filaments. Nesprin-3 binds to plectin, which is bound to the intermediate filaments, while nesprin-4 interacts with kinesin-1. Nesprin-mediated connections to the cytoskeleton provide mechanosensory functions in cells, as the absence or disruption of nesprin family members at the nuclear envelope interferes with the cell's ability to sense and respond to mechanical challenges. See also SYNE1 SYNE2 Spectrin repeat containing nuclear envelope family member 3 References Protein families
Nesprin
[ "Chemistry", "Biology" ]
241
[ "Biochemistry stubs", "Protein families", "Protein stubs", "Protein classification" ]
16,089,082
https://en.wikipedia.org/wiki/Aesthetic%20emotions
Aesthetic emotions are emotions that are felt during aesthetic activity or appreciation. These emotions may be of the everyday variety (such as fear, wonder or sympathy) or may be specific to aesthetic contexts. Examples of the latter include the sublime, the beautiful, and the kitsch. In each of these respects, the emotion usually constitutes only a part of the overall aesthetic experience, but may play a more or less definitive function for that state. Types Visual arts and film The relation between aesthetic emotions and other emotions is traditionally said to rely on the disinterestedness of the aesthetic experience (see Kant especially). Aesthetic emotions do not motivate practical behaviours in the way that other emotions do (such as how fear motivates avoidance behaviours). The capacity of artworks to arouse emotions such as fear is a subject of philosophical and psychological research. It raises problems such as the paradox of fiction, in which one responds with sometimes quite intense emotions to art, even whilst knowing that the scenario presented is fictional (see for instance the work of Kendall Walton). Another issue is the problem of imaginative resistance, which considers why we are able to imagine many far-fetched fictional truths but experience comparative difficulty imagining that different moral standards hold in a fictional world. This problem was first raised by David Hume, and was revived in current discussion by Richard Moran, Kendall Walton and Tamar Gendler (who introduced the term in its current usage in a 2000 article of the same name). Some forms of artwork seem to be dedicated to the arousal of particular emotions. For instance, horror films seek to arouse feelings of fear or disgust; comedies seek to arouse amusement or happiness; tragedies seek to arouse sympathy or sadness; and melodramas try to arouse pity and empathy. Music In the philosophy of music, scholars have argued over whether instrumental music such as symphonies consists simply of abstract arrangements and patterns of musical pitches ("absolute music"), or whether instrumental music depicts emotional tableaux and moods ("program music"). Despite the assertions of philosophers advocating the "absolute music" argument, the typical symphony-goer does interpret the notes and chords of the orchestra emotionally; the opening of a Romantic-era symphony, in which minor chords thunder over low bass notes, is often interpreted by lay listeners as an expression of sadness in music. Also called "abstract music", absolute music is music that is not explicitly "about" anything, being non-representational and non-objective. Absolute music has no references to stories or images or any other kind of extramusical idea. The aesthetic ideas underlying the absolute music debate relate to Kant's aesthetic disinterestedness from his Critique of Aesthetic Judgment, and the debate has led to numerous arguments, including a war of words between Brahms and Wagner. In the 19th century, a group of early Romantics including Johann Wolfgang von Goethe and E. T. A. Hoffmann gave rise to the idea of what can be labeled spiritual absolutism. "Formalism" is the concept of 'music for music's sake' and refers only to instrumental music without words. The 19th-century music critic Eduard Hanslick argued that music could be enjoyed as pure sound and form, and that it needed no connotation of extra-musical elements to warrant its existence. See also Art and emotion References Further reading Chua, Daniel.
Absolute Music and the Construction of Meaning (Cambridge University Press, 1999) Clay, Felix. 'The Origin of the Aesthetic Emotion'. Sammelbände der Internationalen Musikgesellschaft, 9. Jahrg., H. 2. (Jan. - Mar., 1908), pp. 282–290. Pouivet, Roger. 'On the Cognitive Functioning of Aesthetic Emotions'. Leonardo, Vol. 33, No. 1. (2000), pp. 49–53. Emotion Concepts in aesthetics
Aesthetic emotions
[ "Biology" ]
787
[ "Emotion", "Behavior", "Human behavior" ]
16,089,583
https://en.wikipedia.org/wiki/Privilege%20Management%20Infrastructure
In cryptography, privilege management is the process of managing user authorisations based on the ITU-T Recommendation X.509. The 2001 edition of X.509 specifies most (but not all) of the components of a Privilege Management Infrastructure (PMI), based on X.509 attribute certificates (ACs). Later editions of X.509 (2005 and 2009) added further components to the PMI, including a delegation service (in 2005) and interdomain authorisation (in the 2009 edition). Privilege management infrastructures (PMIs) are to authorisation what public key infrastructures (PKIs) are to authentication. PMIs use attribute certificates (ACs) to hold user privileges, in the form of attributes, instead of public key certificates (PKCs) to hold public keys. PMIs have Sources of Authority (SoAs) and Attribute Authorities (AAs) that issue ACs to users, instead of certification authorities (CAs) that issue PKCs to users. Usually PMIs rely on an underlying PKI, since ACs have to be digitally signed by the issuing AA, and the PKI is used to validate the AA's signature. An X.509 AC is a generalisation of the well-known X.509 public key certificate (PKC), in which the public key of the PKC has been replaced by any set of attributes of the certificate holder (or subject). Therefore, one could in theory use X.509 ACs to hold a user's public key as well as any other attribute of the user. (In a similar vein, X.509 PKCs can also be used to hold privilege attributes of the subject, by adding them to the subject directory attributes extension of an X.509 PKC.) However, the life cycles of public keys and user privileges are usually very different, and therefore it is not usually a good idea to combine both of them in the same certificate. Similarly, the authority that assigns a privilege to someone is usually different from the authority that certifies someone's public key. Therefore, it is not usually a good idea to combine the functions of the SoA/AA and the CA in the same trusted authority. PMIs allow privileges and authorisations to be managed separately from keys and authentication. The first open-source implementation of an X.509 PMI was built with funding under the EC PERMIS project, and a description of the implementation has been published. X.509 ACs and PMIs are used today in Grids (see Grid computing) to assign privileges to users and to carry the privileges around the Grid. In the most popular Grid privilege management system today, called VOMS, user privileges, in the shape of VO memberships and roles, are placed inside an X.509 AC by the VOMS server, signed by the VOMS server, and then embedded in the user's X.509 proxy certificate for carrying around the Grid. Because of the rise in popularity of XML SOAP-based services, SAML attribute assertions are now more popular than X.509 ACs for transporting user attributes. However, both have similar functionality, which is to strongly bind a set of privilege attributes to a user. References Cryptography Communications protocols
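To make the contrast between PKCs and ACs concrete, the sketch below models the fields an attribute certificate binds together. It is a simplified illustration of the structure standardised in X.509, not an ASN.1 encoder and not the API of any real certificate library; all names here are illustrative:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AttributeCertificate:
    """Illustrative model of an X.509 attribute certificate (AC).

    Where a public key certificate binds a public key to a subject,
    an AC binds privilege attributes to a holder. Field names are
    simplified stand-ins for the ASN.1 structure defined in X.509.
    """
    holder: str                      # entity the privileges are assigned to
    issuer: str                      # the Attribute Authority (AA) or SoA
    attributes: dict = field(default_factory=dict)  # e.g. {"role": "admin"}
    not_before: datetime = None      # start of the privilege's validity period
    not_after: datetime = None       # end of the privilege's validity period
    signature: bytes = b""           # AA's signature, validated via a PKI

# Example: an AA asserts a VO membership and a role for a Grid user,
# similar in spirit to what a VOMS server places in a proxy certificate.
ac = AttributeCertificate(
    holder="CN=Alice",
    issuer="CN=Example AA",
    attributes={"vo": "atlas", "role": "production"},
    not_before=datetime(2024, 1, 1),
    not_after=datetime(2024, 1, 2),
)
```

Note how the short validity period reflects the different life cycles discussed above: privileges are typically much shorter-lived than the keys certified by a PKI.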
Privilege Management Infrastructure
[ "Mathematics", "Technology", "Engineering" ]
694
[ "Cybersecurity engineering", "Cryptography", "Applied mathematics", "Computer standards", "Communications protocols" ]
16,090,138
https://en.wikipedia.org/wiki/Windows%20Server%202008%20R2
Windows Server 2008 R2, codenamed "Windows Server 7", is the eighth major version of the Windows NT operating system produced by Microsoft to be released under the Windows Server brand name. It was released to manufacturing on July 22, 2009, and became generally available on October 22, 2009, the same respective release dates as Windows 7. It is the successor to Windows Server 2008 (derived from the Windows Vista codebase and released the previous year), and was succeeded by the Windows 8-based Windows Server 2012. Enhancements in Windows Server 2008 R2 include new functionality for Active Directory, new virtualization and management features, version 7.5 of the Internet Information Services web server and support for up to 256 logical processors. It is built on the same kernel used with the client-oriented Windows 7, and is the first server operating system released by Microsoft to drop support for 32-bit processors, a change which carried over to the consumer-oriented Windows 11 in 2021. Windows Server 2008 R2 is the final version of Windows Server that includes Enterprise and Web Server editions, the final version to receive a service pack from Microsoft, and the final version that supports IA-64 and processors without PAE, SSE2 and NX (although a 2018 update dropped support for non-SSE2 processors). Its successor, Windows Server 2012, requires a processor with PAE, SSE2 and NX, in any supported architecture. Seven editions of Windows Server 2008 R2 were released: Foundation, Standard, Enterprise, Datacenter, Web, HPC Server and Itanium, as well as Windows Storage Server 2008 R2. A home server variant called Windows Home Server 2011 was also released. History Microsoft introduced Windows Server 2008 R2 at the 2008 Professional Developers Conference as the server variant of Windows 7, based on the Windows NT kernel. On January 7, 2009, a beta release of Windows Server 2008 R2 was made available to subscribers of Microsoft's TechNet and MSDN programs, as well as those participating in the Microsoft Connect program for Windows 7. Two days later, the beta was released to the public via the Microsoft Download Center. On April 30, 2009, the release candidate was made available to subscribers of TechNet and MSDN. On May 5, 2009, the release candidate was made available to the public via the Microsoft Download Center. According to the Windows Server Blog, the following are the dates in 2009 when Windows Server 2008 R2 was made available to various distribution channels: OEMs received Windows Server 2008 R2 in English and all language packs on July 29. The remaining languages were available around August 11. Independent software vendor (ISV) and independent hardware vendor (IHV) partners were able to download Windows Server 2008 R2 from MSDN starting on August 14. IT professionals with TechNet subscriptions were able to download Windows Server 2008 R2 and obtain product keys for English, French, German, Italian, and Spanish variants beginning August 14 and all remaining languages beginning August 21. Developers with MSDN subscriptions were able to download and obtain product keys for Windows Server 2008 R2 in English, French, German, Italian, and Spanish starting August 14 and all remaining languages starting August 21. Microsoft Partner Program (MPP) gold/certified members were able to download Windows Server 2008 R2 through the MPP portal on August 19.
Volume licensing customers with an existing Software Assurance (SA) contract were able to download Windows Server 2008 R2 on August 19 via the Volume License Service Center. Volume licensing customers without an SA were able to purchase Windows Server 2008 R2 through volume licensing by September 1. Additionally, qualifying students were able to download Windows Server 2008 R2 Standard edition in 15 languages from the Microsoft Imagine program (known as DreamSpark at the time). New features A reviewer guide published by the company describes several areas of improvement in R2. These include new virtualization capabilities (Live Migration, Cluster Shared Volumes using Failover Clustering and Hyper-V), reduced power consumption, a new set of management tools and new Active Directory capabilities such as a "recycle bin" for deleted objects. IIS 7.5 has been added to this release, which also includes updated FTP server services. Security enhancements include encrypted clientless authenticated VPN services through DirectAccess for clients using Windows 7, and the addition of DNSSEC support for the DNS Server service. Even though DNSSEC as such is supported, only one signature algorithm is available: algorithm 5 (RSA/SHA-1). Since many zones use a different algorithm – including the root zone – this means that in practice Windows still cannot serve as a validating recursive resolver. The DHCP server supports a large number of enhancements, such as MAC address-based control filtering, converting active leases into reservations or link-layer-based filters, DHCP Name Protection for non-Windows machines to prevent name squatting, better performance through aggressive lease database caching, DHCP activity logging, auto-population of certain network interface fields, a wizard for split-scope configuration, DHCP server role migration using WSMT, and support for DHCPv6 Option 15 (User Class) and Option 32 (Information Refresh Time). The DHCP server runs in the context of the Network Service account, which has fewer privileges, to reduce potential damage if compromised. Windows Server 2008 R2 supports up to 64 physical processors or up to 256 logical processors per system. (Only the Datacenter and Itanium editions can take advantage of the capability of 64 physical processors. Enterprise, the next-highest edition after those two, can only use 8.) When deployed in a file server role, new File Classification Infrastructure services allow files to be stored on designated servers in the enterprise based on business naming conventions, relevance to business processes and overall corporate policies. Server Core includes a subset of the .NET Framework, so that some applications (including ASP.NET web sites and Windows PowerShell 2.0) can be used. Performance improvement was a major area of focus for this release; Microsoft has stated that work was done to decrease boot time, improve the efficiency of I/O operations while using less processing power, and generally improve the speed of storage devices, especially iSCSI. Active Directory gains several new features when the forest and domain functional levels are raised to Windows Server 2008 R2: two added features are Authentication Mechanism Assurance and Automatic SPN Management. When the forest functional level is raised, the Active Directory Recycle Bin feature is available and can be enabled using the Active Directory Module for PowerShell. Support lifecycle Support for the RTM version of Windows Server 2008 R2 ended on April 9, 2013.
Users had to install Service Pack 1 to continue receiving updates. On January 13, 2015, Windows Server 2008 R2 exited mainstream support and entered the extended support phase; Microsoft continued to provide security updates every month for Windows Server 2008 R2; however, free technical support, warranty claims, and design changes were no longer offered. Extended support ended on January 14, 2020, about ten years after the release of Windows Server 2008 R2. In August 2019, researchers reported that "all modern versions of Microsoft Windows" may be at risk for "critical" system compromise due to design flaws of hardware device drivers from multiple providers. Itanium Microsoft announced that Server 2008 R2 would be the last version of Windows supporting the Itanium architecture, with extended support to end on July 10, 2018. However, monthly security updates continued until January 14, 2020, and a final unscheduled update appeared in May 2020 via WSUS. Paid extended updates Windows Server 2008 R2 was eligible for the paid ESU (Extended Security Updates) program. This program allowed volume license customers to purchase, in yearly installments, security updates for the operating system until January 10, 2023, only for Standard, Enterprise, and Datacenter volume licensed editions. The program was included with Microsoft Azure purchases, and offered Azure customers an additional year of support, until January 9, 2024. Prior to the ESU program becoming available, Windows Server 2008 R2 was eligible for the now discontinued, paid Premium Assurance program (an add-on to Microsoft Software Assurance) available to volume license customers. Microsoft will, however, honor the program for customers who purchased it between March 2017 and July 2018 (while it was available). The program provides an extra six years of security update support, until January 13, 2026. This will mark the final end of all security updates for the Windows NT 6.1 product line after 16 years, 5 months, and 22 days. Paid extended updates are not available for Itanium customers. Service Pack On February 9, 2011, Microsoft officially released Service Pack 1 (SP1) for Windows 7 and Windows Server 2008 R2 to OEM partners. Apart from bug fixes, it introduces two major new features, RemoteFX and Dynamic Memory. RemoteFX enables the use of graphics hardware support for 3D graphics in a Hyper-V based VM. Dynamic Memory makes it possible for a VM to allocate only as much physical RAM as it temporarily needs for its execution. On February 16, SP1 became available for MSDN and TechNet subscribers as well as volume licensing customers. As of February 22, SP1 is generally available for download via the Microsoft Download Center and available on Windows Update. System requirements System requirements for Windows Server 2008 R2 are as follows:
Processor: 1.4 GHz x86-64 or Itanium 2 processor
Memory: minimum 512 MB RAM (may limit performance and some features); recommended 2 GB RAM; maximum 8 GB RAM (Foundation), 32 GB RAM (Standard), or 2 TB RAM (Enterprise, Datacenter and Itanium)
Display: Super VGA (800×600) or higher
Disk space: 32 GB or more (editions higher than Foundation); 10 GB or more (Foundation edition). Computers with more than 16 GB of RAM require more disk space for paging and dump files.
Other: DVD drive, keyboard and mouse, Internet access (required for updates and online activation) See also BlueKeep (security vulnerability) Comparison of Microsoft Windows versions Comparison of operating systems History of Microsoft Windows List of operating systems Microsoft Servers References External links Windows Server 2008 R2 on Microsoft TechNet Convert Windows Server 2008 R2 to Workstation 2009 software X86-64 operating systems 2008 R2 Server 2008 R2
Windows Server 2008 R2
[ "Technology" ]
2,177
[ "Computing platforms", "Microsoft Windows" ]
16,090,282
https://en.wikipedia.org/wiki/Spatial%20acceleration
In physics, the study of rigid body motion allows for several ways to define the acceleration of a body. The usual definition of acceleration entails following a single particle/point of a rigid body and observing its changes in velocity. Spatial acceleration entails looking at a fixed (unmoving) point in space and observing the change in velocity of the particles that pass through that point. This is similar to the definition of acceleration in fluid dynamics, where typically one measures velocity and/or acceleration at a fixed point inside a testing apparatus. Definition Consider a moving rigid body and the velocity of a point P on the body being a function of the position and velocity of a center-point C and the angular velocity ω. The linear velocity vector at P is expressed in terms of the velocity vector at C as: v_P = v_C + ω × (r_P − r_C), where ω is the angular velocity vector. The material acceleration at P is: a_P = a_C + α × (r_P − r_C) + ω × (ω × (r_P − r_C)), where α is the angular acceleration vector. The spatial acceleration ψ_P at P is expressed in terms of the spatial acceleration ψ_C = a_C − ω × v_C at C as: ψ_P = ψ_C + α × (r_P − r_C), which is similar to the velocity transformation above. In general the spatial acceleration of a particle point P that is moving with linear velocity v_P is derived from the material acceleration a_P at P as: ψ_P = a_P − ω × v_P. References This reference effectively combines screw theory with rigid body dynamics for robotic applications. The author also chooses to use spatial accelerations extensively in place of material accelerations as they simplify the equations and allow for compact notation. See online presentation, page 23, also from the same author. The JPL DARTS page has a section on spatial operator algebra as well as an extensive list of references. This reference defines spatial accelerations for use in rigid body mechanics. Rigid bodies Acceleration
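A quick numerical check of the transformation rules above, using NumPy; the vectors are arbitrary illustrative values, not taken from the cited references:

  import numpy as np

  # Arbitrary rigid-body state (illustrative values).
  r_C, r_P = np.array([0., 0., 0.]), np.array([1., 2., 0.5])
  v_C   = np.array([0.3, -0.1, 0.0])   # linear velocity of reference point C
  omega = np.array([0.0, 0.0, 2.0])    # angular velocity
  alpha = np.array([0.1, 0.0, 0.5])    # angular acceleration
  a_C   = np.array([0.0, 0.2, 0.0])    # material acceleration of C

  r = r_P - r_C
  v_P = v_C + np.cross(omega, r)                                        # velocity transformation
  a_P = a_C + np.cross(alpha, r) + np.cross(omega, np.cross(omega, r))  # material acceleration

  psi_C = a_C - np.cross(omega, v_C)   # spatial acceleration of C
  psi_P = psi_C + np.cross(alpha, r)   # spatial acceleration transformation

  # Agrees with the general definition psi_P = a_P - omega x v_P.
  assert np.allclose(psi_P, a_P - np.cross(omega, v_P))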
Spatial acceleration
[ "Physics", "Mathematics" ]
332
[ "Wikipedia categories named after physical quantities", "Quantity", "Physical quantities", "Acceleration" ]
593,338
https://en.wikipedia.org/wiki/Bimetal
Bimetal refers to an object that is composed of two separate metals joined together. Instead of being a mixture of two or more metals, like alloys, bimetallic objects consist of layers of different metals. Trimetal and tetrametal refer to objects composed of three and four separate metals respectively. A bimetal bar is usually made of brass and iron. Bimetallic strips and disks, which convert a temperature change into mechanical displacement, are the most recognized bimetallic objects due to their name. However, there are other common bimetallic objects. For example, tin cans consist of steel covered with tin. The tin prevents the can from rusting. To cut costs and prevent people from melting them down for their metal, coins are often composed of a cheap metal covered with a more expensive metal. For example, the United States penny was changed from 95% copper to 95% zinc, with a thin copper plating to retain its appearance. A common type of trimetallic object (before the all-aluminum can) was a tin-plated steel can with an aluminum lid with a pull tab. Making the lid out of aluminum allowed it to be pulled off by hand instead of using a can opener, but these cans proved difficult to recycle owing to their mix of metals. Blades for bandsaws and reciprocating saws are often made with bimetal construction. The teeth, made of high-speed steel, are bonded (by various methods, for example, electron beam welding or laser beam welding) to the softer high-carbon steel base. Such construction makes for blades with a better combination of cutting speed and durability than shown by non-bimetal blades, because the advantages and disadvantages of each of the metals are applied in the best locations: the teeth are harder (and thus cut better), but therefore also more brittle; meanwhile, the body area of the band is softer (which would make for poorer teeth), but also less brittle, and thus more resistant to cracking and breaking (which is desirable in the body area). See also Bimetallic strip Bimetallism Bi-metallic coin Thermocouple (electric) Copper-clad steel References Further reading Thermal imaging with tapping mode using a bimetal oscillator formed at the end of a cantilever Bimetal: Definition, Properties, and Applications Kanthal Thermostatic Bimetal Guide.pdf How Thermostatic Bimetal Works Metallurgy Composite materials
Bimetal
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
521
[ "Metallurgy", "Composite materials", "Materials science", "Bimetal", "Materials", "Matter" ]
593,400
https://en.wikipedia.org/wiki/Fecal%20fat%20test
In medicine, the fecal fat test is a diagnostic test for fat malabsorption conditions, which lead to excess fat in the feces (steatorrhea). Background In the duodenum, dietary fat (primarily triglycerides) is digested by enzymes such as pancreatic lipase into smaller molecules of 1,2-diacylglycerols and free fatty acids, which can be absorbed through the wall of the jejunum of the small intestine and enter circulation for metabolism and storage. Since fat is a valuable nutrient, human feces normally contains very little undigested fat. However, a number of diseases of the pancreas and gastrointestinal tract are characterized by fat malabsorption. Examples of such diseases are: disorders of exocrine pancreatic function, such as chronic pancreatitis, cystic fibrosis and Shwachman–Diamond syndrome (these are characterized by deficiency of pancreatic digestive enzymes); celiac disease (in which the fat malabsorption in severe cases is due to inflammatory damage to the integrity of the intestinal lining); short bowel syndrome (in which much of the small intestine has had to be surgically removed and the remaining portion cannot completely absorb all of the fat); and small bowel bacterial overgrowth syndrome. Microscopy In the simplest form of the fecal fat test, a random fecal specimen is submitted to the hospital laboratory and examined under a microscope after staining with a Sudan III or Sudan IV dye ("Sudan staining"). Visible amounts of fat indicate some degree of fat malabsorption. Quantitative Quantitative fecal fat tests measure and report an amount of fat. This is usually done over a period of three days, the patient collecting all of their feces into a container. The container is thoroughly mixed to homogenize the feces; this can be done without specific mixer equipment. A small sample from the feces is collected. The fat content is extracted with solvents and measured by saponification (turning the fat into soap). Normally, up to 7 grams of fat can be malabsorbed in people consuming 100 grams of fat per day. In patients with diarrhea, up to 12 grams of fat may be malabsorbed since the presence of diarrhea interferes with fat absorption, even when the diarrhea is not due to fat malabsorption. References Gastroenterology Feces Stool tests
Fecal fat test
[ "Biology" ]
506
[ "Feces", "Excretion", "Animal waste products" ]
593,406
https://en.wikipedia.org/wiki/Ion-selective%20electrode
An ion-selective electrode (ISE), also known as a specific ion electrode (SIE), is a simple membrane-based potentiometric device which measures the activity of ions in solution. It is a transducer (or sensor) that converts the change in the concentration of a specific ion dissolved in a solution into an electrical potential. An ISE is a sensor whose output signal varies with changes in its chemical environment over time: it has an input signal (the property to be quantified) and an output signal (a quantity that can be registered). In this case, ion-selective electrodes are electrochemical sensors that give potentiometric signals. The voltage is theoretically dependent on the logarithm of the ionic activity, according to the Nernst equation. Analysis with ISEs spans a range of technological fields such as biology, chemistry, environmental science and other industrial settings such as agriculture. Ion-selective electrodes are used in analytical chemistry and biochemical/biophysical research, where measurements of ionic concentration in an aqueous solution are required. General Theory of Ion-Selective Electrodes When using ion-selective electrodes, a scientist compares the signal of an analyte to the electrochemical potential shown by the ISE. Different types of electrodes can be used to do this, as described in the sections below. An ion-selective membrane (consisting of glass, crystalline material, liquid, or polymer) selectively allows specific types of ions to travel through; in other words, it is selectively permeable. All ISE measurements are made by comparison to an internal reference electrode with a known concentration of the analyte being measured. The external reference electrode is the part of the system that is exposed to the solution. The potential is measured using the following relation: E_cell = E_ISE − E_ref. E_ISE includes the potential of the internal reference electrode and the ion-selective membrane potential, E_m. E_ISE is governed by analyte activity in the internal solution, whereas E_m is governed by the activity of the analyte on each side of the selective membrane. Furthermore, E_ref, the external reference portion of the cell, depends on the half-reaction of the electrode and the liquid junction potential E_j. Reference Electrodes The most common types of reference electrodes used in analytical chemistry include the standard hydrogen electrode, the saturated calomel electrode, and the Ag/AgCl electrode. The standard hydrogen electrode (SHE) is the primary reference electrode; it has a potential of 0 volts at all temperatures and a pressure of 1 atm. The platinum (Pt) wire electrode is not part of the reaction (it acts as a catalyst) and can serve as either the anode or the cathode. The wire is immersed in an acidic solution with an H₂(g) outlet pumping gaseous hydrogen into the solution. On the surface of the Pt electrode, a half-reaction occurs: 2H⁺(aq) + 2e⁻ ⇌ H₂(g) The cell notation is as follows, with a single line denoting a phase boundary and a double line representing a salt bridge: Pt | H₂ (1 atm) | H⁺ (1 M) || In fieldwork the SHE is inconvenient, making the saturated calomel electrode (SCE) the second most used reference. However, it contains mercury, which makes it a less preferred choice. The electrode is connected to an electrical lead.
A platinum wire in a paste of Hg/Hg₂Cl₂ is placed in a saturated 3 M KCl solution. A small hole with an asbestos wire is located at the bottom of the internal electrode. A ceramic frit, acting as the salt bridge, is located on the bottom of the reference electrode. The overall half-reaction is: Hg₂Cl₂ + 2e⁻ ⇌ 2Hg + 2Cl⁻ The notation for the cell is: Hg | Hg₂Cl₂ | KCl (x M) || Because the SCE contains mercury, the silver chloride electrode is the most frequently used reference, even over the SCE. Within the reference electrode, an Ag/AgCl wire is immersed in a KCl filling solution. A frit at the bottom of the reference electrode plays the role of a salt bridge. The overall half-reaction is: AgCl + e⁻ ⇌ Ag + Cl⁻ The notation for the cell is: Ag | AgCl | KCl (x M) || Types of ion-selective membrane There are four main types of ion-selective membrane used in ion-selective electrodes (ISEs): glass, crystalline (solid-state), ion-exchange resin (liquid-based), and compound (enzyme) electrodes. Glass membranes Glass membranes are made from an ion-exchange type of glass (silicate or chalcogenide). This type of ISE has good selectivity, but only for several single-charged cations, mainly H+, Na+, and Ag+. Chalcogenide glass also has selectivity for double-charged metal ions, such as Pb2+ and Cd2+. The glass membrane has excellent chemical durability and can work in very aggressive media. A very common example of this type of electrode is the pH glass electrode. Crystalline membranes Crystalline membranes are made from mono- or polycrystallites of a single substance. They have good selectivity, because only ions which can introduce themselves into the crystal structure can interfere with the electrode response. This is the major difference between this type of electrode and the glass membrane electrodes. The lack of internal solution reduces the potential junctions. Selectivity of crystalline membranes can be for both the cation and the anion of the membrane-forming substance. An example is the fluoride selective electrode based on LaF3 crystals. Ion-exchange resin membranes Ion-exchange resins are based on special organic polymer membranes which contain a specific ion-exchange substance (resin). This is the most widespread type of ion-specific electrode. Usage of specific resins allows preparation of selective electrodes for tens of different ions, both monatomic and polyatomic. They are also the most widespread electrodes with anionic selectivity. However, such electrodes have low chemical and physical durability as well as a short "survival time". An example is the potassium selective electrode, based on valinomycin as an ion-exchange agent. Enzyme electrodes Enzyme electrodes are not true ion-selective electrodes, but are usually considered to be within the ion-selective electrode scope. Such an electrode has a "double reaction" mechanism - an enzyme reacts with a specific substance, and the product of this reaction (usually H+ or OH−) is detected by a true ion-selective electrode, such as a pH-selective electrode. All these reactions occur inside a special membrane, which covers the true ion-selective electrode. This is why enzyme electrodes are sometimes considered ion-selective. An example is a glucose selective electrode. Alkali metal ISE Electrodes specific for each alkali metal ion (Li+, Na+, K+, Rb+ and Cs+) have been developed. The principle on which these electrodes are based is that the alkali metal ion is encapsulated in a molecular cavity whose size is matched to the size of the ion.
For example, an electrode based on valinomycin may be used for the determination of potassium ion concentration. See also Fluoride selective electrode Ion transport number Solvated electron Electrochemical hydrogen compressor References External links Ion-selective electrodes Nico 2000 - Student Learning Guide (Beginners Guide to ISE Measurement: nico2000.net) ION-Selective electrodes analysers Electrodes
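The Nernst-equation dependence mentioned at the top of this entry can be stated explicitly; this is the standard textbook form for an ion of charge z, not a formula preserved from this article:

  E = E^{0} + \frac{RT}{zF}\ln a_i

where E is the measured electrode potential, E⁰ is a constant for the given cell, R is the gas constant, T the absolute temperature, F the Faraday constant, and a_i the activity of the target ion; at 25 °C the slope works out to roughly 59.2/z mV per tenfold change in activity.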
Ion-selective electrode
[ "Chemistry" ]
1,613
[ "Electrochemistry", "Electrodes" ]
593,414
https://en.wikipedia.org/wiki/Mean%20corpuscular%20hemoglobin%20concentration
The mean corpuscular hemoglobin concentration (MCHC) is a measure of the concentration of hemoglobin in a given volume of packed red blood cells. It is calculated by dividing the hemoglobin by the hematocrit. Reference ranges for blood tests are 32 to 36 g/dL (320 to 360 g/L), or between 4.81 and 5.58 mmol/L. It is thus a mass or molar concentration. Still, many laboratories report MCHC as a percentage (%), as if it were a mass fraction (mHb / mRBC). Numerically, however, the MCHC in g/dL and the mass fraction of hemoglobin in red blood cells in % are identical, assuming an RBC density of 1 g/mL and negligible hemoglobin in plasma. Interpretation A low MCHC can be interpreted as identifying decreased production of hemoglobin. MCHC can be normal even when hemoglobin production is decreased (such as in iron deficiency) due to a calculation artifact. MCHC can be elevated ("hyperchromic") in hereditary spherocytosis, sickle cell disease and homozygous hemoglobin C disease, depending upon the hemocytometer. MCHC can be elevated in some megaloblastic anemias. MCHC can be falsely elevated when there is agglutination of red cells (falsely lowering the measured RBC count) or when there is opacification of the plasma (falsely increasing the measured hemoglobin). Causes of plasma opacification that can falsely increase the MCHC include hyperbilirubinemia, hypertriglyceridemia, and free hemoglobin in the plasma (due to hemolysis). Complicating conditions Because of the way automated analysers count blood cells, a very high MCHC (greater than about 370 g/L) may indicate the blood is from someone with a cold agglutinin, or there may be some other problem resulting in one or more artifactual results affecting the MCHC. For example, for some patients with cold agglutinins, when their blood gets colder than 37 °C, the red cells will clump together. As a result, the analyzer may incorrectly report a low number of very dense red blood cells. This will result in an impossibly high number when the analyzer calculates the MCHC. This problem is usually picked up by the laboratory before the result is reported. The blood can be warmed until the cells separate from each other, and quickly put through the machine while still warm. There are four steps to perform when a suspiciously increased MCHC (>370 g/L or >37.0 g/dL) is received from the analyzer: Remix the EDTA tube—if the MCHC corrects, report corrected results Incubation at 37 °C—if the MCHC corrects, report corrected results and comment on possible cold agglutinin Saline replacement: replace plasma with the same amount of saline to exclude interference, e.g. lipemia and auto-immune antibodies—if the MCHC corrects, report corrected results and comment on lipemia Check the slide for spherocytosis (e.g. in hereditary spherocytosis, among other causes) Worked example: a specimen with hemoglobin of 15 g/dL and a hematocrit of 45% (illustrative values) gives MCHC = 15 ÷ 0.45 ≈ 33 g/dL, within the reference range. See also Red blood cell indices Mean corpuscular volume Mean corpuscular hemoglobin References External links FP Notebook Blood tests Concentration indicators
Mean corpuscular hemoglobin concentration
[ "Chemistry" ]
720
[ "Blood tests", "Chemical pathology" ]
593,418
https://en.wikipedia.org/wiki/Mean%20corpuscular%20hemoglobin
The mean corpuscular hemoglobin, or "mean cell hemoglobin" (MCH), is the average mass of hemoglobin (Hb) per red blood cell (RBC) in a sample of blood. It is reported as part of a standard complete blood count. MCH value is diminished in hypochromic anemias. RBCs are either normochromic or hypochromic. They are never "hyperchromic". If more than the normal amount of hemoglobin is made, the cells get larger—they do not become darker. It is calculated by dividing the total mass of hemoglobin by the number of red blood cells in a volume of blood: MCH (pg) = (Hb in g/dL × 10) ÷ RBC count (in millions per μL). A normal MCH value in humans is 27 to 33 picograms (pg)/cell. The amount of hemoglobin per RBC depends on hemoglobin synthesis and the size of the RBC. The mass of the red cell is determined by the iron (as part of the hemoglobin molecule), thus MCH in picograms is roughly the mass of one red cell. In iron deficiency anemia the cell mass becomes lighter, thus a MCH below 27 pg is an indication of iron deficiency. The MCH decreases when Hb synthesis is reduced, or when RBCs are smaller than normal, such as in cases of iron-deficiency anemia. Conversion to SI-units: 1 pg of hemoglobin = 0.06207 femtomole (fmol). Normal value converted to SI-units: 1.68 – 1.92 fmol/cell. See also Red blood cell indices Mean corpuscular hemoglobin concentration Mean corpuscular volume References External links FP Notebook Medline Blood tests
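A worked example of the formula above, with illustrative values (not drawn from this entry): for hemoglobin of 15 g/dL and an RBC count of 5 million cells per μL,

  \text{MCH} = \frac{15 \times 10}{5} = 30\ \text{pg/cell},

which lies within the normal 27–33 pg range.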
Mean corpuscular hemoglobin
[ "Chemistry" ]
389
[ "Blood tests", "Chemical pathology" ]
593,419
https://en.wikipedia.org/wiki/Mean%20corpuscular%20volume
The mean corpuscular volume, or mean cell volume (MCV), is a measure of the average volume of a red blood corpuscle (or red blood cell). The measure is obtained by multiplying a volume of blood by the proportion of blood that is cellular (the hematocrit), and dividing that product by the number of erythrocytes (red blood cells) in that volume. The mean corpuscular volume is a part of a standard complete blood count. In patients with anemia, it is the MCV measurement that allows classification as either a microcytic anemia (MCV below normal range), normocytic anemia (MCV within normal range) or macrocytic anemia (MCV above normal range). Normocytic anemia is usually deemed so because the bone marrow has not yet responded with a change in cell volume. It occurs occasionally in acute conditions, namely blood loss and hemolysis. If the MCV was determined by automated equipment, the result can be compared to RBC morphology on a peripheral blood smear, where a normal RBC is about the size of a normal lymphocyte nucleus. Any deviation would usually be indicative of either faulty equipment or technician error, although there are some conditions that present with high MCV without megaloblastic cells. For further specification, it can be used to calculate red blood cell distribution width (RDW). The RDW is a statistical calculation made by automated analyzers that reflects the variability in size and shape of the RBCs. Calculation To calculate MCV, the hematocrit (Hct) is divided by the concentration of RBCs ([RBC]): MCV = Hct ÷ [RBC]. Normally, MCV is expressed in femtoliters (fL, or 10⁻¹⁵ L), and [RBC] in millions per microliter (10⁶/μL). The normal range for MCV is 80–100 fL. If the hematocrit is expressed as a percentage, the red blood cell concentration as millions per microliter, and the MCV in femtoliters, the formula becomes: MCV (fL) = Hct (%) × 10 ÷ [RBC] (millions/μL). For example, if the Hct = 42.5% and [RBC] = 4.58 million per microliter (4,580,000/μL), then MCV = 42.5 × 10 ÷ 4.58 ≈ 92.8 fL. Using implied units, 0.425 μL of red cells per μL of blood ÷ 4,580,000 cells per μL of blood ≈ 9.28 × 10⁻⁸ μL per cell ≈ 92.8 fL per cell. The MCV can be determined in a number of ways by automatic analyzers. In volume-sensitive automated blood cell counters, such as the Coulter counter, the red cells pass one-by-one through a small aperture and generate a signal directly proportional to their volume. Other automated counters measure red blood cell volume by means of techniques that measure refracted, diffracted, or scattered light. Interpretation The normal reference range is typically 80–100 fL. High In pernicious anemia (macrocytic), MCV can range up to 150 femtolitres. An elevated MCV is also associated with alcohol abuse (as are an elevated GGT and an AST/ALT ratio of 2:1). Vitamin B12 and/or folic acid deficiency has also been associated with macrocytic anemia (high MCV numbers). Low The most common causes of microcytic anemia are iron deficiency (due to inadequate dietary intake, gastrointestinal blood loss, or menstrual blood loss), thalassemia, sideroblastic anemia or chronic disease. In iron deficiency anemia (microcytic anemia), it can be as low as 60 to 70 femtolitres. In some cases of thalassemia, the MCV may be low even though the patient is not iron deficient. Derivation The MCV can be conceptualized as the total volume of a group of cells divided by the number of cells. For a real-world-sized example, imagine you had 10 small jellybeans with a combined volume of 10 μL. The mean volume of a jellybean in this group would be 10 μL / 10 jellybeans = 1 μL / jellybean. A similar calculation works for MCV. 1. Measure the RBC index in cells/μL. Take the reciprocal (1/RBC index) to convert it to μL/cell. 2. The 1 μL is only made of a proportion of red cells (e.g. 40%) with the rest of the volume composed of plasma. Multiply by the hematocrit (a unitless quantity) to take this into account. 3. Finally, convert the units of μL to fL by multiplying by 10⁹ fL/μL. The result: MCV = (1 ÷ [RBC]) × Hct × 10⁹ fL/μL. Note: the shortcut proposed above (Hct in % × 10 ÷ [RBC] in millions/μL) just makes the units work out. References Blood tests
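Because this entry and the neighboring MCH and MCHC entries compute their indices from the same three CBC measurements, a small sketch that derives all three at once; the function name and sample values are illustrative:

  def red_cell_indices(hgb_g_dl: float, hct_percent: float, rbc_millions_ul: float):
      """Return (MCV in fL, MCH in pg, MCHC in g/dL) from CBC values."""
      mcv  = hct_percent * 10 / rbc_millions_ul   # fL
      mch  = hgb_g_dl * 10 / rbc_millions_ul      # pg
      mchc = hgb_g_dl * 100 / hct_percent         # g/dL
      return mcv, mch, mchc

  # Example: Hb 15 g/dL, Hct 45%, RBC 5.0 million/uL
  mcv, mch, mchc = red_cell_indices(15.0, 45.0, 5.0)
  print(f"MCV={mcv:.1f} fL, MCH={mch:.1f} pg, MCHC={mchc:.1f} g/dL")
  # -> MCV=90.0 fL, MCH=30.0 pg, MCHC=33.3 g/dL (all within the normal ranges)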
Mean corpuscular volume
[ "Chemistry" ]
960
[ "Blood tests", "Chemical pathology" ]
593,422
https://en.wikipedia.org/wiki/Red%20blood%20cell%20distribution%20width
Red blood cell distribution width (RDW), as well as various types thereof (RDW-CV or RCDW and RDW-SD), is a measure of the range of variation of red blood cell (RBC) volume that is reported as part of a standard complete blood count. Red blood cells have an average volume of 80–100 femtoliters, but individual cell volumes vary even in healthy blood. Certain disorders, however, cause a significantly increased variation in cell size. Higher RDW values indicate greater variation in size. Normal reference range of RDW-CV in human red blood cells is 11.5–15.4%. If anemia is observed, RDW test results are often used together with mean corpuscular volume (MCV) results to determine the possible causes of the anemia. It is mainly used to differentiate an anemia of mixed causes from an anemia of a single cause. Deficiencies of Vitamin B12 or folate produce a macrocytic anemia (large cell anemia) in which the RDW is elevated in roughly two-thirds of all cases. However, a varied size distribution of red blood cells is a hallmark of iron deficiency anemia, and as such shows an increased RDW in virtually all cases. In the case of both iron and B12 deficiencies, there will normally be a mix of both large cells and small cells, causing the RDW to be elevated. An elevated RDW (red blood cells of unequal sizes) is known as anisocytosis. An elevation in the RDW is not characteristic of all anemias. Anemia of chronic disease, hereditary spherocytosis, acute blood loss, aplastic anemia (anemia resulting from an inability of the bone marrow to produce red blood cells), and certain hereditary hemoglobinopathies (including some cases of thalassemia minor) may all present with a normal RDW. Calculations The "width" in RDW is sometimes thought to be "misleading", since it in fact is a measure of deviation of the volume of RBCs, and not directly the diameter. RDW-CV "width" refers to the width of the volume curve (distribution width), not the width of the cells. RDW-SD is calculated as the width (in fL) of the RBC size distribution histogram at the 20% height level. This parameter is, therefore, not influenced by the average RBC size (mean corpuscular volume, MCV). RDW-CV (expressed in %) is calculated with the following formula: RDW-CV = (1 standard deviation of RBC volume ÷ MCV) × 100%. Since RDW-CV is mathematically derived from MCV, it is therefore affected by the average RBC size (MCV). Pathological implications Normal RDW Anemia in the presence of a normal RDW may suggest thalassemia. A low Mentzer Index, calculated from CBC data [MCV/RBC < 13], may suggest this disorder but a hemoglobin electrophoresis would be diagnostic. Anemia of chronic diseases show normal RDW. High RDW High RDW may be a result of the presence of fragments, groups of agglutination, and/or abnormal shape of red blood cells. Iron-deficiency anemia usually presents with high RDW and low MCV. Folate and vitamin B12 deficiency anemia usually presents with high RDW and high MCV. Mixed-deficiency (iron + B12 or folate) anemia usually presents with high RDW and variable MCV. Recent hemorrhages typically present with high RDW and normal MCV. A false high RDW reading can occur if EDTA anticoagulated blood is used instead of citrated blood. See Pseudothrombocytopenia. References Further reading RDW Blood Test - Red Blood Cell Distribution Width Blood tests
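A worked example of the RDW-CV formula above, using illustrative values: a red cell volume standard deviation of 11.7 fL with an MCV of 90 fL gives

  \text{RDW-CV} = \frac{11.7}{90} \times 100\% = 13\%,

which falls inside the 11.5–15.4% reference range.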
Red blood cell distribution width
[ "Chemistry" ]
823
[ "Blood tests", "Chemical pathology" ]
593,680
https://en.wikipedia.org/wiki/Statistical%20process%20control
Statistical process control (SPC) or statistical quality control (SQC) is the application of statistical methods to monitor and control the quality of a production process. This helps to ensure that the process operates efficiently, producing more specification-conforming products with less waste (scrap). SPC can be applied to any process where the "conforming product" (product meeting specifications) output can be measured. Key tools used in SPC include run charts, control charts, a focus on continuous improvement, and the design of experiments. An example of a process where SPC is applied is a manufacturing line. SPC must be practiced in two phases: the first phase is the initial establishment of the process, and the second phase is the regular production use of the process. In the second phase, a decision must be made about the period to be examined, depending upon the change in 5M&E conditions (Man, Machine, Material, Method, Movement, Environment) and the wear rate of parts used in the manufacturing process (machine parts, jigs, and fixtures). An advantage of SPC over other methods of quality control, such as "inspection", is that it emphasizes early detection and prevention of problems, rather than the correction of problems after they have occurred. In addition to reducing waste, SPC can lead to a reduction in the time required to produce the product. SPC makes it less likely the finished product will need to be reworked or scrapped. History Statistical process control was pioneered by Walter A. Shewhart at Bell Laboratories in the early 1920s. Shewhart developed the control chart in 1924 and the concept of a state of statistical control. Statistical control is equivalent to the concept of exchangeability developed by logician William Ernest Johnson, also in 1924, in his book Logic, Part III: The Logical Foundations of Science. Along with a team at AT&T that included Harold Dodge and Harry Romig, he worked to put sampling inspection on a rational statistical basis as well. Shewhart consulted with Colonel Leslie E. Simon in the application of control charts to munitions manufacture at the Army's Picatinny Arsenal in 1934. That successful application helped convince Army Ordnance to engage AT&T's George D. Edwards to consult on the use of statistical quality control among its divisions and contractors at the outbreak of World War II. W. Edwards Deming invited Shewhart to speak at the Graduate School of the U.S. Department of Agriculture and served as the editor of Shewhart's book Statistical Method from the Viewpoint of Quality Control (1939), which was the result of that lecture. Deming was an important architect of the quality control short courses that trained American industry in the new techniques during WWII. The graduates of these wartime courses formed a new professional society in 1945, the American Society for Quality Control, which elected Edwards as its first president. Deming travelled to Japan during the Allied Occupation and met with the Union of Japanese Scientists and Engineers (JUSE) in an effort to introduce SPC methods to Japanese industry. 'Common' and 'special' sources of variation Shewhart read the new statistical theories coming out of Britain, especially the work of William Sealy Gosset, Karl Pearson, and Ronald Fisher. However, he understood that data from physical processes seldom produced a normal distribution curve (that is, a Gaussian distribution or 'bell curve').
He discovered that data from measurements of variation in manufacturing did not always behave the same way as data from measurements of natural phenomena (for example, Brownian motion of particles). Shewhart concluded that while every process displays variation, some processes display variation that is natural to the process ("common" sources of variation); these processes he described as being in (statistical) control. Other processes additionally display variation that is not present in the causal system of the process at all times ("special" sources of variation), which Shewhart described as not in control. Application to non-manufacturing processes Statistical process control is appropriate to support any repetitive process, and has been implemented in many settings where for example ISO 9000 quality management systems are used, including financial auditing and accounting, IT operations, health care processes, and clerical processes such as loan arrangement and administration, customer billing etc. Despite criticism of its use in design and development, it is well-placed to manage semi-automated data governance of high-volume data processing operations, for example in an enterprise data warehouse, or an enterprise data quality management system. In the 1988 Capability Maturity Model (CMM) the Software Engineering Institute suggested that SPC could be applied to software engineering processes. The Level 4 and Level 5 practices of the Capability Maturity Model Integration (CMMI) use this concept. The application of SPC to non-repetitive, knowledge-intensive processes, such as research and development or systems engineering, has encountered skepticism and remains controversial. In No Silver Bullet, Fred Brooks points out that the complexity, conformance requirements, changeability, and invisibility of software results in inherent and essential variation that cannot be removed. This implies that SPC is less effective in the software development than in, e.g., manufacturing. Variation in manufacturing In manufacturing, quality is defined as conformance to specification. However, no two products or characteristics are ever exactly the same, because any process contains many sources of variability. In mass-manufacturing, traditionally, the quality of a finished article is ensured by post-manufacturing inspection of the product. Each article (or a sample of articles from a production lot) may be accepted or rejected according to how well it meets its design specifications, SPC uses statistical tools to observe the performance of the production process in order to detect significant variations before they result in the production of a sub-standard article. Any source of variation at any point of time in a process will fall into one of two classes. (1) Common causes 'Common' causes are sometimes referred to as 'non-assignable', or 'normal' sources of variation. It refers to any source of variation that consistently acts on process, of which there are typically many. This type of causes collectively produce a statistically stable and repeatable distribution over time. (2) Special causes 'Special' causes are sometimes referred to as 'assignable' sources of variation. The term refers to any factor causing variation that affects only some of the process output. They are often intermittent and unpredictable. Most processes have many sources of variation; most of them are minor and may be ignored. 
If the dominant assignable sources of variation are detected, potentially they can be identified and removed. When they are removed, the process is said to be 'stable'. When a process is stable, its variation should remain within a known set of limits. That is, at least, until another assignable source of variation occurs. For example, a breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of cereal. Some boxes will have slightly more than 500 grams, and some will have slightly less. When the package weights are measured, the data will demonstrate a distribution of net weights. If the production process, its inputs, or its environment (for example, the machine on the line) change, the distribution of the data will change. For example, as the cams and pulleys of the machinery wear, the cereal filling machine may put more than the specified amount of cereal into each box. Although this might benefit the customer, from the manufacturer's point of view it is wasteful, and increases the cost of production. If the manufacturer finds the change and its source in a timely manner, the change can be corrected (for example, the cams and pulleys replaced). From an SPC perspective, if the weight of each cereal box varies randomly, some higher and some lower, always within an acceptable range, then the process is considered stable. If the cams and pulleys of the machinery start to wear out, the weights of the cereal box might not be random. The degraded functionality of the cams and pulleys may lead to a non-random linear pattern of increasing cereal box weights. We call this common cause variation. If, however, all the cereal boxes suddenly weighed much more than average because of an unexpected malfunction of the cams and pulleys, this would be considered a special cause variation. Application The application of SPC involves three main phases of activity: Understanding the process and the specification limits. Eliminating assignable (special) sources of variation, so that the process is stable. Monitoring the ongoing production process, assisted by the use of control charts, to detect significant changes of mean or variation. The proper implementation of SPC has been limited, in part due to a lack of statistical expertise at many organizations. Control charts The data from measurements of variations at points on the process map is monitored using control charts. Control charts attempt to differentiate "assignable" ("special") sources of variation from "common" sources. "Common" sources, because they are an expected part of the process, are of much less concern to the manufacturer than "assignable" sources. Using control charts is a continuous activity, ongoing over time. Stable process When the process does not trigger any of the control chart "detection rules" for the control chart, it is said to be "stable". A process capability analysis may be performed on a stable process to predict the ability of the process to produce "conforming product" in the future. A stable process can be demonstrated by a process signature that is free of variances outside of the capability index. A process signature is the plotted points compared with the capability index. Excessive variations When the process triggers any of the control chart "detection rules", (or alternatively, the process capability is low), other activities may be performed to identify the source of the excessive variation. 
The tools used in these extra activities include: Ishikawa diagram, designed experiments, and Pareto charts. Designed experiments are a means of objectively quantifying the relative importance (strength) of sources of variation. Once the sources of (special cause) variation are identified, they can be minimized or eliminated. Steps to eliminating a source of variation might include: development of standards, staff training, error-proofing, and changes to the process itself or its inputs. Process stability metrics When monitoring many processes with control charts, it is sometimes useful to calculate quantitative measures of the stability of the processes. These metrics can then be used to identify/prioritize the processes that are most in need of corrective actions. These metrics can also be viewed as supplementing the traditional process capability metrics. Several metrics have been proposed, as described in Ramirez and Runger. They are (1) a Stability Ratio which compares the long-term variability to the short-term variability, (2) an ANOVA Test which compares the within-subgroup variation to the between-subgroup variation, and (3) an Instability Ratio which compares the number of subgroups that have one or more violations of the Western Electric rules to the total number of subgroups. Mathematics of control charts Digital control charts use logic-based rules that determine "derived values" which signal the need for correction. For example, derived value = last value + average absolute difference between the last N numbers. See also ANOVA Gauge R&R Distribution-free control chart Electronic design automation Industrial engineering Process Window Index Process capability index Quality assurance Reliability engineering Six sigma Stochastic control Total quality management References Bibliography External links MIT Course - Control of Manufacturing Processes
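A minimal sketch of a Shewhart-style individuals control chart consistent with the description above; the 2.66 factor is the conventional 3-sigma constant for moving-range-based limits (3/1.128), and the weights are invented sample data. In practice the limits would be established in the first SPC phase, from data known to be in control:

  import statistics

  def individuals_chart_limits(data):
      """Centre line and 3-sigma limits for an individuals (I) chart,
      with sigma estimated from the average moving range."""
      centre = statistics.fmean(data)
      moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
      mr_bar = statistics.fmean(moving_ranges)
      return centre, centre - 2.66 * mr_bar, centre + 2.66 * mr_bar

  # Illustrative cereal-box weights in grams (the last point simulates a special cause).
  weights = [501.2, 499.8, 500.5, 498.9, 500.1, 499.4, 500.8, 499.9, 500.3, 507.0]
  centre, lcl, ucl = individuals_chart_limits(weights)
  for i, w in enumerate(weights, 1):
      if not (lcl <= w <= ucl):
          print(f"sample {i}: {w} g outside [{lcl:.2f}, {ucl:.2f}] -> investigate special cause")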
Statistical process control
[ "Engineering" ]
2,348
[ "Statistical process control", "Engineering statistics" ]
593,683
https://en.wikipedia.org/wiki/Form%20follows%20function
Form follows function is a principle of design associated with late 19th- and early 20th-century architecture and industrial design in general, which states that the appearance and structure of a building or object (architectural form) should primarily relate to its intended function or purpose. Origins of the phrase The architect Louis Sullivan coined the maxim, which encapsulates Viollet-le-Duc's theories: "a rationally designed structure may not necessarily be beautiful but no building can be beautiful that does not have a rationally designed structure". The maxim is often incorrectly attributed to the sculptor Horatio Greenough (1805–1852), whose thinking mostly predates the later functionalist approach to architecture. Greenough's writings were for a long time largely forgotten, and were rediscovered only in the 1930s. In 1947, a selection of his essays was published as Form and Function: Remarks on Art by Horatio Greenough. The earliest formulation of the idea as "in architecture only that shall show that has a definite function" belongs not to an architect, but to the monk Carlo Lodoli (1690–1761), who uttered the phrase while inspired by positivist thinking (Lodoli's words were published by his student, Francesco Algarotti, in 1757). Sullivan was Greenough's much younger compatriot and admired rationalist thinkers such as Thoreau, Emerson, Whitman, and Melville, as well as Greenough himself. In 1896, Sullivan coined the phrase in an article titled The Tall Office Building Artistically Considered, though he later attributed the core idea to the Roman architect, engineer, and author Marcus Vitruvius Pollio, who asserted in his treatise De architectura that a structure must exhibit the three qualities of firmitas, utilitas, venustas—that is, it must be solid, useful, and beautiful. Sullivan actually wrote that "form ever follows function", but the simpler and less emphatic phrase is more widely remembered. For Sullivan, this was distilled wisdom, an aesthetic credo, the single "rule that shall permit of no exception". The full quote is: Whether it be the sweeping eagle in his flight, or the open apple-blossom, the toiling work-horse, the blithe swan, the branching oak, the winding stream at its base, the drifting clouds, over all the coursing sun, form ever follows function, and this is the law. Where function does not change, form does not change. The granite rocks, the ever-brooding hills, remain for ages; the lightning lives, comes into shape, and dies, in a twinkling. It is the pervading law of all things organic and inorganic, of all things physical and metaphysical, of all things human and all things superhuman, of all true manifestations of the head, of the heart, of the soul, that the life is recognizable in its expression, that form ever follows function. This is the law. Sullivan developed the shape of the tall steel skyscraper in late 19th-century Chicago at a moment in which technology, taste and economic forces converged and made it necessary to break with established styles. If the shape of the building was not going to be chosen out of the old pattern book, something had to determine form, and according to Sullivan it was going to be the purpose of the building. Thus, "form follows function", as opposed to "form follows precedent". Sullivan's assistant, Frank Lloyd Wright, adopted and professed the same principle in a slightly different form.
Debate on the functionality of ornamentation In 1910, the Austrian architect Adolf Loos gave a lecture titled "Ornament and Crime" in reaction to the elaborate ornament used by the Vienna Secession architects. Modernists adopted Loos's moralistic argument as well as Sullivan's maxim. Loos had worked as a carpenter in the USA. He celebrated efficient plumbing and industrial artifacts like corn silos and steel water towers as examples of functional design. Application in different fields Architecture The phrase "form (ever) follows function" became a battle cry of Modernist architects after the 1930s. The credo was taken to imply that decorative elements, which architects call "ornament", were superfluous in modern buildings. The phrase can best be implemented in design by asking the question, "Does it work?" Design in architecture utilizing this mantra follows the functionality and purpose of the building. For example, a family home would be designed around familial and social interactions and life. It would be purposeful, without functionless flair. A building's beauty comes from the function it serves rather than from its visual design. One aim of the Modernists after World War II was to elevate the living conditions of the masses. Many people around the world were living in less than ideal conditions, worsened by war. The Modernists sought to bring these people into more livable, humane spaces that, while not conventionally beautiful, were extremely functional. As a result, architecture utilizing "form follows function" became a sign of hope and progress. Despite coining the term, Louis Sullivan himself neither thought nor designed along such lines at the peak of his career. Indeed, while his buildings could be spare and crisp in their principal masses, he often punctuated their plain surfaces with eruptions of lush Art Nouveau and Celtic Revival decorations, usually cast in iron or terracotta, and ranging from organic forms like vines and ivy, to more geometric designs, and interlace, inspired by his Irish design heritage. Probably the most famous example is the writhing green ironwork that covers the entrance canopies of the Carson, Pirie, Scott and Company Building on South State Street in Chicago. These ornaments, often executed by the talented younger draftsman in Sullivan's employ, would eventually become Sullivan's trademark; to students of architecture, they are his instantly recognizable signature. Automobile designing If the design of an automobile conforms to its function—for instance, the Fiat Multipla's shape, which is partly due to the desire to sit six people in two rows—then its form is said to follow its function. Product design One episode in the history of the inherent conflict between functional design and the demands of the marketplace took place in 1935, after the introduction of the streamlined Chrysler Airflow, when the American auto industry temporarily halted attempts to introduce optimal aerodynamic forms into mass manufacture. Some car-makers thought aerodynamic efficiency would result in a single optimal auto-body shape, a "teardrop" shape, which would not be good for unit sales. General Motors adopted two different positions on streamlining, one meant for its internal engineering community, the other meant for its customers. Like the annual model year change, so-called aerodynamic styling is often meaningless in terms of technical performance.
Subsequently, drag coefficient has become both a marketing tool and a means of improving the saleability of a car by reducing its fuel consumption slightly and increasing its top speed markedly. The American industrial designers of the 1930s and 1940s like Raymond Loewy, Norman Bel Geddes and Henry Dreyfuss grappled with the inherent contradictions of "form follows function" as they redesigned blenders and locomotives and duplicating machines for mass-market consumption. Loewy formulated his principle to express that product designs are bound by functional constraints of math and materials and logic, but their acceptance is constrained by social expectations. His advice was that for very new technologies, they should be made as familiar as possible, but for familiar technologies, they should be made surprising. Victor Papanek (1923–1998) was one influential twentieth-century designer and design philosopher who taught and wrote as a proponent of "form follows function". By honestly applying "form follows function", industrial designers had the potential to put their clients out of business. Some simple single-purpose objects like screwdrivers and pencils and teapots might be reducible to a single optimal form, precluding product differentiation. Some objects made too durable would prevent sales of replacements (see Planned obsolescence). From the standpoint of functionality, some products are simply unnecessary. An alternative approach referred to as "form leads function", or "function follows form", starts with vague, abstract, or underspecified designs. These designs, sometimes generated using tools like text-to-image models, can serve as triggers for generating novel ideas for product design. Software engineering It has been argued that the structure and internal quality attributes of a working, non-trivial software artifact will represent first and foremost the engineering requirements of its construction, with the influence of process being marginal, if any. This does not mean that process is irrelevant, but that processes compatible with an artifact's requirements lead to roughly similar results. The principle can also be applied to enterprise application architectures of modern business, where "function" encompasses the business processes which should be assisted by the enterprise architecture, or "form". If the architecture were to dictate how the business operates, then the business is likely to suffer from inflexibility and the inability to adapt to change. Service-oriented architecture enables an enterprise architect to rearrange the "form" of the architecture to meet the functional requirements of a business by adopting standards-based communication protocols which enable interoperability. This stands in conflict with Conway's law, which states from a social point of view that "form follows organization". Furthermore, domain-driven design postulates that structure (software architecture, design pattern, implementation) should emerge from constraints of the modeled domain (functional requirement). While "form" and "function" may be more or less explicit and invariant concepts to the many engineering doctrines, metaprogramming and the functional programming paradigm lend themselves very well to explore, blur and invert the essence of those two concepts.
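As a toy illustration of that last point (a hypothetical Python sketch, not drawn from any source cited here; the record helper and all names in it are invented for illustration), metaprogramming lets a program's "form" (a class definition) be generated directly from a statement of its "function" (the fields it must carry):

```python
# Hypothetical sketch: generating "form" (a class) from "function" (a spec).
def record(*fields):
    """Build a class whose structure follows the declared purpose."""
    def __init__(self, **kwargs):
        for name in fields:
            setattr(self, name, kwargs[name])
    return type("Record", (object,), {"__init__": __init__, "_fields": fields})

Point3D = record("x", "y", "z")   # the class's form follows its stated function
p = Point3D(x=1, y=2, z=3)
print(p.x, p.y, p.z)              # -> 1 2 3
```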
The agile software development movement espouses techniques such as "test-driven development", in which the engineer begins with a minimum unit of user-oriented functionality, creates an automated test for it, then implements the functionality and iterates this process. The result of, and argument for, this discipline is that the structure or "form" emerges from actual function; because it grows organically, the project becomes more adaptable in the long term, and of higher quality because of the functional base of automated tests. See also Truth to materials Aesthetics Design science (methodology) Separation of content and presentation User-centered design References Notes Bibliography External links "E. H. Gombrich’s adoption of the formula form follows function: A case of mistaken identity?" by Jan Michl "How form functions: On esthetics and Gestalt theory" by Roy Behrens "The Tall Office Building Artistically Considered" by Louis H. Sullivan in 1896. Aesthetics Architectural theory Industrial design Modernism
Form follows function
[ "Engineering" ]
2,209
[ "Industrial design", "Design engineering", "Architectural theory", "Design", "Architecture" ]
593,693
https://en.wikipedia.org/wiki/Point%20%28geometry%29
In geometry, a point is an abstract idealization of an exact position, without size, in physical space, or its generalization to other kinds of mathematical spaces. As zero-dimensional objects, points are usually taken to be the fundamental indivisible elements comprising the space, of which one-dimensional curves, two-dimensional surfaces, and higher-dimensional objects consist; conversely, a point can be determined by the intersection of two curves or three surfaces, called a vertex or corner. In classical Euclidean geometry, a point is a primitive notion, defined as "that which has no part". Points and other primitive notions are not defined in terms of other concepts, but only by certain formal properties, called axioms, that they must satisfy; for example, "there is exactly one straight line that passes through two distinct points". As physical diagrams, geometric figures are made with tools such as a compass, scriber, or pen, whose pointed tip can mark a small dot or prick a small hole representing a point, or can be drawn across a surface to represent a curve. Since the advent of analytic geometry, points are often defined or represented in terms of numerical coordinates. In modern mathematics, a space of points is typically treated as a set, a point set. An isolated point is an element of some subset of points which has some neighborhood containing no other points of the subset. Points in Euclidean geometry Points, considered within the framework of Euclidean geometry, are one of the most fundamental objects. Euclid originally defined the point as "that which has no part". In the two-dimensional Euclidean plane, a point is represented by an ordered pair (x, y) of numbers, where the first number conventionally represents the horizontal and is often denoted by x, and the second number conventionally represents the vertical and is often denoted by y. This idea is easily generalized to three-dimensional Euclidean space, where a point is represented by an ordered triplet (x, y, z) with the additional third number representing depth and often denoted by z. Further generalizations are represented by an ordered tuplet of n terms, (a₁, a₂, ..., aₙ), where n is the dimension of the space in which the point is located. Many constructs within Euclidean geometry consist of an infinite collection of points that conform to certain axioms. This is usually represented by a set of points; as an example, a line is an infinite set of points of the form L = {(a₁, a₂, ..., aₙ) : a₁c₁ + a₂c₂ + ... + aₙcₙ = d}, where c₁ through cₙ and d are constants and n is the dimension of the space. Similar constructions exist that define the plane, line segment, and other related concepts. A line segment consisting of only a single point is called a degenerate line segment. In addition to defining points and constructs related to points, Euclid also postulated a key idea about points, that any two points can be connected by a straight line. This is easily confirmed under modern extensions of Euclidean geometry, and had lasting consequences at its introduction, allowing the construction of almost all the geometric concepts known at the time. However, Euclid's postulation of points was neither complete nor definitive, and he occasionally assumed facts about points that did not follow directly from his axioms, such as the ordering of points on the line or the existence of specific points. In spite of this, modern expansions of the system serve to remove these assumptions. Dimension of a point There are several inequivalent definitions of dimension in mathematics.
In all of the common definitions, a point is 0-dimensional. Vector space dimension The dimension of a vector space is the maximum size of a linearly independent subset. In a vector space consisting of a single point (which must be the zero vector 0), there is no linearly independent subset. The zero vector is not itself linearly independent, because there is a non-trivial linear combination making it zero: 1 · 0 = 0. Topological dimension The topological dimension of a topological space X is defined to be the minimum value of n, such that every finite open cover A of X admits a finite open cover B of X which refines A in which no point is included in more than n+1 elements. If no such minimal n exists, the space is said to be of infinite covering dimension. A point is zero-dimensional with respect to the covering dimension because every open cover of the space has a refinement consisting of a single open set. Hausdorff dimension Let X be a metric space. If S ⊂ X and d ∈ [0, ∞), the d-dimensional Hausdorff content of S is the infimum of the set of numbers δ ≥ 0 such that there is some (indexed) collection of balls {B(xᵢ, rᵢ) : i ∈ I} covering S with rᵢ > 0 for each i ∈ I that satisfies Σᵢ rᵢᵈ < δ. The Hausdorff dimension of X is defined by dim_H(X) := inf{d ≥ 0 : C_H^d(X) = 0}. A point has Hausdorff dimension 0 because it can be covered by a single ball of arbitrarily small radius. Geometry without points Although the notion of a point is generally considered fundamental in mainstream geometry and topology, there are some systems that forgo it, e.g. noncommutative geometry and pointless topology. A "pointless" or "pointfree" space is defined not as a set, but via some structure (algebraic or logical respectively) which looks like a well-known function space on the set: an algebra of continuous functions or an algebra of sets respectively. More precisely, such structures generalize well-known spaces of functions in a way that the operation "take a value at this point" may not be defined. A further tradition starts from some books of A. N. Whitehead in which the notion of region is assumed as a primitive together with the one of inclusion or connection. Point masses and the Dirac delta function Often in physics and mathematics, it is useful to think of a point as having non-zero mass or charge (this is especially common in classical electromagnetism, where electrons are idealized as points with non-zero charge). The Dirac delta function, or δ function, is (informally) a generalized function on the real number line that is zero everywhere except at zero, with an integral of one over the entire real line. The delta function is sometimes thought of as an infinitely high, infinitely thin spike at the origin, with total area one under the spike, and physically represents an idealized point mass or point charge. It was introduced by theoretical physicist Paul Dirac. In the context of signal processing it is often referred to as the unit impulse symbol (or function). Its discrete analog is the Kronecker delta function which is usually defined on a finite domain and takes values 0 and 1. See also Accumulation point Affine space Boundary point Critical point Cusp Foundations of geometry Position (geometry) Point at infinity Point cloud Point process Point set registration Pointwise Singular point of a curve Whitehead point-free geometry Notes References 2004 paperback, Prometheus Books. Being the 1919 Tarner Lectures delivered at Trinity College. External links
Point (geometry)
[ "Mathematics" ]
1,370
[ "Point (geometry)" ]
593,718
https://en.wikipedia.org/wiki/Isorhythm
Isorhythm (from the Greek for "the same rhythm") is a musical technique using a repeating rhythmic pattern, called a talea, in at least one voice part throughout a composition. Taleae are typically applied to one or more melodic patterns of pitches or colores, which may be of the same or a different length from the talea. History and development Isorhythms first appear in French motets of the 13th century, such as in the Montpellier Codex. Although 14th-century theorists used the words talea and color—the latter in a variety of senses related to repetition and embellishment—the term isorhythm was coined in 1904 by musicologist Friedrich Ludwig, initially to describe the practice in 13th-century polyphony. Ludwig later extended its use to the 14th-century music of Guillaume de Machaut. Subsequently, Heinrich Besseler and other musicologists expanded its scope further as an organizing structural element in 14th- and early 15th-century compositions—in particular, motets. Some of the earliest works organized around isorhythms are early 14th-century motets by various composers in an illuminated manuscript of the Roman de Fauvel. Two of the era's most important composers of isorhythmic motets are Philippe de Vitry and Guillaume de Machaut. Machaut's second motet is an example of typical 14th-century use of isorhythm. Isorhythm is a logical outgrowth of the rhythmic modes that governed most late medieval polyphony. Discarding modal-rhythmic limitations, isorhythm became a significant organizing principle of much of 14th-century French polyphony by extending the talea of an initial section to the entire composition in conjunction with variation of a corresponding color. "The playful complexity of ...[taleae] that mixes mensuration and undergoes diminution by half—became a typical, even a defining feature of motets in the 14th century and beyond". Example The structural diagram shows the isorhythmic tenor voice of a late 14th-century motet, Sub arturo plebs / Fons citharizantium / In omnem terram by Johannes Alanus (c. late 14th century), featuring threefold isorhythmic diminution. Staff 1: preexisting Gregorian cantus firmus melody, from the first antiphon for the first nocturn of the commons for Apostles, In omnem terram exivit sonus eorum ('their voice has gone out into all the world'). The cantus firmus of the motet is a perfect fifth higher than the original chant; notes used for the tenor marked in red. Staff 2: isorhythmic tenor as notated in mensural notation. Numbers 1–3 and brackets indicate three rhythmically identical sequences (taleae). The three mensuration signs in the beginning define the pattern of diminution, indicating tempus perfectum cum prolatione maiore, tempus perfectum cum prolatione minore and tempus imperfectum cum prolatione minore, respectively. (In the manuscript these signs are in fact found at the end of the line, together with a repetition sign.) Staves 3–5: abbreviated transcription into modern notation. Each line represents one full repetition of the tenor's melody (color), including the three taleae in each, resulting in a nine-part structure. (Within each color, only the first few notes of each talea are rendered here.) The three mensuration signs in the line above correspond to the changes in time signature. During the decades following and into the 15th century, upper voices became increasingly involved in isorhythmic organization. Many compositions became isorhythmic in all voices, a practice known as panisorhythm.
In such compositions, the length of the color and talea are often unequal, causing the repetition of the melody in differing rhythmic patterns. As an example, if the "color" includes nine notes and the "talea" five, the "color" would have to be repeated five times before the two schemes again realign. Examples can be found in motets and Mass movements by John Dunstable, Johannes Ciconia and Guillaume Du Fay. A 15th-century mass by a composer known only as Pycard found in the Old Hall manuscript (named for the English town in which it was eventually discovered), demonstrates the high sophistication and complexity of panisorhythmic techniques. The lower parts have a recurring color and talea that unite the composition. The upper parts have four different talea, one for each major section of the composition. The rhythmic relationship between upper and lower parts changes as the music progresses. Each quarter note in the lower part equals 4 quarter notes in the upper parts, creating an uneven ratio of 4:9 that causes the parts to lose synchronization. The lower part then steadily contracts in a series of Pythagorean proportions (12:9:8:6) until the parts come back into alignment. As an analytical concept, isorhythm has proven valuable for understanding musical practices in other cultures; for example, the peyote culture songs of certain North American Indian groups and the music of India and Africa. See also Isometer References Sources Further reading Apel, Willi (1959). "Remarks about the Isorhythmic Motet". In Les Colloques de Wégimont II, 1955: L'Ars nova; recueil d'études sur la musique du XIVe siècle, edited by Suzanne Clercx-Lejeune, 139–148. Paris: Société d'Edition "Les Belles Lettres". Bent, Margaret (2008). "What Is Isorhythm?" In Quomodo Cantabimus Canticum? Studies in Honor of Edward H. Roesner, edited by David Butler Cannata, Gabriela Ilnitchi Currie, Rena Charnin Mueller, and John Louis Nádas, 121–143. Publications of the American Institute of Musicology: Miscellanea, No. 7. Middleton, Wisconsin: American Institute of Musicology. . Bent, Margaret, and Andrew Wathey (2001). "Vitry, Philippe de [Vitriaco, Vittriaco]". The New Grove Dictionary of Music and Musicians, second edition, edited by Stanley Sadie and John Tyrrell. London: Macmillan. Cumming, Julie (2003). The Motet in the Age of Du Fay, revised reprint edition. Cambridge and New York: Cambridge University Press. . Earp, Lawrence (2018). "Isorhythm". In A Critical Companion to Medieval Motets, edited by Jared C. Hartt, 77–101. Woodbridge: Boydell. . Harbinson, Denis (1966). "Isorhythmic Technique in the Early Motets". Music & Letters 47, No. 2 (April): 100–109. Hartt, Jared C. (2010). "Tonal and Structural Implications of Isorhythmic Design in Guillaume de Machaut's Tenors". Theory and Practice 35:57–94. Hoppin, Richard H. (1978). Medieval Music. New York: W. W. Norton. . Leech-Wilkinson, Daniel (1982–83). "Related Motets from Fourteenth-Century France". Proceedings of the Royal Musical Association 109:1–22. Leech-Wilkinson, Daniel (1989). Compositional Procedure in the Four-Part Isorhythmic Works of Philippe de Vitry and his Contemporaries, 2 vols. Outstanding dissertations from British Universities. New York and London: Garland University Press. Ludwig, Friedrich (1903–04). "Die 50 Beispiele Coussemaker's aus der Handschrift von Montpellier". Sammelbände der Internationalen Musik-Gesellschaft 5:177–224. Planchart, Alejandro Enrique (2013). "Proportion and Symbolism in Some Ars Antiqua Motets". 
Musica Disciplina 58:231–264. Sanders, Ernest H. (2001). "Talea". The New Grove Dictionary of Music and Musicians, second edition, edited by Stanley Sadie and John Tyrrell. London: Macmillan. Zayaruznaya, Anna (2015). The Monstrous New Art: Divided Forms in the Late Medieval Motet. Cambridge and New York: Cambridge University Press. . External links Isorhythm in Medieval Music by José Rodríguez Alvira Rhythm and meter Musical techniques Medieval music theory
Isorhythm
[ "Physics" ]
1,812
[ "Spacetime", "Rhythm and meter", "Physical quantities", "Time" ]
593,737
https://en.wikipedia.org/wiki/S/MIME
S/MIME (Secure/Multipurpose Internet Mail Extensions) is a standard for public-key encryption and signing of MIME data. S/MIME is on an IETF standards track and defined in a number of documents, most importantly RFC 8551. It was originally developed by RSA Data Security, and the original specification used the IETF MIME specification with the de facto industry standard PKCS #7 secure message format. Change control to S/MIME has since been vested in the IETF, and the specification is now layered on Cryptographic Message Syntax (CMS), an IETF specification that is identical in most respects with PKCS #7. S/MIME functionality is built into the majority of modern email software and interoperates between them. Since it is built on CMS, MIME can also hold an advanced digital signature. Function S/MIME provides the following cryptographic security services for electronic messaging applications: Authentication Message integrity Non-repudiation of origin (using digital signatures) Privacy Data security (using encryption) S/MIME specifies the MIME type application/pkcs7-mime (smime-type "enveloped-data") for data enveloping (encrypting) where the whole (prepared) MIME entity to be enveloped is encrypted and packed into an object which subsequently is inserted into an application/pkcs7-mime MIME entity. S/MIME certificates Before S/MIME can be used in any of the above applications, one must obtain and install an individual key/certificate either from one's in-house certificate authority (CA) or from a public CA. The accepted best practice is to use separate private keys (and associated certificates) for signature and for encryption, as this permits escrow of the encryption key without compromise to the non-repudiation property of the signature key. Encryption requires having the destination party's certificate on store (which is typically automatic upon receiving a message from the party with a valid signing certificate). While it is technically possible to send a message encrypted (using the destination party certificate) without having one's own certificate to digitally sign, in practice, the S/MIME clients will require the user to install their own certificate before they allow encrypting to others. This is necessary so the message can be encrypted for both recipient and sender, and a copy of the message can be kept (in the sent folder) and be readable for the sender. A typical basic ("class 1") personal certificate verifies the owner's "identity" only insofar as it declares that the sender is the owner of the "From:" email address in the sense that the sender can receive email sent to that address, and so merely proves that an email received really did come from the "From:" address given. It does not verify the person's name or business name. If a sender wishes to enable email recipients to verify the sender's identity in the sense that a received certificate name carries the sender's name or an organization's name, the sender needs to obtain a certificate ("class 2") from a CA, who carries out a more in-depth identity verification process, and this involves making inquiries about the would-be certificate holder. For more detail on authentication, see digital signature. Depending on the policy of the CA, the certificate and all its contents may be posted publicly for reference and verification. This makes the name and email address available for all to see and possibly search for. Other CAs only post serial numbers and revocation status, which does not include any of the personal information.
The latter, at a minimum, is mandatory to uphold the integrity of the public key infrastructure. S/MIME Working Group of CA/Browser Forum In 2020, the S/MIME Certificate Working Group of the CA/Browser Forum was chartered to create a baseline requirement applicable to CAs that issue S/MIME certificates used to sign, verify, encrypt, and decrypt email. That effort is intended to create standards including: Certificate profiles for S/MIME certificates and CAs that issue them Verification of control over email addresses Identity validation Key management, certificate lifecycle, CA operational practices, etc. Version 1 of the Baseline Requirements for the Issuance and Management of Publicly‐Trusted S/MIME Certificates was published on January 1, 2023 by the CA/Browser Forum. It defined four types of S/MIME certificate standards. Mailbox‐validated, Organization‐validated, Sponsor‐validated and Individual‐validated. Obstacles to deploying S/MIME in practice S/MIME is sometimes considered not properly suited for use via webmail clients. Though support can be hacked into a browser, some security practices require the private key to be kept accessible to the user but inaccessible from the webmail server, complicating the key advantage of webmail: providing ubiquitous accessibility. This issue is not fully specific to S/MIME: other secure methods of signing webmail may also require a browser to execute code to produce the signature; exceptions are PGP Desktop and versions of GnuPG, which will grab the data out of the webmail, sign it by means of a clipboard, and put the signed data back into the webmail page. From a security standpoint, this is a more secure solution. S/MIME is tailored for end-to-end security. Logically it is not possible to have a third party inspecting email for malware and also have secure end-to-end communications. Encryption will not only encrypt the messages, but also the malware. Thus if mail is not scanned for malware anywhere but at the end points, such as a company's gateway, encryption will defeat the detector and successfully deliver the malware. The only solution to this is to perform malware scanning on end user stations after decryption. Other solutions do not provide end-to-end trust as they require keys to be shared by a third party for the purpose of detecting malware. Examples of this type of compromise are: Solutions which store private keys on the gateway server so decryption can occur prior to the gateway malware scan. These unencrypted messages are then delivered to end users. Solutions which store private keys on malware scanners so that they can inspect message content; the encrypted message is then relayed to its destination. Due to the requirement of a certificate for implementation, not all users can take advantage of S/MIME, as some may wish to encrypt a message without the involvement or administrative overhead of certificates, for example by encrypting the message with a public/private key pair instead. Any message that an S/MIME email client stores encrypted cannot be decrypted if the applicable key pair's private key is unavailable or otherwise unusable (e.g., the certificate has been deleted or lost or the private key's password has been forgotten). However, an expired, revoked, or untrusted certificate will remain usable for cryptographic purposes. Indexing of encrypted messages' clear text may not be possible with all email clients.
Neither of these potential dilemmas is specific to S/MIME but rather to cipher text in general, and they do not apply to S/MIME messages that are only signed and not encrypted. S/MIME signatures are usually "detached signatures": the signature information is separate from the text being signed. The MIME type for this is multipart/signed with the second part having a MIME subtype of application/(x-)pkcs7-signature. Mailing list software is notorious for changing the textual part of a message and thereby invalidating the signature; however, this problem is not specific to S/MIME, and a digital signature only reveals that the signed content has been changed. Security issues On May 13, 2018, the Electronic Frontier Foundation (EFF) announced critical vulnerabilities in S/MIME, together with an obsolete form of PGP that is still used, in many email clients. Dubbed EFAIL, the bug required significant coordinated effort by many email client vendors to fix. Mitigations for both Efail vulnerabilities have since been addressed in the security considerations section of RFC 8551. See also CryptoGraf DomainKeys Identified Mail for server-handled email message signing. Email encryption EFAIL, a security issue in S/MIME GNU Privacy Guard (GPG) Pretty Good Privacy (PGP), especially "MIME Security with OpenPGP" (RFC 3156). References External links RFC 5652: Cryptographic Message Syntax (CMS) RFC 3370: Cryptographic Message Syntax (CMS) Algorithms RFC 5751: Secure/Multipurpose Internet Mail Extensions (S/MIME) Version 3.2 Message Specification RFC 8551: Secure/Multipurpose Internet Mail Extensions (S/MIME) Version 4.0 Message Specification Microsoft Exchange Server: Understanding S/MIME (high-level overview). Cryptography Computer security standards Internet mail protocols Email authentication MIME
S/MIME
[ "Mathematics", "Technology", "Engineering" ]
1,892
[ "Cybersecurity engineering", "Cryptography", "Computer security standards", "Applied mathematics", "Computer standards" ]
593,757
https://en.wikipedia.org/wiki/NetLogo
NetLogo is a programming language and integrated development environment (IDE) for agent-based modeling. About NetLogo was designed by Uri Wilensky, in the spirit of the programming language Logo, to be "low threshold and no ceiling". It teaches programming concepts using agents in the form of turtles, patches, links and the observer. NetLogo was designed with multiple audiences in mind, in particular: teaching children in the education community, and for domain experts without a programming background to model related phenomena. Many scientific articles have been published using NetLogo. The NetLogo environment enables exploration of emergent phenomena. It comes with an extensive models library including models in a variety of domains, such as economics, biology, physics, chemistry, psychology, and system dynamics. NetLogo allows exploration by modifying switches, sliders, choosers, inputs, and other interface elements. Beyond exploring, NetLogo allows authoring new models and modifying extant models. NetLogo is open source and freely available from the NetLogo website. It is in use in a wide variety of educational contexts from elementary school to graduate school. Many teachers make use of NetLogo in their curricula. NetLogo was designed and authored by Uri Wilensky, director of Northwestern University's Center for Connected Learning and Computer-Based Modeling (CCL). Other features In addition to agent-based modeling, NetLogo also includes basic support for dynamic system modeling. Books Several books have been published about NetLogo, available both in print and online. Online courses Several massive open online courses are being offered that use NetLogo for assignments and/or demonstrations. Technical foundation NetLogo is free and open-source software, released under a GNU General Public License (GPL). Commercial licenses are also available. It is written in Scala and Java and runs on the Java virtual machine (JVM). At its core is a hybrid interpreter/compiler that partially compiles user code to JVM bytecode. NetLogo Web is a version that runs on JavaScript, instead of the JVM, so models may be run in a web browser. However, it does not have all features of the desktop version, and the official website advises that the "desktop version of NetLogo is recommended for most uses". Examples A simple multiagent model in NetLogo is the Wolf-Sheep Predation model. It models the population growth of a predator/prey system over time. It has the following characteristics: There are two breeds of turtles, called sheep and wolves. Sheep and wolves move randomly and have limited energy. Wolves and sheep lose energy by moving. If a wolf or sheep has zero energy, it dies. Sheep gain energy by eating grass. Wolves gain energy by eating sheep. Both wolves and sheep can reproduce, sharing energy with their offspring. HubNet HubNet is a technology that uses NetLogo to run participatory simulations in the classroom. In a participatory simulation, a whole group of users takes part in enacting the behavior of a system. Using an individual device, such as a networked computer or Texas Instruments graphing calculator, each user acts as a separate, independent agent. One example of a HubNet activity is Tragedy of the Commons, which models the economic problem called the tragedy of the commons.
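To make the Wolf-Sheep rules listed above concrete, here is a minimal agent-based sketch in plain Python rather than NetLogo (an illustration only, not the actual library model; the grid size, energy values, and reproduction threshold are made-up assumptions, and grass is treated as always available rather than regrowing):

```python
import random

GRID = 20           # world is a GRID x GRID torus (illustrative value)
GRASS_ENERGY = 4    # energy a sheep gains by grazing (assumed)
SHEEP_ENERGY = 20   # energy a wolf gains by eating a sheep (assumed)
REPRODUCE_AT = 50   # energy threshold for reproduction (assumed)

class Agent:
    def __init__(self, kind, energy):
        self.kind, self.energy = kind, energy
        self.x, self.y = random.randrange(GRID), random.randrange(GRID)

    def move(self):
        # Random movement costs energy, as in the model's rules.
        self.x = (self.x + random.choice([-1, 0, 1])) % GRID
        self.y = (self.y + random.choice([-1, 0, 1])) % GRID
        self.energy -= 1

def step(agents):
    for a in agents:
        a.move()
    # Track one sheep per cell; a simplification of the real model.
    sheep_at = {(a.x, a.y): a for a in agents if a.kind == "sheep"}
    for a in agents:
        if a.kind == "sheep":
            a.energy += GRASS_ENERGY           # sheep gain energy from grass
        else:
            prey = sheep_at.pop((a.x, a.y), None)
            if prey is not None:               # wolves gain energy from sheep
                prey.energy = -10**6           # the eaten sheep dies
                a.energy += SHEEP_ENERGY
    offspring = []
    for a in agents:
        if a.energy >= REPRODUCE_AT:           # reproduce, sharing energy
            a.energy //= 2
            offspring.append(Agent(a.kind, a.energy))
    return [a for a in agents + offspring if a.energy > 0]  # zero energy: death

agents = [Agent("sheep", 10) for _ in range(100)] + [Agent("wolf", 20) for _ in range(15)]
for tick in range(200):
    agents = step(agents)
```

In NetLogo itself the analogous behavior is expressed declaratively with turtle, patch, and breed primitives, and the library model additionally simulates grass regrowth.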
See also Comparison of agent-based modeling software References External links CCL NetLogo NetLogo Models Library Other NetLogo-related resources on the web NetLogo news via Twitter Discussion group for users hosted by Google Discussion group for developers hosted by Google NetLogo models of multiagent systems David M. Holmes' website, containing beginner material for new NetLogo users Logo programming language family Agent-based programming languages Agent-based software Pedagogic integrated development environments Java platform Free software programmed in Scala Simulation programming languages Simulation software
NetLogo
[ "Technology" ]
820
[ "Computing platforms", "Java platform" ]
593,773
https://en.wikipedia.org/wiki/Unimodular%20matrix
In mathematics, a unimodular matrix M is a square integer matrix having determinant +1 or −1. Equivalently, it is an integer matrix that is invertible over the integers: there is an integer matrix N that is its inverse (these are equivalent under Cramer's rule). Thus every equation Mx = b, where M and b both have integer components and M is unimodular, has an integer solution. The n × n unimodular matrices form a group called the n × n general linear group over the integers Z, which is denoted GLn(Z). Examples of unimodular matrices Unimodular matrices form a subgroup of the general linear group under matrix multiplication, i.e. the following matrices are unimodular: Identity matrix The inverse of a unimodular matrix The product of two unimodular matrices Other examples include: Pascal matrices Permutation matrices the three transformation matrices in the ternary tree of primitive Pythagorean triples Certain transformation matrices for rotation, shearing (both with determinant 1) and reflection (determinant −1). The unimodular matrix used (possibly implicitly) in lattice reduction and in the Hermite normal form of matrices. The Kronecker product of two unimodular matrices is also unimodular. This follows since det(A ⊗ B) = (det A)^q (det B)^p, where p and q are the dimensions of A and B, respectively. Total unimodularity A totally unimodular matrix (TU matrix) is a matrix for which every square submatrix has determinant 0, +1 or −1. A totally unimodular matrix need not be square itself. From the definition it follows that any submatrix of a totally unimodular matrix is itself totally unimodular (TU). Furthermore it follows that any TU matrix has only 0, +1 or −1 entries. The converse is not true, i.e., a matrix with only 0, +1 or −1 entries is not necessarily totally unimodular. A matrix is TU if and only if its transpose is TU. Totally unimodular matrices are extremely important in polyhedral combinatorics and combinatorial optimization since they give a quick way to verify that a linear program is integral (has an integral optimum, when any optimum exists). Specifically, if A is TU and b is integral, then linear programs of forms like min{cx : Ax ≥ b, x ≥ 0} or max{cx : Ax ≤ b} have integral optima, for any c. Hence if A is totally unimodular and b is integral, every extreme point of the feasible region (e.g. {x : Ax ≤ b}) is integral and thus the feasible region is an integral polyhedron. Common totally unimodular matrices 1. The unoriented incidence matrix of a bipartite graph, which is the coefficient matrix for bipartite matching, is totally unimodular (TU). (The unoriented incidence matrix of a non-bipartite graph is not TU.) More generally, in the appendix to a paper by Heller and Tompkins, A.J. Hoffman and D. Gale prove the following. Let A be an m by n matrix whose rows can be partitioned into two disjoint sets B and C. Then the following four conditions together are sufficient for A to be totally unimodular: Every entry in A is 0, +1, or −1; Every column of A contains at most two non-zero (i.e., +1 or −1) entries; If two non-zero entries in a column of A have the same sign, then the row of one is in B, and the other in C; If two non-zero entries in a column of A have opposite signs, then the rows of both are in B, or both in C. It was realized later that these conditions define an incidence matrix of a balanced signed graph; thus, this example says that the incidence matrix of a signed graph is totally unimodular if the signed graph is balanced. The converse is valid for signed graphs without half edges (this generalizes the property of the unoriented incidence matrix of a graph).
2. The constraints of maximum flow and minimum cost flow problems yield a coefficient matrix with these properties (and with empty C). Thus, such network flow problems with bounded integer capacities have an integral optimal value. Note that this does not apply to multi-commodity flow problems, in which it is possible to have fractional optimal value even with bounded integer capacities. 3. The consecutive-ones property: if A is (or can be permuted into) a 0-1 matrix in which for every row, the 1s appear consecutively, then A is TU. (The same holds for columns since the transpose of a TU matrix is also TU.) 4. Every network matrix is TU. The rows of a network matrix correspond to a tree T = (V, R), each of whose arcs has an arbitrary orientation (it is not necessary that there exist a root vertex r such that the tree is "rooted into r" or "out of r"). The columns correspond to another set C of arcs on the same vertex set V. To compute the entry at row R and column C = (s, t), look at the s-to-t path P in T; then the entry is: +1 if arc R appears forward in P, −1 if arc R appears backwards in P, 0 if arc R does not appear in P. See more in Schrijver (2003). 5. Ghouila-Houri showed that a matrix is TU iff for every subset R of rows, there is an assignment of signs to rows so that the signed sum (which is a row vector of the same width as the matrix) has all its entries in {0, +1, −1} (i.e. the row-submatrix has discrepancy at most one). This and several other if-and-only-if characterizations are proven in Schrijver (1998). 6. Hoffman and Kruskal proved the following theorem. Suppose G is a directed graph without 2-dicycles, P is the set of all dipaths in G, and A is the 0-1 incidence matrix of the vertices of G versus P. Then A is totally unimodular if and only if every simple arbitrarily-oriented cycle in G consists of alternating forwards and backwards arcs. 7. Suppose a matrix has 0-(±1) entries and in each column, the entries are non-decreasing from top to bottom (so all −1s are on top, then 0s, then 1s are on the bottom). Fujishige showed that the matrix is TU iff every 2-by-2 submatrix has determinant in {0, +1, −1}. 8. Seymour (1980) proved a full characterization of all TU matrices, which we describe here only informally. Seymour's theorem is that a matrix is TU if and only if it is a certain natural combination of some network matrices and some copies of a particular 5-by-5 TU matrix. Concrete examples 1. The following matrix is totally unimodular: This matrix arises as the coefficient matrix of the constraints in the linear programming formulation of the maximum flow problem on the following network: 2. Any matrix of the form is not totally unimodular, since it has a square submatrix of determinant −2. Abstract linear algebra Abstract linear algebra considers matrices with entries from any commutative ring R, not limited to the integers. In this context, a unimodular matrix is one that is invertible over the ring; equivalently, whose determinant is a unit. This group is denoted GLn(R). A rectangular k-by-m matrix is said to be unimodular if it can be extended with m − k rows in R^m to a unimodular square matrix. Over a field, unimodular has the same meaning as non-singular. Unimodular here refers to matrices with coefficients in some ring (often the integers) which are invertible over that ring, and one uses non-singular to mean matrices that are invertible over the field. See also Balanced matrix Regular matroid Special linear group Total dual integrality Hermite normal form Notes References Alexander Schrijver (1998), Theory of Linear and Integer Programming.
John Wiley & Sons, (mathematical) External links Mathematical Programming Glossary by Harvey J. Greenberg Unimodular Matrix from MathWorld Software for testing total unimodularity by M. Walter and K. Truemper Matrices
Unimodular matrix
[ "Mathematics" ]
1,732
[ "Matrices (mathematics)", "Mathematical objects" ]
593,847
https://en.wikipedia.org/wiki/Jean-Baptiste%20Dumas
Jean Baptiste André Dumas (14 July 1800 – 10 April 1884) was a French chemist, best known for his works on organic analysis and synthesis, as well as the determination of atomic weights (relative atomic masses) and molecular weights by measuring vapor densities. He also developed a method for the analysis of nitrogen in compounds. Biography Dumas was born in Alès (Gard), and became an apprentice to an apothecary in his native town. In 1816, he moved to Geneva, where he attended lectures by M. A. Pictet in physics, C. G. de la Rive in chemistry, and A. P. de Candolle in botany, and before he had reached his majority, he was engaged with Pierre Prévost in original work on problems of physiological chemistry and embryology. In 1822, he moved to Paris, acting on the advice of Alexander von Humboldt, where he became professor of chemistry, initially at the Lyceum, later (1835) at the École polytechnique. He was one of the founders of the École centrale des arts et manufactures (later named École centrale Paris) in 1829. In 1832 Dumas became a member of the French Academy of Sciences. From 1868 until his death in 1884 he would serve the academy as the permanent secretary for its department of Physical Sciences. In 1838, Dumas was elected a foreign member of the Royal Swedish Academy of Sciences. The same year he became correspondent of the Royal Institute of the Netherlands and, when that became the Royal Netherlands Academy of Arts and Sciences in 1851, he joined as a foreign member. Dumas was president of Société d'encouragement pour l'industrie nationale from 1845 to 1864. He was elected to the American Philosophical Society in 1860. After 1848, he exchanged much of his scientific work for ministerial posts under Napoléon III. He became a member of the National Legislative Assembly. He acted as minister of agriculture and commerce for a few months in 1850–1851, and subsequently became a senator, president of the municipal council of Paris, and master of the French mint, but his official career came to a sudden end with the fall of the Second Empire. Dumas was a devout Catholic who would often defend Christian views against critics. Dumas died at Cannes in 1884, and is buried at the Montparnasse Cemetery in Paris, in a large tomb near the back wall. His is one of the 72 names inscribed on the Eiffel Tower. Scientific work Dumas was one of the first to criticise the electro-chemical doctrines of Jöns Jakob Berzelius, which, at the time his work began, were widely accepted as the true theory of the constitution of compound bodies, and opposed a unitary view to the dualistic conception of the Swedish chemist. In a paper on the atomic theory, published in 1826, he anticipated to a remarkable extent some ideas which are frequently supposed to belong to a later period; and the continuation of these studies led him to the ideas about substitution (metalepsis) which were developed about 1839 into the theory (Older Style Theory) that in organic chemistry there are certain types which remain unchanged even when their hydrogen is replaced by an equivalent quantity of a halide element. The classification of organic compounds into homologous series was advanced as one consequence of his researches into the acids generated by the oxidation of the alcohols. Dumas also showed that kidneys remove urea from the blood. Vapour densities and atomic masses Dumas perfected the method of measuring vapor densities which was also important in determining atomic weights (see below).
A known amount of the substance being analyzed was put into a previously weighed glass bulb, which was then sealed and heated in water to vaporize the substance. The pressure was recorded with a barometer, and the bulb was allowed to cool to determine the mass of the vapor. The universal gas law was then used to determine the moles of gas within the bulb. In an 1826 paper, he described his method for ascertaining vapour densities, and the redeterminations which he undertook by its aid of the atomic weights of carbon and oxygen proved the forerunners of a long series which included some thirty of the elements, the results being mostly published in 1858–1860. He showed "in all elastic fluids observed under the same conditions, the molecules are placed at equal distances". He also determined the atomic weight of samarium, one of the rare earth elements. Dumas established new values for the atomic mass of thirty elements, setting the value for hydrogen to 1. Determination of nitrogen In 1833, Dumas developed a method for estimating the amount of nitrogen in an organic compound, laying the foundation for modern methods of analysis. He made important revisions to the existing combustion methods with a sophisticated pneumatic trough. These revisions were the flushing of the combustion tube with carbon dioxide and the addition of potassium hydroxide in the pneumatic trough. Flushing with carbon dioxide eliminated the nitrogen present in the air that previously occupied the combustion tube, eliminating the need for correction due to nitrogen in the air. The potassium hydroxide dissolved the passing carbon dioxide gas, which left nitrogen as the only gas in the collection tube. Theory of substitution and theory of chemical types At the Tuileries palace in Paris, guests at a soirée began reacting adversely to a gas suddenly emitted by the candles. Alexandre Brongniart asked his son-in-law, Dumas, to investigate. Dumas found that the coughing and dangerous fumes were caused by chlorine present in the candle wax. Chlorine had been used to whiten the candles, and Dumas concluded that it must have combined during the candle-making process. This led Dumas to investigate the behavior of chlorine substitution in other chemical compounds. One of the most important research projects of Dumas was that on the action of chlorine on acetic acid to form trichloroacetic acid – a derivative of essentially the same character as the acetic acid itself, though a stronger acid. Dumas extended this to a theory (sometimes considered a law) which states that in an organic compound, a halogen atom may be substituted for a hydrogen atom. In his published paper on the subject, Dumas introduced his theory of types. Since the trichloroacetic acid retained similar properties to acetic acid, Dumas reasoned that there were certain chemical structures that remained comparatively unchanged even if one atom were changed within them. The basis of this theory rests in the natural history of organism classification, which Dumas learned under the botanist de Candolle. This new theory challenged Berzelius's previous theory of electrochemical dualism and was also a competitor of radical theory. Family He married Herminie Brongniart, daughter of Alexandre Brongniart, in 1826. See also Dumas method Dumas method of molecular weight determination References Further reading Tiffeneau, Marc (1934). Jean-Baptiste Dumas (1800–1884), Paris, Laboratoires G. Beytout.
External links Jean-Baptiste Dumas Biography, Pasteur Brewing An essay by Josiah Parsons Cooke Reprinted from the Proceedings of the American Academy of Arts and Sciences, vol. xix, 1883–'84 Jean-Baptiste-André Dumas Science Science Vol. III No.72 Published by the American Association for the Advancement of Science 19th-century French chemists Ministers of agriculture and commerce of France 1800 births 1884 deaths Members of the Académie Française Members of the Royal Netherlands Academy of Arts and Sciences Members of the Royal Swedish Academy of Sciences Foreign members of the Royal Society Foreign associates of the National Academy of Sciences Officers of the French Academy of Sciences Recipients of the Copley Medal Recipients of the Pour le Mérite (civil class) People from Alès French Roman Catholics Burials at Montparnasse Cemetery People involved with the periodic table
Jean-Baptiste Dumas
[ "Chemistry" ]
1,597
[ "Periodic table", "People involved with the periodic table" ]
593,904
https://en.wikipedia.org/wiki/NUTS%20statistical%20regions%20of%20Finland
In the NUTS (Nomenclature of Territorial Units for Statistics) codes of Finland (FI), the codes are organized in three levels. NUTS codes 2024 version. Local administrative units Below the NUTS levels, there are two LAU (Local Administrative Units) levels. The LAU codes of Finland are available for download. See also List of Finnish regions by Human Development Index Subdivisions of Finland ISO 3166-2 codes of Finland FIPS region codes of Finland Sources Hierarchical list of the Nomenclature of territorial units for statistics - NUTS and the Statistical regions of Europe Overview map of EU Countries - NUTS level 1 SUOMI / FINLAND - NUTS level 2 SUOMI / FINLAND - NUTS level 3 Correspondence between the NUTS levels and the national administrative units List of current NUTS codes Download current NUTS codes (ODS format) Provinces of Finland, Statoids.com References Finland Nuts
NUTS statistical regions of Finland
[ "Mathematics" ]
166
[ "Nomenclature of Territorial Units for Statistics", "Statistical concepts", "Statistical regions" ]
593,908
https://en.wikipedia.org/wiki/Propagation%20of%20uncertainty
In statistics, propagation of uncertainty (or propagation of error) is the effect of variables' uncertainties (or errors, more specifically random errors) on the uncertainty of a function based on them. When the variables are the values of experimental measurements they have uncertainties due to measurement limitations (e.g., instrument precision) which propagate due to the combination of variables in the function. The uncertainty u can be expressed in a number of ways. It may be defined by the absolute error Δx. Uncertainties can also be defined by the relative error Δx/x, which is usually written as a percentage. Most commonly, the uncertainty on a quantity is quantified in terms of the standard deviation, σ, which is the positive square root of the variance. The value of a quantity and its error are then expressed as an interval x ± u. However, the most general way of characterizing uncertainty is by specifying its probability distribution. If the probability distribution of the variable is known or can be assumed, in theory it is possible to get any of its statistics. In particular, it is possible to derive confidence limits to describe the region within which the true value of the variable may be found. For example, the 68% confidence limits for a one-dimensional variable belonging to a normal distribution are approximately ± one standard deviation from the central value x, which means that the region x ± σ will cover the true value in roughly 68% of cases. If the uncertainties are correlated then covariance must be taken into account. Correlation can arise from two different sources. First, the measurement errors may be correlated. Second, when the underlying values are correlated across a population, the uncertainties in the group averages will be correlated. In a general context where a nonlinear function modifies the uncertain parameters (correlated or not), the standard tools to propagate uncertainty, and infer resulting quantity probability distribution/statistics, are sampling techniques from the Monte Carlo method family. For very large datasets or complex functions, the calculation of the error propagation may be very expensive so that a surrogate model or a parallel computing strategy may be necessary. In some particular cases, the uncertainty propagation calculation can be done through simplistic algebraic procedures. Some of these scenarios are described below. Linear combinations Let {f_k(x_1, x_2, ..., x_n)} be a set of m functions, which are linear combinations of n variables x_1, x_2, ..., x_n with combination coefficients A_k1, A_k2, ..., A_kn (k = 1, ..., m): f_k = Σ_i A_ki x_i, or in matrix notation, f = Ax. Also let the variance–covariance matrix of x = (x_1, ..., x_n) be denoted by Σ^x and let the mean value be denoted by μ: Σ^x = E[(x − μ)(x − μ)^T], where (x − μ)(x − μ)^T is the outer product. Then, the variance–covariance matrix of f is given by Σ^f = A Σ^x A^T. In component notation, the equation reads Σ^f_ij = Σ_k Σ_l A_ik Σ^x_kl A_jl. This is the most general expression for the propagation of error from one set of variables onto another. When the errors on x are uncorrelated, the general expression simplifies to Σ^f_ij = Σ_k A_ik σ_k² A_jk, where σ_k² = Σ^x_kk is the variance of the k-th element of the x vector. Note that even though the errors on x may be uncorrelated, the errors on f are in general correlated; in other words, even if Σ^x is a diagonal matrix, Σ^f is in general a full matrix.
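As a quick numerical illustration of the matrix rule Σ^f = A Σ^x A^T just stated (a sketch with made-up example matrices, not values taken from any source), the propagation can be evaluated directly with NumPy:

```python
import numpy as np

# Linear combination f = A x with a known covariance matrix for x.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])        # combination coefficients (illustrative)
cov_x = np.array([[0.5, 0.1],
                  [0.1, 0.2]])    # variance-covariance matrix of x

cov_f = A @ cov_x @ A.T           # propagated covariance: Sigma^f = A Sigma^x A^T
print(cov_f)  # off-diagonal entries are non-zero: the errors on f are correlated
```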
The general expressions for a scalar-valued function f are a little simpler (here a is a row vector): f = Σ_i a_i x_i = ax, σ_f² = Σ_i Σ_j a_i a_j Σ^x_ij = a Σ^x a^T. Each covariance term can be expressed in terms of the correlation coefficient ρ_ij by Σ^x_ij = ρ_ij σ_i σ_j, so that an alternative expression for the variance of f is σ_f² = Σ_i a_i² σ_i² + 2 Σ_i Σ_{j>i} a_i a_j ρ_ij σ_i σ_j. In the case that the variables in x are uncorrelated, this simplifies further to σ_f² = Σ_i a_i² σ_i². In the simple case of identical coefficients and variances, we find σ_f = √n |a| σ. For the arithmetic mean, a_i = 1/n, the result is the standard error of the mean: σ_f = σ/√n. Non-linear combinations When f is a set of non-linear combination of the variables x, an interval propagation could be performed in order to compute intervals which contain all consistent values for the variables. In a probabilistic approach, the function f must usually be linearised by approximation to a first-order Taylor series expansion, though in some cases, exact formulae can be derived that do not depend on the expansion as is the case for the exact variance of products. The Taylor expansion would be: f_k ≈ f_k⁰ + Σ_i (∂f_k/∂x_i) x_i, where ∂f_k/∂x_i denotes the partial derivative of f_k with respect to the i-th variable, evaluated at the mean value of all components of vector x. Or in matrix notation, f ≈ f⁰ + Jx, where J is the Jacobian matrix. Since f⁰ is a constant it does not contribute to the error on f. Therefore, the propagation of error follows the linear case, above, but replacing the linear coefficients, A_ki and A_kj, by the partial derivatives, ∂f_k/∂x_i and ∂f_k/∂x_j. In matrix notation, Σ^f = J Σ^x J^T. That is, the Jacobian of the function is used to transform the rows and columns of the variance-covariance matrix of the argument. Note this is equivalent to the matrix expression for the linear case with J = A. Simplification Neglecting correlations or assuming independent variables yields a common formula among engineers and experimental scientists to calculate error propagation, the variance formula: σ_f = √((∂f/∂x)² σ_x² + (∂f/∂y)² σ_y² + (∂f/∂z)² σ_z² + ...), where σ_f represents the standard deviation of the function f, σ_x represents the standard deviation of x, σ_y represents the standard deviation of y, and so forth. This formula is based on the linear characteristics of the gradient of f and therefore it is a good estimation for the standard deviation of f as long as σ_x, σ_y, σ_z, ... are small enough. Specifically, the linear approximation of f has to be close to f inside a neighbourhood of radius σ_x, σ_y, σ_z, .... Example Any non-linear differentiable function, f(a, b), of two variables, a and b, can be expanded as f ≈ f⁰ + (∂f/∂a) a + (∂f/∂b) b. If we take the variance on both sides and use the formula for the variance of a linear combination of variables then we obtain σ_f² ≈ (∂f/∂a)² σ_a² + (∂f/∂b)² σ_b² + 2 (∂f/∂a)(∂f/∂b) σ_ab, where σ_f is the standard deviation of the function f, σ_a is the standard deviation of a, σ_b is the standard deviation of b and σ_ab is the covariance between a and b. In the particular case that f = ab, ∂f/∂a = b and ∂f/∂b = a. Then σ_f² ≈ b² σ_a² + a² σ_b² + 2ab σ_ab, or (σ_f/f)² ≈ (σ_a/a)² + (σ_b/b)² + 2 (σ_a/a)(σ_b/b) ρ_ab, where ρ_ab is the correlation between a and b. When the variables a and b are uncorrelated, ρ_ab = 0. Then (σ_f/f)² = (σ_a/a)² + (σ_b/b)². Caveats and warnings Error estimates for non-linear functions are biased on account of using a truncated series expansion. The extent of this bias depends on the nature of the function. For example, the bias on the error calculated for log(1+x) increases as x increases, since the expansion to x is a good approximation only when x is near zero. For highly non-linear functions, there exist five categories of probabilistic approaches for uncertainty propagation; see Uncertainty quantification for details. Reciprocal and shifted reciprocal In the special case of the inverse or reciprocal 1/B, where B follows a standard normal distribution, the resulting distribution is a reciprocal standard normal distribution, and there is no definable variance.
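The absence of a finite variance can be checked empirically with a small Monte Carlo experiment (a sketch; the sample sizes and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (10**3, 10**5, 10**7):
    b = rng.standard_normal(n)     # B ~ N(0, 1)
    print(n, np.var(1.0 / b))      # sample variance of 1/B
# The estimates do not settle down as n grows: occasional draws of B near zero
# produce huge reciprocals, consistent with the variance being undefined.
```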
However, in the slightly more general case of a shifted reciprocal function 1/(p − B) for B following a general normal distribution, then mean and variance statistics do exist in a principal value sense, if the difference between the pole p and the mean of B is real-valued. Ratios Ratios are also problematic; normal approximations exist under certain conditions. Example formulae This table shows the variances and standard deviations of simple functions of the real variables A, B with standard deviations σ_A, σ_B, covariance σ_AB = ρ_AB σ_A σ_B, and correlation ρ_AB. The real-valued coefficients a and b are assumed exactly known (deterministic), i.e., σ_a = σ_b = 0. In the right-hand columns of the table, E[A] and E[B] are expectation values, and f is the value of the function calculated at those values. For uncorrelated variables (ρ_AB = 0, σ_AB = 0) expressions for more complicated functions can be derived by combining simpler functions. For example, repeated multiplication, assuming no correlation, gives f = ABC; (σ_f/f)² ≈ (σ_A/A)² + (σ_B/B)² + (σ_C/C)². For the case f = AB we also have Goodman's expression for the exact variance: for the uncorrelated case it is V(f) = E[A]² V(B) + E[B]² V(A) + V(A) V(B), and therefore we have σ_f² = A² σ_B² + B² σ_A² + σ_A² σ_B². Effect of correlation on differences If A and B are uncorrelated, their difference A − B will have more variance than either of them. An increasing positive correlation (ρ_AB → 1) will decrease the variance of the difference, converging to zero variance for perfectly correlated variables with the same variance. On the other hand, a negative correlation (ρ_AB → −1) will further increase the variance of the difference, compared to the uncorrelated case. For example, the self-subtraction f = A − A has zero variance only if the variate is perfectly autocorrelated (ρ_A = 1). If A is uncorrelated, ρ_A = 0, then the output variance is twice the input variance, σ_f² = 2σ_A². And if A is perfectly anticorrelated, ρ_A = −1, then the input variance is quadrupled in the output, σ_f² = 4σ_A² (notice σ_f² = 2a²σ_A²(1 − ρ_A) for f = aA − aA in the table above). Example calculations Inverse tangent function We can calculate the uncertainty propagation for the inverse tangent function as an example of using partial derivatives to propagate error. Define f(x) = arctan(x), where Δx is the absolute uncertainty on our measurement of x. The derivative of f(x) with respect to x is df/dx = 1/(1 + x²). Therefore, our propagated uncertainty is Δf ≈ Δx/(1 + x²), where Δf is the absolute propagated uncertainty. Resistance measurement A practical application is an experiment in which one measures current, I, and voltage, V, on a resistor in order to determine the resistance, R, using Ohm's law, R = V / I. Given the measured variables with uncertainties, I ± σ_I and V ± σ_V, and neglecting their possible correlation, the uncertainty in the computed quantity, σ_R, is: σ_R = √((1/I)² σ_V² + (V/I²)² σ_I²) = R √((σ_V/V)² + (σ_I/I)²). See also Accuracy and precision Automatic differentiation Bienaymé's identity Delta method Dilution of precision (navigation) Errors and residuals in statistics Experimental uncertainty analysis Interval finite element Measurement uncertainty Numerical stability Probability bounds analysis Uncertainty quantification Random-fuzzy variable References Further reading External links A detailed discussion of measurements and the propagation of uncertainty explaining the benefits of using error propagation formulas and Monte Carlo simulations instead of simple significance arithmetic GUM, Guide to the Expression of Uncertainty in Measurement EPFL An Introduction to Error Propagation, Derivation, Meaning and Examples of Cy = Fx Cx Fx' uncertainties package, a program/library for transparently performing calculations with uncertainties (and error correlations). soerp package, a Python program/library for transparently performing *second-order* calculations with uncertainties (and error correlations).
Uncertainty Calculator Propagate uncertainty for any expression Algebra of random variables Numerical analysis Statistical approximations Statistical deviation and dispersion
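As the promised cross-check on the resistance example, the short Monte Carlo sketch below (illustrative figures chosen by the editor, not from the article) compares the linearised estimate with the empirical spread of V/I:

```python
import numpy as np

rng = np.random.default_rng(0)
V, sV = 12.0, 0.1   # volts (hypothetical measurement)
I, sI = 2.0, 0.05   # amperes (hypothetical measurement)

R = V / I
sigma_R_linear = R * np.hypot(sV / V, sI / I)  # first-order propagation

# Empirical: sample V and I independently and look at the spread of V/I
samples = rng.normal(V, sV, 1_000_000) / rng.normal(I, sI, 1_000_000)
print(sigma_R_linear, samples.std())  # ~0.158 vs ~0.16 for these inputs
```

The small residual disagreement is exactly the truncation bias discussed under "Caveats and warnings": V/I is non-linear in I, so the first-order formula is only an approximation.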
Propagation of uncertainty
[ "Mathematics" ]
1,984
[ "Computational mathematics", "Mathematical relations", "Statistical approximations", "Numerical analysis", "Approximations" ]
593,924
https://en.wikipedia.org/wiki/Observation%20on%20the%20Spot
Observation on the Spot (Polish title Wizja lokalna, an expression meaning crime scene reconstruction) is a social science fiction novel by Stanisław Lem. The novel is a report of Ijon Tichy's travel to the faraway planet Entia (in the Polish text: Encja) to study its civilization. This report was supposed to fix a misunderstanding arising from Tichy's Fourteenth Voyage to what was supposedly Entia (then known as Enteropia), which turned out to be a satellite of Entia, disguised by the Entians to mislead explorers. The journey was also meant to verify the results of the "Institute of Historiographical Computers" (Polish: Instytut Maszyn Dziejowych), which uses predictive modeling to overcome the speed-of-light limitation and obtain information about the state of affairs on remote planets based on information from previous expeditions.

Description

The major themes of the book are: the problems of a society of abundance based entirely on automated production, where individuals have little to do; the imposition of ethical laws through technology, i.e. the "ethicsphere", which has made it impossible to harm individuals physically; and the ideological opposition of two dominant systems, which is basically a parody of the Western World-Soviet Union split taken to the absurd.

The original novel was first printed by Wydawnictwo Literackie in 1982. It was translated into German by Hubert Schumann under the title Lokaltermin in 1987. Lacking an English translation, the novel's title has also been rendered in literary criticism as "Eyewitness Account" and "The Scene of the Crime".

Science Fiction Studies published a collection of excerpts from Lem's letters which show the chronology of the creation of the novel (during 1979-1981). In the first of them Lem confesses that he had tried to approach the subject several times in the preceding years. Also, in his essay The Philosophy of Chance, Lem confesses that he struggled with the novel for many years.

At the end of the novel Lem included a "Polish-Polish dictionary" of the neologisms used in it (named "the Earthish-Earthish Glossary"). In a letter to publisher Franz Rottensteiner, Lem wrote about his intention to add this glossary and to include in it an explanation of why these neologisms are a necessity, not just a fantastic embellishment.

Plot summary

The novel consists of four chapters.

Chapters 1-3

Chapter 1 is the setup, which leads to Tichy's visit to the Institute of Historiographical Computers described in Chapter 2. Chapter 3 is the description of Tichy's flight to Entia.

Chapter 4: Tichy on Entia

Tichy arrives on Entia to discover a unique anthropomorphic civilization divided into two major states: Kurdlandia (from "kurdl") and Luzania. These names require some explanation. Kurdl is a huge animal inhabiting the marshes of Entia. The name of the animal is Lem's invention, used in earlier tales about Ijon Tichy. (In Polish it is kurdel; however, in declensions of the word the root converts into "kurdl-", hence there are no associations with the English word "curdle".) Michael Kandel translated it as "squamp" in his translation of Tichy's 14th voyage. The name "Luzania" derives from the Polish root "luz-", with the meaning of "loose", "not restrained"; the choice will become clear below. Kurdlandia's guiding ideology is "national mobilism", that is, the vast majority of the population must live inside the stomachs, various passages and internal organs of the kurdls.
Kurdls walk about the marshes, guided by drivers, and hence their inhabitants are "able to explore the land of their wonderful country from inside of their home kurdl", in the words of a patriotic individual that Ijon Tichy spoke to. Inhabitants of the kurdls may get out periodically (at least for 24 hours a year). Exceptions are largely confined to high government officials, who live outside the marshes, on dry ground, in normal houses. Kurdlandia has no technology to speak of and is proud of it.

The other state, Luzania, constitutes a treatment of the topic of an "ideal state". Luzania's most prominent accomplishment is the creation of the "ethicsphere" (compare "atmosphere"). They have produced huge numbers of molecular-sized nanobots called "bystry" ("quickies" in English) that serve to control matter in the 'quickated' areas. The primary function of the 'quickies' is the enforcement of the laws of ethics as physical laws (hence the word "ethicsphere"). Hence, it is a physical law in Luzania that it is not possible to hurt an individual physically. If you try to strike your neighbor, your hand will be stopped by the suddenly increased air viscosity (although you will not be hurt either). If you try to drown, the water will push you out. Doing non-physical harm, such as by pestering, criticizing, and otherwise mentally tormenting people, is still possible, although in such a case the 'quickies' would probably help the victim to walk away from the attackers. There is a large protest movement in Luzania of people who want to end the ethicsphere, and a major element of their activities is trying to inflict harm on anybody just to prove the possibility of doing so, but they have not succeeded yet.

The 'quickies' also serve to produce the material goods necessary to maintain a high standard of living. Hence, there is not much of an economic life going on, although there are limits on the amount of energy individuals may spend on satisfying their needs. Many Luzanians are involved in intellectual pursuits, such as being professors, students, and government officials, but the problem of having nothing productive to do stands prominent. Apparently the 'quickies' are capable of some collective thought, at least for the purposes of self-replication and self-improvement, as well as in order to identify instances of potential harm to individuals (no small feat, no doubt). The artists of Luzania feel particularly slighted by the fact that the 'quickies' can create art of all forms of much greater quality than they can; naturally, many of them are members of the protest movement.

There exists an ideological opposition between Kurdlandia and Luzania. Generally speaking, many of the people holed up in the kurdls on poor rations would have been more than happy to run away and live in plenty across the border. On the other hand, many Luzanians, especially university students and faculty, dislike the consumerism and the ethical limitations of freedom under the 'quickies' and call variously for the imposition of kurdl-ism or at least for a slight rollback of technological development and the abolition of the 'quickies', depending on the degree of radicalism of the individual. Luzanians also enjoy traveling to Kurdlandia on vacation to get out of the 'quickies' areas.

The main character spends most of his time in Luzania, studying the history of the world and the current Luzanian social system. We learn about it through his words.
References 1982 novels Novels by Stanisław Lem 1982 science fiction novels Social science fiction Wydawnictwo Literackie books Polish novels Polish science fiction novels Fiction about nanotechnology
Observation on the Spot
[ "Materials_science" ]
1,563
[ "Fiction about nanotechnology", "Nanotechnology" ]
593,949
https://en.wikipedia.org/wiki/NUTS%20statistical%20regions%20of%20Denmark
The Nomenclature of Territorial Units for Statistics (NUTS) is a geocode standard for referencing the administrative division of Denmark for statistical purposes. The standard is developed and regulated by the European Union. The NUTS standard is instrumental in delivering the European Union's Structural Funds. The NUTS code for Denmark is DK, and a hierarchy of three levels is established by Eurostat. Below these is a further level of geographic organisation: the local administrative unit (LAU). In Denmark, the LAU 1 are municipalities and the LAU 2 are parishes.

Overall NUTS codes

Local administrative units

Below the NUTS levels, the LAU (Local Administrative Units) levels are the municipalities (LAU 1) and parishes (LAU 2); the LAU codes of Denmark are published by Eurostat.

NUTS codes

Before 2003

In the 2003 version, before the counties were abolished, the codes were based on the counties of Denmark.

See also Administrative divisions of Denmark FIPS region codes of Denmark ISO 3166-2 codes of Denmark References Sources Hierarchical list of the Nomenclature of territorial units for statistics - NUTS and the Statistical regions of Europe Overview map of EU Countries - NUTS level 1 Overview map of EU Countries - Country level Correspondence between the NUTS levels and the national administrative units List of current NUTS codes Download current NUTS codes (ODS format) Regions of Denmark, Statoids.com Denmark Nuts
NUTS statistical regions of Denmark
[ "Mathematics" ]
259
[ "Nomenclature of Territorial Units for Statistics", "Statistical concepts", "Statistical regions" ]
594,024
https://en.wikipedia.org/wiki/Pyrophyllite
Pyrophyllite is a phyllosilicate mineral composed of aluminium silicate hydroxide: Al2Si4O10(OH)2. It occurs in two forms (habits): crystalline folia and compact masses; distinct crystals are not known. The folia have a pronounced pearly luster, owing to the presence of a perfect cleavage parallel to their surfaces: they are flexible but not elastic, and are usually arranged radially in fan-like or spherical groups. This variety, when heated, exfoliates and swells up to many times its original volume. The color of both varieties is white, pale green, greyish or yellowish; they are very soft (hardness of 1.0 to 1.5) and are greasy to the touch. The specific gravity is 2.65-2.85. The two varieties are thus very similar to talc.

Occurrence

Pyrophyllite occurs in phyllite and schistose rocks, often associated with kyanite, of which it is an alteration product. It also occurs as hydrothermal deposits. Typical associated minerals include: kyanite, andalusite, topaz, mica and quartz. Deposits containing well-crystallized material are found in:
Manuels, Newfoundland and Labrador, Canada (talc-like bright white appearance, high grade, no impurities; 21-million-ton deposit)
Russia, where pale green foliated masses, very like talc in appearance, are found at Beresovsk near Yekaterinburg in the Urals
St. Niklas, Zermatt, Valais, Switzerland
Västanå, Kristianstad, Sweden
Near Ottré, Ardennes Mountains, Belgium
Ibitiara, Bahia, Brazil
Nagano Prefecture, Japan
Near Ogilby, Imperial County; at Tres Cerritos, Mariposa County; and the Champion mine, White Mountains, Mono County, California, US
Near Quartzsite, La Paz County, Arizona, US
Large deposits in the Deep River region of North Carolina, US
Graves Mountain, Lincoln County, Georgia, US

In South Africa, major deposits of pyrophyllite occur within the Ottosdal region, where it is mined for the production of a variety of manufactured goods, and blocks are quarried and marketed as "Wonderstone" for the carving of sculptures.

Uses

The compact variety of pyrophyllite is used for slate pencils and tailors' chalk (French chalk), and is carved by the Chinese into small images and ornaments of various kinds. Other soft compact minerals (steatite and pinite) used for these Chinese carvings are included with pyrophyllite under the terms agalmatolite and pagodite. Pyrophyllite is easily machineable and has excellent thermal stability, so it is added to clay to reduce thermal expansion when firing, but it has many other industrial uses when combined with other compounds, such as in insecticide and for making bricks. Pyrophyllite is also widely used in high-pressure experiments, both as a gasket material and as a pressure-transmitting medium.

See also References External links Mineral galleries USGS Phyllosilicates Aluminium minerals Triclinic minerals Monoclinic minerals Minerals in space group 2 Minerals in space group 15 Luminescent minerals
Pyrophyllite
[ "Chemistry" ]
684
[ "Luminescence", "Luminescent minerals" ]
594,043
https://en.wikipedia.org/wiki/Barrel%20%28unit%29
A barrel is one of several units of volume applied in various contexts; there are dry barrels, fluid barrels (such as the U.K. beer barrel and U.S. beer barrel), oil barrels, and so forth. For historical reasons the volumes of some barrel units are roughly double the volumes of others; volumes in common use range approximately from 100 to 200 litres. In many connections the term drum is used almost interchangeably with barrel.

Since medieval times the term barrel as a unit of measure has had various meanings throughout Europe, ranging from about 100 litres to about 1,000 litres. The name was derived in medieval times from the French baril, of unknown origin, but still in use, both in French and as derivations in many other languages such as Italian, Polish, and Spanish. In most countries such usage is obsolescent, increasingly superseded by SI units. As a result, the meaning of corresponding words and related concepts (vat, cask, keg etc.) in other languages often refers to a physical container rather than a known measure. In the international oil market context, however, prices in United States dollars per barrel are commonly used, and the term is variously translated, often to derivations of the Latin / Teutonic root fat (for example vat or Fass). In other commercial connections, barrel sizes such as beer keg volumes also are standardised in many countries.

Dry goods in the US

US dry barrel: Defined as length of stave , diameter of head , distance between heads , circumference of bulge outside measurement; representing as nearly as possible 7,056 cubic inches; and the thickness of staves not greater than (diameter ≈ ). Any barrel that is 7,056 cubic inches is recognized as equivalent. This is exactly equal to .

US barrel for cranberries: Defined as length of stave , diameter of head , distance between heads , circumference of bulge outside measurement; and the thickness of staves not greater than (diameter ≈ ). No equivalent in cubic inches is given in the statute, but later regulations specify it as 5,826 cubic inches.

Some products have a standard weight or volume that constitutes a barrel:
Cornmeal
Cement (including Portland cement)
Sugar
Wheat or rye flour
Lime (mineral), large barrel or small barrel
Butter and cheese in the UK
Salt

Fluid barrel in the US and UK

Fluid barrels vary depending on what is being measured and where. In the UK a beer barrel is 36 imperial gallons (about 164 litres). In the US most fluid barrels (apart from oil) are 31.5 US gallons (about 119 litres), i.e. half a hogshead, but a beer barrel is 31 US gallons (about 117 litres). The size of beer kegs in the US is based loosely on fractions of the US beer barrel. When referring to beer barrels or kegs in many countries, the term may be used for the commercial package units independent of actual volume, where the common range for professional use is 20-60 L, typically a DIN or Euro keg of 50 L.

History

Richard III, King of England from 1483 until 1485, had defined the wine puncheon as a cask holding 84 wine gallons and a wine tierce as holding 42 wine gallons. Custom had made the 42-gallon watertight tierce a standard container for shipping eel, salmon, herring, molasses, wine, whale oil, and many other commodities in the English colonies by 1700. After the American Revolution in 1776, American merchants continued to use the same size barrels.

Oil barrel

Definitions

In the oil industry, one barrel (unit symbol bbl) is a unit of volume used for measuring oil, defined as exactly 42 US gallons (approximately 159 litres, or about 35 imperial gallons).
According to the American Petroleum Institute (API), a standard barrel of oil is the amount of oil that would occupy a volume of exactly 1 barrel (42 US gallons) at reference temperature and pressure conditions of 60 °F (15.6 °C) and 14.696 psi (or 1 atm). This standard barrel of oil will occupy a different volume at different pressures and temperatures. A standard barrel in this context is thus not simply a measure of volume, but of volume under specific conditions.

Unit multiples

Oil companies that are publicly listed in the United States typically report their production using the unit multiples Mbbl (one thousand barrels) and MMbbl (one million barrels), derived from the Latin word "mille" and the Roman numeral M, meaning "thousand". Due to the risk of confusion, the Society of Petroleum Engineers recommends in their style guide that the abbreviations or prefixes M or MM not be used for barrels of oil or barrels of oil equivalent, but rather that thousands, millions or billions be spelled out. Using M for thousand and MM for million is in conflict with the SI convention, where the "M" prefix stands for "mega", representing million, from the Greek for "large". Some oil companies, particularly those based in Europe, use kb (kilobarrels, one thousand barrels), mb (megabarrels, one million barrels), and gb (gigabarrels, one billion barrels). The lower-case m is used to avoid confusion with the capital M used for thousand. For the same reason, the unit kbbl (one thousand barrels) is also sometimes used.

Etymology

The first "b" in "bbl" may have been doubled originally to indicate the plural (1 bl, 2 bbl), or possibly it was doubled to eliminate any confusion with bl as a symbol for the bale. Some sources assert that "bbl" originated as a symbol for "blue barrels" delivered by Standard Oil in its early days. However, while Ida Tarbell's 1904 Standard Oil Company history acknowledged the "holy blue barrel", the abbreviation "bbl" had been in use well before the 1859 birth of the U.S. petroleum industry.

Flow rate

Oil wells recover not just oil from the ground, but also natural gas and water. The term barrels of liquid per day (BLPD) refers to the total volume of liquid that is recovered. Similarly, barrels of oil equivalent or BOE is a value that accounts for both oil and natural gas while ignoring any water that is recovered. Other terms are used when discussing only oil. These terms can refer to either the production of crude oil at an oil well, the conversion of crude oil to other products at an oil refinery, or the overall consumption of oil by a region or country.

One common term is barrels per day (BPD, BOPD, bbl/d, bpd, bd, or b/d), where 1 BPD is equivalent to 0.0292 gallons per minute. One BPD also corresponds to 49.8 tonnes per year. At an oil refinery, production is sometimes reported as barrels per calendar day (b/cd or bcd), which is total production in a year divided by the days in that year. Likewise, barrels per stream day (BSD or BPSD) is the quantity of oil product produced by a single refining unit during continuous operation for 24 hours.

Burning one tonne of light, synthetic, or heavy crude yields 38.51, 39.40, or 40.90 GJ (thermal) respectively (10.70, 10.94, or 11.36 MW·h), so 1 tonne per day of synthetic crude is about 456 kW of thermal power and 1 bpd of synthetic crude is about 378 kW (slightly less for light crude, slightly more for heavy crude).
Conversion

The task of converting a standard barrel of oil to a standard cubic metre of oil is complicated by the fact that the latter is defined by the API to mean the amount of oil that, at different reference conditions (101.325 kPa and 15 °C), occupies 1 cubic metre. The fact that the reference conditions are not exactly the same means that an exact conversion is impossible unless the exact expansion coefficient of the crude is known, and this will vary from one crude oil to another. For a light oil with a density of 850 kilograms per cubic metre (API gravity of 35), warming the oil from 15 °C to 60 °F might increase its volume by about 0.047%. Conversely, a heavy oil with a density of 934 kg/m3 (API gravity of 20) might only increase in volume by 0.039%. If physically measuring the density at a new temperature is not possible, then tables of empirical data can be used to accurately predict the change in density. In turn, this allows maximum accuracy when converting between standard barrel and standard cubic metre. The logic above also implies a corresponding sensitivity of barrel measurements to any error in measuring the temperature at the time the volume is measured.

For ease of trading, communication and financial accounting, international commodity exchanges often set a conversion factor for benchmark crude oils. For instance, the conversion factor set by the New York Mercantile Exchange (NYMEX) for Western Canadian Select (WCS) crude oil traded at Hardisty, Alberta, Canada is 6.29287 U.S. barrels per standard cubic metre, despite the uncertainty in converting the volume for crude oil. Regulatory authorities in producing countries set standards for measurement accuracy of produced hydrocarbons, where such measurements affect taxes or royalties to the government. In the United Kingdom, for instance, the measurement accuracy required is ±0.25%.

Qualifiers

A barrel can technically be used to specify any volume. Since the actual nature of the fluids being measured varies along the stream, sometimes qualifiers are used to clarify what is being specified. In the oil field, it is often important to differentiate between rates of production of fluids, which may be a mix of oil and water, and rates of production of the oil itself. If a well is producing 10 MBD (thousands of barrels per day) of fluids with a 20% water cut, then the well would also be said to be producing 8,000 barrels of oil a day (bod).

In other circumstances, it can be important to include gas in production and consumption figures. Normally, gas amount is measured in standard cubic feet or standard cubic metres (for volume at STP), as well as in kg or Btu (which do not depend on pressure or temperature). But when necessary, such volume is converted to a volume of oil of equivalent enthalpy of combustion. Production and consumption using this analogue is stated in barrels of oil equivalent per day (boed).

In the case of water-injection wells, in the United States it is common to refer to the injectivity rate in barrels of water per day (bwd). In Canada, it is measured in cubic metres per day (m3/d). In general, water injection rates will be stated in the same units as oil production rates, since the usual objective is to replace the volume of oil produced with a similar volume of water to maintain reservoir pressure.
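For the simple unit arithmetic, leaving aside the temperature corrections just described, a small illustrative Python sketch (an editor's example; the function names are invented for this illustration):

```python
US_GALLON_L = 3.785411784          # litres per US gallon (exact definition)
BARREL_L = 42 * US_GALLON_L        # 158.987... litres per oil barrel
BARRELS_PER_M3 = 1000.0 / BARREL_L # ~6.2898, the nominal factor quoted above

def barrels_to_m3(bbl):
    """Convert oil barrels to cubic metres using the nominal factor."""
    return bbl / BARRELS_PER_M3

def m3_to_barrels(m3, factor=BARRELS_PER_M3):
    """Convert cubic metres to barrels; an exchange-set factor (e.g. the
    NYMEX 6.29287 for WCS at Hardisty) can be passed in place of the
    nominal one."""
    return m3 * factor

print(BARRELS_PER_M3)        # ~6.2898 bbl per m^3
print(m3_to_barrels(1000))   # ~6289.8 bbl in 1,000 m^3
```

The spread between the nominal 6.2898 and an exchange factor such as 6.29287 reflects exactly the density- and temperature-dependent expansion discussed above.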
In Canada, oil companies measure oil in cubic metres, but convert to barrels on export, since most of Canada's oil production is exported to the US. The nominal conversion factor is 1 cubic metre = 6.2898 oil barrels, but conversion is generally done by custody transfer meters on the border, since the volumes are specified at different temperatures, and the exact conversion factor depends on both density and temperature. Canadian companies operate internally and report to Canadian governments in cubic metres, but often convert to US barrels for the benefit of American investors and oil marketers. They generally quote prices in Canadian dollars per cubic metre to other Canadian companies, but use US dollars per barrel in financial reports and press statements, making it appear to the outside world that they operate in barrels. Companies on the European stock exchanges report the mass of oil in tonnes. Since different varieties of petroleum have different densities, however, there is not a single conversion between mass and volume. For example, one tonne of heavy distillates might occupy a volume of . In contrast, one tonne of crude oil might occupy , and one tonne of gasoline will require . Overall, the conversion is usually between per tonne. History The measurement of an "oil barrel" originated in the early Pennsylvania oil fields. The Drake Well, the first oil well in the US, was drilled in Pennsylvania in 1859, and an oil boom followed in the 1860s. When oil production began, there was no standard container for oil, so oil and petroleum products were stored and transported in barrels of different shapes and sizes. Some of these barrels would originally have been used for other products, such as beer, fish, molasses, or turpentine. Both the barrels (based on the old English wine measure), the tierce (159 litres) and the whiskey barrels were used. Also, barrels were in common use. The 40 gallon whiskey barrel was the most common size used by early oil producers, since they were readily available at the time. Around 1866, early oil producers in Pennsylvania concluded that shipping oil in a variety of different containers was causing buyer distrust. They decided they needed a standard unit of measure to convince buyers that they were getting a fair volume for their money, and settled on the standard wine tierce, which was two gallons larger than the standard whisky barrel. The Weekly Register, an Oil City, Pennsylvania newspaper, stated on August 31, 1866 that "the oil producers have issued the following circular": And by that means, King Richard III's English wine tierce became the American standard oil barrel. By 1872, the standard oil barrel was firmly established as 42 US-gallons. The 42 gallon standard oil barrel was officially adopted by the Petroleum Producers Association in 1872 and by the U.S. Geological Survey and the U.S. Bureau of Mines in 1882. In modern times, many different types of oil, chemicals, and other products are transported in steel drums. In the United States, these commonly have a capacity of and are referred to as such. They are called 200 litre or 200 kg drums outside the United States. In the United Kingdom and its former dependencies, a drum was used, even though all those countries now officially use the metric system and the drums are filled to 200 litres. In the United States, the 42 US-gallon size as a unit of measure is largely confined to the oil industry, while different sizes of barrel are used in other industries. 
Nearly all other countries use the metric system. Thus, the 42 US-gallon oil barrel is a unit of measure rather than a physical container used to transport crude oil. See also 55 gallon drum Barrel Barrel of oil equivalent English brewery cask units English wine cask units Imperial units List of unusual units of measurement Petroleum Petroleum pricing around the world Standard Barrel Act For Fruits, Vegetables, and Dry Commodities United States customary units Units of measurement Notes References Brewing Customary units of measurement in the United States Imperial units Petroleum Units of volume Alcohol measurement
Barrel (unit)
[ "Chemistry", "Mathematics" ]
3,044
[ "Units of volume", "Quantity", "Petroleum", "Chemical mixtures", "Units of measurement" ]
594,109
https://en.wikipedia.org/wiki/Wolfram%20Research
Wolfram Research, Inc. is an American multinational company that creates computational technology. Wolfram's flagship product is the technical computing program Wolfram Mathematica, first released on June 23, 1988. Other products include WolframAlpha, Wolfram SystemModeler, Wolfram Workbench, gridMathematica, Wolfram Finance Platform, webMathematica, the Wolfram Cloud, and the Wolfram Programming Lab. Wolfram Research founder Stephen Wolfram is the CEO. The company is headquartered in Champaign, Illinois, United States.

History

The company launched Wolfram Alpha, an answer engine, on May 16, 2009. It brings a new approach to knowledge generation and acquisition that involves large amounts of curated computable data in addition to semantic indexing of text.

Wolfram Research acquired MathCore Engineering AB on March 30, 2011. On July 21, 2011, Wolfram Research launched the Computable Document Format (CDF). CDF is an electronic document format designed to allow easy authoring of dynamically generated interactive content. In June 2014, Wolfram Research officially introduced the Wolfram Language as a new general multi-paradigm programming language. It is the primary programming language used in Mathematica.

On April 15, 2020, Wolfram Research received $5,575,000 to help pay its employees during the COVID-19 pandemic as part of the U.S. government's Paycheck Protection Program administered by the Small Business Administration. The loan was forgiven.

Products and resources

Mathematica

Mathematica began as a software program for doing mathematics by computer, and has evolved to cover all domains of technical computing software, with features for neural networks, machine learning, image processing, geometry, data science, and visualizations. Central to Mathematica's mission is its ability to perform symbolic computation, for example, the ability to solve indefinite integrals symbolically. Mathematica includes a notebook interface and can produce slides for presentations. Mathematica is available in a desktop version, a grid computing version, and a cloud version.

Wolfram Alpha

Wolfram Alpha is a free online service that answers factual queries directly by computing the answer from externally sourced curated data, rather than providing a list of documents or web pages that might contain the answer, as a search engine might. Users submit queries and computation requests via a text field, and Wolfram Alpha then computes answers and relevant visualizations. On February 8, 2012, Wolfram Alpha Pro was released, offering users additional features for a monthly subscription fee, such as the ability to upload many common file types and data (including raw tabular data, images, audio, XML, and dozens of specialized scientific, medical, and mathematical formats) for automatic analysis. In 2016, Wolfram Alpha Enterprise, a business-focused analytics tool, was launched. The program combines data supplied by a corporation with the algorithms from Wolfram Alpha to answer questions related to that corporation.

Wolfram SystemModeler

Wolfram SystemModeler is a platform for engineering as well as life-science modeling and simulation based on the Modelica language. It provides an interactive graphical modeling and simulation environment and a customizable set of component libraries. The primary interface, ModelCenter, is an interactive graphical environment including a customizable set of component libraries. The software also provides tight integration with Mathematica.
Users can develop, simulate, document, and analyze their models within Mathematica notebooks. Publishing Wolfram Research publishes several free websites including the MathWorld and ScienceWorld encyclopedias. ScienceWorld, which launched in 2002, is divided into sites on chemistry, physics, astronomy and scientific biography. In 2005, the physics site was deemed a "valuable resource" by American Scientist magazine. However, by 2009, the astronomy site was said to suffer from outdated information, incomplete articles and link rot. The Wolfram Demonstrations Project is a collaborative site hosting interactive technical demonstrations powered by a free Mathematica Player runtime. Wolfram Research publishes The Mathematica Journal. Wolfram has also published several books via Wolfram Media, Wolfram's publishing arm. In addition, they have experimented with electronic textbook creation. Media activities Wolfram Research served as the mathematical consultant for the CBS television series Numb3rs, a show about the mathematical aspects of crime-solving. See also A New Kind of Science Ed Pegg, Jr. Eric W. Weisstein References External links Official Wolfram Research Twitter Account Hoovers Fact Sheet on Wolfram Research, Inc. The Mathematics Behind NUMB3RS, Wolfram's site on NUMB3RS mathematics. Champaign, Illinois Cloud computing providers Data companies Mathematical software Multinational companies headquartered in the United States Software companies based in Illinois Software companies established in 1987 Software companies of the United States
Wolfram Research
[ "Mathematics" ]
978
[ "Mathematical software" ]
594,139
https://en.wikipedia.org/wiki/Lucas%20chain
In mathematics, a Lucas chain is a restricted type of addition chain, named for the French mathematician Édouard Lucas. It is a sequence a0, a1, a2, a3, ... that satisfies a0 = 1, and for each k > 0: ak = ai + aj, and either ai = aj or |ai − aj| = am, for some i, j, m < k.

The sequence of powers of 2 (1, 2, 4, 8, 16, ...) and the Fibonacci sequence (with a slight adjustment of the starting point: 1, 2, 3, 5, 8, ...) are simple examples of Lucas chains; an illustrative membership check appears below. Lucas chains were introduced by Peter Montgomery in 1983. If L(n) is the length of the shortest Lucas chain for n, then Kutz has shown that most n do not have L(n) < (1 − ε) log_φ n, where φ is the golden ratio.

References Integer sequences Addition chains
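As the promised illustration of the definition (an editor's sketch, not from the article), a short Python routine that checks whether a candidate sequence is a Lucas chain:

```python
def is_lucas_chain(seq):
    """Check the Lucas-chain condition: seq[0] == 1, and every later term
    a_k equals a_i + a_j for earlier terms with a_i == a_j or with
    |a_i - a_j| also among the earlier terms."""
    if not seq or seq[0] != 1:
        return False
    for k in range(1, len(seq)):
        earlier = seq[:k]
        ok = any(
            ai + aj == seq[k] and (ai == aj or abs(ai - aj) in earlier)
            for i, ai in enumerate(earlier)
            for aj in earlier[: i + 1]          # unordered pairs suffice
        )
        if not ok:
            return False
    return True

print(is_lucas_chain([1, 2, 4, 8, 16]))  # powers of two: True
print(is_lucas_chain([1, 2, 3, 5, 8]))   # Fibonacci-style: True
print(is_lucas_chain([1, 2, 5]))         # 5 is not a sum of earlier terms: False
```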
Lucas chain
[ "Mathematics" ]
205
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Addition chains", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Numbers", "Number theory" ]
594,140
https://en.wikipedia.org/wiki/List%20of%20postal%20codes
This list shows an overview of postal code notation schemes for all countries that have postal or ZIP code systems.

List

Legend:
A = letter
N = number
? = letter or number
CC = ISO 3166-1 alpha-2 country code

On the use of country codes

The use of country codes in conjunction with postal codes started as a recommendation from CEPT (European Conference of Postal and Telecommunications Administrations) in the 1960s. In the original CEPT recommendation the distinguishing signs of motor vehicles in international traffic ("car codes") were placed before the postal code, and separated from it by a "-" (dash). Codes were only used on international mail and were hardly ever used internally in each country.

Since the late 1980s, however, a number of postal administrations have changed the recommended codes to the two-letter country codes of ISO 3166. This would allow a universal, standardized code set to be used, and bring it in line with country codes used elsewhere in the UPU (Universal Postal Union). Attempts were also made (without success) to make this part of the official address guidelines of the UPU. Recently introduced postal code systems where the UPU has been involved have included the ISO 3166 country code as an integral part of the postal code.

At present there are no universal guidelines as to which code set to use, and recommendations vary from country to country. In some cases, the applied country code will differ according to recommendations of the sender's postal administration. UPU recommends that the country name always be included as the last line of the address.

In the list above, the following principles have been applied:
Integral country codes have been included in the code format, in bold type and without brackets. These are also used on internal mail in the respective countries.
The ISO 3166 code is used alone for countries that have explicitly recommended it.
Where there is no explicit recommendation for ISO 3166 codes and the codes differ, both "car codes" and ISO 3166 codes are listed, with the "car code" listed first.

See also Universal Postal Union International Postcode system using Cubic Meters (CubicPostcode.com) International Postcodes database (mapanet.eu) References Footnotes Notations Postal codes Communication-related lists by country
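The legend notation lends itself to mechanical validation. A small illustrative Python sketch (an editor's example; the format string shown is hypothetical and not taken from the list itself):

```python
import re

# Map the legend's symbols to character classes
SYMBOLS = {"A": "[A-Z]", "N": "[0-9]", "?": "[A-Z0-9]"}

def notation_to_regex(fmt):
    """Compile a legend-style format such as 'AN NAA' into a regex."""
    body = "".join(SYMBOLS.get(ch, re.escape(ch)) for ch in fmt)
    return re.compile("^" + body + "$")

pattern = notation_to_regex("AN NAA")   # hypothetical UK-style code shape
print(bool(pattern.match("M1 1AA")))    # True
print(bool(pattern.match("12345")))     # False
```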
List of postal codes
[ "Technology" ]
464
[ "Transport systems", "Postal systems" ]
594,169
https://en.wikipedia.org/wiki/Harvard%20College%20Observatory
The Harvard College Observatory (HCO) is an institution managing a complex of buildings and multiple instruments used for astronomical research by the Harvard University Department of Astronomy. It is located in Cambridge, Massachusetts, United States, and was founded in 1839. With the Smithsonian Astrophysical Observatory, it forms part of the Center for Astrophysics Harvard & Smithsonian. HCO houses a collection of approximately 500,000 astronomical plates taken between the mid-1880s and 1989 (with a gap from 1953–1968). This 100-year coverage is a unique resource for studying temporal variations in the universe. The Digital Access to a Sky Century @ Harvard project is digitally scanning and archiving these photographic plates. History In 1839, the Harvard Corporation voted to appoint William Cranch Bond, a prominent Boston clockmaker, as "Astronomical Observer to the University" (at no salary). This marked the founding of the Harvard College Observatory. HCO's first telescope, the 15-inch Great Refractor, was installed in 1847. That telescope was the largest in the United States from installation until 1867. Between 1847 and 1852, Bond and pioneer photographer John Adams Whipple used the Great Refractor telescope to produce images of the moon that are remarkable in their clarity of detail and aesthetic power. This was the largest telescope in North America at that time, and their images of the moon took the prize for technical excellence in photography at the 1851 Great Exhibition at The Crystal Palace in London. On the night of July 16–17, 1850, Whipple and Bond made the first daguerreotype of a star (Vega). Harvard College Observatory is historically important to astronomy, as many women including Annie Jump Cannon, Henrietta Swan Leavitt, Cecilia Payne-Gaposchkin, Williamina Fleming, and Florence Cushman performed pivotal stellar classification research. Cannon, Leavitt and Cushman were hired initially as "computers" to perform calculations and examine stellar photographs, but later made insightful connections in their research. Publications From 1898 to 1926, a series of Bulletins were issued containing many of the major discoveries of the period. These were then replaced by Announcement Cards which continued to be issued until 1952. In 1908, the observatory published the Harvard Revised Photometry Catalogue, which gave rise to the HR star catalogue, now maintained by the Yale University Observatory as the Bright Star Catalogue. Directors William Cranch Bond 1839–1859 George Phillips Bond 1859–1865 Joseph Winlock 1866–1875 Edward Charles Pickering 1877–1919 Solon Irving Bailey 1919–1921 (Acting Director) Harlow Shapley 1921–1952 Donald H. Menzel 1952–1953 (Acting Director); 1954–1966 (Director) Leo Goldberg 1966–1970 Fred Whipple 1955-1973 George B. Field 1973-1982 (founding director of the Center for Astrophysics Harvard & Smithsonian) Irwin Shapiro 1983–2004 Charles Alcock 2004–2022 Lisa Kewley 2022–present See also Harvard Computers Sears Tower – Harvard Observatory The Minor Planet Center credits many asteroid discoveries to "Harvard Observatory." 
See List of largest optical refracting telescopes for other 'great refractors' References Further reading External links HCO home page Center for Astrophysics | Harvard & Smithsonian Harvard College Observatory Bulletins Harvard College Announcement Cards Harvard University Buildings and structures in Cambridge, Massachusetts Astronomical observatories in Massachusetts Astronomy institutes and departments Harvard–Smithsonian Center for Astrophysics Minor-planet discovering observatories
Harvard College Observatory
[ "Astronomy" ]
690
[ "Astronomy organizations", "Astronomy institutes and departments" ]
594,269
https://en.wikipedia.org/wiki/Vietnamese%20Quoted-Readable
Vietnamese Quoted-Readable (usually abbreviated VIQR), also known as Vietnet, is a convention for writing Vietnamese using ASCII characters encoded in only 7 bits, making it possible for Vietnamese to be supported in computing and communication systems of the time. Because the Vietnamese alphabet contains a complex system of diacritical marks, VIQR requires the user to type in a base letter, followed by one or two characters that represent the diacritical marks.

Syntax

VIQR uses the following convention (these are the mnemonics defined in RFC 1456):
' (apostrophe) for the acute accent (dấu sắc)
` (grave) for the grave accent (dấu huyền)
? (question mark) for the hook above (dấu hỏi)
~ (tilde) for the tilde (dấu ngã)
. (period) for the dot below (dấu nặng)
^ (circumflex) for the circumflex of â, ê, ô
+ (plus sign) for the horn of ơ, ư
( (opening parenthesis) for the breve of ă

VIQR uses DD or Dd for the Vietnamese letter Đ, and dd for the Vietnamese letter đ. To type certain punctuation marks (namely, the period, question mark, apostrophe, forward slash, opening parenthesis, or tilde) directly after most Vietnamese words, a backslash (\) must be typed directly before the punctuation mark, functioning as an escape character, so that it will not be interpreted as a diacritical mark. For example:

What is your name [Sir]? My name is Trần Văn Hiếu.

Following the rules above, the name in the reply is typed as Tra^`n Va(n Hie^'u, and a sentence-final question mark must be escaped as \? so it is not read as a hook-above mark.

Software support

VIQR is primarily used as a Vietnamese input method in software that supports Unicode. Similar input methods include Telex and VNI. Input method editors such as VPSKeys convert VIQR sequences to Unicode precomposed characters as one types, typically allowing modifier keys to be input after all the base letters of each word. However, in the absence of input method software or Unicode support, VIQR can still be input using a standard keyboard and read as plain ASCII text without suffering from mojibake.

Unlike the VISCII and VPS code pages, VIQR is rarely used as a character encoding. While VIQR is registered with the Internet Assigned Numbers Authority as a MIME charset, MIME-compliant software is not required to support it. Nevertheless, the Mozilla Vietnamese Enabling Project once produced builds of the open source version of Netscape Communicator, as well as its successor, the Mozilla Application Suite, that were capable of decoding VIQR-encoded webpages, e-mails, and newsgroup messages. In these unofficial builds, a "VIQR" option appears in the Edit | Character Set menu, alongside the VISCII, TCVN 5712, VPS, and Windows-1258 options that remained available for several years in Mozilla Firefox and Thunderbird.

History

By the early 1990s, an ad-hoc system of mnemonics known as Vietnet was in use on the Viet-Net mailing list and soc.culture.vietnamese Usenet group. In 1992, the Vietnamese Standardization Group (Viet-Std, Nhóm Nghiên Cứu Tiêu Chuẩn Tiếng Việt) from the TriChlor Software Group, led by Christopher Cuong T. Nguyen, Cuong M. Bui, and Hoc D. Ngo in California, formalized the VIQR convention. It was described the next year in RFC 1456.

See also Vietnamese language and computers Alternative schemes for Vietnamese: Telex VISCII VNI VPSKeys VSCII-MNEM ASCII mnemonics for other writing systems: ITRANS for Devanagari SAMPA for IPA Notes and references External links RFC 1456 – Conventions for Encoding the Vietnamese Language (VISCII and VIQR) Viet-Std Group Vietnamese Character Encoding Standardization Report – VISCII and VIQR 1.1 Character Encoding Specifications (English and Vietnamese) The VIQR Convention AnGiang Software Free Online VIQR to Unicode Converter Help page on inputting Vietnamese text in Vietnamese Wikipedia Character encoding Vietnamese character input
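A toy converter illustrates the convention (an editor's sketch covering only simple base-plus-mark sequences; it ignores the backslash escapes and the full RFC 1456 grammar, which real input methods such as VPSKeys handle):

```python
import unicodedata

# VIQR tone marks -> Unicode combining characters
TONES = {"'": "\u0301", "`": "\u0300", "?": "\u0309", "~": "\u0303", ".": "\u0323"}
# VIQR letter modifiers -> combining circumflex, breve, horn
SHAPES = {"^": "\u0302", "(": "\u0306", "+": "\u031b"}

def viqr_to_unicode(text):
    """Very small VIQR decoder: attach each mark to the preceding letter.

    Caveat: every '.' or '?' is treated as a mark, so real punctuation
    would need the escape handling described above.
    """
    text = text.replace("dd", "\u0111").replace("DD", "\u0110")
    out = []
    for ch in text:
        if out and (ch in TONES or ch in SHAPES):
            out.append(TONES.get(ch) or SHAPES[ch])
        else:
            out.append(ch)
    # Compose base + combining sequences into precomposed characters
    return unicodedata.normalize("NFC", "".join(out))

print(viqr_to_unicode("Tra^`n Va(n Hie^'u"))  # -> Trần Văn Hiếu
```

Because the shape mark (circumflex, breve, horn) is typed before the tone mark, NFC normalization composes the pair in two steps, e.g. e + circumflex -> ê, then ê + acute -> ế.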
Vietnamese Quoted-Readable
[ "Technology" ]
755
[ "Natural language and computing", "Character encoding" ]
594,303
https://en.wikipedia.org/wiki/True%20airspeed
The true airspeed (TAS; also KTAS, for knots true airspeed) of an aircraft is the speed of the aircraft relative to the air mass through which it is flying. The true airspeed is important information for accurate navigation of an aircraft. Traditionally it is measured using an analogue TAS indicator, but as GPS has become available for civilian use, the importance of such air-measuring instruments has decreased. Since indicated, as opposed to true, airspeed is a better indicator of margin above the stall, true airspeed is not used for controlling the aircraft; for these purposes the indicated airspeed – IAS or KIAS (knots indicated airspeed) – is used. However, since indicated airspeed only shows true speed through the air at standard sea level pressure and temperature, a TAS meter is necessary for navigation purposes at cruising altitude in less dense air. The IAS meter reads very nearly the TAS at lower altitude and at lower speed. On jet airliners the TAS meter is usually hidden at low speeds. Neither provides for accurate speed over the ground, since surface winds or winds aloft are not taken into account.

Performance

TAS is the appropriate speed to use when calculating the range of an airplane, and it is the speed normally listed on the flight plan and used in flight planning, before considering the effects of wind.

Airspeed sensing errors

The airspeed indicator (ASI), driven by ram air into a pitot tube and still air into a barometric static port, shows what is called indicated airspeed (IAS). The differential pressure is affected by air density. The ratio between the two measurements is temperature-dependent and pressure-dependent, according to the ideal gas law. At sea level in the International Standard Atmosphere (ISA) and at low speeds where air compressibility is negligible (i.e., assuming a constant air density), IAS corresponds to TAS. When the air density or temperature around the aircraft differs from standard sea level conditions, IAS will no longer correspond to TAS, and thus it will no longer reflect aircraft performance. The ASI will indicate less than TAS when the air density decreases due to a change in altitude or air temperature. For this reason, TAS cannot be measured directly. In flight, it can be calculated either by using an E6B flight calculator or its equivalent. For low speeds, the data required are static air temperature, pressure altitude and IAS (or CAS for more precision). Above approximately 100 knots, the compressibility error rises significantly and TAS must be calculated by the Mach speed. Mach incorporates the above data including the compressibility factor. Modern aircraft instrumentation uses an air data computer to perform this calculation in real time and display the TAS reading directly on the electronic flight instrument system.

Since temperature variations are of a smaller influence, the ASI error can be estimated as indicating about 2% less than TAS per 1,000 ft (300 m) of altitude above sea level. For example, by this rule an aircraft flying at 10,000 ft in the international standard atmosphere with an IAS of 100 knots is actually flying at roughly 120 knots TAS.

Use in navigation calculations

To maintain a desired ground track while flying in the moving airmass, the pilot of an aircraft must use knowledge of wind speed, wind direction, and true air speed to determine the required heading. See also wind triangle.

Calculating true airspeed

Low-speed flight

At low speeds and altitudes, IAS and CAS are close to equivalent airspeed (EAS).
TAS can be calculated as a function of EAS and air density:

$\mathrm{TAS} = \mathrm{EAS}\sqrt{\frac{\rho_0}{\rho}}$

where TAS is true airspeed, EAS is equivalent airspeed, $\rho_0$ is the air density at sea level in the International Standard Atmosphere (15 °C and 1013.25 hectopascals, corresponding to a density of 1.225 kg/m3), and $\rho$ is the density of the air in which the aircraft is flying.

High-speed flight

TAS can be calculated as a function of Mach number and static air temperature:

$\mathrm{TAS} = a_0 M \sqrt{\frac{T}{T_0}}$

where $a_0$ is the speed of sound at standard sea level (661.47 knots), M is Mach number, T is static air temperature in kelvins, and $T_0$ is the temperature at standard sea level (288.15 K).

For manual calculation of TAS in knots, where Mach number and static air temperature are known, the expression may be simplified to

$\mathrm{TAS} \approx 39\,M\sqrt{T}$

(remembering temperature is in kelvins; the factor 39 is $a_0/\sqrt{T_0} = 661.47/\sqrt{288.15} \approx 39$).

Combining the above with the expression for Mach number gives an expression for TAS as a function of impact pressure, static pressure and static air temperature (valid for subsonic flow):

$\mathrm{TAS} = a_0\sqrt{5\,\frac{T}{T_0}\left[\left(\frac{q_c}{p} + 1\right)^{2/7} - 1\right]}$

where $q_c$ is impact pressure and $p$ is static pressure.

Electronic flight instrument systems (EFIS) contain an air data computer with inputs of impact pressure, static pressure and total air temperature. In order to compute TAS, the air data computer must convert total air temperature to static air temperature. This is also a function of Mach number:

$T = \frac{T_t}{1 + 0.2 M^2}$

where $T_t$ is total air temperature.

In simple aircraft, without an air data computer or machmeter, true airspeed can be calculated as a function of calibrated airspeed and local air density (or static air temperature and pressure altitude, which determine density). Some airspeed indicators incorporate a slide rule mechanism to perform this calculation. Otherwise, it can be performed with an E6B flight computer (a handheld circular slide rule) or equivalent software.

See also Acronyms and abbreviations in avionics ICAO recommendations on use of the International System of Units Air speed Airspeed indicator Calibrated airspeed Equivalent airspeed Flight instruments Flight planning Indicated airspeed References Bibliography United States Department of the Air Force and United States Navy Department. 1989. Air Navigation : Flying Training. Washington D.C.? Air Training Command in accordance with AFR 5-6 : For sale by the Supt. of Docs. U.S. G.P.O. Clancy L. J. 1978. Aerodynamics, Section 3.8. New York London: Wiley : Pitman. Kermode Alfred Cotterill. 1972. Mechanics of Flight, Chapter 2. 8th (metric) ed. London: Pitman. Gracey William. 1980. Measurement of Aircraft Speed and Altitude. Washington: NASA. Archived from original Fri, 04 Sep 2020 00:07:41 +0000 at the Wayback Machine (11.1 MiB). External links A free windows calculator which converts between various airspeeds (true / equivalent / calibrated) according to the appropriate atmospheric (standard and not standard!) conditions Android application for airspeed conversion in different atmospheric conditions True, Equivalent, and Calibrated Airspeed at MathPages Newbyte airspeed converter avc.obsment.com - True airspeed calculator. Calculate True Airspeed, Mach, Pitot Tube Impact Air Pressure and more at luizmonteiro.com Airspeed
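Tying the formulas above together, a small illustrative Python routine (an editor's sketch; the constants are the ISA values quoted above and the function names are invented for this example) recovers TAS from pitot data and total air temperature, the way an air data computer would:

```python
import math

A0 = 661.47   # ISA sea-level speed of sound, knots
T0 = 288.15   # ISA sea-level temperature, K

def mach_from_pressures(qc, p):
    """Subsonic Mach number from impact pressure qc and static pressure p
    (any consistent pressure units)."""
    return math.sqrt(5.0 * ((qc / p + 1.0) ** (2.0 / 7.0) - 1.0))

def tas_knots(qc, p, tat_kelvin):
    """True airspeed in knots from pitot data and total air temperature."""
    m = mach_from_pressures(qc, p)
    sat = tat_kelvin / (1.0 + 0.2 * m * m)  # recover static air temperature
    return A0 * m * math.sqrt(sat / T0)

# Example: qc = 10 kPa, p = 69.7 kPa (roughly 10,000 ft), TAT = 280 K
print(tas_knots(10.0, 69.7, 280.0))  # ~283 knots for these inputs
```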
True airspeed
[ "Physics" ]
1,383
[ "Wikipedia categories named after physical quantities", "Airspeed", "Physical quantities" ]
594,312
https://en.wikipedia.org/wiki/Indicated%20airspeed
Indicated airspeed (IAS) is the airspeed of an aircraft as measured by its pitot-static system and displayed by the airspeed indicator (ASI). This is the pilots' primary airspeed reference. This value is not corrected for installation error, instrument error, or the actual encountered air density, being instead calibrated to always reflect the adiabatic compressible flow of the International Standard Atmosphere at sea level. It uses the difference between total pressure and static pressure, provided by the system, to either mechanically or electronically measure dynamic pressure. The dynamic pressure includes terms for both density and airspeed. Since the airspeed indicator cannot know the density, it is by design calibrated to assume the sea level standard atmospheric density when calculating airspeed. Since the actual density will vary considerably from this assumed value as the aircraft changes altitude, IAS varies considerably from true airspeed (TAS), the relative velocity between the aircraft and the surrounding air mass. Calibrated airspeed (CAS) is the IAS corrected for instrument and position error. An aircraft's indicated airspeed in knots is typically abbreviated KIAS for "Knots-Indicated Air Speed" (vs. KCAS for calibrated airspeed and KTAS for true airspeed).

The IAS is an important value for the pilot because it is the indicated speeds which are specified in the aircraft flight manual for such important performance values as the stall speed. These speeds, in true airspeed terms, vary considerably depending upon density altitude. However, at typical civilian operating speeds, the aircraft's aerodynamic structure responds to dynamic pressure alone, and the aircraft will perform the same when at the same dynamic pressure. Since it is this same dynamic pressure that drives the airspeed indicator, an aircraft will always, for example, stall at the published indicated airspeed (for the current configuration) regardless of density, altitude or true airspeed. Furthermore, the IAS is specified in some regulations, and by air traffic control when directing pilots, since the airspeed indicator displays that speed (by definition) and it is the pilot's primary airspeed reference when operating below transonic or supersonic speeds.

Calculation

Indicated airspeed measured by a pitot tube can be approximately expressed by the following equation, derived from Bernoulli's equation:

$\mathrm{IAS} = \sqrt{\frac{2(p_t - p_s)}{\rho_0}}$

NOTE: The above equation applies only to conditions that can be treated as incompressible. Liquids are treated as incompressible under almost all conditions. Gases under certain conditions can be approximated as incompressible. See Compressibility.

The compression effects can be corrected by use of the Poisson constant. This compensation corresponds to equivalent airspeed (EAS):

$\mathrm{IAS} = \sqrt{\frac{2\gamma}{\gamma - 1}\cdot\frac{p_s}{\rho_0}\left[\left(\frac{p_t - p_s}{p_s} + 1\right)^{\frac{\gamma - 1}{\gamma}} - 1\right]}$

where:
IAS is indicated airspeed in m/s,
$p_t$ is stagnation or total pressure in pascals,
$p_s$ is static pressure in pascals,
$\rho_0$ is standard atmosphere fluid density in kg/m3 at sea level, and
$\gamma$ is the specific heat capacity ratio (≈1.401 for air).

IAS vs CAS

The IAS is not the actual speed through the air even when the aircraft is at sea level under International Standard Atmosphere conditions (15 °C, 1013 hPa, 0% humidity). The IAS needs to be corrected for known instrument and position errors to show true airspeed under those specific atmospheric conditions, and this is the CAS (Calibrated Airspeed). Despite this, the pilot's primary airspeed reference, the ASI, shows IAS (by definition).
The relationship between CAS and IAS is known and documented for each aircraft type and model.

IAS and V speeds

The aircraft's pilot manual usually gives critical V speeds as IAS, those speeds indicated by the airspeed indicator. This is because the aircraft behaves similarly at the same IAS no matter what the TAS is. For example, a pilot landing at a hot and high airfield will use the same IAS to fly the aircraft at the correct approach and landing speeds as when landing at a cold sea level airfield, even though the TAS must differ considerably between the two landings.

Whereas IAS can be reliably used for monitoring critical speeds well below the speed of sound, this is not so at higher speeds. Because (1) the compressibility of air changes considerably approaching the speed of sound, and (2) the speed of sound varies considerably with temperature and therefore altitude, the maximum speed at which an aircraft structure is safe, the never exceed speed (abbreviated VNE), is specified at several differing altitudes in faster aircraft's operating manuals; the Pilot's Notes for the Tempest V (Sabre IIA engine, Air Ministry A.P.2458C-PN), for example, tabulate VNE against altitude.

IAS and navigation

For navigation, it is necessary to convert IAS to TAS and/or ground speed (GS) using the following method:
correct IAS to calibrated airspeed (CAS) using an aircraft-specific correction table;
correct CAS to true airspeed (TAS) by using Outside Air Temperature (OAT), pressure altitude and CAS on an E6B flight computer or equivalent functionality on most GPSs;
convert TAS to ground speed (GS) by allowing for the effect of wind.

With the advent of Doppler radar navigation and, more recently, GPS receivers, with other advanced navigation equipment that allows pilots to read ground speed directly, the in-flight TAS calculation is becoming unnecessary for the purposes of navigation estimations. TAS remains the primary measure of an aircraft's cruise performance in manufacturers' specifications, speed comparisons and pilot reports.

Other airspeeds

From IAS, the following speeds can also be calculated:
convert CAS to equivalent airspeed (EAS) by allowing for compressibility effects (not necessary at slow speed or low altitude); EAS is used by aircraft engineers and some very high-altitude flying aircraft such as the U-2 and the SR-71;
convert EAS to true airspeed (TAS) by allowing for differences in density altitude.

On large jet aircraft the IAS is by far the most important speed indicator. Most aircraft speed limitations are based on IAS, as IAS closely reflects dynamic pressure. TAS is usually displayed as well, but purely for advisory information and generally not in a prominent location. Modern jet airliners also include ground speed (GS) and a Machmeter. Ground speed shows the aircraft's actual speed over the ground. This is usually connected to a GPS or similar system. Ground speed is just a pilot aid to estimate if the flight is on time, behind or ahead of schedule. It is not used for takeoff and landing purposes, since the imperative speed for a flying aircraft is always the speed against the wind. The Machmeter is, on subsonic aircraft, a warning indicator. Subsonic aircraft must not fly faster than a specific percentage of the speed of sound. Usually passenger airliners do not fly faster than around 85% of the speed of sound, or Mach 0.85. Supersonic aircraft, like the Concorde and military fighters, use the Machmeter as the main speed instrument with the exception of take-offs and landings.
Some aircraft also have a taxi speed indicator for use on the ground. Since the ASI on jet airliners typically only begins indicating above a certain low speed, pilots may need extra help while taxiing the aircraft on the ground; the taxi speed indicator covers that low-speed range. See also Acronyms and abbreviations in avionics ICAO recommendations on use of the International System of Units Air speed Calibrated airspeed Equivalent airspeed Flight instruments Global Positioning System True airspeed References Bibliography Gracey, William (1980), "Measurement of Aircraft Speed and Altitude" (11 MB), NASA Reference Publication 1046. Airspeed
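To make the Calculation section above concrete, an illustrative Python sketch (an editor's example; the function names are invented) implementing both the incompressible reading and the compressibility-corrected form:

```python
import math

RHO0 = 1.225   # ISA sea-level density, kg/m^3
GAMMA = 1.4    # ratio of specific heats for air (the text quotes ~1.401)

def ias_incompressible(pt, ps):
    """Low-speed ASI reading in m/s from total and static pressure (Pa)."""
    return math.sqrt(2.0 * (pt - ps) / RHO0)

def eas_compressible(pt, ps):
    """Compressibility-corrected speed (equivalent airspeed) in m/s."""
    qc = pt - ps
    e = (GAMMA - 1.0) / GAMMA
    return math.sqrt(2.0 * GAMMA / (GAMMA - 1.0) * ps / RHO0
                     * ((qc / ps + 1.0) ** e - 1.0))

# At sea level with 3 kPa of differential pressure the two readings
# agree to within about half a percent, as the NOTE above suggests.
pt, ps = 104_325.0, 101_325.0
print(ias_incompressible(pt, ps), eas_compressible(pt, ps))
```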
Indicated airspeed
[ "Physics" ]
1,577
[ "Wikipedia categories named after physical quantities", "Airspeed", "Physical quantities" ]
594,336
https://en.wikipedia.org/wiki/Cell%20proliferation
Cell proliferation is the process by which a cell grows and divides to produce two daughter cells. Cell proliferation leads to an exponential increase in cell number and is therefore a rapid mechanism of tissue growth. Cell proliferation requires both cell growth and cell division to occur at the same time, such that the average size of cells remains constant in the population. Cell division can occur without cell growth, producing many progressively smaller cells (as in cleavage of the zygote), while cell growth can occur without cell division to produce a single larger cell (as in growth of neurons). Thus, cell proliferation is not synonymous with either cell growth or cell division, despite these terms sometimes being used interchangeably. Stem cells undergo cell proliferation to produce proliferating "transit amplifying" daughter cells that later differentiate to construct tissues during normal development and tissue growth, during tissue regeneration after damage, or in cancer. The total number of cells in a population is determined by the rate of cell proliferation minus the rate of cell death (a small worked example appears below). Cell size depends on both cell growth and cell division, with a disproportionate increase in the rate of cell growth leading to production of larger cells and a disproportionate increase in the rate of cell division leading to production of many smaller cells. Cell proliferation typically involves balanced cell growth and cell division rates that maintain a roughly constant cell size in the exponentially proliferating population of cells. Cell proliferation occurs by combining cell growth with regular "G1-S-G2-M" cell cycles to produce many diploid cell progeny. In single-celled organisms, cell proliferation is largely responsive to the availability of nutrients in the environment (or laboratory growth medium). In multicellular organisms, the process of cell proliferation is tightly controlled by gene regulatory networks encoded in the genome and executed mainly by transcription factors, including those regulated by signal transduction pathways elicited by growth factors during cell–cell communication in development. It has also recently been demonstrated that cellular bicarbonate metabolism, which supports cell proliferation, can be regulated by mTORC1 signaling. In addition, intake of nutrients in animals can induce circulating hormones of the insulin/IGF-1 family, which are also considered growth factors and which function to promote cell proliferation in cells throughout the body that are capable of doing so. Uncontrolled cell proliferation, leading to an increased proliferation rate, or a failure of cells to arrest their proliferation at the normal time, is a cause of cancer. References Cellular processes
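As a numerical illustration of the population arithmetic described above, here is a minimal sketch assuming constant per-cell proliferation and death rates (simple exponential kinetics); the rate values and function name are invented for the example.

```python
import math

def population(n0: float, k_prolif: float, k_death: float, t_hours: float) -> float:
    # Net growth rate is proliferation minus death; exponential solution
    return n0 * math.exp((k_prolif - k_death) * t_hours)

# 1,000 cells with a net doubling time of 24 h: k_net = ln(2) / 24 per hour
k_net = math.log(2) / 24.0
print(round(population(1000, k_net, 0.0, 72.0)))  # 8000 cells after three days
```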
Cell proliferation
[ "Biology" ]
510
[ "Cellular processes" ]
594,501
https://en.wikipedia.org/wiki/E-democracy
E-democracy (a blend of the terms electronic and democracy), also known as digital democracy or Internet democracy, uses information and communication technology (ICT) in political and governance processes. The term is credited to digital activist Steven Clift. By using 21st-century ICT, e-democracy seeks to enhance democracy, including aspects like civic technology and e-government. Proponents argue that by promoting transparency in decision-making processes, e-democracy can empower all citizens to observe and understand the proceedings. Also, if they possess overlooked data, perspectives, or opinions, they can contribute meaningfully. This contribution extends beyond informal, disconnected debate; it facilitates citizen engagement in the proposal, development, and actual creation of a country's laws. In this way, e-democracy has the potential to incorporate crowdsourced analysis more directly into the policy-making process. Electronic democracy incorporates a diverse range of tools that use both existing and emerging information sources. These tools provide a platform for the public to express their concerns, interests, and perspectives, and to contribute evidence that may influence decision-making processes at the community, national, or global level. E-democracy leverages traditional broadcast technologies such as television and radio, as well as newer interactive internet-enabled devices and applications, including polling systems. These emerging technologies have become popular means of public participation, allowing a broad range of stakeholders to access information and contribute directly via the internet. Moreover, large groups can offer real-time input at public meetings using electronic polling devices. Utilizing information and communication technology (ICT), e-democracy bolsters political self-determination. It collects social, economic, and cultural data to enhance democratic engagement. As a concept that encompasses various applications within differing democratic structures, e-democracy has substantial impacts on political norms and public engagement. It emerges from theoretical explorations of democracy and practical initiatives to address societal challenges through technology. The extent and manner of its implementation often depend on the specific form of democracy adopted by a society, and are thus shaped by both internal dynamics and external technological developments. When designed to present both supporting and opposing evidence and arguments for each issue, apply conflict resolution and cost–benefit analysis techniques, and actively address confirmation bias and other cognitive biases, e-democracy could potentially foster a more informed citizenry. However, the development of such a system poses significant challenges. These include designing sophisticated platforms to achieve these aims, navigating the dynamics of populism while acknowledging that not everyone has the time or resources for full-time policy analysis and debate, promoting inclusive participation, and addressing cybersecurity and privacy concerns. Despite these hurdles, some envision e-democracy as a potential facilitator of more participatory governance, a countermeasure to excessive partisan dogmatism, a problem-solving tool, a means for evaluating the validity of pro/con arguments, and a method for balancing power distribution within society. Throughout history, social movements have adapted to use the prevailing technologies as part of their civic engagement and social change efforts.
This trend persists in the digital era, illustrating how technology shapes democratic processes. As technology evolves, it inevitably impacts all aspects of society, including governmental operations. This ongoing technological advancement brings new opportunities for public participation and policy-making while presenting challenges such as cybersecurity threats, issues related to the digital divide, and privacy concerns. Society is actively grappling with these complexities, striving to balance leveraging technology for democratic enhancement and managing its associated risks. Considerations E-democracy incorporates elements of both representative and direct democracy. In representative democracies, which characterize most modern systems, responsibilities such as law-making, policy formation, and regulation enforcement are entrusted to elected officials. This differs from direct democracies, where citizens undertake these duties themselves. Motivations for e-democracy reforms are diverse and reflect the desired outcomes of its advocates. Some aim to align government actions more closely with the public's interest (akin to populism), to diminish the influence of media, political parties, and lobbyists, or to use public input to assess the potential costs and benefits of each policy. E-democracy, in its unstructured form, emphasizes direct participation and has the potential to redistribute political power from elected officials to individuals or groups. However, reforms aimed at maximizing benefits and minimizing costs might require structures that mimic a form of representation, conceivable if the public had the capacity to debate and analyze issues full-time. Given the design of electronic forums that can accommodate extensive debate, e-democracy has the potential to mimic aspects of representation on a much larger scale. These structures could involve public education initiatives or systems that permit citizens to contribute based on their interests or expertise. Further, e-democracy allows participants to engage online, enabling it to reach a broader range of people. From this standpoint, e-democracy appears less concerned with what the public believes to be true and more focused on the evidence the public can demonstrate as true. This view reveals a tension within e-democratic reforms between populism and an evidence-based approach akin to the scientific method or Enlightenment principles. A key indicator of the effectiveness of a democratic system is the successful implementation of policy. To facilitate this, voters must comprehend the implications of each policy approach, evaluate its costs and benefits, and consider historical precedents for policy effectiveness. Some proponents of e-democracy argue that technology can enable citizens to perform these tasks as effectively as, if not more effectively than, traditional political parties within representative democracies. By harnessing technological advancements, e-democracy has the potential to foster more informed decision-making and enhance citizen involvement in the democratic process. History E-democracy traces back to the development of information and communication technology (ICT) and the evolution of democratic structures. It encompasses initiatives from governments to interact with citizens through digital means and grassroots activities using electronic platforms to influence governmental practices. Early developments The inception of e-democracy corresponds with the rise of the Internet in the late 20th century.
The diffusion of personal computers and the Internet during the 1990s led to the initiation of electronic government initiatives. Digital platforms, such as forums, chat rooms, and email lists, were pivotal in fostering public discourse, thereby encouraging informal civic engagement online. These platforms provided an accessible medium for individuals to discuss ideas and issues, and they were utilized by both governments and citizens to promote dialogue, advocate for change, and involve the public in decision-making processes. Concept and approach The structure of the Internet, which currently embodies characteristics such as decentralization, open standards, and universal access, has been observed to align with principles often associated with democracy. These democratic principles have their roots in federalism and Enlightenment values like openness and individual liberty. Steven Clift, a notable proponent of e-democracy, suggests that the Internet should be utilized to enhance democratic processes and provide increased opportunities for interaction between individuals, communities, and the government. He emphasizes the importance of structuring citizen-to-citizen discussions online within existing power structures and maintaining significant reach within the community for these discussions to hold agenda-setting potential. The concept involves endorsing individuals or policies committed to leveraging internet technologies to amplify public engagement without modifying or substituting existing constitutions. The approach includes data collection, analysis of advantages and disadvantages, evaluation of interests, and facilitating discussions around potential outcomes. Late 20th century to early 21st century In the late 20th century and early into the 21st century, e-democracy started to become more structured as governments worldwide started to explore its potential. One major development was the rise of e-government initiatives, which aimed to provide public services online. One of the first instances of such an initiative was the establishment of the Government Information Locator Service (GILS) by the United States government in 1994. GILS was a searchable database of government information accessible to citizens and businesses, and it served as a tool to improve agency electronic records management practices. Along with the rise of e-government services, government websites started to spring up, aiming to improve communication with citizens, increase transparency, and make administrative tasks easier to accomplish online. The mid-2000s ushered in the era of Web 2.0, emphasizing user-generated content, interoperability, and collaboration. This period witnessed the rise of social media platforms, blogs, and other collaborative tools, further amplifying the potential for e-democracy through increasing opportunities for public participation and interaction. Concepts like crowdsourcing and open-source governance gained traction, advocating for broader and more direct public involvement in policymaking. As the digital age progressed, so too did the interaction between governments and citizens. The advent and rapid adoption of the internet globally catalyzed this transformation. With high internet penetration in many regions, politics have increasingly relied on the internet as a primary source of information for numerous people. 
This digital shift has been supported by the rise in online advertising among political candidates and groups actively trying to sway public opinion or directly influence legislators. This trend is especially noticeable among younger voters, who often regard the internet as their primary source of information due to its convenience and ability to streamline their information-gathering process. The user-friendly nature of search engines like Google and social networks encourages increased citizen engagement in political research and discourse. Social networks, for instance, offer platforms where individuals can voice their opinions on governmental issues without fear of judgement. The vast scale and decentralized structure of the internet enable anyone to create viral content and influence a wide audience. The Internet facilitates citizens in accessing and disseminating information about politicians while simultaneously providing politicians with insights from a broader citizen base. This collaborative approach to decision-making and problem-solving empowers citizens. It accelerates decision-making processes by politicians, thereby fostering a more efficient society. Gathering citizen feedback and perspectives is essential to a politician's role. The Internet functions as a conduit for effective engagement with a larger audience. Consequently, this enhanced communication with the public strengthens the capability and effectiveness of the American government as a democracy. The 2016 U.S. presidential election is an example of social media integration in political campaigns, where both Donald Trump and Hillary Clinton actively utilized Twitter as a communication tool. These platforms allow candidates to shape public perceptions while also humanizing their personas, suggesting that political figures are as approachable and relatable as ordinary individuals. Through resources such as Google, the Internet enables every citizen to readily research political topics. Social media platforms like Facebook, Twitter, and Instagram encourage political engagement, allowing users to share their political views and connect with like-minded individuals. Generation X's disillusionment with political processes, epitomized by large-scale public protests such as the U.K. miners' strike of 1984-1985 that appeared to fail, predated the widespread availability of information technology to individual citizens. There is a perception that e-democracy could address some of these concerns by offering a counter to the insularity, power concentration, and post-election accountability deficit often associated with traditional democratic processes organized primarily around political parties. Tom Watson, the Deputy Leader of the U.K. Labour Party, has publicly voiced this view. Despite the benefits of the digital shift, one of the challenges of e-democracy is the potential disconnect between politics and actual government implementation. While the internet provides a platform for robust political discourse, translating these discussions into effective government action can be complex. This gap can often be exacerbated by the rapid pace of digital dialogue, which may outpace the slower, more deliberative processes of policy-making. The rise of digital media has created new opportunities for citizens to participate in politics and to hold governments accountable. However, it has also created new challenges, such as the potential for echo chambers and the need for governments to be responsive to citizen concerns.
The challenge for e-democracy, therefore, is to ensure that the digital discourse contributes constructively to the functioning of the government and the decision-making processes, rather than becoming an echo chamber of opinions with little practical impact. As of the 2020s, e-democracy's landscape continues to evolve alongside advancements in technologies such as artificial intelligence, blockchain, and big data. These technologies promise to expand citizen participation further, enhance transparency, and boost the overall efficiency and responsiveness of democratic governance. The history of e-democracy exhibits significant progress, but it is also characterized by ongoing debates and challenges, such as the digital divide, data privacy, cybersecurity, and the impact of misinformation. One concern is whether e-democracies will be able to withstand terrorist threats; only once people are assured that adequate defenses are in place is e-democracy likely to serve communities the way it was intended to. As this journey continues, the emphasis remains on leveraging technology to enhance democratic processes and ensure all citizens' voices are heard and valued. E-democracy promotes wider access to information, and its inherent decentralization challenges censorship practices. It embodies elements of the internet's origins, including strong libertarian support for freedom of speech, a widespread sharing culture, and the National Science Foundation's prohibition on commercial use. The internet's capacity for mass communication, evident in newsgroups, chat rooms, and MUDs, surpasses traditional boundaries associated with broadcast media like newspapers or radio, as well as personal media such as letters or landline telephones. As the Internet represents a vast digital network supporting open standards, achieving widespread, cost-effective access to a diverse range of communication media and models is feasible. Practical issues pertaining to e-democracy include managing the agenda while encouraging meaningful participation and fostering enlightened understanding. Furthermore, efforts are evaluated based on their ability to ensure voting equality and promote inclusivity. The success or failure of e-democracy largely depends on its capability to accurately delineate each issue's relevant costs and benefits, identify their likelihood and significance, and align votes with this analysis. In addition, all internet forums, including Wikipedia, must address cybersecurity and protect sensitive data. Digital mobilization in social movements Occupy movement The Occupy movement, which organized various demonstrations in response to the financial crisis of 2007–08, extensively utilized social networks. 15-M Movement Originating in Spain and subsequently spreading to other European countries, the 15-M Movement gave rise to proposals by the Partido X (X Party) in Spain. In 2016 and 2017, citizens involved in the movement, together with the City Council of Barcelona, developed a combined online and offline e-democracy project called Decidim, which describes itself as a "technopolitical network for participatory democracy", with the aim of implementing the hopes of participatory democracy raised by the movement. The project combines a free and open-source software (FOSS) package with a participatory political project and an organising community, "Metadecidim".
Decidim participants refer to the software, political, and organising components of the project as "technical", "political" and "technopolitical" levels, respectively. By 2023, Decidim estimated that 400 city and regional governments and civil society institutions were running Decidim instances. Arab Spring During the Arab Spring, uprisings across North Africa and the Middle East were spearheaded by online activists. Initially, pro-democracy movements harnessed digital media to challenge authoritarian regimes. These regimes, however, adapted and integrated social media into their counter-insurgency strategies over time. Digital media served as a critical tool in transforming localized and individual dissent into structured movements with a shared awareness of common grievances and opportunities for collective action. Egyptian Revolution The Egyptian Revolution began on 25 January 2011, prompted by mass protests in Cairo, Egypt, against the long reign of President Hosni Mubarak, high unemployment, governmental corruption, poverty, and societal oppression. The 18-day revolution gained momentum not through initial acts of violence or protests, but via a single Facebook page, which quickly attracted the attention of thousands and eventually millions of Egyptians, evolving into a global phenomenon. The Internet became a tool of empowerment for the protestors, facilitating participation in their government's democratization process. Protestors effectively utilized digital platforms to communicate, organize, and collaborate, generating real-time impact. In response to the regime's failed attempt to disrupt political online discussions by severing all internet access, Google and Twitter collaborated to create a system that allowed information to reach the public without internet access. The interactive nature of media during this revolution enhanced civic participation and played a significant role in shaping the political outcome of the revolution and the democratization of the entire nation. The Egyptian Revolution has been interpreted by some as a paradigm shift from a group-controlled system to one characterized by "networked individualism". This transformation is tied to the technological "triple revolution", consisting of three key developments: the shift towards social networks, the widespread propagation of the instantaneous internet, and the ubiquity of mobile phones. These elements significantly impacted change through the Internet, providing an alternative, unregulated sphere for idea formation and protests. For instance, the "6 April Youth Movement" in Egypt established its political group on Facebook and called for a national strike. Despite the subsequent suppression of this event, the Facebook group persisted, encouraging other activist groups to utilize online media. Moreover, the Internet served as a medium for building international connections, amplifying the impact of the revolt. The rapid transmission of information via Twitter hashtags, for example, made the uprising globally known. In particular, over three million tweets contained popular hashtags such as #Egypt and #sidibouzid, further facilitating the spread of knowledge and fostering change in Egypt. Kony 2012 The Kony 2012 video, released on 5 March 2012 by the non-profit organization Invisible Children, launched an online grassroots campaign aimed at locating and arresting Joseph Kony, the leader of the Lord's Resistance Army (LRA) in Central Africa.
The video's mission was to raise global awareness about Kony's activities, with Jason Russell, a founder of Invisible Children, emphasizing the necessity of public support to urge the government's continued search for Kony. The organization leveraged the extensive reach of social media and contemporary technology to spotlight Kony's crimes. In response to the campaign, on 21 March 2012, a resolution was introduced by 33 Senators denouncing "the crimes against humanity" perpetrated by Kony and the LRA. This resolution supported the US government's ongoing efforts to boost the capabilities of regional military forces for civilian protection and the pursuit of LRA commanders. It also advocated for cross-border initiatives to augment civilian protection and aid populations affected by the LRA. Co-sponsor Senator Lindsey Graham noted the significant impact of public attention driven by social media, stating that the YouTube sensation would "help the Congress be more aggressive and will do more to lead to his demise than all other action combined". India Against Corruption (2011–2012) The India Against Corruption (IAC) movement was an influential anti-corruption crusade in India, garnering substantial attention during the anti-corruption protests of 2011 and 2012. Its primary focus was the contention surrounding the proposed Jan Lokpal bill. IAC sought to galvanize the populace in their pursuit of a less corrupt Indian society. However, internal divisions within the IAC's central committee led to the movement's split. Arvind Kejriwal left to establish the Aam Aadmi Party, while Anna Hazare created the Jantantra Morcha. Long March (Pakistan) The Long March is a socio-political movement in Pakistan initiated by Muhammad Tahir-ul-Qadri after his return, in December 2012, from a seven-year residence in Toronto, Ontario, Canada. Qadri called for a "million-men" march in Islamabad to protest government corruption. The march commenced on 14 January 2013, with thousands pledging to participate in a sit-in until their demands were met. The march began in Lahore with about 25,000 participants. During a rally in front of the parliament, Qadri critiqued the legislators, saying, "There is no Parliament; there is a group of looters, thieves and dacoits [bandits] ... Our lawmakers are the lawbreakers." After four days of sit-in, Qadri and the government reached an agreement, termed the Islamabad Long March Declaration, which pledged electoral reforms and enhanced political transparency. Despite Qadri's call for a "million-men" march, the government estimated the sit-in participants in Islamabad to number around 50,000. Five Star Movement (Italy) The Five Star Movement (M5S), a prominent political party in Italy, has been utilizing online voting since 2012 to select its candidates for Italian and European elections. These votes are conducted through a web-based application called Rousseau, accessible to registered members of Beppe Grillo's blog. Within this platform, M5S users are able to discuss, approve, or reject legislative proposals. These proposals are then presented in Parliament by the M5S group. For instance, the M5S's electoral law and the selection of its presidential candidate were determined via online voting. Notably, the decision to abolish a law against immigrants was made by online voting among M5S members, in opposition to the views of Grillo and Casaleggio.
M5S's alliance with the UK Independence Party was also determined by online voting, albeit with limited options for the choice of European Parliament group for M5S. These were Europe of Freedom and Democracy (EFD), European Conservatives and Reformists (ECR), and "Stay independent" (Non-Inscrits). The possibility of joining the Greens/EFA group was discussed but not available at the time due to the group's prior rejection of M5S. When the Conte I Cabinet collapsed, a new coalition between the Democratic Party and M5S was endorsed after over 100,000 members voted online, with 79.3% supporting the new coalition. COVID-19 pandemic The COVID-19 pandemic has underscored the importance and impact of e-democracy. In 2020, the advent of COVID-19 led countries worldwide to implement safety measures as recommended by public health officials. This abrupt societal shift constrained social movements, causing a temporary halt to certain political issues. Despite these limitations, individuals leveraged digital platforms to express their views, create visibility for social movements, and strive to instigate change and raise awareness through democracy in social media. As reported by news analysis firm The ASEAN Post, the pandemic-induced limitations on traditional democratic spaces such as public meetings have led Filipinos, among others, to resort to social media, digital media, and collaborative platforms for engaging in public affairs and practising "active citizenship" in the virtual domain. This shift has enabled active participation in social, written, or visual interaction and the rectification of misinformation in a virtual setting. Opportunities and challenges Potential impacts E-democracy has the potential to inspire greater community involvement in political processes and policy decisions, interlacing its growth with complex internal aspects such as political norms and public pressure. The manner in which it is implemented is also closely connected to the specific model of democracy employed. Consequently, e-democracy is profoundly influenced by a country's internal dynamics as well as the external drivers defined by standard innovation and diffusion theory. In the current age, where the internet and social networking dominate daily life, individuals are increasingly advocating for their public representatives to adopt practices similar to those in other states or countries concerning the online dissemination of government information. By making government data easily accessible and providing straightforward channels to communicate with government officials, e-democracy addresses the needs of modern society. E-democracy promotes more rapid and efficient dissemination of political information, encourages public debate, and boosts participation in decision-making processes. Social media platforms have emerged as tools of empowerment, particularly among younger individuals, stimulating their participation in electoral processes. These platforms also afford politicians opportunities for direct engagement with constituents. A notable example is the 2016 United States presidential elections, in which Donald Trump primarily used Twitter to communicate policy initiatives and goals. Similar practices have been observed among various global leaders, such as Justin Trudeau, Jair Bolsonaro, and Hassan Rouhani, who maintain active Twitter accounts. 
Some observers argue that the government's online publication of public information enhances its transparency, enabling more extensive public scrutiny, and consequently promoting a more equitable distribution of power within society. Jane Fountain, in her 2001 work Building the Virtual State, delves into the expansive reach of e-democracy and its interaction with traditional governmental structures. She offers a comprehensive model to understand how pre-existing norms, procedures, and rules within bureaucracies impact the adoption of new technological forms. Fountain suggests that this form of e-government, in its most radical manifestation, would necessitate a significant overhaul of the modern administrative state, with routine electronic consultations involving elected politicians, civil servants, pressure groups, and other stakeholders becoming standard practice at all stages of policy formulation. States where legislatures are controlled by the Republican Party, as well as those characterized by a high degree of legislative professionalization and active professional networks, have shown a greater propensity to embrace e-government and e-democracy. E-democracy provides numerous benefits, contributing to a more engaged public sphere. It encourages increased public participation by offering platforms for citizens to express their opinions through websites, emails, and other electronic communication channels, influencing planning and decision-making processes. This digital democracy model broadens the number and diversity of individuals who exercise their democratic rights by conveying their thoughts to decision-making bodies about various proposals and issues. Moreover, it cultivates a virtual public space, fostering interaction, discussion, and the exchange of ideas among citizens. E-democracy also promotes convenience, allowing citizens to participate at their own pace and comfort. Its digital nature enables it to reach vast audiences with relative ease and minimal cost. The system promotes interactive communication, encouraging dialogue between authorities and citizens. It also serves as an effective platform for disseminating large amounts of information, maintaining clarity and minimizing distortion. Challenges While e-democracy platforms, also known as digital democracy platforms, offer enhanced opportunities for exercising voting rights, they are also susceptible to disruption. Digital voting platforms, for example, have faced attacks aimed at influencing election outcomes. As Dobrygowski states, "cybersecurity threats to the integrity of both electoral mechanisms and government institutions are, quite uncomfortably, more intangible." That being said, if e-democracy options were more secure, people would be more comfortable using them for purposes such as voting. While traditional paper ballots are often considered the most secure method for conducting elections, digital voting provides the convenience of electronic participation. However, the successful implementation of this system necessitates continual innovations and contributions from third parties. Essentially, for e-democracy to be used in real time, governments would have to prove its reliability to users. Ensuring digital inclusion To foster a robust digital democracy, it is imperative to promote digital inclusion that ensures all citizens, regardless of income, education, gender, religion, ethnicity, language, physical and mental health, have equal opportunities to participate in public policy formulation.
Early instances of digital inclusion in e-democracy can be seen in the 2008 election; individuals who were normally civically uninvolved became increasingly engaged due to the accessibility of receiving and spreading campaign information. During the 2020 elections, digital communications were utilized by various communities to cultivate a sense of inclusivity. Specifically, the COVID-19 pandemic saw a surge in online political participation among the youth, demonstrated by the signing of online petitions and participation in digital protests. Even as youth participation in traditional politics dwindles, young people show significant support for pressure groups mobilized through social media. For instance, the Black Lives Matter movement gained widespread recognition on social media, enabling many young people to participate in meaningful ways, including online interactions and protests. Requirements E-democracy depends on fostering participation, promoting social inclusivity, remaining sensitive to individual perspectives, and offering flexible means of engagement. The Internet lends a sense of relevance to participation by giving everyone a platform for their voices to be heard and articulated. It also facilitates a structure of social inclusivity through a broad array of websites, groups, and social networks, each representing diverse viewpoints and ideas. Individual needs are met by enabling the public and rapid expression of personal opinions. Furthermore, the Internet offers an exceptionally flexible environment for engagement; it is cost-effective and widely accessible. Through these attributes, e-democracy and the deployment of the Internet can play a pivotal role in societal change. Internet accessibility The progression of e-democracy is impeded by the digital divide, which separates those actively engaged in electronic communities from those who do not participate. Proponents of e-democracy often recommend governmental actions to bridge this digital gap. The divergence in e-governance and e-democracy between the developed and the developing world is largely due to the digital divide. Practical concerns include the digital divide that separates those with access from those without, and the opportunity cost associated with investments in e-democracy innovations. There also exists a degree of skepticism regarding the potential impact of online participation. Security and privacy The government has a responsibility to ensure that online communications are both secure and respectful of individuals' privacy. This aspect gains prominence when considering electronic voting. The complexity of electronic voting systems surpasses other digital transaction mechanisms, necessitating authentication measures that can counter ballot manipulation or its potential threat. These measures may encompass the use of smart cards, which authenticate a voter's identity while maintaining the confidentiality of the cast vote (a toy sketch of this separation appears below). Electronic voting in Estonia exemplifies a successful approach to addressing the privacy-identity dilemma inherent in internet voting systems. However, the ultimate goal should be to match the security and privacy standards of existing manual systems. Despite these advancements, recent research has indicated, through a SWOT analysis, that the risks of e-government are related to data loss, privacy and security, and user adoption.
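To make the identity/ballot separation concrete, here is a deliberately simplified sketch. Everything in it (the voter roll, the token scheme, the function names) is invented for illustration; real systems such as Estonia's rely on vetted cryptographic protocols, and this toy offers none of their guarantees (for one thing, the issuing server below could still link tokens to voters).

```python
import secrets

# Toy illustration: authentication verifies who may vote, then hands back
# a random token; the tally records choices by token only, never identity.
# This is NOT a secure voting protocol, just a sketch of the idea.

eligible = {"alice", "bob"}      # voter roll (assumed, for illustration)
issued = set()                   # voters who already received a token
ballots = {}                     # token -> choice (no identities stored)

def issue_token(voter_id: str) -> str:
    if voter_id not in eligible or voter_id in issued:
        raise PermissionError("not eligible or already voted")
    issued.add(voter_id)
    return secrets.token_hex(16)     # random, hard-to-guess credential

def cast_vote(token: str, choice: str) -> None:
    if token in ballots:
        raise ValueError("token already used")
    ballots[token] = choice

t = issue_token("alice")
cast_vote(t, "yes")
print(list(ballots.values()))    # the tally sees choices, never identities
```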
Government responsiveness To encourage citizens to engage in online consultations and discussions, the government needs to be responsive and clearly demonstrate that public engagement influences policy outcomes. It's crucial for citizens to have the opportunity to contribute at a time and place that suits them and when their viewpoints will make a difference. The government should put structures in place to accommodate increased participation. Considering the role that intermediaries and representative organizations might play could be beneficial to ensure issues are debated in a manner that is democratic, inclusive, tolerant, and productive. To amplify the efficacy of existing legal rights allowing public access to information held by public authorities, citizens ought to be granted the right to productive public deliberation and moderation. Some researchers argue that many initiatives have been driven by technology rather than by the core values of government, which has resulted in weakened democracy. Participation and engagement Interaction modes E-democracy presents an opportunity to reconcile the conventional trade-off between the size of the group involved in democratic processes and the depth of will expression. Historically, broad group participation was facilitated via simple ballot voting, but the depth of will expression was confined to predefined options (those on the ballot); depth of will expression was instead obtained by limiting participant numbers through representative democracy. The social media Web 2.0 revolution has demonstrated the possibility of achieving both large group sizes and depth of will expression. However, expressions of will in social media are unstructured, making their interpretation challenging and often subjective. Novel information processing methods, including big data analytics and the semantic web, suggest potential ways to exploit these capabilities for future e-democracy implementations. Currently, e-democracy processes are facilitated by technologies such as electronic mailing lists, peer-to-peer networks, collaborative software, and apps like GovernEye, Countable, VoteSpotter, wikis, internet forums, and blogs. The examination of e-democracy encompasses its various stages including "information provision, deliberation, and participation in decision-making." This assessment also takes into account the different hierarchical levels of governance such as local communities, states/regions, nations, and the global stage. Further, the scope of involvement is also considered, which includes the participation of citizens/voters, the media, elected officials, political organizations, and governments. Therefore, e-democracy's evolution is influenced by such broad changes as increased interdependency, technological multimediation, partnership governance, and individualism. Social media platforms such as Facebook, Twitter, WordPress, and Blogspot are increasingly significant in democratic dialogues. The role of social media in e-democracy is an emerging field of study, along with technological developments such as argument maps and the semantic web. Another notable development is the combination of open social networking communication with structured communication from closed expert and/or policy-maker panels, such as through the modified Delphi method (HyperDelphi).
This approach seeks to balance distributed knowledge and self-organized memories with critical control, responsibility, and decision-making in electronic democracy (a minimal sketch of such a panel round appears at the end of this passage). Social networking serves as an entry point within the citizens' environment, engaging them on their terms. Proponents of e-government believe this helps the government act more in tune with its public. Examples of state usage include The Official Commonwealth of Virginia Homepage, where citizens can find Google tools and open social forums, considered significant steps towards the maturity of e-democracy. Community involvement Civic engagement encompasses three key aspects: understanding public affairs (political knowledge), trust in the political system (political trust), and involvement in governmental decision-making processes (political participation). The internet enhances civic engagement by creating a new medium for interaction with government institutions. Advocates of e-democracy propose that it can facilitate more active government engagement and inspire citizens to actively influence decisions that directly affect them. Digital tools have been, and continue to be, used to determine the best practices for getting citizens involved in government. Collecting data on what gets citizens involved most efficiently allows for stronger practices going forward in citizen involvement. Numerous studies indicate an increased use of the internet for obtaining political information. From 1996 to 2002, the percentage of adults claiming that the internet played a significant role in their political choices rose from around 14 to 20 percent. In 2002, almost a quarter of the population stated that they had visited a website to research specific public policy issues. Research has indicated that people are more likely to visit websites that challenge their viewpoints rather than those that align with their own beliefs. Around 16 percent of the population has participated in online political activities such as joining campaigns, volunteering time, donating money, or participating in polls. A survey conducted by Philip N. Howard revealed that nearly two-thirds of the adult population in the United States has interacted with online political news, information, or other content over the past four election cycles. People tend to reference the websites of special interest groups more frequently than those of specific elected leaders, political candidates, political parties, nonpartisan groups, and local community groups. The vast informational capacity of the Internet empowers citizens to gain a deeper understanding of governmental and political affairs, while its interactive nature fosters new forms of communication with elected officials and public servants. By providing access to contact information, legislation, agendas, and policies, governments can enhance transparency, thereby potentially facilitating more informed participation both online and offline. As articulated by Matt Leighninger, the internet bolsters government by enhancing individual empowerment and reinforcing group agency. The internet avails vital information to citizens, empowering them to influence public policy more effectively. The utilization of online tools for organizing allows citizens to participate more easily in the government's policy-making process, leading to a surge in public engagement.
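As flagged above, here is a minimal sketch of a single Delphi-style feedback round of the kind the modified Delphi (HyperDelphi) approach builds on. The summary statistics, convergence rule, and ratings are illustrative assumptions, not a description of any deployed system.

```python
from statistics import median, quantiles

def summarize(ratings: list[float]) -> dict:
    # Panelists see the group median and interquartile range between rounds
    q1, _, q3 = quantiles(ratings, n=4)
    return {"median": median(ratings), "iqr": q3 - q1}

def converged(ratings: list[float], max_iqr: float = 1.0) -> bool:
    # One common stopping rule: the middle 50% of ratings agree closely
    return summarize(ratings)["iqr"] <= max_iqr

round_1 = [3, 7, 8, 5, 9, 6, 7]   # panel rates a proposal from 1 to 9
print(summarize(round_1))          # feedback shown before round 2
print(converged(round_1))          # False: opinions are still spread out
```

The point of iterating such rounds is that structured feedback pulls open-ended discussion toward a measurable group judgment, which is what distinguishes these panels from unstructured social media threads.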
Social media platforms foster networks of individuals whose online activities can shape the political process, including prompting politicians to intensify public appeal efforts in their campaigns. E-democracy offers a digital platform for public dialogue, enhancing the interaction between government and its residents. This form of online engagement enables the government to concentrate on key issues the community wishes to address. The underpinning philosophy is that every citizen should have the potential to influence their local governance. E-democracy aligns with local communities and provides an opportunity for any willing citizen to make a contribution. The essence of an effective e-democracy lies not just in citizen contribution to government activities, but in promoting mutual communication and collaboration among citizens for the improvement of their own communities. E-democracy utilizes information and communication technologies (ICT) to bolster the democratic processes of decision-making. These technologies play a pivotal role in informing and organizing citizens in different avenues of civic participation. Moreover, ICTs enhance the active engagement of citizens, and foster collaboration among stakeholders for policy formation within political processes across all stages of governance. The Organisation for Economic Co-operation and Development (OECD) identifies three key aspects regarding the role of ICTs in fostering civic engagement. The first aspect is timing, with most civic engagement activities occurring during the agenda-setting phase of a cycle. The second factor is adaptation, which refers to how ICTs evolve to facilitate increased civic participation. The final aspect is integration, representing how emerging ICTs blend new and traditional methods to maximize civic engagement. ICT fosters the possibility of a government that is both more democratic and better informed by facilitating open online collaborations between professionals and the public. The responsibility of collecting information and making decisions is shared between those possessing technological expertise and the traditionally recognized decision-makers. This broadened public involvement in the exchange of ideas and policies results in more democratic decision-making. Furthermore, ICT enhances the notion of pluralism within a democracy, introducing fresh issues and viewpoints. Ordinary citizens have the opportunity to become creators of political content and commentary, for instance, by establishing individual blogs and websites. Collaborative efforts in the online political sphere, similar to ABC News' Campaign Watchdog initiative, allow citizens to report any rule violations committed by any political party during elections. In the 2000 United States presidential race, candidates frequently utilized their websites to not only encourage their supporters to vote but to motivate their friends to vote as well. This dual-process approach—urging an individual to vote and then to prompt their friends to vote—was just beginning to emerge during that time. Today, political participation through various social media platforms is typical, and civic involvement via online forums is common. Through the use of ICTs, individuals interested in politics have the ability to become more engaged. 
Youth involvement In previous years, individuals belonging to Generation X, Generation Y, and Generation Z, typically encompassing those aged 35 and below as of the mid-2000s, have been noted for their relative disengagement from political activities. The implementation of electronic democracy has been proposed as a potential solution to foster increased voter turnout, democratic participation, and political literacy among these younger demographics. E-citizenship Youth e-citizenship presents a dichotomy between two predominant approaches: management and autonomy. The strategy of "targeting" younger individuals, prompting them to "play their part," can be interpreted as either an incentive for youth activism or a mechanism to regulate it. Autonomous e-citizens argue that despite their relative inexperience, young people should have the right to voice their perspectives on issues that they personally consider important. Conversely, proponents of managed e-citizenship view youth as nascent citizens transitioning from childhood to adulthood, and hence not yet fully equipped to engage in political discourse without proper guidance. Another significant concern is the role of the Internet, with advocates of managed e-citizenship arguing that young people may be especially susceptible to misinformation or manipulation online. This discord manifests as two perspectives on democracy: one that sees democracy as an established and reasonably just system, where young people should be motivated to participate, and another that views democracy as a political and cultural goal best achieved through networks where young people interact. What might initially appear as mere differences in communication styles ultimately reveals divergent strategies for accessing and influencing power. In Scotland The Highland Youth Voice, an initiative in Scotland, is an exemplar of efforts to bolster democratic participation, particularly through digital means. Despite an increasing emphasis on the youth demographic in UK governmental policy and issues, their engagement and interest have been waning. During the 2001 elections to the Westminster Parliament in the UK, voter turnout among 18- to 24-year-olds was estimated to be a mere 40%. This contrasts starkly with the fact that over 80% of 16- to 24-year-olds have accessed the internet at some point. The United Nations Convention on the Rights of the Child emphasizes the importance of educating young individuals as citizens of their respective nations. It advocates for the promotion of active political participation, which they can shape through robust debate and communication. The Highland Youth Voice strives to boost youth participation by understanding their governmental needs, perspectives, experiences, and aspirations. It provides young Scots, aged 14 to 18, an opportunity to influence decision-makers in the Highlands. This body, consisting of approximately 100 elected members, represents youth voices. Elections occur biennially and candidates are chosen directly from schools and youth forums. The Highland Youth Voice website serves as a pivotal platform where members can discuss issues pertinent to them, partake in online policy debates, and experience a model of e-democracy through simplified online voting. Thus, the website encompasses three key features, forming an online forum that enables youth self-education, participation in policy discourse, and engagement in the e-democracy process. 
Civil society's role Civil society organizations have a pivotal role in democracies, as highlighted by theorists such as Alexis de Tocqueville, acting as platforms for citizens to gain knowledge about public affairs and as sources of power beyond the state's reach. According to Hans Klein, a public policy researcher at the Georgia Institute of Technology, there exist several obstacles to participation in these forums, including logistical challenges of physical meetings. Klein's study of a civic association in the northeastern US revealed that electronic communication significantly boosted the organization's capacity to achieve its objectives. Given the relatively low cost of exchanging information over the Internet and its potential for wide reach, the medium has become an attractive venue for disseminating political information, especially among interest groups and parties operating on smaller budgets. For example, environmental or social interest groups might leverage the Internet as a cost-effective mechanism to raise awareness around their causes. Unlike traditional media outlets, like television or newspapers, which often necessitate substantial financial investments, the Internet provides an affordable and extensive platform for information dissemination. As such, the Internet could potentially supplant certain traditional modes of political communication, such as telephone, television, newspapers, and radio. Consequently, civil society has been increasingly integrating into the online realm. Civic society encompasses various types of associations. The term interest group is typically used to refer to formal organizations focused on specific social groups, economic sectors like trade unions, business and professional associations, or specific issues such as abortion, gun control, or the environment. Many of these traditional interest groups have well-established organizational structures and formal membership rules, primarily oriented towards influencing government and policy-making processes. Transnational advocacy networks assemble loose coalitions of these organizations under common umbrella organizations that cross national borders. Innovative tools are increasingly being developed to empower bloggers, webmasters, and social media owners. These aim to transition from the Internet's strictly informational use to its application as a medium for social organization, independent of top-down initiatives. For instance, the concept of calls to action is a novel approach that enables webmasters to inspire their audience into action without the need for explicit leadership. This trend is global, with countries like India cultivating an active blogosphere that encourages internet users to express their perspectives and opinions. The Internet serves multifaceted roles for these organizations. It functions as a platform for lobbying elected officials, public representatives, and policy elites; networking with affiliated associations and groups; mobilizing organizers, activists, and members through action alerts, newsletters, and emails; raising funds and recruiting support; and conveying their messages to the public via traditional news media channels. Deliberative democracy The Internet holds a pivotal role in deliberative democracy, a model that underscores dialogue, open discussion, and access to diverse perspectives in decision-making. It provides an interactive platform and functions as a vital instrument for research within the deliberative process.
The Internet facilitates the exchange of ideas through a myriad of platforms such as websites, blogs, and social networking sites like Twitter, all of which champion freedom of expression. It allows for easily accessible and cost-effective information, paving the way for change. One of the intrinsic attributes of the Internet is its unregulated nature, offering a platform for all viewpoints, regardless of their accuracy. The autonomy granted by the Internet can foster and advocate change, a critical factor in e-democracy. A notable development in the application of e-democracy in the deliberative process is the California Report Card. This tool was created by the Data and Democracy Initiative of the Center for Information Technology Research in the Interest of Society at the University of California, Berkeley, in collaboration with Lt. Governor Gavin Newsom. Launched in January 2014, the California Report Card is a web application optimized for mobile use, aimed at facilitating online deliberative democracy. The application features a brief opinion poll on six pertinent issues, after which participants are invited to join an online "café". In this space, they are grouped with users sharing similar views through Principal Component Analysis, and are encouraged to participate in the deliberative process by suggesting new political issues and rating the suggestions of other participants. The design of the California Report Card is intended to minimize the influence of private agendas on the discussion. Openforum.com.au also exemplifies eDemocracy. This non-profit Australian project facilitates high-level policy discussions, drawing participants such as politicians, senior public servants, academics, business professionals, and other influential stakeholders. The Online Protection and Enforcement of Digital Trade Act (OPEN Act), presented as an alternative to SOPA and PIPA, garnered the support of major companies like Google and Facebook. Its website, Keep The Web Open, not only provides full access to the bill but also incorporates public input; over 150 modifications have been made through user contributions. The peer-to-patent project allows public participation in the patent review process by providing research and 'prior art' publications for patent examiners to assess the novelty of an invention. In this process, the community nominates ten pieces of prior art to be reviewed by the patent examiner. This not only enables direct communication between the public and the patent examiner but also creates a structured environment that prompts participants to provide relevant information to aid in decision-making. By allowing experts and the general public to collaborate in finding solutions, the project aims to enhance the efficacy of the decision-making process. It offers a platform for citizens to participate and express their ideas beyond merely checking boxes that limit their opinions to predefined options. Voting and polling One significant challenge in implementing e-democracy is ensuring the security of internet-voting systems. The potential interference from viruses and malware, which could alter or inhibit citizens' votes on critical issues, hinders the widespread adoption of e-democracy as long as such cybersecurity threats persist. E-voting presents several practical challenges that can affect its legitimacy in elections.
For instance, electronic voting machines can be vulnerable to physical interference, as they are often left unattended prior to elections, making them susceptible to tampering. This issue led to a decision by the Netherlands in 2017 to count election votes manually. Furthermore, 'Direct Recording Electronic' (DRE) systems, used in numerous US states, are quickly becoming outdated and prone to faults. A study by USENIX discovered that certain DREs in New Jersey inaccurately counted votes, potentially casting votes for unintended candidates without voters' knowledge. The study found these inconsistencies to be widespread with that specific machine. Despite the potential of electronic voting to increase voter turnout, the absence of a paper trail in DREs can lead to untraceable errors, which could undermine its application in digital democracy. Diminished participation in democracy may stem from the proliferation of polls and surveys, potentially leading to a condition known as survey fatigue. Government openness and accessibility Through listservs, RSS feeds, mobile messaging, micro-blogging services, and blogs, government and its agencies can disseminate information to citizens who share common interests and concerns. For instance, many government representatives, including Rhode Island State Treasurer Frank T. Caprio, have begun to utilize Twitter as an easy medium for communication. Several non-governmental websites, like transparent.gov.com and USA.gov, have developed cross-jurisdiction, customer-focused applications that extract information from thousands of governmental organizations into a unified system, making it easier for citizens to access information. E-democracy has led to a simplified process and access to government information for public-sector agencies and citizens. For example, the Indiana Bureau of Motor Vehicles simplified the process of certifying driver records for admission in county court proceedings. Indiana became the first state to allow government records to be digitally signed, legally certified and delivered electronically using Electronic Postmark technology. The internet has increased government accessibility to news, policies, and contacts in the 21st century. In 2000, only two percent of government sites offered three or more services online; in 2007, that figure was 58 percent. Also in 2007, 89 percent of government sites allowed the public to email a public official directly, rather than merely emailing the webmaster (West, 2007). Controversies and concern Opposition Information and communications technologies can be utilized for both democratic and anti-democratic purposes. For instance, digital technology can be used to promote both coercive control and active participation. The vision of anti-democratic use of technology is exemplified in George Orwell's Nineteen Eighty-Four. Critiques associated with direct democracy are also considered applicable to e-democracy. This includes the potential for direct governance to cause the polarization of opinions, populism, and demagoguery. Cybersecurity The current inability to protect internet traffic from interference and manipulation has significantly limited the potential of e-democracy for decision-making. As a result, most experts express opposition to the use of the internet for widespread voting. Internet censorship In countries with severe government censorship, the full potential of e-democracy might not be realized. Internet clampdowns often occur during extensive political protests.
For instance, the series of internet blackouts in the Middle East in 2011, termed the "Arab Net Crackdown", provides a significant example. Governments in Libya, Egypt, Bahrain, Syria, Iran, and Yemen have all implemented total internet censorship in response to the numerous pro-democracy demonstrations within their respective nations. These lockdowns were primarily instituted to prevent the dissemination of cell phone videos that featured images of government violence against protesters. Social media manipulation Joshua A. Tucker and his colleagues critique e-democracy, pointing out that the adaptability and openness of social media may allow political entities to manipulate it for their own ends. They suggest that authorities could use social media to spread authoritarian practices in several ways. Firstly, by intimidating opponents, monitoring private conversations, and even jailing those who voice undesirable opinions. Secondly, by flooding online spaces with pro-regime messages, thereby diverting and occupying these platforms. Thirdly, by disrupting signal access to hinder the flow of information. Lastly, by banning globalized platforms and websites. Populism concerns A study that interviewed elected officials in Austria's parliament revealed a broad and strong opposition to e-democracy. These officials held the view that citizens, generally uninformed, should limit their political engagement to voting. The task of sharing opinions and ideas, they contended, belonged solely to elected representatives. Contrary to this view, theories of epistemic democracy suggest that greater public engagement contributes to the aggregation of knowledge and intelligence. This active participation, proponents argue, enables democracies to better discern the truth. Stop Online Piracy Act The introduction of H.R. 3261, the Stop Online Piracy Act (SOPA), in the United States House of Representatives was perceived by many internet users as an attack on internet democracy. A contributor to the Huffington Post argued that defeating SOPA was crucial for the preservation of democracy and freedom of speech. Significantly, SOPA was indefinitely postponed following widespread protests, which included a site blackout by popular websites like Wikipedia on 18 January 2012. A comparable event occurred in India towards the end of 2011, when the country's Communication and IT Minister Kapil Sibal suggested pre-screening content for offensive material before its publication on the internet, with no clear mechanism for appeal. Subsequent reports, however, quote Sibal as stating that there would be no restrictions on internet use. Suitable government models Representative democracy A radical shift from a representative government to an internet-mediated direct democracy is not considered likely. Nonetheless, proponents suggest that a "hybrid model" which leverages the internet for enhanced governmental transparency and greater community involvement in decision-making could be forthcoming. The selection of committees, local town and city decisions, and other people-centric decisions could be more readily facilitated through this approach. This doesn't indicate a shift in the principles of democracy but rather an adaptation in the tools utilized to uphold them. E-democracy would not serve as a means to enact direct democracy, but rather as a tool to enable a more participatory form of democracy as it exists currently.
Electronic direct democracy Supporters of e-democracy often foresee a transition from a representative democracy to a direct democracy, facilitated by technology, viewing this transition as an ultimate goal of e-democracy. In an electronic direct democracy (EDD) – also referred to as open source governance or collaborative e-democracy – citizens are directly involved in the legislative function through electronic means. They vote electronically on legislation, propose new legislation, and recall representatives, if any are retained. Technology supporting electronic direct democracy Technology to support electronic direct democracy (EDD) has been researched and developed at the Florida Institute of Technology, where it has been applied within student organizations. Many other software development projects are currently underway, along with numerous supportive and related projects. Several of these projects are now collaborating on a cross-platform architecture within the framework of the Meta-government project. EDD as a system is not fully implemented in a political government anywhere in the world, although several initiatives are currently forming. In the United States, businessman and politician Ross Perot was a prominent supporter of EDD, advocating for "electronic town halls" during his 1992 and 1996 presidential campaigns. Switzerland, already partially governed by direct democracy, is making progress towards such a system. Senator On-Line, an Australian political party established in 2007, proposes to institute an EDD system so that Australians can decide which way the senators vote on each and every bill. A similar initiative was formed in 2002 in Sweden, where the party Direktdemokraterna, running for parliament, offered its members the power to decide the party's actions in all or some areas of decision-making, or to use a proxy with immediate recall for one or several areas. Liquid democracy Liquid democracy, or direct democracy incorporating a delegable proxy, enables citizens to appoint a proxy for voting on their behalf, while retaining the ability to cast their own vote on legislation. This voting and proxy assignment could be conducted electronically. Extending this concept, proxies could establish proxy chains; for instance, if citizen A appoints citizen B, and B appoints citizen C, and only C votes on a proposed bill, C's vote will represent all three of them. Citizens could also rank their proxies by preference, meaning that if their primary proxy does not vote, their vote could be cast by their second-choice proxy.
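Such proxy chains can be resolved mechanically by following each citizen's delegation until it reaches someone who actually voted. The sketch below is purely illustrative: the names and data structures are invented for this example and are not drawn from any of the systems mentioned above.

```python
# Illustrative sketch of delegable-proxy ("liquid democracy") vote resolution.
# All names and structures here are hypothetical, not any deployed system's API.

def resolve_votes(delegations, direct_votes):
    """Follow each citizen's delegation chain to whoever actually voted."""
    tally = {}
    for citizen in set(delegations) | set(direct_votes):
        seen = set()
        current = citizen
        # Walk the chain; casting a direct vote overrides delegating.
        while current not in direct_votes and current in delegations:
            if current in seen:      # a delegation cycle: the vote is lost
                current = None
                break
            seen.add(current)
            current = delegations[current]
        if current in direct_votes:
            choice = direct_votes[current]
            tally[choice] = tally.get(choice, 0) + 1
    return tally

# A delegates to B, B delegates to C, and only C votes:
# C's "yes" then carries the weight of all three citizens.
print(resolve_votes({"A": "B", "B": "C"}, {"C": "yes"}))  # {'yes': 3}
```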
Wikidemocracy One form of e-democracy that has been proposed is "wikidemocracy", where the codex of laws in a government legislature could be editable via a wiki, similar to Wikipedia. In 2012, J Manuel Feliz-Teixeira suggested that the resources necessary for implementing wikidemocracy were already accessible. He envisages a system in which citizens can participate in legislative, executive, and judiciary roles via a wiki-system. Every citizen would have free access to this wiki and a personal ID to make policy reforms continuously until the end of December, when all votes would be tallied. Perceived benefits of wikidemocracy include a cost-free system that eliminates elections and the need for parliament or representatives, as citizens would directly represent themselves, and the ease of expressing one's opinion. However, there are several potential obstacles and disagreements. The digital divide and educational inequality could hinder the full potential of a wikidemocracy. Similarly, differing rates of technological adoption mean that some people might readily accept new methods, while others reject or are slow to adapt. Security is also a concern; citizens would need to trust that the system administrators would maintain a high level of integrity to safeguard votes in the public domain. Peter Levine concurs that wikidemocracy could increase discussion on political and moral issues but disagrees with Feliz-Teixeira, arguing that representatives and formal governmental structures would still be needed. The term "wikidemocracy" is also used to refer to more specific instances of e-democracy. For example, in August 2011 in Argentina, the voting records from the presidential election were made available to the public in an online format for scrutiny. More broadly, the term can refer to the democratic values and environments facilitated by wikis. In 2011, a group in Finland explored the concept of wikidemocracy by creating an online "shadow government program". This initiative was essentially a compilation of the political views and goals of various Finnish groups, assembled on a wiki. Egora Egora, also known as "intelligent democracy", is a free software application developed for political opinion formation and decision-making. It is released under a copyleft license. The name "Egora" is a blend of "electronic" and "agora", a term from Ancient Greek denoting the central public space in city-states (polis). The ancient agora was the hub of public life, facilitating social interactions, business transactions, and discussions. Drawing from this Ancient Greek concept, Egora aims to foster a new, rational, efficient, and incorruptible form of democratic organization. It allows users to form their own political philosophies from diverse ideas, ascertain the most popular ideas among the public, organize meetings to scrutinize and debate these ideas, and employ a simple algorithm to identify true representatives of the public will. In popular media The theme of e-democracy has frequently appeared in science fiction. Works such as David's Sling by Marc Stiegler and Ender's Game by Orson Scott Card notably predicted forms of the internet before it actually came into existence. These early conceptualizations of the internet, and their implications for democracy, served as major plot drivers in these stories. David's Sling In David's Sling, Marc Stiegler presents e-democracy as a strategy leveraged by a team of hackers to construct a computer-controlled smart weapon. They utilize an online debate platform, the Information Decision Duel, where two parties delve deeply into the intricacies of their arguments, dissecting the pros and cons before a neutral referee selects the more convincing side. This fictional portrayal of an internet-like system for public discourse echoes real-world aspirations for e-democracy, underscoring thorough issue analysis, technological enablement, and transparency. The book's dedication, "To those who never stop seeking the third alternatives," epitomizes this emphasis on comprehensive issue scrutiny. Ender's Game Orson Scott Card's Ender's Game also explores e-democracy, with the internet portrayed as a powerful platform for political discourse and social change. Two of the characters, siblings Valentine and Peter, use this platform to anonymously share their political views, gaining considerable influence.
Their activities lead to a significant political shift, even though they are just children posing as adults. This highlights the issue of true identity within online participation and raises questions about the potential for manipulation in e-democracy. Other portrayals E-democracy has also been depicted in: The Evitable Conflict by Isaac Asimov: Machines manage the economy for common welfare and make all key societal decisions. The Moon Is a Harsh Mistress by Robert A. Heinlein: A sentient computer assists Lunar colonists in their rebellion against Earth, with significant decisions made through public electronic voting. Distraction by Bruce Sterling: The novel explores potential perils of e-democracy in a future United States heavily influenced by the internet and electronic voting. Down and Out in the Magic Kingdom by Cory Doctorow: The future society in this work practices digital direct democracy with a reputation-based currency called "Whuffie". Rainbows End by Vernor Vinge: The novel imagines societal changes due to technological advancements, including more participatory democracy through continuous public polling and consensus-building tools. The Prefect by Alastair Reynolds: The narrative centers on a future society where an artificial intelligence, the Prefect, administers a democratic system. These works provide varied perspectives on the potential benefits and challenges of e-democracy. See also Collaborative e-democracy Collaborative governance Decidim, a "technopolitical network for participatory democracy" Democracy experiment Democratization of technology E2D International E-Government E-participation Electronic civil disobedience Electronic Democracy Party, a political party in Turkey Emergent democracy eRulemaking Hacktivism Index of Internet-related articles Internet activism Isocracy IserveU, a Canadian-based online voting platform Media democracy Online consultation Online deliberation Online Party of Canada, a political party in Canada Open politics Open source governance Outline of the Internet Parliamentary informatics ParoleWatch Party of Internet Democracy, a political party in Hungary Participation Platform cooperative Public Whip Second Superpower Smart mob Spatial Citizenship Technocracy Technology and society TheyWorkForYou References External links Council of Europe's work on e-Democracy - Including the work of the Ad Hoc Committee on e-Democracy IWG established in 2006 Edc.unigue.ch - Academic research centre on electronic democracy. Directed by Alexander H. Trechsel, e-DC is a joint-venture between the University of Geneva's c2d, the European University Institute in Florence and the Oxford University's OII. Institute for Politics Democracy and the Internet Democras ICEGOV - International Conference on Electronic Governance NYTimes Op-Ed Digital Democracy UK - launched to elected local councillors across the UK in 2013 to enable them to work alongside local residents in the democratic determination of community priorities Transparent Government Balbis Platform for digital democracy which enables creation of proposals, debates and voting. The Blueprint of E-Democracy Egora Politics and technology Types of democracy
E-democracy
[ "Technology" ]
12,970
[ "E-democracy", "Computing and society" ]
594,550
https://en.wikipedia.org/wiki/List%20of%20Solar%20System%20objects%20by%20size
This article includes a list of the most massive known objects of the Solar System and partial lists of smaller objects by observed mean radius. These lists can be sorted according to an object's radius and mass and, for the most massive objects, volume, density, and surface gravity, if these values are available. These lists contain the Sun, the planets, dwarf planets, many of the larger small Solar System bodies (which includes the asteroids), all named natural satellites, and a number of smaller objects of historical or scientific interest, such as comets and near-Earth objects. Many trans-Neptunian objects (TNOs) have been discovered; in many cases their positions in this list are approximate, as there is frequently a large uncertainty in their estimated diameters due to their distance from Earth. Solar System objects more massive than 10²¹ kilograms are known or expected to be approximately spherical. Astronomical bodies relax into rounded shapes (spheroids), achieving hydrostatic equilibrium, when their own gravity is sufficient to overcome the structural strength of their material. It was believed that the cutoff for round objects is somewhere between 100 km and 200 km in radius if they have a large amount of ice in their makeup; however, later studies revealed that icy satellites as large as Iapetus (1,470 kilometers in diameter) are not in hydrostatic equilibrium at this time, and a 2019 assessment suggests that many TNOs in the size range of 400–1,000 kilometers may not even be fully solid bodies, much less gravitationally rounded. Objects that are ellipsoids due to their own gravity are here generally referred to as being "round", whether or not they are actually in equilibrium today, while objects that are clearly not ellipsoidal are referred to as being "irregular." Spheroidal bodies typically have some polar flattening due to the centrifugal force from their rotation, and can sometimes even have quite different equatorial diameters (scalene ellipsoids such as Haumea). Unlike bodies such as Haumea, the irregular bodies have a significantly non-ellipsoidal profile, often with sharp edges. There can be difficulty in determining the diameter (within a factor of about 2) for typical objects beyond Saturn (see 2060 Chiron as an example). For TNOs there is some confidence in the diameters, but for non-binary TNOs there is no real confidence in the masses/densities. Many TNOs are often just assumed to have Pluto's density of 2.0 g/cm³, but it is just as likely that they have a comet-like density of only 0.5 g/cm³. For example, if a TNO is incorrectly assumed to have a mass of 3.59 × 10²⁰ kg based on a radius of 350 km with a density of 2 g/cm³ but is later discovered to have a radius of only 175 km with a density of 0.5 g/cm³, its true mass (from m = (4/3)πr³ρ) would be only 1.12 × 10¹⁹ kg.
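The revision in this example is plain sphere arithmetic. A minimal sketch follows; the radii and densities are the illustrative values from the example above, not measured properties of any particular object:

```python
import math

# Sensitivity of an assumed TNO mass to radius and density.
# The inputs below are the illustrative values from the example, not data.

def mass_kg(radius_km, density_g_cm3):
    """Mass of a uniform sphere of the given radius and bulk density."""
    radius_cm = radius_km * 1e5                      # km -> cm
    volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm**3
    return volume_cm3 * density_g_cm3 / 1000.0       # g -> kg

print(f"{mass_kg(350, 2.0):.3g} kg")   # assumed:  ~3.59e+20 kg
print(f"{mass_kg(175, 0.5):.3g} kg")   # revised:  ~1.12e+19 kg
```

A factor of 2 in radius is a factor of 8 in volume, so combined with the factor-of-4 change in density the estimated mass drops 32-fold.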
The sizes and masses of many of the moons of Jupiter and Saturn are fairly well known due to numerous observations and interactions of the Galileo and Cassini orbiters; however, many of the moons with a radius less than ~100 km, such as Jupiter's Himalia, have far less certain masses. Further out from Saturn, the sizes and masses of objects are less clear. There has not yet been an orbiter around Uranus or Neptune for long-term study of their moons. For the small outer irregular moons of Uranus, such as Sycorax, which were not discovered by the Voyager 2 flyby, even different NASA web pages, such as the National Space Science Data Center and JPL Solar System Dynamics, give somewhat contradictory size and albedo estimates depending on which research paper is being cited. There are uncertainties in the figures for mass and radius, and irregularities in the shape and density, with accuracy often depending on how close the object is to Earth or whether it has been visited by a probe. Graphical overview Objects with radii over 400 km The following objects have a nominal mean radius of 400 km or greater. It was once expected that any icy body larger than approximately 200 km in radius was likely to be in hydrostatic equilibrium (HE). However, Ceres (r = 470 km) is the smallest body for which detailed measurements are consistent with hydrostatic equilibrium, whereas Iapetus (r = 735 km) is the largest icy body that has been found to not be in hydrostatic equilibrium. The known icy moons in this range are all ellipsoidal (except Proteus), but trans-Neptunian objects up to 450–500 km radius may be quite porous. For simplicity and comparative purposes, the values are manually calculated assuming that the bodies are all spheres. The size of solid bodies does not include an object's atmosphere. For example, Titan looks bigger than Ganymede, but its solid body is smaller. For the giant planets, the "radius" is defined as the distance from the center at which the atmosphere reaches 1 bar of atmospheric pressure. Because Sedna and 2002 MS4 have no known moons, directly determining their mass is impossible without sending a probe (estimated to be from 1.7 × 10²¹ to 6.1 × 10²¹ kg for Sedna). Smaller objects by mean radius From 200 to 399 km All imaged icy moons with radii greater than 200 km except Proteus are clearly round, although those under 400 km that have had their shapes carefully measured are not in hydrostatic equilibrium. The known densities of TNOs in this size range are remarkably low, implying that the objects retain significant internal porosity from their formation and were never gravitationally compressed into fully solid bodies. From 100 to 199 km This list contains a selection of objects estimated to be between 100 and 199 km in radius (200 and 399 km in diameter). The largest of these may have a hydrostatic-equilibrium shape, but most are irregular. Most of the trans-Neptunian objects (TNOs) listed with a radius smaller than 200 km have "assumed sizes based on a generic albedo of 0.09" since they are too far away to directly measure their sizes with existing instruments. Mass switches from 10²¹ kg to 10¹⁸ kg (Zg). Main-belt asteroids have orbital elements constrained by (2.0 AU < a < 3.2 AU; q > 1.666 AU) according to JPL Solar System Dynamics (JPLSSD). Many TNOs are omitted from this list as their sizes are poorly known. From 50 to 99 km This list contains a selection of objects between 50 and 99 km in radius (100 km to 199 km in average diameter). The listed objects currently include most objects in the asteroid belt and moons of the giant planets in this size range, but many newly discovered objects in the outer Solar System are missing, such as those included in the following reference. Asteroid spectral types are mostly Tholen, but some might be SMASS. From 20 to 49 km This list includes few examples since there are about 589 asteroids in the asteroid belt with a measured radius between 20 and 49 km.
Many thousands of objects of this size range have yet to be discovered in the trans-Neptunian region. The number of digits is not an endorsement of significant figures. The table switches from 10¹⁸ kg to 10¹⁵ kg (Eg). Most mass values of asteroids are assumed. From 1 to 19 km This list contains some examples of Solar System objects between 1 and 19 km in radius. This is a common size for asteroids, comets and irregular moons. Below 1 km This list contains examples of objects below 1 km in mean radius; irregular bodies can have a longer chord in some directions, with the mean radius averaging these out. In the asteroid belt alone there are estimated to be between 1.1 and 1.9 million objects with a radius above 0.5 km, many of which are in the range 0.5–1.0 km. Countless more have a radius below 0.5 km. Very few objects in this size range have been explored or even imaged. The exceptions are objects that have been visited by a probe, or have passed close enough to Earth to be imaged. Radius is the mean geometric radius. The number of digits is not an endorsement of significant figures. The mass scale shifts from 10¹⁵ kg to 10⁹ kg, which is equivalent to one billion kg or 10¹² grams (teragram – Tg). Currently most of the objects of mass between 10⁹ kg and 10¹² kg (less than 1,000 teragrams (Tg)) listed here are near-Earth asteroids (NEAs). The Aten asteroid has less mass than the Great Pyramid of Giza, 5.9 × 10⁹ kg. For more about very small objects in the Solar System, see meteoroid, micrometeoroid, cosmic dust, and interplanetary dust cloud. (See also Visited/imaged bodies.) Gallery See also List of gravitationally rounded objects of the Solar System List of dwarf planets List of minor planets List of natural satellites List of Solar System objects most distant from the Sun List of space telescopes Lists of astronomical objects Notes References Further reading NASA Planetary Data System (PDS) Asteroids with Satellites Minor Planet discovery circumstances Supplemental IRAS Minor Planet Survey (SIMPS) and IRAS Minor Planet Survey (IMPS) SIMPS & IMPS (V6, additional, from here) Asteroid Data Archive Archive Planetary Science Institute External links Planetary fact sheets Asteroid fact sheet Size Solar System
List of Solar System objects by size
[ "Astronomy" ]
1,955
[ "Outer space", "Solar System" ]
594,615
https://en.wikipedia.org/wiki/Ammonia%20solution
Ammonia solution, also known as ammonia water, ammonium hydroxide, ammoniacal liquor, ammonia liquor, aqua ammonia, aqueous ammonia, or (inaccurately) ammonia, is a solution of ammonia in water. It can be denoted by the symbol NH3(aq). Although the name ammonium hydroxide suggests a salt with the composition [NH4+][OH−], it is impossible to isolate samples of NH4OH. The ions NH4+ and OH− do not account for a significant fraction of the total amount of ammonia except in extremely dilute solutions. The concentration of such solutions is measured in units of the Baumé scale (density), with 26 degrees Baumé (about 30% of ammonia by weight at ) being the typical high-concentration commercial product. Basicity of ammonia in water In aqueous solution, ammonia deprotonates a small fraction of the water to give ammonium and hydroxide according to the following equilibrium: NH3 + H2O ⇌ NH4+ + OH−. In a 1 M ammonia solution, about 0.42% of the ammonia is converted to ammonium, equivalent to pH = 11.62, because [NH4+] = 0.0042 M, [OH−] = 0.0042 M, [NH3] = 0.9958 M, and pH = 14 + log10[OH−] = 11.62. The base ionization constant is Kb = [NH4+][OH−]/[NH3] = 1.77 × 10⁻⁵.
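These figures follow from the equilibrium expression alone. As a numerical check, here is a minimal sketch; the Kb value is the one quoted above, and the rest is standard equilibrium arithmetic:

```python
import math

# Numerical check of the quoted basicity figures for 1 M aqueous ammonia.
Kb = 1.77e-5    # base ionization constant quoted above
C = 1.0         # total ammonia concentration, mol/L

# Solve x^2 / (C - x) = Kb for x = [NH4+] = [OH-].
x = (-Kb + math.sqrt(Kb**2 + 4 * Kb * C)) / 2

pOH = -math.log10(x)
print(f"[NH4+] = [OH-] = {x:.4f} M")   # ~0.0042 M, i.e. ~0.42% ionized
print(f"pH = {14 - pOH:.2f}")          # ~11.62
```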
Saturated solutions Like other gases, ammonia exhibits decreasing solubility in solvent liquids as the temperature of the solvent increases. Ammonia solutions decrease in density as the concentration of dissolved ammonia increases. At , the density of a saturated solution is 0.88 g/ml; it contains 35.6% ammonia by mass, 308 grams of ammonia per litre of solution, and has a molarity of approximately 18 mol/L. At higher temperatures, the molarity of the saturated solution decreases and the density increases. Upon warming of saturated solutions, ammonia gas is released. Applications In contrast to anhydrous ammonia, aqueous ammonia finds few non-niche uses outside of cleaning agents. Cleaning products Ammonia solutions are used as cleaning products for many surfaces and applications. Ammonia in water is sold as a cleaning agent by itself, usually labeled as simply "ammonia", as well as in cleaning products combined with other ingredients. It may be sold plain, lemon-scented (and typically colored yellow), or pine-scented (green). Commonly available ammonia with soap added is known as "cloudy ammonia". Household ammonia ranges in concentration by weight from 5% to 10% ammonia. Because aqueous ammonia is a gas dissolved in water, as the water evaporates from a surface, the gas evaporates also, leaving the surface streak-free. Its most common uses are to clean glass, porcelain, and stainless steel. It is good at removing grease and is found in products for cleaning ovens and for soaking items to loosen baked-on grime. Experts also warn not to use ammonia-based cleaners on car touchscreens, due to the risk of damage to the screen's anti-glare and anti-fingerprint coatings. More concentrated solutions (higher than 10%) are used in professional and industrial cleaning products. US manufacturers of cleaning products are required to provide the product's material safety data sheet that lists the concentration used. Solutions of ammonia can be dangerous. These solutions are irritating to the eyes and mucous membranes (respiratory and digestive tracts), and to a lesser extent the skin. Experts advise that caution be used to ensure the chemical is not mixed into any liquid containing bleach, due to the danger of forming toxic chloramine gas. Mixing with chlorine-containing products or strong oxidants, such as household bleach, can generate toxic chloramine fumes. Alkyl amine precursor In industry, aqueous ammonia can be used as a precursor to some alkyl amines, although anhydrous ammonia is usually preferred. Hexamethylenetetramine forms readily from aqueous ammonia and formaldehyde. Ethylenediamine forms from 1,2-dichloroethane and aqueous ammonia. Absorption refrigeration In the early years of the twentieth century, the vapor absorption cycle using water-ammonia systems was popular and widely used, but after the development of the vapor compression cycle it lost much of its importance because of its low coefficient of performance (about one fifth of that of the vapor compression cycle). Both the Electrolux refrigerator and the Einstein refrigerator are well known examples of this application of the ammonia solution. Water treatment Ammonia is used to produce chloramine, which may be utilised as a disinfectant. In drinking water, chloramine is preferred over direct chlorination for its ability to remain active in stagnant water pipes longer, thereby reducing the risk of waterborne infections. Ammonia is used by aquarists for the purposes of setting up a new fish tank using an ammonia process called fishless cycling. This application requires that the ammonia contain no additives. Food production Baking ammonia (ammonium carbonate and ammonium bicarbonate) was one of the original chemical leavening agents. It was obtained from deer antlers. It is useful as a leavening agent, because ammonium carbonate is heat activated. This characteristic allows bakers to avoid both yeast's long proofing time and the quick CO2 dissipation of baking soda in making breads and cookies rise. It is still used to make ammonia cookies and other crisp baked goods, but its popularity has waned because of ammonia's off-putting smell and concerns over its use as a food ingredient compared to modern-day baking powder formulations. It has been assigned E number E527 for use as a food additive in the European Union. Aqueous ammonia is used as an acidity regulator to bring down the acid levels in food. It is classified in the United States by the Food and Drug Administration as generally recognized as safe (GRAS) when using the food grade version. Its pH control abilities make it an effective antimicrobial agent. Furniture darkening In furniture-making, ammonia fuming was traditionally used to darken or stain wood containing tannic acid. After being sealed inside a container with the wood, fumes from the ammonia solution react with the tannic acid and iron salts naturally found in wood, creating a rich, dark stained look to the wood. This technique was commonly used during the Arts and Crafts movement in furniture – a furniture style which was primarily constructed of oak and stained using these methods. Treatment of straw for cattle Ammonia solution is used to treat straw, producing "ammoniated straw" making it more edible for cattle. Laboratory use Aqueous ammonia is used in traditional qualitative inorganic analysis as a complexant and base. Like many amines, it gives a deep blue coloration with copper(II) solutions. Ammonia solution can dissolve silver oxide residues, such as those formed from Tollens' reagent. It is often found in solutions used to clean gold, silver, and platinum jewelry, but may have adverse effects on porous gem stones like opals and pearls. See also Ammonia Ammonium Conjugate acid References Further reading External links External Material Safety Data Sheet – for ammonium hydroxide (10%-35% solution).
Ammonia Ammonium compounds Hydroxides Photographic chemicals Food acidity regulators Antipruritics E-number additives Household chemicals
Ammonia solution
[ "Chemistry" ]
1,532
[ "Bases (chemistry)", "Ammonium compounds", "Hydroxides", "Salts" ]
594,682
https://en.wikipedia.org/wiki/Compactly%20generated%20group
In mathematics, a compactly generated (topological) group is a topological group G which is algebraically generated by one of its compact subsets. This should not be confused with the unrelated notion (widely used in algebraic topology) of a compactly generated space, one whose topology is generated (in a suitable sense) by its compact subspaces. Definition A topological group G is said to be compactly generated if there exists a compact subset K of G such that G = ⟨K⟩, the subgroup generated by K; equivalently, G is the union of the sets (K ∪ K⁻¹)ⁿ over all positive integers n. So if K is symmetric, i.e. K = K⁻¹, then G is the union of the powers Kⁿ. For example, every connected locally compact group is compactly generated (by any compact neighborhood of the identity), and a discrete group is compactly generated if and only if it is finitely generated. Locally compact case This property is interesting in the case of locally compact topological groups, since locally compact compactly generated topological groups can be approximated by locally compact, separable metric factor groups of G. More precisely, for a sequence (Un) of open identity neighborhoods, there exists a normal subgroup N contained in the intersection of that sequence, such that G/N is locally compact metric separable (the Kakutani–Kodaira–Montgomery–Zippin theorem). References Topological groups
Compactly generated group
[ "Physics", "Mathematics" ]
210
[ "Space (mathematics)", "Topological spaces", "Topology stubs", "Topology", "Space", "Geometry", "Topological groups", "Spacetime" ]
594,730
https://en.wikipedia.org/wiki/Henry%20Louis%20Le%20Chatelier
Henry Louis Le Chatelier (8 October 1850 – 17 September 1936) was a French chemist of the late 19th and early 20th centuries. He devised Le Chatelier's principle, used by chemists and chemical engineers to predict the effect a changing condition has on a system in chemical equilibrium. Early life Le Chatelier was born on 8 October 1850 in Paris and was the son of French materials engineer Louis Le Chatelier and Louise Durand. His father was an influential figure who played important roles in the birth of the French aluminium industry, the introduction of the Martin-Siemens processes into the iron and steel industries, and the rise of railway transportation. Le Chatelier's father profoundly influenced his son's future. Henry Louis had one sister, Marie, and four brothers, Louis (1853–1928), Alfred (1855–1929), George (1857–1935), and André (1861–1929). His mother raised the children by regimen, described by Henry Louis: "I was accustomed to a very strict discipline: it was necessary to wake up on time, to prepare for your duties and lessons, to eat everything on your plate, etc. All my life I maintained respect for order and law. Order is one of the most perfect forms of civilization." As a child, Le Chatelier attended the Collège Rollin in Paris. At the age of 19, after only one year of instruction in specialized engineering, he followed in his father's footsteps by enrolling in the École Polytechnique on 25 October 1869. Like all the pupils of the Polytechnique, in September 1870, Le Chatelier was named second lieutenant and later took part in the Siege of Paris. After brilliant successes in his technical schooling, he entered the École des Mines in Paris in 1871. Le Chatelier married Geneviève Nicolas, a friend of the family and sister of four fellow students of the Polytechnique. They had seven children, four girls and three boys, five of whom entered scientific fields; two died preceding Le Chatelier's death. Career Despite training as an engineer, and even with his interests in industrial problems, Le Chatelier chose to teach chemistry rather than pursue a career in industry. In 1887, he was appointed head of general chemistry for the preparatory course of the École des Mines in Paris. He tried unsuccessfully to get a position teaching chemistry at the École Polytechnique in 1884 and again in 1897. At the Collège de France, Le Chatelier succeeded Schützenberger as chair of inorganic chemistry. Later he taught at the Sorbonne, where he replaced Henri Moissan. At the Collège de France, Le Chatelier taught: Phenomena of combustion (1898) Theory of chemical equilibria, high temperature measurements and phenomena of dissociation (1898–1899) Properties of metal alloys (1899–1900) Iron alloys (1900–1901) General methods of analytical chemistry (1901–1902) General laws of analytical chemistry (1901–1902) General laws of chemical mechanics (1903) Silica and its compounds (1905–1906) Some practical applications of the fundamental principles of chemistry (1906–1907) Properties of metals and some alloys (1907) After four unsuccessful campaigns (1884, 1897, 1898 and 1900), Le Chatelier was elected to the Académie des sciences (Academy of Science) in 1907. He was also elected to the Royal Swedish Academy of Sciences in 1907. In 1924, he became an Honorary Member of the Polish Chemical Society.
Scientific work In chemistry, Le Chatelier is best known for his work on his principle of chemical equilibrium, Le Chatelier's principle, and on the varying solubility of salts in an ideal solution. He published no fewer than thirty papers on these topics between 1884 and 1914. His results on chemical equilibrium were presented in 1884 at the Académie des sciences in Paris. Le Chatelier also carried out extensive research on metallurgy and was one of the founders of the technical newspaper La revue de métallurgie (Metallurgy Review). Part of Le Chatelier's work was devoted to industry. For example, he was a consulting engineer for a cement company, the Société des chaux et ciments Pavin de Lafarge, today known as Lafarge Cement. His 1887 doctoral thesis was dedicated to the subject of mortars: Recherches expérimentales sur la constitution des mortiers hydrauliques (Experimental Research on the Composition of Hydraulic Mortars). Prompted by a paper by Le Chatelier reporting that the combustion of a mixture of oxygen and acetylene in equal parts produced a flame of more than 3000 °C, Charles Picard (1872–1957) began to investigate this phenomenon in 1899 but failed because of soot deposits. In 1901 Picard consulted with Edmond Fouché, and together they obtained a perfectly stable flame; the oxyacetylene industry was born. In 1902 Fouché invented a gas welding tool with French patent number 325,403, and in 1910 Picard developed the needle valve. Le Chatelier in 1901 attempted the direct combination of the two gases nitrogen and hydrogen at a pressure of 200 atm and 600 °C in the presence of metallic iron. An air compressor forced the mixture of gases into a steel Berthelot bomb, where a platinum spiral heated them and the reduced iron catalyst. A terrific explosion occurred which nearly killed an assistant. Le Chatelier found that the explosion was due to the presence of air in the apparatus used. It was thus left to Fritz Haber to succeed where several noted French chemists, including Thenard, Sainte-Claire Deville and even Berthelot, had failed. Less than five years later, Haber and Carl Bosch were successful in producing ammonia on a commercial scale. Near the end of his life, Le Chatelier wrote, "I let the discovery of the ammonia synthesis slip through my hands. It was the greatest blunder of my scientific career." His brother Alfred Le Chatelier, a former soldier, opened the Atelier de Glatigny in the rural area of Glatigny (Le Chesnay), near Versailles, in 1897. The workshop made sandstone ceramics, high-quality porcelain and glassware. In 1901, the critic Henri Cazalis (alias Jean Lahor) listed the workshop as one of the best producers in France of Art Nouveau ceramics. Henry Louis seems to have encouraged Alfred's workshop and assisted with experiments in the composition of porcelain and the reactions of quartz inclusions, and also designed a thermoelectric pyrometer to measure temperature in the kilns. Le Chatelier's principle Le Chatelier's principle states that a system always acts to oppose changes in chemical equilibrium; to restore equilibrium, the system will favor a chemical pathway to reduce or eliminate the disturbance so as to restabilize at thermodynamic equilibrium. Put another way: if a chemical system at equilibrium experiences a change in concentration, temperature or total pressure, the equilibrium will shift in order to minimize that change. This qualitative law enables one to envision the displacement of equilibrium of a chemical reaction.
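Numerically, the direction of displacement can be read off from the reaction quotient Q: a perturbation that leaves Q below the equilibrium constant K favours the forward reaction, and one that leaves Q above K favours the reverse. Below is a minimal sketch for the nitrogen–hydrogen–ammonia equilibrium used as an example next, with all concentrations invented purely for illustration:

```python
# Illustrative check of Le Chatelier's principle via the reaction quotient
# for N2 + 3 H2 <=> 2 NH3. All numbers here are invented for the example.

def Q(n2, h2, nh3):
    """Concentration-basis reaction quotient for N2 + 3 H2 <=> 2 NH3."""
    return nh3**2 / (n2 * h2**3)

K = Q(1.0, 3.0, 2.0)    # treat this mixture as the equilibrium state

# Perturbation: halve the volume, doubling every concentration
# (a crude stand-in for raising the total pressure).
q = Q(2.0, 6.0, 4.0)

print(f"K = {K:.4f}, Q after compression = {q:.4f}")
print("shift ->", "products" if q < K else "reactants" if q > K else "none")
# Q < K, so the equilibrium shifts toward NH3, the side with fewer moles
# of gas, exactly as the principle predicts.
```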
For example, consider a change in concentration for a reaction in equilibrium, such as: N2(g) + 3H2(g) ⇌ 2NH3(g). If one increases the pressure of the reactants, the reaction will tend to move towards the products, the side with fewer moles of gas, so as to decrease the pressure again. However, consider another example: in the contact process for the production of sulfuric acid, the second stage is a reversible reaction: 2SO2(g) + O2(g) ⇌ 2SO3(g). The forward reaction is exothermic and the reverse reaction is endothermic. Viewed through Le Chatelier's principle, a larger amount of thermal energy in the system would favor the endothermic reverse reaction, as this would absorb the increased energy; in other words, the equilibrium would shift to the reactants in order to remove the stress of added heat. For similar reasons, lower temperatures would favor the exothermic forward reaction, and produce more products. This works in this case since, owing to the loss of entropy, the reaction becomes less exothermic as temperature increases; however, reactions that become more exothermic as temperature increases would seem to violate this principle. Politics It was then typical for scientists and engineers to have a very scientific vision of industry. In the first issue of La revue de métallurgie, Le Chatelier published an article describing his convictions on the subject, discussing the scientific management theory of Frederick Winslow Taylor. In 1928, he published a book on Taylorism. Le Chatelier was politically conservative. In 1934, he published an opinion on the French forty-hour work week law in the Brussels publication Revue économique internationale. However, in spite of certain anti-parliamentarian convictions, he kept away from any extremist or radical movements. Works Cours de chimie industrielle (1896; second edition, 1902) High Temperature Measurements, translated by G. K. Burgess (1901; second edition, 1902) Recherches expérimentales sur la constitution des mortiers hydrauliques (1904; English translation, 1905) Leçons sur le carbone (1908) Introduction à l'étude de la métallurgie (1912) La silice et les silicates (1914) Honours and awards Le Chatelier was named "chevalier" (knight) of the Légion d'honneur in 1887, became "officier" (officer) in 1908, "commandeur" (Knight Commander) in 1919, and was finally awarded the title of "grand officier" (Knight Grand Officer) in May 1927. He was admitted to the Académie des sciences in 1907. He was awarded the Bessemer Gold Medal of the British Iron and Steel Institute in 1911, admitted as a Foreign Member of the Royal Society in 1913 and awarded their Davy Medal in 1916. References Sources External links "Henry LE CHATELIER (1850–1936) Sa vie, son œuvre." Révue de Métallurgie, special edition, January 1937. 1850 births 1936 deaths École Polytechnique alumni Mines Paris - PSL alumni Corps des mines 19th-century French chemists 20th-century French chemists Members of the French Academy of Sciences Members of the Royal Swedish Academy of Sciences University of Paris alumni Academic staff of the Collège de France Scientists from Paris Academic staff of the University of Paris French science writers French inventors Foreign members of the Royal Society Grand Officers of the Legion of Honour Bessemer Gold Medal French male non-fiction writers Members of the Ligue de la patrie française
Henry Louis Le Chatelier
[ "Chemistry" ]
2,195
[ "Bessemer Gold Medal", "Chemical engineering awards" ]
594,865
https://en.wikipedia.org/wiki/Welteislehre
(WEL; "World Ice Theory" or "World Ice Doctrine"), also known as (Glacial Cosmogony), is a discredited cosmological concept proposed by Hanns Hörbiger, an Austrian engineer and inventor. According to his ideas, ice was the basic substance of all cosmic processes, and ice moons, ice planets, and the "global ether" (also made of ice) had determined the entire development of the universe. Hörbiger did not arrive at his ideas through research, but said that he had received it in a "vision" in 1894. He published a book about the theory in 1912 and heavily promoted it in subsequent years, through lectures, magazines and associations. History By his own account, Hörbiger was observing the Moon when he was struck by the notion that the brightness and roughness of its surface were due to ice. Shortly after, he dreamt that he was floating in space watching the swinging of a pendulum which grew longer and longer until it broke. "I knew that Newton had been wrong and that the sun's gravitational pull ceases to exist at three times the distance of Neptune", he concluded. He worked out his concepts in collaboration with amateur astronomer and schoolteacher Philipp Fauth whom he met in 1898, and published it as in 1912. Fauth had previously produced a large (if somewhat inaccurate) lunar map and had a considerable following, which lent Hörbiger's ideas some respectability. It did not receive a great deal of attention at the time, but following World War I Hörbiger changed his strategy by promoting the new "cosmic truth" not only to people at universities and academies, but also to the general public. Hörbiger thought that if "the masses" accepted his ideas, then they might put enough pressure on the academic establishment to force his ideas into the mainstream. No effort was spared in popularising the ideas: "cosmotechnical" societies were founded, which offered public lectures that attracted large audiences, there were cosmic ice movies and radio programs, and even cosmic ice journals and novels. During this period, the name was changed from the Graeco-Latin to the Germanic [WEL] ("World ice theory"). The followers of WEL exerted a great deal of public pressure on behalf of the ideas. The movement published posters, pamphlets, books, and even a newspaper The Key to World Events. Companies owned by adherents would only hire people who declared themselves convinced of the WEL's truth. Some followers even attended astronomical meetings to heckle, shouting, "Out with astronomical orthodoxy! Give us Hörbiger!" Supporters of the idea were Houston Stewart Chamberlain, the leading theorist behind the early development of the National Socialist Party in Germany in 1923, and later both Hitler and Himmler. Esoteric and pseudo-scientific views were quite popular among the Nazi elite at the time, and WEL appealed to them because it represented a "Germanic" all-encompassing alternative to a natural science viewed as Jewish and soulless. Despite Hitler's claim that the WEL constituted an "Aryan" theory, a number of Jewish intellectuals supported the theory: for example, Viennese author Egon Friedell, who explained the World Ice Theory in his 1930 Cultural History of the Modern Age. Hans Schindler Bellamy, a Jewish member of the Austrian Social Democratic Party, was also a proponent. He continued to advocate the viewpoint after he had fled Vienna following the Anschluss. On the left wing Raoul Hausmann also supported the theory, and corresponded with Hörbiger. 
Two organizations concerned with the idea were set up in Vienna: a cosmotechnical society and the Hörbiger Institute. The first was formed in 1921 by a group of enthusiastic adherents of the idea, which included engineers, physicians, civil servants, and businessmen. Most had been personally acquainted with Hörbiger and had attended his many lectures. Premise According to the idea, the solar system had its origin in a gigantic star into which a smaller, dead, waterlogged star fell. This impact caused a huge explosion that flung fragments of the smaller star out into interstellar space, where the water condensed and froze into giant blocks of ice. A ring of such blocks, which we now call the Milky Way, formed, as did a number of solar systems, among which was our own, but with many more planets than currently exist. Interplanetary space is filled with traces of hydrogen gas, which causes the planets to slowly spiral inwards, along with ice blocks. The outer planets are large mainly because they have swallowed a large number of ice blocks, but the inner planets have not swallowed nearly as many. One can see ice blocks on the move in the form of meteors, and when one collides with Earth, it produces hailstorms over an area of many square kilometers, while when one falls into the Sun, it produces a sunspot and gets vaporized, making "fine ice" that covers the innermost planets. It was also claimed that Earth had had several satellites before it acquired the Moon; they began as planets in orbits of their own, but over long spans of time were captured one by one and slowly spiralled in towards Earth until they disintegrated and their debris became part of Earth's structure. One can supposedly identify the rock strata of several geological eras with the impacts of these satellites. It was believed that the destruction of earlier ice-moons was responsible for the Flood. The last such impact, of the "Tertiary" or "Cenozoic Moon", and the capture of our present Moon are supposedly remembered through myths and legends. This was worked out in detail by Hörbiger's English follower Hans Schindler Bellamy; Bellamy recounted how as a child he would often dream about a large moon that would spiral closer and closer in until it burst, making the ground beneath roll and pitch, awakening him and giving him a very sick feeling. When he looked at the Moon's surface through a telescope, he found its surface looking troublingly familiar. When he learned of Hörbiger's idea in 1921, he found it a description of his dream. He explained the mythological support he found in such books as Moons, Myths, and Man, In the Beginning God, and The Book of Revelation is History. It was believed that our current Moon was the sixth since Earth began and that a new collision was inevitable. Believers argued that the great flood described in the Bible and the destruction of Atlantis were caused by the fall of previous moons. Hörbiger had various responses to the criticism that he received. If it was pointed out to him that his assertions did not work mathematically, he responded: "Calculation can only lead you astray." If it was pointed out that there existed photographic evidence that the Milky Way was composed of millions of stars, he responded that the pictures had been faked by "reactionary" astronomers.
He responded in a similar way when it was pointed out that the surface temperature of the Moon had been measured in excess of 100 °C in the daytime, writing to rocket expert Willy Ley: "Either you believe in me and learn, or you will be treated as the enemy." Astronomers generally dismissed his views and the following they acquired as a "carnival". As Martin Gardner argued in Chapter Three of his Fads and Fallacies in the Name of Science, Hörbiger's ideas have much in common with those of Immanuel Velikovsky. See also Interstellar ice Snowball Earth Lunar water Nazi mysticism References External links Essay on Cosmic Ice Theory (Christina Wessely at the Max Planck Institute for the History of Science) Catastrophism Obsolete theories in physics Pseudoscience History of astronomy German words and phrases Science in Nazi Germany Water ice
Welteislehre
[ "Physics", "Astronomy" ]
1,585
[ "History of astronomy", "Theoretical physics", "Obsolete theories in physics" ]
594,874
https://en.wikipedia.org/wiki/Autoionization
Autoionization is a process by which an atom or a molecule in an excited state spontaneously emits one of the outer-shell electrons, thus going from a state with charge z to a state with charge z + 1, for example from an electrically neutral state to a singly ionized state. Autoionizing states are usually short-lived, and thus can be described as Fano resonances rather than normal bound states. They can be observed as variations in the ionization cross sections of atoms and molecules, by photoionization, electron ionization and other methods. Examples As examples, several Fano resonances in the extreme ultraviolet photoionization spectrum of neon are attributed to autoionizing states. Some are due to one-electron excitations, such as a series of three strong similarly shaped peaks at energies of 45.546, 47.121 and 47.692 eV which are interpreted as 1s² 2s¹ 2p⁶ np (¹P) states for n = 3, 4 and 5. These states of neutral neon lie beyond the first ionization energy because it takes more energy to excite a 2s electron than to remove a 2p electron. When autoionization occurs, the np → 2s de-excitation provides the energy needed to remove one 2p electron and form the Ne⁺ ground state. Other resonances are attributed to two-electron excitations. The same neon photoionization spectrum considered above contains a fourth strong resonance in the same region at 44.979 eV but with a very different shape, which is interpreted as the 1s² 2s² 2p⁴ 3s 3p (¹P) state. For autoionization, the 3s → 2p transition provides the energy to remove the 3p electron. Electron ionization allows the observation of some states which cannot be excited by photons due to selection rules. In neon, for example, the excitation of triplet states is forbidden by the spin selection rule ΔS = 0, but the 1s² 2s² 2p⁴ 3s 3p (³P) state has been observed by electron ionization at 42.04 eV. Ion impact by high-energy H⁺, He⁺ and Ne⁺ ions has also been used. If a core electron is missing, a positive ion can autoionize further and lose a second electron in the Auger effect. In neon, X-ray excitation can remove a 1s electron, producing an excited Ne⁺ ion with configuration 1s¹ 2s² 2p⁶. In the subsequent Auger process, a 2s → 1s transition and simultaneous emission of a second electron from 2p leads to the Ne²⁺ 1s² 2s¹ 2p⁵ ionic state. Molecules, in addition, can have vibrationally autoionizing Rydberg states, in which the small amount of energy necessary to ionize a Rydberg state is provided by vibrational excitation. Autodetachment When the excited state of the atom or molecule consists of a compound state of a neutral particle and a resonantly attached electron, autoionization is referred to as autodetachment. In this case the compound state begins with a net negative charge before the autoionization process, and ends with a neutral charge. The ending state will often be a vibrationally or rotationally excited state as a result of excess energy from the resonant attachment process. References Atomic physics Molecular physics Quantum chemistry
Autoionization
[ "Physics", "Chemistry" ]
703
[ " and optical physics stubs", "Quantum chemistry stubs", "Quantum chemistry", "Molecular physics", "Theoretical chemistry stubs", "Quantum mechanics", "Theoretical chemistry", " molecular", "Atomic physics", "nan", "Atomic", "Molecular physics stubs", "Physical chemistry stubs", " and opti...
594,964
https://en.wikipedia.org/wiki/Periodical%20cicadas
The term periodical cicada is commonly used to refer to any of the seven species of the genus Magicicada of eastern North America, the 13- and 17-year cicadas. They are called periodical because nearly all individuals in a local population are developmentally synchronized and emerge in the same year. Although they are sometimes called "locusts", this is a misnomer, as cicadas belong to the taxonomic order Hemiptera (true bugs), suborder Auchenorrhyncha, while locusts are grasshoppers belonging to the order Orthoptera. Magicicada belongs to the cicada tribe Lamotialnini, a group of genera with representatives in Australia, Africa, and Asia, as well as the Americas. Magicicada species spend around 99.5% of their long lives underground in an immature state called a nymph. While underground, the nymphs feed on xylem fluids from the roots of broadleaf forest trees in the eastern United States. In the spring of their 13th or 17th year, mature cicada nymphs emerge between late April and early June (depending on latitude), synchronously and in tremendous numbers. The adults are active for only about four to six weeks after the unusually prolonged developmental phase. The males aggregate in chorus centers and call there to attract mates. Mated females lay eggs in the stems of woody plants. Within two months of the original emergence, the life cycle is complete and the adult cicadas die. Later in that same summer, the eggs hatch and the new nymphs burrow underground to develop for the next 13 or 17 years. Periodical emergences are also reported for the "World Cup cicada" Chremistica ribhoi (every 4 years) in northeast India and for a cicada species from Fiji, Raiateana knowlesi (every 8 years). Description The winged imago (adult) periodical cicada has two red compound eyes, three small ocelli, and a black dorsal thorax. The wings are translucent with orange veins. The underside of the abdomen may be black, orange, or striped with orange and black, depending on the species. Adults are typically , depending on species, generally about 75% the size of most of the annual cicada species found in the same region. Mature females are slightly larger than males. Magicicada males typically form large aggregations that sing in chorus to attract receptive females. Different species have different characteristic calling songs. The call of decim periodical cicadas is said to resemble someone calling "weeeee-whoa" or "Pharaoh". The cassini and decula periodical cicadas (including M. tredecula) have songs that intersperse buzzing and ticking sounds. Cicadas cannot sting and do not normally bite. Like other Auchenorrhyncha (true bugs), they have mouthparts used to pierce plants and suck their sap. These mouthparts are used during the nymph stage to tap underground roots for water, minerals and carbohydrates, and in the adult stage to acquire nutrients and water from plant stems. An adult cicada's proboscis can pierce human skin when it is handled, which is painful but in no other way harmful. Cicadas are neither venomous nor poisonous and there is no evidence that they or their bites can transmit diseases. Oviposition by female periodical cicadas damages pencil-sized twigs of woody vegetation. Mature trees rarely suffer lasting damage, although peripheral twig die-off or "flagging" may result. Planting young trees or shrubs is best postponed until after an expected emergence of the periodical cicadas.
Existing young trees or shrubs can be covered with cheesecloth or other mesh netting with holes that are in diameter or smaller to prevent damage during the oviposition period, which begins about a week after the first adults emerge and lasts until all females have died. Life cycle Nearly all cicadas spend years underground as juveniles, before emerging above ground for a short adult stage of several weeks to a few months. The seven periodical cicada species are so named because, in any one location, all members of the population are developmentally synchronized—they emerge as adults all at once in the same year. This periodicity is especially remarkable because their life cycles are so long—13 or 17 years. In contrast, for nonperiodical species, some adults mature each summer and emerge while the rest of the population continues to develop underground. Many people refer to these nonperiodical species as annual cicadas because some are seen every summer. This may lead some to conclude that the non-periodic cicadas have life cycles of 1 year. This is incorrect. The few known life cycles of "annual" species range from two to 10 years, although some could be longer. The nymphs of the periodical cicadas live underground, usually within of the surface, feeding on the juices of plant roots. The nymphs of the periodical cicada undergo five instar stages in their development underground. The difference in the 13- and 17-year life cycle is said to be the time needed for the second instar to mature. When underground the nymphs move deeper below ground, detecting and then feeding on larger roots as they mature. The nymphs seem to track the number of years by detecting the changes in the xylem caused by abscission of the tree. This was supported experimentally by inducing a grove of trees to go through two cycles of losing and re-growing leaves in one calendar year. Cicadas feeding on those trees emerged after 16 years instead of 17. In late April to early June of the emergence year, mature fifth-instar nymphs construct tunnels to the surface and wait for the soil temperature to reach a critical value. In some situations, nymphs extend mud turrets up to several inches above the soil surface. The function of these turrets is not known, but the phenomenon has been observed in some nonperiodical cicadas, as well as other tunneling insects. The nymphs first emerge on a spring evening when the soil temperature at around of depth is above . The crepuscular emergence is thought to be related to the fact that maximum soil temperatures lag behind maximum insolation by several hours, conveniently providing some protection for the flightless nymphs against diurnal sight predators such as birds. For the rest of their lives the mature periodical cicadas will be strongly diurnal, with song often nearly ceasing at night. During most years in the United States this emergence cue translates to late April or early May in the far south, and late May to early June in the far north. Emerging nymphs may molt in the grass or climb from a few centimeters to more than 100 feet (30 m) to find a suitable vertical surface to complete their transformation into adults. After securing themselves to tree trunks, the walls of buildings, telephone poles, fenceposts, hanging foliage, and even stationary automobile tires, the nymphs undergo a final molt and then spend about six days in the trees to await the complete hardening of their wings and exoskeletons. 
Just after emerging from this final molt, the teneral adults are off-white, but darken within an hour. Adult periodical cicadas live for only a few weeks; by mid-July, all have died. Their ephemeral adult forms are adapted for one purpose: reproduction. Like other cicadas, the males produce a very loud species-specific mating song using their tymbals. Singing males of the same Magicicada species tend to form aggregations called choruses whose collective songs are attractive to females. Males in these choruses alternate bouts of singing with short flights from tree to tree in search of receptive females. Most matings occur in so-called chorus trees. Receptive females respond to the calls of conspecific males with timed wing-flicks (visual signaling is apparently a necessity in the midst of the males' song) which attract the males for mating. The sound of a chorus can be literally deafening and, depending on the number of males composing it, may reach 100 dB in the immediate vicinity. In addition to their "calling" or "congregating" songs, males produce a distinctive courtship song when approaching an individual female. Both males and females can mate multiple times, although most females seem to mate only once. After mating, the female cuts V-shaped slits in the bark of young twigs and lays about 20 eggs in each, for a total clutch of 600 or more. After about 6–10 weeks, the eggs hatch and the nymphs drop to the ground, where they burrow and begin another 13- or 17-year cycle. Predator satiation survival strategy The nymphs emerge in very large numbers at nearly the same time, sometimes more than 1.5 million individuals per acre (>370/m2). Their mass emergence is, among other things, a survival trait called predator satiation. The details of this strategy are simple: for the first week after emergence the periodical cicadas are easy prey for reptiles, birds, squirrels, cats, dogs and other small and large mammals. In their present range the periodical cicadas have no effective predators, and all other animals feeding on them after emergence quickly become irrelevant with respect to their impact on total cicada populations. Early entomologists maintained that the cicadas' overall survival mechanism was simply to overwhelm predators by their sheer numbers, ensuring the survival of most of the individuals. Later, the fact that the developmental periods were each a prime number of years (13 and 17) was hypothesized to be a predator avoidance strategy, one adopted to eliminate the possibility of potential predators receiving periodic population boosts by synchronizing their own generations to divisors of the cicada emergence period. On this prime number hypothesis, a predator with a three-year reproductive cycle, which happened to coincide with a brood emergence in a given year, will have gone through either four cycles plus one year (12 + 1) or five cycles plus two years (15 + 2) by the next time that brood emerges. In this way prime-numbered broods exhibit a strategy to ensure that they nearly always emerge when some portion of the predators they will confront are sexually immature and therefore incapable of taking maximum advantage of the momentarily limitless food supply (see the short calculation below). Another viewpoint turns this hypothesis back onto the cicada broods themselves. It posits that the prime-numbered developmental times represent an adaptation to prevent hybridization between broods.
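The arithmetic behind the prime-number hypothesis can be made concrete. The short sketch below (Python; the function name is ours and purely illustrative, not drawn from the entomological literature) computes the interval between successive coincidences of a brood's emergence and a predator's reproductive cycle, which is simply the least common multiple of the two periods. A prime cycle length such as 13 or 17 forces every short-cycled predator to wait the maximum possible number of generations between windfalls, whereas a composite length such as 12 coincides with 2-, 3-, 4-, and 6-year predators at every single emergence.

from math import gcd

def coincidence_interval(brood_cycle, predator_cycle):
    # Years between brood emergences that line up with the predator's
    # cycle: the least common multiple of the two periods.
    return brood_cycle * predator_cycle // gcd(brood_cycle, predator_cycle)

for brood in (12, 13, 14, 15, 16, 17, 18):
    print(brood, [coincidence_interval(brood, p) for p in range(2, 7)])

# 12 [12, 12, 12, 60, 12]   <- composite: most short predator cycles
# 13 [26, 39, 52, 65, 78]      line up at every emergence
# ...
# 17 [34, 51, 68, 85, 102]  <- prime: a p-year predator lines up only
#                              once every p of its own generations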
It is hypothesized that this unusual method of sequestering different populations in time arose when conditions were extremely harsh. Under those conditions the mutation producing extremely long development times became so valuable that cicadas that possessed it found it beneficial to protect themselves from mating with cicadas that lacked the long-development trait. In this way, the long-developing cicadas retained a trait allowing them to survive the period of heavy selection pressure (i.e., harsh conditions) brought on by isolated and lowered populations during the period immediately following the retreat of glaciers (in the case of periodical cicadas, the North American Pleistocene glacial stadia). When seen in this light, their mass emergence and the predator satiation strategy that follows from it serve only to maintain the much longer-term survival strategy of protecting their long-development trait from hybridizations that might dilute it. This hybridization hypothesis was subsequently supported through a series of mathematical models and remains the most widely accepted explanation for the unusually lengthy and mathematically sophisticated survival strategy of these insects. The length of the cycle was hypothesized to be controlled by a single gene locus, with the 13-year cycle dominant to the 17-year one, but this interpretation remains controversial and unsubstantiated at the level of DNA. Impact on other populations Cycles in cicada populations are significant enough to affect other animal and plant populations. For example, tree growth has been observed to decline the year before the emergence of a brood because of the increased feeding on roots by the growing nymphs. Moles, which feed on nymphs, have been observed to do well during the year before an emergence, but suffer population declines the following year because of the reduced food source. Wild turkey populations respond favorably to increased nutrition in their food supply from gorging on cicada adults on the ground at the end of their life cycles. Uneaten carcasses of periodical cicadas decompose on the ground, providing a resource pulse of nutrients to the forest community. Cicada broods may also have a negative impact. Eastern gray squirrel populations have been negatively affected, because the egg-laying activity of female cicadas damaged upcoming mast crops. Broods Periodical cicadas are grouped into geographic broods based on the calendar year when they emerge. For example, in 2014, the 13-year Brood XXII emerged in Louisiana and the 17-year Brood III emerged in western Illinois and eastern Iowa. In 1907, entomologist Charles Lester Marlatt assigned Roman numerals to 30 different broods of periodical cicadas: 17 distinct broods with a 17-year life cycle, to which he assigned brood numbers I through XVII (with emerging years 1893 through 1909); plus 13 broods with a 13-year cycle, to which he assigned brood numbers XVIII through XXX (1893 through 1905) (see the short calculation below). Marlatt noted that the 17-year broods are generally more northerly than are the 13-year broods. Many of these hypothetical 30 broods have not been observed. Marlatt noted that some cicada populations (especially Brood XI in the valley of the Connecticut River in Massachusetts and Connecticut) were disappearing, a fact that he attributed to the reduction in forests and the introduction and proliferation of insect-eating "English sparrows" (house sparrows, Passer domesticus) that had followed the European settlement of North America.
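Marlatt's keying of the broods to calendar years, described above, makes any brood's emergence schedule a one-line computation. The sketch below (Python; the helper name is ours) reads his scheme directly: the 17-year Broods I–XVII are anchored to 1893–1909 and the 13-year Broods XVIII–XXX to 1893–1905, and each brood then simply repeats at its cycle length. As a check, intersecting the schedules of Brood XIII and Brood XIX reproduces their joint emergence in 2024 and the next one in 2245, both discussed in the following passage.

def emergence_years(brood, through):
    # Marlatt's scheme: Broods I-XVII are 17-year broods first keyed to
    # 1893-1909; Broods XVIII-XXX are 13-year broods keyed to 1893-1905.
    if 1 <= brood <= 17:
        cycle, first = 17, 1892 + brood
    elif 18 <= brood <= 30:
        cycle, first = 13, 1875 + brood
    else:
        raise ValueError("Marlatt numbered the broods I through XXX")
    return list(range(first, through + 1, cycle))

xiii = set(emergence_years(13, 2300))   # 17-year Brood XIII: 1905, 1922, ...
xix = set(emergence_years(19, 2300))    # 13-year Brood XIX: 1894, 1907, ...
print(sorted(xiii & xix))               # -> [2024, 2245]

Because 13 and 17 are coprime, joint emergences of a 13-year and a 17-year brood recur every 13 × 17 = 221 years, which is why the 2024 co-emergence is followed by one in 2245.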
Two of the broods that Marlatt named (Broods XI and XXI) have become extinct. His numbering scheme has been retained for convenience (and because it clearly separates 13- and 17-year life cycles), although only 15 broods are known to survive. Periodical cicadas that emerge outside the expected time frame are called stragglers. Although they can emerge at any time, they usually do so one or four years before or after most other members of their broods emerge. Stragglers with a 17-year life cycle typically emerge four years early. Those with a 13-year cycle typically emerge four years late. The emergence of stragglers may in theory be indicative of a brood shifting from a 17-year cycle to a 13-year one. Brood XIII of the 17-year cicada, which reputedly has the largest emergence of cicadas by size known anywhere, and Brood XIX of the 13-year cicada, arguably the largest (by geographic extent) of all periodical cicada broods, were expected to emerge together in 2024 for the first time since 1803. However, the two broods were not expected to overlap except potentially in a thin area in central and eastern Illinois (Macon, Sangamon, Livingston, and Logan counties). The next such dual emergence of these two particular broods will occur in 2245, 221 years after 2024. Many other 13-year and 17-year broods emerge during the same years, but the broods are not geographically close. Map of brood locations Taxonomy Phylogeny Magicicada is a member of the cicada tribe Lamotialnini, which is distributed globally aside from South America. Although Magicicada is found only in eastern North America, its closest relatives are thought to be the genera Tryella and Aleeta from Australia; Magicicada is sister to the clade containing Tryella and Aleeta. Within the Americas, its closest relative is thought to be the genus Chrysolasia from Guatemala. Species Seven recognized species are placed within Magicicada: three 17-year species and four 13-year species. These seven species are also sometimes grouped differently into three subgroups, the so-called Decim species group, Cassini species group, and Decula species group, reflecting strong similarities of each 17-year species with one or more species with a 13-year cycle. Evolution and speciation Not only are the periodical cicada life cycles curious for their use of the prime numbers 13 or 17, but their evolution is also intricately tied to one- and four-year changes in their life cycles. One-year changes are less common than four-year changes and are probably tied to variation in local climatic conditions. Four-year early and late emergences are common and involve a much larger proportion of the population than one-year changes. The different species are well understood to have originated from a process of allochronic speciation, in which species subpopulations that are isolated from one another in time eventually become reproductively isolated as well. Research suggests that in extant periodical cicadas, the 13- and 17-year life cycles evolved at least eight different times in the last 4 million years and that different species with identical life cycles developed their overlapping geographic distribution by synchronizing their life cycles to the existing dominant populations. The same study estimates that the Decim species group split from the common ancestor of the Decula plus Cassini species groups around 4 million years ago (Mya). At around 2.5 Mya, the Cassini and Decula groups split from each other. The Sota et al.
(2013) paper also calculates that the first separation of extant 13-year cicadas from 17-year cicadas took place in the Decim group about 530,000 years ago when the southern M. tredecim split from the northern M. septendecim. The second noteworthy event took place about 320,000 years ago with the split of the western Cassini group from its conspecifics to the east. The Decim and the Decula clades experienced similar western splits, but these are estimated to have taken place 270,000 and 230,000 years ago, respectively. The 13- and 17-year splits in Cassini and Decula took place after these events. The 17-year cicadas largely occupy formerly glaciated territory, and as a result their phylogeographic relationships reflect the effects of repeated contraction into glacial refugia (small islands of suitable habitat) and subsequent re-expansion during multiple interglacial periods. In each species group, Decim, Cassini, and Decula, the signature of the glacial periods is manifested in three phylogeographic genetic subdivisions: one subgroup east of the Appalachians, one midwestern, and one on the far western edge of their range. The Sota et al. data suggest that the founders of the southern 13-year cicada populations originated from the Decim group. These were later joined by Cassini originating from the western Cassini clade and Decula originating from eastern, middle, and western Decula clades. As Cassini and Decula invaded the south, they became synchronized with the resident M. tredecim. These Cassini and Decula are known as M. tredecassini and M. tredecula. More data is needed to lend support to this hypothesis and other hypotheses related to more recent 13- and 17-year splits involving M. neotredecim and M. tredecim. Distribution The 17-year periodical cicadas are distributed from the Eastern states, across the Ohio Valley, to the Great Plains states and north to the edges of the Upper Midwest, while the 13-year cicadas occur in the Southern and Mississippi Valley states, with some slight overlap of the two groups. For example, broods IV (17-year cycle) and XIX (13-year cycle) overlap in western Missouri and eastern Oklahoma. Their emergences should again coincide in 2219, 2440, 2661, etc., as they did in 1998 (although distributions change slightly from generation to generation and older distribution maps can be unreliable). An effort sponsored by the National Geographic Society is underway as of April 2021 at the University of Connecticut to generate new distribution maps of all periodical cicada broods. The effort uses crowdsourced data and records that entomologists and volunteers collect. Parasites, pests and pathogens Although it usually feeds on oak leaf gall midge (Polystepha pilulae) larvae and other insects, the oak leaf gall mite ("itch mite") (Pyemotes herfsi) becomes an ectoparasite of periodical cicada eggs when these are available. After cicadas deposit their eggs in the branches of trees, feeding mites reproduce and their numbers increase. After cicada emergences have ended, many people have therefore developed rashes, pustules, intense itching and other mite bite sequelae on their upper torso, head, neck and arms. Rashes and itching peaked after several days, but lasted as long as two weeks. Anti-itch treatments, including calamine lotion and topical steroid creams, did not relieve the itching. Massospora cicadina is a pathogenic fungus that infects only 13- and 17-year periodical cicadas.
Infection results in a "plug" of spores that replaces the end of the cicada's abdomen while it is still alive, leading to infertility, disease transmission, and eventual death of the cicada. Symbiosis Magicicada are unable to obtain all of the essential amino acids from the dilute xylem fluid that they feed upon, and instead rely upon endosymbiotic bacteria that provide essential vitamins and nutrients for growth. Bacteria in the genus Hodgkinia live inside periodical cicadas, and grow and divide for years before punctuated cicada reproduction events impose natural selection on these bacteria to maintain a mutually beneficial relationship. As a result, the genome of Hodgkinia has fractionated into three independent bacterial species each containing only a subset of genes essential for this symbiosis. The host requires all three subgroups of symbionts, as only the complete complement of all three subgroups provides the host with all its essential nutrients. The Hodgkinia–Magicicada symbiosis is a powerful example of how bacterial endosymbionts drive the evolution of their hosts. History The first known account of a large emergence of cicadas appeared in a 1633 report by William Bradford, the governor of the Plymouth Colony, which had been established in 1620 within the future state of Massachusetts. After describing a "pestilent fever" that had swept through the colony and neighboring Indians, the report stated: It is to be observed that, the spring before this sickness, there was a numerous company of Flies which were like for bigness unto wasps or Bumble-Bees; they came out of little holes in the ground, and did eat up the green things, and made such a constant yelling noise as made the woods ring of them, and ready to deafen the hearers; they were not any seen or heard by the English in this country before this time; but the Indians told them that sickness would follow, and so it did, very hot, in the months of June, July, and August of that summer. (Elaborating on an observation that Marlatt reported in 1907, Gene Kritsky has suggested that Bradford's report is misdated, as Broods XI and XIV would have emerged in Plymouth in 1631 and 1634, respectively, while no presently known brood would have emerged there in 1633.) Historical accounts cite reports of 15- to 17-year recurrences of enormous numbers of noisy emergent cicadas ("locusts") written as early as 1733. John Bartram, a noted Philadelphia botanist and horticulturist, was among the early writers that described the insect's life cycle, appearance and characteristics. On May 9, 1715, Andreas Sandel, the pastor of Philadelphia's "Gloria Dei" Swedish Lutheran Church, described in his journal an emergence of Brood X. Pehr Kalm, a Finnish naturalist visiting Pennsylvania and New Jersey in 1749 on behalf of the Royal Swedish Academy of Sciences, observed in late May another emergence of that brood. When reporting the event in a paper that a Swedish academic journal published in 1756, Kalm wrote: Kalm then described Sandel's report and one that he had obtained from Benjamin Franklin that had recorded in Philadelphia the emergence from the ground of large numbers of cicadas during early May 1732. He noted that the people who had prepared these documents had made no such reports in other years. Kalm further noted that others had informed him that they had seen cicadas only occasionally before the insects emerged from the ground in Pennsylvania in large swarms on May 22, 1749. 
He additionally stated that he had not heard any cicadas in Pennsylvania and New Jersey in 1750 in the same months and areas in which he had heard many in 1749. The 1715 and 1732 reports, when coupled with his own 1749 and 1750 observations, supported the previous "general opinion" that he had cited. Kalm summarized his findings in a book translated into English and published in London in 1771, stating: Based on Kalm's account and a specimen that Kalm had provided, in 1758 Carl Linnaeus named the insect Cicada septendecim in the tenth edition of his Systema Naturae. Moses Bartram, a son of John Bartram, described the next appearance of the brood (Brood X) that Kalm had observed in 1749 in an article entitled Observations on the cicada, or locust of America, which appears periodically once in 16 or 17 years that he wrote in 1766. Bartram's article, which a London journal published in 1768, noted that upon hatching from eggs deposited in the twigs of trees, the young insects ran down to the earth and "entered the first opening that they could find". He reported that he had been able to discover them below the surface, but that others had reportedly found them deep. In 1775, Thomas Jefferson recorded in his "Garden Book" Brood II's 17-year periodicity, writing that an acquaintance remembered "great locust years" in 1724 and 1741, that he and others recalled another such year in 1758 and that the insects had again emerged from the ground at Monticello in 1775. He noted that the females lay their eggs in the small twigs of trees while above ground. The 1780 emergence of the Brood VII cicadas (also known as the Onondaga brood) during the American Revolutionary War, coincided with the aftermath of the military operation known as the Sullivan Expedition which devastated the indigenous Onondagan communities and destroyed their crops. The sudden arrival of such a substantial quantity of the cicadas provided a source of sustenance for the Onondaga people who were experiencing severe food insecurity following the Sullivan campaigns and the subsequent brutal winter. The seemingly miraculous arrival of the cicadas is commemorated by the Onondaga as though it were an intervention by the Creator to ensure their survival after such a traumatizing, catastrophic event. In April 1800, Benjamin Banneker, who lived near Ellicott's Mills, Maryland, wrote in his record book that he recalled a "great locust year" in 1749, a second in 1766 during which the insects appeared to be "full as numerous as the first", and a third in 1783. He predicted that the insects (Brood X) "may be expected again in they year 1800 which is Seventeen Since their third appearance to me". Describing an effect that the pathogenic fungus, Massospora cicadina, has on its host, Banneker's record book stated that the insects:... begin to Sing or make a noise from first they come out of the Earth till they die. The hindermost part rots off, and it does not appear to be any pain to them, for they still continue on Singing till they die. In 1845, D.L. Pharas of Woodville, Mississippi, announced the 13-year periodicity of the southern cicada broods in a local newspaper, the Woodville Republican. In 1858, Pharas placed the title Cicada tredecim in a subsequent article that the newspaper published on the subject. Ten years later, the American Entomologist published in December 1868 a paper that Benjamin Dann Walsh and Charles Valentine Riley had written that also reported the 13-year periodicity of the southern cicada broods. 
Walsh's and Riley's paper, which Scientific American reprinted in January 1869, illustrated the interior and exterior characteristics of the nymphs' emergence holes and raised turrets. Their article, which did not cite Pharas' reports, was the first description of the southern cicadas' 13-year periodicity to receive widespread attention. Riley later acknowledged Pharas' work in an 1885 publication on periodical cicadas that he authored. In 1998, Missouri saw the emergence of both a brood of 17-year cicadas (Brood IV) in the western part of the state and a brood of 13-year cicadas (Brood XIX) over much of the rest of the state. Each brood is the state's largest of its type. Because the territories of the two broods overlap (converge) in some areas, this convergence was the state's first since 1777. In 2007 and 2008, Edmond Zaborski, a research scientist with the Illinois Natural History Survey, reported that the oak leaf gall mite ("itch mite") (Pyemotes herfsi) is an ectoparasite of periodical cicada eggs. While investigating with the help of others the mysterious itchy welts and rashes that people were developing in Chicago's suburbs after the end of a 2007 Brood XIII emergence, he attributed the event to bites by mites whose populations had quickly increased while parasitizing those eggs. Similar events occurred in Cincinnati after a Brood XIV emergence ended in 2008, in Cleveland and elsewhere in northern and eastern Ohio after a Brood V emergence ended in 2016, in the Washington, D.C., area after a Brood X emergence ended in 2021, and again in the Chicago area after the next Brood XIII emergence ended in 2024. Use as human food Magicicada species are edible when cooked, at least for people who lack allergies to similar foods. A number of recipes are available for this purpose. Some recommend collecting the insects shortly after molting while still soft. Others exhibit preferences for emergent nymphs or hardened adults. The insects have historically been eaten by Native Americans, who fried them or roasted them in hot ovens, stirring them until they were well browned. Marlatt wrote in 1907: Notes References Further reading External links The Periodical Cicada Page Informational page about periodical cicadas that supersedes www.magicicada.org. Has maps and 3-D models. Cicada Mania GIGAmacro has a zoomable, very high-resolution image of the male, female & nymph cicada InsectSingers.com Recordings of species-specific songs of many North American cicada species. Liebhold, A.M.; Bohne, M.J.; Lilja, R.L. "Active Periodical Cicada Broods of the United States" (map). USDA Forest Service Northern Research Station, Northeastern Area State and Private Forestry. 2013. Massachusetts Cicadas describes behavior, sightings, photos, "how to find" guide, videos and distribution maps of New England and U.S. periodical and annual cicada species including Brood X, Brood XIII, Brood XIV and Brood XIX Cicadas Edible insects Lamotialnini Native American cuisine Periodic phenomena Antipredator adaptations
Periodical cicadas
[ "Biology" ]
6,641
[ "Antipredator adaptations", "Biological defense mechanisms" ]
594,990
https://en.wikipedia.org/wiki/Psychrophile
Psychrophiles or cryophiles (adj. psychrophilic or cryophilic) are extremophilic organisms that are capable of growth and reproduction in low temperatures, ranging from to . They are found in places that are permanently cold, such as the polar regions and the deep sea. They can be contrasted with thermophiles, which are organisms that thrive at unusually high temperatures, and mesophiles at intermediate temperatures. Psychrophile is Greek for 'cold-loving'. Many such organisms are bacteria or archaea, but some eukaryotes, such as lichens, snow algae, phytoplankton, fungi, and wingless midges, are also classified as psychrophiles. Biology Habitat The cold environments that psychrophiles inhabit are ubiquitous on Earth, as a large fraction of the planetary surface experiences temperatures lower than 10 °C. They are present in permafrost, polar ice, glaciers, snowfields and deep ocean waters. These organisms can also be found in pockets of sea ice with high salinity content. Microbial activity has been measured in soils frozen below −39 °C. In addition to their temperature limit, psychrophiles must also adapt to other extreme environmental constraints that may arise as a result of their habitat. These constraints include high pressure in the deep sea and high salt concentration on some sea ice. Adaptations Psychrophiles are protected from freezing and the expansion of ice by ice-induced desiccation and vitrification (glass transition), as long as they cool slowly. Free-living cells desiccate and vitrify between −10 °C and −26 °C. Cells of multicellular organisms may vitrify at temperatures below −50 °C. The cells may continue to have some metabolic activity in the extracellular fluid down to these temperatures, and they remain viable once restored to normal temperatures. They must also overcome the stiffening of their lipid cell membrane, as this is important for the survival and functionality of these organisms. To accomplish this, psychrophiles adapt lipid membrane structures that have a high content of short, unsaturated fatty acids. Compared to longer saturated fatty acids, incorporating this type of fatty acid allows the lipid cell membrane to have a lower melting point, which increases the fluidity of the membranes. In addition, carotenoids are present in the membrane, which help modulate its fluidity. Antifreeze proteins are also synthesized to keep psychrophiles' internal space liquid, and to protect their DNA when temperatures drop below water's freezing point. By doing so, the protein prevents any ice formation or recrystallization process from occurring. The enzymes of these organisms have been hypothesized to engage in an activity-stability-flexibility relationship as a method for adapting to the cold; the flexibility of their enzyme structure will increase as a way to compensate for the freezing effect of their environment. Certain cryophiles, such as the Gram-negative bacteria Vibrio and Aeromonas spp., can transition into a viable but nonculturable (VBNC) state. During VBNC, a micro-organism can respire and use substrates for metabolism; however, it cannot replicate. An advantage of this state is that it is highly reversible. It has been debated whether VBNC is an active survival strategy or whether the organism's cells will eventually no longer be able to be revived. There is evidence, however, that it can be very effective: Gram-positive Actinobacteria have been shown to have lived about 500,000 years in the permafrost conditions of Antarctica, Canada, and Siberia.
Taxonomic range Psychrophiles include bacteria, lichens, snow algae, phytoplankton, fungi, and insects. Among the bacteria that can tolerate extreme cold are Arthrobacter sp., Psychrobacter sp. and members of the genera Halomonas, Pseudomonas, Hyphomonas, and Sphingomonas. Another example is Chryseobacterium greenlandensis, a psychrophile that was found in 120,000-year-old ice. Umbilicaria antarctica and Xanthoria elegans are lichens that have been recorded photosynthesizing at temperatures ranging down to −24 °C, and they can grow down to around −10 °C. Some multicellular eukaryotes can also be metabolically active at sub-zero temperatures, such as some conifers; those in the Chironomidae family are still active at −16 °C. Microalgae that live in snow and ice include green, brown, and red algae. Snow algae species such as Chloromonas sp., Chlamydomonas sp., and Chlorella sp. are found in polar environments. Some phytoplankton can tolerate extremely cold temperatures and high salinities that occur in brine channels when sea ice forms in polar oceans. Some examples are diatoms like Fragilariopsis cylindrus, Nitzchia lecointeii, Entomoneis kjellmanii, Nitzchia stellata, Thalassiosira australis, Berkelaya adeliense, and Navicula glaciei. Penicillium is a genus of fungi found in a wide range of environments including extreme cold. Among the psychrophile insects, the Grylloblattidae or ice crawlers, found on mountaintops, have optimal temperatures between 1 and 4 °C. The wingless midge (Chironomidae) Belgica antarctica can tolerate salt, freezing, and strong ultraviolet radiation, and has the smallest known genome of any insect. The small genome, of 99 million base pairs, is thought to be adaptive to extreme environments. Psychrotrophic bacteria Psychrotrophic microbes are able to grow at temperatures below , but have better growth rates at higher temperatures. Psychrotrophic bacteria and fungi are able to grow at refrigeration temperatures, and can be responsible for food spoilage and can act as foodborne pathogens such as Yersinia. They provide an estimation of a product's shelf life, but they can also be found in soils, in surface and deep sea waters, in Antarctic ecosystems, and in foods. Psychrotrophic bacteria are of particular concern to the dairy industry. Most are killed by pasteurization; however, they can be present in milk as post-pasteurization contaminants due to less than adequate sanitation practices. According to the Food Science Department at Cornell University, psychrotrophs are bacteria capable of growth at temperatures at or less than . At freezing temperatures, growth of psychrotrophic bacteria becomes negligible or virtually stops. All three subunits of the RecBCD enzyme are essential for the physiological activities of the enzyme in the Antarctic Pseudomonas syringae, namely, repairing DNA damage and supporting growth at low temperature. The RecBCD enzymes are exchangeable between the psychrophilic P. syringae and the mesophilic E. coli when provided as the entire protein complex from the same species. However, the RecBC proteins (RecBCPs and RecBCEc) of the two bacteria are not equivalent; the RecBCEc is proficient in DNA recombination and repair, and supports the growth of P. syringae at low temperature, while RecBCPs is insufficient for these functions. Finally, although both the helicase and nuclease activities of RecBCDPs are important for DNA repair and for the growth of P. syringae at low temperature, the RecB nuclease activity is not essential in vivo.
Psychrophilic microalgae Microscopic algae that can tolerate extremely cold temperatures can survive in snow, ice, and very cold seawater. On snow, cold-tolerant algae can bloom on the snow surface covering land, glaciers, or sea ice when there is sufficient light. These snow algae darken the surface of the snow and can contribute to snow melt. In seawater, phytoplankton that can tolerate both very high salinities and very cold temperatures are able to live in sea ice. One example of a psychrophilic phytoplankton species is the ice-associated diatom Fragilariopsis cylindrus. Phytoplankton living in the cold ocean waters near Antarctica often have very high protein content, containing some of the highest concentrations ever measured of enzymes like Rubisco. Psychrotrophic insects Insects that are psychrotrophic can survive cold temperatures through several general mechanisms (unlike opportunistic and chill-susceptible insects): (1) chill tolerance, (2) freeze avoidance, and (3) freeze tolerance. Chill-tolerant insects succumb to freezing temperatures after prolonged exposure to mild or moderate freezing temperatures. Freeze-avoiding insects can survive extended periods of time at sub-freezing temperatures in a supercooled state, but die at their supercooling point. Freeze-tolerant insects can survive ice crystal formation within their body at sub-freezing temperatures. Freeze tolerance within insects is argued to be on a continuum, with some insect species exhibiting partial (e.g., Tipula paludosa, Hemideina thoracica), moderate (e.g., Cryptocercus punctulatus), and strong freezing tolerance (e.g., Eurosta solidaginis and Syrphus ribesii), and other insect species exhibiting freezing tolerance with a low supercooling point (e.g., Pytho deplanatus). Psychrophile versus psychrotroph In 1940, ZoBell and Conn stated that they had never encountered "true psychrophiles" or organisms that grow best at relatively low temperatures. In 1958, J. L. Ingraham supported this by concluding that there are very few or possibly no bacteria that fit the textbook definitions of psychrophiles. Richard Y. Morita emphasizes this by using the term psychrotroph to describe organisms that do not meet the definition of psychrophiles. The confusion between the terms psychrotrophs and psychrophiles arose because investigators were unaware of the thermolability of psychrophilic organisms at laboratory temperatures. Because of this, early investigators did not determine the cardinal temperatures for their isolates. The similarity between the two is that both are capable of growing at 0 °C, but the optimum and upper temperature limits for growth are lower for psychrophiles than for psychrotrophs. Psychrophiles are also more often isolated from permanently cold habitats than psychrotrophs are. Although psychrophilic enzymes remain under-used because the cost of production and processing at low temperatures is higher than for the commercial enzymes presently in use, the renewed attention and research interest in psychrophiles and psychrotrophs will contribute to environmental protection and to efforts to conserve energy. See also Chionophile Extremophile Halophile Ice algae Mesophile Osmophile Pathogenic microorganisms in frozen environments Thermophile Xerophile References Further reading Microbial growth and nutrition Cryobiology
Psychrophile
[ "Physics", "Chemistry", "Biology" ]
2,367
[ "Biochemistry", "Physical phenomena", "Phase transitions", "Cryobiology" ]
595,145
https://en.wikipedia.org/wiki/Dry%20distillation
Dry distillation is the heating of solid materials to produce gaseous products (which may condense into liquids or solids). The method may involve pyrolysis or thermolysis, or it may not (for instance, a simple mixture of ice and glass could be separated without breaking any chemical bonds, but organic matter contains a greater diversity of molecules, some of which are likely to break). If there are no chemical changes, just phase changes, it resembles classical distillation, although it will generally need higher temperatures. Dry distillation in which chemical changes occur is a type of destructive distillation or cracking. Uses The method has been used to obtain liquid fuels from coal and wood. It can also be used to break down mineral salts such as sulfates through thermolysis, in this case producing sulfur dioxide (SO2) or sulfur trioxide (SO3) gas, which can be dissolved in water to obtain sulfuric acid. By this method sulfuric acid was first identified and artificially produced. When substances of vegetable origin, e.g. coal, oil shale, peat or wood, are heated in the absence of air (dry distillation), they decompose into gas, liquid products and coke/charcoal. The yield and chemical nature of the decomposition products depend on the nature of the raw material and the conditions under which the dry distillation is done. Decomposition within a temperature range of 450 °C to about 600 °C is called carbonization or low-temperature degassing. At temperatures above 900 °C, the process is called coking or high-temperature degassing. If coal is gasified to make coal gas or carbonized to make coke, then coal tar is among the by-products. Wood When wood is heated above 270 °C, it begins to carbonize. If air is absent, the final product (since there is no oxygen present to react with the wood) is charcoal. If air (which contains oxygen) is present, the wood will catch fire and burn when it reaches a temperature of about 400–500 °C, and the final product is wood ash. If wood is heated away from air, first the moisture is driven off. Until this is complete, the wood temperature remains at about 100–110 °C. When the wood is dry, its temperature rises, and at about 270 °C, it begins to spontaneously decompose. This is the well-known exothermic reaction which takes place in charcoal burning. At this stage evolution of the by-products of wood carbonization starts. These substances are given off gradually as the temperature rises, and at about 450 °C the evolution is complete. The solid residue, charcoal, is mainly carbon (about 70%) and small amounts of tarry substances which can be driven off or decomposed completely only by raising the temperature to above about 600 °C. In the common practice of charcoal burning using internal heating of the charged wood by burning a part of it, all the by-product vapors and gases escape into the atmosphere as smoke. The by-products can be recovered by passing the off-gases through a series of water condensers to yield so-called wood vinegar (pyroligneous acid), while the non-condensible wood gas passes on through the condenser and may be burned to provide heat. The wood gas is only usable as fuel, and consists typically of 17% methane, 2% hydrogen, 23% carbon monoxide, 38% carbon dioxide, 2% oxygen and 18% nitrogen. It has a gas calorific value of about 10.8 MJ/m3 (290 BTU/cu.ft.), i.e. about one-third the value of natural gas. When deciduous tree woods are subjected to distillation, the products are methanol (wood alcohol) and charcoal.
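The calorific value quoted above for wood gas can be roughly reproduced from its composition. The sketch below (Python) weights each combustible component by an approximate higher heating value per cubic metre; these heating values are typical textbook figures assumed here for illustration, not data from this article, so the result only approximates the quoted 10.8 MJ/m3.

# Approximate higher heating values in MJ per cubic metre (assumed
# textbook figures, not taken from this article).
HHV = {"CH4": 39.8, "H2": 12.7, "CO": 12.6}

# Typical wood-gas composition quoted above, as volume fractions.
composition = {"CH4": 0.17, "H2": 0.02, "CO": 0.23,
               "CO2": 0.38, "O2": 0.02, "N2": 0.18}

heating_value = sum(frac * HHV.get(gas, 0.0)
                    for gas, frac in composition.items())
print(f"{heating_value:.1f} MJ/m^3")  # ~9.9 MJ/m^3, the same order as
                                      # the ~10.8 MJ/m^3 quoted above

The inert components (CO2, O2, N2) contribute nothing, which is why wood gas carries only about a third of the energy of natural gas by volume.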
The distillation of pine wood causes pine tar and pitch to drip away from the wood and leave behind charcoal. Birch tar from birch bark is a particularly fine tar, known as "Russian oil", suitable for leather protection. The by-products of wood tar are turpentine and charcoal. Tar kilns are dry distillation ovens, historically used in Scandinavia for producing tar from wood. They were built close to the forest, from limestone or from more primitive holes in the ground. The bottom is sloped into an outlet hole to allow the tar to pour out. The wood is split into finger-sized pieces, stacked densely, and finally covered tightly with dirt and moss. If oxygen can enter, the wood might catch fire, and the production would be ruined. On top of this, a fire is stacked and lit. After a few hours, the tar starts to pour out and continues to do so for a few days. See also Coal oil Destructive distillation Gasworks Pitch (resin) Syngas Tar References Distillation Pyrolysis
Dry distillation
[ "Chemistry" ]
997
[ "Separation processes", "Pyrolysis", "Oil shale technology", "Organic reactions", "Distillation", "Synthetic fuel technologies" ]
595,183
https://en.wikipedia.org/wiki/Calcium%20sulfate
Calcium sulfate (or calcium sulphate) is the inorganic compound with the formula CaSO4 and related hydrates. In the form of γ-anhydrite (the anhydrous form), it is used as a desiccant. One particular hydrate is better known as plaster of Paris, and another occurs naturally as the mineral gypsum. It has many uses in industry. All forms are white solids that are poorly soluble in water. Calcium sulfate causes permanent hardness in water. Hydration states and crystallographic structures The compound exists in three levels of hydration corresponding to different crystallographic structures and to minerals: CaSO4 (anhydrite): anhydrous state. The structure is related to that of zirconium orthosilicate (zircon): Ca2+ is 8-coordinate, SO42− is tetrahedral, O is 3-coordinate. CaSO4 · 2 H2O (gypsum and selenite (mineral)): dihydrate. CaSO4 · ½ H2O (bassanite): hemihydrate, also known as plaster of Paris. Specific hemihydrates are sometimes distinguished: α-hemihydrate and β-hemihydrate. Uses The main use of calcium sulfate is to produce plaster of Paris and stucco. These applications exploit the fact that calcium sulfate which has been powdered and calcined forms a moldable paste upon hydration and hardens as crystalline calcium sulfate dihydrate. It is also convenient that calcium sulfate is poorly soluble in water and does not readily dissolve in contact with water after its solidification. Hydration and dehydration reactions With judicious heating, gypsum converts to the partially dehydrated mineral called bassanite or plaster of Paris. This material has the formula CaSO4·(nH2O), where 0.5 ≤ n ≤ 0.8. Temperatures between are required to drive off the water within its structure. The details of the temperature and time depend on ambient humidity. Temperatures as high as are used in industrial calcination, but at these temperatures γ-anhydrite begins to form. The heat energy delivered to the gypsum at this time (the heat of hydration) tends to go into driving off water (as water vapor) rather than increasing the temperature of the mineral, which rises slowly until the water is gone, then increases more rapidly. The equation for the partial dehydration is: CaSO4 · 2 H2O   →   CaSO4 · ½ H2O + 1½ H2O↑ The endothermic property of this reaction is relevant to the performance of drywall, conferring fire resistance to residential and other structures. In a fire, the structure behind a sheet of drywall will remain relatively cool as water is lost from the gypsum, thus preventing (or substantially retarding) damage to the framing (through combustion of wood members or loss of strength of steel at high temperatures) and consequent structural collapse. But at higher temperatures, calcium sulfate will release oxygen and act as an oxidizing agent. This property is used in aluminothermy. In contrast to most minerals, which when rehydrated simply form liquid or semi-liquid pastes, or remain powdery, calcined gypsum has an unusual property: when mixed with water at normal (ambient) temperatures, it quickly reverts chemically to the preferred dihydrate form, while physically "setting" to form a rigid and relatively strong gypsum crystal lattice: CaSO4 · ½ H2O + 1½ H2O   →   CaSO4 · 2 H2O This reaction is exothermic and is responsible for the ease with which gypsum can be cast into various shapes including sheets (for drywall), sticks (for blackboard chalk), and molds (to immobilize broken bones, or for metal casting). Mixed with polymers, it has been used as a bone repair cement.
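The stoichiometry of the two reactions above is easy to verify from standard atomic masses. The short sketch below (Python; nothing in it is specific to this article beyond the reaction stoichiometry) computes the theoretical mass of water driven off when gypsum is calcined to the hemihydrate, and the water needed to set it again.

# Standard atomic masses in g/mol.
Ca, S, O, H = 40.078, 32.06, 15.999, 1.008

H2O = 2 * H + O                    # 18.015
CaSO4 = Ca + S + 4 * O             # 136.13 (anhydrite)
dihydrate = CaSO4 + 2 * H2O        # 172.16 (gypsum)
hemihydrate = CaSO4 + 0.5 * H2O    # 145.14 (plaster of Paris)

# Calcination: CaSO4 . 2 H2O -> CaSO4 . 1/2 H2O + 1 1/2 H2O
mass_loss = 1.5 * H2O / dihydrate
print(f"water driven off: {100 * mass_loss:.1f}% of the gypsum mass")  # ~15.7%

# Setting: stoichiometric water to rehydrate 1 kg of hemihydrate
water_per_kg = 1.5 * H2O / hemihydrate
print(f"water to set 1 kg of plaster: {water_per_kg:.3f} kg")  # ~0.186 kg

In practice plaster is gauged with more water than this stoichiometric minimum to give a workable paste; the excess simply evaporates as the cast dries.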
Small amounts of calcined gypsum are added to earth to create strong structures directly from cast earth, an alternative to adobe (which loses its strength when wet). The conditions of dehydration can be changed to adjust the porosity of the hemihydrate, resulting in the so-called α- and β-hemihydrates (which are more or less chemically identical). On heating to , the nearly water-free form, called γ-anhydrite (CaSO4·nH2O where n = 0 to 0.05), is produced. γ-Anhydrite reacts slowly with water to return to the dihydrate state, a property exploited in some commercial desiccants. On heating above 250 °C, the completely anhydrous form called β-anhydrite or "natural" anhydrite is formed. Natural anhydrite does not react with water, even over geological timescales, unless very finely ground. The variable composition of the hemihydrate and γ-anhydrite, and their easy inter-conversion, is due to their nearly identical crystal structures containing "channels" that can accommodate variable amounts of water, or other small molecules such as methanol. Food industry The calcium sulfate hydrates are used as a coagulant in products such as tofu. It is permitted by the FDA in cheese and related cheese products; cereal flours; bakery products; frozen desserts; artificial sweeteners for jelly & preserves; condiment vegetables; and condiment tomatoes and some candies. It is known in the E number series as E516, and the UN's FAO lists it as a firming agent, a flour treatment agent, a sequestrant, and a leavening agent. Dentistry Calcium sulfate has a long history of use in dentistry. It has been used in bone regeneration as a graft material and graft binder (or extender) and as a barrier in guided bone tissue regeneration. It is a biocompatible material and is completely resorbed following implantation. It does not evoke a significant host response and creates a calcium-rich milieu in the area of implantation. Desiccant When sold in the anhydrous state as a desiccant with a color-indicating agent under the name Drierite, it appears blue (anhydrous) or pink (hydrated) due to impregnation with cobalt(II) chloride, which functions as a moisture indicator. Sulfuric acid production Up to the 1970s, commercial quantities of sulfuric acid were produced from anhydrous calcium sulfate. When mixed with shale or marl and roasted at 1400 °C, the sulfate liberates sulfur dioxide gas, a precursor to sulfuric acid. The reaction also produces calcium silicate, used in cement clinker production. Some component reactions pertaining to calcium sulfate: CaSO4 + 2 C → CaS + 2 CO2, followed by CaS + 3 CaSO4 → 4 CaO + 4 SO2. Production and occurrence The main sources of calcium sulfate are naturally occurring gypsum and anhydrite, which occur at many locations worldwide as evaporites. These may be extracted by open-cast quarrying or by deep mining. World production of natural gypsum is around 127 million tonnes per annum. In addition to natural sources, calcium sulfate is produced as a by-product in a number of processes: In flue-gas desulfurization, exhaust gases from fossil-fuel power stations and other processes (e.g. cement manufacture) are scrubbed to reduce their sulfur dioxide content by injecting finely ground limestone: SO2 + CaCO3 + ½ O2 + 2 H2O → CaSO4 · 2 H2O + CO2. Related sulfur-trapping methods use lime, and some produce an impure calcium sulfite, which oxidizes on storage to calcium sulfate. In the production of phosphoric acid from phosphate rock, calcium phosphate is treated with sulfuric acid and calcium sulfate precipitates. The product, called phosphogypsum, is often contaminated with impurities that make its use uneconomic.
In the production of hydrogen fluoride, calcium fluoride is treated with sulfuric acid, precipitating calcium sulfate. In the refining of zinc, solutions of zinc sulfate are treated with hydrated lime to co-precipitate heavy metals such as barium. Calcium sulfate can also be recovered and re-used from scrap drywall at construction sites. These precipitation processes tend to concentrate radioactive elements in the calcium sulfate product. This issue is particularly acute with the phosphate by-product, since phosphate ores naturally contain uranium and its decay products such as radium-226, lead-210 and polonium-210. Extraction of uranium from phosphate ores can be economical in its own right, depending on uranium market prices, or the separation of uranium can be mandated by environmental legislation, with its sale used to recover part of the cost of the process. Calcium sulfate is also a common component of fouling deposits in industrial heat exchangers, because its solubility decreases with increasing temperature (see the section on retrograde solubility below). Solubility The solubility of calcium sulfate decreases as temperature increases. This behaviour ("retrograde solubility") is uncommon: dissolution of most salts is endothermic and their solubility increases with temperature. The retrograde solubility of calcium sulfate is also responsible for its precipitation in the hottest zone of heating systems and for its contribution to the formation of scale in boilers, along with the precipitation of calcium carbonate, whose solubility also decreases when CO2 degasses from hot water or escapes from the system. See also Calcium sulfate (data page) Alabaster Anhydrite Bathybius haeckelii Chalk (calcium carbonate) Gypsum Gypsum plaster Phosphogypsum Selenite (mineral) Flue-gas desulfurization References External links International Chemical Safety Card 1215 NIOSH Pocket Guide to Chemical Hazards Calcium compounds Sulfates Desiccants Food additives Pyrotechnic colorants E-number additives
Calcium sulfate
[ "Physics", "Chemistry" ]
2,049
[ "Sulfates", "Salts", "Desiccants", "Materials", "Matter" ]
595,186
https://en.wikipedia.org/wiki/Wolfgang%20Paul
Wolfgang Paul (; 10 August 1913 – 7 December 1993) was a German physicist who co-developed the non-magnetic quadrupole mass filter, which laid the foundation for what is now called an ion trap. He shared one-half of the Nobel Prize in Physics in 1989 for this work with Hans Georg Dehmelt; the other half of the Prize in that year was awarded to Norman Foster Ramsey, Jr. Early life Wolfgang Paul was born on 10 August 1913 in Lorenzkirch, Germany. He grew up in Munich, where his father was a professor of pharmaceutical chemistry. After the first few years at the Technical University of Munich, he moved to Technische Universität Berlin in 1934, where he finished his Diploma in 1937 in the group of Hans Geiger. He followed his doctorate adviser Hans Kopfermann to the University of Kiel and, after being drafted into the air force, finished his PhD in 1940 at Technische Universität Berlin. During World War II, he researched isotope separation, which is necessary to produce fissionable material for use in making nuclear weapons. Academic career For several years he was a private lecturer at the University of Göttingen with Hans Kopfermann. He became a professor of Experimental Physics at the University of Bonn and stayed there from 1952 until 1993. From 1965 to 1967 he was director of the Division of Nuclear Physics at CERN. In 1970, he spent some weeks as Morris Loeb lecturer at Harvard University. In 1978 he lectured as a distinguished scientist at the Fermi Institute of the University of Chicago and held a similar position at the University of Tokyo. From 1981 he was professor emeritus at the University of Bonn. Scientific results He developed techniques for trapping charged particles in mass spectrometry by electric quadrupole fields in the 1950s. Paul traps are used extensively today to contain and study ions. He developed molecular beam lenses and worked on a 500 MeV electron synchrotron, followed by one at 2500 MeV in 1965. Later he worked on containing slow neutrons in magnetic storage rings, measuring the free neutron lifetime. He humorously referred to Wolfgang Pauli as his imaginary part if their surnames were considered as complex numbers. Göttingen Manifesto In 1957, Paul was a signatory of the Göttingen Manifesto, a declaration of 18 leading nuclear scientists of West Germany against arming the West German army with tactical nuclear weapons. Sons His son Stephan Paul is a professor of experimental physics at the Technical University of Munich. His son Lorenz Paul is a professor of physics at the University of Wuppertal. Works References External links including the Nobel Lecture, December 8, 1989 Electromagnetic Traps for Charged and Neutral Particles Wolfgang Paul Prize, awarded by the Alexander von Humboldt Foundation in November 2001. List of award winners. 20th-century German physicists Nobel laureates in Physics 1913 births 1993 deaths German Nobel laureates Academic staff of the University of Bonn Nuclear program of Nazi Germany Technical University of Munich alumni Technische Universität Berlin alumni Academic staff of Technische Universität Berlin Grand Crosses with Star and Sash of the Order of Merit of the Federal Republic of Germany Recipients of the Pour le Mérite (civil class) Mass spectrometrists People associated with CERN
Wolfgang Paul
[ "Physics", "Chemistry" ]
658
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
595,204
https://en.wikipedia.org/wiki/Nokia%207250
The Nokia 7250 is a mobile phone handset manufactured by Nokia. Announced in 2002 and released for sale in February 2003, it was designed at Nokia Design Center in California by the Bulgarian-American designer Miki Mehandjiysky. The Nokia 7250 is notable for its unconventional design, striking colours and integrated digital camera, and also had Xpress-On covers. It was the successor of the Nokia 7210. Features & Specifications Size: 105 x 44 x 19 mm Weight: 92 g Battery standby: 150 – 300 hours Battery talktime: 2–5 hours Tri-band WAP 1.2.1, GPRS High-Speed Circuit-Switched Data (HSCSD) - max 43.2 kilobits per second MMS (Multimedia Messaging Service) Downloadable Java applications Colour display (256 colours, 128 x 128 pixels) Polyphonic ringtones (MIDI format) 3.5 megabytes shared memory Integrated stereo FM radio Integrated handsfree speaker Changeable Xpress-On covers Nokia Pop-Port connector Infra-red and cable connections Downloadable colour wallpaper Screensaver: digital clock Speed dialling Phone book (up to 300 entries) 2 games (Triple Pop and Bounce) Clock, alarm clock, calculator, currency converter Nokia 7250i The Nokia 7250i is a slightly improved version of the Nokia 7250, introduced in June 2003. It includes an XHTML browser, OMA Forward Lock digital rights management, a zoom function to magnify images and a more advanced camera. The phone has exactly the same design as the 7250. The Nokia 6610i was essentially the same phone in terms of features, but had a more conservative design intended to appeal more to business users, while the 7250i was intended to be a fashion-oriented phone. The Nokia 3200 also had a similar feature set; however, the 3200 was intended to be a more affordable youth-oriented phone and featured a different style of graphical user interface, derived from that of the Nokia 3100. References 7250 Mobile phones introduced in 2003 Mobile phones with infrared transmitter Series 40 devices
Nokia 7250
[ "Technology" ]
420
[ "Mobile technology stubs", "Mobile phone stubs" ]
595,428
https://en.wikipedia.org/wiki/File%20verification
File verification is the process of using an algorithm for verifying the integrity of a computer file, usually by checksum. This can be done by comparing two files bit-by-bit, but this requires two copies of the same file and may miss systematic corruptions which might occur to both files. A more popular approach is to generate a hash of the copied file and compare that to the hash of the original file. Integrity verification File integrity can be compromised, usually referred to as the file becoming corrupted. A file can become corrupted in a variety of ways: faulty storage media, errors in transmission, write errors during copying or moving, software bugs, and so on. Hash-based verification ensures that a file has not been corrupted by comparing the file's hash value to a previously calculated value. If these values match, the file is presumed to be unmodified. Due to the nature of hash functions, hash collisions may result in false positives, but the likelihood of collisions is often negligible with random corruption. Authenticity verification It is often desirable to verify that a file hasn't been modified in transmission or storage by untrusted parties, for example, to include malicious code such as viruses or backdoors. To verify authenticity, a classical hash function is not enough, as such functions are not designed to be collision resistant; it is computationally trivial for an attacker to cause deliberate hash collisions, meaning that a malicious change in the file is not detected by a hash comparison. In cryptography, this attack is called a preimage attack. For this purpose, cryptographic hash functions are often employed. As long as the hash sums cannot be tampered with — for example, if they are communicated over a secure channel — the files can be presumed to be intact. Alternatively, digital signatures can be employed to assure tamper resistance. File formats A checksum file is a small file that contains the checksums of other files. There are a few well-known checksum file formats. Several utilities, such as md5deep, can use such checksum files to automatically verify an entire directory of files in one operation. The particular hash algorithm used is often indicated by the file extension of the checksum file. The ".sha1" file extension indicates a checksum file containing 160-bit SHA-1 hashes in sha1sum format. The ".md5" file extension, or a file named "MD5SUMS", indicates a checksum file containing 128-bit MD5 hashes in md5sum format. The ".sfv" file extension indicates a checksum file containing 32-bit CRC32 checksums in simple file verification format. The "crc.list" file indicates a checksum file containing 32-bit CRC checksums in brik format. As of 2012, the best-practice recommendation is to use SHA-2 or SHA-3 to generate new file integrity digests, and to accept MD5 and SHA-1 digests for backward compatibility if stronger digests are not available. The theoretically weaker SHA-1, the weaker MD5, or the much weaker CRC were previously commonly used for file integrity checks. CRC checksums cannot be used to verify the authenticity of files, as CRC32 is not a collision resistant hash function: even if the hash sum file is not tampered with, it is computationally trivial for an attacker to replace a file with one having the same CRC digest as the original, meaning that a malicious change in the file is not detected by a CRC comparison. See also Checksum Data deduplication References Computer files Error detection and correction
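As a minimal sketch of hash-based verification in Python (standard library only; the file name and the expected digest below are hypothetical placeholders):

```python
import hashlib
import hmac

def file_digest(path: str, algorithm: str = "sha256", chunk_size: int = 65536) -> str:
    """Hash a file in chunks so large files need not fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical reference digest, e.g. copied from a published checksum file.
expected = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
actual = file_digest("download.iso")

# hmac.compare_digest gives a timing-safe string comparison.
print("OK" if hmac.compare_digest(actual, expected) else "FAIL")
```

In practice the expected digest would be read from a checksum file in one of the formats described above (e.g. sha1sum or md5sum output) rather than hard-coded.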
File verification
[ "Engineering" ]
741
[ "Error detection and correction", "Reliability engineering" ]
595,479
https://en.wikipedia.org/wiki/Habitat%20II
Habitat II, the Second United Nations Conference on Human Settlements, was held in Istanbul, Turkey, from 3 to 14 June 1996, twenty years after Habitat I, held in Vancouver, Canada, in 1976. Popularly called the "City Summit", it brought together high-level representatives of national and local governments, as well as the private sector, NGOs, research and training institutions and the media. Universal goals of ensuring adequate shelter for all and making human settlements safer, healthier and more livable, inspired by the Charter of the United Nations, were discussed and endorsed. Habitat II received its impetus from the 1992 United Nations Conference on Environment and Development and General Assembly resolution A/RES/47/180. The conference outcomes were integrated in the Istanbul Declaration and the Habitat Agenda, and adopted as a new global action plan to realize sustainable human settlements. The Secretary-General of the Conference was Dr. Wally N'Dow. The objectives for Habitat II were stated as: in the long term, to arrest the deterioration of global human settlements conditions and ultimately create the conditions for achieving improvements in the living environment of all people on a sustainable basis, with special attention to the needs and contributions of women and vulnerable social groups whose quality of life and participation in development have been hampered by exclusion and inequality, affecting the poor in general; to adopt a general statement of principles and commitments and formulate a related global plan of action capable of guiding national and international efforts through the first two decades of the next century. A new mandate for the United Nations Centre for Human Settlements (UNCHS) was derived to support and monitor the implementation of the Habitat Agenda adopted at the Conference and approved by the General Assembly. Habitat III met in Quito, Ecuador, from 17 to 20 October 2016. Previous negotiations The organizational session of the Preparatory Committee (PrepCom) for Habitat II was held at UN Headquarters in New York from 3 to 5 March 1993. Delegates elected the Bureau and took decisions regarding the organization and timing of the process. The First Substantive Session of the PrepCom was held in Geneva from 11 to 22 April 1994. Delegates agreed that the overriding objective of the Conference was to increase world awareness of the problems and potentials of human settlements as important inputs to social progress and economic growth, and to commit the world's leaders to making cities, towns and villages healthy, safe, just and sustainable. The Earth Negotiations Bulletin prepared a comprehensive report on the first session of the PrepCom. The PrepCom also took decisions on the organization of the Conference and financing, in addition to the areas of: National Objectives, International Objectives, Participation, Draft Statement of Principles and Commitments and Draft Global Plan of Action. Habitat II at the 49th United Nations General Assembly The Second Committee of the UN General Assembly addressed Habitat II from 8 to 16 November 1994. The Earth Negotiations Bulletin prepared a year-end update report on Habitat II preparations that included a report on the General Assembly's treatment of this agenda item. A draft resolution on the "United Nations Conference on Human Settlements (Habitat II)" (A/C.2/49/L.27) was first tabled by the co-sponsors, Algeria, on behalf of the G-77 and China, and Turkey. 
After informal consultations by members of the Second Committee, the Vice Chair, Raiko Raichev (Bulgaria), submitted a new draft resolution (A/C.2/49/L.61). This resolution was adopted as orally amended by the Committee on 9 December 1994. The operative part of the resolution, as contained in L.61, took note of the reports of the PrepCom on its organizational session and first substantive session and endorsed the decisions contained therein. The resolution approved the PrepCom's recommendation that a third substantive session of the PrepCom be held at UN Headquarters early in 1996 to complete the preparatory work for the Conference. The Second Session of the Habitat II Preparatory Committee The Second Substantive Session of the PrepCom was held in Nairobi, Kenya, from 24 April to 5 May 1995. The Earth Negotiations Bulletin published a summary of the meeting. The Third Session of the Habitat II Preparatory Committee The Third Session of the Habitat II Preparatory Committee was held in New York from 5 to 17 February 1996. The Earth Negotiations Bulletin published a summary of the meeting. See also UN-Habitat World Urban Forum Habitat I Habitat III United Nations Conference on Housing and Sustainable Urban Development (Habitat III) References United Nations conferences Diplomatic conferences in Turkey 20th-century diplomatic conferences 1996 in international relations Human settlement Urban planning United Nations Human Settlements Programme Turkey and the United Nations 1990s in Istanbul
Habitat II
[ "Engineering" ]
926
[ "Urban planning", "Architecture" ]
595,605
https://en.wikipedia.org/wiki/Nuclear%20Energy%20Agency
The Nuclear Energy Agency (NEA) is an intergovernmental agency that is organized under the Organisation for Economic Co-operation and Development (OECD). Originally formed on 1 February 1958 with the name European Nuclear Energy Agency (ENEA)—the United States participated as an Associate Member—the name was changed on 20 April 1972 to its current name after Japan became a member. The mission of the NEA is to "assist its member countries in maintaining and further developing, through international co-operation, the scientific, technological and legal bases required for the safe, environmentally friendly and economical use of nuclear energy for peaceful purposes." History The creation of the European Nuclear Energy Agency (ENEA) was agreed by the OEEC Council of Ministers on December 20, 1957. Members The NEA currently consists of 33 countries from Europe, North America and the Asia-Pacific region. In 2021, Bulgaria acceded to the NEA as its most recent member. In 2022, following Russia's invasion of Ukraine, Russia's membership was suspended. Together they account for approximately 85% of the world's installed nuclear capacity. Nuclear power accounts for almost a quarter of the electricity produced in NEA member countries. The NEA works closely with the International Atomic Energy Agency (IAEA) in Vienna and with the European Commission in Brussels. Within the OECD, there is close co-ordination with the International Energy Agency and the Environment Directorate, as well as contacts with other directorates, as appropriate. Areas of work Nuclear safety and regulation Nuclear energy development Radioactive waste management Radiation protection and public health Nuclear law and liability Nuclear science Data bank Information and communication European Nuclear Energy Tribunal Structure Since 1 September 2014, the Director-General of the NEA has been William D. Magwood, IV, who replaced Luis E. Echávarri in the post. The NEA Secretariat serves seven specialised standing technical committees under the leadership of the Steering Committee for Nuclear Energy—the governing body of the NEA—which reports directly to the OECD Council. The standing technical committees, representing each of the seven major areas of the Agency's programme, are composed of member country experts who are both contributors to the programme of work and beneficiaries of its results. The approach is highly cost-efficient as it enables the Agency to pursue an ambitious programme with a relatively small staff that co-ordinates the work. The substantive value of the standing technical committees arises from the numerous important functions they perform, including: providing a forum for in-depth exchanges of technical and programmatic information; stimulating development of useful information by initiating and carrying out co-operation/research on key problems; developing common positions, including "consensus opinions", on technical and policy issues; identifying areas where further work is needed and ensuring that NEA activities respond to real needs; organising joint projects to enable interested countries to carry out research on particular issues on a cost-sharing basis. NEA Annual Report The NEA Annual Report, issued in English and French, is a definitive guide to the agency's yearly undertakings, major publications, and the evolving global nuclear energy sector. It aims to equip governments, stakeholders, and industry specialists with in-depth analysis and foresight on nuclear technology developments. 
The 2022 edition highlights that there were 423 nuclear reactors in operation worldwide, providing a total of 379 GWe. NEA member countries manage 312 of these reactors, constituting roughly 80% of the global capacity. Additionally, the year witnessed the grid connection of six new reactors, contributing 7,360 MWe, and the construction of 57 reactors, reflecting a dynamic and expanding nuclear industry. See also International Energy Agency International Atomic Energy Agency European Organization for Nuclear Research References External links – OECD Nuclear Energy Agency International organizations based in France International nuclear energy organizations Nuclear organizations Radiation protection organizations OECD
Nuclear Energy Agency
[ "Engineering" ]
782
[ "International nuclear energy organizations", "Energy organizations", "Nuclear organizations", "Radiation protection organizations" ]
595,708
https://en.wikipedia.org/wiki/Functional%20equation%20%28L-function%29
In mathematics, the L-functions of number theory are expected to have several characteristic properties, one of which is that they satisfy certain functional equations. There is an elaborate theory of what these equations should be, much of which is still conjectural. Introduction A prototypical example, the Riemann zeta function has a functional equation relating its value at the complex number s with its value at 1 − s. In every case this relates to some value ζ(s) that is only defined by analytic continuation from the infinite series definition. That is, writing, as is conventional, σ for the real part of s, the functional equation relates the cases σ > 1 and σ < 0, and also changes a case with 0 < σ < 1 in the critical strip to another such case, reflected in the line σ = ½. Therefore, use of the functional equation is basic, in order to study the zeta-function in the whole complex plane. The functional equation in question for the Riemann zeta function takes the simple form $Z(s) = Z(1 - s)$, where Z(s) is ζ(s) multiplied by a gamma-factor, involving the gamma function. This is now read as an 'extra' factor in the Euler product for the zeta-function, corresponding to the infinite prime. Just the same shape of functional equation holds for the Dedekind zeta function of a number field K, with an appropriate gamma-factor that depends only on the embeddings of K (in algebraic terms, on the tensor product of K with the real field). There is a similar equation for the Dirichlet L-functions, but this time relating them in pairs: $\Lambda(s, \chi) = \varepsilon\,\Lambda(1 - s, \chi^*)$, with χ a primitive Dirichlet character, χ* its complex conjugate, Λ the L-function multiplied by a gamma-factor, and ε a complex number of absolute value 1, of shape $\varepsilon = G(\chi)/(i^a \sqrt{q})$, where G(χ) is a Gauss sum formed from χ, q is the modulus of χ, and a is 0 or 1 according as χ(−1) = 1 or −1. This equation has the same function on both sides if and only if χ is a real character, taking values in {0,1,−1}. Then ε must be 1 or −1, and the case of the value −1 would imply a zero of Λ(s) at s = ½. According to the theory (of Gauss, in effect) of Gauss sums, the value is always 1, so no such simple zero can exist (the function is even about the point). Theory of functional equations A unified theory of such functional equations was given by Erich Hecke, and the theory was taken up again in Tate's thesis by John Tate. Hecke found generalised characters of number fields, now called Hecke characters, for which his proof (based on theta functions) also worked. These characters and their associated L-functions are now understood to be strictly related to complex multiplication, as the Dirichlet characters are to cyclotomic fields. There are also functional equations for the local zeta-functions, arising at a fundamental level for the (analogue of) Poincaré duality in étale cohomology. The Euler products of the Hasse–Weil zeta-function for an algebraic variety V over a number field K, formed by reducing modulo prime ideals to get local zeta-functions, are conjectured to have a global functional equation; but this is currently considered out of reach except in special cases. The definition can be read directly out of étale cohomology theory, again; but in general some assumption coming from automorphic representation theory seems required to get the functional equation. The Taniyama–Shimura conjecture was a particular case of this general theory. By relating the gamma-factor aspect to Hodge theory, and detailed studies of the expected ε factor, the theory, in its empirical aspects, has been brought to quite a refined state, even if proofs are missing. 
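A quick numerical sanity check of the symmetry Z(s) = Z(1 − s) can be run with the mpmath library (the sample point is arbitrary; this is an illustration, not part of the article):

```python
from mpmath import mp, mpc, gamma, zeta, pi

mp.dps = 30  # decimal digits of working precision

def Z(s):
    """Completed zeta: pi**(-s/2) * Gamma(s/2) * zeta(s)."""
    return pi**(-s / 2) * gamma(s / 2) * zeta(s)

s = mpc(0.3, 14.2)   # arbitrary point in the critical strip
print(Z(s))          # these two lines print the same value,
print(Z(1 - s))      # illustrating Z(s) = Z(1 - s)
```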
See also Explicit formula (L-function) Riemann–Siegel formula (particular approximate functional equation) References External links Zeta and L-functions Functional equations
Functional equation (L-function)
[ "Mathematics" ]
804
[ "Mathematical analysis", "Mathematical objects", "Functional equations", "Equations" ]
595,824
https://en.wikipedia.org/wiki/Tschirnhaus%20transformation
In mathematics, a Tschirnhaus transformation, also known as Tschirnhausen transformation, is a type of mapping on polynomials developed by Ehrenfried Walther von Tschirnhaus in 1683. Simply, it is a method for transforming a polynomial equation of degree $n$ with some nonzero intermediate coefficients, $a_1, \dots, a_{n-1}$, such that some or all of the transformed intermediate coefficients, $b_1, \dots, b_{n-1}$, are exactly zero. For example, finding a substitution $y = g(x)$ for a cubic equation of degree $n = 3$, $x^3 + a_2 x^2 + a_1 x + a_0 = 0$, such that substituting $x = f(y)$ yields a new equation $y^3 + b_2 y^2 + b_1 y + b_0 = 0$ such that $b_2 = 0$, $b_1 = 0$, or both. More generally, it may be defined conveniently by means of field theory, as the transformation on minimal polynomials implied by a different choice of primitive element. This is the most general transformation of an irreducible polynomial that takes a root to some rational function applied to that root. Definition For a generic degree $n$ reducible monic polynomial equation of the form $f(x) = 0$, a Tschirnhaus transformation is the function $y = p(x)/q(x)$, where $p$ and $q$ are polynomials and $q$ does not vanish at roots of $f$, such that the new equation in $y$ has certain special properties, most commonly such that some coefficients, $b_i$, are identically zero. Example: Tschirnhaus' method for cubic equations In Tschirnhaus' 1683 paper, he solved the cubic equation $x^3 + a_2 x^2 + a_1 x + a_0 = 0$ using a Tschirnhaus transformation of the form $y = x + k$. Substituting $x = y - k$ yields the transformed equation $y^3 + (a_2 - 3k)y^2 + b_1 y + b_0 = 0$. Setting $a_2 - 3k = 0$ yields $k = a_2/3$, and finally the Tschirnhaus transformation $y = x + a_2/3$, which may be substituted into the original equation to yield an equation of the form $y^3 + b_1 y + b_0 = 0$. Tschirnhaus went on to describe how a Tschirnhaus transformation of the form $y = x^2 + k_1 x + k_2$ may be used to eliminate two coefficients in a similar way. Generalization In detail, let $K$ be a field, and $P(t)$ a polynomial over $K$. If $P$ is irreducible, then the quotient ring of the polynomial ring $K[t]$ by the principal ideal generated by $P$, $L = K[t]/(P(t))$, is a field extension of $K$. We have $L = K(\alpha)$, where $\alpha$ is $t$ modulo $(P)$. That is, any element of $L$ is a polynomial in $\alpha$, which is thus a primitive element of $L$. There will be other choices $\beta$ of primitive element in $L$: for any such choice of $\beta$ we will have by definition $\beta = F(\alpha)$, $\alpha = G(\beta)$, with polynomials $F$ and $G$ over $K$. Now if $Q$ is the minimal polynomial for $\beta$ over $K$, we can call $Q$ a Tschirnhaus transformation of $P$. Therefore the set of all Tschirnhaus transformations of an irreducible polynomial is to be described as running over all ways of changing $P$, but leaving $L$ the same. This concept is used in reducing quintics to Bring–Jerrard form, for example. There is a connection with Galois theory, when $L$ is a Galois extension of $K$. The Galois group may then be considered as all the Tschirnhaus transformations of $P$ to itself. History In 1683, Ehrenfried Walther von Tschirnhaus published a method for rewriting a polynomial of degree $n > 2$ such that the $x^{n-1}$ and $x^{n-2}$ terms have zero coefficients. In his paper, Tschirnhaus referenced a method by René Descartes to reduce a quadratic polynomial such that the $x$ term has zero coefficient. In 1786, this work was expanded by Erland Samuel Bring, who showed that any generic quintic polynomial could be similarly reduced. In 1834, George Jerrard further expanded Tschirnhaus' work by showing that a Tschirnhaus transformation may be used to eliminate the $x^{n-1}$, $x^{n-2}$, and $x^{n-3}$ terms for a general polynomial of degree $n > 3$. See also Polynomial transformations Monic polynomial Reducible polynomial Quintic function Galois theory Abel–Ruffini theorem Principal equation form References Polynomials Field (mathematics)
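A small SymPy sketch of the linear substitution in the cubic example above (the symbol names are mine; this merely checks that the quadratic term is eliminated):

```python
from sympy import symbols, expand, Poly

x, y, a2, a1, a0 = symbols("x y a2 a1 a0")

cubic = x**3 + a2*x**2 + a1*x + a0

# Tschirnhaus substitution x = y - a2/3, chosen so the y**2 term cancels.
transformed = expand(cubic.subs(x, y - a2/3))

p = Poly(transformed, y)
print(p.coeff_monomial(y**2))  # 0: the quadratic term is gone
print(p.coeff_monomial(y))     # the new linear coefficient, a1 - a2**2/3
```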
Tschirnhaus transformation
[ "Mathematics" ]
686
[ "Polynomials", "Algebra" ]
595,896
https://en.wikipedia.org/wiki/Chowla%E2%80%93Mordell%20theorem
In mathematics, the Chowla–Mordell theorem is a result in number theory determining cases where a Gauss sum is the square root of a prime number, multiplied by a root of unity. It was proved and published independently by Sarvadaman Chowla and Louis Mordell, around 1951. In detail, if $p$ is a prime number, $\chi$ a nontrivial Dirichlet character modulo $p$, and $G(\chi) = \sum_{a=1}^{p-1} \chi(a)\,\zeta^a$, where $\zeta$ is a primitive $p$-th root of unity in the complex numbers, then $G(\chi)/\sqrt{p}$ is a root of unity if and only if $\chi$ is the quadratic residue symbol modulo $p$. The 'if' part was known to Gauss: the contribution of Chowla and Mordell was the 'only if' direction. The ratio in the theorem occurs in the functional equation of L-functions. References Gauss and Jacobi Sums by Bruce C. Berndt, Ronald J. Evans and Kenneth S. Williams, Wiley-Interscience, p. 53. Cyclotomic fields Zeta and L-functions Theorems in number theory
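A short numerical illustration in pure Python (the function names are mine): for the quadratic residue character, the ratio G(χ)/√p lands on 1 or i, a root of unity, exactly as the theorem requires.

```python
import cmath

def legendre(a: int, p: int) -> int:
    """Legendre symbol (a|p) for an odd prime p, via Euler's criterion."""
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def gauss_sum(p: int) -> complex:
    """G(chi) = sum of (a|p) * zeta**a, with zeta = exp(2*pi*i/p)."""
    zeta = cmath.exp(2j * cmath.pi / p)
    return sum(legendre(a, p) * zeta**a for a in range(1, p))

for p in (5, 7, 11, 13):
    ratio = gauss_sum(p) / cmath.sqrt(p)
    print(p, complex(round(ratio.real, 6), round(ratio.imag, 6)))
# Prints 1 for p = 1 (mod 4) and i for p = 3 (mod 4).
```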
Chowla–Mordell theorem
[ "Mathematics" ]
217
[ "Mathematical theorems", "Theorems in number theory", "Mathematical problems", "Number theory" ]
595,929
https://en.wikipedia.org/wiki/S-Adenosyl%20methionine
S-Adenosyl methionine (SAM), also known under the commercial names of SAMe, SAM-e, or AdoMet, is a common cosubstrate involved in methyl group transfers, transsulfuration, and aminopropylation. Although these anabolic reactions occur throughout the body, most SAM is produced and consumed in the liver. More than 40 methyl transfers from SAM are known, to various substrates such as nucleic acids, proteins, lipids and secondary metabolites. It is made from adenosine triphosphate (ATP) and methionine by methionine adenosyltransferase. SAM was first discovered by Giulio Cantoni in 1952. In bacteria, SAM is bound by the SAM riboswitch, which regulates genes involved in methionine or cysteine biosynthesis. In eukaryotic cells, SAM serves as a regulator of a variety of processes including DNA, tRNA, and rRNA methylation; immune response; amino acid metabolism; transsulfuration; and more. In plants, SAM is crucial to the biosynthesis of ethylene, an important plant hormone and signaling molecule. Structure S-Adenosyl methionine consists of the adenosyl group attached to the sulfur of methionine, providing it with a positive charge. It is synthesized from ATP and methionine by the enzyme S-adenosylmethionine synthetase through the following reaction: ATP + L-methionine + H2O → phosphate + diphosphate + S-adenosyl-L-methionine. The sulfonium functional group present in S-adenosyl methionine is the center of its peculiar reactivity. Depending on the enzyme, S-adenosyl methionine can be converted into one of three products: the adenosyl radical, which converts to deoxyadenosine (AdO) in the classic rSAM reaction and also cogenerates methionine; S-adenosyl homocysteine, releasing a methyl radical; or methylthioadenosine (MTA), releasing a homoalanine radical. Biochemistry SAM cycle The reactions that produce, consume, and regenerate SAM are called the SAM cycle. In the first step of this cycle, the SAM-dependent methylases (EC 2.1.1) that use SAM as a substrate produce S-adenosyl homocysteine as a product. S-Adenosyl homocysteine is a strong negative regulator of nearly all SAM-dependent methylases despite their biological diversity. This is hydrolysed to homocysteine and adenosine by S-adenosylhomocysteine hydrolase (EC 3.3.1.1), and the homocysteine is recycled back to methionine through transfer of a methyl group from 5-methyltetrahydrofolate, by one of the two classes of methionine synthases (i.e. cobalamin-dependent (EC 2.1.1.13) or cobalamin-independent (EC 2.1.1.14)). This methionine can then be converted back to SAM, completing the cycle. In the rate-limiting step of the SAM cycle, MTHFR (methylenetetrahydrofolate reductase) irreversibly reduces 5,10-methylenetetrahydrofolate to 5-methyltetrahydrofolate. Radical SAM enzymes A large number of enzymes cleave SAM reductively to produce radicals: the 5′-deoxyadenosyl 5′-radical, the methyl radical, and others. These enzymes are called radical SAMs. They all feature an iron-sulfur cluster at their active sites. Most enzymes with this capability share a region of sequence homology that includes the motif CxxxCxxC or a close variant. This sequence provides three cysteinyl thiolate ligands that bind to three of the four metals in the 4Fe-4S cluster. The fourth Fe binds the SAM. The radical intermediates generated by these enzymes perform a wide variety of unusual chemical reactions. 
Examples of radical SAM enzymes include spore photoproduct lyase, activases of pyruvate formate lyase and anaerobic sulfatases, lysine 2,3-aminomutase, and various enzymes of cofactor biosynthesis, peptide modification, metalloprotein cluster formation, tRNA modification, lipid metabolism, etc. Some radical SAM enzymes use a second SAM as a methyl donor. Radical SAM enzymes are much more abundant in anaerobic bacteria than in aerobic organisms. They can be found in all domains of life and are largely unexplored. A recent bioinformatics study concluded that this family of enzymes includes at least 114,000 sequences including 65 unique reactions. Deficiencies in radical SAM enzymes have been associated with a variety of diseases including congenital heart disease, amyotrophic lateral sclerosis, and increased viral susceptibility. Polyamine biosynthesis Another major role of SAM is in polyamine biosynthesis. Here, SAM is decarboxylated by adenosylmethionine decarboxylase (EC 4.1.1.50) to form S-adenosylmethioninamine. This compound then donates its n-propylamine group in the biosynthesis of polyamines such as spermidine and spermine from putrescine. SAM is required for cellular growth and repair. It is also involved in the biosynthesis of several hormones and neurotransmitters that affect mood, such as epinephrine. Methyltransferases are also responsible for the addition of methyl groups to the 2′ hydroxyls of the first and second nucleotides next to the 5′ cap in messenger RNA. Therapeutic uses Osteoarthritis pain As of 2012, the evidence was inconclusive as to whether SAM can mitigate the pain of osteoarthritis; clinical trials that had been conducted were too small to generalize from. Liver disease The SAM cycle has been closely tied to the liver since 1947, because people with alcoholic cirrhosis of the liver would accumulate large amounts of methionine in their blood. While multiple lines of evidence from laboratory tests on cells and animal models suggest that SAM might be useful to treat various liver diseases, as of 2012 SAM had not been studied in any large randomized placebo-controlled clinical trials that would allow an assessment of its efficacy and safety. Depression A 2016 Cochrane review concluded that for major depressive disorder, "Given the absence of high quality evidence and the inability to draw firm conclusions based on that evidence, the use of SAMe for the treatment of depression in adults should be investigated further." A 2020 systematic review found that it performed significantly better than placebo, and had similar outcomes to other commonly used antidepressants (imipramine and escitalopram). Anti-cancer treatment SAM has recently been shown to play a role in epigenetic regulation. DNA methylation is a key regulator in epigenetic modification during mammalian cell development and differentiation. In mouse models, excess levels of SAM have been implicated in erroneous methylation patterns associated with diabetic neuropathy. SAM serves as the methyl donor in cytosine methylation, which is a key epigenetic regulatory process. Because of this impact on epigenetic regulation, SAM has been tested as an anti-cancer treatment. In many cancers, proliferation is dependent on having low levels of DNA methylation. In vitro addition of SAM in such cancers has been shown to remethylate oncogene promoter sequences and decrease the production of proto-oncogenes. 
In cancers such as colorectal cancer, aberrant global hypermethylation can inhibit promoter regions of tumor-suppressing genes. In contrast to the above, however, other studies characterize colorectal cancers (CRCs) by global hypomethylation and promoter-specific DNA methylation. Pharmacokinetics Oral SAM achieves peak plasma concentrations three to five hours after ingestion of an enteric-coated tablet (400–1000 mg). The half-life is about 100 minutes. Availability in different countries In Canada, the UK, and the United States, SAM is sold as a dietary supplement under the marketing name SAM-e (also spelled SAME or SAMe). It was introduced in the US in 1999, after the Dietary Supplement Health and Education Act was passed in 1994. It was introduced as a prescription drug in Italy in 1979, in Spain in 1985, and in Germany in 1989. As of 2012, it was sold as a prescription drug in Russia, India, China, Italy, Germany, Vietnam, and Mexico. Adverse effects Gastrointestinal disorder, dyspepsia and anxiety can occur with SAM consumption. Long-term effects are unknown. SAM is a weak DNA-alkylating agent. Another reported side effect of SAM is insomnia; therefore, the supplement is often taken in the morning. Other reports of mild side effects include lack of appetite, constipation, nausea, dry mouth, sweating, and anxiety/nervousness, but in placebo-controlled studies, these side effects occur at about the same incidence in the placebo groups. Interactions and contraindications Taking SAM at the same time as some drugs may increase the risk of serotonin syndrome, a potentially dangerous condition caused by having too much serotonin. These drugs include, but are not limited to, dextromethorphan (Robitussin), meperidine (Demerol), pentazocine (Talwin), and tramadol (Ultram). SAM can also interact with many antidepressant medications — including tryptophan and the herbal medicine Hypericum perforatum (St. John's wort) — increasing the potential for serotonin syndrome or other side effects, and may reduce the effectiveness of levodopa for Parkinson's disease. SAM can increase the risk of manic episodes in people who have bipolar disorder. Toxicity A 2022 study concluded that SAMe could be toxic. Jean-Michel Fustin of Manchester University said that the researchers found that excess SAMe breaks down into adenine and methylthioadenosine in the body, both producing the paradoxical effect of inhibiting methylation. This was found in laboratory mice, causing harm to health, and in in vitro tests on human cells. See also DNA methyltransferase SAM-I riboswitch SAM-II riboswitch SAM-III riboswitch SAM-IV riboswitch SAM-V riboswitch SAM-VI riboswitch List of investigational antidepressants References External links Alpha-Amino acids Coenzymes Dietary supplements Biology of bipolar disorder Psychopharmacology Sulfonium compounds
S-Adenosyl methionine
[ "Chemistry" ]
2,290
[ "Psychopharmacology", "Pharmacology", "Coenzymes", "Organic compounds" ]
595,999
https://en.wikipedia.org/wiki/Cell%20envelope
The cell envelope comprises the inner cell membrane and the cell wall of a bacterium. In Gram-negative bacteria an outer membrane is also included. This envelope is not present in the Mollicutes, where the cell wall is absent. Bacterial cell envelopes fall into two major categories: a Gram-positive type which stains purple during Gram staining and a Gram-negative type which stains pink during Gram staining. Either type may have an enclosing capsule of polysaccharides for extra protection. As a group these are known as polysaccharide encapsulated bacteria. Function As in other organisms, the bacterial cell wall provides structural integrity to the cell. In prokaryotes, the primary function of the cell wall is to protect the cell from internal turgor pressure caused by the much higher concentrations of proteins and other molecules inside the cell compared to its external environment. The bacterial cell wall differs from that of all other organisms by the presence of peptidoglycan (poly-N-acetylglucosamine and N-acetylmuramic acid), which is located immediately outside of the cytoplasmic membrane. Peptidoglycan is responsible for the rigidity of the bacterial cell wall and for the determination of cell shape. It is relatively porous and is not considered to be a permeability barrier for small substrates. While all bacterial cell walls (with a few exceptions e.g. intracellular parasites such as Mycoplasma) contain peptidoglycan, not all cell walls have the same overall structures. This is notably expressed through the classification into Gram-positive and Gram-negative bacteria. Types The Gram-positive cell wall The Gram-positive cell wall is characterized by the presence of a very thick peptidoglycan layer, which is responsible for the retention of the crystal violet dyes during the Gram staining procedure. It is found exclusively in organisms belonging to the Actinomycetota (or high %G+C Gram-positive organisms) and the Bacillota (or low %G+C Gram-positive organisms). Bacteria within the Deinococcota group may also exhibit Gram-positive staining behavior but contain some cell wall structures typical of Gram-negative organisms. Embedded in the Gram-positive cell wall are polyalcohols called teichoic acids, some of which are lipid-linked to form lipoteichoic acids. Because lipoteichoic acids are covalently linked to lipids within the cytoplasmic membrane, they are responsible for linking the peptidoglycan to the cytoplasmic membrane. Teichoic acids give the Gram-positive cell wall an overall negative charge due to the presence of phosphodiester bonds between teichoic acid monomers. Outside the cell wall, many Gram-positive bacteria have an S-layer of "tiled" proteins. The S-layer assists attachment and biofilm formation. Outside the S-layer, there is often a capsule of polysaccharides. The capsule helps the bacterium evade host phagocytosis. In laboratory culture, the S-layer and capsule are often lost by reductive evolution (the loss of a trait in absence of positive selection). The Gram-negative cell wall The Gram-negative cell wall contains a thinner peptidoglycan layer adjacent to the cytoplasmic membrane than the Gram-positive wall, which is responsible for the cell wall's inability to retain the crystal violet stain upon decolourisation with ethanol during Gram staining. In addition to the peptidoglycan layer the Gram-negative cell wall also contains an additional outer membrane composed of phospholipids and lipopolysaccharides which face into the external environment. 
The highly charged nature of lipopolysaccharides confers an overall negative charge to the Gram-negative cell wall. The chemical structure of the outer membrane lipopolysaccharides is often unique to specific bacterial strains (i.e. sub-species) and is responsible for many of the antigenic properties of these strains. As a phospholipid bilayer, the lipid portion of the outer membrane is largely impermeable to all charged molecules. However, channels called porins are present in the outer membrane that allow for passive transport of many ions, sugars and amino acids across the outer membrane. These molecules are therefore present in the periplasm, the region between the plasma membrane and outer membrane. The periplasm contains the peptidoglycan layer and many proteins responsible for substrate binding or hydrolysis and reception of extracellular signals. The periplasm is thought to exist as a gel-like state rather than a liquid due to the high concentration of proteins and peptidoglycan found within it. Because of its location between the cytoplasmic and outer membranes, signals received and substrates bound are available to be transported across the cytoplasmic membrane using transport and signaling proteins embedded there. In nature, many uncultivated Gram-negative bacteria also have an S-layer and a capsule. These structures are often lost during laboratory cultivation. Mycobacteria The Mycobacteria (acid-fast bacteria) have a cell envelope which is not typical of Gram-positives or Gram-negatives. The mycobacterial cell envelope does not consist of the outer membrane characteristic of Gram-negatives, but has a significant peptidoglycan-arabinogalactan-mycolic acid wall structure which provides an external permeability barrier. Therefore, there is thought to be a distinct 'pseudoperiplasm' compartment between the cytoplasmic membrane and this outer barrier. The nature of this compartment is not well understood. Acid-fast bacteria, like Mycobacteria, are resistant to decolorization by acids during staining procedures. The high mycolic acid content of Mycobacteria is responsible for the staining pattern of poor absorption followed by high retention. The most common staining technique used to identify acid-fast bacteria is the Ziehl–Neelsen stain or acid-fast stain, in which the acid-fast bacilli are stained bright red and stand out clearly against a blue background. Bacteria lacking a peptidoglycan cell wall The obligate intracellular bacteria in the family Chlamydiaceae are unique in their morphology as they do not contain detectable amounts of peptidoglycan in the cell wall of their infectious forms. Instead, the extracellular forms of these Gram-negative bacteria maintain their structural integrity by relying on a layer of disulfide bond cross-linked cysteine-rich proteins, which is located between the cytoplasmic membrane and outer membrane in a manner analogous to the peptidoglycan layer in other Gram-negative bacteria. In the intracellular forms of the bacterium the disulfide cross-linkage is not found, which makes this form more mechanically fragile. The cell envelopes of the bacterial class of mollicutes do not have a cell wall. The main pathogenic bacteria in this class are mycoplasma and ureaplasma. L-form bacteria are strains of bacteria that lack cell walls, derived from bacteria that normally possess cell walls. See also Viral envelope References Cells Bacteria
Cell envelope
[ "Biology" ]
1,520
[ "Microorganisms", "Prokaryotes", "Bacteria" ]
596,116
https://en.wikipedia.org/wiki/Seasonal%20year
The seasonal year is the time between successive recurrences of a seasonal event such as the flooding of a river, the migration of a species of bird, or the flowering of a species of plant. The need for farmers to predict seasonal events led to the development of calendars. However, the variability from year to year of seasonal events (due to climate change or just random variation) makes the seasonal year very hard to measure. This means that calendars are based on astronomical years (which are regular enough to be easily measured) as surrogates for the seasonal year. For example, the ancient Egyptians used the heliacal rising of Sirius to predict the flooding of the Nile. A study by Thompson of temperature records over the past 300 years suggests that the seasonal year is governed by the anomalistic year rather than the tropical year. This suggestion is surprising because the seasons have been thought to be governed by the tilt of the Earth's axis (see Effect of sun angle on climate). The two types of years differ by a mere 4 days over 300 years, so Thompson's result may not be significant. However, the result is not unreasonable. The seasons can be considered to be an oscillating system driven by two inputs with slightly different frequencies: the total input of energy from the sun varies with the anomalistic year, while the distribution of this energy between the hemispheres varies with the tropical year. In other physical situations, oscillating systems driven by two similar frequencies can latch onto either one. One point that must be considered is that the oscillation arising from the tilt of the axis is much greater than that arising from the varying distance of the sun. See also Deseasonalization Water year References Units of time Seasonality Types of year
Seasonal year
[ "Physics", "Mathematics" ]
359
[ "Physical quantities", "Time", "Time stubs", "Units of time", "Quantity", "Spacetime", "Units of measurement" ]
596,179
https://en.wikipedia.org/wiki/Chowla%E2%80%93Selberg%20formula
In mathematics, the Chowla–Selberg formula is the evaluation of a certain product of values of the gamma function at rational values in terms of values of the Dedekind eta function at imaginary quadratic irrational numbers. The result was essentially found by Lerch and rediscovered by Chowla and Selberg. Statement In logarithmic form, the Chowla–Selberg formula states that in certain cases the sum $\sum_{0 < r < D} \chi(r)\log\Gamma(r/D)$ can be evaluated using the Kronecker limit formula. Here χ is the quadratic residue symbol modulo D, where −D is the discriminant of an imaginary quadratic field. The sum is taken over 0 < r < D, with the usual convention χ(r) = 0 if r and D have a common factor. The function η is the Dedekind eta function, h is the class number, and w is the number of roots of unity. Origin and applications The origin of such formulae is now seen to be in the theory of complex multiplication, and in particular in the theory of periods of an abelian variety of CM-type. This has led to much research and generalization. In particular there is an analog of the Chowla–Selberg formula for p-adic numbers, involving a p-adic gamma function, called the Gross–Koblitz formula. The Chowla–Selberg formula gives a formula for a finite product of values of the eta function. By combining this with the theory of complex multiplication, one can give a formula expressing the individual absolute values of the eta function at such points in terms of gamma values and some algebraic number α. Examples Using Euler's reflection formula for the gamma function, $\Gamma(z)\Gamma(1-z) = \pi/\sin(\pi z)$, gives, for example, $\Gamma(1/4)\,\Gamma(3/4) = \sqrt{2}\,\pi$. See also Multiplication theorem References Theorems in number theory Gamma and related functions
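As an illustration, a classical special case often quoted alongside the formula (stated here from general knowledge, not from this article):

```latex
% Dedekind's eta function evaluated at tau = i, expressed in gamma values,
% consistent with the Chowla--Selberg formula for discriminant -4
% (class number h = 1, w = 4 roots of unity):
\eta(i) \;=\; \frac{\Gamma\!\left(\tfrac{1}{4}\right)}{2\pi^{3/4}}
```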
Chowla–Selberg formula
[ "Mathematics" ]
343
[ "Mathematical theorems", "Theorems in number theory", "Mathematical problems", "Number theory" ]
596,267
https://en.wikipedia.org/wiki/Rotations%20and%20reflections%20in%20two%20dimensions
In Euclidean geometry, two-dimensional rotations and reflections are two kinds of Euclidean plane isometries which are related to one another. Process A rotation in the plane can be formed by composing a pair of reflections. First reflect a point $P$ to its image $P'$ on the other side of line $L_1$. Then reflect $P'$ to its image $P''$ on the other side of line $L_2$. If lines $L_1$ and $L_2$ make an angle $\theta$ with one another, then points $P$ and $P''$ will make an angle $2\theta$ around point $O$, the intersection of $L_1$ and $L_2$. I.e., angle $POP''$ will measure $2\theta$. A pair of rotations about the same point $O$ will be equivalent to another rotation about point $O$. On the other hand, the composition of a reflection and a rotation, or of a rotation and a reflection (composition is not commutative), will be equivalent to a reflection. Mathematical expression The statements above can be expressed more mathematically. Let a rotation about the origin by an angle $\theta$ be denoted as $\mathrm{Rot}(\theta)$. Let a reflection about a line through the origin which makes an angle $\theta$ with the $x$-axis be denoted as $\mathrm{Ref}(\theta)$. Let these rotations and reflections operate on all points on the plane, and let these points be represented by position vectors. Then a rotation can be represented as a matrix, $\mathrm{Rot}(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$, and likewise for a reflection, $\mathrm{Ref}(\theta) = \begin{pmatrix} \cos 2\theta & \sin 2\theta \\ \sin 2\theta & -\cos 2\theta \end{pmatrix}$. With these definitions of coordinate rotation and reflection, the following four identities hold: $\mathrm{Rot}(\theta)\,\mathrm{Rot}(\phi) = \mathrm{Rot}(\theta + \phi)$; $\mathrm{Ref}(\theta)\,\mathrm{Ref}(\phi) = \mathrm{Rot}(2(\theta - \phi))$; $\mathrm{Rot}(\theta)\,\mathrm{Ref}(\phi) = \mathrm{Ref}(\phi + \theta/2)$; $\mathrm{Ref}(\theta)\,\mathrm{Rot}(\phi) = \mathrm{Ref}(\theta - \phi/2)$. Proof These equations can be proved through straightforward matrix multiplication and application of trigonometric identities, specifically the sum and difference identities. The set of all reflections in lines through the origin and rotations about the origin, together with the operation of composition of reflections and rotations, forms a group. The group has an identity: $\mathrm{Rot}(0)$. Every rotation $\mathrm{Rot}(\theta)$ has an inverse $\mathrm{Rot}(-\theta)$. Every reflection $\mathrm{Ref}(\theta)$ is its own inverse. Composition has closure and is associative, since matrix multiplication is associative. Notice that both $\mathrm{Rot}(\theta)$ and $\mathrm{Ref}(\theta)$ have been represented with orthogonal matrices. These matrices all have a determinant whose absolute value is unity. Rotation matrices have a determinant of +1, and reflection matrices have a determinant of −1. The set of all orthogonal two-dimensional matrices together with matrix multiplication form the orthogonal group: $O(2)$. The following are examples of rotation and reflection matrices: $\mathrm{Rot}(90°) = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$, $\mathrm{Ref}(45°) = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. Rotation of axes See also 2D computer graphics#Rotation Cartan–Dieudonné theorem Clockwise Dihedral group Euclidean plane isometry Euclidean symmetries Instant centre of rotation Orthogonal group Rotation group SO(3) – 3 dimensions References Sources Euclidean symmetries Euclidean plane geometry Rotation
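A small NumPy check of the four composition identities above (the sample angles are arbitrary):

```python
import numpy as np

def rot(t):
    """Rotation about the origin by angle t (radians)."""
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

def ref(t):
    """Reflection in the line through the origin at angle t to the x-axis."""
    return np.array([[np.cos(2*t),  np.sin(2*t)],
                     [np.sin(2*t), -np.cos(2*t)]])

th, ph = 0.7, -1.3  # arbitrary sample angles

assert np.allclose(rot(th) @ rot(ph), rot(th + ph))
assert np.allclose(ref(th) @ ref(ph), rot(2*(th - ph)))
assert np.allclose(rot(th) @ ref(ph), ref(ph + th/2))
assert np.allclose(ref(th) @ rot(ph), ref(th - ph/2))

# Determinants: +1 for rotations, -1 for reflections.
assert np.isclose(np.linalg.det(rot(th)), 1.0)
assert np.isclose(np.linalg.det(ref(th)), -1.0)
print("all identities verified")
```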
Rotations and reflections in two dimensions
[ "Physics", "Mathematics" ]
490
[ "Physical phenomena", "Functions and mappings", "Euclidean symmetries", "Euclidean plane geometry", "Mathematical objects", "Classical mechanics", "Rotation", "Motion (physics)", "Mathematical relations", "Planes (geometry)", "Symmetry" ]
596,277
https://en.wikipedia.org/wiki/Synodic%20day
A synodic day (or synodic rotation period or solar day) is the period for a celestial object to rotate once in relation to the star it is orbiting, and is the basis of solar time. The synodic day is distinguished from the sidereal day, which is one complete rotation in relation to distant stars and is the basis of sidereal time. In the case of a tidally locked planet, the same side always faces its parent star, and its synodic day is infinite. Its sidereal day, however, is equal to its orbital period. Earth Earth's synodic day is the time it takes for the Sun to pass over the same meridian (a line of longitude) on consecutive days, whereas a sidereal day is the time it takes for a given distant star to pass over a meridian on consecutive days. For example, in the Northern Hemisphere, a synodic day could be measured as the time taken for the Sun to move from exactly true south (i.e. its highest point in the sky) on one day to exactly south again on the next day (or exactly true north in the Southern Hemisphere). For Earth, the synodic day is not constant, and changes over the course of the year due to the eccentricity of Earth's orbit around the Sun and the axial tilt of the Earth. The longest and shortest synodic days' durations differ by about 51 seconds. The mean length, however, is 24 hours (with fluctuations on the order of milliseconds), and is the basis of solar time. The difference between the mean and apparent solar time is the equation of time, which can also be seen in Earth's analemma. Because of the variation in the length of the synodic day, the days with the longest and shortest period of daylight do not coincide with the solstices near the equator. As viewed from Earth during the year, the Sun appears to slowly drift along an imaginary path coplanar with Earth's orbit, known as the ecliptic, on a spherical background of seemingly fixed stars. Each synodic day, this gradual motion is a little less than 1° eastward (360° per 365.25 days), in a manner known as prograde motion. Certain spacecraft orbits, Sun-synchronous orbits, have orbital periods that are a fraction of a synodic day. Combined with a nodal precession, this allows them to always pass over a location on Earth's surface at the same mean solar time. The Moon Due to tidal locking with Earth, the Moon's synodic day (the lunar day or synodic rotation period) is the same as its synodic period with Earth and the Sun (the period of the lunar phases, the synodic lunar month, which is the month of the lunar calendar). Venus Due to the slow retrograde rotational speed of Venus, its synodic rotation period of 117 Earth days is about half the length of both its sidereal rotational period (sidereal day) and its orbital period. Mercury Due to Mercury's slow rotational speed and fast orbit around the Sun, its synodic rotation period of 176 Earth days is three times longer than its sidereal rotational period (sidereal day) and twice as long as its orbital period. See also Orbital period Rotation period Sidereal time Solar rotation Solar time Sun transit time Synodic month Day length fluctuations References Units of time Astronomy
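The relations above can be made concrete with a few lines of arithmetic. For a prograde rotator, 1/T_synodic = 1/T_sidereal − 1/T_orbital; retrograde rotation (as on Venus) flips the sign. A minimal sketch with approximate textbook values:

```python
def synodic_day(sidereal_day, orbital_period, retrograde=False):
    """Synodic (solar) day from sidereal day and orbital period (same units)."""
    sign = 1 if retrograde else -1
    return 1 / (1 / sidereal_day + sign / orbital_period)

# Approximate values in Earth days.
print(synodic_day(0.99727, 365.256))                  # Earth   -> ~1.0
print(synodic_day(58.646, 87.969))                    # Mercury -> ~176
print(synodic_day(243.02, 224.70, retrograde=True))   # Venus   -> ~117
```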
Synodic day
[ "Physics", "Astronomy", "Mathematics" ]
695
[ "Physical quantities", "Time", "Units of time", "Quantity", "nan", "Spacetime", "Units of measurement" ]
596,282
https://en.wikipedia.org/wiki/Class%20number%20problem
In mathematics, the Gauss class number problem (for imaginary quadratic fields), as usually understood, is to provide for each n ≥ 1 a complete list of imaginary quadratic fields $\mathbb{Q}(\sqrt{d})$ (for negative integers d) having class number n. It is named after Carl Friedrich Gauss. It can also be stated in terms of discriminants. There are related questions for real quadratic fields and for the behavior as $d \to -\infty$. The difficulty is in effective computation of bounds: for a given discriminant, it is easy to compute the class number, and there are several ineffective lower bounds on class number (meaning that they involve a constant that is not computed), but effective bounds (and explicit proofs of completeness of lists) are harder. Gauss's original conjectures The problems are posed in Gauss's Disquisitiones Arithmeticae of 1801 (Section V, Articles 303 and 304). Gauss discusses imaginary quadratic fields in Article 303, stating the first two conjectures, and discusses real quadratic fields in Article 304, stating the third conjecture. Gauss conjecture (class number tends to infinity): $h(d) \to \infty$ as $d \to -\infty$. Gauss class number problem (low class number lists): For given low class number (such as 1, 2, and 3), Gauss gives lists of imaginary quadratic fields with the given class number and believes them to be complete. Infinitely many real quadratic fields with class number one: Gauss conjectures that there are infinitely many real quadratic fields with class number one. The original Gauss class number problem for imaginary quadratic fields is significantly different and easier than the modern statement: he restricted to even discriminants, and allowed non-fundamental discriminants. Status Gauss conjecture: solved, Heilbronn, 1934. Low class number lists: class number 1: solved, Baker (1966), Stark (1967), Heegner (1952). Class number 2: solved, Baker (1971), Stark (1971). Class number 3: solved, Oesterlé (1985). Class numbers h up to 100: solved, Watkins 2004. Infinitely many real quadratic fields with class number one: Open. Lists of discriminants of class number 1 For imaginary quadratic number fields, the (fundamental) discriminants of class number 1 are: $-3, -4, -7, -8, -11, -19, -43, -67, -163$. The non-fundamental discriminants of class number 1 are: $-12, -16, -27, -28$. Thus, the even discriminants of class number 1, fundamental and non-fundamental (Gauss's original question) are: $-4, -8, -12, -16, -28$. Modern developments In 1934, Hans Heilbronn proved the Gauss conjecture. Equivalently, for any given class number, there are only finitely many imaginary quadratic number fields with that class number. Also in 1934, Heilbronn and Edward Linfoot showed that there were at most 10 imaginary quadratic number fields with class number 1 (the 9 known ones, and at most one further). The result was ineffective (see effective results in number theory): it did not give bounds on the size of the remaining field. In later developments, the case n = 1 was first discussed by Kurt Heegner, using modular forms and modular equations to show that no further such field could exist. This work was not initially accepted; only with later work of Harold Stark and Bryan Birch (e.g. on the Stark–Heegner theorem and Heegner number) was the position clarified and Heegner's work understood. Practically simultaneously, Alan Baker proved what we now know as Baker's theorem on linear forms in logarithms of algebraic numbers, which resolved the problem by a completely different method. The case n = 2 was tackled shortly afterwards, at least in principle, as an application of Baker's work. 
The complete list of imaginary quadratic fields with class number 1 is $\mathbb{Q}(\sqrt{d})$ where d is one of $-1, -2, -3, -7, -11, -19, -43, -67, -163$. The general case awaited the discovery of Dorian Goldfeld in 1976 that the class number problem could be connected to the L-functions of elliptic curves. This effectively reduced the question of effective determination to one about establishing the existence of a multiple zero of such an L-function. With the proof of the Gross–Zagier theorem in 1986, a complete list of imaginary quadratic fields with a given class number could be specified by a finite calculation. All cases up to n = 100 were computed by Watkins in 2004. The class number of $\mathbb{Q}(\sqrt{-d})$ for d = 1, 2, 3, ... is 1, 1, 1, 1, 2, 2, 1, 1, 1, 2, ... Real quadratic fields The contrasting case of real quadratic fields is very different, and much less is known. That is because what enters the analytic formula for the class number is not h, the class number, on its own — but h log ε, where ε is a fundamental unit. This extra factor is hard to control. It may well be the case that class number 1 for real quadratic fields occurs infinitely often. The Cohen–Lenstra heuristics are a set of more precise conjectures about the structure of class groups of quadratic fields. For real fields they predict that about 75.45% of the fields obtained by adjoining the square root of a prime will have class number 1, a result that agrees with computations. See also List of number fields with class number one Notes References External links Algebraic number theory Mathematical problems Unsolved problems in number theory
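The article's remark that the class number is easy to compute for a given discriminant can be illustrated by counting reduced binary quadratic forms (a self-contained sketch; the function name is mine, and D must be a negative discriminant):

```python
from math import gcd

def class_number(D: int) -> int:
    """Count reduced primitive forms ax^2 + bxy + cy^2 with b^2 - 4ac = D < 0.

    A form is reduced when |b| <= a <= c, with b >= 0 if |b| == a or a == c.
    For a fundamental discriminant this count is the class number h(D).
    """
    assert D < 0 and D % 4 in (0, 1)
    h = 0
    a = 1
    while 3 * a * a <= -D:               # reduced forms have a <= sqrt(-D/3)
        for b in range(-a + 1, a + 1):   # |b| <= a; b = -a is never reduced
            if (b * b - D) % (4 * a) == 0:
                c = (b * b - D) // (4 * a)
                if c >= a and gcd(gcd(a, b), c) == 1:
                    if b >= 0 or a != c:  # require b >= 0 when a == c
                        h += 1
        a += 1
    return h

for D in (-3, -4, -7, -163, -15, -23):
    print(D, class_number(D))  # expect 1, 1, 1, 1, 2, 3
```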
Class number problem
[ "Mathematics" ]
1,079
[ "Unsolved problems in mathematics", "Unsolved problems in number theory", "Algebraic number theory", "Mathematical problems", "Number theory" ]
1,556,660
https://en.wikipedia.org/wiki/Gray%20death
Gray death is a slang term which refers to potent mixtures of synthetic opioids, for example benzimidazole opioids or fentanyl analogues, which were often sold on the street misleadingly as "heroin". However, other substances such as cocaine have also been laced with opioids that resulted in illness and death. History and etymology The substance first appeared in America and was thought to be a unique chemical compound before being identified as a mixture of drugs. The first batch of gray death had a characteristic gray color. Composition Samples have been found to contain heroin, fentanyl, carfentanil, and the designer drug U-47700. A mixture of drugs misleadingly called 2C-B had been found to contain fentanyl in Argentina. Dangers and treatment As with other illicit narcotics, gray death carries a higher risk of serious adverse effects than prescribed opioids due to the unknown and inconsistent composition of the product. Even experienced opioid users risk serious injury or death when taking this drug mixture. In February 2022, 24 people in Argentina died after using cocaine laced with carfentanil. Reversing a gray death overdose may require multiple doses of naloxone. By contrast, an overdose from morphine or from high-purity heroin would ordinarily need only one dose. This difficulty is regularly encountered when treating overdoses of high-affinity opioids in the fentanyl chemical family or with buprenorphine. The greater affinity of these substances for the μ-opioid receptor impedes the activity of naloxone, which is an antagonist at the receptor. It may be necessary to increase the dosage of naloxone or its frequency of administration in order to counteract respiratory depression. See also List of opioids List of designer drugs Opioid epidemic in the United States Mickey Finn (drugs) Whoonga References External links Designer drugs Drug culture English-language slang History of South America Adulteration
Gray death
[ "Chemistry" ]
406
[ "Adulteration", "Drug safety" ]
1,556,672
https://en.wikipedia.org/wiki/Phosphatidic%20acid
Phosphatidic acids are anionic phospholipids important to cell signaling and direct activation of lipid-gated ion channels. Hydrolysis of phosphatidic acid gives rise to one molecule each of glycerol and phosphoric acid and two molecules of fatty acids. They constitute about 0.25% of phospholipids in the bilayer. Structure Phosphatidic acid consists of a glycerol backbone, with, in general, a saturated fatty acid bonded to carbon-1, an unsaturated fatty acid bonded to carbon-2, and a phosphate group bonded to carbon-3. Formation and degradation Besides de novo synthesis, PA can be formed in three ways: By phospholipase D (PLD), via the hydrolysis of the P-O bond of phosphatidylcholine (PC) to produce PA and choline. By the phosphorylation of diacylglycerol (DAG) by DAG kinase (DAGK). By the acylation of lysophosphatidic acid by lysoPA-acyltransferase (LPAAT); this is the most common pathway. De novo synthesis of PA can proceed via the glycerol 3-phosphate pathway. In addition, PA can be converted into DAG by lipid phosphate phosphohydrolases (LPPs) or into lyso-PA by phospholipase A (PLA). Roles in the cell The role of PA in the cell can be divided into several categories: PA is the precursor for the biosynthesis of many other lipids. The physical properties of PA influence membrane curvature. PA acts as a signaling lipid, recruiting cytosolic proteins to appropriate membranes (e.g., sphingosine kinase 1). PA plays a very important role in phototransduction in Drosophila. PA is a lipid ligand that gates ion channels. See also lipid-gated ion channels. The first three roles are not mutually exclusive. For example, PA may be involved in vesicle formation by promoting membrane curvature and by recruiting the proteins to carry out the much more energetically unfavourable task of neck formation and pinching. Roles in biosynthesis PA is a vital cell lipid that acts as a biosynthetic precursor for the formation (directly or indirectly) of all acylglycerol lipids in the cell. In mammalian and yeast cells, two different pathways are known for the de novo synthesis of PA, the glycerol 3-phosphate pathway or the dihydroxyacetone phosphate pathway. In bacteria, only the former pathway is present, and mutations that block this pathway are lethal, demonstrating the importance of PA. In mammalian and yeast cells, where the enzymes in these pathways are redundant, mutation of any one enzyme is not lethal. However, it is worth noting that in vitro, the various acyltransferases exhibit different substrate specificities with respect to the acyl-CoAs that are incorporated into PA. Different acyltransferases also have different intracellular distributions, such as the endoplasmic reticulum (ER), the mitochondria or peroxisomes, and local concentrations of activated fatty acids. This suggests that the various acyltransferases present in mammalian and yeast cells may be responsible for producing different pools of PA. The conversion of PA into diacylglycerol (DAG) by LPPs is the commitment step for the production of phosphatidylcholine (PC), phosphatidylethanolamine (PE) and phosphatidylserine (PS). In addition, DAG is also converted into CDP-DAG, which is a precursor for phosphatidylglycerol (PG), phosphatidylinositol (PI) and phosphoinositides (PIP, PIP2, PIP3). PA concentrations are maintained at extremely low levels in the cell by the activity of potent LPPs. These convert PA into DAG very rapidly and, because DAG is the precursor for so many other lipids, it too is soon metabolised into other membrane lipids. 
This means that any upregulation in PA production can be matched, over time, by a corresponding upregulation in LPPs and in DAG-metabolising enzymes. PA is therefore essential for lipid synthesis and cell survival, yet under normal conditions it is maintained at very low levels in the cell.

Biophysical properties
PA is a unique phospholipid in that it has a small, highly charged head group that sits very close to the glycerol backbone. PA is known to play roles in both vesicle fission and fusion, and these roles may relate to its biophysical properties. At sites of membrane budding or fusion, the membrane becomes, or already is, highly curved. A major event in the budding of vesicles, such as transport carriers from the Golgi, is the creation and subsequent narrowing of the membrane neck. Studies have suggested that this process may be lipid-driven and have postulated a central role for DAG, owing to its likewise unique molecular shape: the presence of two acyl chains but no headgroup results in a large negative curvature in membranes. The LPAAT BARS-50 has also been implicated in budding from the Golgi, suggesting that the conversion of lysoPA into PA might affect membrane curvature. LPAAT activity doubles the number of acyl chains, greatly increasing the cross-sectional area of the lipid that lies ‘within’ the membrane while the surface headgroup remains unchanged; this can result in a more negative membrane curvature. Researchers from Utrecht University have examined the effect of lysoPA versus PA on membrane curvature by measuring their effect on the transition temperature of PE from lipid bilayers to nonlamellar phases using 31P-NMR. The curvature induced by these lipids was shown to depend not only on the structure of lysoPA versus PA but also on dynamic properties, such as the hydration of head groups and inter- and intramolecular interactions. For instance, Ca2+ may interact with two PAs to form a neutral but highly curved complex. The neutralisation of the otherwise repulsive charges of the headgroups and the absence of any steric hindrance enable strong intermolecular interactions between the acyl chains, resulting in PA-rich microdomains. Thus, in vitro, physiological changes in pH, temperature, and cation concentrations have strong effects on the membrane curvature induced by PA and lysoPA. The interconversion of lysoPA, PA, and DAG, together with changes in pH and cation concentration, can cause membrane bending and destabilisation, playing a direct role in membrane fission simply by virtue of their biophysical properties. However, though PA and lysoPA have been shown to affect membrane curvature in vitro, their role in vivo is unclear. The roles of lysoPA, PA, and DAG in promoting membrane curvature do not preclude a role in recruiting proteins to the membrane. For instance, the Ca2+ requirement for the fusion of complex liposomes is not greatly affected by the addition of annexin I, though it is reduced by PLD. With annexin I and PLD together, however, the extent of fusion is greatly enhanced, and the Ca2+ requirement is reduced almost 1000-fold, to near-physiological levels. Thus the metabolic, biophysical, recruitment, and signaling roles of PA may be interrelated.

Role in signaling
PA is kept at low levels in the bulk of the membrane so that transient, localized bursts of high concentration can act as signals. For example, TREK-1 channels are activated by local association with PLD and the resulting production of PA. The dissociation constant of PA for TREK-1 is approximately 10 micromolar (a simple occupancy calculation is sketched below).
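A minimal sketch, assuming simple 1:1 binding: the Kd is the approximate value quoted above, while the bulk and burst concentrations are hypothetical illustrations chosen only to show how weak binding can still behave like a switch:

```python
# Fractional occupancy from simple 1:1 binding: theta = [PA] / (Kd + [PA]).
# The Kd is the ~10 uM figure quoted above; the concentrations are illustrative.
def occupancy(pa_conc_uM: float, kd_uM: float = 10.0) -> float:
    """Fraction of TREK-1-like binding sites occupied at a given local [PA]."""
    return pa_conc_uM / (kd_uM + pa_conc_uM)

for label, conc in [("bulk membrane (low PA)", 0.1),
                    ("local PLD burst", 100.0)]:
    print(f"{label}: [PA] = {conc} uM -> occupancy = {occupancy(conc):.2f}")
# bulk: ~0.01 (channel effectively off); burst: ~0.91 (channel largely activated)
```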
The relatively weak binding, combined with the low concentration of PA in the bulk membrane, allows the channel to turn off. The requirement for a high local concentration to achieve activation suggests at least some restriction of local lipid diffusion. The combination of a low bulk PA concentration with high local bursts is the opposite of PIP2 signaling: PIP2 is kept relatively high in the membrane and is transiently hydrolysed near a protein in order to reduce PIP2 signaling locally. PA signaling mirrors PIP2 signaling in that the bulk concentration of the signaling lipid need not change for it to exert a potent local effect on a target protein.

As described above, PLD hydrolyses PC to form PA and choline. Because choline is very abundant in the cell, PLD activity does not significantly affect choline levels, and choline is unlikely to play any role in signaling. The role of PLD activation in numerous signaling contexts, combined with the lack of a role for choline, suggests that PA is important in signaling. However, PA is rapidly converted to DAG, and DAG is itself a signaling molecule. This raises the question of whether PA has any direct role in signaling or whether it simply acts as a precursor for DAG production. If PA acts only as a DAG precursor, one may ask why cells should produce DAG using two enzymes when they contain PLC, which could produce DAG in a single step. PA produced by PLD or by DAGK can be distinguished by the addition of [γ-32P]ATP: this shows whether the phosphate group is newly derived from the kinase activity or whether it originates from the PC.

Although PA and DAG are interconvertible, they do not act in the same pathways. Stimuli that activate PLD do not activate enzymes downstream of DAG, and vice versa. For example, it was shown that addition of PLD to membranes results in the production of [32P]-labeled PA and [32P]-labeled phosphoinositides. The addition of DAGK inhibitors eliminates the production of [32P]-labeled PA but not the PLD-stimulated production of phosphoinositides. It is possible that, though PA and DAG are interconvertible, separate pools of signaling and non-signaling lipids are maintained. Studies have suggested that DAG signaling is mediated by polyunsaturated DAG, whereas PLD-derived PA is monounsaturated or saturated. Thus functional saturated/monounsaturated PA can be degraded by hydrolysing it to non-functional saturated/monounsaturated DAG, whereas functional polyunsaturated DAG can be degraded by converting it into non-functional polyunsaturated PA. This model suggests that PA and DAG effectors should be able to distinguish lipids with the same headgroups but with differing acyl chains. Although some lipid-binding proteins are able to insert themselves into membranes and could hypothetically recognize the type of acyl chain or the resulting properties of the membrane, many lipid-binding proteins are cytosolic and localize to the membrane by binding only the headgroups of lipids. Perhaps the different acyl chains affect the angle of the head-group in the membrane; if so, a PA-binding domain must be able not only to bind PA specifically but also to identify those head-groups that are at the correct angle. Whatever the mechanism, such specificity is possible. It is seen in the pig testes DAGK, which is specific for polyunsaturated DAG, and in two rat hepatocyte LPPs that dephosphorylate different PA species with different activities.
Moreover, the stimulation of SK1 activity by PS in vitro was shown to vary greatly depending on whether dioleoyl (C18:1), distearoyl (C18:0), or 1-stearoyl-2-oleoyl species of PS were used. Thus it seems that, though PA and DAG are interconvertible, different species of these lipids can have different biological activities, and this may enable the two lipids to maintain separate signaling pathways.

Measurement of PA production
As PA is rapidly converted to DAG, it is very short-lived in the cell. This makes it difficult to measure PA production and therefore to study the role of PA in the cell. However, PLD activity can be measured by the addition of primary alcohols to the cell. PLD then carries out a transphosphatidylation reaction instead of hydrolysis, producing phosphatidyl alcohols in place of PA. The phosphatidyl alcohols are metabolic dead-ends and can be readily extracted and measured. Thus PLD activity and PA production (if not PA itself) can be measured, and, by blocking the formation of PA, the involvement of PA in cellular processes can be inferred.

Protein interactors: SK1, PDE4A1, Raf1, mTOR, PP1, SHP1, Spo20p, p47phox, PKCε, PLCβ, PIP5K, Opi1, TREK-1, Kv, Kir2.2
Phosphatidic acid
[ "Chemistry", "Biology" ]
2,781
[ "Natural products", "Signal transduction", "Organic compounds", "Biomolecules", "Structural biology", "Biochemistry", "Neurochemistry", "Molecular biology" ]
1,556,918
https://en.wikipedia.org/wiki/Phase%20response%20curve
A phase response curve (PRC) illustrates the transient change (phase response) in the cycle period of an oscillation induced by a perturbation, as a function of the phase at which the perturbation is received. PRCs are used in various fields; examples of biological oscillations are the heartbeat, circadian rhythms, and the regular, repetitive firing observed in some neurons in the absence of noise.

In circadian rhythms
In humans and animals, there is a regulatory system that governs the phase relationship of an organism's internal circadian clock to a regular periodicity in the external environment (usually governed by the solar day). In most organisms, a stable phase relationship is desired, though in some cases the desired phase will vary by season, especially among mammals with seasonal mating habits. In circadian rhythm research, a PRC illustrates the relationship between a chronobiotic's time of administration (relative to the internal circadian clock) and the magnitude of the treatment's effect on circadian phase. Specifically, a PRC is a graph showing, by convention, the time of the subject's endogenous day along the x-axis and the amount of the phase shift (in hours) along the y-axis; each curve has one peak and one trough in each 24-hour cycle. The treatment is usually narrowly specified, as a set intensity, colour, and duration of light exposure to the retina and skin, or as a set dose and formulation of melatonin. These curves are often consulted in the therapeutic setting.

Normally, the body's various physiological rhythms will be synchronized within an individual organism (human or animal), usually with respect to a master biological clock. Of particular importance is the sleep–wake cycle. Various sleep disorders and external stresses (such as jet lag) can interfere with this. Humans with non-24-hour sleep–wake disorder often experience an inability to maintain a consistent internal clock. Extreme chronotypes usually maintain a consistent clock but find that their natural clock does not align with the expectations of their social environment. PRCs provide a starting point for therapeutic intervention. The two common treatments used to shift the timing of sleep are light therapy, directed at the eyes, and administration of the hormone melatonin, usually taken orally. Either or both can be used daily. The phase adjustment is generally cumulative with consecutive daily administrations and, at least partially, additive with concurrent administrations of distinct treatments. If the underlying disturbance is stable in nature, ongoing daily intervention is usually required. For jet lag, the intervention serves mainly to accelerate natural alignment and ceases once the desired alignment is achieved.

Note that phase response curves from the experimental setting are usually aggregates over the test population, and there can be mild or significant variation within that population; individuals with sleep disorders often respond atypically. Moreover, the formulation of the chronobiotic might be specific to the experimental setting and not generally available in clinical practice (e.g., for melatonin, one sustained-release formulation might differ in its release rate from another), and, while the magnitude of the effect is dose-dependent, not all PRC graphs cover a range of doses. The discussions below are restricted to the PRCs for light and melatonin in humans.
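Conceptually, a PRC is simply a function mapping circadian phase to a phase shift. The toy model below is an illustrative sketch only: the sinusoidal shape, the 2-hour amplitude, and the time axis are all assumptions, since real human PRCs are empirical, asymmetric, and dose-dependent. It reproduces only the qualitative convention described above, one delay region and one advance region per 24-hour cycle:

```python
import math

def toy_light_prc(hours_after_temp_min: float, amplitude_h: float = 2.0) -> float:
    """Illustrative light PRC: predicted phase shift in hours as a function of
    when light arrives relative to the core body temperature minimum.
    Positive values are phase advances, negative values are phase delays."""
    return amplitude_h * math.sin(2 * math.pi * hours_after_temp_min / 24.0)

for t in (-6, -3, 0, 3, 6):  # hours relative to the temperature minimum
    print(f"light at Tmin {t:+d} h -> predicted shift {toy_light_prc(t):+.2f} h")
```

Unlike this smooth sine, the measured light PRC described below changes abruptly from delay to advance near the core body temperature minimum.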
Light
Starting about two hours before an individual's regular bedtime, exposure of the eyes to light will delay the circadian phase, causing later wake-up time and later sleep onset. The delaying effect gets stronger as evening progresses; it is also dependent on the wavelength and illuminance ("brightness") of the light, and the effect is small if indoor lighting is dim. About five hours after usual bedtime, coinciding with the body temperature trough (the lowest point of core body temperature during sleep), the PRC peaks and the effect changes abruptly from phase delay to phase advance. Immediately after this peak, light exposure has its greatest phase-advancing effect, causing earlier wake-up and sleep onset. Again, illuminance greatly affects results; indoor light may be less than 500 lux, while light therapy uses up to 10,000 lux. The effect diminishes until about two hours after spontaneous wake-up time, when it reaches approximately zero. During the period between two hours after usual wake-up time and two hours before usual bedtime, light exposure has little or no effect on circadian phase (slight effects generally cancelling each other out). In published figures of the light PRC, these regions are typically labelled as a delay region (evening light shifts sleepiness later) and an advance region (morning light shifts sleepiness earlier).

Light therapy, typically with a light box producing 10,000 lux at a prescribed distance, can be used in the evening to delay or in the morning to advance an individual's sleep timing. Because losing sleep to obtain bright light exposure is considered undesirable by most people, and because it is very difficult to estimate exactly when the greatest effect (the PRC peak) will occur in an individual, the treatment is usually applied daily just prior to bedtime (to achieve phase delay) or just after spontaneous awakening (to achieve phase advance). In addition to its use in the adjustment of circadian rhythms, light therapy is used as a treatment for several affective disorders, including seasonal affective disorder (SAD). In 2002, Brown University researchers led by David Berson announced the discovery of special cells in the human eye, ipRGCs (intrinsically photosensitive retinal ganglion cells), which, many researchers now believe, control the light-entrainment effect of the phase response curve. In the human eye, the ipRGCs have the greatest response to light in the 460–480 nm (blue) range. In one experiment, 400 lux of blue light produced the same effects as 10,000 lux of white light from a fluorescent source. A theory of spectral opponency, in which the addition of other spectral colors renders blue light less effective for circadian phototransduction, was supported by research reported in 2005.

Melatonin
The phase response curve for melatonin is roughly twelve hours out of phase with the phase response curve for light. At spontaneous wake-up time, exogenous (externally administered) melatonin has a slight phase-delaying effect. The amount of phase delay increases until about eight hours after wake-up time, when the effect swings abruptly from strong phase delay to strong phase advance. The phase-advance effect diminishes as the day goes on until it reaches zero at about bedtime. From usual bedtime until wake-up time, exogenous melatonin has no effect on circadian phase. The human body produces its own (endogenous) melatonin starting about two hours before bedtime, provided the lighting is dim.
This is known as dim-light melatonin onset (DLMO). This stimulates the phase-advance portion of the PRC and helps keep the body on a regular sleep–wake schedule; it also helps prepare the body for sleep. Administration of melatonin at any time may have a mild hypnotic (sleep-inducing) effect; the expected effect on sleep-phase timing, if any, is predicted by the PRC.

Additive effects
In a 2006 study, Victoria L. Revell et al. showed that a combination of morning bright light and afternoon melatonin, both timed to phase advance according to the respective PRCs, produces a larger phase-advance shift than bright light alone, for a total of up to 2 hours. All times are approximate and vary from one individual to another; in particular, there is no convenient way to accurately determine the times of the peaks and zero-crossings of these curves in an individual. If the changeover time is not accurately known, administering light or melatonin close to the time at which the effect is expected to change direction abruptly may produce an effect opposite to that desired.

Exercise
In a 2019 study, Shawn D. Youngstedt et al. showed that in humans "Exercise elicits circadian phase‐shifting effects, but additional information is needed. [...] Significant phase–response curves were established for aMT6 [a melatonin derivative] onset and acrophase with large phase delays from 7:00 pm to 10:00 pm and large phase advances at both 7:00 am and from 1:00 pm to 4:00 pm".

Origin
The first published usage of the term "phase response curve" was in 1960, by Patricia DeCoursey. The "daily" activity rhythms of her flying squirrels, kept in constant darkness, responded to pulses of light exposure. The response varied according to the time of day – that is, the animals' subjective "day" – at which light was administered. When DeCoursey plotted all her data relating the quantity and direction (advance or delay) of phase shift on a single curve, she created the PRC. It has since been a standard tool in the study of biological rhythms.

In neurons
Phase response curve analysis can be used to understand the intrinsic properties and oscillatory behavior of regular-spiking neurons. Neuronal PRCs can be classified as purely positive (PRC type I) or as having negative parts (PRC type II). Importantly, the PRC type exhibited by a neuron is indicative of its input–output function (excitability) as well as its synchronization behavior: networks of PRC type II neurons can synchronize their activity via mutual excitatory connections, but those of PRC type I cannot. Experimental estimation of the PRC in living, regular-spiking neurons involves measuring the changes in inter-spike interval in response to a small perturbation, such as a transient pulse of current. Notably, the PRC of a neuron is not fixed but may change when the firing frequency or the neuromodulatory state of the neuron changes.

See also: Chronobiology, Circadian rhythm sleep disorder, Delayed sleep phase disorder
Phase response curve
[ "Biology" ]
2,078
[ "Behavior", "Sleep physiology", "Circadian rhythm", "Chronobiology", "Sleep" ]
1,556,970
https://en.wikipedia.org/wiki/Dictator%20game
The dictator game is a popular experimental instrument in social psychology and economics, a derivative of the ultimatum game. The term "game" is a misnomer because it captures a decision by a single player: to send money to another or not. The dictator thus holds all of the power and presents a take-it-or-leave-it division, yet the game yields mixed results that vary with behavioral attributes. The results – in which most "dictators" choose to send money – evidence the role of fairness and norms in economic behavior and undermine the assumption of narrow self-interest even when players are given the opportunity to maximise their own profits.

Description
The dictator game is a derivative of the ultimatum game, in which one player (the proposer) makes a one-time offer to the other (the responder). The responder can choose to either accept or reject the proposer's bid, but rejection results in both players receiving a payoff of 0. In the dictator game, the first player, "the dictator", determines how to split an endowment (such as a cash prize) between themselves and the second player (the recipient). The dictator's action space is complete: they may allocate anything from nothing to the entire endowment. The recipient has no influence over the outcome of the game and so plays a purely passive role. While the ultimatum game is informative, it can be considered an oversimplified model of most real-world negotiation situations, which tend to involve offers and counteroffers, whereas the ultimatum game simply has player one propose a division that player two must accept or reject. Within this limited scope, the prediction is that the second player will accept any positive offer, which is not necessarily what is observed in real-world settings.

Application
The initial game was developed by Daniel Kahneman in the 1980s and involved three parties, with one active and two passive participants. However, it was only in 1994 that a paper by Forsythe et al. simplified it to the contemporary form of the game, with one decision-maker (the dictator) and one passive participant (the recipient). One would expect players to behave "rationally" and maximize their own payoffs, per the homo economicus principle; however, human populations have proved more "benevolent than homo economicus", and only rarely does a majority give nothing to the recipient.

In the original dictator game, the dictator and the recipient were randomly selected and completely unknown to each other. However, results turn out to differ depending on the social distance between the two parties. The level of "social distance" between a dictator and a recipient changes the proportion of the endowment that the dictator is willing to give. A dictator who is anonymous to the recipient, implying a high level of social distance, is likely to give less of the endowment, whereas players with a low level of social distance, whether they are very familiar with each other or only shallowly acquainted, are likely to give a higher proportion of the endowment to the recipient. When players belong to the same organization, they are likely to have a low level of social distance, and within organizations, altruism and prosocial behavior are heavily relied upon for optimal organizational output.
Prosocial behavior encourages the "intention of promoting the welfare of the individual, group, or organization toward which it is directed".

Experiments
In 1988, a group of researchers at the University of Iowa conducted a controlled experiment to evaluate the homo economicus model of behavior with groups of voluntarily recruited economics, accounting, and business students. The experimental results contradicted the homo economicus model, suggesting that players in the dictator role take fairness and potential adverse consequences into account when deciding how much utility to give the recipient. A later study in neuroscience further challenged the homo economicus model, suggesting that various cognitive differences among humans affect decision-making processes, and thus ideas of fairness. Experimental results have indicated that adults often allocate money to the recipients, reducing the amount of money the dictator receives. These results appear robust: for example, Henrich et al. discovered in a wide cross-cultural study that dictators do allocate a non-zero share of the endowment to the recipient. In modified versions of the dictator game, children also tend to allocate some of a resource to a recipient, and most five-year-olds share at least half of their goods.

A number of studies have examined the psychological framing of the dictator game with a version called "taking", in which the player "takes" resources from the recipient's predetermined endowment rather than choosing an amount to "give". Some studies show no effect between male and female players, but one 2017 study reported a difference between male and female players in the taking frame, with females allocating significantly more to the recipient under the "taking" frame than under the "giving" frame, while males showed exactly the opposite behavior, nullifying the overall effect. In 2016, Bhogal et al. conducted a study to evaluate the effects of perceived attractiveness on decision-making behavior and altruism in the standard dictator game, testing theories that altruism may serve as a courtship display; this study found no relationship between attractiveness and altruism.

If these experiments appropriately reflect individuals' preferences outside of the laboratory, the results appear to demonstrate one of the following:
- Dictators' utility functions include only the money they receive, and dictators fail to maximize it.
- Dictators' utility functions may include non-tangible harms they incur (for example, self-image or anticipated negative views of others in society).
- Dictators' utility functions may include benefits received by others.

Additional experiments have shown that subjects maintain a high degree of consistency across multiple versions of the dictator game in which the cost of giving varies. This suggests that dictator game behavior is well approximated by a model in which dictators maximize utility functions that include benefits received by others; that is, subjects increase their utility when they pass money to the recipients. The latter implies they are maximizing a utility function that incorporates the recipient's welfare and not only their own welfare; this is the core of "other-regarding" preferences. A number of experiments have shown that donations are substantially larger when the dictators are aware of the recipient's need of the money.
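One simple way to make "other-regarding" preferences concrete is an inequity-aversion utility of the kind proposed by Fehr and Schmidt. The sketch below is illustrative only: the parameter values and endowment are hypothetical, and it is not a model fitted to any of the experiments above:

```python
def fehr_schmidt_utility(own: float, other: float,
                         alpha: float, beta: float) -> float:
    """Inequity-averse utility: alpha penalises disadvantageous inequality,
    beta penalises advantageous inequality ("guilt")."""
    return own - alpha * max(other - own, 0) - beta * max(own - other, 0)

def best_transfer(endowment: int, alpha: float, beta: float) -> int:
    """Transfer that maximises the dictator's utility, by exhaustive search."""
    return max(range(endowment + 1),
               key=lambda g: fehr_schmidt_utility(endowment - g, g, alpha, beta))

for beta in (0.0, 0.3, 0.7):
    print(f"beta = {beta}: optimal gift = {best_transfer(10, alpha=1.0, beta=beta)}")
# beta below 0.5 -> give nothing; beta above 0.5 -> split the endowment evenly
```

In this toy model, a dictator with little aversion to advantageous inequality (low beta) gives nothing, while a sufficiently guilt-averse dictator splits the endowment evenly, bracketing the range of giving reported experimentally.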
Other experiments have shown a relationship between political participation, social integration, and dictator game giving, suggesting that the game may be an externally valid indicator of concern for the well-being of others. Regarding altruism, recent papers have shown that experimental subjects in a lab environment do not behave differently from participants in outside settings. Studies have also suggested that behavior in this game is heritable.

Challenges
The idea that the highly mixed results of the dictator game prove or disprove rationality in economics is not widely accepted. The results offer both support for the classical assumptions and notable exceptions, which have led to improved holistic economic models of behavior. Some authors have suggested that giving in the dictator game does not entail that individuals wish to maximize others' benefit (altruism); instead, they suggest that individuals derive some negative utility from being seen as greedy and avoid this judgment by the experimenter. Some experiments have been performed to test this hypothesis, with mixed results. Additionally, the mixed results of the dictator game point to other behavioral attributes that may influence how individuals play the game: people may be motivated by altruism and by how their actions are perceived by others, rather than solely by avoiding being viewed as greedy. Experiments have studied people's motivations in this game more deeply. One experiment showed that females are more likely than males to value altruism in their actions, and that they are more likely to be altruistic towards other females than towards males. This suggests that many extraneous variables, such as an individual's own motivations and the identity of the other players, may influence decisions in the dictator game.

Variants
The trust game is similar to the dictator game, but with an added first step. It is a sequential game involving two players, the trustor and the trustee. Initially called the investment game by Berg, Dickhaut, and McCabe in 1995, the trust game originated as a design experiment to study trust and reciprocity in an investment setting. In the trust game, the trustor first decides how much of an endowment to give to the trustee, knowing that whatever they send will be tripled by the experimenter. The trustee (now acting as a dictator) then decides how much of this increased endowment to return to the trustor. Thus the dictator's (or trustee's) partner must decide how much of the initial endowment to entrust to the dictator, in the hope of receiving at least the same amount in return. The behavior of the two players therefore turns on trust and trustworthiness. Since trust is an important factor in economic behavior, trust and trustworthiness are studied at the individual level using experimental designs involving both roles in different trust games. The experiments rarely end in the subgame-perfect Nash equilibrium of "no trust"; indeed, studies often found that greater trust resulted in the participant losing more in the end. According to Berg et al., the decision to trust depends on the belief that the other participant will reciprocate, yet first movers usually send an endowment even when they expect nothing back, much as people take part in a lottery despite unfavourable odds. The payoff structure of the game is sketched below.
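The arithmetic of the design is simple. The following sketch encodes the payoff rule described above (the tripling multiplier is from the Berg et al. design; the specific amounts in the examples are hypothetical):

```python
def trust_game_payoffs(endowment: float, sent: float, returned: float,
                       multiplier: float = 3.0) -> tuple[float, float]:
    """Payoffs (trustor, trustee) given the amounts sent and returned.
    The experimenter multiplies the transfer, as in Berg et al. (1995)."""
    assert 0 <= sent <= endowment
    pot = multiplier * sent                  # what the trustee receives
    assert 0 <= returned <= pot
    return endowment - sent + returned, pot - returned

print(trust_game_payoffs(10, sent=10, returned=15))  # (15.0, 15.0): trust repaid
print(trust_game_payoffs(10, sent=10, returned=0))   # (0.0, 30.0): betrayal
print(trust_game_payoffs(10, sent=0,  returned=0))   # (10.0, 0.0): "no trust" equilibrium
```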
Part of the explanation for this generosity is that the trustor wants to avoid the responsibility of leaving the trustee with no endowment and risking zero payoffs at the end of the game. A pair of studies published in 2008, of identical and fraternal twins in the US and Sweden, suggests that behavior in this game is heritable. Betrayal aversion is another major factor weighing on trust and risk, determining whether trusting another person is equivalent to taking a risky bet. The concept was coined by Bohnet and Zeckhauser, who argued that betrayal aversion could prevent the trustor from trusting the trustee because of the social risk of ending with zero payoffs. Their study used an experiment in which participants were randomly paired with one another, increasing the probability that the outcome would depend on the actions of the trustee selected. Results showed that regardless of whether the trustor placed a safe or a risky bet, the payoffs were not equivalent to the trustee's payoffs. Ultimately, Bohnet and Zeckhauser assessed potential risk in the trust game through the relative hesitation shown by each participant when deciding how much to give.

A variation of the dictator game called the "taking" game (see the "Experiments" section above for further detail) emerged from sociological experiments conducted in 2003; in it, the dictator decides how much utility to "take" from the recipient's predetermined endowment. This variation was designed to evaluate greed, rather than the fairness or altruism generally evaluated with the standard dictator game model, which by contrast is referred to as the "giving" game.

See also: Impunity game, Neuroeconomics, Ultimatum game, Prisoner's dilemma, Public goods game, Social preferences
Dictator game
[ "Mathematics" ]
2,388
[ "Game theory", "Non-cooperative games" ]
1,557,269
https://en.wikipedia.org/wiki/Reginald%20Punnett
Reginald Crundall Punnett FRS (20 June 1875 – 3 January 1967) was a British geneticist who co-founded, with William Bateson, the Journal of Genetics in 1910. Punnett is probably best remembered today as the creator of the Punnett square, a tool still used by biologists to predict the probability of the possible genotypes of offspring. His Mendelism (1905) is sometimes said to have been the first textbook on genetics; it was probably the first popular science book to introduce genetics to the public.

Life and work
Reginald Punnett was born in 1875 in the town of Tonbridge in Kent, England. While recovering from a childhood bout of appendicitis, Punnett became acquainted with Jardine's Naturalist's Library and developed an interest in natural history. Punnett was educated at Clifton College. Attending Gonville and Caius College, Cambridge, Punnett earned a bachelor's degree in zoology in 1898 and a master's degree in 1901. Between these degrees he worked as a demonstrator and part-time lecturer at the University of St. Andrews' Natural History Department. In October 1901, Punnett was back at Cambridge when he was elected to a Fellowship at Gonville and Caius College, working in zoology, primarily the study of worms, specifically nemerteans. It was during this time that he and William Bateson began a research collaboration that lasted several years.

When Punnett was an undergraduate, Gregor Mendel's work on inheritance was largely unknown and unappreciated by scientists. However, in 1900, Mendel's work was rediscovered by Carl Correns, Erich Tschermak von Seysenegg, and Hugo de Vries. William Bateson became a proponent of Mendelian genetics and had Mendel's work translated into English. It was with Bateson that Reginald Punnett helped establish the new science of genetics at Cambridge. He, Bateson, and Saunders co-discovered genetic linkage through experiments with chickens and sweet peas. In 1905, Punnett devised what is now called the Punnett square, a square diagram used to predict the genotypes of a particular cross or breeding experiment, described for the first time in the second edition of his book. In 1908, unable to explain how a dominant allele would not become fixed and ubiquitous in a population, Punnett introduced the problem to the mathematician G. H. Hardy, with whom he played cricket; Hardy went on to formulate the Hardy–Weinberg principle, independently of the German Wilhelm Weinberg.

Punnett was Superintendent of the Cambridge University Museum of Zoology from 1908 to 1909. In 1909 he went to Sri Lanka to meet Arthur Willey, FRS, then Director of the Colombo Museum, and R. H. Lock, then Scientific Assistant at the Peradeniya Botanical Gardens, and to catch butterflies. The following year, he published a monograph, '"Mimicry" in Ceylon Butterflies, with a suggestion as to the nature of Polymorphism', in Spolia Zeylanica, the journal of the Colombo Museum, in which he voiced his opposition to gradualistic accounts of the evolution of mimicry, which he later expanded on in his 1915 book Mimicry in Butterflies. In 1910, Punnett became a professor of biology at Cambridge, and then the first Arthur Balfour Professor of Genetics when Bateson left in 1912. In the same year, Punnett was elected a Fellow of the Royal Society; he received the society's Darwin Medal in 1922. During World War I, Punnett successfully applied his expertise to the problem of the early determination of sex in chickens; a Punnett square for a sex-linked cross of the kind this work exploited is sketched below.
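The sketch below is illustrative only: the generic function is a direct mechanisation of a Punnett square, and the cross shown is a simplified textbook example of Z-linked barring in chickens, with genotype labels invented for the example:

```python
from itertools import product
from collections import Counter

def punnett(gametes_a: list[str], gametes_b: list[str]) -> Counter:
    """Pair every gamete of one parent with every gamete of the other,
    exactly as in a Punnett square, and count offspring genotypes."""
    return Counter("".join(sorted(pair)) for pair in product(gametes_a, gametes_b))

# Simplified auto-sexing cross: barring (B) is Z-linked and dominant over b.
# A barred hen (ZB W) is crossed with a non-barred cock (Zb Zb).
hen_gametes  = ["ZB", "W"]
cock_gametes = ["Zb", "Zb"]

for genotype, count in punnett(hen_gametes, cock_gametes).items():
    sex = "male" if genotype.count("Z") == 2 else "female"
    plumage = "barred" if "ZB" in genotype else "non-barred"
    print(f"{genotype}: {count}/4 of offspring -> {sex}, {plumage}")
```

Because barring is carried on the Z chromosome, every male chick in this cross is barred and every female is not, so the sexes can be told apart by plumage at hatch; this is the principle behind the auto-sexing breeds described below.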
Since only females were used for egg production, early identification of male chicks, which were destroyed or separated for fattening, meant that limited animal feed and other resources could be used more efficiently. Punnett's work in this area was summarized in Heredity in Poultry (1923). With Michael Pease as his assistant, he created the first auto-sexing chicken breed, the Cambar, by transferring the barring gene of the Barred Rock to the Golden Campine. Reginald Punnett retired in 1940 and died at the age of 91 in 1967 in Bilbrook, Somerset.
Reginald Punnett
[ "Biology" ]
966
[ "Non-Darwinian evolution", "Mutationism", "Biology theories", "Obsolete biology theories" ]
1,557,358
https://en.wikipedia.org/wiki/Molecular%20knot
In chemistry, a molecular knot is a mechanically interlocked molecular architecture that is analogous to a macroscopic knot. Naturally forming molecular knots are found in organic molecules such as DNA and proteins. It is not certain that naturally occurring knots are evolutionarily advantageous to nucleic acids or proteins, though knotting is thought to play a role in the structure, stability, and function of knotted biological molecules. The mechanisms by which knots naturally form in molecules, and by which a molecule is stabilized or improved by knotting, remain unclear. The study of molecular knots involves the formation and applications of both naturally occurring and chemically synthesized molecular knots. Applying chemical topology and knot theory to molecular knots allows biologists to better understand the structures and synthesis of knotted organic molecules. The term knotane was coined by Vögtle et al. in 2000 to describe molecular knots by analogy with rotaxanes and catenanes, which are other mechanically interlocked molecular architectures. The term has not been broadly adopted by chemists and has not been adopted by IUPAC.

Naturally occurring molecular knots
Organic molecules containing knots may fall into the categories of slipknots or pseudo-knots. These are not mathematical knots because they are not closed curves; rather, the knot exists within an otherwise linear chain, with termini at each end. Knotted proteins are thought to form molecular knots during the folding of their tertiary structure, and knotted nucleic acids generally form molecular knots during genomic replication and transcription, though the details of the knotting mechanisms remain disputed. Molecular simulations are fundamental to research on molecular knotting mechanisms. Knotted DNA was found first by Liu et al. in 1981, in single-stranded, circular, bacterial DNA, though double-stranded circular DNA has been found to also form knots. Naturally knotted RNA has not yet been reported. A number of proteins containing naturally occurring molecular knots have been identified; the knot types found to occur naturally in proteins are the 31, 41, 52, and 61 knots, as identified in the KnotProt database of known knotted proteins.

Chemically synthesized molecular knots
Several synthetic molecular knots have been reported. Knot types that have been successfully synthesized in molecules include the 31, 41, 51, and 819 knots. Though the 52 and 61 knots have been found to occur naturally in knotted molecules, they have not been successfully synthesized. Small-molecule composite knots have also not yet been synthesized. Artificial DNA, RNA, and protein knots have been successfully synthesized. DNA is a particularly useful model for synthetic knot synthesis, as it naturally forms interlocked structures and can be easily manipulated to control precisely the raveling necessary to form knots. Molecular knots are often synthesized with the aid of metal-ion ligands.

History
The first researcher to suggest the existence of a molecular knot in a protein was Jane Richardson in 1977, who reported that carbonic anhydrase B (CAB) exhibited apparent knotting during her survey of various proteins' topological behavior. However, the researcher generally credited with the discovery of the first knotted protein is Marc L. Mansfield in 1994, as he was the first to specifically investigate the occurrence of knots in proteins and confirm the existence of the trefoil knot in CAB; a sketch of the chain-simplification step that underlies such computational knot detection is given below.
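Detection of knots in proteins is computational: the backbone trace is simplified until only its essential topology remains, and the reduced chain is then classified. The sketch below implements the core simplification step in the spirit of the Koniaris–Muthukumar–Taylor (KMT) reduction commonly used for this purpose; it is illustrative only and is not a description of Mansfield's exact procedure (degenerate geometry is ignored, and real tools must additionally close the open chain and compute a knot invariant, such as the Alexander polynomial, to name the knot type):

```python
import numpy as np

def segment_hits_triangle(p, q, a, b, c, eps=1e-12):
    """Moller-Trumbore test: does segment p->q pierce triangle (a, b, c)?"""
    d = q - p
    e1, e2 = b - a, c - a
    h = np.cross(d, e2)
    det = np.dot(e1, h)
    if abs(det) < eps:          # segment parallel to the triangle's plane
        return False            # (degenerate cases ignored in this sketch)
    f = 1.0 / det
    s = p - a
    u = f * np.dot(s, h)
    if u < 0 or u > 1:
        return False
    qv = np.cross(s, e1)
    v = f * np.dot(d, qv)
    if v < 0 or u + v > 1:
        return False
    t = f * np.dot(e2, qv)
    return 0 <= t <= 1          # crossing point lies within the segment

def kmt_reduce(chain):
    """Iteratively delete vertex i when triangle (i-1, i, i+1) is not pierced
    by any non-adjacent chain segment; this preserves the chain's topology."""
    pts = [np.asarray(p, float) for p in chain]
    changed = True
    while changed and len(pts) > 2:
        changed = False
        i = 1
        while i < len(pts) - 1:
            a, b, c = pts[i - 1], pts[i], pts[i + 1]
            blocked = any(
                segment_hits_triangle(pts[j], pts[j + 1], a, b, c)
                for j in range(len(pts) - 1)
                if j < i - 2 or j > i + 1)  # skip segments touching the triangle
            if not blocked:
                del pts[i]
                changed = True
            else:
                i += 1
    return pts  # an unknotted open chain typically reduces to its two endpoints
```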
Knotted DNA had already been found by Liu et al. in 1981, in single-stranded, circular, bacterial DNA. In 1989, Sauvage and coworkers reported the first synthetic knotted molecule: a trefoil synthesized via a double-helix complex with the aid of Cu+ ions. Vögtle et al. were the first to describe molecular knots as knotanes, in 2000. Also in 2000, William Taylor introduced an alternative computational method for analyzing protein knotting that set the termini at fixed points far enough away from the knotted component of the molecule that the knot type could be well defined; with this method, Taylor discovered a deep knot in a protein, confirming the existence of deeply knotted proteins.

In 2007, Eric Yeates reported the identification of a molecular slipknot, in which a molecule contains knotted subchains even though its backbone chain as a whole is unknotted and contains no completely knotted structures easily detectable by computational models. Mathematically, slipknots are difficult to analyze because they are not recognized in an examination of the complete structure. A pentafoil knot prepared using dynamic covalent chemistry was synthesized by Ayme et al. in 2012; at the time, it was the most complex non-DNA molecular knot prepared to date. Later, in 2016, a fully organic pentafoil knot was also reported, including the very first use of a molecular knot to allosterically regulate catalysis. In January 2017, an 819 knot was synthesized by David Leigh's group, making it the most complex molecular knot synthesized to date.

An important development in knot theory is allowing for intra-chain contacts within an entangled molecular chain. Circuit topology has emerged as a topological framework that formalises the arrangement of contacts, as well as chain crossings, in a folded linear chain. As a complementary approach, Colin Adams et al. developed a singular knot theory that is applicable to folded linear chains with intramolecular interactions.

Applications
Many synthetic molecular knots have a distinct globular shape and dimensions that make them potential building blocks in nanotechnology.

See also: Circuit topology of folded linear molecules, Supramolecular chemistry, Knotted protein, Knotted polymers, Topology (chemistry), Knot theory, Molecular Borromean rings
Molecular knot
[ "Chemistry", "Materials_science", "Mathematics" ]
1,139
[ "Organic compounds", "Molecular topology", "Macrocycles", "Topology", "nan", "Nanotechnology", "Supramolecular chemistry" ]