Dataset columns: id (int64, 39 to 79M), url (string, 32-168 chars), text (string, 7-145k chars), source (string, 2-105 chars), categories (list, 1-6 items), token_count (int64, 3-32.2k), subcategories (list, 0-27 items)
68,918,936
https://en.wikipedia.org/wiki/Transition%20metal%20thiosulfate%20complex
A transition metal thiosulfate complex is a coordination complex containing one or more thiosulfate ligands. Thiosulfate occurs in nature and is used industrially, so its interactions with metal ions are of some practical interest. Examples Thiosulfate is a potent ligand for soft metal ions. A typical complex features a pair of S-bonded thiosulfate ligands. Simple aquo and ammine complexes are also known. Three binding modes are common: monodentate (κ1-), O,S-bidentate (κ2-), and bridging (μ-). Linkage isomerism (O- vs S-bonded) has been observed in some complexes. Preparation Typically, thiosulfate complexes are prepared from thiosulfate salts by displacement of aquo or chloro ligands. In some cases, they arise by oxidation of polysulfido complexes, or by binding of sulfur trioxide to sulfido ligands. Applications Photography Silver-thiosulfate complexes are produced by common photographic fixers. By dissolving silver halides, the fixer stabilises the image. The dissolution process entails reactions that form 1:2 and 1:3 silver-thiosulfate complexes (X = halide). Recovery of precious metals Sodium thiosulfate and ammonium thiosulfate have been proposed as alternative lixiviants to cyanide for extraction of gold from ores and printed circuit boards. The complex [Au(S2O3)2]3- is assumed to be the principal product in such extractions. Presently cyanide salts are used on a large scale for that purpose, with obvious risks. The advantages of this approach are that (i) thiosulfate is far less toxic than cyanide and (ii) ore types that are refractory to gold cyanidation (e.g. carbonaceous or Carlin-type ores) can be leached by thiosulfate. One problem with this alternative process is the high consumption of thiosulfate, which is more expensive than cyanide. Another issue is the lack of a suitable recovery technique, since [Au(S2O3)2]3- does not adsorb to activated carbon, which is the standard technique used in gold cyanidation to separate the gold complex from the ore slurry. Naming In the IUPAC Red Book the following terms may be used for thiosulfate as a ligand: trioxido-1κ3O-disulfato(S–S)(2−); trioxidosulfidosulfato(2−); thiosulfato; sulfurothioato. In the naming of thiosulfate salts, the final "o" is replaced by "e". Thus, sodium aurothiosulfate could be called trisodium di(thiosulfato)aurate(I). References Coordination chemistry Coordination complexes Ligands Thiosulfates
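The fixation equations mentioned in the Photography section above are not reproduced in this extract. As a minimal sketch of the standard fixer chemistry, assuming sodium thiosulfate as the fixing agent and writing X for the halide (the counterions and specific halide are assumptions, not taken from the article):

$$\mathrm{AgX} + 2\,\mathrm{Na_2S_2O_3} \longrightarrow \mathrm{Na_3[Ag(S_2O_3)_2]} + \mathrm{NaX}$$
$$\mathrm{Na_3[Ag(S_2O_3)_2]} + \mathrm{Na_2S_2O_3} \longrightarrow \mathrm{Na_5[Ag(S_2O_3)_3]}$$

The first step gives the 1:2 (Ag:thiosulfate) complex and the second the 1:3 complex referred to in the text; both are water-soluble, which is what allows undeveloped silver halide to be washed out of the emulsion.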
Transition metal thiosulfate complex
[ "Chemistry" ]
627
[ "Ligands", "Coordination chemistry", "Coordination complexes" ]
53,450,372
https://en.wikipedia.org/wiki/Plasmodium%20falciparum%20erythrocyte%20membrane%20protein%201
Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP1) is a family of proteins present on the membrane surface of red blood cells (RBCs or erythrocytes) that are infected by the malarial parasite Plasmodium falciparum. PfEMP1 is synthesized during the parasite's blood stage (erythrocytic schizogony) inside the RBC, during which the clinical symptoms of falciparum malaria are manifested. Acting as both an antigen and adhesion protein, it is thought to play a key role in the high level of virulence associated with P. falciparum. It was discovered in 1984 when it was reported that infected RBCs had unusually large-sized cell membrane proteins, and these proteins had antibody-binding (antigenic) properties. An elusive protein, its chemical structure and molecular properties were revealed only after a decade, in 1995. It is now established that there is not one but a large family of PfEMP1 proteins, genetically regulated (encoded) by a group of about 60 genes called var. Each P. falciparum is able to switch on and off specific var genes to produce a functionally different protein, thereby evading the host's immune system. RBCs carrying PfEMP1 on their surface stick to endothelial cells, which facilitates further binding with uninfected RBCs (through the processes of sequestration and rosetting), ultimately helping the parasite to both spread to other RBCs as well as bringing about the fatal symptoms of P. falciparum malaria. Introduction Malaria is the deadliest among infectious diseases, accounting for approximately 429,000 human deaths in 2015 as of the latest estimate by the World Health Organization. In humans, malaria can be caused by five Plasmodium parasites, namely P. falciparum, P. vivax, P. malariae, P. ovale and P. knowlesi. P. falciparum is the most dangerous species, attributed to >99% of malaria's death toll, with 70% of these deaths occurring in children under the age of five years. The parasites are transmitted through the bites of female mosquitos (of the species of Anopheles). Before invading the RBCs and causing the symptoms of malaria, the parasites first multiply in the liver. The daughter parasites called merozoites then only infect the RBCs. They undergo structural development inside the RBCs, becoming trophozoites and schizonts. It is during this period that malarial symptoms are produced. Unlike RBCs infected by other Plasmodium species, P. falciparum-infected RBCs had been known to spontaneously stick together. By the early 1980s, it was established that when the parasite (both the trophozoite and schizont forms) enters the blood stream and infects RBCs, the infected cells form knobs on their surface. Then they become sticky, and get attached to the walls (endothelium) of the blood vessels through a process called cytoadhesion, or cytoadherence. Such attachment favours binding with and accumulation of other RBCs. This process is known as sequestration. It is during this condition that the parasites induce an immune response (antigen-antibody reaction) and evade destruction in the spleen. Although the process and significance of sequestration were described in detail by two Italian physicians Amico Bignami and Ettore Marchiafava in the early 1890s, it took a century to discover the actual factor for the stickiness and virulence. Discovery PfEMP1 was discovered by Russell J. Howard and his colleagues at the US National Institutes of Health in 1984. Using the techniques of radioiodination and immunoprecipitation, they found a unique but yet unknown antigen from P. 
falciparum-infected RBCs that appeared to cause binding with other cells. Since the antigenic protein could only be detected in infected cells, they asserted that the protein was produced by the malarial parasite, and not by RBCs. The antigen was large and appeared to be different in size in different strains of P. falciparum obtained from night monkey (Aotus). In one strain, called Camp (from Malaysia), the antigen was found to have a molecular size of approximately 285 kDa, while in the other, called St. Lucia (from El Salvador), it was approximately 260 kDa. Both antigens bound to cultured skin cancer (melanoma) cells. But the researchers could not confirm whether the protein actually acted as an adhesion molecule to the wall of blood vessels. Later in the same year, they found that the unknown antigen was associated only with RBCs having small lumps called knobs on their surface. The first such antigen from human RBCs was reported in 1986. Howard's team found that the antigens from Gambian children, who were suffering from falciparum malaria, were similar to those from the RBCs of night monkey. They determined that the molecular sizes of the proteins ranged from 250 to 300 kDa. In 1987, they discovered another type of surface antigen from the same Camp and St. Lucia strains of malarial parasites. This was also a large-sized protein of about 300 kDa, but quite different from the antigens reported in 1984. The new protein was unable to bind to melanoma cells and was present only inside the cell. Hence, they named the earlier protein Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP1), to distinguish it from the newly identified Plasmodium falciparum erythrocyte membrane protein 2 (PfEMP2). The distinction was confirmed the next year, with the additional finding that PfEMP1 is present in comparatively small amounts. Although some of the properties of PfEMP1 were firmly established, the protein was difficult to isolate due to its low abundance. Five years after its discovery, one of the original researchers, Irwin Sherman, began to doubt the existence of PfEMP1 as a unique protein. He argued that the antigen could be merely a surface protein of RBCs that changes upon infection with malarial parasites. A consensus was achieved in 1995 following the identification (by cloning) of the gene for PfEMP1. The discovery of the genes was independently reported by Howard's team and two other teams at NIH. Howard's team identified two genes for PfEMP1, and recombinant protein products of these genes were shown to have antigenic and adhesive properties. They further affirmed that PfEMP1 is the key molecule in the ability of P. falciparum to evade the host's immune system. Joseph D. Smith and others showed that PfEMP1 is actually a large family of proteins encoded by a multigene family called var. The gene products can bind to a variety of receptors including those on endothelial cells. Xin-Zhuan Su and others showed that there could be more than 50 var genes, which are distributed on different chromosomes of the malarial parasite. Structure PfEMP1 is a large family of proteins having high molecular weights ranging from 200 to 350 kDa. The wide range of molecular size reflects extreme variation in the amino acid composition of the proteins. But all the PfEMP1 proteins can be described as having three basic structural components, namely an extracellular domain (ECD), a transmembrane domain (TMD) and an intracellular acidic terminal segment (ATS).
The extracellular domain is fully exposed on the cell surface, and is the most variable region. It consists of a number of sub-domains, including a short and conserved N terminal segment (NTS) at the outermost region, followed by a highly variable Duffy-binding-like (DBL) domain, sometimes a Ca2+-binding C2 domain, and then one or two cysteine-rich interdomain regions (CIDRs). Duffy-binding-like domains are so named because of their similarity to the Duffy binding proteins of P. vivax and P. knowlesi. There are six variant types of DBL, named DBLα, DBLβ, DBLγ, DBLδ, DBLε and DBLζ. CIDR is also divided into three classes: CIDRα, CIDRβ and CIDRγ. Both DBL and CIDR have an additional type called PAM, so named because of their specific involvement in pregnancy-associated malaria (PAM). In spite of the diverse DBL and CIDR proteins, the extracellular amino terminal region is partly conserved, consisting of the roughly 60-amino-acid NTS followed by one DBLα and one CIDR1 domain in tandem. This semi-conserved DBLα-CIDR1 region is called the head structure. The last CIDR region joins the TMD, which is embedded in the cell membrane. The TMD and ATS are highly conserved among different PfEMP1s, and their structures have been solved using solution NMR. The head structure is followed by a variable combination of diverse DBL and CIDR domains, in many cases along with C2. This variation gives rise to different types of PfEMP1. The DBL-CIDR combination in a particular type of PfEMP1 protein is never random, but organized into specific sequences known as domain cassettes. In some domain cassettes there are only two or a few DBL and CIDR domains, but in others they cover the entire length of the PfEMP1. These differences are responsible for the different binding capacities of different PfEMP1s. For instance, among the most well-known types, VAR3 (earlier called type 3 PfEMP1) is the smallest, consisting of only NTS with DBL1α and DBL2ε domains in the ECD. Its molecular size is approximately 150 kDa. In the domain cassette (DC) 4 type, the ECD is made up of three domains, DBLα1.1/1.4, CIDRα1.6 and DBLβ3. The DBLβ3 domain contains a binding site for intercellular adhesion molecule 1 (ICAM1). This is particularly implicated in the development of brain infection. VAR2CSA is atypical in having a single domain cassette that consists of three N terminal DBLPAM domains followed by three DBLε domains and one CIDRPAM. The seven domains always occur together. The usual NTS is absent. The protein specifically binds to chondroitin sulfate A (CSA); hence the name VAR2CSA. Synthesis and transport The PfEMP1 proteins are regulated and produced (encoded) by about 60 different var genes, but an individual P. falciparum parasite switches on only a single var gene at a time to produce only one type of PfEMP1. Each var gene consists of two exons. Exon 1 encodes amino acids of the highly variable ECD, while exon 2 encodes those of the conserved TMD and ATS. Based on their location in the chromosome and sequence, the var genes are generally classified into three major groups, A, B, and C, and two intermediate groups, B/A and B/C; or sometimes simply into five classes, upsA, upsB, upsC, upsD, and upsE respectively. Groups A and B are found towards the terminal (subtelomeric) regions of the chromosome, while group C is in the central (centromeric) region. Once the PfEMP1 protein is fully synthesized (translated), it is carried through the cytoplasm towards the RBC membrane. The NTS is crucial for such directional movement.
Within the cytoplasm, the newly synthesized protein is attached to a Golgi-like membranous vesicle called a Maurer's cleft. Inside the Maurer's clefts is a family of proteins called Plasmodium helical interspersed subtelomeric (PHIST) proteins. Of the PHIST proteins, PFI1780w and PFE1605w bind the intracellular ATS of PfEMP1 during transport to the RBC membrane. The PfEMP1 molecule is deposited at the RBC membrane at the knobs. These knobs are easily identified as conspicuous bumps on the infected RBCs from the early trophozoite stage onward. The malarial parasite cannot induce its virulence on RBCs without knobs. As many as 10,000 knobs are distributed throughout the surface of a mature infected RBC, and each knob is 50-80 nm in diameter. The export of PfEMP1 from the Maurer's clefts to the RBC membrane is mediated by the binding of another protein produced by the parasite, the knob-associated histidine-rich protein (KAHRP). KAHRP enhances the structural rigidity of the infected RBC and the adhesion of PfEMP1 on the knobs. It is also directly responsible for forming knobs, as indicated by the fact that kahrp gene-deficient malarial parasites do not form knobs. To form a knob, KAHRP aggregates several membrane skeletal proteins of the host RBC, such as spectrin, actin, ankyrin R, and the spectrin–actin band 4.1 complex. Upon arrival at the knob, PfEMP1 is attached to the spectrin network using the PHIST proteins. Function The primary function of PfEMP1 is to bind and attach RBCs to the wall of the blood vessels. The most important binding properties of P. falciparum known to date are mediated by the head structure of PfEMP1, consisting of DBL domains and CIDRs. DBL domains can bind to a variety of cell receptors including thrombospondin (TSP), complement receptor 1 (CR1), chondroitin sulfate A (CSA), P-selectin, endothelial protein C receptor (EPCR), and heparan sulfate. The DBL domain adjacent to the head structure binds to ICAM-1. CIDRs mainly bind to CD36 (cluster of differentiation 36). These bindings produce the pathogenic characteristics of the parasite, such as sequestration of infected cells in different tissues, invasion of RBCs, and clustering of infected cells by a process called rosetting. The CIDR1 domain in the semi-conserved head structure is the principal and best understood adhesion site of PfEMP1. It binds with CD36 on endothelial cells. Only group B and C proteins are able to bind CD36, and only those having CIDRα2-6 sequence types. On the other hand, group A proteins have either CIDRα1 or CIDRβ/γ/δ, and they are responsible for the most severe condition of malaria. Binding with ICAM-1 is achieved through the DBLβ domain adjacent to the head structure. However, many PfEMP1s having a DBLβ domain do not bind to ICAM-1, and it appears that only a DBLβ paired with a C2 domain can bind to ICAM-1. The DBLα-CIDRγ tandem pair is the main factor for rosetting, sticking infected RBCs together with uninfected cells and thereby clogging the blood vessels. This activity is performed through binding with CR1. The most dangerous malarial infection is in the brain and is called cerebral malaria. In cerebral malaria, the PfEMP1 proteins involved are DC8 and DC13. They are named after the domain cassettes they contain (domain cassettes 8 and 13), and are capable of binding endothelial cells not only in the brain but also in other organs, including the lung, heart, and bone marrow.
Initially, it was assumed that PfEMP1 binds to ICAM-1 in the brain, but DC8 and DC13 were found not to bind ICAM-1. Instead, DC8 and DC13 specifically bind to EPCR using CIDRα sub-types such as CIDRα1.1, CIDRα1.4, CIDRα1.5 and CIDRα1.7. However, it was later shown that DC13 can bind to both ICAM-1 and EPCR. EPCR is thus a potential vaccine and drug target in cerebral malaria. VAR2CSA is unique in that it is produced mostly by parasites infecting the placenta during pregnancy (the condition called pregnancy-associated malaria, PAM, or placental malaria). The majority of PAM is therefore due to VAR2CSA. Unlike other PfEMP1 proteins, VAR2CSA binds to chondroitin sulfate A present on the vascular endothelium of the placenta. Although its individual domains can bind to CSA, its entire structure is used for complete binding. The major complication in PAM is low-birth-weight babies. However, women who survive the first infection generally develop an effective immune response. In P. falciparum-prevalent regions in Africa, pregnant women are found to carry high levels of antibody (immunoglobulin G, or IgG) against VAR2CSA, which protect them from the placenta-attacking malarial parasite. They are noted for giving birth to heavier babies. Clinical importance In a normal human immune system, malarial parasite binding to RBCs stimulates the production of antibodies that attack the PfEMP1 molecules. Binding of antibody with PfEMP1 disables the binding properties of the DBL domains, causing loss of cell adhesion, and the infected RBC is destroyed. In this scenario, malaria is avoided. However, to evade the host's immune response, different P. falciparum parasites switch different var genes on and off to produce functionally different (antigenically distinct) PfEMP1s. Each variant type of PfEMP1 has a different binding property and thus is not always recognized by antibodies. By default, all the var genes in the malarial parasite are inactivated. Activation (gene expression) of var is initiated upon infection of the organs. Further, in each organ only specific var genes are activated. The severity of the infection is determined by the type of organ in which infection occurs and hence by the type of var gene activated. For example, in the most severe cases of malaria, such as cerebral malaria, only the var genes for the PfEMP1 proteins DC8 and DC13 are switched on. Upon the synthesis of DC8 and DC13, their CIDRα1 domains bind to EPCR, which brings about the onset of severe malaria. The abundance of the gene products (transcripts) of these PfEMP1 proteins (specifically the CIDRα1 subtype transcripts) directly relates to the severity of the disease. This further indicates that preventing the interaction between CIDRα1 and EPCR would be a good target for a potential vaccine. In pregnancy-associated malaria, another severe type of falciparum malaria, the gene for VAR2CSA (named var2csa) is activated in parasites in the placenta. Binding of VAR2CSA to CSA is the primary cause of premature delivery, death of the foetus and severe anaemia in the mother. This indicates that drugs targeting VAR2CSA may be able to prevent these effects of malaria, and for this reason VAR2CSA is the leading candidate for development of a PAM vaccine. References falciparum erythrocyte Antigens Apicomplexan proteins
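The domain architectures described in the Structure section above are, in effect, ordered lists of domain types. As an illustration only, the following minimal Python sketch encodes the three extracellular-domain compositions explicitly stated in the article (VAR3, domain cassette 4, and VAR2CSA) and checks for the ICAM-1-associated DBLβ domain; the tuple representation, the names, and the helper function are hypothetical and purely illustrative.

```python
# Illustrative only: extracellular-domain (ECD) compositions quoted in the article,
# listed from the N-terminus onward. The representation itself is an assumption.
ECD_COMPOSITIONS = {
    # VAR3: smallest PfEMP1 (~150 kDa), only NTS plus two DBL domains
    "VAR3": ("NTS", "DBL1α", "DBL2ε"),
    # Domain cassette 4: DBLβ3 carries the ICAM-1 binding site
    "DC4": ("DBLα1.1/1.4", "CIDRα1.6", "DBLβ3"),
    # VAR2CSA: per the article, three DBL-PAM, three DBLε and one CIDR-PAM domain,
    # with the usual NTS absent; the ordering shown is schematic.
    "VAR2CSA": ("DBL-PAM", "DBL-PAM", "DBL-PAM", "DBLε", "DBLε", "DBLε", "CIDR-PAM"),
}

def has_icam1_binding_domain(domains) -> bool:
    """Toy check: per the article, ICAM-1 binding maps to DBLβ-type domains
    (often only when paired with a C2 domain, which this sketch ignores)."""
    return any(d.startswith("DBLβ") for d in domains)

for name, domains in ECD_COMPOSITIONS.items():
    status = "ICAM-1-binding DBLβ present" if has_icam1_binding_domain(domains) else "no DBLβ"
    print(f"{name}: {status}")
```

Run as written, only DC4 is flagged as carrying a DBLβ domain, matching the article's statement that the DC4 cassette is the one implicated in ICAM-1 binding.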
Plasmodium falciparum erythrocyte membrane protein 1
[ "Chemistry" ]
4,100
[ "Antigens", "Biomolecules" ]
53,451,790
https://en.wikipedia.org/wiki/Cytosolic%20ciliogenesis
Cytosolic ciliogenesis, also called cytoplasmic ciliogenesis, is a type of ciliogenesis in which the cilium axoneme is formed in the cytoplasm or becomes exposed to the cytoplasm. Cytosolic ciliogenesis is divided into three types: Primary cytosolic cilia are formed by exposing the axoneme of a compartmentalized cilium (formed initially by compartmentalized ciliogenesis) to the cytoplasm. This type of cilium is found in the sperm of humans and other mammals. Secondary cytosolic cilia are formed in parallel with the formation of the typical compartmentalized cilium: one end of the axoneme is exposed to the cytoplasm while the other end is formed as a compartmentalized cilium. This type of cilium is found in insects. Tertiary cytosolic cilia are axonemes that form directly in the cytoplasm. This type of cilium is found in Plasmodium (the malaria parasite). History The term cytosolic ciliogenesis was coined in 2004 as part of a study that identified a large set of ciliogenesis genes. It was found that a subset of genes thought to be essential for compartmentalized cilia are not essential to form the sperm flagellum. Since the axoneme of this flagellum is exposed to the cytoplasm, the process was named cytosolic ciliogenesis. References Cell biology Organelles
Cytosolic ciliogenesis
[ "Biology" ]
312
[ "Cell biology" ]
53,452,258
https://en.wikipedia.org/wiki/Compartmentalized%20ciliogenesis
Compartmentalized ciliogenesis is the most common type of ciliogenesis where the cilium axoneme is formed separated from the cytoplasm by the ciliary membrane and a ciliary gate known as the transition zone. References Cell biology
Compartmentalized ciliogenesis
[ "Biology" ]
54
[ "Cell biology" ]
53,455,286
https://en.wikipedia.org/wiki/Vapor-tight%20tank
A vapor-tight tank is a piece of portable onshore oil production equipment designed to store crude oil and convey oil vapors to a flare stack. Vapor-tight tanks are horizontal vessels that can usually hold up to 14.7 pounds per square inch (gauge) (1.01 bar(g)). They use that pressure to force oil vapors to the flare. Connection to a flare allows these systems to be operated in situations with a high hydrogen sulfide content. In fact, their original intended use was sour crude oil production. The first vapor-tight tanks were constructed from used crude oil tank cars by Tornado Technologies. Vapor-tight tanks are frequently packaged with an integral separator, flare stack, and other equipment to form a complete single-well battery. Because of their small size and portability, they are mostly used in temporary production of oil wells. Canadian regulations consider that vapor-tight tanks are process vessels, rather than storage tanks, so tankage spacing and secondary containment provisions are not applicable. References Storage tanks Petroleum technology
Vapor-tight tank
[ "Chemistry", "Engineering" ]
213
[ "Chemical equipment", "Petroleum stubs", "Petroleum technology", "Petroleum engineering", "Storage tanks", "Petroleum" ]
58,004,420
https://en.wikipedia.org/wiki/Robophysics
Robophysics is an emerging scientific field that seeks to understand the physical principles of how robots move in the complex real world, analogous to the way biophysics seeks to understand the motion of biological systems. This emerging area has demonstrated the need for a physics of robotics and has revealed interesting problems at the interface of nonlinear dynamics, soft matter, control and biology. References Terrestrial locomotion Robot locomotion
Robophysics
[ "Physics" ]
78
[ "Physical phenomena", "Motion (physics)", "Robot locomotion" ]
58,004,880
https://en.wikipedia.org/wiki/NDUFAF6
NADH:ubiquinone oxidoreductase complex assembly factor 6 is a protein that in humans is encoded by the NDUFAF6 gene. The protein is involved in the assembly of complex I in the mitochondrial electron transport chain. Mutations in the NDUFAF6 gene have been shown to cause Complex I deficiency, Leigh syndrome, and Acadian variant Fanconi Syndrome. Structure The NDUFAF6 gene is located on the q arm of chromosome 8 in position 22.1 and spans 222,728 base pairs. The gene produces a 38.2 kDa protein composed of 333 amino acids. The protein contains a predicted phytoene synthase domain. Function The NDUFAF6 gene encodes a protein that localizes to mitochondria. The encoded protein plays an important role in the assembly of complex I (NADH-ubiquinone oxidoreductase) of the mitochondrial respiratory chain through regulation of subunit ND1 biogenesis. Clinical Significance Mutations in the NDUFAF6 gene are associated with complex I enzymatic deficiency and lead to Leigh syndrome, which is characterized by lesions in the central nervous system and rapid deterioration of cognitive and motor functions. In Acadians, a non-coding mutation in NDUFAF6 has been shown to cause Acadian variant Fanconi Syndrome, symptoms of which include pulmonary interstitial fibrosis and proximal tubular dysfunction accompanied by slowly progressive kidney disease. Inheritance of mutations in the NDUFAF6 gene is autosomal recessive. Interactions The protein encoded by NDUFAF6 interacts with RHOXF2, OTX1, GUCD1, and GALNT6 proteins. References Further reading Peripheral membrane proteins Cellular respiration
NDUFAF6
[ "Chemistry", "Biology" ]
365
[ "Biochemistry", "Cellular respiration", "Metabolism" ]
58,011,682
https://en.wikipedia.org/wiki/Advanced%20Electric%20Propulsion%20System
Advanced Electric Propulsion System (AEPS) is a solar electric propulsion system for spacecraft that is being designed, developed and tested by NASA and Aerojet Rocketdyne for large-scale science missions and cargo transportation. The first application of the AEPS is to propel the Power and Propulsion Element (PPE) of the Lunar Gateway, to be launched no earlier than 2027. The PPE module is built by Maxar Space Systems in Palo Alto, California. Two identical AEPS engines would consume 25 kW generated by the roll-out solar array (ROSA) assembly, which can produce over 60 kW of power. The Power and Propulsion Element (PPE) for the Lunar Gateway will have a mass of 8-9 metric tons and will be capable of generating 50 kW of solar electric power for its Hall-effect thrusters for maneuverability, which can be supported by chemical monopropellant thrusters for high-thrust attitude control maneuvers. Overview Solar-electric propulsion has been shown to be reliable and efficient, and allows a significant mass reduction of spacecraft. High-power solar electric propulsion is a key technology that has been prioritized because of its significant exploration benefits in cis-lunar space and for crewed missions to Mars. The AEPS Hall thruster system was originally developed, beginning in 2015, by NASA Glenn Research Center and the Jet Propulsion Laboratory to be used on the now canceled Asteroid Redirect Mission. Work on the thruster did not stop following the mission cancellation in April 2017 because there is demand for such thrusters for a range of NASA, defense and commercial missions in deep space. Since May 2016, further work on AEPS has been transitioned to Aerojet Rocketdyne, which is currently designing and testing the engineering-model hardware. Under a contract worth $65 million, Aerojet Rocketdyne developed, qualified and will deliver five 12.5 kW Hall thruster subsystems, including thrusters, PPUs and xenon flow controllers. Design AEPS is based on the 12.5 kW development model thruster called 'Hall Effect Rocket with Magnetic Shielding' (HERMeS). The AEPS solar electric engine makes use of the Hall-effect thruster, in which the propellant is ionized and accelerated by an electric field to produce thrust. Generating 12.5 kW at the thruster actually takes a total of 13.3 kW, including the power needed for the control electronics. Four identical AEPS engines (thruster and control electronics) would therefore theoretically need more than the 50 kW generated by the solar panels of the PPE. It is stated that the AEPS array is intended to use only 40 kW of the 50 kW, so the maximum thrust would be limited to around 1.77 N. The engineering model underwent various vibration, thruster dynamic and thermal environment tests in 2017. AEPS is expected to accumulate about 5,000 hours of operation by the end of the contract, and the design aims to achieve a flight model that offers a half-life of at least 23,000 hours and a full life of about 50,000 hours. The three main components of the AEPS propulsion engine are a Hall-effect thruster, a power processing unit (PPU), and a xenon flow controller (XFC). The thrusters are throttleable over an input power range of 6.6740 kW, with input voltages ranging from 95 to 140 V. The estimated xenon propellant mass for the Lunar Gateway would be 5,000 kg. The Preliminary Design Review took place in August 2017. It was concluded that "The Power Processing Unit successfully demonstrated stable operation of the propulsion system and responded appropriately to all of our planned contingency scenarios."
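The power-budget reasoning in the Design section above can be made explicit with a short calculation. The following is a minimal sketch using only figures quoted in the article (13.3 kW total input per engine string, four engines, 50 kW of PPE solar power, a 40 kW allocation to the AEPS array, and the roughly 1.77 N thrust limit); the derived thrust-to-power ratio is an illustrative back-calculation, not a published specification.

```python
# Figures quoted in the article text (illustrative calculation only)
POWER_PER_STRING_KW = 13.3    # input needed per engine string to get 12.5 kW at the thruster
N_ENGINES = 4
PPE_SOLAR_KW = 50.0           # solar power generated by the PPE
AEPS_ALLOCATION_KW = 40.0     # portion of that power the AEPS array is intended to use
MAX_THRUST_N = 1.77           # stated thrust limit at the 40 kW allocation

full_demand_kw = N_ENGINES * POWER_PER_STRING_KW
print(f"Four engine strings at full power need {full_demand_kw:.1f} kW, "
      f"more than the {PPE_SOLAR_KW:.0f} kW the PPE can generate.")

# Assumed proportionality between thrust and allocated power (back-calculated, not official)
thrust_per_kw = MAX_THRUST_N / AEPS_ALLOCATION_KW
print(f"Implied system thrust-to-power ratio: {1000 * thrust_per_kw:.0f} mN/kW")
```

Run as written, this prints a demand of 53.2 kW against 50 kW available, which is the article's point about why all four thrusters cannot be operated at full power simultaneously.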
Tests In July 2017, AEPS was tested at Glenn Research Center. The tests used a Power Processing Unit (PPU), which could also be used for other advanced spacecraft propulsion technology. In August 2018, Aerojet Rocketdyne completed the early systems integration test in a vacuum chamber, leading to the design finalization and verification phase. In November 2019, Aerojet Rocketdyne demonstrated the AEPS thruster at full power for the first time. In July 2023, NASA and Aerojet Rocketdyne began qualification testing on AEPS. See also VASIMR variable impulse electric plasma engine References Hall effect Ion engines Magnetic propulsion devices Spacecraft propulsion Lunar Gateway
Advanced Electric Propulsion System
[ "Physics", "Chemistry", "Materials_science" ]
878
[ "Physical phenomena", "Matter", "Ion engines", "Hall effect", "Electric and magnetic fields in matter", "Electrical phenomena", "Solid state engineering", "Ions" ]
58,012,993
https://en.wikipedia.org/wiki/Scutoid
A scutoid is a particular type of geometric solid between two parallel surfaces. The boundary of each of the surfaces (and of all the other parallel surfaces between them) either is a polygon or resembles a polygon, but is not necessarily planar, and the vertices of the two end polygons are joined by either a curve or a Y-shaped connection on at least one of the edges, but not necessarily all of the edges. Scutoids present at least one vertex between these two planes. Scutoids are not necessarily convex, and lateral faces are not necessarily planar, so several scutoids can pack together to fill all the space between the two parallel surfaces. They may be more generally described as a mix between a frustum and a prismatoid. Naming The object was first described by Gómez-Gálvez et al. in a paper entitled Scutoids are a geometrical solution to three-dimensional packing of epithelia, and published in July 2018. Officially, the name scutoid was coined because of its resemblance to the shape of the scutum and scutellum in some insects, such as beetles in the subfamily Cetoniinae. Unofficially, Clara Grima has stated that while working on the project, the shape was temporarily called an Escu-toid as a joke after the biology group leader Luis M. Escudero. Since his last name, "Escudero", means "squire" (from Latin scutarius = shield-bearer), the temporary name was modified slightly to become "scutoid". Appearance in nature Epithelial cells adopt the "scutoidal shape" under certain circumstances. In epithelia, cells can 3D-pack as scutoids, facilitating tissue curvature. This is fundamental to the shaping of the organs during development. "Scutoid is a prismatoid to which one extra mid-level vertex has been added. This extra vertex forces some of the "faces" of the resulting object to curve. This means that Scutoids are not polyhedra, because not all of their faces are planar. ... For the computational biologists who created/discovered the Scutoid, the key property of the shape is that it can combine with itself and other geometric objects like frustums to create 3D packings of epithelial cells." - Laura Taalman Cells in the developing lung epithelium have been found to have more complex shapes than the term "scutoid", inspired by the simple scutellum of beetles, suggests. When "scutoids" exhibit multiple Y-shaped connections or vertices along their axis, they have therefore been called "punakoids" instead, as their shape is more reminiscent of the Pancake Rocks in Punakaiki, New Zealand. Potential uses The scutoid explains how epithelial cells (the cells that line and protect organs such as the skin) efficiently pack in three dimensions. As epithelial tissue bends or grows, the cells have to take on new shapes to pack together using the least amount of energy possible, and until the scutoid's discovery, it was assumed that epithelial cells packed in mostly frustums, as well as other prism-like shapes. Now, with the knowledge of how epithelial cells pack, it opens up many new possibilities in terms of artificial organs. The scutoid may be applied to making better artificial organs, allowing for things like effective organ replacements, recognizing whether a person's cells are packing correctly or not, and ways to fix that problem. References External links Volume Epithelium
Scutoid
[ "Physics", "Mathematics" ]
745
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Size", "Extensive quantities", "Volume", "Wikipedia categories named after physical quantities" ]
73,328,748
https://en.wikipedia.org/wiki/Lanthanum%20pentanickel
LaNi5 is a hexagonal intermetallic compound composed of the rare earth element lanthanum and the transition metal nickel. It adopts the calcium pentacopper (CaCu5) crystal structure. It melts congruently (the solid and the liquid have the same composition) and has hydrogen storage capacity. Structure LaNi5 has a calcium pentacopper (CaCu5) type crystal structure with a hexagonal lattice, space group P6/mmm (No. 191). The lanthanum atom is located at the coordinate origin, Wyckoff position 1a (0,0,0); two nickel atoms are located at 2c, (1/3,2/3,0) and (2/3,1/3,0); and the other three are at 3g, (1/2,0,1/2), (0,1/2,1/2) and (1/2,1/2,1/2). The lattice parameters are a = 511 pm and c = 397 pm. The unit cell contains one LaNi5 formula unit and has a volume of 90×10−24 cm3; it contains six larger, deformed tetrahedral voids that can be occupied by hydrogen atoms. Chemical reactions As a hydrogen storage alloy, LaNi5 absorbs hydrogen to form the hydride LaNi5Hx (x ≈ 6) when the pressure is moderately elevated and the temperature is low; when the pressure decreases or the temperature increases, the hydrogen is released again, so absorption and release of hydrogen can be repeated. Energy must be added for the dehydrogenation process to proceed, as it is an endothermic reaction; a decrease in temperature will cause the reaction to stop. Characteristics and applications The hydrogen storage density per unit volume (crystal) of LaNi5H6.5 at 2 bar is equal to the density of gaseous molecular hydrogen at 1800 bar, and all of the hydrogen can be desorbed at 2 bar. Although the hydrogen storage density in practical applications is reduced by the aggregation of LaNi5 powders, it is still higher than the density of liquid hydrogen. This allows hydrogen fuel to be handled safely. In order to improve its hydrogen storage performance, metals such as lead or manganese are often used to partially replace nickel. Currently, LaNi5 is commonly used in the storage and transportation of hydrogen, hydrogen vehicle power, fuel cells, separation and purification of hydrogen, propylene hydrogenation catalysts, etc. References Lanthanum compounds Nickel compounds Nickel alloys Intermetallics
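The claim above that the hydride stores hydrogen more densely than liquid hydrogen can be checked from the unit-cell data given in the Structure section. A minimal sketch, assuming one LaNi5 formula unit per cell (as stated), x ≈ 6 hydrogen atoms per formula unit, and a textbook liquid-hydrogen density of roughly 71 kg/m3 (the last value is an assumption, not taken from the article):

```python
import math

# Hexagonal unit-cell parameters quoted in the article (space group P6/mmm)
a = 511e-12                      # lattice parameter a in metres (511 pm)
c = 397e-12                      # lattice parameter c in metres (397 pm)
cell_volume = (math.sqrt(3) / 2) * a**2 * c   # ≈ 9.0e-29 m^3, i.e. the ~90 Å^3 stated

H_PER_CELL = 6                   # x ≈ 6 in LaNi5Hx, one formula unit per cell
M_H = 1.008e-3 / 6.022e23        # mass of one hydrogen atom in kg

hydrogen_density = H_PER_CELL * M_H / cell_volume   # kg of H per m^3 of hydride crystal
print(f"Hydrogen density in the LaNi5H6 crystal: {hydrogen_density:.0f} kg/m^3")
print("Liquid hydrogen for comparison (assumed): ~71 kg/m^3")
```

The result, roughly 110 kg of hydrogen per cubic metre of hydride, is indeed well above the density of liquid hydrogen, consistent with the article's statement.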
Lanthanum pentanickel
[ "Physics", "Chemistry", "Materials_science" ]
522
[ "Nickel alloys", "Inorganic compounds", "Metallurgy", "Intermetallics", "Condensed matter physics", "Alloys" ]
47,757,024
https://en.wikipedia.org/wiki/Synthetic%20genomes
A synthetic genome is a synthetically built genome whose construction involves either genetic modification of pre-existing life forms or artificial gene synthesis to create new DNA or entire life forms. The field that studies synthetic genomes is called synthetic genomics. Recombinant DNA technology Soon after the discovery of restriction endonucleases and ligases, the field of genetics began using these molecular tools to assemble artificial sequences from smaller fragments of synthetic or naturally occurring DNA. The advantage of using the recombinatory approach as opposed to continual DNA synthesis stems from the inverse relationship that exists between synthetic DNA length and the percent purity of that synthetic length. In other words, as longer sequences are synthesized, the number of error-containing clones increases due to the inherent error rates of current technologies. Although recombinant DNA technology is more commonly used in the construction of fusion proteins and plasmids, several techniques with larger capacities have emerged, allowing for the construction of entire genomes. Polymerase cycling assembly Polymerase cycling assembly (PCA) uses a series of oligonucleotides (or oligos), approximately 40 to 60 nucleotides long, that altogether constitute both strands of the DNA being synthesized. These oligos are designed such that a single oligo from one strand contains a length of approximately 20 nucleotides at each end that is complementary to sequences of two different oligos on the opposite strand, thereby creating regions of overlap. The entire set is processed through cycles of: (a) hybridization at 60 °C; (b) elongation via Taq polymerase and a standard ligase; and (c) denaturation at 95 °C, forming progressively longer contiguous strands and ultimately resulting in the final genome. PCA was used to generate the first synthetic genome in history, that of the Phi X 174 virus. Gibson assembly method The Gibson assembly method, designed by Daniel Gibson during his time at the J. Craig Venter Institute, requires a set of double-stranded DNA cassettes that constitute the entire genome being synthesized. Note that cassettes differ from contigs by definition, in that these sequences contain regions of homology to other cassettes for the purposes of recombination. In contrast to polymerase cycling assembly, Gibson assembly is a single-step, isothermal reaction with a larger sequence-length capacity; it is therefore used in place of PCA for genomes larger than 6 kb. A T5 exonuclease performs a chew-back reaction at the terminal segments, working in the 5' to 3' direction, thereby producing complementary overhangs. The overhangs hybridize to each other, a Phusion DNA polymerase fills in any missing nucleotides, and the nicks are sealed with a ligase. However, the size of genome that can be synthesized using this method alone is limited because, as DNA cassettes increase in length, they require propagation in vitro in order to continue hybridizing; accordingly, Gibson assembly is often used in conjunction with transformation-associated recombination (see below) to synthesize genomes several hundred kilobases in size. Transformation-associated recombination The goal of transformation-associated recombination (TAR) technology in synthetic genomics is to combine DNA contigs by means of homologous recombination performed by the yeast artificial chromosome (YAC). Of importance is the CEN element within the YAC vector, which corresponds to the yeast centromere.
This sequence gives the vector the ability to behave in a chromosomal manner, thereby allowing it to perform homologous recombination. First, gap repair cloning is performed to generate regions of homology flanking the DNA contigs. Gap repair cloning is a particular form of the polymerase chain reaction in which specialized primers with extensions beyond the sequence of the DNA target are utilized. Then, the DNA cassettes are exposed to the YAC vector, which drives the process of homologous recombination, thereby connecting the DNA cassettes. Polymerase cycling assembly and TAR technology were used together to construct the 600 kb Mycoplasma genitalium genome in 2008, the first synthetic bacterial genome ever created. Similar steps were taken in synthesizing the larger Mycoplasma mycoides genome a few years later. General creation of synthetic genomes It is difficult to directly synthesize oligonucleotides larger than ~200 base pairs while maintaining high fidelity. Therefore, smaller oligonucleotides (around 5-20 base pairs) are combined to create longer, ultimately genome-sized, sequences. Previous methods of stitching the smaller strands together involved using T4 polynucleotide ligase. Modern techniques, like PCA/PCR-based methods, have improved on this approach, increasing speed and fidelity. To further increase fidelity, PCA-based methods can include an error-reversal step in which nucleases recognize and cut mismatched base pairs. Recognition is possible because errors usually cause structural bulges and abnormalities in the DNA. Currently, a 4-Mb E. coli genome created in May 2019 holds the record for the largest synthetic genome. See also Synthetic genomics References Genetic engineering Genome editing Synthetic biology
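As a concrete illustration of the oligonucleotide design described for polymerase cycling assembly above, here is a minimal Python sketch that tiles a target sequence into alternating top- and bottom-strand oligos of roughly the lengths quoted in the text (about 60 nt oligos with about 20 nt of overlap at each junction). The function names and the simple tiling scheme are illustrative assumptions, not the published protocol.

```python
def reverse_complement(seq: str) -> str:
    """Return the reverse complement of an upper-case DNA sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def design_pca_oligos(target: str, oligo_len: int = 60, overlap: int = 20):
    """Tile `target` into oligos that alternate between the top and bottom strand,
    so that neighbouring oligos share ~`overlap` nt of complementary sequence and
    the uncovered gaps are left for polymerase fill-in (toy scheme)."""
    step = oligo_len - overlap
    oligos = []
    for i, start in enumerate(range(0, len(target), step)):
        chunk = target[start:start + oligo_len]
        if len(chunk) <= overlap:   # tail already covered by the previous oligo
            break
        # even-indexed oligos come from the top strand, odd-indexed from the bottom
        oligos.append(chunk if i % 2 == 0 else reverse_complement(chunk))
    return oligos

# Toy usage with a short made-up sequence (not a real genome fragment)
demo_target = "ATGGCTAGCTAGGATCCGGTACCAAGCTTGCGGCCGCACTAGTGAGCTCGAATTCTGCAG" * 3
for oligo in design_pca_oligos(demo_target):
    print(len(oligo), oligo)
```

Each printed oligo overlaps its neighbours by about 20 nucleotides on the opposite strand, mirroring the overlap design the PCA description relies on.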
Synthetic genomes
[ "Chemistry", "Engineering", "Biology" ]
1,078
[ "Synthetic biology", "Genetics techniques", "Biological engineering", "Genome editing", "Genetic engineering", "Bioinformatics", "Molecular genetics", "Molecular biology" ]
47,763,640
https://en.wikipedia.org/wiki/Cortinarius%20subfoetens
Cortinarius subfoetens is a basidiomycete mushroom of the genus Cortinarius native to North America. It was first described in Wyoming. References subfoetens Fungi described in 1995 Fungi of North America Taxa named by Meinhard Michael Moser Fungus species
Cortinarius subfoetens
[ "Biology" ]
60
[ "Fungi", "Fungus species" ]
47,763,930
https://en.wikipedia.org/wiki/Specification%20for%20human%20interface%20for%20semiconductor%20manufacturing%20equipment
This specification is usually called the SEMI E95 standard; the original version, SEMI E95-0200, was published in February 2000, and the latest technical revision is SEMI E95-1101. The standard addresses the human interface of semiconductor processing equipment with the direct intention of developing common software standards, so that problems involving operator training, operation specifications, and efficient development can be resolved more easily. See also Semiconductor Equipment and Materials International Notes Semiconductor device fabrication Technical specifications
Specification for human interface for semiconductor manufacturing equipment
[ "Materials_science", "Technology" ]
88
[ "Semiconductor device fabrication", "nan", "Microtechnology" ]
66,079,732
https://en.wikipedia.org/wiki/Mark%20Bowick
Mark John Bowick (born 1957) is a theoretical physicist in condensed matter theory and high energy physics. He is the deputy director of the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara, and a Visiting Distinguished Professor of Physics in UCSB's Physics Department. Early life and education Bowick was born in Rotorua, New Zealand, and earned his bachelor's degree, B.Sc. (Hons.), at the University of Canterbury in Christchurch. In 1983, he received his Ph.D. in theoretical physics from the California Institute of Technology, where he held an Earle C. Anthony Graduate Fellowship. Professional career Bowick then spent three years at Yale University as the research associate of their Sloane Physics Lab's "Particle Theory Group," followed by a two-year postdoctoral position at the Center for Theoretical Physics, at MIT. He was awarded first prize in the 1986 Gravity Research Foundation Essay Competition. In 1987, he joined the faculty of the physics department at Syracuse University, where he was granted an Outstanding Junior Investigator award, from the United States Department of Energy, for the years 1987 to 1994. At Syracuse, Bowick served as assistant and associate professor from 1987 to 1998, was promoted to full professor of physics in 1998, and went on to become director of the Soft Matter Program from 2011 to 2016. In August 2016, the Kavli Institute for Theoretical Physics, at the University of California, Santa Barbara, invited Bowick to join as deputy director and visiting distinguished professor of physics. Research Bowick's research interests include symmetry breaking, the interplay of order and geometry, topological defects, building blocks for supramolecular self-assembly, membrane statistical mechanics, shaped structures, and common themes in condensed matter and particle physics. Since 2002, his career has been split between high-energy physics and condensed matter physics, with ongoing research support by the National Science Foundation. Honors and awards First prize in the Gravity Research Foundation Essay Competition (1986) Outstanding Junior Investigator, United States Department of Energy (1987–1994) Fellow of the American Physical Society, Division of Condensed Matter Physics (elected 2004) Fellow of the American Association for the Advancement of Science (elected 2022). Syracuse honored Bowick with two commendations: the Chancellor's Citation for Exceptional Academic Achievement in 2006, and the William Wasserstrom Prize for Excellence in Graduate Teaching and Advising in 2009. He was also named the Joel Dorman Steele Professor of Physics in 2013. Personal life Bowick is married to theoretical physicist M. Cristina Marchetti. They have two adult children. In 2016, while director of Syracuse University's Soft Matter Program, Bowick commissioned composer Andrew Waggoner to write music for their Active And Smart Matter Conference: A New Frontier for Science & Engineering. The world premiere of this eclectic composition, entitled Hexacorda Mollia, was performed by the JACK Quartet on June 22, 2016. Selected publications Bowick, MJ and LCR Wijewardhana, Superstrings at High Temperature, Physical Review Letters 54 (23), 2485 (1985). Bowick, MJ, and TW Appelquist, D Karabali, LCR Wijewardhana, Spontaneous chiral-symmetry breaking in three-dimensional QED, Physical Review D 33 (12), 3704 (1986). 
Bowick, MJ, and L Chandar, EA Schiff, AM Srivastava, The Cosmological Kibble Mechanism in the Laboratory – String Formation in Liquid-Crystals, Science 263 (5149), 943–944 (1994). Bowick, MJ and A Travesset, The statistical mechanics of membranes, Physics Reports 344 (4-6), 255–308 (2001). Bowick, MJ, and AR Bausch, A Cacciuto, AD Dinsmore, MF Hsu, DR Nelson, ... Grain boundary scars and spherical crystallography, Science 299 (5613), 1716–1718 (2003). Bowick, MJ and L Giomi, Two-Dimensional Matter: Order, Curvature and Defects, Advances in Physics 58 (5), 449–563 (2009). Bowick MJ, and L Giomi, X Ma, MC Marchetti, Defect annihilation and proliferation in active nematics, Physical Review Letters 110 (22), 228101 (2013). Bowick, MJ, and FC Keber, E Loiseau, T Sanchez, SJ DeCamp, L Giomi, ... Topology and dynamics of active nematic vesicles, Science 345 (6201), 1135–1139 (2014). References External links Group webpage at Kavli Institute for Theoretical Physics Mark Bowick on Google Scholar 1957 births 20th-century American physicists 21st-century American physicists Living people People from Rotorua California Institute of Technology alumni University of California, Santa Barbara faculty American particle physicists American theoretical physicists Syracuse University faculty Fellows of the American Physical Society University of Canterbury alumni Topological dynamics American condensed matter physicists Fellows of the American Association for the Advancement of Science 20th-century New Zealand physicists
Mark Bowick
[ "Mathematics" ]
1,066
[ "Topology", "Topological dynamics", "Dynamical systems" ]
66,082,562
https://en.wikipedia.org/wiki/Lutetium%20phthalocyanine
Lutetium phthalocyanine is a coordination compound derived from lutetium and two phthalocyanine ligands. It was the first known example of a molecule that is an intrinsic semiconductor. It exhibits electrochromism, changing color when subjected to a voltage. Structure Lutetium phthalocyanine is a double-decker sandwich compound consisting of a lutetium ion coordinated to two phthalocyanine ligands (the conjugate bases of phthalocyanine). The rings are arranged in a staggered conformation. The extremities of the two ligands are slightly distorted outwards. The complex features a non-innocent ligand, in the sense that the macrocycles carry an extra electron. It is a free radical, with the unpaired electron sitting in a half-filled molecular orbital between the highest occupied and lowest unoccupied orbitals, allowing its electronic properties to be finely tuned. Properties The compound, along with many substituted derivatives such as the alkoxy-methyl derivative, can be deposited as a thin film with intrinsic semiconductor properties; these properties arise from its radical nature and its low reduction potential compared to other metal phthalocyanines. This initially green film exhibits electrochromism; the oxidized form is red, whereas the reduced form is blue and the next two reduced forms are dark blue and violet, respectively. The green/red oxidation cycle can be repeated over 10,000 times in aqueous solution with dissolved alkali metal halides before it is degraded by hydroxide ions; the green/blue redox couple degrades faster in water. Electrical properties Lutetium phthalocyanine and other lanthanide phthalocyanines are of interest in the development of organic thin-film field-effect transistors. Derivatives can be selected to change color in the presence of certain molecules, such as in gas detectors; for example, a thioether derivative changes from green to brownish-purple in the presence of NADH. References Phthalocyanines Lutetium complexes Chemical tests Organic semiconductors Sandwich compounds Free radicals
Lutetium phthalocyanine
[ "Chemistry", "Biology" ]
406
[ "Free radicals", "Molecular electronics", "Semiconductor materials", "Chemical tests", "Sandwich compounds", "Senescence", "Biomolecules", "Organometallic chemistry", "Organic semiconductors" ]
66,087,885
https://en.wikipedia.org/wiki/Ketamine%20in%20society%20and%20culture
Ketamine has had a wide variety of medicinal and recreational uses since its discovery in 1962. Generic names Ketamine is the English generic name of the drug and its and , while ketamine hydrochloride is its , , , and . Its generic name in Spanish and Italian and its are ketamina, in French and its are , in German is , and in Latin is . The S(+) stereoisomer of ketamine is known as esketamine, and this is its while esketamine hydrochloride is its . Brand names Ketamine is sold throughout the world primarily under the brand name Ketalar. It is also marketed under a variety of other brand names, including Calypsol, Ketamin, Ketamina, Ketamine, Ketaminol, Ketanest, Ketaset, Tekam, and Vetalar among others. Esketamine is sold mainly under the brand names Ketanest, Ketanest-S, and Spravato. Ketamine clinics After the publication of the NIH-run antidepressant clinical trial, clinics began opening in which intravenous ketamine is given for depression. This practice is an off-label use of IV ketamine in the United States, though the intranasal version of esketamine has been approved by the FDA for treatment of depression. In 2015 there were about 60 such clinics in the US; the procedure was not covered by insurance, and people paid between $400 and $1700 out of pocket for a treatment. It was estimated in 2018 that there were approximately 300 of these clinics. The number of clinics has been increasing rapidly. A chain of such clinics in Australia, run by Aura Medical Corporation, was closed down by regulatory authorities in 2015. They found that the clinics' marketing was not supported by scientific research and that the chain sent patients home with ketamine and needles to administer infusions to themselves. Legal status While ketamine is legally marketed in many countries worldwide, it is also a controlled substance in many countries. Australia In Australia, ketamine is listed as a Schedule 8 controlled drug under the Poisons Standard (October 2015). Schedule 8 drugs are outlined in the Poisons Act 1964 as "Substances which should be available for use but require restriction of manufacture, supply, distribution, possession and use to reduce abuse, misuse and physical or psychological dependence." Canada In Canada, ketamine has been classified since 2005 as a Schedule I narcotic. Hong Kong In Hong Kong, since 2000, ketamine has been regulated under Schedule 1 of the Hong Kong Chapter 134 Dangerous Drugs Ordinance. It can be used legally only by health professionals, for university research purposes, or with a physician's prescription. Taiwan By 2002, ketamine was classified as class III in Taiwan; given the recent rise of its prevalence in East Asia, however, rescheduling into class I or II is being considered. India In December 2013, the government of India, in response to rising recreational use and the use of ketamine as a date rape drug, added it to Schedule X of the Drug and Cosmetics Act, requiring a special license for sale and the maintenance of records of all sales for two years. United Kingdom In the United Kingdom, it became labeled a Class C drug on 1 January 2006. On 10 December 2013, the UK Advisory Council on the Misuse of Drugs (ACMD) recommended that the government reclassify ketamine to become a Class B drug. On 12 February 2014 the Home Office announced it would follow this advice "in light of the evidence of chronic harms associated with ketamine use, including chronic bladder and other urinary tract damage".
The UK Minister of State for Crime Prevention, Norman Baker, responding to the ACMD's advice, said the issue of ketamine's rescheduling for medical and veterinary use would be addressed "separately to allow for a period of consultation". United States Because of the increase in recreational use, ketamine was placed in Schedule III of the United States Controlled Substance Act in August 1999. Recreational use Recreational use of ketamine was documented in the early 1970s in underground literature (e.g., The Fabulous Furry Freak Brothers). It was used in psychiatric and other academic research through the 1970s, culminating in 1978 with the publishing of psychonaut John Lilly's The Scientist, and Marcia Moore and Howard Alltounian's Journeys into the Bright World, which documented the unusual phenomenology of ketamine intoxication. The incidence of non-medical ketamine use increased through the end of the century, especially in the context of raves and other parties. Its emergence as a club drug differs from other club drugs (e.g., MDMA), however, due to its anesthetic properties (e.g., slurred speech, immobilization) at higher doses; in addition, reports are common of ketamine being sold as "ecstasy". In the 1993 book E for Ecstasy (about the uses of the street drug Ecstasy in the UK), the writer, activist, and ecstasy advocate Nicholas Saunders highlighted test results showing that certain consignments of the drug also contained ketamine. Consignments of ecstasy known as "strawberry" contained what Saunders described as a "potentially dangerous combination of ketamine, ephedrine, and selegiline", as did a consignment of "Sitting Duck" ecstasy tablets. The use of ketamine as part of a "post-clubbing experience" has also been documented. Ketamine's rise in the dance culture was most rapid in Hong Kong by the end of the 1990s. Ketamine use as a recreational drug has been implicated in deaths globally, with more than 90 deaths in England and Wales in the years of 2005–2013. They include accidental poisonings, drownings, traffic accidents, and suicides. The majority of deaths were among young people. This has led to increased regulation (e.g., upgrading ketamine from a Class C to a Class B banned substance in the U.K.). Unlike the other well-known dissociatives phencyclidine (PCP) and dextromethorphan (DXM), ketamine is very short-acting. It takes effect within about 10 minutes, while its hallucinogenic effects last 60 minutes when insufflated or injected, and up to two hours when ingested orally. At subanesthetic doses—under-dosaged from a medical point of view—ketamine produces a dissociative state, characterised by a sense of detachment from one's physical body and the external world which is known as depersonalization and derealization. At sufficiently high doses, users may experience what is called the "K-hole", a state of dissociation with visual and auditory hallucinations. John C. Lilly, Marcia Moore, D. M. Turner and David Woodard (amongst others) have written extensively about their own entheogenic use of, and psychonautic experiences with, ketamine. Turner died prematurely due to drowning during presumed unsupervised ketamine use. In 2006 the Russian edition of Adam Parfrey's Apocalypse Culture II was banned and destroyed by authorities owing to its inclusion of an essay by Woodard about the entheogenic use of, and psychonautic experiences with, ketamine. Because of its ability to cause confusion and amnesia, ketamine has been used for date rape. 
Slang terms Production for recreational use has been traced to 1967, when it was referred to as "mean green" and "rockmesc". Recreational names for ketamine include "Special K", "K", "Kitty", "Ket", "K2", "Vitamin K", "Super K", "Jet", "Super acid", "Mauve", "Special LA coke", "Purple", "Cat Valium", "Keller", "Kelly's Day", "New ecstasy", "Psychedelic heroin", "bump", "Majestic". A mixture of ketamine with cocaine is called "Calvin Klein" or "CK1". In Hong Kong, where illicit use of the drug is popular, ketamine is colloquially referred to as "kai-jai". Usage North America According to the ongoing Monitoring the Future study conducted by University of Michigan, prevalence rates of recreational ketamine use among American secondary school students (grades 8, 10, and 12) have varied between 0.8 and 2.5% since 1999, with recent rates at the lower end of this range. The 2006 National Survey on Drug Use and Health (NSDUH) reports a rate of 0.1% for persons ages 12 or older with the highest rate (0.2%) in those ages 18–25. Further, 203,000 people are estimated to have used ketamine in 2006, and an estimated 2.3 million people used ketamine at least once in their life. A total of 529 emergency department visits in 2009 were ketamine-related. In 2003, the U.S. Drug Enforcement Administration conducted Operation TKO, a probe into the quality of ketamine being imported from Mexico. As a result of operation TKO, U.S. and Mexican authorities shut down the Mexico City company Laboratorios Ttokkyo, which was the biggest producer of ketamine in Mexico. According to the DEA, over 80% of ketamine seized in the United States is of Mexican origin. As of 2011, it was mostly shipped from places like India, as cheap in cost as $5/gram. The World Health Organization Expert Committee on Drug Dependence, in its thirty-third report (2003), recommended research into ketamine's recreational use due to growing concerns about its rising popularity in Europe, Asia, and North America. Europe Cases of ketamine use in club venues have been observed in the Czech Republic, France, Italy, Hungary, The Netherlands, and the United Kingdom. Additional reports of use and dependence have been reported in Poland and Portugal. Australia Australia's 2019 National Drug Strategy Household Survey report shows a prevalence of recent ketamine use of 0.3% in 2004, 0.2% in 2007 and 2010, 0.4% in 2016 and 0.9% in 2019 in persons aged 14 or older. Asia In China, the small village of Boshe in eastern Guangdong was confirmed as a main production centre in 2013 when it was raided. Established by the Hong Kong Narcotics Division of the Security Bureau, the Central Registry of Drug Abuse (CRDA) maintains a database of all the illicit drug users who have come into contact with law enforcement, treatment, health care, and social organizations. The compiled data are confidential under The Dangerous Drugs Ordinance of Hong Kong, and statistics are made freely available online on a quarterly basis. Statistics from the CRDA show that the number of ketamine users (all ages) in Hong Kong has increased from 1605 (9.8% of total drug users) in 2000 to 5212 (37.6%) in 2009. Increasing trends of ketamine use among illicit drug users under the age of 21 were also reported, rising from 36.9% of young drug users in 2000 to 84.3% in 2009. 
A survey conducted among school-attending Taiwanese adolescents reported prevalence rates of 0.15% in 2004, 0.18% in 2005, and 0.15% in 2006 in middle-school (grades 7 and 9) students; in Taiwanese high-school (grades 10 and 12) students, prevalence was 1.13% in 2004, 0.66% in 2005, and 0.44% in 2006. From the same survey, a large portion (42.8%) of those who reported ecstasy use also reported ketamine use. Ketamine was the second-most used illicit drug (behind ecstasy) in absconding Taiwanese adolescents as reported by a multi-city street outreach survey. In a study comparing the reporting rates between web questionnaires and paper-and-pencil questionnaires, ketamine use was reported a higher rate in the web version. Urine samples taken at a club in Taipei, Taiwan, showed high rates of ketamine use at 47.0%; this prevalence was compared with that of detainees suspected of recreational drug use in the general public, of which 2.0% of the samples tested positive for ketamine use. Law enforcement In the late 2010s and early 2020s, law enforcement agencies in some U.S. states began directing paramedics to use ketamine to sedate people under arrest, sometimes under the auspices of treatment for the controversial diagnosis "excited delirium". The American Society of Anesthesiologists and American College of Emergency Physicians oppose the use of ketamine or any similar agent to incapacitate someone solely for a law enforcement purpose. References Drugs Drug culture
Ketamine in society and culture
[ "Chemistry" ]
2,704
[ "Pharmacology", "Chemicals in medicine", "Drugs", "Products of chemical industry" ]
59,626,627
https://en.wikipedia.org/wiki/European%20Project%20on%20Ocean%20Acidification
The European Project on Ocean Acidification (EPOCA) was Europe's first major research initiative and the first large-scale international research effort devoted to studying the impacts and consequences of ocean acidification. EPOCA was an EU FP7 Integrated Project active during four years, from 2008 to 2012. The EPOCA consortium brought together more than 160 researchers from 32 institutes in 10 European countries (Belgium, France, Germany, Iceland, Italy, The Netherlands, Norway, Sweden, Switzerland, and the United Kingdom) and was coordinated by the French Centre National de la Recherche Scientifique (CNRS) with the project office based at the Institut de la Mer de Villefranche, France (formerly Observatoire Océanologique de Villefranche). Scope The research carried out through EPOCA was structured around four themes : Theme 1 investigated the changes in ocean chemistry and biogeography across space and time. Paleo-reconstruction methods were used on several archives, including foraminifera and deep-sea corals, to determine the past variability in ocean chemistry and to tie these to present-day chemical and biological observations; Theme 2 studied the sensitivity of marine organisms, communities and ecosystems to ocean acidification. Key climate-relevant biogeochemical processes such as calcification, primary production and nitrogen fixation were investigated using a large array of techniques, ranging from molecular tools to physiological and ecological approaches. Perturbation experiments were carried out both in the laboratory and in the field, including a major large-scale offshore mesocosm experiment in Svalbard in 2010 Theme 3 focused on the integration of the results from Themes 1 and 2 in biogeochemical, sediment, and coupled ocean-climate models to better understand and project the responses of the Earth system to ocean acidification. Special attention was paid to feedbacks of physiological changes on the carbon, nitrogen, sulfur and iron cycles and how these changes will affect and be affected by future climate change; Finally, Theme 4 synthesized the results from Themes 1-3 and assessed uncertainties, risks and thresholds ("tipping points") related to ocean acidification at scales ranging from subcellular to ecosystem and local to global scales. A second focus of this theme was to communicate the findings to fellow scientists but also to policy makers, media, schools and the general public. Legacy EPOCA significantly contributed to advancing the state of knowledge on ocean acidification and its impact on marine organisms and ecosystems. The project produced more than 200 research articles, equivalent to 20% of the peer-reviewed scientific literature on ocean acidification published during the period 2009-2012. 
EPOCA leaves behind products still widely used by the international scientific community working on ocean acidification, such as: EPOCA scientists designed and developed the R software package seacarb, which calculates parameters of the seawater carbonate system and includes functions useful for ocean acidification research; EPOCA led the production of the community-reviewed “Guide to best practices in ocean acidification research and data reporting”, published in 2010 as a collaborative effort of EPOCA and international colleagues to provide guidance on design of ocean acidification experiments and to facilitate comparisons of studies; and EPOCA maintained and pioneered several resources made available to the international ocean acidification research community: a news stream on ocean acidification launched in 2006 by Jean-Pierre Gattuso, which provides daily information on the latest scientific articles, media coverage, meetings, and job and training opportunities; a bibliographic database including research articles, books and book chapters with allocated keywords, launched in 1995 by Jean-Pierre Gattuso; and a compilation of data from peer-reviewed studies investigating a biological response to ocean acidification. References External links Oceanography Marine biology Climatological research International climate change organizations
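The kind of carbonate-system arithmetic that tools such as seacarb automate can be illustrated with a minimal sketch. The Python example below is not the seacarb interface; it is a hedged illustration that solves for the hydrogen ion concentration from dissolved inorganic carbon (DIC) and carbonate alkalinity by bisection, using two rough, assumed equilibrium constants and ignoring borate, water, and other minor alkalinity contributions.

import math

# Illustrative surface-seawater dissociation constants (assumptions, not seacarb values).
K1 = 1.4e-6   # carbonic acid, first dissociation
K2 = 1.1e-9   # bicarbonate, second dissociation

def carbonate_alkalinity(h, dic):
    """Carbonate alkalinity [HCO3-] + 2[CO3--] for a given [H+] and DIC (mol/kg)."""
    d = h * h + K1 * h + K1 * K2
    hco3 = dic * K1 * h / d
    co3 = dic * K1 * K2 / d
    return hco3 + 2.0 * co3

def solve_ph(dic, alk, lo=1e-12, hi=1e-2, iters=100):
    """Bisect on [H+]; alkalinity decreases monotonically as [H+] rises."""
    for _ in range(iters):
        mid = math.sqrt(lo * hi)              # geometric mean suits the log scale
        if carbonate_alkalinity(mid, dic) > alk:
            lo = mid                          # too alkaline, need more H+
        else:
            hi = mid
    return -math.log10(math.sqrt(lo * hi))

# Roughly modern surface-ocean values (mol/kg), chosen only for illustration.
print(round(solve_ph(dic=2.0e-3, alk=2.3e-3), 2))   # prints a pH near 8.2

Running the sketch with a higher DIC at fixed alkalinity (mimicking CO2 uptake) returns a lower pH, which is the acidification effect the project studied; a full treatment, as in seacarb, also accounts for temperature, salinity, pressure, and the non-carbonate alkalinity terms omitted here.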
European Project on Ocean Acidification
[ "Physics", "Biology", "Environmental_science" ]
768
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics", "Marine biology" ]
59,627,302
https://en.wikipedia.org/wiki/Symmorphosis
Symmorphosis is the regulation of biological units to produce an optimal outcome. Symmorphosis refers to a quantitative match of design and function within a functional system of an organism. Symmorphosis can be broken down into three predictions that are required for organs to evolve within a species. This proposes that if organs were matched structurally and functionally, and paired with the correct energy and minerals, the body would create an organ of optimal design. Some examples of this in the human body could be how the respiratory system distributes oxygen, how bones are structured to withstand stress, how blood vessels are designed to distribute blood throughout the body without using a lot of energy, or even how the body adjusts as a person becomes more physically fit and takes on more cardiovascular exercise, so that it can meet higher functional demands. The use of symmorphosis can allow other fields of science to work with the field of evolutionary biology to better understand adaptation. Requirements For symmorphosis to occur, three predictions or guidelines must be in place and operating at the same time. These three predictions work together to let an organ or organ system function at full potential. Structure When looking at the theory of symmorphosis, one must consider if the design in the organism is fully optimized. The structural design in terms of symmorphosis means that the organ is designed to allow full capacity of its function and can allow for adjustments to occur when necessary. This design must contain a sufficient amount of economy material for the organ needed. In this circumstance, economy material means the careful management of resources such as tissues. Capacity The functional capacity is the maximal capacity determined by all functional units working together. Functional capacity is overall determined by the structural design. Once the design is optimized in terms of biological materials, then the structure must be taken into account. The structure of an organ determines the maximal functional capacity and the adjustments required for morphogenesis—the process that causes an organism to create its shape—to occur. Performance The third prediction states that if prediction two works in intermediate steps to create a function of an individual organ, then each step also helps create the upper limit of the function. This means that if multiple units work together in multiple steps, they function together to create an upper limit (e.g., Vo2max) in terms of function or ability. Within the respiratory system A common form of testing symmorphosis between species of mammals is to use comparative biology. The first system to use the proposed theory for symmorphosis is the oxygen pathway for mammals. The original experimental method for symmorphosis was used to show whether the design of the organs was related to the static demands of the mammalian respiratory system. The respiratory system is a good example to study because it has one main function, the function has a measurable limit, the limit is variable, it has a sequence of structure, and each step of the sequence has functional parameters that are not fixed. A common pathway within the respiratory system is the oxygen pathway. This pathway is used because it is a good representation for most mammalian species: it involves several organs that link together, and the overall function has a measurable upper limit. 
In particular, this testing helps identify structural elements that differ so they can carry the maximum amount of oxygen throughout the body. Vo2max The upper limit for the oxygen pathway is called the Vo2max. Vo2max is the maximal rate at which the body's systems can take in, transport, and use oxygen. Vo2max can vary among individuals due to allometric variation (differences in body mass), adaptive variation (differences in lifestyles), and induced variation (the amount of cardiovascular exercise). Variation in any of these three respects should lead researchers to expect different parameters. The oxygen cascade is one system with clear limits, and can help determine the Vo2max by components such as oxygen supply to the skeletal muscle mitochondria and the demand of oxygen by these skeletal muscle mitochondria. If oxygen is not transferred via skeletal muscle mitochondria, it can then be transferred across muscle capillaries. Evolutionary implications Symmorphosis can be used as an analytical framework that helps other fields of science—such as biochemistry, physiology, and astronomy—work with fields such as cell, molecular, and evolutionary biology. Combining these fields helps researchers better understand past biological adaptations. In evolution, natural selection can hinder the design when looking at the guidelines for symmorphosis. Natural selection can alter the phenotype to increase fitness of a species. In doing this, natural selection can cause adaptations that change the optimal structural design. When the optimal structural design changes, it changes the amount of economy material that must be used, which changes the predictions. Critiques An issue with symmorphosis is the problem of having an optimal design for an organ if the organ contains multiple functions. An organ that performs multiple functions must compromise optimal performance of one function to perform another optimally. Adding these complex components together dramatically decreases the chance that everything will optimally match. An example of this in mammals is the lungs. Researchers now claim that the lungs are an exception when considering symmorphosis: lungs typically are only partially adjusted to maximal oxygen capacity in terms of adaptive and allometric variation, which causes a fluctuation in these values. In terms of symmorphosis, the capacity of each step of the oxygen cascade should match the demand of Vmax. In most cases this theory holds true, except when an individual exceeds Vmax. When Vmax is exceeded, developmental constraints as well as design constraints arise in terms of symmorphosis. When this occurs there is an unmatched capacity; although the capacities may be similar, they do not align with the predictions of symmorphosis. References Evolutionary biology Physiology
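The idea that the serial steps of the oxygen pathway should be quantitatively matched can be made concrete with a toy calculation. The sketch below is an assumption-laden illustration, not a physiological model: it treats each transfer step as a conductance in series (flux equals conductance times the partial-pressure difference), uses made-up conductance values and an arbitrary driving pressure, and simply shows that enlarging a single step yields little gain in the overall maximum unless the other steps are enlarged with it.

def total_conductance(steps):
    # Series steps: overall conductance is the reciprocal of summed resistances.
    return 1.0 / sum(1.0 / g for g in steps.values())

def max_flux(steps, delta_p=100.0):
    """delta_p is the overall O2 partial-pressure drop (arbitrary units)."""
    return total_conductance(steps) * delta_p

# Hypothetical, roughly matched conductances for four serial steps (assumed values).
matched = {"lung": 1.0, "circulation": 1.2, "capillary": 1.1, "mitochondria": 1.0}

# Oversizing one step alone barely raises the ceiling...
lung_only = dict(matched, lung=3.0)
# ...whereas scaling every step together raises it proportionally.
all_scaled = {k: 1.5 * v for k, v in matched.items()}

for label, steps in [("matched", matched), ("big lung only", lung_only),
                     ("all steps +50%", all_scaled)]:
    print(f"{label:>15}: max flux = {max_flux(steps):.1f}")

In this toy, tripling the "lung" conductance raises the ceiling by only about a fifth, while scaling every step by 50% raises it by exactly 50%, which is the economy-of-design intuition behind the matching predictions above.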
Symmorphosis
[ "Biology" ]
1,200
[ "Evolutionary biology", "Physiology" ]
59,627,310
https://en.wikipedia.org/wiki/Hybrid%20incompatibility
Hybrid incompatibility is a phenomenon in plants and animals, wherein offspring produced by the mating of two different species or populations have reduced viability and/or are less able to reproduce. Examples of hybrids include mules and ligers from the animal world, and subspecies of the Asian rice crop Oryza sativa from the plant world. Multiple models have been developed to explain this phenomenon. Recent research suggests that the source of this incompatibility is largely genetic, as combinations of genes and alleles prove lethal to the hybrid organism. Incompatibility is not solely influenced by genetics, however, and can be affected by environmental factors such as temperature. The genetic underpinnings of hybrid incompatibility may provide insight into factors responsible for evolutionary divergence between species. Background Hybrid incompatibility occurs when the offspring of two closely related species are not viable or suffer from infertility. Charles Darwin posited that hybrid incompatibility is not a product of natural selection, stating that the phenomenon is an outcome of the hybridizing species diverging, rather than something that is directly acted upon by selective pressures. The underlying causes of the incompatibility can be varied: earlier research focused on things like changes in ploidy in plants. More recent research has taken advantage of improved molecular techniques and has focused on the effects of genes and alleles in the hybrid and its parents. Dobzhansky-Muller model The first major breakthrough in the genetic basis of hybrid incompatibility is the Dobzhansky-Muller model, a combination of findings by Theodosius Dobzhansky and Joseph Muller between 1937 and 1942. The model provides an explanation as to why a negative fitness effect like hybrid incompatibility is not selected against. By hypothesizing that the incompatibility arose from alterations at two or more loci, rather than one, the incompatible alleles are in one hybrid individual for the first time rather than throughout the population - thus, hybrids that are infertile can develop while the parent populations remain viable. The negative fitness effects of infertility are not present in the original population. In this way, hybrid infertility contributes in some part to speciation by ensuring that gene flow between diverging species remains limited. Further analysis of the issue has supported this model, although it does not include conspecific genic interactions, a potential factor that more recent research has begun to look in to. Gene identification Decades after the research of Dobzhansky and Muller, the specifics of hybrid incompatibility were explored by Jerry Coyne and H. Allen Orr. Using introgression techniques to analyze the fertility in Drosophila hybrid and non-hybrid offspring, specific genes that contribute to sterility were identified; a study by Chung-I Wu which expanded on Coyne and Orr's work found that the hybrids of two Drosophila species were made sterile by the interaction of around 100 genes. These studies widened the scope of the Dobzhansky-Muller model, who thought it likely that more than two genes would be responsible. The ubiquity of Drosophila as a model organism has allowed many of the sterility genes to be sequenced in the years since Wu's study. Modern directions With modern molecular techniques, researchers have been able to more accurately identify the underlying genetic causes of hybrid incompatibility. 
This has led to both the development of expansions to the Dobzhansky-Muller model. Recent research has also explored the possibility of external influences on sterility as well. The "snowball effect" An extension of the Dobzhansky-Muller model is the "snowball effect"; an accumulation of incompatible loci due to increased species divergence. Since the model posits that sterility is due to negative allelic interaction between the hybridizing species, as species become more diverged it follows that more negative interactions should develop. The snowball effect states that the number of these incompatibilities will increase exponentially over the time of divergence, particularly when more than two loci contribute to the incompatibility. This concept has been exhibited in tests with the flowering plant genus Solanum, with the findings supporting the genetic underpinnings of Dobzhansky-Muller: "Overall, our results indicate that the accumulation of sterility loci follows a different trajectory from the accumulation of loci for other quantitative species differences, consistent with the unique genetic basis expected to underpin species reproductive isolating barriers. ...In doing so, we uncover direct empirical support for the Dobzhansky-Muller model of hybrid incompatibility, and the snowball prediction in particular." Environmental influences Though the primary causes of hybrid incompatibility appear to be genetic, external factors may play a role as well. Studies focused primarily on model plants have found that the viability of hybrids can be dependent on environmental influence. Several studies on rice and Arabidopsis species identify temperature as an important factor in hybrid viability; generally, low temperatures seem to cause negative hybrid symptoms to be expressed while high temperatures suppress them, although one rice study found the opposite to be true. There has also been evidence in an Arabidopsis species that in poor environmental conditions (in this case, high temperatures), hybrids did not express negative symptoms and are viable with other populations. When environmental conditions return to normal, however, the negative symptoms are expressed and the hybrids are once again incompatible with other populations. Lynch-Force model Though a multitude of evidence supports the Dobzhansky-Muller model of hybrid sterility and speciation, this does not rule out the possibility that other situations besides the inviable combination of benign genes can lead to hybrid incompatibility. One such situation is incompatibility by way of gene duplication, or the Lynch and Force model (put forth by Michael Lynch and Allan Force in 2000). When gene duplication occurs, there is a possibility that a redundant gene can be rendered non-functional over time by mutations. From Lynch and Force's paper:"The divergent resolution of genomic redundancies, such that one population loses function from one copy while the second population loses function from a second copy at a different chromosomal location, leads to chromosomal repatterning such that gametes produced by hybrid individuals can be completely lacking in functional genes for a duplicate pair." This hypothesis is relatively recent compared to Dobzhansky-Muller, but has support as well. Epigenetic influences A possible contributor to hybrid incompatibility that fits with the Lynch and Force model better than the Dobzhansky-Muller model is epigenetic inheritance. 
Epigenetics broadly refers to heritable elements that affect offspring phenotype without adjusting the DNA sequence of the offspring. When a particular allele has been epigenetically modified, it is referred to as an epiallele. A study found that an Arabidopsis gene is not expressed because it is a silent epiallele, and when this epiallele is inherited by hybrids in combination with a mutant gene at the same locus, the hybrid is inviable. This fits with the Lynch and Force model because the heritable epiallele, ordinarily not an issue in non-hybrid populations with non-epiallele copies of the gene, becomes problematic when it is the only copy of the gene in the hybrid population. A study in Capsella shows that the dosage of maternal small-interfering RNAs can contribute to hybrid incompatibility between closely related plant species. See also Hybrid inviability References Hybridisation (biology) Breeding Biology terminology
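The "snowball effect" described above lends itself to a simple simulation. The following Python sketch is illustrative only, with made-up parameter values: two lineages fix substitutions at random, each between-lineage pair of derived alleles is assumed to be incompatible with a small fixed probability, and the expected number of Dobzhansky-Muller incompatibilities grows faster than linearly with divergence (roughly with the square of the number of substitutions in this toy), which is the snowball prediction.

import random

def expected_incompatibilities(total_substitutions, p_incompatible=0.01, trials=200):
    """Average number of pairwise Dobzhansky-Muller incompatibilities after
    total_substitutions have accumulated, split between two diverging lineages."""
    counts = []
    for _ in range(trials):
        k1 = total_substitutions // 2          # substitutions fixed in lineage 1
        k2 = total_substitutions - k1          # substitutions fixed in lineage 2
        # Each (lineage-1 allele, lineage-2 allele) pair meets for the first
        # time in a hybrid and is incompatible with probability p_incompatible.
        n = sum(1 for _ in range(k1 * k2) if random.random() < p_incompatible)
        counts.append(n)
    return sum(counts) / len(counts)

for subs in (10, 20, 40, 80):
    print(subs, round(expected_incompatibilities(subs), 1))
# Doubling the number of substitutions roughly quadruples the incompatibilities.

Because the incompatible combinations only ever appear in hybrids, the parental populations in this toy never pay the fitness cost, which is the point of the Dobzhansky-Muller argument summarized earlier in the article.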
Hybrid incompatibility
[ "Biology" ]
1,586
[ "Behavior", "Breeding", "nan", "Reproduction" ]
59,627,314
https://en.wikipedia.org/wiki/Biotic%20homogenization
Biotic homogenization is the process by which two or more spatially distributed ecological communities become increasingly similar over time. This process may be genetic, taxonomic, or functional, and it leads to a loss of beta (β) diversity. While the term is sometimes used interchangeably with "taxonomic homogenization", "functional homogenization", and "genetic homogenization", biotic homogenization is actually an overarching concept that encompasses the other three. This phenomenon stems primarily from two sources: extinctions of native and invasions of nonnative species. While this process pre-dates human civilization, as evidenced by the fossil record, and still occurs due to natural impacts, it has recently been accelerated due anthropogenic pressures. Biotic homogenization has become recognized as a significant component of the biodiversity crisis, and as such has become of increasing importance to conservation ecologists. Overview Homogenization versus differentiation Homogenization is the process of assemblages becoming increasingly similar: the reverse is the process of assemblages becoming increasingly different over time, a process known as "biotic differentiation". Just as biotic homogenization has genetic, taxonomic, and functional components, differentiation can occur at any of these levels of organization. Alpha and beta diversity Understanding homogenization requires an understanding of the difference between alpha (α) and beta (β) diversity. Alpha diversity refers to diversity within a community: it addresses how many species are present. A community with high α diversity has many species present. Beta diversity compares multiple communities. For there to be high β diversity, two communities would have to have high α diversity but have different, unique species compositions. Species introduction, extinction, and richness When organisms are introduced to a habitat, be it naturally or artificially, overall species richness increases (assuming no other species are simultaneously lost). Similarly, when species become extinct, species richness decreases, once again assuming no other alterations to the assemblage. As such, when there is net increase in species richness, a common misconception is to assume that differentiation has occurred. This, however, may or may not be the case. While an increase in species richness does indicate an increase in α diversity, homogenization and differentiation specifically address β diversity. Positive relationships with richness While it may seem counterintuitive, there are times when increased species richness (α diversity) also leads to increased homogenization. If we imagine an example of two communities: community one contains four species (A, B, C, and D). Community two contains three species (C, D, and E). While there is overlap between these two communities, they are certainly different. However, if community two undergoes drastic change where E becomes extinct while A and B are simultaneously introduced, it now demonstrates higher species richness (greater α diversity), because there are now four species present instead of three. Yet, at the same time, communities one and two have become identical, removing any β diversity: they have homogenized. This particular trend is frequently observed in studies of biotic homogenization. Sometimes decreased species richness can lead to greater β diversity and differentiation. 
If, in the example above, community one had lost species D and community two had lost species C, both communities would have lower α diversity because each would have one less species. However, the two communities would have no species in common, which would dramatically increase the β diversity, leading to differentiation. Negative relationships with richness In some cases, increased α diversity could theoretically lead to increased β diversity and differentiation. When we return to the previous example, community one still contains four species (A, B, C, and D) and community two contains three (C, D, and E). This time, C goes extinct in community two, but F and G are introduced at the same time. Community two now has greater richness and therefore greater α diversity. It also only now has one species in common with community one instead of two species. The two communities are now more different from each other than they were initially, indicating greater β diversity and therefore biotic differentiation. Decreased richness could also lead to homogenization. If A were to go extinct in community one and E were to go extinct in community two, then both communities would have lower richness, since they both would be out one species. There would also be greater overlap in species composition between the two communities, indicating lost β diversity and increased homogenization. Pressures leading to homogenization Homogenization can result from either anthropomorphic or natural pressures. Many cases of species introductions are the result of either unintentional or intentional introduction of species by humans, be it for the pet trade, recreation, or agriculture. Urbanization can also have profound impacts on biota, leading to changes in assemblages. Natural selection and other evolutionary forces that lead to extinction can also potentially lead to homogenization. Sometimes, previously isolated populations can become exposed to each other naturally. Species interactions can also cause local extinctions, be the relationship predatory or pathogenic. Components Genetic Genetic homogenization refers to the underlying molecular processes involved in biotic homogenization. It typically results from hybridization with non-native species, leading to decreased variation in the gene pool. These hybridization events may be either interspecific or intraspecific. Genetic homogenization can be analyzed in terms of allelic frequencies, which is accomplished through a comparison of how common specific genotypes are. If an allele occurs at a similar frequency between two populations, then there is greater homogenization present. Other evolutionary forces such as founder effects and bottleneck effects can also lead to genetic homogenization. Taxonomic Taxonomic homogenization is perhaps the most well-known and broadly studied component of biotic homogenization, and the two terms are often used interchangeably. It is most strictly defined as a loss in β diversity, meaning that multiple communities are increasing in taxonomic similarity over time. A common misconception with taxonomic homogenization is that it represents a loss in α diversity, or that it leads to decreased species richness. However, assemblages under taxonomic homogenization may actually display an increase in α diversity, a phenomenon that has been observed in plant, animal, and microbial groups. 
Functional Functional homogenization refers to the increase in similarity of function across a community: that is, similarity in the roles filled by the species. In an ecosystem that has undergone functional homogenization, there are increased species that fill the same functional role or niche, with fewer species occupying unique niches. Analysis Measuring biotic homogenization ultimately requires measuring β diversity. Taxonomic homogenization is typically studied by comparing two species pools that may be separated spatially, temporally, or both. Researchers can choose to use extant pools only or pools containing both extant species and reconstructed historical species. It is not unusual to compare relationships between α diversity and β diversity in a population. Examples Most studies of biotic homogenization have typically focused on fishes and vascular plants. More recently, however, homogenization has been demonstrated in other taxonomic groups. Fossil Record The fossil record gives multiple prehistoric examples of biotic homogenization. For example, the Panamanian land bridge between North and South America allowed previously isolated assemblages to homogenize. However, prehistoric rates of homogenization were at a far slower rate than they are currently. Additionally, organisms have been able to move far greater distances due to anthropomorphic impacts than they ever have done naturally. Animals Birds Both taxonomic and functional homogenization have been investigated in birds. Certain island studies have demonstrated that on a small spatial scale, that avian taxonomic homogenization occurs far more rapidly than it does on a larger spatial scale. In France, communities have been recorded as becoming increasingly functionally similar over the course of two decades. Interestingly, in other French studies, it has been noted that there is not a temporal relationship between functional and taxonomic homogenization, a trend that had been observed in freshwater fishes. In urban landscapes, the introduction of non-native species such as rock doves and European starlings has led to increased homogenization of urban avian communities. Many species considered "urban exploiters" also contribute to biotic homogenization in urban environments, in part due to their ability to utilize anthropogenic resources. There have been predictions that avian taxonomic homogenization is occurring on the global scale, which could lead to future mass extinctions of avifauna. Mammals Ungulates were studied at both a global and local scale over a span of forty years, ending in 2005. On a global scale, it was found that homogenization had increased by 2%, and that introductions contributed more to this change than did extinctions. In a more localized study in South Africa, homogenization increased by 8%. In this example, species richness increased as homogenization increased. Fishes Freshwater fishes were among the first taxonomic groups to be used in homogenization studies, and trends have been observed on several continents. Homogenization in freshwater fishes typically stems from stocking of nonnative fishes for recreational purposes. In a more specific example, there was a 2015 study in Chile, where freshwater systems support diverse assemblages of endemic fishes. In a comparison of 201 watersheds that analyzed changes in similarities over 200 years, approximately 65% of comparisons demonstrate that the watersheds are undergoing homogenization. 
Insects While there have been fewer studies of biotic homogenization in insects compared to other taxonomic groups, there is evidence that it exists in multiple taxa. According to a 2015 study that examined bees, hoverflies, and butterflies, the extent to which taxonomic homogenization occurs varies with taxa, country, and spatial scale. In the three European countries that were included in the study, hoverflies had homogenized in all of the countries while bees and butterflies only homogenized in two countries. The scale at which homogenization occurred also varied between taxonomic groups. Amphibians and reptiles There has been relatively little research on homogenization in the herpetofauna, and according to a 2006 study, introduction of nonnative reptiles has not led to homogenization of reptilian communities in Florida. However, in Central America, Batrachochytrium dendrobatidis, which is pathogenic to amphibians, has led to selective extinction of certain taxa, which in turn has resulted in homogenization of certain amphibian assemblages. In addition to this more natural example of homogenization, there is evidence that there is amphibian homogenization of human-impacted environments around the world. Plants Anthropomorphic impacts on plants have been complex, with overall species richness of flora increasing over the course of human history. Additionally, there have been significantly more introductions on the continental scale than there have been extinction of endemics, increasing overall species richness and α diversity. However, β diversity has decreased in some circumstances, resulting in homogenization effects. Microorganisms Agriculture in the Amazon river basin has been connected to an increase in α diversity but a decrease in β diversity of bacteria. This trend is likely due to the loss of endemic species that have limited ranges being replaced by tolerant, generalist species. Implications Ecology and Evolution Community composition, rather than richness, plays the more crucial role in maintaining the ecosystem. Due to the fact that the study of biotic homogenization is still relatively new, the implications of homogenization on the environment are still not entirely clear and it is possible that its impacts may not be all negative. Further research is required to determine the extent of its impact on the ecosystem. However, as ecosystems become increasingly similar and simplified, there is concern that the resilience of the assemblages against stressful events will be limited. Indeed, the more limited an assemblage becomes on functional, taxonomic, and genetic levels, the more constrained that assemblage is in its ability to evolve. Natural selection acts on diversity between individuals and species, and if that diversity does not exist, communities are severely limited when it comes to future evolutionary paths. Conservation Limiting biotic homogenization ultimately relies on limiting its sources: species invasion and extinction. Because these are largely rooted in human activity, if conservation is to be successful, it is necessary to reduce the degree to which people cause invasions and extinctions. Since biotic homogenization is still a relatively new area of study, increased education about both its mechanism and impact could potentially be effective as well. If we are to improve our understanding of the field, it is necessary to increase the scale of our knowledge of its spatial, temporal, geographic, and taxonomic components. 
There is a disproportionate number of studies in taxonomic homogenization, with relatively few in functional homogenization, which could have greater ecological implications. Increased study into functional homogenization could give insight into conservation needs. These gaps in the literature may, however, soon be filled. The study of homogenization is increasingly gaining attention in ecological circles, with the number of studies quantifying its effects increasing exponentially between the years of 2000 and 2015. References Biodiversity
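The bookkeeping in the hypothetical communities from the Overview (species A-D in community one versus C-E in community two) can be made concrete with a short calculation. The sketch below is an illustration only: it uses the Jaccard index as one common, simple measure of compositional similarity (many other β-diversity metrics are used in practice) and shows α diversity rising while β diversity falls, i.e., homogenization.

def jaccard_similarity(a, b):
    """Shared species divided by total species; 1.0 means identical composition."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

community_one = {"A", "B", "C", "D"}
community_two_before = {"C", "D", "E"}
# Community two loses E and gains A and B (one extinction, two introductions).
community_two_after = {"A", "B", "C", "D"}

print("alpha before:", len(community_two_before))   # 3
print("alpha after: ", len(community_two_after))    # 4
print("similarity before:", jaccard_similarity(community_one, community_two_before))  # 0.4
print("similarity after: ", jaccard_similarity(community_one, community_two_after))   # 1.0
# Similarity rises from 0.4 to 1.0: beta diversity is lost even though local
# species richness (alpha diversity) has increased.

Swapping in the differentiation scenario from the Overview (community one losing D and community two losing C) drops the similarity to zero, illustrating how richness and β diversity can move in opposite directions.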
Biotic homogenization
[ "Biology" ]
2,722
[ "Biodiversity" ]
59,630,526
https://en.wikipedia.org/wiki/Transition%20metal%20imido%20complex
In coordination chemistry and organometallic chemistry, a transition metal imido complex is a coordination compound containing an imido ligand. Imido ligands can be terminal or bridging ligands. The parent imido ligand has the formula NH, but most imido ligands have alkyl or aryl groups in place of H. The imido ligand is generally viewed as a dianion, akin to oxide. Structural classes Complexes with terminal imido ligands In some terminal imido complexes, the M=N−C angle is 180° but often the angle is decidedly bent. Complexes of the type M=NH are assumed to be intermediates in nitrogen fixation by synthetic catalysts. Complexes with bridging imido ligands Imido ligands are observed as doubly and, less often, triply bridging ligands. Synthesis From metal oxo complexes Commonly, metal imido complexes are generated from metal oxo complexes. They arise by condensation of amines with metal oxides and metal halides: LnMO + H2NR → LnMNR + H2O This approach is illustrated by the conversion of MoO2Cl2 to the diimido derivative MoCl2(NAr)2(dimethoxyethane), a precursor to the Schrock carbenes of the type Mo(OR)2(NAr)(CH-t-Bu). LnMCl2 + 3 H2NR → LnMNR + 2 RNH3Cl Aryl isocyanates react with metal oxides concomitant with decarboxylation: LnMO + O=C=NR → LnMNR + CO2 Alternative routes Some are generated from the reaction of low-valence metal complexes with azides: LnM + N3R → LnMNR + N2 A few imido complexes have been generated by the alkylation of metal nitride complexes: LnMN− + RX → LnMNR + X− Utility Metal imido complexes are mainly of academic interest. They are however assumed to be intermediates in ammoxidation catalysis, in the Sharpless oxyamination, and in nitrogen fixation. In nitrogen fixation A molybdenum imido complex appears in a common nitrogen fixation cycle: Mo•NH3 (ammine); with the oxidation state of molybdenum varying to accommodate the number of bonds from nitrogen. References Coordination chemistry
Transition metal imido complex
[ "Chemistry" ]
520
[ "Coordination chemistry" ]
59,634,425
https://en.wikipedia.org/wiki/Phenine%20nanotube
A phenine nanotube is a derivative or variant of short carbon nanotubes, first reported in 2019. They have a precise cylindrical structure with pores and a length index of 7, and have been made by a nine-step process starting with 1,3-dibromobenzene. References Carbon nanotubes
Phenine nanotube
[ "Materials_science" ]
68
[ "Nanotechnology", "Materials science stubs", "Nanotechnology stubs" ]
67,546,035
https://en.wikipedia.org/wiki/Ramalic%20acid
Ramalic acid is an organic compound of the depside class with the molecular formula C18H18O7. Ramalic acid occurs as a secondary metabolite in some lichens such as Ramalina pollinaria, from which ramalic acid takes its name. Ramalic acid can be used as a dye. References Further reading Polyphenols
Ramalic acid
[ "Chemistry" ]
75
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
71,916,572
https://en.wikipedia.org/wiki/B%20cell%20growth%20and%20differentiation%20factors
B Cell Growth and Differentiation Factors (also known as BCGF and BCDF) are two important groups of soluble factors controlling the life cycle of B cells (also referred to as B lymphocytes, cells which perform functions including: antibody secretion, antigen presentation, preservation of memory for antigens, and lymphokine secretion). BCGFs specifically mediate the growth and division of B cells, or, in other words, the progression of B cells through their life cycle (cell cycle stages G1, S, G2). BCDFs control the advancement of a B cell progenitor or unmatured B cell to an adult immunoglobulin (Ig) secreting cell. Differentiation factors control cell fate and can sometimes cause matured cells to change lineage. Not all currently known BCGFs and BCDFs affect all B cell lineages and stages of the cell cycle in similar ways. Both BCGFs and BCDFs work on cells previously "activated" by factors such as anti-immunoglobulin (anti-Ig). BCGFs cause activated B cells to enlarge, express activation markers (ex. transferrin receptor) and enter the S phase (DNA synthesis phase) of the cell cycle. Meanwhile, BCDFs stimulate these cells to differentiate to mature Ig-secreting B cells. An important note is that B cell Proliferation Factors (BCPFs) also exist and are different from BCGFs. BCPFs make cells, which are not necessarily activated, more responsive to BCGFs and help maintain cell viability, whereas BCGFs direct and stimulate growth and division. This article will mention BCPFs and factors that induce proliferation, yet the main focus will remain on BCGFs and BCDFs. General Overview The currently known BCGFs and BCDFs are BCGF I (also called B cell Stimulating Factor 1 (BSF1)), BCGF II, BCDF, IL-1 (interleukin-1), IL-2, IL-3, IL-4, IL-5, IL-6 (BSF-2), IFN-alpha, beta 2, and gamma, neuroleukin, TGF-beta (Transforming Growth Factor-beta), LP1 (Lymphopoetin 1), BCGFLOW, TNF-alpha (Tumor Necrosis Factor alpha), TRF (T cell Replacing Factor), CSF (colony-stimulating factors), MAF (macrophage activation factors), and lymphotoxin. Most factors act in many points throughout the B cell lifecycle, activation, growth, differentiation, and maturation, making this a complex pathway for study. Provided here is a list of these with some more detailed descriptions about their origins and functions. BCGF I (BSF1 or BSFp1) - secreted by activated T cells. BCGF I induces "resting" cells to become susceptible to stimulation by ligands. Both anti-Ig and BCGF I are required for a cell to enter S and G2 phase. It is not clear if BCGF I acts on memory B cells specifically, but it appears to induce growth in the continuous presence of anti-Ig in all other lineages. BCGF is uninhibited by anti-Tac (T cell activation antigen), whereas other factors, such as IL-2 are. BCGF II - a cytokine secreted by T cells. BCDF - causes calcium influx in cells, critical for differentiation. It is the only factor which can achieve this effect. Induces differentiation in late-stage activated cells. BCDF subclasses are associated with the secretion of specific subclasses of Ig, for example BCDF(γ) with IgG and BCDF(μ) with IgM. IL-1 - a cytokine derived from macrophages, this factor drives cells into S phase, usually working after BCGF I. IL-1 weakly co-stimulates even resting B cells in the presence of anti-Ig and can enhance BCDF function. IL-2 - a cytokine key activating factor for T cells and B cells secreted by T cells. 
Cells in early-stage activation differentiate in response to IL-2 and all B cells proliferate in the presence of IL-2. IL-2 exhibits an additive affect to BCGF when both are present. Yet the magnitude of its effect is much less than BCGF and BCDF in both growth and differentiation. IL-3 - cytokine associated with the differentiation of more mature B cells. IL-4 - cytokine associated with the differentiation of mature T cells, which some B cell precursors are also responsive to. IL-5 - cytokine that acts like IL-6, except it can also induce proliferation in B cells, and its effect on differentiation is partially inhibited by IL-4. IL-5 cannot induce differentiation in cells activated by anti-Ig. IL-6 (BSF-2) - cytokine that acts exclusively as a B cell differentiation factor, stimulating increase in levels of Ig, J-chain mRNA, and proteins. IFN-alpha, beta 2, and gamma (interferons alpha, beta 2, and gamma) - IFN-gamma in combination with IL-2 also induces early-stage differentiation. Interferon-gamma has previously been reported as a requirement for plaque-forming cell response. Interferon-alpha can either enhance or suppress differentiation by controlling responsiveness of human peripheral blood B cells to B-cell helper factors, depending on certain environment and context-specific conditions, as its signaling is likely mediated by other cell types. Neuroleukin TGF-beta LP1 - a growth factor active in the development of immature B cells and capable of stimulating proliferation of B cell precursors. BCGFLOW TNF-alpha TRF - induced primarily IgM secretion from B cells, thus constituting a differentiation factor. Various sources disagree as to whether TRF can induce proliferation. CSF MAF Lymphotoxin Discovery The identification and classification of B cell growth and differentiation factors was primarily conducted in the 1980s-1990s, though it had begun to spark interest of the scientific community in the 1970s. It began with the creation of T cell hybridomas - immortal cells that could be selected to produce only one factor. This allowed the study of B cells exposed to only one soluble factor at a time, enabling the identification of that factor's direct effects on the cells. Previously, it was believed that B cell growth was induced exclusively by the presence of antigen. Some major questions that researchers attempted to answer were which cells and specifically cell types secreted these factors, and in which conditions, as well as if and how these factors differed from T Cell Growth and Differentiation Factors (TCGFs and TCDFs) such as IL-2. Additionally, it was established early on that several compounds could mediate B cell growth and differentiation, some of them working only when encountered together (synergistically). So, researchers also attempted to identify how many BCGFs and BCDFs exist and classify the varieties of these factors. A major early challenge was the inability of culturing one distinctive T cell line or the isolation of thereof to analyze its effects. T cell factors that induced B cell activation, proliferation, growth, and differentiation were frequently generated by mixed populations of T cells. When the first immortalized T cell lines began to emerge, it became possible to observe which T cells had specific effects on B cells. Various T cell types secreted factors that induced Ig production, in some cases only of specific kinds or only in the presence of antigen. Some IgG classes secreted by B cells are exclusively T cell dependent. 
Another major advance was the declaration that BCGF and BCDF were indeed two different entities. It was determined that T cell secreted factors and anti-Ig were necessary for the proliferation of activated B cells, while the addition of a differentiation factor was required to induce Ig production (ie. differentiation). So, it was determined that these two factors were separate entities. Isolating the two types of BCGF and BCDF was difficult as it required purification from IL-2. A key difference between the two variants of BCGF was that only one could induce growth in colony-forming B cells. Later, difficulties with the subject B cell populations began to emerge, as there wasn't yet a stable long-term method of culture or isolation of individual subtypes. The difficulty of obtaining populations of viable B cell precursors was resolved by the design of a long-term bone marrow culture system, which secreted LP1 growth factor. In given populations, it was determined that B cells could be sorted into groups of "activated" and "resting" cells by their size, enabling the study of factors on these two distinct subgroups. As not all cell lines responded to the factors listed in the above section in similar ways (and some were completely irresponsive), a model cell line that could respond to various factors was necessary to compare the resulting responses and study in more detail the pathways of each lymphokine or factor's signal. Researchers identified several such cell lines that were guaranteed to have receptors for or respond to groups of factors. For example, in CH12 B cell lymphoma, cells differentiate in response to both IL-5 and IL-6 in the presence of other costimulatory cytokines, while in other cell lines IL-5 is only effective in a narrow window of time right after activation. BCGFs and BCDFs were originally sought after for research purposes. Previously identified similar factors for T cells allowed T cells to be "immortalized" or kept alive in the research setting for prolonged periods of time. This permitted the extensive study of T cells and their functioning. It also permitted the modelling of the immune response, such as studying the activated T cell state. Finding factors that would enable a similar closer study of B cells would greatly benefit science. B cell differentiation pathways The most common simplified overview description of the B cell differentiation pathway involves the following steps: an antigen interacts with the corresponding surface membrane immunoglobulin after which the B cell begins expressing receptors for growth factors secreted by T cells (BCGFs and IL-2), after these factors bind, the lymphocytes enter S phase, and subsequent binding with BCDFs differentiates B cells into Ig secreting cells. This model quickly grows more complex as individual resting B cells receive multiple varying sequential signals that determine future cell fate and functions that will be performed by those cells. Depending on this sequence of BCDFs, B cells may achieve different "fates" which can constitute the types of Ig they secrete or even their destiny with a specialized lineage (such as Memory B cells or plasma cells). Further investigations have been conducted since the identification of BCGFs and BCDF to determine what receptors they bind to and outline their pathways. There is evidence that CD23 is the receptor for BCDF. It was concluded that neither BCGF nor BCDF shared a receptor with IL-2. 
At least one pathway of B cell maturation via 446-BCDF, derived from anti-CD3 peripheral blood T cells, may involve reduction of intracellular cAMP. Stimulation by 446-BCDF causes an influx of calcium. Immune system interactions BCGFs and BCDFs primarily travel through the body intravenously but tend to be more concentrated in sites most critical to the human immune system - the lymph nodes, thyroid, spleen, bone marrow, and liver. The environments in all of these areas are complex ecosystems of various cell types, states, and concentrations of factors. So, in general, B cell activation, proliferation, and differentiation appears to be a complex process dependent on many cell and factor interactions as well as the state of activation of the cell. The interconnected nature of the immune system has caused many complications, for when looking at cells in model systems, it has often been unclear which, if any, factor actually exerted their effects directly on the B cells themselves as opposed to acting via accessory cells or in conjunction with other factors. Related diseases Common diseases associated with the dysregulation of B cells are autoimmunity, immune deficiency, and various blood-associated cancers. BCGFs and BCDFs are associated with these diseases because they control crucial parts of the B cell life cycle - the cell's growth and identity. For example, if BCGFs are present at an extremely high concentration, cells may multiply very quickly exhibiting cancer-like behavior or extreme levels of immune response. Similarly, extreme differentiation towards a specific lineage may make the immune system weakened in some areas or too powerful and cause immune-related disease. Different lineages or states of T cells secrete various BCDF subgroups. Maintaining the balance of the number and proportion of these cells is critical, as deficiencies in one or more subgroups cause disease, such as common variable immunodeficiency and chronic lymphocytic leukemia. Dysregulation of growth factor production is a characteristic of some diseases such as rheumatoid arthritis, ankylosing spondylitis, systemic lupus erythematosus, and traumatic joint injury, where high levels of BCDF and IL-2 are present in the synovial fluid, resulting in increased differentiation of B lymphocytes into plasma cells and Ig secreting cells that secrete so many antibodies that they generate an immune response and inflammation in locations where they accumulate. References Further reading Growth factors
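The simplified sequential model described above (antigen activation, then growth factors, then differentiation factors) can be written down as a small state machine. The sketch below is schematic only: the ordering is taken from the simplified overview in this article, the signal names are shorthand, and real B cell responses depend on many interacting factors, accessory cells, and timing effects that are ignored here.

# Schematic state machine for the simplified B cell pathway described above:
# resting -> activated (antigen / anti-Ig) -> proliferating (BCGF, IL-2) ->
# Ig-secreting (BCDF). Signals applied out of order simply have no effect here.

TRANSITIONS = {
    ("resting", "antigen"): "activated",
    ("activated", "BCGF"): "proliferating",
    ("proliferating", "BCDF"): "Ig-secreting",
}

def apply_signals(signals, state="resting"):
    for signal in signals:
        state = TRANSITIONS.get((state, signal), state)  # ignore mistimed signals
    return state

print(apply_signals(["antigen", "BCGF", "BCDF"]))   # Ig-secreting
print(apply_signals(["BCDF", "BCGF", "antigen"]))   # only "activated": wrong order

The second call illustrates the point made in the text that the sequence of signals, not merely their presence, determines whether a cell ends up as a mature Ig-secreting cell.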
B cell growth and differentiation factors
[ "Chemistry" ]
2,827
[ "Growth factors", "Signal transduction" ]
71,916,621
https://en.wikipedia.org/wiki/Cell%20biomechanics
Cell biomechanics is a branch of biomechanics that involves single molecules, molecular interactions, or cells as the system of interest. Cells generate and maintain mechanical forces within their environment as a part of their physiology. Cell biomechanics deals with how mRNA, protein production, and gene expression are affected by this environment and with mechanical properties of isolated molecules or the interactions of proteins that make up molecular motors. It is known that minor alterations in mechanical properties of cells can be an indicator of an infected cell. By studying these mechanical properties, greater insight will be gained into disease. Thus, the goal of understanding cell biomechanics is to combine theoretical, experimental, and computational approaches to construct a realistic description of cell mechanical behaviors to provide new insights on the role of mechanics in disease. History In the late seventeenth century, English polymath Robert Hooke and Dutch scientist Antonie van Leeuwenhoek observed the ciliate Vorticella, with its striking fluid and cellular motion, using a simple optical microscope. On Christmas Day 1702, van Leeuwenhoek described his observations in a letter: “In structure these little animals were fashioned like a bell, and at the round opening they made such a stir, that the particles in the water thereabout were set in motion thereby…which sight I found mightily diverting”. Prior to this, Brownian motion of particles and organelles within living cells had been discovered, as well as theories to measure viscosity. However, there were not enough accessible technical tools to perform accurate experiments at the time. Thus, mechanical properties within cells were only supported qualitatively by observation. Even with these new discoveries, the role of mechanical forces within biology was not always readily accepted. In 1850, English physician William Benjamin Carpenter wrote that “many of the actions taking place in the living body are conformable to the laws of mechanics, has been hastily assumed as justifying the conclusion that all its actions are mechanical”. Similarly, in 1917, Scottish mathematical biologist D'Arcy Wentworth Thompson noted “…though they resemble known physical phenomena, their nature is still the subject of much dubiety and discussion, and neither the forms produced nor the forces at work can yet be satisfactorily and simply explained” in his book On Growth and Form. In the industrialization era of the nineteenth century, the overall understanding of cell and tissue mechanics finally developed alongside the mechanical and structural testing and theory (indentation, beam bending, the Hertz model) applied to engines, boats, and bridges. At the end of the nineteenth century, the mechanical properties of living cells could be analyzed and examined experimentally using techniques provided by large-scale engineering mechanics. As of 2008, nanoscale testing and modeling remained fundamentally based on these nineteenth-century practices. Research methods Various studies have been conducted to establish relationships between the structure, mechanical responses, and function of biological tissues (blood vessels, heart, cardiac muscle, lung). To conduct this research, several tools and techniques have been developed that are sensitive enough to detect such small forces. At this time, these techniques are only applicable in a controlled environment (test tube, petri dish). 
All of these methods ultimately give insight into the mechanical properties of cells. They can generally be split into two categories: active methods and passive methods. Active methods apply forces onto cells in some manner to deform the cell. Passive methods sense mechanical forces without applying any external force to the cell. Active methods Atomic force microscopy Atomic force microscopy relies on the interaction between a tip attached to a flexible cantilever and molecules on a cell surface. The sharp tip can be used to probe single molecular events and image live cells. The relative deformation of the cell and the tip can be used to estimate how much force was applied and how stiff the cell is. Since it is a high-force measurement technique, large-scale deformations and reorganizations can be observed and mapped. Drawbacks of this technique include, but are not limited to, overestimation of the force-versus-indentation curve when no force is actually applied, potential cell damage, and the variety of tip shapes, which determines the nature of the force-deformation curve. Magnetic tweezers and magnetic twisting cytometry Magnetic twisting cytometry is mainly used to determine the physical properties of biological tissues; it can also be used to micromanipulate cells. Beads are exposed to magnetizing coils, giving them a magnetic dipole moment. A weaker directional magnetic field is then applied to twist the beads through a specific angle or to move them linearly. Disadvantages of this system include the difficulty of controlling the region of the cell to which the beads bind, no guarantee of complete binding to the cell surface, and loss of magnetization with time. A related technique, optical tweezers, applies optical rather than magnetic forces to cells: a laser beam is used together with dielectric beads of high refractive index to generate optical forces. Drawbacks of this method include potential photo-induced damage and the limited amount of force that can be generated. Micropipette aspiration Micropipette aspiration is primarily used for measuring absolute values of mechanical properties. On a cellular scale, it can map the surface tension of interfaces within a tissue in space and time. On a tissue scale, it can measure mechanical properties such as viscoelasticity and tissue surface tension. Like AFM, it is a high-force measurement technique in which large-scale deformations and reorganizations can be observed and mapped. A micropipette is placed on the surface of the cell and gently suctions the cell to deform it. The geometry of the deformation, along with the applied pressure, allows researchers to calculate the applied force and the mechanical properties of the cell. A dual micropipette assay can also quantify the strength of cadherin-dependent cell-cell adhesion. Stretching devices Stretching devices were developed to study the effects of tensile stress on cells and tissues. Cells are incubated on flexible elastic silicone-sheet membranes with modifiable surfaces. They are then stretched in a uniaxial, biaxial, or pressure-controlled manner, and the stretching can be applied at different frequencies. The main downside of stretching devices is that they leave behind wrinkling patterns, distorting the actual forces that were applied to the sheets. They are also large, and they generate both heat and shock, hindering real-time imaging of the cells. 
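The force-versus-indentation data produced by indentation-type active methods such as AFM are usually converted into a stiffness estimate by fitting a contact model; the Hertz model mentioned in the history section is the classical choice. The following Python sketch illustrates only that fitting step under simple assumptions (spherical tip, incompressible sample); the tip radius, Poisson ratio and synthetic data are placeholder values, not results from any particular instrument or experiment.

```python
import numpy as np

def hertz_force(delta, E, R=1e-6, nu=0.5):
    """Hertz contact force (N) for a spherical tip of radius R (m)
    indenting a nearly incompressible elastic half-space by delta (m)."""
    E_eff = E / (1.0 - nu**2)          # reduced modulus of the sample
    return (4.0 / 3.0) * E_eff * np.sqrt(R) * delta**1.5

def fit_youngs_modulus(delta, force, R=1e-6, nu=0.5):
    """Estimate E (Pa) by linear least squares of F against delta^(3/2)."""
    x = delta**1.5
    slope = np.sum(x * force) / np.sum(x * x)   # one-parameter least squares
    return slope * 3.0 * (1.0 - nu**2) / (4.0 * np.sqrt(R))

if __name__ == "__main__":
    # Synthetic indentation data for a ~1 kPa cell-like material (placeholder values)
    delta = np.linspace(0, 500e-9, 50)                   # indentation depth, m
    rng = np.random.default_rng(0)
    force = hertz_force(delta, E=1000.0) + rng.normal(0, 2e-12, delta.size)
    print(f"fitted E = {fit_youngs_modulus(delta, force):.1f} Pa")
```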
Carbon fiber-based systems Carbon fibers are mounted in glass capillaries and attached to a position-control device with a feedback control mechanism. The fibers are then attached to cells and apply and record the active forces the cell generates. This, however, may damage the cells because of their attachment to the fibers, and the method suffers from focus issues and potential bias. Passive methods Elastic substratum method This method stems from the classical theory of small-strain, plane-stress elasticity: the displacement field of the elastic substrate is analyzed to recover the traction field. The method is also referred to as traction force microscopy. Cells are incubated on a flexible silicone sheet substrate and apply force onto the sheet, causing a wrinkling pattern that is analyzed through the number and arrangement of the wrinkles. The downside of this method is the difficulty of transforming the patterns into a traction force map, leading to potential inaccuracy in identifying forces. Flexible sheets with embedded beads Latex or fluorescently tagged beads are embedded into the elastic substratum and the positions of the beads are recorded over time. Cellular forces can be inferred from these displacements. The uncertainty in this method lies in the interdependence of bead displacements. An improved technique, flexible sheets with micropatterned dots or grids, addresses this drawback by imprinting the dots onto the flexible sheet; the deformation of the grid relative to the original grid is then analyzed. The same assumptions, however, must still be made: that the forces originate from the measured location and do not spread from another area. Micromachined cantilever beam A horizontal cantilever beam with an attachment pad and a well is used to measure cell traction forces as cells are seeded onto substrates and crawl over the cantilevers. These cantilevers measure force through cantilever deflection, stiffness, and stress gradient. Unlike the prior methods, force propagation is not a source of uncertainty; however, the cantilever beam can move in only one direction, so only one axis is measured. An array of vertical microcantilevers overcomes the limitations of the typical micromachined cantilever beam by making two axes of motion available rather than a single horizontal beam. Although this improves scale and resolution, the approach is not suited to rapid mass production and is quite costly. Because the devices are delicate, minor damage requires the device to be rebuilt. 
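Because the micromachined cantilever approaches report force through beam deflection and stiffness, the conversion itself is elementary beam mechanics. The short Python sketch below shows that conversion for a rectangular cantilever loaded at its free end; the geometry and modulus are invented placeholder values used only for illustration.

```python
def cantilever_stiffness(E, width, thickness, length):
    """Bending stiffness k = 3*E*I/L^3 (N/m) of a rectangular cantilever
    loaded at its free end; I = w*t^3/12 is the second moment of area."""
    I = width * thickness**3 / 12.0
    return 3.0 * E * I / length**3

def traction_force(deflection, stiffness):
    """Point force (N) inferred from the measured tip deflection (m)."""
    return stiffness * deflection

if __name__ == "__main__":
    # Placeholder geometry for a silicon micro-cantilever
    k = cantilever_stiffness(E=170e9, width=10e-6, thickness=1e-6, length=100e-6)
    print(f"stiffness = {k:.3e} N/m")
    print(f"force for 50 nm deflection = {traction_force(50e-9, k):.3e} N")
```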
Applications and usage In the last half-century, several studies have been conducted using cell biomechanics, leading to greater biological control. The majority of the newly created devices are built either to provide greater insight into the human body's reaction to disease or to attempt to eradicate the disease altogether. Cardiovascular cell mechanics and microcirculation Quantitative passive biomechanical models have been developed to predict cell motion and deformation of the mammalian red blood cell within a living organism; the red cell has a membrane with bending and shearing properties that depend on strain, strain rate, and strain history, and a cytoplasm that in the normal red cell is predominantly a Newtonian viscous fluid. Constitutive models developed more recently (2007) show that biomechanical analysis is not only a starting point for predicting whole-cell and cell-suspension behavior, but also provides a reference point for molecular models of cell membranes derived from the crystal structures of their parts. Several generations of biomechanical models have also been developed for white blood cells, the basis of immune surveillance and inflammation; these models have proven effective at predicting cell-cell interactions in the microcirculation. Similar models have been created for endothelium, platelets and metastatic tumor cells. Biomechanical analyses of different cell types in the circulation have brought greater understanding of cell interactions in the circulation, making it possible to predict cell behavior in narrow vessels. As a result, several conditions such as inflammation and cardiovascular disease now have a biomechanical footing. Models are also being developed for organs such as the lung, heart, skeletal muscle, and connective tissue that can predict basic aspects of organ perfusion. Cell enrichment and separation Cell biomechanics has led to technology for separating targeted cells. For disease diagnosis and detection, this technology can separate healthy cells from cancerous ones through differences in cell stiffness. Deformability-based enrichment devices are an example of this technology. These devices mostly deal with cancer cells from blood. Their main feature is their ability to identify whether cancer cells have detached from the tumor and entered the bloodstream as circulating tumor cells (CTCs). If they have, these devices have recently also become able to count the number of CTCs in a milliliter of blood; using this value, medical professionals can assess the effectiveness of a chemotherapy treatment. More specific examples include the microfluidic device of Soojung Claire Hur, Clare Boothe Luce Assistant Professor of Mechanical Engineering at the Whiting School of Engineering, and the microfluidic device of Professor Gonghao Wang of the Woodruff School of Mechanical Engineering, both of which deal with breast cancer cells. Hur's device enriches metastatic breast cancer cells by balancing deformability-induced and inertial lift forces, which push the larger metastatic cancer cells towards the centerline of a microchannel relative to blood cells. Wang's device separates out stiffer, less invasive breast cancer cells using diagonal ridges through which only the more deformable, highly invasive breast cancer cells can squeeze. Deformability-based enrichment devices, however, are not exclusive to cancer cells. An example is the microfluidic device of Nanyang Technological University researcher Han Wei Hou, which separates and enriches infected red blood cells from normal ones based on their stiffness, through margination. Infected red blood cells are generally stiffer, so in his device the stiffer red blood cells move closer to the vessel wall while normal red blood cells stay in the center, allowing the stiffened red blood cells to be collected via separate outlets on the sides. Ongoing research concerns In the 1800s, cells were initially thought of as homogeneous gels, sols, or viscoelastic and plastic fluids. 
Current models describe the cell as a viscoelastic continuum, a combination of discrete mechanical elements, or a viscoelastic fluid within a dense meshwork, and have proven highly accurate in experiments. Despite these improved and more refined models, flaws remain, as several experimental observations (such as the soft glassy rheology phenomenon) contradict existing models. Thus, a time-dependent and predictive theoretical description of cell mechanics remains incomplete. Given existing knowledge of cell physiology and neurophysiology, it is also not fully understood whether mechanical phenomena are side products of biological processes or whether they are controlled at the genetic and physiological level through feedback loops and actuation and response pathways. References Biomechanics
Cell biomechanics
[ "Physics" ]
2,775
[ "Biomechanics", "Mechanics" ]
71,929,189
https://en.wikipedia.org/wiki/Anisotropic%20terahertz%20microspectroscopy
Anisotropic terahertz microspectroscopy (ATM) is a spectroscopic technique in which molecular vibrations in an anisotropic material are probed with short pulses of terahertz radiation whose electric field is linearly polarized parallel to the surface of the material. The technique has been demonstrated in studies involving single crystal sucrose, fructose, oxalic acid, and molecular protein crystals in which the spatial orientation of molecular vibrations is of interest. Explanation When the electric field of a propagating beam of light oscillates in a direction perpendicular to its direction of propagation, it is said to be a polarized transverse wave. Light with an electric field constrained to a particular angle in the transverse plane is said to be linearly polarized. When linearly polarized light is transmitted through an isotropic material — a material that exhibits the same physical properties in all spatial directions — the amount of light absorbed by the material is the same when measured for all angles of the polarized light. The resulting absorbance spectrum is featureless as a function of the polarization angle. A material said to be anisotropic exhibits different physical properties, like absorbance, refractive index, conductivity and so on, along different spatial directions. Thus, when a linearly polarized beam of light is passed through an anisotropic material and measured for different angles of polarization, the absorption of the light is different for different polarization angles. The resulting absorbance spectrum exhibits varying degrees of absorbance that correspond to the material's degree of anisotropy. When a polarized THz beam of light is transmitted through an anisotropic material, the resulting absorbance spectrum exhibits varying degrees of absorbance that correspond to the anisotropy of the material. If measurements are made at different frequencies across the THz spectrum (between about 0.3 and 3 THz) at a particular THz polarization angle, the resulting absorbance spectrum may also vary with frequency. This occurs because the vibrational modes of the molecules in the material absorb light at different frequencies. In protein molecules, for example, many of these vibrational modes oscillate within the range of terahertz frequencies. When the molecules in a material are arranged in the same orientation, the internal vibrational properties of the molecules may be identified using anisotropic terahertz microspectroscopy (ATM). This molecular alignment is found in single crystals of sucrose, fructose, oxalic acid, and other molecular crystals like protein crystals. Techniques To date, ATM techniques have utilized THz time-domain spectroscopy (THz-TDS) because of the historical scarcity of strong THz sources and of highly sensitive THz detectors that operate at room temperature. Many samples of interest contain large amounts of water that strongly absorb THz radiation, thus requiring a very strong THz source. This requirement is exacerbated when attempting to use highly sensitive THz detectors that conventionally require supercooling to liquid helium temperatures. Worse, the need for supercooling these detectors has made THz detection unavailable to many researchers around the world because of recent sharp rises in the price of liquid helium, driven by its scarcity. 
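The polarization-angle dependence described above can be pictured with a simple dichroic-absorber model in which the sample has different absorption coefficients along two in-plane principal axes. The Python sketch below is only a schematic of that idea, not a model of any specific ATM measurement; the absorption coefficients, thickness and axis orientation are invented placeholder values, and phase (birefringence) effects are ignored.

```python
import numpy as np

def transmitted_intensity(theta_deg, alpha_a=2.0, alpha_b=0.5, thickness=0.1,
                          axis_deg=30.0):
    """Transmitted intensity (relative to incident) for linearly polarized THz
    light at angle theta_deg, through a slab with absorption coefficients
    alpha_a, alpha_b (1/mm) along two orthogonal in-plane axes and
    thickness in mm. The first principal axis is rotated by axis_deg."""
    phi = np.radians(theta_deg - axis_deg)
    # Project the field onto the two principal axes, attenuate each component,
    # then recombine; interference and phase effects are ignored in this sketch.
    Ea = np.cos(phi) * np.exp(-0.5 * alpha_a * thickness)   # field attenuation
    Eb = np.sin(phi) * np.exp(-0.5 * alpha_b * thickness)
    return Ea**2 + Eb**2

if __name__ == "__main__":
    for theta in range(0, 181, 30):
        I = transmitted_intensity(theta)
        print(f"polarization {theta:3d} deg : absorbance {-np.log10(I):.3f}")
```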
To circumvent THz detection hurdles, THz-TDS is utilized, as it requires commonly available infrared detectors sensitive in the near infrared region of the electromagnetic spectrum — most commonly around a wavelength of 800 nm. In this case, an electro-optic (EO) crystal, such as gallium nitride (GaN) or zinc telluride (ZnTe), is commonly used to detect changes in the THz light after it has passed through a sample. The polarization properties of a synchronized infrared beam of light passing through the EO crystal are changed. This polarization change is detected by an infrared detector, called a balanced detector, that compares the magnitude of two perpendicular polarization components of the infrared beam. Until more powerful THz sources that provide a wide frequency range and more sensitive room-temperature THz detectors are realized, THz-TDS remains a reliable technique for ATM. The THz-TDS techniques used in ATM may be divided into two categories: rotated sample and stationary sample. Historically, the former technique involved rotation of the sample at the focus of a THz beam while the detector is placed far from the sample in the far field. For many mechanical reasons, however, a stationary sample technique is preferred. In stationary sample ATM, a polarized THz beam is rotated through 360° in a plane perpendicular to the propagation direction of the beam; this approach typically utilizes a near-field detection scheme in which the sample is mounted in direct contact with an EO crystal that is subsequently analyzed by the infrared beam in a THz-TDS configuration. Rotated Sample ATM Original ATM techniques involve rotating the sample at the focal point of a linearly polarized THz beam using a mechanically rotated sample mount. For this reason, the configuration is typically a far-field instrument in which a balanced detector (sensitive to infrared light) is placed a considerable distance from the sample. In the terahertz time-domain spectroscopy configuration, both the infrared and THz beams are transmitted through an electro-optic (EO) crystal like ZnTe or GaP. Here, the infrared beam detects the change in birefringence of the EO crystal due to the THz beam. When a sample is placed in the THz beam, the polarized THz beam is perturbed and the resulting degree of birefringence in the EO crystal is changed. The resulting perturbation of the infrared beam is sensed at the balanced detector. Rotated sample ATM is very useful for large samples (0.1 to 1 cm). However, when measuring samples such as protein crystals that must be isolated inside a hydration chamber, for example, the sample cannot be easily rotated. Additionally, it is challenging to maintain the same location of a rotated sample at the precise focal point of a THz beam. Instrument Design An ATM designed with a rotated sample is typically a far-field measurement configuration using a time-domain spectroscopy strategy. A high-power infrared laser is typically used. Its beam is split by a beamsplitter into two optical paths: a probe beam and a THz generation beam. The THz generation beam typically receives the greater fraction of NIR power in order to maximize the power of the THz light, which is commonly generated by a voltage-pulsed photoconductive antenna. The generated THz light is collected through a hyper-hemispherical silicon lens and passed to an off-axis parabolic mirror that collimates the THz beam for polarization by a THz polarizer, often made of a simple wire grid. 
The linearly polarized THz beam is then focused by a second off-axis parabolic mirror onto the sample. The THz beam transmitted through the sample is again collected by a third off-axis parabolic mirror and collimated onto a fourth parabolic mirror that then focuses the beam onto an electro-optic (EO) crystal whose birefringence is perturbed by the strength of the THz beam. The NIR probe beam is passed through the EO crystal to probe the induced degree of birefringence caused by the THz beam and is then passed to a detection module that often consists of an NIR quarter wave plate and a Wollaston prism, which spatially separates orthogonal polarization states of the probe beam into two optical paths that are individually detected at a balanced detector. The resulting signal reported by the balanced detector is a measure of the difference in magnitude of these two orthogonal components of the NIR probe beam and therefore correlates directly with the degree of birefringence induced in the EO crystal by the THz beam passed through the sample. Stationary Sample ATM Previously called "ideal ATM" and "polarization-varying ATM," stationary sample ATM (SSATM) involves rotation of the linearly polarized state of the THz beam, in a time-domain spectroscopy (TDS) configuration, parallel to the interrogated material sample. In an SSATM configuration, the THz beam polarization is rotated through 360° in a plane perpendicular to the propagation direction of the beam, and the sample's anisotropy is measured at several THz polarization angles. At least two methods to achieve THz polarization rotation for SSATM have been demonstrated: 1) by using a THz quarter waveplate (THz-QWP) together with an infrared polarizer and 2) by rotating the photoconductive antenna. In the case of employing a THz-QWP and an infrared polarizer, the magnitude of the measured signal at a given time delay between THz generation and the detected pulses in a THz-TDS system depends on both the polarization angle of the THz light and the polarization angle of the ultrafast near-infrared (NIR) probe beam at the sample. The objective is to maintain an equal magnitude of the THz electric field at the sample for all measurement angles of the THz polarization, which requires adjusting the NIR probe polarization angle for every THz polarization angle. Instrument Design An SSATM instrument is typically designed in a time-domain spectroscopy configuration in which a high-power infrared laser beam is divided into two optical paths by a beamsplitter. The first optical path often receives a greater fraction of the optical power of the laser to maximize the output power of the generated THz light. THz light is often generated with a voltage-pulsed photoconductive antenna, collected with a hyper-hemispherical silicon lens, collimated using an off-axis parabolic mirror, passed through a THz polarizer, and made circularly polarized by a THz quarter waveplate constructed of two planar mirrors and a right-angled high-resistivity silicon prism. A second THz polarizer selects from the circularly polarized THz light the angle at which each measurement is made once the light reaches a sample located at a focal point of the beam and mounted in direct contact with an electro-optic crystal often made of either ZnTe or GaP. The second optical path includes a retroreflector mirror mounted on a delay stage that adjusts the time-of-flight of the NIR beam to match the delay time of the THz light at the sample. 
The NIR beam is linearly polarized, chopped at a frequency suitable for detection, and directed to the EO crystal to measure the change in its birefringence due to the degree of THz absorption by the sample. The NIR beam is reflected by the sample/EO crystal interface and directed to the detection module, which often consists of an NIR quarter waveplate and a Wollaston prism that spatially separates perpendicular polarization states of the light toward the two detectors of a balanced detector. The detected signal is a measure of the difference in magnitude of the two perpendicular polarization states and corresponds to the degree of birefringence induced in the EO crystal by the THz light as perturbed by the sample. THz Quarter Waveplate One strategy to provide full 360° rotation of THz polarization with equal electric field magnitude at the sample is to generate a circular state of polarization, then select particular linear polarization states from the circularly polarized beam with a THz polarizer. A circular polarization state may be generated by a quarter waveplate; however, common optical waveplates are typically designed for the visible, near- and mid-infrared regions of the electromagnetic spectrum. A quarter waveplate designed for use in the THz frequency range consists of a right-angle silicon prism together with metal-coated planar mirrors as input/output. In particular, the silicon prism acts analogously to a Fresnel rhomb, with a single total internal reflection on the longer face of the prism, and is a passive broadband component that permits a wide frequency sweep during measurements. Advantages A few advantages of ATM over other related microspectroscopy techniques include the orientation of the THz electric field at the sample and the ability to readily measure materials that are sensitive to environmental conditions like hydration, cryo-cooling, and evacuation. THz polarization orientation at the sample A key characteristic of ATM is the orientation of the polarized electric field of the THz light at the sample. In particular, unlike other microspectroscopy techniques like scattering scanning near-field optical microscopy (s-SNOM), the electric field of the interrogating THz field is parallel to the surface of the sample. In s-SNOM, the shape of the oscillating metallic probe tip directs the THz polarization into a direction predominantly perpendicular to the sample surface. Environmentally sensitive sample materials Living organisms typically consist of large quantities of water. Many anisotropic materials of interest are biological in nature and as such require hydration during spectroscopic measurements. While some limited novel techniques to measure properties of materials inside a hydrated sample chamber have recently been reported, the primary design requirement of ATM is that the material is accessible through a window that is transparent to THz light, such as quartz. Similarly, samples requiring cryo-cooling or a low-pressure vacuum environment are readily interrogated in ATM using THz-transparent window materials. Applications Anisotropic terahertz microspectroscopy (ATM) has found applications in structural biology and molecular fingerprinting of DNA and proteins. The technique is also suitable for drug discovery and for studying THz-frequency properties of thin-film solid-state materials. Special attention is given to molecular motions in proteins, where many structural changes occur at frequencies in the terahertz range of the spectrum (0.3 THz to 3 THz). 
These structural changes include hinge motions in which two regions of a molecule are connected by a flexible molecular structure that bends like a mechanical hinge or elbow. ATM is uniquely capable of measuring the spatial direction in which hinge motions occur because of its use of linearly polarized electric fields. Protein dynamics ATM is uniquely suited to measuring resonant molecular vibrations in proteins. Molecular motions in proteins occur with frequencies in the terahertz range of the spectrum (0.3 THz to 3 THz). These structural changes include hinge motions, in which two regions of a molecule are connected in a flexible way that bends like a mechanical hinge or joint, and other conformational changes that occur within systems of protein molecules. Protein molecules are typically surrounded by water molecules and are arranged in random orientations. For this reason, it is common to arrange protein molecules in crystal form so that their orientations are all the same. In particular, in a protein crystal the dipoles of all the protein molecules are naturally aligned. This makes it possible to perform microspectroscopy with polarized THz light and ascertain the spatial orientation of vibrations within the molecules. References Terahertz technology Spectroscopy Scientific techniques
Anisotropic terahertz microspectroscopy
[ "Physics", "Chemistry" ]
3,084
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Electromagnetic spectrum", "Spectroscopy", "Terahertz technology" ]
71,929,531
https://en.wikipedia.org/wiki/Inverse%20lithography
In semiconductor device fabrication, inverse lithography technology (ILT) is an approach to photomask design. It is essentially an approach to solving an inverse imaging problem: calculating the shapes of the openings in a photomask (the "source") so that the transmitted light produces a good approximation of the desired pattern (the "target") on the illuminated material, typically a photoresist. As such, it is treated as a mathematical optimization problem of a special kind, because an analytical solution usually does not exist. In conventional approaches, known as optical proximity correction (OPC), a "target" shape is augmented with carefully tuned rectangles to produce a "Manhattan shape" for the "source", as shown in the illustration. The ILT approach generates curvilinear shapes for the "source", which deliver better approximations of the "target". ILT was proposed in the 1980s; however, at that time it was impractical because of the huge computational power required and the complicated "source" shapes, which presented difficulties for verification (design rule checking) and manufacturing. In the late 2000s, however, developers began reconsidering ILT owing to significant increases in computational power. References Lithography (microfabrication) Inverse problems
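As a cartoon of the optimization problem described above, and not of any production OPC or ILT tool, the following Python sketch treats the mask as a pixel array, uses a Gaussian blur as a stand-in for the optical system and a sigmoid as a stand-in for the resist threshold, and adjusts the mask by gradient descent so that the printed pattern approaches the target. All models and parameters here are simplified placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def print_image(mask, sigma=2.0, steepness=20.0, threshold=0.5):
    """Toy imaging model: Gaussian blur stands in for the optical system,
    a sigmoid stands in for the resist threshold."""
    aerial = gaussian_filter(mask, sigma)
    return sigmoid(steepness * (aerial - threshold))

def invert_mask(target, steps=300, lr=0.2, sigma=2.0, steepness=20.0, threshold=0.5):
    """Gradient-descent search for a gray-scale mask whose printed image
    approximates the target pattern (values in [0, 1])."""
    mask = target.astype(float).copy()           # start from the target itself
    for _ in range(steps):
        aerial = gaussian_filter(mask, sigma)
        printed = sigmoid(steepness * (aerial - threshold))
        d_aerial = 2.0 * (printed - target) * steepness * printed * (1.0 - printed)
        grad = gaussian_filter(d_aerial, sigma)   # blur is (approximately) self-adjoint
        mask = np.clip(mask - lr * grad, 0.0, 1.0)
    return mask

if __name__ == "__main__":
    target = np.zeros((64, 64))
    target[24:40, 16:48] = 1.0                    # a single rectangular feature
    mask = invert_mask(target)
    err = np.mean((print_image(mask) - target) ** 2)
    print(f"mean squared pattern error after optimization: {err:.4f}")
```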
Inverse lithography
[ "Materials_science", "Mathematics" ]
258
[ "Microtechnology", "Applied mathematics", "Inverse problems", "Nanotechnology", "Lithography (microfabrication)" ]
61,082,237
https://en.wikipedia.org/wiki/Biomass%20allocation
Biomass allocation is a concept in plant biology which indicates the relative proportion of plant biomass present in the different organs of a plant. It can also be used for whole plant communities. Rationale Different organs of plants serve different functions. Leaves generally intercept light and fix carbon, roots take up water and nutrients, and stems and petioles display the leaves in a favourable position and transport various compounds within the plant. Depending on environmental conditions, plants may change their investment scheme, producing plants with relatively bigger root systems or more leaves. This balance has been suggested to be a 'functional equilibrium', with plants that experience low water or nutrient supply investing more in roots, and plants growing under low light or CO2 conditions investing more in leaves or stems. It is alternatively known as the 'balanced growth hypothesis' or the 'optimal partitioning theory'. In addition to environmentally induced changes, there are also inherent differences in biomass allocation between species, and changes that depend on the age or size of plants. Related concepts Biomass allocation is the result of a number of processes which take place in the plant. It starts with the way sugars are allocated to different organs after having been fixed by the leaves in the process of photosynthesis (sugar allocation). Conceptually this is simple to envisage, but quantifying the flow of sugars is challenging and requires sophisticated machinery. For plants growing under steady-state conditions, it is feasible to determine sugar allocation by constructing a C-budget. This requires determination of the C-uptake by the whole plant during photosynthesis, and of the C-losses of shoots and roots during respiration. Further C-losses may occur when sugars and other C-based compounds are exuded by the roots, or disappear as volatiles from the leaves. When these measurements are combined with growth measurements and the C-concentrations present in the biomass of leaves, stems and roots, C-budgets can be constructed from which sugar allocation is derived. These C-budgets are instructive, but require extensive measurements. The next level of analysis is to measure the growth allocation: what is the increase in total biomass of a plant, and to what extent is the increase due to growth of leaves, of stems and of roots. In young plants, growth allocation is often quite similar to the actual biomass allocation. But especially in trees, there may be a high yearly turnover of leaves and fine roots, and a low turnover of stems, branches and thick roots. In those cases, the allocation of growth and the final biomass allocation may diverge quite strongly over the years. There have been attempts to give these three different levels of allocation different names (among others partitioning, distribution and fractionation), but so far these have been applied inconsistently. The fractions of biomass present in leaves and roots are also relevant variables in Plant growth analysis. Calculation and units A common way to characterize the biomass allocation of a vegetative plant is to separate the plant into the organs of interest (e.g. leaves, stems, roots) and determine the biomass of these organs independently, generally on a dry mass basis. The Leaf Mass Fraction (LMF) is then calculated as leaf dry mass / total plant dry mass, the Stem Mass Fraction (SMF) as stem dry mass / total plant dry mass, and the Root Mass Fraction (RMF) as root dry mass / total plant dry mass. 
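A minimal Python sketch of these definitions follows; the organ dry masses used in the example are invented illustration values, not measurements.

```python
def mass_fractions(leaf_g, stem_g, root_g):
    """Return LMF, SMF and RMF (g g-1) from organ dry masses in grams."""
    total = leaf_g + stem_g + root_g
    return {"LMF": leaf_g / total,
            "SMF": stem_g / total,
            "RMF": root_g / total}

# Example with invented dry masses for a young herbaceous plant
print(mass_fractions(leaf_g=1.2, stem_g=0.5, root_g=0.8))
# -> {'LMF': 0.48, 'SMF': 0.2, 'RMF': 0.32}
```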
Generally, units are g g−1 (g organ / g total plant biomass). For generative plants, there is an additional compartment related to reproduction (flowers and flower stalks, seeds or fruits). The relative amount of biomass present in this compartment is often indicated as the 'Reproductive Effort'. A related variable which is often used in agronomy is the 'Harvest index'. Because roots are seldom harvested, the harvest index is the amount of marketable product (often the seeds) relative to the total above-ground biomass. Alternative terminology that has been used includes Leaf, Stem and Root Mass Ratios, or shoot:root or root:shoot ratios. The latter two convey less information, as they do not discriminate between leaves and stems. Normal ranges Young herbaceous plants generally have LMF values in the range of 0.3–0.7 g g−1 (0.5 on average), SMF values ranging from 0.04 to 0.4 (0.2 on average), and RMF values between 0.1 and 0.5 (0.3 on average). Young tree seedlings have values in the same range. For older and bigger plants, the LMF decreases and the SMF increases. For large trees (> 1000 kg) LMF is below 0.05, SMF around 0.8 and RMF around 0.2 g g−1. At that stage most of the stem biomass consists of highly lignified material, which may still contribute to the support function of stems but is no longer physiologically active. Environmental effects The effect of the environment is generally as expected from the 'functional equilibrium' concept: plants decrease LMF and increase RMF when grown at high light levels as compared to low light. At low nutrient levels they invest more in roots and less in leaves as compared to high nutrient supply. However, changes are often smaller under different water supply, and the effects of CO2 concentration, UV-B radiation, ozone and salinity on allocation are generally negligible. Plants growing at higher temperatures mostly decrease RMF and increase LMF. A point of attention in the analysis of mass fractions is whether or not to correct for differences in size when comparing plants that have been treated differently, or when comparing species. The rationale behind this is that mass fractions often change with plant size (and developmental phase), and different treatments may have caused growth differences as well. Thus, for an assessment of whether plants actively changed their allocation scheme, plants of similar size should be compared. If size corrections are required, one could carry out an allometric analysis. A simple alternative is to plot mass fractions against total plant mass. Differences between species Species of different families may have different allocation patterns. For example, species belonging to the Solanaceae have high LMF values, whereas Fagaceae have low LMF values, even after size corrections. Grasses generally have lower LMF values than herbaceous dicots, with a much higher proportion of their biomass present in roots. Large evergreen trees have a larger fraction of their biomass allocated to leaves (LMF ~0.04) than deciduous species (LMF ~0.01). See also Allometry Biomass partitioning Plant growth analysis References Biomass Plant ecology Plant physiology
Biomass allocation
[ "Biology" ]
1,381
[ "Plant physiology", "Plant ecology", "Plants" ]
61,085,382
https://en.wikipedia.org/wiki/C38H38O16
{{DISPLAYTITLE:C38H38O16}} The molecular formula C38H38O16 (molar mass: 750.70 g/mol, exact mass: 750.2160 u) may refer to: Dicerandrol C Phomoxanthone A (PXA) Phomoxanthone B (PXB) Molecular formulas
C38H38O16
[ "Physics", "Chemistry" ]
78
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
54,727,095
https://en.wikipedia.org/wiki/Cumulative%20accuracy%20profile
A cumulative accuracy profile (CAP) is a concept utilized in data science to visualize discrimination power. The CAP of a model represents the cumulative number of positive outcomes along the y-axis versus the corresponding cumulative number of a classifying parameter along the x-axis. The output is called a CAP curve. The CAP is distinct from the receiver operating characteristic (ROC) curve, which plots the true-positive rate against the false-positive rate. CAPs are used in robustness evaluations of classification models. Analyzing a CAP A cumulative accuracy profile can be used to evaluate a model by comparing the current curve to both the 'perfect' and a randomized curve. A good model will have a CAP between the perfect and random curves; the closer a model is to the perfect CAP, the better it is. The accuracy ratio (AR) is defined as the ratio of the area between the model CAP and random CAP, and the area between the perfect CAP and random CAP. In a successful model, the AR has values between zero and one, and the higher the value is, the stronger the model. The cumulative number of positive outcomes indicates a model's strength. For a successful model, this value should lie between 50% and 100% of the maximum, with a higher percentage for stronger models. In sporadic cases, the accuracy ratio can be negative. In this case, the model is performing worse than the random CAP. Applications The cumulative accuracy profile (CAP) and ROC curve are both commonly used by banks and regulators to analyze the discriminatory ability of rating systems that evaluate credit risks. The CAP is also used by instructional design engineers to assess, retrain and rebuild instructional design models used in constructing courses, and by professors and school authorities for improved decision-making and managing educational resources more efficiently. References Mathematical modeling
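The CAP curve and the accuracy ratio described above can be computed directly from scored observations. The Python sketch below shows one straightforward way to do this; the scores and labels at the bottom are made-up illustration data, and the discretization of the curve is a simplification.

```python
import numpy as np

def cap_curve(scores, labels):
    """Cumulative fraction of positives (y) versus cumulative fraction of
    observations (x), with observations ranked by decreasing score."""
    order = np.argsort(-np.asarray(scores))
    y = np.concatenate(([0.0], np.cumsum(np.asarray(labels)[order]) / np.sum(labels)))
    x = np.linspace(0.0, 1.0, len(y))
    return x, y

def accuracy_ratio(scores, labels):
    """AR = area between model CAP and random CAP, divided by the area
    between the perfect CAP and the random CAP."""
    x, y = cap_curve(scores, labels)
    pos_rate = np.mean(labels)
    area_model = np.trapz(y, x) - 0.5          # area above the random diagonal
    area_perfect = 0.5 * (1.0 - pos_rate)      # area of the perfect CAP above it
    return area_model / area_perfect

if __name__ == "__main__":
    scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
    labels = [1,   1,   0,   1,   0,   0,   1,   0,   0,   0]
    print(f"accuracy ratio = {accuracy_ratio(scores, labels):.3f}")
```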
Cumulative accuracy profile
[ "Mathematics" ]
366
[ "Applied mathematics", "Mathematical modeling" ]
54,729,846
https://en.wikipedia.org/wiki/Kentrophoros
Kentrophoros is a genus of ciliates in the class Karyorelictea. Ciliates in this genus lack a distinct oral apparatus and depend primarily on symbiotic bacteria for their nutrition. Systematics Kentrophoros is the sole genus in the family Kentrophoridae Jankowski 1980. The type species of the genus is K. fasciolatus Sauerbrey 1928, first described from the Bay of Kiel. Synonyms are Centrophorus Kahl 1931 (an illegitimate synonym because the name was already used for a genus of sharks) and Centrophorella Kahl 1935. Fifteen species of Kentrophoros have been formally described, although several of these names may be synonyms for the same species. Description The ciliates are long and ribbon-shaped, like other karyorelictean ciliates that live in the marine interstitial habitat. In some species, the cell body is folded or involuted into a tube or more elaborate shapes. The ventral side is ciliated, while the dorsal side is mostly unciliated except for a single "circle kinety" at the margin. The dorsal side is covered with a single layer of symbiotic bacteria. Kentrophoros lacks a distinct oral apparatus, although densely-spaced kinetids associated with fibers (nematodesmata) at the anterior part of the cell may be vestiges of the oral apparatus. The number and arrangement of nuclei within the cell are also variable between species. Some species have only one micronucleus and two macronuclei, but others can have multiple clusters of macro- and micronuclei, or so-called "composite nuclei" where each cluster of macro- and micronuclei is enclosed in another membrane. Kentrophoros live in coastal marine sediments, where they prefer the interface between oxic and anoxic layers. Symbiotic bacteria The dorsal side of Kentrophoros is covered in a single layer of rod-shaped bacterial symbionts. These bacteria gain their energy from oxidizing sulfide, and unlike other sulfur-oxidizing symbionts, lack the genetic capacity to fix CO2 autotrophically into biomass; instead they appear to be entirely heterotrophic. The ciliates ingest the bacteria as their primary food source. This symbiosis has therefore been called a "kitchen garden" carried by the ciliates to feed themselves. The symbionts occupy about 50% of the total volume. They belong to a group in the Gammaproteobacteria for which the provisional name "Candidatus Kentron" has been proposed. Similar symbioses between eukaryotic hosts and sulfur-oxidizing bacteria include the ciliate Zoothamnium niveum, oligochaete worm Olavius algarvensis, and flatworm Paracatenula. References Karyorelictea Chemosynthetic symbiosis Ciliate genera
Kentrophoros
[ "Biology" ]
625
[ "Biological interactions", "Chemosynthetic symbiosis", "Behavior", "Symbiosis" ]
54,730,939
https://en.wikipedia.org/wiki/Notch%20%28engineering%29
In mechanical engineering and materials science, a notch refers to a V-shaped, U-shaped, or semi-circular defect deliberately introduced into a planar material. In structural components, a notch causes a stress concentration which can result in the initiation and growth of fatigue cracks. Notches are used in materials characterization to determine fracture mechanics-related properties such as fracture toughness and rates of fatigue crack growth. Notches are commonly used in material impact tests, where a morphological crack of controlled origin is necessary to achieve standardized characterization of the fracture resistance of the material. The most common is the Charpy impact test, which uses a pendulum hammer (striker) to strike a horizontal notched specimen. The height of its subsequent swing-through is used to determine the energy absorbed during fracture. The Izod impact strength test uses a circular notched vertical specimen in a cantilever configuration. Charpy testing is conducted with U- or V-notches, whereby the striker contacts the specimen directly behind the notch, whereas the now largely obsolete Izod method involves a semi-circular notch facing the striker. Notched specimens are used in other characterization protocols, such as tensile and fatigue tests. Types of notches The type of notch introduced to a specimen depends on the material and the characterization employed. For standardized testing of fracture toughness by the Charpy impact method, specimen and notch dimensions are most often taken from the American standard ASTM E23 or the British standard BS EN ISO 148-1:2009. For all notch types, a key parameter governing stress concentration and failure in notched materials is the notch tip curvature or radius. Sharp-tipped V-shaped notches are often used in standard fracture toughness testing for ductile materials and polymers, and for the characterization of weld strength. The application of such notches to hard steels is problematic due to sensitivity to grain alignment, which is why torsional testing may be applied for such materials instead. A U-notch is an elongated notch having a round notch tip, being deeper than it is wide. This notch is also often referred to as a C-notch, and is the most widely used form of introduced notch, due to the repeatability of results obtained from notched specimens. Correlating U-notch performance to a V-notch equivalent is challenging and is carried out on a case-by-case basis; there is no standardized correlation between performance values obtained with the two notch types. A keyhole notch is typically considered as a slit ending in a hole of a given radius. This type of notch is most often considered in numerical models. Fracture toughness results obtained from keyhole notch testing are often higher than those obtained from V-notched or pre-cracked specimens. See also Charpy impact test References Fracture mechanics
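To illustrate why the notch tip radius is the governing parameter, one can use the classical Inglis estimate for the elastic stress concentration at an elliptical notch, Kt ≈ 1 + 2√(a/ρ), where a is the notch depth and ρ the tip radius. The Python sketch below evaluates this estimate for a few tip radii; it is an idealized elastic approximation offered only for illustration, not a substitute for the standardized impact tests described above.

```python
import math

def stress_concentration(depth_mm, tip_radius_mm):
    """Inglis estimate Kt = 1 + 2*sqrt(a/rho) for an elliptical notch of
    depth a and tip radius rho under remote tension."""
    return 1.0 + 2.0 * math.sqrt(depth_mm / tip_radius_mm)

if __name__ == "__main__":
    depth = 2.0  # mm, e.g. the depth of a standard Charpy V-notch
    for rho in (0.25, 0.1, 0.025):   # blunter to sharper tips, in mm
        print(f"tip radius {rho:5.3f} mm -> Kt ~ {stress_concentration(depth, rho):.1f}")
```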
Notch (engineering)
[ "Materials_science", "Engineering" ]
554
[ "Structural engineering", "Materials degradation", "Materials science", "Fracture mechanics" ]
54,731,628
https://en.wikipedia.org/wiki/Arsonium
The arsonium cation is a positively charged polyatomic ion with the chemical formula [AsH4]+. An arsonium salt is a salt containing either the parent arsonium ([AsH4]+) cation, as in arsonium bromide ([AsH4]Br) and arsonium iodide ([AsH4]I), which can be synthesized by reacting arsine with hydrogen bromide or hydrogen iodide, or, more commonly, an organic derivative such as the quaternary arsonium salts (CAS: , hydrate form) and the zwitterionic compound arsenobetaine. References Arsenic(−III) compounds Cations
Arsonium
[ "Physics", "Chemistry" ]
118
[ "Cations", "Ions", "Matter" ]
54,737,840
https://en.wikipedia.org/wiki/3%CF%89-method
The 3ω-method (3 omega method), or 3ω-technique, is a measurement method for determining the thermal conductivities of bulk materials (i.e. solids or liquids) and thin layers. The process involves a metal heater applied to the sample that is heated periodically. The temperature oscillations thus produced are then measured. The thermal conductivity and thermal diffusivity of the sample can be determined from their frequency dependence. Theory The 3ω-method can be accomplished by depositing a thin metal structure (generally a wire or a film) onto the sample to function as a resistive heater and a resistance temperature detector (RTD). The heater is driven with AC current at frequency ω, which induces periodic Joule heating at frequency 2ω, since the heating power is proportional to the square of the current and therefore oscillates at twice the drive frequency during a single period of the AC signal. There will be some delay between the heating of the sample and the temperature response, which is dependent upon the thermal properties of the sensor/sample. This temperature response is measured by logging the amplitude and phase delay of the AC voltage signal from the heater across a range of frequencies (generally accomplished using a lock-in amplifier). Note that the phase delay of the signal is the lag between the heating signal and the temperature response. The measured voltage will contain both the fundamental and third-harmonic components (ω and 3ω respectively), because the Joule heating of the metal structure induces oscillations in its resistance with frequency 2ω through the temperature coefficient of resistance (TCR) of the metal heater/sensor, as stated in the relation R = R0(1 + C0ΔT), where C0 is constant. The thermal conductivity is determined from the linear slope of the ΔT vs. log(ω) curve. The main advantages of the 3ω-method are the minimization of radiation effects and easier acquisition of the temperature dependence of the thermal conductivity than with steady-state techniques. Although some expertise in thin-film patterning and microlithography is required, this technique is considered the best pseudo-contact method available. The process was first published by David Cahill and Robert Pohl in the April 1987 issue of the Physical Review in a paper titled "Thermal Conductivity of Amorphous Solids above the Plateau". References Materials testing Heat conduction
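In the line-heater approximation commonly used with this method, the in-phase temperature oscillation decreases linearly with ln(ω), and the thermal conductivity follows from that slope as κ = −P/(2πl·slope), where P is the heating power and l the heater length. The Python sketch below applies this relation to synthetic data; the power, heater length, offset and generated 'measurements' are placeholder values, not results from a real experiment.

```python
import numpy as np

def thermal_conductivity(omega, delta_T, power_W, heater_length_m):
    """Slope method: fit the in-phase temperature oscillation against ln(omega)
    and convert the slope to thermal conductivity (W m-1 K-1)."""
    slope, _ = np.polyfit(np.log(omega), delta_T, 1)
    return -power_W / (2.0 * np.pi * heater_length_m * slope)

if __name__ == "__main__":
    # Synthetic data for a material with kappa = 1.4 W m-1 K-1 (placeholder values)
    P, length, kappa_true = 0.03, 1e-3, 1.4
    omega = np.logspace(2, 4, 20)          # angular frequency, rad/s
    offset = 40.0                          # lumps the constant geometric terms
    dT = -P / (2 * np.pi * length * kappa_true) * np.log(omega) + offset
    print(f"recovered kappa = {thermal_conductivity(omega, dT, P, length):.2f} W/m/K")
```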
3ω-method
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
476
[ "Materials testing", "Heat conduction", "Thermodynamics", "Materials science" ]
54,740,174
https://en.wikipedia.org/wiki/Bioinformatics%20Institute%20%28Singapore%29
The Bioinformatics Institute (Abbreviation: BII) is one of the Biomedical Sciences Institutes of the Agency for Science, Technology and Research (A*STAR). BII was originally founded in 2001 by Dr Rajagopal as a support unit for bioinformatics and IT service management. Since August 2007, however, it has been redefined as a biological research organisation, following the arrival of the current executive director, Dr Frank Eisenhaber. BII focuses on "computational biology-driven life science research aimed at the discovery of biomolecular mechanisms." BII also develops computer-based research tools and performs experimental verifications in its own experimental facilities or by collaborating with appropriate groups. BII is home to the journal Scientific Phone Apps and Mobile Devices, published with SpringerNature. There are currently four research divisions in BII: Biomolecular Sequence to Function Biomolecular Modelling and Design Imaging Informatics Translational Research Under Dr. Sebastian Maurer-Stroh, the team at BII quality-checked genomic sequences uploaded by various countries to the GISAID database, which stores and shares COVID-19 virus data. External links References Genetics or genomics research institutions Bioinformatics Research institutes in Singapore
Bioinformatics Institute (Singapore)
[ "Engineering", "Biology" ]
253
[ "Bioinformatics", "Biological engineering" ]
56,263,862
https://en.wikipedia.org/wiki/Xi%20Yin
Xi Yin (; born December 1983 ) is a Chinese-American theoretical physicist. Biography Yin was accepted to University of Science and Technology of China in 1996, at the age of 12, and completed the (then) 5-year bachelor program in 2001. He gained a PhD at Harvard University in 2006, under the supervision of Andrew Strominger. He was a Junior Fellow at the Harvard Society of Fellows, and a Visiting Member at the Institute for Advanced Study. He joined the Harvard faculty in 2008, and is now a Professor of Physics. Yin is a recipient of NSF CAREER Award, Sloan Research Fellowship, and New Horizons in Physics Prize. He is a Simons Investigator, and a principal investigator of the Simons Bootstrap Collaboration. Yin ran the Boston marathon three times, and completed the Leadville Trail 100 Run in 2011. References External links Personal website 1983 births Living people 21st-century American physicists 21st-century Chinese scientists Chinese emigrants to the United States Harvard Graduate School of Arts and Sciences alumni Harvard Faculty of Arts and Sciences faculty People from Zhuzhou Scientists from Hunan Theoretical physicists University of Science and Technology of China alumni
Xi Yin
[ "Physics" ]
231
[ "Theoretical physics", "Theoretical physicists" ]
51,960,491
https://en.wikipedia.org/wiki/C%20band%20%28infrared%29
In infrared optical communications, C-band (C for "conventional") refers to the wavelength range 1530–1565 nm, which corresponds to the amplification range of erbium doped fiber amplifiers (EDFAs). The C-band is located around the absorption minimum in optical fiber, where the loss reaches values as good as 0.2 dB/km, as well as an atmospheric transmission window (see figures). The C-band is located between the short wavelengths (S) band (1460–1530 nm) and the long wavelengths (L) band (1565–1625 nm). It includes the 50 GHz-spaced DWDM ITU channels 16 (1564.68 nm, 191.6 THz) to 59 (1530.33 nm, 195.9 THz). References Infrared Optical communications
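The wavelength, frequency and channel numbers quoted above are related by λ = c/f and, for the numbering used in this article, by channel n sitting at f = 190 THz + n × 0.1 THz. The short Python sketch below reproduces the two quoted endpoints; note that the channel formula is inferred from the values given here rather than taken from the ITU-T grid specification itself.

```python
C_M_PER_S = 299_792_458.0  # speed of light in vacuum

def wavelength_to_thz(nm):
    """Convert vacuum wavelength in nm to frequency in THz."""
    return C_M_PER_S / (nm * 1e-9) / 1e12

def channel_number(freq_thz):
    """Channel number as used in this article: n such that f = 190 THz + n*0.1 THz.
    (Inferred from the quoted channel/frequency pairs, not from the ITU grid spec.)"""
    return round((freq_thz - 190.0) * 10)

for nm in (1564.68, 1530.33):
    f = wavelength_to_thz(nm)
    print(f"{nm} nm -> {f:.1f} THz -> channel {channel_number(f)}")
```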
C band (infrared)
[ "Physics", "Engineering" ]
177
[ "Optical communications", "Telecommunications engineering", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Infrared" ]
51,967,982
https://en.wikipedia.org/wiki/Paleostress%20inversion
Paleostress inversion refers to the determination of paleostress history from evidence found in rocks, based on the principle that past tectonic stress should have left traces in the rocks. Such relationships have been recognized in field studies for years: qualitative and quantitative analyses of deformation structures are useful for understanding the distribution and transformation of paleostress fields controlled by sequential tectonic events. Deformation ranges from the microscopic to the regional scale, and from brittle to ductile behaviour, depending on the rheology of the rock, the orientation and magnitude of the stress, etc. Therefore, detailed observations in outcrops, as well as in thin sections, are important in reconstructing paleostress trajectories. Inversions require assumptions in order to simplify the complex geological processes. The stress field is assumed to be spatially uniform for a faulted rock mass and temporally stable over the period of time when faulting occurred in that region; in other words, variation of the small-scale stress field due to local fault slip is ignored. Moreover, the slip on each fault surface is assumed to have the same direction and sense as the maximum shear stress resolved on that surface from the known stress field. Since the first introduction of the methods by Wallace and Bott in the 1950s, similar assumptions have been used throughout the decades. Fault slip analysis Conjugate fault system Anderson was the first to utilize conjugate fault systems in interpreting paleostress, including all kinds of conjugate faults (normal, reverse and strike-slip). Regional conjugate faults can be better understood by comparison with a familiar rock mechanics experiment, the Uniaxial Compressive Strength (UCS) test. The basics of their mechanisms are similar, except that the orientation of the applied principal stress is rotated from perpendicular to the ground to parallel to it. The conjugate fault model is a simple way to obtain approximate orientations of the stress axes, owing to the abundance of such structures in the upper brittle crust. Therefore, a number of studies have been carried out by other researchers in assorted structural settings and by correlating with other deformation structures. Nonetheless, further development revealed the deficiencies of the model: 1. Important geometrical properties may be absent in practical situations The geometrical properties of conjugate faults are indicative of the sense of stress, but they may not appear in actual fault patterns. Slickenside lineations normal to the fault plane intersection Symmetrical sense of motion that gives the obtuse angle in the direction of lengthening Relation between the intersecting angle of fault planes and mechanical properties, with reference to information from rock mechanics experiments in the laboratory 2. Observed fault patterns are far more sophisticated There are often pre-existing faults, planes of weakness or striations oblique to the fault slip, which do not belong to the conjugate fault sets. Neglecting this considerable amount of data would cause error in the analysis. 3. Neglecting the stress ratio (Φ) This ratio provides the relative magnitude of the intermediate stress (σ2) and thus determines the shape of the stress ellipsoid. However, this model does not account for the ratio, save for some specific cases. 
Reduced stress tensor This method was established by Bott in 1959, based on the assumption that the direction and sense of slip occurring on a fault plane are the same as those of the maximum resolved shear stress; hence, with known orientations and senses of movement on abundant faults, a particular solution T (the reduced stress tensor) is attained. It gives more comprehensive and accurate results in reconstructing the paleostress axes and determining the stress ratio (Φ) than the conjugate fault system. The tensor works by solving for four independent unknowns (the three principal axes and Φ) through mathematical computation on observations of faults (i.e. the attitude of faults and of lineations on fault planes, the direction and sense of slip, and other tension fractures). This method follows four rigorous steps: Data Analysis Computation of Reduced Stress Tensor Minimization Check of Results Data analysis Reconstruction of paleostress requires a large amount of data to attain accuracy, so it is essential to organize the data in a comprehensible format prior to any analysis. 1) Fault Population Geometry The attitude of fault planes and slickensides is plotted on rose diagrams, so that the geometry is visible. This is particularly useful when the sample size is enormous, as it provides a full picture of the region of interest. 2) Fault Motion Fault movement is resolved into three components (as in 3D), namely the vertical transverse, horizontal transverse and lateral components, by trigonometric relations with the measured dips and trends. The net slip is thereby shown more clearly, which paves the way to understanding the deformation. 3) Individual Fault Geometry Fault planes are represented by lines in stereonets (equal-area lower-hemisphere projection), while the rakes on them are indicated by dots sitting on the lines. This helps to visualize the geometrical distribution and possible symmetry among individual faults. 4) P (pressure) and T (tension) Dihedra This is a concluding step that compiles all the data and checks their mechanical compatibility; it can also be seen as a preliminary step in determining the major paleostress orientations. Although it is a simple graphical representation of the fault geometry (the boundaries of the dihedra) and of the sense of slip (shortening direction indicated by black and extension by grey), it is able to provide good constraints on the orientation of the principal stress axes. The approximation is built upon the assumption that the orientation of the maximum principal stress (σ1) most probably passes through the greatest number of P-quadrants. Since the fault plane and the auxiliary plane perpendicular to the striations are treated the same in this method, the model can be directly applied to focal mechanisms of earthquakes. Nonetheless, for the same reason, this method cannot provide an accurate determination of the paleostress, nor of the stress ratio. Determination of paleostress Reduced stress tensor The stress tensor can be considered as a matrix with nine components, the nine stress components acting at a point, in which the three components along the diagonal correspond to the principal axes. The reduced stress tensor is a mathematical computation approach to determining the three principal axes and the stress ratio, four independent unknowns in total, calculated as eigenvectors and eigenvalues respectively, so that this method is more complete and accurate than the graphical approaches mentioned above. 
There are a number of formulations that can reach the same final results but with distinctive features: (1) T = (σ − σ3 I)/(σ1 − σ3), where I is the identity matrix and Φ = (σ2 − σ3)/(σ1 − σ3), such that 0 ≤ Φ ≤ 1. This tensor is defined by setting σ1, σ2 and σ3 as 1, Φ and 0 respectively, because σ3 is chosen as the origin and (σ1 − σ3) as the unit of the reduction. The advantage of this formulation is the direct correspondence to the stress orientation, and thus to the stress ellipsoid and the stress ratio. (2) An alternative formulation uses a deviator, which requires more computation to obtain information about the stress ellipsoid despite maintaining symmetry in the mathematical sense. Minimization Minimization aims to reduce the differences between the computed and observed slip directions on the fault planes by choosing a function on which to perform the least-squares minimization. Here are a few examples of the functions: (1) The very first function used in fault slip analysis does not take the sense of individual slip into account, which means that altering the sense of a single slip does not affect the result. However, the individual sense of motion is an effective reflection of the orientation of the stress axes in real situations. Hence, S1 is the simplest function that incorporates the sense of individual slip. (2) S2 is derived from S1 through a variation in the computational process. (3) S3 is an improved version of the previous functions in two respects. Regarding computational efficiency, which is particularly significant in long iterative processes like this, the tangent of the angle is preferred to the cosine. Moreover, to deal with anomalous data (e.g. faults initiated by another event, errors in data collection, etc.), an upper limit on the value of the angle function can be set to filter out deviating data. (4) S4 resembles S2 except that the unit vector parallel to the shear stress is replaced by the predicted shear stress itself. It therefore still produces results similar to the other methods, although its physical meaning is less well justified. Checking results The reduced stress tensor should describe, as well as possible (though rarely perfectly), the observed orientations and senses of movement on diverse fault planes in a rock mass. By reviewing the fundamental principle of interpreting paleostress from the reduced stress tensor, an assumption is recognized: every fault slip in the rock mass is induced homogeneously by a common stress tensor. This implies that the variation in stress orientation and ratio Φ within a rock mass is overlooked, yet it is always present in practical cases due to interaction between discontinuities at all scales. Hence, the significance of this effect has to be examined to test the validity of the method, by considering one parameter: the difference between the measured slickenside lineation and the theoretical shear stress direction. The average angular deviation is insignificant when compared with the total of instrumental (measuring tools) and observational (unevenness of fault surfaces and striae) errors in the majority of cases. In conclusion, the reduced stress tensor method is validated when the sample size is large and representative (homogeneous data sets with a range of fault orientations), the sense of motion is noted, minimization of the angular difference is emphasized when choosing functions (as mentioned in the section above), and rigorous computation takes place. Limitation Quantitative analyses cannot stand alone without careful qualitative field observations. The analyses described above are to be carried out only after the overall geologic framework is understood, e.g. the number of paleostress systems and the chronological order of successive stress patterns.
Also, consistency with other stress markers, e.g. stylolites and tension fractures, is required to justify the result. Examples of application Cambrian Eriboll Formation sandstones west of the Moine Thrust Zone, NW Scotland Baikal region, Central Asia Alpine foreland, Central Northern Switzerland Grain boundary piezometer A piezometer is a gauge used to measure pressure (non-directional) or stress (directional) from strain in rocks at any scale. Following the paleostress inversion principle, rock masses under stress should exhibit strain at both the macroscopic and the microscopic scale, the latter being found at grain boundaries (the interfaces between crystal grains, at scales below 10² μm). Strain is revealed by changes in grain size, changes in the orientation of grains or the migration of crystal defects, through a number of mechanisms, e.g. dynamic recrystallization (DRX). Since these mechanisms depend primarily on the flow stress, and the deformation they produce is stable, the recrystallized grain size or grain boundary is often used as an indicator of paleostress in tectonically active regions such as crustal shear zones, orogenic belts and the upper mantle. Dynamic recrystallization (DRX) Dynamic recrystallization is one of the crucial mechanisms for reducing grain size in shear settings. DRX is defined as a nucleation-and-growth process because local grain boundary bulging (BLG, a nucleation mechanism), subgrain rotation (SGR, a nucleation mechanism) and grain boundary migration (GBM, a grain-growth mechanism) are all present in the deformation. This evidence is commonly found in quartz, a typical piezometer, from ductile shear zones. Optical microscopy and transmission electron microscopy (TEM) are usually used to observe the sequential occurrence of subgrain rotation and local grain boundary bulging, and to measure the recrystallized grain size. The nucleation process is triggered at the boundaries of existing grains only when the material has been deformed to particular critical values. Grain boundary bulging (BLG) Grain boundary bulging is a process in which nuclei grow at the expense of existing grains, followed by the formation of a 'necklace' structure. Subgrain rotation (SGR) Subgrain rotation is also known as in-situ recrystallization without considerable grain growth. This process happens steadily over the strain history, so the change in orientation is progressive rather than abrupt as in grain boundary bulging. Therefore, grain boundary bulging and subgrain rotation are distinguished as discontinuous and continuous dynamic recrystallization respectively. Theoretical models Static energy-balance model The theoretical basis of grain size piezometry was first established by Robert J. Twiss in the late 1970s. By comparing the free dislocation energy and the grain boundary energy, he derived a static energy-balance model applicable to the subgrain size. The relation is represented by an empirical equation between the normalized grain size and the flow stress that is universal for various materials, in which d is the average grain size, b is the length of the Burgers vector, K is a non-dimensional, temperature-dependent constant typically of the order of 10, μ is the shear modulus, and σ is the flow stress. This model does not account for the persistently transforming nature of the microstructures seen in dynamic recrystallization, so its inability to determine the recrystallized grain size led to the later models.
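In practice a calibrated piezometer relation is simply inverted: a measured recrystallized (or subgrain) size is converted into a flow stress estimate. The sketch below assumes a generic power-law form d/b = K·(σ/μ)^(−p) purely for illustration; the constants K and p and the quartz values for μ and b are placeholders, not a published calibration.

```python
# Minimal sketch of applying a grain-size piezometer: invert an assumed
# power-law relation d/b = K * (sigma/mu)**(-p) to estimate flow stress from
# a measured grain size. K, p, mu and b below are illustrative placeholders,
# not a calibrated quartz piezometer.
K = 10.0        # non-dimensional constant, "of the order of 10"
p = 1.2         # assumed stress exponent
mu = 42e9       # approximate shear modulus of quartz, Pa
b = 5e-10       # approximate Burgers vector length, m

def flow_stress(d_metres):
    """Flow stress (Pa) from grain size via sigma = mu * (d / (K*b))**(-1/p)."""
    return mu * (d_metres / (K * b)) ** (-1.0 / p)

for d_um in (5.0, 20.0, 100.0):      # grain sizes in micrometres
    print("d = %5.0f um  ->  sigma ~ %5.0f MPa" % (d_um, flow_stress(d_um * 1e-6) / 1e6))
```

The qualitative behaviour is the important point: smaller recrystallized grain sizes imply higher flow stresses.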
Nucleation-and-growth models Unlike the previous model, these models consider that the sizes of individual grains vary temporally and spatially, and they therefore derive an average grain size from an equilibrium between nucleation and grain growth. The scaling relation for the grain size is expressed in terms of d, the mode of the logarithmic grain size, I, the nucleation rate per unit volume, and a, a scaling factor. Beyond this basic theory there are still many arguments over the details, which are reflected in the assumptions of the models, so various modifications exist. Derby–Ashby model Derby and Ashby considered grain boundary bulging nucleation at the grain boundaries in determining the nucleation rate (Igb), in contrast to the intracrystalline nucleation suggested by the prior model. This model therefore describes the microstructures of discontinuous DRX (DDRX). Shimizu model On the contrasting assumption that subgrain rotation nucleation in continuous DRX (CDRX) should be considered for the nucleation rate, Shimizu proposed another model, which has also been tested in the laboratory. Simultaneous operation of dislocation and diffusion creeps Field boundary model The above models neglect one vital factor, which becomes especially important when the grain size is reduced substantially through dynamic recrystallization: the surface energy becomes more significant when grains are sufficiently small, which converts the creep mechanism from dislocation creep to diffusion creep, so that the grains start to grow. Determining the boundary zone between the fields of these two creep mechanisms therefore matters for knowing when the recrystallized grain size tends to stabilize, and it supplements the above models. The difference between this model and the previous nucleation-and-growth models lies in the assumptions: the field boundary model assumes that grain size is reduced in the dislocation creep field and enlarged in the diffusion creep field, whereas this is not the case in the previous models. Common piezometers Quartz is abundant in the crust and contains creep microstructures that are sensitive to the deformation conditions in the deeper crust. Before it can be used to infer flow stress magnitudes, the mineral has to be calibrated carefully in the laboratory. Quartz has been found to exhibit different piezometer relations for different recrystallization mechanisms, namely local grain boundary migration (dislocation creep), subgrain rotation (SGR) and the combination of these two, as well as at different grain sizes. Other common minerals used for grain size piezometers are calcite and halite, which have undergone syn-tectonic deformation or laboratory high-temperature creep, and which also show different piezometer relations for distinct recrystallization mechanisms. References Further reading Angelier, J., 1994, Fault slip analysis and paleostress reconstruction. In: Hancock, P.L. (ed.), Continental Deformation. Pergamon, Oxford, p. 101–120. Célérier, B., Etchecopar, A., Bergerat, F., Vergely, P., Arthaud, F., Laurent, P., 2012. Inferring stress from faulting: From early concepts to inverse methods. Tectonophysics, Crustal Stresses, Fractures, and Fault Zones: The Legacy of Jacques Angelier 581, 206–219. Pascal, C., 2021. Paleostress Inversion Techniques: Methods and Applications for Tectonics, Elsevier, 400 p. https://www.elsevier.com/books/paleostress-inversion-techniques/pascal/978-0-12-811910-5 Ramsay, J.G., Lisle, R.J., 2000. The Techniques of Modern Structural Geology.
Volume 3: Applications of continuum mechanics in structural geology (Session 32: Fault Slip Analysis and Stress Tensor Calculations), Academic Press, London. Yamaji, A., 2007. An Introduction to Tectonophysics: Theoretical Aspects of Structural Geology (Chapter 11: Determination of Stress from Faults), Terrapub, Tokyo. http://www.terrapub.co.jp/e-library/yamaji/ Structural geology Deformation (mechanics)
Paleostress inversion
[ "Materials_science", "Engineering" ]
3,570
[ "Deformation (mechanics)", "Materials science" ]
51,970,331
https://en.wikipedia.org/wiki/D%20band%20%28waveguide%29
The waveguide D band is the range of radio frequencies from 110 GHz to 170 GHz in the electromagnetic spectrum, corresponding to the recommended frequency band of operation of the WR6 and WR7 waveguides. These frequencies are equivalent to wavelengths between 2.7 mm and 1.8 mm. The D band is in the EHF range of the radio spectrum. References Radio spectrum
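The quoted wavelength limits follow directly from the free-space relation λ = c/f (the guide wavelength inside a WR6 or WR7 waveguide is longer and frequency-dependent, so these are free-space values):

```latex
\lambda_{110\ \mathrm{GHz}} = \frac{c}{f} = \frac{2.998\times10^{8}\ \mathrm{m/s}}{110\times10^{9}\ \mathrm{Hz}} \approx 2.7\ \mathrm{mm},
\qquad
\lambda_{170\ \mathrm{GHz}} = \frac{2.998\times10^{8}\ \mathrm{m/s}}{170\times10^{9}\ \mathrm{Hz}} \approx 1.8\ \mathrm{mm}.
```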
D band (waveguide)
[ "Physics" ]
77
[ "Radio spectrum", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
64,586,749
https://en.wikipedia.org/wiki/Oxford%E2%80%93AstraZeneca%20COVID-19%20vaccine
The Oxford–AstraZeneca COVID-19 vaccine, sold under the brand names Covishield and Vaxzevria among others, is a viral vector vaccine for the prevention of COVID-19. It was developed in the United Kingdom by Oxford University and British-Swedish company AstraZeneca, using as a vector the modified chimpanzee adenovirus ChAdOx1. The vaccine is given by intramuscular injection. Studies carried out in 2020 showed that the efficacy of the vaccine is 76.0% at preventing symptomatic COVID-19 beginning at 22 days following the first dose and 81.3% after the second dose. A study in Scotland found that, for symptomatic COVID-19 infection after the second dose, the vaccine is 81% effective against the Alpha variant (lineage B.1.1.7) and 61% against the Delta variant (lineage B.1.617.2). The vaccine is stable at refrigerator temperatures and has a good safety profile, with side effects including injection-site pain, headache, and nausea, all generally resolving within a few days. More rarely, anaphylaxis may occur; the UK Medicines and Healthcare products Regulatory Agency (MHRA) has received 268 such reports out of some 21.2 million vaccinations. In very rare cases (around 1 in 100,000 people), the vaccine has been associated with an increased risk of blood clots when in combination with low levels of blood platelets (embolic and thrombotic events after COVID-19 vaccination). According to the European Medicines Agency, as of 4 April 2021, a total of 222 cases of blood clots had been recorded among 34 million people who had been vaccinated in the European Economic Area (a percentage of 0.0007%). On 30 December 2020, the vaccine was first approved for use in the UK vaccination programme, and the first vaccination outside of a trial was administered on 4 January 2021. The vaccine has since been approved by several medicine agencies worldwide, such as the European Medicines Agency (EMA), and the Australian Therapeutic Goods Administration (provisional approval in February 2021), and was approved for an Emergency Use Listing by the World Health Organization (WHO). More than 3 billion doses of the vaccine were supplied to countries worldwide. Some countries have limited its use to elderly people at higher risk for severe COVID-19 illness due to concerns over the very rare side effects of the vaccine in younger individuals. The vaccine is no longer in production. AstraZeneca withdrew its marketing authorizations for the vaccine from the European market in March 2024, and worldwide by May 2024. Medical uses The Oxford–AstraZeneca COVID-19 vaccine is used to provide protection against infection by the SARS-CoV-2 virus in order to prevent COVID-19 in adults aged 18 years and older. The medicine is administered as two doses given by intramuscular injection into the deltoid muscle (upper arm). The initial course consists of two doses with an interval of 4 to 12 weeks between doses. The World Health Organization (WHO) recommends an interval of 8 to 12 weeks between doses for optimal efficacy. There is no evidence that a third booster dose is needed to prevent severe disease in healthy adults. Effectiveness Preliminary data from a study in Brazil with 61 million individuals from January to June 2021 indicate that the effectiveness against infection, hospitalization and death is similar across most age groups, but protection against all these outcomes is significantly reduced in those aged 90 years or older, attributable to immunosenescence.
A vaccine is generally considered effective if the estimate is ≥50% with a >30% lower limit of the 95% confidence interval. Effectiveness is generally expected to slowly decrease over time. Preliminary data suggest that the initial two-dose regimen is not effective against symptomatic disease caused by the Omicron variant from the 15th week onwards. A regimen of two doses of the Oxford–AstraZeneca vaccine followed by a booster dose of the Pfizer–BioNTech or the Moderna vaccine is initially about 60% effective against symptomatic disease caused by Omicron, then after 10 weeks the effectiveness drops to about 35% with the Pfizer–BioNTech and to about 45% with the Moderna vaccine. The vaccine remains effective against severe disease, hospitalization and death. Contraindications The Oxford–AstraZeneca COVID-19 vaccine should not be administered to people who have had capillary leak syndrome. Adverse effects The most common side effects in the clinical trials were usually mild or moderate and improved within a few days after vaccination. Vomiting, diarrhoea, fever, swelling, redness at the injection site and low levels of blood platelets occurred in less than 1 in 10 people. Enlarged lymph nodes, decreased appetite, dizziness, sleepiness, sweating, abdominal pain, itching and rash occurred in less than 1 in 100 people. An increased risk of the rare and potentially fatal thrombosis with thrombocytopenia syndrome (TTS) has been associated mainly with younger female recipients of the vaccine. An analysis of embolic and thrombotic events reported to VigiBase after vaccination with the Oxford–AstraZeneca, Moderna and Pfizer vaccines found a temporally related incidence of 0.21 cases per 1 million vaccinated-days. Anaphylaxis and other allergic reactions are known side effects of the Oxford–AstraZeneca COVID-19 vaccine. The European Medicines Agency (EMA) has assessed 41 cases of anaphylaxis from around 5 million vaccinations in the United Kingdom. Capillary leak syndrome is a possible side effect of the vaccine. The European Medicines Agency (EMA) listed Guillain-Barré syndrome as a very rare side effect of the Oxford–AstraZeneca COVID-19 vaccine and added a warning in the product information. Additional side effects include tinnitus (persistent ringing in the ears), paraesthesia (unusual feeling in the skin, such as tingling or a crawling sensation), and hypoaesthesia (decreased feeling or sensitivity, especially in the skin). Pharmacology The Oxford–AstraZeneca COVID-19 vaccine is a viral vector vaccine containing a modified, replication-deficient chimpanzee adenovirus ChAdOx1, which carries the full-length codon-optimised coding sequence of the SARS-CoV-2 spike protein along with a tissue plasminogen activator (tPA) leader sequence. The adenovirus is called replication-deficient because some of its essential genes required for replication were deleted and replaced by a gene coding for the spike protein. However, the HEK 293 cells used for vaccine manufacturing express several adenoviral genes, including the ones required for the vector to replicate. Following vaccination, the adenovirus vector enters the cells and releases its genes, in the form of DNA, which are transported to the cell nucleus; thereafter, the cell's machinery transcribes the DNA into mRNA and translates it into spike protein. The approach of using an adenovirus vector to deliver the spike protein is similar to that used by the Johnson & Johnson COVID-19 vaccine and the Russian Sputnik V COVID-19 vaccine.
The protein of interest is the spike protein, found on the exterior of the virus, which enables SARS-type coronaviruses to enter cells through the ACE2 receptor. Following vaccination, the production of coronavirus spike protein within the body will cause the immune system to attack the spike protein with antibodies and T-cells if the virus later enters the body. Manufacturing To manufacture the vaccine, the virus is propagated in HEK 293 cell lines and then purified multiple times to completely remove the cell culture material. The vaccine is relatively inexpensive to manufacture on a per-dose basis. On 17 December 2020, a tweet by the Belgian Budget State Secretary revealed the price per dose that the European Union (EU) would pay, with The New York Times suggesting the lower price might relate to factors including investment in vaccine production infrastructure by the EU. The vaccine active substance (ChAdOx1-SARS-COV-2) was being produced at several sites worldwide, with AstraZeneca claiming to have established 25 sites in 15 countries. The UK sites at that time were Oxford and Keele, with bottling and finishing in Wrexham. Other sites at that time included the Serum Institute of India at Pune. The Halix site at Leiden was approved by the EMA on 26 March 2021, joining three other sites approved by the EU. History The vaccine arose from a collaboration between Oxford University's Jenner Institute and Vaccitech, a private company spun off from the university, with financing from Oxford Sciences Innovation, Google Ventures, and Sequoia Capital, among others. The first batch of the COVID-19 vaccine produced for clinical testing was developed by Oxford University's Jenner Institute and the Oxford Vaccine Group in collaboration with Italian manufacturer Advent Srl located in Pomezia. The team is led by Sarah Gilbert, Adrian Hill, Andrew Pollard, Teresa Lambe, Sandy Douglas and Catherine Green. Early development In February 2020, the Jenner Institute agreed a collaboration with the Italian company Advent Srl for the production of a batch of 1,000 doses of a vaccine candidate for clinical trials. Originally, Oxford intended to donate the rights to manufacture and market the vaccine to any drugmaker who wanted to do so, but after the Gates Foundation urged Oxford to find a large company partner to get its COVID-19 vaccine to market, the university backed away from this offer in May 2020. The UK government then encouraged Oxford to work with AstraZeneca, a company based in Europe, instead of Merck & Co., a US-based company (The Guardian reported the initial partner was the German-based Merck Group instead). Government ministers also had concerns that a vaccine manufactured in the US would not be available in the UK, according to anonymous sources in The Wall Street Journal. Financial considerations at Oxford and spin-out companies may have also played a part in the decision to partner with AstraZeneca. An initially not-for-profit licensing agreement was signed between the university and AstraZeneca PLC in May 2020, with 1 billion doses of potential supply secured and the UK reserving access to the initial 100 million doses. Furthermore, the US reserved 300 million doses, as well as the authority to perform Phase III trials in the US. The collaboration was also granted UK government funding and US government funding to support the development of the vaccine. In June 2020, the US National Institute of Allergy and Infectious Diseases (NIAID) confirmed that the third phase of trials for the vaccine would begin in July 2020.
On 4 June, AstraZeneca announced that the COVAX program for equitable vaccine access managed by the WHO and financed by CEPI and GAVI had spent $750m to secure 300million doses of the vaccine to be distributed to low-income or under-developed countries. Preliminary data from a study that reconstructed funding for the vaccine indicates that funding was at least 97% public, almost all from UK government departments, British and American scientific institutes, the European Commission and charities. Clinical trials In July 2020, AstraZeneca partnered with IQVIA to accelerate the timeframe for clinical trials being planned or conducted in the US. On 31 August, AstraZeneca announced that it had begun enrolment of adults for a US-funded, 30,000-subject late-stage study. Clinical trials for the vaccine candidate were halted worldwide on 8 September, as AstraZeneca investigated a possible adverse reaction which occurred in a trial participant in the UK. Trials were resumed on 13 September after AstraZeneca and Oxford, along with UK regulators, concluded it was safe to do so. AstraZeneca was later criticised for refusing to provide details about potentially serious neurological side effects in two trial participants who had received the experimental vaccine in the UK. While the trials resumed in the UK, Brazil, South Africa, Japan and India, the US did not resume clinical trials of the vaccine until 23 October. This was due to a separate investigation by the Food and Drug Administration surrounding a patient illness that triggered a clinical hold, according to the US Department of Health and Human Services (HHS) Secretary Alex Azar. The results of the COV002 phase II/III trial showed that immunity lasts for at least one year after a single dose. Results of Phase III trial On 23 November 2020, the first interim data was released by Oxford University and AstraZeneca from the vaccine's ongoing Phase III trials. The interim data reported a 70% efficacy, based on combined results of 62% and 90% from different groups of participants who were given different dosages. The decision to combine results from two different dosages was met with criticism from some who questioned why the results were being combined. AstraZeneca responded to the criticism by agreeing to carry out a new multi-country trial using the lower dose, which had led to the 90% claim. The full publication of the interim results from four ongoing Phase III trials on 8 December allowed regulators and scientists to begin evaluating the vaccine's efficacy. The December report showed that at 21 days after the second dose and beyond, there were no hospitalisations or severe disease in those who received the vaccine, compared to 10 cases in the control groups. The rate of serious adverse events was balanced between the active and control groups, which suggested that the active vaccine did not pose safety concerns beyond a rate experienced in the general population. One case of transverse myelitis was reported 14 days after the second-dose was administered as being possibly related to vaccination, with an independent neurological committee considering the most likely diagnosis to be of an idiopathic, short-segment, spinal cord demyelination. The other two cases of transverse myelitis, one in the vaccine group and the other in the control group, were considered to be unrelated to vaccination. 
A subsequent analysis, published on 19 February 2021, showed an efficacy of 76.0% at preventing symptomatic COVID-19 beginning at 22 days following the first dose, increasing to 81.3% when the second dose is given 12 weeks or more after the first. However, the results did not show any protection against asymptomatic COVID-19 following only one dose. Beginning 14 days following timely administration of a second dose, with different duration from the first dose depending on trials, the results showed 66.7% efficacy at preventing symptomatic infection, and the UK arm (which evaluated asymptomatic infections in participants) was inconclusive as to the prevention of asymptomatic infection. Efficacy was higher at greater intervals between doses, peaking at around 80% when the second dose was given at 12 weeks or longer after the first. Preliminary results from another study with 120 participants under 55 years of age showed that delaying the second dose by up to 45 weeks increases the resulting immune response and that a booster (third) dose given at least six months later produces a strong immune response. A booster dose may not be necessary, but it alleviates concerns that the body would develop immunity to the vaccine's viral vector, which would reduce the potency of annual inoculations. On 22 March 2021, AstraZeneca released interim results from the phase III trial conducted in the US that showed efficacy of 79% at preventing symptomatic COVID-19 and 100% efficacy at preventing severe disease and hospitalisation. The next day, the National Institute of Allergy and Infectious Diseases (NIAID) published a statement countering that those results may have relied on "outdated information" that may have provided an incomplete view of the efficacy data. AstraZeneca later revised its efficacy claim to be 76% after further review of the data. On 29 September 2021, AstraZeneca shows of 74% efficacy rate in the US trial. Single dose effectiveness A study on the effectiveness of a first dose of the Pfizer–BioNTech or Oxford–AstraZeneca COVID-19 vaccines against COVID-19 related hospitalisation in Scotland was based on a national prospective cohort study of 5.4million people. Between 8 December 2020 and 15 February 2021, 1,137,775 participants were vaccinated in the study, 490,000 of whom were given the Oxford–AstraZeneca vaccine. The first dose of the Oxford–AstraZeneca vaccine was associated with a vaccine effect of 94% for COVID-19-related hospitalisation at 28–34 days post-vaccination. Combined results (all vaccinated participants, whether Pfizer–BioNTech or Oxford–AstraZeneca) showed a significant vaccine effect for prevention of COVID-19-related hospitalisation, which was comparable when restricting the analysis to those aged ≥80 years (81%). The majority of the participants over the age of 65 were given the Oxford–AstraZeneca vaccine. Nasal spray On 25 March 2021, the University of Oxford announced the start of a phase I clinical trial to investigate the efficacy of an intranasal spray method. Approvals The first country to issue a temporary or emergency approval for the Oxford–AstraZeneca vaccine was the UK. The Medicines and Healthcare products Regulatory Agency (MHRA) began a review of efficacy and safety data on 27 November 2020, followed by approval for use on 30 December 2020, becoming the second vaccine approved for use in the national vaccination programme. The BBC reported that the first person to receive the vaccine outside of clinical trials was vaccinated on 4 January 2021. 
The European Medicines Agency (EMA) began review of the vaccine on 12 January 2021, and stated in a press release that a recommendation could be issued by the agency by 29 January, followed by the European Commission deciding on a conditional marketing authorisation within days. On 29 January 2021, the EMA recommended granting a conditional marketing authorisation for AZD1222 for people 18 years of age and older, and the recommendation was accepted by the European Commission the same day. Prior to approval across the EU, the Hungarian regulator unilaterally approved the vaccine instead of waiting for EMA approval. In October 2022, the conditional marketing authorisation was converted to a standard one. On 30 January 2021, the Vietnamese Ministry of Health approved the AstraZeneca vaccine for use, becoming the first vaccine to be approved in Vietnam. The vaccine has since been approved by a number of non-EU countries, including Argentina, Bangladesh, Brazil, the Dominican Republic, El Salvador, India, Israel, Malaysia, Mexico, Nepal, Pakistan, the Philippines, Sri Lanka, and Taiwan regulatory authorities for emergency usage in their respective countries. South Korea granted approval of the AstraZeneca vaccine on 10 February 2021, thus becoming the first vaccine to be approved for use in that country. The regulator recommended the two-shot regimen be used in all adults, including the elderly, noting that consideration is needed when administering the vaccine to individuals over 65 years of age due to limited data from that demographic in clinical trials. On the same day, the World Health Organization (WHO) issued interim guidance and recommended the AstraZeneca vaccine for all adults, its Strategic Advisory Group of Experts also having considered use where variants were present and concluded there was no need not to recommend it. In February 2021, the government and regulatory authorities in Australia (16 February 2021) and Canada (26 February 2021) granted approval for temporary use of the vaccine. On 19 November 2021, the vaccine was approved for use in Canada. Suspensions South Africa On 7 February 2021, the vaccine rollout in South Africa was suspended. Researchers from the University of the Witwatersrand released interim, non-peer-reviewed data that suggested the AstraZeneca vaccine provided minimal protection against mild or moderate disease infection among young people. The BBC reported on 8 February 2021 that Katherine O'Brien, director of immunisation at the WHO, felt it was "really plausible" the AstraZeneca vaccine could have a "meaningful impact" on the Beta variant (lineage B.1.351), particularly in preventing serious illness and death. The same report also indicated the Deputy Chief Medical Officer for England Jonathan Van-Tam said the Witwatersrand study did not change his opinion that the AstraZeneca vaccine was "rather likely" to have an effect on severe disease from the Beta variant. The South African government subsequently cancelled the use of the AstraZeneca vaccine. European Union In March 2021, Austria suspended the use of one batch of vaccine after two people had blood clots after vaccination, one of whom died. In total, four cases of blood clots have been identified in the same batch of 1million doses. 
Although no causal link with vaccination has been shown, several other countries, including Denmark, Norway, Iceland, Bulgaria, Ireland, Italy, Spain, Germany, France, the Netherlands and Slovenia also halted the vaccine rollout over the following days while waiting for the EMA to finish a safety review triggered by the cases. In April 2021, the EMA concluded its safety review and concluded that unusual blood clots with low blood platelets should be listed as very rare side effects while reaffirming the overall benefits of the vaccine. Following this announcement EU countries have resumed use of the vaccine with some limiting its use to elderly people at higher risk for severe COVID-19 illness. In March 2021, the Norwegian government temporarily suspended the vaccine's use, awaiting more information regarding potential adverse effects. Then, in April, the Norwegian Institute of Public Health recommended to the government to permanently suspended vaccination with AstraZeneca due to the "rare but severe incidents with low platelet counts, blood clots, and haemorrhages," since in the case of Norway, "the risk of dying after vaccination with the AstraZeneca vaccine would be higher than the risk of dying from the disease, particularly for younger people." At the same time, the Norwegian government announced their decision to wait for a final decision and to establish an expert group to provide a broader assessment on the safety of the AstraZeneca and Janssen vaccines. In May, the expert committee also recommended suspending the use of both vaccines. Finally, in May —two months after the initial suspension— the Prime Minister of Norway announced that the government decided to completely remove the AstraZeneca vaccine from the Norwegian Coronavirus Immunisation Programme, and people who have had the first will be offered another coronavirus vaccine for their second dose. In March 2021, the German Ministry of Health announced that the use of the vaccine in people aged 60 and below should be the result of a recipient-specific discussion, and that younger patients could still be given the AstraZeneca vaccine, but only "at the discretion of doctors, and after individual risk analysis and thorough explanation". In April, the Danish Health Authority suspended use of the vaccine. The Danish Health Authority said that it had other vaccines available, and that the next target groups being a lower-risk population had to be "[weighed] against the fact that we now have a known risk of severe adverse effects from vaccination with AstraZeneca, even if the risk in absolute terms is slight." A 2021 study found that the decisions to suspend the vaccine led to increased vaccine hesitancy across the West, even in countries that did not suspend the vaccine. In October 2022, the conditional marketing authorisation was converted to a standard one. Despite the continued authorisation, most EU countries stopped the administration of the vaccine by end of 2021. After an initial quick uptake, the number of doses administered remained at 67 Million since October 2021. AstraZeneca withdrew its marketing authorization for the vaccine from the European Union in March 2024. 
Canada On 29 March 2021, Canada's National Advisory Committee on Immunization (NACI) recommended that distribution of the vaccine be suspended for patients below the age of 55; NACI chairwoman Caroline Quach-Thanh stated that the risk of blood clots was higher in younger patients, and that NACI needed to "evolve" its recommendations as new data becomes available. Most Canadian provinces subsequently announced that they would follow this guidance. there had been three confirmed cases of blood clotting tied to the vaccine in Canada, out of over 700,000 doses administered in the country. Beginning 18 April 2021, amid a major third wave of the virus, several Canadian provinces announced that they would backtrack on the NACI recommendation and extend eligibility for the AstraZeneca vaccine to residents as young as 40 years old, including Alberta, British Columbia, Ontario, and Saskatchewan. Quebec also extended eligibility to residents 45 and older. The NACI guidance was a recommendation which did not affect the formal approval of the vaccine by Health Canada for all adults over 18; it stated on 14 April 2021 that it had updated its warnings on the vaccine as part of an ongoing review, but that "the potential risk of these events is very rare, and the benefits of the vaccine in protecting against COVID-19 outweigh its potential risks." On 23 April 2021, citing the current state of supplies for mRNA-based vaccines and new data, NACI issued a recommendation that the vaccine could be offered to patients as young as 30 years old if benefits outweighed the risks, and the patient did "not wish to wait for an mRNA vaccine". Beginning 11 May 2021, multiple provinces announced that they would suspend use of the AstraZeneca vaccine once again, citing either supply issues or the blood clotting risk. Some provinces stated that they planned to only use the AstraZeneca vaccine for outstanding second doses. On 1 June 2021, NACI issued guidance, citing the safety concerns as well as European studies showing an improved antibody response, recommending that an mRNA vaccine be administered as a second dose to patients that had received the AstraZeneca vaccine as their first dose. Indonesia In March 2021, Indonesia halted the rollout of the vaccine while awaiting more safety guidance from the World Health Organization, and then resumed using the vaccine on 19 March. Australia In June 2021, Australia revised its recommendations for the rollout of the vaccine, recommending that the Pfizer Comirnaty vaccine be used for people aged under 60 years if the person has not already received a first dose of AstraZeneca COVID-19 vaccine. The AstraZeneca COVID-19 vaccine can still be used in people aged under 60 years where the benefits are likely to outweigh the risks for that person, and the person has made an informed decision based on an understanding of the risks and benefits in consultation with a medical professional. Malaysia After initially approving the use of the AstraZeneca vaccine, Malaysian health authorities removed the vaccine from the country's mainstream vaccination programme due to public concerns about its safety in late April 2021. The AstraZeneca vaccines was distributed in designated vaccination centres, with the public being allowed to register for the vaccine on a voluntary basis. All 268,800 doses of the initial batch of the vaccine were fully booked in three and a half hours after the registration opened for residents of the state of Selangor and the Federal Territory of Kuala Lumpur. 
A second batch of 1,261,000 doses was offered to residents of the states of Selangor, Penang, Johore, Sarawak, and the Federal Territory of Kuala Lumpur. A total of 29,183 doses were reserved for previously waitlisted registrants, and 275,208 doses were taken up by senior citizens during a grace 3-day period. The remaining 956,609 doses were then offered to those aged 18 and above, and was completely booked within an hour. On 10 May 2024, Health Minister Dzulkefly Ahmad announced that the Malaysian Government would continue to offer care to individuals suffering from adverse effects of COVID-19 vaccines including the AstraZeneca vaccine. He also confirmed that the Malaysian Government had data on adverse effects caused by COVID-19 vaccines and methods for treating the side effects. On 13 May, Deputy Health Minister Lukanisman Awang Sauni confirmed that the Malaysian Government would release a report on the AstraZeneca vaccine's adverse effects later in the week. Safety review In March 2021, the European Medicines Agency (EMA) stated that there is no indication that vaccination has been the cause of the observed clotting issues, which were not listed as side effects of the vaccine. At the time, according to the EMA, the number of thromboembolic events in vaccinated people was no higher than that seen in the general population. , 30 cases of events of thromboembolism events had been reported among the almost 5million people vaccinated in the European Economic Area. The UK's MHRA also stated that after more than 11million doses administered, it had not been confirmed that the reported blood clots were caused by the vaccine and that vaccinations would not be stopped. On 12 March 2021 the WHO stated that a causal relationship had not been shown and that vaccinations should continue. AstraZeneca confirmed on 14 March 2021 that after examining over 17million people who have been vaccinated with the vaccine, no evidence of an increased risk of blood clots in any particular country was found. The company reported that , across the EU and UK, there had been 15 events of deep vein thrombosis and 22 events of pulmonary embolism reported among those given the vaccine, which is much lower than would be expected to occur naturally in a general population of that size. In March 2021, the German Paul-Ehrlich Institute (PEI) reported that out of 1.6million vaccinations, seven cases of cerebral vein thrombosis in conjunction with a deficiency of blood platelets had occurred. According to the PEI, the number of cases of cerebral vein thrombosis after vaccination was statistically significantly higher than the number that would occur in the general population during a similar time period. These reports prompted the PEI to recommend a temporary suspension of vaccinations until the EMA had completed their review of the cases. The World Health Organization (WHO) issued a statement on 17 March, regarding the AstraZeneca COVID-19 vaccine safety signals, and still considers the benefits of the vaccine to outweigh its potential risks, further recommending that vaccinations continue. On 18 March, the EMA announced that out of the around 20million people who had received the vaccine, general blood clotting rates were normal, but that it had identified seven cases of disseminated intravascular coagulation, and eighteen cases of cerebral venous sinus thrombosis. 
A causal link with the vaccine was not proven, but the EMA said it would conduct further analysis and recommended informing people eligible for the vaccine of the fact that the possibility it may cause rare clotting problems had not been disproven. The EMA confirmed that the vaccine's benefits outweighed the risks. On 25 March, the EMA released updated product information. According to the EMA, 100,000 cases of blood clots occur naturally each month in the EU, and the risk of blood clots was not statistically higher in the vaccinated population. The EMA noted that COVID-19 itself causes an increased risk of the development of blood clots, and as such the vaccine would lower the risk of the formation of blood clots even if the 15 cases' causal link were to be confirmed. Italy resumed vaccinations after the EMA's statement, with most of the remaining European countries following suit and resuming their AstraZeneca inoculations shortly thereafter. To reassure the public of the vaccine's safety, the British and French Prime Ministers, Boris Johnson and Jean Castex, had themselves vaccinated with it in front of the media shortly after the restart of the AstraZeneca vaccination campaigns in the EU. In April 2021, the EMA issued its direct healthcare professional communication (DHPC) about the vaccine. The DHPC indicated that a causal relationship between the vaccine and blood clots (thrombosis) in combination with low blood platelets (thrombocytopenia) was plausible and identified it as a very rare side effect of the vaccine. According to the EMA these very rare adverse events occur in around 1 out of 100,000 vaccinated people. Further development Efficacy against variants A study published in April 2021 by researchers from the COVID-19 Genomics United Kingdom Consortium, the AMPHEUS Project, and the Oxford COVID-19 Vaccine Trial Group indicated the Oxford–AstraZeneca vaccine showed somewhat reduced efficacy against infection with the Alpha variant (lineage B.1.1.7), with 70.4% efficacy in absolute terms against Alpha versus 81.5% against other variants. Despite this, the researchers concluded that the vaccine remained effective at preventing symptomatic infection from this variant and that vaccinated individuals infected symptomatically typically had shorter duration of symptoms and less viral load, thereby reducing the risk of transmission. Following the identification of notable variants of concern, concern arose that the E484K mutation, present in the Beta and Gamma variants (lineages B.1.351 and P.1), could evade the protection given by the vaccine. In February 2021, the collaboration was working to adapt the vaccine to target these variants, with the expectation that a modified vaccine would be available "in a few months" as a "booster" given to people who had already completed the two-dose series of the original vaccine. In June 2021, AstraZeneca published a press release confirming undergoing Phase II/III trials of an AZD2816 COVID-19 variant vaccine candidate. The new vaccine would be based on the current Vaxzevria adenoviral vector platform but modified with spike proteins based on the Beta (B.1.351 lineage) variant. Phase II/III trials saw 2849 volunteers participating from UK, South Africa, Brazil and Poland with parallel dosing of both the current Oxford-AstraZeneca vaccine and the variant vaccine candidate. By September 2021, AZD2816 vaccine candidate is still undergoing Phase II/III trials with intent to switch to this vaccine if approved by government regulators. 
This applies particularly to the government of Thailand, which agreed to the delivery of an additional 60 million doses of the AstraZeneca COVID-19 vaccine for 2022. Heterologous prime-boost vaccination In December 2020, a clinical trial was registered to examine a heterologous prime-boost vaccination course consisting of one dose of the Oxford–AstraZeneca vaccine followed 29 days later by Sputnik Light, which is based on the Ad26 vector. After suspensions due to rare cases of blood clots in March 2021, Canada and several European countries recommended receiving a different vaccine for the second dose. Despite the lack of clinical data on the efficacy and safety of such heterologous combinations, some experts believe that doing so may boost immunity, and several studies have begun to examine this effect. In June 2021, preliminary results from a study of 463 participants showed that a heterologous prime-boost vaccination course consisting of one dose of the Oxford–AstraZeneca vaccine followed by one dose of the Pfizer–BioNTech vaccine produced the strongest T cell activity and an antibody level almost as high as two doses of the Pfizer–BioNTech vaccine. Reversing the order resulted in T cell activity at half the potency and one-seventh the antibody levels, the latter still five times higher than two doses of Oxford–AstraZeneca. The lowest T cell activity was observed in homologous courses, when both doses were of the same vaccine. In July 2021, a study of 216 participants found that a heterologous prime-boost vaccination course consisting of one dose of the Oxford–AstraZeneca vaccine followed by one dose of the Moderna vaccine produced a similar level of neutralizing antibodies and T cell responses, with increased spike-specific cytotoxic T cells, compared to a homologous course consisting of two doses of the Moderna vaccine. Society and culture The Oxford University and AstraZeneca collaboration was seen as having the potential to deliver a low-cost vaccine with no onerous storage requirements. A series of events, including a deliberate undermining of the AstraZeneca vaccine for geopolitical purposes by both the EU and EU member states, miscommunication, reports of supply difficulties (responsibility for which lay with the EU's mishandling of vaccine procurement), misleading reports of inefficacy and adverse effects, and the high-profile European Commission–AstraZeneca COVID-19 vaccine dispute, was a public relations disaster for both Brussels and member states and, in the opinion of one academic, led to increased vaccine hesitancy. In April 2021, the vaccine was a key component of the WHO-backed COVAX (COVID-19 Vaccines Global Access) program, with the WHO, the EMA, and the MHRA continuing to state that the benefits of the vaccine outweigh any possible side effects. About 69 million doses of the Oxford–AstraZeneca COVID-19 vaccine were administered in the EU/EEA from authorization to 26 June 2022. Economics Agreements for access to vaccines began being signed in May 2020, with the UK having priority for the first 100 million doses if trials proved successful, and with the final agreement being signed at the end of August. On 21 May 2020, AstraZeneca agreed to provide 300 million doses to the US; an AstraZeneca spokesman said the US funding also covered development and clinical testing. AstraZeneca also reached a technology transfer agreement with the Mexican and Argentinean governments and agreed to produce at least 400 million doses to be distributed throughout Latin America.
The active ingredients would be produced in Argentina and sent to Mexico to be completed for distribution. In June 2020, Emergent BioSolutions signed a deal to manufacture doses of the AstraZeneca vaccine specifically for the US market. The deal was part of the Trump administration's Operation Warp Speed initiative to develop and rapidly scale production of targeted vaccines before the end of 2020. Catalent would be responsible for the finishing and packaging process. On 4 June 2020, the WHO's COVAX (COVID-19 Vaccines Global Access) facility made initial purchases of 300 million doses from the company for low- to middle-income countries. AstraZeneca and the Serum Institute of India also reached a licensing agreement to independently supply 1 billion doses of the Oxford University vaccine to middle- and low-income countries, including India. Later in September, funded by a grant from the Bill and Melinda Gates Foundation, the COVAX program secured an additional 100 million doses at US$3 per dose. On 27 August 2020, AstraZeneca concluded an agreement with the EU to supply up to 400 million doses to all EU and select European Economic Area (EEA) member states. The European Commission took over negotiations started by the Inclusive Vaccines Alliance, a group made up of France, Germany, Italy, and the Netherlands, in June 2020. On 5 November 2020, a tripartite agreement was signed between the government of Bangladesh, the Serum Institute of India, and Beximco Pharma of Bangladesh. Under the agreement, Bangladesh ordered 30 million doses of the Oxford–AstraZeneca vaccine from Serum through Beximco at $4 per shot. In addition, the Indian government gave Bangladesh 3.2 million doses as a gift, which were also produced by Serum. However, Serum supplied only 7 million doses under the tripartite agreement in the first two months of the year. Bangladesh was supposed to receive 5 million doses per month but did not receive shipments in March and April; as a result, the vaccine rollout was disrupted by supply shortfalls. The situation became more complicated when second doses for 1.3 million citizens became uncertain after India halted exports. Not receiving the second dose at the right time is likely to reduce the effectiveness of the vaccination program. In addition, some citizens of Bangladesh expressed doubts about the vaccine's effectiveness and safety. Bangladesh sought alternative vaccine sources because India was not supplying the vaccine according to the timeline of the deal. Thailand's agreement in November 2020 for 26 million doses of the vaccine would cover 13 million people, approximately 20% of the population, with the first lot expected to be delivered at the end of May. The public health minister indicated that the price paid was $5 per dose; AstraZeneca (Thailand) explained in January 2021, after a controversy, that the price each country paid depended on production costs and differences in supply chain, including manufacturing capacity, labour and raw material costs. In January 2021, the Thai cabinet approved further talks on ordering another 35 million doses, and the Thai FDA approved the vaccine for emergency use for one year. Siam Bioscience, a company owned by Vajiralongkorn, was to receive a technology transfer and has the capacity to manufacture up to 200 million doses a year for export to ASEAN. Also in November, the Philippines agreed to buy 2.6 million doses.
In December 2020, South Korea signed a contract with AstraZeneca to secure 20million doses of its vaccine, reportedly equivalent in worth to those signed by Thailand and the Philippines, with the first shipment expected as early as January 2021. , the vaccine remains under review by the South Korea Disease Control and Prevention Agency. AstraZeneca signed a deal with South Korea's SK Bioscience to manufacture its vaccine products. The collaboration calls for the SK affiliate to manufacture AZD1222 for local and global markets. On 7 January 2021, the South African government announced that they had secured an initial 1million doses from the Serum Institute of India, to be followed by another 500,000 doses in February, however the South African government subsequently cancelled the use of the vaccine, selling its supply to other African countries, and switched its vaccination program to use the Janssen COVID-19 vaccine. On 22 January 2021, AstraZeneca announced that in the event the European Union approved the COVID-19 Vaccine AstraZeneca, initial supplies would be lower than expected due to production issues at Novasep in Belgium. Only 31million of the previously predicted 80million doses would be delivered to the EU by March 2021. In an interview with Italian newspaper La Repubblica, AstraZeneca's CEO Pascal Soriot said the delivery schedule for the doses in the EU was two months behind schedule. He mentioned low yield from cell cultures at one large-scale European site. Analysis published in The Guardian also identified an apparently low yield from bioreactors in the Belgium plant and noted the difficulties in setting up this form of process, with variable yields often occurring. As a result, the EU imposed export controls on vaccine doses; controversy erupted as to whether doses were being diverted to the UK and whether deliveries to Northern Ireland would be disrupted. On 24 February 2021, a shipment of the vaccine to Accra, Ghana, via COVAX made it the first country in Africa to receive vaccines via the initiative. In early 2021, the Bureau for Investigative Journalism found that South Africa had paid double the rate for the European Commission, while Uganda paid triple. According to the Higher Education Statistics Agency data, Oxford received a US$176 million windfall on vaccine in the 2021-22 academic year. Brand names The vaccine is marketed under the brand name Covishield by the Serum Institute of India. The name of the vaccine was changed to Vaxzevria in the European Union on 25 March 2021. Vaxzevria, AstraZeneca COVID‐19 Vaccine, and COVID-19 Vaccine AstraZeneca are manufactured by AstraZeneca. Research , the AZD1222 development team were working on adapting the vaccine to be more effective in relation to newer SARS-CoV-2 variants; redesigning the vaccine being the relatively quick process of switching the genetic sequence of the spike protein. Manufacturing set-up and a small scale trial are also required before the adapted vaccine might be available in autumn. References Further reading External links "An oral history of Oxford/AstraZeneca: 'Making a vaccine in a year is like landing a human on the moon'". The Guardian Adenoviridae Drugs developed by AstraZeneca British COVID-19 vaccines Products introduced in 2020 Vaccine controversies Viral vector vaccines Withdrawn drugs
Oxford–AstraZeneca COVID-19 vaccine
[ "Chemistry", "Biology" ]
9,255
[ "Vaccination", "Withdrawn drugs", "Drug safety", "Vaccine controversies" ]
64,591,279
https://en.wikipedia.org/wiki/The%20Machine%20in%20Neptune%27s%20Garden
The Machine in Neptune's Garden: Historical Perspectives on Technology and the Marine Environment is a 2004 book edited by Helen M. Rozwadowski and David K. van Keuren. The book takes its name from Leo Marx's influential book The Machine in the Garden. It is a product of the Maury III conference on the history of oceanography held in Monterey, California in 2001. It argues the centrality of technology to the acquisition of knowledge of the oceans and contains ten thematically linked essays on the indispensable role of technology in the history of ocean science. It "demonstrate[s] that historians of science and technology should pay more attention to the history and historiography of oceanography." It is the most prominent work combining the history of technology, environmental history, and history of ocean sciences, and it is considered a foundational work in history of technology of the oceans and in the history of the marine environment. Contents The book contains an introduction by Keith R. Benson and editors Helen M. Rozwadowski and David K. van Keuren, and ten chapters by historians of science and technology. The volume is dedicated to historian of science Philip F. Rehbock, who had died in 2002. 1. "Gauging Science and Technology in the Early Victorian Era" by Michael S. Reidy 2. "Mathematics in Neptune's Garden: Making the Physics of the Sea Quantitative, 1876-1900" by Eric L. Mills 3. "Fashioning Naval Oceanography: Columbus O'Donnell Iselin and American Preparation for War 1940-1941" by Gary E. Weir 4. "'A Wonderful Oceanographic Tool': The Atomic Bomb, Radioactivity and the Development of American Oceanography" by Ronald Rainger 5. "Choosing between Centers of Action: Instrument Buoys, El Niño, and Scientific Internationalism in the Pacific, 1957-1983" by Gregory T. Cushman 6. "Breaking New Ground: The Origins on Scientific Ocean Drilling" by David K. van Keuren 7. "An Eye into the Sea: The Early Development of Fisheries Acoustics in Norway, 1935-1960" by Vera Schwach 8. "From Civilian Plantonologist to Navy Oceanographer: Mary Sears in World War II" by Kathleen Broome Williams 9. "Modeling Neptune's Garden: The Chesapeake Bay Hydraulic Model, 1965-1984" by Christine Keiner 10. "Engineering, Imagination, and Industry: Scripps Island and Dreams for Ocean Science in the 1960s" by Helen M. Rozwadowski References 2004 non-fiction books History of science and technology Environmental history Environmental non-fiction books Maritime history Naval history Oceanography
The Machine in Neptune's Garden
[ "Physics", "Technology", "Environmental_science" ]
549
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics", "History of science and technology" ]
63,245,755
https://en.wikipedia.org/wiki/One-way%20wave%20equation
A one-way wave equation is a first-order partial differential equation describing one wave traveling in a direction defined by the vector wave velocity. It contrasts with the second-order two-way wave equation describing a standing wavefield resulting from superposition of two waves in opposite directions (using the squared scalar wave velocity). In the one-dimensional case it is also known as a transport equation, and it allows wave propagation to be calculated without the mathematical complication of solving a 2nd order differential equation. Due to the fact that in the last decades no general solution to the 3D one-way wave equation could be found, numerous approximation methods based on the 1D one-way wave equation are used for 3D seismic and other geophysical calculations, see also the section . One-dimensional case The scalar second-order (two-way) wave equation describing a standing wavefield can be written as: where is the coordinate, is time, is the displacement, and is the wave velocity. Due to the ambiguity in the direction of the wave velocity, , the equation does not contain information about the wave direction and therefore has solutions propagating in both the forward () and backward () directions. The general solution of the equation is the summation of the solutions in these two directions: where and are the displacement amplitudes of the waves running in and direction. When a one-way wave problem is formulated, the wave propagation direction has to be (manually) selected by keeping one of the two terms in the general solution. Factoring the operator on the left side of the equation yields a pair of one-way wave equations, one with solutions that propagate forwards and the other with solutions that propagate backwards. The backward- and forward-travelling waves are described respectively (for ), The one-way wave equations can also be physically derived directly from specific acoustic impedance. In a longitudinal plane wave, the specific impedance determines the local proportionality of pressure and particle velocity : with = density. The conversion of the impedance equation leads to: A longitudinal plane wave of angular frequency has the displacement . The pressure and the particle velocity can be expressed in terms of the displacement (: Elastic Modulus): for the 1D case this is in full analogy to stress in mechanics: , with strain being defined as These relations inserted into the equation above () yield: With the local wave velocity definition (speed of sound): directly(!) follows the 1st-order partial differential equation of the one-way wave equation: The wave velocity can be set within this wave equation as or according to the direction of wave propagation. For wave propagation in the direction of the unique solution is and for wave propagation in the direction the respective solution is There also exists a spherical one-way wave equation describing the wave propagation of a monopole sound source in spherical coordinates, i.e., in radial direction. By a modification of the radial nabla operator an inconsistency between spherical divergence and Laplace operators is solved and the resulting solution does not show Bessel functions (in contrast to the known solution of the conventional two-way approach). Three-dimensional case The one-way equation and solution in the three-dimensional case was assumed to be similar way as for the one-dimensional case by a mathematical decomposition (factorization) of a 2nd order differential equation. 
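For concreteness, the factorization described above can be written out explicitly. The following is the standard textbook form, using the quantities defined in this article (u the displacement, c the wave velocity, x the coordinate, t time); it is a sketch for orientation rather than a reproduction of the article's own displayed equations.

```latex
% Two-way (second-order) wave equation for the displacement u(x,t):
\frac{\partial^2 u}{\partial t^2} - c^2\,\frac{\partial^2 u}{\partial x^2} = 0
% Factoring the differential operator:
\left(\frac{\partial}{\partial t} - c\,\frac{\partial}{\partial x}\right)
\left(\frac{\partial}{\partial t} + c\,\frac{\partial}{\partial x}\right) u = 0
% yields the pair of first-order one-way wave equations
\frac{\partial u}{\partial t} + c\,\frac{\partial u}{\partial x} = 0
  \quad\text{(forward-travelling, } u = F(x - ct)\text{)}
\qquad
\frac{\partial u}{\partial t} - c\,\frac{\partial u}{\partial x} = 0
  \quad\text{(backward-travelling, } u = G(x + ct)\text{)}
```

The general solution of the two-way equation is the superposition u(x, t) = F(x − ct) + G(x + ct); formulating a one-way problem amounts to keeping only one of the two terms, as stated above.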
In fact, the 3D One-way wave equation can be derived from first principles: a) derivation from impedance theorem and b) derivation from a tensorial impulse flow equilibrium in a field point. It is also possible to derive the vectorial two-way wave operator from synthesis of two one-way wave operators (using a combined field variable). This approach shows that the two-way wave equation or two-way wave operator can be used for the specific condition ∇c=0, i.e. for homogeneous and anisotropic medium, whereas the one-way wave equation resp. one-way wave operator is also valid in inhomogeneous media. Inhomogeneous media For inhomogeneous media with location-dependent elasticity module , density and wave velocity an analytical solution of the one-way wave equation can be derived by introduction of a new field variable. Further mechanical and electromagnetic waves The method of PDE factorization can also be transferred to other 2nd or 4th order wave equations, e.g. transversal, and string, Moens/Korteweg, bending, and electromagnetic wave equations and electromagnetic waves. See also References Geophysics Wave mechanics Acoustics Sound Continuum mechanics
One-way wave equation
[ "Physics" ]
930
[ "Physical phenomena", "Applied and interdisciplinary physics", "Continuum mechanics", "Classical mechanics", "Acoustics", "Waves", "Wave mechanics", "Geophysics" ]
53,466,077
https://en.wikipedia.org/wiki/Brain%20Electrical%20Oscillation%20Signature%20Profiling
Brain Electrical Oscillation Signature Profiling (BEOSP or BEOS) is an EEG technique by which a suspect's participation in a crime is detected by eliciting electrophysiological responses. It is a non-invasive, sensitive, scientific technique and a neuro-psychological method of interrogation which is also referred to as 'brain fingerprinting'. History The methodology was developed by Champadi Raman Mukundan (C. R. Mukundan), a neuroscientist and former Professor & Head of Clinical Psychology at the National Institute of Mental Health and Neurosciences (Bangalore, India), while he worked as a Research Consultant to the TIFAC-DFS Project on 'Normative Data for Brain Electrical Activation Profiling'. His work builds on research previously pursued by other scientists at American universities, including J. Peter Rosenfeld, Lawrence Farwell and Emanuel Donchin. Principle The human brain receives millions of arrays of signals in different modalities throughout its waking periods. These signals are classified and stored in terms of relationships perceived as a function of the individual's experience and available knowledge base, as well as new relationships produced through sequential processing. The process of encoding happens primarily when the individual directly participates in an activity or experiences it. Encoding is considered secondary when the information is obtained from a secondary source, viz. books, conversations, hearsay, etc., in which there is no primary experiential component and the brain deals mainly with conceptual aspects. Primary encoding is deep-seated and has specific source memory in terms of the time and place of the experience, as the individual has personally shared in or participated in the experience, act or event at a certain time in his or her life at a certain place. It is found that when the brain of an individual is activated by a piece of information about an event in which he or she has taken part, the brain of that individual will respond differently from that of a person who has received the same information from secondary (non-experiential) sources. BEOSP is based on this principle: a suspect who has primary encoded information about the events under investigation will show responses indicating firsthand (personally acquired) knowledge of those events. Procedure A pretest interview is conducted with the suspect, the suspect is acquainted with the BEOSP test procedure, and informed consent is obtained. Ideally, no questions are to be asked while conducting the test; rather, the subject is simply provided with the probable events/scenarios, after which the results are analyzed to verify whether the brain produces any experiential knowledge, which is essentially the recognition of the events disclosed. In this way, fundamental rights are protected, as no questions are asked and no answers are elicited. Applications The University of Pennsylvania conducted research along with Brigham & Women's Hospital (Boston, Massachusetts), Children's Hospital Boston and the University Hospital of Freiburg, Germany, which determined that gamma oscillations in the brain could help distinguish false memories from real ones. Their analysis concluded that in the retrieval of truthful memories, as compared to false ones, the human brain creates a markedly distinct pattern of gamma oscillations, indicating recognition of context-based information associated with a prior experience.
Criticism India’s Novel Use of Brain Scans in Courts Is Debated as featured on The New York Times India’s Judges Overrule Scientists on ‘Guilty Brain’ Tech as discussed over Wired (magazine) See also Polygraph Criminal profiling External links Forensic psychology Physiological instruments Forensic equipment Neurophysiology Psychology controversies Lie detection Fringe science
Brain Electrical Oscillation Signature Profiling
[ "Technology", "Engineering" ]
749
[ "Physiological instruments", "Measuring instruments" ]
62,331,147
https://en.wikipedia.org/wiki/Jean-Marc%20Egly
Jean-Marc Egly (born 27 December 1945), is a French molecular biology researcher specialising in the field of transcription. Research Director at Inserm, he was also Chairman of the Scientific Council of the ARC from 2006 to 2011. He is a member of the French Academy of sciences. Biography Jean-Marc Egly obtained his doctorate in chemistry in 1971 and a second in biochemistry in 1976 at the Louis-Pasteur University in Strasbourg. In 1985, he became Inserm research director at the IGBMC in Strasbourg, founded by Pierre Chambon. In 1995, he was commissioned by the Secrétaire d'état à la Recherche, Elisabeth Dufourcq, to carry out a mission and prepare a report advocating the creation of the Great Sequencing of the Genome in Evry. In 2005, he was elected a member of the French Academy of sciences. He is also a member of the Scientific Council of the Parliamentary Office for the Assessment of Scientific and Technological Choices (OPECST). Scientific contributions Jean-Marc Egly's work focused mainly on describing the mechanisms of transcription at the level of type II RNA polymerase. Distinctions 2002: Research Prize of the Allianz-Institut de France Foundation. 2004: Grand Prix de la recherche médicale de l'Inserm 2006: Chevalier of the Légion d'honneur 2014: Officier of the Légion d'honneur References 1945 births Living people 21st-century French biologists Inserm directors Members of the French Academy of Sciences Molecular biologists Officers of the Legion of Honour University of Strasbourg alumni
Jean-Marc Egly
[ "Chemistry" ]
323
[ "Molecular biologists", "Biochemists", "Molecular biology" ]
62,332,908
https://en.wikipedia.org/wiki/Journal%20of%20Architectural%20Engineering
The Journal of Architectural Engineering is a quarterly peer-reviewed scientific journal published by the American Society of Civil Engineers covering all aspects of engineering design, planning, construction, and operation of buildings, including building systems; structural, mechanical, and electrical engineering; acoustics; environmental quality; lighting; and sustainability. Abstracting and indexing The journal is indexed in Ei Compendex, Emerging Sources Citation Index, ProQuest databases, Civil Engineering Database, Inspec, Scopus, and EBSCO databases. References External links Civil engineering journals American Society of Civil Engineers academic journals Academic journals established in 1995 Architecture journals English-language journals
Journal of Architectural Engineering
[ "Engineering" ]
129
[ "Civil engineering journals", "Civil engineering" ]
62,338,906
https://en.wikipedia.org/wiki/Categorical%20trace
In category theory, a branch of mathematics, the categorical trace is a generalization of the trace of a matrix. Definition The trace is defined in the context of a symmetric monoidal category C, i.e., a category equipped with a suitable notion of a product . (The notation reflects that the product is, in many cases, a kind of a tensor product.) An object X in such a category C is called dualizable if there is another object playing the role of a dual object of X. In this situation, the trace of a morphism is defined as the composition of the following morphisms: where 1 is the monoidal unit and the extremal morphisms are the coevaluation and evaluation, which are part of the definition of dualizable objects. The same definition applies, to great effect, also when C is a symmetric monoidal ∞-category. Examples If C is the category of vector spaces over a fixed field k, the dualizable objects are precisely the finite-dimensional vector spaces, and the trace in the sense above is the morphism which is the multiplication by the trace of the endomorphism f in the usual sense of linear algebra. If C is the ∞-category of chain complexes of modules (over a fixed commutative ring R), dualizable objects V in C are precisely the perfect complexes. The trace in this setting captures, for example, the Euler characteristic, which is the alternating sum of the ranks of its terms: Further applications have used categorical trace methods to prove an algebro-geometric version of the Atiyah–Bott fixed point formula, an extension of the Lefschetz fixed point formula. References Further reading Category theory Fixed-point theorems Geometry
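A sketch of the composite referred to in the definition above may help; the notation here (coev and ev for the coevaluation and evaluation maps, X∨ for the dual object, σ for the symmetry of the monoidal product, 1 for the monoidal unit) is a standard convention chosen for illustration rather than this article's own.

```latex
\operatorname{tr}(f)\colon\;
\mathbf{1} \xrightarrow{\ \operatorname{coev}_X\ } X \otimes X^{\vee}
\xrightarrow{\ f \otimes \operatorname{id}_{X^{\vee}}\ } X \otimes X^{\vee}
\xrightarrow{\ \sigma\ } X^{\vee} \otimes X
\xrightarrow{\ \operatorname{ev}_X\ } \mathbf{1}
```

In the vector-space example, this composite is the map k → k given by multiplication by the ordinary trace of f, recovering the linear-algebra notion.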
Categorical trace
[ "Mathematics" ]
364
[ "Theorems in mathematical analysis", "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fixed-point theorems", "Theorems in topology", "Fields of abstract algebra", "Category theory", "Mathematical relations", "Geometry" ]
62,343,336
https://en.wikipedia.org/wiki/Ove%20Christiansen
Ove Christiansen (born November 13, 1969, in Holstebro, Denmark) is a professor of chemistry at the Department of Chemistry, Aarhus University (AU), Denmark. He is a contributor to the DALTON program package and initiated the MidasCpp (Molecular Interactions Dynamics and Simulations in C++) program for the accurate description of nuclear dynamics by means of coupled cluster theory. Research Ove Christiansen made important contributions to electronic structure theory by introducing the CC2 and CC3 methods and by establishing a hierarchy of coupled cluster electronic structure models: CCS, CC2, CCSD, CC3, etc. He made contributions to response theory for the purpose of describing electronic excited states. Later he shifted the emphasis of his main research interest towards vibrational structure theory, defined a variant of vibrational coupled cluster (VCC) theory and developed the theoretical machinery for the automatic derivation and implementation of VCC. Moreover, he defined vibrational response theory for various wave function types. All of this progress is assembled in the publicly available MidasCpp program suite. Academic career Ove Christiansen received his PhD in theoretical chemistry under the supervision of Prof. Poul Jørgensen at Aarhus University, Denmark, in 1997. From 1997 to 1999 he joined the group of Prof. Jürgen Gauß in Mainz, Germany, as an Alexander von Humboldt fellow, and later went to the University of Lund in Sweden, where he became a Docent in 2000. In 2002 he returned to Aarhus University as an associate professor, became Professor MSO (professor with special obligations) in 2013 and was promoted to full professor in 2018. Awards 2013: EliteForsk award 2006: EURYI Award References 1969 births Living people Theoretical chemists Danish chemists Academic staff of Aarhus University Computational chemists People from Holstebro
Ove Christiansen
[ "Chemistry" ]
361
[ "Quantum chemistry", "Physical chemists", "Computational chemists", "Theoretical chemistry", "Computational chemistry", "Theoretical chemists" ]
77,675,171
https://en.wikipedia.org/wiki/Lazertinib
Lazertinib, sold under the brand name Lazcluze among others, is an anti-cancer medication used for the treatment of non-small cell lung cancer. It is a kinase inhibitor of epidermal growth factor receptor. The most common adverse reactions include rash, nail toxicity, infusion-related reactions (amivantamab), musculoskeletal pain, edema, stomatitis, venous thromboembolism, paresthesia, fatigue, diarrhea, constipation, COVID-19 infection, hemorrhage, dry skin, decreased appetite, pruritus, nausea, and ocular toxicity. Lazertinib was approved for medical use in South Korea in January 2021, in the United States in August 2024, and in the European Union in January 2025. Medical uses Lazertinib is indicated in combination with amivantamab for the first-line treatment of adults with locally advanced or metastatic non-small cell lung cancer with epidermal growth factor receptor exon 19 deletions or exon 21 L858R substitution mutations. History Efficacy was evaluated in MARIPOSA (NCT04487080), a randomized, active-controlled, multicenter trial of 1074 participants with exon 19 deletion or exon 21 L858R substitution mutation-positive locally advanced or metastatic non-small cell lung cancer and no prior systemic therapy for advanced disease. Participants were randomized (2:2:1) to receive lazertinib in combination with amivantamab, osimertinib monotherapy, or lazertinib monotherapy (an unapproved regimen for non-small cell lung cancer) until disease progression or unacceptable toxicity. Society and culture Legal status Lazertinib was approved for medical use in the United States in August 2024. In November 2024, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Lazcluze, intended in combination with amivantamab, for the treatment of non-small cell lung cancer (NSCLC) with activating epidermal growth factor receptor (EGFR) exon 19 deletions or exon 21 L858R substitution mutations. The applicant for this medicinal product is Janssen-Cilag International NV. Lazertinib was approved for medical use in the European Union in January 2025. Names Lazertinib is the international nonproprietary name. Lazertinib is sold under the brand name Lazcluze. References External links Tyrosine kinase inhibitors 4-Morpholinyl compounds Pyrimidines Methoxy compounds Anilides Guanidines Pyrazoles Dimethylamino compounds
Lazertinib
[ "Chemistry" ]
591
[ "Guanidines", "Functional groups" ]
77,678,036
https://en.wikipedia.org/wiki/Dorothy%20Pile
Dorothy Lilian Pile (26 July 1902 – 1 February 1993) was a British metallurgist, the first woman to be admitted to the Institution of Metallurgists and a past president of the Women's Engineering Society. Early life Dorothy Lilian Pile was born in Yorkshire on 26 July 1902. Career In 1920, Pile took her first job at the Midland Laboratory Guild Ltd., where her father was the chief metallurgist. Her role was in the chemical laboratory as an assistant working on physical testing and metallography before she became more involved in sheet metal research. In 1949, Pile was appointed as a metallurgist at the Design and Research Centre of the Gold, Silver and Jewellery Trade in London, and later became an industrial liaison officer. Professional memberships Pile became the first woman to be admitted as a member of the Institution of Metallurgists in 1946. Later, in 1983, she also became the first woman to be awarded honorary fellowship. In thanks, she presented the institution with a presidential tankard, which is still held by the IOM3 Historical Collection. Pile was an active member of the Birmingham Metallurgical Association, and in 1949 she was elected its president, making her the first woman to become president of any British metallurgical society. Pile became president of the Women's Engineering Society (WES) in 1954, succeeding Ella Mary Collin in the role. Pile's successor as president was Kathleen Mary Cook. Pile presented WES with a President's Medal on 29 August 1964, featuring the organisation's logo at the time in green enamel. Pile held various other roles and memberships in industrial societies and would often be the only woman in attendance at society dinners. She is known to have been referred to as the "metallurgical aunt" at such events. References 1902 births 1993 deaths British women engineers Metallurgists Women's Engineering Society
Dorothy Pile
[ "Chemistry", "Materials_science" ]
381
[ "Metallurgists", "Metallurgy" ]
77,688,443
https://en.wikipedia.org/wiki/Monitoring%20of%20geological%20carbon%20dioxide%20storage
Carbon dioxide (CO2) from carbon capture and storage and direct air capture operations is often injected into deep geologic formations. These storage sites can be monitored for CO2 leakage. Monitoring can be done at both the surface and subsurface levels. The dominant monitoring technique is seismic imaging, where vibrations are generated that propagate through the subsurface. The geologic structure can be imaged from the refracted/reflected waves. Subsurface Subsurface monitoring can directly and/or indirectly track the reservoir's status. One direct method involves drilling deep enough to collect a sample. This drilling can be expensive due to the rock's physical properties. It also provides data only at a specific location. One indirect method sends sound or electromagnetic waves into the reservoir which reflects back for interpretation. This approach provides data over a much larger region; although with less precision. Both direct and indirect monitoring can be done intermittently or continuously. Seismic Seismic monitoring is a type of indirect monitoring. Examples of seismic monitoring of geological sequestration are the Sleipner sequestration project, the Frio CO2 injection test and the CO2CRC Otway Project. Seismic monitoring can confirm the presence of CO2 in a given region and map its lateral distribution, but is not sensitive to the concentration. Tracer Organic chemical tracers, using no radioactive or Cadmium components, can be used during the injection phase in a CCS project where CO2 is injected into an existing oil or gas field, either for EOR, pressure support or storage. Tracers and methodologies are compatible with CO2 – and at the same time unique and distinguishable from the CO2 itself or other molecules present in the sub-surface. Using laboratory methodology with an extreme detectability for tracer, regular samples at the producing wells will detect if injected CO2 has migrated from the injection point to the producing well. Therefore, a small tracer amount is sufficient to monitor large scale subsurface flow patterns. For this reason, tracer methodology is well-suited to monitor the state and possible movements of CO2 in CCS projects. Tracers can therefore be an aid in CCS projects by acting as an assurance that CO2 is contained in the desired location sub-surface. In the past, this technology has been used to monitor and study movements in CCS projects in Algeria, the Netherlands and Norway (Snøhvit). Surface This provides a measure of the vertical CO2 flux. Eddy covariance towers could potentially detect leaks, after accounting for the natural carbon cycle, such as photosynthesis and plant respiration. An example of eddy covariance techniques is the Shallow Release test. Another similar approach is to use accumulation chambers for spot monitoring. These chambers are sealed to the ground with an inlet and outlet flow stream connected to a gas analyzer. They also measure vertical flux. Monitoring a large site would require a network of chambers. InSAR Interferometric synthetic aperture radar (InSAR), is a radar technique used in geodesy and remote sensing. References Carbon capture and storage Environmental monitoring
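As an illustration of how a closed accumulation chamber yields a vertical flux estimate, the following relation is commonly used; it is not taken from this article, and the symbols (V for the chamber volume, A for its ground footprint, C for the CO2 concentration measured inside the chamber) are chosen here for the sketch.

```latex
F \;\approx\; \frac{V}{A}\,\frac{dC}{dt}
```

With C expressed per unit volume (for example mol m−3), F is a flux in mol m−2 s−1; a network of such chambers, or an eddy covariance tower, then provides the spatial coverage discussed above.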
Monitoring of geological carbon dioxide storage
[ "Engineering" ]
630
[ "Geoengineering", "Carbon capture and storage" ]
67,554,094
https://en.wikipedia.org/wiki/Topochemical%20polymerization
Topochemical polymerization is a polymerization method performed by monomers aligned in the crystal state. In this process, the monomers are crystallised and polymerised under external stimuli such as heat, light, or pressure. Compared to traditional polymerisation, the movement of monomers was confined by the crystal lattice in topochemical polymerisation, giving rise to polymers with high crystallinity, tacticity, and purity. Topochemical polymerisation can also be used to synthesise unique polymers such as polydiacetylene that are otherwise hard to prepare. Various reactions have been adopted in the field of topochemical polymerisation, such as [2+2], [4+2], [4+4], and [3+2] cycloaddition, linear addition between dienes, trienes, diacetylenes. Other than linear polymers, they can also be applied to the synthesis of two dimensional covalent networks. History The term "topochemistry" was first introduced by Kohlschütter in 1919, referring to the chemical reactions driven by the molecular alignments within the crystal. The prefix "topo" came from the Greek word "topos", which means "site". These reactions quickly draw people's attention because of their high conversion as well as solvent/catalyst-free nature. However, the early studies were usually serendipitous. In the 1960s, Schmidt's work on [2+2] photodimerization of cinnamic acids established the systematic approach to study the topochemical reactions. They proposed that only double bonds adopting coplanar and parallel orientation within a distance of 3.5-4.2 Å could react with each other in the crystal lattice. This empirical rule was later referred to as Schmidt's criteria. [2+2] cycle addition and diacetylene polymerization are among the early examples of topochemical polymerization. As shown in the figure, the formation of 1,3-diphenyl substituted cyclobutane derivatives was first studied in detail by Hasegawa and his coworkers in 1967. A series of similar monomers had also been studied by them. In 1969, the 1,4-addition polymerization of diacetylene was confirmed by Wegner and his coworkers. Restricted by the experimental condition, early researchers of topochemical polymerization usually characterized the reaction process and product with traditional chemical methods. The development of modern analysis technology such as single-crystal X-ray diffraction greatly facilitated the systematic study of topochemical polymerization and kept the popularity till these days. Design of the Reaction system Lattice Criteria of Polymerization In topochemical polymerization, little room is provided for the monomer to adjust their position. Thus, the reacting sites of the monomer should be pre-packed in a suitable manner. If [2+2] cycloaddition is involved in the polymerization, then the alignment of double bonds within the crystal should fulfill the aforementioned Schmidt's criteria. Sometimes multiple parameters should be considered. As shown in the figure, for example, the 1,4-polymerization of diacetylene requires the fine adjustment of angle as well as the monomer packing distance to achieve a satisfying reaction site distance dCC (distance between C1 and C4). The method invented by Schmidt is still the most promising way to investigate the structural criteria of polymerization. In this approach, a series of monomers with different substituents are crystallized and characterized by single-crystal X-ray diffractometer. 
By comparing their polymerization reactivity and slightly different structure, the suitable range of lattice parameters can be derived. Though Schmidt's criteria are generally useful for predicting the topochemical reactivity, there are many instances of violation of these criteria. Many examples of smooth reaction of crystals that are not expected to be reactive based on Schmidt's criteria are reported. Strategies of Lattice Control Various methods have been proposed to achieve the suitable alignment of monomers in the crystal. These methods can be divided into two categories: An obvious method is to introduce supramolecular interactions to the monomer. Popular choices include π - π stacking interactions, hydrogen/halogen bonding interactions, and Coulomb interactions. These interactions are sometimes inherent properties of reaction groups, such as π-π interaction between azide and acetylene group, or stacking force between biphenylethylene unit. Sometimes the side groups are introduced to form a network within the crystal. The other strategy is to take advantage of the so-called "host-guest" assembly. In this case, the monomer is designed to link to a "host" molecule, while the host molecule is in charge of forming the ordered network. The host molecule stays intact during the polymerization. Such strategies simplify the synthesis of monomer. The Stress of Polymerization Although the movement of the mass center of the monomer is restricted by the crystal during the polymerization, the slight change of the bond length before and after the reaction give rise to the shifting of lattice parameters. Consider a real-life topochemical polymerization initiated by irradiation: if monomer beneath the surface polymerizes later due to the light absorption near the surface, the already polymerized layer will shrink or expand, causing unbalanced stress within the crystal. The crystal might break or even lose crystallinity if the stress isn't handled properly. Using elastic interaction such as weak hydrogen bonds is a common strategy to release the stress. It has been found that the bond length of the hydrogen bond in the crystal would change after polymerization, acting as cushion. Another possible routine is to introduce "soft" parts (C-C or C-O bond free to rotate instead of rigid conjugated system) in the monomer molecule. But it will in turn increase the difficulty of crystallization. Reaction condition Light Irradiation Light irradiation can initiate the reaction while avoiding exerting additional physical effects on the monomer crystal. It can be used in topochemical polymerization based on free radical mechanism such as 1,4-polymerization of diacetylene or diene polymerization. UV light is widely used as initiation method as it does in conventional polymerization. In some circumstances, however, the polymerization initiated by UV light is so slow that unbalanced pressure will accumulate more easily as previously stated. γ-irradiation can trigger the reaction faster due to the shorter wavelength. Thus, it was proved to be a better choice than UV in various reactions such as topochemical polymerization of 1,3-diene carboxylic acid derivatives. Heat Heat can be used to trigger the electrocyclization topochemical polymerization. For example, Kana M. Sureshan et al. have developed a series of bio-compatible polymer crystals based on [3+2] Topochemical Azide-Alkyne Cycloaddition (TAAC) reaction and [3+2] topochemical ene-azide cycloaddition (TEAC) reaction. 
The monomers are polymerized by heating for a few days. In contrast to light-initiated topochemical polymerization, the lower temperature and slower reaction rate tend to produce high-quality polymer crystals, because thermal expansion is less pronounced at lower temperatures. Pressure Topochemical polymerization can also be triggered by pressure. It has been reported that the cocrystal of diiododiacetylene (guest) and bispyridyl oxalamide (host) could be polymerized under pressure. Interestingly, no polymerization was observed under light or heat due to the unfavorable distance between diacetylene units. The researchers postulated that the high pressure might "squeeze" the reactive sites together and initiate the polymerization. Application Tacticity/Stereochemistry Control Tactic and stereoselective polymerizations are traditionally catalyzed by metal-organic complexes. Topochemical polymerization provides an additional choice. In addition, by changing the alignment of the monomer within the crystal, the tacticity/stereochemistry of the polymer product can be easily controlled. An intuitive example is shown in the figure. In the topochemical polymerization of 1,3-diene carboxylic acid derivatives, polymers with four different configurations can be prepared. Their structural relationships with the monomer packing are also shown in the figure. Single Crystal Polymer Single-crystal polymers have unique applications in various fields compared to single crystals of small molecules. Because of their long chains and varied conformations, it is hard for polymers to be crystallized directly from solution. The few examples of polymer single crystals prepared in this way suffered from low quality and small size. Topochemical polymerization provides a potential route to high-quality polymer single crystals. If the polymer remains monocrystalline, the transformation from single-crystal monomer to polymer is called a single-crystal-to-single-crystal (SCSC) transformation, which requires a more sophisticated design than normal topochemical polymerization. In order to prevent the polymer from breaking into polycrystalline powder, the stress-releasing strategies should be carefully considered. However, the study of general criteria for SCSC transformation is still in its infancy and requires further work. Coordination Polymer In addition to organic polymers, coordination polymers can also be prepared by topochemical polymerization. The various conformations of metal-organic complexes provide large libraries of monomer geometries. In addition, the lengths and angles of metal-ligand bonds are relatively flexible, so that stress generated by polymerization can be released. 2-D polymers Two-dimensional (2-D) polymers formed by topochemical polymerization are popular topics in materials chemistry. By synthesizing and polymerizing monomers with functionality greater than 2, 2-D networks rather than linear polymers can be obtained. [4+4] and [4+2] cycloadditions involving anthracene units are popular choices for 2-D polymer synthesis. 2-D covalent networks with high crystallinity can be produced in this way at high conversion. Recently, Schlüter et al. synthesized a 2-D polymer via a [2+2] topochemical cycloaddition reaction. References Polymerization reactions Polymers Polymer chemistry
Topochemical polymerization
[ "Chemistry", "Materials_science", "Engineering" ]
2,141
[ "Polymers", "Polymerization reactions", "Polymer chemistry", "Materials science" ]
67,554,277
https://en.wikipedia.org/wiki/Rectangular%20lattice
The rectangular lattice and rhombic lattice (or centered rectangular lattice) constitute two of the five two-dimensional Bravais lattice types. The symmetry categories of these lattices are wallpaper groups pmm and cmm respectively. The conventional translation vectors of the rectangular lattices form an angle of 90° and are of unequal lengths. Bravais lattices There are two rectangular Bravais lattices: primitive rectangular and centered rectangular (also rhombic). The primitive rectangular lattice can also be described by a centered rhombic unit cell, while the centered rectangular lattice can also be described by a primitive rhombic unit cell. Note that the length in the lower row is not the same as in the upper row. For the first column above, of the second row equals of the first row, and for the second column it equals . Crystal classes The rectangular lattice class names, Schönflies notation, Hermann-Mauguin notation, orbifold notation, Coxeter notation, and wallpaper groups are listed in the table below. References Lattice points Crystal systems
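As a sketch of the relationship alluded to above between the conventional rectangular cell and the rhombic description, the standard crystallographic relations can be written as follows (a and b denote the conventional rectangular lattice constants; the explicit vectors are an illustrative choice of primitive basis).

```latex
% Centered rectangular lattice: conventional cell (a, b), one choice of primitive (rhombic) basis
\mathbf{a}_1 = \left(\tfrac{a}{2},\ \tfrac{b}{2}\right), \qquad
\mathbf{a}_2 = \left(\tfrac{a}{2},\ -\tfrac{b}{2}\right), \qquad
|\mathbf{a}_1| = |\mathbf{a}_2| = \tfrac{1}{2}\sqrt{a^2 + b^2}
```

The two primitive vectors are equal in length but differ from the conventional edge lengths a and b, which is why the lengths in the rhombic description do not coincide with those in the rectangular description.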
Rectangular lattice
[ "Chemistry", "Materials_science", "Mathematics" ]
219
[ "Materials science stubs", "Lattice points", "Crystal systems", "Crystallography stubs", "Crystallography", "Number theory" ]
67,554,287
https://en.wikipedia.org/wiki/Oblique%20lattice
The oblique lattice is one of the five two-dimensional Bravais lattice types. The symmetry category of the lattice is wallpaper group p2. The primitive translation vectors of the oblique lattice form an angle other than 90° and are of unequal lengths. Crystal classes The oblique lattice class names, Schönflies notation, Hermann-Mauguin notation, orbifold notation, Coxeter notation, and wallpaper groups are listed in the table below. References Lattice points Crystal systems
Oblique lattice
[ "Chemistry", "Materials_science", "Mathematics" ]
99
[ "Materials science stubs", "Lattice points", "Crystal systems", "Crystallography stubs", "Crystallography", "Number theory" ]
67,565,180
https://en.wikipedia.org/wiki/Transition%20metal%20complexes%20of%20aldehydes%20and%20ketones
Transition metal complexes of aldehydes and ketones describes coordination complexes with aldehyde (RCHO) and ketone ligands. Because aldehydes and ketones are common, the area is of fundamental interest. Some reactions that are useful in organic chemistry involve such complexes. Structure and bonding In monometallic complexes, aldehydes and ketones can bind to metals in either of two modes, η1-O-bonded and η2-C,O-bonded. These bonding modes are sometimes referred to as sigma- and pi-bonded. The two forms may sometimes interconvert. The sigma bonding mode is more common for higher-valence, Lewis-acidic metal centers (e.g., Zn2+). The pi-bonded mode is observed for low-valence, electron-rich metal centers (e.g., Fe(0) and Os(0)). For the purpose of electron counting, O-bonded ligands count as 2-electron "L ligands": they are Lewis bases. η2-C,O ligands are described as analogues of alkene ligands, i.e. within the Dewar-Chatt-Duncanson model. η2-C,O ketones and aldehydes can function as bridging ligands, utilizing a lone pair of electrons on oxygen. One such complex is , which features a ring. Related ligands Related to η1-O-bonded complexes of aldehydes and ketones are metal acetylacetonates and related species, which can be viewed as a combination of ketone and enolate ligands. Reactions Some η2-aldehyde complexes insert alkenes to give five-membered metallacycles. η1-Complexes of alpha,beta-unsaturated carbonyls exhibit enhanced reactivity toward dienes. This interaction is the basis of Lewis-acid-catalyzed Diels-Alder reactions.
Transition metal complexes of aldehydes and ketones
[ "Chemistry" ]
416
[ "Organometallic chemistry", "Coordination chemistry" ]
74,725,614
https://en.wikipedia.org/wiki/Flame%20deflector
A flame deflector, flame diverter or flame trench is a structure or device designed to redirect or disperse the flame, heat, and exhaust gases produced by rocket engines or other propulsion systems. The amount of thrust generated by a rocket launch, along with the sound it produces during liftoff, can damage the launchpad and service structure, as well as the launch vehicle. The primary goal of the diverter is to prevent the flame from causing damage to equipment, infrastructure, or the surrounding environment. Flame diverters can be found at rocket launch sites and test stands where large volumes of exhaust gases are expelled during engine testing or vehicle launch. Design and operation The diverter typically comprises a robust, heat-resistant structure that channels the force of the exhaust gases and flames in a specific direction, typically away from the rocket or equipment. This is essential to prevent the potentially destructive effects of the high-temperature gases and to reduce the acoustic impact of the ignition. A flame trench can also be used in combination with a diverter to form a trench-deflector system. The flames from the rocket travel through openings in the launchpad onto a flame deflector situated in the flame trench, which runs underneath the launch structure and extends well beyond the launchpad itself. To further reduce the acoustic effects a water sound suppression system may be also used. Notable examples Apollo program During the Apollo program the need for a flame deflector was a determining factor in the design of the Kennedy Space Center Launch Complex 39. NASA designers chose a two-way, wedge-type metal flame deflector. It measured 13 meters in height and 15 meters in width, with a total weight of 317 tons. Since the water table was close to the surface of the ground, the designers wanted the bottom of the flame trench at ground level. The flame deflector and trench determined the height and width of the octagonal shaped launch pad. Space Shuttle program During the Space Shuttle program NASA modified Launch Complex 39B at Kennedy Space Center. They installed a flame trench that was 150 meters long, 18 meters wide, and 13 meters deep. It was built with concrete and refractory brick. The main flame deflector was situated inside the trench directly underneath the rocket boosters. The V-shaped steel structure was covered with a high-temperature concrete material. It separated the exhaust of the orbiter main engines and of the solid rocket boosters into two flame trenches. It was approximately 11.6 meters high, 17.5 meters wide, and 22 meters long. The Shuttle flame trench-diverter system was refurbished for the SLS program. Baikonur Cosmodrome The main launch pads at the Russian launch complex of Baikonur Cosmodrome use a flame pit to manage launch exhaust. The launch vehicles are transported by rail to the launch pad, where they are vertically erected over a large flame deflector pit. A similar structure was built by the European Space Agency at its Guiana Space Centre. SpaceX Starship launch mount During the first orbital test flight of SpaceX's Starship vehicle in April 2023, the launch mount of Starbase was substantially damaged due to the lack of a flame diverter system. The 33 Raptor rocket engines dug a crater and scattered debris and dust over a wide area. The company designed a new water deluge based flame diverter that protects the launch mount and vehicle by spraying large quantities of water from a piece of steel equipment under the rocket. 
In November of the same year, the new water deluge system successfully protected the launchpad during the second orbital flight test of Starship, avoiding the cloud of dust and debris that rose up during the first test. References Rocket launch technologies Fire Rocketry Explosion protection
Flame deflector
[ "Chemistry", "Engineering" ]
753
[ "Explosion protection", "Combustion engineering", "Rocketry", "Combustion", "Explosions", "Aerospace engineering", "Fire" ]
74,727,689
https://en.wikipedia.org/wiki/Energy%20management%20system%20%28building%20management%29
An Energy Management System is, in the context of energy conservation, a computer system which is designed specifically for the automated control and monitoring of those electromechanical facilities in a building which yield significant energy consumption, such as heating, ventilation and lighting installations. The scope may span from a single building to a group of buildings such as university campuses, office buildings, retail store networks or factories. Most of these energy management systems also provide facilities for the reading of electricity, gas and water meters. The data obtained from these can then be used to perform self-diagnostic and optimization routines on a frequent basis and to produce trend analyses and annual consumption forecasts. Energy management systems are also commonly used by individual commercial entities to monitor, measure, and control their electrical building loads. Energy management systems can be used to centrally control devices like HVAC units and lighting systems across multiple locations, such as retail, grocery and restaurant sites. Energy management systems can also provide metering, submetering, and monitoring functions that allow facility and building managers to gather data and insight that allows them to make more informed decisions about energy activities across their sites. Smart Energy Management System (SEMS) usually refers to energy management systems capable of dynamically adapting and efficiently managing new energy scenarios with minimal human intervention through the use of artificial intelligence. These systems typically include self-supervised learning (SSL) machine learning models for energy consumption and generation forecasting, which allows for better planning of the operation of energy infrastructure. The models also typically take into account energy price data and, through the use of mathematical optimization algorithms (typically linear programming), are able to minimize the energy costs of a given system (a minimal linear-programming sketch is given below). Smart Energy Management Systems (SEMS) are used both in the residential sector, for example SoliTek NOVA, and in commercial/industrial applications of various types. SEMS play a key role in most smart grid concepts, as they enable use cases such as virtual power plants and demand response. As electric vehicle (EV) charging becomes more common, smaller residential devices are becoming popular that manage when an EV can charge based on the total load versus the total capacity of an electrical service. The global energy management system market is projected to grow rapidly over the next 10–15 years. The energy management of smart grids, battery storage systems, electric mobility, and renewable energy sources is an important area of application of the Internet of Things in the context of smart homes and smart buildings. Protocols In residential settings, the S2 Standard was developed in 2010. The S2 Standard provides a standard communication protocol, enabling communication between smart devices and an EMS. It is an open source protocol for the energy management of energy-intensive devices found in the built environment, such as photovoltaic (PV) systems, electric vehicle (EV) chargers, batteries, (hybrid) heat pumps and white goods. It is built in such a way that it can work with any flexible device from any manufacturer, and that it would work for any energy management use case.
The standard was ratified as a European standard by the European Electrotechnical Committee for Standardization (CENELEC) in 2018, in the form of the EN 50491–12 series. An EMS can provide energy efficiency through process optimization by reporting on granular energy use by individual pieces of equipment. Newer, cloud-based energy management systems provide the ability to remotely control HVAC and other energy-consuming equipment; gather detailed, real-time data for each piece of equipment; and generate intelligent, specific, real-time guidance on finding and capturing the most compelling savings opportunities. See also Energy accounting Energy conservation measure Energy management Energy management software, software to monitor and optimize energy consumption in buildings or communities References Energy Energy conservation Management systems Building automation Low-energy building Management cybernetics Sustainable building
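To make the linear-programming idea mentioned above concrete, here is a minimal sketch using SciPy's linprog. Everything in it is an illustrative assumption rather than data or code from any particular EMS product: the hourly prices, the flat building demand, the grid connection limit, and the simplification that energy bought early can be stored without loss for later hours.

```python
# Minimal sketch: buy energy over 24 hours at the lowest total cost, assuming
# purchases made ahead of time can be stored (e.g. in a battery) and used later.
import numpy as np
from scipy.optimize import linprog

prices = np.array([0.30, 0.28, 0.25, 0.22, 0.20, 0.25,   # EUR/kWh, hypothetical
                   0.35, 0.40, 0.38, 0.32, 0.30, 0.28,
                   0.27, 0.26, 0.28, 0.32, 0.40, 0.45,
                   0.42, 0.38, 0.35, 0.33, 0.31, 0.30])
demand = np.full(24, 1.5)      # assumed flat building load, kWh per hour
max_grid = 5.0                 # assumed grid connection limit, kWh per hour

# Decision variables x[0..23]: energy bought from the grid in each hour.
# Feasibility: cumulative purchases must always cover cumulative demand.
c = prices                                   # objective: total energy cost
A_ub = -np.tril(np.ones((24, 24)))           # -(cumulative x) <= -(cumulative demand)
b_ub = -np.cumsum(demand)
bounds = [(0.0, max_grid)] * 24

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(f"total cost: {res.fun:.2f} EUR")
print("hourly purchases (kWh):", np.round(res.x, 2))
```

The solver shifts purchases into the cheapest hours while keeping the running energy balance non-negative; a real SEMS would add storage capacity limits, round-trip losses and forecast uncertainty, but the cost-minimisation structure is the same.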
Energy management system (building management)
[ "Physics", "Engineering" ]
762
[ "Sustainable building", "Physical quantities", "Building engineering", "Automation", "Construction", "Energy (physics)", "Energy", "Building automation" ]
70,470,600
https://en.wikipedia.org/wiki/Phosphide%20iodide
Phosphide iodides or iodide phosphides are compounds containing anions composed of iodide (I−) and phosphide (P3−). They can be considered as mixed anion compounds. They are in the category of pnictide halides. Related compounds include the phosphide chlorides, arsenide iodides, antimonide iodides and phosphide bromides. Phosphorus can form clusters or chains in these compounds, so that some are 1-dimensional or fibrous. Phosphide iodides are often metallic, and black or dark red in colour. List References Phosphides Iodides Mixed anion compounds
Phosphide iodide
[ "Physics", "Chemistry" ]
150
[ "Ions", "Matter", "Mixed anion compounds" ]
70,474,208
https://en.wikipedia.org/wiki/Phase%20space%20crystal
Phase space crystal is the state of a physical system that displays discrete symmetry in phase space instead of real space. For a single-particle system, the phase space crystal state refers to the eigenstate of the Hamiltonian for a closed quantum system or the eigenoperator of the Liouvillian for an open quantum system. For a many-body system, phase space crystal is the solid-like crystalline state in phase space. The general framework of phase space crystals is to extend the study of solid state physics and condensed matter physics into the phase space of dynamical systems. While real space has Euclidean geometry, phase space is endowed with classical symplectic geometry or quantum noncommutative geometry. Phase space lattices In his celebrated book Mathematical Foundations of Quantum Mechanics, John von Neumann constructed a phase space lattice from two commuting elementary displacement operators along the position and momentum directions respectively, which is nowadays also called the von Neumann lattice. If the phase space is replaced by a frequency-time plane, the von Neumann lattice is called the Gabor lattice and is widely used for signal processing. The phase space lattice differs fundamentally from the real space lattice because the two coordinates of phase space are noncommutative in quantum mechanics. As a result, a coherent state moving along a closed path in phase space acquires an additional phase factor, which is similar to the Aharonov–Bohm effect of a charged particle moving in a magnetic field. There is a deep connection between phase space and magnetic fields. In fact, the canonical equations of motion can also be rewritten in the Lorentz-force form, reflecting the symplectic geometry of classical phase space. In the phase space of dynamical systems, the stable points together with their neighbouring regions form the so-called Poincaré-Birkhoff islands in the chaotic sea, which may form a chain or regular two-dimensional lattice structures in phase space. For example, the effective Hamiltonian of the kicked harmonic oscillator (KHO) can possess square-lattice, triangle-lattice and even quasi-crystal structures in phase space, depending on the ratio of the kicking period to the natural period of the oscillator. In fact, any arbitrary phase space lattice can be engineered by selecting an appropriate kicking sequence for the KHO. Phase space crystals (PSC) The concept of phase space crystal was proposed by Guo et al. and originally refers to the eigenstate of the effective Hamiltonian of a periodically driven (Floquet) dynamical system. Depending on whether interaction effects are included, phase space crystals can be classified into single-particle PSC and many-body PSC. Single-particle phase space crystals Depending on the symmetry in phase space, a phase space crystal can be a one-dimensional (1D) state with -fold rotational symmetry in phase space or a two-dimensional (2D) lattice state extended over the whole phase space. The concept of phase space crystal for a closed system has been extended to open quantum systems and is then named a dissipative phase space crystal. Zn PSC Phase space is fundamentally different from real space as the two coordinates of phase space do not commute, i.e., where is the dimensionless Planck constant. The ladder operator is defined such that . The Hamiltonian of a physical system can also be written as a function of the ladder operators.
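For orientation, the relations invoked in this paragraph are usually written as below. The notation is an assumption of this sketch (the dimensionless Planck constant is denoted λ here), not necessarily the symbols used by the original authors.

```latex
[\hat{x}, \hat{p}] = i\lambda, \qquad
\hat{a} = \frac{\hat{x} + i\hat{p}}{\sqrt{2\lambda}}, \quad
\hat{a}^{\dagger} = \frac{\hat{x} - i\hat{p}}{\sqrt{2\lambda}}, \qquad
[\hat{a}, \hat{a}^{\dagger}] = 1, \qquad
\hat{H} = H(\hat{a}, \hat{a}^{\dagger})
```

Here x̂ and p̂ are the two (noncommuting) phase space coordinates, and writing the Hamiltonian as a function of â and â† is the form referred to in the last sentence above.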
By defining the rotational operator in phase space by where with a positive integer, the system has -fold rotational symmetry or symmetry if the Hamiltonian commutates with rotational operator , i.e., In this case, one can apply Bloch theorem to the -fold symmetric Hamiltonian and calculate the band structure. The discrete rotational symmetric structure of Hamiltonian is called phase space lattice and the corresponding eigenstates are called phase space crystals. Lattice PSC The discrete rotational symmetry can be extended to the discrete translational symmetry in the whole phase space. For such purpose, the displacement operator in phase space is defined by which has the property , where is a complex number corresponding to the displacement vector in phase space. The system has discrete translational symmetry if the Hamiltonian commutates with translational operator , i.e., If there exist two elementary displacements and that satisfy the above condition simultaneously, the phase space Hamiltonian possesses 2D lattice symmetry in phase space. However, the two displacement operators are not commutative in general . In the non-commutative phase space, the concept of a "point" is meaningless. Instead, a coherent state is defined as the eigenstate of the lowering operator via . The displacement operator displaces the coherent state with an additional phase, i.e., . A coherent state that is moved along a closed path, e.g., a triangle with three edges given by in phase space, acquires a geometric phase factor where is the enclosed area. This geometric phase is analogous to the Aharonov–Bohm phase of charged particle in a magnetic field. If the magnetic unit cell and the lattice unit cell are commensurable, namely, there exist two integers and such that , one can calculate the band structure defined in a 2D Brillouin. For example, the spectrum of a square phase space lattice Hamiltonian displays Hofstadter's butterfly band structure that describes the hopping of charged particles between tight-binding lattice sites in a magnetic field. In this case, the eigenstates are called 2D lattice phase space crystals. Dissipative PSC The concept of phase space crystals for closed quantum system has been extended to open quantum system. In circuit QED systems, a microwave resonator combined with Josephson junctions and voltage bias under -photon resonance can be described by a rotating wave approximation (RWA) Hamiltonian with phase space symmetry described above. When single-photon loss is dominant, the dissipative dynamics of resonator is described by the following master equation (Lindblad equation) where is the loss rate and superoperator is called the Liouvillian. One can calculate the eigenspectrum and corresponding eigenoperators of the Liouvillian of the system . Notice that not only the Hamiltonian but also the Liouvillian both are invariant under the -fold rotational operation, i.e., with and . This symmetry plays a crucial role in extending the concept of phase space crystals to an open quantum system. As a result, the Liouvillian eigenoperators have a Bloch mode structure in phase space, which is called a dissipative phase space crystal. Many-body phase space crystals The concept of phase space crystal can be extended to systems of interacting particles where it refers to the many-body state having a solid-like crystalline structure in phase space. In this case, the interaction of particles plays an important role. 
In real space, the many-body Hamiltonian subjected to a perturbative periodic drive (with period ) is given by Usually, the interaction potential is a function of two particles' distance in real space. By transforming to the rotating frame with the driving frequency and adapting rotating wave approximation (RWA), one can get the effective Hamiltonian. Here, are the stroboscopic position and momentum of -th particle, namely, they take the values of at the integer multiple of driving period . To have the crystal structure in phase space, the effective interaction in phase space needs to be invariant under the discrete rotational or translational operations in phase space. Phase space interactions In classical dynamics, to the leading order, the effective interaction potential in phase space is the time-averaged real space interaction in one driving period Here, represents the trajectory of -th particle in the absence of driving field. For the model power-law interaction potential with integers and half-integers , the direct integral given by the above time-average formula is divergent, i.e., A renormalisation procedure was introduced to remove the divergence and the correct phase space interaction is a function of phase space distance in the plane. For the Coulomb potential , the result still keeps the form of Coulomb's law up to a logarithmic renormalised "charge" , where is the Euler's number. For , the renormalised phase space interaction potential is where is the collision factor. For the special case of , there is no effective interaction in phase space since is a constant with respect to phase space distance. In general for the case of , phase space interaction grows with the phase space distance . For the hard-sphere interaction (), phase space interaction behaves like the confinement interaction between quarks in Quantum chromodynamics (QCD). The above phase space interaction is indeed invariant under the discrete rotational or translational operations in phase space. Combined with the phase space lattice potential from driving, there exist a stable regime where the particles arrange themselves periodically in phase space giving rise to many-body phase space crystals. In quantum mechanics, the point particle is replaced by a quantum wave packet and the divergence problem is naturally avoided. To the lowest-order Magnus expansion for Floquet system, the quantum phase space interaction of two particles is the time-averaged real space interaction over the periodic two-body quantum state as follows. In the coherent state representation, the quantum phase space interaction approaches the classical phase space interaction in the long-distance limit. For bosonic ultracold atoms with repulsive contact interaction bouncing on an oscillating mirror, it is possible to form Mott insulator-like state in the phase space lattice. In this case, there is a well defined number of particles in each potential site which can be viewed as an example of 1D many-body phase space crystal. If the two indistinguishable particles have spins, the total phase space interaction can be written in a sum of direct interaction and exchange interaction. This means that the exchange effect during the collision of two particles can induce an effective spin-spin interaction. Phase space crystal vibrations Solid crystals are defined by a periodic arrangement of atoms in real space, atoms subject to a time-periodic drive can also form crystals in phase space. 
The interactions between these atoms give rise to collective vibrational modes similar to phonons in solid crystals. The honeycomb phase space crystal is particularly interesting because its vibrational band structure has two sub-lattice bands that can have nontrivial topological physics. The vibrations of any two atoms are coupled via a pairing interaction with intrinsically complex couplings. Their complex phases have a simple geometrical interpretation and cannot be eliminated by a gauge transformation, leading to a vibrational band structure with non-trivial Chern numbers and chiral edge states in phase space. In contrast to all topological transport scenarios in real space, the chiral transport of phase space phonons can arise without breaking physical time-reversal symmetry. Relation to time crystals Time crystals and phase space crystals are closely related but distinct concepts. They both study subharmonic modes that emerge in periodically driven systems. Time crystals focus on the spontaneous breaking of discrete time translational symmetry (DTTS) and the protection mechanism of subharmonic modes in quantum many-body systems. In contrast, the study of phase space crystals focuses on discrete symmetries in phase space. The basic modes that make up a phase space crystal are not necessarily many-body states, and need not break DTTS, as in the case of single-particle phase space crystals. For many-body systems, phase space crystals concern the interplay of the potential subharmonic modes that are arranged periodically in phase space. There is also a growing effort to study the interplay of multiple time crystals, which has been termed condensed matter physics in time crystals. References Concepts in physics Hamiltonian mechanics Dimensional analysis Dynamical systems Quantum mechanics
Phase space crystal
[ "Physics", "Mathematics", "Engineering" ]
2,387
[ "Dimensional analysis", "Theoretical physics", "Quantum mechanics", "Classical mechanics", "Hamiltonian mechanics", "Mechanics", "nan", "Mechanical engineering", "Dynamical systems" ]
58,017,976
https://en.wikipedia.org/wiki/Asteroid%20impact%20prediction
Asteroid impact prediction is the prediction of the dates and times of asteroids impacting Earth, along with the locations and severities of the impacts. The process of impact prediction follows three major steps: Discovery of an asteroid and initial assessment of its orbit which is generally based on a short observation arc of less than 2 weeks. Follow-up observations to improve the orbit determination Calculating if, when and where the orbit may intersect with Earth at some point in the future. The usual purpose of predicting an impact is to direct an appropriate response. Most asteroids are discovered by a camera on a telescope with a wide field of view. Image differencing software compares a recent image with earlier ones of the same part of the sky, detecting objects that have moved, brightened, or appeared. Those systems usually obtain a few observations per night, which can be linked up into a very preliminary orbit determination. This predicts approximate positions over the next few nights, and follow-ups can then be carried out by any telescope powerful enough to see the newly detected object. Orbit intersection calculations are then carried out by two independent systems, one (Sentry) run by NASA and the other (NEODyS) by ESA. Current systems only detect an arriving object when several factors are just right, mainly the direction of approach relative to the Sun, the weather, and phase of the Moon. The overall success rate is around 1% and is lower for the smaller objects. A few near misses by medium-size asteroids have been predicted years in advance, with a tiny chance of striking Earth, and a handful of small impactors have successfully been detected hours in advance. All of the latter struck wilderness or ocean, and hurt no one. The majority of impacts are by small, undiscovered objects. They rarely hit a populated area, but can cause widespread damage when they do. Performance is improving in detecting smaller objects as existing systems are upgraded and new ones come on line, but all current systems have a blind spot around the Sun that can only be overcome by a dedicated space based system or by discovering objects on a previous approach to Earth many years before a potential impact. History In 1992 a report to NASA recommended a coordinated survey (christened Spaceguard) to discover, verify and provide follow-up observations for Earth-crossing asteroids. This survey was scaled to discover 90% of all objects larger than one kilometer within 25 years. Three years later, a further NASA report recommended search surveys that would discover 60–70% of the short-period, near-Earth objects larger than one kilometer within ten years and obtain 90% completeness within five more years. In 1998, NASA formally embraced the goal of finding and cataloging, by 2008, 90% of all near-Earth objects (NEOs) with diameters of 1 km or larger that could represent a collision risk to Earth. The 1 km diameter metric was chosen after considerable study indicated that an impact of an object smaller than 1 km could cause significant local or regional damage but is unlikely to cause a worldwide catastrophe. The impact of an object much larger than 1 km diameter could well result in worldwide damage up to, and potentially including, extinction of the human race. 
The NASA commitment has resulted in the funding of a number of NEO search efforts, which made considerable progress toward the 90% goal by the target date of 2008 and also produced the first ever successful prediction of an asteroid impact (the 4-meter was detected 19 hours before impact). However, the 2009 discovery of several NEOs approximately 2 to 3 kilometers in diameter (e.g. , , , and ) demonstrated there were still large objects to be detected. Three years later, in 2012, the 40 meter diameter asteroid 367943 Duende was discovered and successfully predicted to be on close but non-colliding approach to Earth again just 11 months later. This was a landmark prediction as the object was only , and it was closely monitored as a result. On the day of its closest approach and by coincidence, a smaller asteroid was also approaching Earth, unpredicted and undetected, from a direction close to the Sun. Unlike 367943 Duende it was on a collision course and it impacted Earth 16 hours before 367943 Duende passed, becoming the Chelyabinsk meteor. It injured 1,500 people and damaged over 7,000 buildings, raising the profile of the dangers of even small asteroid impacts if they occur over populated areas. The asteroid is estimated to have been 17 m across. In April 2018, the B612 Foundation stated "It's 100 per cent certain we'll be hit [by a devastating asteroid], but we're not 100 per cent sure when." Also in 2018, physicist Stephen Hawking, in his final book Brief Answers to the Big Questions, considered an asteroid collision to be the biggest threat to the planet. In June 2018, the US National Science and Technology Council warned that America is unprepared for an asteroid impact event, and has developed and released the National Near-Earth Object Preparedness Strategy Action Plan to better prepare. Discovery of near-Earth asteroids The first step in predicting impacts is detecting asteroids and determining their orbits. Finding faint near-Earth objects against the much more numerous background stars is very much a needle in a haystack search. It is achieved by sky surveys that are designed to discover near Earth asteroids. Unlike the majority of telescopes that have a narrow field of view and high magnification, survey telescopes have a wide field of view to scan the entire sky in a reasonable amount of time with enough sensitivity to pick up the faint near-Earth objects they are searching for. NEO focused surveys revisit the same area of sky several times in succession. Movement can then be detected using image differencing techniques. Anything that moves from image to image against the background of stars is compared to a catalogue of all known objects, and if it is not already known is reported as a new discovery along with its precise position and the observation time. This then allows other observers to confirm and add to the data about the newly discovered object. Cataloging vs warning surveys Asteroid surveys can be broadly classified as either cataloging surveys, which use larger telescopes to mostly identify larger asteroids well before they come notably close to Earth, or warning surveys, which use smaller telescopes to mostly look for smaller asteroids within several million kilometers of Earth. Cataloging systems focus on finding larger asteroids years in advance and they scan the sky slowly (of the order of once per month), but deeply. Warning systems focus on scanning the sky relatively quickly (of the order of once per night). 
They typically cannot detect objects that are as faint as cataloging systems but they will not miss an asteroid that dramatically brightens for just a few days when it passes very close to Earth. Some systems compromise and scan the sky approximately once per week. Cataloging systems For larger asteroids (> 100 m to 1 km across), prediction is based on cataloging the asteroid, years to centuries before it could impact. This technique is possible as their size makes them bright enough to be seen from a long distance. Their orbits therefore can be measured and any future impacts predicted long before they are on an impact approach to Earth. This long period of warning is important as an impact from a 1 km object would cause worldwide damage and a minimum of around a decade of lead time would be needed to deflect it away from Earth. As of 2018, the inventory is nearly complete for the kilometer-size objects (around 900) which would cause global damage, and approximately one third complete for 140 meter objects (around 8500) which would cause major regional damage. The effectiveness of the cataloging is somewhat limited by the fact that some proportion of the objects have been lost since their discovery, due to insufficient observations to accurately determine their orbits. Warning systems Smaller near-Earth objects number into millions and therefore impact Earth much more often, though obviously with much less damage. The vast majority remain undiscovered. They seldom pass close enough to Earth that they become bright enough to observe, and so most can only be observed when within a few million kilometers of Earth. They therefore cannot usually be catalogued well in advance and can only be warned about, a few weeks to days in advance. Current mechanisms for detecting asteroids on approach rely on ground based visible-light telescopes with wide fields of view. Those currently can monitor the sky at most every night, and therefore miss most of the smaller asteroids which are bright enough to detect for less than a day. Such very small asteroids much more commonly impact Earth than larger ones, but they make little damage. Missing them therefore has limited consequences. Much more importantly, ground-based telescopes are blind to most of the asteroids which impact the day side of the planet and will miss even large ones. These and other problems mean very few impacts are successfully predicted (see §Effectiveness of the current system and §Improving impact prediction). Asteroids detected by warning systems are much too close to their time of potential impact to deflect them away from Earth, but there is still enough time to mitigate the consequences of the impact by evacuating and otherwise preparing the affected area. Warning systems can also detect asteroids which have been successfully catalogued as existing, but whose orbit was insufficiently well determined to allow a prediction of where they are now. Surveys The main NEO focussed surveys are listed below, along with future telescopes that are already funded. Originally all the surveys were clustered together in a relatively small part of the Northern Hemisphere. This meant that around 15% of the sky at extreme Southern declination was never monitored, and that the rest of the Southern sky was observed over a shorter season than the Northern sky. Moreover, as the hours of darkness are fewer in summertime, the lack of a balance of surveys between North and South meant that the sky was scanned less often in the Northern summer. 
The ATLAS telescopes now operating at the South African Astronomical Observatory and El Sauce observatory in Chile now cover this gap in the south east of the globe. Once it is completed, the Large Synoptic Survey Telescope will improve the existing cover of the southern sky. The 3.5 m Space Surveillance Telescope, which was originally also in the southwest United States, was dismantled and moved to Western Australia in 2017. When completed, this should also improve the global coverage. Construction has been delayed due to the new site being in a cyclone region, but was completed in September 2022. ATLAS ATLAS, the "Asteroid Terrestrial-impact Last Alert System" uses four 0.5-metre telescopes. Two are located on the Hawaiian Islands, at Haleakala and Mauna Loa, one at the South African Astronomical Observatory, and one in Chile. With a field of view of 30 square degrees each, the telescopes survey the observable sky down to apparent magnitude 19 with 4 exposures every night. The survey has been operational with the two Hawaii telescopes since 2017, and in 2018 obtained NASA funding for two additional telescopes sited in the Southern hemisphere. They were expected to take 18 months to build. Their southern locations provide coverage of the 15% of the sky that cannot be observed from Hawaii, and combined with the Northern hemisphere telescopes give non-stop coverage of the equatorial night sky (the South African location is not only in the opposite hemisphere to Hawaii, but also at an opposing longitude). The full ATLAS concept consists of eight of its 50-centimeter diameter f/2 Wright-Schmidt telescopes, spread over the globe for 24h/24h coverage of the full-night-sky. Catalina Sky Survey (including Mount Lemmon Survey) In 1998, the Catalina Sky Survey (CSS) took over from Spacewatch in surveying the sky for the University of Arizona. It uses two telescopes, a 1.5 m Cassegrain reflector telescope on the peak of Mount Lemmon (also known as a survey in its own right, the Mount Lemmon Survey), and a 0.7 m Schmidt telescope near Mount Bigelow (both in the Tucson, Arizona area in the south west of the United States). Both sites use identical cameras which provide a field of view of 5 square degrees on the 1.5 m telescope and 19 square degrees on the Catalina Schmidt. The Cassegrain reflector telescope takes three to four weeks to survey the entire sky, detecting objects fainter than apparent magnitude 21.5. The 0.7 m telescope takes a week to complete a survey of the sky, detecting objects fainter than apparent magnitude 19. This combination of telescopes, one slow and one medium, has so far detected more near Earth Objects than any other single survey. This shows the need for a combination of different types of telescopes. CSS used to include a telescope in the Southern Hemisphere, the Siding Spring Survey. However operations ended in 2013 after funding was discontinued. Kiso Observatory (Tomo-e Gozen) The Kiso Observatory uses a 1.05m Schmidt telescope on Mt. Ontake near Tokyo in Japan. In late 2019 the Kiso Observatory added a new instrument to the telescope, "Tomo-e Gozen", designed to detect fast moving and rapidly changing objects. It has a wide field of view (20 square degrees) and scans the sky in just 2 hours, far faster than any other survey as of 2021. This puts it squarely in the warning survey category. 
In order to scan the sky so quickly, the camera captures 2 frames per second, which means the sensitivity is lower than other metre class telescopes (which have much longer exposure times), giving a limiting magnitude of just 18. However, despite not being able to see dimmer objects which are detectable by other surveys, the ability to scan the entire sky several times per night allows it to spot fast moving asteroids that other surveys miss. It has discovered a significant number of near-Earth asteroids as a result (for example see List of asteroid close approaches to Earth in 2021). Large Synoptic Survey Telescope The Large Synoptic Survey Telescope (LSST) is a wide-field survey reflecting telescope with an 8.4 meter primary mirror, currently under construction on Cerro Pachón in Chile. It will survey the entire available sky around every three nights. Science operations are due to begin in 2022. Scanning the sky relatively fast but also being able to detect objects down to apparent magnitude 27, it should be good at detecting nearby fast moving objects as well as excellent for larger slower objects that are currently further away. Near-Earth Object Surveillance Mission A planned space-based 0.5m infrared telescope designed to survey the Solar System for potentially hazardous asteroids. The telescope will use a passive cooling system, and so unlike its predecessor NEOWISE, it will not suffer from a performance degradation due to running out of coolant. It does still have a limited mission duration however as it needs to use propellant for orbital station keeping in order to maintain its position at SEL1. From here, the mission will search for asteroids hidden from Earth based satellites by the Sun's glare. It is planned for launch in 2026. NEO Survey Telescope The Near Earth Object Survey TELescope (NEOSTEL) is an ESA funded project, starting with an initial prototype currently under construction. The telescope is of a new "fly-eye" design that combines a single reflector with multiple sets of optics and CCDs, giving a very wide field of view (around 45 square degrees). When complete it will have the widest field of view of any telescope and will be able to survey the majority of the visible sky in a single night. If the initial prototype is successful, three more telescopes are planned for installation around the globe. Because of the novel design, the size of the primary mirror is not directly comparable to more conventional telescopes, but is equivalent to a conventional 1–metre telescope. The telescope itself should be complete by end of 2019, and installation on Mount Mufara, Sicily should be complete in 2020 but was pushed back to 2022. NEOWISE The Wide-field Infrared Survey Explorer is a 0.4 m infrared-wavelength space telescope launched in December 2009, and placed in hibernation in February 2011. It was re-activated in 2013 specifically to search for near-Earth objects under the NEOWISE mission. By this stage, the spacecraft's cryogenic coolant had been depleted and so only two of the spacecraft's four sensors could be used. Whilst this has still led to new discoveries of asteroids not previously seen from ground-based telescopes, the productivity has dropped significantly. In its peak year when all four sensors were operational, WISE made 2.28 million asteroid observations. In recent years, with no cryogen, NEOWISE typically makes approximately 0.15 million asteroid observations annually. 
The next generation of infrared space telescopes has been designed so that they do not need cryogenic cooling. Pan-STARRS Pan-STARRS, the "Panoramic Survey Telescope And Rapid Response System", currently (2018) consists of two 1.8 m Ritchey–Chrétien telescopes located at Haleakala in Hawaii. It has discovered a large number of new asteroids, comets, variable stars, supernovae and other celestial objects. Its primary mission is now to detect near-Earth objects that threaten impact events, and it is expected to create a database of all objects visible from Hawaii (three-quarters of the entire sky) down to apparent magnitude 24. The Pan-STARRS NEO survey searches all the sky north of declination −47.5. It takes three to four weeks to survey the entire sky. Space Surveillance Telescope The Space Surveillance Telescope (SST) is a 3.5 m telescope that detects, tracks, and can discern small, obscure objects, in deep space with a wide field of view system. The SST mount uses an advanced servo-control technology, that makes it one of the quickest and most agile telescopes of its size. It has a field of view of 6 square degrees and can scan the visible sky in 6 clear nights down to apparent magnitude 20.5. Its primary mission is tracking orbital debris. This task is similar to that of spotting near-Earth asteroids and so it is capable of both. The SST was initially deployed for testing and evaluation at the White Sands Missile Range in New Mexico. On 6 December 2013, it was announced that the telescope system would be moved to the Naval Communication Station Harold E. Holt in Exmouth, Western Australia. The SST was moved to Australia in 2017, captured first light in 2020 and after a two and a half year testing programme became operational in September 2022. Spacewatch Spacewatch was an early sky survey focussed on finding near Earth asteroids, founded in 1980. It was the first to use CCD image sensors to search for them, and the first to develop software to detect moving objects automatically in real-time. This led to a huge increase in productivity. Before 1990 a few hundred observations were made each year. After automation, annual productivity jumped by a factor of 100 leading to tens of thousands of observations per year. This paved the way for the surveys we have today. Although the survey is still in operation, in 1998 it was superseded by Catalina Sky Survey. Since then it has focused on following up on discoveries by other surveys, rather than making new discoveries itself. In particular it aims to prevent high priority PHOs from being lost after their discovery. The survey telescopes are 1.8 m and 0.9 m. The two follow-up telescopes are 2.3 m and 4 m. Zwicky Transient Facility The Zwicky Transient Facility (ZTF) was commissioned in 2018, superseding the Intermediate Palomar Transient Factory (2009–2017). It is designed to detect transient objects that rapidly change in brightness, for example supernovae, gamma ray bursts, collisions between two neutron stars, as well as moving objects such as comets and asteroids. The ZTF is a 1.2 m telescope that has a field of view of 47 square degrees, designed to image the entire northern sky in three nights and scan the plane of the Milky Way twice each night to a limiting magnitude of 20.5. The amount of data produced by ZTF is expected to be 10 times larger than its predecessor. Follow-up observations Once a new asteroid has been discovered and reported, other observers can confirm the finding and help define the orbit of the newly discovered object. 
The International Astronomical Union Minor Planet Center (MPC) acts as the global clearing house for information on asteroid orbits. It publishes lists of new discoveries that need verifying and still have uncertain orbits, and it collects the resulting follow-up observations from around the world. Unlike the initial discovery, which typically requires unusual and expensive wide-field telescopes, ordinary telescopes can be used to confirm the object as its position is now approximately known. There are far more of these around the globe, and even a well equipped amateur astronomer can contribute valuable follow-up observations of moderately bright asteroids. For example, the Great Shefford Observatory in the back garden of amateur Peter Birtwhistle typically submits thousands of observations to the Minor Planet Center every year. Nonetheless, some surveys (for example CSS and Spacewatch) have their own dedicated follow-up telescopes. Follow-up observations are important because once a sky survey has reported a discovery it may not return to observe the object again for days or weeks. By this time it may be too faint for it to detect, and in danger of becoming a lost asteroid. The more observations and the longer the observation arc, the greater the accuracy of the orbit model. This is important for two reasons: for imminent impacts it helps to make a better prediction of where the impact will occur and whether there is any danger of hitting a populated area. for asteroids that will miss Earth this time round, the more accurate the orbit model is, the further into the future its position can be predicted. This allows recovery of the asteroid on its subsequent approaches, and impacts to be predicted years in advance. Estimating size and impact severity Assessing the size of the asteroid is important for predicting the severity of the impact, and therefore the actions that need to be taken (if any). With just observations of reflected visible light by a conventional telescope, the object could be anything from 50% to 200% of the estimated diameter, and therefore anything from one-eighth to eight times the estimated volume and mass. Because of this, one key follow-up observation is to measure the asteroid in the thermal infrared spectrum (long-wavelength infrared), using an infrared telescope. The amount of thermal radiation given off by an asteroid together with the amount of reflected visible light allows a much more accurate assessment of its size than just how bright it appears in the visible spectrum. Jointly using thermal infrared and visible measurements, a thermal model of the asteroid can estimate its size to within about 10% of the true size. One example of such a follow-up observation was for 3671 Dionysus by UKIRT, the world's largest infrared telescope at the time (1997). A second example was the 2013 ESA Herschel Space Observatory follow-up observations of 99942 Apophis, which showed it was 20% larger and 75% more massive than previously estimated. However such follow-ups are rare. The size estimates of most near-Earth asteroids are based on visible light only. If the object was discovered by an infrared survey telescope initially, then an accurate size estimate will become available with visible light follow-up, and infrared follow-up will not be needed. However, none of the ground-based survey telescopes listed above operate at thermal infrared wavelengths. The NEOWISE satellite had two thermal infrared sensors but they stopped working when the cryogen ran out. 
There are therefore currently no active thermal infrared sky surveys which are focused on discovering near-Earth objects. There are plans for a new space based thermal infrared survey telescope, Near-Earth Object Surveillance Mission, due to launch in 2025. Impact calculation Minimum orbit intersection distance The minimum orbit intersection distance (MOID) between an asteroid and the Earth is the distance between the closest points of their orbits. This first check is a coarse measure that does not allow an impact prediction to be made, but is based solely on the orbit parameters and gives an initial measure of how close to Earth the asteroid could come. If the MOID is large then the two objects never come near each other. In this case, unless the orbit of the asteroid is perturbed so that the MOID is reduced at some point in the future, it will never impact Earth and can be ignored. However, if the MOID is small then it is necessary to carry out more detailed calculations to determine if an impact will happen in the future. Asteroids with a MOID of less than 0.05 AU and an absolute magnitude brighter than 22 are categorized as a potentially hazardous asteroid. Projecting into the future Once the initial orbit is known, the potential positions can be forecast years into the future and compared to the future position of Earth. If the distance between the asteroid and the centre of the Earth is less than Earth radius then a potential impact is predicted. To take account of the uncertainties in the orbit of the asteroid, many future projections are made (simulations) with slightly different parameters within the range of the uncertainty. This allows a percentage chance of impact to be estimated. For example, if 1,000 simulations are carried out and 73 result in an impact, then the prediction would be a 7.3% chance of impact. NEODyS NEODyS (Near Earth Objects Dynamic Site) is a European Space Agency service that provides information on near Earth objects. It is based on a continually and (almost) automatically maintained database of near Earth asteroid orbits. The site provides a number of services to the NEO community. The main service is an impact monitoring system (CLOMON2) of all near-Earth asteroids covering a period until the year 2100. The NEODyS website includes a Risk Page where all NEOs with probabilities of hitting the Earth greater than 10−11 from now until 2100 are shown in a risk list. In the table of the risk list the NEOs are divided into: "special", as was the case of (99942) Apophis "observable", objects which are presently observable and which critically need a follow-up in order to improve their orbit "possible recovery", objects which are not visible at present, but which are possible to recover in the near future "lost", objects which have an absolute magnitude (H) brighter than 25 but which are virtually lost, their orbit being too uncertain; and "small", objects with an absolute magnitude fainter than 25; even when those are "lost", they are considered too small to result in heavy damage on the ground (though the Chelyabinsk meteor would have been fainter than this). Each object has its own impactor table (IT) which shows many parameters useful to determine the risk assessment. Sentry prediction system NASA's Sentry System continually scans the MPC catalog of known asteroids, analyzing their orbits for any possible future impacts. Like ESA's NEODyS, it gives a list of possible future impacts, along with the probability of each. 
It uses a slightly different algorithm to NEODyS, and so provides a useful cross-check and corroboration. Currently, no impacts are predicted (the single highest probability impact currently listed is ~7 m asteroid , which is due to pass Earth in September 2095 with only a 10% predicted chance of impacting; its size is also small enough that any damage from an impact would be minimal). Impact probability calculation pattern The ellipses in the diagram on the right show the predicted position of an example asteroid at closest Earth approach. At first, with only a few asteroid observations, the error ellipse is very large and includes the Earth. The impact prediction probability is small because the Earth cover a small fraction of the large error ellipse. (Often times the error ellipse extends for tens if not hundreds of millions of km.) Further observations shrink the error ellipse. If it still includes the Earth, this raises the predicted impact probability, since the fixed-size Earth now covers a larger fraction of the smaller error region. Finally, yet more observations (often radar observations, or discovery of a previous sighting of the same asteroid on much older archival images) shrink the ellipse, usually revealing that the Earth is outside the smaller error region and the impact probability is then near zero. In rare cases, the Earth remains in the ever shrinking error ellipse and the impact probability then approaches one. For asteroids that are on track to hit Earth, the predicted probability of impact never stops increasing as more observations are made. This initially very similar pattern makes it difficult to quickly differentiate between asteroids which will be millions of kilometres from Earth and those which will hit it. This in turn makes it difficult to decide when to raise an alarm as gaining more certainty takes time, which reduces the time available to react to a predicted impact. However raising the alarm too soon has the danger of causing a false alarm and creating a Boy Who Cried Wolf effect if the asteroid in fact misses Earth. NASA will raise an alert if an asteroid has a better than 1% chance of impacting. In December 2004 when Apophis was estimated to have a 2.7% chance of impacting Earth on 13 April 2029, the uncertainty region for this asteroid had shrunk to 82,818 km. Response to predicted impact Once an impact has been predicted the potential severity needs to be assessed, and a response plan formed. Depending on the time to impact and the predicted severity this may be as simple as giving a warning to citizens. For example, although unpredicted, the 2013 impact at Chelyabinsk was spotted through the window by teacher Yulia Karbysheva. She thought it prudent to take precautionary measures by ordering her students to stay away from the room's windows and to perform a duck and cover maneuver. The teacher, who remained standing, was seriously lacerated when the blast arrived and window glass severed a tendon in one of her arms and left thigh, but none of her students, whom she ordered to hide under their desks, suffered lacerations. If the impact had been predicted and a warning had been given to the entire population, similar simple precautionary actions could have vastly reduced the number of injuries. Children who were in other classes were injured. If a more severe impact is predicted, the response may require evacuation of the area, or with sufficient lead time available, an avoidance mission to repel the asteroid. 
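The percentage-of-simulations bookkeeping described above, and the way a shrinking error region first raises and then usually collapses the estimated probability, can be mimicked with a deliberately simplified sketch. Real Sentry and NEODyS computations propagate full orbital solutions; the toy model below instead samples a single close-approach distance from a one-dimensional Gaussian whose mean, uncertainties and sample count are made-up numbers chosen only to show the qualitative pattern.

    import numpy as np

    rng = np.random.default_rng(0)
    EARTH_RADIUS_KM = 6371.0          # a sampled approach closer than this counts as a hit

    def impact_probability(predicted_miss_km, uncertainty_km, n_samples=100_000):
        # Stand-in for running many orbit propagations with parameters drawn
        # from the orbit-fit uncertainty (hypothetical 1-D Gaussian model).
        samples = rng.normal(predicted_miss_km, uncertainty_km, n_samples)
        return np.mean(np.abs(samples) < EARTH_RADIUS_KM)

    # As the observation arc grows, the uncertainty shrinks.  While Earth stays inside
    # the shrinking error region the estimate rises; once the region no longer
    # includes Earth, the probability collapses towards zero.
    for sigma_km in (500_000, 100_000, 30_000, 10_000, 3_000):
        p = impact_probability(predicted_miss_km=25_000, uncertainty_km=sigma_km)
        print(f"uncertainty {sigma_km:>9,} km -> impact probability {p:.3%}")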
According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation before a mission to intercept an asteroid could be launched which was demonstrated by kinetically deflecting a minor planet moon, non-hazardous NEO Asteroid called Dimorphos with the help of the DART spacecraft. Following a ten-month journey to the Didymos system, the impactor collided with Dimorphos on 26 September 2022 at a speed of around . The collision successfully decreased Dimorphos's orbital period around Didymos by minutes. Effectiveness of the current system The effectiveness of the current system can be assessed a number of ways. The diagram below illustrates the number of successfully predicted impacts each year compared to the number of unpredicted asteroid impacts recorded by infrasound sensors designed to detect detonation of nuclear devices. It shows that the success rate is increasing over time, but that the vast majority are still missed. One problem with assessing effectiveness this way is that the sensitivity of infrasound sensors extends to small asteroids, which generally do very little damage. The missed asteroids do tend to be small, and missing small asteroids is relatively unimportant. By contrast, missing a large day-side impacting asteroid is highly problematic, with the unpredicted mid-size Chelyabinsk meteor providing a mild real-life example. In order to assess the effectiveness for detecting the (rare) larger asteroids which do matter, a different approach is needed. That effectiveness for larger asteroid can be assessed by looking at warning times for asteroids which did not impact Earth but came close. The below diagram for asteroids which came closer than the Moon shows how far in advance of closest approach they were first detected. Unlike asteroid impacts, where infrasound sensors provide ground truth, it is impossible to know for sure how many close approaches were undetected. Of the asteroids that were detected, the diagram shows that about half were not detected until after they had passed Earth. If they had been on course to impact Earth, they would not have been spotted before they hit, primarily because they approached from a direction close to the Sun. This includes larger asteroids such as 2018 AH, which approached from a direction close to the Sun and was detected 2 days after it had passed. It is estimated to be around 100 times more massive than the Chelyabinsk meteor. The number of detections is increasing as more survey sites come on line (for example ATLAS in 2016 and ZTF in 2018), but approximately half of the detections are made after the asteroid passes the Earth. The below charts visualise the warning times of the close approaches listed in the above bargraph, by the size of the asteroid instead of by the year they occurred in. The sizes of the charts show the relative sizes of the asteroids to scale. This is based on the absolute magnitude of each asteroid, an approximate measure of size based on brightness. For comparison, the approximate size of a person is also shown. 
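The absolute-magnitude bins used below map onto approximate diameters through the standard relation D(km) ≈ 1329 × 10^(−H/5) / √p, where H is the absolute magnitude and p is the geometric albedo. Because the albedo is usually unknown, any such size estimate carries the factor-of-two uncertainty mentioned earlier; the sketch below simply assumes a mid-range albedo of 0.14 for illustration.

    import math

    def diameter_m(H, albedo=0.14):
        # Standard conversion from absolute magnitude to diameter, in metres,
        # for an assumed geometric albedo.
        return 1000.0 * 1329.0 / math.sqrt(albedo) * 10 ** (-H / 5.0)

    for H in (30, 28, 26.5, 25, 22):
        print(f"H = {H:>4} -> roughly {diameter_m(H):6.0f} m across")

With this assumed albedo, H ≈ 26.5 corresponds to roughly 18 m, consistent with the Chelyabinsk-sized bin below, and H = 22 corresponds to roughly 140 m, the size threshold used in the definition of potentially hazardous objects.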
Abs magnitude 30 and greater (size of a person for comparison) Abs magnitude 29–30 Absolute magnitude 28–29 Absolute magnitude 27–28 Absolute magnitude 26–27 (probable size of the Chelyabinsk meteor) Absolute magnitude 25–26 Absolute magnitude less than 25 (largest) As can be seen, the ability to predict larger asteroids has significantly improved since the early years of the 21st century, with some now being catalogued (predicted more than 1 year in advance), or having usable early warning times (greater than a week). Based on the few successfully predicted asteroid impacts, the average time between initial detection and impact is currently around 9 hours. There is some delay between the initial observation of the asteroid, data submission, and the follow-up observations and calculations which lead to an impact prediction being made. Improving impact prediction In addition to the already-funded telescopes mentioned above, two separate approaches have been suggested by NASA to improve impact prediction. Both approaches focus on the first step in impact prediction (discovering near-Earth asteroids) as this is the largest weakness in the current system. The first approach uses more powerful ground-based telescopes similar to the LSST. Being ground-based, such telescopes will still only observe part of the sky around Earth. In particular, all ground-based telescopes have a large blind spot for any asteroids coming from the direction of the Sun. In addition, they are affected by weather conditions, airglow and the phase of the Moon. To get around all of these issues, the second approach suggested is the use of space-based telescopes which can observe a much larger region of the sky around Earth. Although they still cannot point directly towards the Sun, they do not have the problem of blue sky to overcome and so can detect asteroids much closer in the sky to the Sun than ground-based telescopes. Unaffected by weather or airglow they can also operate 24 hours per day all year round. Finally, telescopes in outer space have the advantage of being able to use infrared sensors without the interference of the Earth's atmosphere. These sensors are better for detecting asteroids than optical sensors, and although there are some ground based infrared telescopes such as UKIRT, they are not designed for detecting asteroids. Space-based telescopes are more expensive, and tend to have a shorter lifespan, so Earth-based and space-based technologies complement each other to an extent. Although the majority of the IR spectrum is blocked by Earth's atmosphere, the very useful thermal (long-wavelength infrared) frequency band is not blocked (see gap at 10 μm in the diagram below). This allows for the possibility of ground based thermal imaging surveys designed for detecting near earth asteroids, though none are currently planned. Opposition effect There is a further issue that even telescopes in Earth orbit do not overcome (unless they operate in the thermal infrared spectrum). This is the issue of illumination. Asteroids go through phases similar to the lunar phases. Even though a telescope in orbit may have an unobstructed view of an object that is close in the sky to the Sun, it will still be looking at the dark side of the object. This is because the Sun is shining on the side facing away from the Earth, as is the case with the Moon when it is in a new moon phase. 
Because of this opposition effect, objects are far less bright in these phases than when fully illuminated, which makes them difficult to detect (see chart and diagram below). This problem can be solved by the use of thermal infrared surveys (either ground based or space based). Ordinary telescopes depend on observing light reflected from the Sun, which is why the opposition effect occurs. Telescopes which detect thermal infrared light depend only on the temperature of the object. Its thermal glow can be detected from any angle, and is particularly useful for differentiating asteroids from the background stars, which have a different thermal signature. This problem can also be solved without using thermal infrared, by positioning a space telescope away from Earth, closer to the Sun. The telescope can then look back towards Earth from the same direction as the Sun, and any asteroids closer to Earth than the telescope will then be in opposition, and much better illuminated. There is a point between the Earth and Sun where the gravities of the two bodies are perfectly in balance, called the Sun-Earth L1 Lagrange point (SEL1). It is approximately from Earth, about four times as far away as the Moon, and is ideally suited for placing such a space telescope. One problem with this position is Earth glare. Looking outward from SEL1, Earth itself is at full brightness, which prevents a telescope situated there from seeing that area of sky. Fortunately, this is the same area of sky that ground-based telescopes are best at spotting asteroids in, so the two complement each other. Another possible position for a space telescope would be even closer to the Sun, for example in a Venus-like orbit. This would give a wider view of Earth orbit, but at a greater distance. Unlike a telescope at the SEL1 Lagrange point, it would not stay in sync with Earth but would orbit the Sun at a similar rate to Venus. Because of this, it would not often be in a position to provide any warning of asteroids shortly before impact, but it would be in a good position to catalog objects before they are on final approach, especially those which primarily orbit closer to the Sun. One issue with being as close to the Sun as Venus is that the craft may be too warm to use infrared wavelengths. A second issue would be communications. As the telescope will be a long way from Earth for most of the year (and even behind the Sun at some points) communication would often be slow and at times impossible, without expensive improvements to the Deep Space Network. Solutions to problems: summary table This table summarises which of the various problems encountered by current telescopes are solved by the various different solutions. Near-Earth Object Surveyor In 2017, NASA proposed a number of alternative solutions to detect 90% of near-Earth objects of size 140 m or larger over the next few decades. As the detection sensitivity drops off with size but does not cut off, this will also improve the detection rates for the smaller objects which impact Earth much more often. Several of the proposals use a combination of an improved ground-based telescope and a space-based telescope positioned at the SEL1 Lagrange point. A number of large ground based telescopes are already in the late stages of construction (see above). A space based mission situated at SEL1, NEO Surveyor has now also been funded. It is planned for launch in 2027. 
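To put rough numbers on the illumination problem discussed above, consider a deliberately crude model: treat the asteroid as a diffusely reflecting sphere, so the sunlit fraction of its visible disk is (1 + cos α)/2 at phase angle α, and fold in the inverse-square dependence on the Sun–asteroid and asteroid–observer distances. This ignores the real, strongly non-Lambertian phase behaviour of asteroid surfaces (practical work uses the IAU H–G phase system instead), and the distances below are arbitrary examples, but the sketch already shows the several-magnitude penalty for an object approaching from the sunward blind spot.

    import numpy as np

    def relative_brightness(phase_angle_deg, r_sun_au, r_obs_au):
        # Toy reflected-flux estimate: illuminated disk fraction / (r_sun^2 * r_obs^2).
        # phase_angle_deg: Sun-asteroid-observer angle (0 = fully lit, 180 = back-lit)
        alpha = np.radians(phase_angle_deg)
        lit_fraction = 0.5 * (1.0 + np.cos(alpha))
        return lit_fraction / (r_sun_au ** 2 * r_obs_au ** 2)

    # Same asteroid at the same Earth distance, seen near opposition versus
    # approaching from close to the Sun's direction.
    b_opposition = relative_brightness(0.0, r_sun_au=1.05, r_obs_au=0.05)
    b_sunward = relative_brightness(160.0, r_sun_au=0.95, r_obs_au=0.05)
    ratio = b_opposition / b_sunward
    print("brightness ratio:", round(ratio, 1))
    print("magnitude penalty:", round(2.5 * np.log10(ratio), 1), "mag")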
List of successfully predicted asteroid impacts Below is the list of all near-Earth objects which have or may have impacted the Earth and which were predicted beforehand. This list would also include any objects identified as having a greater than 50% chance of impacting in the future, but no such future impacts are predicted at this time. As asteroid detection ability increases, it is expected that prediction will become more successful in the future. In addition to these objects, the meteoroid CNEOS20200918 was found in 2022 in archival ATLAS data, imaged 10 minutes before its 2020/09/18 impact. Although it technically could have been discovered before impact, it was only noticed in retrospect. There are also a number of objects which have been observed in orbit which may have impacted shortly after being observed, but may not have. It is difficult to know the true number of these possible impactors, as unconfirmed tracklets have a wide range of possible orbits, and only a portion of these are consistent with Earth impact. One example is A106fgF, an object observed on January 22, 2018 with an observation arc of only 39 minutes. See also Earth-grazing fireball List of asteroid close approaches to Earth List of bolides – asteroids and meteoroids that impacted Earth Notes References External links Earth Impact Database Earth Impact Effects Program NASA JPL Predicted Close Approaches (including impacts) Astronomical events Impact events Lists of asteroids Near-Earth asteroids Planetary defense
Asteroid impact prediction
[ "Astronomy" ]
8,491
[ "Astronomical events", "Impact events" ]
58,018,667
https://en.wikipedia.org/wiki/Woldemar%20Weyl
Woldemar Anatol Weyl (1901 – July 30, 1975) was a German-born scientist. Weyl taught at the Kaiser Wilhelm Institute between 1932 and 1936, when he began traveling to the United States as a visiting professor at Pennsylvania State University. Due to the increasing influence of the Nazi Party, Weyl chose not to return to Germany and was offered full tenure at PSU in 1938. In 1960, Weyl and mathematician Haskell Curry were appointed to the first two Evan Pugh Professorships at Penn State. Weyl died in State College, Pennsylvania, on July 30, 1975, aged 74. References 1901 births 1975 deaths Emigrants from Nazi Germany to the United States 20th-century American scientists Glass engineering and science
Woldemar Weyl
[ "Materials_science", "Engineering" ]
149
[ "Glass engineering and science", "Materials science" ]
58,021,305
https://en.wikipedia.org/wiki/Brain-specific%20homeobox
Brain-specific homeobox is a protein that in humans is encoded by the BSX gene. Structure and expression pattern Bsx is an evolutionarily highly conserved homeodomain-containing transcription factor that belongs to the ANTP class. In the mouse, it has been shown to be expressed in the telencephalic septum, the pineal gland, the mammillary bodies and the arcuate nucleus. Function in the hypothalamus In the hypothalamic arcuate nucleus, Bsx has been demonstrated to be necessary for normal expression levels of the two orexigenic neuropeptides Agouti-related peptide and Neuropeptide Y. Function in the pineal gland In the pineal gland of the clawed frog Xenopus, Bsx is expressed with a circadian rhythm and controls photoreceptor cell differentiation. In zebrafish, Bsx is required for normal development of all cell types within the pineal gland, including melatonin-releasing pinealocytes, photoreceptor cells and leftward-migrating parapineal cells, which in zebrafish are crucial for the establishment of brain asymmetry. References Brain Proteins Transcription factors Genes Molecular biology
Brain-specific homeobox
[ "Chemistry", "Biology" ]
248
[ "Biomolecules by chemical classification", "Gene expression", "Signal transduction", "Induced stem cells", "Molecular biology", "Biochemistry", "Proteins", "Transcription factors" ]
58,022,154
https://en.wikipedia.org/wiki/Vildagliptin/metformin
Vildagliptin/metformin, sold under the brand name Eucreas among others, is a fixed-dose combination anti-diabetic medication for the treatment of type 2 diabetes. It was approved for use in the European Union in November 2007, and the approval was updated in 2008. It combines 50 mg vildagliptin with either 500, 850, or 1000 mg metformin. The most common side effects include nausea (feeling sick), vomiting, diarrhea, abdominal (tummy) pain and loss of appetite. Medical uses Vildagliptin/metformin is indicated in the treatment of type-2 diabetes mellitus: it is indicated in the treatment of adults who are unable to achieve sufficient glycaemic control at their maximally tolerated dose of oral metformin alone or who are already treated with the combination of vildagliptin and metformin as separate tablets. it is indicated in combination with a sulphonylurea (i.e. triple combination therapy) as an adjunct to diet and exercise in patients inadequately controlled with metformin and a sulphonylurea. it is indicated in triple combination therapy with insulin as an adjunct to diet and exercise to improve glycaemic control in patients when insulin at a stable dose and metformin alone do not provide adequate glycaemic control. References External links Adamantanes Biguanides Carboxamides Combination diabetes drugs Dipeptidyl peptidase-4 inhibitors Drugs with unknown mechanisms of action Guanidines Nitriles Drugs developed by Novartis Pyrrolidines Tertiary alcohols
Vildagliptin/metformin
[ "Chemistry" ]
341
[ "Nitriles", "Guanidines", "Functional groups" ]
73,354,598
https://en.wikipedia.org/wiki/Aliasing%20%28factorial%20experiments%29
In the statistical theory of factorial experiments, aliasing is the property of fractional factorial designs that makes some effects "aliased" with each other – that is, indistinguishable from each other. A primary goal of the theory of such designs is the control of aliasing so that important effects are not aliased with each other. In a "full" factorial experiment, the number of treatment combinations or cells (see below) can be very large. This necessitates limiting observations to a fraction (subset) of the treatment combinations. Aliasing is an automatic and unavoidable result of observing such a fraction. The aliasing properties of a design are often summarized by giving its resolution. This measures the degree to which the design avoids aliasing between main effects and important interactions. Fractional factorial experiments have long been a basic tool in agriculture, food technology, industry, medicine and public health, and the social and behavioral sciences. They are widely used in exploratory research, particularly in screening experiments, which have applications in industry, drug design and genetics. In all such cases, a crucial step in designing such an experiment is deciding on the desired aliasing pattern, or at least the desired resolution. As noted below, the concept of aliasing may have influenced the identification of an analogous phenomenon in signal processing theory. Overview Associated with a factorial experiment is a collection of effects. Each factor determines a main effect, and each set of two or more factors determines an interaction effect (or simply an interaction) between those factors. Each effect is defined by a set of relations between cell means, as described below. In a fractional factorial design, effects are defined by restricting these relations to the cells in the fraction. It is when the restricted relations for two different effects turn out to be the same that the effects are said to be aliased. The presence or absence of a given effect in a given data set is tested by statistical methods, most commonly analysis of variance. While aliasing has significant implications for estimation and hypothesis testing, it is fundamentally a combinatorial and algebraic phenomenon. Construction and analysis of fractional designs thus rely heavily on algebraic methods. The definition of a fractional design is sometimes broadened to allow multiple observations of some or all treatment combinations – a multisubset of all treatment combinations. A fraction that is a subset (that is, where treatment combinations are not repeated) is called simple. The theory described below applies to simple fractions. Contrasts and effects In any design, full or fractional, the expected value of an observation in a given treatment combination is called a cell mean, usually denoted using the Greek letter μ. (The term cell is borrowed from its use in tables of data.) A contrast in cell means is a linear combination of cell means in which the coefficients sum to 0. In the 2 × 3 experiment illustrated here, the expression is a contrast that compares the mean responses of the treatment combinations 11 and 12. (The coefficients here are 1 and –1.) The effects in a factorial experiment are expressed in terms of contrasts. In the above example, the contrast is said to belong to the main effect of factor A as it contrasts the responses to the "1" level of factor with those for the "2" level. The main effect of A is said to be absent if this expression equals 0. 
Similarly,   and   are contrasts belonging to the main effect of factor B. On the other hand, the contrasts   and   belong to the interaction of A and B; setting them equal to 0 expresses the lack of interaction. These designations, which extend to arbitrary factorial experiments having three or more factors, depend on the pattern of coefficients, as explained elsewhere. Since it is the coefficients of these contrasts that carry the essential information, they are often displayed as column vectors. For the example above, such a table might look like this: The columns of such a table are called contrast vectors: their components add up to 0. While there are in general many possible choices of columns to represent a given effect, the number of such columns — the degrees of freedom of the effect — is fixed and is given by a well-known formula. In the 2 × 3 example above, the degrees of freedom for , and the interaction are 1, 2 and 2, respectively. In a fractional factorial experiment, the contrast vectors belonging to a given effect are restricted to the treatment combinations in the fraction. Thus, in the half-fraction {11, 12, 13} in the 2 × 3 example, the three effects may be represented by the column vectors in the following table: The consequence of this truncation — aliasing — is described below. Definitions The factors in the design are allowed to have different numbers of levels, as in a factorial experiment (an asymmetric or mixed-level experiment). Fix a fraction of a full factorial design. Let be a set of contrast vectors representing an effect (in particular, a main effect or interaction) in the full factorial design, and let consist of the restrictions of those vectors to the fraction. One says that the effect is preserved in the fraction if consists of contrast vectors; completely lost in the fraction if consists of constant vectors, that is, vectors whose components are equal; and partly lost otherwise. Similarly, let and represent two effects and let and be their restrictions to the fraction. The two effects are said to be unaliased in the fraction if each vector in is orthogonal (perpendicular) to all the vectors in , and vice versa; completely aliased in the fraction if each vector in is a linear combination of vectors in , and vice versa; and partly aliased otherwise. Finney and Bush introduced the terms "lost" and "preserved" in the sense used here. Despite the relatively long history of this topic, though, its terminology is not entirely standardized. The literature often describes lost effects as "not estimable" in a fraction, although estimation is not the only issue at stake. Rao referred to preserved effects as "measurable from" the fraction. Resolution The extent of aliasing in a given fractional design is measured by the resolution of the fraction, a concept first defined by Box and Hunter: A fractional factorial design is said to have resolution if every -factor effect is unaliased with every effect having fewer than factors. For example, a design has resolution if main effects are unaliased with each other (taking , though it allows main effects to be aliased with two-factor interactions. This is typically the lowest resolution desired for a fraction. It is not hard to see that a fraction of resolution also has resolution , etc., so one usually speaks of the maximum resolution of a fraction. 
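Because the preserved/lost and aliased/unaliased conditions defined above are statements of elementary linear algebra about the restricted contrast vectors, they can be checked mechanically. The sketch below is an illustrative implementation (not a standard package interface): each effect is represented by a matrix whose columns are its restricted contrast vectors, and the half-fraction {11, 12, 13} of the 2 × 3 example discussed above is used as a test case, with one common choice of contrast vectors ordered by the cells 11, 12, 13.

    import numpy as np

    TOL = 1e-9

    def classify_effect(V):
        # Preserved if every restricted column still sums to zero (is a contrast vector);
        # completely lost if every column is constant; partly lost otherwise.
        if np.all(np.abs(V.sum(axis=0)) < TOL):
            return "preserved"
        if np.all(np.abs(V - V[0, :]) < TOL):
            return "completely lost"
        return "partly lost"

    def classify_pair(V, W):
        # Unaliased if the two sets of restricted vectors are mutually orthogonal;
        # completely aliased if they span the same column space; partly aliased otherwise.
        if np.all(np.abs(V.T @ W) < TOL):
            return "unaliased"
        rV, rW = np.linalg.matrix_rank(V), np.linalg.matrix_rank(W)
        if rV == rW == np.linalg.matrix_rank(np.hstack([V, W])):
            return "completely aliased"
        return "partly aliased"

    # Restricted vectors for the fraction {11, 12, 13} of the 2 x 3 experiment
    A = np.array([[1.0], [1.0], [1.0]])                      # restriction of the A contrast
    B = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]])     # two B contrasts
    AB = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]])    # AB contrasts restrict to the same span

    print("A :", classify_effect(A))          # completely lost
    print("B :", classify_effect(B))          # preserved
    print("AB:", classify_effect(AB))         # preserved
    print("B vs AB:", classify_pair(B, AB))   # completely aliased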
The number in the definition of resolution is usually understood to be a positive integer, but one may consider the effect of the grand mean to be the (unique) effect with no factors (i.e., with ). This effect sometimes appears in analysis of variance tables. It has one degree of freedom, and is represented by a single vector, a column of 1's. With this understanding, an effect is preserved in a fraction if it is unaliased with the grand mean, and completely lost in a fraction if it is completely aliased with the grand mean. A fraction then has resolution if all main effects are preserved in the fraction. If it has resolution then two-factor interactions are also preserved. Computation The definitions above require some computations with vectors, illustrated in the examples that follow. For certain fractional designs (the regular ones), a simple algebraic technique can be used that bypasses these procedures and gives a simple way to determine resolution. This is discussed below. Examples The 2 × 3 experiment The fraction {11, 12, 13} of this experiment was described above along with its restricted vectors. It is repeated here along with the complementary fraction {21, 22, 23}: In both fractions, the effect is completely lost (the column is constant) while the and interaction effects are preserved (each 3 × 1 column is a contrast vector as its components sum to 0). In addition, the and interaction effects are completely aliased in each fraction: In the first fraction, the vectors for are linear combinations of those for , viz., and ; in the reverse direction, the vectors for can be written similarly in terms of those representing . The argument in the second fraction is analogous. These fractions have maximum resolution 1. The fact that the main effect of is lost makes both of these fractions undesirable in practice. It turns out that in a 2 × 3 experiment (or in any a × b experiment in which a and b are relatively prime) there is no fraction that preserves both main effects -- that is, no fraction has resolution 2. The 2 × 2 × 2 (or 2³) experiment This is a "two-level" experiment with factors and . In such experiments the factor levels are often denoted by 0 and 1, for reasons explained below. A treatment combination is then denoted by an ordered triple such as 101 (more formally, (1, 0, 1), denoting the cell in which and are at level "1" and is at level "0"). The following table lists the eight cells of the full 2 × 2 × 2 factorial experiment, along with a contrast vector representing each effect, including a three-factor interaction: Suppose that only the fraction consisting of the cells 000, 011, 101, and 110 is observed. The original contrast vectors, when restricted to these cells, are now 4 × 1, and can be seen by looking at just those four rows of the table. (Sorting the table on will bring these rows together and make the restricted contrast vectors easier to see. Sorting twice puts them at the top.) The following can be observed concerning these restricted vectors: The column consists just of the constant 1 repeated four times. The other columns are contrast vectors, having two 1's and two −1s. The columns for and are equal. The same holds for and , and for and . All other pairs of columns are orthogonal. For example, the column for is orthogonal to that for , for , for , and for , as one can see by computing dot products. 
Thus the ABC interaction is completely lost in the fraction; the other effects are preserved in the fraction; the effects A and BC are completely aliased with each other, as are B and AC, and C and AB. All other pairs of effects are unaliased. For example, A is unaliased with both B and C and with the AB and AC interactions. Now suppose instead that the complementary fraction {001,010,100,111} is observed. The same effects as before are lost or preserved, and the same pairs of effects as before are mutually unaliased. Moreover, A and BC are still aliased in this fraction since the A and BC vectors are negatives of each other, and similarly for B and AC and for C and AB. Both of these fractions thus have maximum resolution 3. Aliasing in regular fractions The two half-fractions of the 2³ factorial experiment described above are of a special kind: Each is the solution set of a linear equation using modular arithmetic. More exactly: The fraction {000, 011, 101, 110} is the solution set of the equation t1 + t2 + t3 = 0 (mod 2), where t1, t2 and t3 denote the levels of the three factors. For example, 011 is a solution because 0 + 1 + 1 = 2 ≡ 0 (mod 2). Similarly, the fraction {001, 010, 100, 111} is the solution set to t1 + t2 + t3 = 1 (mod 2). Such fractions are said to be regular. This idea applies to fractions of "classical" designs, that is, s^k (or "symmetric") factorial designs in which the number of levels, s, of each of the k factors is a prime or the power of a prime. A fractional factorial design is regular if it is the solution set of a system of one or more equations of the form c1t1 + c2t2 + ⋯ + cktk = b, where the equation is modulo s if s is prime, and is in the finite field GF(s) if s is a power of a prime. Such equations are called defining equations of the fraction. When the defining equation or equations are homogeneous, the fraction is said to be principal. One defining equation yields a fraction of size s^(k−1), two independent equations a fraction of size s^(k−2), and so on. Such fractions are generally denoted as s^(k−q) designs. The half-fractions described above are 2^(3−1) designs. The notation often includes the resolution as a subscript, in Roman numerals; the above fractions are thus 2^(3−1)_III designs. Associated to each such expression c1t1 + ⋯ + cktk is another, namely A1^c1 A2^c2 ⋯ Ak^ck, which rewrites the coefficients as exponents. Such expressions are called "words", a term borrowed from group theory. (In a particular example where k is a specific number, the letters A, B, C, … are used, rather than A1, A2, A3, ….) These words can be multiplied and raised to powers, where the word I (with all exponents 0) acts as a multiplicative identity, and they thus form an abelian group G, known as the effects group. When s is prime, one has W^s = I for every element (word) W; something similar holds in the prime-power case. In factorial experiments, each element of G represents a main effect or interaction. In such experiments, each one-letter word represents the main effect of the corresponding factor, while longer words represent interactions or components of interaction. An example below illustrates this with s = 3. To each defining expression (the left-hand side of a defining equation) corresponds a defining word. The defining words generate a subgroup H of G that is variously called the alias subgroup, the defining contrast subgroup, or simply the defining subgroup of the fraction. Each element of H is a defining word since it corresponds to a defining equation, as one can show. The effects represented by the defining words are completely lost in the fraction while all other effects are preserved. If H = {I, W1, …, Wr}, say, then the equation I = W1 = ⋯ = Wr is called the defining relation of the fraction. This relation is used to determine the aliasing structure of the fraction: If a given effect is represented by the word W, then its aliases are computed by multiplying the defining relation by W, viz. W = WW1 = ⋯ = WWr, where the products are then simplified.
This relation indicates complete (not partial) aliasing, and W is unaliased with all other effects in the effects group G. Example 1 In either of the fractions described above, the defining word is ABC, since the exponents on these letters are the coefficients of t1 + t2 + t3. The effect ABC is completely lost in the fraction, and the defining subgroup is simply {I, ABC}, since squaring does not generate new elements (ABC · ABC = A²B²C² = I). The defining relation is thus I = ABC, and multiplying both sides by A gives A = A²BC, which simplifies to the alias relation A = BC seen earlier. Similarly, B = AC and C = AB. Note that multiplying both sides of the defining relation by AB, AC and BC does not give any new alias relations. For comparison, the fraction with defining equation t1 + t2 = 0 has the defining word AB (that is, A¹B¹C⁰). The effect AB is completely lost, and the defining relation is I = AB. Multiplying this by A, by C, and by AC gives the alias relations A = B, C = ABC, and AC = BC among the six remaining effects. This fraction only has resolution 2 since all effects (except AB) are preserved but two main effects are aliased. Finally, solving the defining equation yields the fraction {000, 001, 110, 111}. One may verify all of this by sorting the table above on column AB. The use of arithmetic modulo 2 explains why the factor levels in such designs are labeled 0 and 1. Example 2 In a 3-level design, factor levels are denoted 0, 1 and 2, and arithmetic is modulo 3. If there are four factors, say A, B, C and D, the effects group will have the relations A³ = B³ = C³ = D³ = I. From these it follows, for example, that A⁴ = A and that A² · A = I. A defining equation such as t1 + t2 + t3 + 2t4 = 0 would produce a regular 1/3-fraction of the 81 (= 3⁴) treatment combinations, and the corresponding defining word would be ABCD². Since its powers are (ABCD²)² = A²B²C²D and (ABCD²)³ = I, the defining subgroup would be {I, ABCD², A²B²C²D}, and so the fraction would have defining relation I = ABCD² = A²B²C²D. Multiplying by A, for example, yields the aliases A = A²BCD² = B²C²D. For reasons explained elsewhere, though, all powers of a defining word represent the same effect, and the convention is to choose that power whose leading exponent is 1. Squaring the latter two expressions does the trick and gives the alias relations A = AB²C²D = BCD². Twelve other sets of three aliased effects are given by Wu and Hamada. Examining all of these reveals that main effects are unaliased with each other and with two-factor effects, although some two-factor effects are aliased with each other. This means that this fraction has maximum resolution 4, and so is of type 3^(4−1)_IV. The effect BCD² is one of 4 components of the B × C × D interaction, while AB²C²D is one of 8 components of the A × B × C × D interaction. In a 3-level design, each component of interaction carries 2 degrees of freedom. Example 3 A 2^(5−2) design (a 1/4 fraction of a 2⁵ design) may be created by solving two equations in 5 unknowns, say t1 + t2 + t4 = 1 and t1 + t3 + t5 = 1, modulo 2. The fraction has eight treatment combinations, such as 10000, 00110 and 11111, and is displayed in the article on fractional factorial designs. Here the coefficients in the two defining equations give defining words ABD and ACE. Setting I = ABD and multiplying through by D gives the alias relation D = AB. The second defining word similarly gives E = AC. The article uses these two aliases to describe an alternate method of construction of the fraction. The defining subgroup has one more element, namely the product (ABD)(ACE), making use of the fact that A² = I. The extra defining word BCDE is known as the generalized interaction of ABD and ACE, and corresponds to the equation t2 + t3 + t4 + t5 = 0, which is also satisfied by the fraction. With this word included, the full defining relation is I = ABD = ACE = BCDE (these are the four elements of the defining subgroup), from which all the alias relations of this fraction can be derived – for example, multiplying through by A yields A = BD = CE = ABCDE.
Continuing this process yields six more alias sets, each containing four effects. An examination of these sets reveals that main effects are not aliased with each other, but are aliased with two-factor interactions. This means that this fraction has maximum resolution 3. A quicker way to determine the resolution of a regular fraction is given below. It is notable that the alias relations of the fraction depend only on the left-hand side of the defining equations, not on their constant terms. For this reason, some authors will restrict attention to principal fractions "without loss of generality", although the reduction to the principal case often requires verification. Determining the resolution of a regular fraction The length of a word in the effects group is defined to be the number of letters in its name, not counting repetition. For example, the length of a word such as BCD² is 3. The basic result is that the maximum resolution of a regular fraction equals the minimum length of a word in its defining subgroup (other than I). Using this result, one immediately gets the resolution of the preceding examples without computing alias relations: In the fraction with defining word ABC, the maximum resolution is 3 (the length of that word), while the fraction with defining word AB has maximum resolution 2. The defining words of the 3^(4−1) fraction were ABCD² and A²B²C²D, both of length 4, so that the fraction has maximum resolution 4, as indicated. In the 2^(5−2) fraction with defining words ABD, ACE and BCDE, the maximum resolution is 3, which is the shortest "wordlength". One could also construct a fraction from the defining words ABCD and ABCE, but the defining subgroup will also include DE, their product, and so the fraction will only have resolution 2 (the length of DE). This is true starting with any two words of length 4. Thus resolution 3 is the best one can hope for in a fraction of type 2^(5−2). As these examples indicate, one must consider all the elements of the defining subgroup in applying the theorem above. This theorem is often taken to be a definition of resolution, but the Box-Hunter definition given earlier applies to arbitrary fractional designs and so is more general. Aliasing in general fractions Nonregular fractions are common, and have certain advantages. For example, they are not restricted to having size a power of s, where s is a prime or prime power. While some methods have been developed to deal with aliasing in particular nonregular designs, no overall algebraic scheme has emerged. There is a universal combinatorial approach, however, going back to Rao. If the treatment combinations of the fraction are written as rows of a table, that table is an orthogonal array. These rows are often referred to as "runs". The columns will correspond to the factors, and the entries of the table will simply be the symbols used for factor levels, and need not be numbers. The number of levels need not be prime or prime-powered, and they may vary from factor to factor, so that the table may be a mixed-level array. In this section fractional designs are allowed to be mixed-level unless explicitly restricted. A key parameter of an orthogonal array is its strength, the definition of which is given in the article on orthogonal arrays. One may thus refer to the strength of a fractional design. Two important facts flow immediately from its definition: If an array (or fraction) has strength t then it also has strength t′ for every t′ < t. The array's maximum strength is of particular importance. In a fixed-level array, all factors having s levels, the number of runs is a multiple of s^t, where t is the strength. Here s need not be a prime or prime power.
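As a concrete illustration of the strength condition just described, here is a short Python sketch, added for illustration and not taken from the article (the helper name max_strength is hypothetical), that computes the maximum strength of a fraction by brute force, treating its runs as the rows of an orthogonal array:

```python
from itertools import combinations, product
from collections import Counter

def max_strength(runs):
    """Maximum strength of a fraction, given as a list of tuples of factor levels."""
    k = len(runs[0])
    levels = [sorted({run[i] for run in runs}) for i in range(k)]   # levels seen in each column
    best = 0
    for t in range(1, k + 1):
        for cols in combinations(range(k), t):
            counts = Counter(tuple(run[c] for c in cols) for run in runs)
            full = set(product(*(levels[c] for c in cols)))
            # strength t requires every level combination to occur, and equally often
            if set(counts) != full or len(set(counts.values())) != 1:
                return best
        best = t
    return best

# The 2^3 half-fraction {000, 011, 101, 110} discussed earlier:
half = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(max_strength(half))   # 2
```

Applied to the 2³ half-fraction {000, 011, 101, 110} discussed earlier, the sketch returns 2; as described next, strength 2 corresponds to the maximum resolution 3 found for that fraction above.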
To state the next result, it is convenient to enumerate the factors of the experiment by 1 through k, and to let each nonempty subset of {1, …, k} correspond to a main effect or interaction in the following way: {i} corresponds to the main effect of factor i, {i, j} corresponds to the interaction of factors i and j, and so on. The Fundamental Theorem referred to below states, roughly, that if the fraction has strength t, then the effects corresponding to subsets I and J are unaliased in the fraction whenever the union I ∪ J has at most t elements; taking J to be empty, every effect involving at most t factors is preserved. Example: Consider a fractional factorial design with four or more factors and maximum strength 3. Then: All effects up to three-factor interactions are preserved in the fraction. Main effects are unaliased with each other and with two-factor interactions. Two-factor interactions are unaliased with each other if they share a factor. For example, the {1, 2} and {1, 3} interactions are unaliased, but the {1, 2} and {3, 4} interactions may be at least partly aliased as the set {1, 2, 3, 4} contains 4 elements but the strength of the fraction is only 3. The Fundamental Theorem has a number of important consequences. In particular, it follows almost immediately that if a fraction has strength t then it has resolution t + 1. With additional assumptions, a stronger conclusion is possible: if t is the maximum strength, then t + 1 is the maximum resolution. This result replaces the group-theoretic condition (minimum wordlength) in regular fractions with a combinatorial condition (maximum strength) in arbitrary ones. Example. Plackett-Burman designs are an important class of nonregular two-level designs. As with all fractions constructed from Hadamard matrices, they have strength 2, and therefore resolution 3. The smallest such design has 11 factors and 12 runs (treatment combinations), and is displayed in the article on such designs. Since 2 is its maximum strength, 3 is its maximum resolution. Some detail about its aliasing pattern is given in the next section. Partial aliasing In regular fractions there is no partial aliasing: Each effect is either preserved or completely lost, and effects are either unaliased or completely aliased. The same holds in regular experiments with more than two levels per factor if one considers only main effects and components of interaction. However, a limited form of partial aliasing occurs in the latter. For example, in the 3^(4−1) design described above the overall four-factor interaction is partly lost since its component ABCD² is completely lost in the fraction while its other components (such as ABCD) are preserved. Similarly, the main effect of A is partly aliased with the B × C × D interaction since A is completely aliased with its component BCD² and unaliased with the others. In contrast, partial aliasing is uncontrolled and pervasive in nonregular fractions. In the 12-run Plackett-Burman design described in the previous section, for example, with factors labeled 1 through 11, the only complete aliasing is between "complementary effects", such as a main effect and the ten-factor interaction of the remaining factors, or a two-factor interaction and the complementary nine-factor interaction. Here the main effect of a given factor is unaliased with the other main effects, but it is partly aliased with 45 of the 55 two-factor interactions, 120 of the 165 three-factor interactions, and 150 of the 330 four-factor interactions. Similarly, 924 effects are preserved in the fraction, 1122 effects are partly lost, and only one (the top-level interaction of all 11 factors) is completely lost. Analysis of variance (ANOVA) Wu and Hamada analyze a data set collected on the 3^(4−1) fractional design described above. Significance testing in the analysis of variance (ANOVA) requires that the error sum of squares and the degrees of freedom for error be nonzero. In order to ensure this, two design decisions have been made: Interactions of three or four factors have been assumed absent. This decision is consistent with the effect hierarchy principle.
Replication (inclusion of repeated observations) is necessary. In this case, three observations were made on each of the 27 treatment combinations in the fraction, for a total of 81 observations. The accompanying table shows just two columns of an ANOVA table for this experiment. Only main effects and components of two-factor interactions are listed, including three pairs of aliases. Aliasing between some two-factor interactions is expected, since the maximum resolution of this design is 4. This experiment studied two response variables. In both cases, some aliased interactions were statistically significant. This poses a challenge of interpretation, since without more information or further assumptions it is impossible to determine which interaction is responsible for significance. In some instances there may be a theoretical basis to make this determination. This example shows one advantage of fractional designs. The full factorial experiment has 81 treatment combinations, but taking one observation on each of these would leave no degrees of freedom for error. The fractional design also uses 81 observations, but on just 27 treatment combinations, in such a way that one can make inferences on main effects and on (most) two-factor interactions. This may be sufficient for practical purposes. History The first statistical use of the term "aliasing" in print was in the 1945 paper by Finney, which dealt with regular fractions with 2 or 3 levels. The term was imported into signal processing theory a few years later, possibly influenced by its use in factorial experiments; the history of that usage is described in the article on aliasing in signal processing. The 1961 paper in which Box and Hunter introduced the concept of "resolution" dealt with regular two-level designs, but their initial definition makes no reference to lengths of defining words and so can be understood rather generally. Rao actually makes implicit use of resolution in his 1947 paper introducing orthogonal arrays, reflected in an important parameter inequality that he develops. He distinguishes effects in full and fractional designs by using different symbols for the two cases, but makes no mention of aliasing. The term confounded is often used as a synonym for aliased, and so one must read the literature carefully. The former term "is generally reserved for the indistinguishability of a treatment contrast and a block contrast", that is, for confounding with blocks. Kempthorne has shown how confounding with blocks in a k-factor experiment may be viewed as aliasing in a fractional design with more than k factors, but it is unclear whether one can do the reverse. See also The article on fractional factorial designs discusses examples in two-level experiments. Notes Citations References Design of experiments Statistical process control
Aliasing (factorial experiments)
[ "Engineering" ]
5,499
[ "Statistical process control", "Engineering statistics" ]
66,094,811
https://en.wikipedia.org/wiki/Robert%20L.B.%20Tobin%20Land%20Bridge
The Robert L.B. Tobin Land Bridge is a wildlife crossing over Wurzbach Parkway in San Antonio's Phil Hardberger Park that opened on December 11, 2020. The project cost $23 million and is designed for both wildlife and pedestrians. Construction began on November 26, 2018, and was originally expected to end in April 2020. Design At long and wide, it is the largest wildlife bridge in the United States. With tall, noise-damping corten steel walls on both sides, the bridge is designed to appear to crossers as a small hill. The bridge has an underground cistern to keep the bridge's plants irrigated via rainwater. On April 5, 2021, a footbridge called the Skywalk opened, which starts at the top of the land bridge and winds through the park's trees. Animals using the bridge Although animals had already been spotted crossing the bridge as of early 2021, wildlife traffic is not expected to substantially increase until the foliage planted on the bridge grows thicker. As part of a five-year study, the Parks and Recreation Department documents wildlife using the bridge. Species documented so far include the Virginia opossum, cottontail rabbit, white-tailed deer, coyote, rock squirrel, fox squirrel, rat, raccoon, armadillo, bobcat, gray fox, and axis deer. See also Wildlife crossing § Examples References External links Map of the bridge Pedestrian bridges in Texas Bridges completed in 2020 Ecological restoration
Robert L.B. Tobin Land Bridge
[ "Chemistry", "Engineering" ]
295
[ "Ecological restoration", "Environmental engineering" ]
66,098,601
https://en.wikipedia.org/wiki/The%20Ebony%20Horse
The Ebony Horse, The Enchanted Horse or The Magic Horse is a folk tale featured in the Arabian Nights. It features a flying mechanical horse, controlled using keys, that could fly into outer space and towards the Sun. The ebony horse can fly the distance of one year in a single day, and is used as a vehicle by the Prince of Persia, Qamar al-Aqmar, in his adventures across Persia, Arabia and Byzantium. According to scholarship, the tale inspired literary stories about a flying mechanical horse in Europe. Variants from oral tradition have been collected mostly from Europe and Asia, but are also attested in Africa. Although the tale appears in the work One Thousand and One Nights, a similar story is attested earlier in the Indian Panchatantra, albeit with a flying bird-like mechanism in the shape of a Garuda. Source According to researcher Ulrich Marzolph, the tale "The Ebony Horse" was part of the story repertoire of Hanna Diyab, a Christian Maronite who provided several tales to French writer Antoine Galland. As per Galland's diary, the tale was told on May 13, 1709. Summary An Indian craftsman and inventor of magical devices arrives in the Persian city of Shiraz at the time of the New Year celebration, mounted upon a splendid artificial horse – surprisingly life-like, despite its mechanical nature. The king is so impressed with this automaton that he decides to present his son, the prince, with the marvellous steed. The young prince wastes no time in climbing into the saddle and the horse ascends swiftly into the sky. When prince decides that he has flown high enough he tries to make the horse land, but finds that he cannot. Far from landing, the horse instead flies off with the prince, spiriting him away to unknown lands. Later, he rides the flying mechanical horse to the kingdom of Bengal and meets a beautiful princess, who becomes enamoured of him. The young prince retells his adventures to the princess, and they exchange first pleasantries and later sweet nothings as they fall ever more deeply in love. Soon, the Persian youth convinces the Bengali princess to ride the mechanical marvel with him to his homeland of Persia. Meanwhile, the Indian artifex had been unjustly imprisoned due to the disastrous test flight of his creation. In his cell, he sees the prince arriving with his beloved maiden. Reunited with his beloved son, the King of Persia releases the craftsman, who seizes the opportunity for revenge, using the horse to abduct the princess and disappearing swiftly over the horizon with her. They soon arrive in the kingdom of Cashmere. The king of that country rescues the princess from the Indian and resolves to marry her, without her consent. As soon as the princess recovers from her shock, she pretends to have gone mad in order to forestall her forced marriage. Determined to recover his beloved, the Persian prince wanders in search of her until he reaches Cashmere, where he learns his maiden is alive. He then hatches a plan to escape with his beloved on the mechanical horse back to Persia. By pretending to be a doctor, he is able to approach the princess and reveal himself to her. By having her pretend to be partially cured, the prince succeeds in persuading the king of Cashmere to openly present the ebony horse to complete the princess' healing. In an unattended moment, he and the princess use the horse to fly back to Persia, where they are happily married. 
Legacy Scholarship indicates that the tale migrated to Europe and inspired similar medieval stories about a fabulous mechanical horse. These stories include Cleomades, Chaucer's The Squire's Tale, Valentine and Orson and Meliacin ou le Cheval de Fust, by troubadour Girart d'Amiens. The Horse and His Boy by C.S. Lewis carries key elements from this story, both in its plot and in specifics such as the structure of the Horse. Analysis Tale type The tale is classified in the Aarne-Thompson-Uther Index as ATU 575, "The Prince's Wings". These tales show two types of narrative: The first one: a metalsmith and a tinkerer take part in a contest to build a mechanical marvel to impress the king and his son. A mechanical horse is built and delivered to the king, to the delight of the young prince. The second one: the prince himself commissions a skilled craftsman to fashion a winged apparatus to allow him to fly (e.g., a pair of wings or a wooden bird). Motifs The flying machine Ethnologist Verrier Elwin commented that some folk tales replace the original flying machine with a trunk or a chair, and that the motif of the equine machine is common in Indian folk-tales. Similarly, according to another scholar, the flying machine, which appears in Indian variants, also appears in "many Asian tales". Hungarian professor Ákos Dömötör, in the notes to tale type ATU 575 in the Hungarian National Catalogue of Folktales (MNK), remarked that the wooden bird is an "Oriental" theme. Origins The tale The Ebony Horse, in particular, was suggested by mythologist Thomas Keightley, in his book Tales and Popular Fictions, to have originated from a genuine Persian source, since it does not contain elements from Islamic religion. The oldest attestation and possible origin of the tale type is suggested to be an 11th-century Jain recension of the Pancatantra, in the story The Weaver as Vishnu. In this tale, a poor weaver fashions an artificial likeness of legendary bird mount Garuda, the ride of god Vishnu. He uses the construct to reach the topmost room of the princess he fell in love with and poses as Lord Vishnu to impress his beloved. Henry Parker, who collected some Sri Lankan variants of the tale type, identified three different origins for the horse: (1) a wooden flying horse created by a supernatural being; (2) a wooden flying horse made by human hands and "magical art"; and (3) construction of one "by mechanical art". He also suggested that a flying horse, either of wax or wood, appears in ancient Indian literature (e.g., the Rig Veda), and may date from before the time of Christ. He also saw two possible routes of diffusion: either the tale developed in India or in Sri Lanka, and was diffused by Arabs; or the image of a winged quadruped, attested in old Assyria and Mesopotamia, "spread to the early Aryans". Another line of scholarship sees a possible predecessor of the tale type in the Chinese god Lu Ban, patron deity of carpenters and builders. Variants Distribution Stith Thompson sees a sparsity of the tale in European compilations, although the elements of the prince's journey on the mechanical apparatus appear in Eastern tales. In addition, Jack V. Haney argued that variants appear "in a number of Western European traditions", while a German scholar locates variants in Central and Eastern Europe. Czech scholar Karel Horálek, in Enzyklopädie des Märchens, considered India the "center of diffusion" of the tale type.
Furthermore, Horálek located two major regions of distribution: in the West, in Central Europe, Southeastern Europe and Eastern Europe; in the East, India, Persia and neighbouring countries. He also considered Turkey and the Caucasus as a "transitional area" between both regions. Europe Romani people Philologist Franz Miklosich collected a variant in the Romani language which he titled Der geflügelte Held ("The Flying Hero"), about an artifex that fashions a pair of wings. In a Romani-Bukovina tale collected by Francis Hindes Groome, The Winged Hero, a skilled but poor craftsman begins to craft a pair of wings, after seeing them in a dream. He then uses the wings to fly to the "Ninth Region", where he sells his work to an emperor's son. The prince uses the wings and flies to another realm, where he learns from an old woman that a princess is locked away in a tower by her own father. Transylvanian linguist Heinrich von Wlislocki collected and published a "Zigeunermärchen" titled O mánusch kástuni ciriklehá (Der Mann mit der hölzernen Vogel or The Wooden Bird). Germany The Brothers Grimm also collected and published a German variant titled Vom Schreiner und Drechsler ("Of The Carpenter and The Turner"; or "The Maker and the Turner"). This story was published in the first edition of their collection, in 1812, with numbering KHM 77, but omitted from the definitive edition. A variant exists in the newly discovered collection of Bavarian folk and fairy tales of Franz Xaver von Schönwerth, titled The Flying Trunk (German: Das fliegende Kästchen). In a variant collected from Oldenburg by jurist Ludwig Strackerjan, Vom Königssohn, der fliegen gelernt hatte ("About a King's Son who learned to fly"), each of the king's sons learns a trade: one becomes a metalsmith and the other a carpenter. The first one builds a fish of silver and the second fashions a pair of wooden wings. He later uses the wings to fly to another realm, where he convinces a sheltered princess he is the Archangel Gabriel. Italy Ignaz and Joseph Zingerle collected a variant from Merano, titled Die zwei Künstler ("The Two Craftsmen"), wherein a goldsmith and a fortune-teller compete to see who can craft a fine work: the goldsmith some gold fishes and the fortune-teller a pair of wooden wings. Hungary According to the Hungarian Folktale Catalogue (MNK), tale type 575, A repülő királyfi ("The Flying Prince"), registers few variants in Hungary. Journalist Elek Benedek collected a Hungarian tale titled A Szárnyas Királyfi ("The Winged Prince"). In this story, the king traps his daughter in the tower, but a prince visits her every night with a pair of wings. Greece Johann Georg von Hahn collected a variant from Zagori, Greece, titled Der Mann mit der Reisekiste ("The Man with the Flying Trunk"): a rich man with an intense wanderlust commissions a flying trunk from his carpenter friend. The carpenter fills the box with "magic vapours" and the device takes flight. The rich man arrives at the tower of a princess from another realm and pretends to be the Son of God.
Bulgaria The type is also attested in the Bulgarian Folktale Catalogue with the title "Летящият дървен кон" or Das fliegende Holzpferd ("The Flying Wooden Horse"): a goldsmith and a carpenter vie for the same woman, and arrange a mediator for their dispute (e.g., a king); in order to settle their dispute, they each fashion an apparatus (the carpenter a wooden horse, and the goldsmith a metal object); the carpenter wins, and the prince rides on the flying horse to another kingdom, where he secretly visits a princess in her tower; they escape their execution, but the wooden horse burns down, and they are separated. Russia The tale type is known in Russia and Slavic-speaking regions under a title translated as "The Wooden Eagle" (or "The Wooden Dove"), after the creation that appears in the story: a wooden eagle. Professor Jack Haney stated that the tale type was "widely collected" in Russia. Another Russian variant of the tale type is Märchen von dem berühmten und ausgezeichneten Prinzen Malandrach Ibrahimowitsch und der schönen Prinzeß Salikalla or Prince Malandrach and the Princess Salikalla, a tale that first appeared in a German-language compilation of fairy tales, published by Anton Dietrich in 1831, in Leipzig. The titular prince becomes fascinated with the idea of flying after reading about it in a book of fairy tales. He wants to commission a pair of wooden wings from a carpenter. Professor Jack V. Haney translated a variant from a raconteur (1883–1943), titled The Airplane (How an Airplane in a Room Carried Off the Tsar's Son) and also classified as ATU 575. In this tale, the plane replaces the wooden eagle. Poland Polish philologist and folklorist Julian Krzyżanowski, who established the Polish Folktale Catalogue according to the international index, classified a similar story in Poland as type 575, Skrzydlaty królewicz ("Winged Prince"): the hero either commissions a pair of wings from an artisan or steals the wings from his father, and flies away to a kingdom where a princess is locked in the tower. In a Polish tale, "Об одном королевиче, который на крыльях летал" ("About a prince who flew on wings"), a king commissions a pair of wings from a master craftsman. The prince finds the wings, puts them on and flies to another kingdom where he visits the princess, locked in a tower, by pretending to be an angel. Estonia The tale type is registered in Estonia with the title Kuningapoja imetiivad ("The Magic Wings of the King's Son"). In Estonian variants, the prince may gain either an iron hawk from the blacksmith, or wooden wings from the carpenter. He uses the contraption to fly to another kingdom. Lithuania The tale type also exists in Lithuania with the name Karalaičio sparnai ("The Wings of the King"). Twelve variants had been registered by 1936, when folklorist Jonas Balys published his analysis of Lithuanian folktales. Latvia The tale type also exists in Latvia, with the title Brīnuma spārni ("Wonderful Wings"): an artisan fashions the artificial bird for the prince, who travels to another kingdom, falls in love with a princess and escapes with her on the flying device. In a Latvian variant, "Волшебный конь" ("The Magic Horse"), a blacksmith's apprentice constructs a mechanical horse. The prince convinces the king to give it to him as a gift. He flies on the artificial horse to another kingdom by manipulating a panel of screws, where a princess is being held in a tower. At the end of the tale, before the princess's father has a chance to execute her and the prince, they escape on the mechanical horse.
Armenia In Armenian variants of the tale type, the prince departs either on a wooden horse or on a big wheel to the princess's kingdom. After the prince loses the flying machine, his family is separated, but reunites at the end of the story, as the princess averts a possible incestuous marriage with her own son by the prince. Azerbaijan Azerbaijani scholarship registers a similar tale in the Azerbaijani Tale Corpus, indexed as 575, Taxta at ("Wooden Horse"): a jeweller and a carpenter bet against each other whose skills are better (or for the love of a woman); the prince is called to arbitrate the dispute; the jeweller fashions a golden rooster and the carpenter a wooden horse, which the prince rides on to another kingdom; in this second kingdom, the prince absconds with the local king's daughter and both flee on the horse, but are separated when the horse burns down; the princess reaches another city and places an image to find her lover; the prince and the princess reunite. Georgia Georgian scholarship registers 3 variants of type 575, "Wooden Horse", in Georgia: the vehicle is a wooden horse the prince flies on to meet a princess, and sometimes the tale shows a long period of separation for the couple. In a Georgian tale titled "Царевич и деревянный конь" ("The Tsar's Son and the Wooden Horse"), a childless royal couple has a son at last, and invites the entire kingdom. A carpenter and a metalsmith decide to create presents for the newborn prince, each in their own craft. The carpenter delivers a wooden horse that can fly. The prince delights in the present. The metalsmith, however, warns his colleague that if the prince mounts the horse, he will not know how to control it. So the carpenter returns to the palace and teaches the prince, who ends up flying on the horse to regions unknown. He reaches the roof of an old woman, in another kingdom, and she invites him in. He learns of the princess locked in the tower and flies towards her on the horse. After escaping an attempted execution, the prince and the princess flee the kingdom and separate; the horse is destroyed in a fire. The princess goes to another kingdom and becomes its sovereign when a bird lands on her head three times. Using her royal powers, she orders a bridge to be made and a picture of her husband to be affixed to it.
Bottigheimer, an Arabic-language manuscript mentions a tale titled Fars al-abnus ('Horse of Ebony'), predating Hanna Diyab's story by two centuries. The tale was apparently part of the second volume of Tales of the Marvellous and News of the Strange, now lost. Andrew Lang published the story with the name The Enchanted Horse, in his translation of The Arabian Nights, and renamed the prince Firouz Schah. Folklorist William Forsell Kirby published a tale from "The Arabian Nights" titled Story of the Labourer and the Flying Chair: a poor labourer spends his earnings on an old chair. He returns to the seller wanting to know the instructions on how to use the chair. The labourer manages to control the chair, which takes him to a distant terrace. He walks from the terrace into a room where a princess was sleeping. The maiden awakes with a start at the strange person in the room, and he presents himself as Azrael, the Angel of Death. French orientalist François Pétit de La Croix published in the 18th century a compilation of Middle Eastern tales, titled Les Mille et un jours ("The Thousand and One Days"). This compilation also contains a variant of the tale type, named Story of Malek and the Princess Schirine: the hero Malek receives a bird-shaped box from an artisan. He enters the box and flies away to a distant kingdom. In this realm, he learns of King Bahaman, who imprisoned his daughter, the Princess Schirine, in a tower. Turkey A German scholar located another narrative in the Ottoman Turkish Ferec baʿd eş-şidde ('Relief After Hardship'), an anonymous book dated to the 15th century. In tale no. 13 of the compilation, titled The Weaver and the Trick He Played on the Carpenter, a weaver and a carpenter compete over the love of a woman, and each creates an object: the weaver a seamless shirt and the carpenter a large chest, which he tricks the weaver into entering. The weaver flies off on the chest and reaches the kingdom of Oman, where he introduces himself as the Angel Gabriel to a princess locked in a tower. Marzolph noted that the tale was the source for Malek and Schirin, a tale contained in the work A Thousand and One Days. China Chinese folklorist and scholar Ting Nai-tung established a second typological classification of Chinese folktales (the first was by Wolfram Eberhard in the 1930s). According to this new system, in tale type 575, "The Prince's Wings", the main character is not a prince, and the means of transportation is either a horse or an eagle. Iran A Persian variant is reported to have been analysed in folklorist William Alexander Clouston's Magic Elements in the Squire's Tale. In this tale, a weaver and a carpenter in Nishapur compete to impress a local woman. The weaver sews a seamless shirt and the carpenter a magic coffer. The weaver tests the coffer and flies away to another realm. He uses the coffer to reach the castle where the daughter of the king of Oman is being held and introduces himself as the Angel Gabriel. As the story continues, he defeats an army for the King of Oman, but loses the flying coffer. At the end of the story, the king discovers the ruse, but decides to keep it a secret after the angel "Gabriel" achieved victories for him. Central Asia A similar tale is attested in a manuscript archived in the Institute of Oriental Studies of the Academy of Sciences of the then Soviet Union. The manuscript, indexed as A 103, is dated to the 18th century, and tentatively sourced from Central Asia.
In a summary of the tale, titled "Рассказ о столяре, ткаче, дочери оманского падишаха и о чудесах, которые они пережили" ("The Story of a Carpenter, a Weaver, the Daughter of the Omani Padishah and the Wonders they Experienced"), a carpenter and a weaver compete for the hand of a woman: the weaver sews a cloth without needle or thread, and the carpenter, in retaliation, fashions a flying box that he tricks his rival into entering. The weaver travels on the flying box to Oman and falls in love with the Omani princess, to whom he introduces himself as an "arkhan". The weaver manages to trick the padishah of Oman, and actually has success in battle with the flying box, until the day the box is burnt down. The padishah discovers the weaver's secret, but promises to keep it to himself, since the princess is expecting a child. Uzbekistan In an Uzbek variant, titled "Столяр и портной" ("The Carpenter and The Weaver"), a carpenter and a weaver are good friends. One day, they compete against each other to test their abilities to impress a girl: the weaver creates a seamless shirt. Jealous, the carpenter builds a chest and invites the weaver for a test drive. He locks his friend inside the chest, turns a screw and the chest soars to another kingdom. The weaver gets out of the chest, hides it and learns the local padishah has a daughter whom he locks up in a tower. The weaver uses the chest to fly up to her room, while the padishah is away on a hunt, and presents himself as Azrael. In another Uzbek tale, "Умелые руки" ("Skillful Hands"), a boy named Rafik is taken to be apprenticed by a carpenter. One night, he has a dream about beautiful maidens. Entranced by this vision, he slowly withers, until his father and the carpenter fashion a flying wooden horse that the boy can use to look for her. He flies on the machine and lands in another place. He learns the maidens come in the shape of doves to bathe in a nearby lake and he must hide the garments of his beloved one. He does, but she escapes with the other doves. He follows her on the wooden horse to a meadow where they rest. Rafik wakes her up and convinces her to go with him. They return to a village and marry. She gives birth to a son. Rafik flies on the horse to another kingdom, but a fire destroys the apparatus and he is stranded there. Unaware of her husband's fate, she takes their son and joins a caravan to another city, where they set up shop in hopes of finding Rafik. Years pass, and the family is finally reunited. South Asia Stith Thompson and Warren Roberts's Types of Indic Oral Tales registers the existence of tale type 575, "The Prince's Wings", in modern Indian and South Asian sources. In the Indic type, the hero finds the flying horse (or other wooden mechanism), flies away to another kingdom and courts the local princess; they later escape on the mechanism, but are separated when it burns down; after their separation, they reunite. Charles Swynnerton published an Upper Indus tale from Punjab with the title Prince Ahmed and the Flying Horse: Prince Ahmed likes to play with the sons of a goldsmith, an ironsmith, an oilman, and a carpenter, much to his father's disgust. The king decides to imprison the four youths, but the prince, their friend, intercedes in their favour: all four should prove their skills. The four fashion, respectively, six brazen fishes, two large iron fishes, two artificial giants and at last a wooden horse.
Prince Ahmed climbs the horse and flies to regions unknown, where he romances a princess and brings her back to his homeland. India Author Mark Thornhill published an Indian tale with the title The Magic Horse. In this tale, a carpenter and a goldsmith compete over who is the most skilled craftsman. The king announces he will be the judge of the dispute and orders them to bring him their finest works. The goldsmith brings a metal fish that can swim and the carpenter a wooden horse that can move about. The king's son mounts the horse and flies away to another kingdom. In this kingdom, he learns about the princess, secluded in a tower, who is weighed every morning against a garland of flowers so that it can be assured no man has touched her. Anthropologist Stephen Fuchs collected a tale titled Uṛhan Ghōṛā ("The Flying Horse"), from a Baiga source named Musra, from the Bijora village near Dindori in eastern Mandla. In this tale, the raja sets up a contest between a smith and a carpenter to settle their dispute. The carpenter fashions a winged horse with an internal engine. The raja's young son rides on the horse and is carried over to another kingdom, where he sleeps with the princess. The princess's belly begins to grow and her father discovers the culprit: the foreign prince. On the day of the execution, he escapes with the princess on the winged horse, but the couple must make a hasty descent on a small island for her to give birth. Once their son is born, the family is separated: the young boy is adopted by a royal couple; the princess loses her memory and is adopted as the niece of a lower-caste woman, and the prince marries another rani. Their fates converge as the prince stops an incestuous marriage between the son and the mother. In another Indian variant, The Flying Horse, a carpenter creates an "airplane with an engine" for his friend, the prince. The prince rides the airplane to a marble palace in a distant kingdom across the ocean, where he meets a princess. They fall in love and she becomes pregnant. After an unfortunate accident, the prince separates from the pregnant princess, who gives birth to a boy. The two are also separated: the boy is found by a couple and his mother is rescued by prostitutes. Years later, the boy becomes a youth, buys his mother from the brothel and meets his father, who has become an old man. Sri Lanka Author Henry Parker published a Sri Lankan tale titled The Wax Horse: a king hides his son from the outside world due to a prophecy that the son would go away from his kingdom. One day, the young prince sees a wax horse with wings in the market and the king buys it for him. The prince climbs on the horse and flies to another kingdom, eventually meeting a princess. In another Sri Lankan tale collected by Henry Parker, Concerning a Royal Prince and a Princess, a carpenter's son fashions a Wooden Peacock, which the prince tries out, arriving in another kingdom. He hides the Wooden Peacock in the foliage and sees a princess bathing. Later, the prince flies to her window. The princess then decides to hide her lover inside her room by commissioning a man-sized lamp with a secret compartment. The princess becomes pregnant and escapes with the prince to the jungle. Her royal lover gets stranded in the sea, due to the machinations of fate, and the princess is forced to raise the child on her own. She, however, gets help from an ascetic, who, by performing "an Act of Truth", creates two other children out of flowers for the maiden to rear.
Uyghur people In a Uyghur tale, The Wooden Horse, a carpenter and a metalsmith quarrel about who is the more skilled. The king decides to set a contest to settle their dispute: the metalsmith creates an iron fish and the carpenter a wooden horse that can fly. The king's son, the prince, is delighted with the wooden horse and asks his father to try it. The prince controls the device and begins to ascend to the skies, disappearing in the distance. He arrives at another kingdom whose king has built a "palace in the sky" to hide his daughter in. The prince visits the princess on the horse three times, which infuriates the king. The king orders a nationwide search for the boy. The princess escapes with the prince on the flying wooden horse, but as soon as they land, the princess wants to go back to get a treasure from her mother. She leaves the prince there and flies back to her kingdom, and is captured by her own father, who arranges her marriage to another man. The prince begins to notice her absence and wanders about in search of food. He finds an orchard with fruits and eats them, and horns and a white beard appear on his face. He eats other fruits and reverses the transformation. He decides to collect some of them and goes back on the road. He finds a prince's retinue and gives some of the fruits to the prince - who is to marry the princess of the sky palace - to cause a physical transformation. Despairing at the situation, the retinue concoct a plan to replace the prince with the fruit seller (which was the youth's plan all along). The youth-as-the-foreign-prince meets the princess again and, after the wedding celebrations, they escape on the wooden horse. Africa Morocco René Basset collected a variant in the Berber language. Literary variants Illustrator Howard Pyle included a tale named The Stool of Fortune in his work Twilight Land, a crossover of famous fairy tale characters (Mother Goose, Cinderella, Fortunatus, Sinbad the Sailor, Aladdin, Boots, the Valiant Little Tailor) who meet in an inn to tell stories. In The Stool of Fortune, a nameless wandering soldier is hired by a magician to shoot some animals. Angry at the unjust payment, the soldier enters the magician's hut and sits on a three-legged stool, waiting for his employer. When he wishes he were anywhere else, the stool obeys his command and starts to fly away. The soldier then arrives at the tower room of an unsuspecting princess and announces himself as "The King of Winds". Sufi scholar Idries Shah adapted the tale as the children's book The Magic Horse: a King summons a woodcarver and a metalsmith to create wondrous contraptions. The woodcarver constructs a wooden horse, which draws the attention of the king's youngest son, Prince Tambal. Adaptations The Russian variant of the tale type ATU 575, "The Wooden Eagle", was adapted into a Soviet animated film in 1953. The tale type was also adapted into a Czech fantasy film in 1987, titled O princezně Jasněnce a ševci, který létal (Princess Jasnenka and the Flying Shoemaker). The film was based on a literary fairy tale of the same name by Czech author Jan Drda, first published in 1959, in České pohádky. See also The Flying Trunk, literary fairy tale by Hans Christian Andersen Flying carpet Pegasus, mythological flying horse Haizum Qianlima Hippogriff Tulpar Tianma Le cheval de bronze (opera) References Bibliography Chauvin, Victor Charles. Bibliographie des ouvrages arabes ou relatifs aux Arabes, publiés dans l'Europe chrétienne de 1810 à 1885. Volume V. Líege: H. Vaillant-Carmanne.
1901. pp. 221-231. Further reading Cox, H.L. "'L'Histoire du cheval enchanté' aus 1001 Nacht in der mündlichen Überlieferung Französisch-Flandern". In: D. Harmening & E. Wimmer (red.), Volkskultur - Geschichte - Region: Festschrift für Wolfgang Brückner zum 60. Geburtstag. Würzburg: Verlag Königshausen & Neumann GmbH. 1992. pp. 581-596. Access date: 11th January, 2025. External links The Book of the Thousand Nights and One Night/The Enchanted Horse on Wikisource (translation by John Payne) One Thousand and One Nights characters Male characters in literature Male characters in fairy tales Fictional princes Fairy tales about princes Fairy tales about princesses Medieval literature Works about automation Automata (mechanical) Fictional objects Magic items Legendary flying machines Fictional horses Fictional Indian people Indian folklore Indian literature Indian legends Indian fairy tales ATU 560-649
The Ebony Horse
[ "Physics", "Engineering" ]
7,262
[ "Automation", "Magic items", "Physical objects", "Automata (mechanical)", "Works about automation", "Matter" ]
66,105,018
https://en.wikipedia.org/wiki/Cartesian%20parallel%20manipulators
In robotics, Cartesian parallel manipulators are manipulators that move a platform using parallel-connected kinematic linkages ('limbs') lined up with a Cartesian coordinate system. Multiple limbs connect the moving platform to a base. Each limb is driven by a linear actuator and the linear actuators are mutually perpendicular. The term 'parallel' here refers to the way that the kinematic linkages are put together; it does not connote geometrically parallel, i.e., equidistant lines. Context Generally, manipulators (also called 'robots' or 'mechanisms') are mechanical devices that position and orientate objects. The position of an object in three-dimensional (3D) space can be specified by three numbers X, Y, Z known as 'coordinates.' In a Cartesian coordinate system (named after René Descartes who introduced analytic geometry, the mathematical basis for controlling manipulators) the coordinates specify distances from three mutually perpendicular reference planes.  The orientation of an object in 3D can be specified by three additional numbers corresponding to the orientation angles.  The first manipulators were developed after World War II for the Argonne National Laboratory to safely handle highly radioactive material remotely.  The first numerically controlled manipulators (NC machines) were developed by Parsons Corp. and the MIT Servomechanisms Laboratory, for milling applications.  These machines position a cutting tool relative to a Cartesian coordinate system using three mutually perpendicular linear actuators (prismatic P joints), with (PP)P joint topology.  The first industrial robot, Unimate, was invented in the 1950s. Its control axes correspond to a spherical coordinate system, with RRP joint topology composed of two revolute R joints in series with a prismatic P joint.  Most industrial robots today are articulated robots composed of a serial chain of revolute R joints RRRRRR. Description Cartesian parallel manipulators are in the intersection of two broader categories of manipulators: Cartesian and parallel. Cartesian manipulators are driven by mutually perpendicular linear actuators. They generally have a one-to-one correspondence between the linear positions of the actuators and the X, Y, Z position coordinates of the moving platform, making them easy to control. Furthermore, Cartesian manipulators do not change the orientation of the moving platform. Most commonly, Cartesian manipulators are serial-connected; i.e., they consist of a single kinematic linkage chain in which the first linear actuator moves the second one, and so on. On the other hand, Cartesian parallel manipulators are parallel-connected, i.e. they consist of multiple kinematic linkages. Parallel-connected manipulators have innate advantages in terms of stiffness, precision, dynamic performance and the ability to support heavy loads. Configurations Various types of Cartesian parallel manipulators are summarized here. Only fully parallel-connected mechanisms are included; i.e., those having the same number of limbs as degrees of freedom of the moving platform, with a single actuator per limb. Multipteron family Members of the Multipteron family of manipulators have either 3, 4, 5 or 6 degrees of freedom (DoF). The Tripteron 3-DoF member has three translation degrees of freedom 3T DoF, with the subsequent members of the Multipteron family each adding a rotational R degree of freedom. Each member of the family has mutually perpendicular linear actuators connected to a fixed base.
The moving platform is typically attached to the linear actuators through three geometrically parallel revolute R joints. See Kinematic pair for a description of the shorthand joint notation used to describe manipulator configurations, like revolute R joint for example. Tripteron The 3-DoF Tripteron member of the Multipteron family has three parallel-connected kinematic chains consisting of a linear actuator (active prismatic P joint) in series with three revolute R joints 3(PRRR). Similar manipulators, with three parallelogram Pa limbs 3(PRPaR) are the Orthoglide and Parallel cube-manipulator. The Pantepteron is also similar to the Tripteron, with pantograph linkages to speed up the motion of the platform. Quadrupteron The 4-DoF Quadrupteron has 3T1R DoF with (3PRRU)(PRRR) joint topology. Pentapteron The 5-DoF Pentapteron has 3T2R DoF with 5(PRRRR) joint topology. Hexapteron The 6-DoF Hexapteron has 3T3R DoF with 6(PCRS) joint topology, with cylindrical C and spherical S joints. Isoglide The Isoglide family includes many different Cartesian parallel manipulators with 2-6 DoF. Xactuator The 4-DoF or 5-DoF Coupled Cartesian manipulator family consists of gantry-type Cartesian parallel manipulators with 2T2R DoF or 3T2R DoF. References Machinery
Cartesian parallel manipulators
[ "Physics", "Technology", "Engineering" ]
1,094
[ "Physical systems", "Machines", "Machinery", "Mechanical engineering" ]
47,771,159
https://en.wikipedia.org/wiki/International%20Workshop%20on%20Operator%20Theory%20and%20its%20Applications
International Workshop on Operator Theory and its Applications (IWOTA) was started in 1981 to bring together mathematicians and engineers working on the operator-theoretic side of functional analysis and its applications to related fields. These include: differential equations and integral equations; complex analysis and harmonic analysis; linear systems and control theory; mathematical physics; signal processing; and numerical analysis. The other major branch of operator theory, operator algebras (C*- and von Neumann algebras), is not heavily represented at IWOTA and has its own conferences. IWOTA gathers leading experts from all over the world for an intense exchange of new results, information and opinions, and for tracing the future developments in the field. The IWOTA meetings provide opportunities for participants (including young researchers) to present their own work in invited and contributed talks, to interact with other researchers from around the globe, and to broaden their knowledge of the field. In addition, IWOTA emphasizes cross-disciplinary interaction among mathematicians, electrical engineers and mathematical physicists. In the even years, the IWOTA workshop is a satellite meeting to the biennial International Symposium on the Mathematical Theory of Networks and Systems (MTNS). From the humble beginnings in the early 1980s, the IWOTA workshops grew to become one of the largest continuing conferences attended by the community of researchers in operator theory. History of IWOTA First IWOTA Meeting The International Workshop on Operator Theory and its Applications was started on August 1, 1981, adjacent to the International Symposium on Mathematical Theory of Networks and Systems (MTNS), with the goal of exposing operator theorists, even pure theorists, to recent developments in engineering (especially H-infinity methods in control theory) which had a significant intersection with operator theory. Israel Gohberg was the visionary and driving force of IWOTA and president of the IWOTA Steering Committee. From the beginning, J. W. Helton and M. A. Kaashoek served as vice presidents of the steering committee. West Meets East Besides the excitement of mathematical discovery over the decades at IWOTA, there was great excitement when the curtain between Soviet-bloc and Western operator theorists fell. Until 1990, these two collections of extremely strong mathematicians seldom met due to the tight restrictions on travel from and in the communist countries. When the curtain dropped, the western mathematicians knew the classic Soviet papers but had a spotty knowledge of much of what else their counterparts were doing. Gohberg was one of the operator theorists who knew both sides and he guided IWOTA, a western institution, in bringing (and funding) many prominent operator theorists from the former Soviet bloc to speak at the meetings. As the IWOTA programs demonstrate, this significantly accelerated the cultures' mutual assimilation. Previous IWOTA Meetings IWOTA Proceedings Proceedings of the IWOTA workshops appear in the Springer / Birkhäuser Verlag book series Operator Theory: Advances and Applications (OTAA) (founder: Israel Gohberg). While engineering conference proceedings often are handed to participants as they arrive and contain short papers on each conference talk, the IWOTA proceedings follow mathematics conference tradition: they contain a modest number of papers and are published several years after the conference.
Funding Sources IWOTA has received support from many sources, including the National Science Foundation, the London Mathematical Society, the Engineering and Physical Sciences Research Council, Deutsche Forschungsgemeinschaft, Secretaría de Estado de Investigación, Desarrollo e Innovación (Spain), Australian Mathematical Sciences Institute, National Board for Higher Mathematics, International Centre for Theoretical Physics, Indian Statistical Institute, Korea Research Foundation, United States-India Science & Technology Endowment Fund, Nederlandse Organisatie voor Wetenschappelijk Onderzoek, the Commission for Developing Countries of the International Mathematical Union, Stichting Advancement of Mathematics (Netherlands), the National Research Foundation of South Africa, and Birkhäuser Publishing Ltd. The IWOTA Steering Committee IWOTA is directed by a steering committee which chooses the site for the next meeting, elects the chief local organizer(s) and ensures the appearance of the enduring themes of IWOTA. The sub-themes of an IWOTA workshop and the lecturers are chosen by the local organizing committee in consultation with the steering committee's board. The board consists of its vice presidents: Joseph A. Ball, J. William Helton (Chair), Sanne ter Horst, Igor Klep, Christiane Tretter, Irene Sabadini, Victor Vinnikov and Hugo J. Woerdeman. In addition, past chief organizers who remain active in IWOTA are members of the steering committee. The board governs IWOTA with the consultation and consent of the full steering committee. Honorary members of the steering committee, elected in 2016, are: Israel Gohberg (deceased in 2009), Leiba Rodman (deceased in 2015), Tsuyoshi Ando, Harry Dym (deceased in 2024), Ciprian Foiaş (deceased in 2020), Heinz Langer (deceased in 2024), Nikolai Nikolski. Honorary member of the steering committee, elected in 2024, is: Rien Kaashoek. Future IWOTA Meetings IWOTA 2025 will be held at the University of Twente in Enschede, The Netherlands. The main organizer is Felix Schwenninger. Dates are July 14-18, 2025. IWOTA 2026 will be held at Université Laval in Quebec City, Canada. The main organizers are Javad Mashreghi and Frédéric Morneau-Guérin. Dates are August 3-7, 2026. Israel Gohberg ILAS-IWOTA Lecture The Israel Gohberg ILAS-IWOTA Lecture was introduced in August 2016 and honors the legacy of Israel Gohberg, whose research crossed borders between operator theory, linear algebra, and related fields. This lecture is in collaboration with the International Linear Algebra Society (ILAS). This series of lectures will be delivered at IWOTA and ILAS conferences, in different years, in the approximate ratio of two-thirds at IWOTA and one-third at ILAS. The first three lectures will take place at IWOTA Lancaster UK 2021, ILAS 2022, and IWOTA 2024. Donations for the Israel Gohberg ILAS-IWOTA Lecture Fund are most welcome and can be submitted via the ILAS donation form. Donations are tax deductible in the United States. References External links Operator Theory: Advances and Applications Series on Springer website IWOTA's YouTube Channel IWOTA 2000 - Bordeaux, France IWOTA 2006 - Seoul, Korea IWOTA 2007 - Potchefstroom, South Africa IWOTA 2008 - Williamsburg, Virginia, U.S.A IWOTA 2010 - Berlin, Germany IWOTA 2011 - Seville, Spain IWOTA 2012 - Sydney, Australia IWOTA 2013 - Bangalore, India IWOTA 2014 - Amsterdam, Netherlands IWOTA 2015 - Tbilisi, Georgia IWOTA 2016 - St.
Louis, Missouri, USA IWOTA 2017 - Chemnitz, Germany IWOTA 2019 - Lisbon, Portugal IWOTA Chapman USA 2021 - Orange, California, USA IWOTA Lancaster UK 2021 - Lancaster, United Kingdom IWOTA 2022 - Kraków, Poland IWOTA 2023 - Helsinki, Finland IWOTA 2024 - Canterbury, United Kingdom IWOTA 2025 - Enschede, The Netherlands Mathematics conferences Operator theory Functional analysis Mathematical analysis Mathematical societies Organizations established in 1981
International Workshop on Operator Theory and its Applications
[ "Mathematics" ]
1,539
[ "Mathematical analysis", "Functions and mappings", "Functional analysis", "Mathematical objects", "Mathematical relations" ]
47,775,359
https://en.wikipedia.org/wiki/Multiple%20hearth%20furnace
A multiple hearth furnace also known as a vertical calciner, is used for continuous preparation and calcining of materials. Working The multiple hearth furnaces consist of several circular hearths or kilns superimposed on each other. Material is fed from the top and is moved by the action of rotating "rabble arms", and the revolving mechanical rabbles attached to the arms move over the surface of each hearth to continuously shift the ore. The arms are attached to a rotating central shaft that passes through the center of the roaster. As the material is moved, the ore that is charged at the top hearth gradually moves downward as it passes through windows in the floor of each hearth or through alternate passages around the shaft and the periphery until it finally emerges at the bottom. Gas The oxidizing gases flow upward, i.e., counter-current to the descending charge. In a well-insulated roaster, external heating is unnecessary except when the charge is highly moist. The hearth at the top of the roaster dries and heats the charge. Ignition and oxidation of the charge occur lower down. Variables The hearths may be individually heated and the number, temperature, rotation rate, and size of each hearth determine the residence time and conditions for the calcining powder in order to achieve the desired final properties. Structure of furnace The individual hearths are lined with refractory brick, and the rabble arms are typically a force-cooled metal alloy. The entire structure is enclosed in a cylindrical brick-lined steel shell. References Smelting Metallurgical processes
Multiple hearth furnace
[ "Chemistry", "Materials_science" ]
324
[ "Metallurgical processes", "Metallurgy", "Smelting" ]
47,779,300
https://en.wikipedia.org/wiki/Hazard%20analysis%20and%20risk-based%20preventive%20controls
Hazard analysis and risk-based preventive controls or HARPC is a successor to the Hazard analysis and critical control points (HACCP) food safety system, mandated in the United States by the FDA Food Safety Modernization Act (FSMA) of 2010. Preventive control systems emphasize prevention of risks before they occur rather than their detection after they occur. The FDA released the rules in the Federal Register from September 2015 onwards. The first release of rules addressed Preventive Controls for Human Food and Preventive Controls for Foods for Animals. The Produce Safety Final Rule, the Foreign Supplier Verification Programs (FSVP) Final Rule and the Accredited Third-Party Certification Final Rule were issued on November 13, 2015. The Sanitary Transportation of Human and Animal Food final rule was issued on April 6, 2016, and the Mitigation Strategies To Protect Food Against Intentional Adulteration (Food Defense) final rule was issued on May 27, 2016. Scope All food companies in the United States that are required to register with the FDA under the Public Health Security and Bioterrorism Preparedness and Response Act of 2002, as well as firms outside the US that export food to the US, must have a written FSMA-compliant Food Safety Plan in place by the deadlines listed below: Very small businesses of less than $1 million in sales per year are exempt, but must provide proof to the FDA of their very small status by January 1, 2016. Businesses subject to Juice HACCP () and Seafood HACCP () are exempt. Businesses subject to the Pasteurized Milk Ordinance; Sept 17, 2018. Small businesses, defined as having fewer than 500 full-time equivalent employees; Sept 17, 2017. All other businesses; Sept 17, 2016. Additionally, for the first time food safety is being extended to pet food and animal feed, with firms being given an extra year to implement Current Good Manufacturing Practices before a Preventive Controls system the following year: Primary Production Farms, defined as "an operation under one management in one general, but not necessarily contiguous, location devoted to the growing of crops, the harvesting of crops, the raising of animals (including seafood), or any combination of these activities" are exempt. Very small businesses of less than $2,500,000 in sales per year; Sept 17, 2018 for cGMP, Sept 17, 2019 for Preventive Controls, but must provide proof of very small business status by January 1, 2017. Small businesses, having fewer than 500 full-time equivalent employees; Sept 17, 2017 for cGMP, Sept 17, 2018 for Preventive Controls. All other businesses; Sept 17, 2016 for cGMP, Sept 17, 2017 for Preventive Controls. The FDA estimates that 73,000 businesses currently fall under these definitions. Differences between FSMA Preventive Controls and HACCP FSMA places a much stronger emphasis on science, research and prior experience with outbreaks than HACCP. For example, the FDA now uses whole genome sequencing to match the exact strain of pathogen isolated from hospital patients to DNA recovered from food manufacturing facilities. FSMA requires that a "Preventive Controls Qualified Individual" (PCQI) with training and experience oversee the plan. HACCP assigned responsibilities to a team drawn from management. FSMA requires that firms vet ("Verify") all their suppliers for the effectiveness of their food safety programs. 
This has the effect of drafting companies into the FSMA enforcement effort, since the Supplier Verification and Foreign Supplier Verification programs require that the suppliers provide written proof that they have Prerequisite Programs, and Preventive Controls systems which include their own supplier vetting program. FSMA-compliant Food Safety Plans rely on Prerequisite Programs such as GMPs, allergen controls, Integrated Pest Management and vetting suppliers far more than HACCP plans, since these programs tend to be preventive. FSMA-compliant Hazard Analyses address radiological hazards in addition to the chemical, biological and physical hazards covered by HACCP systems. FSMA explicitly requires a Food Defense component, with both terrorism and Economically Motivated Adulteration addressed. Businesses with less than $10,000,000 a year in sales are exempt. FSMA-compliant Food Safety Plans de-emphasize Critical Control Points in favor of Preventive Controls. Preventive Controls do not require specific Critical Limits. FSMA-compliant Food Safety Plans allow Corrections in place of Corrective Actions when the public health is not threatened. Corrections are not as strict regarding paperwork as Corrective Actions. The FDA believes that companies might have been avoiding making minor improvements because they felt that the paper trail of a Corrective Action would open them to legal risk due to discovery during investigations or lawsuits. FSMA-compliant Food Safety Plans are to be reviewed once every three years, as opposed to yearly with HACCP. See also Failure mode and effects analysis Failure mode, effects, and criticality analysis Fault tree analysis Food defense Food safety Design Review Based on Failure Mode Fast food restaurant ISO 22000 Hazard analysis Hazop Hygiene Sanitation Sanitation Standard Operating Procedures Codex Alimentarius Total quality management References External links Food Safety Preventive Controls Alliance's Preventive Controls for Human Food Food safety Food technology Quality management Hazard analysis United States Department of Agriculture
Hazard analysis and risk-based preventive controls
[ "Engineering" ]
1,058
[ "Safety engineering", "Hazard analysis" ]
68,928,715
https://en.wikipedia.org/wiki/Bridget%20Mutuma
Bridget K. Mutuma is a researcher in chemistry and materials science at the University of Nairobi in Kenya. She focuses on developing nanomaterials for sensor applications. She is a Fellow of the African Academy of Sciences. References External links Year of birth missing (living people) Living people Fellows of the African Academy of Sciences Academic staff of Kirinyaga University Nanotechnologists University of the Witwatersrand alumni 21st-century Kenyan women scientists 21st-century Kenyan scientists
Bridget Mutuma
[ "Materials_science" ]
96
[ "Nanotechnology", "Nanotechnologists" ]
68,931,030
https://en.wikipedia.org/wiki/Allogeneic%20processed%20thymus%20tissue
Allogeneic processed thymus tissue, sold under the brand name Rethymic, is a thymus tissue medical therapy used for the treatment of children with congenital athymia. It takes six months or longer to reconstitute the immune function in treated people. The most common adverse reactions include high blood pressure, cytokine release syndrome, low blood magnesium levels, rash, low platelets, and graft versus host disease. It was approved for medical use in the United States in October 2021. Allogeneic processed thymus tissue is the first thymus tissue product approved by the U.S. Food and Drug Administration (FDA). Allogeneic processed thymus tissue is composed of human allogeneic (donor-derived) thymus tissue that is processed and cultured, and then implanted into people to help reconstitute immunity (improve immune function) in people who are athymic. Dosing is patient customized, determined by the surface area of the allogeneic processed thymus tissue slices and the body surface area of the patient. Medical uses Allogeneic processed thymus tissue is indicated for immune reconstitution in children with congenital athymia. History The safety and efficacy of allogeneic processed thymus tissue were established in clinical studies that included 105 participants, with ages from one month to 16 years, who each received a single administration of allogeneic processed thymus tissue, from 1993 to 2020. Allogeneic processed thymus tissue improved survival of people with congenital athymia, and most people treated with this product survived at least two years. The U.S. Food and Drug Administration (FDA) granted the application for allogeneic processed thymus tissue a rare pediatric disease voucher and granted approval of Rethymic to Enzyvant Therapeutics, Inc. References Further reading Congenital disorders Immunology Medical treatments Orphan drugs Thymus
Allogeneic processed thymus tissue
[ "Biology" ]
395
[ "Immunology" ]
68,944,910
https://en.wikipedia.org/wiki/Vaccination%20in%20Bangladesh
Vaccination in Bangladesh includes all aspects of vaccination in Bangladesh. A 2020 study reported that a malaria vaccination program in Bangladesh would be cost-effective in terms of the disability-adjusted life years gained by the population. Bangladesh had its first outbreak of avian influenza in 2007 and the disease continues to be a national problem. Part of the response that scientists recommend is the development of vaccination programs, but this has been difficult. A 2016 program to provide HPV vaccines to girls created a range of ethical issues for communities. The source of these problems was that the vaccination program was designed by people outside the country who had little understanding of local cultural norms. There was an attempt at local consultation, but unexpected problems occurred anyway. Problems included a lack of public health education for the communities receiving the vaccine, forcefulness and lack of consent in arranging for girls to take the vaccine, a lack of planning to treat adverse side effects of vaccination, and a lack of female leadership and empowerment in running a health program for females. There are multiple cholera vaccines available in Bangladesh, as well as multiple strategies for making them available to people who need them. While there is major government support for vaccination, there is debate and research about how to manage the vaccination program to make it more efficient. Bangladesh has experienced outbreaks of the Nipah virus, and although a vaccine exists, the vaccine option is not well developed and preventing outbreaks without vaccines is a better option in this case. Bangladesh began a vaccination program for congenital rubella syndrome in 2012 and since then, cases have gone down greatly. COVID-19 vaccination Bangladesh began the administration of COVID-19 vaccines on 27 January 2021, while mass vaccination started on 7 February 2021. EPI (Expanded Program on Immunization) in Bangladesh On 7 April 1979, about five years after EPI was launched globally by the WHO, EPI was formally launched in Bangladesh as a pilot project in eight thanas. In 1985, the People's Republic of Bangladesh committed to the Global Universal Child Immunization Initiative (UCI), and began a phase-wise process of EPI intensification from 1985 to 1990. Bangladesh has made significant progress in the elimination and control of vaccine-preventable diseases (VPDs). The last case of wild poliovirus was detected in 2006, and the country has remained polio-free since then. Maternal and neonatal tetanus was eliminated in 2008. Bangladesh also achieved its rubella control goal in 2018. Surveillance for AFP and measles is maintained at a standard level. Bangladesh has also introduced several new vaccines over the last two decades. The HepB vaccine was introduced in 2003, Hib in 2009, rubella in 2012, PCV and IPV in 2015, the MR second dose in 2015 and fIPV in 2017. References Bangladesh Healthcare in Bangladesh
Vaccination in Bangladesh
[ "Biology" ]
585
[ "Vaccination by country", "Vaccination" ]
59,645,152
https://en.wikipedia.org/wiki/NGC%20779
NGC 779 is a spiral galaxy seen edge-on, located in the constellation Cetus. It lies at a distance of about 60 million light years from Earth, which, given its apparent dimensions, means that NGC 779 is about 70,000 light years across. It was discovered by William Herschel on September 10, 1785. NGC 779 features a bright nucleus and an elliptical or boxy bulge. It is seen at high inclination. The inner arms are tightly wound and form an inner pseudoring with high surface brightness. A break is seen at the northwest side of the pseudoring and may be due to dust extinction. The disk has lower surface brightness and is smooth, with no pronounced star-forming knots. The spiral pattern of the galaxy has been described either as a multiple-armed or a grand-design two-armed spiral. NGC 779 forms a small galaxy group with UGCA 024, known as the NGC 779 group. NGC 779 is considered to be part of the Cetus II cloud, which also includes NGC 584, NGC 681, NGC 720, and their groups, although it could also lie in the foreground. The galaxy is included in the Herschel 400 Catalogue. It lies about five degrees northeast of Zeta Ceti. It can be seen with a small telescope at moderate magnification, with its core being more easily detected. References External links NGC 779 on SIMBAD Spiral galaxies Cetus 0779 007544 Astronomical objects discovered in 1785 Discoveries by William Herschel
NGC 779
[ "Astronomy" ]
309
[ "Cetus", "Constellations" ]
59,654,517
https://en.wikipedia.org/wiki/Graph%20cut%20optimization
Graph cut optimization is a combinatorial optimization method applicable to a family of functions of discrete variables, named after the concept of cut in the theory of flow networks. Thanks to the max-flow min-cut theorem, determining the minimum cut over a graph representing a flow network is equivalent to computing the maximum flow over the network. Given a pseudo-Boolean function f, if it is possible to construct a flow network with positive weights such that each cut of the network can be mapped to an assignment of the variables of f (and vice versa), and the cost of the cut equals the value of f at that assignment (up to an additive constant), then it is possible to find the global optimum of f in polynomial time by computing a minimum cut of the graph. The mapping between cuts and variable assignments is done by representing each variable with one node in the graph and, given a cut, each variable will have a value of 0 if the corresponding node belongs to the component connected to the source, or 1 if it belongs to the component connected to the sink. Not all pseudo-Boolean functions can be represented by a flow network, and in the general case the global optimization problem is NP-hard. There exist sufficient conditions to characterise families of functions that can be optimised through graph cuts, such as submodular quadratic functions. Graph cut optimization can be extended to functions of discrete variables with a finite number of values, which can be approached with iterative algorithms with strong optimality properties, computing one graph cut at each iteration. Graph cut optimization is an important tool for inference over graphical models such as Markov random fields or conditional random fields, and it has applications in computer vision problems such as image segmentation, denoising, registration and stereo matching. Representability A pseudo-Boolean function f is said to be representable if there exists a graph with non-negative weights, with source and sink nodes s and t respectively, and with one node per variable, such that, for each tuple of values assigned to the variables, f equals (up to a constant) the value of the flow determined by a minimum cut of the graph in which each variable node lies on the source side of the cut if the variable is 0 and on the sink side if it is 1. It is possible to classify pseudo-Boolean functions according to their order, determined by the maximum number of variables contributing to each single term. All first-order functions, where each term depends upon at most one variable, are always representable. Quadratic functions are representable if and only if they are submodular, i.e. each quadratic term E, viewed as a function of its two binary variables, satisfies the condition E(0,0) + E(1,1) ≤ E(0,1) + E(1,0). Cubic functions are representable if and only if they are regular, i.e. all possible binary projections to two variables, obtained by fixing the value of the remaining variable, are submodular. For higher-order functions, regularity is a necessary condition for representability. Graph construction Graph construction for a representable function is simplified by the fact that the sum of two representable functions is representable, and its graph is the union of the graphs representing the two functions. This result makes it possible to build separate graphs representing each term and to combine them to obtain a graph representing the entire function. The graph representing a quadratic function of n variables contains n + 2 vertices, two of them representing the source and sink and the others representing the variables. When representing higher-order functions, the graph contains auxiliary nodes that make it possible to model higher-order interactions.
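Before going through the per-term constructions below, the submodularity condition stated above can be made concrete with a small check. The following sketch is illustrative only: it assumes each quadratic term is given as a 2×2 table of values indexed by the pair of binary arguments, a representation chosen here for clarity rather than taken from any particular library.

```python
# Minimal sketch: verify the submodularity condition E(0,0) + E(1,1) <= E(0,1) + E(1,0)
# for every quadratic term of a pseudo-Boolean function. Each term is a dict mapping
# the pair of binary arguments to the value of the term (an assumed representation).

def is_submodular(quadratic_terms):
    return all(E[(0, 0)] + E[(1, 1)] <= E[(0, 1)] + E[(1, 0)] for E in quadratic_terms)

# An Ising-like smoothness term that only penalises disagreement is submodular:
smoothness = {(0, 0): 0.0, (1, 1): 0.0, (0, 1): 1.0, (1, 0): 1.0}
print(is_submodular([smoothness]))  # True
```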
Unary terms A unary term depends only on one variable and can be represented by a graph with one non-terminal node and one edge incident to one of the two terminals, with weight equal to the magnitude of the coefficient; which terminal the edge is attached to depends on the sign of the coefficient. Binary terms A quadratic (or binary) term can be represented by a graph containing two non-terminal nodes. The term can be rewritten as the sum of a constant, two terms each depending on one variable, and a pure quadratic term. In this expression, the first term is constant and is not represented by any edge, the two following terms depend on one variable each and are represented by one edge, as shown in the previous section for unary terms, while the third term is represented by an edge between the two non-terminal nodes (submodularity guarantees that its weight is non-negative). Ternary terms A cubic (or ternary) term can be represented by a graph with four non-terminal nodes, three of them associated with the three variables plus a fourth auxiliary node. A generic ternary term can be rewritten as the sum of a constant, three unary terms, three binary terms, and a ternary term in simplified form. There may be two different cases, according to the sign of the coefficient of the simplified ternary term. In the first case the term can be decomposed directly; in the second the construction is similar, but the variables take the opposite values. If the function is regular, then all its projections onto two variables will be submodular, implying that the binary terms in the decomposition have non-negative coefficients, and then all terms in the new representation are submodular. In this decomposition, the constant, unary and binary terms can be represented as shown in the previous sections. If the coefficient of the simplified ternary term is positive, the term can be represented with a graph with four edges, all with the same weight, while if it is negative the term can be represented by four different edges sharing a common weight. Minimum cut After building a graph representing a pseudo-Boolean function, it is possible to compute a minimum cut using one of the various algorithms developed for flow networks, such as Ford–Fulkerson, Edmonds–Karp, and the Boykov–Kolmogorov algorithm. The result is a partition of the graph into two connected components, one containing the source and one containing the sink, and the function attains its global minimum when each variable is set to 0 if its corresponding node lies in the source component, and to 1 if it lies in the sink component. Max-flow algorithms such as Boykov–Kolmogorov's are very efficient in practice for sequential computation, but they are difficult to parallelise, making them unsuitable for distributed computing applications and preventing them from exploiting the potential of modern hardware. Parallel max-flow algorithms were developed, such as push-relabel and jump-flood, that can also take advantage of hardware acceleration in GPGPU implementations. Functions of discrete variables with more than two values The previous construction allows global optimization of pseudo-Boolean functions only, but it can be extended to quadratic functions of discrete variables with a finite number of values, written as a sum of unary and binary terms. The unary terms represent the contribution of each variable (often referred to as the data term), while the binary terms represent the interactions between pairs of variables (the smoothness term). In the general case, optimization of such functions is an NP-hard problem, and stochastic optimization methods such as simulated annealing are sensitive to local minima and in practice they can generate arbitrarily sub-optimal results.
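Returning for a moment to the binary case, the per-term construction and the minimum-cut step described in the previous sections can be illustrated end to end on a tiny energy of two variables, before moving on to the multi-label algorithms below. The sketch uses networkx for the max-flow computation; the particular energy, the node names, and the "unary costs as edges to the terminals, disagreement penalty as edges between variable nodes" encoding are illustrative assumptions rather than a reference implementation.

```python
# Sketch: minimise f(x1, x2) = D1(x1) + D2(x2) + w * [x1 != x2] with a minimum s-t cut.
# Source side of the cut corresponds to label 0, sink side to label 1.
import networkx as nx

D1 = {0: 4.0, 1: 1.0}     # unary costs for x1
D2 = {0: 1.0, 1: 4.0}     # unary costs for x2
w = 2.0                   # submodular (Ising-like) penalty for x1 != x2

G = nx.DiGraph()
for node, D in (("v1", D1), ("v2", D2)):
    G.add_edge("s", node, capacity=D[1])   # paid when the node ends up on the sink side (label 1)
    G.add_edge(node, "t", capacity=D[0])   # paid when the node ends up on the source side (label 0)
G.add_edge("v1", "v2", capacity=w)         # paid when the two nodes end up on different sides
G.add_edge("v2", "v1", capacity=w)

cut_value, (source_side, _) = nx.minimum_cut(G, "s", "t")
labels = {v: (0 if v in source_side else 1) for v in ("v1", "v2")}
print(labels, cut_value)   # optimal labelling and its energy (no constant term in this example)
```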
With graph cuts it is possible to construct move-making algorithms that make it possible to reach, in polynomial time, a local minimum with strong optimality properties for a wide family of quadratic functions of practical interest (when the binary interaction is a metric or a semimetric), such that the value of the function at the solution lies within a constant and known factor from the global optimum. Given a function of variables taking values in a finite set of labels, and a certain assignment of values to the variables, each assignment can be associated with a partition of the set of variables into classes, one class per label, each class collecting the variables assigned that label. Given two distinct assignments and a label α, a move that transforms the first assignment into the second is said to be an α-expansion if the only variables that change their value are variables that change it to α, so that the class of α can only grow. Given a pair of labels α and β, a move is said to be an α-β swap if the only variables that change their value are variables whose value is α or β, and they can only switch between these two labels. Intuitively, an α-expansion move assigns the value α to some variables that currently have a different value, while an α-β swap move assigns α to some variables that currently have value β and vice versa. For each iteration, the α-expansion algorithm computes, for each possible label α, the minimum of the function among all assignments that can be reached with a single α-expansion move from the current temporary solution, and takes it as the new temporary solution:

    x := an arbitrary initial assignment
    improved := true
    while improved:
        improved := false
        foreach label α:
            x̂ := argmin of f among the assignments within one α-expansion of x
            if f(x̂) < f(x):
                x := x̂
                improved := true

The α-β swap algorithm is similar, but it searches for the minimum among all assignments reachable with a single α-β swap move from the current solution:

    x := an arbitrary initial assignment
    improved := true
    while improved:
        improved := false
        foreach pair of labels (α, β):
            x̂ := argmin of f among the assignments within one α-β swap of x
            if f(x̂) < f(x):
                x := x̂
                improved := true

In both cases, the optimization problem in the innermost loop can be solved exactly and efficiently with a graph cut. Both algorithms are guaranteed to terminate in a finite number of iterations of the outer loop, and in practice this number is small, with most of the improvement happening at the first iteration. The algorithms can generate different solutions depending on the initial guess, but in practice they are robust with respect to initialisation, and starting with a point where all variables are assigned to the same random value is usually sufficient to produce good quality results. The solution generated by such algorithms is not necessarily a global optimum, but it has strong guarantees of optimality: if the binary interaction is a metric and the solution is generated by the α-expansion algorithm, or if it is a semimetric and the solution is generated by the α-β swap algorithm, then the value of the function at the solution lies within a known and constant factor from the global minimum. Non-submodular functions Generally speaking, the problem of optimizing a non-submodular pseudo-Boolean function is NP-hard and cannot be solved in polynomial time with a simple graph cut. The simplest approach is to approximate the function with a similar but submodular one, for instance truncating all non-submodular terms or replacing them with similar submodular expressions. This approach is generally sub-optimal, and it produces acceptable results only if the number of non-submodular terms is relatively small. In the case of quadratic non-submodular functions, it is possible to compute in polynomial time a partial solution using algorithms such as QPBO. Higher-order functions can be reduced in polynomial time to a quadratic form that can be optimised with QPBO. Higher-order functions Quadratic functions are extensively studied and have been characterised in detail, but more general results have also been derived for higher-order functions. While quadratic functions can indeed model many problems of practical interest, they are limited by the fact that they can represent only binary interactions between variables.
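Before looking more closely at higher-order interactions, the outer loop just described can also be written as a few lines of driver code. The sketch below is schematic: f is a placeholder for the energy evaluation, and expansion_move is a hypothetical helper standing in for the exact graph-cut solution of each inner subproblem.

```python
# Schematic driver for the alpha-expansion scheme. `expansion_move(f, x, alpha)` is a
# hypothetical helper assumed to return the best labelling within one alpha-expansion
# of x (in practice obtained exactly with a graph cut on an auxiliary binary problem).

def alpha_expansion(f, labels, x, expansion_move):
    improved = True
    while improved:                 # sweep over the labels until no move lowers the energy
        improved = False
        for alpha in labels:
            candidate = expansion_move(f, x, alpha)
            if f(candidate) < f(x):
                x = candidate
                improved = True
    return x
```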
The ability to capture higher-order interactions makes it possible to better model the nature of the problem, and it can provide higher-quality results that could be difficult to achieve with quadratic models. For instance, in computer vision applications, where each variable represents a pixel or voxel of the image, higher-order interactions can be used to model texture information, which would be difficult to capture using only quadratic functions. Sufficient conditions analogous to submodularity were developed to characterise higher-order pseudo-Boolean functions that can be optimised in polynomial time, and there exist algorithms analogous to α-expansion and α-β swap for some families of higher-order functions. The problem is NP-hard in the general case, and approximate methods were developed for fast optimization of functions that do not satisfy these conditions. Notes References Bibliography External links Implementation (C++) of several graph cut algorithms by Vladimir Kolmogorov. GCO, graph cut optimization library by Olga Veksler and Andrew Delong. Combinatorial optimization Computer vision Computational problems in graph theory
Graph cut optimization
[ "Mathematics", "Engineering" ]
2,234
[ "Computational problems in graph theory", "Packaging machinery", "Computational mathematics", "Graph theory", "Computational problems", "Mathematical relations", "Artificial intelligence engineering", "Mathematical problems", "Computer vision" ]
59,654,519
https://en.wikipedia.org/wiki/Quadratic%20pseudo-Boolean%20optimization
Quadratic pseudo-Boolean optimisation (QPBO) is a combinatorial optimization method for minimizing quadratic pseudo-Boolean functions, i.e. quadratic polynomials in binary (0/1) variables. If the function is submodular then QPBO produces a global optimum, equivalent to graph cut optimization, while if it contains non-submodular terms then the algorithm produces a partial solution with specific optimality properties, in both cases in polynomial time. QPBO is a useful tool for inference on Markov random fields and conditional random fields, and has applications in computer vision problems such as image segmentation and stereo matching. Optimization of non-submodular functions If the coefficients of the quadratic terms satisfy the submodularity condition, then the function can be efficiently optimised with graph cut optimization. It is indeed possible to represent it with a non-negative weighted graph, and the global minimum can be found in polynomial time by computing a minimum cut of the graph, which can be computed with algorithms such as Ford–Fulkerson, Edmonds–Karp, and Boykov–Kolmogorov's. If the function is not submodular, then the problem is NP-hard in the general case and it is not always possible to solve it exactly in polynomial time. It is possible to replace the target function with a similar but submodular approximation, e.g. by removing all non-submodular terms or replacing them with submodular approximations, but this approach is generally sub-optimal and it produces satisfactory results only if the number of non-submodular terms is relatively small. QPBO builds an extended graph, introducing a set of auxiliary variables ideally equivalent to the negation of the variables in the problem. If the nodes in the graph associated with a variable (representing the variable itself and its negation) are separated by the minimum cut of the graph into two different connected components, then the optimal value for that variable is well defined, otherwise it is not possible to infer it. This method produces results generally superior to submodular approximations of the target function. Properties QPBO produces a solution where each variable assumes one of three possible values: true, false, and undefined, denoted in the following as 1, 0, and ∅ respectively. The solution has the following two properties. Partial optimality: if the function is submodular, then QPBO produces a global minimum exactly, equivalent to graph cut, and all variables have a non-undefined value; if submodularity is not satisfied, the result will be a partial solution where only a subset of the variables has a non-undefined value. A partial solution is always part of a global solution, i.e. there exists a global minimum point of the function that agrees with the partial solution on every variable with a non-undefined value. Persistence: given a solution generated by QPBO and an arbitrary assignment of values to the variables, if a new assignment is constructed by overwriting, for each variable with a non-undefined QPBO value, the arbitrary value with the QPBO value, then the value of the function at the new assignment is not larger than at the arbitrary assignment. Algorithm The algorithm can be divided into three steps: graph construction, max-flow computation, and assignment of values to the variables. When constructing the graph, the set of vertices contains the source and sink nodes, and a pair of nodes for each variable, one representing the variable itself and one representing its negation. After re-parametrising the function to normal form, a pair of edges is added to the graph for each term of the function, with weights determined by the corresponding coefficient; each unary term and each quadratic term contributes one such pair.
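The three steps just listed can be sketched as follows. The graph construction is deliberately left as a hypothetical build_qpbo_graph callable (its edge weights come from the normal-form coefficients and are not reproduced here), the max-flow/min-cut step uses networkx, and the read-out of values follows the rule described in the next paragraph; the node-naming convention is an assumption made for the example.

```python
# Sketch of the three QPBO steps: graph construction (hypothetical), minimum cut,
# and assignment of 0 / 1 / undefined values from the positions of each variable's
# node and negated node, following the read-out rule described below.
import networkx as nx

def qpbo_solve(build_qpbo_graph, n):
    G = build_qpbo_graph()                         # hypothetical: weighted digraph with "s" and "t"
    _, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
    labels = []
    for i in range(n):
        v, v_neg = ("x", i), ("not_x", i)          # assumed names for the node and its negation
        if v in source_side and v_neg in sink_side:
            labels.append(0)
        elif v in sink_side and v_neg in source_side:
            labels.append(1)
        else:
            labels.append(None)                    # both nodes on the same side: undefined
    return labels
```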
The minimum cut of the graph can be computed with a max-flow algorithm. In the general case the minimum cut is not unique, and each minimum cut corresponds to a different partial solution; however, it is possible to build a minimum cut such that the number of undefined variables is minimal. Once the minimum cut is known, each variable receives a value depending upon the position of its two corresponding nodes: if the node representing the variable belongs to the connected component containing the source and the node representing its negation belongs to the connected component containing the sink, then the variable will have a value of 0. Vice versa, if the node representing the variable belongs to the connected component containing the sink and the node representing its negation to the one containing the source, then the variable will have a value of 1. If both nodes belong to the same connected component, then the value of the variable is undefined. The way undefined variables can be handled depends upon the context of the problem. In the general case, given a partition of the graph into two sub-graphs and two solutions, each one optimal for one of the sub-graphs, it is possible to combine the two solutions into one solution optimal for the whole graph in polynomial time. However, computing an optimal solution for the subset of undefined variables is still an NP-hard problem. In the context of iterative algorithms such as α-expansion, a reasonable approach is to leave the value of undefined variables unchanged, since the persistence property guarantees that the target function will have a non-increasing value. Different exact and approximate strategies to minimise the number of undefined variables exist. Higher order terms It is always possible to reduce a higher-order function to a quadratic function which is equivalent with respect to the optimisation, a problem known as "higher-order clique reduction" (HOCR), and the result of such a reduction can be optimized with QPBO. Generic methods for reduction of arbitrary functions rely on specific substitution rules, and in the general case they require the introduction of auxiliary variables. In practice most terms can be reduced without introducing additional variables, resulting in a simpler optimization problem, and the remaining terms can be reduced exactly, with the addition of auxiliary variables, or approximately, without the addition of any new variables. Notes References Notes External links Implementation of QPBO (C++), available under the GNU General Public License, by Vladimir Kolmogorov. Implementation of HOCR (C++), available under the MIT license, by Hiroshi Ishikawa. Combinatorial optimization Computational problems in graph theory
Quadratic pseudo-Boolean optimization
[ "Mathematics" ]
1,256
[ "Computational problems in graph theory", "Computational mathematics", "Graph theory", "Computational problems", "Mathematical relations", "Mathematical problems" ]
61,090,998
https://en.wikipedia.org/wiki/FUJIFILM%20VisualSonics
FUJIFILM VisualSonics Inc. (originally VisualSonics Inc.) is a biomedical company focused on the commercialization of high-frequency ultrasound and photoacoustic imaging equipment for research purposes. The company is headquartered in Toronto, Canada (with European headquarters in Amsterdam). History VisualSonics was founded in 1999 by Stuart Foster, a medical physicist at Sunnybrook Research Institute in Toronto. Foster's laboratory had been focused on developing a higher-frequency ultrasound system since 1983 in order to better study mouse models of human disease. In 2010 the company was acquired by the American clinical ultrasound company SonoSite Inc. (based in Bothell, WA). In 2012, FUJIFILM Holdings acquired SonoSite Inc. References External links Official Website Biomedicine Companies based in Toronto Medical imaging
FUJIFILM VisualSonics
[ "Biology" ]
164
[ "Biomedicine" ]
61,091,295
https://en.wikipedia.org/wiki/Baer%20function
Baer functions, named after Karl Baer, are solutions of the Baer differential equation, which arises when separation of variables is applied to the Laplace equation in paraboloidal coordinates. The Baer functions are defined as particular series solutions of this equation, normalised by prescribed conditions at the expansion point. By substituting a power series Ansatz into the differential equation, formal series can be constructed for the Baer functions. For special values of the parameters, simpler solutions may exist. Moreover, Mathieu functions are special-case solutions of the Baer equation, since the latter reduces to the Mathieu differential equation for particular parameter values after a suitable change of variable. Like the Mathieu differential equation, the Baer equation has two regular singular points and one irregular singular point at infinity. Thus, in contrast with many other special functions of mathematical physics, Baer functions cannot in general be expressed in terms of hypergeometric functions. The Baer wave equation is a generalization which results from separating variables in the Helmholtz equation in paraboloidal coordinates; it reduces to the original Baer equation when the wavenumber term vanishes. References Bibliography (free online access to the appendix on Baer functions) External links Ordinary differential equations Special functions
Baer function
[ "Mathematics" ]
247
[ "Special functions", "Combinatorics" ]
61,094,758
https://en.wikipedia.org/wiki/Natural%20element%20method
The natural element method (NEM) is a meshless method for solving partial differential equations, in which the elements do not have a predefined shape as in the finite element method, but depend on the geometry. A Voronoi diagram partitioning the space is used to create each of these elements. Natural neighbor interpolation functions are then used to model the unknown function within each element. Applications When the simulation is dynamic, this method prevents the elements from becoming ill-formed, since they can easily be redefined at each time step depending on the geometry. References Numerical differential equations Numerical analysis Computational fluid dynamics Computational mathematics Simulation
Natural element method
[ "Physics", "Chemistry", "Mathematics" ]
130
[ "Computational fluid dynamics", "Applied mathematics", "Computational mathematics", "Computational physics", "Mathematical relations", "Numerical analysis", "Approximations", "Fluid dynamics" ]
61,094,890
https://en.wikipedia.org/wiki/Scuba%20cylinder%20valve
A scuba cylinder valve or pillar valve is a high pressure manually operated screw-down shut off valve fitted to the neck of a scuba cylinder to control breathing gas flow to and from the pressure vessel and to provide a connection with the scuba regulator or filling whip. Cylinder valves are usually machined from brass and finished with a protective and decorative layer of chrome plating. A metal or plastic dip tube or valve snorkel screwed into the bottom of the valve extends into the cylinder to reduce the risk of liquid or particulate contaminants in the cylinder getting into the gas passages when the cylinder is inverted, and blocking or jamming the regulator. Cylinder valves are classified by four basic aspects: the thread specification for attachment to the cylinder, the connection to the regulator, pressure rating, and some functional distinguishing features. Standards relating to the specifications and manufacture of cylinder valves include ISO 10297 and CGA V-9 Standard for Gas Cylinder Valves. Structure of the valve The valve body is usually machined from a solid brass casting or forging, which is screwed into the cylinder neck thread, and sealed by o-ring or thread tape. The outlet is machined to fit one of the standard scuba regulator connection systems, and a gas passage is provided from the interior of the cylinder to the regulator connection. Control of gas flow through the gas passage is by opening and closing a valve orifice machined into the valve body, by turning the valve knob to drive the valve spindle which moves the valve seat towards or away from the orifice. The spindle engages with the valve seat by a flat and slot or a square socket on the inner end of the spindle, which passes through the spindle seal in the valve bonnet. Rotation of the seat drives it along its axis on a screw thread concentric with the orifice. The spindle is usually sealed by an O-ring where it passes through the bonnet, and axial loads on the spindle are usually carried by a teflon or similar low friction coefficient washer. Other arrangements have been used, but the one described is very common and is known as a balanced valve because the pressure of the gas in the cylinder is exerted on both sides of the valve seat when it is not sealed, because the gas can leak past the threads of the seat. Historically, two other spindle arrangements were also used, the unbalanced valve where the periphery of the seat is sealed, and the glandless valve, where the valve seat does not rotate, but is sealed into the valve body behind a diaphragm. The valve outlet is connected to a regulator for diving, or a filling whip for charging. The valve must be open for these operations, and closed to keep the gas inside the cylinder for storage. Cylinder neck threads The neck of the cylinder is the part of the end which is shaped as a narrow concentric cylinder, and internally threaded to fit a cylinder valve. Cylinder threads may be in two basic configurations: Taper thread and parallel thread. The valve thread specification must exactly match the neck thread specification of the cylinder. Improperly matched neck threads can fail under pressure which can have fatal consequences. Parallel threads are more tolerant of repeated removal and refitting of the valve for inspection and testing. 
There are several standards for neck threads, these include: Taper thread (17E), with a 12% taper right hand thread, standard Whitworth 55° form with a pitch of 14 threads per inch (5.5 threads per cm) and pitch diameter at the top thread of the cylinder of . These connections are sealed using thread tape and torqued to between on steel cylinders, and between on aluminium cylinders. Parallel threads are made to several standards: M25x2 ISO parallel thread, which is sealed by an O-ring and torqued to on steel, and on aluminium cylinders; M18x1.5 parallel thread, which is sealed by an O-ring, and torqued to on steel cylinders, and on aluminium cylinders; 3/4"x14 BSP parallel thread, which has a 55° Whitworth thread form, a pitch diameter of and a pitch of 14 threads per inch (1.814 mm); 3/4"x14 NGS (NPSM) parallel thread, sealed by an O-ring, torqued to on aluminium cylinders, which has a 60° thread form, a pitch diameter of , and a pitch of 14 threads per inch (1.814 mm); 3/4"x16 UNF, sealed by an O-ring, torqued to on aluminium cylinders. 7/8"x14 UNF, sealed by an O-ring. The 3/4"NGS and 3/4"BSP are very similar, having the same pitch and a pitch diameter that only differs by about , but they are not compatible, as the thread forms are different. All parallel thread valves are sealed using an O-ring at the top of the neck thread which seals in a chamfer or step in the cylinder neck and against the flange of the valve. Connection to the regulator A rubber O-ring forms a seal between the metal of the cylinder valve and the metal of the diving regulator. Fluoroelastomer (e.g. viton) O-rings may be used with cylinders filled with oxygen-rich breathing gas mixtures to reduce the risk of fire. There are two basic types of cylinder valve to regulator connection in general use for scuba cylinders. They are both very widely used for cylinders containing air and in many countries also for other breathing gases for diving: Yoke connectors The yoke connector, also known as an A-clamp or international connector, is a component of the regulator that fits around the valve body at the outlet and presses the outlet O-ring of the valve against the inlet seat of the regulator. The connection is officially described as connection CGA 850 yoke. The yoke clamping screw is screwed down snug by hand to ensure metal to metal contact between the valve and regulator to sufficiently constrain the O-ring against extrusion. Overtightening can make the yoke impossible to remove later without tools. The seal is created by clamping the O-ring mounted in a groove on the face of the valve between the surfaces of the regulator and valve. When the valve is opened, cylinder pressure expands the O-ring against the outer surface of the O-ring groove in the valve and the face of the regulator inlet. This type of connection is simple, cheap and very widely used worldwide. Several O-ring sizes are in use, and both overall and section diameters may vary, but the correct size for the valve is necessary for a reliable seal and so that the O-ring does not easily fall out during handling and storage. It has a maximum pressure rating of 240 bar, and is not well protected against overpressurisation. Insufficient clamping force may allow the pressure to slightly stretch the yoke structure, opening a gap between the sealing faces of the valve and the regulator sufficient to extrude the O-ring through the gap, resulting in a potentially catastrophic leak. 
A similar effect can occur if the first stage is bumped against the environment, flexing the yoke enough to open a gap. When underwater this is most likely in an overhead environment where the diver cannot make an immediate emergency ascent. The risk of this cause for O-ring extrusion is roughly proportional to the pressure in the cylinder, and is less for a more rigid yoke structure. Older regulators may have a yoke rated at 200 bar, and these may not fit over more recent 240 bar valves. DIN connectors In the DIN screw thread connectors, the regulator screws into the cylinder valve, trapping the O-ring securely between the sealing face of the valve and the O-ring groove in the regulator. These are more reliable than A-clamps because the O-ring is well protected and the assembly is considerably more rigid, and has a lower profile, making O-ring extrusion under impact less likely, but operators in many countries do not widely use DIN filling connectors on compressors, or cylinder valves which have DIN fittings, so a diver traveling abroad with a DIN system may need to take an adaptor, either for connecting the DIN regulator to a rented cylinder, or for connecting an A-clamp filler hose to a DIN cylinder valve. The DIN connection is slightly more complex to manufacture, but if the seal is good when the valve is opened it is likely to remain good throughout a dive, even if banged against a solid overhead, and is consequently preferred by technical divers even where the yoke fitting is more generally popular. DIN connections are available in two specifications; for working pressures up to 232 bar, and for 300 bar. The original design 200 bar regulator fitting with five threads will not seal in a 300 bar valve, preventing potential overload, particularly of the high pressure hose and submersible pressure gauge, but the DIN 300 bar regulator inlet fitting with seven threads available on almost all recent regulators is compatible with 200 and 232 bar valves as well as the 300 bar valves. The thread form is G5/8" x 14 tpi. The O-ring is carried in a groove on the regulator. Two sizes of O-ring are in common use. Adaptors Adaptors are available to allow connection of DIN regulators to yoke cylinder valves (A-clamp or yoke adaptor), and to connect yoke regulators to DIN cylinder valves. There are two types of adaptors for DIN valves: plug adaptors and block adaptors. Plug adaptors are screwed into a 5-thread DIN valve socket, are rated for 232/240 bar, and can only be used with valves which are designed to accept them. These can be recognised by a dimple recess opposite to the outlet opening, used to locate the screw of an A-clamp. Block adaptors are generally rated for 200 bar, and can be used with almost any 200 bar 5-thread DIN valve. A-clamp or yoke adaptors comprise a yoke clamp with a DIN socket in line. They are slightly more vulnerable to O-ring extrusion than integral yoke clamps, due to greater leverage on the first stage regulator. Conversion kits Several manufacturers market an otherwise identical first stage varying only in the choice of cylinder valve connection. In these cases it may be possible to buy original components to convert yoke to DIN and vice versa. The complexity of the conversion may vary, and parts are not usually interchangeable between manufacturers. The conversion of Apeks regulators is particularly simple and only requires an Allen key and a ring spanner. 
Valves for gases other than air There are also cylinder valves for scuba cylinders containing gases other than air: The European Norm EN 144-3:2003 introduced a new type of valve, similar to existing 232 bar or 300 bar DIN valves, but with a metric M26×2 thread connecting the cylinder to the regulator. These are intended to be used for breathing gas with oxygen content above that normally found in natural atmospheric air (i.e. 22–100%). From August 2008, these were required in the European Union for all diving equipment used with nitrox or pure oxygen. The idea behind this new standard is to prevent a rich mixture being filled to a cylinder that is not oxygen clean. However even with use of the new system there still remains nothing except human procedural care to ensure that a cylinder with a new valve remains oxygen-clean - which is exactly how the previous system worked. The enriched oxygen valve may alert the user to a non-air breathing gas, but will give no indication of the actual composition of the contents. Filling adaptors are available as a stock item to allow filling of cylinders fitted with these valves from a standard G5/8" DIN filling connector. An M 24x2 male thread cylinder valve was supplied with some Dräger semi-closed circuit recreational rebreathers (Dräger Ray) for use with nitrox mixtures. The regulator supplied with the rebreather had a compatible connection. Internal and other replaceable components of valves are often interchangeable amongst other valves from the same manufacturer for similar service. Handwheel The handwheel or valve knob is a knurled or ridged rubber, plastic or metal fitting attached to the valve spindle, used to rotate the spindle to open and close the valve. Hard rubber or tough plastic are the usual materials on recent models, usually incorporating moulded grips and a metal insert to engage the square or flatted part of the spindle, to which they are usually attached by a slotted nut. Dip tube The dip tube, anti-debris tube, or valve snorkel is a short tube screwed into the hole in the bottom of the valve body, which projects into the cylinder internal space. Its function is to prevent any loose debris inside the cylinder from getting into the outlet passages if the cylinder is inverted in use, as such material may clog or jam the regulator. Originally mostly made from brass tube, they are also often made from plastic, but brass is still preferred for high oxygen fraction gas mixes, as it is a lower fire hazard. Some dip tubes have a filter attached to the lower end, often made from sintered brass, but most have a plain opening. Pressure rating Yoke valves are rated between 200 and 240 bar, and there does not appear to be any mechanical design detail preventing connection between any yoke fittings, though some older yoke clamps will not fit over the popular 232/240 bar combination DIN/yoke cylinder valve as the yoke is too narrow. DIN valves are produced in 200 bar and 300 bar pressure ratings. The number of threads and the detail configuration of the connections is designed to prevent incompatible combinations of filler attachment or regulator attachment with the cylinder valve. 232 bar DIN (5-thread, G5/8) Outlet/Connector #13 to DIN 477 part 1 - (technically they are specified for cylinders with 300 bar test pressure) 300 bar DIN (7-thread, G5/8) Outlet/Connector #56 to DIN 477 part 5 - these are similar to 5-thread DIN fitting but are rated to 300 bar working pressures. 
(technically they are specified for cylinders with 450 bar test pressure). The 300 bar pressures are common in European diving and in US cave diving. Other distinguishing features Plain valves The most commonly used cylinder valve type is the single outlet plain valve, sometimes known as a "K-valve", which allows connection of a single regulator, and has no reserve function. It simply opens to allow gas flow, or closes to shut it off. Several configurations are used, with options of DIN or A-clamp connection, and vertical or transverse spindle arrangements. Reserve valves Until the 1970s, when submersible pressure gauges on regulators came into common use, diving cylinders often used a mechanical reserve mechanism to indicate to the diver that the cylinder was nearly empty. The gas supply was automatically cut-off by a spring loaded valve when the gas pressure reached the reserve pressure. To release the reserve, the diver pulled down on a rod that ran along the side of the cylinder and which activated a lever to open a bypass valve. The diver would then finish the dive before the reserve was consumed. The reserve could be adjusted by spring stiffness, typically for a single cylinder, but for twin sets and triple sets . On occasion, divers would inadvertently trigger the mechanism while donning gear or performing a movement underwater and, not realizing that the reserve had already been accessed, could find themselves out of air at depth with no warning whatsoever. These valves became known as "J-valves" from being item "J" in one of the first scuba equipment manufacturer catalogs. The standard non-reserve yoke valve at the time was item "K", and is often still referred to as a "K-valve". J-valves are still occasionally used by professional divers in zero visibility, where the submersible pressure gauge (SPG) can not be read. While the recreational diving industry has largely discontinued support and sales of the J-valve, the US Department of Defense, the US Navy, NOAA (the National Oceanographic and Atmospheric Administration) and OSHA (the national Occupational Health and Safety Administration) all still allow or recommend the use of J-valves as an alternative to a bailout cylinder or as an alternative to a submersible pressure gauge. They are generally not available through recreational dive shops, but are still available from some manufacturers. They can be significantly more expensive than K-valves from the same manufacturer. Less common in the 1950s to 1970s was an "R-valve" which was equipped with a restriction that caused breathing to become difficult as the cylinder neared exhaustion, but that would allow less restricted breathing if the diver began to ascend and the ambient water pressure lessened, providing a larger pressure differential over the orifice. It was never particularly popular because if it was necessary for the diver to descend during exit from a cave or wreck, breathing would become progressively more difficult as the diver went deeper, eventually becoming impossible until the diver could ascend to a low enough ambient pressure. The reserve valves manufactured by Dräger were similar in function to the spring loaded J-valve, but the reserve valve completely bypassed the main valve when opened. Poseidon at one stage marketed a manifold for twin cylinders which featured a pair of plain valves in the cylinders, with a reserve valve mounted on the central outlet block of the manifold. 
This mechanism retained reserve pressure in both cylinders, where the usual arrangement with manifolded cylinders was to have the reserve gas retained in only one cylinder, therefore necessitating the use of different springs to maintain a roughly constant proportion of the total gas supply. When filling the cylinder the J-valve will obstruct the inward flow of gas unless both the main and reserve valves are opened. Dual outlet valves Y and H cylinder valves have two outlets, each with its own valve, allowing two regulators to be connected to the cylinder. If one regulator "freeflows", which is a common failure mode, or ices up, which can happen in water below about 5 °C, its valve can be closed and the cylinder breathed from the regulator connected to the other valve. The difference between an H-valve and a Y-valve is that the Y-valve body splits into two posts roughly 90° to each other and 45° from the vertical axis, looking like a Y, while an H-valve is usually assembled from a valve designed as part of a manifold system with an additional valve post connected to the manifold socket, with the valve posts parallel and vertical, which looks a bit like an H. Y-valves are also known as "slingshot valves" due to their appearance. Another style of dual outlet valve has the openings at 90° to each other and to the cylinder centreline. These are used on rebreather cylinders so that a bailout regulator can be fitted as well as the rebreather supply regulator. Handed valves Some cylinder valve models have axial spindles - in line with the cylinder axis, and are not handed. Standard side-spindle valves have the valve knob on the diver's right side when back-mounted. Side-spindle valves used with manifolds must be a handed pair - one with the knob to the right and the other with the knob to the left, but in all cases the valve is opened by turning the knob anticlockwise, and closed by turning it clockwise. This is the convention with almost all valves for all purposes. Left and right hand side-spindle valves are used by sidemount divers. These may be blanked off manifold valves or specially made for the purpose. Modular valves Valves which can be assembled as single or dual outlet valves, or as the paired valves of a manifold system are known as modular valves. They are generally available as left and right hand valves, with a second unvalved outlet into which a blanking plug, a second valve, or the end of a plain or isolation manifold can be screwed. The secondary outlet for one side may have left hand thread, usually indicated by a groove around the hexagon of the nut, as manifolds usually have some centre distance adjustment by rotating the manifold on its axis, which will screw it into or out of both valves at the same time. This makes it necessary to have matching thread on the plugs or secondary valves. A more complex modular valve system was introduced by Poseidon, where a wide variety of configurations could be assembled out of a set of standardised parts. Bursting disk Some national standards require that the cylinder valve includes a bursting disk, a pressure relief device that will release the gas before the cylinder fails in the event of overpressurization. If a bursting disk ruptures during a dive the entire contents of the cylinder will be lost in a very short time. The risk of this happening to a correctly rated disc, in good condition, on a correctly filled cylinder is very low. Burst disk over-pressure protection is specified in the CGA Standard S1.1. 
Standard for Pressure Relief Devices. Bursting disc rupture pressure is generally rated at 85% to 100% of test pressure. Accessories Additional components for convenience, protection or other functions, not directly required for the function as a valve. Manifolds A cylinder manifold is a tube which connects two cylinders together so that the contents of both can be supplied to one or more regulators. There are three commonly used configurations of manifold. The oldest type is a tube with a connector on each end which is attached to the cylinder valve outlet, and an outlet connection in the middle, to which the regulator is attached. A variation on this pattern includes a reserve valve at the outlet connector. The cylinders are isolated from the manifold when closed, and the manifold can be attached or disconnected while the cylinders are pressurised. More recently, manifolds have become available which connect the cylinders on the cylinder side of the valve, leaving the outlet connection of the cylinder valve available for connection of a regulator. This means that the connection cannot be made or broken while the cylinders are pressurised, as there is no valve to isolate the manifold from the interior of the cylinder. This apparent inconvenience allows a regulator to be connected to each cylinder, and isolated from the internal pressure independently, which allows a malfunctioning regulator on one cylinder to be isolated while still allowing the regulator on the other cylinder access to all the gas in both cylinders. These manifolds may be plain or may include an isolation valve in the manifold, which allows the contents of the cylinders to be isolated from each other. This allows the contents of one cylinder to be isolated and secured for the diver if a leak at the cylinder neck thread, manifold connection, or burst disk on the other cylinder causes its contents to be lost. A relatively uncommon manifold system is a connection which screws directly into the neck threads of both cylinders, and has a single valve to release gas to a connector for a regulator. These manifolds can include a reserve valve, either in the main valve or at one cylinder. This system is mainly of historical interest. Valve cage Also known as a manifold cage or regulator cage, this is a structure which can be clamped to the neck of the cylinder or manifolded cylinders to protect the valves and regulator first stages from impact and abrasion damage while in use and from rolling the valve closed by friction of the handwheel against an overhead. A valve cage is often made of stainless steel, and some designs can snag on obstructions and lines. Dust caps Plastic covers are held over the opening by friction, or screwed into a DIN valve socket to keep dust and spray from entering the opening. They are generally not 100% reliable, and it is considered prudent to open the valve slightly to blow out any contamination before making a connection to filler hose or regulator. Extension handle A valve knob extension (slob knob) is a fairly long flexible extension to a valve spindle allowing the diver to open and close the valve if it is in a position where the diver cannot normally reach it. Standards Standards relating to the specifications and manufacture of cylinder valves include ISO 10297 and CGA V-9 Standard for Gas Cylinder Valves, both of which specify design, testing and marking of cylinder valves to be fitted as a closure to refillable transportable gas cylinders. 
The 8th edition of CGA V-9 brings it into alignment with ISO 10297. References Underwater breathing apparatus components Pressure vessel components Valves
Scuba cylinder valve
[ "Physics", "Chemistry" ]
5,010
[ "Physical systems", "Valves", "Hydraulics", "Piping", "Pressure vessels", "Pressure vessel components" ]
61,099,017
https://en.wikipedia.org/wiki/Gestalt%20pattern%20matching
Gestalt pattern matching, also Ratcliff/Obershelp pattern recognition, is a string-matching algorithm for determining the similarity of two strings. It was developed in 1983 by John W. Ratcliff and John A. Obershelp and published in the Dr. Dobb's Journal in July 1988.
Algorithm The similarity of two strings $S_1$ and $S_2$ is determined by this formula: twice the number of matching characters $K_m$ divided by the total number of characters of both strings. The matching characters are defined as some longest common substring plus, recursively, the number of matching characters in the non-matching regions on both sides of the longest common substring: $D_{ro} = \frac{2 K_m}{|S_1| + |S_2|}$, where the similarity metric can take a value between zero and one: $0 \le D_{ro} \le 1$. The value of 1 stands for the complete match of the two strings, whereas the value of 0 means there is no match and not even one common letter.
Sample For the strings WIKIMEDIA and WIKIMANIA, the longest common substring is WIKIM with 5 characters. There is no further substring on the left. The non-matching substrings on the right side are EDIA and ANIA. They again have a longest common substring IA with length 2. The similarity metric is determined by: $D_{ro} = \frac{2 K_m}{|S_1| + |S_2|} = \frac{2 (5 + 2)}{9 + 9} = \frac{14}{18} \approx 0.78$
Properties The Ratcliff/Obershelp matching characters can be substantially different from each longest common subsequence of the given strings. For example and have as their only longest common substring, and no common characters right of its occurrence, and likewise left, leading to . However, the longest common subsequence of and is , with a total length of .
Complexity The execution time of the algorithm is $O(n^3)$ in a worst case and $O(n^2)$ in an average case. By changing the computing method, the execution time can be improved significantly.
Commutative property The Python library implementation of the gestalt pattern matching algorithm is not commutative: $D_{ro}(S_1, S_2) \neq D_{ro}(S_2, S_1)$.
Sample For the two strings S1 = GESTALT PATTERN MATCHING and S2 = GESTALT PRACTICE, the metric result for $D_{ro}(S_1, S_2)$ is $2 \cdot 12 / 40 = 0.60$ with the substrings GESTALT P, A, T, E, and for $D_{ro}(S_2, S_1)$ the metric is $2 \cdot 13 / 40 = 0.65$ with the substrings GESTALT P, R, A, C, I.
Applications The Python difflib library, which was introduced in version 2.1, implements a similar algorithm that predates the Ratcliff-Obershelp algorithm. Due to the unfavourable runtime behaviour of this similarity metric, three methods have been implemented. Two of them return an upper bound in a faster execution time. The fastest variant only compares the lengths of the two strings: $D_{rqr} = \frac{2 \min(|S_1|, |S_2|)}{|S_1| + |S_2|}$. The second upper bound calculates twice the number of characters of $S_2$ which also occur in $S_1$ (counted as a multiset) divided by the length of both strings, but the sequence of the characters is ignored.
# Dqr implementation in Python
import collections

def quick_ratio(s1: str, s2: str) -> float:
    """Return an upper bound on ratio() relatively quickly."""
    length = len(s1) + len(s2)
    if not length:
        return 1.0
    intersect = collections.Counter(s1) & collections.Counter(s2)
    matches = sum(intersect.values())
    return 2.0 * matches / length

Trivially the following applies: $0 \le D_{ro} \le D_{qr}$ and $D_{qr} \le D_{rqr} \le 1$.
References Further reading See also Pattern matching Search algorithms Information theory Quantitative linguistics Recursion String metrics Articles with example Python (programming language) code
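As a brief, hedged illustration of the metric described above, the following sketch uses Python's standard difflib module, whose SequenceMatcher class implements the closely related algorithm mentioned under Applications. The helper function and the printed values simply restate the worked samples in this article; treat them as illustrative rather than authoritative, since difflib's heuristics can differ from the original Ratcliff/Obershelp formulation in edge cases.
# Illustrative use of difflib for the examples discussed above.
from difflib import SequenceMatcher

def ratcliff_obershelp(s1: str, s2: str) -> float:
    """Approximate D_ro = 2*Km / (|s1| + |s2|) via difflib's matching blocks."""
    matcher = SequenceMatcher(None, s1, s2)
    km = sum(block.size for block in matcher.get_matching_blocks())
    return 2.0 * km / (len(s1) + len(s2))

# Worked sample: WIKIM (5 characters) plus IA (2 characters) match,
# so D_ro = 2*(5+2) / (9+9) = 14/18, roughly 0.78.
print(ratcliff_obershelp("WIKIMEDIA", "WIKIMANIA"))

# Non-commutativity: swapping the arguments can change which matching
# blocks are found, and therefore the resulting ratio.
a, b = "GESTALT PATTERN MATCHING", "GESTALT PRACTICE"
print(SequenceMatcher(None, a, b).ratio(), SequenceMatcher(None, b, a).ratio())

# quick_ratio() and real_quick_ratio() are the cheaper upper bounds
# mentioned above: real_quick_ratio() >= quick_ratio() >= ratio().
m = SequenceMatcher(None, a, b)
print(m.real_quick_ratio() >= m.quick_ratio() >= m.ratio())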
Gestalt pattern matching
[ "Mathematics", "Technology", "Engineering" ]
686
[ "Telecommunications engineering", "Recursion", "Applied mathematics", "Mathematical logic", "Computational linguistics", "Computer science", "Information theory", "Natural language and computing" ]
61,100,576
https://en.wikipedia.org/wiki/Bess%20Ward
Bess Ward is an American oceanographer, biogeochemist, microbiologist, and William J. Sinclair Professor of Geosciences at Princeton University. Ward's research includes marine and global nitrogen cycles, and how marine organisms such as phytoplankton and bacteria influence the nitrogen cycle. Ward was the first woman awarded the G. Evelyn Hutchinson Award from the Association for the Sciences of Limnology and Oceanography (ASLO) for her pioneering work on applying molecular methods to nitrogen and methane conversions as well as scaling up organismal biogeochemical rates to whole ecosystem rates.
Education and early career Ward received her Bachelor of Science degree in zoology from Michigan State University in 1976. Ward went on to obtain a Master's degree in biological oceanography from the University of Washington in 1979, followed by her PhD at the same institution in 1982. Ward's early work focused on quantifying the rates of nitrogen transformation performed by bacteria and phytoplankton, and she was the editor of a special edition of Marine Chemistry on "Aquatic Nitrogen Cycles" in 1985. After her PhD, Ward worked as a research biologist and oceanographer at the Scripps Institution of Oceanography in San Diego, California, where she also served as the chairperson of the Food Chain Research Group.
Career Ward became a professor of Marine Sciences at the University of California, Santa Cruz in 1989. From 1995 to 1998, Ward was the Chair of the Ocean Sciences Department at the University of California, Santa Cruz before becoming a professor in the Department of Geosciences at Princeton University in 1998. In 2006, Ward became the Chair of the Department of Geosciences at Princeton and has held the position ever since. Ward has held numerous visiting scientist and trustee positions throughout her career at institutions such as the Bermuda Institute of Ocean Sciences, Plymouth Marine Laboratory, and the Max Planck Institute for Limnology. As of 2018, Ward had advised 21 graduate students and 20 postdoctoral scholars. Broadly, Ward and her lab members research how bacteria and phytoplankton transform and use nitrogen in marine and coastal ecosystems using various molecular and isotopic techniques. Ward spends time on research cruises and expeditions, conducting research (and sometimes teaching remotely) while on the ocean for days to weeks at a time.
Nitrogen cycling Areas in the ocean that are low in oxygen, called oxygen deficient zones (ODZs), are important areas for nitrogen cycling yet only make up about 0.1-0.2% of the total volume of the world ocean. Over one quarter of all nitrogen in the oceans is lost to gaseous nitrogen forms (e.g. N2, N2O) in the ODZs through various nitrogen transformation pathways including denitrification and anammox; however, the rates and types of nitrogen transformation taking place in ODZs remain unclear and are the subject of much of Ward's research. Ward and her lab developed an isotopic tracer method to measure the rate of N2O reduction in the Eastern Tropical North Pacific Ocean and found that incomplete denitrification in ODZs increases N2O accumulation and eventual efflux to the atmosphere. N2O is a potent greenhouse gas, and Ward's research shows that the expanding ODZs in the global ocean may increase the amount of N2O entering the atmosphere.
Professional service Ward has served on review panels of university graduate programs, institutional oceanography programs, and National Science Foundation funding programs.
Awards Marie Tharp Award Lecture, Helmholz Center for Ocean Research, Kiel, Germany, (2016) Charnock Lecturer, Southampton Oceanography Center, UK, (2015) Woods Hole Oceanographic Institution Chemical Oceanography H. Burr Steinbach Scholar of (2015) Rachel Carson Award Lecture, American Geophysical Union (2014) Samuel A. Waxman Honorary Lectureship, Theobald Smith Society, (2014) Procter & Gamble Award, American Society for Microbiology, (2012) Fellow of the American Academy of Arts and Sciences, (2004) Fellow of the American Geophysical Union, (2002) Fellow of the American Academy of Microbiology, (1999) Who's Who in American University Teachers, (1997) G. Evelyn Hutchinson Medal, American Society of Limnology and Oceanography, (1997) Distinguished Visiting Biologist, Woods Hole Oceanographic Institution, March (1996) Selected publications Community composition of nitrous oxide related genes and their relationship to nitrogen cycling rates in salt marsh sediments, Frontiers in Microbiology, 9: 170 (2018) Denitrification as the dominant nitrogen loss process in the Arabian Sea., Nature, 461: 78-82 (2009) Methane oxidation and methane fluxes in the ocean surface layer and in deep anoxic waters. Nature, 327: 226-229 (1987) References Living people Year of birth missing (living people) American women biochemists Nitrogen cycle Princeton University faculty University of Washington College of the Environment alumni Michigan State University alumni Biogeochemists American women oceanographers Fellows of the American Academy of Microbiology American women academics 21st-century American women American oceanographers
Bess Ward
[ "Chemistry" ]
1,044
[ "Geochemists", "Nitrogen cycle", "Biogeochemistry", "Biogeochemists", "Metabolism" ]
61,101,068
https://en.wikipedia.org/wiki/Ulotaront
Ulotaront (developmental codes SEP-363856 and SEP-856) is an investigational antipsychotic that is undergoing clinical trials for the treatment of schizophrenia and Parkinson's disease psychosis. The medication was discovered in collaboration between PsychoGenics Inc. and Sunovion Pharmaceuticals (which was subsequently merged into Sumitomo Pharma) using PsychoGenics' behavior and AI-based phenotypic drug discovery platform, SmartCube. Ulotaront is in a phase III clinical trial for schizophrenia, in phase II/III trials for generalised anxiety disorder and major depressive disorder, and has been discontinued for narcolepsy and psychotic disorders. Research has shown that ulotaront results in a greater reduction from baseline in the PANSS total score than placebo. Treatment with ulotaront, as compared with placebo, was also associated with an improvement in sleep quality. Ulotaront was awarded a Breakthrough Therapy designation due to its increased efficacy and greatly reduced side effects compared to current treatments.
Adverse effects The adverse effect profile of ulotaront differs from that of other antipsychotics because its mechanism of action does not involve antagonism of dopamine receptors in the brain, which is responsible for the drug-induced movement disorders (like akathisia) that may occur with those agents. Some adverse events reported in preliminary clinical trials are somnolence, agitation, nausea, diarrhea, and dyspepsia.
Pharmacology Pharmacodynamics The mechanism of action of ulotaront in the treatment of schizophrenia is unclear. However, it is thought to be an agonist at the trace amine-associated receptor 1 (TAAR1) and serotonin 5-HT1A receptors. This mechanism of action is unique among available antipsychotics, which generally antagonize dopamine receptors (especially the dopamine D2 receptor). Ulotaront is a full agonist of the human TAAR1, with an EC50 of 140 nM and an Emax of 101.3%. It is also a partial agonist of the serotonin 5-HT1A receptor (EC50 = 2,300 nM; Emax = 74.7%) and of the serotonin 5-HT1D receptor (EC50 = 262 nM; Emax = 57.1%). Conversely, its activities at various other targets, such as various other serotonin receptors as well as adrenergic and dopamine receptors, are much less potent. Ulotaront decreases basal locomotor activity in rodents, and this effect was absent in TAAR1 knockout mice. It prevented the hyperlocomotion induced by the NMDA receptor antagonist phencyclidine (PCP). Conversely, ulotaront did not affect dextroamphetamine-induced hyperlocomotion. Similarly, it did not reverse apomorphine-induced climbing behavior.
Pharmacokinetics The precise pharmacokinetic profile of ulotaront has not been reported, though the developer has suggested that the pharmacokinetic data supports once-daily dosing.
Research As of 2018, Sunovion, the maker of another antipsychotic called lurasidone (Latuda), is conducting clinical trials on ulotaront in partnership with the preclinical research company PsychoGenics. The U.S. Food and Drug Administration has granted ulotaront the breakthrough therapy designation. In addition to schizophrenia, ulotaront is also being studied for the treatment of psychosis associated with Parkinson's disease. The Brief Negative Symptom Scale (BNSS) has been used to assess the effect of ulotaront on the negative symptoms of schizophrenia. In July 2023, the pharmaceutical company behind the drug announced that the drug had failed to outperform placebo in the treatment of acutely psychotic patients with schizophrenia, as measured by the PANSS.
See also List of investigational antipsychotics § Monoamine receptor modulators Ralmitaront References 5-HT1A agonists 5-HT1D agonists Amines Antipsychotics Experimental drugs developed for schizophrenia TAAR1 agonists Thiophenes
Ulotaront
[ "Chemistry" ]
879
[ "Amines", "Bases (chemistry)", "Functional groups" ]
61,103,511
https://en.wikipedia.org/wiki/Nature%20Reviews%20Materials
Nature Reviews Materials is a monthly peer-reviewed scientific journal published by Nature Portfolio. It was established in 2016. The journal covers all topics within materials science. It presents reviews and perspectives, which are commissioned by the editorial team. The editor-in-chief is Giulia Pacchioni. According to the Journal Citation Reports, the journal has a 2021 impact factor of 76.679, ranking it 1st out of 345 journals in the category "Materials Science, Multidisciplinary" and 1st out of 109 journals in the category "Nanoscience & Nanotechnology". References External links Nature Research academic journals Materials science journals Monthly journals English-language journals Academic journals established in 2016 Review journals
Nature Reviews Materials
[ "Materials_science", "Engineering" ]
142
[ "Nanotechnology journals", "Materials science journals", "Materials science" ]
71,937,717
https://en.wikipedia.org/wiki/Synchrotron%20radiation%20circular%20dichroism%20spectroscopy
Synchrotron radiation circular dichroism spectroscopy, commonly referred to as SRCD and also known as VUV-circular dichroism or VUVCD spectroscopy, is a powerful extension to the technique of circular dichroism (CD) spectroscopy, often used to study structural properties of biological molecules such as proteins and nucleic acids. The physical principles of SRCD are essentially identical to those of CD, in that the technique measures the difference in absorption (ΔA) of left (AL) and right (AR) circularly polarized light (ΔA = AL - AR) by a sample in solution. To obtain a CD(SRCD) spectrum the sample must be innately optically active (chiral), or in some way be induced to have chiral properties, as only then will there be an observable difference in absorption of the left and right circularly polarized light. The major advantages of SRCD over CD arise from the ability to measure data over an extended wavelength range into the vacuum ultraviolet (VUV) end of the spectrum. Because these measurements use a light source with a higher photon flux (the quantity of light striking a given surface area) than a bench-top CD machine, the data are more accurate at these extended wavelengths: there is a larger signal over the background noise (a better signal-to-noise ratio), generally less sample is needed when recording the spectra, and there is more information content available in the data. Many beamlines now exist around the world to enable the measurement of SRCD data.
Origins Extending the wavelength range for CD experiments had been both considered and instigated as far back as 1970. Three research groups had created their own "in-house" CD machines, with specialist lamps as their light source, to enable measurements in this range. Synchrotron radiation (SR) had been proposed for use as the light source at a meeting in Brookhaven National Laboratory on Long Island in 1972; however, it took a few years more before this came to fruition. Two research papers in 1980 reported the collection of CD data using SR as the light source for the experiments. Specifically, spectra were obtained in wavelength regions into the VUV range, from ~100 nanometers (nm) to ~200 nm, largely unavailable to laboratory-based bench-top spectrophotometers. Sutherland et al. focussed on the development of a versatile spectrophotometer capable of measuring CD, amongst other properties, in the VUV region of the spectrum, while Snyder and Rowe collected CD data from a small organic compound in the wavelength range 130.5 nm to 205 nm.
Simplified overview of an SRCD beamline setup As shown in the diagram, a number of baffles are used throughout to remove possible stray light being reflected off the sides of the beamline tube. The use of only one mirror minimizes the loss of photon flux, which is most important in the VUV region where reflectivity is poor relative to the visible wavelength range. The first constructed SRCD beamlines initially tried to utilize the intrinsic properties of the SR radiation produced, whereby there exists a "central" linearly-polarized component with, above and below this, equally opposing regions of circularly-polarized components. The premise for this was that the overall signal produced from a chiral sample would be enhanced by the absorption difference (the signal) derived from these circularly polarized features of the beam.
In an ideal situation this approach would work; however, this setup was modified such that all beamlines now include a linear polarizer (as shown) to remove these circularly polarized components. This was because even the minutest of movements in beam position (beam drift) led to unequal matching of the contributions of the circularly polarized components striking the sample and this, in turn, meant the SRCD signal produced was inaccurate and unreliable; often being irreproducible as a result. Whereas cCD machines are purged throughout with nitrogen to minimize the absorption by oxygen of the light from the source xenon arc lamp, in an SRCD arrangement the beam passes through a calcium fluoride (CaF2), or similar "VUV-wavelengths transparent", window where everything before this point is in vacuum, and everything beyond is in nitrogen. The beam interacts with a photoelastic modulator (PEM) which consequently produces an alternating right- then left-circularly polarized beam and these now interact with the sample. The resultant absorption difference by the sample is measured and amplified by a photomultiplier tube (PMT) and from this the SRCD spectrum is recorded. The wavelength range that is utilized for SRCD studies is typically in the UV to VUV region and can go to below this; potentially from ~100 nm, up to the visible region, ~400 nm. The exact range over which data can be collected relies on the beamline set up, the sample preparation and the wavelength range of the PMT detector used. One of the primary factors limiting the lower wavelength cut off is the sample usually being in solution as a large water absorption band exists centred ~167 nm. This high absorption background swamps any possibility of measuring the very small CD difference signal, although use of deuterated water (D2O) as the solvent reduces the solvent absorption increasing the lower wavelength data collection range by ~10 nm. Removing the solvating water completely, creating a film as a result, means that data can be recorded to significantly lower wavelengths, down to around ~130 nm. Advantages over conventional CD machines The main advantages for SRCD over lab-based cCD machines arise from the use of the synchrotron light emission as the source. A number of biologically interesting absorption bands are found in the region between ~170 nm and ~350 nm. For proteins these come from their secondary and tertiary structures, while structural bands for nucleic acids, (DNA and RNA), and saccharides are also located in this region. However, for cCD machines the photon flux from the source reduces by around two orders of magnitude in the wavelength range from 250 nm down to 180 nm, exactly in the region of most significance for these biological molecules. By contrast, typically, the photon flux for an SRCD beamline in this region is at least three orders of magnitude higher than a cCD machine, retaining that level down to ~150 nm. The increased flux means the measured signals from the sample will be increased relative to the background noise, so there is a significant improvement in the signal-to-noise ratio of the sample. This will improve the accuracy of the data recorded meaning interpretation can be undertaken with more confidence in the results. 
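In practice, the measured absorption difference is usually converted into concentration- and pathlength-independent units (delta-epsilon or mean residue ellipticity) before any secondary-structure analysis of the kind discussed in this article. The following short Python sketch shows this standard conversion; the sample concentration, mean residue weight and pathlength are assumed example values, not figures taken from this article.
# Illustrative conversion of a CD/SRCD absorption difference (delta-A = A_L - A_R)
# into per-residue units; all sample numbers are assumptions for the example.
def delta_epsilon(delta_a: float, molar_conc: float, pathlength_cm: float) -> float:
    """Delta-epsilon in M^-1 cm^-1, from the Beer-Lambert relation."""
    return delta_a / (molar_conc * pathlength_cm)

def mean_residue_ellipticity(delta_a: float, mg_per_ml: float,
                             mean_residue_weight: float,
                             pathlength_cm: float) -> float:
    """Mean residue ellipticity in deg cm^2 dmol^-1 (the usual protein CD unit)."""
    residue_molar = mg_per_ml / mean_residue_weight   # mol of residues per litre
    return 3298.2 * delta_a / (residue_molar * pathlength_cm)

# Example: 0.5 mg/ml protein, mean residue weight ~110, a 0.01 cm (100 micron)
# pathlength cell, and a measured delta-A of 2e-4 at a given wavelength.
print(mean_residue_ellipticity(2e-4, 0.5, 110.0, 0.01))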
A further advantage of the increased flux is that the concentration of the sample can be reduced while still retaining a significant increase in signal strength, so samples that are difficult to produce in quantity have more chance of producing usable CD data from SRCD than from a cCD machine. Extending the lower wavelength range provides more spectral data for analysis, and hence more information content, so that more parameters (here, secondary structure features of the protein) can be accurately determined.
Technique growth and development While the first reports of its use dated to 1980, it was a further two decades before the technique of SRCD took off, largely due to the work of Bonnie Wallace at Birkbeck College, University of London. From around 2000, her aims in the field focused both on enhancing the collection of quality data through technical improvements, and on demonstrating "proof-of-principle" application studies, illustrating the novel information that SRCD offers. The CD12 beamline, constructed on the Synchrotron Radiation Source (SRS) at Daresbury Laboratory and opened in 2005 under the auspices of the Centre for Protein and Membrane Structure and Dynamics (CPMSD), of which Wallace was the Director, represented the first of the new, dedicated, second-generation SRCD beamlines. It was quickly identified that the high photon flux from CD12 was causing denaturation of the protein sample, but that this was resolvable by reducing the sample area being irradiated. Later studies have identified the flux threshold limits that induce SRCD protein denaturation. The input from the Wallace lab to the early years of SRCD development also included the introduction of calibration and standardization of SRCD and cCD spectrophotometers, the creation of software to process the spectral data (CDtool and CDtoolX) and to analyse the data (DichroWeb), and the generation of reference data sets of proteins to support these data analyses. Additionally, her lab produced sample cells with reduced pathlengths, made from a material (CaF2) transparent to VUV radiation, which significantly enhanced the collection of data in the lower SRCD wavelength regions.
New SRCD beamlines were constructed on various synchrotrons around the world. The ASTRID ring, in the Department of Physics and Astronomy of Aarhus University in Denmark, became a dedicated second-generation synchrotron in 2005. Ultimately this ring had two SRCD beamlines, UV1 and CD1, which migrated to the new third-generation ring, ASTRID2, in 2013/14, as AU-UV and AU-CD. The SOLEIL synchrotron, near Paris, France, commissioned a dedicated SRCD beamline, DISCO, around 2005. At Hiroshima Synchrotron Radiation Center, also known as HiSOR, a VUVCD beamline was constructed over the same period, while a little later, in 2009, an SRCD beamline was commissioned in Beijing, China. This particular beamline is unique in that the synchrotron which acts as its light source is also the electron-carrying ring of the Beijing Electron Positron Collider. The SRS closed in 2008, being superseded in the UK by the Diamond Light Source, on which an SRCD beamline opened for use in 2010. With the SRS closure the CD12 SRCD beamline was moved to, and installed on, the ANKA Synchrotron Radiation Facility (now called KARA), part of Karlsruhe Institute of Technology (KIT), in Karlsruhe, Germany. This beamline opened for users in 2011 but was closed in 2021.
Currently under construction (as of June 2023) on the Sirius synchrotron light source in Campinas, Brazil, is a new SRCD beamline, CEDRO. Examples of applications Highlighting a few of the published works that have employed SRCD in their research studies best illustrates the power of this technique. Improved conformational analysis due to increased signal-to-noise ratio Cataracts are the primary cause of blindness in humans and mutations in one particular protein, γD-crystallin, have been linked to a number of congenital forms of this disease. An amino acid mutation, proline (P) to threonine (T) at position 23 of the polypeptide chain has been linked to at least four different forms of this ailment. SRCD investigations were conducted on the wild-type protein and two variants, the P23T mutant found in the disease, and a related modification, P23S (proline to serine, a chemically similar amino acid to threonine), to establish the nature of the cause of cataract formation. Two possible reasons were suggested as the causative factor; the reduced solubility of the mutant protein, or an instability in the structure of the protein being introduced by the mutation. Significantly, because the mutant had limited solubility, lab-based CD machines were only able to provide very noisy spectra and the data were uninterpretable as a result. However, the SRCD spectra produced had very low noise associated with their data, including the mutant, and showed clearly that the structures of the wild-type, the mutant, and the related protein all had very similar conformations. These data also established that the mutant retained stability to thermal denaturation, very similar to that of the wild-type protein. The data confirmed that the causative factor for the cataracts was the reduction in solubility associated with the P23T mutation and not changes in the stability of the protein. Because of a high degree of flexibility, it had proven difficult to determine the structure of the extramembranous C-terminal domain of bacterial voltage-gated sodium channels. Using a series of synthesised channels where this C-terminal domain had been truncated, in some cases by a single amino acid difference between the constructs, the Wallace lab used SRCD to successfully identify the structure of this region. Intrinsically disordered proteins (IDPs) and intrinsically disordered regions (IDRs) Intrinsically disordered proteins (IDPs) have very limited innate structure in solution but gain shape specifically when interacting with partner molecules such as proteins or RNA; however, their resultant structure is often dictated by this interaction. In addition, some proteins have sections of sequence without structure, termed intrinsically disordered regions (IDRs), that also gain structure on interaction. Having different shapes with different partners means they are functionally, as well as structurally flexible, making them centrally important to signalling pathways and as regulation/control factors for example. IDPs (and IDRs if capable of being isolated from the rest of the protein) have a distinct SRCD spectral appearance in solution which means that changes in their spectra that arise through interactions offer an ideal opportunity to gain insight into what is happening both structurally and functionally. 
In addition, SRCD studies have demonstrated that when the solvating water is removed from these proteins, generating a film, there is a gain in structure and more CD transition bands can be measured into the lower VUV wavelength region because the water absorption band is not present Myelin is the insulating sheath that is formed in the central (CNS) and peripheral nervous systems (PNS) to surround nerve cell axons thereby increasing and maintaining the electrical impulse, the action potential, sent along them. Formed mostly of lipids, there are specific proteins within the myelin components whose roles are to structure the myelin into linked layers. Two of these proteins are myelin basic protein (MBP), an IDP primarily in the CNS, and myelin protein zero (P0) which contains an IDR section (P0ct) and is key within the PNS. MBP and P0ct were employed in a study which used SRCD data as a key factor to establish if there was any significance to the predictions of their IDP and IDR protein structures generated by Alphafold2, an artificial intelligence program developed by DeepMind. PDB2CD, a package that generates SRCD spectra from protein atomic coordinates, was used to calculate spectra from the Alphafold2 structures, and these spectra were then compared against SRCD experimental spectra collected from the MBP and P0ct proteins in various ambient conditions; solution, detergent and lipid-bound states. The study reported that from the SRCD comparisons, the structures predicted by Alphafold2 for MBP and P0ct bore a strong resemblance to those when they were bound to the lipid membrane. Sugar modification of protein SRCD signals One major feature found in protein structures is the addition of sugars (glycosylation) to specific amino acid residues by post translational modification. Complex sugar structures can be connected to these sites, and this can substantially modify the properties of these proteins, a main reason for their presence. Attached sugars can assist in folding some proteins to their correct shape; so, affecting a proteins’ structure is a possible outcome. SRCD is ideally well suited to determining any conformational differences that might arise from different ambient environments directly because of the extended wavelength range into the VUV region which provides greater information content. However, attached sugars can contribute to the SRCD signal because their transitions are located more towards the VUV end of the spectrum. This means that their presence can cause a problem in obtaining an accurate measure of the secondary structure content of the protein as a result. Matsuo. and Gekko produced the landmark study of VUVCD spectra of selected saccharides, thereby demonstrating that glycoproteins would have a contribution to their spectra from their sugar content. From this and further studies they demonstrated that the SRCD spectral characteristics that arose from sugars could be attributed to many factors within their conformations: the configuration of the hydroxyl group about the C1 atom of the saccharide (alpha or beta conformation, or almost axial or equatorial to the plane of the sugar ring respectively), the axial or equatorial positioning of the remaining hydroxyl groups, the trans or gauche nature of the C5 hydroxymethyl group, and the glycosidic linkage (either 1-4 or 1-6) between sugar monomers. 
Utilising this information, the Wallace group investigated the glycosylation of the voltage-gated sodium channel in experiments that relied on the fact that a CD(SRCD) spectrum of a mixture of components is the sum of all those components present. The aim was to establish if there were differences in the three-dimensional structure of the channel with and without sugars attached to the structure; did glycosylation play any significant role in the function of these channels when sugars were attached? Three experimental sets of SRCD spectra were collected; the non-glycosylated and glycosylated channel structures and a further one of the isolated sugar components that combined to form those attached to the channel. Taking away the spectrum of the non-glycosylated channel from that of the glycosylated they demonstrated that the resultant difference spectrum corresponded to that of the sugar components. This meant that there were no structural differences between the glycosylated and non-glycosylated channel structures, so sugar attachment played no key role in their function Conformational changes of globular proteins at the oil-water interface First studied in 2010 via this method, a recent investigation used SRCD to examine the differences in structure in solution and when at the oil-water interface, of peptides derived from seaweed, bacteria and potatoes as potential emulsifying agents. Of these studied, the peptide from bacteria proved to be the most effective at being both an emulsifying agent and stabilising antioxidant compound. Existing beamlines A number of SRCD beamlines exist, or are being constructed (), around the world as listed in the table. As of 2022 components from former SRCD beamline CD12 (on KARA) are now installed on the DISCO beamline This facility also runs as part of the Beijing Electron Positron Collider (BEPC) Two modules (A and B) exist on this beamline This beamline is under construction and received its "first light" as of June 2023 References Spectroscopy
Synchrotron radiation circular dichroism spectroscopy
[ "Physics", "Chemistry" ]
3,880
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
71,943,261
https://en.wikipedia.org/wiki/Noboru%20Tokita
Noboru Tokita (February 20, 1923 - October 31, 2014) was a Uniroyal and later Cabot scientist known for his work on the processing of elastomers.
Personal Tokita was born in Sapporo, Japan in 1923. He met his wife Noriko while on an exchange program at Duke University. They married and decided to stay in the United States. He was a close colleague of 2009 Charles Goodyear Medal winner James White, introducing White to his future wife Yoko Masaki.
Education Tokita completed his BS degree at the University of Tokyo in 1948, and his Ph.D. in physics and chemistry in 1957 at Hokkaido University.
Career He began his professional career in 1954 as a professor of Applied Physics at Waseda University in Tokyo. He held this position until 1960, when he came to the United States on an exchange program with Duke University. At Duke, he taught polymer rheology. In the early 1960s Tokita joined the U.S. Rubber Company in New Jersey, later Uniroyal, working there for 30 years on elastomer processing. He later joined Uniroyal Goodrich Tire Company in Akron in a research role. He joined Cabot Corporation in Billerica in 1990. During his career he produced nine U.S. patents. His most cited scientific article addressed the subject of morphology formation in elastomer blends.
Awards 1994 - Melvin Mooney Distinguished Technology Award from the Rubber Division of the ACS
References 1923 births 2014 deaths Polymer scientists and engineers Scientists from Sapporo University of Tokyo alumni Hokkaido University alumni Academic staff of Waseda University Duke University faculty Japanese emigrants to the United States Physical chemists
Noboru Tokita
[ "Chemistry", "Materials_science" ]
343
[ "Polymer scientists and engineers", "Physical chemists", "Polymer chemistry" ]
71,946,723
https://en.wikipedia.org/wiki/Metal%E2%80%93organic%20biohybrid
Metal–organic biohybrids (MOBs) are a family of materials containing a metal component, such as copper, and a biological component, such as the amino acid dimer cystine. One of the MOB families first described was the copper high-aspect ratio structure called CuHARS. Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) of CuHARS revealed linear morphology and smooth surface texture. SEM, TEM and light microscopy showed that CuHARS composites had scalable dimensions from nano- to micro-, with diameters as low as 40 nm, lengths exceeding 150 microns, and average aspect ratios of 100.
Structure MOBs are composed of two major components: a metal ion or cluster of metal ions and a biological molecule. Examples are CuHARS, which contain copper as the metal ion and cystine as the biological molecule, and the use of silver as the metal ion in combination with cystine. Cystine is the dimer form of the amino acid cysteine. Cobalt has also been used in combination with cystine to form CoMOBs. When combined with copper to form CuHARS, the cystine may provide a linker function leading to a linear, high-aspect ratio structure that gives CuHARS its name: copper high-aspect ratio structures. In contrast to CuHARS, MOBs formed with silver and cystine result in silver nanoparticles with a spherical, rounded structure. These have been named AgCysNPs. Figure 1 shows comparative electron microscopy of CuHARS and AgCysNPs.
Synthesis MOBs can be self-assembled at body temperature (37 degrees Celsius) under reducing conditions using sodium hydroxide (NaOH). In the case of CuHARS, MOBs can be produced by transforming copper nanoparticles to provide the copper source, or by using copper(II) sulfate.
Physical Characteristics CuHARS have been shown to completely degrade under physiological conditions (cell culture media at 37 °C), even in the absence of cells; this is possibly due to the metal chelating properties of typical cell culture media. These may include the copper-binding properties of ceruloplasmin and of albumin. Additionally, CuHARS have been shown by inverted microscopy to polarize light. Cobalt-containing MOBs (CoMOBs) have been shown to be susceptible to an externally applied magnetic field, as shown in Figure 2.
Uses and Applications MOBs have been incorporated into composites including cellulose. Additionally, MOBs composed of the copper-containing CuHARS have been shown to provide catalytic function to produce nitric oxide (NO).
Nitric Oxide production This production of NO was shown to impart anti-microbial activity, and the CuHARS in this case were incorporated into a biodegradable, biocompatible, and renewable resource material, namely cellulose. The release of NO catalyzed by copper from CuHARS may have beneficial biomedical applications.
Anti-Cancer Effects Both copper- and silver-containing MOBs were shown to have anti-cancer effects on cells in vitro. In the case of CuHARS, copper may have a potential role in tumor immunity and antitumor therapy. Since CuHARS are 100% biodegradable under physiological conditions, copper metabolism of CuHARS may have benefits as an approach for treating glioma.
MOBs as Green Materials using Self-Assembly Green nanomedicine has been suggested as a path to the next generation of materials for diagnosing brain tumors and for therapeutics, including the use of CuHARS.
Angiogenic Effects CuHARS embedded into nanofiber aerogels have been shown to have angiogenic effects.
Antibacterial Effects CuHARS embedded into nanofiber aerogels and via CuHARS-mediated nitric oxide generation have both been examples of antibacterial effects. References Biomaterials Metals Copper compounds Silver compounds Nanoparticles Cobalt compounds
Metal–organic biohybrid
[ "Physics", "Chemistry", "Biology" ]
832
[ "Biomaterials", "Metals", "Materials", "Matter", "Medical technology" ]
71,950,001
https://en.wikipedia.org/wiki/Chen%20Wen-chang
Chen Wen-chang (; born 1963) is a Taiwanese chemical engineer and academic administrator who is the current president of the National Taiwan University. Succeeding Kuan Chung-ming, he took his position as president of the university on 8 January 2023. Education Chen graduated from National Taiwan University (NTU) in 1985 with a bachelor's degree in chemical engineering and completed a Ph.D. in the same subject at the University of Rochester in 1993. Career Chen returned to teach in Taiwan as a professor at NTU and later became dean of the College of Engineering. In 2021, he was awarded a National Chair Professorship in engineering and applied science. Chen was one of nine candidates certified by NTU's Presidential Election Committee in July 2022 to contest the office. A subsequent vote reduced the number of candidates to six, and Chen won another round of voting in October 2022. References Taiwanese chemical engineers Taiwanese university and college faculty deans 20th-century Taiwanese engineers 1963 births Taiwanese expatriates in the United States Living people 21st-century Taiwanese engineers Academic staff of the National Taiwan University Presidents of National Taiwan University University of Rochester alumni Chemical engineering academics
Chen Wen-chang
[ "Chemistry" ]
232
[ "Chemical engineering academics", "Chemical engineers" ]
76,278,203
https://en.wikipedia.org/wiki/Yolanda%20Gonz%C3%A1lez%20%28activist%29
Yolanda González Martín (20 January 1961 – 1 February 1980) was a Spanish student and communist militant murdered by two members of New Force.
Biography Originally from Deusto, Bilbao, González moved to Madrid to study electronics. To fund her education, she worked as a cleaner. She became the student representative of the vocational training school where she studied. She was also a member of the Workers Socialist Party. In February 1980, she was kidnapped, tortured and murdered by members of New Force. Her body was found on a roadside near Madrid. The organisation Batallón Vasco Español claimed responsibility for her murder. On the same day the organisation also murdered Jesús María Zubikarai Badiola in Eibar. The perpetrators of González' murder were Emilio Hellín and Ignacio Abad Velázquez, who were arrested and sentenced to prison terms. Hellín was sentenced to 43 years in prison, of which he served only 14. In February 2013, El País reported that Hellín, under a false name, was working in communications technology for the Spanish security forces.
Legacy On multiple anniversary years of her murder, González' relatives and neighbours have led calls for further justice and reparation. In 2014 Isabel Rodríguez directed the documentary "Yolanda en el País de los estudiantes", which recounts the kidnapping and subsequent murder of González by the Batallón Vasco-Español. In 2018 Carlos Fonseca wrote No te olvides de mí: Yolanda González, el crimen más brutal de la Transición. This book brought together a range of sources to focus on her murder. Both a garden and a public square have been named in her honour. In 2015, the garden Jardines de Yolanda González Martín was named after her in Madrid. In 2018 the sign for the gardens was defaced by fascists. In 2016, a small square in Deustu was named in her honour.
References External links 1961 births 1980 deaths Basque women Electrical engineers Murdered students People from Bilbao Spanish communists
Yolanda González (activist)
[ "Engineering" ]
414
[ "Electrical engineering", "Electrical engineers" ]
76,295,113
https://en.wikipedia.org/wiki/Psychiatry%20Under%20the%20Influence
Psychiatry Under the Influence: Institutional Corruption, Social Injury, and Prescriptions for Reform is a 2015 book by Robert Whitaker and Lisa Cosgrove. The book discusses the use of psychiatric medication in the United States and is critical of the drug industry influence on the field of psychiatry. See also Bad Pharma Big Pharma (book) References 2015 non-fiction books Books about mental health History books about medicine Books about drugs Works about corruption Pharmaceutical industry
Psychiatry Under the Influence
[ "Chemistry", "Biology" ]
92
[ "Pharmaceutical industry", "Pharmacology", "Life sciences industry" ]
56,275,884
https://en.wikipedia.org/wiki/Data-driven%20control%20system
Data-driven control systems are a broad family of control systems, in which the identification of the process model and/or the design of the controller are based entirely on experimental data collected from the plant. In many control applications, trying to write a mathematical model of the plant is considered a hard task, requiring efforts and time to the process and control engineers. This problem is overcome by data-driven methods, which fit a system model to the experimental data collected, choosing it in a specific models class. The control engineer can then exploit this model to design a proper controller for the system. However, it is still difficult to find a simple yet reliable model for a physical system, that includes only those dynamics of the system that are of interest for the control specifications. The direct data-driven methods allow to tune a controller, belonging to a given class, without the need of an identified model of the system. In this way, one can also simply weight process dynamics of interest inside the control cost function, and exclude those dynamics that are out of interest. Overview The standard approach to control systems design is organized in two-steps: Model identification aims at estimating a nominal model of the system , where is the unit-delay operator (for discrete-time transfer functions representation) and is the vector of parameters of identified on a set of data. Then, validation consists in constructing the uncertainty set that contains the true system at a certain probability level. Controller design aims at finding a controller achieving closed-loop stability and meeting the required performance with . Typical objectives of system identification are to have as close as possible to , and to have as small as possible. However, from an identification for control perspective, what really matters is the performance achieved by the controller, not the intrinsic quality of the model. One way to deal with uncertainty is to design a controller that has an acceptable performance with all models in , including . This is the main idea behind robust control design procedure, that aims at building frequency domain uncertainty descriptions of the process. However, being based on worst-case assumptions rather than on the idea of averaging out the noise, this approach typically leads to conservative uncertainty sets. Rather, data-driven techniques deal with uncertainty by working on experimental data, and avoiding excessive conservativism. In the following, the main classifications of data-driven control systems are presented. Indirect and direct methods There are many methods available to control the systems. The fundamental distinction is between indirect and direct controller design methods. The former group of techniques is still retaining the standard two-step approach, i.e. first a model is identified, then a controller is tuned based on such model. The main issue in doing so is that the controller is computed from the estimated model (according to the certainty equivalence principle), but in practice . To overcome this problem, the idea behind the latter group of techniques is to map the experimental data directly onto the controller, without any model to be identified in between. Iterative and noniterative methods Another important distinction is between iterative and noniterative (or one-shot) methods. 
In the former group, repeated iterations are needed to estimate the controller parameters, during which the optimization problem is performed based on the results of the previous iteration, and the estimation is expected to become more and more accurate at each iteration. This approach is also prone to on-line implementations (see below). In the latter group, the (optimal) controller parametrization is provided with a single optimization problem. This is particularly important for those systems in which iterations or repetitions of data collection experiments are limited or even not allowed (for example, due to economic aspects). In such cases, one should select a design technique capable of delivering a controller on a single data set. This approach is often implemented off-line (see below). On-line and off-line methods Since, on practical industrial applications, open-loop or closed-loop data are often available continuously, on-line data-driven techniques use those data to improve the quality of the identified model and/or the performance of the controller each time new information is collected on the plant. Instead, off-line approaches work on batch of data, which may be collected only once, or multiple times at a regular (but rather long) interval of time. Iterative feedback tuning The iterative feedback tuning (IFT) method was introduced in 1994, starting from the observation that, in identification for control, each iteration is based on the (wrong) certainty equivalence principle. IFT is a model-free technique for the direct iterative optimization of the parameters of a fixed-order controller; such parameters can be successively updated using information coming from standard (closed-loop) system operation. Let be a desired output to the reference signal ; the error between the achieved and desired response is . The control design objective can be formulated as the minimization of the objective function: Given the objective function to minimize, the quasi-Newton method can be applied, i.e. a gradient-based minimization using a gradient search of the type: The value is the step size, is an appropriate positive definite matrix and is an approximation of the gradient; the true value of the gradient is given by the following: The value of is obtained through the following three-step methodology: Normal Experiment: Perform an experiment on the closed loop system with as controller and as reference; collect N measurements of the output , denoted as . Gradient Experiment: Perform an experiment on the closed loop system with as controller and 0 as reference ; inject the signal such that it is summed to the control variable output by , going as input into the plant. Collect the output, denoted as . Take the following as gradient approximation: . A crucial factor for the convergence speed of the algorithm is the choice of ; when is small, a good choice is the approximation given by the Gauss–Newton direction: Noniterative correlation-based tuning Noniterative correlation-based tuning (nCbT) is a noniterative method for data-driven tuning of a fixed-structure controller. It provides a one-shot method to directly synthesize a controller based on a single dataset. Suppose that denotes an unknown LTI stable SISO plant, a user-defined reference model and a user-defined weighting function. An LTI fixed-order controller is indicated as , where , and is a vector of LTI basis functions. Finally, is an ideal LTI controller of any structure, guaranteeing a closed-loop function when applied to . 
The goal is to minimize a model-reference objective function that measures the weighted ($W$) mismatch between the desired closed-loop transfer function $M$ and the closed loop achieved with $K(\rho)$. In practice a convex approximation of this objective is minimized, obtained from the model reference problem by supposing that $K(\rho)$ is close to the ideal controller $K^{*}$. When $G$ is stable and minimum-phase, the approximated model reference problem is equivalent to the minimization of the norm of the resulting error signal in an equivalent open-loop scheme.
The input signal is supposed to be persistently exciting and to be generated by a stable data-generation mechanism. Input and noise are thus uncorrelated in an open-loop experiment; hence, the ideal error is uncorrelated with the input. The control objective therefore consists in finding $\rho$ such that the error signal and the input are uncorrelated. The vector of instrumental variables is built from a sufficiently large number of time-shifted samples of the input, passed through an appropriate filter; the correlation function is the sample correlation between these instrumental variables and the error signal, and the optimization problem becomes the minimization of (the norm of) this correlation function. It can be demonstrated that, under some assumptions and with a filter chosen as a suitable function of the input spectrum, the minimizer of the correlation criterion asymptotically coincides with the minimizer of the approximated model reference objective.
Stability constraint
There is no guarantee that the controller minimizing the criterion is stabilizing. Instability may occur in the following cases:
If $G$ is non-minimum phase, the ideal controller $K^{*}$ may lead to cancellations in the right-half complex plane.
If $K^{*}$ (even if stabilizing) is not achievable within the chosen controller class, the resulting controller may not be stabilizing.
Due to measurement noise, even if the ideal solution is stabilizing, the data-estimated controller may not be so.
A sufficient stability condition can be formulated in terms of a closed-loop transfer function built from a known stabilizing controller and added to the model reference design as a constraint; a convex data-driven estimate of the quantities involved can be obtained through the discrete Fourier transform of the measured signals. For stable minimum-phase plants, this leads to a convex data-driven optimization problem with a stability constraint.
Virtual reference feedback tuning
Virtual Reference Feedback Tuning (VRFT) is a noniterative method for data-driven tuning of a fixed-structure controller. It provides a one-shot method to directly synthesize a controller based on a single dataset. VRFT was first proposed for LTI systems and later extended to LPV systems; it builds on earlier ideas for direct, model-free controller synthesis from data. The main idea is to define a desired closed-loop model $M$ and to use its inverse dynamics to obtain a virtual reference $\bar{r}(t)$ from the measured output signal $\bar{y}(t)$. The virtual signals are
$$\bar{r}(t)=M^{-1}\,\bar{y}(t) \qquad \text{and} \qquad \bar{e}(t)=\bar{r}(t)-\bar{y}(t).$$
The optimal controller parameters are obtained from noiseless data by solving the following optimization problem:
$$\hat{\rho}=\arg\min_{\rho} J^{N}_{VR}(\rho),$$
where the optimization function is given as follows:
$$J^{N}_{VR}(\rho)=\frac{1}{N}\sum_{t=1}^{N}\bigl(\bar{u}(t)-K(q;\rho)\,\bar{e}(t)\bigr)^{2},$$
with $\bar{u}(t)$ the input signal measured in the experiment.
See also
Automation
Artificial intelligence
References
Ali Khaki-Sedigh, An Introduction to Data-Driven Control Systems, Wiley-IEEE Press, November 2023, 384 pages, ISBN 978-1-394-19642-5.
External links
VRFT toolbox for MATLAB
Dynamical systems Control theory Control engineering Computational mathematics Robotics engineering
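As a complement to the VRFT description above, the following is a minimal numerical sketch of the procedure, not a definitive implementation. The first-order plant (used only to simulate the measured data), the reference model, and the PI controller class are all illustrative assumptions; the standard VRFT prefilter is omitted for brevity.

```python
import numpy as np

# --- One open-loop dataset (u, y). In practice these are measured on the plant;
# --- the "true" plant below is used only to generate example data.
rng = np.random.default_rng(0)
N = 500
u = rng.standard_normal(N)               # persistently exciting input (assumption)
y = np.zeros(N)
for t in range(1, N):                    # illustrative plant: y(t) = 0.9 y(t-1) + 0.1 u(t-1)
    y[t] = 0.9 * y[t - 1] + 0.1 * u[t - 1]

# --- Desired closed-loop model M: y_d(t) = 0.7 y_d(t-1) + 0.3 r(t-1)  (assumption)
a_m, b_m = 0.7, 0.3

# --- Virtual reference: r_bar such that M r_bar = y, i.e. r_bar(t-1) = (y(t) - a_m y(t-1)) / b_m
r_bar = np.zeros(N)
r_bar[:-1] = (y[1:] - a_m * y[:-1]) / b_m
e_bar = r_bar - y                        # virtual tracking error

# --- Controller class: discrete PI, u(t) = Kp e(t) + Ki * sum(e(0..t)).
# --- It is linear in the parameters, so the VRFT criterion is a least-squares problem.
phi = np.column_stack([e_bar, np.cumsum(e_bar)])       # regressors [e_bar, running sum of e_bar]
theta, *_ = np.linalg.lstsq(phi[:-1], u[:-1], rcond=None)  # drop last sample (virtual ref undefined)
Kp, Ki = theta
print(f"VRFT estimate: Kp = {Kp:.3f}, Ki = {Ki:.3f}")
```

Because the chosen controller class is linear in its parameters, the one-shot character of VRFT reduces here to a single least-squares fit on the recorded data.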
Data-driven control system
[ "Physics", "Mathematics", "Technology", "Engineering" ]
1,894
[ "Computer engineering", "Robotics engineering", "Applied mathematics", "Control theory", "Computational mathematics", "Control engineering", "Mechanics", "Dynamical systems" ]
56,277,564
https://en.wikipedia.org/wiki/Gradient%20echo
Gradient echo is a magnetic resonance imaging (MRI) sequence that has a wide variety of applications, from magnetic resonance angiography to perfusion MRI and diffusion MRI. Rapid image acquisition allows it to be applied to both 2D and 3D MRI imaging. Gradient echo uses magnetic gradients to generate a signal, instead of using a 180-degree radiofrequency pulse as spin echo does, thus leading to faster image acquisition times.
Mechanism
Unlike a spin-echo sequence, a gradient echo sequence does not use a 180-degree RF pulse to make the spins of particles coherent. Instead, the gradient echo uses magnetic gradients to manipulate the spins, allowing the spins to dephase and rephase when required. After an excitation pulse (usually less than 90 degrees), the spins dephase over a period of time (due to free induction decay) and are further dephased by applying a reversed magnetic gradient. No signal is produced while the spins are not coherent. When the spins are rephased via a magnetic gradient, they become coherent, and a signal (or "echo") is generated to form images. Unlike spin echo, gradient echo does not need to wait for the transverse magnetisation to decay completely before initiating another sequence, so it can use very short repetition times (TR) and therefore acquire images in a short time. After the echo is formed, some transverse magnetisation remains because of the short TR. Manipulating gradients during this time will produce images with different contrast. There are three main methods of manipulating contrast at this stage: steady-state free precession (SSFP), which does not spoil the remaining transverse magnetisation but attempts to recover it in subsequent RF pulses (thus producing T2-weighted images); the spoiler-gradient sequence, which averages out the transverse magnetisations over subsequent RF pulses by rotating residual transverse magnetisation into the longitudinal plane and longitudinal magnetisation into the transverse plane (thus producing mixed T1- and T2-weighted images); and RF spoiling, which varies the phase of the RF pulse to eliminate the transverse magnetisation, thus producing purely T1-weighted images. Gradient echo uses a flip angle smaller than 90 degrees, so the longitudinal magnetisation is not eliminated while flipping the spins. The larger the flip angle, the higher the T1 weighting of the tissue, because more longitudinal magnetisation must recover to produce a difference in signals between the tissues.
Steady-state free precession
Steady-state free precession imaging (SSFP), or balanced SSFP, is an MRI technique which uses short repetition times (TR) and low flip angles (about 10 degrees) to achieve a steady state of the longitudinal magnetization, as the magnetization neither decays completely nor achieves full T1 relaxation. While spoiled gradient-echo sequences refer to a steady state of the longitudinal magnetization only, SSFP gradient-echo sequences include transverse coherences (magnetizations) from overlapping multi-order spin echoes and stimulated echoes. This is usually accomplished by refocusing the phase-encoding gradient in each repetition interval in order to keep the phase integral (or gradient moment) constant. Fully balanced SSFP MRI sequences achieve a phase of zero by refocusing all imaging gradients. MP-RAGE (magnetization-prepared rapid acquisition with gradient echo) improves images of multiple sclerosis cortical lesions.
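To make the trade-off between flip angle, TR and T1 weighting described in the mechanism section concrete, the sketch below evaluates the widely used steady-state signal equation for an ideally spoiled gradient echo sequence and its signal-maximising (Ernst) flip angle. The tissue values and sequence timings are illustrative assumptions, not figures taken from this article.

```python
import numpy as np

def spoiled_gre_signal(flip_deg, TR, TE, T1, T2star, M0=1.0):
    """Steady-state signal of an ideally (RF-)spoiled gradient echo sequence."""
    alpha = np.deg2rad(flip_deg)
    E1 = np.exp(-TR / T1)
    return M0 * np.sin(alpha) * (1 - E1) / (1 - np.cos(alpha) * E1) * np.exp(-TE / T2star)

# Illustrative tissue/sequence values (assumptions): times in milliseconds.
T1, T2star, TR, TE = 900.0, 50.0, 10.0, 3.0
ernst = np.rad2deg(np.arccos(np.exp(-TR / T1)))   # flip angle giving maximum signal for this TR/T1
print(f"Ernst angle: {ernst:.1f} degrees")
for flip in (5, 10, 20, 40):
    s = spoiled_gre_signal(flip, TR, TE, T1, T2star)
    print(f"flip {flip:2d} deg -> relative signal {s:.3f}")
```

Running the sketch shows how, at a fixed short TR, increasing the flip angle beyond the Ernst angle reduces the available signal while increasing T1 weighting, which is the behaviour the text above describes qualitatively.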
Spoiling
At the end of the readout, the residual transverse magnetization can either be destroyed (through the application of suitable gradients and excitation with radiofrequency pulses of variable phase) or be maintained. In the first case the result is a spoiled sequence, such as the fast low-angle shot (FLASH) MRI sequence, while in the second case the result is a steady-state free precession imaging (SSFP) sequence.
In-phase and out-of-phase
In-phase (IP) and out-of-phase (OOP) sequences correspond to paired gradient echo sequences using the same repetition time (TR) but two different echo times (TE). This can detect even microscopic amounts of fat, which shows a drop in signal on OOP images compared to IP images. Among renal tumors that do not show macroscopic fat, such a signal drop is seen in 80% of the clear cell type of renal cell carcinoma as well as in minimal-fat angiomyolipoma.
Effective T2 (T2* or "T2-star")
T2*-weighted imaging can be created as a postexcitation refocused gradient echo sequence with a small flip angle. A GRE T2*-weighted sequence requires high uniformity of the magnetic field.
Commercial names of gradient echo sequences
VIBE (volumetric interpolated breath-hold examination) is an MRI sequence that produces T1-weighted gradient echo images in three dimensions (3D). Apart from a lower fluid signal intensity, the appearance of VIBE images is otherwise similar to that of a typical T1-weighted image. Since its acquisition takes only about 30 seconds, which is suitable for breath-holding, it is used in breast and abdominal imaging to obtain high-resolution images while minimising respiratory movement artifacts. VIBE images have low contrast in soft tissues and cartilage but high contrast between the bony cortex and bone marrow. Bony lesions such as callus and fibrous tissue can also be readily distinguished from the surrounding cortical bone because of the high contrast between the bone lesions and the bony cortex.
References
Magnetic resonance imaging Nuclear magnetic resonance Quantum mechanics
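As a rough illustration of how the paired echo times for the in-phase and out-of-phase imaging described above relate to field strength, the snippet below uses the commonly quoted fat–water chemical shift of about 3.5 ppm. The resulting echo times are approximations under that assumption; vendor protocols may use slightly different values.

```python
# Approximate in-phase / out-of-phase echo times for fat-water gradient echo imaging.
GYROMAGNETIC_RATIO_MHZ_PER_T = 42.58   # proton gyromagnetic ratio
FAT_WATER_SHIFT_PPM = 3.5              # commonly quoted chemical shift (assumption)

def phase_cycle_times_ms(field_strength_t):
    larmor_mhz = GYROMAGNETIC_RATIO_MHZ_PER_T * field_strength_t
    delta_f_hz = larmor_mhz * FAT_WATER_SHIFT_PPM   # MHz * ppm gives Hz numerically
    out_of_phase_ms = 1000.0 / (2.0 * delta_f_hz)   # first opposed-phase echo time
    in_phase_ms = 1000.0 / delta_f_hz               # first in-phase echo time
    return out_of_phase_ms, in_phase_ms

for b0 in (1.5, 3.0):
    oop, ip = phase_cycle_times_ms(b0)
    print(f"{b0} T: out-of-phase TE ~ {oop:.1f} ms, in-phase TE ~ {ip:.1f} ms")
```

At 1.5 T this gives roughly 2.2 ms and 4.5 ms for the first opposed-phase and in-phase echoes, which is why the two echo times of an IP/OOP pair are chosen so close together.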
Gradient echo
[ "Physics", "Chemistry" ]
1,108
[ "Nuclear magnetic resonance", "Magnetic resonance imaging", "Theoretical physics", "Quantum mechanics", "Nuclear physics" ]
56,278,630
https://en.wikipedia.org/wiki/Electrofusion%20welding
Electrofusion welding is a form of resistive implant welding used to join pipes. A fitting with implanted metal coils is placed around two ends of pipes to be joined, and current is passed through the coils. Resistive heating of the coils melts small amounts of the pipe and fitting, and upon solidification, a joint is formed. It is most commonly used to join polyethylene (PE) and polypropylene (PP) pipes. Electrofusion welding is the most common welding technique for joining PE pipes. Because of the consistency of the electrofusion welding process in creating strong joints, it is commonly employed for the construction and repair of gas-carrying pipelines. The development of the joint strength is affected by several process parameters, and a consistent joining procedure is necessary for the creation of strong joints.
Advantages and disadvantages
Advantages of electrofusion welding:
Simple process capable of producing consistent joints
Process is entirely contained, reducing the risk of joint contamination
Process allows repair without the need to remove pipes
Disadvantages of electrofusion welding:
A special sleeve is required, so it is more expensive than other pipe joining methods such as hot plate joining
Implanted coils make recycling of parts more difficult
Equipment
Electrofusion welds are performed by attaching a controlled power supply to the electrofusion fitting. There are typically two modes of operation:
Constant voltage
Constant current
Constant voltage is typically used for high pressure pipelines such as mains gas and water. Fittings are fitted with a barcode specified to an ISO standard. Typically fittings will be welded at 39.5 V, but manufacturers can choose voltages in whole numbers from 8 to 48 V. The welding time is specified on the label in seconds or minutes.
Accessories
Electrofusion welding employs fittings that are placed around the joint to be welded. Metal coils are implanted into the fittings, and electric current is run through the coils to generate heat and melt part of the pipes, forming a joint upon solidification. There are two possible fittings used in electrofusion welding: couplers and tapping tees (saddles). Coupler fittings contain two separate regions of coils, creating two distinct fusion zones during welding. The inner diameter of the coupler is typically slightly larger than the outer diameter of the pipes. This is to increase the ease of assembly in the field and allows for minor inconsistencies in pipe diameter. Proper insertion of the pipes in the coupler is critical for the creation of a strong joint. Incorrect placement of the coupler can cause the coils to move and lead to the extrusion of molten polymer material from the joint, reducing the joint's strength. Tapping tees, or saddles, are less common but operate under the same principles as a coupler. They require clamping to ensure a proper fit up with the pipes.
Fitting installation
Installation of couplers and tapping tee fittings require slightly different procedures. Common installation steps for each are given below.
Couplers
Wash pipe ends to create clean surfaces for joining
Square pipe ends to facilitate optimal fit-up
Clean area where coupler will be placed with isopropyl alcohol
Mark the pipes slightly beyond half the length of the coupler, to indicate where scraping will take place in later steps
Mark the area to be scraped
Scrape pipe in marked areas to remove surface layer, allowing clean pipe material to contact the coupler
Examine scraped area thoroughly, making sure that fresh pipe material is exposed throughout area
Insert pipe ends into coupling to appropriate depth
Secure coupler using clamp
Connect fitting to control box using electrical leads
Apply fusion cycle
Allow joint to be undisturbed for the entire prescribed cooling time
Pressure test pipe
Back fill pipe with appropriate contents
Begin service
Tapping tee
Wash pipe area to be joined to create clean surfaces for joining
Clean area where tapping tee will be placed with isopropyl alcohol
Mark the pipes slightly beyond the edges of the tapping tee location
Scrape pipe in marked areas to remove surface layer, allowing clean pipe material to contact the tapping tee
Examine scraped area thoroughly, making sure that fresh pipe material is exposed throughout area
Place tapping tee onto joint
Secure tapping tee using clamp
Connect fitting to control box using electrical leads
Apply fusion cycle
Allow joint to be undisturbed for the entire prescribed cooling time
Pressure test pipe
Back fill pipe with appropriate contents
Begin service
Power requirements
Electrofusion welding requires electrical current to be passed through the coils implanted in the fittings. Since the electrical energy input is an excellent indicator of the joint strength that develops during fusion, it is necessary to have consistent electrical power input. Energy input during the joining process is typically measured by controlling the time it takes for the current to pass through the fitting. However, energy input can also be monitored by controlling overall temperature, molten polymer temperature, or molten polymer pressure. A control box takes electrical power from a generator and converts it into an appropriate voltage and current for electrofusion joining. This provides consistent energy input for each application. The most common input voltage for electrofusion welding fittings is 39.5 V, as it provides the best results without risking operator safety. The current is input as an alternating current (AC) waveform.
Welding process
Stages during welding
Electrofusion welding is characterized by four distinct stages that occur during the welding process:
Incubation period
Joint formation and consolidation
Plateau region
Cooling period
During the incubation period, heat is introduced into the joint as current is passed through the coil. Although there is no joint strength at this point, the polymer expands and the joint gap is filled. During joint formation and consolidation, melting begins. Melt pressure has begun to build, and the majority of the joint's strength is developed during this stage. The strength increase is due primarily to the constraint of the increasing molten material by the cold zones in the surrounding fitting. The plateau region signals the stabilization of the joint strength. Despite this, the heat of the joint is still increasing with time during this stage. The cooling period occurs after current is no longer applied to the coils. The molten polymer material solidifies and forms the joint.
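As a hedged illustration of the energy bookkeeping behind a constant-voltage fusion cycle described under "Power requirements", the snippet below estimates the electrical energy delivered and normalises it by the fusion-zone area, the quantity used for monitoring joint quality (see the following section). The coil resistance, fusion time and fitting geometry are invented example values, not manufacturer data, and the resistance is treated as constant for simplicity.

```python
import math

# Illustrative constant-voltage fusion cycle (all values are assumptions).
VOLTAGE_V = 39.5            # common electrofusion welding voltage
COIL_RESISTANCE_OHM = 1.5   # assumed coil resistance, treated as constant here
FUSION_TIME_S = 90.0        # assumed fusion time read from the fitting label

# Assumed fusion-zone geometry for a 110 mm coupler: one annular band per pipe end.
PIPE_OD_MM = 110.0
FUSION_ZONE_LENGTH_MM = 35.0
fusion_area_mm2 = math.pi * PIPE_OD_MM * FUSION_ZONE_LENGTH_MM * 2  # two pipe ends

power_w = VOLTAGE_V ** 2 / COIL_RESISTANCE_OHM   # P = V^2 / R
energy_j = power_w * FUSION_TIME_S               # E = P * t
energy_per_area = energy_j / fusion_area_mm2     # J/mm^2, used to monitor the weld

print(f"Power ~ {power_w:.0f} W, energy ~ {energy_j / 1000:.1f} kJ, "
      f"energy density ~ {energy_per_area:.1f} J/mm^2")
```

With these assumed numbers the energy density comes out at a few joules per square millimetre, which is the order of magnitude discussed below for acceptable welds; in practice the controller measures the actual current and time rather than relying on a nominal resistance.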
Current during welding Most electrofusion welding power supplies are constant voltage machines. Constant current machines would provide more consistent energy input due to the smaller fluctuations in current applied to the coils during welding. However, this additional consistency is generally not worth the higher cost of these machines. When a constant voltage machine is used, the value of the applied current slowly decreases throughout the welding process. This effect is due to the increasing resistance of the coils as energy is applied. As heat is generated in the coils, their temperature increases, leading to a higher electrical resistance in the coils. This increased electrical resistance causes a smaller current to be generated from the same voltage level as the process progresses. The extent of the current decrease is determined by the material used for the coil. The energy input per unit area can be calculated and used to monitor the process. Typical values for this range from 2–13 J/mm2, with a value of 3.9 J/mm2 having been found to produce the strongest joints. Temperature during welding Large temperature gradients exist in the electrofusion joint during the fusion cycle. The low thermal conductivity of polymers is the main cause of these large gradients. Recent efforts to model the thermal history at various locations using finite element modeling have been successful. Pressure during welding As the temperature in the joint increases, polymer begins to melt and a fusion zone is formed. The molten polymer in the fusion zone exerts an outward force on the surrounding solid polymer material, referred to as "cold zones". These cold zones cause a pressure to develop in the molten fusion zone. The pressure in the fusion zone takes some time to reach its maximum value, usually not reaching the peak until about a quarter of the way into the joining process. After the current is shut off and cooling begins, the pressure slowly decreases until the joint is uniform temperature. Properties of joints The strength of an electrofusion joint is measured using tensile and peel tests on coupons taken from the fusion zone of the joint. Two methods have been developed to assess the effect of fusion time on joint strength: Simulating an electrofusion joint solely for testing purposes Removing test coupons from standard electrofusion welded joints The strength of the joint develops throughout the welding process, and this development is affected by the fusion time, joint gap, and pipe material. These are detailed below. Effect of fusion time on joint strength As fusion time begins, there is an incubation period where no strength develops. Once enough time has passed for the molten material to begin solidifying, the joint strength begins to develop before plateauing at the maximum strength. If power is applied after full joint strength is achieved, the strength will start to decline slowly. Effect of joint gap on joint strength The joint gap is the distance between the electrofusion fitting and the pipe material. When no joint gap is present, the resulting joint strength is high but not maximum. As joint gap increases, the joint strength increases to a point, then begins to decline fairly sharply. At larger gaps sufficient pressure cannot build during the fusion time, and the joint strength is low. The effect of joint gap on strength is why the scraping of the pipes before welding is a critical step. 
Uneven or inconsistent scraping can result in areas where the joint gap is large, leading to low joint strength. Effect of pipe material on joint strength Pipe materials with higher molecular weights (MW), or densities, will have slower material flow rates when in the molten state during fusion. Despite the differences in flow rates, the final joint strength is generally consistent over a fairly wide range of pipe molecular weights. References Plastic welding Electric heating Electricity Thermodynamics
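The gradual fall in current under a constant-voltage supply, described in the "Current during welding" section above, can be sketched with a simple linear temperature coefficient of resistance. The coefficient, starting resistance and heating profile below are illustrative assumptions only, not values from this article.

```python
import numpy as np

# Constant-voltage supply: coil resistance rises with temperature, so current falls.
V = 39.5                 # supply voltage (common electrofusion value)
R0 = 1.5                 # assumed coil resistance at 20 degC, in ohm
ALPHA = 0.0039           # assumed temperature coefficient of resistance (per degC, copper-like)

time_s = np.linspace(0, 90, 10)
coil_temp_c = 20 + 2.0 * time_s                        # assumed simplistic linear coil heating
resistance = R0 * (1 + ALPHA * (coil_temp_c - 20))     # R(T) = R0 * (1 + alpha * dT)
current = V / resistance                               # I = V / R under constant voltage

for t, r, i in zip(time_s, resistance, current):
    print(f"t = {t:4.0f} s   R = {r:4.2f} ohm   I = {i:4.1f} A")
```

The printed values show the current decaying by a few percent over the cycle for this assumed heating profile; a constant-current supply would avoid this drift, which is the trade-off mentioned above.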
Electrofusion welding
[ "Physics", "Chemistry", "Mathematics" ]
1,954
[ "Thermodynamics", "Dynamical systems" ]
56,279,481
https://en.wikipedia.org/wiki/Cooperative%20pulling%20paradigm
The cooperative pulling paradigm is an experimental design in which two or more animals pull rewards toward themselves via an apparatus that they cannot successfully operate alone. Researchers (ethologists, comparative psychologists, and evolutionary psychologists) use cooperative pulling experiments to try to understand how cooperation works and how and when it may have evolved. The type of apparatus used in cooperative pulling experiments can vary. Researcher Meredith Crawford, who invented the experimental paradigm in 1937, used a mechanism consisting of two ropes attached to a rolling platform that was too heavy to be pulled by a single chimpanzee. The standard apparatus is one in which a single string or rope is threaded through loops on a movable platform. If only one participant pulls the string, it comes loose and the platform can no longer be retrieved. Only by pulling together in coordination can the participants be successful; success by chance is highly unlikely. Some researchers have designed apparatus that involve handles instead of ropes. Although many animals retrieve rewards in their cooperative pulling tasks, the conclusions regarding cooperation are mixed and complex. Chimpanzees, bonobos, orangutans, capuchins, tamarins, wolves, elephants, ravens, and kea appear to understand the requirements of the task. For example, in a delay condition, the first animal has access to the apparatus before the other. If the animal waits for its partner before pulling, this suggests an understanding of cooperation. Chimpanzees, elephants, wolves, dogs, ravens, and kea wait; grey parrots, rooks, and otters fail to wait. Chimpanzees actively solicit help when needed. They appear to recall previous outcomes to recruit the most effective partner. In a group setting, chimpanzees punish initial competitive behavior (taking food without pulling, displacing animals) such that eventually successful cooperation becomes the norm. As for the evolution of cooperation, evidence from cooperative pulling experiments provides support for the theory that cooperation evolved multiple times independently. The fact that basic characteristics of cooperation are present in some mammals and some birds points to a case of convergent evolution. Within social animals, cooperation is suspected to be a cognitive adaptation. Background Many species of animals cooperate in the wild. Collaborative hunting has been observed in the air (e.g., among Aplomado falcons), on land (e.g., among lions), in the water (e.g., among killer whales), and under the ground (e.g., among driver ants). Further examples of cooperation include parents and others working together to raise young (e.g., among African elephants), and groups defending their territory, which has been studied in primates and other social species such as bottlenose dolphins, spotted hyenas, and common ravens. Researchers from various disciplines have been interested in cooperation in animals. Ethologists study animal behavior in general. Comparative psychologists are interested in the origins, differences, and commonalities in psychological capacities across animal species. Evolutionary psychologists investigate the origin of human behavior and cognition, and cooperation is of great interest to them, as human societies are built on collaborative activities. For animals to be considered cooperating, partners must take account of each other's behavior to pursue their common goal. There are various levels of cooperation. 
These increase in temporal and spatial complexity from performing similar actions, to synchrony (similar actions performed in unison), then coordination (similar actions performed at the same time and place), and finally collaboration (complementary actions performed at the same time and place). Researchers use controlled experiments to analyze the strategies applied by cooperating animals, and to investigate the underlying mechanisms that lead species to develop cooperative behavior. Method The cooperative pulling paradigm is an experimental design in which two or more individuals, typically but not necessarily animals, can pull rewards towards themselves via an apparatus they can not successfully operate alone. The cooperative pulling paradigm is the most popular paradigm for testing cooperation in animals. Apparatus The type of apparatus used in cooperative pulling experiments can vary. Researcher Meredith Crawford, who invented the experimental paradigm in 1937 while at the Yerkes National Primate Research Center, used an apparatus consisting of two ropes attached to a box that was too heavy to be pulled by a single chimpanzee. The standard apparatus is used in the loose-string task, designed by Hirata in 2003, in which a single string or rope is threaded through loops on a movable platform. If only one participant pulls the string, it comes loose and the platform can no longer be retrieved. Only by pulling together in coordination can the participants be successful; success by chance is highly unlikely. Some researchers have designed apparatus that involve handles instead of ropes. De Waal and Brosnan have argued that complex electronically-mediated devices are not conducive to arrive at findings regarding cooperation. This is in contrast to mechanical pulling devices, in which the animals can see and feel their pull having immediate effect. String-pulling tasks have advantages in terms of ecological validity for animals that pull branches with food towards themselves. Tasks in which participants have different roles in collaboration, such as for example, one pulls a handle and the other one needs to insert a stick, are considered outside the cooperative pulling paradigm. Subjects So far, fewer than twenty species have participated in cooperative pulling experiments: chimpanzees, bonobos, orangutans, capuchin monkeys, tamarins, macaques, humans, hyenas, wolves, dogs, elephants, otters, dolphins, rooks, ravens, parrots, and kea. Researchers have picked species that cooperate in the wild (e.g., capuchins), live in social structures (e.g., wolves), or have known cognitive abilities (e.g., orangutans). Most of the participating animals have been in human care at an animal research center; some lived semi-free at a sanctuary in their natural habitat. One study involved free animals (Barbary macaques) in the wild. Conditions To arrive at conclusions regarding cooperation, researchers have designed experiments with various conditions. Delay The first animal has access to the apparatus before the other one. If the animal does not wait for its partner this suggests a lack of understanding of the requirements for successful cooperation. Recruitment The subject recruits the partner (for example by opening a door) when the task requires cooperation. Partner choice The first animal gets to choose which animal from a pair it wants as a partner. In some cases individual animals from within a group can decide to join an animal already at the apparatus. 
Apparatus choice Instead of just one apparatus in the test area there are two identical ones. Animals can decide to work on the same one (which can lead to success) or on different ones (which will lead to failure). A further design involves two different apparatus. The first animal can decide whether to use an apparatus that can be operated alone or one that requires and has a partner waiting. A 'no rope' version involves an apparatus where everything is the same except for the rope on the partner's side being coiled up and not accessible to the partner. Reward Rewards can be food split equally over two bowls in front of each animal, or in one bowl only. The type of food can vary from many small pieces to a single big lump (e.g., slices of an apple vs. a whole apple). In combination with the apparatus choice, the reward for the joint-task apparatus is often twice as big as the reward for the solo apparatus. Another variation is a modified apparatus where one partner gets food before the other, requiring the first one to keep pulling despite already having received the reward. Visibility Typically the animals can see each other, all rewards, and all parts of the apparatus. To assess the role of visual communication, sometimes an opaque divider is placed such that the animals can no longer see each other, but can still see both rewards. Training Animals are often first trained with an apparatus that can be operated by one individual. For example, the two ends of a string are on top of each other and a single animal can pull both ends. A technique called shaping can be used by gradually extending the distance between the string ends, or by gradually extending the length of delay between the arrival of the first and second animal at the apparatus. Findings Overview Although many animals retrieve rewards in their cooperative pulling tasks, the conclusions regarding cooperation are mixed and complex. Some researchers have attributed successful cooperation to random simultaneous action, or to the simple reactive behavior of pulling the rope when it moves. Many trials with capuchins, hyenas, parrots and rooks led to failure because one partner pulled without the other present, suggesting a lack of understanding of cooperation. A few researchers have offered the possible explanation that animals may understand cooperation to some extent but simply can not suppress the desire to have food they see. But there is evidence that some species do have an understanding of cooperation and perform intentional coordination to achieve a goal. Specifically, chimpanzees, bonobos, orangutans, tamarins, capuchins, elephants, wolves, ravens, and kea appear to understand how cooperation works. Chimpanzees not only wait for a partner, but will actively solicit help when needed. They appear to recall previous outcomes to recruit the most effective partner. In a group setting, chimpanzees punish initial competitive behavior (taking food without pulling, displacing animals) such that eventually, after many trials, successful cooperation becomes the norm. Bonobos, which are social animals with higher tolerance levels, can outperform chimpanzees on some cooperative tasks. Elephants will wait for 45 seconds for a partner to arrive before they start a cooperative pulling task; wolves do the same for 10 seconds. Dogs raised as pets are also able to wait for a partner, albeit only for a few seconds; pack dogs on the other hand rarely succeed in cooperative pulling in any condition. 
Among birds, ravens are able to learn to wait after many trials, while kea have set the record in waiting for a partner, 65 seconds. Mere knowledge of the presence of a partner is not enough for success: when a barrier with a small hole was placed between two capuchins, obstructing the view of the partner's actions, the success rate dropped. Of those species tested in the delay condition, parrots, rooks, and otters failed. In 2008, Seed, Clayton and Emery said the study of the proximate mechanisms underpinning cooperation in animals was in its infancy, due in part to the poor performances of animals such as chimpanzees in early tests that did not take factors such as inter-individual tolerance into account. In 2006, Melis, Hare, and Tomasello had shown that the performance of chimpanzees in cooperative tasks was strongly influenced by levels of inter-individual tolerance. Several studies since have highlighted the fact that tolerance has a direct impact on cooperation success, as the more tolerant an animal is around food the better it performs. Subordinate animals seem simply not willing to risk being attacked by intolerant dominant animals, even if it means they will not obtain food either. In general, cooperation will not emerge if individuals can not share the spoils obtained through their joint effort. Temperament, whether an animal is bold or shy, has also been found to predict success. As for the evolution of cooperation, evidence from cooperative pulling experiments appears to support the theory that cooperation evolved multiple times independently. The fact that basic characteristics of cooperation are present in some mammals and some birds points to a case of convergent evolution. Within social animals, cooperation is suspected to be a cognitive adaptation. The ability of humans to cooperate is likely to have been inherited from an ancestor shared with at least chimpanzees and bonobos. The superior scale and range of human cooperation comes mainly from the ability to use language to exchange social information. Primates Chimpanzees Chimpanzees (Pan troglodytes) are smart, social animals. In the wild they cooperate to hunt, dominate rival groups, and defend their territory. They have participated in many cooperative pulling experiments. The first ever cooperative pulling experiment involved captive chimpanzees. In the 1930s Crawford was a student and researcher at the Yerkes National Primate Research Center. In 1937 he published a study of two young chimpanzees named Bula and Bimba pulling ropes attached to a box. The box was too heavy to be pulled in by just one ape. On top of the box was food. The two participants synchronized their pulling and were able to get the food reward in four to five short pulls. In a second part of the study, Crawford fed Bula so much prior to the test that she was no longer interested in the food reward. By poking her and pushing her hand towards the rope, Bimba tried to enlist her help in the task, with success. In a follow-up experiment with seven pairs of chimpanzees Crawford found none of the apes spontaneously cooperated. Only after extensive training were they able to work together to obtain food. They also failed to transfer this new skill to a slightly different task, in which the ropes were hanging from the ceiling. 
Similar mixed results, not matching the cooperative abilities observed in chimpanzees in the wild, were obtained in later studies by other researchers using a variety of experimental set-ups, including the loose-string task pioneered by Hirata. Povinelli and O'Neill, for example, found that trained chimpanzees were unable to teach naive chimpanzees to cooperate on a Crawford-like box-pulling task. The naive animals did not imitate the experts. Chalmeau and Gallo found only two chimpanzees consistently cooperating in their handle-pulling task, and this involved one ape holding his own handle and waiting for the other to pull his. They concluded that social factors and not limited cognitive abilities were the reason for lack of widespread success, as they observed dominant chimpanzees controlling the apparatus and preventing others from interacting. Melis, Hare, and Tomasello set up an experiment to control for such social factors. In a loose-string cooperative task without training they compared the ability of pairs of captive chimpanzees who in a non-cooperative setting were willing to share food with each other to pairs who were less inclined to do so. The results showed that food sharing was a good predictor for success in the cooperative pulling task. Melis, Hare, and Tomasello concluded that mixed results in the past could at least partially be explained by a failure to control for such social constraints. In a follow-up study with semi–free-ranging chimpanzees, again using the loose-string task, the researchers introduced the delay task, in which subjects were tested in their ability to wait for the partner. After mastering this task, they participated in a new task designed to measure their ability to recruit the partner. They found that the apes only recruited a partner (by unlocking a door) if the task required cooperation. When given the choice between partners, the apes chose the more effective one, based on their experience with each of them previously. Suchak, Eppley, Campbell, Feldman, Quarles, and de Waal argued that even when experiments take social relationships into account, the results still do not match the cooperation capabilities observed in the wild. They set out to increase the ecological validity of their experiments by placing a handle-pulling apparatus in an open-group setting, allowing the captive chimpanzees themselves to choose to interact with it or not, and with whom. They also refrained from any training, offered as little human intervention as possible, and extended the duration to much longer than any test had ever done, to 47 days of 1 hour tests. The chimpanzees first discovered that cooperation could lead to success, but as more individuals became aware of this new way to obtain food, competition increased, taking the form of dominant apes displacing others, monopolizing the apparatus, and freeloading: taking the food others worked for. This competition led to fewer successful cooperative acts. The group did manage to restore and increase levels of cooperative behavior by various enforcement techniques: dominant individuals were unable to recruit partners and abandoned the apparatus, displacement was met with aggressive protest, and freeloaders were punished by third-party arbiters. When the researchers repeated this experiment with a brand new group of chimpanzees who not yet had established a social hierarchy, they again found that cooperation overcame competition in the long run. 
In a later study with a mix of novices and experts, Suchak, Watzek, Quarles, and de Waal found that novices learned rapidly in the presence of experts, although likely with limited understanding of the task. Greenberg, Hamann, Warneken, and Tomasello used a modified apparatus that required two captive chimpanzees to pull, but delivered food to one ape first. They found that in many trials the apes who already had received a reward from joint effort kept pulling to help their partner obtain their food. These partners did not need to gesture to solicit help, suggesting there was an understanding of what was wanted and needed. Bonobos Bonobos (Pan paniscus) are social animals that live in less hierarchical structures than chimpanzees. Hare, Melis, Woods, Hastings, and Wrangham set out to compare cooperation in chimpanzees and bonobos. They first ran a cofeeding experiment for each species. Pairs of bonobos were given two food dishes. In some trials both dishes had sliced fruit; in some one dish was empty and the other had sliced fruit; and in some one dish was empty and the other contained just two slices of fruit. The same set-up was then used for pairs of chimpanzees. When both dishes had food, there was no difference in behavior between bonobos and chimpanzees. But when only one dish contained food, bonobos were more than twice as likely to share food than chimpanzees. Bonobos were more tolerant of each other than chimpanzees. The researchers then ran a loose-string cooperation task with both dishes filled with sharable food. The results showed similar success rates for bonobos and chimpanzees, 69% of chimpanzee pairs and 50% of bonobo pairs spontaneously solving the task at least once within the six-trial test session. In a third experiment, a year later, the same cooperation task was administered but now with different food distributions. The bonobos outperformed the chimpanzees in the condition where one dish only had food and the food was clumped making it easier to monopolize the food reward. Bonobos cooperated more often in this condition. On average a single chimpanzee partner monopolized food rewards more often than a single bonobo did. In the condition where both dishes were filled with food, chimpanzees and bonobos performed similarly, as they had done the year before. The researchers concluded that the differences in performance between species were not due to differences in age, relationships, or experience. It was the bonobos' higher social tolerance level that enabled them to outperform their relatives. Orangutans Orangutans (Pongo pygmaeus) are tool-using apes that are mostly solitary. Chalmeau, Lardeux, Brandibas, and Gallo tested the cooperative capabilities of a pair of orangutans, using a device with handles. Only through simultaneous pulling could the pair retrieve a food reward. Without any training the orangutans succeeded in the first session. Over the course of 30 sessions, the apes succeeded more quickly, having learned to coordinate. Across trials the researchers found an increase in a sequence of actions that suggested understanding of cooperation: first looking at the partner; then if the partner holds or pulls the handle, starting to pull. The researchers also concluded that the orangutans learned a partner had to be present for success. For example, they observed that time spent alone at the apparatus decreased as the trials progressed. In some instances one orangutan pushed the other towards the free handle, soliciting cooperation. 
The researchers observed an asymmetry: one ape did all the monitoring and coordinating, the other one seemed to simply pull if the first one was present. Rewards did not have to be shared equally for success to appear, as one orangutan took 92% of all food. This ape anticipated the falling of food and stuck his hand out first, before recruiting help from his partner. Chalmeau, Lardeux, Brandibas, and Gallo concluded the apes appeared to understand the requirements of the cooperative task. Capuchins Capuchins (Sapajus apella) are large-brained monkeys that sometimes hunt cooperatively in the wild and show, for nonhuman primates, unusually high levels of social tolerance around food. Early experiments to prove their ability to cooperate were unsuccessful. These tests involved capuchins having to pull handles or press levers in complex devices that the animals did not understand. They did not pull the handle more often when a partner was pulling; both novices and experienced participants kept pulling even in situations where success was impossible. Visalberghi, Quarantotti, and Tranchida concluded that there was no evidence of an appreciation of the role played by the partner. The first test with evidence of cooperation in capuchins happened when de Waal and Brosnan adopted Crawford's pulling paradigm. Two captive monkeys were situated in adjacent sections of a test chamber, with a mesh partition between them. In front of them was an apparatus consisting of a counter-weighted tray with two pull bars and two food cups. Each monkey had access to only one bar and one food cup, but could see both, and only one cup was filled with food. The tray was too heavy for one monkey to pull it in, with weights established over trials lasting three years. Only when they worked together and both pulled could they move the tray, enabling one of them to grab the food. Trained monkeys were much more successful if they both obtained rewards after pulling than if only one of them received rewards. The pull rate dropped significantly when monkeys were alone at the apparatus, suggesting an understanding of the need for a partner. In later tests, researchers replaced the mesh partition with an opaque barrier with a small hole, so that the monkeys could see the other one was there but not their actions. This dramatically reduced success in cooperation. De Waal and Berger used the cooperative pulling paradigm to investigate animal economics. They compared the behavior when both transparent bowls were loaded with food to when just one was loaded, and with a solo task where the partner was only an observer and unable to help. They found that captive capuchin monkeys were willing to pull even if their bowl was empty and it was uncertain if their partner would share food. In 90% of cases the owner of the food did indeed share the food. Food was shared more often if the partner actually worked for it than just being an observer. Brosnan, Freeman, and de Waal tested captive capuchin monkeys on a bar-pulling apparatus with unequal rewards. Contrary to their expectations, rewards did not have to be distributed equally to achieve success. What mattered was the behavior in an unequal situation: pairs that tended to alternate which monkey received the higher-value food were more than twice as successful in obtaining rewards than pairs in which one monkey dominated the higher-value food. Tamarins Cottontop tamarins (Saguinus oedipus) are small monkeys who take care of their young cooperatively in the wild. 
Cronin, Kurian, and Snowdon tested eight captive cottontop tamarins in a series of cooperative pulling experiments. Two monkeys were put on opposite sides of a transparent apparatus containing food. Only if both monkeys pulled a handle on their side of the apparatus towards themselves at the same time would food drop down for them to obtain. The tamarins were first trained, through shaping techniques, to use the handles successfully by themselves. In the joint pulling test pairs were successful in 96% of trials. The researchers then ran a second study in which a tamarin was tested alone. The results showed that tamarins pulled the handles at a lower rate when alone with the apparatus than when in the presence of a partner. Cronin, Kurian, and Snowdon concluded from this that cottontop tamarins have a good understanding of cooperation. They suggest that cottontop tamarins have developed cooperative behavior as a cognitive adaptation. Macaques Molesti and Majolo tested a group of wild Barbary macaques (Macaca sylvanus) in Morocco to see if they would cooperate, and if so, what determined their partner choice. Macaques live in complex social environments and are relatively tolerant socially. After solo training, the researchers presented a loose-string apparatus for the cooperative task, which the animals were free to use. Most animals that passed solo training were successful in spontaneously cooperating to obtain food (22 out of 26). More than half the pairs that chose to cooperate were juvenile-adult pairs. More than two monkeys pulling was never observed; stealing food from a partner was rare. After a first successful cooperation, they were more likely to pull when a partner was directly available, but this was not always the case. Molesti and Majolo did not rule out that pulling while no one held or pulled the other end of the rope was simply a signal to actively recruit a potential partner. The researchers randomly introduced control trials in which the solo apparatus was set up as well. The macaques preferred to get the food alone when a partner was not needed during the control. The extent to which a monkey tolerated another was a good predictor for initiating cooperation. An individual was also found to be more successful with partners with whom they had a strong social bond. Pairs sharing a similar temperament were more likely to initiate cooperation. The quality of the relationship seemed to play an important role in the maintenance of cooperation over time. Humans Rekers, Haun, and Tomasello tested the cooperation abilities and preferences of humans (Homo sapiens) and compared them to chimpanzees. The researchers provided 24 three-year-old children with some basic training in pulling food rewards towards themselves; in pairs using a loose-string setup, and solo training in which the two ends of a rope were tied together. They then tested the children in an apparatus choice set-up. On one side was the loose end of a rope that threaded through the apparatus to the other child. On the other side were two ends of a rope that when pulled would pull a platform towards both the child and their partner. Both the joint-operator platform and the solo-operated platform were holding two food dishes, all containing the same amounts of food. That is, from a partner's perspective, on one side the child had to pull to get food; on the other the partner could get food without any effort. The children chose the joint-operated board in 78% of trials. 
The researchers then changed the design to ascertain if this choice preference was due to wishing to avoid freeloading and it may be that the children did not like their partner getting food without making any effort. In the modified set-up the partners never received any reward, not from the joint-operated apparatus and not from the solo-operated apparatus. Children again chose the joint-operated platform significantly more often, in 81% of trials. As in the first study, there was no significant difference in the time taken to obtain the food reward between using one side or the other. These results suggest that to obtain food, children prefer to work together with a partner as opposed to working alone. The chimpanzees in their study appeared to choose between the two platforms randomly, indicating no preference to work collaboratively. However, Bullinger, Melis, and Tomasello showed that chimpanzees actually exhibit a preference for working alone, unless cooperation is associated with higher pay-offs. Other mammals Hyenas Captive spotted hyenas (Crocuta crocuta), social carnivores that hunt in groups, have cooperated to obtain food rewards by pulling ropes in an experimental setting. Mimicking the natural choice hunting hyenas face when deciding which of many prey to jointly attack, researchers Drea and Carter set up two devices instead of one, as previously used in all cooperative pulling tasks with other species. With four ropes to pull from, the animals had to pick the two belonging to the same device to be successful. If two vertically suspended ropes were simultaneously tugged, a spring-controlled trap door of an elevated platform was opened and previously hidden food dropped to the floor. Another innovation was the introduction of more than two animals. One of the many factors the researchers controlled for was the Clever Hans effect (an effect in which humans unwittingly provide cues to animals), which they did by removing all humans from the test and by recording experiments on video. After extensive solo trials, all hyenas were successful in cooperating, displaying remarkable efficiency even on their first try. On average, hyenas pulled on ropes more often when their companion was nearby and available to fulfil its partnership role. With only a few solo trials, the success rate of the cooperation task was very low for pairs. In groups of four hyenas, all trials were successful, regardless of the number of reward platforms. Thereafter, group exposure to a cooperation task had enhancing effects on pairwise performance. Social factors such as group size and hierarchy played a role. For instance, groups with a dominant member were far less successful than groups without, and lower-ranking animals were faster and consistently successful. When pairing experienced cooperators with animals new to the cooperation task, the researchers found that experienced animals monitored the novices and modified their behavior to achieve success. Despite initial accommodation, the pattern of rank-related social influences on partner performance also appeared in these tests with novices. Dogs Ostojić and Clayton administered the loose-string cooperation task to domestic dogs (Canis familiaris). Pet dogs first were given a solo task in which the string ends were close enough for one dog to pull at both. Then they were given a transfer test to assess if they could generalize their newly learned rule to novel situations. Finally, the joint task was administered. 
Dog pairs always came from the same household. In half of the joint tasks one of the pair of dogs was shortly delayed by an obstacle course. All dogs that learned to master the solo task solved the joint task within 60 trials. In the delayed condition, the not-delayed dog waited before pulling most of the time, but only for a few seconds. The researchers also tested dog–human pairs, again in delayed and not-delayed conditions. Dogs were equally successful when working with humans in the non-delayed condition, but far less successful when they had to wait for the human, who on average arrived with a 13-seconds longer delay than the delayed dog in the dog–dog trials. Ostojić and Clayton concluded that inhibiting the necessary action was not easy for dogs. They ruled out that dogs simply went for any moving string, as in the dog–human trials the humans did not pull hard enough to make the other end move. They attributed success to the dogs' ability to read the social cue of their partner's behavior, but could not rule out that visual feedback of seeing rewards incrementally move closer also played a role. These results with pet dogs stand in stark contrast to the results with pack dogs, which in a study by Marshall-Pescini, Schwarz, Kostelnik, Virányi, and Range rarely succeeded in obtaining food. The researchers theorized that pet dogs are trained not to engage in conflicts over resources, promoting a level of tolerance, which may facilitate cooperation. The pack dogs were used to competition over resources and thus were likely to have conflict avoidance strategies, which constrain cooperation. Wolves Marshall-Pescini, Schwarz, Kostelnik, Virányi, and Range set out to test two competing hypotheses regarding cooperation in wolves (Canis lupus) and dogs. On the one hand, it could be theorized that dogs have been selected, during domestication, for tame temperaments and an inclination to cooperate and therefore should outperform wolves on a cooperative pulling task. On the other hand, it could be argued that dogs have evolved to become less able to work jointly with other dogs because of their reliance on humans. Wolves rely on each other for hunting, raising young and defending their territory; dogs rarely rely on other dogs. The researchers set up a cooperative pulling task for captive wolves and pack dogs. Without any training on this task, five of the seven wolf pairs were successful at least once, but only one dog pair out of eight managed to obtain food, and only once. After solo training, again the wolves far outperformed the dogs on the joint task. The researchers concluded that the difference does not stem from a difference in understanding of the task (their cognitive capabilities are largely the same), nor from a difference in social aspects (for both species, aggressive behavior by dominant animals was rare, as was submissive behavior by lower ranked ones). More likely is that dogs avoid potential conflict over a resource more than wolves do, something which has been observed in other studies as well. The wolves, but not the dogs, were then tested in pairs in a set-up with two identical apparatus 10 meters (39 ft) apart, requiring them to coordinate in time and space. In 74% of the trials they succeeded. The stronger the bond between the partners and the smaller the distance in rank, the better they performed. In a subsequent delay condition, with the second wolf released 10 seconds after the first, most wolves did well, one being successful in 94% of trials. 
Elephants Elephants have a complex social structure and large brains that enable them to solve many problems. Their size and strength do not make them easy candidates for experiments. Researchers Plotnik, Lair, Suphachoksahakun, and de Waal adapted the apparatus and task to elephant requirements. They trained captive Asian elephants (Elephas maximus) to use a rope to pull a sliding platform with food on it towards themselves. Once the elephants managed this solo task, the researchers introduced a loose-string apparatus by threading the rope around the platform. At first, two elephants were released simultaneously to walk side by side in two lanes to the two loose ends of the rope. Using their trunks the animals coordinated their actions and retrieved the food. At this stage they could simply be applying a 'see the rope, pull the rope' strategy. To see whether they understood the requirements of the task the researchers introduced a delay for one elephant, initially of 5 seconds and ultimately of 45 seconds. At first the lead elephant failed to retrieve the food but was soon seen to wait for a partner. Across 60 trials the first elephant waited for the second one before pulling in most cases. In a further control the researchers prevented the second elephant from being able to access its end of the rope. In almost all of these cases the first elephant did not pull the rope, and four of the six returned when they saw the other rope end was not going to be accessible to their partner. The researchers concluded that this suggested the elephants understood they needed their partner to be present and to have access to the rope to succeed. One elephant never pulled the rope but simply put her foot on the rope and let the partner do all the pulling. Another one waited for his partner's release at the starting line rather than waiting at the rope. Plotnik, Lair, Suphachoksahakun, and de Waal conceded that it is difficult to distinguish learning from understanding. They did prove that elephants show a propensity towards deliberate cooperation. The speed with which they learned the critical ingredients of successful cooperation puts them on par with chimpanzees and bonobos. Otters Schmelz, Duguid, Bohn, and Völter presented two species of captive otters, giant otters (Pteronura brasiliensis) and Asian small-clawed otters (Aonyx cinerea), with the loose-string task. Both species raise young cooperatively and live in small groups. Because giant otters forage together but small-clawed otters do not, the researchers expected the giant otters to do better in the cooperative pulling experiment. After solo training, they tested both species in a group setting, to maintain ecological validity. The results showed that most pairs of otters were successful in pulling food rewards to themselves. Contrary to expectation, there was no difference between the species in success rate. In a subsequent experiment the researchers first lured the group away from the apparatus into the opposite corner of the enclosure. Then they put food on the apparatus and observed what happened when the first otter arrived at the nearest end of the rope, as there was no partner yet at the other end. Very few trials led to success in this condition as otters pulled the rope as soon as they could. The researchers concluded from this that the otters did not understand the necessary elements of successful cooperation, or, alternatively, they understood but were unable to inhibit the desire to reach for the food. 
When the same task was repeated with a longer rope, success rate did go up, but the otters appeared unable to learn from this and be successful in the next task with the rope length restored to the original length. Schmelz, Duguid, Bohn, and Völter suggested that an understanding of cooperation may not be required for successful cooperation in the wild. Cooperative hunting may be possible through situational coordination and mutualism, without any complex social cognitive abilities. Dolphins Two groups of researchers (first Kuczaj, Winship, and Eskelinen, and then Eskelinen, Winship, and Jones) adapted the cooperative pulling paradigm for captive bottlenose dolphins (Tursiops truncatus). As apparatus they used a container which could only be opened at one end if two dolphins each pulled a rope on either end. That is, the dolphins would have to face each other and pull in opposite directions. They first attached the container to a stationary dock so a single dolphin could learn to open it and get the food reward. Then they ran trials in which the container was free floating in a large test area with six dolphins. In Kuczaj, Winship, and Eskelinen's study, only two dolphins interacted with the container. In eight of the twelve trials they pulled simultaneously and obtained food. Once, they also managed to open the container through asynchronous pulling, and once a single male dolphin managed to open it by himself. Kuczaj, Winship, and Eskelinen admitted that this behavior may appear to be cooperation but could possibly be competition. They conceded it is possible that the dolphins did not understand the role of the other dolphin, but instead simply tolerated it pulling on the other side. King, Allen, Connor, and Jaakkola later argued that this design makes for a competitive 'tug-of-war', not cooperation, and any conclusions regarding cooperation should therefore be invalid. Birds Rooks Rooks (Corvus frugilegus) are large-brained members of the bird family Corvidae. They live in big groups and have a high level of social tolerance. Researchers Seed, Clayton, and Emery set up a loose-string experiment with eight captive rooks. They were first trained in a solo task, with the string ends placed at 1 cm, 3 cm and ultimately 6 cm apart (0.4, 1.2, and 2.4 inch respectively). A pair's willingness to share food was then tested, and was found to differ somewhat between pairs, although food was rarely monopolized by a dominant bird. In the cooperative task, all pairs were able to solve the cooperation problem and retrieve food; two pairs managed this in their first session. Food sharing was a good predictor for successful cooperation. In a subsequent delay test, where one partner had access to the apparatus first, all rooks pulled the string without waiting for their partner to enter the test area in the majority of trials. In a second variant, birds were given a choice between a platform they could operate successfully alone and one that required a pulling partner. When tested alone, four of the six rooks showed no significant preference for either platform. Seed, Clayton, and Emery concluded that although successful at the cooperation task, it seemed unlikely that the rooks had an understanding of when cooperation was necessary. Researchers Scheid and Noë subsequently found that successful cooperation in rooks depended to a large extent on their temperament. In their loose-string experiment with 13 captive rooks they distinguished between bold and shy animals. 
The results were mixed, ranging from some pairs cooperating successfully every time to some pairs never cooperating. In 81% of cases a rook should have waited for a partner, but it did not and started pulling. Scheid and Noë concluded their experiment provided no evidence for or against rooks having an understanding of the task. They attributed any cooperation success to common external cues and not coordination of actions. But all subjects did better when they were paired with a bolder partner. The researchers suggested that in evolution, cooperation can emerge because bolder individuals encourage a risk-averse one to engage. Ravens Massen, Ritter, and Bugnyar investigated the cooperative capabilities of captive common ravens (Corvus corax), a species that frequently cooperates in the wild. They found that without training ravens cooperated in the loose-string task. The animals did not seem to pay attention to the behavior of their partners while cooperating, and, like rooks, did not seem to understand the need for a partner to be successful. Tolerance of their partner was a critical factor for success. In one condition the researchers let ravens choose a partner from a group to cooperate with. Overall success was higher in this condition, and again, individuals that tolerated each other more had more success. The ravens also paid attention to reward distribution: they stopped cooperating when being cheated upon. Asakawa-Haas, Schiestl, Bugnyar, and Massen subsequently ran an open-choice experiment with eleven captive ravens in a group setting, using nine ravens from one group and two newcomers. They found that the ravens' decision which partner to cooperate with was based on tolerance of proximity and not on whether they were part of the group or not. The ravens in this experiment learned to wait for their partner and inhibit pulling the string too soon. Grey parrots Researchers Péron, Rat-Fischer, Lalot, Nagle and Bovet had captive grey parrots (Psittacus erithacus) try to cooperate in a loose-string experimental set-up. The grey parrots were able to act simultaneously but, like the rooks, largely failed to wait for a partner in the delay task. They did not make any attempts to recruit a helping partner. The parrots did take the presence of a partner into account, since they all pulled more when a partner was present, but this could be explained by instrumental learning rather than a real understanding of the task. The researchers also gave the parrots a choice between two apparatus, one from the solo task and one from the loose-string task, now stacked with double the food per bird. Two of the three parrots chose the solo apparatus when alone, and two of the three parrots preferred the joint-task apparatus when tested with a partner. When paired up, social preferences and tolerance affected the likelihood a pair cooperated. Kea Kea (Nestor notabilis), parrots native to New Zealand, are a distant relative of the grey parrot. They live in complex social groups and do well on cognitive tests. Heaney, Gray, and Taylor gave four captive kea a series of cooperative loose-string tasks. After solo training and shaping with string ends increasingly further apart, two birds were released simultaneously in a joint loose-string task. Both pairs did very well, one pair failing only 5 in 60 trials. Shaping was then used in a delay task, with the partner released after one second, then two, and gradually up to 25 seconds later than the first bird. 
The birds managed to wait for a partner between 74% and 91% of test trials, including success at 65 seconds delay, longer than any other animal of any species had been tested for. To assess if this success could be explained by the learning of a combination of cues, such as seeing a partner while feeling tension on the string, or by a proper understanding of cooperation, the researchers randomly gave the kea a set-up they could solve alone or one in which they needed to cooperate with a delayed partner. Three of the four kea were successful at a significant rate: they chose to wait when they had to and immediately pulled when the task could be done alone. However, when the researchers modified the set-up and coiled up the string end of the delayed partner, no bird was successful at discriminating between a duo platform with both ends of string available to both kea and a duo platform with the partner's string coiled out of reach. The researchers were not able to determine the reason for this result. They speculated it could be that kea do have an understanding of when they need a partner but do not have a clear idea of the role their partner plays in relation to the string, or they may lack of a full causal understanding of how the string works. Finally, the researchers attempted to ascertain if kea have a preference for working alone or together. No preference was found in three of the four kea, but one kea preferred the duo platform significantly more. Heaney, Gray, and Taylor concluded that these results put kea on a par with elephants and chimpanzees in terms of cooperative pulling. These conclusions are in sharp contrast to those of Schwing, Jocteur, Wein, Noë, and Massen, who tested ten captive kea in a loose-string task on an apparatus that provided limited visibility to follow the trajectory of the string. After training with a human partner (no solo training was done), only 19% of trials led to the birds obtaining food in the joint task. The researchers found that the closer the birds were affiliated, the more successful they were in the cooperation task. The kea did not seem to understand either the mechanics of the loose-string apparatus or the need of a partner, as in training with humans they still pulled the string even when the human was too far away or facing the wrong way. The way rewards were distributed had a small effect on the likelihood of cooperation attempts. The difference in social rank or dominance did not seem to matter. Footnotes References Notes Bibliography External links First ever cooperative pulling experiment (video) Crawford (1937) Elephants in cooperative pulling experiment (video) Plotnik et al. (2011) Wolves and dogs in cooperative pulling experiment (video) Marshall-Pescini et al. (2017) Chimpanzees in cooperative pulling experiment (video) Suchak et al. (2014) Dolphins in pulling experiment (video) Kuczaj et al. (2015) TED Talk Moral behavior in animals (video) Frans de Waal Design of experiments Ethology Animal cognition Animal testing
Cooperative pulling paradigm
[ "Chemistry", "Biology" ]
9,746
[ "Animal testing", "Behavior", "Animals", "Behavioural sciences", "Animal cognition", "Ethology" ]
63,257,483
https://en.wikipedia.org/wiki/Extreme%20tribology
Extreme tribology refers to tribological situations under extreme operating conditions which can be related to high loads and/or temperatures, or severe environments. Also, they may be related to high transitory contact conditions, or to situations with near-impossible monitoring and maintenance opportunities. In general, extreme conditions can typically be categorized as involving abnormally high or excessive exposure to e.g. cold, heat, pressure, vacuum, voltage, corrosive chemicals, vibration, or dust. The extreme conditions should include any device or system requiring a lubricant operating under any of the following conditions: Beyond the original machinery design specifications. Beyond the original machinery ambient parameters. Application in an environmentally sensitive location. Beyond the original lubricant design specification. Operation in such extreme conditions is a great challenge for tribologists to develop tribosystems that could meet these extreme requirements. Often, only multifunctional materials fulfill such requirements. Challenges in tribology The progression of the humanity suggested new technologies, devices, materials and surface treatments which required novel lubricants and lubrication systems. Likewise, the development of high-speed trains, aircraft, space stations, computer hard discs, artificial implants, and bio-medical and many other engineering systems, have only been possible through the advances in tribology. Challenges in tribology including sustainability, climate change and gradual degradation of the environment require new solutions and innovative approaches. Tribology at extreme temperatures In many tribological applications, the system components are exposed to extreme temperatures (very high or ultra-low temperatures). Examples of such applications can be found in the aerospace, mining, power generation, metalworking industries, and steel plants. In tribology, an application can be considered to operate at elevated temperatures when the use of conventional lubricants, i.e. oils and greases is no longer effective due to their rapid decomposition at around 300 °C. Smart lubricating materials and multifunctional lubricating materials are developed as new class materials with increased safety, long-term durability and as less amount of repairing costs as possible. Such materials are designed to be self-diagnostic, self-repairing, and self-adjusting. These materials include structural/lubricating integrated material, anti-radiation lubricating material, conductive or insulation lubricating material, etc. At low temperatures and in cryogenic environments, liquid lubricants can solidify or become highly viscous and not be effective. On the other end, solid lubricants have usually been found to be better than liquid lubricants or greases. The most common solid lubricants for cryogenic temperature are polytetrafluoroethylene, polycarbonate, tungsten disulphide (WS2), and molybdenum disulphide (MoS2). In addition, ice could be a possible lubricant for deformation in cryogenic environments which provides a method of self-lubrication in the sense that no active mechanism is needed to supply a lubricant. Tribology at micro/nano-scale The fundamental difference that distinguishes micro/nano tribology from classical macro tribology is that micro/nano tribology considers the friction and wear of two objects in relative sliding whose dimensions range from micro-scales down to molecular and atomic scales. 
MEMS refer to micro-electromechanical systems that have a characteristic length of 100 nm to 1 mm, while NEMS are the nano-electromechanical systems that have a characteristic length of less than 100 nm. There are great challenges in the development of a fundamental understanding of tribology, surface contamination and environment in MEMS/NEMS. One of these challenges in such extreme tribological situations is the adhesion force which can be up to a million times greater than the force of gravity. This is due to the fact that the adhesion force decreases linearly with size, whereas the gravitational force decreases with the size cubed. Low surface energy, hydrophobic coatings applied to oxide surfaces are promising for minimizing adhesion and static-charge accumulation. Tribology under vacuum conditions Under vacuum environment, it is a problem to achieve acceptable endurance of tribological components due to the fact that the lubricant may either freeze, evaporate or decompose and hence become ineffective. Tribological properties of materials exhibit different characteristics at the space vacuum as compared to the atmospheric pressure. Adhesive and fatigue wear are the two important types of wear encountered in a vacuum environment. Vacuum not only radically affects the wear behavior of metals and alloys in contact, but also has a pronounced influence on nonmetals as well. Different new kinds of materials are developed for potentially operating in vacuum environments. For instance, and alloys have excellent anti-wear properties in all the vacuum conditions. Types of solid lubricants used in space applications: Soft metal films: gold, silver, lead, indium, and barium. Lamellar solids: molybdenum disulfide, tungsten disulfide, cadmium iodide, lead iodide, molybdenum diselenide, intercalated graphite, fluorinated graphite, and phthalocyanines. Polymers: polytetrafluoroethylene, polyimides, fluorinated ethylene-propylene, ultra-high-molecular-weight polyethylene UHMWPE, polyether ether ketone, polyacetal, and epoxy resins. Other low shear strength materials: fluorides of calcium, lithium, barium, and rare earths; sulfides of bismuth and cadmium; and oxides of lead, cadmium, cobalt, and zinc. The most common way to utilize a solid lubricant is to apply it to a metal surface as a film or surface coating of a thin layer of soft film, typically molybdenum disulphide, artificially deposited on the surfaces. Coatings of solid lubricant are built up atom by atom yielding a mechanically strong surface layer with a long service life and the minimum quantity of solid lubricant. Geotribology The term "geotribology" was first stated by Harmen Blok with no significant discussion. Later, geotribology framework was employed to analyze the flow mechanics of granular sand. Even though tribological concepts can be utilized to many geosciences phenomena, the two research communities are separated. In earth science, many tribological concepts were applied successively, particularly in rock friction analyses. The asperity-asperity contact mechanism was applied to rock friction experiments that led to the rate-state friction law that prevails in earthquake analyses. Tribology in high dust and dirty areas High dust areas and dirt environments can weigh profoundly on a lubricant due to the high risk of particle contamination. These contaminants readily form a grinding paste, causing failure of tribosystems and subsequently damaging of equipment. 
This type of contamination most frequently takes place when airborne or stagnant particles gain access to the lubrication system through open ports and hatches, especially in systems with negative pressure. Half of a bearing loss of usefulness can be attributed to wear. This wear, which occurs through surface abrasion, fatigue and adhesion, is often the result of particle contamination. Tribology in radiation environments In radiation environments, liquid lubricants can decompose. Suitable solid lubricants can extend the operation of systems beyond 106 rads while maintaining relatively low coefficients of friction. Tribology for limited weight applications In weight-limited spacecraft and rovers, solid lubrication has the advantage of weighing substantially less than liquid lubrication. The elimination (or limited use) of liquid lubricants and their replacement by solid lubricants would reduce spacecraft weight and, therefore, have a dramatic impact on mission extent and craft maneuverability. References External links https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20000057374.pdf https://link.springer.com/referencework/10.1007/978-0-387-92897-5 Tribology
Extreme tribology
[ "Chemistry", "Materials_science", "Engineering" ]
1,724
[ "Tribology", "Mechanical engineering", "Surface science", "Materials science" ]
63,258,258
https://en.wikipedia.org/wiki/Malpelo%20Ridge
The Malpelo Ridge () is an elevated part of Nazca plate off the Pacific coast of Colombia. It is a faulted chain of volcanic rock of tholeiitic composition. The Malpelo Ridge may have originated simultaneously as Carnegie Ridge, and thus represent an old continuation of Cocos Ridge. It is thought to have acquired it present position due to tectonic movements along the Panama fracture zone. References Geology of Colombia Underwater ridges of the Pacific Ocean Oceanography Hotspot tracks
Malpelo Ridge
[ "Physics", "Environmental_science" ]
101
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
63,261,108
https://en.wikipedia.org/wiki/Algebra%20and%20Tiling
Algebra and Tiling: Homomorphisms in the Service of Geometry is a mathematics textbook on the use of group theory to answer questions about tessellations and higher dimensional honeycombs, partitions of the Euclidean plane or higher-dimensional spaces into congruent tiles. It was written by Sherman K. Stein and Sándor Szabó, and published by the Mathematical Association of America as volume 25 of their Carus Mathematical Monographs series in 1994. It won the 1998 Beckenbach Book Prize, and was reprinted in paperback in 2008. Topics The seven chapters of the book are largely self-contained, and consider different problems combining tessellations and algebra. Throughout the book, the history of the subject as well as the state of the art is discussed, and there are many illustrations. The first chapter concerns a conjecture of Hermann Minkowski that, in any lattice tiling of a Euclidean space by unit hypercubes (a tiling in which a lattice of translational symmetries takes any hypercube to any other hypercube) some two cubes must meet face-to-face. This result was resolved positively by Hajós's theorem in group theory, but a generalization of this question to non-lattice tilings (Keller's conjecture) was disproved shortly before the publication of the book, in part by using similar group-theoretic methods. Following this, three chapters concern lattice tilings by polycubes. The question here is to determine, from the shape of the polycube, whether all cubes in the tiling meet face-to-face or, equivalently, whether the lattice of symmetries must be a subgroup of the integer lattice. After a chapter on the general version of this problem, two chapters consider special classes of cross and "semicross"-shaped polycubes, both with regard to tiling and then, when these shapes do not tile, with regard to how densely they can be packed. In three dimensions, this is the notorious tripod packing problem. Chapter five considers Monsky's theorem on the impossibility of partitioning a square into an odd number of equal-area triangles, and its proof using the 2-adic valuation, and chapter six applies Galois theory to more general problems of tiling polygons by congruent triangles, such as the impossibility of tiling a square with 30-60-90 right triangles. The final chapter returns to the topic of the first, with material on László Rédei's generalization of Hajós's theorem. Appendices cover background material on lattice theory, exact sequences, free abelian groups, and the theory of cyclotomic polynomials. Audience and reception Algebra and Tiling can be read by undergraduate or graduate mathematics students who have some background in abstract algebra, and provides a source of applications for this topic. It can be used as a textbook, with exercises scattered throughout its chapters. Reviewer William J. Walton writes that "The student or mathematician whose area of interest is algebra should enjoy this text". In 1998, the Mathematical Association of America gave it their Beckenbach Book Prize as one of the best of their book publications. The award citation called it "a simultaneously erudite and inviting ex- position of this substantial and timeless area of mathematics". References Further reading Tessellation Mathematics textbooks 1994 non-fiction books Mathematical Association of America books
Algebra and Tiling
[ "Physics", "Mathematics" ]
695
[ "Tessellation", "Planes (geometry)", "Euclidean plane geometry", "Symmetry" ]
63,262,069
https://en.wikipedia.org/wiki/Metal%20cluster%20compound
Metal cluster compounds are a molecular ion or neutral compound composed of three or more metals and featuring significant metal-metal interactions. Transition metal carbonyl clusters The development of metal carbonyl clusters such as Ni(CO)4 and Fe(CO)5 led quickly to the isolation of Fe2(CO)9 and Fe3(CO)12. Rundle and Dahl discovered that Mn2(CO)10 featured an "unsupported" Mn-Mn bond, thereby verifying the ability of metals to bond to one another in molecules. In the 1970s, Paolo Chini demonstrated that very large clusters could be prepared from the platinum metals, one example being [Rh13(CO)24H3]2−. This area of cluster chemistry has benefited from single-crystal X-ray diffraction. Many metal carbonyl clusters contain ligands aside from CO. For example, the CO ligand can be replaced with myriad alternatives such as phosphines, isocyanides, alkenes, hydride, etc. Some carbonyl clusters contain two or more metals. Others contain carbon vertices. One example is the methylidyne-tricobalt cluster [Co3(CH)(CO)9]. The above-mentioned cluster serves as an example of an overall zero-charged (neutral) cluster. In addition, cationic (positively charged) rather than neutral organometallic trimolybdenum or tritungsten clusters are also known. The first representative of these ionic organometallic clusters is [Mo3(CCH3)2(O2CCH3)6(H2O)3]2+. Transition metal halide clusters The halides of low-valent early metals often are clusters with extensive M-M bonding. The situation contrasts with the higher halides of these metals and virtually all halides of the late transition metals, where metal-halide bonding is replete. Transition metal halide clusters are prevalent for the heavier metals: Zr, Hf, Nb, Ta, Mo, W, and Re. For the earliest metals Zr and Hf, interstitial carbide ligands are also common. One example is Zr6CCl12. One structure type features six terminal halides and 12 edge-bridging halides. This motif is exemplified by tungsten(III) chloride, [Ta6Cl18]4−, Another common structure has six terminal halides and 8 bridging halides, e.g. Mo6Cl142−. Many of the early metal clusters can only be prepared when they incorporate interstitial atoms. In terms of history, Linus Pauling showed that "MoCl2" consisted of Mo6 octahedra. F. Albert Cotton established that "ReCl3" in fact features subunits of the cluster Re3Cl9, which could be converted to a host of adducts without breaking the Re-Re bonds. Because this compound is diamagnetic and not paramagnetic the rhenium bonds are double bonds and not single bonds. In the solid state further bridging occurs between neighbours and when this compound is dissolved in hydrochloric acid a Re3Cl123− complex forms. An example of a tetranuclear complex is hexadecamethoxytetratungsten W4(OCH3)12 with tungsten single bonds. A related group of clusters with the general formula MxMo6X8 such as PbMo6S8. These sulfido clusters are called Chevrel phases. Fe-S clusters in biology In the 1970s, ferredoxin was demonstrated to contain Fe4S4 clusters and later nitrogenase was shown to contain a distinctive MoFe7S9 active site. The Fe-S clusters mainly serve as redox cofactors, but some have a catalytic function. In the area of bioinorganic chemistry, a variety of Fe-S clusters have also been identified that have CO as ligands. FeMoco, the active site of most nitrogenases, features a Fe7MoS9C cluster. 
Zintl clusters Zintl compounds feature naked anionic clusters that are generated by reduction of heavy main group p elements, mostly metals or semimetals, with alkali metals, often as a solution in anhydrous liquid ammonia or ethylenediamine. Examples of Zintl anions are [Bi3]3−, [Sn9]4−, [Pb9]4−, and [Sb7]3−. Although these species are called "naked clusters," they are usually strongly associated with alkali metal cations. Some examples have been isolated using cryptate complexes of the alkali metal cation, e.g., [Pb10]2− anion, which features a capped square antiprismatic shape. According to Wade's rules (2n+2) the number of cluster electrons is 22 and therefore a closo cluster. The compound is prepared from oxidation of K4Pb9 by Au+ in PPh3AuCl (by reaction of tetrachloroauric acid and triphenylphosphine) in ethylene diamine with 2.2.2-crypt. This type of cluster was already known as is the endohedral Ni@Pb102− (the cage contains one nickel atom). The icosahedral tin cluster Sn122− or stannaspherene anion is another closed shell structure observed (but not isolated) with photoelectron spectroscopy. With an internal diameter of 6.1 Ångstrom, it is of comparable size to fullerene and should be capable of containing small atoms in the same manner as endohedral fullerenes, and indeed exists a Sn12 cluster that contains an Ir atom: [Ir@Sn12]3−. Metalloid clusters Elementoid clusters are ligand-stabilized clusters of metal compounds that possess more direct element-element than element-ligand contacts. Examples of structurally characterized clusters feature ligand stabilized cores of Al77, Ga84, and Pd145. Intermetalloid clusters These clusters consist of at least two different (semi)metallic elements, and possess more direct metal-metal than metal-ligand contacts. The suffix "oid" designate that such clusters possess at a molecular scale, atom arrangements that appear in bulk intermetallic compounds with high coordination numbers of the atoms, such as for example in Laves phase and Hume-Rothery phases. Ligand-free intermetalloid clusters include also endohedrally filled Zintl clusters. A synonym for ligand-stabilized intermetalloid clusters is "molecular alloy". The clusters appear as discrete units in intermetallic compounds separated from each other by electropositive atoms such as [Sn@Cu12@Sn20]12−, as soluble ions [As@Ni12@As20]3− or as ligand-stabilized molecules such as [Mo(ZnCH3)9(ZnCp*)3]. References Cluster chemistry
Metal cluster compound
[ "Chemistry" ]
1,476
[ "Cluster chemistry", "Organometallic chemistry" ]
63,263,526
https://en.wikipedia.org/wiki/Crash%20program
A crash program is a plan of action entailing rapid, intensive resource allocation to solve a pressing problem. Rapidity may eliminate investigation and planning essential to efficient use of resources when goals are perceived as more important than those resources. Deadlines Time limits differentiate crash programs from normal procedures. Time reduction often results from unexpected circumstances. These time limits may originate from predictable events like weather cycles or financial deadlines, or they may be arbitrarily established as lifesaving measures in situations involving famine, disease, or military vulnerabilities. Schedule compression occurs when deadlines appropriate for expected conditions are shortened while the program is in progress. Methods Fast-tracking involves working simultaneously on activities that would have been performed sequentially under normal circumstances. Project crashing occurs when additional resources are required to meet the established deadline. Project crashing increases the cost of the goal. Crash analysis compares the costs of shortened deadlines. Examples First transcontinental railroad Liberty ship Operation Bumblebee Space Race COVID-19 vaccine Manhattan Project References Planning Schedule (project management)
Crash program
[ "Physics" ]
212
[ "Spacetime", "Physical quantities", "Time", "Schedule (project management)" ]
54,743,542
https://en.wikipedia.org/wiki/Navigation%20and%20Bombing%20System
The Navigation and Bombing System, or NBS, was a navigation system used in the Royal Air Force's V-bomber fleet. Primary among its parts was the Navigation and Bombing Computer (NBC), a complex electromechanical computer that combined the functions of dead reckoning navigation calculation with a bombsight calculator to provide outputs that guided the aircraft and automatically dropped the bombs with accuracy on the order of a few hundred metres on missions over thousands of kilometres. Inputs to the NBS system included late models of the H2S radar, the True Airspeed Unit, a gyrocompass and the Green Satin radar. A Mk 6 radar altimeter was used for accurate height measurement but was not connected to the NBC. These inputs were used to set the Ground Speed Unit, which carried out the navigation calculations, which in turn fed the autopilot system. The NBC did not feed the T4 bombsight computer for visual sighting. References Mechanical computers Aerial bombing
Navigation and Bombing System
[ "Physics", "Technology" ]
198
[ "Physical systems", "Machines", "Mechanical computers" ]
54,747,507
https://en.wikipedia.org/wiki/Philco%20computers
Philco was one of the pioneers of transistorized computers, also known as second generation computers. After the company developed the surface barrier transistor, which was much faster than previous point-contact types, it was awarded contracts for military and government computers. Commercialized derivatives of some of these designs became successful business and scientific computers. The TRANSAC (Transistor Automatic Computer) Model S-1000 was released as a scientific computer. The TRANSAC S-2000 mainframe computer system was first produced in 1958, and a family of compatible machines, with increasing performance, was released over the next several years. However, the mainframe computer market was dominated by IBM. Other companies could not deploy resources for development, customer support and marketing on the scale that IBM could afford, making competition in this segment difficult after the introduction of the IBM 360 family. Philco went bankrupt and was purchased in 1961 by Ford Motor Company, but the computer division carried on until the Philco division of Ford exited the computer business in 1963. The Ford company maintained one Philco mainframe in use until 1981. The surface-barrier transistor The surface-barrier transistor developed by Philco in 1953 had a much higher frequency response than the original point-contact transistors. The transistor was made of a thin crystal of germanium, which was electrolytically etched with pits on either side forming a very thin base region, on the order of 5 micrometers. Philco's process for etching was United States patent number 2,885,571. Philco surface-barrier transistors were used in TX-0, and in early models of what would become the DEC PDP product line. Although relatively fast, the small size of the devices limited their power to circuits operating at a few tens of milliwatts. Military and government Between 1955 and 1957, Philco built transistor computers for use in aircraft, models C-1000, C-1100, and C-1102, intended for airborne real-time applications. By 1957, the C-1102 had been used by a civilian sector customer. The BASICPAC AN/TYK 6V (first delivery in 1961), COMPAC AN/TYK 4V (not completed), and LOGICPAC systems were built for the US Army as transportable computer systems for use with their Fieldata concept of integrated information management. BASICPAC was a transistorized computer with up to 28,672 words of 38-bit core memory (including sign and parity), available in several configurations from a minimum system, to a truck-borne mobile version, to a fully expanded system. Basic clock periods was 1 microsecond (which gives a clock rate of 1 MHz), with 12 microsecond memory access and a fixed-point multiplication taking 242 microseconds. Input/output was by paper tape reader and punch, or through a teletypewriter. With additional hardware, magnetic tape storage was also available, with up to seven I/O devices. The instruction set had 31 basic operation codes and nine opcodes for I/O CXPQ Philco was contracted by the US Navy to build the CXPQ computer. One model was completed and installed at the David Taylor Model Basin. This design was later adapted to become the commercial TRANSAC S-2000. Only one CXPQ was built. The CXPQ is a 48-bit transistorized computer. 
SOLO In 1955, the National Security Agency through the US Navy contracted with Philco to produce a computer suitable for use as a workstation, with an architecture based on the vacuum-tube computer system called Atlas II already in use at the NSA, and similar to the commercial UNIVAC 1103. At the time, Philco was the largest producer of surface barrier transistors, which were the only type available with the speed and quantities required for a computer. The SOLO prototype was delivered in 1958, but required extensive debugging at NSA. Difficulties were encountered with core memory and power supplies. SOLO used paper tape and teleprinter machines for input and output. SOLO cost about $1 million US, and contained 8,000 transistors. While the system was extensively used for training, testing, research and development, no additional units were ordered. SOLO was removed from active service in 1963. The design of the SOLO became commercialized as Philco's TRANSAC Model S-1000. Commercial S-1000 The TRANSAC S-1000 was a scientific computer with a 36-bit word length and 4096 words of core memory. It was packaged in a container about the size of a large office desk, and used only 1.2 kilowatts, much less than vacuum-tube-based computers of similar capacity. In a 1961 survey, about 15 S-1000 computer installations had been identified. It weighed about . S-2000 The TRANSAC S-2000 was a large mainframe system intended for both business and scientific work. It had a 48-bit word length and supported calculations in fixed point, floating point and binary-coded decimal formats. The original S-2000 "TRANSAC" (Transistor Automatic Computer) released in 1958 was later designated Model 210; it was used internally at Philco. Similar to the Control Data Corporation Model 1604, it was a 48-bit fully transistorized computer. Three succeeding models were released in the series, all compatible with the software of the original model. The Model 211 was introduced in 1960, using micro-alloy diffused field-effect transistors, requiring significant redesign of circuits compared to the original. The TRANSAC S-2000/Philco 210/211 weighed about . By 1964, eighteen Model 210, eighteen Model 211 and seven Model 212 systems had been sold. After Philco was purchased by Ford Motor Company, the Model 212 was introduced in 1962 and released in 1963. It had 65,535 words of 48-bit memory. Initially made with 6-microsecond core memory, it had better performance than the IBM 7094 transistor computer. It was later upgraded in 1964 to 2-microsecond core memory, which gave the machine floating-point performance greater than the IBM 7030 Stretch computer. A Model 213 was announced in 1964 but never built. By that time competition from IBM had made the Philco computer operations no longer profitable for Ford, and the division was closed down. The Model 212 could carry out a floating-point multiplication in 22 microseconds. Each word contained two 24-bit instructions with 16 bits of address information and eight bits for the opcode. There were 225 different valid opcodes in the Model 212; invalid opcodes were detected and halted the machine. The CPU had an accumulator register of 48 bits, three general-purpose registers of 24 bits, and 32 index registers of 15 bits. Main memory size ranged from 4K words to 64K words. Only the first model had a magnetic drum memory; later editions used tape drives. The Model 212 weighed about . 
Software for the S-2000 initially consisted of TAC (Translator-Assember-Compiler), and ALTAC, a FORTRAN II-like language with some differences from the IBM 704 FORTRAN implementation. A COBOL compiler was also available, targeted at business applications. The Philco 2400 was the input/output system for the S-2000. Operations such as reading cards or printing were carried out through magnetic tapes, thereby offloading the S-2000 from relatively slow input/output processing. The 2400 had a 24-bit word length and could be supplied with 4K to 32K characters (1K to 8K words) of core memory, rated at 3-microsecond cycle time. The instruction set was aimed at character I/O use. The idea of base registers, implemented in Philco computers, influenced the design of IBM/360. The last Philco TRANSAC S-2000 Model 212 was taken out of service in December 1981, after 19 years service at Ford. References External links The last S-2000 retirement at Ford, 1981 36-bit computers 48-bit computers History of computer companies Military computers Cryptography
Philco computers
[ "Mathematics", "Technology", "Engineering" ]
1,682
[ "Cybersecurity engineering", "History of computer companies", "Cryptography", "Applied mathematics", "History of computing" ]